Vacation, Sony FE 90mm G Macro and misc photo stuff

I spent the last ten days on Hvar, mostly to soak up the last warm and sunny days of the year, and also to take pictures. This time I had a new lens to work with, the Sony FE 90mm G Macro:

So, what’s so cool about this one, and what does it do that can’t be done with the equipment I already have? tl;dr: it’s the best macro lens in the world.

It has the least chromatic aberration wide open, the greatest sharpness, wonderful front and rear bokeh, image stabilization, autofocus and weather resistance. If you want to work in the closeup and macro range, which I do a lot, it’s the best lens you can get, with the possible exceptions of the Zeiss Makro-Planar 100mm f/2 and the Olympus M.Zuiko 60mm f/2.8 Macro. As a portrait lens, the Sony is so good it gets compared with the Zeiss Batis 85mm f/1.8, which is one of the best portrait lenses out there. So, considering what you’re getting, it’s actually a bargain, regardless of the apparently high price. The price only seems high as long as you don’t look at what it does and what you’d have to buy to match it.

So, why is it better than what I used so far, which is a Canon EF 85mm f/1.8 on macro extension tubes? First of all, the Canon creates completely different-looking images, so it’s not a direct replacement; it’s a different tool in the toolbox, like a hammer and pliers. In the same way, a Minolta MD 50mm f/1.7 on macro extenders makes completely different images, and I would prefer it for some things. What the Sony 90mm G Macro does is allow me to take this:

… and in the next moment, without changing lenses or removing macro extenders, it allows me to take this:

 

Essentially, it’s a wonderfully versatile walkaround lens for my kind of photography, and the only thing I need to complement it is a good wideangle.

Talking about wideangles, I was kinda worried about the problems some photographers had with Canon lenses adapted to Sony FE bodies, where sharpness would drop off towards the edge of the frame. The problem is supposedly caused either by a focusing error, by interference with parts of the adapter, or by the FE mount itself, which is narrow for a 35mm format. I couldn’t test the issue with my EF 17-40mm f/4L, because it’s always unsharp in the corners due to its inferior optical design, but I did test it with the EF 15mm f/2.8 Fisheye, and the problem doesn’t exist with the Viltrox III adapter:

The edges and corners are completely sharp, and the only limitation is the depth of field (as visible in the bottom corners of the above image). Maybe my adapter is just that good, but I do think that if the problem existed, it would show itself with the widest-angle lens there is. I would not hesitate to use Canon EF wideangles on a Sony FE body with this adapter when edge and corner sharpness is critical.

There’s also controversy regarding the Sony FE 28-70mm f/3.5-5.6 OSS kit lens and its usability. In my experience, the lens is excellent. It’s very sharp even wide open; it doesn’t produce distortion, chromatic aberration or flare; vignetting is visible wide open but not when stopped down; and used as a landscape lens from a tripod with meticulous technique, it creates stunningly good images with no flaws whatsoever.

Its problems are of a different kind: it has poor close focus, so it’s useless for closeup/macro shots, and the aperture is slow, which makes it difficult to isolate the subject from the background. Those two things combined make it useless as a walkaround lens for me, and considering how well the aperture blades are designed and how good the bokeh could be if only it focused closer and had a larger aperture, that’s a shame. However, as a moderate-wideangle to light-telephoto landscape lens, it’s excellent:

  

People have been maligning the Sony Vario-Tessar T* FE 24-70mm f/4 ZA OSS because it’s expensive and it isn’t sharper than the “kit lens”. The thing is, if it’s as sharp as the kit lens, it’s plenty sharp, thank you very much. It would be really difficult to get it sharper than completely sharp. As for it being expensive, I agree, but it also has harder contrast and stronger color saturation than the 28-70mm, a constant f/4 aperture, and some dust and moisture sealing, which might make it attractive to some people. For me, the 24-70mm f/4 doesn’t add any real versatility that would make it useful for closeup photography, and I prefer the milder contrast and color rendition of the 28-70mm kit lens.

Another thing I got was the Meike battery grip for Sony A7II.

Essentially, it’s a cheap copy of the Sony battery grip, and it’s just as good. It addresses the problem of poor camera ergonomics, and also the mediocre battery life, at the cost of making the camera bulkier and heavier. I’m not sure the result is as comfortable as a Canon 5D body, but it is significantly less awkward and tendon-pain-inducing than the Sony A7II body alone with a large and heavy lens attached, when you go for long photographic walks. I recommend at least trying it; it might not be the solution to everyone’s problems, though.

As for the camera I used, the Sony A7II, I’m in love with the colors, the resolution and the depth of information in deep shadows during long exposures. I would like it to be less noisy during long exposures, at higher ISO and in deep shadows, but regardless, the image quality is fantastic. The only problem I’ve had with Sony so far is that the first copy of the FE 90mm G Macro arrived with dead electronics – it was completely fubared: no aperture, no focus, no nothing. Some flat cable probably had a flimsy connection, or took a shock in transit, but I returned it, received a functioning replacement, and my experiences with the lens since have been superlative, except that it’s a heavy brick.

There are several other lenses I’m considering: one is a wideangle with better geometry and field curvature than my EF 17-40mm f/4L, and another is a telephoto, which is something I never bought because the good ones are very expensive and very heavy and I would probably end up not using it much, but I still miss having one, considering how much I liked the ones I had for review years ago. But yeah, that’s about it, rambling over. 🙂

 

About computer security

Regarding this latest ransomware attack, I’ve seen various responses online. Here are my thoughts.

First, the origin of the problem is the NSA-discovered vulnerability in Windows, apparently present in versions ranging from XP to 10, which is weird in itself considering the differences introduced first in Vista and then in 8. This makes it unlikely that Microsoft didn’t know about it; it looks like something that was deliberately left open, as a standard back door for the NSA. Either that, or it means they managed not to find a glaring vulnerability since 2001, which makes them incompetent. Bearing in mind that other platforms have had similar issues, that wouldn’t be unheard of, but I will make my skepticism obvious – long-undiscovered glaring flaws indicate either intent or incredible levels of negligence.

The immediate manifestation of the problem, the WannaCry ransomware worm, is a sophisticated product of the most dangerous kind: the kind that apparently doesn’t require you to click on stupid shit in order to get infected. The malware sniffs your IP, probes for the vulnerability and, if it’s found, executes code on your machine. All it takes to get infected is a poorly configured firewall, or an infected machine behind your firewall, combined with the existence of vulnerable systems. The malware encrypts the victim’s files, sends the decryption key to the attackers, deletes it from the local machine and displays a ransom notice demanding bitcoin payment on the afflicted machine.

It is my opinion that the obvious explanation (a money-motivated hacker attack) is implausible. The reason is the low probability of actually collecting any money, combined with the type of attack. A more probable explanation is that this is a test, by a nation-state actor, checking out the leaked NSA exploit. The most likely purpose of such a test is to force the vulnerable machines out into the open so that they can be patched and the vulnerability permanently removed, or, alternatively, to assess the impact and response in case of a real attack. It is also a good way of removing NSA-installed malware from circulation: by encrypting the filesystem and forcing a hard-drive format, it permanently disables the vulnerable machines. Essentially, it sterilizes the world against all NSA-installed malware using this exploit, and it is much more effective than trying to advertise patches and antivirus software, since the people who are vulnerable are basically too lazy to upgrade from Windows XP, let alone install patches.

As for the future, an obvious conclusion is that this is not the only vulnerability in existence, and that our systems remain vulnerable to other, undiscovered attack vectors. What are the solutions? Some recommend installing Linux or buying a Mac, forgetting the Heartbleed bug in OpenSSL, which was as bad if not worse. All Linux and Mac machines were vulnerable. Considering how long it took Apple to do anything, and how long the bug remained undetected, I remain skeptical regarding the security of either platform. They are less common than Windows, which makes them a less tempting target, but since that is exactly why potential targets of state-actor surveillance would use them, it actually makes them more of a target – not for individual hackers, but for potentially much more dangerous people. Because hacker attacks on Linux and Mac OS are not taken seriously, the protective measures are usually weak and rely on the assumed inherent security of UNIX-based operating systems. When reality doesn’t match the assumptions, as in the case of the Heartbleed bug, there are usually no additional layers of protection to catch the exceptions. Furthermore, one cannot exclude a low-level vulnerability installed in the device’s firmware, since firmware is proprietary and even less open to inspection than the operating systems themselves.

My recommendation, therefore, is to assume that your system is at any point vulnerable to unauthorized access by state actors, regardless of your device type or protective measures. Against non-state actors, it is useful to implement a layered defense: a hardware firewall on the router and a software firewall on the device; share as little as possible on the network; close all open ports except those you actively need, and protect those as if they were a commercial payment system – for instance, don’t allow password authentication on SSH, use RSA keys instead (see the sshd_config sketch at the end of this section). Use encryption on all network communications. Always use the newest OS version with all the updates installed. Use an antivirus to check everything that arrives on your computer, but assume that the antivirus won’t catch zero-day exploits, which are the really dangerous stuff.

Don’t click on stupid shit, and don’t visit sites with hacking or porn-related content unless you’re doing it from a specially protected device or a virtual machine. Have a Linux virtual machine as a sandbox for testing potentially harmful stuff, so that it can’t damage your main device. Don’t do stupid shit from a device that’s connected to your private network, so that an attack can’t spread to other connected devices. Don’t assume you’re safe because you use an obscure operating system: obscure operating systems can use very widespread components, such as OpenSSL, and if those are vulnerable, your obscurity is worth far less than you assume. However, a combination of several layers can be a sufficient shield. If your router shields you from one attack vector, the firewall and antivirus on your Windows host machine shield you from another (while its architecture is immune to the UNIX-related exploits), the Linux architecture of your virtual machine shields you from the rest (the Windows-related exploits), and your common sense covers whatever remains, you are highly unlikely to be the victim of a conventional hacker attack.

Still, don’t delude yourself: the state actors, especially the NSA, have access to your system on a far deeper level, and you must assume that any system connected to the network is vulnerable. If you want a really secure machine, get a generic laptop, install Linux on it from a CD, never connect it to the network, and store everything important on an encrypted memory card. However, the more security measures you employ, the more attention your security is likely to receive, since where such measures are employed, there must be something worth looking at. Eventually, if you really do stupid shit, you will be vulnerable to the rubber-hose method of cryptanalysis, which works every time. If you don’t believe me, ask the guys in Guantanamo.
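
To make the SSH advice above concrete, here is a minimal sketch of what it looks like in /etc/ssh/sshd_config – just the relevant options, not a complete hardened configuration, and the root-login line is an extra measure I’m adding beyond what’s described above:

# /etc/ssh/sshd_config – allow key-based authentication only
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
# Disabling root login isn't mentioned above, but is commonly paired with this
PermitRootLogin no

# Reload the SSH daemon afterwards, e.g.:
#   sudo service ssh reload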

Linux failed because capitalism rules

Let me tell you why I have been gradually migrating away from Linux on all the machines in my household – from the point where everything ran on Ubuntu Jaunty, to the point where only the HTPC (the media player in the living room) runs Ubuntu Mate Trusty and everything else runs either Windows 10 or Mac OS.

A year ago I bought my younger kid a new PC, because his old Thinkpad T43 was behaving unreliably. Since he didn’t move the laptop from his desk anyway, I decided to get him a desktop: a Bay Trail (J1900) motherboard with an integrated CPU. I love those CPUs, BTW. They are strong enough for all the normal tasks one would require from a computer, such as web browsing, playing all the video formats, light gaming and editing documents; they are cheap; they use very little electricity; and the motherboards themselves come in the tiny mini-ITX format.

It’s efficient enough for passive cooling, although that didn’t work so well in Universe Sandbox, so I mounted a big, silent case fan in front of the CPU to keep the temperatures down. Basically, this looks like an ideal general-purpose home computer, and it is exactly what a huge number of people are getting their kids for doing homework. A huge number of cheap laptops also run Bay Trail CPUs, so the installed base is vast, and to keep costs down, one would expect a large portion of users to put Linux on them, since all the non-specific applications such a machine would be expected to run work well on Linux.

Unfortunately, Intel fubared something in the CPU design; specifically, they seem to have messed up the power-state regulation, so when the CPU changes its power state, there’s a high probability of a hang. Sure enough, a microcode update was issued and quickly implemented in Windows 10. On Linux, a bug report was posted in 2015. This is what happened:

This FreeDesktop.org bug report was initially opened in January of 2015 about “full system freezes” and the original poster bisected it down to a bad commit within the i915 ValleyView code. There were more than 100 comments to this bug report without much action by Intel’s Linux graphics developers when finally in December they realized it might not be a bug in the Intel i915 DRM driver but rather a behavior change in the GPU driver that made a CPU cstates issue more pressing. The known workaround that came up in the year of bug reports is that booting a modern Linux kernel with intel_idle.max_cstate=1 will fix the system freezes. However, using that option will also cause your system’s power use to go up due to reduced power efficiency of the CPU.

In December when shifting the blame to the other part of the kernel, this Kernel.org bug report was opened and in the few months since has received more than 120 comments of the same issue occurring on many different Bay Trail systems.

As of right now and even with the many complaints about this bug on a multitude of systems and Linux 4.5 set to be released this weekend, this bug hasn’t been properly resolved yet.
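
For reference, the intel_idle.max_cstate=1 workaround mentioned in the report is applied as a kernel boot parameter; on an Ubuntu system with GRUB, that looks roughly like this (file path and commands assume a stock install):

# /etc/default/grub – append the workaround to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_idle.max_cstate=1"

# Regenerate the GRUB configuration and reboot for it to take effect
sudo update-grub
sudo reboot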

That article was written in March 2016. It’s now May 2017, and the issue still hasn’t been resolved. Essentially, the problem with Linux is that the kernel development team apparently doesn’t have anyone competent and motivated enough to deal with this kind of problem. It’s unclear whether they are simply unable to fix it, or whether they just don’t care anymore, because there’s no ego trip in it to motivate them. Let me show you what I’m talking about. There’s a huge thread where users reported the bug and tried to figure out solutions. One of the responses, which looks very much like it came from a Linux developer, was this:

Well done on turning this into a forum thread. I wouldn’t touch this bug with a 10-foot pole and I’m sure the Intel developers feel the same.

Essentially, TL;DR. It was too long for him to read, because brainpower.

Another thing became apparent to me: they all live in an echo chamber where Linux is the best thing ever and it’s the only option. Linux is the most stable OS, it’s the best OS, it’s the greatest thing ever. Except it crashes on probably a third of all modern computers deployed, while Windows, which they treat with incredible contempt, works perfectly on those same machines. Let me make this very clear. I solved the Linux kernel problem with the Bay Trail CPUs by first trying all the recommended patches for Linux, seeing that they all failed, then installing a BIOS update, which didn’t help, and then installing Windows 10 on the machine, which permanently solved the problem. Not only that, it made the machine run perceptibly faster, it boots more quickly, and it is stable as a rock – not a single hang in a year.

That’s why I gradually migrated from Linux to Windows and Mac. They are just better. They are faster, more stable, and cause me zero problems. The only places where I still run Linux are the HTPC and a virtual machine on my desktop PC. Linux is so fucked up, it’s just incredible. It looks like you can only go so far on enthusiasm, without motivating developers with money. After a while, they stop caring and find something more rewarding to do, and that’s the point where Linux is right now. The parts that are maintained by people who are motivated by money work. Other parts, not so much. As a result, my estimate is that the stability of desktop Linux right now is worse than Windows 98. It’s so bad I don’t recommend it to anyone anymore, because it’s not just this one problem, it’s the whole atmosphere surrounding it. Nobody is even trying anymore; it’s a stale product that has joined the army of the living dead.

Since I used Linux as my daily driver for years, this pisses me off, but there’s nothing I can do about it except hope that Apple will make Mac OS support a wider range of hardware and sell it as a commercial product one can install on anything, like Windows. That would make desktop Linux completely obsolete, and that would be no more than it deserves, because its quality reveals its communist origins: it’s made like shit. It’s a Trabant, a Wartburg, a Yugo. Conceived on an ego trip, and built by people who can’t be bothered with work. It’s proof that you can’t build a good thing on hatred of those evil capitalists. To get people to make really great things, you need a free market that rewards the winners with money. Huge, superabundant amounts of money. Bill Gates and Steve Jobs kinds of money.

Oh yes, I almost forgot: the conclusion of my project of installing Linux on an old Mac laptop. I gave the laptop to my kid. Within a month it became so unstable – so many different things breaking all at once, dozens of packages reporting errors, mostly revolving around Python modules of one kind or another, apt reporting mass breakage of packages – that I gave up, backed up his data, formatted the drive and installed Mac OS Sierra on the machine. It’s slower than it should be because the machine lacks RAM (and I can’t add more because it’s soldered on), but everything works. Linux is so unreliable at the moment, it’s useless on the desktop.

Linux on a Macbook Air

What do you do with an old late-2010 Core2Duo 1.8GHz Macbook with 2GB of RAM that is no longer able to run the current Mac OS quickly enough? Apple’s recommendation would be to throw it away and buy a new one, because it’s about time after six years and the hardware has probably worn out significantly by now. With the second part of the recommendation I have no problem – since the machine is indeed too slow for running a modern OS with all the applications that I need, I bought a 15” retinabook as a replacement. However, the part where I just throw the old machine away, although all the hardware still functions, it has a very good keyboard, monitor and touchpad, and the battery is above 80% – I don’t think so. So, I tried several things, just to see what can be done.

The first thing I did was boot it from a USB drive containing Ubuntu Trusty Mate LTS 64-bit, to see if it was actually possible and if all the hardware would be correctly recognized. To my surprise, it all worked, completely out of the box and without any additional tweaking, except for one very specific thing: the Croatian keyboard layout on a Mac, which is different from the standard Croatian Latin II layout used by Windows and Linux. I tried selecting the combination of a Mac keyboard and the Croatian layout in the OS, but it didn’t work. I ended up editing the /usr/share/X11/xkb/symbols/hr file to modify the basic layout:

xkb_symbols "basic" {

    name[Group1]="Croatian";

    include "rs(latin)"

    // Redefine these keys to match XFree86 Croatian layout
    key <AE01> { [     1,     exclam,    asciitilde,      dead_tilde ] };
    key <AE02> { [     2,   quotedbl,            at                  ] };
    key <AE03> { [     3, numbersign,   asciicircum, dead_circumflex ] };
    key <AE05> { [     5,    percent,        degree,  dead_abovering ] };
    key <AE07> { [     7, apostrophe,         grave,      dead_grave ] };
    key <AE11> { [ slash,   question                                 ] };
    key <AB10> { [ minus, underscore, dead_belowdot,   dead_abovedot ] };
    key <AD06> { [     y,          Y,     leftarrow,             yen ] };
    key <AB01> { [     z,          Z, guillemotleft,            less ] };
    key <AD01> { [     q,          Q,     backslash,     Greek_OMEGA ] };
    key <AD02> { [     w,          W,           bar,         Lstroke ] };

}; 

Essentially, what I did was swap z and y, put the apostrophe above 7, and the question mark/slash to the right of 0. However, the extended right-Alt functionality now works as if on a Windows keyboard, so it’s slightly confusing to have the layouts mixed. (PS: I had to repost the code because WordPress was acting smart and modified the “tags”, so I converted them into HTML entities.)
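
For completeness, after editing the symbols file the layout has to be reloaded before the change shows up; on Ubuntu, something along these lines should do it (the cached-keymap path is an assumption and may differ between releases):

# Clear any precompiled keymaps so X picks up the edited symbols file
sudo rm -f /var/lib/xkb/*.xkm

# Re-select the Croatian layout
setxkbmap hr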

Other than having to tweak the keyboard layout, I had to use the Nouveau driver for the Nvidia GPU, because any kind of proprietary Nvidia driver, either new or legacy, freezes the screen during boot, when Xorg initializes. That’s a bummer because the proprietary driver is much faster, but since the only thing I’m going to use the GPU for is playing YouTube videos in full screen, and that works fine, I’m not too worried. Everything else seems to work fine – the wireless network, the touchpad, the sound, regulating screen brightness and sound volume with the standard Mac keys, everything.
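
If you want to verify which driver actually ended up handling the GPU, lspci will tell you; a quick check along these lines (the exact output obviously depends on the machine):

# Show the graphics adapter and the kernel driver currently bound to it
lspci -k | grep -iA3 vga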

Having ascertained that Linux works, I formatted the SSD from gparted, installed Linux, checked that everything booted properly, and copied my edit of the keyboard layout to the cloud for later use. Then I decided to test other things: I wiped the SSD again and tried to run Apple’s online recovery, which supposedly installs OS X Lion over the Internet. Now that was a disaster – the service appears to work, but once you actually start, the Apple server reports that the service isn’t “currently” available. After checking other users’ experiences online, it turned out that it has been “currently” unavailable since early 2015, if not longer, so basically the service is fubared due to zero fucks given to the maintenance of older systems.

OK, so I found the USB drive containing the OS X Snow Leopard that I got with the laptop, and, surprisingly, it worked great. I installed Snow Leopard on the laptop, but I couldn’t do much with it, because most modern software refuses to install on a version that old, and Apple’s own services such as iCloud and the App Store don’t support it. So I just used it to test a few things, and I found out that it’s as fast as I remember it being when I first bought the laptop – there’s none of the lag and delay introduced by the newer versions, everything works great – except that current Linux is a much more secure and up-to-date system than Snow Leopard.

So I did the next experiment: I took the Time Machine drive with the current backup of the 15” retinabook running Sierra, and booted from that. It gave me two options – install clean Sierra, or do a full system recovery from the backup. I did the clean install first, and it surprised me how fast the machine was, much faster than the slow El Capitan installation I was running before finally giving up on the machine because I had no time for this shit. Then I decided to take a look at what the full recovery would look like. It worked, but it was as slow as the full El Capitan installation, or slower. I tried playing with it but gave up quickly – after getting used to my new machines, it’s like watching paint dry.

I decided to try Linux again, but with a slight modification: instead of running the perfectly reliable and capable, but visually slightly dated Mate (which is basically a green-themed fork of Gnome 2), I decided to try the Ubuntu Trusty Gnome LTS 64-bit version, which runs the more modern and sleek-looking, but potentially more buggy and sometimes less functional Gnome 3. Why did I do that? Well, because the search function in Gnome 3 is great and resembles both Spotlight and the Windows 10 search that I got used to on the modern systems, and visually the Adwaita theme looks very sleek and modern on a Macbook, very much in tune with its minimalist design.

So, I loaded it up and copied back my modifications of the keyboard layout (which are actually more difficult to activate here than in Gnome 2, requiring some dpkg-reconfiguring from the shell). I made the mistake of testing whether the Nvidia drivers work here – they don’t, and I had to fix things the hard way: I booted into the root shell with networking (not so much for the networking, but because in the normal root-shell mode the drive is mounted read-only), did apt-get remove nvidia*, rebooted, and it worked. Then I installed the most recent kernel version, just to test that, and yes, the 4.2.0-42-generic kernel works just fine. The rest of the installation was just standard stuff: loading up my usual tools, PGP key and RSA keys, chat clients and Dropbox, so that I can sync my keepass2 database containing all my account passwords in encrypted form, as well as the articles for the blog.
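
For the record, the same fix works from the plain root shell too, if you remount the filesystem read-write first; a rough sketch:

# In the plain recovery root shell the root filesystem is mounted read-only,
# so remount it read-write before touching packages
mount -o remount,rw /

# Remove the proprietary Nvidia driver packages
# (quote the glob so the shell doesn't expand it against local filenames)
apt-get remove 'nvidia*'

reboot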

So, what did I gain, and what did I lose? I lost the ability to run Lightroom, but this machine is too weak for that anyway, and I had already retired it as a photo-editing laptop. The second thing that doesn’t work is mSecure, where I have all my current passwords stored in their original form; the keepass file is a secondary copy, so that’s not great. However, Thunderbird mail works, Skype works, Rocketchat works, Web works and LibreOffice works. The ssh/rsync connection to my servers works, all the secure stuff works, the UNIX shell functionality works. Essentially, I can use it for writing, answering mail, chat, the web, and doing stuff on my server via ssh. Battery life seems diminished from what I would expect, but it’s actually better than it was on El Capitan and Sierra, which seemed to constantly run CPU-demanding processes in the background, such as RAM compression, which of course drained the battery very quickly and made the machine emulate a female porn star – very hot and making loud noises. 🙂

I gained speed. It’s as fast as it was running Snow Leopard when I initially bought it, which is great. Also, I have the ability to run all the current Linux software, and I don’t have to maintain the slow MacPorts source-compiling layer in order to have all the Linux tools available on a Mac. I do realize, however, that I’m approaching this from the somewhat uncommon perspective of someone who uses a Mac as a Linux machine that just happens to run Adobe Lightroom and other commercial software; I never got a Mac for the “simple” experience that most users crave. To me, if a machine can’t rsync backups from my server, and if I can’t go into the shell and write a ten-line script that will chew through some data, it’s not fit for serious use. I run a Linux virtual machine on my Windows desktop where I do all the programming and server maintenance, so having Linux on a laptop that’s supposed to be all about “simplicity of use” is not contradictory in any way – to me, simplicity of use is the ability to mount my server’s directories from Nautilus via ssh and do a simple copy and paste of files. This works better on Linux than anywhere else. Also, the Geeqie image viewer on Linux is so much better than anything on a Mac, it’s not even funny. These tools can actually make you very productive, if you know how to use them, so for some of the things I do, Linux is actually an upgrade.

However, I can’t run some very important commercial software that I use, so I can’t use Linux for my primary setup. That’s unfortunate, but it is what it is. Linux is primarily used by people who want free stuff and are unwilling to pay for software, so nobody serious bothers to write commercial software for it. Yeah, theoretically it’s supposed to be free as in freedom, not free as in beer, but in reality Linux is designed by communists who have serious problems with the concept of money, either because they don’t understand it, because they reject it for ideological reasons, or both. In some cases, however, Linux is an excellent way to save still-functional machines from the planned-obsolescence death they were sentenced to by their manufacturers. It’s also an excellent way of making sure you don’t have all kinds of nefarious spyware installed by the OS manufacturer, if that’s what you care about; however, since I’d guess that most of the worst kinds of government spying are done through exploits in the basic SSL routines and certificate authorities, that might not help much.
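
To give an idea of what that kind of “serious use” looks like in practice, pulling a backup from a server over SSH is a one-liner; the hostname and paths below are made up for illustration:

# Mirror the server's backup directory to a local folder over SSH
# (user@backup.example.com and the paths are placeholders)
rsync -avz --delete user@backup.example.com:/var/backups/ ~/backups/server/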

Also, the thing about Linux is that it tries to write drivers for the generic components used in the hardware, instead of for the actual hardware implementation. This means you get a driver for the Broadcom network chip, instead of for, say, the D-Link network card. The great aspect of this is that it cuts through a lot of bullshit and gets straight to the point, reducing the number of hardware drivers significantly and increasing the probability that what you have will just work. The problem is that not much is done to ensure that every single implementation of the generic component will actually work, and work optimally. In practice, this means that if your hardware happens to be close to the generic implementation, it will just work, as it mostly did on my late-2010 Macbook Air. However, if something isn’t really made to the generic spec, as happens to be the case with my discrete graphics, trying to use the generic drivers will plunge you headfirst from the tall cliff of optimism into the cold sea of fail.

So, do I recommend this? Well, if you’re a hacker and you know what you’re getting yourself into, hell yeah. I did it for shits and giggles, just to see if it could be done. Would I do it on a “productivity” machine, basically my main laptop/desktop that I depend on to do real work reliably and produce instant results when I need something? That’s trickier, and it depends on what you do. I had Linux on both my desktop and laptop for about five years, from Ubuntu Gutsy to Ubuntu Lucid. Obviously I managed to get things done, and sometimes I was more productive than on anything else. At other times, I did nothing but fix shit that broke when I updated something. If anything, Linux forces you to keep your skills sharp, by occasionally waking you from sleep with surprise butt sex. On other occasions, you get to laugh watching Windows and Mac users struggle with something you do with trivial ease.

At one point I got tired of the constant whiplash of alternating between Dr. Jekyll and Mr. Hyde, and quarantined Linux into its safe virtualized sandbox, where it does what it’s good at, without trying to run my hardware with generic open-source drivers or forcing me to find shitty free substitutes for some professionally made piece of software that costs $200 and that I actually need. Essentially, running Linux is like owning a BMW or an Alfa Romeo – it runs great when it runs, but for the most part it’s not great as a daily driver, and it’s fun if you’re a mechanic who enjoys fixing shit on your car to keep your skills sharp. I actually find it quite useful, since I maintain my Linux servers myself and this forces me to stay in touch with the Linux skill set. It’s not just an exercise in pointless masochism. 🙂

Griffin having better ideas than Apple

Exhibit A:

(Image: Griffin BreakSafe magnetic USB-C power cable)

Why is that thing not built into the new generation of Macbooks? If you need adapters anyway, why not also adapt from Magsafe to USB C? Provide four Magsafe Thunderbolt 3 ports, and provide adapters to USB C, USB A and Thunderbolt 2. You get elegance, you keep the brilliant Magsafe thing on all ports, and you can spin the adapters by saying that you made all ports detachable and universal, compatible with all existing port standards. I would actually find this more plausible than plain USB C.