Linux on a Macbook Air

What do you do with an old late-2010 Core2Duo 1.8GHz Macbook with 2GB RAM that is no longer able to run the current Mac OS quickly enough? Apple’s recommendation would be to throw it away and buy a new one, because after 6 years it’s about time and the hardware has probably worn out significantly by now. The second part of the recommendation I have no problem with – since the machine is indeed too slow for running a modern OS with all the applications that I need, I bought a 15” retinabook as a replacement. However, the part where I just throw the old machine away – although all the hardware still functions, it has a very good keyboard, monitor and touchpad, and the battery is above 80% of its original capacity – I don’t think so. So, I tried several things, just to see what can be done.

The first thing I did was boot it from a USB drive containing Ubuntu Trusty Mate LTS 64-bit, to see if that’s actually possible and whether all the hardware is correctly recognized. To my surprise, it all worked completely out-of-the-box, without any sort of additional tweaking, except for one very specific thing: the Croatian keyboard layout on a Mac, which differs from the standard Croatian Latin II layout used by Windows and Linux. I tried selecting the combination of a Mac keyboard and the Croatian layout in the OS, but it didn’t work. I ended up editing the /usr/share/X11/xkb/symbols/hr file to modify the basic layout:

xkb_symbols "basic" {

    name[Group1]="Croatian";

    include "rs(latin)"

    // Redefine these keys to match XFree86 Croatian layout
    key <AE01> { [         1,     exclam,    asciitilde,      dead_tilde ] };
    key <AE02> { [         2,   quotedbl,            at                  ] };
    key <AE03> { [         3, numbersign,   asciicircum, dead_circumflex ] };
    key <AE05> { [         5,    percent,        degree,  dead_abovering ] };
    key <AE07> { [         7, apostrophe,         grave,      dead_grave ] };
    key <AE11> { [     slash,   question                                 ] };
    key <AB10> { [     minus, underscore, dead_belowdot,   dead_abovedot ] };
    key <AD06> { [         y,          Y,     leftarrow,             yen ] };
    key <AB01> { [         z,          Z, guillemotleft,            less ] };
    key <AD01> { [         q,          Q,     backslash,     Greek_OMEGA ] };
    key <AD02> { [         w,          W,           bar,         Lstroke ] };

};

Essentially, what I did was swap z and y, put the apostrophe above 7, and the question mark/slash to the right of 0. However, the extended right-alt functionality now works as if on a Windows keyboard, so it’s slightly confusing to have the layouts mixed. (P.S. I had to repost the code because WordPress was acting smart and mangled the “tags”, so I converted them into HTML entities.)
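
If you want to test an edited layout without logging out and back in, it can be loaded straight from the shell. A minimal sketch of what worked for me – the cache-clearing part is Debian/Ubuntu specific, so treat it as an assumption for other distros:

# load the modified Croatian layout into the current X session
setxkbmap hr

# if X keeps serving a stale compiled keymap, clear the xkb cache
# and regenerate it
sudo rm -f /var/lib/xkb/*.xkm
sudo dpkg-reconfigure xkb-data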

Other than having to tweak the keyboard layout, I had to use the Nouveau driver for the Nvidia GPU, because every proprietary Nvidia driver, new or legacy, freezes the screen during boot, when Xorg initializes. That’s a bummer, because the proprietary driver is much faster, but since the only thing I’m likely to use the GPU for is playing YouTube videos on full screen, and that works fine, I’m not much worried. Everything else seems to work – the wireless network, the touchpad, the sound, regulating screen brightness and sound volume with the standard Mac keys, everything.
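
By the way, a quick way to check which driver actually ended up running the GPU (this is generic Linux, nothing Mac-specific):

# show the GPU and the kernel driver bound to it; with the proprietary
# driver gone, this should report "Kernel driver in use: nouveau"
lspci -k | grep -A 3 -i vga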

Having ascertained that Linux works, I formatted the SSD from gparted, installed Linux, tested that everything boots properly, and copied my edit of the keyboard layout to the cloud for further use. Then I decided to test other things, wiped the SSD again, and tried to run the Apple online recovery, which supposedly installs OS X Lion over the Internet. Now that was a disaster – the service appears to work, but once you actually start the process, the Apple server reports that the service isn’t “currently” available. After checking online for other users’ experiences, it turned out that it has been “currently” unavailable since early 2015, if not longer, so basically their service is fubared due to zero fucks given to the maintenance of older systems.

OK, I found the USB drive with OS X Snow Leopard that I got with the laptop, and, surprisingly, it worked great – I installed Snow Leopard on the laptop, but I couldn’t do much with it, because most modern software refuses to install on a version that old, and Apple’s own services, such as iCloud and the App Store, don’t support it. So I just used it to test a few things, and found out that it’s as fast as I remember it from when I first bought the laptop – there’s none of the lag introduced by the newer versions, everything works great – except that current Linux is a much more secure and up-to-date system than Snow Leopard. So I did the next experiment: I took the Time Machine drive with the current backup of the 15” retinabook running Sierra, and booted from that. It gave me two options – install a clean Sierra, or do a full system recovery from the backup. I did the clean install first, and it surprised me how fast the machine was, much faster than the slow El Capitan installation I was running before finally giving up on the machine, because I had no time for this shit. Then I decided to take a look at what the full recovery would look like. It worked, but it was as slow as or slower than the full El Capitan installation. I tried playing with it but gave up quickly – after getting used to my new machines, it’s like watching paint dry.

I decided to try Linux again, but with a slight modification – instead of running the perfectly reliable and capable, but visually slightly older-looking Mate (which is basically a green-themed fork of Gnome 2), I decided to try the Ubuntu Trusty Gnome LTS 64-bit version, which runs the more modern and sleek-looking, but potentially more buggy and sometimes less functional Gnome 3. Why did I do that? Well, because the search function in Gnome 3 is great and resembles both Spotlight and the Windows 10 search that I got used to on the modern systems, and visually the Adwaita theme looks very sleek and modern on a Macbook, very much in tune with its minimalist design. So, I loaded it up and copied back my modifications of the keyboard layout (which are actually more difficult to activate here than in Gnome 2, requiring some dpkg-reconfiguring from the shell). I made the mistake of testing whether the Nvidia drivers work here – they don’t, and I had to fix things the hard way: I booted into the root shell with networking (not so much for the networking, but because in the normal root shell mode the drive is mounted read-only), did apt-get remove nvidia*, rebooted, and it worked. Then I installed the most recent kernel version, just to test that, and yes, the 4.2.0-42-generic kernel works just fine. The rest of the installation was just standard stuff: loading up my standard tools, PGP key and RSA certificates, chat clients and Dropbox, so that I can sync my keepass2 database containing all my account passwords in encrypted form, as well as the articles for the blog.
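
For reference, the driver cleanup from the recovery shell went roughly like this – a sketch from memory, so treat the details as approximate:

# in the plain recovery shell the root filesystem is read-only, so it
# would first need: mount -o remount,rw /
# remove every proprietary Nvidia package; quoting the wildcard lets
# apt-get do the pattern matching instead of the shell
apt-get remove 'nvidia*'
reboot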

So, what did I gain, and what did I lose? I lost the ability to run Lightroom, but this machine is too weak for that anyway, and I had removed it from the position of a photo editing laptop in any case. The second thing that doesn’t work is mSecure, where I have all my current passwords stored in the original form; the keepass file is a secondary copy, so that’s not great. However, Thunderbird mail works, Skype works, Rocketchat works, the web works and LibreOffice works. The ssh/rsync connection to my servers works, all the secure stuff works, the UNIX shell functionality works. Essentially, I can use it for writing, for answering mail, for chat, for the web, and for doing stuff on my server via ssh. The battery life is diminished compared to what the machine could do when new, but it’s actually better than it was on El Capitan and Sierra, which seemed to constantly run CPU-demanding processes in the background, such as RAM compression, which of course drained the battery very quickly and made the machine emulate a female porn star, being very hot and making loud noises. 🙂

I gained speed. It’s as fast as it was running Snow Leopard when I initially bought it, which is great. Also, I have the ability to run all the current Linux software, and I don’t have to maintain the slow macports source-compiling layer in order to have all the Linux tools available on a Mac. I do realize, however, that I’m approaching this from the somewhat uncommon perspective of someone who uses a Mac as a Linux machine that just happens to run Adobe Lightroom and other commercial software; I never got a Mac for the “simple” experience that most users crave. To me, if a machine can’t rsync backups from my server, and if I can’t go into the shell and write a 10-line script that will chew out some data, it’s not fit for serious use. I run a Linux virtual machine on my Windows desktop where I actually do all the programming and server maintenance, so having Linux on a laptop that’s supposed to be all about “simplicity of use” is not contradictory in any way – to me, simplicity of use is the ability to mount my server’s directories from Nautilus via ssh and do a simple copy and paste of files. This works better on Linux than anywhere else. Also, the Geeqie image viewer on Linux is so much better than anything on a Mac, it’s not even funny. These tools can actually make you very productive, if you know how to use them, so for some things I do, Linux is actually an upgrade.

However, I can’t run some very important commercial software that I use, so I can’t run Linux on my primary setup. That’s just unfortunate, but it is what it is. Linux is primarily used by people who want free stuff and are unwilling to pay for software, so nobody serious bothers to write commercial software for it. Yeah, theoretically it’s supposed to be free as in freedom, not free as in beer, but in reality, Linux is designed by communists who have serious problems with the concept of money, either because they don’t understand it, because they reject it for ideological reasons, or both. In some cases, however, Linux is an excellent way to save still-functional machines from the planned obsolescence death they were sentenced to by their manufacturers. Also, it’s an excellent way of making sure you don’t have all kinds of nefarious spyware installed by the OS manufacturer, if that’s what you care about; however, since I suspect that the worst kinds of government spying are done via exploits in the basic SSL routines and certificate authorities, that might not help much.
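
To make the “serious use” part concrete, this is the sort of one-liner I have in mind (the hostname and paths are made up for the example):

# mirror the server's backup directory over ssh; -a keeps permissions
# and timestamps, -z compresses in transit, --delete propagates removals
rsync -az --delete user@myserver.example.com:/srv/backup/ ~/backup-mirror/

# the Nautilus trick: mount a remote directory over sftp and browse it
# like a local folder (or just type the sftp:// URI into Nautilus'
# location bar)
gvfs-mount sftp://user@myserver.example.com/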

Also, the thing about Linux is that it tries to write drivers for the generic components used in the hardware, instead of for the actual hardware implementation. This means you get a driver for the Broadcom network chip, instead of for the, I don’t know, D-Link network card. The great aspect of this is that it cuts through lots of bullshit and gets straight to the point, reducing the number of hardware drivers significantly and increasing the probability that what you have will just work. The problem is, not much is done to assure that every single implementation of the generic components will actually work, and work optimally. In reality, this means that if your hardware happens to be close to the generic implementation, it will just work, as it happened to just work on my late-2010 Macbook Air, for the most part. However, if something isn’t really made to the generic spec, as happens to be the case with my discrete graphics, trying to use the generic drivers will plunge you headfirst from the tall cliff of optimism into the cold sea of fail.

So, do I recommend this? Well, if you’re a hacker and you know what you’re getting yourself into, hell yeah. I did it for shits and giggles, just to see if it can be done. Would I do it on a “productivity” machine, basically my main laptop/desktop that I have to depend on to do real work reliably and produce instant results when I need something? That’s trickier, and it depends on what you do. I used to have Linux on both my desktop and laptop for about 5 years, from Ubuntu Gutsy to Ubuntu Lucid. Obviously, I managed to get things done, and sometimes I was more productive than on anything else. At other times, I did nothing but fix shit that broke when I updated something. If anything, Linux forces you to keep your skills sharp, by occasionally waking you from sleep with surprise butt sex. On other occasions, you get to laugh watching Windows and Mac users struggle with something that you do with trivial ease. At one point I got tired of the constant whiplash of alternating between Dr. Jekyll and Mr. Hyde, and quarantined Linux into its safe virtualized sandbox where it does what it’s good at, without trying to run my hardware with generic open source drivers, or forcing me to find shitty free substitutes for some professionally made piece of software that I need and that costs $200. Essentially, running Linux is like owning a BMW or an Alfa Romeo – it runs great when it runs, but for the most part it’s not great as a daily driver, and it’s fun if you’re a mechanic who enjoys fixing shit on your car to keep your skills sharp. I actually find it quite useful, since I maintain my Linux servers myself and this forces me to stay in touch with the Linux skill-set. It’s not just an exercise in pointless masochism. 🙂

Griffin having better ideas than Apple

Exhibit A:

(Image: Griffin BreakSafe, a magnetic breakaway USB-C power cable.)

Why is that thing not built into the new generation of Macbooks? If you need adapters anyway, why not also adapt from Magsafe to USB C? Provide four Magsafe Thunderbolt 3 ports, and provide adapters to USB C, USB A and Thunderbolt 2. You get elegance, you keep the brilliant Magsafe thing on all ports, and you can spin the adapters by saying that you made all ports detachable and universal, compatible with all existing port standards. I would actually find that more plausible than the all-USB-C approach.

About Apple, USB C and standards

I’ve been thinking about the recent, apparently insane product releases from Apple: an iPhone that doesn’t have a headphone jack, although a significant use case for an iPhone is playing music from iTunes; a Macbook that has only one port for both charging and data, a port that is basically incompatible with the rest of the IT industry unless you use adapters; and a Macbook Pro that has only those incompatible ports, has less battery capacity, and doesn’t have an SD card slot, although its supposedly main target group, creative professionals such as photographers and videographers, use SD cards to transfer images and video from their cameras in the field, when they don’t have a cable with them.

To add insult to injury, all those products are more expensive than the previous, more functional generation.

I tried to think of an explanation, and I came up with several possible ones. For instance, although Apple pays formal lip service to the creative professionals, they don’t really make that much money from them. When Apple actually did make most of its money from creative professionals, somewhere in the 1990s, they were almost bankrupt, and Microsoft had to rescue them in 1997 by buying $150 million of non-voting shares, after which Steve Jobs was re-instated as iCEO (interim CEO, which is the likely cause of him deciding to i-prefix all the product names). They then started to market to a wider audience of young hipsters, students and wealthy douchebags (as well as those who wanted to be perceived as such), and soon they started to drown in green. Yes, they continued to make products intended for the professionals, but those brought in an ever smaller proportion of their overall earnings, and were deprioritized by the board, which is basically interested only in the bottom line. And it is only logical – if the hipsters who buy iPhones bring you 99% of your money, you will try your best to keep them happy and coming back for more. The 1% you earn from professional photographers and video editors is, essentially, a rounding error. You could lose them and not even notice. As a result, the Mac Pro got updated with ever decreasing frequency, and was eventually abandoned by the professional market, which is highly competitive and doesn’t have time to waste on half-a-decade-obsolete, underperforming and overpriced products.

Keeping the hipsters happy, however, is a problem, because they want “innovation”, they want “style”, they basically want the aura of specialness they will appropriate from their gadget, since their own personality is a bland facsimile of the current trends. They are not special, they are not innovative, they are not interesting and they are not cool, but they want things that are supposed to be all that, so that they can adorn themselves with those things and live in the illusion that their existence has meaning.

So, how do you make a special smartphone, when every company out there already makes perfectly functional devices within the constraints of modern technology? They all have CPUs and GPUs slammed right against the wall of the thermal design, superfluous amounts of memory and storage, excellent screens… and there’s essentially nothing you can add to such a device, unless there’s a serious breakthrough in AI and those gadgets become actually smart, in which case they will tell you what to do, instead of the other way around. So, facing the desperate need to appear innovative, and at the same time facing the constraints of modern technology, which define what you can actually do, you start “inventing” gimmicky “features”, such as the removal of the headphone jack and USB A sockets, or a second screen on the keyboard that draws a custom row of touch-sensitive icons.

And apparently it works, as far as the corporate bottom line is concerned. The professionals voice their displeasure on YouTube, but the hipsters are apparently gobbling it all up; this stuff is selling like hot cakes. The problem is, the aura of coolness of Apple products stems from the fact that the professionals and the really cool people used them, and the hipsters wanted to emulate the cool people by appropriating their appearance, if not the essence. If the cool people migrate to something else, and it becomes a pattern for the hipsters to emulate, Apple will experience the fate of IBM. Remember the PS/2? IBM decided it was the market leader and everybody would gobble up whatever they made, so they created the PS/2 series of computers with a closed, proprietary “microchannel” bus, trying to discourage third-party clones. What happened is that people said “screw you”, and IBM lost all significance in the PC market, had to close huge parts of its business, and eventually went out of the retail PC business altogether. And it’s not that the PS/2 machines were bad. Huge parts of the PC industry standard were adopted from them – the VGA graphics, the mouse and keyboard ports, the keyboard layout, the 3.5” floppy standard, plus all kinds of stuff I probably forgot about. None of that helped IBM avoid the fate of the dinosaurs, because it attempted to blackmail and corner the marketplace, and the marketplace took notice and reacted accordingly.

People like standardized equipment. They like having only one standard for the power socket, so that you can plug in any electrical appliance and it will work. The fact that the power socket could probably be redesigned to be better, smaller and cooler is irrelevant. The most important thing about it is that it is standard, and you can plug everything in everywhere. USB type A is the digital equivalent of a power socket. USB thumb drives replaced removable media such as floppies and CDs, and can be plugged into any computer. Also, keyboards, mice, printers, cameras, phones, tablets – they all plug into the USB socket and are universally recognized, so that everything works everywhere. Today, a device without a USB port is a device that cannot exchange massive amounts of data via thumb drives. It exists on an island, unable to function effectively in a modern IT environment. It doesn’t matter that the USB socket is too big, or that it’s not reversible. Nobody cares. What’s important is that you can count on the fact that everybody has it.

Had Apple only replaced the Thunderbolt 2 sockets with USB C sockets, and kept the USB A sockets in place, it would be a non-issue. However, this has a very good chance of becoming their microchannel. Yes, people are saying that USB C is the future and that it’s only a matter of time before it’s adopted by everyone, but I disagree. The same was said about FireWire and about Thunderbolt. Neither was widely adopted, because it proved easier to just make USB faster than to mess with yet another port that won’t work anywhere else. There’s a reason why it’s so difficult for the Anglo-Saxon countries to migrate from Imperial units to SI. Once everybody uses a certain standard, the fact that it is universally intelligible is much more important than its elegance.

Recognize those ports? Yeah, me neither.

Yes, we once used 5.25” and 3.5” floppy drives, and we no longer do. We once used CD and DVD drives, and we no longer do. We once used Centronics and RS-232 ports for printers and mice. We once used MFM, RLL, ESDI and SCSI hard disk controllers. We once used the ISA system bus and the AGP graphics slot. What used to be a standard no longer is. However, there are standards of a genuinely different kind, such as the UTP Ethernet connector, the USB connector, the headphone jack, or the Schuko power socket. USB, Ethernet, PDF, JPEG and HTML are some of the universal standards that make it possible for a person to own a Mac, because you can plug it into the same peripherals as any other computer. They make the operating system differences unimportant, because you can exchange files, you can use the same keyboard and mouse, you can use the same printer, you can plug into the same network. By removing those standard connections and ways to exchange data with the rest of the world, a Mac becomes an isolated device, a useless curiosity, like the very old computers you can’t really use today because you can no longer connect them to anything. Imagine what would have happened if Apple had removed USB when they first introduced FireWire, or Thunderbolt – “this new port is the future, you no longer need that old one”. Yeah. Do you think the Ethernet port is used because it’s elegant? It’s crap. The plastic latch is prone to failure or breakage, the connection isn’t always solid, dust can get in and create problems – it’s basically crap. You know why everybody still uses it? Because everybody uses it.

An analogy with tech

I was thinking about the similarities between the groupthink in the political sphere and its equivalent in the consumer technology sphere, and it dawned on me that I could more easily explain the political conundrum by illustrating the problems with their technological equivalents, which might be less emotionally charged, at least for some parts of the audience.

So, let’s see the stereotypes.

1. An iPhone user is a stupid sheep who blindly follows trends and will pay more money for an inferior product.

2. Android is for people who want to customize their device.

3. Android is for poor people who can’t afford an iPhone.

4. A Mac user is a stupid sheep who will buy the overpriced shiny toy because he’s so stupid even Windows are too complicated for him.

5. Windows machines are virus-ridden, unstable, blue-screen-of-death-displaying boring gray boxes.

6. Mac is for creative people, Windows are for accountants.

7. Windows are for poor people who can’t afford a Mac.

8. Linux is for poor people who can’t afford Windows.

Need I go on?

Now, let’s go through the list.

1 and 2: There are many reasons why one might want an iPhone. One is that he really is too stupid to understand that there are alternatives. Another is that he’s too busy doing whatever his day job is to fiddle with a device, and just wants something that works reliably. His day job might be “astrophysicist” or “doctor”. He has neither the will nor the time to fiddle with a phone or install an alternative kernel. He just wants speed, reliability, good build quality and, occasionally, the very specialized apps that are available for it. Someone who will “customize” his phone is more likely than not to live in his mom’s basement, because that’s the profile that’s likely to waste time on non-productive shit like that. If you have things to do, you use the phone to make calls, to google something, or to find your way around on a map. You’re too busy operating on people’s brains, designing a new rocket engine, analyzing data from the Kepler telescope, or getting the call informing you how that million-dollar deal went through. If your phone is all you have to deal with in your life, you’re either a phone designer, or someone with too much spare time on his hands.

3: Yes, in many cases people who opt for Android phones find iPhones to be too expensive. That might be because they are poor. On the other hand, they might just want to buy something good but affordable and not too fragile for their kids. Or, they might decide that the iPhone just isn’t worth the premium; it does basically the same thing as a much cheaper Android phone, so why would you overpay for the same functionality? Essentially, you may have several good options and once you’re satisfied with the fact that any of them will do a good job, you pick one based on both preference and estimate of cost-effectiveness.

4: Yes, there are people who buy a Mac because they find Windows too complicated (although it is difficult for me to figure out how that is possible, since both systems are more or less equally trivial to master). On the other hand, there are people who will buy a Mac because Apple’s laptops have great battery life, a great screen and an excellent touchpad, or because they can run open source tools via macports or homebrew, giving them access to the same toolkit they would have on Linux, but with better reliability, better battery life, fewer bugs, and the ability to run Adobe apps. Those are excellent reasons, and it’s easy to understand why one would get a laptop from Apple; in fact, it might explain why Apple laptops sell as well as they do, and why they are especially popular with technology and science professionals, who certainly aren’t using them because they find Windows intimidatingly difficult. I, for instance, migrated to a Macbook Air from a Thinkpad running Linux five years ago, simply because it was thin and light, had a great battery, had an SSD, and had one of the best displays on any laptop. Also, it ran Unix natively, and I was so at home with the Linux command-line tools that I would have had great difficulties re-organizing the things I do in a way that was doable on Windows. So, the options for me weren’t Windows or OS X, but OS X or Linux, and I couldn’t run Lightroom on Linux.

5: Windows machines exist in a wide range of price, capability and performance. Yes, there are the basic Windows boxes, both laptops and desktops, that are indeed quite cringe-worthy. Then again, I’m writing this on an i7-6700K PC with very high-end components, and it’s incredibly fast, it’s as reliable as a toaster, and my monitor has the same LG-Philips matrix as a 27” iMac, only with a matte coating, so I get no reflections from the window beside me. Essentially, it’s the performance equivalent of a 6-core Mac Pro, with a better graphics card and better cooling, at half the price. Basically, it’s as far from a bland beige box as you can imagine. In most things, it’s equivalent to an OS X machine, except that I have to run a virtualized Linux machine in order to get the Unix functionality that I need. That’s less annoying on a desktop than it would be on a laptop, but essentially, the reliability, ease of use, performance etc. are so similar between the two that I don’t really care which one I use. I do have a mail archive manager that works only on the Mac, and that does determine my preference in part, because although I did write a proof-of-concept portable alternative in Java, I would hate to write and maintain something that already exists and works great, and I wouldn’t get paid for the work. I have better uses for my time, honestly. As for the viruses, I have a simple rule that has served me well so far: don’t click on stupid shit. As a result, I don’t get viruses. The last time I got a virus, I was running Windows 98 or something, and it happened because I mistakenly clicked on something. I do use an antivirus as a precaution, but honestly, if you’re having problems with viruses, you’re more likely having problems with porn sites and stupidity.

6: As for the Mac being for creative people: I used Windows 3.1 for desktop publishing with the Ventura Publisher software in the early 1990s, I use a Windows machine for photo editing and writing books, and I have even used Linux machines for photo editing and writing books. I can basically make anything work for me, and if I don’t count as a creative person, nobody will meet the requirements. This thing about Macs and creative work is basically propaganda. Windows machines are used by some 95% of all computer users, which basically means they are used both by the most creative people and by most accountants. It’s just that creative people tend to configure their machines differently, that’s all. A programmer will have different requirements than a photographer or a graphic designer.

7: Try configuring a dual-Xeon 24-core, 128GB PC workstation with two Titan X Pascal graphics cards and tell me it’s for poor people who can’t afford a Mac. I personally can afford a Mac laptop because it’s good, and I can’t afford a Mac desktop because it’s worse than my machine and for more money. Essentially, I can’t afford overpriced, underperforming shit of any kind.

8: Since Linux runs basically on all servers everywhere, and since Google uses it on all workstations for their developers, there are obviously good reasons for very rich people to use it in a production environment. If you’re a developer, a good Linux distro might be the best thing to have on your desktop in terms of getting things done efficiently, basically being able to use your desktop as a test-environment for the software you’re developing. Personally, I prefer having the Linux development and testing environment virtualized because Windows makes better use of my hardware, but it’s a matter of preference and I could very easily see myself running Linux on the hardware and doing everything from there if it became more convenient for some reason.

You can see how there are many reasons why someone might hold a certain opinion, reasons that differ significantly from the stereotypes. One person might use something because he’s too stupid to know better, and another might use the same thing simply because it works better for his use case. One person can hold a certain political attitude because he’s stupid or evil, while another can hold a very similar position on a far higher intellectual octave, because he knows much more than you do, has better insight and greater intelligence, and, in the end, you might not have any arguments that could disprove his. So tread lightly. The leftists can’t fathom why anyone with an IQ over 150 would vote for Trump or hold opinions in the right part of the political spectrum; due to their stereotypical understanding of the opposition they are facing, they are simply unable to comprehend it, to argue against it, or to do anything constructive about it whatsoever, which leaves them with the option of smearing fake blood on their faces and chanting slogans. This doesn’t differ greatly from the shock some people experience when they get to know an IT expert and technology enthusiast who uses an iPhone. It’s not that the concept itself is unfathomable; it’s just that they have painted themselves into a corner with their closed-minded stereotypes and their inability to understand different positions and scenarios.

How powerful are the modern nukes?

I wrote generally about nuclear weapons and strategy of their use, but not much about the weapons themselves.

When there’s talk about the yield of modern nuclear weapons, people usually compare the first Hiroshima bomb, with a yield of 15KT (thousands of tons of TNT equivalent), with the Tsar bomba, with a yield of 50MT (millions of tons of TNT equivalent), probably wanting to show how much bigger and more destructive our modern bombs are.

Well, yes and no. You see, no modern nuclear weapon has that big of a yield. The more powerful weapons have been systematically phased out, for several reasons. First, because they are heavy, big and clumsy, and thus difficult to deliver. The Tsar bomba was so cumbersome it was difficult to deliver by plane, let alone by rocket. It was a prototype designed for showing off, not an actual weapon. Essentially, the Americans detonated the Castle Bravo test (which was so poorly controlled that they shat themselves), and the Russians wanted to one-up them by detonating a bomb more than three times the yield, and they too shat themselves when they saw the results. The Tsar bomba was actually intentionally reduced in yield from the design’s theoretical maximum of 100MT, so that is the top yield of a nuclear device that mankind has demonstrably been able to make. The laymen who think that the purpose of nuclear weapons is to blow us all up, and that those bombs are designed by psychopaths, probably think that the goal of nuclear weapon design is to produce ever stronger weapons, but that is not the case. In fact, the yield of the deployed nuclear weapons has been steadily going down for the last several decades. The strongest fielded weapons were on the order of magnitude of 20MT, but they have all been phased out. Why is that, you will ask?

Well, one of the reasons is that one big warhead is easier for an anti-ballistic missile to destroy than ten smaller ones. Furthermore, several smaller weapons are much more effective at destroying a wide area than one bigger weapon. And lastly, there isn’t much of a difference between a 100KT and a 20MT bomb when it comes to destroying military targets. In fact, a spread of 100KT warheads will probably do a better job, because it’s less sensitive to the accuracy of each individual hit.
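
The arithmetic behind that is the cube-root scaling law: blast radius grows roughly with the cube root of the yield, so the destroyed area grows as yield^(2/3), and splitting the same total yield into N warheads multiplies the covered area by roughly N^(1/3). A quick sanity check in the shell, with illustrative numbers:

# relative destroyed area scales as yield^(2/3); compare one 1MT
# warhead against ten 100KT warheads carrying the same total yield
awk 'BEGIN {
    single = 1000^(2/3)         # one 1 MT warhead (yields in KT)
    spread = 10 * 100^(2/3)     # ten 100 KT warheads
    printf "1x1MT: %.0f  10x100KT: %.0f  ratio: %.2f\n", single, spread, spread/single
}'
# prints: 1x1MT: 100  10x100KT: 215  ratio: 2.15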

When it comes to civilian targets, they are really an afterthought. The designers of warheads are more concerned with destroying hardened bunkers, command centers, aircraft carrier battle groups and similar targets; nobody really gets his dick hard thinking about destroying a city of millions of people with nuclear weapons. If anything, it makes everybody ashamed of their work, and a weapon design is harder to justify if it doesn’t have a clear military purpose. Civilians are such soft targets in any case that even the first Hiroshima bomb did a good enough job at causing immense suffering and death. It proved much more difficult to sink a heavy cruiser like the Prinz Eugen than to turn living people into shadows on concrete.

There’s no real difference between American and Russian weapons designers in that regard. Both have been following similar design parameters, working more on miniaturization, reliability, maneuverability, accuracy and yield control than on yield magnitude, because after they both ascertained that they could make very big bombs as experimental devices, they started thinking about what is actually practical as a fielded weapon, and they came to the conclusion that the best thing you can have is a spread of 10 or so individually maneuverable, highly miniaturized and reliable warheads, launched from a stealthy mobile platform. Both initially fielded weapons in the 20MT class, but later withdrew them, having found no practical military use for them.

There are other reasons why reducing a weapon’s size is preferable to increasing its yield. You see, a small weapon can be launched on a lighter, faster rocket. Such a rocket is smaller, so you can pack many more of them inside a submarine. A lighter rocket reaches maximum speed more quickly, and doesn’t linger in its vulnerable, slow phase of flight. You can put it on a truck, on a train, or on a cruise missile. They even made suitcase-sized bombs. Miniaturization has a huge number of advantages, and yield, other than bragging rights, has precious few.

So, what do Americans and Russians currently have fielded in operational form?

The Americans have Trident II SLBMs on Ohio class submarines, armed with 8-12 MIRVed warheads. The warheads can be either 475KT or 100KT, depending on which version is used. They also have the Minuteman III silo rockets, which have fallen into disrepair and are not really taken seriously due to their vulnerability. They carry 3 MIRVs with 170KT to 500KT yield, depending on the version.

The Russians have Sineva SLBMs on Delta IV class submarines, 4-8 MIRV warheads with 100KT yield each. They also have Bulava SLBMs on Borei class submarines, 6-10 MIRVs with 150KT yield each.

They also have mobile Topol-M launchers with single 800KT warheads, and mobile Yars launchers with 4+ MIRVs with 150-250KT yield.

They also have the silo-based R-36 Voivoda rockets (known in the West as the SS-18 Satan), which are in a similar state of disrepair as their American Minuteman III counterparts, and are being phased out and replaced with Yars launchers and, when its testing finishes, the Sarmat heavy launcher. The payload is 10 MIRVs with a 750KT yield each.

The Russian nuclear weapons are almost all brand new, of modern design; they are tested, reliable and dependable. It’s ironic that people in America think the Russians only have Soviet legacy nukes, when that’s not the case at all; if anything, it’s the Americans who have legacy stuff from the seventies and eighties, and it’s questionable whether it all works, and how well. It is telling that of all the Russian nuclear rockets, only one is as old as the newest of the American nuclear rockets, and it’s the one they are quickly phasing out because it’s unreliable. Just food for thought.

So, what does all that tell us? First, when you make calculations about yield in nukemap, ignore everything with a yield over 800KT, because for all intents and purposes it doesn’t exist. Second, always count on more than one hit, because you’re most likely getting hit by a MIRV spread. Third, if you live in a highly populated area that doesn’t double as a strategic military installation (such as a nuclear submarine shipyard), you are probably a very low priority target and nobody really has any wish to kill you – unless your politicians have already killed millions of people in the enemy’s capital cities, in which case the enemy got so mad they’re probably going to kill you all. The name of the game in nuclear strategy is “disarm”, not “commit the worst genocide of all time”. The enemy’s civilian population will be hit only by the doomsday retaliatory strike of last resort. However, the infrastructure of the world will be a priority target – satellites, Internet, electricity, transportation. All electronics will be disabled, either permanently or temporarily, by EMP blasts. You will hear no news, and there will be no way to find out what happened, maybe forever. There will be no food refrigeration or transportation. There will soon be no modern medicine. Honestly, it would be less painful and terrifying if they did hit us with the 20MT warheads; then at least we could hope for quick evaporation. This way, the soldiers will be the lucky ones – they will die from the bombs, while the rest of us die from waterborne bacteria and hunger.