Running Linux as my daily driver for the last few days, at least on my desktop PC, got me thinking about the rationale behind Linux as a secure OS.
Linux is secure because it’s open source, so anyone can inspect it and find the back doors and insecure features. That’s the story.
However, a while ago an OpenSSL vulnerability called “Heartbleed” was discovered. It had been there for years, in an open source library that theoretically everybody could inspect, and yet apparently that didn’t help in the slightest. How is that possible?
The explanation is quite simple. Yes, there is a huge number of people working on open source projects, but the trick is in how they are grouped. The vast majority works on redundant high-level stuff, while the “invisible”, low-level, critical components are so obscure that they are often maintained by either a single developer or a handful of them. And although people could in theory read some cryptographic C library, almost nobody does, because it’s obscure, difficult and unrewarding work. The people who maintain those libraries need immense expertise, and yet they are usually paid nothing for their work. Nobody really competes for a job that requires a PhD in mathematics and wizard-level knowledge of C, eats up lots of time, and pays nothing.
Which brings me to the main security issue with Linux: its critical security features are written and maintained by a few unpaid experts; they are too obscure for the vast majority of Linux developers to read and understand; and the likely attacker can literally print billions of dollars that will never be tracked or accounted for, and has infinite means of intimidation.
This means Linux is in fact extremely vulnerable. It was proven to have “heart-bleeding” vulnerabilities sitting out in the open for years, while nobody actually bothered to read the open source code and find them. Such a vulnerability can be extremely obscure, you’d need to be a professional cryptanalyst to identify it, and there would be no incentive for you to go through all those mountains of code to find it, because you would assume it’s already been done, which is an easy and pleasant assumption to make, if somewhat unwarranted.
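For reference, the Heartbleed bug itself (CVE-2014-0160), once pointed out, was conceptually trivial, which makes the roughly two years it spent unnoticed even more damning. Here’s a simplified sketch of the pattern, not the literal OpenSSL code: the peer declares how big its heartbeat payload is, and the response is built by trusting that declaration.

```c
/* Simplified sketch of the Heartbleed pattern (CVE-2014-0160); the real
 * code lived in OpenSSL's tls1_process_heartbeat(). */
#include <stdlib.h>
#include <string.h>

unsigned char *build_heartbeat_response(const unsigned char *req, size_t req_len)
{
    /* first two bytes: payload length *claimed* by the peer */
    unsigned int payload = (req[0] << 8) | req[1];

    unsigned char *resp = malloc(3 + payload);
    if (resp == NULL)
        return NULL;
    resp[0] = 2;                    /* heartbeat response type */
    resp[1] = payload >> 8;
    resp[2] = payload & 0xff;

    /* BUG: payload is never checked against req_len, so up to 64 KB of
     * adjacent heap memory (keys, passwords, ...) is echoed back to the
     * peer. The fix was essentially one check:
     *   if (2 + payload > req_len) return NULL;  // silently discard */
    memcpy(resp + 3, req + 2, payload);
    return resp;
}
```

Two years in the wild, in the most widely deployed cryptographic library there is, and that’s with the bug being this simple.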
So, what am I saying here? Basically, I’m saying nothing is secure if those attacking the system have control of the hardware design, firmware design and operating system design, and can pay the best experts infinite amounts of money if they comply with their demands, or have them and their families disappear in darkness if they don’t. The idea that you can simply install Linux instead of Windows and be secure is incredibly naive.
It seems (some) Linux people are so devoted to proving Windows is an insecure mess that they actually made it significantly more secure, since they found an endless amount of exploits.
Basically, instead of digging through open source code, like the aforementioned OpenSSL, they were busy finding exploits in Windows … securing Windows and leaving Linux exposed in the process.
The irony.
You can’t really know how secure you are unless you’ve been under constant attack by a persistent adversary. That’s just how it is. Of course, it’s quite troubling that the adversary doesn’t necessarily reveal his level of access. When you are hit openly, it’s usually catastrophic.
I’m not sure if the Heartbleed exploit should actually be blamed on Linux, as it’s a mistake in OpenSSL’s implementation that made it possible, and OpenSSL is just an optional package, if I’m not mistaken, that you can choose to use or not on most Linux distributions. But there’s one other thing that’s more worrying than Heartbleed when it comes to Linux-related security, and that’s SELinux, brought to you people by … tadaaaa … the NSA. 😀
https://en.wikipedia.org/wiki/Security-Enhanced_Linux
It seems that it’s been integrated into most of the important Linux distributions …
“Implementations
SELinux has been implemented in Android since version 4.3.[8]
Among free community-supported GNU/Linux distributions, Fedora was one of the earliest adopters, including support for it by default since Fedora Core 2. Other distributions include support for it such as Debian as of the Stretch release[9] and Ubuntu as of 8.04 Hardy Heron.[10] As of version 11.1, openSUSE contains SELinux “basic enablement”.[11] SUSE Linux Enterprise 11 features SELinux as a “technology preview”.[12]
SELinux is popular in systems based on Linux containers, such as CoreOS Container Linux and rkt.[13] It is useful as an additional security control to help further enforce isolation between deployed containers and their host.
SELinux is available as part of Red Hat Enterprise Linux (RHEL) version 4 and all future releases. This presence is also reflected in corresponding versions of CentOS and Scientific Linux. The supported policy in RHEL4 is targeted policy which aims for maximum ease of use and thus is not as restrictive as it might be. Future versions of RHEL are planned to have more targets in the targeted policy which will mean more restrictive policies.”
And there’s an even more sinister fact …
“The NSA, the original primary developer of SELinux, released the first version to the open source development community under the GNU GPL on December 22, 2000.[6] The software was merged into the mainline Linux kernel 2.6.0-test3, released on 8 August 2003. Other significant contributors include Red Hat, Network Associates, Secure Computing Corporation, Tresys Technology, and Trusted Computer Solutions. Experimental ports of the FLASK/TE implementation have been made available via the TrustedBSD Project for the FreeBSD and Darwin operating systems.”
I don’t know, maybe I’m just paranoid, but it seems that you can have an operating system that’s safe and hardened enough against the NSA only if you’re smart enough to build it like that yourself?!
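For what it’s worth, it’s at least easy to check what SELinux is doing on your own box; a minimal sketch using libselinux (assuming the development headers are installed; link with -lselinux):

```c
/* Check whether SELinux is enabled and in which mode, via libselinux.
 * Build with: cc selinux_check.c -lselinux */
#include <stdio.h>
#include <selinux/selinux.h>

int main(void)
{
    if (is_selinux_enabled() <= 0) {
        puts("SELinux is not enabled on this system");
        return 0;
    }
    /* security_getenforce(): 1 = enforcing, 0 = permissive, -1 = error */
    int mode = security_getenforce();
    printf("SELinux enabled, mode: %s\n",
           mode == 1 ? "enforcing" : mode == 0 ? "permissive" : "unknown");
    return 0;
}
```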
Well, if everybody uses it, it’s not optional, not unless you consider the kernel optional as well, as some GNU people would like to have it. But I only used it as an example; I’m sure there are other components that conform to the same pattern. For instance, I had KDE Partition Manager repeatedly segfault on me today, and I found out that the principal developer died recently, and who knows if that thing is maintained at all. I would venture a guess that every single complex piece of code, anything that requires real knowledge and constant work, has either a single developer, or maybe two if lucky. This is formally open source software, but it’s source that nobody really inspects. Essentially, the NSA or someone similar could have put an almost endless number of things in there, and none of them would look like an obvious back door, because unlike the open source community, the NSA has a vast number of mathematicians/cryptanalysts/other experts on excellent pay, and they don’t really have to do a “day job”. In fact, it would be difficult to prove that the NSA isn’t actually *the* day job for some crucial Linux/open source community people.
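To illustrate what “nothing that looks like an obvious back door” means, here’s a hypothetical, textbook-style example, my own construction and not taken from any real project: a bounds check that reads as perfectly honest code under casual review, yet leaves a hole.

```c
/* Hypothetical illustration (not from any real codebase) of a planted
 * hole that passes for an honest mistake: a signed length run through
 * an apparently correct bounds check. */
#include <string.h>

#define BUF_SIZE 256

int copy_packet(char *dst /* BUF_SIZE bytes */, const char *src, int len)
{
    /* Looks safe, but a negative len slips past this check and is then
     * converted to an enormous size_t inside memcpy: classic
     * signed/unsigned confusion, and entirely deniable. */
    if (len > BUF_SIZE)
        return -1;
    memcpy(dst, src, len);
    return 0;
}
```

The fix (`if (len < 0 || len > BUF_SIZE)`) is trivial once you see it; the point is that nobody reviewing a ten-thousand-line diff reliably sees it.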
Apparently I’m not the only one thinking about this:
https://www.wired.com/story/giving-open-source-projects-life-after-a-developers-death/
“Libraries.io has identified about 3,000 open-source libraries that are used in many other programs but have only a handful of contributors.”
I’m not sure how it’s mandatory? Does every security-related operation by the kernel require OpenSSL, then?
I agree that it’s easy to implant NSA exploits into open source software, as everybody is able to contribute. Nevertheless, it’s better to dig it deep into the kernel. Not that Linux desktop users should worry about it, because there are only a handful of them compared to the Windows user base, but Android users should be on watch, I guess?!
It’s completely irrelevant what the kernel does. It’s the userspace that matters. If you can implant a back door in something used for SSL and SSH, you have absolutely everything you need. The assumption that the NSA would have to implant something into the kernel is not sound. In fact, if I worked for them, I would tell them to explicitly avoid the kernel, since all the attention is there; I would attack encryption and networking.
That’s one way of looking at it, and I’m not sure it’s warranted. The NSA would see it differently: not what most users are using, but what the *interesting* users are most likely to use if they are up to no good. Probably some Linux or BSD flavor. And if I worked for the NSA, I would focus attention on the open source libraries controlling encryption and networking in Linux, BSD and Mac alike, with the accent on the stuff used by servers, because if you control the server infrastructure, you might not even need to tap into the client. If everybody uses “cloud” services, their data is no longer stored on their client device, and if they communicate over the Internet, everything worth knowing is communicated over the network, in encrypted form. So, essentially, if you control the servers, the network infrastructure and the widely used encryption layers, you have everything you need. Android, for instance, isn’t something that needs to be hacked; it’s already a hack if you use one. You’re basically communicating everything over Google already.
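To make the “encryption layers” point concrete: an application doing TLS gets all of its cryptography from a userspace library; the kernel underneath only shuffles opaque ciphertext. A minimal client sketch against the OpenSSL 1.1+ API (hostname illustrative, error handling mostly trimmed):

```c
/* Minimal TLS client against the OpenSSL 1.1+ API. Every secret in this
 * exchange (session keys, plaintext) lives in userspace, inside libssl;
 * compromise that library and you need nothing from the kernel. */
#include <stdio.h>
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int main(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    SSL_CTX_set_default_verify_paths(ctx);

    /* one userspace object wrapping the TCP socket plus the TLS state */
    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "example.org:443");

    if (BIO_do_connect(bio) <= 0) {       /* TCP connect + TLS handshake */
        ERR_print_errors_fp(stderr);
        return 1;
    }
    BIO_puts(bio, "GET / HTTP/1.0\r\nHost: example.org\r\n\r\n");

    char buf[4096];
    int n = BIO_read(bio, buf, sizeof buf - 1);  /* decrypted in userspace */
    if (n > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}
```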
Their current problem isn’t how to get your data; it’s that they have too much of it. The haystack is so big, they essentially have no analytical foresight remaining. They have immense hindsight, though.
So, what’s your final take on the possibility of getting away from being pwned by the NSA and Murica in general, when it comes to operating systems and the hardware running them? It seems like every major OS presents some kind of backdoor, and every mainstream CPU (both Intel and AMD) could proudly wear an “NSA Approved” badge. Maybe even the ARM SoCs running in RPis have something in them, as they’re designed by Broadcom which is, alas, from the US. :(
I’m not sure I have a final take on it. There are several possible threat scenarios, and they have different remedies. One threat is spying. For instance, if there’s a form of keylogger on the Intel ME, and it scans for your PGP passphrase, you can permanently lose the ability to secure your communications and data. Obviously, that’s a bad thing. I think some exotic ARM CPU combined with Linux could protect you from that, but how practical would that be to use?
Another threat level is sanctions, essentially denial of service. Everything cloud-based originating from America is a problem, of course. Everything that requires an online login can refuse you service. Using only tools that run on your PC independently can protect you from that. The third threat level is an area-specific kill switch: America disabling its enemies immediately preceding an attack. This can only be done once, and would be done in a nuclear war. If they do that, and people understand that they can do it, nobody will ever trust their technology again, but by that time it will be too late.
Practicality is a major issue. For instance, if something increases my security but hampers my daily functionality, I am not likely to use it. However, if it is obviously helpful and doesn’t get in the way much, I might use it. An example of this is not using Google services, which are becoming incredibly bad, to the point of deliberately skewing search results to fit their political agenda, influencing elections, restricting speech and, basically, controlling your life. It’s actually quite easy to quit Google; I currently use it only for address book and calendar syncing between devices, because that would be very impractical to do with something else unless you’re completely on Apple, and that might not be a great idea either, since Apple is politically very similar to Google and might do similar things.
I’m not very much worried about spying; I’m much more worried about shadowbanning and outright denial of service. That’s why I keep my tools and skills diverse, so that I can quickly adapt to multiple scenarios. Essentially, it is impossible to completely insulate oneself from American influence, because they deliberately infiltrated themselves into everything as a matter of military and intelligence strategy, and if you protect yourself so well you can’t do anything anymore, you have basically already reduced yourself to the level a hostile denial of service would. That’s not necessarily a bad thing as an experiment to see how vulnerable you are, and losing some practicality for the sake of greater independence and resistance to pressure is a good thing. It’s a matter of degree, though.

Also, if shit really hits the fan, it’s the loss of connectivity to the Internet that’s the biggest and most immediate concern, because if that happens, you might never find out what hit you – I haven’t used radio or TV in decades. I also don’t have a conventional phone, and even if I did, it’s just something that plugs into the router, not an analogue land line anymore. So, despite the fact that I have a redundant Internet connection, that thing would be the most vulnerable: a nuclear air burst, a kill-switch attack on the routers, ISP-level denial of service, and I no longer have connectivity.

Essentially, what I’m saying here is that when shit hits the fan, I might not be able to keep in touch with everyone. I *am* a very resistant cockroach in that department, though, because I have already been attacked by all kinds of scum, so I have redundancies, but I am also aware of the facts.