In the fifth installment of my Top 10 List of Linux FUD patterns, I discussed various security measures used in Linux distros. Last week, the CanSecWest security conference invited hackers to circumvent security on three fully-patched computers running different operating systems: OS X, Windows Vista and Ubuntu 7.10. The OS X machine reportedly fell first, requiring only two minutes to exploit a vulnerability in the Safari browser! Vista fared well on its own, but an attack on Adobe Flash on the final day marked the end for Windows. At the end of the three-day contest, the Ubuntu machine was the only one left standing! This is good news indeed!
While this is a great PR victory for Linux, bear in mind that the parameters of the contest were controlled. Given the right circumstances and/or enough time, the outcome might have been different, and in the real world, windows of opportunity are left wide open all the time – so protect yourself. It was also interesting to me that the Mac fell first because it was an ‘easy target’, and that the exploit that took out Vista could easily be tweaked to work on any platform.
Linux FUD Pattern #5: Linux is not secure
There are some out there who would like for you to believe that Linux is unsafe. What better way to instill fear than to form doubt in your mind about a system’s abilities to protect your data?
A reason for the supposed lack of security often cited in FUD is the origin and maintenance of Linux in the “hacker” community. The term “hacker” has evolved from a term of endearment to one associated almost exclusively with cybercrime. To say that Linux was created and is supported by hackers gives the impression that the OS and its related applications are riddled with built-in security holes, backdoors for gaining system access, spyware for purposes of identity theft, hidden network tools that help intruders cover their footprints as they travel from machine to machine through cyberspace, and any other sort of malicious software for various and sundry purposes. To “hack” no longer means to “tinker” or to “fiddle with”, but to “break into” and “cause harm”. The term may conjure mental images of a scene from a horror movie, an evil man with an axe about to hack his way through the door to the house protected by the dark of night. Such is the imagery used to spawn fear.
Let’s examine Linux security by answering two questions. Do security components exist? And, can they be trusted?
The components required to make a system secure depend on many factors, because different systems are used in different ways by different people. Moreover, a weakness in a system’s security may be mitigated by the strength of other, compensating controls. There are some basic options that are commonly used to secure systems, all of which are available on Linux.
Password-protected login is the hallmark form of authentication. It is easy to implement, easy to use, and can be highly effective; it doesn’t require additional or expensive hardware, and the expectations and conventions surrounding it are already present in modern culture. Sure, there are more advanced biometric devices such as palm readers and retina scanners, but the relative cost in money and effort of implementing these safeguards for the average home user and for most business desktops is prohibitively high. There are two aspects to password security: the strength of the password itself, and the authentication scheme behind it. Password strength is the responsibility of the user, not the OS. Most Linux distros either require password protection or at least have it enabled by default. Usually, the passwords are protected on the local system by shadowing, and schemes such as Kerberos can be used to protect the transmission of login information over a network.
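The idea behind shadowing is that the system never stores the password itself, only a salted one-way hash of it; login works by re-hashing what the user types and comparing. Here is a toy sketch of that scheme in Python using the standard library’s PBKDF2 (real shadow files use the crypt(3) formats, not this layout, so treat the function names and parameters as illustrative):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Derive a salted one-way hash, as a shadow-style store would keep."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash from the candidate password and compare.

    The plaintext password is never stored anywhere.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == stored

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password123", salt, stored))                   # False
```

Because the hash function is one-way, even an attacker who steals the stored salt and digest cannot easily recover the password, which is exactly why shadow files are kept readable only by root.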
Related to password authentication are the file system permissions granted to users once they’ve logged in. Linux and Unix use file-based permissions, denoting how the owner, members of the owner’s primary group and the “world” of users on the system can interact with each file or directory. Unlike operating systems that rely on inheritable Access Control Lists, these permissions do not cascade from parent directories (though POSIX ACLs are available on Linux for finer-grained control).
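The owner/group/world model can be seen directly from a script. This minimal sketch (assuming a Unix-like system, since the permission bits are POSIX features) restricts a scratch file to mode 640 – owner read/write, group read, nothing for the world – and then reads the bits back:

```python
import os
import stat
import tempfile

# Create a scratch file and restrict it: owner read/write, group read, no world access.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)  # i.e. mode 640

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                  # 0o640
print(bool(mode & stat.S_IROTH))  # False: "world" users cannot read the file
os.remove(path)
```

The same three-bit triplets are what `ls -l` renders as `-rw-r-----`.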
Network security is a broad topic encompassing the combined abilities of the OS, applications, network devices, administrators and users to detect and/or prevent a breach attempted across a network connection. A basic way to accomplish this is to disallow certain types of messages from reaching the computer; this function is usually performed by a firewall server or program that monitors network traffic and filters communications based on predefined rules. Computers that communicate over the Internet use the TCP and UDP protocols, each of which provides 65,535 usable “ports”. These ports are similar to radio stations or TV channels; each application that needs to communicate does so using one or more ports. A port with a service listening on it is “open”, and any open port – especially one belonging to an unneeded or unpatched service – is a potential entry point. Port scans are a good way to determine whether a system has open ports that are not actually needed. Firewall capabilities are built into the Linux kernel (netfilter), and several good front-end packages are available for configuration, monitoring and reporting purposes.
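What a port scanner does for a single TCP port can be sketched in a few lines: attempt a connection, and treat success as “open”. The sketch below (function name and timeout are my own choices, not from any particular tool) starts a throwaway listener so it has a known-open port to probe – only ever scan machines you own or administer:

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """Attempt a TCP connection; success means something is listening ("open")."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Start a throwaway listener on an ephemeral port so there is a known-open port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", open_port))   # True: a service is listening
listener.close()
print(port_is_open("127.0.0.1", open_port))   # False: nothing listening anymore
```

A firewall rule that drops packets to a port makes this probe time out instead of being refused, which is why firewalled ports often show up as “filtered” rather than “closed” in scan reports.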
All of the safeguards discussed above constitute protection around the data. What about protection of the data? A data file can be encrypted, thereby changing the contents to an encoded, unreadable format. The content is usually restored using a key or a password. E-mail can also be encrypted prior to transmission. GNU Privacy Guard (GPG) is a Pretty Good Privacy (PGP) compliant application that implements public key cryptography on multiple OS platforms, including Linux. Of course, constantly having to decrypt and encrypt every individual data file before and after use would be painful; instead, entire file systems can be encrypted by the system, and several cryptographic file systems exist for Linux. It is also possible to create a loopback device, which is a file that can be mounted as an encrypted file system, similar to the commercial product Cryptainer LE by Cypherix.
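The core idea – the same operation with the same key both scrambles and restores the data – can be shown with a one-time pad, the simplest cipher there is. To be clear, this is a toy illustration only, not something to protect real files with (for that, use GPG or an encrypted file system as discussed above):

```python
import os

def xor_bytes(data, key):
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet me at dawn"
key = os.urandom(len(plaintext))  # one-time pad: a random key as long as the message

ciphertext = xor_bytes(plaintext, key)      # unreadable without the key
restored = xor_bytes(ciphertext, key)       # XOR with the same key undoes it
print(restored)                             # b'meet me at dawn'
```

Real ciphers like AES replace the impractical message-length key with a short key run through a much more elaborate scrambling function, but the round-trip property – key in, plaintext back out – is the same one an encrypted loopback file system relies on every time it mounts.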
So, the components do exist. Now, the question remains, can these components be trusted?
FUDsters will argue that any security software for which the source code is freely available to the public is inherently not secure. This is based on the assumption that the source code will either reveal the secret functionality that makes the security software work or expose exploitable bugs in the security software itself.
First, if someone cannot open their source because they are afraid it may reveal secret functionality, then it wasn’t properly designed from the start. The worst-possible example of this is hardcoding passwords in programs, especially if they are scripts stored in clear text. Good security schemes, such as encryption, rely on secrets the user provides (keys and passwords), not on secrecy of the design – a point known in cryptography as Kerckhoffs’s principle – and often make use of one-way functions.
Second, Open Source software is available for public scrutiny. If you cannot read and understand the code yourself, rest assured that there are many folks out there who can and do. Why? Because many businesses do actually use Open Source software and have everything to lose if they don’t test it out first. That being said, I consider many corporate “testimonials” sponsoring one OS or another based on security or other factors to be FUD, mainly because they often appear in paid advertisements and seldom reveal the details of the tests that led to such conclusions. Independent certification and research performed by government or other nonprofit entities are usually the most objective and reliable.
Aside from reading the code, another way to test an application’s security strength, or to see if it transmits private data, is to watch (or “sniff”) the port on which it communicates using a network monitoring tool. Such data may be encrypted, but the size and timing of requests made by the client software should be consistent and reasonable. This is a technical task, but a bit easier than learning how the code works. Just remember, sniffing outside of your own network may be illegal.
Finally, there are many Linux opponents that would jump at the chance to expose real security weaknesses in Linux and its applications. These are often vendors of competing software and have both the money and channels to make themselves heard. When such a claim appears on the Web, look for specific details about the vulnerability. If there are none, it may be FUD. Also, check the software website to see if the vulnerability has been acknowledged or refuted as well as any status on its repair. Never take such claims at face value.
Here are a few tips to help protect yourself.
Any security expert worth his salt will tell you that physical security is the most important aspect of system security. If physical access to a computer is available, then it is usually just a matter of time before the system is compromised, regardless of operating system. Obviously, the probability of such breaches skyrockets for laptop users, especially when so few (based on my own observations) choose to utilize even the most primitive of safeguards, cable locks. Also, I’ve not seen any major headlines on this so far, but Live CDs, as wonderfully useful as they can be, are ginormous threats to security when physical access is available. This is because most Live CDs provide superuser access to a system and all of its devices. It is best to keep computers under lock & key whenever possible.
One of my friends from university used to work in an engineering lab on campus. He had set up a Linux box on the network, with full consent of the administrators, of course. But one of the permanent staff members approached him one day, asking how he managed to cloak his machine from the nightly SATAN network scans. The answer was simple: he turned the machine off before he left each day! Turning a machine off, or at least disconnecting it from the Internet when not in use, deprives a would-be attacker of the time needed to break in by brute force.
And, as always, be careful what you download. There is always a chance that someone will write spyware or malware for Linux. Stick with applications that have large communities and good reputations if you can. Search the Internet for evidence that an app may not be secure before downloading it. To quote the Gipper: “trust, but verify”.
I saw this article summarized on Slashdot this morning. Headline: Microsoft Claims ‘Vista Has Fewer Flaws Than Other First-Year OSes.’ According to the article, Microsoft released 17 security bulletins and fixed 36 Vista vulnerabilities in the first year, a big improvement over XP’s counts of 30 and 65 respectively. The article continues by comparing these figures to the first-year vulnerability numbers for Red Hat (360), Ubuntu (224) and Mac OS X (116). One security specialist is quoted, stating that these numbers “prove that [Vista] is quantitatively more secure.” He then chastises other OS vendors for their negligence in QA and security testing.
When you read this article, does it make you doubt the security of Linux and OS X? That’s the intended message, no?
I find this article misleading for a few reasons:
- Vista numbers are based on actions Microsoft decided to take: release bulletins and fix vulnerabilities. There is an element of subjectivity here.
- The article discloses that 30 Vista vulnerabilities remain, which brings the number of known vulnerabilities to 66. Sometimes, ‘known’ translates into ‘disclosed’.
- An assumption is made that the writing of Vista and the compilation of a Linux distro are comparable activities. They are not, and the vulnerabilities associated with these activities likely have different statistical distributions.
- The scope and risk of the vulnerabilities fixed are not discussed. Microsoft may have fixed 17 big problems, whereas Ubuntu fixed 224 small ones. For all we know, the cumulative effect could be equal.
The security specialist states that Vista underwent more testing than the other OSes. He makes no reference to the relative quality of that testing. “More” doesn’t mean “better.” And what does “more” mean anyway? Was the testing measured in dollars spent? Number of testers? Number of test cases?
An analysis of the actual report would probably provide more clarity. Moreover, independent verification of the numbers would boost the integrity of the conclusions drawn. The validity of the actual results, however, does not change the intent of those who report on them. FUD, I say!