
Easily Amused

I was greeted the other morning by a coworker grinning from ear to ear. “I love Vista!” he proclaimed and proceeded to tell me of a great discovery.

As it happened, he had been working on a personal project the night before and had inserted a blank CD into the drive. What did he see? A dialog box asking him whether he would like to burn a data CD or an audio CD. How convenient is that?! The best part is, he didn’t have to buy any third-party software!

He seemed so happy, so full of glee. I didn’t have the heart to tell him that I’ve enjoyed the same luxury with Ubuntu (Nautilus, to be more precise) for almost three years now…well, I almost didn’t have the heart to tell him.

Cheers!
-Brandon

Trial By Hacker

In the fifth installment of my Top 10 List of Linux FUD patterns, I discussed various security measures used in Linux distros. Last week, the CanSecWest security conference invited hackers to circumvent security on three fully-patched computers running different operating systems: OS X, Windows Vista & Ubuntu 7.10. The OS X machine reportedly fell first, requiring only two minutes to exploit a vulnerability in the Safari browser! Vista fared well on its own, but an attack on Adobe Flash on the last day marked the end for Windows. At the end of the three-day contest, the Ubuntu machine was the only one left standing! This is good news indeed!

While this is a great PR victory for Linux, please bear in mind that the parameters of the contest were controlled. Given the right circumstances and/or enough time, the outcome might have been different, and in the real world, windows of opportunity are left wide open all the time – so protect yourself. It was also interesting to me that the Mac fell first because it was seen as an ‘easy target’ and that the exploit that took out Vista could easily be tweaked to work on any platform.

Cheers!
-Brandon

Note: my original references for this post were articles on eFluxMedia and The Register.

Top 10 Linux FUD Patterns, Part 6

Linux FUD Pattern #6: Linux is low-quality software

Every once in a while, an article or post will appear, claiming that Linux is just not good enough for everyday use. The reason? Concerns over quality. Such blog fodder can range from a sensationalist author’s “Is Linux Ready for Prime Time?” teaser to the rants of a disgruntled user whose experience with Linux was subpar. Neither contains anything resembling an objective approach to quality, and neither results in a useful conclusion. That’s the topic of this sixth installment of my Top 10 List of Linux FUD patterns: the accusation that Linux is low-quality software. To recognize when FUD of this kind occurs, we must first have a working knowledge of quality measurement.

Quality Defined

What is quality? There are several dictionary meanings, but when discussing software quality, the second definition in Merriam-Webster’s online dictionary seems to be the most applicable: a degree of excellence. Other dictionaries generally concur with only minor deviations in terms. Fine, but what does that really mean? The definitions of ‘excellence’, ‘excellent’ and ‘excel’ emphasize the superiority of the excellent, its ability to surpass others in some way. Moreover, by adding the word ‘degree’, M-W subtly introduces measurement. Therefore, quality as it applies to software is really a two-part activity: measurement of one or more attributes and the comparison of these measurements for the purpose of determining the degree to which the software excels.

Just off the top of my head, I can name three comparisons commonly used in software quality assurance: benchmarking, regression testing and expectations management.

In software benchmarking, attributes of a program are compared with the same attributes exhibited by its peers. Most benchmarking is metrics-based, measured numerically, and is often related to the relative speed of the program. The time it takes for a program to start up or the number of calculations a program can perform per unit of time are common examples. I consider feature-list comparisons to be benchmarks as well, albeit non-quantitative ones. Competing software packages that perform roughly the same function usually share a minimum set of features. For example, all “good” word processors are expected to have a spell checker. Of course, many factors, not just the number of features, must be considered.
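
As a minimal sketch of the quantitative kind, startup time could be measured along these lines; note that wordproc-a and wordproc-b are hypothetical stand-ins for two competing programs, not real packages:

```python
import statistics
import subprocess
import time

def benchmark_startup(command, runs=10):
    """Measure the mean and standard deviation of a command's start-to-exit time."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Compare two (hypothetical) word processors on the same metric.
for command in (["wordproc-a", "--version"], ["wordproc-b", "--version"]):
    mean, stdev = benchmark_startup(command)
    print(f"{command[0]}: {mean:.3f}s mean, {stdev:.3f}s stdev over 10 runs")
```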

Regression testing is a comparison of a program to itself over time, usually between builds, releases or other milestones in the evolution of a product. Most often, it means testing unchanged functionality to determine whether a program was inadvertently broken by a change to some supposedly-unrelated function (i.e. making sure it all still works). This is an example of a binary determination (working or broken); however, degradation in speed or capacity and unacceptable trends in various controls and tolerances may be detected as well, indicating programmatic problems in the code. Metrics that describe the development process provide valuable feedback, leading to process improvements that should ultimately improve the product either directly or indirectly.
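
To illustrate the idea with a minimal sketch (the apply_discount function and its baseline values are invented for this example), a regression suite pins down behavior captured from the last accepted release and is re-run after every change:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    """Baselines captured from a previously accepted release; a failure here
    means a later change inadvertently broke 'unchanged' functionality."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 15), 85.00)

    def test_no_discount_is_identity(self):
        self.assertEqual(apply_discount(42.50, 0), 42.50)

if __name__ == "__main__":
    unittest.main()
```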

I saved the best one for last, the management of users’ expectations. Don’t let the name fool you – it may not sound like a measurement-and-comparison activity, but the management of expectations involves constant gap analysis, which inherently necessitates measurement. The quality of a product is most often measured as the extent to which the product meets the needs of its users. This means that the end product must be compared to the requirements defined by the users, often traceable via some sort of design documentation. The requirements specification may have been created by explicitly documenting the needs of a readily-accessible user base, or by extrapolating the needs of a generally inaccessible user base through market research and analysis. This type of comparison is the most important of all because user requirements can potentially render both benchmarking and regression testing unnecessary. For more discussion on this topic, pick up any number of books on quality at the bookstore or library and see what the experts say regarding the importance of meeting the needs of customers.

Determining Quality

Ok, so measuring quality means drawing comparisons of various kinds. Now what? Suppose you want to determine whether a particular software program or package is good enough to use. This can actually be quite simple. The first step is to list your needs and wants, including how you expect the software to behave and how well you expect it to perform. The distinction between needs and wants is deliberate and necessary. If software lacks a function you need, you won’t use it, but the decision is different if it lacks something you simply want and yet is acceptable in every other way. These requirements may then be weighted or ranked, and measurement criteria defined, both qualitative and quantitative. Assuming that alternative products exist, select one or two of the competing programs to evaluate; this is called a “short-list”. Design and execute your tests, observe and measure outcomes, then weigh and rank the results. Several rounds of evaluation may be required if results are inconclusive, with requirements added and/or refined on each pass. At some point in the process, you will determine whether the software meets your needs or whether you would be better off with one of the competing products.
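
To make the weighting and ranking step concrete, here is a minimal sketch of one way to score a short-list; the products, requirements, weights and scores are all invented for illustration:

```python
# Each requirement carries a weight; 'need' marks knockout criteria.
requirements = {
    "spell checker":       {"weight": 5, "need": True},
    "opens .docx files":   {"weight": 4, "need": True},
    "starts in under 3s":  {"weight": 2, "need": False},
    "built-in templates":  {"weight": 1, "need": False},
}

# Scores (0-10) from hands-on evaluation of two hypothetical candidates.
scores = {
    "WordProc A": {"spell checker": 9, "opens .docx files": 7,
                   "starts in under 3s": 4, "built-in templates": 8},
    "WordProc B": {"spell checker": 8, "opens .docx files": 0,
                   "starts in under 3s": 9, "built-in templates": 9},
}

for product, result in scores.items():
    # A missed 'need' disqualifies the candidate outright...
    if any(result[req] == 0 and spec["need"] for req, spec in requirements.items()):
        print(f"{product}: disqualified (fails a needed requirement)")
        continue
    # ...while 'wants' merely add to the weighted total.
    total = sum(result[req] * spec["weight"] for req, spec in requirements.items())
    print(f"{product}: weighted score {total}")
```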

When comparing several programs or packages, it may be helpful to create classifications based on multiple criteria. For example:

Low-quality software will:
– crash or not start at all,
– contain calculations that return incorrect results,
– corrupt data,
– have components that don’t work as documented or don’t do anything at all,
– have little or no documentation,
– have poorly designed GUI layout, and
– have a poor or missing CLI or API.

Medium-quality software will have none of these ills and will often:
– have a consistent look and feel,
– include useful documentation, and
– have an intuitive layout and command options and other user-friendly qualities.

High-quality software will:
– sport outstanding UI considerations,
– have accurate and friendly data validation,
– contain no material defects in functionality (especially calculations),
– include fully- and easily-configurable options,
– have a full-featured CLI and/or API, and
– include complete documentation with examples of use.

The type of software being evaluated often establishes the quality criteria. For example, one of the most important attributes of desktop productivity packages for many users is a feature-rich user interface with an intuitive layout. Some processing speed can safely be sacrificed to achieve these goals, as the program is limited to the speed at which the user can provide input anyway. Contrast this with automated server software that must process many incoming requests very quickly but, because it is configured once and allowed to run in the background (“set it and forget it”), treats ease-of-configuration as a lesser concern. There are always exceptions, though. Some desktop users may want an extremely basic interface with only core functionality, while others are willing to sacrifice server software speed on low-traffic services in exchange for easy configuration and management. Notice that these exceptions are examples of how user requirements can trump benchmarking.

Of course, if you are the author of the software and not just a user, you probably already know that testing at multiple levels is not only desirable but almost always necessary. In an internal IT shop, developers unit test their modules, the software is often tested in an integrated environment that simulates or mirrors the ‘production’ environment, and finally, users perform functional testing and formally accept a product. For commercial software, internal testing is performed, possibly followed by alpha and beta testing performed by external users.

What About Linux?

So far, we’ve discussed quality in general, and absolutely nothing specific to Linux. Obviously, Linux is software too, so all of the points made above apply. Computer users have various needs and wants, requirements and expectations, regarding the operating system. These requirements can be translated into tests of quality for Linux just as they can for any other software.

I think that the ways in which Linux differs from other platforms, primarily in philosophy but also in more tangible respects, are arguably a major reason for the perception that Linux and its applications are low-quality software. For example, everyone’s needs and wants are different, and the Linux community strives to provide as much freedom as possible to the user in satisfying those requirements. To accomplish this, multiple distributions, window managers, and the like are offered; unfortunately, this tends to confuse the uninitiated into believing that using Linux means living with chaos. To make matters worse, producers of commercial software products focus on market share and go to great lengths to be the ‘only game in town’. While competition is supposed to foster innovation, the unfortunate after-effect is a reduction in the number of choices available to fulfill users’ requirements. It hurts when a product that meets a specific need is acquired by a competitor of the vendor, subsequently discontinued, and replaced with the new vendor’s surviving product, which never fulfilled the need in the first place.

In my experience, another reason commonly cited for the “poor quality” of Linux and Open Source software in general stems from faulty logic, predicated on the old adage, “you get what you pay for”. If the software in question is sponsored by a software company, then it stands to reason that the company (a) probably knows how to develop software, (b) has adequate testing resources to do so and (c) has a reputation to protect. These companies cannot afford to build bad software. A track record of producing bad Open Source software could very easily bleed over into customers’ perceptions of the company as a software producer overall, impacting sales of existing and future commercial software packages. On the other hand, many Open Source applications are home-grown solutions, not supported by companies but maintained and promoted through grass-roots efforts. The authors are individuals motivated to write quality programs because they themselves use them, and they are kind enough to share the fruits of their labor with the rest of us. While it is true that “quality costs”, development isn’t (economically) free either; so, just because an application is available without monetary cost to you doesn’t mean that it is without value.

Finally, Linux distributors, especially the “name brands” such as Ubuntu, SUSE and Red Hat, usually do a good, professional job of testing their products. Applications that run on the Linux platform vary more widely in the level of testing applied. Check the websites of these applications to determine how thoroughly the software is tested before each ‘stable’ release. See if the authors employ dedicated resources to test and/or engage end users in alpha and beta testing efforts. Third-party certification, though rare, is an invaluable tool for boosting end-user confidence.

Conclusion

Don’t believe blanket statements about the quality of software available for any particular platform unless they are backed by real data. Most are biased, unsupported or outright FUD. Unsubstantiated and grossly subjective claims are irrational and the hallmark of FUD. Instead, do research and evaluate software for yourself. Only you can determine if an application meets your needs. Only you define quality.

Cheers!
-Brandon

<< Go To Part 5 Go To Part 7 >>

IBM to Fully Support Ubuntu with Lotus: Death of Ubuntu?

by Kevin Guertin

Lotus Notes

IBM believes Linux is finally ready for the corporate desktop.

In an announcement this week at the Lotusphere 2008 conference in Orlando, IBM said that it will provide full support for Ubuntu Linux with Lotus Notes 8.5 and Lotus Symphony using its Open Collaboration Client software, which is based on open standards.

Antony Satyadas, chief competitive marketing officer for IBM Lotus, said the Ubuntu support for Notes and Symphony was a direct response to demand from customers. Lotus Notes 8.0.1 has limited support for Ubuntu Linux, but customers have asked for broader capabilities, he said.

Judging by users’ comments on Slashdot, this isn’t such a great announcement. Some go as far as to say it will be the death of Ubuntu. Canonical, on the other hand, has said that the availability of Notes and Symphony for use with Ubuntu will be a win for customers everywhere.

Although I’ve never used Lotus (and don’t plan to), apparently over 100,000 business users are interested in moving to Ubuntu Linux on the desktop. That’s a sizable chunk. If it helps squash FUD, I’m all for it, especially for Linux on the business desktop.

What do you think? Will this really be the Death of Ubuntu or will it definitely help solidify Linux/Ubuntu in the corporate world?

Newsreel: Crazies, Myths & Name Changes

For those who have asked for a break from the FUD and a focus on why Linux is a great OS, I thought you might enjoy this short article from Linux Journal about why people are crazy about Linux. I find the author’s personal reasons for using Linux (listed just after the bulleted list) similar to mine, especially the simplicity of text-based config files.

Also, here’s one from the downloadsquad regarding Linux myths. A few of these sound vaguely familiar.

Finally, some proposed name changes to Ubuntu derivatives have made the news. It is generally good practice to avoid changing the names of established products, especially more than once. The author hit the nail on the head… it’s confusing. It also impairs brand loyalty.

TTFN!
-Brandon

Top 10 Linux FUD Patterns, Part 3

Linux FUD Pattern #2: Linux is not “officially” supported

When you hear the phrase “official support,” what comes to mind? Informative user manuals? A well-staffed call center? But what makes it “official”? This is the second item on my Top 10 List of Linux FUD patterns: the lack of “official” Linux support. The goal of FUD based on this notion is a mixture of fear and uncertainty, to make you believe that using Linux means having no place to turn when a problem occurs.

Generally speaking, “official support” for a product is provided by the entity that owns the intellectual property for the product and/or has the right to produce and distribute it. Products are typically sold or leased, both of which are types of business transactions; this implies that the entity in question is operating as a business. A third-party provider paid to support a product may be licensed by, or otherwise affiliated with, the original vendor, but only the vendor’s fixes and upgrades are “officially” supported. “Official” support connotes a certain level of authority or expertise, but it also implies consequences, usually legal or fiscal, for a failure to meet service expectations. This is the model used by businesses today.

Linux, however, is not a business-supported product (per se). Linux is not “owned” by a particular entity, nor does any one entity retain the exclusive right to update and distribute it. It is licensed under the GNU General Public License (GPL), which permits any software recipient to modify and distribute the software or derivatives thereof as long as the conditions of the GPL are not violated. This is coupled with the open source philosophy, but the two aren’t exactly the same thing – an open source application may be licensed under something other than the GPL.

So then, who does “officially” support Linux? The answer is that Linux has always been a grassroots movement. Though it was originally created by one man, Linux is “officially” maintained by a community made up of individuals, groups, and yes, businesses. Different groups within the community support different parts of the system. These groups are commonly known as “maintainers” and usually include original authors or those to whom the torch of authority has been successively passed. For example, assuming the Wikipedia article on the Linux kernel is not out-of-date, Mr. Torvalds still supervises changes to the core of Linux and has delegated the maintenance of older releases to other individual maintainers. The parts maintained are typically called “projects”. Various entities, such as Ubuntu and Red Hat, bundle various system parts together as a unit and ensure that their respective distributions operate as expected, that is to say, that they operate well.

While maintainer and/or community support for a Linux distribution or a particular project may be “official”, technical assistance may not be readily available, on demand, free of charge, or for that matter, available at all. Most maintainers are polite and willing to help, but please remember that much of Linux has been contributed by developers and that support offered pro bono publico doesn’t help feed the family or pay the mortgage. This is where the rest of the community helps out, in the form of online support forums.

Paid support is available as part of the commercial offerings made by Red Hat, Novell, Linspire and others. Additionally, some of these companies offer professional services, such as consulting and training, though these services are typically meant for consumption by businesses, not home users. Any company offering fee-based technical support for Linux is free to set its own price, whatever the market will bear.

In an increasingly tech-savvy world, I think the difference between commercial and community-based support is rapidly decreasing. Consider the available courses of action that may be taken when a problem does occur with a commercial OS. Almost always, the first step is to search the Internet for a root cause, if not a full-blown resolution. This is often done as a cost-saving measure (easy fix) or so that the user/administrator can better explain the problem to tech support when a call is eventually made. Moreover, help may be actively sought in a multitude of discussion groups, mailing lists, blogs, chat rooms and other forums dedicated to supporting various operating systems. Another option is to consult with a friend or relative who knows about these sorts of things. Of course, the “official” vendor or (gasp) a consultant can be called upon, usually for a fee of course. At the discretion of the user/administrator, the problem may be eliminated by brute force: reinstalling the OS. (Actually, this last option isn’t all bad as long as no data were lost – it provides an opportunity to “clean house” and possibly upgrade to a newer release or move to a different distribution.) The order of preference for these alternatives depends on the facts and circumstances surrounding the problem, but they almost always rank from the least- to the most-expensive in terms of time, effort and cash outlay.

Hardware support (or lack thereof) often appears as diversionary FUD regarding “official” support. Hardware must be able to communicate with the computer at several levels, starting with the physical. For example, a USB device can be attached to any machine with the appropriate port, but to use the device, the OS must know how to communicate with both the USB bus itself and the device on the other side. Obviously, this issue quickly boils down to device drivers and brings us back to a discussion of “official” software support.

Rest assured, common devices such as keyboards, mice and thumb drives almost always work using standard Linux drivers. In other words, they don’t support Linux; rather, Linux supports them. Newer device classes for which no “official” Linux drivers are provided often suffer a period of incompatibility or reduced usefulness. For example, Wi-Fi network interface cards are now going through the same sort of transition that consumer-class Ethernet cards did about six or eight years ago. Many times, this is because drivers have to be reverse-engineered from the messages sent to and from the devices, often requiring many hours of experimentation. A general rule of thumb: hardware compatibility problems become more common as the hardware becomes more exotic. For example, I experienced new levels of frustration with the big-name vendor of a certain USB-ready programmable television remote control for which future Linux support was promised and never delivered. But the fact is, hardware vendors have the right to choose to support Linux or not, a decision based on supply and demand. The need to operate specific hardware may dictate which OS is used.

The best advice I can give is to ignore the FUD and adopt a pragmatic approach to defining your support needs. Your needs are specific to you. Compile a scorecard and do some research. Questions that should be answered include the following. What is your level of expertise with computers? Have you needed professional OS support in the past? Do you expect to need it in the future? Are you comfortable doing your own support work? Based on community-supplied information? Is your hardware “officially” supported or listed in one of the various compatibility lists? Do you use exotic hardware components? Have you tried running a Linux Live CD, especially Knoppix? When buying a new PC or laptop, have other users posted their experiences with the same model? Research never hurts, but just be on the lookout for more FUD!

Cheers!
-Brandon

<< Go To Part 2 Go To Part 4 >>

FUD Alert! Vista Security Report Card

I saw this article summarized on Slashdot this morning. Headline: Microsoft Claims ‘Vista Has Fewer Flaws Than Other First-Year OSes.’ According to the article, Microsoft released 17 security bulletins for Vista and fixed 36 vulnerabilities in the OS’s first year, a big improvement over XP’s hit counts of 30 and 65, respectively. The article continues by comparing these figures to the first-year vulnerability numbers for Red Hat (360), Ubuntu (224) and Mac OS X (116). One security specialist is quoted, stating that these numbers “prove that [Vista] is quantitatively more secure.” He then chastises other OS vendors for their negligence in QA and security testing.

When you read this article, does it make you doubt the security of Linux and OS X? That’s the intended message, no?

I find this article misleading for a few reasons:

  • The Vista numbers are based on actions Microsoft decided to take: releasing bulletins and fixing vulnerabilities. There is an element of subjectivity here.
  • The article discloses that 30 Vista vulnerabilities remain unfixed, which brings the number of known vulnerabilities to 66. Sometimes, ‘known’ translates into ‘disclosed’.
  • An assumption is made that the writing of Vista and the compilation of a Linux distro are comparable activities. They are not, and the vulnerabilities associated with these activities likely have different statistical distributions.
  • The scope and risk of the vulnerabilities fixed are not discussed. Microsoft may have fixed 36 big problems, whereas Ubuntu fixed 224 small ones. For all we know, the cumulative effect could be equal.
  • The security specialist states that Vista underwent more testing than the other OSes. He makes no reference to the relative quality of that testing. “More” doesn’t mean “better.” And what does “more” mean anyway? Was the testing measured in dollars spent? Number of testers? Number of test cases?

An analysis of the actual report would probably provide more clarity. Moreover, independent verification of the numbers would boost the integrity of the conclusions drawn. The validity of the actual results, however, does not change the intent of those who report on them. FUD, I say!

Cheers!
-Brandon