Here is an interesting post from The Angry Admin ‘blog. The basic point being made is that Microsoft has succeeded in corrupting the ISO standard-setting process, attempting thereby to shake the faith in it and the standards that arise from it. Perhaps the title of the post should have made reference to the death of ISO, not ODF. Near the end, the company’s proficiency in FUD is highlighted; but, if true, it also reveals a slight difference from Microsoft’s typical modus operandi in that a stalemate was considered acceptable. When it’s not winning the game, the company usually either bullies the other kids until it is declared the winner or picks up its toys and goes home; in this case, it opted to raze the playground. Could this be a sign of a weaker Microsoft? Maybe, maybe not. Time will tell.
In the fifth installment of my Top 10 List of Linux FUD patterns, I discussed various security measures used in Linux distros. Last week, the CanSecWest security conference invited hackers to circumvent security on three fully-patched computers running different operating systems: OS X, Windows Vista & Ubuntu 7.10. The OS X machine reportedly fell first, requiring only two minutes to exploit a vulnerability in the Safari browser! Vista fared well on its own, but an attack on Adobe Flash on the last day marked the end for Windows. At the end of the three-day contest, the Ubuntu machine was the only one left standing! This is good news indeed!
While this is a great PR victory for Linux, please bear in mind that the parameters of the contest were controlled. Given the right circumstances and/or enough time, the outcome might have been different, and in the real world, windows of opportunity are left wide open all the time – so, protect yourself. It was also interesting to me that the Mac fell first because it was an ‘easy target’ and that the exploit that took out Vista could easily be tweaked to work on any platform.
IDM’s UltraEdit is arguably the world’s best text editor…for Windows. I first used it in 2002 as part of a basic programming tool set provided by my client at the time. I was hooked, and started to use it on other engagements. I even started ‘selling’ it to my colleagues, showing them how it could solve various problems. One of my colleagues, a statistician, had to routinely convert large data files of various formats (fixed-width, CSV, etc.). He did much of this by hand (i.e. in Notepad and/or Excel) until I showed him how to convert files painlessly in UltraEdit. He bought a license the same day.
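As an aside, conversions like the one my colleague needed are also easy to script. Here is a minimal Python sketch of a fixed-width-to-CSV converter; the field widths and sample records are invented purely for illustration:

```python
import csv
import io

# Hypothetical record layout: three fixed-width fields (name, date, amount).
# These widths are illustrative only -- adjust them to match your data file.
FIELD_WIDTHS = [10, 8, 7]

def fixed_width_to_csv(lines, widths):
    """Slice each fixed-width record into fields and emit CSV rows."""
    out = io.StringIO()
    writer = csv.writer(out)
    for line in lines:
        row, pos = [], 0
        for w in widths:
            row.append(line[pos:pos + w].strip())
            pos += w
        writer.writerow(row)
    return out.getvalue()

records = [
    "Alice     20020115  42.50",
    "Bob       20020116 199.99",
]
print(fixed_width_to_csv(records, FIELD_WIDTHS))
```

Going the other direction (CSV to fixed-width) is just as short with Python’s string formatting, which is part of why a scriptable editor or a small script beats doing the work by hand in Notepad.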
Alas, my conversion to Linux several years ago forced me to abandon UltraEdit. For me, the most useful feature was the column mode (also called ‘block’ mode) and I could not find any GUI text editor that could replace that function. I use Vim most of the time now, which does have the ‘visual’ block mode, but learning the keystrokes and writing macros to do all of the things UltraEdit can do in single button-clicks is much too time-consuming for my busy schedule to allow. I tried running it under Wine (please don’t ask which versions of either – I don’t remember now), and it seemed like most things worked, but not the column mode. Crash and burn.
Still in denial, I check the UltraEdit user forums from time to time, and what did I see just a few days ago? A post written by someone on the IDM team claiming that they are indeed working on a port of UltraEdit to Linux! It is currently called UEx and is expected to hit the market in late 2008. Joy of joys!
P.S. To find the post, go to the UltraEdit website and navigate to the User Forums under the Support menu. In the UltraEdit General Discussion category, use your browser to search for the text, “UltraEdit for Linux”. The post was written by “penntap” on December 12, 2007, which showed up on page 9 when I found it.
Linux FUD Pattern #6: Linux is low-quality software
Every once in a while, an article or post will appear, claiming that Linux is just not good enough for everyday use. The reason? Concerns over quality. Such ‘blog fodder can range from the sensationalist author’s “Is Linux Ready for Prime Time?” teaser to the rants of a disgruntled user whose experience with Linux was subpar. Neither contains anything resembling an objective approach to quality and neither results in a useful conclusion. That’s the topic of this sixth installment of my Top 10 List of Linux FUD patterns: the accusation that Linux is low-quality software. To recognize when FUD of this kind occurs, we must first have a working knowledge of quality measurement.
What is quality? There are several dictionary meanings, but when discussing software quality, the second definition in Merriam-Webster’s online dictionary seems to be the most applicable: a degree of excellence. Other dictionaries generally concur with only minor deviations in terms. Fine, but what does that really mean? The definitions of ‘excellence’, ‘excellent’ and ‘excel’ emphasize the superiority of the excellent, its ability to surpass others in some way. Moreover, by adding the word ‘degree’, M-W subtly introduces measurement. Therefore, quality as it applies to software is really a two-part activity: measurement of one or more attributes and the comparison of these measurements for the purpose of determining the degree to which the software excels.
Just off the top of my head, I can name three comparisons commonly used in software quality assurance: benchmarking, regression testing and expectations management.
In software benchmarking, attributes of a program are compared with the same attributes exhibited by its peers. Most benchmarking is metrics-based, measured numerically, and is often related to the relative speed of the program. The time it takes for a program to start up or the number of calculations a program can perform per unit of time are common examples. I consider feature list comparisons as types of benchmarks, albeit non-quantitative ones. Competing software packages that perform roughly the same function usually share a minimum set of features. For example, all “good” word processors are expected to have a spell checker. Of course, many factors, not just the number of features, must be considered.
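A trivial startup-time benchmark of the kind described above can be scripted in a few lines. This is only a sketch: the command being timed is a do-nothing placeholder, not a real editor invocation, and wall-clock timing of a short-lived process is a crude metric at best.

```python
import statistics
import subprocess
import sys
import time

def time_startup(cmd, runs=5):
    """Launch `cmd` repeatedly and return the median wall-clock time in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# A do-nothing Python process stands in for the programs being compared;
# substitute the actual command lines of the software you are evaluating.
noop = [sys.executable, "-c", "pass"]
print(f"median startup: {time_startup(noop, runs=3):.3f}s")
```

Run the same harness against each program on the short-list and you have a simple, repeatable benchmark to feed into the comparison.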
Regression testing is a comparison of a program to itself over time, usually between builds, releases or other milestones in the evolution of a product. Usually, regression testing means testing unchanged functionality to determine if a program was inadvertently broken by a change to some supposedly-unrelated function (i.e. make sure it all still works). This is an example of a binary determination (working or broken); however, degradation in speed or capacity and unacceptable trends in various controls and tolerances may be detected as well, indicating programmatic problems in the code. Metrics that describe the development process provide valuable feedback, leading to process improvements that should ultimately improve the product either directly or indirectly.
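As a sketch of the idea, here is a toy regression suite using Python’s unittest module. The `word_count` function is a made-up stand-in for whatever behavior you want to keep from silently changing between releases:

```python
import unittest

def word_count(text):
    """Toy function under test: count whitespace-separated words."""
    return len(text.split())

class WordCountRegression(unittest.TestCase):
    """Pin down current behavior so a later change can't silently break it."""

    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string(self):
        # Locked-in edge case: an empty string has zero words.
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# Run with: python -m unittest <this_file>
```

Re-running the same suite after every change is the binary working-or-broken determination described above; trend metrics (speed, capacity) would be layered on separately.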
I saved the best one for last, the management of users’ expectations. Don’t let the name fool you – it may not sound like a measurement-and-comparison activity, but the management of expectations involves constant gap analysis which inherently necessitates measurement. The quality of a product is most often measured as the extent to which the product meets the needs of its users. This means that the end product must be compared to the requirements defined by the users, often traceable via some sort of design documentation. The requirements specification may have been created by explicitly documenting the needs of a readily-accessible user base, or by extrapolating the needs of a generally inaccessible user base through market research and analysis. This type of comparison is the most important of all because user requirements can potentially render both benchmarking and regression testing unnecessary. For more discussion on this topic, pick up any number of books on quality at the bookstore or library and see what the experts say regarding the importance of meeting the needs of customers.
Ok, so measuring quality means drawing comparisons of various kinds. Now what? Suppose you want to determine if a particular software program or package is good enough to use. This can actually be quite simple. The first step is to list your needs and wants, including how you expect the software to behave and how well you expect it to perform. The distinction between needs and wants is deliberate and necessary. If software lacks a function you need, you won’t use it, but the decision is different if it lacks something you simply want and yet is acceptable in every other way. These requirements may then be weighted or ranked, and measurement criteria defined, both qualitative and quantitative. Assuming that alternative products exist, select one or two of the competing programs to evaluate; this is called a “short-list”. Design and execute your tests, observe and measure outcomes, then weigh and rank the results. Several rounds of evaluation may be required if results are inconclusive, with requirements added and/or refined with each pass. At some point in the process, you will determine if the software meets your needs or if you would be better off with one of the competing products.
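The weigh-and-rank step above can be sketched in a few lines of Python. The requirements, weights, and scores here are invented for illustration; note how a missing *need* disqualifies a candidate no matter how well it scores overall:

```python
# Hypothetical requirements with weights (needs weigh more than wants)
# and per-candidate scores on a 0-5 scale; all numbers are illustrative.
requirements = {
    "column-mode editing": 5,   # a need
    "syntax highlighting": 3,   # a want
    "macro recording":     2,   # a want
}

scores = {
    "Editor A": {"column-mode editing": 4, "syntax highlighting": 5, "macro recording": 3},
    "Editor B": {"column-mode editing": 0, "syntax highlighting": 5, "macro recording": 5},
}

def weighted_total(candidate):
    """Sum of weight * score across all requirements."""
    return sum(requirements[req] * scores[candidate][req] for req in requirements)

def meets_needs(candidate, needs=("column-mode editing",)):
    """A missing need (score of 0) disqualifies the candidate outright."""
    return all(scores[candidate][n] > 0 for n in needs)

for name in scores:
    print(f"{name}: total={weighted_total(name)}, usable={meets_needs(name)}")
```

Editor B racks up points on the wants but fails the need, which is exactly the needs-versus-wants distinction made above: no total score can compensate for a missing must-have.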
When comparing several programs or packages, it may be helpful to create classifications based on multiple criteria. For example:
Low-quality software will:
– crash or not start at all,
– contain calculations that return incorrect results,
– corrupt data,
– have components that don’t work as documented or don’t do anything at all,
– have little or no documentation,
– have poorly designed GUI layout, and
– have a poor or missing CLI or API.
Medium-quality software will have none of these ills and will often:
– have a consistent look and feel,
– include useful documentation, and
– have an intuitive layout and command options and other user-friendly qualities.
High-quality software will:
– sport outstanding UI considerations,
– have accurate and friendly data validation,
– contain no material defects in functionality (especially calculations),
– include fully- and easily-configurable options,
– have a full-featured CLI, and/or API, and
– include complete documentation with examples of use.
The type of software being evaluated often establishes quality criteria. For example, one of the most important attributes of desktop productivity packages for many users is a feature-rich user interface with an intuitive layout. Some processing speed can safely be sacrificed to achieve these goals, as the program is limited to the speed at which the user can provide input anyway. Contrast this with automated server software that must process many incoming requests very quickly, but because it is configured once and allowed to run in the background (“set it and forget it”), ease-of-configuration is of lesser importance. There are always exceptions though. Some desktop users may want an extremely basic interface with only core functionality while others are willing to sacrifice server software speed on low-traffic services in exchange for easy configuration and management. Notice, these exceptions are examples of how user requirements can trump benchmarking.
Of course, if you are the author of the software and not just a user, you probably already know that testing at multiple levels is not only desirable but almost always necessary. In an internal IT shop, developers unit test their modules, the software is often tested in an integrated environment that simulates or mirrors the ‘production’ environment, and finally, users perform functional testing and formally accept a product. For commercial software, internal testing is performed, possibly followed by alpha and beta testing performed by external users.
What About Linux?
So far, we’ve discussed quality in general, and absolutely nothing specific to Linux. Obviously, Linux is software too, so all of the points made above apply. Computer users have various needs and wants, requirements and expectations, regarding the operating system. These requirements can be translated into tests of quality for Linux just as they can for any other software.
I think that the way in which Linux differs from other platforms, primarily in philosophy but also in more tangible respects, is arguably a major reason for the perception that Linux and its applications are low-quality software. For example, everyone’s needs and wants are different, and the Linux community strives to provide as much freedom as possible to the user in satisfying those requirements. To accomplish this, multiple distributions, window managers, and the like are offered; unfortunately, this tends to confuse the uninitiated into believing that using Linux means living with chaos. To make matters worse, producers of commercial software products focus on market share and go to great lengths to be the ‘only game in town’. While competition is supposed to foster innovation, the unfortunate after-effect is a reduction in the number of choices available to fulfill users’ requirements. It hurts when a product that meets a specific need is acquired by a competitor of the vendor, subsequently discontinued, and replaced with the new vendor’s surviving product, which didn’t fulfill the need in the first place.
In my experience, another reason commonly cited for the “poor quality” of Linux and Open Source Software in general stems from faulty logic, predicated on the old adage, “you get what you pay for”. If the software in question is sponsored by a software company, then it stands to reason that the company (a) probably knows how to develop software, (b) has adequate testing resources to do so and (c) has a reputation to protect. These companies cannot afford to build bad software. A track record for producing bad Open Source software could very easily bleed over into customers’ perceptions of the company as a software producer overall, impacting sales of existing and future commercial software packages. On the other hand, many Open Source applications are home-grown solutions, not supported by companies, but maintained and promoted through grass-roots efforts. The authors are individuals motivated to write quality programs because they themselves use them, and they are kind enough to share the fruits of their labor with the rest of us. While it is true that “quality costs”, development isn’t (economically) free either; so, just because an application is available without monetary cost to you doesn’t mean that it is without value.
Finally, Linux distributors, especially the “name brands” such as Ubuntu, SUSE and Red Hat, usually do a good, professional job in testing their products. Applications that run on the Linux platform vary more widely in the level of testing applied. Check the websites of these applications to determine how thoroughly the software is tested before each ‘stable’ release. See if the authors employ dedicated resources to test and/or engage end users in alpha and beta testing efforts. Third-party certification, though rare, is an invaluable tool for boosting end-user confidence.
Don’t believe blanket statements about the quality of software available for any particular platform unless they are backed by real data. Most are biased, unsupported or outright FUD. Unsubstantiated and grossly subjective claims are irrational and the hallmark of FUD. Instead, do research and evaluate software for yourself. Only you can determine if an application meets your needs. Only you define quality.
|<< Go To Part 5||Go To Part 7 >>|
Monday, the Associated Press released a story on Wal-Mart’s decision to discontinue the line of Everex Green gPCs in their brick-and-mortar stores. It appears that the retail giant has discovered that the demand for low-cost ($199USD) computers is much higher online than in the stores, so they decided to make the offering a web-only one, freeing up valuable floor and shelf space for other products that do sell well in the stores.
I have several news readers on my iGoogle homepage, and watched yesterday as the headline made it through each. I was intrigued by the way the story mutated as the day progressed. For example, the first headline I saw was from Yahoo! News, “Wal-Mart ends test of Linux in stores”. LinuxInsider didn’t alter the story much, but the title was different, “Wal-Mart Yanks Linux PCs From Store Shelves”. The tone of the new title is not as objective, but slightly more disparaging. It gets deeper. According to Linux Loop, though Wal-Mart hasn’t given up on Linux completely, they have failed to “really give Linux a fair chance”. Actually, a search for Everex on the Wal-Mart website shows that the gPC is making way for the gPC2 and the Cloudbook and gBook laptops, all of which offer gOS Linux.
The worst headline I came across was from Wired, “Middle America Hates Linux, Wal-Mart Discovers”. Following the link, the article title actually read, “Middle America ‘Rejects’ Wal-Mart Linux Experiment”. The link was obviously a teaser. Regardless, the article had a sarcastic tone, quite a departure from the original story. The main theme shifted from Wal-Mart customers are not buying gPCs from brick-and-mortar stores to Middle-America hates Linux. Come on now, get serious!
Here’s a reality check. Love ‘em or hate ‘em, Wal-Mart knows a thing or two about inventory and logistics. The company has a more-than-adequate volume of sales data to assist in pricing decisions. With unprecedented buying power, there is little left to squeeze out of suppliers. The magnitude and capabilities of the company’s logistics network are nothing short of breathtaking. Honestly, when the company’s spokeswoman says that “this really wasn’t what our [brick-and-mortar store] customers were looking for,” I tend to believe her.
I’m certainly glad that the article pointed out the difference in demand between the online shoppers and the rest of us (hence, the qualification added to the quote above). To state it explicitly, the Everex Green gPC is not what offline Wal-Mart customers demanded – this pairing of product to market segment is key to understanding the decision that Wal-Mart made. It does not mean that nobody wants the gPC. It only means that selling the gPC in Wal-Mart stores is suboptimal in the current market. There are many varied reasons why this is true, but without more specific data, any attempt on my part to explain them would be purely speculative. Besides, it appears that ThinkGOS is already providing some explanations, media damage control which will undoubtedly get less press than the original story.
Personally, when I go to Wal-Mart, I am usually picking up groceries, lawn or car maintenance products, Christmas decorations or parts to repair the plumbing in the bathroom. I do not buy music there as I do not support censorship, and I do not typically think of Wal-Mart when making major computer system purchase decisions. It doesn’t necessarily stem from their offerings (which are big name brands) or their price (which I do find just a tad bit higher for some electronics items) – Wal-Mart just doesn’t scream “computer store” to me. I doubt I am alone in this.
Finally, I’d like to add that while the bulk of this article concerns Wal-Mart and Everex, and to an extent Linux, the AP still felt it was necessary to give Microsoft billing in the very first line (not that Redmond minds the much-needed free advertising, of course)! The AP just wants to make sure that everyone knows that this was a Linux-only phenomenon and rest assured that sales of machines loaded with Microsoft Corp’s Windows operating system were in no way impacted. Thanks y’all! A link to www.linux.com or to Wikipedia would have been sufficient.
Mother’s Day is upon us! You did get a gift, didn’t you? What’s that you say? Mother’s Day isn’t until May? While that is true for citizens of most of the world’s countries, in Asia, Eastern Europe & the Middle East, Mother’s Day falls in the month of March, and many of the countries in these regions will celebrate it tomorrow, March 8th. The origin of this holiday stems from various religious rites, including the celebration of the Vernal Equinox, pagan worship evidenced in Roman and Greek mythology, and even ties to the Christian season of Lent. Regardless of your reasons for celebrating Mother’s Day, it’s time to start thinking about a gift!
I happened across this article on the VirtualHosting blog this morning. It links fifty-two websites to various Linux distros, tools, and guides to assist in setting up a Linux box for your mom. The premise, of course, is to provide a fast, safe and highly-functional system with a clean, uncluttered interface. I agree with the commentary: the intended audience of the article is not mom herself, but a son or daughter with a Linux bias who might want to set up a system for mom to use.
My mom uses Linux too, of course! Indeed, I’ve watched her transition from complete technophobe to avid blogger over the course of about a decade. She was tired of periodic system reinstallations necessitated by spyware, malware and viruses. She instantly noticed the simplicity and ease-of-use of the Gnome interface. She also doesn’t know the administrative password, so new software has to be installed by my dad or me, a sort of agreed-upon system of checks and balances. The only complaint has been in regard to the availability of plugins for things like Flash, but then again, she’s still running Ubuntu 5.10 (as am I). By Mother’s Day (May 11th in the U.S. this year), release 8.04 should be available, and the upgrade effort is already being planned.
No PC? Those low-cost Linux PCs and laptops currently being offered by Asus and others may be just what she needs! Unless your mother is already a programmer or gamer, they should be plenty powerful enough for daily tasks, such as typing letters and surfing the ‘Net.
Show mom you really care – give the gift of Linux!
Happy عيد الأمّ / Dita e Nënës / Մայրության օր / Дан мајки / Ден на майката / Eejiin bayar / День Матери / Свято Матері / Materinski dan / Mother’s Day!