
Piracy or Marketing?

Linux is often mentioned in discussions on Intellectual Property (IP) and the protection thereof. The reason is two-fold. First, the Linux platform is often seen as the “Wild West” where there are no (enforceable) laws. The perception is that Linux makes it easier to pirate software, music, video and other digitized IP products. Unlike the analog piracy of the past, digital piracy causes no (or imperceptibly little) degradation in the quality of the copy with respect to the original.

Second, Linux itself defies the very concept of IP protection due to the OpenSource philosophy held by its development community. Some believe that OpenSource advocates illicitly extend this philosophy to other, non-Open products – that they actually believe all products are intellectually-free – and therefore, that they do not and will never respect the true ownership of IP.

Now, I said all of that as a segue into this very non-Linux story. A British band from Devon, England called Show of Hands admits in an interview that they depend “utterly” on piracy as viral marketing to support ticket and album sales. I mulled this story over for a while and came to realize that this band is to the recording industry what a shareware developer is to the software industry.

A small band, Show of Hands probably does not enjoy nearly as much radio airtime as, say, Metallica. This means that album sales rely much more heavily on concert attendance, and I’d venture to guess that concert ticket revenues constitute a much higher percentage of the band’s total revenues than they do for bigger names. Like a shareware company, the band earns more money by tolerating some piracy than it would by preventing piracy outright.

Let’s look at the other side of the coin. A big-name band does receive a lot of airplay, which can translate into fewer tours (if they like). Concert venues, being oppressively spatial in nature, can hold a limited number of humans safely and the band usually has a minimum return in mind; thus the ticket price is adjusted to allow just the right number of real fans to enjoy the performance first-hand. Not everyone can see the show, but everyone can buy the band’s albums on CD. For many bands, CD sales far outweigh concert revenues, so piracy is a much bigger threat to the band’s monetary success, especially considering that sound quality is not sacrificed. Albums re-released on CD probably sold well on cassette and possibly in LP format as well. Some groups like to repackage old material into “Greatest Hits” albums and other compilations, sometimes adding one or two “new cuts” to keep old fans buying. Like big software shops, big bands like to lock you in and repeatedly resell to you.

Enter the RIAA and other IP groups who claim to have the protection of the artists at heart. Like legislators, these groups want to represent their constituents, but all too often the only folks they get to talk to are the lobbyists and the influential. The “best interests” of the recording industry and the artists themselves begin to look a lot like what the big guys want. Forget that the small bands may be able to use viral marketing to their advantage. I know, nothing is stopping them from “giving away” their IP if they choose to do so, right?

Wrong. So-called “digital media rights” must somehow be managed to protect IP (read: imposed, because legal punishment is obviously not an effective deterrent) and technological controls are increasingly relied upon to achieve this. If it becomes illegal or highly cost-prohibitive to own or operate equipment free of IP protection functionality, the small band will have no choice but to conform, eliminating one of its most effective marketing strategies. This constitutes a barrier to entry for competition, strengthening the resale potential of established big-name bands.

What does this mean for Linux? It seems that the creators of codecs and IP protection software are reluctant to share their algorithms with the Linux community, the most likely reason being the fear of the “Wild West” described above. If you don’t want stuff stolen from your gym locker, don’t write the combination of your lock on the door, right? The IP folks probably won’t budge on this point unless the Linux community can be trusted (read: controlled).

One last thought – if music piracy is such a big problem and Windows+Mac still has 95% or more of market share, I really don’t see how Linux is the root of that problem (no pun intended).

-Brandon

NEWS: Linux Developers Make A Living

In January, I wrote at length about the perception that Linux is not ‘officially’ supported. Yesterday, Linux-Watch released some figures that demonstrate how much of the work on the Linux kernel has been contributed by paid professionals hired by large, profit-seeking corporations. Yes, I said paid professionals.

Two great quotes from Linux Foundation Marketing Director, Amanda McPherson, can be found in the last few paragraphs, both in relation to the unthinkable notion that profit-seeking companies would expend resources (money, time, people) improving something that they do not exclusively own and cannot sell. She notes that savings from shared R&D costs do ultimately impact the bottom line (i.e. profit increases due to a decrease in expense, not an increase in revenue). I suspect that she wouldn’t be mentioning this if the cost savings weren’t (or weren’t expected to be) material.

McPherson also notes that “it’s difficult for most people to get their minds around competitive mass collaboration.” Indeed, this is what the freedom afforded by Linux is all about. People (and companies) contribute not for humanitarian reasons, but because they expect a benefit. Work together to create the best platform, openly usable by everyone, and if it still doesn’t meet your needs perfectly, you are free to change it accordingly. Everyone wins. No compromises.

-Brandon

Top 10 Linux FUD Patterns, Part 6

Linux FUD Pattern #6: Linux is low-quality software

Every once in a while, an article or post will appear, claiming that Linux is just not good enough for everyday use. The reason? Concerns over quality. Such ‘blog fodder can range from a sensationalist author’s “Is Linux Ready for Prime Time?” teaser to the rants of a disgruntled user whose experience with Linux was subpar. Neither contains anything resembling an objective approach to quality, and neither results in a useful conclusion. That’s the topic of this sixth installment of my Top 10 List of Linux FUD patterns: the accusation that Linux is low-quality software. To recognize when FUD of this kind occurs, we must first have a working knowledge of quality measurement.

Quality Defined

What is quality? There are several dictionary meanings, but when discussing software quality, the second definition in Merriam-Webster’s online dictionary seems to be the most applicable: a degree of excellence. Other dictionaries generally concur with only minor deviations in terms. Fine, but what does that really mean? The definitions of ‘excellence’, ‘excellent’ and ‘excel’ emphasize the superiority of the excellent, its ability to surpass others in some way. Moreover, by adding the word ‘degree’, M-W subtly introduces measurement. Therefore, quality as it applies to software is really a two-part activity: measurement of one or more attributes and the comparison of these measurements for the purpose of determining the degree to which the software excels.

Just off the top of my head, I can name three comparisons commonly used in software quality assurance: benchmarking, regression testing and expectations management.

In software benchmarking, attributes of a program are compared with the same attributes exhibited by its peers. Most benchmarking is metrics-based, measured numerically, and is often related to the relative speed of the program. The time it takes for a program to start up or the number of calculations a program can perform per unit of time are common examples. I consider feature list comparisons to be a type of benchmark, albeit a non-quantitative one. Competing software packages that perform roughly the same function usually share a minimum set of features. For example, all “good” word processors are expected to have a spell checker. Of course, many factors, not just the number of features, must be considered.
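To make the metrics-based flavor concrete, here is a minimal sketch in Python of the kind of timing comparison a benchmark performs. The two word-count functions are hypothetical stand-ins for competing programs; only the measure-the-same-thing-the-same-way pattern matters.

```python
import timeit

# Two hypothetical implementations of the same calculation, standing in
# for competing programs under benchmark.
def word_count_loop(text):
    count = 0
    for ch in text:
        if ch == " ":
            count += 1
    return count + 1

def word_count_builtin(text):
    return len(text.split())

SAMPLE = ("the quick brown fox jumps over the lazy dog " * 1000).strip()

# Measure each candidate identically: total seconds for 1000 repetitions.
for func in (word_count_loop, word_count_builtin):
    seconds = timeit.timeit(lambda: func(SAMPLE), number=1000)
    print(f"{func.__name__}: {seconds:.3f}s for 1000 runs")
```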

Regression testing is a comparison of a program to itself over time, usually between builds, releases or other milestones in the evolution of a product. In most cases, regression testing means testing unchanged functionality to determine whether a program was inadvertently broken by a change to some supposedly-unrelated function (i.e. make sure it all still works). This is an example of a binary determination (working or broken); however, degradation in speed or capacity and unacceptable trends in various controls and tolerances may be detected as well, indicating programmatic problems in the code. Metrics that describe the development process provide valuable feedback, leading to process improvements that should ultimately improve the product either directly or indirectly.
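As an illustration (a minimal sketch with a made-up function, not drawn from any real project), a regression test can be as simple as re-running unchanged functionality against a baseline recorded from the previous release:

```python
import unittest

# The function under test; imagine an 'unrelated' change was just made
# elsewhere in its module.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class RegressionTests(unittest.TestCase):
    """Pin down behavior that must not change between releases."""

    # Input/output pairs recorded from the previous release.
    BASELINE = {
        (100.00, 10): 90.00,
        (19.99, 25): 14.99,
        (0.00, 50): 0.00,
    }

    def test_unchanged_functionality(self):
        # Any drift from the baseline means the supposedly-unrelated
        # change broke something (the binary working/broken check).
        for (price, percent), expected in self.BASELINE.items():
            self.assertEqual(apply_discount(price, percent), expected)

if __name__ == "__main__":
    unittest.main()
```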

I saved the best one for last, the management of users’ expectations. Don’t let the name fool you – it may not sound like a measurement-and-comparison activity, but the management of expectations involves constant gap analysis, which inherently necessitates measurement. The quality of a product is most often measured as the extent to which the product meets the needs of its users. This means that the end product must be compared to the requirements defined by the users, often traceable via some sort of design documentation. The requirements specification may have been created by explicitly documenting the needs of a readily-accessible user base, or by extrapolating the needs of a generally inaccessible user base through market research and analysis. This type of comparison is the most important of all because user requirements can potentially render both benchmarking and regression testing unnecessary. For more discussion on this topic, pick up any number of books on quality at the bookstore or library and see what the experts say regarding the importance of meeting the needs of customers.

Determining Quality

Ok, so measuring quality means drawing comparisons of various kinds. Now what? Suppose you want to determine if a particular software program or package is good enough to use. This can actually be quite simple. The first step is to list your needs and wants, including how you expect the software to behave and how well you expect it to perform. The distinction between needs and wants is deliberate and necessary. If software lacks a function you need, you won’t use it, but the decision is different if it lacks something you simply want and yet is acceptable in every other way. These requirements may then be weighted or ranked, and measurement criteria defined, both qualitative and quantitative. Assuming that alternative products exist, select one or two of the competing programs to evaluate; this is called a “short-list”. Design and execute your tests, observe and measure outcomes, then weigh and rank the results. Several rounds of evaluation may be required if results are inconclusive, with requirements added and/or refined on each pass. At some point in the process, you will determine if the software meets your needs or if you would be better off with one of the competing products.
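Here is a minimal sketch in Python of that weigh-and-rank step; the requirements, weights, and scores are invented purely for illustration:

```python
# Hypothetical requirements, weighted so needs count more than wants.
weights = {
    "spell checker": 5,      # need
    "opens .doc files": 5,   # need
    "startup speed": 3,      # want
    "themes": 1,             # want
}

# Scores (0-10) observed for each short-listed candidate during testing.
scores = {
    "Candidate A": {"spell checker": 9, "opens .doc files": 7,
                    "startup speed": 4, "themes": 8},
    "Candidate B": {"spell checker": 8, "opens .doc files": 9,
                    "startup speed": 7, "themes": 2},
}

# Weigh and rank the results; the highest total best fits YOUR requirements.
for name, result in sorted(scores.items(),
                           key=lambda kv: -sum(weights[r] * kv[1][r]
                                               for r in weights)):
    total = sum(weights[req] * result[req] for req in weights)
    print(f"{name}: weighted score {total}")
```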

When comparing several programs or packages, it may be helpful to create classifications based on multiple criteria; a sketch of how such a rubric might be applied follows the lists below. For example:

Low-quality software will:
– crash or not start at all,
– contain calculations that return incorrect results,
– corrupt data,
– have components that don’t work as documented or don’t do anything at all,
– have little or no documentation,
– have poorly designed GUI layout, and
– have a poor or missing CLI or API.

Medium-quality software will have none of these ills and will often:
– have a consistent look and feel,
– include useful documentation, and
– have an intuitive layout and command options and other user-friendly qualities.

High-quality software will:
– sport outstanding UI considerations,
– have accurate and friendly data validation,
– contain no material defects in functionality (especially calculations),
– include fully- and easily-configurable options,
– have a full-featured CLI and/or API, and
– include complete documentation with examples of use.
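
As a rough, purely illustrative sketch, a rubric like the one above can be applied as a simple classifier; the trait names are condensed versions of the list items, not any standard taxonomy:

```python
# Condensed rubric from the lists above, applied as a coarse classifier.
LOW_QUALITY_FLAGS = {"crashes", "wrong calculations", "corrupts data",
                     "broken components", "no documentation"}

MEDIUM_QUALITY_TRAITS = {"consistent look and feel", "useful documentation",
                         "intuitive layout"}

HIGH_QUALITY_TRAITS = MEDIUM_QUALITY_TRAITS | {
    "outstanding UI", "friendly validation", "no material defects",
    "configurable options", "full-featured CLI/API", "documented examples"}

def classify(observed):
    """Map a set of observed traits/flags to a coarse quality tier."""
    if observed & LOW_QUALITY_FLAGS:       # any showstopper -> low
        return "low"
    if HIGH_QUALITY_TRAITS <= observed:    # all high-tier traits present
        return "high"
    if MEDIUM_QUALITY_TRAITS <= observed:  # polished basics only
        return "medium"
    return "unclassified"

print(classify({"consistent look and feel", "useful documentation",
                "intuitive layout"}))  # -> medium
```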

The type of software being evaluated often establishes quality criteria. For example, one of the most important attributes of desktop productivity packages for many users is a feature-rich user interface with an intuitive layout. Some processing speed can safely be sacrificed to achieve these goals, as the program is limited to the speed at which the user can provide input anyway. Contrast this with automated server software that must process many incoming requests very quickly, but because it is configured once and allowed to run in the background (“set it and forget it”), ease-of-configuration is of lesser importance. There are always exceptions though. Some desktop users may want an extremely basic interface with only core functionality, while others are willing to sacrifice server software speed on low-traffic services in exchange for easy configuration and management. Notice that these exceptions are examples of how user requirements can trump benchmarking.

Of course, if you are the author of the software and not just a user, you probably already know that testing at multiple levels is not only desirable but almost always necessary. In an internal IT shop, developers unit test their modules, the software is often tested in an integrated environment that simulates or mirrors the ‘production’ environment, and finally, users perform functional testing and formally accept a product. For commercial software, internal testing is performed, possibly followed by alpha and beta testing performed by external users.

What About Linux?

So far, we’ve discussed quality in general, and absolutely nothing specific to Linux. Obviously, Linux is software too, so all of the points made above apply. Computer users have various needs and wants, requirements and expectations, regarding the operating system. These requirements can be translated into tests of quality for Linux just as they can for any other software.

I think that the ways in which Linux differs from other platforms, primarily in philosophy but also in more tangible respects, are arguably a major reason for the perception that Linux and its applications are low-quality software. For example, everyone’s needs and wants are different, and the Linux community strives to provide as much freedom as possible to the user in satisfying those requirements. To accomplish this, multiple distributions, window managers, and the like are offered; unfortunately, this tends to confuse the uninitiated into believing that using Linux means living with chaos. To make matters worse, producers of commercial software products focus on market share and go to great lengths to be the ‘only game in town’. While competition is supposed to foster innovation, the unfortunate after-effect is a reduction in the number of choices available to fulfill users’ requirements. It hurts when a product that meets a specific need is acquired by a competitor of the vendor, subsequently discontinued, and replaced with the new vendor’s surviving product, which never fulfilled the need in the first place.

In my experience, another reason commonly cited for the “poor quality” of Linux and Open Source Software in general stems from faulty logic, predicated on the old adage, “you get what you pay for”. If the software in question is sponsored by a software company, then it stands to reason that the company (a) probably knows how to develop software, (b) has adequate testing resources to do so and (c) has a reputation to protect. These companies cannot afford to build bad software. A track record for producing bad Open Source software could very easily bleed over into customers’ perceptions of the company as a software producer overall, impacting sales of existing and future commercial software packages. On the other hand, many Open Source applications are home-grown solutions, not supported by companies, but maintained and promoted through grass-roots efforts. The authors are individuals motivated to write quality programs because they themselves use them, and they are kind enough to share the fruits of their labor with the rest of us. While it is true that “quality costs”, development isn’t (economically) free either; so, just because an application is available without monetary cost to you doesn’t mean that it is without value.

Finally, Linux distributors, especially the “name brands” such as Ubuntu, SUSE and Red Hat, usually do a good, professional job in testing their products. Applications that run on the Linux platform vary more widely in the level of testing applied. Check the websites of these applications to determine how thoroughly the software is tested before each ‘stable’ release. See if the authors employ dedicated resources to test and/or engage end users in alpha and beta testing efforts. Third-party certification, though rare, is an invaluable tool for boosting end-user confidence.

Conclusion

Don’t believe blanket statements about the quality of software available for any particular platform unless they are backed by real data. Most are biased, unsupported or outright FUD; unsubstantiated and grossly subjective claims are the very hallmark of FUD. Instead, do research and evaluate software for yourself. Only you can determine if an application meets your needs. Only you define quality.

Cheers!
-Brandon

<< Go To Part 5 | Go To Part 7 >>