Yesterday, David Williams posted an article on ITWire’s Linux Distillery about how Linux is keeping Microsoft honest. The real meat begins with a discussion of Windows PowerShell, Microsoft’s newest scripting language. ‘New’ is a relative term; as Williams points out, scripting is a very old concept, and the punch cards of computer lore could be considered its first form. He observes that the Windows trend of ‘dumbing it down’, creating GUI tools to replace thousands of keystrokes, may be reversing: the focus of PowerShell, a CLI, is to replace thousands of mouse clicks with scripts. Williams adds that PowerShell is becoming ‘entrenched’ in Microsoft’s server offerings, including a headless, GUI-less mode for Windows Server 2008, and he attributes this shift in design philosophy to Linux.
I think this is great news for Windows, because as systems grow, especially online offerings, effective system management depends on efficiency. Ultimately, this means automating as many maintenance functions as possible. With Linux and other *nix platforms, this has never been a problem, but the Windows CLI has been fading into obscurity for many years now. The DOS shell sat right on top of the kernel, but beginning with NT, the ‘command prompt’ became just another application that had to operate through various other layers, such as the oppressive NT HAL, diminishing its power. Moreover, the range of CLI utilities remained unimpressive. Thankfully, products such as MKS Toolkit, Cygwin and SourceForge’s UnxUtils have helped to fill that gap.
Let’s not forget that the CLI is useful for far more than executing OS-related functions. In my experience, all the best software applications offer a CLI. I implement systems that help IT managers manage the activities of their staffs, including helpdesk and other customer issue management suites, source code control and software media distribution centers, and project/programme management repositories. I always look for software that provides a Unix release, even if the target platform is Windows. Why? Unix-based applications almost always include a CLI, which is almost always ported to the Windows release if one exists. Not only is the CLI of great use to me from a user’s and administrator’s perspective, but I know that the existence of a CLI usually indicates that the software has been tested more thoroughly. If an application has been designed well, then the CLI functions call the same underlying subroutines as their GUI counterparts – this allows the vendor to easily write (and more importantly, to execute) scripts for regression and load testing. Nightly smoke tests of new builds are possible without the maintenance of complex GUI-based test harnesses. Don’t misread me – the GUI must be tested, just not to the same extent as when the GUI is the only interface available.
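To illustrate, a nightly smoke test driven through a CLI can be as simple as the sketch below. Everything here is hypothetical: `mycli` stands in for a vendor’s real command-line interface, the test case names are made up, and in practice the expected output would come from saved “golden” files rather than a placeholder function.

```shell
#!/bin/sh
# Sketch of a CLI-driven smoke test. "mycli" is a hypothetical
# stand-in for the application's real command-line interface;
# it is defined as a placeholder here so the sketch runs.
mycli() { echo "report for $1"; }

fail=0
for t in helpdesk-summary open-tickets; do
    got=$(mycli "$t")
    want="report for $t"           # in practice, a saved golden output
    [ "$got" = "$want" ] || { echo "FAIL: $t"; fail=1; }
done
[ "$fail" -eq 0 ] && echo "smoke tests passed"
```

No GUI harness to maintain, and it runs unattended from cron against every new build.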
Where’s the FUD? For years, Windows zealots have denounced Linux for being arcane, hard to use, and backward. Heavy reliance on the CLI for administration was cited as a failure to progress (through obstinacy, ignorance or both). Now, it appears that Microsoft is admitting that a powerful shell is indeed useful, forcing its fanboys to dine on crow tartare.
The return of a powerful shell is a step in the right direction for Windows! Is this really due to Linux? I wouldn’t be surprised.
The #1 item on my Top 10 List of Linux FUD Patterns concerns its learning curve. This pattern is probably the most prevalent and primarily appeals to fear by attempting to convince you that Linux is too hard for the average person to use or that it is simply not user friendly. There are many variations of this pattern, from the straightforward “Linux is for geeks” assault to more mature, logical arguments, such as “if Linux can do everything the fill-in-the-blank OS can do, why bother with the hassle of switching?”.
To be honest, as with every convincing piece of FUD, I think this line of reasoning has…or should I say, had…a glimmer of truth behind it. Back in the day, when I was casually messing around with Linux as a hobby, I spent many hours on “administrative” tasks, such as installing Slackware from 30+ floppy disks on old retired hardware and trying to configure the Red Hat-bundled Metro-X server for specific video cards and monitors. Looking back, these tasks were difficult enough for a seasoned PC tech like myself, let alone for the general public. But today, it’s a different story, especially since Ubuntu makes it so easy. Nonetheless, web news headlines asking “Is Linux Ready for Prime Time?” still appear frequently. What makes Linux so difficult anyway? A quick look through screenshots and how-tos for modern Linux distributions tells quite a different story, does it not? I believe its close association with Unix is the primary reason.
Unix in general has a “bad” reputation for being a command-line-driven OS. It was written in the late 1960s, and the graphical ‘X’ windowing system was not introduced until the mid-1980s. In contrast, Linux was first released by Linus Torvalds in 1991, and the development of the XFree86 windowing system for PCs began about a year later. Therefore, one could argue that Linux had a graphical user interface “from the start”. Moreover, Ubuntu and others have done a great job in reducing the user’s exposure to the system console altogether. The need to log into the system on a character-based screen and manually run ‘startx’ is no more. Of course, you may forgo an X session and boot directly into a prompt if you wish, but that is not the default.
First impressions count too. Despite the availability of X, my first serious exposure to Unix was in university in the mid-1990s and took place, not on something as fancy as a Sun SPARCstation, but on an amber-on-black dumb terminal in the school’s computer lab. To me, Unix came to mean a terminal screen, often accessed via telnet over a dial-up connection with the host computer. It was not until several years later that I discovered X.
Case sensitivity is another classic example. Unix and its kin are case sensitive in practically every respect, most visibly when saving and opening files. This can be a most obnoxious feature when working from the command line, especially for the occasional user; however, the impact is minimal in today’s point-and-click Linux world. I have heard the concern expressed more than once that having two or more files in the same directory whose names differ only in case would be too confusing. My usual response is in the form of a question: why would a person have so many files named essentially the same thing to begin with? Just because it can be done, doesn’t mean that it should be done.
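For anyone who has never seen it in action, here is a quick terminal demonstration on a case-sensitive Linux filesystem (the filenames and contents are arbitrary):

```shell
# On Linux, names differing only in case are distinct files.
cd "$(mktemp -d)"                  # scratch directory
echo "meeting notes"  > notes.txt
echo "something else" > Notes.txt
ls                                 # shows two separate files
cat notes.txt                      # prints "meeting notes"
cat Notes.txt                      # prints "something else"
```

On Windows filesystems the second `echo` would simply overwrite the first file, which is exactly the behavioral difference that worries newcomers.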
Other differences exist, such as installation methods for both the OS and software applications, but I think I’ve made my point: Linux is very much like Unix, but it is not the same OS. Linux was made for the x86 PC platform, though other platforms are supported as well. It was written with the end-user in mind, knowing that the everyday user will demand a slick windowing environment, web browsers with plug-in support, and the like. Contributors to Linux and its applications are everyday users too, you know.
How can these negative perceptions be overcome? The concept that Linux is very similar – but not the same as – Unix is too academic, too logical and would take far too long to adequately communicate to the masses. It just doesn’t make for good marketing.
Nothing, however, beats seeing it in action! Remember what I said about first impressions? Live CDs are very useful weapons against FUD. They allow potential users to test drive the OS, to try before “buying”. This helps prove to some that Linux has come a long way in terms of automatic hardware detection and other features that make it user friendly. It’s also much easier than going to the extent of configuring a dual-boot system. The downside is, they can be a bit slow under certain conditions. If a friend has a Linux system already installed, it may be better to try that out instead.
It is also fortunate that the academic community has shown an interest in Linux. Of course, this stems partially from the never-ending need for schools to save money, but there are purely educational reasons for using Linux as well. For example, Linux provides an open platform for programming classes, and many math- and science-based applications have been developed. Early exposure to Linux means that kids will “grow up” with it and its “peculiarities”.
Hopefully, this treatise will help you keep an open mind the next time you read an article on how Linux could dominate the market “if only it were easier to use”, or help you form an appropriate response when someone expresses the same sort of sentiment in conversation. Always seek out the reasons used to support these opinions and remember that experience should provide more convincing evidence than the rhetoric of FUD.
Your first thought when reading the title of this post is probably, “WTF? Why would I need that?” Well, if you’re like me and you love customizing your Gnome system, you’ll know that during your customizations, you have to reload this and reload that for your new changes to take effect. Sometimes it’s simply because a change you made caused some problems and something didn’t load correctly. Whatever it is, most of the time it requires you to use the killall command in the terminal.
This is common enough for the Gnome panels and for Nautilus, since the latter draws and handles the desktop by default. I was tired of pulling up a terminal window and typing in the killall commands to “refresh” my GUI or Desktop, or repeating them if I had already run them previously. Not that it takes that long to do. I just wanted a quick “button” I can click that will do it automatically.
So I did it myself. It’s really not complicated at all.
- Right-click the panel or drawer where you want the button situated
- Select “Add to Panel…” and the “Add to Panel” window will open
- Click on the “Custom Application Launcher” at the top of the window
- In the Launcher Properties, select “Application in Terminal” as the Type
- Name it “Refresh GUI”
- For the command, type in: “killall gnome-panel nautilus” without the quotes
- For the comment, type in: “Reloads the panels and the desktop (Nautilus).” or whatever you want.
- Click on the “No icon” button and choose an icon of your choice.
- Click Close and you’re done
Now, whenever you need to reload/refresh your Desktop, you can simply click on your brand-spanking new shiny button!
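For the terminally curious, the point-and-click steps above boil down to a small `.desktop` launcher file, and you could create it by hand instead. The file location and the `Icon` value below are my assumptions (pick whatever icon you chose in step 8); GNOME versions may store launchers differently.

```shell
# Hand-written equivalent of the "Refresh GUI" launcher created
# in the steps above. Path and Icon are assumptions; adjust to taste.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/refresh-gui.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Refresh GUI
Comment=Reloads the panels and the desktop (Nautilus).
Exec=killall gnome-panel nautilus
Terminal=true
Icon=view-refresh
EOF
```

Either way, the panel restarts the killed processes automatically, which is what makes `killall` work as a “refresh”.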
Did you remember to subscribe to Linux FUD?
Oooh… he makes it look so easy! The thing is, it is easy.
Blessen Cherian, CTO and Executive Team Member of bobcares.com writes:
“Shell scripting is nothing but a group of commands put together and executed one after another in a sequential way. Let’s start by mentioning the steps to write and execute a shell script.”
He then goes into step-by-step instructions on creating a script in a way that any idiot can understand!
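If you want the gist before reading his article, the whole process condenses to three moves: write the commands to a file, mark the file executable, run it. The filename and message here are mine, not Cherian’s.

```shell
# Write a script file, make it executable, and run it.
cat > hello.sh <<'EOF'
#!/bin/sh
echo "Hello from a shell script"
EOF
chmod +x hello.sh
./hello.sh    # prints "Hello from a shell script"
```

That’s genuinely all there is to the basics; everything after that is just learning more commands to put in the file.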
Cherian says, “This is just the first part of my article on shell scripting. The Advanced Part … will be published soon.”
I’m an idiot, so this will definitely help me!
I was impatient.
I killed my Ubuntu system.
Despite what I said in a previous post, despite all the warnings, I was anxious and impatient and I went and installed the Release Candidate of Ubuntu 6.10 Edgy last night. I couldn’t wait 5 days for the final release. Stupid me.
Now I can’t load X, and my cordless keyboard from my keyboard/mouse combo is not working in the CLI (command line interface). It’s rendered useless (without a degree in Linux-ism! <grin>). Although I haven’t really looked into it… I don’t know if I really want to. There were a lot of errors during the upgrade, so I don’t want to spend the rest of 2006 fixing it! I wish I could just type in: $ sudo fix-it --now
Of course, in my impatience, I didn’t really back up anything. Now I have to figure out how to access my Linux partition (either from Windows XP or a Live CD) and copy what I want to keep to my Data partition (FAT32).
There’s not much to backup. I’d like to keep my Thunderbird emails and settings, though… oh, and possibly my Amarok settings and data. There’s some music, pictures, and videos I’d also like to move. As for other Ubuntu-specific configs or software, that’s not really important.
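If I do the rescue from a Live CD, I imagine it would look roughly like this. Fair warning that this is a sketch, not a tested recipe: the device names are guesses (check yours with `sudo fdisk -l`), “me” is a placeholder username, and the dotfolder paths are what 2006-era Thunderbird and Amarok 1.x use on Ubuntu.

```shell
# Rough rescue plan from a Live CD session.
# Device names are guesses -- run "sudo fdisk -l" and substitute yours.
sudo mkdir -p /mnt/old /mnt/data
sudo mount /dev/hda3 /mnt/old            # old Ubuntu root partition
sudo mount -t vfat /dev/hda1 /mnt/data   # FAT32 data partition
sudo mkdir -p /mnt/data/backup
# "me" is a placeholder username:
sudo cp -r /mnt/old/home/me/.mozilla-thunderbird /mnt/data/backup/   # mail + settings
sudo cp -r /mnt/old/home/me/.kde/share/apps/amarok /mnt/data/backup/ # Amarok data
sudo cp -r /mnt/old/home/me/Music /mnt/old/home/me/Pictures /mnt/data/backup/
```

One caveat I’ll have to watch for: FAT32 won’t preserve Unix permissions or symlinks, but for mail folders and media that shouldn’t matter.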
BUT, there are a couple of positive things to this (believe it or not)…
First, this is my first-ever Ubuntu installation, and it has been in place since April (I think?). Since then, there’s been so much tinkering, customizing, testing, updating, re-tinkering, etc., that my system has gained a lot of peculiarities and issues I can’t get rid of… like my issue with transparent panels killing my X server and crashing Ubuntu. Re-doing my system will allow me to start over from scratch with a brand new system. I know my way around it now and I know what I want, what I don’t need to try, and what not to do (like install an RC on top of a highly tinkered system — shut up, I know now!).
Second, a new Ubuntu version installed from scratch will give me more material to write about for my blog! I plan on documenting most of my experiences, issues, and reviews. Hopefully, it will help the newbies experiencing fear, uncertainty, and doubt about using Linux.
Wish me luck!
I spent quite some time customizing my desktop’s look and feel after this installation. Once all was said and done, it was late, so I logged out and went to bed.