The GUIfication of computers got out of control a long time ago, and in some respects it’s still getting worse. I remember in the mid-’90s it actually got to the point where people were excited about the “modernity” of MS Windows 95 OSR2, because finally (supposedly) there was a Microsoft OS that wasn’t based on DOS. That might actually have been a good thing if it had been based on something better, but it wasn’t — it just cut the DOS underpinnings out from under MS Windows. The general justification for this belief in the superiority of a DOS-less system was, to a significant degree, wrapped up in the notion that “the command line is bad”. People were stupid. Hell, I was stupid, though I at least held out against the marketability of stupidity almost until ’98 came out.
A few years later, I was back to recognizing the value of the command line — almost in time for Win2k to arrive on the scene. The polish had worn off the turd of CLI-disallowed computing for me. There’s nothing wrong with having a GUI available for desktop computer use, of course, but actively dismissing the CLI, making it more difficult to use the way Microsoft has tried to do (and Apple did for many years), is just beyond stupid.
This trend toward relentlessly discouraging CLI use in favor of the WIMP model of GUI is the source of a lot of the problems people perceive in their computing world, but if you present a lot of those people with a command line application they’ll rebel. There’ll be guillotines erected in the streets and, to mix revolutionary metaphors, those who had the gall to suggest using the CLI for a change will be the first against the wall. As a result, the perceived value of an application is significantly bound up in its appearance of high-tech flashiness, with whirling 3D objects flying around and animated widgets doing the can-can while you wait interminable minutes for something to happen after clicking the mouse button. People complain about those minutes wasted, of course, and they’ll even bitch and moan sometimes about how much of a pain in the butt it is to perform some particular action many times a day via a series of repetitive mouse actions — but they’ll complain even louder and longer if you take away their Clippy and their Ribbon.
Ironically, it’s crap like Clippy, and the Ribbon, and the way the application window wobbles, and the automatic word replacement when it thinks you’re using the wrong term, and all the rest of that dancing rodents BS that actually makes the applications slow in the first place, to say nothing of the fact that the piles upon piles of new features actually make the Ribbon and all those nested menus necessary. I started thinking about this today when I came to a realization about how behavior in IRC channels is probably significantly affected by the GUIfication of our OS environments.
The incidence of people popping into an only moderately busy IRC channel, saying “hi”, then disappearing again less than a minute later is ridiculously high. I strongly suspect that a lot of the reason for that is the fact that many people only keep an application open while they’re looking at it. Computing environments that are too GUI-focused encourage that kind of behavior; they tend to lack the kind of cognitive involvement in the operating environment that is normal — even necessary — in a more CLI-focused multitasking environment such as your average Unix-like OS (ignoring the encroaching mess of things like GNOME and KDE, for the moment). An “out of sight, out of mind” approach to computer use just isn’t compatible with leaving applications running where you can’t see them while doing something else, then coming back to those neglected applications periodically to check up on progress and interact with them directly now and then.
The desire for instant gratification is surely part of what gives rise to this behavior, but that’s not all there is to it. It appears to be more related to attention span, and a decreasing ability to mentally multitask in a patient manner. Anything you don’t have on-screen, right in front of you, where you can see it right now, simply ceases to exist for you. Out of sight, out of mind. You don’t let the application run without being able to see it; you open it, make it do something, and close it again when you’re not actively clicking buttons. There are exceptions, but for the most part those are only partial exceptions; you still have icons in your system tray, or buttons on your taskbar, indicating that there’s an application open, and they are often designed to clamor for your attention so you needn’t do anything like remember what you were doing five minutes ago.
There are other factors at work, as well. There is, in fact, a confluence of factors involved. Part of the reason GUI applications aren’t (generally) designed to be persistent is the fact that they consume so damned much of your system’s resources. Consider the case of the IRC client. If you start up a feature-heavy IRC client with a GUI interface and no text console based interface option on a modern MS Windows machine, chances are good that it’s going to consume a nontrivial amount of RAM and CPU. These system resources are important to the smooth, quick performance of software on your system, and the more of these resources you’re using, the less of them will be available to other applications. IRC clients are far from the most egregious perpetrators when it comes to consuming system resources, of course, but as GUI applications they fall within a broader category of system usage habits learned from everyday encounters with MS Word and Outlook and Internet Explorer.
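The difference in resource consumption is easy to check for yourself on a Unix-like system. A rough sketch, with the caveat that the client names here are just examples (irssi and HexChat stand in for whatever is actually running on your machine), and that `ps -C` is a procps (Linux) extension:

```shell
# Print resident memory (RSS, in kilobytes) for a couple of IRC
# clients, selected by process name. The names are examples only.
ps -o rss=,comm= -C irssi
ps -o rss=,comm= -C hexchat

# Or inspect any single process by PID, which works with plain POSIX ps:
ps -o rss=,comm= -p "$$"
```

Run it with a terminal client and a GUI client open side by side and the RSS column usually tells the story on its own.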
We end up training ourselves to open a GUI application and, unless it can be made to vanish into a tiny little icon in the system tray, use it pretty much exclusively until we’re done with the current task, then close the application. Open your GUI-only IRC client, connect to Freenode, sign into a couple of channels, and say “hi” in each of them. When nobody responds within a few seconds, get bored and decide to do something else. Rather than leave it running and switch to a different VT as you might if you were an old-school Unix hacker, checking back periodically to see if anyone has responded, you click the little X in the top-right corner of the application and make it go away entirely. These habits are learned, to a significant degree, from lessons taught by stuff like MS Word and Internet Explorer, which tend to slow the system down enough that you can’t do something else of equivalent absorption (like play World of Warcraft) without the performance of the latter application being unacceptably degraded by the fact that you left the former application open.
Once the assumption of a single-tasking system is taken into account, it all makes sense. Because MS Windows grew directly out of MS-DOS, a single-tasking system, its gradual evolution into a multitasking OS didn’t change the fact that the basic assumptions of the interface were built on a single-tasking approach to getting things done. Because MS Windows is the dominant desktop OS in the mainstream computing world, those assumptions — necessarily carried through from the days of DOS to the present, since new OS versions had to accommodate old software, and new software conformed to the habits of the old environment — have permeated the entire world of desktop application development. It’s no wonder that desktop application developers don’t give software efficiency much priority.
I don’t think we should develop everything in painstakingly optimized assembly language, or that GUI applications should be ejected from our computing lives entirely, of course. Hell, one of my favorite programming languages is also one of the slowest (or, at least, Ruby is one of the slowest I’ve encountered in its 1.8 iteration, though 1.9 is supposed to be significantly faster). The bottleneck is the kind of development philosophy that produces buckets of features rather than applications in any meaningful sense of the term. Consider the origin of that term — “application”. It’s supposed to be an “application” because it’s software that is applied to a given task. By the time the developers are done cobbling them together, though, most so-called applications are software that is meant to cover as many tasks as possible, each as minimally as the developers can get away with. There’s no focus on efficiently and effectively accomplishing a single task. In fact, one might say that a complete lack of focus is the whole problem. It’s the all-singing, all-dancing three ring circus of the software world. All that widely varied, largely unrelated functionality actually gets in the way of any one task type being efficiently and effectively tackled.
Thank goodness there are people who still develop applications that are intended to focus on a single, specific task. It’s not the slimmest example of an IRC client in the world, but I use irssi instead of whatever fat-assed GUI application is most popular on MS Windows these days. I run it in a persistent tmux session on a server that “never” gets turned off (other than the obvious exceptions, like moving the server or upgrading hardware). I connect to the server via SSH, attach to the tmux session, and can scroll back to see what has been said since the last time I was paying attention.
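That workflow boils down to a handful of commands. The host and session names below are hypothetical stand-ins, not the ones I actually use:

```shell
# On the always-on server: start a detached tmux session running irssi.
# This only needs to happen once; the session outlives any SSH connection.
tmux new-session -d -s irc irssi

# Later, from any machine: log in and reattach to the same session.
ssh user@irc-host.example.com
tmux attach-session -t irc

# Detach again with Ctrl-b d; irssi keeps running, and the scrollback
# is waiting the next time you attach.
```

The point is that the client’s lifetime is completely decoupled from whether anyone happens to be looking at it.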
Even for my GUI applications — and I am glad to have some of the GUI applications I use (and just sad that there aren’t decent CLI replacements for some of them, which I blame in part on the nearly universal GUIfication of the mainstream computing world) — I’m glad to be able to switch between workspaces by way of simple keyboard shortcuts, with no need to move a mouse around and click on something, and no taskbar consuming screen real estate unnecessarily. It’s a shame that many other people seem to be so afraid of the command line, and never get to really experience the benefits of using the GUI only when it’s the best option, and using the CLI when that is the best option.
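In a window manager with keyboard-driven workspaces (i3 is just one example of such a thing; no particular window manager is named above, so treat this as an illustrative assumption), the whole mechanism is a few lines of configuration:

```
# Excerpt from an i3 config (~/.config/i3/config); i3 is an example
# choice here, not necessarily what anyone in particular runs.
set $mod Mod4

# Jump straight to a workspace from the keyboard
bindsym $mod+1 workspace number 1
bindsym $mod+2 workspace number 2

# Throw the focused window onto another workspace
bindsym $mod+Shift+2 move container to workspace number 2
```

No mouse travel, no taskbar, and the applications on the other workspaces keep running whether or not you can see them.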