Chad Perrin: SOB

17 April 2007

Don’t forget your tinfoil hat.

Filed under: Geek,Security — apotheon @ 03:11

A while back, I posted the text of this entry elsewhere, in response to someone who suggested there is a conspiracy of antivirus software vendors. I just rediscovered it by accident, and realized that it’s close enough to a stand-alone statement in its original form as to be worth duplicating here. I have made only minor adjustments — it is otherwise identical to the original.

While it’s true that commercial security products and services are created and marketed by vendors that have a vested interest in there being a security market for what they’re selling, there’s no need for a conspiracy of commercial vendors to keep the state of PC security in such a mess. As long as the Microsoft philosophy of software creation continues to focus on features rather than good software architecture, and salable products rather than fixing problems, there will continue to be a market for security software that requires update services.

The problem is simply that Windows, and thus everything that runs on it, can never really be a very secure platform as long as its APIs and innards are kept to any degree secret, or as long as Microsoft refuses to fix the underlying architectural problems that create the vulnerabilities exploited by malware such as viruses and worms. The software that protects you against viruses and spyware on Windows systems is definitions-based (as in: virus definitions, et cetera), and new definitions must constantly be generated and distributed to deal with new variants of old viruses and other malware. That should tell you something: Microsoft isn’t fixing the underlying problems that make a virus, worm, or piece of spyware possible, but is instead letting security software vendors cover its tracks with definitions-based “solutions” (more like band-aids).
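
To see why that model demands a never-ending stream of updates, consider a toy sketch of definitions-based scanning (the signature names and byte patterns here are invented for illustration — real engines use far more sophisticated matching and heuristics). Detection is essentially a lookup against known byte patterns, so anything not yet on the list sails straight through:

```python
# Minimal sketch of definitions-based scanning. Illustrative only:
# the signature names and byte patterns are made up.
SIGNATURES = {
    "Example-Virus-A": b"EVIL_PAYLOAD_A",
    "Example-Worm-B": b"\xde\xad\xbe\xef",
}

def scan(data):
    """Return the names of every known signature found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

print(scan(b"perfectly ordinary bytes"))        # []
print(scan(b"header \xde\xad\xbe\xef footer"))  # ['Example-Worm-B']
# A brand-new piece of malware matches nothing until a new definition
# ships -- which is exactly why the updates never stop.
```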

By contrast, basically every other OS in active development is fixed rather than definitions-patched when some piece of malware is discovered that can affect it. Rather than let some third-party piece of software deal with it by scanning for a given definition, the OS developers analyze the malware to determine what system vulnerability is being exploited and close that security hole.

Only the biggest malware protection vendors (companies like Symantec and McAfee) are in a position to affect Microsoft’s policy, and even they don’t really need to deal with Microsoft to get the latter to keep them in business. Yes, there’s money involved in keeping things unsecured, but for companies like Symantec it’s not about a security conflict of interest so much as about security simply not being a business priority of Microsoft’s. Microsoft need only provide the appearance of giving a crap about security, while actual short-term profits and market dominance strategies dictate that its developers focus their effort not on fixing problems but on inventing more unnecessary bells, whistles, widgets, and slogan-worthy “enhancements” for Microsoft’s marketing campaigns. That’s where the real problem lies — Microsoft is ignoring the actual problems, producing “features” that merely seem to solve problems but actually create more, by adding more levels of complexity to its software.

It’s certainly not merely crass commercialism that’s motivating anti-malware products like ClamWin and Spybot S&D, both of which are available entirely free. Neither one of them provides any direct revenue streams for its maintainers and developers.

NOTE: I’ve noticed that while my Symantec series drew some attention from outside my little community of regular readers, it doesn’t seem to be drawing much feedback from my regulars. I suspect it isn’t of much interest to them. This, coupled with the burnout I felt toward the subject through all of last week, prompts me to consider abandoning the series entirely. I think my last post about the Symantec ISTR volume XI serves as a pretty good stopping point anyway.

OOP and the death of modularity

Filed under: Geek — apotheon @ 03:13

In recent years I’ve seen what looks like a slight resurgence of interest in software modularity. I’ve only really seen it in open source developer communities, and only in limited contexts, but it does seem to be a real, measurable increase. People are looking at programming style and saying “This needs to be decoupled,” or at least “This needs to be more loosely coupled.” This is a good thing, in and of itself — the tendency has for a long time been to increase coupling over the life of a given software project, and to increasingly fail to instill decoupled programming values in people new to established projects.

Of course, in the aggregate, that trend still seems to be just as strong as ever. I’m seeing more eddies at the edges of the mainstream, to stretch a metaphor mostly leached of its meaning by the contempt of long colloquial familiarity. Some of this can likely be attributed to excellent books about good programming practice from people who obviously know their craft, like The Pragmatic Programmer by Andrew Hunt and David Thomas. Some of the credit for the eddies can probably be laid at the feet of the increasing influence of the most experienced superhackers in the open source world, influence imparted in part by the facility of communication and community accretion provided by the Internet — communities like PerlMonks. Other reasons certainly exist as well. The growing popularity of a (very) few languages that substantively get object oriented programming “more right” than others that have hogged the limelight for several years may well be a large part of that. I’m talking about languages like Ruby that encourage good programming practice without trying to force it by depriving the programmer of tools that may prove useful, and that implement excellent OOP constructs faithfully and well. Smalltalk may never gain the prominence once dreamt for it, nor even what prominence it once had, but it certainly is having some influence in other languages today.

Unfortunately, object oriented programming — good, bad, or indifferent — is also probably somewhat to blame for the tight coupling of code in larger software projects in the first place. Think about it for a moment: what’s the most effective way to create a complex system with decoupled code? The answer is simple. Create small building blocks, and use them together to construct the system. Now think about what object oriented programming does: it creates a means of loosely coupling code in a given complex system. It doesn’t decouple it — OOP just loosely couples it. No part of a single OO program is useful all by itself. The parts all rely on each other to be at all useful as anything other than an example of how to write code. The way to really decouple the code in a massive application is not to refactor its class hierarchy and redefine its object interfaces — it’s to break the monstrosity up into individual programs that all do one job, do it well, and can be used together with glue code and other mortar for the bricks used to build the complex system.
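
To make the distinction concrete, here is a toy sketch in Python (all the names are mine, purely illustrative). The first version loosely couples two classes through a narrow interface, but neither is useful alone; the second is a pair of independent filters — each a complete program in miniature — joined only by glue code:

```python
# Loose coupling (OOP style): Parser and Reporter interact through a
# narrow interface, but neither is a useful program on its own -- each
# only makes sense as a part of this one system.
class Reporter:
    def report(self, fields):
        print(len(fields))

class Parser:
    def __init__(self, reporter):
        self.reporter = reporter      # Parser still *needs* a Reporter

    def parse(self, text):
        for line in text.splitlines():
            self.reporter.report(line.split())

Parser(Reporter()).parse("a b c\nd e")   # prints 3, then 2

# Decoupling (Unix style): each function is a complete filter -- text
# in, text out -- so each could ship as its own small program, combined
# by glue code (or a shell pipe) rather than by a shared class hierarchy.
def normalize(text):
    return "\n".join(" ".join(line.split()) for line in text.splitlines())

def count_fields(text):
    return "\n".join(str(len(line.split())) for line in text.splitlines())

print(count_fields(normalize("a  b   c\nd e")))   # prints 3, then 2
```

Either filter is independently useful — `normalize` alone, or `count_fields` alone — whereas the Parser cannot even be constructed without a Reporter.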

The evidence is all around us. There are currently two major traditions of operating system design:

  • Microsoft Windows is the tradition of tightly coupled code, with more than 50,000,000 lines of code in the OS. Most of that code is very tightly coupled — pull out one brick and the whole house comes tumbling down.
  • Unix is the tradition of decoupled code, whence comes the very notion of a user environment composed entirely of programs that “do one thing well”. This tradition, in general, is enforced all the more strictly in open source unices.
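
That Unix tradition is easy to demonstrate from any scripting language (Python here; `sort` and `uniq` are standard utilities on any Unix-like system, which this sketch assumes). Two small, preexisting programs, each useful entirely on its own, are composed into a larger system by nothing more than a pipe:

```python
import subprocess

# Compose sort(1) and uniq(1) into a word-frequency counter.
# No shared code, no shared classes -- just a pipe between processes.
text = "pear\napple\npear\nbanana\napple\npear\n"

sort_proc = subprocess.Popen(["sort"], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, text=True)
uniq_proc = subprocess.Popen(["uniq", "-c"], stdin=sort_proc.stdout,
                             stdout=subprocess.PIPE, text=True)
sort_proc.stdout.close()      # so uniq sees EOF when sort finishes

sort_proc.stdin.write(text)
sort_proc.stdin.close()
output = uniq_proc.communicate()[0]
sort_proc.wait()
print(output)                 # counts for apple, banana, pear
```

Swap `uniq -c` for any other filter and the rest of the “system” is untouched — that is what decoupling, as opposed to mere loose coupling, buys you.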

OpenBSD, apparently, is exceedingly good at maintaining, and even improving on, the “do one thing well” tradition of the Unix Platonic ideal. Linux-based systems are in some ways quite good at it, and in some ways not quite so good (contrast APT and the extensive archives of separated little bits of software with the gooey — pun intended — blobs like KDE and GNOME). MacOS X is now composed of a unixlike core, a bit like the deeper layers of the Earth, with its GUI a massive, rigid surface like the Earth’s crust — minus the shifting of tectonic plates — floating on a viscous mantle of glue code. Finally, we reach the avoidance of modularity with huge, proprietary, closed source, tightly coupled application suites (think of Adobe Creative Suite 3) and the tightly coupled operating system they love (the aforementioned MS Windows). These things happened in part because it became possible to build ever-larger programs (and maintain them, after a fashion) with lots of duct tape and baling wire, once object oriented programming increased the ability of programmers to loosely couple code into somewhat maintainable chunks within the larger project as a whole. It increased the maximum effective size of programs by abstracting away the hugeness of the application into smaller, more easily reasoned chunks of code.

Yes, that’s a good thing, all else being equal. The thing that wasn’t equal was the tendency of humans to fill a new development technique’s capacity for scalability as long as that technique is used. As OOP provided greater ability to write ever-larger programs, humans wrote ever-larger programs — even when they didn’t need to do so (and probably shouldn’t have done so).

Tight coupling ruins stability. Loose coupling doesn’t damage it as much. Decoupling — actual independence of modular parts — preserves stability (all else being equal, of course). The world at large continues to move toward larger programs, which means larger bodies of interconnected, if loosely coupled, code. This happens mostly because our ability to sustain development of ever-larger programs increases, and we rush to fill that potential with ever-longer lists of features. Yes, there are those eddies in the mainstream, where people talk about code modularity and really try to practice it, but the downside is that they’re mostly talking about loose coupling within a program, about separating parts of a program into objects. In a few cases things are actually decoupled, as in CPAN, but even there things are getting tied together into larger modules as time progresses (even if they’re object oriented modules that employ loose coupling). Some of those open source operating systems (Ubuntu springs to mind) are visibly abandoning their tradition of decoupled functionality a piece at a time, but it’s in nontrivial part the tradition of small utilities that “do one thing well” that preserves the stability, and even the security, of the major open source operating systems.

What? You didn’t think open source operating systems were usually more stable just because the developers are more virtuous, did you?

All original content Copyright Chad Perrin: Distributed under the terms of the Open Works License