Chad Perrin: SOB

15 September 2006

what Microsoft has been up to these days

Filed under: Cognition,Geek — apotheon @ 07:54

Some of you have no doubt already seen the two-part Hasta la Vista series over at The Beez’ Speaks. As I mentioned to Sterling when I pointed him at the URL a few days ago, my jaw literally dropped while I was reading the bit about Sim City, and hung open for a little while before I noticed.

As of today, Sterling has reported on A Kodak moment, as he calls it, where he encounters an all-too-common problem with third-party Windows-based software: one must be logged in with administrator privileges to use it as designed. He laments the fact that this sort of common behavior is a pretty good indication that almost everybody (99% he says — I think more like 98%, if only because of the 98% Rule) must be running their home computers with only one available user account, which has administrative privileges. It’s a reinforcing cycle, of course: because so much software requires administrative privileges, people just sign in with an administrative user account, and because everybody does so, software gets designed to require administrative privileges even when that’s not strictly necessary. On the other hand, much of what the average user does actually does require administrative access.
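The least-privilege design that so much Windows software of that era ignored boils down to checking what rights the process actually has and degrading gracefully instead of demanding an administrator account. A minimal sketch in Python (my own hypothetical illustration, not code from any of the posts discussed; the function name `is_admin` is an assumption):

```python
import ctypes
import os

def is_admin():
    """Report whether the current process holds administrative privileges.

    On Windows this asks the shell API; on Unix-like systems it checks
    for an effective UID of 0 (root).
    """
    if os.name == "nt":
        # IsUserAnAdmin returns nonzero for an administrator token.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    return os.geteuid() == 0

if is_admin():
    print("running with administrative privileges")
else:
    print("running as an ordinary user, as software should expect")
```

Software written this way can still offer administrator-only features, but it doesn't refuse to run at all for the ordinary user account that everyone should be using day to day.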

This is all tied into the origins of Windows in DOS, a true single-user OS, and the fact that Microsoft still hasn’t done a credible job of separating Windows from that legacy to make it a true multi-user OS (if it had, we wouldn’t be having some of these problems).

Of course, aside from technical issues, there’s that history of [Holy crap, that soup is hot! Ahem, sorry, back to your regularly scheduled blaming-Microsoft commentary.] market-dominance business practices. Microsoft is widely known for monopolistic practices, driving competitors out of business and designing software to lock customers into vertically integrated solutions so that the company can compel customers to give it as much money as possible. In the past, Microsoft has been guilty of a number of “crimes” in that vein, including using software patents to “protect” itself from fair competition. In contrast with its by-now-expected behavior, however, Microsoft is signing nonassertion covenants, wherein it promises not to sue for patent infringement with regard to specific patents. Microsoft calls its latest concession to the market share growth of open source software the Open Specification Promise.

Simon Phipps has some reservations about the OSP, which he describes in Security Blankets. Quoted directly from there, the three points of concern he raises are:

  1. First is the phrase “necessary claims”. Whenever I see this phrase my lawyer alarm goes off as it immediately involves a judgement call which is the subjective right of the patent holder. It comes accompanied by the question “was our patent really necessary for this implementation? Surely you could have done it this other way and thus not needed it. It’s actually not necessary so here’s the invoice.” I’d like to see that phrase replaced with language to indicate that no patent claims will be made against source code implementing the standard, with no necessity test involved.
  2. Second, the phrase “to the extent it conforms” is worrisome. Just as with the earlier language around Office 12 XML, it leaves open the question of who is the arbiter of conformance. It also means that open source is placed under a FUD cloud; development is carried out in public so partial and non-conforming implementations are sure to exist. I’d like to see this replaced with language to indicate that the good-faith intent to implement the standard is sufficient to gain coverage.
  3. Third (and most complex to explain) is the asymmetry of the patent peace. The patent grant is limited to necessary claims as I mentioned in 1 above, yet the cancellation of that grant is triggered:
    If you file, maintain or voluntarily participate in a patent infringement lawsuit against a Microsoft implementation of such Covered Specification, then this personal promise does not apply with respect to any Covered Implementation of the same Covered Specification made or used by you.
    That means that while Microsoft only grants me “necessary claims” I have to effectively grant them cover on all claims, necessary or not. That asymmetry has to be corrected.

A more in-depth analysis of Microsoft’s OSP is available thanks to Andy Updegrove. It’s a bit lengthy, but worth the read when you have the time if the subject matter is of any interest to you.

All in all, the OSP looks to be a welcome turn of events, an example of Microsoft doing something right for once (in the ethical as well as the connotatively “correct” sense). Naturally, any positive ground gained is offset by some more in-character act of Microsoft’s. In this case, Microsoft is trying to pull a bait-and-switch with educational standards. The details come from Changing the Report, After the Vote, an article at Inside Higher Ed.

Except for David Ward, president of the American Council on Education, every member of the Secretary of Education’s Commission on the Future of Higher Education found enough to endorse in the draft the panel produced last month to support it over all. All of them, certainly, also found some aspects of the report objectionable, yet swallowed those objections and agreed, at a public meeting August 10, to sign the report. The panel’s members agreed at the time that the report would undergo only minor copy editing and “wordsmithing” between then and when it was formally presented to Education Secretary Margaret Spellings later this month. That agreement was nearly imperiled last weekend, though. Gerri Elliott, corporate vice president at Microsoft’s Worldwide Public Sector division, sent an e-mail message to fellow commissioners Friday evening saying that she “vigorously” objected to a paragraph in which the panel embraced and encouraged the development of open source software and open content projects in higher education.

Notably from the other side of the debate, again from the Inside Higher Ed article, comes the following:

As is his wont, Richard Vedder, an outspoken economics professor at Ohio University, weighed in with all of his typing fingers blazing mid-day Saturday. Not only was the original version appropriate, Vedder said (open content “is a promising new trend in higher education that needs our explicit support,” he wrote), but more importantly, “I think it is outrageous to try to change the text of a document weeks after members have been asked to read it, and long after we have met and voted to sign it.” Vedder said he thought it might even be illegal to “change a report that has been voted on in a public meeting and to which people have already signed.” Then he added: “I must say, and I am sorry if I offend Gerri, that many very important persons on this commission found the time to read the report when asked by the chair. Their corporate responsibilities, while great, were not so great that they neglected their responsibilities to the commission…. Perhaps Microsoft considered this a low priority part of Gerri’s work — if so she should not have accepted this assignment, or, if she did, she should accept the consequences of her failure to perform her duties as all other members of the commission have.”

The final edit of the paragraph in contention now reads like most outcomes of decisions made by committee — it says a whole lot of nothing:

The commission encourages the creation of incentives to promote the development of information-technology-based collaborative tools and capabilities at universities and colleges across the United States, enabling access, interaction, and sharing of educational materials from a variety of institutions, disciplines, and educational perspectives. Both commercial development and new collaborative paradigms such as open source, open content, and open learning will be important in building the next generation learning environments for the knowledge economy.

Yes, they probably will all be important. What the hell does that have to do with setting actual policy? It’s just a prediction.

Debian and Ubuntu founders: So what do THEY think?

Filed under: Geek,Metalog — apotheon @ 01:49

Ubuntu project founder Mark Shuttleworth has some things to say about the relationship between Ubuntu and Debian. In contrast with some of the less friendly statements made by enthusiasts on both sides of the Ubuntu/Debian divide, Shuttleworth expresses admiration for the Debian project, everything that it has accomplished, and the strong foundation it provides for Ubuntu. He also characterizes Ubuntu not as a competitor, but as a complement to Debian that targets OS market niches Debian does not — and cannot, really, if it is to continue providing the strong OS foundation that it already does.

I, personally, have some grave reservations about some of the characteristics of Ubuntu. I understand Shuttleworth’s point, however, and agree that the two really shouldn’t be regarded as in competition. Neither should attempt to supplant the other where it serves an otherwise underserved market niche. I still don’t foresee myself ever really wanting to use Ubuntu unless I find myself in the position of developing for it or otherwise directly contributing to the project in some manner. Ubuntu just doesn’t serve my needs the way Debian does.

Despite all this, some nitwit self-identifying as “nick” in the comments claims Shuttleworth just “doesn’t get it”. He seeks to educate the Ubuntu founder in the causes of Ubuntu’s success, which in the considered opinion of “nick” amount to Debian “dying”, having too many shortcomings, and suffering an “appalling lack of progress”. Funny, I’ve never noticed any appalling lack of progress, and I’ve been using Debian since before Ubuntu ever existed. I guess maybe I don’t consider bleeding-edge instability necessary to the definition of “progress”. Even if I did, I’d just use Debian Sid/Unstable, like about 70% of the Debian user community (according to Shuttleworth’s statistics). Instead, I stick with Testing and Stable, depending on the purpose of the specific computer in question. I also don’t feel a desperate need for the rapid incrementation of version numbers that characterizes the Ubuntu project, especially since the Debian distribution is mature enough that it doesn’t need the sort of rapid project development Ubuntu has done in its first two years to find its stride.

In contrast to Ubuntu founder Mark Shuttleworth singing the praises of the Debian project, Debian founder Ian Murdock is commenting on the surprising usability of Microsoft’s Windows Live Writer. He has also recently commented on the suggestion, put forward by Marc Andreessen in 1995, that Windows would ultimately become nothing more than a poorly debugged set of device drivers. In short, Murdock suspects that even if the network largely obsolesced the local platform, turning it into little more than a set of hardware drivers, Windows would still dominate the market because of its wide-ranging “plug and play” hardware support.

While native support in the Linux idiom provides easier setup and more trouble-free hardware support in cases where the hardware in question has open source drivers, non-native support means either a long, hard slog installing vendor drivers (if they exist) or half-baked support with generic drivers. In this sense, Windows might very well “win” any marketshare war if driver support were all that really mattered about an OS. That, of course, assumes that under those conditions driver support would remain the same for both OSes as it is now. Since that state of affairs is unlikely ever to arrive, I doubt we’ll ever know how it would affect OS marketshare.

To punctuate the end of this entry, I’ll just add one more prominent open source software hacker’s weblog to the lineup of subject matter: Alan Cox’s online diary is in Welsh. Go fig’.

All original content Copyright Chad Perrin: Distributed under the terms of the Open Works License