Chad Perrin: SOB

20 February 2009

Did OOP do this to us?

Filed under: Geek — apotheon @ 06:54

In Two lessons for the price of one, Assaf over at Labnotes referred to No Wonder Enterprise Software Sucks^H^H^H^H^H Is Low Quality. He quoted the following passage:

My point is that the architectural complexity of these applications inhibit a person's understanding of the code. . . . Without actually doing anything, applications are becoming too complex to understand, build, and maintain.

The original piece of writing about the low quality of enterprise software by Travis Jensen is worth a read, but Assaf's own brief take on the matter is all I need here:

Every layer makes perfect sense in isolation. The cracks start showing when you pile them up into a mega-architecture, and you can clearly see how some of the layers cancel each other out.

Object Oriented Programming

One of the oft-cited benefits of OOP is the ability to "reuse code" without having to rewrite it. This is accomplished by encapsulating code with data in a manner that provides a sort of discrete entity to which one can refer over and over again within a program. That encapsulation, along with the protection OOP offers (that is, protecting the code and data inside an object from contamination by improper access), provides other benefits as well.
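
As a purely illustrative sketch (a hypothetical Account class in Java, not drawn from any real system), that encapsulation and protection might look like this:

    // Hypothetical example: the balance is encapsulated with the code that
    // operates on it, and protected from improper access by outside code.
    public class Account {
        private long balanceInCents;   // hidden state: nothing outside the class touches it directly

        public void deposit(long cents) {
            if (cents < 0) {
                throw new IllegalArgumentException("deposit must be non-negative");
            }
            balanceInCents += cents;   // the only code path that mutates the state
        }

        public long getBalanceInCents() {
            return balanceInCents;     // read access without exposing the field itself
        }
    }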

Among these other benefits is increased sustainability of large, complex projects, with many programmers who need not even be familiar with each other's work (or names, for that matter) helping to develop the overall project. Because object oriented design allows a program to be more fully and easily broken down into separate, largely independent chunks than many other programming styles allow, it is easier to specify an API, then assign its implementation to a single team of programmers, and send them off to finish that task. These groups, in turn, can further break down their own chunks of the overall system into smaller chunks, and hand those sub-chunks off to sub-teams or individuals.
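
To make the hand-off concrete, the "specify an API" step can be as small as agreeing on an interface like the following sketch (hypothetical names, Java used only for illustration), with the implementing class left entirely to another team:

    // Hypothetical API specification: one team agrees on this contract;
    // another team supplies the implementation without either needing to
    // see the other's internals.
    public interface ReportFormatter {
        String format(String title, java.util.List<String> rows);
    }

    // The implementing team's chunk of the system. Callers depend only on
    // the ReportFormatter interface, never on this class directly.
    class PlainTextReportFormatter implements ReportFormatter {
        public String format(String title, java.util.List<String> rows) {
            StringBuilder out = new StringBuilder(title).append('\n');
            for (String row : rows) {
                out.append(" - ").append(row).append('\n');
            }
            return out.toString();
        }
    }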

As large, complicated software systems like this evolve over time, they tend to grow in terms of the total number of chunks that make up the whole system. Furthermore, mediocre (or even downright crappy) code can flourish within carefully encapsulated and protected classes without ever having to be judged by outside eyes; that is to say, nobody outside the team or programmer that developed a given segregated piece of the system ever needs to view the code, as long as it "works". Nobody working on any of these individual chunks needs to know anything about the project's Big Picture, either. There isn't even any particular need for a unifying architectural style, and with big enough projects, it can quickly become nigh-impossible to impose such a unifying architectural style anyway.

The Pattern

Can we see the pattern yet?

A while back (like, two years ago), I wrote a consideration of the social effects of OOP on how software has "advanced", titled OOP and the death of modularity. In it, I may have seemed to say the opposite of what I've been implying so far, but the very perspicacious among you may notice this isn't the case. In fact, both OOP and the death of modularity and this piece point out basically the same problem, but in different ways: that software is getting bigger and more complicated without actually separating concerns in as meaningful a way as it could — unlike certain older development philosophies (like "the Unix philosophy") that provided significantly better modularity and significantly reduced complexity.

Of course, I wasn't then, and I'm not now, saying that object oriented programming is the proximate cause of the ills of bloated enterprise software.

What I'm saying is that people are the proximate cause — people who don't know what the hell they're doing. These people accept all the marketing for OOP, which states that it'll make your code more modular and more stable and just generally better. It won't. It will help you do good things for your code if you apply it intelligently, and if you refrain from using it when you shouldn't.

OOP still isn't a silver bullet. Nor is FP, for that matter. They're just tools that can help when properly applied. Don't make the old mistake of assuming that when you have a hammer, everything's a nail. Sometimes it's a hex nut, and you need a different tool.

. . . and don't assume that just because you're using OOP techniques you don't have to get actually good programmers to do the work for you. As Paul Graham put it:

Object-oriented programming offers a sustainable way to write spaghetti code.

It doesn't force you to do so — but it sure does make spaghetti code a lot more attractive.

OOP didn't do this to us; we did it to ourselves. OOP just held our hats.

37 Comments

  1. I'm not sure what this article is about – of course OOP is not a silver bullet.

    Systems get more complex, sometimes they get too complex but that is the fault of the programmer, not his tools.

    If your programmers aren't up to scratch, the only protection Java/C# bring is that their verbosity and their static typing allow IDEs to make the job of pulling things apart and seeing what is going on feasible. Sometimes my work consists of behaviour-preserving program transformations that result in a better and more maintainable software system. That's OK – it's natural for systems to have cycles of growing and then being weeded back into shape.

    I'm left confused because the tone of the article seems to suggest that OOP has somehow done something wrong without offering any examples of what that is.

    Comment by Andrew — 21 February 2009 @ 06:51

  2. Great article. Must check out the two articles you quote, by Assaf and Travis.

    I'd just like to add that said lack of software quality can and does occur in non-enterprise software too – specifically, even in open source software (though of course the two types of software are not mutually exclusive), at least nowadays. And the reason is the same as you point out – it's the people, not the technology ... crappy software can be written while using any technology or methodology (or lack of them, too :-)

    • Vasudev

    Comment by Vasudev Ram — 21 February 2009 @ 07:37

  3. Sigh. Why did I read this?

    Comment by Anonymous — 21 February 2009 @ 08:04

  4. Just because you are using an OO language does not mean you are doing OOP. These languages facilitate the opportunity to do so and nothing more. To do OOP you actually need to learn quite a few things and acquire many skills. It seems to me that everyone is looking for a way to accomplish something great without having to invest any time or effort into the process. Guess what folks?

    Comment by J.P. Hamilton — 21 February 2009 @ 08:54

  5. Who has the clout to severely trim the complexity? You cannot say yes to every need and keep a project manageable. Someone has to say "NO! Actually you can make do with this because this is what we have a chance to deliver and maintain."

    • Paddy.

    Comment by Paddy3118 — 21 February 2009 @ 10:32

  6. Andrew:

    It's unfortunate that you feel like Did OOP do this to us? is such a waste of space, especially since you paraphrased it from a slightly different perspective as a comment. Perhaps we're just talking past each other here.

    Vasudev:

    Thanks for the kind words. Assaf's Weblog post is relatively short and sweet — and though he claims to leave the "second lesson" provided by the first quote up to the reader, he also articulates that second lesson in a footnote, so don't worry that you'll miss his point like I did when I first started reading it. The bit about enterprise software by Travis Jensen was interesting too, and touched on some of the other likely social factors in making most enterprise software as sucky as it is — so I think it's worth reading, just for purposes of personal interest. That assumes your interests bear any resemblance to mine, of course.

    Anonymous:

    Sigh. Why did I read this?

    More importantly — why did you comment?

    J.P. Hamilton:

    Agreed. Contrary to the complaints of many people who have used object oriented languages for many years, I don't think something like functional programming is any more difficult to learn to do properly — to really grok, I guess — than OOP. In fact, I think it's much easier. The difference is essentially that it's easier to think you grok OOP, even if you don't.

    It seems to me that everyone is looking for a way to accomplish something great without having to invest any time or effort into the process. Guess what folks?

    Yes, exactly.

    Comment by apotheon — 21 February 2009 @ 10:36

  7. This post shows a quite typical pattern in internet blogging... A person P had an opinion O about an issue I, not because he himself came up with O, but because it was something that was planted in him by school, peers, etc. (i.e. taught). Now later on, P starts doing some thinking of his own about I, goes eureka and understands something more about I. Because he never had a reason to doubt O, it comes as a shock to P that suddenly O seems to have some flaws. Now, being a good internet blogger, P goes and writes an article A about it on the web. Giving a finalizing touch, some other reader that had accepted O without doing any thinking reads A, goes "wow!", and finds the need to tell the whole world about it on a big community site, like Reddit. The result – hundreds of people going "Right, Sherlock, good thing you have a brain." on the article.

    Comment by once again... — 21 February 2009 @ 10:41

  8. Paddy:

    I wish I could say I answered this separately because I thought it deserved its own, individual response — but the truth is that it appeared while I was composing and posting my response to everyone else.

    Who has the clout to severely trim the complexity?

    That's pretty much fully half the core problem, I think. The other half is getting the right person in that position to actually do so — someone that not only has the clout, but also knows it needs to be done, and has the guts to do it.

    Unfortunately, in many organizations, I don't think anyone really has that clout. It's an inescapable consequence of the culture of corporations, engendered by the necessities of Corporate Responsibility.

    Knowing what needs to be done can be painfully difficult, too, considering most nominal OO programmers probably don't even really know what "object oriented" means. Those who do have some idea what it means often can't explain it worth a damn to save their lives, so I guess nobody else knowing what it means isn't a very surprising outcome.

    once again...:

    Thanks for the analysis — but that's not exactly how it worked for me (assuming you're assigning me the role of person P). I've never really been in the group that accepted the flawed explanations for OOP. I went from "remind me again why this is even valuable" (due to not having gotten an explanation that made any sense) pretty directly to "Why does it seem like everyone thinks (s)he knows this stuff, when it's pretty clear that they all fail to grasp it?" The turning point was reading something Alan Kay said.

    Comment by apotheon — 21 February 2009 @ 10:46

  9. I believe that you are trying to say in the article that people blindly follow OOP design and techniques... but not the principles. There are more principles to OOP than there are techniques. If you follow these principles, you are not going to be writing spaghetti code by definition. Understood OOP and Agile Development are the answers to writing good code and redefining how businesses work with developers.

    Comment by Jonathan www.jadbox.com — 21 February 2009 @ 11:10

  10. I believe that you are trying to say in the article that people blindly follow OOP design and techniques.

    That's another way to put it — though I'm also saying that doing so can, coupled with other social factors, lead to some very unwholesome tendencies in software development above and beyond simply writing spaghetti code.

    Understood OOP and Agile Development are the answers to writing good code and redefining how businesses work with developers.

    I'd say that these things can add up to an answer, but not necessarily the answer. There's more than one way to skin a cat (or, less graphically, TIMTOWTDI).

    Comment by apotheon — 21 February 2009 @ 11:32

  11. An interesting take. I believe part of the problem is the common business language of Java. It does so many things wrong (compared to other OOP implementations), I'll just throw out the multiple inheritance issue as one example. Yet a bigger quirk is that although it's claimed to support multiple paradigms, it really doesn't. You're forced from Hello World to use their strange system of OOP. Languages like Python and Scheme work loads better for using different paradigms, and I believe that what paradigm you settle on (or mix with) depends on the job at hand. OOP is not flawless for all tasks, and right now I see it largely used just for a container object that C's structs used to handle fine. And then people who are told you should never access attributes directly instead implement a bunch of getVar() and setVar() methods, thinking they're doing good OOP...
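
    To illustrate that last point with a purely hypothetical example, the result is usually a class that is really just a C struct wearing an OOP costume, because the getters and setters add no protection or behaviour at all:

        // Hypothetical illustration: every field is still effectively public,
        // just with more ceremony than a plain struct.
        public class Point {
            private int x;
            private int y;

            public int getX() { return x; }
            public void setX(int x) { this.x = x; }
            public int getY() { return y; }
            public void setY(int y) { this.y = y; }
        }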

    Finally, to reiterate something I read a while back: People who are afraid of globals are usually afraid of girls, spiders, etc.

    OOP does a good job of aiding the global fears, but there's a time and place for them, and when they're not used it can make a project a lot more difficult. So I'd say the seemingly crappy software out there is a result of bad programmers trained to only use Java for a business aspect, and they just happen to use OOP. Beginner programming has become easier, and as a result you get a lot of people who really have no idea what they're doing who are flooding the market.

    Comment by Anonymous Coward — 21 February 2009 @ 11:44

  12. Certainly neither OOP nor FP are silver bullets. However, FP can potentially help the compiler make fast, ugly optimizations to highly readable, elegant code; OOP can't even do that.

    OOP "helps" people, but you're absolutely right – good coders write good code, bad coders write bad code, whether that code be OOP or not. It's the most overblown concept of our time.

    Comment by pazuzuzu — 21 February 2009 @ 12:02

  13. Anonymous Coward:

    I think you're right, to some extent at least, about Java. I've been known to call it today's COBOL.

    Languages like Python and Scheme work loads better for using different paradigms, and I believe that what paradigm you settle on (or mix with) depends on the job at hand.

    . . . and Ruby. In fact, there's a Kent Beck quote that seems to bear on this:

    I always knew one day Smalltalk would replace Java. I just didn't know it would be called Ruby.

    Couple that with the way Ruby draws inspiration from a number of other far better designed languages than Java (and people keep comparing it to some of them, like Lisp), and it's no wonder I like Ruby as much as I do — or that I'm far from alone in that assessment.

    Beginner programming has become easier, and as a result you get a lot of people who really have no idea what they're doing who are flooding the market.

    I think the bigger danger is definitely people who do know what they're doing, but not really why they're doing it.

    pazuzuzu:

    However, FP can potentially help the compiler make fast, ugly optimizations to highly readable, elegant code; OOP can't even do that.

    Alas, I don't know enough about compiler design to be able to either agree or disagree with any certainty. I know of certain aspects of functional programming style that do what you say; I don't know of any specific principle of OOP style that prevents what you say, however.

    OOP "helps" people, but you're absolutely right – good coders write good code, bad coders write bad code, whether that code be OOP or not. It's the most overblown concept of our time.

    I think the saddest thing about it is that the point still needs to be made. Many just haven't heard it yet, because many others reject it outright for fear that it'll cut into their little kingdoms of OOP knowledge.

    Comment by apotheon — 21 February 2009 @ 12:24

  14. I'm working on a system now where the developers did the old UI trick of using the same JSP file for both the 'add' and 'edit' use cases. On the surface it looks like a good place for code reuse. Of course, over time the business rules for adding an entity and editing an entity diverged, and there are tons of if/else blocks in JSTL making the whole thing something of a mess. I think this is a small, easy-to-digest example of where code reuse goes wrong.
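
    A rough, hypothetical sketch of the shape of the problem (translated out of JSP/JSTL into plain Java for brevity): a single "reused" handler serving both use cases, where every rule that later diverges adds another branch.

        // Hypothetical sketch: one handler "reused" for add and edit.
        // Each business rule that differs between the two use cases becomes
        // another mode check scattered through the method.
        class EntityFormHandler {
            String render(boolean isEdit, String id, String name) {
                StringBuilder html = new StringBuilder();
                html.append(isEdit ? "<h1>Edit entity</h1>" : "<h1>Add entity</h1>");
                if (isEdit) {
                    html.append("<input type='hidden' name='id' value='").append(id).append("'/>");
                }
                html.append("<input name='name' value='").append(isEdit ? name : "").append("'/>");
                // ...and so on, one if/else per divergent rule
                return html.toString();
            }
        }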

    Comment by John — 21 February 2009 @ 12:34

  15. @Andrew: Yes, there are crappy programmers. There are also crappy designers. It has been my experience that systems get bloated and too complex because of poor design, not poor programming. No amount of good programmers can help a project with a bad design.

    @J.P. Hamilton: I disagree with your first sentence. Java, in particular, forces you to know something about OOP before writing the first line of code. Just writing the ubiquitous Hello World program in Java involves creating a class, enforcing the whole "everything is an object" paradigm.

    OOP is not the holy grail of software development, but it was certainly marketed that way. It was hyped much like Structured "GOTO is Evil" Programming was then, and Design Patterns, Unified Process, Design by Contract, Agile Programming, Extreme Programming and Functional Programming were afterwards. It is much like religion in that regard.

    @once again...: What was your point?

    Comment by Buck — 21 February 2009 @ 01:09

  16. I'm appalled at your lack of understanding of how human society advances. Rather than just saying "people" are the cause of the problem, releasing simple code and TEACHING them how to use and write correct software would be extremely helpful. Greed and egotistical programmers cause many of the problems with understanding code today. Some programmers, when asked a simple question by someone learning, often let their ego show by saying these simple little douchebag words: "Do it yourself." How will society get anywhere if we have to re-invent the wheel every time we want to make a program?

    Simply teach us how, stop fucking whining.

    ~Jonathan

    Comment by Jonathan — 21 February 2009 @ 01:41

  17. Ahh, what a vast subject. Interestingly, the most helpful bit is in the first paragraph:

    Without actually doing anything, applications are becoming too complex to understand, build, and maintain.

    I came to this precise conclusion myself in 1987, developing business applications in C (not C++) on DOS CUI systems emulating GUI metaphors that were only then coming into vogue. It seems the more things change...

    Anyway, after nearly 30 years in this game, I now know that the amount of complexity human beings can handle is more or less a constant which cannot be exceeded, like the speed of light. Likewise, business systems, with a few rare exceptions, are almost all reducible to basic CompSci 101 patterns, CRUD being chief amongst them. Everything else is left up to us. And this is where we lost our way.

    The Great Frameworks as I like to call them are doomed. We are no longer producing enough professional programmers to continue to maintain this edifice. J2EE etc was originally touted as a way to enable junior programmers to be used in place of senior programmers. Well now even the Junior Programmers are struggling to use it effectively. It has the smell of death about it.

    What comes next will be a return to 4GLs and completely sealed visual environments that allow graphics designers and not programmers to combine and recombine program elements, leaving the programmers to deal with code and exception cases. I am very surprised that this has not already happened, excepting that all of the Case Tool Makers of old have been killed off. A new generation of well-funded startups will arise offering Big Corporates a way out of Framework Hell using very expensive tools that even a marketing bunny could use to produce high quality business systems. And so we go, back to the future.

    Comment by Linda — 21 February 2009 @ 02:10

  18. apotheon:

    I almost mentioned Ruby. =P But then I figured I'd have to mention several others too, so I stopped. I really admire Ruby, but I happen to like Python loads better. To avoid a language war between good languages I won't enumerate reasons why.

    Jonathan:

    Angry... feels like a troll, but I've got some free time. Re-inventing the wheel is a useful exercise in learning, especially if the requirement is to re-invent it in the "correct" way. In creating, however, it's best to focus on your novel ideas and use what's already been done to help. (Hence the beauty of free software.) When I teach students programming, I stress that many of the examples are contrived (I try to balance with some practical stuff, though); you just can't start with the source code for Tremulous and expect anyone to understand anything. Just like in math, you don't start with multivariable calculus.

    Comment by Anonymous Coward — 21 February 2009 @ 04:50

  19. "Conceptual integrity is the most important principle for developing a software system," Fred Brooks told us in the Mythical Man-Month. What you're describing is a system without a real architecture. Components have been modularized for the sake of parallel development, but no real architecture has been used to bring order to the steaming piles of code produced in this manner. No architecture always means a big ball of mud or overly complex and unwieldy designs. This is obvious and has been known for a long time in the software industry, but that doesn't matter because not enough people care enough to improve on the status quo.

    Comment by JS — 21 February 2009 @ 05:24

  20. you really need to find a more positive and constructive way to react to criticism; get over yourself.

    Comment by cmon now — 22 February 2009 @ 02:22

  21. "Why does it seem like everyone thinks (s)he knows this stuff, when it's pretty clear that they all fail to grasp it?" — Because whoever does not project the appearance of knowing this stuff, gets shoved out of the company.

    "You will re-invent the wheel. You will invent wheels that don't roll." -Physics professor Tom Tombrello, speaking to freshman physics students

    I would like very much to know who these "Case Tool Makers of old" were.

    Comment by Rod — 22 February 2009 @ 02:49

  22. What does OO programming mean? (sounds like a good interview question)

    I read one good definition – it means three things:

    1. Encapsulation, and the adding of functions to structs to make classes, and the idea of an object as an instance of a class (object is to class as variable is to datatype)

    2. Inheritance, plus overriding ("class Child is just like Parent but with specific changes") [note that if we have inheritance without overriding, then Parent and Child are identical, so Child is redundant] — so Child::f() can be different from Parent::f()

    3. Polymorphism — once we allow pointers to objects that could be of any class, this implies that a pointer P to an object of class Parent could really be pointing to an object of class Child1, Child2, Child3, ..., and then P->f() could refer, not to Parent::f(), but to Child1::f(), Child2::f(), Child3::f(), ..., thus P->f() has many ("poly") forms ("morph") [or, more accurately, many substances for one form]

    Each of these three depends on the preceding ones, so we can have (1) without having (2) but not vice versa.
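
    Sketched in Java (hypothetical classes, purely to make the three points concrete):

        // (1) Encapsulation: data plus the functions that operate on it,
        //     with an object as an instance of the class.
        class Parent {
            protected String name = "parent";
            String f() { return "Parent.f for " + name; }
        }

        // (2) Inheritance plus overriding: Child is just like Parent,
        //     except where it says otherwise.
        class Child1 extends Parent {
            @Override String f() { return "Child1.f for " + name; }
        }

        class Child2 extends Parent {
            @Override String f() { return "Child2.f for " + name; }
        }

        // (3) Polymorphism: a Parent reference may really point at a Child,
        //     so p.f() takes many forms depending on the object's actual class.
        class Demo {
            public static void main(String[] args) {
                Parent[] objects = { new Parent(), new Child1(), new Child2() };
                for (Parent p : objects) {
                    System.out.println(p.f());   // Parent.f, Child1.f, Child2.f
                }
            }
        }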

    Comment by Rod — 22 February 2009 @ 03:14

  23. How about this for snarkiness:

    f(a,b,c) is not object-oriented. However, a.f(b,c) is object-oriented.

    Comment by Rod — 22 February 2009 @ 03:17

  24. Also see Why Object Oriented Programming (OOP) is not always good

    Comment by development — 22 February 2009 @ 03:30

  25. The procedural f(a,b,c) could become the object-oriented a.f(b,c) or it could become b.f(a,c)

    it all depends on which class you want to put f in.

    There is an idea based on table inversion (row-major vs. column-major) that I would like to convey.

    The procedural switch(i){case 0:f0();break; case 1:f1();break; case 2:f2();break;}

    could become the object-oriented i->f()

    and somehow I don't remember, this is a table inversion. Multiple switch statements would be the other table dimension. The functions are the table members. So really

    switch(i0){case 0:f00();break; case 1:f01();break; case 2:f02();break;}
    switch(i1){case 0:f10();break; case 1:f11();break; case 2:f12();break;}
    switch(i2){case 0:f20();break; case 1:f21();break; case 2:f22();break;}

    becomes

    i[I]->f[I]J

    except that f[I] are all polymorphed together so they all inherit from the same parent function.
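
    For what it's worth, here is a hypothetical Java sketch of that inversion (shapes chosen arbitrarily): each switch statement becomes a method, each case becomes a class, and the dispatch i->f() happens implicitly.

        // Procedural form: one switch per operation, indexed by a type tag.
        class Procedural {
            static final int CIRCLE = 0, SQUARE = 1;

            static double area(int tag, double size) {
                switch (tag) {
                    case CIRCLE: return Math.PI * size * size;
                    case SQUARE: return size * size;
                    default: throw new IllegalArgumentException("unknown tag " + tag);
                }
            }

            static double perimeter(int tag, double size) {
                switch (tag) {
                    case CIRCLE: return 2 * Math.PI * size;
                    case SQUARE: return 4 * size;
                    default: throw new IllegalArgumentException("unknown tag " + tag);
                }
            }
        }

        // Object-oriented form: the table is inverted. Each case becomes a
        // class, each switch statement becomes a method, and the runtime
        // picks the right cell of the table for us.
        interface Shape {
            double area();
            double perimeter();
        }

        class Circle implements Shape {
            private final double radius;
            Circle(double radius) { this.radius = radius; }
            public double area() { return Math.PI * radius * radius; }
            public double perimeter() { return 2 * Math.PI * radius; }
        }

        class Square implements Shape {
            private final double side;
            Square(double side) { this.side = side; }
            public double area() { return side * side; }
            public double perimeter() { return 4 * side; }
        }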

    Comment by Rod — 22 February 2009 @ 03:36

  26. Hej, Chad.

    Congratulations on having written an interesting article and thank you for it.

    I hope you have a moment for some comment.

    To begin with, you quote Travis's writing, 'My point is that the architectural complexity of these applications inhibit a person's understanding of the code.' This sounds a little like saying that the complexity of the blueprint of a building inhibits a builder's understanding of that building.

    Travis is, presumably, bemoaning unnecessary complexity, rather than an architecture's necessary complexity; an architecture must, after all, entail some complexity in specifying how a program is to be structured, how quickly external events must be serviced, how data are to be persisted, how load-modules are to be deployed, etc. If this is his point (I did not read his article), then it is a fair one, but I'm a little uncertain as to how this melds with your thrust, ' … that software is getting bigger and more complicated without actually separating concerns in as meaningful a way as it could.' Are you saying that architectures cause a worse separation of concerns than would be the case if they did not exist?

    To the next line of the quote, 'Without actually doing anything, applications are becoming too complex to understand, build, and maintain.' If true, this is surely a shocker. If we side-step the possible misplaced modifier, Travis appears to be working with applications that don't do anything. I've never seen an application that doesn't do anything, that does perfectly nothing.

    What might that look like? I presume it wouldn't look like anything, for if I can see it, it arguably does something: it presents itself to my senses. Applications that do nothing must be completely invisible. They don't appear in GUIs. They reveal themselves not even to task manager snapshots of our running systems. They just sit there, busily doing nothing. Who is writing these applications?

    And whoever is writing them, they seem to be doing a poor job, for these applications that do nothing are becoming too complex to understand (building and maintaining them is perhaps a lesser issue). This is an extraordinary indictment of our IT industry. We note that they are not yet too complicated to understand, they're just on their way to becoming too complicated to understand. This conjures images of genuinely concerned individuals, fretting, sweating, somehow fully aware of the furious juggernaut of incomprehensibility thundering towards them as they produce more and more perfectly useless code, yet powerless to leap out of its way.

    Assaf continues this IT-scandal exposé by revealing that research in the field of software uselessness has advanced to a point of some sophistication: some of these void-technologists are even building layers that, ' Clearly … cancel each other out.'

    Or perhaps they were both exaggerating.

    And if this was exaggeration, then his point must be taken somewhat lightly, which is a shame, as then we're not quite sure what the point was.

    Chad, while not guilty of the above, you yourself perhaps have some points to defend.

    You say that, ' … people are the proximate cause – people who don't know what the hell they're doing. These people accept all the marketing for OOP, which states that it'll make your code more modular and more stable and just generally better. It won't.' Correct me if I'm wrong, but you seem to be dividing agency into human factors and non-human factors, and particularly that human factors contribute most to software's ballooning complexity. Yet in just the paragraph before you claim, ' … certain older development philosophies (like "the Unix philosophy") that provided significantly better modularity and significantly reduced complexity.' Doesn't this credit non-human factors (the philosophy) with success rather than the very human factor of the Unix programmers themselves?

    You then proceed to assert that OOP, ' … sure does make spaghetti code a lot more attractive.'

    I'm afraid I don't have a definition of what spaghetti code is, so let's take Wiki's:

    Spaghetti code is a pejorative term for source code which has a complex and tangled control structure, especially one using many GOTOs, exceptions, threads, or other "unstructured" branching constructs. It is named such because program flow tends to look like a bowl of spaghetti, i.e. twisted and tangled. Spaghetti code can be caused by several factors, including inexperienced programmers and a complex program which has been continuously modified over a long life cycle. Structured programming greatly decreased the incidence of spaghetti code, and is widely regarded as one of the most important advances in programming history.

    Given that structured programming mitigates spaghetti code, can we infer from your statement that OOP makes structured programming less attractive than some non-OOP programming technique? Could you explain how?

    Also, and again correct me if I'm wrong, allow me to tease out the overall flow of your article. You suggest that OOP's encapsulation enables large software development but this development can create such large products that the individual pieces themselves are of poor quality. You then suggest that software is getting bigger and more complicated without actually separating concerns in as meaningful a way as it could, and finally that people who don't know what the hell they're doing are the cause of this. I'm curious to understand whether you're implying that a causal thread runs through these three points.

    You make your third point admirably clearly: people are the cause of software's becoming bigger and more complicated without sufficiently meaningful separation of concerns. If we turn to the second point, are you saying that software is getting bigger and more complicated because concerns are not separated in as meaningful a way as they could? I will presume you are suggesting causality here, that insufficient separation of concerns causes unnecessary complexity.

    The logical link between the first and second point raises the question: are you suggesting that encapsulation itself leads to a less meaningful separation of concerns than some other programming technique? If so, can you recommend a superior approach?

    Ed Kirwan

    Comment by Ed Kirwan — 22 February 2009 @ 04:11

  27. cmon now:

    you really need to find a more positive and constructive way to react to criticism; get over yourself.

    To whom, exactly, was that directed?

    Rod:

    "Why does it seem like everyone thinks (s)he knows this stuff, when it's pretty clear that they all fail to grasp it?" — Because whoever does not project the appearance of knowing this stuff, gets shoved out of the company.

    Good point.

    How about this for snarkiness:

    f(a,b,c) is not object-oriented. However, a.f(b,c) is object-oriented.

    That sounds about right, if snark is where you're going.

    I'd say that, in practical terms, "object oriented programming" is the inevitable perversion of what the inventor of the term intended — perversion, because what he really intended was more like "message oriented programming", and inevitable, because this is really what we should have expected as a result of having called it object oriented rather than message oriented.

    development:

    Also see Why Object Oriented Programming (OOP) is not always good

    Thanks. I'll give that a look.

    Ed Kirwan:

    Travis is, presumably, bemoaning unnecessary complexity, rather than an architecture's necessary complexity; an architecture must, after all, entail some complexity in specifying how a program is to be structured, how quickly external events must be serviced, how data are to be persisted, how load-modules are to be deployed, etc.

    Yes, I very clearly got the sense he bemoaned unnecessary complexity. It is always possible for a building to be over-architected just as a program can be over-architected. With software, though, we have far greater practical potential for that, in part because there is much more focus in the software programming world on inventing fantastic new development methodologies that actually enable this kind of sprawling complexity.

    The result of all this is that we often see unnecessary complexity in software design to a degree that we don't see in building design — and that's what he was talking about.

    Are you saying that architectures cause a worse separation of concerns than would be the case if they did not exist?

    What I'm saying is that bad software design (i.e., software "architecture") via misapplied object oriented programming principles is making things worse.

    If we side-step the possible misplaced modifier, Travis appears to be working with applications that don't do anything. I've never seen an application that doesn't do anything, that does perfectly nothing.

    I took his statement to mean "Without doing anything new".

    Or perhaps they were both exaggerating.

    I get the impression you're either engaging in satire or over-thinking what was said in a manner similar to the way a lot of software is over-architected.

    Correct me if I'm wrong, but you seem to be dividing agency into human factors and non-human factors, and particularly that human factors contribute most to software's ballooning complexity.

    I skipped over the step where I might have made it excruciatingly clear that the problem — as well as the solution — is in how design principles are applied, given two sets of design principles that aren't just completely wrong-headed, rather than in the design principles themselves. Furthermore, you seem to have overlooked the obvious question of whether more modularity and less complexity is always good. While it is usually good, from a starting point of the way things are typically done, there are limits beyond which one reaches a point of diminishing returns, if not actual counterproductive overapplication of what was previously a good thing.

    . . . but as much as you're apparently overthinking all this, I rather suspect you're capable of realizing that yourself, and are being intentionally obtuse to try to undermine what others (myself included) have said.

    Given that structured programming mitigates spaghetti code, can we infer from your statement that OOP makes structured programming less attractive than some non-OOP programming technique?

    You can infer that if you want to — but I don't think that's really a reasonable inference.

    Object oriented design is, in some ways, basically a form of "structured" programming. It allows one to implement separation of concerns more "easily" (for some definition of "easy") than straightforward imperative, weakly procedural design. That gives your program a greater appearance of order at first glance, though if misapplied it won't help things as much as one might think because:

    1. Each of those implementations of separated concerns may contain a smaller discrete mess of spaghetti.
    2. The whole collection of implementations of separated concerns may be tied together in a manner that is reminiscent of spaghetti.
    3. If the concerns are separated in a manner that does not fit into a consistent, conceptual theme for the design of the system, tighter coupling and more complex inter-object relationships are unavoidable.

    The spaghetti code becomes more "attractive" because one can recognize that there was a lot of effort put into designing class hierarchies and there is a rigidly specified structure to the program, but that doesn't mean it's actually conceptually clear. If it's not conceptually clear, you basically have spaghetti code again — just a different recipe for spaghetti.

    I guess maybe it's actually linguine code.

    I'm curious to understand whether you're implying that a causal thread runs through these three points.

    I might call it more of a causal chain, if I was inclined to go this metaphorical route — but I think such a metaphor breaks down if examined too closely.

    are you saying that software is getting bigger and more complicated because concerns are not separated in as meaningful a way as they could? I will presume you are suggesting causality here, that insufficient separation of concerns causes unnecessary complexity.

    It is one causal factor, within the context of object oriented design, at least.

    The logical link between the first and second point raises the question: are you suggesting that encapsulation itself leads to a less meaningful separation of concerns than some other programming technique?

    No. I'm suggesting that people who employ encapsulating programming techniques poorly produce less meaningful separation of concerns than those who do so well — and that, as a style of encapsulating programming technique that everybody thinks (s)he understands well, the object oriented approach to programming is often abused toward that end.

    Comment by apotheon — 22 February 2009 @ 01:16

  28. Rod: The Case Tool Makers of Old I was referring to were the people who made tools like Excelerator in the late 80's and early 90's. Here is a link.

    editor's note: I'm not sure what you initially intended to do with that link, but as posted it ended up kind of broken. I hope my edit is suitable.

    In the late 80's and early 90's, we were all very serious about being Software Engineers, and along with that came our single-minded focus on standards, rigor, process, replicability and metrification. As the Software Development Manager of a very large government enterprise, my aim was to dramatically reduce the costs of development. These costs included not just the original investment in development, but all costs associated with it over its entire working life.

    With this aim in mind, we became very, very efficient as developers, despite the fact that we still had a lot to learn about the science. The quality of the software we produced was very high, as demonstrated by the fact that much of it is still in use. This is especially interesting given the fact that we did not have any of the things we now regard as essential to software development, i.e. the Internet, or the vast array of fundamental libraries like Apache Commons or any of The Great Frameworks we have been talking about. We were still inventing all of that stuff.

    So, what happened? What happened was Windows. Suddenly, our users didn't care about high speed data entry optimizations, or even reliability and usability – if it didn't look like Windows, they didn't want it. Having Windows on your PC was a status symbol – you were important. In a flash, Windows was everywhere throughout the whole organization, and you couldn't get budget approval on an industrial strength DOS/CUI application to save your soul. The market for CASE Tools and CUI 4GLs vanished, because no one cared about the costs of development any more, so long as it worked with Windows.

    Of course that's an oversimplification, but all of us had to learn how to program with Windows as fast as possible, and an entire world of basic knowledge and professional best practice got left behind in the mass exodus. Still, human beings have not changed, and once again we are dealing with the problem of complexity. Back then, we solved the problem of complexity using standards, tools and methodologies specifically designed to dampen down and resolve that complexity. Today, by contrast, we are compounding complexity through unrestrained promotion of diversity, by multiplying tools, languages and frameworks, and dare I say it, by being completely uncritical of amateurism.

    Diversity and choice are very good things in their place, but diversity can easily collapse into chaos, and too many choices can lead to intractable confusion and dissatisfaction. As we are now discovering.

    Comment by Linda — 22 February 2009 @ 02:11

  29. [...] seems that, when one discusses software architecture at all, someone inevitably brings up building architecture. The two are compared, and conclusions [...]

    Pingback by Chad Perrin: SOB » Software design is not architecture. — 22 February 2009 @ 04:00

  30. Thanks ed :)

    Comment by Linda — 22 February 2009 @ 06:02

  31. Ah aimz ta pleeze.

    Comment by apotheon — 22 February 2009 @ 06:57

  32. [...] holder. I love the concluding remark. Spot on, Chad: OOP didn’t do this to us; we did it to ourselves. OOP just held our [...]

    Pingback by Labnotes » Rounded Corners 226 – OOP didn’t do this … OOP just held our hats — 27 February 2009 @ 10:42

  33. [...] blogger Chad Perrin wrote on his personal blog some well thought out, not-so-kind words to say about object-oriented programming (OOP). I feel roughly the same way, and I know a lot of the readers here do [...]

    Pingback by Programming news: Code Contracts for .NET, Appcelerator's latest Titanium release, the dangers of OOP | Programming and Development | TechRepublic.com — 2 March 2009 @ 05:48

  34. Excellent point. I will just quote Spiderman: "With great power comes great responsibility." Funny but it's true.

    Comment by Vukoje — 3 March 2009 @ 04:07

  35. [...] First Tweet Feb 21, 2009 addos addos Why OOP sucks.. view retweet [...]

    Pingback by Twitter Trackbacks for Chad Perrin: SOB » Did OOP do this to us? [apotheon.org] on Topsy.com — 28 August 2009 @ 10:37

  36. The key to software systems development is to use a programming language that is 1. easy to write, 2. easy to read, 3. easy to maintain.

    OOP fails on all 3 points. OOP doesn't fail a little, it fails a lot.

    Code reuse is way down on the list as being important; speed should not even be a consideration because this is more than ever processor/hardware dependent.

    Within 10 years OOP will be phased out by the next big IT hype job. It will likely be all drag and drop. The best developers will be those who got their first set of Legos at 6 months old.

    Comment by roberto — 5 February 2010 @ 12:41

  37. Years ago a developer would spend the bulk of their time solving a problem and then coding the solution. Now a developer solves the problem and spends the bulk of their time coding the solution. Then some poor bastard has to maintain the mess left behind. OOP sucks beyond belief.

    Genius is taking something complex and making it simple; stupidity is taking something simple and making it complex. Enter OOP languages such as C# and Java.

    Exposing a method; consuming a class; delegates, events, virtual methods, encapsulation, operator overloading, derived classes, base classes, constructors, destructors. How about kiss my ass with the terminology from demented digital geeks and speaking plain English to start with? I guess that would make this all too easy and boring.

    Comment by robertfo — 1 March 2010 @ 05:04

All original content Copyright Chad Perrin: Distributed under the terms of the Open Works License