Object oriented programming (OOP for short) is “a whole new way to program”, as compared with what came before. Wikipedia refers to it as a “paradigm”, and it is one, insofar as one can immerse one’s thought processes within the logical realm of programming. Anyone who has gotten into The Zone for more than a couple of hours and done a serious hacking run that was so long-lived and intense that it substantively interfered with day-to-day living (forgetting to eat, et cetera) should easily be able to identify with the notion of a perspective on programming as a “paradigm”.
It is claimed that it leads to greater code maintainability and flexibility and, insofar as the traditional corporate-culture manner of throwing code monkeys at a problem until it reaches marketability is concerned, that is 100% true. The protection and encapsulation that OOP provides, along with the orthodox emphasis on the rigidity of object interfaces, contributes to your code’s ability to withstand wave after wave of these legions of mediocre code monkeys. It is further claimed that OOP contributes to far greater ease of managing, developing, and planning large-scale software projects (think MS Office, for example), which is true insofar as OOP is contrasted with the more strictly procedural programming style familiar to users of GWBASIC, COBOL, and the first iterations of FORTRAN. This benefit is gained by providing a (theoretically) unified mechanism both for code reuse and for splitting large problems into a series of smaller problems that are more easily and quickly solved.
It is also claimed that OOP is easier to learn than earlier programming paradigms for people new to programming. The problem with this, as I see it, is that this claim has never been tested. I have yet to see a successful attempt to teach OOP from the very beginning. The world is full of examples of someone producing a tutorial for a given language, or for programming in general (but via a particular language), wherein the creator of the tutorial decides initially to teach programming entirely through OOP techniques, but ultimately abandons the idea and teaches oldskool procedural programming first, with OOP being something to add in later.
With some languages (Ruby springs to mind), I believe OOP could very easily be taught in this idealized manner, from word one, without falling back on non-OOP programming techniques to teach programming concepts. Part of the problem in executing such a task, though, is simply that for the most part programmers (even strong OOP proponents) tend to have to translate between “programming thinking” and “object oriented thinking”, to some degree. That arises, I think, from languages that are too obsessed with the infrastructure trappings of OOP but lack a pure OOP semantic structure (I’m looking at you, Java). If you want to do it “right”, you need to think in a language such as Ruby, or even Smalltalk. In other words, you have to first and foremost think about programming entirely in terms of this “new” paradigm, rather than think about creating objects and classes from non-object components. A house is an object; a 2×4 used to build it is also an object. Are an integer, a string literal, an arithmetic operator, and your program as a whole also objects in the language of your choice, and do you have to consciously shift your thought patterns to regard them as such?
There are those of us who didn’t start with OOP — who started with programming before all mid- to high-level languages in common use were OOP languages. My first exposure to programming was (sadly) to various incarnations of BASIC (which has, I’m afraid, stunted my growth a touch, though I’m overcoming that). As a result, OOP never came naturally to me, and still doesn’t. I’m reasonably sure it will at some point, but I’m still working on that. People like me need to be able to grok OOP, too. I tend to think that the same things that would work best for people who have never programmed before would also work best for those of us who started out in a relentlessly procedural programming paradigm. One would think this would mean that there should be a lot of excellent stuff out there for teaching OOP to both rank newbies and seasoned veterans of programming via other methodologies. The truth is that, as far as I’ve seen, instruction in OOP concepts and techniques is woefully piss-poor. Even the most basic terminology is typically presented in a manner that makes it appear to be deep magic, though it need not be. An example that springs to mind is that of explaining a term of much interest to experienced OO programmers: polymorphism.
As Sterling put it in his rumination on my question about inheritance:
Polymorphism means that if class B inherits from class A, it doesn’t have to inherit everything about class A – it can do some of the things that class A does differently.
I remember sitting down and reading, in hopes of grasping the point, no fewer than a dozen different explanations of the term “polymorphism” in an OOP context, all of which purported to explain the concept fully, clearly, and simply. It didn’t work. I asked people directly, and they too seemed to suffer from a lack of good explanations — they couldn’t define it very effectively, but they knew it when they saw it, or at least seemed to think they did. Eventually, I managed to build a mental model of what polymorphism was in OOP on my own, but I’ve always wondered if there was something I missed, because my understanding was much simpler than all those obtuse, obfuscating explanations seemed to suggest. Since seeing Sterling’s take, however, I’m going to stop worrying: I’ll assume he knows what he’s talking about, and that I do too, since his very simple and eminently understandable definition agrees with my understanding.
The simplicity of this, and the fact that its simplified explanation doesn’t disagree with anything in any of the more complex definitions I’ve seen, makes me wonder how much LSD most of these OOP gurus must have taken just prior to writing their “simple” definitions. For example, this is a brief explanation of polymorphism in object-oriented programming from Wikipedia:
polymorphism (object-oriented programming theory) is the ability of objects belonging to different types to respond to method calls of methods of the same name, each one according to an appropriate type-specific behaviour. The programmer (and the program) does not have to know the exact type of the object in advance, so this behavior can be implemented at run time (this is called late binding or dynamic binding).
Whiskey Tango Foxtrot, over? To be fair, there’s a “simple terms” version of the definition of polymorphism from that same page — the first sentence of the Wikipedia entry:
In simple terms, polymorphism lets you treat derived class members just like their parent class’s members.
Is that really as simple as they can get? Seriously, folks, it’s painfully complex for “simple terms”. (Note that the Wikipedia entry has been edited to offer Sterling’s simpler explanation as well as the more computer-science-ish pedantry quoted above.)
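For what it’s worth, Sterling’s one-sentence definition fits in a few lines of Ruby. Here’s a minimal sketch (the class names are mine, purely for illustration): class B inherits from class A but does one of the things class A does differently, and the caller never needs to know which one it has.

```ruby
# Class B (Dog) inherits from class A (Animal) but does one thing
# -- speak -- differently. Everything else is inherited unchanged.
class Animal
  def speak
    "some generic animal noise"
  end
end

class Dog < Animal
  def speak
    "woof"
  end
end

# The caller doesn't know or care which concrete class each object
# is; the right speak is picked at run time (the "late binding" the
# Wikipedia definition is straining to describe).
[Animal.new, Dog.new].each { |critter| puts critter.speak }
```

That’s the whole trick: the loop treats a `Dog` just like its parent class’s members, and the subclass’s version of `speak` wins.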
Polymorphism isn’t the only term so abused, but it’s the example that most stands out for me. Sterling also tackles definitions for inheritance and encapsulation in the same above-linked entry to Chip’s Quips, which are similarly elegant, clear, and helpful. He has commented on the “obscurist” proclivities of OO programmers the world over in their quest for rectitudinal orthodoxy, and as a result, though they may go on at great length about how simple OOP really is to learn and use, their explanations seem to prove the opposite.
Another little issue I have with the way people (meaning OO programmers) talk about OOP is the way they relate it to “real world” objects. For instance, one of the canonical means of demonstrating how OOP works is to create a “truck” class from which a “truck” object is instantiated. The explainer, in writing a tutorial or book, or in teaching a college course, will then proceed to utterly fail to relate the lessons of the example meaningfully to actual “real world” programming tasks. Part of the reason for this, I’m sure, is the simple fact that examples like “truck” generally suck, and don’t suit beginning-level programming tasks worth a damn. In fact, instructors and writers who provide such examples probably don’t themselves quite grasp how to relate the example to actual “real world” programming tasks, because the truth of the matter is that nobody in his right mind is going to go around instantiating “truck” objects in his code except in the most pathological of edge-cases.
Object-oriented programming is more about conceptual “nouns” than tangible “nouns”. “Real-world” objects like “truck” might be useful for getting a simple point across about how some OOP task is accomplished, but as an analogy for relating OOP concepts to “real world” programming — as a means of demonstrating how OOP tasks are actually useful for solving programming problems — it is hopelessly divorced from the real deal. Instead, the “real world” objects you need to be modeling with your OOP code are more like principles than property. Yes, absolutely model your program objects on “real world” objects, but make sure you’re using the correct objects. In a vehicle maintenance tracking program, for instance, one doesn’t actually want to track a wheel. Instead, one is best served by tracking something like brake and tire shop resource expenditures. Don’t look at physical things related to the program you’re writing: look for the goals of the program, and decompose that overarching set of goals into specific, atomic, indivisible characteristic sets. From these, create your objects. They’ll be “real world” objects modeled in your program objects, but they won’t be the sort of objects you’ll use for assembling a car. After all, in the process of assembling your car, you don’t understand how it’s done by thinking about the shape and weight of a wheel — you think about it in terms of what tasks need to be undertaken to attach the wheel to the axle.
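To make the vehicle maintenance example concrete, here is a rough Ruby sketch (all class and field names here are hypothetical, invented for illustration): instead of a `Wheel` class, the objects model the conceptual nouns the program actually cares about, namely maintenance expenditures and the log that aggregates them.

```ruby
# A conceptual noun, not a physical one: one shop expenditure,
# tagged with a category such as :tires or :brakes.
class MaintenanceExpenditure
  attr_reader :category, :cost

  def initialize(category, cost)
    @category = category
    @cost = cost
  end
end

# The goal of the program -- tracking resource expenditures --
# decomposed into a small, atomic object.
class MaintenanceLog
  def initialize
    @expenditures = []
  end

  def record(category, cost)
    @expenditures << MaintenanceExpenditure.new(category, cost)
  end

  # Total spent on one category, e.g. everything tire-related.
  def total_for(category)
    @expenditures.select { |e| e.category == category }
                 .sum(&:cost)
  end
end

log = MaintenanceLog.new
log.record(:tires, 400)
log.record(:brakes, 250)
log.record(:tires, 80)
puts log.total_for(:tires)  # 480
```

Nothing here knows the shape or weight of a wheel; the objects are the program’s goals, decomposed.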
Ultimately, I think the term “object” is the wrong term for what we’re using if we want to relate what we’re doing while programming to the “real world” tasks at hand. The term “actor”, as used in the term “actor model”, is more appropriate — though perhaps a little limited in its intuitive metaphorical familiarity for the average programmer. Regardless, “object” is the term we have, and the term with which we’re stuck for the foreseeable future, despite the fact that Alan Kay himself has said “I’m sorry that I long ago coined the term ‘objects’ for this topic because it gets many people to focus on the lesser idea.” I’m going to keep this niggling little issue with the term “object” quite in mind as I continue to think about the problem of how one might make use of Perl’s implicit closure-based object model.
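For readers who haven’t run into the closure-based approach before, here is a rough sketch of the idea, rendered in Ruby rather than Perl (the names are hypothetical, and this is an analogy for the technique, not Perl’s actual syntax): the “object” is nothing but a dispatch closure over private state, with no class anywhere in sight.

```ruby
# A closure-based "object": make_counter returns a lambda that
# dispatches messages to inner closures, all of which share the
# captured local variable count. No class is defined at all.
def make_counter(start = 0)
  count = start  # private state, reachable only through the closures below

  dispatch = {
    increment: -> { count += 1 },
    value:     -> { count }
  }

  # The returned lambda plays the role of the object.
  ->(message, *args) { dispatch.fetch(message).call(*args) }
end

counter = make_counter
counter.call(:increment)
counter.call(:increment)
puts counter.call(:value)  # 2
```

The encapsulation here is airtight in a way class-based instance variables often aren’t: `count` is a captured local, so nothing outside the closures can touch it, which is much of the appeal of the Perl approach mentioned above.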