A few days ago, in I learn something new every day — this time about Python and Ruby, I posted on the subject of how a core language’s limitation to immutable strings can lead to limitations in how certain operations may be performed within that language. In this case, “immutable strings” was just a stand-in for any “best practices” attempt to purify the way a language works. This doesn’t mean such attempts are necessarily bad, of course — just that there’s always a trade-off when you give up flexibility for some form of “correctness”.
I was also motivated to address the subject because it was an interesting case of a common assumption about performance being violated, in part to help make the point that in many instances choosing a tool based on its performance characteristics is not only a case of premature optimization but also subject to what pharmacists call a “paradoxical reaction”. In other words, choosing a tool for performance purposes can sometimes end up hurting performance more than it helps.
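As a concrete illustration of the kind of difference at stake, here is a rough benchmark sketch (absolute timings will vary by machine and Ruby version): building a string with += copies the accumulated string on every iteration, while << appends to the same object in place.

```ruby
require 'benchmark'

n = 20_000

# += builds a brand-new String on each pass, copying everything accumulated so far
copy_time = Benchmark.realtime do
  s = String.new
  n.times { s += "x" }
end

# << appends onto the same String object on each pass
append_time = Benchmark.realtime do
  s = String.new
  n.times { s << "x" }
end

puts format("+= : %.4fs", copy_time)
puts format("<< : %.4fs", append_time)
```

On any recent Ruby, the << loop should finish dramatically faster, because the += loop does an amount of copying that grows quadratically with the length of the result.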
One of the responses I received was from Justin James. Unlike many responses, he didn’t immediately start trying to prove my example “wrong” by addressing subject areas that are irrelevant, orthogonal, or at best tangential to my points. Instead, he brought up the interesting subject of operation optimization (as contrasted with work-arounds or just suffering with a slow operation). I’ve decided to address some of his points directly here, but before I start quoting him I’ll provide a general overview of his opinion on the matter, as I read it:
He finds that offering multiple operator choices, or reimplementing operators with different algorithms at the cost of programmer time spent building scaffolding, are inferior approaches to operator optimization compared with “smart” compiler optimization or simply using the right algorithm in the first place when implementing the language. In general, I agree. There are (as always) caveats.
To expect the run of the mill programmer to know the internal differences between “<<” and “+=”, or to even expect them to remember thousands of little rules like “use ‘<<’ instead of ‘+=’ under these circumstances” is ridiculous.
I agree. That’s why all you should have to remember is the difference between functional operators and operators with side-effects. See, the += operator is to some degree effectively a functional operator: it’s an infix operator that takes two inputs and returns a value. It’s not a “purely” functional operator, of course, because it reuses the label on the left-hand side of the expression, but it doesn’t reuse the object that label previously “contained”. The << operator, on the other hand, does an in-place modification of the object.
Knowing that, and knowing something about the craft of programming in general, “use ‘<<’ instead of ‘+=’ under these circumstances” becomes a deduction based on expertise rather than a rote-memorized rule.
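The distinction is easy to check directly in Ruby. A minimal sketch (object_id simply identifies which String object a variable currently refers to):

```ruby
# String.new keeps frozen-string-literal settings from interfering
# with the in-place append below.
s = String.new("foo")
before = s.object_id

s += "bar"                         # builds a new String, rebinds the label `s`
plus_makes_new = (s.object_id != before)

t = String.new("foo")
before = t.object_id

t << "bar"                         # appends to the existing String in place
shovel_reuses = (t.object_id == before)

puts plus_makes_new                # += produced a fresh object
puts shovel_reuses                 # << mutated the original object
```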
The mere mortals who make up the bulk of the population who become programmers are just not built this way.
Alas, the bulk of the programmer population aren’t worth their paychecks. The way they’re built is at great expense by shoddy “carpenters” at Blub U, where the only language allowed is Java. They don’t even know what a functional programming paradigm really implies in terms of programming practice, so of course they shouldn’t be expected to know the difference between << and +=. That would require a passion for learning, an education in something other than “industry best practices”, and some critical thinking skills.
using an object instead of an operator makes it much less likely to be used!
To clarify, in the .NET example you provided, you seem to be saying that having to build a class and instantiate an object where something more like a simple operator should suffice greatly reduces the likelihood that people will use the “better” means of achieving equivalent results. I agree: great, heaping masses of scaffold code used to prop up the simplest operations are just a bad idea in general. I know you’re not speaking of objects per se, however, because everything you see in Ruby that looks like an operator is in fact a message sent to one of its apparent “operands” — an object.
This is the same reason why people use regexes in Perl constantly and never in OO languages (and halfway between “constantly” and “never” in PHP, where it seems to be a function call), even though the regex pattern syntax itself is identical… doing it via object is a hassle and doing it through an operator is a cinch.
There are exceptions to this. Some of the more dogmatically designed OO languages definitely seem to revel in the act of throwing hurdles in the way of simple, straightforward use of regexen, of course. On the other hand, object oriented regex syntax in Ruby is every bit as easy to use as Perl’s regex syntax, in my opinion. The fact that Java seems hell-bent on dissuading you from ever leveraging the power of regexen — and even Python forces library calls before regexen are made available — does support your point in general, though.
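For instance, a Ruby match needs no scaffolding beyond the pattern itself. A small sketch, with made-up sample text:

```ruby
line = "Error: disk full at /var/log"

# Perl-style inline match: =~ returns the match position (or nil),
# and the usual $1/$2 capture variables are set on success.
if line =~ /Error: (.+) at (\S+)/
  detail   = $1
  location = $2
end

# The same match as an explicit MatchData object -- still one line,
# no class definitions or compilation step required.
m = /Error: (.+) at (\S+)/.match(line)
puts m[1] if m
```

Both forms work on regex and string literals directly, which is much closer to Perl’s ergonomics than to Java’s Pattern/Matcher class pair.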
In any event, I feel that this is the exact type of thing that optimizing compilers are for.
Optimizing operation algorithms is certainly a job for compilers, VMs, and even interpreters, as much as possible. Still, there are cases where there is a distinct need to do something one way or the other, in a manner where a compiler is unlikely to be able to automagically guess your intent. In such cases, it can be handy to have options such as those presented by Ruby’s