Chad Perrin: SOB

26 August 2009

Is your solution perfect?

Filed under: Cognition,Liberty — apotheon @ 02:14

The question to ask about a libertarian solution, when considering whether to apply it in the real world instead of the authoritarian solution, is not “Is this solution perfect?”

The question you should ask is, instead, “Is the authoritarian solution worse?”

23 August 2009

If I Was Better

Filed under: Cognition,Geek — apotheon @ 08:21

I suspect that if I was a better programmer:

  1. I would use more recursion in Perl.

  2. I would use more iteration in Scheme.

In April 2008, under the title recursion confessional, I admitted that my mastery of the Force (er, recursion) was less than complete (to put it mildly). This is changing as I work more with Scheme, fighting through the exercises in SICP.

I suspect this will lead to a much more complete and natural understanding of recursion in the context of programming for “real world” purposes. For reasons pointed out in recursion confessional, however, chances are good that I still won’t use it very much for my everyday programming needs for a while, since most of my programming is still likely to involve Perl and Ruby (and, unfortunately, some PHP when I can’t avoid it). The way Scheme so readily lends itself to recursive solutions helps tremendously in making recursion feel natural, rather than like an academic chore.

In fact, I find myself in what seems like a pretty odd position with regard to my programming proclivities right now: when I hack out a solution to a repetitious programming problem in Perl or Ruby, the obvious answer is iterative (especially given Ruby’s iterator constructs); when I tackle a repetitious programming problem in Scheme, however, I come up with a recursive solution first and most naturally. The oddest part of this situation, though, is the fact that when I’m writing code in Scheme, once I’ve come up with the obvious recursive solution, I have difficulty even formulating an iterative solution.

Part of the discussion of recursion in recursion confessional centered on Dan Kegel’s How To Get Hired — What CS Students Need To Know. One of the three key competencies he identified for programmers — things that should be high on the list of skills to test as an interviewer — is practical recursion. As I said when I wrote recursion confessional, I probably wouldn’t have passed his interview at the time. Now, I think I might, if that’s all that was really tested (the ability to make practical use of recursion to solve real programming problems).

The problem that still lingers appears to be that I find it difficult, for the moment, to use recursion in a programming context that is designed more with iteration in mind, and to use iteration in a programming context that is designed more with recursion in mind. When I say “use” here, I mean everything from recognizing that a technique is useful for solving a particular problem to figuring out how to actually formulate a solution with it.

I expect that to change as I work more with Scheme, and as I do more “real world” coding in Perl and Ruby after having encountered recursion in the context of Scheme. It just takes practice, and (as I mentioned in recursion confessional) I’m a fast learner. I just felt like lamenting the fact that I utterly failed at coming up with an iterative solution to a problem in SICP after having already solved it recursively. I suppose I could have taken that as a good sign, though, since the solution that came to me quite naturally had a far more elegant form than the iterative solution — but giving myself credit for that would be cheating.

Anyway . . . all of this is why I suspect I’d use more recursion in Perl and more iteration in Scheme if I was a better programmer. I get a free pass for Ruby, though, because it (version 1.8.x) doesn’t do tail-call optimization and doesn’t appear to have anything up to the standards of Perl’s Memoize module. As such, it’s just a bad idea to do much recursion in Ruby anyway. Thank goodness for Ruby’s do-end iterators.

addendum: Posts about code should usually have code in them. The following are the recursive and iterative solutions to an SICP exercise that demonstrate my above point. In the case of the exercise for which these functions serve as solutions, the recursive solution just poured through my fingers into the keyboard. Meanwhile, I completely failed to see the iterative solution until I got away from the problem for a while and looked at other iterative solutions to programming problems in Scheme (such as an iterative Fibonacci function).

recursive
(define f
  (lambda (n)
    (if (< n 3)
        n
        (+ (f (- n 1))
           (* 2 (f (- n 2)))
           (* 3 (f (- n 3)))))))
iterative
(define f
  (lambda (n)
    (define fi
      (lambda (a b c i)
        (if (= i 0)
            c
            (fi (+ a (* 2 b) (* 3 c))
                a
                b
                (- i 1)))))
    (fi 2 1 0 n)))

There are surely better solutions for both approaches, of course, but I did the minimum and, unless I feel inspired, won’t bother improving either for now.
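
One more note on the Memoize point above: the following is a rough sketch — in C rather than Scheme or Perl, purely as illustration, with made-up names and an arbitrary cache size — of what memoization does for a definition like f. The naive recursive f recomputes the same values over and over as it branches three ways; caching each result the first time it is computed makes the recursive definition cheap. (In Perl, use Memoize; memoize('f'); gets you this without any of the bookkeeping below.)

#include <stdio.h>

#define MAX_N 1000                 /* arbitrary cap for this sketch */

static long long cache[MAX_N + 1]; /* cache[n] holds f(n) once computed */
static int cached[MAX_N + 1];      /* nonzero once cache[n] is valid */

long long f(int n)
{
    if (n < 3)
        return n;
    if (n <= MAX_N && cached[n])
        return cache[n];           /* already computed: no redundant recursion */

    long long result = f(n - 1) + 2 * f(n - 2) + 3 * f(n - 3);

    if (n <= MAX_N) {
        cache[n] = result;
        cached[n] = 1;             /* remember the result for next time */
    }
    return result;
}

int main(void)
{
    printf("%lld\n", f(30));       /* quick; the uncached version does a pile of repeated work */
    return 0;
}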

15 August 2009

Learning Memory Management

Filed under: Geek — apotheon @ 05:05

A lot of people talk about the benefits of garbage collection in a programming language. It’s true — the benefits are significant, and worth having available. On the other hand, there’s something to be said for memory management the “hard way”, particularly if you ever want to be a competent programmer.

The problem with learning only a single language, and selecting one with garbage collection, is that your software will probably become more and more prone to memory management issues. As more of the nitty-gritty of memory management gets automated by your language implementation, it becomes easier to write code without glaring memory problems, but harder to manage memory well. Good memory management requires a lot more knowledge in a garbage-collected language than in one where you need to allocate and deallocate “manually”.

With no automated memory management, you explicitly see everything your program does with memory right there in front of you, and allocating and deallocating is a relatively simple matter. With a few rules of thumb and half a brain, you can turn that into well-managed memory. Generally speaking, the keys to good memory management when you’re doing it all yourself are making good decisions about how much to allocate at a time and not forgetting to deallocate.
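
For the sake of having something concrete on the page, here is a minimal sketch of that discipline in C — the names and sizes are made up for illustration; the point is only that the allocation is checked, sized deliberately, and paired with exactly one deallocation.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 100;                                /* decide how much to allocate, up front */
    double *samples = malloc(count * sizeof *samples);
    if (samples == NULL) {                             /* allocation can fail; check it */
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    for (size_t i = 0; i < count; i++)                 /* ... use the memory ... */
        samples[i] = (double)i / count;

    free(samples);      /* the matching deallocation: forget this and you leak */
    samples = NULL;     /* rule of thumb: don't leave a dangling pointer behind */
    return 0;
}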

With a simplified, but effective, automated memory management system — such as reference counting — you have to think more abstractly to make sure you’re managing memory well. Scoping, circular references, and other challenges arise that are not as direct and conceptually simple to deal with. Sure, you can just ignore memory management and assume it’ll be done for you, but you’re likely to end up with less efficient code if you don’t know what the reference-counting system is doing for you, and in certain edge cases you can still manage to create memory leaks and other problems.
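
To put a rough shape on that — this is not how Perl does it internally, just the general flavor of the bookkeeping, with made-up names — a reference-counted object in C might look something like this:

#include <stdlib.h>

typedef struct node {
    int refcount;
    struct node *other;              /* a reference this node holds to another node */
} node;

node *node_new(void)
{
    node *n = calloc(1, sizeof *n);
    if (n != NULL)
        n->refcount = 1;             /* the creator holds the first reference */
    return n;
}

void node_retain(node *n)
{
    if (n != NULL)
        n->refcount++;
}

void node_release(node *n)
{
    if (n != NULL && --n->refcount == 0) {
        node_release(n->other);      /* drop this node's reference to whatever it points at */
        free(n);
    }
}

int main(void)
{
    node *a = node_new();
    node *b = node_new();
    if (a == NULL || b == NULL)
        return 1;                    /* sketch: just bail if an allocation failed */

    a->other = b;
    node_retain(b);                  /* a now also holds a reference to b, so b's count is 2 */

    /* If b->other were also set to a (with a retain), neither count could
       ever reach zero: the circular-reference leak mentioned above. */

    node_release(a);                 /* frees a, which drops a's reference to b */
    node_release(b);                 /* b's count hits zero, so b is freed too */
    return 0;
}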

Finally, with an automated, full-featured garbage collector, you have to think even more abstractly — to the point where nobody is likely to get it right all the time. Have you ever seen a Java application suddenly slow down in mid-operation for no apparent reason? Yeah, I see that semi-regularly too. If you don’t know enough about how the garbage collector works (and plan around it painstakingly), you could very well see your application slow to a crawl when the garbage collector is running. That can be deadly for certain types of software — real-time systems, for instance — which is why garbage-collected languages are not typically used for them. There are also edge cases where things may not be deallocated properly and lead to bugs: rarer than with reference counting, but typically more difficult to reason through. Getting memory management right under a garbage collector, with any certainty, is more difficult than dealing with the difficulties of manual memory management and reference counting put together. You can pretty easily get it “good enough” to think you got it right, though (if you don’t know anything about memory management other than “the garbage collector does it for me”).

This is why learning to work without an automated memory management safety net is important, in my opinion. This is also why, sometimes, reference counting is the best way to go: it’s almost as automated as a garbage collector, and almost as conceptually simple as doing the memory management yourself.

In short, sometimes garbage collection is the best way to go, but it is not an excuse to just ignore memory management and figure it’s being done for you. It is being done for you, of course, but it’s probably not being done right unless you manage the memory management as well. You need to know about memory management to really get the full benefits of automated memory management.

This, of course, is why I so lament the fact that I don’t know more about memory management than I do. I know enough to deal effectively with Perl’s reference counting, and use it to best effect. I don’t, however, know it well enough to manage Ruby’s garbage collection as well as I’d like. I’m sure I’ll feel a lot better about the whole thing when I get around to (re)learning C to a level of some competency some time in the next couple of years.

I think it was Joel Spolsky who called the relevant principle the Law of Leaky Abstractions.

It was Sean Reifschneider who said “If Java had real garbage-collection, it would delete most programs before it executed them.”


All original content Copyright Chad Perrin: Distributed under the terms of the Open Works License