Chad Perrin: SOB

27 April 2008

real closures

Filed under: Geek — apotheon @ 07:46

I haven't talked about lexical closures in a while. There was a brief time, months back, when it was almost all I talked about here — but then I went for long periods without touching the subject here at SOB. I feel inspired to discuss it again. One of the reasons I feel thusly inspired is having read Closures in Ruby and Python at the weblog "A Ship at Port". The author, "Matt", expresses his disappointment at the lack of a clear explanation of why he should give a damn about the existence (or nonexistence) of proper lexical closures in any given language. I hope to provide such an explanation, suitable for helping any of my readers who have the same questions understand, in basic terms at least, some of the practical value of closures.

What's a "lexical closure"?

For those who aren't entirely clear on the concept, a lexical closure (or "closure" for short) is a first-class function that has an enclosing lexical scope "closed" around it.

There are a number of implications of this state of affairs in a lexical closure — many of which are often included in definitions of closures, even though they are really consequences of lexically closing a function rather than defining characteristics. Among them:

  • A first-class function is a function that is treated, in computer science terms, as a "first-class object" — also known as a "value", "entity", or "citizen". Such a function can be assigned to variables, passed as an argument, operated on, evaluated, and otherwise manipulated exactly like data. "First-class objects" have occasionally been defined as entities that belong to a "first-class data type", in fact — which means that first-class functions are functions that belong to a first-class data type as well. Generally, first-class functions only exist in languages that either do not differentiate between code and data (e.g. Lisp) or provide some mechanism for turning data into executable code and vice-versa (e.g. Java, though it's kind of an unnecessarily painful process in that language last I checked, and some point to flaws in the resulting Java object that they say disqualify it as a true first-class function).

  • Lexical scoping, generally speaking, is a form of static scoping (determined at "compile time") where a given scope has access to whatever's in its enclosing scope, but the opposite is not true. Thus, if you have a scope containing variable food nested inside an enclosing scope that contains variable bard, the nested scope can also access variable bard, but the enclosing scope cannot access variable food — because in the enclosing scope, food is out of scope.

  • A scope is "closed" when whatever it contains "goes out of scope". For instance, in a lexically scoped language a function has its own scope — and when that function ends, it "goes out of scope". This means that any variables that were created within that scope cease to exist, their place in memory freed up and any data they carried "forgotten" by the program. The usual way to keep data from within a function that has ended, closing its scope, is to generate a return value that contains that data — which is, in fact, simply data and no longer part of that (now closed) scope.

  • A common way to create a first-class function is to use a normal, named function, and generate a return value that consists of an unnamed function that can be assigned to a variable or otherwise operated on like any other data. Doing this all by itself does not really create a closure, though, because it doesn't have an enclosing scope "closed" around it — it just creates a first-class function (a function as data, essentially), indistinguishable in practice from a function created as a lambda without the aid of an enclosing function. The lambda kernel method in Ruby can be used for this purpose, for instance.

  • To close lexical scope around a first-class function in the example of creating it inside another function, you need to actually have some connection between the generated first-class function and the enclosing scope. Thus, if there's a bard variable in the enclosing function's scope, and the first-class function makes use of that variable somehow, the enclosing scope is "closed around" the first-class function (creating a closure). Otherwise, the enclosing scope simply closes, and that's it: it's gone. When a scope closes on nothing but itself, it evaporates; when it closes on a first-class function, however, it becomes a closure — a closed, but persistent, scope, with persistent state.

  • Protection, in computer science terms, is usually applied to concepts of object oriented programming. Encapsulation is a related term. Encapsulation basically means that a chunk of code is encapsulated as a single, (semi-)atomic entity by a surrounding API, so that the only way to access it is via that API. Object oriented languages achieve encapsulation to varying degrees of success. Protection is to a datum as encapsulation is to a chunk of code — it ensures that a given set of data is protected against access from outside its parent entity (i.e. an object) except via a specific API. As with encapsulation, object oriented languages achieve protection to varying degrees of success. In languages where code and data are not strictly separated from one another conceptually (e.g. Lisp), the distinction can get a little fuzzy.

  • A proper lexical closure effectively provides "perfect" protection. Because the enclosing scope of a closure has gone out of scope, it can no longer be directly accessed (in theory), leaving calls to the closure function's API as the only means of operating on the data from that scope. If the closure function's API does not provide any means of accessing some of that data, you simply cannot access it. There are, in practice, exceptions to this (of course) — such as in the case of pointers, references, and other ways to "get around" the closing of scope, but these involve basically reaching directly into the memory used by the program and twiddling bits in some way. That's a messy, dangerous, and usually discouraged means of working on data. Short of using such means of access to produce programming tricks like (for instance) creating objects, first-class functions, linked lists, and closures using references in Perl, these means of direct access to stored data are generally a sign you're doing something wrong — like creating memory violation conditions as some kind of dirty hack rather than coding something up properly.
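To make that "perfect" protection concrete, here's a minimal Ruby sketch (the make_counter name and the pair of lambdas are my own invention, not taken from the examples below): the counter's state lives in a scope that has closed, so the only way to read or change it is through the lambdas the generator returned.

```ruby
# make_counter's scope closes when it returns, but both lambdas keep
# that scope alive as shared, private state -- there is no other way
# for the rest of the program to reach the count variable.
def make_counter
  count = 0
  increment = lambda { count += 1 }
  current   = lambda { count }
  [ increment, current ]
end

inc, cur = make_counter
inc.call
inc.call
puts cur.call    # 2 -- readable only through the closure's API
```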

Why should I care about closures?

The canonical, "Hello World" style of closure example is an incrementor — something that, every time it's called, produces a number greater than the previous number. I'll show examples of how to generate such incrementor closures that are configurable in two ways, and the closure generator will take two arguments to achieve these configurations. The first is a starting number. The second is an incrementing quantity. Thus, you could generate an incrementor that starts at a number of your choosing and, when called, produces a number incremented by a value you chose as well. My examples should work fine on most Unix/Linux system configurations, depending on your execution path configuration, et cetera.

First, I'll present an example that shouldn't really work anywhere — the most basic incrementor closure example, presented in non-executable pseudocode. In this example, there are no arguments sent to the generator, and the resulting closure is non-configurable. It just starts at zero, and prints out a value incremented by one each time it is called.

Pseudocode:
  function generate {
    base = 0
    return lambda { return base += 1 }
  }

  incrementor = generate
  execute incrementor
  execute incrementor
Incrementor closure in Perl (where the my keyword creates a variable with lexical scope):
  #!/usr/bin/perl -l

  sub generate {
    my $base = $_[0];
    my $interval = $_[1];

    sub { $base += $interval }
  }

  $incrementor = generate( 5, 2 );
  print $incrementor->();
  print $incrementor->();
Incrementor closure in Ruby:
  #!/usr/bin/env ruby

  def generate( base, interval )
    lambda { base += interval }
  end

  incrementor = generate( 5, 2 )
  puts incrementor.call
  puts incrementor.call
The output of both of these looks like so:
  7
  9

You may ask why you can't just create a function to do the above for you without all this mucking about with abstract concepts like first-class functions and closed lexical scope. For instance . . .

Perl example:
  #!/usr/bin/perl -l

  sub generate {
    $base = $_[0];
    $interval = $_[1];
  }

  sub incrementor {
    $base += $interval;
  }

  generate( 5, 2 );
  print incrementor();
  print incrementor();
. . . or in Ruby (where the dollar sign on a variable name gives the variable global scope):
  #!/usr/bin/env ruby

  def generate( base, interval )
    $base = base
    $interval = interval
  end

  def incrementor
    $base += $interval
  end

  generate( 5, 2 )
  puts incrementor()
  puts incrementor()

Both of these will produce exactly the same results, run as-is, as the previous examples. They ignore certain functionality that may become necessary in more complex programs, however. For instance, each of them provides exactly one incrementor function, and every time you need an incrementor that starts at a new number you must reset the values of $base and $interval. If you then want to go back to an earlier set of configurations for an incrementor, because you want to increment two things differently, you must manually restore your earlier values where you left off with them. Check out examples in Perl and Ruby.

With the closure example, however, you can simply create new incrementors for new configuration values, and use the old incrementors again when you need them, without so much duplication of code and data — and without the attendant maintenance nightmare. Check out examples for this state of affairs in Perl and Ruby.
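Since those fuller examples live behind links, here's a rough sketch of the idea in Ruby, reusing the generate method from above: two incrementors, each with its own persistent, independent state, and no globals to juggle.

```ruby
# Each call to generate closes a fresh scope around a fresh lambda,
# so the two incrementors never interfere with each other.
def generate( base, interval )
  lambda { base += interval }
end

fives = generate( 5, 5 )
tens  = generate( 0, 10 )

puts fives.call    # 10
puts tens.call     # 10
puts fives.call    # 15 -- fives' state was untouched by tens
puts tens.call     # 20
```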

Is every single Ruby block really a closure?

So far, I've explained closures and provided examples not only of how to make a closure but of how their properties may be of some use to simplify code and improve maintainability. Last but not least, I'll tackle another subject that helped inspire this post.

Some time ago, discussion of closures on the ruby-talk mailing list alerted me to a bizarre cultural characteristic of the Ruby community: it is the semi-official position of the most vocal members of the community on the subject of closures that every single "block" in the Ruby language is a closure. Keep in mind, as I discuss this, that in Ruby the word "block" is used with a very particular, specific meaning that differs from the casual manner one might use the term "block" to refer to a chunk of code within its own scope or routine in other languages.

The problem with the notion that a Ruby block is automatically and always a closure is that a block can be created that only exists as long as its parent scope — or even goes out of scope before its enclosing lexical scope does. In fact, a block could conceivably be created whose sole enclosing scope is that of the program itself. Obviously, if the program ends (thus going out of scope), the block does too.

This means that the lexical scope within which the block exists may never be closed around that block. The block is essentially a first-class object, satisfying one requirement, but the closing of the lexical scope around the block may never happen. This, it seems obvious to me, should disqualify some blocks from being closures — which means the statement that all blocks are closures in Ruby is false.
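As a concrete illustration (my own example, not one from the mailing-list discussion): the block below is created, consumed by each, and discarded before shout ever returns, so its enclosing scope never closes around anything that survives it.

```ruby
# The block passed to each never outlives the scope it was defined in;
# nothing persists after shout returns for a closed scope to cling to.
def shout( words )
  words.each { |word| puts word.upcase }
end

shout( %w[ hello world ] )    # prints HELLO then WORLD
```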

The rejoinder suggests that the lexical context of the block does not need to go out of scope for the block to be a closure. All that is needed, according to people who hold to this notion, is that the block needs to have access to that lexical context. It doesn't even need to make use of that context at all — doesn't have to actually access the enclosing scope, but only be allowed access.

There's one small problem with that:

That's a defining characteristic of lexical scope.

In other words, by that definition of a closure, anything that contains a lexical scope and exists within an enclosing lexical scope is automatically a closure. This, of course, would mean that any language that provides lexical scoping allows closures.

I don't know about you, but I find this notion absurd. It's poppycock. Closures consist of more than just a scope within a scope. WTF meaningful existence can a closure have if it's nothing but a scope within a scope? Why would anyone have bothered to refer to it by the name "closure"? Where, for that matter, would the term "closure" have arisen if you never had to close a scope to get a closure?

. . . and why would everyone have been lying about Scheme being the first formal existence of a proper closure for all this time? Lexical scoping certainly existed before Scheme hit the scene.

What else is there?

For a previous discussion of lexical closures, it might be worthwhile to check out my SOB entry of August 2006 titled Linguistic Elegance Test: Closures and Lists. This includes some links to explanations and definitions of closures, too.

Any questions?

(dedicated to Sterling)

edit: It suddenly occurs to me that maybe some of the members of the Ruby community fail to differentiate between closures and callbacks. Most, if not all, Ruby "blocks" are effectively callbacks. They do not fit the definition of a closure, but the difference may be subtle and easily missed, depending on one's perspective.

32 Comments

  1. If a closure never gets called, is it still a closure? The answer is yes. It also doesn't matter which function calls the closure. Closures are by definition not by invocation.

    The function closes on its enclosing scope, becoming independent of the existence of that scope. The thing is, the function doesn't have to prove that! In other words, it doesn't have to be called after the enclosing scope dies to be a closure.

    For example:

    apply { |x| x + 2 }

    is always a closure. It may be that the block is never called, or only ever called by apply, or that apply passes it to someone else that will call it some other time, because it can. It can because the block is a closure.

    That applies the same to all languages that support closures. Closure by definition, not by invocation.

    Comment by Assaf — 28 April 2008 @ 12:47

  2. If a closure never gets called, is it still a closure? The answer is yes.

    . . . because you say so, I presume.

    Perhaps, if unclosed, we should call it an "unclosure" — or, y'know, just "first-class function", since that's all it really is. I suppose you could nail down enough specifics to call it something more particular than just "first-class function" — but the closest you'd get to "closure" without closing it is "callback". When it doesn't get closed, nothing differentiates it from other first-class functions.

    It also doesn't matter which function calls the closure. Closures are by definition not by invocation.

    I don't recall saying anything that would have prompted that statement as a counterargument.

    The function closes on its enclosing scope, becoming independent of the existence of that scope. The thing is, the function doesn't have to prove that! In other words, it doesn't have to be called after the enclosing scope dies to be a closure.

    Did someone say it had to be called? I didn't say that. It just has to exist after the parent goes out of scope.

    For example:

    apply { |x| x + 2 }

    is always a closure.

    You keep saying things like this without providing any supporting evidence.

    It may be that the block is never called, or only ever called by apply, or that apply passes it to someone else that will call it some other time, because it can.

    I'm not sure why you keep going on about the idea of a closure existing without being called.

    It can because the block is a closure.

    No — it can because it's a callback, which is not necessarily a closure.

    Closure by definition, not by invocation.

    To whom are you replying when you insinuate that the target of your response said a closure has to be called/invoked to be a closure?

      sub generate {
        $foo = 1;
        sub { $foo += 1 }
      }
    
      $iterator = generate();
    

    See that? I created a closure without invoking/calling it.

      $iterator->();
    

    There. Now I've called it. Guess what — it's still a closure, same as before it was called.

    Comment by apotheon — 28 April 2008 @ 04:35

  3. > Generally, first-class functions only exist in languages that either do not differentiate between code and data (e.g. Lisp) or provide some mechanism for turning data into executable code and vice-versa

    Not true. D 2.0 has full closures, but no dynamic executable code generation. They're basically treated as callbacks with an additional context pointer (what D calls delegates). The compiler does some simple escape analysis and moves the variables the nested function needs on the heap.

    Here's a generator in D 2.0:

      int delegate() gen() {
        int foo;
        return { return foo++; };
      }

    And in 1.0, using the tools library:

      int delegate() gen() {
        return (int* p) { return (*p)++; } /fix/ new int;
      }

    To react to your later points, in D, every nested delegate literal actually is a closure :) This is the case because the defining characteristic, the existence of the closure context independent of the surrounding function, happens as soon as the closure is defined – the compiler doesn't wait until the parent function exits to move the data to the heap.

    (And yeah, I realize this post was about Ruby, but I think it's still relevant to see how it's done in other languages :)

    Comment by FeepingCreature — 28 April 2008 @ 04:53

  4. The problem with understanding closures is that the term is used loosely which makes it difficult to identify the concept.

    Knowing where the term comes from helps clarify, and I posted a follow-up post a few years ago that provides some historical perspective: http://mrevelle.blogspot.com/2006/10/closure-on-closures.html

    Comment by mattrepl — 28 April 2008 @ 07:10

  5. FeepingCreature:

    D 2.0 has full closures, but no dynamic executable code generation.

    Please explain what definition of "dynamic executable code generation" you're using that makes what I said imply that dynamic executable code generation is necessary to creating closures. I'm pretty sure I didn't say anything to that effect, unless you mean something different by "dynamic executable code generation" than I think you mean.

    To react to your later points, in D, every nested delegate literal actually is a closure :) This is the case because the defining characteristic, the existence of the closure context independent of the surrounding function, happens as soon as the closure is defined – the compiler doesn't wait until the parent function exits to move the data to the heap.

    I think you're entering a gray area with that. The lexical scope isn't closed, but the manner in which D achieves lexical closure is part of what has already happened. I'm not sure that qualifies, though.

    (And yeah, I realize this post was about Ruby, but I think it's still relevant to see how it's done in other languages :)

    I agree, and I appreciate the added perspective.

    mattrepl

    The problem with understanding closures is that the term is used loosely which makes it difficult to identify the concept.

    No shit. I sympathize with that problem. It has become very hip in some circles to claim that any given language supports proper closures, so people redefine "closure" to mean whatever they want it to, to satisfy the notion that their favorite languages support closures.

    Knowing where the term comes from helps clarify

    Hailpern's explanation of the origin of the term "closure" in this context strikes me as a candidate for inclusion on snopes.com. He seems to be defining closures as "lexically scoped procedures", which is not what anyone else calls a closure — and is exactly what you get any time you have a procedure in a lexically scoped language. That's a bit like saying lexically scoped procedures are objects or even classes, making any lexically scoped language automatically a strongly object oriented language. I don't buy it.

    Comment by apotheon — 28 April 2008 @ 07:45

  6. "It just has to exist after the parent goes out of scope." It can exist after the parent goes out of scope. There's the difference there. When you define a function that can exist independently of the parent scope, which is more than just being a first-class function, it becomes a closure. At this point (definition) it's unclear how long that reference will persist and be put to use. In Ruby the block defines that closure, while the function receiving the block is the one holding the first reference and therefore deciding how long it will exist.

    I don't know of any language that defines closures with the added "has to exist" requirement.

    Comment by Assaf — 28 April 2008 @ 08:53

  7. > Please explain what definition of "dynamic executable code generation" you're using that makes what I said imply that dynamic executable code generation is necessary to creating closures.

    Isn't that what you meant by "converting between executable code and data"?

    I mean, I take from your reaction that it isn't, but if so, what else did you mean by that? :confused:

    Comment by FeepingCreature — 28 April 2008 @ 09:14

  8. Assaf:

    When you define a function that can exist independently of the parent scope, which is more than just being a first-class function, it becomes a closure.

    Any first-class function can exist independently of the parent scope. That's why it's a first-class function.

    I don't know of ay language that defines closures with the added "has to exist" requirement.

    Err . . . so now something can exist without existing? That's confusing.

    FeepingCreature:

    Isn't that what you meant by "converting between executable code and data"?

    Executing data as code and generating code are not the same thing, as far as I'm aware — just as executing a program at the shell and writing a program are not the same thing.

    Comment by apotheon — 28 April 2008 @ 11:30

  9. A closure is a function that has access to the lexical scope in which it was defined, even after that scope has been exited.

    You can't define a closure as any function that has access to a containing scope, because as apotheon pointed out, that would mean that any language with containing scopes has closures.

    But I agree with Assaf that you can't define a closure by whether or not it's called outside that scope, either.

    I think the crux of the definition lies in the potential for being called outside that scope, and still having access to it. By that definition, Ruby code blocks are closures — because they can be passed around to any scope and, if called, they have access to their containing scope. I doubt that the Ruby interpreter would be able to tell whether or not a code block will get invoked outside its scope and optimize for not creating the lexical binding.

    Comment by SterlingCamden — 28 April 2008 @ 01:00

  10. But I agree with Assaf that you can't define a closure by whether or not it's called outside that scope, either.

    I'm still not sure where Assaf got the idea I was saying it's not a closure unless you call it.

    I think the crux of the definition lies in the potential for being called outside that scope, and still having access to it. By that definition, Ruby code blocks are closures — because they can be passed around to any scope and, if called, they have access to their containing scope.

    That's only true if they exist outside the containing scope (regardless of whether they've been called), though — otherwise, they just disappear when the containing scope closes, leaving you without something that "has the potential for being called outside the scope". Also, if a block isn't bound to the containing scope before that scope closes, it lacks the scope access component of the definition of a closure.

    I doubt that the Ruby interpreter would be able to tell whether or not a code block will get invoked outside its scope and optimize for not creating the lexical binding.

    That binding has to happen, though. If there's nothing in the block to bind to the parent scope, no binding happens. The containing scope can be garbage collected in its entirety without that binding, leaving a block (even if the block survives past that point) as nothing more spectacular than a first-class function.

    Comment by apotheon — 28 April 2008 @ 01:52

  11. What Sterling said. And it doesn't matter if the runtime optimizes and discards states it will never access, if you can't tell the difference it's still a closure, much the same way inlined functions are still functions.

    Comment by Assaf — 28 April 2008 @ 02:12

  12. And it doesn't matter if the runtime optimizes and discards states it will never access, if you can't tell the difference it's still a closure, much the same way inlined functions are still functions.

    Translation:

    "And it doesn't matter if the lexical context is gone and it's not really a first-class function — it still conforms to the definition of a closure, which requires access to bound lexical context and the existence of a first-class function."

    You contradict yourself at every turn. You also just said you agreed with what Sterling said, then proceeded to contradict him. Good job.

    Comment by apotheon — 28 April 2008 @ 03:10

  13. I don't think Assaf contradicted me here. In fact, I had the same thought after I posted. Language definitions aren't about optimizations we can't see — they're about how we think about the language we're using. Creating a function object that retains potential access to its enclosing scope makes it logically a closure, whether the enclosing scope is actually accessed or not. IMHO.

    Comment by SterlingCamden — 28 April 2008 @ 04:17

  14. Having nothing left to lose, I'm going to contradict myself one last time, by agreeing with Sterling word for word.

    Comment by Assaf — 28 April 2008 @ 05:03

  15. Guys . . . if the language "where you can't see it" throws away the lexical scope that's supposed to be bound to your closure, then you try to use the closure and it fails, you didn't really have a closure. How can something be a closure if "the runtime optimizes and discards states"? Unless you're talking about during a compiler optimization phase or something like that, where you really will never use the state under any conditions — at which point whether or not there was a closure doesn't matter any longer, because all you have left is a binary anyway (all language abstractions having gone out the window) — making some kind of remark about it being okay for the runtime to optimize and discard state while still calling the closure that depends on that state a "closure" seems kind of self-contradictory.

    I guess, looking at it, you may be talking about after the code is translated to some kind of executable form (parse tree, bytecode, binary) and is no longer actually code, but in that case discussion of closures is meaningless anyway. Programming abstractions like "first-class function", "closure", and even "variable" cease to have any meaning at all at that point.

    So . . . what? Are we using a point in the life-cycle of a program where there's no such thing as a closure at all to say that there doesn't have to be a bound lexical scope for something to be a closure? That doesn't make any sense.

    I don't see how you can seriously argue that something is a closure at a point in the program's life-cycle when the lexical state is discarded because it's that point in the life-cycle when concepts like "closure" and "bound lexical state" have no meaning.

    Comment by apotheon — 28 April 2008 @ 05:18

    OK, the discussion about optimizations went somewhat off-topic. I was talking about optimizations the runtime might make after parsing, when/if the runtime could determine that your code could never call the function outside its original context — which is not something a user of the language need be concerned about. The point of bringing it up is that a function is a closure if it has the potential of being called outside its original scope and has the potential to access members in that scope — regardless of how the runtime achieves that potential under the hood, and regardless of whether any of that actually happens.

    In your examples in the post (which are concise and to the point, BTW) — your generator creates and returns a closure, whether or not the returned function is discarded immediately. At least, that's how I see it.

    Comment by SterlingCamden — 28 April 2008 @ 06:04

  17. In your examples in the post (which are concise and to the point, BTW) — your generator creates and returns a closure, whether or not the returned function is discarded immediately. At least, that's how I see it.

    This is true — and I agree, that's a closure. It exists (persists) after the generator function goes out of scope, even if only for the brief instant of the generator producing a return value (the closure) and the assignment. There's a closure there, regardless of whether it's ever called after being "generated".

    I never disputed such a thing, which is why I'm so confused by Assaf using that as some kind of counterargument.

    Comment by apotheon — 28 April 2008 @ 09:40

  18. [...] Chad Perrin: SOB » real closures What are closures, and why should you care? (tags: closures programming ruby perl python functional) [...]

    Pingback by links for 2008-04-29 -- Chip’s Quips — 29 April 2008 @ 01:36

  19. This is a closure:

      x = 5
      apply { |y| x + y }

    The anonymous function (block) closes over the scope to gain access to x. As long as you use blocks this way (is there any other way?), they're all closures. This is not the only example of a closure, just one that illustrates why all Ruby blocks are closures.

    Comment by Assaf — 30 April 2008 @ 11:07

  20. . . . except that the enclosing scope never closed, because it's the (effectively) global scope.

    Comment by apotheon — 30 April 2008 @ 12:10

  21. Of course it did, they all do. This might be a snippet from a larger function which defines the scope. Or maybe a test.rb Ruby file, in which case the scope is created with require/load and discarded the moment require/load is done evaluating the file.

    Comment by Assaf — 30 April 2008 @ 12:37

  22. So, basically, you're saying "Scope closes, but I'm not going to show it." Okay.

    What about the case where the parent scope doesn't close? In such a case, the block does not get its context closed around it, and doesn't become a closure.

    Comment by apotheon — 30 April 2008 @ 01:03

    When you use a block, it creates an anonymous function and closes over the enclosing scope. That's the closure. The anonymous function has a scope that's independent of the scope in which it was defined, but can still access the same bound variables. That independence is what closures are about. It allows the other scope to die, and the closure to still access those bound variables, but it doesn't require the other scope to ever die. Independence is achieved when you're no longer affected by what happens elsewhere; whether or not something happens elsewhere is immaterial, because you're no longer dependent on it.

    Comment by Assaf — 30 April 2008 @ 01:14

  24. When you use a block, it creates an anonymous function and closes over the enclosing scope. That's the closure.

    I don't see how "closes over the enclosing scope" has any meaning other than "I'm trying to come up with a way to sound credible when I suggest that the parent scope doesn't have to close to create a closure." The nested scope doesn't "close over" its context. It exists within its context, and can access it if that context is lexical or global. If it's lexical, and the parent scope closes, the nested scope still has access to that context — unless the nested scope no longer exists. It's almost tautological.

    "It allows the other scope to die, and the closure to still access those bound variables, but it doesn't require the other scope to ever die."

    If the parent scope doesn't die, it's not a closure — it's a nested function (or local linguistic equivalent of a function, for the pedants in the audience).

    Otherwise, you're just talking about what global and lexical scope have in common.

    I suppose one might conceivably suggest that a closure could exist without the parent scope having closed, in some hypothetical language where a function can have a return value without closing, but as far as I'm aware such a language doesn't exist.

    Comment by apotheon — 30 April 2008 @ 01:26

  25. Actually, the anonymous function doesn't have access to the larger scope. It only has access to variable bindings that existed there when it closed over it. What happens in the larger scope afterwards is none of its business:

        x = 1
        lambda { x = 2; y = 3 }.call
        puts x
        puts y rescue puts 'no y'
        y = 4
        puts y

    Comment by Assaf — 30 April 2008 @ 01:38
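Running the snippet above bears out the point: the lambda shares the binding of x that existed when it was created, while y, first assigned inside the lambda, stays local to the lambda. An annotated version:

```ruby
x = 1
# x already exists here, so the lambda shares its binding;
# the assignment inside the lambda is visible outside.
lambda { x = 2; y = 3 }.call
puts x           # prints 2
# y was first assigned *inside* the lambda, so it is local to the
# lambda and undefined out here until the assignment below.
begin
  puts y
rescue NameError
  puts 'no y'    # prints "no y"
end
y = 4
puts y           # prints 4
```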

  26. "Actually, the anonymous function doesn't have access to the larger scope. It only has access to variable bindings that existed there when it closed over it."

    Thanks for pedantically missing my point.

    "What happens in the larger scope afterwards is none of its business:"

    Maybe so — but you have yet to suggest a situation where the parent scope staying open isn't identical to just using variables with global scope, for purposes of a hypothetical closure.

    Comment by apotheon — 30 April 2008 @ 01:49

  27. I don't because it doesn't matter. If you're saying that in some code paths a closure would be indistinguishable from nested scoping, you're right. In other code paths, where it never gets called, a closure is indistinguishable from noop. Still is a closure.

    Comment by Assaf — 30 April 2008 @ 02:03

    "I don't because it doesn't matter."

    So — you're saying closures are the exact same thing as a first-class function in a language with global scope. The mind boggles.

    Comment by apotheon — 30 April 2008 @ 03:08

  29. Did I not say close over the scope a million times, or is someone else leaving comments in my name?

    Here's a bunch of definitions (you can Google for more) that all say the exact same thing:

    http://c2.com/cgi/wiki?LexicalClosure
    http://www.jibbering.com/faq/faq_notes/closures.html
    http://www.learnthat.com/define/view.asp?id=3938
    http://en.wikipedia.org/wiki/Closure_(computer_science)

    Or if you're more inclined, you might want to imagine how this came about from closures in algebra: http://www.cs.ucsd.edu/~clbailey/ClosureOverview.htm

    Comment by Assaf — 30 April 2008 @ 04:14

  30. Using your links . . .

    From c2:

    Each time TWO-FUNS is called, each lambda expression is evaluated to produce a closure. The two closures are returned as a list of two elements. A different lexical environment is in effect each time, in which the name X is bound to a value. The two closures share the binding of X. However, each call to TWO-FUNS results in a new lexical environment, so that the binding of X used by the closures in FUNS1 is distinct from the bindings of X used by the closures in FUNS2.

    From Wikipedia:

    The scope of the variable encompasses only the closed-over function, so it cannot be accessed from other program code.

    From UCSD:

    An important part of closure is that when we say that the set is "closed under" an operation, it has to be true for everything in the set. So even if all kinds of candies–except for one–were still candy after the bird() operation, Candy would still not be closed under bird(). Next is an example of closure that is more formal.

    These strike me as serving as potential counterarguments to what you seem to be trying to say.

    In any case, your argument seems to be "None of these pages I've picked out explicitly contradicts me, so I'm right!" I don't find that a convincing argument.

    Comment by apotheon — 30 April 2008 @ 04:32

  31. I'm just picking from the first page of a Google search results for articles that say it in different words and illustrate it in different ways, so you might understand where you went wrong adding an unnecessary restriction to your definition.

    Comment by Assaf — 30 April 2008 @ 04:47

  32. . . . which would be great if they actually made your point. They don't.

    Comment by apotheon — 30 April 2008 @ 05:44

RSS feed for comments on this post.

Sorry, the comment form is closed at this time.

All original content Copyright Chad Perrin: Distributed under the terms of the Open Works License