Tuesday, September 16, 2008

Closures and Scoping Rules

Although the concept of a closure as a programming language construct is fairly clear, some of the details are a little fuzzy around the edges. The central ideas are that (a) closures are anonymous functions (b) created as first-class values that (c) capture variables in the enclosing lexical scope. Unfortunately, the notions of "variable" and "enclosing lexical scope" vary subtly across languages. These differences lead to odd edge cases that can be demonstrated with contrived examples, but can also lead to unexpected behaviors in legitimate code in some languages.
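All three properties show up in just a few lines of JavaScript. (This is an illustrative sketch; makeCounter is a name of my choosing, not from any of the languages discussed below.)

```javascript
// A closure exhibiting all three properties: (a) an anonymous function,
// (b) returned as a first-class value, that (c) captures `count` from
// the enclosing lexical scope.
function makeCounter() {
    var count = 0;
    return function () { return ++count; };
}

var tick = makeCounter();
tick();  // 1
tick();  // 2 -- `count` survives between calls because the closure holds it
```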

Take, for example, the much maligned for loop in C:

    if (x) {
        for (int i = 0; i < 10; i++) {
            if (i == 2) {
                /* ... */
            }
        }

        if (i < 10) {
            /* ... */
        }
    }
This snippet raises two important questions: What is the scope of int i? And what should it be? Early C compilers avoided the question entirely by forcing variables to be declared at the top of a block, before any statements. C++ and C99 relaxed those rules to allow variables to be declared anywhere within the body of a function, including in the for loop's initializer.

In a language that doesn't support closures or variable capture, these questions are mostly meaningless, unless you're writing a compiler or trying to out-geek the nerd in the next cube. Now that closures are on the table, the finer points of language design have important and subtle consequences.

There are a few ways to interpret this snippet of C code. One way is to consider the for loop a lexical block of its own, so the lifetime of int i is limited to the loop itself; the initialization, condition and increment clauses are merely syntactic sugar that is still tied to the loop.

Another way to look at the code is to interpret the int i declaration as appearing immediately before the loop, making it visible to any subsequent point in the same block (the if block in this example). This is useful if you want to inspect the value of the loop index after the loop terminates to check whether the loop exited prematurely.

A third way to look at the code is to have the declaration of int i be visible to any subsequent point in the body of the function that declares it. This would mean that although the declaration may appear within the if statement, the variable could be used after the if statement, even if the body of the if statement were never executed.

A fourth way would be to use the declaration of int i as a kind of guidance for the parser. That is to say, the parser would enforce the desired scoping rules, but the compiler would always allocate space for the variable that is visible for the entire body of the enclosing function.
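This fourth interpretation isn't hypothetical: it is roughly how JavaScript's var behaves. A quick sketch (the function name afterLoop is mine):

```javascript
// JavaScript's `var` matches the fourth interpretation: the declaration
// appears inside the loop, but storage is allocated (hoisted) for the
// entire function body.
function afterLoop() {
    for (var i = 0; i < 10; i++) {
        if (i === 2) { /* ... */ }
    }
    return i;  // `i` is still in scope after the loop: 10
}

afterLoop();  // 10
```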

There are other possible interpretations. Sometimes they are actively chosen as the desired way to handle variable scoping. Other times, they are merely a quirk of a buggy implementation. Ruby (pre-1.9) lets a block parameter clobber a variable in the enclosing scope:

    $ ruby --version
    ruby 1.8.6 (2008-03-03 patchlevel 114) [universal-darwin9.0]
    $ irb
    irb> i=0; [1,2,3].each {|i|}; i
    => 3

(from Sam Ruby's presentation on Ruby 1.9 at OSCON 2008)

There are many ways to get lexical scoping wrong. But how should it work?

One of the best examples comes from SICP, using Scheme, of course:

    (define (make-account balance)
      (define (withdraw amount)
        (if (>= balance amount)
            (begin (set! balance (- balance amount))
                   balance)
            "Insufficient funds"))
      (define (deposit amount)
        (set! balance (+ balance amount))
        balance)
      (define (dispatch m)
        (cond ((eq? m 'withdraw) withdraw)
              ((eq? m 'deposit) deposit)
              (else (error "Unknown request -- MAKE-ACCOUNT"
                           m))))
      dispatch)

Assigning a variable to the result of (make-account 100) creates a new object that contains a single variable balance with the value of 100, and two local functions which reference that same variable. Passing the symbol 'deposit or 'withdraw to that object returns the corresponding local function, which updates the shared balance when called.
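The same pattern translates almost directly into JavaScript (a sketch; the function names mirror the Scheme original):

```javascript
// A JavaScript transliteration of make-account: `balance` is a single
// captured variable shared by both inner functions.
function makeAccount(balance) {
    function withdraw(amount) {
        if (balance >= amount) {
            balance -= amount;
            return balance;
        }
        return "Insufficient funds";
    }
    function deposit(amount) {
        balance += amount;
        return balance;
    }
    return function dispatch(m) {
        if (m === "withdraw") return withdraw;
        if (m === "deposit") return deposit;
        throw new Error("Unknown request -- MAKE-ACCOUNT");
    };
}

var acc = makeAccount(100);
acc("withdraw")(30);  // 70
acc("deposit")(10);   // 80 -- both functions saw the same `balance`
```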

With this example in mind, the C# team at Microsoft wrote a specification for their language that defined a clear behavior for all of these dark corners, and got them right (or at least resolved the ambiguity in previous languages). Of course, they got lexical scoping right, so closures work properly.

Yet, even with properly implemented lexical scoping, there are still some odd cases programmers need to be aware of. Erik Meijer recently posted an example in C# that can be confusing:

    var DelayedActions = new List<Func<int>>();

    for (var i = 4; i < 7; i++)
        DelayedActions.Add(delegate(){ return i; });

When the contents of DelayedActions are evaluated, they will return the values 7, 7 and 7. And believe it or not, this behavior is correct and proper, even if it is unexpected. Each closure captures a reference to the single variable i. Once the loop terminates, the variable persists, and each closure has a reference to the final value of that variable, 7.

A slight change to the code produces a different result:

    var DelayedActions = new List<Func<int>>();

    for (var i = 4; i < 7; i++)
    {
        int j = i;
        DelayedActions.Add(delegate(){ return j; });
    }

In this code, each iteration through the loop creates a new variable j, which copies the current value of i. The results of evaluating this set of closures are the values 4, 5, and 6. The difference is subtle, but important.

Erik Meijer noticed that with a few minor tweaks, the C# code could be run without translation as valid JavaScript. The first example, when translated, still produces the values 7, 7, 7. However, the second example produces the values 6, 6, 6.

Although JavaScript and C# have a C-like syntax in common, they have very different semantics. In JavaScript, declaring a variable inside a for loop does not create a new variable each time through the loop. Instead, it is allocated at the start of the function body! Oops.
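The two C# examples, transliterated to JavaScript with var, make the difference concrete. The first captures the loop variable itself; the second shows that var j inside the loop body is hoisted and shared, so copying doesn't help:

```javascript
// Closures over a `var` loop variable: all three delayed actions share
// the one `i`, which is 7 after the loop terminates.
function captureLoopVariable() {
    var delayed = [];
    for (var i = 4; i < 7; i++) {
        delayed.push(function () { return i; });
    }
    return delayed.map(function (f) { return f(); });  // [7, 7, 7]
}

// Copying into `var j` doesn't create a fresh variable per iteration:
// `j` is hoisted to function scope, so there is one `j`, not three.
function copyIntoInnerVariable() {
    var delayed = [];
    for (var i = 4; i < 7; i++) {
        var j = i;
        delayed.push(function () { return j; });
    }
    return delayed.map(function (f) { return f(); });  // [6, 6, 6]
}
```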

All of a sudden, issues that used to be relegated to language lawyers are becoming relevant to everyday developers. And developers today frequently switch between languages, like, say, C# and JavaScript. You can blame Microsoft for doing a lot of things wrong, but their language research group isn't at fault here. If anything, JavaScript is exposing an optimization of dubious value, and letting it change the semantics of the language.

Incidentally, this kind of misunderstanding simply can't happen in Haskell. That's because closures (lambdas and curried functions) don't capture variables, but rather values. (Remember, variables don't vary in Haskell.) Here is a contrived example (using a Ruby-ish approach) that mimics the first C# snippet:

    each :: [a] -> (a -> b) -> [b]
    each = flip map

    example1 :: [a -> Int]
    example1 = [4, 5, 6] `each` (\i -> (\_ -> i))

    map (\f -> f ()) example1  -- [4, 5, 6]

This example seems to implement the behavior of the first C# loop, but actually implements the behavior of the second C# loop. That's because each delayed action is capturing a different unique value. To capture a shared reference takes more work, because shared references are less idiomatic:

    import Data.IORef

    eachM :: [a] -> (a -> IO b) -> IO [b]
    eachM = flip mapM

    example2 :: IO [t -> IO Int]
    example2 = do i <- newIORef 0
                  r <- ([4, 5, 6] `eachM` \j -> do writeIORef i j
                                                   return (\_ -> readIORef i))
                  modifyIORef i (+ 1)
                  return r

Note the odd type for example2. The outer IO is needed to actually construct the delayed actions using some form of shared state. Once the list of delayed actions is constructed, then pulling out the values requires doing something stateful. (The IO monad was used here for simplicity, but another stateful monad could be used instead.)

Evaluating the list of delayed actions is also trickier:

    example2 >>= mapM (\f -> f ())  -- [7, 7, 7]

What's the lesson here? Closures are coming, but the rules around scoping still vary across languages. Programmers tend to hop around between languages today, much more than we did 10 or 20 years ago. As we adapt to use closures in our programs, we need to be aware of these arcane issues, and how to avoid them.

Sunday, September 7, 2008

The Closures are Coming

Last month, Chris Lattner at Apple announced support for closures in C for Clang, the llvm-based C compiler Apple has been working on as a potential replacement for the venerable gcc. Because Apple is working on this extension, it means that closures are coming to both C and Objective-C.

This is a sorely needed feature C has been missing for quite a number of years. The need may not have been pressing two decades ago, in the era of smaller programs on early 32-bit CPUs with kilobytes or maybe a couple of megabytes of RAM. But those days are long behind us, and constructing programs without closures feels clunky, cumbersome, and a throwback to the era of 8-bit home "computers" running some quirky BASIC dialect.

Nearly every major general purpose programming language in use today supports closures or soon will. There are a few exceptions, like PHP, Tcl, Cobol and Fortran that don't have them, and it's debatable whether these languages are either "major", or will get closures "soon". (Update: PHP 5.3 has/will have closures. See below.)

C# has them, and has made them easier to use in the most recent version of the language by providing lambda expressions. Java is likely to get them, and the BGGA proposal looks pretty strong. And even if these languages didn't, F# (which is basically OCaml by another name) and Scala are merging the functional and object oriented disciplines on the CLR and JVM, bringing features like closures and currying to these platforms.

The most glaring omission among major languages is C, and thanks to Chris Lattner and the llvm team, this is changing. Hopefully their changes will be widely adopted, and this feature will be added to the next C standard.

Adding closures to C exposes a few leaky abstractions, though. Local variables in C (including parameter values) live on the stack, which does no good if the enclosing stack frame disappears before the closure is invoked. Therefore, this change adds a few new features that allow closures to be copied to the heap (via persistent = _Block_copy(stack_closure)), reference counted while on the heap, and freed when no longer in use (via _Block_release(persistent)). There are a few other wrinkles, and the usage is slightly cumbersome, but, well, it's C, and you should be used to that by now.

However, the implications of this change are much more profound.

Adding closures to C is tantamount to saying it is unacceptable for a general purpose programming language to omit support for closures. All of the old arguments for withholding them are now void: it can be done in a compiled language. It can be done without using a virtual machine. It can be done in a mainstream language. Closures are useful beyond the functional ghetto where they started (and offering just pointers to functions simply doesn't cut it).

If closures are available in C, one of the oldest and most widely used languages without closures, what's your excuse for not providing them in your pet language? It may be hard to do, but the techniques are proven, well known, and open for inspection. The utility of providing programmers with the means to use idioms like map, filter and fold (or reduce) is too overwhelming to claim it's not worth the time, cost or complexity to implement.
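For a taste of what those idioms buy you, here is a small JavaScript sketch (the data and the 8% tax rate are made up for illustration) using the built-in map, filter and reduce:

```javascript
// Total up orders over 5 units with a hypothetical 8% tax applied,
// using filter/map/fold instead of a hand-written loop.
var orders = [12.5, 3.0, 48.75, 7.2];
var total = orders
    .filter(function (x) { return x > 5; })            // drop small orders
    .map(function (x) { return x * 1.08; })            // apply tax
    .reduce(function (sum, x) { return sum + x; }, 0); // fold into a total
```

Each step names its intent; the equivalent loop buries the same logic in accumulator bookkeeping.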

The writing is on the wall. If you intend to develop software in the 21st century, you will be using closures, or going to great lengths to avoid them.

This may sound like a bold statement, but it really isn't. Programming languages have co-evolved in fits and starts for decades. Some feature, like recursion, proves itself in academia, but is widely regarded as unnecessary in industry. Slowly, industry adapts, and that feature becomes a necessity in all future programming languages.

Recursion may sound like an esoteric feature to require in a programming language. After all, it's not something that you expect to use every day. But what about lexical scoping? Believe it or not, block scoped variables weren't standard practice until the mid-1980s or so; having only global variables or dynamically scoped variables was once deemed "sufficient" until the consensus formed around lexical scoping as simply the right way to do it. A similar history surrounds object oriented programming, and garbage collection.

What's next? Who knows. The jury is still out on static vs. dynamic programming environments, live code vs. dead code, metaprogramming, macros, virtual machines vs. native compilation, type inferencing, strong static typing, prototypes vs. metaclasses, and a dozen other features where languages differ. Closures, at least, are on their way to becoming a required feature for every important programming language.

Sunday, August 24, 2008

Arrows in Javascript

Last year, I mentioned a lesson I learned from one of my CS professors many years ago:

He began by recommending every project start by hiring a mathematician for a week or two to study the problem. If a mathematician could find an interesting property or algorithm, then it would be money well spent, and drastically reduce the time/money/effort needed to develop a system.

A few weeks ago, I came across a paper about Arrows in Javascript. It's an interesting adaptation of arrows to do asynchronous, event driven programming in Javascript. It makes an excellent example of this dictum.

Here's the problem in a nutshell. There's a very interesting language lurking underneath Javascript. It's an object oriented language with namespaces, closures, and dynamic typing. Doug Crockford has written and presented these concepts over the last few years.

As good as Javascript is on its own merits (albeit abused and misunderstood), it is generally used in a pretty horrendous environment. Browsers have compatibility bugs in their language support and the cross platform APIs they expose. Most of all, browsers execute Javascript in a single-threaded manner. This makes asynchronous programming difficult, especially when there are two interwoven threads of execution, like drag and drop and dynamic page updates.

This paper by a team at UMD starts by bringing continuation passing to Javascript, and implementing arrows on top of that. Once arrows are available, building a library for asynchronous event programming is much easier than brute-force chaining of event callbacks. For example, drag and drop programming can be specified through common method chaining, ending in a call to .run().

The plumbing inside the library can be a little hairy, especially if you find CPS or arrows befuddling. But the exposed API is quite elegant and appears to be easy to use. It'll be interesting to watch the Javascript library space to see when and where it pops up.
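For readers who find CPS befuddling, here is a minimal sketch of the underlying idea. This is not the paper's actual API; the names step and next are mine. Every step takes a continuation to call when it finishes, and a combinator chains steps much the way arrow composition (>>>) does:

```javascript
// Minimal continuation-passing style: each step receives its input `x`
// and a continuation `k`, and calls `k` with its result instead of
// returning it.
function step(suffix) {
    return function (x, k) { k(x + suffix); };
}

// Chain two CPS steps: run `f`, then feed its result into `g`.
// Roughly analogous in spirit to arrow composition (>>>).
function next(f, g) {
    return function (x, k) { f(x, function (y) { g(y, k); }); };
}

var pipeline = next(step("-drag"), next(step("-move"), step("-drop")));
pipeline("mousedown", function (result) {
    // result is "mousedown-drag-move-drop"
});
```

Because each step only ever calls its continuation, asynchronous steps (timeouts, event handlers) drop into the same pipeline without restructuring the caller.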

Wednesday, May 14, 2008

Thought of the Day

Also from Steve Yegge's talk:

[W]hen we talk about performance, it's all crap. The most important thing is that you have a small system. And then the performance will just fall out of it naturally.

Static vs. Dynamic Languages

One permatopic across programming blogs is the good ol' static-vs-dynamic languages debate. Static languages (like C, C++, C#, C--, Java, etc.) require variables to have type declarations primarily to generate compiled code and secondarily to catch errors or emit warnings at compile time. Dynamic languages (like Perl, Python, PHP, Ruby, Tcl, Lisp, Scheme, Smalltalk, etc.) do away with all those needless keystrokes, allow programmers to be more productive, and generally aren't as dangerous as the C-programmers would have you believe. Furthermore, dynamic language programmers are quick to point out that any real program in a static language requires typecasting, which throws all of the "guarantees" out the window and brings in all of the "dangers" of dynamic languages with none of the benefits.

However, this debate isn't limited to these opposite points of view. There's another perspective, which I heard from Mark-Jason Dominus about a decade ago: static typing in languages like C and Pascal isn't really static typing! (Correction: Mark actually said that "strong" typing in C and Pascal isn't really strong typing.)

The kind of static typing supported by C, its ancestors and its descendants, dates back to the 1950s and Fortran (or, rather, FORTRAN, since lower case letters hadn't been invented yet). Type annotations in these languages are guides to the compiler indicating how much storage to allocate for each variable. Eventually, compilers began to emit warnings or errors when mixing variables of incompatible types (like floating point numbers and strings), because those kinds of operations don't make sense. If the irksome compiler gets in your way, you can always tell it to shut up and do the operation anyway, and maybe let the program crash at runtime. Or maybe silently corrupt data. Or maybe something completely different. None of which makes much sense, but there you go.

Modern static typing, on the other hand, uses a strong dose of the Hindley-Milner type inference algorithm to (a) determine what values a variable may contain and (b) prevent the use of incompatible values in expressions. Furthermore, Hindley-Milner prevents values from being cast from one type to another on a whim (unlike C), and thus prevents odd runtime errors through abuse of the type system. This leads to a strong, unbreakable type system that is enforced by the compiler, and prevents a whole class of loopholes allowed by lesser type systems. Because types can be inferred, explicit type annotations aren't always needed, which leads to an interesting property: languages that are safer than "safe" static languages like C and Java, and programs that are free of those troublesome, noisy type declarations, just like dynamic languages.

Use of the Hindley-Milner type system comes to us through ML, and descendants like Haskell, OCaml, SML, F# and the like. It's such a good idea, that it's slowly making its way into new programming languages like Perl 6, Fortress, Factor, Scala, and future versions of C++, C#, and VB.Net.

So, really, the whole static-vs-dynamic languages debate is both misleading and moot. It's misleading, because it's not about static vs. dynamic typing, it's about weak compile-typing vs. runtime-typing. It's moot, because the writing is on the wall, and weakly typed languages are on the way out, to be replaced (eventually) by strongly-typed languages of one form or another. (It also tends to be a meaningless "debate", because it's typically a shouting match between C/C++/Java/etc. acolytes who are uncomfortable with Perl/Python/Ruby/etc., and Perl/Python/Ruby/etc. acolytes who have a strong distaste for C/C++/Java/etc., and neither group has a hope of persuading the other.)

Normally, I would avoid the topic entirely, but today, I couldn't resist. Steve Yegge posted a transcription of his talk on dynamic languages, where he covers both sides of the debate, moves beyond the two-sides-talking-past-each-other, and uncovers the core issues. He covers the meat of the discussion, not the typical push-button issues that go nowhere and convince no one. (I think he misses the point behind Hindley-Milner, but that's an insignificant side issue.)

  • Most of the work in compiler optimization in the last few decades has focused on weakly-typed static languages.
  • Dynamic languages are, in general, slower, only because there has been relatively little research interest in making them faster.
  • There are some interesting approaches to optimizing dynamic languages that are now ready for prime time. Much of this comes from Strongtalk, an effort to optimize Smalltalk that led to Java's HotSpot VM.
  • Some of the "benefits" of "static" typing, like refactoring IDEs, aren't as good as they appear at first. Refactoring dynamic languages is different but IDEs could do about as good a job as they do for C++/C#/Java. Refactoring properly is a hard problem that no one has solved completely, yet.
  • "Fast" is relative. C++ used to be the "fast" language because it compiles down to native code. But in a multicore world, C/C++ is becoming slow, because C/C++ programs cannot take advantage of multiple CPUs easily, unlike Java.
  • Dynamic languages of the 1990s (Perl/Python/Ruby/Tcl) don't have a reasonable answer to scaling across CPUs, either.
  • Making Java bytecode run quickly takes more work than you probably realized.
  • Any sufficiently large system built with a static language will need to build in the kind of introspection that's available with a dynamic language and REPL. (Greenspun's Tenth Rule in action)
  • It was easy for the industry to switch programming languages every decade or so, up until 1995. Today, the bar is higher, and the amount of effort invested in existing code makes it much more difficult to migrate to a new language and platform.
  • Google has some of the most brilliant people and language designers in the industry, but they use C++, Java, Python and JavaScript, and none of the esoteric languages the smart kids love. This is because it's more important to standardize on a set of technologies and get stuff done than let the geeks run wild and free. But you probably knew that already. ;-)

Steve's talk is long and rambling, but certainly worth the time to read, or at least skim through.

Monday, March 31, 2008

Finger Exercises

Wow. Three months, no update. Hm.

I've been working on a bunch of little projects recently that individually don't amount to much, yet. I should have something to post about that soon.

In the mean time, here is a cute little problem (via Uri to the fun with Perl list):

What is an easy way to get all possible combinations of Y/N of length 5? (i.e., "YYYYY", "YYYYN", etc.)

Coming up with a solution to this exact problem seemed to cry out for a list comprehension:

    [[a,b,c,d,e] | let l = "YN",
                   a <- l, b <- l, c <- l, d <- l, e <- l]

but as soon as I wrote that, I saw how unsatisfactory it was. A better solution would produce all possible combinations of Y and N of any length n > 0:

    yn i = iterate f [""] !! i
      where f l = map ('Y':) l ++ map ('N':) l
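The same iterative idea carries over to JavaScript (a sketch; yn is just my name for it):

```javascript
// Build all length-n strings of 'Y'/'N' by repeatedly prefixing both
// letters onto every string built so far, mirroring the Haskell
// solution above.
function yn(n) {
    var acc = [""];
    for (var k = 0; k < n; k++) {
        acc = acc.map(function (s) { return "Y" + s; })
                 .concat(acc.map(function (s) { return "N" + s; }));
    }
    return acc;
}

yn(2);  // ["YY", "YN", "NY", "NN"]
```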

This problem itself isn't very interesting, but it's good to have little diversions like this when you want to take a break while the coffee is steeping, waiting for the Guinness to settle, or whatnot. (Something that's more than a function from the Prelude, but less involved than something from Project Euler.)

Do you have any cute little "finger exercise" problems like these? I'd love to hear about them!