Wednesday, February 7, 2007

What's Wrong with the For Loop

Closures in Java are a hot topic of late. A few really smart people are drafting a proposal to add closures to a future version of the language. However, both the proposed syntax and the language addition itself are getting a lot of pushback from many Java programmers.

Today, Elliotte Rusty Harold posted his doubts about the merits of closures in Java. Specifically, he asks "Why Hate the for Loop?":
I don’t know what it is some people have against for loops that they’re so eager to get rid of them. This isn’t the first or even the second time CS theorists have revolted against for loops (or equivalent).
It's unfair to single Elliotte out for doubting the value of the lowly little closure. Mostly, he's complaining that he doesn't see the value in closures in Java, after having read Bruce Tate's recent column on the subject. (Bruce uses Ruby for illustration):
Listing 1. The simplest possible closure
3.times {puts "Inside the times method."}

Results:
Inside the times method.
Inside the times method.
Inside the times method.
times is a method on the 3 object. It executes the code in the closure three times. {puts "Inside the times method."} is the closure. It's an anonymous function that's passed into the times method and prints a static sentence. This code is tighter and simpler than the alternative with a for loop, shown in Listing 2:

Listing 2: Looping without closures
for i in 1..3
  puts "Inside the times method."
end
Given this lackluster introduction to closures, it's hard for me to see their real value. This first comparison is, at best, a subtle nuance. The rest of the examples in Bruce's article on developerWorks are equally trivial, dubious and unenlightening.

I'll ignore Elliotte's issues with Ruby style; it's not worth quibbling over. I'll also ignore the debate over the current syntax proposal for closures in Java, and the larger issue of whether or not closures belong in Java at this point. I don't have a horse in that race, and I honestly don't care how, when, or if it ever gets resolved.

Nevertheless, Elliotte does raise an important question: Why hate the for loop?

Here's a common example:
double sum = 0;
for (int i = 0; i < array.length; i++) {
    sum += array[i];
}
What's going on here? I've been programming for years, and I'm comfortable speed-reading this idiom; it's obviously a summation of a set of values in an array. But to actually read this block of code, I need to process about 30 tokens spread out over four lines. Sure, there's some syntactic sugar that could shave off a few tokens. But that's a lot of stuff to write and get correct just to do a simple summation.

Want proof? Here's the very next example from Elliotte's article, lifted verbatim:
String s = "";
for (int i = 0; i < args.length; i++) {
    s += array[i];
}
Spot the bug? If that code compiles and gets past code review, it could take weeks to get a bug report, and weeks more to get a fix ready. And these are simple for loops. Imagine how much more complex these things get as the loop bodies get larger, and possibly nested. (If you're still not worried about bugs, think about the possible typos. Then think about how often you write loops like this.)

If you could write a simple for loop in one line, with less repetition and fewer symbols, it would be both easier to read and easier to write. Because it's concise, it's much less likely to harbor bugs, and when bugs do occur, they should be easier to spot.

How would closures help here? Here's the first example, in Haskell:
total = sum array
OK, that was cheating. sum doesn't use closures. It is defined in terms of a fold, which does take closures:
total = foldl (+) 0 array
Here's the second example, idiomatically and with closures:
s = concat array
s = foldr (++) [] array
Admittedly, using odd-looking functions with names foldl and foldr to explain closures may not make sense to programmers more familiar with for loops. But it does highlight the key failing of for loops: they conflate three separate kinds of operations -- filtering, reduction and transformation.

In the two for loops above, the goal was to take a list of values and reduce it to a single value. Functional programmers call these operations "folds". A fold works by starting with an operation (a closure) and a seed value, and visiting the first element of the list. The operation is applied to the seed and the first value, producing a new seed. The fold then repeats with the new seed and the next value in the list, all the way down to the last value, when the result of the last operation becomes the result of the fold.

Here's a demonstration:
s = foldl (+) 0       [1, 2, 3]
  = foldl (+) (0 + 1) [2, 3]
  = foldl (+) 1       [2, 3]
  = foldl (+) (1 + 2) [3]
  = foldl (+) 3       [3]
  = foldl (+) (3 + 3) []
  = foldl (+) 6       []
  = 6
Haskell provides a few fold functions: foldl starts examining the list from the beginning and works its way to the end, while foldr starts from the end and works its way backwards to the beginning. Other variations exist, but these are the two fundamental ones.
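
For reference, both can be written in a couple of lines each. These sketches are equivalent to the standard definitions, modulo strictness details:
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f acc []     = acc
foldl f acc (x:xs) = foldl f (f acc x) xs

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)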

Of course, folds are a very primitive operation, and it would be very confusing if for loops were thrown out and replaced with various foldl and foldr incantations. Instead, higher-level operations like sum, product and concat are defined in terms of folds. Your code is then written in terms of these higher-level reducing operations, making it more concise, easier to read, easier to write, and easier to understand.
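
To see how little machinery that involves, here is a sketch of those higher-level operations, essentially as the Haskell Prelude defines them:
sum     = foldl (+) 0
product = foldl (*) 1
concat  = foldr (++) []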

But not every for loop is a reduction. Consider this one:
for (int i = 0; i < array.length; i++) {
    array[i] *= 2;
}
This operation is a transformation, what functional programmers call a map:
new_array = map (*2) array
The way map works is by examining every element of a list, applying a function to each element, and building up a new list with all the new values. (Some languages do this in-place.) It's a much easier operation to understand. sort does something similar, in that it takes a list and returns (or modifies) a list.
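
In fact, the standard definition of map is just this recipe written down; a sketch equivalent to the Prelude's:
map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs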

The third kind of for loop is a filter. Here's an example:
int[] tmp = new int[nums.length];
int j = 0;
for (int i = 0; i < nums.length; i++) {
    if ((nums[i] % 2) == 1) {
        tmp[j] = nums[i];
        j++;
    }
}
Here we have a very simple operation that's almost completely hidden by the needless complexity of using a for loop, and two independent indices. If filtering is a fundamental operation, it should be as easy as a fold or a map, and in fact, it is:
odds = filter (\i -> (i `mod` 2) == 1) nums
odds = filter odd nums -- more idiomatic
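
And filter itself is no more complicated; a sketch equivalent to the standard definition:
filter :: (a -> Bool) -> [a] -> [a]
filter _ []     = []
filter p (x:xs)
  | p x       = x : filter p xs
  | otherwise = filter p xs
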
This, at the heart of it all, is what's wrong with the for loop: it conflates (at least) three separate kinds of operations while focusing on a minor detail: walking over a series of (index) values. But really, fold, map and filter are three different ways to process a list of values, and they need to be handled differently. Using closures to pass in the body of a loop also makes it easier to separate the what from the how. I could write anonymous functions each time I traverse a list, or I could reuse a well-known function (like odd, (+) or sqrt).

If closures aren't an esoteric feature, but deeply baked into a language and its standard library, there's no need to mess around with these low-level operations. Instead, we can create higher-level actions that say what they do, like sum and product.

Furthermore, thinking in these terms makes it easier to think about more complex operations, like mapping a tree, filtering a vector, or folding a list into a hash.
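
"Mapping a tree," for instance, is the same recipe applied to a different shape. A minimal sketch, using a hypothetical binary Tree type:
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Apply f to every value, preserving the tree's shape:
mapTree :: (a -> b) -> Tree a -> Tree b
mapTree _ Leaf         = Leaf
mapTree f (Node l x r) = Node (mapTree f l) (f x) (mapTree f r)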


Finally, Elliotte did some handwaving about parallelized execution on multiple cores, and how code like 3.times {...} is "worse" than a for loop. Sadly, I think he's missing the point. True, there are some computations that need to be serialized, and some that may be parallelized. But figuring out which is which is a difficult compiler optimization if your only tool is a for loop. If you can split up the serial computations (i.e. foldl and foldr) from the possibly parallelized computations (i.e., map and filter), it's much easier for the compiler to figure out. Not only that, but you could explicitly ask for a serial or parallel version of map if you know your data better than your compiler.
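
Haskell's parallel library already hints at what that could look like. A sketch using its parMap (the exact API has varied across versions of the package):
import Control.Parallel.Strategies (parMap, rseq)

doubleAll, doubleAllPar :: [Int] -> [Int]
doubleAll    = map (* 2)          -- serial
doubleAllPar = parMap rseq (* 2)  -- same result, elements evaluated in parallel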

19 comments:

Cale Gibbard said...

Your reduction of foldl is only right in a strictly evaluated language. It would also be correct if you were using the library function foldl'.

Under lazy evaluation,

foldl (+) 0 [1,2,3]
  = foldl (+) (0+1) [2,3]
  = foldl (+) ((0+1)+2) [3]
  = foldl (+) (((0+1)+2)+3) []
  = ((0+1)+2)+3
  = (1+2)+3
  = 3+3
  = 6

In this particular case it's a waste of space: the tail recursion builds up a large expression in the accumulator parameter, so you really do want foldl', the strict left fold.

Most of the time, though, you want a right fold: rather than the fold calling itself immediately, the function parameter is applied immediately. If for any reason that function doesn't need its second parameter to generate some of its output, then it's possible that the rest of the list won't get used. Thus, foldr plays well with laziness and infinite lists. (Obviously that isn't the case when summing the elements of a list, so there the strict tail recursion works better.)
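
For instance, a sketch:
-- (||) ignores its second argument once the first is True, so this
-- foldr can stop early, even on an infinite list:
anyEven :: [Integer] -> Bool
anyEven = foldr (\x rest -> even x || rest) False

-- anyEven [1..]  ==>  True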

Anonymous said...

Java programmers: if you want closures/map/fold, type inference, pattern matching, and some of the other nice things Haskell has to offer but don't want to give up living in the JVM, may I suggest the Scala language? We have Eclipse plugins and you can use all your existing Java libraries exactly as you did before. And you can deploy your project to a WAR file, using Ant, Maven, or whatever else you want. You can get most of the benefits of a functional language without leaving Java-land per se. Give it a try! www.scala-lang.org .

Adam Turoff said...

Cale,

Thanks for the correction. I consciously wrote the strict expansion of foldl so I wouldn't need to explain lazy evaluation in order to explain folds.

Thanks,

-- Adam

Anonymous said...

Details aside, I'm trying to understand functional programming, and your column was very helpful for understanding the 'why'.

Norbert

Unknown said...

I have to mention that closures weren't actually used in any of the Haskell snippets you posted: a closure is not defined simply as an anonymous function; it's defined as a function paired with its environment, one that "closes over" its parent scopes.

All of your examples do use functions (anonymous or otherwise), but none uses closures:

> total = sum array

Applying the `sum` function to the `array` variable; no closure.

> total = foldl (+) 0 array

Applying the `foldl` function to the arguments `(+)` (another function), `0` and `array`; no closure.

> new_array = map (*2) array

Here we get currying with `(*2)`, but still no closure.

> odds = filter (\i -> (i `mod` 2) == 1) nums

An anonymous function, but no closure. (Note that the anonymous function could've been written point(free|less) as `((== 1) . (`mod` 2))`.)

In none of your examples is a dynamically generated function closing over its context, so while your post does explain the advantages of first-class functions (a necessary prerequisite to closures), higher-order functions (not a prerequisite for closures, but a strong helper) and functional programming constructs, at no point does it use or show the advantages of closures.

Just my 2cents.

Martijn Faassen said...

Not all languages have for loops that go over indexes, which already makes for loops quite a bit easier to read. In Python, the sum:

sum = 0
for el in list:
    sum += el

the map:

new = []
for el in list:
    new.append(operation(el))

and the filter:

new = []
for el in list:
    if condition(el):
        new.append(el)

the filter that actually checks indexes, a rare case:

new = []
for i, el in enumerate(list):
    if i % 2 == 0:
        new.append(el)

Of course Python also has list comprehensions. Can't do the sum with that, but the map:

new = [operation(el) for el in list]

and the filter:

new = [el for el in list if condition(el)]

My point is that it depends quite a bit on the language how recognisable plain loop idioms are. That's not to say there's no argument for functional-style operations, but taking indexed Java-style loops is stacking the deck against loops a bit too severely.

Anonymous said...

Please, don't confuse closures with function objects. A closure is a function paired with an environment.

Anonymous said...

This post makes a very good case... Maybe a more appropriate title would be:

For-loop considered harmful

Well done.

Ivan said...

a function paired with an environment... hmm, that really sounds familiar.. almost like a "thing", an "object", if you will, that can carry its state with it...

wow!

Anonymous said...

A few people have commented that none of the examples use closures, but in Haskell closures are used not only to represent functions but also unevaluated values (i.e., thunks). For example, in the simple expression

total = sum array

the variable array is free and refers to some value, possibly unevaluated, in the enclosing environment. Until the value of total is evaluated it exists as a thunk that closes over array (and sum, for that matter).

So the examples do indeed use closures.

Cheers! ---Tom

Anonymous said...

If he thinks parallelization is a problem, he should check out Google's MapReduce paradigm.

Steven A. said...

Another good use of closures is resource management. Imagine if you could write this:

withDatabaseConnection(conn, "//mysqlserverUrl")
{
    doStuff(conn);
}
onConnectionFail
{
    print("Could not acquire connection");
}

So basically, the withDatabaseConnection function creates a connection, runs your doStuff closure with the connection, calls your handler if it fails, and (most important) always closes the connection no matter what. As a user of this, you don't have to worry about closing it at all! All you know or care is that within the closure, you have a good database connection to work with. This is another great thing about closures: they let you elegantly write reusable code idioms. That means less chance of someone leaving a connection open.
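
Haskell already captures this pattern with Control.Exception.bracket. A rough sketch, where Connection, openConnection and closeConnection are hypothetical stand-ins, not a real API:
import Control.Exception (bracket)

-- Connection, openConnection and closeConnection are assumed to exist:
withDatabaseConnection :: String -> (Connection -> IO a) -> IO a
withDatabaseConnection url doStuff =
    bracket (openConnection url)  -- acquire the resource
            closeConnection       -- always runs, even if doStuff throws
            doStuff               -- the caller's closure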

Unknown said...

Tom Moertel said...

> A few people have commented that none of the examples use closures, but in Haskell closures are used not only to represent functions but also unevaluated values (i.e., thunks). For example, in the simple expression
> total = sum array
> the variable array is free and refers to some value, possibly unevaluated, in the enclosing environment.

Uh, no: `total = sum array` is a simple function application. The fact that Haskell uses lazy evaluation, and therefore that `array` may still be unevaluated after this operation, doesn't change anything; in a strict language `array` would get evaluated.

The definition of "closure" does not depend on the semantics of the language.

I rest my case: your examples didn't use any actual closures.

carlM said...

Two thoughts:

"for" is a fossil, the fortran "do" in c language drag. Anything that old cannot be removed without wiping old programmer's brains.

The bug in your java example is caused by the overload of the "+" operator by strings, not the "for" loop. The overload allows lazy programmers to concat. The "for" just allows them to repeat the error easily.

Peter said...

carlM said...

The bug in your java example is caused by the overload of the "+" operator by strings, not the "for" loop.


I disagree. I think that the bug is caused by a failure to keep DRY. Repeating the name of the array being looped over invites typos of all kinds. Not overloading the '+' operator would catch this error at compile time, but it's easy to manufacture examples where it wouldn't.

On the other hand, I tend to agree that addition and concatenation are probably too different to merit using the same operator for both.

Anonymous said...

Thanks Adam, this was an inspiring post. Adding to my RSS reader...

Paul Bissex said...

I thought that the bug was that the loop was using "args.length" as an upper bound, but pulling characters from "array"...

Anonymous said...

That's why I love list comprehensions and generators in Python. It's the best possible syntax for these sorts of things. A quite complex and hard-to-read map/filter immediately becomes clear:

[x**2 for x in range(0, 10) if x % 2 == 0]

Only reduce is missing, so you have to write

sum([x**2 for x in range(0, 10) if x % 2 == 0])

or, using reduce directly:

functools.reduce(operator.add, (x**2 for x in range(0, 10) if x % 2 == 0), 0)

Richard Minerich said...

F# guy checking in.

I loved your article, and wanted to add that it's much easier to write logic that only instantiates as many elements as are needed at a time when you use higher-order functions instead of loops. For example, you might read lines out of a file, process them, and write them out somewhere else. That would be difficult to do in an imperative language without lazily evaluated data structures. With higher-order functions, though, the same code can instantiate everything at once if you use lists, or just what's needed at the time if you use lazily evaluated sequences.

Cheers!
-Rick