Tuesday, December 9, 2014

Functional Programming in Mainstream Languages, Part 6: Higher-Order Functions in Xtend

(Jump to Table of Contents)
I introduced the Xtend programming language in a previous post.  Xtend is similar to Java, and compiles to Java (not to bytecode), but is much more pleasant to use.  In particular, it supports functional programming in various ways, both in the language itself and in the libraries.  Some of these will have to wait for later posts in the series; for now, as in previous posts, I am only discussing higher-order functions.  Again, I will contrast Xtend with Java 8.

Here is the simple example of the curried addition function:
Java 8
IntFunction<IntUnaryOperator> makeAdder = x -> y -> x + y; IntUnaryOperator increment = makeAdder.apply(1); int res = increment.applyAsInt(4);
Xtend
val makeAdder = [int x|[int y|x + y]] val increment = makeAdder.apply(1) val res = increment.apply(4)

The syntax for anonymous functions (lambdas) in Xtend is different from all those we saw before; instead of some kind of arrow symbol, it uses square brackets with a vertical bar separating the parameters from the body.  Type inferencing is applied (as in Scala) to infer the types of the expressions.  Unlike Scala, and like Java, Xtend still requires the method-call syntax for function calls.

The recursive fixed-point function looks like this:

Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) { double next = f.applyAsDouble(v); if (Math.abs(next - v) < epsilon) return next; else return fixedpoint(f, next); }
Xtend
def double fixedpoint(Function1<Double, Double> f, double v) { val next = f.apply(v) if ((next - v).abs < epsilon) next else fixedpoint(f, next) }

This uses the Xtend interface for unary functions, Function1, with the Java wrapper type Double.  In principle, the compiler could have inferred the return type of this function (Scala can do it, see Part 5), but its type-inference mechanism doesn't handle recursion, so it is necessary to specify the type manually.  I hope this will be fixed in a future release.

In this example we can see a nice extension feature, one of those that gave Xtend its name.  The Double class in Java has no abs method; instead, you call the static method Math.abs with the number as a parameter.  This is awkward and places an unnecessary cognitive burden on programmers, who need to remember which methods each object has and which static methods are available for it.  (This is quite common in Java, which has many classes that offer services as static methods; Arrays and Collections are famous examples.)  In general, having different and mutually exclusive ways of expressing what is essentially the same concept is bad.  Xtend allows developers to define "extension methods": static methods that can be called like regular methods on their first argument.  In this example, for a double variable x, the expressions Math.abs(x) and x.abs are completely equivalent.  This equivalence (like many others) is defined in the standard Xtend libraries.
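In plain Java the same services stay in static-utility form; the following sketch (my illustration, with a hypothetical halve helper, not code from the post) shows the call style that Xtend's extension methods let you drop:

```java
public class ExtensionDemo {
    // A user-defined "extension-style" static helper (hypothetical example).
    static double halve(double x) {
        return x / 2.0;
    }

    public static void main(String[] args) {
        double x = -3.5;
        // In Java both cases require static-call syntax...
        System.out.println(Math.abs(x)); // library static method
        System.out.println(halve(4.0));  // user-defined static helper
        // ...whereas Xtend would let callers write x.abs and 4.0.halve.
    }
}
```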

Because this recursive version of fixedpoint compiles in a straightforward manner to Java, tail-recursion optimization is not necessarily applied.  The imperative version is therefore preferred in this case as well:

Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) { double prev = v; double next = v; do { prev = next; next = f.applyAsDouble(next); } while (Math.abs(next - prev) >= epsilon); return next; }
Xtend
def fixedpoint(Function1<Double, Double> f, double v) { var next = v var prev = v do { prev = next next = f.apply(next) } while ((next - prev).abs >= epsilon) next }

A seemingly slight difference between the recursive and imperative versions is the keyword used to define variables.  Unmodifiable variables in Xtend (which are translated into Java final variables) are introduced by the keyword val, whereas modifiable variables (non-final) are introduced by the keyword var.  In Xtend it is therefore just as easy to define final variables as non-final ones.  This is a great incentive for writing functional programs, which don't have modifiable variables.  When you write Xtend code, you should always use val rather than var, unless you convince yourself that there is good reason why the variable should be modifiable.  And you should be hard to convince on this point!

At this point you know to expect the non-terminating sqrt; here it is:

Java 8
public double sqrt(double x) { return fixedpoint(y -> x / y, 1.0); }
Xtend
def sqrt(double x) { fixedpoint([y|x / y], 1.0) }

The differences are minor: less type information is required in Xtend, and, like Scala, Xtend doesn't require the access-level designator (public) and the return keyword; however, the braces are still required.

By now you can surely write the terminating version yourselves:

Java 8
public double sqrt(double x) { return fixedpoint(y -> (y + x / y) / 2.0, 1.0); }
Xtend
def sqrt(double x) { fixedpoint([y|(y + x / y) / 2.0], 1.0) }

The general average-damp and the corresponding sqrt again hold no surprises:

Java 8
public DoubleUnaryOperator averageDamp(DoubleUnaryOperator f) { return x -> (x + f.applyAsDouble(x)) / 2.0; } public double sqrt(double x) { return fixedpoint(averageDamp(y -> x / y), 1.0); }
Xtend
def averageDamp(Function1<Double, Double> f) { [double x|(x + f.apply(x)) / 2.0] } def sqrt(double x) { fixedpoint(averageDamp[y|x / y], 1.0) }

As I said before, Xtend is a very pleasant alternative to Java: it has many similarities with Java but also many points in which it improves on it significantly.  One of those is its support for functional programming (of which we have seen only one part, higher-order functions).  Java 8 has closed some (but not enough) of this distance, and made some choices that are different from Xtend's.  The most obvious is the lambda notation, but perhaps more important are the use of the functional interfaces, more limited type inference, and the inconsistent use of the method name (apply, applyAsDouble, test, etc.).  As we saw in Part 5, Scala is perhaps more pleasant than Xtend in some ways, but places itself further away from Java (in many more ways than we have seen).

In summary, if you are now programming in Java and want a closely-related but better language, you should take a good look at Xtend.

#FunctionalProgramming #Xtend

Sunday, December 7, 2014

Functional Programming in Mainstream Languages, Part 5: Higher-Order Functions in Scala

(Jump to Table of Contents)
Scala is a relatively new language; it belongs to the bigger-is-better school of programming languages, and includes many kinds of object-oriented features as well as a functional subset.  Scala captured a lot of interest in the academic community, and for several years it featured in many papers in the leading conferences on object-oriented programming.

Scala attempts (among other things) to combine object-oriented and functional programming in a graceful way.  In this post, I will show how higher-order functions are expressed in Scala, using the same examples as previous posts, and contrast it with the Java 8 implementation.

Scala compiles to Java bytecode, and is therefore compatible with Java code.  It extends the Java type system in ways that make it more consistent.  For example, all values in Scala are objects, so it doesn't have the artificial and annoying distinction between primitive types (int, double, ...) and classes (everything else, including Integer and Double).  Like Java, Scala is strongly typed, but contains extensive type-inference mechanisms so that programmers can avoid a lot of type specifications.

Here is the simple example of the curried addition function:

Java 8
IntFunction<IntUnaryOperator> makeAdder = x -> y -> x + y; IntUnaryOperator increment = makeAdder.apply(1); int res = increment.applyAsInt(4);
Scala
def makeAdder = (x: Int) => (y: Int) => x + y def increment = makeAdder(1) def res = increment(4)

Note that in this example type inference goes in the opposite direction in the two languages: Java infers parameter types from the declaration, whereas Scala infers the function's type from the parameter types and the expression in the body.  Scala infers the following type for the makeAdder function: Int => (Int => Int).  We could specify this type manually, but don't need to.

Clearly, Scala's notation for functional types is much nicer than Java's use of multiple interfaces; and the function application notation doesn't require an artificial method call.  In both aspects, Scala notation is similar to the usual mathematical notation.
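The two inference directions are easy to demonstrate on the Java side: the same lambda text takes on different parameter types depending on the declared target type.  This sketch is my illustration, not code from the original post:

```java
import java.util.function.DoubleUnaryOperator;
import java.util.function.IntUnaryOperator;

public class TargetTyping {
    // The identical lambda text gets its parameter type from the target type:
    static final IntUnaryOperator NEGATE_INT = y -> -y;       // y inferred as int
    static final DoubleUnaryOperator NEGATE_DOUBLE = y -> -y; // y inferred as double

    public static void main(String[] args) {
        System.out.println(NEGATE_INT.applyAsInt(3));         // -3
        System.out.println(NEGATE_DOUBLE.applyAsDouble(3.0)); // -3.0
    }
}
```

Scala works the other way around: the parameter types are written in the lambda, and the function's overall type is inferred from them.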

Here is the recursive fixed-point function:

Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) { double next = f.applyAsDouble(v); if (Math.abs(next - v) < epsilon) return next; else return fixedpoint(f, next); }
Scala
def fixedpoint(f: Double => Double, v: Double): Double = { val next = f(v) if (Math.abs(next - v) < epsilon) next else fixedpoint(f, next) }

These are quite similar, except for the notations for types and for function application, and the fact that Scala doesn't require explicit return statements.

Scala emphasizes recursion, and will therefore perform tail-recursion optimization whenever possible.  If you want to make sure that the function is indeed optimized in this way, you can add the annotation @tailrec to the function (or method).  This will cause the compiler to produce an error if the tail-recursion optimization can't be applied.  In this case, the compiler accepts this annotation, so that this version is just as efficient as the imperative one.  Because recursion is natural in Scala, this would be the preferred way of writing this method.  For comparison, here is the imperative version:

Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) { double prev = v; double next = v; do { prev = next; next = f.applyAsDouble(next); } while (Math.abs(next - prev) >= epsilon); return next; }
Scala
def fixedpoint(f: Double => Double, v: Double) = { var next = v var prev = v do { prev = next next = f(next) } while (Math.abs(next - prev) >= epsilon) next }

This is not too bad; still, I recommend using recursion instead of assignments in all languages that fully support it (that is, guarantee the tail-recursion optimization), including Scala.

Now for the naive non-terminating version of sqrt that uses fixedpoint:
Java 8
public double sqrt(double x) { return fixedpoint(y -> x / y, 1.0); }
Scala
def sqrt(x: Double) = fixedpoint(y => x / y, 1.0)

The differences are small, and have to do with Java's verbosity more than anything else.  Gone are the access-level designator (public), the return type, the return keyword, and the braces.  The Scala version is more concise, and can easily be written in one line.  These are small savings, but they add up over a large program and reduce the noise-to-signal ratio.  Except for Haskell (forthcoming), Scala is the most concise (for this kind of code) of all the languages I survey in this series.

The terminating version of sqrt that uses average damping should now come as no surprise:

Java 8
public double sqrt(double x) { return fixedpoint(y -> (y + x / y) / 2.0, 1.0); }
Scala
def sqrt(x: Double) = fixedpoint(y => (y + x / y) / 2.0, 1.0)

Here is the general average-damp procedure and the version of sqrt that uses it:

Java 8
public DoubleUnaryOperator averageDamp(DoubleUnaryOperator f) { return x -> (x + f.applyAsDouble(x)) / 2.0; } public double sqrt(double x) { return fixedpoint(averageDamp(y -> x / y), 1.0); }
Scala
def averageDamp(f: Double => Double) = (x: Double) => (x + f(x)) / 2.0 def sqrt(x: Double) = fixedpoint(averageDamp(y => x / y), 1.0)

Again, the Scala formulation is concise and clear, while still being strongly typed.  By design, Scala is well-suited for expressing functional-programming abstractions, using type inference effectively in order to reduce the burden of specifying types.  Scala is too large for my taste, but functional programming is easy and natural in it.

#functionalprogramming #scala

Friday, November 28, 2014

Functional Programming in Mainstream Languages, Part 4: Higher-Order Functions in JavaScript

(Jump to Table of Contents)
JavaScript is the only universal language for client-side programming for web applications.  If you need to write anything that will run in the browser, JavaScript is currently your only option.  That is unfortunate, since JavaScript is an amalgam of some good parts and some truly horrible parts.  (See the book JavaScript: The Good Parts for more details.)

Among the good parts of JavaScript is its functional part, which is widely used to create callbacks; but it can also be used for functional-programming techniques in general.  For a fascinating but advanced presentation of the elegant way in which Netflix uses functional programming in JavaScript to perform asynchronous processing, see the ACM webinar titled "August 2014: Async JavaScript at Netflix."

In this part of the series, I will show the examples of higher-order functions from Part 3 in JavaScript, and contrast it with the Java 8 implementation.  For the Scheme and Java 7 versions, see Part 3.  First, however, is the simple example from Part 2:

Java 8
IntFunction<IntUnaryOperator> makeAdder = x -> y -> x + y; IntUnaryOperator increment = makeAdder.apply(1); int res = increment.applyAsInt(4);
JavaScript
var makeAdder = function(x) {     return function(y) {         return x + y;     } } var increment = makeAdder(1); var res = increment(4);

As you can see, the JavaScript syntax for defining functions is more verbose but not unreasonable.  In contrast, function calls use the familiar mathematical notation without requiring a spurious method call.

Ecma International publishes the standards for the JavaScript language, under the name ECMAScript.  The draft ECMAScript 6 standard (code named Harmony) introduces "arrow functions"; these are very similar to regular JavaScript functions but use an alternative definition syntax (and improve the semantics in small but significant ways).  For comparison, here is the same code using this notation:

ECMAScript Harmony (still in draft status)
var makeAdder = x => y => x + y; var increment = makeAdder(1); var res = increment(4);

This is very similar to Java 8, except for the use of the => symbol instead of ->.  As in Java 8, this is the simplest syntax for defining functions; there are variants for more complex cases.  For example, if the function has more (or fewer) than one argument, the argument list must be placed in parentheses.  Unlike Java 8, of course, ECMAScript has no syntax to specify the types of the arguments or the returned value.  There are other differences as well; see the proposed draft for further details.

Here is the recursive fixed-point function:

Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) {     double next = f.applyAsDouble(v);     if (Math.abs(next - v) < epsilon)         return next;     else         return fixedpoint(f, next); }
JavaScript
function fixedpoint(f, v) {     var next = f(v);     if (Math.abs(next - v) < epsilon)         return next;     else         return fixedpoint(f, next); }

These are very similar, except for the lack of types in JavaScript.  Here is the imperative version of fixedpoint.  JavaScript, like Java, doesn't guarantee that tail-recursion optimization will always be performed, and this version is more natural for JavaScript.  For these reasons, I find this acceptable practice even when using functional-programming techniques in JavaScript, provided that the variables being modified aren't used in any other context.

Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) {     double prev = v;     double next = v;     do {         prev = next;         next = f.applyAsDouble(next);     } while (Math.abs(next - prev) >= epsilon);     return next; }
JavaScript
function fixedpoint(f, v) {     var next = v;     var prev = v;     do {         prev = next;         next = f(next)     } while (Math.abs(next - prev) >= epsilon);     return next; }

Next is the naive non-terminating version of sqrt that uses fixedpoint:

Java 8
public double sqrt(double x) {     return fixedpoint(y -> x / y, 1.0); }
JavaScript
function sqrt(x) {     return fixedpoint(function(y) {         return x / y;     }, 1); }

The boilerplate code is starting to get in the way here (although not as much as in Java 7, due to the lack of types).  Still, it's becoming difficult to recognize the syntactic role of the constant 1.  This annoyance should disappear with the release of ECMAScript Harmony.

The terminating version of sqrt that uses average damping implicitly looks like this:

Java 8
public double sqrt(double x) {     return fixedpoint(y -> (y + x / y) / 2.0, 1.0); }
JavaScript
function sqrt(x) {     return fixedpoint(function(y) {         return (y + x / y) / 2;     }, 1); }

Finally, here is the average-damp operator and the implementation of sqrt that uses it explicitly:

Java 8
public DoubleUnaryOperator averageDamp(DoubleUnaryOperator f) {     return x -> (x + f.applyAsDouble(x)) / 2.0; } public double sqrt(double x) {     return fixedpoint(averageDamp(y -> x / y), 1.0); }
JavaScript
function averageDamp(f) {     return function(x) {         return (x + f(x)) / 2;     } } function sqrt(x) {     return fixedpoint(averageDamp(function(y) {         return x / y;     }), 1); }

In summary, higher-order functions are quite natural in JavaScript, with a syntax that is somewhat cumbersome but still manageable.  One of the reasons it is simpler than Java 7 is the lack of types, which is a result of the lack of static typing in JavaScript as a whole.  An interesting compromise is the language TypeScript, which supports gradual typing (see my review in a previous post) but compiles into JavaScript.  TypeScript uses the arrow-function notation proposed for ECMAScript, so it allows very concise function definitions that do contain types.  I will address TypeScript in a future post.

Coming up in this series: Scala, Xtend, and Haskell!

#functionalprogramming #javascript

Thursday, November 27, 2014

Functional Programming in Mainstream Languages, Part 3: Higher-Order Functions in Java 7 and 8

(Jump to Table of Contents)
Having seen (in Part 2 of this series) the basic syntax for expressing functions in Java 8, we will now explore some functional-programming techniques and compare how they are expressed in Scheme, Java 7, and Java 8.  The examples are adapted from the book Structure and Interpretation of Computer Programs (2nd Edition) by Abelson and Sussman.

The following function attempts to compute an approximation to a fixed point of a given function f; that is, a value x such that x = f(x).  The computation starts from an initial guess c, and repeatedly applies the function f to it, to get the series f(c), f(f(c)), f(f(f(c))), ....  If this process converges, the result is close to a fixed point, since convergence means that applying f one more time returns (almost) the same value.  However, the process may not converge at all, either because f has no fixed points at all, or because the process oscillates without finding a fixed point.

Here are the implementations of this function in the three languages:

Scheme
(define (fixed-point f first-guess)   (define (try guess) (let ((next (f guess))) (if (< (abs (- guess next)) epsilon) next (try next)))) (try first-guess))
Java 7
public double fixedpoint(UFunc<Double, Double> f, double v) { double next = f.apply(v); if (Math.abs(next - v) < epsilon) return next; else return fixedpoint(f, next); }
Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) { double next = f.applyAsDouble(v); if (Math.abs(next - v) < epsilon)         return next; else         return fixedpoint(f, next); }

Functional implementations usually use recursion instead of loops; the Scheme implementation defines an internal recursive function try, which computes the next value in the series and checks whether it is already close enough to the previous one to consider the series to have converged.  If so, it returns the last value in the series; if not, it continues recursively to try the next value.

The two Java implementations are quite similar, except for the types.  For the Java 7 implementation, I created the following interface to represent unary functions from a domain D to a range R:

Java 7
public interface UFunc<D, R> { R apply(D arg1); }

As I explained in the previous post, Java 8 supplies a large set of interfaces to describe functions; DoubleUnaryOperator is the type of functions that take a double and return a double.

There is a fundamental difference between functional languages (including Scheme) and most other languages, including Java.  In the former, tail-recursive calls are converted by the compiler into jumps that don't add a frame to the execution stack.  This means that such calls behave like loops in terms of their memory consumption.  (For more details see Abelson and Sussman.)  However, the latter languages don't guarantee this property.  As a result, the Java implementations above may use stack space proportional to the number of applications of the function required for convergence.
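To make the consequence concrete, here is a small sketch (my addition, not from the original post): a tail-recursive countdown that exhausts the default JVM stack at a depth a plain loop handles trivially.

```java
public class StackDemo {
    // Tail-recursive countdown; Java still pushes a frame per call.
    static int countDownRecursive(int n) {
        if (n == 0) return 0;
        return countDownRecursive(n - 1); // a tail call, but not optimized
    }

    static int countDownLoop(int n) {
        while (n > 0) n--; // what tail-call optimization would produce
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countDownLoop(10_000_000)); // fine: constant stack space
        try {
            countDownRecursive(10_000_000);
        } catch (StackOverflowError e) {
            // Typical outcome with default JVM stack sizes (exact depth varies).
            System.out.println("recursion overflowed the stack");
        }
    }
}
```

A Scheme compiler is required to run the recursive version in constant stack space; the JVM makes no such promise.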

A more natural style for implementing these methods in Java is imperative rather than functional:

Java 7
public double fixedpoint(UFunc<Double, Double> f, double v) { double prev = v; double next = v; do {         prev = next;         next = f.apply(next); } while (Math.abs(next - prev) >= epsilon); return next; }
Java 8
public double fixedpoint(DoubleUnaryOperator f, double v) { double prev = v; double next = v; do {         prev = next;         next = f.applyAsDouble(next); } while (Math.abs(next - prev) >= epsilon); return next; }

While functional-programming purists frown at this style, which modifies the values of the variables prev and next, this style is more natural for Java, and uses constant space on the stack.  For those reasons, I find it acceptable when using functional techniques in Java, provided that the changes are confined to local variables (rather than fields), and, in particular, locals that aren't referenced in any other method (such as those in inner classes) or function.

We can now try to use the fixedpoint function to compute square roots, using the fact that y is a square root of x if y = x/y; in other words, if y is a fixed point of the function λy.x/y:

Scheme
(define (sqrt x) (fixed-point (lambda (y) (/ x y)) 1.0))
Java 7
public double sqrt(final double x) { return fixedpoint(new UFunc<Double, Double>() {         @Override         public Double apply(Double y) {             return x / y;         } }, 1.0); }
Java 8
public double sqrt(double x) { return fixedpoint(y -> x / y, 1.0); }

Here we need to create an anonymous function ("lambda"), and here the difference between Java 7 and Java 8 becomes very clear.  In Java 7 we need to create the function as an anonymous inner class, with all the necessary boilerplate code, which obscures the essentials of what we are trying to do.  (For example, it is quite difficult to figure out that 1.0 is the second parameter to the fixedpoint method.)  In contrast, in Java 8 we can just use the new lambda notation, and the meaning is immediately clear.

In both versions of Java, inner methods can only refer to variables defined in enclosing methods if those variables are final; that is, if they can't be modified.  In Java 8 the variable doesn't need to be declared final, but it needs to be "effectively final," which means that it is never changed.  (It is as though the compiler added the final modifier itself; this is similar to the type inference done by the Java 8 compiler.)

The reason for this restriction is that local variables have a lifetime (sometimes called "extent") that coincides with the lifetime of the method in which they are defined.  In other words, when the method returns, these variables are no longer accessible.  This is done so that the variables can be allocated on the stack rather than on the heap, avoiding the need for garbage-collecting them.  However, an object containing an inner method that refers to these variables may be used after the method that generated the object has already returned.  If (and only if) the variables are final is it possible to copy their values to the new object so that references to their values are still valid.  This is a poor-man's version of a closure, which is an object containing a function pointer together with all accessible variables in outer scopes.  As we will see in a later post, Scheme (like other dialects of Lisp) supports full closures, which also allow changing bound variables.
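A minimal sketch of the capture rule (my illustration, not from the post): the captured variable may be read inside the lambda, but reassigning it anywhere in the method would make it no longer effectively final, and the lambda would not compile.

```java
import java.util.function.IntUnaryOperator;

public class CaptureDemo {
    static IntUnaryOperator makeAdder(int x) {
        // x is effectively final: read by the lambda, never reassigned.
        return y -> x + y;
        // Adding "x++;" anywhere in this method would produce the compile error:
        // "local variables referenced from a lambda expression must be
        //  final or effectively final".
    }

    public static void main(String[] args) {
        IntUnaryOperator increment = makeAdder(1);
        // The returned function outlives the makeAdder call;
        // the value of x was copied into the function object.
        System.out.println(increment.applyAsInt(4)); // 5
    }
}
```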

Unfortunately, the method above doesn't work in any language.  The problem is that the computational process oscillates between two values and doesn't converge (try it for x=2).  One value is too high, and the other is too low.  However, if we take their average, the process converges; the following implementations all compute the square root:

Scheme
(define (sqrt x) (fixed-point (lambda (y) (/ (+ y (/ x y)) 2.0)) 1.0))
Java 7
public double sqrt(final double x) { return fixedpoint(new UFunc<Double, Double>() { @Override public Double apply(Double y) { return (y + x / y) / 2.0; } }, 1.0); }
Java 8
public double sqrt(double x) { return fixedpoint(y -> (y + x / y) / 2.0, 1.0); }
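The non-convergence described above is easy to reproduce; this small sketch (my addition) iterates λy.x/y for x = 2 and shows the oscillation:

```java
public class OscillationDemo {
    // One iteration of the naive fixed-point process for sqrt(x).
    static double step(double x, double y) {
        return x / y;
    }

    public static void main(String[] args) {
        double x = 2.0;
        double y = 1.0;
        // Iterating y -> x / y from 1.0 bounces between 1.0 and 2.0 forever:
        for (int i = 0; i < 6; i++) {
            System.out.println(y); // prints 1.0, 2.0, 1.0, 2.0, 1.0, 2.0
            y = step(x, y);
        }
        // With average damping, y -> (y + x/y) / 2 converges quickly instead.
    }
}
```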

It turns out that this is a general technique, called average damping.  It takes any function f and returns the function λx.(x+f(x))/2; this function has exactly the same fixed points as f.  In certain cases, however, the computational process of the fixed-point function converges with the new function when it diverges on the original function.  This technique is easily specified as a program:

Scheme
(define (average-damp f) (lambda (x) (average x (f x))))
Java 7
public UFunc<Double, Double> averageDamp(final UFunc<Double, Double> f) { return new UFunc<Double, Double>() { @Override public Double apply(Double x) { return (x + f.apply(x)) / 2.0; } }; }
Java 8
public DoubleUnaryOperator averageDamp(DoubleUnaryOperator f) { return x -> (x + f.applyAsDouble(x)) / 2.0; }
Again, the Java 7 version is obscured by the boilerplate code (not to mention the annoying repetitions of the template type).  Here is the definition of sqrt that uses average damping (this time I will spare you the agony of the Java 7 version):

Scheme
(define (sqrt x) (fixed-point (average-damp (lambda (y) (/ x y))) 1.0))
Java 8
public double sqrt(double x) { return fixedpoint(averageDamp(y -> x / y), 1.0); }

Computationally, this is exactly the same as the previous version, except that we use the original function  λy.x/y after applying the average damp operator to it.

In summary, it is possible to use functional abstraction in Java 7, but it is very painful.  The one case where it is widely used (although not usually thought of as such) is the case of callback functions, which are encapsulated as methods in one-method classes.  Java 8 makes higher-order functions much easier to use, even though the cumbersome type system is still annoying and gets in the way.

In subsequent posts we will see how to express the same ideas in other languages.

#functionalprogramming #java

Functional Programming in Mainstream Languages, Part 2: Introduction to Higher-Order Functions in Java 8

(Jump to Table of Contents)
If immutability is the primary principle of functional programming, higher-order functions are its most important technique.  Higher-order functions (which take or return other functions, typically written as anonymous "lambdas") are now part of many mainstream languages.  In particular, they have been added to Java 8.  In this post I will show how higher-order functions can be expressed in Java 8.

I can't explain the details of the theory and techniques here; I highly recommend the wonderful book Structure and Interpretation of Computer Programs (2nd Edition) by Abelson and Sussman.  Most of the examples in this series of posts are taken from that book.

In functional programming, functions are first-class objects.  This means that functions can be passed as parameters to other functions, they can be returned from functions, they can be stored in data structures; in other words, anything you can do with any other object in the language, such as numbers or strings, you can do with functions.

Some languages fake it: for example, in C you can pass around function pointers, but not function objects.  The major difference is that function objects contain not only the code the function executes, but also its environment, meaning all bound variables known to the function.  This is particularly important (and powerful) when functions are nested inside other functions (or methods, or any other binding scope).

The inventors of LISP struggled with the correct semantics of first-class functions; they called this the "funarg problem."  It turns out that the power of functional programming requires lexical scope, which means that variable names must be resolved with respect to the nesting of functions in the code, rather than to the structure of the calling stack at runtime.  And in order for that to work correctly, it is impossible to allocate function parameters and other local variables on the program stack if those may be referred to in an internal function that is returned from the call.  This complicates the compiler, but is invisible to programmers using the language.

As a simple example, consider a function make-adder that accepts a single number, and returns a function that adds that number to its own argument.  For example, if make-adder is called with the value 1, it will return a function that can be called increment, since it adds one to whatever argument it is given.  Here is how to write this function in Scheme and in Java 8.

Scheme
(define make-adder (lambda (x) (lambda (y) (+ x y)))) (define increment (make-adder 1)) (increment 4)
Java 8
IntFunction<IntUnaryOperator> makeAdder = x -> y -> x + y; IntUnaryOperator increment = makeAdder.apply(1); int res = increment.applyAsInt(4);

The result of the last expression (in either language) is 5.

The syntax is quite different, but the effect is the same.  Let's look at the functions first.  The Scheme syntax is a direct translation of the syntax used in the Lambda Calculus, which was invented by the great logician Alonzo Church, and which forms the theoretical basis for functional programming (in addition to its major importance in the theory of computer science).  In the lambda calculus, the make-adder function would be written like this: λx.λy.x+y.

This syntax describes a function of one argument whose name is x; this is indicated by the notation "λx.".  What follows the dot is the value of the function for a given value of x; in this case, this is another function, whose argument is called y.  The value returned by this function is the sum of x and y.

In Scheme, a function is denoted by the syntax (lambda (v1 v2 ...) body), where v1 v2 ... are the parameter names, and body is an expression representing the value that the function returns.

In Java 8, function syntax has the names of the parameters before the arrow (->), and the body follows the arrow.  It is possible to specify more than one argument, in which case the arguments need to be put in a parenthesized list.  It is also possible to specify the types of the arguments, but fortunately the designers of Java have finally realized what a burden repetitive types are on programs, and Java 7 introduced type inference into the compiler in order to eliminate some (but not enough) of this repetition.  Java 8 extends this mechanism also to lambdas.  In this case, the compiler can infer the types of the arguments from the type declared for the function makeAdder.  This type needs some explanation.

Java 8 has a new set of interfaces to describe various types of functions.  For example, Function<T,R> is the type of functions that take an argument of type T and return a result of type R.  However, there is a whole set of interfaces that cater for primitive types, in order to avoid boxing and unboxing.  (This early mistake in the design of Java has been giving programmers grief since Java 1, and is unlikely to go away.)  So IntFunction<R> denotes functions that take an int and return a value of type R.

What is the return type of makeAdder?  It is a function that takes an int and returns an int; such functions are described by the interface IntUnaryOperator.  The corresponding interface for functions that take an object of type T and return an object of the same type is UnaryOperator<T>.
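
Putting these interfaces together, the curried adder can be written with its full type spelled out; a sketch contrasting the primitive-specialized version with the fully boxed one (the class name is mine):

```java
import java.util.function.Function;
import java.util.function.IntFunction;
import java.util.function.IntUnaryOperator;

public class MakeAdder {
    // Primitive-specialized: the int arguments are never boxed.
    static final IntFunction<IntUnaryOperator> makeAdder = x -> y -> x + y;
    // Fully boxed: every argument and result is wrapped in an Integer.
    static final Function<Integer, Function<Integer, Integer>> makeAdderBoxed =
            x -> y -> x + y;

    public static void main(String[] args) {
        IntUnaryOperator increment = makeAdder.apply(1);
        System.out.println(increment.applyAsInt(4));          // 5
        System.out.println(makeAdderBoxed.apply(1).apply(4)); // 5
    }
}
```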

As I will discuss in a later post, typed functional languages such as ML and Haskell have a rich set of functional types and a concise yet very expressive type notation.  The type of makeAdder would be specified in such languages simply as int -> int -> int, a notation that is similar to the way functions are defined.  This one indicates a function that receives an int, and returns a function of type int -> int, that is, a function that takes an int and returns an int.  Such a notation would make type specifications in Java much less of a burden.

#functionalprogramming #java8

Sunday, September 21, 2014

Functional Programming in Mainstream Languages, Part 1: What and Why

(Jump to Table of Contents)
John Backus worked for IBM from 1950 until 1991 and was the principal developer of FORTRAN, one of the first high-level programming languages.  He is immortalized in the acronym BNF, which stands for "Backus-Naur Form" (or sometimes "Backus Normal Form").  In 1977 he received the ACM Turing Award (the "Nobel prize of Computer Science"); his award lecture was titled "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs."  This was in a way a reaction to FORTRAN, and describes the vision of functional programming.

The "von Neumann style" is a model of computation that is very close to the way computers actually work.  Its main component is the memory, which is composed of individually-addressable cells.  Each memory cell can be read and modified by the computer executing a program, which is also stored in memory.  This model was very useful for building early computers, and was naturally taken as the basis for the first high-level languages.  The closer the model of the language is to the hardware model, the easier it is to compile programs to machine instructions that the hardware can execute directly.

Backus, however, made the point that the language model can be more abstract than the hardware model.  While this puts a greater strain on compiler writers, it can free (or "liberate") programmers from a lot of the drudgery that is involved in programming directly to the hardware model.

The most basic principle of functional programming is the absence of any side effects.  This means that functional programming languages do not have the concept of variables as places (representing memory cells) that can be changed.  Values in functional programs are immutable; that is, they can't change.  This concept extends to data structures as well; for example, lists, maps, and "objects" in functional languages can't be modified, in total or in part.  And it also extends to interactions of the program with the outside world.

This may seem like an impossible restriction; how can you describe a computation without changing values?  In fact, it is quite easy and natural, but it does take some getting used to, especially if you are used to programming in imperative languages.  These include most mainstream languages, in which variables and mutable data structures are the norm.

The major benefit of functional programming is its very simple computational model, which is very close to mathematical thinking.  Because values in a functional program can't change, functional programs enjoy the property of referential transparency.  This means that if an expression has some value, it will always have the same value.  (This is not true in imperative languages; just think of a parameterless function that returns pseudo-random numbers.)  This means that you can use the well-known substitution principle from mathematics: if a=b then you can replace a by b everywhere it appears.  This is not true in imperative languages, since even if a=b now, they may be different after any change that may affect one or both of the expressions.  The computational model of imperative languages therefore must incorporate the notion of time; every assertion may be true at one time and false in another.  This results in a much more complicated model.
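
The contrast can be made concrete with a small sketch in Java (the class and method names are mine, chosen for illustration):

```java
public class Transparency {
    private static int counter = 0;

    // Referentially transparent: the same argument always yields the
    // same result, so square(3) can be substituted by 9 anywhere.
    static int square(int x) {
        return x * x;
    }

    // Not transparent: each call observes and changes hidden state,
    // so two calls with identical (empty) arguments differ.
    static int next() {
        return counter++;
    }

    public static void main(String[] args) {
        System.out.println(square(3) == square(3)); // true: substitution is safe
        System.out.println(next() == next());       // false: 0 != 1
    }
}
```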

Most programmers don't bother thinking about the computational model underlying their programming language, but it is crucial for writing, understanding, and reasoning about programs.  A simple model makes all these simpler.  Programming in a language that has a complex model is like walking around with a heavy weight on your back; you don't notice you're carrying it until it is suddenly gone.

Another important property of functional programs, which is becoming increasingly important, is that it is much easier to parallelize them.  Imperative programs suffer from problems such as race conditions and non-atomic updates, which are a result of side effects.  By eliminating side effects, functional programs get rid of these undesirable consequences as well.  One commercially-successful functional language that makes great use of this fact is Erlang.

The functional paradigm is Turing-complete; that is, it can express any computation that any conceivable machine can perform.  However, creating functional analogues of mutable data structures is complicated (although this work can mostly be relegated to library writers).  It is therefore unrealistic to expect most programmers to abandon mainstream languages in favor of functional languages.
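
To get a feel for what a functional analogue of a mutable structure looks like, here is a minimal persistent list sketched in Java (the class and its names are mine, for illustration; real libraries are considerably more elaborate):

```java
public class PList {
    final int head;
    final PList tail; // null marks the empty list

    private PList(int head, PList tail) {
        this.head = head;
        this.tail = tail;
    }

    // "Adding" an element builds a new list; the old list is untouched,
    // and all of its cells are shared by the new one.
    static PList cons(int head, PList tail) {
        return new PList(head, tail);
    }

    static int length(PList l) {
        return l == null ? 0 : 1 + length(l.tail);
    }

    public static void main(String[] args) {
        PList xs = cons(2, cons(3, null)); // the list [2, 3]
        PList ys = cons(1, xs);            // [1, 2, 3], sharing xs
        System.out.println(length(xs));    // 2: xs is unchanged
        System.out.println(length(ys));    // 3
    }
}
```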

Since functional programming is predicated on the lack of side effects, it seems to be inherently contradictory to imperative programming.  However, in the last decade or two, there is a growing movement to incorporate functional programming techniques into scripting and object-oriented languages, including JavaScript, Ruby, Scala, and, more recently, Java 8.  There is also much academic work on the integration of the functional and object-oriented paradigms; so much so that one of the predictions of Murphy-Hill and Grossman in their paper "How Programming Languages Will Co-evolve with Software Engineering: A Bright Decade Ahead" (Future of Software Engineering, 2014) is the blurring between these paradigms (see my review of that paper).

How can these conflicting paradigms be reconciled?  The point is that even if you want to use side effects (including assignment and mutation of data structures), you don't have to go the whole hog and use them everywhere.  By making some of your data structures immutable, you will reap some of the benefits of functional programming.  Bertrand Meyer, one of the gurus of object-oriented programming, includes the Command-Query Separation principle in the chapter on designing class interfaces in his landmark book, Object-Oriented Software Construction (2nd Edition).  This principle requires methods to be either queries, which return information but don't mutate objects, or commands, which mutate objects but don't return any results, but not both.  This separation results in some measure of referential transparency, because it tends to reduce the number of mutations.
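
Command-Query Separation can be sketched in a few lines of Java; this hypothetical account class is mine, not Meyer's:

```java
public class BankAccount {
    private int balance = 0;

    // Query: returns information but changes nothing, so it is
    // safe to call any number of times, in any order.
    public int balance() {
        return balance;
    }

    // Command: mutates state but returns no result, so it cannot
    // be accidentally nested inside a larger expression.
    public void deposit(int amount) {
        balance += amount;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(100);
        // Repeating a query gives the same answer every time.
        System.out.println(account.balance()); // 100
        System.out.println(account.balance()); // 100
    }
}
```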

In addition to immutability, the most important aspects of functional programming that are also relevant to imperative languages are higher-order functions and streams.  In the following posts I will show examples of functional programming techniques, both in (fully or mostly) functional languages and in mainstream languages including Java 8.

This series of posts has the following parts:

Introduction
Part 1: What and Why

Higher-order functions
Part 2: Java 8 (Introduction)
Part 3: Java 7 and Java 8
Part 4: JavaScript
Part 5: Scala
Part 6: Xtend
Part 7: Haskell

#functionalprogramming

Sunday, August 17, 2014

Review: How Programming Languages Will Co-evolve with Software Engineering

Until recently, programming languages have been developed with little consideration for programming environments and tools, with the notable exception of Smalltalk, which was co-developed with an elaborate environment from the start.  Murphy-Hill and Grossman note in their paper "How Programming Languages Will Co-evolve with Software Engineering: A Bright Decade Ahead" (Future of Software Engineering, 2014) that this situation is changing, and provide seven predictions on the convergence of programming languages and software engineering (or, more precisely, programming environments and tools) in the next decade.

They predict that future programming languages will be designed with integrated development environments (IDEs) and social networks in mind.  One example they give for the former is the addition of dot notation to F#, allowing a syntax such as x.Length in addition to the functional length(x).  (Note, though, that in F# these have different meanings.)  The authors claim that the primary benefit of this notation is the ability of the IDE to provide code-completion for method (or function) names when typing "x."; this facility doesn't (yet) exist for the functional notation.  I have my doubts about this being the major reason for the inclusion of members in F#, but it could certainly be a reason for introducing dot notation as syntactic sugar for function calls in new languages.

Another prediction is that source code will no longer be limited to a single sequence of characters, instead being replaced by multiple simultaneous views.  Multiple views are already available through the use of various tools; what I would call refactor-and-undo is probably the most widely used method, but there are various tools specifically targeted at showing multiple views of a program.  These multiple views can be stored together as the official representation of the program, instead of being generated from a single "ground truth" consisting of ASCII (or Unicode) strings.

Murphy-Hill and Grossman expect that future language design will be influenced by controlled experiments on how programmers use language features and to what extent.  With the widespread availability of open-source projects, it is possible to study this rigorously, and select language features based on what really works "in the wild." This is already happening to some extent with existing languages, such as C#.  I would advocate some care when using this method, since we know that some programming features take a long time to achieve adoption; a case in point is functional-programming techniques (see the last prediction below).  We should be careful not to eliminate good ideas just because they don't receive a wide following in the short term.

Turning to other technologies, the next prediction is that formal verification will become a reality for developers, rather than being limited to a small group of highly experienced researchers.  This will require the development of proof assistants that are stronger and more usable than those available today.  Obviously, there is a lot of work to be done before the use of formal verification becomes widespread, but advances in this area provide some support for the authors' optimism.

Gradual typing is a technique that allows developers to write programs with little or no type information.  As the programs mature, types can be added in order to raise the level of confidence in the correctness of the program by making more properties statically checkable. Murphy-Hill and Grossman predict that gradual typing will become more widespread in new programming languages, and will expand to include properties that can be expressed as types but aren't available in popular contemporary languages.

Existing programming languages are being used in new domains, such as parallel and distributed programming and big data, even though these languages aren't particularly suited to these domains.  The success of Erlang is an example of how well a language built from the ground up to support distributed computation can do when applied appropriately. Murphy-Hill and Grossman predict that newer languages will focus more on new paradigms at the expense of traditional single-user (and often still single-threaded) applications.

This is related to the final prediction, which is that functional-programming concepts will become more widespread, and the distinction between functional and imperative (or object-oriented) programming will be more and more blurred.  This trend is already happening, with the introduction of function objects ("lambdas") into popular languages such as C++, Java, C#, and Ruby, and with mixed languages such as Scala.  First-class function objects are one primary feature of functional programming; the other is the use of immutable data, which is particularly important in distributed applications.

There is a recent (and not so recent) body of work that points in the direction of each of these predictions.  Unfortunately, there are many influences on the design of programming languages, and various features of currently-popular languages were not really designed but added without serious consideration.  I am therefore not quite as optimistic as the authors; however, the points they make are insightful and the paper is well-worth reading.  Be sure to put a link to the paper in your 2024 calendar, in order to see which of the predictions will be realized by then!

Thursday, June 5, 2014

Xtend, the Gentle and Functional Java

I didn't want to learn yet another programming language.  But I had to.

I needed to create an Eclipse editor for a domain-specific language, and I needed to customize content-assist and completion.  A friend recommended Xtext as a quick way to get an Eclipse editor with all the goodies up and running.  And it was quite easy to start with; I had to translate my ANTLR grammar into Xtext, which wasn't too painful, and within a day or so I had a working editor for my DSL.

The next step was to add custom content-assist to the editor.  This turned out to be relatively easy as well, except that the file Xtext generated for me was in this new language, Xtend.  So I had to learn the language.  Unlike quite a few other people I know, I first read the manual.  I admit I did this reluctantly, but as I kept reading I got more and more excited.

I've been using Java for many years, both in teaching and in developing applications.  It is certainly nicer than C++, and the refactoring support in Eclipse is wonderful, but there were other languages I used in the past that I enjoyed much more than Java.  Common Lisp, especially on the Lisp Machine, was a great language with a great environment.  Scheme was a special pleasure, being a very compact yet extremely powerful language.  In contrast, Java is very limiting and extremely verbose!

First, the substance.  I am a great fan of functional programming; it is a limiting yet at the same time liberating paradigm.  You don't get to make any assignments or other state changes; in return, the computation model is very simple and similar to standard mathematics.  Functional languages enjoy referential transparency, which means that the same expression in the same environment will always yield the same value.  In contrast, when side effects are possible, it is possible to call the same function or method multiple times and get different results each time, since something in the environment may have changed.  While object-oriented programming seems to be all about changing state, it is still possible to use functional-programming techniques to minimize state changes to those places where they are really useful.  Item 15 of Joshua Bloch's excellent book Effective Java (2nd Edition) is "Minimize mutability."  And Bertrand Meyer, designer of the Eiffel programming language, expounds in his classic text Object-Oriented Software Construction (2nd Edition) on his Command-Query Separation Principle.  This principle advocates minimizing state changes, and using functional programming techniques inside object-oriented programs as much as possible.  The Scala language is a more recent attempt to reconcile the functional and object-oriented programming styles.  Java allows a functional programming style, but makes it unnecessarily difficult.  (A ray of hope, though: Java 8 is going to have lambda notation!)

And then, there is the form.  Java is obnoxiously verbose, especially in its requirements for types everywhere.  I'm not necessarily objecting to strong typing here, but even with strong typing there are many type inferences that the compiler can make.  For example, the fact that I have to write something like this is infuriating:

HashMap<String, List<Integer>> map = new HashMap<String, List<Integer>>();
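
To be fair, Java 7's diamond operator already removes the repetition on the right-hand side, though the left-hand type must still be written in full; a small sketch (the variable names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Diamond {
    public static void main(String[] args) {
        // Java 7: the type arguments on the right are inferred
        // from the declared type on the left.
        Map<String, List<Integer>> map = new HashMap<>();
        map.put("primes", new ArrayList<>(Arrays.asList(2, 3, 5)));
        System.out.println(map.get("primes").size()); // 3
    }
}
```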

One way of thinking about Xtend is as a simpler syntax for Java.  The Xtend compiler does extensive type inference, and most type specifications are optional.  But Xtend is much more than that.  It provides many conveniences, and is particularly well-geared to the use of functional techniques.  Functions ("lambda expressions") are an essential part of the language, and are a pleasure to use and read.  They simplify a lot of the boilerplate code needed in Java for callbacks of various kinds.  In addition, Xtend libraries supply many utilities in the functional style, and these look like an integral part of the language due to Xtend's extension capabilities, which allow external libraries to add methods that can be called as though they existed in the original Java classes.  Another great convenience is the much-enhanced switch statement, which, among other things, can switch on types without requiring type casts.

At the same time, Xtend compiles directly into (readable) Java, not into bytecode.  So it is 100% compatible with Java, and can be used in any combination with Java classes.  For example, Xtend classes can inherit from Java classes as well as the other way round.  Xtend is well-integrated with Eclipse, and offers most of the conveniences of Java development, including debugging the Xtend sources.  The main gap is refactoring support, although Rename, for example, is available using the Alt-Shift-R shortcut.

If you are stuck with Java but want a nicer syntax, I recommend that you take a good long look at Xtend.  You can start with the 50-second video on the Xtend website, then read the manual; surprisingly, it is quite readable!

#Xtend #FunctionalProgramming #Java

Monday, March 10, 2014

Review: Mars Code



Despite a few spectacular failures (such as the Mars Climate Orbiter and Mars Polar Lander), NASA is one of the top organizations worldwide in Systems and Software Engineering.  The Mars rover Opportunity has recently celebrated ten years of operation on Mars, and the newer Curiosity has been operating since August 2012, after a landing maneuver of unprecedented complexity.  This is quite a feat, considering the fact that the nearest repair technician is over 30 million miles away, and communications delay is about 15 minutes on average.  How do they do it?

The paper Mars Code by Gerard J. Holzmann (Communications of the ACM,  Feb. 2014) gives a rare glimpse into the process used in programming the Mars rovers.  It focuses on three methods used to reduce the risk of failure.  An important feature of all three methods is the use of automation.  The first consists of a set of risk-related coding rules, based on actual risks observed in previous missions.  For example, one rule requires the code to pass compilation and static analyzer checks without any warnings.  Another requires the code to have a minimum assertion density of 2%.  The assertions are enabled not only for testing, but also during the mission, when violations place the rover into a safe mode, during which failures can be diagnosed and fixed.  All the rules are checked automatically when the code is compiled. 

The second method consists of tool-based code review.  Four different static-analysis tools are used, with an automated system to manage the interaction between developers and reviewers in response to tool warnings.  In over 80% of the cases, developers agreed with the warnings and fixed the code without reviewer intervention.  An interesting observation Holzmann makes is that there was surprisingly little overlap between the warnings issued by the four tools.  Apparently, each tool has its own areas of strength, and they are different enough that it is useful to use them all.

The third and strongest method is the use of a model checking tool for race conditions, one of the most difficult types of bugs to discover.  This cannot be done for the whole code base, but was used extensively in the Curiosity mission to verify critical multithreaded algorithms.   In some cases, the tool works directly from the C source code.  For larger subsystems, input models had to be prepared manually in the language of the model checker; this can reduce the input size by over one order of magnitude.

The challenges faced by NASA are obviously unique, and are very different from those involved in cloud-based mobile applications, to take a popular example.  There are many papers and books describing methodologies for developing such systems.  However, developing safety-critical systems for cars and aircraft, medical devices, and power grids, or financial systems that must be resilient to faults and deliberate attacks, requires reliability levels similar to those of NASA.  For developers of such systems, papers such as Holzmann’s should be required reading.

A related article you may want to read is They Write the Right Stuff, by Charles Fishman.  This article, from 1996, describes the culture of the group that wrote the software for the space shuttle, and how their development process led to the very high quality of their product.  For example, whenever a mistake was found in the code, it was traced to the corresponding problem in the process that allowed the mistake to be written and pass reviews and testing.  Then, the team investigated what other bugs could have been caused by the same process failure.  In this way, a bug in the control of the shuttle's robotic manipulator arm was traced back to a certain type of problem.  When investigating other effects of the same problem, a much more serious bug was found in the control of the high-gain antenna.  The bug was fixed before it had a chance to manifest itself.

It takes a lot of discipline and investment to work this way, but the result is a much better, more reliable product.  Sometimes it isn't all that important; other times, it's the difference between life and death.