My favourite session of PDC 2008 was by Anders Hejlsberg as he described the Future of C#. Anders is an excellent speaker, and it was educational watching how he made fairly complex type systems come across to an audience more used to simpler programming languages. I say this after attending ICFP recently, where that wasn’t true of all the presentations!
The high-level message was exactly what I wanted to hear: in order to exploit multi-core processors more effectively, Microsoft’s languages are focussing on improvements in three areas:
Declarative constructs: having the programmer define overall goals and constraints enables the tool-chain to scale computation across multiple processors more effectively. An example of this in C# 3.0 is Language Integrated Query (LINQ), a domain-specific language for defining queries.
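To make the declarative point concrete, here is a minimal LINQ query of my own (not one of Anders's examples): the code states what result is wanted and leaves the iteration strategy to the implementation.

```csharp
using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        int[] numbers = { 5, 3, 8, 1, 4 };

        // Declarative: we say *what* we want (the even numbers, in order),
        // not *how* to loop over the array to compute them.
        var evens = from n in numbers
                    where n % 2 == 0
                    orderby n
                    select n;

        foreach (int n in evens)
            Console.WriteLine(n);   // prints 4 then 8
    }
}
```

Because the query is a description rather than a loop, the runtime is free to choose how to execute it, which is exactly what makes the parallel extensions discussed below possible.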
Dynamic interoperability: Although Anders is a big fan of static type systems (robust, high-performance, enable intelligent tools), he pointed out that a lot of the systems that C# programmers have to deal with are fundamentally dynamic. Examples are web interfaces such as SOAP/REST APIs, COM objects for talking to other Microsoft applications, or scripting language integration (e.g. Python or Ruby).
Concurrent programming: dealing with multiple cores requires splitting computation into parallel threads, and the resulting mass of parallel code is often hard to debug (e.g. obscure concurrency issues can be difficult to reproduce) and hard to reason about statically.
An interesting aside was his assertion about “co-evolution”. Microsoft have a number of languages which have recently been unified under the CLR, such as Visual Basic, C# and most recently F#. Rather than have one language race ahead of the others, they like to “borrow” features as appropriate into other languages. This is obviously made easier by having a common run-time foundation, and an example of co-evolution is the introduction into C# 3.0 of lambdas, which were already present in F#.
Anders observes that past attempts to automatically parallelize software have not yielded good results, and so support needs to be built into the language to do “top-down” parallelism rather than have the compiler infer it from the bottom up. To this end, they are introducing the Parallel Extensions into the .NET framework. These make use of functional features to parallelize code, such as LINQ queries or CPU-intensive computations.
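As a sketch of what the Parallel Extensions look like in practice (my own toy example, assuming the PLINQ AsParallel operator from the CTP): turning a declarative query into a parallel one is close to a one-word edit.

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    static void Main()
    {
        // Sequential version: Enumerable.Range(2, 99999).Count(IsPrime).
        // Adding AsParallel() lets the runtime partition the work across
        // cores; the query itself is otherwise unchanged.
        int primes = Enumerable.Range(2, 99999)
                               .AsParallel()
                               .Count(IsPrime);
        Console.WriteLine(primes);   // number of primes below 100,000: 9592
    }
}
```

The win here is exactly the “top-down” point: the programmer declares that the query is safe to parallelize, and the framework does the scheduling, rather than a compiler trying to prove parallelism bottom-up.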
For an example of how this code looks, check out Jurgen Van Gael’s post on using F# with the Task Parallel Library. I wanted to learn more about this, and wandered over to the Hands On Labs where F# and Visual Studio 10 were all pre-installed. I then ran into an extremely cool feature of F#… asynchronous workflows.
F# introduces an extension to the usual ML let operator in the form of let!. As Don Syme explains on his blog, this can be interpreted as “run the asynchronous computation on the right and wait for its result. If necessary suspend the rest of the workflow as a callback awaiting some system event”. So this construct lets you write straight-line code which can potentially block, without the hassle of spawning threads or encoding continuation-passing style in the code. Already a nice improvement over OCaml! (although to be fair I’ve not had a chance to check out some of the parallel OCaml extensions).
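A tiny illustration of the idea (my own sketch, not Don's): each workflow below suspends at do!/let! without holding on to an OS thread, and Async.Parallel composes them.

```fsharp
// A toy asynchronous computation: "sleep, then produce a value".
let work (name : string) (ms : int) =
    async {
        do! Async.Sleep ms      // suspends the workflow, freeing the thread
        return name
    }

let labelled (prefix : string) w =
    async {
        let! result = w         // let!: run the async on the right and wait
        return prefix + result  // straight-line code resumes here
    }

let results =
    [ labelled "job-" (work "a" 50); labelled "job-" (work "b" 10) ]
    |> Async.Parallel           // run both workflows concurrently
    |> Async.RunSynchronously
// results = [| "job-a"; "job-b" |]  (Async.Parallel preserves input order)
```

Note there is no explicit thread creation and no callback plumbing; the async builder does the continuation-passing transformation behind the scenes.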
Dynamic programming is the main focus of C# 4.0. In order to support dynamic objects as first-class citizens in a statically typed world, they introduce a dynamic static type. This forces all method resolution on such an object to happen at run-time and disables static checks by the compiler (aside from looking for mix-ups between dynamic and statically typed objects).
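A quick sketch of my own showing what the new static type looks like in use:

```csharp
using System;

class DynamicDemo
{
    static void Main()
    {
        dynamic x = "hello";
        // The compiler makes no static check here: .Length is resolved
        // against the run-time type (string) when this line executes.
        Console.WriteLine(x.Length);   // prints 5

        x = 42;                        // a dynamic variable may change shape
        Console.WriteLine(x + 1);      // resolved against int: prints 43
    }
}
```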
The actual definition of method resolvers is pretty straightforward; he demonstrated custom getters and setters (similar to Python, for example) by using the IDynamicObject interface to define actions to take when properties are accessed. His example was the usual dictionary wrapper, which mapped setting arbitrary properties onto an internal dictionary variable.
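In the C# 4.0 that eventually shipped, the easiest way to write that dictionary wrapper is the DynamicObject base class (the PDC CTP spelled the hook as the IDynamicObject interface); a minimal version of the demo might look like:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// A bag whose "properties" are backed by a dictionary: setting bag.Foo
// stores under key "Foo", and reading bag.Foo looks that key up.
class Bag : DynamicObject
{
    readonly Dictionary<string, object> store =
        new Dictionary<string, object>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        store[binder.Name] = value;   // custom setter: write to dictionary
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return store.TryGetValue(binder.Name, out result);
    }
}

class Program
{
    static void Main()
    {
        dynamic bag = new Bag();
        bag.City = "Cambridge";       // no City property declared anywhere
        Console.WriteLine(bag.City);  // prints "Cambridge"
    }
}
```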
Another improvement in this space is the addition of optional arguments and named arguments. Both of these have well-defined semantics (optional arguments have to come after non-optional ones, and evaluation of arguments is left-to-right) and are purely syntactic improvements with no run-time cost. One of the best examples he showed was using these for COM interoperability. In current versions of C#, due to the lack of named arguments, a common call such as “Save As” might require 12 or more stub arguments to be specified as ref missing. Now, those long, repetitive lines can be folded down to only the arguments which are required.
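A sketch using a made-up SaveAs signature (the real COM interop methods have far more parameters, which is exactly the point):

```csharp
using System;

class Document
{
    // Optional parameters must follow required ones and get compile-time
    // defaults, so there is no run-time cost to omitting them.
    static string SaveAs(string path,
                         string format = "Word97",
                         bool readOnly = false,
                         bool addToRecent = true)
    {
        return string.Format("{0} format={1} readOnly={2} recent={3}",
                             path, format, readOnly, addToRecent);
    }

    static void Main()
    {
        // Named arguments: name only the argument that matters, instead
        // of passing a dozen "ref missing" placeholders in order.
        Console.WriteLine(SaveAs("report.doc", readOnly: true));
        // prints "report.doc format=Word97 readOnly=True recent=True"
    }
}
```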
Later on at the Future of Programming Languages panel session, Anders talked about meta-programming as being one of the future improvements he’s looking at. Currently, there is a lot of ad-hoc code generation in place when creating Windows applications, and unifying this into the language would give safety and maintainability improvements.
In order to do this, for C# 5.0 they are rewriting the compiler to be self-hosting in C#, since it has historically been a C++ application. This permits them to switch the compiler from being a traditional “black box” compiler to a hosted .NET service which can be called directly by .NET programs in order to do dynamic run-time compilation of code. Other portions of the compiler chain are also exposed to permit incremental program construction by third-party code.
He demonstrated this with a pretty nifty C# top-level, into which he directly typed Winforms code to construct a window with a few simple buttons using the C# compiler server. Not to be outdone by this, Miguel de Icaza promptly upstaged Anders at his (fantastic) Mono 2.2 session. He demonstrated the new C# shell which is present in Mono trunk builds and can essentially be used like an OCaml or Python top-level to mess around and manipulate C# code. He also talked about embedded Mono and SIMD support which pushes their compiler ahead of Microsoft’s in the 3D performance game.
I’m firmly convinced about the potential of F# now. I had the opportunity at the Open Spaces area to quiz Scott Guthrie about whether or not F# was a toy language. He replied using the same arguments as Anders that the higher-level language approach (declarative, functional) was very important strategically to Microsoft to let their developer platform continue to survive in a multi-core world.
This boils down to the individual languages not being that important any more (as seen by the sharing of features between C# and F#), and the underlying execution layer (the CLR/DLR) adding efficient support. Now any old language can adopt higher-level features without having to re-do all the optimization grunt work again and again. Much like Xen offers a new golden age for innovative new OS research by freeing programmers from writing a million hardware device drivers, it looks like .NET is ushering in a new age of programming language innovation!
Inspired by the PDC talks, I’ve got MonoDevelop and F# up and running on my MacBook Air, and am just playing with GTK# and Cocoa#. If this works as well as OCaml, then it might finally be time to abandon the old stalwart and move to a new language for my day-to-day stuff!