Choosing a language

One of my former students (hi Dave!) is now in graduate school, taking a class on scientific computing. He posted on Facebook about learning Fortran, and said that his professor recommends Fortran for scientific applications. That got me thinking about my history with programming languages, and about the choice of language in general. Here are a few of my thoughts.

The first language I did anything at all with was BASIC. I used it both on our Apple //e at home and on PET16s at school. I learned some basic programming concepts, but mostly I was just playing around. At that point, I was much more interested in playing games on the computer than in making it do interesting things. One important thing did come out of that period, though: I became aware that one could make the computer do interesting things, and I acquired a sense of what kind of thinking and work was required to do so.

In high school, I got a job at the University of Washington as a bottle washer. My dad worked in the Civil Engineering department as a technician, and he knew someone up in the environmental engineering group who needed an assistant to wash glassware, do stockroom inventory, and that sort of thing. So I was hired. Since I was on the payroll of the University, I had access to their multiuser computer systems: a couple running VMS and a couple running Unix. I discovered Usenet, and MUDs. This was the late 1980s: pre-WWW, even pre-Gopher. I had read Cliff Stoll's The Cuckoo's Egg, and between that and my early exposure to BBS culture (we didn't have a modem, but nearly all of my friends did), I was enamored with the idea of hacking. I decided to learn C. I bought a book and started working through examples.

As a freshman in college, I had thoughts about computer science. I took a programming class in which we learned Pascal, and another in which we used 68000 assembly language. Since I didn't have a work ethic, and skipped many of the homework assignments, I wouldn't say I really learned either of those languages, but at least I had been exposed to them.

I took two years off from school to be a missionary. I learned to work. When I got back, CS as a subject didn't seem as enticing to me any more, so I stuck with physics (which had been the original plan). For one thing, I really clicked with some of my physics professors, and I found ways to use computers to solve physics problems that struck me as more interesting than programming for its own sake. I picked up a student version of Mathematica and wrote a fairly simple shooting code to get the energy eigenvalues for various one-dimensional potentials. Mathematica was nice in two ways: I had easy access to plotting facilities (something that had been lacking, or at least very difficult, in my computer work so far), and all kinds of tasks were built in, so I didn't have to worry about how to integrate an ODE; I just had to set the problem up so Mathematica could do it.
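For flavor, here is a rough sketch in Python of what that kind of shooting code does (this is not the original Mathematica code; the potential, units, and tolerances are illustrative). You integrate the time-independent Schrödinger equation outward for a trial energy, then adjust the energy until the wavefunction satisfies the boundary condition at the far end:

    # Shooting method for 1-D bound states; a sketch, assuming scipy.
    # Units: hbar = m = 1. The potential is an infinite square well
    # on [0, 1], so the exact answer is E_n = n^2 pi^2 / 2.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def V(x):
        return 0.0  # free particle inside the box

    def psi_end(E, a=0.0, b=1.0):
        """Integrate psi'' = 2 (V - E) psi from a to b; return psi(b)."""
        def rhs(x, y):
            psi, dpsi = y
            return [dpsi, 2.0 * (V(x) - E) * psi]
        sol = solve_ivp(rhs, (a, b), [0.0, 1.0], rtol=1e-8, atol=1e-10)
        return sol.y[0, -1]

    # An eigenvalue is an energy where psi(b) crosses zero;
    # bracket the first crossing and let a root finder polish it.
    E1 = brentq(psi_end, 1.0, 10.0)
    print(E1, np.pi**2 / 2)  # these should agree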

Not long after that, I started working on a project with a professor. Fortran was the language he knew, so I learned Fortran. This was 1996 or so, but to him, Fortran meant Fortran 77, so I didn't pick up Fortran 90 until later. Working with him was my first exposure to sophisticated numerical algorithms, and to all of the things that can go wrong. The problem we were working on was a non-neutral plasma simulation; evolving the plasma amounted to advecting a density around phase space. We were looking into some accuracy issues (in particular, the code a previous student had written didn't quite conserve energy in some situations), and, to be perfectly honest, both the physics and the numerics were a bit over my head at that point. I'm not sure I really realized that until considerably later, though.

In grad school, I continued to use Fortran for most of my work. I had learned Perl at some point along the way to do scripty kinds of things, and I used it a lot for setting up runs and other little tasks that would have been tedious otherwise. I didn't use it for anything numeric, though. I was introduced to Octave, and later IDL. By this point, having used a reasonable number of languages for different things, I realized that (for me at least) I can hold the syntax of about two languages in my head at a time, and they tend to be the languages I have used the most recently. I say two because I was able to use both Perl and Fortran contemporaneously, but that may be optimistic; I probably had to spend more time looking things up than I would have if I had used only one of them exclusively. I was (and still am) intrigued by functional languages like Lisp and Haskell, and I even learned enough Lisp to write a couple of little functions in Emacs (I also worked through about half a chapter of SICP, which uses Scheme, so that counts, I suppose), but there always seemed to be a pretty big gap between the kinds of toy problems I could learn how to do in online tutorials and getting real work done. I also typically frame my problems in ways that involve a lot of state, and that's hard to deal with in a functional language.

I got a Mac, and later an iPod touch, and picked up some Objective-C. In doing a little web work, I've learned a smattering of JavaScript. Because it was all the rage, I picked up a book on Java and worked through some examples; I didn't like it, though, so it didn't stick. These days, I mostly use Python.

So, enough history. On to the question of choosing a language.

There's a quote from Alan Perlis: "A language that doesn't affect the way you think about programming is not worth knowing." He also said, "You can measure a programmer's perspective by noting his attitude on the continuing vitality of FORTRAN." I'll come back to the latter quote in a minute (it was published in 1982 -- 30 years ago; the idea that Fortran is still kicking around is pretty remarkable). The first quote, I think, applies really well to choosing a programming language to learn, but not so well to choosing a programming language to actually get work done. Personally, I think nearly all programmers should be in the habit of regularly learning new languages. They expand our catalog of idioms, they change the way we think about algorithms, data structures, and the like, and they can improve the quality of our programming in the languages we already know. Having said that, time is limited. I've been exposed to a number of languages (as I mentioned in the history section), but I don't feel that I have really mastered very many. I've written significant (to me) programs in Fortran and Python. Even in those cases, my degree of mastery is questionable.

I guess what I'm trying to say is that my opinions on the relative quality of programming languages should be taken with a grain of salt. But that's okay, because the first principle I would say you ought to employ in choosing a language for real work is: pick a language you're comfortable with. In any significant coding project, you end up learning new things about the language you're using and about what you're trying to do (maybe that perspective just reveals that I haven't really mastered any programming languages). In any event, you're going to spend quite a bit of time wrestling with the limitations of whatever language you use.

Every language seems to involve some kind of tradeoff between expressive freedom and efficiency. The reason for this is simple: the more rigid the constraints of the language (static versus dynamic typing, as one example), the more information is available at compile time, and the more aggressively the compiler can optimize. Sometimes (and for some people) that's great: you've declared the variable a to be a real number, and you know it will always and forever be a real number, so you don't have to worry about what it is when you see it crop up later in the program. Other people want the flexibility to change a into something else. Pick a language whose constraints you are reasonably comfortable with.
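To make the tradeoff concrete, here is a toy contrast in Python, with the static-language half sketched in comments (the Fortran lines are illustrative, not from any real program):

    # Python: a name can be rebound to a different type at runtime,
    # so the interpreter must check what `a` is at every use.
    a = 3.14            # a float...
    a = "now a string"  # ...and now a str; perfectly legal

    # In Fortran, the equivalent is a compile-time error, which is
    # exactly the rigidity that lets the compiler emit fast,
    # type-specialized code:
    #     real :: a      ! a is a real, always and forever
    #     a = "oops"     ! rejected before the program ever runs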

That leads to my second point, which is that in almost every case, developer time is more important than execution time. You, as a programmer, don't scale. You can always throw more hardware at a problem (though with diminishing returns), but there are only so many hours in the day you can spend writing code. Pick the language that makes that process most efficient for you, and worry about performance later, if at all. Donald Knuth put it thus: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." With Moore's law, the question of what constitutes a small efficiency is a little blurry.

As for fitting a language to a particular task (say, writing parallel programs), there's some merit in that idea. As I noted above, you think about problems differently depending on the language you're working in and the constraints it imposes on you. On the other hand, the real difficulty in writing parallel programs is not so much language design as learning to think in parallel. Beyond that, the way you write your code will be shaped by the architecture of the parallel system you're going to run it on. GPUs, for example, excel at certain kinds of operations (such as applying a fixed matrix to a bunch of different vectors) but are really slow at others (getting the matrix onto the GPU in the first place). The question of how and when to move data around becomes tremendously important. Which parts of the program are CPU bound? Which are memory bound? I/O bound? Network bound? You likely have parts of your program in each category, but depending on scale, some of them may constitute a small efficiency.
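To illustrate the matrix-on-many-vectors point, here is a small NumPy sketch (the sizes and names are made up). Applying a fixed matrix to vectors one at a time pays the call overhead over and over; stacking the vectors into a matrix turns the loop into a single batched product, which is the same trick that keeps a GPU busy once the matrix has been shipped across the bus:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500))    # the "static" matrix
    vs = rng.standard_normal((500, 2000))  # many vectors, as columns

    # One call per vector: lots of small operations.
    ys_loop = np.stack([A @ vs[:, j] for j in range(vs.shape[1])], axis=1)

    # One batched call: what the hardware actually likes.
    ys_batched = A @ vs

    assert np.allclose(ys_loop, ys_batched)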

If you're using a parallel paradigm with a well-developed framework behind it (such as MPI), you don't need to worry nearly as much about choice of language, since most languages you might want to use will have the appropriate bindings. The downside is that there is some overhead to using MPI on a shared-memory machine that wouldn't be necessary if you wrote your code to run on multiple cores without MPI, but again, that's likely a small efficiency and should be ignored.
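In Python, for instance, the bindings look like this (a minimal sketch using mpi4py, assuming it and an MPI implementation are installed):

    # Run with something like: mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's id, 0 .. size-1
    size = comm.Get_size()  # total number of processes

    # A trivial reduction: every rank contributes its rank number.
    total = comm.allreduce(rank, op=MPI.SUM)
    if rank == 0:
        print(size, "ranks, sum of ranks =", total)

This is essentially a transliteration of what you'd write in C or Fortran against the MPI API, which is the point: the framework, not the language, is carrying the parallelism.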

Two final points. First, all of these questions are strongly affected by scale. John Boyd has commented that a state-of-the-art computation involves about 100 hours of CPU time on whatever the current hardware happens to be. The reason is that few people are willing to wait more than a week for results, and anything longer wouldn't leave enough time to do multiple runs (which you frequently need) and still get a publication out in a reasonable amount of time. Choosing a language based on performance is therefore roughly equivalent to choosing what kinds of problems you consider state of the art. The size of your cluster just adds another dimension to that choice.

Second, programming is only partly about telling a computer what to do. My favorite idea of Donald Knuth's is literate programming: that programs ought to be written for people to read, not just for computers. Ultimately, language choice is shaped by the fact that we have to communicate with others. We have collaborators, or mentors, or customers, or whatever, and we need to be able to share ideas with them in a way they can understand. In the history section, I mentioned that I learned Fortran primarily because that was what my undergraduate adviser knew, and he wasn't interested in learning another language. The other side of that coin is that he already had a significant code base in Fortran, and translating it into another language would have been a tremendous (and largely wasted) effort.

Going back to the second Alan Perlis quote I mentioned earlier, I think the persistence of Fortran has more to do with people than with technology. It's a perfectly fine language (even if the Fortran of today bears only a passing resemblance to the FORTRAN Perlis was talking about in 1982), but languages come in and out of fashion relatively frequently within the hacker community. Scientists, though, tend to be more conservative with their tools, if only because the apprenticeship system of graduate school rewards continuity. Folklore crops up (as it always does) to explain what is observed: Fortran is widely used on parallel architectures by scientists, therefore Fortran must be well suited to parallel programming. But which is cause and which is effect? Or are both effects of a common cause? For myself, I go with the latter interpretation.
