Obviously Better

This post is about an idea that is "obviously better" to some people and yet obviously worse to others.

On the most recent episode of ArrayCast, host Conor recalled a strong reaction from his peers to an observation about operator precedence in APL:

I just made this offhand remark that… the evaluation order in APL is a much simpler model than what we learned in school, …it was the most controversial thing

Here's the thing: there is no natural or obvious precedence for mathematical operators, functions, or any other programming notation. There are only conventions, some more common across domains than others, such as PEMDAS/BODMAS. These conventions can be memorized, but I think that memorization does more harm than good.
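To see the convention at work, here's how a mainstream language like Python bakes PEMDAS-style precedence into its grammar - the results below are only "obvious" once you've internalized the rules:

```python
# Python follows conventional precedence: * binds tighter than +,
# and ** (exponentiation) binds tighter still.
print(2 + 3 * 4)     # 14, not 20: the * happens first
print(2 * 3 ** 2)    # 18, not 36: the ** happens first
print((2 + 3) * 4)   # 20: parentheses override precedence
```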

The key problem with adopting these conventions in programming languages is that most programming revolves around the act of abstraction - creating new terminology to address a new problem. This is so central to programming that it's facilitated by keywords like import or #include, which are roughly a kind of copy-paste for incorporating code from some other source into a new context. So programming languages not only create new vocabulary regularly; their vocabularies also tend to grow ever larger through these import/include mechanisms. Because programming involves abstraction, conventions about order of operations or operator precedence do not scale.

Most programming languages draw a line in the sand. On one side of the line you have operators with special precedence rules. On the other side of the line: user defined functions.

Different kinds of functions (asynchronous or not, effectful or not, etc.) have wildly different semantics, yet in most programming languages they can't be distinguished by their syntax alone. It might make sense to have different precedence rules for these different types of functions. But you might see the flaw in this logic - there are too many different kinds of functions to justify learning a new rule for each new type.

Then why do we tolerate precedence rules for arithmetic operators?


I don't have a problem with addition, multiplication, and exponentiation having increasing precedence, in that order. But I think it's healthy to admit that the reason behind this ordering is to reduce parentheses and make written expressions easier to read (once you've memorized the rules).

This is especially true when writing arithmetic expressions by hand. It's much easier to write (ab + cd) than a fully-parenthesized ((a*b) + (c*d)) or lisp-style prefix notation like (+ (* a b) (* c d)). I find infix clearer than postfix: a b * c d * +, although this is probably just my own upbringing and bias talking.
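Whatever its readability, postfix has one genuine advantage: it needs no precedence rules at all, because a simple stack determines the order of evaluation. Here's a minimal toy evaluator (Python, supporting only + and * as an illustrative sketch):

```python
def eval_postfix(tokens):
    """Evaluate a postfix (RPN) token list with a stack: numbers are
    pushed, and each operator pops its two arguments off the stack."""
    ops = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}
    stack = []
    for tok in tokens:
        if tok in ops:
            y = stack.pop()  # right operand was pushed last
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(tok)
    return stack.pop()

# a b * c d * +  with a=2, b=3, c=4, d=5  ->  (2*3) + (4*5) = 26
print(eval_postfix([2, 3, "*", 4, 5, "*", "+"]))
```

Note there is no precedence table anywhere in the code - the order of the tokens alone determines the order of evaluation.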

But the problem is what to do with the other operators. In advanced math papers, authors define new notation all the time. The only requirement is that it's readable by their peers, or at least the people reviewing the work for publication. In programming languages, the problem is slightly worse because languages tend to be designed by a few people but used by many more people. This means that all the users of the language must learn whatever rules the designers thought made sense.

And - surprise surprise - different language designers have different ideas about what makes sense. So learning the arbitrary rules for one language may or may not help you when you need to learn another language.

Maybe this is some of the appeal of lisp-family languages. They basically don't draw a line in the sand; the only syntax to learn is parentheses. But I suspect this is also part of why other people dislike reading or writing lisp code. Because there are only parentheses, there's almost no syntax to guide your eye toward the meaning of the code. Instead, you have to build the parse tree in your mind while keeping track of which names are functions, macros, and variables.

How APL works

In most APL-family languages, evaluation proceeds from right to left. So an expression like (ab + cd) from earlier would be written (in J) as (a*b) + c * d, which evaluates as (a*b) + (c*d). The parentheses around (a*b) are required because without them the expression would evaluate as a * (b + (c * d)).

Here's a similar example of how evaluation proceeds in J, this time swapping + and *:

(a+b) * c + d

This evaluates as:

(a+b) * (c+d)

Without parentheses, it would evaluate as

a + (b * (c + d))
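The right-to-left rule above can be sketched as a toy evaluator (Python, not real J; it assumes a flat, unparenthesized token list and supports only + and *):

```python
def eval_right_to_left(tokens):
    """Evaluate an unparenthesized infix expression the APL/J way:
    every operator has equal precedence and groups to the right, so
    the rightmost subexpression is reduced first."""
    ops = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}
    # tokens alternate number, operator, number, ..., number
    result = tokens[-1]
    for i in range(len(tokens) - 2, 0, -2):
        result = ops[tokens[i]](tokens[i - 1], result)
    return result

# a + b * c + d  with a=2, b=3, c=4, d=5
# groups as 2 + (3 * (4 + 5)) = 29, not (2+3) * (4+5) = 45
print(eval_right_to_left([2, "+", 3, "*", 4, "+", 5]))
```

The loop walks the tokens from right to left, folding each operator into the running result - a single uniform rule in place of a precedence table.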

The beauty of this evaluation order is that (like lisps and forths) the number of rules you need to learn is minimized. Also like lisps and forths, it extends seamlessly to other operators (not just the ones described by PEMDAS) and even to user-defined operators. I don't have any evidence that infix notation is inherently better than prefix or postfix notation, but anecdotally I enjoy the fact that infix expressions map more closely to how I say the expressions out loud in English: "a plus b times c plus d" is still ambiguous, but for me it's easier to understand than the unambiguous "a b plus c d plus times".

And like in APL or J, we can verbally add parentheses to disambiguate: "paren a plus b paren times c plus d". Of course this falls apart when the expressions grow, but hopefully you aren't relying solely on verbal communication for conveying complex math.

Why we hold on to PEMDAS

As the ArrayCast discussion hinted, perhaps most grown-ups simply dislike being asked to unlearn something they spent a lot of time and effort learning. That's completely understandable. But I hope people can at least consider opinions that run counter to what they already know. If you're a Python programmer, or a C programmer, and you've never tried a language from the APL, lisp, or forth families, then maybe write a toy program in one of them just to get a feel for the language. The hardest part will be remembering what it was like to learn Python or C for the first time, so that you can fairly judge which language family was easier to learn.