
APL's Notation Wasn't a Bug. It Was the Whole Point.


Lesson 9: The Design Decisions That Made APL Simultaneously Brilliant and Impenetrable


Last week we looked at APL's philosophy — the idea that notation shapes thought, and Iverson's belief that a sufficiently compressed notation could make array operations visible in a way verbose code never could. Today I want to go one level deeper: the specific design decisions that made that philosophy real, and why each one was both a genuine insight and a genuine trap.

Because APL didn't become legendary for being merely difficult. It became legendary for being difficult in a specific, instructive way — one that reveals something true about the tradeoffs every language designer faces.


Right-to-Left Evaluation Wasn't an Accident

The single most disorienting thing about APL for newcomers is evaluation order. Expressions evaluate right-to-left, with no operator precedence hierarchy. None. 2 × 3 + 1 is 2 × 4, which is 8, not 7. The + happens first because it's rightmost.

A walkthrough of APL's Roman numeral converter makes this concrete: the entire reading strategy for APL code is "extract the largest complete expression from the right, evaluate it, then move left." Every single expression. No exceptions.
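
If it helps to see the rule as an algorithm rather than a convention, here is a minimal sketch in Python (the pre-tokenized input and small operator table are my own simplifications, purely for illustration) of evaluating a flat expression strictly right-to-left with no precedence:

    # Minimal sketch: fold a flat expression from the right, with no precedence table.
    OPS = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "×": lambda a, b: a * b,
        "÷": lambda a, b: a / b,
    }

    def eval_right_to_left(tokens):
        # tokens alternate numbers and operators, e.g. [2, "×", 3, "+", 1]
        result = tokens[-1]                      # start with the rightmost value
        for i in range(len(tokens) - 2, 0, -2):  # walk leftward, one operator at a time
            op, left = tokens[i], tokens[i - 1]
            result = OPS[op](left, result)       # the left operand applies to everything already evaluated on its right
        return result

    print(eval_right_to_left([2, "×", 3, "+", 1]))  # 8, because + binds first

Notice that the loop never consults a precedence table; the rightmost operator always wins, which is exactly the property Iverson was after.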

Iverson's reasoning was coherent: precedence hierarchies are arbitrary conventions that programmers memorize rather than derive. Multiplication doesn't inherently outrank addition — we just agreed it does. Strict right-to-left evaluation is at least consistent. Once you internalize it, you never have to ask "which operator wins here?" The answer is always the same: the rightmost one.

The problem is that this consistency comes at a steep price. Human readers — even experienced ones — parse arithmetic left-to-right by default. APL asks you to override a deeply ingrained habit on every single expression, forever. That's not a learning curve. That's a permanent cognitive tax.

The lesson here isn't that Iverson was wrong. It's that "internally consistent" and "cognitively accessible" are different goals, and optimizing hard for one often costs you the other.


The Symbol Set Was a Consequence, Not a Choice

APL's Greek letters and custom glyphs (⍴, ⍳, ⌽, ⍉) look like a deliberate barrier to entry. They weren't. They were a consequence of the compression goal.

Iverson wanted each primitive operation to occupy exactly one character. If you're working with arrays and you need shape, index, take, drop, rotate, transpose, and a dozen more operations as first-class primitives — all composable, all single-character — you run out of ASCII fast. The symbol set was the only way to achieve the density he was after.

Look at what that density actually buys you. The Roman numeral converter example fits a complete conversion algorithm — including lookup, indexing, conditional arithmetic, and reduction — into a single line: z←+/(¯1+2×z≥1↓z,0)×z←(1 5 10 50 100 500 1000)['IVXLCDM'⍳a]. In most languages, that's twenty lines. In APL, it's one expression you can hold in working memory all at once.
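
To see what each piece of that line is doing, here is a rough Python equivalent (the function name and layout are mine, for illustration; the steps mirror the APL expression, with comments naming the fragment each one corresponds to):

    def roman_to_int(a):
        glyphs = "IVXLCDM"
        values = [1, 5, 10, 50, 100, 500, 1000]
        # (1 5 10 50 100 500 1000)['IVXLCDM'⍳a] : look up each character's value
        z = [values[glyphs.index(ch)] for ch in a]
        # z ≥ 1↓z,0 : compare each value with its right neighbor (0 past the end)
        nxt = z[1:] + [0]
        # ¯1+2× ... : turn that comparison into +1 (add) or -1 (subtract)
        signs = [1 if cur >= rgt else -1 for cur, rgt in zip(z, nxt)]
        # +/ ... ×z : apply the signs and sum-reduce
        return sum(s * v for s, v in zip(signs, z))

    print(roman_to_int("MCMXCIV"))  # 1994

Read the APL expression right to left and the steps line up: value lookup, neighbor comparison, sign, weighted sum.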

That's the genuine insight: when an algorithm fits in one visual unit, you can reason about it as a unit. You can see the whole thing. Verbose code forces you to mentally assemble pieces; APL hands you the assembled structure.

The trap is that "one visual unit" only helps if you can read the units. For someone who knows the glyphs, that Roman numeral line is transparent. For someone who doesn't, it's a wall. APL made the expert experience extraordinary at the direct expense of the beginner experience — and unlike most languages, it offered almost no intermediate ramp.


What This Means for How You Design (or Choose) Tools

APL is an extreme case, but the tension it embodies shows up everywhere. Every language makes tradeoffs between consistency and familiarity, between compression and readability, between expert power and beginner accessibility.

Rust's borrow checker is internally consistent in the same way APL's evaluation order is — once you understand the model, it never surprises you. But it imposes a steep upfront cognitive cost that many developers find prohibitive. Haskell's type system is extraordinarily expressive for people who've internalized it, and nearly opaque to those who haven't. The pattern repeats.

I'd argue the real lesson from APL isn't "don't use symbols" or "don't be consistent." It's that every design decision that optimizes for expert fluency creates a corresponding barrier at the entry point — and language designers rarely reckon honestly with that cost at the time. Iverson was solving a real problem. The impenetrability wasn't malice or carelessness. It was the shadow cast by a genuine insight pushed to its logical extreme.


Your next action: Find one tool, library, or language feature in your current work that you find elegant but that your teammates find confusing. Ask yourself: what design decision created that gap? Is the elegance worth the friction, or has the tool optimized for the wrong user?

That question is what APL forces you to ask. That's why it's still worth studying.