Daniel Sockwell’s Email June 18, 2021

Hi and thanks for The Array Cast,

I greatly enjoyed your recent "What Challenges Face the Array Languages" episode but I have a fairly different take on why APL-family languages aren't more popular – at least from my perspective as someone who got very excited about APL but eventually decided not to pursue learning the language further.

Here's my APL story: I'd always heard about APL (I think I first encountered it in the novel The Wizardry Consulted, where it was used as a language that puzzled dragons), but I didn't look into the language closely until I encountered some of Aaron Hsu's writing and talks about how APL enables concise, expressive code. What Aaron and others wrote about keeping the cost of rewriting code as low as possible resonated deeply with me, and I decided to investigate APL seriously. During my initial deep dive, I realized that Aaron and I actually lived in the same small town, and that coincidence inspired me to send him an email. We ended up having coffee – which I thought would be a short chat, but which turned into a multi-hour talk where Aaron thoroughly convinced me of the virtues of APL.

My biggest takeaway from that talk was that the fundamental task in programming is to (attempt to) solve the impossible problem of leaky abstractions. Most languages take the strategy of trying to make abstractions as non-leaky as possible – and, as a then-Rust programmer, I recognized that goal in many of Rust's language-design choices. But there's another way to attack the problem: recognize that all abstractions leak, and write code that minimizes their number. Compare the APL expression (+⌿÷≢)⍳10 to the equivalent Rust expression average((0..10).collect::<Vec<_>>()). The advantage of the APL code isn't (just) that it's shorter – it's that the APL expression removes a level of abstraction. The APL code is all right there, so you can check for any edge cases or bugs: no abstraction, no leaks. And because APL is so concise – defining `average` is literally shorter than calling it in other languages! – it facilitates this abstraction removal, over and over.
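To make the leak concrete, here's a sketch (my own, not anything from the episode) of the sort of `average` that Rust call assumes. The body is hypothetical – the point is precisely that the caller can't see it:

```rust
// A hypothetical `average` of the kind the Rust expression above assumes.
// Its edge cases are invisible at the call site: an empty slice gives
// 0.0 / 0.0, which is NaN rather than an error – exactly the kind of
// detail that "leaks" out of the abstraction.
fn average(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

fn main() {
    let xs: Vec<f64> = (0..10).map(f64::from).collect();
    println!("{}", average(&xs)); // prints 4.5
    assert!(average(&[]).is_nan()); // the leak: no panic, just a quiet NaN
}
```

With the inlined APL train, by contrast, that divide-by-length is sitting in plain sight.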

Put simply, I was sold, and greatly enjoyed solving Advent of Code puzzles in APL – frequently with a solution that would fit in a tweet. (Or, in my case, a toot, since I use Mastodon rather than Twitter).

So what pushed me away from APL? I still believe in the power of everything Aaron and I discussed, and still think that reduced abstraction enabled by concise code is a programming superpower. But, despite all that, three things caused me to give up on APL – none of which were squarely addressed on the podcast.

Poor integration with Linux/Free Software. This is related to the point Nick started with, about sharing code – but I have something much broader in mind. After writing enough APL to get serious about it, I decided to set up my environment to support writing programs with a broader scope than solving AoC puzzles. One of my first steps turned out to be surprisingly difficult: writing a simple program that read from standard input and printed to standard output. The Dyalog docs had far more info about connecting APL to Excel than about using it with a terminal – even though I think of the latter as the bread-and-butter of programming in a Linux environment. With some help from Adám and other saintly members of the APL Orchard, I was able to get that working (if you're reading this, Adám, thanks!). But it was clear that integrating with the normal Linux/server tools that are part of my standard workflow (Unix/TCP sockets, Emacs, nginx or other servers, terminal emulators, etc.) would be a recurring challenge. As someone deeply committed to the free software/FOSS ecosystem, being siloed like that felt like a much bigger handicap than just not having a package manager.

Sometimes, arrays aren't enough. On the podcast, Conor mentioned that APL can sometimes struggle with the text munging required for Advent of Code input. I agree; as great as APL is, it doesn't shine in domains that are primarily text-centric. To a data scientist, that might sound like a niche concern, but I'm primarily a web developer – my entire platform is built on text. And I'm focusing on text here, but there are other times when a set, hashmap, or other unordered data structure is a far better conceptual fit for a particular domain than an array. You can model these domains as arrays (just as you can model strings as arrays of characters), but doing so sacrifices considerable expressive power. Similarly (without wading into a whole other debate), sometimes types are helpful.

Metaprogramming, introspection, and extensibility. These days, with dfns, user-defined operators, and all the features of modern APL, it's no longer fair to describe APL as a diamond that's beautiful in its current form but that can't be extended in any way without ruining the whole effect. Yet some of that spirit remains: when writing APL, I didn't feel that I was on a par with the language designers, building my own libraries with exactly the same power they have when adding new primitives. The power of metaprogramming isn't one I reach for often, but when it's the right tool, it's usually the right tool by a huge margin.

Due to these drawbacks, I reluctantly decided to shelve APL and go back to Rust. But I resolved to keep my eyes out for another language that could deliver the abstraction-smashing power of APL without some of the tradeoffs.

And, eventually, I found it in the programming language Raku.

In case you're not familiar with it, Raku is the programming language that Larry Wall and a team of enthusiastic volunteers spent 15 years (2000–2015) designing after Larry stopped playing as large a role in Perl5. In 2015, Raku (then known as Perl6) had its initial release; since then, the language has released two new versions and, in 2019, was renamed to Raku (to avoid the confusion created by having two very different languages named Perl5 and Perl6). It also shifted from a BDFL governance model to a steering-council one, since Larry is mostly retired and no single person could step into those shoes. Many of the early Raku hackers had a background in Haskell – in fact, the first (now deprecated) language implementation was written in Haskell. Due to that functional influence, many people say that Raku feels like a descendant of both Perl and Haskell.

But, personally, I can't help feeling that Raku shares a huge number of features with APL (and, especially, dfns). Here's a small sampling: distinguishing between functions and operators (we call them "metaoperators") as APL does; support for implicit arguments (we say $^a where APL would say ⍺); a distinction between monadic and dyadic functions; the ability to call dyadic functions in infix position; support for applying functions that typically take scalar arguments to arrays (^5 «+» 5 returns (5, 6, 7, 8, 9); ^5 «+» ^5 returns (0, 2, 4, 6, 8)). Oh, and as I guess I already gave away, full support for Unicode in functions and operators, whether built in or user-defined (though all built-ins have ASCII alternatives: the « » operator can be written << >>). I could go on, and on – it sometimes feels like Raku has every feature I loved in APL other than being fundamentally array-oriented.
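Since Raku syntax won't be familiar to everyone, here's a rough Rust rendering (my own sketch, not idiomatic Raku-to-Rust translation) of what the array-to-array hyper example computes – in Rust-speak, «+» on two lists is a zip plus a map:

```rust
// Elementwise addition, mimicking Raku's «+» hyper operator
// applied to two equal-length lists.
fn hyper_add(a: &[i32], b: &[i32]) -> Vec<i32> {
    a.iter().zip(b).map(|(x, y)| x + y).collect()
}

fn main() {
    let r: Vec<i32> = (0..5).collect(); // ^5 in Raku: 0, 1, 2, 3, 4
    println!("{:?}", hyper_add(&r, &r)); // prints [0, 2, 4, 6, 8]
}
```

(Raku's hypers do more than this – they also broadcast scalars and descend into nested structures – but the equal-length case is the flavor of the example above.)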

And it pairs those features with amazing text-processing abilities; a powerful, though optional, gradual type system (example signature: sub fn(MyType $t, Str:D $text, Int $age where $age ≥ 0 --> Str) is pure {...}) that supports generics; and some of the best support for introspection, metaprogramming, and language-oriented programming I've seen – enough to rival Racket. I guess it's clear that I think Raku is a great language – I believe in its power enough that I've made it my primary language and now serve on the Raku Steering Council.

Given my opinion of Raku and the number of features I haven't mentioned, I could go on for quite a while, but I'll restrain myself – both because this email is already too long and because if I say too much more I'd need to start mentioning the language's flaws (which are mostly related to performance and are, I hope, largely temporary). Exercising that restraint, I'll make just two more comments about Raku:

First, it strikes me (and many people) as a very thoughtfully designed language in just the way APL is. It may, from an adoption perspective, have been a terrible mistake to spend 15 years designing and prototyping Raku before its first production release. But the result of that deliberation was a language where nearly everything fits together wonderfully; from here in 2021, I'm very glad they took that time. One tiny example of what I mean by "thoughtfully designed": on his other podcast, Conor mentioned that very few languages use names that capture the relationship between "fold" and "scan" nearly as well as APL's / and \ do; in Raku, those two operations are named reduce and produce, respectively. (And, just in case the APL influence isn't clear enough, reduce can be written as [*] – where * can be any operator of your choice – and produce can be written as [\*].) Just like APL, Raku is full of tiny touches that might take you a while to notice – but, once you learn them, they create an "oh, of course!" reaction; the phrase we like to use is that Raku is "strangely consistent".
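For readers who know neither APL's / and \ nor Raku's reduce/produce, the same pair exists in Rust as `fold` and `scan` – here's a sketch of the relationship using + as the operator:

```rust
// fold collapses a list to one value; scan keeps every intermediate
// value along the way. These correspond to Raku's reduce ([+]) and
// produce ([\+]), and to APL's +/ and +\.
fn reduce_sum(xs: &[i32]) -> i32 {
    xs.iter().fold(0, |acc, x| acc + x)
}

fn produce_sum(xs: &[i32]) -> Vec<i32> {
    xs.iter()
        .scan(0, |acc, x| {
            *acc += x;
            Some(*acc)
        })
        .collect()
}

fn main() {
    let xs = [1, 2, 3, 4];
    println!("{} {:?}", reduce_sum(&xs), produce_sum(&xs)); // prints 10 [1, 3, 6, 10]
}
```

The produce/reduce naming captures in English what / and \ capture visually: one operation is the running version of the other.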

And, second, all of this adds up to make Raku a language that is amazingly concise and expressive. It's typically not quite as concise as APL (at least in the domains where APL shines), but it's the first language I've seen come close. (I'm not talking about golfing languages, of course.) To really see what I mean, you'd need to look at a larger Raku program, but here's a taste: the show notes previously linked to the talk Algorithms as a Tool of Thought, in which Conor presented 8 APL versions of an all-equals algorithm. The shortest was ⍋≡⍒ (or, if saved as a function, f ← {⍋≡⍒⍵}), but that version sorts the array twice, which hurts performance. The best version seemed to be ⊃∧.=⊢. The Raku version is [≡] (or, saved as a function, my &f = {[≡] $_}) – which matches the brevity of the 3-character APL solution while providing the semantics of the "longer" APL approach. (In case you're curious: the way Raku is able to make [≡] work, when the equivalent reduction in APL (=/) doesn't return the correct answer, is that Raku allows for user-specified associativity and precedence levels, and the built-in ≡ operator has chaining binary associativity.) You might be getting a feel for the way that Raku, like APL, embraces syntax that differs from many Algol-family languages but that, when understood, can lead to far more expressive and concise code.
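The chaining semantics that make [≡] correct – comparing each adjacent pair, rather than sorting the array twice – can be sketched in Rust (my own sketch, not from the talk) as:

```rust
// All-equals via adjacent-pair comparison, mirroring the chaining
// associativity that lets Raku's [≡] work without sorting: for
// (a, b, c) it checks a == b && b == c.
fn all_equal<T: PartialEq>(xs: &[T]) -> bool {
    // windows(2) yields nothing for 0- or 1-element slices,
    // so those are vacuously all-equal.
    xs.windows(2).all(|w| w[0] == w[1])
}

fn main() {
    println!("{} {}", all_equal(&[7, 7, 7]), all_equal(&[7, 8, 7])); // prints true false
}
```

A naive right-to-left pairwise reduction, like APL's =/, would instead compute 7=(8=7), mixing booleans back into the comparison – which is exactly the wrong answer the chaining associativity avoids.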

It's my hope that some of you (or your listeners) might be interested in trying out Raku; if you do, I'd love to hear what you think of it and whether you see the same similarities to APL that I do. And, totally apart from Raku, I'd be interested in any thoughts you have about the downsides to APL that I brought up – including telling me that I have it all wrong!

Thanks again for the podcast. I'm very much enjoying listening and look forward to future episodes.

Best regards,

Daniel