Transcript

Many thanks to Rodrigo Girão Serrão for producing this transcription
[ ] reference numbers refer to Show Notes

00:00:00 Romilly Cocking

You'd get a new language in pretty well every other issue of BYTE, [01] and I'm dangerously curious, so if I saw another language I would almost be unable to resist playing with it. If I see a new language and it looks like it might be interesting, I try and get to the point where I have solved a simple problem in it. And if that was fun enough, I try and solve a hard problem in it, and then usually I move on to the next one. The last time I counted, it was somewhere between 60 and 80.

00:00:29 Conor Hoekstra

60 and 80 languages?

00:00:30 [RC]

That's counting a few assembler languages.

00:00:33 [CH]

Wooooow.

00:00:35 [Music]

00:00:45 [CH]

Welcome to another episode of Array Cast. I'm your host, Conor, and today with us we have three different panelists, soon to be four, and a special guest that we'll introduce in a second. First, we'll go around and do short introductions. We'll start with Rich, then go to Marshall, and then go to Bob.

00:00:56 Rich Park

Alright, I'm Rich Park. I'm an APL programmer and educator working for Dyalog Ltd.

00:01:02 Marshall Lochbaum

I'm Marshall Lochbaum, I'm a former J programmer and Dyalog developer, and now I make BQN, my own language.

00:01:09 Bob Therriault

I'm Bob Therriault, and I'm a J enthusiast. Joining us later will be Stephen Taylor, who is dealing with some things right now, so he's going to join us partway through. So that's a preview of coming attractions.

00:01:21 [CH]

And as mentioned before, my name is Conor and I'm a C++ programmer slash research scientist at NVIDIA and array language enthusiast at large. We've got, I think, one announcement from myself, but first we'll go to Bob, who's got a follow-up from our previous episode, and then I'll introduce our guest and we'll hop into that conversation.

00:01:41 [BT]

And just as a follow-up, I was actually just getting ready to record here, and I was looking at APL Farm, and there was a message from Alex Shroyer, this is a follow-up from our last episode about tools of thought with the array languages, who said: the whole time I was screaming in my head, Org Mode! [02] How do they not know about Org Mode? It's basically a superset of all the newer tools of thought, and it is an effortless, effortless PKM system. I hear it's based on Emacs, so if you're not an Emacs user, it may not be quite as useful for you, but if you are an Emacs user, he says, Org Mode. And that was a good follow-up; it was good information to me, but other people on the panel know more about it and they went, oh yeah, Org Mode. But for me it was something, so yeah, thank you, Alex.

00:02:32 [CH]

And my two announcements, or maybe one depending on how you count them: there are two meetups happening on September 1st and September 7th in Toronto, Canada and New York, North America [03]. Hopefully you've heard of those two cities. They're both happening at 6:00 PM, and they're being hosted by Dyalog. There are going to be three speakers at each one. In the Toronto one it's going to be myself; Morten Kromberg, the CTO of Dyalog Ltd; and then Lib Gibson, who is retired now but has had a very, very long career as the CEO of several companies in which APL obviously was involved, and who was a former I.P. Sharp employee. And in the New York meetup it'll be Morten, myself, and then Josh David, who was actually a previous guest or interviewee on this podcast. So if you are interested in array languages, which you probably are if you are listening to this, and you happen to be in either Toronto on September 1st or New York on September 7th, definitely be sure to check those out. There will be meetup.com links in the show notes. And with that said, let's hop into introducing our guest today, who is Romilly Cocking. I am super excited about this conversation. If you are a long-time listener, you may remember his name being mentioned when Morten Kromberg was initially on [04]. At a certain point Morten mentioned that in the 80s and 90s, as the mainframe business that APL was, you know, really successful in for a couple of decades went down, several or many of the APLers switched over to Smalltalk, and I was sort of shocked when I heard this. Like, really? And he said, oh yeah, you should definitely talk to Romilly; Romilly can tell you all about that transition. I did a little bit of research into Romilly. He's had a very, very long career, has founded a couple of his own companies and worked there for many years, and worked at a couple of different banks; I'll get him to tell us all about that. Recently he's been doing small startups to do with, you know, Raspberry Pis. And I also found, and there will be a link for this, I haven't gotten through the full talk, but he gave an APL talk on genetic algorithms in 2008 called "An Excellent Return" [05]. At the start of that talk he introduces himself with one slide that says: 1958, saw computer, liked it, wrote software. 1968, saw APL, cool, but will it catch on? 1973, saw APL again, loved it, wrote lots. 1987, saw Smalltalk, liked it, wrote lots. 1995, saw Java, hated it, wrote lots. And 2008, back to APL, still love it, writing lots. So I can't wait to hear all about that; it's going to be a super awesome conversation. So I will throw it over to you, Romilly. Fill in all of the gaps that I missed, and feel free to go back as far as you want, to the 1950s or whenever you got your start in computing. As it says, sort of 1958. And we'll go from there.

00:05:30 [RC]

Thanks. Yeah, it was back in '58. I guess two important things happened for me, though I didn't know about one of them at the time. One was, of course, Ken Iverson wrote and published "A Programming Language". At the same time, I saw my first computer. My uncle was an electronic engineer. He took me along to the first British computer exhibition at Olympia. I saw a Ferranti Pegasus [06], and I just knew that that's what I wanted to do for the rest of my life. I got so fired up about computers that a colleague of my mother's gave me a chance to run a program in 1958. I got it working in 2014, which I think means that I hold the record for the longest debugging session in history. And what was even better was that I was only able to debug it 'cause I wrote an emulator for the language that I'd written that first program in. Anyway, I got pretty excited about computers, and in my last year at school the careers master said, oh, did you know IBM do a scholarship scheme? You ought to apply. And I did, and I was lucky enough to get a scholarship which paid for me to go through uni. But it also meant that I had a chance to work for them in each of the summers that I was at university. And as a result of that, I thought it was 1968, and that's what the slide that Conor mentioned referred to, but thinking it through, it actually turns out it was 1969, so I'm more of a novice at APL than I thought I was. In 1969 I was doing one of those summer jobs at one of the IBM education centres in the UK. And one of the guys I was working with, a guy called George Neal, said, hey, there's this really interesting language which might be relevant to what you're doing, called APL, and it's up on our System/360 Model 40 for an hour a week; do you want to come down and have a play? So down I went. And I typed 2 + 2 and pressed enter, as one does, and back came click click click 4. I was using a typewriter terminal, obviously, in those days. And I thought, God, that's pretty cool. And then he showed me quad input. And I suddenly realized that here was a language where you could dynamically program in that language while running it, and my mind was just blown. So I thought this looked pretty whizzy and I ought to find out more about it. And about that time, I think, somebody showed me a description of the System/360 architecture in APL [07], in which you had a complete program that emulated a mainframe in relatively few readable pages in the IBM Systems Journal. OK, this is beginning to look like it's a really serious tool. And then there was a bit of a break, because I graduated in 1970 and the Arabs rather unobligingly quadrupled the price of oil, and as a result the IBM graduate intake went from 200 down to 5.

00:09:32 [CH]

Oh wow.

00:09:34 [RC]

So I'd been expecting to get a job as a systems engineer or programmer with IBM, and it was clear that that just wasn't going to happen. And I talked to the department that ran the scholarship scheme: oh, could I borrow a flipchart stand, because I want to talk to the boys at my school about computers. So, you're interested in computers and education; why don't you come and work for us as a temp? And so for a year I had a very strange job, in which part of what I had to do was deal with the letters that IBM got from schoolkids, which could range from "I'm doing a GCSE course with typewriters in it, can you tell me about the typewriters you make?" through to "I'm at school, but I'm trying to build a computer, can you give me any advice?". Two of the guys that I talked to actually went on to build computers with bits which we managed to scrounge for them from the stores. But at the end of that, I really wanted to do some proper programming. I joined a software house in '74 and did a Masters degree, and while I was doing the Masters, looked at APL for the third time, because one of my fellow students on the course was working for I.P. Sharp Associates and talked a bit about APL and what he was doing, a guy called Phil Chesney. And then I went back to doing software, and the small software house I was working for had a company which was not quite a customer, but was run by an ex-director of the software house. And they came along and asked the company I was working for for a rather strange favor. They were setting up as competitors to I.P. Sharp, and they wanted to send one of their people on an I.P. Sharp course, but they were worried that I.P. Sharp wouldn't like that, and so could we send the student as if he worked for us? And my boss said, yeah, sure; how much are you going to pay us? And so somebody from the relevant company went along, and the next thing we knew, a guy called Dave Saunders arrived in our office with an APL terminal, saying, of course, having sent someone on the course, you now get a free account with us for a month, and here's an APL manual in case anyone else wants to have a go. So I put my hand up. I was doing a really grotty job at the time: we had a customer that was using a Philips unit record computer to track their imports of food from Eastern Europe, and the government had just changed the VAT tax rates, and so we had this ghastly dump of object code which we had to disassemble back into assembly and then patch so that it could deal with the VAT. That sounded too much like hard work, so my first ever APL program was a disassembler in APL, which was probably the most stupid thing to try and do, because it surfaced all the issues about how on Earth you do stuff in parallel when the normal way of thinking about the problem you're trying to tackle is serial. For instance, if there are jumps, you need to simultaneously know that it's a jump instruction, but also know about the instruction it's jumping to, so you have to treat the whole program as a single array. I got it working, surprisingly, and by the end of that process I was convinced that APL was really pretty whizzy, and it was something that I wanted to stick with. I couldn't convince our boss at the software house, so the next year two of us set up a company called Cocking and Drury. And all we did was APL.
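
[A minimal sketch of that "treat the whole program as a single array" idea, in Python with NumPy. The two-field instruction format and the opcode numbers here are invented for illustration; this is not the Philips machine's encoding or Romilly's disassembler.]

    import numpy as np

    # Toy object code: one row per instruction, columns are (opcode, operand).
    # Opcode 7 is, by our invented convention, "jump to address".
    code = np.array([[1, 5], [7, 6], [2, 0], [7, 0], [3, 9], [1, 1], [2, 2]])
    ops, args = code[:, 0], code[:, 1]

    is_jump = ops == 7                 # classify every instruction at once
    is_target = np.zeros(len(code), dtype=bool)
    is_target[args[is_jump]] = True    # mark every jump target at once

    # Render the whole listing in one pass: a label column and a mnemonic column.
    labels = np.where(is_target, "L:", "  ")
    mnemonics = np.where(is_jump, "JMP", "OP ")
    for label, mnemonic, arg in zip(labels, mnemonics, args):
        print(label, mnemonic, arg)
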
And we got quite a bit of work from I.P. Sharp, and quite a bit of work from other APL users. By 1976 we'd set up an APL user group, and by the late 80s we had about 40 employees who were doing nothing but APL, which was quite a sizeable shop for those days. Unfortunately, as Conor mentioned, there were really three APL markets. There was the timesharing market, which was vibrant in the 70s because the oil price and the cost of energy meant that suddenly everybody's economic forecasts were wrong, budgets were wrong, people needed to redo them very quickly, and APL timesharing was a great way to do it. And then that migrated into a market that was based around the mainframe. And to be honest, the reason that APL was so popular on the mainframe was simply that there were lots of jobs people needed to do, and there weren't any other tools to do them. The spreadsheet had only recently been invented; it was relatively new. Tools for querying corporate data were very unusual, and so IBM came up with a couple of products, ADRS and ADI, which everybody loved and all the big companies used. But then by 1987 people were beginning to use spreadsheets a lot, people were beginning to use other tools to access corporate data, and the mainframe part of the market was, although we didn't know it, heading for freefall. There was a separate market which we dabbled our toes in, the workstation market, and that was very much the market that Dyalog had focused on. And that's why Dyalog has continued to see pretty steady expansion through times when we saw our main business just disappear. And I think it was consciously trying to fill that gap [09]. But again, in the late 80s, I'd been using a technique called mind mapping a lot [08], and I really wanted a mind mapping tool that was visual, and nobody had one. And I suddenly heard that there was a version of Smalltalk that was available for DOS, and it looked like it was going to be really good for doing graphics stuff. So I started playing with Smalltalk. I had much the same experience as with APL: for the job that needed doing, it was just so much better than any alternative that it looked like a market we should get into. We did. And one of the things that Conor mentioned earlier: I guess it would have been slightly later, '88 or '89, we'd done a partnership deal with a company called Digitalk [10], who sold the low-end version of Smalltalk [11], which was becoming very, very popular. And a guy called Alan McKean, who you might have heard of, he's a co-author, along with Rebecca Wirfs-Brock, of a very well known book on object-oriented design, Alan came over to teach us how to teach Smalltalk. And we got to chatting, and I told him about my APL background. Yeah, that's quite common: of the people that had got into Smalltalk, a very large number had previously been APLers, and a very large number had previously been people who programmed in a language called Forth. And we talked quite a lot about that, and we came to the conclusion that there were two things that all three of those languages had in common. One was the fact that you had an immediate execution mode, which meant that you could develop by successive approximation, as it were: you could write stuff and then write a bit more stuff and then write a bit more stuff, and at each stage you had a working program. And the other was just expressiveness.
That's one of the things about all three of those languages: you program by saying what you want to do, in a different way in APL from the others because of its very condensed syntax, but it seems to appeal to the same need that we all had, to use software not as a way of getting a computer to do something, but as a way of capturing knowledge and understanding which happened to be executable. So that was interesting. I had a lot of fun with Smalltalk. Various things happened to the market that meant that Java ate Smalltalk's lunch, and I found myself for a few years writing Java so that people would pay me, and increasingly writing Python for fun, 'cause Python, like the other languages, has got a REPL and it's very expressive, and it was a lot of fun. And eventually I found myself getting to a stage where I both wanted to and could gradually taper off the work I was doing for pay and switch to doing some stuff for fun. And then back in, I guess, 2012, a friend of mine, Nat Pryce, told me about the Raspberry Pi [12]. And I was absolutely blown away. I'd already used a computer called the BeagleBoard, which was a fairly small Linux-based computer, but the Raspberry Pi was smaller and much, much cheaper; so cheap, in fact, that it was almost pocket-money cost. And I got involved with that for quite some while, and was one of the reasons, I think, why Dyalog decided to make APL available on the Raspberry Pi, which continues to delight me. It's a very inexpensive and fun route for people to get into APL and do some very interesting things with it. Morten did quite a lot of fun robotics, and if you actually take a look around on the Dyalog website, there's one thing which still absolutely amazes me, where he uses the concurrency primitives to have two robots dancing in step, both driven by a Pi, which I think is really cool [13]. And the other thing that's been a constant but fairly quiet thread through all that time is an interest in neural networks. When I did my Masters degree, I heard a talk by a guy called David Marr [14], who was for a year, poor chap, my supervisor at Cambridge. David had produced a model of the cerebellar cortex, which I simulated back in 1974, which I think made me the second person in the world to have done that, although the other guy published, which is cheating really. And I've been interested in that ever since. So, along with the move to the Raspberry Pi, I also got quite involved with the Jetson Nano [15] range from NVIDIA. And one of the things which has been very satisfying: at the moment I'm looking at spiking neural networks [16], not the sort of neural network that everybody uses, which does lots of great stuff for us but which I believe is fundamentally misguided, in that there are things about backpropagation and feedforward networks and lots of other issues that just don't seem to match the way the brain works. And so I'm playing around with spiking neural networks, and to do that I'm using a Google tool called JAX [17]. Basically, if you're familiar at all with Python and NumPy [18], JAX is NumPy on steroids, and it runs on GPUs. So if I want to run something that runs really fast on my NVIDIA hardware, what I do these days is think about the algorithm in APL and then turn that into JAX. And the APL background has been really helpful for that.
So one of the things that I hope we're going to see more of is people who are trying to get their heads around how to do SIMD-type computation, which the NVIDIA hardware supports really well, learning how to think that way by exploring algorithms in APL, or J, which would work just as well, I guess. So that's me up to date. Quite a long history, as you said, but it's been fun all the way.
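
[As a flavour of "think about the algorithm in APL and then turn that into JAX": a minimal, hypothetical sketch in Python. The comments show the APL expression each line corresponds to; the function and data are invented, not Romilly's code.]

    import jax
    import jax.numpy as jnp

    @jax.jit
    def firing_counts(spikes):
        # APL: +⌿ spikes -- sum a Boolean matrix down its leading (time) axis,
        # giving the number of times each neuron fired.
        return jnp.sum(spikes, axis=0)

    spikes = jnp.array([[1, 0, 1],
                        [0, 0, 1],
                        [1, 1, 1]], dtype=bool)   # (time, neurons)
    print(firing_counts(spikes))                  # [2 1 3]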

00:24:49 [CH]

So awesome. I have a couple of questions, but before we get to that, Stephen did join us a couple of minutes into that history, so we'll let him pop in and introduce himself. And then, if Stephen has a question, he can ask it right after that intro; otherwise, we can start doing a round robin of questions that people have queued up.

00:25:08 Stephen Taylor

Thank you, Conor. I'm a near neighbour of Romilly's in London, and I've been touching gloves with him from time to time over the decades; we were competitors, I think, at I.P. Sharp Associates, probably. It was fascinating listening to your tour of the different technologies you've been working with. The question which comes immediately to me is: given all that you've been up to, what do you now turn to APL for?

00:25:44 [RC]

Two or three things, and the characteristic they all have in common is that they all involve what I call hard sums: complex calculations where I'm starting out and I don't know how to solve the problem I'm trying to tackle, but I have some kind of idea, and so it's exploratory programming in which the computation plays an important part. I have to confess that stuff that 20 years ago I'd have used APL for, I now often use Python for, simply because there's a huge ecosystem and my skill is more current. But if it's hard calculation stuff, I find I'm probably 5 to 10 times faster writing code in APL than I am in NumPy. I hesitate to say this, because I've used NumPy a lot and I've got a lot of time for it, but my brain regards it as APL done wrong in Python. There are a lot of subtle mistakes that the APL team navigated their way around in the very early days. One of the bits of history that seems to have got lost is that when Iverson and his friends were working on APL in the early days, they came up with quite a long list of mathematical invariants, and they verified, formally proved, that APL maintained those invariants. And that's why the structure is so rigorous and so consistent, and why, even if you know only a little about APL and you guess how something works, it's often right. I have one proviso to that: that was true until we had nested arrays [19]. And when nested arrays came in, they did them the popular way and the easy way, which I happen to think was probably the wrong way. But we're not going to get that changed, or not easily anyway.

00:28:05 [ML]

I am.

00:28:05 [RC]

Well, yeah, I expected you to leap in at that point, Marshall. And good for you; I hope that goes well. But the trouble is, there are so many people now that are using the other stuff that there's quite a bit of inertia to overcome, as you know better than most, yeah.

00:28:23 [ML]

Certainly, yeah.

00:28:25 [CH]

Yeah, over the last couple of weeks I've just been ramping up on other array languages, of which I consider NumPy one, even though it's technically a library. And yeah, there are some very surprising things that are just bread-and-butter operations that you can do regardless of whether you're working with, you know, J, BQN, or APL. Like, the one that's just top of mind is: if you want to reverse the columns in a matrix in NumPy, I was trying to do that with their reverse algorithm, which is called flip [20], and you can specify an axis, similar to rank and what, you know, Julia calls dimensions, and I just couldn't figure out how to make it work. And so finally I Googled it, and it's not possible with flip, their reverse algorithm. They have a different algorithm called flipud, which stands for up and down, I assume, and that is just the function that you reach for when you need to do that. And I was like, oh, that definitely seems like, yeah, I don't necessarily know if it's a wart, but it's not like a specialization of, you know, the flip generic function where you can specify an axis; it's just for the specific case where you have a rank 2 array and you need to reverse the columns... Anyways. So yeah, I've had similar experiences. You probably know a lot more about NumPy than I do, but coming from APL, where everything seems very cohesively designed and works together very nicely, and if you do something with, you know, a reduction, you might be able to do it with something else as you would expect, you just sort of type it and it works; a lot of the time that's not the case with the NumPy library, but what can you do?
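
[For reference, the reversals being discussed, in Python with NumPy; behaviour as of recent NumPy versions, where np.flip does accept an axis argument, which may differ from what was observed at the time.]

    import numpy as np

    m = np.arange(6).reshape(2, 3)   # [[0 1 2]
                                     #  [3 4 5]]

    # Reverse along the leading axis (APL's ⊖): flipud, or flip with axis=0.
    print(np.flipud(m))              # [[3 4 5] [0 1 2]]
    print(np.flip(m, axis=0))        # same result

    # Reverse along the last axis (APL's ⌽): fliplr, or flip with axis=1.
    print(np.fliplr(m))              # [[2 1 0] [5 4 3]]
    print(np.flip(m, axis=1))        # same result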

00:30:03 [ML]

Well, that one seems fixable, 'cause they just need to add an axis argument to the flip function, right?

00:30:08 [CH]

There is an axis argument in the flip function.

00:30:10 [RC]

There is one.

00:30:11 [ML]

It just doesn't work?

00:30:12 [RC]

It doesn't do what you would...

00:30:13 [CH]

Only for a rank 2 array. It seems to only work for axis equal to 1, and then no axis specified. Which, that's the...

00:30:22 [ML]

All right then, that's even easier to fix. They just need to support axis 2.

00:30:27 [CH]

Yeah, maybe that's true. Maybe I'll open a PR somewhere. One of the questions I had, which is super specific: you mentioned, you know, that you might hold the world record for the longest, you know, debugging of a program to get it to work. What was the language that it was written in, in 1958 originally?

00:30:44 [RC]

Uh, let me see if I can get the name right. I have the manual tucked away... Pegasus Autocode [21].

00:30:57 [CH]

Pegasus Autocode. I'm going to have to look that up, 'cause, yeah, when you were saying 1958, I was like, well, Lisp was in 1958. But, like, I don't know. And then I know that Grace Hopper, I think, worked on a couple of different machine languages, which, you know, you may or may not call programming languages, but anyway, so, Pegasus Autocode.

00:31:18 [RC]

Autocode goes back even further. I can't remember which of the early Manchester machines it worked on, but it was certainly around in the late 40s, early 50s. And the Pegasus implementation was absolutely staggering coding. That machine had 56 words of high-speed memory, and 4K of RAM; 4K words of RAM. And so the high-level language was actually swapping in an interpretive routine for each of the primitives that you used. But it was still usable; it's quite a simple language. I'll drop you a link later, because, it's not terribly well documented, but the emulator that I wrote for it is up on GitHub, along with the corrected version of my program and a couple of others, I think.

00:32:23 [CH]

No, that's awesome. Yeah, definitely, if you send that to us, we'll add that to the show notes. I myself am very interested to see what code from the 1950s, working 70 years later, looks like. And what was the program doing, if you don't mind me asking?

00:32:36 [RC]

Oh well, I think you meant to ask: what wasn't it doing? It was supposed to be calculating the value of e.

00:32:47 [CH]

OK.

00:32:48 [RC]

And the problem that I had was that I didn't understand how mixed division between a float and an integer worked, and so my loop never terminated. Luckily the guy who ran the program fixed it for me, but he didn't give me back the fixed version of the code, so I had to rediscover the problem and re-solve it back in 2014, or whenever it was. But the next lot of languages that I got familiar with were in that job: I got a job before going to uni in 1966, working on the Atlas [22], which was the successor to Pegasus. And Atlas was the supercomputer of its day. The story went that when the first Atlas was turned on in Manchester, the computing power in the UK doubled instantly. Now, what's really scary is, sadly, the folks listening to the podcast won't be able to see it, but this is a Raspberry Pi Pico [23]. It has about the same computing power as Atlas.

00:34:15 [RP]

It's about the size of a thumb

00:34:16 [RC]

and the cheaper version is $4. So yeah, it's been quite a change over my career.

00:34:29 [CH]

There's some, I'm not sure if it's an XKCD or some comic or whatever, that shows some alien race, you know, discovering Earth, and it's these sentient beings that are all walking around with supercomputers in their pockets, basically, and we're playing, like, Candy Crush on them. Anyways, Bob, you had a question.

00:34:49 [BT]

Well, actually, it was more of a comment, I guess. Do you find, Romilly, that you have an advantage in working with the Raspberry Pi because you worked on these early computers that essentially were of reduced capability compared to a Raspberry Pi? Is that useful to you? Do you find people who use the Raspberry Pi now are having to drop back, and shifting gears down is hard?

00:35:09 [RC]

Frankly, no. Even the first Pi was so much more powerful than those earlier machines. And the early machines encouraged you to do all kinds of absolutely frightful things: like, if you spotted that one of the instructions in your program looked like a constant you might need, you might use the same bit of memory to hold the instruction and also, elsewhere in the program, treat it as a value. And self-modifying programs were the norm, because how else are you going to do much in 56 words of memory? So there were all kinds of really scary bad practices that one picked up in those days. But by contrast, by the time we got to Atlas, two guys called Brooker and Morris [24] had written something called the Compiler Compiler, which was one of the first compiler generators, and suddenly the process of creating something really quite usable became relatively straightforward. And actually, the project I was working on was a small compiler team. Because I was a late addition to the team, I actually shared an office with the guy that was running the project, who was a brilliant maverick called David Hendry. David's battlecry was "death to the algorithm": his view of computing was that instead of writing an algorithm, you should be defining the problem, and then the computer should be working out the solution. I remember one moment when we were talking about how the language we were working on might handle arrays, and I scrawled up on the whiteboard: A becomes B + C. But if those are both matrices, it could still just work the right answer out. Luckily, I'd picked up the vibes from Ken Iverson there over the cosmic microwave, because that's exactly what APL does.

00:37:23 [CH]

Yeah, rank polymorphism: once you have it and then you go to a language that doesn't have it, it's painful having to write the nested mappings or transforms, absolutely.
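
[David Hendry's whiteboard example, as NumPy happily executes it today; a trivial sketch in Python.]

    import numpy as np

    b = np.arange(6).reshape(2, 3)
    c = np.ones((2, 3), dtype=int)

    # "A becomes B + C" -- and if B and C are matrices, it still just works
    # the right answer out, elementwise, with no explicit loops.
    a = b + c
    print(a)    # [[1 2 3] [4 5 6]]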

00:37:36 [RC]

Actually, going back to an earlier question from Bob, and also the question from Stephen about when I turn to APL: there was an example fairly recently, I did a short blog post about it, where I had a markdown file [25] and I needed to detect the bits in the file that were program code and flip them between two different representations, because the two tools I was using needed them either embedded or in triple quotes. It was a bit of a pain doing it in Python, because the whole thing became state-machine driven, and I find that harder to get working and test than it might be. In APL it was almost trivial. It took me longer to write the APL solution because I'm out of practice, but once I'd done it, it was much easier to understand. And yeah, about a tenth of the length, and much more flexible.
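
[A sketch of the array-style alternative to the state machine, in Python with NumPy, assuming the code blocks are delimited by ``` fences; an illustration of the technique, not Romilly's actual code. APL's xor-scan ≠\ marks the inside of each block in a single expression; here a cumulative sum modulo 2 plays the same role.]

    import numpy as np

    lines = ["intro prose", "```", "print('hello')", "```", "more prose"]

    fence = np.array([ln.startswith("```") for ln in lines])
    inside = np.cumsum(fence) % 2 == 1        # APL: ≠\fence
    is_code = inside & ~fence                 # inside a block, not a fence line

    print([ln for ln, c in zip(lines, is_code) if c])   # ["print('hello')"]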

00:38:55 [ST]

I recognize the issue. I have scripts in q that do that for the, I think now hundreds, possibly thousands, of pages we have on the Kx documentation website.

00:39:06 [CH]

Yeah, one of the things I wanted to ask about was coming back to the two things you said about APL, Smalltalk, and Forth, which interestingly are languages that in my mind sit in different paradigms: one's an array language, one's an object-oriented language, and one's a stack-based language. So to find commonality between those three things is, I think, a very unique observation. And of the two things you said: I've heard of it as REPL-driven development, but you called it, and I almost like your phrase better, development by successive approximation, which I think is awesome. And then saying what you mean, like the expressivity of it. I was wondering if you could maybe talk a little bit more about that, because I'm sure, you know, several of us on the call right now know what you mean, but if I were, you know, pre falling down the APL or sort of Haskell rabbit holes and heard someone say, oh, it's expressive, you just say what you mean, I would think to myself, well, I mean it. Don't you say what you mean in every language, like C++ or C or Java? So what is it about, I guess, that category of languages that is really the delineation between them and languages where you're potentially not saying what you want to do?

00:40:27 [RC]

OK, so you're absolutely right that the three use quite different paradigms. But one of the things they have in common is that you do something complicated by splitting it up into simple bits and just chaining them together [26]. If you choose really good names for those bits, then the code you write reads almost like English, in some cases very, very similar to English. So by expressing the thought, you're creating something which, when executed, implements that thought. There's a term in linguistics, I can't remember it, for human languages which work that way, where you string things together to gradually build up what it is that you're trying to say. But in computer language terms, there's a world of difference between having a succession of function calls, where the parentheses kind of get in the way, and a succession of words, successively interacting, which feel like what you were thinking. I'm sorry, that's not a very good explanation, but...
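
[A trivial Python illustration of the contrast: the same computation written as one inside-out expression, and as a chain of simple, well-named steps.]

    words = ["Boat", "apple", "boat", "Apple"]

    # Nested function calls read inside-out, parentheses in the way:
    nested = sorted(set(map(str.lower, words)))

    # A chain of named steps reads in the order you think it:
    lowered = [w.lower() for w in words]
    unique = set(lowered)
    chained = sorted(unique)

    assert nested == chained == ["apple", "boat"]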

00:42:11 [CH]

It made sense to me, I mean.

00:42:14 [RC]

OK. And I think the other way of looking at it is that you program by expressing intent. And I've got a lot of time for Haskell; there are things that it teaches you that no other language will teach as well, although I think Clojure comes close. You have to learn to stand on your head rather, in order to convert what you're trying to do into something which will compile. But Haskell has another characteristic in common with both APL and Smalltalk, not quite so true of my Forth, which is that if it runs at all, it probably works correctly. And that's quite interesting, and I still don't entirely understand why that is so. But I've heard it from enough other people that it's not just a quirk of the way that I use them. It seems to be a quirk of the languages.

00:43:19 [CH]

Yeah, I've definitely heard that about Haskell. I haven't heard it as much about APL and Smalltalk, but I guess... it's like, you know, what do you hear, the two most common problems in computer science? Or, actually, I don't remember the other one, but one is off-by-one errors, and there's some joke, whatever, or there's three common computer science problems.

00:43:40 [BT]

I think, I think he just told the joke.

00:43:43 [RC]

Yeah, yeah, yeah, yeah, yes. That was a perfect demonstration. Thank you.

00:43:48 [CH]

But APL and Smalltalk don't really, I mean, depending on how you program in Smalltalk, 'cause you have the facilities to do looping in Smalltalk. But I think with idiomatic Smalltalk, and I think that's actually what was one of the goals of Alan Kay [27], was for it to be a very beginner-friendly, like, you know, student-in-elementary-school-friendly language, such that certain sentences would literally read like an English sentence. I mean, with APL you could say it's similar, but it's Unicode, so, you know, someone will say that's hieroglyphics and then Ken Iverson will say "Thank you, that's a very nice compliment", but it's not English per se; it's a different language. Whereas Smalltalk can actually end up like English, which, uh...

00:44:38 [RC]

Well, the simplest example is you say something like department size, and it tells you how many elements there are in the department collection.

00:44:46 [CH]

And it's not, yeah, it's not composed with, you know, a dot operator. It's literally composed with spaces, and a period at the end of the quote-unquote sentence; I don't actually know what they're called in Smalltalk, but yeah...

00:45:01 [RC]

It also has one of the most delightfully expressive error messages that I've ever encountered in any language, which occurred, or used to occur, when we were trying to debug stuff that had windows where there was a strong parent-child hierarchy. If you got your code slightly wrong, you'd get an error message which said: parent does not understand children.

00:45:28 [BT]

Worldly advice as well as debugging.

00:45:33 [RC]

And that reminds me, another, I've got time for one more Smalltalk story?

00:45:37 [CH]

You know we've got time for several more if you have them.

00:45:43 [RC]

One of the things you could do very easily in Smalltalk, because Smalltalk is mostly written in Smalltalk, is change the behavior of the development and debugging environment. At one stage we were working with a crazy guy who'd written a very clever and extremely unreliable database program in Smalltalk, which, whenever you demonstrated it, always fell over at least once. And he said, I have a technique for doing demos (this was a German guy): I change the font color in the debugger to green and the background to green, and then when I get an error I have a completely green screen, and I say, ah, the signal to proceed, and I carry on. Which worked for him.

00:46:36 [CH]

I guess that's one way to do it. Because you've mentioned Clojure and Haskell, and then you also mentioned Forth; I wasn't sure if you had done Forth programming yourself [28]. Is this, like, a hobby of yours on the side, where you just experiment with other languages, or have there been times where you're actually...

00:46:51 [RC]

Uh, well, in the 80s, basically, you'd get a new language in pretty well every other issue of BYTE, and I'm dangerously curious, so if I saw another language, I would almost be unable to resist playing with it. But the Forth thing was actually because I wanted to do a port of APL that ran on the ZX Spectrum, and I figured that Forth would be quite a good implementation language for APL. Unfortunately, I was trying to run a company at the same time, so the APL port died out. But a guy that I actually employed briefly, he worked with me in the software house where I first saw APL, and he came and worked for me for a while when Cocking and Drury set up, a guy called Paul Chapman. Paul went on to do a very good job of what I'd started and failed to do, except that, Paul being Paul, he created a fork of Forth, called Diego, which was rather better than Forth in one sense, in that it did stack depth checking, so that it made sure that whenever you returned from a function call, there were as many things on the stack as there should be. And he created an implementation called I-APL, which for quite a while had a very big following: a free version of APL that ran on a lot of Z80 and other small microcomputer architectures. But no, I've just always, if I see a new language and it looks like it might be interesting, I try and get to the point where I have solved a simple problem in it. And if that was fun enough, I try and solve a hard problem in it. And then usually I move on to the next one. So I've accumulated, the last time I counted, somewhere between 60 and 80.
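
[A toy sketch, in Python, of the stack-depth checking described for Diego: each word declares how many items it pops and pushes, and the machine verifies the depth after every call. Entirely hypothetical, not Diego itself.]

    class StackDepthError(Exception):
        pass

    def run(word, stack, pops, pushes):
        # Execute a word, then check that its declared stack effect held.
        before = len(stack)
        word(stack)
        if len(stack) != before - pops + pushes:
            raise StackDepthError(f"{word.__name__}: bad stack depth")

    def dup(s): s.append(s[-1])              # ( a -- a a )
    def add(s): s.append(s.pop() + s.pop())  # ( a b -- a+b )

    s = [2, 3]
    run(dup, s, 1, 2)
    run(add, s, 2, 1)
    print(s)    # [2, 6]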

00:49:20 [CH]

60 and 80 languages?

00:49:23 [RC]

Along the way. That's counting a few assembler languages, but quite a few, yeah.

00:49:30 [CH]

Well, I have a question, but I think Rich has got a question, so I'll hang onto mine, OK?

00:49:33 [RP]

I've never played with it, but you mentioned Forth, so I thought I'd mention it and ask: have you ever seen CoSy [29], Bob Armstrong's language? I wonder what you think of that. I've never touched it personally. Like, I just haven't...

00:49:47 [RC]

It looks interesting. My problem is that I've kind of moved, well, for me the world has moved on from the situation in which Forth was really useful, 'cause the key thing about Forth was that you could build an entire development environment in somewhere between 4 and 8K bytes. And Forth allowed you to bootstrap the language itself. You could typically do a Forth port, if you were expert, in three or four days, and as a relative novice in a couple of weeks, and then you had a complete development environment with an editor and a debugger and all kinds of cool stuff.

00:50:29 [RP]

That kind of explains why it looks the way it does.

00:50:31 [RC]

None of the hardware that I work with these days has got that little memory. And I'm not sure, I've talked to Bob a bit about CoSy and the niche it's trying to fill,

00:50:44 [RP]

and I feel that Armstrong,

00:50:45 [RC]

I still don't really understand the problem that he's trying to solve, yeah.

00:50:53 [RP]

I suppose I've never had that problem, hence why I have not looked at it. And the other thing I wanted to ask: it sounds like you're solving lots of different types of problems and really finding a tool that works quite well for each of those. But do you find yourself combining languages much?

00:51:13 [RC]

Yes. When the Python-to-APL bridge, Py'n'APL [30], came out, I used that quite a bit. These days it tends to be more Unix pipeline-y things, where I use not one language calling another, but one language outputting stuff that another language can then use. But yeah, I tend to mix tools quite a bit, from a very small set these days. I think 95, maybe 99% of what I do, as far as the code that I write is concerned, is either Python or APL or a mixture of the two. But that includes stuff like JAX, where I think I'm writing Python, but actually it's doing all the hard work for me, running that code and using modules in other languages.

00:52:15 [BT]

You were saying that when you see a language that's interesting, you investigate it. What makes a language interesting to you?

00:52:24 [RC]

Oh, it might be one of two or three things. One is, it might just be that one of the people I know and respect says I should take a look. I can't remember who told me I should take a look at Python, but I'm very grateful. And I actually can remember who told me I should take a look at Linux, and before it a thing called Coherent, which was a kind of Unix clone that ran on 286s. An extraordinary character; I'd love to get in touch with him again, but I've not been in touch for years. He ran a bulletin board, and he was a Marxist custody sergeant. Marxist policemen are fairly unusual in the UK, and Marxist policemen who knew much more about technology than I did are even rarer, but he introduced me to all kinds of cool stuff. So that's one thing: somebody whose judgment has proved good in the past says take a look at that. The other is when you take a look at something and you clearly can't understand everything, but you see what the program is getting at, and it looks very different from anything you've seen before. So APL certainly ticked that box. Later on, Smalltalk ticked that box. And Python, to a degree, ticked that box, because of the fact that it gives meaning to the physical layout of the code. Oh gosh, so I'm going to have to write code that looks nice in order for it to run? That sounds cool. And by the way, it's one of the reasons why some of the people I know and really respect won't touch it, because they think that's absolutely abhorrent. And ideally they would like to write their entire program in one long line. I wonder what language it would be good to do that in.

00:54:32 [BT]

I wonder, you were talking about Python, you know, the physical structure of the language having to be right for it to run properly. Do you think that's part of the popularity? Because whenever you get a Python program in, it's going to be structurally set up the way the others are.

00:54:48 [RC]

I think so, although that, I think, is more down to the fact that, although they are quite clearly optional, Python has got standards for just about everything that shapes the way you code, and almost all of us choose to work to those standards. Certainly all the people who are software professionals who program in Python will be aware of them, and most of them will use them, and so it means that you can look at somebody's program and there's a lot you can know just from the look of it. And that goes down even to the casing standards for variable names and method names and class names. And again, I think part of that is: we all like to think that freedom is great, but actually people are at their most creative when they're constrained.

00:55:49 [RP]

There's a quote from a video I saw recently about, like, coffee pots or something, and it was like: the only thing better than perfect is standardized.

00:55:58 [RC]

Oh, I love that, I'm so going to steal that, thanks Rich.

00:56:02 [CH]

Some languages have taken that to, I wouldn't say the extreme, because there are many languages that are doing that.

00:56:08 [ML]

Well, Haskell does this a lot. Haskell, I mean, I would say that's one of the reasons why, when you write, you know, working code, it does the right thing: because Haskell just won't let you do the wrong thing. I mean, if you want to go outside of the area where an operation is doing what you expect, then it just gives you an error. Haskell is very, very eager with giving errors about stuff at compile time.

00:56:39 [CH]

Yeah, I was going to mention Go, 'cause Go ships with basically a formatter, and certainly, like, a lot of newer languages are just choosing this model as well. Whereas previous languages would have formatters, but they're configurable; newer languages are just saying there's one way to format your code, to get that effect of all code looking roughly the same style, so you don't need to spend, you know, 30 minutes or 10 minutes acquainting yourself with the, you know, guidelines and whatnot, and why is this slightly different; 'cause they all use the exact same formatter.

00:57:18 [ML]

Yeah, so that's it, it's totally different. That's at the surface level, I mean, which I guess is the same as we were saying about Python. But semantically, I'd say Go, with, like, slices, definitely lets you do things that would not be what you expect.

00:57:31 [CH]

Yeah, Stephen, I think you had your hand up a couple of times, but you're muted, which is why we can't hear you.

00:57:39 [ST]

Yeah, the hand was raised silently, to some extent, about that. Let's go back to your earlier point about memory. If I recall correctly, Forth was designed for controlling telescopes, and I think the design, the ideal target machine, is a wristwatch, maybe even a mechanical one. In contrast, you could say that APL is designed for the infinite spaces of mathematics, and certainly when I was learning it, that kind of appealed to me. And as a young programmer, it was a rude shock to find that, with only a couple of hundred K of memory, I had to take my elegant solutions in APL and actually break them up into pieces and loops to get them to execute, and do all kinds of computer-y science things. Now, in my own history, I took a break from programming in APL for, I think, something like 15 years, and when I came back to it, PCs were just f-ing huge, far bigger than we'd ever dreamed, massive memory space, and it seemed like, well, I'd come back to just the perfect environment in which to write APL. And the Raspberry Pi provides a good deal of this, and you've spoken about using it to explore. So I'm wondering, with the ability to put a big language like APL onto a tiny machine, what have you found in your explorations? What does that make possible?

00:59:32 [RC]

When you say a tiny machine, you're talking about the Pi?

00:59:34 [ST]

Physically tiny, yeah.

00:59:36 [RC]

Yeah, it's a combination of two things. One: an awful lot of what I do involves interaction with the physical world, so in a sense it's not so much the size of the thing as its connectivity. And I'll give you a concrete example. When you're doing digital electronics particularly, one of the useful tools is a logic analyzer [31], which is a piece of equipment that monitors lots of inputs at the same time, so you can see what changes when. The problem with logic analyzers is that very often you've got a huge amount of data, and only a small amount of it is interesting. And you can detect the interesting bit by looking for patterns in lots and lots of binary vectors. And wouldn't it be nice to have a language that did that well? For that purpose, APL is just a dream. And as it happens, one of the projects I'm working on at the moment is a logic analyzer in which the data is captured on a Raspberry Pi Pico W, and then, the W is wireless, so it shares that wirelessly with a Raspberry Pi, which will be running APL, and that actually does the analysis of the data that comes off, which is Boolean data. And lots and lots and lots and lots of it. So absolutely perfect for APL.
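
[A minimal sketch of that kind of Boolean-vector sifting, in Python with NumPy; the trigger condition, a rising edge on channel 0, is invented for illustration.]

    import numpy as np

    # samples: (time, channels) array of 0/1 captures from the analyzer.
    rng = np.random.default_rng(0)
    samples = rng.integers(0, 2, size=(1000, 8), dtype=np.uint8)

    # A rising edge on channel 0: it was 0 on the previous sample and is 1 now.
    ch0 = samples[:, 0]
    rising = (ch0[:-1] == 0) & (ch0[1:] == 1)

    # Keep only the moments where something interesting happened.
    interesting = np.flatnonzero(rising) + 1
    print(interesting[:10])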

01:01:16 [BT]

Well, you mentioned neural networks earlier, and spiking neural networks. The patterns that a logic analyzer would see but may not react to: is a spiking neural network working like a logic analyzer that would see those patterns?

01:01:34 [RC]

Uh, not at all.

01:01:35 [BT]

OK.

01:01:36 [RC]

I guess you could use it to learn how to do those things, but no. I'll spend a minute, but no more, talking about the two common ways of trying to build neural networks that do interesting stuff. Since about 1986, pretty well everything that's worked has been based around something called backpropagation [32], and the core idea is that you have data coming in, going through a series of things we call neurons, which very loosely reflect what we know about the way that neurons in a real brain work. That information goes through to the far end, and something at the far end says: yeah, you got that right, you got that wrong. Then that information flows backwards, and stuff gets changed so that next time it does it better. That's, you know, simplifying a two-semester course into a minute, so I've left a little out, but that's the core of it. The brain doesn't work that way at all. One of the key things about the brain is that, as far as we can see, there isn't the same kind of distinction between the training mode and the mode in which you see patterns and work out what they are. So perception and training are happening simultaneously in the brain. The second thing is that there's massive feedback at every level. And it's not feedback that waits for the data to get to the far end and then goes back; there's something else going on there, or so it seems. And these days we know much, much more than we did in 1972 about the way that stuff is actually wired up in real brains, and we know much more than we did about the relationship between the input history and the output history of an individual nerve cell. And so what the spiking neural networks are doing is simulating what we believe to be a good model for real physical neurons. And the challenge in doing that is that the obvious way of programming it, when you look at the algorithms, is something that's going to behave very, very badly on an architecture that does parallel computation.
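
[A minimal concrete version of that backpropagation description, in Python with JAX: data flows forward through a layer, jax.grad supplies the backward flow of error information, and the weights change so that next time it does better. A generic sketch, not any particular network discussed here.]

    import jax
    import jax.numpy as jnp

    def loss(w, x, target):
        pred = jnp.tanh(x @ w)              # forward: data flows through
        return jnp.mean((pred - target) ** 2)

    w = jnp.zeros((3, 1))
    x = jnp.ones((4, 3))
    target = jnp.ones((4, 1))

    g = jax.grad(loss)(w, x, target)        # backward: error flows back
    w = w - 0.1 * g                         # change the weights to do better
    print(loss(w, x, target))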

01:04:16 [BT]

Right.

01:04:20 [RC]

So what you have to do to get a spiking neural network to run fast is use all the old APL tricks. And to put that in perspective, I did a timing test a while ago, simulating a million individual neurons. In pure Python, which nobody but a lunatic would do, but it's obviously going to be slow, it takes about half a second. Using NumPy drops that down to a few milliseconds. And on a Jetson Nano it dropped down to some microseconds.

01:05:08 [CH]

On the Nano, is that still using the NumPy implementation, or did you switch to JAX at that point?

01:05:13 [RC]

No, no, I was using JAX. Yeah, and JAX on my workstation was even faster, because it's got 16 mega, sorry, 16 gig of RAM to work in. So one of the things that I'm doing at the moment is saving my pennies up for a Jetson, so that I think I'd get enough computing power that I can simulate a mouse brain, which would be pretty nifty.
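
[A sketch of the whole-population update that makes this fast, in Python with JAX. This is a generic leaky integrate-and-fire step, not Romilly's model: every neuron is leaked, integrated, thresholded, and reset in one vectorized expression, the old APL trick.]

    import jax
    import jax.numpy as jnp

    @jax.jit
    def lif_step(v, current, decay=0.95, threshold=1.0):
        v = v * decay + current           # leak and integrate, per element
        spikes = v >= threshold           # which neurons fire this step
        v = jnp.where(spikes, 0.0, v)     # reset the ones that fired
        return v, spikes

    n = 1_000_000
    key = jax.random.PRNGKey(0)
    v = jnp.zeros(n)                      # a million membrane potentials
    current = 1.2 * jax.random.uniform(key, (n,))
    v, spikes = lif_step(v, current)
    print(int(spikes.sum()), "neurons fired")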

01:05:47 [CH]

Do you think, because this sounds like a really neat Raspberry Pi, Arduino, you know, Jetson Nano kind of project, do you think it's something that beginners, you know, trying to pick up APL, NumPy, and JAX would find doable, or is there a lot of effort in trying to learn the JAX API?

01:06:11 [RC]

Wow, that's a hard one. Because, of course, my APL background meant that I worked quite hard at trying to learn enough of NumPy to be able to do what I wanted to. And in hindsight, I'm not sure how much time that actually took, in terms of both elapsed time and study time. Also, one of the benefits of having wasted so much time learning so many different languages is that I tend to learn new ones faster than I might have done, because there's stuff that's in common across that process. But certainly, if somebody is interested in doing that kind of project, it's doable on that hardware. And for the investment in time in learning to think in APL, there's a very good book, which I didn't write, sadly. I can't remember the name, but it's the book that was written by the IBMer on starting out.

01:07:16 [RP]

Stefan Kruger's book; it's just called "Learn APL" [33]. It's very good.

01:07:22 [CH]

You need something more catchy so we can remember it.

01:07:26 [RC]

And he does a very good job, I think, of explaining the kind of thinking you need. The other guy who's done a very good job of that, and who has had the same experience as I have about APL changing the way he codes in Python, is Rodrigo [34]. He's talked quite a bit about that, and certainly, as far as the JAX stuff is concerned, the bits that are hard are much, much easier once you've got APL in your brain.

01:08:02 [CH]

Yeah, that's almost identical to what João said on the last recording [35]. What was his exact quote? A week programming in APL taught me more about, like, array thinking than a year in Python or NumPy. Which is a pretty astounding statement.

01:08:22 [RC]

That makes sense.

01:08:23 [CH]

Uhm, yeah, maybe at some point someone will put a little talk together. I know that Rodrigo has a neural networks in APL video series on YouTube [36], but I think the next evolution of that, and maybe Rodrigo will take that and run with it, maybe Dyalog combined with a Jetson Nano, maybe he already has one, would be, yeah, doing some kind of neural network workflow with APL and JAX and NumPy: you know, starting with APL, then going to NumPy, then going to JAX and running that on some little, you know, Jetson device, and actually showing that you can get a significant amount of perf going from one to the other. There's some cool weekend project or month-long project in there. That would be very cool to watch as some sort of blog series.

01:09:12 [RC]

Yeah, yeah, absolutely. I've been encouraging Morten to buy Rodrigo a Nano for a while. Maybe after today.

01:09:23 [CH]

Watch this. Maybe I should just talk, I mean, I happen to work for the company that makes

01:09:26 [RC]

yeah, yeah.

01:09:26 [CH]

them, so maybe I'll talk to NVIDIA, and we'll do giveaways on this podcast, and then all the hosts will get them too.

01:09:38 [RP]

Well, here we go. I'll ask Aaron as well whether we can do Co-dfns on the Nano.

01:09:45 [CH]

Oh yeah, that'd be interesting.

01:09:47 [RC]

That would be very interesting, wouldn't it?

01:09:51 [CH]

All right. Well, I think we're roughly around the hour mark. Are there any last questions from folks?

01:09:58 [BT]

I'm going to mention those show notes, because we've mentioned a number of times that there are show notes with each episode. And also, Rodrigo, who we mentioned earlier, in addition to doing Py'n'APL and maybe getting a Jetson at some point, also does the transcripts, and there are transcripts available as well. So we really appreciate what Rodrigo does, on top of all the other things that he does. And I think that's my chance to sort of wrap up and put those little plugs in.

01:10:25 [CH]

There's one thing I've always been meaning to do and I always forget, something my favorite podcast, Functional Geekery [37], does religiously: they always ask at the end, do you have any talks, books, or resources you want to recommend that had a really strong impact on you, or that you revisit time and time again? I think we all personally have a few of these, but we never really go around, you know, we don't go to conferences and say, everyone read this book or watch this talk; every once in a while they come up organically. But it seems like you've spent a lot of time poking your head into different areas of computing, and I'm always curious. Once I asked someone and they recommended me this security talk, because I love watching talks online; it's my favorite way to consume media. And I was like, I don't really care about security, so I didn't even watch it for a couple of months. Then finally I ran out of talks to watch, so I watched it, and it was one of the best talks I've ever seen. It was a keynote at some kind of security conference [38], and it was so entertaining because it was basically one long roast of the security community, of how badly they deal with certain things, and the whole audience was laughing the entire time, because it was basically a roast of everyone in attendance at that conference. I can send it along if folks are interested. Anyways, if you have stuff like that you want to share, I'd be super curious.

01:12:01 [RC]

So one of them is a thing that's now available online as a course, and there's also the book: "Structure and Interpretation of Computer Programs", SICP [39]. That's very famous; a lot of people will have heard of it. The other two things that come to mind are more to do with object orientation than array languages, but I'll mention them because they are really good. One is the book by Rebecca Wirfs-Brock and Alan McKean about object-oriented design [40].

01:12:39 [RC]

I'll give you the link to the full title of that. And the other is a book that I guess is still relevant whatever language one is programming in, although a lot of the techniques are fairly specific to Java development; I'm currently trying to do my own version of it, well, a version of it for Python programmers. "Growing Object-Oriented Software, Guided by Tests" [41] by Nat Pryce and Steve Freeman gives you, I think, the best perspective I know of on the hierarchy of tests and how you work your way down from acceptance tests to unit tests in a very, very disciplined way. Probably an awful lot of the techniques they're talking about are to do with discovering the objects you need in your design, so from an array point of view, less interesting.
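
[A minimal sketch, not taken from the book, of the test hierarchy Romilly describes: an outside-in acceptance test drives a feature end to end, while a unit test pins down a single object's rule. The Auction class and test names are hypothetical; the tests run with pytest.]

```python
# Hypothetical illustration of working down from an acceptance test to a
# unit test. The domain object here is a toy, not an example from the book.
import pytest

class Auction:
    """Toy domain object standing in for whatever the acceptance test grows."""
    def __init__(self):
        self.highest_bid = 0

    def bid(self, amount: int) -> None:
        if amount <= self.highest_bid:
            raise ValueError("bid must beat the current highest bid")
        self.highest_bid = amount

def test_acceptance_higher_bidder_wins():
    # Outside-in: exercises the whole feature the way a user would see it.
    auction = Auction()
    auction.bid(10)
    auction.bid(25)
    assert auction.highest_bid == 25

def test_unit_rejects_low_bid():
    # Inside: one object, one rule, discovered while making the
    # acceptance test pass.
    auction = Auction()
    auction.bid(10)
    with pytest.raises(ValueError):
        auction.bid(5)
```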

01:13:46 [ML]

Well, personally, I mean, since I have a language that does objects well, I end up using them a fair amount.

01:13:54 [RC]

Oh, that might be interesting.

01:13:56 [ML]

I think the fact that it's not relevant is sort of a historical accident. You definitely can use objects to organize array programming; it's just that the array languages...

01:14:09 [RC]

Have you talked about that with Aaron?

01:14:11 [ML]

No, I don't think I've talked to him about BQN at all.

01:14:19 [RC]

The last time I was physically in the same space as him, I was delivering that talk about neural networks. And he was very resistant to the idea of using objects with arrays.

01:14:39 [ML]

Yeah, well, he has that talk, which I don't particularly like, about how you have all the patterns in ordinary programming languages, which are largely object-oriented, and then you always do the opposite thing in APL. Yeah, I don't like that one so much [42]. It's a very memorable idea, but I don't think you should use it to guide your array programming.

01:15:03 [RC]

It still reflects the way that I tend to do things in APL, and it's interesting that it's different from the way that I tend to do things.

01:15:10 [ML]

Yeah, which I think is kind of unfortunate.

01:15:14 [RC]

In other languages sometimes it is and sometimes it isn't.

01:15:18 [ML]

Yeah, I mean, I don't completely disagree with every point. It's just that I think you don't want to start doing APL thinking, I'm going to flip everything around and do everything the opposite way.

01:15:29 [RC]

I think it's OK as an observation of what many people do. I don't think it's particularly good as guidance for what you ought to be doing.

01:15:38 [ML]

Well, and the way it's most often taken is this is what the array community as a whole believes, which is very unfortunate.

01:15:46 [RC]

Well, it's unfortunate, but it's also fairly accurate, in that an awful lot of people do think that way. I've had the tremendous fun of pair programming when we were converting a Python application of mine to APL to see if it needed objects. It was a very interesting experience, and a lot of fun.

01:16:12 [CH]

Alright, we will end with this, and this is kind of putting you on the spot. It was going to be a two-part question, but you kind of already answered one of them. As soon as I heard that you've dabbled in 60 to 80 languages, my first thought was, because we live in a top-five, top-ten world now, where everything is listicles instead of articles, and it's hard scrolling through the Internet to actually find a decent article; even on Twitter there's a new trend where it's one tweet followed by nine sub-tweets listing 10 things you don't really need. But here's the question anyway: what would you say your top five languages are? You don't need to order them. And the question you kind of already answered, which is a different one, is what would you say the top five most interesting ones are, because you said APL, Smalltalk, Python. I'd be curious, because whenever I meet an extreme polyglot who has dabbled around that much, I'm sure you've got some languages or insights that I have yet to stumble across.

01:17:20 [RC]

So, four of them would be no surprise: APL, Smalltalk, Python... oh well, actually, no, Clojure might be a surprise. I like what I've seen. I haven't done anything serious in Clojure, but I like it. It seems to be better than Lisp, not least because it runs on the JVM, which makes it commercially much more acceptable. The last of my five would be a language that probably very few people have heard of these days, but I loved it in the 70s and I still go back and occasionally have a play with it: a language called Pop2 [43].

01:18:05 [CH]

Pop2, OK.

01:18:09 [RC]

Pop2 was the second language, hence the two, created by a guy called Robin Popplestone, and Robin was another maverick, who worked at the University of Edinburgh Department of Machine Intelligence and Perception [44]. In the 70s, they were busily building robots and programming them in Pop2. Pop2 was kind of a blend of a bit of Algol, quite a lot of Lisp, and quite a lot of new ideas, so it had partial application, it had list processing, it had a REPL, and it was very expressive. Anyone who's interested in the history of programming might well get some pleasure and some insight out of taking a look at it.

01:19:08 [CH]

Is it a language where you can find the executable online and run it, or...

01:19:14 [RC]

Sort of. There's a port called Pop-11, which ran on the PDP-11, and if you search for Pop2 you will find such things. One of the guys who works on the currently available ports, and I'm going by memory here because I haven't touched it for a couple of years, but I think it's Aaron Sloman, moved from Edinburgh to Sussex at some stage, was part of the team that did the port, and has maintained a current working version. Yeah, it's certainly available. What I'll do is find a link and send that on to you.

01:19:59 [CH]

Awesome.

01:19:59 [RC]

So you can put it in the notes.

01:20:01 [ST]

This must have been from the Donald Michie time at the...

01:20:05 [RC]

Yes, that's right. Donald was Pop's boss, and when he hired Pop, Pop decided, for some strange reason, that he was going to sail to his new job. Unfortunately the boat sank en route, so he turned up on a Monday morning saying, I'm Robin Popplestone, I work for you, and I haven't got any clothes or money. His other great claim to fame is that he's the only person I know who managed to have his folding bicycle fold under him on Princes Street in Edinburgh in the middle of the rush hour. He was a great guy.

01:20:49 [ST]

I ran into some people from the Humanities faculty in Edinburgh at that time; they told me that within the university, the Department of Machine Intelligence was known as the Department of Romance Robotics.

01:21:06 [RC]

Oh, I love it. Yeah. And Michie, of course, was also famous for producing a machine called Menace, which was a trainable matchbox machine: you trained it using reinforcement learning with beads, and it learned to play a perfect game of noughts and crosses [45]. And I'm 85% finished with a book about an implementation of Menace in APL [46], which you can find a reference to on the Orchard. One day I really will finish it, honest.
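
[A minimal sketch of Menace-style matchbox learning, a reconstruction of the idea in Python rather than Romilly's APL implementation; the starting bead counts and reward scheme follow the classic description but are illustrative.]

```python
# Hypothetical reconstruction: one "matchbox" of beads per board position,
# with beads added or removed depending on how the game ended.
import random
from collections import defaultdict

boxes = defaultdict(lambda: None)  # board state -> bead count per move

def choose_move(board):
    # board is a tuple of 9 cells: 'X', 'O', or ' '.
    if boxes[board] is None:
        # New matchbox: start with equal beads on every legal move.
        boxes[board] = [3 if cell == ' ' else 0 for cell in board]
    beads = boxes[board]
    total = sum(beads)
    if total == 0:
        return None  # the box is empty: Menace "resigns" from this position
    pick = random.randrange(total)
    for move, count in enumerate(beads):
        pick -= count
        if pick < 0:
            return move  # weighted random choice: more beads, more likely

def reinforce(history, result):
    # history: list of (board, move) pairs played; result: +1 win, 0 draw, -1 loss.
    delta = {1: 3, 0: 1, -1: -1}[result]
    for board, move in history:
        boxes[board][move] = max(0, boxes[board][move] + delta)
```

[A driver would play games against an opponent, record the (board, move) pairs, and call reinforce with the outcome; over many games the bead counts steer Menace toward perfect play.]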

01:21:48 [CH]

Well, they always say the last 10% is the hardest, right? Alright, with that then, thank you, Romilly, for coming on. This has been awesome and I've learned a ton of stuff. I wasn't expecting it; I wouldn't say it's rare, it happens at least every once in a while, but it's always a surprise to hear about a language whose name I've never even heard mentioned in a talk or anything, so I'm definitely going to have to look into Pop2.

01:22:11 [RC]

Ha ha.

01:22:13 [CH]

And also, what was it, Pegasus Autocode was your first language too? I'll have to look into that as well. So yeah, tons of great little nuggets that I had never even heard of before. So thanks so much for coming on and spending your time with us.

01:22:26 [RC]

And thank you, that was a lot of fun.

01:22:30 [CH]

And with that we'll say happy array programming.

01:22:32 [All]

Happy array programming.