Transcript
Thanks to Adám Brudzewsky for providing the transcript.
[ ] reference numbers refer to Show Notes
00:00:00 [Aaron Hsu]
I think APL enables exploring your data more quickly and more intuitively than just about any other language that I've seen. So even if you want to link in and use some specialized libraries to do certain things that you're comfortable with, doing it with APL as the sort of glue makes a lot of sense just because of how easy it is to integrate and clean up your data and manage your data and work with it and store it and serialize it. All of that cleanup and management, I think, is made a lot easier in APL.
00:00:42 [Conor Hoekstra]
Welcome to a special episode of Array Cast. We are live from the APL Seeds 2022 Conference [1], a one-day, or half-day, conference put on by Dyalog Ltd aimed at introducing the language to people that are either beginners or have only sort of dabbled in APL or Dyalog APL or the array languages. So, it's going to be a little bit of a different episode today because, not only do we have three of our four regular panelists; we also have all of the presenters from the conference today, as this is the last session of that conference day. And we've also got a couple of other extra people, and I think we'll go around and do introductions.
00:01:21 [CH]
Brief introductions for everyone, but as usual, we'll start with our regular panelists, and we will count Rich as a presenter for today, even though he has been a regular panelist in the past. So, we'll start with Bob, then go to Stephen and then go to Adám. And from there I'll then introduce the next set of people that'll give their brief introduction. So, Bob, you take it away.
00:01:41 [Bob Therriault]
Thanks Conor, I am an impostor. I am a J enthusiast here at an APL conference, but I'm also an APL beginner, so actually I kind of fit into both sides, but that's what I'm here for and I'm really interested to hear what newcomers have to say about APL.
00:01:58 [Stephen Taylor]
I'm Stephen Taylor. I'm an APL and Q programmer these days and fascinated to see what APL has grown into.
00:02:08 [AB]
I am Adám Brudzewsky. I work full time as an APL programmer for Dyalog. I've been doing APL all my life, even when not professionally.
00:02:17 [CH]
So those are our three panelists. From here, we will go to brief introductions from, I believe we'll count it as, four presenters, if we include Gitte. So, we'll start with Gitte and then we'll go in order of the presenters. So, Rich, Stefan and then Andrew.
00:02:31 [Gitte Christensen]
Yeah, I guess, I mean, if you were here at the beginning, you probably know who I am. I'm Gitte Christensen [2] and I'm the managing director of Dyalog Ltd. I've been using APL since '83, and actually, I think I have to admit that I stopped around 2005 when I became the director of Dyalog, because there hasn't really been time for coding, but I think I'm nearing retirement and then… I'll get back to it. So, that's me.
00:03:04 [Richard Park]
Yeah, I'm Rich Park. I also spoke at the beginning of today. APL evangelist, and working on, sort of, teaching, training. Getting the word of APL out there. And thinking of projects to keep Gitte busy in retirement, I guess.
00:03:22 [Stefan Kruger]
I'm Stefan Kruger. One of the speakers in the conference. I'm a sort of an array language fanboy, I guess. I try to learn a new language every year, and I sort of got to APL via K and I kind of got stuck in, and now I'm sort of trying to help spread the word, really, by writing material to help other people learn.
00:03:46 [Andrew Sengul]
I'm Andrew Sengul. I'm a kind of multidisciplinary software developer. I've been focused on Common Lisp [3] for a long time, and APL has been a more recent interest, but it has led me to develop my April APL compiler [4]. And as you just saw in my presentation, that's being leveraged in a hardware startup, Bloxl [5], and in many other areas. And I'm eager to explore more potential connections between languages and programming communities.
00:04:20 [CH]
And, last but not least, we have two other panelists, I'll say, joining us. They are both employees of Dyalog Ltd, but more importantly, of course, they are past guests on Array Cast. So first we'll have Aaron introduce himself and then Rodrigo.
00:04:34 [Aaron Hsu]
Yeah, I'm Aaron Hsu [6]. I work on the Co-dfns compiler, and I guess you'd call me a computing researcher at Dyalog, and I've been doing APL for a little while now.
00:04:46 [Rodrigo Girão Serrão]
And my name is Rodrigo Girão Serrão, but you can just go with Rodrigo, and I work for Dyalog as well. I've been learning APL for two years now. Still pretty much feel like a beginner. And yeah, I was this close to presenting today, but it turns out I didn't, so, yeah.
00:05:04 [CH]
Awesome. So that, I think, is nine people in total, excluding myself. So, I'm your host, Conor Hoekstra. I am an un-professional array language programmer. Day-to-day, I program in C++, but I'm a huge array language fan and combinator [7] fan, and super excited to have so many people here today on this episode. And the way it's going to work… So, we have, I think, three questions queued up. I will be moderating the questions and basically asking them to everyone that is in the chat, and we're going to get, I'm assuming, multiple different answers for each of the questions, unless it's a simple question. So, if you have questions and you are attending live, obviously, feel free to drop them in the Q&A, and I will keep track of them and we'll go from there. And also, I'll be monitoring the chat, and if some interesting discussion comes up there, I'll also relay it live on this episode. So, I think we will start with the first question that's queued up in the Q&A, which is probably going to have a pretty short answer, but it's a certain amount entertaining. And maybe there will be a follow-up question that I will ask after it. The question comes from Adnilson Delgado — I hope I pronounced that correctly — and it is: "Where can I find APL keycaps for mechanical keyboards?" Who wants to take this question?
00:06:20 [GC]
I can take that. Well, we do make keyboards, whole keyboards, not just the keycaps. And we have them in a US version, a UK version, a German version, and a Danish version, or Nordic keyboard. Keycaps were done in the past, predominantly for the IBM keyboards, and you might be able to find a bunch of those at an antiquarian, but if that's not possible, then we do… there's a guy who recently started making stickers for the keys, and you can find his details on our website as well if you want that. But keycaps for different keyboards, no; there's a huge setup cost to make these, and so we have to stick to a few options.
00:07:14 [CH]
Adám, did you want to add anything to that?
00:07:17 [AB]
Gitte mentioned the IBM keyboards. Actually, if I remember right, the employees of IBM's division for making keyboards bought out that part of IBM and set up something called Unicomp. And lo and behold, they still produce APL keyboards with injection-molded APL keys. If you like, you can buy the caps themselves for Model M keyboards, or you can buy whole keyboards from them as well. And you can find all this on the APL Wiki, just apl.wiki/typing [8], and then you'll find a section called Hardware, which also has links to Dyalog selling keyboards.
00:07:52 [AS]
It's worth mentioning that keycap production is considerably cheaper now than it was in the past, thanks to many people and enthusiasts getting into designing them, so if a significant number of people wanted APL keycaps, they could have them injection molded. Recent keycap designs have been using top-quality double-shot injection molding, which means the legends can never wear off. There might be something like that out there already; I haven't checked myself. I just follow the approach of, you know, use a keycap legend and, you know, after a few days, you won't need it anymore.
00:08:29 [AB]
There's actually an APL Wiki article on mnemonics [9], if you look that up. It's not really that hard to remember where the keys are.
00:08:38 [AH]
Yeah, it's also worth noting that if you have a Cherry MX keyboard, it's going to be a lot easier to source printed keycaps for that. The traditional dye sublimation for the Model F and Model M keyboards has become harder to source, but there is a guy who's making brand new Model F keyboards, who has sourced actual production facilities for producing custom dye-sublimation projects, that you could talk to if you had the volume for it. But otherwise, Cherry MX is probably the easiest one to find now — compared to buckling spring, at least; that's my take.
00:09:13 [AS]
That's what I should have mentioned: the Cherry MX [10] switches are getting a lot of support in aftermarket keycaps now. Lots of keyboards have those switches, so you can find a huge variety of keycaps for Cherry MX.
00:09:24 [AH]
Yeah, but there's nothing quite like typing traditional APL on an IBM buckling spring keyboard [11]. Like you know, it's just…
00:09:33 [AB]
This is the way!
00:09:34 [AH]
Yes, it is the way.
00:09:37 [ST]
Well, a word of warning: I noticed the fashion for revived Model M keyboards amongst my younger colleagues and I got hold of one. Yes, a beautiful thing; took up half my desk. And, well, my partner complained she was going deaf, so I had to move it on.
00:09:57 [AB]
Now we see a benefit of APL: you have to type many fewer keys than in other programming languages, so there's much less machine-gun shooting.
00:10:05 [CH]
All right, so we'll make sure that we link all of the links in the show notes, which we always do a great job with. The follow-up question I sort of have to this: one of the most common complaints, or, I don't know, sort of "chirps", maybe you'd call it, on Twitter that I get is, you know, "How do you type these symbols?" And, I guess, the form the question will take is: "Do you actually need an APL keyboard? Like, do you think it's a necessary thing to really get started with APL?" Who wants to go first on this?
00:10:35 [AS]
I never had one, never saw the need.
00:10:37 [CH]
You want to elaborate why you don't think there's a need for it?
00:10:41 [AS]
Like I said, I just started with a legend and typed until I didn't need the legend anymore, but they all have mnemonics, like Adám was mentioning: like O for the circle (○), R for Rho (⍴), I for Iota (⍳)… The encode and decode can be a little counter-intuitive, but most of them I found pretty easy to memorize. [13]
00:11:03 [CH]
Rodrigo?
00:11:04 [RGS]
I just… I think we need a clarification. Do you mean the physical keyboard, or my keyboard being able to type APL glyphs directly?
00:11:13 [CH]
I mean more just, like: do you need glyphs on your keyboard, whether in the form of a purchased APL keyboard or in the form of, you know, a keyboard with mechanical keycaps that you can either get stickers for or, you know, custom-printed ones? Because… I think that's like… a lot of people have this aversion, like, well, "it's going to be such an overwhelming, impenetrable task to learn how to type this stuff." And so, I'm just trying to get a sense of how people feel: how did they find having to learn these symbols and type them?
00:11:48 [AH]
So, these days I would just ask: how many emojis do you use on a regular basis?
00:11:53 [CH]
Quite a few.
00:11:54 [AH]
Yeah.
00:11:55 [RP]
Definitely simpler than typing emojis or… I have a Japanese IME as well, and selecting the kanji for that or whatever is pretty straightforward. But I remember, actually, I think it was like the second day of doing any APL at all, I was on a Zoom call with Morten doing some pair programming. At one point he asked me to get the tally of something, and I kind of scanned across (I wasn't typing then) and I just clicked the symbol that looked most like tally to me, and he was like, "How did you know that?" "I don't know; I just kind of guessed it."
00:12:32 [RP]
In terms of typing, yeah… maybe I'm not enough of a fanboy, actually, 'cause I don't have stickers or keycaps. I should probably get something. I literally just have that type of memory where… yeah, it didn't take too long to figure out where everything is. So, maybe, you know, maybe for some people it is actually really beneficial to have that option; something to remember how to type the glyphs.
00:13:00 [RP]
I know, you know, when you're getting absolutely started at the beginning, TryAPL has two good methods. One is the kind of classic — we have it in the Remote IDE, RIDE, as well — the prefix key. For most people it's on the top left; it's the backtick. And then the glyphs are somehow associated mnemonically with letters, like backtick A is the alpha (⍺), I guess.
00:13:26 [CH]
Backtick W is Omega (⍵).
00:13:28 [RP]
Yeah, backtick D is Downstile (⌊), and there are other things like that. I mean… I don't know whose idea it was or who implemented this, but the TryAPL input method also lets you do this "tab input composition" thing, so you can type two ASCII characters that kind of look like the symbol you're going for, and then you press Tab and it completes it into the APL symbol. And, you know, for a long time, I didn't think there was anyone really using that, until I got a message asking, "Oh, can I get a macOS Tab-completion keyboard?" So, then I borrowed the office Mac and made a little keyboard layout for macOS that lets you type two symbols that vaguely look like an APL symbol. And I think it's Alt you have to press instead of Tab. There are lots of methods… Sorry, Aaron!
00:14:20 [AH]
No, I was gonna say I'm a little old school. I think one of the best ways to learn is: you just print out the key layout [14], put it right above your keyboard, and try to encourage yourself not to actually look at the keys. I think it's related to touch typing: the value you get from learning not to look at the keys is really quite high.
00:14:40 [CH]
We'll go to Steven and then Adám.
00:14:42 [ST]
Now, you could say it's muscle memory. Even with ordinary typing, I mean, say, ordinary use of the ASCII set, you stop looking at the keys and you learn where they are, or your fingers learn where they are. You could argue, "Well, can you learn an alternate set, so you type backtick A for alpha, for example?" Well, most of us have found that you can do that. It's no different from regular typing. I guess that's my point.
00:15:12 [CH]
Yeah, do you want to add something, Adám?
00:15:13 [AB]
Well, I wanted to answer Mark Shure's question in the Q&A.
00:15:16 [CH]
Wait, we'll get to that. We'll get to that one. I am Master Moderator. We will get to Mark Shure's question in a little bit. And I guess the last thing I'll say, 'cause we don't want to spend the whole time talking about keyboards: one of the things I wondered why APL didn't have for the longest time was just a way to, like, type in the name of a glyph and then hit some autocomplete key. So, like in Julia, I think one of the ways — because they also support Unicode characters; they have the same Jot symbol (∘) for compose — the way that I think you type that is, you go backslash (\) and then you just type out "compose", and then when you do either a Tab or a space, it just converts that into the symbol. And obviously that's a lot slower than learning the, you know, muscle memory of just backtick or special, you know, key plus the single letter, but it's a great, you know, onboarding way if you know what they're called. And that's the thing: in APL there are a bunch of nicknames, like the comment lamp glyph (⍝) is, you know, sort of nicknamed "R2D2". And so, in the RIDE editor, I think, if you do a single backtick plus the single key, you get a symbol, but if you do two backticks, you can actually type in what it's called, so you can go backtick, backtick, "lamp", or if you know the nicknames you can go "R2D2", and then it'll complete it, which I think is both awesome for beginners and super fun. But, unless there's anything anyone wants to add, we'll close up this question on keyboards.
00:16:45 [RP]
You're gonna be surprised by the next question, then, Conor. You go, you go ahead and read it though.
00:16:49 [CH]
Well, I'm not sure what we… Oh, I see! We're going to go out of order because this is a follow-up.
00:16:55 [RP]
Is Mark Shure not the top one for you?
00:16:59 [CH]
Well, no, because I've been monitoring both the chat and the Q&A as the expert moderator of questions.
00:17:05 [RP]
So, then it's up to you whether we want to go away from keyboards and back or…
00:17:07 [CH]
We can go, yeah, we'll answer. So, this is actually Mark Shure's second question, which is why I was confused a little bit earlier. So, Mark Shure asks, "APLX had a nice popup keyboard display showing glyphs on the keyboard. Does Dyalog have something like that to help newbies?" So yeah, Adám, do you want to answer that?
00:17:25 [AB]
Yeah, so we don't actually have something like that, but now that I think of it, that would be pretty easy to supply. And so, we don't have a pop up per se that you can like keep open as part of the session.
00:17:38 [AH]
Can't you just undock the language bar?
00:17:40 [AB]
Sure, but that doesn't show you… that doesn't show the keyboard layout.
00:17:43 [AH]
Oh, OK, OK. Yeah, yeah, yeah.
00:17:45 [AB]
But again, it should be really easy to add that. I'll look into it.
00:17:49 [AH]
I thought that was on TryAPL.
00:17:51 [AB]
TryAPL has that, but that's the whole question: whether the actual development environment has that as a feature. And Fiona was really nice to paste the link into the chat, which is dfns.dyalog.com/n_keyboards.htm, which lists all the different key layouts, like the default type layouts as well, and that is also part of the install, so you can actually pick them up there. And that's why I'm saying it should be really easy to add such a pop-up. I'll look into it.
00:18:21 [GC]
It is quite easy. We had one for quite a while, but nobody was using it, and that was when JD made the language bar, the list of the glyphs at the top of the development session, and I find that I use that. I have, like Stephen, memory in my hands for all the normal glyphs, but all the new ones, that I haven't sort of used in anger, I am not entirely sure where they are; but I find them in the language bar, and then you just click on one and it's inserted in your session. So, I mean, yeah, we had one, but nobody used it, and nobody complained when it disappeared, when the language bar came along. So, I think, I mean, try and use the language bar, is my own recommendation.
00:19:15 [AB]
It also shows how to type things, by the way.
00:19:18 [GC]
Yes, it does.
00:19:21 [CH]
I feel like, yeah, at some point in the next couple of months, there's going to be a video that comes out, 'cause we just chatted for 20 minutes about all the different methods to type symbols. Because there are a bunch of different methods. But yeah, I think that, in summary, it's a lot easier than you think it is. I've actually never explicitly done anything to learn how to type APL symbols. It just, kind of, comes. You start, you do it once, twice, and then after the fifth time, somehow, you've remembered it.
00:19:48 [CH]
It's like sometimes I forget my passwords for certain accounts because I'm typing it on my cellphone, and I don't actually remember what my password is. Just like I know I've been doing this for six years. It's muscle memory, but if I have to type it with two thumbs I can't remember. So, the brain is an amazing tool for picking up these things, sort of subliminally.
00:20:06 [CH]
So, we'll move to… We're actually going to come back to Brian's question, Brian Becker's question, 'cause I think that one is going to yield the longer responses from folks, but we'll skip to Mark Shure's first question, which they asked, and it reads: "I like to learn from reading several different books. Can anyone recommend good introductory APL books other than the Mastering [Dyalog APL] book? I'm aware that there's an online update to the book in progress and I really like Stefan's Learning APL site, but looking for other references as well." So, do folks have any good answers to that: good, sort of, introductory books or material for learning APL other than those two?
00:20:39 [RP]
I guess, as I probably mentioned right at the beginning, I mean, the APL Wiki tends to be the place to go to find all the collections of this, so I did paste a link to apl.wiki/learning_resources for the vast collection of things.
00:20:56 [AB]
There's also…
00:20:56 [RP]
Absolutely split up.
00:20:58 [AB]
Sorry. There's also apl.wiki/books — for books. [15]
00:21:01 [RP]
For books.
00:21:05 [AH]
Ah, this isn't quite a book, but I've always found… I got a lot of value out of reading the old APL Quote Quad papers [16] and things inside the Vector journal archives [17].
00:21:18 [CH]
And I was going to add, this isn't necessarily a book, but probably at the point where I am in my learning journey, one of the most useful tools is actually a tool that Adám built called APLcart [18], which is sort of a searchable database of small expressions with descriptions. So you can type in, if you want to, I don't know, something trivial like reverse a list. Like, there's obviously a glyph for that, but if you didn't know that, you could just type in "reverse" and it'll match all the descriptions of the expressions. And so, a lot of times when I don't know how to do something, I'll just go search it there, and either the exact expression that I want is there, which is perfect, or something very similar that I can just tweak will be there. So, not necessarily a reading resource per se, but a very, very useful resource at that.
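For reference, this is the kind of snippet such a search surfaces; a minimal sketch, assuming a Dyalog APL session (the examples below are illustrative, not taken from APLcart itself):

```apl
      ⍝ ⌽ (Reverse) reverses a simple vector
      ⌽ 1 2 3 4 5
5 4 3 2 1
      ⍝ used dyadically, the same glyph rotates instead
      2 ⌽ 1 2 3 4 5
3 4 5 1 2
```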
00:22:05 [CH]
Any other resources, just in general, so maybe outside of books that people want to recommend? I know at least like three or four of the folks here have YouTube channels where they post, you know, little APL 5 to 10-minute videos or even longer.
00:22:21 [RGS]
I just want to say that I started out learning APL in a chat room online [19], and there are — what is it, fifty? — conversations bookmarked where Adám goes through many different glyphs and some ideas and some concepts, and it worked decently for me. I'm not totally incompetent with APL.
00:22:42 [BT]
I was going to add Adám's — I think he does them weekly — interactive sessions where he works through a problem [20]. And I think books are a great way of, you know, serving as a reference, but when you actually have somebody working through something in front of you — and YouTube videos can work as well… But doing it live is, I think, that's the power move.
00:23:02 [RP]
Yeah, it's great when you have the opportunity. I don't know this particular resource, but Eric Sargeant in the chat has pasted "Understanding APL (in only 43 pages) by Susan M Bryson" [21]. So, I'll have to check that out, among the, yeah, many other resources that exist. Now I wonder where you can find that. One of the craziest, well, one of the great things about APL and its history is definitely the amount of resources that exist from decades of publications and different conferences. A lot of that is cataloged fairly well on the APL Wiki. In addition to, you know, the tutorials and books that exist: something like Mastering Dyalog [APL] will obviously hold your hand going through the basics of the language, but, you know, I think as people have mentioned here, at a certain point you're going to be looking for, I guess, broader and deeper tricks and applications and ways of doing things that aren't so easily put into a structured order like that. And those can be found in, like, Vector articles and APL Quote Quad, as Aaron said. But the people solving problems on YouTube — should we plug all the channels?
00:24:27 [CH]
Well, I was going to say I can just go through it, 'cause I'm staring at it, so I'm going to go from top left to bottom right on my screen. I know Bob has a YouTube channel on J [22]. Rich, you have a YouTube channel under the handle RikedyP [23], I believe. You haven't been posting as of late, but you have a huge backlog of videos. Stephen, I don't think you have a YouTube channel, but I know you have a blog [24], and you did a great series recently on the Advent of Code [25], I think, in Q, which had some really, really sort of enlightening solutions there. Aaron, I know you have a YouTube channel too [26]. You don't post super regularly, but there are a couple of awesome talks that you have posted there. You did a — was it one-hour or two-hour? — deep dive into your Co-dfns compiler, which is mind-blowing. And there's also, like, a Hacker News thread [27] that sort of came out of… Adám, you have a YouTube channel that you just… I'm not sure if you started it recently or just started posting the Quest videos recently [28]. Rodrigo, you have a YouTube channel where you solve LeetCode problems in APL [29]. And then I'm not actually sure about Gitte, Stefan, or Andrew? Do the three of you? So, Gitte does not. Stefan and Andrew, do you have YouTube channels?
00:25:35 [AS]
Not at the moment, but I'm pulled between a lot of things right now.
00:25:39 [CH]
Yeah, Andrew has multiple talks about April online, and also Seed, which was a prior project that Andrew worked on, and Stefan obviously has his book. So, I mean, between the… And I guess I should mention I also have a YouTube channel [30], which I flip-flop between, you know, videos on a bunch of programming languages and then covering different textbooks. So, you know, mine is less concentrated than some of the other folks' that have YouTube channels here. So I think, you know, of the ten of us, I think — what is that? — like eight or nine of us are producing some amount of digestible content in one form or another. So, I'm sure we'll put all of that stuff in the show notes. Rich?
00:26:14 [RP]
I should also add the Dyalog Ltd YouTube channel [31] and the Dyalog User Meeting YouTube channel; both of those, collectively, you can find on dyalog.tv, and that includes not just, you know, webinars and sort of teaching materials about specific topics, but also presentations from our users talking about the things that they actually do with APL in the real world.
00:26:39 [CH]
Yeah, and I think all the talks from this conference today, APL Seeds, will be online too. The last time we did one of these live recordings was at the Dyalog APL conference in November of 2021 [32] — I think it was, or late October — and all of those talks are online. And I think maybe we should start, like, an "intro to APL" list of talks, because Rodrigo just gave a talk, and Aaron gave one too, although Aaron's was less on the introductory side compared to Rodrigo's, at the Functional Conf 2022 conference, just in the past week. So, maybe worth it, because some of these talks are not given at Dyalog conferences: if we created some curated list, we could, you know, add talks that happen at different conferences like Functional Conf or functional conferences. That might be a useful resource for learners.
00:27:26 [CH]
All right, we'll wrap up that section on content, and maybe we'll hop back now to Brian Becker's question from a while ago, where he asked: "I'd like people to comment on clarity versus efficiency," and adds in parentheses that "I have some rather strong opinions on the subject." So, if you want, maybe you can type in the chat, Brian, a little bit of your strong opinions, and maybe we can fold them into this. Who wants to tackle this question first? Rodrigo?
00:27:57 [RGS]
Yeah, I just want to be the one to give the annoying answer of "it depends", right? Are you writing performance-critical code or not? If I am writing code to teach others, then I will prefer clarity over performance any time of the day and twice on Sunday, but if it's performance-critical, then I'll just go for performance, probably.
00:28:21 [CH]
Right, who's going to hit the tennis ball back across the court? Stefan?
00:28:25 [SK]
Well, I think, I mean, Rodrigo is obviously right, but I think, in the ideal scenario, the elegant, clear code should also be the efficient code. Let's say the principle of least surprise: this is what I found with learning APL, and this was something that took a long time to, sort of, wrap my head around, that the code I wrote that was obvious and clear was always the slowest. And then someone would pipe up in the chat room and say, "you can do it like this, instead", and it's, like, you know, two orders of magnitude faster. It took a long time to develop that sort of intuition, which comes naturally to me in other languages like Python or C or whatever, because I've obviously got more experience in what's actually going on behind the scenes, right? And so, I think that's the bit that I found somewhat counter-intuitive in APL: sometimes the super-fast stuff didn't seem to, sort of, marry with my idea of what was obvious, at least not initially.
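A small sketch of the kind of gap Stefan describes, assuming Dyalog APL (the performance claim is indicative, not measured here): applying a scalar function with each (¨) often reads as the "obvious" version to a newcomer, yet the flat array expression computes the same result and is typically far faster, because the interpreter can vectorise the whole operation at once.

```apl
      x ← ⍳1000000         ⍝ a large numeric vector

      ⍝ 'obvious' but slow: apply a dfn to each element separately
      slow ← {⍵+1}¨ x

      ⍝ idiomatic and fast: one flat, vectorised addition
      fast ← x + 1

      slow ≡ fast          ⍝ same result, very different speed
1
```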
00:29:28 [CH]
Aaron?
00:29:28 [AH]
I think… Bob, did you have something you wanted to say, or…?
00:29:31 [BT]
I'm probably going to say what you were going to say, which is, I think clarity is subjective, because if you really know this stuff well — and I've heard Aaron talk about this — it's clear! But if it's something that you've sort of come up with to be efficient, that you're not comfortable with, sort of as Stefan's saying, you may want to, you know, provide more background in case… As Brian points out in the chat, you may be reading it in a year. You may have forgotten some of the tricks you've come up with. So I think clarity is kind of subjective. If you're teaching somebody, then of course you're going to try and make it more clear, because the subject you're dealing with is a person who may not understand it.
00:30:11 [AH]
So, I've got a pretty strong opinion here, I guess, which is that both always and we should be able to have our cake and eat it too. The way I think of it is like there's three points on the triangle. You've got your future self, that's going to have to read some code or that everybody else is going to read the code. The results systemically of what happens when you follow one approach or the other. Then you've got yourself in the moment trying to make something happen. How much clarity or efficiency benefits you? And then you've got the actual language designers, who are working to build these languages and I think people always forget about the language designers or how you build your systems, because clarity isn't sort of an isolated thing, it's something that appears emergent out of a context and efficiency also has the same effect so if you… We want clarity. We need clarity. Clarity is sort of the preeminent point of view. But clarity also has to involve efficiency. Clarity, like there's an element here of clarity of understanding how your code is going to perform. A clarity of understanding what's going to happen with your code. Understanding what the ramifications of this code in the future are. Can you predict what its scaling behavior it's going to be? Can you, you know, understand how much technical debt I'm incurring into the future, perhaps because of performance issues? And, you know, there's an aspect of language design there, where you want to really maximize the property of clarity; be the most clear solution, quote-unquote also happen to be the most fast solution. And you want to try to think about how you can design and approach an architecture system so that these things begin to be the case. And one of the things that makes that difficult is that we often build our software in layers. 
And the problem with layers is that even though we in theory have localized clarity at each of these layers, the moment you insert one of these hard barriers, you remove your ability to change your decision about clarity any further down than where you're at, at that point, and you force decisions of clarity on anybody who's above you in that stack. And so, I've seen massive performance issues crop up systemically in a lot of programming fields, simply because, as the layers build up, every little piece seems clear, but if you look at the whole system it breaks down, because you've introduced just enough combinatorial complexity at each layer that it just builds and builds and builds until you can't actually fix the problem. And that's a big issue in my point of view: if you reach a point where you can't actually make your code efficient when you need to, then you're no longer really clear, in my opinion, 'cause then you're just making tons of work for yourself in the future.
00:32:55 [CH]
Andrew, go ahead!
00:32:56 [AS]
Yeah, so this ties into something that I've discussed about April in some other discussions. Fundamentally, when you're talking about clarity and considering these issues of performance, I see a lot of programming, really the fundament of programming, as being about balance, and creating balance. One of the reasons I implemented April was to allow me to balance my code in a way that I couldn't before. Here's an example: Bloxl uses animation specs to define its animations. When I wish to define a custom animation behavior, if I were using just Common Lisp, I would have two choices. Either I would have to define the custom behavior with many loops inside the animation spec and bulk it out by dozens of lines, or I would have to place that custom behavior somewhere else, putting it in some file collecting all these animation functions. Many of these functions are only used in one animation. So, in that case I would be bulking out my code with all these functions that are only used for one thing, and that goes to what Aaron was saying about creating these layers and layers of complexity that can grow to a point where you can no longer manage performance or manage the development itself.
00:34:18 [AS]
What April allows me to do is to create custom animation functions that only take a line or two of code so those can fit neatly inside those specs. Then that spec is comprehensible to me in a way that it wouldn't be if I were shunting the code elsewhere, or if I were blowing up their length. That kind of balancing act is what I see as really the heart of programming and understanding programming, so if something is not clear then ultimately you will not be able to maintain performance because performance is not just defined in the moment. Performance is assessed over the lifetime of your application and for all the use cases that it needs to fill and you won't have the flexibility to meet those needs if you can't understand your codebase.
00:35:07 [CH]
Yeah, we always hear about "Notation as a Tool of Thought", but you can almost think, on top of that, it is a notation for comprehension, once you really understand the symbol set. It sort of sounds like that's what Andrew is talking about, and I have to completely agree with what Aaron said: I want my cake and I want to eat it too, or whatever the expression is. I want both. I want the most clear and expressive solution to be the fastest solution, and a great example of where array languages currently don't have this is if you do +/⍳ and then a number. That is summing up the numbers from one to whatever number you said. This is going to be a lot slower than the constant-time mathematical formula, the handshake expression N×(N+1)÷2, because instead of building up a sequence and then summing it all up, you're just plugging N into this formula. But in a language like C++, because we have such advanced compilers, compilers can actually see through that, and they can transform that code into a constant-time formula. And because array languages are built out of these primitive operations, I actually think there's a way, way bigger opportunity for array languages to build these kinds of AST transformations [33], which, if you were to build that into an array language, you actually could have both, where, you know, the alternative to writing +/⍳ insert-number is programming that N×(N+1)÷2 yourself. But in my opinion the +/⍳number is way, way more clear because it's, you know, three symbols plus a number. But yeah, I guess that's like a future thing…
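[To make the comparison concrete, here is a rough Python analogue of the two APL spellings Conor mentions; Python is used here purely as an editorial illustration, and the function names are invented. The APL forms are +/⍳n and n×(n+1)÷2.]

```python
def sum_by_iota(n):
    # Analogue of +/⍳n: materialize the sequence 1..n, then reduce it.
    # This does O(n) work and allocates an O(n) sequence.
    return sum(range(1, n + 1))

def sum_by_formula(n):
    # Analogue of the closed-form "handshake" formula n×(n+1)÷2: O(1) work.
    return n * (n + 1) // 2

# Both spellings agree on every input; only their cost differs.
for n in (0, 1, 10, 1000):
    assert sum_by_iota(n) == sum_by_formula(n)
```

[An optimizing implementation, like the C++ compilers Conor describes, could rewrite the first form into the second automatically.]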
00:36:54 [CH]
And I know a lot of that stuff is actually built into the APL language, and you can see it in the RIDE editor whenever something highlights — I think it's pink in the Dracula syntax highlighting — they actually have a syntax highlighting color for idioms. So, you'll be typing, and then all of a sudden your three- or four-character expression will turn some color, and that's the editor letting you know, "Oh, actually we have a built-in special case for that, and it's going to be a lot faster than whatever the naive implementation of that would be." Does anyone have comments or thoughts they want to add to that?
00:37:27 [AS]
I was going to say that's what I see as the ultimate destination for April: a deeper AST transformation for optimizing code, because Lisp gives me more flexibility in dealing with semantic structures than other languages might. Obviously it'll take a lot of work and a long road to get there. And Common Lisp does support SIMD operations, and there's potential for integration with GPU libraries like ArrayFire, but that's all on the horizon. If that were realized, it could be possible to achieve some significant performance gains that way.
00:38:07 [CH]
Aaron?
00:38:08 [AH]
So, I was going to say that part of the Co-dfns project [34] was researching this question, and one really good example of this is loop fusion [35], because a significant amount of research has gone into things like polyhedral loop optimization models and other things like this, to try to prove that certain operations can be fused in a traditional programming language that's imperative and very loopy. But what I found is that the same kind of code, when you express it in APL, is an order of magnitude less complex to prove fusable; it's almost trivial at that point. It can be done either at runtime or at compile time, very efficiently, and, you know, when you have a language that does this sort of stuff, the clear expression turns out to also be the one that you can analyze and make much faster compared to other options.
00:39:00 [AH]
But I also want to highlight that a lot of people focus on the computation side of performance, but on our modern machines, memory performance is a huge deal and a huge bottleneck in a lot of applications. And oftentimes the array representation of your data structures, the clear APL formulation of a problem set in arrays, is many orders of magnitude more efficient in terms of space and memory than your alternative representations, and what you end up with there is not just space savings; because you have reduced memory pressure, you end up having a lot more compute performance over that same operation. I found this in my compiler as well, where we're talking about differences like 1 GB or 1.5 GB versus 64 MB. And when you're talking about those orders of magnitude in terms of the data you're working with, the performance difference is huge, despite the fact that you're actually using the clearest representation you can.
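[Aaron's memory point can be sketched in Python; this is an editorial illustration, not Co-dfns itself. The same values stored as one flat, contiguous array occupy far less memory than a nested object-per-element representation.]

```python
import sys
from array import array

n = 100_000

# Object-per-element representation: one dict per value.
as_objects = [{"value": i} for i in range(n)]

# Flat array representation: one contiguous buffer of 64-bit integers.
as_flat = array("q", range(n))

# Total bytes for each representation (container plus per-element objects).
nested_bytes = sys.getsizeof(as_objects) + sum(sys.getsizeof(d) for d in as_objects)
flat_bytes = sys.getsizeof(as_flat)

# The flat representation is many times smaller, and the reduced memory
# pressure also tends to improve compute performance over the same data.
```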
00:40:00 [CH]
Gitte?
00:40:01 [GC]
Yeah, well, I'm just talking from experience that sometimes you get these performance surprises, and it turns out that maybe if you reverse or rotate the data before you apply exactly the same formula, it's a number of orders of magnitude faster, and so it actually teaches you what goes on under the hood. And you learn how to organize your data. So, in organizing the arrays, the data, for the computations, you can actually achieve both speed and clarity, because you still have the clear computation, but if you organize your data better, you also get the speed.
00:40:55 [CH]
Yeah, and that's one of my favorite things about working with Dyalog APL, in the editor that comes with it, or RIDE (and we saw it, I believe, in Stefan's presentation): the little cmpx, or there's another way to spell it with ]RunTime. Although some languages like Python have a little timeit that you can use in the REPL, I don't think I've seen in any other language a facility where you can just pass it three or four different quoted expressions, and it'll evaluate them and give you this beautiful little histogram. The first time I saw that in a presentation, I was like, "Oh my goodness, that is so useful", because one of the properties of not just APL, but array languages in general, is that you can find six or seven different ways to spell something, and you might have some sense in the back of your mind, like, "Oh, this is using an inner product, or this is using a bunch of what I think are linear-time operations", so you have a sense of what's the fastest. But a lot of the time you need to profile to actually know what is the fastest, and it's so easy to do that in Dyalog APL: you just pass in your different expressions, you get this little thing, and it teaches you very quickly. One time I did it with like eight different expressions, and everything was like 0 or 1%, and then the one that used two different sorts was like, you know, 3000 or something like that. And I was like, "Oh yeah, that makes sense, 'cause sort is slow", and as soon as you got rid of that one, they all became a lot more legible.
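[For readers without Dyalog at hand, here is a hedged Python sketch of the idea behind cmpx / ]RunTime: time several candidate expressions that compute the same result and report how much slower each is than the fastest. The `compare` function and its report format are invented for illustration.]

```python
import timeit

def compare(exprs, number=1000):
    # exprs maps a label to an expression string; each is timed with timeit.
    times = {label: timeit.timeit(expr, number=number)
             for label, expr in exprs.items()}
    fastest = min(times.values())
    # Report each expression as percent extra time over the fastest,
    # loosely mimicking cmpx's relative comparison.
    return {label: round(100 * (t - fastest) / fastest)
            for label, t in times.items()}

result = compare({
    "sum-range": "sum(range(1, 10001))",   # like +/⍳10000
    "formula":   "10000 * 10001 // 2",     # like the closed-form spelling
})
```

[Here the closed-form expression comes out fastest, and the explicit sum shows up as a large percentage of extra time, which is the kind of at-a-glance comparison Conor describes.]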
00:42:20 [CH]
And I guess there's two comments in the chat. We said we'd read from Brian, the individual who asked the original question, and he wrote (and this is, you know, answering the clarity-versus-efficiency question where we sort of started): "Unless there's a huge performance improvement, clarity should always rule. Just remember, you may be looking at this code in a year, or worse, someone less capable than you will need to look at it in a year." And yeah, I can't remember how many times in my career I've been staring at a piece of code a year later, being like, "Who did this?", and then I go to, you know, the Git history, and it was me. And I was like, "Why didn't I write a comment here‽"
00:42:56 [CH]
And then we also have a comment from the CTO of Dyalog Ltd, Morten Kromberg, who writes in the chat, "Dyalog's goal is that the clearest code should be the fastest if this is 'reasonably possible'. As in any system that contains a lot of optimizations, this isn't always true. As previously mentioned, we are keen to hear about it when it isn't. The least intuitive cases are typically operators that take user defined functions, including derived functions as operands such as Each (¨), Rank (⍤), and Stencil (⌺). In these cases, the choices of cases to optimize is rarely obvious…" and then goes on to say, "The Stencil case that Stefan mentions where the default case is optimized, but the tacit form of the same function is not, is due to a decision by Roger Hui to only special case dfns forms because he found them more pleasant."
00:43:43 [CH]
Interesting, so there's a little piece of language lore from Roger Hui — many of our listeners will know he was the original implementer of the J language and then went on to work on Dyalog APL as well. Any follow-up comments? I know we've actually got a question from Ray Polivka in the Q&A that says, "How do you know what is efficient in Dyalog? A new user has no idea what would be efficient, and there hasn't been too much discussion on this from a new user's point of view."
00:44:11 [AH]
Yeah, so I was going to say that usually at every Dyalog user meeting there is some discussion around this, and Dyalog has in its documentation the list of idioms that it recognizes, and Roger published a list of special cases for Stencil [36] that you can look at. So, the first step is to know what things are recognized as idioms and what aren't, and then the next step, if you really want to know what's fast and what's not, is to look at the documentation around the data types that are used in the interpreter. And from there, all you really need — and this is one of the nice things about APL — to get an idea of the performance above that is to recognize how many operations, roughly speaking, a given primitive needs to perform in order to get its work done, and then you can sort of count that and get pretty accurate. I've put a little performance manual into the Co-dfns documentation [37], precisely for this reason, to help give people an intuition about what will and will not be fast, and those rules generally also apply to Dyalog's interpreter, because the language has similar or proportional performance across the board.
00:45:24 [AH]
So, any of the Dyalog talks about specific idioms that have been optimized are a great place to start, and usually there's one of those in every user meeting, or a couple. And then, you know, just some basic reasoning about the groups and classes of your primitives and what their asymptotic complexities are, and that's all you really need.
00:45:46 [CH]
Andrew?
00:45:46 [AS]
Gitte just mentioned something much simpler than that, which is the highlighting of idioms in RIDE. So that's something that you could point out to a new user fairly quickly.
00:46:00 [CH]
Yeah, I would echo what Aaron said, and it's similar to most languages in that respect: you know the time complexity of each of your individual primitives or operations. So, in C++, we have the algorithm header, and there's, you know, sorts that are N log N, there's reductions that are linear, there's scans that are linear, and you're usually just looking across the board at what primitives you are using, and you can get a good estimate. But I mean, I'm always the worst. I always think, especially in C++ ('cause the compiler is doing so much work when you compile with -O2 optimization), "I know what this is. I know what's gonna happen", and then, you know, we've got all this compile-time stuff too, and I compile the program and it literally compiles down to nothing. It just returns the answer, and there's no generated assembly, and I'm like, "Oh, OK, I guess I didn't know what was happening." So yeah, profiling is always, I think, a great place to start.
00:46:52 [CH]
So, I think we've got about 10 minutes left, and I see two questions: one in the queue and then one from the chat. We'll go with the one from the chat earlier, which is from João, who asks, "Any plans on an APL OCR?" And OCR, for those of you that don't know, stands for "Optical Character Recognition". Someone would be able to write APL on paper and get code from it, or, in a potentially different world, you could use it on your phone, sort of the same way that they have Chinese pinyin character recognizers, and you could get the same thing. So, are there? Maybe it already exists, and if not, are there plans to do that in the future? Does anybody know?
00:47:29 [AH]
I dream of it on a regular basis. But I've got too many projects to do already.
00:47:35 [GC]
You might be one of the only ones whose writing could be read by one of these two guys.
00:47:43 [CH]
You'd be surprised! I know Google had an OCR challenge (I think it was like one of those alpha challenges) where you would just go to some website, and they would tell you to draw a duck and a house. And people would do terrible versions of it, but they used that as a training set, and, like, you would draw two lines as the roof, and it would guess "house" or something. And I was like, "Well, I wasn't even done!" So, you'd be surprised at how good these systems can be. Adám, what were you going to say?
00:48:10 [AB]
Well, I'm trying to remember the name of the project where there's actually somebody who is working on a handwriting interface for an APL-like language, with the eventual goal of it also working for say, Dyalog APL, and that seems to be really neat. See if I can figure out where it is.
00:48:29 [CH]
And I see someone raising their hand, and they've also just commented in the chat. Raghu says "A New Kind of Paper" by — I'm not going to pronounce this correctly — Milan Lajtoš [38].
00:48:42 [AB]
Yeah, that's the one, that's the one.
00:48:44 [CH]
So, this is, like, an actual new kind of digital paper?
00:48:47 [AH]
Is this a digital paper technology?
00:48:49 [AB]
No, no, the idea is just that you take a device that has handwriting capability, like a tablet and stylus or something, and then there's an application that looks at what you're writing. And actually, the target set of characters for APL is relatively small, so doing OCR is not very hard (the problem is when you have a wide range of characters), and that apparently already works.
00:49:14 [AH]
So, it's a very doable problem, it's just you know…
00:49:19 [CH]
Yeah, off the top of my head, most of these symbols are pretty distinct, and even if it's similar to, like, a Japanese or Chinese or Korean input method: a lot of the time, when you're learning the language, they have the same thing, where you draw the character because you don't know how to type it, and it'll give you a couple of different options. So, I'm trying to think: the logical And (∧) or Or (∨), and the Union (∪) or Intersection (∩) ones in APL, those are slightly similar if you don't get the point at the top of the logical And right. But even if the program showed you both options, you could just tap on the right one with your hand. It would be a pretty interesting tool.
00:50:02 [AH]
If anybody wants to take this on, I really dream of having a portable projector REPL environment where I can carry it along and set it down and it projects a REPL session onto a whiteboard and so then I can write up my APL and it evaluates on the whiteboard while I'm writing stuff out.
00:50:18 [CH]
Oh wow.
00:50:19 [AH]
That's what I really want.
00:50:20 [CH]
It sounds like Aaron is offering mentorship.
00:50:22 [CH]
If there's someone that wants to do all the heavy lifting, Aaron will gladly mentor you to help his dream become a reality.
00:50:31 [AS]
That reminds me that something I envisioned at one point was using tactile controllers, like on a video game pad, to input letters for a programming language, and APL would obviously be one of the best candidates because you have all these one-character functions and operators — or anything that has a really strong auto-complete system — where you could have a focus on navigation and moving things around, and then to type you would lean on the auto-recognition and auto-completion as much as possible.
00:51:01 [CH]
Awesome. I think, unless there are any last comments on this, we'll maybe move to what looks like could be our last question. And that is from, once again, Adnilson Delgado, who asks, "As a data analyst/engineer, how can APL help me with my job?" Gitte?
00:51:20 [GC]
Yeah, I have a bit on that. I mean, the way you work with data in APL allows you to explore the data and actually visually inspect it, or at least a bit of it if it's a big data set. Just half a year ago or so, Richard and Rodrigo and I visited a science fair for solutions to visualize climate projects, and we worked with a bunch of guys who had a lot of data that they wanted to analyze, and they all used R, but the biggest problem was to organize the data for the analysis they wanted us to run, and every time we figured out a new strategy to look at the data, they had to go away overnight and reorganize it. Some guy had been working all night writing a program to reorganize the data. That's the kind of thing that's really easy to do with APL. You get your data in, you inspect it. Are there any outliers? You do the summary statistics, and then you start organizing the data, and — if you use R, or other kinds of statistical packages or business analysis tools or stuff like that — you can get your data cleaned and nicely organized for these tools. But obviously you can also write solutions to problems that nobody has solved before. Just start with looking at your data.
00:53:06 [CH]
Anyone want to add to that?
00:53:07 [AH]
Yeah, I would just reiterate that I think APL enables exploring your data more quickly and more intuitively than just about any other language that I've seen. So even if you want to link in and use some specialized libraries to do certain things that you're comfortable with, doing it with APL as the sort of glue makes a lot of sense, just because of how easy it is to integrate and clean up your data and manage your data and work with it and store it and serialize it. All of that cleanup and management, I think, is made a lot easier in APL, and it can be very useful as a tool to make everything link together a lot more nicely.
00:53:49 [RP]
Yeah, I mean, even if, at the start, you're currently using different packages that have different purposes, and you're spending a fair amount of effort munging the data to go from one output into the next input, you can start by doing that type of manipulation fairly straightforwardly in APL, but eventually you might find it even better to just implement all that functionality straight in APL, instead of having to rely on all the packages — things like dplyr or whatever. Actually, that's a series of functions that are similar to some APL primitives. A lot of those data manipulation packages, Pandas and NumPy and that type of stuff, are always compared to APL because of the similarities between what those packages implement and what's already available out of the box in APL.
00:54:38 [BT]
I think one of the things that APL does, if you're a programmer, if you've already had a programming language, is that its REPL, its Read-Eval-Print Loop, allows you to play with it quite a bit, and I think that really opens up your understanding of the problem you're working with, and in a lot of cases that understanding is what an engineer is really trying to get a handle on. In the end, they may want to have something that's optimized and does the job, but if it's something new, being able to go in and interact with the data and see what it's doing, depending on the scale (maybe at a smaller scale; I think Stefan did that with some of his genetic stuff in his presentation): you start small, work around to understand it, and then expand. And APL is particularly good at that. It just gives you a much better sense of the problem you're working with.
00:55:30 [CH]
Andrew, I think you had your hand up for a second?
00:55:32 [AS]
Yeah, so that's exactly what I experienced working on Bloxl. I would not have been able to do the things I did with any other approach, and that was just another form of data munging, because I need to mash up these pixels and animations in different ways. I've live-coded things at events that would have taken me hours to write if I was writing loops in Common Lisp or another language. It's really about shrinking the feedback loop. That's what I've worked to do in a lot of my projects: shrink the feedback loop when someone is working on something, because what kills productivity more than anything else is when it takes a long time for you to get your results back. I mean, that's why we all like interactive environments, REPLs, instead of having to compile everything and wait for it and then test it. Shrinking the feedback loop, to me, is the key to reducing the time from ideation to realization. So that's the biggest win you would get for any kind of data processing.
00:56:38 [CH]
Aaron?
00:56:39 [AH]
Yeah, another way I think we can think about shrinking the feedback loops is that, as you get better with APL, the amount of extra tooling that you have to depend on tends to shrink. It can get pretty complicated trying to manage a complex software stack: pulling things from here and there, tracking which documentation is up to date with which versions, making sure you're compatible, and why isn't your build system working right; there's a whole bunch of things that you end up having to manage. APL often allows you to really close that up, and you don't have to have nearly as complex a software stack to get the same work done. I've found that to be a very big bonus: oftentimes I can just implement something in APL and simplify my entire software architecture, so that I don't have this extra complexity, and it saves me a lot of brain cycles over time, because I'm not worrying about all of this software architecture stuff and I'm focusing more on solutions to my problems and how I'm thinking about and understanding the domain.
00:57:36 [CH]
And yeah, I've just had — and maybe this is where we'll start to wind down, 'cause we're just at time — I've just had an epiphany, literally in the last five minutes, listening to this last answer. For those that sort of follow different programming languages, you always hear about people bouncing off of certain languages like Haskell or APL. Some people immediately fall in love with it, like myself, but for other people it takes a couple of tries, because they find either the syntax or something about the language a bit of a barrier to entry. And I've always wondered, "Why is it that I immediately fell in love with the language, and why doesn't everyone?" It ties into the fact that you can think of APL as, like, the OG Excel spreadsheet. APL existed way before Excel spreadsheets did — what was the… VisiCalc was the first one, on, like, the Apple II, and that was a huge, you know, popular piece of software. And I realized that, like, I love Excel; I've always loved Excel. At one point I had three different Excel versions: 2003, 2007, and 2017 or something — or 13; I can't… Whichever one, second edition of the ribbon.
00:58:52 [CH]
And I think maybe that has something to do with it. I don't really know why I love Excel so much. I did a lot of Visual Basic for Applications programming, which is pretty awful, behind the beautiful interface of looking at a grid of data, and I think, with my love for that model, when I discovered APL and, like, the outer product, I was like, "Oh my goodness, this is like the best version of Excel! It's interactive. It's more dynamic. You know, the rank polymorphism is just absolutely beautiful." And this sort of ties back to the original question that Adnilson asked, which is, you know, as a data analyst or engineer, how does this help you? It's, like, the most powerful system when your data comes in the form of tables or matrices. It's the perfect tool, and I think what a lot of people don't realize is that you can express all data in the form of tables and matrices. It just might not come in that form, but you can find a way to represent graphs as adjacency lists, et cetera, et cetera. So that's my little monologue: maybe you need to learn to love Excel first, and then you'll immediately fall in love with APL — or spend a couple of months learning the power of Excel, and then APL will just be like, "Oh my God, this is so much better!" Rich?
01:00:02 [RP]
I was going to say one thing: if you really want to take this to that degree, it is possible, through OLE and COM, to plug Dyalog into Excel, and you can actually write Excel cell formulas in APL, so that when you run them on a grid of data, it's your APL code actually running instead of VBA. That is possible if you really need it.
01:00:26 [CH]
And Andrew, you got your hand up?
01:00:28 [AS]
You watched the Seed presentation, right?
01:00:30 [CH]
Uh, I'm actually… I think I've watched it, but at this point it must be like two or three years ago, 'cause that's when I first stumbled across your stuff. Is the Seed presentation something I should go revisit? The thing is, I might not have been at the right moment to really fully appreciate it, 'cause at one point I thought…
01:00:44 [AS]
Yeah, starting from this topic, certainly.
01:00:48 [CH]
And João just said that SPJ, which is the initials for Simon Peyton Jones, one of the key people behind Haskell, said similar stuff about Excel and FP in general. Yeah, we can find a link: there are a couple of talks given by Simon Peyton Jones on sort of functional programming in Excel, and the most recent one, I think, is from when Excel 2019 or 2020 actually introduced lambda expressions and functional programming features. Simon Peyton Jones said that his goal was for Excel to be Turing complete, which it now is. It looks horrible, and he even said so himself: "Is this really the way you want functional programming? Eh, maybe not." But it's in Excel now.
01:01:37 [CH]
So, I think with that, I will say, first of all, thank you to all our panelists and our presenters, and Aaron and Rodrigo for joining as well, and even more so thank you to Gitte, Morten, Fiona, Adám, Rich (I assume you were involved in helping organize), and just all of the folks behind organizing APL Seeds. I absolutely love the fact that there's not just one APL conference; now there are two APL conferences, and I look forward to a future where there is a double-digit number of APL conferences throughout the year, because I absolutely love getting to come to these and see different presentations by folks. So, thank you for putting all this work into this conference today, and for letting us steal an hour of it to do a live recording. And I think with that we will say "happy array programming".
01:02:25 [all]
Happy array programming!