Transcript
Transcript prepared by
Adám Brudzewsky, Bob Therriault, and Sanjay Cherian
00:00:00 [Aaron Hsu]
Then my flexibility to just change representations on the fly whenever I feel like it is so trivial that I can tackle problems in almost exactly the way that I want to express them pretty much all the time, which I feel is worth giving up the extra types of guarantees that I get from these other languages.
00:00:18 [Music]
00:00:28 [Conor Hoekstra]
Welcome to episode 100 of ArrayCast. I'm your host, Conor. And today with us, we have a special guest for episode 100, as well as our four panelists who we will introduce first. We're going to go around. Well, first, we'll start with Bob, then we'll go to Stephen, then to Adám, and then we'll finish with Marshall.
00:00:46 [Bob Therriault]
I'm Bob Therriault, and this is episode 100. And I'm a J enthusiast.
00:00:51 [Stephen Taylor]
I'm Stephen, APL, and q, and I've been allowed to stay up late for this.
00:00:56 [Adám Brudzewsky]
I'm Adám Brudzewsky, and this is episode 100, our 100th episode. I'm an APLer.
00:01:03 [Marshall Lochbaum]
I'm Marshall Lochbaum. Disregard what anyone else tells you. This is episode 35. I started out with J. I worked at Dyalog for a bit, and then I made BQN.
00:01:13 [CH]
I do not understand that, but okay. As mentioned before, my name is Conor, host of the podcast, fan of all the Array languages. And with those brief introductions out of the way, I believe we have a few announcements. So we have four from Adám. We'll start with that. Then we'll go to Bob, and then we'll finish with one announcement from me. And then we will get to introducing our returning guest, our special guest for episode 100.
00:01:35 [AB]
So the BAA, British APL Association, [01] they have their online meetup every other week. You should really try it out if you haven't already, at least if the time zone fits you; it's at four o'clock in the afternoon British time. But on March the 19th, they have a real-life meeting in London. So if that's something you'd be interested in, then check out the show notes. And there's another real-life meeting that I mentioned in a previous episode, which is the APL Germany meeting, which is going to be in Berlin on the 27th and 28th of March. And the schedule for the content is now up, so check out the show notes for a link to that. Something else we mentioned before was the Array Portal that Ed Gottsman had put up for J content. And the first pass of adding APL content to that is now live. It's still very rough around the edges, very much like an early access, but you can try out arrayportal.com and potentially give us some feedback. I'm not even sure there is a feedback link on the page. I don't think so, but you can send it to us and we'll forward it to him. And finally, Dyalog is trying its hand at Google Summer of Code this year. And if that's something that might interest you, then you can check out gsoc.Dyalog.com.
00:03:06 [CH]
And I know there's no feedback link at the moment. There might be in the future, but I can provide some real-time feedback, for some definition of the word real-time, because this is going to get released in a few days. And the feedback is: it's fantastic. It is probably the best searching facility across all this stuff. Like, the APL wiki is fantastic if what you are looking for is on the wiki. But this is like an aggregator of everything. I actually haven't really played around with it much. I'm not sure if J upgraded their search mechanism since the, like, revamp of the website, but I've always had trouble finding things; I don't think the way it fuzzy matches handles, like, separate keywords. Anyways, the way that it works on ArrayPortal, I've been able to, like, track down basically very quickly what I've been looking for. So it's a fantastic resource. Even before they added the APL content, I think, the J Software site had a ton of J and APL related content. So if it was, you know, authored by Ken Iverson or Roger Hui, even before, you know, J was a thing, it might be on that site anyway. So I can highly recommend heading over there if you're looking for something. And Bob's going to add something, and then he's got an announcement as well.
00:04:12 [BT]
You're absolutely right. I'm going to add something. The ArrayPortal is an outgrowth of Ed's J viewer, which you can actually still download on the site, specifically for J. And that was a result of the fact that we didn't upgrade the search engine that Chris Burke had put together that's sort of on board the wiki. Instead, Ed created the J viewer, which did all this fancy stuff with SQL. And he substituted J primitives with keywords and things like that, so you can do searches across things. So you can use the J viewer within the wiki. It's a couple of gigabytes; you have to download it basically as an SQL database. But that's where it came from. He evolved that into the J portal. So what you're seeing with the J portal is the most advanced way of looking at J as well.
00:05:02 [CH]
Sorry, when you say J portal, do you mean ArrayPortal or is J portal a different thing?
00:05:05 [BT]
Sorry, the ArrayPortal.
00:05:08 [CH]
ArrayPortal. Okay, gotcha.
00:05:11 [BT]
I think he first started calling it the J portal before the ArrayPortal. But as soon as he got the J portal working, he blew it up. He said, "I want to do them all." And that will include things like I think his plan is eventually to do BQN and Uiua and anybody else he can agglomerate into the whole thing, which is really cool. And knowing Ed, he'll probably end up doing it because he's pretty much a wizard at this stuff.
00:05:29 [AB]
I mean, it's already pretty impressive. It indexes 162,554 items at the moment, at least as of when I did the last page load. I mean, it's amazing that there's even that much content. And that's just APL and J, and just a rough first pass.
00:05:45 [BT]
Yeah, and the thing that blew me away, and I think blew a lot of people in the J community away if they've worked with the J viewer or now the ArrayPortal, is that it references all the old forums. All the J forums are indexed. So you can go back and find things mentioned within the forums going back to, I think it's like 1998 or something, which is amazing because you can see people discussing things that came up in J later on. So it's just an incredible resource. It really is. It's amazing. And my announcement is also J related, not a big surprise. J 9.6 is out and well worth looking at. We'll have Henry on, I would say, in a couple of episodes to talk specifically about it. He's got some interesting things in there, things I hadn't realized he was putting in. He's got a new primitive for forcing values into different types. So you can start with a floating point and force it to be an integer, or you can take an integer and force it to be a floating point of different lengths. It's quite impressive, but we'll get Henry on and he'll talk about that in more depth in a future episode.
00:06:47 [CH]
Awesome. And with those announcements out of the way, I just have one. It's more of a thank you. I forgot to ask on episode 99 if anybody had a copy, preferably a PDF, of Ken Iverson's 1991 book, Tangible Math, which is similar to, or maybe not so similar to, Elementary Functions, a book he published in the 1960s or 70s that was meant to be a pedagogical tool for, you know, how to use or get started with APL via elementary functions. But this one was originally done for J, although I think someone ported it to APL later. Anyways, I went and asked on all of the socials and two different people, one on Blue--
00:07:29 [ML]
So you wanted a digital Tangible Math.
00:07:31 [CH]
I wanted a digital Tangible Math. I've just realized how odd it is now that I'm asking that. But yes, a digital Tangible Math. And I had looked, I think I looked on the Array Portal. It's on the list of creative commons articles and books on the J Software site, but some of them aren't linked to. And obviously the internet responded, although I didn't think it would happen. Argblarg on Blue Sky found it on the Internet Archive. And then Oz Coke on Twitter found it as well. So thank you, thank you. At some point, I'll be working through this. And I like the idea of solving all the problems in all the different array languages, because it's not a huge book. I think it's only like 100 pages or less or something like that. So it's meant to be like a kind of getting started. And anyways, that is the last of the announcements; there's a link in the show notes if you're interested in this book, as well as everything else we mentioned up until now. And with that out of the way, we can finally introduce our special guest of episode 100, or maybe 35. We'll get Marshall to explain that later. It is Aaron Hsu, returning guest from episode 19. And I realized I looked up episode 19, but I didn't look up the numbering of the Iverson College episodes, because you were, I believe, definitely a part of both of those. We're going to go ahead and say 87 and 89. But if you don't know, Aaron is an employee of Dyalog Limited and probably most well known for all of his amazing talks, many of which are on the Co-Dfns parallel GPU compiler, and an amazing person to chat with. We've actually got an upcoming interview with him in March, aka a month from now, on Tacit Talk, the rival podcast. And I've also had him on ADSP; a pleasure always to talk to. And I think we have a ton of different questions that we're all just going to be peppering you with, Aaron. So yes, welcome back to this special episode 100.
00:09:30 [AH]
Yeah, thanks. Thanks for having me. I feel pretty honored to be on the 100th episode. So, you know, it's pretty fun.
00:09:36 [CH]
That is what we hope for our 100th episode guests is that they're just bestowed with honor.
00:09:42 [AH]
I'm just weeping with gratitude.
00:09:46 [CH]
All right. I mean, so we were chatting before we got started on the direction we wanted to take this. And it just turned out that we all individually had different questions we wanted to ask Aaron. So the first question is, which question should we ask first? I mean, I will pause. I can tell Marshall might be the first one to go or maybe he's just looking to explain the number 35.
00:10:09 [ML]
Well, yeah, well, I can't explain that. I mean, you know, everybody packs their introductions with all these true things and it just gets a little monotonous, just true fact followed by true fact followed by true fact. So sometimes you have to mix it up. And I mean, yeah, there's only one alternative to true.
00:10:27 [CH]
Yeah, you're just keeping it interesting.
00:10:28 [AH]
You've just got to keep people paying attention. You draw the listener in. Now they've got to, you know, really...
00:10:33 [CH]
Is there actually only one alternative to true in like the quantum world? Like I know Microsoft just released that.
00:10:38 [ML]
I guess there's more than one alternative to 100.
00:10:41 [CH]
There's a lot of yeah, that's true. In fact, there's infinite alternatives.
00:10:43 [BT]
I thought Marshall was going to take it and say he was going to number from the first one that he came on as a panelist. But then I thought, 35? No, he's done way more than that.
00:10:53 [ML]
I considered that. But it's well beyond my memory capabilities.
00:10:58 [CH]
He would have chosen a higher number. Now, I thought the same thing, but I was like, no, no, no, no. We brought him on way before that, whatever, 65 episodes ago.
00:11:05 [ML]
Yeah, well, I made sure to choose a number that wasn't plausible for that.
00:11:09 [AH]
With regards to true and false, the real question is, if we're all array programmers, does that make us all classicists, which means we all accept the excluded middle?
00:11:17 [CH]
What? What did you just say?
00:11:19 [ML]
I didn't try to give an excluded middle or a middle fat middle.
00:11:23 [AB]
I mean, Marshall, you should know better. BQN extends Boolean logic to all values between zero and one, even beyond that, doesn't it? [02]
00:11:32 [ML]
But then it's not Boolean logic. It's just...
00:11:35 [AB]
No, it's extended Boolean logic.
00:11:37 [ML]
Primitives. It's the logic primitives, but it's not Boolean logic.
00:11:40 [CH]
So you'll have to send a link afterwards because I have no idea what you guys are talking about. I mean, I recognize some of the terms, but...
00:11:45 [AH]
In certain logics, you accept true and false as being the only acceptable values. And in some logics, you don't take that axiomatically.
00:11:55 [ML]
Yeah, and so BQN's and and or apply to any real numbers; like, and is multiplication. And you might say, yeah, that is the probability of two events both happening if they have uncorrelated probabilities. But I don't really consider that to be a logical... It's just an extension of the and function.
00:12:15 [CH]
Is this a one-to-one mapping with the programming language concept of truthiness and falsiness or whatever, where you can have a truthy value?
00:12:23 [ML]
No, this is separate. And Booleans are still just zero and one. It's just that the and function is extended to work on any numbers. Right.
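For reference, a minimal BQN sketch of the extension Marshall is describing: on all numbers, And is a×b, Or is (a+b)-a×b, and Not is 1-x, which coincide with ordinary logic on the Booleans 0 and 1.

    0.5 ∧ 0.5   # 0.25: for independent events, the chance both happen
    0.5 ∨ 0.5   # 0.75: the chance that at least one happens
    0 ∧ 1       # 0: on Booleans this is just logical and
    ¬ 0.25      # 0.75: Not is 1-x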
00:12:28 [AH]
No, I was making an appeal to, like, constructivist logics and things. So ignore me.
00:12:33 [ML]
Yeah, that's a whole different matter.
00:12:35 [AH]
Ignore me.
00:12:37 [CH]
All right. Well, seeing as we have you on the mic, Marshall, maybe we'll start with your question that was asked beforehand and is to be re-asked now, but was not answered. So it was asked and not answered; soon, hopefully, to be answered. All right.
00:12:49 [ML]
Yeah. So my question is something that has confused me about Co-Dfns. And so the context for this question is, and I think this is still true as of now, Co-Dfns can't compile itself because it uses some functions like key, I think is the big one that... Maybe key is supported now, but it...
00:13:07 [AH]
We have an implementation of key in there now.
00:13:10 [ML]
But I mean, there's some functionality that it uses that it doesn't support. Yeah. So you can't just load up Co-Dfns and run, tell Co-Dfns, "Hey, compile Co-Dfns." Because it'll...
00:13:22 [AH]
Right. Right. Particularly on the edges, the boundary places are the worst offenders in that respect.
00:13:28 [ML]
However, Aaron's paper that was, I guess, the first big result on Co-Dfns, that had the subset of the compiler, which was this 17 line thing, which also can't be compiled. That gave some GPU measurements. So it said, "How fast is this piece of code when you run it on the CPU in Dyalog? And how fast is it when you run it in the GPU?" Right. So my question is, what exactly is being run on the GPU? Because how do you run APL code on the GPU other than through Co-Dfns?
00:14:03 [AH]
Right. Right. Exactly. So those 17 lines now, I think we might have a version of the compiler that can compile those 17 lines, but it might be the version of the compiler that's too slow. So there's a version of the compiler that's a little bit faster, which I think you could still get those 17 lines to run on if you did a little bit of rewriting around certain language features. The core algorithms would probably run fine. But at the time of the thesis, the strategy we took was essentially to think about what compiler transformations are reasonable to expect a compiler to be able to do at a very naive level. So the most sort of basic stuff: we assume we have a reasonable implementation of the runtime primitives. We assume that we have a tiny bit of function fusion, a tiny bit of things like inlining, certain types of function inlining, stuff like that. And then we essentially did the compilation by hand. So we took the 17 lines and then we sort of compiled them by hand into what the runtime would look like. And that's what we tested.
00:15:18 [ML]
And so you had, like for the primitives that weren't supported at that time, you had some sort of special case.
00:15:23 [AH]
Yeah. So the big one would have been key, like you point out, is we had an implementation of key, which was aware of the -- actually, was it key? Did we take that one out? Something -- the key-ish type things had to be done with, like, sorting primitives and things like that. So we had to have an implementation of that runtime, which turns into sorts. Okay. Yeah. So grade up and grade down. That's basically the underlying primitives that you'll see in the C code that was used. And we used a binary search algorithm, which I think you actually might have sent me some stuff around that time, which was --
00:16:09 [ML]
Yeah, I think I sent you some papers on batched binary search.
00:16:13 [AH]
So there's a -- we use a type of that binary search in the code for indexOf. Yeah. So that's what we're doing with that. And most of the other ones are pretty close to straight out copies and translations directly to the underlying runtime. But a few of those primitives, we had to come up with a reasonable implementation of that. And that's what we tested on the ArrayFire back end at the time, which was, I think, one of the version 3.2s or 3.3s or something like that.
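As a rough illustration of that kind of lowering, and emphatically not the actual Co-Dfns runtime code (the function name and formulation here are invented for the example), a key-like grouping can be reduced to grade, selection, and comparison primitives in BQN:

    # Hypothetical sketch: group the values in v by the keys in k,
    # using only grade, selection, and comparison, in the spirit of
    # lowering a "key" operation to sorting primitives.
    GroupBySort ← {k 𝕊 v:
      p ← ⍋ k                  # grade up: permutation that sorts the keys
      sk ← p ⊏ k               # keys in sorted order
      sv ← p ⊏ v               # values carried along in the same order
      b ← 1 ∾ (1↓sk) ≠ ¯1↓sk   # mark the start of each run of equal keys
      ((+`b)-1) ⊔ sv           # collect each run into its own group
    }
    2‿1‿2‿1 GroupBySort 10‿20‿30‿40   # ⟨ 20‿40  10‿30 ⟩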
00:16:45 [ML]
Yeah. Okay. That makes sense.
00:16:46 [AH]
Yeah. That's a fairly typical strategy when you're testing the performance of new sorts of language constructs: you'll often test by doing the hand compilation first and then work on implementing it a little bit later, because you want to prototype the performance. And that thesis was already enough work as it was. Trying to mechanize the compilation was way out of scope. So we just did it by hand.
00:17:15 [ML]
At least what I remember is the paper just kind of says, well, yeah, we ran this, after presenting the whole source code of the compiler -- the APL source code. It's like, we ran this on the CPU and got our measurements; we ran it on the GPU and got our measurements. So I couldn't find anything that really hinted at it.
00:17:32 [AH]
Oh, I'll have to go back and check that. Yeah, because I thought we had mentioned that it was hand compiled.
00:17:35 [ML]
Very possible. I missed it.
00:17:40 [AH]
Yeah. We might have just had like one sentence saying we hand compiled this down to C++ or C. Yeah. Now, that's a bit of an FAQ around the thesis, so I wonder if I have the response to that up somewhere on Co-Dfns. We really should add that.
00:17:51 [ML]
Yeah, that would be good.
00:17:52 [BT]
So my question, and I had -- we were talking to Aaron before we started up. I mentioned that we'd had Brandon Wilson on recently, a couple episodes ago. And Brandon has been working a lot with Aaron. And Aaron says, oh, well, that's interesting. I haven't kept up. I didn't listen to that episode. So I thought, that's good. That's good. Because now I'm going to ask you --
00:18:12 [AH]
Yeah, now it's like an interrogation.
00:18:14 [BT]
Well, it's actually worse than an interrogation. Actually, that's exactly the point: do your stories line up? And I'm going to take it one step further. What was it like working with Brandon?
00:18:16 [AH]
Well, so, like, where do you want me to begin?
00:18:32 [BT]
Well, actually, okay, that's a good point. When he first approached you, he talks about that process. How did you see it from your side?
00:18:36 [AH]
So I'm always open to having conversations with people, or didactic workshop-type walkthroughs, or anything to introduce people to how things go. But he really wanted something more. And so he came in, he was like, look, I want to do this thing. Well, let's go for it. And I was like, all right. How much time do I have? Let's see. We can go ahead and run with it. And unlike a lot of people who are very curious but kind of just want to get the flavor, Brandon likes to go really deep. He likes to really simmer in an idea for a while and understand what it feels like to live in that world. So when he started getting into this, we discussed the approach. And he seemed really gung-ho about getting into that aspect, getting into the headspace of all this stuff. And so I recommended to him sort of the most painful approach that you could possibly take to doing this, which was: start with an empty project of your own design and then start trying to think about this from the ground up on a blank slate. And that's the one that really helps you see things the most, because you're not getting hints from everybody else as you're exploring it, because it's a greenfield project. And he took to it pretty well. And he was very disciplined about it and really took all of the different principles and ideas and said, "Let's run with it and let's do it." And then I think he would say the same thing: we probably spent the first few weeks, maybe even a month, just staring at a blank screen as he thought through what it meant to think like this, and rewrote and rewrote and rewrote until the ideas started to click and he started to predict. And then there was a turning point for us where he came to one of the Dyalog user meetings and we were discussing some stuff and we didn't have to have the code in front of us. We could just sit there and we could start talking about it, and we started rewriting lines in our heads. And that's when I was like, "All right, he's got it. We're good. I think he's figured it out." And I think it was also very satisfying for him at that point to be able to do that kind of thing. And it was a lot of fun. So I think Brandon is a lot of fun to work with. He's got a lot of curiosity. He's got a lot of ability to explore all these ideas. And he's also very well read in various dimensions, so he has a lot of background experience he can bring to the table, which is really nice. And some people, I've noticed, won't say what they think all the time; they'll sometimes want to measure their responses. But I like to work with people who will say, "Hey, I think it should be this way or that way," and go for it. And Brandon is willing to do something like that, which I appreciate a lot. I think it works really well. So he'll come up with ideas and say, "I like this better." And I'm like, "Okay, I can see it. Yeah." So it's a lot of fun. Now, he does sometimes take it in directions I never would have expected. So one of the most fun things was, we were working across the world and both of us were working on somewhat limited internet connections. And our approach to that was he set up his own TTY interface that I could essentially shell into, and then we opened up the Dyalog Editor. But because it was so costly to load the editor up and down, I had toyed around with making a line editor myself called ALE, which is sort of the ED line editing functions implemented in APL.
And so for those first three or four months, we only used ALE to do line editing on a TTY session for all of our editing. And it was very interesting. I didn't actually suggest that. I logged in one day and he's running the editor. I'm like, "Wait a second. What are you doing? I don't think this thing is ready for production use yet." And he was like, "That's all right. We'll go for it." So we ended up using that and making some bug fixes to it along the way, which was quite the experience. But it did make collaboration significantly easier.
00:23:21 [BT]
One of the things I think he mentioned was that there were parts of the code that he referred to as almost like getting a joke. He would read the code and there'd be part of it that would diverge, would go a different direction than anything else in the code. And he'd look at it and go, "What are you doing here?" And you'd go, "Oh, well, yeah, we're not putting that in yet. We're just going to bridge that. That's just a bridge." And he said at a certain point, those points just sort of jump out at you, almost like punchlines: "Oh, this is what he's doing here." And he said a couple of times he looked at the code and just started laughing because he realized what had gone on.
00:23:40 [AH]
Yeah. Yeah. Once you learn to read it... I like to say sometimes that I have sort of a novelist's approach to writing code, and somewhat of a gardener's approach, if you're familiar with novel writing, like Brandon Sanderson's concept of gardeners versus architects. And that means that occasionally there's the equivalent of a scribbled handwritten sticky note in the code somewhere. And sometimes, when you learn to read it, it jumps out really quite strongly at you.
00:24:18 [BT]
It's interesting you bring up prose because honestly, when I look at something like Co-Dfns, or a lot of Arthur Whitney's writing, it feels much more like poetry than prose. It's so condensed.
00:24:30 [AH]
But yeah, I don't think I can call my code poetry yet. I would say it's more like short stories. We're not quite at the epic fantasy Lord of the Rings level, but I do think we're kind of past what I think of as poetry. The rhythms aren't quite there for me in terms of the flavors, but I do think they read more like paragraphs, so I think a short story kind of works within that. Maybe a little bit more than a short story; a novella would be something I could go for. I hold poetry, maybe, to a higher standard, and I don't think I could claim that I'm there yet on that one. I would like to be, you know, but from a practical standpoint too, poetry is very hard to get right. And poetry is often capturing such a distinct point in space that it's very hard to change it. And I think that's another element: I'm very much exploring ideas as I go through and evolving the story. And I think if I tried to make it too poetic, I would get stuck in a space where I'm capturing this really neat, essential idea, but I can't go anywhere from there, because I'm breaking the poetry if I do. Whereas with a story, I can rewrite it into a new world a little bit easier for myself.
00:26:04 [BT]
Hmm. I'm just thinking that to me, it's just a poem that's in process as opposed to one that's completed. I think of poetry as being sort of a condensed form: every part of it has a precise meaning, but that doesn't mean it's a completed poem at that point. Whereas I always think with prose there's a bit more flexibility; if the word isn't quite right, the next word or the next... it involves more construction on the part of the reader to fill in all the gaps.
00:26:38 [AH]
I think with poetry, you have to be really essentialist with your words and you have to get just the right word. And this is one of the things Brandon and I have discussed a few times. We had a discussion recently about whether or not to put bracket indexing statements just adjacent to one another, or wrap those up in a stranded assignment with a little at function as our modifier. And, you know, I was kind of arguing for a slightly more Baroque approach, which is just the stranded statements, putting them together, because even though it doesn't condense the ideas together, it's a little more direct and a little more simple. And I stray towards that a little bit more. And so arguably that's perhaps less poetic, but a little bit more brutalist.
00:27:24 [ML]
If I can say something about the relationship, one interesting thing that happens in programming is that almost the more precise an idea is, the more fundamental it is, the more ways there are to get to it. I don't really know the first thing about poetry, so maybe this isn't true in poetry. But if what you're trying to express is a clean idea... you might say lexical scoping is something like this. All that lexical scoping is really saying is that the layout of the source code corresponds to the layout of the variable accesses when it's running. There are a few little details, but that's a pretty clean idea. And then, as it turns out, there are a lot of different ways to express it. So.
00:28:23 [AH]
Yeah, I think another approach to this is I'll let the reader decide whether it's poetry or prose, you know, that's the ultimate artist's response, right? Is, I don't know what it is. You guys decide. Although that's always rubbed me the wrong way a little bit.
00:28:39 [ML]
Clearly Aaron Hsu intended this part as a symbol.
00:28:42 [ST]
I think you can hope for some objective measures here. Paul Berry [03] joined Ken Iverson's team fairly early on to write documentation. He had a background in psychology, and he talked once about how working with Ken changed his ideas about writing. As a psychologist, he knew that you need to tell people something several times for them to get it, preferably in several different ways. And Ken famously, when he was writing his little book at Harvard, labored on it until he couldn't take anything else away from it. And that's sort of the mathematician's criterion: you say things once, one time, one place, exactly the right way. And if you were to repeat yourself or say the same thing a different way, you risk confusing the reader, who now thinks he's trying to understand a different thing.
00:29:38 [AH]
Although in my experience it's really about maybe 1.25 to 1.5 times in a math text: you lead in with an intuition in the prose, and then you state exactly what you mean.
00:29:50 [ML]
Yeah, well, and the equations and the text usually kind of cover for each other.
00:29:54 [ST]
I'll take that. I've been accused of writing poetry. And the guy who accused me was, I think, a magazine editor, certainly a journalist. And he meant something very specific. He meant that what I'd written was too dense and elusive. I'd gone for saying something once, right? And he said this--
00:30:20 [AH]
By that metric, then you could definitely say I'm in the poetry territory.
00:30:23 [ST]
Yeah, and I think that's what we try to do in the array languages. And it's a style, or a taste, we probably inherited from Ken Iverson and his crew. And you don't have to write code that way. But when we see new array programmers writing it this way, we tend to jump on them and say, let me help you with this. I'm currently going through the painful experience of, reluctantly, kicking and screaming, after many years, many years of fighting it, actually learning to code in C#. And it's exactly the stuff that the journalist editor wanted: say things over and over again, use long names, explain everything. And I can't hear myself think amongst all the noise.
00:31:09 [AH]
Yeah, yeah. I am actually going to give a talk at LambdaConf this year called "My Favorite Verbose Programming Technique." So I'm going to talk then about one area where I use a verbose technique and how I allow myself to think under those conditions.
00:31:25 [ST]
Well, I should come to Spain just to hear you say that.
00:31:28 [AH]
It's in Denver, I think, not Spain. Denver, Colorado.
00:31:30 [ML]
But a trip to Spain sounds fine.
00:31:33 [AH]
Yeah, the trip to Spain is a good idea, but maybe hop through Spain into the US and come visit. But yeah, in terms of that concept, that is something I've been thinking about a little bit in terms of my code, because that's something also that Brandon has pointed out: the code is making a lot of allusions. And there's a lot of implicit understanding in the code. And that's one of the reasons I can work with the code: so much of the structure is implicit, which makes it more malleable, because I can change it more easily. And so the parts of the code that show up in the source code, the ideas that show up in the source code, are just the anchor points necessary for me to work with the abstract underlying ideas that are floating in the system and sort of alluded to or spoken about indirectly in the code. And I think there's a lot of value to that style, but I do think that it's not without, you know, criticizability, if that's a word. I think you could make a case that people might not want to work with code like that, or that it has certain costs associated with it that people aren't comfortable with, versus very explicit code.
00:32:49 [BT]
When you say implicit, do you mean the foundational structure of the code is so well understood that the stuff you build up on top of it... I mean, it basically has a strong foundation?
00:32:56 [AH]
It's the things that are holding the code together aren't represented in the code itself. So the reason you do something in the code becomes apparent when you see the implicit structure that's behind the code. But you won't be able to point to this thing in the code and that's why we don't do this. It's more of, do you see this pattern that emerges in the structure that constrains our design space? That's what drives our exploration. That design drives all the stuff we're doing, but you don't see that in the code. You absorb that from reading the code, but you can't point to a piece of the syntax in the source code that's actually saying that. And if you're not used to working that way, that is a very big shift in how you would work with a piece of code.
00:33:49 [BT]
You see, this is my interrogation working really well, because this actually matches up with what Brandon was saying. He said that quite often you guys would just sit and have philosophical discussions without writing any lines of code, because that was actually really important to the process.
00:34:02 [AH]
Actually, a lot of times it's really critical to understand what it is we're actually doing in the code. Because once we understand what we're doing, the code often is a line of code. But often we don't yet know what we're doing at all. And very often we're just discussing what it even means, what we're saying. And then as we clarify that, it all falls out. So a good example of that was this new field that's coming out of the parser, this variable bindings field, which we're using to do static analysis. And it amounts to a data flow graph through the code. And we opened that field up; we basically exported it from the parser. And when we were trying to talk about it to Morten and to other people, and even me communicating it to Brandon, it became clear how fuzzy the concept was in that space. It wasn't crisp, and it wasn't clean. And there were too many implicit ideas that had emerged over time in that thing. And so one of the really important things that we did was figure out how to make that whole structure really crisp and clean: what it means, what the invariants are around that structure, in a way that can be communicated in one to two sentences, versus the prior situation, which was just case after case after case of something that we had to deal with. And that necessitated changes in the structure of our code to reflect that consistency. But once we ironed out what it meant for those things to be as they are, the code actually simplified itself way down. And now we have a much simpler structure in our namespace and name resolution, like lexical and dynamic scoping, things like that. And it goes a lot easier. But it took us a while to unpack and discover and figure out where we weren't able to think clearly about the code.
00:35:57 [BT]
Yeah, to me, and I always bring it back to sort of the physical world, I spend a lot of time walking on trails, you know, through various places. To me, that's exploration. You can go into an area that you don't know and you're walking all over the place. And then after a while, trails emerge. And then you realize you have the quickest way to get from one point to another. And sometimes the quickest way isn't always the way that you want to take, because an alternative takes you into an area that's more interesting. And you don't even know that when you first walk into an area. You have no concept of the map, essentially. And then as you work in it, you get a map of where the best places are and how that's going to fit.
00:36:29 [AH]
Yeah, exactly.
00:36:36 [BT]
And again, multiply that by all the areas around that you haven't explored. And to me, that's my sense of what you're trying to do there. It's quite an adventure.
00:36:45 [AH]
Yeah, well, I've got an example that I was noticing just the other day. And the podcast viewers won't be able to see this. But like, I've got these pages of my programming. And I've been thinking about this. And almost all of it is prose of me talking to myself. And very little of it ends up actually being code. Because so much of it is simply discussing the ideas with myself before I write that one line of code or two lines of code or whatever it is.
00:37:10 [ML]
Now, at the same time, I think it goes in the other direction sometimes, where if you have enough understanding of the problem to know what-- if you have some minimal understanding of the problem, you can often write a solution. And then by working formally with this solution, by saying, well, you know, here's an A plus B over here. C is A plus B. And then here's a D is C minus A over here. Well, D is equal to B. Make those sorts of dumb manipulations of the code without understanding any of the ideas. You can often arrive at a better conceptual approach just by simplifying the code. And that's one of the really interesting things about array programming is that it makes it-- because it's got more of these connections between primitives and things you can simplify-- I mean, not just addition, but array operations like, you know, say, indexing is associative is one big thing.
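A small BQN sketch of the associativity Marshall mentions: two chained selections can be folded into one by composing the index vectors, a rewrite you can make mechanically without thinking about what the data means (the vectors here are made up for the example).

    v ← "abcdef"
    j ← 5‿4‿3‿2‿1‿0
    i ← 0‿2
    i ⊏ j ⊏ v     # "fd": select with j, then select with i
    (i ⊏ j) ⊏ v   # "fd": the same result as a single selection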
00:38:11 [AH]
Yeah, yeah. Yeah, no, I agree.
00:38:13 [ML]
You can often, yeah, find the simplest expression. And often that corresponds to a simpler conceptual view as well.
00:38:19 [AH]
Yes, yes. And I think that there's like a back and forth with that. So you'll write down something that seems to get what you want, and then that will show something or reveal something. And then, you know, I totally agree that that helps. We had a case just recently where we did a progressive set of rewrites of equivalent expressions, and they became progressively more refined towards exactly what we wanted. But I have to admit, and this is something I've been pondering a little bit as well, in my experience when I'm working with these spaces, I spend a lot more time in ontological philosophy in my programming than I do in equational rewriting. Not that I don't do the equational rewriting, but I don't tend to spend a lot of time there relative to my other spaces. And I will get maybe ontological insights from the code, but I do tend to spend a lot more time in the philosophy. And I don't know if that's a quirk of mine, if that's a quirk of the domain I'm working in, or something. But it is something I've noticed: in terms of thinking with the notation, the notation is definitely communicative to me. But my experience is very often that I am writing down an idea that I have sort of thought about, and writing it down helps me to formalize the idea. But I'm not sure how much I directly think in equational reasoning with the symbols. That's a particular type of reasoning. I'm not sure how much I do that relative to just thinking about what the data is and what it means and things like that. So I don't know whether that's me or...
00:39:57 [ML]
I guess I would describe it as problems that are... where ultimately the solution is very simple, but the data structure in between is conceptually complicated, or it's hard to understand where the organization comes from. A lot of multidimensional array algorithms sort of fit in that area.
00:40:21 [AH]
You mean in terms of using the equational rewriting to get clarity on the data?
00:40:25 [ML]
Yeah. As opposed to at the larger scale, you really... if you don't come to some understanding of what... like what this part of the program needs to know about that other part of the program, if you don't know what exactly that information is, and I mean, then of course, how it's represented is just... that's just details. But if you don't fundamentally know what the connection is, then you're kind of sunk. You'll just write...
00:40:54 [AH]
Yeah, and I would say probably about 80 to 90% of my programming is in that latter camp rather than the former. Which is probably just a question of the domain, or, as Brandon's pointed out, sometimes it might be just that I have so much experience with actually writing what I want to write in APL that it just doesn't take that much time for me to do that other part, which is the actual equational stuff.
00:41:19 [ML]
Yeah, I mean, you definitely have to know what formats information can take before you can think at this high level. So it's a prerequisite in some way.
00:41:27 [AH]
Yeah, yeah. And something that became clear over time is the challenge of teaching people how to think with data structures that are amenable to array programming, and how to teach them to think about the encodings and the structures and how that's all put together. And yeah, that's definitely a non-trivial, intermediate pedagogical task, I'd say.
00:41:53 [BT]
I would also think that the rewriting tends to lead you into what might be local maxima that don't always relate to the wider view. If you find the fastest path, it's the fastest path, but it might not be the best path with regard to the area. So you have to...
00:42:11 [AH]
I'd say it depends on how open you are to looking at your code, right? Because if your rewrites are a line at a time, yeah, probably. But if you're looking at the flow of, say, 20 or 30 such lines, that can shift that.
00:42:23 [ML]
But I also think it's surprising how often what works better for one line works better for another line. And I guess partly that's because, you know, there is this underlying structure, which in your style you leave implicit, of what you're actually doing. And so then these concepts tend to express themselves in the same ways. So, I mean, if you find that working with offsets from some position is better than working with absolute indices in one place, it's pretty likely to be true for the entire algorithm.
00:43:00 [AH]
Yeah, we have a lot of that kind of stuff in the compiler for sure. There's a few different representations that we lean on constantly because they work well across the board.
00:43:11 [BT]
And that would come back to your, sort of your implicit structure again. That's foundational to some extent, or it might become foundational.
00:43:16 [AH]
Well, I think one of the insights that Jesus pointed out that I hadn't emphasized as strongly, because it felt obvious to me, but he said, "No, no, no, no. You've got to point this out more," is the lack of foundationalness of certain data structures. And by that, I mean that there's the idea of the thing that you're actually talking about, but then there's all of the representations of that idea. And there's no reason to make one of these representations more preeminent than the other representations all the time. And so if you can sort of separate out your representation from the ideas that you're working with, you can make multiple representations sort of equally first class in a sense. And I think that's a powerful idea that people don't always latch onto as much. So that was something I emphasized in my last talk at FunctionalConf on data-oriented design for APL, which was this view concept of looking at your data so that you're not attaching foundational status to a representation of the idea. And you're okay with switching the representations whenever it's convenient.
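As one concrete reading of that idea: the "same" set of positions can live as a Boolean mask or as an index list, with neither representation privileged, and in BQN each conversion is roughly one primitive. A small sketch; n, m, and i are names invented for the example.

    n ← 8
    m ← 0‿1‿0‿0‿1‿1‿0‿0   # the set as a Boolean mask over n slots
    i ← / m                # the same set as an index list: 1‿4‿5
    m ≡ i ∊˜ ↕n            # 1: membership of 0..n-1 in i rebuilds the mask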
00:44:31 [ML]
Yeah, the changing representations, being able to say, well, I had this thing in the code and it's this way now, but it would be helpful if it were a different way. How do I get it that way? That's something that is surprisingly hard to reach for as a tool, I guess.
00:44:51 [AH]
I think it's because in other languages, doing the data engineering around structures and representations is very expensive, or at least I'd call it very expensive. And I think part of it is the explicit nature of the data representations: it costs a lot to go from one representation to another in other programming languages.
00:45:13 [ML]
I'm not so sure, because this even happens with people learning math. If there's a variable substitution that would solve a problem, often it's very simple to write out, but hard to come up with. And maybe one reason for that is just that it comes out of nowhere. It's not like you can find a suggestion in the problem as to what to take. In some cases, you'll see, if the solution's written out: well, we make this substitution. And just looking at the problem, you'd have absolutely no idea where it comes from. So, I mean, I guess you could say it's both a wide space of things to search, and it's creative, as opposed to manipulating something that's already there.
00:46:04 [AH]
Interesting.
00:46:12 [BT]
Well, I kind of wonder whether it was already there. It's just that the idea of a substitution like that feels surprising, because suddenly it reveals a deeper understanding of the problem that was there, but you didn't understand it before. Does that make sense?
00:46:20 [AH]
Well, it's definitely explorative and creative. I get that. And that is fundamentally harder than deriving something, right? You can't just derive it. You have to sort of have an insight into it outside of the immediate mechanical derivation that you see. I don't know how much that particular experience in math correlates to the same experience in programming. I'd be interested to see what people's experiences around that are. I've noticed that my technique tends to be that I think about what I want to say and how I would prefer to say it. And usually I can get an idea of what I want the code to look like, something close to what it would be nice to say, ideally, if I had this magic representation. And then that usually allows me to do some sort of guided exploration toward what the underlying representation can be converted to or from. So maybe that part doesn't feel as hard.
00:47:20 [ML]
Yeah, part of the issue is just that no representation is better or worse. I mean, some representations are definitely worse, but it's not that you're moving, oh, you have this in one representation and you're going to switch to a better representation. It's that you move to a different one that is more suitable for one particular operation.
00:47:36 [AH]
Which I think a lot of times people just aren't taught. That's another thing: it's an advanced technique in math, and it's an advanced technique in programming. I would say you don't learn that technique through undergrad, in most undergrad computer science courses. They don't teach you: you're given the data, it looks like this, and you know what you could do? You could just completely transform what the data looks like and how it's entirely represented, and then it makes your life easy. In most of the courses, you assume that the data structure is what you have to work with and you just never change it. And that's definitely true at scale in a lot of companies too: you never change your data structures, or it's very expensive from a human-factors perspective to propagate a data structure change throughout your code. And so if you just made a tiny data structure change inside of your code to do it, I think people would, like, code review you into oblivion in a lot of organizations. And in the database courses I saw, you didn't touch on views very much. Views were like, eh, that's a thing, but don't over-rely on it for various reasons.
00:48:49 [ML]
Well, I mean, I think teaching can kind of amplify that intuitiveness; if something is intuitive and useful, it's gonna be taught. So I think, you know, once you understand how to do it, it's somewhat easier, but there is some barrier that I think is actually a real thing, as opposed to us just having happened to go down an unfortunate path in programming design or something.
00:49:16 [AH]
I can go with you on that up to a point. I do think that in my experience, working with the array languages makes it much, much easier to manipulate intermediate representations. If you encode your data with arrays as a practice and habit, that just makes it a lot easier to change representations. And if changing representations is easier and cheap, it's a lot easier to reach for. Whereas I think there's an implicit cost model that people acquire on top of that. You know, yes, it's a creative activity; you have to see the insights. Maybe you've got a point that that's already a leap: there's something you have to see there, which is more difficult than a lot of other types of programming. But on top of that, if you're thinking about that leap and your mind says, "Oh, but that means we would also have to do all this extra stuff," and suddenly there's four or five additional steps that aren't immediately obvious, I think the cost model in your head is gonna naturally steer you away from that, just in terms of pain avoidance.
00:50:15 [BT]
Yeah, the insight would be that you can change your data structure. If you don't have that insight, then you're not going to go down those paths.
00:50:22 [AH]
Well, it's twofold, right? Seeing that you can change your data structure is good. And that's a skill, like Marshall's saying; I think you have to develop that skill, and it's not as easy to see as other things. But even if you see it, there's a cost: how do I change my data structure? How much is this gonna cost me? Is it easier to just solve the problem the other way directly, even if it's not as pretty, because all this extra cost comes with changing the data structure, either through process engineering, process limitations, or just because your programming language makes that difficult in the source code? I know for some languages, doing that would be an expensive, time-consuming type of thing that you'd have to consider. And there's a bunch of worry around that. So with certain complicated data structures in Haskell, for instance, I would not want to just change my representation willy-nilly, because suddenly I've got to get all the types working again. And one of the things that's nice in Haskell [04] is, if you are working with certain data structures, there are certain patterns that just emerge and they create this nice type guarantee. And now you've got to think through all those type guarantees all over again in your new data structure. And I think that's a part of Haskell that takes a lot of thinking to get right. And once you get it right, you tend not to want to redo that work every single time you're trying to solve a new problem. And so you would want to reach for one of those styles or representations that you know.
00:51:50 [ML]
I was actually thinking of Haskell as somewhere where people are unusually prepared to introduce new data structures. Maybe partly because you have this framework of all the type classes, functors and monads and all that, around them, which makes for a little less friction when you add a new data type.
00:52:15 [AH]
Yeah, yeah. I would agree, yes, 100%. But I think that's kind of what happens: now a lot of Haskell talks are, "I've got a new data representation." They become elevated to, like, I don't know what it is. At LambdaConf, for instance, or at FunctionalConf, a ton of these talks are about, "Here's this new data structure. Here's this new representation with all these types. I'm gonna teach it to you. Now go use it in your code." Because the type engineering itself becomes a significant part of everyday life.
00:52:52 [ML]
Yeah, and it's not something that you'd... in a dynamically typed language, you just couldn't make a talk out of it.
00:53:01 [AH]
Yeah, you couldn't make a talk out of it. Now, you get some really interesting benefits from doing that, in something like Haskell. But these are those cost trade-offs in the design of your programming languages. You get these sort of benefits, but then the cost is, it takes work to do that and make them run right. And actually, I don't know how Rust compares to Haskell in this regard. Michael Snoyman recently at FunctionalConf said that Rust has given him everything he's ever wanted out of Haskell, more or less; I'm paraphrasing, but he basically said: "I get what I want from Rust that I could have gotten from Haskell and without all of the annoying academic politics of it, and so it just lets me get more work done".
00:53:41 [ML]
So, I mean, isn't the type system pretty much just like union types and ADTs?
00:53:47 [AH]
Well, I think it's an application of linear typing for their memory model, if I remember right.
00:53:52 [ML]
Oh, well, yeah, they do have that.
00:53:53 [AH]
Which is the big extra addition. And then you've got a bunch of practical, well-designed type structures.
00:53:58 [ML]
Solidly designed.
00:54:06 [AH]
Well, well-designed for what they're trying to achieve. You know, well-designed is a ... well-engineered [laughs]! Well-engineered, but yeah, so there's some benefits to it. But I think the thing you lose there is: with an APL in my very, very untyped world where I'm like just quote-unquote, horribly untyped in some respect, then my flexibility to just change representations on the fly whenever I feel like it is so trivial that I can tackle problems in almost exactly the way that I want to express them, pretty much all the time, which I feel is worth giving up the extra types of guarantees that I get from these other languages.
00:54:44 [ML]
With this, not putting types on anything, we have a big deal where sometimes changing the representation means doing absolutely nothing [Aaron agrees]. So I think the example I really like for this is "filter" versus "replicate" [Aaron agrees with a chuckle]. This took about 15 years for anyone to come up with. It was Bob Bernecky who did it, or at least published it. The idea that you have a Boolean zero or one that says whether you want the thing in, and this is the same as saying the number of times that you want to include the thing.
00:55:18 [AH]
Yeah
00:55:19 [ML]
And if we wrote zero and one as false and true, would this ever even have happened? I mean, it's like asking... was it the Egyptians who never came up with the solution to the quadratic equation, or something like that, because their number system was just too horrible?
00:55:39 [AH]
Yeah.
00:55:43 [ML]
I may be slandering a completely unrelated society. I think it was the Romans who had the problems.
00:55:45 [AH]
The Romans had a really difficult time with extra layers on top of addition. Famously, they had a real hard time doing higher multiplications and exponents and things like that because of their number system. But yeah, "filter" and "scan" are really good examples. And "expand" and all of those.
00:56:05 [ML]
I don't like "expand", but [chuckles]
00:56:08 [AH]
Well, that whole set is, I think, a really good example of how, when you can connect these ideas together around a single symbol, it allows you to see the leap to the next step a lot easier. With function specialization, when I was doing the scan and the replicate, you can see that in APL fairly directly, because you're familiar with the "filter" and it kind of just emerges more naturally. Whereas seeing that same pattern in a language where "filter" is just over Boolean data types is, I would argue, an entire phase shift in your thinking, which can be done, but I think is a lot harder because you're not getting any notational assistance in doing it.
00:56:57 [ML]
Yeah, well, I mean, you have to introduce this extra step to convert a Boolean to a count, as opposed to a Boolean just automatically being a count.
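A quick BQN illustration of the filter/replicate unification being discussed: the same / primitive reads a Boolean as "keep or drop" and any other count as "how many copies" (the data here is made up for the example).

    v ← 3‿1‿4‿1‿5
    (v > 2) / v       # 3‿4‿5: a Boolean mask acts as a filter
    1‿0‿2‿0‿3 / v     # 3‿4‿4‿5‿5‿5: counts replicate each element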
00:57:02 [AH]
But we just used "expand" the other day, actually. We had done some reductions to select out some variables, and we needed to do a set minus (or a set subtraction) based off of those set items. But we had already reduced the size, so we could expand it back up and use the pre-computed Boolean to do the extraction and removal of items and things like that with some Boolean masking. And Brandon had said: "oh, well, but it's shorter if we just use the tilde with the set minus." It was like: "yes, let me run this to see how the performance differs, because something's off here."
00:57:48 [ML]
Yeah
00:57:50 [AH]
And I did it and the smaller version with just the set minus was 60 times slower. So [laughs] we went back to the "expand" version.
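[Editor's note: a rough BQN sketch of the general pattern being described, with hypothetical data; it reuses a pre-computed Boolean mask instead of recomputing membership with a set minus each time. The 60-times figure is specific to the Co-Dfns code and is not reproduced here:]

      x ← "abcdefg"
      drop ← "beg"          # hypothetical items to remove
      keep ← ¬ x ∊ drop     # pre-computed Boolean mask: 1‿0‿1‿1‿0‿1‿0
      keep / x              # reuse the mask; result "acdf"
      (¬ x ∊ drop) / x      # the "shorter" set minus, recomputing membership; also "acdf"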
00:57:57 [ML]
Yeah, I mean, I guess in BQN, you just do "under compress" for that. And that has the benefit of telling you what the fill elements are [Aaron agrees], or the cost, that you have to specify what they are [chuckles].
00:58:13 [AH]
Yeah, yeah. I don't think "under compress" would work for ... [sentence left incomplete]. Well, I'd be curious to see how the "under compress" model would work in a way that Co-Dfns is written right now because it's maintaining a few variables that need to be updated at that point. I don't know if the "under compress" can track the side effects that we would need.
00:58:31 [ML]
So if you have a Boolean vector, then the Boolean "expand" of some other vector is that vector "constant", under the Boolean compress.
00:58:46 [AH]
Okay, yeah, okay. So yeah, it may be just a couple more symbols, but [it's the same] basic idea.
00:58:53 [ML]
So you say: "fill in the values that you would get by compressing out".
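[Editor's note: a minimal BQN sketch of the "constant under compress" idiom Marshall describes, with hypothetical values:]

      b ← 1‿0‿1‿0       # Boolean mask
      v ← 5‿7           # values for the 1 positions
      v˙⌾(b⊸/) b        # expand: ⟨ 5 0 7 0 ⟩; the compressed-out slots keep b's zeros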
00:58:58 [AH]
Right, right. Okay, yeah, I think that could work. I take your point. The replicate is definitely a good example [chuckles]. Oh, I did wanna mention, Marshall: I don't think we have talked in a while and I don't think I ever got to say that you put up a comparison of BQN and Co-Dfns a while ago. [05]
00:59:10 [ML]
Oh yeah, I did.
00:59:15 [AH]
I really like that you put that up there. I think that's a really good document. So it's pretty cool. I enjoyed reading through it.
00:59:23 [ML]
No major factual issues, right?
00:59:25 [AH]
Well, I mean, it's hard for you to keep up with what Co-Dfns is doing at any given point. So I can't complain about that.
00:59:33 [ML]
There was like one time when I went to update it and you'd switched over to literate programming [chuckles].
00:59:38 [AH]
Yeah, yeah [laughs].
00:59:39 [ML]
After a few cycles of that, I just kinda made everything vague enough to kinda cover what hopefully the current state of Co-Dfns is.
00:59:47 [AH]
Yeah, yeah, I think, as long as you read it not expecting it to be at the cutting edge of what Co-Dfns is trying to do, or what our dreams are, or what all this stuff [is], it's a very useful exploratory document for different approaches and things like that [Marshall agrees]. So I'm not sure I could really complain about any, quote-unquote, factual errors.
01:00:07 [ML]
Nothing that is straightforwardly wrong and could be ... [sentence left incomplete].
01:00:11 [AH]
I know, but it's a good document. It's a lot of fun and sometimes I read it and I'm like: "we're gonna do that; we're gonna do it; if I only get enough time, we're gonna do it!"
01:00:20 [ML]
Yeah, there's some skepticism there. [For] a few things I say: "I don't really think this is possible". So I'd of course very much enjoy being proven wrong.
01:00:28 [AH]
That's like bait for me, right? My first paper that I ever got published was basically due to an online internet argument on the Scheme mailing lists over: "I don't know how to do this thing; can we even do this thing?" Like, [then] go implement it.
01:00:43 [ML]
So we'll link that in the show notes. Maybe we'll link Aaron's paper too.
01:00:48 [CH]
I have the link ready to go. I looked it up.
01:00:50 [ML]
My page, right?
01:00:52 [CH]
The Co-Dfns versus BQN, yeah.
01:00:54 [ML]
You didn't find Aaron's paper [chuckles] in the 30 seconds you had to [everyone laughs]. "Aaron Hsu ... Scheme ... can't be done".
01:01:02 [AH]
That would be really impressive research right there.
01:01:06 [BT]
Just wait for ArrayPortal to get it [Aaron laughs].
01:01:09 [CH]
I feel like search engines have been getting progressively worse in the age of LLMs. I was trying to find an Alexander Aiken paper called "The FL Project" (I was about to say Stepanov, but that was the wrong Alexander). And I Googled "Alex Aiken FL Project". And I was almost positive that the name of the paper (it was in the 90s) was "The FL Project", and it was a follow-up to John Backus' 1978 paper, "Can Programming Be Liberated from the von Neumann Style?". And I couldn't find it! This was during a programming languages virtual meetup. And I was like: "I've read this paper two or three times". I am almost positive it's "The FL Project". This is in front of 30 or 40 people and I'm feeling like: "oh man, this is a bad look". I spent two minutes and then finally I was like: "maybe I should add the 'ander' to Alex", because I had just put "Alex Aiken the FL Project". It was because I had used "Alex" instead of "Alexander" and I was like: "are you kidding me?" This was in Google Scholar.
01:02:12 [AH]
[laughs] Oh my goodness!
01:02:13 [CH]
And the correct name of the paper is "The FL Project". So I had the exact name of the paper correct. I even put: up until 1995, because I'm pretty sure it was published in 1992 or 1993 or something like that. And then as soon as I changed "Alex" to "Alexander", there was one result. I was like: "wow, that is something". As LLMs continue to progress ... [sentence left incomplete]. Claude 3.7 just dropped yesterday. Haven't used it yet, boy, oh boy. I've heard that someone called it AGI. I'm pretty sure that's not the case [Aaron laughs]. But I'm very excited.
01:02:46 [AH]
So is this a causation or correlation?
01:02:48 [CH]
I mean, we don't know. Probably, I would guess causation.
01:02:53 [ML]
All the progress in AIs is because the search results are getting worse: incredible!
01:02:57 [AH]
You think so? [Conor and Bob laugh out loud]
01:02:59 [CH]
No, no, the cause of search getting worse is because of the switching ... [sentence left incomplete]. I know so many people that have basically just abandoned [searching]. And also, I'm not sure if I just never noticed, but I guess I've become a bit of a curmudgeon when I have to go to Google because the LLM isn't working. So I'm not sure if it's because of this curmudgeonliness, but have the results at the top of Google always been [ads, the first] three or four? I feel like literally what is on my monitor is ads. There's not a single actual search result. I thought it used to be one or two, but now it's like the full screen.
01:03:35 [ML]
Well, it used to be zero, didn't it?
01:03:36 [CH]
It used to be zero back in the day, but I've never been a DuckDuckGo user or anything. I can't remember what I was searching yesterday, but I remember searching something and I was like: "what the heck?" And you can see the little [icon]; I pay attention to the: sponsored, sponsored, sponsored, sponsored. My theory is that because everyone's switching to LLMs, and a lot of people are more than willing to pay for whatever your choice of LLM product is, because it's such a bang for your buck, search engines aren't being used enough, so they're bleeding money. In order to make up the loss in revenue from ads, they're now putting in more. Now I'm just gonna go to DuckDuckGo. I've never used it before, but if I have to literally scroll down the page to get past all the ads ... It's like Facebook. At some point, Facebook just started: every two out of three posts or four out of five posts is an advertisement, and they do it in such a way that it's kind of like: "oh, that's interesting". And then you're like: "wait, that's an ad". And then it's the same thing with Instagram. All these places: they're tricking you into watching something. Anyway, Bob's got some thoughts.
01:04:41 [BT]
Well, not my thoughts. Cory Doctorow has a whole term for this called enshittification [echoes of agreement and chuckles]. And the idea is you create first a useful tool for people and then over time, you decide who you're making it for. Originally you're making it for the users but after a while, you realize that what you're really making it for might be for advertisers who can get access to the users. So ads start to come in. Then after a while, it's no longer even useful for the advertisers because there's too many ads and people aren't even using it anymore. It basically is just almost like ossification. It just becomes so hardened and useless. "Nobody goes there anymore" [chuckles]. Now it's finished its life cycle. I think you do see the same thing across different domains like Google [and] Facebook. All these things get into that same zone. I mean, I'm not dealing with Twitter anymore or whatever it's called, X, or whatever. Same thing. It's run its course. It's no longer useful. The things that are going on in there, they're just not useful anymore.
01:05:50 [CH]
And it's funny: in the early days of these LLMs, when they haven't been co-opted (these companies haven't figured out yet that when there is an opportunity to insert [a] brand name for a recommendation, they can monetize it), you actually get these very honest ... [sentence left incomplete]. I was listening to a podcast that was reviewing Grok 3 (we don't want to get into the politics or whatever, but it's done by Elon and friends). If you ask it stuff, it does not align with the stuff that Elon and friends are espousing. If you ask it, you actually get these very balanced, both progressive and, like, whatever [takes]. It's just probably [the case] that in half a year, you're not going to get those balanced takes anymore. It's going to be curated for what the owners would actually prefer. But he says: "oh, it's not been filtered; you know, you can get honest responses". But you actually ask it a question and it's like: "well, actually, the idea of this topic of the day is not at all what they're espousing". So it's nice that we live in this [time when a] new model comes out and they haven't tweaked it. You get an actual, very good, unadjusted summary, at least for the topics that I've been looking at.
01:06:54 [AH]
Well, wasn't that the complaint about DeepSeek? That you tried to do anything and you immediately knew it was run by China? Or whatever it was?
01:07:03 [CH]
Oh, I never actually tried DeepSeek, but that's only because I'm too invested in Anthropic and ChatGPT.
01:07:14 [AH]
Ah, okay. Well, I mean, I haven't tried pretty much any of these, but you know, one of these days you are going to have to teach me your LLM Fu.
01:07:21 [CH]
Oh, it's so exciting. I know there's some people that aren't on the AI train yet.
01:07:28 [AH]
I haven't been able to make use of it. I keep trying. I keep asking it questions and I keep getting answers. I'm like: "you stupid idiot; what? That's not even close to useful to me".
01:07:36 [CH]
Yeah. It's funny. Sometimes you'll ask a question about a non-APL array language and it'll return you an answer ... because all it knows is a little bit of APL. And I'll be like: "I'm asking you about BQN" [Aaron laughs]. But there's tricks. You got to just feed it a corpus of stuff and say, like: "the answer's in here; go find it for me, please". But it's just around the corner. Once AI and LLMs can solve the code golfing stuff, that I think is when we've actually hit AGI; if you can give it a problem and say: "solve this in J; current record is 37 characters", and it's able to go to J's NuVoc, teach itself J, go and get all the tacit stuff, and then improve on it. We're going to have some LLM that takes over code.golf. It's going to happen, folks, because it's a mechanistic thing where it can just iterate and churn for a month, trying everything.
01:08:30 [AH]
What do you think? Would I be convinced by that?
01:08:32 [ML]
I mean, there are all sorts of search algorithms that I think work for shortening code that are not anything related to intelligence. So that criterion I don't think is enough.
01:08:43 [CH]
Well, that's fair.
01:08:50 [AH]
Yeah, no, no, no. I was just going to say my metric for usability for this (in coding, not in general search or stuff like that) is, if I could point it at my code repo and say: "I want to find some improvements where I'm being inconsistent with my approach or my style and I want to look for a place where I can improve my architecture or improve clarity here or clean this up to where it's nicer; find those areas or find some places where I've got a bug that I might've missed, to where I'm failing some assertions that are implicit in my code base; pull those up and show me what I'm dealing with". I think that would be extremely useful. I would be very impressed by that. I think that would be the thing that I ... [sentence left incomplete].
01:09:33 [CH]
It definitely can already do that. Maybe just not for APL, because it doesn't have a large enough corpus, but for Python [06] and the popular top-10 languages, a thousand percent. If you're using Cursor and any one of the backends (I typically use Claude 3.5, soon to be 3.7), there's a couple of different ways of interacting, but you can just say, across your code base: "I'm looking to do some refactoring and some cleanup, especially if you can find some bugs; please list me the top 10 recommendations for making this more idiomatic, or [tell me] if there's duplicate code". And so that's the thing: sometimes you'll get it to generate some stuff and then you can immediately ask it: "can you clean that up, though?"
01:10:15 [AH]
Have you found that it works though?
01:10:15 [CH]
Oh, amazingly, amazingly. I mean, every once in a while, don't get me wrong, if you do LLM Cursor-driven development for like half a day, you'll end up in a place where you've got a ton of redundant code that you can refactor, but, like, it's working. It's basically just like managing a very high-throughput intern that 80% [or] 90% knows what it's doing; you just need to point it in the right direction. And the beautiful thing is, I don't know the world of Python and libraries. If I want to build some financial dashboard (which is what I built the other day), I've got now some technical charts and whatever. And it's pulling from these libraries that I didn't know existed. yfinance is this thing that can pull from Yahoo Finance; it doesn't work on the latest version, but you point it at the old one. Anyways, I feel like I'm like Harry Potter, waving a wand, and it's like, you know, "Accio, chart" and "Accio, GUI". And it just puts it all together. I'll say, honestly, too, I think it's a better Python programmer than me [Aaron laughs out loud]. Because I want to do functional, array-like stuff and I'm always trying to jam that into Python. And it's like: "that's not what Python ...". Idiomatic Python is more of an object-oriented [thing]. Just write a for-loop. If you need a for-loop, you don't need to always reach for itertools and try and do a sliding window with a functools.reduce. It writes it in a way that I look at and I'm like: "that's actually more beautiful than the code I'm writing", because I'm trying to write a different kind of Python. At the end of the day, it never feels as good.
01:11:53 [AB]
I think that's very sad. It's very sad. It's reducing you to this nothingness of just: "yeah, just do whatever everybody else is doing". I want the artificial intelligence to do my laundry, including folding it and putting it away in the closet. Do the dishes, maybe cook me a light meal, and bring the children to school. So I have more time to code on my own. Don't mix into my work.
01:12:19 [CH]
[Laughs] I was like: "how does this tie back to code?" And it ties back because: "I don't want it touching me when I'm coding". It can do everything else: look after the kids, look after my wife. But when I'm programming [Bob laughs], you stay away from me!
01:12:30 [AB]
No, I don't want it to look after my kids and my wife [laughs]. It can bring the children to school. That's mechanical work. Anything should be able to do that.
01:12:38 [AH]
I don't know. You mentioned it's just a high-throughput intern. I can't imagine a more horrible environment than [managing] a high-throughput intern. I'd be driven crazy, I think.
01:12:50 [AB]
Yeah. You want somebody who is actually intelligent, not a machine. Machines should do machine things.
01:12:56 [ML]
Well, I mean, see the one task I would ask for there is: "find the missing semicolons and the things where I meant to write 1, 2, 3, 4 and I wrote 1, 2, 3, 3".
01:13:07 [AB]
No, if you need to search for the missing semicolons, you're using the wrong tool for the job.
01:13:11 [ML]
Well, yeah, I mean, that's the other factor: when you write code in a wonderful language, such as Singeli, you would never write 1, 2, 3, 4. You would write "1 + iota 4". The two things that make me think that I don't even need this are: one, if you've got a good language, then you don't write repetitive code; and two, having good fuzz testing really picks [these errors] up. If you can find the errors, then [when] you make a change, you test whether it has errors, and if there was an error, it's in the thing you just changed. So fix it. And that's not really that difficult [chuckles] to do most of the time.
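[Editor's note: the same point rendered in BQN, for readers unfamiliar with Singeli; this rendering is ours, not from the episode. Generating the sequence leaves no room for the 1, 2, 3, 3 typo:]

      1 + ↕4    # Range, then add one: ⟨ 1 2 3 4 ⟩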
01:13:53 [AH]
Yeah, I think Conor and I also sort of fall on opposite extremes on this one but I continue to feel like the value of AI in programming is proportional to how badly designed the programming language is.
01:14:06 [AB]
Yeah, I think that's true.
01:14:08 [ML]
That's not a new concept really.
01:14:11 [AH]
Yeah, no, I mean, I was saying this about most tooling. I do think that most tooling is there to compensate for bad programming languages, for the most part. Maybe, if we wanna be a little bit more measured, instead of using the term bad, we would say insufficient in some dimension, right? That it's covering for an insufficiency in the language's expressivity or capabilities in some way.
01:14:45 [ML]
So it does sometimes happen that you want repetition in code because you have many things that are very similar, but you're only ever going to deal with one of them at a time [Aaron agrees]. So if you want to read the code, you want to know how it handles this kind of business object. And you don't wanna understand this big abstract framework for handling general objects but instead you want to know: "if I pass this in, what's it going to do?" And so in that case, it makes sense to have each individual method be very simple.
01:15:24 [AH]
Well, I'm okay with repetition. I don't mind repetition so much. That I'd say is not my main complaint about programming. If the repetition exists because of a domain question and the repetition is because you're being direct instead of indirect in your expression of the solution, I'm pretty okay with that.
01:15:46 [ML]
Yeah, but in that case something as dumb as an AI can then cross-reference the different definitions and say: "well this one acts very different from this one; is that really intended?"
01:15:59 [AH]
Yeah, it can but then so can typical compiler design anyways, right? Compilers can analyze that stuff regardless.
01:16:07 [ML]
I mean, no, I don't think so. I'm thinking of the case where one of the functions says one plus something and the other says two plus something. But there's no real reason for them to ... [sentence left incomplete]. You know, basic logic errors.
01:16:21 [AH]
Yeah, okay, okay. I'd want to see it. I want to see it. I want to see something where it is useful to me. I acknowledge the potential. I have yet to be able to take advantage of it.
01:16:36 [ML]
So yeah, I mean, I haven't tried at all [chuckles].
01:16:37 [CH]
I've got so many thoughts and we're going to turn this into a 17-hour podcast if we're not careful.
01:16:43 [AH]
Hey, hey, let's do it [laughs]. It is the 100th episode after all [Bob laughs].
01:16:49 [CH]
Maybe we'll bring you back and we'll talk about this. I've had disagreements in the past about this high-throughput employee that isn't as passionate, because I had this disagreement with my boss at one point years ago: he basically said what you said. You want people that really are going to be deep thinkers, and are going to go home and noodle on this stuff, and are always going to care, versus the person that clocks in and clocks out: still amazing in terms of the amount of work they get done, but they're not really going the extra mile in their thinking. Anyways, we'll come back to that. But the point being, I disagree.
01:17:24 [AH]
That's a good one to ... [sentence left incomplete]
01:17:27 [CH]
Yeah, we'll table that discussion to make sure we don't go 17 hours. The one thing I'll say, though, is that I tried to communicate this during my meetup the other night. Everybody, you get a two-week free trial. You got to go try Cursor. [07] And this is what I'm talking about. This is what I'm talking about, folks [Aaron laughs]. Every week, Monday morning, the way I like to start my work week is I like to go to the Perl Weekly Challenge. They got two challenges, folks, every week. Sometimes they're hard; sometimes they're super easy. The other week, it was beautiful, because Uiua has their stencil, which is like "prior" in q, which is a little bit more expensive to spell in BQN. And in APL, you got your two-wise reduction. It's beautiful, but they had it in Uiua. And it was something like: "find the minimum difference between any two elements". So, spellable in less than seven characters across all the array languages (q [is] gonna take a couple more, folks, because we got keywords). But the thing that I do is: you copy and paste the examples, which come with square brackets, so that I'm unambiguous (technically, they're just brackets, but I've actually been told that different countries refer to brackets and braces differently). So, square brackets with commas: basically Python lists. And for months, folks (I think I've solved a couple hundred problems over the last couple of years), you copy and paste that into your editor. And depending on the array language that you're choosing, you gotta go in manually. [For] BQN, we got the chevron kind of angle brackets. In APL, we gotta delete them and the commas, because we got space-separated. [In] Uiua, we got underscores ... [sentence left incomplete].
01:18:52 [AB]
No, no, that's not true. You can just leave the commas in.
01:18:54 [CH]
You can leave the commas in if you want, but I don't like those ... [sentence left incomplete]
01:18:56 [AH]
That's non-idiomatic.
01:18:58 [CH]
Yeah, it's not idiomatic. I don't like it, and [you're] kind of executing an expression there. In Uiua, you got two different ways: you got the underscores, or you can leave the brackets, but you gotta get rid of the commas. Cursor.ai: you're probably thinking, what is he going on about all this for? I kid you not. Cursor.ai now: you copy and paste that in and you delete one bracket and replace it with whatever you want. It immediately anticipates that, for the two or three examples that you have, it needs to replace the rest. You delete one comma; it then anticipates you need to delete them all. And then, because there's two problems (it has like a short-term memory), when you go to solve the second problem and you copy and paste it, you literally have to do nothing, and it suggests to you: "do you want me to format these arrays the exact same way?" It anticipates that you have this name of the function. It only saves you like a couple minutes, but the joy and delight [Aaron laughs]. So it's not helping you do anything in terms of algorithm solving or code writing. It is just doing a mechanistic replacement of syntax to get your little array in your array language of choice. And I can't begin to tell you how amazing it is to not have ... [sentence left incomplete]. Because that doesn't require your brain. You're just a monkey. You're a monkey right there, and it's taking away the monkey work. Even if that's [all] AI gets me, that to me is worth whatever it costs a month: 20 bucks, 10 bucks. Think of that. Anytime you're doing that, like if you're aligning the comments at the end of lines (I was building a formatter for it), it figures it out. It's just like: "oh, I see you're aligning your comments; would you like me to do that across your whole file?" Amazing, folks! My monologue is over. I hope you're convinced that even if you don't want to use it for algorithm solving, when you're writing code, a lot of the time you're just doing this kind of mechanistic syntax removal or shifting of something. AI can anticipate that.
01:20:44 [ML]
Well, I mean, it really depends on your use case because I'm not copying in a bunch of arrays from different places because I don't do like coding challenges [chuckles].
01:20:54 [CH]
Marshall's not convinced, folks. We're zero for one; zero for one [laughs].
01:20:58 [AH]
So what about your editor macros, and just pasting into an APL text window?
01:21:04 [CH]
I'd have to make an editor macro for every array language. I write them all: Uiua, we're talking k, we're talking q, Dyalog APL, Kap.
01:21:13 [AH]
No, no, no, you don't make it. You do it on the fly, on demand. You just do a recorded macro, right? So you just record your macro and then you hit go, go, go, go, go, go, go.
01:21:24 [CH]
Yeah. I guarantee you it's not the same, because what if you've got a nested list where you've got multiple commas and blah, blah? It's different across all the array languages. Like, yes, I could go and macro-ify this myself. But the AI just anticipates my needs and gives me what I want, to the point where, when I'm making show notes for podcasts, nowadays I do it in Cursor. It's not even coding. I'll start typing something, and if it's like a Wikipedia link, it recognizes that I'm creating hyperlinks to HTTPS links. And it'll just go and get [it for] me. I was trying to get the link for Brief Encounter (or Encounters) and I got the wrong one, to a song instead of a movie. But that was because I added an S and the movie didn't have an S. It was my fault! The AI got me the right link [Aaron laughs out loud]. I just spelt the movie the wrong way, and it's game-changing, folks. It's game-changing. You got to get on the AI train. I'm telling you, I'm telling you, folks. Take it for a spin. Take it for a spin. Cursor, free week. I should go work for Cursor, apparently! This has been like a 20-minute advertisement.
01:22:23 [AH]
Yeah, yeah. I'm telling you, "sponsored by ..."
01:22:26 [CH]
The death of search and the birth of ... [sentence left incomplete]
01:22:28 [AH]
The death of search, the rise of Conor AI [Bob laughs].
01:22:32 [AB]
I mean, I've tried a couple of times to use things like this. And I don't write a whole lot of different programming languages. I'm not like that. But I'll write JavaScript [08] sometimes. And I don't know, it probably takes issue with my way of writing JavaScript and I don't want it to change my style.
01:22:55 [CH]
Bow down to your AI overlords and just do what they want you to do.
01:22:58 [AB]
No, no! I want the artificial intelligence to serve the intelligence. I mean, I'm not trying to call myself especially intelligent here. But it's the same thing as with the laundry, right? I want it to serve me, not the other way around. If this is how I want to write my JavaScript, this is how I want to write. I don't want its comments on my style. Just because I don't write "let" and "var" and stuff like that in my JavaScript doesn't mean it should write that. Just bend to what I do, right? And if I try to use it for something that's less known, like something I don't know, but [that is] not very much used, then it doesn't understand at all. [I don't know] if I mentioned this before on the podcast: the APL Quest website. Its front end is written with something called Liquid, some language like that that's completely obscure to me. I can't really find much reference to it online either. And the only thing I wanted (and I still need this; it's been many months now) is to sort the pages differently. I want it to sort descending instead of ascending, or the other way around, or whatever it is, with a couple of pages that are not numbered that should go in the front. And I tried to use this kind of AI tool for it. And yeah, sure, it gave me a solution. Then when I tried to run it, it didn't work. So I went back to ask it: "what is this?" I got an error message on some keywords and built-in functionality that it had used. I figured maybe I needed to import this or whatever. No, no! Then it explained to me that that is the name of the built-in function that it would have, had it existed.
01:24:38 [CH]
It's like I said, 80%, 80% correct. You got to hold its hand. You got to hold its hand!
01:24:43 [AB]
And then I asked it if it could define it for me in terms of existing functionality. No.
01:24:48 [AH]
So I think part of this might just be, Conor, you love externalizing your process.
01:24:56 [CH]
I like being productive. I like being productive.
01:24:59 [AH]
But see, like, for me, I like to learn. I like to learn new things. I like to internalize the knowledge. I like to bring things into the headspace, not put them out. So for me, the AI can be a pointer for me to go learn some stuff. But even if I had AI giving me all of my JavaScript or Python code, I don't think I personally could resist going and reading all the docs and figuring out why it gave me that, and how it compares to the style. Like, if I see an API that it's calling, the first thing I'm going to do (I know instinctively) is pause right there and go read the whole API, because I'm not going to be able to help myself. With text editing, I like learning. The reason I use really, really, really simple, stupid text editors is because I like learning about the text editor too much. I spend too much time learning how to master my text editor, so I just make sure that I can't actually do anything in my text editor, and it makes my life better. I don't know. Also, the repetition: a lot of what you've described is AI helping you with repetitive stuff. I take, like, a social approach to that, and I just find ways to eliminate the need to tackle repetition in my life. I avoid unpleasant repetition, and if there is repetition, it's because I like the repetition. And so I don't want AI to take that joyful repetition and monotony from me. I want that in my life. I put that there.
01:26:24 [CH]
You get joy from deleting commas?
01:26:27 [AH]
Okay. You have no idea how good it feels to play your keyboard like a piano and just spring from comma to comma, just prrrr.
01:26:38 [BT]
Some people pop bubble wrap.
01:26:40 [CH]
That's true. That's true, Bob [laughs].
01:26:42 [AH]
So to give you [a sense of] just how obscene and obsessive this is for me: I was editing in Notepad and there were some repetitious practices. So I figured out the keyboard chording patterns that would mimic certain extra-powerful features in other editors, so that I could figure out what the rolls on the keyboard were to get my text editing features, like macros, in Notepad. So where vi will have certain combos to do certain patterns, to get to the end of the line or go to this line or insert new paragraphs or things, I figured out what the three- or five-key combos were going to be to give me that in Notepad.
01:27:17 [AB]
So I do this too!
01:27:19 [AH]
... and hit it all without the need for it. And it was fun. It's just fun. Like that's enjoyable to me.
01:27:26 [AB]
You have to also optimize it for how much you can put in the clipboard. Like, you might be able to copy some weird part of the line and things [chuckles].
01:27:35 [AH]
Yeah, yeah, yeah. Okay, okay. Conor, you've got to remember: I edited what amounts to like a 500-page DocBook XML spec in ed, manually, on a terminal.
01:27:50 [CH]
I think we might be getting into some deep truths here about the problem with array language folks, because, like, something about the typing. I was just trying to think: if I could just have, like, a sticker (I think I've said this before: a sticker on my forehead, a chip in my wrist) and I could just sit here and think the solution, and just navigate to the web page and not actually touch the keyboard [Aaron chimes in agreement], I think that would be just as enjoyable.
01:28:20 [ML]
I think that's what I do! I mean the fraction of time I'm touching the keyboard is like 10%.
01:28:25 [AH]
No, I spend so much time enjoying my input tools, coming up with the input tools that I work with. I study the paper. I study the inks, the pens, the keyboards. That's just part of the experience. You just get to play with these tactile experiences and communicate your ideas through these interesting, quirky devices.
01:28:48 [CH]
I don't know. I'm picturing a future in the next 20, 30 years where I can go for a run and I can bring my multiple monitors like holographically with me and I don't need to type anymore. I could just be like: "oh, throw me up another chart on screen two; write that using this". And then it's like Jarvis is talking to me. It's like: "well, the better way to do this would be that." And I'd be like: "sounds good to me; Ship it!" That sounds like a dream to me, whereas you're going to be finding a way to write with your expensive calligraphy set, code into your ... [sentence left incomplete]
01:29:21 [AH]
Yeah, yeah. No, I'm going to be crafting out my code with a feather quill and iron gall ink or whatever it is and just enjoy it.
01:29:32 [CH]
We're gonna have to come back to this because we're running low, we're running low on time. It's coming up to the ... [sentence left incomplete] I'm not sure.
01:29:41 [ML]
Where did the hour mark even go? Does anybody remember the hour mark? [everyone laughs] It feels like it was yesterday.
01:29:51 [AH]
Was that that whooshing that I heard? [more laughing]
01:29:55 [CH]
I was going to say that we had two questions, one from Marshall, one from Bob. Should we circle around to Adám and Stephen?
01:30:01 [ML]
Wait, I asked my question.
01:30:02 [CH]
Yeah, yeah. We got to those, but then it was already past the hour mark when we had only heard two. So I was like: "I shouldn't do that, at risk of it going too long."
01:30:10 [BT]
And then you did your AI rant.
01:30:13 [AH]
Yeah, then AI came into the chat.
01:30:16 [CH]
I mean, it's the topic everyone talks about these days, but I feel like we're in the midst of... It's like when the internet came out in the '90s.
01:30:25 [BT]
I'm waiting for the AlphaGo moment for array languages. That will be impressive. When some AI comes up with something that nobody's thought of before.
01:30:32 [AH]
Yeah, yeah, yeah.
01:30:33 [BT]
AlphaGo was a program put together to challenge Go masters. Go was one of the games where they said: "a computer will never be able to beat anybody in Go; there's just too many possibilities." It played game after game against itself, until it got to the point where, in one of the games where it was playing a Go master, it made a move that everybody just looked at and went: "that doesn't make any sense; it makes no sense; nobody would do that". And five moves later, it won the game. When they traced it back, they realized the computer had figured out something that nobody in the history of Go had done yet. So as a result, it changed the way the game is played today. And I believe the guy that challenged it is no longer playing Go.
01:31:24 [CH]
Yeah, he retired. We'll link [the] free YouTube documentary online so you can watch the whole hour. I've seen it like five times because it's amazing.
01:31:30 [BT]
It's amazing.
01:31:31 [AH]
Yeah, that was the big win for AI.
01:31:33 [AB]
No, no, no. Quite the opposite. You completely got it wrong. The big AI Go moment was when people figured out that these exceedingly superior AI Go players could trivially be beaten by humans by just doing something unexpected. That was the real AI Go moment, when you realized that the AI is not intelligent at all.
01:31:55 [ML]
There was a search algorithm that came up with a new strategy. Did it understand why the strategy worked or did it just search far enough to know that it did?
01:32:04 [CH]
Yeah, MCTS: Monte Carlo Tree Search.
01:32:06 [AH]
This isn't about AGI. [09] This was about progressing AI research forward and for a long time, AI couldn't get past certain humps. And the AlphaGo moment was them moving past a hump that was one of the biggest and most difficult challenges they'd had for a very long time. And it was extremely impressive.
01:32:26 [AB]
But it turned out that it was fake.
01:32:27 [CH]
What? It wasn't fake.
01:32:28 [AH]
But it's, I mean, certainly not AGI, but that's not the point, right?
01:32:32 [AB]
No, it didn't actually cross that hump. It just seemed like it.
01:32:35 [AH]
What? It wasn't fake!
01:32:36 [AB]
It was bullshitting well enough that people thought it had crossed the hump, but it didn't actually because it still doesn't understand.
01:32:44 [AH]
Are you saying that modern AIs are no longer capable of competing with humans at Go?
01:32:51 [AB]
I'm not saying that.
01:32:53 [ML]
Well, I think the issue is that it demonstrates that the best strategy for Go is not to have a deep understanding of the game, but to have a medium understanding of the game along with very fast search capabilities. I'm more familiar with chess engines, but I mean, this is what they do. They're just faster. They're not smarter.
01:33:13 [BT]
I don't think it was a search thing. I think it was the fact that it had played itself. It had played game against game, machine after machine.
01:33:19 [ML]
Well, that's what gives it the medium [understanding]. It has in some sense, a powerful understanding, but it's not an understanding in terms of principles or anything. It's an understanding in terms of weights that says: "here's how good this position is". But it's not an intelligent understanding. It's just a practical understanding.
01:33:41 [AB]
And I'm disgusted by this usage of the word intelligence in these contexts, because there's nothing intelligent about these systems.
01:33:47 [CH]
Yeah, it's like: do planes not fly just because they don't look like birds?
01:33:49 [AH]
Okay, but this isn't about intelligence for humans. The more people think about AGI, the more that is a red herring. This is about AI as a technical term in computer science and this is the thing that everybody keeps making a mistake about. That doesn't mean we're talking about human intelligence. We've moved past that, and we're talking about competency and ability to solve a certain class of problems using computer algorithms that we classify as AI problems. That's all this is. That's all. I don't think we are anywhere close to what is real, real, actual, fundamental, metaphysical questions about like AGI.
01:34:29 [CH]
Well, I think we're super close. We're a couple years away, folks. We're a couple years away!
01:34:32 [AB]
If you think we're so close, Conor, then you go ahead and buy the stocks in those companies that ... [sentence left incomplete]
01:34:38 [CH]
I don't have to buy them. I already own them [others laugh]
01:34:43 [AH]
You got to have more self-respect. You got to have more self-respect for yourself, you know? [laughs] You are more than just the slave to the machine. You are a rich, intelligent human being who's capable of so much more [chuckles]. You sell yourself way too short.
01:34:58 [CH]
I don't know. It's like what we're saying: just because it doesn't solve problems the way that we solve problems ... It's like the analogy I gave: just because a plane doesn't fly like a bird doesn't mean it doesn't fly, right? It still flies, and arguably flies a lot faster than birds. To compare machines to us and say that they're not intelligent ... like, they're passing bar exams, whether [or not] they have a comprehensive ... [sentence left incomplete]. And I'm sure there's humans that memorized some stuff and wrote an exam and passed it because they got the right answers to the questions, but did they have a holistic, fundamental understanding of the concepts?
01:35:29 [BT]
No, but they were able to get on the bus and travel to the exam and write the exam with a pen and paper or whatever, and then leave and go back and see their kids.
01:35:38 [CH]
Haven't you seen all the robots that are coming out? I'll send [you] a couple of videos, Bob. They're not [just] doing flips anymore. They're pouring coffee. They're putting apples in fridges. I don't know. I guess maybe I am like one of those tech doomer bros that is just so excited: load me up into a computer and I'm ready to live forever. That's what my fiancée tells me.
01:36:00 [AH]
Well, what's the term for that? Transhumanism?
01:36:05 [BT]
Yeah.
01:36:07 [CH]
Anyways, this really flew off the handle here for all those that ... [sentence left incomplete]. [laughs] Stephen just nodding his head [everyone laughs]. For those that hung on, happy 100 episodes of ArrayCast [laughs]. And it'll be an exciting next five years. I think we can say that regardless of how technology in the world progresses, there will not be a lack of topics to talk about. Yes, I think that's a point we can all agree on at least. Anyways, thank you [chuckles]. Thank you for coming on, Aaron. It's been a blast. I don't know how you're going to name this, Bob, because we're all over the place [laughs]. We leave it to you.
01:36:54 [AH]
We can just call it hallucinations.
01:36:55 [CH]
Something like that [laughs].
01:36:57 [BT]
I'm waiting for my cue for the contact.
01:37:00 [CH]
I know. And I was going to say, I'm sure there's some folks with some thoughts and comments. And if you want to share those with us or ask us any questions, you can reach us at ...
01:37:10 [BT]
[Speaking in a robot voice] contact@arraycast.com is the place to reach us [muffled laughs]. We welcome your suggestions and your comments. We read all the emails. We make all the comments. Thank you.
01:37:24 [CH]
[Sighs] You know, it's funny, we could probably get an AI right now. There's voice synthesizers like Resemble.ai. I tried it. It didn't trick my fiancee, but she did say it had an uncanny resemblance to me.
01:37:37 [AH]
[Chuckles] Are we not supposed to also say "like, share, and subscribe"?
01:37:41 [BT]
[Laughs] No, I don't do that one. You're right. We should, but we don't.
01:37:44 [CH]
I don't even know how they'd like [us]. Is there a way to like us?
01:37:47 [BT]
We're on Apple Podcasts, so I think you can like us. I don't know. We're on Google.
01:37:50 [AB]
OK. Yeah, I guess.
01:37:52 [ML]
Isn't liking just like a state in your head? You either like us or you don't.
01:37:56 [AH]
No, at this point, you've lost all human emotions. You're just feeding into the algorithm now. That's what "like" means.
01:38:03 [CH]
If it doesn't happen on a screen, then it's not real, Marshall. It's not real. There's got to be a thumb or a heart or a star, depending on the platform or the app. Yes. If you have one of those, go click it [Bob laughs]. It's been a hundred episodes [laughs], especially if you made it this far. If you literally suffered through the last 20 minutes, you owe us a star or a heart or a like somewhere.
01:38:22 [AH]
We owe them a star.
01:38:23 [CH]
Yeah, that's probably [true], yeah. Follow us and then we'll give you a return like. Anyways, thank you again, Aaron. We'll do this sometime in the future. Yes. And the virtual emojis floating up on the screen. With that, we say, Happy Array Programming!
01:38:41 [AB]
Happy Array Programming!
[music]