Transcript for September 4, 2021: Tacit Programming
Thanks to Rodrigo Girão Serrão for providing the transcript.
00:00:00 [Bob Therriault]
Hi, it's Bob Therriault here, and normally we would start off with some clever editing leading into the music and we'd be away with the show. But in this case we found out after we recorded the show that there would be a celebration of life for Larry Breed on September the 12th at 1:00 in the afternoon at the Computer History Museum in Mountain View, CA, and we'll also include a link if you'd like to attend virtually. Larry was one of the pioneers of APL. He was one of the first implementers, and we talked a little bit about his life in the second episode of Array Cast. So, because we didn't want to miss this opportunity to let you know about this celebration of life for Larry Breed, we thought we would start the show off this way, and now on with the show.
00:00:59 [Conor Hoekstra]
Welcome to another episode of Array Cast. My name is Conor. I'm your host for today and we're going to go around and do quick introductions. We've got one short announcement and then we'll hop into the topic for today's episode. We'll start with Steve and then go to Bob and then go to Adám.
00:01:12 [Stephen Taylor]
Hi, I'm Stephen Taylor. I'm an APL and q programmer and currently the Kx librarian.
00:01:19 [BT]
And I'm Bob Therriault. I'm a J enthusiast and I've been doing this for, I keep saying 19 years. It's actually 20 now and I'm just part of this podcast.
00:01:30 [Adám Brudzewsky]
That probably means that I, Adám Brudzewsky, have been doing APL for eight years by now, working professionally for Dyalog Ltd.
00:01:38 [CH]
And my name is Conor. I'm a C++ developer and a huge array programming fan, and I'm super excited about today's topic. And happy 20th birthday to you, Bob. I assume it must have been recently, if you went from 19 to 20.
00:01:50 [BT]
Yeah, it was. I think it was the summer of 2001.
00:01:53 [CH]
All right, so let's do the one announcement and then we'll hop into the episode.
00:01:58 [BT]
Yeah, I woke up this morning, which was good, 'cause then I'm on a podcast, and I was looking at my YouTube, and tangentstorm, who you may recognize if you've followed J for a while, it's Michael Wallace, he does a lot of video. He's started doing a Twitch live stream, an hour every morning Eastern Time. So from 8:00 to 9:00 in the morning he does a live Twitch stream of programming J, and he's currently working on a text editor because he wants to improve the console a bit, and it's kind of interesting. He's got everything open source, everything is available through GitHub. We will put the link in the show notes, but if you look under Twitch it's tangentstorm. And if you look under YouTube, he also repeats them on YouTube, because I think Twitch only keeps them up for two weeks, so if you want to look for past shows, they're there. He's just started, it just started this weekend, but it looks really interesting, and I talked to him, I texted him on the J chat, and he's more than willing to have us publicize it. So hopefully by the time this podcast hits the air, he's still doing it and you'll get a chance to see some of his efforts. It's quite neat to see somebody live program in J.
00:03:13 [CH]
Awesome, that sounds really cool, and it'll be cool to see where that goes if he keeps it up. All right, so for today we don't have a guest. We are just going to be chatting about a specific topic that I brought up and expressed interest in, and everyone was super excited to talk about it. So today we're going to be talking about tacit programming, otherwise known as point-free programming, although I think in past episodes we've noted that there are subtle differences between the two, and we're going to talk about some questions that I have pre-loaded up: the differences between tacit programming across the different array programming languages. But I guess first let's just start with a definition of what, very briefly, tacit programming, AKA point-free programming, is. So who wants to take this one? All right, I'll nominate Adám, and then Stephen and Bob can color in his explanation if they feel like it.
00:04:14 [AB]
Tacit programming. I'd like to define it in terms of the contrast with what's informally called explicit programming. There's a lot of overlap between programming and mathematical formulas. You always have these variables or names of things, and then you have some symbols tying them together, and that's the whole thing. There you have the names, and then you have some operations you're doing on them, or you're stating some things about them, and so too in programming you mention names for things. It could be names of things that are not defined yet, like an argument, or it could be names of known things. And the important thing in tacit programming is that you don't state the names of the things that you're working on. Everything is expressed in terms of function application. And so I guess we're going to get into more details of how that comes out in practice. But instead of saying that `a = b`, you would just say equals, and then the `a` and `b` are implicit. They're tacit; they're understood.
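For illustration, here is a minimal sketch of that contrast in Dyalog APL, using the familiar average example (session output shown for a small vector):

```apl
      avg ← {(+/⍵)÷≢⍵}    ⍝ explicit: the argument ⍵ is named inside the braces
      avg 1 2 3 4
2.5
      avg ← +/÷≢          ⍝ tacit: only functions appear; the argument is never mentioned
      avg 1 2 3 4
2.5
```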
00:05:27 [CH]
Well, first I'll ask: Steve and Bob, do you want to add anything to that before I go on a little mini rant slash tangent?
00:05:38 [BT]
I'm looking forward to the rant actually, so I'm not going to say anything at all.
00:05:42 [ST]
Well, I want the rant too, but it occurs to me, having been in this game a long while, that I've watched this as part of, I guess, the move of functions becoming first-class objects. When I first started programming, you assigned values to names for data variables: here is a variable and this is its value, and you defined functions with their own syntax, and they seemed like two entirely different things. And then when functions become first-class objects, you just say this name, and there's the assignment operator, and it's this function. Oh, that's kind of strange, you could do that. And then it turns out that as part of this you can assign primitive functions, so you can say plus is assigned the plus symbol. And then it turns out you can assign combinations of functions and values to names. And all of this without the syntactic notation which says this is a function and these are the arguments. So that's kind of how it appears in my world.
00:06:55 [CH]
Yeah, I completely agree that it is extremely nice to be able to just go sum equals, AKA the arrow in APL, plus slash, instead of having to create a lambda or a dfn in APL, which would be sum, arrow, brace, plus slash omega, brace. What is that, five characters? You've basically reduced the length of your function by 60%, because you've just omitted what I've started to call the ceremony, all the noise: the omega, the braces. That's just ceremony that you need to put there, except that in APL you don't actually need to put it there. You can just write plus slash. That's what's really important. The ceremony is never important; it's always just there because you need to get things to, I guess we don't really say compile in APL 'cause it's an interpreted language, but you have to put it there to get it to work. It's boilerplate, yeah, boilerplate, except I like ceremony better, 'cause boilerplate sort of implies it's stamped-out code that you copied from Stack Overflow, and ceremony is more indicative of the fact that it's just unnecessary: in some different version of a programming language you might be able to leave all that stuff out, which APL and J and a lot of these array languages that have support for point-free or tacit, and Haskell as well, give you the ability to do. So my little mini rant is the story of how, so let's pause, I absolutely love tacit programming and I love combinators even more. Combinators are the name outside of the array language world for what are now called trains in APL and J. At one point I knew about combinators and then I was very confused, for the longest time in APL and J, by trains and hooks, and it's an odd syntax that's sort of different from the rest of the language, so when I started to learn it, I was just very confused. Especially, I watched this talk, I think it's been mentioned on one of the podcasts before, by Roger Hui. He has sort of APL in sixteen functions or something like that. We'll find it and add it in the show note links, but the very first one he shows is average. So this is supposed to be introducing people to how amazing the language is, and I couldn't even understand his first example. It looks impressive, but he was explaining it, and because it's a 40-minute or 60-minute presentation with sixteen examples, he goes through it pretty quickly, and that's supposed to be one of the easier ones. So the point being is that, at least for me, when I first showed up at these trains, I didn't really know how to interpret them until I realized that they were just combinators. Which, of all things, I managed to track down someone confirming my belief that they were the same thing on a Tcl blog, Tcl being the programming language. But the very first time I came across a combinator was flip in Haskell, which is basically the equivalent of the commute operator in APL, the little tilde with two dots on top of it, and that actually has like three different meanings now, which is why the commute operator is probably my favorite glyph in all of APL, because it represents three different combinators at the same time. It's constant, it's flip, or what's also known as the C combinator, and then it's also dup, I believe. So yeah, it's three combinators in one, which probably sounds confusing to the listener, but it's very easy to read once you get used to it.
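As a minimal sketch of those three roles of commute (⍨), assuming Dyalog APL (the constant case needs version 18.0 or later):

```apl
      5 -⍨ 2        ⍝ swap: evaluates as 2-5 (flip, the C combinator)
¯3
      +⍨ 4          ⍝ selfie: one argument is used on both sides, 4+4 ("dup")
8
      42⍨ 1 2 3     ⍝ constant: an array operand ignores its argument entirely
42
```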
But so there's an algorithm, and this is getting to the rant part of my story. There's an algorithm in C++ called adjacent difference, which is a very simple algorithm. It takes a list of things that can have a binary operation applied to them. By default it's minus, so if you have a list of numbers 1, 2, 3, 4, 5 and you apply adjacent difference to it, it's going to subtract each of the adjacent elements from each other. But the catch is that it subtracts the element on the left from the element on the right. So you might think, naively, that adjacent difference for 1, 2, 3, 4, 5 gives you four negative ones, but it actually gives you five ones. The positive ones come from the fact that you're going 2 - 1, 3 - 2, 4 - 3, et cetera. The fifth one is because the way it works in C++ is that it copies the first value from the input sequence to be the first value of the output sequence, which is a big mistake in my opinion, because if you end up doing adjacent difference on 10, 11, 12, 13, 14, you end up with 10, 1, 1, 1, 1, and in a lot of programming languages the equivalent of this algorithm just gives you back n - 1 elements, so you don't end up with that 10 or 1 thing. So in Haskell they call this algorithm mapAdjacent, except instead of it coming with a default binary operation, it doesn't come with a default. You have to specify it yourself, and so adjacent difference in Haskell is just mapAdjacent with paren minus paren, which is called a section, basically a very, very succinct version of a lambda. So with basically two things, mapAdjacent plus the minus section, you have your algorithm, except that the minus by default doesn't do the same thing as the minus baked into the C++ algorithm. It's going to go 1 - 2, 2 - 3, 3 - 4 and 4 - 5, which gives you back four negative ones, and so you go from having this beautiful mapAdjacent minus to having mapAdjacent with, now, a lambda in Haskell, which, just to bore the listeners, is paren, backslash, x y, arrow, and then you have to manually reverse the x and y, which are your two arguments. So instead of going x - y, which is what the section does, you have to go y - x. So you have this really beautiful, succinct implementation of adjacent difference, but then, because you have to rearrange the way that you're passing these two arguments to minus, it messes it up. And then one day, and it's just an amazing day, I discover the C combinator, which is called flip in Haskell, and all you have to do is basically pass that minus section to flip and it automatically flips the way you're passing those arguments in. And there's absolutely nothing like this in the C++ standard library. This was the first time I stumbled across a combinator, and I didn't know it was called a combinator. That solution with the lambda is no longer tacit, because the lambda is explicitly mentioning x and y, whereas now, because you're using the C combinator, or flip, you can just go mapAdjacent, flip the minus section, and you have a point-free solution. And this was the start of me falling in love with combinators, and the story goes on and on, but I'll stop there. The point being that these show up all over the place, and in my opinion that mapAdjacent flip minus is so much more elegant than the mapAdjacent, you know, paren, backslash, x y arrow, y - x, end paren.
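The same move exists in the array languages: APL's n-wise reduction gives the pairwise differences, and commute plays exactly the role that flip plays in Haskell. A small Dyalog APL sketch:

```apl
      2 -/ 1 2 3 4 5      ⍝ pairwise reduction: 1-2, 2-3, 3-4, 4-5
¯1 ¯1 ¯1 ¯1
      2 -⍨/ 1 2 3 4 5     ⍝ ⍨ swaps the arguments of minus: 2-1, 3-2, 4-3, 5-4
1 1 1 1
```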
There's so much more noise in that explicit version, when really the flip tells you exactly what you want to do: you just want to flip the arguments. It's communicating to you, in English, what you want to do. And then you might say, well, APL is so far from English, but the commute operator, commute is just another word, like commutativity, for that rearranging. And so I'll pause there, but hopefully the listeners didn't get completely overwhelmed by that. The point being is that I think the combinators and the tacit forms that exist in APL and J are just incredible, and they lead to extremely elegant code, and I'm very, very jealous of languages like these when I'm in C++, because there's not a lot of this. I think there are a couple, but not to the extent that there are in APL. So I'll pause and I'll let folks respond. Yeah, Bob, go ahead.
00:15:20 [BT]
Yeah, well, picking up on your English comment, what's really neat is in J it's actually called passive, and that goes back to the passive voice. It's actually reversing the two arguments, so instead of going active voice in the English language, you're going passive voice. So rather than doing something, you're having something done to you, which I always thought was kind of interesting, that they would name it that way. And I think if you duplicate the argument it's called reflex, and that's all dependent, you were talking earlier about how you have one symbol that can do two things; all it's based on is whether you've got two arguments or one argument. If you've got one argument, it's going to duplicate it right and left, and if you've got two arguments, it's going to swap them. And the thing I always think about when I'm thinking about those is, I think it was in J for C Programmers that Henry Rich talked about it being like fitting pipes together. You know where you start, you know where you want to end up, but you've got all these tools that you can fit these pipes together with, and in most languages you don't have those kinds of pipes that go down to the granular level where you can actually say, oh, I just want to flip these two arguments here, and that's a pipe, and I've just got that symbol to do it. Other languages, I think, tried to go a little bit higher level by hiding all that, but as a result of hiding it, you don't have the control within the function. So that's what I really like about tacit programming: you're getting right in and you're starting to manipulate, and essentially I guess you're composing the functions and layering them on top of each other, and at the end, you start from your start arguments and you end up with your result. And if you've layered everything the right way through the process, it comes out the other end. And it's really consistent, because the people that have actually implemented the language at each stage have tested all that stuff out for you. It's very powerful, and when it comes out the other end the way you want it, it's very gratifying.
00:17:28 [AB]
I really like that metaphor of the pipes, and it made me think of what Conor was speaking about. In APL today the symbol is a little frowny face, or tilde diaeresis, that wavy line with two dots on top, but that's not actually the original symbol Iverson chose for this very functionality. It was actually a U-shaped pipe. That was the symbol he used, so I'm very sure this is exactly the metaphor that Iverson had in mind: it's exactly this plumbing. And there is more plumbing than just that. There are T-shaped pipes where you kind of split up things or combine two things into one, and then there's this U-shaped thing which is either leading something back to itself or crossing one thing over to the other side, switching sides. And I think that's very much how we think of it. And I wanted to add to what Conor was saying earlier about the trains being these combinators: it's not just the trains. Trains are one form of combining functions, but what in APL is called operators, specifically a subset of the operators, and in J is called conjunctions, and I guess we could say some of the adverbs as well, or in BQN they're called modifiers, 1-modifiers and 2-modifiers. Some of those are what you could call compositional operators. Those are operators that do not involve any type of computation. Rather, and that's where the problem comes in, they only dictate how arguments are being applied to the given functions. And so trains are kind of special in that, for these languages, operators can only combine up to two functions, and trains allow you to combine three functions in a particular pattern. But other than that, the concept is the same.
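A small sketch of two such compositional operators in Dyalog APL (assuming version 18.0, where ⍤ between two functions is "atop" and ⍥ is "over"); they combine functions without naming any arguments:

```apl
      (⌽⍤⍳) 5         ⍝ atop: apply ⍳ first, then ⌽ to its result
5 4 3 2 1
      3 (+⍥|) ¯4      ⍝ over: preprocess both arguments with |, then apply +
7
```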
00:19:34 [CH]
Yeah, one of the things where I think this analogy works perfectly is this idea of building up a pipeline and composing these pieces together. I might have mentioned this before: there's a really good talk by Scott Wlaschin, who's a functional programmer that primarily operates in the F# community. I believe the talk is called The Power of Composition, and he introduces this idea of railroad programming, where he uses this example of three different pipes. One takes an apple and outputs a banana, and the next one takes a banana and outputs an orange, and you can plug these together and then you get a function that inputs an apple and outputs an orange, if I got that correct. So the banana disappears; there's a banana connecting them together, but because you composed them, you just see the end result. And a lot of the time, making use of these combinators in APL, and I'm using that to refer to both the trains and sort of the operators, you can manipulate functions such that where you might need to use parentheses and sort of mix things up, you can use the commute to sort of flip the pipe around, so that you can still build up a linear expression that you're reading from right to left. Like the example that I saw the other day in one of Rodrigo's videos: he was using `without`, which I actually didn't even know. It's basically the dyadic form of the tilde. I think in this case he had a string and he wanted to remove all of a certain character from it, and so it takes basically a rank-1 vector or string as the left argument, and then it takes, I think in this case, a scalar on the right. But the point is it had two different things, and in order to use it the way he wanted, you couldn't just add it to the front of your expression, so he had to use a commute. And by doing that you're avoiding having to put parentheses and then make your eyes skip around to figure out how you're going to evaluate this. You just get to start at the right and read right to left, which is backwards from how we read textbooks, but we're still used to reading things linearly, and I think as soon as you can get past the fact that you're starting from the right hand side... and actually that's a Western, Eurocentric view. There are certain countries and histories that have historically read from right to left, so I should take back what I said. For some people, that's going to be what they're used to, and everything else is backwards. So yeah, I love that pipeline, or sort of railroad, metaphor. And in a way it's the opposite of Lisps: in Lisps you're always going to the innermost thing, evaluating, and then sort of going out and out and out, whereas in APL, if you use these combinators and tacit programming, you can maintain sort of a linear flow, which I find a lot easier to read than skipping around.
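A small Dyalog APL sketch of that `without` plus commute pattern (the particular string is just for illustration and may not match Rodrigo's exact example):

```apl
      'mississippi' ~ 'i'     ⍝ dyadic ~ is "without": remove the right argument's items
msssspp
      'i' ~⍨ 'mississippi'    ⍝ ⍨ swaps the arguments, so the data can stay on the right
msssspp
```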
00:22:26 [BT]
And you can always revert to parenthesizing if it makes sense, because there are certainly times within J that you need to put parentheses around your tacit code, because if you don't, it's not going to evaluate in the right order because of the right-to-left precedence. At least for verbs, of course. With operators in J, it actually does the opposite: it goes left to right, which takes a little while to get used to, but when you actually get used to it, it actually makes it even clearer what you're doing when you're putting things together. And it's more convenient: you're not having to add a lot of extra characters. It kind of works the way you would expect, except that when you've been trained on verbs, it's not working the same way, which, if you really want to experience it, go in and play with it a bit. You'll get what I'm saying, and if you've done J for a while, you'll understand that completely. It just does what you expect. But one of the best things I've ever done was spend some time actually working through the J parser, because that, in the end, is exactly how things are evaluated, and it's not difficult, it's just a process, and when you get that process ingrained in your way of looking at a J sentence, things become much clearer, much quicker.
00:23:46 [CH]
So maybe we should... I know right before we hit the record button, Stephen, you were talking about a little bit of history and conversations that you've had with Ken Iverson on the topic. Maybe you want to bring that up and delve into that a little bit.
00:23:59 [ST]
Ken frequently used to get challenged by people saying that APL was too hard to read, and there's kind of a standard response to that. It's like, yeah, it's harder to read a line of APL than it is a line of some more commonly used language, but the standard rebuttal is, well, your brand X language isn't actually doing very much in that line. So the premise is that the APL line is worth studying 'cause there's a lot going on, and that probably corresponds to what we like about it. We've got a lot of stuff going on in the line, a lot happening, and you can get a lot done with a single expression. So Ken's line with that, and it's kind of core to his thesis on notation as a tool of thought, was that we confuse what is difficult with what is unfamiliar. And with that he's going back to the mathematician Whitehead's famous line: we're being told all the time to think more, but actually we need notations which enable us to think less and at a higher level. So if you follow this line, as I generally have, it pushes you towards using what we think of as the more advanced forms, the more abstract forms. An example of that would be that I would commonly use matrix multiplication in APL to get the sum of multiplying two vectors together, even though it's arguably a reduced form of matrix multiplication. And I do that for two reasons. First, because the habit of using matrix multiplication would help me see other parallels and possibilities, improve my vision as to what's going on; and secondly, because I know that the interpreter writers scan what's going on in the interpreter to see what programmers are using it for, and when we use the more advanced forms, we basically give them more scope, more scope to find optimizations. So when I encountered tacit programming, I was thinking, here is the next step. It's another jump in abstraction and so forth, and I should use this, partly because it really appeals to me aesthetically. It's like, wow, I really love the idea of being able to write code which doesn't have function syntax in it, doesn't have arguments; we're continuing to boil things right down. But also because I was thinking this is going to lift my insight, my level of insight, and open up new opportunities for the interpreter writers for efficiencies. So the other week when I asked Henry if he thought that tacit was a step too far in programming, it was kind of a loaded question. It was a loaded question because, if you feel, as Henry does, that it does go too far, that it doesn't help you, I mean with all the qualifications and the nuances that go with that, then you run up against Ken's basic premise. It's not merely a question of confusing the unfamiliar with the difficult. In Henry's view, the tacit is actually too difficult, practically. Most of the time, or much of the time, it doesn't help you; it's not a good way to do your coding. We should step back and not always go to the most abstract, the most extreme. So I was interested to hear that there are at least other factors in there besides just getting up the level of abstraction and expressivity. If you go with Henry's line, then there are things that are not only unfamiliar, they actually are difficult, and you're better off doing something an easier way.
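For the vector example Stephen mentions, a minimal Dyalog APL sketch (the values are just for illustration):

```apl
      v←1 2 3 ⋄ w←4 5 6
      +/ v×w       ⍝ sum of the elementwise products, spelled out
32
      v +.× w      ⍝ the same thing via the inner product used for matrix multiplication
32
```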
00:28:30 [CH]
Bob, Adám, do you want to add anything before I hop in?
00:28:35 [BT]
Yeah, well, one of the things I thought was interesting in Henry's answer as well, because partway through his answer, from what my recollection is, he started to say, oh yeah, but I guess within explicit there is tacit, and that is really true. And after he said that, over and over again as I program, I'm looking at it and going, wow, I'm halfway through this quote-unquote explicit expression, and suddenly I'm dropping in these tacit adverbs, because they're like little chunks that I can just drop in. I know exactly what it's going to do, and it might have a y coming in, or x and y arguments coming in, but in this one little section it would be completely tacit, and I could take it out and drop it in. So actually I wonder whether part of what Henry is talking about is the fact that, as much as he's programmed in J, he didn't start out as a J programmer. He didn't start out as an array programmer. And possibly it's just that level of your brain being able to chunk those things: tacit may be a lot easier for somebody who has only thought tacitly, and they may have a wider scope before they hit cognitive overload. There may be a limit, I'm sure there is a limit for people, but it might not be as narrow as for a person who's always programmed explicitly, and as you get used to it, you might be able to widen that out a bit. Having said that, yeah, if I put in a long string of tacit code, I have to either put in comments to explain back to myself in the future what I've done, or it's just so obvious to me, and again we're back to that cognitive level, that I don't have to put anything in at all. But not everything I do when I'm trying to create a function is at that level. Some of it I'm sort of grasping for, and when I'm grasping for it, I might go back to it two weeks later and I'm not in the same state and it's very hard to understand. Having gone through that a couple of times, I do know that every time I go back to it and figure it out again, I'm much less likely to have forgotten it in the future.
00:30:44 [AB]
I'd like to give a concrete example of what Bob just said, the fact that tacit programming jumps in even if you don't try to do it. Maybe the most basic form of tacit programming in APL-like languages is something we all do without even thinking about it, right? Plus slash, for summation. Every APL language uses that or extremely similar syntax to do a summation, and there's actually a lot going on here, and this is tacit programming, and always was, from the very first implementation of APL. Because what's happening here is: the slash is reduce, and the plus is a function that takes two arguments, and we are passing this function, plus, to reduce without any mention of the arguments. Whereas when I have to write some JavaScript, obviously you can just sum in JavaScript, but let's say we wanted, for simplicity, to reduce over an array, over a list, in JavaScript. I would have my array dot reduce, and now I have to feed reduce the plus function. Now, I know exactly what it is I want to reduce with, I want to use plus; nevertheless I have to write x comma y, arrow, x plus y. This is an explicit definition, and you sure can write this in APL as well, and in k as well, and in J something similar. You could write an open brace and then the name of an argument, plus, the name of the other argument, close brace, and then the slash for the reduction. But we don't do that. We don't even think about it. This is just the tip of the iceberg of tacit programming.
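In Dyalog APL terms, the contrast Adám describes looks like this (a minimal sketch):

```apl
      {⍺+⍵}/ 1 2 3 4 5    ⍝ explicit operand: the arguments ⍺ and ⍵ are named
15
      +/ 1 2 3 4 5        ⍝ tacit: the plus function itself is handed straight to reduce
15
```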
00:32:38 [CH]
Yeah, honestly, while you were saying that, and I've had this thought a couple of times, this is really just functional programming. You're just programming with higher-order functions, functions that take functions as arguments. And even in a non-functional, or non-first-class-functional, language like C++, when you have your reduction algorithm std::accumulate, you have to pass it the range that you're going to sum up, an initial value, which, if you're adding things up, is usually zero, and then there's two different things you can do. You can write a lambda, which is bracket bracket, paren, auto a comma auto b, end paren, brace, return a plus b, semicolon, end brace. It's just a mess. Or you can write std colon colon plus, brace brace, which is a built-in function object that does your summing for you. Or, since std::accumulate comes with a default plus, you can just omit it entirely, but let's pretend it doesn't come with the default. It's the same jump from having to write that explicit lambda, which has every single type of brace you've got, your braces, your brackets and your parentheses, all three of them, plus some returns, semicolons, a comma in there, it's just the noisiest of lambdas across all the languages, to just writing std colon colon plus with two braces. It's not as nice as APL, but it's the same jump as going from what you were saying, Adám, brace, x plus y, or omega plus alpha, end brace. That is not at all as satisfying as plus. And it's not so much about how nice this is to code, it's how nice it is to read later. Who in their right mind would want to read all the extra noise if you could just see the plus and go, OK, yeah, that makes sense? And the other thing I was going to say, I think in response to what Bob was saying about writing explicit statements at the top level but sort of tacit statements within that explicit statement, is that I think it's especially the fork. I released a YouTube video over the weekend covering a simple problem in 16 different languages, and I go into detail explaining that a fork, AKA a train in APL or J, is known as the S prime combinator, or the Starling, or the Phoenix, or liftM2 or liftA2 from Haskell. This thing is so fundamental, to the extent that the tacit form, a lot of the time, is actually easier to read, in my opinion, than the explicit form. There was this problem I was solving the other day, where the whole problem doesn't matter, but at one point you want to take, from a list of numbers, the differences of elements that are K elements away, so like maybe three or four elements away. And at first I was trying to do it by doing some sort of n-wise reduction where you make little sublists and then you look at the first and last element, and then I had the realization: wait a second, if I just do a K - 1 rotate and then subtract the original from the rotated array, or the rotated array from the original, that'll work. And then the question is, oh, you've got to get it in that form where one list is above the other list.
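A hedged Dyalog APL sketch of that rotate-and-subtract idea (the offset 3 and the direction of subtraction are chosen just for illustration, and note that the last few results wrap around):

```apl
      (3∘⌽-⊢) ⍳6      ⍝ rotate by 3, then subtract the original: differences of elements 3 apart
3 3 3 ¯3 ¯3 ¯3
```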
And I was like, wait a second, this is just a fork where the binary operation is subtraction, and you have identity and then the rotate. And technically, because one of them is identity, that's the specialization which is a hook, which exists in J and not in APL. Maybe we can talk about that now, 'cause this was a comment that Adám had made on Slack. Let's bring up the exact comment. Adám said: meh, hooks are overrated. I'm throwing you under the bus, Adám. They are just forks where the left tine is the left function in APL slash BQN, or the left bracket in J, with a little...
00:36:56 [BT]
And that was in response to my comment that actually my favorite forks are hooks.
00:37:00 [CH]
Well, OK, yeah, that's what that was in response to. So maybe we can talk about, one, are hooks overrated, and two, and this is what I really wanted to get out of this episode, I haven't come to a decision on whether the hook-plus-cap style in J is better. I assume that Iverson always put the evolved ideas, the better ideas, in J, because he did APL first and then he did J second, so he'd been thinking about it longer. Maybe, Bob, you can explain the cap and the hook, but there are times in APL where, because you don't have hooks, you don't need to have some sort of mechanism like a cap. But I'm getting ahead of ourselves. I'll throw it to Bob. You can explain that and then we can talk: are they overrated? Which model is better? Is it APL? Is it J? Is there another model out there that we haven't discovered yet? I don't have the answer, so I'll be quiet now.
00:38:00 [BT]
OK, I can say a lot here, but I'll probably break it into parts, talking first about caps. A lot of this has to do with the fact that when you've got a fork, you've got essentially three verbs, usually, but the leftmost tine in the fork in J can be a noun as well, or it can be a cap, which is a verb that terminates. And what happens when you have a verb that terminates, that doesn't return anything, is that now your right tine, which processes the argument coming in, is just fed to your middle tine. There's no input from the left tine. So that's how in J you can use these caps to separate things when you're feeding from one verb to another and you don't want to have input from the other side; you use a cap to do that, and you can only do that from the left side. So what ends up happening when you're looking at a string of J that's written this way is you'll see verb, verb, cap, verb, cap, verb, cap, and that means the middle verbs are all getting processed right to left, sequentially; the cap just basically stops that left-side input from coming in, and then you move on to the next verb that's going to be applied to your result. Which sounds complicated, but the alternative is, I can never keep straight which is at and which is atop, but essentially it's a conjunction that will join two verbs together, and it goes in between the two verbs. So in the case of a cap, it's like you've got verb, verb, cap, but in the case of atop or at, whichever is appropriate, you've got it in between, so it's a bit more like a verb that's working between two arguments. Some people find it easier one way, and some people find it easier the other. I've actually started to find it easier to read using the cap approach, but there are certainly times that actually doesn't work, because the atop and the at in the middle can do things that the cap won't do. You have to do a lot of things with rank to make cap do the same thing. So that's the short version of just the cap part of that fork, and I'll let Adám, Stephen or yourself respond to that before we move into what I like about hooks.
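As a concrete illustration of that contrast (a hedged sketch): "sum of absolute values" in J can be written as the capped fork `[: +/ |` or with a conjunction as `+/@:|`, while APL needs no cap because a two-function train is already an atop. In Dyalog APL:

```apl
      (+/|) ¯3 1 ¯4      ⍝ two-train: +/ applied atop |, i.e. the sum of absolute values
8
```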
00:40:25 [AB]
You can have something similar in BQN. It's called Nothing, I think, which is just a tiny little dot, but it serves exactly the same purpose of filling the left slot of the three in the fork. But I think we should step one step back and make it a little bit more clear what exactly a fork is. I don't think any of us really stated what it is.
00:40:47 [CH]
Well, you can state it. Yes, I agree we have not actually stated what a fork is. I stated that there are six different names for it across different languages and stuff, but yeah, it's a good call.
00:40:58 [AB]
OK, so what we call a fork, it's more like a trident, that would be a more precise name, or a fish fork; it only has three tines on it, three tips on it. And the idea is you can really apply it two different ways. The overall fork forms a derived function; you can apply it to a single argument, or you can apply it to two arguments. The common idea between them is that the two outer functions, the rightmost and the leftmost ones, are applied first to the single argument, or to both arguments, and the results from those two function applications are then used as arguments to the middle function. So the middle function is always a function that takes two arguments. I think it's maybe easiest to give an example of this. A very simple example of a monadic fork would be what we would read as max minus min. So let's say the argument is a list of numbers, and the outer functions are the max function, which in APL languages would be a maximum reduction, and a minimum reduction, so they find, respectively, the greatest element and the smallest element. And then you subtract: the result of the maximum reduction minus the result of the minimum reduction gives us the largest minus the smallest. That's the total range, the span that all these numbers fall in. This is how it works. For a dyadic application it could be something very similar. A neat little one is an alternative to the traditional mathematical notation of plus-minus, where we have a little plus stacked on top of a minus. So of the three functions, the outer function is a plus on the one side and a minus on the other side, and in the middle you have a function that combines the results, which could be as simple as concatenation. So what happens is that there are two arguments now. The plus is applied between the two arguments, that gives us the sum, and the minus is applied between those two arguments, that gives us the difference; and then we combine those two results together, so the two results, the sum and the difference, are the arguments to the concatenation, and this gives us essentially plus-minus. We get two results now instead of a single result. And now the problem is, sometimes you actually don't want to apply two functions and have their results combined by a third, middle function. Sometimes you just want to apply functions in sequence. So it could either be that you want to apply a single function to a single argument and then another function to the result of that first application, or it could be that you apply one function to two arguments, and then you apply another function, one that takes a single argument, to the result of that. And that's where there are differences between the APL languages. So we call these sequences of functions trains, or phrasal forms, which is actually what Iverson called them originally. And we all agree what happens when there are three functions in a row: we apply the outer functions, and then the results are gathered and combined with the middle function. If there are only two functions in a row, how exactly should we look at it? So how can we fit into this pattern of this trident, this fork, the idea that you only really want two? We want a filler, a thing, something that takes up a slot out of the three and leaves only the two other slots actually active. And J solves this with what Bob called a function, but it really isn't, because if you try to use this function it fails.
It will fail on any argument you give it, so it is special syntax. It's special syntax that fills up the leftmost slot and, by way of that, leaves only two other slots. The APLs that support trains say that you can simply omit the left function, so you only have two in the sequence, and that applies one function to the result of the other. Now, why doesn't J do that? Because J uses this syntax of two sequential functions for something else, and that's the hook.
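For concreteness, a minimal Dyalog APL sketch of the two forks Adám just described (the range, and a two-result "plus or minus"):

```apl
      (⌈/-⌊/) 3 1 4 1 5 9 2 6    ⍝ monadic fork: max minus min, the range of the data
8
      2 (+,-) 0.5                ⍝ dyadic fork: the sum and the difference, catenated
2.5 1.5
```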
00:45:38 [CH]
Interesting, so really the cap is implicit. That makes a lot of sense, actually: the cap is implicit in APL because we don't have some special form for two functions, which is the hook in J but doesn't exist in APL.
00:45:59 [AB]
So then APL has to do something else to get the functionality, the passing around of arguments to the functions, that you have in the hook. I'm sure Bob will speak about what the hook does, but you can express that in a different way in APL, whereas J must have either that special syntax, or, as Bob said, you can use a composing conjunction to bind together two functions so that they actually become one. But there is no way in J, other than using the special syntax, to just have adjacent functions be applied sequentially.
00:46:34 [BT]
At least tacitly, you can do that explicitly, but not tacitly.
00:46:38 [AB]
Explicitly, yeah, of course.
00:46:54 [BT]
So, moving on to hooks. But just before I go to hooks: that left tine, which can be a cap, and Adám did a great job of explaining how J, or how the array languages, use those tines. The left tine is really important because in J, and I'm not sure whether this is true in APL as well, but in J the left tine can also just be a constant. It can be a noun, and that's one of those things that allows you a bit more freedom. Previously, when I first started out with J, you couldn't do that. You would actually need to create a constant verb that would always return a constant and put that in your leftmost tine. But soon after I started in J, they brought in the noun-verb-verb form, where you can just put a noun in that leftmost tine, and now it's just the left argument that comes into that center tine. And that's really useful. But if you take it one step further, you actually remove it completely and you get a hook. And the thing I really like about hooks is that unlike forks, which are symmetric, hooks are asymmetric, so they don't process arguments symmetrically anymore. You've got two verbs. What was your center verb is now just the leftmost verb, and your rightmost verb remains your rightmost verb. Now what's happening is your rightmost verb is the only verb that is processing your argument, in terms of the left argument or your right argument; it's the only one that's touching the right argument, and that in turn is fed into the left verb. And now, what's coming in on the other side of it? Well, if it's a monadic application in J, you actually end up copying your original argument to both sides, and that's really useful, because now your right verb can actually be like a filter. For instance, you can apply it to your original argument and end up with a Boolean string saying whether things are positive or negative, whether you're going to keep them or not, and then you can just put a copy, and then again put passive on it, the tilde operation, so that now you're reversing the two. Essentially you've created this Boolean array and you're applying it to your original array, because you've reversed the two, and now you've filtered everything out, and you've done it with just two verbs within parentheses and the argument you supply. And I think that's just beautiful.
00:49:22 [AB]
This is where I have to jump in. I mean, I think that's ridiculous, frankly, because, and by the way, you only spoke about the monadic hook, where it only takes one argument; it gets even worse when it's dyadic. So in the monadic form, what you're saying is: we have two functions next to each other. The rightmost function is applied to the single argument, the one argument you're giving this overall derived function. The left function, which kind of is a middle function, is applied with two arguments. One argument is the result of that first function application, and the other argument is the original argument. How did it even end up there? So what's happening, really, is there's an implicit identity function. If we look at it in the framework of the three-tine form, on one side we have the monadic function application, and on the other side you're passing in the original argument, and in tacit programming passing in the original argument is the identity function. So we can see clearly that the hook is nothing but the normal fork where the left tine is the identity function.
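A hedged Dyalog APL sketch of Adám's point: where J writes the hook `(% +/)` for "divide by the sum", APL writes a fork whose left tine is the identity function ⊢.

```apl
      (⊢÷+/) 1 2 5       ⍝ fork: each element (⊢) divided by the sum of all of them
0.125 0.25 0.625
```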
00:50:33 [BT]
Yeah, in the monadic case you can sort of look at it that way for sure, yeah?
00:50:37 [CH]
I want to jump in here. I understand what's being said, but I agree with Bob when he said that it's very beautiful, because it is such a common pattern in APL, because so many of the verbs and functions take masks. It's not just filters, it's partitions; there are so many functions where it's dyadic and the left argument is a mask. I completely agree that the hook is just a specialization of the fork, where one of the tines is the identity, and sure, you might need to use commute, or what was it called, passive, in J, in order to orient them. But it is a super common pattern, and even in the example that I previously stated about taking the difference of elements that are K spaces away, that's a hook. It's an identity, minus, K minus one rotate, and so with the identity there, you could hook that and do the passive thing. So even in the one example that I explicitly stated, that was actually a case where a hook could have been used. So I guess the question is: you definitely think that hooks are overrated, but are we losing something? Ken obviously put it in the language. So what are we missing, or what did we miss?
00:52:02 [AB]
Maybe we're not the ones to say that Ken missed anything; that would be hubris. But he wrote a paper together with Eugene McDonnell called "Phrasal Forms" in '89, and when he introduces things there, he actually starts by introducing the hook, and then he introduces the fork. And, as opposed to what APLers today do, he actually does use this language that Conor likes using, so he starts by saying: in combinatory logic, one of the most useful primitive combinators is designated by S, and Curry defines S, and so on, and he keeps going. And it's true that it comes up all the time. And I do sometimes think about this: OK, so it's all very nice, all these forks, but in by far most of the forks I would use, one of the two outer tines is an identity function, which means it's either a hook or a reversed hook. And there are lots of those, I should say: there could be a function that computes some kind of mask, and then that's fed, together with the original data, to some kind of processing function. And if you look at the plus-minus I mentioned before, that's for two arguments; let's say you want plus-minus on a single argument. That means you want the original concatenated with its negation, so there again one of the two is the identity function. And there are loads of those. You could see whether a matrix is symmetric over its diagonal; that means you want to see, does the matrix itself match its transpose? Again, one of them is that identity function. Is this a palindrome? That is, does the argument match its reversal? There are so many of those.
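A few of those identity-tine forks in Dyalog APL, as a minimal sketch of the examples Adám lists:

```apl
      (⊢,-) 7                  ⍝ monadic "plus minus": the argument next to its negation
7 ¯7
      (⊢≡⍉) 2 2⍴0 1 1 0        ⍝ symmetric matrix? compare it with its own transpose
1
      (⊢≡⌽) 'racecar'          ⍝ palindrome? compare it with its own reversal
1
```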
00:54:04 [CH]
I tweeted that out in J once, because it's the minus colon, pipe dot, or is that right? Something like that: it's four characters, two digraphs. It's match reverse.
00:54:20 [BT]
Just match reverse. So the right verb is reverse, and then you're just matching it, and if they match, then it's a palindrome.
00:54:28 [CH]
Yeah, but when you type it out it just looks like absolute noise, 'cause it's a hyphen, a colon, a pipe, and a period.
00:54:39 [ST]
You know, there's a version of that that is itself a palindrome.
00:54:44 [CH]
Oh, really.
00:54:45 [AB]
Yeah, in APL you can write a palindromic tacit function which tests whether the argument is a palindrome, and not only is it palindromic if you reverse the characters, but the characters themselves are mirror-symmetric vertically, so you can actually look at it in a mirror and it will still work. That's kind of cute. So, Iverson began by defining the hook, and then he defines the fork, and then he looks at how it is similar to things that you have in mathematics. The example I usually bring up is addition of functions: you have f of x and g of x, and now you can write (f + g) of x, which is f of x plus g of x. That's exactly what the fork is doing on a single argument. And I think, him taking it from some theory that had already been built up, it was just natural to write it up like this. But once it's implemented and you can play with it, and in a way APL is very much like Lego blocks, you see these patterns, you get to know them, become familiar with these patterns, and that's when you see that the hook is just a special case of the fork where one tine is an identity function. And so, sure, building on existing mathematics is all great, but I think APL can rise higher than the shoulders it's standing on. And you see what happened in J, right: it needed the cap for things to make sense, which, in my opinion, I mean, I'm maybe a notation extremist and I want the absolute purity, but I can't stand the cap, because it's an abomination in my eyes. It's a function, but it's not a function. You get an error if you actually try to use it on its own. You can't reason about it; it's just something you have to get used to.
00:56:59 [BT]
Well, actually, that's one of the uses it has: if you feed an argument to the cap on its own, it is a domain error. It is actually used that way when you're declaring functions, so that if you didn't want a function, say, to work monadically, you put a cap in that position. So any argument coming in gives you a domain error. It's telling you, in that case, exactly what you want to know: don't use monadic arguments here, only use dyadic.
00:57:27 [AB]
It's just abuse. If you define your own user-defined function that always errors, you can't use it in the same position. You can't take the cap and wrap it in your own function, give it a name, wrap it in braces and make it its own function. It's not natural.
00:57:45 [CH]
Stephen?
00:57:46 [ST]
I want to come back here to something Bob was saying earlier, which I really, really liked, and this was going back to "is tacit too far?". Bob was suggesting that people who were not raised as we were will be able to see further; tacit may seem normal to them. And this is going right back to the thought of Newton that Adám was just referring to: if I have seen further than other men, it is because I have stood on the shoulders of giants. So if that goes, then the forms that we find difficult and struggle with, other people will in the future see more uses for. They'll see more that they can do, they'll see further than we have, and I'm going to call this project the adventure. The adventure is about finding where this goes, and I'd say the Iversonian languages are not the only ones; we're seeing increased levels of abstraction in other programming technologies, but there they're arguably led by developers saying, oh, I'm getting sick of doing this over and over again, is there some higher abstraction I can use for what I'm doing? I would argue that Ken's line is kind of mathematics-led, and it's led to a pattern I think I've seen, where the implementers and language designers kind of see that this is possible, hand it over to the developers, and say, see what you can do with it. And sometimes it's left us feeling quite puzzled. It's like, here's an apparently very powerful looking tool, but we're not sure what we're supposed to do with it. And the truth is, I think, that the people who put it in our hands didn't know what we were supposed to do with it either, only that it seemed like a next step. I was talking with Whitney about this a couple of years ago, specifically about the uses of arrays for reduction. Coming from an APL background, I'm used to reducing arrays, and functions are what I reduce arrays with. But in q and in k, there's a kind of fundamental principle behind them that functions and arrays are kind of the same things, so you can use a matrix to reduce a list, because you can think of a matrix as in some ways like a binary function: it's got two indexes, and the syntax of k and q allows you to apply it using the same notation as you would use to apply a function. And it turns out that if your matrix can be thought of as representing a finite state machine, you can use it to reduce a list, which produces some interesting things. So I was talking with Whitney about this and he commented, he said, "I'm a pretty good vector programmer, but there's stuff to do with matrices I've probably never really looked at or thought about, but I can imagine people who've been brought up with this language will go much further than I can." So that's my concept of the adventure, several decades into the project, and I'm putting it out there.
01:01:33 [CH]
Bob, Adám thoughts?
01:01:34 [AB]
We need a moment of silence. Let it sink in.
01:01:40 [BT]
It's an adventure. No, I think there is a lot of truth to that. As I've become more used to some of these things, you see ways that they can be used. And again, I'm projecting a physical meatspace, you know, my physical space, the way I use tools physically, onto what I'm mentally doing, a lot of the time, and I'm not sure whether that's a good cognitive approach. But what I do find is that when I notice something and I start to use it mentally, it is like a muscle. I start to use it more, I see more opportunities, I use it more that way, I see other ways to use it. It becomes a tool that I use in that form. Now, whether or not thinking about mental constructs in that physical way is right, it is something that I have noticed about the way my mind works.
01:02:44 [CH]
I just had a metaphor pop into my head while you were saying that, Bob, and it ties back to what you were saying earlier, Stephen, about talking with Ken: people constantly say that APL is hard to read. Combining that with what Bob was just saying... this is totally random, but UTMB, which is like the Mecca of ultrarunning, just happened over the last week; it's a 170-kilometre race. Anyways, I'm a big runner, so I was pretty jazzed by it and ended up watching some Eliud Kipchoge, who's the individual that ran the sub-two-hour marathon; there's a movie coming out about him. Anyways, I happened to be thinking of Eliud Kipchoge, the world's fastest marathon runner. If you're trying to run with him, probably your response is going to be that it is way too hard to keep up with this guy, he's way too fast. But he's also a god amongst men, or you know, a god amongst people, because he's just so fast; he's trained his whole life to do this one thing and he's the best in the world at it. And there's this analogue: sure, we all shouldn't be aiming to be Eliud Kipchoge, because that's not going to happen. But if we start to train so that we can run faster, we come to prefer moving faster, and the analogue is thinking faster and being able to do more with your mind. At a certain point, if you get to a level where you no longer have a team of people that can keep up with you, is that a bad thing? Or should we be trying to build systems where we can train people to read these array languages, read tacit form, so that we can all run and think at this level; we're not running, we're using the mental metaphor now. It's a tool for thought; you can think at this higher level. I don't know, I think there's definitely something there. Yes, it is difficult, but, and I think Stephen, you were the one that said it earlier, it is worth the time to study, because there's so much more in a single line. Yes, it's going to take longer to read that line, but with time it actually won't be that difficult, and in the amount of time you're going to spend reading, you know, five lines of APL or J or K, per concept you're going to be able to read and understand that piece of code faster and more comprehensively than you will when you're skipping through three different files that are each 100 lines long, because you can't even fit it all on one screen, so then you've got to cache that stuff in your brain and keep track of what's going on. I don't know, there's definitely something there. Bob?
01:05:34 [BT]
Well, to complete your running metaphor: I'm not sure whether they went under two hours, but for a number of years the fastest marathoner was a wheelchair athlete.
01:05:46 [CH]
I did not know that at all. I mean, a part of that makes sense.
01:05:54 [ST]
Wheels work.
01:05:54 [CH]
So, but that's still... yeah, there's something there to add to the metaphor. That's also extremely impressive.
01:06:06 [BT]
I believe it was Rick Hansen, but I'm not sure whether he retained that speed. I don't know whether he broke two hours, but I know for a number of years he was the fastest marathoner.
01:06:18 [CH]
Yeah Canadian, go Canada.
01:06:20 [ST]
I guess if my adventure metaphor has got legs, so to speak, it's partly in acknowledging that we don't know where this is going. And there's an analogy here with mathematics, I think: some math is done to solve a real-world problem, you've got a difficult problem and you need to do some original math to solve it, but a lot of it is like sex: we know that there are results, but that's not why we do it.
01:06:48 [CH]
Alright, that's probably a pretty good way to start to wrap up. I think we are past the hour mark. There are two things, actually three things now that I think about it, that I just want to briefly mention. So, one: you mentioned the '89 Phrasal Forms paper; it's my favorite paper in APL that I've come across, and we definitely have to add that to the show notes. I mentioned that the Tcl blog was the first place that I got the confirmation, but the '89 Phrasal Forms paper was where I actually got it 1000 percent, because Iverson was explicitly saying, oh yeah, this is the S combinator from combinatory logic, and I was like, I knew it, I knew it. Because I think one time I showed up at the British APL webinar and I asked a roomful of folks, are these the same things, and the response was no, Iverson just came up with this, and I was like, I'm pretty sure. Anyway, so that's the first thing; we'll throw that in the show notes. Second thing is, I mentioned Starling and Phoenix at one point. All of these combinators have bird names that come from a book called To Mock A Mockingbird. Even the identity tine that we've been talking about, that's known as, depending on the list of birds, the idiot bird, which is from some BBC program that was way before my time; some people don't like that, so they just call it identity. But left and right, which are the overloaded dyadic versions of identity in APL, those are called the kestrel and the kite. The hook is the Starling; the fork is either Starling prime or the Phoenix. Flip, that we mentioned earlier, is the Cardinal. I'm not an expert in all these things and I haven't even read the book; I just think it's cute that all these combinators have bird names, and I tweet them out every once in a while.
01:08:43 [AB]
That's something I've wanted to do for a long time, since I heard you speaking about these: to go through them, because they're always written in something like Haskell, and I can't read that stuff. And they say APL's unreadable? But anyway, I would love for somebody who actually understands it to explain it to me in mortal language and go through it with me, and I'll write down the equivalent in APL if I can, and make a dictionary of those bird names and what they are in APL.
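As a starting point for that dictionary, here is a rough, unofficial sketch of the usual bird names alongside plausible Dyalog APL equivalents; these are the commonly cited correspondences, not anything the speakers settled on here:

      ⍝ I  Idiot (identity)       I x = x                  ⊢  (monadic)
      ⍝ K  Kestrel (constant)     K x y = x                ⊣  (dyadic left)
      ⍝ KI Kite                   KI x y = y               ⊢  (dyadic right)
      ⍝ W  Warbler                W f x = f x x            f⍨ (monadic selfie)
      ⍝ C  Cardinal (flip)        C f x y = f y x          f⍨ (dyadic commute)
      ⍝ B  Bluebird (compose)     B f g x = f (g x)        f∘g, or the 2-train (f g)
      ⍝ S  Starling (J's hook)    S f g x = x f (g x)      the fork (⊢f g)
      ⍝ Φ  Phoenix (S', the fork) Φ f g h x = (g x) f (h x)   the fork (g f h)
      (⊢×-)3       ⍝ Starling: 3 × (-3)
¯9
      (+⌈-)3       ⍝ Phoenix: (+3) ⌈ (-3)
3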
01:09:11 [CH]
I think Marshall Lochbaum of BQN, that we had on, did that once. He found the list, and then on The APL Orchard, maybe we can find it by searching, he listed out all of the combinators that exist in BQN. Maybe we can do the same thing with APL at some point. But the last thing I will say, and then I'll open it up if there are last comments people want to make, is that we haven't even gotten to this, so we'll definitely have to do a Part 2, Part 3 of this; maybe we'll bring on other folks that have differing opinions, whether it's tacit till we die, or tacit all the things, or I don't believe in tacit programming. I have in the back of my mind, not a theory, but an idea that there is something more to this than just what you prefer, whether it enables you to think more powerfully, or it's more readable or more beautiful: using these combinators can enable a compiler to optimize certain pieces of code way more aggressively. And this comes from having written the solution to Kadane's algorithm, which I will save for another episode, but basically you end up with a fork that is right max plus. So in Kadane's algorithm you end up basically doing this running sum, but you always want to make sure that you're effectively resetting your running sum if it gets negative, and you can do that with either zero max plus or right max plus. And if we were to create a language or compiler that could identify that certain operations are associative and commutative, some of them only associative, some of them only commutative, and you have basically a scan followed by a reduction, I am almost positive you can write a compiler that can see through all of that and compile it down, using basically graph-compiler techniques, to a single parallel reduction: either a both-associative-and-commutative reduction, or just one or the other, so you have all flavors of the reductions. You can see that all the operators being used are primitives in the language, and those are all tagged with their associativity and commutativity, so you can end up with what would potentially be the fastest language in the world, because of the fact that you can optimize it so aggressively. It's just a theory that I have in the back of my head. I'm not sure if there's anyone doing research out in the world on this; if you happen to be listening to this and you've had the same idea, please tweet at us or contact Bob and he will relay it to us. Adám, you were going to say something.
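To make the Kadane's example a little more concrete, here is one rough way to compute the maximum subarray sum in Dyalog APL. It uses prefix sums rather than a literal scan of the (⊢⌈+) fork, since Dyalog's f\ reduces each prefix right to left rather than folding left to right, but it keeps the scan-then-reduce shape, and ⌈ and + are exactly the kind of primitives whose associativity and commutativity a compiler could be told about:

      ⍝ maximum subarray sum: for each position, the best sum ending there
      ⍝ is (prefix sum at that position) minus (smallest earlier prefix sum)
      MaxSub←{p←0,+\⍵ ⋄ ⌈/(1↓p)-⌊\¯1↓p}
      MaxSub 3 ¯4 5
5
      MaxSub ¯2 1 ¯3 4 ¯1 2 1 ¯5 4
6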
01:12:10 [AB]
Well, it ties into this, as you say; yeah, we definitely need a Part 2 and a Part 3 on this. But something I usually call black magic in Dyalog's interpreter is its ability to run certain functions in reverse. What that means is that you can state the function and then use the power operator, which normally you tell to apply the function five times or something. But you can tell it to apply the function a negative number of times, and it automatically, magically if you want, figures out what the inverse of the function is. And that basically only works if the function it's trying to run backwards is tacit, because then it's pure, right? There cannot be any side effects, there cannot be any names or variables, or any syntax that's ambiguous until runtime, or anything like that. It's all there: all the information needed is present, and the interpreter has some rules internally for how to invert things and how to invert combinations of things, and that's how it's able to do it. So I think you're right. I think something can be done in that area.
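For a small taste of what Adám is describing, here is a made-up example in a Dyalog session; the set of functions the interpreter can invert is limited, but simple tacit compositions of invertible primitives like this one generally qualify:

      f←2∘×∘(1∘+)        ⍝ a small tacit function: add 1, then double
      f 4
10
      (f⍣¯1) 10          ⍝ the power operator with a negative count runs f in reverse
4
      (f⍣3) 0            ⍝ apply f three times: 0 → 2 → 6 → 14
14
      (f⍣¯3) 14          ⍝ and undo all three applications
0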
01:13:23 [CH]
And I think what's exciting about that (and that's awesome, I had no idea that precedent existed) is that, you know, it can be an uphill battle to convince folks that array programming languages are worth learning, that notation is a tool of thought, that you get an order of magnitude or two more in your ability to think about shapes in your head and whatnot. If you create an array language that is the fastest language in the world, that can do aggressive compiler techniques that don't exist in any other language, it becomes less of a "hey, I'm trying to convince people" and more of "I'm running my code an order of magnitude faster than anything that currently exists out there; look into it if you want, but if you don't, that's fine." It sort of becomes this secret weapon, which at one point is what I heard happened: back in the day, in the 70s and 80s, people didn't even want to tell others they were using APL, because they thought it was such a, what do they call it, secret sauce or whatever, that if you told them they would start doing it as well and then you'd lose your competitive advantage. Alright, any last thoughts before we pause on this? We'll come back with Part 2 in the future, because I feel like this is a deep rabbit hole.
01:14:40 [AB]
We definitely need to come back. We've spoken about forks, but not thoroughly; Bob went through the hooks, but only the monadic hook. What about the dyadic hook? I mean, there's a whole chapter right there, and then what about the atops and all the other conjunctions that we haven't got to? And that's why Stephen has been so silent all this time, because he's just waiting for us to come to the atops.
01:14:56 [CH]
Yeah yeah, yeah.
01:15:03 [AB]
And that's where k comes in, because k does atop, or rather it does nothing but atops, basically. So, definitely another episode on this tacit topic.
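Since atops are being promised for next time, a quick taste of one, sketched in Dyalog APL, where a two-function train is an atop: the left function is applied to the result of the right one.

      (-÷)4        ⍝ monadic atop: negate the reciprocal of 4
¯0.25
      3(-÷)4       ⍝ dyadic atop: negate 3 divided by 4
¯0.75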
01:15:15 [CH]
It, yeah, there's the whole.
01:15:17 [ST]
01:15:33 [CH]
Yeah, and we didn't even get to our favorites. That's right, we were supposed to all bring our favorite hooks or forks or tacit expressions, but maybe we should leave that for the next episode, because we're a bit over now. But thanks everyone for listening. As we mentioned at the beginning, we'll have links for all of this, plus the J Twitch live stream, for those that want to check that out.
01:16:00 [BT]
If you do want to get in touch with us, contact@arraycast.com will get you in touch with us. And as Conor mentioned, I get all the emails and stuff and I forward them on to the group as appropriate, so fire away if you've got questions. I think that's about all I've got. arraycast.com will have the show notes and the transcripts; we all work at doing that. So when this runs, if you do get a chance to check out tangentstorm on Twitch, with the early-morning J show, or the ridiculously early J morning show, I think it'll be worth taking a look at from time to time. For me it really is ridiculously early, because I'm on the West Coast, so that's five in the morning, and I don't think I'm getting up at 5:00 in the morning to watch it, but I'll watch it on YouTube afterwards. I think it's really neat.
01:16:52 [CH]
Yeah, and feel free, if you're listening and we dove way too quickly into the deep end... I absolutely loved this conversation. That being said, though, I have been thinking about this and staring at it for a year, so potentially we went, dove into the, we dived in, doved in? Whatever. We went into the deep end way too quickly. So yeah, feel free to leave us feedback, and if you want us to slow down next time, we definitely can. But with that being said, we'll say happy array programming.
01:17:23 [all]
Happy array programming.