Transcript

Transcript prepared by

Sanjay Cherian, Igor Kim, and Bob Therriault

Show Notes

00:00:00 [Henry Rich]

I would feel devastated if I couldn't have mustard on my sandwich, but I wouldn't want to have to have mustard on everything. Tacit programming allows you to represent what needs to be represented tersely, tersely.

00:00:16 [MUSIC]

00:00:26 [Conor Hoekstra]

Welcome to ArrayCast episode 84. I'm your host, Conor, and today with us we have three panelists, recurring panelists, and one guest panelist. So we're going to go around and do brief introductions. We'll start with Bob, then go to Adám, then to Marshall, and finish with Henry.

00:00:39 [Bob Therriault]

I'm Bob Therriault, and I am a J enthusiast.

00:00:44 [Adám Brudzewsky]

I'm Adám Brudzewsky. I am an APL professional.

00:00:47 [Marshall Lochbaum]

I'm Marshall Lochbaum. I've worked on J and at Dyalog. Now I work on BQN and Singeli.

00:00:52 [HR]

I'm Henry Rich. I'm the principal developer of the J engine.

00:00:56 [CH]

And as mentioned before, my name is Conor. I am an array language enthusiast, and super excited that I get to be here, because we weren't sure; there was a chance that I was going to miss today's episode. But I'm super excited to be here for today's topic, which is Tacit number six. It's been a while, but we'll get to that in a few moments, because we've got one follow-up and a few announcements. So we'll throw it to Bob, who's got the follow-up and an announcement, and then I'll finish with a few announcements myself.

00:01:22 [BT]

And my follow-up is on our last episode: we let people know that Raghu Ranganathan had unfortunately passed away, and that was a shock to the whole community. We weren't sure whether we were going to be able to get back to his parents. We'd found out from his dad, who had emailed us to let us know that this sad event had happened, but we didn't know whether anything beyond that email was being monitored. We did get an email back from Raghu's dad, and he appreciated, the whole family appreciated, what everybody had done on the podcast and all the communities that had sent messages in. We'd sent messages back to them, and they said it had kind of told them what they already knew about their son, but to see the influence he'd had made a really big difference to them. So there's a lot of comfort there, and thoughts with them at this point. It's going to be a long time, I think, if ever, before they actually feel kind of whole again. But anybody who had thoughts passed along, they did get them. And that's something good, I suppose. And then my other announcement is actually just kind of a point. We had a really good email this week from somebody who was asking about where to get access to all the ArrayCast episodes. [01] If you go to our website, we can only put up 20 episodes on each page, so you have to go through a bunch of pages. But on the APL wiki, and Conor has done the great service of updating this, although I'm sure, it being a wiki, other people do it as well, all the information is there: it's got them all on one page, and it's got topics and guests and languages, and links to the episodes. If that's what you're looking for, that's a great resource to use, and it's updated on a semi-regular basis. All of the episodes, all, I guess, 84 of them now, will be there. And also, if you're trying to do a search for terms that we use on the show, use "site:" with your search engine, and it'll restrict the search to the site. So rather than typing "rank" into, you know, DuckDuckGo or something and getting all the internet, you just search for rank site:arraycast.com and it'll restrict the search to us. And because we've got transcripts, it'll pull back all the episodes whose transcripts match. So it's really a useful way to go searching for stuff, if you're using us that way. And I think that's all I've got.

00:03:55 [CH]

All right, and I've got four announcements. The first two are very quick: I put out two YouTube videos and one Tacit Talk episode with Elias on the KAP language. So check those out; links will be in the description. And the final two announcements: I decided over the past two weeks since we last recorded that I would start putting all my BQN code in a single repository, and then I have a script that copies it to all the other repositories. So check that out if you're interested in looking at BQN code. I've been looking at Marshall's BQN libs repository, which is a repository that just houses a bunch of libraries that he's written, and I think the readme says at some point that it might become standard code. Anyways, not the topic for today; we can talk about it later. But I've started to write my own libraries for things that BQN doesn't do as well as other languages like Haskell. Anyways, check that out if you want. But the reason that I'm mentioning this mostly is because I have a PSA, which is that I have discovered, and Marshall has mentioned this, that GitHub is kind of wishy-washy with how it labels a repository with the language percentages. And I have discovered that if you have a repository that hasn't had a commit since Linguist recognized BQN as a language, it doesn't show up in the percentages. So I actually went and opened a superfluous pull request on Marshall's BQN libs that aligned some arrows or something like that, and once Marshall merged that, it went to 100% BQN. So the reason I'm making this a PSA is because this is probably hurting the rankings of BQN. So between now and, I think, October 1st, which is the next time Linguist is being updated, I recommend that folks who are interested in the ranking of BQN on these sites that don't really mean anything go and update your Advent of Code and your BQN things. And I'm also super excited for January 2025, because at that point, all of the Advent of Code BQN repositories will have been done. Anyways, that's my PSA. I guess J and APL don't have this problem. KAP isn't actually a Linguist-recognized language at the moment; BQN is, though. But Adám's got something to say.

00:06:11 [AB]

Well, actually APL sort of has this problem, in that a few years ago at Dyalog, we decided to introduce some new file extensions. Whereas before, the extension for text files from Dyalog was .dyalog, we introduced extensions that differentiate between the various types of contents that are in them. And those are not recognized yet, because they're less used. But there's a trick: you can put in a .gitattributes file and tell Linguist that these files, anything whose extension begins with .apl, so you write .apl?, are APL, and then they will get counted. And I can see that this also affects the percentages of the code. People should definitely do that as well.
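
For reference, this is roughly what that .gitattributes trick looks like (a minimal sketch; the globs assume Dyalog-style extensions and should be adjusted to your own files):

    *.apl? linguist-language=APL
    *.dyalog linguist-language=APL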

00:06:54 [CH]

Interesting. Interesting. And I have thought that it's a bit weird, not to go down a rabbit hole here, but I was looking into how these rankings work, and a lot of them are based off of GitHub PRs and stars and issues and stuff. But all of those stars and whatnot count towards the language that is the number one language on that repo. So you could be in a situation where you have, you know, 34%, 33%, 33% across three languages, but the one with 34% gets all of it; they don't split it, right? And so every once in a while on these things, it flips; there'll be some big move in Rust or something, like it'll go up and then the next month it'll go down. And I wonder if it's because there are some large repositories that are flipping back and forth month to month, and depending on when the data gets scraped, it goes back and forth. Henry?

00:07:44 [HR]

How do you measure what fraction of a repo is a certain language? If I can do something in 100 lines of J and it takes 2000 lines of C, does that show that C is the dominant language?

00:07:58 [CH]

Yes, it is a problem. Yes. So I have a repo called LeetCode, which, when I was making all my YouTube videos, when I first started my channel, a lot of the videos were competitive programming problems, and so I would just put all of the LeetCode solutions in there. But C++ is number one at, I think, 25 or 30%, and even though I have way more APL and BQN code by solution count, they only show up as, like, 19 and 11%. I think there are way more APL solutions, but C++ is so verbose. So yes, it's not even, you know, apples to apples. We should get some multiplier effect and take over the world.

00:08:37 [ML]

Well, I know they do have some sort of weighting, because the BQN repository, before BQN was recognized, should have been JavaScript, but they weighted the JavaScript file way down because it was in the documentation folder. So they considered it part of the website as opposed to part of the repository.

00:08:54 [AB]

Really? Otherwise, a lot of things are going to be Markdown and HTML and so on. Almost every repository is going to be that, because of the documentation. They have to strip that out.

00:09:07 [ML]

Yeah, but I don't know if they do that for programming languages that are just programming languages and they're not commonly used on websites.

00:09:16 [AB]

How do you distinguish between a website that's implemented using, I don't know, PHP or JavaScript or Elm or something that is only there for the sake of the website, but then it's actually the backend code is something else. Surely they can't know unless they also look at folder names.

00:09:33 [CH]

I think for the most part, it shows up.

00:09:35 [ML]

Yeah, well, also, I mean, do you consider the application to be the website, which it is in some cases, or to be some other thing?

00:09:41 [AB]

Yeah, they can't know unless they have somebody going around looking manually, reading readmes, but that's doubtful.

00:09:48 [ML]

Even the automatic stuff is not keeping up. So I don't think they have much manual effort on it.

00:09:53 [AB]

Yeah. Or maybe they use some kind of a large language model to analyze things, in which case all bets are off.

00:09:59 [CH]

No one knows. The point being, I'm excited. October, we'll see if people go revisit their repos. Because I think we're ranked 357th, I'm interested to see: do we go to 356th? It could be a big move, folks. Anyways, with that out of the way, we will transition to a long-awaited topic. We've mentioned this episode on past episodes multiple times. It is Tacit number six. And today I think the main topic is going to be trains, whether they are verb trains versus modifier trains in J, because J is the only language that has modifier trains. I'm not sure if we want to give an overview or just assume that everyone listening has listened to the past episodes, where I have waxed rhapsodic since I first discovered them. And I think they've been mentioned by Henry, but there might be a few new people. So should we, I was going to say start off slow, but starting off slow with an explanation of modifier trains probably isn't actually that slow. So maybe that's a good idea. We'll throw it over to you, Henry, and then we'll go from there.

00:11:02 [HR]

I would start even earlier than that. Modifier trains, I think, are an exotic addition that don't have a great deal of practical use. But the concept of tacit programming definitely does. And here's a simple tacit program. I'm just going to type a plus sign. And I'm going to say, that's a program. The point is, you have to have-- the language that you're using has to provide the proper soil for this to grow into a program. But I'd say it starts as a program.

00:11:39 [AB]

I think you have an excellent point. Because it might seem like, well, what is that? That's nothing, right? But I often hit this in, say, JavaScript: I want to sum something, so I need to reduce using plus. But I can't use the plus symbol, because that's not a proper program or function in JavaScript. I have to add a whole bunch of noise, many, many times the length of a single character. So right there, the fact that plus represents itself, tacitly taking arguments and giving results without declaration, that is salient in itself.

00:12:14 [HR]

Indeed, yes. The language has to supply enough features for plus to be recognized as a program.

00:12:26 [ML]

Yeah, and so I've just put a link in the chat. You might think you could always write "f gets plus" in APL. But this is actually somewhat recent. I think when I looked into this, and this is on the APL wiki (we'll put it in the show notes as well), what I found was that the first proposals for doing function assignment at all started around 1980. Iverson had a paper in 1978 where he proposed a new glyph for function assignment; actually, I think KAP does that now. But the proposals for using the regular arrow for this started around 1980, and the first implementation that I could find of it was Dyalog APL in 1986. So you can probably thank John Scholes for being very function-minded and deciding to implement this. But it's not really obvious that plus should be a program or that it should be directly assignable.
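
For comparison, the J analogue of the assignment Marshall describes is a one-liner (the name f is arbitrary):

   f =: +        NB. assign the primitive to a name
   2 f 3         NB. f now behaves as a program in its own right
5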

00:13:32 [HR]

Does anybody know where the name, where the word tacit in this context, originated? It sounds like something Ken would come up with.

00:13:42 [ML]

Yeah, I think so, but I don't have any hard evidence.

00:13:46 [HR]

But it's important to ask: what's the alternative? The alternative is explicit functions. But tacit means quiet, and the point is that your program can refer to its arguments without any designation other than that the arguments are presented in the right place. So if I have my program plus, and I give it a left argument of two and a right argument of three, then it will execute, because it's a program that's been given its input in a place where it knows where to look for it. You have to have the right kind of language for that to make sense. Think about C. You can't do it, because C doesn't have any way of knowing what to do with a program unless you give it an example of how to use it, a function prototype. In C++, the operator itself might be overloaded, and you can't even tell what it is until it's presented with arguments. But again, it has to have an example to know where the arguments go, and that's a lot of typing, as Adám was saying. One character versus a function declaration: that's the sort of difference in heft that we're talking about. I can move up and make another tacit program, let's say plus slash in J. In J, the slash means reduce. [02] It's a program too, but it knows to take an operand on its left, which is a function. So plus slash, plus reduce, means add up a list or some array: take the sum of an array. It too is a tacit program, because when you give it a right argument of an array, it knows what to do with it. My last example would be two words. Word is what J uses to mean what other people call tokens. I dot, space, greater-than dot slash. I dot means find the first match. Greater-than dot means find the maximum, return the maximum. And as we learned, slash means reduce. So this is saying: find the largest value in the right operand and tell me where it is in the left operand. Or in plain words, it's just the index of the maximum. What's the index of the largest value? And I can do that by just writing those two words. Those two words are the program that does that. So what's the advantage of this style? Well, the obvious advantage is it's very terse. I don't have to have punctuation other than parentheses; in a sense, the space is the other punctuation. Spaces, parentheses, and then operators, or things that look like programs, that can be combined together. It's also nameless, which is important for three main reasons to me. When you write a program and you introduce a name, there's always the nagging doubt in the maintainer's mind that this name may be used somewhere else. If you don't have to create a name to refer to an object later on, that's just a level of worry that you can take away from the eventual maintainer. Also, assignments make it hard for operations to be done in place. And name references require name lookups, and if they're done in loops, they can take time. But the main advantage is that a program written in this form is very short. So what are the disadvantages of the tacit style, tacit programming? Well, they're actually the same as the advantages. It's terse. Now, is it too terse? That's a topic that we should pick up later. And the fact that it doesn't use names also contributes in its way to it being difficult to comprehend. For one thing, if I have a multi-line definition with function assignments, I can put a comment on each line. I can describe what part one does and then describe what part two does, and then at the end mash it all together into one verb.
If I put it all on one line without the assignments, we don't have the two-dimensional commenting structure that's really needed to point to the first part, give a comment for the first part, a comment for the second part, and so on.
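
A short J session illustrating the three programs Henry just walked through (the data here is made up):

   2 + 3                       NB. plus: a one-word tacit program
5
   +/ 1 2 3 4                  NB. plus reduce: sum of an array
10
   10 20 30 (i. >./) 5 30 7    NB. hook: index in the left argument of the max of the right
2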

00:18:43 [AB]

But that's a technical limitation, right? It's not inherent in tacit programming that you can't make space for comments.

00:18:53 [HR]

That's true.

00:18:55 [ML]

Yes, yeah, well, and it's worth pointing out that the solution many tacit programmers use for this is not to name the values in the program, but to name different subfunctions of the program. So if you've got a function where the highest-level structure is a train of three functions, you might write it out as three different function assignments, followed by your final train, which is just going to be three names together. So in that way you can get names back into it.
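
A hedged J sketch of what Marshall describes (the names are hypothetical): three named verbs, then a final train that is just the three names together.

   sum   =: +/
   count =: #
   mean  =: sum % count    NB. a fork of three named verbs
   mean 3 1 4 1 5
2.8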

00:19:25 [HR]

Yes. You have named functions, but not named nouns.

00:19:30 [AB]

Yeah, but what I often do also, because any comments will be extremely verbose compared to the actual code, is I use two dimensions for my comments. So I might have a very terse line of code, and then I have multiple lines of comments above and/or below the code line, and I will use line-drawing characters to point various parts of my comments to the various parts of the expression. I kind of explode it out, putting labels on it, to overcome the technical limitations. But there's no particular reason why, for example, APL or J or BQN couldn't allow an open parenthesis, then a line break, and then have each part of a chain of functions, whatever the meaning is in the respective language, on a separate line, and then a closing parenthesis. That's not valid syntax now, so it could become valid, right?
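
Something like Adám's comment style, transposed to a J sketch (the layout and labels are illustrative):

   NB.      ┌─ largest item
   NB.          ┌─ minus
   NB.            ┌─ smallest item
   range =: >./ - <./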

00:20:26 [HR]

Yes.

00:20:30 [ML]

Actually, one of the big reasons why I don't want to support that is that I do want to encourage people to give names to parts of their programs. So if you can just split it up into multiple lines, that's very convenient. But often it's not actually the nicest layout for it.

00:20:45 [AB]

Okay, but I'm saying it's a technical limitation, or, for example, Marshall not wanting it because of an agenda; it's a technical limitation, right? There's nothing in the languages themselves and the language structures that prevents it.

00:20:52 [HR]

Right, and in J we've knocked around having a line continuation that would allow you to put comments on parts of lines. Agreed, the real problem is not just a technical limitation. It is that programs, or individual verbs, which we've said are tacit programs, can have only two arguments. There's a left operand and a right operand, and the essence of tacit programming is that you can refer to a value at some distance in the program without having to give it a name. And that's only because all languages that I know of are ultimately one-dimensional, in that they're linear strings; a program can be converted into a linear string of characters, so every word, every parenthesized sequence, has only a left neighbor and a right neighbor. If you want to go beyond that, you have to add something. What we normally do in J is box up a sequence of arguments and pass it into the function, but that ceases to be tacit: you'll then have to refer to the values by name to extract them.

00:22:11 [ML]

Or you have this whole mess of indices, and if you start nesting that it gets really ugly.
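
A small J sketch of the boxing Henry mentions: once several values are boxed into one argument, the body has to unpack them by name, so it is no longer tacit (the verb and names are hypothetical):

   f =: 3 : 0
'a b c' =. y     NB. unpack the boxed right argument by name
a + b * c
)
   f 1;2;3
7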

00:22:15 [HR]

So the point I've been trying to make is: we can use what we call tacit programming within a single line of code to combine half a dozen, a dozen, maybe only one or two symbols to produce a complex function. Can we go any further than that? Could we write a whole program this way? Well, the answer is yes. People have done it. But I'm here to try to discourage that, and I particularly don't want people to come away with the idea that J is that language where you do everything tacitly, because that's not the way it is. You could do everything tacitly, but normal J code is just a sequence of imperative sentences that assign values to results and move on. But there has been a lot of talk about writing purely tacit systems, and I guess we need a name for that, so I'm going to call it tea-totalling: total tacit programming. So what's bad about tea-totalling? The first problem is this problem of multiple arguments: how do you refer to the arguments tacitly? As Marshall pointed out, you can't simply say, oh, it's the left or the right; you now have to say, oh, it's the fourth operand on the right. You have to add a notation for that, and you have to do something with the actual argument on the right to join the components into one piece. And when you do this, you're losing the advantage of terseness. With the short tacit bits that flash out, do a job, and go away, you often don't have to refer to an operand at all. You don't have a token to indicate left or right, but quite often you don't even need that; all you need is parentheses and spaces. If you start trying to do more than just refer to the single left and right argument, you have to add some way of referring to the operand, and that necessarily makes the program bigger. Whether it's harder to understand is a different question, but it certainly makes it bigger. And if you try to write the program as all tacit verbs, you're essentially writing a functional program, and you have the problem there that very often you would like to store a result for reuse. To save a value and bring it in later violates the strictures of functional programming. [03] What you have to do is recompute the result when you need it, and that requires wasted computation. The biggest problem I have with it is that the interfaces become very hard to change. You're not using prototypes; you're referring to values by number, usually. If you want to change something in the innermost nested function, quite often that produces problems that percolate into whatever called that, and whatever called it, and so on up the line. So I find tea-totalling programs hard to maintain, although there are some people who do it and seem to get along fine with it.
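
For contrast, a hedged sketch of keeping the computation from the sketch above fully tacit, fetching the pieces out of the boxed argument by position with {:: (fetch); this is what "referring to values by number" ends up looking like:

   g =: 0&{:: + 1&{:: * 2&{::    NB. a train that selects operands by index
   g 1;2;3
7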

00:25:53 [BT]

Are there any advantages to that? I mean, as you say, there are people who do it, so there must be some things. I can think of one, which is that it's possible it could be produced by a program writing another program. In other words, your maintenance might be done there: you might actually have a program that assembles a program in that style, which seems incomprehensible to people, but the point is you've set up the rules to put these things together in a certain order.

00:26:22 [HR]

Yes, you're right. Yeah, maybe so. I don't know.

00:26:26 [ML]

Well, I can say one thing, which is that some programs do not need to be changed. The mathematicians can keep trying, but I don't think there will be any updates to the formula for the volume of a sphere. If you have such a program, writing it purely tacitly is not necessarily a bad idea. Knowing this formula is generally not very important; it's kind of a geometric detail that just happens to be true. So if you put this in a tacit program, it compresses it, and you could even say that making it harder to read is an advantage, because it just tucks this detail away into a part of your program where it's not going to change, and probably nobody's going to bother to read it. So that kind of signals that this is some immutable fact that is not really the main subject that you're working with in your code base. So it's nice in that way.

00:27:25 [HR]

Yes, I completely agree. Since the big problem is that interface changes are tough, it's good to write a tacit function when you absolutely know the interface isn't going to change, and the program itself isn't going to change. And the volume of the sphere is a good example of that.
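
A hedged J sketch of such a "never changes" definition, using the standard (4/3)*pi*r^3 (the name vol is made up):

   vol =: o.@(4r3 * ^&3)    NB. pi times 4/3 times the cube of the radius
   vol 1
4.18879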

00:27:46 [AB]

Hey, wait, wait, wait. What do I do if I need to move to a different dimensional space?

00:27:49 [ML]

Yeah, so if you're working with general relativity, this is not such a good idea. But then you probably want to introduce a new function, because then it's going to be an entirely new interface with, you know, curvature and all that. And the old function you can still just leave there. That's fine.

00:28:09 [BT]

But at that point, you might be approaching tea-totalling, right? Because suddenly you're-

00:28:13 [HR]

No, no, you wouldn't. You would only use tacit definitions for the basics. If you're saying, "We're changing our notion of geometry," we're only going to write tacit verbs for the things that are unchangeable given that definition of geometry.

00:28:30 [ML]

Yeah, and so, if you have space being curved, there are many ways to represent the space. And so you're no longer really in a realm where you think the interface won't change. With certain types of curvature, you might want to specify the geometry of the entire space or something. It's really hard to say, then.

00:28:49 [AB]

I'd like to quote the great Roger Hui, because much of what we have of tacit programming available to us in all the languages that we generally speak about here comes by way of him. And I had noticed that he changed style over time writing APL, where he earlier would write more tacitly, and then he started writing dfns more. And so I wrote to him and asked him back in 2021, because he had made a comment in an email, and I asked him to elaborate a bit. The comment he had made was: "But to me, writing tacitly is merely an amusing puzzle when you have dfns. I probably shouldn't say that, because it would get me into trouble." So I wrote back to him, and, I mean, this is part of his quote, this "I probably shouldn't say that," right? So I wrote him back and said, "I long suspected you'd developed that view." And then I asked for an elaboration. And dfns were added before function trains in Dyalog, so I asked him if he thought it was a mistake to add trains to Dyalog APL. And he wrote me back: "Forks are fine, and it was a correct decision to add them to Dyalog APL. As you know, forks are necessary to make tacit definition fly, necessary when the root function in your computation is used dyadically. The problem with tacit definition is that it does not scale well, at least not as well as dfns, despite name follies. And to see this, try writing a moderate-sized computation. For example, tree display, which I wrote in J and first posted the 26th of September, 1991. Writing it tacitly required enormous discipline. The result is better for having been the result of that discipline. And when it came time to write it as a dfn, the dfn is better for it. Anyway, try writing tree display tacitly." And he even elaborated more on this, but I won't quote all of it. I just want to say what he elaborated: "Writing tacitly imposes a discipline that forces you to decompose a problem into reasonable parts. Such decomposition also strongly suggests a terminology that guides thinking and discussion about the problem." I'll stop there. There's a lot more. Maybe this should be published in some Roger's Collected Writings, or internal emails like this. But it brings out that which Marshall is saying: if you force people to break it up into parts, they don't just write throwaway code. Unless you're writing throwaway code: if you're using APL, J, or any of these languages as a desktop calculator and you want some one-time computation, fine, do whatever you want. It doesn't matter.

00:31:48 [HR]

Well, yes, but you would be crazy not to use dfns. As Roger says, it's a puzzle. You don't need to give yourself a puzzle needlessly if all you're trying to do is get an answer.

00:32:00 [AB]

It's a puzzle to write it well. Some people have accused me of obfuscating things by translating to tacit, but I often write tacit straight away. I don't start off with explicit code that works and then translate it to tacit unless I have a real good reason to do so.

00:32:15 [HR]

Yes, agreed. What I've been trying to say is, if you just throw a few verbs together in a sentence, you're writing tacit code. You're just not tea-totalling. Having a little bit of tacit code in every sentence: I would guess 80% of the lines of J I write have some number of hooks and forks in them, function compositions. All those things are tacit code as properly defined. They're just not all tacit systems.

00:32:54 [BT]

And I think that's an important distinction: writing parts of your code tacitly does not mean that the entire code has to be tacit. It's not a black or white situation. Although, when you're talking about tea-totalling, you are talking about purely tacit, so that's at one extreme. But J would almost become verbose if there were no tacit. If you had to write everything out, well, I'm not sure even what that means, because as you say, your verbs are primitive.

00:33:24 [HR]

It would be a much more verbose language. You'd have to repeat yourself. You'd have to have temporary variables.

00:33:33 [AB]

You can't anyway, because as you started off by saying, plus is tacit. So you might say, okay, let me make it explicit so you wrap it in braces and add left and right arguments. But that wrapping also has plus in it, right?

00:33:45 [HR]

Well, that's right. Yeah. You can't get away from tacit programming at some level.

00:33:52 [AB]

And it makes no sense to wrap plus in anything in order to do a plus reduction, right? [Henry agrees] Plus reduction is already tacit. It's inherent in the language. It was from day one of APL. Actually, from day negative one, you should say: before APL was APL, when it was just Iverson notation, it was already tacit, though functions on their own and operators were not even recognized yet. Tacit programming as a concept ... definitely not. But it already was; we can recognize it as such today, that it was tacit programming. [Henry agrees]. And it would be crazy to try to avoid that at all costs, or minimize it, when writing these things. Richard Park and I spoke about this on the APL Show, [on] the episode [about] how we recommend structuring APL code. I've also written my own personal style guide on this, and I also recommend having a tiered style, where the innermost data-processing level can be tacit, but you use that with moderation inside dfns, explicit code. And then you put those inside larger procedural programs that have the overall structure of the application that you are writing, and maybe even modules in various namespaces. So I would also recommend: don't go totally tacit unless you're trying to show off or have very special reasons.

00:35:13 [HR]

I would feel devastated if I couldn't have mustard on my sandwich, but I wouldn't want to have mustard on everything. [Others laugh]. Tacit programming allows you to represent what needs to be represented tersely ... tersely.

00:35:29 [BT]

And I think, when Adám was talking about what Roger had written, there's a sort of a meta-level as well. Roger talked about the tree display and the fact that it was difficult to write the tree display tacitly, but that improved it. It became another lens to look at the problem with. And I think that's really useful with tacit: sometimes, and I do this sometimes, I'll write things out explicitly because it almost just flows together easily. But if I really want to get insight into what's going on, if I try to write it tacitly, I'll learn much more about what I was doing.

00:36:01 [HR]

Yeah, the tree display is a very well-defined interface. That's not going to change, I hope. If it did, that would be tough to maintain. It's recursive; well, I don't remember. It's about 15 lines the way Roger wrote it, I think. So it's a tough problem. He learned something from doing it. If it weren't such a vital piece of the system, he probably would not have ... [sentence left incomplete]. If he had dfns, then he probably would have used them.

00:36:36 [BT]

But he was saying that the dfns version he felt was improved because he'd written it tacitly.

00:36:41 [HR]

Yeah, he agreed he had to follow the structure.

00:36:44 [BT]

Yeah.

00:36:45 [AB]

It forced him, or, I mean, his discipline forced him to break things up into mental chunks [Henry agrees]; well-defined parts. When I teach people programming using APL, I also tell them to give things good names, and I give them some guidelines as to how to name things. If you try to break out a piece of your code and you can't find a good name for it [Henry enthusiastically agrees], even a slightly awkward name (or an awkwardly long name that describes what it is), if you can't find a name for this concept at all, then that's probably because you either chopped it up too finely and it's not meaningful on its own, or you tried to reach over too much, so you're combining different things that don't belong together. That's what this discipline you talk about forces you to do: you name each part. That's why Marshall shouldn't add vertical tacit support in BQN [chuckles]. And we shouldn't do that in Dyalog APL either. And we shouldn't do it in J, please!

00:37:53 [ML]

So one thing I want to point out about dfns is that they actually do not force you to name the arguments. They name them for you. So there is actually sort of a hint of tacitness about dfns, which is part of why [Henry agrees] this particular system has been popular in array languages. Like, J didn't start out with the curly braces of dfns,[04] but it did start out using these inherent names: x and y. K is similar. So even though function definition has gone through a few iterations, this idea of default names has been pretty popular. And that has some similar considerations to tacit programming. One really big issue with it is that if you have nested dfns, then the argument names only refer to their immediately enclosing ones. So if you write a nested dfn, it often gets in the way. You can't copy and paste code between the inner and the outer dfn [Henry agrees], because it might contain argument names.

00:39:00 [BT]

The context has changed.

00:39:02 [ML]

Yeah, so one thing I try to do: BQN has headers for these anonymous functions, which are very nice. In APL, you might just switch over to tradfns. But it is a good idea to write the bigger functions in an even more explicit style and give the arguments names. [Henry agrees]

00:39:19 [AB]

Something I do whenever I write dfns and I'm dealing with a larger, more complex problem (or maybe with something more obscure) is I will start the dfn by writing (if it's dyadic): "name name <- alpha omega". I put that on the very first line, just after the brace, and it visually makes it a header, saying: "my left and right arguments are called like this". They're not, actually; it's just a reassignment. But I think that helps [Henry agrees] in seeing things between other names, and, as Marshall says, it allows me to refer to them from inner functions as well, when that's necessary, to write dirty code like that. And (I don't know how it is in other languages), but in Dyalog, you cannot assign to the argument names. They are constants, not variables. So you cannot take the argument and increment it, for example.

00:40:15 [HR]

In J you can, but yeah, I agree. If you have an explicit verb of more than half a dozen lines, you should never refer to x and y in the body; assign them to names at the beginning, just to separate the interface from the rest of the program. If tea-totalling is too much tacit, well, just how much is too much tacit? How much can you get away with?
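
A minimal J sketch of the convention Henry recommends (the verb and the names are hypothetical):

   find =: 4 : 0
needle =. x      NB. name the arguments immediately ...
haystack =. y    NB. ... separating the interface from the body
haystack i. needle
)
   3 find 10 20 3 40
2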

00:40:45 [CH]

This is the real question. I've been just sitting here with my feet kicked up, listening. And I've got a question for Marshall in a second.

00:40:53 [ML]

Well, I guess the question ... [sentence left incomplete]. What I'd like to know is: what are the symptoms of too little tacit? One thing I sometimes notice in code in a language that is not very powerful (and this is with arrays as well as with tacit programming) is that there's more repetition. One thing you see in languages that don't have an over operator, like if you want to test, say, if two strings are anagrams of each other, is that programmers in these languages will always write "a.sorted matches b.sorted", however that's written in the language. So if you can avoid that with tacit, that's just great, because you're representing the structure of your problem better and representing the symmetries in it. So I think that's definitely one point where the line between too much and too little tacit lies.
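
Marshall's anagram example, sketched in J, where & (applying the sort to both arguments) lets the sort be written once (the name anagram is made up):

   anagram =: -:&(/:~)    NB. match over sort: (/:~ x) -: (/:~ y)
   'listen' anagram 'silent'
1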

00:41:50 [CH]

Yeah, the thing I was going to say is that I think, at least amongst the five of us here, we're all in agreement that, specifically in the array languages that are represented right now, fully tacit is not the ideal. Tea-totalling, as Henry called it. Even I, who, you know, am probably ranked in the top ten in the world, maybe top five, maybe top three, of people that love tacit programming (I love tacit programming), agree. You can't take from me the phi combinator, the phi-one combinator, the psi combinator, which Marshall just referred to as psi, the psi-phi pattern, which I've actually never talked about.

00:42:32 [ML]

Wait, I don't think I said psi.

00:42:35 [CH]

You didn't say psi. You said "over". But "over" is the psi combinator.

00:42:37 [ML]

You just translated it in your head [everyone laughs].

00:42:38 [CH]

Yeah, into combinatorial logic.

00:42:39 [ML]

You will not take Conor's speechlessness away from him! [everyone laughs]

00:42:45 [CH]

You can't take that stuff. Like I said, when used correctly, I think tacit programming is the epitome of elegance. You can take it too far, though. And I was just checking in my string library that I've started writing: I think there are like 10 functions in there; eight of them are tacit, two of them aren't. Maybe I'll bring this up later, but there's a specific pattern that I've always failed to make tacit and that probably shouldn't be made tacit. I don't even have the brain power to know how to do it. Maybe it's not possible. But the question is, yeah: what is the right amount? I think (I don't know if Adám agrees with me) that I lean to more. Like, I was looking at some of my code and I have like 7-trains, but when they're written nicely (another thing is you need to know how to read them) ... [sentence left incomplete]. But once you know how to read 7-trains and it's clear that it's a, you know, monadic function ... which, I've always had a thought, too: ever since I realized that KAP uses (I'm on like six different tangents now), ever since I realized that KAP uses both the single arrow and the double-struck arrow, and then I realized BQN has it as well (it uses it for exporting), technically an idea for an array language is you could add two different assignment operators for functions when you're in tacit land (because this doesn't matter when you're in dfn land; you can already tell). And the double-struck one means it's a dyadic function, and the regular one just means it's monadic. Or, you know, find some arrows that haven't been used already. And that takes away the readability problem of: is this a monadic or dyadic function? That is one of the biggest problems when reading tacit: you start off, and if there are no docs that say this is a dyadic function, all of these functions have ambivalence, and you have to start thinking: okay, well, if this was a plus that's dyadic, that means plus, but if it's actually monadic ... [sentence left incomplete]. Or maybe it's supposed to be two cases, and you could even have a third arrow operator. I'm sure there are tons of Unicode arrows out there. We put a little question mark over the arrow, and that means that this can actually be used both monadically and dyadically. And I think that's part of the problem: the limit that you reach in a language, where tacit is too much, is defined by the properties of that language. When you have a language with ambivalence, I think the stage at which tacit becomes too hard to read comes way sooner than in a language like Uiua that doesn't have ambivalence. And Uiua has a different set of problems, in that I think the pattern that I have never been able to figure out in APL and BQN doesn't exist in Uiua, because it's when you need, at this weird place in your function, to reach for this partially computed ... [sentence left incomplete]. It wasn't at the beginning; it wasn't one of your initial arguments. It was something halfway through, and you need to use that, you know, in the middle of some operation that's being used in a higher-order function, like an outer product or reduce. Trying to reach for that value at that point is really tricky. Whereas in Uiua, you just throw it at the back of the stack and then, you know, you can get it. It'll just be a rotate or a flip or a duplicate or a dip; one of their operators will allow you to reach for that.
So in Uiua, you can always easily, with a couple of rotates or functions, access that value.

00:46:01 [ML]

Yeah, but it's context dependent, right? Which is part of the problem.

00:46:05 [AB]

Yeah. That I find more problematic than a solution to the problem. At that point in APL and BQN and J, the right thing to do (I mean, you can disagree), but I would say the right thing to do, is to be at least partially explicit. Give it a name so you can refer to it later by name, and the reader stands a chance of knowing what it is you're referring to. Because if you're writing a Uiua program, and admittedly I've written very little Uiua, but if you're writing a Uiua program and way in the middle (not in the very beginning), somewhere in the middle, you have, I think it's a dot, which just duplicates the current value on the stack so you can save it for later. And then you keep going, and much, much later, you reach that, and it pops up to the surface!

00:46:47 [ML]

Yeah, and then drag the fifth value off the stack [laughs].

00:46:50 [AB]

Right, exactly, that kind of thing. Like, really? And that's not fair to the reader.

00:46:59 [HR]

This might bring in some ideas on this question. I'm gonna tell you a story and I want you to raise your hand when you're ready to put it in your own words. So the story is only one sentence long, okay? Here it is. Raise your hand when you're [ready]. The boy, the dog, the cat attacked, scratched, recovered. [long pause]. The record will show that no hands have been ... Marshall ... [sentence left incomplete].

00:47:26 [AB]

No. I think I got it.

00:47:28 [CH]

I don't understand. I thought you meant you wanted us to like paraphrase.

00:47:30 [HR]

I do.

00:47:32 [ML]

When you understand what the sentence actually says [laughs].

00:47:35 [CH]

Was that an actual sentence? [Henry laughs out loud].

00:47:36 [AB]

It's an actual sentence and it's correct grammar. The problem is that he's using structures in English, and you can even say words, that are such that it's ambiguous what role they play in the sentence, intentionally to confuse you, because it's not a pattern you're used to hearing. And there are many such ones.

00:48:01 [HR]

Okay, I agree with you partly, but well, I think everything he says is true, except I don't think it's because you don't hear it very much. Did you have a paraphrase, Marshall?

00:48:15 [ML]

Well, [chuckles] now I have to remember what the sentence is.

00:48:17 [HR]

The boy, the dog, the cat attacked.

00:48:19 [ML]

So, yeah, there's a cat and the cat has scratched a dog sometime in the past. And at some other time, the dog has attacked the boy. And now what you're saying is that the boy has recovered.

00:48:33 [HR]

Yeah, and you got the attacked and scratched in the wrong order, but that's probably because you'd forgotten that.

00:48:40 [ML]

Oh yeah, I just forgot that because they're so similar [chuckles].

00:48:43 [CH]

That is not a sentence! That's not a sentence.

00:48:46 [HR]

Yes, it's grammatical.

00:48:47 [CH]

Are you telling me that's a grammatically correct sentence?

00:48:49 [AB]

Yes it is.

00:48:50 [HR]

To talk to Conor's point here: there was a boy, he recovered. The boy, the dog scratched, recovered. You're okay with that, right? The boy, the dog scratched, recovered.

00:49:05 [AB]

The boy that the dog scratched is now recovered.

00:49:08 [HR]

That's optional.

00:49:10 [CH]

Oh, you guys are intentionally ... [sentence left incomplete]. So I see this game. First of all, there's a bunch of clauses where commas are, like, missing.

00:49:19 [HR]

This is more basic than what you're saying. This example is from the marvelous book by Steven Pinker, "The Language Instinct," if you want lots more examples. It's not ambiguity. Listen to this: time flies like an arrow. Fruit flies like a banana. Now that's just funny, right?

00:49:43 [AB]

Yeah, fruit flies. That's because the word flies can be either a noun or a verb.

00:49:49 [HR]

Every word, all four words. Time flies like an arrow. Fruit flies like a banana. Every word has changed meaning and/or part of speech. And yet, you pick it up immediately.

00:50:07 [AB]

No, you don't pick it up immediately! Only if you've heard that one before. At first, most people go: what?

00:50:13 [ML]

I think if you hear the sentences separately, it's actually much easier to pick it out. The first sentence primes you to interpret the second one wrong.

00:50:21 [HR]

Well, that's a good point.

00:50:24 [ML]

I will also point out that the principal at the school where Henry taught me was very fond of this phrase. He would say it all the time [laughs].

00:50:32 [HR]

So you're ready for it. [everyone laughs]

00:50:35 [AB]

Here's another one I use with my children every once in a while: the old man the boat, the young fish tomorrow.

00:50:44 [ML]

Well, that's actually ambiguous, yeah.

00:50:46 [AB]

No, it's not ambiguous. It's completely clear what it means.

00:50:50 [ML]

It requires you to resolve the ambiguity using semantics.

00:50:53 [AB]

If I say, the old people serve on the boat today, and the young people go out fishing tomorrow, you have no problem. Now, listen again: the old man the boat, the young fish tomorrow. [Henry agrees]. "Man the boat!" If I only say: man the boat, you have no problem. If I say: the old man the boat ...

00:51:17 [BT]

You're putting older men together.

00:51:19 [HR]

Yeah.

00:51:20 [AB]

Then, because of "old", it primes you to expect something else, but it's perfectly fine to call the old people "the old". So the old should now man the boat; the old, man the boat. And the same thing with "the young": you expect the young animal, the young person, something, but that's not it. It's just calling the group "the young", in plural. So "the young fish": you think it's the young fish (it's not lived very long), but it's not; fish is a verb, right? So the young fish tomorrow, but because of the context ... [sentence left incomplete].

00:51:49 [ML]

So the point of Henry's example, I think, is that the sort of feature that he is stacking up is the kind of thing that appears both in English and in APL. I mean, I guess you can argue that the ambivalence is like English ambiguity.

00:52:05 [AB]

I think so. That's what's throwing us off.

00:52:08 [ML]

But it's not a syntactic feature there. I mean, the syntax isn't determined by how many arguments a function takes.

00:52:15 [HR]

I think it's a very different explanation. I think evolution has left you with some hardware [so] that you can handle one open relative clause, or one and a half. You can stack two relative clauses and handle that with your grammar recognizer. For example, if I said: the boy who was scratched by the dog, who was attacked by the cat, that was looking at the mouse, who was eating the food that I left out of the refrigerator, recovered. You don't have any problem with that even though I gave you a much more complicated story.

00:52:55 [AB]

No, I think there is a problem eventually. You run out of stack space for storing all of that.

00:52:59 [HR]

No, you didn't have to.

00:53:01 [ML]

Well, the problem, if it happens, happens at "recovered".

00:53:03 [HR]

Yeah, but it didn't.

00:53:04 [AB]

Once I reach "recovered", I've forgotten what it is you're speaking about. My father told me this joke about the word order in German, where verbs go at the end of the sentence generally, and if you do multi-phrase things, you can put multiple verbs at the end, and all the auxiliary information for those verbs goes together in the beginning of the sentence. So he painted this mental picture for me of a German university. There's a lecture, a one-hour-long lecture. All the students are sitting with their notebooks ready and pens in hand, and the professor is speaking. For 45 minutes, they sit and they write nothing. [Henry laughs] Then come all the verbs, and everybody frantically writes down their notes, everything that the professor has been saying up to now [everyone laughs]. I think we'd run out of stack space, mentally. And that's what bothers me with the stack-based languages that encourage this kind of thing. This is what bothers me, I've mentioned before, with over-usage of parentheses. It forces me to mentally put things on a stack. [Henry agrees]. And that's why I prefer to commute (or swap, or whatever you call it) [05] the arguments of a function, so that I don't need to. I can take the simple thing that's already computed, leave that, and then go on to the next thing. And I want to conserve ... maybe it's just me being limited mentally. I don't have a lot of stack space; I start forgetting.

00:54:23 [HR]

Well, it may be, but the point I'm trying to make here is that even in a natural language like English, it's possible to say a sentence that overloads the hardware you have for decoding sentences. At that point, you have to fail over to microcode, and we all know how bad that is. In my original sentence, you can pick it apart logically and show that it is grammatical, but you can't do it naturally. You have to use the part of your brain that's not normally devoted to language.

00:55:04 [AB]

And you don't think it's because we're simply not used to that kind of sentence structure, other than the running out of stack space?

00:55:10 [HR]

Yeah, I don't think so. I think that you just lack the hardware to do that. Like I said, I'll take that sentence; I won't embellish it. I'll just say: "the boy who was scratched by the dog who was attacked by the cat recovered." You don't have any problem with that.

00:55:27 [AB]

No, but I don't think that's the same thing. I think it's because the extra words you're throwing in help me determine in my head what role these words play, whereas I can't settle the structure until the end.

00:55:39 [HR]

Well, again, let me try that. "The boy attacked by the cat scratched by the dog."

00:55:47 [ML]

Well, then you can't tell whether the dog's scratching the cat or the boy.

00:55:50 [HR]

"The boy scratched by the dog attacked by the cat." Yeah, I think you could still do it without that. Anyway, the question that I want to throw out here is, this is English. Is there a similar sort of limit in programming languages? As Eric Iverson said when we were talking about this: "language is innate, but programming is learned." I wonder. I'm pretty sure language is innate. Now, the question is, do we have similar limitations that are inherent? If anybody out there listening to this podcast knows some linguistics graduate student who would like to examine this, I think you could do the computing world a bit of good. You'd have to have a pretty high-level language, J or APL, so that you can focus on really compressing the expression of computation. But what I find in my own work, [is] if I see verb, verb, verb, verb, verb, noun, I have no problem with it. I just read it right to left, and you can give me as many verbs as you like, and I'll be able to follow it. Throw in a hook, and it becomes a little more difficult. Throw in a fork, and it becomes quite a bit more difficult. And if I nest too many of those things, it becomes very difficult for me to read it.

00:57:24 [CH]

So, here's the thing, though, is you don't go from verb, verb, verb, verb, verb to noun, and then add a hook. Technically, you would go from (if we shorten it) it would be parentheses, or what would it be? It would be noun ... [sentence left incomplete].

00:57:40 [HR]

It would be verb atop verb, noun, and then ... [sentence left incomplete].

00:57:43 [CH]

Yeah, it would be noun, verb, verb, noun, and actually, you'd have to ... [sentence left incomplete].

00:57:48 [BT]

You're already thinking in forks, Conor.

00:57:50 [CH]

Well, no, because in the hook, you have to refer to your argument twice, right? In the first example, he had a bunch of verbs and just one noun. So my point is that, starting from verb, verb, verb, noun, the equivalent in going to a hook or a fork is that you're going to have multiple nouns in there and potentially parentheses. So really (in the fork example, because that's easier; forks are in all the array languages), you're going to have parenthesis, verb, noun, end-parenthesis, verb, verb, noun. And then you go from that to parenthesis, verb, verb, verb, end-parenthesis, noun. And in that case, I definitely think the fork is easier to read than the parentheses on the left of the middle verb. And then technically, if you wanted it to be symmetric, you could put superfluous parentheses around the last verb and noun. And probably we all agree on that. The question is, how many of those forks? It's one thing if you put two forks side by side, and then you build up two unary functions and compose those together. But it's another thing if you're nesting the forks. I really love nesting forks, because it's so elegant when you do it. But every time I do it, I do feel a bit bad, because you're no longer reading linearly. And that's what I think is nice about a lot of these tacit techniques: you're using these small little hooks and forks so that you can compose a series of unary functions (you might need some caps and stuff). The tricky thing is when you're not building it up so that you can read it linearly; it's when you're nesting a fork inside a fork, inside a psi combinator, etc. Anyways, I'll throw it to Adám.
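
To make Conor's shapes concrete, a small J sketch contrasting the parenthesized form with the fork (the data is made up):

   y =. 3 1 4 1 5
   (+/ y) , # y     NB. parenthesis, verb, noun, end-parenthesis, verb, verb, noun
14 5
   (+/ , #) y       NB. the same computation as a fork: (verb verb verb) noun
14 5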

00:59:30 [AB]

Well, I want to disagree with Henry.

00:59:31 [CH]

Me?

00:59:32 [AB]

No, no, no. With what Henry quoted; was it from Eric Iverson, you said? [Henry agrees]. About language being innate and programming learned.

00:59:44 [ML]

Yeah, I was also skeptical.

00:59:46 [AB]

I don't feel like that. I can't say it isn't like that, but speaking, or reading, or listening to one of the human languages that I know, and doing the same thing with APL, feel very much the same to me. I grew up with APL: from about the same age as I started speaking English, I also started speaking, reading, and writing APL. And when Henry gave his example, an English sentence with a cat that scratched or whatever it was, and what Conor is drawing up here of these nested compositions, or even an explicit expression with loads of parentheses everywhere, they give me very much the same kind of feeling about the structure. The structure of the statement is so complicated that I have to backtrack and reread and re-understand. And if somebody spoke it, then I might say: "hey, can you repeat that, or explain it in simpler words?" But I don't feel there's a difference in what's going on in my head between them. You can formulate yourself in a complicated manner in either type of language, or in a simple manner that I can read straight out.

01:00:59 [HR]

Well, I think Pinker's book would convince you that there are hardware mechanisms that are at work at understanding language, and that you can overload them.

01:01:16 [AB]

Oh, I understand that. But why would there be a difference between language we use between humans for conversations, and language we use between ... it doesn't even have to be between humans and machines, but I can speak APL to an APLer.

01:01:31 [HR]

Well, evolution.

01:01:32 [ML]

And I think also, the fact that something hooks into brain hardware at a pretty basic level does not mean the hardware was designed for that thing. I mean, there are all sorts of programs for the CPU that are going to make good use of the CPU registers, or the CPU stack, or whatever. But the CPU is designed to be a general computation device. It's not designed for this specific program.

01:01:56 [AB]

Yeah, I think we'll tend to create programming languages that appeal to our inherent abilities to deal with communication.

01:02:06 [HR]

Well, the array languages, yes.

01:02:08 [AB]

Yes, maybe not the programming languages that evolved from further and further abstraction of the hardware, say, assembly to C code to, I don't know, Python, whatever.

01:02:22 [ML]

But you can't say the hardware is the only influence on those languages. There was a lot of influence from how people wanted to write it, because otherwise we would still be using machine code.

01:02:34 [AB]

Yeah, then there were influences from other sides as well. But those are trying to pull the hardware language, so to say. There's an inherent language built into the hardware of the computer, at the very lowest level: you cannot do anything other than those instructions, the things that the hardware is wired to do. And then you pull that in the direction of what feels natural to our biological hardware. But not so for computer languages that did not have that origin. Mathematical notation did not have this.

01:03:11 [ML]

Well, all right, but can't you say the exact same thing about APL with linear algebra instead of computer hardware? [06]

01:03:17 [AB]

Yes, but linear algebra was never constrained by some arbitrary hardware. It was only constrained by the way we ... [sentence left incomplete].

01:03:23 [ML]

I disagree. Linear algebra is far more constrained than computer hardware. Computer hardware is designed by humans. Linear algebra is not.

01:03:30 [AB]

Wait, what is linear algebra designed by?

01:03:32 [ML]

It's just there.

01:03:33 [AB]

Or constrained by?

01:03:34 [ML]

It is discovered.

01:03:35 [AB]

Okay! But then even more so it's natural. So if we're part of the universe and linear algebra is part of the universe, it would make sense that it just works together.

01:03:43 [ML]

Oh don't you wish, but it's absolutely not true.

01:03:46 [BT]

What's not true?

01:03:47 [ML]

That humans, because they are not designed, are fitted to work with mathematical concepts that are also not designed by humans. No, our brains are horribly ill-suited to working with mathematics.

01:04:00 [HR]

Yes. Well, yeah, that really is the point. To the extent that computer languages are closer to mathematics than natural languages, to what extent do the restrictions that apply to natural languages have analogues in computer languages? Maybe none.

01:04:24 [BT]

I'm going to throw another curve into it. And that curve is that the way we learn languages changes as we get older. On the innate part: I grew up as an English speaker and I'm pretty much unilingual. I'm trying to pick up Italian right now [chuckles] because we're going to go traveling. At an older age (I think it's around five or six), the part of your brain that learns a language changes, and so you learn a language in a different way. In some senses, even people at a more advanced stage (say they learn a second language in their teens) will not know that second language the same way a person who learned a second language at three or four does. The whole process is different; different parts of their brains translate. I'm wondering whether the same sort of thing happens here. I'd be really interested for Adám [laughs] to submit to a brain scan while he's thinking about APL, because he's one person who was picking up that language in that golden zone. Very few other people have started to learn computer languages at the age of two or three. Most people start picking them up, well, recently, probably at seven, eight, nine, maybe, for the early ones. But for me, I didn't learn a computing language until I was in my 20s. And for people subsequent to me, a lot of them learned in their teens, because that's what the high school curriculum was. But not many were learning when they were two or three years old, and I think that changes the way your brain processes a language. That might actually come down to whether a language is innate ("this is how I think about this, because I've always known it") compared to learned, which might be picking up a language later in your life.

01:06:20 [AB]

That's why I said, this is what it feels like to me. I don't know.

01:06:22 [BT]

Yeah.

01:06:25 [AB]

But also, I think it's not justified to even speak about what humans have evolved to do. Our brains are way too malleable for this. Expose them to something and they get used to it. And it's bad scientific method to go out and study a large population and draw conclusions from that for these kinds of things, because they might all have been exposed in the same kind of way. There was a very interesting video of somebody who tries to teach himself, and teach his son, to ride a bicycle where the handlebar turns the wrong way. It took him a very long time and a lot of failures to get reasonable at that, and it was much quicker for his young son to do it. And then, after having used that bike for a while, he simply couldn't ride a normal bicycle for a while. And then it switched back again. But if you then go out and run a study on the general population, and you give them two different bicycles, one that turns the handlebar the usual way and one the other way, and look at how successful they are at riding: not everybody has ridden bicycles, and some might be able to ride the ones that are wrong. You could easily come to the conclusion that there's something innate in humans, because of, I don't know, evolution or something, that makes them want to have the handlebar turn this direction so the wheel turns in that direction. But that's so wrong, right? It was just learned. Or maybe there is something, but it's very hard to test. You have to have somebody ... [sentence left incomplete].

01:08:00 [ML]

Yeah, I mean, definitely, humans inherently have different abilities at learning different things. A human can practice memorization all they want, and they will never be a fraction as good as a CPU or hard drive. So there are some inherent limitations, but I agree that it's very difficult to distinguish an inherent limitation from something that comes as a result of experience.

01:08:26 [HR]

Yes.

01:08:27 [CH]

Yeah, there was a podcast episode on... Software Unscripted, that's Richard Feldman's podcast; he was creating the Roc language, and he interviewed some guy who had done research, or I'm not sure if he had done the research or just read the research, on what makes the most readable, ergonomic language. And I remember just cringing the whole time listening to it. I'm not trying to say that the research wasn't done correctly or whatever, but it was exactly what Marshall was just saying: stuff like that is so informed by our experience. And how can you say that one language is more readable than another language when you are only able to test people that have been learning that for... And that's why there's something about... If we go back to Marshall's over, aka the psi combinator, it's like it crushes me. It crushes me to have to spell that pattern in other languages, because it's so fundamental. And unlike other combinators, a lot of the time when you're reaching for over, the psi combinator, it's because you have a named function like length or something that you're applying to two different arguments. You want the difference in the lengths of two strings or something. Okay, I guess in array languages you'll have glyphs for that stuff, and there are ways in other languages like C++ and Python where you can avoid it. But whenever you find yourself explicitly writing length of string A minus length of string B, it's like: I've got to type the same thing twice? Or when you were mentioning your example, Marshall: when you have a sort primitive in your array language, it's not as painful as when you're in APL. Because in APL, I now have to spell that sort twice, and that's like five characters. It's like, you're killing me. You're killing me. Luckily, even though Dyalog APL doesn't have the sort primitive, it does have over, so I only need to spell that five-character, four-character sort once. I don't know how to design a study to prove that stuff like this is fundamental and should be available to all programmers and programming languages. But like, yeah ... [sentence left incomplete].
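
For instance, in Dyalog APL the over pattern reads like this; since there is no sort primitive, the dfn {⍵[⍋⍵]} stands in for sort:

    'hello' -⍥≢ 'hi'             ⍝ (≢'hello') - (≢'hi')  →  3
    3 1 2 ≡⍥{⍵[⍋⍵]} 1 2 3        ⍝ sort each argument once, then compare  →  1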

01:10:38 [AB]

Hey, Conor, next time it's painful for you, you could always reduce over the map list of the two arguments.

01:10:45 [CH]

This actually has nothing to do with anything except what Adám just mentioned. The other day (and this actually relates back to that example that I don't know how to make completely tacit) I ended up writing an outer product reduction [07] on a list of two elements so that I could get three arguments into my function, and that made me very happy. Is it a good idea? I don't know, folks, but it's the first time I ever wrote an outer product reduction.

01:11:07 [AB]

And so. Conor, I've been listening (not so much lately) to your other podcast, ADSP, and you speak about C++ a lot. And I've learned certain terms; I can name-drop things from C++ and it will sound like I'm educated. But about what I just said: let's say you want to get the sum of the lengths of two things. Okay, it's a bad example, because you could just concatenate them first, but it doesn't matter; as an example, the sum of the lengths of two things. So you want to do plus over the length. But isn't there something in C++ called map-reduce? And wouldn't that do it for you? You take those two things, put them in a list, and then you map-reduce, where the mapping is with length and the reduction is with a sum?

01:11:58 [CH]

Yes, yes. There's a transform_reduce, and you could also do it with ranges as two separate steps: what they call a transform, and then a sum.

01:12:08 [AB]

So then many languages have this, it's just awkward to spell.
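
In Dyalog APL, the spelling Adám describes is short; in C++ terms it corresponds to the std::transform_reduce that Conor mentions above:

    +/ ≢¨ 'hello' 'hi'           ⍝ map with tally, then reduce with plus  →  7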

01:12:10 [CH]

Well, I mean, it's not really the same thing. Like, as soon as you've put your two arguments in a list, you're now in a different land, right? You're not working with a binary function, you're working with a sequence of unary functions.

01:12:25 [AB]

Look, look at me, the array programmer. I know you have multiple things in an array, that's not a different land. What are you talking about?

01:12:34 [CH]

I mean, in the function space it's different: you're binary versus unary. But yeah, it is interesting: whenever I code in C++, I'm always looking at patterns that I'm writing, and my brain's constantly going, "B1, like psi, phi, phi1, phi, phi, phi, how come I can't have phi?" And that's when I basically just use my Blackbird library in C++, which has a bunch of combinators. But even still, you don't have this train model; you only have the individual combinators, and to build up the train sequence that you would in APL, you'd never want to do that in C++. So usually you're just using these one or two at a time. Like, I used a B1 combinator to define division the other day. So you've got plus as your binary operation, and a partially applied division by two, because I only had two elements. So really you're just adding two elements and dividing by two. And then, so I had div-2, plus, and then I threw that inside my...

01:13:31 [AB]

Wait, so it was average of two things, not division? Average?

01:13:35 [CH]

Yes, the average. Or what did I say?

01:13:38 [BT]

Defining division. Division of, but there's division involved.

01:13:41 [CH]

Sorry, yeah. This is to define average for two elements.
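
A sketch of that composition in Dyalog APL (18.0 or later, where ⍤ between two functions is atop; avg is just an illustrative name):

    avg ← ÷∘2⍤+                  ⍝ B1: divide-by-two atop plus
    3 avg 5                      ⍝ (3+5)÷2  →  4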

01:13:45 [AB]

This reminds me, Conor, I have to correct you on a thing. A couple of episodes back, you briefly mentioned that it bothered you that the proposed glyph for the reverse composition in Dyalog isn't a reversed version of the normal composition: the Jot (which is symmetric anyway; you can't mirror it) and the Jot-underbar. But I wanted to point out to you that the proposal isn't actually symmetrical, because its monadic form, the monadic application of the derived function, would be a hook, whereas the monadic form of the plain Jot is just an atop. So they shouldn't be symmetric.

01:14:27 [CH]

Breaking change, breaking change, breaking change. No, I mean... Obviously not possible.

01:14:32 [AB]

No, that's not possible. And you're right, there is an atop operator; it's not strictly necessary, but I find that it's actually useful to have F atop G and F beside G (as it's now called). And both of them exist in monadic and dyadic form, and the difference between them is how they use the arguments, right? They're both atops when they are monadic, but in the dyadic case you can choose where you want the other argument to go. I think that's actually useful to have.
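
The pairings Adám describes, sketched with placeholder functions f and g; the Jot-underbar lines follow the proposal as described here, not a shipped feature:

    x f∘g y  gives  x f (g y)    ⍝ Jot, dyadic: the extra argument goes to f
      f∘g y  gives    f (g y)    ⍝ Jot, monadic: an atop
    x f⍛g y  gives  (f x) g y    ⍝ proposed Jot-underbar, dyadic: the extra argument goes through f
      f⍛g y  gives  (f y) g y    ⍝ proposed Jot-underbar, monadic: a hook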

01:15:02 [ML]

Yeah, I've run into this in BQN, where I go: well, in general I want BQN's Jot with a right pointer thing, but in some cases I go: well, I could really use APL's Jot here, because I want it to be ambivalent in a different way.

01:15:14 [AB]

Well, maybe, but there should be both. I think KAP has both. I'm not sure.

01:15:25 [ML]

I think it probably just does what BQN does.

01:15:28 [AB]

Is it KAP? Or... No, yeah, I think it changes it, but...

01:15:32 [CH]

Alright, I'm confused. Should we dive in on this, or should we save this, because I'm not following at all what's going on. I understood your initial point is that they're not symmetric, but now I thought that what APL was getting was a subset of what BQN was getting. And I also thought KAP was analogous to what's in BQN with their...

01:15:51 [AB]

Jot in APL does not exist in BQN. When you have an ambivalent function f Jot g, then when you add a left argument to that function, that left argument gets used as the left argument to f. So f is the part of f Jot g that is ambivalent. When you write f atop g and you add a left argument, that left argument gets given to g. So g is the one that's ambivalent. Both of them are useful. And both of them are useful pairs. So you can have an ambivalent function that, when you give it another argument, uses that argument for this function or for that function. And this doesn't exist in BQN, because the corresponding thing to the...

01:16:44 [CH]

So wait, the thing that I think I've missed is that the difference is the pairing of monadic and dyadic combinators here.

01:16:56 [AB]

So it's potentially useful that both a hook and a beside (I don't know what you want to call it) are paired up with an atop in the monadic form. Because sometimes the ambivalence lies in the left, and sometimes in the right.

01:17:07 [CH]

I see. I see. OK.

01:17:11 [AB]

So for example, let's say that I want the first of the ravel. In APL, I would write first, Jot, ravel. Why would I write it like that? Because whenever I add a left argument, it doesn't pick the first anymore; it picks the nth. That's very useful: I get the nth element of an array in ravel order. I wouldn't want the left argument to be paired up with the comma. That would make the comma mean concatenation in APL, which doesn't make sense here at all.
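
In Dyalog APL (with ⎕IO←1), Adám's example behaves like this; nth is just an illustrative name:

    nth ← ⊃∘,                    ⍝ first Jot ravel
    nth 2 2⍴⍳4                   ⍝ ⊃,2 2⍴⍳4  →  1: the first, in ravel order
    3 nth 2 2⍴⍳4                 ⍝ 3⊃,2 2⍴⍳4  →  3: the nth, in ravel order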

01:17:41 [CH]

So yeah, I guess this is... I'm not sure if this is a limit of my imagination or programming style, but when I mentioned the three different arrows, single arrow, double struck arrow, and a question mark for the ambivalent one, I threw out the ambivalent one, because why not? But I don't think I've ever written a tacit expression that I intended or wanted to use ambivalently. Which is why I was completely confused earlier. What you were arguing for is that the juxtaposition or pairing of these two combinators in a single symbol enables you to program certain functions. But I think personally, I never do that. My brain just doesn't think that way. It's like, "Oh." Because usually I'm just solving one-off problems or writing a single expression. Maybe if I did more library writing in BQN or APL, I might find myself writing these ambivalently dual-purposed functions.

01:18:35 [HR]

Yeah, that's when I've done that. I don't so much write an ambivalent function, but I've written conjunctions that combine functions. And you'd like the result to be ambivalent. You don't want one version of your conjunction that works with monadic verbs and another that works with dyadic.

01:18:59 [BT]

Well, because in a conjunction, you're at a different level. You're no longer dealing with arguments, the nouns. It's the operands.

01:19:06 [HR]

I'm not dealing with the nouns. I'm dealing with the verbs. I had some application; it was 20 years ago, so I don't remember the details. But I had several levels of this, and it was important, because otherwise there was an explosion of possibilities depending on what was monadic, and the intermediate form got huge if I didn't have conjunctions that could work either way.

01:19:35 [AB]

I can give you another example, in contrast. Before, I said it would be, in Dyalog APL, this right shoe, Jot, comma (I have to say the glyph names, because right shoe can be pick or first, depending on whether you give it a left argument). But let's say that I have a different one. Let's say that I want to flatten an array; I want to ravel it, but after moving some axes around. So now I would do ravel atop transpose. By default, it will just do a normal transpose, which in APL is reversing the axes. But I might want to specify a custom order of the axes as a left argument. So that would be my left argument: I can specify a custom order. I don't want that to become the left argument to the comma. That would concatenate. That makes no sense.

01:20:23 [CH]

You use the dyadic transpose for your argument? [08]

01:20:26 [AB]

Yeah, if I want a custom order of the axes before I ravel. Just specifying the reading order.
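
A sketch of this one in Dyalog APL (flat is just an illustrative name):

    flat ← ,⍤⍉                   ⍝ ravel atop transpose
    flat 2 3⍴⍳6                  ⍝ ,⍉Y: reverse the axes, then ravel  →  1 4 2 5 3 6
    1 2 flat 2 3⍴⍳6              ⍝ ,1 2⍉Y: a custom axis order (here the identity)  →  1 2 3 4 5 6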

01:20:32 [CH]

You got to know your audience, Adám. You got to know your audience. I'm going to be convinced by a dyadic transpose, a glyph I have yet to understand.

01:20:40 [HR]

When you need it, there's no substitute for it.

01:20:43 [AB]

But the idea here is not so much the dyadic transpose. I use that one because it is a fairly simple-- in some way simple-- function that has a default left argument, right? Where you can give it a--

01:20:57 [ML]

Well, so part of the problem is that functions in APL and other languages are so scattershot with how they do the ambivalence that you're really going into a minefield when you try to write ambivalently. There are a handful of functions that are really good for this, and then for most others, one of the cases is just not the one you want. Even plus. Frequently, I want to write code that adds the left argument if it exists. And for this, plus should have a default left argument of zero. But it doesn't. Instead, the monadic case is defined as the conjugate, which is just unrelated. So like--

01:21:29 [AB]

That also bothers me. But we can do it with minus instead. You can have some function--

01:21:43 [ML]

Yeah, well, I mean, that's the problem. You have to remember the set of functions that actually work for this. And then for the others, you have to special case it.

01:21:51 [AB]

That is true for the primitives. But that's only because we're using primitives. When I write my own utility function, I very often have some default left argument for it.
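
For example, a Dyalog dfn with a default left argument, in the spirit of Marshall's plus-with-a-default-of-zero (add is a hypothetical utility):

    add ← {⍺←0 ⋄ ⍺+⍵}            ⍝ the ⍺← default runs only when no left argument is given
    add 5                        ⍝ monadic: 0+5  →  5
    2 add 5                      ⍝ dyadic: 2+5  →  7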

01:22:04 [ML]

Yeah, yeah, this can definitely be the case where you--

01:22:06 [AB]

And then, when I want to compose those together, I need those two different compositions: which one of the two functions I'm composing is it that has a default left argument you can substitute with a specific left argument to the derived function? It's funny that the examples given for tacit programming always rely on the primitives. But I actually find that I quite often use tacit programming with what APL calls system functions. Some of my colleagues frown on it when I do that, but conceptually, they're just functions. It's the same. And system functions, being that they do systemy things, very often have a left argument that can specify something special, whereas the monadic form does some kind of default; it takes a default option. So there, I might very much use it. And also, system functions very often need some kind of parameter, a file name or something like that, and that needs to be passed in many different places. It would be annoying to have to repeat the name of the file over and over and over again when they all just need the same thing; they just need to be strung together and take each other's results. And I find that this gives me the freedom to compose them like this. But of course, you can always define your own operator in APL, or 2-modifier in BQN, to make up for it.

01:23:26 [ML]

Yeah, well, BQN also makes it much easier to work with explicit ambivalent code, because it has this mechanism where if the left argument doesn't exist, then when you write the name of the left argument, w, it's treated as nothing. And it just kind of disappears from the syntax. So that usually does what you want. And so BQN, explicit ambivalent code, is much easier to write than tacit ambivalent code.

01:23:52 [AB]

That's true. I mean, it may not be as slick in Dyalog. But if you assign the right tack, the identity function, to the left argument as a default value, even though it kind of mixes up the roles of things, that very often allows you to write neat ambivalent code as well. Because if you have alpha and omega as the left and right arguments, so you say alpha, function, omega, then if alpha is the identity function rather than a value (like you said, it's a nothing value in BQN), we just apply the function in the middle monadically to the right argument, and then the result is given to the identity function.
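
The idiom Adám describes, sketched as a dfn:

    f ← {⍺←⊢ ⋄ ⍺×⍵}
    3 f ¯4                       ⍝ dyadic: 3ׯ4  →  ¯12
    f ¯4                         ⍝ monadic: ⍺ is ⊢, so this is ⊢(ׯ4)  →  ¯1, monadic × (direction)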

01:24:36 [ML]

Yeah. One thing you miss there is the ability to apply a function to the left argument if it's there. So transform your left argument. And yeah, otherwise, just skip over the function. Yeah.

01:24:46 [AB]

A trick I've seen there (but of course, then you have to choose which one it is) is to assign power operator zero to the left argument. So for example, in a definition of over: over is the right operand applied to the left argument, and the right operand applied to the right argument (you can parenthesize those if you want), and then the left operand between them. Here, we don't want just a pass-through left argument when it's not there, because that would apply the right operand to the result of the left operand, going from right to left. But if you have the left argument, so to say, be "but don't", basically, don't apply this function, then effectively it falls away. Another one you could do is to assign it a little custom operator that just returns its right argument and ignores its operand: it consumes the function to its left, but doesn't actually use it for anything. So there are some tricks you can do like this. But yeah, the BQN system is somewhat--

01:25:56 [ML]

Yeah, which I guess that works as long as you have your processing step be one function. If you have multiple, then you have to combine them. So it's--

01:26:07 [AB]

Yeah, it's true. I think the BQN system in that case is a bit neater, but I've never really had any issue with doing it like this. But anyway, the point is that it's fairly easy in these languages to write ambivalent code. And then you need these compositions to have sensible pairings of what the monadic and dyadic forms mean. And it's not so bad to have multiple atops in the monadic form. So that's what you have in Dyalog: three different atops that each mean something distinct in the dyadic form.

01:26:34 [BT]

But ambivalent code seems to be what makes tacit difficult for people to parse.

01:26:40 [ML]

Well, that's partly because we are, again, using these primitives where the two meanings, if you have a left argument or not, are completely different. So if you have a composition that makes sense in both cases, you can probably just read it ambivalently. So I mean, if you read f atop g and you know that g is ambivalent, there's no problem with that. I mean, you just understand that as a unit, g is going to apply to all the arguments, and then f applies.

01:27:08 [AB]

Maybe. But the question is whether the human brain can comprehend this, right? It seems that in natural language we have words that, due to the grammar, are ambivalent in their meaning, in their word class, until later in the sentence, when that gets clarified. And it's hard for humans to deal with. The classic one is buffalo, buffalo, buffalo, buffalo, buffalo, buffalo, buffalo, buffalo, buffalo, buffalo, right?

01:27:38 [ML]

But again, that's the case where the meaning changes based on the grammar. If you have something where the left argument just like tweaks the meaning, it's just additional information that's passed in, then that's really not a problem.

01:27:52 [AB]

I'm not saying that it's a problem or not. I'm saying it may be. It may be that it's hard for the human brain to deal with the ambivalent situation, even though the two cases are very related to each other. Maybe. Maybe not. I don't know. It needs to be studied further.

01:28:12 [ML]

I mean, I think if you have something with sensible ambivalent functions, you just read it as though there's always a left argument.

01:28:19 [AB]

But it has this value.

01:28:21 [ML]

As though there's a left argument, and its value is either something or nothing. And so it shouldn't be any harder than assuming a dyadic case.

01:28:28 [AB]

It's the default value.

01:28:31 [CH]

Yeah. We should also point out, I've been thinking, I just had this thought: if we remove ambivalence, we can absorb more of the power that lies in Jelly, [09] folks. We're talking not just a monadic fork and a dyadic fork; get rid of the D2 combinator, using both before and after, or the hook and the reverse hook, and you can just make the D2 combinator. Oh, actually, can you? One, two, one. One, two, one. Never mind, you still need it, folks. But the point is, you can make one, one, one: just three unary functions in a row. You can do more if you don't have to rely on the fact that we don't have fixed arity. Just saying.

01:29:10 [AB]

Conor, you're sitting down, right? Yes. Good. Jelly has ambivalence.

01:29:15 [CH]

Yeah, yeah, I know. But only in little nooks and corners, right?

01:29:21 [AB]

It's at the beginning of a program. If there aren't given enough arguments, then it will use the one given argument multiple times. Then it will keep doing that until it has enough or whatever quick it needs.

01:29:39 [CH]

Yeah, so the glyphs don't have ambivalence, but programs do. A good example of this: if you have a binary function and you pass only one argument to it, it will turn that into the W combinator; it'll duplicate it. A binary function can either ... [sentence left incomplete].
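
In APL terms, that implicit duplication is the selfie operator ⍨ made explicit:

    +⍨ 4                         ⍝ 4+4  →  8
    ×⍨ 4                         ⍝ 4×4  →  16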

01:30:04 [AB]

It's not even just one duplication. I think there are some that take three arguments, because it can ... [sentence left incomplete]. And then it will triplicate it.

01:30:13 [CH]

Maybe. We'll just pretend that, for our sake, it's just two. If you have plus, it's double if you pass it one argument, and it's plus if you pass it two. Which, I don't know how I feel about, folks. I don't know how I feel about that.

01:30:13 [AB]

It's because of CodeGolf, right? It's intended for that one purpose of doing CodeGolf.

01:30:34 [CH]

I mean, it's all because of CodeGolf, but some of the- Incidentally, some of it's very beautiful.

01:30:40 [AB]

Yeah, I understand. But Jelly's design in that respect is just because of CodeGolf. I think if you were to use Jelly or a derivative of that as a production language, you probably want to explicitly duplicate the argument. That's what you have in mind.

01:30:54 [CH]

And they actually do have a W-combinator higher-order function (a "quick", I guess they call them) for when you need it, in the case where your binary function's not at the start of your program. Anyways, we've been soaring past our time. I mean, as expected.

01:31:09 [ML]

How long is this podcast supposed to last?

01:31:10 [BT]

I don't think the hour mark crosses borders.

01:31:13 [AB]

I don't remember. In this episode, we cannot mention the hour mark.

01:31:17 [ML]

What was that? We can't mention?

01:31:18 [AB]

It's tacit.

01:31:22 [BT]

Okay. That was the problem.

01:31:23 [CH]

It's too late. I did look it up, too. And "tacit" does exist in the literature, in papers, before 1990; I Google Scholared it. But it all seems to be in psychology, like tacit knowledge. And for the first academic use, I searched for "tacit programming" and "tacit program", and the first thing that I could see was the 1991 paper by Hui and Iverson called "Tacit Definition". That's not definitive, as Marshall said, but potentially it came from array programmers.

01:31:55 [ML]

I think it's pretty likely that specifically Iverson came up with it, although Roger probably could have. And it's worth noting that that term wasn't used in the paper that introduced forks and hooks. So it wasn't around at that time. And then by 1990, somebody, probably Iverson had come up with it.

01:32:16 [AB]

It's interesting the way he writes about it. In 1993, in the J Introduction and Dictionary, he writes about tacit programming and uses the term. And the way he uses it makes it sound like it's an established thing, and not something he has introduced.

01:32:39 [HR]

Yes.

01:32:39 [CH]

I mean, that's a great way to invent stuff, though, is you just start talking about it as if it's a thing. And then people are like, well, it sounds like that's a thing. I'll start repeating it. You know, that's literally how teams and corporations are created sometimes. You know, I work with my team. That team doesn't exist. But then, you know, a month later, that person saying, oh, well, I talked to their team. And then, you know, a year from now, that team is just a thing.

01:32:59 [AB]

Are you saying that he never even defines it? It's just tacitly people understand what it means? I hear you.

01:33:07 [CH]

So, what have we... did we come to any conclusions, Henry, on tea-totaling? What's the final word on tacit programming and tea-totaling? Or is there none?

01:33:16 [HR]

Well, I think tacit programming is a state of mind. It's the willingness to use the trains to improve your lines of code. But it's not something you would use for an entire program. You use it often, but not exclusively.

01:33:34 [ML]

I use it for the program plus quite frequently.

01:33:43 [AB]

Conor, you said before that I might disagree with you about, like, a seven-train, a seven-carriage train. No, I don't think that's a problem. You'll find many long tacit programs on APLcart. What bothers me is the nestedness. I don't mind long trains that are just flat, without inner parentheses, not nested inside each other.

01:34:06 [HR]

Yeah. I would like to see some work done if it's possible to conduct a controlled study. It might not be.

01:34:15 [ML]

Well, my opinion on this sort of thing is that interpreting these studies is actually not only as difficult, but more difficult than interpreting your own programming experiences and informal discussions with people. So in a way, I'm glad that studies are being done, but I think they're unlikely to actually resolve any questions because then you're just stuck debating what this study meant, whether this study is actually applicable to the problem it's studying and so on. Yeah.

01:34:48 [HR]

Yeah. And if the methodology is any good. When I was teaching J in the high schools, I had a beginning computer class, which was mostly spreadsheets, and another that was actual programming. And even in the baby course, we took one six-week unit and they all did some J programming. What I found is that all the students picked it up and were able to write good J, to the point where they could read the HTML from ESPN's website and print out the schedule of games to be played that day, or something like that. Except the few people who had written a little C or Java: the people who had actually done loops had trouble. So if you're trying to do a study about computing, you can't study people who have already learned anything about computing. You'd have to take novices, and now all you're going to learn is what novices can learn, which is not really what we want. I don't know. Anyway.

01:36:00 [AB]

But then you have a bad population, because you'll be self-selecting: people who have a programming-type mind will go study programming, and then get the mindset that prevents them from doing this kind of programming. The world is a poisoned well, right? We can't actually study this. We can only speak anecdotally about people's personal experiences. I think it's interesting that in that J Introduction and Dictionary, Iverson equates functional and tacit programming. He says "the use of" (and then in italics) "functional", or (again in italics) "tacit", "programming that requires no explicit mention of the arguments of a function (program) being defined, and the simple use of assignment to assign names to functions." That reflects both ways: he's saying that tacit programming is functional, in the sense of dealing with functions, and he's saying that what he calls functional programming should only deal with functions and never with arguments, which is not exactly how we use the words in computing today. And I'd also like to quote Neville Holmes, in Computers and People. He says: "added to J more recently, tacit programming arguably provides the purest form of functional programming yet to appear. The continued neglect of APL and J by scientists, engineers, mathematicians and actuaries delays recovery from the original blunder." He had previously been speaking about various blunders in the computing world. But again, he equates the two. So maybe we've been thinking of it a little bit wrong. We say tacit, tacit, tacit: don't mention the arguments. But maybe more importantly, what they're both bringing out here, both Iverson and Holmes, is that it's functional in the sense that we focus on the functions, and not so much on the arguments. Whereas people often speak about these array languages as doing data-driven programming. This almost sounds like it's in opposition to that: don't care about the data so much; care about which functions we're putting together.

01:38:26 [HR]

Well, I think that sets you on the road to tea-totaling, and I would say don't go there.

01:38:33 [AB]

Yeah and I'm just quoting what people have been saying.

01:38:38 [CH]

Well, if you have opinions on how much tacit is too much, or whether with tea-totaling there's no other way, be sure to reach out to us. We can conduct our own little mini study. We won't publish anything, but we'll probably talk about it on the next episode, or whenever you send us, you know, your opinion. And you can do that by emailing us at

01:38:59 [BT]

Contact@arraycast.com is where you can get in touch with us, and we welcome your emails and messages; they actually do inform us quite a bit about what people are looking at and thinking about. And I'll be interested with this particular episode, because there'll be people who love it because it goes on so long, but others will say it's too long. Actually, I shouldn't say that, because in past experience nobody has ever said it's too long. This may be an exception. I'll turn it back over to you, Conor.

01:39:30 [CH]

That's true. Didn't Elliot say one time, he had a little mini blog post that said, you know, four hours is the ideal amount of time, or something like that?

01:39:41 [BT]

He's not editing this.

01:39:44 [ML]

We do not consider this a serious suggestion. We have not taken this into consideration. We have ignored it.

01:39:51 [CH]

But yes, clearly we love to have these conversations. Thank you, Henry, for joining us as our guest panelist for the day. Yeah, I never know where these conversations are going to go but I always come away with my head buzzing and, and yeah, questioning, you know, the purpose, the purpose of my tacit existence in this life, you know.

01:40:07 [AB]

You shouldn't speak about that openly.

01:40:13 [CH]

Yeah, just leave it tacit, I guess. Anyways, till we do this next time. Yeah, once again, thank you, Henry. And with that we will say happy array programming.

01:40:22 [ALL]

Happy Array Programming.