Transcript
Thanks to Rodrigo Girão Serrão for providing the transcript.
00:00:00 [Henry Rich]
I basically said, gee, I've got to go fix all this old documentation, and it occurred to me that, you know, if I just bring back those old modifiers, I don't have to fix the documentation. That turned out to be the easier solution, so I did it.
00:00:25 [Conor Hoekstra]
Welcome to another episode of Array Cast. My name is Conor. Happy 2022, and happy new year. This is the first episode of the new year, and today we have on, I think, our first returning guest ever. Before we get to introducing him once again and talking about a lot of exciting news happening in the J world, we're going to go around and do quick introductions. We'll first go to Bob, then to Stephen, and then to Adám. Then we've got, I think, one quick piece of news to announce, and then we'll hop straight into the interview with our returning guest.
00:00:58 [Bob Therriault]
I'm Bob Therriault. I'm a J enthusiast, and currently I'm working with a group on improving the J Wiki, which may come up during the discussion today. That's keeping me very busy.
00:01:10 [Stephen Taylor]
Stephen Taylor, I'm an APL programmer from way back, and these days, the KX librarian.
00:01:16 [Adám Brudzewsky]
Adám Brudzewsky, also an APL programmer, but a current one. I do a lot of things: programming, maintaining things, and working on the APL wiki.
00:01:25 [CH]
And as mentioned before, my name is Conor. I don't program in APL professionally at the moment; I'm a C++ developer, but I love APL, J, k, q, BQN, all the array languages, and the paradigm. Since our last episode I'm just thrilled that the dyadic hook and hook are spellable in languages other than J. Yes, I'm just thrilled that atop is spellable. I keep going back to the wiki. Anyways, we'll put it in the show notes; go back and listen to the last episode if you want. It was a great moment in that episode when I thought it was just a J thing, and then APL and BQN had it as well, just spelled slightly differently. But with that ramble out of the way, we'll throw it to Bob, who's got a quick YouTube video announcement, and then we'll hop into our interview.
00:02:16 [BT]
Yeah, Aaron Hsu put together a promo for a conference he's going to. I should know this off the top of my head; I think it's sometime in January [March]. It's about two minutes, and it has a very distinct view of, in this case, APL, but it really applies to all the array languages: the way they're used, the way people look at them, and what they're really powerful at. I found it a very succinct and comprehensive video, and we'll put it in the show notes. And that's another plug for the show notes: if you hear something mentioned, look at the show notes, because there's so much information we put in there. Obviously all these array languages are very dense, and if something is going over your head, it's always a good idea to jump back into the show notes and take a look, because we'll give some more background and it might make a bit more sense that way. So those are my tips, and watch the Aaron Hsu video.
00:03:13 [CH]
Awesome, thanks. Yeah, the links will be in the show notes as always, and at some point, as we've said a few times now, we'll have to get Aaron on the podcast, because he's got a ton of interesting things to say and has obviously done a ton of interesting work with his Co-dfns compiler and whatnot. But with all that said, let's once again introduce our, as mentioned before, first-time returning guest Henry Rich, who, if I have the number correct, was on episode 6. We'll have a link to that previous episode; it's an awesome listen. He was, I believe, our first guest, so it's sort of great to have him as the first returning guest as well. That episode was called "Henry Rich's Deep Dive into J." Very briefly, and I'll let him introduce himself after this: he is the primary maintainer of and contributor to the J language, which is open source on GitHub, so you can go take a look at all of his contributions and the code. He has a long history; we went through that whole history and experience in the first episode. At one point he was a teacher and taught J, and he has a lot of interesting anecdotes about what it's like to teach students who haven't gone through the regular curriculum of, you know, Python or Java, or insert whatever language of the decade is used as the teaching language; teaching beginners whose minds haven't been affected by the regular imperative thinking. He has worked in different careers, used J, and worked his way up to contributing to J. Today I believe we're going to be covering the newest release of J, which is J 903. I'm going to feel really bad if I got that wrong, but I think 903 it is. With that, I'll let Henry add some more color to introduce himself, and then we'll hop into the high-level and the low-level details of everything that's new in this latest release of J.
00:05:07 [HR]
Well, thank you for that introduction. Yes, I'm the developer of the J interpreter. Bill Lam and Chris Burke and Eric Iverson also contribute mightily to the release, but I'm the one who works on the interpreter. We just released 9.03 after a one-year beta period, and it's got quite a few new features, which I'll go through. The one that has produced the greatest amount of traffic in the forum is something very old. You were rhapsodizing about hooks and forks and how beautiful they are, and that's right, they are. It's a brilliant idea: you can take three verbs and, with no punctuation at all, create a new verb. It's magic. In my book I had to come up with a name for this sort of thing, and I call them invisible modifiers. They're like conjunctions and adverbs, but you can't see them; they're just done with spaces, with the ordering of primitives. You can use parentheses, but you don't really have to. If you take three verbs and assign them to a name, now you have implied parentheses, and you can have an entire language that has no punctuation whatsoever. It's just primitives, and the semantics comes from the parts of speech. Well, it turns out Ken Iverson knew all this, and he developed a very sophisticated language. It must have been 30 years ago, because it was in the very early J versions. He developed a language that expands hooks and forks beyond verbs. With a fork we take verb, verb, verb, and the fork means: apply the left verb, apply the right verb, and then apply the one in the middle between the results of the other two verbs. And obviously you get a verb, because it's operating on noun arguments. But you could imagine a fork where one of the components is not a verb. It could be a modifier; it could be slash, for example, the reduce operator. And what would that produce?
Well, it couldn't be a verb, because the reduce operator is not itself a verb; this would be a compound that operates on other verbs to produce new verbs. That's what we call in J a modifier. The language that Ken came up with defines not just hook and fork but about two dozen other sequences that allow you to combine verbs and modifiers to create new verbs. To me, that language is one of those beautiful things that has been done in programming. But it turned out that hardly anybody used it. Maybe that's an indication that the J community has become more sophisticated over the years, because Roger Hui, the originator of the interpreter, decided maybe 15 years ago that it just wasn't worth supporting all that interesting stuff, and he got rid of it. It's not necessary: there's already a way to have an explicit operator that takes verbs and produces new verbs. This is merely a way of doing it tacitly, like with hooks and forks. In other words, all it really brings is that it's very beautiful. In effect, when you create a modifier like that, what you're doing is creating a programming template: I have a series of operations I want to do; I feed it a verb, I get out a new verb, and that's the idea of a template. Anyway, a lot of people seem to be interested in this, and I'm glad to see that. I loved it, and I was very sad when those features were removed from the language. But they're back now. And a side benefit is that a lot of the old documentation, more than 15 years old, that referred to these things now finds itself supported by the language again, so you can follow the books, the ancient texts I guess you could say, more easily.
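Henry's fork can be sketched outside J. Here is a loose Python rendering of the idea, with Python functions standing in for verbs and higher-order functions for modifiers; the names `fork` and `reduce_adverb` are illustrative, not part of J:

```python
from functools import reduce

# A fork (f g h) applies the outer verbs f and h to the argument and
# combines their results with the middle verb g:
#     (f g h) y  ==  g(f(y), h(y))
def fork(f, g, h):
    return lambda y: g(f(y), h(y))

# The classic example: average is the fork (+/ % #), i.e.
# sum divided by count.
average = fork(sum, lambda a, b: a / b, len)
print(average([1, 2, 3, 4]))  # 2.5

# Now imagine one slot of a train holding an adverb like / (reduce)
# instead of a verb. The train can no longer produce a verb directly;
# it produces a modifier: something that still wants a verb before it
# can touch data.
def reduce_adverb(g):              # plays the role of /
    return lambda xs: reduce(g, xs)

plus_reduce = reduce_adverb(lambda a, b: a + b)   # like +/
print(plus_reduce([1, 2, 3]))  # 6
```

The last two definitions show the distinction Henry draws: `average` is a finished verb, while `reduce_adverb` is a template that must be given a verb first.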
00:10:24 [CH]
Your explanation is fantastic; I understood it, though someone's understanding will probably be assisted if they already understand what a fork is. We've talked about forks many times in episodes before and given simple examples. I think it's very interesting that you call them invisible modifiers; it's a great nickname for them, because that is exactly what they're doing, at least the 2-trains and the 3-trains, the 3-train being another name for the fork. So now we have, I guess, fork, 3-train (although 3-train can refer to multiple things), invisible modifiers, and then the corresponding combinator name: four different ways to refer to these things. So I guess my question is: similar to the way that "is palindrome" can be used to describe a hook, and average can be used to describe a fork, is there a simple example of one of the, you said, dozen or two dozen patterns that is explainable like the average example, that's easy to grok?
00:11:35 [HR]
Well, yes, but I don't see how you're going to understand this over a podcast.
00:11:42 [CH]
OK.
00:11:46 [HR]
But I can give an example. It happens quite often, particularly with boxed entities, that the normal array rules don't work when the argument is empty. So sometimes you need an exception just for empty arguments, right? There you have a case where the template is: I want to execute the verb, except when the argument is empty, in which case I want to take a default value. That's an example of a very simple invisible modifier. Internally it would use the power conjunction to decide whether to execute the verb or not and produce the result. But the point is the name I give it: "but if null". So I would write: do-something but-if-null 6. That triplet is a modifier, and the result would be applied to a noun argument. If the noun argument is empty, the result is 6; if the noun argument is not empty, the result is the result of do-something applied to the argument. So you can see that but-if-null is not a verb; it needs a verb argument and a noun argument so it can create the thing that is to be applied to the ultimate input. That's an example of a programming template that is easy to represent as a modifier, and now you can do it invisibly. Previously you would have had to write an explicit conjunction.
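A rough Python sketch of the "but if null" template Henry describes; in J this would be a tacit modifier built with the power conjunction, but the shape of the idea carries over (all names here are hypothetical):

```python
# but_if_null is a modifier: given a verb and a default value, it
# derives a new verb that returns the default when the argument is
# empty and otherwise applies the verb.
def but_if_null(verb, default):
    return lambda y: default if len(y) == 0 else verb(y)

total = but_if_null(sum, 6)   # "sum, but if null, 6"
print(total([]))              # 6   (empty argument -> default value)
print(total([1, 2, 3, 4]))    # 10  (non-empty -> the verb is applied)
```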
00:13:44 [AB]
I think I can give a simple example, similar to what we did with the average. Let's say for simplicity we use atop. Atop is a very, very simple combinator, or conjunction: it just applies one function after another function. So we can now define something that would be, for example, average atop atop. The result of average-atop-atop would be a conjunction itself, so it takes two functions. For example, it could take a reverse and a concatenation (just as an example). So this new entity, average-atop-atop, is a conjunction: it takes two functions, reverse and concatenate, applies one after the other because there's an outer atop, and then applies the average on the result of that, as yet another atop. So eventually, instead of having an atop conjunction, you have an average-of-atop conjunction.
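Adám's average-atop-atop can be sketched in Python as a loose analogy, with functions standing in for verbs and a higher-order function standing in for the derived conjunction (all names are illustrative):

```python
# atop composes two functions: (f atop g)(x...) == f(g(x...)).
def atop(f, g):
    return lambda *args: f(g(*args))

def average(xs):
    return sum(xs) / len(xs)

# "average atop atop" is itself a conjunction: it takes two verbs
# and yields average applied after their composition.
def average_atop_atop(f, g):
    return atop(average, atop(f, g))

# e.g. reverse applied after (dyadic) concatenate, then average:
avg_of_rev_cat = average_atop_atop(lambda xs: xs[::-1],
                                   lambda a, b: a + b)
print(avg_of_rev_cat([1, 2], [3, 4, 5]))  # 3.0
```

As Conor notes just below, reverse is a slightly odd choice here, since reversing doesn't change an average; the point is only the shape of the composition.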
00:14:53 [CH]
Interesting. I guess reverse is sort of a less useful example there, in that it doesn't actually change anything; it just rearranges the values, so the average...
00:15:03 [AB]
Yeah, none of it made any sense. No, of course not.
00:15:08 [CH]
I'm trying to think if there's an equivalent name for this in functional programming, because if you think of 2-trains and 3-trains, when you pass them verbs, a.k.a. functions, they return you new functions. The definition of a higher-order function is a function that either takes a function as a parameter or returns a function, so these are higher-order functions in that they're returning functions. But this is sort of the extension of those 2-trains and 3-trains such that they can take operators, as they're called in certain versions of APL, or conjunctions, such as, you know, reduce or scan. These also return functions, but functions that will themselves take functions as arguments. So I'm thinking in my head that there's probably a way... In APL and J and BQN, the scans that come with those languages, as defined by either the digraph or the single symbol, are what's known as inclusive scans. In C++, or sort of high-performance computing, there are inclusive and exclusive scans. With an inclusive scan, if you have 1 1 1 1 and you do a plus inclusive scan, you get 1 2 3 4. But with an exclusive scan, also known as a pre-scan in certain languages, you give it an initial value. So if I have the sequence 1 1 1 1 and I give it an initial value of 10, instead of ending up with four elements you end up with five elements, and the resulting scan is 10 11 12 13 14. You basically have an initial value. So my guess is that there's a way of defining a pre-scan, or exclusive scan, which doesn't come with J or APL; the scan they come with is the inclusive scan. There's probably a way of defining an exclusive scan. Maybe, maybe not, but that's the idea.
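Conor's inclusive/exclusive scan example can be reproduced in Python, where `itertools.accumulate` supports both shapes (the `initial` keyword needs Python 3.8+):

```python
from itertools import accumulate
import operator

xs = [1, 1, 1, 1]

# Inclusive scan: the result has as many elements as the input.
print(list(accumulate(xs, operator.add)))              # [1, 2, 3, 4]

# Exclusive scan (pre-scan): seed with an initial value; the result
# has one more element than the input.
print(list(accumulate(xs, operator.add, initial=10)))  # [10, 11, 12, 13, 14]
```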
00:17:02 [AB]
No, no, you're right, you're right.
00:17:03 [HR]
Yeah, yes.
00:17:04 [AB]
So what you've got here is: scan is, in J terms, an adverb, or a 1-modifier, or a monadic operator in APL terms, and you want to apply it after you have concatenated one more element to the front of the array you started with. So what you want is scan atop concatenate.
00:17:26 [HR]
Right.
00:17:27 [AB]
Right, and that is exactly that: adverb, conjunction, and verb, but you never specified what the operand to the scan was. So after you have this, which would be called exclusive-scan or something like that, you can just say 5 exclusive-scan your-array, and it will concatenate the five to the array and then run the scan on it.
00:17:48 [HR]
Well, you need
00:17:49 [AB]
Sorry, you would need a function, of course. So you do 5 plus-scan exclusive-scan your-array, right? And so you have a new adverb, or 1-modifier, or monadic operator here that was defined only in terms of the old one, without ever giving it an operand, atop a concatenation.
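The exclusive scan built from "scan atop concatenate" can be sketched in Python; names like `exclusive` and `plus_scan` are illustrative, and the dict-free prepend stands in for J's concatenation:

```python
from itertools import accumulate

# "scan atop concatenate": prepend the left argument, then run the
# inclusive scan. This derives exclusive scan from inclusive scan
# without the definition ever naming the verb being scanned.
def exclusive(scan):                       # modifier: takes a derived scan
    return lambda init, xs: scan([init] + list(xs))

plus_scan = lambda xs: list(accumulate(xs))   # the inclusive +-scan
plus_exclusive_scan = exclusive(plus_scan)

print(plus_exclusive_scan(5, [1, 1, 1]))   # [5, 6, 7, 8]
```

This mirrors Adám's "5 plus-scan exclusive-scan your-array": the 5 is concatenated to the front, then the ordinary inclusive scan runs.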
00:18:10 [CH]
That's really cool. Yeah, that's just one example off the top of my head. But so these are also higher-order functions; the slight difference is that the functions they return will, it sounds like either all of the time or at least some of the time, also take functions. Every time I say function, replace it with the word verb if you want to do J-speak, or whatever dictionary APL-speak is. These will also take verbs as arguments, which, that's the thing: I don't think there's actually a "higher-higher-order function"; there's just the one term, higher-order function, that applies to all of these, although there's kind of a slight difference.
00:18:55 [HR]
You're in a state of sin when you say that verbs take verbs in J. Modifiers can take verbs; verbs only apply to nouns. But you get the idea.
00:19:07 [CH]
OK.
00:19:07 [HR]
Yeah, the higher-order function you're talking about, we would call a modifier. I have to say I found this beautiful, and useful a little bit, not very much. I was sad when it was taken away 15 years ago just because it was such a magnificent intellectual creation. I don't think having these modifiers is going to change people's lives, but it might well change the way you think, and that's important.
00:19:41 [CH]
I mean, this sounds cool to me. And if I recall, when you first mentioned this the first time you were on, which was back in, I think, the summer of 2021, I had never even known about the existence of this sort of lost-city-of-Atlantis version of J. And you were talking about how it was just so magnificently beautiful, but you had to get rid of it because no one used it. And I was like, wait, what? There was an even more tacit, you know, tacit-programming version of J? Does it exist somewhere? And now look at this, half a year later.
00:20:19 [HR]
Yeah, now it does. Now you can see it again. It's great.
00:20:21 [AB]
I think I can give another example that's easy to understand. In APL, first-axis and last-axis reductions have separate symbols. J doesn't have that. Let's say you wanted to define a last-axis reduce; that's reduce rank one. Now you're allowed to: now you can say last-axis-reduce is reduce rank one. You couldn't do that before, because you would be applying rank to an adverb, and that wasn't allowed; you had to give it an operand to work on. You could do plus reduce rank one, but you couldn't just do reduce rank one. Now you can do that, and it just works.
00:21:08 [HR]
Yeah, I think you're not quite right there, because of slash quote one: the quote-one is the rank. You're right that that would be illegal, but you could also create an adverb, quote one, and because it's an adverb... the one little tiny bit of the tacit language that survived was that adverb-adverb sequences were allowed. So even without this we could have done what you said, but it would take parentheses. Now you just write slash quote one, and that is exactly what you say. Anyway, moving on through the release, to things that may have more practical than aesthetic value. We tend to focus on the array-ness of these array languages, because that's what makes them important tools of thought. But it turns out an awful lot of the code that you execute is simple sentences that add one to an index, or operate on small things. In particular, when you have programmers who are unfamiliar with the array languages and are just learning to become J programmers, they write code that resembles scalar code to begin with. They'll get better as time goes on, but we have to keep them on board long enough for them to make the transition. I say that they're in a stage where they're writing J-TRAN, which is J written by somebody whose first language is FORTRAN. We've all seen that: somebody writes a loop and you say, oh, you don't need a loop; well, but they don't know that, they don't know any better. So it turns out that for maintaining the interpreter, it's important to make big arrays work fast, but it's also important to make simple sentences that operate on scalars work fast. And I put a lot of work into that in the 9.03 release. I completely rewrote the parser for about the eighth time, and it's even faster than before. There's a special path through the arithmetic operations that deals with singletons: either atoms or arrays with only one value.
I rewrote that again to eliminate a couple of mispredicted branches, and now there's only one mispredicted branch through the operations. I reduced the maximum rank. Originally in J the rank could be up to 2 to the 32nd; at some point it was convenient to cut that back to 65,000, and now I find it convenient to cut it down to 63, which is where it is now. That's still more, I think, than anybody is likely to want, although it's not quite as obvious as it looks, because with sparse arrays you could imagine having an array of huge rank. But 63 it is for now. This allows the setup for the arithmetic operations to do operations on bytes, and it speeds up that setup. Then there are two other features new in 9.03 that are not really so much for J-TRAN, but they offer performance, and they have to do with parsing and execution of explicit definitions. When you parse a sentence that contains modifiers, let's say you just say plus slash Y, let's add up the elements in Y: the parser processes it from right to left. It says: Y, OK, there's a noun, we'll take that. Then the slash, that's an adverb; we can't do anything with that yet. But then plus, and now the parser says, "ah, plus, I know what that means": that's an execution of the adverb, I'll go do that. So it does that, and it comes back with the verb that adds up the items in Y, and then it executes that verb. So if this sentence is in a loop that your J-TRAN programmer has decided he wants to execute a million times, the combination plus slash is going to be created a million times: it'll first be created and then immediately executed. There's not much way to avoid that in J, because yes, we think of Y as a noun, but there's nothing to prevent the user from defining Y to be a verb, and then all of a sudden plus slash followed by a verb is something totally different.
So you'd like to pre-compile the sentence, but you really can't. However, you can partially pre-compile it. If there are pieces of the sentence that are enclosed in parentheses, and those pieces don't contain any names, then you can safely process the bit between the parentheses. So in this case, if I had this in a loop that's going to execute a million times, I would put parentheses around the plus slash, so it would be parenthesis plus slash parenthesis Y [(+/)Y]. Obviously the result is going to be the same, but the difference is that in the initial examination of the sentence, which is done one time when the verb is created, before the verb is ever executed, the bit between the parentheses will be executed to produce the "add them up" verb. So that will be done only once instead of at every execution of the sentence. I have actually got some code where that makes a noticeable difference. A similar improvement is used for applications that have long search paths. I don't know APL well enough: does APL have anything similar to the locale system of J?
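The effect of parenthesizing `(+/)` is loosely analogous to hoisting a derived function out of a loop. A Python sketch of the analogy only; J's actual mechanism happens at parse time on the tacit fragment:

```python
from functools import reduce
import operator

rows = [[1, 2], [3, 4], [5, 6]]

# Like an unparenthesized +/ inside a loop: the compound is
# (conceptually) re-derived on every pass.
out_slow = []
for row in rows:
    add_up = lambda xs: reduce(operator.add, xs)   # rebuilt each time
    out_slow.append(add_up(row))

# Like (+/): derive the compound once, when the definition is
# created, then reuse it on every pass.
add_up = lambda xs: reduce(operator.add, xs)
out_fast = [add_up(row) for row in rows]

assert out_slow == out_fast == [3, 7, 11]
```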
00:28:19 [AB]
Yeah, the namespaces.
00:28:20 [HR]
Namespaces, right. So in J there's a private namespace for the executing verb, and then there's a search path of namespaces in which names are looked up. Let's say you have a modifier called each, which says operate on the boxes. A typical thing that we would do in J is say A plus each B: take the contents of boxes A and B, add them together, and box them up again. Well, when you go to look for the name "each", it's defined all the way up in the base locale. If you have a long search path, and that might be because you've got a database system and you've decided you're going to have lots of mostly empty search paths in which you can define names as they're needed, or for any other reason; but if you have a long search path, then every time that sentence is executed, the name is looked for inside the explicit definition and not found; go to the first element of the search path, look it up, it's not found; it fails all the way up to the z locale, the global locale, and there it finds the name. That can easily be half a dozen failed searches on the way to finding a name, and again, if you execute that sentence a million times, you're going to see that. So the added feature is: at the time you create a definition, you can say, this definition, once it runs, I know I haven't done anything tricky. I haven't redefined Y to be a verb; I'm not changing names. Once you find a name, remember where you found it, and look for it in the same place next time. I call that name reference caching. It cuts off almost all the searches, and it's something that, if you've got a serious application with long search paths, you've got to look into.
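Name reference caching can be modeled as a cached walk over a search path of namespaces. A simplified Python sketch, with dicts standing in for locales; the real J mechanism is per-definition, opt-in, and controllable, none of which is shown here:

```python
# A search path of namespaces, mostly empty, ending at the global
# (z) locale -- dicts stand in for J locales.
search_path = [{}, {}, {}, {"each": "verb defined in z locale"}]

cache = {}   # name-reference cache: name -> resolved value

def lookup(name):
    if name in cache:                 # cached: skip the failed searches
        return cache[name]
    for namespace in search_path:     # uncached: walk the whole path,
        if name in namespace:         # failing through empty locales
            cache[name] = namespace[name]
            return namespace[name]
    raise NameError(name)

print(lookup("each"))  # first call walks all four namespaces
print(lookup("each"))  # second call hits the cache immediately
```

As Henry notes, the hazard is exactly that of any cache: if a name is added or redefined earlier in the path after the first lookup, the cached location is stale.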
00:30:43 [BT]
So I guess it makes modular programming, where you've separated your namespaces or your locales into different areas, much more efficient, because you're not penalized by having to go through all those searches each time.
00:30:55 [HR]
Exactly.
00:30:55 [BT]
It does it once and then, bang, it's always going to just pick that one up for you.
00:30:58 [HR]
Yeah, and you have to be careful: it's a cache, so if you look something up and then change a name, or you add a name to a locale after you did the first lookup, you'd have a problem. But that's pretty rare in the applications I've seen, and there are facilities to control the feature so that you can usually get what you want. Anyway, that's a set of performance improvements that are particularly useful for programs that execute lots of short sentences. Again, we like to think in terms of arrays, but we have to realize that there are people who haven't got there yet, and we're trying to make the language work for them.
00:31:44 [BT]
So if we think about it like accessibility, where a lot of times when you're trying to make a website or something accessible you think you're doing it for people who might have some challenges. But actually it turns out that you're doing it for everybody, because if you make it easier for people who have challenges, everybody benefits from it. Would that be true of J-TRAN as well?
00:32:01 [HR]
Well, yeah, I certainly think the pre-parsed parenthesized primitives, the four Ps, are going to help everybody. You just put in parentheses, and in fact it'll help you in a lot of cases. Like, if you had a verb AVG to do the average, plus slash divide sharp, and you merely wrote that out in your sentence and put it in parentheses, it would perform better than if you wrote the name AVG, because it wouldn't have to look up anything: it would define the verb as an anonymous verb and apply it when it needs to. So yeah, there'll be beneficial side effects for people who didn't even know they needed it.
00:32:46 [BT]
Well, I guess it actually emphasizes as well, although you're not trying to get beginners to do tacit programming, an advantage of tacit programming: because if you're programming tacitly, you can use those parentheses to reduce the time it takes for all those loops, yeah.
00:33:02 [HR]
Although if your code is tacit, no: the improvements apply to sentences executed out of explicit definitions, right? Because if you define a bunch of tacit verbs, they will have already gone through the name-lookup process. I mean, it is true that the name reference caching will help, but the parenthesization would have been done already. That's not true for explicit sentences. Moving on to the other things in the release, there are a couple of interesting features for high-precision arithmetic. We're finding some customers who are limited by the precision of double-precision floating point, where, depending on the order that you add stuff, you get different answers. Floating-point addition is not associative, in other words. So we've added a couple of features for them. One is a very old method by Bill Kahan called compensated summation, which basically gives you a more accurate double-precision result. The other that we've added is a high-precision dot product: it keeps a high-order part and a low-order part, so effectively 100 bits of mantissa for the dot-product operations, and for people who need that, that's useful. Let's see, J Qt. This is not my thing; this is Chris Burke's thing and Bill Lam's. J Qt has added a bunch of things, but the really cool one is like a mini git. There's a script tab, and it remembers everything you've done for the past 24 hours. So you can look at it and say, "Oh, what did I do? What was that last thing I did?" and it'll show you the differences between the last thing you loaded and the thing before that. It's really neat. If you ever wondered, rather than scrolling up through a terminal window, you can just look there.
And then I guess the other thing that's pervasive, the last thing of the release, is just trying to use 256-bit instructions in every place that I can find to use them; they really do make things go several times faster. But that's what's in the release, plus enough bug fixes. There's a lab on how to use J that merges J Qt and dissect, the debugger, and that was broken for a while, but it's back now, and I'm trying to get Bob to come up with one of his beautiful videos on it, because it's a very graphic introduction to how array languages work.
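The compensated summation Henry mentions is Kahan's classic algorithm. A short Python sketch of the technique itself (the interface of the J primitive isn't shown here):

```python
# Kahan (compensated) summation: carry a compensation term for the
# low-order bits that each floating-point addition would drop.
def kahan_sum(xs):
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in xs:
        y = x - c             # re-inject what was lost last round
        t = total + y
        c = (t - total) - y   # measure what this addition dropped
        total = t
    return total

# Order-sensitivity demo: naively, each 1.0 vanishes into 1e16.
vals = [1e16, 1.0, 1.0, -1e16]
print(sum(vals))        # 0.0  -- naive summation loses both 1.0s
print(kahan_sum(vals))  # 2.0  -- the compensation preserves them
```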
00:36:18 [BT]
Yeah J by Point and Click I think it isn't it?
00:36:21 [HR]
That's the one.
00:36:21 [BT]
Yeah, yeah, it's good, and I can confirm it's working now. That was something that wasn't working before and is back working. Yes, it combines a whole bunch of different things. And I'm actually thinking, at some point we should start talking about the wiki, because I think that's involved with this as well. One of the things we're looking to do with the wiki is to make it more accessible to a newcomer, and one of the ways to do that may be to put together some videos that work through a lab like 'J by Point and Click', so that step by step you see some of the power of the language. We've had lots of meetings and discussions, and there have been some other videos that we've looked at. In the k world, I think it was Jeffrey Borror; he did a video series on q which is really interesting, really good, and it's got me thinking about ways to approach maybe doing this. It's certainly a longer-term process, because putting videos together that actually try to explain things, as opposed to videos that just stream things as you're doing them, takes a bit more work. But when you've got it done, I think it's a more effective way to bring somebody in; it sort of helps people along. The other thing is, if you don't want to wait for the video, load up J and do that lab, because you can do it yourself. It's part of your interactive environment with J Qt, and you can basically do everything that I would be doing in a video in the lab itself, so there's no reason to hold off and wait for me. If you're interested and you want to go through that process, the labs are a very powerful thing in J. I think in J Qt they're under the Studio tab, or Help and then the Studio tab.
00:38:14 [HR]
Right. Help, studio.
00:38:16 [BT]
Yeah, and so they're hidden a little bit, which I've always kind of wondered about. But if you know that's where they are, then do the labs there. There's so much information in them. It's interactive, it takes you step by step. It explains something, and then you have the whole environment to play around with the ideas, and then when you're ready you go on to the next step. It's a very, very effective way of teaching, of passing along information.
00:38:41 [HR]
Yeah, I think the people who have decided they're going to try J would be well advised to do that. But what we need on the wiki, which we will use, is what you're doing: something so the tire kicker can come along and look at it and say, oh, that looks cool, without committing to downloading the language and wondering whether they're gonna be able to do anything with it. I don't think you're gonna be able to beat a video for that.
00:39:08 [BT]
Yeah, and that's actually another area. I think it's Will Gajate, if I got the pronunciation of his name correct, and Joe Bogner, and also Michal Wallace and John Hough, who together have been working on that aspect. They're looking at taking the source code of J, running it through Emscripten, and then actually making it so that you could run J in a browser. So then it would just become a link on the wiki; your browser basically would open up and then run J for you. It's along the lines of Try APL, but it's actually run by your browser. They're working their way through it, and they've made some progress. In fact, Joe actually has a working version, a simple working version, already. We can put the link in the show notes; it will take you through some real basic stuff, but they're looking for something more ambitious. Joe's version right now I think is running on J 704, but they're looking to upgrade it to 903 so it actually does all the things that 903 can do.
00:40:16 [HR]
That is ambitious, so that would be great if we could do that.
00:40:20 [BT]
Yeah, I think they think that too. But the thing is, they've managed to keep the optimism, and they're working at it. Joe came up and said, yeah, I don't have much time to put into it. The last email I saw between him and Will had Joe saying, "I spent four hours, and I went down that path and it didn't work. But I figured this out!", and it's like, wow. I think Joe's got the bug; I think he's back into working on it. I'm not sure how much time he actually does have to put into it, but we certainly appreciate what he's already put in, and there are other people there to support him, so that's really great to see. We'll see where they take it. They're hoping to have something simple up and working by the end of February, and we're all sort of working towards that goal, just to give us a deadline to try and work towards, 'cause we know that makes us more effective in terms of getting things done. But yeah, that's our process right now. One of the things I was going to ask you though, Henry, about the parser: at this point there's trace, a script that you can run through the old version of J, and it will actually break down, say for instance, tacit code, and show you how it's actually being executed. But that hasn't been updated yet, am I correct? You've updated the internal parser, but there are things like dissect — your debugging feature that does the same thing visually, to break down a tacit verb, because I think it works on the same principle — has that parser been updated? Would it be able to work with the invisible modifiers as well?
00:42:04 [HR]
No, uh, trace might have been. I think Raul was looking into that; it wouldn't be all that hard to fix. I think somebody did, but I'm not certain. Yeah, when I talk about rewriting the parser, I didn't mean — I mean, it's true that the parser was changed for the invisible modifiers, but it was a rewrite for performance. It wasn't to change the features, so the major parts that were overhauled didn't change the interface of the parser. Invisible modifiers did, and now dissect and lint don't know how to parse those sentences. But really, if anybody hearing this wants to take over the development of dissect and lint, I would love it, 'cause I just don't have time to keep doing it.
00:43:05 [BT]
Yeah, and I think we actually touched on this in the last episode that we had you on; you made that call as well. If you're into J and you're at the point where you're able to go look at the scripts — because it's all a script you can look at; dissect is all one big script — it's particularly well documented. It's a very, very clear script, and if anybody was looking at maintaining anything, I would say dissect would be one of the ones that would be much easier to maintain than just about anything else I've seen. It's just really impressive, and in fact it's a good example of a way to take a very large and complicated program and do it in a way that's robust and maintainable. And that's something that I think array languages have been criticized for, because typically people who use array languages are more interested in creating what they want at the moment, as opposed to maintaining something that somebody else has done. And I think dissect is such an excellent example of something that's maintainable in an array language. If you haven't looked at it before and you're interested in that sort of thing, you should definitely look at it; it's a very, very useful tool. So I think everybody owes Henry a huge debt of gratitude for producing it, but they would also have a great debt of gratitude to anybody who went on to maintain it, because it's a really useful tool.
00:44:40 [CH]
I have a couple questions; well, I'll ask one and save one for the end. A lot of the changes in 903 have been performance-related rewrites — increasing performance for J-TRAN code, or sort of other features. How do you measure that? Clearly there must be improvements, as that's what the updates say. Do you just have a set of local benchmarks that you're running and sort of testing? Or is there some sort of..?
00:45:06 [HR]
Yeah, so generally I will create a test case for, you know, just parsing primitives, and verify that the new parser can handle primitives twice as fast as the old one, and I say, well, it's probably worth putting in. Predicting the performance of a complex system is really hard, as we all know — so much depends on caching — but I don't rewrite something unless I'm pretty sure I can double the speed.
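[Editor's note: a hedged sketch of the kind of micro-benchmark Henry describes — time a small operation repeatedly and check the speedup ratio against a roughly 2x target. The statements benched here are illustrative Python stand-ins, not J's actual test harness.]

```python
# Illustrative only: compare two ways of doing a small operation and
# report the speedup ratio, the way one might verify a "double the
# speed" bar before committing to a rewrite.
import timeit

def bench(stmt, repeat=5, number=100_000):
    # Median of several repeats is more robust to noise (caching,
    # turbo, background load) than a single run.
    times = timeit.repeat(stmt, repeat=repeat, number=number)
    return sorted(times)[len(times) // 2]

slow = bench("sum([1, 2, 3, 4])")   # stand-in for the "old" code path
fast = bench("1 + 2 + 3 + 4")       # stand-in for the "new" code path
ratio = slow / fast                  # >= 2.0 would meet the doubling bar
```

The point is less the absolute numbers than the ratio, measured the same way on the same machine.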
00:45:46 [CH]
OK, so that's good to know. So wait, does that mean that all the performance improvements are roughly around that sort of perf increase?
00:45:55 [HR]
Well, yeah, it's that level of increase for the thing that's being modified. But, you know, if you take the usual sentence — when you sum up 1000 numbers, most of the time is going to be spent adding up the 1000 numbers.
00:46:14 [CH]
Right, right.
00:46:16 [HR]
But yeah, for J-TRAN code it's much faster — you know, for code that all it does is add 1 to a name. That's the payoff: if a sentence is assignment and execution of a verb on a singleton, plus parsing, you can make a sentence like that go twice as fast. When you get to that level, the speed of moving from one statement to the next and looping through the definition becomes important. But even so, if all your sentences do is add 1 to a name, it's still going to be much slower than a compiled language. We can't make it as fast as a compiled language; we can't turn J-TRAN into FORTRAN, we can just get it as close as we can.
00:47:13 [CH]
Right. I guess I can ask the second question too, even though maybe it's kind of jumping out of our line of thought. You were talking about the labs earlier, and then the J wiki, and as mentioned sort of briefly in this episode — and I definitely know we touched on it in the last episode — you have a book called "J for C Programmers", and there are a couple other books listed on the J wiki: "Learning J" and "Easy J". And there's also the wiki itself, and NuVoc is a good way to navigate the different digraphs. So I guess my question is — because over the holiday break I was thinking about trying to tear through one of the books; I never ended up getting around to that, but...
00:47:58 [HR]
I hope you at least bought the book.
00:48:01 [CH]
I have not bought any,
00:48:04 [CH]
I did buy a couple of books, but no J books. So I guess my question is — I'm not sure how much you know about the other books other than your own, versus the wiki — but what I'm hoping for is a resource for someone at my sort of array level, where I know maybe 75% of APL glyphs, maybe 50% of BQN, and I know enough that maps to J. But then, you know, at one point on an episode we were solving a problem and Bob mentioned outfix, and I'd never heard of outfix. So really what I want is the fastest way to consume the superset, or the asymmetric difference: what exists in J that doesn't exist in APL. Because it's very quick to learn that, say, the equivalent of tally in APL is the pound sign, the octothorpe, in J — that stuff is just a translation. What's more interesting is the stuff that doesn't exist in other array languages. Is there any sort of fast way or good resource for that? That's sort of part A. And then part B: at this level, is "Easy J" the best intro? Or is your book the best way to go for any level, and you can just speed through the first couple chapters? Or is just going to NuVoc, clicking each symbol and reading the documentation for each symbol — what would be your recommendation, given that there are several different ways to pick up J?
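[Editor's note: for readers who, like Conor, haven't met outfix — in J, the adverb `\.` applies a verb to the argument with successive infixes *removed*, so `1 +/\. 1 2 3 4` sums the list with each single element left out. A rough, hypothetical Python model, purely for illustration:]

```python
# Apply f to xs with each length-k contiguous "infix" removed -- a rough
# model of J's outfix adverb (x f\. y), not J's implementation.
def outfix(f, k, xs):
    return [f(xs[:i] + xs[i + k:]) for i in range(len(xs) - k + 1)]

# J: 1 +/\. 1 2 3 4  ->  9 8 7 6  (sum with each single element removed)
outfix(sum, 1, [1, 2, 3, 4])  # -> [9, 8, 7, 6]
```

This is the "leave one out" pattern (useful for things like jackknife statistics), which is why it comes up as a primitive worth knowing about.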
00:49:44 [HR]
Well, Ian Clark and I did most of the work on NuVoc, and he is a master at writing documentation for the busy programmer — the programmer who wants to solve a problem. So the description of each primitive starts out with: why do I want to use this? Give me a quick example of it. I don't know that you could beat that, just to scroll through the primitives and see what they do. You know, when you're an expert, you eventually learn that Ken's original dictionary is very terse, but I don't think you want to go there as a beginner. NuVoc is pitched at the bright beginning programmer who needs to learn something about what the J primitives do. I would recommend that. If NuVoc had been around, I wouldn't have written my book; it has everything that I have to say about J. There are the main J pages for the primitives, but there's also a long set of ancillary pages: how does parsing work exactly? What's a noun? What are the precisions? The extra pages that hang off the NuVoc page contain a great deal of information if you want to understand how things work.
00:51:34 [CH]
So NuVoc is the way to go, it sounds like.
00:51:36 [HR]
Well, that would be my pick; that's what it was written for. It was written for somebody who's wondering whether J is a good language for what they want to do, so it has to have enough information, quickly, that you can understand what the primitives are up to.
00:51:59 [CH]
OK, maybe what I'll do is choose a day in the next couple months and do a YouTube live stream where I click on every single digraph and see how quickly —
00:52:10 [HR]
Wow, yeah.
00:52:12 [CH]
It would probably take several hours at least though.
00:52:16 [HR]
Well, would it? There are 100 or so of them. You have to read fast — I mean, it wouldn't do well on a live stream, but for yourself, if it's just for yourself, I'm sure you're like most programmers: you've learned that skimming through web pages is the most important skill. You could zip through those things in a hurry. There's a link on each page to take you to the next primitive; you just sit on that and click it, and you might be able to get through at five seconds a page for most of them. That would be maybe 10 minutes to get through them all, plus the few hours that you take in digesting the stuff that's really interesting. That would be a useful use of your time.
00:53:09 [CH]
Yeah, Adám's also linked in the chat "J for the APL Programmer", and I think I've seen this before. But this is, if I'm not mistaken, a pretty brief sort of dictionary, and I've seen a few of these. What I've noticed is that they're back-and-forth translations — oh, you know this about APL, this is the equivalent in J — whereas specifically the thing I'm looking for is the asymmetric difference, the outfix. My guess is this article doesn't mention outfix, 'cause that's a thing that doesn't exist in APL, at least not as a primitive or a glyph — obviously you can build an expression that does the equivalent thing. At some point, I don't know if it'll be a blog post or a longer-form kind of thing, but I think it'd be very interesting: the first section being the exact translations — here's dyadic hook in J, here's how you spell it in BQN, here's how you spell it in APL — so there's a certain number of things that just map, and then there's the stuff that exists only in each language. These invisible modifiers now are one of the things that exist in J that don't exist in any of the other languages, that I know of. BQN is actually a fantastic example of this: Marshall has written both J-to-BQN and Dyalog-to-BQN translations and vice versa. So any time there's an asymmetric difference — one language includes something that is not included in the other — he gives you the spelling of it in that language. There are certain things that BQN has dropped that exist in J and APL.
And when I don't know how to do something, it's like, oh, I just need to go to that little dictionary and find it — OK, I'm in Dyalog APL, it doesn't have that, find the missing symbol, and oh, he gives a two- or three-character expression in BQN to do it. Which is extremely useful; half the time I'm coding in BQN, I don't know what I'm doing, I'm just copying and pasting what Marshall has written in his little translations. And it's the bidirectional-ness of it that is so nice. It's both English to French and French to English, in case you learn a French word while you're in France: from the English-speaking perspective, if you don't know what something means you can look it up, as well as figuring out how to find where the "salle de bain" [French for "bathroom"] is.
00:55:55 [BT]
And a lot of these things that have been written in the past, of course, aren't keeping up with the new developments — things like direct definition, fold, the invisible modifiers. In fact, the invisible modifiers are a bit weird, because you'd have to go back to, I think, about 1990-something in order to actually see documentation on them.
00:56:20 [HR]
Yeah, that's probably when those documents were written. I think they were decommitted in 2005.
00:56:24 [CH]
Interesting. So yeah, probably the most up-to-date resource is also the wiki/NuVoc? Or — I can't remember the exact copyright dates, but I definitely know "Easy J", when I looked at it, was the 40-page sort of intro. I think that was early 2000s, or 2002,
00:56:41 [BT]
or something like that. Yeah, and those are good for introductions, I guess, 'cause they're not going into the more advanced stuff. A lot of the labs have now aged a little bit, but they're really effective ways to get the information. If you're really looking to dive into something and trying to figure it out, quite often a lab will be able to do it in the core language, and there are a couple of comprehensive labs that walk you through — I'm trying to think, is "Introduction to J" the one that you did, Henry?
00:57:11 [HR]
Right.
00:57:12 [BT]
Yeah, and then there's Roger's idiosyncratic introduction to J, which is fascinating. And then there's "J by Point and Click". So these are all labs. If you were interested in getting into J, they'll get you into it, but they probably don't have the most recent things, and that might be where NuVoc starts to come in.
00:57:32 [HR]
And I also wrote a lab, similar to "J by Point and Click", called "J as your first computer language", which I used for the beginning computer class that I taught. If you want to teach your child an array programming language, you could start there.
00:57:49 [CH]
We'll have to get links to all of these labs in the show notes for those that are interested — or, I'm not sure, can you link to labs, or do you have to search for them?
00:57:56 [BT]
You actually have to, I think, be in the J environment to run a lab. A couple of my video labs I've actually just put together as videos on YouTube, and then when I go through a video lab, I'm actually accessing YouTube to see the video part of it. But most of the time you have to be in the environment and run the lab. A lot of the labs can be run both in JHS, which is the web-based environment, or J Qt, which is the Qt-based environment for J. I have run labs in the console as well, but I don't think that's often talked about; I don't use the console that much. I think in order to run labs in the console you need to know the magic words in the lab to be able to start the verbs in the right spots, and know where to look for them. It's not an easy thing to do, but it can be done.
00:58:49 [CH]
Alright, we've covered a lot of ground and we're closing in on, or maybe even just passing, the hour mark. But I know that we wanted to at least briefly mention the areas that are, at least at this point in time, being focused on for the next upcoming release, which will be 904. Do we want to briefly talk about that to round out the episode? Or are there a couple other things that we wanted to get to?
00:59:17 [HR]
Well, I can talk about what little I know. We don't do a whole lot of planning; it seems that there are enough things that come up that are worth doing, so we do them. But we are looking to bring somebody on board to help with pull requests, so that we can take user contributions more quickly, and we'd really like to get people contributing as much as they can. Obviously the sort of people who use J are going to be on the more experienced side of programming. Things that I'm looking at: I've done a little bit of probing about operations on very large nouns, and it turns out that when the nouns get bigger than the caches, it changes the way you have to deal with them. If I want to add A + B + C, you know how to do that: if they're small, you compute B + C and then you add A. But if the nouns are big, that's not a good way to do it. You really need to add part of B + C, then add part of A to that, and build up the result in pieces, to keep the intermediates from overflowing the caches. So I'm gonna be experimenting with how we can deal efficiently with individual values that are bigger than the L3 cache on the machine. I think you're gonna find that special methods are going to be needed for that. And also, when you're dealing with problems that big, you need to think about multiprocessing. The quickest way to do that will be trying to offload some of the work to GPUs, to the extent that we can do that. We're going to be looking into that, and into using other threads, if we can make multiple cores work. We did some work on that for a client, but I think a new implementation — a fresh start on that — is called for, and I'll be looking into doing that.
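[Editor's note: a hedged NumPy sketch of the cache-blocking idea Henry describes — this is an illustration of the general technique, not J's implementation. Computing `a + (b + c)` naively materializes a full-size intermediate array; processing in chunks keeps each intermediate slice small enough to stay cache-resident.]

```python
import numpy as np

def add3_naive(a, b, c):
    # Builds the whole intermediate b + c before adding a. For arrays
    # larger than the L3 cache, that intermediate is written out to main
    # memory and then read back in, costing memory bandwidth.
    return a + (b + c)

def add3_blocked(a, b, c, block=1 << 16):
    # Process in chunks so each intermediate slice stays in cache.
    # The block size here is an arbitrary illustrative choice.
    out = np.empty_like(a)
    for i in range(0, len(a), block):
        s = slice(i, i + block)
        out[s] = b[s] + c[s]   # small intermediate, cache-resident
        out[s] += a[s]
    return out
```

Both functions compute the same result; the blocked version only pays off once the arrays exceed the cache, which is exactly the regime Henry is talking about.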
01:01:53 [CH]
That sounds pretty exciting. And for the pull requests — 'cause I think we talked about this last time, that it's open source, but currently you and the other developers contributing to the J source operate on sort of a private repo that's mirrored on GitHub — is that still the case, or have there been changes to how that works for folks that are looking to contribute?
01:02:20 [HR]
I think that's still right. We have the J software repo — there's a public repo which pretty much keeps up to date with our private repo — so changes that people submit to the GitHub repo would be suitable for inclusion; we just have to check them out.
01:02:43 [CH]
And you said that you're looking for someone to coordinate pull requests, and also someone to maybe take over maintaining the dissect project. What's the best way for folks that are interested in this work to get involved? Is it to email you, to ping you on some IRC thing, or..?
01:03:02 [HR]
Email to me is the best way to get in touch with me, or you could post a message to the forum — the programming forum, I mean; I'll see it.
01:03:10 [CH]
OK, well, we'll put that information in the show notes. So if anyone is listening to this and has started their J journey recently, or has been on the J journey for years, and this kind of work sounds interesting, check out the show notes and reach out to Henry, 'cause it sounds like a lot of exciting things are happening, and 904 is going to be just as exciting. Although, will it be? Now that we have these invisible modifiers galore, this could be the best release of all time, ever.
01:03:42 [AB]
Do you have any plans for extending the language beyond this? The core language.
01:03:48 [HR]
I would say no. That's not to say I won't do it — I mean, we'll add foreign conjunctions from time to time as we find things that are needed — but for the core language, I don't know that anything more is needed. But if you come up with anything, let me know; I'll look into it.
01:04:10 [BT]
I was going to say you should take that with a grain of salt, because if you go back and listen to Henry's previous episode, I think I can quote: "and it's not coming back". And now it's back, so.
01:04:24 [HR]
Well, yeah, that might have been true before I looked into it. I thought it wasn't coming back, but it turned out to be not very hard to do. So I basically said, "Gee, I've got to go fix all this old documentation", and it occurred to me that if I just bring back those old modifiers, I don't have to fix the documentation, and that turned out to be the easier solution. So I did it.
01:04:49 [AB]
You hear that, all the implementors out there? If you find the description of something you haven't implemented, don't remove the description — just implement it.
01:04:58 [BT]
That's, that's, documentation is a motivator.
01:05:03 [CH]
Documentation-driven development, triple D. Alright, well, with that, are there any last things we want to say before we wrap up this episode? Once again, obviously, Henry, thank you so much for coming on. It's always a blast having you on and hearing about what's exciting in the J world. I can't say "by the next time you'll be on", 'cause I don't know when that is, but definitely one of my goals in 2022 is to figure out the superset of things that exist across all the array languages, and I think J is probably the language right now that has the most things that I don't know about, in terms of outfix and all the different digraphs that Ken worked on. Whenever I do find the time, that's going to be super exciting — "holy smokes" — 'cause when Bob described outfix, I was like, "oh yeah, that is super useful". And I'm sure all of the digraphs that exist in J are super useful, because it's just, you know, a two-character spelling of something that takes 4 to 10 or however many characters in APL and BQN. And obviously, based on the fact that I love array languages and the terseness of it, every single time I discover one, I'm going to be like, "oh wow, I wish I had that". You know, it's a balance of how many Unicode symbols we're going to add to APL.
01:06:22 [BT]
When you do get around to investigating your Rosetta code, or your Rosetta stone of the different languages, there's a wiki page waiting for you.
01:06:32 [CH]
Is it waiting for me, or does it already exist? You're saying it —
01:06:35 [BT]
No, it's waiting to be filled.
01:06:38 [HR]
Yeah, we would love it.
01:06:40 [BT]
Or we may end up putting it in, if you put it together — honestly, if you do it as a stream or something like that, it's a simple thing. In fact, this is something that Eric had mentioned they'd like to see in the wiki, and we'd like to see it as well. I'm sure it'll be put in the links to external sources. So, in other words, you just put a link — and you could put a link to a specific video that you'd made about this — with a brief write-up of what you're going to get: if you're interested in it, click here and you'll get that. And that's something that we're looking to put on the front page of the wiki as well, just to make it really easy for people to get access to not just J, but some of the other ideas that go across the array languages.
01:07:16 [AB]
Absolutely, same thing for the APL wiki. I mean, the APL wiki has pages about J, about k, about BQN. And if you do a comparative APL-versus-J video, then it totally belongs there.
01:07:28 [BT]
Yeah, and I gotta mention, the APL wiki is a huge inspiration to us. It's one of the things that's sitting way out in front of where we're hoping to get to. What the APL wiki has done is really impressive, and if you haven't seen it before, you definitely should go take a look at it. It's what we're aiming for, but it already exists...
01:07:48 [AB]
I do think the J wiki has a lot more content than the APL wiki, but...
01:07:55 [BT]
Yeah, the J wiki is a bit like an old library — it's the Library of Alexandria that nobody knows about. If you can find your way through it, there's amazing information in there, but it's not easy to navigate, and that's what we're working on: trying to get that information forward and updated, and that's not a small task. But that is what we're working on, so it'll take some time, but we'll get there. As I said, the APL wiki is sort of the flashy new — I'm trying to think of a real fancy new library that's come into existence, I can't think of one right now — but some flashy modern library that has all the bells and whistles. That's the APL wiki. We're more like, you know, "Oh, you need to look at the card catalog.".
01:08:46 [HR]
That's right.
01:08:47 [BT]
That's where you're going.
01:08:50 [AB]
It's funny, I'm just looking at the statistics for the APL wiki and the J wiki, and the APL wiki currently has 364 content pages, whatever that means. The J wiki only has 189 content pages. But then there's a different measure, which is the number of pages — which includes, you know, talk pages and redirects and other things — and the APL wiki has 1,500 of them. The J wiki has 10,000. I think the problem is finding the information in the J wiki.
01:09:25 [BT]
Yeah, yeah, it's like you're walking through a library and somebody goes, oh, you want to go through that trapdoor over there, and you pop through the trapdoor and there's a whole different floor that you've never known about. Well, one of the things I'll mention is — on the sidebar there's a page with a bunch of links on it, "showcase", and we'll put a link for you to go directly in. If you go to showcase, then on the showcase page there's actually a books section, and it lists out a whole series of books that have been done on J, as well as a brief description of what they're about. So if you haven't seen that before and you were looking at books for J, that's a good place to go take a look. It's not exhaustive, 'cause, as with most things, it's not always been updated. But there is a lot of information there that you're not going to find other places — you just have to know where to go look for it, which is a challenge. It really is a challenge.
01:10:17 [CH]
All this talk about wikis and pages — I just watched "Inside", the most recent Netflix Bo Burnham special, for the second time last night, and one of his comedy songs in it is about the Internet, and the line is, "anything and everything, all of the time". Anyways, I'll send you an MP3 clip, Bob, and you can cut it in after the episode. The whole thing is just: doesn't matter what you want, we've got the information, and we're going to suck up all your attention 100% of the time — he's kind of got an evil flair to the song. But just, yeah, APL wikiverse.
01:10:51 [BT]
And knowing Bo Burnham, it has an edge.
01:10:53 [CH]
Yeah, the point is: more information than you could possibly consume in your lifetime, plus more to come. And we'll link it all, and you can just have analysis paralysis over which book to read or which video to watch.
01:11:09 [BT]
And now I'll put my usual pitch in here for contact@arraycast.com if you want to get in touch with us, 'cause we're definitely available through email, and we read the emails and send them around; sometimes we get a chance to run them past everybody in our Slack channel, and we get great responses. We're very interested in what people's feedback is and what they would like to see us doing. There have been a couple of suggestions of different guests and stuff that we're definitely looking at — it takes a while to line these things up, but don't think that because you've suggested a guest and they haven't been on yet, we're not looking at it. We definitely are. This is all part of the ongoing work with the community, but if we never hear from anybody, it's really hard to know what you want.
01:11:59 [CH]
Awesome. Yeah, as always, links to everything we mentioned will be in the show notes, and once again, thank you so much, Henry, for coming on. It's always a blast and I always end up learning something, so you're welcome back anytime.
01:12:12 [HR]
Thank you very much.
01:12:14 [CH]
And with that, we'll say happy array programming.
01:12:16 [All]
Happy array programming.