Transcript

Transcript prepared by

Sanjay Cherian, Igor Kim, and Bob Therriault

Show Notes

00:00:00 [Bob Therriault]

There are exactly four normed division algebras: the real numbers R, complex numbers C, quaternions H, and octonions O. The real numbers are the dependable breadwinner of the family, the complete ordered field we all rely on. The complex numbers are a slightly flashier but still respectable younger brother, not ordered but algebraically complete. The quaternions, being non-commutative, are the eccentric cousin who is shunned at important family gatherings. But the octonions are the crazy old uncle nobody lets out of the attic. They are non-associative.

00:00:36 [MUSIC]

00:00:49 [BT]

Welcome to the ArrayCast. I am not your usual host, but you've probably already figured that out. Conor is on vacation. So let's move along and go right to the panel. And we'll start off with Stephen, and then to Adám, and then finally to Marshall.

00:01:04 [Stephen Taylor]

I'm Stephen Taylor. I'm an APL and q enthusiast. Yeah, I'm enthusiastic this week.

00:01:10 [Adám Brudzewsky]

I'm Adám Brudzewsky. I do APL full-time at Dyalog.

00:01:16 [Marshall Lochbaum]

I'm Marshall Lochbaum. I'm wondering which number episode this is.

00:01:20 [BT]

Oh, thank you, Marshall. Exactly. Yes, it's episode 82. And the last time we had Bob Smith on, who's our guest today, was episode 80. And he talked in great detail. Well, not, well, enough detail about NARS2000. But for you NARS2000 fans, we're going to take it even a step further and go into implementation details and all the things that NARS2000 does, or a lot of the things that NARS2000 does, and the ways they're implemented, which should be a great deep dive. But first, we have some announcements. So I'll go to Adám first for, I think, two announcements, and then Marshall for one.

00:01:58 [AB]

Okay. So until now, we mentioned this before, the British APL Association have had a meeting every other week. And the time they've had it at, which has been, I think, 4 p.m. in the UK, is great for a great many people on the globe. The APL BUG, or Bay Area Users' Group, which is located in California, are now complementing that with a meeting on alternating weeks. [01] So the weeks that the BAA doesn't have a meetup, these are online meetups, at 6 p.m. California time. And that means that people in East Asia and Australia and the Americas now have this as an alternative they can go to without it being at too crazy an hour of the day. So they're just getting started. We'll have a link to that. It's on Zoom, just like the BAA meetings. And then I noticed on Hacker News, there was a discussion about Futhark, where they're announcing a design for broadcasting or conformity or scalar extension, etc., whatever you want to term it. And that might be interesting for our listeners to have a look at. It's rare that a language adds that kind of array comprehension all of a sudden.

00:03:21 [BT]

And we did do an episode on Futhark with Troels. And I wish I could remember what number it is, but it was a while ago. But certainly if you're interested in...

00:03:30 [ML]

Yeah, we may need another.

00:03:31 [BT]

That's a good point. If they do implement this and there's a lot of development there, it would actually be worth another. So that's on to you, Marshall.

00:03:39 [ML]

My announcement is regarding BQN. Conor would surely be excited to hear this, but he may have to just hear it through listening to the podcast. GitHub now supports BQN. It recognizes BQN as a language. So that means there's syntax highlighting for the language. If you've committed to the repository since they started supporting it, then the little bar that says what languages your repository uses will show BQN. So now, like the main BQN repository, it used to say it was in CoffeeScript, because it couldn't find any actual programming languages in there. But now it says it's about 90% BQN, as it should be. And I think even in Markdown, you can now specify that sections of it are to be highlighted as BQN. The process for this was really not very nice. Since the last time I spoke about it earlier this year, we have gotten no comment whatsoever from the staff member who handles this. So we were completely in the dark about whether there was something wrong with our commit or whether we didn't have the required popularity. No comment on the fact that GitHub Search barely works for this function. I continue to recommend you host your repositories on Codeberg. But if you have them on GitHub, they will now show up a little more nicely.

00:04:52 [BT]

And that is a work in progress because now that you're on, you can develop it as you like and tweak some things like that.

00:05:00 [ML]

Yeah, I think so. So there are some issues with the syntax highlighting that we've seen. It doesn't quite handle the single quote character correctly. But GitHub doesn't even host the highlighting file; they use a highlighting file that points to another one. So I think we can update that, and then eventually those changes will propagate in and we'll have better highlighting.

00:05:25 [BT]

So those are our announcements. And now we can turn the spotlight over to Bob Smith, who is the developer in charge of the NARS2000 language and very experimental in doing all sorts of neat things. And now I guess we'll get a chance to find out a follow-up from what we had on Episode 80, now to Episode 82. Over to you, Bob. Give us some background on implementation.

00:05:49 [Bob Smith]

It's time to go under the hood. And this is one of the things I love about APL: it brings together this delightful combination of language design and implementation. I talked with some of the people at the very beginning of the language, Ken and Adin, Larry and others, and the APL 360 days brought together some of the giants in the field. Of course, Ken Iverson and Adin Falkoff for language design, and Larry Breed, Dick Lathwell, and Roger Moore doing implementation. And they brought together the idea of a Quaker consensus, so that when they were doing language design, the five of them, and perhaps more that I apologize for leaving out, were coming up with a common view of what the individual elements of language design would be. And so they talked about it and waited until they got an agreement from all the people, the Quaker consensus. And I thought that was a delightful way to structure language and talk about it. But they were divided in two groups. Ken and Adin were the consummate language designers, but not implementers. And then of course, Larry, Dick, and Roger as implementers also had a lot to say about language design. So it occurred to me that they were really in the catbird seat. I remember talking to Ken at one point where he was lamenting about this division. And his lament was that he said he was "amazed," I'll put that in air quotes, that the implementers' ideas got done so much more quickly. That's one of the things he was somewhat regretful about. It's almost as if his ideas were met more with the order of, "It's in the queue, Ken." So I was a good friend of Larry Breed in particular. We worked together for a number of years at STSC. And he was my mentor and really an idol. So one of the things I really wanted to do from the beginning was to be able to master the ideas of language design and implementation. And so, of course, learning at the feet of masters, I picked up a lot of good ideas and put them together into NARS2000, started in September of 2006.

So the balancing act of language design and implementation, dealing with syntax and grammar and symbols, all of which is not to be decided lightly, goes back to the Quaker consensus idea. And unfortunately, I have a Quaker consensus of one here. So that's why I call this an experimental APL interpreter. So my Quaker consensus is to design something, implement it, put it out, let people look at it, and then hear back from them about some of the ideas. And so that's been very fruitful. And I really want to continue that. So some of the implementation issues, they fall into multiple categories. The first one I want to talk about is finite state automata. [02] Now, that's a fancy term, but what it really represents is a graph.

00:09:13 [BS]

And if you think of a graph that has nodes and lines, directed lines, so they have arrows on them, the nodes represent states, and the arrows represent incoming characters. And so an FSA would be used to parse a line of input. You could use it for pretty much any line of input you want, although it may be more cumbersome to use a finite state automaton than some of the others that I'll talk about as well. So I use finite state automata, first of all, as a tokenizer for a line of input. So you wait until all the characters come in, somebody hits enter, and you then convert that into tokens. And those tokens then get run through a different parser that ends up executing the line, in the usual way.

At the same time, and really joined at the hip with tokenization for finite state automata, is syntax coloring. So I love that about APL, in that it's a rich enough language to justify syntax coloring. We were talking offline before about a lot of languages that were doing syntax coloring. I think it's ideally suited to APL because it's such a rich language. And so the tokenizer finite state automaton is exactly the same for syntax coloring as for tokenization. And that's something that's done in real time. So every character you type, the line gets run again through syntax coloring, which is what you want, because you want to know when parentheses are mismatched, grouping symbols are mismatched, not properly nested, things like that. So that's another of the finite state automata. There are others that I'll talk about later. Anonymous functions and quad-FMT format strings run through finite state automata. Conversion to rational numbers, it turns out, is complicated enough to justify using an FSA. And in the future, I've been looking at array notation, and it strikes me that there are various choices there as to how to parse that. But I think a finite state automaton might be perfect to parse array notation. And it also would fit very nicely with the line continuation markers that I put in, so that instead of hitting enter, you hit shift-enter, and this then continues the logical line onto another physical line. And you can do that as many times as you like. And then at the end, you hit enter, and the whole thing then gets executed.

The other kind of parser that I've used extensively is called LALR, which is look-ahead, left, right. That's not the most descriptive term, but the idea is that you're looking at tokens, and you can look ahead to the left, ahead to the right, typically one symbol. And this is another way of parsing input. It uses what's called a grammar definition file. So it's a little more complicated than finite state automata. And these are files that are processed by programs called compiler compilers. So YACC is a common one in the Linux community. And YACC actually stands for Yet Another Compiler Compiler. In the Windows world, I use Bison, which is essentially the same thing. So LALR parsers I'm using for control structures. So colon-if, colon-else, all of that. And that's an interesting problem, because in terms of the implementation, you need to keep track of not just the line that you're on, but the token that you're on, so that you can have control structures nested within each other, and even put everything on one line. APL's got a reputation for doing that, and I don't recommend it for control structures, but you could. So you could have a colon-if, diamond, colon-if, etc., with statements in between if you wanted, and all that would be handled nicely.
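
For a concrete picture, a table-driven tokenizing FSA might look like this minimal C sketch. The states, character classes, and table entries here are invented for illustration; NARS2000's actual tables are far larger.

    #include <stdio.h>

    /* Table-driven FSA: rows are states, columns are character classes,
       entries are the next state.  The same table can drive both
       tokenization and syntax coloring. */
    enum state  { ST_START, ST_NUM, ST_NAME, ST_ERR, NSTATES };
    enum cclass { CC_DIGIT, CC_ALPHA, CC_SPACE, NCLASSES };

    static const enum state next[NSTATES][NCLASSES] = {
        /*            DIGIT    ALPHA    SPACE    */
        /* START */ { ST_NUM,  ST_NAME, ST_START },
        /* NUM   */ { ST_NUM,  ST_ERR,  ST_START },  /* "12a" is an error      */
        /* NAME  */ { ST_NAME, ST_NAME, ST_START },  /* names may hold digits  */
        /* ERR   */ { ST_ERR,  ST_ERR,  ST_START },
    };

    static enum cclass classify(char c) {
        if (c >= '0' && c <= '9') return CC_DIGIT;
        if (c == ' ')             return CC_SPACE;
        return CC_ALPHA;          /* everything else, for brevity */
    }

    int main(void) {
        enum state s = ST_START;
        for (const char *p = "count 42 x9"; *p; p++)
            s = next[s][classify(*p)];     /* one transition per character */
        printf("final state: %d\n", s);    /* a real tokenizer also emits tokens */
        return 0;
    }
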
Function headers are another thing that I use an LALR parser for, where there are lists: the result can be a list, the left argument can have a list of names, the right argument can have a list of names, and the result and the left argument can be shy and optional, respectively. Another big LALR parser is for point notation, and this is an outgrowth, again, of a lovely idea from Ken Iverson, generalizing the idea of exponential or scientific notation, 1E13, to the point where other letters of the alphabet get tossed in there. So I use R as an intermediate symbol, 1R2, which then is a rational number, the numerator is 1 and the denominator is 2, and it's stored as a rational number, so it's essentially infinite precision, or at least infinite as limited by the workspace you have. There's a whole bunch of other ones, particularly hypercomplex notation, that make that thing fairly complicated, and so that's one I thought appropriate to use an LALR parser for. There are other parsers that I'm not using yet, but am learning more about, that solve some of the problems of an LALR parser. And not to get too far into the weeds, there are things called shift-reduce and reduce-reduce conflicts in LALR parsers, which more general parsers can solve. And the choice between a finite state automaton and an LALR parser is really more of an art than a science. They're general enough that they can do each other's work, and so the art is really in the choosing. And I'm not sure I've made all the right choices there. I think I could easily do function headers with a finite state automaton, for example, but for whatever reason I chose to haul out a bigger gun and use an LALR parser for that. For the syntax parser for the language itself, that is, after a line has been tokenized and we need to execute it, I actually first used an LALR parser, and it really got so unwieldy that I had to abandon it, and that was painful. I must have had half a megabyte of code just for the parser.

00:16:23 [BS]

So instead I went with a lovely paper from APL84 by John Bunda and John Gerth called "APL Two by Two: Syntax Analysis by Pairwise Reduction". [03] And that's my understanding of what APL2 uses. And so it defines a large matrix of all the different states and then consults the matrix to be able to decide what actions should be taken. The great value of finite state automata, and to a lesser extent LALR parsers, but certainly the syntax parser we're talking about, the two-by-two, is that they are data-driven. The alternative is code-driven. A data-driven parser involves a great deal of static data out there, and that data describes what to do next. And my experience is that it's so much easier to debug a data-driven structure than it is a code-driven structure, and correspondingly so much easier to understand. So those are good reasons to use parsers in general, because of the data-driven aspect of understandability and the ability to debug it.

Other areas that are interesting to talk about with respect to the implementation are hypercomplex numbers. And those are real, complex, quaternion, and octonion numbers. Those are what are called the division algebras. And there are four of them, provably only four. And they are interesting from the standpoint of point notation, because there are issues with respect to elided coefficients. So you might have 1i3k4, and you're leaving out a coefficient in there, the j coefficient. Instead of going i, j, k, you might just write it as i and k. Those elided coefficients are also tricky to do, but you can get that accomplished with point notation. And so hypercomplex numbers, in terms of notation, are relatively straightforward for the user to enter and understand.

The implementation of hypercomplex numbers is also very interesting, and I was really delighted and surprised to see this come out. The implementation issues are... So there's a bunch of functions, let's say there's like 12 trigonometric functions, exponentiation, logarithm, and a few others. Each of them has an algorithm that handles the real number case, and those are readily available in a bunch of libraries, the GNU Scientific Library and many, many others. They're also readily available in multi-precision form, which is another issue, so it's nice to be able to have a large collection of algorithms for those. Then, going up the hypercomplex chain, come complex, quaternions, and octonions. Now, there are a large number of complex number algorithms for those 12 trigonometric functions and others like that. However, I found that those are not good ones to use. They are good algorithms, efficient algorithms; however, they don't generalize. So it's not easy to take a complex number algorithm and make it work for quaternions. There are issues there. Each time you go up the hypercomplex number chain, you lose a property, and cumulatively so. So when you go from real to complex, you lose ordering. So you can't say, "Is i greater than zero?", i being the complex number. That's not mathematically sound. At least, it doesn't satisfy the mathematical definition of order. And then when you go from complex to quaternions, you lose commutativity. And so a times b is not the same as b times a. And when I talk about commutativity, I'm talking about multiplication only. Addition is still commutative. And then when you go from quaternions to octonions, you lose a really important property that is a difficult one to do without, and that's associativity.
So if you have a times b times c, you get a different answer, depending upon how you group those multiplications. Depending on which one you do first, and which one you do second, then you'll get a different answer. Now, that is really the heart of what makes having algorithms for octonions difficult. So the original 12 trigonometric functions and others are very interesting. They've got that real number argument algorithm, and then there's another algorithm for each one of them, separate ones for each one, that implements the rest of the hypercomplex numbers. And by that, I mean, instead of using the complex number algorithm, you use a more general algorithm, and you pass to that algorithm the number 2, 4, or 8. And that's the only number that changes in the algorithm. Essentially, it's a loop limit, so that for the imaginary coefficients of the hypercomplex numbers, you're using the very same algorithm for each one of those coefficients.
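
As a concrete picture of that loop-limit idea, here is a hypothetical C sketch (not NARS2000's actual code) of one such routine, the exponential, written once for complex (n = 2), quaternion (n = 4), and octonion (n = 8) arguments; the dimension n is just a loop limit.

    #include <math.h>

    /* exp for a hypercomplex number x of dimension n (2, 4, or 8), where
       x[0] is the real part and x[1..n-1] are the imaginary coefficients.
       Uses exp(a + v) = e^a * (cos|v| + sin|v| * v/|v|); every imaginary
       coefficient is treated by the very same loop. */
    void hc_exp(double *z, const double *x, int n) {
        double vnorm = 0.0;
        for (int i = 1; i < n; i++)        /* |v|: norm of the imaginary part */
            vnorm += x[i] * x[i];
        vnorm = sqrt(vnorm);

        double ea = exp(x[0]);             /* e^(real part) */
        double s  = (vnorm > 0.0) ? ea * sin(vnorm) / vnorm : ea;
        z[0] = ea * cos(vnorm);
        for (int i = 1; i < n; i++)        /* same algorithm for each coefficient */
            z[i] = s * x[i];
    }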

00:22:00 [BS]

They don't have different algorithms depending upon the different imaginary number coefficients. So that was surprising to me, that it turned out to be so much easier to implement a large chunk of hypercomplex numbers, because all I had to do was implement one algorithm for the imaginary numbers, and I found a collection of those online, so it was really easy. That worked for quaternions, and so that then made it easy to implement the hypercomplex numbers for octonions as well. Other primitives, I should say, such as matrix divide and such, have very similar algorithms too, where you can pass a number. So for example, using Gauss-Jordan inversion for matrix division and matrix inversion, that's another one where you can end up having essentially the same algorithm for the different hypercomplex values. And then it's nice to be able to have multi-precision numbers available for all of them as well.

So I gave a paper at a Dyalog conference in Glasgow on 50 years of APL, and this paper was entitled "50 Years of APL Data Types." So I was focusing just on the data types and the effect that they had, and hypercomplex numbers are certainly part of that. A lovely addition to the language, though I'm not surprised no one else has picked up on quaternions and octonions. Any day now, I'm sure, Dyalog's going to bite the bullet and take that on. But then multi-precision numbers, I think, are really an important element, and I would like to see them in more implementations. J has it, and I think it's an excellent addition to the language.

So I think I may have mentioned in the previous talk that I am a precision nut, that I'm really keen on getting precise values. And if you're limited to just the real number versions, you've got essentially a 64-bit IEEE 754 number, and that number is limited in precision, essentially 53 bits of precision. If you need more than that, what can you do? And that's where multi-precision numbers come in. And delightfully, there are several high-quality DLLs that are available. They're maintained by groups of people who are interested in doing that, all open source. The open source community is really generous when it comes to sharing things like that. So there are open source multi-precision DLLs, and one of them I use is MPIR, which is for multi-precision integers and rationals. So there are the two data types, integer and rational, and those numbers are infinite precision. The numerator and denominator are stored separately. So for example, you can do a matrix division or inversion with rational numbers and get back exact values. That is, you get back another matrix of rational values. You're not getting something that is subject to round-off error. There is no round-off error when dealing with MPIR. And then beyond that, if you are dealing with numbers that are not storable as integer rationals, that is, they involve square roots, things like that, they're irrational, there are other libraries to deal with that. And the two that I use are MPFR, which is a multi-precision floating point real library, and that provides as much precision as you like. So there's a system variable; you set that to how many bits of precision you like. I typically run with 512 as my default, because I have no interest in dealing with numbers that round off. And the performance disadvantage of that is just minute. So it really is delightful to be able to handle that level of precision.
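
For flavor, here is a minimal sketch of what using MPFR from C looks like; the 512-bit working precision mirrors the default just described.

    #include <stdio.h>
    #include <mpfr.h>

    /* Compute sqrt(2) with 512 bits (about 154 decimal digits) of working
       precision.  A sketch of MPFR usage, not NARS2000 code. */
    int main(void) {
        mpfr_t x, r;
        mpfr_init2(x, 512);                 /* 512 bits of precision */
        mpfr_init2(r, 512);
        mpfr_set_ui(x, 2, MPFR_RNDN);
        mpfr_sqrt(r, x, MPFR_RNDN);
        mpfr_printf("%.80Rf\n", r);         /* print 80 decimal places */
        mpfr_clears(x, r, (mpfr_ptr) 0);
        return 0;
    }
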
And the other high-precision DLL that I use is called ARB, A-R-B, and that's ball arithmetic, which is a fascinating idea where you are dealing with a number that's no longer a single value. That is, we're used to the scalars of our language being, you know, 1, 2, even 1i2. That's still a single value. It's a point in the plane. And similarly for quaternions and octonions, it's a single point. But with ball arithmetic, the numbers, the scalars, are a midpoint coupled with a radius. And so there's an entire interval of values that are represented. And the great value of having a high-quality library like this is that you end up being able to validate your algorithms, essentially.

00:27:37 [BS]

That is, you may have had questions about your floating-point algorithms, as to whether they're stable, whether they produce accurate results. You don't have to worry about that anymore if you use ball arithmetic, because you can pass a ball in, get a ball or a matrix of balls out, and each time take a look at the answer. And if that radius is quite small relative to the midpoint, then you know you've got a good algorithm, at least for that data. If instead you get a number whose radius is like five, that's probably not a good algorithm, because it's way too big. And so if you're looking for more precision, this is a way of telling you whether you have enough precision in the algorithm to be able to use it for all the rest of your results. So ARB is actually a cheap substitute for numerical analysis, in that instead of having to do the numerical analysis of your algorithm and decide whether it is ill-conditioned or well-conditioned, you can use ball arithmetic to solve some of those problems without having to hire a numerical analyst.

Other structures farther afield: of course, we talked about the Boolean data type, and that's been a fond data type of mine for many years. So it's a great way to do reductions, scans, and various other operations on Boolean data, using some clever algorithms that came out well before my time, and then be able to generalize them. And so we were talking earlier about whether a language actually implements a Boolean data type. And I think that's an important element, because there are some lovely algorithms that work specifically on Boolean data where each number, one or zero, takes up a single bit within the individual word. There are a bunch of papers that I've written over the years that talk about some implementations and various other things. They can be found on my sudleyplace.com website [04] in the /APL directory. You'll find a bunch of papers there.

Other things that I wanted to just mention are things in the future. I really want to get regular expressions. And I know that Dyalog has done a nice job implementing the quad-R and quad-S operators. And I'd like to do something like that. And also, I think a tiny element that needs to be expanded is system commands. That is, I'd like to have system commands accept regular expressions. Instead of just saying, I have a workspace with a bunch of functions and variables in it, I might want to be able to select just a few of them. And I think that putting regular expressions into the system commands would be helpful. So tell me which, if any, of these topics you'd like to dig into. And also, I'd love to hear from you folks about other algorithms and data structures that you're using that I could add to my toolbox. Go ahead.
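
Returning to the ball-arithmetic idea for a moment, here is a minimal sketch of the validation workflow just described, using the Arb C library: pass a ball in, get a ball out, then inspect the radius.

    #include "arb.h"

    /* Sketch only: compute sqrt(2) as a ball and check how tight it is. */
    int main(void) {
        arb_t x, r;
        arb_init(x);
        arb_init(r);
        arb_set_ui(x, 2);                  /* exact ball: midpoint 2, radius 0 */
        arb_sqrt(r, x, 128);               /* result ball contains sqrt(2)     */
        arb_printd(r, 30);                 /* prints midpoint +/- radius       */
        flint_printf("\nrelative accuracy in bits: %wd\n",
                     arb_rel_accuracy_bits(r));  /* small radius: good algorithm */
        arb_clear(x);
        arb_clear(r);
        return 0;
    }
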

00:30:54 [BT]

Well, before I turn it over to the panelists, and they're far more experienced than I in a lot of these implementation details: in one of your papers, you used a quote from John Baez explaining real numbers, complex numbers, quaternions, and octonions, and it's delightful. I'm just going to read it out. "There are exactly four normed division algebras: the real numbers R, complex numbers C, quaternions H, and octonions O. The real numbers are the dependable breadwinner of the family, the complete ordered field we all rely on. The complex numbers are a slightly flashier but still respectable younger brother, not ordered but algebraically complete. The quaternions, being non-commutative, are the eccentric cousin who is shunned at important family gatherings. But the octonions are the crazy old uncle nobody lets out of the attic. They are non-associative." And I just thought that was a great way of getting back to these different number systems and the way they work. And I think in episode 80, we were talking about what the uses were. Well, it turns out octonions are especially important in some of group theory and Lie group theory and particle physics, because there's string theory, basically what may or may not be the basis of the way the universe works with gravity and everything, and that may be based on octonions because of the latitude they have in their higher dimensions and the unexpected ways in which they work together. So there's some really interesting ground being broken. And as far as I know, nobody else has implemented octonions in a language. I'm just not aware of it. Is that true, Bob? Are you the only one playing around with octonions?

00:32:50 [BS]

I think Paul Jackson might have implemented quaternions. I'm not sure about octonions. But yes, you're right. They are rare. They are the crazy uncle. And John Baez's quote is lovely, really. It really captures that spirit. The interesting thing about string theory is, not to get into the depths of it, but it's often thought of as an 11-dimensional, sometimes 12-dimensional structure. Eight of those dimensions are octonions. And the other three or four are the usual spatial dimensions, possibly coupled with time.

00:33:32 [BT]

Yeah, there's a lot of study. And this is an area of mathematics that people are really exploring. And it just seems to me that having a language that works with them may be a real advantage when you get into these sort of explorations. Although I know with mathematicians, a lot of it is done sitting back and looking out the window. It's not done on a computer. But sometimes it's useful to have a machine to try out a few things as well. With that, I'll turn it over to the panel and see if there's anything in particular anybody wants to discuss.

00:34:03 [AB]

Well, there were a lot of things. Maybe it's just me that had a hard time keeping it all in my head at the same time.

00:34:09 [BT]

Well, do we want to start off, as Bob did, with the parsers, the finite state machines and the LALR parsers, those kinds of things, tokenization? Any comments on that?

00:34:24 [ML]

Well, have you seen J's version of it, that also uses a syntax table and introduces all those adverb trains and stuff with more entries in the table?

00:34:35 [BS]

I have not looked at that. I've talked to Roger about it, though, and we had similar views of table-driven and such. I had the impression that he was much more parsimonious with his tables and was able to pack things together much better than I had. There are things I need to learn from looking at that in more detail. My tables are quite large. There are other large tables in there that, because of all the data types I have, have made the implementation easier. I'll maybe get a chance to talk about that later.

00:35:12 [ML]

Yeah, maybe one of my complaints about these tables, which I agree are a pretty good tool, is that because you expand everything out entry by entry, if you look at an individual entry, you can always tell, yeah, either that's right or that's wrong. But you don't immediately see the structure if multiple entries are sharing something.

00:35:32 [BS]

Yes, there are issues there. Being able to have it data-driven, though, is just a huge advantage. There may be some places that make it a little more complicated, but that's the way to go if you at all have any choice about it. Make it data-driven. Your life will be so much simpler.

00:35:54 [AB]

I'm just fascinated by these numbers with all the letters in them, and making order of that, including how you parse those. I'm not entirely sure how a human can use them.

00:36:12 [BS]

Essentially, what I have is a hypercomplex calculator. As Bob was pointing out, most of the mathematicians would think of this as a calculator; they're actually in the theoretical world. Occasionally, they may need to test out something, see whether it works, and to that extent, having a calculator that does what NARS2000 can do can be a big advantage. But you're right, the mathematicians are staring off into space a lot, coming up with the concepts, and then occasionally need to come down to ground and test them out.

00:36:53 [AB]

Yeah, well, the thing is, it's not just you can do all computations in all these various fields, but they all work together. You can mix and match anything you want. So I've been sitting here thinking, "Okay, if I restrict myself to single-digit numbers..." So for example, what I consider a single-digit number would be 1E2. So that's 100.

00:37:16 [ML]

Never have two digits next to each other.

00:37:19 [AB]

Yeah, never have two digits next to each other. How big, as in how many characters, can I make a number with correct syntax? So far, I'm up to 261 characters in a number. And it's just a single-digit number. But that's by throwing together octonions with the scaled notation, with the E, together with bases, something you didn't mention: you can use any base rather than just decimal for things, and extended multi-precision with one of these number pairs, like the pi-point notation, that kind of thing. And the fact is that all these things just work together. Am I right that there are no holes? There are no combinations of things that don't work?

00:38:10 [BS]

Base notation is special in that regard. So you can't make that a coefficient of a hypercomplex number. But you could do rational numbers, and as you pointed out, pi notation, and exponential notation, the x notation, I should say. All of those are possible elements of that, and they can combine in a different way. It's a table that talks about the precedence of execution, and that indicates which ones you can have within which others.

00:38:48 [AB]

So you say I can't do a complex number where the real and the imaginary part are in a different number base?

00:38:54 [BS]

Yeah, you see, the business with base notation is that once you get the B, 16B, everything to the right is considered part of the number. Everything.

00:39:07 [AB]

So what do any subsequent Bs do?

00:39:08 [ML]

Those are the digit 11.

00:39:11 [BS]

They're just an 11, if the 16 is the number to the left of the B.

00:39:17 [AB]

Oh, and you allow values that are above the base, right?

00:39:22 [BS]

That's right, yep. A to Z.

00:39:24 [AB]

Okay, so that means that, for example, binary is not limited to zeros and ones. You can have any digit.

00:39:29 [BS]

Yeah, it'll just interpret it.

00:39:32 [AB]

That's what's going on. Okay.

00:39:33 [BT]

And how high a base can you go? I think in J you can go all the way up to 26, but is that— 36. 36? Oh, I guess, yeah, okay, yeah, right.

00:39:41 [BS]

You mean the number to the left of the B?

00:39:43 [BT]

Yeah, the number to the left of the B.

00:39:44 [BS]

Yeah, well, to the left of the B, it's just, it's a number.

00:39:45 [AB]

It can be any base, but how you'd express the digits in that base would be difficult, right?

00:39:49 [BS]

Oh, I see, yes. Well, you're right, it's the zero to nine and A to Z. I don't think I distinguish upper and lower case in the alphabet. It didn't seem like an area that needed a great deal of TLC applied to it.

00:40:06 [AB]

So I can do 1000BAA, and that means digit values 1010 in base 1000, and so it becomes 10010.
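
A minimal C sketch of this digit evaluation (names invented; the real parser also handles the hypercomplex and multi-precision cases): the digits to the right of the B use 0-9 and A-Z for values 0-35, a digit may exceed the base, and the result is just a polynomial in the base.

    #include <ctype.h>

    /* Evaluate the digit string to the right of a B against a base:
       base_eval(1000, "AA") is 10*1000 + 10 = 10010, and
       base_eval(2, "13") is 1*2 + 3 = 5 (digits may exceed the base). */
    double base_eval(double base, const char *digits) {
        double v = 0.0;
        for (const char *p = digits; *p; p++) {
            int d = isdigit((unsigned char)*p)
                      ? *p - '0'
                      : toupper((unsigned char)*p) - 'A' + 10;
            v = v * base + d;           /* Horner's rule; no range check on d */
        }
        return v;
    }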

00:40:24 [AB]

No pun intended.

00:40:26 [BT]

And I don't think you're limited to single letters, are you? Because in the octonions you end up having to double up some of those letters.

00:40:32 [BS]

That's right, digraphs. And there are two or three different ways of doing that, too, which makes the whole idea of point notation that much more complicated, which is why I had to use an LALR parser to handle it. So, yeah, i, j, k, and l, that's as many letters as we wanted to use. And then beyond that, you have to double up the letters. So you can use i, j, k, and l, and then ij as a digraph, jk as a digraph, and kl as a digraph. So there are seven of those separators, as it were, which then gives you eight coefficients.

00:41:11 [BT]

And it was the digraphs that made it easier to use the LALR tokenization, I'm guessing.

00:41:19 [BS]

It was a combination of that, and it turns out one of the tricky elements is elided coefficients. So if you write something like 1i3k4, you're missing the j coefficient. That's a little tricky to do, but I found a way to do it using the LALR parser. And so I just did it. So there are as few limitations as I can possibly think of, even when you get to the crazy things. Remember the crazy things in APL 360, where you could overstrike an F and an L and get the letter E. That's what Larry used to call "visual fidelity."

00:41:58 [ML]

I do have one thing to say about numeric notation, which is that for Singeli, [06] we actually came up with something that I think is new. This is, of course, with a very different focus. You're not going to have any quaternions in Singeli, because it's designed for doing the fastest stuff you can on the hardware. So the types are what the processor supports. But a lot of the time, you want to work with base 2 or base 4 or base 8. So I decided we'd carry over this B notation from J. But what you also want a lot is numbers with repeated digits. So we ended up coming up with three different prefixes that specify that the part that's left is to be repeated. And I don't usually like to do a lot of stuff in numeric literal notation, but the problem with trying to repeat the number after you specify it is that then you don't know how many digits there were in the numeric literal. So within the language, you can't say, "All right, repeat this thing three times. One, two, three," because the number one, two, three, it could have been in any base. By the time you actually evaluate on it, it's just a number. So what we have is: R says the number of repetitions, so you can say repeat three times or four times. D says the number of total digits. So if you said, like, 10D123, you'd get, in base 10, 10 digits of one, two, three repeating that end with a three. And then W is the total number of bits you want, because, of course, it's hardware-oriented, so the bits are important. So that's another system.
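
Setting Singeli's exact syntax aside, the digit-repetition idea might be sketched like this in C (names and details invented for illustration): expand a pattern to a given total number of digits, repeating from the right so the result ends on the pattern's last digit.

    #include <stdint.h>

    /* expand(10, "123", 10) -> 3123123123: ten base-10 digits of the
       pattern 123 repeating, ending with the pattern's last digit. */
    uint64_t expand(unsigned base, const char *pattern, unsigned ndigits) {
        unsigned len = 0;
        while (pattern[len]) len++;                   /* pattern length    */
        unsigned skip = (len - ndigits % len) % len;  /* align right edge  */
        uint64_t v = 0;
        for (unsigned i = 0; i < ndigits; i++)
            v = v * base + (uint64_t)(pattern[(i + skip) % len] - '0');
        return v;
    }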

00:43:29 [BS]

W for bits?

00:43:30 [ML]

For width, bit width, because B is already used for base, of course.

00:43:33 [BT]

So to me, that seems to fit in with a lot of forms you see for formatting. When you're trying to format a word, you've got the WD and those kinds of things. Yeah, maybe. Although with the repetition, you've got something else added in there.

00:43:41 [ML]

Yeah, I mean, you definitely -- you wouldn't say, "Here's this number formatted as though it's repeated."

00:43:50 [BT]

Yeah, no, no.

00:43:52 [BS]

It sounds similar to QuadFMT format string on the left, where you've got just any particular width, you know, F8.2. There are a lot of qualifiers you can put on the left that allow you to specify one thing or another. Number of instances of it, you know, 3F8.2, but also other qualifiers that talk about fills and substitutions and things like that.

00:44:18 [AB]

Yeah, but that actually comes from COBOL, right, I think?

00:44:23 [BS]

Good question. I'm not sure.

00:44:24 [AB]

If I remember right, QuadFMT is, for the listeners, an old system function, but that doesn't mean it's not useful. It can take numeric arrays, usually tables, matrices, and then it formats them in various ways. Like if you want to make a typewritten report with dollar signs and thousands separators and other kinds of decorators, using, for example, parentheses around a number to indicate it's negative, all those kinds of things. If I remember right, the way it was designed, and it was spoken about on the APL Campfire, I think by one of the people involved with it, is that it wraps functionality from COBOL and Fortran. I suppose they were state-of-the-art ways to generate reports at the time. And so it's like an embedded language, like regex, inside APL. Right. Yeah, it does sound a bit like that.

00:45:22 [BS]

Well, it had good parentage.

00:45:25 [BT]

If you're going to build something, it's nice to have a good foundation. You don't end up with something toppling over when you finish it off.

00:45:32 [ML]

Yeah, well, and if it's a business reporting tool, COBOL is not the worst source to go to.

00:45:37 [AB]

I guess not. It says here, I'm looking at the APLX documentation, that they give attribution. So it begins with "QuadFmt is a Fortran-like formatter." And then it continues further down that there's a special picture format, the G format. It says the G text is a COBOL-like picture format. So that's where they come from.

00:46:01 [ML]

Right. So I think I've seen picture format on a primitive, right, on the format primitive.

00:46:05 [AB]

That exists as well, but it's not entirely the same.

00:46:10 [ML]

I don't know what dialect.

00:46:12 [AB]

I think APL2 has that.

00:46:14 [BS]

Yeah, that was an Adin Falkoff design, picture format.

00:46:19 [AB]

And APLX also added another type of picture format, using the alpha symbol. It's not the same one. So there are a couple of different systems for how to format things by example. I actually wanted to ask you about something you haven't touched upon, and I think NARS doesn't support, and that is inverses. You have the power operator, but as far as I can tell, I can't invert anything by giving a negative power. And there's even the area of exploration of non-integer powers, and then also negative non-integer powers. Scary stuff.

00:47:04 [BS]

Yes, I have implemented inverses. In fact, I implemented inverses in the original NARS system. So the power operator with negative values was working in the original NARS system.

00:47:18 [ML]

Yeah, I think that was the first time the power operator was implemented.

00:47:22 [BS]

Could be. I'm not keeping count. And then, not long ago, I expanded inverses in NARS2000. It may be available in the beta directory. It's a very complicated situation with respect to where NARS2000 is at the moment, but the beta directory has at least last year's version of NARS2000, which supported a bunch of things. And I'll put another one out there in a bit that supports inverses. And I think I've talked about some of the inverses before, like plus-dot-divide scan inverse produces continued fraction expansions of the number on the right. So that's a lovely one. And then, let's see.
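
For readers who haven't met it, the continued-fraction idea behind that inverse can be sketched in a few lines of C (a toy with a crude tolerance; rational arithmetic makes it exact in NARS2000):

    #include <math.h>

    /* Decompose x into continued-fraction terms a0 + 1/(a1 + 1/(a2 + ...)).
       Folding the terms back with a + 1/b reconstructs x, which is why
       this can be exposed as the inverse of a plus-and-divide scan. */
    int cfrac(double x, long *terms, int max) {
        int n = 0;
        while (n < max) {
            double a = floor(x);
            terms[n++] = (long)a;
            if (x - a < 1e-12) break;   /* tail is (nearly) integral: stop */
            x = 1.0 / (x - a);
        }
        return n;   /* cfrac(3.14159265358979, t, 5) gives 3 7 15 1 292 */
    }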

00:48:15 [AB]

I particularly like the multiplication reduction inverse in J. It gives me the prime factors.

00:48:24 [BS]

Yes.

00:48:24 [ML]

I particularly dislike it.

00:48:28 [AB]

Yeah, no, I think that's really cute.

00:48:29 [BS]

I showed that to Roger at one point. Jim Brown, Roger, and I were on a private mailing list, with just the three of us, for health reasons. And I pointed that out to Roger and Jim. They both liked it, the times-reduction inverse for factoring, and then the continued fraction ones I thought were lovely inverses.
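
The times-reduction inverse is easy to picture with a toy trial-division factorizer in C (a sketch, not how any of the interpreters actually do it): multiplying the result back together is the times reduction, and factoring is its inverse.

    /* Factor n into primes, smallest first: factor(12, out) fills 2 2 3
       and returns 3.  out must be large enough (64 entries suffices). */
    int factor(long n, long *out) {
        int k = 0;
        for (long p = 2; p * p <= n; p++)
            while (n % p == 0) { out[k++] = p; n /= p; }
        if (n > 1) out[k++] = n;        /* remaining prime cofactor */
        return k;
    }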

00:48:56 [BT]

And just to be clear, I'm assuming you're referring to Roger Hui.

00:49:00 [BS]

I'm sorry, yes, yes.

00:49:01 [BT]

Because there are a few other Rogers scattered. I mean, there's many Bobs, but there's also many Rogers around the universe.

00:49:06 [BS]

That's right. Yeah, good point, good point.

00:49:09 [ML]

Which Bob were you at that time?

00:49:12 [BS]

Boolean Bob.

00:49:14 [ML]

Ah, good. Well, Uiua has also picked up the multiplication inverse, so it does live on.

00:49:17 [BS]

Oh, wonderful. Lovely. Did they get it from Roger?

00:49:21 [ML]

I think from J. I haven't asked Guy about it.

00:49:24 [BS]

Yeah. Well, he got it from me, so I'm delighted to see it move on to others. The continued fraction one, I hope others will pick that one up, too.

00:49:34 [ML]

Yeah, that's pretty neat. And yeah, we were saying that you might not even need the jot. You take reciprocal, plus, now that might be backwards. Yeah, you might still need three characters, but only three, because it's stack-based.

00:49:48 [BS]

Not your typical inverse. Yeah. So I need to finish up more inverses. But you're right, that's a really lovely element. And for a long time, I've had quad INV as a system label, so that if you're in a user-defined function or an anonymous function, you can define what the inverse is for that function by just having a system label in the anonymous function or the user-defined function. And that's an entry point then. So when it's called for an inverse, it then gets called at that entry point.

00:50:26 [AB]

How does it work with labels in, what do you call it, dynamic functions, the curly-brace functions?

00:50:34 [BS]

That's right.

00:50:35 [AB]

But they normally don't have labels, right? You can't actually branch to a label.

00:50:38 [BS]

That's correct. These are entry points.

00:50:40 [AB]

So the same thing, you have guards as well, like that. [Bob Smith agrees]. That means the same syntax is overloaded to mean both a guard and an entry point.

00:50:51 [BS]

Well, it's got a special name to it, quad INV or quad PRO for prototypes, quad ID for identity elements.

00:51:00 [AB]

So it basically forms a special symbol, quad INV colon means something else.

00:51:08 [ML]

Yeah, it almost feels like a better fit for colon INV.

00:51:11 [AB]

Yeah, it's a label.

00:51:13 [BS]

Hmm, it's an interesting idea.

00:51:15 [AB]

Because that's how labels fall. Well, how do you, in a... oh yeah, then there's a keyword. Yeah. I've been toying with the idea of allowing non-Boolean values in a guard, in order to preserve the idea of it being a conditional. So if it evaluates to, say, negative one, then it's treated as true if you're looking for an inverse; otherwise it's treated as false. And if it gives some other value, say two, then if it's looking for a prototype, it's true; if not looking for a prototype, it's false. So it's not an entry point. That means you can do some common computations at the beginning of the function, and then you kind of branch out and decide what to do later, depending on the form it's being called with. Whereas if it's just an entry point, you can't do that.

00:52:11 [ML]

So how do you know if the inverse is ever going to be called? Like, if you try to invert a function that doesn't have any inverse code, does it just run the whole thing and get to the end and then say, oh, actually this wasn't an inverse?

00:52:30 [BS]

No, no, it's an error up front.

00:52:31 [ML]

Well, for your code, I mean, clearly yours handles this.

00:52:33 [BS]

That is, uh.

00:52:34 [AB]

So if the label isn't found, yeah.

00:52:37 [BS]

Yeah. When parsing the function in the first place, it keeps track of what the entry points are.

00:52:44 [ML]

Yeah. The issue is, if you do it with an actual guard, you can't figure that out statically.

00:52:49 [AB]

Maybe it says question mark two.

00:52:51 [BS]

Maybe I should have implemented it as quad INV left arrow.

00:52:54 [ML]

It's not too crazy to suggest.

00:52:57 [BS]

Come on. I thought that would get a rise out of you.

00:52:58 [BT]

I have the benefit of watching people's faces.

00:53:04 [BS]

Look, last time, I said I stand behind all my wild ideas, and I still do.

00:53:08 [AB]

But you also have the thing about giving a default value to the left argument by doing alpha gets. So then, yeah, you could do the same. That's essentially an if statement: alpha gets means, if there is no left argument, execute this line and then continue. And you could do something like quad INV gets: if there's no value we can give, because we're calling the inverse, then do this line. Yeah.

00:53:36 [BS]

Yeah. OK, maybe that's not such a good idea. I'm going to go back to the entry point.

00:53:42 [AB]

OK. And then, how do you actually implement those inverses? I think you mentioned the rule sets, right?

00:53:51 [BS]

Well, in an anonymous function, it's just an entry point. And whatever code.

00:53:56 [AB]

Yes, but I'm saying, in the derived functions, with the compositions, you have rules for how the inverse of the composed function is formed.

00:54:05 [BS]

Certainly in arbitrary instances like that, it's a table lookup, like everybody does. So there's a special table lookup for if the major operator of the inverse is reduction or scan, and then it continues down the chain, looking at different operands that it recognizes. Yeah, nothing tricky about that. It's just, there are some interesting ones. I think Dyalog did a lovely one for a base-value inverse, with no left argument. Or with a left argument.

00:54:39 [AB]

It has to have a left argument?

00:54:46 [BS]

Well, without one, you could assume two. But with one, it's then as many copies of that as you need. I thought that was a really nice idea. So I implemented that too.
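
A tiny C sketch of that scalar-left-argument behavior (illustrative only, names invented): the inverse of base value produces as many digits as the value needs.

    /* repr(10, 1234, out) fills out with 1 2 3 4 and returns 4: base-10
       digits, most significant first, as many as the value requires. */
    int repr(long base, long v, long *out) {
        long tmp[64];
        int n = 0;
        while (v > 0) { tmp[n++] = v % base; v /= base; }
        if (n == 0) tmp[n++] = 0;               /* zero still gets one digit */
        for (int i = 0; i < n; i++)
            out[i] = tmp[n - 1 - i];            /* reverse into place */
        return n;
    }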

00:54:57 [AB]

That's the one primitive I want to change. If the left argument (on encode) is a scalar, it should just do that.

00:55:07 [ML]

The one primitive you want to break backwards compatibility to change [others chuckle].

00:55:11 [AB]

Yeah. Yeah. Yes. Because it is the same thing as modulus (or remainder), just without comparison tolerance. And if you really mean to do a modulus without comparison tolerance, you should probably write that, not use encode [chuckles], unless you're doing golf.

00:55:29 [BS]

Oh, I strongly agree. I think there are some primitives (base value, representation, modulus), things like that, that are, as far as I'm concerned, integer-only primitives. If you want to use them on floating point numbers, then scale the floating point number to an integer, run it, and unscale it on the way back.

00:55:50 [ML]

The right argument of modulus, I think it's very useful for that to be a floating point number. I'm not quite as sure about the left argument, although I've probably used that. But I mean, with the right argument, like, 1 modulus is very useful.

00:56:02 [AB]

Yes

00:56:03 [BS]

Well, I'm not convinced. I think the whole floating point values there should be ... [sentence left incomplete]. I have not gotten to the point where I give a domain error. I'd have to be really crusty and curmudgeonly to do that. Not that I'm above that, but still, I think those should be banned.

00:56:25 [ML]

There are a lot of applications, though. Like in graphics, if you want to tile the plane, for example, then in order to have your tile just at one set of coordinates, you would map every point to this one tile using a modulus. And that's naturally a floating point thing.

00:56:44 [AB]

So I think, from a plain obvious point of view: if I do the division remainder of 1.5 into 4, then it goes two times; that gives you three; there's a remainder of one. There doesn't seem to be any logical issue here. Makes perfect sense.

00:57:03 [BS]

Yeah. Yeah, you're right. Those are lovely numbers you chose, but the real world is a lot fuzzier than that. And it's those real world examples that I'm worried about. I think they're going to give incorrect results or results you don't even understand. And we've seen these talked about on forums before of, you know: "why is this producing that value?" And of course, it's really bound up in the whole fuzziness of inexact numbers. And they're not using pleasant numbers like 1.5.

00:57:38 [ML]

Yeah, I mean, there are some precision concerns, but there are also precision concerns with exponentiation, and definitely logarithms.

00:57:45 [BS]

Well, I'm a precision nut [others chuckle]. And partly that is to know when not to use it or when it shouldn't apply.

00:57:52 [AB]

Oh, so you can annotate your numbers with these letters, saying that this number is not just a normal number [but] it is a precise (in these various ways) number. At that point, you could possibly err [Marshall agrees]. So if you try to use modulus with these high-precision numbers and it doesn't make sense, then yeah, can't do that.

00:58:13 [ML]

But you should support modulus on rationals, right?

00:58:16 [BS]

Oh, yes. Yeah. It's only inexact numbers.

00:58:19 [ML]

Yeah, I guess I can understand that. Although it's really the same sort of issue as comparison.

00:58:24 [BS]

Yeah. You deserve what you get.

00:58:24 [ML]

Yeah.

00:58:25 [BT]

When you were talking about ball arithmetic, you were saying you can use it in the complex plane, but does it actually extend to the quaternion plane? Does ball arithmetic start to break down at that point too?

00:58:40 [BS]

It's just the ball. I mean, ball is really a three dimensional concept, but if you think of it as an interval, then each coefficient is an interval and you can have all eight coefficients be intervals, that is balls: midpoint [and] radius.

00:58:59 [BT]

Oh, I see. Right, right, right, right. Okay.

00:59:01 [AB]

You never throw four-dimensional snowballs? [Bob Therriault laughs] What's wrong with those?

00:59:05 [BT]

Well, I think if it's four dimensions, then it's a question of time as well. So maybe there's a time-space thing of when you throw the snowball.

00:59:11 [BS]

That's right. The arc. The arc of the snowball.

00:59:14 [ML]

I was assuming you just had one radius, regardless of whether it's complex or quaternion or octonion.

00:59:20 [AB]

Oh, surely they can have an imprecision that's greater in one dimension than another one.

00:59:25 [ML]

Well, yeah, but you can also have an imprecision that's shaped like a banana. [Bob Smith agrees]. You can't represent exactly the ... [sentence left incomplete]

00:59:33 [AB]

That's true. That's true. So here's melon precision, not ball precision, right?

00:59:38 [ML]

Yeah, I mean, the only real representation of what exact results your computation could take is the [chuckles] original representation of the input region and the computation, at which point you're not actually computing anything.

00:59:51 [AB]

You could potentially represent a number as a function, right?

00:59:56 [ML]

Well, yeah. I mean, at some point you get to the point where you're just representing the output of your program as the [chuckles] source code of your program and ... [sentence left incomplete].

01:00:03 [AB]

I told you, you end up with "I" [others laugh], no matter which path you take. [07]

01:00:07 [ML]

Well, no, but "I" actually computes stuff.

01:00:09 [AB]

Does it now? [laughs]

01:00:11 [BS]

I think we're going to have coefficients that are named after fruit. [Bob Therriault laughs]. So we're going to have apples and oranges and pears and melons, and those are all going to be the balls of the different coefficients. Instead of ball arithmetic, we've got fruit arithmetic.

01:00:27 [BT]

And Conor probably wants birds. [laughs]

01:00:31 [AB]

Bird's number [chuckles]. Yeah. So, I'm obviously playing with an older version of NARS; I might be a bit out of date here. But one thing I noticed I don't have here is the ability to grade non-real numbers. You mentioned briefly earlier, when you went through it, that they're not ordered, but that doesn't mean you can't order them.

01:00:58 [BS]

That's right. No, you're right.

01:01:01 [ML]

The ordering will not respect addition.

01:01:02 [BS]

That's right. So you might be able to say 0i1 is greater than 0. But the mathematician requires that if two things are greater than 0, then their product is greater than 0. And that's where it breaks down mathematically: 0i1 squared is negative 1, and that's not greater than 0. So that's what breaks down in the mathematical definition of order, and that's the property that is lost going from real to complex in the mathematical world. You're right, you can do a total array order, and you can order pretty much anything. It'll be an order, but what are its properties? So mathematicians look at it in terms of properties such as that. There's one other property that's similar to it, but that's not something the mathematicians like. But we could do grades of ... [sentence left incomplete]. In fact, I thought I did allow grades on complex numbers at one point. I have to think about that. I'm not sure where that stands.

01:02:14 [AB]

Now, you do have this multi-set operator, which allows you to treat arrays as sets immediately. So maybe it's less necessary. But for example, if I wanted to test if two sets were identical, I would sort them. I don't care what ordering I get, as long as it's a consistent ordering, so that I can subsequently see if the two arrays are now identical. If you don't let me grade complex numbers, I can't check if sets of complex numbers are identical anymore.

01:02:47 [BS]

Yeah, and that's what the multi-set operator would do. [Adám agrees]. Match as a left operand, and then the arrays on either side would then do that. It'll do the grade for you underneath, without you having to crank up the grade symbol.
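
The kind of consistent-but-arbitrary total order this needs can be sketched in C: order complex numbers lexicographically by real part, then imaginary part. It is not an order in the mathematician's sense (it doesn't respect multiplication), but it is consistent, which is all a multiset comparison requires.

    #include <stdlib.h>

    /* A total order on complex numbers: by real part, then imaginary. */
    typedef struct { double re, im; } Cpx;

    static int cmp(const void *a, const void *b) {
        const Cpx *x = a, *y = b;
        if (x->re != y->re) return x->re < y->re ? -1 : 1;
        if (x->im != y->im) return x->im < y->im ? -1 : 1;
        return 0;
    }

    /* Do two arrays hold the same multiset of complex numbers?
       Sort both with the arbitrary order, then compare elementwise. */
    int multiset_match(Cpx *a, Cpx *b, size_t n) {
        qsort(a, n, sizeof *a, cmp);
        qsort(b, n, sizeof *b, cmp);
        for (size_t i = 0; i < n; i++)
            if (cmp(&a[i], &b[i]) != 0) return 0;
        return 1;
    }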

01:03:05 [AB]

Oh, and that does implement the total order then, under the covers?

01:03:09 [BS]

I'm not sure. It's been a while.

01:03:14 [BT]

It would at least be a consistent order, so you could compare the two, though.

01:03:17 [AB]

Hopefully.

01:03:18 [BS]

That's right.

01:03:19 [AB]

Yeah, that does work. Ah, but that means I can extract the total order out of the multi-set operator.

01:03:28 [BS]

I'm not sure what you mean by "extract".

01:03:31 [AB]

Can I not use the multi-set operator to figure out what precedes what? Maybe not. Because it only sorts them. It never allows me to get the relative order between them.

01:03:45 [BS]

Yeah, it'll give you yes or no, in terms of match multi-set.

01:03:49 [BT]

I wonder, if you applied it monadically (you'd have to write this implementation), but if you applied it monadically, you could have it return the ordered set.

01:03:58 [BS]

Well, that's grade. Might as well just do a grade.

01:04:02 [BT]

Well, that's true, yeah.

01:04:03 [AB]

It's a sort.

01:04:04 [BS]

Well, it's sort, you're right.

01:04:06 [AB]

But then you give away the underlying order, which might not be a meaningful order in any way. It kind of has the implementation leak into the language. Right now, as far as I can tell, it's actually protected; I cannot access the internal order. It could be anything. Internal order, for what it's worth, could just be the bit pattern that it's stored as.

01:04:23 [BT]

Well, as Conor often says, we've blown past the hour mark [chuckles].

01:04:29 [ML]

Even without him [everyone laughs].

01:04:32 [BT]

And I haven't tracked that mark, so I'm saying that in very rough terms. But is there anything else that we've kind of skipped over? I think the people who are really into implementation will really appreciate this episode. And my sense is, whenever we do something like this, we're always thinking: "there's an audience out there that might be for it, but most of the audience is not". And then we find out that actually all of the audience is very interested in this stuff, because it just doesn't get discussed otherwise. So having said that, is there anything else you want to talk about now in terms of implementation? Maybe the direction you see NARS2000 going in the future? Bob?

01:05:14 [BS]

There is. There is one element that I wanted to mention that has been extremely helpful to me. And that is, I've got such a large number of data types (and it's not just hypercomplex numbers; there are hypercomplex numbers with integer coefficients and hypercomplex numbers with floating-point coefficients: they're in the same category, but those are different representations). And similarly, all the multi-precision types show up in hypercomplex form as well. So there's rational, quaternion, octonion, complex. Those are all different data types. With the large number of data types I've got, in the normal algorithms you run through, it's often helpful to be able to convert one data type to another. So if I want to add two numbers, I need to convert them to a common data type. And it's easy to determine the common data type: that's essentially a matrix lookup, with the row and column being the left and right data types.
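
A hypothetical Python sketch of that lookup (NARS2000 itself is written in C, and its type set is much larger; the names and table entries here are illustrative only):

    # Common-type promotion as a plain 2-D table indexed by the two
    # argument types. VFP stands in for a multi-precision float.
    BOOL, INT, FLT, RAT, VFP = range(5)

    COMMON = [
        # right: BOOL INT  FLT  RAT  VFP
        [BOOL, INT, FLT, RAT, VFP],   # left = BOOL
        [INT,  INT, FLT, RAT, VFP],   # left = INT
        [FLT,  FLT, FLT, VFP, VFP],   # left = FLT
        [RAT,  RAT, VFP, RAT, VFP],   # left = RAT
        [VFP,  VFP, VFP, VFP, VFP],   # left = VFP
    ]

    def common_type(left, right):
        return COMMON[left][right]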

01:06:30 [BS]

And then the other thing I wanted to mention that has been extremely useful is, I've written a whole bunch of conversion routines that are all accessible through a table lookup. So if you want to convert from data type A to data type B, you essentially provide (data type A, data type B) as indices into this matrix of functions; grab the function; execute the function on the left and right arguments; and whump! You've got an answer that's now in that common data type, which need not be either of the two data types you're dealing with. It could be something entirely different. For a rational and a floating point, the common data type is a multi-precision floating point, a third data type. So all of those functions are written and in there, and I'm able to take advantage of them quite easily when doing the conversions. And that kind of conversion, along with some other related actions, has been extremely helpful: get them under your belt, get them right, and then be able to use them through and through, through all of the primitives.
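
Continuing the sketch, the conversion routines form a second table, indexed by (from-type, to-type). The converters below are Python stand-ins, not the real routines, and the common-type lookup is re-declared minimally so the snippet stands alone:

    from fractions import Fraction

    INT, FLT, RAT, VFP = range(4)      # VFP: multi-precision float stand-in

    def common_type(l, r):             # minimal stand-in for the matrix lookup
        return {(RAT, FLT): VFP, (FLT, RAT): VFP}.get((l, r), max(l, r))

    CONVERT = {
        (INT, RAT): Fraction, (INT, FLT): float, (INT, VFP): float,
        (RAT, VFP): float,    (FLT, VFP): float,
        (INT, INT): lambda v: v, (FLT, FLT): lambda v: v,
        (RAT, RAT): lambda v: v, (VFP, VFP): lambda v: v,
    }

    def dyadic_add(x, xt, y, yt):
        t = common_type(xt, yt)        # pick the common type...
        # ...convert both arguments to it, and add there.
        return CONVERT[(xt, t)](x) + CONVERT[(yt, t)](y), t

    # A rational plus a float lands in a third type, the stand-in VFP:
    assert dyadic_add(Fraction(1, 3), RAT, 0.5, FLT)[1] == VFP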

01:07:48 [BT]

Does that get around some of the restrictions? Say, for instance, when you go between octonions, which are non-associative, and quaternions, which are non-commutative but associative: would you be able to transfer octonions to quaternions, do something associative, then transfer them back?

01:08:06 [BS]

[laughs] Well, dealing with it at the primitive level, we're not dealing with associativity or commutativity.

01:08:09 [BT]

Right

01:08:10 [BS]

That only comes up when you have multiple terms, A times B times C, [with] both associativity and commutativity. So at the individual primitive level, I don't have to worry about that. Thankfully. Thank you for not complicating my life. [Bob Therriault laughs]

01:08:28 [ML]

Although be careful how you optimize base decode.

01:08:29 [BS]

[laughs] Okay.

01:08:32 [AB]

So because Conor isn't here, on behalf of Conor, I'd like to ask if you have plans to add additional compositional operators to NARS.

01:08:43 [BS]

I have added Commutator and Dual. I think you guys have Dual also. [08]

01:08:54 [AB]

Not in Dyalog yet, but some others have [chuckles].

01:08:56 [BS]

Okay, well, that was part of doing inverses. So once I got inverses under my belt, it was straightforward to do Dual and Commutator. Dual is the equivalent of: you open a drawer, you modify the contents of the drawer, and then you close the drawer. Open and close are the inverse operations. So this is a matter of doing first a function g, then a function f, and then g-inverse. And that as a template is a very useful operator: it takes the operands f and g and implements exactly that. Commutator takes it one step further: if you're doing (f g), g goes first, then f; then, after the g-inverse, Commutator does an f-inverse as well. So the inverses are in reverse order. And that as an operator has a variety of interesting uses. Outside the computing world, you'll find it's very common when doing, say, Rubik's Cube computations. That is, if you're flipping around on the Rubik's Cube, a commutator is a very common operation: (g, f, g-inverse, f-inverse). Those are sequences that change the state of a Rubik's Cube. And that could well be some common operation that you do (not that I can do this) barely thinking about your fingers. And then you have that as one of the tools in your toolbox when dealing with the Rubik's Cube.
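
A small Python sketch of the two operators as Bob describes them. Since Python can't derive inverses, they're passed in explicitly here; in NARS2000 the inverses come from the operands themselves:

    import math

    def dual(f, g, g_inv):
        """Open the drawer (g), modify (f), close the drawer (g-inverse)."""
        return lambda *args: g_inv(f(*map(g, args)))

    def commutator(f, f_inv, g, g_inv):
        """g first, then f, then g-inverse, then f-inverse."""
        return lambda x: f_inv(g_inv(f(g(x))))

    # Dual: multiplication done by adding under the logarithm.
    times = dual(lambda x, y: x + y, math.log, math.exp)
    assert abs(times(6.0, 7.0) - 42.0) < 1e-9

    # Commutator: a nonzero net effect shows the operands don't commute.
    inc, dec = (lambda x: x + 1), (lambda x: x - 1)
    dbl, hlv = (lambda x: 2 * x), (lambda x: x / 2)
    assert commutator(inc, dec, dbl, hlv)(10) == 9.5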

01:10:50 [BS]

And then also the other example I use is partitioned plus-reduction. Many, many years ago, I did papers on boolean functions, boolean techniques, and partition functions as well. And some of those functions are perfectly suited to the Commutator operator, where you have exactly that. You might have a plus-scan, and the inverse of that is taking differences with the previous element (the negative-one-drop, zero-comma idiom), and then a p-expand and a p-compress (p is the partition). And those interleave in such a way that it's exactly a commutator. That was one of the first things I tested when I had the Commutator operator working, to make sure that it did what it was supposed to do, and luckily it did. I'm sure there are many other examples. Commutator as a concept comes from mathematical group theory, back, I think, in the 1830s; that's the earliest reference I could find. It's another fundamental operation that you may want to perform on a group with respect to two of its elements. So it has use in mathematics as well, and I had to put it in, in a manner similar to the way I'm putting in other elements of mathematics. So I've got calculus, in that I have numerical differentiation and numerical integration, and then partial differentiation in a manner similar to the way Ken defined it in one of his papers. And then, thanks to a hint from Corey Scott, I implemented fractional differentiation and fractional integration, so that it makes sense to do a half derivative, for example. And if you do a half derivative of a half derivative, the orders are additive, and that becomes a first derivative. There are general relations of that kind between the non-integer orders and the integer ones, that is, first derivative, second derivative, etc. So those are things that are in there now.
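
The partitioned plus-reduction can be sketched in Python with NumPy: p marks the start of each partition with a 1, and the sums come out of a plus-scan combined with its inverse (pairwise differences), which is the commutator shape Bob describes. This is an illustration, not his code:

    import numpy as np

    def part_plus_reduce(p, x):
        p, x = np.asarray(p), np.asarray(x)
        s = np.cumsum(x)                          # plus-scan
        ends = np.append(np.nonzero(p)[0][1:] - 1, len(x) - 1)
        t = s[ends]                               # compress: scan values at partition ends
        return np.diff(np.insert(t, 0, 0))        # inverse of the scan: differences

    p = [1, 0, 0, 1, 0, 1]
    x = [1, 2, 3, 4, 5, 6]
    assert part_plus_reduce(p, x).tolist() == [6, 9, 6]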

01:13:16 [BS]

And then another thing I put in not long ago is dealing with integer-only equations, namely simultaneous linear Diophantine equations. [09] So that's in, and I'm having fun playing with that. Beyond that, I'm doing, let's see: linear algebra. So I'm putting in QR decomposition and eigenvalues and eigenvectors. Now, that's been done a lot; however, it has not been done on quaternions, and that's proving to be something of a challenge. Part of my remit is that I want these functions to work on all the data types. Now, the crazy uncle, I may hold off on. And what has caused me to hold off (not that I haven't tried) is that the way I test whether an implementation is good is: does it pass the test of identities? Something as simple as matrix inverse, well, you do it twice, you should get back the original matrix. And I could not find a way to do that with octonions, try as I might. I could do it for quaternions; I just couldn't find a way to do it for octonions. And I may just not be smart enough to do that.
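
The identity test is easy to state in the ordinary floating-point case; a quick NumPy sketch of the inverse-twice check (the quaternion and octonion cases Bob describes are precisely where this gets hard):

    import numpy as np

    # Inverting twice should give back the original matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    assert np.allclose(np.linalg.inv(np.linalg.inv(A)), A)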

01:14:47 [BT]

But they're non-associative, right?

01:14:50 [BS]

That's right, yeah.

01:14:54 [BT]

How? [chuckles] It depends on the order in which you do the inverse? Like, how would that work?

01:14:55 [BS]

It depends on the order in which you implement the inverse. I'm using Gauss-Jordan elimination, and I need to do that in just the right way, and I've not found a way to do it. I did read an interesting paper on matrix representation. This is the idea that you take, say, a complex number, and there is a corresponding matrix representation of it: you take the two coefficients, and they get plugged into a two-by-two matrix, some with the signs flipped, etc. Anyway, that's a matrix representation. And the idea is that if I take two different complex numbers and multiply them, the matrix representation of the product is the same as the inner product (the matrix product) of the matrix representations of the individual numbers. So that's nice. And you can do that with quaternions too, so that works out well. Octonions, not so much. [Bob Therriault chuckles]. There is an interesting paper that shows that you can have two different matrix representations of octonions. They don't satisfy quite the identities that I just mentioned for the other hypercomplex numbers, but there is a different set of identities for them. So anyway, I implemented that: when you do a matrix representation for octonions, you set the system variable quad LR directly to either a lowercase l character or a lowercase r character, and you'll get the two different matrix representations.
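
For the complex case the representation is the familiar one: a+bi becomes the 2-by-2 matrix [[a, -b], [b, a]], and multiplying the numbers corresponds to multiplying the matrices. A quick NumPy check of that identity (the quaternion and octonion analogues are as Bob describes):

    import numpy as np

    def rep(z):
        """2x2 matrix representation of a complex number a+bi."""
        return np.array([[z.real, -z.imag],
                         [z.imag,  z.real]])

    z, w = 3 + 4j, 1 - 2j
    # Representation of the product equals the product of representations.
    assert np.allclose(rep(z * w), rep(z) @ rep(w))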

01:16:46 [ML]

Yeah, so the issue there is that octonion multiplication isn't associative, but matrix multiplication is?

01:16:51 [BS]

Yes, it is. That's right. Exactly right.

01:16:53 [ML]

So it's not going to work?

01:16:55 [BS]

So that's a hard one to do, and I've failed to do it for matrix inverses. Maybe there's another way of doing it. I just keep trying. Well, not for a while.

01:17:08 [BT]

[laughs] Well, I think people should definitely keep their eyes on what's going on with NARS2000, especially if they're interested in these areas of higher-dimensional hypercomplex numbers, because you seem to be exploring them in great depth. I mean, if I were a physicist playing around with this stuff, I'd certainly be keeping my eye on it, because you may find something that touches on something they've been thinking about, and that can be magic. At this point, I'd just like to thank you, Bob, for being on the episode. It's been fascinating to listen to you talk about all the work and the [chuckles] whole Quaker method of decision making when you're one person. [laughs] I think you're remarkably restrained.

01:17:53 [ML]

Have you ever had trouble reaching consensus?

01:17:55 [BT]

[laughs] Consensus of one. [Bob Smith laughs] But by thinking deeply, I think you actually are achieving some of that. And I'd just like to thank you for coming on. If people do want to make comments or get in touch with us, this is, of course, my time to mention contact@arraycast.com; you can get in touch with us there, and we appreciate all your comments. Once again, Bob, thank you so much for being part of this.

01:18:17 [BS]

Oh, it's been of huge value to me. I really enjoy talking about this. I've expanded my Quaker consensus to all of you now. I don't get to do that very often. Thank you.

01:18:28 [BT]

Well, we're happy to be part of it. And with that, I'll say happy array programming.

01:18:32 [everyone together]

Happy Array Programming!

[MUSIC]