Transcript
Transcript prepared by
Adám Brudzewsky, Bob Therriault, and Sanjay Cherian
00:00:00 [Dan Bricklin]
But there's a whole other thing about APL, which is that APL, from my viewpoint, is about thinking a certain way, approaching a problem using a certain type of thinking. And it became much easier to solve certain problems using that. It's a point of view.
00:00:19 [Music]
00:00:29 [Conor Hoekstra]
Welcome to episode 101 of ArrayCast. My name is Conor, and we have a very special guest that we will get to introducing in a couple minutes. But first, we're going to go around and do brief introductions. We'll start with Bob, then go to Stephen, then to Adám, and finish with Marshall.
00:00:44 [Bob Therriault]
I'm Bob Therriault. I'm really excited for today's show. And I am a J enthusiast.
00:00:50 [Stephen Taylor]
I'm Stephen Taylor. I'm almost that excited about APL and J.
00:00:55 [Adám Brudzewsky]
I'm Adám Brudzewsky. I work with APL at Dyalog.
00:01:00 [Marshall Lochbaum]
I'm Marshall Lochbaum. I've worked with various array languages. I made BQN. I'm a Singeli enthusiast.
00:01:06 [CH]
And as mentioned before, my name is Conor, massive fan of all the array languages, and also a massive fan of spreadsheets, which will become relevant, as you might already know from the title. But before we get to introducing our guest and the topic of spreadsheets, we have one announcement and a couple of reminders. So we'll go to Bob for the announcement first, and then we'll throw it over to Adám for the two or three reminders.
00:01:28 [BT]
And I've just got a little mini announcement first, just because this is sort of like outside the realm of what everybody would probably know about the podcast. We record remotely, and today, Conor is in Yellowknife, Northwest Territories. So his internet may break up at times. And so that might make for some interesting edits. I'm just preparing you, but when you hear Conor, he's actually talking and it's a beautiful sunny day up in Northwest Territories. I can see him right now. And well, it's just quite amazing, this sort of worldwide thing. [01] Anyway, my announcement is actually also quite exciting. It's about Juno, which is a new IDE that Martin Zolek has produced. And it's based on Donald Knuth's literate programming. And what he's basically built is an interesting interface. It runs on Wasm, so it basically runs in your browser. And you program in little paragraphs, basically. And in each paragraph, you can see what the results are. And then you can update paragraphs and move them around. It's in development, but there will be a link to it. And it's very exciting. A lot of people were responding very positively to this. And it's a different way to program in J. And so really hats off to Martin for what he's done. It's been a big contribution. I've only had a day to play with it. It's interesting. It's not the way I usually program, so it might take me a while to get into it. But yeah, it's really, really neat.
00:03:01 [CH]
Awesome. We will leave a link for that in the show notes. And over to Adám for the reminders, I believe.
00:03:06 [AB]
Yeah, it's just to remind you of the spring of real-life meetups. So on March 19, the British APL Association has a real-life meetup in London, UK. That's in addition to those bi-weekly meetups I have online. On March 27 and 28, there's the APL Germany spring meeting in Berlin, Germany. And on April 7, there's the DYNA, that's the Dyalog North America spring meetup in New York.
00:03:40 [CH]
Awesome. We will make sure to leave links to all three of those as well. And obviously, if you are in the area and interested, be sure to check those out, because I've definitely been to some in the past and they are a ton of fun. And it becomes a rarer opportunity these days to meet up with folks in person. With all of those reminders and announcements out of the way, it is my great pleasure and honor to be introducing our guest today, who is Dan Bricklin. For those of you that recognize that name, you will know him as the co-creator of VisiCalc, the first edition, if you will, of all the different spreadsheet programs. Most folks are familiar with Excel and Google Sheets, but it all started back in 1979. I think we will get the full story, hopefully, from Dan, with VisiCalc. And I mean, the stuff that I know, I think, is from the Isaacson biography on Jobs. I've read a couple of different books about the story, and VisiCalc gets mentioned in a couple of these books. Some people argue it is the main reason that the Apple II was as successful as it was, because VisiCalc was such a powerhouse of an application; you know, there were other programs on the computer, but people would buy the Apple II specifically for this program. So it's quite amazing. I should mention that Dan co-created VisiCalc with Bob Frankston. And on top of that, he is the CTO of Alpha Software Corporation and the president of Software Garden, and has, I think, founded a couple of other software companies. We're going to get all into it. And the last thing I'll say about him is that on his Wikipedia page, and I don't actually know how many of the guests we've had on actually have a Wikipedia page, it says he's sometimes referred to as the father of the spreadsheet, which is quite amazing. I mean, I'm on record as saying that I at one point had three different Excel versions on my computer, because as a former actuary, I absolutely love Excel. 
And anyways, we're going to throw it over to you, Dan. Super excited to have you here today. And maybe we can start off by you giving us your own personal, you know, short version or long version of your history from how you got into computers to VisiCalc to where you are today now.
00:05:54 [DB]
Okay, thank you. I was always interested in technology of various sorts and stuff like that. In the days when I was in high school, which would be in the mid-60s, the 1960s, you couldn't get access to computers, and, you know, learning about computers was hard and stuff. But when I was, I think, 15, a local high school, not the one I went to, but the one that my cousin went to, had a terminal to a time-sharing system. The Philadelphia public schools had a handful of schools, the magnet schools or something, there were some high-end schools, some schools in neighborhoods with more disadvantaged people, and these few schools had a terminal. And my cousin came home one day and said, "Hey, look." He came over to me and said, "Look at this book about Fortran. We now have a terminal and you can use it." And I was like, oh; I had been building circuits with transistors and stuff like that, but I couldn't figure out how to build computers. And I read this book, Fortran Programming by Decima M. Anderson. And then, many afternoons, I would go off to Germantown High School in Philadelphia and be able to go in there and try programming on this QUIKTRAN terminal. This was a version of Fortran that ran on an IBM computer that was in downtown Philadelphia. So this was highly unusual, for anybody to have access to computers in those days. And I would beg, steal and borrow for more time by calling up the Department of Public Education in Philadelphia, saying, "Hey, can I come down and use one of your computers?" And they let me come down to where they had, you know, these 1401 computers that I could run card decks through. The next summer, the summer that I turned 16, that would have been, what is that, '67, I guess, I took a course at the University of Pennsylvania, a National Science Foundation course, down the hall from where the first digital computer, ENIAC, had been built. 
And so I got a little bit of semi-formal training, but a lot more computer time on a minicomputer that they had there, an IBM minicomputer, an 1130. I then got a job at the University of Pennsylvania at the Wharton School when I was in high school, you know, doing some programming and, you know, being a computer operator, where you rip off the printouts and hand them to the grad students. And so I got some experience there, did programming mainly in Fortran. But it turns out, when I was at the University of Pennsylvania, not taking the course but working, I would go to the building that had our computer, and that's the same building where Iverson was doing APL, believe it or not.
00:09:03 [ML]
That's the IBM Philadelphia Scientific Center, right.
00:09:05 [DB]
Yeah. And that was, so that's, you know, what a coincidence. Who knows? We were probably using the same computer sometimes. I don't know. You know, and I'd be bringing my card decks in or whatever to run my programs and stuff. But I went to school, I got into MIT for undergraduate and went there, and then ended up working on the Multics project. Multics was a time-sharing system.[02] An awful lot of the ideas about operating systems came out of that. Bell Labs had been involved, and when Bell Labs left the project, they built their own mini version of Multics, taking the "multiple" down to a single one, Unix; so Unix came out of Multics. And then of course Linux came out of Unix. And so much of what we use today comes out of that. There, I got a job working with Multics as a, you know, way to be able to help pay for going to school, besides taking the courses I took. And I worked with some of the pioneering people in programming. And I worked on the part that had to do with interactive programming, interactive programming languages, and the command system that you would type stuff into. So one thing I did was write something where you could say two plus two, and it would say four at the command line. You'd say calc two plus two, it would come back with four. We didn't have something like that; you had to write a program before that. So I wrote a program, there may still be a Unix version of it or something, convert date to binary, where you could type dates in all sorts of different ways, like tomorrow or, you know, 3:30 tonight or whatever. And it would be able to figure it out and create a date value for you and stuff. So I worked on that. So it was always on the user interface side. I then worked a little bit on the Lisp for Multics. And I worked on the bignums part. 
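The date utility Dan describes, where free-form phrases like "tomorrow" or "3:30 tonight" become a single date value, can be sketched roughly like this in Python. This is an illustrative toy, not the actual Multics convert date to binary; the phrase handling and the fixed "now" are invented for the example.

```python
from datetime import datetime, timedelta

def parse_date(text, now=None):
    """Toy natural-language date parser: map a few phrase shapes
    to a single datetime value, the way Dan describes."""
    now = now or datetime(2024, 1, 15, 9, 0)   # fixed clock for the example
    t = text.strip().lower()
    if t == "now":
        return now
    if t == "tomorrow":
        # start of the next day
        return (now + timedelta(days=1)).replace(hour=0, minute=0)
    if t.endswith(" tonight"):
        # e.g. "3:30 tonight" -> today, forced into the evening (PM)
        hh, mm = t.split()[0].split(":")
        return now.replace(hour=int(hh) % 12 + 12, minute=int(mm))
    # fall back to one fixed numeric format
    return datetime.strptime(t, "%m/%d/%Y")

print(parse_date("tomorrow"))      # 2024-01-16 00:00:00
print(parse_date("3:30 tonight"))  # 2024-01-15 15:30:00
```

The real utility accepted many more forms; the point is only the shape of the interface: arbitrary human phrasing in, one binary date value out.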
That's the part where you can have numbers with as many significant digits as you want, because it would just make bignums and it knew the algorithms for it. And I worked a little bit on that program. And then I got to work on the first Multics implementation of APL. And that was kind of cool. It was led by a particular person there. And we used a particular architecture for the program that relied on calling programs, which turned out to be very expensive on the Multics system at the time. So it wasn't as fast as you'd want, but I was one of the two people on that project. There was the lead guy, and then there was me. And then we decided to rewrite it. I made the proposal with some friends to rewrite it as a much more efficient version of APL, taking advantage of our new PL/1 compiler. And I got to be the project leader, the head person of a few people who were developing APL for the Multics system. And this was funded by the US government, among others. It was on Honeywell equipment. And I got to write the main program that did inner and outer product and all of the operators and all sorts of stuff like that. And I got to know all that stuff really well. I wasn't really an APL programming person very much, but I did get to know the specifics of every operator and stuff like that, and figure out how to make it as efficient as possible, looking at the machine code that was produced, to make very, very fast operations. Because obviously in APL, you'll do some inner product to do something trivial, but it can be very expensive in terms of computation. And there's very few characters to write and it's very easy to think about. So I wanted to make it fast. So I worked on that and we eventually shipped it. It was actually shipped after I graduated. I worked on it the summer after I graduated, and it ended up being used, you know, at one of the Air Force bases and I don't know what else. 
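The inner product Dan mentions is APL's f.g operator; +.× on two matrices is ordinary matrix multiplication, a couple of characters that hide a triple loop, which is exactly why he cared about the generated machine code. A rough Python sketch of the generalized form (the function names here are just for illustration):

```python
from functools import reduce

def inner_product(f, g, A, B):
    """Generalized inner product (APL's f.g):
    C[i][j] is the f-reduction of g(A[i][k], B[k][j]) over k.
    For f=+ and g=*, this is matrix multiplication: O(n^3) work."""
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must match"
    return [[reduce(f, (g(A[i][k], B[k][j]) for k in range(inner)))
             for j in range(cols)] for i in range(rows)]

add = lambda x, y: x + y
mul = lambda x, y: x * y

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(inner_product(add, mul, A, B))  # [[19, 22], [43, 50]], i.e. +.x
# Swapping g gives a different question with the same shape,
# e.g. add/== counts positional matches along each row/column pair.
print(inner_product(add, lambda x, y: x == y, A, A))
```

The conciseness is the point Dan makes: the expression is tiny and easy to think about, but the implementation pays for every element, so the implementer has to make the loop fast.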
But years later, I found out that one of the regular users of it was somebody who was at Tektronix. His name is Ward Cunningham. He's the guy who invented the wiki. And what did he do? He used APL on Multics. So, you know, there's sort of a connection all the way through. And when you think of the wiki, realize that that person was thinking APL also at some point. I went to Digital Equipment Corporation and worked in the computerized typesetting area, and ended up working on editing, computerized typesetting editing, which was a relatively new thing at the time, and then ended up in word processing: the first word processor from DEC, which was one of the first screen-based word processors at the time. And I was project leader of that development, with a few programmers on it. So this was a thing where you had automatic word wrapping and stuff like that and the printout, and I learned about regular users who would be using it. And in typesetting, you learn that people are paid by the keystroke and you want to make it very efficient, very concise; obviously, coming from the APL world, I was into concise stuff. We figured out how to make it very easy; the commands would be very concise and easy to use. We implemented this word processing system, which was used by regular users who didn't care about programming or anything. And it ran on a very small computer, a PDP-8 that fit in a desk. So it was kind of like a very small minicomputer, like a microcomputer. I helped develop a microprocessor-based terminal with a screen and all that. And then after DEC, I went to work for a small company that made electronic cash registers for the fast food industry and used microprocessors. And I learned about a small company building hardware based on microprocessors. It was, I think, the 6800 that they used or something like that. And I learned that small companies could make some cool stuff. 
I decided to go back to school to learn about business, because I always wanted to start a business, especially with my friend Bob Frankston, who I met at MIT on the Multics project. And I got into the Harvard Business School and decided to go there for a couple of years, treat myself to that and learn about business. And it was there that I was with like 80-plus people in class. And we learned about, you know, running numbers and doing all sorts of stuff. And I have a talk on TED.com where I talk about this part, coming up with the idea for the electronic spreadsheet. And you can take a look at that, where I tell it better than I can tell it here, and which has some visuals. But I came up with the idea of making a way to be able to put numbers and words together to lay out the output, where basically you're drawing the output at the same time as you're showing the formulas that are behind each of the places on the screen. And then I came up with, well, it would work much better because I need to be able to identify each of the cells, each of the places where the values are. Rather than making you make up a name like A or fifth, you know, or sales or whatever, I'll use the location like a map: A1, B1, C1, D1, etc. And with that, I could do ranges, because you could say from here to there. Now, I always expected to be able to say circle these things and do an operation like sum across this, because I was used to thinking, obviously, in terms of working with multiple values at a time. But what you get out of having a grid is very easily having ranges. And I knew how powerful that was. So that's where we got the electronic spreadsheet we have today. We decided to put it on the Apple II for various reasons. So Apple got it first for about a year. And Steve Jobs, years later, when interviewed, was asked to comment about it. He talked about, well, the Apple II had a floppy disk drive and the competitors didn't. 
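The grid addressing Dan describes, location names like a map plus ranges that fall out of them for free, can be sketched in a few lines. This is a toy in Python (VisiCalc's actual internals were 6502 assembly and worked quite differently):

```python
def cell_to_rc(ref):
    """Map an A1-style reference to zero-based (row, col), e.g. 'B3' -> (2, 1).
    Column letters work like a bijective base-26 number (A=1 ... Z=26, AA=27)."""
    col, i = 0, 0
    while i < len(ref) and ref[i].isalpha():
        col = col * 26 + (ord(ref[i].upper()) - ord('A') + 1)
        i += 1
    return int(ref[i:]) - 1, col - 1

def range_cells(a, b):
    """Expand a range like A1:B2 into the rectangle of cells it names.
    This is the 'from here to there' that the grid gives you for free."""
    (r1, c1), (r2, c2) = cell_to_rc(a), cell_to_rc(b)
    return [(r, c) for r in range(r1, r2 + 1) for c in range(c1, c2 + 1)]

# A tiny sheet stored sparsely: only cells with values exist.
grid = {(0, 0): 10, (0, 1): 20, (1, 0): 30, (1, 1): 40}
total = sum(grid.get(cell, 0) for cell in range_cells("A1", "B2"))
print(total)  # 100
```

The design point survives to this day: because every cell already has a map-like name, a range needs no extra machinery, just two corner references.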
And that was important because that's one of the reasons that we put VisiCalc first on the Apple II. And he said if VisiCalc had been put on some other machine, you'd be interviewing somebody else right now. You know, wow, Steve Jobs saying that. So VisiCalc became very popular. The Apple II became very popular. And eventually there were other spreadsheet programs like VisiCalc; SuperCalc was the first, and then many others. Eventually there was one called 1-2-3 that came out after us. And there are various stories on that. But 1-2-3 could read VisiCalc files. And then eventually, years later, came Excel. And Excel could read 1-2-3 files. So you could take a VisiCalc file and then, you know... So the basic idea has stayed the same. In fact, some of the keystrokes are almost the same. But the idea of recalculation, of having a grid and ranges and stuff like that, has stayed with us to this day. Is that what you were looking for? Or is that way too long?
00:18:32 [CH]
It's definitely not way too long. We've had a couple folks that come on and they'll stop and be like, "Should I stop?" And then we're like, "No, no, no, keep going. This is like, you know, better than a Marvel movie," is my quote.
00:18:40 [DB]
Well, I think I got the end of the spreadsheet story that you're going to need in terms of... I mean, I've been involved a little bit with spreadsheets since. I've implemented a few of them since for various reasons, but that's not been the main area of my life. I've done a lot of other stuff. Right now, I'm working on data collection on mobile devices, like in factories, you know, and doing inspection and stuff like that, where internet connectivity is spotty. I've worked in the pen world in various incarnations over the decades. Most recently, a note-taking product early in the life of the Apple iPad, which I programmed myself for the iPad, became a very popular product, NoteTaker HD, [03] for a few years. And so I'm into tools for regular people to be able to do their work. That's what I build.
00:19:33 [CH]
Yeah. I mean, there's a bunch of follow-up questions. The main thing that I find remarkable is you kind of just... When you talk about it: "I came up with this idea, and I have a TED Talk about it." You're nonchalant about it, if that's the word. But one could tell a version of that where you have basically invented the most popular programming language/paradigm, in that... Some people will give you a top-10 list of programming languages by the number of programmers, and then someone will come along and say, "Well, you've left out the number one. It's Excel programming, spreadsheet programming," which people will get into arguments about, whether it's programming or not. But the number is... Last time I checked, it's like a quarter billion people in the world doing that kind of programming, of which I have written crazy study programs and things that spreadsheets were not designed for, with VBA at the back end and whatnot these days. And I think Simon Peyton Jones added LAMBDA. The stuff you can do is incredible. So I'm just curious to get your thoughts. At the time, did you realize the significance of what you were doing, or is it only in hindsight that you realize you basically invented the most popular way, arguably, of programming today?
00:20:49 [DB]
Well, I mean, if you're a programmer or a developer of something, you always have to believe that what you're doing is going to be amazing. Otherwise, why go to so much trouble to do it? And I had worked on word processing before, which did not catch on as fast as you would think back then. You know, all sorts of things I've been involved with. So I saw that computers were not catching on to that extent, but we knew that it was valuable, that it would be. When we did VisiCalc... Well, first of all, the way that VisiCalc did save was it basically wrote out keystrokes that, when read back in, would actually recreate the spreadsheet. It would start at the bottom right, which was the faster way for allocating. And it actually had the commands as if you had typed them in. So it was like a macro. So it had an ability to do macros built in, and people actually wrote other programs that wrote these types of files to do things. I think a company called FactSet did that, among others; they're still around in the financial world. And then later on, we thought of the idea of adding macros and stuff like that, etc. So early on, we knew the idea of connecting it to other systems. But when I was trying to figure out... When I was working with Bob, Bob Frankston did most of the programming, because I was still in graduate school. I was at Harvard Business School at the time. And so Bob knew how to program the 6502 chip, which is what the Apple II had, and had tools for doing that. And I helped him, though, with some design of the internals, you know, programming internals and how things were laid out. So we had to decide numbers: what's the maximum number of significant digits that we should support? Remember, I came from a world where there were also unlimited significant digits, but let's forget that. But just the reality of computers was we wanted to be able to... We had to say what would be the max that we would allocate for. 
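The save format Dan describes, a file of replayable keystroke commands rather than a data dump, can be mimicked in a few lines. The command syntax below is invented for illustration; the real VisiCalc file format differed in its details.

```python
def save_as_keystrokes(grid):
    """Serialize a sheet as replayable commands: the file is just the
    'keystrokes' that rebuild the sheet, which is why it doubles as a macro.
    Writing later cells first echoes the bottom-right trick Dan mentions
    (here approximated by a reverse sort of the cell names)."""
    return [f">{ref}:{formula}"
            for ref, formula in sorted(grid.items(), reverse=True)]

def replay(cmds):
    """'Load' by replaying each command, exactly as if it had been typed."""
    grid = {}
    for cmd in cmds:
        ref, formula = cmd[1:].split(":", 1)
        grid[ref] = formula
    return grid

sheet = {"A1": "10", "A2": "20", "A3": "+A1+A2"}
cmds = save_as_keystrokes(sheet)
print(cmds)                      # ['>A3:+A1+A2', '>A2:20', '>A1:10']
assert replay(cmds) == sheet     # the file round-trips by replay
```

Because "loading" is just replaying input, any other program that can emit such a file can drive the spreadsheet, which is how third parties like FactSet could generate sheets programmatically.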
And I said, I don't know, what could this possibly be used for? How about the budget of the United States of America? How many digits is that in dollars, not in pennies? And that's how we ended up with the number of digits that we have, you know, rounding it to what fit in bytes and stuff like that. And of course, it turned out a few years later, the Wall Street Journal had an editorial where they said, you know, hey, Reagan has a new budget, and all over Washington, people have their yellow legal pads and physical spreadsheets out figuring out what it meant. So people actually used it for that. So, you know, we hoped it would be used for that. We knew it should be. And we assumed that somebody would come up with something better. But right now, you know, it's been... 1979 is when we shipped it. '78 is when I came up with the idea. So that's a long time ago. And we haven't come up with something better yet that is dominant, right? Right now, I mean, thank you to the people who did Google Sheets; they decided to be Excel compatible. So everything sort of all works the same way. Nowadays, we're moving towards AI and stuff like that, where all that may be programming. It's not the type of programming we think of when we write exact expressions, you know, characters that you put in, like you did with APL and like you did with assembler language and whatever. The spreadsheet is kind of that style of programming. Maybe we're getting out of that style of programming, where you just, you know, vibe programming or whatever. So it might actually be a programming language that goes even longer, who knows. It makes me happy because people want to talk to me years later. You know, I get inducted into halls of fame from computer museums. I just did that last week down in Atlanta, Georgia. 
They have this new computer museum that you would probably get a kick out of, because they have so many supercomputers down there. There are just Crays all over the place, and Silicon Graphics machines and all that, which were used to do the type of calculations that use many, many numbers and arrays and stuff like that. Yes, Bob, yeah, you're-
00:25:22 [BT]
I had my hand up. Man, you're picking up on everything here, Dan. I can't remember when we first talked whether I said we've sort of mentioned who was going to talk next, but I just put my hand up to my ear and bang, you're on it. So what I was going to mention, and thanks to you: prior to this, Bob Bernecky and Whitney Smith made available to us a bunch of audio tapes from IP Sharp, and I created a documentary on IP Sharp. I created, I'll say, a documentary, although it's really just Whitney's raw tapes of talking to Ken Iverson. And you put me in touch with the Computer History Museum.
00:25:58 [DB]
That's a different museum. That's the one in California, not in Atlanta, but yes. I'm also in the Hall of Fame there, but yeah.
00:26:01 [BT]
You are. And you put me in touch with people. As a result of that, Whitney and Bob are putting together, and I think Conor's involved with this too, are putting together the files, the audio files, and they'll be listed and curated and then they'll be available through the Computer History Museum, which is- thank you so much for mentioning that.
00:26:23 [DB]
That's the place. I mean, so much stuff is there. They have interviews with people while they were alive, you know. I was on a panel with Bob Frankston and Charles Simonyi, who was the head of the Multiplan project and a lot of other things over at Microsoft. We were on a panel there, and who should be in the audience but Donald Knuth, and I got to meet him. And you know, people like that. And so that's a really important place. They try to have as much stuff as they can, and they do do some stuff with software, and you want to be in that museum because that's one place people go. I have videos from when I was at my company, Software Arts, that did VisiCalc. We paid for the Boston Computer Society, which was a local computer club, but a very powerful one that had lots and lots of members and would get incredible speakers, and I would pay to have them videotaped. So when Steve Jobs announced the Macintosh, he first announced it at a private event for Apple shareholders and stuff in California. He then flew across the country with the whole development team of the Mac, and they showed it publicly at the Boston Computer Society. And we have a video of that, which we got people to donate money for; you know, Brad Feld and other people donated money to get those tapes digitized really well. We have interviews with the head of the IBM PC group at the time, may he rest in peace, you know, the heads of Commodore and Apple and stuff like that, Bill Gates, all speaking. So you should have the stuff. Obviously, the stuff from Iverson should definitely be in that set. The fact that it wasn't, to me, is a surprise and a loss. In the old days, everybody knew the APL programming language. Now, that may be because of the jokes about it, but they knew it, they may have used it, and they knew that there was value there. The jokes about, you know, can you figure this out? 
But then later, Perl became the language where you would say, here's a one-liner, can you figure out what it does? Of course, there's the uniqueness of the use of a different character set, because of the ability to say, I'm not going to compromise, I can take advantage of the golf ball printer, you know, the typewriter thing from IBM, and then later on some of the video terminals that we were able to use. Those things are important in the history of computers and computer languages, besides the fact that people used them to do stuff that was valuable.
00:29:06 [BT]
How much do you think that, I mean, the typeset, the glyphs, were really instrumental in identifying APL? How much do you think that held APL back? To me, it's like I'm talking to Forrest Gump right now, because you're in the middle of everything that's happening.
00:29:23 [DB]
I wasn't in the middle. I was peripheral to a lot of it. I was in the middle of some of it, okay? Some of it, I was in the middle. I was peripheral, obviously, to APL, but not completely peripheral, in that I had to implement my version of it that a few people used. Well, first of all, if you were using teletypes and stuff like that, that wasn't very good. That was a problem. If you were using a Selectric, [04] the only problem was you had to push the button, take the ball off, put another ball on, put it back down. So it was a little bit of a problem, and you had to put little stickers on the keys. But then once the video terminals came in that could do any character, you had less of that. So I think that held it back a bit. On the other hand, it made it special, and it showed a freedom of thought that probably inspired others in other ways. It made it harder for us. Luckily, one of the programmers we had on our project was the one who had to do the driver to allow us to use those characters that were not in the normal character set and be able to use that ball and stuff. And worse yet, to be able to use the escape, the break key, which actually was not a character but a thing in the chain that was actually supposed to cause a disconnect or whatever in the connection, and make that sort of the backspace erase key or something, whatever it was, I remember. And being able to program a single character into two characters, that was a pain, and that became a major project for him. I think it was Paul Green who did that. I mean, whatever. So it was more of a challenge. Compare something like Lisp, which just used a lot of parentheses and stuff like that. And then there's PL/1, which used other characters that you didn't see elsewhere. It wasn't just the 26 letters plus a few characters. It used the vertical bar and stuff like that and the caret. So, different languages. 
Maybe it helped PL/1 and others that used characters, like the brace keys, that weren't in the original basic character set of the early ASCII and EBCDIC that we programmed in. So maybe that helped break that through. That's a good thing to think about. Was that the one that's, you know, all the way over here, that got the freedom of thought that we don't think twice about when you think of the C language? Having all these characters that were not in the Fortran character set, that were not on an 029 keypunch, you know, punch card, that weren't in the paper tape that you read, you know, for the DEC stuff in the old days. Maybe that gave the freedom. Each thing does that. And there's a whole other thing about APL, which is that APL, from my viewpoint, is about thinking a certain way, approaching a problem using a certain type of thinking. And the spreadsheet is that type of thing, where basically you lay things out with words and numbers, perhaps, and lay it out in ways that make sense to you for thinking about the problem. If you happen to think about the problem in terms of lists of numbers and arrays and, you know, whatever ways, if you're that type of person, which is a certain type of person, it's not most people; but then, most people don't think of any way of calculating, you know, and stuff like that. But if you're that type, then it was the appropriate way to describe your problem and how to solve it. And it became much easier to solve certain problems using that. It's a point of view. Alan Kay, the great computer visionary, envisioned the Dynabook back in, what is it, 1969 or thereabouts at Xerox PARC: this thing that children would have that looked kind of like an iPad, that they would have access to all sorts of information on, and they would, whatever, which is today's tablet and phone. He envisioned that stuff. 
He was, I think, quoting somebody, but he liked to say that a point of view is worth something like 30 IQ points. In other words, how you approach something. All that Copernicus did was to say: when we do our calculations, let's not use the center of the earth and have to have epicycles and things like that; let's use the center of the sun. Change our point of view to say that. Suddenly the calculations are a lot easier. And that's the same type of thing. For some problems, thinking of them in the space that APL was very good at is the appropriate thing. For other things, you know, it was tedious. The same thing with the spreadsheet, the same thing with regular programming, procedural programming languages as opposed to others, programming languages that didn't have pointers versus ones that did. So this thing that a point of view is worth something, I say this to my wife all the time, whenever she can't find something. And then I tell her it's right over there. It's because I'm standing here and you're being blocked by that chair and can't see it. But I can. Point of view is worth, you know... That's an important way of seeing things, the way you approach stuff. This was the head of the Multics project, Professor Fernando Corbató, may he rest in peace, who was the head of this project. And he wrote some papers in the late sixties, I think, about the Multics project: "Multics: The First Seven Years." Multics was late. It took a long time. And they talked about a variety of things. One is that one way you get people to continue funding your project is to get them using it for something they care about. And then they will fund you forever because they need it. And so that happens with lots of languages and stuff. 
But they also found out, their research showed, that you could get advances by improving your compilers and your tools and stuff like that by factors of two, three, and four. But if you had a different algorithm, if you approached the problem differently, you got orders of magnitude improvements, 10 times or more, just from thinking about it. We hear that just today, in the last few months, when they think about this stuff with AI: what are the Chinese doing that the Americans aren't doing? Well, they're thinking about it differently. They're using different algorithms and approaches to it, and different approaches can make things much better or much worse. That's why we still have languages for approaching things differently, like array languages and stuff like that. Perhaps taking advantage of the hardware: in the original Fortran language, certain things were there because that's how the hardware worked. You could only do certain subscripts to arrays because that happened to be the way the index registers worked, and you could only do plus or minus there; you couldn't do multiplies or whatever. So you could only do a constant times a variable plus or minus a constant or something, because that's the way the hardware worked. And nowadays, thanks to gaming, we now have all these array processors and various types of processors that it turned out you can use for artificial-intelligence-type calculations and for other types of graphics calculations and who knows what else, that now make...
00:37:34 [ML]
Well, they're pretty good for APL, I have to say.
00:37:36 [DB]
Yeah, right. It's like, oh, you have to think of it that way. And when you think of it that way for graphics processing, and some of that comes from thinking, learning those algorithms and parallel stuff. I remember Gordon Bell, may he rest in peace, who was at... He did the PDP-11, the PDP-8, he was at DEC and then he went off to, at one point was at the National Science Foundation and he was funding development of computer stuff and he realized that in order to be able to keep Moore's Law going and to keep speed going, we needed to work on parallel processing. That was the big deal and he funded a lot of basic research in parallel processing, which we end up using in these type of systems and the graphic systems and all and then video chips that are for optimizing along that. And then do you, I don't know, is there compilers for APL and stuff that take advantage of that and then pull out the parallelisms and all? I don't know. I know in...
00:38:40 [ML]
There is one, just our last episode, we interviewed the author of a GPU APL compiler called Co-dfns, but generally they're done with interpreters and that's not good for the GPU, but it is really good for these CPU vector units because you have your whatever operation, you have a sum and you write that with vector instructions and then the interpreter just needs to be able to call that function. So it can pretty much get within that one primitive, it gets the full array performance.
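To illustrate the point Marshall is making, here is a hypothetical toy (in Python, standing in for an interpreter written in a systems language): the interpreter dispatches once per primitive, and each primitive works on the whole array, so the interpretive overhead is paid per operation, not per element. All the names here are invented for illustration, not taken from any real APL implementation.

```python
# Toy array interpreter: one dispatch per primitive, not per element.
# The per-element work happens inside a whole-array "kernel", so the
# dispatch cost (the dict lookup) is amortized over the array.

def prim_add(stack):
    b, a = stack.pop(), stack.pop()
    stack.append([x + y for x, y in zip(a, b)])  # whole-array kernel

def prim_sum(stack):
    stack.append(sum(stack.pop()))               # whole-array kernel

PRIMITIVES = {"add": prim_add, "sum": prim_sum}

def run(program, *arrays):
    stack = list(arrays)
    for op in program:            # interpreter loop: once per primitive
        PRIMITIVES[op](stack)
    return stack.pop()

print(run(["add", "sum"], [1, 2, 3], [10, 20, 30]))  # 66
```

In a real interpreter the kernels would be vectorized machine code; the structure is the same, which is why an interpreted array language can get "full array performance" within each primitive.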
00:39:14 [DB]
It can't fuse primitives together. And if you're spending most of your time inside of it, it doesn't matter that you're interpreting. As I know from when I wrote an interpreter for APL: if there's an operation that the hardware does, that used to be the dominant thing and now suddenly becomes almost instant, that's a help. But then there's the thing of what we learned to do with compilers for C and other languages, [05] right, which is that they took a more global view of what was going on and did optimization, like taking calculations out of the middle of a loop that were invariant and moving them outside of the loop. And it would figure out things that a good programmer would do, but it would do that automatically and do it globally. The same type of thing of figuring out that there can be parallelism here, that we could actually use techniques that you haven't thought about, to be able to speed things up and get more power. But that requires a lot of investment. And when it came to the investment in C and stuff like that, there was a reason for it, because everything in the operating system was written in it and the benefit was so broad. And if you look at the browsers, there's an interesting thing. I heard a podcast a while back with the developers of WebKit. WebKit is what Safari uses and all, and it's open source; all sorts of others use it. Now Edge uses it and Chrome comes from it and all that. And when they were developing it, Safari, early on for Apple, the way they said it is that any time we have a new release, nothing can be slower than it was in the previous releases. Okay? We're allowed to improve things, but we can't add things and improve things at the expense of something existing. This was a mindset that they had. When I heard that, I was like, "Whoa!" You know, that's what we saw, is that the browser kept getting faster and faster. And of course, there were multiple at the time.
There was Microsoft with their Internet Explorer. There was Firefox coming out of Mozilla, and then there was WebKit. Okay? And those three were in friendly competition with each other to see who could be faster and better, which is good. That kept things going: "Well, I can do this." "Well, I'll do that too, but I'll do it better." You know, and they just improved and improved. And then they would find other implementations. They brought in the V8 engine, that had been used for Macromedia and Flash, as some of the engine for JavaScript in the browser, and JavaScript became extremely fast. It does just-in-time compiling. If you go through a loop a hundred times, it says: "Oh, it's worth it to actually compile this into some really fast bytecode, or close to machine code," and then runs that. So they went to the trouble of saying, "We'll use that," so you'll find out that JavaScript is extremely fast compared to what interpreted languages used to be, because there was so much competition, so much value to it, and so much money put into developing that. So you can do that type of stuff, and it could be true with that, and we'll see what happens over time. You know, in terms of that, maybe some of the AI programming can help the developers who are figuring out how to improve the runtimes, taking advantage of techniques here and there.
00:43:11 [ML]
Yeah, so in terms of never making things slower, you have to be very careful with that because sometimes you back yourself into a corner. Yeah. And the thing is-
00:43:20 [DB]
Theoretically, there are some issues with it, but whatever it is, that's how they started out. I don't know what actually happened with all the stuff, but often that forces you to think real hard and find alternatives. I'd have to go find that original ... [sentence left incomplete].
00:43:39 [ML]
Yeah, often you think you've backed yourself into a corner, but you actually haven't because there's some trick.
00:43:44 [DB]
You know, you think that it would, but then somebody else comes up with a better idea or something like that, or time goes on. And it was worth it for them to do that, because otherwise what happens is some areas start getting worse and worse, and then you decommission that part, and you get bit rot of all sorts. Don't ask me, but think about what it meant to have that mentality, whether or not they actually did it. You know, I don't know, but that mentality of where we don't sacri[fice things], because otherwise you're robbing Peter to pay Paul all the time, and you could have things that fester that you didn't think about that suddenly become very slow and become a problem. So thinking about that meant a mentality of speed, which they had the luxury of doing because of the value: that they could now do things on a smaller, lighter machine that used less power. In doing that, it actually made the battery last. I mean, the value was so great in terms of doing some of that stuff that they could spend the money. We didn't spend that money on compilers in the old days, right, by comparison, you know, but they could spend whatever was necessary, and having Steve Jobs pushing it and stuff. So, whatever, I just thought that that idea would be helpful to think about.
00:45:12 [ML]
Where I was going with that was APL. I mean, we've got this great vector model, right? And with respect to the just-in-time compiling, we have backed ourselves into a corner in that things that you do with vector primitives are much faster than things you get with JavaScript's JIT compiler, because we're able to vectorize just about anything if you can write it in the array style, and JavaScript isn't.
00:45:37 [DB]
No, you don't know. If the just-in-time compiler said: "oh, this is an array operation", and converts it into an array operation that sends it off to the GPU... They can do that.
00:45:46 [ML]
I haven't seen the GPU stuff, but ... [sentence left incomplete].
00:45:48 [DB]
If you have a smart enough compiler [chuckles] and you have enough extra memory.
00:45:53 [ML]
I don't doubt it's possible. I just haven't seen that it's done.
00:45:57 [DB]
I don't know. We'll see.
00:45:59 [ML]
Well, the C and JavaScript model has some advantages and the APL model has some advantages.
00:46:04 [DB]
Depends on the problem. You should use the right tool.
00:46:06 [ML]
Yeah, it depends on what you want to [do]. Do you want to say: "well, I want to work on arrays as fast as I possibly can." It may be that APL is actually a better choice because there's some stuff (with like shuffles) that you just can't do in C unless you're going to explicitly vectorize and say: "I'm giving up portability and I'll write it with vector instructions."
00:46:27 [DB]
But you could do that by saying: "okay, there's an APL module that you can access, and write the subroutine for it in APL", just like you write stuff in Wasm or whatever; [you'd] be able to put various languages together if you have the right stuff.
00:46:49 [ML]
Yeah, but you'd rather do without making the programmer learn two languages, right? [chuckles] So of course, from the APL side, we in APL would like to get the benefits of just-in-time compilation without giving up our vector benefits. And from the other side, I don't think they think about it quite as much.
00:47:07 [DB]
Well, they might. I don't know how many of the developers of JavaScript you've talked to, because they're looking for advantages. A lot of their self-worth is in being able to make something a little faster [chuckles], and to make this case faster and stuff like that, and to take advantage of stuff like that. If they could detect that a particular vector could be used there, or an extension [or] a function that could be added, a built-in function, that allowed you to think a certain way and use that, [it] would be helpful. They add stuff all the time. They add various primitives and stuff like that to the JavaScript language all the time. But then a lot of stuff is not even done in JavaScript; it's done in the assembly language underneath it, whatever, which is out of my world. You know, that's why it helps to talk to others. If you can find people on those teams, on the Mozilla team or the WebKit team, find those people. And they like to be able to have their ... [sentence left incomplete]. If their runtime could support other languages on top, they'd love that! It makes it more valuable and it gives them something cool to work on. And to take advantage of the GPU that's there anyway, that they're already taking advantage of for other things. Think about what they have to do to be able to make CSS work. CSS is doing all sorts of transformations and stuff. I don't know how much they use the GPU, because they're doing animations and stuff like that in CSS. So the browser already is taking advantage of, and may already have, some of the primitives there, ready to be called, and you just don't know it. So whatever. That's why it helps not just to have meetups among yourselves, but to meet people elsewhere. Maybe here I'm sort of an in-between.
I know a little bit about the JavaScript world and I program mainly in JavaScript right now, though I programmed in Objective-C and I programmed in machine code for this language, this machine, this machine, this machine, to C, C++. I programmed in lots and lots of different languages. But, I mean, the big thing that I got, to make APL real fast was to start using gotos! [06] [whispering] "God forbid, gotos. You're not supposed to use it!
00:49:54 [ML]
[We're] still using it [laughter]. Once in a while. Mostly for like early exits and stuff.
00:49:59 [DB]
Well, the thing is that by using the right type of computed gotos (where you used a variable goto) to set that up at the beginning, and then say "go!", and the inner or outer product would do its thing (jumping around and all that), I was able to double the speed in some cases. Because with the other way of doing it (the lookups of various sorts, that were not just setting up [but] that had tests in them and stuff), you spent all your time doing testing, when you could make it much faster; and that's how my computational thing was so much faster. I took advantage of this particular feature that was philosophically a no-no. If you looked at the actual machine code that was produced (which I was constantly looking at), it was much faster. Because there are two things: there's the optimizing of the expressiveness of the person expressing the algorithm that they want to express ...
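A rough analogue of the computed-goto trick Dan describes, sketched in Python (which has no goto): resolve which operation to run once, before the hot loop, instead of testing an opcode on every iteration. Binding a function up front stands in for setting the variable goto; all names are illustrative, not from any real APL implementation.

```python
# Hoist the dispatch decision out of the inner loop: the loop body
# never re-tests which operation it is running.
import operator

OPS = {"+": operator.add, "max": max}

def inner_product(combine_name, xs, ys):
    combine = OPS[combine_name]   # the "computed goto": set up once
    total = None
    for x, y in zip(xs, ys):      # tight loop, no per-step dispatch test
        p = x * y
        total = p if total is None else combine(total, p)
    return total

print(inner_product("+", [1, 2, 3], [4, 5, 6]))    # 32
print(inner_product("max", [1, 2, 3], [4, 5, 6]))  # 18
```

In machine code the win is larger than in Python, because the pre-set jump target removes the conditional tests from the hottest path entirely, which is where the doubling Dan mentions came from.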
00:51:04 [ML]
That's the important one.
00:51:08 [DB]
That's a real important one. And then there's the converting of that expression into something that the computer actually runs. And those are two separate things. They're related, in that you want to make sure the things that you express map onto the hardware in the right way to make them efficient. If it's a good thing for thinking, it should hopefully be efficient for running; and if it's an efficient thing for running, it would be helpful if you could provide a way to take advantage of it that was logical for the way you think about the project. So you sort of have to balance those.
00:51:40 [ML]
Yeah. Well, and that's one issue with JavaScript that I was thinking about: JavaScript can't make quite so many assumptions because its arrays are mutable so it always has to keep in mind the possibility that it does something with an array ... [sentence left incomplete]
00:51:53 [DB]
No, no, no, no, no, no, no, no. It looks at what's running and other stuff and says: "this is not changing; here I'm only using five elements of the array; I will make it a structure". It does this in the just-in-time [compiler]. I mean, it is doing incredible [things]. Look, this is the stuff from when they learn from compilers. It is figuring out to say: "oh, it could be a lookup for all these things in an object, but you're only using these five always; let us turn it into a structure where everything is at a known place and we'll use that". It can do those things because it can see it when you're running. It can put a little bit to detect when that isn't the case and then break out at that point. The hacks [chuckles] that are done!
00:52:48 [ML]
It definitely has tools to analyze this, but there are a lot of cases where it just says: "I don't know". So if you've got a function that returns an array and the function is used in complicated ways, then it doesn't know how that resulting array is going to be.
00:53:03 [DB]
No, but it may detect that that array is ... [sentence left incomplete]. The thing [is], when you have runtime [information] and you have a lot of power... I don't know all the things that it does. I just know it runs like a bat out of hell, and it is so fast that I write interpreters in JavaScript that are incredibly fast. I mean, I'm writing, in JavaScript, an interpreter that isn't a very smart interpreter, and it is going through loops at incredible speeds, so that I'm able to keep up with people typing and stuff like that, and I can take advantage of that. I've seen the speed increase over the last decade or two. And that type of thinking could be applied to array stuff, because the hardware has increased, not by factors of two, but by factors of 10 and 20, whatever. I just saw something that says it makes Moore's law look like it's flat, how fast NVIDIA products have gotten over the last few years. So we now have incredible speed advantages [chuckles], such that it may be worth spending a lot of time getting things ready to run for something that can now run real fast. So I don't know. But I'm talking theory. I don't know the projects, the problems that you're solving with your systems. I don't know what ideas from the languages that you're using you can provide in other languages.
00:54:48 [ML]
Yeah. Well, so maybe to give an example of the kind of thing we do (I don't claim this is the most valuable thing to be doing, but it's pretty fun) I was working on outer product recently and the way we do outer product is instead of having any sort of looping ... [sentence left incomplete]. Well, okay, there is looping, but instead of doing the loops in the order that the outer product suggests, we take the left argument, right. Each single element of the left argument applies to a row of the result, and that's together with the entire right argument.
00:55:23 [DB]
You're having me rethink from many years ago, but yes, go on.
00:55:28 [ML]
We expand out this left argument. We do a replicate. So we say each element is repeated 5 times, 20 times, whatever the size of the right argument is. And then we just repeat the right argument over and over. And then we combine them.
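The replicate-based outer product Marshall describes can be sketched like this in Python: repeat each left element once per right element, tile the whole right argument, then combine the two flat vectors elementwise, with no nested index arithmetic in the combining step. (Illustrative only; real array implementations do these steps with vectorized kernels.)

```python
import operator

def outer_product(op, left, right):
    n = len(right)
    left_expanded = [x for x in left for _ in range(n)]  # replicate each element
    right_tiled = right * len(left)                      # repeat whole argument
    flat = [op(a, b) for a, b in zip(left_expanded, right_tiled)]
    # cut the flat result back into the rows of the outer product
    return [flat[i * n:(i + 1) * n] for i in range(len(left))]

print(outer_product(operator.mul, [1, 2, 3], [10, 20]))
# [[10, 20], [20, 40], [30, 60]]
```

The payoff is that each of the three steps (replicate, tile, combine) is itself a flat whole-array operation, exactly the shape of work that vector units are good at.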
00:55:43 [DB]
You're using mathematical manipulation, stuff like that, algebraic manipulation to be able to say: "here's how I can optimize" and stuff like that. There are many ways of optimizing. One thing one can do though, is also to say: "let's take some of the thinking that you use when you're writing in these languages, make it available as some functions that can be used in the JavaScript, the Python, the other worlds, and name them carefully and put them in those languages and show why they're valuable and to get people thinking algorithms that way". And then by the way, if you're there: "look, we have whole systems that work that way that you may find it's better to do this piece of your application in this language because you can express it so much better". And you're already thinking that way and using the terminology we sort of snuck in there by the naming of our functions and you could do that.
00:56:56 [ML]
The NumPy guys did that.
00:56:57 [DB]
What?
00:56:59 [ML]
NumPy guys did us a favor and did that for us.
00:57:01 [DB]
Yeah. So, I don't know your world at all. I mean, I'm just here to say: "hey, I built a thing". Stephen, do you have something you're asking?
00:57:11 [ST]
Yeah, you were speaking just a moment ago about the influence of programming languages on your thinking. And earlier on, you talked about working on APL as far back as the Multics project. This is purely a personal question. I wonder if on Multics you ran into a quiet guy called Lewis Morton, who I think might have been there about the same time as you, at MIT.
00:57:40 [DB]
What did Lewis work on?
00:57:43 [ST]
I know he worked on Multics at that time, but which part of it, I don't know.
00:57:46 [DB]
Yeah, I don't know, there were a few hundred people. Plus, I'm not good at names, especially back then. I did not remember names.
00:57:57 [ST]
Well, it's been years.
00:57:59 [CH]
It's been years? It's been decades.
00:58:01 [DB]
Well, I remember one name. I was at an office that was related. We were talking about Multics project, about APL or something, I think, or whatever. It was somewhat related to working on APL at the time and I ran into this person who told me about this book (which I wish I could find, and I can't) for doing transcendental functions and how to program the transcendental functions and stuff like that. And then I used that because I actually wrote the transcendental functions for VisiCalc (the original one) so I wrote the code for that, and I used that book. And it turned out that that person, as I recall, was Monty Davidoff. And then I read Bill Gates's latest book. I don't know if any of you have read it but it's worth reading his new book, "Source Code", about his childhood. You'll get a kick out of it, any of you who are old and remember how hard it was to get computing time in the old days. Or if you're a parent and want to learn about parenting; his parents were incredible in what they did for raising a gifted child [chuckles]; an unusual thinking child. And Monty told me [about] this book. Well, it turned out he's the person who did the arithmetic stuff and transcendentals and whatever for Microsoft. The listing that's in the front cover of the book has his name on it! And he was one of the first people. He worked on the original BASIC. He did the arithmetic routines or something for the floating point or whatever (I don't know) for it. And whoa! But I remember that it was, I think, in an office that was related to my work on APL that I ran into him, that I learned how to be able to [do] transcendentals. So the VisiCalc had transcendentals, which we needed for all sorts of stuff. The world is connected and it's worth running into people. You never know who you're going to run into.
00:59:55 [ST]
A little later on, you spoke about your discovery that with the grid metaphor [in] VisiCalc, you can now do ranges. And I understood you at the time to be relating that back to your previous work with APL. It's like, I know how useful this is.
01:00:17 [DB]
I'd been assuming that you would be able to (when I originally [envisioned] VisiCalc using a mouse) circle these things to select all of them and push the "sum" button. I always was thinking in terms of that. But the ranges that the grid gave me, gave me a syntax for being able to type that into a formula. But yes, go on. Yes.
01:00:39 [ST]
I was wondering if there were other instances in your work subsequently where the early training, if you like, in APL thinking, identified ways you could exploit it.
01:00:52 [DB]
Yeah, some of it had to do with implementing it, because remember, I was not programming in APL. I couldn't understand a lot of the complicated ... [sentence left incomplete]. I had to write test programs just to test things out. I didn't actually use it very much. But the thinking that I had to do to develop an APL interpreter is what influenced me quite a bit. Because I got into efficiency, and the efficiency of doing the same thing over and over again, and how to write very efficient, fast programs. When I was at DEC, I learned how to write very small programs. I mean, I programmed a display processor that had 256 bytes! 256 bytes total of programming space, I think it was. Or was it 512? I'm trying to remember. And what we did was, if you had a test and the test was true, we would set the high-order bit at the end of the instruction counter, and if it was false, [it] would be the low. I mean, it was a very, very small machine [laughs]. I'm used to writing very, very tight code, but very efficient code. Some of that came from working on advancing from the first version of APL, that was way too slow, to one that was competitive, that was fast, and the techniques I used in implementing it; and implementing something where I could not change the language, because it was unforgiving. I had to do every operator (every damn operator) that Iverson decided to put in there. I had to make every single one of them work. It wasn't up to me to say: "this is a hard one; I'm going to punt". That was really important, I guess, in my development. I hadn't thought about that before, but I guess that was really important in my development. So APL was a catalyst for me, but not because it was an array language. That just happened to be an aside. But it also made me appreciate what you could do, once I knew all of the cool things you could do with an outer product.
[07] And I could say: "did you know if you did an inner product of max over this, whatever, is the way to be able to find the largest number in an array or something". That's so cool that you could think that way. That had an influence on me. But I didn't actually have problems that needed it very much, other than typing in examples I saw elsewhere from other people. I'm not one of those people who thinks that way. But that's the whole thing, is that different people think different ways and you give them the right tool. If you gave an accountant a spreadsheet, they're like: "wow! This changes my whole life and all that", et cetera. You showed it to a regular computer person. They'll say: "of course, a computer can do that; I can write a program". There's nothing special about that. And if you showed it to regular people, they'd say: "well, computers can do anything! can't they predict the weather and stuff? what's so special about it?" It was only when you got the people who were thinking a certain way, who said: "that's what I do for a living; I spend 20 hours a week doing a problem you just did in five minutes, thank you; by the way, here's a credit card, please take it".
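What Dan recalls ("max over this ... is the way to find the largest number") is, in array-thinking terms, a single reduction: fold max over the array instead of writing an explicit search loop. APL writes it as a max-reduce; this is a hedged Python stand-in.

```python
# "Find the largest number" as one reduction, not a loop with a
# running comparison -- the array way of thinking Dan describes.
from functools import reduce

data = [3, 41, 7, 19]
print(reduce(max, data))  # 41
```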
01:04:24 [CH]
So this is something I want to follow up on. You've mentioned a couple of times (and when you've read as many APL papers as I have, it's not a coincidence that you hear the same thing a couple of times) that you need to think a certain way in order to really get APL. And I just finished both reading and podcastifying an Alan J. Perlis paper, the subtitle of which is "APL is more French than English". Inside of it, he talks about how some people don't realize that, in order to appreciate APL, you have to think a different way. He goes as far as saying (I think BASIC is the language he picks on) that if you've learned BASIC, you've contaminated your brain, and it's much harder to convince people of the value. You've mentioned you started on Fortran, and then you wrote the Multics APL in PL/1, which, not in that paper but in a different paper, I just learned was originally actually named the "New Programming Language", NPL. But then there was some lab that filed some claims saying you can't use NPL, so NPL was renamed PL/1, for "Programming Language One" (as a fun anecdote). But anyways, I'm interested to get your thoughts: having started in the land of Fortran and then, over your time working with PL/1, you seem to have grasped the value of APL. Yet you also were saying (I'm not sure if you were referring to APL) that's not the way your brain necessarily works. From your perspective, what was your experience when you landed on APL, having started in Fortran land?
01:05:59 [DB]
I learned something early on when I was at the University of Pennsylvania, working in the computer department at the Wharton School. The head of Wharton Computational Services showed me some stuff where, using trigonometry and geometry, he showed how correlation had to do with sines and cosines and stuff like that, and how this stuff all really had to do with geometric ways of looking at things. And that if you thought in a visual, geometric way, you could apply it in a numeric way and stuff like that. That really hit me, that this was a different way of looking at a problem. If you say: "oh, the arc tangent really is the same as this particular thing", and the correlation really is the same as a particular trigonometric thing. And his mind was very much visual that way, and he would use drawings of X-Y axes and stuff like that. And there are people who think in terms of (if you know the theory behind it) imaginary numbers (the square root of minus one and stuff like that); when you think that way and apply that thinking, you're able to solve all sorts of problems with Fourier transforms and whatever. When you're thinking in the frequency domain versus the amplitude domain, that opens up ways of thinking. The same thing about approaching things [and] thinking of them as arrays of numbers and operations on those, as opposed to thinking of things in a procedural, Fortran-like way. But then I got into assembly code. At MIT, if you're an electrical engineering/computer science major, you have to learn the hardware. I learned how transistors work at the molecular level and how to put them together into integrated circuits. We had to build things with integrated circuits all the way up. So when I programmed a PDP-8, I could imagine the hardware, and I knew what the hardware was doing [chuckles] for each instruction. So that's how I approached things. I was brought up a certain way.
We used a subset of PL/1 that was easier to implement, [which] turned out to be very close to C, and of course that version of PL/1 ended up being ... [sentence left incomplete]. The people who developed C had been exposed to that too, as well as to B before C. [08] Before C there was B, and BCPL and stuff like that. There are other languages that evolved. So yes, there are different ways of organizing it. I worked in the area of handwriting (not handwriting recognition as much as using ink on pages). I worked on that in the 1990s on some of the early pen computers, and then obviously in the 2000s (starting 2010) on the iPad. So I was involved in the 90s with note-taking. There was a thing called Day-Timer, which is a book that you would carry that had a page, or two pages, for every day. And there were places to put times on it. It was what we now think of as a calendar and whatever. But we worked with the Day-Timer Corporation because we were making an electronic version of the Day-Timer that used ink: handwriting with a pen on the computer screen. This person at Day-Timer used to watch people. They would go out to airline clubs, where the business people hang out before they get on the flight, and they'd stand at the phone booths and crane their necks, looking over [people's] shoulders to see what they were doing, what they were writing, and how they were using the Day-Timer. Some people wrote on the right page. Some people didn't. Some people drew their own lines. Some people used a line for this (Day-Timer learned to print with very thin, very light ink, so that you could either use the stuff they had or ignore it). And what this person said is: they think that people take what's in their head and write something on a piece of paper so that, when they look at it again in the future, it will give them back the thought that they started with.
They're basically taking their thinking and putting down something that represents their thinking in a visual way that will recreate it for them or for somebody else. If you think of what people do when they produce exhibits and stuff for a case in the business school, or for anything that explains something: the way they put the words and the numbers together, what they lay out, where they put sums and where they put extra lines and stuff (the order of the columns and all), that has to do with a way of thinking and how to approach a problem. The spreadsheet turned out to be aimed at people who think in terms of a piece of paper with numbers and words on it, organized generally in rows and columns, but not all homogeneous, and with other stuff around it that matters. That turned out to be a very popular and very powerful thing you can do for many things. But for other operations, if you're in other areas, you have to think differently. If you're approaching people who are, I guess, in weather, you probably have to think quite differently. If you're trying to forecast the weather, it's different thinking; and if you're thinking in terms of a time axis and all sorts of states, it's a different way of thinking. The time axis is not the same, you know, as there are real multiple possibilities and stuff like that. Spreadsheets only have one calculation at a time, though I had proposed (and we actually shipped at one point) something that allowed two values in each cell, called "by-values", so that you could show the previous one and this one, and you could show the difference, whatever. But different ways of thinking are important, and the more you can show how this way of thinking is useful for something, and much better (you don't want to be just twice as good, you want to be 10 times or a hundred times as good), the more people will switch to it. The spreadsheet wasn't just better than the calculator. It was a hundred times better for certain things.
Desktop publishing was a hundred times better than you typing it up [and] scribbling it, sending it out to the typesetting place that would typeset it and then send it back, and it cost a fortune. So the payback came the first time you used desktop publishing and paid for the Macintosh and the laser printer and the software, compared to sending it out to a graphic shop. So you want to think about what type of things your way (your tools' way) of approaching it makes so much easier to think about and solve. Not just because it's technically faster but because it makes the thinking ... [sentence left incomplete]. Back to the AI stuff of: are some of the people in China thinking differently and approaching the problems differently on certain things? And is that why they look like they're suddenly advancing without having the hardware? They're able to do it with the algorithms? That's the way computers have always been. So hopefully that's helpful.
01:14:28 [CH]
No, no, it echoes so much of the stuff that I think about on a day to day basis. And I know we've only got a couple minutes left but I did see Adám put his hand up. So hopefully we can ask and answer a question maybe in three minutes or so.
01:14:43 [DB]
I'll try to be short. Yes.
01:14:46 [AB]
Maybe I'm opening a giant can of worms here. You were clearly aware of and somewhat versed in APL when thinking up VisiCalc. But now [I] look at the reference card for it. The formula language looks very similar to what I've used in Excel and Google Sheets and other things like that.
01:15:08 [DB]
No, it's not a coincidence.
01:15:10 [AB]
No [laughs] I can understand that. But where did that come from? And it's only very recently that some spreadsheet applications have gained array comprehension, the ability to apply a single formula to multiple cells of data, which I'm surprised by, if this idea of working on collections of data was so much on your mind. So what happened?
01:15:34 [DB]
Wait, wait, VisiCalc ... to get it to do all the things that it did (it did split windows [where], if you scrolled one, the other scrolled along), the whole thing would run in 32K of memory. That includes the storage for the spreadsheet, the program, the operating system and the buffer for the screen. We had no space to do anything. When you hit the slash key to bring up [and] show you the characters you could type for a command, those characters that you saw (that list) were the same characters in memory that were used for the lookup of what you pressed the button for, to save memory. So we had to be very judicious. If you typed "s u m" for "sum", it knew that by the time you typed "s", "u", there was no other [option], so it would autocomplete at that point, because it had to be very, very efficient with stuff. We had to figure out what we could put in with what we had, and we had to throw out so many things that we could do. I couldn't do all of the operations, but like we said: "oh, we want to use this for taxes; well, we need a lookup function". So the lookup function went in because Bob wanted to do his taxes with this thing. "Net present value" went in as what it was, because I had learned that in school, and one of my professors who saw it early on said: "you should have net present value as one of the things that would be useful for people who do certain types of calculations." So it was based on what was needed at the time by the users that we were targeting, even though we could have put so much more in it. We envisioned all sorts [of things] with a mouse and this and graphing and you name it. We envisioned all that stuff, but you can't fit it into a machine, into a program that is smaller than the screenshot that you take today of our screen.
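The unique-prefix auto-completion Bricklin describes can be sketched in a few lines of Python. This is an illustration of the idea only, not VisiCalc's actual code, and the command list here is hypothetical rather than VisiCalc's real function set:

```python
def complete(prefix, commands):
    """Return the unique command starting with prefix, or None if ambiguous or absent."""
    matches = [c for c in commands if c.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

# Hypothetical function list for illustration -- not VisiCalc's actual one.
commands = ["sum", "sqrt", "lookup", "npv"]

print(complete("s", commands))   # None: "sum" and "sqrt" both match
print(complete("su", commands))  # sum: the prefix is now unambiguous
```

As soon as the typed prefix matches exactly one command, the program can fill in the rest, which is exactly the "by the time you typed s, u" behavior described above.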
01:17:35 [AB]
Okay. I understand. Yeah. So that also then lends some credence to the story I've heard [of] why all these things propagate: design decisions made many, many decades ago still propagate today, including that Microsoft's .NET uses integers for dates and they are off by one (those dates), meaning that the day number modulo 7 does not give you the day of the week as it should have, because Lotus 1-2-3 ... [sentence left incomplete]
01:18:08 [DB]
And why did it do that? Because Lotus came from us, and why did we do it? We decided day 1 would be the day that we incorporated our company, and we couldn't incorporate our company on a Sunday. So we incorporated on a Monday, and New Year's was on a Sunday or something like that. So then VisiCalc had a particular start date. I mean, there were things like that. Yes, I mean, you have to know the history about "why did you do such a dumb thing?" Well, you weren't there back then. We did not think that you'd be using this so far into the future.
01:18:53 [AB]
What I understood was that Lotus 1-2-3 had a simplified formula for leap years, such that it has a February 29th of year 1900, even though that date never happened, simply because they couldn't fit in the full formula.
01:19:13 [DB]
Right. We had to fit this thing in. When Jonathan Sachs wrote the program (and he was under great pressure to get it done real quick), that was just one minor little thing. And we're stuck with it; everything's like that. You're stuck with what you've done. If you want to reset it, it can be very expensive. Sometimes we have to, like in Y2K, we have to. But we're at our end of time, right?
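The quirk being discussed can be sketched in Python. Because the Lotus 1-2-3 lineage treats 1900 as a leap year, spreadsheet serial numbers count a February 29th, 1900 that never existed, so the usual conversion for real dates from March 1st, 1900 onward effectively uses December 30th, 1899 as day zero. This is a sketch of the arithmetic, not any product's actual source code:

```python
from datetime import date

def spreadsheet_serial(d):
    """Spreadsheet-style serial number for a real date on or after 1900-03-01.

    Nominally serial 1 = 1900-01-01, but because the Lotus lineage counts a
    nonexistent 1900-02-29, the effective epoch for all later real dates
    shifts back to 1899-12-30.
    """
    assert d >= date(1900, 3, 1)
    return (d - date(1899, 12, 30)).days

# 2024-01-01 was a Monday. Under the common spreadsheet weekday convention
# (1 = Sunday, ..., 7 = Saturday), serial % 7 == 2 means Monday: for modern
# dates the shifted epoch and the fake leap day happen to cancel out.
s = spreadsheet_serial(date(2024, 1, 1))
print(s, s % 7)  # 45292 2
```

So the naive "take the day number modulo 7" rule gives consistent weekdays for modern dates only because two historical off-by-ones cancel; for January and February 1900 the serials are genuinely wrong, and every system that inherited the format carries that along.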
01:19:37 [CH]
Yes. I'm just seeing that we're hitting the half past the hour. So I mean, thank you so much, Dan, for taking all this time to chat with us.
01:19:44 [DB]
My pleasure.
01:19:52 [CH]
I know I'm not the only one here in saying this has been an absolute blast, especially as a spreadsheet fan from my youth until today. But yeah, thank you so much for taking the time. And yeah, you never know; maybe in a year or two ([laughs] we've got some emojis floating up with some numbers) we'll maybe reach back out and see, because I still had a couple of questions, but it's been an absolute blast having you on.
01:20:10 [DB]
Thanks a lot. You know, let me know any feedback that you get from this and [I'd] be interested in it. So good luck.
01:20:18 [CH]
And before we say goodbye, if you want to ask any questions, comments, thoughts, you can contact us at ...
01:20:25 [BT]
Contact@ArrayCast.com [09] is the way to get in touch with us. And we've had some interesting responses this week, which have been very gratifying. Everybody does enjoy it. We had one response from a guy who goes to Simon Fraser University, which is where Conor and I both went to university, and he's listening (I think it was the first year episode) while he's walking around [campus]. He says it was quite a surreal experience. So thank you for all those kinds of responses. And you can get in touch with us at Contact@ArrayCast.com. And we look forward to hearing from you. And thanks as always to our transcribers: Adám, who does the initial [pass], and Sanjay, who I'm working with on the transcription. So you can let us know if we get something wrong. We're getting pretty good at it, but you never know; we work at it.
01:21:13 [CH]
And with that, we say Happy Array Programming.
01:21:15 [everyone]
Happy array programming, everyone.
[music]