Transcript
Transcript prepared by
Adám Brudzewsky, Bob Therriault and Sanjay Cherian
00:00:00 [Conor Hoekstra]
And Stephen has just posted Qsome-q.org. It's awesome and Stephen replaced an a with a q.
00:00:08 [Stephen Taylor]
No, that's just a typo. It's awesome-q.org.
00:00:13 [Marshall Lochbaum]
Oh, but it sounds like we have a new suggestion for you.
00:00:16 [CH]
I was going to say, do we leave that misspelling in or just say it's awesome-q.org? Awesome-q.org.
00:00:20 [Music]
00:00:32 [CH]
Welcome to episode 94 of Arraycast. I'm your host, Conor. And today with us, we have a special returning guest who you probably already spoiler have known by the title of this episode, but we will get to introducing him in a couple minutes. First, we're going to go around and do brief introductions. We'll start with Bob, then go to Stephen and finish with Adám and Marshall.
00:00:50 [Bob Therriault]
I'm Bob Therriault and I am a J-enthusiast.
00:00:56 [ST]
I'm Stephen Taylor, an APL and q enthusiast.
00:00:59 [Adám Brudzewsky]
I'm Adám Brudzewsky. I'm a professional APLer.
00:01:02 [ML]
I'm Marshall Lochbaum. I've worked with a few array languages. Now I do BQN and Singeli.
00:01:06 [CH]
And as mentioned before, my name is Conor, host of Arraycast and massive array language fan. I guess that brings us right into the announcements. [01] It's a fantastic time to be an array language programmer, and not because Advent of Code is the only reason to program in array languages. I think actually I might be the only one doing the Advent of Code problems right now, but it is an exciting time for lots of folks to learn new languages. And I guess the first announcement is that it is December 3rd when we're recording this; I think this will release on December 7th, plus or minus, depending on your time zone. And there is a cornucopia of resources. I know I've seen, Marshall, you actually have a live tracker, I think, of not everyone in the world, maybe everyone in the world?
00:01:55 [ML]
A manually updated live tracker here.
00:01:57 [CH]
Okay. I'll put it in the chat. But it is similar to the things in the past where it shows folks' progress.
00:02:04 [ML]
Yeah. So we do have a leaderboard for it that, I guess, I'll have in the show notes. But also, because a lot of people publish their code in repositories and the leaderboard wouldn't show you that, I have an SVG that shows which solutions different people have solved and links to all their repositories on GitHub or Codeberg or GitLab or wherever.
00:02:25 [CH]
It's a great resource, especially if you're just learning. Try the problems yourself, then you can go check out other solutions by potentially more experienced folks. I know Dzaima, creator of CBQN, definitely has some experience with the language.
00:02:39 [ML]
Yeah, so he's been working on a lot of really fast solutions that do all sorts of fancy flat array stuff.
00:02:49 [CH]
Yeah. I've seen he usually posts his original solution, then a cleaned up version, and then potentially a fast version. But yes, we have multiple people sharing links. I know, Bob, you've got some J Advent of Code resources for folks.
00:03:03 [BT]
Got some past ones, which are really interesting. Henry Rich has put together a page, a number of people actually have, called Share My Screen on the J wiki, and we'll put a link in. It's got past puzzles from Advent of Code, and the cool thing is they really go in depth on how they solved them. So it's actually a really good learning tool if you're looking to see how some of the J wizards approach things. And if you're not into Advent of Code, this is my counter Advent of Code announcement: there's a thing called December Adventure, which is more about having a side project that you work on all December, and then you post about it. So we'll put a link in for that if somebody doesn't want to do all the puzzles but wants to get something done. And it can be a group or it can be a single person, you can take your choice. Of course, with array languages, quite often it's a single person, because you don't need a lot of people to make a project. So that's my announcement.
00:03:57 [CH]
Link in the description. And I think Adám, you also have a compilation of Advent of Code progress to some extent.
00:04:05 [ML]
It's the people's compilation.
00:04:07 [AB]
Yeah, exactly. It's not me. The APL wiki has a page for Advent of Code. It's really easy to get to. APL.wiki/AOC will get you there. And people just write in their links to their solutions and videos and other things as they want to. You can also edit in links to other people's solutions, but that's mostly what happens there.
00:04:30 [CH]
Yeah. And I think it's also worth mentioning, I guess I didn't explicitly mention it, that I have been solving, and will try to solve as many days as possible until I get tired of the parsing and the complexity of the problems, with videos on my CodeReport YouTube channel. But probably the number one comment I get is, "Please solve this in Uiua as well." There is actually a YouTube channel called Tacit Sanctuary that has solved day one and day two. From the day two video, it sounded like he was only planning on doing day one and then decided to continue; I'm not sure if that was just the way it accidentally ended up sounding. But there are at least two videos as of this recording, and there might be more. And on top of my playlist of videos, I did see another one from the handle Tankor Smash, who I have seen in some of my live streams on YouTube. They have a YouTube channel as well and have solved day one. So there are a number of YouTube video resources where they walk you through their solutions. Tons and tons of resources. Check out the APL wiki. All the links will be in the description down below. Is that all we want to say about Advent of Code? We might be doing an episode, our next episode, with one or two guests. Stay tuned. So there'll be more Advent of Code content, if not talking about specific problems, then the topic in general. And I think that brings us to our second and final announcement, which is that one of my talks, from CppNorth, I believe, just came out in the last hour. I think Adám was the one that put that on my radar. I have been checking on a daily basis, because every day they release a new video around noon my time, and that video is out. I have yet to see it. But hopefully, if folks are interested in the first edition, which I gave a year or a year and a half ago, this is the second edition of that. Adám's got his hand up.
00:06:13 [AB]
The comments are turned off on that video. If people want to give you feedback or ask questions and so on about that talk, how should they get in touch with you?
00:06:22 [CH]
I mean, they can always just tweet or whatever the verb is on any of the tweeting platforms. There's like three of them now, or four if you include Threads. It's such a headache. Let's choose one, okay, folks? Let's choose one. I don't need another one of these platforms. But maybe I'll set up a GitHub discussion on one of my GitHub repos where, if folks want to leave comments, they can go there and post them as well. So yeah, look in the show notes as always. And with all of that out of the way, that brings us to a reintroduction of our second time guest, Ashok Reddy, who is the CEO of KX. I will point you back to, I believe it was, episode 45, back in early 2023. So it's been almost two years, I think, since we had you on last. And that was an awesome conversation where you shone a light on a ton of different industries that KX was being used in that I wasn't aware of. And we talked about a ton of stuff. But it's a bit fortuitous that the day we're having you back on for your second appearance coincides with a pretty big announcement from, I guess, First Derivative Technologies, or FD Technologies, and KX. I'll throw it over to you to do the full announcement and maybe to break it down. We will link in the show notes a couple of news articles, or maybe you can give us the preferred one that you would like people to read. I think I understand it, but I'll let you, as the CEO of KX, talk our listeners through what has happened at KX, and maybe we'll go from there.
00:07:51 [Ashok Reddy]
Yeah, no, interestingly, today, December 3rd, we just sent out a release saying that the sale of FD, which is our First Derivative consulting business, completed today. So KX is a standalone software company as of today. We tried to set it up last month, and it so happens that today is the day. What we announced is that KX is fully funded; we are keeping quite a bit of the capital from what we sold to invest in KX. So yeah, I mean, that's a big event. And Conor, as you know, 1993 is when KX started in Palo Alto, the same year as NVIDIA. So now we are back to being a standalone company.
00:08:42 [CH]
So yeah, this, I mean, it is very exciting news. So let me paraphrase what I think you said and what I've read, because I didn't actually understand this before. I always thought First Derivative had purchased KX, but it sounds like the First Derivative used colloquially in that sentence actually refers to the parent company, FD Technologies. And that is a company that operated with, I'm not sure if division is the correct word, but they had the consulting side of things, which was First Derivative, and then they had KX, which was a separate division. And it sounds like, from what you just said and from the releases, that FD Technologies, the parent company, has sold off the consulting side of the business. And they sold that to a company called EPAM, which I'm not super familiar with, but I think I've seen them sponsoring a bunch of software conferences. I'm not sure if they're a software conglomerate, but I've seen their banners at C++ conferences. And so that leaves FD Technologies with just the KX division. Is everything I just said there pretty much accurate, or did I miss some stuff?
00:09:49 [AR]
Yeah, no. FD as a short form was First Derivative, but when First Derivative acquired KX, the parent company group name became FD Technologies. First Derivative was the consulting business; MRP was another division we had, an account-based marketing software company; and KX was the software. FD Technologies invested in KX over the years, and about three or four years ago they acquired all of the shares from Arthur. We sold MRP earlier this year, and we still own some equity in the company which owns MRP, but the group company has now sold First Derivative for 230 million pounds. We'll buy back some of the shares, is what we announced, but the rest of the money we are using to fund KX, and we are at a hundred million dollars ARR. The investors want to make sure that we have enough capital, so we'll rename FD Technologies PLC to something with KX in the next quarter, and also change the ticker symbol from FDP to something with KX.
00:11:01 [CH]
Interesting. Okay, when we see the news release for the renaming of FD Technologies to KX, we'll try and mention something about that. And so what does this mean for the new KX-only company? Like, the First Derivative side that was sold off, were they doing KX consulting, or was the KX consulting happening within the KX division? Is it that you've stepped away from consulting, and it'll be other companies now, like EPAM, focusing on consulting while you focus on developing software? What is the primary focus of the KX-only company going forward?
00:11:36 [AR]
Yeah, good set of questions. I think the basic premise with FD Technologies was a business model question, right? As a consulting company, you want to maximize your consulting services revenue. KX being a software company means the business model changes: instead of driving as much consulting services as possible, a pure software company wants minimal services. We are keeping a small amount of services, where they're helping customers adopt and get value, build out the reference architectures, and make sure they get the best performance for different industries. But we are not trying to make money off services; it's not services for profit. We did have a set of consulting and implementation people who knew k and q and kdb+; that team was in First Derivative, and now that team has moved to EPAM. So they do more of the implementation and integration, and if you want to build something custom on top of our technology, they can do that. And one thing that changes now is that First Derivative was primarily in capital markets; their expertise was in capital markets. Whereas EPAM, to your question: first of all, they are Philadelphia based. They've got quite a few employees, more than 55,000 worldwide. They're very well known, and many companies use them for developing products, so they can help ISVs and others building products. In typical IT, they do that. And they are also in many industries: life sciences, retail, energy trading. So we are already working with them, which I'm excited about, and not only in capital markets; they're actually investing in a KX practice so they can take our technology. They are pretty big in AI, so we are starting to build out AI use cases and working with customers. So anyway, EPAM will have a KX practice, but not just in capital markets. We're going to expand into other domains, which is also exciting.
00:13:50 [CH]
That's super curious, actually, because there's a parallel with the very first company I worked at, an insurance software company called Axis. [02] And they famously didn't have consulting services. It was, I wouldn't say a debate, but it always came up, like every quarterly meeting. All the consulting basically happened with either consulting firms or accounting firms with consulting divisions, and those companies and teams were making a ton of money consulting. But the partners that owned this insurance software company were adamant that they did not want to have in-house consulting, because their passion was the software product. And they said, as soon as we have consulting, we're going to have a ton of our developers realize, oh, I can go and do customer facing stuff, et cetera, et cetera. And it's not that either business model is right or wrong, but the partners really believed: we have great relationships with all these other companies that are making money off of our product, and they can focus on developing consulting expertise, and we can focus on really good engineering and building up features and stuff. Like I said, it's not that one is right and the other is wrong; there are just two different business model philosophies of what you should focus on. So it sounds like KX is kind of focusing on that model of focusing on the product, and there will be consulting services that you can hire. And it does sound like you still are keeping on some staff. I think Axis had the same thing, that we had client support, which wasn't consulting; we didn't actually charge people for it, but it was like, if you need help, you can call us and we'll answer your questions. But if you need someone to come and build out a full model for a team of 20 or 30 people, that's when you go and hire the consultants at the partner firms.
00:15:35 [AR]
Yeah, absolutely. And the other thing is, you know, we are kind of creating the ecosystem. I saw earlier in one of the episodes you had Data Intellect, for example; they are good partners of ours, and they partner with us on certain things. So we are developing other partners, like Slalom and some of the KPMGs of the world, and others are helping build practices around it. So we will have a broader ecosystem, but the people who know k and q are primarily in Data Intellect and First Derivative right now.
00:16:15 [CH]
Interesting. Yeah. We had, I think it was Jonny on, right? I can't remember how many episodes ago that was. He's an amazing individual to chat with. And I was about to say, I hope I get to meet him at a future KXCon. What is the future of KXCon? Are there going to be potential ones with the new KX company, or is that less of a, you know, what do you call it, top priority, given that you've gone through a company restructuring?
00:16:44 [AR]
We actually are going to expand on it, right? When we talk about KXCon, it had happened just once in seven years; when I did it last year, it was after seven years. But since then, what we have done is really expand the community outreach. We do a lot more meetups. In fact, in the last two weeks I was in London; NVIDIA and Dell sponsored our London meetup, and we had more than 140 customers show up in New York on Wall Street. So we are doing those meetups. But where we are going is launching KXConnect, which is a broader, KXCon type thing. We are planning for a big event, most likely in Miami next year in the US, and one in Europe. That will be more like a typical user conference, where a lot of the customers will present and partners will be there. That's something we are branding, and we'll come out with that. So we're working on that broader event, but we'll do more. We just have a meetup going on right now in Hong Kong. In every city, customers have been saying, people like Nick Psaris and others said, we need more meetups of the developers and quants, so we're doing more and more of them. And we are into aerospace and defense big time; we're doing a lot of meetups in that community, because it's important to be in defense and, you know, the Air Force and those types of things. And then we've got semiconductors. So I definitely think that with the funding we're getting, we are able to invest more in that context. The KXCon, you know, was good for existing customers; we brought them together. When I joined, it had not been done for six or seven years, and we did it first, but now we've expanded beyond that. And thanks to NVIDIA, we ended up getting them sponsoring our meetups also, which is really good.
00:18:36 [CH]
Oh, I did not know that. So it sounds like you've got two potential events, the KXConnects, happening stateside and in Europe, and you just mentioned a ton of cities: London, Hong Kong. If folks are interested in getting info on these, is it on the KX website, or is it on Meetup or LinkedIn, or all the places?
00:18:53 [AR]
Yeah, on KX.com there is an events page with all the upcoming events. And you know, the one we had in London was at King's Cross, the famous train station, in the hotel where the Harry Potter movie was shot. And here we were on Wall Street, in one of the most famous buildings. So yeah, we are doing more events. The New York Stock Exchange is hosting an event coming up next week, right in the New York Stock Exchange; again, they're sponsoring a KX event. So we have that listed. I can send you a link to all the events which are coming up.
00:19:34 [CH]
Awesome. Okay. Well, before we move on to the next round of questions about what's been happening, because this is big news, but it's also been two years since you've been on, I'll open it up, because I feel like I've been hogging the questions, and I saw Stephen's hand pop up. So over to you, Stephen.
00:19:50 [ST]
Hello again, it's Stephen.
00:19:55 [AR]
Hey Stephen.
00:19:56 [ST]
KDB's unique selling point was for a long time performance: being able to do what it does faster than everybody else. And its customers for a long time were primarily, as you said, in capital markets, because it's a field where performance is not just generally good; it enables you to win on the trades, deal by deal. You spoke about moving out into other fields, and I'm wondering if you're finding fields where your customers have an incentive to pay a premium for that kind of performance.
00:20:33 [AR]
Yeah. No, I think, you know, from capital markets where we started, we continue to see that performance, or price to performance, is very important. And then having the flexibility, because part of the advantage is not just the speed, as you know. It's about what Nick and others were talking about: being able to query and ask questions at the speed of thought. You're limited by your imagination in terms of what you can ask. So that flexibility is what enabled us. But then you add the challenges around accessibility, and ease of use has always been our challenge, because you had to add a lot of services to make that work and get the performance. What we have been able to do in the last year, and you know, even in the publicly reported numbers, is productize a lot of these things: the workflows, the infrastructure management of it. So people can get to a tick architecture with high performance on AWS; AWS built something called FinSpace. You're able to get to a tick architecture without having to know all the things, and you can add additional clusters as you need, and it can scale and acquire resources in the cloud. To me, that is the transformation we are going through and what we are seeing. So customers are willing to pay; it's not necessarily just the premium, it's more about: can I get that price to performance without needing a lot of people and skills? Because there were two things which limited our growth in the past. One was that while we were known for high performance and all these things, you needed people with kdb+ and q skills, which are at a premium, and not many people are learning them in the ecosystem. And the second part is the whole non-functional requirements and infrastructure management. Even if you knew q or k, you still had to figure out how to manage the database: how do I create high availability, how do I manage the memory, is it in memory or is it stored? So you had to have experts who would come in and build things, and people like Data Intellect had things like TorQ, an open source framework, but you still had to know how to prioritize for performance. What we have been able to do is take some of this stuff so you can instantiate it, whether on AWS FinSpace or Azure, or with OpenShift on-prem, with a set of reference architectures out of the box that get that sort of performance. If I want to do back testing and come in and test multiple models, for example, we have something out of the box. And then we are trying to remove the latency, because a lot of times what happens is the workflow, it's like high performance people ... one of the customers said recently, "Oh, we now are getting a tier two bank," which before wouldn't have considered KX, and they went with us. And now they're saying, "Well, we got a Ferrari without having to know how to drive a Ferrari," and being able to maintain that sort of thing. So basically, to me, they are getting the value of something which, in the past, required lots of people who knew a lot of infrastructure; people have spent millions of dollars to build the infrastructure around KX. That's changing. So that is about democratizing it beyond the core tier one banks.
But then the tier one hedge funds and others continue to ... at KXCon, and you saw the meetup we did in New York about a month ago, Citadel sponsored it. They are willing to pay and do whatever it takes, because they believe KX technology helps differentiate them. They basically want the best performance. In their case, it's like a Formula One car: they want the best performance, and every little thing gets them an edge. So there is a class of tier one companies who are willing to pay a premium to get that. But then you have tier twos and tier threes who just want end-to-end workflows. They want to do things in a way where they don't need a lot of people with q skills, so they can start with PyKX. I know Conor McCarthy was on one of the calls here, and PyKX has become very popular with all these companies, where they're able to take advantage of Python skills without knowing q. It doesn't mean that you don't need q; it's just that most people can get started with PyKX on AWS or Azure. And they're still willing to pay, but the price to performance has to be good enough; you can't charge as much of a premium. Can they get the value?
00:25:16 [ST]
This sounds like very good news for KX and for its customers and sad news for q enthusiasts.
00:25:24 [AR]
Well, actually, I would say it's the opposite, because we have invested more and more in q, because at the end of the day, the performance and everything comes from what q is, and it's based on k. So we actually have done a lot more, and that's part of what I can share about the things we are working on. When I first came in, we talked about investing in some of these things. So we did focus on making q easier to use, through integrating with Visual Studio Code, for example. There's complete debugging; you don't need to worry about anything, you can just download it, it's built in. Microsoft worked with us to build that. So you have a first class integration; it's the most open source, easy to use IDE right now for q. Part of the challenge we have had is that we build different products within KX with different branding. We have really good SQL [03] support and Postgres support, but we packaged it as Insights Core; it's kdb+ with all these other things. So the feeling people get is that kdb+ has not evolved, but really it has; it's just that the way we branded and packaged it is different. Insights Enterprise does certain things at that scale, and then we've got microservices, so you can use kdb+ on any cloud. Those things weren't seen as kdb+, but it's all based on q, and we keep building on it. I'll give you an example. We just did KDB.AI, which was focused on unstructured data. Basically, we are creating embeddings from unstructured data, and really it's q and kdb+ doing the processing at the end of the day; the performance comes from that. But then people had to figure out how to use unstructured and structured data together. So we built a new q based algorithm, what we call temporal IQ. It can actually apply generative AI, deep learning techniques, not natural language, on the structured time series data to find patterns. Then you can go back and look in time and apply that as new data comes in, and you're finding by meaning; you can now search semantically, by meaning, even without converting to vector embeddings. It's all because of q and kdb+. What we are trying to do is write that once, and then somebody can use it as an API, or expose it to PyKX or KDB.AI. So it opens it up to quants and data scientists and others who don't have to learn q in that case. It doesn't mean the core q users aren't getting access to it. People were asking, give me access to KDB.AI, and the number one request was, can I use q with KDB.AI? So now we have made that happen. Initially we thought we should just support Python, and you know, everybody talks about all these other tools, where people use languages they know. So we have made it so you get q support in pretty much all our products, and you can use q with Visual Studio Code. And we are doing more and more to document it better and open it up. Some of the things we are looking at are going to make it much easier for more people to adopt q.
00:28:41 [BT]
So if I've got that right, you're using q as your foundational language to develop these tools but you're also making [it] accessible through Python to people who may not know q. Is there an advantage to knowing q? If we take the automobile analogy, you've got a hot rod in the garage; you want to go out and tinker with it yourself. Is there an advantage to that?
00:29:07 [AR]
Yeah, I think there is. We all know that q is the true array based [language], and it's in memory, and it is [a] functional programming language, and it can bring in streaming and all those things. So when you want to write new functions, you use q, right? What we are enabling is Python as an API which connects to it. The latest version of PyKX, for example, uses a shared memory concept; there's a zero copy feature. Basically, everything we do is q based, and then when you write something in Python (PyKX or whatever), you're transferring ... [sentence left incomplete]. You can take a NumPy array and use that data frame as a q data frame. But we do that as zero copy, because you're doing it in memory, pointing at the same memory; you're not creating a separate thing and then transferring it. So PyKX becomes a way in for somebody who is in the front office or somebody who's doing trading. They won't learn q in that case, because they don't need to, right? But people who are in the data engineering group, or people who are building machine learning libraries as a service internally, may use q, because that's really what we want; it's the best language. It's a vectorized programming language. That's where we want to work. We know the most downloads we get for q and kdb+ are [from] the top universities: Cambridge, Princeton, MIT, Stanford. I look at the downloads we get every week. Those are the PhDs in math; they download [it]. We have to make it easier, get people to use it more, and then make the libraries. We have the Fusion libraries [on] GitHub, all open source, but we didn't make it easy for people to use those things and get the additional things built in. There is a lot of work we're doing now. That's why I'm excited: as a product company, we'll be able to contribute more to open source and build out capabilities that aren't really separate. There's structured data in kdb+ today, but we have added unstructured; PyKX as a layer for Python; there's Postgres support for connecting to anything with Postgres and SQL. And then we want people to get access to it. We have a free version, which is mostly what we call a personal edition, and one of the things I hear from a lot of our customers, part of the feedback we've gotten, is that you really want people to be able to use this for commercial things up to a certain point, like a community edition or a free edition up to some point. That's something we are working on, and we'll hopefully be able to announce it and come back here on this call, or have one of our leaders announce it. I would definitely say that's coming.
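[For readers who want to try what Ashok describes here, a minimal sketch of a PyKX round trip follows. It assumes pykx is installed (pip install pykx) and licensed; whether a particular conversion is actually zero copy depends on the data type and PyKX version, so treat this as illustrative rather than a statement of KX's internals.]

    import numpy as np
    import pykx as kx

    a = np.arange(1_000_000, dtype=np.float64)  # a plain NumPy vector

    v = kx.toq(a)             # NumPy array -> q float vector
    total = kx.q('sum', v)    # apply a q primitive to it from Python
    back = v.np()             # q vector -> NumPy array again

    print(total)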
00:32:10 [BT]
So if there's not a difference, or not as much of a difference, between using Python and accessing through PyKX compared to accessing through q, what are the advantages to knowing q? I mean, there's the beauty of the language, we all know that, and the flexibility of being able to change what you're doing quite quickly. But are there actually advantages to where q fits into the infrastructure of kx?
00:32:34 [AR]
Yeah, q [is] sort of the most efficient, simple way of doing things; it's a declarative language. You're actually able to use q much more succinctly to frame the questions. It's just like prompt engineering, right? Whereas in Python and others ... [sentence left incomplete]. One of the requests we get is: "can I use q with LLMs?", right? Because more and more people want to do natural language. But the reality is, if you actually look at the q syntax, it's almost very simple. It's declarative; it's focusing on what you want from a user perspective, not how. Whereas [with] a lot of other languages, the one line of what I want to do in q, you have to write 10 lines of code for in any other language, right? Typically, that's how it is. So to some extent, people who like efficiency and simplicity and being declarative would prefer q. It has the advantage of [being] functional, and you can define functions, for example. That is key. And then I think with AI, the beauty of this (Jensen talks about this from NVIDIA, but I feel like we are seeing the same thing) is that most people use kx and kdb and q to write rule based things, right? You started off by saying: "what should be my rule for beating the market?", based on some sort of a rule, and they back-tested it to see if it would work and then implemented it. Then we implemented machine learning based things, to use historical data to find patterns and rules to run in the trading algorithms. Those are functions [that] we built in q, but now you can actually learn. It's getting closer to being a general function approximation, right? That's what deep learning basically is. Just like CUDA started off abstracting graphics, but really the brilliance of what NVIDIA did was create a general function approximator, and you could run AI workloads through the same thing. So we are basically enabling that parallelization and being able to take full advantage of vectors. You could do that with PyKX, but you'd have to use NumPy [04] and Pandas and others, [but] they're not the most efficient. The good thing is, if you actually look at it, people have copied this; NumPy source code gives credit to kdb and q for some of the things. Each one has implemented parts of it. If you want to bring unstructured, structured, historical, and streaming together, and then want to do it the most efficient way, fundamentally kdb+/q is still less than one megabyte ... [chuckles] the core engine, right? That, to me, is the efficiency, if somebody wants to really get that sort of performance. You'll hear that a lot of our customers want to take advantage of that sort of efficiency.
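[To make the "one line of q" point concrete, here is an illustrative comparison, using made-up table and column names: an average-price-by-symbol query written in pandas and as a single declarative qSQL line run through PyKX.]

    import pandas as pd
    import pykx as kx

    trades = pd.DataFrame({
        'sym':   ['AAPL', 'AAPL', 'MSFT', 'MSFT'],
        'price': [101.0, 103.0, 250.0, 252.0],
    })

    # pandas: group, aggregate, flatten the index, rename
    avg_pd = (trades.groupby('sym')['price']
                    .mean()
                    .reset_index()
                    .rename(columns={'price': 'avgPrice'}))

    # q: the whole query is one declarative line
    avg_q = kx.q('{select avgPrice: avg price by sym from x}', kx.toq(trades))
    print(avg_q)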
00:35:31 [BT]
So at the risk of scooping Marshall's claims for BQN, it sounds like you could change your hot rod into a rocket ship in your backyard, because you have access to those things. It's interesting you're mentioning the LLMs and AI, because one of the things that challenges people now with them is that it's hard to understand how those decisions are being made, in the way that the LLMs are being created. You get this thing that can spout Dickensian prose at you, but nobody's quite sure how it actually does that [chuckles]. It sounds like if you're using Python, you've put an extra layer of obfuscation over top of it; it's going to be even harder to understand how those models are working. Does q give you a bit more insight? It sounds like it might give you a bit more insight into the model, or maybe the foundations of the model.
00:36:28 [AR]
Yeah, I think there are two parts to it, right? One of the challenges you're pointing out is that with deep learning and natural language processing, [at] the end of the day, it's a prediction engine; it's trying to predict the next token or whatever. The problem is interpretability and how to explain how you came up with something; that's the reason why capital markets and our customers are not using it. Because in a regulated industry, you can't [use something if you] can't explain how you got to it. That's still based more on unstructured sentiments and other things, like earnings reports. But the other part of it is, if I have structured time based data, LLMs are not processing that and keeping a state; an LLM doesn't understand the state and the time aspects of it. In order to really have cause versus correlation, you'll have to bring that in. What we are finding is that with q and what we can process, we can audit everything (every query) and what you used to train the model, right? That's one way of building that causal thing. More than that, on the structured side, we are finding ... [sentence left incomplete]. I mean, this is public information: NVIDIA and we did a joint thing with Citi's Peter Dekrem, and he presented this case study. He showed LLMs won't work in the context of structured data; like, if I'm doing market making and I want to create liquidity for a hundred million dollars worth of stocks, bonds, or whatever. So he was trying different models (XGBoost and all the ML algorithms). You can use [an LLM] for creating synthetic data and trying out different scenarios. But you're using these new SSM models, state space models, called Mamba or Mamba2. Those are better for this sort of explainability and interpretability, because you are able to train the models and actually tell how you made a decision, and it incorporates the time aspects, because you're able to use time. That's the important thing for the type of use cases we serve: you need to know what happened when, and the dependency of events; complex event processing. So we feel you need to combine state-based models like Mamba with something like an LLM. People call [it] "quantimental", right? There's quant analysis and then there's fundamental analysis. There are lots of people who think about fundamentals: how do I evaluate companies, stocks, and other things. Then there are people who use quantitative analysis. Typically kdb [and] q are used for the quantitative [side]. Now, by converting all the fundamental [data] (documents, satellite images, the quality of the management, earnings reports, sentiments) you're able to bring it all together. At the end of the day, you're making it automatic, right? That's what we are very good at. We convert everything into vector embeddings on the unstructured side, and then we can bring it together. That's what q enables at the end of the day. It's the world's fastest because it can do that. Everything is vectorized; you're processing simultaneously, and you've got functions which operate on whole vectors as single operations. We used to take advantage of CPUs, and now we've started to take advantage of GPUs, and that is giving us some really great use cases which we can unlock.
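[A generic sketch of the "search by meaning" idea mentioned above: documents become embedding vectors, and a query is matched by cosine similarity. This is plain NumPy for illustration, with random stand-in embeddings; it is not KDB.AI's actual API.]

    import numpy as np

    rng = np.random.default_rng(0)
    docs = rng.random((1000, 384), dtype=np.float32)     # 1000 document embeddings
    docs /= np.linalg.norm(docs, axis=1, keepdims=True)  # unit-normalize each row

    query = rng.random(384, dtype=np.float32)            # embedding of the question
    query /= np.linalg.norm(query)

    scores = docs @ query             # cosine similarity via dot product
    top5 = np.argsort(-scores)[:5]    # indices of the five closest documents
    print(top5, scores[top5])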
00:39:59 [BT]
And I should wrap up by telling people that your academic background is actually machine learning. So this [laughs] is right down your alley.
00:40:08 [AR]
Yeah, I went back to school just a few years back, to Georgia Tech, and I got a master's in AI and machine learning and computer science. That kind of [got] me grounded before all this generative AI. People talk about AI as generative AI, but at the end of the day, I think we are all applying computational statistics to solve these problems. That's what I kind of figured out when I went back. It was good to do that. It's hard when you're full time on a job, but it gave me a much deeper understanding of what is possible, what you can and cannot do. When we look at solving problems, we kind of figure out what's, you know ... [sentence left incomplete]. When people say it's AGI [laughs], you can't just take an LLM and say AGI is here; you have to start thinking about what it means and how it manifests in what type of problems we can solve.
00:40:58 [BT]
And I'm in Conor's position of I feel like I've been hogging the spotlight here [chuckles] so I'll turn it over to anybody else who wants to speak.
00:41:05 [ST]
What you said about the universities sounded really interesting to me. Most people who encounter q come across it as the language they need to use to get data in and out of kdb datasets and to work with it. But I came to love it as it became my go-to language just for getting stuff done! It is like the difference between walking into woodlands with an axe and walking in with a chainsaw. It's the tool I reach for first, and I wonder, given what you're saying about the downloads from the universities, do you have ambitions for the language as a general purpose programming language?
00:41:48 [AR]
Yeah, I think it's a good thought, because one of the things we are doing is ... [sentence left incomplete]. As much as we were domain specific (in the sense people think about time series, or, you can say, we are mostly in capital markets), we have been expanding. We have two areas we are focused on. One is aerospace and defense, primarily around situational awareness and all the things we can solve with similar capabilities, and it uses q in this case, right? We announced the collaboration with Lockheed Martin's Skunk Works. They do a lot of the innovation, and if you think about planes like the F-35, they are stealth; think about the amount of data they process while maintaining situational awareness. That actually gives us confidence that it's not as much of a niche: not capital markets only, time series, high frequency [uses]. So you have a class of use cases we cannot even talk about. We are working with not only the defense and aerospace [industries] but space, and we have quite a few customers in that area. And then the other one is semiconductors, for high tech manufacturing. Pretty much most of the fabs in the world are starting to use kx to do predictive maintenance; there are 15 million sensor readings per second and things like that. When you say general purpose, one of the things I'm hoping [for] when we get to this general function approximation: my top request is, can we have a q-LLM or q-GPT [chuckles]? ... if you can make that work generally. Microsoft also built a Copilot [05] for us to do just that; the Copilot they have is about summarizing documents and other things, but anything which comes down to processing data and making it much more [useful is] the value of q. So they have built that working with us, and now we are looking for customers to work with us. That shows us that if you become part of the Copilot and Copilot Studio, you can go more general purpose. I don't know [if] that answers the question, but generally, I think we have some work to do if we want to create that sort of LLM, because you need really good sample code and examples to train on. And there's an architecture, right? Kdb is not just q as a language; there's an architecture which goes along with it. You have to make sure it's optimized and takes advantage of that, and that's the part we'll have to train, and make it easy for somebody to use it for use cases it has not been used for before.
00:44:57 [ST]
You mentioned earlier the personal edition, which of course allows anybody to download and use the language and learn it. You, I think, also mentioned a community edition. I wondered if you were aware of Dyalog's licensing model. They have the same issue of a proprietary language and wanting to spread the usage. I would roughly approximate the licensing model as: "you're welcome to use this to make money but if you make any substantial money, we want some". Adám might be able to enlarge a bit on that [laughs].
00:45:35 [AB]
We have a bunch of different models, but we now have a default one, which I think is a great thing. You don't have to worry about the license; you can just download and go. And I think it's: if you make more than 5,000 euros in a year, then we want 2% of anything you earn above those 5,000, unless you negotiate some other deal. So it works. You don't even need to contact us before you begin anything.
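[Worked example of that default: someone earning 8,000 euros in a year from their APL-based product would owe 2% of the 3,000 euros above the threshold, i.e. 60 euros.]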
00:46:05 [AR]
No, I agree. It makes sense. I mean, we definitely are looking at multiple models. The goal is we want more people in more industries to adopt our technology, and as I said, today gives us even more opportunity to do this. We need to make sure that the documentation, the user experience, all those things are also available to people who have not used it before, and for the use cases where it hasn't been used. That's part of the reason why I'm not announcing it yet, because I wanted to make sure that we have all those pieces together. So teams are working on it, and if there are companies and customers who want to work with us and share their ideas, we would love to get that. I think there's a kxinsiders@kx.com; just an email, [and] we'll definitely incorporate that, or you can contact me or my team. We are very keen on making sure that we offer something which is ... [sentence left incomplete]. We want to get people to use it. And a lot of industries, especially like brain computer interfaces ... [sentence left incomplete]. Think about what's happening with Elon Musk's Neuralink; there are other companies doing this too. If you actually go and study how they do it, it's processing. They actually have array engineers [chuckles]. They've got arrays they insert into your ... [sentence left incomplete]. Basically, they have to collect massive amounts of data as arrays, from the neuroscience perspective, and then you're able to understand what you're thinking. 60 Minutes had an episode around how people are using that to understand what's happening. To me, those are things where I think we can help and should be used. But because people don't understand the language and the applicability of this, we are not always considered. It would be [the] perfect set of technology for use cases which we can support.
00:48:08 [BT]
One of the things, whenever you're working with defense industries and other areas, is ethical concerns. I understand you guys are making a tool, and how people use the tool is up to them. But are there any thoughts of trying to be more focused on expanding into areas like energy balancing, those kinds of things for infrastructure? Areas in climate science and things [chuckles] where there's definitely a need, and it's an existential need, actually, I think, for our planet (I happen to believe that; maybe not everybody does). Do you have a sense that it's worthwhile to all of us for you to approach groups like that and make them aware of what can be done? Because you've got a tool, but this tool could be really useful in some really important challenges that we face as a species.
00:49:01 [AR]
Yeah, I think Bob, you made a good point. Definitely we want to work on that. EPAM, for example; we are talking to them about some of the things they're doing with the World Bank and the United Nations around ... [sentence left incomplete]. So we are looking to partner with some of these in terms of solving, whether it's weather or water or energy problems. We have a couple of use cases where, [for instance], the entire Polish grid, the whole country of Poland, runs on kx technology. [In] Finland, we monitor smart meters. We do energy, and it can be used in different aspects. We have [a] water use case. We're working with one of the big satellite providers; we process so many satellite images. So that's definitely what we want to do. Also, the fact that we're doing something with NVIDIA on the GPUs: they have a similar ambition, solving impossible things [and] making them possible, right? That's our mission also [at] kx; I kind of talk about that. We want to solve things which are impossible for most others. We feel challenged; we want to do that, right? That to me is where we want to go. And I think defense to me is more (aerospace and defense) really ... [sentence left incomplete]. While we help protect lives, at the end of the day, we're also leveraging technologies which came out of DARPA and all those things. At the end of the day, the research money, what we've gotten, Internet technologies, everything came out of that. We use those technologies, but we also work with our governments and do things to protect all this. Generally, it's about making sure that we can solve the problems which are impossible, and partner with the right partners to do that.
00:50:50 [BT]
Yeah, I'm just thinking that, [in] the same way that you've developed tools to access Python for the financial industries (FinTech and such), whether you've looked into areas where there's ... [sentence left incomplete]. I heard recently that one of the big challenges with climate is that a lot of their big computers (their supercomputers) are now working on weather (which is a day to day thing), but it can take them months to get the same information about a change in climate, because that timescale doesn't work for them. Is there thought about maybe working in those areas where there's an opportunity? I'm sure it would be [chuckles] lucrative, but on top of that, I think it would make a much bigger difference to the way we live our lives, in addition to the financial rewards.
00:51:37 [AR]
Yes. Now, I think we want to, as I said. One is we're working now with academic licenses and others; we're providing [it] pretty much to all the universities for free, and people can use that for research. Your point is right on; I think we'll definitely take that point. Basically, we want to figure out ways we can help with more of those sorts of things. It's not just commercial; I mean, we do want to help with that. And so if there are companies or agencies or whatever, if you are aware, let us know also, but I think that's definitely something which we would work on.
00:52:10 [BT]
Yeah, because to me, there are so many more things than financial. That's what this tool has been used for, but as you point out, with the energy grids for Finland and Poland, there can be some economies realized there that actually impact everybody's lives in that country in a positive way. And I would guess, because of the ability to adjust the grid, such things as bringing on renewable energy are much easier to do than with the traditional waterfall methods of software development, which can take a long time to adjust.
00:52:45 [AR]
That makes sense.
00:52:46 [CH]
Okay! I have a whole bunch of small questions, clarifications of things I had follow-ups on, but I didn't want to interrupt the flow of conversation. But now my opportunity has arisen and I've snatched it [laughs]. So this is going to kind of skip around a bit, because these have all been building up in a list in a VS Code file. You mentioned that you partnered with Microsoft (or maybe I misheard this, so just correct anything I say that's wrong) to build like an LLM or GPT that sounded like it was either based on q code, to generate q suggestions, or maybe it wasn't that at all and it was just trained on documents and you were looking to take some ... [sentence left incomplete]. Can you clarify what the partnership was, because it sounded interesting and I didn't fully understand what it was.
00:53:33 [AR]
Yeah, actually, the partnership is around Microsoft Copilot and now what they call Copilot Studio. They built this Copilot engine, which basically connects ... [sentence left incomplete]. If you're [using] Microsoft Teams, for example, you can use a natural language way of interfacing with, usually, Office 365 documents and other things, to summarize [them]. But within that context, they built this [to] basically allow them to connect to anything with kdb and q based things, where people can ask something (whether you're in capital markets or healthcare or whatever), anything with data related [to] high frequency time series or whatever, and you're able to interact with it [using] natural language. But underneath, we have to change that to q, in terms of how it gets manifested in the queries and how they get the answers. So that's what I meant. In fact, there was the Ignite conference a week ago; Microsoft, our partner, demoed that. And what I was saying is that we are looking for customers who can bring massive amounts of structured data and combine that with LLMs, to make better predictions or more accurate answers based on what people ask in natural language. That's where I was saying that, based on the type of things they have, we have to figure out a way to generate q code to do more things, versus the set of queries and other things which work today, which were based on what natural language mappings you can do to the types of things people ask. But if you want it to be more general purpose, you'll almost need something more like an LLM for q.
00:55:16 [CH]
I see. So I can paraphrase and then you can correct me again. It sounds like this is a product that Microsoft has released under their Copilot suite of things, and part of this is that you can feed it a corpus of corporate or organization specific stuff, and it gets trained on that and gives you answers. And you're saying that you're looking to do the same kind of thing, but specifically for q customers, whether it's the type of financial data they're interested in if they're on Wall Street, or q specific commands or SQL stuff. You're hoping that you'll have some kind of q tailored Copilot version that customers can use in the future. Is that accurate?
00:56:01 [AR]
Yeah. Basically, what Microsoft has built is a kx add-on, a plugin, to their Copilot, right? It works with any of the kx customers' data, which tends to be, at the end of the day, in kdb+ and q, but may get surfaced through [our] different product lines, like Insights and Insights Enterprise. Because there is entitlement, and people want to know, because in [the] trading community, you can't ask a question and ... [sentence left incomplete]. You need to know who is asking, and you don't want to give them insider trading information or whatever, so it has to be encrypted. There are some things we have done to make it work. It doesn't directly get to the database, but it's a kx plugin, or kx for Copilot, basically, [that] Microsoft has built. They built this, and we gave them some help to do that, and now we are looking for companies to use Microsoft's Copilot with kx using this integration.
00:57:00 [CH]
I see. So it actually already has been built. This is, like you said, an add-on or a plugin to the Copilot model.
00:57:05 [AR]
It has been built. We are working with a few customers to prove it out and get people to use it. So that's where I was saying: for people who are using Microsoft Copilot, Microsoft has built into their Copilot Studio an integration with kx technology and the data.
00:57:23 [CH]
Yeah, that's really interesting. I mean, I do have a theory that new languages in the future, and existing ones for that matter, are all going to come with their own tailored language LLM models. I think I've mentioned on this podcast before that I've created a simple grep command on the command line to just pull down all of Marshall's docs and then copy and paste, give that to an LLM, either ChatGPT [06] (or I also have access to the Anthropic models through Perplexity.ai). And then I'll just paste it all, because it allows you such a massive token size, and then I'll ask a simple question at the bottom, and I'm like: "the answer might be up here [chuckles]; can you tell me how to do this?" And a couple of times it's actually worked. Whereas if you try to Google that, because BQN is so new, you're not going to figure out, via the Google search engine, how to use some kind of modifier the correct way. So that's cool that you've built that, and I'm sure it'll be useful to some of your customers.
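[A rough sketch of the trick Conor describes. He mentions a grep one-liner; this Python version, the local docs path, and the sample question are illustrative stand-ins, not his actual script.]

    from pathlib import Path

    docs_dir = Path('BQN/docs')  # hypothetical local clone of the BQN docs
    corpus = '\n\n'.join(p.read_text() for p in sorted(docs_dir.glob('*.md')))

    question = 'How do I apply a function under a reshape?'
    prompt = f'{corpus}\n\nThe answer might be up here. {question}'
    print(f'{len(prompt)} characters to paste into the model')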
00:58:24 [CH]
The second question: you mentioned at some point (and maybe I should know this, seeing that I work for NVIDIA) that you had built some kind of tooling to give, I'm not sure if it was q users or kdb+ users, access to GPUs. Is that something that's been built into the q executable? Can you speak more to the ability of users of the KX technologies to offload to GPUs, or is that in its infancy, or might I have misheard? I heard the words q plus GPU, or kdb+ plus GPU, at some point, and I'm sure I'm not the only one listening [Ashok laughs] that is curious to hear more.
00:59:05 [AR]
So we have been working with a few customers, with NVIDIA and with KX. Basically, part of this comes from use cases where people want to take advantage of GPUs for massive amounts of data, like I talked about [with] Peter Decker and Citi. He was the first one who said: I'm doing historical data, and I can do that and try different models, but then I build for market making, and when it goes to real time, the order book data keeps coming and there are massive amounts of it; they cannot process it [otherwise], and when you put a GPU with KX, you're able to do that. So that's one use case. The second one, what we have found is, you know, GPUs have this memory bus problem, right? While GPUs can do a lot of calculations, because of the memory bus between memory and GPU, most of the time they are idle; you have that data transfer problem. So obviously NVIDIA came up with this new superchip, the Grace Hopper, with the shared memory and high-bandwidth memory. So we are partnering with NVIDIA now on the Grace Hopper, and the Grace Blackwell too, because we actually don't have to transfer data: we can do zero copy, and we can run a lot of the queries and train models and run inference. It's early stages right now; we are showing it to a few customers, and what we are doing is accelerating, you know, backtesting and being able to do these massive amounts of processing. If you are doing a lot of searches, a lot of times it really comes down to binary search or whatever, and you can divide that up and make sure there's no memory bus problem between the CPU and GPU, with the shared memory. So those are the type of use cases where we are working with NVIDIA. We have the code working and we have a lot of new customers, but we want to make sure that the models, like Mamba or whatever people want to use, can scale. What's the accuracy of it? Does it improve the accuracy of the models? Those are some of the things we are doing right now.
01:01:19 [CH]
Are you able to speak more to the mechanism by which you've done that? Because I know when we had Henry Rich on, the main implementer of the J language, he talked about, I think it was t., the multi-threading primitive. There's always a kind of trade-off you have to make in how you accelerate your workloads. I remember too, I can't remember if it was an academic paper, but somewhere I had once seen someone GPU-accelerating APL and I was like, holy smokes. It was different than Aaron Hsu's Co-dfns work. The way he had done it was just by hooking into some DLLs. He had written some GPU library, but he wasn't accelerating the primitives; he had to make a call into some library that was hooked into a DLL that was dispatching to a GPU. So, I imagine you're not accelerating the primitives in the q language. Is there some interpreter flag that you set so that a certain set of primitives [gets accelerated], or is it a library that you are providing that people can call? Or is it too early days, in that right now everything's just in prototype stage and you haven't made those decisions?
01:02:32 [AR]
Yeah, no, I'm not sure I have enough information to give you that next level down, in terms of whether we are doing the primitives. But what I would say is we are providing it as a library, and we're also taking advantage of NVIDIA's NIM inference engine. CUDA used to be the main one, but now there are a bunch of things, like AI Enterprise and some libraries they've provided. So we are leveraging some of those from NVIDIA, but we've also done some things with q, and we've been able to implement some of the functions. So at some point, maybe somebody from my technical leadership team can be here and share some of it, especially as we prove out all the use cases and launch this thing. But I'm not able to answer the next level of detail on how we may be doing that.
01:03:27 [CH]
No, I mean, it sounds like a super fascinating topic for a future episode, because it is admittedly a less explored space. We always talk about Aaron Hsu's Co-dfns GPU APL work at Dyalog, but outside of that... and I mentioned the J multi-threaded primitive, but that's still just multiple CPUs, not GPUs. And there are a lot of challenges, especially when you're working with an existing corpus of code, right? q, you didn't start writing it a couple of weeks ago and then say, "Hey, we're going to accelerate this on the GPU." So yeah, maybe we'll put that on our ever-growing list of folks to come on, because, I mean, I'm probably biased because I work for a company that sells GPUs, so my Venn diagram of interests tends to overlap deeply with GPUs. But yeah, we'll put that on the list of folks to bring on.
01:04:29 [AR]
What I would say, generally, if you think about it more from a big-picture view, right, is that part of the problem is this whole systems-level thinking. There is Amdahl's law, which says that in a pipeline or whatever, the slowest part of it bounds the overall speed, right? And then Gustafson's law, I believe, if I recall, talks about the systems level. So you want all aspects of it: if you are processing data, how you ingest data, how you run the pipeline, how you clean data, and then the CPU is involved, and there's memory and a memory bus and the GPU; you can't have just one part of it be very fast and doing all the things. The bottleneck becomes the slowest thing, right? So that, to me, is what we're trying to solve, the Gustafson thing. You want to make sure that the CPU, GPU, memory, pipelines, and how you transfer data or run the vectorized functions all work seamlessly, because if there is a bottleneck in one part, you're not optimal.
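[Editor's note: since the two laws are recalled loosely here, a quick reference. With p the fraction of a workload that can be parallelized, Amdahl's law gives the speedup of a fixed workload when the parallel part is accelerated by a factor s, while Gustafson's law gives the scaled speedup when the problem size grows to occupy N processors:]

    \[
      S_{\text{Amdahl}} = \frac{1}{(1 - p) + p/s}, \qquad
      S_{\text{Gustafson}} = (1 - p) + pN
    \]

[As s grows, Amdahl's speedup is capped at 1/(1-p): exactly the point being made that the slowest, serial part of the pipeline becomes the bottleneck.]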
01:05:31 [CH]
Yeah, that's 1000% correct. There are countless anecdotes of people prematurely optimizing some piece of code they're sure is the hot spot, and then it turns out that 99% of the time is spent in some I/O, bandwidth-limited thing. All that time spent optimizing really should have been focused on limiting the amount of I/O work being done.
01:05:54 [AB]
Maybe the whole architecture of our computers is wrong. Indeed, I've been thinking about this on and off for decades.
01:06:03 [CH]
That's not a can of worms.
01:06:05 [AB]
No, but I mean, we're so settled in how computers work: we've got slow permanent storage and relatively fast working memory, and then we have these various caches in between. And then we have the GPU as a super fast parallel processor, but somewhere else, with relatively slow pipelines between them, so that very often even a normal CPU is just sitting there waiting for more data, because the working memory can't deliver enough data for the CPU to actually be busy working. It's just waiting. And then we have the array language proponents who say: "Well, all these things are potentially parallelizable. Sure, our implementation might not actually do it in parallel, but it could." Or there are embarrassingly parallel problems that are actually being solved not in parallel. Maybe the hardware is holding us back, and really we need APL machines. We need array processing hardware. And I know there have been experiments with such, and nothing really ever came of it. I'm not sure why that was. Maybe general purpose problems are not so suitable for that.
01:07:27 [AR]
Yeah. I think, to me, that's one of the things you have to think about now. I have this saying that data is becoming the algorithm, right? Because it's almost like a data structure. The differentiation comes from what happens to the data, how you learn from it. So more and more we talk about this AI factory, basically transforming data into something like an algorithm which makes decisions in that context. What's the architecture you need, and how do we build for that sort of thing, versus trying to work around the systems we have? And I think that's why this new superchip, and all these TPUs and NPUs and so many different things, are coming up now: they're all trying to solve one part of the problem.
01:08:17 [CH]
Marshall, were you going to say something?
01:08:18 [ML]
Yeah. I mean, there's a whole lot of array-oriented hardware, particularly on a CPU chip. If I were going to design a set of instructions specifically for implementing array operations, I wouldn't end up too far from what we have now. I mean, that's fundamentally the way to get high performance: to do things that are not just parallel in the sense of multi-core, but that are doing exactly the same operation across many different elements. And that's the design of a single instruction, multiple data, SIMD architecture.
01:08:53 [AB]
But isn't the bandwidth, even of the widest CPUs, nothing compared to what you have in a GPU?
01:08:59 [ML]
Yeah. I mean, the best approach to that is to split up your array operations and use less bandwidth. And I don't think there's anything the hardware can really do to help you with that. You have to do that in software.
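[Editor's note: a minimal sketch, in Python with NumPy, of the kind of splitting Marshall describes: processing an array in cache-sized blocks so intermediates stay small, rather than streaming whole-array temporaries through memory. The block size and the example operation are illustrative.]

    import numpy as np

    def rms_blocked(x, block=1 << 16):
        # Accumulate the sum of squares block by block, so each intermediate
        # fits in cache rather than materializing x*x for the whole array.
        acc = 0.0
        for i in range(0, x.size, block):
            chunk = x[i:i + block]
            acc += float(chunk @ chunk)  # dot product over one block
        return np.sqrt(acc / x.size)

    x = np.random.rand(10_000_000)
    print(rms_blocked(x))  # matches np.sqrt(np.mean(x * x)), with less memory traffic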
01:09:13 [CH]
Well, I mean, there are a bunch of companies out there. They're all startups, but... I can't remember the name of the one. [07] There's one that's creating an entirely new computer from scratch. And the idea is that you don't need multiple GPUs; you just need this one massive card, which is, I think, worth like a million dollars or something. And it's like a single card.
01:09:37 [AB]
A million dollars sounds like a lot of money, but really it's nothing compared to...
01:09:43 [CH]
Well, I said a million, and then in my head I was like, "Wait, we sell racks that are orders of magnitude more than that," which is why I kind of walked it back. But it is some immense amount. I'll try and find it... I can't remember it. I know my buddy knows, because he works at a startup that isn't competing with them, so I'll ping him, get the link, and throw it in the show notes. It'll be towards the bottom if you're curious what this company is. But the point is that there are people experimenting with... Are they completely brand new architectures? No. But are they novel? Definitely.
01:10:15 [BT]
The million-versus-billions thing reminds me of Austin Powers, when he sends out a ransom demand for a million dollars and everybody in the room just starts writing checks: "Oh, okay. Is that all?"
01:10:28 [CH]
Stephen, I thought you had your hand up, I think for half a second.
01:10:30 [ST]
Yeah, it's okay. People contact me through Q201 and they ask me, "How can I learn to be a QDev?", which I understand to mean, "How can I learn q and get paid to write q?" What would you tell them?
01:10:49 [AR]
One is to learn. We have this KX Academy now; we've invested a lot for people to get certified in q, and you'll see that people are really getting positive results from it. So in terms of learning, KX Academy is where we are investing a lot. And I know somebody has created a job exchange for kdb+ and q. I've seen that; maybe we can expand on it. Customers ask us all the time; they're really looking for people with the skills, so that's something I can find out about. But generally, to learn: first of all, all the personal editions are available so people can try it, we now have KX Academy, and hopefully we'll be able to find that exchange or something like it, so people can find out what jobs are available.
01:11:42 [CH]
All right. I mean, we've blown by the hour mark, as per usual. I'm sure you came across the blog post from a while ago that we had Ryan Hamilton on to talk about, but I feel like we've actually already addressed it; I'm trying to pull it up quickly. You already mentioned, and I think Stephen echoed it, the future community edition to be announced at a future date, which sounds like it's going to be a personal edition plus plus, with a different kind of license scheme, details to be confirmed. But that's something to look forward to. And I'm not sure if you had any other thoughts to add on the future of KX; it sounds like there are tons of different things. One of the other things I noted down: you mentioned open source. I think the phrase you used was "Fusion library" when you were mentioning the Stanford and MIT downloads. There's a lot being said; do you recall if you were referring to something specifically? And just the topic of open source in general...
01:12:48 [AR]
Yeah. Just to clarify, right? The things people typically have asked about come down to three: getting more people to use it, making it easy, and the documentation. And what I said is, except for the core kdb+ kernel, which we have kept [closed] just because it's so efficient... I was part of the Eclipse open sourcing when I was at IBM. It just became massive, gigabytes of code, people kept adding to it, and you lose control over it. So we wanted to make sure that every line of code in what we built is developed, tested, and released by our own team. And it's upgradable: we never break code, right? People don't understand this, but over 30 years, you can upgrade from one version to another. It's not like Python, where if you upgrade, everything breaks. But outside that, everything else: PyKX, we open sourced it; it's available on PyPI. The Fusion libraries are a set of machine learning libraries we built, and they're all open source. So what I was saying is, we are going to enhance those and make it easy for people to bring in that sort of machine learning library, including some deep learning, and we have integrated that with LLMs and others. Most of the things we are doing are really open source, and we've made them available. And then Fusion was something people said we hadn't made easy to integrate as libraries, to load up and use. So we are going to make it easy for people to build that ecosystem, use libraries, and share libraries across companies. GitHub is where we make it available today. So we're going to make more of that easy to use: accessible documentation, APIs; we have Postgres support, we have SQL. PyKX is our way of getting people with Python to use kdb+ and q. And those are the things I've typically heard, including, I think, some of the comments Bob shared.
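[Editor's note: since PyKX comes up repeatedly, here is a minimal sketch of what calling q from Python through it looks like. PyKX is on PyPI (pip install pykx); the exact API surface may vary by version, so treat this as illustrative.]

    import pykx as kx

    # Evaluate q source directly from Python.
    v = kx.q('til 10')          # q's til: the integers 0 through 9
    print(v.py())               # convert the K object to a Python list

    # Build a q table and round-trip it to pandas.
    t = kx.q('([] sym: `a`b`c; price: 1.0 2.0 3.0)')
    print(t.pd())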
01:14:52 [CH]
Yeah, I can't remember if it already exists, but I think at KXCon 2023, I want to say it was maybe Jonny, there was a presentation that mentioned an awesome q libraries list. I will go and try and find that, because I think you mentioned TorQ earlier, which is one of the libraries or frameworks that Data Intellect maintains.
01:15:14 [AR]
Yeah, it's an open source framework from Data Intellect; customers use that. There are frameworks like PyKX, of course, which we ourselves have made available on Anaconda and also on PyPI. The Fusion libraries are available on GitHub, and that's all the machine learning libraries and the ML toolkit, which people love. Those things are available. So we have quite a few things around our core, and we just want to make it all easier to use all of those, and get more people to develop and make them available.
01:15:48 [CH]
Awesome. So we'll try and find that curated list, but if not, we will collect as many links as possible, definitely the Fusion ones. I hadn't heard of it before and will be interested in taking a look. Any final questions from the others? And Stephen has just posted Qsome-q.org. It's awesome with an a replaced by a q. It's awesome-q.org.
01:16:12 [ST]
No, that's just a typo.
01:16:18 [ML]
But it sounds like we have a new suggestion for you.
01:16:21 [CH]
I was going to say, do we leave that misspelling in or just say it's awesome-q.org? That'll be up to Bob and what he does with the edit. But yeah, any final questions from the other panelists? I do have a final question; whether or not it'll make it into the cut, I'm not sure, and this week we'll let Ashok decide whether it goes in or not. I don't see any hands raised, so here it is. At KXCon, I was, I think, I can't remember if I was Conor number five or Conor number six at the conference. So the question I've been dying to ask is: how many Conors went with FD to EPAM, and how many Conors are left at KX? What can I expect at the next one? Did all the Conors stay, did all the Conors leave, or was there a dividing of the Conors?
01:17:12 [AR]
Yeah, no, there were a few Conors, with single N and double N. So there's always a debate, you know, about who is smarter, the double-N ones. All the Conors you met are still with KX.
01:17:22 [CH]
Oh, they're all still with KX. So there was no Conor divide. All Conors are still employed at KX.
01:17:26 [AR]
There's one Conor who left to go and start a company, but outside that, all the people we talked about are still here. Conor McCarthy, who leads PyKX, he was on one of your episodes, and he's leading things. And then many of the other Conors are all part of KX as a company.
01:17:45 [CH]
That's good to know for the next KXCon. Although, whether it's KXCon or KXConnect, I imagine even some of the folks who went to First Derivative might end up at one of these conferences as well.
01:17:55 [AR]
No, they're part of the same ecosystem. In fact, you see many of them attend; it's a small community, so we continue to work with them. It's more about KX as a software company versus just consulting, because consultants work with customers and travel, and it's a different thing people want to do. So yeah, thanks for the opportunity. As I said, we are really excited for KX as a separate company, and especially a lot of the existing customers are very passionate; they listen to this. We would love to invest, and that's what we're doing; it gives us permission now to invest as a software company. When I joined two years ago, believe me, we didn't even have a QA team; the thinking at that point was that customers would pay for it regardless. We've come a long way, and we still have a lot to do, but I think we have a really great, passionate team at KX, and we have fantastic customers. So this investment will be used to innovate and deliver for existing customers, so they can get more value, and we will absolutely invest in q. I'm so passionate about the team, so you'll see a lot of innovations. That's our core: APL, and k and q; without that we wouldn't exist. I'll say that clearly.
01:19:20 [CH]
Yeah. And it's exciting too that KX is now going to get listed as KX, with KX in the ticker or something like that. It reminds me of... well, it's a bad analogy because it didn't end well for that company, but BlackBerry, for many years, was Research In Motion, and their ticker was, I think, RIM or RIMM. Then at a certain point they rebranded to BlackBerry. Like I said, bad analogy, because it didn't end well for BlackBerry, but I thought it was exciting, because it was a Canadian company and everyone outside of Canada was like, what's RIM? But KX, I think, is a technology that a lot of folks know. Anyways.
01:19:56 [ML]
Well, and there was a company called Dyadic Systems that eventually renamed themselves Dyalog.
01:20:03 [CH]
Okay, so a better success story there. And they're, you know, still around.
01:20:06 [AB]
I question the wisdom of that name change, though, because now we have a confusing thing. Are you talking about the product? Are you talking about the language? Are you talking about the company? All called Dyalog.
01:20:16 [ML]
I don't know if that's that much trouble to anybody outside the company.
01:20:21 [CH]
Yeah, that's true.
01:20:22 [AB]
Certainly causes trouble inside the company to no end.
01:20:25 [ML]
Yeah, but you can deal with it.
01:20:27 [CH]
The first time, I can't remember if it was Morten we had on, but I would refer to the company as Dyalog APL, and they were like, no, no, it's Dyalog Limited. Dyalog Limited is the company; Dyalog APL is the product.
01:20:39 [ST]
The company is limited. The product is limitless.
01:20:43 [CH]
Yeah, they need to steal your marketing, Stephen. But yeah, thank you so much for coming on, Ashok. Hopefully we'll be able to do this again in a year or two and have another chat; it's always good to touch base. Bob, how can people reach you with thoughts, comments, and questions?
01:21:00 [BT]
Arraycast.com, and that's how you get in... sorry, contact AT Arraycast DOT com. Well, we took that long break and as a result I'm a little rusty on this, but anyway, contact@arraycast.com is how to get in touch with us. A shout-out to our transcribers: when you're talking about Dyalog, there are multiple ways Dyalog shows up in our transcriptions, and although the system we're using for the initial transcription is pretty good, that's why we have people like Sanjay and Igor working on it, making sure the right Dyalog is in the right spot, so there's not a D-I-A-L where there should be a D-Y-A-L. And a massive thank-you to Ashok for coming on. It's so rare that you get a chance to get insight into somebody who has to make these decisions, who carries this kind of responsibility, in a technology that is moving forward at, well, as close to literally light speed as you can get, I think, from the changes I'm seeing over time, and that has a huge influence on our entire social structure. The fact that you're making these tools available and letting people learn how to use them is so important. I think it really does democratize a lot of the technology, and that is a big challenge we all face. So thank you so much for coming on.
01:22:28 [AR]
Yeah, no, thanks again for having me. I think this community is doing great things, and we are the most successful commercial company, right, with a hundred million dollars ARR and growing, and the investors are excited. I think we're the company with the most practical APL-based offering in the market. We have a lot of ambition, and we hope it'll create a lot of value for all of us, the customers, and the ecosystem.
01:22:57 [CH]
Awesome. I can't wait till we chat next time. But with that, we will say happy array programming.
01:23:02 [All]
Happy array programming.
01:23:04 [Music]