Transcript
Transcript prepared by Bob Therriault and Sanjay Cherian
00:00:00 [Conor McCarthy]
We essentially just want more people to see that there's a different way of thinking about solving problems and that they can be more efficient doing it without a for loop.
00:00:12 [Music]
00:00:22 [Conor Hoekstra]
Welcome to episode 76 of ArrayCast. My name is Conor. And today with us, we have a special guest who we'll get to introducing in a few moments. But first, we're going to go around and do brief introductions. We'll start with Bob, then go to Stephen, then go to Marshall, and then to Adám.
00:00:34 [Bob Therriault]
I'm Bob Therriault. I am a J enthusiast, and on the day that we recorded, this is my wife's birthday. So happy birthday, Diane.
00:00:41 [Stephen Taylor]
Happy birthday, Diane indeed. I'm Stephen Taylor. I like APL. I like q.
00:00:43 [CH]
Happy birthday.
00:00:47 [Marshall Lochbaum]
I'm Marshall Lochbaum. I've worked with J and Dyalog in the past. Now I'm... known for BQN and Singeli, maybe?
00:00:53 [Adám Brudzewsky]
I'm Adám Brudzewsky. I work for Dyalog. I'm head of language design there. And that means I get to do a lot of APL work, just not very much language design.
00:01:04 [CH]
And as mentioned before, my name is Conor, host of the show, fan of all the array languages, and excited to talk to our guest today about himself and about what is happening in the q world, the KX world. But before we do that, we've got a couple of announcements. We will go to Stephen for one and then Adám for two.
00:01:22 [ST]
Thank you, Conor. I'm excited. Iverson College [01] is going to meet this year for the first time since the pandemic. Iverson College is a kind of cross between a small conference and a house party. 25 people meet and live together as students at Trinity Hall for a week in August. We bring our work. We do our work. We talk about it and share about it. There's no official schedule apart from mealtimes. But people often gather. We have a lot of people that ask to do talks. If you want to find out more about it, you can read about it at IversonCollege.com. There's only 25 places and it is an invitation-only event. But if you'd like an invitation, write to warden at IversonCollege.com.
00:02:09 [AB]
Dyalog is arranging a meetup in New York on April 11th. It's called DYNA 24. And we'll have a link to that in the show notes. But it's also just... DYNA, D-Y-N-A, like Dyalog North America, .dyalog.com. And then APL Germany is arranging a meetup on the 29th and 30th of April in Mainz in Germany. And we'll leave a link to that as well with the schedule. Interesting talks there.
00:02:40 [CH]
Awesome. So links for everything will be in the show notes as always. And that brings us to our guest for today. We first met, and by we, I mean just me, at KXCon 2023, which, when this gets released, I think will be roughly 10 months ago. And I believe Conor, Conor McCarthy, who is our guest, is going to be the first guest from KXCon that we've actually had on. We've talked about it probably three or four different times. We've reached out to a few people. But now finally we are having someone that we did, or I did, meet at KXCon 23. So Conor McCarthy works for First Derivatives slash KX and is the lead architect of PyKX, which I'm sure we're going to talk about today. But we're having someone on finally because we got an extra boost after kdb+ 4.1 came out just, I believe, a couple weeks ago now. And there was a bunch of new features that got released, including some new additions to the q language. And yeah, we're going to talk about that. PyKX, KX, maybe, you know, we've heard about it before because Ashok was on the podcast. We will link that in the show notes as well. Obviously, the CEO of... Maybe we'll have him back in the future. But we'll throw it over to you, Conor. One of the... I should also mention, I think it was six. It might have only been five Conors, including me, at KXCon. Was it five?
00:03:41 [CM]
Five.
00:03:43 [CH]
Because I think there's like six Conors that work for First Derivatives and KX, which has got to be confusing. But we'll throw it over to you. And you can tell us about your background, how you got into computing and how you ended up in the world of array languages and kdb+ and q.
00:04:21 [CM]
Thanks, Conor. Yeah, I suppose as a way of introduction, my kind of programming experience really started in university. The story for kind of the very early stages is very similar to Nick Psaris when it comes to experience. So I'd spent a lot of my kind of initial time doing software development, doing some Python, MATLAB, VBA, some Arduino programming as well, and then I kind of got into the field of physics. And really the motivation for that was that I've kind of always been fascinated with building practical solutions for problems, usually as elegantly as I can, but with practicality kind of at the heart of it. So university experience was great for me. I loved it. And when I got to the end of the taught part of being in university, I had a decision to make. I was either going to go and do a PhD, start working, or what I ended up doing was doing a paid master's. So I did my master's in experimental physics as well, in particular in spectroscopy. So the vast majority of that experience was in building physical systems for analyzing laser-produced plasmas. So that was mostly in kind of product design, from like a physical system design level. But I was also spending a lot of my time doing data analysis on spectra. So finding the power underneath curves to figure out how intense light sources were, and kind of more practical applications from like an imaging standpoint for monitoring drifts in systems. So it was kind of a lot of... I started to move away from the physical system building into more software development, mostly in Python on that side of things. And when I finished my master's, I kind of had a choice to make. I could either go and continue in academia, continue in research, or, well, I kind of wanted to travel. So one of my best friends, Bill Corcoran, had been working with First Derivatives [02] in London at the time, building out refineries. So building out practical applications for finance. And he suggested that I maybe think about joining KX because it would give me an opportunity to travel. So I applied, and I think at the second time of asking, I got an interview. And once all of that went through, I ended up in Newry at the kind of headquarters, training, ultimately. Once that kind of thing was done, the kind of initial period of training had finished, I was fully expecting to end up on a trading floor, building out applications or infrastructure in q or kdb+, or at least that was the expectation. The reality was that I was kind of in the right place at the right time. So about eight months prior to that, our CEO at the time, Brian Conlon, had had the foresight, with Mark Sykes, our CTO at the time, Andrew Wilson and James Hannah, to set up a machine learning team. And my skills in software development, and well, more so in the data science sphere of things, gave me the opportunity to join that team in London. And for the first year, year and a half of my time at KX, I basically sat beside Andrew, who's now on the core team, and James, and tried to absorb as much as I could about q and kdb+, and how most efficiently and elegantly to write solutions for that. So we built an awful lot of things in that time. The machine learning toolkit was kind of the principal one that I was involved in. So building feature extraction routines and functionality that could fit within anybody's machine learning application. So how you do grid searches while running in q processes, how you embed Python into q sessions.
We had embedPy at the time, and I ended up taking on some of the work to do some enhancements on that. And kind of off the back of that, and having worked with Andrew pretty closely, building up some of the ML toolkit stuff, I ended up working on the Fusion interfaces work that the company was doing. So around 2020, mid-2019, we had started kind of building out interfaces for external data sets, functionality for data conversions efficiently. So I worked on, and learned C pretty much by, integrating those solutions into kdb+ systems. So we were working on Kafka integration, MQTT integration, integrations with O-HR, integrations with Python. And that, in combination with the machine learning work at the time, was kind of the thing that I was doing. And in 2020, I was introduced to the first iteration of PyKX. And while the machine learning work continued, and we built an AutoML solution that's on GitHub, and we built a machine learning registry, a set of functionality for deployments into enterprise systems and model monitoring, really PyKX kind of started to take over pretty significantly in the way that I thought about what kdb+, for me, could provide. And it's kind of been the thing that I look at as the area that I've had the most impact in, but also the area that I think for the community at large has had the most impact. And so I think the machine learning work and the Fusion interfaces and everything that came with that are huge. They're a huge part of lots of applications. But I think with PyKX, what we're doing now and the kind of the impact that it's having on users and clients and download numbers and so on are kind of testament to the fact that being in a position to embed q inside of a language that is so ubiquitously used is actually beneficial generally.
00:11:21 [CH]
I'm not sure if you're able to answer this, or even if you have the stats for it, but do you know what percentage of your clients are consuming kdb+ and q either directly from the q executable versus through PyKX? I know it's only a new thing, so I'd imagine it's not every single one of them, but have you seen a ton of interest in the PyKX stuff?
00:11:47 [CM]
Yeah. I think pretty much 95 to 100% of our clients are using q in its q executable form. Essentially, all the clients are using it that way. Currently, the stats are that about 45 to 50% are also consuming it via PyKX. So despite it only being out for about a year at this point, about 50% of clients are onboarding it today. Some of them, like Citadel, [03] presented at KXCon discussing their usage of it and supporting it; that's up on our YouTube and probably in the show notes now. But ultimately, a lot of our clients saw why we were moving that route and can see the benefit and value of it. Nick has talked a couple of times, I think on the podcast as well, about a number of people making use of PyKX as a tool for introduction of the language to new developers. I think one of the key points for us, and something that a lot of our clients and users are seeing a lot of benefit from, is that ultimately it's an introduction. It's not meant to be the end state for everyone. It will be for some people. They'll get to a point and say, maybe I don't want to take that extra step, but we're giving them the opportunity to peek under the covers a little bit and try and get to a place where they're getting the most benefit out of q and kdb+ as possible.
00:13:13 [CH]
That's pretty massive, that you said only three years ago, really, and already basically half of the clients are using it on top of whatever their workflow was before. That's massive adoption in a short period of time.
00:13:25 [CM]
Yeah, one of the stats on that which is useful, and it's not an exact proxy for new users, is that at KXCon last year we announced PyKX becoming open source. That's, like you said, about 10 months ago. And in that period, we've now had 170,000 downloads of the software on PyPI.
00:13:47 [CH]
Oh wow.
00:13:47 [CM]
And it's continuing to increase. It's going to increase ultimately. So we're getting adoption and users trying it out themselves today. And the more that the community itself can push back into it and tell us what it is that they want and they need from their solutions, the more that we can build and ultimately increase the footprint over time.
00:14:09 [BT]
I would guess that part of it is that, with all the libraries and packages that you can get in Python, people might be using q as, you know, essentially something you can insert, do massive calculations in, and then bring the results back out to whatever other packages you're working from.
00:14:23 [CM]
Yeah. Yeah, principally. So a lot of how users are using it today is as an extension, with maybe q code that they have available to them now, things that had been built or were available previously. But also, like you said, having kdb+ databases that exist that they can query Pythonically or semi-Pythonically in order to get reasonably large amounts of data out to then fit models. So that's one way of it being done. I think one of the areas that we've seen a degree of adoption, and particularly when talking at Python events, has been in users being able to take Pythonic data that they have available to them today, maybe as Parquet files or other data formats that can end up being Pandas or PyArrow data frames, and creating kdb+ databases from that. So taking data from another format, producing a kdb+ database, and then getting the benefits of the queries and aggregations and everything that comes with a standard kdb+ infrastructure and setup.
00:15:28 [CH]
I guess maybe the next question is, what is next on the PyKX horizon? Are there things you're working on, or is it just building up better integration with the existing data science, machine learning ecosystem that exists in Python today? Or what are you guys focused on at the moment?
00:15:45 [CM]
So I've been thinking about this a lot from the perspective of, like, roadmap and everything, and we do have a roadmap; we publish it on code.kx.com for PyKX specifically. But the vast majority of the work that we're doing ultimately is, I think, very similar to how the q development works, in that it's intended to be iterative, percentage-based improvements to the library over an extended period of time. You said that we've been developing it for three years, and the reality is, since we've upticked it to be open source, what we've been developing has changed an awful lot, and we try to react as best we can to what the market, the market being our users, is actively requesting of us or asking us for. So there are things like performance improvements, moving a lot more of it down to C++ so it's more efficient; some streaming-style applications that can be built in Python. They won't be as efficient as they would be if they were written fully in q, but they will be a starting point for people to be able to build up those kinds of infrastructures. And then just better everything over the course of it. We've had a Pandas-like API for allowing users to interrogate data more Pythonically than they would if they had to write q, and expanding those kinds of things just helps to reduce the barrier to entry a little bit, so that we can then take them the extra step of saying, here's the extra 10 percent of performance you can get by reading Q Tips or Fun Q and doing those kinds of things.
00:17:20 [CH]
Interesting. And so how does the C++ come into play? Because I imagine that PyKX was just kind of, not a shim, but like an interop layer between Python, and then at some point you're invoking the q executable to do some work for you, like the heavy lifting. Are you saying that, like, somewhere in between the Python layer and the q layer at the bottom there's C++? Or, for certain new things that q currently doesn't do, you're writing, you know, bespoke C++ code that the Python is directly invoking, and then, like, for certain things it goes down this different path and doesn't end up at q? Is that what you mean, or something else?
00:18:02 [CM]
No, no, it is actually, it's different. So the way that PyKX operates ultimately is that we're provided by the core team a q.so, a shared object, that we embed inside of Python. So all of our interactions with q objects themselves are in their kind of native form in C. So when we're executing against q in this instance, we're not talking about executing against an executable; ultimately what we're doing is making direct calls against q. So most of our work, at least the majority of the code that's written, is doing data translations between q and NumPy, q and Pandas, q and PyArrow, and the rest of the kind of data formats that we support. And we wrote a lot of that originally in Cython, so that we could have it be Pythonic in nature, but when it's compiled that adds an awful lot of overhead for actually running the conversions themselves. So at the minute we're in the process of trying to write a lot more of that data conversion down a layer again, so we can be more efficient in how we go about doing it. But it provides a lot of benefits when it comes to that. So, like, we've been able to implement some of the Python protocols for data conversions between q and NumPy [04] really, really efficiently and bidirectionally, because we're at that level of integration between the two of them, which is obviously beneficial for everyone, for our users, but also for ourselves, because we can piggyback on how NumPy does some things from time to time.
00:19:45 [CH]
Any other questions from any of the panelists before we move on to potentially our next topic of what's new in the latest release of kdb+ 4.1? I'm looking around; Bob's eyebrows are bouncing up and down, ready to ask questions, probably about the next topic.
00:20:03 [BT]
No, I'm interested in hearing about the next topic. I don't even know what questions I would ask.
00:20:09 [CH]
Well, we'll be sure... I think we've included in the show notes, maybe in both of the past two episodes, 75 and 74, since it's been released, there was a great blog post, and I think actually we can link, I'm not sure if it was on LinkedIn or if it was on YouTube, but I definitely did see that there was a webinar going through the features, and then there was a blog post attached to that. And yeah, a lot of new exciting things. So we'll throw it over to you, Conor, and you can maybe highlight what you think is most exciting to you, and I'm sure there'll be some questions from some of us. I've read the blog, so I've seen some of this stuff. And also too, I think we did link in the last episode as well, there was a talk by Andrew, I think it was all of Andrew Wilson, Oleg and Pierre, they gave like a three-person talk where they focused on different sections, where they went through some of the newest features, which I don't think all of them made it in, but most of the stuff that
they demoed in that talk made it into 4.1. So yeah, we'll throw it over to you and you can give us the overview.
00:21:13 [CM]
Yeah, so 4.1 was released on the 13th of February, a little over a month ago now at this point. I suppose it covers off a huge range of different things: some that are language specific, others which are performance-based upgrades to kdb+ generally, and, more broadly, just improvements across the board, including from the perspective of security. Andrew does have the recording on YouTube now; I was looking at it yesterday evening for some inspiration, so it is all available there. But the big changes, I suppose, are around parallelism. So there's now nested parallelism within q, so people running peach can run peach at depth, and it also includes an implementation of a work-stealing algorithm. It also removes some of the architectural barriers that were put in place by having a 1,024 limit on the number of connecting processes. At the same time, the socket performance generally has been improved, so late-connecting processes won't have a fixed penalty for being later to the party. That should be significantly improved by that. I think in the release notes, there's an example that's given of the performance improvements that are put there. One of the big things from a data science perspective, and one of the things that we on PyKX have been benefiting a lot from, is improvements to the parallelism for data loading. So CSV loads and fixed-width files can now be loaded in parallel, as well as named pipes. So named pipes also work in that world too. From kind of a general improvement perspective, HTTP persistent connections, the fact that you can now keep them alive, reduces a lot of the friction that's put in place by single-shot HTTP requiring users to basically re-sign every time they're trying to run an operation, which should improve the usage of those. The same exists for the TLS-based connections as well, which is a super important thing. I think the biggest piece of improvement generally, or the biggest piece that's got everybody talking, has been around the language changes. I think one of the things, Conor, that you had mentioned in your conversation with Ashok was around k generally. And obviously, a lot of the things we're talking about here also apply down at the k level as well. So the language changes that have gone in from Pierre are fundamental. They're included in the improvements that exist for q. So there's the addition of the dictionary-literal syntax, to make it a lot more aligned with how table generation is put in place. Anybody that's created a very, very small dictionary in q knows the pain of enlisting the first element, bang, enlisting the remaining elements. And that is a bugbear and something that I'm really looking forward to getting rid of in a lot of my code. But the additions of things like type checking for types and functions, and the generalization of that for filter functions, should really help to, I suppose, smooth out some of the areas where people have a lot of boilerplate code that they're writing.
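For readers following along at home, the dictionary-literal change looks roughly like this; a small sketch based on the description above, so the exact form and display may differ slightly from what 4.1 actually prints:

```q
/ before 4.1: even a tiny dictionary means spelling out the enlists
d:(enlist `a)!enlist 1

/ 4.1 dictionary-literal syntax, as described above
d:([a:1;b:2])
```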
The last piece is on the pattern matching side of things, which is really, really interesting from the perspective of being able to do validation of the data that you're producing, doing multiple assignments, and doing checking on the data that you're producing. But I think one of the things that I wanted to kind of note on this is that pretty much everything that's here is based on client feedback. We benefit as people are more communicative with us as to what they want to see, whether that's in q or kdb+ or whether it's in PyKX. And I can give a couple of examples of that. One of the things that maybe would come up is how long it's been since the last release. So it's been, I don't know, I think four years since the last point release, probably a little bit less than that. But we're continuously, iteratively improving those releases themselves; they're just not patch release versions. So they're all dated releases. So even since 4.1 was released last month, there's been an extension to .Q.trp to include a .Q.trpd, which is a dot-apply version for getting backtrace information. And additionally, there's been multiple extensions to -36! for dealing with encryption keys, so that users can see that an encryption key has been loaded so they don't load one again, or they can lock it to make it a singleton object as well. So yeah, it's continuously evolving, and all based on client interactions and feedback.
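For reference, .Q.trp is the existing trap-with-backtrace utility; a minimal sketch of how it is used is below (the .Q.trpd dot-apply form mentioned above follows the same idea, but its exact signature isn't reproduced here):

```q
/ trap a failing call and print the backtrace instead of aborting
f:{x+`wrong}                                     / will signal a type error
r:.Q.trp[f; 1; {[err;bt] -2 "error: ",err; -2 .Q.sbt bt; 0N}]
```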
00:26:36 [BT]
When you mentioned the parallelism, is it parallel? Are you working with parallel CPUs? Are you working with GPUs? At what level can you parallelize?
00:26:48 [CM]
It's parallelism at the CPU level. It's multi-threaded operations within q. [05] Now it can just be more parallel, essentially.
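For anyone new to q's parallelism, peach is the parallel form of each, running over the secondary threads the process was started with (for example q -s 4); a tiny illustrative sketch, not tied to any specific 4.1 example:

```q
\s                                / number of secondary threads available
{sum exp x?1f} peach 8#1000000    / eight independent workloads, run in parallel
```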
00:26:57 [CH]
So I guess, yeah, the thing that I am most excited about... there's tons of stuff that you mentioned. And I recall being both excited and confused when I watched, I think it was Pierre who demoed the pattern matching. And we will, yeah, we'll link that talk again. Everyone should go watch it. He's introducing every feature, he says, oh, very simple, and then moves on, with, like, the most dry delivery, you know. I asked him after if he was being serious or it was a joke. And he's like, of course it was a joke. I was joking, you know, you could tell. And I was like, well, you could have chuckled at one point to indicate that you knew that it was funny. But the pattern matching seems, one, extremely powerful, but two, I don't know if any of the other existing array languages have, sort of, anything close to this. So from my understanding, the basic syntax is that you parenthesize a pattern matching expression and it will, in several different ways, kind of destructure what you're matching, and, if it can, match, you know, the variables that exist in your pattern matching expression, or I'm not actually sure if that's what it's called. And on the blog, it sort of has these examples, like parenthesis B, semicolon C, end parenthesis, and then colon. So, like, whatever's in the parentheses before the colon is your, what I'll refer to as, pattern matching expression. And then after that, it kind of tries to match the pattern. So after it is parenthesis two, semicolon three, end parenthesis. So the B matches to the two, the C matches to the three. But you can do this, like, in a bunch of different contexts. So, like, you can do it with a dictionary to pattern match on the value that's associated with the key. There's another example, which looks like... I'm not exactly sure what the bracket bracket X two is, but it looks like it has a way of, like, saying, match on everything up until the last item, and then just, like, match on the last thing that's there. I think that's what it's saying, but I could be wrong there. You can clarify if that's not exactly right.
00:29:12 [CM]
Yeah. So in that example specifically, the pattern match that's happening is on the table itself. So x2 colon e is taking the last column within the table and assigning it against e in this instance. So five, six are getting assigned to e there. But like you say, the dictionary version above it is a good example too, right? The pattern that you're matching there is a dictionary where four is being assigned to d. And for an audio podcast, this is probably a little bit less descriptive, but in the release notes it ultimately is that four would be assigned to d in that case, and the x2 column is assigned to e for that pattern.
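In q terms, the basic list pattern being described looks like this (a quick sketch; the dictionary and table patterns discussed above use the same left-hand-side idea):

```q
/ list pattern: names on the left bind positionally to items on the right
(b;c):(2;3)
b    / 2
c    / 3
```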
00:30:02 [CH]
And so x2 is the implicit name of the column because you haven't provided explicit names. Is that correct?
00:30:09 [CM]
Yeah, exactly.
00:30:12 [CH]
And is that a 4.1 feature or have implicit names been there for a while?
00:30:15 [CM]
Implicit names have been there since the beginning, and they also exist for the dictionary literal syntax. So if you don't put in the names of the keys that you want for a dictionary, so if you had left parenthesis, open square bracket, one, two, three, close square bracket, close parenthesis, you'd end up getting a dictionary that had x goes to one, x1 goes to two, x2 goes to three. Right. So yeah, it's been there for a while.
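Sketching out the implicit-name behaviour just described (output layout approximate):

```q
/ dictionary literal with no explicit keys: names default to x, x1, x2, ...
([1;2;3])
/ x | 1
/ x1| 2
/ x2| 3
```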
00:30:46 [CH]
Stephen, you were going to ask something.
00:30:48 [ST]
Yeah. The pattern matching is really exciting. We had a study session on it in the q201.org team. And at that time we couldn't find a formal definition of the pattern. So there were some cases where it wasn't clear to us whether the parentheses enclosed a data structure or a pattern and how we knew what the difference was. Is there these days a formal definition online of the pattern syntax, or can you promise us one?
00:31:19 [CM]
I don't think there is a formal definition that exists today, and I can definitely request it immediately as we finish this. I think, to your point, Stephen, it's incredibly useful to have that in place today to say where the limitations are on some of these. It's obvious in limited cases, or in the cases shown in the release notes, but I can talk to the team about getting some of that as well.
00:31:47 [CH]
And this is what was amazing about the pattern matching, is that it doubles as what are multiple different features in other languages. So in one sense, it is pattern matching. In another sense, it's destructuring, which I guess kind of go hand in hand depending on the language feature. But in Python, pattern matching with the match keyword and destructuring with iterable unpacking are two separate things. So depending on the language that you're comparing it to, that already maps to two different language features. But then on top of that, number three in the blog post points out that you can use the pattern matching to also do type checking, which is kind of, I don't know, to me it's mind-blowing. And then I think there was like a fourth feature that didn't end up making it into 4.1, but like in the presentation that Pierre was giving, he was like, Oh, it's a pattern matching. Oh, it's destructuring. It's pattern matching. It's type checking. And it's this other thing. And I remember sitting next to Nick Psaris, and I was kind of, I thought I was hanging on. I kept asking him. I was like, is this, do I have an understanding of this correctly? Because like, this is a lot of stuff. And he was like, no, yeah, that's what they're saying. And yeah, that's pretty impressive, too, that you can do type checking with this pattern matching syntax, which once again, I mean, it's an interpreted dynamic. I don't know. Is q? If we're doing type checking, is it dynamic? I don't even know. Is it a dynamic language? What do we call q and k these days if you can type check it?
00:33:16 [ML]
Well, it's just a dynamic language with type checks. There are lots of dynamic languages that have, I mean, usually you would have some sort of predicate, like a function that, you know, does, or an assertion, basically.
00:33:29 [CH]
That's true. Yeah. Erlang famously has the dialyzer tool for type checking.
00:33:34 [ML]
Does that turn things into Dyalog?
00:33:36 [CH]
No, it's just.
00:33:39 [AB]
Hey, speaking of that, I've needed this. Like, I've written some utility functions where I allow specifying some things as left argument. And then I've had, like, you can specify some number, you can specify maybe some text, maybe an object. And then I use the type to deduce the order so the user doesn't have to remember which order. Because the way we do multiple arguments is that you give an array of them. So it doesn't matter. It doesn't matter if you say 1 and then the letter B, or you do B and then the number 1, because I can check the types and figure out what you meant. It would have been so much easier if I had something like this, instead of jumping through hoops, checking types, and finding the first one that matches.
00:34:22 [ML]
Well, so with that, I mean, you have a list of types you expect, and the user can pass in a list in any order. If you want to make pattern matches on that, doesn't that... Don't you have to make, like, a pattern match for every permutation? I'm not sure I understand what you're...
00:34:36 [AB]
No, but if I understand... if I understand right, the example that was given is the user... Like, there's an example on the type checking, the second example. And we want something called name and something called age assigned, if I understand right. And then the user passes in a list. One element is a symbol, and the other is whatever this age type is, some kind of numeric type.
00:35:03 [CM]
Short.
00:35:04 [AB]
Sorry?
00:35:06 [CM]
That's the type I'm saying. Short.
00:35:07 [AB]
Yeah. But it's a numeric type. And then, if I understand right, the type checking pattern assignment here just says, okay, I need a... Oh, maybe I'm misunderstanding. I need a symbol, and that becomes the name. I need a short, that becomes the age. What if the user...
00:35:28 [ML]
I think it would just check that it would require them to be in the same order and just check that they match up.
00:35:33 [AB]
So what if they're not? So it's in the wrong order. Then... Then you get a type error.
00:35:38 [ML]
Then you have to write another pattern for that order.
00:35:41 [AB]
Ah, so it wouldn't actually help me, OK.
00:35:44 [CH]
I mean, it is very useful, though. Like, it shows in the next example, which is just an applied example of the type checking, that you can basically add types to your function signatures. Which is, like, that's basically what you have in Python, right? Like, Python doesn't require you to add the types to your parameters, but if you want to, you can add them. And actually adding them on their own does absolutely nothing.
00:36:09 [ML]
Yeah, Python's very weird that way.
00:36:11 [CH]
Yeah, you have to run a MyPy type checking tool or Pyre from Facebook. There's a bunch of them that you can use. I think even Ruff, you know, is trying to be the one linter to rule them all. But, like, with q now, you get that out of the box. You can add it, and it'll automatically check it. So it's, like, it's honestly better than what Python has. It's gradual typing with the type checker built into the executable. Which is... I love gradual typing. [06] Because, you know, dynamic languages are great. A lot of the times you just want to do something quick. But every once in a while, if you keep on running into some error of, like, oh, I always forget this is supposed to be a string, not an int, and then I get, like, an error. I can just add that now and ensure that whenever you're calling it, it's going to make sure that that's the case. Otherwise, you'll get a type error. Which... Like I said, I don't think there's any array language that has this kind of stuff.
00:37:02 [ML]
Singeli. You forgot about Singeli. So, yeah, I am pretty familiar with using these sorts of features, the type checking and the destructuring, because Singeli has pretty sophisticated destructuring. And there it's sort of interesting, because you have these two kinds of values you might take. There's the compile-time values, which are dynamically typed. And there's the runtime values, which at compile time are represented as, you know, an unknown value with a static type. So the type checking syntax... the syntax is for your compile-time values. And then it looks pretty much like... well, it's actually a lot like the type declarations in functions. So your pattern matching goes in the generators, which you run at compile time. And then your functions that get compiled into the actual code that runs, those have to be statically typed. They can only ever take one type. So, you know, there's just a standard static type checking thing. You say, you know, A is a long integer, B is a float, or whatever. But then you have these generators, and the type checking syntax superficially is similar, but it's actually, you know, more of a pattern matching thing, where it says, this variable is required to have a type. And then the type expression, either that can be like a name, so you say, it has to have a type, and I'll take in this name, and then I'll know what its type is. Or you can specify in parentheses: its actual type has to be this value. So, yeah. And so you can do that in generators, and you can do that in assignments. And there's also a match structure, which pretty much bundles a bunch of generators together. But then you can match and go into a pattern and say, match it against one of these patterns, and keep trying until you find one that matches.
00:38:55 [CH]
Link in the show notes to Singeli. And maybe there'll be a Singeli versus q now blog post in the future. Comparing this kind of stuff. Stephen, I think... I'm not sure if you still have your thought that you were... Hanging on to a couple minutes back.
00:39:09 [ST]
I was wondering if your fourth use case, the one you couldn't remember, was assertion checking. Where you could force a match error.
00:39:15 [CH]
I can't remember. I remember the fourth one was mind-blowing. And I think Nick... It was either Nick while the talk was happening. Or Pierre said afterwards that. We'll see if that one makes it in. Assertion doesn't sound too mind-blowing. But also it was a while ago. So it might have been that.
00:39:31 [ML]
Well, the thing about assertions is that if you have a match on multiple patterns... Then you can go from... Then you can specify arbitrary stuff. So you can say... Well, I want this pattern to match if this holds. And then if the assertion fails, it'll go to the next pattern and try the next one and so on. So in that case, it's very useful. If you're just unpacking a variable, then you may as well put the assertion afterwards.
00:39:56 [CH]
Adám, you were gonna...
00:39:57 [AB]
Yeah, I was just going to ask about those types. Being that q has more, let's say, finely grained types, like the different numeric types, is there some way to specify that I just want a number?
00:40:11 [CM]
So not within the syntax for a single character. You can write filter functions, which would do the check itself, which I guess is the generalization. But not from a named numeric character representation. There isn't an equivalent for numeric-only right now.
00:40:32 [CH]
Which is still pretty powerful. That also kind of corresponds... not exactly, but kind of, to type classes from Haskell and concepts from C++. Not really, but at an arm's length, the idea that you can create your own category by writing a function that checks: is it a set of these things? I'm sure there's some PL academics listening to this right now that are upset that I made that comparison. But like I said, at an arm's length, there is a superficial similarity there. And yeah, I didn't even catch the example that's after that, or the second one in the filter functions. Which is... like, once again, this is... I don't even know what this corresponds to. You can pattern match, and then within your pattern matching expression, apply custom functions to each thing that you pattern match, if you choose. So the example is, it pattern matches a list 1, 2, 3 to A, B, C. But then for B, it has a colon 10 plus. And for C, it has a colon 100 plus. And then you end up with A equal to 1, B equal to 10 plus 2, which is 12, and C equal to 100 plus 3. I don't even know... like, what is that? That is... like, that's crazy. That's... I mean, the closest thing I could think of, maybe... I'm not sure in BQN [07] if you can do something similar where you have a list of functions.
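Reconstructed from that description (the 4.1 blog post has the exact example), the filter-function pattern looks roughly like this:

```q
/ a slot in the pattern may carry a function that is applied
/ to the captured value before assignment
(a;b:10+;c:100+):1 2 3
a    / 1
b    / 12  (10+2)
c    / 103 (100+3)
```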
00:41:54 [ML]
I mean, you just apply a function after, once it's matched.
00:41:56 [AB]
Yeah.
00:41:57 [CH]
So that is... I mean, that's... This is the kind of stuff that I remember seeing in that talk. And I was like, I can't even think of a programming language. Where at the same time you're pattern matching, you are applying like a unary operation or a function to each one of the pattern match values. Like the closest thing I can think of is in like functional languages. You could write a little pattern matching function. And then you could map that pattern matching function over your list. And it would have the same effect. But that's morally semantically different than what's happening here. Yeah.
00:42:35 [AB]
In J, you could make an agenda, right? Something with a... if you make a... an agenda? Yeah, isn't it? With a gerund of the functions you want to apply to the specific elements. And then you just apply them to all of them. And then you do the assignment.
00:42:53 [BT]
And what you would do is... what agenda does is it gives you the choice of applying different verbs to whatever you're bringing in. And you would make the choice of what's applied by the type of the noun coming into the agenda function. And so you could do it that way. But this sounds to me almost like, when you're talking about making changes within a type checker, that's almost like pre-processing, isn't it? Sort of a pre-processing going on?
00:43:19 [CM]
Yeah. Ultimately, it gives you that power, right? One of the big things that people have asked for, and I know I looked for when I was starting out, was ways and means by which to modify data as it was coming in, assuming it was incorrect types, but also not have to have if minus 11 bang equals... or sorry, yeah, -11h equals type x, everywhere in my code. The addition of functionality like that for the pre-processing of data as it goes into functions, I think, is super powerful. How useful it'll be will mainly come out as people start to use it more and more in production. And we have clients that are onboarding it today and users that are trying it out and providing feedback. But I think it'll come out with more enhanced usage by the community as a whole, and telling everyone what it is that they found really cool and what it is that they've struggled with that maybe needs improvement. I think, Conor, to your point on Pierre's talk, around different features and functionality that he showed that didn't make it into the 4.1 release at that point: if you try to write some of the things that he wrote, they do raise NYI errors currently. So there's a million ways that a lot of this stuff could go. But the reality is that it'll all ultimately be driven by client feedback and user feedback at the end of the day.
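For context, the boilerplate being replaced is the familiar manual type guard; a minimal illustrative example (the function f and its body are hypothetical):

```q
/ old-style guard: insist that x is a symbol atom (-11h) before doing any work
f:{[x]
  if[not -11h=type x; '"type"];
  string x }
f[`abc]      / "abc"
f[42]        / signals 'type
```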
00:44:50 [CH]
Yeah. It's still very, very cool, though. I can imagine several different use cases for the pattern matching plus some operation. Because in Python and C++, or languages that have this kind of destructuring, which is what you're doing here, there's no way to immediately apply an operation. You have to destructure on one line and then on the next line apply your mutation to that result. So one, it's way less declarative. But two, if you want to use that in some kind of single-line expression, it's impossible currently. Because, in C++17, it's called structured bindings, they have a whole syntax for that, and you've got to pull out the, you know, A, B, C, and then get the C and then do something to it. And you're going to have to keep that mutable. Like, you can't mark it const, because immediately you're going to have to change the value.
00:45:44 [ML]
Well, you'd probably want to make a new variable, right?
00:45:45 [CH]
Or yeah, just declare a new variable. So now you've got two variables. But like with this q pattern matching functionality, you could do that all in one expression and then embed that in another expression. Not saying people want to do that, but like if you wanted to, you could. And in other languages, that's not possible. Which is... it's very, very cool that a language like q, that I don't think is known for being, you know, like a Haskell, like, you know, academic, try all these cool new features and do things that other languages aren't doing... but that's essentially what's happened here: there's a, you know, language feature that's been implemented that is, you know, doing novel stuff that's not possible in other programming languages, which, as a PL enthusiast, is very, very cool.
00:46:32 [ML]
I will say that with pattern matching, it's easy to, because there's so many design options and there's not really a natural path to take, it's easy to back yourself into a corner where you have patterns that are ambiguous. So like an obvious one is if you said, you know, A plus B, it's 10. Well, there are many choices of A and B there. There are some options like you could interpret it by saying, well, I want to assign only the variable A. Maybe B is already defined. And so if B is 3, then A would be 7. But you could do it the other way, too. So with a lot of syntax, you can get into that situation where, you know, there's two expressions and you could be defining either one of them. But you can't really define both.
00:47:19 [CH]
Oh, yeah. See, this, man, this is making... that's the thing that I think blew my mind last time. Which maybe did actually get in. It's like the fifth, or whatever we're on now, sixth, language feature that this corresponds to: in some cases, like under, it does the inverse stuff, where, you know, the simplest example, we might have talked about it on one of the last couple episodes, is where you're, you know, appending. You can pattern match a string like puppy by putting, you know, some variable, comma, PY. And then it'll match on the PUP. Did that make it in? Or is that the thing that... No. Okay. So that was the thing that was mind-blowing.
00:47:59 [CM]
Yeah. I think that was raised as part of the discussion in the room at KX Con, which is, I guess, on the recording. There's ambiguity in some of those and as a result, they were not put in. Ultimately if there's ambiguity there, there's potential for bugs. And that's a difficult one to have in place when users are trying to use it in practical applications. It could be generic.
00:48:32 [ML]
Well, that's flashing warning lights because you know, what do you do about A,B?
00:48:38 [CH]
Yeah.
00:48:39 [BT]
So it's been out for a month now. What has the response been like? Are people sort of discovering this or what's the feedback?
00:48:46 [CM]
Yeah, so the response has been really positive kind of across the board. Andrew Wilson and I were in London last week meeting a number of clients. And I know that our C-level staff have been doing the same over the last kind of couple of weeks. And it's really across the gamut of what's been released that clients and users are really happy with. So a lot of the focus has been on the language features, which are, as we kind of discussed here, innovative and [it is] really, really nice to see the evolution of what's been built on k and q. But a lot of where users are finding the immediate usage for 4.1 is in the performance updates; the increases in parallelism, faster loading of the CSV data, the kind of fixed-width stuff, as well as improvements to the HTTP infrastructures, because they can keep connections alive now. Those kinds of practical elements are things that people are onboarding pretty much immediately, because one of the core principles of how a lot of the development in q has worked for forever is it's pretty backwards compatible, pretty much everywhere, as best as possible. There are very few NUCs, and thankfully the language features are pretty much entirely additive in this case, when we talk about pattern matching and filter functions. Users are happy to onboard it quicker than they would, obviously, if the language features made any modification to what they needed to do, bar a couple of minor NUCs in the release notes itself. So it's been great. Everybody seems very positive.
00:50:29 [BT]
When I worked in television production, we'd get a new piece of equipment and there were specific people I would give that equipment to because everybody, when they read the manual would be able to use it. But there were specific people I gave it to because I wanted to see what it could do. So [chuckles] I wanted the surprises of what they would find. Is that happening with any of your users? Have they surprised you with what they've come back with?
00:50:54 [CM]
I don't know if I would say "surprised" is probably the term. I think it's been good to get the feedback from the community as a whole as to their excitement about it. I am not on all of the email threads and community chats that go on about some of this but I think for the most part, we did a lot of the pre-work that you're kind of talking about, Bob, when it comes to giving the software to people we knew would break it. So that we could figure out where all of the dark edges were and everything we were implementing prior to that point. It was one of the benefits of the T releases (the 4.1 T versions) that preceded the production release of the software, is that a lot of the things that likely would have come up had come up before. So to Conor, your point around some of the features being maybe NYI'ed or some of the things not being there, a lot of that will be a result of internal testing, or it might be just be a result of us being in a position where we say: "right, we need to hold our powder on that and see where we can get to so that everybody is happy once they get it into their hands".
00:52:07 [CH]
Yeah. And I don't mean that as a knock or anything. Their presentation (Andrew's and Pierre's and Oleg's) was so awesome that even 75% of what they showed was still, like, an amazing set of language features. And I mean, if you look at the blog post carefully, really the last two, three and four [chuckles] are all really the same language feature. It's just what you can do with it differently. And the fact that you're able to list those as separate points is, like, a testament to how powerful this single language feature is. It's really quite astonishing.
00:52:41 [AB]
It does remind me a little bit of something that some APLs have which is related to "under". Being able to have function application on the left side of an assignment. So let's say I have a ... [sentence left incomplete].
00:52:58 [ML]
You're talking about selective assignment? [08]
00:52:59 [AB]
Yeah.
00:53:02 [ML]
So it's not only on the left of the assignment arrow. It's on the left of the variables.
00:53:04 [AB]
Yeah. Immediately to the left of the assignment arrow, and that would be a modified assignment. [Marshall agrees]. We use the current value of a variable as part of the computation, similar to what many languages have [with] plus equals (+=) and other such things to add and subtract. And you could do other operations as well in many languages. No, but here it's to the left of the variable name. So let's say we have a variable: it contains a three-element character vector, so a string, like "ABC" for now (we're not going to actually use those). But instead of assigning something to A to overwrite it, say like "DEF", I can assign to "reverse A". I include the "reverse" function to the left of the A, and then A gets the value "FED" instead of "DEF". So it's just kind of interesting what just happened, but it does work. Marshall said "selective assignment", but it doesn't really have the feeling of selecting here. It's assigning to some reflection of it.
00:54:09 [ML]
It assigns to a selection. It's meant to be an analogy to indexed assignment, where you assign to particular indices. There you say "selective assignment". Not only can you pick the indices; you can make any selection.
00:54:24 [AB]
Yeah. Including restructuring.
00:54:27 [ML]
But when you say it works, it is a little tricky, cause you can't just put any APL expression in there. Specifically, the variable has to be on the right-hand side of the expression, and it has to have certain types of functions (I call them structural functions) that are then applied to it. [Adám agrees]. So, I mean, this is maybe *the* reason I developed structural under; [it] was to try to package this in a more functional form. And the syntax for structural under is not nearly as nice, but what you get from that is that it is completely unambiguous, because you have a function and then you know what arguments [are] coming in and what result is coming out, as opposed to starting with an expression and trying to figure out what's being modified here.
00:55:16 [AB]
Right. Yeah. So if you were to do something like "A function B" on the left of this assignment, then it's B that gets assigned to, and A is just a parameter. [Marshall agrees]. Now if you swapped it, [meaning] instead of "A function B" you wrote "B function-commute A", then you would think it's all the same. So commute is the higher-order function (the operator) that just takes a function and flips its arguments. You'd think: "oh, it's the same thing; I just moved these two names to the side and then told the function to take them in the opposite direction". And it's not, because there's a difference as to where the name goes. But I did actually use this feature to implement the model of structural under. And it can do sometimes surprising things, like you can destroy the structure of your array, apparently, and assign to that, and it gets rebuilt. That's the impression you get, which is pretty cool. Occasionally I've assigned to the "ravel" of something. I don't have to reshape the values before they go into the container array. I just assign to the flat thing. Or you can even APL-enlist (meaning completely destroy the structure, ripping open all the enclosures), and that works as well.
00:56:33 [CH]
All right. Well, I have a couple of questions that are not necessarily kdb+ (actually one of them is) but I'll pause here before I kind of move sideways to a new set of topics and kind of move away from this. If there's any last questions that we have about language features or kdb+ 4.1 parallelism or any last things Conor, you want to say?
00:57:00 [AB]
You mentioned about backwards compatibility. I thought there was like a very hard rule that everything you did to the language had to be backwards compatible but apparently you're saying that's not the case; you do make breaking changes?
00:57:11 [CM]
The aim is always for it to be entirely backwards compatible, but there are some NUCs that exist for 4.1, really on the language side of things. Some of that shouldn't have existed before, or shouldn't have been possible for a user to do, and some of which, I suppose, are associated with the pattern matching functionality itself. There's a list of them in the 4.1 release notes that kind of outlines what it is that users need to take into account, but they should be pretty minimal. The expectation is it won't require a huge rewrite on anybody's side of things to onboard this and to start getting the value really quickly. But as always, we would suggest to everybody to run it through all of your test suites before putting it to production; it's always a bad idea if you don't do that. But we don't really expect there to be a huge amount of things that people need to change.
00:58:00 [AB]
So another question is entirely unrelated (well, it is kind of a little bit related to that): the previous version was version 4.0 in 2020 and then you have a 4.1 in 2024 that has like a huge thing (the new syntax being introduced). What's up with the version numbers? Do they mean something?
00:58:24 [CM]
I can't speak exactly to that. I think the versioning strategy (and I think Stephen maybe can correct me if I'm wrong on this) has always been to date version upgrades within kdb+. So we usually reserve minor version changes within the updates for pretty large features that go in. But like I said, we're continuously updating the versions that are released today. A good example of that is in 4.0 we added backwards-compatible OpenSSL 3 support for clients, from a security perspective. They're still being iterated against, and there are always new features and new functionality that are going in. It's just that the versioning strategy is based really on when there are really, really brilliant features that maybe might not make sense to be in a patch release, as would normally be the case.
00:59:18 [CH]
One of my questions dovetails onto this, about the releases. We'll leave a link in the show notes (https://code.kx.com/q/releases) if you'd like to see all the past releases. I went and found this site, which I hadn't seen before, and it goes all the way back to 2.4, which was released in 2012. And it seems like the maximum number of releases that happened in a single year was three (that happened in 2012 and 2020). And there's been a couple of years that were skipped: 2018, and then obviously 2021 through 2023. So the question is: can we expect 4.3 later in the year [chuckles], or do you guys have any sort of goals, or is it more that you just develop features and then, when you feel like the time is right, you ship it? So it could be multiple releases in a year, or it could be a couple of years before you see the next one.
01:00:14 [CM]
I think it's largely dependent on when we have a critical mass of functionality that we want to get out to the community. I can't speak to when we can expect 4.2 or 4.3; that's way above my pay grade. But the reality is that it's always going to depend on what's available for us to release tomorrow and whether or not that constitutes a need for a 4.2 release, for example. So it may be the case that it won't be for a while. I can't speak to the timelines for the core team, but the reality is that we're continuously iterating, and once there's a critical mass of features on a development branch that haven't made it into the 4.1 branch, we may release at any point. I'd say always stay next to your computer looking up when to expect the next one [Conor laughs], but I can't speak exactly to the timelines for any of those.
01:01:19 [CH]
What people should do is go and write a script in q. I assume there's got to be some HTTPS [function]: you can ping some site, have that run every five minutes, and then have it send you an email when the updates are there. Speaking of which, we should make sure to mention, for everyone that's listening, that q and kdb+ is obviously a paid product. However, if you are an individual that just wants to try this out, there's a personal license (I think that happened a couple of years ago, since we started recording this podcast). [09] I mean, Conor can correct me if I'm wrong, but I believe at the bottom of the blog you can click on it, or you can just Google it, and you have to fill out an email and a name, I think. But other than that, they'll send you an email and you can download the executable and a license file (license.kval or something like that?). There's some kind of file that you've got to make sure is in the right directory, but I managed to figure it out, so I'm sure everyone else can too. Then you can be off to the races and code in q on your local desktop or laptop, which is awesome, because I think previously the only way you could do that was like ... [sentence left incomplete]. Was it ever on TIO.net, or was there any online REPL? I'm not sure if it was. So yeah, it's awesome that KX has made that available to folks. Conor, if you want to add anything to that.
01:02:36 [CM]
Yeah, so the online licenses are a super straightforward way for people to get involved early and to provide feedback on what we do. The licenses for kdb+ now come in multiple different forms, but (and I'll plug PyKX while I'm at it) with the version you get with PyKX, when a user pip installs PyKX and tries to start it with "import pykx as kx", they'll basically get a sign-up kind of form to fill out that'll redirect them to the website, so they can sign up and copy and paste their license in there at that point. An equivalent exists for q that's started from a Jupyter notebook. So users can go to the site and get it, but they can also follow the instruction steps that are there once they've tried PyKX, get a license, and it'll be installed into site-packages in the PyKX case, or you can specify where the license should live in the normal case. Yeah, it's really helpful for us and for everybody else that we can use it as a tool to help teach people, and have the likes of Nick's work on Fun Q and Q Tips help to move forward a community of people that are hobby users, or users that want to use it maybe not in a production setting, or to build applications, or to run Advent of Code using q and following ... [sentence left incomplete].
01:04:12 [CH]
Yeah, I was going to say, or just to solve leetcode problems [laughs].
01:04:15 [CM]
Yeah, exactly. Following some of the work that ... [sentence left incomplete].
01:04:18 [AB]
An online environment would be very nice to have as well, if you can figure that out license-wise.
01:04:23 [BT]
And a warning, Conor McCarthy [laughs]: that's how the J Playground started. Adám had RIDE working for APL, and I think at that time Marshall also had BQNpad, maybe something else.
01:04:37 [AB]
Not RIDE. TryAPL.com.
01:04:40 [BT]
There was TryAPL, yeah. But all those things were out there. And he just said to me: "well, you know, it'd be really good if something like that happened for J as well". One person said "I think I can do that" and did it, so then we had one as well. So you may find the same thing for you, that somebody goes: "oh, well, actually you could do a browser interface" that allows you to come in and play. But actually my question was: if somebody were to come in and learn q, a question we often get is, are there any jobs [laughs] if you become an array programmer? And I think with q and k, the answer is emphatically yes, but you have to learn them. So this might be a stepping stone towards them.
01:05:18 [CM]
Yeah, I think that's a fair point. One of the benefits of working in q and k generally, at least from my perspective, is that we do have a lot of clients that are using us in production, managing their ecosystems for market data capture in finance, for example, or in Formula 1 for some of the data analytics work that they do. There are always roles available, and one of the benefits of doing that (or of learning q more generally) is that the demand for those jobs is not abating at all. If anything, our users and our clients are wanting to consume more. As a result, they have larger infrastructures and more need for development workflows. KX and First Derivatives have kind of always hired young developers [and] attempted to train them up. We have partners: Data Intellect, [10] for example (formerly AquaQ), who do similar work in the consulting sphere and allow younger developers to get up and running quickly and to learn by practically doing with clients in major tier-1 investment banks or hedge funds. And from there, careers like those Nick Psaris or Attila Vrabecz have had are possible, because you can do a huge amount with q and kdb+ that is practically useful in the financial world and ultimately across multiple different industries. From our perspective, the more people that are using it the better, because it gives us broader feedback. It gets more people excited about using array programming languages and gets them more involved. And from a personal perspective (and I know the same is true of most of the people working at KX), the more we can solve people's problems, the better for us and for everyone involved, because we just want more people to see the benefits that array programming generally can provide. And to take from something that Rob Pike said on his appearance on the podcast: "we essentially just want more people to see that there's a different way of thinking about solving problems and that they can be more efficient doing it without a 'for' loop" in some scenarios. Whereas everybody falls back on that as a default, basically because they've had to everywhere else, before they saw what we can do.
01:08:03 [CH]
What a perfect way to end the podcast. I mean, technically, I was gonna ask one more question, but actually we should ask it now that I've said that. What am I gonna do? I ruined the ending of the podcast [laughs]. I mean, it's a good question to ask, because I'm sure there are a few listeners that are thinking: "we've mentioned KXCon 2023 a couple of times and we've got someone from KXCon on". I happen to know that there are no official plans or concrete dates, but I'll ask anyway, just so that the listener can have the answer: are there plans for a next KXCon at some point in the future? And if yes, have those plans been announced yet, or should we just stay tuned?
01:08:42 [CM]
Though there hasn't yet been any announcement specifically about KXCon, I think stay tuned is probably the correct way to deal with that question generally. I think the reaction to KXCon last year was overwhelmingly positive, though. I think, Conor, your experience at it, and what everybody else that I've talked to has had, kind of shows the strength of the community. And I think from our perspective, whether it's KXCon, whether it's something else, we want to do more of that kind of conversation with the community, because it helps everyone. If one of the boats is rising, in whatever the analogy is going to be, then all of us have the potential to rise with it. I think at KX, the work we do on that side of things, that's the intention, right? We want to get more people involved, and building community and building out ways of interacting with that community, whether it's through outreach, whether it's through conferences, whether it's webinars, whatever it is, makes all of our lives better. And ultimately it gets more people interested in a thing that I think all of us really love.
01:09:53 [CH]
Yeah, I completely agree. And here's a little spoiler for the ArrayCast listeners that also may listen to ADSP: we serendipitously ended up having Phineas Porter on that podcast, and that's coming out, I believe, on episode 176, which will be a week or two after this episode is released. I think in that podcast I was remarking that (and I don't want to insult all the other communities) the array community is the best community. They think the right way, you know? And they're great people, and we don't have to talk about loops and potatoes. It's fantastic. It's the best community. I apologize to all the other communities. I'm definitely looking forward to Iverson College, the next KXCon, and the next Minnowbrook, whenever it happens. Hopefully at some point there will maybe even be some Array Con (I mean, we've talked about this on the podcast before) where all the parties from across the world get together, whether it's virtual [or] in person. But yeah, we will stay tuned. And whenever KX does know, feel free to let us know. I'm sure it'll come across our radar anyway, but we will be sure to announce it on this podcast for folks that either went to the first one and want to go again, or heard about the first one, missed it, and are hoping to go to the next one. Hopefully it'll be either late 2024 [or] 2025. When KX knows, we will be sure to let the listeners know as well. And with that, we'll throw it over to Bob, who will let the listeners know how you can reach us and where.
01:11:27 [BT]
If you would like to give us your feedback on our podcast and the things that have been said, and maybe questions or anything else (you know, it can be fairly general: we get a lot of different questions), contact@ArrayCast.com is the place to get in touch with us. Also, of course, there are show notes with this episode. Many of the things that have specifically been mentioned will have links in the show notes so that you can find out more about them. And oh, the transcripts. We have two people who work on transcripts as well, Sanjay and Igor, and we'd like to thank both of them for their work, because it allows people to access the podcast in another medium. There are a lot of people who don't want to take the time to listen to a podcast, and I won't take affront at that, but it's much quicker sometimes to read the transcript. And I shouldn't even say this, but if you're searching for something, it's way faster to search a transcript [laughs]. I think people know about that, but hopefully they listen to the chat, because I think that's kind of why we do this.
01:12:32 [CH]
I mean, for now. I'm sure we're like T-minus-two months away from some LLM website where you give it a podcast URL and ask it: "hey, tell me when they're talking about X". Not only will it jump to it, it'll splice it down and cut out everything else so that you can listen to an abridged version. But one final thank you, obviously, is to Conor for coming on. This has been absolutely awesome. It was a pleasure meeting you back last year in 2023, and hopefully we'll get to do the same thing again in the next year or two. And yeah, thanks for coming on. Hopefully we'll be able to get you back in the future when kdb+ 4.2 comes out (or maybe it'll even be 5.0, who knows?), and we can hear about the updates to PyKX, q the language, and kdb+ the database. So this has been absolutely awesome. Thanks for taking the time.
01:13:18 [CM]
Thanks for having me guys. It's been a pleasure. It's great.
01:13:20 [CH]
And with that, we will say Happy Array Programming!
01:13:23 [everyone]
Happy Array Programming!
[music]