Ep. 198
Rethink Database Design for the AI Era
Eric Daimler, CEO & Co-founder, Conexus AI
Friday, January 12, 2024

Today, we have Eric Daimler, the CEO and co-founder of Conexus AI. Conexus AI is a hybrid generative AI platform for reliable, rapid digital modernization, helping enterprises migrate, integrate, and transform their IT systems.

In this episode, we delve into the use of categorical algebra to implement a domain-driven approach to interoperability, one that computes the optimal data model rather than relying on manual design. We also explore a common reason IT programs fail: architects misunderstanding how their databases are structured in practice, as opposed to how they were originally intended to be structured.

Key Discussion Points:

  • What key concepts, such as data mesh and enterprise data strategy, should companies consider when building the right architecture to effectively leverage their data assets?
  • How does Conexus AI help companies facing decentralized data challenges?
  • What does the before-and-after scenario look like in terms of data usage and outcomes?

To learn more about our guest, you can find him at:

Website: https://conexus.com/

LinkedIn: https://www.linkedin.com/in/ericdaimler/


Transcript

Erik: Eric, thank you for joining us on the podcast today.

Eric: It's good to be here, Erik.

Erik: Yeah, before we get into the detailed topic, I think it's important to give a bit of background on where you're coming from. You have a really fascinating background.

Eric: Thank you.

Erik: On the one hand, quite a technical background. On the other hand, also with one foot, or at least half a foot, in the public sphere. Can you quickly touch on some of the highlights in your career? I think the White House position as a fellow for machine intelligence and robotics is fascinating, because it feels like that's when the White House first started to think about these topics. Maybe I'm wrong, but it feels like it was fairly recent. Of course, you have a number of board roles that I think also allow you to think outside of your day-to-day.

Eric: Yeah, I've been doing this for a while. How people know me, if they know me, is really from the time that I spent working as a science advisor during the Obama administration. I was the first AI authority serving in the White House, in what's colloquially known as the Science Advisory Group. It expanded into a whole initiative carried forward by a whole set of competent people after me. I'm not saying that there are three people now doing my job, but there are three people doing that job, and I hope to go back. It has really created a terrific trajectory for the country and for the allies of the United States. The value I feel we were able to create is bearing fruit now, in 2023, starting with the refresh of the President's initiative on ubiquitous collaborative robotics. So that's where I spent my time, and that's probably the biggest deal.

To prepare me for that, I had been spending my time around AI in a variety of capacities. I was an AI researcher for a time at Stanford, at the University of Washington, and at Carnegie Mellon, where I was on faculty and where I got my PhD. I had been a venture capitalist on Sand Hill Road, and now I'm on to my sixth startup. And as you said, I've held a few board director positions, several of which are still active today. Other people have done many of these different roles, but I don't know anybody who has done all of those roles in AI. So I definitely think I have a rare, if not unique, perspective on what's going on in the industry and what the future looks like.

Erik: Just quickly, to dwell on this role of government. I think it's fair to say that the U.S., for a number of decades, has not had a strong industrial policy in a lot of areas. Maybe tech has been a domain where the free market was more or less allowed to run free, with quite limited regulation or direction, and that seems to be shifting somehow. How do you view this? Obviously, you had a role in this, so I guess it's a net positive. But if you just look at the status quo today and how America is driving its technology policy around the development of AI, do you think we're on the right track, or are we just figuring things out right now as a nation?

Eric: Yeah, I like you suggesting, Erik, that anything I was involved in had a net positive outcome. It's nice. I'd like to think so. As for the heady conversation about industrial policy, I'm not sure that the U.S. has really changed its stance. Especially in Silicon Valley, there's really a libertarian political bent: let's get government out of the way, I'm busy innovating here. I'm not sure that's changed. Although, there are certainly people who want to make sure that what we bring into the world contributes. We're not immune to the potential dangers of this technology. We want to make sure that it serves the purpose of making our lives better.

I don't think that we are necessarily changing the ideal direction of the tech from the government's perspective. What I did in government was work with some other really smart individuals, speaking humbly on behalf of the President about the future direction of AI research. It is historically a unique capacity of government to fund work that doesn't immediately have some commercial application. That is something I advocated then and continue to advocate today. I'm happy that that continues, even as the economics among non-profits, academia, government, and the for-profit hyperscalers change in dynamic and texture as we train large language models.

Erik: Okay. Thanks. That makes sense. I appreciate you explaining the focus. I think that gives us a good jumping-off point for discussing the work you're doing today. Because, at least the way I look at it, it's very much about helping move AI forward in a way that's more systematic and foreseeable from a corporate development perspective.

One of the challenges that a lot of companies have with AI is that development maybe looks different from a PhD's perspective than it does from a corporate position. From a corporate position, it looks very much stepwise. You have something like autonomous driving, which was promised to us many times, eight years ago or so. We don't see any of these vehicles on the street, aside from small pilots. But then, hypothetically, three years from now or at some point in the future, all of a sudden they're going to be there. It's going to look like somebody snapped their fingers, and all of a sudden, AI is everywhere.

So forecasting expectation versus reality as a business is very important, because it can be very impactful, either beneficially or negatively, for particular businesses. How do you look at this general challenge of understanding where we are today, and what the feasibility is of moving a particular domain of AI forward?

Eric: Yeah, there's a lot to say there. I wouldn't want to put you on the spot, but we can ask the listeners to do a little exercise in their heads: when do they think the first autonomous vehicle operated on public streets anywhere in the world without the assistance of a human driver? So I'm going to say that again. What year did an automobile of some flavor take to a public road without a driver behind the steering wheel, operating completely autonomously? Going by the suggestion in your question, we might say, what, 2010? Or go back to 2005. Maybe we go back to 2000 and say, well, that was just the expression of some research that had been going on before.

But the real answer is 1983. Pittsburgh, Pennsylvania, Carnegie Mellon University: there was a van. A big van. It had more computing power than the entire developing world at the time, or so says the mythology. It went five miles per hour on a sunny, bright, dry day. And yet, it was driving on its own. The issue with timing is, in the old adage for venture capitalists, don't confuse a clear vision with a short time horizon. We can very much see what's going to happen. But, boy, finding out when is a virtually impossible task. And finding out what's going to happen after something in the near term happens is often almost impossible.

For example, I was working with a group that was trying to predict what the future of the Internet looked like in 1999. This was after web pages were becoming ubiquitous and more of a commodity. Initially, web pages were something of an art; it was a business to create web pages. But what would come after web pages was predicted by IBM, oddly enough. They had this whole campaign around e-business, and they did a good job of predicting, in a general sense, what would come after web pages. They said web pages would decrease in importance. But it didn't help them define the app economy. Where is IBM in that? Nowhere, although they predicted the future in some sense. All of us that ever watched a Star Trek episode could have predicted in a general sense what an iPhone would look like. But none of us knew what the exact expression would look like. And even if we had imagined at some level of detail what these devices would look like, you can go further and say none of us would have predicted that there would be a job called app developer, or that app economy that IBM had predicted in a general sense.

Or, like today, TikTok influencer. The future is virtually impossible to predict in any meaningful sense. What I really offer to the clients of my firm is adaptability. You could say there's personal resilience, but the company organizationally needs to be quick to move to new realities as they're presented. And, kind of befitting my background, I'm going to say that the flexibility comes from the ability to process data quickly in a principled way. You can't quickly learn and turn an organization if how you interact with people is in English or whatever language, using qualitative, informal means to operate the business. You need to have as much of the business as possible be formalized, encoded, systematized. Because that's the only way that you can begin to see what's coming and be able to adjust how you react to it. So that's the answer. It's flexibility. Don't try to look into a crystal ball any more than anybody else.

Erik: Okay. Got you. So the solution here is not forecasting. The solution is architecture that allows you to react to whatever reality emerges. Can you explain this concept of data mesh, or enterprise data strategy, or any other concepts that you think would be useful? Let's say we're imagining a Fortune 500 company where the senior management are not IT professionals. They're all 60 years old, and they've lived in a world where this hasn't been particularly important. Now, all of a sudden, it's very important. So what are the important concepts for them in defining how their company builds the right architecture?

Eric: Sure. A lot of these terms are for marketing, and they get morphed around. AI itself didn't really mean what it means today. Artificial General Intelligence doesn't really mean what it's come to mean today. So if we talk about data mesh, or data fabric, or data warehousing, or data mart, or data lake, those are just funny terms. But I can say that, in a general sense, what people need to do is bring together their data. Then they might structure their data slightly for a particular purpose. So let's collect all the data in one big cloud. It can be a cloud on Snowflake or AWS or something. Let's collect all of our data. Then let's structure it somehow, and we may need to structure it by owner. Because after we get it all together, we do have some individual needs. We have marketing. We have manufacturing. We have sales, which is different from marketing. We have maybe customer fulfillment. We have our supplier network. So maybe there are some individual needs, and that can live inside of a structured environment such as a data warehouse.

What a data fabric is (this is just terminology): a data fabric says, we are going to centralize the ownership of this data and have somebody manage it for the different needs of those different departments. What a data mesh means is to have that data ownership be distributed, so individual clusters of data ownership can take place. Say, you have one drug research program and another drug research program, and it makes sense for the ownership to be separated by program. You might have similar issues by jurisdiction, when geographies have different regulations for the storing of data. There are technologies that enable that, but it's often just a manual process. So you can think broadly about what you want to do with your data, and then think about what the right technology is to do it.

I will say that you can take a pretty critical eye to many of these technologies, because data lakes are often disappointing for people. If I threw all of the books from the U.S. Library of Congress into a warehouse without the benefit of a Dewey Decimal System, that's what people call a data lake. Just throw it into a big room. Okay, there we go. All my data is together. Well, that wasn't helpful. It's all about retrieval, not just storage. Then how do I get this data out? That's often what people can mean by the data fabric or the data warehouse. Well, how do they structure it? If they're going to structure it by first name and last name, maybe that's not how some of this data is characterized. That's a little bit like taking all the books in the building and then sorting them by color or by height. You've got to be really careful about how you make that data available.

In my firm, Conexus, we have clients that know they have data they want to use, but they just can't get at it. They'll have one piece of data stored in SAP and another in Oracle. There are other pieces of data stored in other domains and other formats, other structures, maybe in other geographies. They just can't bring that together. They know it's available; in theory, it's available. But in practice, it's just not available to them. That's the problem sometimes with these data fabrics. Data meshes, excuse me. That's going to continue to be a problem using the solutions available today. But that gets at the sort of question an executive can be asking when deploying these technologies: tell me exactly how this is going to bring my data together and make it available for my operational needs.

Erik: Yeah, yesterday I was meeting with the head of digitalization for, let's say, a top-three beer company here in Singapore. The first thing that she said was, "I don't know what data we have, and I don't know how to access it." She's sitting here in Singapore, and they have Laos, Cambodia, Vietnam, and Thailand. Each market uses its own tech stack. They have their own data. They have their own CRM, to an extent. So there's just not that central visibility. Then, of course, the local teams also have IT constraints, so they might not be structuring their data in a particularly systematic way. Then you have your distribution partners who, of course, have a lot of data. But then you get into the topics of how we can actually access that, what we can access and when, and so forth.

And it really struck me. This is a very large company, and the data should be relatively simple, right? It's not like we're dealing with complex supply chain data or manufacturing data. It's really all the same type of data: okay, who's buying beer, and what type, from which venues. So it should be relatively simple, but it's nonetheless a completely unsolved problem. Let's get then into a bit of Conexus AI. What is your value proposition for how you help companies get their hands around this challenge?

Eric: That was a great lead-in, Erik, because you just described the value proposition. What Conexus is is a data integration company: we allow more of an organization's data to be made available, with benefits in speed, in cost, and in integrity. Our applications vary from national defense to energy discovery and distribution to logistics and manufacturing. Really, every large organization has the problem of using the data that they actually collect. Right now, the classical approaches are manual. They use old tools, take many months if not years, and consume hundreds of millions if not billions of dollars that generally get funneled to large consultancies. So what we are is a data integration company with material benefits in speed, cost, and integrity.

Erik: Got you. And if we can pick on Accenture here, I guess the default today would be: somebody might call up Accenture and say, I've got a problem, can you put 50 guys on it, and two years later, see if they come up with a solution? We could replace that with any other large integration company; they're doing a lot of the work here. As you said, it's a very manual process. How do you use technology to solve that? What are the specific aspects of that problem that can be solved with technology? There are a few terms on your website (composable engineering design, category theory, categorical algebra) that seem to be at the core of your technology. Can you put those into layman's terms for us and help us understand the underlying technology?

Eric: Sure. We can pick on Accenture. It's a very well-run firm with some very smart people. They make $6.7 billion a year off this particular type of manual operation in bringing data together. It's not that you'll just hire 50 people to do a job over a couple of years and then see what they do. You'll hire 500 people over five years, and 60% of the time they'll come back and tell you the project failed. Or, they'll tell you they solved 80% of the problem and declare it a success. Well, really, they did the easy part. The last 20% was the hard part. So they didn't really solve the problem of making all your data available.

We had a money center bank come to us and tell us how they had initially allocated $20 million, then allocated another $85 million, before they had to wipe the slate clean and start all over, allocating another $100 million over another five years. At the end of it, they had spent 10 years and $210 million or something like that on a down-scaled product. They came to us and said, well, we solved the problem. But we know that as soon as we add or deprecate another database, we start all over again.

What we are doing, what Conexus is doing, is commercializing a discovery in mathematics. The branch of mathematics is called category theory. I call it categorical algebra to make it more familiar. Category theory is an abstract math; it's related to type theory. But probably an easier way for people to understand it is in relation to graph theory, where my PhD focused. People will know graph theory by its visualization, if not in name. They'll recognize these graphs that look like spider webs, where things are connected to other things in a number of different ways. It becomes a seemingly infinitely complex web of relationships among anything: drugs, biology, social networks. That's one way of using graph theory. The innovations there really took off about 25 years ago, to detect and disrupt terrorist networks. Mathematics was at the core of that. It's now used to detect fraud in financial systems.

You might just expand that visualization, that notion of having relationships, to a different number of dimensions. If we think of a chessboard and then think of 3D chess (add a couple of chessboards and stack them, so you can move your pieces in different dimensions), we can maybe do the same with that graph. If you have that spiderweb graph, give it a couple more dimensions, or really an arbitrary number of dimensions, or, as we say in math, n dimensions. So n-dimensional graph theory is category theory; we might say it's graph theory with more richness.

Another way to describe it is that graph theory might describe you and I, Erik, having a relationship. So you and I have a relationship from talking right now. That's a relationship. Yes, no? Yeah, great. We have a relationship. But if you put us in a triathlon, we might have a very different relationship. We might have a relationship as competitors in a triathlon. That's the sort of richness of the relationship, where it's not really just yes-no, on-off, but nuanced and context dependent. And it's in that context that the whole world emerges. Because that's what your data has. Your data has context. Your data has descriptions.
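
To make that concrete: a minimal sketch in Python (hypothetical names and contexts, not Conexus' software) of the difference between a plain graph edge and a context-carrying relationship:

```python
# A minimal sketch (hypothetical names) of the distinction described above:
# plain graph edges are yes/no, while richer relationships carry context.

# Plain graph theory: an edge between two nodes either exists or it doesn't.
plain_edges = {("Erik", "Eric")}

# Context-rich relationships: the same pair can be related in several
# distinct, context-dependent ways.
contextual_edges = {
    ("Erik", "Eric", "podcast_conversation"),
    ("Erik", "Eric", "triathlon_competitors"),
}

def related(a, b, context=None):
    """True if a and b are related, optionally within a specific context."""
    if context is None:
        return any(x == a and y == b for x, y, _ in contextual_edges)
    return (a, b, context) in contextual_edges

print(related("Erik", "Eric"))                           # True
print(related("Erik", "Eric", "triathlon_competitors"))  # True
print(related("Erik", "Eric", "chess_opponents"))        # False
```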

In a trivial example, think of a column name in Excel being a part number. Well, that's a context. If I have a part number and a list of part numbers down a column, and I then do a cut and paste or a merge with another Excel spreadsheet, let alone 1,000 of them, the context of part number only applies to a certain number of those columns. The context needs to be preserved for the data to have any meaning, but so often it's lost. What happens today is that you just don't have a principled way of maintaining the discipline of the programmers integrating these Excel spreadsheet columns, which is fundamentally what databases are, you could say. So when you get to 100, let alone 1,000, integrations of these things, mistakes are made. We're human. So you have data errors, or you just run up against a scale of complexity that makes the data all but unavailable to you. That's the classic approach. Category theory as a math, as an abstract math, allows the solving of that problem through the software expression of it that we originated.
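
A minimal sketch of that part-number example, in Python with hypothetical data, shows how a merge keyed only on a column header silently conflates two meanings, while a context-aware merge refuses to:

```python
# A minimal sketch (hypothetical data): merging on the header alone loses
# the column's context; a context-aware merge preserves it.

sheet_a = {"part_number": ["PN-001", "PN-002"]}  # internal part numbers
sheet_b = {"part_number": ["SUP-77", "SUP-78"]}  # supplier SKUs, same header

# Naive merge: the headers match, so values are blindly concatenated and
# the distinction between the two kinds of identifier is lost.
naive = {"part_number": sheet_a["part_number"] + sheet_b["part_number"]}
print(naive["part_number"])  # mixed identifiers, context gone

def merge_columns(col_a, ctx_a, col_b, ctx_b):
    """Merge two columns only if their semantic contexts agree."""
    if ctx_a != ctx_b:
        raise ValueError(f"context mismatch: {ctx_a!r} vs {ctx_b!r}")
    return col_a + col_b

try:
    merge_columns(sheet_a["part_number"], "internal part number",
                  sheet_b["part_number"], "supplier SKU")
except ValueError as err:
    print(err)  # context mismatch: 'internal part number' vs 'supplier SKU'
```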

I can go further. NASA, which was one of the original catalysts for this technology, found that they were spending 10 years developing their rockets, which was increasingly infeasible. The technology was moving quickly enough that the results were out of date before they got done. They thought, we need to move more quickly. But the interrelationships and contexts of the technologies that comprise a rocket couldn't be preserved as the project progressed. And no one's arguing that a simpler world is emerging. We're all probably in alignment that the world is becoming more complex, and relationships are becoming broader, deeper, and more numerous as we invent more and create more. So we need a new technology to bring all that data together and preserve the context. That's the value of category theory. That's what the software provides for every company that stores and uses data. That's what we see as the big future for AI.

Erik: Is this primarily a solution that's required for AI? Because as you're describing this, I'm kind of thinking back to this very simple problem of managing a CRM across different geographies and different organizational units. That seems to be by itself an unsolved problem. But it's really not an AI problem. It's just a problem of having different data structures in each region or for each business unit. But nonetheless, having some commonality and the desire to know, okay, we're serving this customer in this market. Are you also serving them in that market, maybe in a different way? Maybe it was in a different subsidiary or whatever. So it's not an AI problem, but it's still a data integration problem.

Now, is your technology also applicable or used for these, or is that kind of a situation where there are existing solutions, and you're really focused on this specific problem of enabling the application of AI to these datasets?

Eric: I would encourage your audience to broaden their perspective on what AI is. It is certainly finding its current telegenic expression in large language models: an expression of neural networks, which are themselves an expression of machine learning, which itself is a domain under probabilistic AI.

But there's an entirely different domain of AI called symbolic AI, or deterministic AI. If you remember from your high school logic or philosophy classes, there is a bottom-up and a top-down way to do logic: induction and deduction. Induction is what you use with these LLMs. Deduction is what you use with a symbolic AI. Ultimately, we're going to have to combine them for many applications in complex environments. The telegenic LLMs are currently all the rage, but what are those technologies except a collection of a breathtaking amount of data and a reconfiguration of it? The autocomplete, as they say about LLMs, is analogous to how a symbolic AI works across billions or trillions of data points inside a database. We recognize the patterns and, using the math, find the provably complete, minimal pathway that connects truths. It's really quite complementary in many of these applications. It's no more and no less AI, whether it's symbolic or probabilistic.
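
As a toy illustration of the deductive side (a sketch, not Conexus' actual system): forward-chaining deduction in Python, where every conclusion is traceable to the premises that proved it:

```python
# A minimal sketch of deterministic, symbolic deduction: forward chaining
# over explicit facts and rules. Every derived conclusion records the
# premises that proved it, so the full proof pathway is inspectable.

facts = {"is_pump(P1)", "rated_pressure(P1, high)"}
rules = [
    # (premises, conclusion)
    ({"is_pump(P1)", "rated_pressure(P1, high)"}, "needs_flange(P1)"),
    ({"needs_flange(P1)"}, "schedule_inspection(P1)"),
]

derivations = {}  # conclusion -> the premises that proved it
changed = True
while changed:  # iterate until no rule adds anything new (a fixed point)
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derivations[conclusion] = premises
            changed = True

print(sorted(facts))
print(derivations)  # the proof pathway, step by step; nothing confabulated
```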

Erik: Okay. Yeah, I guess today, a lot of people use AI and ML synonymously, right? But AI has a much broader meaning beyond ML, which is just the flavor of the moment.

Let's make this a bit more concrete by looking at your solution. You have adaptable data consolidation and adaptable data interoperability. Could you walk us through a couple of case studies, end to end: What was the problem? Who was the team? I think it's also interesting to know who you're actually working with at these companies. And what do the before and after look like, once the solution is deployed, in terms of how people are using the data?

Eric: Yeah, I can offer a few. One that people would recognize is Uber. Everybody understands the business of Uber, and it's a recognizable brand. They grew up quickly, paying attention to their business. Like your friend in Singapore, they grew up having databases across their organization that they know exist but can't get to. Uber grew up with a database per city, because they were trying to manage the deployment of their company as they expanded across cities. They had databases across every city of the world.

This ended up being 300,000 databases. They then needed to ensure that they respected the privacy lattice created by regulations, which might have different sensitivities for driver's licenses versus license plates, for example. They also wanted to look at complementary geographies to do basic supply-demand testing, where two cities might be adjacent but have separate databases. How can they plan on ramping up supply or driving demand based on, say, weather or an event happening? At the time, that was a manual process, requiring many dozens, if not hundreds, of people working to respect the jurisdictional preferences embedded in these different databases. They looked around the world for a way to bring these databases together in one unified view.

They found us. They found Conexus. We worked with them on the solution to integrate these 300,000 databases so that they can ask ordinary business intelligence questions with more alacrity, and more easily respect the privacy lattice embedded in the jurisdictions in which they operate. To hear them tell it, they'll save US$10 million plus a year based on the technology that we were able to offer. That's one good use case of how Conexus was able to help Uber improve their business.
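
To illustrate the shape of the problem (a hypothetical schema and policy, not Uber's actual systems): a unified view can apply each jurisdiction's restrictions before any field is exposed:

```python
# A minimal sketch (hypothetical schema): one unified view over per-city
# databases, filtered through a per-jurisdiction privacy policy.

city_dbs = {
    "Pittsburgh": [{"driver_license": "PA-123", "plate": "ABC-1", "trips": 40}],
    "Singapore":  [{"driver_license": "SG-987", "plate": "SGX-9", "trips": 55}],
}

# Which fields each jurisdiction permits in an aggregated view.
privacy_policy = {
    "Pittsburgh": {"plate", "trips"},  # license numbers restricted
    "Singapore":  {"trips"},           # plates restricted as well
}

def unified_view():
    """Yield rows from every city, keeping only the fields its policy allows."""
    for city, rows in city_dbs.items():
        allowed = privacy_policy[city]
        for row in rows:
            yield {"city": city, **{k: v for k, v in row.items() if k in allowed}}

for row in unified_view():
    print(row)
```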

Erik: Okay. So these are similar databases, maybe tracking somewhat different variables based on privacy regulations or just the whims of the management team in each of these geographies. I guess you have other situations, let's say, with a manufacturer where they have a CRM, supply chain systems, MES, and ERP: different layers of systems. They have different teams that have to collaborate closely together but that are capturing completely different types of data in completely different systems and architectures. How do you look at those environments?

Eric: Yeah, so we did this for an energy company, one of the Fortune 50 companies, based in the United States. In the design of a particular part, they had to bring together the engineering models of mechanical engineers, civil engineers, electrical engineers, and geologists. All of them had their own perspective on the environment in which they found themselves and on the parts that needed to be engineered inside that environment.

The traditional way of doing this was to send around, essentially, Excel spreadsheets, and have everybody take another swipe at how somebody's interpretation of maximum area surface pressure could align with somebody else's interpretation of maximum area surface pressure. You could then determine whether or not you wanted to put on a flange, how big the flange would be, and where you'd put it. It is really quite something. Then there was an internal auditor and an external auditor, because mistakes could really get people killed. I laughed just because it was such a burdensome process.

So much better. It's so much better to feed these multiple engineering models into Conexus' effective, predictable AI and have the provably complete and provably simplest output created. Instead of taking six weeks to six months, it takes six seconds to six minutes to create the output that synthesizes all of these views from these complex engineering models. That's what an AI should be for. That's what you use it for. This is a tool to help us collaborate more effectively, more quickly, and with more integrity.
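
A toy sketch of that reconciliation step, with hypothetical figures: normalize each discipline's reading of the same quantity to a common unit, then accept a synthesis only if the models agree within tolerance:

```python
# A minimal sketch (hypothetical values, not the company's models): two
# teams quote "maximum area surface pressure" in different units; normalize
# before comparing, instead of emailing spreadsheets back and forth.

TO_PASCALS = {"Pa": 1.0, "kPa": 1_000.0, "psi": 6_894.757}

models = {
    "mechanical": (250.0, "kPa"),  # mechanical engineers' figure
    "civil":      (36.26, "psi"),  # civil engineers' figure, different unit
}

readings = {team: value * TO_PASCALS[unit] for team, (value, unit) in models.items()}

# Synthesis step: accept only if every model agrees within a 1% tolerance.
spread = max(readings.values()) - min(readings.values())
if spread <= 0.01 * max(readings.values()):
    print("consistent:", readings)
else:
    raise ValueError(f"models disagree: {readings}")
```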

Erik: Got you. So in this case, the output is a solution for a particular query. And I guess, ideally, not a black-box solution but something that they can break apart and understand. So if the regulator has questions about why you did this, you can explain exactly what the model is and how it was making decisions.

Eric: Oh, that's beautiful. Not only is it not a black box, it actually has to be expressed as a manufactured good. You have to actually know, provably so, that this thing will work in the real world. And the beautiful thing with symbolic AI is that you'll even see the pathway by which the AI created the synthesized model. The beauty is, nothing is invented. Nothing is confabulated (or, to use the popular but less accurate term, hallucinated) in the creation of these models. This is not just taking in truth; it is emitting truth. This truthful AI is only available using a symbolic AI.

Erik: Okay. Very interesting. Because certainly, one of the challenges with this new field of generative AI is that the probabilistic approach works for creating a poem. It doesn't work for solving engineering problems that could result in a life-threatening situation.

Eric: Yeah, your listeners can ask themselves. How many would be willing to get on an airplane that was designed by a large language model? The answer would be zero. But a truthful, predictable AI, sign me up.

Erik: So those are great cases. I know that you're also going to be launching a book quite soon called The Future is Formal: The Roadmap for Using Technology to Solve Society's Biggest Problems. Is this upcoming book specifically focused on the solutions that Conexus is developing, applied to social problems? Can you give us a bit of context around the thought process?

Eric: Yeah, we want to teach people the broader process that informs how businesses can better prepare themselves for the world of AI. As we talked about earlier, in an environment where you can't really see more than two years in the future but know that you need to be more flexible and adaptable, the preparation that's involved is to be more formal. The future is formal. Or, you might say, to be more disciplined or more rigorous. How this expresses itself is taking business processes and formalizing them in a way that a robot can act on them. That's really the gold standard to know whether or not you have put sufficient rigor into your process. Another way of talking about it is in hardware design. If you can encode your business rules in a semiconductor, you know you've done the appropriate level of thinking. That's what's necessary.

To give an example, we are working with another very large company who has a board-level initiative, as one does in 2023, about AI. This board-level initiative has looked into the future with their crystal ball and seen ubiquitous collaborative robots, which is heartwarming to me, because this is what we envisioned when I was in the White House in 2016. Looking out 10 years, we saw ubiquitous collaborative robots. This company, now in 2023, sees that in the not-too-distant future for themselves. In this manufacturing environment, just to spell out the word ubiquitous, they see thousands or tens of thousands of robots working alongside their workers. Some of these could be harmful to humans, but they are collaborative robots, not the type that you see doing welding on automobiles. How do you keep people safe as these robots move around? They're not just doing the tasks we see robots doing today. They're going to be ubiquitous. So how do we encode those rules?

This requires the business to go through 2,000 or 3,000 different business processes with the proverbial fine-tooth comb, with sufficient rigor that they can be interpreted by robots. That's the job: 2,000 to 3,000 business processes, written with the clarity a machine needs to read them. That's what we need to do. That's what companies need to do. How to do that is what we're working to articulate in the book The Future is Formal, so that executives, all the way down to engineers, can take these skills and prepare their organizations to be adaptable for the future. That's the idea.
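
As a toy illustration of what machine-readable rigor might look like (a hypothetical rule, not one of the company's actual processes): a single safety process encoded as a checkable predicate:

```python
# A minimal sketch (hypothetical rule): one business process formalized
# tightly enough that a robot controller can evaluate it directly, instead
# of a paragraph of prose a human must interpret.

from dataclasses import dataclass

@dataclass
class WorkcellState:
    human_distance_m: float  # distance to the nearest person, in meters
    robot_speed_mps: float   # current robot speed, meters per second
    payload_hazardous: bool  # is the carried payload dangerous to people?

def may_proceed(s: WorkcellState) -> bool:
    """Formalized process: hard-stop near people, wider envelope for
    hazardous payloads, and slow movement inside the caution zone."""
    if s.human_distance_m < 1.0:
        return False  # hard stop: a person is too close
    if s.payload_hazardous and s.human_distance_m < 3.0:
        return False  # wider envelope when the payload is hazardous
    return s.robot_speed_mps <= 0.5 or s.human_distance_m >= 2.0

print(may_proceed(WorkcellState(2.5, 1.2, False)))  # True: clear to move
print(may_proceed(WorkcellState(1.5, 1.2, False)))  # False: slow down first
```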

Erik: Okay. Yeah, that's a fascinating problem. Because business processes, of course, are contradictory, right? If you have 10 business processes, there are going to be contradictions, and you just trust the human to figure out the contradictions and prioritize appropriately. If you're looking at 2,000, there are going to be all sorts of logical contradictions. Figuring out how to enable the robot to make those prioritizations and to interpret some degree of vagueness is a fascinating problem.

Eric: Yeah, it's a good point. Some business processes will be contradictory. And some you may not have known were contradictory; they will only reveal themselves as contradictory once you begin to formalize them. There will be nuance available, and there will be areas that are really difficult to formalize, or that maybe you don't want to formalize because part of the output is creative. But when I'm designing an airplane, I want the areas where I'm creative to be constrained. I don't want the reconciliation of vibration between a wing, a fuselage, and an engine to be creative. I want that to be quite precise.

Erik: Oh, it's really an interesting thought process. Because you think, well, humans are probabilistic, right? We do make very precise vehicles that are very safe, but we're also doing that in a probabilistic way, with a certain degree of uncertainty. So maybe that's necessary. But nonetheless, how do you then sense-check that? How do you have one aspect of the algorithm that's probabilistic and another algorithm that then puts that through a rigorous evaluation process before it's released into the wild?

Eric: So the rule here is: when you want to learn from an AI, you use a large language model, a probabilistic model. When you want to teach an AI, that is, when you have rules about the design of an airplane or the construction or distribution of energy, you use a deterministic model. You want both. You can use both. When you want to teach, you use deterministic. When you want to learn, you use probabilistic.

Erik: Okay. Makes sense. Last question. So far, we've ruled out the value of crystal balls. Nonetheless, I'm going to ask you about the future but maybe just the future of Conexus. What's exciting for you over the next 24 months? New products that you're launching? Are there changes that you're seeing in the regulatory framework in terms of how boards are making decisions that are exciting? What are you thinking about when you wake up in the morning?

Eric: Well, what I'm thinking about is making sure my team is taken care of and our clients are happy. But specifically, I think about how there's an emerging awareness that large language models deserve the hype in the long term but are beginning to be recognized as overhyped in the short, maybe even medium, term. The actual use cases for many large organizations have so far demonstrated themselves to be quite limited. It's attractive over the long term, and I love how many people are engaged in the conversation around AI. But there's an emerging awareness that we need to incorporate truth and predictability into our deployments of these learning algorithms, of these AIs. That's hugely encouraging for me and my organization, because it's what we've based our future on. And it's why I bet my career on doing this after I left the White House in 2017.

We're seeing some fantastic engagements with some of the largest companies in the world and some governments you would have heard of. It's a fantastic time. What gives me some degree of optimism (and I'm not ignoring the many, many dangers that present themselves around the deployment of AI) is that the lifestyle we all enjoy today is afforded by the improvements in productivity we've had over the decades, over the centuries. I think we are emerging into a time where that increase in productivity may very well accelerate and give us all an even greater standard of living.

I think the challenge very well might be that we will begin to see the rest of the world's population that has so far been neglected, and think that somehow the world is getting worse, when really, it's just that all these people are becoming visible to us in a way they haven't been over most of our lifetimes. So I'm hugely optimistic about the improvements in the quality of life for both the developed world and the developing world.

Erik: Yeah, great. I mean, technology is power, so we will certainly be more powerful. The question is whether we'll have the wisdom to deploy that in a way that is generally beneficial as opposed to specifically beneficial. I guess that will be the challenge for the next generation or two. Eric, fantastic conversation and a really interesting company that you're building. Any last points that we haven't touched on yet that are important for people to understand?

Eric: I just really encourage people to experiment with AI. There are some really terrific apps emerging, and the way to explore them is to actively experiment. I encourage my friends and family to keep a tab open on your browser for Anthropic, Pi, and ChatGPT, and maybe any number of the image generators, just to see what you might do with them. We have 25, 30 years of experience using modern search; we now need to train ourselves to all become prompt engineers. Experimenting with the technology is my recommendation for individuals. For organizations, the winning organization in 2030 will be the one that has the most formalized rules, therefore allowing for maximum adaptability and flexibility.

Erik: Yeah, great advice, Eric. Thanks so much for the conversation today.

Eric: It's been great, Erik. It's been a good time.

