Ep. 190
Transitioning from CAPEX to OPEX-based Offerings with IoT
Andrei Ciobotar, CTO, Relayr
Tuesday, October 24, 2023

Today, we have Andrei Ciobotar as our guest, who serves as the Chief Technology Officer at Relayr. Relayr provides comprehensive industrial IoT solutions and data-powered financial services to assist their customers in their transition towards Equipment-as-a-Service. With a decade of experience, Andrei is instrumental in relayr's success across cloud development, edge development, and AI engineering.

In this episode, we explored how IoT can facilitate the transition from capital expenditure (CAPEX) to operational expenditure (OPEX) models for industrial equipment suppliers, simultaneously mitigating financial risks for operators. Additionally, we delved into the progression of artificial intelligence within industrial settings, moving away from customized, labor-intensive solutions towards standardized offerings tailored to specific asset categories and application scenarios.

Key Discussion Points:

  • What industries are most open to rethinking asset ownership and OEM business models?
  • How do predictive maintenance and simulation work together to reduce downtime, considering the complexity of the predictive maintenance process?
  • Do you see value in creating simulated data, and how do you view generative AI as part of your tech stack?

If you're curious to know more about our guest, you can find him on: 

Website: https://relayr.io/

LinkedIn: https://www.linkedin.com/in/andreiciobotar/

Transcript

Erik: Andrei, thanks for joining us on the podcast today.

Andrei: Thanks for having me. Glad to be here.

Erik: I got to say, when I first got into this industry probably seven years ago and then really started deep diving into industrial IoT, relayr was one of the first companies that came up on our radar. And so it's great to be talking to you today. Let's start with a really high level. So you've been at relayr for — what is it now? About six years, is that about right? Seven years?

Andrei: Closing in on seven, yeah. So maybe just a few words. I'm the Chief Technology Officer for relayr. I think I'm around the 6 years and 10 months mark, if I'm not mistaken. I've worn a few hats over the years. I started out as an AI Director. I moved on to managing engineering teams as VP engineering, and now I'm focusing on technical strategy as CTO. Prior to relayr, I've also had the privilege and pleasure to co-found an AI startup where I've worked on systems that combined natural language processing, computer vision, and a little bit of time-series processing. I'm based in Munich, Germany.

Erik: Relayr is a very interesting company. Because, on the one hand, you have your tech stack and your technical products, but you also have financial services and insurance products because of the ownership by Munich Re. So it would be great to understand a little bit how you marry those together, because I think that's really a unique part of your business. So why don't we start with the 101 from a business perspective? What is the scope of relayr today?

Andrei: Yeah, I mean for relayr, our goal really is to be the partner of choice for industrial businesses. We seek to empower them by unlocking new offerings and service-centric business models. We do this by effectively leveraging machine data that we collect to provide data analysis and modeling. We have an array of repeatable offerings focused on predictive maintenance with our SKYLER line of products. The focus for these two products is rotating equipment and elevators. But as you also noted, we also develop an Equipment-as-a-Service product that is aimed at aiding the CAPEX-OPEX transition. What this product effectively does is combine the world of IoT and the world of financial operations and structuring. We have in-house expertise on the financial structuring side. This is something that we've absorbed over the past few years. We're also working — naturally, since the company is owned by Munich Re through HSB, there's a bit of a line of ownership there — with them as partners for the insurance side of things.

I think what is really important to note here is that, despite being called an industrial IoT company, our lifeblood is ultimately the data. Data science is how we derive value for our customers. I think today we can scratch the surface of this field. But it may be even more important to note that, even in the world of Equipment as a Service, data continues to be the most important element. Not just because we derive usage from the data that we get, but because it's also essential to combine the usage-based billing with data-driven products aimed at, say, increasing uptime or increasing the quality of the output, which is something we can also cover today.

Erik: If we just choose an example, since you have quite a strong elevator portfolio, and describe the value proposition — would it then be that you would go to an elevator manufacturer with the proposition of helping them convert their offering, which is selling an elevator, from a CAPEX investment towards an OPEX investment? That would change their cash flows and also change their relationship to their customer, who would be, let's say, a building asset owner. Is that how relayr plays in this market, by enabling that shift from CAPEX to OPEX?

Andrei: The Equipment as a Service play is less focused on the elevator segment. That's where we have the SKYLER Elevate product. We're more focused on the industrial sector with the Equipment as a Service offering. So think in terms of high-value industrial equipment. Consider you're an OEM that partners with relayr to bring usage-based business models to your end customers. One example is Heidelberger, where we're working on, effectively, printing machines. These are taken to the end customers and bundled with a consumption-based offering. There are some very practical advantages of doing so purely from a cash flow perspective. But it also gives the end customer a lot more flexibility with regards to shaving off peaks in demand, as an example, and coping with drops in usage.

What is I think really important is, when we look at relayr, to really think of the SKYLER line and Equipment as a Service as three distinct products. Sometimes they can be bundled together. For the most part, we really deliver them separately to our partners and end customers.

Erik: Okay. Clear. So if we continue that line of thought then — you have an industrial equipment manufacturer. They're deploying your technology onto their solutions so that they can offer this more flexible, OPEX-based business model. Then you have the financial services and insurance part of your business. How does that fit into their offering when they're selling their equipment into the market?

Andrei: Well, there's a few elements to it. When we talk about insurance, we think in terms of uptime guarantees as an example. That's really where the whole data and AI piece comes in. That also brings with it more sophisticated connections into the end customer infrastructure to get to this data. Then there's naturally the financial structuring piece. I would say when you look at Equipment as a Service purely from the billing mechanics perspective, it's 20% to 30% IoT. Everything else is financial structuring — managing the billing model, operational asset management. And obviously, there's the whole financial structuring piece as an OEM: you would want to somehow have a pool of assets that you can take to your end customers, and those have to be financed. So there's the whole dog and pony show around the financial structuring piece of things.
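
To make the billing mechanics concrete, here is a minimal, hypothetical sketch of how monthly usage-based invoicing could be computed from metered usage. The tariff structure (base fee, per-unit rate, a cap with a discounted overage rate) is purely illustrative and not relayr's actual pricing model.

```python
# Hypothetical usage-based billing sketch: a fixed base fee plus a per-unit rate,
# with usage above an agreed peak volume billed at a discounted overage rate.
# All names and numbers are illustrative, not taken from relayr's offering.
from dataclasses import dataclass

@dataclass
class Tariff:
    base_fee: float        # fixed monthly fee
    unit_rate: float       # price per unit of output (e.g., per printed sheet)
    peak_cap_units: float  # usage above this cap is billed at the overage rate
    overage_rate: float    # discounted rate beyond the cap

def monthly_invoice(usage_units: float, tariff: Tariff) -> float:
    """Compute a single month's invoice from metered usage."""
    billable = min(usage_units, tariff.peak_cap_units)
    overage = max(usage_units - tariff.peak_cap_units, 0.0)
    return tariff.base_fee + billable * tariff.unit_rate + overage * tariff.overage_rate

tariff = Tariff(base_fee=5000.0, unit_rate=0.02, peak_cap_units=1_000_000, overage_rate=0.01)
print(monthly_invoice(1_250_000, tariff))  # 5000 + 20000 + 2500 = 27500.0
```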

Erik: Well, I think it's very interesting. Because IoT, by the nature of it — you said you're really a data company, right? And having access to this data allows you to rethink business models, or allows your customers to rethink their business models, because they can actually track how assets are being used in a way that wasn't possible before. But then when it comes to the end customer — I know you're CTO, so maybe you're not front end on the business side day to day. But when it comes to the end customer, industries are in different places, let's say, in terms of acceptance of new business models. Some industries might be more willing to accept. Others, I think, can be quite traditional and resistant to rethinking how they make purchases. What's been your experience in terms of the industries that are most ready to rethink their ownership models for assets or, from the OEM side, the business model for bringing assets to market?

Andrei: I think there's two sides to the maturity discussion. One is on the OEM side. Specifically, is the OEM prepared to effectively uproot its entire business model for a real shift to service-based offerings? That is something that can't really be approached in a big bang fashion. So what we're seeing with our partners, especially in manufacturing, is that it is a line of product offerings that runs parallel to the existing business. So it's not a full transition in that sense.

Looking at the end customer side, I think it's a little bit more interesting purely from a practical perspective. Obviously, there's a discussion around ownership of the asset. But I think the practical benefits of not having to own the asset outweigh any kind of resistance — you could almost call it ideological resistance to not owning the asset per se — purely from the overhead of servicing or, again, from managing variable asset usage over time. I think there's a stronger storyline there for the end customers when it comes to the transition. And I think we covered the challenges on the OEM side.

Erik: There was a company I was talking to — this was probably already three years ago or so — but they made the comment that for their products, they were selling them to manufacturers, but it was actually easier for them to sell to the financers behind the manufacturer, so to a bank. Because those guys understood the value proposition of being able to track, let's say, the return on an asset. Whereas selling directly to maybe the general manager of a factory was more of an uphill battle, or more education was required. Do you find yourself typically also getting financial firms, who might be significant shareholders of operating businesses, involved in this purchase, or are you typically dealing directly and only with the, let's say, OEM or the manufacturer?

Andrei: It's typically the latter for now. Consider the topics around residual values and the like — diverse topics that we handle with our know-how within relayr. That's the first element. The second element is that these models really become interesting once you bundle them with data-driven offerings: smart maintenance, tailored maintenance packages, and guarantees on the performance of the asset, be it quality or uptime. That's something you have to do either way with some sort of partner on the IoT side. We like to think of ourselves as a one-stop shop for all of these things, resolving all of that complexity. In general, I think people tend to underestimate the complexity of managing all of these elements and putting them under a single pane of glass that you can take to an end customer.

Erik: Yeah, certainly. I think that gives us a good foundation for what the business looks like. You're the CTO, so let's get into the tech stack a bit more here. I think the topic we'll probably end up focusing on is AI. But before we go there, let's do a quick walkthrough of what it looks like. On your website, you have edge, middleware, AI, and visualization. Is that typically how you characterize the key elements?

Andrei: Yeah, we develop end-to-end solutions. I think it's important to look at the tech stack not just through the lens of what we have available today, but also through the lens of our journey to get here. In the past, we've been very much focused on bespoke IoT solutions, simply because of the environment, the available offering on the hyperscaler side of things, and the maturity available on that end. We had also set out to build the IoT platform ourselves — components that you would typically find as turnkey services today. When you look at the offering of an Amazon or Azure, back then, it wasn't available. So a lot of the stuff we had to build ourselves, be it data brokers or even user and access management.

Around two and a half years ago, we made a strategic shift in our stack. To your point, we continue to focus on the edge, but the middleware stack became considerably thinner because we focused on adopting hyperscaler services for some of the components we had been developing from scratch. And we shifted that attention from the middleware to what we call the data-driven product side, specifically focusing more on business logic that is repeatable and is a lot more focused on a specific vertical rather than a one-size-fits-all platform. So it's still an end-to-end setup, but the weight really moved from middleware to product quite considerably in the past few years.

Erik: If we zoom in on AI — this seems to be really the critical question that a lot of industrial technology companies are wrestling with right now — how do we make the AI repeatable? If we look at predictive maintenance, say there are two production lines in an automotive factory. They look similar, but the reality is that they have somewhat different components and different behaviors. And so it's two projects to deploy a predictive maintenance solution on both production lines, to some extent, right?

Andrei: Right.

Erik: What do you find in terms of how you look at a situation and evaluate? Is this something that is going to be scalable and therefore have an ROI for the customer, and therefore be a good project for us versus one where you might advise the customer, this is not going to make sense because of reasons XYZ and a lack of scalability? How do you look at that situation and make an assessment today?

Andrei: As you noted, transitioning from lab to production continues to be, I think, a challenge for most of the players. There's a few things that we noticed. I'm really only scratching the surface here. But even when you look at a relatively straightforward-sounding predictive maintenance system, you've got to consider the variety of stacked problems that you solve. Even starting at the sensor level: when to wake up the sensor, how to tailor the collection strategy to the timescale of the problem that you want to solve, identifying faults and then classifying them. You're typically looking at assets that have different modes of operation. How do you identify the right mode of operation, and how do you differentiate the transition to a different mode of operation from an actual fault? Then there's thinking about how to create models for each of these different modes of operation. It's such a multi-layered problem.

When you look at it through the lens of the two overarching constraints — one being reducing the time to value — how do you solve all of these problems in a way that translates well across different customers and asset classes? Then the second constraint that I think is quite important to consider is how to balance the sensitivity of all of these systems against the economics of the business case. At the end of the day, you've got to ask yourself: how many inspections due to false positives can I accommodate as an organization? What is the cost of a false negative, and so on? These are all things that have to be baked into the models.
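
As a back-of-the-envelope illustration of that trade-off, one can compare alerting policies by the expected cost of inspections triggered by false positives versus the expected cost of missed failures. The figures below are made up for illustration; they are not relayr numbers.

```python
# Illustrative expected-cost comparison for tuning how sensitive an alerting
# system should be. It ignores the cost of warranted inspections, which would
# be incurred either way. All rates and costs are invented for the example.
def expected_monthly_cost(alerts_per_month: float,
                          precision: float,           # fraction of alerts that are true faults
                          recall: float,              # fraction of real faults that get alerted
                          faults_per_month: float,
                          inspection_cost: float,     # cost of responding to one alert
                          missed_failure_cost: float  # cost of an unplanned breakdown
                          ) -> float:
    false_positives = alerts_per_month * (1.0 - precision)
    missed_faults = faults_per_month * (1.0 - recall)
    return false_positives * inspection_cost + missed_faults * missed_failure_cost

# A sensitive model: many alerts, few missed faults.
print(expected_monthly_cost(20, precision=0.3, recall=0.95, faults_per_month=2,
                            inspection_cost=500, missed_failure_cost=50_000))  # ~12000
# A conservative model: few alerts, more missed faults.
print(expected_monthly_cost(4, precision=0.7, recall=0.6, faults_per_month=2,
                            inspection_cost=500, missed_failure_cost=50_000))  # ~40600
```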

To your point about productizing AI and then having it translate well across different customers: in our experience, it's essential to be very prescriptive with the data collection strategy — the mechanics of it and also the data path — so that you can at least rely on the characteristics of the data, the richness of the data, and so on. Really, the other element is — and this sounds quite banal — focusing on very specific asset classes. Maybe this was not obvious at the peak of the hype wave with AI and IoT, but very few of these systems translate well across different asset classes. It's really essential to focus, and it's also what drove our thinking. Rather than solve 1,001 bespoke AI problems for different asset classes and employ different methodologies for each one, we find there is a lot more value in solving one class of problems well with an end-to-end stack that we understand and control.

Erik: Elevators look like one of the asset classes that's a priority for you. Maybe two questions there. One would be, what are the other asset classes where you see a lot of potential or where you're currently focused? Then, how do you make the assessment that elevators and these other asset classes are the ones where you want to put your energy?

Andrei: Obviously, we look at the characteristics of the market as well. But maybe from a technical perspective, the two products that we are developing are Rotate and Elevate. Rotate is focused on — as you can probably tell from the name — rotating equipment. I think what is key here is, obviously, the business context but also the fact that this is a rather well-researched problem statement. Not necessarily a well-researched solution space, but definitely a well-understood problem statement where we believe we can deliver value in a repeatable fashion. What is important to also consider, though, especially for Rotate, is that orthogonal to the asset class, you also have the market. That also brings a little bit of flavor to the problem statement. If you have a large fan versus a small fan, you're operating on different timescales. So we also have to be a little bit careful with not treating all asset classes similarly even if they are rotating equipment. Because, again, if we focus on markets where we are operating on a timescale that is quite long, then it's an entirely different product at the end of the day — a different data collection strategy, and so on. So really, we narrow down by considering all of these elements: the asset class itself, the timescale of the problem, the additional flavor that the market brings to the problem statement, and also, obviously, the economics of solving the problem.

To my initial point: can this operator afford to do routine inspections? Is there a business case to be had here by adding a level of automation to maintenance and delivering some sort of predictive system that reduces the overhead of inspections and maintenance? As for elevators, it's a little bit simpler. Obviously, there's a lot of variation in the types of elevators as well, but not as much, really. From our perspective, the sweet spot with elevators is this: if you look at the large elevator OEMs, there is a tendency to start developing IoT capabilities that are baked into the elevator infrastructure, so to speak. Our sweet spot really is more in the augmentation of existing fleets in a non-invasive manner. So there's obviously an economic trade-off discussion to be had with regards to the time to install the unit as well, which we factor in.

Erik: On that last point, if you're dealing with brownfield situations, it sounds like you're heavily focused on the analytics. The hardware — are you providing your own hardware solutions, or are you making an assessment and using off-the-shelf solutions? What part of that decision do you own and drive, as opposed to passing it off to the customer, when it comes to these brownfield situations?

Andrei: Well, both products are end to end because of similar constraints vis-à-vis what we can rely on in the data that's coming from these systems. To your question, yes, Elevate does come with its own hardware. The entire product experience is designed around assumptions that we can make simply by virtue of controlling the data collection mechanism. I think it's also important to note, on the hardware side, that it's a non-invasive solution in the sense that it does not require connecting to the elevator control board, as an example, or any sort of invasive work on the elevator side. I think that's a strength of the product that we also manage in-house at the end of the day.

Erik: One way to look at this market is asset classes. Another way to look at it would be use cases. Predictive maintenance is certainly one of the focus areas for you. Your colleague mentioned simulations — maybe that's around simulations to improve operational efficiency, or energy management, or something else. What are the other use cases that you've found tend to have a solid ROI and tend to be, say, technically feasible to replicate or to scale across a large fleet of assets?

Andrei: It's maybe not super obvious, but one way of looking at predictive maintenance is as a simulation problem. When you frame it as a simulation problem, it is ultimately a method to figure out the future behavior of a machine based on its past and present states: simulating the progression of the state over time considering wear-and-tear stressors, historical patterns, and so on. So in some sense, the world of predictive maintenance and the world of simulation go hand in hand. Concretely, we see a lot of value primarily on the reducing-downtime side of things. That's not to say, though, that predictive maintenance is one single technical problem that we solve. As I mentioned, to figure out the potential downtime of an asset, you have to consider quite a few layers to it. Each layer in isolation can also drive rather significant value.
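
A toy illustration of that framing — not anything relayr ships — is to simulate a health index forward in time under an assumed wear model and estimate when it crosses a failure threshold, which is essentially a Monte Carlo view of remaining useful life.

```python
# Toy degradation simulation: a health index drifts downward with a constant
# wear rate plus noise; remaining useful life (RUL) is estimated by running
# the simulation forward many times. The wear model is a deliberate simplification.
import random

def simulate_rul(health: float, wear_per_day: float, noise_std: float,
                 failure_threshold: float = 0.2, max_days: int = 3650) -> int:
    """Return the number of days until the health index crosses the threshold."""
    for day in range(1, max_days + 1):
        health -= wear_per_day + random.gauss(0.0, noise_std)
        if health <= failure_threshold:
            return day
    return max_days

random.seed(42)
runs = sorted(simulate_rul(health=0.9, wear_per_day=0.002, noise_std=0.001) for _ in range(1000))
print("median RUL (days):", runs[len(runs) // 2])
print("10th percentile RUL (days):", runs[len(runs) // 10])
```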

When you look at predictive maintenance through the lens of "I'd like to know when this asset will break down," then that's really the holy grail of predictive maintenance. But there are a couple of milestones on the way that are also valuable on their own. If you work your way backwards: maybe you don't know the exact moment when the machine will break down, but you would have found some sort of anomaly in how it's working, and you can classify it. Already, you're one step ahead of where you would have been without the system. Even without the classification, you would at least have an anomaly that, if it repeats over time, tells you that you have a potential issue. And the anomaly detection itself can be built with AI, or a more naive version can be based on simple rules at the end of the day. There's no shame in that, really.
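
To illustrate the "simple rules" end of that spectrum, here is a minimal sketch of rule-based anomaly flagging: compare each new reading against a rolling baseline and flag large deviations. The window size and three-sigma threshold are assumptions for the example, not relayr defaults.

```python
# Minimal rule-based anomaly flagging: compare each new vibration RMS reading
# against the mean and standard deviation of a sliding window of recent values.
import random
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 100, n_sigmas: float = 3.0):
    history = deque(maxlen=window)
    def is_anomalous(value: float) -> bool:
        if len(history) < window:
            history.append(value)   # still building the baseline
            return False
        mu, sigma = mean(history), stdev(history)
        anomalous = sigma > 0 and abs(value - mu) > n_sigmas * sigma
        if not anomalous:
            history.append(value)   # only healthy-looking readings update the baseline
        return anomalous
    return is_anomalous

random.seed(0)
detector = make_detector()
readings = [1.0 + random.gauss(0.0, 0.02) for _ in range(200)] + [2.5]  # spike at the end
flags = [detector(r) for r in readings]
print(sum(flags), "reading(s) flagged")  # the 2.5 spike should be flagged
```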

What I'm getting at is, predictive maintenance is the holy grail, but the journey to get there, I think, has some really valuable milestones to offer as well. We look at topics around quality control as well, but they're not really part of our product offering. One reason for this is the availability of the training data for these systems, which is quite scarce. The other reason has been the fact that, typically, QA processes are asynchronous. That is, you create a batch of output from your asset and then do the quality control maybe later in the day, or maybe the next day. So it turns from an AI problem, which is probably not very difficult to solve on its own, into a discussion around the process with your customer and how to handle the QA process. It becomes a lot more involved purely from the operational overhead side of things. Those are the two reasons that come to mind. And when I think about quality control, topics like computer vision, as an example, come to mind.

You mentioned simulation. I think one really interesting discussion that is relevant for us on the simulation side of things is also how to generate synthetic data as part of a larger test bench to benchmark our own models — in a way, simulating failures that you can then benchmark your algorithms against. That is something that we've been exploring. There are some state-of-the-art methods using autoencoders that we've also explored. I think, really, as a company, we've only scratched the surface of what AI-assisted simulation can offer us. Even if there is no direct benefit on the end customer side, it does make our systems more powerful, and that does translate to customer value at the end of the day.
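
As a rough sketch of the autoencoder idea — purely an illustration of reconstruction-error scoring, not relayr's actual models — one can train a small autoencoder on healthy feature vectors and then benchmark it against synthetic "faulty" samples. The layer sizes, the stand-in data, and the fault injection are all assumptions.

```python
# Sketch: train a tiny autoencoder on "healthy" vibration feature vectors, then
# score samples by reconstruction error. Synthetic out-of-distribution samples
# stand in for simulated failures to benchmark the scorer against.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, n_features: int = 16, latent: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(), nn.Linear(8, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 8), nn.ReLU(), nn.Linear(8, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
healthy = torch.randn(512, 16) * 0.1           # stand-in for healthy feature vectors
model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                            # short training loop, illustration only
    optimizer.zero_grad()
    loss = loss_fn(model(healthy), healthy)
    loss.backward()
    optimizer.step()

def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

print(anomaly_score(healthy[:3]))               # low reconstruction error expected
print(anomaly_score(torch.randn(3, 16) * 2.0))  # synthetic "faulty" samples should score higher
```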

Erik: This concept of creating simulated data, do you see value for your work processes from generative AI, whether it's in the creation of datasets, or scanning paper documents, or let's say PDF documents and information that's in text as additional data input? Or is it user interface augmentation in order to share the outcomes of a particular analysis in a more intuitive way? How are you looking at generative AI as part of your tech stack?

Andrei: We're actively exploring these use cases. There's a few areas of particular interest to us. One is going in the direction of large-scale data integration for our knowledge base. Consider topics around interactive manuals and generation of manuals. This has a direct influence on time to value. Consider the benefits of sensor onboarding with interactive assistance that can really guide the installation process and the troubleshooting in a way that's relevant. That really helps us scale onboarding partners for deploying sensors as an example.
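
As a rough sketch of what the retrieval piece of such an onboarding assistant could look like — using plain TF-IDF retrieval as a stand-in for whatever embedding or LLM stack is ultimately chosen — the manual snippets and the query below are placeholders, not actual relayr documentation.

```python
# Sketch of retrieval over an installation knowledge base: given an installer's
# question, return the most relevant manual snippets. In a full assistant these
# snippets would be passed to an LLM to draft the answer; here we stop at retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manual_snippets = [  # placeholder content, not actual documentation
    "Mount the vibration sensor on the motor housing using the magnetic base.",
    "Pair the sensor with the gateway by holding the pairing button for five seconds.",
    "If the gateway LED blinks red, check the cellular uplink antenna connection.",
]

vectorizer = TfidfVectorizer()
snippet_matrix = vectorizer.fit_transform(manual_snippets)

def top_snippets(question: str, k: int = 2) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([question]), snippet_matrix)[0]
    ranked = sorted(range(len(manual_snippets)), key=lambda i: scores[i], reverse=True)
    return [manual_snippets[i] for i in ranked[:k]]

print(top_snippets("The gateway LED is blinking red, what should I check?"))
```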

Another area that is quite interesting for us, maybe more mid-term, is on the UX side of things. So consider data interpretation. Consider combining generative AI with technologies such as LangChain, having conversational interfaces for our end users and allowing the dashboards to effectively translate the natural-language ask of the end user into some sort of visualization that makes sense and is relevant for that persona. Maybe the last piece, and you've already touched on it, is the acceleration of R&D. I don't think there's a lot of promise, at least not with the technology that's out there, in synthetic data generation or in really accelerating the development of predictive maintenance systems on the implementation side of things. But it can help with a lot of the text summarization exercises when we consume scientific material, as an example. That could be a bit of a shortcut for data science teams to take.

For LLMs, for me, at the end of the day, there are two big blockers. One is that hallucinations — when the model sounds like it's right but it's not really right — need to be tackled reliably. Only once those hallucinations are tackled can we have a discussion about high-stakes implementations. Maybe for the use case that I just mentioned around sensor onboarding, we can afford the model being wrong every now and then. But for any application where the AI is somehow assisting our user, I would be very careful until we have a little bit more work done on the explainability side of things. Obviously, there are answers quickly emerging, to be fair, around where we even run this generative AI. There's been a bit of an explosion in the past few months with the likes of ChatGPT, LaMDA, and Microsoft's Azure OpenAI services. Naturally, none of these use cases can be approached using public APIs — that would probably be unwise. But I think it's interesting to also look at, for instance, the emerging open-source technology and our ability to run it.

To give you a very concrete example, if we create an onboarding assistant for our sensors and we expect to run the system in an environment with no internet connection, then that naturally invites a discussion about running edge models. So we're really keeping an eye on all of these developments with regards to shrinking models, sparse models, and all of the various flavors of language models that are emerging in the open-source space. I wish I had a use case for you that is really mature at this point in time. But I think, like many others, we're really scratching the surface of what can be accomplished. We are making our first timid steps in that direction. But I think this field has a lot of promise, for what it's worth.

Erik: Yeah, I guess one other area which is more on the development side is use of these technologies just to support your coding processes. So are your teams using any over the shoulder software to help with writing code today, or are you integrating that into the product so that, let's say, your system integration partners or your customers can more easily modify and have a lower technical barrier to making modifications on your platform? Is that something where you see promise or adoption today?

Andrei: We see promise. We haven't adopted them yet as we're still considering the risks. Consider the likes of Copilot as an example. That is, I think, an offering that even precedes ChatGPT. There is an ongoing legal debate that has pretty far-reaching consequences, especially when you look at topics such as copyleft. The lawsuit is still ongoing. I think there are really three potential outcomes on the horizon. One is some sort of GNU GPL-style license that prohibits AI training on specific code bases, along with a dismissal of all of the claims. Another outcome is that Microsoft wins the lawsuit, and GPL and other copyleft licenses are rendered irrelevant. Then there is the cataclysmic option where Microsoft loses the lawsuit, and all of the code generated through Copilot, or ChatGPT, or whatever else is flagged as GPL code. There are hundreds of large companies out there — be it Stripe, Coca-Cola, General Motors, and so on — that would then have to explain how much of their codebase can be flagged as GPL code. And how can you even file a copyright claim for fictitious entries, as an example?

So I think there's definitely promise, and I can think of very concrete use cases to accelerate development both on the API side and in some of our own internal processes, around QA in particular. I think it's very easy to generate test cases, and even edge-case tests, using these systems — something that is quite repetitive in nature on the QA side of things. But there are some unanswered questions with regards to the legal side of things that make us a little bit cautious about implementing these systems.

Erik: Yeah, got it. This is maybe a conversation for another day, but one thing that I end up talking to a lot of my European clients about is the competitiveness of the continent versus US companies and companies in different Asian countries — let's say, the impact of policy on competitiveness. It's a bit of a tricky situation right now, where the incentives are pushing governments to take risks. I think Europe is the least willing region to do this, but maybe China and the US are somewhat more willing.

Andrei: Well, let me give you a different example, though. This is as of June this year, if I'm not mistaken. On the European side of things, we have the European Data Act. It's effectively focused on fair access to and use of data. I think that's a piece that is really relevant for companies such as relayr, because it's legislation that is geared towards breaking down data silos and democratizing access to data — specifically, giving third-party companies such as relayr potential access to data that would normally be walled off by OEMs and their proprietary systems. I think there are rays of hope and some good moves being made on the data side of things in Europe as well. But, to your point, there is a deeper discussion about competitiveness and regulation, where different philosophies will lead to different outcomes in competitiveness in the mid to long term. Europe certainly has a track record of being a little bit more cautious and inclined to look at risk and regulation versus other geographies.

Erik: Yeah, clear. Well, why don't we wrap up with one or two case studies? You've given us some good examples of use cases and industries. But if we can choose one very specific customer example and walk through it: what does this look like from a technology deployment perspective? But also, what does it look like in terms of how they're rethinking their business model?

Andrei: Well, I think when you look at the SKYLER line of things, they all follow the same customer journey at the end of the day. So I won't really focus on the sales process itself. Obviously, there are questions that I've hinted at earlier with regards to the economics of the business case, so that has to be sound. At the end of the day, it has to be clear that we generate value with our product. But purely from a deployment perspective, we follow the same recipe regardless of asset class.

There are a few discrete steps. One is really doing a site survey: figuring out, alright, what are the constraints on the site that we have to be mindful of, and what are the specific limitations with regards to the infrastructure that we can deliver? Concretely: how many gateways and how many sensors do we need? Can we deploy a mesh network? Are we looking at some sort of Faraday-cage type of situation where we can't really get data out of the asset wirelessly? All things that we have to look at. The next step is really the sensor installation piece. This is something that, currently, we also deploy engineers to do. What is, I think, really key on the sensor installation piece is that we also do a data sanity check and set up the infrastructure for the uplink.

In many situations — I would say the vast majority of situations — you will have a separate data path to the relayr cloud and will not tap into existing IP or LTE infrastructure on the customer side. I think that's something the customers would also be inclined to want to avoid, simply because it adds a new level of complexity and security exposure to their infrastructure. Once we establish data transfer, there is an entire exercise around baselining — concretely, calibrating the algorithms. Starting to measure on assets that are not working as they should will lead to bad results at the end of the day, simply because the model will be trained to think that faulty behavior is normal behavior. That's a sanity check that is also supported by our in-house vibration analysts. After that, we are effectively going to production, where, especially in the early months of a deployment, we pay a lot of attention — not only with our data scientists but also with vibration analysts — to making sure that the models behave as they should and that we course correct by adding additional labels to the system if it is not delivering the outcomes that it should. This recipe applies whether it's a pilot or a large-scale deployment. Obviously, you'd be looking at partners to help with onboarding sensors for a large-scale deployment. But it's still really the same journey from a customer perspective.
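
A simplified illustration of that data sanity check and baselining step — flagging flatlined or gappy signals before a commissioning window is accepted as the healthy baseline. The thresholds are assumptions chosen for the example, not relayr's commissioning criteria.

```python
# Simplified commissioning check: before accepting a window of readings as the
# "healthy" baseline, verify the signal is neither flatlined nor full of gaps.
from statistics import mean, stdev

def accept_baseline(readings: list[float], expected_samples: int,
                    min_std: float = 1e-4, max_missing_ratio: float = 0.05):
    missing_ratio = 1.0 - len(readings) / expected_samples
    if missing_ratio > max_missing_ratio:
        return None, f"too many gaps ({missing_ratio:.0%} of samples missing)"
    if len(readings) < 2 or stdev(readings) < min_std:
        return None, "signal looks flatlined (sensor or mounting issue?)"
    return {"mean": mean(readings), "std": stdev(readings)}, "baseline accepted"

print(accept_baseline([1.0, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.0], expected_samples=8))
print(accept_baseline([1.0] * 8, expected_samples=8))  # flatlined signal is rejected
```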

To your last question about business model change, it also depends on who your customer is. If your customer is an operator, then at the end of the day, you're helping make the maintenance processes more straightforward, more reliable, more predictable, and so on. If your customer is an MRO that takes your solution to its own end customers, to its own operators, then you also have a discussion around joint go-to-market, about white labeling, about creating basically a facade for the product that is tailored to the MRO and bundled with the MRO's service offering. I think that's really a distinction that we also have to make, and the customer journey there is very specific. I could also speak about Equipment as a Service, but I think we would need a separate podcast episode for that. Consider that even the process of getting an asset into the field is something that could take up to a year or two. It is really quite an involved process.

Erik: Great. Well, it sounds like a very well-thought-through process, both from your perspective and from a customer, quality management, and workload management perspective. You've mentioned a number of the technology areas that your team is working on or that you're personally interested in. Maybe just as a final point here, what are we looking at over the next, let's say, 12 months? Are there new products that you have in the pipeline, new functionality that you're planning to bring out? What should we be expecting from relayr in the coming year?

Andrei: When we look at our own product portfolio, I think there are going to be a lot of great developments purely on the algorithmic side — new features around our products to handle variable-speed motors. I think the highlight for us is really expanding the number of different asset classes that we can tackle with our product line. I also hope to be able to deliver the more cutting-edge, future-oriented functionality that is really focused on these new generative technologies being developed right now. I think those are really the two highlights purely from a product line perspective.

If you look at the industry at large, and maybe an area where I'm personally curious, it's really how Nvidia is combining IoT, simulation, and remote collaboration with its Omniverse stack. So that's something that I would also love to dive into in the next few months. I think there are some really interesting developments with factories that are basically simulated before they're even built. One is under construction, I believe, in Hungary. It would be really interesting to see how reality matches the vision that the team started out with and how it played out.

Erik: Absolutely. I mean, it's a fascinating time right now. There is so much investor capital going into this space more broadly, even if it's not always in these particular domains. That is certainly going to push the tech stack forward. So we'll be very excited to see what comes out over the next couple of years. Maybe I can have you on the podcast in 12 months or so and see where you are on these topics. I would love to talk again.

Andrei: Absolutely. I would love to.

Erik: Andrei, thanks for taking time today. Appreciate it.

Andrei: Thank you for having me.
