Ep. 080
Industrial edge computing
Chuck Byers, Associate CTO, IIC
Friday, January 22, 2021

In this episode, we discuss industrial edge computing from the perspective of benefits, properties, and system architecture. We also explore the potential positive impacts of 5G base stations on industrial edge computing.


Chuck Byers is the Associate CTO of the Industrial Internet Consortium. The Industrial Internet Consortium is the world's leading organization transforming business and society by accelerating the Industrial Internet of Things (IIoT). Its mission is to deliver a trustworthy IIoT in which the world's systems and devices are securely connected and controlled to deliver transformational outcomes. iiconsortium.org


Transcript.

Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today is Chuck Byers, associate CTO of the Industrial Internet Consortium. The Industrial Internet Consortium, or IIC, is the world's leading organization transforming business and society by accelerating the industrial internet of things. In this talk, we discussed industrial edge computing from the perspective of benefits, properties, and system architecture. And we also explored the potential positive impacts of 5G base stations on edge computing development.

If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Thank you. Chuck, thank you so much for joining us today.

Chuck: Oh, thank you so much for inviting me. I think it's going to be an interesting topic.

Erik: Yeah. So this topic of edge computing has come up for us very frequently, regularly. But then also, I've read a lot of other articles, obviously, the IIC has just published this white paper. Maybe as a starting point, from a business perspective, why now, why is this coming together? And then why is this interesting to the market? But before we get into that question, Chuck, it'd be great if you could just give a little bit of background on who you are, personally, and also, who the IIC is, and what type of work the IIC does so that the listeners of the podcast understand who they're listening to today.

Chuck: Certainly. Hi, everybody. My name is Chuck Byers. I'm the [inaudible 02:12] of the Industrial Internet Consortium. I was 10 years at Cisco, where I did architecture for edge and fog computing and various other Internet of Things applications. And before that, I was a Bell Labs Fellow at Alcatel-Lucent and AT&T. So I've been in network design, architecture, and system engineering kinds of roles for almost 35 years. And I have 107 US patents; around 50 of them relate to edge computing and its applications.

Let's talk a little bit about the Industrial Internet Consortium. It's about a six- or seven-year-old consortium. Right now, we have around 200 members. And our vision and mission is basically to work on the trustworthy internet of things. So this is a network just like the internet that you use day to day, but instead of connecting iPhones and laptops, it does all that plus it connects sensors and actuators and various other intelligent devices that we find in multiple different vertical markets and lots of different applications.

The Industrial Internet Consortium is very interested in making the Internet of Things trustworthy. So that would be making it have appropriate properties with respect to lots of different trustworthiness attributes: security, privacy, safety, reliability, resiliency. All these things are necessary if you're going to make the Internet of Things useful for mission-critical and perhaps life-critical applications.

The other thing that the Industrial Internet Consortium is very interested in these days is digital transformation of businesses. Digital transformation is a word associated with the digitization of all processes. And these are processes that your information technologists might care about, things like [inaudible 04:17] and stuff, and also things that your operations technologists might care about, things like automating factories or logistics or supply chains.

So the Industrial Internet Consortium is very interested in building a system-of-systems view of how information technology, the sort of CIO office of your business, and operations technology, the COO office of your business, are going to get together under the banner of digital technology and network-connected resources. We operate with basically lots of voluntary labor. Our members provide lots of input on what direction we should set for the Consortium via a steering committee.

And we also have lots of technical, marketing, safety, and security kinds of working groups that are responsible for putting together things like this white paper that we're going to be talking about in a minute and understanding how it all fits together under that mission.

Erik: And just a quick note here: the IIC has a ton of freely available documentation on their website around the work that they're doing. So I do encourage everyone to check that out if you're not familiar yet with their work.

Chuck: Absolutely. If you go to iiconsortium.org and take a look at the resources and publications page, there are ones that I think are particularly interesting to someone who's sort of a novice user of the Industrial Internet of Things or edge computing. There's the Industrial Internet of Things reference architecture; that's very useful. There's a bunch of security work, which turns out to be a real problem in these architectures, and particularly something called the security maturity model, or maturity readiness models for security; that's something that you should certainly have a look at. And then there's a collection of white papers under edge computing.

And the one that we just released last month is called 'Industrial Internet of Things Distributed Computing at the Edge'; it's the first one listed. And it's full of really interesting, very current information and guidance for people who want to decide whether edge computing is right for their applications and, once they make that decision, how to implement edge computing for those applications.

Erik: And given you've been in this field for 35 years, I assume that those patents are strung out over the past decades. And so I'm curious, when did you first start working on edge computing? Because on the one hand, edge computing is a topic that really the venture capital community and multinational product development departments have started investing in or focusing on in the past two, three, four years; on the other hand, a lot of the core technologies have been around, or have been in development, for decades now. When did your career related to edge computing begin?

Chuck: Well, I suppose it started at the University of Wisconsin in about 1980 when I was an undergrad student. I actually taught the Computer Control and Instrumentation Laboratory at the University of Wisconsin, Madison. And I had myself a bunch of [inaudible 07:43] 11 computers that were hooked through a bunch of interfaces to a bunch of physical plants. We had people who were doing fetal heart rate monitors right next to people who were controlling model railroads. And it was one of these big laboratory classes where you had to actually implement a system, hardware and software, to make it work.

So it's really been here, not necessarily in name but in principle, since, I suppose, the dawn of computation, when the first sensor was hooked to the first [inaudible 08:19]. That's the kind of stuff that represents edge computing. I would say that it got a nice boost with some work at Carnegie Mellon about 10 years ago on something called cloudlets. That represented a maturing realization that centralized computers, all the big eggs in one time-sharing-system basket, might not make sense for a bunch of reasons.

And we should distribute those computational capabilities deeper in the network. So that probably represents a resurgence in the interest in edge computing. The rise of the Internet of Things is certainly an important contributor, because as we start putting millions and billions of sensors and actuators out there to make the entire planet alive and digital, we need edge computing capabilities to support that.

In terms of commercial products that are being offered, I suppose the earliest ones date about five years back to what Cisco called fog computing. We didn't really use the term edge computing very much at that point. We called it fog, and fog was simply the cloud closer to the ground; that's why we coined that term. And that resulted in something called the OpenFog Consortium, which was absorbed into the IIC, the Industrial Internet Consortium, about two years ago.

We also should acknowledge the role of ETSI, a standards organization, the European Telecommunications Standards Institute. They have an architecture they call Multi-access Edge Computing, MEC. And that turns out to be much the same edge computing that Cisco was doing for industrial applications, but much more associated with 4G and 5G networks. So ETSI started MEC about the same time that Cisco was working on fog. It's been standardized, and it's being rolled out right now at the base of a Verizon or AT&T cell tower near you. So that's a different flavor than I'm going to talk about in the rest of this white paper.

Over the last maybe two or three years, people have started to realize that the cloud is just inadequate for the kinds of performance that we need on these critical edge computing functionalities. And I can talk at length about the reasons why the cloud isn't adequate, and why you would want to move at least a portion of your cloud workloads into edge computing resources.

Erik: Before we get there, why don't we just quickly touch on the benefits? I suppose the benefits are then directly related to what edge computing can provide that the cloud maybe cannot provide. But if you can share a little bit of the benefits in the context of the most common use cases, are we talking about shop floors, are we talking about moving vehicles? What would be the use cases? And then what would be the benefits in the context of those situations?

Chuck: Let's talk about a few of the different clusters of use cases. Certainly, there's a lot of use cases coming up quickly in the industrial environment, doing things like process monitoring and predictive failure and predictive maintenance work, where that bearing sounds like it's about to fail and we'd better replace it before it does. That represents an important set of use cases, an important vertical. Lots of things in autonomous guided vehicles, robotic welding, all this kind of stuff require edge computing, for reasons I'll talk about in a minute. So there's that industrial and manufacturing vertical that probably represents, right now, just under half, I suppose, of the shipped installed base of edge computing.

Another really important vertical is smart grids, where I have a substation or a power distribution network, or a set of micro energy suppliers like solar panels on rooftops, and wind turbines, and I need to control all that stuff as a cohesive grid. It turns out edge computing is very helpful for that smart grid application. It can also be applied to other utilities, water, natural gas, wastewater, and so on. So there's this need to sort of control all the valves and all of the transformers and all of the circuit breakers, and do so with a very solid edge computing base.

I would say that transportation is another important vertical for edge. Autonomous vehicles: think about the antilock brakes on a self-driving car. That's a real difficult problem to solve from a performance as well as a reliability perspective, and edge computing can certainly assist with that. I've been writing a lot about ground support networks for high-scale drone services: package delivery, services delivering drugs to remote clinics in Sub-Saharan Africa, things like that. It turns out that edge computing is also very useful, both on the drone flying with the airframe to help with the navigation and safety of that service, as well as in the ground support network that's managing the uplink and downlink, the control, and the docking and landing pads and all that stuff. So that's another application for edge computing.

And the final one that I like to talk about is smart cities and smart buildings. Smart cities are really going to be an important trend in the next decade. They're going to improve livability. They're going to improve the energy efficiency and the safety of all the residents of cities, and allow cities to much more comfortably carry much higher population levels in a given area.

And smart buildings, similarly, will use edge computing technologies in order to improve the efficiency and the user experience associated with being a resident of that building. Things like automated elevators are already being done. Smart lighting, where the appropriate ceiling fixture levels are set depending upon what's going on underneath them. All of these things represent opportunities for IoT and edge computing in those vertical markets. The IIC studies about 18 different vertical markets; those are sort of the top four for edge computing.

Now let's talk a little bit about why edge computing is helpful or perhaps necessary in a subset of those use cases. The first reason that you want to move out of the cloud and into the edge is generally related to latency. Say we're doing a critical control process, as in many Internet of Things applications, where we have a sensor that's measuring a parameter, say the pressure in an oil pipeline. And then we take that pressure reading and we send it to some computational resource.

If the computational resource is a cloud computer somewhere in a server room in Seattle, it might take 80 or 100 or 200 milliseconds for that sensor reading to make it all the way through all the fiber cables, wait in the queues, be processed by the cloud servers, and find its way all the way back down to the originating endpoint, where that signal is used to control an actuator, maybe adjust a valve or adjust the RPMs of a pump to keep that pipeline working correctly.

If the pipeline had a problem, like a pressure surge, and it takes 100 milliseconds to go to the cloud server in Seattle and come back, a whole lot of oil is dumped on the ground during those 100 milliseconds. If I put an edge computer running essentially the same algorithm that would have been running in the cloud local to those valves and pressure sensors, then instead of a couple hundred milliseconds of worst-case round-trip time, I'm way under one millisecond, 1/1000 of a second, round trip, and I can close that valve much faster if I find a safety problem, for example.
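Chuck's pipeline example can be put to rough arithmetic. The flow rate below is a made-up number for illustration; only the round-trip times come from the discussion.

```python
# Sketch: how much oil escapes between detecting a surge and closing
# the valve, under an assumed (hypothetical) flow rate.

def spilled_liters(flow_lps: float, round_trip_ms: float) -> float:
    """Oil released while the control signal is in flight."""
    return flow_lps * (round_trip_ms / 1000.0)

FLOW_LPS = 500.0  # assumed pipeline flow, liters per second

cloud = spilled_liters(FLOW_LPS, 100.0)  # ~100 ms cloud round trip
edge = spilled_liters(FLOW_LPS, 1.0)     # ~1 ms local edge round trip

print(f"cloud: {cloud:.1f} L, edge: {edge:.1f} L")  # cloud: 50.0 L, edge: 0.5 L
```

The absolute numbers are invented; the point is the hundred-fold difference between the two loop times.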

The most critical latency application I've ever found for edge computing revolves around what's called haptics, which is tactile feedback. You might have played video games where the steering wheel bumps you back if you have a collision in your driving game, or the joystick bumps you back when you feel a hammer blow or whatever. It turns out that in order to preserve the illusion that somebody is actually moving that sensor, namely the force on the joystick, and then receiving that tactile feedback, namely the force back from the actuator, you have to close that loop in about one or two milliseconds. So the cloud is maybe 50 times too slow to give you a satisfying haptics experience. That's obviously useful in games, and also really useful in things like telemedicine and telesurgery.

Say a surgeon behind the front lines is sewing up a wounded soldier on the battlefield; you want to feel the tissue as you're doing those sutures. If that haptics interface isn't done at low latency, then you're pushing harder and harder, and you've poked through by the time the feedback signal gets back to you. Not useful for telemedicine. So that's probably the most rigorous use for these kinds of low-latency engines.

A slightly less rigorous use is virtual reality and augmented reality. There's a thing called the vestibulo-ocular reflex, which is the way your inner ear is wired, basically. And it has a critical time constant that is about 7 milliseconds. So if you can read the position of your head, send that back to a rendering engine, render the correct viewpoint of video for both eyeballs, and get it back onto the screens in about 7 milliseconds, [inaudible 18:27] nausea in many people.

Take critical industrial processes, like a paper mill that's making tissue paper at 50 miles an hour. There's a sensor that's figuring out the thickness of that tissue paper, and then there's a little gate, farther up, that's adjusting how fast the pulp goes under the screens. Closing those loops in tens of milliseconds matters, because otherwise those systems go unstable. The gate closes, the sensor says oops, not enough; the gate opens, the sensor says oops, too much; and that generates what's called an unstable control system. And if you're trying to wind the tissue paper on the other end, it's all wavy and nasty.
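The oscillation Chuck describes can be reproduced in a toy simulation. The plant model and gain here are invented for illustration; the point is only that the same controller that converges with a fast loop diverges once the feedback is delayed.

```python
# Sketch: a toy proportional control loop showing how feedback delay
# destabilizes it. All numbers are hypothetical.

def simulate(gain: float, delay: int, steps: int = 200) -> float:
    """Thickness error under proportional control with delayed feedback.
    Returns the largest error magnitude seen over the run."""
    x = [1.0] + [0.0] * delay  # history buffer; start with an error of 1.0
    worst = 1.0
    for _ in range(steps):
        delayed = x[-(delay + 1)]      # the controller sees stale readings
        nxt = x[-1] - gain * delayed   # gate correction applied now
        x.append(nxt)
        worst = max(worst, abs(nxt))
    return worst

print(simulate(gain=0.5, delay=0))  # fast loop: error only shrinks (stable)
print(simulate(gain=0.5, delay=5))  # slow loop: oscillations grow (unstable)
```

The same gain that damps the error with no delay overshoots more on every cycle once the correction arrives five samples late.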

So many industrial control processes, things like paper making, things like welding, those are the things that require that instantaneous feedback and edge computing is the answer for that. So that's the first big need for edge is latency.

The second big need for edge is network bandwidth. It turns out that there are lots of analytics, artificial intelligence, machine learning, and deep learning algorithms up and down these hierarchies. If I put all that work up in the cloud, what I'm doing is sending lots and lots of data to the cloud, which the cloud analyzes and then sends me back a lot of data.

It turns out that if I'm on an expensive connection, like a satellite connection to an oil platform in the Gulf of Mexico, for example, and if my data was something like a 4K video camera looking to see if the compressor plant has a fire in it, and I'm hauling all that all the way back to Houston and doing the analysis there, the cost of the bandwidth to haul that 4K signal across a satellite link is around $10,000 per day. If, on the other hand, I had a local edge computer on the oil platform that knew how to look for fires, and all I would do is once a minute send a quick text message to Houston saying I didn't see any fire in the last minute, then I'm doing it for $1 a day of satellite bandwidth. That kind of edge computer pays for itself in less than one day of operation.
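The oil-platform economics can be sketched with back-of-envelope arithmetic. The satellite tariff and video bitrate below are assumptions chosen to land near the figures Chuck quotes; they are not real prices.

```python
# Sketch: satellite backhaul cost, stream-everything vs. edge summary.
# The $/MB tariff and bitrate are hypothetical.

COST_PER_MB = 0.06  # assumed satellite tariff, dollars per megabyte

def daily_cost(mb_per_day: float) -> float:
    return mb_per_day * COST_PER_MB

# Raw 4K camera feed, ~15 Mbit/s compressed, streamed around the clock:
video_mb = 15 / 8 * 3600 * 24      # megabytes per day
# Edge analytics on the platform, one short status text per minute:
texts_mb = 0.0005 * 60 * 24        # ~0.5 KB per message

print(f"stream video: ${daily_cost(video_mb):,.0f}/day")  # stream video: $9,720/day
print(f"edge summary: ${daily_cost(texts_mb):.2f}/day")   # edge summary: $0.04/day
```

Under these assumed numbers, the edge box pays for itself in roughly a day, which is the shape of the argument in the transcript.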

Another example of where bandwidth matters is intelligent vehicles. I probably don't want to send the whole LiDAR point cloud to the cloud for analysis; I probably want to do that in an edge computer riding along on the vehicle. Positive train control is another vehicle example, where there's a lot of data flowing around the train that I don't necessarily want to send back, over VHF radio perhaps, to the dispatch yard. I want to do that on the locomotive. So those are examples of why bandwidth reduction, by putting analytics deeper into the edge, is another reason I might want to do edge computing and one of the economic benefits of it.

A third reason that I might want to do edge computing is security and privacy. It's possible that some of the data that I'm generating at a certain location, from a particular sensor, shouldn't go much beyond the boundaries of the installation where that sensor is located. So if, for example, I have a nuclear reactor, I probably don't want the coolant flow readings traversing the boundary of that generation facility, because it's a security risk if they do. So what I'll do is use edge computing to analyze and store those coolant flows locally, and never have to send them to the cloud.

Privacy, especially in the case of personally identifiable health care data, is another reason why I might want to do edge computing, because I can keep the data that needs to stay private within a local jurisdiction. I, for example, may not want to send it across [inaudible 22:23] we're operating under right now. And the act of exporting that data, if it had personally identifiable information associated with it, would potentially be a felony, chargeable with big fines.
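A data-sovereignty policy like the one Chuck describes might be sketched as a simple export filter at the edge node. The record fields and the rule below are entirely hypothetical; real deployments would enforce jurisdiction rules in their own policy engines.

```python
# Sketch: a hypothetical export filter that keeps raw and PII-bearing
# records at the edge, letting only de-identified aggregates go north.

def may_export(record: dict) -> bool:
    """Only aggregate, de-identified records may leave the local site."""
    return not record.get("pii", False) and record.get("scope") == "aggregate"

reading = {"sensor": "coolant_flow_3", "pii": False, "scope": "raw"}
summary = {"site": "plant_A", "pii": False, "scope": "aggregate"}

print(may_export(reading))  # False: raw readings stay local
print(may_export(summary))  # True: summaries can go to the cloud
```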

The final reason to do edge computing is reliability. The cloud is generally pretty reliable, and companies like Amazon, Google, Microsoft, and Alibaba do a great job of keeping their cloud infrastructure reliable: redundant power systems, duplication of data centers, and all that. But disasters happen: fiber optic cables get cut, [inaudible 23:06] take out entire data centers full of server capacity.

Under those circumstances, we don't want my critical application, like my pedestrian safety application that's trying to hit the antilock brakes of oncoming cars when a little old lady steps into the crosswalk, to fail because there's something weird going on in the cloud or some bozo with a backhoe dug up a fiber. We want that to be local. And in order to provide the so-called five nines, 99.999% reliability, that is the standard typically applied to life-critical systems, we might have to have a portion of those systems, namely the most safety-critical algorithms, running in the field and not in the cloud. And that's why edge computing helps there.
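"Five nines" is easy to put in concrete terms; this converts an availability target into allowed downtime per year.

```python
# Sketch: downtime budget implied by an availability target.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Minutes of outage per year allowed at the given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines, avail in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines: {downtime_minutes(avail):.1f} min/year")
```

Five nines allows roughly five minutes of outage per year, which is why a critical control loop can't hinge on a single backhaul fiber to a distant cloud.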

Erik: And then maybe one add-on topic or small tangent here: the topic of 5G. We see some group of companies that views 5G as the technology that will further enhance the capabilities and usability of edge computing, and others who view that 5G, when it's fully deployed in, let's say, three or four years, might make edge computing less necessary, because then we can address some of those latency issues that currently exist. But how do you feel about 5G and its impact on edge computing? What do you think the significant impacts will be?

Chuck: Certainly the crystal ball isn't quite as clear on this one. But my gut feeling is that 5G will be a tailwind, an enabler, for the edge computing capability, especially the ETSI MEC standards of edge computing. And there's a couple of reasons why I make that assertion. One is that getting the air interface down to a handful of milliseconds, perhaps, which 5G certainly does a lot better than 4G, is inadequate if the data center's located in Iceland. I mean, fiber optic cables run at about seven tenths the speed of light, and that's tens or maybe 100 milliseconds of round-trip flight time just to get the data to that Iceland data center and back.

So I need to have a data center that's much more local, many fewer fiber kilometers away. And the edge computing at the base of the cell tower that the MEC architecture tends to favor is a much better way to do that. So edge computing is an integrated part of MEC, and MEC is an integrated part of 5G.
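The seven-tenths-the-speed-of-light point is easy to quantify; the distances below are illustrative, not from the transcript.

```python
# Sketch: round-trip light time in fiber (signals travel at ~0.7c).

C_KM_PER_MS = 299.792   # speed of light in vacuum, km per millisecond
FIBER_FRACTION = 0.7    # typical slowdown from the glass's refractive index

def round_trip_ms(fiber_km: float) -> float:
    """Best-case propagation delay there and back, ignoring queuing."""
    return 2 * fiber_km / (C_KM_PER_MS * FIBER_FRACTION)

print(f"{round_trip_ms(50):.2f} ms")    # nearby metro edge data center
print(f"{round_trip_ms(4000):.1f} ms")  # distant cloud region, Iceland-scale
```

Even before any queuing or processing, a few thousand fiber kilometers costs tens of milliseconds, which no air-interface improvement can claw back.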

And the other thing that you ought to take a look at is what smart companies are partnering up to do. Recently, last November, about a year ago as I recall, some partnerships were announced. The flagship one is that Amazon Web Services has partnered with Verizon to provide a 5G MEC edge computing capability at the base of many, probably most, Verizon 5G cell towers. And Amazon and Verizon, in partnership, will make that edge computing resource available on sort of a time-share basis, just the way the Amazon cloud is. It's an extension of the Amazon cloud.

Not to be outdone, AT&T and Microsoft have a similar deal; I think Google is involved in that somehow. I know Google has got a similar deal with Orange, a big cell company in Europe, and so on. So these smart companies are pouring billions of dollars into the bet that edge computing intimately tied to 5G networks makes sense, in almost a Field of Dreams approach: they're building all these 5G MEC infrastructures without being exactly sure how revenue-producing they'll be or how customers are going to use them.

My belief is that one of the first applications of that 5G-plus-MEC edge computing marriage is probably going to be gaming. If you want to do high-quality gaming, like on an Oculus VR device, or even on your iPhone 12, it turns out you only have a few watts of electrical power to work with. But doing photorealistic rendering, like you see on a PlayStation 5 nowadays, requires about 150 watts of graphics processing power. A 150-watt GPU would, first of all, melt your pocket, and second, burn through your cell phone's battery in a few minutes.

So what you're going to do is all this very sophisticated photorealistic [inaudible 28:54] at the base of the cell tower, on the other end of the air interface, in those MEC engines, and then send what amounts to video pictures, fully rendered photorealistic images, back for your gameplay. So you're not doing the computations on your local energy-constrained device; you're doing it back at the base of the MEC cell tower, and you're sending the video results back. The user can't tell the difference, because the latency is so small that it feels just like they're running on their PlayStation. So they get the benefit of those photorealistic, fully immersive, 120-hertz kinds of gaming experiences on their mobile, handheld, power-constrained device, where the small battery that fits in your pocket will last you all day. My bet is that's going to be one of the first applications of those networks.
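The rendering-offload argument boils down to a frame-time budget: the whole input-to-photon loop has to fit inside one frame. The latency figures below are assumptions for illustration, not measurements.

```python
# Sketch: when does MEC-based remote rendering feel local?
# All latencies are hypothetical budget entries.

FRAME_BUDGET_MS = 1000 / 120  # 120 Hz gameplay target, ~8.3 ms per frame

def remote_render_viable(air_ms: float, render_ms: float, encode_ms: float) -> bool:
    """True if input -> remote render -> video back fits in one frame."""
    return air_ms + render_ms + encode_ms <= FRAME_BUDGET_MS

print(remote_render_viable(air_ms=2.0, render_ms=4.0, encode_ms=2.0))   # MEC at the tower
print(remote_render_viable(air_ms=60.0, render_ms=4.0, encode_ms=2.0))  # distant cloud
```

With an assumed 2 ms 5G air interface, the budget closes; with tens of milliseconds to a remote cloud region, it can't.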

Erik: This is maybe a good point, then, to discuss what a multilayer edge system looks like. I think there's still some confusion among end users, and also some discussion among the technology providers themselves, around where the edge will reside, where the compute will be. Is it directly on the device? Is it on a gateway, where I think right now there's a lot of work? Is it in the base station of a 5G cell tower? Is it in a server in a building somewhere? Without going into too much detail, what does the architecture of a multilayer edge system look like vertically? And also, what does it look like horizontally when we start talking about mesh systems or distributed systems, where compute is maybe being done across multiple devices for an application?

Chuck: Well, let's talk through the architecture a little bit. And this is a point where I'd like to refer to a figure that exists in this white paper that you can download from iiconsortium.org; just go to their resources and publications page and you'll find it. This is Figure 2.1, the first figure in the document, and it basically describes what's going on in these hierarchical edge-enabled networks.

All networks are going to have cloud computing capability at the top. And that's possibly there as a control, management, and orchestration kind of engine. It's there to provide security policy support. It's there to do lots of different things. And it provides the basis to drive all of the rest of this network. So that's the top layer of the network that we're talking about, and it's a hierarchy of layers.

The top layer is the cloud, and the cloud is implemented typically, in data centers, big collections of servers. Then there's a bunch of physical facilities, fiber optic cables, radio links, that connect the cloud data centers to a layer of edge nodes. Edge nodes are the physical implementation of edge computing. And they might be located in lots of different places. They might be on a factory floor. They might be bolted to a transformer in an electrical substation. They might be in a smart building. They might be flying on the back of a drone. They might be in an automatic guided vehicle of some kind. These are all different places where I might want to put edge computers. They might be man portable. They might be in the pocket of a warfighter, for example.

So these things are the layer of edge nodes that are directly under the cloud, doing the highest level of edge computing capabilities. And there's more than one edge node on that layer, and they talk to each other peer to peer. That peer-to-peer communication is what we sometimes call east-west traffic, and talking between layers is sometimes called north-south traffic. That's all defined in the white paper.

So that first layer is what you might call the near-cloud edge, or the top layer of the edge, and there are additional layers of edge computing below it. There might be two or three or four layers of hierarchy of edge computers. These might be boxes the size of a refrigerator in a street corner cabinet. We did an example at Cisco for Barcelona, the city in Spain, that had 3,300 street corner cabinets of edge computing in order to manage their smart city applications. And these cabinets were about the size of a four-drawer file cabinet, full of various kinds of edge computing resources. Below them, you might have lower-level edge computing resources; they might be the size of a shoebox, for example.

And then below those, you might have a still lower level of edge nodes. You might build those out of these sort of hobbyist-class computers, Raspberry Pis and Arduinos and BeagleBones, if anybody's ever played with those; they're the size of an [inaudible 33:11]. And then at the bottom are the intelligent devices, and these might be things like a security camera that includes image recognition or pattern recognition on it.

So we use all of those layers in conjunction with each other and we try to put the right computation, networking, and storage function on each layer to optimize the overall performance of the network. If we make a concrete example, if we have a smart city, we might have the big data center in the sky in city hall, that's the cloud, then we might have a street corner cabinet like the 3,000 plus we did in Barcelona, that's sort of the first layer of edge nodes, then we might have edge nodes in every single streetlight to sort of control when the lights come on, and maybe listen for gunshots with a microphone, and maybe have a security camera there. That's sort of the second layer of edge nodes in that smart city.

And then we might have more edge nodes in every building. So there might be an edge node in the basement to optimize the entire energy use of that building. There might be an edge node on every floor to sort of optimize the lighting plan or the heating and ventilation of that floor. There might be an edge node in the ceiling of a conference room to run the Zoom infrastructure or the WebEx infrastructure for that conference room, and so on. So you might have five or six or seven layers of edge computing between the bottom of the cloud and the top of the IoT devices.
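The layered smart-city hierarchy Chuck walks through can be pictured as a simple tree, with "north" toward the cloud and "south" toward the devices. The node names and layer labels below are hypothetical.

```python
# Sketch: the cloud-to-device hierarchy as a tree of nodes.
# Names and layer labels are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    layer: str
    south: list["Node"] = field(default_factory=list)  # children, away from cloud

    def add(self, child: "Node") -> "Node":
        self.south.append(child)
        return child

cloud = Node("city hall data center", "cloud")
cabinet = cloud.add(Node("street corner cabinet 17", "near-cloud edge"))
light = cabinet.add(Node("streetlight 17-042", "device edge"))
light.add(Node("gunshot microphone", "intelligent device"))
light.add(Node("security camera", "intelligent device"))

def depth(node: Node) -> int:
    """Layers of hierarchy from this node down, inclusive."""
    return 1 + max((depth(c) for c in node.south), default=0)

print(depth(cloud))  # 4 layers in this toy example
```

Traffic between a node and its `south` children would be north-south; traffic between the two devices on the same streetlight would be east-west.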

Erik: And if we go into the next visual here, but I think the next visual is looking at the functional composition of an individual edge node. So now we're drilling into each of these nodes and looking at what are the set of software defined capabilities that each of these nodes would have. And so you have this divided into five different sections, and seven specific, let's call them functional capabilities across these five sections. Can you break this down? And this is maybe a little bit more of a challenge in breaking this down in a way that's understandable to somebody who doesn't have a background in the field here.

Chuck: Oh, this is Figure 2.2 of the white paper, and it describes how an edge system is assembled, sort of the way that a software architect might assemble it: somebody who's trying to figure out what modules, or chewable chunks of software, to code, then make specifications for the interfaces between those chunks, and then send them off to their various development teams to be coded. That's perhaps the most useful capability of this particular figure.

So on the top, we have what's called the edge system, which is often the cloud-based support for the edge network of nodes below. And that system has two primary sets of functions. One is associated with system security management; the whole security posture for the edge network is driven from the cloud down. The cloud is sort of the ultimate authority on security policy, detecting breaches, understanding vulnerabilities, pushing patches; all that kind of stuff happens from the system security management block.

Then we have a system provisioning and management block that's responsible for configuring, monitoring, maintaining, and repairing those edge nodes below it. And that's where the network operations people sit, the folks who sit in swivel chairs and stare at the big glass walls of multiple TV monitors. Those are the folks who are running the system provisioning and management block, which is, once again, probably in the cloud.

So those cloud instances of system security management and system provisioning and management are operating whole networks of edge nodes below them. If we go below the dotted line that cuts through the middle of this particular diagram, we'll see what we sometimes call the bathtub. This represents five sets of modules that together provide the edge computing infrastructure necessary to run your application software on edge nodes.

The application software runs in what's called the application execution environment, which is in the center; it's the water inside of the bathtub. We want edge applications to run: pedestrian safety applications or smart grid control applications or whatever. They have an operating system. They have a memory management unit. They generally have mechanisms to run container-based workloads, [inaudible 38:06] Docker and Kubernetes, for those of you who are real software heads; you've got to work those words in.

So basically, what's happening is I can go to a marketplace like iTunes or Google Play and order an application that knows how to navigate my drone, and then I can download that into the application execution environment. All the interfaces hook themselves up automatically, and then that application lets me run that code. But the interfaces that let me run that code require the other four blocks inside of this model of the edge node to function.

So the first one that I'd like to talk about is the left side of the bathtub, which is the end-to-end security module. Security is probably the hardest individual problem associated with edge computing implementation. And that security function needs to manage all of the security postures, all of the cryptographic engines, all of the stuff associated with keeping all the components of the node secured, including that application execution environment and the code that's running in it.

So the end-to-end security module is described in great detail in chapter four of this document, and there's enough information there to almost act as a recipe to help you implement edge computing security. I don't think the scope of this podcast, especially the time we have remaining, is going to let us talk in great detail about that. But there are literally thousands of pages of information available on the Industrial Internet Consortium's website to help you with the security of IoT and edge devices.

The second of the four boxes that support the application execution environment is what's called the trusted computing module. This is basically the interface between the computing, networking, and storage hardware, the physical soldered-down chips of the edge node, and its operating system instantiation, up to that application execution environment. It's important to know that this is a trusted module.

There's a thing called a hardware root of trust that is pretty much essential in edge computing. It enables you to boot the computer in a way that nobody could have messed with the boot process, nobody could have overwritten something; it's all sort of indelibly stuck into a secrets vault, and it will boot up in a trustworthy way, no matter [inaudible 40:42]. Then after that you build a chain of trust, where you boot boot loaders that are trustworthy, operating systems that are trustworthy, protocol stacks that are trustworthy, and so on. So that trusted computing module basically provides all the services that the application execution environments need in order to make that code run efficiently on that hardware base.
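The chain of trust Chuck describes can be sketched in miniature: measure each boot stage and compare it against a golden value anchored in the root of trust, refusing to continue on any mismatch. This is only a toy illustration, with stand-in byte strings in place of real firmware images and a plain dict standing in for a hardware secrets vault such as a TPM:

```python
import hashlib

def digest(blob: bytes) -> str:
    """SHA-256 measurement of a boot stage image."""
    return hashlib.sha256(blob).hexdigest()

# Stand-ins for real firmware binaries (illustrative only).
stages = {
    "bootloader": b"bootloader v1.2",
    "os-kernel": b"kernel v5.10",
    "protocol-stack": b"edge protocol stack v3",
}

# Golden measurements, anchored in the hardware root of trust.
vault = {name: digest(blob) for name, blob in stages.items()}

def trusted_boot(images: dict) -> bool:
    """Verify each stage before 'executing' it; refuse to boot on mismatch."""
    for name in ("bootloader", "os-kernel", "protocol-stack"):
        if digest(images[name]) != vault[name]:
            print(f"measurement mismatch in {name}: refusing to boot")
            return False
    return True

print(trusted_boot(stages))  # True: untampered images boot normally
tampered = {**stages, "os-kernel": b"evil kernel"}
print(trusted_boot(tampered))  # False: tampered kernel is caught
```

In real hardware the golden values and the verification logic live in silicon, so an attacker who overwrites a boot stage on disk still can't get it executed.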

The next two chunks, shown on the right side of the bathtub, are the application service management module and the node management module. These are basically talking to the system provisioning and management stuff up in the cloud, making sure that the configuration of the hardware, software, and applications running on each of those nodes is correctly installed, configured, orchestrated, and monitored throughout its lifecycle. So management is probably the second hardest problem in edge computing, after the end-to-end security of edge computing.

And we're talking about putting 50 billion sensors and actuators onto the Internet of Things over the next five years or so. And [inaudible 42:06] opportunity to mess up the configuration of those things. Think about the handwork that's necessary if you've got to type IP addresses into some kind of a configuration interface; that gets ugly really fast. So those node management and application service management modules have a lot of automation associated with the configuration, ongoing operation, and monitoring of those application execution environments.
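The automation Chuck is pointing at is often called zero-touch provisioning: instead of an operator typing IP addresses by hand, each node announces itself on first boot and the management service hands back its configuration. A minimal sketch, where the class and field names are illustrative assumptions rather than anything from the IIC white paper:

```python
import ipaddress

class ProvisioningService:
    """Toy management-plane service that auto-configures edge nodes."""

    def __init__(self, subnet: str):
        self.pool = ipaddress.ip_network(subnet).hosts()  # address iterator
        self.registry = {}  # serial number -> assigned configuration

    def register(self, serial: str) -> dict:
        """Called by a node on first boot; idempotent per serial number."""
        if serial not in self.registry:
            self.registry[serial] = {
                "ip": str(next(self.pool)),  # next free address, no typing
                "role": "edge-node",
                "monitoring": True,
            }
        return self.registry[serial]

svc = ProvisioningService("10.0.0.0/29")
cfg = svc.register("SN-0001")
print(cfg["ip"])                        # first usable address in the pool
assert svc.register("SN-0001") == cfg   # re-registering is a no-op
```

Scaled up, the same pattern (node announces a hardware identity, service assigns and tracks configuration) is what makes tens of thousands of nodes manageable without manual handwork.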

So all this stuff put together makes a secure, manageable, high-performance, low-latency, reliable, safe infrastructure for edge computing. And that hits all of those checkboxes we talked about when we were asking why we need edge computing. This is the infrastructure that supports it.

Erik: Chuck, let's summarize where we are, and then take a step back and look at this from the perspective of either a technology provider who's trying to figure out how to bring a complete solution to market, or an end user or system integrator who's trying to understand who in the market can help them solve problem X.

So from a hardware perspective, we're looking at everything from the system-on-a-chip on a device, up through gateways and other kinds of edge computing devices, through 5G shipping-container-sized data centers, and then the cellular connectivity behind that. And then we have, obviously, a very diverse ecosystem of software companies that are providing embedded software on the edge.

So, if we look at the 200 members of the IIC, I would assume that perhaps most of those companies somehow have a finger in the edge computing pie. They have a stake in the game here. And one of the challenges that we've noticed at IoT ONE is that the tech providers are in a somewhat challenging position right now, figuring out where in the edge computing tech stack they should play, and then who they should either source technology from or collaborate with in order to develop the full solution that they can bring to market.

And then I believe the end users are in a similar situation: confronted with a lot of information around edge computing, but with the question of, okay, then who do I talk to? Do I talk to Cisco? Do I talk to AWS or AT&T? Or do I talk to a software provider like Foghorn? I suppose the answer could be all of them. But what would your thoughts be, on the one hand, for technology providers who are trying to determine how to bring their integrated solution to market, or whether they fit into the edge computing ecosystem? And on the other hand, for the end users who are maybe interested in the value proposition behind edge computing but trying to figure out: where do I start, what is my starting point, and who are the companies I should first engage with around this topic?

Chuck: Well, you start from requirements. You figure out what use case or use cases your edge computing infrastructure is supposed to support. And then you generate a rigorous list of requirements for those use cases: things like performance, capacity, reliability; power dissipation might matter in some cases. You might have a few hundred requirements that you come up with.

And then beyond that, you probably want to have a supplier strategy. There are probably three or four main ones that you might want to go with. One is you can roll your own from scratch. It's going to take you two or three or four years to do so, but you could. The second one is you buy hardware and software cafeteria-style from a number of trusted vendors. And that's where companies like Cisco and Microsoft get involved; Real-Time Innovations is one of our member companies that might get involved in that kind of thing, and Dell is certainly an IIC member company that ships this kind of stuff.

So you can be your own system integrator by pulling together the best and brightest from all the different ecosystems of hardware, operating systems, platform infrastructures, protocol stacks, AI engines, application spaces, and so on. You can probably get exactly what you want, and probably save some money, but it's going to take you quite a few person-years of effort to do that, and quite a few calendar months.

The third thing is you can buy the whole hardware, software, and application package from one company, so let them be the system integrator. And certainly, an IIC member like GE is in that space. You need a thing to control your wind turbine? GE will sell you the hardware, software, and support, right down to the 800 number you can call 24/7 to get your questions answered.

The last thing you can do is let one of the cloud service provider plus 5G operator pairings do it for you. And that's where these Amazon plus Verizon or AT&T plus Microsoft plays come in. So you're basically doing edge computing as a service; everything-as-a-service is the big buzzword these days. And that would be an opportunity to eliminate some of the risk and some of the deployment hassle that you might go through siting a few thousand edge nodes out there.

You can use Amazon's network; that might work. But of course, you're on the treadmill of paying the monthly premium bill for that service. Ask people if they're satisfied with their cloud service provider, and they say, oh, performance is great, capacity is great, reliability is great, but it sure costs a lot per month. You'll see the same thing for edge.

And the final thing that I'd like to discuss a little bit about edge is the white-box hardware and open source software marketplaces that are developing around these. So instead of the traditional vendors, the Cisco or Dell for hardware and the Microsoft or VMware for software, you can go to reference designs from things like the Open Compute Project, and software platforms from places like EdgeX Foundry and the Eclipse Foundation, among half a dozen others, and put together your own shareware version of this. And that is a viable alternative for sophisticated, ambitious customers in this space.

Erik: I would suggest, just from my experience, if you're a medium-sized company, at least to begin with, try to take something that's a bit more plug-and-play. But maybe carefully collaborate with the suppliers so you can also transfer some of that learning to your organization. Because, as you said, you're talking about a two- or three-year timeframe, and we've seen some companies that have gone down this path and then figured out that they built the tenth-best product in the market, when they could have just purchased something cheaper and had it up and running two years ago. It's a good learning experience, but there's also a bit of risk there if you're a medium-sized organization and just don't have the bandwidth.

Chuck: No doubt. One thing that's really important as part of that discussion is understanding what's domain-specific knowledge and what's the underlying platform. So, back to my oil pipeline control application: pretty much anybody's edge computer is probably adequate to run the software. But the problem is that a company like Cisco or VMware or Microsoft probably doesn't know a heck of a lot about oil pipelines.

So what you might want to do is find a domain expert, the Halliburtons and Schlumbergers of the world, to write the code that represents that application that runs inside of the application execution environment. Use that as the differentiator, the thing that provides you with all of your value and all of your reliability and performance and everything that you want out of that system. Then let the platform be just a horizontal, vanilla, malleable platform that you can adapt to that domain-specific code. You don't have to worry about sweating the details of APIs for security, where you're going to have a hell of a time hiring people that know those right now. You buy that platform sort of turnkey, and then you put in the domain-specific stuff that's really important for your application, really specific to your vertical market. That's what you focus your energies on.

Erik: That's a great piece of advice. There are a lot of companies out there that are in the position of having their core market grow at low single digits, and who look toward digitalization as an opportunity to sell data-driven services and tap into faster-growth markets, but they're confronted with this challenge of acquiring the technical expertise to deliver on that.

But I think you're exactly right: this functional domain or industry domain expertise is tremendously valuable here. And in many cases, you don't need to develop the tech competence. You need to have enough to understand what's possible, but then you can white-label, and you can compile the solution.

So I think for a lot of these, if we look at Germany, for example, the hidden champions here, these medium-sized companies that build different types of assets and maybe want to build services on top of those, that's a great approach for this type of company to be able to participate in these markets, not just for improving their internal operations, but for developing new revenue-generating services, and to do so without taking on the challenge of becoming experts in every layer of the stack.

Chuck: Absolutely. And it doesn't even have to be a medium-sized company; for very focused, very niche domains, one to three people in a spare bedroom could be a very successful business model. If they have the best AI algorithm for analyzing the paper flying off of a paper mill, every paper company in the world buys algorithms; they want the best one, and they don't care who wrote it. And you might get a royalty for each instance of that software being installed. It can be a serious cash flow for a consultant who happens to know that domain and happens to be able to codify it in an application.

Erik: Chuck, I think we've covered a good amount of ground here. We didn't get so deep into the cybersecurity topic, and as Chuck mentioned earlier, there is a tremendous amount of information available, whether it's high-level content that you want to communicate to senior management around what the focus areas for trustworthiness should be, or you really want to get into the details of how to secure a system. But Chuck, is there anything else that you wanted to touch on today?

Chuck: A shameless plug for IIC membership: the Industrial Internet Consortium is always interested in new members. There are lots of benefits to membership. Let me give you two that probably matter to the technical audience of this podcast. The first benefit to members is you get to help invent the future. So if you have a particular view of how you think next-generation edge computing or next-generation digital transformation or the industrial internet ought to work, you can exert that influence, and maybe all it takes is writing a couple of paragraphs that end up in the next white paper. And you can influence those white papers [inaudible 54:07]; this white paper has been influenced by two dozen companies in that sense.

The second benefit of IIC membership is it gets you a little clearer crystal ball of what's coming in the future. So even if you aren't making serious contributions to the next-generation technical report, you're still listening to those conference calls and understanding the why of how this stuff got put together. If you can translate that into your development organizations, then you have a leg up on the competition who didn't have an interest or a part in the proceedings that are going on in the IIC.

We have a tiered membership approach. And the dues that you pay to join really aren't particularly outrageous, considering the benefits that you get. The community is excellent, and we get a lot of really fine work done. I encourage everybody to take a look into it. Just go to iiconsortium.org, and there's a button to click for more information about membership.

Erik: At IoT ONE, we are a relatively small niche research firm. And one of the first things that we did five years ago when we set up the firm was become an IIC member. And for me personally, it was just a tremendous education, showing up at the conferences and participating in calls. I helped with John Colwell and Calvin to set up the smart factory task group. So I was able to participate even though, at that point, everybody in the IIC had more expertise than me, frankly.

And for me, it was really a complete education in the industry. Like you said, a bit of a crystal ball into what's coming in the future. So if you're a small organization, whether you're a research institute, a consultancy, whatever your role might be, or even more of a startup, I would also encourage you to join. I think the dues are something like $5,000 per year.

Chuck: I think they're waiving them for startups that aren't yet profitable, for the first year. So there's no reason not to join if you're in that boat.

Erik: So join, check it out. It's a great community to be part of; it's really a bit like a family. It's funny, once you go to a couple of the events, everybody knows everybody. So it's just a cool community. So Chuck, thank you so much for joining and sharing today. I really do appreciate your time.

Chuck: Well, I do appreciate you inviting me to do this. And I hope that the listeners got something valuable out of it. We are available for questions after the fact; if you want to just [inaudible 56:32], we'll be happy to answer follow-up questions for you. Thanks, and have a great one.

Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.
