Ep. 225
How are AI and IoT solving manufacturing labor shortages?
Mike Rohrmoser, VP of Product Management for OEM Solutions, Digi
Thursday, September 25, 2025

In this episode we spoke with Mike Rohrmoser, VP of Product Management for OEM Solutions at Digi, a global provider of mission-critical IoT connectivity products and services. We explored how manufacturers are addressing labor shortages with IoT and automation, the trade-offs between retrofitting existing factories and building new ones, the evolving sensor and connectivity landscape, and practical steps to scale IoT pilots into production.

Key insights:

• Retrofitting existing plants is often the smarter move. Brownfield upgrades can cost 40–60% less than new builds and achieve faster returns when paired with business-focused use cases and retrofit connectivity.

• Sensors and networks must be judged as a whole system. Industrial buyers weigh accuracy, deployment simplicity, and lifetime cost over unit price, with wireless IO-Link and LTE Cat 1 gaining traction and 5G RedCap on the horizon.

• Edge AI is real, but focused. Today it is most effective in computer vision for quality inspection and counting, while new designs anticipate broader workloads as adoption matures.

• GenAI augments people, not machines. Its strengths are in analysis, documentation, and device management, while safety-critical real-time control remains firmly in the domain of conventional automation.

• Scaling pilots requires proving value early. Many initiatives stall when they start with technology instead of problems; success depends on production-ready components, operator trust, and leadership alignment.


IoT ONE database: https://www.iotone.com/case-studies
The Industrial IoT Spotlight podcast is produced by Asia Growth Partners (AGP): https://asiagrowthpartners.com/

Transcript

Erik: Mike, thanks for joining me on the podcast today.

Mike: Hi, Erik. Glad to be here.

Erik: Yeah, well, I'm really looking forward to this conversation. This is a topic that is quite pressing, I would say, for companies in America, where you're sitting, but really around the world, as manufacturing has become more of a strategic priority for many governments. This is actually becoming a more pressing issue in many countries. When, for you, did this topic just start? I think for many people, this topic of massive labor shortages in the manufacturing sector became kind of a political priority just probably over COVID or a few years ago. From your perspective, maybe this started earlier. When did Digi start to identify that this was a pain point for their customers?

Mike: Well, I think we have noticed that we're facing somewhat of a perfect storm, with three converging forces, I would say. We have this demographic cliff: about one third of our manufacturing workforce is over 55, I believe. And as of today—not even talking about the past—we're losing, I think, over 11,000 baby boomers to retirement daily, all the way through 2027. So we noticed that demographic cliff, that great retirement cliff, a while back already. We have aging customers, many of whom we have worked with for a long time, and even with new customers we engage with subject matter experts who are in that age range.

Then what contributes to that is the whole skills evolution: there has been much more demand for simulation and software skills over the last, I would say, five years. But we also noticed that a lot of the younger workers, younger employees, don't necessarily have the background, because only a few schools actually offer manufacturing-relevant curricula. Then there's the whole perception gap: only about a third of Gen Z, I believe that's one of the numbers, even consider manufacturing careers. So we have certainly seen the aging, I would say, as part of our daily work. That's obviously underpinned by some of the research, some of the market data we see. Again, that is compounded by the whole skills-related aspect and what younger people actually do from a manufacturing career decision—which, of course, then contributes to this massive shortage of, I think, almost 2 million unfilled manufacturing jobs over the next 10 years.

Erik: Yeah, it's interesting how other geographies have arrived at a similar pain point but from different perspectives. I'm sitting here in China. And of course, there's a lot of people graduating, so at least in the short term there's not such a labor crunch. But they don't want to work in manufacturing, right? So then it's more about convincing them. In India, there's a lot of young people. But what I hear from companies that are setting up factories there is, they don't have the skill sets. They don't have the capabilities that they need. And so then they're looking at digital solutions as a way to address that — you have low-cost labor, but the labor is not able to meet your quality standards, to run standard processes and so forth in a stable way, so you're looking for technology to help address that. So, you know, many different routes end up at this same pain point of we don't have the right workers, let's say.

But I guess we can drill into the U.S., because the U.S. is, of course, where you're coming from, and it's probably the country that now has the biggest immediate need to address this problem. If we drill in on the American market, where do you see the roles that are most challenging? Are we talking about the operators, maintenance techs, the specialists who are managing digital solutions? Where do you see the biggest bottleneck?

Mike: I would say it probably depends a little bit on the industry. I think there are differences when it comes to, let's say, discrete manufacturing, like automotive or electronics. I would say they struggle most with automation specialists and maybe robotics technicians. If you look at process industries like chemicals or food and beverage, they face bigger challenges with regulatory compliance specialists and safety-certified operators that are not very easily replaced. We talked about the skills gap also, and the experience, that we referenced before.

But I do think, in the context of what you were saying, that both are solving it in the same way across the industry. Meaning that you can use IoT, or industrial IoT, to almost amplify human capability rather than replace the workers. So I think that is certainly also the opportunity when we talk about IoT, and even when we talk about AI or generative AI. I'm sure we're going to touch on that later as well. Those technologies can actually help. Not necessarily replace the people, but perhaps augment the human workforce, given the losses that we talked about, and help the people that are actually working in those roles do their job in a more effective way than before. That will also help to at least partly overcome the shortage that we are currently experiencing, in my opinion.

Erik: So one perspective would be different industries or at least discrete versus process, having different skills gaps. Another perspective could be brownfield versus greenfield. Of course, America has a lot of brownfields, right? So we have a lot of factories that were built in the '60s perhaps, and so they have a unique set of challenges. Then more recently, there's a few industries where there's a lot of greenfield that's been kind of penciled in or maybe even already begun construction or come online. Those are different challenges.

Let's take those one at a time, maybe starting with brownfield. When you're working with a manufacturer that has a brownfield facility, how do you work with them to start thinking through what makes sense, given the state of their legacy infrastructure, as kind of a thought process or a planning process, to begin determining what use cases they should be looking at and what type of technology backbone they need to invest in, in order to get them to where they want to be?

Mike: Well, I would say that, because we have existing infrastructure and existing plants, the economics often strongly favor brownfield upgrades—at least strategic brownfield upgrades—compared to greenfield. They cost about 40% to 60% less than a greenfield build-out, with typically faster ROI, even if maybe a little more limited, versus new greenfield projects.

So for a brownfield assessment in general, depending on the customer and the product need, we typically evaluate a few key areas. Again, it depends on the project and the customer: the connectivity infrastructure—because, obviously, Digi is focused on connectivity products and IoT, above all industrial IoT—the existing protocols and mechanisms that are in place, the security posture of the customer, and operational workflows. Then the last one is workforce readiness, which is an important aspect. I think the key insight is generally to understand what you can retrofit versus what needs replacement. That's where our products can bring decades-old, let's say, serial equipment, old machinery, into the digital age without ripping and replacing. That's really, for me, the magic of modern industrial IoT.

So what I've learned from hundreds of those deployments is that success isn't necessarily about the age of your facility. It's very much about the clarity of the digital strategy and starting with actual high-impact use cases. So the plants and the customers that focus, like I mentioned before, on worker augmentation rather than worker replacement, I would say, see higher success rates in scaling pilots, regardless of how old the facility or the infrastructure is.

A good example—it's not Digi specific, but Siemens has a showcase plant, essentially: the Siemens Electronics Works in Amberg. It has experienced, I think, a 13-times production increase in the very same brownfield footprint that has been around since 1990. The Amberg plant produces programmable logic controllers, or PLCs, and HMIs. They have achieved that really remarkable increase without expanding the original footprint, which is about 100,000 square feet of production area, I believe, or significantly increasing their staff level. So it means they have achieved that level of growth through increased efficiency and brownfield retrofitting, rather than simply having a larger factory built with more employees and more modern technology.

Erik: Yeah, it's a useful perspective. I mean, I hear among my clients quite a bit of interest in kind of factory-in-the-dark lines. And I'm always curious: is that more of an intellectual, pushing-the-boundaries interest, or is there really a business case there? I guess these are more typical for things like solar panels and somewhat process-oriented manufacturing. But do you see scenarios where really pushing the boundaries towards a factory in the dark makes sense given the state of technology today? Or is it very rare that it makes sense, and it's more likely to be a useful lighthouse factory, for example, for a company that wants to test out new technologies to prove viability, as opposed to having a stronger business case than the augmentation of the existing workforce?

Mike: Well, I think that concept of having a fully automated manufacturing facility that, in its ideal "incarnation," would actually operate without human presence, that's probably very, very rare. Maybe you see a little bit of that in the automotive space. I would also say it's not the typical customer that I am directly engaging with, or that Digi has engaged with historically. It's mostly customers and organizations that are, like I said, more looking at augmenting, becoming more efficient, more effective, increasing OEE through means that build on the existing infrastructure. I would say that a factory in the dark, kind of a dark factory approach, is very much greenfield-ish in many ways. And it's probably only suitable, in my opinion, for a certain number of industries. I haven't seen many customers that we are directly engaged with in that kind of context.

Erik: Yeah, makes sense. I think it's a bit more common here in China, most likely because it impresses government officials, and therefore you're able to get loans to do investments that don't really have a strong business case. So it's clear that, from your perspective, in the U.S. brownfield typically has a stronger case. Nonetheless, if we look at the labor situation in the U.S., there does tend to be a kind of shift in manufacturing capacity from the north, where you probably have more brownfield that's not fully utilized, towards the south, where traditionally you haven't had as much manufacturing but now you have a lot more labor. So there's this kind of tension in the market.

When we look at the south and we look at scenarios where you might have to put more greenfield in, just because the existing assets are not there, how does your process to work with a customer differ in those scenarios versus in the more typical brownfield scenario?

Mike: So we're talking about greenfield versus the brownfield piece?

Erik: Exactly.

Mike: So I would say, well, in the smart factory kind of context—even though it obviously extends beyond the smart factory—what certainly helps, and where those customers would typically come in from in my experience, is designing with Industry 4.0 principles from day one. So modularity, interoperability, edge-first architecture and Zero Trust security, which are all very important aspects of Industry 4.0. I would say that greenfield design prioritizes principles like having an optimal infrastructure in place for robotics and automation—similar to what you were referencing. Maybe not as complete as the factory in the dark that you've maybe seen a little more prevalent in China, but still having an infrastructure in place for robotics and automation that doesn't have those legacy constraints. Modular and scalable systems, actually future-proofing them, and having sustainability baked in rather than retrofitted, which is a very important aspect when we talk about automation, as well as the sustainability piece.

So I think that idea would then also incorporate fully autonomous operation, potentially real-time AI and machine learning, as core functionalities in production lines. So I think that would go a little bit more in that direction. But again, I think you don't necessarily have to apply that to a whole factory so that it turns into a dark factory. Using those principles even on a smaller scale, like I said, those Industry 4.0 principles, I think that's what we're seeing there to help drive those specific greenfield versus brownfield projects. Quite honestly, some of those would still apply to brownfield, at least in spirit, in my opinion. But of course, in greenfield you can do it from the ground up, whereas in brownfield you would actually add it, retrofit it.

Erik: Yeah, clear. Then if we drill in on some of the technology elements of a smart factory, whether it's retrofit or greenfield, we have, I'd say, some areas of the tech stack — there are actually quite a few areas that are evolving very rapidly. So if we look at wireless sensors, there's been a fairly steady decline in cost and, at the same time, an improvement in performance, which of course impacts the ROI of business cases, right? Something might not have made sense five years ago and, all of a sudden, today, given the cost and performance trends, it's kind of a no-brainer.

From Digi's perspective—I think this is an area that you have a lot of expertise in—how do you evaluate or monitor these trends? Because this is always a very difficult part when we're running projects and helping companies to understand what makes sense. There's often this kind of, yeah, the payback period is 28 months, and then maybe there are some qualitative benefits. They want to have a 20-month payback. But then how do we value these qualitative benefits? Do we want to postpone for three years? Maybe the sensor cost performance improves, and the business case makes sense then. So there are always these kinds of borderline scenarios where you're trying to understand the quantitative and qualitative benefits and also the trend line for the technology, to evaluate: do we use the tech today, or do we push it down the line? When you're working with customers and looking at these kinds of edge cases, how do you work with them to anticipate the trajectory of the technologies in your portfolio and determine whether this is something to invest in today versus down the line, when the cost and quality performance shifts?
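
As a rough illustration of the payback arithmetic Erik describes, here is a minimal sketch with entirely hypothetical figures (system cost, monthly savings, and rate of cost decline are all assumptions, not numbers from the conversation). It compares deploying at today's assumed system cost against waiting three years for costs to fall.

```python
# Hypothetical figures illustrating the payback trade-off Erik describes:
# deploy now at today's system cost, or wait for costs to fall.

def payback_months(system_cost: float, monthly_savings: float) -> float:
    """Simple payback period: upfront cost divided by monthly benefit."""
    return system_cost / monthly_savings

cost_today = 140_000        # assumed all-in cost: sensors, connectivity, install
monthly_savings = 5_000     # assumed value of avoided downtime per month
annual_cost_decline = 0.15  # assumed 15% yearly drop in system cost

now = payback_months(cost_today, monthly_savings)           # 28 months
in_3_years = payback_months(cost_today * (1 - annual_cost_decline) ** 3,
                            monthly_savings)                # roughly 17 months

# Waiting shortens the payback, but the 36 months of foregone savings
# (36 * 5_000 = 180_000) usually dwarf the hardware discount.
print(f"payback now: {now:.0f} months, after waiting 3 years: {in_3_years:.0f} months")
```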

Mike: I would say it's certainly true that wireless sensor costs are plummeting, and I certainly have a specific opinion on that. Some wireless sensor costs have dropped significantly in the last 5 to 10 years. It's almost 70%, depending on what kind of sensor technology it is. That can certainly fundamentally change the equation, like you said. However, I would also say that industrial IoT sensors, in my opinion, are a little bit of a different animal. It doesn't mean they haven't been subject to cost reductions in general, but the accuracy that you typically need for industrial use cases is different from what we have seen in phones and other consumer sensors. So I think that's certainly an important aspect. The price drop can certainly help mainstream plants achieve productivity gains more easily, with less equipment downtime and lower implementation cost. But I would also say that the drop is maybe not as significant for industrial use cases as it is for consumer applications, because that's where the biggest price drop happens.

So, for example, what I have seen in the past, and what I've seen at Digi as well, is that wireless offshoots of established industrial sensor technologies—not the sensors themselves, but, for example, IO-Link, so wireless IO-Link—seem to get more traction and help drive use cases such as predictive maintenance. So I think it's probably more a combination of technologies, in terms of having the predictive maintenance capabilities. If you look at brownfield again: retrofitting, let's say, a predictive maintenance vibration sensor for rotating equipment. It's the combination of the connectivity technology, i.e., cellular if you want to make it relatively easy, or sub-gigahertz; it's the sensors, while maintaining the quality and the accuracy of the sensor; and then packaging all of that into something that customers can actually deploy easily, so you don't have huge installation costs that come with it, something that can be easily installed even by employees who are not highly skilled. The combination of it is, I think, what makes the difference. Then wireless comes into play, because obviously that makes the deployment easier, specifically when we talk about brownfield. So I think it's probably a mix of a number of things, not just the sensor cost, in my opinion.

Erik: Yeah, okay. Well noted. So the sensor, the cost of the individual sensor, might not actually shift as much in industrial, but the cost of the entire system when you consider connectivity—

Mike: And the deployment of it, which is a very important part of the equation. In my previous life as well, we have seen companies looking at, let's say, predictive maintenance more in an OpEx approach versus a CapEx approach. So it becomes more of a service that you as a customer pay for, almost like a subscription. You don't have the capital expense of getting the sensor installed, buying the equipment, and running the cellular connectivity to the different sensors, but rather an OpEx approach versus a CapEx approach on the deployment side, to make it easier as well.
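
A minimal sketch of the CapEx-versus-OpEx comparison Mike mentions, with all prices and the planning horizon being hypothetical assumptions: it contrasts buying and running the sensing system per machine against subscribing to it as a service.

```python
# Hypothetical comparison of the two deployment models Mike describes:
# buying and installing the sensing system (CapEx) versus subscribing
# to predictive maintenance as a service (OpEx).

def capex_total(upfront: float, monthly_connectivity: float, months: int) -> float:
    """Total cost of owning the system per machine over the horizon."""
    return upfront + monthly_connectivity * months

def opex_total(monthly_fee: float, months: int) -> float:
    """Total cost of the subscription per machine over the same horizon."""
    return monthly_fee * months

horizon = 36   # assumed 3-year planning horizon
capex = capex_total(upfront=2_000, monthly_connectivity=15, months=horizon)
opex = opex_total(monthly_fee=75, months=horizon)

print(f"CapEx per machine over {horizon} months: ${capex:,.0f}")
print(f"OpEx per machine over {horizon} months: ${opex:,.0f}")
# The OpEx model wins on cash flow and deployment effort; whether it wins on
# total cost depends on the subscription price point Mike refers to.
```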

Erik: Yeah, that's an interesting case. That seems to be making more sense as companies become more integrated across the hardware and software stack, right, so that you have one vendor that can provide the entire system. If you look at the acceptance of an OpEx model, what would you say is the market acceptance or the market preference for that as opposed to CapEx? Are we talking about 20% of the market, or just a rough figure?

Mike: Well, that's a good question. Again, this is a little more from my direct experience in the past. I would say there is certainly a lot of interest in that kind of model. But percentage-wise, at this point, based on my experience—I guess I have to admit, Erik, it's a bit of a guess—maybe a third. I'm talking about a third, or maybe a quarter, in that range of customers that have driven that kind of deployment. I think there's a lot of interest in it as well. I think the model in itself, especially if it can become very cost effective and interesting enough from a price point—so that it's not, let's say, $1,000 or $2,000 per machine or piece of equipment, but lower than that—then there is probably a specific point where this becomes more interesting from an OpEx perspective.

Again, I've seen quite a bit of interest in that. It's not an easy model to provide, even for the vendor itself. Because, as you can imagine, somebody has to finance the equipment that the customer is essentially leasing or subscribing to. So I think the concept itself is good. I'm not sure it has hit that specific point from a pricing perspective where we see widespread adoption, but I do think the concept is extremely interesting, especially in predictive maintenance.

Erik: Yeah, right. I remember when Munich Re really got into this. I guess you need that kind of financial intermediary to come into the market in order to finance the CapEx and make this really scalable. What about on the connectivity side? You mentioned this earlier. So there's cellular. I guess LTE-M and 5G are technologies that have matured recently. Do you see a significant move towards adoption of these kinds of higher-capability on-campus networks? What do you find to be more common? Are people more reliant on lower-cost or traditional systems when we talk about enabling wireless sensors in a facility?

Mike: Well, I think it's a little bit of a mix. It depends a little bit on what kind of — if you stay with the sensor example, for ease of deployment it could be something that is actually a connected sensor, like what we talked about previously, let's say a vibration and temperature sensor for rotating equipment or electric motors. It's certainly much easier to deploy if it has, let's say, a cellular interface, especially when you look at employees or staff who are not necessarily technicians, not necessarily highly skilled on the digital side. Conceivably, you can have a connected sensor that is then easily deployed with cellular on a specific piece of equipment without any need for gateways or any kind of network within the facility itself. So that's certainly one use case where cellular connectivity can be interesting, either because, A, it's ease of deployment, and/or maybe because it's in more remote facilities.

Of course, it can also be a little bit problematic, because if you're indoors, you might not have the cell coverage that you need. You might be around equipment that's not very conducive to connectivity overall. What we have also seen is certainly a lot of sub-gigahertz deployments, sub-gigahertz networks, that come into play when it comes to sensor connectivity. That applies to predictive maintenance as well. Now, that typically would imply that you do have somewhat of an access point that those sensors would connect to, or could connect to. That's certainly another angle, from a technology perspective, for how those could be deployed.

When it comes to the cellular connectivity that we mentioned before, I would say we have certainly still seen LTE Cat 1, even though you don't need that kind of connectivity from a speed perspective for a sensor like that. It's more because it's ubiquitous in terms of coverage, which can certainly be a bit of a problem with LTE-M, or with NB-IoT, which is even worse recently, since some of those networks have also been sunset. So I would say, often, it can be LTE Cat 1. LTE Cat 1 bis is something we have seen a lot of traction in recently too. It's essentially a little bit of a hybrid between, let's say, LTE-M, NB-IoT, and LTE Cat 1. So you get the affordability and relatively high speed, though still not the super high speed you would need for some industrial IoT applications, but you don't have the coverage problems that you have with LTE-M or NB-IoT. Because, like I said, LTE Cat 1 coverage is pretty ubiquitous. So we have seen that a lot. I wouldn't say there was a huge rush towards LTE-M or NB-IoT, NB-IoT less so. Then going forward, if you look at 5G, even though it's probably two or three years out, RedCap can be interesting for those use cases as well from a deployment perspective.

Erik: There's another area of the tech stack where we've seen a lot of very rapid development recently, which is edge AI on modules, on SOMs. For things like vision inference at the machine, for example, where you need really low latency for high-throughput production lines, are you seeing a lot of adoption of edge AI already, or is this more something that you also see out in a two- to three-year time frame?

Mike: Well, of course, AI/ML is on everybody's mind, as you can imagine. What we have seen so far, I would say, is probably again a multifaceted answer. Yes, we certainly do see interest in AI/ML. In general, though, a lot of customers might be very interested in it but don't necessarily always know how to go about it—sometimes even having difficulty, believe it or not, finding the use cases for it, other than knowing that they probably should be doing something.

So what we typically do then for those customers — because for me, the question is not whether AI/ML is significant or relevant; it's more, for me, a timing question for those customers. So what we're trying to do, when we talk about new designs in particular, is make sure that those customers actually have systems in place, SoCs in place, for example, with capabilities that will support more sophisticated AI/ML applications in the future. And what sophisticated means can vary a lot. If you look at, let's say, a sensing application, predictive maintenance, you can actually do that without AI/ML. You can do that with thresholds as well. But you can also have sensing applications that might not need a lot of AI/ML horsepower, where you might be perfectly fine with one or one and a half TOPS. You don't have to have 20 or 40 TOPS for the use case. So we make sure that those customers have a platform that can grow with their specific needs, even though they might still be experimenting at this point in time and looking at ways to prove the concept of using AI/ML. So I would say that's a general statement that's true in many cases.

Now, there are pockets, I would say, where AI/ML can be very strong. One of them you brought up already, which is vision. Computer vision is something where we do see use cases—which can also scale quite nicely, by the way, from an AI/ML horsepower requirements perspective, in terms of what the platform needs to provide. That is certainly the case for quality inspection use cases, and anything like traffic management or people counting. For image recognition, computer vision, those I think are well-defined use cases where we have customers that are interested, where you can actually have a platform where software is available as well. Then those customers typically know what they're looking for. So I think those pockets are there. But I would say there are a lot of industrial customers who don't necessarily have those use cases in mind directly, where there are a lot of questions about: what exactly can I do, what should I do, how do I take advantage of AI/ML, across those other use cases outside of computer vision?
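
To make the thresholds-versus-AI/ML point from Mike's answer concrete, here is a minimal sketch, assuming a stream of vibration RMS readings from a rotating asset (the limit, window size, and readings are all illustrative assumptions). A fixed threshold needs no ML horsepower at all, while a rolling statistical baseline is a lightweight step toward anomaly detection that still fits a low-TOPS edge device.

```python
# Minimal sketch of the two approaches Mike contrasts for condition
# monitoring on rotating equipment: a fixed vibration threshold versus a
# lightweight statistical baseline. All values are hypothetical.
from collections import deque
from statistics import mean, stdev

VIBRATION_LIMIT_MM_S = 7.1   # assumed fixed alarm threshold (RMS velocity)

def threshold_alert(rms_mm_s: float) -> bool:
    """Classic rule-based check: no ML required."""
    return rms_mm_s > VIBRATION_LIMIT_MM_S

class RollingBaseline:
    """Flags readings that deviate far from the recent baseline."""
    def __init__(self, window: int = 100, sigma: float = 3.0):
        self.history = deque(maxlen=window)
        self.sigma = sigma

    def alert(self, rms_mm_s: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu, sd = mean(self.history), stdev(self.history)
            anomalous = sd > 0 and abs(rms_mm_s - mu) > self.sigma * sd
        self.history.append(rms_mm_s)
        return anomalous

# Usage with a fabricated reading stream: the final value trips the
# statistical baseline before it ever reaches the fixed limit.
baseline = RollingBaseline()
for reading in [2.1, 2.3, 2.0, 2.2, 2.4, 2.1, 2.2, 2.3, 2.1, 2.0, 6.5]:
    print(reading, threshold_alert(reading), baseline.alert(reading))
```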

Erik: Yeah, clear. Okay. So this is a company-specific challenge. Then there's the other AI that has been very prominent recently, which is GenAI. I know this is a bit farther from your traditional business, but I'm sure it's nonetheless a topic that you've had to discuss many times in the past years. Are you seeing real adoption cases for this among your customers? Because if we talk about the challenge of labor and enabling labor, GenAI seems to have a very strong set of use cases around enabling this engineering co-pilot, this guide, this root cause analysis, and helping people to sift through old PDF reports of faults and identify potential hypotheses for root causes of quality issues and so forth. So there seems to be a strong set of use cases around helping engineers to understand the operations. When you have this kind of high turnover and new people coming in who don't have the 15 years in manufacturing that the workforce used to have, are you seeing real cases of adoption already, or are people primarily still testing, doing small pilots, and seeing how this technology might fit into their operations?

Mike: Well, I would say, with respect to that — it certainly is the buzz, as we all know. But as for where it genuinely fits in the factory and where it's, I would say, still hype, you should probably separate the signal from the noise a little bit, so to speak. If you look at Gartner, you probably know that Gartner has moved GenAI, on their hype cycle, from the peak of inflated expectations to the trough of disillusionment, I think as of this year. I think that certainly has some validity. Because I would say that GenAI certainly delivers some genuine breakthroughs. There's no doubt about it when it comes to, for example, product design optimization, where you can generate thousands of design alternatives based on certain constraints and parameters. Root cause analysis, of course, for faster troubleshooting using GenAI. And in documentation and compliance automation, it can deliver very significant productivity gains.

But I would still say—there is some McKinsey research related to that as well—that the value still falls, from a use case perspective, mostly across four functional areas, which I think are customer operations, marketing and sales, R&D, and software engineering. I do see, at least at this point, a much lower value potential for manufacturing functions, because I think GenAI definitely excels at content creation and synthesis, but not necessarily, at this point, at the numerical kind of optimization that drives manufacturing value. So like I said, it really works for product design through generative design. It can help reduce material cost and optimize for manufacturing. It can certainly help with documentation generation and launch management and deliver productivity gains there. Of course, GenAI-generated code components can help, and then, as I mentioned, process documentation and compliance automation.

Where it's still hype, in my opinion, is when it comes to real-time process control and safety-critical decision-making. Currently, GenAI is not fast or reliable enough for millisecond manufacturing decisions and certain other manufacturing decisions. That goes a little bit back to the autonomous factories, the dark factories, that we talked about, and when they become more prevalent in these industries. So I would say the smart play in general for using GenAI is, like I said at the very beginning, to augment human expertise, not to replace it, and then to generate maintenance procedures, for example, for predictive maintenance from sensor data, like we talked about. I think it's perfectly capable of delivering value there in productivity gains.

Training materials for technicians from historical incidents, optimizing production schedules based on demand patterns, I think that is where the ROI for generative AI is proven today, in my opinion. And in general, I would say, nail the fundamentals of proven edge computing and standard AI before chasing the whole GenAI train. That, I think, is a very fundamental initial approach for me. As far as our products are concerned, Erik, that is also the use case envelope in which I would see us using GenAI. So we have a remote management platform, a device management platform, Digi Remote Manager, where I can definitely see GenAI helping with pattern detection, summarization of reports of device behavior, capabilities like that. I can see us deploying it there. But again, the use cases are very specific. I think in manufacturing they're more functional, in my opinion.

Erik: Yeah, it makes sense. It feels like GenAI at this point still needs to be primarily in the domain of the software development companies. Where I've seen the most success has been in companies that have a legacy software system using more traditional data analytics tools, and then they're integrating GenAI as a function within their system, for example, to improve the user experience, or to grab data that was otherwise challenging to access and integrate it in, to open up new data sources. Not GenAI-native solutions, but GenAI really integrated into an existing solution as a component. That seems to be useful. But that's then product development, and it takes time in industrial software to make the product really work smoothly and comply with customer expectations, right? So this would probably be down the road a little bit.

Mike, if we look at this challenge of moving from concept to scale, you mentioned a couple of times that with edge AI, for example, companies are kind of in that period of saying, "Yeah, we can understand that this is going to be relevant to us. We're just not quite sure what the use cases are." This is a very natural problem, right? Because you're always in the position of making investments before you know what the actual result of that investment is going to be. That's a challenging thing for any management team to approve. What have you found to be most successful as a process for effectively doing pilots or, let's say, for de-risking a use case before you go to scale?

I mean, pilots in theory make a lot of sense, but it's actually often quite difficult to make a pilot that accurately represents the complexity of the system at scale, right? So you might have a pilot that works very well on one machine. You try to scale that up, and then you figure out, okay, actually, we've got 400 machines that all look the same, but they all behave a bit differently. That's a much more challenging proposition than just making the system work on one device. So what process have you put in place to smooth that journey from initial evaluation through a successful pilot and then to successful scale?

Mike: I mean, I think it depends on the definition of scale, I would say. There's a sobering reality when it comes to IoT projects and pilots: still about 70% to 75% never really scale beyond the proof of concept. I would say, in my opinion, that the failure pattern is relatively consistent—companies start with technology instead of business problems. So in my opinion, the difference between that pilot purgatory and production success is prioritizing business outcomes and then also focusing on scalability. In the sense of what you talked about, your example was more like, okay, I have machine A, but I maybe also have machine types B, C, D, E, F, G that are not quite the same. I would also fold that under the umbrella of making sure you have a business outcome that is defined. That would also mean that you're not just trying to test a piece of technology on a device; you actually say, okay, I have these devices, I have this machine shop equipment, and these are the specific goals that I want to get out of it from a business outcome perspective.

When it comes to the focus on scalability, I would say that, again, you can avoid that pilot purgatory by designing for scale from day one. Because if your pilot requires any kind of unique conditions or excessive customization, it won't scale. And that is why we, and why I personally, like the term "proof of value" a little more than proof of concept. Meaning, you're building pilots with the business outcome in mind, but also, ideally, with production-ready components, or components as close to production-ready as possible, that are designed to scale from day one. So I think that is certainly something I would focus on, and that we are focusing on, to make that happen, rather than just testing: yeah, can I theoretically do this? Sure. But then, as soon as I say this is a good idea, I need to start building the actual product. So focusing on that proof of value, focusing on business outcomes and scale with production-ready components, I do believe is one way to actually reduce the failure rate of IoT pilots, in my opinion.

Erik: Yeah, good perspective. I guess the POC concept or MVP works better for consumer scenarios, where you can kind of radically change the solution later on once you figure out that the consumer actually finds value. But in an industrial scenario, you want to find a solution that is actually going to fit with the architecture, safety requirements, and also the right cost structure, et cetera. That proof of value concept makes sense here.

Last question. We might have some folks listening in who are either younger and figuring out the direction of their careers, or potentially a bit older but pivoting in their careers. If you look at the manufacturing environment in the U.S., and you were advising somebody on where it would make sense to build skill sets to make sure they're future-proof in terms of their ability to contribute to the development of manufacturing in the U.S., what would be a few areas that you would recommend people consider?

Mike: Well, it's a good question. We talked a little bit about it at the beginning. I would say that probably one of the hardest roles to backfill is maintenance technicians, meaning technicians with digital skills. I think that's probably where one of the biggest skills gaps is: people, essentially, who can bridge operational technology and information technology. I mean, the age-old IT/OT gap that we've talked about for years now, decades, I think that is still an issue, and I think it's accelerating quite a bit. So anything that's related to digital skills in OT environments, simulation and software skills, that, I do believe, is one of the ways the younger generation, as they go into the job, into the manufacturing environment, can actually make a difference: where you have a worker who is not just mechanically inclined, technically inclined, but also has the digital skills required. Maybe then we can finally bridge that OT/IT gap.

Erik: Yeah. Great. Well, Mike, thanks for your time. Anything that we didn't touch on today that you think is really important for folks to understand?

Mike: I don't think so. I think we had a pretty good conversation — we touched on a lot of different subjects, I would say. One thing I do find interesting as well: when we talked about pilots and proving value, and maybe the issues with respect to adoption, we talked about adoption in the context of the technology itself, AI/ML itself, when it comes to building products or developing a solution. But what we have also seen, which is interesting, I think, is that operator trust can be an issue.

Let's say you have a solution in an industrial setup that actually uses AI/ML, for example. Building operator trust requires a multi-faceted approach as well. We touched a little bit on AI in general, but AI systems that can actually provide clear reasoning for their recommendations, those are also the ones that deliver the highest acceptance rates, so that operators don't feel like they're being replaced by the system or don't know what the system is doing. Focusing on more of a "do a small thing first" kind of setup, I think, is also a very important piece of the puzzle when it comes to deploying AI/ML and actually having human acceptance in the workplace. That includes the IT and OT alignment, which is another challenge, as you know; for industrial IoT projects, organizational silos are often the biggest obstacle. Solving that problem also means having more of a top-down approach, so that the entire organization buys into it and understands what is being done, why it is being done, and how it actually benefits the organization and every single individual. So the buy-in across the organization, I think, is a very important aspect.

Again, that goes back to your question about what kind of people we need in the future. I do believe it is the people who can actually bridge that IT/OT gap at a skills level and are able to combine those skills in a single role. I think that will make a big difference in driving all of this forward at a much, much larger scale.

Erik: Yeah, that's a great point. I mean, this is a management challenge of understanding how to make your teams comfortable using AI and other systems. But it's also the challenge of every employee in the company to make sure that they themselves remain relevant and increase their value—as systems are digitalized, that they're able to use those tools.

Mike: The digital translators, so to speak. People who can actually talk to both sides of the equation in terms of the organization's responsibilities. I think that's a very important role, and I think that is certainly one of the main challenges that we need to somehow solve.

Erik: Great. Well, Mike, thanks for your time again.

Mike: Okay. Thank you, Erik.
