Special episode recorded live at Hannover Messe: Together with Scott Kemp from SoftServe, we take a look at real industrial projects – including use cases from OptoTech Optikmaschinen GmbH (part of the SCHUNK Group), Continental, and NVIDIA. The focus: smart data infrastructures, predictive maintenance, and practical AI applications on the shop floor.
Podcast episode summary
How can smart maintenance, AI, and global IoT infrastructures be put into practice – despite labor shortages and complex machinery? Scott Kemp, Head of Manufacturing Services, EMEA, at SoftServe discusses these challenges with Ing. Madeleine Mickeleit, sharing insights from projects with Schunk, Continental, and NVIDIA.
SoftServe demonstrates how scalable IoT backbones and AI applications deliver real value – for example, an AI assistant at Continental that reduces MTTR and boosts OEE by 10%. With Schunk, SoftServe co-developed an IoT backbone spanning the entire machine portfolio, enabling end customers to perform maintenance with the help of assistive functions.
This comes to life in the practical example from OptoTech: Product Owner Vineeth Vellappatt offers a look into an AI-supported grinding process on the SM80 machine – including error detection, parameter analysis, and concrete recommendations for action.
Technologically, SoftServe combines structured sensor data with unstructured knowledge (e.g. SOPs), embedded into a RAG model for fast information delivery – implemented across Microsoft Azure, NVIDIA Omniverse, AWS, and more. Standards like OPC UA and Unified Namespace lay the foundation for scalability.
At the core: compensating for knowledge loss, empowering new workers, monetizing services – and turning AI from a buzzword into productive reality. SoftServe follows practical frameworks like “Double Diamond Thinking” and Proofs of Technology instead of just POCs. The episode kicks off with a short impulse from Onuora Ogbukagu (Deutsche Messe AG).
Podcast Interview
Hey everyone – we’re live from Hannover Messe, one of the world’s largest industrial trade fairs with over 130,000 visitors from more than 150 countries and 4,000 exhibiting companies.
For today’s episode, I’ve picked one clear highlight: We’re cutting through the hype and talking real GenAI use cases on the industrial shop floor – featuring Schunk as an example.
But first, it’s an honor to share an exclusive statement from Onuora Ogbukagu, official spokesman of Deutsche Messe AG.
Onuora
What you do at the IoT Use Case Podcast is exactly what’s happening here at the fair. You’ve got digitalization leaders like Google and Microsoft, but also small, specialized software companies – and classic machine builders. When these worlds come together, that’s what drives future competitiveness. That’s what Hannover Messe is all about.
Thanks, Onuora – couldn’t agree more.
Joining me today is Scott Kemp, Director of Industry Solutions at SoftServe – a global leader in digital solutions for complex industrial challenges.
We’ll explore real-world projects with Schunk, Continental, and NVIDIA – including integrations with AWS, Microsoft, and Google.
All implementation details are, as always, on iotusecase.com. Let’s go.
Scott, great to have you here at Hannover Messe. It’s intense being here on-site. How’s your day going?
Scott
For me, Hannover is kind of the epicenter of manufacturing. It’s my third year here. It’s exhausting, but the conversations make it worth it.
What caught your eye so far? Have you had a chance to leave your booth and take a look around?
Scott
Yeah, we often get caught up in our own world, but I’ve managed to look around a bit.
You’re seeing the trends we expected — Agentic AI, GenAI — essentially the same thing.
But also technologies that have been evolving over the years, like computer vision and predictive maintenance. Some very cool stuff.
If you’re listening now, check out our LinkedIn — we’ve shared some impressions from Hannover Messe. You can see what SoftServe is doing, but also other partners from our network. Maybe let’s start with SoftServe and talk about what you showcased on-site at Hannover Messe.
You work with 10,000 global experts supporting industrial clients across the whole value chain. You’ve also developed pre-built, plug-and-play solution templates, among other things. I guess we’ll get into that today. Are there any favorite IoT projects you’re seeing for 2024 or 2025? Maybe we can talk a bit about SoftServe’s practical work.
Scott
Sure! I’ve been with SoftServe for exactly a year now — my first day was actually at last year’s Hannover Messe. For me, SoftServe is a bit of a hidden gem in the industrial space. Once you dig into the website, you start to uncover some really cool and innovative things we’re doing across industries, especially in the industrial sector, with amazing customers. Last year, for example, we presented a great solution we built with NVIDIA and Continental — a generative AI industrial assistant. It’s a fantastic case study where we helped reduce the mean time to repair (MTTR) and increase OEE. I believe the result was around a 10% increase in OEE, which is fantastic. And of course, we’re always happy when our customers are.
That’s great. So how was NVIDIA involved? Was that through Omniverse?
Scott
Yes — and again, it’s one of those hidden aspects. We’ve actually been NVIDIA Partner of the Year two years in a row now.
Congrats again! Very cool.
Scott
Thanks! We just received the latest award last week at GTC. That really comes from years of investment in that relationship. We’ve been building solutions on top of the NVIDIA stack for a few years, and now, with the NVIDIA hype, that investment is really paying off. We often make those kinds of strategic, research-driven investments — testing whether a technology will flourish. Sometimes it doesn’t, but in this case, it really did. For this project, we approached the customer and asked: What problem do you want to solve? Then we worked backwards to figure out which technologies would help. That led us to use generative AI to tackle a range of challenges on the production floor. It was a very exciting opportunity.
Okay, nice. And at Hannover Messe you showcased that great use case together with Schunk. Can you give some insights into that? I’m really curious, especially since we’re focusing on machine builders today.
Scott
I think we see both sides of the coin. On one side, there are large companies that consider running their own production and maintaining machines as part of their core business. On the other side, you have machine builders who are looking to generate additional revenue by offering services to their end customers. That’s the difference between Continental and Schunk: Continental focused on optimizing their own production line, whereas Schunk looked at how they could support their customers. We worked with Schunk on a similar problem statement — using AI to enable their customers to perform their own maintenance tasks on Schunk machines.
Before we dive into the problem statement, let’s hear directly from OptoTech Optikmaschinen GmbH – a member of the Schunk Group.
They’re showcasing one of their connected machines live at Hannover Messe. Joining us for a quick insight is Vineeth Vellappatt, Product Owner for IoT and Digital Solutions at OptoTech.
Vineeth
OptoTech specializes in optical manufacturing equipment, and here at Hannover Messe, we’re showcasing the SM80 — a grinding machine that’s fully connected to our IoT platform. We’re demonstrating AI-powered use cases like parameter analysis, manual assessments, information extraction, and failure explanation.
In the optical industry, operators usually need to be on-site to react to errors. Together with SoftServe, we’ve built an IoT use case that collects parameters like spindle speed and overrides to monitor the machine remotely.
At the booth, we’re showing a live connection to the machine in Launsbach. Visitors can view real-time data like run time, manual time, stop time, and detailed error states. With AI, users can query specific faults and get clear explanations — including cause, effect, and suggested remedies — right at their fingertips.
Thanks, Vineeth, for sharing these insights live from Hannover Messe.
If you’d like to learn more, you’ll find links to the connected machine, OptoTech’s digital solutions, and Vineeth’s LinkedIn profile in the show notes.
Let’s get back to the project, Scott. How exactly did you collaborate with Schunk – and what was the scope of your joint initiative?
Scott
A lot of it started with how we helped them develop an IoT platform. The CIO at Schunk often says that their team is responsible for “building the road” — and others, like machines and developers, can drive on that road. It’s a bit like an Android system for apps: you provide the foundation, and then others can innovate on top. We collaborated with them to build a holistic IoT platform that connects all their equipment globally. That’s a key foundation for enabling things like generative AI. It’s a tough challenge because Schunk has a wide variety of machines — different products, different lifespans, different protocols and standards. Plus, they’ve acquired other companies, which adds complexity when integrating systems. So, having a unified IoT platform that provides visibility across all machines globally is a crucial step before you can do the “fun stuff” like using AI characters or assistants.
You started to outline the problem, just opening up the topic. So what’s the actual business need here? What specific problems are you solving?
Could you walk us through the problem statement from Schunk?
Scott
Sure. So, if we compare Continental and Schunk, there are a lot of similarities in their problem statements — and honestly, it’s an industry-wide challenge.
We’ve seen this with many of our customers: there’s a significant loss of knowledge. Skilled workers who could perform machine maintenance or changeovers blindfolded in five minutes are retiring or being recruited away.
So the big question is: How can we enable younger, less experienced workers to operate not just at the same speed, but at the same level of capability?
At the same time, equipment is getting more complex. It’s no longer like fixing an old Land Rover where you can just unscrew a few bolts. Take modern cars: you need to remove five components just to reach a light bulb. Machines are similar now — more electronics, more software, more complexity. And that makes everything harder.
Yeah, absolutely. Do you have a specific example of how Schunk — or another company — is addressing that in practice?
Scott
Sure. I can give examples from Schunk, but also from other customers we work with. Again, it starts with IoT data as the foundation. We bring together structured data — like vibration, temperature, and pressure sensors — and combine it with unstructured data, such as SOPs or PDFs.
We can even integrate information from MES or ERP systems. All of this is then fed into a generative AI model, more specifically into a RAG model — Retrieval-Augmented Generation.
The value here is simple: giving people faster access to the right information.
And what’s really exciting is when you combine that with traditional AI models — for example, predictive maintenance — all within the same environment. Then you can start asking really interesting questions like: When is this pump likely to fail?
And once you have that prediction, you move from a predictive state into an action state. That’s where generative AI comes in again. You can ask: Why is this prediction being made? It can then show you, for example, the vibration data from the last two days and explain: This behavior is triggering the prediction. And the next question is: Okay, how do I fix it?
At that point, the system pulls information from the SOP documentation — step one, step two, step three — and guides the operator through the resolution process.
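To make the retrieval step Scott describes more concrete, here is a minimal sketch: given a fault description, it ranks SOP snippets by bag-of-words cosine similarity and returns the best match. A production RAG setup would use a learned embedding model and a vector database; the documents, fault names, and similarity method here are purely illustrative.

```python
# Minimal RAG-style retrieval sketch over SOP snippets (illustrative only).
import math
from collections import Counter

SOP_SNIPPETS = {
    "spindle_vibration": "If spindle vibration exceeds threshold: stop machine, "
                         "check bearing wear, re-balance spindle, restart in test mode.",
    "coolant_pressure": "Low coolant pressure: inspect pump filter, clear blockage, "
                        "verify pressure sensor calibration.",
    "axis_drift": "Axis position drift: re-reference axes, check encoder cabling.",
}

def _vec(text: str) -> Counter:
    # Crude tokenization into a term-frequency vector.
    return Counter(text.lower().replace(":", " ").replace(",", " ").split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_sop(query: str) -> str:
    """Return the SOP snippet most relevant to the fault description."""
    qv = _vec(query)
    best = max(SOP_SNIPPETS, key=lambda k: _cosine(qv, _vec(SOP_SNIPPETS[k])))
    return SOP_SNIPPETS[best]

print(retrieve_sop("spindle vibration is high, what do I check?"))
```

In the real pipeline, the retrieved snippet would then be passed as context to the generative model, which walks the operator through the steps.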
That’s where generative AI acts as a single interface, enabling the execution of all these tasks seamlessly. I think that’s really cool.
I’m wondering how preventive maintenance can be replaced by an AI-based algorithm. Because in practice, we often see these standard rules: “Change it after 1,200 hours,” and that’s it. But now, we’re changing the whole system, right?
Scott
Yeah, exactly. I attended a maintenance event in Amsterdam last year, and it was really interesting to hear how experts are approaching this topic.
There are different classifications: Some components are intentionally run to failure because they’re low-value and not critical — if they break, it doesn’t matter.
Other components require preventive maintenance because failure is simply not an option. And then you have high-value assets. That’s where AI-based algorithms come into play. These help predict failures and allow you to act on your own terms.
I often talk about three steps: predicting, optimizing, and prescribing. Instead of constantly firefighting — which is still very common in operations — we gain the ability to anticipate what will happen. That puts us in control of the situation.
Are there any best practices for classifying the data? Because some data points might be critical for predicting a specific failure — and others not.
Do you have experience in structuring that? Like, how do you decide what’s a “critical event” and what isn’t? Because, as you said, so much of this knowledge is still in people’s heads.
How do you actually operationalize that?
Scott
Good question. Predictive maintenance can feel like a bit of a science. Often, there’s an 80/20 principle at play: three or four key data features are usually responsible for most failure cases.
For instance, you might know that when vibration or temperature behaves in a certain way, a failure is likely within the next one to two weeks — based on historical data from one or two years.
To improve the accuracy of predictions, you sometimes need to get creative. You might need to factor in external conditions — for example, how an asset operates in Germany versus in Brazil. Ambient temperature and the working environment can vary greatly, and machines behave very differently in those contexts.
You can also look at MES data to understand what was running at the time of a failure. Sometimes you find correlations in places you didn’t expect. There might be a surprising but significant impact from certain variables — and that’s where AI really shines. It can go beyond human capabilities by analyzing a wider range of data and identifying critical factors that wouldn’t be obvious otherwise.
Yeah, so you’re basically looking in every direction.
Scott
Exactly. It’s quite interesting — and we can apply this approach to different types of assets, like pumps, centrifugal systems, etc. They each behave in their own way, but we can re-templatize the findings and reuse those models for similar types of equipment.
A lot of our community members are asking: Who actually pays for these kinds of systems? And also, how do you talk to customers about that? How do you monetize AI-generated insights? That’s a big topic. Has Schunk already solved this — or other customers? Or is it still an ongoing process, where you co-develop these models with the customer as a partner? What’s your perspective?
Scott
Any time you talk to a customer, you really have to understand the nuances of their business. I always say: I try to become part of their company. I treat myself like I’m one of their employees — especially when I dive deep into the production floor.
Because sometimes, people at a high level might think they know what the problem is. But when you actually talk to the line operator, or the maintenance engineer, the reality can be very different. So, getting down to the granular level, asking the right questions, and truly understanding whether there is a real problem — that’s how you begin to identify potential business value.
So you basically put forward a thesis and then you ask your customer in a workshop to validate it, right?
Scott
We usually take a few different approaches. Of course, AI is a huge buzzword right now. A lot of people just want to talk about AI — and we’re like, “Okay, we can talk about it, but let’s focus on areas where there’s real value.” One method we use is the “Double Diamond Thinking” approach. You start by casting a wide net — sit together in a room and say: “This could be interesting”, or “That might have value.”
Then, on the other side of the diamond, you start narrowing it down: let’s prioritize the one, two, or three ideas that are truly important and then try to assign business value.
And once that’s done, we move into the second diamond: Let’s go build something real.
We try to avoid traditional POCs — in my opinion, there’s not much room for conceptual stuff anymore. It’s about proving what has already worked elsewhere.
So we run a Proof of Technology over about three months, to demonstrate that there’s real value.
We bring everyone along on that journey — and if it works, then we move to the next phase: scaling and improving. From what I’ve seen, that’s the best way to do it.
But there are also more in-depth methods we can apply. For example, we’ve conducted Digital Factory Assessments using the SIRI framework — that’s the Smart Industry Readiness Index.
It’s a two-day workshop we run, also used by McKinsey and the World Economic Forum to evaluate so-called lighthouse factories. And that’s really interesting research.
You assess a facility from 16 different dimensions and rate each one from 0 to 5 in terms of digitization. Then you can benchmark your site against over 5,000 other facilities worldwide — and see where you stand within your industry.
Very interesting! I’ll definitely include a link to that in the show notes — feel free to share it with me later. But let’s circle back to the last point: How do you actually communicate AI-generated insights? Let’s say the AI provides a great insight — like: “You need to change the oil” — based on a complex analysis. Now you’ve got this recommendation. But how do you convey that effectively to the customer?
Scott
I mentioned earlier the three stages: predict, optimize and prescribe. What you’re talking about now falls into the prescriptive stage — where AI contextualizes a large amount of information and can suggest: “This is likely the cause of the failure — go check this component.” It might even give two options. That’s where the human-in-the-loop comes in. Someone with domain expertise evaluates whether the suggestion is correct or not. That person then investigates further to confirm or refute the AI’s recommendation. So once you have that prescriptive insight, the next step is to actually act on it.
And that’s where many companies struggle — because technology isn’t the answer to everything. In some cases, I’d say technology is only about 30%. The rest is about process.
The real question is: Is the company actually set up to receive and act on a prescription?
That’s the next step — is the organization ready to adopt and operationalize technology?
That’s why we work closely with many of our customers — not just to help them adopt technology, but to ensure they’re ready to execute on its recommendations. That’s an entirely different side of the problem statement.
Yeah, makes total sense. Maybe one last question on the business case side: Who actually pays for what? Because with customers like Continental — or maybe others like Schaeffler or Volkswagen — you often have use cases developed at the plant level.
They might solve something within the factory, but still need to partner with experts — either from the machine or component side — because those partners know their machines inside and out. So what’s your personal view — or do you have any best practices — on who pays for what?
Scott
Yeah, that’s almost the million-dollar question for enterprises. Companies operate differently, but especially in manufacturing, you often see “mini kingdoms” — each plant or facility is its own little world. You might manage to successfully deploy a technology in one plant, but getting it adopted in others is another story. They often have different budgets and separate decision-making processes.
And that’s another core problem: How do you get everyone on the same page? It varies from company to company, and it’s part of the enterprise challenge — you need everyone to be part of the journey. That includes IT teams, centralized digitalization teams, plant management — and you need alignment between the factories. Otherwise, you’re just solving for one site.
There’s also the question of when to integrate cloud solutions — for example, from Schunk or other providers who bring in valuable data and domain expertise.
They know the machines, they know when to change the oil — just to give a simple example.
But then you have to decide: How do I integrate these cloud-based solutions into my own system? That’s especially challenging in enterprise environments with complex IT architectures.
So the question becomes: How do I integrate external providers like Schunk who want to deploy their applications or bring in their expertise?
That’s something you’re also showcasing here at the booth.
Scott
It really varies from company to company. Some are clearly open to innovation — you can sense that in the conversations. Others prefer a DIY approach, wanting to build everything internally.
But I believe the most successful companies are those that work with open systems and architectures, where they can plug in different tools and collaborate externally.
At the same time, you have to manage different stakeholders — especially IT departments who already have their own solutions. And as the saying goes: “Never call someone else’s baby ugly.” You need to make sure everyone feels involved — that’s the key to success.
When it comes to digital transformation, I always say: Make it feel like the idea came from within the team. If you simply tell an operations team, “Here’s predictive maintenance, go use it,” — it’s never going to be adopted. You have to bring them into the process, make them feel like they’re part of building it. Then it becomes their baby, our baby — everyone’s baby.
And when you raise something together, it works.
It’s a new era of collaboration — working across digital teams, with interdisciplinary setups and partnerships, including external providers.
Scott
And fortunately, we now have standards that help everyone work better together — protocols, APIs, and cloud infrastructures designed for openness. That’s what we see emerging under the surface here at Hannover Messe.
Yes, many technologies being shown might not be adopted for a few years. But what’s exciting is that more people are seriously talking about building common platforms and infrastructures — like OPC UA, Unified Namespace, and similar approaches — to enable exactly that kind of collaboration.
Yeah, all those topics — super important. Thanks for bringing up the technical angle again! I just have one or two more questions, especially about how SoftServe is contributing with technology and what tech we should be paying attention to. Okay, coming to the last point — I’m also curious to hear how SoftServe can help.
We touched on this at the beginning, and for everyone setting up similar projects, I’ll include your contact in the show notes so they can reach out and share best practices.
But maybe you can explain in more detail: How exactly did you support Schunk — and maybe also Continental — in these projects? Then we could zoom out and look at the broader picture.
Scott
One of the things I love most about working at SoftServe is that we’re technology-agnostic. That means we always aim to do what’s right for the customer. Even with ISVs (independent software vendors) that offer out-of-the-box solutions — in reality, there’s always a certain degree of customization needed. Every company’s infrastructure is different, so integration is never truly plug-and-play.
The strength of SoftServe is that we take a step back, look at the customer’s architecture and needs, and then say: “Given your situation, here’s the best way to implement this.”
We don’t try to force-fit a solution — like a hammer looking for nails. Instead, we ask: What do you need? What tools can we bring in to help you achieve your goals — within your budget and priorities? And based on that, we build the right setup.
For example, with Schunk, they had a wide range of equipment, and they needed to build a solid IoT infrastructure on top of it. We supported them — together with Microsoft — in building that on the Microsoft stack.
Once that was in place, we could bring in the more advanced tools, like generative AI — and that’s where things really start to get exciting.
Okay, so do you bring in people and experts to help set up these projects — or are you also providing software or templates to support that process?
Scott
We work in a very flexible way. SoftServe has around 11,000 people globally, but within that, we have Centers of Excellence — top-level experts in their fields who know everything about IoT.
We also have a team of over 100 data scientists, and we bring these experts together to form a dedicated team tailored to solve each customer’s specific challenge.
In addition, we use what we call templates or accelerators — proven setups from previous projects. These allow us to get to about a 60% ready state even before we start engaging with the customer.
That means we often have the problem statement defined and the architecture already mapped out — including APIs and integrations — so we can quickly embed the solution into the customer’s infrastructure.
And you’re also working with different standards and tech stacks, depending on what the customer needs?
Scott
Exactly — and with different partners as well. We’re, for example, closely partnered with Litmus, and I’m personally a big fan of their work.
Their platform makes it possible to connect to virtually any PLC, which is really powerful.
We’re actually doing a joint talk with them — either today or tomorrow — so that’ll be fun!
Cool, I’ll include that in the show notes. So if you’re listening now, check those out — you’ll find everything we’ve discussed there.
Let’s wrap it up with one last question: You’ve seen a lot of different projects — from your expertise and background, do you have two or three key learnings you’d like to share with others who are setting up a GenAI use case based on IoT data?
Scott
I’d probably go back to the idea of “Double Diamond Thinking”. Starting with technology is the wrong approach — you need to start with the problem statement. Then comes the process of learning, understanding, and only then: “What technologies could actually solve this?” But even then, technology isn’t always the answer.
We’ve worked with many customers where the solution turned out to be more about process change. That’s where we also support them — because in the end, it’s a fusion of technology and people that makes it work. And that’s where the real beauty lies.
The second key point I’d add is: Yes, technology is there, but it’s important to choose the right one and build it on top of a solid, open infrastructure.
We talked a lot about data — cleaning it, structuring it, making it usable. That creates the road for others to innovate on. We actually ran a survey earlier this year — I can share the link for the show notes. It involved over 750 industry experts, directors, and decision-makers. One of the key findings was that 75% of them plan to increase investment into data foundations. Because only then can you scale the more exciting parts — like AI and GenAI. And that’s what it’s all about.
Yeah, right. I’ll include that in the show notes.
It’s always a bit of a chicken-and-egg problem — on the one hand, you want to start with a business use case to generate insights. On the other, you also have to deal with connectivity — or what you refer to as the foundation.
By that, do you mean data acquisition? Or rather: how to build a scalable path for acquiring and using data?
Scott
Exactly. It’s about making data available to the right people and then federating it so that, when someone in the business has an idea, they can act on it quickly.
That’s so important — it’s about empowering the business teams to move fast.
Right — so that’s the second key takeaway: You need to invest in both sides — the business case and the IT infrastructure that enables scale.
Scott
As you said, it’s a chicken-and-egg situation — sometimes the business value is driven by an AI application, but to get there, you need the right data foundation.
Yeah, and you also need to have standardized data and proper data engineering in place to make all of that work efficiently.
Scott
That’s often where companies run into issues. And that’s where Proofs of Technology can be really useful — they help bridge the gap between the business side and the technical foundation.
Okay, cool. Let’s wrap up with one last look into the future. How do you see the evolution — let’s say from predictive optimization towards GenAI tools — by 2030 or within the next 5 to 10 years?
Where do you think we’re heading?
Scott
I watched Jensen Huang’s GTC keynote the other week, where he shared NVIDIA’s vision of where this is all going. The roadmap he laid out moves from Generative AI, to Agentic AI, and eventually to Physical AI.
The topic of Agentic AI is especially interesting — but there are still a lot of misunderstandings around it. Some people are using multimodal RAGs, which are already exciting.
But what really fascinates me is the complex reasoning: the ability of an AI system to move between systems, do pre-processing and thinking, and then return with a structured, meaningful answer — not just a generic generative response. This will absolutely evolve the performance of these systems — though they are very compute-heavy.
So it’s going to be a journey to get companies ready for that. Once again, data foundations will be critical — because reasoning across multiple systems and connecting them meaningfully is not an easy task.
Then the next step is Physical AI. We’re already seeing early signs of this — especially through our collaboration with NVIDIA, where we’re training robotics in virtual environments using physics-based simulations. The ability to model physical interactions — whether that’s with liquids or robot grippers picking up objects — is becoming highly advanced.
I often compare it to watching my child learn how to pick something up: the AI fails many times in simulation, but eventually it succeeds — and that’s pretty amazing.
I don’t know if it’s the ultimate next frontier, but I’m personally very excited about this space.
We’re witnessing a renaissance in robotics — and they’re becoming scarily smart, which is both fascinating and fun to watch.
It’s the fusion of the virtual and physical world, and that’s where it gets really interesting.
Very nice. That definitely sounds like a topic for another episode.
Once you have something running in practice, we should talk again and focus on how to set up similar use cases.
So far — thank you so much for your time today. It was really valuable to get these insights — for me and for our listeners.
And if you’re working on a similar challenge and want to explore what’s possible at your company — get in touch with Scott. I’ll put his contact in the show notes, and you can also check out softserveinc.com, right?
Please have a look — and again, thanks so much for being here and for recording this on-site at Hannover Messe.
Scott
Thanks! And yes — if you’re here, feel free to reach out.
And just to repeat my favorite analogy: Let’s try to raise the baby together. It’s a strange one, but it fits!
Perfect. Have a great week, everyone — bye!