

Plan connectivity from the start – an edge layer integrating OT and IT


Listen to the IoT Use Case Podcast on Spotify.
Listen to the IoT Use Case Podcast on other platforms.

In podcast episode 199, Robin Schubert (Product Manager) and Frank Tannhäuser (Senior Sales Manager Manufacturing Automation) from Kontron AIS discuss why connectivity is becoming a strategic task in modern production environments. Many plants operate machines from different generations and with heterogeneous interfaces, making stable OT/IT connections, consistent data provisioning, and secure update processes increasingly challenging. At the same time, requirements are rising due to NIS2, the Cyber Resilience Act, and the growing need to make data usable for real-time processes, quality documentation, and future AI applications.

Podcast episode summary

The guests explain why many production plants—especially those built before 2010—exhibit highly heterogeneous landscapes of controllers, interfaces, and protocols, leading to significant integration effort, performance bottlenecks, and risks related to update capability.

A key focus of the episode is how connectivity can be considered early in plant design to better manage later requirements around security, software maintenance, NIS2 compliance, CRA regulations, and scalability. Based on real project experience, the experts demonstrate how an edge layer standardizes the connection of machines from different generations, provides consistent data, and enables secure update rollouts during ongoing operations.

The discussion also covers the technical criteria that must be considered when implementing OPC UA in high-cycle environments—such as latency limits below 100 milliseconds and, depending on the implementation, typical fluctuations into the seconds range. The guests share insights into integration scenarios where FabEagle®Connect is deployed as a Docker-based component within KontronGrid, serving as a reliable data source for MES systems, control systems, and data lake environments.

Looking ahead, both speakers outline how a well-designed edge layer forms the foundation for digital twins, AI-based analytics, and scalable production data management. This enables companies to integrate additional lines and sites step by step without having to re-implement existing interfaces.

Podcast interview

Today on the IoT Use Case Podcast: the tension between IoT for individual machines and an IoT edge layer for an entire factory, the pitfalls and specifics of OPC UA, the role of digital twins and artificial intelligence in production—and much more. Our guests from Kontron AIS are Robin Schubert, Product Manager for production connectivity, and Frank Tannhäuser, Senior Sales Manager Manufacturing Automation. We go deep into the details while still covering the bigger picture. Enjoy the episode!

This is the first regular episode hosted by me, Dr. Peter Schopf, your new podcast co-host. I am stepping in for Madeleine Mickeleit, who is currently in a very exciting phase of her life—just before the birth of her children. We are very happy for her and would like to send her our warm regards. Today’s discussion focuses primarily on data and the concept of the edge layer. And with that, I’d like to ask our guests to briefly introduce themselves. Robin, who are you and where are you joining us from?

Robin

Today I’m back in the office in Dresden, at our Kontron AIS site. I represent product management for the FabEagle product line, and today the focus will likely be on FabEagle®Connect, our solution for connectivity, along with a few additional tools contributed by the group—developed together with my colleague Vanessa—to support the edge layer concept. I’ve been with Kontron AIS for five years now, and for the past three years I’ve been deeply involved in the development of FabEagle®Connect and our control systems.

That’s exciting—especially since products like these often come with long development cycles. In many cases, it’s a never-ending story. Frank, over to you: where are you joining us from today?

Frank

I like to keep my private life and work clearly separated, which is why I prefer working on-site at the company. I work in sales within factory automation and am responsible for our factory automation solutions such as MES systems, control systems, and connectivity solutions. I spent many years working hands-on in the field—machine programming and MES commissioning—and have now been in sales at Kontron AIS for around ten years.

It’s always a good thing to have someone in sales who really knows what they’re talking about because they’ve done it themselves. Great. On your website, Kontron AIS states: “Whether you are a machine or plant manufacturer or a factory operator, we have the right software product or the optimal digitalization solution for you.” That sounds very comprehensive at first glance. So what are your core strengths? Where do you focus—and what do you deliberately not do?

Robin

It’s a very exciting topic because we have a wide range of solutions in-house—from connectivity to production control. Our core strength lies in the classical automation of production, particularly in discrete manufacturing. We are relatively industry-agnostic in this regard. Wherever we encounter PLCs or production control systems in manufacturing environments, we support our customers—both in deploying products and integrating them. And I think integration, in particular, is where the real challenges lie, especially from a sales perspective—right, Frank?

Frank

Exactly. Interface-related topics are our core focus—and they are also what matters most to our customers. On the one hand, we offer standard solutions such as MES systems. But in many projects, the real core task is figuring out: How do we access the data? How do we do it reliably? How does it work in a stable way? And how can we ultimately create value for the customer? Establishing connectivity often becomes a project of its own. There are different approaches and processes you can apply. We work across many industries—from semiconductor fabs with extremely high requirements and massive data volumes to very traditional production environments, including manufacturers of pool liners who have been running the same processes for 30 or 40 years and are now looking to digitalize. What they all need is connectivity.

In preparation for this episode, we talked about your edge layer concept. And the way I imagine it—and I’m curious to hear your explanation—is that this layer is placed over the factory, across the machines. I clearly remember discussions we had years ago about this tension: on the one hand, machine builders who want to connect their individual machines and provide dedicated apps for them; on the other hand, production managers who need a holistic view of the entire factory—and not 20 different applications from five different machine vendors.
So how does your edge layer concept fit into this? How did it come about in the first place? What was the original idea, which customer problem did you identify, and how did you define it?

Robin

I recently came across an interesting figure from the VDMA. In a survey, 60 percent of integrators stated that they need additional gateways or middleware solutions to integrate machines efficiently. This shows that finding the right middleware to connect machines to IT systems is still a major challenge in factory automation. We are talking about machine interfaces or device interfaces that need to be connected to IT systems—whether production control systems like MES or ERP systems for material management. The goal is to extract maximum value from machine data or to enable control-related use cases.
The edge layer addresses more than this classic connectivity task of linking systems with different interfaces. We also want to integrate additional aspects such as update capability—meaning the ability to update software automatically—and device monitoring: Are the IoT devices running? What is their runtime? What is their uptime? The idea is to monitor and maintain the entire device landscape required for modern IoT applications and keep it up to date in order to meet today’s security standards.
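
To make the device-monitoring part concrete, here is a minimal sketch in Python of heartbeat-based monitoring that tracks whether a device is running and how long it has been alive. This is a generic illustration, not Kontron AIS’s implementation; the device ID and the 30-second timeout are assumptions.

```python
# Minimal sketch of heartbeat-based edge-device monitoring (generic
# illustration, not Kontron AIS's implementation).
import time

HEARTBEAT_TIMEOUT_S = 30  # assumption: 30 s of silence counts as "down"

first_seen: dict[str, float] = {}  # device id -> first heartbeat timestamp
last_seen: dict[str, float] = {}   # device id -> latest heartbeat timestamp

def record_heartbeat(device_id: str) -> None:
    """Called whenever a device reports in (e.g., via MQTT or HTTP)."""
    now = time.time()
    first_seen.setdefault(device_id, now)
    last_seen[device_id] = now

def device_status(device_id: str) -> dict:
    """Return running state and total runtime for one device."""
    now = time.time()
    seen = last_seen.get(device_id)
    return {
        "device": device_id,
        "running": seen is not None and (now - seen) < HEARTBEAT_TIMEOUT_S,
        "runtime_s": now - first_seen.get(device_id, now),
    }

record_heartbeat("edge-gw-01")  # hypothetical gateway id
print(device_status("edge-gw-01"))
```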

That ties in well with the topic of security you just mentioned. Frank, when you’re talking to customers: do they come to you because they have a concrete problem and need to meet certain standards? Or do you still have to do a lot of persuasion—explaining that this approach can save money or prevent downtime? In other words, do you experience more pull or push? And what are the most important needs from the customer’s perspective?

Frank

There are essentially two sides to this. On the one hand, someone has to offer such a solution in the first place—and that’s where we come in, because we have a concept for how to implement this across very heterogeneous environments. On the other hand, it has increasingly become a pull topic. That has changed over the last few years: many customers now have a very clear idea of what they need when it comes to an edge layer. They have informed themselves, and in most cases it is the IT departments that approach us—not the production manager saying, “I need data from the machine now.” These IT teams are often small, yet they still have to handle complex integration projects. That’s why they need a solution that is flexible, easy to configure, and easy to maintain. As a result, the requirement for a unified integration layer now very clearly comes from the customer side.
Industry-specific requirements also play a major role. In high-volume production, we see very high throughput requirements: machines produce several hundred parts per minute, and the product-related data must be transmitted reliably. In other industries—pharmaceuticals, for example—the focus is more on regulatory requirements. As soon as IT is involved, it has to be assessed whether there is any risk to product quality, which ultimately can have an impact on people. These requirements naturally influence what we develop and prioritize in product management.

Especially for many mid-sized companies, investing in connectivity is a bold step. Often, they are looking for one specific use case that “pays for everything”—a so-called killer use case. But in reality, that rarely exists. We discussed this in the special episode with Madeleine as well: it’s much more about many small gold nuggets—many individual insights that can be derived from the data. How do you see this in practice when it comes to tangible benefits in projects?
Integration is a crucial prerequisite. But to convince management—especially a CFO who may expect a clear return on investment within one or two years—you need a set of concrete arguments and use cases. Which use cases do you see as most important for customers? You probably encounter this very directly in sales, Frank.

Frank

In fact, regardless of the industry, it is often very similar. In classic manufacturing environments that are not necessarily high-tech, the first step is almost always transparency—making problems in production visible. Companies usually pick one or two core applications, such as capturing throughput and quality data at the final station of a machine, the end-of-line test used for quality assurance. This already provides two very important pieces of information: how many good parts are being produced and where scrap occurs. This visibility alone can be enough to justify rolling the solution out further.
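
As a back-of-the-envelope illustration of those two numbers, good parts and scrap rate, a few lines of Python over hypothetical end-of-line results:

```python
# Deriving good-part count and scrap rate from end-of-line test results.
# The result list is invented for illustration.
results = ["OK", "OK", "NOK", "OK", "OK", "OK", "NOK", "OK"]

good = results.count("OK")
scrap_rate = results.count("NOK") / len(results)
print(f"good parts: {good}, scrap rate: {scrap_rate:.1%}")
# -> good parts: 6, scrap rate: 25.0%
```
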
In addition, once you can see when and why machines stop, you can improve maintenance. Or you can compare day and night shifts and identify optimization potential in employee training. This first small use case often shows a return on investment relatively quickly—and then the project starts to scale across the entire factory.
In high-tech environments—such as new players currently entering the solar cell market in the US—the starting point is different. Here we are talking about highly automated factories with automated transport systems and innovative coating and printing processes. In these cases, digital twins are considered very early on, in different forms. That requires very high data quality, which makes the integration layer extremely important and quickly turns the project into a large-scale endeavor. In such greenfield factories, companies rarely start small; instead, they try to implement a comprehensive, unified concept from the very beginning.

I find the topic of digital twins particularly exciting. You can look at it from many perspectives—the digital twin of the product, the machine, or the entire production, across electrical, mechanical, and process-related levels.

[13:04] Challenges, potentials and status quo – This is what the use case looks like in practice

Robin, when it comes to keeping use cases in mind—we already talked at the beginning about how difficult it can be to position a product in an environment that often has a strong project character. How do customer use cases feed into your product development? What role do they play for you?

Robin

For us, the customer—or more precisely, the concrete use case—plays the central role. Beyond very modern topics such as digital twins or analytical, data-driven optimization, many companies in Europe are still operating equipment that is ten to fifteen years old. Around 70 percent of production assets were built before 2010. These systems come with very different interface protocols and integration approaches. That’s why we always look at a highly diverse market. We want to integrate existing machines into modern IT environments and new manufacturing execution systems, just as much as we want to digitally connect new equipment. Our goal is to cover connectivity as broadly as possible—from classic legacy interfaces to modern standards that are increasingly being adopted. OPC UA is a good example of this: today, market penetration may be around 25 percent, but it is expected to rise to over 40 percent in the coming years. However, OPC UA is not a standard you can configure with a single click. It requires a lot of know-how. You don’t just look at an individual machine, but at an entire production landscape with 50 or 100 machines connected to an MES, fulfilling control or transparency tasks.
One topic I would like to add is security. This is one of those small gold nuggets that is often overlooked. It is extremely difficult to quantify how much investment should go into update capability, security, hardened operating systems, or automated update processes. But these aspects are strategically critical. We support our customers in operating their systems in a scalable and secure way for the future.

Open source also plays a role in production environments—but at the same time, it brings security challenges. From your perspective, what are the trade-offs between open-source and proprietary software solutions?

Robin

Open source definitely has a firm place in modern automation. Linux is a good example—extremely versatile and widely used. There are also many open-source integration libraries that are very well suited for pilot projects. But when we look at the upcoming Cyber Resilience Act, or at operations with high regulatory requirements—such as under NIS2—it becomes clear that a partner is needed who takes responsibility. Someone who provides updates, monitors libraries, supports integration and upgrades, and works with the operator on a concept to ensure that production remains secure and efficient even ten years from now. Many factors come together here, and having an integration partner who actively manages this is extremely important.

The Cyber Resilience Act is particularly interesting—and in combination with NIS2, it’s not trivial. Frank, are you already seeing tangible demand from customers? Is there awareness of what’s coming, or is there still a lot of uncertainty?

Frank

There is still a lot of waiting and hesitation. Many people do not yet want to fully acknowledge what is coming. In the end, it always comes down to risk assessment. Some say, “Yes, we have Windows machines in production, but they are isolated in the production network.” What is often underestimated is that people still have physical access and could theoretically do things they shouldn’t.
For operators, the Cyber Resilience Act will mean additional effort and costs related to license management and system maintenance. Among machine builders, we see two distinct groups: those who are actively addressing the topic and building up expertise, and those who simply do not have it on their radar because they are under strong price pressure and view every additional security requirement as a cost factor. At the same time, more and more factory operators are explicitly including these requirements in their specifications. As a result, machine builders are required to deliver an update and security concept covering the entire lifecycle of a machine.

Production managers and factory owners have significant influence through procurement decisions. If access to machine data is not specified upfront, companies often end up having to purchase software access or data connectivity separately from the machine builder later on. Robin, from a product management perspective, how do you view the Cyber Resilience Act in production environments? What are the most important aspects that need to be considered?

Robin

The Cyber Resilience Act is primarily aimed at us as software manufacturers. We are required to continuously update and support our software and to take responsibility for the libraries used within our products. This is a step toward improved security and more up-to-date software in industrial environments. At the same time, it presents challenges, because regular, iterative software updates are a significant cost driver in development.
However, the potential is substantial. Operators will receive products that are current and secure, providing a solid foundation for further security measures. For many, it will become difficult if they rely on self-developed software. Anyone using legacy software that can no longer be updated, or for which no one assumes responsibility, will face major challenges in the future—both in terms of updating systems and continuously monitoring security risks.

I believe this development is absolutely necessary. Many people underestimate how much effort is involved and how robust the processes for structured software development, lifecycle management, and updates really need to be.

[21:15] Solutions, offerings and services – A look at the technologies used

Let’s come back to OPC UA. I’d like to take another look at that—ideally using a concrete reference example where you may have run into difficulties. You mentioned earlier that this is not plug-and-play, that you don’t just connect and you’re done. Can you describe a situation where challenges occurred?

Frank

Our manufacturing execution system is a track-and-trace and production management system, used mainly for assembly lines in the automotive, pharmaceutical, and electronics industries. These lines are optimized for very high throughput. The cycle times of individual stations have to be as short as possible so that production runs fast and efficiently. Communication between the line and the higher-level control system quickly becomes a bottleneck, because every millisecond is part of the cycle time.
That’s why in the past—and still today—we have often relied on very basic, proven protocols such as TCP/IP. They are stable, well tested, and performant. At some point, however, the requirement came up: “Please use OPC UA—it’s the standardized interface of the future.” So we implemented OPC UA and immediately ran into performance issues, because OPC UA runs on top of TCP/IP and introduces additional latency. Communication times suddenly increased to the range of seconds in some cases. That is simply not acceptable in a highly clocked production environment.
Data consistency is another critical issue. The data must belong exactly to the product currently being processed—not to the previous one or the next one. As soon as data overlaps or gets mixed, the information becomes unusable.
Another aspect is that you have to be very clear about how the data is transmitted in the first place—meaning the data structures themselves. Do you work in a classic, tag-based way, essentially like OPC Classic under the umbrella of OPC UA? Or do you use newer mechanisms such as events and methods? Even then, precautions must be taken to ensure data consistency. This applies both to the client side and the server side. The machine builder must provide the data correctly, and the data consumer—such as an MES or control system—must process it cleanly.
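
As an illustration of the two access styles Frank describes, here is a sketch using the open-source asyncua library, one of the open-source options of the kind mentioned in the episode, not necessarily what the guests deploy. The endpoint, node ID, and 100 ms publishing interval are assumptions.

```python
# Sketch: tag-based polling vs. server-pushed subscriptions in OPC UA,
# using the open-source asyncua library. Endpoint and node id are
# hypothetical, not FabEagle®Connect internals.
import asyncio
from asyncua import Client

URL = "opc.tcp://machine.local:4840"       # hypothetical machine endpoint
PART_ID = "ns=2;s=Line1.Station3.PartId"   # hypothetical node id

class ChangeHandler:
    """Receives pushed value changes instead of polling."""
    def datachange_notification(self, node, value, data):
        # Consistency matters here: the value must be correlated with
        # exactly one product, e.g., by providing part id and payload
        # together on the server side rather than as loose tags.
        print(f"{node} changed to {value}")

async def main():
    async with Client(url=URL) as client:
        node = client.get_node(PART_ID)

        # Variant 1: tag-based read, "OPC Classic style under OPC UA".
        print("polled:", await node.read_value())

        # Variant 2: subscription; the 100 ms publishing interval is the
        # kind of parameter that decides whether cycle times are met.
        sub = await client.create_subscription(100, ChangeHandler())
        await sub.subscribe_data_change(node)
        await asyncio.sleep(5)  # receive notifications for a few seconds

asyncio.run(main())
```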

Then there is also the question of which libraries are used. Do you rely on a commercially available product with a service provider behind it who can support you in case of problems? Or do you use an open-source library that you integrate and then hope it continues to work reliably even under high performance requirements? Compared to classic TCP/IP-based communication, this can quickly lead to additional effort.

This is still a major topic: connectivity is simply not easy—especially with a heterogeneous machine park. Robin, you also mentioned that many machines are quite old.  

Robin

When I look at our projects or talk to colleagues in project management, we very often see that the actual configuration of the connectivity software accounts for only a small portion of the overall effort. A large part of our work consists of consulting and specification—figuring out how the machine interfaces are structured, how an OPC UA server needs to be designed, and what the data integration architecture should look like. The entire complexity lies in the details, and the software is then “just” the tool used to implement this architecture efficiently.

To implement all of this, you need building blocks—such as your FabEagle solution, for example. How does that fit into these kinds of implementations? How does it make your work easier? And how is it evolving?

Robin

When we receive a request, we typically start with specification, move on to the architecture, and then into the application of our software solution. In our design, we place a strong emphasis on modularity. Interfaces and protocols are represented as modules. This means that the properties of a specific interface can simply be configured. And if I need to implement specific logic—for example, transforming, filtering, or reorganizing data between an OPC UA client and an MQTT interface—I do that in a separate component, for instance using C#. In this way, we cleanly separate complexity and ensure that the product portion of a project remains easy to update.
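
A sketch of what such a logic block between an OPC UA source and an MQTT sink might do: FabEagle®Connect implements this kind of logic in C#; the Python version below, using the paho-mqtt publish helper, only illustrates the transform-filter-forward idea, and the broker, topic, and field names are invented.

```python
# Illustrative logic block: filter and reorganize a raw machine payload
# between an OPC UA source and an MQTT sink. Not FabEagle®Connect code;
# broker, topic, and field names are hypothetical.
import json
import paho.mqtt.publish as publish

def transform(raw: dict) -> dict:
    """Keep only what the consumer needs, in the shape it expects."""
    return {
        "part_id": raw["PartId"],
        "temperature_c": round(raw["ProcessTemp"], 1),   # shorten format
        "result": "OK" if raw["TestResult"] == 1 else "NOK",
        # internal counters etc. are deliberately dropped here
    }

raw_from_opcua = {"PartId": "A-4711", "ProcessTemp": 182.4567,
                  "TestResult": 1, "InternalCounter": 9912}

publish.single(
    topic="factory/line1/station3/result",   # hypothetical topic
    payload=json.dumps(transform(raw_from_opcua)),
    hostname="broker.local",                 # hypothetical broker
)
```
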
Building on this, we use KontronGrid to provide FabEagle®Connect as a Docker container. This allows us to scale: we can roll out the solution to different edge gateways at the push of a button and largely automate commissioning and updates. This is where two worlds come together very closely—update capability and classic connectivity, both of which we address directly in the software.
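
For the rollout side, a generic illustration of what a container-based update could look like with the Docker SDK for Python. This is not KontronGrid’s actual mechanism; image, tag, and container name are placeholders.

```python
# Generic container update rollout with the Docker SDK for Python:
# not KontronGrid's mechanism. Image and container names are hypothetical.
import docker

IMAGE = "registry.example.com/connectivity:2.4.1"  # hypothetical new version
NAME = "fabeagle-connect"                          # hypothetical container name

client = docker.from_env()
client.images.pull(IMAGE)                # fetch the new version first

try:
    old = client.containers.get(NAME)    # replace the running instance
    old.stop()
    old.remove()
except docker.errors.NotFound:
    pass                                 # first deployment on this gateway

client.containers.run(IMAGE, name=NAME, detach=True,
                      restart_policy={"Name": "always"})
```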

[28:05] Transferability, scaling and next steps – Here’s how you can use this use case

Artificial intelligence is, of course, a topic that’s being discussed everywhere right now. At the same time, AI is not just AI. Many people talk about generative AI, but classical AI has long been established in production environments. You operate across layers—from sensor technology to the operating system. Where do you see artificial intelligence playing a role in production?

Frank

The integration layer plays a central role here. Data quality is crucial—and especially the contextual information surrounding the data. This already has to be considered when designing the interfaces. The integration layer ensures consistency and enables the AI to later assess: “Yes, I can use this data.”
In the projects we are currently supporting, the integration layer therefore has two main tasks. On the one hand, it provides data for classic applications such as an MES, meaning production control. On the other hand, it acts as a kind of switch. Not all data is first sent to the MES and then forwarded to AI applications. Instead, the integration layer delivers data directly—cleanly structured—to a data lake, where it is prepared for later analysis.
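
The “switch” role can be pictured as a small routing table: each message type goes to the MES, to the data lake, or to both. A minimal sketch, with hypothetical message types and stand-in sinks:

```python
# Minimal routing "switch": deliver each message to MES, data lake, or both.
# Message types and sinks are hypothetical stand-ins.
import json
from typing import Callable

def send_to_mes(msg: dict) -> None:        # stand-in for an MES interface
    print("MES  <-", json.dumps(msg))

def send_to_datalake(msg: dict) -> None:   # stand-in for a data lake writer
    print("LAKE <-", json.dumps(msg))

ROUTES: dict[str, list[Callable[[dict], None]]] = {
    "production_event": [send_to_mes],                    # control-relevant
    "process_trace":    [send_to_datalake],               # high-volume raw data
    "quality_result":   [send_to_mes, send_to_datalake],  # needed by both
}

def route(msg: dict) -> None:
    for sink in ROUTES.get(msg["type"], []):
        sink(msg)

route({"type": "quality_result", "part_id": "A-4711", "result": "OK"})
```
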
A lot of groundwork for future AI applications can be done during integration. Digital twin functions are particularly interesting in this context. We primarily see our role in providing the infrastructure—that is, reliably transferring data with sufficient context into environments such as data lakes. In practice, the AI applications themselves are far less generic. It’s not a case of installing an AI and suddenly “magic” happens. You need process experts who understand how the processes work, develop appropriate models from that knowledge, and then deploy those models in operation—similar to a physical digital twin.
We once had a project where the customer said: “We already have a control system, we’ve been collecting data for ten years, we produce 500,000 parts per year—just do something with it.” So we ran a pilot project, analyzed the data, and applied various algorithms. The result: the data quality wasn’t very good and first had to be improved. And the insights we generated—ones we were quite proud of—were commented on by the process engineers with: “Yes, those are well-known physical relationships.” In other words, nothing that really helped them move forward.
That shows that no matter how much money you invest, the key factor is always the right question. If I want to move toward predictive maintenance, I have to be clear about what exactly it is for. Is it transport systems, robot failures, specific motors where issues occurred in the past? Or is it about monitoring processes and detecting anomalies early? And the other direction is using AI-based models to actively optimize processes, detect errors as early as possible, or calculate internal parameters that cannot be measured directly—in order to further develop the technology itself.
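
To make “detecting anomalies early” tangible: a rolling z-score over a process signal is about the simplest possible version of such monitoring. This is a generic textbook technique, not a model from the project described; window size and threshold are assumptions.

```python
# Simplest-possible anomaly monitor: rolling z-score on a process signal.
# Generic illustration; window size and threshold are assumptions.
from collections import deque
from statistics import mean, stdev

WINDOW, THRESHOLD = 50, 3.0
history: deque[float] = deque(maxlen=WINDOW)

def is_anomaly(value: float) -> bool:
    """Flag values that deviate strongly from the recent history."""
    if len(history) >= 10:  # need some history before judging
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) / sigma > THRESHOLD:
            return True     # do not add outliers to the baseline
    history.append(value)
    return False

for temp in [180.1, 180.4, 179.9, 180.2] * 5 + [195.0]:
    if is_anomaly(temp):
        print(f"anomaly: {temp} °C")   # fires for 195.0
```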

I think that’s generally necessary in projects like these: multiple areas of expertise have to come together—the knowledge of how production actually runs, data expertise, and integration competence.
Robin, generative AI—we’ve talked a lot about classical AI, meaning custom models for specific data. Do you also see a role for generative AI in production environments?

Robin

That’s a difficult question, because many use cases are only just emerging. Internally, we are already exploring different approaches to how we can use generative AI ourselves. A classic scenario is software development: our solutions consist of code, so we can use it as a tool for quality control and improvement. It’s also conceivable that in the future we could generate logic blocks in FabEagle®Connect via prompting. For example, I might specify: “Structure the message, move temperature-related parameters to the front, and shorten certain formats.” AI can already generate such code snippets today.
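
For illustration, here is what a prompted logic block like the one Robin describes could come out as: move temperature-related fields to the front and shorten numeric formats. This is hypothetical generated code with invented field names, not FabEagle®Connect output.

```python
# Hypothetical prompt-generated logic block: temperature fields first,
# numeric formats shortened. Field names are invented for illustration.
def restructure(message: dict) -> dict:
    temp_keys = [k for k in message if "temp" in k.lower()]
    other_keys = [k for k in message if k not in temp_keys]
    out = {}
    for key in temp_keys + other_keys:       # temperature fields first
        value = message[key]
        if isinstance(value, float):
            value = round(value, 2)          # shorten numeric formats
        out[key] = value
    return out

print(restructure({"PartId": "A-4711", "OvenTemp": 182.45678,
                   "CoolingTemp": 24.98765, "Pressure": 1.01325}))
# {'OvenTemp': 182.46, 'CoolingTemp': 24.99, 'PartId': 'A-4711', 'Pressure': 1.01}
```
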
And when we talk about connectivity, it’s always also about connecting AI in the first place. If I want my software to communicate with a service like ChatGPT, that is a connectivity task. For that, you need standard protocols—we mentioned them earlier: TCP/IP, REST interfaces, and similar technologies. Many generative AI services are cloud applications that need to receive and send messages. In that sense, connectivity plays a very classic role here as well.
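
On the connectivity side, calling such a service is indeed a plain REST request. A minimal sketch; the endpoint, token placeholder, and payload schema are hypothetical and not tied to any specific vendor’s API:

```python
# Calling a (hypothetical) generative AI service over REST: from the
# connectivity perspective this is an ordinary HTTPS request.
import requests

response = requests.post(
    "https://ai-gateway.example.com/v1/analyze",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    json={"prompt": "Summarize the last 100 alarm messages of line 1."},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```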

Great. Thank you very much from my side—I found the discussion very insightful. We talked about OPC UA and why it’s not as simple as it’s sometimes made out to be. We started with the edge layer you are shaping and saw how important partnerships and update capabilities are, especially in light of current regulatory requirements. It became very clear how valuable it is to work with reliable partners. I’ll give you the final word: how can people get in touch with you, and what would you like to leave our listeners with?

Frank

The easiest way to reach us is via our website: kontron-ais.com. What I’d like to emphasize is this: connectivity is still not simple—but you shouldn’t be afraid of it. It’s worth actively tackling this challenge, because projects like these offer a lot of learning opportunities and open up possibilities you may have been thinking about for a long time. They create the foundation for future developments.

Robin

Connectivity—whether via an edge layer or other approaches—should be addressed strategically. It’s not just about making a connection, but also about aspects such as update capability, deployment, and maintenance in production environments. This needs to be considered as early as the plant design phase. How do I operate connectivity in the long term—scalable and update-ready, together with a partner or across multiple solutions? Those who think about these questions early on keep all options open for the future—whether that’s AI, big data, or data lakes. Planning this from the start gives companies the flexibility to deal with future requirements effectively. Feel free to reach out to us. We know these topics, we apply them in real projects, and an exchange with an integration partner who has hands-on experience is exactly the right first step.

We’ll put the contact details in the show notes. Thank you both very much—and see you next time on the podcast.

Frank

Thank you, Peter.

Robin

Thank you, bye!

Questions? Contact Madeleine Mickeleit

Ing. Madeleine Mickeleit

Mrs. IoT | Founder of IIoT Use Case GmbH | IoT Business Development | Which use cases work, and HOW? Focus on practice! #TechBusiness #AddedValue