At Siemens, you are involved in very different industries. Can you summarize a few use cases that you see in practical application?
Axel
There’s a range there. There’s the area of hybrid systems, where we combine an installed PLC with a factory edge system – that opens up great opportunities with artificial intelligence: we can do predictive maintenance, we can do visual inspection, and we can bring optimization and productivity to the plant for our customers. But there are now also applications with concrete, deterministic, fail-safe software controllers in the plant, where we really work very much hardware-independently and can address a different spectrum of hardware. And that goes all the way to fully virtualized controllers – today I would rather use those in an environment that is not so time-critical. But there are great applications there too. Think of a large water network where I can run the virtual world in parallel with the real world: the simulation checks what state my system should be in right now, and when reality deviates, I get clear indications of where there is a problem.
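To make that parallel-simulation idea concrete, here is a minimal sketch in Python – the segment model, node names, coefficient and tolerance are purely illustrative assumptions, not a real water-network model:

```python
# Minimal sketch of running a simulation in parallel with the real plant
# and flagging deviations. The plant model, node names and threshold are
# illustrative assumptions, not a real water-network model.

def simulate_pressure(inflow: float, valve_opening: float) -> float:
    """Toy steady-state model: expected pressure for one network segment."""
    return 2.5 * inflow * valve_opening  # bar; hypothetical coefficient

def check_against_twin(measurements: dict) -> list:
    """Compare measured values with the simulated 'should' state."""
    alerts = []
    for node, m in measurements.items():
        expected = simulate_pressure(m["inflow"], m["valve_opening"])
        if abs(m["pressure"] - expected) > 0.3:  # tolerance in bar, assumed
            alerts.append(f"{node}: measured {m['pressure']:.2f} bar, "
                          f"expected {expected:.2f} bar")
    return alerts

# Example with fabricated readings for two nodes:
readings = {
    "pump_station_1": {"inflow": 1.2, "valve_opening": 0.8, "pressure": 2.41},
    "pump_station_2": {"inflow": 0.9, "valve_opening": 1.0, "pressure": 1.10},
}
for alert in check_against_twin(readings):
    print("Deviation:", alert)  # points to where the problem is
```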
So there are different applications across different industries and customer use cases, with different specific challenges.
Pierre, you also work on different use cases in research. Can you add a little from a research perspective on what use cases you see?
Pierre
I would tie in directly with these different layers – how edge and cloud complement each other and bring more functionality to the PLC. One example that comes to mind is inline quality measurement for machine tools. In one project, we had equipped the machine with the best sensor technology we could install – endless amounts of acoustic emission sensor data. Then at some point it occurred to us: fine, we can process the data … but how do we feed it back again? How can we really harness it? Then we went through iterations piece by piece, pushing the networking further and further. We realized we needed the computing infrastructure, the power. So we went into the edge cloud, into the factory cloud. Then we realized we had to feed the results back into the machine; had to develop fieldbus protocol adapters, somehow get to the PLC – only to see at the end: this is all unstable and latency-critical, and it shouldn’t work this way.
That’s why we have now taken up the topic in a big way. It’s insanely exciting, because inline quality control will drive down the defects that occur in manufacturing enormously, especially for expensive components. This is immensely interesting. In many cases you could save yourself the trip to the measurement lab – which saves time, lets technicians focus on other tasks, saves machine-tool capacity, and is simply more sustainable throughout, because less scrap is produced.
Beyond that, once control is really moved out to the cloud – this second iteration level – we find it exciting to look at collaboration. Take two virtualized robot controllers: if they are both running on one platform, it stands to reason that you can integrate a software component there that enables cooperation, where before you had to build hardwired, hardcoded path planning, which makes everything very inflexible. This idea of cooperation can be taken further: human to robot, human to machine, robot to machine, and so on. We suddenly have a whole playground that is operated virtually and is independent of the hardware.
Let’s dive into the real world. Markus, I would be interested in the status quo at your site. Your shop floor is glass production: what is your daily job like, and what does a classic shop floor look like for you?
Markus
You have to think of it this way: we have one of the hottest jobs in Germany. At about 1,600 degrees we process different raw materials – sand, soda, dolomite and of course also cullet – into our new base glass, which is then the raw material, or starting product, for our sister company Saint-Gobain Sekurit, which uses it to make automotive glass. What is very important: our plants run continuously for up to twenty years. That means we never stop our production! This matters, of course, when you consider that within twenty years a great many control-technology innovations take place – exactly the ones we are discussing right now. Naturally, we also want to bring all of this into our production facility so that we can tap into the advantages of innovation and digitalization.
The two of them just mentioned a couple of use cases – quality control on the one hand, but also things like saving trips and saving time – around this topic of the virtual PLC. What are some challenges that YOU have on the shop floor?
Markus
I see mainly three challenges, or use cases, that could be served by such a technology. I just said that our plants must run continuously for twenty years. This places demands on resilience in particular. So far, we have installed redundant controls everywhere. I could imagine that this topic of resilience could be covered much BETTER by a control system in the cloud – one that can ultimately be duplicated virtually.
Another use case I can imagine is retrofitting. As I said, a lot happens in twenty years. We must always ensure that our current control programs also harmonize with the new generations. So you could imagine validating our current control programs against a virtual model in the cloud before we implement them in reality. And last but not least, it is of course not only brownfield applications that are of interest to us. WE also get new plant components. So it is very important for us, together with the integrators, to set new standards here for digitalization and control technology.
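As an illustration of this “validate virtually first” idea, a minimal sketch: a candidate controller is run against a toy first-order plant model and only released if it meets acceptance limits. The plant dynamics, gains and limits are invented for the example, not real process values:

```python
# Minimal sketch of validating controller parameters against a virtual
# plant model before anything touches the real line. All dynamics, gains
# and limits below are illustrative assumptions.

def validate_controller(kp: float, ki: float, setpoint: float = 100.0) -> bool:
    temp, integral, dt = 20.0, 0.0, 1.0          # initial state, step [s]
    max_overshoot, settle_band = 5.0, 1.0        # acceptance limits, assumed
    history = []
    for _ in range(600):                         # simulate 10 minutes
        error = setpoint - temp
        integral += error * dt
        heat = max(0.0, min(kp * error + ki * integral, 50.0))  # actuator limit
        temp += (heat - 0.05 * (temp - 20.0)) * dt  # first-order plant model
        history.append(temp)
    overshoot = max(history) - setpoint
    settled = all(abs(t - setpoint) < settle_band for t in history[-60:])
    return overshoot <= max_overshoot and settled

# Only parameter sets that pass the virtual test would be released:
print(validate_controller(kp=0.5, ki=0.01))
```

With the assumed gains the toy test passes; a parameter set that overshoots or fails to settle would never be released to the real plant.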
So the primary potential you see is, so to speak, the virtualization of the control system in terms of resilience, but also in terms of retrofit – those are the greatest potentials?
Markus
Absolutely, exactly.
If we go a bit further, it is also about data – about the data sources that are relevant for mapping something like this. Asked from a practical point of view: what data is particularly exciting for you? Is it quality-management data, or is it other data?
Markus
In terms of data sources … you have to think of it like this: our biggest data source is the process control system. But what is a bit of a shame is that even today we are actually still relying on communication technologies from the nineties. Our typical chain looks like this: we connect the data sources via OPC DA, then perhaps translate them to OPC UA via a wrapper, and only then slowly move into a world where we speak MQTT – where we can transmit data with high performance and move into the so-called Industrial IoT. Of course, it would be very desirable for us not to have to build up such a long chain with a large number of interfaces just to be able to speak the new protocols of the Internet of Things, but instead to get these interfaces natively from new controllers in the cloud, with the appropriate adapters.
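What the last link of such a chain looks like in practice today can be sketched briefly – assuming the open-source python-opcua and paho-mqtt libraries, with the endpoint URL, node ID and topic made up for the example:

```python
# Minimal sketch of bridging OPC UA data into MQTT, as in the chain
# described above. Endpoint, node ID, broker and topic are hypothetical.
import json
import time
from opcua import Client            # pip install opcua
import paho.mqtt.client as mqtt     # pip install paho-mqtt

ua = Client("opc.tcp://plc.example.local:4840")   # hypothetical server
ua.connect()
temperature_node = ua.get_node("ns=2;s=Furnace.Temperature")  # assumed ID

broker = mqtt.Client()              # paho-mqtt 1.x style constructor
broker.connect("broker.example.local", 1883)      # hypothetical broker

try:
    while True:
        value = temperature_node.get_value()      # poll the PLC via OPC UA
        payload = json.dumps({"ts": time.time(), "temperature": value})
        broker.publish("plant/furnace/temperature", payload)  # into the IIoT
        time.sleep(1.0)                           # 1 Hz, for illustration
finally:
    ua.disconnect()
    broker.disconnect()
```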
Do you have an example from a trial at your company? You have very different processes there compared with discrete manufacturing.
Markus
You can imagine it like this: we have a huge melting furnace in which these high temperatures prevail. A lot of control technology is installed in it to control the gas combustion and the entire process. Controllers are typically dimensioned somewhere in the design phase of a plant and then perhaps fine-tuned a bit during commissioning. But what we realize, of course, is that this doesn’t hold over twenty years. The controllers inevitably have to be reworked. Today, a lot of that takes place in the system itself.
What would help us a lot would be to bring all the information from the process, together with the current controller parameters, into the cloud, and ultimately build a virtual model there on which you can test and evaluate new controller parameters – and then return them to production, virtually tested. Today, we are already talking about edge clouds and edge devices, so we are definitely on the move in this area; we have created the prerequisites. But the complete connection from the shop floor to the edge and then to the cloud does not yet exist in this form – also because it is simply not yet possible.
This means that such controller behavior could also be tested virtually and perhaps simply optimized. Because you established it at some point, and the next step would be to challenge that – and to have the data ready for it, right?
Markus
Exactly. We always say we are talking about two loops, if you will: one is the fast control loop, which ultimately executes – where I only apply my developed controllers, my developed models, in order to be fast in production in terms of real-time capability and response. And then I have my optimization loop, which might run through the cloud, where I can analyze data and work out optimized settings – perhaps also with new methods of artificial intelligence or pattern recognition.
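A minimal sketch of this two-loop split – the fast loop only ever executing released parameters, the slow loop swapping in new ones. The gains, cycle times and stubbed I/O functions are all illustrative assumptions:

```python
# Sketch of a fast control loop plus a slow optimization loop sharing a
# released parameter set. All values and the stub I/O are invented.
import random
import threading
import time

params = {"kp": 1.2, "ki": 0.4}        # currently released gains (invented)
lock = threading.Lock()

def read_process_value() -> float:     # stub standing in for the real sensor
    return 95.0 + random.uniform(-1.0, 1.0)

def write_actuator(output: float) -> None:  # stub for the real output channel
    pass

def fast_control_loop() -> None:
    """Fast loop: only applies the released parameters, nothing else."""
    integral = 0.0
    while True:
        with lock:
            kp, ki = params["kp"], params["ki"]
        error = 100.0 - read_process_value()
        integral += error * 0.01
        write_actuator(kp * error + ki * integral)
        time.sleep(0.01)               # 100 Hz cycle, for illustration

def optimization_loop() -> None:
    """Slow loop: would pull virtually tested gains from cloud analytics."""
    while True:
        time.sleep(3600.0)             # e.g. hourly
        new_gains = {"kp": 1.3, "ki": 0.35}  # placeholder for a cloud result
        with lock:
            params.update(new_gains)

threading.Thread(target=fast_control_loop, daemon=True).start()
threading.Thread(target=optimization_loop, daemon=True).start()
time.sleep(5.0)                        # let the sketch run briefly
```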
Axel, do you know such use cases from other customers?
Axel
Absolutely! And we are trying to close the chain by connecting connectors directly to various field systems – be it our controller, the Simatic, or controllers and components from competitors – via an app as part of Industrial Edge, for example, so that you can get the data as completely, as promptly and as fast as possible. You can process the data in the Industrial Edge, select it, or forward it directly. For that we then have the next connector, MQTT. These are all apps in Docker containers, so a customer can handle them very easily and can also be sure that each component works on its own first.
With this we try to close the loop and make exactly the requirements Markus describes manageable and distributable. And we also give the customer the ability, once a use case works to their satisfaction, to roll it out to different plants and different controllers, because we provide a corresponding management system for the runtime and for the apps. In this way, we ultimately bring this to a product state that helps our customers – in this case perhaps also Markus – to apply such solutions.
Markus, now we have had a bit of the market and product view from Axel. What requirements do you as an end customer have – what must such a system deliver? Is it primarily about this interface management?
Markus
From … to. First of all, I have to have the interfaces to create that connectivity – one of the most important topics in the Industrial Internet of Things. But after that, many other issues also play a role, not least a service level agreement. I would be delighted if we could go in this direction with Axel and perhaps try out what is possible today, the state of the art, in terms of innovative control technology. But at the end of the day, Axel also has to be available to me 24/7.
That means I see a big role here for the integrators, who today still stand between the suppliers – the big ones, such as Siemens – and us as end users, in order to ultimately bring these technologies to the market and to be able to fully support us end users, such as Saint-Gobain. Here, of course, I would like to see the integrators take on an even stronger role and embrace these new technologies that Axel is describing.
Looking in the research direction, Pierre: I would be interested to know what research questions you are working on to address this.
Pierre
The first point, which has already emerged in my opinion, is standardization. With five different manufacturers, you can be sure that today they all speak different protocols and provide different interfaces – which of course makes it very, very complicated to network them all together. Cloud integration is also very, very difficult with today’s fieldbus systems. I think there is still a lot of reworking to be done. The industry must agree on common standards that can then be used across the board and across manufacturers.
Then, of course, there is the classic issue of data sovereignty. I don’t want to go too deeply into that. The industry attaches great importance to ensuring that data stays where it is supposed to be, since corresponding IP (intellectual property) can be read out of it. If a manufacturing company equips its production with vibration sensors and the data flows out, then in the worst case it is possible to reconstruct from the data how the process ran, and then to recreate the whole thing – which of course is a problem for the company, because its unique selling point has disappeared.
With regard to infrastructure, I already touched on two topics at the beginning: data transmission and computing. In both cases, real-time capability must be provided according to the requirements – real-time in the sense of deterministic transmission of messages, or correspondingly deterministic computing. I think there are already good technical approaches, but they need to be taken further. Here, too, standardization is important: the industry has to converge on one approach and focus there – otherwise we again end up with many different isolated solutions, and the overall continuum that is needed for this falls away.
I have one last point: responsibility. It is also very, very important. We are expanding the circle of hardware systems from one meter, built into the machine, to thirty meters away in the server room … The question is: what happens in case of failure? Who is responsible – the one who sets up the communication link? The one who builds the end devices? The one who provides the computing infrastructure, the application, the machine connection, and so on? You can break that down in a very fine-grained way; it is a very big issue.
Maybe a little look at reality, or into the future. What is the status now? Does virtual control already exist? If not, why is it not done today? What is the challenge with implementation today?
Pierre
The issue is there and it is being addressed. Still, the industry is very conservative, I would say. Perhaps a small anecdote: my studies were not so long ago, and there, in the area of control, I still learned how to design a controller from a step response, a set square and a parameter table – in times of artificial intelligence, big data and whatnot, that is of course no longer up to date. Still, manufacturers are offering more and more connectivity options these days. A PLC has various interfaces; there are extension modules. But the overall concept is still missing. That includes not only the question of responsibility, but also closing the loop and bringing individual platforms back in – so that not only manufacturer 1 can supply all three components, but component 2 can perhaps also come from manufacturer 2. I think the overall infrastructure still needs to be built up; individual components, individual problems are already being solved. This is where everyone needs to come together once again.
Jumping back into practice: you said it so well earlier – you first bring the data from an old control infrastructure, say OPC DA, somewhere into OPC UA. That is a lot of isolated solutions; that is a challenge for standardization. If I really want to address the issue and start working on it tomorrow, how does data acquisition work? How do I even get the data out of conventional PLCs?
Axel
We can address many of the requirements we have discussed today. For example, with an Industrial Edge we can get a lot of data out of the PLC in a very timely manner, preprocess it on the shop floor, and send it to the cloud. We have a large number of connectors, so that we can connect controllers from different manufacturers on the shop floor, collect the data and process it immediately. And we have the ability to bring data back down from the cloud to the shop floor – from an AI model, for example. We have these possibilities today; we can use them today.
We have very powerful software controllers that we can combine with this. To reach the full vision we discussed at the end – can’t we run the control completely in the cloud, and can’t we, to increase resilience, run multiple controllers at the same time? What about full determinism then? How do fail-safe requirements work then? How much of this is available as a product portfolio, so that a partner somewhere in the world can build a solution from this construction kit for which they take responsibility? We still have to go a little further there. We are on it. We believe this is an important goal to address.
That is where we look forward to collaborating, on the one hand with research, which shows us ways to do it, and on the other hand with customers who are willing to do a proof of concept with us – but a proof of concept is not enough. In the end, there has to be a product portfolio, because the partners with whom we work all over the world must also be involved, so that the solutions can be rolled out repeatably in the market. That is a great task for us to work on over the next few quarters. I am quite excited to see how we get on there!
How does the data processing from this hardware layer to the next level work? Pierre, how do you do that, and where does the data go then?
Pierre
Classically, you have specific hardware that is designed to process data quickly, run applications and execute control algorithms. What people use today, or would like to use, is standard hardware – the kind that cloud providers install by default, PCs that are on the shop floor anyway, and so on. But these are not designed for real time. That is where we work with different virtualization options – hypervisors, CPU pinning and similar methods – to clearly define which computing resources are reserved for which application. Sure, we could make the hardware real-time capable all over again. But it would also be exciting if this could run in parallel. If non-real-time-critical applications run in parallel with real-time applications, that is great: first, because we can use the infrastructure we already have on site and use different nodes, which lets us run redundantly – the topic of resilience just came up. And of course also in terms of sustainability: why do we have to set up new computing systems everywhere, which are then used thirty percent of the time, when we could simply use the infrastructure we already have on site and dynamically move the individual software applications back and forth, depending on where they are needed and where resources are currently available?
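As a minimal sketch of what CPU pinning plus a real-time scheduling policy can look like on standard Linux hardware – core number, priority and cycle time are assumptions, and a production system would additionally use a preempt-rt kernel and isolated cores:

```python
# Sketch of isolating a cyclic control task on standard hardware via CPU
# pinning and SCHED_FIFO (Linux only; real-time priority needs privileges).
import os
import time

os.sched_setaffinity(0, {2})  # pin this process to core 2 (assumed isolated)

try:
    # SCHED_FIFO keeps the task from being preempted by ordinary processes.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
except PermissionError:
    print("Real-time priority needs CAP_SYS_NICE/root; running best-effort.")

CYCLE = 0.001                          # 1 ms control cycle, assumed
deadline = time.monotonic()
while True:
    # ... read inputs, compute the control output, write outputs ...
    deadline += CYCLE
    remaining = deadline - time.monotonic()
    if remaining > 0:
        # Crude sleep; a hard real-time loop would use clock_nanosleep
        # with an absolute deadline instead.
        time.sleep(remaining)
```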
We ourselves at the IPT work on different levels. We have the Fraunhofer Edge Cloud – a server system that we run locally and that we trim for real time. But we also go down toward PCs, which we prepare and test accordingly, and toward edge hardware provided by various manufacturers, where we likewise test and validate.
On the subject of the cloud, you have already mentioned various providers and options. In the end, it is also about evaluating the data. Markus spoke earlier about certain controller behavior that can be optimized. How and where is this data analysis possible? Is it something I do on the edge? Do I do it all in the cloud? Does everyone have their own system?
Axel
It depends very much on the customer’s requirements and the application. Data pre-processing on the shop floor is definitely possible. If we want to do machine learning or use artificial intelligence and need or want to train or re-train the models, then it makes sense to do that in the cloud. Of course, you can also build up the corresponding computing capacity locally at a customer’s site. But if it is a larger company with global manufacturing, then it might make more sense to train in the cloud. – The result of this, however, must somehow get back to the machine. And the big point is that it has to get to the machine in such a way that the service technician finds the target value at 10:30 on a Friday night and can put the plant back into operation.
This is where an Industrial Edge directly on the shop floor – possibly directly in the control cabinet, with the controller that controls the system – provides very, very great value. Today, we already have an incredible number of good connectors to the existing control systems. In that respect, we can handle a lot of what we find in the plants. We certainly have a large installed base to fall back on, but we can already establish pretty good connections to other manufacturers as well. So where we process the data depends very much on the customer’s requirements. But when we talk about artificial intelligence, for example, training often needs to happen in the cloud.
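The downstream direction – a cloud result arriving as, say, a new target value and being written back to the controller so it is simply there for the technician – could look like this minimal sketch, again assuming the python-opcua and paho-mqtt libraries with made-up endpoint, topic and node ID:

```python
# Sketch of the downstream path: a cloud-trained result (here: a new
# target value) arrives via MQTT and is written back to the PLC.
# Endpoint, broker, topic and node ID are hypothetical.
import json
from opcua import Client            # pip install opcua
import paho.mqtt.client as mqtt     # pip install paho-mqtt

ua = Client("opc.tcp://plc.example.local:4840")        # hypothetical PLC
ua.connect()
target_node = ua.get_node("ns=2;s=Line1.TargetValue")  # assumed node

def on_message(client, userdata, msg):
    update = json.loads(msg.payload)
    target_node.set_value(float(update["target"]))     # write to the PLC

broker = mqtt.Client()              # paho-mqtt 1.x style constructor
broker.on_message = on_message
broker.connect("broker.example.local", 1883)
broker.subscribe("plant/line1/optimized_target")
broker.loop_forever()               # block and apply updates as they arrive
```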
We also want to talk a little about evaluating the performance of this whole topic of PLCs, or virtual PLCs. In summary, what are the potentials compared to conventional PLCs?
Axel
A virtual PLC that is completely independent of the hardware it runs on can itself be optimized completely independently, and it can benefit directly from performance improvements in the hardware. An embedded system always takes time to design in a newly developed processor generation – a virtual system can scale with the hardware immediately. That is a huge advantage.
In addition, in times of cybersecurity, I can use completely different mechanisms to administer large fleets of these controllers – these virtual systems – in a completely virtualized way. This is very convenient for companies that rely heavily on IT-driven operations. You can leverage many things there.
On the other hand, of course, an embedded system also has its advantages: it is inherently more resilient to many cybersecurity challenges, and it is very much tailored to the application. Even if I have resource scaling in a cloud or a corporate cloud, it can always happen that in situations where a lot of computing power is drawn at once – due to a failure, for example – I lack computing power for an individual application. With computing power distributed across different controllers, by contrast, each individual controller is designed for itself – that does not affect me for the time being.
At the moment, these are also the reasons why, in the vast majority of cases I see, customers opt for a combination of both solutions and say: I would like a conventional solution for my core processes, where I don’t want to take any risks – but please combine them for me. That is where the factory cloud, i.e. Industrial Edge, and cloud applications in combination are very strong. But that may look very different by the end of this decade.