
Controller in the cloud – future or reality? | #HM22 Special



Listen to the IoT Use Case Podcast on Spotify.
Listen to the IoT Use Case Podcast on other platforms.

IoT Use Case Podcast Special at Hannover Messe, with Saint-Gobain, Fraunhofer IPT, and Siemens

Controller in the cloud – is it still the future or already reality? That’s the topic of this special episode, live from the Hannover Messe. The question is explored together with Saint-Gobain Glass, one of Europe’s leading glass manufacturers, the Fraunhofer Institute for Production Technology (Fraunhofer IPT), and Siemens AG.

Episode 67 at a glance (and click):

  • [05:35] Challenges, potentials and status quo – This is what the use case looks like in practice
  • [28:59] Results, business models and best practices – How success is measured

Podcast episode summary

Virtual PLC – What is it? What are the use cases that end customers see? What are the potentials compared to conventional controls?

Podcast episode 67 discusses what virtual PLCs are and why this technology building block can be valuable to manufacturing operations. The views and possible use cases come from Saint-Gobain Glass, one of the most important European glass manufacturers and the global market leader for coated glass. Siemens, the third voice in this podcast trio, shows how the market is preparing for this and with which business models and products. To that end, the panel talks about the performance evaluation of a virtual PLC; in addition to technological challenges such as latency and security, the dependencies involved are also highlighted. Science and business often speak different languages – on this topic, too? The episode makes clear how the cooperation between industry and science takes place and which research questions occupy the Fraunhofer IPT, representing science, in this regard.

Madeleine Mickeleit’s guests in this special episode of Hannover Messe 2022:

Podcast interview

Markus, you are Head of Production at Saint-Gobain and responsible for digitization throughout Germany with a focus on Industry 4.0. You are experts in the field of glass manufacturing and market leaders in coated glass.

Markus

Right. I have a super exciting job where I can combine the challenges of day-to-day operations in manufacturing with strategic and innovative work in the digital space.

Pierre, you are a group leader at the Fraunhofer Institute for Production Technology, i.e. the Fraunhofer IPT, in Aachen. You lead the digital infrastructures group and deal with the networking of production from the field level to the cloud level.

Pierre
Exactly, that’s where we look at networking concepts for production. That means communication, for one – real-time is a key term there, in the context of 5G and Time-Sensitive Networking. And of course the computing environment, that’s essential. That’s where we look at cloud and edge systems. The terms are very fuzzy, which is why I like to talk about factory cloud systems that are local to the site.
And Axel, you are Vice President Control at Siemens. You are with us today from the Siemens Digital Industries Group. Siemens – of course technology and innovation leader for industrial automation and of course digitization. Working closely with your partners and customers, you’ve been driving digital transformation for years, decades.
Today we are talking about Control-as-a-Service or the virtual PLC. That’s a broad term at first and probably new to many. Pierre, what is that anyway, a virtual PLC, from a research perspective?
Pierre
The classic PLC is installed as a hardware component in machines, robots and the like. Specific hardware trimmed for real-time is in there. What we are trying to do from a research perspective is to detach the software that runs in it from the specific hardware. That way, we want to be able to deploy the functionalities provided by a PLC independently, as software, on different nodes. This is very exciting because it creates great dynamics: systems can apply automatic updates, automated via the cloud, so outdated system generations become a thing of the past. This means that we don’t have different generations, but can really keep a shop floor constantly at one level, always at a very high quality. If we bundle the whole thing at the cloud and edge level, it also opens up completely new concepts of networking, flexibility and optimization. A very, very exciting topic; that’s why we’re doing research there.
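To make the core idea tangible: at its heart, a PLC executes a cyclic scan – read inputs, run the logic, write outputs. A minimal sketch of that scan cycle as hardware-independent software might look as follows; the I/O functions and the 10 ms cycle time are purely illustrative placeholders, not any vendor’s actual runtime.

```python
import time

def read_inputs():
    # Hypothetical placeholder: a real soft PLC would read from an I/O
    # driver or fieldbus stack here.
    return {"temperature": 20.5}

def control_logic(inputs):
    # Trivial illustrative logic: switch a heater below a setpoint.
    return {"heater_on": inputs["temperature"] < 21.0}

def write_outputs(outputs):
    # Hypothetical placeholder for the corresponding fieldbus write.
    pass

CYCLE_TIME_S = 0.010  # illustrative 10 ms scan cycle

while True:
    start = time.monotonic()
    write_outputs(control_logic(read_inputs()))
    # Sleep out the remainder of the cycle to hold a fixed scan rate.
    # On a general-purpose OS this gives no hard real-time guarantee -
    # exactly the challenge the guests discuss later in the episode.
    time.sleep(max(0.0, CYCLE_TIME_S - (time.monotonic() - start)))
```

Because such a loop is plain software, it can in principle run in a container on any node – which is what makes cloud-managed deployment and updates conceivable.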
Axel, you are represented here at the table with Siemens. Can you add from a market perspective what the virtual PLC is?
Axel
What Pierre said is how we envision the future – that’s where it’s going. That is also what we need. There are quite a few challenges along the way. Technical challenges that we need to work on. But there is also a huge installed base. And there are also the requirements as to how we can bring the topic of digitization INTO the installed base that exists today. So how can I help a PLC, or Sinumerik controller, installed today to become more digital – to bring more digital requirements, more information down to manufacturing? Be it results from machine learning or an AI environment. Be it a collection of data from various field devices that I want to communicate up to the cloud. Or be it, as Pierre also correctly said, that I make the ability to update, for example, for patches in cybersecurity, easier on the installed base than they have been in the past or than they are today.
With all these things, this – we call it Industrial Edge, i.e. the factory-installed edge systems – can help us massively, because they complement the functionalities of the installed base. In parallel, as Pierre also described, we need to work on the next stage, the virtual PLC, with all the challenges we face along the way.

Challenges, potentials and status quo - This is what the use case looks like in practice [05:35]

At Siemens, you are involved in very different industries. Can you summarize a few use cases that you see in practical application?

Axel

There’s a range there. There’s the area of hybrid systems, where we combine an installed controller, an installed PLC, with a factory edge system – quite great opportunities with Artificial Intelligence. Where we can do predictive maintenance. Where we can do Visual Inspection. And we can bring optimization and productivity to the plant for our customers. But there are now also applications with concrete, deterministic fail-safe software controllers in the plant, where we really work very much hardware-independently and can address a different spectrum of hardware. All the way to properly virtualized controllers – today I would rather use those in an environment that is not so time-critical. But there are also great applications: think of a large water network where I can run the virtual world in parallel with the real world, and the simulation checks how my system should look right now – and when reality deviates, I get clear indications of where there is a problem.

So there are different applications across different industries and customer use cases, with different specific challenges.

Pierre, you also work on different use cases in the research itself. Can you add a little bit from a research perspective on what use cases you see?
Pierre
I would tie in directly there – these different layers, and how the edge cloud complements the PLC and brings in more functionality. One example that comes to mind is inline quality measurement for machine tools. In one project, we had equipped the machine with all the sensor technology we could install – endless acoustic emission sensor data. Then at some point it occurred to us: Oh, we can process the data … but how do we play it back again? How can we really harness that? Then we went through iterations piece by piece, where the networking was pushed further and further. Then we realized we needed the computing infrastructure, the power. Then we went into the edge cloud, into the factory cloud. Then we realized we had to play it back into the machine; had to develop fieldbus protocol adapters there, somehow get to the PLC, and then see at the end: this is all unstable, latency-critical, and really shouldn’t work this way.
That’s why we’ve now rolled up the issue in a big way. It’s insanely exciting, because inline quality control will drive down the defects that occur in manufacturing so much, especially for expensive components. This is immensely interesting. In many cases, you could save yourself a trip to the measurement lab – which saves time, lets technicians focus on other tasks, frees up machine-tool capacity, and is simply more sustainable throughout, because less scrap is produced.
Further, when control is really outsourced to the cloud – this second iteration level – we find it exciting to look at collaborations. Take two virtualized robot controllers: if they’re both running on one platform, it stands to reason that you can integrate a software component there that enables cooperation, where before you had hardwired, hardcoded path planning, which makes it all very inflexible. This idea of cooperation can be taken further: human to robot, human to machine, robot to machine, and so on. We suddenly have a whole playground that is virtually operated and independent of the hardware.
Let’s dive into the real world. Markus, I would be interested in the status quo at your site. We are in glass production on your shop floor: What is your daily job like, and what does a classic shop floor look like for you?
Markus
You have to think of it this way: we have one of the hottest jobs in Germany. At about 1600 degrees we process different raw materials – sand, soda, dolomite and of course also cullet – into our new base glass, which is then the raw material, or starting product, for our sister company Saint-Gobain Sekurit, which uses it to make automotive glass. What is very important: our plants run continuously for up to twenty years. That means we never stop our production! This is of course important when you consider that within twenty years a great many control technology innovations take place – which is exactly what we are discussing right now. Of course, we also want to bring all of this into our production facility so that we can tap into the advantages in terms of innovation and digitalization.
The two of them just mentioned a couple of use cases – quality control, for instance, but also saving trips and time by addressing this topic of the virtual PLC. What are some challenges that YOU have on the shop floor?
Markus
I see mainly three challenges, or use cases, that could be served by such a technology. I have just said that our plants must run continuously for twenty years. This means that requirements are placed on resilience in particular. So far, we have installed redundant controls everywhere. I could imagine that this topic of resilience could be covered much BETTER via a control system in the cloud – which can ultimately be virtually duplicated.
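To illustrate what such virtual duplication could look like in principle: below is a minimal hot-standby sketch in which a second controller instance watches the primary’s heartbeat and promotes itself on timeout. The class, the timeout value, and the omitted network layer are all hypothetical – a sketch of the concept, not any product’s failover mechanism.

```python
import time

HEARTBEAT_TIMEOUT_S = 0.5  # illustrative failover threshold

class StandbyController:
    """Hot-standby instance that takes over when the primary goes silent."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        # Would be called by the (not shown) network layer whenever a
        # heartbeat message from the primary controller arrives.
        self.last_heartbeat = time.monotonic()

    def check_failover(self):
        # Promote this instance if the primary has missed its deadline.
        if not self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.active = True
        return self.active
```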
Other use cases that I can imagine are retrofits. As I said, a lot happens in twenty years, and we must always ensure that our current control programs also harmonize with the new generations. So you could imagine validating our current control programs against a virtual model in the cloud before we implement them in reality. And last but not least, of course, it is not only brownfield applications that are of interest to us. WE also get new plant components. So it is very important for us, together with the integrators, to set new standards here for digitization and control technology.
That is, your primary application possibilities or potentials that you see are, so to speak, the virtualization of the control system in terms of resilience, but also in terms of retrofit – those are the greatest potentials?
Markus
Absolutely, exactly.
If we go a bit further, it’s also about data – the data sources that are relevant for mapping something like this. Perhaps asked from a practical point of view: What data is particularly exciting for you? Is it quality management data, or other data?
Markus
I’ll say, in terms of data sources … you have to think of it like this: Our biggest data source is the process control system. But what’s a bit of a shame is that even today we’re actually still relying on communication technologies from the nineties. This means that our typical chain looks like this: we connect the data sources via OPC DA, then perhaps translate them to OPC UA via a wrapper, and only then slowly move into a world where we speak MQTT … where we can transmit data at high performance and move into the so-called Industrial IoT. Of course, it would be very desirable for us if we did not have to build up such a long chain with a large number of interfaces in order to speak the new protocols of the Internet of Things, but could instead get these interfaces natively from new controllers in the cloud, with the appropriate adapters.
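For readers who want to picture the last hop of that chain: a minimal OPC UA-to-MQTT bridge could look like the sketch below. It assumes the open-source python-opcua and paho-mqtt (1.x-style API) libraries; the endpoint, node ID, broker address, and topic are hypothetical placeholders, not Saint-Gobain’s actual setup.

```python
import json
import time

from opcua import Client as OpcUaClient  # python-opcua (FreeOpcUa)
import paho.mqtt.client as mqtt          # paho-mqtt, 1.x-style constructor

OPCUA_ENDPOINT = "opc.tcp://plc.example.local:4840"  # hypothetical
NODE_ID = "ns=2;s=Furnace.Temperature"               # hypothetical
MQTT_BROKER = "broker.example.local"                 # hypothetical
TOPIC = "plant1/furnace/temperature"

opcua_client = OpcUaClient(OPCUA_ENDPOINT)
opcua_client.connect()
node = opcua_client.get_node(NODE_ID)

mqtt_client = mqtt.Client()
mqtt_client.connect(MQTT_BROKER, 1883)
mqtt_client.loop_start()

try:
    while True:
        value = node.get_value()             # read the current value via OPC UA
        payload = json.dumps({"value": value, "ts": time.time()})
        mqtt_client.publish(TOPIC, payload)  # forward it into the IIoT world
        time.sleep(1.0)                      # 1 Hz polling, purely illustrative
finally:
    opcua_client.disconnect()
    mqtt_client.loop_stop()
```

In practice a gateway would subscribe to value changes rather than poll, but the structure – one protocol in, one protocol out – is exactly the interface chain Markus describes.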
Do you have an example from a trial at your site? You have very different processes there in manufacturing.
Markus
You can imagine it like this: We have a huge melting reactor where these high temperatures prevail. There is also a lot of control technology installed in it to control the gas combustion and the entire process. Controllers are typically laid out somewhere in the design phase of a plant and then perhaps fine-tuned a bit during commissioning. But what we realize, of course, is that this kind of thing doesn’t hold up over twenty years. In any case, the controllers have to be reworked. Today, a lot of that takes place in the system itself.
What would help us a lot would be to bring all the information from the process, together with the current controller parameters, into the cloud, and ultimately build a virtual model there on which you can test and evaluate new controller parameters – and then return them to the production area virtually tested. Today, we are already talking about edge clouds and edge devices. So we are definitely on the move in this subject area; we have created the prerequisites. But the complete connection from the shop floor to the edge and then to the cloud does not yet exist in this form – also because it is not yet possible.
This means that such controller behavior could also be tested virtually and perhaps simply optimized. Because you’ve established it at some point, and the next step would be to challenge that and have the data ready for that, right?
Markus
Exactly. We always say we’re talking about two loops, if you will: one is the fast control loop, which ultimately handles execution, where I only apply my already-developed controllers and models in order to be fast in production – in terms of real-time capability and response. And then I have my optimization loop, which might run through the cloud, where I can analyze data and work out optimized settings – perhaps also with new methods of artificial intelligence or pattern recognition.
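The split Markus describes can be made concrete with a toy example: the sketch below plays the role of the slow optimization loop, scoring candidate controller parameters against a simple simulated process before anything touches the fast loop on the real plant. The first-order process model, the PI structure, and all numbers are purely illustrative.

```python
def evaluate(kp, ki, setpoint=100.0, dt=0.1, steps=600):
    """Score a PI parameter set on a first-order lag process
    (tau * x' = u - x) by its integral absolute error (IAE)."""
    tau, x, integral, iae = 5.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral  # PI control law
        x += (u - x) / tau * dt         # simulated process response
        iae += abs(error) * dt
    return iae

# Compare the current parameters with a candidate set found "in the cloud";
# the better (lower-IAE) set would then be handed back to the fast loop.
for kp, ki in [(1.0, 0.05), (2.5, 0.2)]:
    print(f"kp={kp}, ki={ki} -> IAE={evaluate(kp, ki):.1f}")
```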
Axel, do you know such use cases from other customers?
Axel
Absolutely! And we are trying to close the chain by directly connecting connectors to various field systems – be it our controller, which would be the Simatic, or controllers and components from competitors – via an app as part of Industrial Edge, for example. So that you can get the data as completely, timely and fast as possible. You can process it in the Industrial Edge, filter it, or forward it directly. For this we then have the next connector, MQTT. These are all apps in Docker, so a customer can handle them very easily and can also be sure that the components work on their own first.
With this we try to close the gap and make exactly the requirements Markus describes manageable and distributable. It also gives the customer the ability, once a use case works to their satisfaction, to roll it out to different plants and different controllers, because we also provide a corresponding management system for this runtime and for the apps. In this way, we ultimately bring all this into a product state that helps our customers – in this case perhaps also Markus – to apply such solutions.
Markus, now we got a little bit of the market and product view from Axel. What are the requirements of you as an end customer that such a system must meet? Is that primarily about this interface management?
Markus
From … to. I have to have the interface to create that connectivity first. One of the most important topics in the Industrial Internet of Things. But after that, many other issues also play a role: not least a service level agreement somewhere. I would be delighted if we could go in this direction with Axel and perhaps try out what is possible today, the state of the art, in terms of innovative control technologies. But at the end of the day, Axel also has to be available to me 24/7.
That means that I see a big point here with the integrators, who nowadays still stand between the suppliers – the big ones, Siemens – and us as end users, in order to ultimately bring these technologies to the market and also to be able to fully support us end users, Saint-Gobain. Here, of course, I would like to see integrators take an even stronger role and go along with these new technologies that Axel is describing.
Looking in the research direction, Pierre: I would be interested to know what research questions you are working on to address this.
Pierre
The first point that has already emerged, in my opinion, is standardization. With five different manufacturers, you can be sure that today they all speak different protocols and provide different interfaces – which of course makes it very, very complicated to network them all together. Cloud integration is also very, very difficult with today’s fieldbus systems. I think there’s still a lot of reworking to be done. The industry must agree on individual standards that can then be used across the board, and across manufacturers.
Then, of course, there is the classic issue of data sovereignty. I don’t want to go too deeply into that. The industry attaches great importance to ensuring that data stays where it is supposed to be, since corresponding IP (intellectual property) can be read out. If a manufacturing company equips its production with vibration sensors, and the data flows out, then in the worst case it is possible to reproduce from the data how the process ran, and then recreate the whole thing – which of course then represents a problem for the company, because its unique selling point has disappeared.
With regard to infrastructure, I already touched on two topics at the beginning: data transmission and computing. In both cases, real-time must be enabled according to the requirements – real-time in the sense of deterministic transmission of messages, or the corresponding deterministic computing. I think there are already good approaches on the technology side, but this needs to be taken further. Standardization matters here again, too: the industry needs to converge on one area and focus there – otherwise we again end up with many different isolated solutions, and the overall continuum that is needed for this falls away.
I have one last point: responsibility. It is also very, very important. We are expanding the circle of hardware systems from one meter, built into the machine, to thirty meters, into the server room … The question is: What happens in case of failure? Who is responsible – the one who sets up the communication link? The one who builds the end devices? The one who provides the computing infrastructure, the application, the machine connection, and so on? You can break that down in a very fine-grained way; it’s a very big issue.
Maybe a little bit of a look into reality or into the future. What is the status now? Does the topic of virtual control already exist? If not, why is it not done today? What is the challenge with implementation today?
Pierre
The issue is there and it is being addressed. Still, the industry is very conservative, I’d say. Perhaps a small anecdote: My studies were not so long ago. There, in the area of control, I still learned how to design a controller from a step response, a triangle and a parameter table – in times of artificial intelligence, big data and whatnot, this is of course no longer up-to-date. Still, manufacturers are offering more and more connectivity options these days. A PLC has various interfaces. There are extension modules. – But the overall concept is still missing. This includes not only the question of responsibility, but also closing the loop, bringing individual platforms back into it, so that not only manufacturer 1 can supply all three components, but that perhaps component 2 can also be brought in by manufacturer 2. I think the overall infrastructure still needs to be built up; individual components, individual problems are already being solved. This is where everyone needs to come together once again.
Jumping back into practice; you said it so well earlier, you first bring the data from an old control infrastructure, be it OPC DA, somewhere into UA. That’s a lot of isolated solutions; that’s a challenge with standardization. If I really want to address the issue, start working on it tomorrow, how does data acquisition work? How do I even get the data from conventional PLCs?
Axel
We can address many of the requirements we have discussed today. For example, with an industrial edge, we can get a lot of data out of the PLC in a very timely manner, preprocess it on the shop floor, and send it to the cloud. There is a wide range of connectors, so that we can connect devices from different manufacturers on the shop floor, collect the data, and process it immediately. And there is the ability to bring data back down from the cloud to the shop floor – from an AI model, for example. We have these possibilities today; we can use them today.
We have very powerful software controllers that we can combine with this. But to reach the full vision we also discussed – can’t we run the control completely in the cloud, and can’t we, to increase resilience, run multiple controllers at the same time? What about full determinism then? How do fail-safe requirements work then? How much of this is available as a product portfolio, so that a partner somewhere in the world can build a solution from this construction kit and take responsibility for it? There we still have some way to go. We’re on it. We believe this is an important goal to address.
That’s where we also look forward to collaborating on the one hand with research that shows us ways to do it, and on the other hand with customers who are willing to do a proof of concept with us – but the proof of concept is not enough. In the end, there has to be a product portfolio. Because the partners with whom we work all over the world must also be involved again, so that the solutions can then be produced in a repeatable manner in the market. That’s a great task for us to work on over the next few quarters. I’m quite excited to see how we get on there!
How does the data processing from this hardware layer to the next level work? Pierre, how do you do that and then where does the data go?
Pierre
Classically, you have specific hardware that is designed to process data quickly, run applications, and execute control algorithms. What people use today, or would like to use, is standard hardware – the kind cloud providers also deploy by default, or PCs that are on the shop floor anyway. But that hardware is not designed for real time. That’s where we work with different virtualization options – hypervisors, CPU pinning, and similar methods – to clearly define which computing resources are reserved for which application. Sure, we could make the hardware real-time capable all over again. But it would also be exciting if this could run in parallel. If non-real-time-critical applications run in parallel with real-time applications, that’s great, because, first, we can use the infrastructure we already have on site and use different nodes, i.e. run redundantly – the topic of resilience just came up. And of course also in terms of sustainability: Why do we have to set up new computing systems everywhere, which are then used thirty percent of the time, when we could simply use the infrastructure that we already have on site and dynamically move the individual software applications back and forth, depending on where they are needed and where resources are currently available?
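As a small illustration of one of the isolation techniques Pierre names: on Linux, a process can be pinned to a dedicated CPU core and given a real-time scheduling class with a few lines of Python. The core number and priority below are illustrative, and SCHED_FIFO requires elevated privileges – a sketch of the CPU-pinning idea, not a complete real-time setup.

```python
import os

# Pin the current process to CPU core 2 so the control task does not
# migrate and compete with general-purpose workloads on other cores.
os.sched_setaffinity(0, {2})

# Request the SCHED_FIFO real-time scheduling class so this task
# preempts ordinary (non-real-time) processes on its core.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
except PermissionError:
    print("SCHED_FIFO needs elevated privileges; running as a normal task")

# The deterministic control loop would run here on the isolated core.
```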
We ourselves at the IPT work on different levels. We have the Fraunhofer Edge Cloud – a server system that we host locally and trim for real time. But we also go down toward PCs, which we prepare and test there, and toward edge hardware provided by various manufacturers, where we test and validate accordingly.
On the subject of the cloud, you have already mentioned various providers and options. It’s also about evaluating the data in the end. Markus spoke earlier about certain controller behavior that can be optimized. How and where is this data analysis possible? Is this something I do on the edge? Do I do it all in the cloud? Does everyone have their own system?
Axel
It pretty much depends on the customer’s requirements and the application. Data pre-processing on the shop floor is definitely possible. If we want to do Machine Learning or use Artificial Intelligence and need or want to train or re-train the models, then it makes sense to do something like this in the cloud. Of course, you can also build up the corresponding computing capacity locally at a customer’s site. But if it’s a larger company that has global manufacturing, then it might make more sense to train that in the cloud. – But the result of this must somehow get back to the machine. And the big point is, it has to get to the machine in such a way that the service technician finds the target value at 10:30 p.m. on a Friday night and can put the plant back into operation.
This is where an industrial edge directly on the shop floor – possibly directly in the control cabinet, next to the controller that controls the system – provides very, very great value. Today, we already have an incredible number of good connectors to the existing control systems. In that respect, we can connect to a lot of what we find in the plants. We certainly have a large installed base to fall back on, but we can already establish pretty good connections to other manufacturers as well. So where we process the data depends very much on the customer’s requirements. But when we talk about artificial intelligence, for example, training is often needed in the cloud.
We also want to talk a little bit about performance evaluation of the whole topic of PLCs or Virtual PLCs. In summary, what are the potentials compared to conventional PLCs?
Axel
A virtual PLC that is completely independent of the hardware on which it runs can be optimized completely independently, and it benefits directly from the hardware’s performance improvements. An embedded system always needs design-in time after a new processor generation is developed – a virtual system can scale with the hardware immediately. This is a maximum advantage.
In addition, in times of cybersecurity, I can use completely different mechanisms in administration for large quantities of these controls, these virtual systems, completely virtualized. This is very convenient for companies that rely heavily on IT implementation. You can use many things there.
On the other hand, there are of course also advantages to an embedded system: I am inherently more resilient to many cybersecurity challenges, and I am very much tailored to the application. Even if I have scaling of resources in a cloud or a corporate cloud, it can always happen that in situations where a lot of computing power is drawn at once – due to a failure case, say – I lack computing power for the individual application. With distributed computing power on dedicated controllers, on the other hand, each controller is designed for itself – that doesn’t bother me for the time being.
At the moment, these are also reasons why, in the vast majority of cases that I experience, customers opt for the combination of both solutions and say, I would like to have a conventional solution for my core processes, where I don’t want to take any risks – but please combine them for me. That’s where factory cloud, i.e. industrial edge, and cloud application in combination come in very strong. But that may look very different by the end of this decade.

Results, business models and best practices - How success is measured [28:59]

Now if we look a little bit at the business case: in the end, the business model behind it is also interesting – something everyone can calculate with. Above all, how is the market developing? Axel, you also talk about PLC-as-a-Service and Control-as-a-Service models … what does the business model look like here for Siemens, for example? What can the market expect?

Axel

With Industrial Edge today, we already say there is a management fee, a monthly fee. So we are moving toward the new software-as-a-service models, which the customer can absolutely benefit from. These new models still need a little time, because not everyone is comfortable with them yet. There is definitely still an attitude in the market of “I would have liked to buy it once, and then it’s mine” … but I think there will be a rethink. Because if you have a piece of software these days, no matter what hardware it runs on, you have to do software service over time, and then you’re into service and updates anyway. I can imagine that software-as-a-service will also become established in the automation sector, even on the shop floor – but it will still take a while.

Of course, I would also be interested in the business case in the direction of Saint-Gobain. Markus, what does that look like to you? Have you ever … a kind of return-on-investment calculation is not yet possible, of course. But thinking along those lines, where are you introducing this and what’s the business case for you?
Markus
Yeah, sure – ultimately, for all of our digital developments that we put into production here at our facility, we have to show some form of RoI calculation at the end of the day. Now it’s relatively simple with us: we are a manufacturing company, and I don’t think there is a manufacturing company yet that has no losses in production at all. Thus, our RoI calculation is first and foremost about the losses we ultimately avoid. And I could very well imagine that if we optimize our process via optimized process settings – i.e., we become more efficient as far as our main losses are concerned – we might also be able to save our main fuel, natural gas … this of course allows us to make tangible RoI calculations.
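In its simplest form, the calculation Markus outlines weighs avoided losses and fuel savings against the yearly cost of the solution. The sketch below uses entirely made-up figures to show the structure, nothing more:

```python
# Purely illustrative numbers - not Saint-Gobain data.
avoided_losses_eur = 80_000  # hypothetical: production losses avoided per year
gas_savings_eur = 50_000     # hypothetical: natural gas saved per year
solution_cost_eur = 60_000   # hypothetical: yearly cost of the cloud/edge solution

roi = (avoided_losses_eur + gas_savings_eur - solution_cost_eur) / solution_cost_eur
print(f"RoI: {roi:.0%} per year")  # -> 117% with these example figures
```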
It’s more difficult with issues like resilience, which I mentioned at the beginning. Because it’s a bit like insurance. You always pay money in the hope that the insurance claim never occurs – and that’s the way it would be for us with an edge cloud or a pure cloud solution that we would deploy to cover ourselves, just to create that redundancy. Because at the end of the day, of course, we hope that the control failure – in whatever form – never happens. You would have to weigh up the costs and benefits a bit. But I definitely see potential!
Perhaps the hybrid infrastructures mentioned above also provide a certain degree of protection. Or rather, it also depends on the different technological approaches, which we have already discussed a bit.
Markus, future topic or reality – what would you say from your perspective? Will the topic arrive in five years, fifteen – or tomorrow?
Markus
Maybe not today, but it won’t take ten years either. So I do believe in this development, and with what we are doing at Saint-Gobain we are already moving strongly in this direction. When Axel talks about Docker, when I mention MQTT – these are all technologies that we are involved with, that are already being used here at Saint-Gobain. We are able to persist and serve many millions of data points per day in a very, very performant way. That’s why I think we’ve already taken a good step in the direction of digitization, Industry 4.0 and the cloud. I think we are ready. If Siemens is able to offer us initial solutions in a few years, or perhaps not until the end of the decade, then we would be very happy to try them out. We are of the opinion that this should always be tested directly in production, because that is ultimately the environment where you see directly how the potentials arise. That’s why I would be pleased if we stayed on the subject and could perhaps carry out the first tests together with Siemens.
Axel
Great, let’s get started then, good idea!
Axel, when do you see the topic becoming reality?
Axel
I think we have everything we need to get going and get started. Do we have everything that has been discussed today ready as a product? – No, not quite yet. That takes a moment. But I believe that by the end of this decade the automation world will look very different. And I agree with Markus on that. Maybe it will take five years; maybe a little longer. But I say, at the end of the decade the world will look different!
Pierre
Then let me add the research perspective again. I second that: not today, but not in ten years either. The technologies are ready. Now we have to sit down together and put it all into practice. I think the industry is very, very conservative; that’s why it will start slowly at the beginning. But as soon as the first successes are recorded, a maelstrom will form, and then no one will actually be able to resist it.
I guess if anyone wants to get in touch, you’re all open to talking?
Axel
Maybe you can use the opportunity directly – we are at the Hannover Messe in Hall 9!
That’s right too – come by the booths!

Please do not hesitate to contact me if you have any questions.

Questions? Contact Madeleine Mickeleit

Ing. Madeleine Mickeleit

Host & General Manager
IoT Use Case Podcast