Stefan Soutschek (Vice President Operations IT Governance at Schaeffler Technologies AG & Co. KG) and Matthias Hafner (Head of Sales & Marketing, Schaeffler Digital Solutions GmbH) talk about how thousands of machines at Schaeffler plants are networked in the 52nd episode of the IIoT Use Case Podcast. A key role is played by a messaging infrastructure that is used operationally worldwide and enables global, secure data availability.
Podcast episode summary
The use case discussed revolves around connectivity on the shopfloor. At Schaeffler, a major German supplier to the automotive and mechanical engineering industries, this means over 10,000 machines in 70 plants worldwide. In total there are many more – around 20,000 machines – but not all of them will enter the digital world. That is a job for Schaeffler Digital Solutions GmbH, the Group's internal software and technology supplier. The subsidiary is an expert in machine data acquisition and analysis as well as predictive maintenance. Its focus is on machine connectivity, condition monitoring, and real-time process monitoring.
In this podcast episode, it becomes clear that the heterogeneity of the plant landscape is the biggest challenge when it comes to networking machines. On the shopfloor there are machines from a wide variety of manufacturers, of different ages and with diverse interface types. There are also machines that have no interfaces at all. OPC UA is already widely used, but the so-called brownfield holds many stumbling blocks. Nor, according to Schaeffler, is it just a matter of networking the machines; it is also important to include the various perspectives of the shopfloor and to look at machines comprehensively in order to gain new data insights – for example, Overall Equipment Effectiveness (OEE). The goal is not to create data garbage, but still to collect enough data that additional, relevant KPIs such as energy efficiency can be derived for new use cases even without major conversions.
With the help of Schaeffler Digital Solutions’ messaging infrastructure, which is used alongside the gateways, this data is made available worldwide. Every Schaeffler employee who has the need and, above all, the authorization to use certain data for a use case can use it quickly and easily. How the new infrastructure works in detail and how the brownfield is successfully networked – this is explained in detail in the podcast.
Welcome to the IIoT Use Case Podcast. Today I’m talking to the Schaeffler Group, the global automotive and industrial supplier with decades of experience in the field of digitalization, and its subsidiary Schaeffler Digital Solutions, the expert in software for machine data acquisition and analysis as well as predictive maintenance. The use case today involves Schaeffler’s plants and the networking of over 10,000 machines worldwide.
Matthias, could you briefly introduce yourself and say something about yourself, but also about you as a company, Schaeffler Digital Solutions, to give our audience an introduction?
My name is Matthias Hafner. I am responsible for global marketing and sales activities at Schaeffler Digital Solutions GmbH. This is a wholly owned subsidiary of the Schaeffler Group focused on machine connectivity, condition monitoring and real-time process monitoring. In short, products related to Industry 4.0.
Stefan, could you also briefly introduce yourself and briefly say where you belong in the Schaeffler Group? Where do you work and what is your role exactly?
You have already introduced us as a globally active automotive and industrial supplier. Specifically on my role: I am Head of Operations IT Governance. This means that I am responsible for the IT portfolio, the IT software portfolio, and architecture governance in the Operations domain – which includes production as well as supply chain and purchasing, among other things – and, further down the line, the connectivity strategy. And also, globally, the topic of cybersecurity for the production environment.
I think you also worked for a specific plant in Herzogenaurach before. What’s the deal with that: what was your role there?
I was not directly responsible at the plant, but my history at Schaeffler is a bit longer. At the time, it started in the Schaeffler Group’s special machine manufacturing division, which also has a global presence. So I come directly from the reality of software development in production environments, which of course has opened my eyes a bit for today’s role, to be able to advance the translation between IT and production well.
So I would want to jump right in and ask a little bit about your plants, production and your shopfloor. First, a brief introductory question: What is your vision for digitization? That’s a big buzzword now, but Schaeffler is a giant corporation – what is your overarching vision in terms of digitization? Perhaps specifically aimed at the shopfloor now?
I was about to say that digitization at Schaeffler is very big. We have our own program, Execute 25, where we really focus fully on the topic of digitization. We are on a very successful path at various levels: product innovation, mindset shifts toward agility, and much more. In concrete terms for the shopfloor, this means moving more and more into data-driven use cases and driving the digitization of our value streams in order to achieve truly end-to-end views of our production and value creation. We have also defined a clear strategy, which includes a number of use cases – always with the goal of doing next the things that will move us forward most quickly in digitization.
What is the scope of the whole thing? That’s probably a lot of machines of various kinds that you have in the field, and also logistics supply chains that are part of it. That’s a holistic strategy then, across the Group, isn’t it?
Exactly, we are globally positioned in our organization across the various functions and divisions of the Schaeffler Group. Specifically, we’re talking about over 70 plants and, in total, I would say definitely over 20,000 machines. Not every one of them will reasonably enter the digital world at some point, but you can imagine there’s a lot of change and a lot of work to be done to digitize a shopfloor.
Absolutely. That’s where I would want to stay right on topic and give the audience the insight. One or the other has probably been in production before, you also know pictures from the shopfloor. But perhaps to create a virtual image in the minds of our listeners and to immerse themselves in the use case: What does it look like on your production shopfloor and which machines are there?
The question is not an easy one to answer. Schaeffler is a very large company with a very broad product range, from classic products, such as bearings, to state-of-the-art mechatronic products, also in the context of e-mobility. And if you go into our production, you can see quite classic manufacturing processes, such as grinding, turning, hardening, assembling, testing. There is also the smell of oil. But there are now also really cutting-edge products, mechatronic products, where they are then on test benches; where we flash embedded hardware. So very heterogeneous, depending on which product you are producing in which location in the world. That is also to some extent the challenge of digitizing a very heterogeneous environment, with a wide variety of interfaces; with machines from different manufacturers, with different interfaces, of different ages. People are always talking about OPC UA and modern standards, and of course we use them in our new system installations. But the challenge is really the brownfield.
Now you’ve given me the perfect transition, because I was just about to ask: You had said that a wide variety of machines and systems – these are probably cutting machines, grinding, turning are part of the process. What are some of the classic challenges you face on a day-to-day basis? I can imagine these are a wide variety of departments that work there, that also find a wide variety of data exciting or work with the individual processes.
Of course, if we go to the purely process level, we have the challenge that we have quite a lot of different roles in our plants, which of course also have different perspectives on the machines and the data that these machines generate. Simply “connecting” is easy to say, but at the end of the day it’s always about: do I have a solution that provides me with the data that is perhaps also relevant for use cases we don’t even know about today, because we always have new perspectives? On the shopfloor, that combines with the question of how much time I have on a production line to try out connectivity solutions, because it may involve machine downtime, which in turn means reduced production. But – and this is actually the main issue – how do I connect machines that are now of a certain age? Which have proprietary interfaces rather than a standard? Which were implemented individually, for a special case – or machines that have no interfaces at all: how do I create them? This is actually one of the biggest challenges we face.
Now you had just mentioned data and key figures. I would go a little deeper into that. You said you have a wide variety of machines, probably thousands of them – what data are you interested in from these different systems? Is there any way to cluster this by department, who is interested in what data? What data is that today and which metrics are interesting for you to use for further use cases?
On the question of what data is relevant – that’s a very open question, because in principle all data can be relevant, even if we don’t know it today. That’s why we always make sure that we don’t create any data waste, but always have the option of generating additional data for new use cases, even without major conversions. If you look classically into production – where do you start? – then, as in many companies, we talk about key figures such as Overall Equipment Effectiveness, or OEE, where we talk about the quality, the performance, and the availability of machines: simply to optimize value streams, to identify bottlenecks, to highlight quality gaps, and, most importantly, to see progress. In addition to classic production control, it is of course also relevant what the condition of our machines is. These are the two cases where we are already very broadly positioned today. Schaeffler is not now in the process of digitizing its first machines; it’s more about taking the next step and making it much broader and more scalable. In the end, it’s really about information like vibrations and currents that reflect the state of our machines. But what is completely new – and this is also one of the top issues we are dealing with – is how we can recognize consumption on our machines: how can we, in the context of sustainability, further optimize our manufacturing?
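The OEE figure Stefan mentions is conventionally computed as the product of availability, performance, and quality. A minimal sketch, with invented sample figures (nothing here reflects actual Schaeffler numbers):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: product of the three classic
    factors, each expressed as a ratio between 0 and 1."""
    return availability * performance * quality

# Invented example figures for one machine over one shift:
a = 0.90   # availability: run time / planned production time
p = 0.95   # performance: actual output rate / ideal output rate
q = 0.98   # quality: good parts / total parts

print(f"OEE = {oee(a, p, q):.1%}")
```

Each factor below 100% compounds the others, which is why even a machine that looks "mostly fine" on every single dimension can have a surprisingly low overall figure.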
These are probably a wide variety of use cases. To pick out an example or to classify the whole thing in general: You probably have an MES system somewhere in your plants that already monitors this data today? Or rather, what is already there for control. What is the next step for you? What use cases are interesting outside of the data you already have in an MES system today, for example?
Traditionally, of course, if you look back a few years into the past, you used to have a lot of solutions – such as an MES, which of course we also have – but it was quite often a data silo. This means that this data is available there, but not across the board. This is one of the core issues that we have been dealing with from an architectural point of view, but also from a solution point of view, in recent years. One is, how do I get the data available at the lowest level across the board, where it’s really about heterogeneous brownfield application. But from that point on, at the latest, to say, how do I get this data made available now in such a way that anyone who has the need and the authorization to be able to use this data for a use case can also get it quickly and easily? That’s why, in addition to the actual gateways that do the connecting, we’ve built into our architecture a kind of messaging infrastructure that works according to a publish-subscribe mechanism. That is, anyone who has information publishes it on this infrastructure; and anyone who has the interest and rights can subscribe to it and be notified whenever new data is available. And we do that not only with machines, but also with legacy systems, like an MES or other system, to provide more and more broad data availability so that we can scale. Also with use cases. I always say, in the end, I want to have a cable out of the machine without having to build a new PC into a system every time for every use case.
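The publish-subscribe mechanism Stefan describes can be sketched minimally as follows. This is a hypothetical in-process illustration of the pattern, not Schaeffler's actual messaging infrastructure; topic names and payloads are invented:

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish-subscribe sketch: anyone with information publishes
    it on a topic; anyone with interest (and rights) subscribes and is
    notified whenever new data arrives."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A real broker would gate this step with an authorization check.
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Notify every subscriber of this topic; publisher and consumers
        # stay decoupled and need not know about each other.
        for callback in self._subscribers[topic]:
            callback(payload)

# Example: a machine gateway publishes a piece count; a dashboard subscribes.
bus = MessageBus()
received = []
bus.subscribe("plant1/line3/piece_count", received.append)
bus.publish("plant1/line3/piece_count", {"value": 1250})
```

The point of the pattern is exactly what Stefan states: new consumers (an MES, a dashboard, a new use case) can be added without rewiring the publishing systems.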
Matthias, I’m sure you have something to say about this. Now it’s also a question of how do I do this in practice? We’ve learned the data has to be available somewhere; there are data silos there. It’s now a matter of making that data available to anyone who has the need and the authority to get added value. How do I do it in practice? How does this now also work in connection with Schaeffler Digital Solutions? How do you work together? Is there a solution yet?
There are already solutions. I’ll go into more detail in a moment about one that we’re just officially launching, since Stefan has already put it so well: machine and sensor data are an integral part of any digitization strategy on the shopfloor. Particularly with heterogeneous machinery, it is precisely this standardized data acquisition that is usually lacking. Our product autinity DAP, the so-called Data Acquisition Platform, stands, in a word, precisely for connectivity. DAP is a flexible software platform that can be easily integrated into existing infrastructures at the customer’s site. It collects data from a wide variety of machine controllers – Siemens, Fanuc, Heidenhain, Bosch, Allen-Bradley – as well as from sensors, and provides it in a machine- and manufacturer-independent way to an overlying system, or a so-called message broker. That’s roughly how it works. We can go into that in a little more depth if you like.
That is, what Stefan has elaborated is, so to speak, what you have solved with autinity DAP? To take just those silos of data from disparate systems that exist somewhere and make them holistically available to someone?
Exactly, partially. The DAP solution essentially provides the data from the machines and sensors to a central infrastructure, the messaging bus. Other systems can also be connected to it, but that is then independent of us; that is possible. The DAP platform itself is a complete connectivity product.
Stefan, how does the whole thing fit into your architecture in practice then?
At exactly the point where we want it: the Schaeffler Digital Solutions product delivers exactly this data to our infrastructure, our messaging component, and thus provides data at a higher level for all the use cases that we are already implementing today and those we are planning. In the end, if you look at our architecture, what is it aimed at? We would like to solve a great deal via interfaces. I always tell my colleagues and employees, or anyone who wants to hear it or doesn’t: we must not follow every trend – but we must not miss the important ones. We simply need a sensible balance between stability and innovation, and we can only achieve this if we design our solutions to be as modular as possible. And one of the components that fits right in at this point is the Schaeffler Digital Solutions solution.
Matthias, what knowledge do I actually need for this? It sounds relatively simple to say that I am now creating connectivity here, including holistic connectivity to machines, systems and controls.
In principle, the product is built in such a way that digitization can be driven from within the plants. That means you don’t need IT or programming skills to use the software. What you do need, of course, is knowledge of the control programs. We provide software that reads the individual controllers or sensors and translates their data into a standard language; to configure this, you have to get to grips with the individual control programs. As a rule, however, we see vast knowledge among the customers we are working with today. Many companies program their controllers themselves; otherwise we can also provide appropriate support. But in principle, the software is really designed to be so simple that you don’t need an IT degree to connect machines – you can use it out of the box, with the standard connectors that we provide.
Many customers are experts in the field of automation technology, and this is now also, to some extent, the interface between the two worlds. If I now want to start tomorrow with this topic – we have outlined it in the project and made a plan: how does the whole thing work in practice, thought through from the data silos to the infrastructure we discussed, whether that’s cloud or on-site servers where I run that data? How does this project work if I want to start with it now?
To perhaps go back to how the software is structured – I think that’s important to understand: there are three components, so to speak. It starts with a client software, the so-called DAP-Home, which collects the data directly on the shopfloor. The software can run on any common industrial PC, which may already be there today or can otherwise be added – or, with a very good network, it can also be provided virtually. This also includes the more than 50 connectors with which we can query controllers and bus systems out of the box.
Connector – that is, a template?
Exactly, this is a kind of standard interface for how we communicate with the devices. It’s like a translator: you read a certain raw value and translate it into a meaningful value that you can then compare. For example, the piece count sits in a certain DBX module in a controller as a plain number, and I translate this number into a piece count. That’s exactly what the software does, and it then provides the data – for very, very different PLCs and sensors.
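The translator idea can be sketched in a few lines. The addresses and variable names below are invented for illustration and do not describe the actual DAP connector format:

```python
# A connector maps a raw controller address (e.g. a data-block operand in
# a Siemens-style PLC) to a named, typed value. The mapping table is what
# has to be configured per machine; the translation itself is generic.

CONNECTOR_MAP = {
    "DB10.DBW4": ("piece_count", int),         # hypothetical counter word
    "DB10.DBD8": ("spindle_current_a", float),  # hypothetical current value
}

def translate(address, raw_value):
    """Translate one raw controller reading into a (name, value) pair."""
    name, cast = CONNECTOR_MAP[address]
    return name, cast(raw_value)

# A raw string read from the controller becomes a comparable, named value:
print(translate("DB10.DBW4", "1250"))
```

The configuration effort Matthias mentions lives entirely in the mapping table, which is why knowledge of the control programs is needed but programming skills are not.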
In other words, for one thing, we have the small software package, which is the DAP-Home?
Exactly, that was the first component, so to speak. The second component we need is the server component, where we can manage the various devices – the industrial PCs on the shopfloor with the software on them – along with the data retrieval services that run there, and also centrally and conveniently roll out updates and security patches from the office. And then there is a third component, which is very crucial – and which is also what makes us tick: centralized data management. Our customers often have multiple plants spread around the world, and we want to make sure that the data you collect can be used globally. For this, it is of course important to always designate the same data in the same way – for example, that variables with the same origin are not referred to once as “number of pieces”, once as “part counter”, or in 20 other ways, none of which can be compared due to the different naming. If everyone names things individually, it becomes difficult at some point to compare the data and do things like performance reports. For this reason, we have the central component of data management. Those are the three components the software needs. And how does this work in a project – that was the question: in principle, an industrial PC equipped with the software is installed in the control cabinet of the machine and connected with an Ethernet cable or an adapter; then it is simply configured with our DAP-Home. The data is then provided to whatever IT system is needed. Of course, this still requires a little configuration, because each IT system may consume the data differently. And then the data is available.
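The central data-management idea – one canonical name per variable across all plants – can be sketched as a simple alias table. The aliases below are invented for illustration:

```python
# Map plant-specific variable names onto one canonical name so that the
# same quantity stays comparable across plants and reports. In practice
# such a table would be maintained centrally, not hard-coded.

ALIASES = {
    "number of pieces": "piece_count",
    "part counter":     "piece_count",
    "stueckzahl":       "piece_count",   # a German plant's local naming
}

def canonical_name(local_name):
    """Resolve a locally used variable name to its canonical form;
    unknown names pass through normalized but unmapped."""
    key = local_name.strip().lower()
    return ALIASES.get(key, key)

assert canonical_name("Part Counter") == "piece_count"
assert canonical_name("Number of Pieces") == "piece_count"
```

Without such a mapping, the "20 different names for the same counter" problem Stefan and Matthias describe makes cross-plant performance reports effectively impossible.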
Probably the roles and rights within the customers’ organizations also differ – that is, do I then determine which data, or which pools of data, go to which department or to which person?
Correct. In principle, it first provides data. The IT tools where the data is sent to usually have user management anyway. If a message broker is installed, as used by Schaeffler, for example, or other tools; there are release processes for this, which the customer has often already implemented itself. This is really a data feeder that can be used very flexibly.
What exactly is a message broker?
In the end, you can trivially call it a structured data highway. In the past, people used to connect system to system; at some point you had an infinite number of systems connected to an infinite number of other systems, and for a small change I then had to adapt 20 systems. Nowadays the approach has changed: I have a highway with different on-ramps and off-ramps, and anyone who follows the rules is allowed to use that highway and, for example, deposit their data there. To put it abstractly, at the end of the day it’s a software solution.
After all, there are a wide variety of customers who also pursue a wide variety of strategies. What kind of customers do you have and what features does this software end up bringing to address different customer needs?
We have gained a lot of experience with the software because we have been working in this field since 2004. We have quite small customers who operate only one plant, but also large DAX companies that use our solution. For this reason, it was important to us to be able to serve both small and large customers. In addition, this can really be driven from within the plant itself – it does not require a central department to drive digitization; the colleagues in the plant actually have the need and the use cases to work with the data. The solution is therefore kept very flexible and can be implemented well and cost-effectively in customer infrastructures. That was extremely important to us, because every customer is different. Schaeffler, for example, has standardized on the message broker NATS; other customers may have nothing in that regard. That’s why we stayed flexible, to serve everything.
This is probably also suitable on a global scale, so if I have my plant somewhere in China, does this work across plants?
Then I’ll move away from the one use case of connectivity, a little bit towards the business case. I’d be interested to know – at the end of the day, it’s all about saving costs in the long term, increasing new sales. These are all topics that sound very broad. Now, we have already talked about your solution that ensures unified and centralized management of different data. Is there a possibility to calculate some kind of business case, do you have something like that?
We are an internal technology supplier at Schaeffler. Of course, there are corresponding business cases. What we often find today, given the different IT tools in use, is diverse hardware querying quite similar data on the same machine – sometimes up to three different devices. If you look at the hardware alone and then follow the process up to the enterprise level, you can see how much effort goes into physically connecting these three devices to each machine, plus managing this data three times, providing product owners, and so on. For this reason, the business case is quite compelling, especially for Schaeffler; Stefan Soutschek will probably be able to say a bit more about how this looks specifically at Schaeffler. However, we see the same problem with many other customers, so the business case pays off quite quickly if you take a centralized approach to digitization with one connectivity solution – and it’s not just about the business case: security aspects come in as well. If I run three different solutions in my plants on the same machine, I of course have three gateways for possible cyberattacks, which I also have to manage. That ends up bringing a very high overhead into the company.
Stefan, perhaps the question from practice to you: You had already talked about machine downtime at the beginning, also about potentials. What is the business case for you – can you calculate a return on investment at all?
Yes, you can. Without going into specific euro amounts, the return on investment comes from several dimensions. To provide an overview: we are globally positioned with our 70 plants, and this messaging infrastructure, for example, is operationally available in every region worldwide – which is quite challenging when you have to consider issues like export control and data protection across countries. If you really start at the bottom in terms of the business case: the less different hardware I need for connectivity, and the more I can standardize it, the more I save in hardware costs, for one thing. For another, it saves me maintenance costs and training costs for the individual employees. That alone gives us a benefit in the plants that we have to connect anyway, for example because we have to track quality data or other issues. On top of this, we now have existing cases such as our condition monitoring, which is increasingly moving into predictive monitoring – predictive maintenance – where we can simply reduce downtime via targeted condition monitoring. We could expand this at will, on top with completely new use cases for bottleneck management or value stream optimization. In the end, however, we can say that the return on investment for the systems we roll out and connect is well below one year. And it scales: we do not go from plant to plant and build a PC into every machine; we connect where it makes sense and adds value. The fact that we are going further in this area indicates that it is also profitable for us.
This may be a bit of a cheeky question, but is it possible to quantify this in euro amounts? Especially because today we are talking about connectivity. Everyone finds it a bit difficult to really make a statement about what a connection costs, also in terms of the return on investment.
We can at least differentiate it a bit when it comes to pure connectivity. Matthias has actually already described it nicely: what do the costs consist of? They consist, for one, of the hardware I really need for connectivity, including peripheral equipment such as network cables and perhaps a Helmholz adapter for the connection. And they consist of labor costs for mounting the system in the control cabinet and configuring it. In total, I would say we average perhaps two to three thousand euros per connection. That is not an unmanageable amount per machine, which has a certain value; nevertheless, considering the mass, it is of course a high sum. But these two or three thousand euros are amortized very quickly.
Exactly. Especially for us, the connection is done only once, in a future-oriented way. This means that we are now creating opportunities; with the first use cases that run on the connectivity, we are also directly adding an initial business case. But with the capabilities I then have – once I connect a machine and go into real-time process monitoring on the same system – I accomplish a lot. We are also seeing right now that once the box is connected, all of a sudden the divisions come from all sides. The big topic of sustainability: how do we calculate CO2 footprints at a detailed level? All these things are possible because the data is, of course, already available today.
I would like to talk about transferability again. We have discussed a rather generic use case today – it is about general connectivity. This results in a wide variety of use cases, which we have discussed. For many listeners as well, but also for medium-sized companies in general, it is often not yet clear where the journey will lead. You start with a very specific project, where you also have a strategic idea behind it, and roll it out step by step. Do you also help in the development of such use cases? Or do I generally start connecting all these data silos in the first place, and then draw necessity and justification from that?
Usually, today, if you sell software in this environment, you have to bring appropriate use cases with you. One is connectivity, but that alone does not have a direct benefit per se – only what I then do with the data does. Of course, Schaeffler has already implemented a whole host of ideas and use cases on our systems; speaking purely for Schaeffler Digital Solutions, with well over 2,000 connected machines today, we have already made specific savings on heat treatment systems, milling machines, honing machines, and grinding machines, which we naturally also contribute as part of our sales activities. So that, for one. We are also happy to advise customers. The Schaeffler way alone, which we follow, is also what we bring to our customers, because that is what we support. In this way, we ensure that we solve for our customers the pain points that Schaeffler has been consistently addressing for years in the area of digitalization on the shopfloor.
We had now also talked about the classic topic of OEE key figures alone, but then also bottleneck analyses, value stream analyses. In general, sustainability topics are also separate use cases that result from this. That’s an incredibly broad field, where you don’t really know where to start. So a lot of potential.
Perhaps in addition to Matthias, we of course believe in transferability and rollout, and also push that at the end, because we have gained a certain amount of experience over the last few years, always in a good balance. This means that we have use cases for which we already know today that they will pay off. But on the other hand, we are of course still investing in connectivity because we believe that even more is possible without having to make major hardware changes for new solutions. That, of course, drives us – via the mix of belief in the data-driven use cases we’re defining for ourselves plus the savings we already have today – to consistently go further.
On the topic of use cases, perhaps briefly on our own behalf: You can find use cases that we discuss today, and others, linked in the show notes. There you can read up again, if you are interested in one or the other project in writing. What else is coming in the future, Stefan, where are you looking, what are you planning?
If we make it very concrete in connectivity, we plan to connect another 7,000 machines to our infrastructure and architecture in the next two years or so. We have various use cases coming down the pipeline, some of them in the direction of autonomous factories – knowing full well that not every factory will become an autonomous factory in a commercially viable way.
Matthias, what other topics do you see in the future, where is your software development going?
The main focus remains enabling mass connectivity. But there are other things we are looking at. In addition to the software, which we already sell and continue to develop, we are also taking a closer look at the topic of sustainability in the area of condition monitoring and real-time process monitoring. There is a great need for relatively simple ways to gain more insight into CO2 and energy consumption. That’s where things are going for us.
Very important issue. In addition to all the digitization, always look in the direction of sustainability.
Absolutely. After all, this is an issue that affects us all – not only now, but also in the future. You can see out there that many companies are working on this and have corresponding targets for the next few years with regard to their CO2 balance. There is a lot of potential in networking and in data availability. Thank you so much for the insights into your project. Maybe we’ll hear each other again in the podcast in a different setting. Have a great rest of the week.