
Making huge amounts of data manageable for seamless monitoring in CNC production


Listen to the IoT Use Case Podcast on Spotify.
Listen to the IoT Use Case Podcast on other platforms.

IoT Use Case Podcast #61 Siemens

This podcast episode provides exclusive insights into quality monitoring and workpiece analytics with edge computing. Siemens Motion Control delivers IoT practice from the shopfloor, directly from the machine tool.

Episode 61 at a glance (and click):

  • [08:40] Challenges, potentials and status quo – This is what the use case looks like in practice
  • [14:30] Solutions, offerings and services – A look at the technologies used
  • [31:24] Results, business models and best practices – How success is measured
  • [34:21] Transferability, scaling and next steps – Here’s how you can use this use case

Podcast episode summary

Episode 61 of the IoT Use Case Podcast is about digitizing the machining of metallic workpieces with CNC-controlled machine tools. Two practical examples are discussed in this episode: one use case from the monitoring of single-part production and one from repetitive manufacturing.

The application example from series production comes from the automotive sector. It shows how processes are monitored and quality management is optimized with IIoT at Siemens’ own electric motor factory in Bad Neustadt. Have the components been correctly inserted into the clamping device? Is the quality of the gearing right? Is the torque of the tool spindle as expected, or are there anomalies? These are all processes that can be seamlessly mapped. The second use case comes from single-part production in the aerospace industry: the titanium machining of aircraft landing legs. This complex milling process is mapped completely digitally in order to detect errors in the part program and deviations from the nominal geometry before costly iteration loops occur. Mapping the huge component in the virtual world helps in the search for the causes of quality defects and avoids immense costs.

The linchpin of the solution is the so-called Industrial Edge for Machine Tools – an IPC that’s directly in the machine’s control cabinet and preprocesses the data. It enables high-frequency data access and analysis to optimize workpiece quality and productivity. For example, old machines can be retrofitted with connectivity by having the tool act as a kind of converter, turning old interfaces into new ones, such as OPC UA.

Madeleine Mickeleit’s guests in this episode:

Podcast interview

A warm hello to Oli and Bjoern. Glad to have you guys along for the ride. Pleased to meet you.


Hi Madeleine, thank you so much for having us.


Yes, hi, hello from my side as well.


Both of you have been working at Siemens for many years in the context of CNC-controlled machine tools – digitization at the machine tool, so to speak. Today we will focus on the so-called Industrial Edge for Machine Tools. Your job here revolves around the mechanics of the machine tool on the one hand, but also the IT and app area around it on the other. The bottom line here is all the interfaces and secure and seamless data acquisition. Oli, you’re our application spokesperson here, and Bjoern, you’re responsible for all things concerning the platform and the ecosystem, so to speak. That’s one way to put it, isn’t it?


That’s right, that’s exactly right.


Very nice. I mean, some listeners probably know CNC milling machines. But nevertheless, to put the whole topic in context a bit: Where are your CNC machines used today? Where do they stand? Just as a simple example from practice – Bjoern, can you tell us a bit about where the machines are and what you are doing there?


Yes, basically you can find machine tools in the automotive and aerospace industries. They are in the halls practically wherever metallic workpieces are machined – classic steel and aluminum. In the automotive sector, that is engine block production, but also the rotary systems, the whole drive train, the axles and the bearings. In aerospace, there is a lot of engine production on machine tools in particular – the most diverse materials and high-end technology.


Now in the podcast, as always, I talk about specific use cases from the field – on the basis of which you can really understand concretely what the new technologies are, and what your Industrial Edge for Machine Tools brings to the table, for example. What use cases have you brought along for us, to discuss a bit how this works in detail? Oli, maybe you have a use case from your field that you can share with us here.


Right. We clearly see a use case in series production – quite generally now; it doesn’t have to be about one very specific process. Quality management is always at the mercy of what I call the uncertainty principle. On the one hand, you could check every component that you produce in series production. Then I have seamless monitoring, but it makes my process very expensive and very slow. If I say, okay, I’m going to reduce that and just do random sampling – maybe only every hundredth component – then my process is cheaper, yes. But then I no longer have such a good overview of the quality situation. The monitoring of such a series process is one use case that we can look at in more detail here.
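The trade-off Oliver describes can be made concrete with a small expected-cost calculation. This is a minimal sketch with purely hypothetical numbers – the function and all values are illustrative, not figures from the episode:

```python
# Hypothetical numbers illustrating the inspection trade-off:
# full inspection is slow and expensive, sampling lets defects escape.
def cost_per_part(inspect_fraction, inspect_cost, defect_rate, escape_cost):
    """Expected cost per part: inspection effort plus the expected
    cost of defective parts that slip through uninspected."""
    escaped_defects = defect_rate * (1.0 - inspect_fraction)
    return inspect_fraction * inspect_cost + escaped_defects * escape_cost

full = cost_per_part(1.0, 5.0, 0.01, 1000.0)      # check every part
sampled = cost_per_part(0.01, 5.0, 0.01, 1000.0)  # every hundredth part
print(full, round(sampled, 2))  # 5.0 vs 9.95 per part
```

Depending on the defect rate and the cost of an escaped defect, either strategy can win – which is exactly why seamless, data-driven monitoring is attractive: it approaches the coverage of full inspection without its per-part cost.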


Exciting. Do you have a second use case there, Bjoern?


Yes, I would add one there. Namely, we are not only interested in series production; there is also single-part production – or let’s call it small batch sizes. There it’s a matter of just a few components, but they may be large and expensive, and I may need special tools for them. This can quickly become very cost-intensive. If I am a contract manufacturer and have to deliver relatively quickly, then it is very valuable if this run-in process – i.e. this test in the simulation – can be prepared and secured reasonably well, before I need an unnecessary number of components or milling tests, which are also cost- and time-intensive, in order to obtain a good component in the end. And that’s what it’s all about in the end: delivering as high-quality a component as possible.


Exactly. Before I delve deeper into the use cases, just to get an impression: Oli, you’ve just said that it’s also about using optimization potential, the potential from the data. What is your vision here in the area of digitization? What is your customers’ goal? What are your goals? Where are you going with this, as a holistic vision?


Yes, to put it quite strikingly: ultimately, a system that supports the expert in the manufacturing industry in identifying optimization potentials – but based on a self-regulating, self-learning overall system. So really, the vision is that our customers press a button that says “Optimize my production”, and the system records data, learns along with it, derives optimization potentials from it and then also issues recommendations for action. That was through the series-production lens again. And if you think about what Bjoern just said about single-part production: that the process engineer actually knows, before the first chip is even produced, whether the component will turn out well or not.


Sounds really exciting. I would also like to go into detail right there – what exactly “before the first chip” means. But let’s jump right into the action with this use case. Bjoern, now the question to you. You just said it: we’re going to zoom in on this use case. We are now in production in the automotive and aerospace sectors, with diverse CNC milling machines. How do we have to imagine the daily job of your customers? What does it look like on the ground? What kind of controls are there – of course Siemens too, but what kind of hardware can you find, and what do the processes behind it look like? Can you tell us more about that? And how does such a milling process actually work, for those who may not know it yet?


So basically, we are dealing with a very inhomogeneous environment here. In the factory halls there are – I would say, unfortunately – not only Siemens-controlled machine tools, but also machine tools with third-party controllers. There are new machines that may be a bit better suited to the whole issue of digitization, because they have interfaces that the old machines may not yet have. That means we already have a challenge in terms of connectivity: getting at the old machines at all and recording data. This is one of our characteristics.

As far as the milling process is concerned, you asked: it’s a classic milling process. You have a raw material, which may simply be a prismatic block, and then you start from the solid, in so-called rough machining, first of all with a rough tool and rough movements to remove most of the material. Especially in the aerospace sector, 80-90 percent of the material can be removed from a large aluminum block. Then, in two or three operations, it finally comes to the so-called finishing process, where small, fine, precise tools are used to produce the final surface.

And in all these process steps there are monitoring tasks. In the roughing process, i.e. this pre-machining, we want to get through as quickly as possible – but at the same time conserve tool and material; the machine should not be broken in the process. In the finishing process, I want to be as precise as possible, i.e. highly accurate, and also reasonably fast, but the most important thing is that the quality is right in the end. In other words, after four hours of finishing – which is a relatively long process – you don’t suddenly want a crack in the surface due to tool breakage, so that the component is simply scrap. Then the machine time is already wasted.

Challenges, potentials and status quo - This is what the use case looks like in practice

As you have just mentioned, one of the challenges is to ensure quality and to prevent undesired surface qualities from occurring somewhere. Can you summarize the challenges that occur here on a day-to-day basis? Or maybe even the potentials you see that could be lifted?


It is quality and productivity. On the one hand, as a factory operator, I have to make sure that I keep capacity utilization as high as possible. This means that every machine downtime or line stoppage in the automotive sector, of course, costs money and quickly becomes very expensive. This is something to avoid. The investment I’ve made there on the hall floor has to be worthwhile. That means a lot has to come out of it – to put it bluntly. And of course at the same time also keeping the quality high. This means that if I optimize the systems and keep optimizing them, then at some point a saturation point is reached and you are at the limits of the load capacity of man and machine. And navigating along that path and always maintaining the good optimum is a major challenge.

Now, Oli already teased it a bit earlier: this topic of whether I measure all parts or only take random samples in a high-volume production run involves residual risks. It’s always a trade-off – finding the optimum between residual risk and costs or productivity.

If I now look into a specific process a bit more concretely: I believe you also have your own series production in Bad Neustadt. Now, would that be an example where you have to … I don’t know, if we imagine that a component is inserted into a clamping device. Now there is the spindle, which monitors the whole thing accordingly there. What is the data here that is relevant to this process, to this monitoring? Do you have any insights from a particular process of yours, what’s being monitored?


In fact, we are using this Edge – the platform itself and one or two applications from Oli – in our electric motor factory in Bad Neustadt for series production. I can share a bit of insider knowledge here; there is a practical use case. A housing component of the electric motor – if you look at it from quite far away, it is rotationally symmetrical. That is, it doesn’t matter how I insert it into the fixture. But if you look a little closer, you’ll see that it’s still not one hundred percent rotationally symmetrical, and it matters HIGHLY how exactly it’s inserted into the fixture. Such a misplacement makes itself felt relatively quickly: if we look at the spindle current, i.e. the current of the tool-carrying spindle, we see either too much or too little material being removed during the first cut, and from that I can already recognize that something is not right. This is a so-called anomaly. And the more we know about it and the more our systems learn, the more conclusive they become. The basis for this is data. That is, we look at currents, we look at target values and actual values, we look at control deviations. And this is sampled with high precision and high frequency – no gaps, all timestamps correct. We know exactly where we are, whether in the roughing or the finishing process. We have contextual information: which tool was just in use? Everything the analytics need to be able to do exactly these evaluations.
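The spindle-current check Bjoern describes can be sketched as a simple envelope comparison against known-good runs. This is an illustrative sketch only – the function name, the k-value and the data are assumptions, not the actual Siemens implementation:

```python
import numpy as np

def detect_anomaly(reference_runs, new_run, k=3.0):
    """Flag samples where a new spindle-current trace leaves the
    mean +/- k*sigma envelope learned from known-good reference runs.
    All traces are assumed to be aligned and equally sampled."""
    ref = np.asarray(reference_runs, dtype=float)   # shape: (runs, samples)
    mean = ref.mean(axis=0)
    sigma = ref.std(axis=0) + 1e-9                  # avoid zero-width bands
    deviation = np.abs(np.asarray(new_run, dtype=float) - mean)
    return deviation > k * sigma                    # True where anomalous

# Example: a misplaced part removes too much material on the first cut,
# so the spindle current spikes above the trained envelope.
good = [[1.0, 1.1, 1.0, 0.9], [1.1, 1.0, 1.0, 1.0], [0.9, 1.0, 1.1, 1.0]]
flags = detect_anomaly(good, [1.0, 2.5, 1.0, 1.0])
print(flags.any())  # at least one sample lies outside the band
```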

Okay, I see. Oliver, how about you – we were just talking about series production. What is it like in single-part production? Do you have an example, and what data is relevant for you there?


I’ll say, in the case of single-part production – let’s take another look at the aerospace sector. A classic example, which we always like to look at, is the so-called landing leg. The landing gear of any aircraft consists of various components that can be very, very large, depending on the type of aircraft. And, as I said, these tend to be single-part production processes – they’re not produced non-stop. But you can imagine that such a milling process for such a landing leg – these are five-axis milling processes – takes a very, very long time. It can take up to several days until the component has actually been finished to the required quality, for example as specified in a certification.

And if something goes wrong somewhere in the process – for example, if there is an error in the part program or a foreseeable deviation from the target geometry occurs – then it can become very, very expensive under certain circumstances. First, I have to provide the blank again; that alone costs several thousand to several tens of thousands of euros, depending on the component. Then you also have to invest the time again and prepare the process. And let’s think about aircraft landing legs – highly stable titanium, a very difficult-to-machine material: tools can end up breaking very quickly there, too.

And that’s exactly the kind of process we look at beforehand. Bjoern has already hinted at it: we first execute the part program in the virtual world, record the data in the virtual world, look at the target and actual values – at least as they would be specified by the controller, i.e. the CNC – and we see virtually every influence, except of course the mechanical influence from the machine; we don’t know that, because we’re still in the virtual world. But you can still see a lot here, and you really have the possibility to simulate, with certain material-removal algorithms, what the component would look like if it were manufactured like this. I can then actually see in the virtual world flaws that I would see on the real workpiece. Of course, this allows me to optimize my process until I say, at least in the virtual world: now it fits for me. This sounds very elaborate and also costs time. But thinking of the classic rule of ten, it’s still cheaper than realizing too late that the part that really came out of my machine tool is qualitatively unusable.

Solutions, offerings and services - A look at the technologies used

Sounds super exciting, including the individual data aspects you can collect there. If we now look a bit – you have already mentioned this – at your solution: you have the platform as such with the corresponding ecosystem. What steps do I have to take to get there? Of course you also bring the competence with you, also historically, from your own factories. But if I start such a project now, or maybe I’m in the middle of it, what steps are necessary to deal with such an issue in the first place? Bjoern, how do you start?


At the beginning, it is important to take stock of the situation: first look at what kind of machines I have. Which control? How old are the machines? That way you first get a basic feeling for what is realistically feasible. I’ll put it this way: we can’t do witchcraft or magic either – although we try. And then, what does it take? The platform, the Industrial Edge for Machine Tools, and the application, which then works with the data, processes it and provides corresponding results.

You can think of it a bit like a smartphone and an application, but the whole thing is on an industrial level. And of course with maximum IT security. We know how sensitive the processes are and how sensitive the data is that accumulates on such a machine tool. That is the asset to be protected, and those are our main goals.

So, first take stock, then choose the platform and the application to go with it.


Just wanted to say – you had already mentioned at the beginning that there are probably some new machines. I don’t know if they already communicate via umati? I think that’s one of the common protocols. And then there are also a few old machines where you might still have an old control system, where you have to take a detour in order to have this continuous data recording at all. I think that’s the first step – part of what you meant by taking stock, right?


Yes, umati in particular is an exciting topic. New machines, under certain circumstances, already have umati on board. But especially for the old machines, it is relatively difficult to retrofit something like that in their present condition. And that’s where a platform like the Industrial Edge for Machine Tools comes in, because it offers the flexibility and freedom to retrofit such connectivity. That means I don’t have to intervene in the machine at all: there is our interface to the controller, and the umati functionality then runs on the Edge. It is practically a converter – old machine, old interface, connected to the Edge, converted to modern interfaces: OPC UA, the umati information model, or anything else modern, even where there may not be a suitable physical interface on the old machine at all.
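The converter role Bjoern describes can be sketched as a small adapter loop: poll the legacy interface, then republish the values under a modern information model. Everything here is hypothetical – the tag names, the mapping and the legacy read function are stand-ins; a real edge app would use an actual OPC UA / umati stack:

```python
# Illustrative sketch only: names and the legacy read function are
# hypothetical; a real edge application would use an OPC UA / umati stack.
import time

LEGACY_TO_MODERN = {
    "SP_CUR": "Monitoring.Spindle.Current",   # hypothetical mapping
    "AX_POS": "Monitoring.Axis.Position",
}

def read_legacy(tag):
    """Stand-in for a vendor-specific call against the old interface."""
    return {"SP_CUR": 4.2, "AX_POS": 120.5}[tag]

def poll_and_convert():
    """Poll the old interface and republish each value under a
    modern node name, together with a timestamp."""
    snapshot = {}
    for legacy_tag, modern_node in LEGACY_TO_MODERN.items():
        snapshot[modern_node] = {"value": read_legacy(legacy_tag),
                                 "timestamp": time.time()}
    return snapshot

print(poll_and_convert()["Monitoring.Spindle.Current"]["value"])  # 4.2
```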


Perhaps we can also use the buzzword “edge” to classify the process a bit more. This means that you are doing data preprocessing directly on the machine – which is common, because you are also handling large amounts of data here. How would you differentiate that a little bit? I mean, we’re talking about IoT here on the podcast, and if you think of IoT you also think of cloud – where cloud is a broad term; I can also store my data on a server. Here the classification would be that you’re on a server at the customer’s site, because these are tasks that have to be executed close to the machine – on an industrial PC installed at the machine, or in the customer network. Right? Can you classify it like that?


Yes, so that has several different aspects. First, the Edge is installed directly in the machine: the Industrial Edge for Machine Tools is an IPC that sits directly in the machine’s control cabinet. It does the first preprocessing of the many, many data points that come out of the system – from the machine tool in general. Then there can be a server infrastructure at the customer’s factory level – a database, maybe even a data lake – which is fed with, let’s say, aggregated data from the many machines, from the many Edges. If the network infrastructure allows it, this can also remain fine-grained raw data, collected accordingly, with the analyses and further evaluation done at the factory level. Trained algorithms can, and perhaps should, already work directly on the machine tool and communicate results directly to the machine operator – bringing latencies down further and giving direct feedback at the machine.


Great, thanks for the classification, because I get asked this all the time, with all sorts of buzzwords. Now we come to the next step, the application. Oli, this question is directed at you. If I have my series production – the use case that you presented – and would like to test the quality of my workpieces no longer manually, but would really like to look at the entire production process: you provide applications for this. First of all, which ones are they, and how does that work with the platform? Can you tell us more about that?


Yes, very much so. The application that you can use for this purpose has quite a long name: Analyze MyWorkpiece /Monitor. So the focus is on the workpiece that is being manufactured – in this case the series process – and /Monitor puts the emphasis on the fact that online process monitoring is taking place here. With Analyze MyWorkpiece /Monitor you ultimately act in three steps to realize such complete monitoring. The first step is to record reference data from my process. I already do that with the platform, so really with the Edge. This reference data then serves as training data later on. That is, I should try to run this reference process in such a way that it represents the good cases of my process. Then, when I’ve collected enough data, what happens is what’s called model training. The data is analyzed statistically and pre-cleaned a bit – we take a lot of work off the users’ hands here. Pre-cleaning really means that only comparable data is brought together: the process is cut out almost surgically, with an initial incision and a final incision. And we also transform the data. Now it’s getting a little detailed, but here’s the thing: we’ve found that the time domain is not a particularly good basis for comparing data, because sub-processes can simply take longer or shorter. So we transform the data from the time domain into the path domain. We no longer think in seconds, but in millimeters that the tool has traveled. Then a statistical model is derived, based on a k-sigma approach – think Six Sigma, for example. From that we construct a monitoring model, i.e. a tolerance band, and that is then activated. And now, every time such a component is manufactured again, the monitor automatically recognizes it.
So I don’t have to add anything to that. I don’t have to say, here comes batch x; the monitor recognizes this itself, checks whether the process is within this previously trained tolerance or not, and then also gives automated feedback accordingly.


Okay, that sounds super exciting too. You just said that the reference data is already being recorded in parallel, so to speak. That also ties in a bit – I recently read a statistic that 85 percent of AI projects fail because the quality of the data is not sufficient. In this way, you virtually ensure that this reference data is collected and also used as training data – can you understand it that way?


Yes, absolutely. I can only confirm what you just said, at least from gut feeling. In the development of Monitor, a lot of time really went into the question of how you even establish that comparability of data. If you listen to experienced millers, especially five-axis millers: even if I’m making the same part on the same machine, the fact is that I don’t always clamp the part, or my blank, in the same place on my table. It is sometimes a little off-center, sometimes a little more in one direction, sometimes in the other. These decentrations must be compensated via the so-called linear axes. As a result, the sub-processes suddenly no longer take the same amount of time. You can make as much effort as you want – you won’t manage it in the time domain. And then the worst thing that can happen to you with a system like this is that it keeps sounding alarms when the parts are actually fine. Or vice versa: the system says everything is fine, and yet the parts that come out the back are garbage. This can easily happen if the comparability of data is not given.


Yes, I think that is also a very important point. I find that really exciting, because it was also new to me that the assignment is no longer done via timestamps, but via millimeters traveled. Is this actually a tool-specific or a machining-specific issue? You’ve built that into Analyze MyWorkpiece specifically for this. Or do you have that with other use cases as well?


Let me put it this way: basically, wherever I have a process that allows me to represent it spatially, geometrically, in some form. Let’s move away from machining and think about additive manufacturing. In the end, I also have an end effector, or a nozzle, where my substrate is extruded, for example. Or in robotics: the bottom line is that something is moved through space with the help of six axes. The transformation into the path domain is natural in this case and makes the data invariant at least to temporal effects. That does make sense.


Exciting, yes. Okay. And if one or the other listener would like to go deeper into it: Your information is also stored in the show notes, so I think we can talk through one or the other use case with you before I ask too many questions. If I now think in terms of single-part production, Oli, what’s it like there? Do you have the same app for that or is that another one?


In the case of single-part production, it’s a different story. There the app is called /Capture, so Analyze MyWorkpiece /Capture – to start with really just step 1: record the data, and record it in good quality. Here, the main issue is ultimately three types of data. First we need the target values, then the actual values, and beyond that information that accompanies the process – for example the NC code: which NC code is being executed at all? We record that, plus a piece of metadata or two: which tool was in? What geometry did it have? So that the process can be completely reconstructed afterwards without having to ask anyone again. That’s what this /Capture application does: it simply records this data in a very convenient way. Afterwards I have the possibility – here we are now a bit away from the Edge platform as such, but still close to the machine tool – to visualize this data on the PC with an application called /Toolpath, so Analyze MyWorkpiece /Toolpath. And that’s where it makes sense to take the time: I load in the dataset and can apply color coding, for example, to my point cloud. The point cloud represents the movement of the tool in space. This means that as a process engineer or work scheduler, I can recognize my workpiece and work with color coding to visualize process variables. And I also have the possibility, as I mentioned earlier, to run this surface reconstruction algorithm.
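The color-coding idea Oliver mentions – painting a process variable onto the toolpath point cloud – can be sketched with a simple value-to-color mapping. The function, the blue-to-red scale and the data are illustrative assumptions, not the /Toolpath implementation:

```python
def color_code(points, values):
    """Attach a simple blue-to-red color to each toolpath point,
    scaled by a process variable (e.g. spindle current)."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0                # guard against constant data
    colored = []
    for (x, y, z), v in zip(points, values):
        t = (v - vmin) / span                  # normalize to 0.0 .. 1.0
        rgb = (int(255 * t), 0, int(255 * (1 - t)))
        colored.append({"xyz": (x, y, z), "rgb": rgb})
    return colored

cloud = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]      # tool positions in space
current = [1.0, 3.0, 2.0]                      # process variable per point
colored = color_code(cloud, current)
print(colored[1]["rgb"])  # (255, 0, 0): highest load shown in red
```

Rendered over the full point cloud, this lets the process engineer literally see where on the workpiece the load was highest.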


Very exciting. Now maybe a question addressed to you, Bjoern: we are talking about a lot of data, high-frequency data. As you said, this is a classic case for the edge – it runs on the edge. What is actually the secret to being able to handle these huge amounts of high-quality data at all, as we have just learned? What’s the secret from your perspective?


Yes, well, of course we don’t give away secrets. But basically, we have created an interface between – primarily, of course – the Sinumerik CNC control and our Industrial Edge for Machine Tools that functions with high performance, high availability and without interfering with the process. That is, there is some post-installation, but it’s minimally invasive, as they say. The additional task we give the control system – namely, please supply us with as much data as possible, in maximum quality and permanently – must not lead to the entire machine and the entire process being disturbed in the long term. That is one of our goals: the process on the machine remains untouched as far as possible. The other thing, of course, is the data quality itself: gapless. You could, of course, protect the process and say, let me leave out every fifth packet – we don’t do that. So the second big goal is: every data point is delivered and made available to the apps. That’s what you brought up earlier – data quality that is not yet sufficient for AI. That’s where we want to go. We know what AI needs, and we try to deliver that quality throughout the chain, from the very first interface. And then, of course, the whole thing is embedded in an overall system that takes the annoying issue of IT security off the user’s hands. Because that’s our third pillar: out-of-the-box default security.


Very good. Maybe two last questions about that. One would be: you talked about an ecosystem – why do I need this? That would be the first question. And Oli, maybe I’ll ask you afterwards: it’s also about the knowledge of the employees – how do you bring that into this process, also from the AI side? But Bjoern, back to the first question: ecosystem – why do I need this here and what is it in this context?


Our system works with an industrial PC. It is managed by a management system running in the cloud, which is the secure source of all components, interfaces and applications that are installed on the device – again the analogy to the smartphone, but at an industrial level. We call the whole thing an ecosystem, not least because it also provides infrastructure for applications from other manufacturers and providers. So it’s not just that we as Siemens operate the platform and only we build the applications – no. We also provide a development tool, a so-called SDK, with which every customer can build their own applications and, in the medium term, sell them to others via the Marketplace infrastructure. That’s what we call an ecosystem – an entire environment.


Very nice. Got it. Now is also the perfect transition to you, Oli. The bottom line is that employees are now perhaps also developing their own applications or contributing their know-how. After all, that’s one of the key components I need in order to train this model, isn’t it? That you also use the knowledge of the employees who are simply the experts in their field, right? How does this come together?


Yes, absolutely. Let me put it this way: the impression should not arise that such a platform and such a system, as Bjoern has now presented it together with me, replaces people. There will always be a need for an expert, or a group of experts, who are able to interact with this system – teaching the system what is a good case and what is a bad case. To put it very bluntly: there are already AI algorithms today that can recognize anomalies in a very simple technical system – say, an escalator or something similarly simple, where I have a motor or a group of motors making a synchronous movement. You can imagine that when I press this magic button, a system says: okay, look, here, last Monday the current was somehow too high, something might be jammed. But in the manufacturing environment in which we operate, especially in the area of machine tools, process engineers are confronted with such complexity that a system like this will probably not be able to handle it with a magic button within the next ten years. So it always needs people who then teach the system: look here, this is now a bad case and these are the causes. The causes can be manifold. That is just the one approach. The other approach: if we’re talking about the Monitor application, it’s still relatively manageable, because someone has to tell the system what is a good workpiece and what is a bad workpiece. But when it comes to pouring this IMPLICIT knowledge from experts into the form of an app, then of course a lot more work, collaboration and preparation is required, yes.


Very nice. That means, Bjoern, if I now have an app that I want to offer, I can also offer it in the Marketplace in the future, or perhaps already today, or bring it into the ecosystem.


That’s right, that’s where the journey is headed. I must honestly say, not yet today. But that is the medium-term goal.


Right. I think that will continue to develop over the next few years. There are many apps, or many are now starting to build them, and among the use cases – we see this with our own as well – I think about 20 to 30 percent are standardizable and thus scalable and broadly applicable. And I believe this will lead to applications that can be brought into marketplaces such as yours, in the future but perhaps also today, so that this knowledge can be used across other trades.

Results, Business Models and Best Practices - How Success is Measured

Perhaps very briefly at the very end about the business case. I’m always asked: bottom line, what have we accomplished now? So I may want a return on investment – I don’t know if we can go that deep right now. But the bottom line is that I always want to save costs in my process. Or, if we take the topic of the app, perhaps even generate new revenue on the sales side. Oli, maybe a question for you: Do you have an example calculation for such a use case, what’s the bottom line? Are there any reliable key figures?


Yes, so I can’t provide figures calculated exactly in euros right now. But what we have seen, for example, in several customer projects and pilots – where ROI calculations were also carried out jointly so that our customers could justify the investment to their management: you can quickly reach five-digit amounts per machine, per shift. That’s definitely possible. It depends, as I said, a bit on the value of the component and the complexity of the process. Implementing such an application does involve initial expenses, no question, especially if I’m doing it for the first time. But once it has settled in, such a system can pay off within a year, a year and a half at the latest.

Yes, well, in the case of single-part production: if I imagine that I have saved an unmachined blank worth, let’s say, 20,000 to 30,000 euros – that’s how expensive they can be in certain industries – and on top of that saved 20 roughing tools, then it’s relatively easy to calculate what amounts can be saved at the end of the day, if you recognized beforehand that THIS part program would have produced scrap, that there would have been a defect or flaw.
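The back-of-the-envelope arithmetic behind this can be sketched as follows. The blank price is taken from the range mentioned in the conversation; the per-tool price and the system cost are purely hypothetical assumptions for illustration, not figures from Siemens:

```python
# Rough, illustrative ROI sketch for catching a faulty part program
# before machining starts (single-part production example).
# Only the blank price range and tool count come from the conversation;
# the tool price and system cost are hypothetical assumptions.

blank_cost_eur = 25_000        # unmachined blank (20-30k euro range mentioned)
roughing_tools_saved = 20      # tools that would have been worn out on a scrap part
tool_cost_eur = 300            # hypothetical price per roughing tool

savings_per_avoided_scrap = blank_cost_eur + roughing_tools_saved * tool_cost_eur
print(f"Savings per avoided scrap part: {savings_per_avoided_scrap:,} EUR")

# If such a defect is caught, say, once per month, and the system cost
# (hypothetical) is 60,000 EUR, a payback period follows directly:
system_cost_eur = 60_000
months_to_payback = system_cost_eur / savings_per_avoided_scrap
print(f"Payback after about {months_to_payback:.1f} months")
```

With these assumed numbers a single avoided scrap part already saves 31,000 euros, which is consistent with the "pays off within a year" statement above even at much lower defect rates.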

I think the important thing, with all these ROI considerations, is: you have to start somewhere. Of course, you can always weigh things up, hesitate and calculate. But the important thing is that you just get started, get to know the system, overcome initial reservations and simply take the first steps. This results in economies of scale that ultimately simplify the expansion of such a system.


Very nice. Bjoern, did you want to add?


Yes, I wanted to add something to that. We also get requests that are, of course, a bit born out of necessity: the incidents had ALREADY happened, the truck came back with pallets of manufactured parts, and THEN the costs are simply there. In a situation like that, it’s also about prevention. Fortunately, it may not have driven the company into bankruptcy, but you just want to avoid it in the future. And companies are looking for systems that help avoid these situations going forward. That’s where something like this comes in.


That was a nice note to end on. I think the processes may still differ, but you have shown very well what potential there is. I always like such concrete KPIs; I assume you also have insights internally from Bad Neustadt, but of course from the individual customer projects as well. So thank you already for those insights.

Transferability, scaling and next steps - Here's how you can use this use case

And then I would say, at the very end, Bjoern, do you have a best practice, a hint, something that you can share with the listeners? Where you say, this is what you should be looking out for when you’re doing this? Last question for today.


Yes, so if we look at this whole process – I want to digitize my machine tool – and take into account the experience we have already gained with our customers, then it is advisable to involve the IT department right from the start, that is, from the first idea of doing something like this. Because it is a networked system. It is a system that communicates with a cloud. And there are still relatively frequent difficulties in the coordination, to put it mildly. So it’s advisable to really involve IT from the beginning. Because it’s just something new, which also presents the shop floor with new challenges.


Very nice. Perhaps briefly summarized again: I found it super exciting that we could discuss things so concretely today. We covered quite a bit, especially around training the models, the reference data, and data quality, which plays a significant role. And what I have become aware of once again: I think a lot will happen in the next few years in the area of employee skillsets. Many are experts in their own processes, and in the future these decisions will be made much more data-driven. So that I can make a decision based on data, which may make things easier for me and lets me develop my skillset in a different direction. I think that’s another thing I found super exciting. Thank you very much for presenting these two use cases – really exciting. Maybe we’ll hear from each other again with other use cases. Do you have any closing words? Otherwise, thank you for your time and the exciting session.


Many thanks also from my side. We always say Happy Edging. Maybe it will work. 


Happy Edging, okay. I’m recording this; there’s a post on LinkedIn about this. Very good. Thanks also to you Oli.


Yes, thank you also from my side. Closing words: exciting times we live in. In my opinion this is really a kind of new departure, and I think in a few years – 10 or 20 years – we’ll look back and say: Man, that’s where it started back then! It was still so new, and then it became established on a broad scale.


Yes, there will be a lot going on. I think industry is always about two or three years behind the consumer environment; you learn a lot from that, and I think a lot will still happen there. Very nice. Thanks for the session and have a great week.


Thank you, likewise. Ciao. 


Thank you, same to you. Ciao. 



Questions? Contact Madeleine Mickeleit

Ing. Madeleine Mickeleit

Host & General Manager
IoT Use Case Podcast