

Material flow simulation: Fact-based decision-making in the production environment



Listen to the IoT Use Case Podcast on Spotify.
Listen to the IoT Use Case Podcast on other platforms.

IoT Use Case Podcast #85 - ITK Engineering

Can this production order be routed from one machine to the other? Why does the acquired automated guided vehicle system not yet bring a relevant increase in productivity? Are three or four machines sufficient for the present order portfolio for production? Data-based factory simulations can optimally answer questions like these. How? That’s what episode 85 of the IoT Use Case Podcast with ITK Engineering is about.

Episode 85 at a glance (and click):

  • [04:58] Challenges, potentials and status quo – This is what the use case looks like in practice
  • [15:57] Solutions, offerings and services – A look at the technologies used
  • [23:41] Results, Business Models and Best Practices – How Success is Measured

Podcast episode summary

Jens Hetzler (Expert Engineer Factory Simulation), representing the development service provider ITK Engineering, tells us which use cases they bring from the field of production and logistics and how questions such as those mentioned above are developed into recurring simulation products.

Use Case 1: Harmonization of the IT landscape
Use Case 2: Test benches
Use Case 3: Factory and material flow simulation

For example, it is about decisions in the production environment that are made consciously and based on comprehensible data and facts. By mapping the entire value chain as digital twins, the so-called bottlenecks are also identified. Bottlenecks are processes or resources that represent capacity limits for the overall system.
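The bottleneck idea described above can be illustrated with a minimal capacity calculation. This is a sketch under stated assumptions, not ITK's actual tooling: the station names, cycle times and machine counts are invented, and a full material flow simulation would of course model far more than static capacities.

```python
# Illustrative sketch: finding the bottleneck of a serial production line
# from per-station cycle times. All names and numbers are made-up examples.

def bottleneck(stations: dict[str, dict[str, float]]) -> tuple[str, float]:
    """Return the station with the lowest effective capacity (parts/hour).

    Each station is described by its cycle time in minutes per part and
    the number of parallel machines at that station.
    """
    capacities = {
        name: s["machines"] * 60.0 / s["cycle_min"]  # parts per hour
        for name, s in stations.items()
    }
    name = min(capacities, key=capacities.get)
    return name, capacities[name]

line = {
    "laser_cutting": {"cycle_min": 3.0, "machines": 2},  # 40 parts/h
    "bending":       {"cycle_min": 5.0, "machines": 2},  # 24 parts/h (limit)
    "welding":       {"cycle_min": 2.0, "machines": 1},  # 30 parts/h
}
station, cap = bottleneck(line)
print(f"bottleneck: {station} at {cap:.0f} parts/hour")
```

The station with the lowest parts-per-hour capacity caps the whole line's throughput, which is exactly what the simulation makes visible in more complex, dynamic settings.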

ITK Engineering is a 100% Bosch subsidiary and serves customers from a wide range of segments. Many customers are manufacturing companies, such as the machine tool manufacturer TRUMPF. ITK acts as a kind of all-in-one development service provider and pursues a "white box" approach: when source code is developed for a customer, for example, the solution belongs to the customer in its entirety.

Podcast interview

Whenever manufacturing companies ask whether orders can be routed from one machine to another, why an acquired automated guided vehicle system has not yet delivered any increase in productivity, or whether three or four machines are sufficient for the order portfolio at hand: this is precisely when simulation expertise is needed.

Today Jens, representing the development service provider ITK Engineering, explains to us which use cases they bring from the field of production and logistics and how exactly these simple questions are transformed into recurring simulation products, i.e. questions that can be easily answered by data over and over again. 

Before I anticipate any more exciting insights from this episode, I’d say let’s jump right in.

Hello Jens! Great to have you with us today and welcome to the IoT Use Case Podcast. How are you and where are you at right now?


Thank you, Madeleine, for the invitation; I'm doing well! I'm in my home office right now, where I work three to four days a week; otherwise I'm in the office. Since Corona, working from home has become firmly established.

Where is your home office?


Near Landau in the Palatinate, which is north of Karlsruhe.

Karlsruhe, that’s familiar to me. Glad to have you with us today. Perhaps we’ll start with a brief introduction to ITK. Many know you from the BOSCH context, but you are well known in that environment as well. 

As I understand it, you work on customer-specific, above all individual development projects in the areas of software development, embedded systems, and also control technology. What makes you special, or how I've come to know you, is that you are a kind of all-in-one development service provider. That means, for example, when you develop source code, the solution belongs to the customer in its entirety.

You are practically pursuing a kind of "white box approach", you could say. You are, so to speak, empowering the customer so that they can carry on with everything on their own.

You are a 100% subsidiary of Bosch and your customers come from very different segments: automotive, aerospace, a lot of industry, which is what we are talking about today, but also rail, medical technology and motorsport. You are very broadly positioned.

You are an Expert Engineer for Factory Simulation at ITK Engineering. What exactly does your department do and which clients does it work with?


At ITK, we have a matrix organization. On one axis, we have business units, for example industry, rail, medical technology, and so on; on the other, we have specialist areas, departments and teams. This matrix organization helps us implement customer-specific solutions across the board. In the Industry business unit, we can draw on various teams, for example the factory simulation experts, which includes me. We also have access to cloud expertise, high-end visualization and security. That's what sets us apart. I'm in the Industrial Systems department, which dates back a little further: it did a lot of test bench development, has since moved towards Industry 4.0, and factory simulation is one topic within that.

That sounds very industry-heavy. That means that you primarily have industrial customers or manufacturing companies, or who are the classic customers you work with?


These are classically manufacturing companies – one large customer we have in this area, for example, is “TRUMPF” as a machine tool manufacturer. These are also medium-sized companies that have a production facility; that’s who we are targeting.

Do you also work with partners or is this an all-in-one ITK engineering solution that you’re putting out there?


I wouldn't call it a partner, but we are platform- and tool-independent. When a customer approaches us, most of the time they have some kind of problem that they want us to solve. Based on the requirements they want in the implementation, we choose the technology and the toolchain. In the area of factory simulation, we are not classically bound to a tool such as "Siemens Plant Simulation" or "AnyLogic"; we choose based on customer requirements.

Challenges, potentials and status quo – This is what the use case looks like in practice [04:58]


Which use cases do you generally cater to in your department or in the IoT environment, and which ones are we looking at in detail today?


The first use case, which we see very often with customers, is harmonization of the IT landscape. Customers have a very heterogeneous IT landscape that has simply grown historically, and they want to restructure it to avoid system breaks and to enable communication between the individual systems.

The second use case is our department: we develop test benches, both the software and the hardware, also whitebox-specific. This means that we also deliver the source code of the test benches. The test benches we develop are then very customer-specific. These are typically one-off solutions; they are produced once and that’s it.

The third use case, which we are also looking at today, is factory and material flow simulation: simply for the continuous optimization of manufacturing processes and, above all (this is very important), for fact-based decision-making. Decisions are often made based on expert knowledge or on really simple Excel sheets. Material flow simulation is an immense help in representing complex production processes and finding the right decision based on them.

The other examples you gave are presumably handled within that matrix in a different department, or is that something that also comes across your desk?


I used to do test benches! That’s where I come from a little bit. Other colleagues are currently doing this, sometimes via other departments.

Let’s dive a bit into the challenge from your customers. What challenges exactly do you want to solve for your customers with the help of material flow simulation?


Perhaps we first need to specify the term "customer" a bit. In principle, our use case can be split into two sub use cases. First, the solution provider: typically a mechanical engineering company that builds machines and can use material flow simulation to demonstrate the performance of its own products or machines. One customer we have in this area is TRUMPF, which makes machine tools.

What these customers actually want is, in principle, a kind of construction kit. We would call it a simulation kit, where the customer’s machines and machinery are mapped as digital twins, enabling the customer to make its capabilities available to its customers with the help of simulation.

The second sub use case is the solution user. These are the classic factory operators who have a production line or even an entire production facility. They have a static system, their shopfloor, that they want to examine, and based on that shopfloor they want optimization solutions.

What are classic challenges from providers? What are challenges from your customers in everyday life?


The classic case is that a customer wants to expand their production, or first wants to find the bottleneck, and then asks questions like: how do I dimension a storage system, or how do I dimension the AGVs, i.e. how many AGVs do I need to optimally supply my machines?

Also production planning: How can I optimally plan my production so that I produce just in time? These are the classic challenges.

This then actually goes further, because to answer these questions, you first need data. Getting that data is often a challenge in itself for customers. Typically, you don't have a harmonized IT landscape; you have to address different systems. You need data from the MES, from the ERP system and partly also from the PLC. If the machines are old, the data is sometimes not available digitally at all; to obtain cycle times, you literally need a stopwatch at the machine to see how long the process takes. This is input data that I need for the simulation.

What about the machine builder or the solution provider? What are the classic challenges in this environment?


It's similar there. It's always about optimizing production, and for the solution provider the question is whether three machines of this type are enough for the customer to carry out their production or process their order portfolio, or two, or a completely different machine entirely. The system is not as static as with the solution user, because the solution provider cannot customize the shopfloor; the customer ultimately wants to buy machines.

What questions do your customers have that require an expert to answer? What are the tasks here? Because the bottom line is that such a simulation is probably very costly, takes a very long time, and you have to have the data available somehow.


A classic issue focuses on optimization, but where the simulation comes in is, for example, when a production planner comes to work in the morning and a machine is defective. What should they do now, how should they act? How do they have to route the orders to other machines in order to continue producing on schedule? Then, classically, they're on the way to a simulation that can answer that question.

With this issue, you quickly reach the limits of classic factory simulation, because we always get the same feedback from customers: "The simulation is great!" But first, it takes too long, and second, it is too expensive.

In this example, too long means that when the factory operator comes to the factory in the morning, they need the answer to the question of what to do now that their machine is defective; ad hoc, of course, i.e. within a few minutes. In that case, it does no good to go and set up the simulation, read in the data, run the simulation and check the results; it's all far too slow. What we always have as a project goal is to present the simulation as a product, an automated simulation. That ultimately means minimizing what I'd call the time to set up the simulation, getting down from days to hours and minutes. Second, it means eliminating the need for a simulation expert to perform the simulation, simply because the simulation expert is typically not available when a simulation is needed.

When you have achieved this, you have reached a great goal, because you can transfer the simulation into daily operations, i.e. use it for recurring questions. You don't just have this "one shot": the classic case of setting up a one-shot simulation to answer a specific question; the question is answered and then the simulation ends up in the trash because it is no longer needed. That's exactly what we want to get away from: making it reusable so that it can transition into day-to-day operations.

The experts who have done the simulations so far can then develop themselves and their skillset over time and perhaps take on other tasks. Do all companies have such a simulation expert in house?


That is precisely what is not the case, and that is the decisive factor. Typically, they don't have one and have to check with a central department first to see if they can get someone, or have to hire service providers such as us. That is not actually the goal; the manufacturing companies should in principle be able to do this autonomously. The person closest to production, typically the production planner, should be able to answer the questions directly with the help of the simulation. That is the best case.

How can data availability be improved?


Let's use an example of how the data problem is solved in a one-shot simulation: typically, you export from the MES system and from the ERP system; these are then CSV files.

Can you say again very briefly what One Shot means? Simply a simulation?


Exactly: a simulation that is used to answer one question and is then no longer needed.

You then have this data from different systems: CSV files, JSON files from the PLC, and you have to parse them first. In a one-shot simulation, you write yourself a Python script that prepares the data so that you can use it in the simulation. With an automated simulation, or simulation as a product, this is handled differently: there, you try to integrate the simulation into the customer's IT landscape. Data is still retrieved from the same systems, but in an automated way. You no longer need a user to run the scripts and parse the data "manually"; it happens automatically in the background.
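The one-off data-preparation script mentioned here might look something like the following sketch. The file contents, column names and output format are illustrative assumptions, not ITK's or any MES vendor's actual interfaces.

```python
# Hedged sketch of a one-shot data-preparation script: merge an MES order
# export (CSV) with PLC cycle times (JSON) into records the simulation reads.
# All field names and values are hypothetical.
import csv
import io
import json

def prepare_orders(mes_csv: str, plc_json: str) -> list[dict]:
    """Join MES orders with per-machine cycle times from the PLC export."""
    cycle_by_machine = {m["machine"]: m["cycle_s"] for m in json.loads(plc_json)}
    orders = []
    for row in csv.DictReader(io.StringIO(mes_csv)):
        orders.append({
            "order_id": row["order_id"],
            "machine": row["machine"],
            "quantity": int(row["quantity"]),
            "cycle_s": cycle_by_machine[row["machine"]],
        })
    return orders

mes_export = "order_id,machine,quantity\nA-100,laser_1,50\nA-101,bend_1,20\n"
plc_export = '[{"machine": "laser_1", "cycle_s": 180}, {"machine": "bend_1", "cycle_s": 300}]'
print(prepare_orders(mes_export, plc_export))
```

In the one-shot case a user runs a script like this by hand before each simulation; the automated variant discussed next moves exactly this step into a background service.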

What are the technical requirements for automating this simulation? What are your customers' requirements here?


An important point is that such projects with an automated simulation must no longer be regarded as a pure simulation project, but as classic software development. You need knowledge across different technology boundaries; clearly, simulation tools. You need the data expert, data analytics, you need a cloud expert if the whole thing is going to run in the cloud and you also need security. It is precisely this broad methodological expertise that will be provided by the individual specialist teams at ITK.

Solutions, offerings and services – A look at the technologies used [15:57]

How exactly do you set up such a simulation and what are the questions asked by a customer in this context?


Typically, you would start with a meeting at the customer's site to first get a detailed overview of the customer's manufacturing operations. That's usually an on-site workshop where you can also do a plant tour. I always find it very useful to have seen the system I am simulating in person, because that gives you better insight. The goal of the workshop is to define the requirements: what should be achieved with the help of the simulation? And then, of course, you try to get into implementation quickly, for example with a proof of concept, to get visible and reusable results relatively fast. What's quite decisive is simply the visibility provided by the visualization.

It is always a great wow effect when the customer sees this 2D and 3D visualization of the simulation of their own plant live. This is always very nice and, above all, it generates confidence in the results of the simulation. For the customer, it is no longer a black box where you "punch in" data and data comes out at the end that is supposed to tell you where the bottleneck is; the customer believes the results and sees: what is being done is valid! Because you have the whole thing in a 2D or 3D visualization. The proof of concept is ultimately nothing more than a one-shot simulation, with the caveat that you have to make sure the architecture is reusable, extensible and maintainable so that you can eventually integrate it.

A question that you answer, for example, would be that the customer says: I want to route orders to another machine, or I want to find out whether it takes three machines of this type or four. But that is first of all just an issue, a requirement.


Exactly, that is a requirement. And once you have this proof of concept and it's convincing, then you get to work on the simulation as a product, on automating the simulation by integrating this PoC into the customer's IT landscape. How this is done is completely customer-specific: you can connect everything in the cloud, or run everything on one computer.

There are also customers who say, for example, cloud-first strategy: we want to route it all directly into Microsoft Azure, AWS or Google Cloud. Or you host it on an on-premises server if it's a manufacturing operation, for example.


The cloud has a certain appeal, of course; it's always available. The simulation then typically runs in the browser.

How does this data processing work in the simulation as a product, which is what you are developing there?


The key is to have a service in the background that takes this data. You need interfaces to the systems, for example a REST API on the MES system.

An open interface, virtually.


Open interface, exactly! Ultimately, what happens in the background is that a script is run to get the data into the appropriate format, which the simulation can then read in. It's nothing different from what you do manually in the one-shot simulation.
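The background service idea can be sketched as follows. The endpoint URL and payload shape are hypothetical, and the HTTP call is stubbed out so the flow stays self-contained; in a real deployment it would be an actual request to the MES's REST API.

```python
# Sketch of the automated variant: a background job fetches fresh data over
# the MES's REST interface and writes the simulation input without a user
# running scripts by hand. URL and record fields are assumptions.
import json

def fetch_mes_orders(url: str) -> list[dict]:
    # Stub standing in for a real HTTP call (e.g. requests.get(url).json());
    # returns a fixed illustrative payload.
    return [{"order_id": "A-100", "machine": "laser_1", "quantity": 50}]

def build_simulation_input(url: str) -> str:
    """Transform fetched MES records into the (assumed) JSON format the
    simulation reads, exactly as the manual one-shot script would."""
    records = fetch_mes_orders(url)
    sim_input = {"orders": records, "source": url}
    return json.dumps(sim_input)

print(build_simulation_input("https://mes.example.local/api/orders"))
```

Scheduled to run before each simulation (e.g. every morning), a job like this removes the manual parsing step entirely.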

What is it now that you are building? What makes this special at this point?


The special thing is that you can use this for recurring issues. You can run this simulation every morning to see how you need to route your orders today. You can't do that with a one-shot simulation, because there you need a simulation expert and you need to set up the simulation; it all takes far too long. And with these recurring issues, you then have the opportunity to intervene in the system relatively quickly and thereby optimize your production directly, not just in two or three weeks.

Recurring issue: can you give an example again? For example, if I want to find out for a customer whether they need three or four machines, what would be a recurring issue here?


That would be a solution provider case for us. The solution provider often has the following issue: a customer approaches them wanting to plan a new production line. How many machines do they need? They have a certain order portfolio, which is the input data, and it has to be processed within a certain time interval. How many machines do you recommend so that I can manage this in that time interval? And this is a recurring question for the solution provider, because these are daily matters.
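The sizing question can be illustrated with a back-of-the-envelope capacity check; a real material flow simulation would account for routing, setup times and disturbances, but the shape of the question is the same. All numbers here are made up.

```python
# Illustrative sketch of the recurring sizing question: given an order
# portfolio and a time window, how many machines of one type are needed?
# A static capacity calculation stands in for the full simulation.
import math

def machines_needed(orders: list[tuple[int, float]], window_h: float) -> int:
    """orders: (quantity, cycle time in minutes per part) pairs.
    Returns the minimum machine count whose combined capacity covers
    the total workload within the time window."""
    total_min = sum(qty * cycle for qty, cycle in orders)
    return math.ceil(total_min / (window_h * 60.0))

portfolio = [(200, 3.0), (150, 4.0)]   # 600 + 600 = 1200 machine-minutes
print(machines_needed(portfolio, 8.0))  # 1200 min of work in an 8 h shift
```

With 1200 machine-minutes of work and 480 minutes per machine and shift, three machines are the minimum; the simulation answers the same question while also capturing dynamic effects the static estimate misses.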

They don’t want to have the question answered in three weeks, they want to answer it as ad hoc as possible to impress the customer.

That's sort of a structure you create where parameters change but the basic question remains. For the producing company: how many orders can I route to another machine? A few parameters change there too, but the question remains similar in the end.


Exactly, and there are many such questions. A machine can fail, plants can fail, especially nowadays: production is becoming more complex and material is not always available. What do I do now? Which orders can I prioritize, and which can't I?

And what do I then do with the data resulting from the simulation? Where does it go? You just mentioned a browser where it is evaluated.


Yes, it is data, but above all visualization. That can be high-end in a game engine; sometimes 2D is sufficient. It is customer-specific and customer-dependent.

What does a game engine mean at this point?


It looks like a 3D computer game.

So really high-frequency data, where large volumes of data are processed into a 3D image.


That’s right, it’s a polished simulation, so to speak. You can also integrate that with augmented reality. A lot is possible then!

There are also dashboards, for example, to display specific KPIs. The KPIs are often the same: throughput, i.e. how many parts per hour, per day or per shift do I produce? Machine utilization and worker utilization: how many workers do I even need to keep my production running? How big does my storage system need to be, how many shelves do I need in a high-bay warehouse? These are all typical questions.
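The KPI evaluation behind such a dashboard can be sketched as follows. The event-log format and field names are assumptions for illustration; a real simulation tool would export richer logs.

```python
# Sketch: computing the KPIs named above (throughput, machine utilization)
# from a hypothetical simulation event log of finished parts.

def kpis(log: list[dict], shift_h: float) -> dict:
    """Throughput in parts/hour and per-machine utilization
    (busy time divided by shift length) from finished-part events."""
    parts = len(log)
    busy: dict[str, float] = {}
    for ev in log:
        busy[ev["machine"]] = busy.get(ev["machine"], 0.0) + ev["duration_min"]
    utilization = {m: t / (shift_h * 60.0) for m, t in busy.items()}
    return {"throughput_per_h": parts / shift_h, "utilization": utilization}

shift_log = [
    {"machine": "laser_1", "duration_min": 3.0},
    {"machine": "laser_1", "duration_min": 3.0},
    {"machine": "bend_1",  "duration_min": 5.0},
]
print(kpis(shift_log, shift_h=8.0))
```

The same aggregation works for worker utilization or storage occupancy; only the event fields change.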

The circle closes here, because you teased this at the beginning and have now discussed it in more detail. It's very exciting to see what possibilities there are in such a wide variety of use cases from your customers.

Results, Business Models and Best Practices – How Success is Measured [23:41]

What is the business case for these two customer segments, i.e. for solution providers and solution users?


The business case is the same for both. In principle, factory simulation is first and foremost about optimizing production processes and thus saving costs or increasing productivity. The return on investment depends entirely on the customer and their optimization potential. On the other hand, you can also ask the question the other way around: what happens if you don't perform a material flow simulation before larger investments? An example would be a factory operator who buys two new AGVs that cost a lot of money because they think the AGVs are their bottleneck in production. After startup, they find their throughput hasn't increased at all; the bottleneck was misidentified, because they thought the AGVs were the problem when in the end they weren't. Then you have a very bad investment, and exactly this bad decision could have been avoided if a simulation had been done beforehand, because it would have shown that the AGVs are not the bottleneck.

Of course, as you say, that’s the eureka effect when I suddenly see things like that and bring experts on board who look at things differently and question what’s happening.


And the more complex the production becomes, the more difficult it then becomes to make the decisions correctly, because you can no longer keep track of it.

What are things we can look forward to in the future? What are you developing right now? What are things that you want to share with our listeners?


We hope, of course, that simulation and especially simulation as a product, i.e. automated simulation, will continue to prove itself and become more widely accepted, because we are seeing precisely these points: Production is becoming more and more complex, especially in today’s times where material availability is becoming more and more difficult. And that’s where it simply helps to find a fact-based decision that gives you quick solutions to highly dynamic issues, such as a machine breaking down unexpectedly. How do I have to change my production planning in order to continue producing optimally? These are the things we hope for and where we also see very high potential.

That's probably constantly evolving. You also cover the topic of harmonizing the IT landscape. As soon as I integrate different data silos and handle data, it ultimately becomes a holistic issue that needs to be driven and developed somewhere. That's exactly where it all interlocks: you need a sensible IT architecture to make data available and usable in the first place.


This makes it much easier to create the simulation if the data is already available in a structured form, i.e. if you have a harmonized IT landscape.

It was really exciting today to understand concretely what the challenges are, using the example of solution providers and users. It was very nice to understand where you are active and what you do; thanks for that!

I will link the contacts to you in the show notes. I would love to hear back with an update! 

Thank you very much! Until next time!

Please do not hesitate to contact me if you have any questions.

Questions? Contact Madeleine Mickeleit

Ing. Madeleine Mickeleit

Host & General Manager
IoT Use Case Podcast