In Episode 177 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit speaks with Soroush Khandouzi, Cloud Solution Engineer at KNF, and Florian Stein, Domain Lead for Cloud Transformation and Data Infrastructure at b.telligent.
The focus is on a joint IIoT project for pump lifetime monitoring, showing how traditional mechanical engineering companies are using intelligent data to future-proof their products – from edge integration to a scalable cloud setup.
Podcast Summary
Lifetime monitoring, predictive maintenance, and edge integration – how KNF is driving digitalization in mechanical engineering
This episode explores a real-world digitalization project by pump manufacturer KNF, developed together with IoT partner b.telligent. The goal: replace manual testing and documentation with an automated system for long-term pump monitoring – powered by an edge-to-cloud architecture based on Azure IoT and custom-built Data Acquisition Controllers (DACs).
The challenge:
Until now, key parameters like pressure, temperature, and current were recorded manually – sometimes daily, and over several years. With four production sites worldwide, fragmented systems made consistent evaluation nearly impossible.
The solution:
A scalable IoT infrastructure built on Azure IoT Edge, near-real-time data transmission, a burst mode for high-frequency measurements (up to 10 kHz), and visualization in Grafana. In addition to automating centralized testing for more than 1,500 pumps, the system enables cross-site monitoring, AI-driven analysis, and predictive maintenance.
The key insight:
Data is not just collected – it’s made actionable in real time, enabling faster development cycles, higher product quality, and entirely new service offerings.
This episode is a must-listen for anyone looking to scale IIoT projects – from R&D to testing and production.
👉 Tune in and discover practical best practices.
Podcast Interview
Today’s episode takes us to Freiburg im Breisgau to visit KNF. At KNF, everything revolves around pumps — developing and producing them. But one thing is clear: manufacturing pumps alone is no longer enough. Evolving customer demands call for more, especially data. And data is where the real value lies.
But what kind of data exactly? And how does a long-established company like KNF embark on its IoT journey? What does the technical implementation look like? Are they using Azure IoT Edge, IoT Hub, or other technologies?
And of course, my favorite question of all — what are the actual use cases behind it?
We’re exploring all of these questions today with Soroush Khandouzi, Cloud Solution Engineer at KNF, and Florian Stein, Domain Lead for Cloud Transformation and Data Infrastructure at b.telligent. b.telligent is the implementation partner for this project.
As always, you’ll take away valuable best practices for your own projects. We’ll also talk about failures, lessons learned, and what you can apply in your own context.
All details about this and other implementations can be found at iotusecase.com and in the show notes.
And with that — have fun, let’s get started. Enjoy the episode.
Hi Florian, hi Soroush. Great to have you here. Florian, I’ll start with you — how are you today, and where are you joining from?
Florian
I’m doing well, thanks for the invitation. I’m glad to be back on the IoT Use Case Podcast. I’m in the office in Munich today. I had a nice bike ride this morning, took me about half an hour to get here.
Nice! It’s great to have that option — combining exercise and commuting in the morning.
Florian
Exactly. When you arrive, you’re refreshed and ready to start working.
By the way, I just looked it up — the previous episode was number 130 with MARTIN GmbH.
If you’re listening right now, feel free to subscribe to the podcast and check out that previous episode as well. It’s a good one. Great to have you here. Soroush, how are you today, and where are you joining from?
Soroush
Hello, and thank you for having me on the IoT Use Case Podcast. I’m doing well. I’m working from home today and enjoying the sunny weather here in Freiburg, in the south of Germany. Perfect weather for the weekend.
Nice, greetings to Freiburg. And to everyone listening, maybe also to your colleagues — shout out to them as well. It’s great to have you here.
Soroush, you’re a Cloud Solution Engineer at KNF. Your background is in AI, cloud architecture, and data science.
I’d say your mission is to turn manual processes into smart and scalable solutions — helping your company and others bring data-driven innovation to life. At KNF, you’re leading the digital transformation of pump testing, which we’ll talk about in a bit.
But to begin, what excites you most about new technologies in the cloud and manufacturing space? And are there any lessons you’ve learned that others starting a similar journey should keep in mind?
Soroush
I studied Industrial Engineering for my bachelor’s degree, and during that time I realized that digitalization is something I’m really passionate about. I wanted to understand how to reduce the time people spend on repetitive tasks in their daily lives, and that’s where my journey started.
I joined KNF two years ago. Before that, I worked at different companies. I’m originally from Iran but lived and worked in Italy before moving to Germany. I brought the experience from my studies with me and began working here.
Digitalization is still what I focus on. I’ve always been interested in finding ways to reduce the time people spend on manual processes. The goal is to give them more room to use their minds for innovation rather than repetitive work.
Great. You also used to work for Bosch, right? I just checked your LinkedIn profile.
Soroush
Yes, Bosch Italy.
Nice. It’s great to have you here, especially for the practical perspective you bring.
Florian, to summarize your role: you are Domain Lead for Cloud Transformation and Data Infrastructure at b.telligent. You have experience with a number of industrial projects and are an expert in building scalable cloud architectures and implementing IoT data platforms across various technologies.
What do you personally find most exciting about combining cloud, IoT, and data infrastructure? Where do you see the biggest impact?
Florian
The biggest impact I see is in helping our customers on their data and IoT journey. It’s not just about developing a single isolated use case, but about always keeping the overall strategy and the data platform in mind.
We focus on helping customers build a foundation that allows them to automate processes and continuously expand by adding new use cases over time, gaining value from the entire platform. We don’t only develop cloud solutions. We also support customers with edge solutions, integrating data from the shop floor.
In this use case with Soroush, for example, we helped build an edge application to integrate data from testing machines. We’ll get into more detail about that later. But overall, our goal is to support customers across their entire journey.
And just for some context, you’re referring to b.telligent. You’re known as a technology-agnostic IT service and consulting firm specializing in IoT, analytics, data management, and integration services.
You also come from a background in business intelligence and data warehousing. Would you say that’s where your roots are, and that your work has now evolved more into the IoT or IIoT space?
Florian
Exactly. We come from the business intelligence and data warehousing space. When I joined b.telligent five years ago, I brought in the IoT and manufacturing focus and started building it up together with the team here. It is great to see what we have achieved over the past five years with various customers and success stories.
Today, we are more of a data platform consultancy. It is no longer just about business intelligence — we now cover the entire chain.
That is definitely something you and your team can be proud of. Greetings to everyone from the b.telligent team who might be listening.
What you are also highlighting is that your customers are enabled to manage things on their own. You both are clearly experts in IT, and you help your customers not only understand their data but also make use of their architecture and edge components independently. So it really seems like you are true partners to your clients.
Florian
Exactly. One of our guiding principles is to empower the customer so that, eventually, they no longer need us.
Now, let us talk about why you are both here today. How did this partnership between your companies come about?
Soroush
Before this project, KNF had already worked with b.telligent on a business intelligence solution in another project, and we achieved great results.
At that time, we started a proof of concept for what we now call the LTM project, short for Lifetime Monitoring. When we decided to move forward with creating an MVP, we reached out to Florian and the team. We realized that their expertise would be very valuable and would also help enhance our own engineering know-how.
Together, we wanted to bring digitalization to KNF.
Let’s talk a bit about KNF, because not everyone may be familiar with the company. You are a major player, but for some context: KNF is a globally active, family-owned company based in Germany, specialized in developing, designing, and producing pumps. Is that right?
Soroush
KNF Holding AG is now headquartered in Switzerland, but it all started in Freiburg around 78 or 79 years ago as a family business.
We are active across many industries, but we often refer to ourselves as a “hidden champion” because people don’t always know us directly — but they encounter our pumps behind the scenes.
Our products are used everywhere: from deep-sea submarines to the International Space Station. That is why we call ourselves a hidden champion. We support other industries through our technology.
Our company slogan is: “Together we innovate, together we keep the world in flow.” Our pumps are part of making that possible.
Can you tell us more about your customer base? You mentioned different scenarios where your pumps are used. Do you serve specific customer segments, or is it truly across all industries?
Soroush
It really is across all industries — from medical and laboratory applications to universities and even maritime sectors. For example, our pumps are used in ship funnels to reduce emissions.
What makes KNF special is that we customize our pumps. We work closely with customers to understand their needs and deliver tailored solutions.
Wherever gas or liquid pumps are needed, our customers can reach out and we will find a way to provide a fitting solution.
Let’s talk about the project itself. I would call it an IIoT project. What are the main objectives? Can you both explain what this is about?
Soroush
Sure. Let me first explain what LTM means for us. It stands for Lifetime Monitoring. The idea is to monitor our pumps throughout their entire lifetime — which can be four, five, sometimes even eight or nine years — to understand what happens to them over time.
We create testing environments that replicate customer conditions and observe how the components behave during this time.
Before we started this IoT project, data collection was done manually — sometimes daily, sometimes weekly, depending on the project. A technician would visit the site and collect the data.
Since we are a global company with four production sites — two in Switzerland, one in Germany, and one in the US — the data was scattered. Each site managed its own data, without a centralized system. That meant we could not rely on a single source of truth.
This project was initiated because of the critical need to standardize and centralize our data. We needed to be able to transfer and access data across locations for future projects and analysis.
And when you talk about observing components — can you give an example? What kind of components are we talking about, and what kind of data are you interested in?
Soroush
Yes, a good example would be diaphragm pumps. These pumps have membranes that can fail over time. Often, we would only realize this when the pump had already failed. Once we disassembled it, we could see that the diaphragm had been damaged, but we didn’t know when or how it happened.
That is why we started collecting environmental data — such as current, pressure, humidity, and temperature — to better understand these conditions.
Previously, this was done manually. Technicians would record values in Excel or sometimes in SQL databases, but it was all manual. They also tracked operating hours.
This data helped us understand the lifetime of our pumps and identify failure patterns. For example, we could see if a change in components during production led to more frequent failures in the field.
I see. When we talk about the data you’re collecting, is it high-frequency data, or something like one data point per hour?
Soroush
Currently, we collect data at a frequency of around 100 Hz under normal conditions — that is 100 data points per second.
We also developed a feature we call “burst mode,” where we run the pump for 10 minutes a day at 10 kHz. This allows us to capture very high-frequency data to analyze aspects like noise and vibrations in more detail.
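To put these sampling rates in perspective, a rough back-of-the-envelope calculation shows what they imply per pump per day. The byte size and channel count below are illustrative assumptions, not figures from the episode:

```python
# Rough data-volume estimate for the sampling rates mentioned above.
# Assumptions (not from the episode): 8 bytes per sample per channel,
# and 6 measured channels (current, pressure in/out, flow, temperature, humidity).

BYTES_PER_SAMPLE = 8
CHANNELS = 6

def daily_samples(rate_hz: float, seconds: float) -> int:
    """Number of samples per channel collected at rate_hz for a duration."""
    return int(rate_hz * seconds)

# Normal mode: 100 Hz, around the clock
normal = daily_samples(100, 24 * 3600)    # 8,640,000 samples/channel/day

# Burst mode: 10 kHz for a single 10-minute window per day
burst = daily_samples(10_000, 10 * 60)    # 6,000,000 samples/channel/day

normal_mb = normal * CHANNELS * BYTES_PER_SAMPLE / 1e6
burst_mb = burst * CHANNELS * BYTES_PER_SAMPLE / 1e6

print(f"Normal mode: {normal:,} samples/channel/day (~{normal_mb:.0f} MB/pump/day)")
print(f"Burst mode:  {burst:,} samples/channel/day (~{burst_mb:.0f} MB/pump/day)")
```

Notably, a single 10-minute burst produces nearly as many samples as a full day of normal-rate collection — which is why burst windows are scheduled rather than run continuously.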
Interesting. I’ll come back to the burst mode in a second. But first, since you’re a global company with multiple sites, is your infrastructure currently fragmented across those locations? How should I imagine your system setup?
Soroush
Since the rise of the internet, our company has been on a path toward increasing integration across sites. This project is one of the latest steps in that direction.
As I mentioned before, with our earlier BI project with b.telligent, we also followed this approach.
Our goal is to become more integrated across the organization. We have four main production sites and more than 20 additional branches worldwide, mostly acting as sales centers.
Got it. Florian, a question for you: you have seen many similar projects. How does this one compare, especially when it comes to fragmented systems across multiple locations? Would you say this setup is typical?
Florian
Most of the challenges we encountered in this project are fairly typical. Many companies have siloed systems across different locations and want to bring their data together to compare production outputs or testing results.
We see this often — the need to centralize data to enable comparisons across different production lines.
What made this project a bit special was the technical setup, particularly the specific devices we had to connect and, as Soroush already mentioned, the burst mode. That was quite unique.
What exactly is burst mode? I have to ask now — what is it about?
Florian
Let me try to explain, and then Soroush can add more. In our IoT application — which also includes a web interface for technicians — there is an option to schedule a high-resolution data collection window.
Once a day, the system captures data at a much higher frequency for a specific time range. This allows users to zoom in on fine details and analyze behavior more deeply.
I see, so it’s about managing and analyzing high-frequency data.
Soroush
What I can add is that, as the name suggests, we run the pump at a very high frequency — up to around 10,000 revolutions — until it reaches its performance limit, or “burst” point.
Sometimes this helps us capture critical component behavior during the test, but that is not the main objective.
The goal is to better understand the operational limits of our pumps. For example, we run the pump for 10 minutes at a high frequency to see how it performs under stress and to define the threshold values it can handle reliably.
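The scheduling logic Florian and Soroush describe — normal-rate sampling all day, with one short high-frequency window — might be sketched as follows. Function and constant names are invented for illustration; this is not the actual KNF/b.telligent implementation:

```python
from datetime import datetime, time

# Hypothetical sketch of the daily "burst mode" scheduling described above:
# outside the configured window the DAC samples at its normal rate; inside
# it, the rate is raised to 10 kHz. Names and rates are illustrative only.

NORMAL_RATE_HZ = 100
BURST_RATE_HZ = 10_000

def sample_rate_for(now: datetime, burst_start: time, burst_minutes: int = 10) -> int:
    """Return the sampling rate that applies at the given timestamp."""
    start = now.replace(hour=burst_start.hour, minute=burst_start.minute,
                        second=0, microsecond=0)
    elapsed_min = (now - start).total_seconds() / 60
    if 0 <= elapsed_min < burst_minutes:
        return BURST_RATE_HZ
    return NORMAL_RATE_HZ
```

With a window starting at 02:00, for example, a reading taken at 02:05 would be collected at 10 kHz, while one at 03:00 falls back to the normal 100 Hz rate.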
Got it. Now I’d like to understand more about the process before this project started. How was pump testing typically done, and why was it so time-consuming, as you mentioned earlier?
Soroush
Before this project, we already had test rooms where we placed our pumps to collect data. We would run a pump under normal operating conditions and start recording parameters like temperature, humidity, current, pressure, flow, and more.
Previously, however, a technician or engineer had to manually go into these rooms and read the values from each pump. That process was extremely time-consuming.
To give you an idea: today we are testing more than 1,500 pumps in parallel. Imagine one person having to manually read current, flow, pressure, temperature, and humidity — including input and output pressure — for every single pump, on a daily basis.
In the past, we did not test nearly as many pumps at once. With this project, we are now able to run tests on more than 1,500 pumps daily, fully automated.
That is impressive. And since we are talking about the business case behind this, of course, any technology investment also has to be justified. Many companies need to evaluate the return on investment, which can be challenging.
Beyond what you just described, do you have any further insights into the typical losses or inefficiencies — time, resources — that you or other companies might face? Maintenance time, for example, is often a major factor, right?
Soroush
One of the main issues we had was that we didn’t know when a pump would fail — or why.
Now, with this setup, we can address both points. While it’s difficult to quantify the exact ROI, we can say that it adds significant value. For example, our testing and development engineers now receive highly accurate data in near real time — with transfer delays of less than 100 milliseconds.
This setup has also opened up new opportunities for us. The most popular term right now is AI — and we are actually using this data for AI-based development, such as predictive maintenance.
So on the one hand, there’s the business and process side — manual testing and documentation. If someone spends one or two hours daily on testing thousands of pumps, that’s a huge time investment.
On the other hand, I imagine there’s also a missed opportunity in areas like predictive maintenance or smart services. I’m not sure how far along you are with that, but once the data is available, it could also be valuable for offering more advanced consulting to your customers.
Would you say it’s fair to separate these into process benefits and future business opportunities?
Soroush
Yes, exactly. And the system also supports our production process in parallel. It enables faster decision-making and quicker evaluation of newly developed products.
That’s great. Then there’s also the technology side — dealing with siloed systems across locations, managing high-frequency data, and addressing the bottlenecks in engineering workflows. That impacts product development and leads to delays, right? So that would be more of the technical aspect of the business case.
Soroush
Yes, exactly. We can now leverage insights from our engineers in both the US and Germany. For example, if they detect an anomaly with a certain pump type, they can add that alert to our system and testing protocols.
Previously, when the data was fragmented, everything had to be explained manually — what was tested, what happened, what the outcome was, and why. Now we speak a common language across locations.
I see. Florian, I assume this sounds familiar to you — this kind of business case applies to many of your projects, doesn’t it?
Florian
Yes, absolutely. One important point is that many processes were manual before. Engineers and technicians had to go into the test rooms just to turn pumps on or off.
We have now moved that functionality into the application, so they can manage everything from their regular workspace. They no longer need to interrupt their workday to physically access the pumps. That brings a significant efficiency advantage.
I can imagine the impact that has, especially in more extreme environments. If a pump is located underwater, for example, maintaining it can require an entire diving team. Just thinking about that shows how valuable remote control can be.
So, Soroush, you decided to work with Florian and the team. From your perspective, what made you choose b.telligent? Was there anything specific that convinced you they were the right partner?
Soroush
When I joined the project, the collaboration had already started. But during the first few months, I realized that Florian and his team had worked on similar projects before.
One thing that stood out for us was the customization of data collection — we refer to our setup as DACs, or Data Acquisition Controllers. We use components like NI (National Instruments) and MCC (Measurement Computing) hardware, which are quite specific to our needs.
This part was unique to our project, but I could see that they already had experience with similar architectures in other companies. They understood where issues might arise and knew how to avoid them. That made the collaboration really valuable.
But since I joined later, maybe Florian can share more from his perspective.
Florian
From the very beginning, even during the first call with their boss, it felt like we were on the same page. The communication was easy and natural, and we immediately connected.
That is something I find unique to IoT projects, compared to traditional business intelligence projects. In IoT, people often have a personal connection to the topic. Many already use smart home devices or electric cars, so the conversations take place on a different level.
We quickly validated our approach with a proof of concept, and once we saw that it worked, we agreed not only to implement this specific use case but also to build a scalable foundation. The idea was to enable future use cases as well. That was an important decision early on.
When Soroush joined, it was great. From the first conversation, he was enthusiastic about the project. We onboarded him, and now we have weekly meetings where we discuss the current steps and upcoming use cases.
What I appreciate most is that our role as consultants is not to do everything ourselves. Instead, we focus on enabling the KNF team to build and scale solutions independently.
I see.
Florian, can you explain the current solution you built for KNF? What exactly did you implement, what was prebuilt, and which technologies are being used? After that, I’ll dive into the community questions.
Florian
Sure. What was special in this case is that we started by building the entire cloud foundation. This included automating the rollout of the cloud infrastructure — both for this specific use case and to support future use cases as well.
That is what you refer to as the cloud foundation. It is essentially the technology stack and infrastructure you need, right?
Florian
Exactly. We use Terraform to automate the rollout of the infrastructure through deployment pipelines. We also set up the framework and GitOps processes around the Azure IoT stack.
This includes deploying Azure IoT Edge applications to the edge environment — close to the pumps — where we run the DAC (Data Acquisition Controller) applications. These edge environments collect data and send it to the cloud. They can also buffer data locally in case the internet connection is temporarily unavailable.
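The store-and-forward behavior Florian mentions — buffering locally when the connection drops, draining when it returns — is provided by Azure IoT Edge out of the box. As a minimal sketch of the principle only (all names invented, not the production setup):

```python
from collections import deque

# Minimal store-and-forward sketch of the edge buffering idea described
# above. Azure IoT Edge offers this offline buffering built in; this
# illustrative queue just shows the principle.

class EdgeBuffer:
    def __init__(self, max_messages: int = 10_000):
        # Bounded queue: the oldest readings are dropped first if the edge
        # device stays offline long enough to fill the buffer.
        self._queue = deque(maxlen=max_messages)

    def enqueue(self, reading: dict) -> None:
        self._queue.append(reading)

    def flush(self, send) -> int:
        """Send all buffered readings via `send`; re-buffer on failure."""
        sent = 0
        while self._queue:
            reading = self._queue.popleft()
            try:
                send(reading)  # e.g. the IoT Hub client's send call
                sent += 1
            except ConnectionError:
                self._queue.appendleft(reading)  # keep it for the next try
                break
        return sent
```

Once connectivity is restored, `flush` drains the queue in arrival order, so no gap appears in the time series as long as the outage is shorter than the buffer capacity.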
How do you connect brownfield environments to this modern cloud infrastructure? Do you use MQTT, OPC UA, or other protocols to connect the different devices?
Florian
This LTM use case is somewhat special. As mentioned earlier, we use custom DAC controllers. We built a Python-based software library using Azure IoT SDKs to send data from the edge to the IoT Hub.
For this use case, the setup includes MCC and NI boards to collect data locally. The edge device sends this data to the cloud using the Azure IoT SDK, where it is stored in Azure’s time-series database, specifically Azure Data Explorer.
The data is then visualized using Grafana, so engineers can monitor and analyze it in depth.
In other use cases at KNF, as Soroush mentioned, they also use MQTT brokers to transfer data into Azure. So yes, the architecture is always tailored to the specific requirements.
I assume it depends heavily on the customer and the setup they already have in place.
For those listening, if you are working on a similar project and want to exchange best practices, I will include Florian’s and Soroush’s LinkedIn profiles and contact details in the show notes. Feel free to reach out to them and discuss your use case or technical requirements.
It sounds like connectivity is highly individual, depending on the devices and project goals.
Now, coming back to the issue of failures — earlier you mentioned detecting not just when a failure occurs, but also why. How are you solving that from a technology perspective?
Florian
Our main goal was to collect data in near real time, with frequencies in the range of hundreds of hertz — and in some cases even higher. This data is sent to a time-series database, where engineers can analyze it using Grafana dashboards.
These dashboards display all relevant sensor values and allow engineers to monitor thresholds and detect anomalies. They can deep dive into the data, mark specific points where anomalies occur, and add comments.
This setup also enables the creation of long-term documentation. For example, during lifetime testing, engineers can flag suspicious behavior and track it over time. All of this can later be used for automation or as input for data science and AI models.
That makes sense. Soroush, would you like to add anything from a practical perspective? How are you or your team working with the system?
Soroush
Yes. As I mentioned earlier, Florian and his team at b.telligent had a good understanding of what challenges might arise. For example, we implemented a cloud-based alerting system before moving into production.
If a sensor detects abnormal values — for instance, if current levels go too high or too low — the system sends an alert. That way, a technician or engineer can check on the pump before a failure actually happens.
This is one of several examples where Florian’s team anticipated potential issues and helped us proactively implement the right solutions.
Exactly — being prepared for any failures that might occur.
Florian
One important aspect not to forget is that in the past, failures might have gone unnoticed. Technicians did not always have enough information to detect problems early.
Now, with this system in place, they know when a failure occurs and can go directly to the pump for inspection. The data provides a clear signal that something is wrong, allowing them to take timely action.
Exactly. That is a great conclusion.
Were there any unexpected outcomes or benefits that emerged from this project? Something you did not plan for but turned out to be a valuable result?
Soroush
One of them was realizing how smoothly we could connect our internal physical devices to the cloud and visualize the data in real time.
This opened the door to new project ideas, such as connecting production lines and testing environments directly to the cloud. Now, for example, we have a TV mounted on the wall in our test rooms showing which pumps are running and displaying the current values of the data we are collecting.
Another unexpected benefit was how this set the stage for using AI. We initially thought about predictive maintenance, but then we also realized we could use the data for error classification. By analyzing how specific changes in data correlate with certain types of failures, we can now classify and anticipate issues more effectively.
Florian
What I would like to share with the listeners — and this is something we see in every project — is that IoT projects often involve a large number of devices and data points being integrated into the cloud.
It is very important to regularly evaluate the application from a financial perspective. Sometimes you need to adjust certain settings to optimize cloud costs. An IoT solution is not something you set up once and let run forever. You need to monitor and optimize it continuously, especially as data volumes grow.
That is great advice for any company considering a similar project.
Thank you both for being part of this podcast. I really appreciated all the practical insights you shared.
I would love to hear how the project continues to evolve — maybe we can do an update in a year.
For now, thank you from my side. I’ll leave the final words to you if there is anything you would like to add.
Florian
Thank you for the invitation. Just like last time, it was great to talk about this use case and share our learnings with the audience.
We hope it helps inspire more IoT use cases across different companies.
Soroush
Thank you as well for having me on the podcast. It was a pleasure to talk with you and share the story of our successful project.
One final suggestion for the audience: when starting an MVP, always think in terms of the bigger picture. Do not focus only on the immediate use case. Design the project so it can scale from the beginning. That mindset makes a big difference.
That was a perfect final note. Thank you, and have a great rest of the week. Bye-bye.
Soroush
Bye.