In this episode of the IoT Use Case Podcast, host Dr. Peter Schopf talks with Lukas Schattenberg, Sales Director at IXON, Alexander Engels, CEO of aiXbrain, and Jörg Halladin, Head of Development at SPALECK. The focus: how machine builders use an IoT platform, edge connectivity, and integrated AI to implement a scalable condition monitoring and predictive maintenance service offering for operators.
Podcast episode summary
SPALECK, a machine builder for vibratory conveyors and vibrating screens used in recycling, chemical, and food processing plants, integrates condition monitoring into its long-lasting machines to avoid downtime in linked process lines. The challenge was less about data collection than about operational responsiveness: local traffic-light indicators were overlooked, and rigid thresholds are not reliable early-warning signals when products and operating modes change. To derive service decisions from machine data in time, machines were quickly connected via edge routers from IXON and data was made available in the cloud. aiXbrain added ML models for predictive maintenance as an integrated app (Dataray) — including in-platform labeling, model comparison (false positives/negatives), and automated retraining. The solution is deliberately kept open via interfaces such as OPC UA, PROFINET, and APIs, so operators can integrate the data into their own plant dashboards. Benefits for IT/OT decision-makers: fast rollout, secure remote access, a scalable data pipeline, earlier fault detection (several days of lead time), and service-ready processes with clear alerts instead of additional tool overhead.
Podcast interview
Today on the IoT Use Case Podcast, we’re looking at a setup we’re seeing more and more often — and one that’s becoming strategically relevant for many industrial companies: an IoT platform, a specialized AI solution, and a machine builder working together to create a new service offering for end customers. Our guests are IXON, aiXbrain, and SPALECK. We’ll talk about how machine data turns into concrete decisions — not in the lab, but in real operations. Enjoy the episode.
Hello and welcome to the IoT Use Case Podcast. My name is Dr. Peter Schopf, and in this episode we’re addressing a topic that many machine builders are dealing with: How can you offer data-driven services to end customers? Is it really the case that many fail due to technical complexity, security requirements, or a lack of AI expertise? Some companies show that it can work, and I’m very happy to welcome our guests for that. First, we have Lukas Schattenberg, Sales Director at IXON, whose IoT platform provides the foundation. Then we have Alexander Engels, CEO of aiXbrain, whose AI solution delivers the right insights. And finally, Jörg Halladin, Head of Development at SPALECK, who as the machine builder implements all of this with customers in practice. Great to have you all here. Jörg, I’d like to start with you to learn more about you and especially SPALECK. Where can people find you, and what can you tell us about yourself and SPALECK?
Jörg
Hello. SPALECK is based in the Münsterland region — a classic SME, completely owner-managed. We have several business areas, and one of them is building our own machines, which we design and develop entirely in-house. These are vibratory conveyors and vibrating screens for all kinds of bulk materials. We specialize in everything related to recycling, but increasingly also in the food and chemical industries.
Great. Can you name one or two customers? Are they larger and well-known, or more specialized players in the recycling space?
Jörg
Just take a look at the trash bins you have and the collection trucks — often the company name is written right on them.
We’re actually very strongly represented worldwide.
And what are the problems your customers have in that context, especially with bulk materials, recycling, and similar areas?
Jörg
These are process engineering plants that have to run — for example because they supply fuel for power plants; that’s what it’s called when waste is used for energy recovery. Or they run chemical processes where interruptions can be extremely costly. So what’s the challenge for us? Our machines are, first of all, part of interconnected systems. In this long process chain, ours is one machine, and a very reliable one. It’s quite common that customers don’t even know where our machine is installed or what it’s doing, because they have had nothing to do with it for years. So it often isn’t on their radar in day-to-day maintenance and servicing. While other machines are checked very frequently — sometimes every shift — because everyone knows it’s necessary to keep them running, ours tend to be forgotten. And I actually like that, because that’s our job: achieving that reliability. But of course it makes it even more important that if something does happen, it gets noticed. That’s why condition monitoring has definitely been a topic for us.
And what is your role?
Jörg
As Head of Development: we’ve been doing condition monitoring for probably 14 years. Our first issue back then was that an LED stays green for five years, then turns yellow, stays yellow for months, turns red for a few weeks — and then the damage is there, and nobody saw it. In the logs we always saw: technically, everything worked — but nobody reacted. So at some point we moved it to a cloud platform so that we, as the experts for these machines, would see it and could make sure the necessary actions actually happen — so that a repair that might only take ten minutes really stays a ten-minute repair, and doesn’t turn into a full shutdown for one or two days.
That’s very vivid. What was the trigger? Why do you go to the cloud, as you just described? There are insights — but it’s about making them available at the right time. And then you started a selection process and looked for partners. We have Lukas here: how did you come together?
Jörg
At first we made the classic mistake: we have data and experience, we have an idea of our digital business model and how the service should run — and we started by looking for a very large solution and checked out different options. We found what many people know: if you want software projects to fail, you just need to take on way too much. Things also fell flat with our first partner, so we had to reorient very quickly. A trade fair was coming up, and we had to rebuild everything. Then we found IXON through an online search. We introduced ourselves, explained what we wanted to do — and they simply sent us a router. It arrived two days later, and a few hours after I had it, I had my dashboard ready with 90% of the functionality I wanted. That was very convincing in that moment. We had a lot of routers on the table and tried them out, and normally setup takes a long time. Here we already had dashboards ready. Then it became clear fairly quickly that we could get to market fast and deliver to customers what we had envisioned. And that’s exactly what happened.
Lukas, that’s a nice handover to you. The router arrived after two days. How do you do that? What’s your focus, and what’s your part in it?
Lukas
My role as Sales Director for the DACH region at IXON is to make sure that customers like Jörg get supported by my team as quickly as possible. The team is cross-functional: technical, sales, and also marketing. In general, we support customers with four products that we offer. Edge devices are something we provide out of the box. We offer remote access. Creating machine insights is another product, and finally, the customer portal is something we can deliver. As Jörg already said, one thing is always central: usability and time-to-market. In our experience, the machine builder has an incredible amount of know-how about their own machines. Creating that connection between machine builders and manufacturing companies — and maintaining a continuous connection across the entire lifecycle of a machine — is what we’ve set ourselves as a mission, and where we’ve been quite successful and are fortunate to support customers.
And where are you based? How big are you as a provider?
Lukas
I’ve been with IXON for five and a half years now. When I started, we were just under 40 people. By the end of this year, I think we’ll be around 130. We’ve grown strongly over the last five years — not only in headcount, but also in the number of connected machines. We now have more than 60,000 machines connected to the IXON Cloud.
That’s impressive, because the IoT market isn’t easy. Growing like that and being successful is not a given.
[08:46] Challenges, potentials and status quo – This is what the use case looks like in practice
You mentioned your solutions. We also have specialized solutions and applications in use here — looking at Alexander. aiXbrain: how does that fit into the picture? Was this an app in the marketplace, or how did Alexander come into the picture?
Alexander
Yes, actually: with our application called Dataray, we’re an app in the IXON Cloud marketplace. I don’t remember exactly whether that was already the case when we first got in touch with SPALECK. I think it was classic cold outreach — that’s how we came together. But at the time, we definitely already had an integration into the IXON Cloud. Whether it was in the marketplace or not, I can’t say for sure. But in any case, we had already saved the effort of building an integration from scratch, and you get the advantages of an existing integration. Jörg then picked up on it and said: wait a second — we already have IXON, you’re integrated — let’s see whether you can support us with this forecasting to detect the kinds of issues he described early on. That’s how it came about.
Tell us a bit about your company. You’re CEO of aiXbrain.
Alexander
I’m a co-founder of aiXbrain. We’ve been around since 2019. We’re a spin-off from RWTH Aachen University, and we’re still based in Aachen. We specialize in industrial AI. We do that through different software products and software services. Dataray is the product line where we cover classic machine learning in predictive maintenance and predictive quality — on time series data, but also image data. And then we have a second major pillar: agentic AI, built on language models, agent frameworks, and similar technologies. We’ve been pushing that very strongly over the last two years. We’re getting to the point where these two topics can complement each other meaningfully. I think that’s also what the industrial AI world is looking at right now. We’re ten people — so relatively small. But it’s a good size to build strong AI solutions with real impact.
I also like that you split it into two product lines: classic AI, which has been relevant for a long time but isn’t always easy to implement, and then newer generative AI. Specifically for SPALECK: what do you do in the projects, and what comes out of your AI models?
Alexander
The machines that SPALECK builds and that are installed at the end customers are part of such interconnected process lines. A key point is: if that unit fails or stops, nothing else happens on the line. By definition, it’s a critical machine. The question is: when does it become critical on a critical machine — what Jörg described as the “yellow light” — and how long does it take until it turns red? That’s a question where you can use data and train machine learning models that detect trends early, assess them, and then automatically alert the right personnel.
Especially with classic AI models, you need data for that, because you have to train the models yourself — and the machine needs to fail occasionally.
Jörg, maybe you can describe again: your machines run very reliably. How did you capture the data and provide it to aiXbrain so you can predict failures? You need a lot of data for that.
Jörg
That’s actually the biggest challenge. One advantage is that even though our machines are highly tailored to individual customer requirements — a batch size of one is pretty much standard for us — we use the same data model across all machines. That makes the data comparable. By default, the machines ship with “learned” thresholds that they train on their own. But that’s not enough to make a real prediction, because operator behavior — what product they run, how often they change it — means you can’t set a threshold in a way that’s close to a potential fault that a human could recognize. With a trained data model, we already gain four or five days. That doesn’t sound like much, but whether you know on a Friday or on a Tuesday that you really need to act makes a difference. aiXbrain then provided us, within the IXON cloud platform as an integrated app — you don’t even notice you’re not using our own dashboard — with a way to label the data. Then we discussed what we could do with it. aiXbrain tried and developed different models, and we looked at: how well do they match? Do they detect a fault correctly? How often do they flag a non-existent fault as a fault? That’s how we worked our way toward the right approach.
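The model vetting Jörg describes (how well does a candidate detect a real fault, and how often does it flag a non-existent one) can be sketched as a simple confusion count over hand-labeled data windows. Everything here, from the function names to the toy data and the two candidates, is illustrative rather than SPALECK’s or aiXbrain’s actual pipeline.

```python
# Score candidate models against hand-labeled machine data by counting
# false positives (non-existent faults flagged) and false negatives
# (real faults missed). All data below is a made-up illustration.

def confusion_counts(labels, predictions):
    """Count true/false positives and negatives for binary fault labels."""
    tp = fp = tn = fn = 0
    for y, p in zip(labels, predictions):
        if p and y:
            tp += 1
        elif p and not y:
            fp += 1
        elif not p and not y:
            tn += 1
        else:
            fn += 1
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn}

# Hand-labeled windows: True = fault developing, False = healthy.
labels = [False, False, True, True, False, True, False, False]

# Two hypothetical candidate models' predictions on the same windows.
candidates = {
    "model_a": [False, True, True, True, False, False, False, False],
    "model_b": [False, False, True, True, False, True, True, False],
}

for name, preds in candidates.items():
    print(name, confusion_counts(labels, preds))
```

Comparing the counts per candidate is the "how well do they match" step: a model with fewer false negatives gives more lead time, while one with fewer false positives avoids crying wolf at the service team.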
[15:22] Results, Business Models and Best Practices – How Success is Measured
I’m interested in the business model. Those four or five days can be absolutely critical. How did you implement that with your customers? Is it a service model?
Jörg
We had different ideas. One thing we ruled out immediately was value-based pricing — saying: those four hours cost you a certain amount, and we take a share of it. Instead, we charge a price that’s sufficient to operate and further develop the system. We considered a classic subscription model, but that didn’t go down well in the market. Nobody likes recurring payments. Also, operators can change over the years. For example, we have the handover from plant constructor to end customer that has to be managed. By now, it’s an integral part of the machine: when you buy a machine, it’s included — for a period that corresponds to the machine’s service life. That works well for us.
That’s really a core challenge for digital services in industry, because people are used to buying capital goods.
Lukas, you’re a platform provider and probably work with subscriptions. What’s your business model, and how do machine builders that integrate your platform handle that?
Lukas
That’s right: we’re not a capital good. We ourselves work with subscriptions for our different products. But we also have — if you look only at remote access, for example — a model without ongoing costs: you buy hardware and you’re able to offer global, cyber-secure remote access. For the other products, like the customer portal and machine insights, we use subscriptions. Machine builders serve a very heterogeneous customer base, from large corporations to very small businesses. I don’t think a machine builder can offer the exact same product one-to-one for different customer types. There’s a phase where they need to learn: how does my customer use my machine or system? Based on that, they can develop products tailored to the requirements of different customers. Not everyone makes it an integral part of the machine. Some use subscription models. Others integrate it into existing maintenance contracts and try to close more contracts or increase the scope. Especially at the beginning, a successful approach in our experience is to offer it for the warranty period first: so you can provide good support yourself, and so you can show the customer using data how you helped — through data or access to the machine. From there, you derive further products.
Another fundamental topic in IoT: do I connect every single machine, or do I have an IoT cloud for multiple machines? The machine manufacturer wants to place their solution, while the operator would ideally like one app for the entire plant. How do you see that tension?
Lukas
In my view, it’s not an either-or question. It complements each other. Not every machine builder can meet the requirements of a manufacturing company with their own dashboard — and maybe they don’t even need to. The question is: if I can help a manufacturing company by forwarding data into a higher-level IoT system, for example via an IXON edge device, that’s just as possible as visualizing it in my own dashboard. The machine builder needs data to learn from the machine, and then to offer consulting services: how do I operate the machine more effectively? For that, they don’t need the KPIs factories are typically interested in — output, OEE impact, quality metrics — but rather machine-specific values like: what temperature did the motor have before it failed? If there is that dialogue and a permanent connection, both sides benefit.
Alexander, how do you contribute on the business model side? Your solution is in the marketplace, probably subscription-based. Revenue share would be interesting, but likely difficult. Where do you stand?
Alexander
Revenue share is the big dream. But the big question is: how do you measure the revenue or the delta? We’re in the marketplace. For us there are two phases. One is the configuration phase — you can’t avoid that. As far as I know, true plug-and-play doesn’t exist yet for this kind of data. There’s adaptation effort. Then it becomes a runtime service, and that’s where we charge a fee. Whether you call it a subscription or a license doesn’t really matter at that point. We’ve found a pricing model that’s accepted. It’s a bit “reverse engineered”: you look at what the market will bear, and at some point it converges. With generative AI and LLMs it’s similar: you’ll find few offerings that cost 1,000 euros per month; the general consensus is more in the double-digit range.
With you, you’ll have to retrain again and again when patterns shift. How do you handle that?
Alexander
That’s the big advantage of the application — or rather the framework behind it — which we call Dataray. For the value of the concrete application it doesn’t matter, but what’s behind it is a highly automated process. Once you have the model — including configuration of the input data channels, from sampling rate to the meaning of the data — and it’s running, retraining on new data is more or less a push of a button. Then the effort is low. The effort is in the initial setup: which data, which meaning? And you don’t trust it blindly — an expert has to look at it, adjust it, until it fits. One-time effort at the beginning, but generating more data and retraining on it is then very easy.
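The push-button retraining Alexander describes can be sketched like this: once the channel configuration (sampling rate, meaning of each signal) is fixed, refitting on newly labeled data is a single automated call. The toy model here, a per-channel mean plus/minus three-sigma band, and all the names are assumptions for illustration; they do not describe Dataray’s actual models.

```python
# Sketch of "push-button" retraining: fixed channel configuration in,
# freshly labeled windows in, refitted model out. The trivial statistical
# model stands in for whatever ML model the real framework would refit.

from statistics import mean, stdev

def retrain(channel_config, labeled_windows):
    """Refit a per-channel normal-operation band on the healthy windows;
    a real system would swap its ML training step in here."""
    model = {}
    for channel in channel_config["channels"]:
        healthy = [w["values"][channel] for w in labeled_windows if not w["fault"]]
        mu, sigma = mean(healthy), stdev(healthy)
        model[channel] = {"low": mu - 3 * sigma, "high": mu + 3 * sigma}
    return model

config = {"channels": ["bearing_temp", "vibration_rms"], "sampling_rate_hz": 10}
windows = [
    {"values": {"bearing_temp": 41.0, "vibration_rms": 2.1}, "fault": False},
    {"values": {"bearing_temp": 42.5, "vibration_rms": 2.3}, "fault": False},
    {"values": {"bearing_temp": 40.2, "vibration_rms": 1.9}, "fault": False},
    {"values": {"bearing_temp": 78.0, "vibration_rms": 6.8}, "fault": True},
]

model = retrain(config, windows)
print(model)
```

The point of the design is that `retrain` takes no model-specific knobs: the expensive, expert-reviewed decisions live in the configuration, so appending new labeled windows and calling it again is the whole retraining workflow.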
[24:26] Transferability, scaling and next steps – Here’s how you can use this use case
For many machine builders, this journey is exciting. Jörg, what should people watch out for to avoid typical mistakes?
Jörg
The main point is to really think in advance about what matters to the customer in the service. If you think fancy dashboards or web shops for spare parts ordering are important, you should talk to customers and see whether they even ask for that. We found that many people don’t care about our dashboards at all — they have their own. We provide an OPC UA interface and a PROFINET interface at the edge. We can do that together with IXON because the hardware is open enough. We have first customers who access APIs directly from the cloud because they use completely custom solutions. I would make sure the platform is open and works well with other products.

For us, phone and email are still the preferred way. The machines fail very rarely and then require specialist knowledge. Support is tailored to our service technicians and processes. That’s how our processes work better, and it’s increasingly appreciated. In our case, the system sends an email and then we look at the data. It doesn’t matter whether it was a threshold or the AI that reported it. A service technician can quickly check: which machine is this, what’s going on? The customer has their own dashboard, we have timestamps, the data is named. Then you talk — you look at different dashboards in different systems — and still know for sure that you’re talking about the same thing.

If we had tried to integrate a ticketing system, self-service, web shop, and so on from the start, we wouldn’t be where we are now. My recommendation is: focus on where you can generate direct value for customers and for your own processes — and make sure the topic doesn’t just “float somewhere in the cloud,” but is visible and actionable for the people.
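The openness Jörg emphasizes often comes down to the operator pulling machine data over an API and flattening it into whatever shape their own plant dashboard expects. A minimal sketch, assuming a hypothetical JSON payload; the field names are invented for illustration and do not describe the actual IXON Cloud API:

```python
# Flatten a hypothetical cloud-API payload for one machine into
# (timestamp, tag, value) rows that a plant-wide dashboard can ingest
# alongside data from machines by other builders.

def to_dashboard_rows(payload):
    """Turn one machine's latest-values payload into flat dashboard rows."""
    return [
        (payload["timestamp"], point["tag"], point["value"])
        for point in payload["datapoints"]
    ]

# Example payload as such an API might return it (structure assumed).
payload = {
    "machine": "screen-07",
    "timestamp": "2024-05-01T06:30:00Z",
    "datapoints": [
        {"tag": "motor_current_a", "value": 12.4},
        {"tag": "vibration_rms", "value": 2.2},
    ],
}

print(to_dashboard_rows(payload))
```

Because the rows carry timestamps and named tags, both sides can look at different dashboards in different systems and still be sure they are talking about the same signal at the same moment, which is exactly the point Jörg makes about joint troubleshooting.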
Are you satisfied now — are you done? Or are there next steps?
Jörg
There are definitely next steps. At the IFA, the leading trade fair for bulk material processing in Munich in May, we’ll present new things. I don’t want to give too much away — best to just stop by. We’re definitely expanding the value of the cloud so the predictions become significantly better again. We’re moving away from “only detecting faults” toward “gaining process know-how.” That’s where it’s heading.
Lukas, where are you heading next?
Lukas
For us, the focus continues to be on security, data availability, and expanding customer portals to create more direct value for our customers. On the other side, there are add-ons to our platform: Jörg mentioned ticketing systems and web shops. We’ll continue to expand our partner network over the next years to provide better integrations that enable faster time-to-market for our customers. And the big topic of AI is becoming more deeply integrated into our platform: the IXON AI assistant, which is gaining more and more functionality. At SPS last year, 2025, we showed a first version — a way to interact with documents like in ChatGPT and get to results faster. When I think about the IXON Flow platform, I see many more possibilities for how the AI assistant can make life significantly easier for our customers.
Alexander, you’re directly involved with the AI assistant: what are you doing there, what steps are you taking?
Alexander
We’re deeply involved in the IXON AI assistant — we’re building it together. The difference compared to what happens in the marketplace — applications as on-top solutions — is that this is deeply embedded in the platform. The first application, something like a document chat, was a test application to show that the overall system works. This will continue, and the assistant will gain more and more functionality.

To put it this way: there are functions like predictive maintenance, predictive quality, and AI-based forecasting. But you want to connect these functions intelligently — and not only AI functions, but also others. That’s what we’re doing now on the path toward the IXON AI assistant. Our vision is the “speaking machine”: a machine that knows what condition it’s in, that knows what to do — maybe nothing, maybe proactively something — and that informs people or even takes steps itself. For that, you need many tools that are intelligently connected and work together. That’s the vision we have and that we want to integrate at IXON.

From my perspective, the path to the end customer and operator in almost all cases goes through the service department of the machine and plant builder. That’s where the know-how is, and that’s where you first have to create value and build trust. Before it goes one step further, it has to work there.

If you imagine the life of a typical service employee: someone calls and says, my machine is down again. Then you can automatically look it up — or you already know from the assistant — what’s the current status? Have things changed since the last time it was working? You can look into manuals or maintenance logs, pull information together, and deliver a qualified pre-analysis: “These errors are visible. These three causes are possible. Ask these two follow-up questions.” That gets you to the root cause quickly. Maybe there’s even a proactive suggestion: which setting makes sense, or what needs to be cleaned.
That makes a service employee’s life much easier — and that’s exactly what the IXON AI assistant is meant to deliver.
If other machine builders or service providers want to get in touch: what’s the best way to reach you — LinkedIn, email, website? We’ll of course also link everything on the IoT Use Case page.
IoT Use Case is a great website where the solutions of partners and network members are listed. Thank you all very much. And if anyone wants to talk with me about generative AI for organizational development — that’s also my focus topic — feel free to connect with me on LinkedIn. Until next time.


