In episode 171 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit speaks with Fabian Peter, CEO of ayedo, about the industrial use of Kubernetes—beyond traditional IT silos and DevOps clichés.
This episode explores how companies can efficiently scale use cases like predictive maintenance, roll out updates automatically, and meet compliance requirements. Fabian shares practical insights from projects in mechanical engineering and energy supply, explaining how a European tech stack can bring Kubernetes even to on-premise environments.
Podcast episode summary
Kubernetes has long since moved beyond being just an IT team topic—it’s becoming a key technology to take industrial use cases from prototype to rollout. In this episode, Fabian Peter, CEO of ayedo, talks about real-world challenges in manufacturing: distributed machines, complex update processes, lack of standardization, and growing compliance demands.
He explains how Kubernetes can serve as an operational platform for containerized applications—such as for predictive maintenance, OPC UA-based data connectivity, or API-based manufacturer integrations. The advantage: updates run automatically, changes go live within minutes, and applications remain resilient—even in complex infrastructure environments.
According to Fabian, many companies underestimate the initial hurdles—especially in the midmarket, where expertise is often lacking. That’s why ayedo offers managed services to help companies run their software on Kubernetes—whether for third-party apps or proprietary solutions. What matters most: a European stack that’s GDPR-compliant, flexible, and backed by personal 24/7 support.
This episode is geared toward digitalization leaders in industry, mechanical engineering, and the energy sector who want to standardize and future-proof their IT/OT systems—without getting lost in the complexity of hyperscalers.
Podcast interview
Many industrial companies today are under enormous pressure. I think you all know this: the IT/OT infrastructure needs to be modernized – not just as an end in itself, but so that the use cases you implement can really be operated in a scalable, secure and economical way. To do that, machines, sensor data and existing hardware must be reliably integrated—and the data made usable.
One key building block is Kubernetes. Some of you may have heard of it—an industry standard for orchestrating container-based applications. And if you’re thinking, “I’m not a DevOps nerd” and are about to tune out—please don’t! Whether you work on the shop floor, make architecture decisions, or drive digital solutions in operations: Kubernetes is no longer just an IT topic. It’s the key to taking use cases from idea to productive rollout.
Today we clarify: What is Kubernetes in an industrial context? What kinds of use cases can be implemented with it in manufacturing or energy? And which mistakes can you avoid from the start?
My guest is Fabian Peter, CEO of ayedo—a European provider of container infrastructure based on Kubernetes. Not only do they run these platforms, they also set them up—with 24/7 support. As always, all implementation details can be found at www.iotusecase.com and in the show notes. With that, let’s head into the studio.
Hi Fabian, great to have you here! How are you, and where are we catching you right now?
Fabian
Hello! I’m doing well. I’m near home, currently sitting in our office in Saarland, where the nice weather has settled in.
The Saarland—fantastic! Sending greetings your way. Let’s start by setting the scene a bit. I think most people have heard of Kubernetes, but it would be good to explain it briefly and put it in context.
Docker is installed on a device, for example on a WAGO controller. That lets me start individual containers—small pieces of software that perform a specific task, like collecting data via OPC UA. To orchestrate these containers or the devices they run on, you need Kubernetes. It’s essentially a central system that manages these devices. Is that more or less correct?
Fabian
Yes, that’s a good explanation! Kubernetes doesn’t originally come from the world of WAGO controllers—it was born at hyperscale companies, Google in particular. At some point, they realized: I need lots of machines and have to orchestrate what we call “workloads” across them. A container represents one such workload.
As you said, one could, for example, collect OPC UA data and send it to a message queue or broker like HiveMQ or RabbitMQ. Behind that, you’ll find consumer applications—typically third-party software—performing specific tasks. Kubernetes allows you to package these tasks into containers. So it’s the Docker principle—but not just on a single device, rather scaled across many nodes. Kubernetes handles the distribution of containers across the cluster and ensures they run in the right place.
Cool! Do you remember when you first saw Kubernetes being used in industry? As you said, it comes from a completely different environment—that would be really interesting to hear.
Fabian
For me, Daimler was one of the early adopters in Germany. They’ve been doing a lot with Kubernetes for years. I think I saw at KubeCon last year that they’re running around 9,000 clusters. I’m not exactly sure what they’re doing with them—I can imagine a lot—but I don’t know the details.
We ourselves worked directly with a major German automotive manufacturer. Initially, we installed Docker—or more precisely, Docker Swarm—in one of their plants as a base platform to run a business application that interacted with the shop floor. That setup was later migrated to Kubernetes. That was about three or four years ago. That’s when it really hit me: Kubernetes isn’t just for e-commerce or internet companies—it’s relevant for other sectors too. Because even there, you need scalability and control over what’s running where and when.
Impressive. How many companies are using Kubernetes today? Is it mostly the big players like Daimler, or do you also see it in smaller, mid-sized businesses? Would you estimate 10%, 30%? What’s your gut feeling?
Fabian
I don’t have exact numbers, but I’d estimate maybe 5%. When a mid-sized company uses Kubernetes, it’s usually one where software development is a central part of their business model. In traditional manufacturing, I see it less often—especially in companies that aren’t dealing with a wide range of workloads or applications. That doesn’t mean it wouldn’t be a good idea—but the necessary IT mindset to reach that conclusion usually only develops once you hit a certain size or complexity of problems that need solving.
When do you even need Kubernetes, simply put?
I mentioned the example of the WAGO controller earlier, and of course there are many other manufacturers. At what point does such a setup make sense? Is it when you hit a certain number of devices you need to manage or orchestrate?
What’s the typical problem statement—especially with your clients? When is it really worth using Kubernetes?
Fabian
I think we need to take a small step away from the industrial context here, because with Kubernetes it’s not necessarily about the number of nodes, devices, or servers you manage. You can absolutely run a single device efficiently with Kubernetes. Technically, that’s not a real cluster—but it still works well.
Maybe a quick detour: What does Kubernetes actually do? You described it well earlier—it orchestrates containers on machines. But the truly interesting part is the clearly defined interface it provides—toward developers, operators, or end users. We’re talking about an API here. And in many industrial environments, that’s actually something new.
This API makes it possible to make very precise definitions: For example, I have an application that needs a database, some storage, four cores, three gigabytes of RAM – and here is the image that contains the application. Run it like that, please.
That is why this interface is so exciting. You break the app into clearly separated, standardized components. And that’s exactly where Kubernetes becomes relevant.
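To make this concrete: the kind of definition Fabian describes is typically written as a Kubernetes manifest. Here is a minimal sketch; the names, image, and figures are illustrative, not from the episode:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: machine-data-app            # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: machine-data-app
  template:
    metadata:
      labels:
        app: machine-data-app
    spec:
      containers:
        - name: app
          image: registry.example.com/machine-data-app:1.0  # "here is the image"
          resources:
            requests:
              cpu: "4"              # four cores
              memory: 3Gi           # three gigabytes of RAM
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: machine-data-pvc   # "some storage"
```

The database would usually be its own Deployment or StatefulSet with a separate manifest. Kubernetes reads this declaration through its API and keeps the cluster in the declared state.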
Here’s an example: Let’s say I have a WAGO controller—installed on a DIN rail behind an industrial machine—and I want to capture OPC UA data. If I want to run a small piece of software on that device, ideally using a process that’s accepted across the industry, I’d follow best practices. That’s what we call Cloud Native Software Engineering.
Once you understand this, you can define a change from a central point in a large company, press a button, and have it automatically rolled out to all affected devices—following a controlled process.
Of course, with a single device, you could manage that differently. But when you’re talking about 50 or 1,000 devices in the field and you need to roll out software updates, things get interesting. Maybe something needs to change because a feature was updated, new results are required, or compliance rules must be met.
That’s when Kubernetes becomes extremely helpful. I no longer have to access every device via SSH or write complicated Ansible scripts. Instead, I define centrally: the application is now version 2 instead of 1 – and Kubernetes takes care of distributing the update.
Ideally, it even happens with zero downtime. That’s especially important for web services. Kubernetes includes internal logic that ensures external connections—say, to a database or a broker—aren’t interrupted just because an update is running.
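In manifest terms, the central "version 1 to version 2" change is just a new image tag on the Deployment, and the zero-downtime behaviour comes from its rolling-update strategy. A sketch with illustrative values:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never stop the old version before a new one is ready
      maxSurge: 1         # start one extra pod with the new version first
```

With `maxUnavailable: 0`, Kubernetes brings up pods running the new version, waits until they report ready, and only then retires the old ones, so external connections keep being served during the update.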
And that’s exactly what makes Kubernetes so attractive. If someone is using Docker on a controller, there’s usually a reason—typically to interact with physical hardware. And in those setups, software updates are going to be needed again and again.
That makes sense.
Do you have a real-world example of a typical use case—maybe something related to predictive maintenance? What kind of requirements would such an application have?
Fabian
We already touched on a concrete use case: you have an entire plant full of machines that, in one way or another, provide data. Now, I’m not a specialist in classical automation technology—in how this data was traditionally evaluated to generate insights.
In the “new world,” you would develop a Kubernetes application that includes an OPC UA poller. This software queries specific data from machine A and writes it to a database.
It’s a collaborative process: a business owner, developer, data analyst, or engineer decides which data from the machine should be collected. The machine operator may also be asked to flip a switch to obtain some different data.
Then, the developer has to adapt the software running in Kubernetes so that it provides the newly required data points in real time.
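The poller described above is essentially a loop: read the configured data points, hand each sample to a sink. Below is a minimal, self-contained sketch of that pattern in Python. In a real deployment, `read_fn` would wrap an OPC UA client library (for example `asyncua`) and the sink would write to a database or broker; both are stubbed here so the sketch runs on its own, and all names and node IDs are invented for illustration:

```python
import time
from typing import Callable, Dict, List

def poll(read_fn: Callable[[str], float],
         node_ids: List[str],
         sink: Callable[[Dict[str, float]], None],
         interval_s: float = 1.0,
         iterations: int = 3) -> None:
    """Read the configured data points and hand each sample to the sink.

    Changing *which* data the machine delivers is just a change to
    `node_ids`; redeploying the container rolls it out.
    """
    for _ in range(iterations):
        sample = {node: read_fn(node) for node in node_ids}
        sink(sample)
        time.sleep(interval_s)

# --- stubbed wiring for illustration ---
def fake_read(node_id: str) -> float:
    # a real implementation would query the OPC UA server here
    return {"ns=2;s=Temperature": 71.5, "ns=2;s=Pressure": 1.2}[node_id]

collected = []
poll(fake_read, ["ns=2;s=Temperature", "ns=2;s=Pressure"],
     collected.append, interval_s=0.0, iterations=2)
print(collected[0])
```

Adding a newly required data point is then a one-line change to the node list, pushed through the normal deployment pipeline.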
The use case itself isn’t new; this used to be done using Excel spreadsheets. But the key difference is: in this new world, this change process – meaning what the machine should deliver and what ends up in the dashboard or data lake – can be implemented within minutes.
That means: someone makes a decision “on the fly,” the code is changed and pushed, and the change is live immediately.
The data engineer on the receiving end also gets the result in real time. They no longer have to wait four weeks and go through six support tickets for someone to update the schema in Excel before they can visualize it in Power BI.
Instead, developers now write their applications in a way that the data flows directly into modern ETL pipelines – into a big data pipeline that provides the data in near real time.
You can then use tools such as Power BI, Grafana or similar at the other end to make informed decisions immediately, e.g. in production.
Let’s say you notice a machine is getting too hot – now you can shut it down right away. In the past, it might have taken five minutes to even notice; today, it’s just 13 seconds. And that can make a huge difference when it comes to maintenance costs.
Okay, cool. That reminds me of a project: we have a partner, ALD Vacuum Technologies – a classic machine and plant manufacturer. They install their systems directly at the customer’s production site.
So, if I have a specific machine installed and I want to evaluate the melting parameters, but my manufacturer can do this better than I can, I would share my data with them through a defined interface.
That means I’d have a Docker or container instance running locally on my device, connected to Kubernetes. Through this setup, an API would be provided to implement my predictive maintenance use case, including the necessary application requirements.
Does that make sense in practice?
Fabian
I’d say Kubernetes is, in a way, the Trojan horse for the logic you just described – meaning for the application that’s actually being run.
The typical scenario is this: a machine operator has one or more plants with the corresponding equipment. If the company is large enough, it usually hires an external firm to develop suitable integration software or a specialized application. This application is then built in a way that makes it ideal to run on Kubernetes. We call this “deploying” it.
The actual logic – for example, for predictive maintenance – lies within the application itself, in the code written by the software developer. Kubernetes, in all of this, plays more of a transport or runtime platform role. You can “drop” your software on there, define a goal – for example that it should be available 24/7 – it is easy to monitor and there are industry standards that make it compatible with classic enterprise IT.
However, the business logic itself is developed by someone else, typically according to cloud-native principles, so that it runs seamlessly on Kubernetes. The trick here is that once such software has been developed to be cloud-native, the software artifact no longer differs in the way it is built, but only in its content, i.e. its business logic.
That means another company, a partner, or even an internal team can take over the application and continue developing it, as long as they understand how this “language” works. There are best practices, and you can look up how to build applications so they run on Kubernetes.
It used to be completely different. Every company delivered software in its own way, and the recipient – the plant – had to adapt individually. Today, with Kubernetes, there is this one common interface. If you develop for Kubernetes, I can just use your software.
[13:52] Challenges, potentials and status quo – This is what the use case looks like in practice
What exactly does Kubernetes save me compared to a traditional setup?
You just mentioned things like scalability and update cycles. But is there something more tangible, like a return on investment?
In other words: where are companies really losing time and money today – in their plants or in their deployments?
Fabian
I’m not really a big fan of quoting concrete numbers, because that’s not how we think about these topics. But I can try to outline where the potential lies — or, if you will, where the “gold is buried.”
Especially in larger companies with high compliance requirements, complex processes, and many team boundaries, Kubernetes plays to its strengths.
With Kubernetes and the corresponding patterns, you bring an entire ecosystem of automation into your organization — and one that can be audited against policies and compliance standards.
After a certain ramp-up time, you also end up with something like an audit code for each deployed application. This code can take over tasks that are often still done manually today — with Excel spreadsheets and checklists.
Another major advantage: monitoring.
In practice, we often see this: when someone in an enterprise system needs a new VM, 12 people across 17 tickets need to get involved. One handles the firewall on the left, another the one on the right, someone else does the IP whitelisting, then another configures the proxy, and so on.
These are all things that can be solved with centralized services. There’s already a range of tooling around Kubernetes for this, such as secrets management with HashiCorp Vault and similar solutions.
If you fundamentally integrate this approach into your workflows, you end up with a clean audit trail, because suddenly everything is traceable and verifiable. At the same time, you save a huge amount of time — because you’re no longer fixing problems manually, but writing logic and automation that work reliably in the background.
I could go on for a while. But the point is: Kubernetes opens new doors.
As I mentioned earlier, the distribution of software by third-party providers is becoming much easier. These days, almost everything comes as a Helm chart, and providers are already developing their solutions in that format.
So instead of needing complex adjustments, you can just take the software as it is and deploy it — because it’s standardized and works faster and more easily as a result.
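“Taking the software as it is” usually means installing the vendor’s Helm chart and overriding only what differs in your environment via a values file. A sketch (the keys shown are illustrative; the actual keys depend on the vendor’s chart):

```yaml
# values.yaml -- overrides passed to: helm install <release> <vendor-chart> -f values.yaml
replicaCount: 2
image:
  tag: "2.4.1"            # pin the vendor's release
ingress:
  enabled: true
  host: app.factory.example.com   # hypothetical hostname
resources:
  requests:
    cpu: "1"
    memory: 512Mi
```

Everything not overridden falls back to the chart’s defaults, which is what makes this kind of third-party delivery standardized.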
So basically standardized, preconfigured setups, right?
One follow-up: many companies still need to build up that Kubernetes know-how internally. You mentioned teams earlier — that must also save on personnel costs or at least training effort?
Or would you say that’s more of a strategic issue?
I can imagine that just keeping up with all the developments in this world — in this subculture — takes a lot of time.
Fabian
Yes, absolutely. Especially when you come from the IT/OT world, the challenge is even greater. In that space, you have to manage not only classic IT, but also the “old world.” Machines have operating systems that need to run for 30 years — and the software for those machines has to work for just as long.
That leads to very different ways of thinking about problems.
That’s why the move to Kubernetes is a real leap — and it always takes a few people who are willing to leave old ways of thinking behind and replace them with new ones. It doesn’t help to just send someone to a Kubernetes training course and expect everything to work afterward. That’s not how it works.
It’s a bit like when virtualization first came onto the scene. It took years before people really understood what they could do with it. And even today, we’re still discovering new possibilities based on the hypervisor principle.
It’ll be the same with Kubernetes. Over time, new doors open and new potential emerges. At some point, you just have to get started — but it remains a big shift.
Before we talk about your projects and what you do:
Are there any common mistakes you often see when companies start using Kubernetes or run it in production?
You’ve supported quite a few large-scale customer projects – are there recurring pitfalls that teams should watch out for?
Fabian
One widespread misconception is that Kubernetes is just the “more complex” or “harder” version of Docker Swarm, and that Docker Swarm is the “easier” version of Kubernetes.
That’s not really true. While the two systems share some of the same promises, they don’t do the same thing.
What we often see is that the “learning delta” – the hurdle to getting started with Kubernetes – seems too steep at first. So people decide early on: “We’re not doing Kubernetes, we’ll go with Docker Swarm instead.”
And almost without exception, they end up regretting that decision. Two years down the line, they redo everything from scratch.
Another common mistake is thinking you can keep working with your old mindset.
Modern software development means setting up your delivery processes in a way that, in theory, allows you to deploy new versions multiple times a day.
That might sound terrifying in industrial environments – but in sectors like banking, it’s already a reality. And the stakes are just as high. If a payroll run disappears because someone deployed an update, that’s existential.
The point is: You can’t expect Kubernetes to behave “like it used to be.”
A lot of companies believe they can just hire someone like us, and things will magically work. Sure, we can take care of a lot. But if you really want to embrace how software will be written and how IT problems will be solved in the future, you need to fundamentally rethink your approach. And that’s something many aren’t doing.
Right, totally.
And if you’re listening and thinking, “Hey, we’re facing similar challenges,” feel free to reach out.
Fabian, I’ll include your LinkedIn link in the show notes so listeners can connect with you directly. Or just check out your website – that’s ayedo.de, right?
You also have a community there and contacts for these topics.
[20:15] Solutions, offerings and services – A look at the technologies used
So, what exactly do you do at ayedo? I’d like to get a sense of it – especially compared to other players in the market or hyperscalers. But let’s start broadly: What do you specifically offer?
Fabian
Okay, so at ayedo, what we offer is what we call “Managed Software Delivery.” We help companies that want to develop and run their software using Kubernetes – as well as those that simply need Kubernetes to reliably operate third-party software.
Our approach is flexible. We provide everything needed to build modern software – hardware, virtual machines, network transit, S3-compatible storage, and much more. You could say we function a bit like a cloud provider.
But unlike AWS, where you can just click everything together in a user interface, with us, it’s all about direct dialogue. Our customers come to us with very different requirements. Every piece of software is unique – with its own quirks, needs, and infrastructure demands.
We have a lot of conversations around how we can make these individual setups compliant. That’s actually one of the reasons why demand for us is growing. The world is moving in a direction where the combination of words like “data” and “America” in the same sentence is increasingly problematic – especially for many companies in Europe. And that’s exactly where we come in. We’re a fully European provider based in Germany, and we master the entire modern cloud stack, including Kubernetes. That’s why many companies – from agencies developing webshops to public authorities building citizen-facing applications under Germany’s Online Access Act – are turning to us. They need secure, GDPR-compliant environments, and we support them all the way: from development to hosting.
So basically full compliance, backed by a European technology stack.
And I think what really stands out to a lot of your customers is your service level agreement. Can you share a bit more about that – especially in terms of the decision to outsource versus build internally, which ties into the business case we just discussed?
Fabian
Exactly. We offer a service level agreement that was originally standardized at 99% uptime. But honestly, that doesn’t impress anyone anymore. Our default SLA is now more in the range of 99.5% or even 99.9%, depending on the service. That applies to our own cloud, to supported cloud providers, and even to on-premises setups running on customer infrastructure. Of course, we’re open to discussing more nines after the decimal if needed.
But in my view, the truly relevant aspect of an SLA isn’t the number you see on paper – even if there’s compensation involved in worst-case scenarios. What really matters is the work we do behind the scenes to make sure those numbers are consistently met. Our mission is to ensure a high level of availability – even in scenarios where other systems might start to break down.
That’s why we also offer 24/7 support. And when you call us, you’re not routed to a call center on the other side of the world – you’re directly connected with people like me who actually built the systems. Our support isn’t outsourced; it comes straight from the team that developed the infrastructure. We know the setups inside and out and can offer real, concrete guarantees – especially for the complex scenarios our customers often bring. Because no two setups are alike.
Impressive. And your customer base really is diverse. A quick look at your website shows names like Liebherr, HADES, T-Systems, Teltec – some really major players relying on your solutions.
That’s quite something. Even though we can’t go into detail here on the podcast, you still get a solid impression of the caliber of companies you’re working with.
Fabian
Yes, we’re honestly very proud to be working with those companies. These projects often involve incredibly exciting challenges – especially within large organizations.
What’s the most exciting part for you personally? Is it more about the people, or the technical architecture? What excites you most about working on these projects?
Fabian
For me, it’s definitely the people. That’s something I really have to emphasize. We’re still a small company – you could even say we’re still a startup. We have a very flexible, agile culture and we move fast. So when we collaborate with these large enterprises, you can really feel a different kind of loyalty, a deep sense of connection, and genuine passion for what they do.
Many of our customers also carry a national responsibility. And you can tell how seriously they take that. Emotionally, it’s something entirely different than just launching another webshop. Of course, we enjoy doing that too – but there are some projects that stand out. You end up going places – metaphorically speaking – that feel almost magical.
And then there’s the business aspect: for me personally, and for us as a company, it’s incredibly exciting to witness the shift that’s happening. A shift away from the hyperscaler mindset and total outsourcing – and toward a mentality of “we want to build things ourselves again, and we want to own what we build.”
I love that. I’ve always been someone who preferred putting the software and hardware into the customer’s basement rather than my own. It’s a privilege to be developing technology for that kind of mindset.
And this change is what makes our business case so exciting, because it is becoming more relevant for upper management again, with topics such as open source, digital sovereignty and location on the agenda. So it’s about more than just technology. It’s about control and trust.
There are only a handful of providers who can truly deliver on that. And we hope to be one of them – someone who can meet these requirements for a wide range of customers, even the most complex ones.
Absolutely. And earlier, you mentioned “location”. Do you mean primarily on-premise deployments, or also physically being on-site with your clients?
Fabian
Both, absolutely. But let’s be honest: these days, we’re rarely physically on-site – because the infrastructure often doesn’t require it anymore. We connect remotely, we communicate online. When we are on-site, it’s usually for workshops, hackathons, or simply to meet face-to-face at least once.
Very cool that you’ve secured these clients – that really speaks to your capabilities. I have to admit, I had to do a bit of research myself before our conversation to get a sense of who’s active in this space. You hear about AWS Compute offerings, IONOS, STACKIT and others that serve similar topics. It’s a massive market. But it’s great to see you’ve clearly carved out your own niche.
And one final question:
Is regulation a factor here – for example NIS2? Is that giving you a tailwind right now and helping you scale? Are you seeing momentum?
Fabian
Yes, definitely. NIS2 has been one of those looming topics for a while. I’ll be honest, I’m not someone who reads legal texts in detail, so I’m not 100% sure if it’s officially in effect in Germany yet or still on the way. But of course, we’ve been thinking for years about what kind of impact it might have on our work.
And you can really feel the change – for example in the vendor audits we go through. The questions being asked today are very different from what we heard one or two years ago.
Like I mentioned earlier: we try to take these complex requirements seriously. We actually enjoy working with Kubernetes and our apps in environments where the standard playbook no longer applies – places with special demands and unique setups.
[27:23] Transferability, scaling and next steps – Here’s how you can use this use case
What’s next? What do you see coming in the Kubernetes space, technologically speaking? And what are you working on – any new features or offerings we should know about? I’d love to hear more about that.
Fabian
I’ve had the feeling for years now that we’re heading toward a kind of liberalization of infrastructure. In Germany, there’s an incredible amount of hardware – actual “metal” – sitting in the basements of data center operators and IT providers, both large and small.
Ten or twenty years ago, it was all the rage to set up your own tech in the basement, buy IP networks, and build out your own transit routes. A lot of that infrastructure still exists today – even if it’s not actively used anymore. And now there are some initiatives targeting exactly that. The Sovereign Cloud Stack is a good example. The idea is to make this existing infrastructure usable again.
So that in the future, it becomes possible to run a Kubernetes cluster with a local IT provider who has the right hardware—and that it’s no longer a huge effort. Instead, there would be a platform layer where I can plug my hardware into a central marketplace. I can say: “These machines are located in Frankfurt, they have a certain network connection, here are the technical specs.” And the platform layer spins up a Kubernetes cluster on that setup, which can then be offered externally.
How exactly that will work in the end, I don’t know. But I’m convinced that we need an alternative to today’s hyperscaler dominance. And within a few years, a solution will emerge. Whether it’s OpenStack, SCS, or something else remains to be seen. But for Europe, this is clearly the technological direction we’re heading.
Yes, really exciting. I’ll definitely include a few links in the show notes. If anyone’s interested in those initiatives, feel free to check them out. Sorry for the interruption!
Fabian
No problem! Maybe just briefly about what we’re working on at ayedo: We’re actively contributing to this development. Our cloud platform is designed to empower users to build their own cloud – including on-premise. And we’re aiming for an experience that makes developers and users feel like they’re working with AWS, Azure, or IONOS – with the same convenience features, self-service options, role-based access control, and more.
Right now, we’re rolling out new services like a Managed Identity Provider, Managed Object Storage, and a Managed Container Registry – all part of our “easy-to-go” cloud solution. We offer these services both in our public cloud and as private or enterprise versions for on-prem deployments.
And there’s still a lot more to come in the years ahead – because these are exactly the kinds of challenges that keep popping up in our world.
Impressive! Do you already have a product name for your new solution, or is that not official yet?
Fabian
We just try to call things what they are. So, for example, we pragmatically say “managed identity provider.” I guess I’m a bit old-school in that sense. Why give something a fancy name when it’s already pretty self-explanatory and clear—why make people think twice about what it actually is?
In the end, it’s a hosted Keycloak instance, which is an identity provider. And with those two words, most people searching for this kind of thing immediately know what they’re dealing with. There’s no need for some imaginative product name.
Very nice!
Fabian, thanks so much for being here today—for all the great insights from your projects as well. I really appreciated how practical and grounded your input was. Maybe we’ll hear from you again in a follow-up episode, perhaps together with a partner or customer.
For now: Thanks again! And I’ll give you the last word.
Fabian
Thank you for having me!
I’d like to end with a quote we sometimes print on our T-shirts:
“No backup – no mercy.”
That’s a great closing line!
But—just one last question came to mind: Why the aliens on your website? I’m sure some of our listeners will be checking you out now—and they’re maybe going to wonder. Is there a story behind them?
Fabian
Let me think… So: when it comes to our branding, we try to connect with people on an emotional level.
If you follow me on LinkedIn, you’ll see—I mostly post memes. We simply want to engage with a positive tone—and the aliens are perfect for that.
They help us communicate complex things that you couldn’t easily explain in a classic diagram.
Our team members are even immortalized in those alien illustrations—they’re part of the storytelling. We’re not quite where we want to be yet, but it’s all about recognition, personality, and not looking like just another boring “IT shop.”
Well, I can say: it works!
So: Go check out the aliens on ayedo.de—and Fabian, once again, thank you so much for joining us today. Take care, bye!
Fabian
Thanks! Bye!