Transmitting IoT Data with EMQ: MQTT Broker in Industrial Practice Explained


Listen to the IoT Use Case Podcast on Spotify.
Listen to the IoT Use Case Podcast on other platforms.

IoT Use Case Podcast #169 – EMQ

In Episode 169 of the IoT Use Case Podcast, host Ing. Madeleine Mickeleit speaks with Stefano Marmonti, Director EMEA at EMQ Technologies, about the role of MQTT in industrial IoT projects.
Using real-world examples, the episode explores how companies use the EMQX MQTT broker to enable efficient data flows—while reducing development efforts thanks to preconfigured integrations.
Topics range from smart metering and predictive maintenance to emerging standards like the Unified Namespace.

Podcast episode summary

Efficient IoT data flows – how to get MQTT right in industrial practice

In this episode, Stefano Marmonti (EMQ GmbH) explains why MQTT—being a lightweight and efficient protocol—plays a central role in industrial IoT. He shares how clear topic structures and ready-to-use integrations can save both time and costs.

Whether it’s sensors, machines, or gateways – anyone operating hundreds to millions of field devices faces the challenge of bringing data efficiently into the backend. The EMQX broker offers more than 45 out-of-the-box connectors, including Kafka, SAP, and Snowflake – helping many teams avoid building custom connectors from scratch.

What to expect in this episode:

  • How an agricultural sensor manufacturer saved 60% in telecom costs using MQTT
  • Why MQTT is ideal for use cases like smart metering, predictive maintenance, and bidirectional communication
  • How Store-and-Forward, lightweight data formats, and structured topics ensure reliability
  • What’s behind the Unified Namespace – and how Sparkplug enables plug-and-play connectivity
  • Which trends – from data pipelines to AI-based data routing – will shape the future

This episode is for anyone building or operating professional IIoT systems – from OT teams and platform architects to IT strategists across industrial and infrastructure sectors.

Tune in now to learn how MQTT + EMQX help deliver data exactly where it’s needed – securely, efficiently, and at scale.

Podcast interview

Have you already deployed your own IoT platform, connected multiple devices, and implemented your first use cases? Then you’re likely familiar with the challenges:
Massive volumes of data are being generated – and handling them has to stay within budget. The demand for performance and compute power keeps growing, and scalability must be ensured without overwhelming your team.
At the heart of every IoT platform lies a critical yet often invisible component: the MQTT broker. Many of you have heard of it – it manages which data from which device flows to which destination. You can think of it as the data hub at the center of your IoT communication.
Many teams start out using an open-source solution – but as systems scale and go into production, the requirements extend far beyond just technology.
How can you tell when your broker is costing more than it’s delivering?
What are typical pitfalls during implementation?
And how can a professional broker help reduce both hardware and transmission costs?
What kind of architecture helps avoid these conflicts from the start?
That’s what we’re talking about today with Stefano Marmonti, Head of Sales EMEA at EMQ Technologies – one of the world’s leading providers of MQTT broker solutions. EMQ is trusted by HPE, AWS, SoftServe, VMware, and numerous industrial and infrastructure operators in the midmarket.
Practical insights from real-world projects – as always, more at www.iotusecase.com or in the show notes.
Let’s go!

Welcome to the IoT Use Case Podcast – and hi Stefano!
How are you today?

Stefano

Hi! I’m doing great – just got back from vacation, so I’m fully recharged and motivated.

Fantastic! And where are you located right now? Where is your company based again?

Stefano

I’m based in Munich. We have two offices – one in Erfurt and one in Frankfurt, specifically in Eschborn. I’m currently working from home, but in Munich.

Great – shout-out to your colleagues in Munich and of course to the other locations as well!
Let’s start with the basics: Why does an IoT platform even need an MQTT broker? And what’s happening in the market right now that explains the rise of both open-source and commercial solutions?

Stefano

You need an MQTT broker for efficient communication between edge devices, sensors, and machines and the backend. Today, that backend is typically the cloud – but it doesn’t have to be. It could also be a database or a streaming service like Kafka. To make use of data generated in the field, MQTT is a very good protocol – secure, reliable, and ideal for protected data transmission.

Right – as you mentioned earlier, the MQTT broker is almost like the central nervous system of communication. It acts as an intermediary to exchange messages.
How does this actually look in practice – especially for manufacturers running their own IoT platforms?
Where exactly is the MQTT broker positioned, and what systems is it connected to? Can it connect directly to a sensor, for example? Let’s try to paint a picture of that architecture together.

Stefano

Right – that really depends on the use case.
Let’s take sensors as an example: There are sensors that support MQTT – they can speak the protocol. These sensors can connect directly to the broker and send messages to it.
The broker itself usually runs centrally – either in a data center or in the cloud – and then forwards the data or stores it. For example, it might write the sensor data into a time-series database to archive measurements.
The broker can also forward these sensor messages to streaming services like Kafka, to analytics systems, or to ERP systems that further process the data.
So it’s about transporting data from the edge device to the backend – and of course the other way around. It’s bidirectional. A backend system can also send MQTT data streams back to a machine or sensor.
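
To make that architecture tangible, here is a minimal sketch of the flow using Python and the paho-mqtt library: a device publishes a measurement to the broker, and a backend process subscribes and hands it on to downstream systems. The broker address, topic layout, and payload are illustrative assumptions, not details from the episode.

    import json
    import paho.mqtt.publish as publish
    import paho.mqtt.subscribe as subscribe

    BROKER = "broker.example.com"  # hypothetical broker address

    # Device side (would normally run on the sensor or gateway):
    # publish one measurement as a compact JSON payload.
    reading = {"device_id": "rain-sensor-01", "rain_mm": 1.4, "temp_c": 18.2}
    publish.single(
        topic="site-a/field-7/rain-sensor-01/telemetry",
        payload=json.dumps(reading),
        qos=1,                      # at-least-once delivery
        hostname=BROKER,
    )

    # Backend side (a separate process): subscribe to all telemetry and hand it
    # on to downstream systems (time-series DB, Kafka, analytics); here we just print.
    def on_message(client, userdata, message):
        print(message.topic, message.payload.decode())

    subscribe.callback(
        on_message,
        topics=["site-a/+/+/telemetry"],  # '+' matches a single topic level
        qos=1,
        hostname=BROKER,
    )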

Exactly. Before we go too deep into technical details, I’d like to ask about something I’ll call the “problem statement” for many of your customers.
You work with a wide variety of companies – not sure if we’re allowed to name any today –
but what is their main pain point? The number of connected devices keeps increasing – what are the actual challenges your customers are facing?

Stefano

There’s definitely a technical pain point – the question of how to ensure reliable data transmission. But there’s also a business problem.
Device manufacturers and machine builders are trying to create added value – usually by building their own IoT platforms.
Through these platforms, they can manage their devices for customers and offer better services – for example, predictive maintenance. This helps them detect potential issues early and optimize their sensors and machines.
These additional services play a central role in connection with IoT platforms. They enable providers to offer new or improved services and create real value.
Of course, it always depends on the use case.
If you look at the automotive sector – the idea of a connected car, for instance – you can generate a lot of new services simply by having access to vehicle data.

Okay. And if we stay on the business side of things – it’s also about making the IoT platform scalable, right?
You just mentioned the importance of data usage – but from your experience, where does scalability reach its limits?
At what point do IT teams start looking around for other options? Can you name some of the typical business-related problems that tend to come up?

Stefano

That really depends on the use case again.
Let’s start with a simple example: smart meters – a classic use case for MQTT. Modern smart meters send their data packets to the broker via MQTT.
In Europe, that might not be quite as widespread yet, but we have customers in India who are rolling out over 50 million smart meters.
That’s an enormous number of devices to manage – and that’s where security and scalability become critical. Because it won’t stop at 50 million – the plans go up to 200 or even 600 million devices.
So the scaling happens very quickly, simply due to the volume.
And while smart meters don’t generate massive data volumes individually, the concept applies elsewhere too – like vehicle data or data from e-bikes.
These smart vehicles today are capable of opening multiple connections and sending data – not just one stream.
That creates huge amounts of data – and to handle that, you need a high-performance, stable backend infrastructure.

So you’re mainly working with large customers – or at least with those who have a lot of devices in the field that they want to connect?
If we stick with IoT platforms: one example was smart metering, but I imagine there are many large industrial automation companies out there who’ve been running their own IoT platforms for years now.
They’re integrating sensors, motors, actuators –
and often have thousands of devices in operation.
That’s your core market, right? Did I get that right?

Stefano

Exactly. We work with many German machine builders who are doing just that – especially in the sensor space or with smaller machines. Sensor data is a major focus, and these companies want to push that data into a powerful backend infrastructure. That’s exactly what we support.

So does that mainly apply to manufacturers with larger fleets?
Is an MQTT broker – and the kind of solution you offer – less relevant for companies with only a few connected devices? Or would you say there’s a certain threshold where it starts to make sense?

Stefano

That takes us back to market segmentation.
There’s obviously the open-source market – many companies start with an open-source broker, build their platform on top of it, and then realize: If they want to commercialize the whole setup, they’ll need more – like better scalability or dedicated support.
Some also plan to roll out their platform globally, across different countries or regions – and that’s when support becomes a key topic.
There are many dimensions to this.
So you can’t simply say: “If you only have a few devices, you don’t need a broker.”
The important thing is to ensure secure and efficient communication.

These days, many companies have dedicated teams for this kind of work – monitoring, development, and rollouts.
Depending on the organization, that could be five, 40, or even more people.
Would you say there’s a tipping point where an open-source MQTT broker hits its limits?
Where it starts to make sense to consider a commercial MQTT broker solution – like yours?

Stefano

Yes, the shift from open source to a commercial solution is usually driven by technical reasons.
Our enterprise product, for example, offers significantly greater scalability – it’s based on a slightly different architecture and can support hundreds of millions of devices.
Support is another major factor.
And on top of that, we offer extensive backend integration: more than 50 out-of-the-box connectors to existing systems – including Kafka, Snowflake (just added), SAP, SQL and NoSQL databases, analytics systems, and many more.
That’s a clear differentiator – because many customers try to build these connections themselves. They end up creating custom connectors.
That can work – but it often comes with significant effort.
And that’s exactly the kind of effort a commercial solution can help you avoid.
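
To illustrate what “building a connector yourself” typically involves, here is a deliberately minimal, hand-rolled bridge: it subscribes to broker topics and writes each message into a database. SQLite stands in for the real target system and the broker address is a placeholder; production-grade connectors additionally need reconnect handling, batching, schema management, and monitoring – exactly the effort the out-of-the-box connectors are meant to remove.

    import sqlite3
    import paho.mqtt.subscribe as subscribe

    db = sqlite3.connect("telemetry.db")
    db.execute("CREATE TABLE IF NOT EXISTS readings (topic TEXT, payload TEXT)")

    def on_message(client, userdata, message):
        # Persist every incoming message; a real connector would batch writes,
        # map payload fields to a schema, and handle errors and retries.
        db.execute(
            "INSERT INTO readings (topic, payload) VALUES (?, ?)",
            (message.topic, message.payload.decode()),
        )
        db.commit()

    subscribe.callback(
        on_message,
        topics=["plant-1/#"],           # '#' matches all remaining topic levels
        hostname="broker.example.com",  # hypothetical broker address
    )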

What’s the typical pitfall in projects like that? Are there any best practices where you’d say: “It really doesn’t make sense to develop this in-house”?
What actually happens in that situation?
Can you explain what it really means to build your own connector?

Stefano

As I mentioned: for us, a connector is the interface from the broker to another system.

So for example: A smart meter sends data to the broker via MQTT – you might think, “Okay, I could build that myself?”

Stefano

No – it’s not just about the endpoint. It’s about the connection from the central broker – the platform – to an external system.
That external system could be an MES, a control system, or an ERP system like SAP.
A typical scenario might look like this: A machine sends data through the broker to the MES or ERP system. Something happens there – and the system then sends data packets back to the machine.
This interface between the central broker and the backend system –
that’s exactly what we provide out of the box. And often, it comes directly from the specific use case.
Many companies start building an IoT platform that includes a broker – and then they quickly realize: There are many use cases they want to implement – like predictive maintenance, for example. And for that, you often need a connection to the MES system.

Yes, exactly. So let’s say I have a motor or another component in the field that hits a wear threshold.
Through my IoT platform, the device would then send an error log or an alert, for example.
Ideally – if everything is properly integrated – the ERP system could automatically create a service ticket, so the service team can respond right away.
That’s the kind of integration you’re talking about, right?

Stefano

Exactly.

Now you also mentioned connectors on the OT side. Many automation providers and manufacturers come from a hardware-centric background.
Is it a lot of engineering effort to connect OT devices properly?
I can imagine some devices already support MQTT natively, or can be connected via OPC UA. But with older machines, you probably need some sort of retrofit solution, right?
How complex is that? And is it also a common issue that companies try to build their own OT connectors just to get access to the data?

Stefano

Exactly – that’s the classic OT challenge.
You rarely have a greenfield environment.
Instead, you’re dealing with a lot of existing equipment that’s already in place.
At EMQ, we offer a software-based gateway that converts various industrial protocols into MQTT topics.
That means: many legacy machines don’t speak MQTT – newer models might, but many don’t.
Through our gateway, protocols like OPC UA, PROFIBUS, PROFINET, Siemens S7 (SIMATIC), and many others can be translated into MQTT topics and sent directly to a broker.
We support over 80 industrial protocols.
Of course, that doesn’t cover 100% of all machines – but we estimate we can address around 90% of the machines still running in the field.
That way, we help bridge the gap to the OT world and integrate older machines into modern IoT infrastructures.
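
As a purely conceptual sketch (not the NeuronEX implementation), a protocol gateway essentially polls values over a legacy protocol and republishes them as MQTT topics. The fieldbus read below is a hypothetical placeholder function, and the topic layout and broker address are assumptions.

    import json
    import time
    import paho.mqtt.publish as publish

    def read_plc_tag(tag_name):
        """Hypothetical placeholder for a protocol-specific read
        (Modbus, Siemens S7, OPC UA, ...)."""
        return 42.0  # dummy value

    while True:
        value = read_plc_tag("spindle_temperature")
        publish.single(
            topic="plant-1/line-3/cnc-07/spindle_temperature",  # assumed layout
            payload=json.dumps({"tag": "spindle_temperature", "value": value}),
            qos=1,
            hostname="edge-broker.local",  # hypothetical edge broker
        )
        time.sleep(5)  # poll interval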

I think you call that the Neuron Gateway, if I remember correctly?

Stefano

Exactly – it’s called NeuronEX.

I’ll drop a link to that in the show notes in case you’d like to check it out.
It’s essentially the industrial edge data hub from EMQ – a really interesting solution.

[14:52] Challenges, potentials and status quo – This is what the use case looks like in practice

Do you have any examples from real projects where customers saw a clear return on investment?
Or where something was truly at stake for the business?

Stefano

Yes, we have several examples. One of them is a manufacturer of agricultural equipment. They build rain sensors that are installed in the field – specialized sensors for agriculture, for example, to help control tractors and other machinery.
Let’s stick with the rain sensors: These are devices you place in the field. They measure rainfall, temperature, and other values – and send that data.
Previously, they did this using SIM cards – a SIM card module that would transmit the data to the backend, where it was processed further.
In this case, they’ve now completely reworked their system and developed a new generation of devices.
These still transmit data via SIM cards – but now they use MQTT.
MQTT is extremely efficient, and the data volume is so small that they’ve been able to save around 60% of their previous telecom costs. That obviously saves a lot of money – and at the same time makes things much more efficient.

Interesting. And why is the data volume lower in this case?

Stefano

Because the data is compressed, which makes the transmission more efficient – so the data volume is reduced.

Okay – compression. Is that compression happening on your commercial MQTT broker, or where exactly does that take place?

Stefano

MQTT as a protocol is inherently extremely lightweight.
Compared to other protocols, the amount of data transmitted over the line is significantly lower.
MQTT is designed to transmit data efficiently and securely – especially in light, compact formats.
That’s what makes the difference. And in this case, it was clearly a cost factor.
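
As a rough illustration of why payload format matters on metered SIM connections, the following snippet compares the same reading encoded as verbose JSON and as a packed binary payload. The values are made up and not taken from the project above.

    import json
    import struct

    reading = {"device_id": 4711, "rain_mm": 1.4, "temp_c": 18.2}

    verbose = json.dumps(reading).encode()           # human-readable JSON
    compact = struct.pack("<Hff", 4711, 1.4, 18.2)   # 2-byte id + two 4-byte floats

    print(len(verbose), "bytes as JSON")   # roughly 50 bytes
    print(len(compact), "bytes packed")    # 10 bytes
    # MQTT itself adds only a few bytes of fixed header per publish, which is
    # what keeps the per-message cost low on metered connections.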

Since we’re already talking about data volumes – do you have another example?
The one just mentioned was more of a case where data doesn’t need to be transmitted constantly.
Are there also extreme cases where you need to transmit a lot of data – for example via OPC UA over MQTT?
Do you encounter those kinds of scenarios too? Or is it mostly data sent at certain intervals – not really hard real-time?

Stefano

MQTT is not a real-time protocol, but it allows for near real-time communication – that’s important to understand.
A typical example, especially from the OT world:
In a factory, data is continuously collected. There’s a broker running – for example at the edge – which communicates with a central broker.
The special feature of MQTT is the so-called “store and forward” principle:
If the connection between the factory and the central system is interrupted, the edge broker continues to collect the data locally. As soon as the connection is restored, the buffered data is forwarded.
Other protocols might be able to re-establish the connection – but MQTT has a decisive advantage due to this “message queuing”:
Data is buffered and reliably sent later.
One concrete example: A producer experienced an outage in the factory and lost all the data.
It took more than a week to recover – and that, of course, involved significant costs.
Scenarios like that can be avoided with MQTT.
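
The MQTT mechanism underneath this is the combination of QoS 1 (or 2) with a persistent session: the broker queues messages for a known subscriber while it is offline and delivers them after reconnect. Buffering between an edge broker and a central broker is an additional broker/bridge capability on top of that. Below is a minimal sketch of the client-side part, assuming paho-mqtt 2.x; broker address and client id are placeholders.

    import paho.mqtt.client as mqtt

    client = mqtt.Client(
        mqtt.CallbackAPIVersion.VERSION2,
        client_id="backend-consumer-01",  # fixed id so the session can be resumed
        clean_session=False,              # broker keeps subscriptions and queued messages
    )

    def on_message(client, userdata, message):
        print("received", message.topic, message.payload.decode())

    client.on_message = on_message
    client.connect("broker.example.com", 1883)   # hypothetical broker
    client.subscribe("plant-1/#", qos=1)         # QoS 1 messages are queued while offline
    client.loop_forever()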

Yes, of course – that makes sense.
Do you have any additional figures from your projects that back this up?
Sorry if I’m digging a bit deep – I know it’s not always easy.
Or was the example just mentioned your most illustrative case?

Stefano

That again is a technical topic.
With our enterprise broker, we chose a specific architecture that is particularly scalable – and compared to other brokers, it can be operated with about 40% less hardware.
That’s especially interesting from an operational standpoint.
Of course, customers can address this financially by simply providing more hardware and compensating for any performance issues that way.
But we had a concrete case where we analyzed an existing system and compared it to ours – and we were able to reduce hardware by 40%. That obviously cuts operating costs and has a positive impact on return on investment.

[19:51] Solutions, offerings and services – A look at the technologies used

Do you have any best practices you’d like to share as we wrap up?
Especially when starting a new project – it doesn’t always have to be a migration project – there are surely some key points to keep in mind.
What are some of the lessons learned or recommendations you’ve gathered over the years that you can share with our listeners?

Stefano

One key question I always try to clarify right at the start is:
How do you want to operate the broker in the end?
At EMQ, we offer all common deployment models:
You can install our broker yourself and run it on your own hardware.
You can host it in your own cloud environment.
Or you can use our fully managed service, which we provide across all major hyperscalers – a service that you can simply consume.
From a project perspective, it’s important to consider early on which operating model is the most efficient.
Some customers, for security reasons – especially due to internal IT security requirements – say the broker has to run in their own data center or private cloud. If that’s the case, then that’s how it is.
But there are many use cases where this doesn’t apply – because an MQTT broker doesn’t store any data by default.
That means we don’t store anything ourselves. We just forward the data to a target system, where it is then stored.
And that’s exactly why a managed MQTT broker service in the cloud works so well.
The data is simply sent from the device or sensor via the managed service to the desired target system.
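
For the managed-service model, the device side typically only needs the broker endpoint, credentials, and TLS. Here is a minimal sketch of such a connection with paho-mqtt; hostname, credentials, and topic are placeholders that would come from the provider’s connection details, and 8883 is the conventional MQTT-over-TLS port.

    import ssl
    import paho.mqtt.publish as publish

    publish.single(
        topic="fleet/device-001/status",
        payload="online",
        qos=1,
        hostname="your-deployment.example-managed-mqtt.com",  # placeholder endpoint
        port=8883,  # MQTT over TLS
        auth={"username": "device-001", "password": "change-me"},  # placeholders
        tls={"cert_reqs": ssl.CERT_REQUIRED},  # verify the broker certificate
    )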

Okay, that’s really interesting. Just so I don’t forget – I’ll be covering the topic of OPC UA over MQTT in an upcoming podcast episode with the OPC Foundation.
It’s about contextualized data that can also be accessed from the cloud via MQTT.
But that’s a deeper topic for another day – I just wanted to flag it briefly.
If you haven’t subscribed to the podcast yet: that episode’s coming soon, so stay tuned!
What I definitely took away from this episode:
MQTT is extremely fast and perfectly suited for all near real-time use cases in industry – but also for other applications like smart meters or rain sensors, like you mentioned.
One last technical question:
What’s the story behind the Unified Namespace? I’m not sure if you already touched on it – can you explain again how the Unified Namespace ties in with MQTT and your offering?
It’s essentially a central, structured data directory – how does that fit into your solution?

Stefano

A Unified Namespace is basically a directory where data – or rather so-called topics – are structured.
You can think of it like a phone book: if a company defines a Unified Namespace, it follows certain rules and a predefined naming convention.
So it’s a kind of blueprint – not a product, but a concept.
It’s about defining topic structures that different stakeholders – for example, device manufacturers – can follow.
This ensures that devices can be integrated quickly and communicate reliably over MQTT.
Especially in industrial environments – particularly in manufacturing – the Unified Namespace is highly practical.
If manufacturers follow these structures, integration becomes much easier.
There’s usually a middleware layer involved – this can be a bit more technical – for example, Sparkplug.
Sparkplug can be implemented by device manufacturers so their devices speak MQTT and send messages in the correct structure – matching the Unified Namespace.
This enables a plug-and-play principle: a new “Sparkplug Ready” device can be added straight away – and it communicates with the MQTT broker instantly because the structure is already defined.
At EMQ, we can decode Sparkplug messages – they have a slightly different structure, but we can parse and forward them like any other.
Again, it’s important to note: the Unified Namespace is a concept, not a software product. But it’s a crucial foundation to ensure smooth operations in complex IIoT environments.
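
To show what such a predefined naming convention can look like in practice, here is an illustrative ISA-95-style topic path that every publisher would follow. The concrete hierarchy and payload are assumptions for illustration; Sparkplug B additionally prescribes its own topic namespace (spBv1.0/<group>/<message type>/<edge node>/<device>) and a binary payload format, which this sketch does not reproduce.

    import json
    import paho.mqtt.publish as publish

    # Illustrative ISA-95-inspired hierarchy:
    #   <enterprise>/<site>/<area>/<line>/<device>/<message type>
    topic = "acme/plant-munich/assembly/line-3/press-07/telemetry"
    payload = json.dumps({"pressure_bar": 182.5, "cycle_count": 10412})

    publish.single(
        topic=topic,
        payload=payload,
        qos=1,
        hostname="broker.example.com",  # hypothetical broker
    )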

Yes, I understand. I’ll be sure to include a few additional links on Sparkplug in the show notes.
At the core, Sparkplug adds a layer of semantics – so you can extract metadata from your data streams, which your solution can then work with.

Stefano

Yes.

Okay. So what else can I actually buy from you?
If I go to your website – I think it’s all listed under the “Platform” section – there’s the EMQX platform, your commercial MQTT broker.
Does that mean I purchase a license? How does that work?

Stefano

That depends on the chosen deployment model.
If you want to operate the product yourself – meaning a self-managed deployment – then yes, you buy a traditional license.
If you use our managed service, it’s based on a subscription model – either a monthly or yearly payment plan, similar to a mobile contract.

I see. And are there certain features you can add depending on your needs?

Stefano

No, with us it’s simple: there’s only one product – with the full feature set.
That applies to both the self-managed enterprise version and the managed service.
It’s technically the same product – the only difference is in the operating or deployment model.
So the decision is simply: Do I want to run the product myself, or use it as a fully managed service?
But in terms of functionality, there are no differences – everything is included.

Okay, I see. I’ll include that in the show notes as well.
You can take a look there to explore the different deployment options – and I believe your pricing is pretty transparent as well.
And if a customer now wants to integrate a specific system – say, Snowflake – how does that work?
Is that integration already included in the platform?

Stefano

Exactly. Let’s take Snowflake as an example – there are two ways to go about it.
First, we offer SDKs – Software Development Kits – so customers with the technical know-how can develop their own integrations.
One of our logistics customers is doing exactly that – they have a specific target system and are building the integration themselves.
Second, we provide out-of-the-box connectors. Right now, we support around 45 target systems – including Kafka, SAP, and more recently, Snowflake.
We added Snowflake because one of our largest customers asked for the ability to send data directly from sensors or production environments into Snowflake.
Our development team in Stockholm built that connector – and now it’s part of our core product and available to all customers.
If we see a broader demand for a certain system, we decide whether to build the connector ourselves – or, if a customer has a specific need, we develop it together.
Snowflake is a good example – the connector is complete and now publicly available to all.

Fascinating!
Do you have any key figures you can share?
I know you have some listed on your website – how many users, how many connected devices, and so on.
Can you tell us more – how large is your community right now?

Stefano

Yes – we now have several hundred million connected devices.
Our open-source software – as well as the enterprise version – has been downloaded nearly 25 million times.
We can’t name all our customers publicly – many large companies use our open-source product in production but prefer not to be mentioned for various reasons.
One example: we recently spoke with an Indian social media provider – they run their platform with around 45 million users communicating through our software.
That shows how broadly our product is used – even if we can’t always share exact numbers.
But it’s clear: we serve millions of users across both open-source and commercial deployments.

[28:55] Transferability, scaling and next steps – Here’s how you can use this use case

So, what can we expect from you in the future?
I know you’re also working on AI-driven use cases – keyword: MCP over MQTT.
Can you walk us through that a bit? What’s currently shaping the scene and the market?
And what should we be looking forward to from your side in the coming months or years?

Stefano

One major topic is, of course, artificial intelligence – AI.
We’re currently working on extending our products to support target systems where data can be sent directly from the broker to AI models.
We’ve already added some initial features that allow developers to interact directly with an AI – for example, when writing code within our platform.
The AI can help generate code – that’s already technically possible.
There’s a lot more to come in this direction. Not everything is available yet, but a lot is in the pipeline to help optimize and improve existing workflows.
The second big topic, in my view, is data pipelines – meaning fixed, continuous connections that allow for permanent data transfer.
Think of it like an oil pipeline: steady data streams running continuously.
We still see a lot of potential here and have some ideas on how to make it even more efficient in the future – with MQTT, but not only with MQTT.

Quick follow-up:
Why do you think data pipelines will become such a big thing?
What’s driving the market in that area?

Stefano

It’s clearly heading toward real-time communication – or at least near-real-time.

Really interesting!
That really sounds like a topic worthy of its own future podcast episode – we’ll definitely have to dive into that again in more detail.
I found today’s conversation really insightful – especially how you explained things so practically and always brought it back to the business case from the customer’s perspective.
A quick message to our community: Be sure to take a look at what EMQ is doing – especially MQTT as a managed service or in other deployment models.
I’ll link everything important in the show notes – there you’ll find a clear overview of what EMQ offers.
With that, thank you so much from my side – and I’ll give you the final word.

Stefano

I can only encourage the community – take a look at what we’re doing.
There are many exciting topics and countless use cases that can be implemented more efficiently, elegantly, and quickly with MQTT.
Today we’ve only scratched the surface – the range goes far beyond that: transport, logistics, the public sector – all incredibly exciting areas.
I truly believe this will remain a major topic for years to come – and I hope we can do another podcast on it soon.

Absolutely! Thanks again, Stefano – and have a great rest of the week.
Take care, bye!

Stefano

Thanks – bye!

Please do not hesitate to contact me if you have any questions.

Questions? Contact Madeleine Mickeleit

Ing. Madeleine Mickeleit

Host & General Manager
IoT Use Case Podcast