

KUKA platform iiQoT: Cloud-based IIoT software enables visualization and troubleshooting



Listen to the IoT Use Case Podcast on Spotify.
Listen to the IoT Use Case Podcast on other platforms.

IoT Use Case Podcast #65 - Device Insight + KUKA

Podcast episode summary

Episode 65 at a glance (with clickable timestamps):

  • [07:33] Challenges, potentials and status quo – This is what the use case looks like in practice
  • [14:10] Solutions, offerings and services – A look at the technologies used
  • [27:26] Results, business models and best practices – How success is measured
  • [32:18] Transferability, scaling and next steps – Here’s how you can use this use case

We present a novelty in the field of IIoT in robotics. Everyone probably knows them: the orange KUKA robots in every application, shape and size. Now KUKA is taking the approach of making the data usable in real time via its new IIoT platform KUKA iiQoT. This platform is for all robot fleets – whether I have five of them, a hundred or thousands. In this podcast episode, we look at exactly what added value the new solution brings to individual use cases. We talk about how to take robotic fleet uptime to a new level. IoT expert Device Insight is one of our guests.

What does iiQoT mean? ii = industrial intelligence, iQ = intelligent capability, and IIoT is the familiar abbreviation for the Industrial Internet of Things.

Podcast interview

Thomas, you are CTO and Managing Director at Device Insight. You are a subsidiary of KUKA and, in essence, an IoT specialist: you build IoT solutions and connected products, are a Microsoft Azure Gold Partner, and much more. Is that a fair way to put it?


Well summarized. At Device Insight, we’ve been involved with IoT since our founding in 2003, even though the term didn’t exist back then. Basically, we have always had the vision of networking machines, plants and devices and building applications on top of them. We do this in very different industries. That means we help our customers implement Industrial IoT solutions, and as CTO I am primarily responsible for software development – that means both the development of our own products and components, but also system integration, which we do for customers to build complete end-to-end solutions.

We also worked for KUKA a few years ago. This then turned into something more: We are a subsidiary of KUKA and support KUKA colleagues in IoT projects. The exciting thing is that we are bringing together this expertise from the shop floor and automation from KUKA and our expertise in the cloud and IoT.


Richard, a hearty hello in your direction. You’re here today as Senior Vice President R&D Platform Applications & Tools at KUKA-Deutschland GmbH. KUKA is, of course, known as the world’s leading supplier of intelligent automation solutions. You build robots from the cell to the fully automated system, with sales of around 2.6 billion euros, 14,000 employees and headquarters in Augsburg. R&D Platform Applications & Tools: Can you explain what exactly you’re involved with – is it strategic development of the platform?


Yes, I am also involved in the strategic development of our IoT platform. I look forward to talking about that as we go on. However, this bulky term Applications & Tools also stands for much more in our portfolio. For me, it's about the tools in R&D: we're working on software solutions that make configuring and programming our robots as easy as possible. A very important topic is simulation, i.e. the virtual world for automation, which makes it even easier to commission systems that you don't yet have on your shop floor. And then there are the applications the robot is ultimately meant to carry out – the robot is not an end in itself. We want to weld, palletize, glue and assemble – that is what I am concerned with: software development for precisely this area of business.


To start with, what is your vision in this direction? You talk about applications, about tools – where are you going with KUKA?


The mission for 2030 is to lower the entry threshold for automation. We want to make it available to everyone and also make it possible to experience. As our CEO, Peter Mohnen, puts it, automation should become as easy as working on a PC. In other words, the current state is more of a playground for engineers and technicians – and this playground for a small number of very well-trained people should become so simple that it can be applied and realized everywhere. For this purpose, we develop simulation and application software that enables a simple setup: easy programming and configuration, as well as interconnection with other peripherals such as sensors, cameras, welding timers and grippers. In other words, everything that is needed to implement the robot application. To make this as easy as possible, there is of course a lot for us to do, a lot to develop. But it's also great fun to make this smart automation tangible.


There are also various new features and possibilities that your IIoT platform brings with it. Thomas, we're talking about real-world use cases here to explain the technologies behind them in a simple and understandable way – what have you brought us today, and which project are we looking at in detail?


There have been colleagues of mine with you in the past who have presented various things from Device Insight. There, together with RAFI, we looked at a lightweight shop-floor digitization solution and, from a completely different industry, we also looked at coffee vending machines. Today we are talking about KUKA's iiQoT product – in short, a secure and scalable IoT platform for KUKA robots. The commonality from our point of view is the term “connected products,” which represents the possibility for manufacturers not only to sell their established products, but also to enrich them with digital solutions. By now, we are familiar with the connected car, the connected washing machine and even the connected toothbrush – but in industry, of course, things are somewhat different, which is why Richard is here today.


Exactly; let me explain what that looks like for most of our customers. We find a very interesting state of affairs in industry: the so-called OT world is not networked with the Internet. This means that the devices are networked with each other on the shop floor. However, there is no gateway to the outside, i.e. no possibility to roll out software updates over the air, for example, as is completely common today, or to access the devices from the outside. Typically, the OT world is well connected internally, just not with the outside. That means, for example, that updates are delivered via what is called a sneakernet instead of the Internet: service technicians really do walk over in their sneakers – hence the name – and hand out upgrades on USB sticks. This is obviously not up to date, and it is something we and our customers want to change. That is the core of why we are working with Device Insight on this IoT platform: to make possible all the things that connected products and connected robots enable.

Challenges, potentials and status quo - This is what the use case looks like in practice [07:33]

You said, among other things, that it is about accessing the robots from the outside or sending the data to the outside, which also benefits various trades. What exactly are the challenges your customers face on a day-to-day basis, and what potential do you see that can be leveraged – beyond the OT world, which is already networked internally but not to the outside?

The main challenge we have today is that an incredible amount has to take place on foot. The customer does not have a compact overview of his robot fleet, does not know the states of the individual automation solutions with the robots and cannot view operating parameters.

Today, we have solutions like the smartPAD, our operating devices, really hanging on the fence outside by the robot, which constantly display the program sequence and other information. This means that if someone wants to see how well things are going in the line in the event of a fault or monitoring – to feel a heartbeat of the line, so to speak – they have to go there on site and take a look. This is not up to date and we want to change that with the IoT platform, so that you can access it from the outside, remotely, centrally. With remote access, a personal appearance will no longer be necessary to respond or record data and information.


Is this also about commissioning? When a robot like this is delivered, that it can be put into operation as soon as it is delivered?


Exactly, there is a definite push towards Simple Setup. We have pre-configured files that go from production to an unboxing event at the customer's site, making commissioning easy; there is just no direct remote access yet. There is the possibility to register relatively quickly, but you are already on site and have to plug in cables. This step is still a normal step, like unpacking your own devices: unpack, plug in, charge, like a smartphone. After that, remote access is possible relatively quickly. Today, we have pre-installed systems; the so-called KDC, KUKA.DeviceConnector, is pre-installed, so the gateway to the connected world is in place – and if a customer then also uses our IoT product, he has the possibility to access the functions we deliver from the outset.


Thomas, it’s all about states, operating parameters and so on – do you have any insight into what data is particularly exciting for you and for KUKA that you want to use to build the solution on?


First of all, there are different categories of data. They are divided, for example, according to the frequency in which they are transmitted. A software version does not change every few minutes, but only when it is updated. Then there are operating parameters; a CPU load of the controller or a memory load – these are much higher frequency. This is all state data that is transmitted on a regular basis. This is important, for example, for asset management – to know what software statuses I have in my fleet, for example. Also overview functions – how many robots of which type do I have? That’s where this data is very central, of course. But also features like condition monitoring – for example, if I want to be notified when a robot is permanently running at a high CPU utilization. Another category is event-driven data, where you capture messages that show up in the robot controller. Classically, these are error and warning messages, but also things like change logs. So things, for example, that are changed on site, which can then be recorded centrally and thus enable central tracking. This way you can see if things have been changed somewhere unexpectedly that you don’t want yet.
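The condition-monitoring idea mentioned here – notify when a robot runs at high CPU utilization for a sustained period – could be sketched roughly as follows. This is an illustrative Python sketch, not KUKA's or Device Insight's actual implementation; the class name, threshold and window size are assumptions:

```python
from collections import deque

class SustainedLoadMonitor:
    """Flags a robot whose CPU utilization stays above a threshold
    for an entire observation window (names and defaults are assumed)."""

    def __init__(self, threshold=90.0, window=5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # rolling window of recent readings

    def add_sample(self, cpu_percent):
        """Record one reading; return True only when the window is full
        and every sample in it exceeds the threshold."""
        self.samples.append(cpu_percent)
        return (len(self.samples) == self.samples.maxlen
                and all(s > self.threshold for s in self.samples))

monitor = SustainedLoadMonitor(threshold=90.0, window=3)
alerts = [monitor.add_sample(v) for v in [95, 96, 97, 80]]
# alerts == [False, False, True, False]: only the third sample completes
# a full window above the limit; the fourth reading clears the condition.
```

A single spike does not fire the alert; only persistently high load does, which matches the "permanently running at high CPU utilization" case described above.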


You mentioned CPU utilization, for example – does that mean the robot is running at a performance limit that at some point is no longer healthy? Can you think of it as: you have to look more closely there, because otherwise there will be stoppages?


Exactly; that shouldn’t be the case, but those are exactly the things you want to detect: that the robot is no longer working as it should.


This can be the computing capacity we provide in the controller or in additional computers sitting next to the controller. But motor currents and the like are also very interesting parameters that you would like to see. This ensures that the health state of the system is monitored. We want to optimize the fleet, so that the robots are up to date with the latest software; make troubleshooting efficient; maximize uptime; and provide a clear, quick overview for the customer as well as for us and the experts responsible for maintenance.


When I dive into the system – referring to the iiQoT platform – what are the requirements? You have built a platform at Device Insight.


First of all, it was important to us that you get a clear overview of the data in a compact form. We offer an incredible number of variables. I just mentioned the KDC, which makes the variables retrievable. As a first requirement, we then of course want to make sure that this happens securely. The data should be available, securely transferable – we are talking about a cloud application, after all – and easily consumable for the target group. It is also important that the platform scales – because it is not always just about one robot, but sometimes about a whole fleet. There are a few customers who have many thousands of robots. We sell a good 40,000 robots a year, so the installed base is correspondingly large. But there are also some customers with only a few robots. Depending on who we want to connect, the platform must be able to adapt to the needs of the customer. It is also important to us that we scale well in the total number of robots.


Exactly, so scalability ultimately means that you rely on a technology that is expandable and adaptable to a wide variety of customers.

Solutions, offerings and services - A look at the technologies used [14:10]

Then let’s dive into the solutions and how exactly the platform works. You built the KUKA iiQoT platform – that’s your own IIoT platform for the robot fleets, whether it’s 5 robots or 200 robots. It’s about implementing troubleshooting, optimizing the fleet and creating a fast, efficient overview of the robots.

You said it’s also about a so-called connector that’s already installed. Let’s work our way from data acquisition and hardware, through processing, to analysis – Thomas, how does the data acquisition from the robots work?


We are not starting from scratch there; there is already a software module for the robot controller called KUKA.DeviceConnector, KDC. This provides the data in a standardized way, at least as far as the protocols are concerned – OPC UA. It also supports pub/sub, i.e. actively publishing the data, for example via MQTT, so that we don’t have to build proprietary or exotic solutions. Instead, we can stick to best practices that are common throughout the industry. This is the starting point.
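To make the pub/sub idea concrete, here is a minimal sketch of how a telemetry reading might be packaged for an MQTT publish. The topic layout and field names are invented for illustration; they are not the actual KDC schema:

```python
import json
import time

def build_telemetry(robot_id, values, timestamp=None):
    """Package operating parameters as a JSON message plus an MQTT topic.
    Topic layout and field names are illustrative assumptions."""
    topic = f"robots/{robot_id}/telemetry"
    payload = json.dumps({
        "robotId": robot_id,
        "timestamp": timestamp if timestamp is not None else time.time(),
        "values": values,
    })
    return topic, payload

topic, payload = build_telemetry("kr-0042", {"cpuLoadPercent": 37.5})
# With a client library such as paho-mqtt, the publishing side could then do:
#   client.publish(topic, payload, qos=1)
```

The point is simply that the controller pushes standard JSON over a standard protocol, rather than a proprietary format that every consumer would have to reverse-engineer.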

The next step, to pass the data to the iiQoT platform, is a module we call the Cloud Connector, because the robot does not send directly to the cloud, but to an intermediate layer. There are several reasons for this. On the one hand, it's another layer of security: as a rule, you do not want to connect the robot directly to the Internet. The Cloud Connector can use the active OPC UA interface here, for example, to execute functions. It is also the layer that ultimately authenticates to the cloud platform. Here we can use ready-made components and services provided by a cloud provider. So everything that has to do with certificates, updateability and authentication, we don't have to build ourselves. Instead, we use things that already exist.

Looking ahead, this Cloud Connector could also take on more functions, such as preprocessing the data. That way, you wouldn't have to send everything to the platform, but could work with pre-aggregated data. It would also be the place to do that smartly. Today, that is not yet the case.


Okay, it’s also possible that customers have already pre-processed data themselves, which you might record and then couple with yours. Do you use this classic IoT protocol MQTT for data transmission?


Exactly. I had already indicated that we also use ready-made services from the cloud; specifically, Microsoft Azure's IoT Hub. The devices are then connected via MQTT, both on the pub/sub side from the Device Connector towards the Cloud Connector and from there towards the cloud. Via the persistent connection, we also have the ability to call and trigger functions from the cloud.


Thomas, you said that security also plays a big role in development. Can you elaborate on that a little bit more? It matters to many who use such systems. Why is it so important here with robots in particular?


I think security is important everywhere with IoT, and especially in industry. Customers have a strong requirement to maintain control over their data and ensure that no one else can see their data. That’s why it’s an important issue not only at the field level, i.e. how to communicate with the robot, but throughout the entire chain. It should be everywhere when it comes to IoT or Industrial IoT.


It’s really the case that our customers are sensitive about this. From the data that the controller provides, or that we read out from the robots, it is in principle possible to draw conclusions about what is going on in production. At the beginning, the perceived risk that the customer sees is often very great – but the customer discussions show that, thanks to our expertise and our solutions, the trust is there to implement the solution cloud-based.


As Thomas mentioned, the Cloud Connector is exactly the feature I need to realize this decoupling and data encryption, right?


That’s right.


We’ve talked about data acquisition. This data is now collected in the Cloud Connector. The next step would be processing: the data from the robots and the individual fleets has to get into the cloud – and ideally the evaluation happens there as well. How do I manage all the robots, and what do you use in the cloud to process the robot data?


If you follow the chain further – what happens to the data – it takes different paths, depending on the category described. Basically, all data is first checked for plausibility and assigned to the correct customer. The whole security concept is behind that, of course. Looking at the state data: if you will, we're building a digital image of each robot in the cloud. I don't want to overuse the term digital twin, but basically we have a digital image of each robot that is kept up to date by this data. So for each robot we know: which software versions are installed? Which modules or tech packages? What hardware? This is ultimately the basis for the customer to get an overview of his fleet and to be able to search and aggregate – for example, specific robots with a specific status. State data that changes more frequently is of course also stored as time series data, because you want to be able to look back over the history: how did that behave? What does the CPU utilization curve look like for the last few days? The messages are indexed so that you can search for them specifically. Depending on the feature behind it, all of this data takes different routes into different databases and pots, and is partially pre-processed and aggregated.
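The "digital image" described here – latest state merged in place, higher-frequency readings kept as a time series – can be sketched minimally like this. Field names and structure are assumptions for illustration, not the iiQoT data model:

```python
class RobotShadow:
    """Minimal digital image of one robot: slow-changing state is merged
    in place, high-frequency readings are appended as time series."""

    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.state = {}        # e.g. software version, installed tech packages
        self.timeseries = []   # (timestamp, metric, value) tuples

    def apply_state_update(self, update):
        """Keep only the newest value per state key."""
        self.state.update(update)

    def record_metric(self, timestamp, metric, value):
        """Append one high-frequency reading, preserving history."""
        self.timeseries.append((timestamp, metric, value))

    def history(self, metric):
        """Return the recorded (timestamp, value) history for one metric."""
        return [(t, v) for t, m, v in self.timeseries if m == metric]

shadow = RobotShadow("kr-0042")
shadow.apply_state_update({"softwareVersion": "8.7.1"})
shadow.apply_state_update({"softwareVersion": "8.7.2"})  # supersedes 8.7.1
shadow.record_metric(1000, "cpuLoadPercent", 41.0)       # history is kept
```

This captures the two storage behaviors mentioned in the answer: state queries ("which version is installed?") read the merged snapshot, while curve views ("what did CPU load do over the last days?") read the appended history.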


The bottom line is that you take the categories of data – change logs, states, CPU data, simple condition monitoring data – and run plausibility checks on them. You look at what type of data it is and put it into the individual data pots where it belongs; then I can see and manage it in the asset manager and evaluate it in the next step?


Exactly. The plausibility check also means: if we expect a certain data type, a date or a floating-point number, then the value must match the schema. This ensures that you do not ingest malformed data and distort the dataset.
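Such a schema-based plausibility check could look roughly like this in Python. The schema and field names are purely illustrative assumptions, not the platform's actual validation:

```python
from datetime import datetime

# Expected type per field -- an illustrative schema, not KUKA's actual one
SCHEMA = {
    "cpuLoadPercent": float,
    "softwareVersion": str,
    "lastBootTime": datetime,
}

def plausible(record, schema=SCHEMA):
    """Reject records containing unknown fields or values whose type
    does not match the schema (bools are not accepted as numbers)."""
    for key, value in record.items():
        expected = schema.get(key)
        if expected is None:
            return False  # unknown field: do not ingest
        if expected is float:
            # accept int or float readings, but never bool
            if isinstance(value, bool) or not isinstance(value, (int, float)):
                return False
        elif not isinstance(value, expected):
            return False
    return True
```

A record like `{"cpuLoadPercent": 37.5}` passes, while `{"cpuLoadPercent": "high"}` or an unknown field is rejected before it can reach the databases behind the platform.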


If you go one step further – we have the Cloud Connector; all the data from the OT world has been brought in. In the next step, we are in the asset manager: I now see all the robots, whether there are 5 or 2,000 of them; they are onboarded there and it is a matter of data analysis. This is where a certain intelligence comes in, which you bring along on the part of Device Insight. How does that work exactly? Do you work with your customers and ask what is needed for a dashboard? Are there any ready-made patterns?


You can divide this into different levels. For displaying data that customers want to analyze themselves, the system offers a great deal of flexibility. This means customers can also compile dashboards themselves. The basic dashboards were created in cooperation with customers or according to their requirements. Ultimately, a customer can configure a lot here, drill down in the system from the overview into the details, and also create condition monitoring rules. The step further – towards automatic analysis of data, towards prediction or predictive maintenance, also with AI and machine learning methods – is just being implemented. That's why we can't say too much about it today. But the goal, of course, is to evaluate the data not only in its current state, but also to project into the future – to predict maintenance or to detect an anomaly: the robot is behaving abnormally. This relieves the customer of work, because he does not always have to work his way through dashboards, but instead receives pointers on where to take a closer look. Another interesting aspect with data is integration, as even iiQoT cannot and will not be able to do everything. It is not an island, but embeds itself in the system landscape – both at the customer and at KUKA. It's not called the Industrial Internet of Things for nothing – the Internet also implies a certain amount of communication and data exchange. We want to address that as well. A good example of this, with clear added value for the customer: he can create a service ticket directly from the system if messages appear or there is a problem.
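A user-configurable condition monitoring rule, as described, might be modeled as a small data structure evaluated against a robot's latest values. This is a hedged sketch; the rule format and field names are invented for illustration:

```python
import operator

# Comparison operators a user could pick in a dashboard (assumed set)
OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def evaluate_rules(latest_values, rules):
    """Return the messages of all rules that fire for one robot's
    latest values; metrics missing from the values are skipped."""
    fired = []
    for rule in rules:
        value = latest_values.get(rule["metric"])
        if value is not None and OPS[rule["op"]](value, rule["threshold"]):
            fired.append(rule["message"])
    return fired

rules = [
    {"metric": "cpuLoadPercent", "op": ">", "threshold": 90, "message": "CPU load high"},
    {"metric": "memUsedPercent", "op": ">", "threshold": 95, "message": "Memory nearly full"},
]
alerts = evaluate_rules({"cpuLoadPercent": 93.0, "memUsedPercent": 60.0}, rules)
# alerts == ["CPU load high"]
```

Keeping rules as data rather than code is what makes them editable from a dashboard: the customer changes thresholds and metrics without anyone redeploying software.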


That means that in the case of a service, I have the interface to my own individual systems – be it different CRM systems, you’re sort of tying that in?


Exactly. We certainly have the ability to make that visible at the customer's site and also connect the customer's systems. But what Thomas just referred to is that we use our knowledge database Xpert and our CRM system Salesforce, and there we provide an integrated service. If the customer has a message or a problem and we want to help solve it, it is no problem at all to generate such an Xpert ticket and integrate the data via Xpert into Salesforce. Our service experts can thus immediately access all data centrally. This eliminates the need for someone to show up on site first. The problem can be read out immediately, with the time series of what exactly happened with the robot. And the colleagues from Customer Service can react accordingly.


Could this be understood as a kind of app; a software that runs individually, called Xpert, through which I can ensure for the maintenance case that someone from KUKA processes the service order directly?


Exactly, Xpert is a knowledge database, which we also provide as a service and which is used a lot. By integrating the IoT platform, our CRM system Salesforce and Xpert, all this data management – the information that sits with the customer – is immediately available to us. It is also linked to the solutions for various fault states stored in Xpert. The service colleagues have many solutions ready in Xpert, for the customer and for themselves. There are different entry points into the database, which makes it very easy to access the right information and help. For us, it's important that downtime is minimal, and that's much easier with such central availability than it was before – when you actually had to go to the machine and maybe even create a data dump. One example of this: it does happen that our robots fail – very rarely, of course, because the availability of today's automation systems is really very high, but it does happen from time to time. Then you can create a so-called trace file in case of an error. This is basically like an oscilloscope: it provides detailed recordings of robot movements – axis angles, axis speeds, torques, motor currents. You can look into all of that, like when you're at the doctor's office and he puts on a stethoscope. With these trace files, we are similarly able to provide immediate assistance with more complicated problems. That used to be a huge effort, because you had to fetch the data from the machine and then transport it on a USB stick. There was simply a time delay, and the downtime of course costs money.


That means your customers have probably known about Xpert for a while, have been using it, and now they have the ability to quickly leverage that data and make it available directly through the cloud. That’s certainly interesting for things like insurance claims. No matter what you have to prove, you have a data stack that you can always access? You no longer have to run around with a USB stick and manually gather everything together.


It is also interesting to note that we also have information about cycle times and the energy that the systems consume, and in the future we would like to make optimization offers based on this.


To add to the previous topic: walking around the factory floor with a USB stick is actually an issue you should see in the much bigger context of digitization. This makes the process of collaboration between customer and manufacturer much leaner and more efficient. You don't send out emails and call back again; you provide the relevant information directly. Especially where a failure costs enormous amounts of money, speed is worth a lot.


I think that is a very important point: this process of collaboration is also the added value in the end – the IIoT offers precisely this possibility of networking and collaboration.

Results, business models and best practices - How success is measured [27:26]

Working with customers also often produces learnings or best practices that can be used and shared. Can you give some insights on what your experience has been in the projects?


Basically, we do some advance planning. We conduct many customer interviews to find out where the customer's added value can lie. First, it is important to understand where the pain points lie in order to derive the added value. We follow normal, typical procedures, like the Scrum methodology – I think everyone in software development knows it. We also have product owners who use the interview results and requirements to prepare a backlog and write user stories, which the developers then work through – all of this runs in a straight line, so to speak.

What’s exciting now is that when we do demos with customers, or give them the opportunity to test the system – not only do we demonstrate, but the customer can also test – in the end, completely different learnings come out of it. I think that’s a very big step forward, because presentation or a demo is always one-sided – but making it tangible and collecting the feedback directly and reacting to it in an agile way: that’s what we’re doing together here. This works extremely well.


Thomas, you’re then involved in developing that from the product development side. How do you incorporate feedback on the platform directly?


It’s very important to do that at early stages. You shouldn’t lock yourself away for three years and then emerge with the finished platform. We have an explicit evaluation platform that is not just a demo system, but one that incorporates real data. That’s important in IoT: you have to go through the whole chain, because in the end the data sometimes looks a bit different than you might imagine. That is what we have implemented here. Another learning that we can share – one that we have at Device Insight as a whole, but also in this project: build on existing standards and, above all, use components and services that are already there. You shouldn’t reinvent the wheel in the wrong places, but use things that already exist in order to focus on the essentials and work out the added value for the customer.


We have created an episode of our IoT Industry Bartalk together with KUKA specifically on the topic of standards. That’s exactly what it’s all about: how can you use standards not only on the shop floor but also in the IIoT world to build scalable systems? What’s still to come; where do you want to go and what features can we look forward to?


We have already mentioned asset management, condition monitoring, preventive maintenance and remote monitoring. These are the four use cases we discussed. What will come in the future is condition-based maintenance. We want to go for so-called anomaly detection. With the data and time series, it is not only possible to look backwards; with artificial intelligence and domain knowledge – how well we know our systems, along with their typical misoperations and possible failures – it is also possible to calculate forwards. We would like to monitor the condition so closely that we can react in time, for example by replacing a necessary component. Or provide assistance with cycle time analysis or energy consumption on how to set things up better. As a robotics manufacturer, it's important to use your domain knowledge to place improvements with the customer that the customer otherwise can't immediately leverage – which is now enabled by the data. Backup management is also still an issue, because many of the controllers contain very important programs; this is, of course, a common requirement. We want to standardize our customer journey. We have already connected the IoT platform to Xpert and Salesforce. And there are other topics like my.KUKA and so on – so that it becomes ONE experience for the customer … ONE login to interact with the different offers and opportunities.
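Anomaly detection on time series, as envisioned here, can in its simplest form be a rolling z-score test: flag a reading that deviates strongly from recent history. This is a minimal illustrative sketch, not the method planned for iiQoT; the threshold and baseline window are arbitrary choices:

```python
import statistics

def is_anomalous(history, new_value, z_limit=3.0):
    """Flag a reading that lies more than z_limit standard deviations
    away from the mean of the recent history (illustrative threshold)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean  # flat history: any change is notable
    return abs(new_value - mean) / stdev > z_limit

# Hypothetical motor-current baseline in amperes
baseline = [2.0, 2.1, 1.9, 2.0, 2.05, 1.95]
```

A reading of 2.5 A against this baseline would be flagged, while 2.05 A would not; in practice, domain knowledge about typical misoperations and failure modes would go into far richer models than a single z-score.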


Predictive maintenance – everyone is talking about it, but this is the supreme discipline that comes on top when you have the data available in the appropriate data quality. Going into intelligent evaluation, bringing up new features and developing that with the customer.

Transferability, scaling and next steps - Here's how you can use this use case [32:18]

Thomas, you are on the road as Device Insight with very different projects. You also collaborate with the RAFI company, with Costa Coffee, have countless other projects. How’s that for this platform? You developed it specifically for KUKA; but can it be transferred to other customers?

The challenges and the content are indeed quite transferable, since the framework conditions on the shop floor and across industry sectors – unlike in other contexts such as smart homes – are quite similar, especially the basic technologies and approaches. Especially with the data processing chain – how do I collect data, what happens to it, how does it get into the cloud? – it is a similar construct, which can also be found in other systems. That's also why we're supporting and working with KUKA on the product. A lot of use cases – think of asset management, overviews, aggregation and time series – are relatively similar after all. Beyond that, it gets into more specific use cases that might involve a robot; predictive maintenance, for example, looks quite different from one area to another. But the base always looks very similar. We have established best practices in this area, which ultimately represent a reusable architecture.

That’s the basis then, if I’m a machine tool manufacturer, or whatever environment: I can work with you and develop that together. The holistic IIoT platform for robot fleets is a good example of what is already possible today. If you want to go into more detail at one point or another, you can find the contact details in the show notes.

Please do not hesitate to contact me if you have any questions.

Questions? Contact Madeleine Mickeleit

Ing. Madeleine Mickeleit

Host & General Manager
IoT Use Case Podcast