In the 50th podcast episode, Thomas Frahler (Business Lead IoT at Microsoft Germany) predicts that IoT will go mainstream within a few years, which should further increase the importance of the Microsoft Azure ecosystem. Using the use case of IoT in transportation, Michael Schöller (Schmitz Cargobull AG) demonstrates how an IoT rollout and subsequent scaling can be realized quickly across a broad corporate landscape; in the second use case, Philip Weber (CloudRail) joins as an expert for the scalable IoT connection of any machine.
Podcast episode summary
This podcast episode presents two use cases built on Azure, Microsoft’s cloud platform, which can be used to collect information at scale, analyze it, process it directly for further use, and trigger appropriate actions.
Michael Schöller brings along a practical application from Schmitz Cargobull: Using IoT in road transport, sensors and a cloud connection continuously record and transmit, among other things, the temperature and position of the valuable freight as well as the maintenance status of the vehicles. This makes it possible to monitor compliance with cold chains without resorting to time-consuming and error-prone manual processes. Schmitz Cargobull was so convinced by its chosen cloud partner Microsoft in the process that it has since dispensed with local data centers and moved all internal processes to the cloud. “We are pursuing a cloud-first strategy!” All in the service of ending data silos, a development Philip Weber also sees coming; he presents CloudRail’s services as the second use case:
CloudRail specializes in making machines IoT-ready and bringing them into Microsoft’s Azure cloud. Different requirements in different industries, a wide variety of technologies, older systems (retrofit): thanks to abstraction, none of this is a problem for CloudRail. With a uniform data model, the use of the IODD database, and strict reliance on standards such as the IO-Link protocol, they manage to implement the IoT plug-and-play idea effectively. A first proof of concept can be shown in just a few hours. Those who then want to expand the result into the enterprise additionally benefit from CloudRail’s centralization strategy: if the machines are already connected to the cloud for IoT purposes, then regular updates, for example, can be controlled centrally, which also matters for security.
For the user, IoT thus brings numerous advantages. Predictive maintenance: I know at all times how my machines are doing and when and where they need to be serviced. Manual tracing is no longer necessary, and downtime due to unforeseen incidents between maintenance intervals is reduced. This applies to the transport vehicles equipped with Cargobull’s telematics sensor technology as well as to the software status of any machines connected by CloudRail. The ability to manage assets remotely also eliminates travel time in many cases, and the capital freed up can be used for new investments. So that companies can easily convince themselves that the use cases also work in their own operations, both Schmitz Cargobull and CloudRail have suitable packages ready for the first steps, which can then be built on if desired. Microsoft’s Thomas Frahler agrees that offering bundles, i.e. packaged services, is essential to fully address growth and market needs.
Podcast interview
Welcome to the IIoT Use Case Podcast. I welcome Thomas Frahler from Microsoft, Michael Schöller from Schmitz Cargobull, and Philip Weber from CloudRail. Hello to you all! Glad to have you with us. Let’s start with a quick round of introductions. Thomas, could you briefly introduce yourself: what exactly are you responsible for at Microsoft and within the Azure platform, and what kind of customers do you have?
Thomas
In my role, I get to call myself Business Lead for IoT; I always like to translate that as the business manager for IoT for the German market at Microsoft. I’ve been with the company since 2010 and also started in the IoT area, only we didn’t really call it that back then. At that time, we were not yet so much on the cloud platform, but rather in the whole embedded world. Today, of course, it’s a completely different world. Where we are very strong is, of course, in mechanical engineering, manufacturing, and automotive. We see a lot of customers and companies starting projects because they want to develop further, be it gaining efficiencies internally or actually expanding their product portfolio. That’s where we’re very active, and I’m happy to be part of it in that capacity.
Very nice. I will come back to this in detail in a moment. Michael, would you like to introduce yourself and tell us something about yourself and your core business?
Michael
With pleasure. My name is Michael Schöller. I have been with Schmitz Cargobull AG since the end of 2017, where I am responsible for the Infrastructure and IT Services division. There, we are responsible for the digital transformation of Schmitz Cargobull. The focus is primarily on technologies from Microsoft – key words Office 365, Azure Synapse, Azure Data Explorer, Event Hubs and many more. So we are really pursuing a cloud-first strategy, which we have also implemented very successfully together with Microsoft in recent years.
Then I would hand over to you, Philip. Would you like to close the introductory round and also say something about yourself and your company?
Philip
Gladly. Philip Weber, I’m a partner manager at CloudRail; I look after Microsoft and also Microsoft partners who build solutions based on Azure services. As you said earlier, a very good intro to our solution: we have developed a technology that allows you to easily connect industrial machines and equipment with Azure services. On the one hand, this is a gateway with our operating system installed on it; on the other hand, it is a device management solution, which I will tell you more about later. With this, I can very easily connect new systems via OPC UA, but also older systems, to Microsoft Azure.
I’ll ask about that in detail in a moment. Before we get to that, very briefly by way of introduction: Thomas, it’s insane what developments Microsoft has made with the Azure cloud platform! I think almost everyone I talk to from the industrial environment works with you as a user or as a partner. And, of course, the share price: within the last five years, I think you’ve increased fivefold, from $60 to just under $300 currently. Everything is working out for you! I would transition from market developments to the Azure platform to get started. You just said that back then you didn’t call it IoT; what’s happening in the market today, and where do you see the potential, especially in IoT for industry?
Thomas
Where I actually see a lot of movement right now is that we’re slowly growing out of this early-adopter phase. IoT is becoming mainstream, more and more. Various studies, ones we have done ourselves or ones you get from IDC and other providers, all say that 80 or 90 percent of companies have already engaged with the topic. Of course, they are at different levels of maturity, but the motivation and intention to implement projects is at a very, very high level.
Where are the developments going; what is in demand on the market? I think simplifying adoption through, for example, packaged offerings, more out-of-the-box, ready-to-go solutions; I think we’ll hear more about that later. That’s something that’s coming, of course, because projects, as we know, can be relatively complex, but you want to be able to start relatively quickly, and start with something good. Another topic that has been prominent for some time is, of course, the heavy involvement of artificial intelligence, so that better data-based networking of end-to-end processes and production flows can emerge in the company. Then many things can be achieved, such as reduced downtime, reduced scrap, or more efficient integration of services, for example by fixing faults quickly, and so on. So this cross-departmental networking of data is also part of it, I think. There are still a lot of data silos today.
Another component that is also coming is the use of more productive machines and robots that will increasingly assist humans; we see a lot of demand there. And many of our customers are already moving towards Equipment-as-a-Service and Product-as-a-Service, in other words, adapting their business models accordingly. Where I also see a lot of potential in the medium term is the whole issue of sustainability: how can production be made more sustainable, and what investments do you have to make? I think there is huge potential for first movers, i.e. those who act first. They will probably benefit over the next few years because they will not be forced into investments later; they have simply used the time beforehand.
Yes, perfect. That was a really good introduction and also an overview of the potential you see in the market.
Now I would get into the use case a little bit, first with you guys, Schmitz Cargobull, to understand what potentials you are using here and what technologies of the Azure cloud platform are used here. Perhaps we’ll start with a little contextualization of the topic. Michael, can you tell us about your project and share a bit of a virtual picture? What does it look like for you on site and what is it all about?
Michael
Gladly. Our project was about telematics IoT. Our sister company, Cargobull Telematics, has been pushing the telematics topic for about 16, 17 years. At first there were selected customers who wanted to introduce it, pharmaceutical companies, et cetera. The topic was then developed further and further, to the point that in 2018 we started to equip every refrigerated trailer with its own telematics unit. This means that every refrigerated trailer that currently leaves the plant here has its own telematics unit with a two-year contract, where the customer is onboarded onto our portal and thus given the options for digital monitoring.
Meanwhile, we are at the point that we will equip EVERY trailer starting this year. Every trailer that leaves the plant here will have a telematics unit, so we expect the number of trailers to grow significantly very quickly. And this was also part of the impetus for the project: we had already developed portals one and two, independently and with other partners. In the process, it became apparent that portal two would not keep pace with the market growth we actually needed. Since we basically follow a cloud-first strategy at Schmitz Cargobull, Microsoft was naturally our partner of choice, and so we started developing the portal from scratch. We haven’t used any old software here, any old software code. We had to redevelop this complex product in a very short time, and here, of course, Microsoft technology helped us. Among other things, we have various highly scalable databases in use, Azure Data Explorer for example, and Cosmos DB to some extent. Here we were able to build nicely on Microsoft resources, which made it possible to realize this in such a short time at all.
Can you briefly define telematics for me? You had described the individual refrigerated trailers. Just so we get an idea of what it means?
Michael
Gladly, sure! Of course, today’s dispatcher always wants to know: Where are my goods? Are they well cooled? Do I have a problem here? Is my brake perhaps broken? In other words, monitoring the entire transport is very important today. Take pharmaceuticals as an example, keyword Covid transports: it was of elementary importance to know exactly where the trailer was at any moment. Is it safe? Has the cold chain been maintained? In the case of vaccines, of course, this is of enormous importance. That’s what our telematics is for; it puts all the data at the customer’s disposal, digitally in a portal.
You had just mentioned a portal one and two. That is, you have developed a platform that collects cooling data, location data, and so forth from your trailers driving somewhere out in the field?
Michael
Exactly. The trailers are completely networked. That means we have up to 200 sensors, depending on which package you book, that are read out. These are then made available to the customer in aggregated form in a portal or, if desired, directly via API in his ERP system, for example. So he has complete control: Where is my trailer? What’s going on right now? Is it still moving? Am I going to have a problem? This way, customers have the best overview of their goods and their trailers.
So your customers are the dispatchers? Or which stakeholders and customer groups do you typically work with here?
Michael
Basically all major transportation companies. The dispatchers in particular, of course, work with it intensively; they are responsible for handling all the traffic on a day-to-day basis and keeping track of it all. But it also includes the managing director, who wants to know whether everything is under control right now. The basic principle, though, is that the dispatcher is the main person who checks where his goods are at the moment.
You had just mentioned various key figures and data that are interesting for a dispatcher or a transport company, for example. Are these primarily topics like cold chain, tracking, maybe also security data, and an overview for management? Is it possible to cluster them?
Michael
Above all, refrigeration is very important, for example in food transport. Today, if it cannot be shown on delivery that the cold chain has been maintained, for example at a large food manufacturer or retailer, then acceptance of the goods is refused. This means that when I go to my big Aldi today, I can be sure that the cold chain has been maintained, because there are clear rules and concepts for proof. You must be able to prove that the temperature in the trailer was xy, and not higher. This must be proven in a certified manner; this is enormously important. The same goes for pharmaceutical companies and the issue of transport: Where is my trailer right now? If I am carrying high-value vaccine, of course I want to know where it is right now, so I can make sure that it’s not being hijacked or tampered with.
These are two extremely important issues. But of course also in the future – we also want to go further – in terms of predictive maintenance. The more data we collect, the more predictions we will be able to give the customer. This is how we are continuously developing.
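As a simple illustration of the kind of proof Michael describes: a cold chain counts as maintained if every logged reading stays at or below the agreed threshold. Below is a minimal sketch of such a check, with a purely hypothetical threshold and readings; real certified proof naturally relies on calibrated, tamper-proof recorders rather than a few lines of code.

```python
# Minimal sketch: verifying a cold chain from logged telemetry.
# Threshold and readings are hypothetical placeholders.
THRESHOLD_C = -18.0  # agreed maximum temperature for the freight

# (timestamp, temperature in degrees C) as logged by the telematics unit
readings = [
    ("2022-03-01T08:00Z", -19.2),
    ("2022-03-01T09:00Z", -18.7),
    ("2022-03-01T10:00Z", -18.1),
]

violations = [(ts, temp) for ts, temp in readings if temp > THRESHOLD_C]

if violations:
    print("Cold chain BROKEN at:", violations)
else:
    print("Cold chain maintained; goods can be accepted.")
```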
For example, if you’re talking about proof, or certain certifications that have to be there, or tracking data. How does that work today, how did it work in the past? Did this happen manually then?
Michael
With a temperature recorder on the trailer, i.e. analog. But in the meantime, everyone wants a digital temperature recorder, which is also increasingly required. That’s why the focus is shifting more and more to telematics.
If we come back to the portal that you mentioned: we have various trailers on the road somewhere in the field, we have temperature data, location data, and certain evidence to be provided. All this data has to get into the cloud somewhere, with some intelligence. Whether that’s private or public remains to be seen, but how does it work? How does the data get from the infrastructure into the cloud?
Michael
At the end of the day, we have a telematics module built into the trailer. It contains a SIM card, and the data is sent to Microsoft encrypted via VPN. There, the Event Hub receives it, and from there it flows into processing. According to certain rules, the data is then held and stored in the database and finally prepared for the customer automatically. The processing part is indeed extremely complex, because there are different rules that must be adhered to so that the end result is always the right data for the customer.
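As a rough illustration of the ingestion step Michael describes, here is a minimal sketch of a device-side sender using Microsoft’s azure-eventhub Python SDK. The connection string, hub name, and payload fields are hypothetical placeholders, not Schmitz Cargobull’s actual setup.

```python
# Minimal sketch: sending trailer telemetry to Azure Event Hubs.
# Connection string, hub name, and payload fields are hypothetical.
import json
from azure.eventhub import EventHubProducerClient, EventData

CONN_STR = "<event-hubs-connection-string>"  # placeholder
EVENTHUB_NAME = "trailer-telemetry"          # hypothetical hub name

producer = EventHubProducerClient.from_connection_string(
    CONN_STR, eventhub_name=EVENTHUB_NAME
)

telemetry = {
    "trailerId": "SCB-12345",   # hypothetical trailer ID
    "temperatureC": -18.4,      # cold-chain temperature reading
    "lat": 52.03, "lon": 7.53,  # GPS position
}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(telemetry)))
    producer.send_batch(batch)  # rule-based processing picks this up downstream
```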
That also means you have some data that is only for you internally, and some that goes out externally to your customers? You probably separate those into different areas in the cloud?
Michael
No. For us it is extremely important: the data belongs to the customer. This means the customer has all the data at his disposal, depending on the module he books. There is of course a basic service, “just give me GPS data”, for example. But there is also the full package, where he gets all the sensor data his trailer makes available. Our credo is absolutely clear: the data belongs to the customer, not to us.
Cloud-first strategy – does that mean that you are holistically relying on cloud technology within the company? Both in the direction of the customer, where you talked about different modules or services, and internally at your company, where you work with the cloud?
Michael
Right. We have replaced our entire data centers in the last four years; we no longer have local data centers. We have migrated completely to Microsoft Azure over that time. Everything that could not be migrated to Azure we replaced; for example, we completely retired our Oracle database and migrated to SQL. We really ran a completely streamlined cloud-first program, which is now more or less complete. So we are very happy to have a partner like Microsoft.
Perfect transition! Thomas, I’m looking in your direction. Michael was just talking about event hubs and similar services. These are services that are provided by Microsoft Azure. How do I have to imagine that and how does that work, that I get all this knowledge and all this data into the cloud in the first place? – It’s probably those individual services, right?
Thomas
Exactly, we have various services there, such as Event Hubs or the Azure IoT Hub; those are from Microsoft. Then, of course, there are options to enter the cloud via third-party brokers, so to speak. There is then also a very large construction kit of further services available to do routing or analytics, to store data, or to feed a large data lake. The possibilities are almost endless, I’d say.
So Event Hubs, the service, would be, for example: I have the various transportation companies that have booked their telematics units, with a temperature sensor, as a service, and those individual data points are uploaded into the Azure cloud via the SIM card, as I’ve now learned. Then there is a service, Event Hubs, where the data is pre-aggregated, processed, and probably provided in a dashboard? Can you explain that a little bit?
Thomas
Perhaps a brief elaboration on Event Hubs and IoT Hubs. What is very, very important for us in IoT projects is the question: do we need bidirectional communication or just communication in one direction? The latter is what the Event Hub allows: data is routed from point A to point B. The Event Hub is therefore not exclusive to IoT projects. The IoT Hub, by contrast, is the classic IoT service, where I can also, for example, send a message down from the cloud to my device to trigger an action there. So the machine sends data, the cloud analyzes it (or, in certain cases, this happens directly at the device, “at the edge” so to speak), the cloud then recommends an action to be performed and sends this message back down.
Maybe that’s a quick way to put it in perspective. But then what we have with us are different ways that we then ultimately get to the dashboard. There is one world where we have a kind of pre-built solution, so a software asset services approach with, for example, Azure IoT Central, where then various Azure services have already been assembled so that I actually have my dashboard already there. I can operate this without writing a line of code. Where I can drag and drop to customize my dashboard and get up and running relatively quickly without having much programming or IoT expertise. And then, of course, there is the other variant, when I basically build my own platform, or set up certain rules, that I get there myself with the individual services, that I need storage, that I need analytics; that I also need visualization, that is, a visualization service. You then have to manually assemble them to get back to the dashboard.
So you can take the fast route, but it has certain limitations in depth. Whereas if I configure this from the individual building blocks mentioned, I have more effort, but of course a very high degree of customization.
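To make the Event Hub vs. IoT Hub distinction above concrete, here is a minimal sketch of the bidirectional case using Microsoft’s azure-iot-device Python SDK: the device sends telemetry up, and a handler reacts to a cloud-to-device message coming back down. The connection string and message contents are hypothetical.

```python
# Minimal sketch: bidirectional device <-> cloud via Azure IoT Hub.
# Connection string and payload fields are hypothetical placeholders.
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "<device-connection-string>"  # placeholder

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)

def handle_c2d(message):
    # Cloud-to-device message: e.g. the cloud recommends an action.
    print("Action recommended by cloud:", json.loads(message.data))

client.on_message_received = handle_c2d  # register the C2D handler
client.connect()

# Device-to-cloud: send one telemetry reading upward.
client.send_message(Message(json.dumps({"temperatureC": -17.9})))

time.sleep(30)  # stay connected long enough to receive a C2D message
client.disconnect()
```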
That probably also depends very much on the resources that I also have internally at my company; and probably also very much on the size of the company? You have both medium-sized customers who use the platform and larger corporations.
Thomas
Absolutely. In fact, there is also the differentiation in various departments, for example, even in larger companies. Where, for example, with the SaaS variant, we’re also in a world where… This might be a really big company, but there’s a department there that wants to launch something quickly. There may also not be as much support from IT; you then work with an external partner and can then still come to a result relatively quickly without having to have a major impact on the larger infrastructure of the company. So this is actually also an attractive offering for all sizes, but basically it’s fair to say that we’re seeing a trend: Large companies, which have a lot of resources at their disposal, want to have very individual solutions, and smaller companies prefer quick ready-to-go solutions off the shelf.
The topic of partners is probably also important there. I would look in your direction, Philip – the topic of CloudRail, your company, is also a building block that plays a role in the entire Microsoft partner ecosystem if I want to build such holistic solutions for the future. I think, Thomas, that you are relatively flexible: You now have a huge partner ecosystem with a wide variety of partners who contribute their skillset in order to implement such large solutions in the first place?
Thomas
Absolutely, yes. I think in Germany alone we have 30,000 or 35,000? A whole lot of partners, anyway. Of course, these are not all exclusively IoT partners; they represent the entire Microsoft universe. But I think that’s also one of the strengths we have, due to our long history, and because we have actually always had a partner-oriented business model. In IoT it comes into play a bit more because the ecosystem itself is more complex than in many other areas. What we see, for example, is that it is very difficult for individual companies to manage holistic IoT projects alone. There are so many core competencies to handle on larger projects; as a partner, you have to look left and right to see who can contribute their expertise. When we talk about an IoT project, and IoT projects are often also transformation projects, we need a certain consulting component. Hardware is again a universe all its own: do I build something myself, or do I buy something? Connectivity: does it run via a SIM card, or over other components? Then the infrastructure I use: hybrid, cloud, private cloud? Then comes all the analytics, application development, CRM, ERP, integration with other business processes, security, and so on and so forth. So I have a whole chain there, and that’s why we try to take an approach with our partners where we function like a hub, enabling a partner-to-partner approach. We put together packaged offers with different partners and can give recommendations in certain areas. And the nice thing is, the companies are usually already aligned, the technologies are aligned; that makes the steps easier and faster. This normally facilitates scalability and speed overall, which benefits not only the end users, but also us as a company and our partners, who can then work in a more scalable manner.
Michael, how do you do it at Schmitz Cargobull; how are you set up internally? Do you have your own team that deals with the issues, or is that somewhat orchestrated by you?
Michael
The whole topic of cloud and IoT is orchestrated by us. We in Infrastructure are positioned such that we also take care of software development here in the company, so everything that is custom software, non-SAP, runs through my department. We look at what the best approach is for Schmitz Cargobull and help select the partner so that we always get a sustainable solution for the individual departments. In the end, when someone has a requirement, they come to us, we look at the issue, and then together with the department we work out the best solution, but in such a way that we always retain the choice of providers and vendors, so that we can ensure our standards are met.
To make the transition to use case two: we also want to talk a bit about scalable solutions. Especially when I imagine that I have many different data points and different sensors, but also think in terms of smart factories or supply chains, that’s an incredibly broad issue where I first have to connect all these individual data sources in a scalable way. Philip, that is your core business. Can you tell us a bit about the relevance of standards and scalability in general? Maybe also in relation to Cargobull or the area in which you operate.
Philip
Gladly. When I want to network the supply chain, I often have the situation that the entire supply chain is networked, but in between, my industrial machines, my systems, my production facilities are not, and they remain a bit of a black box. I have the SAP system, so to speak, which gives me evaluations, but these usually say very little about the machines themselves: How are the individual machine parts doing? How are they being used? What’s happening in production right now, and not 24 hours ago? We are very, very specialized in connecting machines to Azure services, and we pick up where Thomas left off: we either send the data to the IoT Hub, where I can then flexibly build applications, or to IoT Central, where we provide the data in a way that lets me quickly put dashboards together. That means I can really build a proof of concept with the solution within a few hours. If I want to see what insights an IoT system provides me with on the machine, I can simply try it out quickly with CloudRail.
We have achieved this by using standards, such as OPC UA for new machines. Most of you listening to the podcast and coming from production will probably think now: most of the equipment out there ISN’T OPC UA though, it’s something else. Old controllers from Siemens, Rockwell, B&R; there are different protocols running. That means there is normally a very elaborate job of translating those protocols with certain middleware and then configuring the interfaces to the Azure services, and we provide that from a single source. What is very, very important here again, as Thomas mentioned, is the ecosystem. We connect the machine, the data is in Azure, and then our job is essentially done; either an IT department, like Schmitz Cargobull’s, picks up the data and builds their services, or a Microsoft partner does it, in other words, an IT company that specializes in IoT services but does not necessarily have machine connectivity as its core business.
You just said via middleware – how do customers do that today? Are isolated solutions primarily created, where you try to connect data from different systems?
Philip
It depends; we see very different approaches there, and they don’t even depend that much on the size of the company. There are quite a few under-the-radar projects that are carried out by specialist departments and, once successful, are then reported to management. However, there are also initiatives driven from above. What they all have in common: if I connect a machine manually via middleware, or even via a gateway that arrives as an empty router onto which I put software myself, then I always have the problem that I get it right after a certain amount of time, but I’m back to square one with the next machine. With the next machine I have the findings from the last one, but I have to start manually again to extract the data points from the controller and evaluate them… and often I can’t even access the controller properly, for example if they are third-party machines.
That is why we decided on this approach for retrofitting, the equipping of older systems: we work with a sensor partner, ifm electronic, which should be well known in German industry, and have developed a plug-and-play system that automatically brings all sensors using the IO-Link protocol into Azure, where a uniform data model from Microsoft is used. So we use Microsoft’s IoT Plug and Play system; that means the sensors can communicate with all the other Plug and Play devices on a unified data basis. This allows me to scale the use cases seamlessly, because I don’t have to start configuring this interface and then adjust it again later.
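For context on the “uniform data model”: Microsoft’s IoT Plug and Play describes devices with DTDL (Digital Twins Definition Language) interfaces. The sketch below shows what a minimal DTDL v2 model looks like; the model ID and telemetry field are hypothetical and purely illustrative, not CloudRail’s or ifm’s actual model.

```python
# Minimal sketch of a DTDL v2 interface as used by IoT Plug and Play.
# The "@id" (DTMI) and the telemetry field are hypothetical.
import json

sensor_model = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:example:coldChainSensor;1",  # hypothetical model ID
    "@type": "Interface",
    "displayName": "Cold-chain temperature sensor",
    "contents": [
        {
            "@type": ["Telemetry", "Temperature"],  # semantic type adds meaning
            "name": "temperature",
            "schema": "double",
            "unit": "degreeCelsius",
        }
    ],
}

print(json.dumps(sensor_model, indent=2))
```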
ifm is also one of our partners from the network, so we have already heard something about IO-Link and these sensor connections. Maybe one more question about using the insights from the connected data: How do you do it? Do you work with a pre-built database that already contains these definitions?
Philip
We have a database running in the background with which we identify all IO-Link sensors on the market, i.e. not only ifm but all manufacturers, on the basis of their serial numbers. The data is then automatically normalized via our Device Management Cloud. This means that the sensor values arrive in a format I can work with immediately, for example temperature in °C or velocity in mm/s. With this, I can integrate a sensor into the Azure ecosystem within seconds and also use it as a Plug and Play device. And because the devices are Plug and Play certified, the sensor data can be used together with data from non-industrial sensors, such as smart home sensors.
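As a rough illustration of the normalization idea: IO-Link devices report raw integer process data that must be scaled into engineering units according to the device description (IODD). The registry and scale factors below are hypothetical placeholders, not CloudRail’s actual database.

```python
# Minimal sketch: normalizing raw IO-Link process data into SI units
# via an IODD-style lookup. Device entries and scale factors are
# hypothetical placeholders, for illustration only.

# vendor + device identifier -> (quantity, unit, scale per raw count)
IODD_REGISTRY = {
    ("ifm", 0x02A5): ("temperature", "degC", 0.1),  # 0.1 degC per count
    ("ifm", 0x03F1): ("velocity", "mm/s", 0.01),    # 0.01 mm/s per count
}

def normalize(vendor: str, device_id: int, raw_value: int) -> dict:
    """Turn a raw IO-Link reading into a unit-annotated measurement."""
    quantity, unit, scale = IODD_REGISTRY[(vendor, device_id)]
    return {"quantity": quantity, "unit": unit, "value": round(raw_value * scale, 3)}

print(normalize("ifm", 0x02A5, -184))
# -> {'quantity': 'temperature', 'unit': 'degC', 'value': -18.4}
```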
When I think about it, it’s an incredibly heterogeneous infrastructure. What about the issue of security? I don’t want to stress this too much, but there are some security issues, especially when I connect old controllers, or older infrastructure in general – how do you solve this issue?
Philip
This is a very big issue for us because, of course, we are operating right at a very critical point. As soon as I connect these machines to the Internet, which I have to do for an IoT service, I have created a certain security risk. The conventional answer used to be: I simply do NOT connect this machine, and nothing happens. With IoT, that answer is no longer conceivable, because if I want the data to be available, I also have to connect the machine.
There are several security features that we have integrated: there is a TPM chip that prevents the hardware from being copied; we have blocked all USB ports and other ports that are not needed; and we communicate with Microsoft Azure with end-to-end encryption. These are all answers that settle this security issue for now. Why only for now? Unless I ensure that these devices are permanently updated, and in an automated way, I will not manage to keep IoT applications secure. That’s why one of the main benefits we offer customers is centralized management of gateways, with updates that can be rolled out centrally. That means I can control the updates, I can schedule them, and I’m no longer dependent on someone going to the gateways in production with a laptop or a USB stick to update them. Because in our experience, those gateways then NEVER see an update.
After all, we are also talking about scaling. Is that limited to ONE location? Many companies today are globally positioned, with multiple locations. How does this scaling work, for example, in terms of security or the connection of the various devices, sensors, and machines?
Philip
I just mentioned that the updates can be controlled centrally. It’s not just the updates; the whole system works from a software-as-a-service solution, the Device Management Cloud. With that, I can configure the sensors or the OPC UA servers remotely. This means I have a team on site, either the customer’s plant engineers or an ifm partner. They install the hardware locally; then there’s a little QR code with the serial number on the box, and they can send that to a central IoT team. That team can be in Munich, for example, while the customer is in Bonn or Mannheim, and I can configure these devices completely remotely; I no longer need a team on site. This then allows me to roll the solution out from one location to other machines and then think about what other use cases there are: Which machines can I still optimize? Which services can I still implement?
Then, in the next step, I can think about how to roll this solution out to the global plants. It is often the case in German industry that we have one main location and other locations somewhere in the world, be that Brazil, South Africa, or China. Of course, it’s an advantage if I don’t have to fly the IoT team there, but can configure the whole thing remotely.
I am always asked: What about the business case? Just to understand a little: if I’m using technology like this and want to set myself up for the future in a scalable way, what is the business case in this specific case we’re talking about, with the scalable deployment of devices and sensors? When do I actually save money?
Philip
The bottom line is that I save money as soon as the savings, be it increased capacity that raises my productivity, be it less machine downtime, or fewer failures, are greater than the cost of the system. Then I have ended up saving money with the solution. This is often not so trivial, because the problem with IoT use cases is that the baseline data has usually not yet been recorded, so a before-and-after comparison can be quite difficult. Together with our integration partners and Microsoft Azure, we have a relatively pragmatic answer to this: there is a bundle with a system integrator that does not cost very much. It contains a stripped-down framework, which means I get a couple of sensors and a couple of services with it. Then I just start: I connect it and try it out. Together with partners, we have really connected a machine and configured the Azure services within a day. Then I can look at the system and what its benefits are without signing a million-euro consulting contract right away. Instead, I can pragmatically look at one machine, move on to the next, and the whole system grows organically.
You can look at it a bit like a Lego set. If I notice that certain data is missing from a machine: we connect the sensors to an IO-Link master module, which can hold eight. If only five sensors are connected, I simply add a sixth the next day and connect it in Azure with a few clicks, without any major programming or reconfiguration on my part. That’s why we mostly see use cases growing: after an initial proof of concept, the next step is to evaluate which other machines are available and where the benefit is currently greatest.
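To make the break-even logic from the answer above concrete, here is a small worked sketch comparing monthly savings against system cost. All figures are purely hypothetical and do not come from the episode.

```python
# Break-even sketch for an IoT retrofit; all figures are hypothetical.
monthly_system_cost = 800.0     # gateway + sensors + cloud services, EUR/month

downtime_hours_avoided = 6      # per month, estimated from a pilot
cost_per_downtime_hour = 250.0  # EUR of lost production per hour
productivity_gain = 400.0       # EUR/month from better capacity utilization

monthly_savings = downtime_hours_avoided * cost_per_downtime_hour + productivity_gain

print(f"Savings {monthly_savings:.0f} EUR/month vs. cost {monthly_system_cost:.0f} EUR/month")
print("System pays off." if monthly_savings > monthly_system_cost else "Not yet.")
# -> savings of 1900 EUR/month vs. cost of 800 EUR/month: the system pays off
```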
Michael, what does the business case look like for you? You had also talked about the individual potentials that you want to leverage. How do you approach the topic?
Michael
We sell access, i.e. the individual telematics subscriptions, and that is our business case. The company in question pays a monthly fee, depending on the contract it subscribes to; through this we generate our revenue and, of course, want to grow beyond that. From our point of view, this is a very good business case.
In general, your customers probably also save an enormous amount of money, considering that they can access all the data across the board? Where otherwise time and costs are incurred through additional travel or manual processes and paperwork, they now have all of that accessible in one solution?
Michael
Exactly. They can conveniently access the portal and have all their data there. They don’t have to worry about anything else and always have permanent access to it. Of course, this is the business case par excellence for them!
Thomas. We have now selected two use cases. You have customer references and various success stories that can be found online. How do you see the transfer of the use cases we discussed today? Are these transferable? Do you see this more often or are these all individual cases?
Thomas
Absolutely transferable. We heard that there are basically two different expressions. One is in the direction of cost efficiency, that is, cutting costs on the production side so that I can free up capital to allocate to other investments. And the other case is making money with it, i.e. building the business model on customers [inaudible 0:39:02] buying. The cases we had here, with predictive maintenance, with simply monitoring the machines so that I know what data the machines are giving me in the first place, or going in the direction of making my service more efficient so that I no longer have to send a lot of people around the world but can handle things remotely in the plant: these are things that can be used and implemented across industries. And that’s absolutely how we see it in the requests that we’ve had in recent years, have currently, and will have in the future.
What else is coming here in the future? You had talked at the beginning about IoT becoming mainstream. Do you think that’s going to evolve a lot with the issues you’ve raised? Do you see any other issues? Where will we be in five years?
Thomas
Very good question. I think IoT and AI will have become mainstream by then. By then, every company will have a truly productive project in one form or another. The question is then really no longer how and why, but what does scaling into other areas look like? Where really exciting things could still come, though that is perhaps a bit further away, is in a direction where colleagues of mine are closer to the action: hyperscale computing and quantum computing, so that we can arrive at analyses even faster and even better. But that is really still up in the air, five years plus. My next milestone is actually, first, IoT and AI become mainstream; then there are different components on the technology side, like 5G or edge computing, which of course will continue to be available and adopted. In five years, the question is how we really scale that to the whole company, not just individual areas.
Philip, do you have any additions from your side on where we will be in five years?
Philip
I think that in the future these data silos that Thomas mentioned earlier will dissolve. Everything is going to move in the direction of flowing together in central systems. Because, of course, as a factory operator, I don’t want to use an extra tool for every type of machine and every production line. Instead, everything should flow together centrally, ideally using a uniform data standard. And of course I don’t want to have to look after a fleet of gateways that I have to update myself; I want everything to be controlled centrally in some way, with my IT department having as little work with it as possible. After all, it also generates costs if I have to spend excessive effort on hardware and IoT services.
This was really a very exciting session. Michael, thanks again for introducing your project, the potential you see, and how you are using the Azure cloud platform. Thanks also to you, Philip, for explaining the issue of scalably connecting individual devices, sensors, et cetera; that is, I think, not a trivial issue, so thank you very much. And of course also to you, Thomas: you have brought the round together here on the Microsoft side, and I believe there will be a lot more from your side in the coming years. I am curious about it.
Thanks for the insights, and have a great rest of your day!