In podcast episode 25, Madeleine Mickeleit talks to Jonas Schaub from the board of elunic AG. Three use cases are discussed: subscription services as efficient revenue models, quality assurance through artificial intelligence, and IIoT machine portals for manufacturers of packaging machines.
Podcast Episode Summary
As an IIoT service provider, elunic generates added value for machine manufacturers through digital applications in the production environment. In this podcast episode, member of the Executive Board Jonas Schaub presents some of these added values, as well as difficulties that arise along the way, based on three use cases.
The first use case concerns a collaboration with a builder of filtration systems. It is about so-called “pirate filters” and, more generally, spare parts that end customers like to source elsewhere. elunic’s solution: build bi-directional customer relationships and create added value and dependencies by providing data. Here, scheduled-delivery business models, especially subscription models, can be the key to success.
Use case two revolves around quality assurance and artificial intelligence. It covers predictive maintenance and optical quality assurance performed by neural networks – processes that until now have been carried out by human operators.
In the third use case, Jonas Schaub talks about a digital machine portal for a manufacturer of packaging machines. The topics addressed include ticketing, digital service booklets and single user interfaces.
What is your role at elunic AG and what do you do exactly?
I am a member of the executive board of elunic AG. elunic is a service provider – we develop and realize digital applications around the topic of Industrial IoT. We work exclusively on topics that revolve around processes in the production environment. Our typical customer is a machine builder whom we help implement its IoT vision – the idea that networking things and refining machines with software represents a significant opportunity for the future. We help highlight these opportunities, inspire, and advise on possible solutions. But we also point out which things you should not do. In doing so, we draw on our experience and learnings from other projects and customers.
Can you give us a little insight into the practice? What challenges do customers come to you with, is there a “status quo” in the market?
The classic topics at the moment are predictive maintenance and pay-per-X models. This is currently the Holy Grail, the supreme discipline and at the same time the most difficult task to solve. In principle, I think it makes sense and is good that there is a very strong drive towards such recurring revenue models at the moment. An important task, however, is to look at the whole thing in a more differentiated way: Who is the target group? How are they structured?
A classic pay-per-use model is, for example, the coffee vending machine at the airport. This is no longer operated by the airport operator itself, but set up under a pay-per-use model so that the airport operator can concentrate on its core competencies and at the same time participate in disproportionate success. That, of course, is the template and also the thesis: when things and machines are networked, trends such as marketplaces become much more efficient and the utilization of machines increases, so you need a model to participate not only in the sale of hardware, but also in its use. However – an aside here – you really have to look at it in a differentiated way. For one thing, both the economic cycle and culture put us in an area where not everyone is willing to operate a machine on a perpetual pay-per-use basis. As a concrete example: we have low interest rates at the moment, and we have a culture in Germany where machines are often bought, depreciated and taken off the books, but still used in production. These are all things that have to be taken into account and addressed to see whether a pay-per-use approach really fits across the board.
Do you have a specific use case from your practice that can be used to understand where such models can work?
If you look at pay-per-use models as the final stage, then along the way different gradations and delineations are possible. What we like to use as a tool – inspired by the B2C sector, the consumer environment – are subscription models. Razor blades, for example, are a perfect model: there I have consumables that the user obtains via a subscription. Or the example of printer cartridges that are automatically reordered when the printer reaches an empty fill level. As a user, I do not have to worry about anything, do not have to intervene in the ordering process, and do not even have to think about it. This is the inspiration we use. We are skilled at identifying where there are opportunities for such subscription models. We supply wear components, consumables or spare parts that are sometimes offered more cheaply by other market participants, and we establish a mode in which the customer is nevertheless interested in obtaining the original parts from us – ideally via a subscription-like, ongoing rate structure.
So printer cartridges and razor blades serve as inspiration. Do you have an example where such models are also used in industry?
Yes, we have already implemented and successfully used such models in several cases – always with inspiration from the B2C environment, such as a loyalty card with a bonus points system: “If I buy my tenth cup of coffee at the bakery, I get the eleventh cup for free.” We have applied this model to a manufacturer of filtration systems, which until now has derived a large part of its value creation from the sale of the systems. Beyond that, however, the filters themselves have become increasingly important. What have we done? We have put these filters at the center with a digital solution. The filter requirement is automatically detected and predicted, and the order is triggered accordingly.
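The reorder logic described here can be sketched in a few lines. This is a minimal illustration, not elunic’s actual implementation: it assumes a simple linear projection of filter life, and all names, thresholds and lead times are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class FilterStatus:
    loading_pct: float           # 0 = new filter, 100 = exhausted
    loading_rate_per_day: float  # observed average increase per day

LEAD_TIME_DAYS = 5.0      # assumed delivery lead time (illustrative)
REORDER_MARGIN_DAYS = 2.0 # safety buffer (illustrative)

def days_until_exhausted(status: FilterStatus) -> float:
    """Linear projection of remaining filter life from the current trend."""
    if status.loading_rate_per_day <= 0:
        return float("inf")
    return (100.0 - status.loading_pct) / status.loading_rate_per_day

def should_reorder(status: FilterStatus) -> bool:
    """Trigger the subscription order early enough to cover the lead time."""
    return days_until_exhausted(status) <= LEAD_TIME_DAYS + REORDER_MARGIN_DAYS
```

In a real deployment the projection would of course come from the predictive model mentioned above rather than a straight line, but the trigger structure stays the same.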
How should one imagine such a filtration plant in practice? How does it work or what is interesting data that such a system provides?
Filtration systems are often used as part of a larger plant to treat or post-treat certain materials. The classic measurement is the degree of contamination of the filter: if it is dirty, the filtration no longer works as well. The technical term for this is the loading condition, which can be measured and recorded by sensors via various parameters.
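One common way to derive such a loading condition – an assumption here, since the episode only mentions “various parameters” – is to normalize the pressure drop across the filter between its clean and exhausted baselines:

```python
def loading_ratio(dp_current_pa: float, dp_clean_pa: float, dp_max_pa: float) -> float:
    """Normalize the pressure drop across the filter to a 0..1 loading condition.

    dp_clean_pa: pressure drop of a fresh filter (baseline)
    dp_max_pa:   pressure drop at which the filter counts as exhausted
    All parameter names and the differential-pressure approach are illustrative.
    """
    span = dp_max_pa - dp_clean_pa
    ratio = (dp_current_pa - dp_clean_pa) / span
    return max(0.0, min(1.0, ratio))  # clamp to the valid range [0, 1]
```

A reading halfway between the clean and exhausted baselines then yields a loading of 0.5.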
How do I digitize such a filter and what is the final goal?
There are two different streams. The first and easier way: I connect the plant digitally to the network, bring it online, and then I can export the data. For this purpose, we work with providers who support loading the data via the controller into a centrally operated cloud application for overarching archiving.
The second way concerns systems that are already in the field – the so-called retrofit: we attach an offline trigger, e.g. a QR code with the prompt “scan me”, which invites every user to scan it with a smartphone or tablet. The moment the scan is performed, the manufacturer directly receives information. For example, the IP address can be located, and the manufacturer is immediately in a mode where he receives information about the state of the machine. In addition, the user is motivated and incentivized to log in to retrieve even more information. If we can’t network the machine directly, we establish connectivity through this bridge via a third-party device.
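The QR retrofit flow boils down to a per-machine URL on the sticker and a backend that records each scan. A minimal sketch under obvious assumptions – the portal URL, field names and the overall shape of the handler are invented for illustration:

```python
from urllib.parse import urlencode, urlparse, parse_qs
from datetime import datetime, timezone

PORTAL_BASE = "https://portal.example.com/scan"  # hypothetical portal endpoint

def qr_payload(machine_id: str) -> str:
    """The URL encoded into the QR sticker on a specific machine."""
    return f"{PORTAL_BASE}?{urlencode({'machine': machine_id})}"

def handle_scan(url: str, client_ip: str) -> dict:
    """What the backend could record the moment a user scans the code."""
    machine = parse_qs(urlparse(url).query)["machine"][0]
    return {
        "machine_id": machine,
        "client_ip": client_ip,  # can be geolocated to locate the plant
        "scanned_at": datetime.now(timezone.utc).isoformat(),
    }
```

From this scan event the manufacturer knows which machine was scanned, roughly where it is, and can prompt the user to log in for more.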
You’ve already briefly hinted at it: It’s about a subscription model or value creation via connectivity. What is the exact added value at the end of such a project?
In the case of such “pirate filters” and many spare and wear parts, we as the manufacturer are no longer in the game once these are sourced elsewhere. However, we naturally want to keep our high-quality products in operation. That’s why we build a bi-directional customer relationship and create the back channel. We incentivize and reward the customer’s loyalty, and through the processed data we can better understand how the user actually uses the filter. This also allows us to optimize processes, since there are different filters. The user has the great advantage that he does not have to stock 20 filters, so he has less capital tied up. He doesn’t have to remember to reorder in time, but always has just as much as he needs at the moment.
It has to be said, however, that the cultural aspect often plays a role as well. There are customers who will never do something like that; they are very proud and experienced in their processes and know exactly when to reorder. But in general, the trend is already recognizable that certain tasks are handed over, because they are simply taken care of and just work. That is the great opportunity of this subscription model, and that is also the application framework we have introduced and enriched specifically for the user. Such models can be a milestone on the road to pay per use.
You had also just mentioned QR code scanning by employees. Maybe you can briefly show the path from the QR code to the cloud. How does this work in the overall concept, also in cooperation with partners from the hardware environment?
We like to consider the “connectivity” problem solved, because it tends to be the more deterministic problem. Sure, it has to be done, but there are appropriate edge devices with a large number of drivers to connect even older machines and controllers. The path of the QR code is taken when we are not able to connect a machine at all – because there are either too many machines or the infrastructural conditions do not allow it.
A big topic at the moment is, of course, artificial intelligence. Do you have an example from your field that shows how this is handled?
AI, machine learning and deep learning – that’s obviously the technology-driven entry point, and the hope is that we can use it to solve things like predictive maintenance. We identified the technology as a great opportunity to perform optical, automated quality assurance. We have a solution that is used as far away as Asia. Here, a neural network is able to recognize characteristics in image material. These characteristics can be scratches, cracks, dents, or specialized defects like blowholes. What a human inspector normally does on the assembly line – checking whether the finished piece meets the requirements – this system can do automatically.
For which user groups do you implement something like this?
The customers come from completely different industries; for example, we also have sports car manufacturers. During the final quality inspection, after the vehicle rolls off the assembly line, certain characteristics of the vehicle are checked, e.g. whether certain type plates are present. The text on them is extracted, interpreted and matched, and if everything is in order, the vehicle gets its check mark and is released into the world.
How exactly are the cracks, scratches, dents or blowholes on the surfaces detected? How does data acquisition work at this point?
You need an imaging system – that is, a camera, sometimes more than one. You usually need lighting so that you can produce consistent image captures. This is nothing new per se: such optical inspection processes have long been present, especially in electronics manufacturing. Economically, they only make sense from a certain number of units or batch size, because only then can the corresponding effort be amortized. In the traditional approach, these processes are rule-based and explicitly programmed. They have the disadvantage that you always have to loop in the developer whenever new features are to be recognized, for example.
This is contrasted with a self-learning system, i.e. a computer that is trained on the basis of existing image material. The network is trained to recognize defect types and defect classes using appropriately labeled photos. There is usually more to it than that: you can apply certain pre-processing, and chaining networks one after another brings in a whole different quality. The big advantage for the user is that he really trains the system live. The worker who stands at the line and carries out the inspection processes can, by marking an error, also improve the system. He does not have to go through a development process again in which a third party or another organization does the whole thing.
What about the interface with the operator – is there some kind of dashboard where the operator can enter the info and give their feedback? How exactly does the information get into the algorithm?
The operator has a screen – this can be a tablet or an installed monitor – where he sees the image and can interactively mark any features such as defects, scratches, cracks, dents, etc. At that moment, the image material is archived and the network behind it is continuously re-trained and improved.
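The feedback loop described here – operator marks a feature, the sample is archived, and the network is periodically re-trained – can be sketched without any ML framework. This is a structural illustration only; the callback, batch size and label names are invented, and the real training job would of course run a neural network:

```python
class FeedbackLoop:
    """Collects operator-labeled images and triggers retraining in batches.

    `retrain` is a callback standing in for the actual training job;
    all class and parameter names here are illustrative.
    """
    def __init__(self, retrain, batch_size: int = 50):
        self.retrain = retrain
        self.batch_size = batch_size
        self.pending = []   # labeled samples awaiting the next training run
        self.archive = []   # every sample is also archived permanently

    def mark(self, image_id: str, label: str) -> None:
        """Called when the operator marks a feature (scratch, crack, dent, ...)."""
        sample = (image_id, label)
        self.archive.append(sample)
        self.pending.append(sample)
        if len(self.pending) >= self.batch_size:
            self.retrain(self.pending)  # kick off training on the new batch
            self.pending = []
```

The point of the structure is exactly what the answer emphasizes: the worker improves the system directly, without looping in a developer.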
That means: You determine beforehand how the part should look in the best case and if any error occurs, you have the possibility to readjust the neural network?
Exactly – and it is often hard because you need a certain amount of image material. That is actually the biggest challenge. But there are ways to shorten the path: you orient mainly to good parts, and there are pre-trained networks that can represent, for example, scratches or cracks on various materials such as aluminum or glass.
You are service providers in the field of digital applications. What exactly do your solutions look like? Do you have a kind of modular system that you use to implement the themes, or are these individual projects? How exactly do you contribute your expertise?
We can provide the solution end-to-end, all the way to hardware integration partners that supply imaging systems, lighting, etc. At its core, this is a licensed product that exists as a standard. Depending on the requirements, certain things are implemented individually with additional engineering – if necessary – and quoted accordingly at the end. What is exciting is that it is not just about the inspection process as such; you often discover other things along the way.
This was a use case from the field of AI. Do you perhaps have another use case from your field that is a good illustration of how manufacturers deal with such solutions?
A popular tool of ours is the implementation of a machine portal. We help manufacturers – even those who may already have applications – to create a central, single interface to the customer: one view for the operator into which all contact points are channeled. Of course, we also want to refine the back channel. And we want to prepare the multitude of applications that we see coming in the future so that they are easily accessible to manufacturers.
Do you have a concrete example that can be used to see how something like this works in practice?
A nice example is a manufacturer of packaging machines. It begins with the distribution of the machines’ operating manuals and documentation, which was previously handled on CD-ROMs and could only be updated by mail. Direct online access can be implemented here. The documents can be full-text indexed, prioritized for specific user groups, and prepared and made available to the user as a tool.
In addition, you can link correlating service operations in the portal, such as a ticket system – if support has one – or the spare parts catalog. You have a central hub from which the various jumps to the respective sub-applications happen.
This means: If you already have your own ticket system, could you integrate the data into a holistic machine portal so that you have a “source of truth”, so to speak?
Exactly, and that is also the target image. I am convinced that if we imagine the world in a few years, then as a manufacturer of machines I will have several applications that I make available to my customers. In order to facilitate convenience, access and ergonomics in the software, it makes sense that I can log in with one user account anywhere and start each application in a uniform flow.
Whether this will ultimately be service applications with augmented reality or whether it will be the ordering process in the spare parts store is something that cannot be decided today. I’m gearing up for the fact that there will be a fragmentation of applications that I’ll bundle via a “bracket application” and make available to the user.
Marco Link from the company ADAMOS talked to me in the podcast about single sign-on. That’s exactly what you mean by the “bracket” around the machine portal, right?
Exactly, this is a nice example of one of several such centralized services. However, as a platform, you have to centrally manage not only the users, but also the assets – so that I do not have to store all my users, machines or even certain telemetry data redundantly per application, but can retrieve the data that is used across the board accordingly.
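The idea of one central asset store shared by all sub-applications can be illustrated with a minimal registry sketch. All names here are hypothetical; a real platform would back this with a database and access control:

```python
class AssetRegistry:
    """Central store for machines, shared by every portal application
    (ticketing, spare parts store, ...) so that asset data is never
    duplicated per application. Illustrative only."""
    def __init__(self):
        self.machines = {}

    def register_machine(self, machine_id: str, meta: dict) -> None:
        """Register a machine once, centrally."""
        self.machines[machine_id] = meta

    def lookup(self, machine_id: str) -> dict:
        # Every sub-application calls this instead of keeping its own copy.
        return self.machines[machine_id]
```

Combined with single sign-on for users, this is the “bracket” that lets each application retrieve shared data instead of storing it redundantly.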
In addition to a ticket system, I’ll probably also be working on other IT systems, such as spare parts management or a store. What other partners are involved in a project like this? How do I build a holistic solution with you?
Often, something like a spare parts system already exists on the customer side and is integrated. For augmented reality support, there are several applications, including TeamViewer, that I can make use of. Microsoft Azure and other IoT platforms provide the technological foundation on which to build solutions.
Now we’ve talked about different technologies – what else do you think will come in the next five years, taking Industrial IoT as an overarching technology horizontally? What developments do you see in the future, what do we have to prepare for?
I think the way customers deal with it and how they adjust becomes quite relevant. At the moment, we have a bit of a chicken-and-egg problem: as a manufacturer, we want to offer added value through data, but the fact that we have no data means there is only limited added value. The customer who is not willing to share data in advance will not get any added value. This ice must be broken.
Then I believe that the way the business models work will be one of the driving forces. As a manufacturer who has been very transactional to date, I will move into more recurring revenue. Whether that ends up being pay per use, or performance-based contracts where, in addition to a response time, there is an SLA for a technician or even a certain guarantee of availability – we’ll see. Those are the gradations that I think are also necessary for these technology-driven, AI-driven approaches to generate value for both sides at some point.
Do you see industry-specific solutions popping up in the market right now? Or industries in which the topic of Industrial IoT has already found its way in, and perhaps already brought commercial success?
To go back to the beginning: what is a great model for us is the case of consumables, where I can use the back channel. If I just sell a machine that afterwards runs without wear and tear, then pay per use is basically just another form of leasing – not by time, but with different parameters. It gets really interesting when I can bring configuration parameters, consumables or process controls into this contract.
In Use Case 2, we had talked about artificial intelligence. Do you see any trends emerging there in terms of IoT?
My experience and observation is that in the manufacturing environment we are in a space where change is harder to bring about and is a larger project. Often it is still the small things that have the greater leverage – for example, making production paperless. This gives me opportunities to implement things very easily, which I then have to roll out.
AI may play a role there somewhere. However, our experience is also that it is not necessarily the amount of data that is lacking – often an abundance of telemetry and process data is available and evaluations could be made – but the context, the description, the labeling of this data is missing. For example, I see a certain temperature value that rises and falls very sharply. If I did not log that I changed a component in the machine at that time, then I have a hard time drawing conclusions from that. That is one of the big necessities I see for AI: bringing in that context in some form. When that happens, when I have the tools for it and the market uses them more, then I think there will quickly be bigger leaps again.
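The missing-context problem described here can be made concrete with a small sketch: joining telemetry readings against a maintenance event log so that a sudden jump in a value carries its explanation. The function and the one-hour matching window are illustrative assumptions, not a real elunic tool:

```python
from datetime import datetime, timedelta

def annotate(telemetry, events, window=timedelta(hours=1)):
    """Attach the nearest maintenance event (if any) to each telemetry reading.

    telemetry: list of (timestamp, value) readings
    events:    list of (timestamp, description), e.g. component changes
    Returns (timestamp, value, context) triples; context is None when no
    event falls within `window` of the reading.
    """
    out = []
    for ts, value in telemetry:
        context = None
        for ev_ts, desc in events:
            if abs(ts - ev_ts) <= window:
                context = desc
                break
        out.append((ts, value, context))
    return out
```

A temperature spike annotated with “component replaced” is usable training data; the same spike without that label is just noise.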