Episode 71 at a glance (and click):
- [06:38] Challenges, potentials and status quo – This is what the use case looks like in practice
- [20:33] Solutions, offerings and services – A look at the technologies used
- [33:56] Results, Business Models and Best Practices – How Success is Measured
Podcast episode summary
Companies are reluctant to share data within a supply chain or with commercial customers because of the complexity of clarifying rights and obligations. How can data exchange between companies work best? What are the requirements, and what should companies look out for?
Boris Scharinger, Senior Innovation Manager at Siemens Digital Industries, addresses precisely these questions and explains how cross-vendor collaboration can be made possible and standardized. He sheds light on both the legal and the organizational perspective and presents possible solutions, since many companies fear the loss of trade secrets – especially where data pools overlap in AI projects. He also shows how the initiation process of IoT projects can be accelerated and simplified.
Ulf Könekamp, CEO of MindSphere World e.V., aims to help shape the future of the IIoT. He does this together with experts from a wide range of industries: in a variety of working groups, members combine complementary competencies to achieve further improvements in performance. Where a single company reaches its limits, progress continues through collaboration with other companies.
In this 71st episode of the IoT Use Case Podcast, we learn how a multilateral relationship between companies can be regulated legally and how the Ecosystem Manager can help address the resulting challenges.
Podcast interview
Boris, you are a Senior Innovation Manager at Siemens Digital Industries. Siemens is the technology and innovation leader in industrial automation and digitalization. You are currently working on what is probably the most important topic around data exchange: the legal and organizational questions that have to be answered to make cross-vendor collaboration possible. To start, do you have a recent example that illustrates the relevance of data sharing from a legal and organizational perspective?
Boris
Yes, of course. There are many legal and organizational issues to clarify. The question “What is a trade secret?” comes to mind; that is not so trivial to answer. Let me give a small example. I’m sure we all remember how Tesla ramped up Model 3 production. The question of how high the production output of the new Model 3 would be was something that affected the stock price on a daily basis. If I imagine I have a central machine in Tesla’s production, then the timestamp data alone is very sensitive, whereas in a completely different constellation at another company it is insignificant from a trade secrets perspective. These and other issues are exactly what our work is about.
Challenges, potentials and status quo - This is what the use case looks like in practice [06:38]
Boris, you are one of the leaders of the Shared Data Pool group. Where does cross-vendor collaboration happen today, and when do I need such data pools with multiple parties?
Boris
We would like to see it take place even more than it does today. Many of today’s IoT projects are bilateral – between someone who provides data and someone who then builds a model with the data, for example. Wherever we want to see solutions developed that scale, that scale beyond a project, that have the potential to become a product, these overarching data pools are very valuable.
Maybe you train a neural network for an automated quality inspection at one customer, in a specific environment. I will have great difficulty getting this neural network to work at a second customer; that will require a very large additional project effort. However, if I have trained the neural network with data from five or six different customers, perhaps even with data from several plants per customer, then I can be very confident that the resulting solution will also work for customers seven and eight.
Put another way: that it scales commercially. This is the big challenge today in the development of predictive models: we have to manage to get out of this perpetual project mode and create solutions and products that scale. That is difficult, because today many parties – machine builders, for example – sit on their data and say, from a gut feeling, no, I don’t actually want to pool my data with other companies in a larger data pool.
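To make that scaling argument concrete, here is a minimal, purely illustrative Python sketch (not from the episode): a classifier is trained on inspection data pooled from several synthetic “customers” and then evaluated on a customer that was held out entirely. The data loader, features and model choice are assumptions for illustration only.

```python
# Purely illustrative sketch: train on data pooled from several synthetic
# "customers" and check generalization on a held-out customer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def load_customer_data(customer_id, n_samples=500, n_features=8):
    """Hypothetical stand-in for reading one customer's labeled inspection data."""
    # Each customer gets a slightly shifted feature distribution to mimic
    # different plants and environments.
    X = rng.normal(loc=0.05 * customer_id, size=(n_samples, n_features))
    y = (X[:, 0] + 0.1 * rng.normal(size=n_samples) > 0).astype(int)  # pass/fail label
    return X, y

# Pool the data of customers 1-6 into one training set ...
pool = [load_customer_data(c) for c in range(1, 7)]
X_train = np.vstack([X for X, _ in pool])
y_train = np.concatenate([y for _, y in pool])

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# ... and evaluate on an unseen customer ("customer seven").
X_new, y_new = load_customer_data(7)
print("accuracy on held-out customer:", accuracy_score(y_new, model.predict(X_new)))
```

Whether the accuracy holds up on the held-out customer is exactly the test of whether a pooled model “also works for customers seven and eight” without a new project for each one.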
Solutions, offerings and services - A look at the technologies used [20:33]
Within the MindSphere World working group, you have made it happen and are working together on an IoT template generator to enable this collaboration across data. How exactly does what you have developed there work?
Boris
We have created the so-called MindSphere World Ecosystem Manager. This is a platform where I can announce use cases and also browse existing use cases in the marketplace, so that I can then decide: do I want to apply to be involved in one of these use cases, for example by contributing a capability? Beyond this marketplace, the moment a project is … I’ll call it “configuration-ready” … the parties involved are fixed, a basic commercial agreement is in place, and how to deal with IP has been discussed and agreed. Then I can configure all of that: I have configuration options for everything in my project setup, and I press the button. A framework contract for the entire project and all participants is then generated from the project setup; in some cases, specific contracts for individual service packages are generated as well.
For example, there is the trusted data processor, with whom a Data Processing Agreement is concluded between the parties involved and this party. It specifies exactly how the data pipelines are set up organizationally and what legal duties and obligations the trusted data processor must fulfill. Say, as an example, that all stakeholders in the project agreed that there is an option to audit. This audit option ensures that an independent external auditor checks whether the technical implementation of the shared data pool actually complies with the contractual agreements. Then the data processor must have audit clauses in their service contract, which say: with so and so many days’ notice, we can announce an audit, and then you have to let an outside party look at the whole matter.
That is an option we check off: okay, we need the auditor and the audit capability, and then additional passages are added to the contracts and contract templates.
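As a rough mental model of the “press the button” step described above, here is a small Python sketch. It is not the Ecosystem Manager’s actual data model or templates: all field names, party names, clause wording and the notice period are invented for illustration. A hypothetical project setup is rendered into framework-contract text, and the audit passages are only included when the stakeholders ticked the audit option.

```python
# Illustrative sketch only: deriving contract text from a project setup.
# Field names, parties, clause wording and notice period are hypothetical.
from string import Template

FRAMEWORK_TEMPLATE = Template(
    "Framework agreement for use case '$use_case'\n"
    "Parties: $parties\n"
    "Trusted data processor: $processor\n"
)

AUDIT_CLAUSE = Template(
    "Audit clause: any party may announce an audit of the shared data pool "
    "with $notice_days days' notice; an independent external auditor verifies "
    "that the technical implementation matches the contractual agreements.\n"
)

def generate_contract(project: dict) -> str:
    """Render the framework contract from the agreed project setup."""
    text = FRAMEWORK_TEMPLATE.substitute(
        use_case=project["use_case"],
        parties=", ".join(project["parties"]),
        processor=project["processor"],
    )
    # Audit passages are only added when the stakeholders selected that option.
    if project.get("audit_option"):
        text += AUDIT_CLAUSE.substitute(notice_days=project["audit_notice_days"])
    return text

project_setup = {
    "use_case": "automated quality inspection",
    "parties": ["Machine builder A", "Operator B", "Analytics provider C"],
    "processor": "Trusted Data Processor GmbH",
    "audit_option": True,
    "audit_notice_days": 14,
}

print(generate_contract(project_setup))
```

The point of the sketch is only the principle: the contract and its optional clauses are derived mechanically from whatever the parties agreed in the project setup.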
Results, Business Models and Best Practices - How Success is Measured [33:56]
Ulf, you said you have very different companies on board, and that it is all about bringing these competencies together. Can you summarize why this is an important issue for many of your members?
Ulf
Many companies have now realized that they can no longer achieve the right innovations or efficiency improvements on their own, but have to do it together with others: they use the complementary competencies of others and create ecosystems in order to offer things that nobody could offer alone. It is always this interplay of several players. That is a real mindset change, where companies no longer think “I can do this alone or bilaterally,” but work together in a larger group. Cross-supplier solutions are one example. They offer enormous potential because the companies do not simply fulfill a specification, but can think ahead together and thus achieve more.
All these new technologies that we are seeing – whether edge or cloud – become particularly valuable when data is shared and put into a new context: combining your data with another company’s data, or enriching it with weather data, and so on. Then completely new possibilities and insights arise, from which you can in turn draw a benefit.
Many companies fear losing trade secrets or running into legal problems as a result of this data exchange and the partial disclosure of their data. Of course, this also includes ensuring compliance. At this point, at the latest, the Ecosystem Manager comes into play. Contract templates tailored specifically to data use then protect the companies and also enable them to move into data-driven business models, which would otherwise not be so easy.