How do new EU regulations impact businesses, and what role do standardized product data play in digital transformation?
In this episode of the IoT Use Case Podcast, Ing. Madeleine Mickeleit talks with Thorsten Kroke, Managing Director of ECLASS e.V., and Stefan Willms, CEO of Morphe Information Design, about the challenges and opportunities of the European Union Deforestation Regulation (EUDR), the importance of high-quality, harmonized master data, and the role of standards like ECLASS in building a future-proof supply chain.
Podcast episode summary
The growing importance of standardized product data presents new challenges for businesses—especially in the context of the European Union Deforestation Regulation (EUDR).
This regulation requires transparent proof of material origins across the entire supply chain to support sustainability goals and ensure compliance with regulatory requirements.
A key topic of discussion is the need for semantic standards like ECLASS, which create a unified structure for product and material data. Without such standards, companies face inefficient processes, manual data mapping, and a lack of interoperability between systems. Solutions like the Digital Product Passport (DPP) and the Asset Administration Shell (AAS) enable structured tracking and availability of product information throughout its entire lifecycle.
Beyond regulatory compliance, standardized master data offer economic benefits: automated data exchange and structured data containers save both time and costs. The experts emphasize that digitalization is not just an obligation—it’s an opportunity—especially in times of skilled labor shortages, increasing efficiency demands, and ambitious sustainability goals.
For those looking to learn more about successful implementation strategies, best practices, and technical solutions, this episode provides valuable real-world insights.
Podcast interview
For your master data, lifecycle data, and sustainability data—such as disposal information or material origin documentation—a digital representation of your products will soon be essential. Why? Because a transparent and efficient supply chain is crucial—and because the EU regulation now requires these proofs.
Of course, all of this can be digitally managed. Today, we’ll show you a practical example of how this works—using POLIPOL and Nolte Küchen as case studies. The focus is on materials like wood, rubber, and other raw materials.
Joining this podcast episode: morphe Information Design, experts in product data management, represented by Stefan Willms, CEO and Owner. Also with us: ECLASS, a non-profit organization developing the leading standard for digital product descriptions, represented by Thorsten Kroke, Managing Director.
We’ll answer key questions:
What does the EU regulation require? How do you structure and digitize this data? And what should you consider when implementing it?
Enjoy the episode! Find all the details at www.iotusecase.com. Let’s go!
Welcome, Thorsten and Stefan!
How are you doing, and where are you at the moment? Thorsten, why don’t you start?
Thorsten
I’m doing great! We’re making excellent progress in standardization, working closely with the EU and regulatory bodies while staying up to date with legislative texts. But—this is why I’m excited to see you here again, Stefan—we’re also actively engaging with our partners and experts from European industry. I’m really looking forward to 2025 and thrilled to be part of this again!
That’s fantastic! We’re already diving right into the topic. Hello to you too, Stefan! How are you, and where are you right now?
Stefan
We’re also heavily involved in these areas, particularly when it comes to sustainability. Just like the pandemic exposed certain weaknesses, we’re now facing new challenges—especially regarding product data management. Our goal is to bridge these gaps. We don’t see EU regulations as just a mandatory requirement but rather as an opportunity to address long-standing issues and drive innovation.
Great to have you both here! Before we get into the details, how did it come about that you’re both here today? You’ve been working together for quite some time, haven’t you?
Thorsten
I’ve known Stefan for seven or eight years, since I started at ECLASS. With his company, he’s an outstanding expert and a crucial link between standardization and its practical application. You’re doing a fantastic job! Stefan has a high level of expertise as he gathers and abstracts his customers’ requirements.
Plus, he’s a great guy—he’s from the Lower Rhine region, what more could you ask for?
That’s fantastic! Before we jump into your project, let’s briefly talk about the new EU regulations. I’ve been following these developments for quite a while, also from a technical perspective—what’s feasible and what measures companies need to take. On one hand, these regulations offer enormous potential, but on the other hand, they require companies to take action.
The key question isn’t just what the EU is doing, but also how companies and organizations can be inspired and actively engaged in this transformation—with practical, hands-on solutions. Why is this topic so relevant right now? Thorsten, would you like to start, or would one of you like to take this question?
Thorsten
There are two key aspects here: First, Europe is undergoing digital transformation—clean master data is essential in this process. For efficient data exchange, we need standards, meaning semantic standards and standardized transport containers. One example: As we’ve discussed in other episodes, ECLASS fits perfectly into an AAS, a BMEcat, or OPC UA—covering the necessary standards.
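The idea that ECLASS supplies the semantics inside a transport container such as an AAS can be sketched in a few lines. This is a simplified illustration, not the official AAS metamodel, and the IRDI shown is a placeholder, not a real ECLASS code:

```python
# Minimal sketch (not the official AAS metamodel): a product property
# carried in a generic data container, pointing at an ECLASS concept
# via a semantic identifier. The IRDI below is illustrative only.
property_record = {
    "idShort": "Width",
    "semanticId": "0173-1#02-XXXXXX#001",  # illustrative ECLASS-style IRDI
    "value": 600,
    "unit": "mm",
}

def same_meaning(a: dict, b: dict) -> bool:
    """Two records mean the same thing if they share a semantic ID,
    regardless of what each system calls the field locally."""
    return a["semanticId"] == b["semanticId"]

# A partner system may label the field differently ("Breite"), yet the
# shared semantic ID keeps the records machine-comparable.
partner_record = {
    "idShort": "Breite",
    "semanticId": "0173-1#02-XXXXXX#001",
    "value": 600,
    "unit": "mm",
}
assert same_meaning(property_record, partner_record)
```

The point is the separation of concerns: the container (AAS, BMEcat, OPC UA) transports the data, while the ECLASS identifier carries the meaning.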
The second point—where Stefan comes into play—is a development that hasn’t received much attention yet: the new EU Deforestation Regulation. Stefan asked me if we at ECLASS had this on our radar. My honest answer was: No. He then pointed out that this regulation affects a vast range of products and isn’t just about meeting legal requirements but also about leveraging digital opportunities. But Stefan, you can explain this much better—I’ll hand it over to you.
Stefan
Maybe a few words about us first: We’ve been working intensively with product data for 12 years—actually, for 15 years now—and specifically with ECLASS for the last 12 years. Why ECLASS? Because product data doesn’t exist in isolation—it’s all about communication.
If you want to communicate across the supply chain, you need a common language—otherwise, it becomes complicated. And there’s hardly any way around ECLASS. A product classification system is essential for standardizing and exchanging product data. Companies that don’t adopt such standards face major integration challenges because they constantly need to adapt data from their upstream supply chain.
The EUDR adds a new dimension here: Historically, electronic catalogs and product data exchange were the main focus. But for years, we’ve been shifting away from catalog-, model-, or type-level data toward instance-level data, and the EUDR serves as an entry point for that.
In between, batch-level data can serve as an intermediate step. In the Nolte example you mentioned, we are clearly operating at the instance level, since every kitchen is an individual product with a batch size of one—a highly complex system.
Before we dive into tools and methods—you’ve already touched on it: clean master data and standardization. But before that, I’d like to address the demand from the business side. I already hinted at it in the introduction: It’s about supply chain transparency and the verification of specific information.
Stefan, can you explain why this is so relevant for your customers? Maybe with two examples—why do companies need to provide such proof? Why is it essential to document the origin of materials correctly?
Stefan
A great example from the Digital Product Passport (DPP) is the EU Deforestation Regulation (EUDR). The EU isn’t just sitting back and saying there’s nothing that can be done about rainforest deforestation. Their approach: Companies must provide proof of where their wood comes from and whether forests previously stood on the corresponding land. The focus is on taking action to stop supporting such practices and actively counteract them.
If I understand correctly, EUDR stands for European Union Deforestation Regulation—the EU regulation requiring deforestation-free supply chains. Would you say that’s accurate? It’s about preventing deforestation and forest-damaging practices, right?
Stefan
Exactly.
Thorsten
And that’s crucial for Stefan’s customers. To come back to your question: The end customer wants to know whether the wood for their kitchen comes from the rainforest or not. But it’s not just private customers asking these questions—it’s just as relevant in the industrial sector.
I was personally surprised by how many products are affected by this regulation. Manufacturers now need to systematically ask their suppliers where the raw materials and resources originate. And this shouldn’t happen via fax or phone, but in a structured, system-based way. That’s the core idea behind a digital, standardized data exchange.
Exactly! That’s an important point because this doesn’t just concern a single industry. Many different sectors are listening today—whether it’s the construction industry, dealing with building materials, or the automotive sector, tracking the origin of metals or plastics. The regulation affects almost every industry. And today, we’ve brought a concrete example with us.
Thorsten
Exactly.
Stefan
Yes, exactly. Adding to the cross-industry aspect: We work a lot with furniture, where wood plays a central role. However, the scope has expanded significantly, as the 2023 updates to the original deforestation regulation have broadened its reach. The focus is no longer just on wood but also on other products identified by their customs tariff codes—including palm oil, rubber, soy, and cattle.
All these raw materials are key drivers of deforestation, particularly in the Amazon. With the new regulation, the EU aims to prevent the unintended promotion of forest destruction through the purchase and import of such products.
Exactly! And this impacts the entire supply chain—not just individual segments. Depending on the size of a company, many stakeholders are involved, who not only have an interest in this but also need to generate the relevant data.
This is an incredibly broad topic.
[12:12] Challenges, potentials and status quo – This is what the use case looks like in practice
You’ve already mentioned master data and product data—but why are these data points still not readily available, even though they are clearly needed?
Stefan
Our favorite topic for the past 15 years! Essentially, it’s a long list of missed opportunities to address this issue early on. In fact, this has been a core topic for over 20 years, yet product data has long been neglected, and its value has often been underestimated.
With the rise of online shops, the topic gained more attention, but that is just one of many use cases. The true value of product data has only been fully recognized in the last five to ten years. Additionally, the amount of information tied to a product keeps growing.
It’s no longer just about regulatory requirements like certifications or compliance documents. Customer expectations have changed as well—today, people want to know much more than they did ten years ago. The EUDR is just one example—sustainability verification is one aspect, but the overall topic is much bigger.
Let’s illustrate this with real-world examples. What specific data is relevant here? We’ve already touched on a few aspects, but can you provide concrete examples—perhaps from Nolte Küchen? What material data plays a role there?
Stefan
The term “material” can be a bit tricky because it’s interpreted differently. In an SAP context, it refers to a specific component, whereas in general, it describes the raw material something is made of.
Let’s stick with Nolte as an example—but the same applies to upholstered furniture, such as a POLIPOL armchair. Both products are highly complex, but kitchens illustrate the principle particularly well. When you visit a kitchen studio, you choose a design line. However, the final kitchen you purchase is custom-made, batch size one. It fits precisely into your space, taking wall recesses, corners, and individual connections into account.
This means that the kitchen sector has always been geared toward batch size one, even before the term became widely used. A kitchen doesn’t fit one-to-one into another home, simply due to different room dimensions, connections, and individual configurations.
Now, looking at the material level: Today, companies must be able to manage detailed material information for every single purchased component—every board, hinge, and drawer—at the same time. For example, whether it is made of wood, coated, or has a specific surface finish. Especially with boards, multiple materials are often combined, but traditional systems don’t track this level of detail.
When a kitchen manufacturer assembles cabinets from predefined cabinet models, it doesn’t necessarily mean that each individual board has its own ERP or procurement article number. Often, there is no anchor point to document that a hinge consists of 17% aluminum and 12% plastic.
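The missing "anchor point" Stefan describes can be pictured as a thin data layer wrapped around the existing ERP article number. The sketch below is purely illustrative—article number, material names, and percentages are hypothetical, echoing the hinge example from the conversation:

```python
from dataclasses import dataclass, field

@dataclass
class MaterialShare:
    """One material's share of a component, as a percentage."""
    material: str
    percent: float

@dataclass
class ArticleMaterialRecord:
    """A composition layer around the article number—often the only
    anchor today's ERP systems provide for a purchased component."""
    article_number: str
    composition: list[MaterialShare] = field(default_factory=list)

    def share_of(self, material: str) -> float:
        return sum(s.percent for s in self.composition
                   if s.material == material)

# Hypothetical hinge from the episode: 17% aluminum, 12% plastic.
hinge = ArticleMaterialRecord(
    article_number="HNG-4711",  # illustrative article number
    composition=[MaterialShare("aluminum", 17.0),
                 MaterialShare("plastic", 12.0)],
)
```

The design choice here is additive: the ERP keeps its article number untouched, and the composition record is layered on top rather than forcing a new article number per board or hinge.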
Additionally, material origin plays a key role, along with other factors like recyclability. The core objective is to transition from a linear economy to a circular economy.
At the end of a product’s lifecycle, perhaps after 25 years, the disposal company needs enough information to separate and reuse materials correctly. Scandinavian pioneers are already producing office chairs with a recycled content of 30–50%.
And that’s exactly the goal. None of this works without proper information—it’s fundamentally an information issue at its core.
Thorsten
The problem without standardization is: How do you describe what type of wood your kitchen cabinets are made from?
Stefan, you once explained to me that it’s not always about a single tree—pressed materials can consist of various wood types. That means you need to map a wide range of wood types. Then, you also need to document the country of origin, the processing steps, and whether coatings were used and what they are made of.
How do you handle this? Do you use a selection list? Do you define codes? Do you classify wood types? If every manufacturer does this differently, the data cannot be digitally linked. Nolte might use the Latin names of wood species, while a Brazilian supplier might label them in Portuguese. Without standardization, these datasets cannot be merged.
This is why there are standards such as ECLASS, which define how different types of wood in combination with additional materials are mapped in terms of data. The frameworks are predefined, and everyone adheres to them. This is the essence of standardization.
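Thorsten's point about unmergeable labels can be shown with a tiny normalization sketch. The mapping table is illustrative (it is not an ECLASS value list), using the Latin species name as the canonical code purely as an example:

```python
# Illustrative sketch of why a shared value list matters: each partner
# labels the same wood species differently; mapping every label to one
# canonical code makes the datasets mergeable. The table is hypothetical,
# not an actual ECLASS value list.
CANONICAL_SPECIES = {
    "fichte": "Picea abies",       # German supplier label
    "spruce": "Picea abies",       # English catalog label
    "picea abies": "Picea abies",  # already canonical
}

def normalize_species(label: str) -> str:
    key = label.strip().lower()
    if key not in CANONICAL_SPECIES:
        # An unmapped label is a signal to extend the shared standard,
        # not to invent a local workaround.
        raise ValueError(f"unmapped species label: {label!r}")
    return CANONICAL_SPECIES[key]
```

Without such a shared list, each pair of partners needs its own mapping—the combinatorial effort the standard is meant to eliminate.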
I just thought of a question: We’ve talked a lot about the distinction between procurement data and material data—essentially, data that exists or should exist in an SAP system.
But are these also IoT data? I’m wondering which of this information consists of live operational data. In kitchen manufacturing, real-time data isn’t typically collected to track exactly where a hinge was produced or where a specific panel was processed.
So, are we only talking about product master data here, or is there also live data involved?
Stefan
That’s an extremely important point! Let’s extend this scenario and stick with the kitchen example. Imagine you bought your kitchen ten years ago, and now the door on the left side is squeaking or needs to be replaced due to a scratch. That means material properties, specifications, or even updates might need to be adjusted afterward.
For recyclers, this means they don’t just need the original data set but also the entire repair history of the kitchen—for example, if an appliance has been replaced. This places high demands on data architecture and management.
There needs to be a neutral instance for product data with some form of IoT integration. This is where the Asset Administration Shell comes in. It is already established in the Industry 4.0 context, but in this extended scenario, it takes on an entirely new dimension. It forms the true digital twin that accompanies the physical product throughout its entire lifecycle—including proactive repairs, modifications, updates, and even a second usage cycle on the second-hand market.
Only in this way do the data remain consistent, ensuring that at the end of the lifecycle, the recycler has access to the real, up-to-date dataset—rather than an outdated version from 20 years ago.
Exactly! That means a repair data record is created along the way, which could theoretically even be linked to live data.
Stefan
Exactly! The same data container could also store usage data from electrical appliances—for example, an error log from an extractor hood. This would allow repairs and fault analyses to be derived directly from the system.
Interesting!
We’ll probably only be fully connected by 2040, but we can come back to that later.
If you’re interested in diving deeper into Asset Administration Shell, the Digital Product Passport, or related topics, make sure to check out episodes 101 and 128 of my podcast. Episode 101 covers the Digital Twin with a focus on the Asset Administration Shell, while Episode 128 explores related topics—in both episodes, ECLASS plays a central role as a standard.
[19:38] Solutions, offerings and services – A look at the technologies used
But back to the main topic.
The key question is: How do I implement this? If I want to start working on this now, what do I need to consider during implementation?
Stefan
First, let’s clarify what requirements arise from this scenario. We need a neutral instance where all relevant data is stored and which remains permanently accessible for modifications. A kitchen, for example, is used for 25 years, so the system must be long-term stable.
The data structure must be consistent so that information remains understandable over long periods. This is where standardization bodies come into play, defining these containers and their standardized language. The language itself, ECLASS, plays a central role by structuring the various submodels—for example, for the DPP, the EUDR, or usage and error logs, such as those for an extractor hood.
This standardized language must be designed so that a supplier in Brazil uses the same terminology as a manufacturer in Europe. Only in this way can a semantically consistent data structure be maintained over time. Without ECLASS, we would repeatedly face the same interpretation and mapping issues we already know—standardization prevents exactly that.
One possible solution is a neutral data space where this container is stored. The concept of a data space is currently a buzzword, but there’s a lot behind it. Various EU projects are working on developing secure data spaces to facilitate data exchange along the supply chain.
Key requirements here are security and trust, as companies exchange sensitive information along the supply chain—often with stakeholders they don’t usually communicate with directly. The goal is to ensure that this exchange is secure and compliant with regulations.
Thorsten
Within a company, you actually need to take a bottom-up approach. First of all, you have to analyze which data already exists and then establish a structured process based on that. The first question is: What are the de facto requirements? It’s about understanding who generates the data, who needs it internally, who will use it externally, and where it will be integrated.
At the same time, de jure requirements must be examined—meaning the regulatory obligations imposed by law. Where must data be submitted, collected, or generated? Once this process is clearly defined, the next step is choosing the right standards. There are different approaches, but in many scenarios, a combination of Asset Administration Shell with ECLASS or OPC UA with ECLASS proves to be effective. The key is to consistently rely on established standards.
The next step is a layered approach. First, the database structure is reviewed to ensure that the standards function smoothly.
Then comes the business logic, defining roles and permissions: Who can edit the data? Who distributes it further? Finally, a graphical user interface (GUI) is developed, along with the management of data flows.
Only then does the question arise: Do I provide the data to a neutral data space? Do I share it exclusively with my customers? How do I involve suppliers? This is a comprehensive project that requires structured, process-driven planning. The key is to start from the core and build outward.
I have a question about this architecture you’re describing.
When we talk about a neutral instance, there are two levels: On the one hand, you could say that such an instance should be EU-driven—in other words, a kind of data space that enables data exchange in the first place. On the other hand, there is the company-hosted domain, where master data is managed separately.
Do you have a clear answer to this? There are EU initiatives moving in this direction. What do you see as the EU’s responsibility, and what falls under the responsibility of individual companies?
Thorsten
Everything that ensures maximum transparency should be the EU’s responsibility. This includes things like manufacturer information, production year, and safety-related details about hazardous materials—essentially, anything that falls under the Digital Product Passport (DPP).
At the same time, there are many company-specific data points that should only be shared with selected business partners rather than being publicly accessible. Business partners could be other companies or end customers, and from a technical standpoint, this is entirely feasible.
It’s also important to recognize that not only standardized but also highly individualized products exist. This is obvious for a Nolte kitchen in the B2C sector, but the same applies to customized machinery in the IoT world.
Take an example from the chemical industry: a Coriolis flow meter, which is a highly complex, custom-built product for companies like BASF. The same data structure and transparency requirements apply here.
Such information cannot be fully stored in a central system. It must be kept in a protected space where only relevant stakeholders have access—for instance, BASF and the manufacturer, or the end customer of a Nolte kitchen and a recycler. However, this information should not be accessible to the general public.
Exactly! There needs to be an overarching system for transparency in the market, but at the same time, a clear separation for protected data that should not be shared externally. I just wanted to emphasize this differentiation again.
Thorsten
Yes, exactly. The DPP will help in this regard by defining which data is private and which is public. Private data means that only specific stakeholders have access.
The EU also introduced the Gaia-X initiative. I have a critical view of it. The idea was good, but the initial results were not convincing. That’s why I see it as a positive development that the industry is now taking a stronger lead on this topic.
European companies know what they are doing, and experts like Stefan have a deep understanding of how data should flow, where it belongs, and how the structures should be set up.
Stefan
Yes, exactly. So what is the EU’s role in all of this—what is its setup in this context? The EU does not directly interfere with the technical implementation within the industry, but it sets the regulatory framework and, more importantly, establishes trust as a key factor.
This is also reflected in the Gaia-X initiative. It is less about providing a specific technical solution and more about regulatory requirements management: What must a data space fulfill to ensure all industry-relevant security criteria are met, allowing for trustworthy data exchange without companies unintentionally disclosing sensitive information they never intended to share?
Yes, exactly! I think we could dedicate an entire special episode to this topic—I don’t want to get too political here, but there are many initiatives in this space. My focus was on clarifying the architectural principle: What must a company manage on its own, and what is handled through collaborations?
But let’s get back to practical implementation. I often get the question: Why do some companies struggle with this implementation? What are the key pitfalls to watch out for?
Before we dive into that, just a quick preliminary question: You’ve talked a lot about containers. Some people may be familiar with the concept, while others may not. Can you briefly explain what these containers are and how they work?
Thorsten
A computer scientist would probably beat me for this explanation, but I’ll try to keep it simple. Imagine I send you an Excel document—this document itself is the container, because it defines the data format and serves as a framework for the information inside.
When you open the document, you find content. The column headers—such as number, manufacturer, color, weight in grams, length, width, height in millimeters—define the semantics, the meaning of the data. The rows then contain the actual values, representing the instantiation of this structure.
These could be types, batches, or individual products. So, the Excel file itself is the data container, the column headers define the semantics, and the values in the rows are the instantiated data stored in individual cells.
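Thorsten's Excel analogy—container, semantics, instances—can be sketched with a CSV file standing in for the spreadsheet. The column names and values below are illustrative:

```python
import csv
import io

# The file format is the container, the header row carries the
# semantics, and each data row is one instantiation (a type, a batch,
# or a single product). All names and values here are illustrative.
SEMANTICS = ["number", "manufacturer", "color", "weight_g"]
INSTANCES = [
    ["4711", "ExampleCo", "red", "350"],
    ["4712", "ExampleCo", "blue", "360"],
]

buffer = io.StringIO()              # stands in for the file on disk
writer = csv.writer(buffer)
writer.writerow(SEMANTICS)          # semantics: what each column means
writer.writerows(INSTANCES)         # instances: the actual values

# Because the semantics travel with the container, a receiving system
# can parse it without guesswork:
rows = list(csv.DictReader(io.StringIO(buffer.getvalue())))
assert rows[0]["color"] == "red"
```

The same three-layer split—container, semantics, instance data—carries over to richer containers like the AAS, where the semantics are ECLASS identifiers rather than plain column headers.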
Nice! And once I’ve established this standardization, there are also solution providers on the market that offer software templates as a foundation for companies to work with. Thorsten, I’m not sure if we can name specific examples here, but one company that comes to mind is Neoception from Pepperl+Fuchs. There are surely many more working on this kind of implementation.
Thorsten
Exactly. Standardization organizations also collaborate with partner companies. At ECLASS, for example, there are IT service providers who act as close cooperation partners. Stefan is one of them, but there are about 14 or 15 others I could name directly. Neoception is one of them.
These companies support implementation, providing templates, frameworks, and the necessary expertise to enable seamless data exchange. At the same time, companies must always consider the fundamental “Make or Buy” decision—whether to seek external support or handle the implementation internally.
Stefan
We are operating on two levels here: internal company data exchange and industry-wide data exchange, which is reflected in various industry lighthouse projects. These projects focus on intensive data exchange at the industry level, which is why they are typically organized as industry collaborations.
In the furniture sector, for example, the Association of German Furniture Manufacturers is driving this topic forward because no company can solve it alone. We see the same in the automotive industry with Catena-X and in at least 15 other industries currently addressing this challenge. Data exchange is a central issue across entire industries.
Now, shifting focus from the macro perspective to the individual company—what can a business practically do to prepare for this new reality? Beyond analyzing what is happening within its own industry and identifying potential networking opportunities, it is crucial to develop a clear understanding of future data management requirements and the criteria a product must meet.
The next step is identifying which of these requirements cannot be met with existing systems, structures, and approaches. From this, it becomes clear what extensions are necessary. You can think of it as layers that need to be added around existing systems to incorporate additional dimensions of data management.
Just yesterday, we had a conversation with an upholstery manufacturer where this was exactly the issue: Which data points can their current system not capture at all? One example we discussed involved metal components, which in their system are currently tracked only by an article number.
To incorporate further data dimensions, they would need to add new layers around these datasets. Right now, if they receive a hinge labeled with an article number, they have no way to determine whether it comes from China or England, because their systems do not store this information.
Yes, absolutely. If you’re listening right now and working on similar challenges—I know many of you are already engaged with use cases around transparency—I’ll simply link both contacts in the show notes. Feel free to connect with Stefan Willms and Thorsten Kroke on LinkedIn to explore the topic further.
There are numerous initiatives, working groups, and industry associations addressing these challenges. In Episode 137, I already discussed Catena-X—there you’ll find a more detailed explanation of this topic.
[32:20] Transferability, scaling and next steps – how you can utilize this use case
Once again, the question: Why do companies struggle with the implementation? Do you have best practices or customer examples that we can learn from? What are the key aspects to consider for a successful implementation?
Thorsten
I believe many companies struggle with complexity. That’s why it’s crucial to start with a concrete example, begin small, and work step by step—first by creating a nameplate, then introducing a standard, setting up a data container, and testing it with an initial business partner. Projects often fail because companies try to do too much from the start.
The second point is the lack of commitment. People play a huge role in this. You often hear statements like, “We’ve always done it this way,” or “I always send Müller a fax.” But fax machines will eventually be phased out, and that alone proves that things need to change.
The third point, which can also be framed positively, is: How much money is being wasted without digitalization? The German Economic Institute conducted a study with several thousand ECLASS users.
The result: A company with 5,000 employees can save up to 5.8 million euros per year simply by consistently using ECLASS and standardized data containers—just through the elimination of duplicate work and inefficient processes.
So, digitalization is not just a necessity—it’s an opportunity. With the skills shortage and the potential of AI, companies must shift from problem-thinking to solution orientation. But enough philosophy—Stefan, back to the practical side.
Stefan
Yes, absolutely, everything you said is correct. I completely agree and want to add a bit more to it.
This is exactly what I meant earlier by the magnifying glass effect—we are looking at something that has existed for a long time, namely historically grown silo structures, which are now reaching their limits.
This issue isn’t just about interoperability within a company, it extends across the entire supply chain. You can’t just re-enter all data internally when it was originally created in another process. Instead, you must ensure that data can be seamlessly transferred and processed—without manual rework.
Unfortunately, today’s reality often looks very different: Data is still manually transferred, whether via Excel or other workarounds. The biggest challenge is internal interoperability—between departments and different systems. We are stuck in endless data mapping, and that is the real killer of efficient processes.
The only solution is to align with standards. If a standard defines a field as “color,” then it should be called “color”—not something else just because someone thinks they’ve found a better term. If a required field is missing in the standard, it should be added to the standard instead of creating isolated solutions.
It’s pointless if the supplier calls it “Rot,” the manufacturer stores it as “Red,” and the end customer uses yet another variation. This only leads to chaos and duplicated effort.
The pressure is increasing, and we are approaching a point where these inefficient systems will finally hit their limits. If companies don’t act soon, they will face massive challenges.
Yes, an extremely important point. Your sentence, “We are mapping ourselves to death,” perfectly sums it up—we absolutely need to emphasize this in our communication because this is where the real effort begins. Data mapping is the critical bottleneck, but also the key to the solution.
That’s why my call to action for everyone is: Engage with this topic, familiarize yourself with existing standards. Today, we’ve shared best practices, and within our IoT Use Case network, there are many partners actively working on these use cases and implementing them. Take advantage of the initiatives already available in the market!
I’ll link all the relevant information in the show notes, where you can read everything again in detail. A big thank you from my side—thank you, Thorsten, thank you, Stefan, for your time and valuable insights. I think it became very clear where the challenges lie, what the EU is driving forward here, why this should be seen as an opportunity and what concrete solutions already exist.
Thank you for being part of this, and thanks to everyone for listening. I’ll give you the final word.
Stefan
Thank you, it was a lot of fun!
Thorsten
Thank you, always a pleasure and very exciting. Thanks!
Thank you both, and have a great week! Take care. Bye!