
Passive OT Monitoring: Detect attacks before they become critical


Listen to the IoT Use Case Podcast on Spotify.
Listen to the IoT Use Case Podcast on other platforms.

#197 IoT Use Case Podcast: Passive OT Monitoring with Rhebo

In episode 197 of the IoT Use Case Podcast, co-host Dr. Peter Schopf speaks with Jan Fischer, Head of Sales at Rhebo in Leipzig. The focus is on OT cybersecurity and the protection of industrial networks in critical infrastructure, manufacturing, and logistics. Jan explains how Rhebo passively monitors brownfield environments, makes anomalies visible, and why IT/OT convergence does not automatically mean fully merging both worlds. The discussion covers real incidents from the field, social engineering on LinkedIn, forgotten assets on the network, and the question of what role AI actually plays in OT security today.

Podcast episode summary

OT cybersecurity in brownfield environments: how Rhebo protects industrial networks through passive monitoring

In this episode, Jan Fischer shows how companies can raise their OT security to a new level in a pragmatic way without putting production networks or critical infrastructure at risk. The starting point is historically grown brownfield networks that include legacy protocols such as Profibus or Modbus, unencrypted HTTP communication, forgotten printers or Raspberry Pis on the network, and security components with severely delayed updates.

Rhebo’s solution is based on passive monitoring. The software taps into OT network traffic, distinguishes typical from atypical behavior patterns, and reports anomalies at an early stage. An assessment examines the existing infrastructure in depth. Warning signs include unexpected DHCP servers, the appearance of new protocols, data flows leaving the country, or compromised systems following social engineering attacks. A forensics and diagnostics team evaluates the findings and derives concrete actions, from closing security gaps to targeted security upgrades.

Jan also addresses current developments such as NIS2, the Cyber Resilience Act, and the growing demand for European on-prem solutions, while explaining the limits of AI in OT security. The episode is aimed at operators of critical infrastructure, manufacturing and logistics companies, and OT leaders who want to harden their networks and detect real attacks early.

Podcast interview

Today on the IoT Use Case Podcast: Cybersecurity in OT, in other words, in operational technology. We will learn what passive monitoring means in a cybersecurity context and how it works in combination with a red team. Our guest today is Rhebo from Saxony, represented by their Head of Sales, Jan Fischer. Many talk about IT and OT convergence. Today we discuss why it can be wise not to bring these two worlds too close together. Enjoy the episode.

Hello and welcome to the IoT Use Case Podcast. I am your co-host, Dr. Peter Schopf, but please feel free to call me Peter. I am stepping in for Madeleine Mickeleit. If you want to learn more about me, just jump back one episode. We recently recorded a special episode. Before we introduce ourselves in more detail, Jan, a question for you. Why should our listeners stay with us until the very end today?

Jan

Because the topic of cybersecurity is incredibly exciting. Every week and every month we see new reports in the news about attempts to compromise critical infrastructure. It is a very hot and important topic that we absolutely need to talk about.

Great. I am also very excited about this episode. Cybersecurity is not exactly my area of expertise, so I am all the more curious about the background. How do you actually set it up and what can go wrong? I am sure we have a few stories to explore. But first, tell us more about yourself, Jan. What is your role, where are you based, how long have you been with the company, and do you also have a technical background in cybersecurity?

Jan

Absolutely. My name is Jan Fischer. I am the Head of Sales at Rhebo, responsible for national and international sales as well as presales and parts of our support structure, which includes collaboration with our development teams. I have been with the company for more than three years. Before that, I spent over a decade at the IT corporation Dell Technologies, where I experienced almost every role, from inside sales to field sales and other functions, and I brought a lot of that experience with me to Rhebo. Here as Head of Sales, I am not just orchestrating things from the inside. I work directly with customers. I hear what their daily needs are and what worries and challenges they face, both through our support organization and our developer teams. I have a hands-on mentality and see our product in live environments. I also enjoy using it myself for presentation purposes, for example in a web demo. It gives people a first look and feel: what does it look like if I choose Rhebo, how do I work with it, and how do I derive meaningful results from the software?

I find it fascinating that you focus specifically on the OT side of cybersecurity. In IT cybersecurity there are many companies. OT is much more specialized. How do you differentiate your offering in the market, what is your unique selling point?

Jan

What sets us apart, our unique selling point, is our overall approach. We not only provide the right software, we also deliver the right services to support it. When you talk about cybersecurity in OT, in other words in operational technology, you are dealing with a very different core infrastructure within a company. This applies to both industrial environments and energy providers. You are operating in an environment with a fairly rigid infrastructure. It has evolved over time and is not meant to change frequently. Because in OT you do not design new protocols every day, and you do not constantly introduce new assets that promise major innovations. Systems remain intentionally established and difficult to compromise because they are already hardened.
This means our unique selling point is the combination of software and services. That is also why we are uniquely positioned. We do not just provide service in the sense of explaining how the software works, we can actually support forensic and diagnostic work. When an incident occurs, you are not left alone. You do not have to pull your core resources out of daily operations. You can simply call Rhebo and get the help you need.

[04:26] Challenges, potentials and status quo – This is what the use case looks like in practice

What are the first signs that customers discover with your software? Where does your service intervene first? What happens when an incident actually occurs?

Jan

Exactly. Let us assume we have an infrastructure that is now combined with our software. That does not mean you could not also consider other vendors. But we know this infrastructure and our system works with the detection patterns that are known there. It knows what typical behavior looks like and what atypical behavior looks like.
As soon as one of my assets initiates unfamiliar network communication, for example when a new protocol suddenly appears, we notice. Let us assume there is a deviation from Profinet, and suddenly the system is no longer speaking Profinet but an outdated Profibus. That shift from typical to atypical behavior instantly triggers an alert. Our system immediately notifies: there is an anomaly. It is not marked as acknowledged, so it is unknown and therefore represents a potential threat.
The first step is the identification of a threat. The second step is the ability to review the recording. Everything that happens within a defined time interval is captured and stored as a PCAP. Essentially, it is a short recording. You can see what was communicated, by whom, and to whom. You see both endpoints that were involved and the traffic that was generated.
Now I can take this PCAP and send the data to Rhebo if I am unable to assess what happened. Then there are different options. I can analyze it with Wireshark and display what is happening inside the PCAP. Which protocols were used? Which ports were involved? All of that is visible in our controller, the primary instance, and in more detail inside the PCAP file. Anyone who opens the PCAP in Wireshark, or in another analysis tool, sees the full truth of what the captured data reveals.
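The detection logic described here, comparing live traffic against a learned baseline of typical behavior, can be illustrated with a minimal sketch. This is not Rhebo's implementation; the flow model, asset names, and protocols are illustrative assumptions.

```python
# Illustrative sketch of baseline-based anomaly detection, not Rhebo's engine.
# A "flow" is modeled as (source, destination, protocol); anything outside the
# learned baseline is reported as an anomaly for a human analyst to review.

from typing import NamedTuple

class Flow(NamedTuple):
    src: str
    dst: str
    protocol: str

def learn_baseline(observed_flows):
    """Learning phase: record every flow seen during normal operation."""
    return set(observed_flows)

def detect_anomalies(baseline, new_flows):
    """Monitoring phase: report any flow that was never seen while learning."""
    return [flow for flow in new_flows if flow not in baseline]

# Learning phase: the PLC speaks Profinet to the SCADA system.
baseline = learn_baseline([
    Flow("plc-01", "scada-01", "profinet"),
    Flow("hmi-01", "plc-01", "profinet"),
])

# Monitoring phase: the same asset suddenly speaks Profibus -> anomaly.
alerts = detect_anomalies(baseline, [
    Flow("plc-01", "scada-01", "profinet"),   # typical, silently accepted
    Flow("plc-01", "scada-01", "profibus"),   # atypical protocol -> alert
])

for flow in alerts:
    print(f"ANOMALY: {flow.src} -> {flow.dst} using unexpected protocol {flow.protocol}")
```

In practice the baseline would be learned from captured packets rather than written by hand, and each alert would carry a PCAP excerpt for forensic review.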

What always fascinates me is that IT security specialists sometimes say they receive thousands of attacks per month. Since you are connected to the internet, you are constantly exposed to some kind of attack, however those may be defined, ranging from spam emails to fully orchestrated attacks. In OT, it is probably not the case that you are under constant attack. But if something does happen, it is highly critical, because the attacker is likely very specific and targeted. Is that the right way to look at this distinction?

Jan
You already analyzed that very well, Peter. It is indeed the case that the first gateway into a customer’s infrastructure usually comes through IT. Typically, there is an IT or office network that communicates externally. There are various ways this happens, one of them being Bring Your Own Device. These are end devices of individual users, who may use their smartphone for operational tasks. There is a separate, isolated area on that smartphone that communicates with the office IT. Data can be exchanged and transferred through it.
Parallel to that, there is another level, a second demilitarized zone, which offers a pathway into OT. However, there are not many gates leading inward. Usually there are only a few, sometimes even just one. Moving from zone one to zone two is therefore not easily possible, because you do not have endless entry points, but maybe only one or two. This separation is very strict. It is particularly important today, as we see increasing interest in merging OT and IT more closely.
However, this always has to be weighed against the potential risks. Does it really make sense to interconnect both infrastructures in such a way that all the many entry points I mentioned for IT would also be open for OT? That creates additional attack vectors. A real-world example is again the smartphone. The smartphone becomes an entry point. If I carry it from IT into OT, it becomes even more dangerous than if there is a separate tunnel where communication flows outward from OT through IT. Clear separation always creates an additional fire barrier, an extra layer of protection that benefits me. If I decide against that, I automatically expose myself to more risk.

I can easily imagine that there is a constant tradeoff. On the one hand, I want to accelerate digitalization and make use of data, enable this typical IT and OT convergence, and bring IT capabilities into the shopfloor and OT. On the other hand, those exact risks argue against integrating things too tightly.

Jan

I would like to explore that point a bit further, especially because you mention convergence. It is particularly important since people often only think about transferring ideas. But this does not have to result in merging infrastructures. Everything learned in IT that already creates value there does not automatically require the two infrastructures to be combined. You can transfer those learnings from IT into OT. This means, I know protocols, I know the value behind those protocols, and I can define which access parameters are relevant for specific assets. In this way, I can build up a zero trust architecture step by step, if I want to, simply by applying IT learnings within OT.
The benefit is that I do not learn from OT outward, but rather from IT inward. Everything that works well gets brought over into OT. This creates value on both sides without merging the infrastructures.

And by that you mainly mean insights derived from data. Meaning, I transfer production data, analyze it, gain insights from it, and then apply resulting adjustments, for example control changes or similar measures, back into the system.

Jan

Exactly.

[10:01] Solutions, offerings and services – A look at the technologies used

So how do you proceed? Let us say a company approaches you. Before that, maybe one more question. What kind of companies do you support, how do you categorize your customers? And then we pick out a category and run through an example. What would be step one, step two, step three?

Jan

Exactly. Our position in the market applies to any company that uses operational technology. But you do not always have to place us there. There are infrastructures that are very small. Let us assume I have an automated gripper arm, really just that one gripper arm, and the rest is controlled by IT. That would still be an operational network, but based on a single gripper arm. In that case, it might not be particularly relevant to protect it extensively, because that arm may only be carrying coffee.
But if we talk about a large infrastructure, for example a manufacturing company or a major logistics provider, where several robots retrieve packages from a high-bay warehouse and move them into shipping, then we have a robotic structure. This operational technology consolidates and channels the demand resulting from customer orders. In the background, packages are being sorted, packed, and eventually shipped to the customer. In such an environment, operational networks are clearly present, and this is where we fit in.
Meaning, in every environment where operational technologies exist, one could engage with Rhebo, provided that a risk assessment shows the presence of a relatively large and critical infrastructure that must be protected. Because, to put it bluntly, there might be millions at stake if a warehouse comes to a halt for days, weeks, months, or even years. That would be catastrophic. Some companies recognize themselves in this scenario and say: We should talk to Rhebo.
We also support critical infrastructures, the ones we typically know as energy suppliers. At home, we depend on someone ensuring that our lights are on and electricity is available. We can be found in that domain as well. Those are our classic fields of application.
And to answer your question, how does the process work? You get in touch with us and say: I am interested in a Rhebo product. Then someone from my team or I personally make sure you get to know the product. What can we do with it?
After that, you can make a decision. We always recommend starting with an assessment to get to know the infrastructure and derive the first necessary actions. What would need to be done after this assessment, and is the Rhebo software actually the right choice for me? At the same time, this means: just because I conduct this assessment with Rhebo and analyze my entire infrastructure does not mean I am automatically tied to Rhebo. Even after the assessment and the results, you can still choose another solution or keep your existing setup. However, with this assessment, you are able to verify whether you are truly prepared or whether potential entry points still exist for someone who could compromise your infrastructure.

Do you have some examples, maybe a bit of inside insight, showing what typically comes to light during such an assessment?

Jan

As I mentioned earlier, you usually have an established infrastructure, and in OT we call that Brownfield. It is the opposite of Greenfield. Greenfield, just to clarify briefly, means that the infrastructure does not yet exist. You can build everything completely new. Hence the nice image of a fresh green field.

Completely new construction.

Jan

Exactly. And Brownfield in OT means building on an existing infrastructure. Communication profiles are already in place, including outdated protocols or the use of common IT protocols. For example, there might be a software application that is accessed in a browser using HTTP rather than HTTPS. The communication is therefore not secured.
How does our software work? You connect our system and it collects all data, every piece of network communication that occurs between endpoints. This means that when our software is implemented for the first time, everything is initially considered a potential threat. After that, we sit down together and use predefined metrics in the software to identify what is particularly critical. We might see communication between two endpoints or two operational technologies that communicate with each other and externally.
It can happen that there is an asset that was simply forgotten. That asset now has a modern Raspberry Pi attached to it, which runs a DHCP server and distributes IP addresses to new devices. Those devices then establish communication to the outside world. This bypasses all protective measures between IT and OT and provides an additional communication platform, simply because it was not prohibited to set up a DHCP server. This is dangerous, because communication is not only leaving the company but also the country.
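The rogue DHCP scenario can be sketched in a few lines. Real detection would parse DHCP OFFER messages out of captured traffic; here packets are simplified to dictionaries, and the allowlisted server address is an assumption.

```python
# Illustrative rogue-DHCP-server detection; packets are simplified dictionaries.
# A real monitor would parse DHCPOFFER messages out of captured network traffic.

KNOWN_DHCP_SERVERS = {"10.0.0.1"}  # assumption: the single sanctioned DHCP server

def find_rogue_dhcp(packets):
    """Flag DHCP offers coming from any server not on the allowlist."""
    return [
        p for p in packets
        if p["type"] == "DHCPOFFER" and p["server"] not in KNOWN_DHCP_SERVERS
    ]

packets = [
    {"type": "DHCPOFFER", "server": "10.0.0.1", "offered_ip": "10.0.0.55"},
    # A forgotten Raspberry Pi quietly handing out addresses:
    {"type": "DHCPOFFER", "server": "10.0.7.99", "offered_ip": "10.0.7.120"},
]

rogue = find_rogue_dhcp(packets)
for p in rogue:
    print(f"ALERT: unexpected DHCP server {p['server']} handing out {p['offered_ip']}")
```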
We have already seen communication going abroad, far away, and not only small amounts of data but megabytes. Anyone who works with text documents knows how quickly you reach large volumes. That is a lot of information that can leave a company. In such cases, we intervene immediately and say during the analysis: We need to talk to someone in charge right now. Is there an imminent threat? There is communication going abroad. Is it legitimate or not?
This is exactly what we do as part of the onboarding process. We check whether our system continues to collect data and whether there is already so much happening that we should pause and talk immediately. Or the other side might say: This is fine, we will take our time afterward, sit together for a day, and review everything jointly. After the agreed period, we look at our system. What did we find? Our forensic and diagnostics teams gather everything and provide recommendations.
That means we see old protocols that are no longer in use. We see communication attempts where requests are sent but no responses return, for example in specific TCP or IP connections.
We see old protocols like Profibus or Modbus. Modbus is still active as a protocol, but it might not be used in that particular environment. Then there is a single asset trying to communicate through it. Or we see a network printer that someone forgot about. A common reaction is: We did not even know that this thing still exists and is generating traffic.
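The "requests without responses" symptom amounts to finding one-sided communication. A minimal sketch, assuming flows reduced to (source, destination) pairs with illustrative asset names:

```python
# Illustrative check for one-sided communication: endpoints that keep sending
# requests but never receive any packet back. Flows are reduced to (src, dst).

def unanswered_pairs(records):
    """Return (src, dst) pairs with traffic in one direction only."""
    seen = set(records)
    return sorted(
        pair for pair in seen
        if (pair[1], pair[0]) not in seen  # no packet ever flowed back
    )

records = [
    ("hmi-01", "plc-01"), ("plc-01", "hmi-01"),  # healthy two-way exchange
    ("scada-01", "printer-07"),                  # forgotten printer never answers
    ("scada-01", "printer-07"),
]

for src, dst in unanswered_pairs(records):
    print(f"SUSPECT: {src} keeps contacting {dst} with no response")
```

Pairs that never see a reply point at decommissioned assets, dead peers, or protocols that are configured but no longer in use.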

These are the many small things. Unfortunately, there are also bigger ones, like a Raspberry Pi hidden somewhere that gets noticed for the first time. What we also see are outdated software versions. For example, I have components that I purchased from a major manufacturer for network protection, but they are running software from 2011 even though the current version from 2025 is available. Then we say: Please update immediately. We provide concrete guidance on where to update the software from the manufacturer, including a link.
These are the useful elements of a report: it may seem very simple at first glance, yet it contains many pages of highly actionable information.

Yes, it is widely known that delayed updates are one of the biggest problems in cybersecurity. That definitely makes sense. So that would be the first phase, the assessment and report that highlights vulnerabilities. What usually happens next?

Jan

Exactly. Then we ask the important question of whether Rhebo and our software are convincing and appealing, and how we can move forward. We leave the already integrated infrastructure in place, meaning our appliance or software remains with the customer and is transferred into active operations. Technically, it is already active, since the system is already detecting anomalies. But the company needs to decide for itself. Do I want to keep this state, do I want to operate this passive system permanently, or do I only want the report and send back the hardware and software?
The next step would be transitioning into regular operations. From that point on, we offer ongoing support from our team. Because if there is one thing cybersecurity has taught us, it is that while teams and responsibilities exist, dedicated forensic and diagnostics specialists are not always available. And you need to rely on something. The labor market unfortunately shows that demand is high while resources are limited. This is exactly where we offer the option to rely on Rhebo’s service, call our experts, and request assistance.

Very good. You just mentioned passive software. My understanding is that OT security must observe passively rather than sit in the critical path, to avoid any disruptions. How does that work exactly and what impact, or rather non-impact, does it have?

Jan

Simply put, a passive system does not generate any additional network traffic, so it does not add extra load. In critical infrastructures, that can be extremely dangerous. Extra load can cause irreversible damage, because communication might not take place or becomes delayed. Then we are back to the high bay warehouse or the robots. I need to communicate with robots in very short cycles. Commands must be verified precisely and in quick succession. If I generate additional traffic, I also create additional latency, meaning response times between sending and receiving.
A passive system does not do that. The passive system observes the traffic and says: This is what I see, but I do not interfere. That is how our primary instance works, our controller instance and our sensors. The name says it. The sensors observe. They do not have any effect on the running system. Operations are not disrupted. Packets are only recorded and analyzed.
If I have a remote location, we speak of fleet management. I have a large fleet, like battery storage systems or autonomous vehicles. Everything that generates communication there cannot be monitored using classic local network tools, because there is no on-premises instance continuously watching. It is not part of my local network; it is off-premises, perhaps communicating via 5G or LTE.
In such cases, we work with sensors, with an agent, an additional piece of software. That allows us to monitor a mobile asset in the field. The data collected there is gathered, bundled, and sent to our primary instance. We can place a broker in between to handle the flow, ensuring that a huge number of requests does not arrive at the same time from the same system. The broker packages the data and sends it at the right moment to the observer, the evaluation instance that can be customized according to the customer's needs. That is where the analysis takes place.
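The broker's smoothing role can be sketched as a simple batching queue. The class name, batch size, and reading format are illustrative assumptions, not Rhebo's actual protocol.

```python
# Illustrative batching broker: sensor readings are buffered and forwarded in
# bundles, so the central instance never faces a burst of simultaneous requests.

class BatchingBroker:
    def __init__(self, forward, batch_size=3):
        self.forward = forward      # callable that delivers one bundle upstream
        self.batch_size = batch_size
        self.buffer = []

    def submit(self, reading):
        """Accept one sensor reading; forward only when a full batch is ready."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.forward(self.buffer)
            self.buffer = []

delivered = []
broker = BatchingBroker(forward=delivered.append, batch_size=3)

for i in range(7):  # seven readings arrive one by one from a field sensor
    broker.submit({"sensor": "battery-42", "reading": i})

# Two full batches were forwarded; one reading still waits in the buffer.
print(len(delivered), len(broker.buffer))  # -> 2 1
```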
Is my battery still sufficiently charged or do I need to go to a charging station? Or is one of my virtual power plants in the field, such as from a photovoltaic provider, currently compromised or failing because a critical charge level has been reached? The battery no longer supplies the household, although it is a very sunny day and energy should be generated. That is suspicious. These are situations where you need to look closely and develop a feel for your own data.

So wherever there is a factory, you have the ability, using the appropriate hardware, to read and monitor the infrastructure. And with fleet management, meaning everything operating in the field, you ping the various fleet components, whether they are power plants, solar panels, or vehicles, to check their status and confirm everything is working properly.

Jan

Almost the other way around. We would place something there, either an agent or a piece of our software, that sends information back and says: Something is happening here and we need to intervene. And actually not on the panels, but on the inverters. The interesting part is not the individual panel on the roof and whether it is working. Technologies have advanced to the point where a single panel is not so critical anymore. The inverter is what matters. That is what I need to protect and ensure it keeps functioning. It is also connected to a smart meter. And if you look at trends in Germany, smart meters may not have seemed important in the past. But as renewable energy adoption grows, you can clearly see that it is wise to validate your smart meter as well. What data from the inverter is actually reaching the meter?

Okay, let me take one step back to the topic of trends in 2025 or even looking ahead into 2026. What is happening right now in the OT cybersecurity market?

Jan

What is happening, as I mentioned earlier, is first of all that more and more attacks are being noticed. Take major manufacturers of vehicles or mobile machines operating on our roads. Attackers set timelines. They say: We have encrypted your data. You no longer have access. You have until the end of the week to pay; otherwise, your customers' data will be exposed. What we also see is that the awareness exists that cybersecurity software and services are needed. But the purchase decision is often still not made.
That is partly because the vendor landscape is widely scattered. Most vendors offer cybersecurity for IT and OT, many of them from abroad. Therefore, a major trend in the market is: we want security, and specifically cybersecurity, made in Germany. People want the expertise to remain in the country and want the security solution sourced domestically. At the same time, everyone still compares every provider that exists globally and hopes that the new laws being introduced will already protect them, even though the laws cannot do that on their own.
Take cybersecurity in the context of NIS2. There are regulations defining which precautions a supplier must have taken. Compare that with the German IT Security Act 2.0, and the question arises: where is my hosting provider located, and is my data actually stored in Germany or in Europe? Then a hyperscaler like Microsoft says: We cannot guarantee that 100 percent of the data will only be processed in Germany or in the EU. So we have the desire and the trend topic of cybersecurity, the feeling that we urgently need to act. But the reality is: in order to make systems comparable, we still hand our data over to external parties and cannot be entirely sure where it resides. So yes. Security is a major trend.

Of course the EU directives such as NIS2 and the Cyber Resilience Act also play a significant role, including their ongoing development. I see this as fundamentally positive, because if you look at the current geopolitical situation, including potential state actors who could take action, it is not something you can afford to ignore, especially in critical infrastructures.

Jan

Especially in the industrial sector, the topic of critical infrastructure is still often neglected. The awareness is there. Everyone understands that we need to act. We have left these infrastructures untouched for a long time and allowed them to grow historically. Now it is time to protect and harden them. On the other hand, people want to do that with solutions from their own country but do not always support local companies consistently. Instead they say: We must act, but we want to keep all options open regarding how we do it. With us, the decision would be straightforward. We have a German solution that is developed here, in the beautiful city of Leipzig, where we have excellent developers.

Great. How does an attack or intrusion actually happen? What exactly do attackers do and how do they proceed? How do they bypass existing security measures?

Jan

We had a good example. We simply call it social engineering, without naming the customer. Someone was contacted through their LinkedIn profile by a recruiter who acted very friendly and open, as if she was genuinely interested in the person. In reality, she was primarily interested in the company. Over time, a relationship was built until the moment came to “verify” data for a potential job interview. The person was asked to download a document and enter some updated personal information, supposedly with mutual consent.
The note said that the document contained active control components and required permission for communication to function properly. That permission was granted. In the background, an agent was installed that could open communication channels, essentially creating a user profile and providing services externally based on the compromised credentials. The system then searched for loopholes on the device used for the download, which in this case unfortunately was a corporate system embedded deep within the infrastructure. This made it possible for communication to be routed outside.
This is exactly the kind of thing we detect. That system, which could be a laptop or desktop, behaves atypically even internally, but that goes unnoticed as long as there is no outbound communication.
It becomes relevant for us the moment it starts communicating externally, when it leaves the internal workspace and tries to reach others. That is the moment we see it. Then our anomaly detection triggers clearly and reports that something highly unusual is happening, prompting immediate action.

That makes it very tangible and also a bit frightening. Because downloading a document, especially when you believe you are communicating with a legitimate person, can happen very quickly. Awareness is crucial, meaning we have to talk about these issues. Especially in OT, because if the office IT goes down for a while, you can often manage depending on the situation. But if production comes to a halt, especially for an extended period, that is no longer acceptable.

Jan

Indeed.

[27:46] Transferability, scaling and next steps – Here’s how you can use this use case

How do you see things developing from here? What are you working on, what kinds of solutions are emerging? When you think about AI agents powered by generative AI, the possibilities are becoming increasingly complex and extensive, whether it is social engineering with personalized emails or entirely new attack vectors. Can you share something about that?

Jan

Yes. I do call it an agent, but in our case it is not a classic agent. An agent is typically a relatively large component that offers many capabilities, including management and control. With us it is simpler. I call it an agent to make it easier to grasp, but at its core it just collects data. Everything happening on the respective asset is recorded, analyzed, bundled, and sent to the primary instance. So it is not an agent in the sense that you would know from antivirus software.
This is also where the trend involving AI becomes relevant for us. If you look closely, people often focus only on convenience. What can AI do to make my day easier? That is about data evaluation. In cybersecurity, however, it is the opposite. AI can become dangerous if it is trained incorrectly. And there is no template. You cannot just say: AI, learn my infrastructure, and then roll out the same model to the next city expecting it to work the same way. It will not, because different protocols are in use and different needs exist.
This kind of demand-driven AI modeling is not yet possible in a way that allows simple templating. Take a major airport with 20 gates. You cannot set up the exact same scenario at each gate and use the same AI there to analyze and optimize. There are too many distinct structures. The security network with cameras operates in a separate system so that it cannot be compromised. The access control system uses badge readers and NFC, also likely in a separate network. How can I protect all of this with a single AI? Our recommendation is: avoid applying AI in a one-size-fits-all manner.
We have published studies on this with other companies and institutions. The central question was: does AI currently deliver real added value in operational cybersecurity? Our position today is: not yet. It is still not that intelligent. It compares information, processes it, and delivers interpretations. But there is no real intelligence behind it. There is no awareness in this software that independently concludes whether something is truly a threat. It works with schemes, patterns, and algorithms.
Human intelligence, on the other hand, says: this is a threat. Even if an asset communicates with a new protocol for the first time and that does not look dangerous at first glance, it can still be critical. It is a new asset and it stands out. I as a human judge it differently than artificial intelligence. AI has no emotions and no intuition.
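The "schemes, patterns, and algorithms" Jan describes can be illustrated with a deliberately simple, hypothetical baseline check (not a product feature): the system learns which (asset, protocol) pairs are typical and flags any deviation, even when the new protocol looks harmless in isolation. The asset names and protocols below are made up for the example.

```python
# Hypothetical pattern-based anomaly check (illustration only).
# The baseline is a set of (asset, protocol) pairs learned from
# "typical" traffic during a monitoring phase.
baseline = {
    ("plc-07", "modbus"),
    ("plc-07", "s7comm"),
    ("hmi-02", "http"),
}


def check(asset, protocol):
    """Flag any (asset, protocol) pair not seen in the learned baseline."""
    if (asset, protocol) not in baseline:
        return f"ANOMALY: {asset} speaks {protocol} for the first time"
    return "ok"


print(check("plc-07", "modbus"))  # known pattern -> "ok"
print(check("plc-07", "dns"))     # new protocol on a known asset -> anomaly
```

Such a rule mechanically flags every deviation; deciding whether a flagged first-time protocol is actually a threat is exactly the judgment call that, as Jan argues, still requires a human analyst.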

Yes, exactly. And that brings us to an important point. I also like to distinguish between classical AI, where algorithms are specifically trained for certain use cases, and generative AI. Especially in the cybersecurity environment, classical AI has been tested and used so far because it focuses on pattern recognition. You need to train patterns and try to ensure that this classical AI triggers alerts when anomalies occur. That is extremely difficult because, as you explained earlier, you need a large amount of data and atypical patterns to successfully train a model. That is not easy. Generative AI, by contrast, has completely different use cases. It is not intended for such complex pattern recognition. It works with prompts, with natural language input, with text.
I agree with you. Even though there is a lot of talk about AI and its potential, pattern recognition at the level of complexity and speed required in cybersecurity is not something generative AI can deliver at all.

Jan

But even there, with prompts, there have already been plenty of discussions about how prompts can be manipulated. And we must not forget: when one side, let us call it the good side, the blue team, is working to protect the infrastructure, there is the red team on the other side, the adversarial side, that deliberately attacks it. Just because we want to use AI to protect ourselves does not mean the attackers are not also using AI, possibly even the very same AI, to compromise us. That opens up entirely new risks.

I also believe that generative AI will primarily be used for attacks, especially in social engineering. Could you briefly explain the concept of red team and blue team again? Do companies hire you to attack them intentionally in order to uncover weaknesses, in the sense of a red team approach?

Jan

We do not handle that ourselves directly. We always want to avoid giving the impression that everything must be done exclusively with Rhebo. We make recommendations and refer clients to companies with whom we deliver such operations jointly. We have a preferred partner for this, deliberately a partner and not Rhebo itself, who represents us and defines the scope of the red team exercise. The classical term for this is penetration testing, or pentesting. That term often sounds like an obligatory compliance checkbox, as in: let us just do a pentest. But what really matters is how you structure the pentest and translate it into a red team scenario. Do you want to test from within the existing infrastructure, meaning an internal pentest, or do you also want to legally authorize the contracted company to start the test externally? The stance then is: we believe we are safe, but we want to put in writing that you have legal coverage when attacking us and finding vulnerabilities; you may go up to this defined point and no further. These boundaries are jointly defined. We provide recommendations for this, but we do not conduct the tests ourselves. We are part of the blue team. We are part of the protection system. That is our role. For red team operations, we recommend companies with whom we have worked in a trusted relationship for many years.

Thank you very much, Jan. I found this super interesting. Would you like to share some closing thoughts? What would you like the listeners to take away, what should they pay particular attention to?

Jan

From my point of view, the most important thing is to create transparency in the market by comparing the vendors we have here in Germany or in the EU, and by prioritizing data privacy and data security with solutions that run on premises and remain under direct control. That is what we strongly recommend. If I hand over my data right from the start by hosting my security system in a decentralized data center somewhere else and simply consider it hardened and protected, things become risky. That is exactly where the danger lies: it creates an entry point that I cannot protect myself, and I must trust that someone else has done it correctly.
That is what I would like to emphasize. It never hurts to invest five minutes in a conversation, whether with Rhebo or with one of our competitors from Germany. We have a different mindset here. We are not as strongly influenced by American thinking. In many cases that feels more comfortable because it is familiar, and that familiarity could be leveraged much more. What matters to me is that we are given the chance to be seen and to create transparency about how we operate. I am not saying: do not go to the competition. I would simply like to see everyone in the EU given a fair chance. That is what we are hoping for: strengthening the companies that have grown here.
That is what makes us strong as Rhebo. We were founded in Leipzig. Our founders come from Leipzig and the surrounding area. That makes us tangible. And that is something you can benefit from and in my opinion also should.

Very good. Jan, what is the best way for people to reach you if listeners now feel they definitely want to talk to you?

Jan

There are plenty of ways. The easiest is LinkedIn or our website, rhebo.com. You can reach us there and directly arrange a meeting with me or with one of my colleagues.

Then all that remains for me to say is thank you and see you next time.
