Critical infrastructure devices in the industrial and government sectors are perhaps the most valuable assets in the cyber sphere. Given their complexity, they are also often the product of a multi-vendor technology supply chain, distributing the onus of secure design across many stakeholders. In our monthly safety and security interview with Andrew Girson, Co-Founder and CEO of embedded consulting firm Barr Group, we discuss Stuxnet, zero-day vulnerabilities, and the importance of testing for embedded electronics that are part of our critical infrastructure.
The Industrial IoT (IIoT) is pushing companies to connect often resource-constrained brownfield systems to networks and, eventually, the Internet. Given the propensity for advanced threat actors to target these systems, what defensive options exist?
GIRSON: IIoT and the critical infrastructure it controls are obvious targets for any actor who wants to disrupt the daily activities we all take for granted. And there have been examples of poorly defended infrastructure installations that are far too open and vulnerable given their relative importance. Furthermore, as has been a topic in our political debates, critical infrastructure is aging and in need of potentially significant government investment for improvement.
The security implications of this situation are manifold. Because infrastructure elements are often deployed for years without replacement, retrofitting existing components to add new capabilities and connectivity is going to be essential. So engineers across disciplines will not necessarily be working from a clean sheet of paper, and this will compound the existing security challenges. Some of these challenges can be solved apart from the device itself. Physical security breaches on edge infrastructure devices can be prevented via better human-, barrier-, and video-based security at the infrastructure location. This will be necessary, since retrofitted legacy devices are not necessarily going to have the tamper resistance needed to defeat a physical attack. And if the site uses wireless connectivity for local communications, traffic must be encrypted and authenticated to defeat attackers who gain physical proximity and attempt eavesdropping and/or man-in-the-middle attacks.
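The authentication half of that requirement can be illustrated with a minimal sketch using Python's standard-library `hmac` module. The key, message format, and tag placement below are illustrative assumptions, not a specific product's protocol; a real deployment would also encrypt the payload (for example with an AEAD cipher such as AES-GCM) and provision per-device keys from secure storage.

```python
import hmac
import hashlib

# Shared secret provisioned to both the edge device and the gateway
# (hypothetical key; real systems derive per-device keys, never hard-code them).
KEY = b"example-shared-secret"

def authenticate(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes) -> bytes:
    """Split off the 32-byte tag and check it; raise if the message was altered."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("authentication failed: message tampered or forged")
    return payload

reading = b"pump01:flow=42.7"          # hypothetical sensor message
wire = authenticate(reading)
assert verify(wire) == reading          # intact message passes
tampered = wire[:7] + b"X" + wire[8:]   # attacker flips one byte in transit
try:
    verify(tampered)
except ValueError:
    pass                                # tampering is detected at the gateway
```

Note the use of `hmac.compare_digest` rather than `==`: a naive byte comparison can leak timing information that helps an attacker forge tags.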
The challenge of remote attackers penetrating devices over the Internet, however, must be solved with better Internet-facing security measures. The concern with retrofitted devices is whether the engineers testing for vulnerabilities are taking a holistic approach, looking not just at the new Internet connectivity that is bolted on, but also at the manner in which it is bolted on and how it interacts with the pre-existing hardware (and software). Testing should incorporate real-world scenarios in which the new part and the old part work together, to make sure they are secure when combined.
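The combined-path testing Girson describes can be sketched as follows. `LegacyController` and `NetworkAdapter` are hypothetical stand-ins for the pre-existing firmware logic and the retrofitted connectivity layer; the point is that security tests should drive hostile input through the new layer and confirm it never reaches the old, trusting code unvalidated.

```python
class LegacyController:
    """Stand-in for pre-existing firmware logic that trusts its input."""
    def set_valve(self, percent: int) -> int:
        # Legacy code assumes 0-100 and performs no range checking itself.
        return percent

class NetworkAdapter:
    """Hypothetical retrofitted layer: must validate before forwarding."""
    def __init__(self, controller: LegacyController):
        self.controller = controller

    def handle(self, raw: bytes) -> int:
        text = raw.decode("ascii")   # reject non-text payloads
        value = int(text)            # reject non-numeric commands
        if not 0 <= value <= 100:
            raise ValueError("valve setting out of range")
        return self.controller.set_valve(value)

# Combined-path tests: exercise the new and old parts together,
# not each in isolation.
adapter = NetworkAdapter(LegacyController())
assert adapter.handle(b"75") == 75           # legitimate command passes
for hostile in (b"999", b"-5", b"75; reboot", b"\xff\xfe"):
    try:
        adapter.handle(hostile)
        raise AssertionError("hostile input reached legacy logic")
    except ValueError:
        pass                                 # rejected at the boundary
```

Testing only the adapter's parser, or only the legacy controller, would miss exactly the seam between them that an attacker will probe.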
How can embedded organizations conduct software analysis for zero-day vulnerabilities such as those found in the Windows operating system that have been exploited by attacks like Stuxnet?
GIRSON: Zero-day vulnerabilities are a problem across the technology landscape, not just in embedded systems. However, embedded systems may be more vulnerable because they are often resource- and cost-constrained, because the physical attributes of their electro-mechanical design must be protected, and because the very wide variety of devices (even from a single manufacturer) creates more potential vulnerabilities to exploit. Nowhere has this been more obvious than in how Stuxnet was designed and how it was disseminated.
However, many embedded devices are designed using pre-existing software and hardware building blocks, and if those building blocks are designed with security in mind it can make the integrator’s job a bit easier. That said, even systems built up from secure building blocks must be adequately tested for security vulnerabilities within the overall system that can arise when they are integrated.
The financial value of zero-day vulnerabilities to both well-meaning and sinister organizations means they will remain a focus of discovery efforts, and there is no magic bullet that guarantees you have eliminated every potential security vulnerability. As a designer, it's important to remember that when you design in security and test for vulnerabilities, you are building a security perimeter. Build the fence high (that is, invest in security across the engineering disciplines) and you will reduce, though not eliminate, the potential for zero-day vulnerabilities.
As you’ve indicated in the past, designing for security is a multi-disciplinary venture. However, as Stuxnet proved with Siemens and Microsoft, it’s also a multi-vendor one. Do embedded development organizations and systems integrators need to adopt a “trust but verify” mentality in complex system designs, or does/will industry provide any fallback or assurances to ease this burden?
GIRSON: Well, this builds on my previous answer. Multi-vendor building blocks, both in hardware and software, are the norm when creating embedded devices that interface with the cloud. With increasing complexity, there has been a greater reliance on third-party software libraries (both open- and closed-source) and hardware modules (such as wireless communications modules). This is inevitable as no single organization can create and manage all the sophisticated components that go into an embedded device.
The question, though, is: who is responsible for the security of a building block? Certainly, the developer of the software library, communications module, or microcontroller should be taking security very seriously and, in an ideal world, would provide some sort of seal of approval or guarantee on security, or at least publish details about how its products were tested and the results of those tests.
But that does not necessarily exist today. As noted previously, software/hardware/systems have defects. Eliminating all of them is a laudable, if unattainable, goal. And, designers of the building blocks – even those that perform exhaustive security testing – are not going to be doing so within the overall design of your integrated embedded system. Also, designers of the integrated embedded system will be limited in their ability to test in real-world scenarios.
Take the analogy of automobiles. We all know these are very sophisticated products that are built up of a variety of electro-mechanical and software systems that are presumably tested to great length. But, if an automaker touts that its vehicle was test-driven over a million miles during development, that seemingly impressive testing statistic needs to be put into context; it represents only a fraction of the miles that this new automobile will be driven by new owners in just its first few weeks on the market.
What does this all mean? Understand that security is an arms race, for better or worse. Treat it seriously across all disciplines and reinforce your security perimeter to make your devices less vulnerable.