Increasingly, OEMs are demanding greater security for their next generation systems. In turn, the burden is on system and system-on-a-chip (SoC) designers to protect sensitive assets against unauthorized or malicious access. In most instances, the design community is opting for the hardware rather than the software approach to engineer security into their designs.
Today, designers are presented with a variety of security processor brands. However, most of them follow virtually the same chip architecture. It is best characterized as having two domains, one non-secure and the other secure, with a single bit dividing the two, Fig. 1.
Moreover, different applications from different entities (where an entity may be the SoC vendor, device OEM, service provider, end user or other participant in the ecosystem of the device) may be running in the same secure domain. However, they are not isolated from one another, and they may be able to access not only their own keys, but also keys from other applications. Hardware partitioning isn't between the different entities; it's only between secure and non-secure.
Sandbox for Secure Applications
In effect, the security processor uses the one bit to create a sandbox for secure applications, Fig. 1a. “Sandbox” in computer security lingo means a software or hardware structure whereby a separate, restricted environment is created for running only certain applications, usually with a specific set of restrictions on their operation.
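The single-bit gate can be sketched as follows. This is a deliberately simplified toy model, not real firmware; the class and key names are hypothetical, and actual hardware enforces the check in logic rather than software. The point it illustrates is that the mode bit is the only gate: once any application is in the secure world, it sees the same key store as every other secure application.

```python
# Toy model of a single-bit secure/non-secure partition (illustrative only;
# all names here are hypothetical).
SECURE = 1
NON_SECURE = 0

class OneBitProcessor:
    def __init__(self):
        self.mode = NON_SECURE  # the single bit dividing the two domains
        # One shared key store for the entire secure world.
        self.key_store = {"drm_key": "K-DRM", "payment_key": "K-PAY"}

    def read_key(self, name):
        # The only check is the mode bit. Every application that clears
        # this one gate sees the same keys -- there is no per-entity wall.
        if self.mode != SECURE:
            raise PermissionError("non-secure world cannot read keys")
        return self.key_store[name]

cpu = OneBitProcessor()
cpu.mode = SECURE                 # any secure app flips into the same world
assert cpu.read_key("drm_key") == "K-DRM"
assert cpu.read_key("payment_key") == "K-PAY"  # no isolation between secure apps
```

A DRM application and a payment application both pass the same one-bit check, which is exactly the lack of isolation the article describes next.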
Every secure application plays in the same sandbox, so to speak. There's no isolation between the entities that run secure applications: any application that runs in the "secure" world sits alongside every other one. Problems emerge because different entities may not completely trust each other in the context of security. If one entity incurs a malicious attack, it compromises the security of all the other entities.
Take a poorly written digital rights management (DRM) content-protection application, for example. It can compromise the security of a payment application running on the same processor and allow access to banking information. Again, the issue here is the lack of isolation between applications: one poorly written or malicious application can compromise the security of others running on that processor.
That's one drawback. The other problem is a wide range of attacks whereby an attacker can change the value of signals in the design. These so-called "fault attacks" or "glitch attacks" are based on perturbing the circuitry to alter its operation. They range from simple power and clock glitches to laser pulses, electromagnetic pulses, and others. A properly executed attack of this sort can flip the bit controlling secure mode from non-secure to secure for some operations, allowing non-secure applications to access sensitive data and keys.
Many Isolated Domains
On the other hand, consider a security processor core that has many domains or multiple roots of trust, Fig. 2. In this case, there is a separate security domain for every entity. And those security domains are completely separated from each other using strong hardware security. Security assets like keys and hardware resources are completely isolated.
Each entity has its own set of signed applications in this security processor architecture. When the security processor switches from one application to another, all context is flushed from the security processor. No data, keys, or other information persist in the security processor when it switches from one application to the other. The only exception is the ability to pass messages between the different applications, if that is explicitly desired by the application writer. This ensures that no context can be shared between different entities.
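The flush-on-switch behavior can be sketched as below. This is a conceptual model with hypothetical method names, not an actual security processor interface; real hardware clears registers and scratch memory in logic. It shows that nothing survives an application switch except the explicitly opted-in mailbox.

```python
# Sketch of context flushing on application switch (hypothetical API).
class MultiRootProcessor:
    def __init__(self):
        self.context = {}       # registers, derived keys, scratch data
        self.mailbox = []       # explicit message passing: the one exception
        self.active_app = None

    def switch_to(self, app_id):
        # Nothing in the working context persists across the switch.
        self.context.clear()
        self.active_app = app_id

    def send_message(self, payload):
        # Only used if the application writer explicitly wants it.
        self.mailbox.append(payload)

proc = MultiRootProcessor()
proc.switch_to("drm_app")
proc.context["session_key"] = "k1"
proc.send_message("status:ok")

proc.switch_to("payment_app")
assert proc.context == {}             # working context was flushed
assert proc.mailbox == ["status:ok"]  # explicit messages survive
```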
Security assets are thus completely and securely assigned to specific entities, so that by default there is no overlap: different entities cannot access the same resources. Overlap is acceptable only when assignments are explicitly and properly made.
Let's say there is a test and debug port that the SoC vendor uses, and it wants to make that same port available to its OEM customer. The vendor can set the same permission bits for the OEM root as are set for its own root, thus allowing access to that particular test and debug resource.
Conversely, the SoC vendor may have other test and debug ports it wants to reserve and not make available. Hence, there is complete flexibility in how those assignments are made, and it largely depends on which resource types the SoC designers want to overlap. Other security functions cannot be overlapped. Take encryption and decryption keys, for example: there is a separate key space, or set of keys, for each entity, and entities cannot share them with each other.
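The shared-debug-port scenario above can be expressed as a permission bitmask per root. The resource names and bit layout here are hypothetical, purely for illustration; the point is that overlap exists only where the SoC vendor deliberately sets the same bit in another root's permissions.

```python
# Hypothetical permission bits for hardware resources, one mask per root.
DEBUG_PORT_0 = 1 << 0   # test/debug port the vendor chooses to share
DEBUG_PORT_1 = 1 << 1   # test/debug port the vendor reserves
CONFIG_FUSES = 1 << 2   # SoC configuration resource

permissions = {
    "soc_vendor": DEBUG_PORT_0 | DEBUG_PORT_1 | CONFIG_FUSES,
    "oem":        DEBUG_PORT_0,   # overlap only where explicitly granted
}

def can_access(root, resource_bit):
    """Hardware-style check: is this resource bit set for this root?"""
    return bool(permissions[root] & resource_bit)

assert can_access("oem", DEBUG_PORT_0)       # shared port: both roots allowed
assert not can_access("oem", DEBUG_PORT_1)   # reserved port: vendor only
assert can_access("soc_vendor", DEBUG_PORT_1)
```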
Keys Assigned to Each Root
In this multiple roots of trust architecture, a set of keys is assigned to each root. One function, as mentioned above, is enabling applications to be signed differently for each root. Therefore, each root essentially gets its own private set of applications. When an application is loaded into the security processor core, the root is identified, and the hardware then configures itself specifically for that root.
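Per-root application verification might look something like the sketch below. Note the simplification: a real design would verify asymmetric signatures against a root's public key, but an HMAC keeps the sketch self-contained, and all key material and function names are hypothetical. The idea it captures is that an image signed for one root will not load under another.

```python
import hashlib
import hmac

# Hypothetical per-root verification keys (a real design would hold
# public keys and check asymmetric signatures instead).
root_keys = {"soc_vendor": b"vendor-root-key", "oem": b"oem-root-key"}

def sign(root, image):
    """Produce a tag binding an application image to one root."""
    return hmac.new(root_keys[root], image, hashlib.sha256).digest()

def load_application(root, image, signature):
    """Accept the image only if it was signed for this specific root."""
    if not hmac.compare_digest(sign(root, image), signature):
        raise PermissionError(f"image not signed for root {root!r}")
    return f"configured for {root}"

image = b"payment-app-v1"
sig = sign("oem", image)
assert load_application("oem", image, sig) == "configured for oem"

# The same image and signature are rejected under a different root.
try:
    load_application("soc_vendor", image, sig)
except PermissionError:
    pass
```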
Also, keys associated with a root provide a complete, isolated set of derived keys the root uses. So, one key can become many keys, and those many keys can be used for a considerable number of different security operations. But every set of keys is unique per root, and one root has no way to access keys from another root, which is hardware enforced.
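The fan-out from one root key to many derived keys can be sketched with an HKDF-style derivation. This is illustrative only: the key values and purpose labels are made up, and a production design would use a standardized KDF inside the hardware. It shows the two properties the text describes: one key becomes many, and the derived sets never collide across roots.

```python
import hashlib
import hmac

def derive(root_key: bytes, purpose: str) -> bytes:
    """HKDF-style sketch: derive a purpose-specific key from a root key."""
    return hmac.new(root_key, purpose.encode(), hashlib.sha256).digest()

vendor_root = b"vendor-root-key"   # hypothetical key material
oem_root = b"oem-root-key"

# One root key yields many keys, one per security operation.
vendor_storage = derive(vendor_root, "storage-encryption")
vendor_attest  = derive(vendor_root, "attestation")
assert vendor_storage != vendor_attest

# Same purpose label under different roots yields unrelated keys,
# so one root's key set is disjoint from another's.
assert vendor_storage != derive(oem_root, "storage-encryption")
```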
A set of permissions is also associated with each root. Those permissions relate to different hardware resources in the security processor core, such as debug and I/O pins. These resources can be partitioned between the different roots, again hardware enforced. One root may be able to access debug ports; another root may not, or may have only partial access to them.
One root may be able to control certain external logic on the chip. Another root may be able to control a different set of external logic, but maybe not the same as the other root. In this case, let’s again use our test and debug examples. The SoC vendor has a root that enables it to completely control test and debug logic and completely control the configuration of other aspects of that SoC.
It may grant to the OEM that buys its SoCs some of that functionality, but not all of it. The SoC vendor may not want the OEM to be able to access all the test and debug logic because the OEM may learn too much about that SoC’s technology the vendor doesn’t want to share. It may allow the OEM to configure certain parts of the SoC, but not all of it.
Delegation from one entity to another is another aspect of roots. Just as the SoC vendor can delegate certain permissions to the OEM, the OEM can delegate some rights to the service provider, provided the SoC vendor gives the OEM the rights to do so. However, the rights and permissions of that delegation have to be a subset of what the OEM already has.
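The subset rule for delegation reduces to a simple bitmask check, sketched below with hypothetical permission names. A delegator can hand down any combination of bits it holds, but a request containing any bit it lacks is refused.

```python
# Hypothetical permission bits, mirroring the delegation chain
# SoC vendor -> OEM -> service provider.
DEBUG  = 1 << 0
CONFIG = 1 << 1
TEST   = 1 << 2

def delegate(parent_perms: int, requested: int) -> int:
    """Grant requested rights only if they are a subset of the parent's."""
    if requested & ~parent_perms:
        raise PermissionError("cannot delegate rights the delegator lacks")
    return requested

# The vendor holds DEBUG and CONFIG and passes DEBUG down to the OEM.
oem_perms = delegate(parent_perms=DEBUG | CONFIG, requested=DEBUG)
assert oem_perms == DEBUG

# The OEM cannot delegate TEST to a service provider: it never held it.
try:
    delegate(parent_perms=oem_perms, requested=DEBUG | TEST)
except PermissionError:
    pass
```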
Further, depending on business relationships and system requirements, the SoC vendor may let the OEM obliterate the SoC vendor’s root. This means the SoC vendor would no longer be able to run software on the OEM’s device.
Root Isolation Critical for Upcoming SoC Designs
Security is increasingly important for virtually every device and system on the drawing boards these days. However, designers have to keep in mind that there are different uses for security and different entities need security functionality.
For instance, chipmakers need secure functionality for their own manufacturing and testing of their chip products. Their OEM customers also need security for their specific applications. Service providers and others may also need security functionality. Therefore, the SoC designer needs to offer security that can be used throughout the lifecycle of the chip by these different entities. However, they want to accomplish that objective without compromising their own security.
As we've said here, the idea is to have isolation between applications, because one poorly written or malicious application can compromise the security of all other applications in that SoC. The bottom line is to keep a successful attack on any one application from spreading to the others, so that entities that don't fully trust one another can still run securely on the same SoC.