5 steps to secure embedded software

June 01, 2015

"When we do cybersecurity assessments, we get in almost every time. In our view, [the agencies] didn't need to wait until DOT&E found security issues, [the flaws] could have been found during software development." – Dr J. Michael Gilmore, Director of Operational Test and Evaluation, Office of the Secretary of Defense, CISQ IT Risk Management & Cybersecurity Summit, March 24, 2015, Reston, Virginia, USA.

As more and more functionality is embedded into smaller and smaller device footprints, security concerns grow. New features often crowd out basic security as vendors pack functionality into the package with very little overall systems engineering and only cursory security testing.

The embedded environment has matured to the point where security must move to the forefront, much the way it did when the PC evolved in the 1990s. With the explosion of the Internet of Things (IoT), there is little doubt that any security flaws will be exploited. IoT devices turn highly useful business cases into reality; at the same time, they bring the risk of losing control. Today's embedded systems are far more powerful, and far more vulnerable, than their predecessors. If embedded systems are to avoid the pitfalls of the 1990s, protocols and approaches must be in place before they become the entry point for a new generation of hackers.

IT standards groups and standards, such as the Consortium for IT Software Quality (CISQ), the MITRE Common Weakness Enumeration (CWE), and ISO 9000 and ISO 25000, publish guidelines and software quality standards. CISQ has published automated quality measures for security, reliability, performance efficiency, and maintainability. These measures define specific attributes that can be used as evidence that an embedded system is fit to fulfill its business or mission function. Examining the state of embedded systems makes it apparent that security should be engineered in up front.

Implementing a security strategy

When considering security, most embedded systems engineers immediately focus on the problem of protecting data. The system should not only protect data (within the application), but also protect its interfaces from abuse. The following steps represent a reasonable starting point for developing an embedded security policy.

  • No untested programs in the execution space – No programs other than the programs necessary to execute the functions should exist in a place where they can be executed
  • Data must be private – Programs should not expose information to each other or to the network unintentionally
  • Confirm data at both ends – All information must be able to be verified and must be within expected ranges with out-of-bounds information rejected
  • Secure devices – Devices should have the capability to verify their integrity during boot time; devices should authenticate themselves before transmitting or receiving data
  • Follow the standards – Look at the Consortium for IT Software Quality (CISQ) quality characteristic measures that can be automated for ongoing security and software quality analysis and mitigation
  • Take action – If an anomaly occurs, the program must continue to function while handling the issue

No untested programs in the execution space

As embedded vendors strive to differentiate their products, they add programs to their standard distribution. Many of these will never be used and represent a potential security risk. These programs must be eliminated or, better still, never installed at all. Ask for an OS distribution with nothing in it beyond the essentials for the OS to work, and install the required programs manually. A minimalist strategy is best for code. If the vendor does not provide a stripped-down distribution, the OS can restrict access rights for these programs and sensitive APIs, or the unused code can be deleted.

A better way is to provide a sandbox for custom and third-party applications to execute and then push communications through APIs, which provide the necessary isolation.
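To make the sandboxing idea concrete, the sketch below shows one way this can look on a Linux-based embedded target: an untrusted helper process locks itself into the kernel's strict seccomp mode, leaving the pipes handed to it by the parent as its only communication channel. The process layout and the "ping" exchange are invented for illustration; real products would choose whatever sandboxing mechanism fits their OS and hardware.

```c
/* A minimal sketch (Linux-specific) of sandboxing an untrusted helper:
 * the child locks itself into strict seccomp mode, so it can only read
 * and write the pipes handed to it by the parent; everything else
 * (open, socket, exec, ...) results in immediate termination. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

static void run_untrusted(int in_fd, int out_fd)
{
    /* Refuse to run at all if the sandbox cannot be established. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0 ||
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
        _exit(EXIT_FAILURE);

    char buf[128];
    ssize_t n = read(in_fd, buf, sizeof(buf));   /* allowed in strict mode */
    if (n > 0)
        write(out_fd, buf, (size_t)n);           /* allowed in strict mode */

    /* Strict mode permits only read, write, _exit and sigreturn, so the
     * raw exit syscall is used instead of the libc exit_group wrapper. */
    syscall(SYS_exit, EXIT_SUCCESS);
}

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) != 0 || pipe(from_child) != 0)
        return EXIT_FAILURE;

    pid_t pid = fork();
    if (pid == 0)                                 /* child: sandboxed helper */
        run_untrusted(to_child[0], from_child[1]);

    /* Parent: all interaction with the helper goes through this narrow API. */
    write(to_child[1], "ping", 4);
    char reply[128];
    ssize_t n = read(from_child[0], reply, sizeof(reply));
    printf("helper replied with %zd bytes\n", n);
    return EXIT_SUCCESS;
}
```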

The hardware itself should be "clean," with no programs installed. It is key that any programs on the device are installed by the developer. Every piece of code must come from a trusted developer and must not be altered prior to installation.

Data must be private

Programs should not expose information to each other or to the network unintentionally. Tempting as it is to believe that a device cannot be hacked over the Internet, that simply isn't reality. As modules within the program grow, data artifacts tend to grow with them and data tends to become increasingly exposed.

Embedded devices collect sensitive data (e.g., healthcare, enterprise) and there is a strong possibility the data traffic can be rerouted and modified before it reaches its destination. There should be checks to prevent copying and pasting as well as the ability to remotely wipe data if a device falls into the wrong hands.

Developers under pressure to meet deadlines tend to borrow code and routines from themselves and colleagues. Any security flaws will be propagated. Design and build the code right the first time.
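As a small illustration of keeping data private inside a module, the sketch below copies a credential, uses it, and wipes it before releasing the memory. The secure_wipe() helper is not a standard API; it simply shows why an ordinary memset() before free() is not enough (the compiler may remove it) and where functions such as explicit_bzero() or memset_s() fit when the platform provides them.

```c
/* Minimal sketch: keeping a secret private within a module.
 * A plain memset() before free() can be optimized away by the
 * compiler; writing through a volatile pointer (or calling
 * explicit_bzero()/memset_s() where available) avoids that. */
#include <stdlib.h>
#include <string.h>

/* secure_wipe() is an illustrative helper, not a standard API. */
static void secure_wipe(void *buf, size_t len)
{
    volatile unsigned char *p = buf;
    while (len--)
        *p++ = 0;
}

void handle_credentials(const char *secret)
{
    size_t len = strlen(secret) + 1;
    char *copy = malloc(len);
    if (copy == NULL)
        return;

    memcpy(copy, secret, len);
    /* ... use the credential for authentication only ... */

    secure_wipe(copy, len);   /* do not leave the secret behind in memory */
    free(copy);
}
```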

Confirm data at both ends

All information must be verified, within expected ranges, and identified clearly. Using the same routines on both ends to validate content is essential. Interfaces should be sensitive to what comes in and be able to take action when the data is not correct. When a device receives bad data from a device that is "trusted," the likely explanation is that the trusted device has been compromised. This is also true for direct hardware interfaces.

As with privacy, all connections to the external world need to be treated as suspect. Interfaces should be verified and data examined.
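The sketch below illustrates this kind of validation for a hypothetical sensor report. The message layout, the temperature range, and the checksum choice are all invented for the example; the pattern is the point: check the size, verify integrity with the same routine the sender used, and reject out-of-range values.

```c
/* A sketch of validating an inbound message: the layout, limits and
 * checksum below are invented for illustration, not a real protocol. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <string.h>

struct sensor_report {
    uint16_t device_id;
    int16_t  temperature_c;   /* expected range: -40 .. 125 */
    uint16_t crc;             /* integrity check computed by the sender */
};

/* Both ends must use the same routine; CRC-16/CCITT is used here. */
static uint16_t crc16(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

bool parse_report(const uint8_t *buf, size_t len, struct sensor_report *out)
{
    /* 1. Reject anything that is not exactly the size we expect. */
    if (buf == NULL || out == NULL || len != sizeof(*out))
        return false;

    memcpy(out, buf, sizeof(*out));

    /* 2. Verify integrity with the same routine the sender used. */
    if (crc16(out, offsetof(struct sensor_report, crc)) != out->crc)
        return false;

    /* 3. Reject out-of-bounds values rather than silently correcting them. */
    if (out->temperature_c < -40 || out->temperature_c > 125)
        return false;

    return true;
}
```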

Secure devices

Devices should have the capability to verify their integrity during boot time and should authenticate themselves before transmitting or receiving data. Knowing who is sending the data is important, and one of the simpler hacks is substitution of unverified devices.

When booting up, devices must use cryptographically generated digital signatures. Resource-constrained devices could use unique hardware characteristics instead of compute-intensive algorithms to generate digital signatures for authentication. Devices failing that check should have a planned response; default actions may not be appropriate for every device.
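As an illustration, the sketch below shows the shape of a boot-time integrity check: the firmware image is hashed and compared, in constant time, against a reference digest held in protected storage, and a failure routes to a planned recovery path rather than a silent boot. The platform hooks (compute_sha256(), read_expected_digest(), enter_recovery(), and the others) are hypothetical placeholders; a production secure-boot chain would verify a vendor-signed digest rather than a bare hash.

```c
/* Minimal sketch: verifying firmware integrity before handing control
 * to the application. All extern functions below are hypothetical
 * platform hooks used only to show the flow. */
#include <stdint.h>
#include <stddef.h>

#define DIGEST_LEN 32

extern const uint8_t *firmware_image(size_t *len);           /* image in flash */
extern void compute_sha256(const uint8_t *d, size_t n, uint8_t out[DIGEST_LEN]);
extern void read_expected_digest(uint8_t out[DIGEST_LEN]);   /* protected storage */
extern void enter_recovery(void);                            /* planned failure response */
extern void jump_to_application(void);

void boot(void)
{
    size_t len;
    const uint8_t *image = firmware_image(&len);

    uint8_t actual[DIGEST_LEN], expected[DIGEST_LEN];
    compute_sha256(image, len, actual);
    read_expected_digest(expected);

    /* Constant-time compare avoids leaking how many bytes matched. */
    uint8_t diff = 0;
    for (size_t i = 0; i < DIGEST_LEN; i++)
        diff |= actual[i] ^ expected[i];

    if (diff != 0)
        enter_recovery();          /* planned response, never a silent boot */
    else
        jump_to_application();
}
```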

Each device should have a key, and each device should know the acceptable key for its type. When an unidentified key is received, the event should trigger a planned response, not simply be ignored. If the receipt of the information is critical to the function of the device, receiving an errant ID more than once should be considered an attack. Planning for this fault is essential.
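A planned response can be as simple as the sketch below: unidentified keys are rejected, counted, and escalated to an attack alarm once a threshold is crossed. The threshold and the key_is_acceptable(), reject_peer(), and raise_attack_alarm() hooks are assumptions made for the example.

```c
/* Minimal sketch: a planned response to repeated unidentified keys.
 * The threshold and the extern hooks are assumptions for illustration. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define MAX_BAD_KEYS 3            /* more than this is treated as an attack */

extern bool key_is_acceptable(const uint8_t *key, size_t len);
extern void reject_peer(void);
extern void raise_attack_alarm(void);

void on_peer_key(const uint8_t *key, size_t len)
{
    static unsigned bad_key_count = 0;

    if (key_is_acceptable(key, len)) {
        bad_key_count = 0;        /* a healthy peer resets the counter */
        return;
    }

    reject_peer();                /* never silently ignore a bad key */
    if (++bad_key_count > MAX_BAD_KEYS)
        raise_attack_alarm();     /* planned escalation, not a crash */
}
```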

Follow the standards

CISQ has published a security standard that is designed to identify the top 25 known security weaknesses in IT application software as maintained by MITRE in the Common Weakness Enumeration (CWE). The CWEs are a measurable set of items that can be used as evidence for resiliency, security, and safety. Code analyzers such as CAST can pick these out of a complex environment. Developers should stay in constant touch with these important standards.
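As one concrete example of what these standards catalogue, the snippet below shows CWE-120 (a buffer copy without checking the size of the input), the sort of weakness a static analyzer is expected to flag, alongside a bounded alternative.

```c
/* Illustration of one weakness class in the CWE catalogue:
 * CWE-120, buffer copy without checking size of input. */
#include <stdio.h>
#include <string.h>

void store_name_unsafe(char dest[16], const char *name)
{
    strcpy(dest, name);            /* overflows dest if name is 16+ chars */
}

void store_name_safer(char dest[16], const char *name)
{
    /* Bound the copy and guarantee null termination. */
    snprintf(dest, 16, "%s", name);
}
```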

Take action

If an anomaly occurs, the program must continue to function while handling the issue. Developers normally focus on what happens when good data is received, so error handling is typically simplistic. Developer training tends to assume that bad data is an artifact of programming rather than a hack, an assumption that needs to be revisited. Conduct assurance case testing on all key components. Assurance cases support the iterative review and revision of the implementation until the system displays the right behaviors.
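The sketch below shows one shape this policy can take in a control loop: a reading that fails validation is counted and worked around with the last good value, and repeated anomalies trigger a notification instead of a crash. The read_sensor() and notify_operator() hooks and the threshold are assumptions made for the example.

```c
/* Minimal sketch of "take action and keep running": invalid readings are
 * counted, the loop degrades gracefully to the last good value, and
 * repeated anomalies raise a notification. The extern hooks and the
 * threshold are assumptions for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define ANOMALY_THRESHOLD 5

extern bool read_sensor(int16_t *value_out);   /* false = invalid or out of range */
extern void notify_operator(const char *msg);

int16_t next_setpoint(void)
{
    static int16_t last_good = 0;
    static unsigned anomalies = 0;

    int16_t value;
    if (read_sensor(&value)) {
        anomalies = 0;
        last_good = value;
        return value;
    }

    /* Handle the issue without stopping the device. */
    if (++anomalies >= ANOMALY_THRESHOLD)
        notify_operator("sensor data repeatedly out of range; possible attack");

    return last_good;              /* degrade gracefully rather than halt */
}
```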

In some cases there may be a way for the device to notify another that it is under attack. In other cases it may simply choose to ignore or work around the threat. In either case, communication is a powerful weapon to avoid hacks.

Securing embedded devices

Embedded security is emerging as a critical need for embedded devices. By following these recommendations, your embedded solution can focus on solving the problems it was designed to solve, without opening the floodgates to a new generation of hackers.

Bill Dickenson is an independent consultant with Strategy On The Web.

Vijayakumar Kabbin is General Manager with Wipro Technologies.
