This is the first in a three-part blog series where I’ll look at improving security in medical devices. Here, I’ll focus on industry standards. Parts two and three will cover attack surface analysis and the wider security mechanisms that can be used to harden a medical device. If you're inclined to jump ahead, here's a link to Part two.
Developers of safety-critical software are used to working to safety standards, but for security, finding a relevant standard can be much more challenging, and is only the start of a long and comprehensive route to creating a secure device. More information on this topic is available in a whitepaper titled Increasing Security in Medical Devices.
Medical devices operate in a hostile environment, and developers should expect constant attempts at intrusion. Therefore, products should be hardened to meet this threat. But how?
Connected medical devices have many benefits, including the opportunity for continuous monitoring, telemedicine, and big data analytics to uncover hidden trends. However, with connectivity there’s always a risk that bad actors could gain access to medical devices, with potential life-or-death consequences.
The bad actors’ advantage is that they need to find only one successful attack vector, whereas the defender must defend them all. As attacks become increasingly novel, aggressive, sophisticated, and frequent, defenses must be continually refined and improved.
Guidelines and standards for developing secure systems are limited. However, the FDA has issued several sets of guidance focused on medical device cybersecurity, suggesting that manufacturers use the National Institute of Standards and Technology (NIST) Framework for Improving Critical Infrastructure Cybersecurity.
An alternative security standard that can be considered for medical device security is Common Criteria, a widely recognized international scheme used to assure security-enforcing products. It provides formal recognition that a developer's claims about the security features of their product are valid and have been independently tested against recognized criteria, to a formalized methodology.
Common Criteria requires a Security Target, which defines, among other things, the security objectives and the environment the device will operate in. The system is then designed and verified according to a specific Evaluation Assurance Level (EAL), of which there are seven. Each EAL offers guidance on methodology and design, and verification and penetration testing should follow. The higher the EAL, the greater the effort required, which should result in greater assurance and security for the device under evaluation.
Even if Common Criteria is not adopted in full, its guidelines and methodology could be of use to medical device developers concerned about security.

Coding standards are also important for security, and most developers of safety-critical software are familiar with the MISRA C coding standard. It aims to facilitate code safety, security, portability, and reliability in embedded systems programmed in C by defining a restricted subset of the standardized C language that’s deemed to be safer.
An alternative coding standard for secure programming is CERT C, from the Software Engineering Institute (SEI). CERT C details coding guidelines for those developing software that requires a degree of security, providing rules and recommendations which, when followed, leave fewer vulnerabilities for a bad actor to exploit. There’s a growing trend to use CERT C within secure systems.