Today, the security of medical devices is extremely important: customers and patients who interact with your devices need assurance that their health and their personal information are taken seriously. Globally, regulators increasingly require and verify that devices are as secure as possible both before and after product release. In the United States, the Food and Drug Administration (FDA) has published guidance that mandates a number of facets of device development and maintenance.
A security vulnerability is a programming error (or defect, or bug) that leaves a device open to unintended access or manipulation by an external or internal actor. Security vulnerabilities exist in every product, including medical devices. By recognizing and preparing for this inevitability, embedded developers can limit the exposure and potential damage that these vulnerabilities may introduce.
The most important resource for recognizing and mitigating these potential issues is the Common Vulnerabilities and Exposures (CVE) system. Originally defined in 1999, CVE is a catalog of known exploitable security issues that exist (or existed) in products. CVEs are published and maintained in a joint effort between the MITRE Corporation and the US National Vulnerability Database (NVD), which is maintained by the National Institute of Standards and Technology (NIST). Every significant security vulnerability has been documented as a CVE, from Heartbleed (CVE-2014-0160) to Shellshock (CVE-2014-6271) to URGENT/11 (11 CVEs discovered in 2019).
These CVEs are discovered either as a result of damage caused (a postmortem of an adverse event uncovers the underlying issue), or because a conscientious engineer discovers a potential exploit. The good news is that most exploits are discovered without causing damage; the bad news is that once an exploit is communicated to the world through the CVE process, it can be easily exploited by hackers worldwide, so time is of the essence. Fortunately, the CVE process gives product or software developers time to fix the exploit before it is announced globally, so they can act rapidly to secure their devices.
Once a vulnerability is discovered, it is assigned a CVE identifier (CVE ID). If it is confirmed to be an issue, it is assigned a vulnerability score by the NVD. This is a number between 0 and 10; the higher the number, the more serious the vulnerability is to the affected devices. The NVD entry also contains any other known information about the issue, plus links to pertinent sites that further describe it – and any available fixes.
The main benefit of the CVE reporting process is knowledge of the issue, potential fixes, and the severity and risk the issue might have to your products. Security vulnerabilities can expose your devices, your customers, and yourself to several adverse consequences, including:
- Loss or modification of patient-critical data, which could lead to harm to the patient or medical personnel
- Exposure of customer or end-user data, which could lead to identity theft, HIPAA violations, and other serious consequences
- Infiltration of the device by malicious actors, which can lead to injection of malware, disabling of the device, infection of other parts of the hospital or clinician network, etc.
Security Issues and Product Design
When developing devices that are as secure as possible, it is important to consider the different sources of potential security issues:
- Issues known to the community at the time the device is developed
- Issues discovered after your device is released
- Issues introduced by the software written specifically for the device due to insufficient preventative development techniques
Device Protection from Known Issues
As discussed, many potential exploits used by hackers to break into devices are already known by the worldwide security community and have already been fixed. It would be unfortunate for a medical device to be exploited through an issue that had already been fixed by the time of the device’s release. It takes effort to prevent this from happening, but that effort will save time, protect your reputation, and limit the costs and potential legal exposure if and when exploits occur. In addition, regulatory agencies require the device developer to consider this before the device’s release. In the U.S., this is part of the FDA’s guidance on the Content of Premarket Submissions for Management of Cybersecurity in Medical Devices. Here is the process.
Each CVE that is shown to be an exploit is searchable in both the NVD and CVE databases – search by component name, CVE ID, or any keyword of interest. For example, assume your device uses some distribution of Linux. Searching for Linux vulnerabilities will turn up a number of issues; as a specific example, consider CVE-2019-11683. This is a critical-severity issue that should not be in your product, since it is both well known and allows remote denial-of-service attacks, or “unspecified other impacts.” The entry for this defect shows that it is resolved in Linux kernel version 5.0.13 or later, which means that if your product uses an earlier kernel, you should upgrade to that version.
For open-source components in your product, there are a number of tools available to identify whether important CVEs affect your software; one of the best known is cve-check-tool (https://github.com/clearlinux/cve-check-tool). By performing version checking, the tool generates reports showing which packages contain CVEs that are not resolved in the versions you are using. This information can be used to determine whether any pre-emptive action is required before your product images are considered complete.
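At its core, this kind of version checking reduces to comparing the component version you ship against the version in which a CVE was fixed. The C sketch below illustrates the idea for simple dotted numeric versions; it is my own illustration, not an excerpt from cve-check-tool, and real package versioning (epochs, suffixes such as -rc1) needs more care than this.

```c
#include <stdlib.h>

/* Compare two dotted numeric version strings (e.g. "5.0.13" vs "4.19.2").
 * Returns <0, 0, or >0, like strcmp. Assumes well-formed numeric input;
 * illustrative sketch only. */
static int version_cmp(const char *a, const char *b)
{
    while (*a != '\0' || *b != '\0') {
        char *end;
        long na = strtol(a, &end, 10);   /* parse next numeric component */
        a = end;
        long nb = strtol(b, &end, 10);
        b = end;
        if (na != nb)
            return (na < nb) ? -1 : 1;
        if (*a == '.') a++;              /* skip separators in lockstep */
        if (*b == '.') b++;
    }
    return 0;
}

/* A CVE applies if the shipped version is older than the fixed version. */
static int cve_applies(const char *shipped, const char *fixed_in)
{
    return version_cmp(shipped, fixed_in) < 0;
}
```

For CVE-2019-11683, for example, `cve_applies("4.19.2", "5.0.13")` reports that a device shipping kernel 4.19.2 is affected, while a device on 5.0.13 or later is not.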
Manufacturers do not want to do this kind of checking and updating, since their priority is getting a product designed and developed. Most device manufacturers would rather have their engineers solve product problems than manage and maintain a Linux distribution. However, this is the price of the extreme capabilities, stability, and community that open source provides (would you rather write an SSL layer yourself, or use one that runs in devices all over the world?). Either the device manufacturer must take this task on, or it must use a commercial Linux distribution and hold the vendor responsible for doing this work.
Monitoring and making sure known exploits are addressed is not the only thing to think about at this phase. A few other concerns that, if not addressed, can cause your device to be exploited include:
- Access Control – Have you designed in the ability to define roles that can access various types of data (user level, management level, maintenance level, etc.), and are you certain that only authorized roles can access that data? Is data access from the internet guarded by higher levels of access control than physical access to the device? Are the device’s authentication methods difficult to exploit? Are default accounts and passwords managed so they cannot be exploited in the field? Linux provides at least two ways to manage access control: 1) Discretionary Access Control (DAC), the standard Linux access control model, and 2) Mandatory Access Control (MAC), which is more complex and more secure and is provided by the SELinux package.
- Encryption – Is the data stored on your device (both in memory and in storage), and transmitted between your device and others, protected and encrypted so that it can only be deciphered by those meant to see it? Even if an exploit allows outside actors to see data, they still need the proper keys to decrypt it. Developers need to ensure that accessing those keys requires overcoming a separate mechanism beyond merely accessing the encrypted memory.
- Hardware security assistance – Many features of modern processors help in ensuring the security of devices and applications, but it is the responsibility of the system designer to take advantage of them. Features such as TrustZone, Cryptographic Acceleration, Trusted Platform Modules (TPMs), etc. are on modern microprocessors and designed to both accelerate and assist in the development of secure designs. However, just having these features in hardware is useless if you do not make use of them.
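As one concrete example of access control in practice, a Linux service can perform any privileged initialization first and then permanently drop to an unprivileged account, limiting what an attacker gains by compromising the process. A minimal sketch follows (the service-account uid/gid would come from your own system design; error handling is reduced for brevity):

```c
#include <unistd.h>
#include <sys/types.h>

/* Permanently drop root privileges to an unprivileged service account.
 * Returns 0 on success, -1 on failure. The group is dropped first,
 * because after setuid() the process no longer has the privilege to
 * change its group. */
static int drop_privileges(uid_t run_uid, gid_t run_gid)
{
    if (setgid(run_gid) != 0)
        return -1;
    if (setuid(run_uid) != 0)
        return -1;
    /* Verify the drop is irreversible: regaining root must now fail. */
    if (setuid(0) == 0)
        return -1;
    return 0;
}
```

A daemon would call this immediately after binding privileged ports or opening protected device nodes, so the bulk of its runtime executes with the least privilege necessary.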
Once your product is released, your work is not complete since the number of known exploits is a moving target, increasing daily. In 2019, there were 12,174 CVEs created – that’s over 30 per day. Most of these do not turn out to be issues, and, of the ones that are issues, many will not apply to your device, since many CVEs are reported against older versions of open source components, or will be against components you are not using. That said, even against the Linux kernel, there were 170 CVEs issued in 2019, and some of them will lead to potential exploits against your device.
While there is no way to prevent this from happening, you need to know that it WILL happen, and you need to make sure your device is prepared. The time to prepare is during development, so that the device can be updated as new exploits (and significant product defects) are found and fixed. The regulatory agencies are taking a much stronger stance on this topic than in the past, and are requiring plans for managing this as part of the postmarket plans for the device (in the United States, as provided in the Guidance for the Postmarket Management of Cybersecurity in Medical Devices).
It’s not just CVEs that need to be managed. In 2016, for example, an exploit commonly known as the Mirai botnet brought down large portions of the internet by taking over small IoT devices such as webcams and routers and using them to execute Distributed Denial of Service (DDoS) attacks against both US and French web infrastructure providers. Most owners of these infected devices were unaware that their systems were infected, and Mirai (and its derivatives) are still a threat today. Devices susceptible to it are still being made, even though the underlying cause was as simple as attempting root-level logins using 64 well-known default username/password pairs such as user/user or user/password. Because most users of these devices were unaware of or unable to change these simple defaults, the Mirai botnet was able to take control of their systems.
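A simple design countermeasure is to refuse factory-default credentials outright and force the operator to set a real password on first boot. The sketch below illustrates the idea; the credential list is abbreviated and illustrative, not the actual Mirai dictionary:

```c
#include <string.h>

/* A few of the well-known factory defaults abused by Mirai-class botnets.
 * (Abbreviated, illustrative list only.) */
static const char *const known_defaults[][2] = {
    { "root",  "root"     },
    { "admin", "admin"    },
    { "user",  "user"     },
    { "user",  "password" },
};

/* Returns 1 if the credential pair is a known factory default and must be
 * rejected (forcing a password change before the device goes online),
 * else 0. */
static int is_default_credential(const char *user, const char *pass)
{
    size_t n = sizeof known_defaults / sizeof known_defaults[0];
    for (size_t i = 0; i < n; i++) {
        if (strcmp(user, known_defaults[i][0]) == 0 &&
            strcmp(pass, known_defaults[i][1]) == 0)
            return 1;
    }
    return 0;
}
```

A login handler that checks `is_default_credential()` before authenticating, and a first-boot flow that refuses to proceed until the default is changed, would have blocked Mirai's entire attack vector on such a device.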
There are many considerations to address during development to future-proof your device, but the most important is the ability to securely update your system. The methods and facilities that support this are complex and outside the scope of this article; resources that touch on this important topic are provided at the end of this article.
Applied Preventative Development Techniques
If you are using Linux and other open source software as part of your product’s design, then at release time your device will contain vulnerabilities that neither you nor anybody else knows about. As a result, you not only want to eliminate as many known vulnerabilities as possible before you release, but you must also assume that, at some point, a bad actor will gain unauthorized access to the device. When that happens, you want to make it as difficult as possible for them to profitably exploit that access. There is no perfect defense against determined hackers armed with knowledge of exploits in your device, and you cannot fix flaws in the open source modules you are using, but you can control potential flaws in your own applications. The techniques mentioned above help you design in greater protection, but what about the way your applications are developed?
As mentioned above, most exploits in open source and application software are due to developmental flaws that are repeated over and over. NULL pointer dereferences, freeing already-freed memory, overflowing a fixed-length buffer, and the like are the kinds of coding errors that hackers can easily exploit to compromise your devices. However, there are several approaches that can help. Specific techniques are beyond the scope of this paper, but places to start looking are:
- Static (and dynamic) analysis. The first static analysis you are likely to see is the set of warnings that come from your compiler. It is surprising how many organizations ignore this valuable diagnostic tool in a misguided rush to get something released. Beyond that, the open source community provides several useful static analysis tools such as cppcheck and clang, and there are many commercial solutions available. All will detect issues that are easily missed in code reviews, and, as long as the reports from these tools are managed, you can prevent several major classes of potential exploits in your applications.
- Use of a coding standard. The MISRA coding standard (https://misra.org.uk/) is the gold standard here, and provides many well-thought-through recommendations for securing your applications. While its genesis is in the automotive industry and the world of safety, there is nothing automotive- (or safety-) specific about it, and MISRA should be considered by any device manufacturer looking to secure their applications. Note that most static analysis tools also greatly ease the checking of applications against MISRA rules. While other coding standards are available, MISRA combines common sense with good practice in a way that can be implemented by organizations of all sizes.
- Another useful coding standard comes from the Software Engineering Institute at Carnegie Mellon, known as SEI CERT C (https://wiki.sei.cmu.edu/confluence/display/seccode). There is significant overlap between this and MISRA, but the SEI standards extend beyond C and C++ into Android, Java, and Perl.
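To make these classes of errors concrete, here is a small C sketch of a bounds-checked buffer copy in the defensive style that MISRA and CERT C encourage – explicit NULL checks, explicit lengths, and refusal rather than silent truncation. This is my own illustration of the style, not an excerpt from either standard:

```c
#include <stddef.h>
#include <string.h>

/* Copy src into a fixed-length buffer, always NUL-terminating and never
 * writing past dst_size bytes. Returns 0 on success, -1 if an argument
 * is NULL or the source would overflow the destination (the classic
 * unchecked-strcpy mistake). Uses POSIX strnlen(). */
static int safe_copy(char *dst, size_t dst_size, const char *src)
{
    if (dst == NULL || src == NULL || dst_size == 0)
        return -1;                  /* guard against NULL dereference */

    size_t len = strnlen(src, dst_size);
    if (len == dst_size)
        return -1;                  /* too long: refuse, don't truncate silently */

    memcpy(dst, src, len + 1);      /* length includes the terminating NUL */
    return 0;
}
```

Compare this with a bare `strcpy(dst, src)`: the unchecked version is exactly the pattern behind decades of buffer-overflow CVEs, and it is the pattern these coding standards (and the static analysis tools above) are designed to flag.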
There are many other useful sources of information to consider when developing secure software, but if you are not already employing the above techniques, start there and consider expanding your thinking once you have a smart coding standard and static analysis paradigm in place.
At the beginning of this paper, we discussed the proliferation of connectivity that has occurred in medical devices, and the benefits as well as the security risks that this connectivity has brought to our lives. We also discussed the increasing scrutiny that regulatory agencies such as the FDA in the United States are placing on building security into these devices and maintaining it. Further, we discussed how these challenges may be overcome in the design, development, and maintenance of medical devices. By following the guidance in this paper, your product will be:
- More difficult to successfully exploit
- Protected against known exploits, and hardened against unknown ones, when released
- Faster to update to close any newly found exploits
- More secure, giving your customers confidence that they are protected EVEN IF something goes wrong.
The last point is especially important. Customers are aware that no device is completely free of bugs. What they want to know is how you are minimizing defects and their impact, and how ready you are when something inevitably goes wrong. The methods in this paper will not prevent all potential future security issues, but they will put you in a good position to quickly resolve those issues when they arise.
By approaching security and limiting exposure to exploitation, your devices will be less prone to attack, better prepared to protect patient data, and more likely to be able to smoothly achieve regulatory approval and improve medical outcomes for patients around the world.
Resources
■ “Enabling Secure IoT Devices with Microsoft® Azure and Mentor Embedded Linux” (Visit: https://go.mentor.com/5an_2)
■ “Updating your Safety Critical Product – A nightmare waiting to happen?” (Visit: https://go.mentor.com/547Bd)
■ Webinar: “Strategies to Develop Secure and Robust Embedded Devices” (Visit: https://www.mentor.com/embedded-software/multimedia/strategies-to-develop-secure-and-robust-embedded-devices)
Robert Bates is the chief safety officer for the Embedded Platform Systems group of Mentor, a Siemens business, responsible for the safety, quality and security aspects of the embedded product portfolio targeting the medical, industrial, automotive, and aerospace markets. In his role, Rob works closely with customers and certification agencies to facilitate the safety certification of devices to IEC 61508, IEC 62304, ISO 26262 and other safety certifications. Before moving to Mentor, Robert was a software development director at Wind River, where he was responsible for commercial and safety-certified operating system offerings, as well as both secure and commercial hypervisors. Robert has over 30 years of experience in the embedded software field, most of which has been spent developing operating system and middleware components for device makers around the world.