The development cycle for traditional security markets is a largely reactive one: code is developed mostly on an informal agile basis, with no risk mitigation and no coding guidelines. The resulting executables are then subjected to performance, penetration, load, and functional tests in an attempt to find the vulnerabilities that almost certainly result. The hope, rather than the expectation, is that all issues will be found and the holes adequately plugged.
In contrast, accepted practice in safety-critical industries such as aerospace, automotive, rail, and medical is to apply a formalized software development process. The use of coding standards (or “language subsets”) is an integral part of that, because evidence suggests that for C and C++ developers around 80 percent of software defects are caused by the incorrect use of about 20 percent of the available language constructs. Restricting the use of these problematic parts of the languages reduces the number of associated defects and hence improves quality.
To address security concerns, many safety-critical software development organizations extend the safety-critical model, using coding standards such as MISRA or CERT to minimize vulnerabilities. It is therefore useful to compare how the principles of the “SEI CERT C Coding Standard” and of the “MISRA C:2012 Guidelines” with “MISRA C:2012 Amendment 1” fit such a formal development process.
One example of the difference in emphasis can be found with reference to retrospective adoption. MISRA C:2012 states that “MISRA C should be adopted from the outset of a project. If a project is building on existing code that has a proven track record then the benefits of compliance with MISRA C may be outweighed by the risks of introducing a defect when making the code compliant.” In other words, MISRA does not advocate retrospective adoption.
This contrasts in emphasis with the assertion in CERT C that although “the priority of this standard is to support new code development … A close-second priority is supporting remediation of old code.”
Certainly, the retrospective application of any subset is better than nothing, but it does not represent best practice – and is not something to be advocated where security AND safety are paramount.
Relevance to safety, high integrity, and high reliability systems
The standards also differ significantly in focus. MISRA C:2012 asserts that it can be “… used to develop any application with high integrity or high reliability requirements.” The implication is that MISRA C:2012 was always appropriate for safe and secure critical applications even before the security enhancements in Amendment 1.
CERT C attempts to be more all-encompassing, covering application programming interfaces such as POSIX in addition to the C language itself. That is perhaps reflected in its suggestion that “safety-critical systems typically have stricter requirements than are imposed by this standard.”
The primary purpose of a requirements-driven software development process as exemplified by ISO 26262 – a functional safety standard for automotive applications – is to control the development process as tightly as possible to minimize the possibility of error or inconsistency of any kind. Although that is theoretically possible by manual means, it will generally be far more effective if software tools are used to automate the process.
Static analysis tools require that the rules can be checked algorithmically. Compare, for example, the excerpts shown in Figure 1, both of which address the same issue. The approach taken by MISRA is to prevent the issue by disallowing the inclusion of the pertinent construct. CERT C instead asserts that the developer should “be aware” of it.
Of course, the CERT C approach is clearly more flexible; something of particular value if rules are applied retrospectively. MISRA C:2012 is more draconian, yet by avoiding the side effects altogether the resulting code will be checkable by a static analysis tool (and, incidentally, more portable). It is not possible for a tool to check whether a developer is “aware” of side effects, let alone whether “awareness” equates to “understanding.”
A question of priorities
The coding rules specified by standards such as CERT C and MISRA C:2012 + AMD 1 are designed for use in secure software development, and the correct application of either will certainly result in more secure code than if neither were applied. However, MISRA’s stated aim to “… provide world-leading, best practice guidelines for the safe and secure application of both embedded control systems and standalone software” contrasts with CERT C’s wider remit. Perhaps that is why MISRA C:2012 lends itself better to highly critical applications, and why more of its rules are designed to be decidable by static analysis tools.
Conversely, there is an argument for using the CERT C standard because it is more tolerant, perhaps if an application is not critical but is to be connected to the Internet for the first time. The retrospective application of CERT C may then be a pragmatic choice to make.
In truth, there is no right or wrong here. The chances are that no “off-the-shelf” coding standard will fit your organization and your processes perfectly, and a combination of rules from these and other standards could well be most appropriate for the task at hand. The best course of action might therefore be to consider static analysis tools that are capable of melding the best foundational rules from several sources.
- SEI CERT C Coding Standard. https://wiki.sei.cmu.edu/confluence/display/c/SEI+CERT+C+Coding+Standard
- MISRA C:2012 Guidelines for the use of the C language in critical systems. March 2013.
- MISRA C:2012 - Amendment 1: Additional security guidelines for MISRA C:2012, ISBN 978-906400-16-3 (PDF), April 2016.