AdaCore Technologies for cyber security, part 3: Security vulnerabilities and their mitigation

July 3, 2018 Roderick Chapman and Yannick Moy, AdaCore

This is part three in a series of excerpts from a book written by Roderick Chapman and Yannick Moy covering AdaCore Technologies for Cyber Security. It considers a number of specific and high-profile software vulnerabilities, inspired by the CWE/SANS “Top 25 Most Dangerous Software Errors,” and discusses how each can be prevented or mitigated using Ada, SPARK, and AdaCore’s tools.

Read part two here. Note that the original content comes from AdaCore.

Some vulnerabilities are universal in that all software should be free of them—buffer overflow is a good example, since every program should be free of buffer overflows regardless of the particular application’s requirements or operational domain.

Many vulnerabilities are in some way application specific in that they may or may not be a problem, depending on the application’s particular security requirements and operational environment.

Related CWE identifiers are given in each sub-section. A more detailed list of other CWEs that are handled by Ada and/or AdaCore tools is presented in Appendix A.

Data validation

Related CWEs:

  • Improper Input Validation (plus all children and variants)
  • Validate Inputs (plus all children and variants)

Missing or incorrect validation of input data remains one of the most common security vulnerabilities in software. This is an application-specific vulnerability, since exactly what does or doesn’t constitute “valid” input data is highly dependent on an application and its security requirements.

Ada offers a range of protections from these problems, from basic dynamic checks at run time to advanced static analysis and proof techniques.

Dynamic mitigation

At the most basic level, Ada has always offered run-time range checking for scalar values. If a check fails at run time, then an exception is raised rather than allowing the execution of the program to become undefined. This offers protection against common defects such as integer range violations, buffer overflows, arithmetic overflow and division by zero.

For example, any attempt to store an integer value outside the range -180 .. 180 for an angle, or a real value outside the range 0.0 .. 10_000.0 for a length in the following example will raise an exception at run time. Similarly, a Data value whose Kind is Angle_Data cannot be mistakenly interpreted as a value whose Kind is Length_Data (i.e., an Angle bit pattern cannot be interpreted as a Length) when using the discriminated unions of Ada; such an error would raise an exception.

type Angle is new Integer range -180 .. 180;
type Length is new Float range 0.0 .. 10_000.0;
type Datatype is (Angle_Data, Length_Data);
type Data (Kind : Datatype) is record
   case Kind is 
      when Angle_Data =>
         A : Angle;
      when Length_Data =>
         L : Length;
   end case;
end record;
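As an illustrative sketch (the driver procedure is hypothetical, not from the original text), an out-of-range value is trapped at the point of assignment rather than corrupting the program:

```ada
--  Sketch: exercising the run-time checks on the types above
procedure Demo is
   D : Data := (Kind => Angle_Data, A => 90);
begin
   D.A := Angle (181);  --  range check fails: 181 is not in -180 .. 180
exception
   when Constraint_Error =>
      null;  --  the violation is trapped here, not silently undefined
end Demo;
```

An attempt to read D.L on this same object would likewise raise Constraint_Error, since its discriminant is Angle_Data, not Length_Data.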


Ada 95 added a special attribute X'Valid for any scalar object X. This returns True if and only if the raw bit-pattern present in memory is a valid value for the type of the object and satisfies any subtype constraint or predicate (if present). This is more powerful than a simple “range check”, because it applies to types with complex representations such as floating-point or enumeration types with non-contiguous values. Further, the evaluation of X'Valid can never itself become undefined or raise an exception, so it provides a way to “peek” at incoming data to see if it’s OK before proceeding. It also works with the ZFP run time and SPARK, where exceptions are excluded anyway.
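The “peek before use” pattern can be sketched as follows, assuming A holds raw data obtained without checks (for example via an address overlay or Unchecked_Conversion); the consumer and error routines are hypothetical:

```ada
--  Sketch: A may contain an arbitrary bit pattern from hardware or a
--  network; 'Valid inspects it without risk of raising an exception
if A'Valid then
   Process (A);   --  hypothetical consumer: value is known to be in range
else
   Reject_Input;  --  hypothetical error path: discard the bad data
end if;
```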

GNAT Pro adds an attribute X'Valid_Scalars that can be additionally applied to composite types like records and arrays. This applies the correct X'Valid test recursively to all the components of a composite object, and only returns True if they are all OK and also satisfy any subtype constraint/predicate. For example, an input value of type Data from the previous example could be validated by evaluating X'Valid_Scalars, which will check that X.Kind is a valid Datatype value, and depending on this value, that either X.A is a valid angle or X.L is a valid length.

In terms of run-time verification, GNAT Pro also offers an extended validity checking mode. This instructs the compiler to make worst-case assumptions about data validity and assume, for example, that memory might have been corrupted at any point, so it automatically inserts a validity test for all objects, every time they are read. This comes with a noticeable performance penalty, but offers the most protection. As noted in section 3.3, this mode can be particularly useful in combination with automated Fuzzing (essentially random input testing), since the extended validity checks spot a problem sooner rather than later. GNAT Pro also supports special pragmas that instruct the compiler to initialize scalar objects to a value which is known to be invalid and will therefore always fail a validity test on first access. This offers an easy way to spot uninitialized values at run time, protecting against another pernicious undefined behavior.
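As a sketch, GNAT’s Initialize_Scalars configuration pragma (used together with the compiler’s validity-checking switches) provides this invalid-initialization behavior:

```ada
--  GNAT configuration pragma: every otherwise-uninitialized scalar is
--  set to an invalid value, so a later read fails its validity check
--  instead of silently yielding stale memory contents
pragma Initialize_Scalars;
```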

Static mitigation

The GNAT Pro compiler can detect some violations of data constraints that do not depend on the flow of control and analysis of calls. In such cases, it issues a warning that an exception will be raised at run time if that code is executed. Similarly, it can detect some simple cases of reading an uninitialized variable.

CodePeer can go further by analyzing values and relations between variables in a fully flow-sensitive and interprocedural analysis. CodePeer offers a range of analyses that protect from data validity problems and implements a form of data-flow analysis that statically detects uninitialized variables.

SPARK Pro goes further in a number of ways. Firstly, SPARK Pro offers a completely static verification for the absence of all undefined behavior, run-time errors and exceptions. In SPARK it’s possible to prove that none of Ada’s predefined run-time checks will ever fail for any program executions.

In the most general sense, subprograms in Ada and SPARK can also include precondition contracts that can specify arbitrary validity requirements on their parameters, which can be as permissive or as strict as is required by the designer. These can be checked at run time, or by static analysis, or both.
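A minimal sketch of such a contract (the subprogram and its bounds are hypothetical):

```ada
--  Sketch: the caller must supply a non-empty message of bounded length;
--  the precondition can be checked at run time or proved statically
procedure Process_Message (Msg : in String)
  with Pre => Msg'Length > 0 and Msg'Length <= 1024;
```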

Native code injection

Related CWEs:

  • Code Injection
  • Eval Injection
  • Static Code Injection
  • Improper Neutralization of Server-Side Includes
  • Improper Restriction of Operations within the Bounds of a Memory Buffer (plus all children and variants)
  • Unsafe Reflection

This section specifically deals with the problem of malicious injection of Ada or machine code. This is a universal vulnerability. Injection of code in other languages (e.g., scripting languages or SQL) can be application specific, so is considered elsewhere. Two cases are covered here—injection of Ada code itself, and injection of compiled machine code.

Mitigation: Ada code injection

Ada has always been a compiled language. There is no “reflection” or “eval”-like construct so it is impossible for Ada source code to be maliciously inserted and/or interpreted at run time.

Furthermore, Ada programs (especially for embedded systems) can be statically linked, and therefore are not susceptible to “DLL spoofing” or other attacks relating to shared libraries or dynamic linking. Finally, compiled Ada code can be executed from read-only memory (such as ROM or a flash device), so it can be further protected from tampering.

Mitigation: Machine code injection

This remains a common attack against unsafe programming languages and defective code. In short, a buffer over-write defect results in overwriting the stack memory with malicious data, which is actually the attacker’s machine code. The return address on the stack is also manipulated to force control to jump to the malicious code. There are variants on this theme (particularly the so-called “return oriented programming” (ROP) family of attacks) but they all rely on a buffer overflow defect as the initial point of entry.

Ada is strongly protected from this class of vulnerability, owing to run-time checking of all array accesses, exception handling, and the strong forms of static analysis offered by both CodePeer and SPARK Pro. See section 4.1 for more details, since the same dynamic and static verification techniques that apply to data validity also apply to buffer overflows and other defects that lead to code injection.

Denial of service

Related CWEs:

  • Resource Exhaustion (plus all children and variants)
  • Unchecked Input for Loop Condition
  • Uncontrolled Recursion

“Denial-of-Service” has become a broad term that refers to any form of attack that prevents a computer system from fulfilling its intended role and service. This is an application-specific vulnerability, since some systems can “fail secure” while others might have onerous requirements for continuity of service and availability.

Three sub-classes of attack are worthy of mention:

  • Forced immediate termination. An attacker crafts input data that is designed to make a target system immediately terminate or “crash”. A good example would be an input that provokes an unhandled exceptional situation such as a division-by-zero.
  • Termination through resource starvation. In this case, the attacker still causes the target system to terminate unexpectedly, but does so by deliberately exhausting its resources. For example, a flood of requests from the attacker causes the target system to “run out of memory” and eventually terminate.
  • Starvation. An attacker floods a system with legitimate-looking but bogus requests. The system continues to “work” under this enormous load, but legitimate users are “starved out”.

Dynamic mitigation

For some systems, termination in a known “secure state” might be acceptable. This kind of “fail secure” behavior is supported by Ada through run-time exception handling. A top-level “catch all” handler can be inserted into the main program and each task type or object; the handler can bring the system to a safe and/or secure state before allowing the system to terminate.

A second option is similar, but the system can implement some sort of “graceful degradation” and switch to a simpler mode of operation. Other options include reverting to a backup system, or executing a hardware-based “reset” to bring the system to a known state.

Static mitigation

Where continuity of service is important, for example in communications and real-time control applications, both CodePeer and SPARK Pro offer strong protection. If programs can be statically shown to be free from all run-time errors, then they are effectively crash proof in the face of arbitrary input data.

To protect against possible non-termination of loops, CodePeer issues warnings when it detects that a loop may not terminate. SPARK Pro goes beyond mere warnings and can prove that loops terminate if the user specifies a loop variant—an expression that can be shown to increase or decrease towards a constant bound for every iteration of the loop.
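A sketch of a loop variant in SPARK (the procedure is hypothetical):

```ada
--  Sketch: SPARK can prove this loop terminates, because X strictly
--  decreases on every iteration towards the constant bound 0
procedure Drain (N : Natural) is
   X : Natural := N;
begin
   while X > 0 loop
      pragma Loop_Variant (Decreases => X);
      X := X - 1;
   end loop;
end Drain;
```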

SPARK is also immune to starvation or exhaustion of heap-based memory, since SPARK programs can be compiled without use of a heap. The GNATstack tool can also be used to show that SPARK programs will never run out of stack memory.

In all these scenarios, a “defense in depth” approach is appropriate where, for example, SPARK Pro might be used to statically eliminate run-time errors, but the system is still compiled with run-time checks, extended validity checking, and a top-level “catch all” exception handler.

Information leak

Related CWEs:

  • Classic Buffer Overflow
  • Stack-based Buffer Overflow
  • Heap-based Buffer Overflow
  • Out-of-bounds Read
  • Buffer Over-read
  • Buffer Under-read
  • Information Exposure (plus all children and variants)
  • Covert Channel (plus all children and variants)
  • Improper Initialization (plus all children and variants)

These vulnerabilities form a general class of problems where information is seen to go where it shouldn’t. Three sub-classes of this problem arise:

Mitigation: Programming defects

This is a universal vulnerability. Simple programming defects can cause information to flow in unexpected ways. For example:

  • An uninitialized variable can result in a read from memory of a value that has been “left over” on the stack from a previous computation.
  • An unchecked buffer over-read can yield sensitive or incorrect information. The notable “Heartbleed” vulnerability in the OpenSSL library was of this class.

As noted earlier, Ada offers strong protection from these classes of vulnerabilities. Uninitialized variables can be tackled with GNAT Pro’s enhanced validity checking modes, and buffer overflows are always prevented by run-time checks in Ada. CodePeer and SPARK offer static protection against both of these defect classes.

Mitigation: Algorithmic defects

If a programmer simply implements “the wrong code”, then unintended information flow can result. For example, if a function is supposed to be computed from two input variables X and Y, but the programmer mistakenly computes the result from X, Y and Z, then an observer might be able to deduce something about the value of Z, which might be a security vulnerability if Z is some sensitive value like a cryptographic key.

The exact nature of these vulnerabilities is highly application specific, but Ada and SPARK offer some protection through the use of contracts. A Depends contract, for example, could specify that the function result must be computed from X and Y and not Z and this specification is verified by the SPARK Pro tools with information-flow analysis.
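A sketch of such a contract (all names are hypothetical): the flow analyzer rejects any implementation in which Z influences the result.

```ada
--  Sketch: R must be computed from X and Y only; Z (e.g. a key) must
--  not flow into it, which SPARK verifies with information-flow analysis
procedure Compute (X, Y : in Integer;
                   Z    : in Integer;
                   R    : out Integer)
  with Depends => (R => (X, Y), null => Z);
```

Listing Z under “null =>” states explicitly that it contributes to no output, so any accidental use of Z in the body is reported.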

Contracts can also be used to specify, for example, that data of different classification (e.g. “Unclassified” and “Top-Secret”) shall not be mixed in a single computation. Again, SPARK Pro offers strong verification for such properties.

Mitigation: Side channels

So-called “Side” or “Covert” channels exploit devious and unusual observations of a program’s execution to deduce its internal state. These include:

  • Timing-based attacks. Observation of a program’s execution time can yield information on its internal state and variables.
  • Power. Observation of the electrical power consumption of a computer can divulge what’s going on internally.
  • Electro-magnetic emissions.
  • Acoustic (sound) emissions.
  • Other things that no-one has even thought of yet...

These attacks are extremely difficult to prevent using software techniques alone. SPARK offers some assistance, since it is designed to be amenable to timing analysis, and its information flow analysis engine could be used to detect where execution time depends on a particular critical variable.

Improper use of API

“Entia non sunt multiplicanda praeter necessitatem”
[“Entities are not to be multiplied beyond necessity”]
– William of Ockham, 14th Century.

or, put another way

“When in doubt, leave it out...”
– Joshua Bloch, Google.


Related CWEs:

  • Expected Behaviour Violation
  • Often Misused: Arguments and Parameters
  • Function Call with Incorrectly Specified Arguments
  • Incorrect Use of Privileged APIs
  • Exposed Dangerous Method or Function

The design of reusable, general, and error-tolerant APIs remains one of the core skills of a software designer. (See Joshua Bloch’s lecture [18] for an overview of this topic.)

For secure systems, the unintentional misuse of cryptographic or communications APIs and libraries is a regular source of defects and headlines.

A core issue is the expressive power, precision and abstraction with which a programming language allows an API specification to be defined. Many APIs need to express usage rules—what a user should and shouldn’t do to use the API properly—and if these rules are specified formally then they can be checked either at run time or using static analysis.


Once again, Ada’s contracts offer strong support. Preconditions express exactly the conditions under which a particular operation may be invoked, and can even be used to express ordering constraints on operations (e.g. “operation X must be invoked before either of operations Y or Z.”) Similarly, postcondition contracts can express exactly what operations promise to do (and not do.)
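An ordering constraint of this kind can be sketched as follows (a hypothetical file API, not from the original text):

```ada
--  Sketch: Read may only be invoked on a file that has been opened;
--  the contracts make the usage rule explicit and checkable
type File is limited private;

function Is_Open (F : File) return Boolean;

procedure Open (F : in out File)
  with Post => Is_Open (F);

procedure Read (F : in out File; Item : out Character)
  with Pre => Is_Open (F);
```

A call to Read before Open then fails its precondition at run time, or is flagged statically by CodePeer or SPARK Pro.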

Increasingly, such contracts are being added to parts of the standard Ada library to increase the strength of their specifications, and as a result their usage can be verified statically by the CodePeer and SPARK Pro tools. This is the case for example for the standard numerical library shipped with GNAT Pro. Some freely available libraries have also started to appear in SPARK, such as standard cryptographic algorithms [19].

Weak or no crypto

Related CWEs:

  • Inadequate Encryption Strength (plus all children and variants)
  • Use of a Broken or Risky Cryptographic Algorithm (plus all children and variants)
  • Use of a Cryptographically Weak PRNG

These are clearly application specific. The appropriate use of cryptographic algorithms depends on an application’s precise needs for confidentiality, authentication and integrity of data, plus the perceived capability and threat owing to attackers.

The “strength” of cryptographic algorithms can depend on the algorithm chosen, the key length employed, and the quality of random numbers that are used in the generation of critical values such as keys and “nonce” values.


Ada can help to some extent through its system of strong types. A good design approach would be to declare distinct and incompatible types for unencrypted and encrypted data, so that they cannot be confused or used in the wrong context. For example:

type Plaintext_Buffer is private;

type Encrypted_Buffer is private;


Even though, under the hood, these types might both represent an unstructured sequence of bytes, they are distinct and incompatible from the point of view of the client. For example, a procedure that outputs an encrypted buffer to a communications channel might be declared:

procedure Send_Buffer (B : in Encrypted_Buffer);


This procedure cannot send a Plaintext_Buffer and any attempt to do so would be rejected by the compiler.

If it is necessary to encrypt a plaintext buffer to produce an encrypted buffer, then a single function can do this and use contracts to enforce the strength of the key. For example:

type Key is limited private; -- no copying permitted!

function Strength_Of (K : in Key) return Natural;

function Encrypt (Data : in Plaintext_Buffer;
                  K    : in Key) return Encrypted_Buffer
  with Pre => Strength_Of (K) >= 256;


Note that the Key type is declared limited private so that objects of that type cannot be copied by assignment—another built-in feature of Ada that is ideal for such sensitive types.

Failure to erase sensitive data

Related CWEs:

  • Compiler Removal of Code to Clear Buffers
  • Sensitive Information Uncleared Before Release (plus all children and variants)
  • Compiler Optimization Removal or Modification of Security-critical Code

This vulnerability concerns the need to erase (or “sanitize”) sensitive data, such as cryptographic keys, after they have been used, to prevent unintentional leakage or exposure of that data some time later.

This is a complex issue that spans from application-specific needs to highly technical implementation details. At the high level, applications need to define exactly what data is “sensitive” in the first place, and how much protection is required. At the low end of the spectrum, just “writing zeros” into memory might suffice. At the high end, it might be necessary to physically destroy hard disks and memory chips.

As a software programming challenge, this problem is more complex than it looks. Issues include data validity (“all zeroes” might be illegal or invalid), how to stop a compiler optimizing away the sanitizing code, avoiding explicit or implicit copying of sensitive data, coping with the complexities of data caches and memory devices, and so on.


Ada provides some specific support in this area. Its limited types are ideal for sensitive data since they cannot be copied by assignment, and are always passed by reference at run time. Ada 95 also introduced a special pragma Inspection_Point which serves to forbid “dead store elimination” in the compiler for a particular object at a particular place, thus ensuring that a final “sanitizing assignment” is not removed.
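A sketch of the sanitizing pattern (the Key_Bytes type is hypothetical):

```ada
--  Sketch: wipe a key and prevent the final store from being removed
--  by the compiler's dead-store elimination
procedure Wipe (K : in out Key_Bytes) is
begin
   K := (others => 0);           --  final sanitizing assignment
   pragma Inspection_Point (K);  --  the store above must be preserved
end Wipe;
```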

SPARK Pro offers some static verification support through information-flow analysis, since a final sanitizing assignment is reported as an expected anomaly (because it does not contribute to the functional behavior of the program), and can be justified as such.

A full description of this problem and how it was solved in a particular project can be found in [20].

Authentication and authorization

Related CWEs:

  • Improper Access Control (plus all children and variants)

This large family of vulnerabilities is highly application specific. What is or is not “authorized” for a user of a system depends on the system, its environment, and the threat model that is expected.


In the broadest terms, systems should be designed with the least privilege principle in mind, restricting the most important or risky operations to the fewest users and operational scenarios.

Given a strong design along those lines, these concepts can be encoded in Ada using types and contracts. A simple model might map a user’s ID onto some ordered value of authorization:

type User_ID is limited private;

type Authorization is (None, Low, High);

function Authorization_Of (U : User_ID) return Authorization;


Sensitive operations can then use a precondition to restrict access, such as:

procedure Sensitive_Operation (Data : in out Encrypted_Buffer;
                               User : in     User_ID)
  with Pre => Authorization_Of (User) = High;


Operations that “escalate” a particular user’s authorization (from “Low” to “High” for example) could be strictly controlled and verified using Ada’s types and the appropriate combination of static and dynamic checking.
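Such an escalation operation can be sketched with contracts (the Credential type and Valid function are hypothetical additions to the model above):

```ada
--  Sketch: escalation is only permitted for a Low-authorization user
--  presenting a valid credential, and must leave the user at High
procedure Escalate (User  : in out User_ID;
                    Token : in     Credential)
  with Pre  => Authorization_Of (User) = Low and then Valid (Token),
       Post => Authorization_Of (User) = High;
```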

Dr. Roderick Chapman is an independent consulting engineer with more than 20 years of experience in the development and certification of critical software. Following graduation from the University of York in the U.K., Rod joined Praxis (now Altran UK). Rod is a regular speaker at international conferences, and he is widely recognized as a leading authority on high integrity software development, programming language design, and software verification tools. In February 2015, Rod was appointed Honorary Visiting Professor in the Department of Computer Science at the University of York.

Dr. Yannick Moy is a Senior Software Engineer at AdaCore and co-director of the ProofInUse joint laboratory with INRIA (France). At AdaCore, he works on the software source code analyzers CodePeer and SPARK Pro. Yannick has led the development of the SPARK 2014 technology, which he has presented in articles, at conferences, in courses and in blogs. Yannick previously worked on source code analyzers at PolySpace (now The MathWorks) and at the Université Paris-Sud.
