This is the second installment in a series of articles addressing engineering challenges and opportunities associated with the verification and validation of autonomous and semi-autonomous vehicles. Click here to read Part 1.
Part II: Scalable-fidelity within XIL test benches
With respect to the design of electrical/electronic (E/E) systems, development methodologies and verification & validation (V&V) tools have advanced significantly over the past few decades. Today, the model-driven development (MDD) methodology and “X”-in-the-loop (XIL) verification approaches are well established as effective means to develop safe and secure vehicle E/E systems.
The XIL apparatus represents the so-called “digital twin”: a model of the system that executes software functions on a network of electronic control units (ECUs) connected to environment models and tests. Two key industry standards commonly used to create the digital twin are the Functional Mock-up Interface (FMI) and AUTOSAR.
FMI is an open, tool-independent standard, broadly supported by many tool vendors, for both model exchange (the importing tool supplies the solver) and co-simulation (each FMU embeds its own solver) of dynamic models. FMI specifies a combination of XML files and compiled C code, bundled into Functional Mock-up Units (FMUs) that represent the sensors, actuators, and plant environment surrounding a distributed E/E system. FMUs allow the fidelity of the environment model to scale to meet the verification intent without changing the test bench interfaces. FMUs can also represent the ECU itself within the system-level FMI-master simulation.
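To make the co-simulation idea concrete, the sketch below implements a minimal fixed-step master that exchanges signals between two FMU-like components and then advances each one, loosely analogous to what an FMI master does with `fmi2DoStep`. Everything here is illustrative: the class names, the toy plant and controller dynamics, and the wiring table are invented for this example and are not the real FMI C API.

```python
class CoSimSlave:
    """Stand-in for an FMI co-simulation FMU: it carries its own solver and
    advances its internal state one communication step at a time. Illustrative
    only -- real FMUs are XML + compiled C code loaded by an FMI master."""
    def __init__(self, name):
        self.name = name
        self.inputs = {}
        self.outputs = {}

    def do_step(self, t, dt):
        raise NotImplementedError

class PlantModel(CoSimSlave):
    """Toy plant: vehicle speed responds to a throttle input with a first-order lag."""
    def __init__(self):
        super().__init__("plant")
        self.speed = 0.0
        self.outputs["speed"] = 0.0

    def do_step(self, t, dt):
        throttle = self.inputs.get("throttle", 0.0)
        # Euler step toward the steady-state speed 40 * throttle (tau = 2 s)
        self.speed += dt * (40.0 * throttle - self.speed) / 2.0
        self.outputs["speed"] = self.speed

class ControllerModel(CoSimSlave):
    """Toy ECU behavior model: proportional cruise controller targeting 20 m/s."""
    def __init__(self):
        super().__init__("ecu")
        self.outputs["throttle"] = 0.0

    def do_step(self, t, dt):
        error = 20.0 - self.inputs.get("speed", 0.0)
        self.outputs["throttle"] = max(0.0, min(1.0, 0.1 * error))

def run_master(slaves, wiring, stop_time, dt):
    """Fixed-step co-simulation master: exchange signals, then step every slave."""
    t = 0.0
    while t < stop_time:
        for (src, out), (dst, inp) in wiring.items():
            slaves[dst].inputs[inp] = slaves[src].outputs[out]
        for s in slaves.values():
            s.do_step(t, dt)
        t += dt
    return slaves

slaves = {"plant": PlantModel(), "ecu": ControllerModel()}
wiring = {("plant", "speed"): ("ecu", "speed"),
          ("ecu", "throttle"): ("plant", "throttle")}
run_master(slaves, wiring, stop_time=10.0, dt=0.01)
print(round(slaves["plant"].speed, 1))  # settles near 16.0 (P-control steady-state error)
```

Because the master only sees the `do_step` interface and the wiring table, either slave could be swapped for a higher-fidelity model without touching the test bench, which is exactly the scaling property FMUs provide.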
The AUTOSAR partnership is an alliance of OEM manufacturers, tier 1 automotive suppliers, semiconductor manufacturers, software suppliers, and tool suppliers. Considering the different automotive E/E architectures in current and future markets, the partnership establishes an open, de-facto industry standard for automotive software architectures. The significance of AUTOSAR within XIL test benches is that it provides formal platform concepts and hardware abstractions, allowing the digital twin’s timing behavior and signal communications to be considered very early and continuously in the process as the fidelity of the ECU model scales.
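The hardware-abstraction point can be illustrated with a loose analogy to the AUTOSAR runtime environment (RTE): the application software component talks only to RTE-style read/write calls, so the same application logic can be rehosted from a SIL bench to target firmware by swapping the layer underneath. This is a Python sketch, not the real AUTOSAR C API; the port names and the headlight example are invented.

```python
class SimRte:
    """Simulation-side stand-in for an AUTOSAR RTE. On target, the equivalent
    layer would route these ports to real bus drivers and I/O; here the signals
    live in a dictionary. Illustrative only -- not the generated Rte_* C API."""
    def __init__(self):
        self._signals = {}

    def write(self, port, value):
        self._signals[port] = value

    def read(self, port, default=0.0):
        return self._signals.get(port, default)

def headlight_control(rte):
    """Application software component: depends only on the RTE-style interface,
    never on concrete hardware, so it runs unchanged at MIL/SIL/vHIL/HIL."""
    ambient = rte.read("AmbientLight_Lux")
    rte.write("Headlamp_Request", ambient < 400.0)

rte = SimRte()
rte.write("AmbientLight_Lux", 120.0)   # dusk-level ambient light
headlight_control(rte)
print(rte.read("Headlamp_Request"))    # prints True
```

The design choice being illustrated is the one the standard formalizes: because the component never references hardware directly, timing and signal communication can be examined early, while the platform beneath it scales in fidelity.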
In the MDD systems engineering process, a model of the ECU’s behavior is tested against a model of the vehicle system’s communication networks, sensors, actuators, and plant environment surrounding the ECU; together these comprise the so-called Model-In-the-Loop (MIL) level of abstraction. Once the MIL-level behavioral model is validated, it is automatically transformed into C/C++ code and retested, which represents the Software-In-the-Loop (SIL) level of abstraction. Eventually, the generated code is integrated into ECU hardware and platform software (a.k.a. firmware) and retested again, giving the Hardware-In-the-Loop (HIL) level of abstraction. HIL-level testing can also be performed using models of the ECU hardware, giving a virtual Hardware-In-the-Loop (vHIL) level of abstraction.
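The progression above can be summarized as a small data structure, which is also a convenient shape for driving bench-selection tooling. The column labels are paraphrases of the text, not official AUTOSAR or FMI terminology:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class XilLevel:
    """One XIL abstraction level: what is modeled vs. real at that level.
    Labels are illustrative summaries of the MIL/SIL/HIL/vHIL progression."""
    name: str
    application_sw: str   # behavioral model vs. generated production code
    platform_sw: str      # abstracted vs. actual firmware
    ecu_hardware: str     # absent, modeled, or physical

XIL_LEVELS = [
    XilLevel("MIL",  "behavioral model", "abstracted",      "absent"),
    XilLevel("SIL",  "generated C/C++",  "abstracted",      "absent"),
    XilLevel("vHIL", "production code",  "actual firmware", "modeled"),
    XilLevel("HIL",  "production code",  "actual firmware", "physical"),
]

for level in XIL_LEVELS:
    print(f"{level.name:5} app={level.application_sw:16} hw={level.ecu_hardware}")
```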
To satisfy a test’s purpose, the accuracy of the XIL configuration must be sufficient for adequate coverage and confidence. The range of fidelity of the various XIL configurations can be quite wide. The vHIL configuration that leverages virtual ECU simulation technology covers the broadest range. With this configuration, the accuracy of the ECU hardware model can scale, whereas the platform and application software is the actual code that deploys in the final vehicle (similar to HIL). This facilitates the testing of final production software on a platform with optimal accuracy relative to verification intent. This concept is called “scalable-fidelity.”
Scalable-fidelity is important because the digital twin best suited for testing drivability may not be the one most effective for verifying that embedded software meets certain safety or security requirements. Determining whether a digital twin is sufficiently accurate requires a clear, specific statement of precisely what must be verified, so that the “right level” of fidelity can be determined.
The right level of fidelity matters because digital twins that are too simple cannot expose enough detail for every verification problem, while highly precise digital twins carry their own trade-offs: long development cycles, higher costs, and possibly insufficient simulation performance. Typically, the fastest and least expensive digital twin that still offers sufficient fidelity to solve the problem is best.
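That selection rule ("cheapest bench that still meets the verification intent") is simple enough to express directly. In the sketch below the fidelity and cost scales are invented single-number stand-ins for what would, in practice, be a multi-dimensional trade-off:

```python
def pick_test_bench(benches, required_fidelity):
    """Return the cheapest bench whose fidelity meets the verification intent.
    Fidelity/cost are illustrative scalar rankings, not real metrics."""
    candidates = [b for b in benches if b["fidelity"] >= required_fidelity]
    if not candidates:
        raise ValueError("no bench meets the required fidelity")
    return min(candidates, key=lambda b: b["cost"])

# Hypothetical rankings: higher fidelity generally costs more and runs slower.
benches = [
    {"name": "MIL",  "fidelity": 1, "cost": 1},
    {"name": "SIL",  "fidelity": 2, "cost": 2},
    {"name": "vHIL", "fidelity": 3, "cost": 5},
    {"name": "HIL",  "fidelity": 4, "cost": 50},
]

print(pick_test_bench(benches, required_fidelity=2)["name"])  # prints SIL
print(pick_test_bench(benches, required_fidelity=4)["name"])  # prints HIL
```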
To complete the massive amount of testing required, teams cannot rely solely on hardware-based rigs: there are usually too few of them for every software developer or verification engineer on a project, and they can only execute in real (wall-clock) time. Conducting tests at the MIL, SIL, or vHIL levels instead can substantially speed up testing cycles, making these models far more appropriate for many verification requirements. And because testing with models requires only a PC, cost-efficiency is maintained.
Also critical for efficiency is ensuring that the testing framework supports the ability to mix the XIL levels of abstraction within a single system-level simulation. This is key to the effective validation of the types of highly distributed, multi-core, and multi-ECU E/E designs common in most modern ADAS, ADS, and AV vehicle systems. Not every ECU digital twin in a multi-ECU system simulation scenario needs to be of the highest-fidelity available for that particular ECU.
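A back-of-the-envelope cost model shows why mixing abstraction levels pays off. Assume the models of a multi-ECU simulation share one host, so each model's CPU cost per simulated second adds up; the slowdown factors below are invented for illustration, not measured figures:

```python
# Hypothetical CPU cost per simulated second, expressed as a multiple of
# real time, for each abstraction level (invented for illustration).
SLOWDOWN = {"MIL": 0.1, "SIL": 0.5, "vHIL": 20.0}

def wall_clock_cost(system, scenario_seconds):
    """Total host CPU time to simulate the scenario, summed over all models."""
    return scenario_seconds * sum(SLOWDOWN[level] for level in system.values())

# Only the ECU under test needs top fidelity; its partners can stay at SIL.
all_vhil = {"brake_ecu": "vHIL", "adas_ecu": "vHIL", "gateway": "vHIL"}
mixed    = {"brake_ecu": "vHIL", "adas_ecu": "SIL",  "gateway": "SIL"}

print(wall_clock_cost(all_vhil, 60))  # prints 3600.0 for a 60 s scenario
print(wall_clock_cost(mixed, 60))     # prints 1260.0 with mixed fidelity
```

Even with made-up numbers, the structure of the argument holds: keeping every ECU at its highest available fidelity multiplies cost without adding verification value for the ECUs that are not under test.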
Finally, ensuring scalable-fidelity within XIL test benches provides a number of cost-saving benefits:
- Problems are found earlier in car projects, when they are least expensive to fix
- Increased V&V coverage boosts safety, enhances security, and otherwise identifies problems before they are deployed into the field
- Leveraging test benches of mixed fidelity supports the massive number of V&V cycles required for multi-ECU systems
The third installment in this series will address test reuse and associated considerations.