Part I: Trends, challenges, impact, and solutions
Automotive market trends toward advanced driver assistance systems (ADAS), automated driving systems (ADS), and autonomous vehicles (AV) are fostering innovation and fueling a relentless increase in the amount of software content incorporated into advanced electronic control units (ECUs), sensors, actuators, and other onboard hardware. This has resulted in dramatic spikes in the amount of data flowing throughout vehicle electrical/electronic (E/E) systems. Meanwhile, to meet the auto industry’s increasingly stringent requirements relative to time, budget and quality, achieving effective verification and validation (V&V) has become paramount – especially in light of the mass adoption that many experts predict for these transportation technologies in the years and decades ahead.
As automated and autonomous drive technologies grow substantially more complex, developers face new challenges in verifying and validating the safety and security of next-generation E/E systems. In the process of addressing these challenges, engineers are being bombarded with an expanding number of new, specialized software tools from different vendors, all of which must somehow work together.
A common and quite significant challenge facing today’s automotive engineers is the massive number of V&V cycles now required, including testing that spans the many gaps in the tiered development ecosystem. The vast amount of software and the mountains of data flowing within and between system hardware create very complex interactions. E/E systems are inherently multi-ECU distributed systems, which means that the ideal V&V infrastructure must support mixing levels of accuracy (or fidelity) within the system model in order to realistically cover the number of test scenarios required.
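To make the idea of mixed fidelity concrete, here is a minimal sketch (not any vendor’s API; all class and signal names are illustrative) of a system model in which each simulated ECU node can be swapped between a fast behavioral model and a slower, more detailed one behind a common interface:

```python
from abc import ABC, abstractmethod

class EcuModel(ABC):
    """One node in the distributed E/E system model."""
    @abstractmethod
    def step(self, bus_frame: dict) -> dict: ...

class BehavioralBrakeModel(EcuModel):
    """Low fidelity: idealized, instant response -- cheap enough for broad scenario sweeps."""
    def step(self, bus_frame):
        return {"brake_pressure": bus_frame.get("brake_request", 0.0)}

class DetailedBrakeModel(EcuModel):
    """Higher fidelity: first-order actuator lag -- slower, but closer to real hardware."""
    def __init__(self, time_constant=0.1, dt=0.01):
        self.pressure, self.tc, self.dt = 0.0, time_constant, dt
    def step(self, bus_frame):
        target = bus_frame.get("brake_request", 0.0)
        self.pressure += (target - self.pressure) * self.dt / self.tc
        return {"brake_pressure": self.pressure}

def run_scenario(nodes, frames):
    """Drive every node with the same bus traffic, regardless of its fidelity level."""
    log = []
    for frame in frames:
        log.append({name: node.step(frame) for name, node in nodes.items()})
    return log
```

Because both models satisfy the same `step` contract, a test campaign can run thousands of scenarios against the cheap model and re-run only the critical ones at higher fidelity, without rewriting the scenarios themselves.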
While ECU hardware is highly accurate, and verification equipment allows engineers to test systems using actual ECU targets, cost and maintenance complexity limit the number of hardware-based verification systems available in a typical project. Access to these systems is usually scheduled, and not every software engineer on a project can use them when they are most needed. Further, physical hardware offers only limited ability to be predictably controlled, and visibility into the system’s signals for tracing and failure injection is not always possible. Perhaps even more limiting, actual ECU hardware requires environment models that execute in real (wall-clock) time. This caps their fidelity on one end and, on the other, means time cannot be accelerated for tests that must account for long-term effects within shortened verification cycles.
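The wall-clock constraint is what a virtual ECU removes: once nothing in the loop is physical, the simulation clock can advance as fast as the host computes. A minimal sketch of this decoupling (the function and its arguments are illustrative, not a real tool’s interface):

```python
def simulate_long_term(component_step, n_steps, dt_s=0.01):
    """Advance a virtual clock decoupled from wall-clock time.

    With a real ECU in the loop, each dt_s of simulated time must take dt_s
    of real time. With virtual targets, nothing enforces that, so hours of
    simulated operation can be replayed in seconds of host time.
    """
    state = None
    for i in range(n_steps):
        state = component_step(i * dt_s, state)  # virtual time = i * dt_s
    return state
```

For example, a long-term wear or drift test covering ten virtual minutes at a 10 ms step is just 60,000 calls to `component_step`, which a host executes in a fraction of that time, whereas a hardware bench would need the full ten minutes per run.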
Test reuse throughout the development process is another substantial engineering challenge in the ADAS/AD era. The most significant factors that affect reuse include: the many levels of test bench signal abstraction, the different means for controlling the growing array of available specialized tools, and the disparate modeling, test, and programming languages used across a project.
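One way to blunt the signal-abstraction problem is an adapter layer: tests are written against logical signal names, and a thin per-bench mapping handles each platform’s raw channel names and scaling. The sketch below assumes a hypothetical `BenchAdapter`; the channel names and scale factors are invented for illustration:

```python
class BenchAdapter:
    """Maps logical signal names to one bench's raw channels and units."""
    def __init__(self, channel_map, scale=None):
        self.channel_map = channel_map
        self.scale = scale or {}     # per-signal unit conversion factors
        self.raw = {}                # stands in for the bench I/O layer

    def write(self, signal, value):
        factor = self.scale.get(signal, 1.0)
        self.raw[self.channel_map[signal]] = value * factor

    def read(self, signal):
        factor = self.scale.get(signal, 1.0)
        return self.raw[self.channel_map[signal]] / factor

def brake_request_test(bench):
    """One test body, reusable on any bench exposing the same logical signals."""
    bench.write("vehicle_speed_kph", 50.0)
    bench.write("brake_request", 1.0)
    # On a real bench the system under test would respond here; this sketch
    # only checks that the stimulus round-trips through the abstraction.
    return bench.read("brake_request") == 1.0
```

The same `brake_request_test` can then run unchanged on, say, a model-in-the-loop bench and a hardware bench whose channel maps and units differ, which is exactly the kind of reuse the abstraction gaps otherwise prevent.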
Failing to address these challenges effectively can lead to a number of negative business and economic outcomes. Insufficient safety and security test coverage has perhaps the largest potential business impact, because it means potentially significant problems can go undetected. Failing to discover issues early in the project, when they are least expensive to fix, can explode development budgets. And technical errors that escape into deployment can result in catastrophic consequences for businesses, or worse, for end users.
A major factor contributing to inadequate test coverage (and its associated inefficiencies and costs) is the inability of partners to exchange test artifacts between teams, groups, and organizations. Failure to follow V&V standards and best practices often means switching back and forth between incompatible testing technologies, which ultimately limits the kind of cross-organizational sharing of test artifacts required for optimal test coverage. It also prevents verification engineers from combining the best test automation software with the best test benches. The inability to share test artifacts can also contribute to the common problem where one partner cannot reproduce another’s reported issue. Furthermore, training costs increase when engineers must learn the proprietary details of each non-standard tool.
This is the first of a multi-part series exploring and outlining solutions to these critical V&V challenges in the AD/ADAS era. Among the solutions examined in this series are: virtual ECUs with scalable-fidelity; correct-by-design generative MDD workflows; software architecture standards; test framework standards; modeling and tool interoperability standards; and architecture-aware verification.
Here is an overview of the forthcoming articles in this series:
- Part II: Scalable-fidelity within XIL test benches – without a range of test bench platforms that each optimally support the many different levels of V&V required, testing time and coverage cannot scale to cover the immense problem space.
- Part III: Test reuse throughout the process – without test reuse, the cost and time penalties for V&V of the E/E system are business-prohibitive, and coverage is insufficient to address stringent safety and security requirements.
- Part IV: Generative model-driven development (MDD) workflow – automation tools not only significantly reduce engineering time and effort, but they also maximize quality by utilizing expertise-capturing, correct-by-design model transformations to produce production artifacts.
- Part V: Design-aware V&V – without a means to cross-correlate test signals to the design level, engineers cannot verify and validate their E/E systems in terms of its design.