Maximizing the benefits of Model-Based Design through early verification

November 01, 2011

Implementing these four best practices in Model-Based Design leads to early verification and decreased testing at the end of the development cycle.

 

In Model-Based Design (MBD), verification and validation are performed by testing in simulation. Although many organizations use some form of modeling, too many apply simulation in an ad hoc manner that does not take advantage of the potential verification benefits (see Figure 1).

 

Figure 1: Costs of fault propagation throughout the phases of development illustrate how some organizations do not take advantage of the verification benefits of simulation.


To maximize the benefits of MBD, successful organizations have implemented four key practices that help accomplish early verification:

·    Create and simulate a high-level system model during the specification stage. In MBD, the system model serves as an executable specification. Early simulations of this model highlight incomplete and inconsistent requirements and specifications.

·    Test from day one with multidomain simulations. By developing multidomain models and performing closed-loop simulations, engineers can start testing while the product idea is taking shape. These simulations enable engineers to investigate all aspects of the system, including algorithms, components, the plant model, and the environment.

·    Create virtual test suites that stress the system. Simulations enable engineers to conduct a range of tests that would be difficult or impossible to perform on the embedded system itself. Like all tests, these should be run as early as possible.

·    Use the model and test suites as a reference design throughout the development process. Well-constructed models can be used throughout development and then reused for future enhancements and derivative designs.[1]

These practices are not sequential steps; they should be applied in parallel, as four complementary dimensions of using MBD for early verification.

Create and simulate a high-level system model

To serve as an executable specification, a high-level system model must mirror the system’s abstract behavior. The model might not include the full interface definition, but it must specify the system’s dynamic behavior. Simulating system behavior in the requirements specification stage helps ensure that the team has a complete and shared understanding of what the system is required to do.

With MBD, engineers start by assembling the architecture using either subsystems or discrete states. The dynamics within these subsystems should initially be modeled using the easiest possible approach. In parallel with this activity, other engineers can create scenarios or formalized requirements in preparation for testing the dynamics as early as possible.

When the first tests are run, the engineers who modeled the functional behavior will learn more about the system and the real meaning of the requirements. Likewise, the engineers who created the test scenarios or formalized requirements will learn whether the requirements are consistent and complete. Each group should communicate their findings to the other to make sure there are no misunderstandings.
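
The sketch below, written in plain Python rather than in a modeling tool, illustrates the idea with hypothetical dynamics and requirement values: the executable specification captures only abstract behavior, here a first-order speed response, and a formalized requirement is checked against the simulated trace.

```python
# Minimal executable-specification sketch: the abstract dynamic behavior is a
# first-order lag of speed toward its setpoint, and a formalized requirement
# is checked against the simulated trace. All numbers are hypothetical.

def simulate_speed_response(setpoint, time_constant=2.0, dt=0.01, t_end=10.0):
    """Abstract behavior only: speed approaches the setpoint as a first-order lag."""
    speed, t, trace = 0.0, 0.0, []
    while t <= t_end:
        trace.append((t, speed))
        speed += dt * (setpoint - speed) / time_constant  # dv/dt = (v_set - v) / tau
        t += dt
    return trace

def requirement_settled(trace, setpoint, tolerance=0.02, deadline=5.0):
    """Formalized requirement: speed stays within 2 % of the setpoint after 5 s."""
    return all(abs(speed - setpoint) <= tolerance * setpoint
               for t, speed in trace if t >= deadline)

trace = simulate_speed_response(setpoint=30.0)
print("Settling requirement met:", requirement_settled(trace, setpoint=30.0))
```

With these example values the requirement check fails, because the specified dynamics cannot settle within the demanded deadline. Surfacing exactly that kind of inconsistency between requirements and specified behavior is the purpose of simulating the executable specification early.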

Test from day one with multidomain simulations

System behavior is defined not only by the embedded control software, but also by the electronic and mechanical components, including the connected sensors and actuators. Early simulations in which the architecture is executed provide more insight when they are performed in a closed loop with plant or environment models.

Closed-loop simulations with plant and environment models offer several advantages over open-loop simulations or testing on actual plant hardware, and they reduce costs in multiple phases of development. A model is easier to reconfigure and replicate than a mechanical and electrical device built from steel, wires, circuits, and other hardware, so engineers can rapidly switch between versions of the physical model without incurring manufacturing costs. By simply changing parameters such as the length of a rod or the maximum torque of an electric drive, teams can evaluate trade-offs and optimize the complete system for cost, speed, power, and other requirements.
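
As a simplified illustration, the following closed-loop sketch (hand-written Python with hypothetical parameter values, standing in for a plant model built in a simulation tool) pairs a torque-limited electric drive with a basic PI speed controller. Changing a single parameter, the drive's maximum torque, is enough to compare two candidate components against a rise-time goal.

```python
# Closed-loop sketch: a torque-limited electric drive (the plant) driven by a
# basic PI speed controller. Inertia, gains, and torque limits are hypothetical
# stand-ins for the values an engineer would vary during trade-off studies.

def rise_time_95(max_torque, inertia=0.05, kp=0.8, ki=0.4,
                 setpoint=100.0, dt=0.001, t_end=3.0):
    """Return the time at which speed first reaches 95 % of the setpoint."""
    speed, integral, t = 0.0, 0.0, 0.0
    while t <= t_end:
        error = setpoint - speed
        integral += error * dt
        torque = kp * error + ki * integral
        torque = max(-max_torque, min(max_torque, torque))  # actuator saturation
        speed += (torque / inertia) * dt                    # dw/dt = T / J
        if speed >= 0.95 * setpoint:
            return round(t, 3)
        t += dt
    return None  # setpoint not reached within the simulated horizon

# Evaluate the same controller against two candidate drives.
for torque_limit in (2.0, 5.0):
    print(f"max torque {torque_limit} Nm -> 95 % rise time: "
          f"{rise_time_95(torque_limit)} s")
```

In this sketch the weaker drive takes more than twice as long to reach 95 percent of the setpoint, the kind of quantified trade-off that closed-loop simulation makes inexpensive to obtain.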

System-level optimization requires multidomain simulations. It is impossible to optimize today’s sophisticated systems by tuning one parameter at a time. To deliver maximum energy efficiency and highest performance at minimal material cost, engineers must optimize the system as a whole, and not just the embedded software.

Plant models also provide another perspective on the system: modeling the nonsoftware parts gives engineers a further view into system behavior. Engineers can often learn more about system dynamics through simulation than from the real system, because simulation provides details on force, torque, current, and other quantities that are difficult or impossible to measure on the actual hardware.

Creating plant models requires engineering effort, but this effort is often overestimated, while the value provided by plant modeling is underestimated. When developing plant models, it is a best practice to start at a high level of abstraction and add details as needed. Choosing a level of abstraction that is just detailed enough to produce the needed results saves modeling effort as well as simulation time (see Figure 2).
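
One way to keep the abstraction level flexible is to place interchangeable plant models behind a common interface, as in the hypothetical Python sketch below: simulations can start with the simplest model and swap in a more detailed variant only once the added effects matter for the results.

```python
# Sketch: two plant models of the same drive at different abstraction levels,
# sharing one interface so simulations can start simple and add detail later.
# Parameter values (inertia, friction) are hypothetical.

class SimplePlant:
    """Pure rotating inertia: detailed enough for early controller sizing."""
    def __init__(self, inertia=0.05):
        self.inertia, self.speed = inertia, 0.0

    def step(self, torque, dt):
        self.speed += (torque / self.inertia) * dt
        return self.speed

class FrictionPlant(SimplePlant):
    """Adds viscous friction once losses start to matter for the results."""
    def __init__(self, inertia=0.05, friction=0.01):
        super().__init__(inertia)
        self.friction = friction

    def step(self, torque, dt):
        net_torque = torque - self.friction * self.speed
        self.speed += (net_torque / self.inertia) * dt
        return self.speed

def run(plant, torque=2.0, dt=0.001, t_end=1.0):
    """Drive any plant model with a constant torque and return the final speed."""
    t = 0.0
    while t < t_end:
        plant.step(torque, dt)
        t += dt
    return plant.speed

print("simple plant, speed after 1 s:  ", round(run(SimplePlant()), 1))
print("friction plant, speed after 1 s:", round(run(FrictionPlant()), 1))
```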

 

Figure 2: Early verification as part of Model-Based Design streamlines embedded control design with modeling, simulation, and automatic code generation.



Create virtual test suites that stress the system

Efficient testing requires a separation of concerns. Organizations should test different aspects of the software implementation at different stages of development. Testing communications and hardware effects before the algorithms have been tested makes it difficult to isolate and identify the source of defects in the design.

Applying tests where and when they are most appropriate enables teams to evaluate the design at the right level for each phase of development. At each phase, test results should be fed back to development immediately to enable continued refinement of the design.

Functional testing involves simulating the controller model with the multidomain environment model. Test vectors used in functional testing are based on either formalized requirements[1] or scenarios such as recorded driving maneuvers. These test vectors can be reused for regression testing and full model coverage testing.
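
The sketch below (hypothetical Python, with a placeholder standing in for the controller model under test) shows how test vectors can be kept as plain data, so the scenarios that drive functional testing can later be replayed unchanged for regression and coverage runs.

```python
# Sketch: functional test vectors stored as data so the same scenarios can be
# replayed later for regression testing. The model under test and the limits
# are hypothetical placeholders for a simulated or generated controller model.

def model_under_test(speed_profile):
    """Placeholder controller: torque proportional to the error against a fixed
    50 rad/s setpoint, clipped to the 5 Nm actuator limit."""
    return [max(-5.0, min(5.0, 0.8 * (50.0 - speed))) for speed in speed_profile]

# Scenario-based test vectors, e.g. extracted from recorded maneuvers.
TEST_VECTORS = {
    "launch_from_standstill": [0.0, 10.0, 25.0, 40.0, 50.0],
    "overspeed_recovery":     [70.0, 65.0, 55.0, 50.0],
}

def torque_within_limits(torques, max_torque=5.0):
    """Functional requirement: commanded torque never exceeds the actuator limit."""
    return all(abs(torque) <= max_torque for torque in torques)

for name, profile in TEST_VECTORS.items():
    verdict = torque_within_limits(model_under_test(profile))
    print(f"{name}: {'PASS' if verdict else 'FAIL'}")
```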

Rapid Control Prototyping (RCP) adds real-time verification and the user experience to the test regimen. RCP helps engineers quickly deploy algorithms and test them in the vehicle to determine whether the functionality feels right. Enabled by on-target rapid prototyping and functional rapid prototyping platforms, RCP can be a rich source of design ideas, but it should not serve as the primary method of verifying functionality.

Robustness testing aims to evaluate system robustness amid changes in software parameters, manufacturing process variances, mechanical and electrical hardware degeneration over the system’s lifetime, and similar effects. It is a best practice to run parameter sweeps on the virtual system, including the controller and environment. With a more thorough understanding of how the system performs at boundary conditions, engineers can choose to narrow the specification for their hardware vendors or conclude that a less expensive part with slightly higher variances meets their design needs.[2]
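
A parameter sweep of this kind can be sketched as follows (hypothetical Python and tolerance values): the virtual closed loop is run across combinations of manufacturing variance and ageing effects, and any combination that violates the settling requirement is flagged.

```python
# Sketch of a robustness sweep: the virtual closed loop is run across
# hypothetical manufacturing tolerances and ageing effects, and parameter
# combinations that violate the settling requirement are flagged.
from itertools import product

def settles(inertia, friction, kp=1.5, setpoint=50.0, dt=0.001, t_end=3.0):
    """True if speed ends within 2 % of the setpoint after 3 s of simulation."""
    speed, t = 0.0, 0.0
    while t < t_end:
        torque = kp * (setpoint - speed)          # proportional speed controller
        speed += ((torque - friction * speed) / inertia) * dt
        t += dt
    return abs(speed - setpoint) <= 0.02 * setpoint

inertia_values = (0.04, 0.05, 0.06)     # nominal value plus manufacturing variance
friction_values = (0.005, 0.01, 0.05)   # new part ... degraded part near end of life

for inertia, friction in product(inertia_values, friction_values):
    verdict = "ok" if settles(inertia, friction) else "VIOLATION"
    print(f"J={inertia:.2f}, b={friction:.3f}: {verdict}")
```

In this example only the heavily degraded friction value violates the requirement, which is the kind of finding engineers can use either to tighten the hardware specification or to accept a less expensive part.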

Hardware-In-the-Loop (HIL) testing enables engineers to test the real controller or controller networks in the lab rather than in a real-world environment. HIL testing can be used to test robustness (for example, by inserting failures) or diagnose intercontroller communication in large controller networks. It covers hardware and communication effects that cannot be modeled easily.

Like RCP, HIL testing is necessary for system verification, but it should not be used as the principal means of functional testing. This is because HIL testing is conducted at a very low abstraction level – close to the real system – and therefore combines many different effects that prevent efficient functional testing.[3]

HIL testing requires an investment in hardware that ranges from standard PCs with specialized data cards to high-end hardware racks. Fewer tests can be executed on such a system than in pure software because software testing can be more easily replicated on multiple computers. This is another reason to ensure that functionality is verified before HIL testing. If engineers find algorithmic defects during HIL testing, then the upstream verification process is probably inadequate.

Use the model and test suites as a reference design

In MBD, all key development tasks are performed at the model level. This means that any modification made to the generated code must also be made in the model. Using the model and test suites as a single source of truth throughout development promotes clear communication and efficient reuse of models and tests, not only for the current project, but also for future enhancements and derivative designs.

Put all artifacts under configuration management

Software engineers recognize the value of versioning code in a Configuration Management System (CMS). The key artifacts in MBD – models, tests, and simulation results – should also be maintained in a CMS. Managing artifacts in a CMS makes it easy for teams to rerun virtual tests and compare the current test harness with a former model state or former test vectors.

Versioning models works best when the model structure is modular rather than monolithic. A modular model structure can also accelerate development by allowing multiple engineers to work on different parts of the same system in parallel and by enabling parallelized code generation.

Perform regression tests

Software engineers use nightly builds to compile and test the most up-to-date version of the source code. This approach should be applied to modeling and simulation as well. Once an engineer defines a new test to verify a specific model behavior, that test should be integrated into the nightly build to ensure that the specific behavior still works in all subsequent modeling iterations. If the test fails at some point, then either a defect has been identified or the functionality has fundamentally changed and the test is no longer applicable.
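
A minimal sketch of such a test, using Python's standard unittest module with a placeholder in place of the real model run (the dynamics and limits below are hypothetical), might look like this; wiring it into the nightly build then amounts to invoking the test runner from the build script.

```python
# Sketch: a simulation check wrapped as a standard unit test so a nightly build
# or any CI job can rerun it against the latest model version.
import unittest

def simulate_step_response(kp=1.5, setpoint=50.0, inertia=0.05, dt=0.001, t_end=3.0):
    """Stand-in for running the real model: proportional speed control of an inertia."""
    speed, peak, t = 0.0, 0.0, 0.0
    while t < t_end:
        speed += (kp * (setpoint - speed) / inertia) * dt
        peak = max(peak, speed)
        t += dt
    return speed, peak

class StepResponseRegression(unittest.TestCase):
    def test_settles_near_setpoint(self):
        final_speed, _ = simulate_step_response()
        self.assertAlmostEqual(final_speed, 50.0, delta=1.0)

    def test_no_excessive_overshoot(self):
        _, peak = simulate_step_response()
        self.assertLessEqual(peak, 50.0 * 1.02)

if __name__ == "__main__":
    unittest.main()   # e.g. invoked by the nightly build script
```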

Verify early and often

The best practices outlined in this article allow engineers to achieve early verification, lessening the time spent testing and debugging at the end of the development cycle. Key to this process is MBD, which enables verification as a parallel activity throughout development. Performing test and verification at every step of the development process means finding errors at their point of introduction, so the design can be iterated, fixed, and verified faster than in a traditional process.

References:

[1] Holzapfel, Florian, et al. "Autopilotenentwicklung als Benchmark für einen durchgängigen System- und Softwareentwicklungsprozess." Workshop: Brücke zwischen Systemdesign und Softwareentwicklung in der Luft- und Raumfahrt. Garching: DGLR, 2009.

[2] Friedman, Jonathan, Sameer M. Prabhu, and Paul F. Smith. "Best Practices for Establishing a Model-Based Design Culture." SAE World Congress, 2007.

[3] Schlosser, Joachim. Architektursimulation von verteilten Steuergerätesystemen. Berlin: Logos Verlag, 2006.

Guido Sandmann is the automotive marketing manager, EMEA, at MathWorks. He has more than 10 years of experience applying MathWorks products in various application areas. Guido has a degree in Computer Science from the University of Oldenburg.

Joachim Schlosser is senior team leader in the Application Engineering Group at MathWorks. He has experience as a process and methodology consultant as well as an application engineer for a MOST simulation/emulation tool. Joachim has a degree in Computer Science from the Augsburg University of Applied Sciences and a PhD in Computer Science from Technical University Munich.

Brett Murphy is technical marketing manager at MathWorks. He has extensive controls analysis, real-time software development, and systems engineering experience in the aerospace and embedded systems industries. Brett holds BS and MS degrees in Aerospace Engineering from Stanford University.

MathWorks
LinkedIn: www.linkedin.com/company/the-mathworks_2
Facebook: www.facebook.com/MathWorks
Twitter: @MathWorks
www.mathworks.com

 

 


Guido Sandmann (MathWorks), Joachim Schlosser, PhD (MathWorks) and Brett Murphy (MathWorks)