How model-driven testing can generate code-based testing results

February 01, 2010

Model-driven testing can link requirements to design, helping developers generate results in a common language that links everyone in the design process. This improves workflow and clearly communicates designs.

In the systems and software testing community, generating code-based results is considered the gold standard for software testing. But increasing software complexity and shrinking time-to-market windows have forced many organizations to rethink how they handle the testing process. With the introduction of Model-Based Testing (MBT), developers have gained a faster automated process that can help them obtain complete model and code coverage.

Even so, some developers think MBT has failed to live up to expectations because it doesn’t deliver code-based results. However, with the latest advances in MBT technology, that perception is no longer accurate. New MBT tools enable testers to achieve performance analysis, memory profiling, and code coverage with code-based results. 

A Model-Driven Testing (MDT) process enhances this workflow by letting developers take scenarios and execute them as tests on the actual application. But the key issue is that developers need code-based testing results when running these scenario-based tests.

Getting started

An MDT approach can help organizations meet tight time-to-market windows because it allows developers to test in the same language they are designing in, the Unified Modeling Language (UML). Besides the time savings, MDT provides another advantage in that it starts with scenarios as the requirements, aligning tests with the customer’s specifications.

Even though MDT offers many benefits, it has one weakness that critics emphasize: lack of provision for code-based test results, which are essential for debugging failures, leaks, and performance gaps.  

Before diving into questions about code-based testing, let’s explain the MDT process. UML-based test cases can be written in many different formats, including UML sequence diagrams, flowcharts, and even code (using assert-style statements). Simply put, the MDT process compels developers to read their requirements and design scenarios based on them in one of the aforementioned formats. Next, the model is built into an executable that fulfills these scenarios. Finally, the original scenarios are turned into tests and executed against the built application.

Using traditional code and flowcharts to capture test case behavior

Test case behavior can be described using code, a flowchart, or a sequence diagram, with the graphical forms offering higher productivity than traditional coding. Using code to describe a test case is essentially the same as the current process for describing test cases; the difference, as shown in Figure 1, is that the test case needs to focus only on the stimuli and expected results. The context in which the test case executes is generated automatically from the model.

 

Figure 1: Developers can use code to describe the pure behavior of a test case.
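
As a rough illustration of this style, the sketch below shows what such an assert-style test case might look like in C++. All names here (Dishwasher, tc_startCycle) are hypothetical, and the stub class stands in for context that a real MDT environment would generate from the model rather than require developers to write by hand.

#include <cassert>

// Minimal stand-in for the system under test. In a real MDT
// environment this context would be generated from the UML model;
// the class and method names here are purely illustrative.
class Dishwasher {
public:
    void start()                  { running = true; minutes = 45; }
    bool isRunning() const        { return running; }
    int  remainingMinutes() const { return minutes; }
private:
    bool running = false;
    int  minutes = 0;
};

// The test case body contains only stimuli and expected results
// (assert-style checks); setup, wiring, and teardown come from the model.
void tc_startCycle(Dishwasher& sut)
{
    sut.start();                         // stimulus
    assert(sut.isRunning());             // expected result
    assert(sut.remainingMinutes() > 0);  // expected result
}

int main()
{
    Dishwasher sut;   // in practice, instantiated by the generated harness
    tc_startCycle(sut);
    return 0;
}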


Capturing test case behavior in code and having it execute is the most immediate way to leverage MDT with minimal risk and practically no learning curve. Another advantage to this approach is that it allows for easy reuse of existing code-based test cases. But as the logic of the test case behavior is often nontrivial, developers tend to sketch test cases as informal flowcharts. Since mapping a flowchart to code is relatively straightforward, MDT environments allow developers to capture test case behavior as a flowchart, generate test code from this flowchart, link it to the test architecture, and run the test.

Describing test cases as flowcharts, as shown in Figure 2, has the same expressive power as coding, yet it’s much easier to capture and communicate to all of the project’s stakeholders. 

 

Figure 2: Test case behavior is easier to capture in flowcharts/activity diagrams than in code.
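
Because the flowchart-to-code mapping is so direct, the generated test code is easy to picture. The hedged sketch below (all names hypothetical, not taken from any particular tool) shows how the nodes of a flowchart-style test case might translate into C++ control flow.

#include <cassert>

// Minimal stub for the system under test; names are illustrative only.
class Connection {
public:
    void sendRequest()        { ++attempts; }
    bool gotReply() const     { return attempts >= 2; } // replies on 2nd try
    bool replyIsValid() const { return true; }
private:
    int attempts = 0;
};

// A flowchart-style test case translated to code. Each flowchart node
// maps directly onto a statement or branch:
//   action node   -> function call
//   decision node -> if/else
//   loop node     -> for/while
void tc_retryOnTimeout(Connection& sut)
{
    for (int attempt = 0; attempt < 3; ++attempt) {   // loop node
        sut.sendRequest();                            // action node
        if (sut.gotReply()) {                         // decision node
            assert(sut.replyIsValid());               // expected result
            return;                                   // success path exits test
        }
    }
    assert(false && "no reply after 3 attempts");     // failure path
}

int main()
{
    Connection sut;
    tc_retryOnTimeout(sut);
    return 0;
}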



Describing test case behavior with sequence diagrams

Sequence diagrams offer a unique view of the design that is rarely used within the context of code-based testing. These diagrams can describe operational scenarios between the overall system and the actors that interact with it. In other cases, they might include details about the sequencing and exchange of messages between internal design components.

During system-level analysis, designers identify many high-level requirements, and most of the behavioral requirements are described as sequence diagrams. This forms the basis for a process whereby the system analyst creates many variants of the basic requirements, as well as “rainy-day” (error-path) permutations. This process converts high-level requirements captured as sequence diagrams into concrete test cases.

Developers can look at a sequence diagram that describes a requirement and apply it interactively as a test case, injecting inputs into the system under test and checking the outputs to see if they match those defined in the sequence diagram. These tests come from two sources: recording the application’s execution or writing the sequences by hand, and each source has its own benefits. Recorded sequences don’t test requirements but are helpful for regression testing, while hand-written sequences are useful for testing requirements directly. No matter how the tests are created, code-based results are needed as well.
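
To make the replay idea concrete, here is a hedged sketch of how a driver might walk through a sequence diagram’s expected message exchange, injecting each input into the system under test and comparing each observed output against the diagram. All types and message names are hypothetical.

#include <cassert>
#include <string>
#include <vector>

// One step of a sequence diagram: a message sent to the system under
// test and the reply the diagram says should come back.
struct Step {
    std::string input;           // message injected into the SUT
    std::string expectedOutput;  // reply the sequence diagram specifies
};

// Minimal stand-in for the system under test.
class Sut {
public:
    std::string handle(const std::string& msg) {
        return msg == "connect" ? "ack" :
               msg == "send"    ? "delivered" : "error";
    }
};

// Replay the scenario: inject each input, compare each output.
bool replayScenario(Sut& sut, const std::vector<Step>& scenario) {
    for (const Step& step : scenario) {
        if (sut.handle(step.input) != step.expectedOutput)
            return false;  // observed behavior diverges from the diagram
    }
    return true;
}

int main() {
    Sut sut;
    // Scenario as it might be exported from a recorded or
    // hand-written sequence diagram.
    std::vector<Step> scenario = {
        {"connect", "ack"},
        {"send",    "delivered"},
    };
    assert(replayScenario(sut, scenario));
    return 0;
}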

Achieving code-based results

Developers today can obtain code-based results in a variety of ways, all of which traditionally require the tests to be rewritten in a separate code-based testing tool and then executed; only once this is complete does the team receive the results.

To some, this seems like a perfectly logical approach, but several issues come to light when using this testing method. First, developers must be certain that the original requirements match the software deliverables. To do this properly, developers must rewrite the same scenario-based tests or risk the possibility that the code results will not map to the requirements. Developers need the ability to write tests that match their requirements and realize complete scenario- and code-based results. Current testing tools make this possible, as MDT-based tools execute the actual code.

Given these advances, why has the software testing community been slow to adopt MDT? There are several reasons for this, beginning with the fact that early model-based testing did not offer code-based testing results. Furthermore, many developers need code coverage, memory profiling, and performance analysis metrics. When the tools lack these functions, it makes sense that some would see MDT as more of a burden than a solution.

Bridging these gaps is the challenge for successful MDT implementation. One option is to recreate code-based testing functionality inside the MDT tool, but several effective code-based testing tools are already available in the marketplace, and recreating them would only narrow the choice of code-based result styles available to developers. A better option is to execute the model-based tests and collect the code-based results at the same time. This is possible because when running model-based tests, developers execute the actual code, not a simulation. If a developer uses a code-based testing tool that instruments the code, the MDT tests can run under it, and the code-based results will appear at the finish.
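
As a toy illustration of the instrumentation idea (real tools such as gcov or commercial coverage analyzers insert probes automatically at build time), the sketch below shows how probes compiled into the code under test can record coverage while the model-driven tests run, with a report produced when the run finishes. The COVER macro and all function names are hypothetical.

#include <cstdio>
#include <set>
#include <string>

// Toy coverage recorder: real code-based testing tools insert probes
// like this automatically when they instrument the code.
static std::set<std::string> g_covered;
#define COVER(tag) g_covered.insert(tag)

// Instrumented function under test (names illustrative).
int clamp(int value, int lo, int hi) {
    if (value < lo) { COVER("clamp:below"); return lo; }
    if (value > hi) { COVER("clamp:above"); return hi; }
    COVER("clamp:inside");
    return value;
}

int main() {
    // The model-driven tests simply run against the instrumented
    // binary; no test has to be rewritten to collect coverage.
    clamp(-5, 0, 10);
    clamp(50, 0, 10);

    // Coverage report produced when the run finishes; the untested
    // branch shows up immediately.
    const char* probes[] = {"clamp:below", "clamp:above", "clamp:inside"};
    for (const char* p : probes)
        std::printf("%-13s %s\n", p, g_covered.count(p) ? "hit" : "MISSED");
    return 0;
}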

Distributed development, better code

Finding ways to help distributed teams work better together while driving high-quality deliverables is a top priority for many organizations. When the development process embraces an MDT approach, a team can achieve code-based test results. The key enabling technology is MDT tests that execute the actual application, which means running them under a code-based testing tool that tracks the executing application and delivers the test results directly. This combination makes MDT an optimal approach.

MDT has a proven track record for improving the testing process, but it hasn’t been widely adopted because historically it didn’t yield code-based testing results. That is now possible, providing the best of both worlds: tests that are easy to create and understand inside an MDT process, combined with the complete, thorough results developers can use. By leveraging the gains from an MDT approach, developers can have their cake and eat it, too.

Martin Bakal is market manager for systems PLE, testing, and multicore at IBM Rational. He has consulted on numerous embedded projects ranging from aerospace defense contracts to various automotive projects. Previously, he worked at Phar Lap Software, an RTOS vendor, as a technical support manager. Martin has a BS in Electrical Engineering and an MS in Engineering Management, both from Tufts University.

IBM Rational
[email protected]  
www.ibm.com/rational

 

 
