The value of usability: Getting developers to 'push the button' on static analysis - Q&A with Gwyn Fisher, CTO, Klocwork

March 01, 2012

In an exclusive Q&A session with Embedded Computing Design, Gwyn Fisher of Klocwork comments on static code analysis and its growth as a staple of embedded software development.

ECD: As embedded developers turn to multicore processors to optimize performance, how can analysis tools help control inevitable cost and schedule problems?

FISHER: Any new development is an exercise in balancing expectation against risk. In the case of multicore, the naive expectation is always linear acceleration, tempered perhaps by a jocular “wouldn’t that be nice” acceptance that the final result won’t be quite that good, but with no real understanding that without significant effort (read: time, money, angst) the result might actually be slower than the old, interrupt-driven single-core code. So tools have a role to play in helping developers understand the impact of what they’re doing, what pitfalls they’re unwittingly leaving themselves open to, and how to mitigate the associated risks.

Dynamic analysis in this space has received the lion’s share of attention for obvious reasons. If I can see a nice set of graphs over time that shows the performance of my code in action, I can presumably home in on problem areas quickly and apply my own knowledge to figuring out what’s going on. The challenge with dynamic analysis is that it depends on a) defining a test set that exposes execution problems, and b) the reviewer’s intimate knowledge and understanding of what to do about those problems.

Static analysis, by contrast, assumes little knowledge on the part of the reviewer and requires no effort to be expended in defining test cases. Every conceivable code path through the application is exercised with as much rigor as every other path. This approach is therefore far more likely to expose complex and costly issues such as data races, deadlocks, and resource contention than any constructed test bench. That speaks directly to the bottom line of cost control in what is inevitably a large, over-budget project. Leaving issues such as these in a code base until late in the validation process costs exponentially more to address than catching them during initial development.
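To make that class of defect concrete, here is a minimal sketch (an illustration of ours, not an example from the interview): two threads that acquire the same pair of locks in opposite orders, and a third that updates shared state with no lock at all. No constructed test bench triggers these reliably, but a path-based analyzer can report both from the source alone.

```c
/* Illustrative only: latent concurrency defects of the kind a static
 * analyzer can flag without a single test case being written. */
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int shared_count = 0;

void *worker1(void *arg)
{
    pthread_mutex_lock(&lock_a);      /* acquires A, then B */
    pthread_mutex_lock(&lock_b);
    shared_count++;
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return arg;
}

void *worker2(void *arg)
{
    pthread_mutex_lock(&lock_b);      /* acquires B, then A: lock-order */
    pthread_mutex_lock(&lock_a);      /* inversion, potential deadlock  */
    shared_count++;
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return arg;
}

void *worker3(void *arg)
{
    shared_count++;                   /* unsynchronized write: data race */
    return arg;
}
```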

In addition, because static analysis works by modeling the expected program execution, each finding is accompanied by a detailed walk-through of how the situation is predicted to occur, allowing even a relatively junior resource to interpret the issue at hand, determine whether or not it’s likely to happen, and apply an appropriate design fix. In one example we like to describe in seminars on the subject, a design flaw in a popular open-source database kernel resulted in months of effort expended to identify a deadlock and eventually rewrite key modules to avoid the data race at its heart. This same problem was identified during the first analysis using our tools, which provided a walk-through description enabling developers to see easily that the data race was causing the problem and that the deadlock was merely the symptom.

Contrast a few hours to run the tool to analyze the code, plus an hour at most to interpret and act upon the result (what turned out to be a one-line fix), with months of community effort to determine an appropriate set of tests, followed by design work attempting to fix the deadlock, and finally a rewrite of the whole thing from scratch.

ECD: How is static analysis introduced in the software development cycle, and how can it be used with existing Integrated Development Environment (IDE) tools?

FISHER: There’s absolutely no doubt that any developer-facing tool that doesn’t integrate seamlessly with existing tooling is going to face significant friction in deployment. We’ve been selling this message effectively for years, with development managers almost asking the “why wouldn’t you do it this way?” question for us.

Whether developers have migrated to IDEs or their idea of an IDE is a bunch of gVim macros or Emacs Lisp modules, to them it’s where they live and work. And woe betide any vendor who tries to get them to change. Even if you’re not suggesting that they change tools and are instead asking them to visit somewhere else to review, after the fact, what they might have done wrong, your tool is going to suffer waves of disinterest and ultimately become shelfware.

Thus, static analysis has to be part of the developer’s native habitat, and more importantly, it has to work in a way that feels natural and follows the way other tools in that habitat work. For a gVim developer, issuing a ‘:’ command is second nature, so tools in that environment should follow suit. Put that same interaction mechanism in front of a Visual Studio user, and that will make for a fun-filled afternoon of derisive commentary.

At Klocwork we’ve gone through various iterations of technology design, getting closer to the developers themselves. It’s one thing to be resident as a tool within an IDE; it’s another to get the developer to “push the button” to use it. Making a button available is only changing geography, and it does nothing to help with the tool’s fundamental usability.

With this in mind, we’ve recently introduced a new technology that allows static analysis to take place in much the same way as spell checking in a Word document or e-mail. That is, as you’re writing your code and the tool detects a problem with what you’re doing, it can point out the problem in a perceptually instant manner, highlight it with a squiggly underline, and deliver all the value of static analysis with none of the “what do I have to do to use it?” resistance that is typically encountered during any new tool’s introduction (see Figure 1).

Figure 1: Following in the footsteps of word processors, Klocwork Insight highlights coding issues with a squiggly line the instant they’re introduced.

Getting a tool into an IDE is tough; getting it to be useful in the part of the IDE the developer truly lives in (the editor component, whatever that looks like in the environment at hand) is really tough. But until you’re there, you’re a distraction.

ECD: Can source code analysis be used to protect embedded devices against potential security threats?

FISHER: Absolutely. Source code analysis has a significant role to play in threat identification and in validating whatever threat model is being used to determine vulnerability in the device. Connectivity is a common requirement in the embedded world today. That connection might be to another chip, or another device, or the whole Internet. In any case, there’s somebody else either sending you information or receiving information you’re sending. That’s the starting point for having to worry about your entire application design.

Static analysis doesn’t typically know, or try to know, anything about your surrounding environment. Tools can sometimes be tuned to perform their analysis within certain data boundaries, for example, knowing that a particular input will only range between -20 and +30 because it’s a temperature sensor intended for use in Western Europe. But that kind of thing is discouraged because you’re imposing limits on what the modeling technology naturally does – that is, assume nothing and point out everything that looks wrong.
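As an illustration of that kind of tuning (a hypothetical sketch of ours, not a description of any particular tool’s feature): many analyzers treat an ordinary assertion as a constraint on the paths that follow it, which is precisely the user-imposed boundary being cautioned against here.

```c
#include <assert.h>

/* Hypothetical sketch: constraining an input's range so an analyzer
 * only models values the Western-European temperature sensor can
 * produce. The risk noted above: values outside the asserted range
 * are simply never explored. */
int scale_reading(int celsius)
{
    assert(celsius >= -20 && celsius <= 30);  /* user-imposed boundary */
    return (celsius + 20) * 2;                /* safe only inside range */
}
```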

In the case of threat detection, we’re most worried about how the data you’re interacting with from the outside world is used internally. Is it used to create a buffer into which you’ll read data (code or value injection), or is it used for memory allocation (Denial of Service or DoS), or perhaps interpreted as a reference into an internal data structure (hijacking or redirection)? This kind of data and path validation – that is, tracking the path that tainted data follows from the outside world to its point of use within the code – is natural for source code analysis, since it already builds this model to perform everything else it does.
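For illustration, here is a minimal sketch (ours; every identifier is hypothetical) showing tainted input flowing from a single read to each of the three sinks just described:

```c
/* Illustrative only: tainted (externally controlled) data reaching the
 * three sinks described above. A taint-tracking analyzer follows each
 * value from the fscanf() source to its point of use. */
#include <stdio.h>
#include <stdlib.h>

static void (*handlers[8])(void);     /* internal dispatch table */

void handle_packet(FILE *link)
{
    char header[16];
    unsigned int len, index;

    if (fscanf(link, "%u %u", &len, &index) != 2)  /* taint source */
        return;

    char *payload = malloc(len);      /* tainted size: a huge value can
                                         exhaust memory (DoS) */
    if (payload == NULL)
        return;

    fread(header, 1, len, link);      /* tainted length vs. 16-byte
                                         buffer: overflow (injection) */

    handlers[index]();                /* tainted index, unchecked:
                                         redirection / hijacking */
    free(payload);
}
```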

As a wonderful gentleman at a defense contractor once said to me, “Son, it’s a bomb; it’s supposed to blow up. We just want to be sure it doesn’t get hijacked on its way.”

ECD: What educational events or online classes does Klocwork offer to help embedded designers get started with its code analysis tools?

FISHER: Like any commercial organization attempting to encourage users to gain value from its tools, Klocwork provides a full suite of educational and professional services, running the gamut from introductory materials aimed at first-time users, to more advanced courses targeting secure coding and threat modeling, to full-on deployment services and mentoring.

We also recently introduced the Klocwork Developer Network (http://developer.klocwork.com), a website serving our users and acting as a repository for online courses, video tutorials, in-depth courseware, and the usual variety of community forums and ticketing.

Each customer has something unique they wish to gain from a tool such as source code analysis, so a large part of our educational focus is on internal champions – people who take knowledge of how the tools work and how other organizations have applied them, and leverage those lessons in deploying our tools for their own use. A large part of that is learning from the community, so I’ve been thrilled to see the fast uptake of the Klocwork Developer Network amongst users, both as a less formal mechanism for interacting with our staff and, most importantly, as a way to learn from other customers.

ECD in 2D: Klocwork Insight’s plug-in for Visual Studio continuously runs data flow analysis to accurately identify defects and security vulnerabilities. Watch a video: http://opsy.st/zdEhC1.

Gwyn Fisher is CTO of Klocwork.

Klocwork
[email protected]
www.klocwork.com/blog
www.facebook.com/klocwork
www.twitter.com/@klocwork
www.klocwork.com