The nature of embedded systems development is changing rapidly. Traditionally, these systems were embedded inside user equipment, often with little or no user interface to reveal their presence. At most, they might present a two-line display and a set of physical buttons. But as systems have become more sophisticated, OEMs have moved to deploying more graphics-intensive, screen-based user interfaces that can expose many more functions across a series of menus or pages.
The user interface is not just an integral part of the system, but often one of its main selling features. Appliances and automotive infotainment units are moving to large, touch-oriented displays that put much greater emphasis on graphics capability and on the responsiveness of the underlying software and operating system. It is not enough for a media-rich Graphical User Interface (GUI) to be simply functional and effective. End users also expect visual appeal, which may entail high-performance animations for active backgrounds and for transitions between visual modes and menus.
Increased software complexity
The trend for visual attractiveness is partially enabled by the growing availability and falling cost of high-performance 32-bit processors, often with multiple cores and built-in Graphics Processing Units (GPUs). These can be used to offload processing from the main Central Processing Unit (CPU) thread, which is then reserved for handling real-time events and data processing. As a trade-off, the additional processing elements add complexity to the development process as projects now become inherently multiprocessor-based.
Having multiple processors is not the only source of additional software complexity. With user interfaces based on a tree of text-based menus, it was relatively simple to build a set of user-interface dialogues. Presented on a one- or two-line text display, the dialogues were modal, so information could be presented and retrieved in a structured way. Today's graphics-intensive user interfaces require a much more dynamic approach, generally built around an event loop and using multithreading so that long-running work does not block the user interface.
Updates from a user can come at practically any time and from a variety of sources. For example, a touch-based system must not only accept virtual button presses and scroll movements but also recognize a variety of gestures that take on different meanings depending on which application is active at the time. Those gestures may be analysed by software running on the main processor or supplied by a dedicated touch-interface device. Whatever the source, gestures and other input need to be handled and delivered to the correct process immediately.
To set up a working system from scratch involves a number of elements, from the core graphics drivers that allow a processor to display pixels on an attached display, through graphics, video, and audio libraries, to interface design tools that allow user interface design experts with limited programming skills to create attractive, enticing interfaces.
The problems are compounded by the need for an effective workflow with a cross-compilation environment to deliver working binaries to the embedded target. Each round trip of develop, compile, link, flash, and test takes significant time.
Connectivity is a further consideration for many embedded systems being developed today. They require not just the ability to transfer data over the Internet, but to store and manipulate structured data that needs to be synchronised with servers in the cloud. Database query languages such as SQL and associated web technologies such as XQuery and JSON provide the necessary access to online data sources. But these are additional modules that embedded systems developers need to build into the target.
Implementing all of the above from low-level components is prohibitively expensive for any embedded project. Thus, the correct choice of a software stack containing higher-level frameworks and tools for user interface creation, device deployment, and connectivity becomes one of the most important decisions for a new embedded project. The requirements of the software stack are then used as input for choosing the hardware.
Software environments built around a Linux framework have become effective platforms for embedded systems. They include Android, originally developed by Google for mobile phones and tablets and now used increasingly in industrial systems, and embedded Linux distributions such as those built with the Yocto Project. These platforms readily support 3D graphics interfaces such as OpenGL, which is used by most mobile and desktop games, together with networking protocols such as HTTP and TCP/IP. Though effective and widely used, they still need to be assembled and made accessible to the developer.
Facilitating workflow with an effective IDE
Integrated Development Environments (IDEs) have evolved to accommodate increasingly complex platforms and to hide much of that complexity from application developers. Desktop and mobile environments have seen the introduction of a number of technologies that ease the development of highly animated user interfaces and that, with the right skills and experience, can be used in an embedded environment.
An important aspect of this form of IDE is support for both desktop and embedded environments. This allows much of the application logic and user interface to be developed natively on the desktop and then ported over to the embedded target for performance tuning and final testing. As software engineering increasingly adopts agile development processes, the need for a workflow that supports rapid prototyping is growing in embedded development as well.
Technologies such as virtual framebuffers, which emulate the target's display on the desktop machine, ensure that graphics and animation will work effectively on the target without incurring the round-trip time of flashing and testing on the actual hardware. Desktop-based user interface development also enables rapid prototyping to support acceptance testing by potential customers and users, helping ensure that the final product will be a market success.
An example of an IDE that supports media-intensive embedded systems development is Qt Creator, which forms part of the Qt Enterprise Embedded environment (Figure 1). Qt Creator provides easy switching between local execution on the developer's desktop and deployment to the target, shortening the edit-compile-debug cycle, and it provides built-in device debugging so that developers have access to the same debug features on the target as in the local environment.
On top of Qt Creator is a framework for building graphics-intensive embedded applications using technologies that range from web technologies such as HTML5 to native, high-performance languages such as C++.
Qt Enterprise Embedded includes the Qt Quick technology, which allows the creation of high-performing, fluid user interfaces. It uses native C++ libraries and OpenGL ES to offload rendering to the GPU and to a separate thread on the CPU. For the developer, Qt Quick offers a high-level declarative language, QML, which enables a fast development cycle and makes it easy to collaborate with user interface designers.
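The declarative style can be illustrated with a short QML sketch. This is an illustrative fragment only, with made-up dimensions and colours; it describes a pressable label whose opacity animates, while the actual rendering is left to Qt Quick's GPU-backed scene graph.

```qml
// Illustrative QML sketch: a pressable label with a declarative
// fade animation; no explicit drawing or threading code is needed.
import QtQuick 2.0

Rectangle {
    width: 320; height: 240
    color: "steelblue"

    Text {
        id: label
        anchors.centerIn: parent
        text: "Press me"
    }

    MouseArea {
        anchors.fill: parent
        onClicked: fade.start()
    }

    // Declarative animation: fades the label out and back in.
    SequentialAnimation {
        id: fade
        NumberAnimation { target: label; property: "opacity"; to: 0; duration: 200 }
        NumberAnimation { target: label; property: "opacity"; to: 1; duration: 200 }
    }
}
```

The designer describes what the interface looks like and how it behaves; where each frame is rendered, and on which thread, is decided by the framework.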
Further extensions to the Qt Enterprise Embedded environment include extensive SQL support. This allows connection to both local and network-based databases through standard web-oriented interfaces such as XQuery and JSON.
Evolving IDE support
As embedded systems have evolved to incorporate far more advanced user interfaces, development techniques have had to change. Advanced IDE support ensures not only that this evolution is manageable, but also that the applications being created are optimised for the target system.