Let’s talk about incorporating RISC-V into an industrial product-line development flow. The key message here is that software engineers can innovate earlier and more often, and drive more specific core requirements for the hardware design team. If you’ve ever done a design based on Linux, you should be familiar with this process, as there are many similarities. At the same time, hardware engineers, leveraging open-source RISC-V implementations, can get a head start on innovation and collaboration, and participate in an open community.
The traditional HW/SW co-design model, as shown in the figure, has both hardware and software design moving forward in parallel after a partitioning phase. While that’s the “perfect world” scenario, in reality hardware generally drives much of the design’s definition, and software is left to support what the hardware defines. While this scenario has improved over the years, and software now contributes more to the overall definition than in the past, it’s still a hardware-dominated model.
Like all models, eventually you have to go from concept to actual implementation, and it must occur in a cost-effective manner. Hence, most organizations implement product-development models to manage this process, such as the model shown from NXP, where the hardware and software development phases operate mostly in parallel with several defined formal milestones.
The fly in the ointment comes from the idea that hardware and software developers are typically cut from a different cloth. Perfect solutions look very different for each discipline, and performance considerations often force decisions that aren’t friendly to software people. One potential solution is for hardware developers, when making those decisions, to consider how they can make life easier for the software team, i.e., help them exploit known paradigms.
At NXP, the software teams drive the programming models for next-generation IoT systems. A programming model is an abstraction of the underlying computing system that defines how algorithms and data structures are expressed. This helps bridge the gap between the underlying hardware architecture and the supporting layers of software for application developers. It would include the operating system as well as capabilities that can be improved with hardware implementation, like low-level interrupts, memory management, and clock support. For example, for embedded processors with connectivity, software teams provide input on PHY- and MAC-level stacks that are architected closely with hardware design teams for efficient SoC design.
It’s clear when you look at the resources thrown at current designs (resources being people) that software is starting to dominate the cost of embedded design. As such, the software team needs to have a bigger voice in the decision-making.
A more realistic design flow is shown in Figure 3, where the hardware design starts before the software, providing the software team with some definition of what it needs to support. Unfortunately, this results in a longer software design time.
As shown in Figure 4, a “shift left” approach to software development can be employed by starting earlier in the process using fast emulation technologies like Zebu and simulation. Hence, the process picture in Figure 3 has evolved, now showing software starting earlier and ending earlier.
The RISC-V programming model comprises the languages and libraries that create an abstract view of the machine. The questions you need to be asking with respect to Control are:
- How is parallelism created?
- How are dependencies (orderings) enforced?
For Data, it’s:
- Can data be shared or is it all private?
- How is shared data accessed or private data communicated?
And for Synchronization:
- What operations can be used to coordinate parallelism?
- What are the atomic (indivisible) operations?
Using tools like the Chisel hardware construction language and an open ISA like RISC-V, the software team can begin exploration earlier in the process, taking key software algorithms and applications and designing specialized RISC-V cores that execute them efficiently. This particularly pertains to system definition and software system modelling.
When tackling system definition and software system modelling, software engineers start with a model, akin to a C/C++ program or MATLAB model, together with a set of performance requirements. Chisel is then used for design exploration.
Once the team has identified the instructions needed to extend the RISC-V architecture, they’re handed off as a definition of new instructions, from which the programming model eventually emerges. This definition is what gets sent back to the hardware team for implementation and optimization.
An important question is whether RISC-V can be “open enough” to operate in a manner similar to the Linux model. Looking at the early years of Linux and how its popularity took off, it would be good for the industry if RISC-V follows a similar trajectory. In the embedded space, the number of Linux projects is growing by roughly 50%, and about 80% of those use a free variation of the OS.
For RISC-V to achieve success, it must adhere to these four “freedoms”:
- The freedom to run the software, for any purpose.
- The freedom to study how the software works, and change it to make it do what you want. Access to the source code is a precondition.
- The freedom to redistribute copies so you can help your neighbor.
- The freedom to distribute copies of your modified versions to others.
The key message is that embedded software engineers will play a larger role in defining the SoC architecture, particularly in defining the programming model and in system optimization. Open-source RISC-V implementations will allow more software-driven hardware. And the ecosystem is vital to the success of RISC-V.
Robert Oshana is the Vice President of Software Engineering Research and Development, Microcontroller Group, at NXP Semiconductors.