Design Dreams: Hardware and software multicore co-design lessons learned

Hardware and software don’t always co-exist peacefully. All too often, when hardware and software engineers get together, it’s a bit like having too many cooks in the kitchen. They come from different backgrounds, often have different work styles, and unfortunately, the things that make a hardware engineer’s work easier often make life more difficult for the software side, and vice versa.

Though the idea of hardware and software co-design has been around for many years, it has rarely been put into practice. Yet as multicore devices have grown more and more complicated over the years, it has become a necessity.

As part of that complexity, multicore devices are smaller than ever before, containing more transistors and running faster with each generation, enabling them to support very advanced, integrated systems. The interface between software and hardware is critical in these highly integrated systems and requires the design team to take a systems-level approach, constructing software and hardware jointly. When done correctly, the resulting multicore devices are easier to use: software can be added more easily, the hardware is easier to optimize for new uses, and the device becomes more intuitive to program, because the complexity is dealt with as part of the design process.

When we began work on the Layerscape architecture, the ease of use that results from a co-design process was a primary goal, resulting in a heterogeneous multicore solution optimized for specific markets and use cases. Utilizing a co-design model, with equal support and direction from the hardware and software experts, the team developed a highly efficient solution that is very software friendly. We learned a few things along the way:

  • Take a systems-level approach. Traditionally, hardware is designed first, with the software following later. However, choices made during the hardware stage of development can make it much harder to implement easy-to-use software. By taking a systems-level approach, hardware and software engineers are able to work together to ensure an optimized, easy-to-use product.
  • Integrate. I don’t just mean the integration of hardware and software. Integrate the team. By putting hardware and software engineers in the same room together frequently at very early stages in the design process, both sides were better able to understand the impact their design choices have on the other side and adjust accordingly, ensuring maximum ease of use for the end customer at every level of the design process.
  • Focus on the end result. Because hardware and software engineers typically operate in silos, it was important to break down those silos by keeping both sides focused on the end result: an easy-to-use device for the customer. This encouraged the two groups to communicate often and share information throughout the project, ensuring the chief concerns of both hardware and software were addressed in the design process. There will always be tradeoffs: ease of use versus implementation complexity, power consumption concerns, or die size constraints, for example. In the ideal situation, the software team would prefer one core running at 1,000 GHz, a physical impossibility. The hardware team would prefer a separate core/processing element for each individual function, a usability nightmare. The correct answer is somewhere in between, usually a combination of homogeneous cores plus some hardware acceleration.

Hardware and software co-design isn’t easy. It requires the two groups to work closely together, in contrast to their traditionally segmented workflow. However, when done properly, the end result is an easier-to-use, integrated, and highly programmable multicore device that is also competitive in price, performance, and capability.

Rob Oshana
As Director of Software Enablement for Digital Networking, Rob Oshana provides software drivers, operating systems, virtualization, and software development kits for multicore processors in networking and wireless. He serves on Embedded Systems and Multicore Industry advisory boards. Rob has authored several books, speaks internationally on embedded and software engineering, and teaches as an adjunct professor at SMU and the University of Texas at Austin. His hobbies include cooking and biking.

1 Comment

  1. Piotr says:

    You mention hardware acceleration, but why not a coprocessor instead? A lesson learned from GPUs is that a freely programmable shader is much better than a shader offering only a limited set of commands.

    The advantage of hardware acceleration is energy efficiency, but nowadays this isn’t an issue anymore: go to parallella.org and see for yourself how a 16-core chip can consume low energy, and how an FPGA can offer the best of both, hardware acceleration and programmable helpers. It’s an open-source system, so hardware and software developers can work together. If you have suggestions for making the system more user-friendly, tell them, or write the required software, or just build your own system!
