Deploying Layerscape-Based Edge-Computing Nodes Just Became Easier, Thanks to NXP EdgeScale

As I’ve written before, edge computing excites us at NXP. We’re fortunate to work alongside fellow travelers like Alibaba, Amazon, and Microsoft to bring their edge-computing frameworks to our processors. All of us are keen to accelerate OEMs’ development of edge-computing solutions, whatever their target market. And, we want to help OEMs secure their solutions. Security differentiates NXP, which is why we invest in our trust architecture and enabling software. At the same time, usability remains important. Ease of use has historically trumped security, but security can no longer be a secondary concern in a world of omnipresent hacking attempts. NXP, consequently, has developed EdgeScale. EdgeScale helps manage edge nodes and associated applications—a simultaneous improvement in security and ease of use.

We envisage companies installing thousands of edge nodes per deployment. A single company serving consumers could even have millions of nodes. These nodes will not have displays or keyboards. They may even be physically inaccessible once in the field. Plugging them into a corporate LAN or home broadband connection and hoping they’ll run forever without needing a bug fix, feature upgrade, or security patch (“plug and pray”) is unrealistic. It’s also unrealistic to manually set up each node because there are so many to manage and they may not have user interfaces. The solution to these challenges is a system for zero-touch onboarding, remote management, over-the-air firmware updates, and decommissioning. Past blogs have discussed these issues. We’re consolidating our efforts in the EdgeScale suite of cloud-based tools.

EdgeScale – Flexible Architecture

Edge-node managers have three modes for interacting with their device fleet using EdgeScale. The first, available in the initial EdgeScale release, is a web-based dashboard: a point-and-click system for setting up devices, much like the dashboards that cloud companies provide their customers for setting up virtual machines. A manager can create a model edge node, enroll physical nodes identified by cryptographic signatures, upload firmware (including Docker containers), and deploy applications, including several from our library.
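
To give a flavor of what "identified by cryptographic signatures" means in practice, here is a minimal, purely illustrative Python sketch of signature-checked enrollment. It is not EdgeScale's actual protocol or API; the function, key format, and choice of RSA are assumptions made only for illustration. The idea is that a node signs its enrollment request with a private key rooted in the processor's trust architecture, and the management service verifies the signature against the public key registered for that device:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Conceptual sketch only, assuming an RSA device key; EdgeScale's real
    # enrollment flow is not described here.
    def verify_enrollment(public_key_pem: bytes, request_body: bytes, signature: bytes) -> bool:
        public_key = serialization.load_pem_public_key(public_key_pem)
        try:
            public_key.verify(signature, request_body, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False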

The second mode is a command-line interface (CLI), coming in later EdgeScale releases. Developers and IT pros have long used the command line to maximize productivity, whether they're dealing with a workstation, network switch, or other gear. A CLI typically offers greater capability than a dashboard, exposing more, finer-grained commands. As an IT pro told me decades ago, a point-and-click interface (like the EdgeScale dashboard) is good when you need to do a task once, but a CLI is better when the task must be done repeatedly. Such is likely to be the case if a company manages thousands or millions of nodes. For example, the CLI may enable a developer to build a firmware image and push it out to multiple devices with a few commands typed on a laptop.
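
To make the scripting argument concrete, here is a minimal Python sketch of how such a CLI might be driven in a loop. The edgescale command name, its subcommands, and its flags are invented for illustration, since the real CLI hasn't shipped yet:

    import subprocess

    # Hypothetical device list; in practice it would come from enrollment records.
    devices = ["node-0001", "node-0002", "node-0003"]
    image = "firmware-v2.1.img"

    for device in devices:
        # Placeholder command syntax, not the actual EdgeScale CLI.
        subprocess.run(
            ["edgescale", "firmware", "push", "--device", device, "--image", image],
            check=True,
        )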

Further out on our roadmap, the third mode is a RESTful API. An API enables even greater automation than a CLI, letting developers write their own applications for managing their edge deployments and helping edge-node managers scale to very large fleets.
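
As a rough illustration of what API-driven automation could look like (the endpoint, fields, and authentication below are assumptions; NXP has not published the API), a developer might write something like:

    import requests

    # Hypothetical base URL and token; the real EdgeScale API may differ.
    BASE_URL = "https://edgescale.example.com/api/v1"
    HEADERS = {"Authorization": "Bearer <access-token>"}

    # List enrolled devices and flag any that are not on the expected firmware.
    devices = requests.get(f"{BASE_URL}/devices", headers=HEADERS, timeout=10).json()
    for device in devices:
        if device.get("firmware_version") != "2.1":
            print(f"{device['id']} needs a firmware update")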

If you’re wondering how this fits with the edge-computing frameworks provided by our fellow travelers, the answer is that it’s a tight fit. These companies offer software frameworks (e.g., Alibaba Aliyun, AWS Greengrass, Azure IoT Edge) for edge developers to use in their applications, along with application-management services. That is, they approach edge-node management from the direction of enabling software applications. We at NXP, being principally a supplier of hardware, approach edge-node management from the direction of provisioning and managing hardware. We collaborate with the framework suppliers to complete the set of enabling technologies that help edge-node OEMs and developers secure their solutions and get them to market quickly, integrating our efforts with theirs. For example, if one of them has a manual system for installing keys on devices and registering them, we can automate those processes with EdgeScale. If there is any overlap in capability, we’ll adapt EdgeScale so that OEMs and developers get a seamless solution.

In summary, NXP loves edge computing. It’s a great market opportunity that plays to our strengths in embedded processing and security. With our Layerscape and i.MX families, we have a vast portfolio of 64-bit ARM processors for edge computing. Because security starts with a hardware root of trust, these processors integrate the trust architecture I’ve blogged about in the past. Alternatively, misguided souls considering competing processors can add NXP’s A71CH secure element to their designs to get some of the benefits of our integrated trust architecture. Yes, we’ll adapt EdgeScale to support these designs. NXP also recognizes that fulfilling our vision of many industries deploying millions of edge nodes requires overcoming barriers: these nodes and the software they host must be easy to develop, easy to manage, and hard to hack. EdgeScale goes a long way toward getting edge-node developers and managers past those barriers.

Joseph Byrne
Joe Byrne is a senior strategic marketing manager for NXP's Digital Networking Group. Prior to joining NXP, Byrne was a senior analyst at The Linley Group, where he focused on communications and semiconductors, providing strategic guidance on product decisions to senior semiconductor executives. Prior to working at The Linley Group, he was a principal analyst at Gartner, leading the firm's coverage of wired communications semiconductors. There, he advised semiconductor suppliers on strategy, marketing and investing. Byrne started his career at SMOS Systems after graduating with a bachelor of science in engineering from Duke University. He spent three years at SMOS as part of the R&D engineering team working on 32-bit RISC microcontrollers. He then returned to school for an MBA, which he received with high distinction from the University of Michigan. He worked with Deloitte & Touche Consulting Group for a year before going on to work at Gartner, where he spent the next nine years until going to work for The Linley Group in 2005.

