Scalable Machine Learning Solutions Take Center Stage for NXP at Arm TechCon

We all like to talk about the super high end of machine learning, with computer vision algorithms running on a turbocharged, 10 tera-operations-per-second accelerator, but the reality, especially in our embedded industry, is that most applications need a processing engine that's just powerful enough to get the job done and no more. That is our motivation for offering scalable machine learning devices, from MCUs (such as the Arm® Cortex®-M7-based i.MX RT1050) to applications processors (such as the i.MX 8QuadMax and Layerscape® LS1046) – and finally you're able to see this range of performance in action with no fewer than 12 machine learning demos in the NXP booth at Arm TechCon (details below).

For example, stop by the booth and see a wide range of solutions representing low-cost, low-power, secure, and high-performance face recognition. How about face recognition solutions starting at $2 USD? Our design starts with an NXP i.MX RT1020, a low-cost device sporting an Arm® Cortex®-M7 core. NXP developed its own face recognition algorithms, along with the ability to train for new faces directly on the RT1020 platform. The outcome is face detection and recognition in slightly more than 200 ms with accuracy up to 95% – starting at $2 USD. Higher-performance face recognition examples will also be on display using devices such as the i.MX 7ULP (high performance and ultra-low power), the i.MX 8M Nano (real-time face detection using Haar cascades, an efficient cascade of classifiers), the i.MX 8M Mini (secure identification with anti-spoofing), and the i.MX 8M Quad-based Google® Coral Dev Board with the Google Edge TPU (for super-fast facial recognition in a sea of people).
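
For the curious, the Haar-cascade approach mentioned for the i.MX 8M Nano demo is the classic Viola-Jones style detector. Below is a minimal desktop sketch using OpenCV's bundled frontal-face cascade; it is purely illustrative and not the code running in the demo, and the webcam index and detection parameters are assumptions.

```python
# Minimal Haar-cascade face detection sketch using OpenCV (desktop illustration,
# not the i.MX 8M Nano demo code). Assumes a webcam is available at index 0.
import cv2

# OpenCV ships pretrained Haar cascades; this one detects frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale runs the cascade of classifiers over multiple scales.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(60, 60))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```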

Moving on to image classification, the NXP booth will host an application using the i.MX RT1060 and the eIQ machine learning software development environment. This example performs classification with a TensorFlow Lite model trained to recognize different types of flowers (sunflower, tulip, rose, dandelion, and daisy). Specifically, we're running a MobileNet model and doing inference at a rate of 3 frames per second – on an MCU! This demonstration also shows the flexibility of eIQ, which supports a variety of inference engines (e.g., TensorFlow Lite, CMSIS-NN, Glow) and other types of machine learning models beyond image classification (e.g., audio classification or anomaly detection).
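
For readers who haven't worked with TensorFlow Lite, here is a minimal sketch of what such a flower-classification inference looks like, written as desktop Python with the TensorFlow Lite interpreter rather than the eIQ code that actually runs on the RT1060; the model file, label file, and image names are placeholders, not assets from the demo.

```python
# Rough sketch of MobileNet flower classification with the TensorFlow Lite
# interpreter. Desktop illustration only; "flowers_mobilenet.tflite",
# "labels.txt", and "daisy.jpg" are placeholder file names.
import numpy as np
from PIL import Image
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="flowers_mobilenet.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the input image to the model's expected shape (e.g. 224x224x3).
_, height, width, _ = input_details["shape"]
image = Image.open("daisy.jpg").convert("RGB").resize((width, height))
input_data = np.expand_dims(np.array(image, dtype=np.uint8), axis=0)
if input_details["dtype"] == np.float32:
    # Float models typically expect inputs normalized to [-1, 1].
    input_data = (input_data.astype(np.float32) - 127.5) / 127.5

interpreter.set_tensor(input_details["index"], input_data)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])[0]

labels = [line.strip() for line in open("labels.txt")]  # sunflower, tulip, ...
top = int(np.argmax(scores))
print(f"Predicted: {labels[top]} (score {scores[top]})")
```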

Other Cool NXP Things at Arm TechCon

  1. I’ll be giving a talk, “Open Source ML is Rapidly Advancing,” on Tuesday, October 8th at 9:00am.
  2. Donnie Garcia will talk about “Rightsizing Security for an MCU-based Voice Assistant” on Tuesday, October 8th at 1:30pm.
  3. NXP will host a kegerator in the exhibit hall at 5:00pm on October 9th and 10th.
Markus Levy
Markus Levy joined NXP in 2017 as the Director of AI and Machine Learning Technologies. In this position, he is focused primarily on the technical strategy, roadmap, and marketing of AI and machine learning capabilities for NXP's microcontroller and i.MX applications processor product lines. Previously, Markus was chairman of the board of EEMBC, which he founded in April 1997 and ran as president. He was also president of the Multicore Association, which he co-founded in 2005. Before that, he was a Senior Analyst at Microprocessor Report and an editor at EDN magazine. Markus began his career at Intel Corporation as both a senior applications engineer and a customer training specialist for Intel's microprocessor and flash memory products. He also volunteered for thirteen years as a first responder, fighting fires and saving lives.
