Tiny AI Devices Invade The Maker Moment

The Xpider by Maker Collider featuring a camera equipped with a hardware accelerated neuron memory chip for rapid image recognition

The tiny white orb-shaped robot scuttled across the carpet, its flashing headlamp illuminating the floor before it. “It can recognize faces,” said the hardware engineer in charge of implementing the compact internals of the Xpider. “We are working on a software layer to make it easy to drag-and-drop commands using the onboard neural network engine.”

As I have toured the world attending hardware conferences and Maker Faires, the theme of providing small edge devices with more intelligence has grown stronger with each passing month. More and more inventive developers and teams seem to be working on solving the next big problem facing the Maker Movement: How to make AI-enabled hardware and perceptual computing gadgets usable to a wider audience of developers.

Several hundred AI-equipped Xpider units being configured for a technology demo in China

The Acceleration of AI-focused IoT Hardware

The industry has also moved aggressively to bring down the costs of key technologies associated with computer vision and robotics, such as compact 3D cameras, low-cost lidar and integrated robotics platforms like the new TurtleBot 3 series from OSRF and Robotis.

Only a few years ago, building a robot capable of mapping and navigating an indoor environment, manipulating objects and recognizing obstacles would have cost developers thousands of dollars. The TurtleBot 3 Burger form-factor costs around $550 and is capable of a wide variety of functionality once available only at significantly higher cost.

As a result, the intense competition to accelerate machine learning and perception tasks has now penetrated well into the realm of the underlying hardware, encouraging major players like Google, Facebook and Microsoft to delve into custom silicon designs of their own. These efforts are trickling out slowly to the Maker Movement…but there is still work to be done.

Big Silicon Players Jumping In Head First

While most of the hardware innovation currently occurring to support AI is happening in the realm of “Big Iron” aka servers processing intensive workloads, we are now seeing the effects of this competition increasingly trickling down to a wider audience. A great example of this is the NVIDIA Jetson series, which has achieved a notable degree of success (helped along by a mature set of tools known as CUDA).

Intel has undertaken its own efforts with the purchase of companies like Movidius and the recent release of the Fathom Neural Compute Stick for $79. Meanwhile, Qualcomm, a major Intel competitor, has been busy optimizing the Hexagon DSP accelerator (which comes inside the Snapdragon 835 SoC) for Deep Learning tasks.

Finally, Arm (my employer) recently announced a software toolkit containing numerous optimizations for Deep Learning and computer vision called Compute Library. Compute Library is especially interesting because approximately 17.7 billion Arm-based processors were sold in 2016, meaning the potential surface area for low-cost Deep Learning on embedded devices is now much broader.

While much progress is being made to bring down costs, the Maker Movement has a much more stringent set of usability and accessibility requirements before these technologies can see broad adoption. Simply put: These tools must become even easier and cheaper before they really take off with makers and tinkerers…and that is exactly what is going on in the market!

An Xpider with mount for additional peripherals and sensors


AI Hardware And Software Must Be Packaged As A Solution

The JeVois open source machine vision platform raised $100,000 on Kickstarter last year to help developers more rapidly use advanced perception algorithms and computer vision techniques.

One barrier slowing adoption of AI and perception technology by the Maker mass market is the requirement that AI hardware and software must come packaged as an integrated solution instead of a loose collection of components.

Instead of needing just an Arduino, developers now really need an entire robot, drone or integrated camera solution to even get started with AI tinkering. Such solutions require device-makers to think very carefully about how to package advanced AI and computer vision algorithms so that they are simple to use.

The Xpider is one example. It comes with an inexpensive (~$130) robotic chassis, a camera and some additional sensors, plus an IDE environment that allows developers to tap into its computer vision and learning capabilities quickly.

Computer vision made simple using EZ-Robot's EZ-Builder software

Another example of a company working to lower barriers to entry for advanced robotics is Calgary-based EZ-Robot, which has put significant effort into producing a modular robotics system with high-speed, low-latency cameras that can be trained to recognize and track objects in a very short period of time.

Still other examples of efforts to lower barriers to entry include the JeVois platform (pictured above), which was successfully crowdfunded last year by founder Laurent Itti. The JeVois is an integrated camera-plus-low-power-compute solution that comes with a wide variety of Linux kernel optimizations and specialized camera integration, allowing makers to plug the device directly into a Raspberry Pi or Arduino via USB. This lets makers begin using phenomenally advanced computer vision algorithms without needing a PhD.
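To give a feel for how little host-side code such an integrated camera requires, here is a minimal sketch of parsing detection messages that a vision module streams over its USB serial link. The "N2 id x y w h" layout is an assumption modeled on JeVois-style standardized 2D messages; check the documentation for the exact format your module emits.

```python
# Hypothetical parser for "N2 id x y w h" detection messages arriving
# over a USB serial connection from a smart camera such as the JeVois.
# The message layout here is an assumption; adapt it to your module.

def parse_n2(line):
    """Parse an 'N2 id x y w h' message into a dict, or return None."""
    parts = line.strip().split()
    if len(parts) != 6 or parts[0] != "N2":
        return None
    try:
        x, y, w, h = (int(p) for p in parts[2:])
    except ValueError:
        return None
    return {"id": parts[1], "x": x, "y": y, "w": w, "h": h}

if __name__ == "__main__":
    # On real hardware you would read lines from the serial port,
    # e.g. with pyserial: serial.Serial("/dev/ttyACM0", 115200).readline()
    sample = "N2 face 320 240 64 48"
    print(parse_n2(sample))  # → {'id': 'face', 'x': 320, 'y': 240, 'w': 64, 'h': 48}
```

The point of an integrated solution is exactly this: the heavy lifting happens on the device, and the maker's Raspberry Pi or Arduino only has to read and act on a simple text stream.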

The cost? Less than $49.99. That's not bad at all.

Still other options to engage in computer vision include Intel’s RealSense camera and competitive sensors produced by Orbbec3D (an Arm-based, low-cost alternative).

The Need For Open-Source Training Data

One final note, and probably one of the most important points, is that using hardware gadgets for AI-specific tasks requires high-quality, freely available, open-source training data. This is likely to be the #1 challenge inhibiting developers from truly making use of their AI hardware gadgets anytime soon.

Until the community begins producing and freely sharing sets of high-quality data to train these new AI and perception toys, makers will not be able to do much with them.
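To illustrate why open data matters so much: once a labeled dataset exists, training a usable model is only a few lines of code. This sketch uses scikit-learn's bundled 8x8 handwritten-digits dataset (no download required) as a stand-in for the kind of open dataset makers need; swapping in images captured from a device camera is the hard part that open data would solve.

```python
# A small demonstration of how far a freely available dataset goes:
# scikit-learn ships the classic 8x8 digits dataset, so a complete
# train/evaluate loop fits in a handful of lines. The dataset choice
# is illustrative, not tied to any device mentioned in the article.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)  # simple baseline classifier
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

The model and framework are almost incidental here; without the labeled data, none of the rest is possible, which is exactly the gap facing AI hardware gadgets today.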