
EDGE FEATURE NEWS

AIStorm Goes After Mobile Edge Computing with $13 Million Financing Round

By Cynthia S. Artin February 15, 2019

At the hard edge of the IoT, where devices continue to proliferate and the economics of secure connectivity continue to challenge developers, AIStorm announced earlier this week that it has secured over $13M in a Series A round to continue its pursuit of bringing real-time AI-in-Sensor technology to the edge at a “fraction of the cost.”

Born out of the semiconductor industry and backed by leading sensor and equipment manufacturers, the company “aims to equip the next generation of handsets, IoT devices, wearables, and vehicles with a new approach to AI processing at the edge; compared with edge GPU solutions, AIStorm’s technology boosts performance while lowering power requirements and system cost.”

Investors include:

  • Egis Technology Inc., a biometrics supplier to handsets, gaming, and advanced driver-assistance systems (ADAS)
  • TowerJazz, a foundry company that specializes in image sensors for commercial, industrial, AR, and medical markets
  • Meyer Corporation, a food preparation equipment supplier
  • Linear Dimensions Semiconductor Inc., a leader in biometric authentication and digital health products

“This investment will help us accelerate our engineering & go-to-market efforts to bring a new type of machine learning to the edge. AIStorm’s revolutionary approach allows implementation of edge solutions in lower-cost analog technologies. The result is a cost savings of five to ten times compared to GPUs — without any compromise in performance,” said David Schie, CEO of AIStorm.

Today’s traditional AI systems require that data be presented in digital form, which in turn demands advanced, expensive GPUs that are poorly suited to mobile devices. AIStorm aims to solve these problems by processing sensor data directly in its native analog form, in real time, which reduces both power requirements and latency.

“The reaction time saved by AIStorm’s approach can mean the difference between an advanced driver-assistance system detecting an object and safely stopping versus a lethal collision,” said Russell Ellwanger, CEO of TowerJazz.

“Edge applications must process huge amounts of data generated by sensors. Digitizing that data takes time, which means that these applications don’t have time to intelligently select data from the sensor data stream, and instead have to collect volumes of data and process it later. For the first time, AIStorm’s approach allows us to intelligently prune data from the sensor stream in real time and keep up with the massive sensor input tasks,” said Todd Lin, COO of Egis Technology Inc.
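Lin’s point about pruning the sensor stream can be illustrated with a toy example (this is our illustration, not AIStorm’s actual algorithm; the names and threshold are hypothetical): instead of buffering every sample for later batch processing, keep only readings that change meaningfully from the last one kept.

```python
def prune_stream(samples, threshold=0.5):
    """Yield only samples that differ from the last kept sample by more
    than `threshold` -- a crude stand-in for in-sensor data selection."""
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            last = s
            yield s

# A mostly flat signal with one brief event: most samples are dropped,
# and only the baseline, the event, and the return to baseline survive.
readings = [0.0, 0.1, 0.05, 3.2, 3.1, 0.1, 0.0]
kept = list(prune_stream(readings))  # [0.0, 3.2, 0.1]
```

The point of the sketch is the ratio: seven raw samples reduce to three kept ones, and a real sensor stream with long quiet stretches would shrink far more.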

“It makes sense to combine the AI processing with the imager and skip the costly digitization process. For our customers, this will open up new possibilities in smart, event-driven operation and high-speed processing at the edge,” said Dr. Avi Strum, SVP/GM of the sensors business unit of TowerJazz.

We posed specific questions to the company, given our readers’ interest in the substance behind the solution, and received these answers back from Schie, CEO/co-founder of AIStorm.

How much faster is your solution vs. GPU and how do you measure and report on that?

Results are measured in TOPs and TOPs/W. In >65nm topologies we are achieving 2.5 TOPs and 11.1 TOPs/W, compared with competing solutions claiming 2.73 TOPs and 9.3 TOPs/W in 28nm in a similar silicon area. Right now we are using the process-cost difference to reduce costs by 5x to 10x. If we were to create a solution in 28nm or 7nm we could significantly outperform ANY GPU solution, and we know this because at the MAC (multiply-accumulate circuit) level and memory level we can operate at speeds that digital systems are not capable of. The physics of it have been evaluated by leading technologists, so we feel confident that eventually we will put GPUs out of business, but we have not built a system at that level, so we do not have absolute figures yet.
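As a rough sanity check on the quoted figures (our arithmetic, not the company’s), dividing throughput (TOPs) by efficiency (TOPs/W) gives the implied power draw of each solution:

```python
# Implied power = throughput / efficiency, using the numbers quoted above.
aistorm_power = 2.5 / 11.1     # 2.5 TOPs at 11.1 TOPs/W -> ~0.23 W
competitor_power = 2.73 / 9.3  # 2.73 TOPs at 9.3 TOPs/W -> ~0.29 W
```

On these figures the throughput is comparable, while the claimed power advantage works out to roughly 25%, with the larger claimed savings coming from the older, cheaper process node rather than raw speed.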

Which IoT solutions have you beta tested in the IoT/IIoT world and what did you learn?

We were involved in through-screen fingerprint detection, some high-profile glasses developments, heart-rate identification/fatigue detection through a car seat enabled with capacitive sensors, and other wearable form factors. We learned that ARM-based solutions cannot deliver the desired features and that the battery gets too hot. We are also embarking on LIDAR/ADAS applications, where systems struggle to digitize the incredibly short pulses they see. The pulses are so fast that they capture only two or three points of each pulse and can easily mis-time or miss them entirely, especially in real-world situations.

What are your favorite implementations in the real world?

Imaging applications where the AI in our sensor allows: i) faster response; ii) lower power; iii) event-driven operation rather than polling — which saves power and ensures we don't miss anything.
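The polling-versus-event-driven distinction can be sketched in a few lines (hypothetical names; a simplification, not AIStorm’s implementation): a polled host wakes once per sample regardless of activity, while an event-driven host wakes only when a reading crosses a threshold.

```python
def poll(readings):
    """Polling: the host processes every reading -- one wakeup per sample."""
    return len(readings)

def event_driven(readings, threshold=1.0):
    """Event-driven: the host wakes only for readings above threshold."""
    return sum(1 for r in readings if r > threshold)

# 100 samples containing one brief two-sample event.
readings = [0.0] * 97 + [2.0, 2.5, 0.0]
wakeups_polled = poll(readings)          # 100 wakeups
wakeups_events = event_driven(readings)  # 2 wakeups
```

In this toy trace the event-driven host wakes 2 times instead of 100, which is the power-saving mechanism the answer above describes, while still catching every sample that matters.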

How do you go to market - your full-stack offering (from "silicon" to "systems") - are you strictly building the sensors and providing them to the OEMs or SIs to embed into solutions?

At present we are engaged with major suppliers like Egis and Tower for high-volume applications. For smaller customers, partners like Linear Dimensions will help them implement solutions. We provide the complete solution, including the sensor, AFE, AI, and starter software. We are building bridges to standard AI tools so that in the future (2020) customers can use our products without significant support from our side.

How will this impact the evolution towards real-time (or at least near-real-time) IoT/IIoT - are the economics now available to do this at the edge?

We can add tremendous value in true closed-loop real-time processing, even for systems as complex as LIDAR, and certainly for things like gesture control, heart rate monitoring, facial recognition, etc. We can cherry-pick data because we are AI in the sensor and therefore process only what's important. It's an important difference when the AI and sensor are intertwined: we are not talking about AI plus a sensor, we are talking about technology where the two blend together, including the AFE. Without our charge-domain method of processing, it would be too expensive and slow to ever bring edge economics and processing capability in line.

How will this new sensor approach intersect with 5G and MEC as both advance?

I would say that we enable the most efficient way to do MEC so that we can minimize the use of 5G or other wireless standards, especially in imaging applications, which tend to generate a lot of data.

How does this benefit the CSPs who have been slow to offer IoT solutions - what problem are you solving for them?

We are increasing the volume of IoT applications by bringing AI into imagers and waveform analysis at costs they can afford. An AI-enabled doorbell with facial or fingerprint sensing is expensive; a doorbell without it is cheap. There is nothing in the middle. We hope to bring AI-enabled imagers to market in the single-digit-dollar range. At MWC we are tackling:

  • the through-screen fingerprint sensor problem;
  • gesture control for portable devices (present solutions take up to 8W to process data from a 52mW sensor);
  • drones (high-speed data, heavy batteries);
  • wearables (identification, heart-rate monitoring, and, in future, alcohol and diet monitoring);
  • automotive, including LIDAR and imaging solutions, where skipping digitization increases the chance of not missing critical data.

We also allow pruning of massive amounts of data so that systems are not so bogged down that they forget to brake in time.

In addition to Schie, a former senior executive at Maxim, Micrel and Semtech, the company’s leadership team includes CFO Robert Barker, formerly with Micrel and WSI; Andreas Sibrai, formerly with Maxim and Toshiba; and Cesar Matias, founder of ARM’s Budapest design center.




Edited by Ken Briodagh