Software Solves a Silicon Problem: How a SiMa.ai Chip Outperformed the Industry Leader

Written by
Ronda Scott
Published
April 12, 2023

Despite rapid advancements in artificial intelligence (AI), only a small fraction of today’s total addressable market is benefitting from these innovations, according to Krishna Rangasayee, founder and CEO of AI chipmaker SiMa.ai. Applications where running AI processes at the edge could spur significant innovation are being left out. “Automotive, medical, robotics, industrial, government, all these smart-vision systems, they’re vastly being ignored (by chipmakers),” he says. Yet the financial opportunity for successfully developing a chip that can efficiently run AI at the embedded edge for any one of those individual verticals is enormous and obvious. How is it possible there’s such a gap in solutions?

“Silicon Valley has been living up to its name too much with the belief that every problem can be solved only by improving the semiconductor,” Rangasayee says. “But if you really talk to customers and understand their pain points, they visualize their problem through the lens of software.”


There is a fundamental design challenge for AI/ML computing at the edge: the more compute you need, the more watts you draw, and the edge is often severely power constrained. To date, entrenched AI chip companies like NVIDIA have focused on developing power-hungry, performance-centric microprocessors. Those chips are optimized to do the heavy-lifting compute away from the edge, either centrally or in the cloud.

Moving compute elsewhere saves power but can introduce other challenges, like latency, that limit a chip’s practical applications. Those chips cannot service what Rangasayee believes is the vast diversity of jobs at the embedded edge: cameras and sensors built into drones, cars, robotic arms, and other networked devices. For these jobs, current chips are too costly in latency, power, and dollars. It is very hard for an architecture built for one purpose to perform well across all of those needs; there is no “one-size-fits-all.”

Voice of the (Future) Customer

Drawing on decades of experience – Rangasayee built and sold semiconductors and microprocessors at giants like Xilinx, while SiMa.ai’s SVP of engineering and operations Gopal Hegde scaled and integrated chips at Intel and Cisco – the team understood that a lot of compute capacity is left on the table with any processor. “Having done a lot of chips, we know eighty percent of the features never get used,” Hegde said. “This is because there’s no enabling software for them.”


And after spending nine months talking to more than forty potential customers about their specific needs, the team also learned that customers don’t want to write new applications or learn new programming languages to support ML, and that for SiMa.ai’s future chip to succeed, it would have to power a customer’s legacy applications.

With their experience and those learnings in hand, the direction was set: SiMa.ai would develop a software-centric Machine Learning System-on-Chip (MLSoC) to run multiple complex ML jobs in parallel while using industry-standard, low-cost components. They would embrace open source in ML, and they would build in the right architecture to run customers’ existing applications by incorporating ARM processors into their chip.

The benefits of this approach become apparent when you unpack a use case. Say an industrial complex needs drones to secure its perimeter. The footage is gathered in 4K so each frame is sharp enough for machine learning software to analyze objects. The drone might carry four cameras, each running at 68 frames per second, generating a huge amount of data that requires intensive, real-time pre- and post-processing: understanding depth and distance, resizing, enhancing, and combining frames with data from other inputs and sensors. Handling all of this with power-hungry AI chips comes at significant expense to the drone’s battery. But with SiMa.ai’s MLSoC approach, Hegde explained, a customer can enable that line of drones to process efficiently at the edge without sacrificing watts.
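A quick back-of-envelope calculation shows why that data volume is “huge.” The camera count and frame rate come from the example above; the 4K resolution (3840×2160) and three bytes per pixel are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope raw data rate for the drone example:
# four 4K cameras at 68 frames per second.
WIDTH, HEIGHT = 3840, 2160   # assumed 4K UHD resolution (not stated in the article)
BYTES_PER_PIXEL = 3          # assumed uncompressed 8-bit RGB (not stated in the article)
FPS = 68                     # frames per second, from the example
CAMERAS = 4                  # cameras per drone, from the example

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
bytes_per_second = bytes_per_frame * FPS * CAMERAS

print(f"{bytes_per_second / 1e9:.1f} GB/s of raw pixels")  # → 6.8 GB/s of raw pixels
```

Under these assumptions, the drone’s sensors produce several gigabytes of raw pixels every second, which is why streaming everything to the cloud for processing is impractical and on-board, power-efficient inference matters.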

A Winning Approach

Third-party data is beginning to bear out those efficiency claims. In its debut in MLCommons’ recent MLPerf benchmark results, SiMa.ai was the first startup to participate and achieve winning results on the industry’s most popular MLPerf image benchmark: ResNet-50 in the Closed Edge Power category. SiMa.ai’s MLSoC earned top inference marks in every aspect of the ResNet-50 benchmark, beating the industry leader on both performance (frames per second) and power. This is all the more impressive because it was accomplished with a push-button-easy flow, on process technology two generations behind, and with no optimization. The result speaks to the considerable potential of purpose-built architectures and the hard work of the 140 talented people at SiMa.ai.

When asked why other chipmakers aren’t making similar strategic moves to build a whole ML SoC platform, Rangasayee credits the resources he has assembled at SiMa.ai.

“Very few companies have the wherewithal, the talent, and the money to do something this complicated,” he says. “That’s why you don’t see too many companies building a system-on-a-chip.”

“There’s nothing proprietary either,” Rangasayee adds. “Anyone using open source in machine learning, in C/C++, Python, Linux, they can use us. We didn’t want to reinvent the wheel. We wanted to reduce our risk profile to things that truly mattered to customers: tasks that are actually innovative.”
