Sony, Microsoft strike deal on tiny AI chip with huge potential
[TOKYO] Sony and Microsoft have partnered to embed artificial intelligence (AI) capabilities into the Japanese company's latest imaging chip, a big boost for a camera product the electronics giant describes as a world-first for commercial customers.
The new module's big advantage is that it has its own processor and memory built in, which allows it to analyse video using AI tech like Microsoft's Azure, but in a self-contained system that's faster, simpler and more secure to operate than existing methods.
The two companies are appealing to retail and logistics businesses with potential uses like optimising warehouse and factory automation, quantifying the flow of customers through stores and making cars smarter about their drivers and environment.
At a time of increasing public surveillance to help rein in the spread of the novel coronavirus, this new smart camera also has the potential to offer more privacy-conscious monitoring. And should its technology be adapted for personal devices, it even holds promise for advancing mobile photography.
Instead of generating actual images, Sony's AI chip can analyse the video it sees and provide just metadata about what's in front of it - saying instead of showing what's in its frame of vision. Because no data is sent to remote servers, opportunities for hackers to intercept sensitive images or video are dramatically reduced, which should help allay privacy fears.
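The contrast between the two output paths can be sketched in a few lines. This is an illustrative, entirely hypothetical sketch (not Sony's actual API): a conventional sensor streams the full frame off-device, while an AI sensor analyses the frame on-chip and emits only metadata; `run_onchip_model` is a stand-in for the sensor's embedded neural network.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Metadata describing one object the on-chip model found."""
    label: str
    confidence: float

def run_onchip_model(frame_pixels):
    # Stand-in for the embedded neural network: pretend it found a person.
    return [("person", 0.93)]

def conventional_sensor_output(frame_pixels):
    """A traditional sensor ships the raw frame off-device for analysis."""
    return frame_pixels  # the full image leaves the chip

def ai_sensor_output(frame_pixels):
    """An AI sensor analyses the frame on-chip and emits only metadata,
    so no image data ever leaves the sensor's physical boundary."""
    detections = run_onchip_model(frame_pixels)
    return [Detection(label, conf) for label, conf in detections]
```

The point of the design is visible in the return values: the conventional path returns pixels that must be protected in transit, while the AI path returns only a short description of the scene.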
Apple has already proven the efficacy of combining AI and imaging to create more secure systems with its Face ID biometric authentication, powered by the iPhone's custom-designed Neural Engine processor. Huawei Technologies and Alphabet's Google also have dedicated AI silicon in their smartphones to assist with image processing. These on-device chips represent what's known as edge computing: handling complex AI and machine-learning tasks at the so-called edge of the network instead of sending data back and forth to servers.
"We are aware many companies are developing AI chips and it's not like we try to make our AI chip better than others," said Hideki Somemiya, senior general manager of Sony's System Solutions group. "Our focus is on how we can distribute AI computing across the system, taking cost and efficiency into consideration. Edge computing is a trend, and in that respect, ours is the edge of the edge."
Sony's advance is to eliminate the need for transfers within the device itself. Whereas Apple and Google still use conventional image sensors that convert light particles into computer-readable image formats for their chips to read, Sony's new part is capable of doing the analytical work without any data leaving its physical boundaries.
The AI-capable sensor may also help advance augmented reality (AR) applications. The two US giants, whose iOS and Android operating systems control practically the entire smartphone market, are heavily invested in AR development. Google Maps now offers the option to show 3-D directions atop a video feed of a user's surroundings while Apple is planning new 3-D cameras on its next set of iPhones in the fall. The agenda-setters of the mobile industry are looking for ever smarter mobile cameras, spurring the demand for more sophisticated imaging gear.
Sony already enjoys a substantial lead as the world's foremost provider of image sensors, counting Apple, Samsung Electronics and every major Chinese smartphone maker among its customers along with pro camera stalwarts like Hasselblad, Fujifilm Holdings and Nikon.
Its next set of customers may be automakers.
The AI-powered Sony sensor is capable of recording high-resolution video and simultaneously conducting its AI analysis at up to 30 frames each second. That rapid, real-time responsiveness makes it potentially suitable for in-car use such as detecting when a driver is falling asleep, Mr Somemiya said. Without the need for a "cloud brain" as some existing systems have, Sony's AI sensor could hasten the adoption of smart-car technology.
"This on-chip approach enables a system design to be more flexible and even optimised, given that the cost of image processing, which is one of the most compute-intensive tasks for autonomous driving, can be offloaded from an electronic control unit," said Shinpei Kato, founder and chief technology officer of Tokyo-based Tier IV, which develops self-driving software.