Facebook's AI chief researching new class of semiconductor

Facebook is looking at chips that can mimic the brain in processing vast amounts of data


FACEBOOK Inc's chief AI researcher has suggested the company is working on a new breed of semiconductor that would work very differently from most existing designs.

Yann LeCun said that future chips used for training deep-learning algorithms, which underpin most of the recent progress in artificial intelligence, would need to be able to manipulate data without having to break it up into multiple batches. To cope with the volume of data these machine-learning systems need to learn, most existing computer chips divide it into chunks and process each batch in sequence.
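The batch-by-batch processing described above can be sketched in a few lines. This is a minimal illustration, not Facebook's actual training pipeline; the function name, batch size, and stand-in dataset are all illustrative.

```python
# Illustrative sketch of mini-batch processing: a dataset is split into
# fixed-size chunks and each chunk is handled in sequence, which is the
# behaviour the article says future AI chips would move away from.
def iterate_minibatches(data, batch_size):
    """Yield consecutive slices of `batch_size` items from `data`."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

dataset = list(range(10))  # stand-in for training examples
batches = list(iterate_minibatches(dataset, batch_size=4))
# batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each batch here would be fed through the network before the next one is touched, which is what forces the sequential processing LeCun describes.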

"We don't want to leave any stone unturned, particularly if no one else is turning them over," he said in an interview ahead of the release on Monday of a research paper he authored on the history and future of computer hardware designed to handle artificial intelligence.

Intel Corp and Facebook have previously said they are working together on a new class of chip designed specifically for artificial intelligence applications. In January, Intel said it planned to have the new chip ready by the second half of this year.

Facebook is part of an increasingly heated race to create semiconductors better suited to the most promising forms of machine learning. Alphabet Inc's Google has created a chip called a Tensor Processing Unit that helps power AI applications in its cloud-computing datacentres. In 2016, Intel bought San Diego-based startup Nervana Systems, which was working on an AI-specific chip.

In April, Bloomberg reported that Facebook was hiring a hardware team to build its own chips for a variety of applications, including artificial intelligence as well as managing the complex workloads of the company's vast datacentres.

For the moment, the most commonly used chips for training neural networks - a kind of software loosely based on the way the human brain works - are graphics processing units from companies such as Nvidia Corp, originally designed to handle the compute-intensive workloads of rendering images for video games.

Mr LeCun said that for the moment, GPUs would remain important for deep-learning research, but the chips were ill-suited for running the AI algorithms once they were trained, whether that was in datacentres or in devices like mobile phones or home digital assistants.

Instead, Mr LeCun said that future AI chip designs would have to handle information more efficiently. In a system such as the human brain, only a fraction of neurons need to be active at any moment during learning. But current chips process information from all the neurons in the network at every step of a computation, even the ones that are not used, which makes the process less efficient.
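The inefficiency described above can be made concrete with a toy comparison of dense versus sparse processing. This is a hedged sketch, not any real chip's behaviour; the activation values and weights are invented for illustration.

```python
# Toy contrast between dense processing (every neuron touched at every
# step, as the article says current chips do) and sparse processing
# (only active, non-zero neurons touched). Values are illustrative.
activations = [0.0, 0.9, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0]
weights = [1.0] * len(activations)

# Dense path: one multiply-accumulate per neuron, active or not.
dense_ops = len(activations)
dense_sum = sum(a * w for a, w in zip(activations, weights))

# Sparse path: skip the zero activations entirely.
active = [(i, a) for i, a in enumerate(activations) if a != 0.0]
sparse_ops = len(active)
sparse_sum = sum(a * weights[i] for i, a in active)

assert abs(dense_sum - sparse_sum) < 1e-9  # identical result
# dense_ops == 8 but sparse_ops == 2: the sparse path reaches the same
# answer with a quarter of the work when most neurons are inactive.
```

Hardware that can exploit this kind of sparsity, rather than marching through every neuron, is the efficiency gain LeCun is pointing at.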

Several startups have tried to create chips to more efficiently handle sparse information. Former Nasa administrator Daniel Goldin founded a company called KnuEdge that was working on one such chip, but the company struggled to gain traction and in May announced it was laying off most of its workforce.

Mr LeCun, who is also a professor of computer science at New York University, is considered one of the pioneers of a class of machine-learning techniques known as deep learning. The method depends on the use of large neural networks. He is especially known for applying these deep-learning techniques to computer vision tasks, such as identifying letters and numbers or tagging people and objects in images. BLOOMBERG
