The biggest acquisition bid in the history of technology has been tabled. Broadcom, which was itself bought by Singapore’s Avago Technologies in 2016, has made a $130bn offer for rival chipmaker Qualcomm.
If it goes through (and that’s a big if), Broadcom would be paying more than 20 times what Candy Crush-maker King sold for, or more than 130 times what it cost Facebook to buy Instagram. It could even get the equivalent of five LinkedIns for the price. The proposed deal is so big it’s nearly double the previous record for a tech buyout, Dell’s $67bn purchase of EMC in 2015.
Broadcom’s purchase of Qualcomm would make the combined company dominant in the chipmaking industry. Qualcomm is likely to reject the offer, but even if it does, the stakes for chipmakers have been raised yet again.
But why is there so much money at stake? Blame artificial intelligence.
“From here on, things are not the same anymore,” said Michael Azoff, a principal analyst at Ovum. The speedy advancement of machine learning, where neural networks aim to mimic the human brain, has led to a greater demand for technical infrastructure to support it.
“Clearly, at the moment, Nvidia, with its high-end graphics processing unit (GPU) is the market leader,” said Azoff. Nvidia, based in Santa Clara, California, produces a series of GPUs that are commonplace in the machine learning industry. GPUs can not only handle vast data sets, they get through the work faster and with less physical infrastructure. In the world of AI, that’s gold dust.
Nvidia’s technology is rooted in gaming, but its near-dominant position in the sector – its GPUs have been used by Facebook, Microsoft, Google and Alibaba – has forced others to play catch-up.
UK chip design giant ARM’s latest processors are also focused on fuelling the AI boom. Highlighting the amount of money sloshing around the industry, the company was purchased for £24.3bn by Japan’s SoftBank in July 2016 and has since had a chunk sold to a Saudi-backed investment fund.
“In the past, chips were designed by a few manufacturers and mostly went into PCs,” said Neil Curson, from University College London’s nanotechnology research team. As AI-capable chips spread into more kinds of devices, he explains, companies will need to diversify to dominate individual markets.
“If Broadcom wants a bigger share of the market there’s no way they can specialize in so many areas, so what they’re trying to do is buy a company that broadens that base,” he said.
Elsewhere, the success of Nvidia has persuaded old rivals to come together. Advanced Micro Devices (AMD) and Intel have battled each other in court over anti-trust disputes and licensing agreements. Now the two firms are partnering to take on Nvidia, creating a chip that combines an Intel CPU with an AMD graphics processor.
“I think the fact they’re talking about doing a partnership shows how seriously Intel is looking at the challenge from Nvidia,” Azoff said.
But there’s a complication: there’s no consensus that GPUs are the best option for machine learning processing. Nvidia and companies working on GPUs for AI will argue it’s the optimal approach, but other companies are building completely new chips and processors.
One of the most prominent in this endeavour is Google’s Tensor Processing Unit (TPU). The chip, which is currently in its second generation, has been designed for AI and is used within Google Search, Translate, Street View and elsewhere. DeepMind’s all-conquering AlphaGo is also powered by TPUs.
UK-based Graphcore is building its own AI processor, dubbed an Intelligent Processing Unit (IPU). It raised $30m to help it commercialise the product. In 2016, Intel splashed $400m on deep-learning startup Nervana. By the end of this year, it hopes to be selling the Intel Nervana chip.
“All these other players want to get in on this,” Azoff said. “They want to perhaps leapfrog the GPU with a novel architecture and be the chip that’s going to be in your phone, washing machine, in your car and more. The opportunity is huge.”