Moore’s Law Is Dead: The Future Of Computing

We can no longer guarantee that computer chips will keep getting smaller and more powerful. In fact, we’re approaching the physical limits of how far transistors can shrink. What does this mean for the future of computing?

The world’s most powerful computing machine is perhaps also the most ubiquitous: the human brain. It’s able to perform computations on a scale that even the most advanced supercomputers cannot hope to match. However, the circuitry underpinning our brains is surprisingly sloppy and slow. Neurons act on millisecond timescales – slow compared with the fastest processors – and can fail to fire at all; one reason the brain has so many neurons is the redundancy needed to cope with that unreliability. What allows the brain to run on this clunky machinery is parallel computing, the ability to work on a problem simultaneously using many different parts of the brain. The race to imitate the brain’s knack for parallel computing holds not only the promise of new heights of computing power, but also salvation from the impending death of one of the most important laws in modern history.

It’s the law that says the number of transistors on a chip will double every two years; it’s the law that accurately described the explosion in computing power that enables the modern world to function; it’s Moore’s Law and it is coming to a grinding halt. There are estimated to be almost 1.4 billion smartphones on the planet, each “black mirror” giving us access to an unimaginable amount of useful information.

Although the most popular devices are designed in California or South Korea, the chips powering them are designed in Cambridge, England. The company behind them, ARM, may not be a household name like Intel, but their chips trump their U.S. rival’s on energy efficiency and size – qualities that are crucial for smartphones. Underpinning ARM chips is a simplified approach to computing first conceived at Stanford University and the University of California, Berkeley.

“This is one case of U.S. academic research being taken up by a U.K. company very successfully,” says Stephen Furber, professor of computer engineering at the University of Manchester in England. He designed the ARM chip in the 1980s while at Acorn, the now-defunct company from which ARM was spun out. The chip’s first big outing was in the Apple Newton in 1993 and the rest, as they say, is history.

“Then, as now, the Apple brand was magic and opened doors,” says Furber.

But like all companies in this space, ARM is grappling with the demise of a trend that has delivered more powerful computers every two years for almost half a century. The term Moore’s Law has come to represent inexorable technological progress, but at its heart it is a very specific observation about computer chips. 

In his seminal 1965 paper, Intel co-founder Gordon Moore observed that the number of transistors on a chip was doubling roughly every year – a rate he later revised to a doubling every two years – and predicted that the trend would continue, with the cost per transistor falling at a similar pace. While it’s unclear why transistor density has followed this particular exponential path, the effect is beyond doubt: “It has been a windfall,” says Doyne Farmer, a professor of complexity economics at the University of Oxford.
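
To get a feel for what that observation implies, here is a back-of-the-envelope sketch in Python – an illustration, not anything from Moore’s paper – that projects transistor counts under a simple “double every two years” rule, starting from the roughly 2,300 transistors of Intel’s 4004 processor from 1971:

```python
# Back-of-the-envelope projection of Moore's Law as a simple doubling rule.
# Illustrative starting point: ~2,300 transistors on Intel's 4004 (1971).

def projected_transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project a transistor count assuming one doubling every `doubling_years`."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```

Forty years of doublings multiply that starting count by roughly a million, which is why the trend has mattered so much: the gains compound.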

Now the laws of physics are threatening to destroy that windfall. Simply put, transistors have to be made from atoms. As you shrink transistors to pack more of them onto a chip, you eventually reach a point where you run out of atoms. And even before that, the smallest transistors become less reliable and more expensive to make, because of the complexity and difficulty of producing them.
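
A crude calculation shows how little headroom is left. The figures below are purely illustrative – a feature size of around 20 nanometres and a spacing of about 0.2 nanometres between silicon atoms – but the conclusion is hard to escape: barely a dozen density doublings could remain even in principle, and reliability gives out well before then.

```python
import math

# Purely illustrative figures: ~20 nm features and ~0.2 nm between silicon atoms.
feature_nm = 20.0
atom_spacing_nm = 0.2

# Doubling transistor density shrinks linear dimensions by a factor of sqrt(2),
# so each halving of the feature size corresponds to two density doublings.
halvings_left = math.log2(feature_nm / atom_spacing_nm)
doublings_left = 2 * halvings_left

print(f"Feature-size halvings until atomic spacing: ~{halvings_left:.1f}")
print(f"Density doublings left: ~{doublings_left:.0f} "
      f"(about {2 * doublings_left:.0f} years at two years per doubling)")
```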

“The problem we’re now facing is more fundamental [than anything before],” says Furber.

Although recent innovations like Intel’s 3D transistors have allowed us to pack transistors even more tightly, the focus is beginning to move away from this density-driven approach. Moore’s Law of transistor density may be dying, but if we can be “clever at other stuff,” it can live on in its broader sense of exponential growth in computing power, says Farmer. Parallelization – making chips that work more like our brains – is one way of being “clever,” he says. Modern chips already have “lots of slack that we can’t exploit” effectively, notes Furber. By imitating the brain’s ability to distribute tasks, we can get more out of what we already have. For that reason, Furber has set his sights on achieving the near-impossible: replicating the human brain.
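
As a small illustration of what “getting more out of what we already have” can mean in practice, the Python sketch below spreads independent chunks of work across however many processor cores a machine exposes. It is a toy version of the distribute-the-work idea – nothing like the scale of the brain – but the principle is the same: the speed-up comes from using existing silicon more fully, not from smaller transistors.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    """A stand-in for any independent chunk of computation."""
    return sum(i * i for i in range(n))

def run_parallel(chunks):
    # Spread the independent chunks across all available cores;
    # the gain comes from parallelism, not from faster transistors.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(busy_work, chunks))

if __name__ == "__main__":
    results = run_parallel([2_000_000] * 8)
    print(f"Finished {len(results)} chunks across {os.cpu_count()} cores")
```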

Called SpiNNaker, his project is a huge experiment in parallel computing that aims to use one million ARM processors to recreate the complex and networked machinery of the brain. And just as the brain’s neurons are not the sharpest tools in the box, the SpiNNaker project uses processors that are far from the cutting edge of chip technology.

“The progress in architecture has to some degree been a bit lazy because you’ve always been able to get progress through Moore’s Law. Now that Moore’s Law is delivering less, it’s time for the architects and software engineers to start delivering [more],” says Furber.

Running programs in a fluid, parallel manner across combinations of high-power and low-power chips, as the latest generation of ARM processors does, has the potential to sustain Moore’s Law beyond the fundamental limits of shrinking transistors. Other experimental approaches include abandoning the material that gave Silicon Valley its name.
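
What that fluid split between high-power and low-power silicon might look like is sketched below. This is only a toy dispatcher written for illustration – real heterogeneous scheduling, such as in ARM’s big.LITTLE designs, is handled dynamically by the operating system – but it captures the basic idea of matching each task to the cheapest hardware that can handle it.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    demanding: bool  # real schedulers measure load dynamically rather than using a flag

def dispatch(tasks):
    """Toy dispatcher: send demanding tasks to fast, power-hungry cores and
    background tasks to slower, more efficient ones."""
    return {t.name: "big core" if t.demanding else "little core" for t in tasks}

print(dispatch([Task("video playback", True), Task("email sync", False)]))
```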

A recent project from Stanford University demonstrated the first carbon nanotube computer, albeit a very primitive one. By outperforming silicon in terms of energy efficiency, carbon nanotubes could form the basis for “the next generation […] of electronic systems,” the researchers wrote in a letter published in Nature. Others are placing their faith in quantum computing.

“There is a renewed hope that within 20 years there’ll be a [quantum computing] device,” says Bob Coecke, professor of quantum foundations, logics and structures at the University of Oxford. The appeal of a quantum computer is that it transcends the current paradigm of zeros and ones, allowing a quantum bit, or qubit, to exist in a blend – a superposition – of states rather than as a definite zero or one, and so opening up the possibility of doing many computations simultaneously. Of course, the technical challenges are just as significant.
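
A rough sense of where that potential comes from (a sketch of the bookkeeping, not a quantum algorithm): a single qubit is described by two complex amplitudes rather than one bit, and writing down the state of n qubits classically takes 2 to the power n amplitudes, so the space a quantum machine can explore grows explosively as qubits are added.

```python
import math

# A single qubit in the equal superposition a|0> + b|1>, with a = b = 1/sqrt(2)
# and |a|^2 + |b|^2 = 1 (the probabilities of reading 0 or 1 sum to one).
a = b = 1 / math.sqrt(2)
print("One qubit, equal superposition:", (round(a, 4), round(b, 4)),
      "| total probability:", round(a**2 + b**2, 10))

# Describing n qubits classically takes 2**n complex amplitudes, which is
# where the hoped-for parallelism (and much of the difficulty) comes from.
for n in (1, 10, 50):
    print(f"{n} qubits -> {2**n:,} amplitudes to track classically")
```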

Quantum states change once you interact with them, so “it’s very hard if you start doing things with a quantum state, like computations, to keep it nice and well behaved,” says Coecke.

Since its inception in the 1960s, Moore’s Law has been a source of confidence and certainty. Without it, says Farmer, “we will no longer be spoiled by having lots of problems just get easier automatically because computers make them easier […] we’ve almost come to count on that continuing exponential improvement.”

For that reason, creative approaches to parallel computing, as well as more experimental research into alternative materials and quantum computing, are not only desirable, they’re necessary. Moore’s Law must live on, and in more ways than one, the human brain is the key.