Beyond Moore's Law

By Tom Conte, IEEE Rebooting Computing Initiative Co-Chair and Professor in the Schools of Electrical & Computer Engineering and Computer Science at the Georgia Institute of Technology

Most people believe Moore's Law says computer performance doubles about every 18 months. Not so. What Intel co-founder Gordon Moore actually observed is that transistors get cheaper with each manufacturing generation.

So since 1965, when Moore first described this trend, manufacturers of PCs, cell phones and other devices have been able to buy chips with twice as many transistors for the same price they paid about a year and a half earlier. This trend democratized computing, giving consumers access to devices and services that otherwise wouldn't have been affordable for the mass market.

But this trend began losing steam in the mid-1990s, when the delay of sending signals over long on-chip wires began to dominate how fast circuits could run. As a result, microprocessor designers turned to superscalar techniques, in which the hardware runs multiple independent instructions in parallel on multiple execution circuits. This sped up programs without requiring any software changes.
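
As a minimal sketch (my illustration, not from the article), consider this C++ fragment. A superscalar core discovers on its own that the first two additions are independent and can execute them at the same time, which is why no software changes are needed:

    #include <cstdio>

    int main() {
        int a = 1, b = 2, c = 3, d = 4;

        // These two additions are independent, so a superscalar core can issue
        // them in the same clock cycle on two separate execution units.
        int x = a + b;
        int y = c + d;

        // This addition needs both results, so the hardware must wait for them;
        // such dependences limit how much parallelism the core can find.
        int z = x + y;

        std::printf("%d %d %d\n", x, y, z);
        return 0;
    }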

This approach worked well until 2005, when that generation's chips hit a power density of 200 watts per square centimeter. (Some perspective: a nuclear reactor core is about 100 watts per square centimeter.) At that point, chips couldn't be cooled cost-effectively, so the industry switched to multicore architectures. These chips contain several processor cores, each running at a lower clock speed than a single large core would, which makes them easier to cool. They can run multiple programs in parallel, but to use multiple cores to speed up a single program, the programmer has to re-engineer the software around a parallel algorithm.
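
To make "re-engineer the software" concrete, here is a minimal sketch (my illustration, assuming standard C++ threads, not a method from the article). The serial sum does all the work on one core; the parallel version restructures the algorithm so each core sums its own slice of the data:

    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Serial version: one core does all the work.
    long long sum_serial(const std::vector<int>& data) {
        return std::accumulate(data.begin(), data.end(), 0LL);
    }

    // Parallel version: the algorithm is restructured so each core sums a slice.
    long long sum_parallel(const std::vector<int>& data, unsigned num_threads) {
        std::vector<long long> partial(num_threads, 0);
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / num_threads;

        for (unsigned t = 0; t < num_threads; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end = (t + 1 == num_threads) ? data.size() : begin + chunk;
            workers.emplace_back([&, t, begin, end] {
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();  // wait for every core to finish
        return std::accumulate(partial.begin(), partial.end(), 0LL);
    }

    int main() {
        std::vector<int> data(1'000'000, 1);
        std::cout << sum_serial(data) << " " << sum_parallel(data, 4) << "\n";
        return 0;
    }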

What's Next?

Even so, multicore is a Band-Aid. To keep delivering the advances in speed, battery life and capabilities that Moore's Law has conditioned consumers to expect, a fundamentally different approach to computing is required. IEEE created the Rebooting Computing Initiative to study these next-gen alternatives, which include:

  • Cryogenic computing. Cooling circuits to nearly absolute zero lets them become superconducting. This enables highly energy-efficient computers while still supporting the current programming model, because it makes possible superscalar processors that run independent instructions in parallel. Cryogenic computing would be used in infrastructure such as servers rather than in cell phones and laptops.
  • Reversible computing. Computers and cell phones take in many pieces of information and reduce them to a single answer. The discarded information is thrown away as heat, which is what you feel when your phone runs a processing-intensive application. Reversible computing recycles that energy so it can be used for the next task, saving electricity and battery life. The catch is that the industry doesn't yet know how to make complex computer circuits reversible; it will take billions of dollars in investment and a decade or more of R&D to make reversible computing viable.
  • Special-purpose hardware. Most of today's computing devices are general purpose, meaning they can do many different things reasonably well, depending on what the software instructs. One exception is the graphics processing unit (GPU), which is designed to do one kind of task very well. The same idea can be applied to many other tasks, with the benefit of far greater energy efficiency. But wider use of special-purpose devices would require a paradigm shift on the programming side, where code today is written for general-purpose hardware (see the code sketch below).
  • Quantum computing. Today's chips handle data as ones and zeros. Quantum computing uses quantum bits that can exist in combinations of one and zero at the same time, letting certain computations explore many possibilities simultaneously. But making these systems operate reliably is a challenging engineering problem. It will take at least another decade to refine the hardware, and even then quantum computing will only help with a few, albeit important, applications.
  • Neuromorphic computing. Like quantum computing, this approach is so radically different from today's programming models and circuit architectures that it will take moon-shot-level investment and breakthroughs to become viable. Neuromorphic computing's name reflects how it strives to achieve the human brain's energy efficiency and ability to rewire itself. The catch is that we're still learning how the brain does all that, so it will be many years before we can make chips that come close to the brain's energy efficiency and compute capabilities.

For some applications, such as the Internet of Things, one short-term solution is to move more computing tasks out of devices and into the cloud, where power consumption and heat are less of a concern. This strategy leverages the growing availability of high-speed, low-latency networks. But like multicore, offloading to the cloud is ultimately a Band-Aid, because data centers also need to be green. So eventually they, too, will need to adopt one or more of these next-gen computing architectures.
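
The programming-side paradigm shift mentioned in the special-purpose hardware item above is easiest to see in code. Below is a minimal sketch (my illustration, using C++17 parallel algorithms as a stand-in for a true GPU kernel): instead of one core walking the data in a loop, the programmer describes what happens to a single element and lets the runtime, or the GPU, fan that work out across many simple processors.

    #include <algorithm>
    #include <execution>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<float> pixels(1920 * 1080, 0.5f);

        // General-purpose style: one core walks the data element by element.
        for (float& p : pixels) p = p * 0.8f + 0.1f;

        // Data-parallel style: describe the operation on ONE element and let
        // the runtime spread it out; on a GPU, each element would get its own
        // hardware thread (as in CUDA or OpenCL kernels).
        std::transform(std::execution::par_unseq, pixels.begin(), pixels.end(),
                       pixels.begin(),
                       [](float p) { return p * 0.8f + 0.1f; });

        std::cout << pixels[0] << "\n";
        return 0;
    }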

If you want a deeper dive into these and other potential alternatives, check out the presentations and papers at http://rebootingcomputing.ieee.org. They're the future of computing.

About the Author

Tom will provide insight on this topic at the annual SXSW Conference and Festival, 10-19 March, 2017. The session, Going Beyond Moore's Law, is included in the IEEE Tech for Humanity Series at SXSW. For more information please see http://techforhumanity.ieee.org.

Tom Conte is a Professor of CS and ECE at Georgia Institute of Technology, where he directs the interdisciplinary Center for Research into Novel Computing Hierarchies. Since 2012, Tom has co-chaired (along with Elie Track) the IEEE-wide Rebooting Computing Initiative, whose goal is to entirely rethink how we compute, from algorithms down to semiconductor devices. He is also the vice chair of the IEEE International Roadmap for Devices and Systems (the successor to the International Technology Roadmap for Semiconductors). He travels around the world giving talks about how shifts in technology and the slowing of Moore's Law are about to cause a dramatic change in how we compute. Tom is the past president of the IEEE Computer Society and a Fellow of the IEEE.
