Wednesday, March 20, 2013
What Comes After the Silicon Computer Chip?
From Zócalo Public Square
Will quantum computers change everything? Will we see mind-blowing medical breakthroughs? Check out what some engineering pioneers are predicting for our future. -A.T.
The silicon computer chip is reaching the limits of Moore’s Law, Intel co-founder Gordon E. Moore’s observation that the number of transistors on chips would double every two years. Moore’s Law is one of the reasons why processing speed—and computer capabilities in general—have increased exponentially over the past few decades. But just because silicon is at its outer limits doesn’t mean that advances in computer hardware technology are going to stop; in fact, it might mean a whole new wave of innovation. In advance of former Intel CEO Craig R. Barrett and Arizona State University President Michael M. Crow’s Zócalo event on the future of nanotechnology, we asked engineers and people who think about computing, “What comes after the computer chip?”
SETH LLOYD
Quantum computers will change everything
In 1965, Gordon E. Moore, the co-founder of Intel, noted that the number of components in integrated circuits had doubled every year since their inception in 1958 and predicted that this annual doubling would continue for at least another 10 years. Since that time, the power of computers has doubled every year or year and a half, yielding computers that are millions of times more powerful than their ancestors of a half century ago. The result is the digital revolution that we see around us, including the Internet, iPhones, social networks, and spam.
Since Moore’s observation, the primary method of doubling has been to make the wires and transistors that transmit and process information smaller and smaller: The explosion in computing power comes from an implosion in the size of computing components. This implosion can’t go on forever, though, at least given the laws of physics as we know them. If we cram more and more, smaller and smaller, faster and faster components onto computer chips, they generate more and more heat. Eventually, the chip will melt. At the same time, basic semiconductor physics makes it difficult to keep increasing the clock speed of computer chips ever further into the gigahertz region. At some point—maybe even in the next decade or so—it will become hard to make semiconductor computer chips more powerful by further miniaturization.
At that point, the most important socio-economic event that will occur is that software designers will finally have to earn their pay. Not that they are not doing good work now—merely that they will have to use the resources available rather than simply assuming that computer power will have doubled by the time their software comes to market, thereby absorbing the additional slop in their design. Enforced computational parsimony might not be a bad thing. The luxury of continual expansion of computer power can lead to design bloat. Is Microsoft Word today really better than Word in 1995? It is certainly more obnoxious about changing whatever word you are trying to write into the word it thinks you want to write.
The inevitable end to Moore’s Law for computer chips does not imply that the exponential increase in information processing power will end with it, however. The laws of physics support much faster and more precise information processing. For a decade and a half, my colleagues and I have been building prototype quantum computers that process information at the scale of atoms and elementary particles. Though tiny and computationally puny when compared with conventional chips, these quantum computers show that it is possible to represent and process information at scales far beyond what can be done in a semiconductor circuit. Moreover, quantum computers process information using weird and counterintuitive features of quantum mechanics that allow even these small, weak machines to perform tasks—such as simulating other quantum systems—that even the most powerful classical supercomputer cannot do.
Computation is not the only kind of societally relevant information processing that is improving exponentially. Dave Wineland of the National Institute of Standards and Technology shared the Nobel Prize in Physics this year in part for his work on quantum computing, but also in part for his use of funky quantum effects such as entanglement to construct the world’s most accurate atomic clocks. Conventional atomic clocks make up the guts of the global positioning system. Wineland’s novel clocks based on quantum information processing techniques have the potential to make GPS thousands of times more precise. Not just atomic clocks, but essentially every technology of precision measurement and control is advancing with its own “personal Moore’s Law.” The result is novel and startling developments in nanotechnology, medical devices and procedures, and personal hardware, including every known way of connecting to the Internet.
Finally, if we look at the ultimate limits to information processing, the laws of quantum mechanics and elementary particles allow much more extreme computation than could ever be found on a computer chip. Atomic scale computation? How about quark-scale computation? The ultimate level of miniaturization allowed by physical law is apparently the Planck scale, a billion billion billion times smaller than the current computational scale. And why just make things smaller—why not build larger computers? Why not enlist planets, stars, and galaxies in a universal computation? At the current rate of progress of Moore’s Law, in 400 years, the entire universe will be one giant quantum computer. Just don’t ask what the operating system will be.
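(As a rough check of Lloyd's figures, here is a back-of-envelope sketch in Python; the ~22-nanometer feature size, billion-transistor chip, and 10^80 atoms in the observable universe are assumed round numbers for illustration, not figures taken from the essay.)

# Back-of-envelope check of the scaling claims above (illustrative round numbers).
import math

planck_length_m = 1.6e-35   # Planck length, meters
feature_size_m = 2.2e-8     # ~22 nm transistor feature, circa 2013 (assumed)
print(f"feature size / Planck length ~ {feature_size_m / planck_length_m:.0e}")
# -> ~1e27, i.e. "a billion billion billion" times larger than the Planck scale

transistors_per_chip = 1e9  # a billion-transistor chip (assumed)
atoms_in_universe = 1e80    # rough count of atoms in the observable universe
doublings = math.log2(atoms_in_universe / transistors_per_chip)   # ~236 doublings
print(f"years at 1.5-2 yr per doubling: {doublings * 1.5:.0f}-{doublings * 2:.0f}")
# -> roughly 350-470 years, consistent with the essay's "400 years" figure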
Seth Lloyd is professor of mechanical engineering at MIT. His work focuses on the role of information processing in the universe, including quantum computation and complex systems. He is the author of Programming the Universe.
SETHURAMAN “PANCH” PANCHANATHAN
Better brain-computer interfaces
The evolutionary path of computing will no doubt result in ever-increasing processing capacities through higher-density and lower-power circuits, miniaturization, parallelization, and alternative forms of computing (such as quantum computing). These will address the demands of large-scale and big-data processing as well as the massive adoption of multimedia and multimodal computing in various applications.
However, future computing devices will have to shift from data- and information-level processing to higher levels of cognitive processing. For example, computing devices will be able to understand subtle cues such as intent in human communication rather than just explicit cues such as prosody, expressions, and emotions. This will usher in a new era in computing in which the paradigm of humans interacting with computers in an explicit manner at higher levels of sophistication will be augmented by devices that also interact implicitly with humans. This “person-centered” engagement, in which man and machine work as collaborative partners, will allow for a range of tasks, from simple to complex. Computing devices on-body, in-body, and in the environment, as well as next-generation applications, will require the user to engage in a symbiotic relationship with the devices, termed “coaptive computing.”
Computing devices (like prosthetic devices) working coaptively with the user will assist her in certain tasks that are predetermined for their role and purpose, and will even learn explicitly through instructions from the user. More importantly, such devices need to learn through implicit observation of the interactions between the user and the environment, thereby relieving the user of the usual “mundane” tasks. This will enable users to enhance their capability and function and to engage at higher levels of cognition, which thus far has not been possible due to the limited capacity for multisensory perception and cognition.
For example, the user may recall only a few encounters with people and things at an event simply because she had a focused engagement with those particular people and objects. However, future computing devices can essentially recall all of the encounters in a “life log,” along with their context. This could prompt or inform the user as appropriate in their subsequent interactions. As coaption becomes more pervasive, the future of brain-computer interfaces will increasingly become a reality.
No longer will we think of a computer chip as just a physical entity, but instead as a ubiquitous device conjoined and operating seamlessly with humans as partners in everyday activities.
Sethuraman “Panch” Panchanathan is the senior vice president of the Office of Knowledge Enterprise Development at Arizona State University. He is also a foundation chair in Computing and Informatics and director of the Center for Cognitive Ubiquitous Computing. Dr. Panchanathan was the founding director of the School of Computing and Informatics and was instrumental in founding the Biomedical Informatics Department at ASU.
KONSTANTIN KAKAES
The end of the “La-Z-Boy era” of sequential programming
The important question to the end-user is not what comes after the chip, but how chips can be designed and integrated with sufficient ingenuity so that processing speed improves even as physics constrains the speed and size of circuits.
Ever since John von Neumann first enunciated the architecture of the modern computer in 1945, processors and memory have improved more quickly than the ability to communicate between them, leading to an ever-worsening “von Neumann bottleneck”—the connection between memory and the CPU (or central processing unit).
Because chip features can no longer simply be made smaller, the only way forward is through increasing parallelism—doing many computations at once instead of, as in a classic von Neumann architecture, one computation at a time. (Each computation is essentially a logical operation like “AND” and “OR” executed in the correct order by hardware—it’s the basis for how a computer functions.)
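To make the contrast concrete, here is a minimal Python sketch; the prime-counting workload is invented purely for illustration. The same chunks of work are run one at a time, von Neumann style, and then spread across however many cores the machine has.

# Minimal illustration of sequential vs. parallel execution (hypothetical workload).
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Deliberately naive, CPU-bound work: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 8

    # Classic von Neumann style: one computation at a time.
    sequential = [count_primes(c) for c in chunks]

    # Parallel style: the same computations spread across the available cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(count_primes, chunks))

    assert sequential == parallel   # same answers, potentially in a fraction of the time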
Though the first multiprocessor architecture debuted in 1961, the practice didn’t become mainstream until the mid-’00s, when chip companies started placing multiple processing units or “cores” on the same microprocessor. Chips often have two or four cores today. Within a decade, a chip could have hundreds or even thousands of cores. A laptop or mobile device might have one chip with many cores, while supercomputers will be composed (as they are today) of many such chips in parallel, so that a single computer will have as many as a billion processors before the end of the decade, according to Peter Ungaro, the head of supercomputing company Cray.
Figuring out how best to interconnect both many cores on a single chip and many chips to one another is a major challenge. So is how to move a computation forward when it is no longer possible to synchronize all of a chip’s processors with a signal from a central clock, as is done today. New solutions like “transactional memory” will allow different processes to efficiently share memory without introducing errors.
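Transactional memory itself is a hardware and runtime feature, but the idea behind it (optimistically do the work, commit only if no other thread touched the shared data in the meantime, and retry otherwise) can be sketched in a few lines of Python. The VersionedCell class below is a toy stand-in written for this article, not a real transactional memory implementation.

# Toy sketch of the optimistic "read, compute, commit or retry" idea behind
# transactional memory. Illustration only; not a real STM/HTM system.
import threading

class VersionedCell:
    def __init__(self, value=0):
        self._value, self._version = value, 0
        self._lock = threading.Lock()   # guards only the brief commit step

    def read(self):
        with self._lock:
            return self._value, self._version

    def try_commit(self, new_value, expected_version):
        with self._lock:
            if self._version != expected_version:
                return False            # another thread committed first: abort
            self._value, self._version = new_value, self._version + 1
            return True

def atomic_increment(cell):
    while True:                         # the "transaction": retry until it commits cleanly
        value, version = cell.read()
        if cell.try_commit(value + 1, version):
            return

cell = VersionedCell()
threads = [threading.Thread(target=lambda: [atomic_increment(cell) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert cell.read()[0] == 4000           # all 4,000 increments survive, despite contention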
The overall problem is so difficult because the hardware is only as good as the software, and the software only as good as the hardware. One way around this chicken-and-egg problem will be “autotuning” systems that will replace traditional compilers. Compilers translate a program in a high-level language into a specific set of low-level instructions. Autotuning will instead try out lots of different possible translations of a high-level program to see which works best.
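A stripped-down version of that idea fits in a few lines of Python: time several candidate implementations of the same high-level operation and keep whichever runs fastest on the machine at hand. Real autotuners search far larger spaces of loop orderings, tilings, and data layouts; the two summation routines here are placeholders.

# Minimal sketch of autotuning: benchmark candidate implementations, keep the fastest.
import timeit

def sum_loop(data):          # candidate 1: explicit loop
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):       # candidate 2: built-in sum()
    return sum(data)

def autotune(candidates, data, repeats=5):
    timings = {fn.__name__: min(timeit.repeat(lambda: fn(data), number=100, repeat=repeats))
               for fn in candidates}
    best = min(timings, key=timings.get)   # keep whichever ran fastest here
    return best, timings

best, timings = autotune([sum_loop, sum_builtin], data=list(range(10_000)))
print("selected:", best, timings)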
Autotuning and transactional memory are just two of many new techniques being developed by computer scientists to take advantage of parallelism. There is no question the new techniques are harder for programmers. One group at Berkeley calls it the end of the “La-Z-Boy era” of sequential programming.
Konstantin Kakaes, a former Economist correspondent in Mexico, is a Schwartz Fellow at The New America Foundation in Washington, D.C.
STEPHEN GOODNICK
Biology-inspired computing
We are rapidly reaching the end of the doubling of transistor density every two years described by Moore’s Law, as we are literally running out of atoms with which to make individual transistors. Recently, nanotechnology has led to many new and exciting materials—such as semiconductor nanowires, graphene, and carbon nanotubes. But as long as computing is based on digital logic (ones or zeros) moving electronic charge around to turn on and off individual transistors, these new materials will only extend Moore’s Law two or three more generations. The fundamental size limits still exist, not to mention limitations due to heat generation. Some new paradigms of non-charge-based computing may emerge that, for example, could theoretically use the spin of an electron or nucleus to store or encode information. However, there are many obstacles to creating a viable, scalable technology based on “spintronics” that can keep us on the path of Moore’s Law.
It’s important to remember, though, that Moore’s Law can be viewed not merely as a doubling of the density of transistors every two years, but as a doubling of information processing capability as well. While bare number-crunching operations are most efficiently performed using digital logic, new developments in digital imagery, video, speech recognition, artificial intelligence, etc., require processing vast amounts of data. Nature has much to teach us about how to efficiently process vast amounts of sensory information in a highly parallel, analog fashion, as the brain does, which is fundamentally different from conventional digital computation. Such “neuromorphic” computing systems, which mimic neural-biological functions, may be more efficiently realized with new materials and devices that are not presently on the radar screen.
Similarly, quantum computing may offer a way of addressing specialized problems involving large amounts of parallel information processing. The most likely scenario is that the computer chip of the future will marry a version of our current digital technology to highly parallel, specialized architectures inspired by biological systems, with each performing what it does best. New computational paradigms and architectures together with improved materials and device technologies will likely allow a continued doubling of our information processing capability long after we reach the limits of scaling of conventional transistors.
Stephen Goodnick is a professor of electrical engineering at Arizona State University, the deputy director of ASU Lightworks, and the president of the IEEE Nanotechnology Council.
H.-S. PHILIP WONG
Mind-blowing medical breakthroughs
The 10 fingers, the abacus, mechanical cash registers, vacuum tube-based ENIAC, the transistor, the integrated circuit, the billion-transistor “computer chip” … then what? I suppose that was the line of thinking when this question was posed. Rather than fixating on whether a new “transistor” or a new “integrated circuit” will be invented, it is useful to focus on two key observations: “It will be a long time before we reach the fundamental limits of computing,” and “The technologies we use to build the computer chip will impact many fields outside of computing.”
Advances in computing are reined in by the energy consumption of the computer chip. Today’s transistor consumes in excess of 1,000 times more energy than the kT·ln(2) limit for erasing one bit of information per logical step of computing. Reversible computing, as described by physicist Rolf Landauer and computer scientist Charles Bennett, will reach below the kT·ln(2) limit once a practical implementation is devised. There is plenty of room at the bottom! We will continue to get more computational power for less energy consumed.
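For scale, the kT·ln(2) figure is easy to evaluate; the quick calculation below uses standard physical constants and an assumed room temperature of 300 K.

# Quick numerical check of the kT*ln(2) Landauer limit at room temperature.
import math

k_B = 1.380649e-23                      # Boltzmann constant, joules per kelvin
T = 300.0                               # assumed room temperature, kelvin
landauer_limit = k_B * T * math.log(2)  # minimum energy to erase one bit
print(f"kT*ln(2) ~ {landauer_limit:.2e} J per bit")     # ~2.9e-21 J (~0.018 eV)

# "In excess of 1,000 times" this limit implies today's logic spends on the
# order of 3e-18 J or more per bit-erasing operation.
print(f"1,000x the limit ~ {1000 * landauer_limit:.1e} J")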
Now that I have put to rest the inkling that there may be an end to the rapid progress we expect from the computer chip, let’s talk about what else the “computer chip” will bring us in addition to computing and information technology. The semiconductor technology and design methodology that are employed to fabricate the computer chip have already wielded their power in other fields. Tiny cameras in cellphones that allow us to take pictures wherever we go, digitally projected 3-D movies, and LED lighting that is substantially more energy efficient than the incandescent light bulb are all examples of “computer chip” technologies that have already made an impact on society. Enabling technologies that transform the field of biomedical research are in the offing.
The cost of sequencing a genome has dropped faster than Moore’s Law; the technique is based on technologies borrowed from computer chip manufacturing. Nanofabrication techniques developed for the semiconductor industry have enabled massive probing of neural signals, which eventually will lead to a sea change in our understanding of neuroscience. Nanofabricated sensors and actuators, in the style of Fantastic Voyage, are now beginning to be developed and are not completely science fiction. Emulation of the brain, both by brute-force supercomputers and by innovative nanoscale electronic devices, is becoming possible and will reach human scale if the present rate of progress continues.
I am optimistic that what we have experienced in technological progress so far is just the beginning. The societal impact of the “computer chip” and the basic technologies that are the foundations of the “computer chip” will advance knowledge in other fields.
H.-S. Philip Wong is the Willard R. and Inez Kerr Bell Professor in the School of Engineering at Stanford University. He joined Stanford University as a professor of electrical engineering in 2004 after a 16-year research career on the “computer chip” with the IBM T.J. Watson Research Center. He is the co-author (with Deji Akinwande) of the book Carbon Nanotube and Graphene Device Physics.