As Moore’s law ends, brain-like computers begin

Professor Kwabena Boahen has written “A Neuromorph’s Prospectus” outlining how to build computers that directly mimic in silicon what the brain does in flesh and blood. Credit: L.A. Cicero

For five decades, Moore’s law held up pretty well: Roughly every two years, the number of transistors one could fit on a chip doubled, all while costs steadily declined. Today, however, transistors and other electronic components are so small they’re beginning to bump up against fundamental physical limits on their size. Moore’s law has reached its end, and it’s going to take something different to meet the need for computing that is ever faster, cheaper and more efficient.
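(For readers who want to see how quickly that doubling rule compounds, here is a minimal Python sketch. The 1971 starting figure of roughly 2,300 transistors is an illustrative assumption, not a number from the article.)

    # Rough illustration of the doubling rule described above.
    # Assumed starting point: ~2,300 transistors on a chip in 1971 (illustrative only).
    def transistor_estimate(year, base_year=1971, base_count=2300, doubling_period_years=2):
        """Project a transistor count that doubles every `doubling_period_years` years."""
        return base_count * 2 ** ((year - base_year) / doubling_period_years)

    for year in (1971, 1991, 2011):
        print(year, f"{transistor_estimate(year):,.0f}")  # 2,300 -> ~2.4 million -> ~2.4 billion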

As it happens, Kwabena Boahen, a professor of bioengineering and of electrical engineering, has a pretty good idea what that something different is: brain-like, or neuromorphic, computers that are vastly more efficient than the conventional digital computers we’ve grown accustomed to. This is not a vision of the future, Boahen said. As he lays out in the latest issue of Computing in Science & Engineering, the future is now.

“We’ve gotten to the point where we need to do something different,” said Boahen, who is also a member of Stanford Bio-X and the Stanford Neurosciences Institute. “Our lab’s three decades of experience has put us in a position where we can do something different, something competitive.”

It’s a moment Boahen has been working toward his entire adult life, and then some. He first got interested in computers as a teenager growing up in Ghana. But the more he learned, the more traditional computers looked like a giant, inelegant mess of memory chips and processors connected by weirdly complicated wiring.

Both the need for something new and the first ideas for what it would look like crystallized in the mid-1980s. Even then, Boahen said, some researchers could see the end of Moore’s law on the horizon. As transistors continued to shrink, they would edge ever closer to those fundamental physical limits. Eventually, they’d get so small that only a single lane of electron traffic could get through under the best circumstances. What had once been electron superfreeways would soon be tiny mountain roads, and while that meant engineers could fit more components on a chip, those chips would become more and more unreliable.

At around the same time, Boahen and others came to understand that the brain had enormous computing power – orders of magnitude more than anything people have built, even today – while using vastly less energy and relying on remarkably unreliable components: neurons.

More Here – TechExplore
