Here’s something to think about that I’ve never seen anyone explicitly mention: there are two different time spans over which to expect the appearance of artificial general intelligence (AGI): one long, one short.
(1) the long time span
Even if the AI community fails to figure out the essence of human brain operation and to implement that discovery in a machine, commercial digital computer development will continue to produce ever-faster processors until, by sheer brute force, they have enough speed to match human-level information processing, despite their inherent inefficiency. According to the book I browsed today for the first time, “Apocalyptic AI” (intriguing title!), almost everyone in the field of AI believes that machines will reach the level of human intelligence in the first half of this century. This could be considered the “lazy” approach to AI: all the general public and the scientific community have to do to see AI is sit around for 40 years (assuming they’ll live that long, of course). The predictions I’ve read for when this will happen range from 2 years ago to 33 years from now:
If this rate of improvement were to continue into the next century, the 10 teraops required for a humanlike computer would be available in a $10 million supercomputer before 2010 and in a $1,000 personal computer by 2030. [2012 - 2010 = 2 years ago]
(“Mind Children: The Future of Robot and Human Intelligence”, Hans Moravec, 1988)
The Intels and Microsofts of a new industry built on hierarchical memories will be started sometime within the next ten years. [2004 + 10 => 2014 = 2 years from now]
(“On Intelligence”, Jeff Hawkins with Sandra Blakeslee, 2004)
At the present rate, computers suitable for humanlike robots will appear in the 2020s.
—Hans Moravec [2020s - 2012 => 8-18 years from now]
http://www.transhumanist.com/volume1/moravec.htm
Computers will become more intelligent than people by about 2030.
[2030 - 2012 => 18 years from now]
(“Future Files: The 5 Trends That Will Shape the Next 50 Years”, Richard Watson, 2008)
That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045. [2045 - 2012 = 33 years from now]
(“The Singularity is Near: When Humans Transcend Biology”, Ray Kurzweil, 2006)
Therefore some people think we’re already overdue for human-level intelligence in a machine by brute-force methods. Even by the farthest estimate I’ve read, though, we’re only 33 years away.
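Just to keep the arithmetic in those brackets honest, here is a small Python sketch that tabulates the predictions above against a reference year. The (source, year) pairs are my own reading of the quotes, not figures taken verbatim from the books; in particular, for Moravec’s “2020s” I assume a midpoint of 2025:

    # Tabulate the AGI predictions quoted above against a reference year.
    # The (source, year) pairs are my reading of each quote, not exact figures.
    REFERENCE_YEAR = 2012  # the year this post was written

    predictions = [
        ("Moravec, Mind Children (1988)", 2010),      # $10M humanlike supercomputer
        ("Hawkins, On Intelligence (2004)", 2014),    # 2004 + "next ten years"
        ("Moravec, transhumanist.com", 2025),         # midpoint of "the 2020s" (my assumption)
        ("Watson, Future Files (2008)", 2030),        # "by about 2030"
        ("Kurzweil, The Singularity is Near", 2045),  # the Singularity
    ]

    for source, year in predictions:
        offset = year - REFERENCE_YEAR
        when = f"{-offset} years ago" if offset < 0 else f"{offset} years from now"
        print(f"{source}: {year} ({when})")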
(2) the short time span
This is the more interesting but more unpredictable time span. Whereas the first time span addresses mainly technological issues, this time span addresses mainly creativity issues. That means that in theory anybody could produce the “Grand Theoretical Breakthrough” at any time, thereby shortcutting the likely decades-long wait that would otherwise be required.
However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human-level intelligence will be achieved.
...
Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.
—Marvin Minsky
http://www-formal.stanford.edu/jmc/whatisai/node1.html
As much as I admire Minsky, that seems like a naive statement to me, if only because the data that animal brains process consists of high-volume streams of sensory input, so the sheer amount of parallel processing needed to interpret such real-world data seems prohibitive on digital computers. By the way, I got that term “Grand Theoretical Breakthrough” from Patricia Churchland’s book, which suggests three of the most promising approaches to AI of which she was aware in 1989:
Regardless of whether any of the three examples has succeeded in making a Grand Theoretical Breakthrough, each illustrates some important aspect of the problem of theory in neuroscience: for example, what a nonsentential account of representations might look like, how a massively parallel system might succeed in sensorimotor control, pattern recognition, or learning, how one might ascend beyond the level of the single cell to address the nature of cell assemblies, how coevolutionary exchange between high-level and low-level hypotheses can be productive.
(“Neurophilosophy: Toward a Unified Science of the Mind-Brain”, Patricia Smith Churchland, 1989)
At any rate, this second time span should be inspiring to us all, not only because it could shortcut the otherwise long wait for AGI, but because any one of us is probably capable of coming up with a creative enough idea to make the breakthrough, which will certainly change the world in a radical way.
The presence of two competing time spans creates a possibility that I’ve never heard anyone mention before, either: what if the Grand Theoretical Breakthrough comes after the brute-force approach succeeds? I suppose it depends on how each type of machine is implemented, but it suggests that we could be using 30-years-in-the-future technology as a foundation for implementing a type of processing that is already inherently one million times faster than biological neurons. That’s a pretty dazzling possibility.
While neurons work on the order of milliseconds, silicon operates on the order of nanoseconds (and is still getting faster). That’s a million-fold difference, or six orders of magnitude. The speed difference between organic and silicon-based minds will be of great consequence. Intelligent machines will be able to think as much as a million times faster than the human brain.
(“On Intelligence”, Jeff Hawkins with Sandra Blakeslee, 2004)
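To make that six-orders-of-magnitude figure concrete, here’s a back-of-the-envelope Python sketch. The millisecond and nanosecond numbers come straight from the quote; the “subjective year” line at the end is my own extrapolation:

    # Back-of-the-envelope comparison of neuron vs. silicon switching times,
    # using the order-of-magnitude figures from the Hawkins quote above.
    neuron_time = 1e-3   # seconds: neurons operate on the order of milliseconds
    silicon_time = 1e-9  # seconds: silicon operates on the order of nanoseconds

    speedup = neuron_time / silicon_time
    print(f"speedup: {speedup:.0e}x")  # 1e+06x, i.e. six orders of magnitude

    # My own extrapolation: at a million-fold speedup, a full subjective year
    # of human-equivalent thought would take about 31.6 seconds of real time.
    seconds_per_year = 365.25 * 24 * 3600
    print(f"one subjective year of thought in {seconds_per_year / speedup:.1f} s")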