
2 time spans to reach AGI

Here’s something to think about that I haven’t ever seen anyone explicitly mention: there are two different time spans on which to expect the appearance of artificial general intelligence (AGI), one long and one short.

(1) the long time span

Even if the AI community fails to figure out the essence of human brain operation and to implement that discovery in a machine, commercial digital computer development will continue to produce ever-faster processors until, through sheer brute force, they have enough speed to match human-level information processing despite their inherent inefficiency. According to the book I browsed today for the first time, “Apocalyptic AI” (intriguing title!), almost all people in the field of AI believe that machines will reach human-level intelligence in the first half of this century. This could be considered the “lazy” approach to AI: all the general public and the scientific community have to do to see AI is sit around for 40 years (assuming they live that long, of course). Estimates I’ve read for when this will happen range from 2 years ago to 33 years from now:

If this rate of improvement were to continue into the next century, the 10 teraops required for a humanlike computer would be available in a $10 million supercomputer before 2010 and in a $1,000 personal computer by 2030. [2012 - 2010 = 2 years ago]
(“Mind Children: The Future of Robot and Human Intelligence”, Hans Moravec, 1988)

The Intels and Microsofts of a new industry built on hierarchical memories will be started sometime within the next ten years. [2004 + 10 => 2014 = 2 years from now]
(“On Intelligence”, Jeff Hawkins with Sandra Blakeslee, 2004)

At the present rate, computers suitable for humanlike robots will appear in the 2020s.
—Hans Moravec [2020s - 2012 => 8-18 years from now]
http://www.transhumanist.com/volume1/moravec.htm

Computers will become more intelligent than people by about 2030.
[2030 - 2012 => 18 years from now]
(“Future Files: The 5 Trends That Will Shape the Next 50 Years”, Richard Watson, 2008)

That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045. [2045 - 2012 = 33 years from now]
(“The Singularity is Near: When Humans Transcend Biology”, Ray Kurzweil, 2006)

Therefore some people think we’re already overdue for human-level intelligence in a machine by brute-force methods. Even by the farthest estimate I’ve read, though, we’re only 33 years away.
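
Just for fun, here is a minimal sketch in Python of the arithmetic behind this kind of brute-force extrapolation. Every constant in it is an illustrative assumption of mine (the starting compute per $1,000, the doubling time, and the brain-equivalent requirement), not a figure taken from any of the books quoted above:

    # Toy extrapolation: if compute per dollar doubles every fixed interval,
    # when does a $1,000 machine reach an assumed brain-equivalent speed?
    # All constants are illustrative assumptions, not quoted estimates.
    import math

    START_YEAR = 2012              # roughly when this thread was written
    DOUBLING_TIME_YEARS = 1.5      # assumed Moore's-law-style doubling period
    OPS_PER_1000_DOLLARS = 1e11    # assumed ops/sec in a $1,000 PC today
    BRAIN_EQUIVALENT_OPS = 1e16    # one assumed estimate of brain-level ops/sec

    doublings = math.log2(BRAIN_EQUIVALENT_OPS / OPS_PER_1000_DOLLARS)
    year = START_YEAR + doublings * DOUBLING_TIME_YEARS
    print(f"{doublings:.1f} doublings needed, reached around {year:.0f}")

Change any of the assumed constants and the projected year shifts by years or decades, which is exactly why the published estimates above are spread across half a century.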

(2) the short time span

This is the more interesting but more unpredictable time span. Whereas the first time span addresses mainly technological issues, this time span addresses mainly creativity issues. That means that in theory anybody could produce the “Grand Theoretical Breakthrough” at any time, thereby shortcutting the likely decades-long wait that would otherwise be required.

However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human-level intelligence will be achieved.
...
Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.
—John McCarthy
http://www-formal.stanford.edu/jmc/whatisai/node1.html

As much as I admire McCarthy, that seems like a naive statement to me, if only because the type of data that animal brains process is high-volume streams of sensory input, and the sheer volume of parallel processing needed to interpret such real-world data seems prohibitive on digital computers. By the way, I got the term “Grand Theoretical Breakthrough” from Patricia Churchland’s book, which suggests three of the most promising approaches to AI of which she was aware in 1989:

Regardless of whether any of the three examples has succeeded in making a Grand Theoretical Breakthrough, each illustrates some important aspect of the problem of theory in neuroscience: for example, what a nonsentential account of representations might look like, how a massively parallel system might succeed in sensorimotor control, pattern recognition, or learning, how one might ascend beyond the level of the single cell to address the nature of cell assemblies, how coevolutionary exchange between high-level and low-level hypotheses can be productive.
(“Neurophilosophy: Toward a Unified Science of the Mind-Brain”, Patricia Smith Churchland, 1989)

At any rate, this second time span should be inspiring to us all, not only because we could shortcut the otherwise long wait for AGI, but because potentially any of us is capable of coming up with a creative enough idea to make the breakthrough, which would certainly change the world in a radical way.

The presence of two competing time spans creates a possibility that I’ve never heard anyone mention before, either: What if the Grand Theoretical Breakthrough comes after the brute force approach succeeds? I suppose it depends on how each type of machine is implemented, but it suggests that we could be using 30-years-in-the-future technology as a foundation for implementing a type of processing that is already inherently one million times faster than biological neurons. That’s a pretty dazzling possibility.

While neurons work on the order of milliseconds, silicon operates on the order of nanoseconds (and is still getting faster). That’s a million-fold difference, or six orders of magnitude. The speed difference between organic and silicon-based minds will be of great consequence. Intelligent machines will be able to think as much as a million times faster than the human brain.
(“On Intelligence”, Jeff Hawkins with Sandra Blakeslee, 2004)
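
The six-orders-of-magnitude figure in that quote is just the ratio of the two switching timescales. A quick check in Python, using the same round numbers Hawkins uses:

    # Neuron timescale (~1 millisecond) vs. silicon gate timescale (~1 nanosecond)
    neuron_seconds = 1e-3
    silicon_seconds = 1e-9
    print(neuron_seconds / silicon_seconds)  # 1000000.0, i.e. six orders of magnitude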

  [ # 1 ]

Another way of looking at the future of artificial intelligence might be to take an evolutionary rather than a revolutionary view. I’ve long favoured comparing the development of flight with the as-yet-unproven development of artificial intelligence.

Although nobody could know for sure until someone actually succeeded, many inventors had a pretty good idea of how to achieve heavier-than-air flight centuries before the invention of the internal combustion engine. That key piece of technology provided a high enough power-to-weight ratio to lift machines off the ground for extended periods, and there followed a period of research and development which continues to this day as the bugs are ironed out and the designs refined with the benefit of practical experimentation.

So what about a power plant for artificial intelligence? To put things in perspective, consider that the largest computer systems in existence today (probably Google’s data centres) consume more electrical power than most cities, yet can match only a few percent of the capacity of one human brain, which by comparison runs on about 25 watts.
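
To make that contrast concrete, here is a back-of-the-envelope comparison in Python. The data-centre wattage and the “few percent” fraction are placeholder assumptions of mine, since exact numbers aren’t given above; only the 25-watt brain figure comes from the post:

    # Back-of-the-envelope power-efficiency comparison (placeholder numbers)
    BRAIN_WATTS = 25.0             # figure cited above for one human brain
    DATACENTER_WATTS = 100e6       # assumed: a large data centre drawing ~100 MW
    BRAIN_FRACTION_MATCHED = 0.02  # assumed: "a few percent" of one brain

    watts_per_brain_equivalent = DATACENTER_WATTS / BRAIN_FRACTION_MATCHED
    ratio = watts_per_brain_equivalent / BRAIN_WATTS
    print(f"~{ratio:.0e} times more power per brain-equivalent than biology")

On these assumptions the machine comes out roughly eight orders of magnitude less power-efficient than the brain, which gives a sense of the scale of the “power plant” problem.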

However, it is a common mistake to think of a human brain as being intelligent by itself. Although we operate under the illusion of individuality, we are really a hive mind, and what we think of as our intelligence was achieved through hundreds of thousands of years of development under the auspices of civilisation. Just creating one electronic brain that is the equivalent of a human brain won’t be enough; we would need to create billions of them and run them for millennia to achieve parity with human beings.

What’s the solution? The same as it has always been: artificial intelligence will simply be an extension of ourselves; it will leverage all of our knowledge and achievements; it will seek to improve on billions of years of evolution. We cannot view artificial intelligence as some kind of isolated phenomenon separate from ourselves, any more than we can divorce ourselves from the rest of the ecosystem that sustains us, or imagine aircraft flying without a civilisation to manufacture and maintain them.

But I still have no idea how long it will take.

  [ # 2 ]

Yesterday I copied out the excerpt from the book “Apocalyptic AI” to which I referred earlier. I remembered it wrong: it’s not most people in the field of AI in general, but rather “Apocalyptic AI advocates”, who believe that strong AI will arrive in the first half of this century. This book has some terrific parts, including a who’s who of the field of Apocalyptic AI, descriptions of the relationships between existing religions and Apocalyptic AI, mention of transhumanists and artilects, and so on. I’d recommend taking a look at it if you’re interested in the religious aspects of AGI, or in AGI as a social movement in general. I’m personally extremely interested in such a movement.

All Apocalyptic AI advocates agree that the exponential rise in computer power will lead to intelligent robots in the first half of the twenty-first century. Those who believe we will build intelligent machines have generally accepted Moravec’s dating scheme (Vintage 2003), though Minsky has insisted that we already have sufficient computing power to duplicate human intelligence, if only the software problems were solved (Hall 2007, 252). Moravec initially predicted that we would build robots with humanlike performance by 2030 but, in Robot (1999), revised this date to 2040. Kurzweil believes the feat could be achieved in a supercomputer by 2010 and in a $1,000 personal computer by 2025. In 2060, he says, a $1,000 computer will be as intelligent as all human beings put together and by 2099, one penny’s worth of computation will be one billion times greater than that of all human beings combined. “Of course,” he continues, “I may be off by a year or two” (Kurzweil 1999, 105,  emphasis added). Using the Law of Accelerating Returns as justification has helped Kurzweil greatly enhance his level of certitude since the publication of The Age of Intelligent Machines.
(“Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality”, Robert Geraci, 2010, page 30)

Also, I’ve tentatively revised my opinion of Minsky’s position on the time frame of AGI: even if we could process only a fraction of any real-time sensory input stream, that would be enough to start demonstrating strong AI in a convincing way, limited only by resolution, which would clearly be a scalable parameter in the future. So I believe Minsky and I now agree perfectly on everything, which is a good sign.

  [ # 3 ]

Follow-up:

I recently came across this excerpt, which explicitly mentions the two possible directions to AGI (and, in turn, the two different time spans they imply) that I described. It’s the first such explicit reference I’ve found. I read this book years ago, but I just didn’t remember how those paragraphs were organized.

  The computers we’ve built so far are the lineal descendants of the gear-grinding adding machines of our grandparents’ time. We’ve been doing arithmetic for millennia, and our first decades in the computer era have been just more of the same: bigger, faster, but not qualitatively different. For the most part, the programs we’ve written are equally “linear” in their thinking. They have had to be, because of the structure of the machines they’ve been written for. If we examine these first decades, we find that for linear, arithmetic tasks computers have progressed farther, faster, and more usefully than the most wild-eyed dreamers imagined. The computational power and information storage capacity that is available now in a package half the size of a deck of cards, at a price hardly greater than a dinner for half a dozen people at a good restaurant, is incredible.
  On the other hand, the expectations that the optimists had for computer solutions to problems involving association have turned out to be far too high. Pattern-recognition, even of a simple kind, is extraordinarily difficult for a computer. Anything approaching “artificial intelligence” on the part of computers is so far from present reality that we should avoid the phrase. We have made machines that perform very well a tiny subset of the functions of one side of the brain. The other side is still so mysterious that we don’t even begin to know how best to imitate it.
  The positive approach to this dilemma has two paths. One is already being followed: to give a computer so large a table of possible combinations, and so great a speed, that it can try trillions of combinations and stop when it recognizes a good one, all in a reasonable time. That is pure brute force, but it is the method used by the electronic chess-playing computers that are a current consumer toy.
  The other path is more difficult but in the long run more rewarding, and I hope it will be followed; I think it will be. That is to continue and extend the present vigorous research into the associative and creative processes of the human brain, and simultaneously to begin thinking about wholly new kinds of computers whose thinking will be holistic and associative rather than linear. We have to go a long way back, as far back as Babbage, and explore a branch that we passed by then. My guess is that by 2081 each of those two schools of research will have something useful to say to the other.
(“2081: A Hopeful View of the Human Future”, Gerard K. O’Neill, 1981, pages 48-49)

That quote and its insight about a branch of computer science passed by as far back as Babbage (whose first ideas about calculating machines date to 1812) are absolutely outstanding, and I’d like to emphasize that statement to everyone interested in AI for any reason, even if only to criticize AI. The quote could refer to the branches of analog versus digital computing, or to something more fundamental, like a very different form of math required for AI, a type of math not yet invented. My opinion is the latter. If true, that suggests there is really only one missing fundamental piece of the AI puzzle: the underlying form of math or type of operation, both of which equate to a form of representation we’re not currently using. That in turn suggests that a breakthrough in AI is potentially very close. (All you have to do is come up with a new type of mathematics applicable to the high-level functions of human brains…! grin)

Charles Babbage
http://en.wikipedia.org/wiki/Charles_Babbage
