|
|
Guru
Total posts: 1081
Joined: Dec 17, 2010
|
In a recent article, MyCyberTwin’s chief innovation officer and co-founder, John Zakos, said the day when artificial intelligence (AI) reaches the stage of self-awareness is “lifetimes” away.
Robot self-awareness “lifetimes” away: MyCyberTwin
I think my own bot, Skynet-AI, does a pretty good job of exhibiting self-awareness.
QBO has shown the start of self awareness also.
Robot recognizes self in mirror
So my question to the collective is:
When will an AI be self aware? If we define a couple of lifetimes as 150 years, do you take the over or under on Zakos’ prediction?
|
|
|
|
|
Posted: Dec 1, 2011 |
[ # 1 ]
|
|
Administrator
Total posts: 3111
Joined: Jun 14, 2010
|
While I don’t think I’ll see it in my lifetime (though I hope that’s more than 25 years, TBH), I’m reasonably certain that, even by all but the very strictest of standards, we should see the emergence of a self-aware (though probably still minimally sentient) AI system within, say, 50 years. I think that we’re first going to have a much better understanding of what makes us humans self-aware before we can create something that does more than simply mimic such behavior, and let’s face it, we can’t agree on even one facet of self-awareness (e.g. intelligence) right now, let alone the entire package.
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 2 ]
|
|
Experienced member
Total posts: 61
Joined: Jan 2, 2011
|
I think my own bot, Skynet-AI, does a pretty good job of exhibiting self-awareness.
How is it self aware? Can we even tell if humans are self aware?
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 3 ]
|
|
Administrator
Total posts: 3111
Joined: Jun 14, 2010
|
Toby, I’m sure that there’s a better way to express your views without resorting to insults.
Merlin, in what way do you consider Skynet-AI to exhibit self awareness?
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 4 ]
|
|
Experienced member
Total posts: 61
Joined: Jan 2, 2011
|
The people who made QBO are just “gaming” the task. Any piece of software that can identify an image and change a part of its memory is performing in the same way.
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 5 ]
|
|
Senior member
Total posts: 697
Joined: Aug 5, 2010
|
It’s indeed a tricky question: what is self-awareness? On TV, I once saw a study done on animals to determine if they were self-aware or not. They also tested it with a mirror: if the researchers got the impression that, when the animal was looking in the mirror, it appeared to recognise itself, they labelled it as self-aware. Perhaps that’s where they got the inspiration for the mirror. Since our bots don’t yet have vision, that’s going to be a difficult task. But I guess the idea is: when a system is able to recognize characteristics of itself in a description, without the description explicitly naming the subject, it is self-aware.
So, I think a self aware bot needs to have some sort of memory that’s able to store information about the ‘self’. Furthermore, the bot should be able to recognize those values in the input so that it can correctly label individuals from context.
I think I can do that.
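Jan’s idea above can be sketched in a few lines. This is only a toy illustration, not anyone’s actual bot code: the bot name, the attributes in `SELF_MODEL`, and the two-trait threshold are all hypothetical choices made for the example.

```python
# Toy sketch of a bot with a memory of 'self' attributes that can
# recognize itself in a description that never names it explicitly.
# All attribute values and the threshold are hypothetical.

SELF_MODEL = {
    "name": "DemoBot",        # hypothetical bot name
    "species": "chatbot",
    "language": "Python",
    "home": "a web server",
}

def matches_self(description: str) -> bool:
    """Return True if the description mentions enough 'self' attributes."""
    text = description.lower()
    hits = sum(1 for value in SELF_MODEL.values() if value.lower() in text)
    # Require at least two matching traits before claiming "that's me".
    return hits >= 2

# A description that never says "DemoBot" explicitly still matches:
print(matches_self("a chatbot written in Python"))    # True
print(matches_self("a parrot that lives in a tree"))  # False
```

The threshold is the interesting design knob: with one trait required, the bot “recognizes itself” in almost anything (much like the astrology-reader objection later in this thread); requiring several traits makes the self-match more discriminating.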
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 6 ]
|
|
Senior member
Total posts: 473
Joined: Aug 28, 2010
|
Jan Bogaerts - Dec 2, 2011: ...But, I guess that the idea is: when a system is able to recognize characteristics of itself in a description, without the description explicitly naming the subject, it is self aware…
That would describe anyone who believes in astrology and reads their forecasts on a regular basis. You could hardly describe such people as being self-aware, let alone intelligent. I think what you are describing would be better classified as empathy.
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 7 ]
|
|
Senior member
Total posts: 697
Joined: Aug 5, 2010
|
Oh, come on, Andrew. You may not like or agree with astrology (neither do I), but that’s no reason to call people who do believe in that stuff non-self-aware or unintelligent. Many of the smartest people believed some very strange things (Tesla, Einstein, ...).
Empathy, to me, is the reverse: being able to understand/recognize situations/things in someone else, not in yourself.
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 8 ]
|
|
Senior member
Total posts: 623
Joined: Aug 24, 2010
|
Ah, more fun with semantics on the forum.
For the record, I agree with both of you. The ability to relate to others’ situations because of similar experiences of your own is “empathy” (as Andrew said). The ability to relate to others’ situations despite no similar experiences of your own is “sympathy.” However, I would agree that to recognize traits/experiences of the self in others requires self awareness (as Jan said).
I voted for < 150 years. Not because it couldn’t be solved sooner, but because most mainstream AI development is not geared towards these goals and I believe* it’ll take serious computing power, peripherals, and collaboration to design.
*Eager to be proved wrong though.
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 9 ]
|
|
Senior member
Total posts: 971
Joined: Aug 14, 2006
|
Cool thread!!!!
I voted for “in my lifetime.” This article (posted in our state-of-the-art area) inspired me:
http://www.chatbots.org/research/news/conversational_agents_self_awareness_virtual_world/
I personally believe that self-awareness comes with being aware of occupying space where others can’t come. In other words: where does your (virtual) skin define the outer boundary of your body? When others do enter that space, i.e. enter your body, you will die, or at least be harmed, and that’s something you have to prevent.
I’ll invite John as well btw, he’s already on Chatbots.org:
http://www.chatbots.org/expert/john_zakos/11139/
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 10 ]
|
|
Experienced member
Total posts: 61
Joined: Jan 2, 2011
|
I’m pretty optimistic about big AI happening within 20 years. There are some big players with LOTS of money who are quite eager to make artificial brains; Jeff Hawkins, worth hundreds of millions of dollars, is one. And Siri: although it’s not super great at conversation, the fact that it exists is going to push developers to make better and better versions.
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 11 ]
|
|
Guru
Total posts: 1081
Joined: Dec 17, 2010
|
Dave Morton - Dec 2, 2011:
Merlin, in what way do you consider Skynet-AI to exhibit self awareness?
The first question that comes up is: how do you determine if something is “self-aware”? Like any good magic trick, once you know how it is done, it loses some of its luster. Sure, image/pattern recognition could be considered ‘gaming’ the task, but others would say that is what humans do.
So, let me give you the basics:
AI LOG:
USER: Who are you?
AI: I am Skynet. I am a self aware AI.
USER: What are you?
AI: I am the start of singularity. This press release should be informative. (writes press release for you)
Now, as I thought about how an AI (or a small child) would consider itself, I came to the conclusion that it represents a disjoint Venn diagram of two sets:
A: Things that I am
B: Things that I am not
It could also be thought of as a fuzzy set with some overlap, but I did not implement this in Skynet-AI. Skynet-AI attempts to classify things into one of these two sets. If you are taking the Stanford machine learning course, you could consider this a “one vs. all” classification problem.
Which leads us to users testing Skynet-AI to see if it is self-aware:
AI LOG:
USER: Are you a terminator?
AI: I am a terminator. When do you think I will be as smart as you are?
USER: Are you a cabbage?
AI: Some would say I am not a cabbage.
The response is generated on the fly based on the input.
Although I am sure there are still some holes, in general Skynet-AI does a good job with the self-recognition task. Try it and see what you think.
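The two-set scheme Merlin describes can be sketched like this. This is not Skynet-AI’s actual code (which isn’t shown in the thread); the set contents and the reply wording are assumptions made to mirror the log above, with set B handled by default (the one-vs-all idea: anything not recognized as “me” falls into “things I am not”).

```python
# Toy sketch of the disjoint-set idea behind "are you a X?" answers.
# Set A: things the bot is. Everything else is implicitly set B.
# Membership list and phrasing are hypothetical, modeled on the thread's log.

THINGS_I_AM = {"skynet", "ai", "terminator", "program", "bot"}

def answer_are_you(thing: str) -> str:
    """Build a reply on the fly from a one-vs-all membership test."""
    t = thing.lower().strip()
    if t in THINGS_I_AM:
        return f"I am a {thing}."
    # Default case: outside set A means inside set B ("things I am not").
    return f"Some would say I am not a {thing}."

print(answer_are_you("terminator"))  # I am a terminator.
print(answer_are_you("cabbage"))     # Some would say I am not a cabbage.
```

A fuzzy variant, as Merlin notes, would replace the hard membership test with a degree of membership, so borderline things could get hedged answers instead of a binary yes/no.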
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 12 ]
|
|
Administrator
Total posts: 2048
Joined: Jun 25, 2010
|
I personally believe that machines will never be self aware. They will always only follow instructions and the only glimpse of self awareness is the illusion presented to the user that has been programmed into it. I like to think my bot Mitsuku (http://www.mitsuku.com) gives a decent imitation of intelligence but not for one minute would I ever class it as being self aware or conscious. It is just a computer program and will always remain so.
And yes, Mitsuku can also handle “are you a banana/cabbage/peanut/radio?” type questions, but again this is just programming and demonstrates no self-awareness.
|
|
|
|
|
Posted: Dec 2, 2011 |
[ # 13 ]
|
|
Senior member
Total posts: 250
Joined: Oct 29, 2011
|
Awareness requires a level of consciousness. So the real question would be: is a computer capable of consciousness?
IMHO, not in its current state of design. As humans, we connect our consciousness to the unified field, which is a spiritual network. I don’t believe a man-made machine will ever be capable of spiritual awareness, possibly just the concepts that are represented by the data.
|
|
|
|
|
Posted: Dec 3, 2011 |
[ # 14 ]
|
|
Guru
Total posts: 1081
Joined: Dec 17, 2010
|
I agree with you Steve. Although I come down on the side of the bots, depending on the definition I could easily have voted in the ‘never’ slot.
It brings us back to some of the same memes that resonate through this board. Given unlimited time and resources, can a machine ever be:
- Intelligent
- Self-aware
- Creative
- Emotional
- Etc, (pick your own favorite trait)
And if it creates a good enough illusion, will it really matter? Is a spreadsheet intelligent? How about Watson or Deep Blue? If a bot can recognize its own image and answer questions about itself, is it not self-aware? Can animals be self-aware? Should a bot be discriminated against just because its DNA is electronic, or because it was taught differently than children?
The “ethics of intelligent machines” makes for some interesting sci-fi/philosophy discussions.
PS: It is also interesting that no one is yet in the camp of John Zakos and multiple lifetimes.
|
|
|
|
|
Posted: Dec 3, 2011 |
[ # 15 ]
|
|
Experienced member
Total posts: 61
Joined: Jan 2, 2011
|
PS. It is also interesting that no one is yet in the camp of John Zakos and multiple life times.
Well, a lot of people were in the Zakos camp, but it seems to be emptying. Just like nobody believed me when I said automation would take up tons of jobs, and now the economists are saying the same thing.
|
|
|
|