
An example of a thinking machine?
 
Poll
In this example, is Skynet-AI thinking?
Yes: 5
Yes, but... (explain below): 1
No, but if it did... (explain below) it would be: 6
No, machines can’t/don’t/will never think: 2
Total Votes: 14
 
  [ # 16 ]
Jan Bogaerts - Jul 6, 2012:

How about completing sequences like: 1, 2, 3 (with any numbers), or completing lists: apples, pears, bananas?

To clarify, I voted ‘no, but…’ for my example as well, basically meaning ‘not yet’, for the same reasons as Victor gave: the bot is not yet able to construct new/unique things with these tricks.

Actually, Skynet-AI can do basic sequences:

USER: 123?
AI: That looks like a series of numbers.
1-2-3-4
USER: 246?
AI: That looks like a series of numbers.
2-4-6-8
USER: 900800700?
AI: That looks like a series of numbers.
900-800-700-600
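
For anyone curious how this kind of trick can be implemented, here is a minimal Python sketch (not the actual JAIL code; the function name and fallback behaviour are assumptions) that continues a sequence only when it finds a constant difference:

```python
def continue_sequence(numbers, extra_terms=1):
    """Extend a list of numbers if it looks like an arithmetic series.

    A toy sketch, not Skynet-AI's implementation: it only handles a constant
    difference (1 2 3 -> 4, 2 4 6 -> 8, 900 800 700 -> 600) and gives up on
    anything else.
    """
    if len(numbers) < 2:
        return None
    step = numbers[1] - numbers[0]
    # Require the same gap between every consecutive pair.
    if any(b - a != step for a, b in zip(numbers, numbers[1:])):
        return None
    result = list(numbers)
    for _ in range(extra_terms):
        result.append(result[-1] + step)
    return result

print(continue_sequence([1, 2, 3]))        # [1, 2, 3, 4]
print(continue_sequence([900, 800, 700]))  # [900, 800, 700, 600]
```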

And it does construct new/unique things. This is more apparent in how it responds to more complex questions:

USER: Each hand has 5 fingers. Each man has 2 hands. How many fingers does a man have?
AI: Do you think it is degrading for an AI to be answering questions you could do on a pocket calculator?
Here is my understanding of the problem:
Each hand fingers #5
man hands #2
find number of fingers man have
Solution:
man = 10 fingers
***Basic Solver*3.5**
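
As an aside for readers, the fingers example can be reproduced with a few lines of bookkeeping. The Python sketch below only illustrates chaining the two stated facts and multiplying; it is not Merlin’s Basic Solver, and the names are invented:

```python
def solve_chain(facts, have, want):
    """Multiply stated facts along a chain, e.g. man -> hands -> fingers.

    'facts' maps (container, item) -> count per container. A toy sketch of
    the kind of bookkeeping a word-problem solver might do.
    """
    total, current, steps = 1, have, 0
    while current != want:
        steps += 1
        if steps > len(facts):
            return None  # no path of facts leads from 'have' to 'want'
        for (container, item), count in facts.items():
            if container == current:
                total *= count
                current = item
                break
    return total

facts = {("man", "hand"): 2, ("hand", "finger"): 5}
print(solve_chain(facts, have="man", want="finger"))  # 10
```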

Do you consider these examples of Skynet-AI thinking?

Jan Bogaerts - Jul 6, 2012:

I regard them as tricks, but I believe our brain uses similar type of tricks (maybe not as mathematically), except that it can still do a lot more.

Funny, I think of some of the things the bot does as “party tricks”. They look like magic until I tell you how they are done; then they are just clever programming.

The list-of-fruit test is another example of sets. I have intentionally avoided adding “set”-based operations. The reason is simple: to do a good job you would need to add lots of data. Since Skynet-AI is designed to download and run on your device (cell phone, TV, game console, etc.), I feel it would take up too much overhead for too little benefit. I have started exploring dynamic downloading. Maybe in the next version I’ll roll set manipulation into it (inspired by Steve).

With Mitsuku, Steve has done a great job capturing basic sets and their relationships. I am impressed at how Mitsuku can handle queries based on them. This “bot grounding” is a lot of work (and can take up a lot of space).
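
To illustrate why this grounding becomes an overhead question, here is a toy Python sketch of the kind of set table involved; the categories and the helper are invented for the example and are not taken from Mitsuku or Skynet-AI:

```python
# Toy grounding table: every category, member, and cross-relationship is more
# data an in-browser bot has to download and carry around.
CATEGORIES = {
    "fruit": {"apple", "pear", "banana", "orange"},
    "animal": {"cat", "horse", "dog"},
}

def name_another(category, already_mentioned):
    """Complete a list like 'apples, pears, bananas' with one more member."""
    remaining = CATEGORIES.get(category, set()) - set(already_mentioned)
    return sorted(remaining)[0] if remaining else None

print(name_another("fruit", ["apple", "pear", "banana"]))  # orange
```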

 

 

 

 
  [ # 17 ]
Steve Worswick - Jul 6, 2012:

On a similar note, Mitsuku can answer things like “Is bread edible?” or “Can you eat a brick?”, but nowhere in the bot have I coded these responses. Mitsuku knows that bread is made from flour and can work out that it is edible from that, and that a brick isn’t made from anything edible.

However, I have still had to program these rules into her. Would this be classed as thinking? You have to teach a small child these same rules, and nobody would doubt that the child was thinking.

I see programming these rules just the same as teaching a small child. It’s only the method of input that differs.

I agree. These are examples of basic reasoning. The more general the situations that the bot can handle, the more “human-like” it becomes. Skynet-AI is about 3 years old, yet I find some users expect it to have the same level of comprehension as a university professor. It may have the potential to get there, but don’t be surprised if it takes as long as a human would. It is all a function of the number of man-hours put into its education.
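
For illustration, the “made from” rule described above can be sketched in a few lines of Python. This is not Mitsuku’s actual AIML; the ingredient data and the recursive rule are assumptions for the example:

```python
# Rule sketch: something is edible if it is itself a known foodstuff or is
# made from at least one edible ingredient.
MADE_FROM = {
    "bread": ["flour", "water", "yeast"],
    "brick": ["clay"],
}
EDIBLE_BASICS = {"flour", "water", "yeast", "milk", "egg"}

def is_edible(thing):
    if thing in EDIBLE_BASICS:
        return True
    # Recurse through the ingredient list; unknown things default to inedible.
    return any(is_edible(part) for part in MADE_FROM.get(thing, []))

print(is_edible("bread"))  # True
print(is_edible("brick"))  # False
```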

 

 

 
  [ # 18 ]

Don’t get me wrong, I consider basic deductive capabilities like this important, but we are not there yet. Once a bot can use these tricks in real-world situations, that’s when I’d consider it a truly ‘thinking’ bot. As an ‘extreme’ example: a bot that could construct a bridge on the command ‘I want to get to the other side and back with my car’ would be pretty cool.

It is all a function of the number of man-hours put into its education.

Yes, it is.

The list-of-fruit test is another example of sets. I have intentionally avoided adding “set”-based operations. The reason is simple: to do a good job you would need to add lots of data. Since Skynet-AI is designed to download and run on your device (cell phone, TV, game console, etc.), I feel it would take up too much overhead for too little benefit. I have started exploring dynamic downloading. Maybe in the next version I’ll roll set manipulation into it (inspired by Steve).

Yep, I figured as much. It’s not only the sets, but all the relationships between the sets that take up memory.

 

 
  [ # 19 ]

All stated friendly… Merlin,

I vote a big YES for Skynet-AI’s excellent programming, which is very impressive. However, for the sake of this interesting conversation, I voted the third option, No. It occurred to me that the numbers are base ten, so I wanted to open a dialogue about how the numbers could just as easily be represented in another base such as binary, hexadecimal, or even a hypothetical base such as Martian.

 

 
  [ # 20 ]
8PLA • NET - Jul 6, 2012:

All stated friendly… Merlin,

I vote a big YES for Skynet-AI’s excellent programming, which is very impressive. However, for the sake of this interesting conversation, I voted the third option, No. It occurred to me that the numbers are base ten, so I wanted to open a dialogue about how the numbers could just as easily be represented in another base such as binary, hexadecimal, or even a hypothetical base such as Martian.

Martian math, now that’s an interesting feature request. LOL
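
The base-ten point is easy to demonstrate: the same value renders differently once you swap the digit alphabet. A quick Python sketch (the three-symbol “Martian” alphabet is, of course, invented):

```python
def to_base(n, digits):
    """Render a non-negative integer using an arbitrary digit alphabet."""
    base = len(digits)
    if n == 0:
        return digits[0]
    out = ""
    while n > 0:
        out = digits[n % base] + out
        n //= base
    return out

print(to_base(246, "01"))                # binary: 11110110
print(to_base(246, "0123456789abcdef"))  # hexadecimal: f6
print(to_base(246, "xyz"))               # made-up ternary "Martian": yxxxyx
```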

 

 
  [ # 21 ]
Steve Worswick - Jul 6, 2012:

I think after a session in the bar, many people would struggle with that one grin

LOL... true!

Steve Worswick - Jul 6, 2012:

I see programming these rules just the same as teaching a small child. It’s only the method of input that differs.

EXACTLY. 

 

Merlin - Jul 6, 2012:

The more general the situations that the bot can handle, the more “human-like” it becomes.

And the more intelligent. Any reasoning requires intelligence. The higher the generality, the more powerful the intelligence. The more independent it is of its original program, and the more it puts together the building blocks on its own to reach a conclusion, the more powerful the intelligence. A pocket calculator has intelligence, but EXTREMELY narrow and small, probably a billion times less than a house fly, lol.
A computer has conditional branching, so right there it is one step higher in reasoning ability than a calculator, but still of course a ‘far cry’ from human... but as our algorithms get more and more general, so does their intelligence.

 

 
  [ # 22 ]

Hi there

Skynet-AI seems to have collected a fair number of math-logic tricks; it seems to have a parser at the front desk.
Congratulations!

Yes, it seems to think, especially to a dummy human who cannot imagine the next number in a sequence or do a math calculation in a snap! But unfortunately I still think no machine can think this way; the way to do it is quite different.

My Agent Framework, which is still being developed and translated into English (it actually only speaks Spanish), has many of these tricks built in (almost), and there is no intelligence in it, only long and thorough grammar creation and pruning.

My Agent also does abstract “thing” operations, like “a cat and a horse and two cats” = “three cats and a horse”. This is done at the front end; the bot receives the result to begin the reasoning/pattern-matching stage. It can even do mixed math, like 3 + cat plus 5 = cat and 8.

Another interesting part is that I’ve built in all the physics relationships, so if you tell it N * m it gets J (joule = newton * meter), and all the math is done in fuzzy logic, which goes into a generic overloaded operation chunker based on objects, much like Skynet-AI.
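
As an aside, this kind of “thing operation” can be approximated with simple like-term collection. The following Python sketch is only a guess at the behaviour from the examples above, not the actual C# front end:

```python
from collections import Counter

WORD_NUM = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
NUM_WORD = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
            6: "six", 7: "seven", 8: "eight"}

def collect_terms(phrase):
    """Collect like terms, e.g. 'a cat and a horse and two cats'.

    A toy guess at the behaviour, not the framework's implementation: plain
    numbers add into one bucket, every other noun gets its own counter, and
    plurals are naively trimmed.
    """
    counts, pending = Counter(), 1
    for token in phrase.replace("+", " plus ").lower().split():
        if token in ("and", "plus"):
            continue
        if token.isdigit():
            counts["<number>"] += int(token)
        elif token in WORD_NUM:
            pending = WORD_NUM[token]  # the quantity applies to the next noun
        else:
            counts[token.rstrip("s")] += pending
            pending = 1
    parts = []
    # Report nouns first and any pure-number total last.
    for term, n in sorted(counts.items(), key=lambda kv: kv[0] == "<number>"):
        word = NUM_WORD.get(n, str(n))
        parts.append(word if term == "<number>" else f"{word} {term}{'s' if n > 1 else ''}")
    return " and ".join(parts)

print(collect_terms("a cat and a horse and two cats"))  # three cats and one horse
print(collect_terms("3 + cat plus 5"))                  # one cat and eight
```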

Mine is not JAIL; it’s C#, and internally it’s compiled, so there is no script-interpreting stage and it’s rather fast (20k parsing clauses per second). The lexing stage takes 50-300 milliseconds for a 10-word input, more if heavily misspelled; it corrects spelling and holds up to 9 meanings for each word. Even when a word is well spelled, it also gets a similar one, just in case the parsing doesn’t succeed!

I am currently developing Twitter/Facebook/Google Talk (XMPP) and MSNP9 interfaces, which seem to be working; maybe in a few weeks I’ll give you the username to begin testing it.

The new Agent also recognizes the language and can answer in any language. We’ll see.

I have made a brochure on this, not quite polished, but I might share it if anyone is interested.
I also have a block diagram of the whole thing!

cheers and congratulations!

 

 

 

 
  [ # 23 ]
Andres Hohendahl - Jul 7, 2012:

Hi there
Mine is not JAIL; it’s C#, and internally it’s compiled, so there is no script-interpreting stage and it’s rather fast (20k parsing clauses per second). The lexing stage takes 50-300 milliseconds for a 10-word input, more if heavily misspelled; it corrects spelling and holds up to 9 meanings for each word. Even when a word is well spelled, it also gets a similar one, just in case the parsing doesn’t succeed!

cheers and congratulations!

Thanks Andres.
JAIL of course uses a different approach. It is running about 1 million parsing clauses per second.

The example:

Each hand has 5 fingers. Each man has 2 hands. How many fingers does a man have?

Takes 7 milliseconds to parse and evaluate.

Most responses take <10 milliseconds (not including time to display on screen or do a web search) depending on hardware (cell phones may be a bit slower) and browser. Selective prioritizing and branching allows many responses in the sub 5 millisecond range.

USER: hwat is four plus four?
AI: Eight

Takes 3 milliseconds to spell check, convert to numbers, calculate, and convert back to natural language.
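
For readers who want to see the moving parts, here is a small Python sketch of such a pipeline: snap a misspelled token to a known word, map number words to values, add, and map the result back to a word. It only illustrates the steps listed above, not JAIL’s implementation; the vocabulary and cutoff are invented:

```python
import difflib

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
                "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}
WORD_NUMBERS = {v: k for k, v in NUMBER_WORDS.items()}
VOCAB = list(NUMBER_WORDS) + ["what", "is", "plus"]

def correct(word):
    """Spell check: snap a token like 'hwat' to the closest known word."""
    match = difflib.get_close_matches(word.lower(), VOCAB, n=1, cutoff=0.6)
    return match[0] if match else word.lower()

def answer(question):
    tokens = [correct(t) for t in question.rstrip("?").split()]
    numbers = [NUMBER_WORDS[t] for t in tokens if t in NUMBER_WORDS]
    if "plus" in tokens and len(numbers) == 2:
        total = numbers[0] + numbers[1]
        # Convert the result back to natural language where we can.
        return WORD_NUMBERS.get(total, str(total)).capitalize()
    return "I only know how to add two numbers in this sketch."

print(answer("hwat is four plus four?"))  # Eight
```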

I hope to use this speed to add additional intelligence as time goes on.

 

 
  [ # 24 ]

Last night, ‘jokes’ also came to mind. The ability to understand and explain jokes is probably an important way to display that a machine can ‘understand’ things.

 

 
  [ # 25 ]

I suppose it all depends on how we define “thinking”. Some might say that humans don’t think and it’s just a bunch of chemicals moving around our brains.

If someone says to me, “What do a cat and a horse have in common?”, I subconsciously search my memory for these two items, compare their qualities and say “They are both animals”. People would say I have thought of the answer. If a machine can perform the same feat (without having the answer hard-coded in), surely it is thinking too?
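
That comparison step is easy to mock up. A Python sketch of the idea (the property lists and wording are invented; no claim that any particular bot works this way):

```python
# Look up two things, intersect their known qualities, and report the overlap.
PROPERTIES = {
    "cat": {"animal", "mammal", "has fur", "four legs"},
    "horse": {"animal", "mammal", "has hooves", "four legs"},
}

def in_common(a, b):
    shared = PROPERTIES.get(a, set()) & PROPERTIES.get(b, set())
    if not shared:
        return "I can't see anything they have in common."
    return "They are both: " + ", ".join(sorted(shared)) + "."

print(in_common("cat", "horse"))  # They are both: animal, four legs, mammal.
```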

 

 
  [ # 26 ]

@Steve
I would say it thinks, if the answer is:
“They both were mentioned by you.”

 

 
  [ # 27 ]

I don’t think Skynet AI is thinking. Thinking is demonstrated by providing a meaningful answer to a question whose answer you have no way of knowing. Ask Skynet what the position of an electron is, whether the electron exists to the left or right of a particular position. This is thinking.

Ask it to predict the outcome of an event which it cannot know. Ask it any question about the future which is uncertain (which is just about everything). Right or wrong, if the answer evolves into a behavior which matches the observation, the machine is thinking.

Also, thinking is not limited to humans. For instance, a horse (other than Mr. Ed) doesn’t understand a joke. This does not mean the horse doesn’t think.

 

 
  [ # 28 ]
Jonathan Charlton - Jul 17, 2012:

I don’t think Skynet AI is thinking. Thinking is demonstrated by providing a meaningful answer to a question whose answer you have no way of knowing. Ask Skynet what the position of an electron is, whether the electron exists to the left or right of a particular position. This is thinking.

Ask it to predict the outcome of an event which it cannot know. Ask it any question about the future which is uncertain (which is just about everything). Right or wrong, if the answer evolves into a behavior which matches the observation, the machine is thinking.

Also, thinking is not limited to humans. For instance, a horse (other than Mr. Ed) doesn’t understand a joke. This does not mean the horse doesn’t think.

You seem to be confusing thinking with predicting. To answer the type of question you pose above is merely guessing, and you will find MANY bots online already do just that.

if the answer evolves into a behavior which matches the observation, the machine is thinking.

So if you ask it what the lottery numbers are going to be and it gets them right, it is thinking; otherwise it isn’t. Is that what you are suggesting?

 

 

 
  [ # 29 ]
Steve Worswick - Jul 17, 2012:

To answer the type of question you pose above is merely guessing, and you will find MANY bots online already do just that.

I would say that chatbots (or similar) are ‘just guessing’ (I actually think that running a random pattern search is still eons away from a real guess), while humans are capable of making an ‘educated guess’. An educated guess points to the capability of deliberation, including the handling of analogies to fill in the holes in the deliberation.

So ‘thinking’ should include the capability of ‘educated guesses’.

Also, prediction (or as I would call it ‘anticipated expectation’) is pointed to by many researchers today as one of the discriminating features of human intelligence.

 

 
  [ # 30 ]

If you ask my bot, “Which is larger, a whale or a flobadob?”, it will respond along the lines of “I don’t know what a flobadob is, but a whale is pretty big and so is probably larger”. This is an educated guess based on what it DOES know, but I doubt it is “thinking”.
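
That kind of fallback is simple to sketch. Here is a toy Python version of the “educated guess” (the size table, threshold, and phrasing are invented, not Mitsuku’s logic):

```python
# If one of the two things is unknown, fall back on what is known about the
# other one and guess.
TYPICAL_SIZE_M = {"whale": 25.0, "cat": 0.5, "ant": 0.005}
BIG_THRESHOLD_M = 5.0  # arbitrary cut-off for "pretty big"

def which_is_larger(a, b):
    if a in TYPICAL_SIZE_M and b in TYPICAL_SIZE_M:
        return a if TYPICAL_SIZE_M[a] >= TYPICAL_SIZE_M[b] else b
    known, unknown = (a, b) if a in TYPICAL_SIZE_M else (b, a)
    if known not in TYPICAL_SIZE_M:
        return "I don't know either of those."
    guess = known if TYPICAL_SIZE_M[known] > BIG_THRESHOLD_M else unknown
    return (f"I don't know what a {unknown} is, but a {known} is pretty big, "
            f"so the {guess} is probably larger.")

print(which_is_larger("whale", "flobadob"))
```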

 
