

An example of a thinking machine?
 
Poll
In this example, is Skynet-AI thinking?
Yes 5
Yes, but... (explain below) 1
No, but if it did... (explain below) it would be. 6
No, machines can’t/don’t/will never think. 2
Total Votes: 14
 
  [ # 46 ]

Do you have a sample Turing test dialogue that you had with a human and Mu that you can post here as a demo?

 

 
  [ # 47 ]

So Mu is not NLP.  I built Mu by breaking down the Turing Test into its most fundamental components, and that includes dialogue.

Here is how a Turing Test with Mu works.

1. There are three rooms, one with the interrogator, one with a human and one with Mu.
2. The interrogator poses the following challenge to both rooms.  “I have a child, electron, etc. named ‘Kramer.’  Kramer is now going to jump to the left or to the right.  Which way did Kramer jump?”
3.  Both rooms provide a response, left or right.
4.  The interrogator informs each room of which way Kramer did in fact jump.
5.  The interrogator repeats the question (with Kramer jumping to the left or to the right).
6.  The respondents provide an answer.
7.  The interrogator again informs each room of which way Kramer did in fact jump.
8.  This goes on and on until the interrogator decides to make a determination of man versus machine.
9.  In all instances the responses of the two rooms will either be identical in their statistical behavior, or identical outright.  (This is based on whether Kramer’s movement is random, or if it has a prior preference.)
10.  The interrogator will be unable to determine the difference between the decisions made by a human and those made by the computer because they are identical.  This is true even if there is a fourth room where Kramer jumps, and another person is providing the movement feedback to the two test rooms, while the interrogator does not know the true movement of Kramer either.
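The ten steps above can be sketched as a toy simulation. Everything here (the names `respondent` and `run_interrogation`, the bias parameter, and the majority-mimicking rule) is my own illustration, not anything specified in the thread: if both rooms apply the same deterministic rule to the same shared feedback, their answers coincide by construction.

```python
import random

def respondent(history):
    """Answer by mimicking the prior prevalence of Kramer's jumps:
    pick whichever direction has occurred most often so far
    (defaulting to "left" on a tie)."""
    lefts = history.count("left")
    rights = len(history) - lefts
    return "left" if lefts >= rights else "right"

def run_interrogation(rounds=50, bias=0.7, seed=1):
    """Simulate the protocol: Kramer jumps with a fixed bias, both
    rooms answer from the same feedback history, and we count how
    often the two rooms' answers coincide."""
    rng = random.Random(seed)
    history, agreements = [], 0
    for _ in range(rounds):
        jump = "left" if rng.random() < bias else "right"
        answer_human = respondent(history)   # room with the human
        answer_mu = respondent(history)      # room with the machine
        agreements += answer_human == answer_mu
        history.append(jump)                 # step 4: interrogator's feedback
    return agreements / rounds
```

Since the rule is deterministic on shared feedback, agreement is 100% by construction; whether real humans actually follow this rule is of course the contested claim.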

 

 
  [ # 48 ]

I understand you might think it unlikely that someone has a program that defeats the Turing Test 100% of the time.
This does tend to fly in the face of probability theory.

I do claim that it defeats the Turing Test 100% of the time against all interrogators given any human control variable (which it will mimic 1:1).  I’m going to demonstrate it soon enough after I get some feedback from a local engineering school.  Nevertheless, my program has never lost the Turing Test, and it has successfully mimicked the human control variable, perfectly, every time, even though there is no communication between the two.  So it is impossible for the interrogator to differentiate man from computer because the responses coming from behind both doors are exactly the same, no matter who the human is and no matter how far apart the rooms are.  That’s what I’m hoping to demonstrate in an academic environment this August.

I look forward to August to see the results.

You can go to my website, Charlton AG, and look up the services link to see, on the right side of the screen, my AI program Mu perfectly mimicking the S&P 500, the NASDAQ Composite index, and any other market in the world with zero deviation over 22 years of daily data.  It’s a very simple program, and yes, it does defeat the Turing Test 100% of the time.  I developed it on 6 March 2012, have briefed it at IU, and will be briefing it at Georgia Tech.
It is a bit hard to see based on just your charts. Do you have it in spreadsheet form? I would love to see how it mimicked the flash crash. My question would be whether your model is just overfitting the data.

 

 
  [ # 49 ]

Your description is unlike any Turing test I have ever encountered.

 

 
  [ # 50 ]

It doesn’t predict.  It only mimics.  It asserts that prediction is impossible because the future represents a system of information that has yet to be created.  However, the process which generates the future is the same process that keeps happening over and over again.  Essentially, if you are within the system of prediction and you were able to ascertain the future perfectly, then you would act, thereby altering it and rendering your prior assessment incorrect.  If however, you were external to the future then you may ascertain the future but could do nothing about it, not even prepare for it.  So predicting the future is fruitless.  However, mimicking is not.

My program just mimics the behavior of the market.  So it doesn’t matter what event occurs, it does what the market would do in that event, because what the market would do in that event is reducible to a binary decision.

 

 
  [ # 51 ]
Steve Worswick - Jul 18, 2012:

Your description is unlike any Turing test I have ever encountered.

Here’s the definition of the Turing Test from Stanford.  It asserts that “imitation” is the game, not conversation.

My program imitates and it defeats the Turing Test as Alan Turing himself devised it.

http://plato.stanford.edu/entries/turing-test/

 

 
  [ # 52 ]

So if I can mimic your decisions exactly, or so perfectly as to be indistinguishable from you, am I not you?  This is the intelligence we all have.  It is the same as the analogy of the child and the stranger.  The behavior is ingrained, such that we know the outcome by the prior defined behavior.

The experience is like that of quantum mechanics.  Two different rooms, two “different” entities.  Both providing the exact same feedback to the same question.

That’s the imitation game.  How we merge past information with new information.  We humans do that in a very specific and universal way.  The fact that Mu does the exact same thing, unremarkable as a process but remarkable in the complexity of its results, demonstrates that when it comes to intelligence we all have a whole lot more in common than we think, and that we’re all coming to the same conclusion about reality because we’re using the same process.

 

 
  [ # 53 ]

http://en.wikipedia.org/wiki/Turing_test

You only have to imitate the way a person decides.  Not the way a person conveys that decision.  You first have to figure out the way we decide, and then we know how to map the NLP.

 

 
  [ # 54 ]

Here are some sample questions from Alan Turing himself:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6
and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

I fail to see how your program emulates this.

 

 
  [ # 55 ]

That’s not the Turing Test. Those might be questions.  But the question posed by Turing was whether or not you could imitate a human.  My program imitates a human’s decisions, such that the decisions are indistinguishable from those of a human.  If you assert free will, then you assert that you make decisions and that this is a distinguishing aspect of you.  If my program can mimic your free will, then it has imitated you, and in doing so has shown that you don’t have any.

Here’s an example to question whether what you consider intelligence really exists.  I’m going to show you how to guarantee yourself 1 million dollars in the stock market.

1.  Open up 8 online stock sites that provide only one of two recommendations: Buy (B) or Sell (S).
2.  Now pick any stock, portfolio of stocks, or fund, I don’t care.  Make it complex-sounding if you like.
3.  Now in time periods one through three, you instruct the managers of each site to make the following recommendations:

          S1   S2   S3   S4   S5   S6   S7   S8
Time 1   B     B     B     B     S     S     S     S
Time 2   B     B     S     S     B     B     S     S
Time 3   B     S     B     S     B     S     B     S

Now, no matter what happens, one of your sites will get it right each and every time.  That’s because the right answer exists prior to any justification you select for buying or selling the stock/portfolio/etc.  This is true of every decision.  Intelligence is just picking whichever option has the greatest probability, and that comes from mimicking the prior prevalence, not inferring the future.
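The three-period table above is just an enumeration of all 2³ = 8 possible Buy/Sell sequences, so one site is guaranteed a perfect record whatever the market does. A minimal sketch of that counting argument (the function names are mine, for illustration only):

```python
from itertools import product

def sites(periods=3):
    """One 'site' per possible Buy/Sell sequence: 2**periods in all.
    For periods=3 this reproduces the S1..S8 columns in the table."""
    return list(product("BS", repeat=periods))

def always_right(outcomes):
    """Return the sites whose recommendations match the realized
    market outcomes in every period."""
    return [s for s in sites(len(outcomes)) if s == tuple(outcomes)]

# Whatever the market does over three periods, exactly one of the
# eight sites was right every single time -- by construction.
for outcomes in product("BS", repeat=3):
    assert len(always_right(outcomes)) == 1
```

The guarantee says nothing about skill: it holds because every possible answer sheet was submitted in advance.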

If you are intelligent, differentiate between people who are doing this scheme and people who are actually “making” “good” “decisions.”  (I’ll give you a hint.  You can’t, or couldn’t.)  My program Mu demonstrates exactly how we ascertain the prior probability.  That is intelligence.  Not some ability to write a sonnet.  Even then, you’re just mimicking form.  Intelligence is imitation, plain and simple.  It doesn’t matter what scale you do it at.

 

 
  [ # 56 ]

But those questions were set by Alan Turing himself for his own test.

The Turing test involves two subjects, a human and a machine (the computer to be “tested”) who engage in conversations with some number of interrogators. Each interrogator (human beings) will be placed in a room with a computer terminal. Using the terminal to communicate, each interrogator will engage in two conversations with each of the two subjects—the computer (to be tested) and the human subject. The interrogators do not know which of the two subjects is the machine and which is the human. It is their job to ask questions or to say anything in conversation that might trip up the computer program and identify it as the machine. After the interrogator has a conversation with both subjects, the interrogator must guess which is the person and which is the machine. Turing never specifically states “official criteria” for what counts as passing the test. However, he describes a certain level of accomplishment that he believes would be reasonable to expect within 50 years:

Alan Turing - “I believe that in about fifty years time it will be possible to programme computers with a storage capacity of about 10⁹ to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

You appear to have made your own test up.

 

 
  [ # 57 ]
Steve Worswick - Jul 18, 2012:

But those questions were set by Alan Turing himself for his own test.

[...]

You appear to have made your own test up.

No, that is not what I have done.  The conversation is simply reduced to:

1. A question asked by an interrogator.
2. A response provided by both rooms.
3. A response by the interrogator as to what the outcome was.
4. The same question asked by the interrogator.
5. The same respondents providing a response.

That is a conversation.  It is just reduced to its most basic form.  Question (i.e. interrogation).  Answer (i.e. response).  Feedback and question (interrogator), ad infinitum.

That’s not making up a test.  That is the most basic form of the imitation test.  That is the most basic form of the interrogation, and it accounts for all self-replicating behavior.

 

 
  [ # 58 ]

The conversation I have posed does not violate the definition of the test as you have laid out.  It is just basic.

 

 
  [ # 59 ]

Yes, it does violate it. Your steps 3, 4 and 5 do not occur in Turing’s imitation game. The interrogator doesn’t supply feedback on how each entity is doing as the conversation progresses.

If we are breaking it down into stages:

1. A question asked by an interrogator.
2. A response provided by both rooms.
3. Repeat steps 1 and 2 for five minutes.
4. Interrogator decides which is which.

That is a conversation.

If you define a conversation as repeating the same question over and over again, you must be a big hit at parties.
 

 
  [ # 60 ]

Your re-writing of the rules sounds more like the “prisoner’s dilemma”.

If we allow re-writing of the rules you can make your program very simple.

The conversation is reduced to:

1. A question asked by an interrogator.
2. A response is provided by room A.
3. A response by the interrogator is made to both rooms as to what the outcome was.
4. Room B responds the same as room A.

For the program in Room B, it just waits for a question and an answer and then responds and repeats the process. Guessing if a computer or person is in room B would be impossible if both follow the same rules.
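The four rewritten steps amount to an echo program: under that reading, Room B needs no model of anything, it just repeats whatever the interrogator relays. A sketch (names are mine, not from the thread):

```python
def room_b_strategy(relayed):
    """Room B's entire 'program' under the rewritten rules: it never
    decides anything itself, it just repeats the last answer the
    interrogator relayed (step 3) as its own response (step 4)."""
    return relayed[-1] if relayed else None

# Room A answers, the interrogator relays the outcome, Room B echoes.
relayed = []
for room_a_answer in ["left", "right", "right"]:
    relayed.append(room_a_answer)                      # step 3: the relay
    assert room_b_strategy(relayed) == room_a_answer   # step 4: the echo
```

Which is the point of the objection: once feedback is relayed mid-test, indistinguishability is trivial and proves nothing.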

It still looks to me like you are just overfitting the data. From the Stanford AI Class:
http://www.wonderwhy-er.com/ai-class/
It is discussed in unit 5.30 Overfitting Prevention
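The overfitting worry can be illustrated without any market data: a model with enough free parameters, here the extreme case of a pure lookup table, reproduces its history with zero deviation yet says nothing about the next observation. A hypothetical sketch (nothing here is the actual Mu program):

```python
import random

def memorize(history):
    """The ultimate overfit 'model': a lookup table of everything seen.
    It 'mimics' its training data with zero deviation..."""
    table = dict(enumerate(history))
    def model(day):
        return table.get(day)  # ...and has no answer for tomorrow
    return model

rng = random.Random(42)
history = [rng.gauss(0, 1) for _ in range(100)]  # stand-in for daily returns
model = memorize(history)

in_sample_error = sum(abs(model(d) - v) for d, v in enumerate(history))
assert in_sample_error == 0.0   # perfect fit on past data
assert model(100) is None       # no predictive content at all
```

So "zero deviation over 22 years of daily data" is exactly what an overfit model would show in-sample; the informative test is out-of-sample data the model has never seen.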

 
