Senior member | Total posts: 370 | Joined: Oct 1, 2012
I started a new thread here to address some of the questions Don and Jarrod raised in the X-Games thread.
So, can a chatbot make inferences? Again I am going to use Mitsuku as an example. (The fact is that all of us tend to talk about our own projects because we are more familiar with them, but it's nice to be able to use someone else's work so as to not become tiresome.) Yes, I have seen Mitsuku make inferences.
What color is a red ball?
Red
What color is the red sea?
Blue
And I have seen an example where Steve coded an AIML trigger that inferred that if a human has 2 legs, and Dave is a human, then Dave has 2 legs.
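I don't know how Steve actually coded that trigger, but the bare logic is simple enough to sketch. Here's a minimal toy version (Python just for illustration; the facts and names are mine, not Mitsuku's actual AIML):

```python
# A minimal sketch of the two-legs syllogism: combine a general rule
# ("humans have 2 legs") with a specific fact ("Dave is a human") to
# derive a fact that was never stored directly. All names and data here
# are my own illustration, not Mitsuku's actual AIML.
is_a = {"dave": "human"}             # specific facts
attributes = {"human": {"legs": 2}}  # general rules per category

def infer(entity, attribute):
    """Answer via the entity's category rather than a stored fact."""
    category = is_a.get(entity.lower())
    if category is not None:
        return attributes.get(category, {}).get(attribute)
    return None

print(infer("Dave", "legs"))  # -> 2, although "Dave has 2 legs" was never coded
```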
The question you asked is the same one I was asking: what is intelligence? I do not think anyone has ever answered this with any degree of satisfaction. Do we need to establish some formal guidelines? That hasn't worked very well with humans; maybe it can be applied to machines with more success. I believe there have been some suggestions in the field towards this end. Maybe some of the other guys and girls can jump in.
V
Posted: Mar 25, 2014 [ # 1 ]
Experienced member | Total posts: 84 | Joined: Aug 10, 2013
I think the important thing here is to identify precisely what’s being asked. ‘Intelligence’ is a word. It has whatever definition is most useful for communication. That said, I don’t think that the question we really care about here is which particular labels make for the most efficient communication. So what do we mean? It’s worth noting that the question was asked in the context of people criticizing Watson and/or chatbots as not ‘really’ being intelligent. So for now I’ll assume that the question is, “When someone makes a claim like that, what do they generally tend to mean by it?” Well, for myself, I can answer that at least in part by pointing you to my posts in the thread that inspired this one.
Jarrod Torriero - Mar 24, 2014:
I agree that something can be intelligent even if that intelligence is the result of a very different process to the human mind, but there are plenty of valid criticisms of any claims of intelligence from Watson more specific than ‘not how humans do it’. It cannot form abstract models of the world or of itself. It does not possess advanced reasoning abilities. It cannot comprehend the notion of a process (something that my own research and reflections in the pursuit of AGI have led me to believe to be crucial to any intelligent agent that has to actually function in the real world). Watson is probably closer to AGI than any chatbots that I’m aware of, but it still has a long way to go, and the difference between Watson and true AGI is qualitative, not quantitative.
@Vincent
Define ‘intelligent’. Better yet, let’s forget the word ‘intelligence’ and discuss the underlying matters. Why do we care about this thing we call intelligence? Well, intelligence is why we’re here today. Our ancestors looked at a rock and realized that, if thrown, that rock could be a dangerous weapon. This was the result of a detailed understanding of the way the world worked. Probably not an explicit understanding, but that’s ok. It seems to me that what really makes our intelligence powerful is that it lets us understand the processes underlying a domain and use that understanding to solve problems within that domain. To see the way the world is and the way we want the world to be, and find ways to make the former more like the latter. So let’s talk about that. Can Mitsuku do that? Can any chatbot that you know of?
I know that the derivative of ln(|x|) is 1/x. I learned that by rote in high school. But if you were to delete that knowledge from my brain, I could regenerate it from more basic principles in about twenty seconds. What chatbot can do that? Off the top of my head, many chatbots can perform arithmetic operations on arbitrary input, so there’s that. But that’s not much. It’s true that humans spend a lot of time on rote learning, but that’s not what makes us intelligent.
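(For what it's worth, the twenty-second regeneration goes roughly like this: instead of recalling the rule, differentiate the inverse relation.)

```latex
% Regenerating d/dx ln|x| = 1/x from basic principles:
% for x > 0, write y = ln(x), i.e. e^y = x, and differentiate both sides.
e^{y} = x
\;\Rightarrow\; e^{y}\,\frac{dy}{dx} = 1
\;\Rightarrow\; \frac{dy}{dx} = \frac{1}{e^{y}} = \frac{1}{x}
% For x < 0, the chain rule on ln(-x) gives (-1)/(-x) = 1/x as well.
```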
When I talk about intelligence in a context like this, I tend to be talking about the trait that allows us to creatively solve problems and come up with accurate models of the world. This is the trait that has elevated us to dominion over the planet, and this is the trait that I find interesting. I also think that this interpretation probably has a fair bit of overlap with what other people tend to mean when they make such claims about the AIs of today, but here are a few other things that people might mean:
* Sentience (basically the notion of it subjectively ‘feeling’ like something to exist).
* ‘Spontaneous’ motivation (deciding one’s own goals and desires, probably based on emotion - this has some overlap with sentience).
* Self-awareness (being aware of one’s self as a persistent entity within the world).
* Morality (i.e. possessing moral intuitions).
Self-awareness would partially overlap with my own definition in that it is to an extent implied by any sufficiently advanced capacity for world modelling, but otherwise I would generally not be referring to any of these when I used the word ‘intelligence’. I also wouldn’t claim Mitsuku or chatbots in general to have zero intelligence, I just wouldn’t rate their intelligence very highly.
There are numerous reasons why output which would generally imply intelligence in humans is often a much weaker signal coming from chatbots. As an example, consider the way human memory works. We can memorize by rote without understanding the underlying meaning, but it’s not easy for us. We find it much easier to remember things that we understand. As a result, when we talk in depth about complex matters it’s a pretty strong signal that we understand those matters. On the other hand, a computer program can perfectly remember any data to which it is exposed provided that it is programmed to do so (which isn’t hard) and there is sufficient available storage (would be a problem for some agents, but not likely for simple agents living in a world of mere ASCII text). Coming from a chatbot, a paragraph explaining a complex concept is not nearly so reliable a signal of intelligence as it would be from a human (though it participating productively in an in-depth discussion about said concept might be). Chatbots and humans can both say the words, but coming from the human those words merely sit atop a complex underlying network of understanding. With a chatbot, the words are rarely reducible to any great degree. They’re just words.
Incidentally, if I had to give a formal definition of intelligence, it’d probably be something like ‘expected optimization power across arbitrary domains’.
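(An existing formalization along roughly those lines is Legg and Hutter's 'universal intelligence' measure, which scores an agent by its expected performance across all computable environments, weighting simpler environments more heavily:)

```latex
% Legg & Hutter's universal intelligence: E is the set of computable
% environments, K(mu) the Kolmogorov complexity of environment mu, and
% V_mu^pi the expected total reward of policy pi in environment mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```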
EDIT: It occurs to me that I’ve used the word ‘understanding’ in this post without much explanation. There are many ways this could be defined, but it seems to me that the purest (albeit informal) definition is that you understand something when you can apply it outside of the context of merely answering verbal questions about that thing. A chatbot can say something about the world, and you could argue that it therefore ‘knows’ that thing, but the question is whether or not that ‘knowledge’ pays off in terms of the chatbot’s ability to predict the consequences of actions within the world.
EDIT2: ^^Ok, that definition of ‘understanding’ has some pretty obvious pitfalls, but hopefully you understand the point I’m trying to make anyway.
Posted: Mar 25, 2014 [ # 2 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
Vincent Gilbert - Mar 25, 2014:
What color is a red ball? Red
What color is the red sea? Blue
Begging your pardon, these are not inferences but direct knowledge. The conclusion about Dave having two legs because he is human, arrived at by combining two other facts, is an inference. Since I regard inference as an intelligent process, I regard the inferring functions of Mitsuku as intelligent, other functions not.
The first time I told someone I was making an AI, they asked me what my “definition of intelligence” was. I failed to see why this mattered. Surely I do not need to define an apple to paint it, when I have a perfectly good example of one. I’ve since seen several discussions about other people’s definitions, and concluded that there was only one single definition:
The definition of intelligence is “me”.
Everyone seems to define it as such, "intelligent like me", because they themselves are the closest example of intelligence that they truly know. A person led by goals will be convinced that intelligence can not exist without a goal. A person led by emotion will feel that emotions are somehow part of it. A person led by philosophy will theorise that sentience is required, and I, a person led by logic, will only accept measurable criteria. Since all people differ, all their definitions will differ. And some people will still deny intelligence if it stares them right in the face. Intelligence used to be defined by the mental ability to do math or play chess. As soon as those were achieved, they just changed the definition.
Personally I would say that a red sea is red, as opposed to the Red Sea.
Posted: Mar 25, 2014 [ # 3 ]
Thunder Walk | Senior member | Total posts: 399 | Joined: Feb 7, 2009
If you're going to cite Mitsuku as an example, we have to admit that the "intelligence" lies with Steve, because AIML is a pattern-matching system, one which that particular botmaster has taken to a higher level.
It's a story that's been told more than once, but a movie slated for release on April 18, 2014, called Transcendence, taps into the topic by depicting what I believe will be the only true form of Artificial Intelligence: human consciousness merged with software.
http://www.youtube.com/watch?v=VCTen3-B8GU
Posted: Mar 25, 2014 [ # 4 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
Hmm… I'm not sure that I agree with that, Don. "Red sea" is direct knowledge; "red ball" is an inference. I could be wrong, and I'll give it some additional thought, but here's my thinking at this point. The sentence "What color is a red ball" is rather vague: "red ball" could be a reference to actual coloration, or it could be simply a naming convention, such as "Red Bull" (the drink). Consider these examples:
"What color is a ball which is red in color"
This would be direct knowledge, because we are including a direct reference to the actual coloration as data in the sentence itself.
[AI POSSESSING SOME FORM OF VISUAL SENSOR]
(holds up a red ball)
"What color is this red ball?"
This would be direct knowledge, because a visual analysis of the object has confirmed that what was an ambiguous reference at the point where the interrogatory was posed is, in fact, red.
Because the "red sea" question has been used by many developers, I'll use an example where RICH (sorry, I swore I wouldn't use a personal example, but… LOL) succeeded and failed. Because RICH uses an AI_SUB_SYS to self-organize data in the background, it correctly made the association between "What color is the red sea" and a question it had not encountered before, "What color is the black sea?". Success. But when posed the question "What color is a Red Bull", it tanked. Failure. I am assuming that the failure to make the association was due to the fact that "THE red sea" and "THE black sea" indicated single objects which were naming conventions, but in "A red bull", "red" was incorrectly classified as a probable attribute.
So, "what color is a red ball" requires that the AI infer that "red" is a reference to coloration, and some secondary mechanism is required to weed out incorrect references that are in fact naming conventions. To show that the AI is actually making an inference and not simply spitting back hard-coded responses, we might pose an entirely different category of attribute. (Actual conversation:)
Vince: What shape is a Square Bear?
RICH: No offence, but how dumb do you think I am? A Square Bear is square.
Vince: Well, that was stupid.
RICH: Wow! That hurt. You calling me stupid is like a monkey telling a test pilot he can't fly.
(Should never have taught him to be sarcastic)
So, back to the chatbot as AI: I think you would discover in similar fashion that Mitsuku is not hard-coded (direct knowledge) but is in fact making the inference that "red ball" is an attribute, and I would offer the "Red Bull" failure and the "Square Bear" failure as proof of concept. These were incorrect inferences.
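To make that concrete, here is a toy version of such an attribute rule (purely my own sketch, nothing like RICH's actual code) that reproduces both the successes and the Red Bull failure:

```python
# A toy "what <attribute> is a <adjective> <noun>" rule that simply echoes
# the adjective back as the attribute. It handles red ball and square bear,
# and fails on Red Bull for exactly the reason described above: it has no
# secondary mechanism for weeding out naming conventions.
import re

def answer(question):
    m = re.match(r"what (color|shape) is a (\w+) (\w+)", question.lower())
    if m:
        _attribute, adjective, noun = m.groups()
        return f"The {noun} is {adjective}."
    return "I don't know."

print(answer("What color is a red ball"))     # The ball is red.    (inference, correct)
print(answer("What shape is a square bear"))  # The bear is square. (inference, wrong perspective)
print(answer("What color is a Red Bull"))     # The bull is red.    (naming convention, wrong)
```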
So, back to the question: what is intelligence? I still have no idea, and evidently I am dumber than a monkey.
V
Posted: Mar 25, 2014 [ # 5 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
If one knows “red sea” as a compound word, that meaning gets priority. If one does not know this exception and it is not written in capitals, one must assume that “red” means the colour red. And a square bear is clearly square.
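In code, those two straightforward rules could be as simple as this (the tiny lexicon is an illustrative assumption, not my program's actual data):

```python
# Rule 1: if the phrase is a known compound name, its stored meaning wins.
# Rule 2: otherwise, assume the adjective literally names the colour.
compounds = {"red sea": "blue", "black sea": "blue"}  # assumed lexicon

def colour_of(phrase):
    phrase = phrase.lower()
    if phrase in compounds:       # rule 1: compound word gets priority
        return compounds[phrase]
    return phrase.split()[0]      # rule 2: take the adjective at face value

print(colour_of("red sea"))   # blue (known exception)
print(colour_of("red ball"))  # red  (literal colour)
```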
When I was a kid I always assumed that the Red Sea referred to both the name and the colour, because the sea was enclosed by desert and thus probably murky with red sand. This was the only logical assumption, since other seas were clearly transparent.
If you want to see more people attempt to define intelligence, here you go: ai-forum
Posted: Mar 25, 2014 [ # 6 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
I'm really trying hard to stay out of this discussion…

Don Patrick - Mar 25, 2014:
When I was a kid I always assumed that the Red Sea referred to both the name and the colour, because the sea was enclosed by desert and thus probably murky with red sand. This was the only logical assumption, since other seas were clearly transparent.
That is a very nice example of a (partly) ungrounded concept (or symbol) that gets handled through the use of analogies.
Posted: Mar 25, 2014 [ # 7 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
I think that makes the point I was angling for, Don. You inferred that "square" was an attribute which was classified as shape, and from that inferred a meaning for "bear". In this case, however, the proper answer could only have been obtained by having the correct perspective: "Square Bear" refers to Steve Worswick, and unless I am mistaken, "square" might be taken as the opposite of "hipster", maybe from Steve's days as a DJ? (And again, I have no empirical data, so I am making an inference. I know Steve was involved in the entertainment business, I was involved in the entertainment business, and I have heard the term applied elsewhere as a retro moniker for someone involved in tech and music, so somewhere in my brain some sort of Bayesian algo says this is the highest-probability source of the nickname.)
So, I'm going to stick with my original thought: as long as I can plug in different variables, this algorithm is making an inference. Simple, yes, but still an inference, given the ambiguity of the sentence and the absence of empirical data.
V
Posted: Mar 26, 2014 [ # 8 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
I'm going to respond to my own post here, because I think we are all getting at the same point, albeit perhaps from different angles. I read some of your posts on the AI forum, Don, and had to marvel at your patience after seeing some of the responses you received. Thank god this forum has some excellent moderators!
On the subject, and on the field in general, here's my take. What I love about doing research in this area is the "wide open" nature itself. By that I mean: we all read papers written by some truly brilliant minds, and we see some truly spectacular results in some areas (some may appear at first to be not so spectacular, but… who's to say). Some are by well-funded teams of PhDs in billion-dollar research facilities, some by guys in their basements. And they all have one thing in common: they are all equally clueless. (Wait until I finish before judging, LOL.)
I mean that in the best possible way (and by "they" I mean "we"), because it's the sort of cluelessness that existed in cosmology in the days of Galileo. And that's exciting. It's a particularly rudderless, pure exploration, because what we are talking about is designing something that we have yet (as far as I know) to definitively define in ourselves. So… by definition, the PhD and the hobbyist are equally… clueless. And that's where truly brilliant leaps in discovery occur. And who's to say who might make that leap. So I don't care if your chatbot came in last place in whatever contest took place; you should be encouraged. If you happen to be a PhD in a well-funded research lab and your latest promising lead… tanked? Start over. Edison is said to have remarked on his thousand attempts: I did not fail 999 times; I succeeded 999 times in discovering how NOT to make a light bulb. It's exciting because, unlike designing the first airplane, there is a component here that implies a search not just for how to emulate a human, but for what IS a human. Where did he come from? Where is he going? Was he created after all? And that is something that does not exist in any other field on earth.
So best of luck to everyone in their efforts; I enjoy reading about them and interacting with one and all. And when I have been asked what model I am using, or what I am doing…
I have no clue. I'm just following a path to see where it goes; sometimes the sights along the way are… interesting.
V
Posted: Mar 26, 2014 [ # 9 ]
Guru | Total posts: 1081 | Joined: Dec 17, 2010
Vincent Gilbert - Mar 25, 2014:
What color is a red ball?
Red
What color is the red sea?
Blue
The second question is a pet peeve of mine since it cost me some points in a chatbot contest. You assume the sea is not red (and that is a bad assumption for the Red Sea).
AI LOG:
USER: What color is a red ball
AI: The ball is red. Are you color blind? My favorite color is steel blue.
USER: What color is the Red Sea?
AI: The Sea is Red. Is that a tough question for a human? Today, it is widely accepted that the Red Sea is named for an occasional bloom of the cyanobacterium Trichodesmium erythraeum, which clouds and muddies the usually translucent blue-green waters. These cyanobacteria appear as red and pinkish blankets on the surface of these waters. After blooming, the Trichodesmium erythraeum die, and they turn the sea reddish-brown.
USER: What is in a cup of coffee?
AI: In a cup of coffee, a cup contains coffee. Do most AIs get that wrong?
This topic relates back to the thread "Is this an example of a thinking machine".
In case you missed it, an attempt at an equation for intelligence has been proposed:
F = T ∇ S_τ
Meaning: Intelligence is a Force that Maximises the Future Freedom of Action
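For intuition, here is a toy reading of that equation (my own sketch, not the authors' implementation): an agent estimates the entropy of its reachable futures by random rollouts and always moves to where that entropy is greatest, i.e. it acts to keep its options open.

```python
# Toy illustration of the "causal entropic force" idea behind F = T ∇ S_τ:
# from each candidate next state, estimate the entropy of reachable futures
# over a horizon tau with random rollouts, then move toward the state with
# the most diverse futures. The grid world and all parameters are my own
# illustrative assumptions.
import random
from collections import Counter
from math import log

SIZE = 11         # the world is a SIZE x SIZE grid
TAU = 8           # time horizon (lookahead steps)
ROLLOUTS = 200    # Monte Carlo samples per entropy estimate
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, move):
    """Apply a move, clamping at the walls of the grid."""
    x, y = state
    dx, dy = move
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def future_entropy(state):
    """Estimate the entropy (nats) of states reachable within TAU steps."""
    ends = Counter()
    for _ in range(ROLLOUTS):
        s = state
        for _ in range(TAU):
            s = step(s, random.choice(MOVES))
        ends[s] += 1
    return -sum(n / ROLLOUTS * log(n / ROLLOUTS) for n in ends.values())

def entropic_move(state):
    """Pick the move whose successor keeps the most future freedom."""
    return max(MOVES, key=lambda m: future_entropy(step(state, m)))

# Starting in a corner, the agent drifts toward the open centre of the
# grid, where the set of reachable futures (and hence S_tau) is largest.
s = (0, 0)
for t in range(15):
    s = step(s, entropic_move(s))
    print(t, s)
```

Crude, but it shows why such an agent can look "goal-directed" without any explicit goal.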
Posted: Mar 26, 2014 [ # 10 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
I had not seen that yet, Merlin; thanks for sharing! As I said, there are truly brilliant people out there doing really cool work. Interesting you should post this one (and this is slightly tongue in cheek): if that equation were applied to machine logic in the real world, we might call it the Skynet equation. I'll scribble this down when I get the chance (if I get dragged away from what I'm doing much more, I'm going to get killed at work), but I'm thinking that…
If we assume a number that represents the total number of tasks now undertaken by humans, calculate the percentage given to machines and call that p, and then calculate the percentage of those tasks that include the ability to produce a delta-tau condition in the equation above, then as F—> we can calculate the point where machines exhibit more F than humans. Gotta get back to work, but…
I'll be back
Posted: Mar 26, 2014 [ # 11 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
Vincent Gilbert - Mar 26, 2014:
then as F—> we can calculate the point where machines exhibit more F than humans.
I was thinking something along the same lines: if we put that equation to the test with humans, we might find out there are many humans without intelligence.
Thinking of that, it actually makes sense.
Posted: Mar 26, 2014 [ # 12 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
Vincent Gilbert - Mar 26, 2014:
Thank god this forum has some excellent moderators!
You can imagine why I have settled down here. I pointed to the other forum for examples of definitions. I put up with it to see if there was a pattern of truth even in fruitcake opinions, and the pattern I saw was that their definitions described their own personalities. Combined with a teacher's remark that his students had as many definitions of "intelligence" as there were students, I drew my conclusion.
If I have been protesting a little too hard on the "red sea" question, it is because I vaguely recall an explanation of Mitsuku's inner workings on finding the colour of things. Distinguishing ambiguity can be an intelligent process, but that is not necessarily the case, as shown by my example of two straightforward rules. My program has a ton of intertwining rules to distinguish ambiguity for all possible words, but I am not sure whether to call this intelligent.
Since the definition of intelligence is so hard to agree on, perhaps we could agree on what we do not consider intelligent. People seem to have clear opinions on that, at least. "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Sherlock Holmes
Posted: Mar 27, 2014 [ # 13 ]
Senior member | Total posts: 370 | Joined: Oct 1, 2012
Hans
Sorry… translated from "Vince scribble shorthand", it just means "approaches". LOL
V
Posted: Mar 27, 2014 [ # 14 ]
Experienced member | Total posts: 75 | Joined: Nov 16, 2013
My 2 cents. Well, in Canada we only have nickels now.
I think "what is an intelligent being" is a more important question than "what is intelligence".
A 2-year-old human is an intelligent being, but not very smart. They can't do math, can barely talk (an advanced chatterbot would put them to shame), they know little about the world or how things work, and they can't play chess or Jeopardy. Comparatively, many computer programs are much more "intelligent" than them, except that the child can learn. It might take them 10-20 years, but they can learn almost anything (well, maybe 10% of humans can learn almost anything; the rest have trouble with calculus).
So my simple definition of intelligence is the ability to learn.
Posted: Mar 27, 2014 [ # 15 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
James Sutherland - Mar 27, 2014:
So my simple definition of intelligence is the ability to learn.
By that definition we are already there: machine learning is pretty much covered in many AI projects. However, I guess you mean something else, something that is close to my own definition of intelligence: the ability to learn through 'experience'.
For me, and I stated this before (here on the board), a machine is human-level intelligent when it can answer the question ‘how does that make you feel’ (about anything).