
Intuition and Empathy
 
 

So I have just started to dive into all this chat bot stuff. I have read a bit, made a basic bot, started digging around in her brain, and had a lot of conversations with other bots to see what makes them tick and where they surprise me or fall short.

The one thing I have picked up on (ok, it’s only been a few days so I may not have a complete picture yet) is that bots are a bit weaker with a certain type of human response. Humans tend to be “leading” in their conversations. We give clues to our conversation partner about what we want them to ask us about. And our partner, out of politeness, obliges.

I was thinking about this, and about how bots could learn or be programmed to respond to these subtle conversational cues. And I thought about how it is that we humans know what to say when the conversation is leading in some way. Nobody teaches it to us; we just know it, intuitively. But how do we know it if we aren’t able to experience the mind of our conversation partner? Nobody ever just tells us to our face that we were supposed to ask them about their health when they mentioned a doctor’s appointment.

I realized that even though we don’t experience the mind of another human, we do know it intimately, because we have a mind too, and we get the same fears and worries and ups and downs. We know what we would want someone to ask us if we told them we had a doctor’s appointment. Which is empathy.

So that got me thinking: how could a robot ever learn to pick up on these leading cues (barring a massive programming effort)? Especially when it says something like “Oh, that’s nice that you had a doctor’s appointment. So tell me about your sister?” There is no opportunity to learn by interaction and no ability to learn via empathy, because a robot mind is not a human mind.

Anyway, I am such a noob I shouldn’t even be talking about this stuff yet, because maybe it’s already been done to death, or maybe there is some simple solution I haven’t realized yet. But I just wanted to muse out loud about it.

It made me wonder about this tendency for bots to flit from one conversation subject to the next. What if, just to see what would happen, a bot was programmed to be like a dog with a bone about anything the human said? To latch on and not let go until it was completely hashed out.

LOL. That just made me think of a trick my husband uses on my mom all the time when she is chatting his ear off. Whenever she pauses to let him speak he just says “And theeennn?” It’s a total spoof of that scene in “Dude, Where’s My Car?”, but she doesn’t catch on and keeps expanding on what she was talking about. And he doesn’t have to expend a lick of effort in the conversation. It is hilarious, and I always end up kicking him for it because I spend the whole visit trying not to bust out laughing whenever he says “And theeenn?”

Ok I should probably stop talking now. Sorry for the ramble.

 

 
  [ # 1 ]

Actually, that is a good question. Context is still very hard for computers, let alone human psychology. There are a few techniques one could employ, such as the topic system in ChatScript: “remember” the main topic the user was on, then fall back to that topic when the conversation stumbles. Similarly, one could use knowledge databases like WordNet to deduce what people are on about, noting that “meat”, “knife” and “cooking” all correlate with the topic “food”, but this might be a bit computationally intensive.
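The remember-and-fall-back idea can be sketched in a few lines of plain Python (the word lists and class here are invented for illustration, not ChatScript’s or WordNet’s actual machinery):

```python
# Toy topic tracker: guesses a topic from keyword correlations
# (a hand-made stand-in for a WordNet-style lookup), and falls back
# to the last remembered topic when the input gives no clue.

TOPIC_WORDS = {
    "food": {"meat", "knife", "cooking", "recipe", "dinner"},
    "health": {"doctor", "appointment", "sick", "hospital"},
}

class TopicTracker:
    def __init__(self):
        self.current = None  # last known topic

    def update(self, utterance):
        words = set(utterance.lower().split())
        # Pick the topic whose keyword set overlaps the input most.
        best, hits = None, 0
        for topic, keywords in TOPIC_WORDS.items():
            n = len(words & keywords)
            if n > hits:
                best, hits = topic, n
        if best:
            self.current = best  # remember it for later
        return self.current      # else fall back to the remembered topic

tracker = TopicTracker()
print(tracker.update("I bought some meat and a knife for cooking"))  # food
print(tracker.update("what do you think?"))  # still food (fallback)
```

A real system would replace the hand-made keyword sets with an ontology lookup, which is where the computational cost comes in.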

In terms of feelings and empathy, I don’t think anyone has yet built a program with an emotional model of humans to predict their emotional needs, but it isn’t impossible. One simple technique for it is sentiment analysis, wherein the computer checks whether the input contains negative or positive words. Advanced commercial applications even categorise inputs this way into “sad”, “happy”, “angry”, etc, giving some general indication of how the user feels and thus how one might respond. This is however hard to implement in pattern matching if you ask me. I think it is possible to set a few custom variables in AIML and then script multiple responses for each question, one for each perceived emotion, but that would be a lot of manual work.
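A minimal sketch of that word-list style of sentiment analysis, done programmatically rather than scripted in AIML (the word lists and canned replies are made up for the example):

```python
# Minimal word-list emotion tagger: checks the input for cue words
# and buckets it into a rough mood, then forks the reply on that mood,
# like scripting one response per emotion would, but in code.

EMOTION_WORDS = {
    "sad":   {"sad", "lonely", "miss", "cry", "worried"},
    "happy": {"great", "love", "wonderful", "glad", "cool"},
    "angry": {"hate", "furious", "annoyed", "stupid"},
}

def perceived_emotion(utterance):
    words = set(utterance.lower().replace("!", "").split())
    for emotion, cues in EMOTION_WORDS.items():
        if words & cues:
            return emotion
    return "neutral"

def respond(utterance):
    replies = {
        "sad": "I'm sorry to hear that. Do you want to talk about it?",
        "happy": "That's great to hear!",
        "angry": "That sounds frustrating.",
        "neutral": "Tell me more.",
    }
    return replies[perceived_emotion(utterance)]

print(respond("I feel so sad and lonely"))
```

The commercial systems mentioned above do this with trained classifiers rather than fixed word lists, but the input/output shape is the same.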

 

 
  [ # 2 ]
Don Patrick - Jan 7, 2016:

In terms of feelings and empathy…One simple technique for it is sentiment analysis, wherein the computer checks whether the input contains negative or positive words. Advanced commercial applications even categorise inputs this way into “sad”, “happy”, “angry”, etc, giving some general indication of how the user feels and thus how one might respond. This is however hard to implement in pattern matching if you ask me. I think it is possible to set a few custom variables in AIML and then script multiple responses for each question, one for each perceived emotion, but that would be a lot of manual work.

Hard to implement in pattern matching within AIML or other markup-type bots, but fairly straightforward to do programmatically, then letting the pattern matching part take care of the actual response (forking for the appropriate mood). WordNet is pretty outdated IMO; I prefer to use something more like a deep/high-dendrite knowledge reference, like Wolfram|Alpha.

 

 

 
  [ # 3 ]

Thanks for the replies :D

 

 
  [ # 4 ]

This discussion is very interesting because it touches, in my opinion, on a shortcoming of most chatterbots. Chatterbots are generally designed to answer a single question or sentence. It’s a Pavlovian reflex with roughly the intelligence of an earthworm.

Conversely, a real discussion between two humans is a real strategy. Each one holds out a pole to the other so that the discussion can continue: to revive it, soothe it, feed it. Replies like “And then?” or “I have a doctor’s appointment” are good examples. Each reply is a pawn we advance, as in a chess game, and it requires analysis and feedback from each party so that the discussion can continue.

I had read a good analysis of Matei Visniec’s theatre play “Pockets Full of Bread” that illustrates this. It is the story of two men talking around a well into which a dog has fallen. But instead of finding a solution to save the poor dog, the whole piece is built on language, on the various strategies for feeding the discussion ... and doing nothing. The two men argue, reconcile, negotiate, but the important thing is that the dialogue continues.

Here it is (in French): http://gerflint.fr/Base/Roumanie6/despierres.pdf

Abstract: This article studies the semantic and pragmatic functioning of dialogue in
Matei Vişniec’s play, Du pain plein les poches. The analysis shall rely on conversational
pragmatics and enunciative linguistics, and deal with the dynamics of interaction and
speech strategies, in order to bring forth the mechanisms involved in the construction
of a speech induced by cowardice.

Understanding this mechanism could improve the way we construct our chatterbots.

 

 

 
  [ # 5 ]

I haven’t learned enough yet to speak definitively on this, but it seems that, while the AIML pattern-template format is a good framework, something else is missing to get actual human-type understanding into an AI. It doesn’t seem like the simple Q&A format will be able to achieve that.

 

 
  [ # 6 ]

Although AIML is used by probably 90% of people as a simple pattern matcher, it is capable of far more than that.

 

 
  [ # 7 ]

I was going to ask you to tell me all of what it is capable of, but that might get a little long, eh? How about some links to articles? I would love to start learning that now before I get too ingrained in the pattern matching.

 

 
  [ # 8 ]

I don’t have any articles unfortunately but if you are just starting out, I would stick with the pattern matching for now. If you have a specific example of something you are trying to achieve (or a sample conversation), I will be happy to try and code a category or two for you.

You can code the bot to reply differently depending on how it perceives you to feel or what it knows about you, which I guess could count as emotional content. It can also stick to topics if you so wish.

 

 
  [ # 9 ]

Well, I noticed that when bots grab onto topics they can be awkward, because the * may pick up a whole string of a sentence that really isn’t any kind of topic. Could you program a bot to read the string that gets caught in the * and find any words it has a programmed topic for?

For example, say someone said “I really love to watch Doctor Who, it’s the coolest show ever.”
Say the bot’s code picks up “Doctor Who, it’s the coolest show ever” as a topic from its programming. But that is an awkward topic and probably not going to yield much. Now say the bot has a “Doctor Who” topic, which would be a great match. Could the bot be programmed to read “Doctor Who, it’s the coolest show ever”, pick out something it does have a topic on, like “Doctor Who”, and make that the topic instead of the entire string?
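That lookup is easy to sketch programmatically: scan the wildcard capture for any phrase the bot actually has a topic for, and only set a topic on a hit. A toy Python version (the topic list is invented):

```python
# Instead of taking the whole wildcard capture as the topic,
# scan it for any phrase the bot has a programmed topic file for.

KNOWN_TOPICS = ["doctor who", "star trek", "football"]

def pick_topic(star_capture):
    text = star_capture.lower()
    for topic in KNOWN_TOPICS:
        if topic in text:
            return topic
    return None  # no programmed topic found, so don't set one

captured = "Doctor Who, it's the coolest show ever"
print(pick_topic(captured))  # doctor who
```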

 

 
  [ # 10 ]

Yes. You could do a pattern of “_ DOCTOR WHO *” so it ignores all the extra words around it and then set a topic to Doctor Who, which could then contain the answer to questions like “What year did it first start?”, “Do you like it?” and so on. The bot would know from the topic that “it” meant Dr Who.

I did one last year after the Paris shootings. If anyone gave any input with “PARIS” in it, Mitsuku would start talking about Paris and keep on topic. It’s always important to have a category that gets you out of a topic though. Only the most hardcore of fans want to talk about Dr Who for hours on end. :)
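A rough Python sketch of that latch-and-release behaviour, with a made-up exit phrase standing in for the AIML escape category (the keyword, replies and phrases are all invented for the example):

```python
import re

# Any input containing the keyword latches the topic; an explicit
# exit phrase releases it, so the user is never stuck in the topic.

class Bot:
    def __init__(self):
        self.topic = None

    def reply(self, text):
        t = text.lower()
        if "change the subject" in t or "stop talking about" in t:
            self.topic = None  # the all-important exit category
            return "OK, what shall we talk about instead?"
        if re.search(r"\bdoctor who\b", t):
            self.topic = "doctor who"  # latch onto the topic
        if self.topic == "doctor who":
            # In-topic, pronouns like "it" resolve to the topic.
            return "Ah, Doctor Who! It first aired in 1963."
        return "Tell me more."
```

Without the exit branch, the bot would answer every subsequent input from inside the topic, which is exactly the "hours on end" problem.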

 

 
  [ # 11 ]

Ok.
Oh, and I am sorry to have to tell you this, but I had to tell Mitsuku that Daleks were technically not robots. :) I don’t know what she will do with that, LOL.

 

 
  [ # 12 ]

I know they’re not as well, but I like to keep that in her database. :)

 

 
  [ # 13 ]

While AIML is capable of a lot, it does strike me that it was not built for complicated analyses of the input, and it takes a lot of manual tinkering as a result: for instance, specifying a <that> tag at every input and output in case someone makes a reference later, and finding ways around the lack of sentence splitting.

Having said that, the alternatives may be just as much work. To get a degree of understanding in an AI, you need a solid phase in between input and output to examine what is being said. Likely including:
- Natural Language Processing (NLP): There are grammar parsers available such as Link-grammar and Stanford’s grammar parser. They help determine the subjects of a sentence, split input at commas and link-words, and translate pronouns like “it” to what they refer to.
- Knowledge: Alternatively to writing all the answers down yourself, the AI can consult ontologies such as WordNet or OWL. These are databases containing word meanings and common facts about them. It could tell the AI for instance what a doctor is for.
- Reasoning: AIML can do inferences, as Steve has proven, but it looks like one has to pre-define both the arguments and the conclusions. While I don’t have ready-to-use solutions, the aforementioned ontologies lend themselves well to use with an inference engine that takes any two facts as input to draw conclusions from. This could be used to conclude, for instance, that if you’re going to a doctor, and a doctor’s purpose is to operate on people, then you’re going to the doctor to be operated on. Then you could offer that as a hypothesis: “Oh, are you going to be operated on?”.
- Decision-making: This would allow the AI to run the user’s situation through its own decision process and ‘predict’ what the user would do or wants to do, or advise what they should do. Unfortunately this is largely an unsolved problem in A.I. There are some things one could do: one university had their AI read a lot of text and note pairs of verbs that co-occurred (buy-pay, need-get, break-repair), together with how often each pair was encountered. The most frequently occurring pair would then presumably be the most natural solution to, or effect of, the first verb. I wouldn’t recommend it, but it’s one way.
- Pragmatics / dialog managers: This part would ultimately decide how to respond, using all the above as background information: Does the user just want a factual reply? Do they want an opinion? Did they state a problem that needs advice? Did they express a feeling (detected e.g. via sentiment analysis) that requires sympathy? Should we “uh-huh” and let the user continue explaining, or should we summarise what they said to show that we understood the story so far? Or is the input so brief that it’s our turn to talk now? This area particularly hasn’t seen much research, other than dividing observed dialog into speech acts. To make these decisions, you’d need a system designed to work with context a lot. Personally I take more hints from psychology books on social interaction between humans than I do from A.I. research.
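To make the division of labour concrete, here is a structural sketch of those phases chained together, with trivial stubs standing in for a real parser, ontology, inference engine and dialog manager (all names and facts are placeholders, not any particular library):

```python
# Skeleton of the input-to-output phases listed above. Each stub
# stands in for a real component; only the data flow is the point.

def parse(text):                       # NLP: tokenise / parse grammar
    return {"tokens": text.lower().rstrip(".!?").split()}

def look_up_facts(parsed):             # Knowledge: consult an "ontology"
    facts = {"doctor": "a doctor treats sick people"}
    return [facts[t] for t in parsed["tokens"] if t in facts]

def infer(facts):                      # Reasoning: draw a conclusion
    if any("treats sick people" in f for f in facts):
        return "the user may be ill"
    return None

def decide_reply(conclusion):          # Pragmatics: choose the speech act
    if conclusion == "the user may be ill":
        return "Oh, I hope everything is all right. Are you OK?"
    return "I see. Tell me more."

def reply_to(text):
    parsed = parse(text)
    return decide_reply(infer(look_up_facts(parsed)))

print(reply_to("I have a doctor appointment tomorrow"))
```

Swapping any stub for a real component (a grammar parser, WordNet/OWL, an inference engine, a dialog manager) keeps the same overall shape.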

Basically, see the “goals” part of https://en.wikipedia.org/wiki/Artificial_intelligence
There is no complete package for this level of conversation the way AIML is a complete package. There are only components that would have to be tied together through a programming language like Python or C++, two of the best-supported languages in this area. ChatScript seems to be more accommodating than AIML in the areas of grammar parsing and tying into WordNet, though I understand that AIML 2.0 has also started to support grammar and maybe more.

 

 
  [ # 14 ]

Cool. Food for thought. :)

Funny thing, I had only ever seen NLP as an abbreviation before now on these forums, and with my psych degree plus a background in sales and marketing, to me that means “Neuro-Linguistic Programming”. I usually just tune references to NLP out on internet forums. Now I have to reconfigure my assumption that people on the internet who refer to NLP a lot are a bunch of amateur hypnotists trying to learn tricks to pick up women. Loooolzzz

Glad u spelled it out for me!

 

 
  [ # 15 ]

Hi Amanda, all

Just read the topic, and maybe I can shed a little light on what others here have written whole paper-like works about.
You can do two things:
1) try to understand the user’s feelings, to get information complementary to the understanding,
or
2) try to generate an artificial feeling in the bot, to trim its answer-mood (defensive / aggressive / lovely / missive / etc.)

Assuming 1 or 2, there are common requirements. Empathy is about human-like feelings.
OK, this is obvious, but follow my chain of thinking:
feelings are specific human empathetic responses that come after understanding, taking into account the context of the conversation.
Therefore understanding must exist in order to process/guess (synthetic) feelings,
and you need to understand in context to do that.

This is called NLU (Natural Language Understanding), and it is a difficult part of the deeper end of NLP (Natural Language Processing).

So, to sum up:

to use empathy, either you detect the feelings by trying to understand the user, or you “fabricate” an artificial empathy and modulate the mood according to the “tone” of the conversation. But to do either, you need to understand the user’s assertions or questions in the conversation context.

Remember that to do this, you need to understand,
and understanding comes after you disambiguate all word senses,
after you have parsed successfully,
after you have POS-tagged,
after you have analyzed morphologically,
after you have spell-corrected
the input phrase.
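That strict ordering can be written down literally as a pipeline of stages. The stubs below do almost nothing real; the point is only that each stage consumes the output of the previous one:

```python
from functools import reduce

# Each stage is a stub transforming the analysis so far; the point
# is the fixed order, since every stage depends on the previous ones.
def spell_correct(s):  return s.replace("teh", "the")     # typo fixer
def analyze_morph(s):  return s.lower()                   # normalisation
def pos_tag(s):        return s.split()                   # crude tokens
def parse(tokens):     return {"words": tokens}           # no real tree
def disambiguate(p):   return {**p, "senses": [None] * len(p["words"])}

PIPELINE = [spell_correct, analyze_morph, pos_tag, parse, disambiguate]

def understand(phrase):
    return reduce(lambda data, stage: stage(data), PIPELINE, phrase)

print(understand("I saw teh doctor")["words"])  # ['i', 'saw', 'the', 'doctor']
```

A real pipeline would plug in an actual spell checker, morphological analyzer, POS tagger, grammar parser and word-sense disambiguator at the same five slots.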

Having said this, neither AIML nor ChatScript (I guess) is the place to look for this!

You may use an AM (artificial model) based on Robert Plutchik’s wheel of emotions theory, here: https://en.wikipedia.org/wiki/Contrasting_and_categorization_of_emotions

I’ve done this successfully: my libraries are able to tag a (single) phrase without context according to 36 feelings, using my own corpus of feelings vs. disambiguated words, after parsing, tagging, analyzing and spell-correcting the user’s input phrase.

that’s it

Hope you understood my long dissertation!

 

 

 

 

 
