

Longevity of AI-powered VAs, chatbots, etc.

Looking at the deployment of VAs (from demo references, particularly in the enterprise space), I am discovering that the “solutions” adopted by some larger companies may not have the longevity I’d expect for such large investments.

My thoughts are:
- People are “unsure” how to use them?
- For support, the dialogue/conversation flow is too stilted or controlled?
- Users prefer the “self-service” of search: choosing from a list and drilling down on their own
- Poor semantic/intent understanding
- Return on investment not achieved
- Too expensive to maintain
- Long (3m) cycle to get them taught

How do we make them “live for longer”?
- The adoption of platform VA’s like Cortana, Siri etc will make technologies more mainstream, and like search, people will become more accustomed to using them?
- Add more randomness to the dialogue flow control
- Improve the semantic knowledge through better “understanding” of object, place, time, people, emotion within utterances
- “Instant-on” capability with crowd-sourced knowledge bases

Perhaps I’m deceiving myself. Anybody else seen a trend?

Cheers

Rob

  [ # 1 ]

First, welcome.

Rob Shrubsall - Sep 24, 2015:

I am discovering that “solutions” adopted by some larger companies may not have the longevity I’d expect for such large investments.

I would be interested in any references that led you to that conclusion.

Rob Shrubsall - Sep 24, 2015:

My thoughts are:
- People are “unsure” how to use them?
- For support, the dialogue/conversation flow is too stilted or controlled?
- Users prefer the “self-service” of search: choosing from a list and drilling down on their own
- Poor semantic/intent understanding
- Return on investment not achieved
- Too expensive to maintain
- Long (3m) cycle to get them taught

- As a new market, Virtual Assistants are a new phenomenon to both users and companies. For users, this is rapidly changing with Siri and Cortona available on all Apple and Windows devices.

- Limitations of some VA solutions only become apparent after deployment (stilted conversations; poor understanding; cost in time, money, and employees to keep the VA active and up to date; etc.). Companies currently don’t know what they don’t know.

- Time to deploy/test/implement varies and depends on the application/VA solution. Because all of this is new, it is hard to budget accurately.

Rob Shrubsall - Sep 24, 2015:

How do we make them “live for longer”?
- The adoption of platform VA’s like Cortana, Siri etc will make technologies more mainstream, and like search, people will become more accustomed to using them?
- Add more randomness to the dialogue flow control
- Improve the semantic knowledge through better “understanding” of object, place, time, people, emotion within utterances
- “Instant-on” capability with crowd-sourced knowledge bases

I agree that the public is becoming more accustomed to Virtual Assistants. In 5 years, it will be expected.

Semantics, flow control, and randomness of output are a function of which implementation the company chooses. Pick the wrong solution and it could be a real problem, possibly something the company ends up throwing away after a lot of effort.

I am not a big fan of crowd sourced knowledge bases (open internet input, or user input like cleverbot). The input you get tends to be noisy and dirty. In some cases it is just plain bad.

Internal data (customer service chat logs) or academic data (like FrameNet) is better and easier to transform.

The most heavily used VAs rely on a dedicated team of editors.

  [ # 2 ]

Thanks Merlin for your rapid feedback.

As a few examples: major enterprises such as Vodafone, AVG, etc. are touted as reference sites but are no longer “visible” - perhaps they have been banished to the back rooms to assist on the intranet. Perhaps they were “too early” for their customer base or, as you say, had insufficient editors assigned.

In terms of “new to market”, do you believe we are still in the innovation/early-adopter phase (pre “peak of inflated expectations”) at the moment? I appreciate that there are companies who have had success for over 15 years (whilst some spent the first eight or so years simply thinking about it!)... and agree that in five years it will be expected.

I appreciate that crowd sourcing can lead to “bad data” and a rather open domain, but is there an opportunity at a lower vocabulary level? Something similar to ConceptNet triples, but more aligned to the informal sentence structure of instant messaging…
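
A sketch of what I mean, with made-up triples and relation names (illustrative only, not real ConceptNet data):

```python
# Hypothetical "lower vocabulary level" triples: ConceptNet-style,
# but keyed on the informal phrasing of instant messaging.
triples = [
    ("brb", "MeansTheSameAs", "be right back"),
    ("gotta", "MeansTheSameAs", "have got to"),
    ("phone's dead", "ImpliesProblemWith", "battery"),
]

def normalise(utterance):
    # Rewrite informal fragments into canonical forms before any
    # intent/semantic processing happens downstream.
    for informal, rel, formal in triples:
        if rel == "MeansTheSameAs":
            utterance = utterance.replace(informal, formal)
    return utterance

print(normalise("brb, gotta charge, phone's dead"))
# -> be right back, have got to charge, phone's dead
```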

It’s sure going to be an interesting ride over the next five years.

  [ # 3 ]

Yes, I believe we are in the early-adopter phase. VAs have been most useful as an adjunct to, or replacement for, some customer service functions (phone banks or online chat). Avatar-based helpers are very rare. Someday, we will have truly conversational, intelligent virtual assistants.

ConceptNet and similar open-source efforts can be a great help (the data provided with AIML and ChatScript are a couple of AI/chatbot-specific resources). Wikipedia, Freebase, DBpedia and the like are also great resources.

I am not sure I understand what you mean by a lower vocabulary level. Something like a repository of the input patterns in most of the bot markup languages?

You might find Microsoft vs Watson interesting:
http://gofishdigital.com/microsoft-disses-watson-describes-natural-language-processing-approach-question-answering/

Google’s future of search:
http://www.slate.com/articles/technology/technology/2013/04/google_has_a_single_towering_obsession_it_wants_to_build_the_star_trek_computer.html

Tamar Yehoshua, director of product management for Google search, was asked: “Is there a roadmap for how search will look a few years from now?”
“Our vision is the Star Trek computer,” she shot back with a smile. “You can talk to it—it understands you, and it can have a conversation with you.”

  [ # 4 ]

I think one issue is that young people these days are increasingly less interested in text (that’s a generalisation of course). Text based VAs are just too much hassle, especially if they’re using a tablet without a physical keyboard. They want something like Siri that they can talk to and that talks back. Which adds whole extra layers of complexity to the implementation.

  [ # 5 ]

Hmmm, I hadn’t noticed the short longevity, but wouldn’t be surprised.  However, don’t discount the publicity factor, especially being first to market within a given vertical.  Of course, there has been a lot of bad timing over the years: but, AI seems to be finally taking off due to desperation as much as anything else, like grasping at straws.  In terms of return on investment, many of the companies that have implemented agents have gotten out cheap at the end of the day, on publicity alone.

  [ # 6 ]
Rob Shrubsall - Sep 24, 2015:

- People are “unsure” how to use them?
- For support, the dialogue/conversation flow is too stilted or controlled?
- Users prefer the “self-service” of search: choosing from a list and drilling down on their own
- Poor semantic/intent understanding

My experience is that most of these ‘solutions’ are poor fronts for a limited FAQ list, and if I wanted to access a FAQ, it’s faster to go over the list than to “ask” for the same result with many more words (i.e. a complete sentence) that are more likely to confuse the bot. Often if you type “What is a X?” it’s most likely to come up with irrelevant answers to various “What is a…” questions rather than answers about the one keyword that matters: X. So using the search bar to just type X is far more efficient and transparent than groping in the dark behind a front of unknown abilities. The chatbot will only give you one answer; a search bar will give you all five available answers on the topic, and if yours isn’t among them, at least you know it’s no use to keep trying.

So, the main problems to address are these:
- Transparency: Are we dealing with a glorified FAQ or IBM’s Watson? You can’t tell the grade of AI, or how extensive or limited its knowledge is, from its avatar.
- Search efficiency: It takes longer to type complete and proper sentences than to gloss over a FAQ list or search bar. This is why people typically type only two keywords in Google.
- Understanding: As long as word statistics get priority over semantics, answers are not often as relevant as they should be, causing users to fall back to methods that they are sure will at least work.
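
To illustrate the transparency and search-efficiency points, here is a toy keyword scorer over a hypothetical FAQ (made-up data, naive bag-of-words matching standing in for “word statistics”): the chatbot gambles on a single best match, while the search bar shows everything that matched.

```python
import re

# Hypothetical FAQ data for illustration only.
faq = {
    "How do I reset my password?": "Use the 'Forgot password' link.",
    "How do I change my password?": "Go to Settings > Security.",
    "How do I delete my account?": "Contact support to close the account.",
}

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def score(query, question):
    # Naive bag-of-words overlap - "word statistics", no semantics.
    return len(words(query) & words(question))

def chatbot_answer(query):
    # One fingers-crossed reply: only the single best match survives.
    best = max(faq, key=lambda q: score(query, q))
    return faq[best]

def search_results(query):
    # Transparent: every entry that matched at all, ranked.
    hits = [(score(query, q), q) for q in faq if score(query, q) > 0]
    return [q for s, q in sorted(hits, reverse=True)]

print(chatbot_answer("password"))   # one answer, maybe not the one you wanted
print(search_results("password"))   # both password questions; the user picks
```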

When I’m a customer in need of support, I want my answer asap and with the least hassle. When dealing with a live human we grant a lot of leeway because we know humans are flawed, they give constant feedback on their understanding which makes you feel like you’re getting somewhere even if slowly, and the fact that it’s their job makes it their hassle.

Solutions:
- For starters, extensive additional feedback rather than one fingers-crossed reply. A narrow-down approach would be much preferable, because then at least you get the feeling you’re getting somewhere.
- Secondly: full session context, blimey. You should be able to track which pages I’ve visited, paused on and searched for in the last minute. The topics of these should be taken into account as secondary keywords when I type a question. You can tell from how long I’ve spent looking at an answer whether I’ve found what I was looking for or am still searching.
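
The session-context idea could be sketched like this (hypothetical keyword lists and weights, chosen only for illustration): blend the typed question with topics from recently visited pages, weighting the typed words more heavily.

```python
# Hedged sketch: combine typed query keywords with session-page topics.
def contextual_score(query_words, page_topics, candidate_words,
                     query_weight=1.0, context_weight=0.4):
    hit = len(set(query_words) & set(candidate_words)) * query_weight
    hit += len(set(page_topics) & set(candidate_words)) * context_weight
    return hit

# Hypothetical FAQ entries and their keywords.
faq_keywords = {
    "billing-faq": ["invoice", "billing", "charge"],
    "roaming-faq": ["roaming", "abroad", "charge"],
}

# The user typed a vague question but just spent a minute on roaming pages.
query = ["charge"]
session = ["roaming", "abroad"]

best = max(faq_keywords,
           key=lambda k: contextual_score(query, session, faq_keywords[k]))
print(best)  # session context breaks the tie in favour of "roaming-faq"
```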

  [ # 7 ]

I have long been of the opinion that semantics (the meaning of things) is the key to making revolutionary improvements in VAs. One approach to providing semantics is via the Semantic Web initiative, whereby ontologies can be created using a description logic language designed by the W3C for that purpose (OWL). Ontologies provide a description of concepts in terms of classes, subclasses, properties, sub-properties and well-defined relationships between these. These concepts can be general in nature, such as concepts of time and space. They can also be more specific, such as concepts of geography, including oceans, rivers, countries, cities, mountains, etc. These high-level concepts can be a factor in other, more detailed descriptions and are often referred to as upper ontologies. The more detailed ontologies are generally called domain-specific ontologies and may describe concepts such as airplanes, automobiles, factories, businesses, sports teams, and any other thing you can think of.

Describing these concepts using a formal language such as OWL defines the semantics of things within the ontologies such that it becomes clear how things relate to other things in a way that is machine readable. These ontologies, together with the specific instances of data they describe, are often called a knowledge base. Because of the formal logic underpinning, they also allow for reasoning: if in the ontology a is expressed as equal to b, and b is equal to c, the reasoning process can determine that a=c. This seems trivial, but it is well beyond the capability of relational databases. Most subscribers of Chatbots.org are familiar with the premises of the Semantic Web, but very little progress has been made in actually implementing a VA using this technology.
One of the reasons why this has not happened is that it is difficult to create these ontologies in a way that is expressive enough to cover a broad swath of human knowledge, so that the resulting knowledge base would be practical as the underpinning for a VA. One approach would be to elevate the development of such ontologies from a single person, or even company, to a world-wide project, as has been done with Wikipedia. A group of volunteers could first tackle the creation of a suitable upper ontology that would provide the support for more detailed domain ontologies. There has been significant previous work in the area of upper ontologies, but it should be revisited with the specific goal of defining an upper ontology suited to supporting VAs. Once the upper ontology was in place, many different people/groups could tackle the development of domain ontologies, and over time the ontologies could grow to become inclusive of a large part of human knowledge. This is a huge undertaking, but something along these lines is necessary in order to create truly useful VAs.
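
To make the a=b, b=c inference step concrete, here is a toy sketch in plain Python. A real Semantic Web stack would of course use an OWL reasoner rather than this hand-rolled union-find, but the fixed-point idea is the same:

```python
# Toy forward-chaining over owl:sameAs-style equality facts - a pure-Python
# stand-in for what an OWL reasoner does at scale.
def same_as_closure(pairs):
    # Union-find: merge identities so that a=b and b=c entail a=c.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)
    return find

facts = [("a", "b"), ("b", "c")]
same = same_as_closure(facts)
print(same("a") == same("c"))  # True: a=c follows, though it was never stated
```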

John Flynn
http://semanticsimulations.com

  [ # 8 ]

I agree with Don

“Understanding: As long as word statistics get priority over semantics, answers are not often as relevant as they should be, causing users to fall back to methods that they are sure will at least work.”

and with John’s elaboration on the Semantic Web.

Chatbots and VAs need to better process the actual semantics of the sentences in a conversation/request in order to provide better responses.

I would add that ontologies are good for reasoning about a model of the world or a specific domain but they are limited in some important ways. 

The human brain has a semantic memory, and ontologies seem to be useful in modeling the knowledge we believe is associated with semantic memory.

However, we also have episodic memory and procedural memory, and we can perform temporal reasoning. I have not been able to find very many actual examples of chatbot or VA code for reasoning over an ontology that encodes events.

Can your chatbot answer the question “Do people usually put their socks or shoes on first?” or “When baking a cake, would you mix the flour and salt first or pour the batter into the cake pan?” or “In the past when it has rained, have you usually taken an umbrella to work or to school?”
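
A minimal sketch of how ordering facts like these could be encoded and queried (hypothetical facts and names; this is recollection plus a transitive walk, not deep reasoning):

```python
# "Usually before" facts as a tiny event-ordering knowledge base.
before = {
    "put on socks": {"put on shoes"},
    "mix flour and salt": {"pour batter into pan"},
}

def comes_before(a, b):
    # Walk the "before" edges so that chains like a<b, b<c also answer a<c.
    frontier, seen = [a], set()
    while frontier:
        step = frontier.pop()
        for nxt in before.get(step, ()):
            if nxt == b:
                return True
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def usually_first(a, b):
    if comes_before(a, b):
        return a
    if comes_before(b, a):
        return b
    return None  # the knowledge base is silent

print(usually_first("put on socks", "put on shoes"))  # -> put on socks
```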

Some challenges with knowledge representation/reasoning:
- n-ary relations
- negation
- temporal reasoning
- procedural logic
- episodic memory vs semantic memory
- reasoning with uncertainty

RDF/OWL supposedly does not allow for attributes on edges.  See Neo4j as an example of a graph database that does.  I’m not sure exactly how this would affect reasoning, but it is one example of an ontology limitation.
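
The modelling difference can be shown in a few lines (illustrative names only): RDF has to reify the statement into a node in order to hang an attribute on it, whereas a property graph like Neo4j’s stores the attribute on the edge itself.

```python
# RDF-style: the "worksFor" statement becomes a node ("stmt1") so that
# the "since" attribute has something to attach to (reification).
rdf_triples = [
    ("stmt1", "rdf:subject", "alice"),
    ("stmt1", "rdf:predicate", "worksFor"),
    ("stmt1", "rdf:object", "acme"),
    ("stmt1", "since", "2015"),
]

def edge_from_reified(triples, stmt):
    # Recover the plain (subject, predicate, object) edge from the node.
    d = {p: o for s, p, o in triples if s == stmt}
    return (d["rdf:subject"], d["rdf:predicate"], d["rdf:object"])

# Property-graph style (as in Neo4j): the edge carries its own attributes.
property_graph_edge = {
    "from": "alice", "to": "acme",
    "type": "WORKS_FOR",
    "properties": {"since": "2015"},
}

print(edge_from_reified(rdf_triples, "stmt1"))  # ('alice', 'worksFor', 'acme')
```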

I think that in developing ontologies the emphasis is on logic, efficient data/knowledge storage and processing, consistency in encoding knowledge, and being able to exchange and share it. I suspect that in order to create a chatbot/VA/AI that can respond like a human, a system will be created that is not always logical, has redundant and conflicting knowledge, can reason under uncertainty, guess, make reasoning mistakes, reason inefficiently, encode experiences and emotions, and store its data with personal experience in a way that will not be shareable or universal, but unique to the individual AI.

  [ # 9 ]

Can your chatbot answer the question “Do people usually put their socks or shoes on first?” or “When baking a cake, would you mix the flour and salt first or pour the batter into the cake pan?” or “In the past when it has rained, have you usually taken an umbrella to work or to school?”

I believe facts as stored in common ontologies can be expanded on to form sequential procedures, but I don’t see many semantic reasoners around either way. That said, answering these questions does not require reasoning in my opinion, just recollection.

There is a recent virtual assistant capable of answering similar questions: Amelia. She extracts procedural steps from technical manuals by means of automated machine learning and recalls them when asked. Although the answers seem to indicate a textual ‘knowledge’ representation, its understanding of informal language seemed to outdo statistical word matching AI, so there may be more to it. Amelia can advise how to proceed from any point in a learned procedure, including from the initial statement of the problem. Worth checking out, I’d say.

  [ # 10 ]

What I see is that VAs are simple Q&A approaches, where the responses are templated, or pre-printed. In rare cases the avatar asks for refinement before giving the answer or performing an action, unless the template says so. There is no understanding at all. No semantic reasoning, apart from some Wikipedia corpus excerpts and astonishing answers found on the web. The agents do not think, nor do anything similar to human thinking; the model is different. It is a model to serve the user with a definite menu, like a pull-down menu, limited by templates, which indeed might be large, but it’s nothing more, and nothing less.

On the other hand there is the AGI definition, an ambitious idea; no one knows what it is, nor how to build one. There are approaches using ontologies, reasoning, etc., but I have not heard of any of them asking someone for help, or answering weird stuff like Watson, which is not an AGI but a refined Q&A system backed by several parallel servers for reasonable responsiveness.

I think the approach currently taken is not the one that will yield an intelligent machine, one you can talk with and that is empathetic. This might be accomplished by building an internal representation of the world, a sophisticated mental state; having agents construct internal worlds could help move in this direction. It’s just an idea, nothing more, nothing less.

  [ # 11 ]

Don, thanks for the reference to Amelia. I would be interested in seeing the customer interactions, and it will be interesting to see if agents of this sort make inroads into actual business use and, as Rob suggests, whether there is any longevity to these agent solutions.

One approach would be to elevate the development of such ontologies from a single person, or even company, to a world-wide project, as has been done with Wikipedia. A group of volunteers could first tackle the creation of a suitable upper ontology that would provide the support for more detailed domain ontologies.

John, will you be sending out a weekly meeting invite to the volunteers? smile

Would this be different than SUMO?

I agree that a collaborative approach to a common framework would benefit the chatbot community. 

I think that the longevity of a VA depends on it actually being able to assist in a task. For specific domains, semantic understanding may be less important than speed of response, or the cost and time required to implement the VA. But for a VA to respond intelligently in a domain that consists of all of human knowledge and experience would, I think, require at least some minimal semantic interpretation and reasoning.

  [ # 12 ]

A lively discussion - thanks to everyone for their insight so far.

One interesting thread within the discussion is the ease with which the user can get the information. The key is that users will do the minimum possible and expect an answer. This has similarities with the brief utterances that make up day-to-day conversation, I guess. We “expect” the other party to garner the context of the dialogue pretty quickly - people have written whole books about Relevance Theory and the like! I guess this is why Q&A VAs fail miserably: they try to guess through statistical inference what the user is talking about, and so we encounter epic fails when zero semantic reasoning is applied (for example, if the system takes no account of previous dialogue, pages visited, or products selected in the case of ecommerce).

I believe that maintaining engagement rather than giving one quick answer, actually “assisting” a user to find the answer rather than directly “answering” it, probably has some legs.

For example, you don’t walk into a car dealer’s showroom, say “Silver VW XYZ”, and immediately have a professional salesperson reply “here is a silver VW XYZ with ABC, it is 40k, do you want to order it now?” A series of structured broadening and funnelling questions is asked to qualify the buyer, along with visual cues to reduce the utterances and stimulate multiple parts of the brain.

Merging voice/text with visual cues (product pictures/videos) probably provides the optimum UX, though I may be wrong. There is something about collaborating on the same goal (“buy a car”) that builds rapport with the customer. Why can’t this be done online?
