Awesome stuff!!
So it will combine the strengths of both audio waveform analysis and contextual/knowledge-based inference! Now that is truly fascinating! When I've completed my core NLP engine, I'll want to look into that and put that 'front end' on the bot.
Actually, after setting up the animation forum (3D Humans), a forum dedicated to voice is on my wish list.
I believe that’s not the main problem we are facing. When humans chat in text, they get different information than they would from voice. In Erwin’s example, the basic idea is the same in all the sentences; the only difference is that, beyond the basic idea, the listener may infer some additional information. In text chat, the reader won’t make that inference. In fact, when a human chats in text with another human, he may use two sentences to convey the full information instead of a single sentence with intonation.
I get what you mean. What you’re saying is that a chatbot that communicates without voice must be even more intelligent than a voice chatbot, precisely because it has less information to work with. It can’t use sound to determine the language, for example; it only has the words and the typing speed. So a text chatbot is harder to build than a voice chatbot, and we always assumed it was the other way around.
@Dave: thanks for brightening my day (I needed it; it’s raining again even though it’s still summer in the northern hemisphere :-s)