Jan Bogaerts - Aug 10, 2011:
Oh, come on. I thought that wasn't too bad. There are some errors in the output, but at least it's already picking up the different units.
Also, some of the inputs you give need lots of context. I wonder how your bot would handle something like that?
Actually I don't claim my bot can do this! I am just playing devil's advocate!
FYI: my actual routines can handle things like this. Not as well as they should, but they can definitely handle physical units and monetary units (it even consults a web service for international exchange rates), perform operations on them, and do math simplification, derivatives and integrals (on literals) by using a deep-math module. It also handles simple 'things' math, like: how much are two dogs and a cat and a dog? = three dogs and one cat. (It actually does this in Spanish, managing inflections too!)
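This is not my actual code, just a rough Python sketch of that 'two dogs and a cat and a dog' counting, with a made-up toy lexicon:

```python
from collections import Counter

# Toy sketch: reduce a phrase like "two dogs and a cat and a dog"
# to aggregated counts per noun ("three dogs and one cat").
NUMBER_WORDS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3, "four": 4}

def count_things(phrase: str) -> Counter:
    counts = Counter()
    qty = 1
    for tok in phrase.lower().replace(",", " ").split():
        if tok == "and":
            continue
        if tok in NUMBER_WORDS:
            qty = NUMBER_WORDS[tok]
        else:
            noun = tok.rstrip("s")   # crude singularisation, English only
            counts[noun] += qty
            qty = 1
    return counts

print(count_things("two dogs and a cat and a dog"))
# Counter({'dog': 3, 'cat': 1})
```

The real thing of course has to deal with Spanish number words and inflections, which this toy ignores.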
My latest try is this: it works across different turns, even when you give the bot single words (or you may give them all at once), and the bot figures out what they are and asks for the 'supposedly missing' parts needed to make sense. (There is no actual common-sense engine, just a simple implementation on top of a grammatical-inference engine plus a simple co-reference and linking engine.) But the engine is capable of positioning things in time coordinates, so you may ask using a verb in the infinitive or any valid inflection, even misspelled (remember that in Spanish a verb can have more than 50 inflected forms).
Then the system checks whether any of your words resembles a time location like 'yesterday', a nominal part, or a direct/indirect object part, so you may say (I translate from Spanish):
vino mañana yo = [ came tomorrow me/I ]
and the system understands: Yo vendré mañana = [ I will come tomorrow ]
and eventually asks the following questions (depending on some internal priorities):
¿Tú vendrás de dónde? = [ From where will you come tomorrow? ]
¿Tú vendrás con quién/qué? = [ With whom/what will you come tomorrow? ]
Even if you say the words in different turns, it composes the sentence-action frame internally and may ask for clarification.
Once fulfilled, the sentence and action are memorized and used for internal shallow reasoning
(assigning a small veracity score to your declaration).
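Again, not my real engine, but roughly the shape of it: a sentence-action frame whose slots get filled from whatever words arrive, in any order and across turns, and which asks about whichever slots are still empty. A toy sketch (the lexicon, slot names and questions are all invented for illustration):

```python
# Toy lexicon standing in for the real word classifier.
ROLE_OF = {
    "yo": "subject", "i": "subject",
    "vino": "verb:come", "come": "verb:come", "came": "verb:come",
    "mañana": "time:tomorrow", "tomorrow": "time:tomorrow",
    "ayer": "time:yesterday", "yesterday": "time:yesterday",
}

# Internal priorities: which missing slot to ask about first.
QUESTIONS = {
    "origin": "From where will you come?",
    "companion": "With whom/what will you come?",
}

class Frame:
    def __init__(self):
        self.slots = {"subject": None, "verb": None, "time": None,
                      "origin": None, "companion": None}

    def feed(self, utterance: str):
        # Words may arrive in one utterance or spread over several turns.
        for word in utterance.lower().split():
            role = ROLE_OF.get(word)
            if role and ":" in role:
                key, value = role.split(":")
                self.slots[key] = value
            elif role:
                self.slots[role] = word

    def next_question(self):
        for slot, question in QUESTIONS.items():
            if self.slots[slot] is None:
                return question
        return None

f = Frame()
f.feed("vino mañana yo")
print(f.slots)            # subject, verb and time filled; tense fixed by 'mañana'
print(f.next_question())  # "From where will you come?"
```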
This, when linked with my anaphoric solver (still incipient), may yield this kind of conversation (translated to English). Actually it does almost 90% of this; see the clarification at the end!
you: Alan
bot: I see you know Alan, I’ll remember this!
you: Mary
bot: I see you also know Mary, I’ll remember this!
you: John
bot: I see you know John, I’ll remember this too!
you: cat
bot: I see you mentioned a cat, I'll memorize this!
you: she
bot: Mary?
you: he
bot: John or Alan?
you: John
bot: ok
you: they
bot: John, Alan and Mary?
you: yes
bot: Ok
you: they love her
bot: John and Alan love Mary?
you: yes
bot: Ok, I suspected this!
you: he hate her
bot: John hates Mary?
you: yes
bot: this seems strange to me! (*)
you: they love it
bot: John, Alan and Mary love the cat?
you: no
bot: I am confused, please explain it to me better!
Who loves the cat?
you: John and Mary
bot: ok, then: John and Mary love the cat
you: ok
bot: I understand.
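If it helps, here is a very reduced sketch of the gender/number bookkeeping behind the dialogue above; the entities and their features are just the ones from this example, not my actual solver:

```python
# Toy pronoun resolution: filter stored entities by gender/kind and
# ask the user only when more than one candidate survives.
ENTITIES = {
    "Alan": ("masc", "person"), "John": ("masc", "person"),
    "Mary": ("fem", "person"), "cat": ("neut", "thing"),
}

def resolve(pronoun: str):
    p = pronoun.lower()
    if p == "she":
        keep = lambda g, k: g == "fem" and k == "person"
    elif p == "he":
        keep = lambda g, k: g == "masc" and k == "person"
    elif p == "it":
        keep = lambda g, k: k == "thing"
    elif p == "they":
        return list(ENTITIES)          # plural: everything mentioned so far
    else:
        return []
    return [name for name, (g, k) in ENTITIES.items() if keep(g, k)]

print(resolve("she"))   # ['Mary']          -> answer directly
print(resolve("he"))    # ['Alan', 'John']  -> ask "John or Alan?"
print(resolve("it"))    # ['cat']
```

The real solver also has to weigh recency and Spanish grammatical gender, which this toy ignores.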
(*) Here, for the last substitution, the bot must have a verbal taxonomy, since WordNet doesn't state the opposite sense of verbs (and I have no EuroWordNet for Spanish; it costs 15,000 euros even for academic research!!)
So this works ONLY if I manually introduce the ontology of the verbs love and hate! (I actually did so for a small number of them.) I am also working on capturing some ontology from aligned bilingual text, and it's rather difficult!
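To make the (*) point concrete: the manually introduced ontology is little more than an antonym table plus a consistency check over memorized facts. A hedged sketch (data structures invented for illustration, not my actual code):

```python
# Tiny hand-built verb ontology (love/hate entered manually); the bot
# flags a new statement that contradicts an already memorized one.
ANTONYM = {"love": "hate", "hate": "love"}

memory = []   # stored (subject, verb, object) facts

def assert_fact(subj, verb, obj):
    opposite = ANTONYM.get(verb)
    for s, v, o in memory:
        if s == subj and o == obj and v == opposite:
            print(f"This seems strange to me! ({subj} already {v}s {obj})")
            break
    memory.append((subj, verb, obj))

assert_fact("John", "love", "Mary")   # stored quietly
assert_fact("John", "hate", "Mary")   # -> "This seems strange to me! ..."
```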
hope this helps!