A chatbot is a software system that can interact or “chat” with a human user in natural language such as English. For the annual Loebner Prize contest, rival chatbots have been assessed in terms of their ability to fool a judge in a restricted chat session.
We are investigating methods to train and adapt a chatbot to a specific user's language use or application, via a user-supplied training corpus. We advocate open-ended trials by real users, such as an example Afrikaans chatbot for Afrikaans-speaking researchers and students in
South Africa. This is evaluated in terms of “glass box” dialogue efficiency metrics, “black box” dialogue quality metrics, and user satisfaction feedback. The other
examples presented in this paper are the
Qur'an and the FAQchat prototypes. Our
general conclusion is that evaluation
should be adapted to the application and
to user needs.