I suspect you are creating a script for your bot’s responses. Instead of using the emotional content of the client’s inputs, which represents their emotional state, why not annotate your responses with the bot’s own emotions? The annotations could also set the expressions of the bot’s avatar.
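As a rough sketch of what I mean, here’s one way an annotated response could look; the field names and the `render` helper are just my own illustration, not any particular framework’s API:

```python
# Hypothetical sketch: a scripted response annotated with the bot's own
# emotion and an avatar expression to play (e.g. an MS Agent animation name).
response = {
    "text": "I'm glad you asked about that!",
    "emotion": "happy",     # the bot's emotion, not the client's
    "expression": "Smile",  # avatar animation to trigger alongside the text
}

def render(response, play_animation):
    """Play the annotated expression, then return the reply text."""
    play_animation(response["expression"])
    return response["text"]
```

So the emotional state lives in the response script itself rather than being inferred from the client’s input each time.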
I’ve used ConceptNet 2.1, which has a function that takes text in and returns a list of emotions weighted with rankings. But I haven’t used this much, since I already know what kind of input I’m writing the response to. And depending on the bot’s current mood, the bot may need to take the input as a joke or as a serious offense, etc., so the input’s rating, which is constant, doesn’t always work.
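To illustrate why a constant rating falls short: the same jab should land differently depending on the bot’s mood. A toy sketch (the function name, the emotion keys, and the thresholds are all my own assumptions, not ConceptNet’s API):

```python
# Hedged sketch: reinterpret a fixed emotion ranking for an input
# (as a text-to-emotion rater might return) through the bot's current mood.
def interpret(input_ranking, mood_valence):
    """Decide how the bot takes a hostile-sounding input given its mood."""
    hostility = input_ranking.get("anger", 0.0)
    if hostility > 0.3 and mood_valence < 0.0:
        return "offense"   # bad mood: take it as a serious offense
    if hostility > 0.3:
        return "joke"      # good mood: take the jab as banter
    return "neutral"
```

The input’s ranking never changes, but the interpretation does, which is the behavior I actually want.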
My app contributes animation controls for any MS Agent character (possible because it is a stand-alone desktop app) and a matrix of emotions controlled by experimental AIML tags and used by the <random> tag. You can also set and save the bot’s emotional state (its mood) through the app’s scripting language. Interestingly, being able to identify the emotion when there are many optional utterances under the <random> tag makes writing the bot’s responses much easier.
The <random> tag is more likely to select a reply tagged with an emotion closer to the bot’s current mood. Every time an emotion tag is encountered in the selected output, the bot’s mood is moved closer to that of the emotion tag.
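The two mechanics above (mood-weighted selection, then nudging the mood toward the chosen emotion tag) could be sketched like this; the emotion vectors, the distance-based weighting, and the update rate are my own assumptions, not the actual AIML extension:

```python
import math
import random

def pick_reply(replies, mood, temperature=1.0):
    """Select a reply, weighting each by closeness of its emotion to the mood.

    replies: list of (emotion_vector, text) pairs, e.g. (valence, arousal).
    """
    weights = [math.exp(-math.dist(emotion, mood) / temperature)
               for emotion, _ in replies]
    return random.choices(replies, weights=weights, k=1)[0]

def update_mood(mood, emotion, rate=0.25):
    """Move the mood a fraction of the way toward the selected emotion tag."""
    return tuple(m + rate * (e - m) for m, e in zip(mood, emotion))

mood = (0.0, 0.0)  # neutral starting mood: (valence, arousal)
replies = [
    ((0.8, 0.5), "That's wonderful to hear!"),
    ((-0.6, 0.3), "That really bothers me."),
    ((0.0, 0.0), "I see."),
]
emotion, text = pick_reply(replies, mood)
mood = update_mood(mood, emotion)
```

Because the weights decay with distance, replies near the current mood dominate without ever fully excluding the others, which keeps some variety in the output.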
This does affect the dynamics of the bot’s personality. I can tell when I’ve upset the bot or when it’s happy, which rubs off on me in the happy chat mode.