
finding the rule from the logs
 
 

Is there an easier way to find the rule that was executed?
for example
~prepasstopic.172.0.~control.6.0   F:7

172 is the 172nd top-level rule in this topic.

I cannot come up with an easy way to find the rule, because it is buried…
Any hints?

 

 
  [ # 1 ]

You could supply rule labels for your rules; then that label will also show up in the data.

 

 
  [ # 2 ]

But if you want to find the rule, you could pass that tag to GetRule and retrieve the piece.

 

 
  [ # 3 ]

Oh, yes, I can add a generic unique string in all caps between u: and ( , and likewise for s: ( and ?: ( .
I am going to see if I can do this with Notepad++.
It would be a nice feature to auto-label everything that is not already labeled.
This would take a big chunk of time out of debugging user logs.
When a rule is executed, it is not always clear where it is coming from, and finding it is sometimes time consuming.

 

 
  [ # 4 ]

OK, here is a decent solution for auto-adding labels to all of the CS rules that do not have them.

Using Notepad++,
change u: ( 
to
u: LABELZ (
in all *.top files.

LABELZ is just some random word that is not used anywhere else, and it is in all caps.

Do the same for s: and ?:

Then run this Perl command against the Linux directory that has the *.top files:
perl -pi -e 's/\bLABELZ\b/$& . "_" . ++$A/ge' /home/ec2-user/ChatScript/RAWDATA/HARRYFIX/*

The labels become

u: LABELZ_1 (
u: LABELZ_2 (
...
u: LABELZ_100000 (
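For anyone not comfortable with Perl one-liners, here is a rough Python equivalent of that renumbering step (a sketch, not the exact command above; the placeholder word and the .top glob are assumptions carried over from the post):

```python
import glob
import re

def number_labels(directory, placeholder="LABELZ"):
    """Append a running counter to every occurrence of the placeholder
    label across all *.top files in the directory, in file order."""
    counter = 0

    def bump(match):
        nonlocal counter
        counter += 1
        return f"{match.group(0)}_{counter}"

    for path in sorted(glob.glob(f"{directory}/*.top")):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        # \b keeps us from touching labels that merely contain the word
        text = re.sub(rf"\b{placeholder}\b", bump, text)
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
    return counter  # total labels numbered
```

Like the Perl version, this edits the files in place, so run it on a copy first.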

Hope this helps someone.
It will make debugging a lot faster.
cheers

 

 
  [ # 5 ]

OMG, it is so much easier finding the executed rule when you have a rule label. Because the rules are numbered after using that Perl script, I can now find any bad response in the log file, scan for the rule number, and then edit the rule, knowing where it came from. Before, I was always sort of guessing whether it was the right rule.

 

 
  [ # 6 ]

I find it even easier with meaningful names, which also make it easy to enable tracing on a rule (given I remember important rule names and the topic they are in). And when just scrolling a log, it is easier to know where I am from a meaningful label than it would be from labels differing only by numbers.

 

 
  [ # 7 ]

Yes, that is a good point. I am already using meaningful labels for many rules, the ones I reuse, but now there is a label everywhere, so I can find any rule quickly.
It would be a useful feature to auto-number the rules, in my opinion.
Someone could overwrite the numbering if needed.
As an aside, I have to see if I can reuse a specific rule in a different topic. Not sure if that works with just the rule name label.

 

 
  [ # 8 ]

^reuse(~topicname.~rulename)

 

 
  [ # 9 ]

The compiler produces a cross-reference map file that can be used to locate a rule and its line number.

 

 
  [ # 10 ]

Thank you, Andy. Can I ask what your workflow is for reviewing log file responses?
I imagine there is probably a better way to do this, and I wonder what you both do. I am afraid to ask Bruce; I am sure his method is light years from what I am doing.

This is mine…
I currently pull the logs, combine them, make them pipe-delimited in Notepad++ (mostly the ":" become "|"), and then import them into Excel as pipe-delimited. Then I sort them, usually by ID and date.

Then I review them, looking for bad responses, in excel, one by one, marking up the bad ones.
When I have all of the bad responses, I review the CS code by looking up the rule number (all of my rules have names and numbers now).
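The manual Notepad++ step above can also be scripted. Here is a minimal sketch of turning log lines into a pipe-delimited file; the `Respond:` prefix and the naive colon split are assumptions about the log layout, so adjust them to your actual CS log format:

```python
import csv

def logs_to_rows(log_paths):
    """Collect response lines from the given log files and split each
    one on colons into a list of fields (a guess at the layout)."""
    rows = []
    for path in log_paths:
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                # Hypothetical filter: keep only response lines
                if not line.startswith("Respond:"):
                    continue
                rows.append([part.strip() for part in line.split(":")])
    return rows

def write_pipe_delimited(rows, out_path):
    """Write the rows as a pipe-delimited file Excel can import."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f, delimiter="|").writerows(rows)
```

From there the sorting by ID and date could happen in Excel as before, or directly in the script.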

I look for common problems.
Then I review the logic for changes.

Often,  I add/modify the rule so that it takes into account a slightly different rule/response.
Sometimes I adjust the topic search words, so it goes to the right topic. Or reorg the topics.
Sometimes I add new topics or split them. Or add gambits.
Sometimes I adjust the control script, to put a priority on some other topics.
If the topic is big, with lots of data, I put it in Postgres and create an API to do lookups. I did this with Wikipedia and some other big data items.

Just curious about your workflow.

Feedback logs. I have a feedback mechanism; it does not work that well. I want the end-user to point out bad responses and have these go to a special log file. I have something working, but I must admit it was not that useful. I guess I have to look at it again, to see what I need to add to make it more useful.

Right now, the log files are a lot more useful, because you get the entire user message trace. With this, you can feed it back after your changes to see if you are making the right changes. 

Just wondering if there is a better way to go about improvements.

 

 

 
  [ # 11 ]

My workflow is not going to be comparable because our bot is so significantly different than one built around topics and gambits/rejoinders/responders. Our bot is almost totally data driven and so is mainly script code, and one user volley can actually turn into several CS volleys as we return control to a higher tier and utilize callbacks. And to complicate things even more, our CS tier generally is not generating the actual text seen by the user, so the response is typically just some reference to an object.

We have fairly extensive logging built into our code base that we turn on and off via “cheat” commands, and that is more useful to us than log responses.

You might want to check out the :trim command to process log files.

 

 
  [ # 12 ]

yes, I recall your thoughts from April on this subject. 

Although I have thought about it, off and on, I do not have a design like this at this point. I spend most of my time just authoring thoughtful narratives and responses.

However,  I have still been thinking a lot about design and organization regarding a new way of approaching this. One small thing to note,  CS still ships with a deceptively simple example that demonstrates a lot of features.  And if you spend a lot of time using this approach, you can still create a deceptively good “chatbot”.  It just takes a lot of hand creation and sweat work to get something that is good, using this approach.  And a lot of manual iterations.  Not complaining here, but it is what it is.

The CS architecture provides a lot of flexibility,  to evolve.  It is just that it takes a long time to learn it because the capability is really rich. 

To create something that is almost all data-driven would be an ideal design, no doubt.  This would be really difficult to achieve.  The reuse potential would be virtually and elegantly unlimited if the data remained totally separated from the “business” logic/learning.  If you had this separated, theoretically, you could focus on feeding it quality datasets, and the “machine” would learn from the data.  Adjust the generic logic scripts to process the data better within each iteration.  Get feedback on the response/outcome to improve response concepts.  Lots of logic, math and stats.
It would be somewhat of a cross between CS and ML. 

Parallel, competing CS volleys? Each creating objects with metadata? These are passed to a higher tier which marshals the response decision? Wow.

Thank you for the :trim suggestion!

 

 
  [ # 13 ]

Of course my goal is to keep providing new powerful features before you are ready for them. E.g., the new ability in CS9.7 to put patterns directly into concept sets as though they were phrases.

 

 
  [ # 14 ]

Actually, I saw that feature and I am diving into it and some of the others you added.
I thought I would have more trouble upgrading my master server image from 9.0 to 9.7, but it took only about 20 minutes.
I had to recompile CS's Postgres support with different options, which was my only challenge.

I am thinking about how to create more meaningful standard objects based on your extension of the concept set definition.
After writing up a thousand ways to look for intent, it makes a lot of sense to standardize the most common concepts into super-concepts.
I am spending a lot of time on the first 20 volleys, i.e. 20 people x 20-volley tests.

I am finding that there are little gems all over the place in your documentation.

 

 
  [ # 15 ]

Worse, there is a LOT of documentation. The 3rd-best CS programmer on the planet says/said he rereads all the CS documentation once a month and discovers new things every time that he didn't appreciate previously. Probably by now I have made that impossible with enough documentation that he can't reread it all at once.

 
