Experienced member
Total posts: 94
Joined: Dec 8, 2011
This is a first thumbnail sketch of a new kind of ontology called ONON (ONtology Of Needs), based on the work of A. Maslow (Maslow’s hierarchy of needs), which I have enhanced in many ways. ONON is an ontology created especially for NLP, chats and interrogations, reflecting my view that you will hardly find a topic in human conversation (or emotion) that lacks a link to a need defined in ONON (example: fear = the emotion that arises when someone imagines a possible future failure to fulfill his or her need).

I see ONON as a complement to ontologies like SUMO. But in ONON, the most important relation is not “is_a” or “has_a”, but “needs_a” (plus “seems_to_need_a”). To make this differentiation, ONON distinguishes between “satisfiers”, “destroyers” and “pseudo-satisfiers”. Example “food”: satisfier = agriculture…, destroyer = erosion…, pseudo-satisfier = smoking…

I will use the following abbreviations:
s = “satisfiers”
d = “destroyers”
p = “pseudo-satisfiers”
Each is followed by a first brainstorming of examples. Some of them (like ~dictators_d) are ChatScript concepts of my own for now. A complete ONON will contain a - hopefully - full set of particular cases for every s, d and p.
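To make the s/d/p split and the “needs_a” relation concrete, here is a minimal sketch of how one ONON entry could be represented. It is written in Python rather than ChatScript, and the member lists, entity names and function names are purely illustrative, not part of ONON itself:

```python
# Minimal sketch of one ONON entry and the "needs_a" relation
# (a Python stand-in for the ChatScript concepts mentioned above).
from dataclasses import dataclass, field

@dataclass
class Need:
    name: str
    satisfiers: set = field(default_factory=set)         # s
    destroyers: set = field(default_factory=set)         # d
    pseudo_satisfiers: set = field(default_factory=set)  # p

food = Need("food",
            satisfiers={"agriculture"},
            destroyers={"erosion"},
            pseudo_satisfiers={"smoking"})

needs_a = {"human": {"food"}}   # the central ONON relation (tiny illustrative subset)

def classify(entity, candidate, needs, catalogue):
    """Say whether `candidate` is a satisfier, destroyer or pseudo-satisfier
    for any need the entity has."""
    for need_name in needs.get(entity, ()):
        need = catalogue.get(need_name)
        if need is None:
            continue
        if candidate in need.satisfiers:
            return f"satisfier of {need_name}"
        if candidate in need.destroyers:
            return f"destroyer of {need_name}"
        if candidate in need.pseudo_satisfiers:
            return f"pseudo-satisfier of {need_name}"
    return "no link found"

print(classify("human", "smoking", needs_a, {"food": food}))  # -> pseudo-satisfier of food
```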
https://sourceforge.net/projects/maldix/files/?source=navbar
Is something missing in your opinion?
All feedback welcome!
Best
Andreas
Posted: Jan 7, 2013 [ # 1 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
I wish I had more time to investigate this more completely, as I think it could become very useful for determining and developing goal-driven behaviors. It’s one thing to know, in a general sort of way, “what you need”; it’s another thing altogether to have a resource that can at least provide clues leading to methods of satisfying those needs. Unless I’m misunderstanding the concept here, ONON may well be able to provide just that. This is certainly something I’m going to keep my eye on.
Posted: Jan 7, 2013 [ # 2 ]
Senior member
Total posts: 133
Joined: Sep 25, 2012
For a more solid foundation, I would choose neurology over psychology, in particular MacLean over Maslow.
In particular, I have found MacLean’s Triune Brain hypothesis to be an extremely useful way for understanding, categorizing, and predicting human and animal behavior. One of the most common ways I see that hypothesis evidenced in everyday life is when getting into an intellectual debate with someone. When there is disagreement, if you’re lucky, you might be able to exchange some productive ideas for a while, but typically one or both parties start getting flustered, irritated, louder, and increasingly angry. At that point they’ve dropped down one level from the Neocortex/intellectual level to the Limbic/emotional level. If you keep pushing the discussion after that point, they will drop down one more level to the Reptilian/physical level, whereupon they start shoving, slapping, punching, shooting, or if it’s your boss, firing you. It’s all a default defense mechanism: when the current level fails and the organism still feels threatened, it drops down to a more basic level, closer to fundamental survival. There exists feedback between the levels, so it’s not a pure hierarchy, but it’s close to a clean hierarchy, and I believe it’s significant that pure, objective, intellect/logic is at the top.
I like Maslow’s hierarchy, but I would restructure it so that it fits into those three levels of the Triune Brain as a high-level organizing structure, as I’ve shown below. Note that some of Maslow’s levels apply to multiple MacLean levels (so I’ve duplicated those titles); a small data-structure rendering of the same mapping follows the links after the list. By the way, discussion of the Triune Brain model often turns to a fourth “spiritual” level. Although I don’t believe there is a scientific basis for additional forces of nature, there is definitely much evidence for a very high-level drive for transcendence, which can be felt both emotionally and intellectually, and genetics and brain physiology support the presence of such a drive, too. Maybe evolution prepared biological organisms in those ways for a potential, future fourth level, just in case. Therefore I might acknowledge such a level by creating a level “3b” that hints at something beyond intellect.
(1) lowest/Reptilian/physical
1_physiological_needs
1_Q_nature (sunlight…)
2_group
3_safety
(2) middle/Limbic/emotional
1_O_beauty
1_Q_nature (sunlight…)
2_group
3_safety
4_love
5_esteem
8_growth_needs
(3) highest/Neocortex
(3a) intellectual
5_esteem
6_B_s: parallel_hobby_talent parallel_hobby_money parallel_hobby_family_situation
7_cognitive_needs
8_growth_needs
(3b) spiritual
9_transcendence
http://en.wikipedia.org/wiki/File:Triune_brain.png
http://www.kheper.net/topics/intelligence/MacLean.htm
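As a rough illustration only, the restructured hierarchy above can also be written down as a nested mapping; the dictionary keys below are my own shorthand, and the members follow the list (with 6_B_s abbreviated):

```python
# The Maslow-within-MacLean structure from the list above, as a nested mapping.
triune_maslow = {
    "1_reptilian_physical": ["1_physiological_needs", "1_Q_nature", "2_group", "3_safety"],
    "2_limbic_emotional":   ["1_O_beauty", "1_Q_nature", "2_group", "3_safety",
                             "4_love", "5_esteem", "8_growth_needs"],
    "3a_neocortex_intellectual": ["5_esteem", "6_B_s", "7_cognitive_needs", "8_growth_needs"],
    "3b_neocortex_spiritual":    ["9_transcendence"],
}

# Maslow categories that deliberately appear on more than one MacLean level:
duplicated = {need
              for members in triune_maslow.values()
              for need in members
              if sum(need in m for m in triune_maslow.values()) > 1}
print(sorted(duplicated))  # 1_Q_nature, 2_group, 3_safety, 5_esteem, 8_growth_needs
```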
If you ever get into the fine details of the drive for happiness, I could offer more suggestions, but I think what you have will keep you very busy for a while.
Minor point: I think you misspelled “safety” as “savety”, and “hypocrisy” as “hipocrisy”.
I’m glad to see somebody putting serious thought into foundations before coding a system.
Neuroscientists from the University of California at San Diego have found what they call the God module, a tiny locus of nerve cells in the frontal lobe that appears to be activated during religious experiences. They discovered this neural machinery while studying epileptic patients who have intense mystical experiences during seizures. Apparently the intense neural storms during a seizure stimulate the God module. Tracking surface electrical activity in the brain with highly sensitive skin monitors, the scientists found a similar response when very religious nonepileptic persons were shown words and symbols evoking their spiritual beliefs.
(“The Age of Spiritual Machines: When Computers Exceed Human Intelligence”, Ray Kurzweil, 1999, page 152)
Posted: Jan 7, 2013 [ # 3 ]
Experienced member
Total posts: 94
Joined: Dec 8, 2011
Thank you very much for your positive and thoughtful replies.

I think Dave mentioned the first magic word of our discussion: “goal-driven”. Goals probably are the drives in humans (I hope I don’t have to chat with reptiles too often ;) ) that arise when satisfiers for needs are missing, and I want to use Dr. Joscha Bach’s MicroPsi (perhaps even its fusion with OpenCog, i.e. OpenPsi), which takes goals as a central point of its architecture: see my MALDIX description.

And Mark is absolutely right that this shouldn’t be the end of the road and that ONON should be integrated into even higher hierarchies. But for me, you don’t have to have a brain to execute goals: the “reproduction” goal is fulfilled even by viruses. So every goal arises from a new step of evolution: the adaptability need arises with the emergence of endotherms, the transcendence need with the possibility to “neuronalize” it. Once all the human-need work is done - and Mark is absolutely right that this will take quite a lot of time - I will think this point over, too.

One thing is clear even now: I will have to leave SUMO at the point where Adam Pease places humans and animals side by side. For me, every human is a primate + need x, which is a mammal + need y, which is an endotherm + need z, and so on.
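As a toy illustration only (the need names x, y, z are the placeholders from above, and the inheritance mechanics are mine, not SUMO’s or ONON’s), that “taxon = parent taxon + one new need” idea could be sketched like this:

```python
# "Every human is a primate + need x, which is a mammal + need y, ..." as a class
# hierarchy in which each level inherits its ancestors' needs and adds its own.
class Endotherm:
    new_needs = {"adaptability"}           # the need that arises with endotherms

    @classmethod
    def needs(cls):
        collected = set()
        for klass in cls.__mro__:          # walk the inheritance chain upwards
            collected |= getattr(klass, "new_needs", set())
        return collected

class Mammal(Endotherm):
    new_needs = {"need_z"}                 # placeholder from the post

class Primate(Mammal):
    new_needs = {"need_y"}                 # placeholder from the post

class Human(Primate):
    new_needs = {"need_x"}                 # placeholder from the post

print(Human.needs())   # union of all inherited needs plus the human-specific one
```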
And now I will go and correct my misspellings. :)
Posted: Jan 10, 2013 [ # 4 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
I like the idea of incorporating into a chatbot an understanding of human needs (the stuff that comes to us intuitively because we’re biological). However the response I initially wrote, below, wanders a bit off target into the nature of ontologies themselves. I’ll post it anyway, but sorry if I drift afield of the topic at hand.
Andreas Drescher - Jan 7, 2013: I see ONON as a complement to ontologies like SUMO. But in ONON, the most important relation is not “is_a” or “has_a”, but “needs_a” (plus “seems_to_need_a”). To make this differentiation, ONON distinguishes between “satisfiers”, “destroyers” and “pseudo-satisfiers”.
One thing that has never satisfied me about ontologies is the restricted nature of the relationships that can be conveyed, that is, relationship links such as “is_a” or “seems_to_need_a”. A generalized knowledge base, on the other hand, would theoretically include any grammatically acceptable relationship between two objects.
In principle, having restricted relationships allows you to combine those relationships with logic and produce more meaningful i/o. (I take it that’s your goal, Andreas?) But couldn’t one in principle do this with a generalized knowledge base as well? The amount of data that can be incorporated into one’s logic models would be restricted only by your ability to recognize equivalent ways of expressing the same information. For example, “seems_to_need_a” = “appears_to_need_a” ~ “might_need_a” ~ “wants_a”.
The advantage of the generalized knowledge base is that as your logic becomes more sophisticated (or generalized), you can put to use the subtleties in wording (e.g., the difference between “seems_to_need_a” and “wants_a”).
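For what it’s worth, a minimal sketch of that “recognize equivalent relations” step might look like the following; the canonical names and synonym groupings are my own assumptions, not anything taken from ONON:

```python
# Collapse free-form relation names onto a small canonical set, while keeping the
# raw form so the subtleties (e.g. "seems_to_need_a" vs. "wants_a") are not lost.
RELATION_SYNONYMS = {
    "needs_a": {"needs_a", "requires_a", "must_have_a"},
    "seems_to_need_a": {"seems_to_need_a", "appears_to_need_a", "might_need_a", "wants_a"},
}

def canonical_relation(raw):
    """Return (canonical, raw) so both the coarse and the fine-grained forms survive."""
    for canonical, variants in RELATION_SYNONYMS.items():
        if raw in variants:
            return canonical, raw
    return raw, raw   # unknown relations pass through unchanged

print(canonical_relation("appears_to_need_a"))  # -> ('seems_to_need_a', 'appears_to_need_a')
print(canonical_relation("is_afraid_of_a"))     # -> ('is_afraid_of_a', 'is_afraid_of_a')
```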
Mark Atkins - Jan 7, 2013: When there is disagreement, if you’re lucky, you might be able to exchange some productive ideas for a while, but typically one or both parties start getting flustered, irritated, louder, and increasingly angry. At that point they’ve dropped down one level from the Neocortex/intellectual level to the Limbic/emotional level.
Depressing that when it comes to intellectual ideas, we treat agreement as an emotional reward and disagreement as an attack.
Mark Atkins - Jan 7, 2013: If you keep pushing the discussion after that point, they will drop down one more level to the Reptilian/physical level, whereupon they start shoving, slapping, punching, shooting, or if it’s your boss, firing you.
Doesn’t get more reptilian than that. :D
The hierarchy structure you mentioned is interesting. I wonder though if it’s so easy to disentangle the intellectual from the emotional. Could it be that the special mix of chemicals we associate with certain emotions only get released together because of the higher thoughts we’re thinking at the time? That is, certain emotions may be common to all animals, but more complex emotions arise from the simultaneous experience of different base emotions, which are only felt simultaneously due to the type of introspection and abstraction the human brain is capable of.
Fun to ponder. Of course, unless there is such thing as reincarnation, I guess I’ll never know for sure.
Andreas Drescher - Jan 7, 2013: But for me, you don’t have to have a brain to execute goals: the “reproduction” goal is fulfilled even by viruses.
In this sense, a “goal” is really just a physical consequence of your body and its interactions with itself and its environment. A virus doesn’t seek to reproduce itself. In most environments, it does exactly nothing. But the reason we have so darn many of them floating around is that when a few are in just the right environment, they undergo a series of chemical reactions that results in many copies of themselves.
(Maybe people aren’t so different in this respect.)
Are you including (or do you plan to include) the goals of your relations in your ontology, as in RELATION1 = OBJECT1 “needs_a” OBJECT2 “because” GOAL_OBJECT1/RELATION2? A rough sketch of what I mean follows below. I haven’t had a chance to download the version you put online yet.
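Something like this, where the field names and example entries are only placeholders of mine, not anything from the ONON files:

```python
from dataclasses import dataclass
from typing import Optional

# A relation that records *why* it holds: OBJECT1 "needs_a" OBJECT2 "because" RELATION2.
@dataclass
class Relation:
    subject: str
    predicate: str
    obj: str
    because: Optional["Relation"] = None   # the goal/relation motivating this one

thirst = Relation("human", "needs_a", "water")
well = Relation("human", "needs_a", "well", because=thirst)  # needed because of the need for water
print(well)
```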
Posted: Jan 10, 2013 [ # 5 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Wow, when I went to submit that post in the “advanced reply” section, it just vanished. Luckily I had written it in a text editor.
Posted: Jan 10, 2013 [ # 6 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
C R Hunt - Jan 10, 2013: Wow, when I went to submit that post in the “advanced reply” section, it just vanished. Luckily I had written it in a text editor.
[hijack]
That’s a common pitfall of this forum software, and one that I’ve had some hard experiences with before. Sadly, I had not written my posts in a text editor at the time (lesson well learned), so ended up having to completely re-create the posts.
There’s no really easy fix for the problem, though I do have an idea or two about possible methods to correct the issue. But right now, I have other, more important, fish to fry, so it will have to wait.
[/hijack]
Posted: Jan 11, 2013 [ # 7 ]
Experienced member
Total posts: 94
Joined: Dec 8, 2011
Hi CR,
thank you for your feedback.

“I like the idea of incorporating into a chatbot an understanding of human needs…”

If you want your chatbot to become an interlocution bot and perhaps even an AI, it could have needs of its own which can be balanced against the needs of its interlocutor. For MALDIX, I am transposing the goals of MicroPsi (http://micropsi.com/) into a “thirst” for continuing the interlocution and a “hunger” for reaching a deeper level of conversation (level 1, for example: topic_hobby; level 7, for example: topic_childhood_trauma).

A balanced architecture of the needs of MALDIX and his interlocutor could look like this:

INLOC: I had some serious problems with my father when I was 11 years old.
MALDIX: Do you want to tell me about it?
INLOC: Yes, but I’m quite hungry now.

At this point, MALDIX has to come to a decision between his “hunger” goal to have a talk that “goes under your skin” and his “thirst” goal (or need) to keep the conversation going. And ONON gives him the opportunity to understand that it may be quite stupid to try to find out what happened between INLOC and his father AGAINST such an essential need as INLOC’s hunger.
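A very rough sketch of that kind of arbitration follows; the goal names, the urgency values and the “essential need wins” rule are my assumptions for illustration, not actual MALDIX code:

```python
from typing import Optional

# Essential physiological needs of the interlocutor always outrank the bot's own goals.
ESSENTIAL_NEEDS = {"hunger", "thirst", "sleep"}

def choose_next_move(bot_goals, interlocutor_need: Optional[str]):
    """bot_goals maps goal names to urgencies in [0, 1]; returns the next move."""
    if interlocutor_need in ESSENTIAL_NEEDS:
        # e.g. acknowledge INLOC's hunger and offer to pick the topic up later
        return f"acknowledge_{interlocutor_need}_and_postpone_topic"
    return max(bot_goals, key=bot_goals.get)   # otherwise pursue the most urgent bot goal

print(choose_next_move({"deepen_conversation": 0.8, "keep_conversation_alive": 0.6}, "hunger"))
# -> acknowledge_hunger_and_postpone_topic
```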
You are absolutely right, CR: these kinds of decisions can’t be made by ontologies alone. That’s why, right under the first version of ONON on Sourceforge, you can find a file called “MALDIX als symbiotischs System - Die ES-Logic”, translated: “MALDIX as a symbiotic system - the ETHICAL-SOCIAL LOGIC”. I am changing so many things there that it would be pointless to translate it at the moment. But when the flood of changes stops, I will do so.
And I will let you know:)
Best
Andreas
Posted: Jan 14, 2013 [ # 8 ]
Experienced member
Total posts: 94
Joined: Dec 8, 2011
Adam Pease, the creator of SUMO, writes:
“Your ontology looks intriguing…
Working on defining “needs” for entities
could be very promising…”
and gives me some very helpful hints for my further work.
Thank you for your inspirations, Adam!
Posted: Feb 4, 2013 [ # 9 ]
Member
Total posts: 27
Joined: Nov 18, 2009
I have been very interested for some time in creating an ontology that represents human characteristics, including such things as needs. Of course, there is no universal agreement on how to define these characteristics. However, after surveying work in this area, I think the concepts of emotions, goals, standards and preferences might provide a reasonable ontological representation that would support practical development of an avatar that responds realistically to stimuli. I have developed a very rough start at such an ontology at http://mysite.verizon.net/jflynn12/COPE-Ont.jpg. Also, I wrote a book chapter titled Semantic Adaptive Training at http://ebooks.cambridge.org/chapter.jsf?bid=CBO9781139049580&cid=CBO9781139049580A026 that attempts to address the implementation of some of these ideas. I would be very interested in others’ ideas along these lines.
Posted: Feb 4, 2013 [ # 10 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Hi there, John, and welcome to chatbots.org!
That image looks like a map to my room, lol.
Seriously, though, I think that’s a great start on sorting out a potentially very complex challenge. I’m sure that others here will find it interesting, as well. I haven’t had the chance to look over the chapter you linked to, but I’ll try to do so by the end of the day.
If you’ve spent any appreciable time here reading some of the other threads, you’ll know that I’m not an “academic type” in any way, shape, form or fashion, but I do love this field, and I try to “poke my nose” into everything I can, trying to learn more, all of the time. So please, by all means, feel free to toss your notions into the mix, and let’s see what comes of it.
Posted: Feb 4, 2013 [ # 11 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
John, some of your emotion classes look like opposites. Others look like they may be related. Is this intended?
Gloating-happiness vs like-dislike for example. If I am gloating, I would say I am happy.
Of course cataloging emotions and using them in AI has been a subject of widespread discussion.
“Are computer-generated emotions and moods plausible to humans?” - http://www.dfki.de/~gebhard/papers/iva06.pdf
Posted: Feb 4, 2013 [ # 12 ]
Member
Total posts: 27
Joined: Nov 18, 2009
I’m a computer scientist, not a psychologist. The emotion pairs in my draft ontology were obtained from various professional papers on the subject. No one knows what the “correct” set of human emotions is, if any such thing even exists. I think the goal for creating a reasonable computer representation of emotions, and other related human characteristics, is to experiment with various options until we find a set that works. By “works” I mean the avatar responds to external stimuli in very human-like ways. I don’t think you can really do much better than that. There is no silver-bullet answer, but if I could create an avatar that passes an emotional equivalent of the Turing Test, I would be very happy.

Yes, I also think the gloating-happiness pair is strange. There may be some psychological rationale for that pair, but it isn’t obvious to me. I suspect that a fairly limited set of these, or some other, emotion pairs may turn out to be sufficient for a reasonable implementation. In fact, it would probably be wise to start with only two or three emotion pairs and see what kind of fidelity you could achieve, then incrementally add in additional emotional nuances experimentally to see what improvements could be made to the base model.

A major issue is that emotions are contextual. A person may have an emotional baseline, such as generally pleasant or generally angry, but largely emotions are in relation to something else. For example, you may have set up your avatar to love dogs but hate cats. So a significant challenge is to examine the context of the chatbot dialog for emotional triggers to which the avatar might, or should, respond in a dynamic fashion.
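To make the “start with two or three emotion pairs and react to contextual triggers” idea concrete, here is a minimal sketch; the pairs, trigger words and update rule are illustrative assumptions only, not taken from the COPE ontology:

```python
# A toy model: two emotion axes in [-1, 1] plus context-specific triggers
# (e.g. an avatar configured to love dogs but hate cats).
EMOTION_PAIRS = {"happiness_sadness": 0.0, "like_dislike": 0.0}

TRIGGERS = {
    "dog": ("like_dislike", +0.5),
    "cat": ("like_dislike", -0.5),
}

def react(utterance, state=None, triggers=TRIGGERS):
    """Nudge the relevant emotion axis for every trigger word in the utterance."""
    state = dict(EMOTION_PAIRS) if state is None else state
    for word in utterance.lower().split():
        if word in triggers:
            axis, delta = triggers[word]
            state[axis] = max(-1.0, min(1.0, state[axis] + delta))
    return state

print(react("I saw a cat chase a dog today"))   # like_dislike ends up back at 0.0
```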
Posted: Feb 5, 2013 [ # 13 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
John, you might find this thread interesting:
Authentic Human Emotions in Chatbots - http://www.chatbots.org/ai_zone/viewthread/930/
http://upload.wikimedia.org/wikipedia/commons/c/ce/Plutchik-wheel.svg
Posted: Feb 6, 2013 [ # 14 ]
Experienced member
Total posts: 94
Joined: Dec 8, 2011
Your ontology looks quite interesting, John. But I think your goals and your emotions should be connected. For me, a goal is a need (or a pseudo-need) which causes an action of a living being (here a human), and emotions are caused (just as in ALMA) by the fulfillment or non-fulfillment of that need, compared with interrelated experiences in the past. For example:

Joy/happiness appears if need or pseudo-need X has found a satisfier; sadness or anger if such a satisfier couldn’t be found.
Hope appears if the experience of a living being shows need or pseudo-need X as fulfilled in the future; fear if this is not the case.
Shame appears if there is a clash, within a human, between the need to own an appreciated position and another need.

Consider: a central position of needs in your ontology will even enable your bot to anticipate the future of its human interlocutors using the structures of a modal logic. For example:

beverage_satisfier will necessarily cause need_urinate in the future;
food_satisfier may cause need_rest or need_sleep in the future.
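A small sketch of these appraisal rules and the “necessarily/possibly causes” idea; the function and table names are mine, only the rule content comes from the examples above:

```python
# Derive ALMA-style emotions from the (non-)fulfillment of one need,
# and anticipate follow-up needs with modal strength (necessarily / possibly).
def appraise(need, satisfier_found, expected_in_future):
    emotions = ["joy" if satisfier_found else "sadness_or_anger",
                "hope" if expected_in_future else "fear"]
    return {"need": need, "emotions": emotions}

CONSEQUENCES = {
    "beverage_satisfier": {"necessarily": {"need_urinate"}, "possibly": set()},
    "food_satisfier":     {"necessarily": set(), "possibly": {"need_rest", "need_sleep"}},
}

def anticipate(satisfier):
    """Follow-up needs the bot can expect once a satisfier has been consumed."""
    return CONSEQUENCES.get(satisfier, {"necessarily": set(), "possibly": set()})

print(appraise("food", satisfier_found=False, expected_in_future=True))
print(anticipate("beverage_satisfier"))
```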
Posted: Feb 6, 2013 [ # 15 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Andreas, I agree that the attainment or failure of a goal is a cause for the heightening of certain “connected” emotional states, but I disagree that goals, in and of themselves, are needs (or even pseudo-needs). In my considered opinion (which, admittedly, lacks any sort of academic foundation), goals are more along the lines of plans (no matter how nebulous, ill-defined, or removed from our consciousness) to address needs, or possibly expressions of a need. For example, we all need to eat, so we all have goals to do so. The goal (to feed ourselves) is not our need (to eat), but an expression of how we address that need. In certain functional situations we may be able to interchange the concept of a need and the goal based upon that need, but the two are separate.
But as I said, this is just my opinion, and can be completely ignored.