Posted: Mar 21, 2014 [ # 31 ]
Andrew Smith | Senior member | Total posts: 473 | Joined: Aug 28, 2010
Don, if you have a Google+ account, please join the chatbots.org community and let’s talk. We are taking very similar approaches to one another so maybe we can help each other out. It’s been a while since chatbots.org had a video conference so this seems as good a time as any to have another one.
https://plus.google.com/u/0/communities/117236447102577073584
Posted: Mar 22, 2014 [ # 32 ]
Don Patrick | Guru | Total posts: 1009 | Joined: Jun 13, 2013
Jarrod Torriero - Mar 21, 2014: If you’ve only just started programming an AI and you’re already adding in things like (cat IS-A mammal), your program’s ontology is WAY too abstract.
Perhaps we are thinking of the same thing. I mostly find the attempt to categorise everything into crude boxes like “is-a” and “United-States-Presidents” inflexible for a database, limiting the scope of inferences. But that is a matter of fine-tuning knowledge representation. Cyc is stuck with theirs because it amassed most of its knowledge manually. I can change and repopulate my database structure at any point in development.
Andrew Smith - Mar 21, 2014: Don, if you have a Google+ account, please join the chatbots.org community and let’s talk.
I’m not too comfortable with yet another social media account or with video streams, but there is logic to your request, as you do seem to have expertise in the same methods. We’ll talk.
Posted: Mar 22, 2014 [ # 33 ]
Jarrod Torriero | Experienced member | Total posts: 84 | Joined: Aug 10, 2013
Don Patrick - Mar 22, 2014: Jarrod Torriero - Mar 21, 2014: If you’ve only just started programming an AI and you’re already adding in things like (cat IS-A mammal), your program’s ontology is WAY too abstract.
Perhaps we are thinking of the same. I mostly find the attempt to categorise everything into crude boxes like “is-a” and “United-States-Presidents” inflexible for a database, limiting the scope of inferences.
Indeed, especially when those boxes are not properly defined and you end up with things like (cat IS-A mammal) alongside things like (garfield IS-A cat).
Posted: Mar 22, 2014 [ # 34 ]
Andrew Smith | Senior member | Total posts: 473 | Joined: Aug 28, 2010
Jarrod Torriero - Mar 22, 2014: Indeed, especially when those boxes are not properly defined and you end up with things like (cat IS-A mammal) alongside things like (garfield IS-A cat).
The above is a common mistake in ontologies and knowledge bases. Part of the reason that I have chosen SUMO over others that are supposedly more advanced or extensive is that SUMO handles these distinctions (and many others) correctly.
Failure to distinguish between an instance-of relationship and a subclass-of relationship. “Bob” is an instance of Mammal. Human is a subclass of Mammal.
http://ontologyportal.org/Pitfalls.html
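As an aside (not from the thread itself), the pitfall Andrew quotes is easy to see in code. Below is a minimal sketch, assuming a toy Python knowledge base that keeps instance-of and subclass-of as separate relations; the names and helpers are purely illustrative:

```python
# Toy knowledge base keeping instance-of and subclass-of as separate relations.
subclass_of = {
    "Cat": "Mammal",     # Cat is a subclass of Mammal
    "Human": "Mammal",   # Human is a subclass of Mammal
}
instance_of = {
    "Garfield": "Cat",   # Garfield is an instance of Cat, not a subclass
    "Bob": "Human",
}

def is_subclass(sub, sup):
    """Follow the subclass chain upwards (transitive)."""
    while sub in subclass_of:
        sub = subclass_of[sub]
        if sub == sup:
            return True
    return False

def is_instance(individual, cls):
    """An individual is an instance of its direct class and of every superclass of it."""
    direct = instance_of.get(individual)
    return direct == cls or (direct is not None and is_subclass(direct, cls))

print(is_instance("Garfield", "Mammal"))  # True: instance-of link plus the subclass chain
print(is_subclass("Garfield", "Cat"))     # False: Garfield is not a class at all
```

Collapsing both relations into a single IS-A link is exactly what produces nonsense chains like “Garfield is a subclass of Mammal”.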
Posted: Mar 23, 2014 [ # 35 ]
Don Patrick | Guru | Total posts: 1009 | Joined: Jun 13, 2013
Interesting checklist, thank you for posting it, Andrew. I have variables in place to distinguish between a specific cat and cats in general, as I would expect of anyone who thinks this through to real-world application. SUMO looks sophisticated too.
I seem to have all the mentioned pitfalls covered, although I have yet to perfect my database structures for time and space, as a 32-bit integer can’t span the age of the universe in seconds. I am torn between scientific notation and a base-50 system (like binary, but with 50 digits instead of 2).
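For illustration only (this is not Don’s actual implementation; the constant and helper names are assumptions), Python’s arbitrary-precision integers make the base-50 idea easy to sketch:

```python
AGE_OF_UNIVERSE_SECONDS = 13_800_000_000 * 365 * 24 * 3600  # ~4.4e17, far beyond 32-bit range

def to_base50(n):
    """Encode a non-negative integer as a list of base-50 digits, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, 50)
        digits.append(remainder)
    return digits[::-1]

def from_base50(digits):
    """Decode a list of base-50 digits back into an integer."""
    n = 0
    for d in digits:
        n = n * 50 + d
    return n

encoded = to_base50(AGE_OF_UNIVERSE_SECONDS)
print(encoded)                                   # 11 base-50 digits cover ~4.4e17 seconds
assert from_base50(encoded) == AGE_OF_UNIVERSE_SECONDS
```

Whether base-50 digits or a mantissa/exponent pair fits better probably depends on whether exact second-level precision matters or only orders of magnitude.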
Posted: Mar 23, 2014 [ # 36 ]
Jarrod Torriero | Experienced member | Total posts: 84 | Joined: Aug 10, 2013
I’m curious, do modern ontologies and knowledge bases tend to have a means of distinguishing complete collections from (potentially) incomplete ones? As in, {1, 2} is the set of all known positive integers n satisfying the relation a^n + b^n = c^n for some positive integers a, b, c, given a != b, b != c, c != a. Additionally, we know that this set is complete - it is provable (this is essentially Fermat’s Last Theorem) that there is no positive integer satisfying that relation that we do not currently know about. On the other hand, consider the set of currently known elementary particles. In this case, we do not know that the set is complete - there could be elementary particles out there of which we are not yet aware.
Also, can Arckon handle this distinction?
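One way to model Jarrod’s distinction, sketched here as a hedged example rather than anything Arckon or SUMO actually does, is to attach a closed-world flag to each collection so that queries can separate “provably not a member” from “not known”:

```python
from dataclasses import dataclass, field

@dataclass
class Collection:
    """A set of known members plus a flag saying whether the enumeration is complete."""
    members: set = field(default_factory=set)
    closed: bool = False  # True = provably complete, False = possibly more members exist

    def contains(self, item):
        """Three-valued membership: True, False, or None for 'unknown'."""
        if item in self.members:
            return True
        return False if self.closed else None

fermat_exponents = Collection(members={1, 2}, closed=True)  # complete by proof
known_elementary_particles = Collection(members={"electron", "photon", "quark"})  # open

print(fermat_exponents.contains(3))                  # False: provably not a member
print(known_elementary_particles.contains("axion"))  # None: simply not known yet
```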
Posted: Mar 25, 2014 [ # 37 ]
Don Patrick | Guru | Total posts: 1009 | Joined: Jun 13, 2013
I wouldn’t know about modern ontologies myself, but Arckon does not make this distinction. What would be the practical benefit of imposing such limits?
We know that there is only one Earth, but then what of fictional alternate Earths? We knew there were exactly 9 planets in our Solar System, but then Pluto stopped being one. We know that there are 27 letters in the alphabet, but who is to say that we won’t switch to a trinary numerical system in the future?
Posted: Mar 25, 2014 [ # 38 ]
John Flynn | Member | Total posts: 27 | Joined: Nov 18, 2009
Modern ontologies (to me that means ontologies created using description logic) are just that: descriptive. The ontology developer creates the classes, subclasses, properties, and subproperties, and links them using relationships as defined by the formal ontology language. The only modern ontology language is the W3C OWL2 (1). So, the ontology only contains the concepts that have been defined by whoever developed that specific ontology.
(1) http://www.w3.org/TR/owl2-overview/
Posted: Mar 27, 2014 [ # 39 ]
Don Patrick | Guru | Total posts: 1009 | Joined: Jun 13, 2013
That’s a very insightful remark. I can’t tell if this is the case with OWL, but most ontologies still store information primarily in the form of words. Human language itself is literally descriptive, so as soon as a human puts what they mean into words, the words function as labels and most of the meaning is lost at that point. On the receiver’s end, meaning is re-added to the labels from the listener’s own knowledge, but this is not necessarily the same as that of the speaker, as many a misunderstanding proves.
So, when people say that computers don’t really understand what we mean, the case is actually that computers don’t have the same knowledge stored under their word labels as we do (e.g. a 3D visual model of an “elephant”), and don’t necessarily lack the capacity to do intelligent things with what knowledge they do have.
Posted: Mar 27, 2014 [ # 40 ]
John Flynn | Member | Total posts: 27 | Joined: Nov 18, 2009
The purpose of an ontology is to formally describe a specific domain of interest in terms that a computer application can understand. The word “house”, in itself, has no meaning to a computer program parsing the word. However, if the word is described in an ontology using a formal description logic language such as OWL2, the computer application (chatbot or other app) can put the word into a context that provides meaning (semantics) to the term. In this case the word “house” would be assigned as the label of a class. That class might be a subclass of another class labeled “dwelling”. The house class might have various subclasses, such as townhouse, single dwelling, duplex, etc. The class “house” might have properties, such as “hasRoof”, “hasWalls”, “hasRooms”, “hasWindows”, “hasOccupants”, ad infinitum, as required to provide the level of formal machine-understandable knowledge about the term “house” that will satisfy the needs of the computer application using the ontology. Some computer applications may need only a cursory understanding of what constitutes a house, while another application may need a very comprehensive formal description of the house concept to satisfy its design goals. So an ontology provides machine-understandable context so that the semantics of a given term have the same meaning, to a greater or lesser degree as needed by the application, as the same term would have to a human.
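To make the house example a bit more tangible, here is a minimal sketch using the rdflib Python library; the namespace, class names, and properties are illustrative choices based on John’s description, not a prescribed design:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/buildings#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# The label "house" gains machine-usable context from the axioms around it.
g.add((EX.Dwelling, RDF.type, OWL.Class))
g.add((EX.House, RDF.type, OWL.Class))
g.add((EX.House, RDFS.subClassOf, EX.Dwelling))

# Subclasses of House, as the post suggests.
for sub in (EX.Townhouse, EX.SingleDwelling, EX.Duplex):
    g.add((sub, RDF.type, OWL.Class))
    g.add((sub, RDFS.subClassOf, EX.House))

# Properties describing what a house has; more properties = a more comprehensive description.
for prop in (EX.hasRoof, EX.hasWalls, EX.hasRooms, EX.hasWindows, EX.hasOccupants):
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, EX.House))

print(g.serialize(format="turtle"))  # only the developer-defined concepts come back out
```

An application that needs only a cursory notion of “house” can stop at the class hierarchy; one that needs more keeps adding properties and restrictions to the same term.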
Posted: Mar 28, 2014 [ # 41 ]
Hans Peter Willems | Senior member | Total posts: 494 | Joined: Jan 27, 2011
John Flynn - Mar 27, 2014: The purpose of an ontology is to formally describe a specific domain of interest in terms that a computer application can understand.
A computer can never ‘understand’ anything based on ontological descriptions. When describing a symbol (a word or concept) in other related symbols, the result is just a network of related symbols that still have no meaning to a computer program.
http://en.wikipedia.org/wiki/Symbol_grounding
Although any form of intelligence needs some form of knowledge representation, and ontologies and other semantic models like taxonomies, predicate logic, etc. are very useful for that, knowledge in itself does not automatically instantiate intelligence. The biggest semantic project ever undertaken, Cyc, proves exactly that.
Also, inference in itself is not the same as understanding.
Posted: Mar 28, 2014 [ # 42 ]
Don Patrick | Guru | Total posts: 1009 | Joined: Jun 13, 2013
Hans Peter Willems - Mar 28, 2014: the result is just a network of related symbols that still have no meaning to a computer program.
The symbol grounding “problem” bases itself on the assumption that words are meaningless. I say they are not. Just as a graphic of a cow represents a cow, so does the label “C.O.W.” represent it. Granted, sensory information such as a three-dimensional visual model is much richer than this sequence of two-dimensional visual symbols (letters) that I was taught to connect with cows, but both are types of information. Suppose an ontology had all the same sensory information linked to the label “cow” and interacted with it in the same way as it interacts with words: would you still say it does not understand what we mean by “a cow”?
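Don’s suggestion can be sketched (purely hypothetically; this is not Arckon’s design, and every name here is illustrative) as a concept node whose word label is just one of several attached representations, with the same associative operations running over all of them:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A concept node whose 'meaning' is whatever representations are attached to it."""
    label: str                                       # the word, e.g. "cow" - one more piece of information
    relations: dict = field(default_factory=dict)    # symbolic links, e.g. {"is-a": "mammal"}
    sensory: dict = field(default_factory=dict)      # richer data, e.g. paths to meshes, images, sounds

cow = Concept(
    label="cow",
    relations={"is-a": "mammal", "gives": "milk"},
    sensory={"mesh": "models/cow.obj", "image": "images/cow.png"},  # hypothetical file paths
)

def what_is(concept, relation):
    """The same lookup works whether the linked value is a word or richer sensory data."""
    return concept.relations.get(relation)

print(what_is(cow, "is-a"))   # mammal
print(cow.sensory["mesh"])    # a richer grounding attached to the same node
```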
Posted: Mar 28, 2014 [ # 43 ]
Hans Peter Willems | Senior member | Total posts: 494 | Joined: Jan 27, 2011
Don Patrick - Mar 28, 2014: The symbol grounding “problem” bases itself on the assumption that words are meaningless. I say they are not.
So how do you propose that a computer program can actually ‘comprehend’ anything, like what three symbols (out of 26) in a certain combination might actually mean? The whole point of the symbol grounding problem is that you cannot ground a symbol like xxx or cow (both mean nothing without grounding) by describing it in more detail with symbols that need to be grounded in more detail by symbols that need to be grounded in more detail by symbols that… you get the point? A ‘word’ is simply a symbol we gave to some concept to be able to communicate about that concept. When you talk to me about a ‘cow’ I don’t think you are talking to me about a three-letter word; instead you are talking to me about some very rich representation of a concept that we have only labeled as ‘cow’. The word has no meaning whatsoever on its own other than that it is a label that is accepted as a representation of the actual real-world ‘thing’.
If you (or anyone else) are interested in actually understanding the symbol grounding problem with all its specifics, then read Stevan Harnad’s paper: http://cogprints.org/615/1/The_Symbol_Grounding_Problem.html
Posted: Mar 28, 2014 [ # 44 ]
John Flynn | Member | Total posts: 27 | Joined: Nov 18, 2009
These sorts of discussions can become very esoteric. Yes, words are meaningless unless they have some context. The word “cow” means nothing to a two-month-old human. It’s not until that human learns that a cow is an animal, has four legs, says moo, gives milk, provides meat, etc. that those three letters take on any meaning. So, if the related information that grounds the word cow for a human is also made explicit in an ontology for the computer application, then the computer application “knows” as much about a cow as the human does. In fact, there is no question that a cow ontology could be created in such great detail that the computer application would know more about the concept of a cow than any single human does. Of course, that doesn’t make the computer application “intelligent”. It simply knows a whole lot about cows.
Posted: Mar 28, 2014 [ # 45 ]
Don Patrick | Guru | Total posts: 1009 | Joined: Jun 13, 2013
Hans, I understand symbol grounding and choose to ignore it as a problem; I’m afraid you’ve overlooked or will not understand my point. To use your words, I was talking about some very rich representation of a concept that we also represent with ‘cow’. I have already suggested how a computer program might comprehend at a human level at the end of my previous post.
John’s explanations have given me a clearer view of how “meaning” relates to words and I’m happy to continue with that. I’m afraid the rest of this conversation is not heading anywhere practical. If you can, I opt to consider word-based ontologies as a training ground for AI: the same interactions that it applies to words, it can later apply to all richer forms of information. If you disagree, then we simply disagree and I must return to my work, as there is much of it.