Posted: Mar 28, 2014
[ # 46 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
John Flynn - Mar 28, 2014: In fact, there is no question that a cow ontology could be created in such great detail that the computer application would know more about the concept of a cow than any single human does.
In fact, it has already been demonstrated quite thoroughly that no matter how big, deep and/or complete an ontology describing a concept (like a ‘cow’) is constructed to be, the computer still doesn’t ‘understand’ anything. The Cyc project is the ultimate proof of this: after 30 years and hundreds of millions in funding, it has resulted in the largest common-sense ontology around and a computer system that still doesn’t ‘understand’ anything. If ontologies were the straightforward way to make computers that actually understand things, by now we should have computers that actually understand things. But we don’t. And the Cyc project demonstrates that this is not a matter of time, nor of (financial) resources. It simply doesn’t work.
As long as the only way to describe the meaning of a symbol is by using other symbols, you are faced with a repeating symbol grounding problem. You cannot ground a symbol by describing it with other symbols that themselves need the same grounding. It’s a paradox, a loop, or whatever you want to call it. It never magically turns into a working solution at ‘some’ level.
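To make that regress concrete, here is a toy sketch in Python; the symbols and ‘definitions’ are invented for illustration and not taken from any real ontology. Every symbol is described only by other symbols, so following the definitions never leaves the symbol system.

```python
# Toy "ontology": every symbol is described only by other symbols (illustrative data).
definitions = {
    "cow":    ["animal", "that", "gives", "milk"],
    "animal": ["living", "organism"],
    "milk":   ["white", "liquid", "from", "cow"],   # loops straight back to "cow"
}

def trace(symbol, seen=None):
    """Follow definitions and report what we hit: loops and dead ends, never the world."""
    seen = set() if seen is None else seen
    if symbol in seen:
        return f"'{symbol}': circular definition"
    seen.add(symbol)
    parts = definitions.get(symbol)
    if parts is None:
        return f"'{symbol}': undefined, still ungrounded"
    return {symbol: [trace(part, seen) for part in parts]}

print(trace("cow"))
# Every branch ends in "circular" or "ungrounded"; no path ever reaches a percept.
```

The regress only stops if at least some symbols are tied to something outside the dictionary, which is exactly the grounding step under discussion.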
Posted: Mar 29, 2014
[ # 47 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Aren’t ones and zeros symbols too?
Posted: Mar 29, 2014
[ # 48 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Don Patrick - Mar 29, 2014: Aren’t ones and zeros symbols too?
Indeed they are, but I don’t see what you want to say with that. As symbols, one and zero need to be grounded as well before a computer can understand their meaning. Mind you, the ‘concept’ of ‘zero’ is relatively young in the history of math:
http://en.wikipedia.org/wiki/0_(number)
[NOTE] I’ve had to “fix” the link here because the forum software is overly protective about link URLs. ~~~Dave
Posted: Mar 29, 2014
[ # 49 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Just wondering if your view on symbols would mean that all effort to make computer programs understand “meaning” is in vain, since all computer programs run on zero and one symbols. I am satisfied with your answer.
Posted: Mar 29, 2014
[ # 50 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Don Patrick - Mar 29, 2014: since all computer programs run on zero and one symbols.
Don, you are missing the difference between ‘what’ a computer processes and ‘how’ it does that. Computer programs don’t run on zero and one symbols, they run on zero and one states (i.e. binary representations). The way a computer processes data inside the CPU has nothing to do with the way a computer program actually processes information. So the zero and one states the CPU uses to process numbers have no relation to the ability or inability of software to ‘understand’ (or ground) symbolic representations.
For us humans, ‘zero’ and ‘one’ are indeed symbolic representations of the two binary states in binary representations. We could just as well have called them ‘X’ and ‘Y’; it doesn’t make any difference, as long as the chosen symbols are grounded in a ‘real-world perception’ (of the two binary states). And that is exactly where it falls down inside a computer when you talk about making it ‘understand’ something (as in ‘having a real-world perception’ of it), if you think you can do that by just adding more ‘symbolic representations’ that don’t have a real-world perception linked to them.
By the way, for most people zero and one in binary representation are pretty much ungrounded as well, as they have no ‘real-world perception’ and therefore no actual understanding of the concept. This nicely illustrates the fact that a ‘system’ (either a human or a machine) can handle ungrounded symbols, but (and this is the important ‘but’) that system has no actual understanding of a symbol while handling it. The example I use a lot in conversations myself is the theory of relativity: most people ‘know’ the symbolic representation ‘E=mc²’, but not many people actually ‘understand’ it.
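A small illustration of that ‘what’ versus ‘how’ point (Python used purely for demonstration; it has nothing to do with any particular system discussed here): the very same 32-bit pattern yields two completely different ‘meanings’ depending on which interpretation convention the program applies. The bits themselves carry neither.

```python
import struct

bits = 0x40490FDB  # one fixed 32-bit pattern

as_unsigned_int = bits                                           # integer convention
as_ieee_float = struct.unpack('<f', struct.pack('<I', bits))[0]  # IEEE 754 float convention

print(as_unsigned_int)  # 1078530011
print(as_ieee_float)    # 3.1415927... (approximately pi)
# Same physical states in memory; the "meaning" lives entirely in the convention applied to them.
```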
Posted: Mar 29, 2014
[ # 51 ]
Guru
Total posts: 1297
Joined: Nov 3, 2009
Symbols have no meaning, except by the convention in which they are used.
Posted: Mar 29, 2014
[ # 52 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Symbols get meaning by their connection with other symbols. The connectome is what gives the symbols meaning.
The connectome is built through experience, or in the case of AI it can be created through a shortcut and programmed directly.
If I were to show a person a “red” apple and were able to probe the photons hitting the retina and the electrical signals generated, then although the photonic input would be the same, each person’s neural electrical output would be different. It is only through experience/training that we grow to understand that a specific electrical impulse is the colour “red”, and so the symbol is grounded.
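As a very rough caricature of that idea (toy numbers in Python, nothing neurologically accurate, and no relation to how any system in this thread works): the symbol “red” can be tied to statistics of raw sensor readings gathered through ‘experience’, rather than to yet more symbols.

```python
# Toy grounding: tie the symbol "red" to raw RGB sensor samples instead of to other words.
samples_labelled_red = [(220, 30, 25), (200, 10, 40), (240, 60, 50)]  # invented "experiences"

def prototype(samples):
    """Average the sensor readings observed while a teacher said 'red'."""
    n = len(samples)
    return tuple(sum(channel) / n for channel in zip(*samples))

RED = prototype(samples_labelled_red)  # the grounding: a point in sensor space, not a word

def looks_red(rgb, threshold=80.0):
    """Classify a new reading by its distance to the learned prototype."""
    distance = sum((a - b) ** 2 for a, b in zip(rgb, RED)) ** 0.5
    return distance < threshold

print(looks_red((210, 25, 35)))  # True: close to past 'red' experiences
print(looks_red((30, 30, 200)))  # False: nothing like them
```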
Posted: Mar 29, 2014
[ # 53 ]
Guru
Total posts: 1297
Joined: Nov 3, 2009
All visible wavelengths of white light (a narrow range of the electromagnetic spectrum), except for the red wavelength, are absorbed as heat by the apple. The red wavelength is seen by the person because it is reflected.
However, for conversational purposes, getting away from science and back to computer science… Symbols get meanings (plural) by other conventions. A symbol may have multiple conventions. This thread discusses symbols such as a “0” and a “|”, which by another convention may, for example, mean a “circle” and a “line”.
Does “|0” equal ten in decimal, or two in binary? Symbols may separate from one meaning under one convention and join to a new meaning under another. Proof that symbols without a convention have no meaning. With that said, no argument is intended or implied, and your opinion is respected. All stated politely and in a friendly spirit.
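In programming terms the same point fits in three lines (Python, just for illustration): read “|0” as the digit string “10”, and it has no numeric value at all until a radix convention is chosen.

```python
print(int("10", 10))  # 10 under the decimal convention
print(int("10", 2))   #  2 under the binary convention
print(int("10", 16))  # 16 under the hexadecimal convention
```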
Please note:
This response has been made approximative for the sake of improving conversational quality.
Posted: Mar 31, 2014
[ # 54 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
I think it should be noted that ontologies like Cyc are never intended as a stand-alone entity from which understanding and intelligence is supposed to naturally emerge. They are part of a greater system that makes the connections. Undefinable concepts are, however, not an obstacle to continuing my work, and I doubt they are to yours.
For more expert insight, watch this video about the SUMO ontology, which addresses most of what has been said here from the point where Jarrod asked his question.
What I’ve learned from it is that my knowledge database is an “upper semantic ontology” that covers much of the same ground and pitfalls as SUMO, with the exclusion of temporal and spatial relations, which I will get to later this year.
Meanwhile I have decided to use a ‘60-inary’ (base-60) system to store time indexes, which allows any event within a range of 30 million years to be recorded in 10 characters. Not much practical information is stored in nanoseconds, is it?
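For readers curious what such an encoding could look like, here is a minimal sketch in Python. It assumes the time index is simply an integer count of time units since some epoch and picks an arbitrary set of 60 characters; Arckon’s actual unit, epoch and character set aren’t specified in the thread.

```python
import string

# 60 symbols in ascending ASCII order: 0-9, A-Z and the first 24 lowercase letters.
DIGITS = string.digits + string.ascii_uppercase + string.ascii_lowercase[:24]
BASE = 60
WIDTH = 10  # 60**10 is roughly 6.05e17 distinct values

def encode(index: int) -> str:
    """Encode a non-negative integer time index as a fixed-width base-60 string."""
    if not 0 <= index < BASE ** WIDTH:
        raise ValueError("time index out of range")
    chars = []
    for _ in range(WIDTH):
        index, digit = divmod(index, BASE)
        chars.append(DIGITS[digit])
    return "".join(reversed(chars))

def decode(text: str) -> int:
    """Decode a base-60 string back into the integer time index."""
    value = 0
    for ch in text:
        value = value * BASE + DIGITS.index(ch)
    return value

# At millisecond resolution, 60**10 units span roughly 19 million years,
# the same order of magnitude as the range mentioned above.
assert decode(encode(1234567890123)) == 1234567890123
```

A pleasant side effect of a fixed-width encoding with digits in ascending character order is that sorting the strings alphabetically also sorts the time indexes chronologically.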
I’ve also looked into connecting Arckon to the internet through PHP and popen(), but it turns out my webhost doesn’t allow running executables on its server at all. I may be able to set up an email or PHP form interface eventually, but I can’t think of sufficient use to justify the effort, as it still wouldn’t enable 24/7 availability for contests.
Posted: Mar 31, 2014
[ # 55 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Don Patrick - Mar 31, 2014: I think it should be noted that ontologies like Cyc are never intended as a stand-alone entity from which understanding and intelligence is supposed to naturally emerge.
That is not entirely correct:
The name “Cyc” (from “encyclopedia”, pronounced [saɪk] like syke) is a registered trademark owned by Cycorp. The original knowledge base is proprietary, but a smaller version of the knowledge base, intended to establish a common vocabulary for automatic reasoning, was released as OpenCyc under an open source (Apache) license. More recently, Cyc has been made available to AI researchers under a research-purposes license as ResearchCyc.
Typical pieces of knowledge represented in the database are “Every tree is a plant” and “Plants die eventually”. When asked whether trees die, the inference engine can draw the obvious conclusion and answer the question correctly. The Knowledge Base (KB) contains over one million human-defined assertions, rules or common sense ideas. These are formulated in the language CycL, which is based on predicate calculus and has a syntax similar to that of the Lisp programming language.
Much of the current work on the Cyc project continues to be knowledge engineering, representing facts about the world by hand, and implementing efficient inference mechanisms on that knowledge. Increasingly, however, work at Cycorp involves giving the Cyc system the ability to communicate with end users in natural language, and to assist with the knowledge formation process via machine learning.
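To give a feel for what such hand-coded assertions plus an inference step amount to, here is a heavily simplified sketch in Python. Cyc itself uses CycL, the predicate-calculus language with Lisp-like syntax mentioned above; the two dictionaries and the inference rule below are invented purely for illustration.

```python
# Hand-coded "common sense", roughly in the spirit of the examples quoted above.
is_a = {"tree": "plant"}                         # "Every tree is a plant"
properties = {"plant": {"dies eventually"}}      # "Plants die eventually"

def has_property(concept, prop):
    """Walk up the is-a hierarchy and check whether the property is inherited."""
    while concept is not None:
        if prop in properties.get(concept, set()):
            return True
        concept = is_a.get(concept)              # climb to the more general concept
    return False

print(has_property("tree", "dies eventually"))   # True: the "obvious conclusion"
```

Cyc’s knowledge base does this with over a million assertions and a far more expressive rule language, but the basic move, letting specific concepts inherit knowledge from more general ones in a hand-built hierarchy, is the same.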
Posted: Apr 1, 2014
[ # 56 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
I am aware of that and many other descriptions of Cyc. The consistent story is that it was meant as a knowledge base for other AI to implement, not an end product by itself. Its inference functions only serve to extend the common knowledge on a basic level. If they are focusing their efforts beyond that now, I have yet to see the results so I’ll take that with a grain of salt.
Posted: Apr 2, 2014
[ # 57 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Don Patrick - Apr 1, 2014: The consistent story is that it was meant as a knowledge base for other AI to implement, not an end product by itself.
Going by statements made by Minsky (who knows Doug Lenat) in several lectures, the initial idea was to build ‘strong AI’, period. Also, do you really believe that DoD/DARPA would have pumped that amount of money (I seem to remember it’s several hundreds of millions, but I can’t find that info now) into the project if it was just to develop a semantic knowledge base? I don’t think so.
Besides that, almost every semantic knowledge base project that started out with claims about building a strong reasoning engine based on semantic real-world representations is now pretty much defunct, and is seen as ‘nice’ technology for semantic web applications but far removed from the original goals of strong AI.
Posted: Apr 2, 2014
[ # 58 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Oh well, if Minsky says so. It’s certainly not apparent from the project itself.
Posted: Apr 2, 2014
[ # 59 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
One of the things I’ve done over the last few years is make an inventory of all the big and/or important AI-related projects of the last 5-6 decades (as part of the research for my own project; I did the same with robotics projects). I’ve read several research papers pro and con on each project (as far as such papers exist and are accessible). One thing is pretty clear: for each and every AI project the up-front expectations were very big, and none of those projects delivered even slightly in the neighbourhood of those expectations. That is the general tenor when reading back through AI research history, unfortunately. This is also one of the main reasons why today everyone is so reluctant to make any predictions about the possibility of strong AI or AGI.
Posted: May 2, 2014
[ # 60 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
---- Meanwhile ----
My day job ate my time, so I’ve only added two abilities to Arckon last month:
1. Extracting the main topic of text files, which may in the future enable a command like “Get me the file about dogs”. Now if only I could access PDF files and Word docs…
2. Answering open questions about the colour of things (nothing to do with recent topics, just one of those dust-covered lightweight tasks). I first wanted to use RGB values, but I figured I might as well store them as words, as long as there is no visual input anyway.
Now a question: does anyone know of a convenient list of common items and their colours? I’m looking for texts containing e.g. “A rose is red.” or “red roses” that I could have Arckon read and learn from. Otherwise I’ll just have to read him some children’s books or poetry.
(Emphasis on “convenient”; it’s a minor concern, and I’m not looking for extra work.)
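Not an answer to the list question, but as a sketch of the kind of pattern matching involved (Python, with made-up patterns; Arckon’s actual parser obviously works differently): even two naive patterns catch a lot of “A rose is red” / “red roses” phrasing.

```python
import re

COLOURS = ["red", "green", "blue", "yellow", "white", "black",
           "brown", "orange", "purple", "pink", "grey"]
COLOUR_RE = "|".join(COLOURS)

def colour_facts(text):
    """Extract (thing, colour) pairs from 'a rose is red' / 'red roses' style phrasing."""
    facts = set()
    # Pattern 1: "a rose is red", "the sky is blue"
    for thing, colour in re.findall(rf"\b(?:a|an|the)\s+(\w+)\s+is\s+({COLOUR_RE})\b", text, re.I):
        facts.add((thing.lower(), colour.lower()))
    # Pattern 2: "red roses", "green grass" (no attempt at singularising here)
    for colour, thing in re.findall(rf"\b({COLOUR_RE})\s+(\w+)\b", text, re.I):
        facts.add((thing.lower(), colour.lower()))
    return facts

print(colour_facts("A rose is red. I picked some red roses near the green grass."))
# {('rose', 'red'), ('roses', 'red'), ('grass', 'green')}  (set order may vary)
```

Simple descriptive texts like children’s books are exactly where patterns like these fire most often, so the fallback plan sounds workable.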
One last noteworthy occurrence: Arckon’s program persistently crashes whenever he reads the 3rd law of robotics. You’ll never guess why. “A robot must obey the orders given to it by the human beings, except when such orders would conflict with the Zeroth or the First Law.”