|
Posted: Feb 20, 2011 |
[ # 31 ]
|
|
Senior member
Total posts: 494
Joined: Jan 27, 2011
|
Jan, great point. Although I’m very much focussed on the idea that the ‘code’ of a human brain is nothing more than a governing system that handles data-interaction, I do recognise the obvious task of constructing this governing system. And because a computer is, at least, very good at ‘doing logic’, there is an opportunity to let the computer ‘help’ to write the logic.
But, and it’s an important ‘but’ for me, I’m working from the premise that I’m modelling something that comes close to ‘human reasoning’ (although this is just one aspect of my model). So I try to stay close to the idea that the ‘governing logic’ is fairly basic and the real complexity is in the ‘data-model’.
So when I get to actually implementing my ideas in code, I don’t think I want the system to write code by itself. However, I can see how a ‘tuning layer’ inside the code could help to fine-tune the logic when this is instigated by the developing database.
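To make that a bit more concrete, here is a rough sketch of what such a ‘tuning layer’ could look like (all names and numbers are illustrative assumptions, not my actual design):

```python
# Hypothetical sketch of a 'tuning layer': the governing logic stays fixed,
# but its parameters are read from the developing database, so the data
# can fine-tune the logic without the system rewriting its own code.

class TuningLayer:
    """Holds adjustable parameters that the data layer may update."""

    def __init__(self):
        # Default weights; the names here are illustrative, not a real API.
        self.params = {"association_threshold": 0.5, "decay_rate": 0.1}

    def update_from_database(self, suggested_params):
        """Accept tuning suggestions instigated by the developing database."""
        for name, value in suggested_params.items():
            if name in self.params:          # only tune what already exists
                self.params[name] = value    # the logic itself never changes

def relate(concept_a, concept_b, strength, tuning):
    """Fixed governing logic: link two concepts if the data says it matters."""
    return strength >= tuning.params["association_threshold"]
```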
|
|
|
|
|
Posted: Feb 20, 2011 |
[ # 32 ]
|
|
Senior member
Total posts: 697
Joined: Aug 5, 2010
|
Have you ever considered that the ‘governing logic’ and ‘data-model’ should be the same, as in compiler design? In that case, you can change your data-model as much as you want without having to change code, since your data-model contains the code.
|
|
|
|
|
Posted: Feb 20, 2011 |
[ # 33 ]
|
|
Senior member
Total posts: 494
Joined: Jan 27, 2011
|
Jan Bogaerts - Feb 20, 2011: Have you ever considered that the ‘governing logic’ and ‘data-model’ should be the same, as in compiler design? In that case, you can change your data-model as much as you want without having to change code, since your data-model contains the code.
Actually, in my notes I have this: logic = datamodel. However, I mean something else than what you propose. What you point at is the approach that LISP takes: both code and data are described in one representation. This means that the software can act on code the same way as it can act on data.
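For readers less familiar with LISP, a minimal sketch of that idea in Python (hypothetical names, not from any real system): a rule is just a nested data structure, which the program can both evaluate and rewrite like any other data.

```python
# Tiny interpreter illustrating code-as-data: expressions are plain tuples.

def evaluate(expr, facts):
    """Evaluate a rule expressed as nested tuples against a fact store."""
    op = expr[0]
    if op == "fact":                     # ("fact", name) -> look up a fact
        return facts.get(expr[1], False)
    if op == "and":
        return all(evaluate(e, facts) for e in expr[1:])
    if op == "not":
        return not evaluate(expr[1], facts)
    raise ValueError(f"unknown operator: {op}")

rule = ("and", ("fact", "has_wings"), ("not", ("fact", "is_penguin")))
facts = {"has_wings": True, "is_penguin": False}

print(evaluate(rule, facts))             # True
# Because the rule is data, the system can also rewrite it like data:
modified = ("and", rule, ("fact", "can_take_off"))
```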
However, in my approach (from actually looking at the human brain) there IS a separation between the ‘logic’ (i.e. code) and the ‘data’. Our code or logic, the ‘HOW’ we process our thoughts, is determined by evolutionary design. Our data, the ‘WHAT’ we process, are our thoughts, experiences and accumulated knowledge that we are constantly reordering, re-evaluating and reconnecting to other thoughts, experiences, etc.
Thinking about it, a better way to describe the ‘logic code’ in my model is to represent it as the ‘operating system’. I have to think about this a bit more though. Time to head back to my (digital) notebook.
|
|
|
|
|
Posted: Feb 20, 2011 |
[ # 34 ]
|
|
Senior member
Total posts: 623
Joined: Aug 24, 2010
|
External stimuli equal ‘input’, especially when we are talking about sensors. The ‘rules’ are defined at first as the core-concepts (AI-instinct) I spoke of before. The code does not have to grow, simply because it doesn’t work that way in the human brain. Processing is what the human brain has, but adding knowledge does not change the way we process that knowledge. Still, somehow the human brain is capable of growing its reasoning capacity based on learning new things.
I don’t think I made myself clear. I only spoke of growing because I assume you will not be programming all of the core concepts you plan to implement all at once. Whenever you program a new concept, you will have to add the code that describes how the concept causes the bot to behave. (Humans, on the other hand, have as many “core concepts” as we’ll ever get at birth, as you said.)
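To illustrate the point, a hypothetical sketch of what that gradual growth could look like in code (the registry pattern and concept names are assumptions, purely for illustration):

```python
# Every core concept added to the bot needs an accompanying piece of code
# that defines its behaviour, so the code base grows with the concept set
# (unlike a human brain, which has all its 'core concepts' at birth).

CONCEPT_HANDLERS = {}

def concept(name):
    """Register the behaviour code for a newly programmed core concept."""
    def register(handler):
        CONCEPT_HANDLERS[name] = handler
        return handler
    return register

@concept("causality")
def handle_causality(event_a, event_b):
    # Illustrative only: relate two events as cause and effect.
    return {"cause": event_a, "effect": event_b}

@concept("polarity")
def handle_polarity(statement):
    # Illustrative only: flag a statement as positive or negative.
    return {"statement": statement, "negated": statement.startswith("not ")}
```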
|
|
|
|
|
Posted: Feb 21, 2011 |
[ # 35 ]
|
|
Senior member
Total posts: 494
Joined: Jan 27, 2011
|
C R Hunt - Feb 20, 2011: I only spoke of growing because I assume you will not be programming all of the core concepts you plan to implement all at once.
Actually, there are only a very few core-concepts; everything else is built upon those. I’m already getting close to my final definition of those core-concepts, and in the coming weeks I’m going to test them to see if they are valid for describing ANY other concept on top of those. When I’m done testing I plan to publish a research paper about this model.
Currently my model of core-concepts has three layers. The first, most basic layer has only eight ‘concepts’ (so far); everything else is added on top of just these eight. This is of course a very abstract level, but we can say that at this abstract level, reality can be described with those few concepts. To give an idea of the concepts at this level: time, causality, polarity…
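A rough sketch of how such a layered model could be represented (only the three base concepts named above come from the post; the derived concepts and the composition rules are hypothetical placeholders):

```python
# Layer 1: a handful of abstract base concepts (three of the eight are
# named in the post); higher layers are compositions of lower layers.

BASE_CONCEPTS = {"time", "causality", "polarity"}

DERIVED = {
    "change": {"time", "polarity"},       # layer 2: hypothetical example
    "intent": {"causality", "change"},    # layer 3: hypothetical example
}

def grounded_in_base(concept, seen=None):
    """Check that a concept ultimately resolves to base concepts only."""
    if concept in BASE_CONCEPTS:
        return True
    seen = seen or set()
    if concept in seen or concept not in DERIVED:
        return False
    seen.add(concept)
    return all(grounded_in_base(part, seen) for part in DERIVED[concept])

print(grounded_in_base("intent"))  # True: everything reduces to layer 1
```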
Btw, if you take a look at the video I posted in the ‘On Intelligence’ topic, you will see that Jeff Hawkins’s model shows obvious similarities. Where he is reasoning bottom-up (step-by-step simplification of complex input into less complex descriptions), I’m working top-down by starting with the simplest description of reality at an abstract level: the core-concepts.
|
|
|
|
|
Posted: Feb 21, 2011 |
[ # 36 ]
|
|
Senior member
Total posts: 697
Joined: Aug 5, 2010
|
Hans Peter Willems - Feb 21, 2011: Actually, there are only a very few core-concepts; everything else is built upon those. I’m already getting close to my final definition of those core-concepts, and in the coming weeks I’m going to test them to see if they are valid for describing ANY other concept on top of those. When I’m done testing I plan to publish a research paper about this model.
Currently my model of core-concepts has three layers. The first, most basic layer has only eight ‘concepts’ (so far); everything else is added on top of just these eight. This is of course a very abstract level, but we can say that at this abstract level, reality can be described with those few concepts. To give an idea of the concepts at this level: time, causality, polarity…
Interesting. Looking forward to the paper.
|
|
|
|
|
Posted: Feb 23, 2011 |
[ # 37 ]
|
|
Senior member
Total posts: 494
Joined: Jan 27, 2011
|
Just to keep you guys up to date:
I’ve now officially hooked up with an academic researcher who is himself working on several AI-related things. Our ideas turned out to mesh on many points, and together we will be able to take both our efforts to another level. Because we are also both involved in corporate projects in the field of knowledge management, there are already some possible links between the disciplines. We hope to move our AI research into funded territory, probably within this year.
I cannot yet name him here for certain reasons, but we have already talked about co-authoring the research paper I mentioned. So his name will surface soon enough.
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 38 ]
|
|
Senior member
Total posts: 494
Joined: Jan 27, 2011
|
I’ve just started mapping out the basics for the ‘mind operating system’ that will eventually run my model. It is as basic as possible and has some functional similarities to the human brain. So far I’m mapping out memory, perception, interaction and behaviour. When the design of this mind-OS is done and I get it aligned with my core-concepts model, I will be able to start coding some ‘proof of concept’ stuff.
Right now I’m thinking about the role of short-term and long-term memory: short-term memory holds the current conversation history (both verbal/grammar and non-verbal/sensors/other), while long-term memory holds the concepts and tagging that are persisted from short-term memory.
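A minimal sketch of that split, just to make the idea tangible (the data structures, field names and capacity are all assumptions, not the actual design):

```python
# Short-term memory: a bounded buffer of the current conversation.
# Long-term memory: persisted, tagged concepts promoted from short-term.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str
    channel: str                 # "verbal" or "non-verbal" (sensor, etc.)
    tags: set = field(default_factory=set)

class MindMemory:
    def __init__(self, short_term_capacity=100):
        self.short_term = deque(maxlen=short_term_capacity)
        self.long_term = {}      # concepts indexed by tag

    def perceive(self, item: MemoryItem):
        """New input always enters short-term memory first."""
        self.short_term.append(item)

    def persist(self, item: MemoryItem):
        """Move an item into long-term storage under each of its tags."""
        for tag in item.tags:
            self.long_term.setdefault(tag, []).append(item)
```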
Any thoughts or input are appreciated
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 39 ]
|
|
Administrator
Total posts: 3111
Joined: Jun 14, 2010
|
Am I to assume that your short-term memory “module” will have some sort of “expiration-date tagging” that allows any particular memory to be purged after a certain time limit has passed? If I remember correctly from past readings, human short-term memory generally lasts (on average) about 48-72 hours, and during that time, some/most/all short-term memories are shifted to long-term memory, or just plain forgotten. Also, what criteria will determine which memories get shifted to long-term? Will it be some sort of rote system, where a particular memory that matches (or partially matches) a previous memory triggers the transfer, or will this be a manual process, at first? Just curious…
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 40 ]
|
|
Senior member
Total posts: 494
Joined: Jan 27, 2011
|
Dave, thanks for your input, it’s greatly appreciated.
Dave Morton - Feb 26, 2011: Am I to assume that your short-term memory “module” will have some sort of “expiration-date tagging”, that allows any particular memory to be purged after a certain time limit has passed?
That is indeed how I see it as well. While thinking about the function of ‘memory’ as part of a mind-model, I came to realise that we actually WANT to ‘forget’ everything that has failed to show any relevance to previously stored (both short- and long-term) knowledge and experience.
Dave Morton - Feb 26, 2011: If I remember correctly from past readings, Human short-term memory generally lasts (on average) about 48-72 hours, and that, during that time, some/most/all short-term memories are shifted to long-term memory, or just plain forgotten.
Ah, OK. I was already wondering what time frame would be useful. I’ll read up on it, but your suggestion sounds about right to me. A point of contemplation, of course, will be how this translates to AI with regard to higher learning speed, non-continuous interaction (a temporarily shut-down system), etc.
Dave Morton - Feb 26, 2011: Also, what criteria will determine what memories get shifted to long-term? Will it be some sort of rote system, where a particular memory that matches (or partially matches) a previous memory triggers the transfer, or will this be a manual process, at first? Just curious…
I’m thinking towards matching/validating against stored information. This should use both short- and long-term memory, where long-term memory would give ‘higher’ validation (as it would represent ‘previous’ validation) and short-term memory would build up a ‘level of validation’ the more a concept is given context within the timespan of short-term memory.
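A rough sketch of how such validation-based promotion could look (the weights and threshold are purely illustrative assumptions):

```python
# Each short-term item builds up a 'level of validation' as it gains
# context; matches against long-term memory are weighted more heavily
# because they represent 'previous' validation.

SHORT_TERM_WEIGHT = 1.0      # assumed weights, purely illustrative
LONG_TERM_WEIGHT = 3.0       # 'previous' validation counts for more
PROMOTION_THRESHOLD = 5.0

def validation_score(item_tags, short_term_tags, long_term_tags):
    """Score an item by how much stored information it matches."""
    short_hits = len(item_tags & short_term_tags)
    long_hits = len(item_tags & long_term_tags)
    return short_hits * SHORT_TERM_WEIGHT + long_hits * LONG_TERM_WEIGHT

def should_promote(item_tags, short_term_tags, long_term_tags):
    """Shift a memory to long-term once its validation level is high enough."""
    score = validation_score(item_tags, short_term_tags, long_term_tags)
    return score >= PROMOTION_THRESHOLD
```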
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 41 ]
|
|
Senior member
Total posts: 697
Joined: Aug 5, 2010
|
Isn’t sleep somehow important to ‘forget’, or to transfer from short-term to long-term memory?
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 42 ]
|
|
Administrator
Total posts: 3111
Joined: Jun 14, 2010
|
It is if you do, I guess. I’ve more or less forgotten what sleep is like.
(it’s 1:36AM here, and I’m guessing it’s around 10:36 or 11:36 where Hans Peter is. I’m not sure where you are, Jan.)
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 43 ]
|
|
Senior member
Total posts: 494
Joined: Jan 27, 2011
|
Jan Bogaerts - Feb 26, 2011: isn’t sleep somehow important to ‘forget’ or transfer from short term to long term memory?
For humans, sleep happens to be the state in which this seems to occur, although there are different schools of thought on this. The most prevalent idea is that our short-term memory is reset during REM sleep, and that this ‘resetting’ is the source of dreams.
However, I don’t see the relevance of sleep (in itself) for AI, as sleep is largely a biological/physiological thing for humans. And I’m not looking to build a human. A ‘simple’ algorithm that dumps non-relevant stuff after a certain period of time will suffice.
I have already been thinking about the timespan, and I guess it simply depends on how much memory is available for short-term storage. The timespan will depend on how fast this memory fills up and how much of that information is shifted to long-term memory (thus freeing up short-term memory).
Thinking a bit more on this, I can even see how it would be possible to induce a dream-state, where information close to the end of the short-term timespan is mingled at random (with some long-term material mixed in, to create ‘handles into reality’) and then tested once more against long-term stored concepts before being dumped for good. To make this work we might indeed have to introduce a ‘daily cycle’ into the system, which would also help to align the AI with ‘our reality’.
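A sketch of what one such consolidation pass could look like (the timespan, the number of long-term ‘anchors’ and all names are assumptions for illustration):

```python
# 'Dream-state' pass: items near the end of the short-term timespan are
# shuffled together with some long-term anchors and given one last
# validation pass before being dropped for good.

import random
import time

SHORT_TERM_TTL = 60 * 60 * 24        # assumed timespan: one 'daily cycle'

def dream_cycle(short_term, long_term, now=None, validate=None):
    """One consolidation pass; returns the surviving short-term items."""
    now = now or time.time()
    expiring = [m for m in short_term if now - m["created"] > SHORT_TERM_TTL]
    fresh    = [m for m in short_term if now - m["created"] <= SHORT_TERM_TTL]

    # Mix in some long-term items as 'handles into reality'.
    anchors = random.sample(long_term, min(3, len(long_term)))
    random.shuffle(expiring)

    for memory in expiring:
        # Last chance: test once more against long-term stored concepts.
        if validate and validate(memory, anchors):
            long_term.append(memory)     # promoted after all
        # otherwise the memory is dumped for good
    return fresh
```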
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 44 ]
|
|
Senior member
Total posts: 697
Joined: Aug 5, 2010
|
Dave Morton - Feb 26, 2011: It is if you do, I guess. I’ve more or less forgotten what sleep is like.
(it’s 1:36AM here, and I’m guessing it’s around 10:36 or 11:36 where Hans Peter is. I’m not sure where you are, Jan.)
I was wondering about that; posts from the US so early in the morning.
My time-zone by the way, is the same as Hans Peter’s.
|
|
|
|
|
Posted: Feb 26, 2011 |
[ # 45 ]
|
|
Administrator
Total posts: 3111
Joined: Jun 14, 2010
|
That’s good information to know, Jan. Thank you.
We now return you to your regularly scheduled thread.
|
|
|
|