

Simple experiment showing that logic synthesis is a form of artificial intelligence
 
 

Hi,

Here I am again, sharing another insight from an experiment I did while trying to gauge the feasibility of true AI on conventional computers. As a backend IC design engineer, I have come to see logic simplification as very similar to human learning: we define learning as a form of generalization or abstraction that reduces the representation of an idea, which allows us to easily apply and associate it with other ideas in our brain.

The experiment applies logic synthesis to incompletely specified functions and checks its interpolation capacity and area reduction as the number of samples increases.
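To give a flavor of what I mean, here is a toy Python sketch of my own (not the actual synthesis flow from the write-up): it fits the smallest product term (cube) that agrees with a few specified samples of an incompletely specified function, then shows how that minimized term labels inputs it never saw, i.e. how minimization also interpolates. The sample values are made up just for illustration.

```python
from itertools import product

N = 4  # number of input bits (arbitrary for this sketch)

# Specified samples: input tuple -> output. Every other input is a don't-care.
samples = {
    (1, 1, 0, 0): 1,
    (1, 1, 0, 1): 1,
    (1, 1, 1, 0): 1,
    (0, 1, 0, 0): 0,
    (1, 0, 1, 1): 0,
}

def cube_matches(cube, x):
    """A cube is a tuple of 0, 1 or None (None = variable unused in the term)."""
    return all(c is None or c == xi for c, xi in zip(cube, x))

def consistent(cube):
    """The cube must cover every ON sample and exclude every OFF sample."""
    return all(cube_matches(cube, x) == (y == 1) for x, y in samples.items())

# Enumerate cubes from most general (fewest literals) to least general,
# and keep the first one consistent with all specified samples.
best = next(cube for cube in sorted(product((None, 0, 1), repeat=N),
                                    key=lambda c: sum(v is not None for v in c))
            if consistent(cube))
print("smallest consistent cube:", best)   # (1, 1, None, None), i.e. x0 AND x1

# How the minimized term labels the inputs it never saw (the "interpolation"):
for x in product((0, 1), repeat=N):
    if x not in samples:
        print(x, "->", int(cube_matches(best, x)))
```

The point of the toy is only that area reduction (fewer literals) and generalization to unspecified inputs come out of the same minimization step.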

You can read the experiment/research at this link:

https://arjayvisionsthoughts.wordpress.com/demonstration-of-artificial-intelligence-through-logic-synthesis/

I thought my experiment was something very new until, in the middle of writing, I encountered some works on artificial intelligence in logic design that take the same approach. Anyway, I finished the write-up for the sake of completeness as well as for feedback. Tell me if my assumptions are incorrect, or share any opinions about it. I'll continue my exploration of AI on conventional computer systems and may also apply it in my work.

Thanks in advance!!

 

 
  [ # 1 ]

That’s a very impressive piece of work, Arjeus, and I hope you will persevere. The one thing that is missing is a list of references describing what you have already learned on the subject.

It’s extremely rare and difficult for anything new to be invented, and most people now spend their whole lives just learning what has already been discovered. If they are lucky, they might notice something that everybody else has missed. The average age of Nobel laureates when they make their big discoveries is mid to late fifties, which might give you some idea of how much work lies ahead of you. However, you are off to a great start.

Some things that you might find useful in this line of enquiry are “Karnaugh Maps”, “Fuzzy Logic” and “Logical Resolution”. If you look for those terms on the internet, you will find a great deal of thought-provoking material which will help you along considerably. You might also find the work of Marcus Hutter inspirational.

https://en.wikipedia.org/wiki/Marcus_Hutter

Keep up the great work; I’ll be following your progress with interest.

 

 
  [ # 2 ]

Thanks for the encouragement and appreciation, Andrew. I hope I can find more time to do more research, as well as more advisers who are experts in the field. I am also planning to take an MS course in computer science next school year. For now I’m continuing my research as a hobby, and I hope I can become part of a bigger AI research effort, whether formally or informally, that would better direct me toward producing more innovative research.

I read about Marcus Hutter’s work, and about minimum message length as well, and that gives some good insight into what I’ve been researching about AI. There is a critical relationship between program optimization and machine learning, and that’s what I want to dig into further. Thanks for the information.

By the way, Andrew, do you know of any free online materials and tools about compiler design, especially for the Python and Prolog languages? Thanks and God bless.

 

 
  [ # 3 ]

Hi all, I’ll jump right in. I am also named Andrew (Andrés in Spanish).
My expertise is computational linguistics, and I developed a GLR compiler for natural language, so I think I can give you some advice.

First you have to study finite automata: see examples and master them. Next, you need to meet table-driven automata, and last you will need to understand push-down automata (stack + table driven). After this, you should read Chomsky’s definition of language generation, learn what kinds of grammars there are, and understand the differences between them.
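To make the “table-driven” idea concrete before you dive into the books, here is a tiny Python sketch of my own (not from any particular course): a deterministic finite automaton whose transition function is literally a lookup table, accepting binary strings with an even number of 1s.

```python
# Transition table: (state, symbol) -> next state.
TABLE = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
START, ACCEPTING = "even", {"even"}

def accepts(word):
    """Run the table-driven DFA over the word and test the final state."""
    state = START
    for symbol in word:
        state = TABLE[(state, symbol)]
    return state in ACCEPTING

print(accepts("1001"))  # True: two 1s
print(accepts("1011"))  # False: three 1s
```

A push-down automaton is the same idea plus a stack that is consulted and updated on every transition, which is what lets it recognize nested structures.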

Then you will know and master what a grammar definition is, and be able to read and write grammars using Backus-Naur notation.

Here you will learn what a parser is, which is (to give you some advance info) “a piece of software capable of telling whether a sequence of symbols belongs to the set of symbol sequences (sentences) generated by a certain grammar following the Chomsky rules”. Then, and only then, will you be able to understand what a compiler is: a specialized kind of parser which builds a representation of the grammar elements of a sequence and then, reading that representation by means of a push-down automaton, constructs another sequence of instructions which, placed in the environment of a runtime engine, builds up yet another automaton capable of executing what the grammar described (the executable code, a.k.a. the compilation result).
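As a concrete toy (my own sketch, not part of any formal curriculum), here is a tiny grammar written in BNF and a hand-written parser/evaluator for it in Python. The grammar is deliberately minimal and hypothetical: sums of integer literals only.

```python
# Grammar (BNF):
#   <expr>   ::= <number> ( "+" <number> )*
#   <number> ::= digit+

def tokenize(text):
    """Split the input into (kind, value) tokens."""
    tokens, i = [], 0
    while i < len(text):
        ch = text[i]
        if ch.isspace():
            i += 1
        elif ch.isdigit():
            j = i
            while j < len(text) and text[j].isdigit():
                j += 1
            tokens.append(("NUM", int(text[i:j])))
            i = j
        elif ch == "+":
            tokens.append(("PLUS", ch))
            i += 1
        else:
            raise SyntaxError(f"unexpected character {ch!r}")
    return tokens

def parse_expr(tokens):
    """Parse <expr> left to right; return its value or raise on a bad sentence."""
    pos = 0

    def expect(kind):
        nonlocal pos
        if pos >= len(tokens) or tokens[pos][0] != kind:
            raise SyntaxError(f"expected {kind} at token {pos}")
        value = tokens[pos][1]
        pos += 1
        return value

    total = expect("NUM")
    while pos < len(tokens):
        expect("PLUS")
        total += expect("NUM")
    return total

print(parse_expr(tokenize("1 + 20 + 300")))  # 321
```

A parser only decides (and structures) membership in the grammar; the step from there to a compiler is emitting instructions for a target machine instead of evaluating directly, which the sketch above skips.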

Just as simple as that! (if it is ever simple...)
Normally in university this is two semesters in the final years, involving deep training, so maybe at home, on your own, you can master it in a few weeks of reading and doing lots of exercises for at least 4-6 hours every day.

Don’t jump between topics or you will get lost! Use this order: deterministic automata, non-deterministic automata, automata equivalences, conversions and optimization, stack automata, table-driven automata, grammars, Chomsky, the concept of a lexer (search for LEX/JLEX/FLEX), which constitutes a light pre-parser usually called a scanner; then read about parser generators, then you can look at LR and LL parser generators like Coco and ANTLR, and finally the big brother, GLR parser generators like Elkhound, and even statistically driven parsers (as a research matter).
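To show what the scanner/lexer step looks like in isolation (a sketch of my own, not how LEX or FLEX actually generate their code), here is a tiny regex-based lexer in Python: it just turns raw text into a token stream for the parser that comes after it.

```python
import re

# Each rule is (token name, regex); rule order matters for ties.
TOKEN_RULES = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_RULES))

def lex(text):
    """Yield (token name, lexeme) pairs, skipping whitespace."""
    pos = 0
    while pos < len(text):
        m = MASTER_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"bad character {text[pos]!r} at position {pos}")
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())
        pos = m.end()

print(list(lex("x = 40 + 2")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```

A real generator like FLEX compiles rules of this kind into a table-driven automaton with longest-match semantics, but the job is the same: turn characters into tokens before parsing.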

I once did this in about 3 months, and finally built a special kind of NLP-GLR non-deterministic + statistical parser generator in C# on my own! (And after that, I tinkered with this parser for years!)

Just have fun! ;)

 

 