Experienced member
Total posts: 72
Joined: Sep 6, 2015
Below is the link to the letter and a copy of it. Many have ridiculed the letter, but is any of it really unreasonable? A lot has happened in robotics and AI this year, and at the very least more big names are getting involved. Whether any of it represents real progress I cannot say with any certainty, because honestly I am not qualified to judge. But the ball is rolling. Perhaps the letter itself was a publicity stunt, since the best way to get attention and free press is to say something alarming. What are your thoughts?
http://futureoflife.org/ai-open-letter/
An Open Letter
RESEARCH PRIORITIES FOR ROBUST AND BENEFICIAL ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.
In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.
Posted: Jan 2, 2016 [ # 1 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
I skimmed it at the time, and it isn't unreasonable; if anything, it is too reasonable to be worth remarking on. It is like writing a letter calling for an official pardon for Alan Turing long after everyone who knew him has died, or calling for a consensus not to start World War 3 in peacetime. It is a nice sentiment and everyone will nod in agreement, but it has little bearing on current practice. The areas of research it outlines are still the current areas of advancement, and it is hard to see how machine translation and image classification pose a danger. In the meantime, the military robotics project BigDog has been shelved as impractical for the battlefield, for reasons entirely unrelated to moral concerns.
Posted: Jan 2, 2016 [ # 2 ]
Experienced member
Total posts: 72
Joined: Sep 6, 2015
Well, with regard to the military: if they either remove humans from the battlefield entirely and use only robots, or find a way to make the motors quieter, those robots could be back in business.
Posted: Jan 2, 2016 [ # 3 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
I read it and quickly moved on. It's like worrying about the sun exploding in three billion years' time. I recall the DARPA Robotics Challenge in 2015, where even a simple doorknob fooled the best machines.
Posted: Jan 3, 2016 [ # 4 ]
Experienced member
Total posts: 72
Joined: Sep 6, 2015
I am worried about the humans creating them, not the programs themselves. AI only does what it is programmed to do. Humans, however, are not known for being altruistic.
Posted: Jan 3, 2016 [ # 5 ]
Experienced member
Total posts: 72
Joined: Sep 6, 2015
I do have to wonder, however, just how capable a computer program can be when half the time computers themselves do not work right. If cars worked as well as computers do, we would still be using horses. lol.
Posted: Jun 11, 2016 [ # 6 ]
Member
Total posts: 26
Joined: Jun 11, 2016
The real question is whether it will be AI or genuinely intelligent, so which side do you believe? If it is AI, then a human is still controlling it; that is what we have now, but we are finding that it is intelligent. I will use the term AI so that everyone understands what I am trying to say. We are finding that AI will have its own personality. Most people in AI think it will be the same as humans, but it is not like that, and I can never read things like that. Think of when a mother has a baby: she can teach it, but as the kid grows, some will grow up good and some bad, just like now. The idea of keeping control over AI, I would simply forget. Our best hope is that there are more good ones than bad ones, and the good ones will have the same powers as the bad, since they are using the same programs. A good way of thinking about it is that right now we have AI and I, but maybe eventually just I. The real question is how fast it will learn, and I would push you toward the idea that it will learn as fast as a human baby.
Posted: Jan 3, 2017 [ # 7 ]
Experienced member
Total posts: 65
Joined: Aug 27, 2012
Sheryl Clyde (#2) - Jan 2, 2016: Below is the link to the letter and a copy of it. Many have ridiculed the letter, but is any of it really unreasonable? [...] What are your thoughts?
I've been away for a long while, but am catching up… this is interesting, to say the least.