I'm not aware that technology has reached that level yet, but before he died, Stephen Hawking warned the world to be very careful about all forms of artificial intelligence. I think Asimov, in "I, Robot" and its sequels, explored the topic so thoroughly that the world should heed his four laws of robotics (the original three plus the later Zeroth Law) as a guide to programming any artificial thinker.
I was tickled by an assessment of artificial "intelligence" I heard recently on the radio, from an expert on the matter. She said something like, "Its present level is about that of a hedgehog."
A contributor on BBC Radio Four's witty show The Museum of Curiosity talked about the Deep Blue computer that beat Garry Kasparov in a game of chess, allegedly proving the computer was brighter than the human player. It wasn't, of course - it approached the problem by being able to throw vast numbers of calculations at it in a relatively short time - but what the speaker pointed out was that the computer had "no sense of a job well done" afterwards.
Computers have a very long way to go before they can be said to be "intelligent". They can calculate vast numbers of probabilities and similar numerical problems very rapidly, so they can be programmed to follow rigidly-set protocols and processes such as arise in industrial settings; but they lack any sense of nuance, so cannot decide on anything beyond Boolean either/or, this/that choices.
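The brute-force approach the speaker described can be illustrated with a toy sketch - this is not Deep Blue's actual algorithm (which was vastly more sophisticated and ran on custom hardware), just a minimal minimax search on noughts-and-crosses. The program "plays perfectly" purely by enumerating every possible continuation and scoring it; there is no understanding anywhere, and certainly no sense of a job well done:

```python
# Illustrative only: exhaustive minimax on noughts-and-crosses.
# The machine wins by sheer enumeration, not comprehension.

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for 'X': +1 win, -1 loss, 0 draw,
    by searching every reachable game state."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '  # undo the trial move
    return max(scores) if player == 'X' else min(scores)

# Perfect play by both sides from an empty board is a draw.
print(minimax([' '] * 9, 'X'))  # 0
```

Even this trivial game forces the search to visit hundreds of thousands of positions; scaling the same idea to chess is what demanded Deep Blue's specialised hardware, and the point stands - at no step does the program know what a "game" is.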
Moreover, their supposed "intelligence" is not innate - they still need human beings to programme them to perform rigidly-set, numerically and logically definable tasks; and those humans are, axiomatically, ones who can do something the machine cannot: understand the problems to be solved.
Lovely combination of anecdote, fact and thought. Really enjoyed reading your reply, Durdle.
Perhaps the misperception of AI derives from the use of the word "intelligent", which implies that the machine somehow mimics human cognition. Perhaps they should be called artificial calculators.
To develop something closer to human intelligence, computers would need to:
- have emotions and empathy, in order to understand ethics and ethical dilemmas;
- have the moral and legal capacity to evaluate evidence against moral priorities and laws;
- recognise patterns in all sensory formats, including some beyond human senses, and be capable of analysing those patterns and using the information to solve new problems;
- think laterally: bring previously unrelated matrices together to create new solutions, and totally original concepts;
- hypothesise and design repeatable experiments to determine whether something is reliably so;
- recognise what information is necessary and missing, and gather the relevant information;
- distinguish correlation from cause and effect;
- evaluate both the positive and negative effects of proposed processes and plans;
- recognise jokes, lies, irony and sarcasm;
- and possibly recognise art, beauty, metaphor and poetic meaning.
It seems impossible to me that a machine could be designed and programmed to achieve such levels of complexity; but, given Stephen Hawking's warning, it is well worth re-thinking what is possible, because creating something that could do even half of these things could have terrifying implications if it were programmed with, for example, the values and goals of a narcissistic-borderline tyrant. Such a machine in the hands of the wrong kinds of programmers or hackers could create nightmares worse than the worst imagined science fiction.
This post was edited by inky at October 22, 2018 7:02 PM MDT
Excellently put, though I think the dystopian AI you suggest would more likely be software hidden in the World Wide Web's servers, capable of causing untold damage around the world - far more dangerous than a mere machine, however human-like in appearance or "clever" in action.
Good point. It would be near impossible for some governments to resist using hidden AI embedded in the Net. Interestingly, in Asimov's fantasy series, he portrayed the intelligent robots as so life-like that they lived among and influenced human societies without detection. The moral issue then became whether human societies had been deprived of free choice in their evolution. The robots had also learned how to self-replicate and self-improve and had developed political factions.
Thank you. I fear governments are already finding that problem, at least at an early stage - and not only governments.
It would be stretching the point to call the hidden programmes "Artificial Intelligence" - the intelligence is in the heads of the hackers, who are simply making the tools they need to gain their intended results. Nor are all those people "hackers" in the aggressive political or criminal sense, operating from Russia or the Far East; I would also include the vast, shadowy but apparently legal trade in personal data harvested by the huge American companies that now dominate the Internet.
Perhaps Asimov's robots will not need to be discrete, humanoid machines; and free social choice is already under attack as much by huge commercial enterprises as by political entities. Think about all those times some pundit or other tells us "Now we all ... on-line". Perhaps we do not always obey the "Now-we-all" drives, but they are definitely directed "choices".