In a highly regarded (4.6 stars out of 5 on Amazon) book, The Myth of Artificial Intelligence, Erik Larson makes the case that AGI is much more difficult and farther out than some of the hype might lead people to think.
The Amazon capsule description makes it clear that Larson's position is that such hype is wrong:
Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake.
To the degree that his message is aimed at hype, he is probably correct. The problem is that not all predictions of AGI are hype or hyperbole, and those describing potential AGI outcomes are not necessarily enthusiasts.
One could make the case that Larson's book is a good reference for those seeking to capitalize on the expertise and work of past researchers like Larson: it catalogs the methods that have failed so far, or at least those that don't look promising and sit in other researchers' blind spots.
Though I haven't read the book, the main challenge I see to its thesis is that Larson does not address all of the AGI "construction cases", one of which is design through evolution.
Today I bumped into this exchange, in which Ed Hagen cites a research paper on design through artificial evolution:
A cautionary tale:
— Ed Hagen (@ed_hagen) May 13, 2022
Explorations in design space: unconventional electronics design through artificial evolution https://t.co/XUpiwwsciH pic.twitter.com/H1LJfZyg0y
The paper is available here. If the link doesn't work, use Google Scholar and search for "Explorations in Design Space: Unconventional electronics design through artificial evolution".
Just in case the Twitter message is deleted, I've reproduced the snapshot of the abstract that Hagen cited here:
The primary point is this: Evolution can generate unusual designs that appear to be well outside the thought patterns of experienced designers. If AGI is possible, it is quite likely that it would be built using a design that Larson is unlikely to try or think would work.
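To make the idea concrete, here is a minimal genetic-algorithm sketch in Python. It is not the setup from the cited paper, just an illustration of the general technique: candidate designs are encoded as genomes, scored by a fitness function, and improved by selection, crossover, and mutation, with no human designer steering the search. All names and parameters here (the OneMax toy objective, population size, mutation rate) are my own assumptions.

```python
import random

random.seed(0)

GENOME_LEN = 32    # length of each candidate "design"
POP_SIZE = 50      # candidates per generation
GENERATIONS = 100  # search budget
MUTATION_RATE = 0.02

def fitness(genome):
    # OneMax toy objective: count of 1-bits.
    # A real design task would score a simulated circuit instead.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve():
    # Start from a random population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENOME_LEN:
            break  # found a perfect "design"
        # Keep the fitter half; breed mutated children from it.
        parents = pop[: POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The point of the sketch is that nothing in the loop encodes how a human would solve the problem; the search is free to wander into regions of design space a designer would never consider, which is exactly the behavior the paper reports for evolved electronics.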
In the Twitter thread Hagen goes on to cite another paper (actually, a 1993 Scientific American article) about the barn owl's ability to locate prey in total darkness using differences in acoustic arrival times. The mechanism at first seemed impossible given the available "neuron toolkit," but once it was known to be possible, speculation about the design followed, and the design was then found to be present in barn owls.
All of this fits neatly into Arthur Clarke's first law:
When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Detecting emergent AGI will then be even more difficult because we've been told it is not possible, or not possible at the present time, and the first discovery will not occur in a "name" lab. By the time the "impossible" is detected and confirmed, much more time will have passed than we would have liked.
[edited 7/8/22 to change book link, rephrase a sentence, and clarify the "hype" comments]