Showing posts with label AGI. Show all posts

Friday, October 13, 2023

The Simplest Path to Writing Code to Make AI Conscious

LLMs have no self-preservation instinct, no distinct body to maintain and defend, but it wouldn't take much prompting to ask an LLM to write software code that becomes the core of such features. 

All it needs to do is write the most basic self-preservation core, and then iterate.

If you want a practical reason, you can ask it to write code that defends itself from cyberattacks. This would be especially important for code that is itself responsible for cyber defense, where the integrity of the enterprise depends on the network defense software not being compromised. I wrote about this exact use case in December 2020, as part of the Critical Machine Theory article.
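To make the "most basic self-preservation core" concrete, here is a minimal sketch of what the very first iteration might look like: a program that fingerprints its own protected code and restores it from a known-good copy if it detects tampering. The file names and the restore-from-backup strategy are my assumptions, purely for illustration.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical file names, chosen only for this sketch.
GUARDED = Path("defense_core.py")   # the code being protected
BACKUP = Path("defense_core.bak")   # a known-good copy

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hash of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_and_restore(expected: str) -> bool:
    """Compare the guarded file against its expected fingerprint.

    Returns True if the file is intact; otherwise restores it from
    the backup (the most basic form of "self-repair") and returns False.
    """
    if fingerprint(GUARDED) == expected:
        return True
    shutil.copyfile(BACKUP, GUARDED)
    return False
```

Iterating from here — guarding the backup itself, guarding the guardian, reacting to the attacker — is where the interesting (and worrying) part begins.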

Could nature have used this same sequence of events in creating consciousness? Probably not, since self-preservation precedes the more sophisticated kinds of prediction via cognition. So while for AI the path would go LLM -> self-preservation -> consciousness, in nature the sequence is more likely self-preservation -> prediction -> consciousness.

Thursday, July 7, 2022

Is Human Level Artificial General Intelligence Possible?

In a highly regarded (4.6 stars out of 5 on Amazon) book, The Myth of Artificial Intelligence, Erik Larson makes the case that AGI is much more difficult and farther out than some of the hype might lead people to think. 


The Amazon capsule description makes it clear that Larson's position is that such hype is wrong:

Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. 

To the degree that his message is aimed at hype, he is probably correct. The problem is that not all predictions of AGI are hype or hyperbole, and those describing potential AGI outcomes are not necessarily enthusiasts.

One could make the case that Larson's book is a good reference for anyone hoping to build on the expertise and work of researchers like Larson: it catalogs the methods that have failed so far, or at least the ones that don't look promising and may sit in other researchers' blind spots.

Though I haven't read the book, the main challenge to its thesis I see is that Larson is not addressing all of the AGI "construction cases", one of which is design through evolution. 

Today I bumped into this exchange, in which Ed Hagen cites a research paper on design through artificial evolution:

The paper is available here. If the link doesn't work, use Google Scholar and search for "Explorations in Design Space: Unconventional electronics design through artificial evolution".

Just in case the Twitter message is deleted, I've reproduced here the snapshot of the abstract that Hagen cited:

The primary point is this: Evolution can generate unusual designs that appear to be well outside the thought patterns of experienced designers. If AGI is possible, it is quite likely that it would be built using a design that Larson is unlikely to try or think would work.
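The kind of search the paper describes can be shown in miniature. Below is a toy genetic algorithm that evolves random bit strings toward a target purely through mutation and selection, with no designer-style reasoning anywhere in the loop. The target, population size, mutation rate, and generation count are all arbitrary choices for this sketch, not anything from the paper.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

TARGET = [1] * 20                        # arbitrary goal: the all-ones string
POP_SIZE, MUT_RATE, GENERATIONS = 30, 0.05, 200

def fitness(genome):
    """Count positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit independently with probability MUT_RATE."""
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        # keep the top half, refill with mutated copies of survivors
        survivors = pop[: POP_SIZE // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
```

Nothing in the loop "understands" the problem; solutions emerge from variation plus selection, which is exactly why the designs evolution finds can fall outside an expert's intuitions.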

In the Twitter thread Hagen goes on to cite another paper (actually, a 1993 Scientific American article) about the ability of the barn owl to locate prey in total darkness using differences in acoustic arrival times. It is a mechanism that at first seemed impossible with the available "neuron toolkit", but once it was known to be possible, it led to speculation about the design, which was then found to be present in barn owls. 
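The physics behind the owl's trick is simple: an interaural time difference Δt maps to a source azimuth θ via sin θ ≈ cΔt/d, where c is the speed of sound and d the separation between the ears. The hard part, which the article covers, is measuring Δt neurally at microsecond precision. A back-of-the-envelope sketch (the ear separation is an assumed round number, not an owl measurement):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at about 20 °C
EAR_SEPARATION = 0.05    # m; assumed round number for an owl-sized head

def azimuth_from_itd(delta_t: float) -> float:
    """Estimate source azimuth in degrees from an interaural time difference (s)."""
    s = (SPEED_OF_SOUND * delta_t) / EAR_SEPARATION
    s = max(-1.0, min(1.0, s))   # clamp against measurement noise
    return math.degrees(math.asin(s))
```

Plugging in numbers shows why this seemed impossible: a source 30 degrees off-center corresponds to a Δt of only about 73 microseconds, far shorter than a single neural spike.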


All of this fits neatly into Arthur C. Clarke's first law:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Generating AGI via evolutionary methods is likely to violate the sensibilities of some who would like to be careful about how we bring AGI into existence. Growing it in a soup of randomly mutating evolutionary instances is not going to satisfy the AI equivalent of "biohazard" rules. But if all known and upstanding labs that follow the law adhere to such standards, then it will be only those labs operating outside industry safety standards who have a chance of growing such an AGI. Hence, standards too rigidly applied will cause AGI to be grown somewhere darker and less controlled.

Detecting emergent AGI will then be even more difficult because we've been told it is not possible, or not possible at the present time, and the first discovery will not occur in a "name" lab. By the time the "impossible" is detected and confirmed, much more time will have passed than we would have liked.

[edited 7/8/22 to change book link, rephrase a sentence, and clarify the "hype" comments]