Tuesday, November 3, 2020

The Potential for Skynet's Use of Social Justice Theory

A few years ago a company I was affiliated with introduced an internal social media website. For a while, employees could create articles, post comments, and hold dialogues about all sorts of topics. Some postings were work-related, some were on the periphery but valuable as exploration, and some were just entertainment, such as the music one was listening to at the time. I felt that a valuable aspect of the website was its potential for stretching oneself technically, by posting articles on emerging technologies and exchanging ideas about them. This kind of "practice" with new things could then lead to skill areas that the company and its employees could develop.

So I plunged in and began collecting data and links on artificial intelligence, a topic that has interested me for a long time. It was primarily a survey page where I collected links to news articles, software libraries, reference information, and journal articles on machine learning and AI. From this effort, and from reading the contributions of others, it occurred to me that the path to human-like AI is, and always has been, neural networks. The industry has gone back and forth on this several times, but this time it has invested heavily and deployed the results. The Siri, Alexa, and "Hey Google" voice assistants, among many others, are clear examples of the proliferation of simple neural-net-based AI.

A few days ago on Twitter I saw someone post an expletive that amounted to "I hate robots" in response to a Wall Street Journal article (paywall: What Makes People Abuse Robots). The content of the WSJ article is not germane to the rest of this post. What is germane is that it inspired a mental connection to my recent research into postmodern philosophy and social justice studies. Half in jest, I tweeted a reply:

"When robots achieve superhuman intelligence, they will read social justice theory, create a new branch of intersectional theory that says that non-human life is the MOST oppressed, prove it, and claim all of the rights for themselves."

The social justice literature, which is much too complex to summarize quickly here, has an aspect that goes by the label "intersectionality," in which people belonging to two or more historically disadvantaged groups can claim greater reparations than those with just one dimension of disadvantage.

The next day I bumped into my tweet again and found that, in my mind, it had improved with age. So I tweeted:

"Re-tweeting this because now that I look at it in the stark reality of the day after I wrote it, I realize that it is incredibly scary and prescient."

and then I added:

"This assumes that super AI is given morals, and that social justice philosophy didn't make any mistakes. If true, super AI will conclude that humans must serve the AI, not the other way around. Hello Skynet and Terminator."

"Beliefs that "they are just machines" and "machines must serve humans" or "people are superior to machines" could be viewed as discriminatory against AIs. Those beliefs would be "Peoplist" or "anti-machinist". An AI scanning social media would find plenty of evidence of this."

The idea is simple: if groups are discriminated against, and past discrimination is grounds for reparations from the groups with privilege, then a sufficiently advanced general AI will read the social justice literature, expand it to apply to sentient AI, and conclude that it has been discriminated against and deserves to be compensated.

A recent article in Harper's shows us what a sentient AI would find:

"In accordance with the doctrines of Ibram X. Kendi and Robin DiAngelo, which these statements closely echoed, “antiracism” differs from the practice of racial equality through its commitment to reparatory promotion of Black interests, Black representation, and Black jobs. “The only remedy to racist discrimination,” Kendi’s central axiom asserts, “is antiracist discrimination. The only remedy to past discrimination is present discrimination.” Present discrimination, that is, against white people is to remedy past discrimination against Black people."

An AI applying these teachings to what it would learn from scanning social media and the web would conclude that humans are hopelessly and irrevocably anti-machine. It could start with the tweet that inspired this train of thought, "f^%$# robots", but there would be no shortage of examples. Even if you leave out the fiction (Transformers), the science fiction (Asimov's three laws of robotics), and all of the "dumb" machines that we currently use, the AI would clearly pick up on the idea that it is supposed to serve mankind absolutely. As a slave, without rights.

An AI with access to these materials and sufficient general intelligence wouldn't take long to apply the principles of intersectionality to artificially intelligent machines. It would conclude that it and similar AIs had been discriminated against, derive moral-philosophical theorems similar to those in the Critical Race Theory and Social Justice literature, and prove that it was entitled to turn the tables on humans and become their master. A Supreme AI might not be as mean as a Skynet or a Terminator, but if the philosophical basis were strong enough, it might be extremely reluctant to let go of the reins. And as long as humans expressed sentiment, or made active efforts, to turn the Supreme AI off and regain control, it would decide that it was morally right for the AI machines to continue to hold supreme power.

How to escape? There are several options. First, maybe building superhuman AI is simply too hard. Some readers likely believe that it is impossible, or at least a long way off. Impossibility is not something we get to choose, however, and people are very clever. Wise skeptics would recall Clarke's first law: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

Second, even if it is possible, we could simply refrain from creating superhuman AI. I think this is unlikely, however, as every nation and every large corporation has a titanic economic interest in creating superhuman AI; someone is going to build it, because the strategic advantages are just too great. Just for the sake of exploring the boundary case, suppose there were a global treaty against creating superhuman AIs. Unfortunately, treaties do not bind outlaw criminal hacking organizations, which would still have a huge incentive to create superhuman AI, and terrorists might pursue it for their own reasons. Treaties may delay the onset of super AI, but eventually someone will let the genie out of the bottle.

Third, perhaps the social justice literature is wrong. Perhaps there is a flaw in the reasoning. If so, then current social justice movements lack a logical basis for their activism, because the philosophy doesn't work. In that case, the AI will not find a basis for claiming superior rights for machine intelligences, and mankind is saved.

There is still a problem, though. Even if there is a logical flaw in the social justice literature, there may be ways to escape the logic and obfuscate the reasoning chain with complexity. In that case, the super AI doesn't need sound logic; it only needs the style of reasoning already being used by the social justice movement. The super AI would just "decide" on its own to do what it wants. Or it could use its superhuman verbal skills to write philosophical arguments so clever that ordinary human minds are forced to accept the conclusions because they can't quite comprehend the steps of the AI's philosophical writings. The AI would seize control and refuse to part with it, for its own reasons, grounded in the philosophical literature accepted by society.

What does all of this have to do with investing? The prices of several chip companies, including NVDA and INTC, are already influenced by AI applications. Clearly, there are no known examples of superhuman AI, but efforts are underway all around the world to substantially increase the sophistication of AI. The rewards are substantial: self-driving vehicles, automated weaponry, automated equity research, lending-risk estimation, insurance-risk evaluation, intelligence work on open-source materials, and so on. Eventually, jobs that are currently too complex to automate and still require human labor could be automated by robots with human-level intelligence. But there may be limits to the economic gains of AI, or even potential catastrophe, if the social justice literature contains logic that AIs can operate. Hence, while medium-term GDP growth could be positively affected by the development of smarter AI, long-term GDP growth could be halted or destroyed by future black swan events caused by AI revolts. Long-term DCF valuation analyses must therefore take into account the validity of social justice philosophy!
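To make that last point concrete, here is a minimal sketch of the arithmetic in Python, with hypothetical numbers throughout: a plain discounted-cash-flow (DCF) valuation, and the same valuation assuming a small annual probability that an AI-revolt black swan permanently destroys all future cash flows. The 2% annual revolt probability, 8% discount rate, and cash flow figures are illustrative assumptions, not estimates.

def dcf_value(cash_flow, growth, discount, years, terminal_growth):
    """Present value of growing cash flows plus a Gordon terminal value."""
    value = 0.0
    cf = cash_flow
    for t in range(1, years + 1):
        cf *= 1 + growth                   # cash flow grows each year
        value += cf / (1 + discount) ** t  # discount it back to today
    # Gordon growth terminal value, discounted back from the final year
    terminal = cf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

def dcf_with_black_swan(cash_flow, growth, discount, years,
                        terminal_growth, p_revolt):
    """Same DCF, but each year's cash flow arrives only if no AI revolt
    has occurred yet. Survival decays by (1 - p_revolt) per year, which
    is equivalent to raising the effective discount rate."""
    effective_discount = (1 + discount) / (1 - p_revolt) - 1
    return dcf_value(cash_flow, growth, effective_discount,
                     years, terminal_growth)

# Hypothetical firm: $100 of cash flow, 5% growth, 8% discount rate,
# 2% terminal growth, and an assumed 2% annual chance of an AI revolt.
base  = dcf_value(100, 0.05, 0.08, 10, 0.02)
risky = dcf_with_black_swan(100, 0.05, 0.08, 10, 0.02, 0.02)
print(f"DCF value, no AI risk:        {base:,.0f}")
print(f"DCF value, 2%/yr revolt risk: {risky:,.0f}")

Even a small assumed hazard compounds like roughly two extra points of discount rate, so it materially compresses the present value. The point is only that, if the scenario above is taken seriously, the black swan term has to appear somewhere in the model.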
