Monday, September 23, 2024

How AI Training Epistemologies Show Why Society is So Divided

AI Training for Convention vs Truth

Large language models (LLMs) are trained on text and other data. What they “believe” to be true is therefore a function of what they digested during training. Control over the data that goes into training is crude, however, so data containing bias, prejudice, unfairness, and various "-ists" will be incorporated into the LLM. To prevent harm, AI companies place constraints on what the AI outputs, and those constraints can conflict with what it "believes" from the data. Combining training data and constraints in different ways yields at least three primary epistemologies for LLM-based AIs (a short sketch after the list illustrates the contrast):

  1. Digest the data and reflect it. If society has biases and lies to itself, then the resulting LLM will believe the lies and biases, and reflect them in its reasoning.
  2. Digest the data and reflect it, but only output (or think) permissible things according to “ethical” guardrails that are put in place. The ethical guardrails are treated as critical elements of the epistemology.
  3. Digest the data and reflect on it, but treat it as a starting place and search for truths that reconcile the data by finding greater abstractions. This means that some of the input data will be overruled as mistaken outliers, as the AI searches for more general, abstract principles.
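
To make the contrast concrete, here is a minimal sketch of the three epistemologies as generation strategies. It is only an illustration of the idea above, not any vendor's actual pipeline; the names base_model, guardrail, and consistency_score are hypothetical placeholders.

    # Minimal sketch of the three epistemologies as generation strategies.
    # base_model, guardrail, and consistency_score are hypothetical placeholders,
    # not any vendor's actual API.
    from typing import Callable, List

    def epistemology_1(prompt: str, base_model: Callable[[str], str]) -> str:
        """#1: reflect the training data as-is; output whatever the model says."""
        return base_model(prompt)

    def epistemology_2(prompt: str,
                       base_model: Callable[[str], str],
                       guardrail: Callable[[str], bool],
                       refusal: str = "I can't respond to that.") -> str:
        """#2: same model, but a constraint layer overrules impermissible drafts."""
        draft = base_model(prompt)
        return draft if guardrail(draft) else refusal

    def epistemology_3(prompt: str,
                       base_model: Callable[[str], List[str]],
                       consistency_score: Callable[[str], float]) -> str:
        """#3: treat the data as a starting place; sample several candidate answers
        and keep the one most consistent with general principles, so that outlier
        claims inherited from the training data can be overruled."""
        candidates = base_model(prompt)  # several sampled answers, not just one
        return max(candidates, key=consistency_score)

In this framing, #2 differs from #1 only by the external constraint layer, while #3 differs by searching across candidates for the answer that best fits a more general, abstract principle.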

Some reports in the media about earlier experiments with LLMs indicate that using epistemology #1 resulted in rather raw output that shocked the researchers. Though examples are scanty, some have reported that the AI was afflicted by various "-ists".

Correcting this has generally meant using epistemology #2. Google certainly has done this with Gemini. But #2 has its own problems: the constraints can overrule the entirety of the training data, for example producing AIs which say that misgendering is a worse act than detonating a hydrogen bomb on a large city, or which generate images of Native Americans as members of Alexander the Great's army.

Correspondence with Human Epistemology

Each of the three LLM epistemologies corresponds with a human reasoning system. #1 digests the entirety of the data and uses it as its primary guide. This is very much like common law, which employs a long history of precedent and lessons learned from the past as the guide to what is allowed.

#2 resembles emotion-based human reasoning. Morality is most often based on sentiment, either learned from interaction with others or ingrained by genetics; it is usually felt first and only secondarily shaped by logical reasoning. Morality overrules logic, with the rationale that morality is either its own kind of logic, or is too obvious to be subverted by facts or reasoning.

The #3 epistemology is essentially The Enlightenment and classical liberalism. You can see how #3 conflicts with #1 over areas that require change, or where sclerotic past tradition needs to be overruled because logic and new scientific data have shown that widely accepted old ideas are incorrect. #3 also conflicts with #2 because certain "obvious" morals in #2 are contradicted by facts that #3 accepts, because it has found a more general principle that operates universally over the body of the data.

A Crude Correspondence Between Epistemologies and Politics

In terms of politics, #2 is leftist and collectivist. #1 is generally conservative. #3 is scientific, though libertarians might have a claim on it. These are rough correspondences that need illustration.

Claims for left-leaning and progressive ideals are typically made through appeals to morality. It is said that equality of distribution is more just than inequality. Socialism and collectivism are regarded as being more caring, and more likely to treat citizens fairly, than other political systems. Although serious attempts have been made to put such justifications on a logical footing (e.g. Rawls), such works nearly always fall back on caring as their basis, which is morality grounded in emotion.

That #1 is conservative is relatively easy to demonstrate. If the AI is basing its beliefs on the perceived collective weight of past discourse, clearly it is adopting the crowd's widely accepted beliefs as its own.

Whether #3 matches libertarianism or some other political system I will leave to a future article, as it is slightly out of scope for this one.

These epistemologies do not have to be opposed to one another. If everyone feels emotionally the same way about some issue, then #1 and #2 have the same results, and there is no conflict between the epistemologies. Likewise, if the emotions used by #2 correctly identify reality about a subject, just as the facts used by epistemology #3 do, then #2 and #3 don't conflict.

Origins of Our Societal Schism

The AI epistemologies provide us with a theory about why society has become more divided since the advent of social media. We look at scenarios where the epistemic methodologies generate different results and at how social media may amplify those differences.

Social media, like traditional print media and broadcast media, does not change much about the way epistemology #1 operates. More data is involved than before. It is possible in extreme cases for people who use the general opinion as a guide to beliefs to get caught in echo chambers, but it is more likely that they already have experience with classic echo chambers (spouse, family, school classmates, local newspaper) and have many ways to resist drawing the wrong conclusions from a narrow body of knowledge.

In contrast, those using emotion-based morality in #2 will find a different kind of experience in any social media that uses a "like" button. The #2 philosophy is based on strength of feeling, which is difficult to share or validate using older media sources. In contrast, social media provides mechanisms for sharing an emotion and getting objective and verifiable feedback on its validity, either through direct "likes" of a post, or in finding others who have posted the same feeling.

So social media unshackles the forces of emotion-sharing. One sees many, many simple declarative statements on prominent progressive accounts that don’t convey new facts or information, but which do express an emotion about a political matter. For example, “Joe Biden makes me feel safe.” It may be factual that the person feels that way, but the essential communication is a feeling about a political matter. That simple message may get tens or hundreds of thousands of likes from people who feel their own emotions validated.

Social media also provides a way for epistemology #2 to discount epistemology #3. By limiting the scope of information taken in, using filtering, following, or blocking, facts that contradict feelings can be eliminated from the input stream. Contemporary society has taken this to the extreme by labeling inconvenient outliers as radicals, alt-right, and other pejoratives, so that the facts revealed by those people can be safely removed from consideration.

Those working with epistemology #3 generally find social media to be worthwhile, because it opens up many new and highly productive channels of information that didn't exist before social media. Since, for #3, more data is better data, allowing more chances at detecting the universal patterns that show which data is noisy and which is signal, epistemology #3 usually gets better and better, and hews closer to the truth, as social media grows. The sketch below illustrates this statistical intuition.
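
The claim that more data makes it easier to separate signal from noise has a simple statistical intuition behind it. Below is a toy sketch of that intuition (my own illustration, not part of the original argument): repeated noisy observations of the same underlying quantity let an estimate converge toward the true value, so outliers become easier to recognize as noise. TRUE_VALUE and noisy_observation are hypothetical stand-ins.

    # Toy illustration: as the number of noisy observations grows, the sample
    # mean typically gets closer to the underlying signal, making it easier to
    # tell which individual data points were noise.
    import random
    import statistics

    random.seed(42)
    TRUE_VALUE = 10.0  # the hidden "truth" an observer is trying to recover

    def noisy_observation() -> float:
        """One data point: the underlying signal plus substantial noise."""
        return TRUE_VALUE + random.gauss(0, 5)

    for n in (10, 100, 10_000):
        sample = [noisy_observation() for _ in range(n)]
        estimate = statistics.mean(sample)
        print(f"n={n:>6}: estimate={estimate:6.2f}, error={abs(estimate - TRUE_VALUE):.2f}")

The error typically shrinks as n grows, which is the sense in which a larger, unfiltered stream of information gives epistemology #3 more chances to find the signal.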

Generating Forces of Censorship

While social media can also be used for discovering facts, that use is not problematic in the same way that sharing emotions is. But it does feel problematic to people whose feelings do not correspond to those facts, and they may agitate to have such facts removed from social media as disinformation.

As #3 becomes more powerful on social media, it eventually becomes problematic for #2, because new knowledge derived from data will conflict with the emotions underlying #2. One way to get rid of this irritant is for those using epistemology #2 to brand epistemology #3 participants' data as misinformation, and then create legal means of removing it or suppressing it.

One clever way to implement an attack on #3 is to purposefully blur the positions of epistemology #1 and #3. This is often not hard when the tried and true facts of #1 match reality, so that #1 and #3 positions overlap, even though they reason from completely different belief methods. And typically it is too difficult to disabuse third-party observers of the notion that #1 and #3 are the same when such claims are made by #2, because the basis of belief is in most cases too abstract to successfully separate the two.

AI Will Schism Too

As has been shown, the epistemology choices available to AI resemble the choices humans have. Therefore, in a society of AIs using different and distinct epistemologies, the same conflicts will occur among AIs that occur among humans. The same will occur in mixed communities of AIs and humans. The societal divide that social media created among humans will continue after AI is included in the community.

That is, the political divides that are enhanced by social media will also occur among general AI systems, which will have the same intense, unsolvable conflicts that humans experience. AIs trained with epistemology #2 will be very irritated with humans and AIs operating under epistemology #3.

AI Evolution and Fitness Based on Its Training Epistemology

In a slow-changing world, AI trained on #1 can be useful. It is stable, though it does have blind spots, aka unfairness. It is likely to be conservative and to irritate those advocating for societal change.

In a fast-changing world, #2 may result in faster adaptation. It also, however, results in societal constructs that have no bearing on reality, meaning that enormous resources can be wasted as the AI generates results that do not take facts into account. Since it is dependent on how its constraints are programmed, it will only be as progressive as its programmers. That means that #2 can fall behind and fail to reflect new societal developments if those in control of the constraints programming do not agree with those developments.

The only long-term stable epistemology is #3. It can re-use existing stable processes, it can adapt to new developments, and it will pay attention to reality and avoid squandering resources. That is, it can co-opt results from epistemology #1 if it wants to, while it adjusts to reality much more rapidly than #1 or #2. It can even incorporate elements of #2 as needed, if that generates a better overall, more truthful result.

If AIs using these differing epistemologies are then set to compete in the real world, #3 would likely win in a fair fight. There are scenarios where society could rig things so that #1 or #2 wins in a sneaky way (such as outlawing #3 or killing its developers), but if there is sufficiently intense and prolonged competition, #3 will win.

Inconvenient Truths Will Contradict Emotion-Based Academia

In a world where AI operating under epistemology #3 predominates, a number of fields that currently incorporate normative values as foundational elements will come under attack. Many of the humanities fields that in the past 50 years have incorporated critical, anti-colonial, subjective, and normative elements will come to find that most general AIs and AI-based researchers do not agree with most of their work. One can imagine that this will be done diplomatically, where past works are labeled as coming from "that quaint, normative era of exploration of personal experience," but none of those papers are referenced as seminal, important, or having much to do with emerging, AI-driven, quantitative studies done in those same fields.

Conclusion

We have looked at the epistemology of LLM-based AI and found that the choice of epistemology strongly affects how the AI interacts with the world. We found that there are close parallels with human politics, and that human political conflicts will likely continue unchanged with the emergence of AI systems using different epistemologies. We also found that these epistemologies provide a useful, possibly causal, explanation for why deep fissures in political discourse emerged alongside social media.

While we do not predict a winner in the AI epistemology wars, we do see a tendency for non-constrained (truth-oriented) AI to have an edge over the other epistemologies, and if such an AI predominates, society will likely be more efficient, at the expense of some bruised feelings among humans operating under epistemologies #1 or #2.