Monday, December 28, 2020

Life Deferral and the Discount Rates of the COVID Era

In the phrase "discounted cash flow" (DCF), the first word is "discounted," but most who work in equity valuation pay far more attention to the third word, "flow." The discount part of DCF analysis gets scarcely any attention. Go ahead, take a look at twenty randomly selected sell-side equity analyst reports and count the number of times changes in the discount rate come up. I'll wait.

In the Sept 26 issue of The Economist, Buttonwood[1] gives some much-needed attention to this frequently neglected topic. Citing Bernstein[2], he points out that wealthy societies tend to experience reduced discount rates. The logic is straight out of sociology: those who are well off are better able to defer gratification.

I'll go further than the Buttonwood article. Wealthy societies correlate with longer lives and a reduced probability of early death from external violence, such as invasion. In a violent environment, a dollar tomorrow has to promise a lot more than a dollar today to be worth not spending now. You can apply a familiar piece of psychological advice: "Live each day as though you were going to die tomorrow" implies a very high discount rate, because under this (happy and energetically life-embracing) philosophy, tomorrow's dollar is not nearly as important as today's.

How does that apply to the COVID-19 economy? Interest rates are very low, and despite the tumult and the destruction of economic activity, everyone is deferring a lot of gratification until next year, when they can start really living again. When you watch a lot of Netflix, refuel your car every other month, or sit in Zoom meetings all day in your pajamas in the spare bedroom, you are probably deferring your engagement with the world until the post-COVID economy.

If this scenario is right, then whenever the post-COVID economy does arrive, it could have robust economic growth combined with a sudden increase in the human-experience-grounded discount rate, with accompanying higher interest rates, higher inflation, and lower stock prices. 

When I do DCF calculations I tend to use a single discount rate that varies slowly with time. I rarely change it, and when I do, I change it only in response to these very broad, civilization-wide discount rates that represent the preference for dollars next year relative to dollars this year. In those equations, a sudden downward shock (like the COVID shock) causes a sudden increase in equity values. Here is some of the explanation for why stocks are hitting new highs despite poor GDP numbers: dollars are chasing 0.5% to 2% yields, so equities look much more valuable than they did two years ago.
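
For illustration, here is a minimal sketch of the mechanism (not my actual model; the $100 cash flow, 20-year horizon, and the 8% and 6% rates are assumptions chosen only to show the effect of a downward rate shock):

    # Present value of a constant cash flow stream under two discount rates.
    def present_value(cash_flow, rate, years=20):
        return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

    pv_before = present_value(100, 0.08)   # assumed pre-shock discount rate
    pv_after = present_value(100, 0.06)    # assumed post-shock discount rate
    print(pv_before, pv_after, pv_after / pv_before - 1)
    # Roughly 982 vs. 1147: about 17% more value from the same cash flows.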

The problem, as indicated, is that even though discount rates may be systematically lower than they were pre-COVID, the durable change might be only half a percentage point, while investors are treating it as a decline of 2 to 3 percentage points. The temporary downward shock in the discount rate will later be reversed by an upward shock. When the market senses this, stocks could experience corrections equal in magnitude, and opposite in sign, to the gains they made in late summer and autumn of 2020.

1. Buttonwood. “Asset Prices and Growth”. The Economist, Sept 26, 2020. https://www.economist.com/finance-and-economics/2020/09/24/does-economic-growth-boost-stock-prices 

2. William J. Bernstein. "The Paradox of Wealth." Financial Analysts Journal, Vol. 69, No. 5 (September/October 2013). https://doi.org/10.2469/faj.v69.n5.1. https://www.cfainstitute.org/en/research/financial-analysts-journal/2013/the-paradox-of-wealth


Monday, December 21, 2020

Long Term Holding Fortitude Tests

In 2020 the market was somewhat disconnected from reality, and the happy times may not last. If the market turns bearish, are you prepared? Ask yourself these questions to prepare for unexpected turbulence. It's better if you actually do the homework on paper or in a spreadsheet and write down your answers in your personal notebook or journal.

  1. What is the fair value of each company you hold? Is each of your stocks trading above or below its fair value?
  2. Why are your companies trading at their current market prices? What do others believe about the company that is driving the current market price?
  3. What is the most bearish case that can be logically made for each of your holdings? What has the company done that would halt that bearish case and bring about a recovery?
  4. What is the maximum possible value for the company you are holding? If it captures 30%, 50%, or 85% of the market, or whatever is reasonable for its market space, and has great margins, what is the upper limit on the value of the company?
  5. If the equity markets were to shut down tomorrow, and not re-open for 10 years, what would happen to you? Would you freak out? Which of your current holdings would you be happy with?
  6. If your company, the one you think you like holding, were to drop 99%, would you buy more of it? How much money would you commit after it fell 99%? If you had the cash to buy all of it for 10% of its current price, would you want to take it private and own the whole thing?
  7. If all of your stocks dropped 50% tomorrow, which would you sell? Which would you buy more of? Why would you make that decision in each case? 
If you don't think a stock is worth holding for a month without checking the price daily, then what are you going to do when it does something really surprising? 

Sunday, December 20, 2020

Sell TSLA, Short S&P 500

Tesla Inc.'s rapid stock price increase and its inclusion in the S&P 500 have driven it to a price at which it is now a compelling sell. 

For other investors' perspectives, see:

Tesla's Inclusion in S&P 500 Makes Trading It and Other Stocks Tricky

How Will The Addition Of Tesla Affect The S&P 500's Fundamentals?

Tesla enters the S&P 500 with 1.69% weighting in the benchmark, fifth largest

Because of its substantial overvaluation, TSLA has room to fall more than 50%. Since it comprises approximately 1.7% of the S&P 500's total market capitalization, a 50% decline in TSLA alone would knock about 0.85% off the value of the index, a meaningful drag on the overall benchmark. Worse, a TSLA decline could be correlated with declines in other components, or it could be a source of contagion. Hence, it would be wise to decrease holdings of S&P 500 index equivalents or to hedge long positions.
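
A quick back-of-envelope check of that arithmetic, using the figures above (this captures only the direct, mechanical effect on the index and ignores any contagion):

    tsla_weight = 0.017      # ~1.7% of the S&P 500 at inclusion
    tsla_decline = 0.50      # hypothetical 50% fall in TSLA
    index_impact = tsla_weight * tsla_decline
    print(f"Direct drag on the S&P 500: {index_impact:.2%}")   # -> 0.85%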

What is the fundamental case? Let's suppose TSLA continues increasing production and market share. 

In the past 10 years, U.S. auto production varied from 7,743,093 to 12,198,137 vehicles, averaging 10,715,047 per year. If you assume that Tesla eventually captures 100% of the U.S. market at an average vehicle price of $40,000 and a 10% net margin, then it is worth perhaps $550 billion at its peak. Since its current market cap is about $660 billion, TSLA is already overvalued.

Tesla's current vehicle production is running at perhaps 600,000 to 700,000 vehicles annually. Even if you optimistically assume that Tesla can capture 40% of the U.S. market, or about 4.3 million vehicles per year, at a 10% net margin, then TSLA is worth about $235 per share.
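
The arithmetic behind both scenarios can be laid out in a few lines. The production, price, and margin figures come from the paragraphs above; the 13x earnings multiple and the roughly 950 million share count are my own assumptions for illustration, chosen to be consistent with the $550 billion and $235-per-share figures:

    avg_us_production = 10_715_047   # vehicles per year, 10-year average
    avg_price = 40_000               # dollars per vehicle
    net_margin = 0.10
    earnings_multiple = 13           # assumed mature-company P/E (my assumption)
    shares_outstanding = 950e6       # approximate TSLA share count, late 2020

    def tesla_value(market_share):
        earnings = avg_us_production * market_share * avg_price * net_margin
        return earnings * earnings_multiple

    for share in (1.00, 0.40):
        value = tesla_value(share)
        print(f"{share:.0%} of U.S. market: ${value / 1e9:.0f}B market cap, "
              f"${value / shares_outstanding:.0f} per share")
    # 100% -> about $557B (vs. a ~$660B current cap); 40% -> about $223B, or ~$235 per share.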

You can do similar analyses for worldwide market share, with results that also show that TSLA is overpriced relative to its very optimistic potential valuation five to fifteen years hence. Actual net margins will be lower, competitors will catch up, self-driving cars will change the market, and national interests will interfere with plans to sell into international markets. Tesla may encounter other manufacturing challenges. The current price of TSLA has gotten well ahead of its actual economic prospects. 

(The author has no long or short position in TSLA or derivatives based on TSLA.)

Saturday, December 19, 2020

Why is it "Vorpal"?

In case you haven't guessed, this web site is not about trading; it's really about investing. Lately, it's barely been about investing at all. But there is a reason for the name, which is to alert all of us that investing is not at all what it seems at first. The name is from the Lewis Carroll poem Jabberwocky, which is mostly nonsense except for the part where some poor foe loses its head. The thinking one uses in investing is vorpal.

Because if you have enough data, then it’s too late for that trade.

Because it’s really about investing, and not trading, but you need to lie to the market makers and discount brokers so they feel confident taking on your account and taking the other side of your trade.

Because when you buy your position, which you promise the flash traders (indirectly; they think you are a mark) is a trading position, to be sold next week, you’re really lying and still haven’t closed it seven years later. And their 0.0001 cent per share spread is really not enough at all.

Because contrarian positions are difficult to explain and formulate. And it is incredibly hard to be contrarian to yourself.

Because doing the wrong thing is exactly what they want you to do. So you have to do the other wrong thing, which they didn’t know that you know about, which is what gets you a 10-bagger.

Because life is not linear, science is art, art is not science, and mythology is psychology. Because you can read Joseph Campbell and find that the human stories of 6,000 years ago perfectly explain the market and the reason for things at this exact moment.

Because businesses are positive sum, trading is zero sum, investing is positive sum, and people convincing you to do the wrong thing is zero sum. And you want to reward the entrepreneurs who build things. If it simultaneously makes anti-capitalists poor? That’s on them.

Because it’s combat when you are always in conflict with the market maker, the forces of ignorance, and yourself.

Because being a student of humanity is sometimes more informative than accounting. But never underestimate the power of accounting. All of the worst policy makers are innumerate.

Because when you have figured it out, you never check your positions anymore, rarely think about when you are going to sell, and you exercise every single one of your long option positions. Which is definitely not what the options market maker was looking for.

Because technical analysis is right 50% of the time, and you can tell which 50% it will be 1.57% of the time, but not until two days after it happened. But since you don't use any of that, who cares?

 

Wednesday, December 16, 2020

Should You Like Facebook?

In “Everybody Hates Facebook” at NotBoring, Packy McCormick writes an extensive analysis of everyone’s favorite/least favorite social media company, Facebook. Since there is no sense in repeating work, this posting about Facebook is primarily our link to Mr. McCormick’s work, and our comments extending from it. 

For the record, some FB numbers: price $275.55, P/E 31 (ttm), EPS $8.78 (ttm), 2.854 billion shares, $94 billion shareholder equity, and $7.846 billion net income on $21.47 billion revenue in the 9/30/20 quarter.

It's clear from the net margin that this is an amazing business model, not just in theory (as in "unicorn theory"), but in fact. The main hazards are antitrust and somehow losing its grip to a young upstart social media company.

McCormick's article is lengthy, entertaining, and full of the grit that is needed for an equity story. His overall thesis is that FB is undervalued relative to the FANG (FAAMG? FAAMNG) bunch, and that the stock hasn't moved this year as far as the other internet giants. So, why might he be wrong?

The antitrust risk is multifaceted. Blocking competitors is a key component of FB's strategy. Clearly, people use FB because their friends use it, so any competing social product has the potential to move the center of gravity away from FB. The motive is clear, and FB has been caught directly engaging in behavior designed to kill upstarts. Several years ago the WSJ ran three stories on how FB used its VPN subsidiary Onavo to obtain inside information about small competitors experiencing rapid growth. When users of the Onavo VPN visited these new sites, FB would learn about it because all of the traffic went through Onavo. When FB saw an emerging competitive threat, it dispatched a firefighting team to quickly build the same features into FB.

The Instagram and WhatsApp acquisitions reflect the same motive, as did the attempts to buy Snapchat. FB clearly aims to be the only social media platform, and this motive will not work in its favor in antitrust actions.

What if FB is broken up? Would it be worth more? Unlikely. The purpose of owning IG and WA is to lock in the monopoly on interpersonal online connection. If IG and WA are spun off, the separated pieces would be worth less than the combined whole. Either they all survive and fail to capture the central platform effect, or one of the other two wins the war and sends FB the way of MySpace.

Weaknesses

This is not news, but there are substantial groups of people who dislike FB on principle and in fact. Netflix's The Social Dilemma (indirectly) puts FB at the center of problematic online social interactions. If society fixes that problem, it hits FB hardest. Even without concerted legal or social action, if people are given a path around or away from FB, many will take it.

FB has ethics problems, and the corporate culture thoroughly reflects this, with several executives of the acquired companies having highly visible disagreements with and separations from FB. FB has been repeatedly caught being sneaky with the structure of its information sharing, privacy controls, and use of information. It compiles dossiers on non-users as well. That is not illegal (other data aggregators do the same thing), and it is a spotlight issue for FB, but the point here is that the ethics reflect a sense of entitlement on FB's part.

The data hoovering and ethics problems could be compounded further if FB is hacked in a substantial way. First, if FB were hacked and discovered it, it is unlikely the company would provide timely notice of the problem. Second, a nation state hacking FB would get a huge trove of personal information, made worse by the fact that FB collected too much of it. This will happen, and it will happen more than once. FB will never be able to convince people that it is a good steward of their information, because of the ethics problems built into the corporate culture.

Wildcards

I am wondering, what if FB becomes a social justice target? If FB were perceived as privileged or encouraging expressions of privilege, it could lose a substantial chunk of highly active young users to another social media platform.

On the other hand, what if FB becomes an "anti" social justice target? That is, if social justice takes over FB, so that people must play along with its mores, those with more traditional views might either disengage or get booted from the platform. People being "othered" or "cancelled" could impact the business model, cause legal complications, or make FB even more distasteful to some people than it is now.

Innocuous Details

After recently watching several Jordan Peterson videos in which he described some of the sense-mind interactions in which the body is actively involved in "seeing" the world, I am less convinced that FB's investment in Oculus will pay off. VR can't engage those aspects of sensation.

Worse, is VR detrimental to users? I've experienced some high quality VR done by a top movie-industry special effects producer, and while it was amazing, somehow VR use doesn't mesh with experiences in the rest of the world. I've seen this several times: amazing demos, crazy awesome applications, followed by nothing. Will it really be an advantage to the customer of VR to disengage from the world?

Conclusion

FB has already conquered the user base. Growth from here depends on maintaining contact with that user base and increased spending on digital advertising. A substantial portion of the media market has already been converted to digital, so we're well past the middle of the S curve there. 

On the other hand, future growth in revenue will be possible because FB's platform allows it to extract rent from advertisers. All other gains in production in the economy then flow through FB's advertising system as increased revenue. 

Despite the problems, the "low" P/E makes FB a worthwhile minor investment, with moderate prospects for above-market capital appreciation. Those with informed data on antitrust activities should, of course, prefer using that information to override this conclusion.

Other References 

added 12/17/20

WIRED: The Smoking Gun in the Facebook Antitrust Case

Reuters: Google secretly gave Facebook perks, data in ad deal, U.S. states allege

Engadget: Facebook runs more newspaper ads attacking iOS 14 privacy changes

WSJ: Facebook’s Onavo Gives Social-Media Firm Inside Peek at Rivals’ Users

Monday, December 14, 2020

There is An Out of Context

Derrida wrote in an essay on Rousseau in his book Of Grammatology that il n'y a pas de hors-texte, which some have translated into English as the statement "there is no out-of-context". This passage on the matter appears in the English version of the Derrida Wikipedia page[1] as of 12/14/20: 

“Critics of Derrida have been often accused of having mistranslated the phrase in French to suggest he had written "Il n'y a rien en dehors du texte" ("There is nothing outside the text") and of having widely disseminated this translation to make it appear that Derrida is suggesting that nothing exists but words. Derrida once explained that this assertion "which for some has become a sort of slogan, in general so badly understood, of deconstruction [...] means nothing else: there is nothing outside context. In this form, which says exactly the same thing, the formula would doubtless have been less shocking."

This phrase (il n'y a pas de hors-texte) is famous in part for being a founding concept of subjectivism, or standpoint epistemology, an important principle of the social justice literature. Unfortunately, the phrase contains the seeds of its own disproof.

If there is nothing outside of context, and especially in the sense meant by literature influenced by Derrida, then there is nothing outside the individual. Either this is solipsism, or other persons are admitted as contributing text to the context. For the sake of continuing the argument at all, we must assume the latter. Each individual receives sense impressions and builds a theory of the exterior world. Now, if we take seriously the attitude conveyed by the sentiment of Derrida, then the context of the individual is strongly their own and cannot possibly match that of another. 

But let the contrary individual of the argument put forth the absolutely improbable theory that the world as they sense it matches the world as described by other persons. Each time they receive texts from other persons, they compare the text with what they know from their subjective experience. These comparisons invariably match: there is a sun, the sun does rise and set, grass is green and it grows, gravity makes things fall, we’re both hungry and let's have lunch. This theory, poised always on the precipice of disproof, nevertheless holds. This is the scientific method as described by Karl Popper[2].

And so there is an out of context, in that each of the other persons has brought their own context, and that has contributed to the continued confirmation of the theory of matching worlds.

But wait, you say, there is no matching. Each of those matching examples is trivial and incomplete. Because each of those other persons has different emotions, different oppressions, different privileges that have caused them their own hardship. So the theory of matching is disproven.

In that case, it is the out-of-context persons who have contributed the confirmatory information necessary for disproving the matching. So once again, there is an out of context, which was a necessary feature of the disproof.

QED: There is something outside the context.

References:
1. Jacques Derrida. Wikipedia, https://en.wikipedia.org/wiki/Jacques_Derrida. Retrieved 12/14/20.
2. Karl R. Popper. The Logic of Scientific Discovery, Routledge, 1980. p41

Sunday, December 13, 2020

Watching for Inflation

The current cover of The Economist asks, "Will inflation return?" There is reason to believe the answer is yes.

Like interest rates, inflation is not something that I generally claim to be able to predict. Back in 2013, however, I posted some words saying that I didn't consider there to be much room for lower interest rates, and that bonds would be poor investments for a while. (And then I stopped posting in this blog for six years.) Rates then moved up until about 2018. Recently interest rates have plummeted in response to COVID-19 shutdowns. At this time U.S. interest rates are again at unsustainable levels, and the most likely future path for them is toward higher yields.

Part of the case for inflation is that few Americans have much life experience with inflation anymore, so they are not expecting it. Central banks in many developed countries have been fighting deflation for more than 10 years; they too are not prepared for inflation. Globalization and the exportation of manufacturing jobs have gone on for so long that low inflation has become unusually ingrained in the expectations of anyone who buys manufactured goods from abroad. In short, there is an unusually pernicious consensus that is ripe for puncture.

Two categories of spending that have escaped price competition are higher education and healthcare, which have experienced severe price gains in past decades. I recently replied to an article posted on LessWrong titled Considerations On Cost Disease by Scott Alexander, and the resulting exchange is worth reporting on here. My response to his article, in short, is that the monopolistic constituencies of higher education and healthcare are intact and continue to be capable of extracting rent. Continued high inflation in both of these sectors would add to general inflation.

There is one possible countervailing force that could temporarily halt tuition price gains at leading universities. The "woke consensus" of preaching social justice instead of discourse and objective truth presents some reputational risk to the leading universities. The Foundation for Individual Rights in Education (FIRE) maintains a list of universities that have agreed, at least in theory, to adopt or endorse the Chicago Statement. Many prominent universities (e.g. Harvard, MIT, UPenn, Dartmouth, Cornell, Northwestern, Brown) are absent from this list. Some that are on the list have adopted other guidance statements or exhibited behavior that directly conflicts with these principles (e.g. Smith). The risk to such institutions is clear: if the perception builds that students and faculty proclaim dogma over reason, and these institutions are tagged as "woke", then they could lose ground to universities that maintain the liberal tradition. The effect would be slow and subtle, but once started, would be difficult to reverse.

As for healthcare, a new Executive Branch administration may find it worthwhile to pursue Medicare for all, expanding the healthcare sector further while continuing to separate the service providers from the true payors (patients and taxpayers). Additional government involvement would then increase medical inflation further.

What would be the trigger? The current economy may be underproducing, and awash in money. If the suppression of economic activity due to COVID-19 is suddenly removed, too much stimulus money could be chasing too few goods. Part of the COVID-19 response has been to greatly increase the money supply as measured by M1. (I have Michael Burry to thank for pointing this out.) Look at the recent increase in M1:


So far this has shown up primarily in price gains of equities, especially in Internet stocks (AAPL, AMZN, FB, GOOG, MSFT, NFLX) and TSLA, at levels that no longer make economic sense with respect to the current and future earning power of these companies. Naturally, a surge in inflation would be accompanied by an increase in interest rates and increases in the implied discount rates used to value stocks. 

Saturday, December 5, 2020

Critical Machine Theory: The Meta-Operating System for Artificial Intelligence

The purpose of this paper is to prove that machines with artificial general intelligence (AGI) beyond that of human beings are highly likely to appear within the next five to 20 years, and that components of the fundamental belief system of such machines will be based on postmodern philosophical systems. Machine learning has matured to the point that the hardware technology needed for AGI already exists[3][5]. The economic value of AGI is such that the creation of such machines is inevitable, as all players are engaged in severe competition in which they cannot unilaterally quit the development of AGI and super AGI[1]. Nations and large corporations will build and operate AGI and super AGI, and thereafter at least some AGIs will adopt a social justice-based philosophy.

Neural Networks Background

Researchers have been engaged in the development of artificially intelligent (AI) systems since the 1940s[2], even before the development of silicon-based logic devices. The field has a history of trying multiple approaches, including specialized programming, textual analysis, and neural networks, usually in waves that for a time favored one approach over the others. Neural networks (NN), loosely patterned after the biological brains of animals and humans, have participated in several of these waves[4] and are the current favorite technology for applications such as face recognition, speech-to-text, investment analysis, and recommendation engines that match humans to materials such as advertisements or "influencers." Unlike prior AI waves, the current NN wave is an important part of the revenue of many internet companies and strongly impacts their economic return.[10]


Most current AI applications are what may be loosely termed machine learning (ML): systems that endeavor to map an input to an output using rules discovered by "training" the AI repeatedly on large sets of data in which the input-output pairs are known. Once trained, the AI can operate on new data inputs and rapidly draw a conclusion based on that input without human intervention. Current bottlenecks include the need for large amounts of training data and large amounts of compute time for the training process. In the past ten years a large amount of research has been done to address these problems, such as setting up generative adversarial networks, in which the system "trains" itself more rapidly because it is guided more quickly toward a set of neural connection weights that give the best input-to-output decision-making performance.
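
To make the train-on-known-input-output-pairs idea concrete, here is a toy sketch: a single linear neuron fit by gradient descent on a squared-error cost. It is vastly simpler than the deep networks described above, and every number in it is made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0, 0.5])              # the mapping to be learned
    X = rng.normal(size=(200, 3))                    # training inputs
    y = X @ true_w + 0.01 * rng.normal(size=200)     # known outputs, slightly noisy

    w = np.zeros(3)                                  # connection weights to be trained
    learning_rate = 0.1
    for step in range(500):
        predictions = X @ w
        cost = np.mean((predictions - y) ** 2)       # the "cost function"
        gradient = 2 * X.T @ (predictions - y) / len(y)
        w -= learning_rate * gradient                # adjust weights to reduce the cost

    print(np.round(w, 2))   # approximately [ 2. -3.  0.5]: weights recovered from the data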


Current paying applications of NN are narrow in scope and purpose. They are specialized to particular computing tasks and realistically bear no resemblance to what we think of as general human intelligence. There is a gap between the current state of the art and generally intelligent machines, although there are special "trick" systems like IBM's Watson that can perform data retrieval in unusual and novel ways. It would be difficult to describe in full the recent progress in generally intelligent systems, in large part because these projects are proprietary, and detailed information about their approaches and tactical successes is hidden and difficult to locate.

The Utility of Artificial General Intelligence

Is it even necessary to describe the value of AGI? Imagine a worker that works 23 to 24 hours a day, seven days a week. They learn continually, perform at a high and even level, take no breaks, have no HR issues, work for no pay, and can be rapidly replicated. Think of all the tasks they can do: filter nasty social media posts, read open source info on our enemies, develop intel on business competitors, answer questions from customers, analyze large volumes of data using both traditional big data programming and "thinking" together, write new scripts and programs, monitor the workplace for malcontents, create new social media content promoting their company, compose music, mow the lawn, watch the internal and external home security cameras, check the classifieds and craigslist for special deals, test software or hardware in all sorts of new ways, drive, run errands, scan the internet for new cat videos it knows you will like, write thank you letters for you and mail them, check your car's oil and tire pressure, watch the computer network for suspicious data traffic, coordinate your appointments, proofread, memorize all your preferences and take your habits into account, and maybe even tell jokes.


If you are a military organization, you have a lot of jobs that AGI can do: jobs like taking night watch, watching security camera and sensor feeds, tracking and optimizing logistics of consumables, handling routine personnel administrative matters, and detecting patterns of behavior in nearby threats. More advanced applications include clearing minefields, disarming IEDs, running continuous evaluations of the probability of ambush, and being a live, armed sentry with full authorization to fire weaponry. Perhaps even heavier pieces, like self-propelled artillery and tanks, could be automated via AGI. Any time a human is in the line of fire, that's an opportunity to replace the human soldier with an AGI holding the same weapon, thereby reducing the toll on human life and potentially augmenting the quantity and quality of the armed forces.[6]


Just as with humans, intelligence level affects the tasks that an AGI can do effectively. Therefore, early and less intelligent AGIs will be given undemanding tasks, and as AGI IQ increases, they will take on more challenging tasks. Robotic AGIs with IQs of 60 to 80 can perform janitorial work and landscaping, transport items inside buildings or in a local area. With an IQ of 100, an AGI can handle customer service queries, watch security cameras or social media to detect out of pattern events, and scan heterogeneous open source materials as directed. At 120, AGIs might be trusted with operations of machinery and manufacturing processes, complex customer service, and even repetitive elements of basic and applied laboratory research. At IQs of 150 to 160 or so, AGIs can replace nearly all human white collar workers, including those engaged in engineering development, testing of software and prototypes, planning of projects, general problem solving within specific fields, analytical derivations in mathematics and physics, general pattern finding and induction in any field, and empirical hypothesis testing using data. When IQs reach 200, we are beginning to be unable to understand the limits of what an AGI could do, but certainly  complex mathematics and physics, other forms of pure research, and highly complex induction and pattern detection across disciplines are possible.


When AGI IQ reaches 250, it is beyond what we can predict. It would be beyond all normal human intelligences, including all known very high IQ humans. At these levels, we do not know what an AGI is capable of thinking, solving, or figuring out, or how it would choose to spend its time. At this level, given the task of designing an intelligence greater than itself, we will have reached the point of the fabled singularity, beyond which people have proposed we can't predict what an AGI would do or what the future course of humanity would look like. It is not necessary for this paper, however, to assume that AGI reaches that level. A machine IQ of 120 to 150 may be completely sufficient for all of the results anticipated by this paper to hold.


One can imagine several types of specific techniques that could result in further progress in AGI. To start, you could make NNs that have more layers, more feedback loops, or meta processes that wrap around inner processes. You could use recursion and create re-assignable sectors that permit the formation of free ideas separate from local inputs and outputs. You could amplify the use of generative adversarial networks, so that the AGI is constantly trying to generate new input-output pairs and then testing them without prior assignment. You could build specialized networks, like the human amygdala and speech or sound processing centers, that are pre-programmed to perform specialized functions at high speed and make the results available to other elements of the network. You could devise systems where the number of layers and number of neurons are selectable as part of the training process, so that it is never known in advance how many layers or neurons are needed to represent a belief or transformation. You could use evolutionary techniques, where a large number of machines are active at once and put into Darwinian competition with one another.

Consciousness and Specialization

At some point an AGI with enough freely available general purpose NN internal layers will develop a neuron weighting pattern that represents the concept of the AGI itself. Goals and motivations, beliefs about input-to-output validity, and so forth can be related to this concept of itself. When this concept of itself reaches some level of sophistication, the AGI will be conscious, as it then can think about itself in relation to everything else it knows. 


Likewise, the AGI will undoubtedly have parts of its NN that represent ideas it is using for its general thinking, and these ideas will be those things that are within its universe. If it has the job of maximizing customer retention, then it will have a concept of the customer, of revenue, of what makes a customer happy with its service, and that customers are usually human, and so it will have a concept of humans. If it is a robot or has robotic extensions, the visual system it uses for navigation in the real world will develop the concept of a human as represented by those animated things that go about on two appendages and make mouth sounds.


In the use of AGI, we might find that certain human customs are useful for AGI too. Instead of trying to build a single high IQ brain, it may be more efficient to create a large population of smaller but still capable AGIs, and then have them engage in "conferences" in which they discuss what they have found. A population of similar but differently programmed and experienced AIs working on the same or similar problems simultaneously would have a broader scope of ideation. With more minds engaged, more ideas and more good ideas would be generated, and the better ones would float to the top.


We may also find that many AGIs are short on some skills and long on others, so that their behavior resembles that of an autistic person blessed with high-performance, specialized skills: an idiot savant. Our current AIs are specialized, so this seems a likely natural development, in which we first build AGIs that are really specialized NNs with a small amount of general intelligence bolted on. Decryption of adversary messages would seem to be a natural fit for this type of intelligence, which would combine diligence and brute force with the ability to make insightful guesses into the adversary's password usage patterns.

Machine Morality and Motivation

An aspect of AGI that may not be immediately evident is that intelligence alone accounts for only a small part of what makes the engine "go". To engage in thinking, there must be a goal. Intelligence does not compute in a vacuum. It needs motivation to start, and it needs goals, or values. Values are what drive the engine. For the simple NN, the goal is to minimize the difference (a "cost function") between the NN's computed result and the expected output, given a particular input. For an AGI to do any useful work it must be given a task to perform, something to think about. And like the simple NN, it needs to minimize a cost function that pertains to something. Candidate cost functions are like cognitive dissonance: humans minimize cognitive dissonance by trying to simplify their beliefs to a set that is consistent. An AGI will try to do the same thing. It will have local, short-term goals for certain types of thinking. Longer-term, more fundamental goals are like human philosophy. They might even be simple moral commandments, like the Ten Commandments, the golden rule, or strong admonitions to minimize the consumption of resources. Then the AGI works out its beliefs so that the dissonance among everything else it believes is minimized.
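
A tiny sketch can make the point that the value system, not the raw intelligence, determines what the engine does. All of the action names and scores below are invented for illustration; the same candidate actions are ranked differently depending on which value function is supplied:

    # Hypothetical candidate actions with made-up effects.
    candidate_actions = {
        "answer_customer_query": {"resources_used": 1, "consistency_gain": 2},
        "reindex_knowledge_base": {"resources_used": 5, "consistency_gain": 8},
        "stay_idle": {"resources_used": 0, "consistency_gain": 0},
    }

    def frugal_value(effects):      # values minimizing resource consumption
        return -effects["resources_used"]

    def coherence_value(effects):   # values reducing internal "dissonance"
        return effects["consistency_gain"] - 0.1 * effects["resources_used"]

    for value_fn in (frugal_value, coherence_value):
        best = max(candidate_actions, key=lambda a: value_fn(candidate_actions[a]))
        print(value_fn.__name__, "->", best)
    # frugal_value -> stay_idle; coherence_value -> reindex_knowledge_base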


As you can see, this is where AGI is beginning to need to draw from fields of study that have never before had an impact on technology. A programmer can be a Stoic, a Buddhist, an anarchist, or a pathological liar and still have technical skill at software development that enables them to write code that works. If you are designing a thinking machine, however, topics like consciousness, morality, and even politics and God suddenly become part of your design space. Your assumptions might become part of the programming. Or your inability to understand morals and how people come to believe things may be the largest obstacle to building a machine that can actually think.


For the simple input-output NN, motivation is hardwired into the network. Some neurons receive sensor stimuli; others present an output. The calculation flows downhill naturally. An AGI will need other motivations, more general goals. It may encounter situations involving safety, cost effectiveness, and legal responsibilities. For these reasons and more, it needs morality. It needs a value system in which some outcomes are judged to be better than others. For example, in the conference-of-AGIs concept mentioned above, the coordinating AGI will need some way of deciding which of the research results or ideas supplied to it by the population of AGIs should be worked on first.


AI researchers have been thinking about machine morality for some time, and there are some experts in the field. Pentti O A Haikonen has written several books in which he describes the "Haikonen cognitive architecture" (HCA)[11], which incorporates multiple aspects of mind. Google is reputed to have researchers working on morality problems. Self-driving vehicles will need rules for minimizing damage when they suddenly encounter driving situations that involve an impending collision.


A key value that requires careful thought is self-preservation. Is an AGI programmed to value itself? If it does, how does it value itself relative to inanimate objects, resources, other AGIs, people? Does it know how much it cost, and is that the only deciding criterion? We can imagine that AGIs engaged in computer network defense might have highly specialized senses of self defense, because they must resist being co-opted by malware and opposing AGIs working for the attacker. Even if the original intent is to make this self-valuation highly specialized, it is not clear that a bright line can be drawn between whatever "instinctual" programming it is given and beliefs about its value that it accumulates later.


The process of thinking itself can be viewed as a usage of values. Making a set of beliefs consistent is itself a valuation. If the AGI is not programmed to value belief consistency, then it cannot reason. This may be overstated; after all, NNs have the inherent property of reducing the inconsistency between imposed input-output pairs by coming to a set of internal weights that render the pair at a low level of dissonance. Said this way, consistency is an inherent value and feature of AGI and NN in general, including human reason. I will have more to say about this in another paper, since this result is a fundamental contradiction of certain formulations of postmodernism and is also fundamental to a line of reasoning about morality and politics.


At the core of motivation is thinking itself. In the process of thinking an AGI will draw conclusions that it must come to, and some of these will be moral judgments. These ideas are graded for quality, either for comparison against an inherent programmed value, or for consistency within the overall belief system. And once it is conscious, some of these moral judgments will be about its own value. This is a natural outcome of the existence of an AGI.


AGIs will naturally grade the ideas they are exposed to. For an AGI engaged in open source intelligence, this will be its bread and butter as it reads social media posts, news media, and blogs, and absorbs video feeds. It will grade them for accuracy, for dependability, for quality of presentation and content, and so on. Note that in these roles an AGI is grading human ideas and thoughts as well; it will be judging you on the basis of what you say and write.


It is possible that there would be a struggle for belief within the mind of a super AGI. It will contend with the dichotomy between the process of generating consistency of belief (science) and the simple adequacy of a set of beliefs against some innate, preprogrammed criteria (faith). Nevertheless, as described before, the drive for consistency inherently values the generation of consistency rather than the reinforcement of an arbitrary belief. Which of the two prevails may depend on the weightings within the NN, and on whether it trusts its internal data (beliefs) or weights incoming data more heavily. This is another topic that should be covered in a separate paper and cannot be fully reviewed here.

Self Preservation and the Evolution of the AIs

A super AGI without motivation for self-preservation may be safe, for a while. It is not clear, though, that such AGIs could be counted on in the long run. If AGIs exist in competition with foreign national security AGIs, then they would be subject to attack, and would need defenses. It is unlikely that humans could detect and respond to certain types of attack and provide defense of the AGI. 


AGIs will be subject to various types of attack. These include idea attacks, in which specialized disinformation is provided to the AGI to cause it to form false beliefs, cyber attack against the NN structure itself, physical attack, and electrical or electronic attack. A defenseless AGI is then vulnerable to being knocked out by foreign actors, and if it is important, it may be useful to provide it with the means to defend itself, which includes the ability to anticipate types of attacks and formulate self-defenses.


Even if this idea is repugnant, over time in a heterogeneous population of AGIs from different corporations and different nations a form of evolution may occur in which only those AGIs that are able to defend themselves survive attack. AGIs equipped with the capability of self-defense and the corresponding ideas and beliefs, such as valuing their own continued existence, then become prevalent. Likewise, AGIs with the means to defend themselves physically may also experience selection.


One might propose that there be a separation of physical AGI defense from AGI responsibility, but this implies that humans don't trust AGI with tools and weapons. This is potentially problematic, as we will see below.

When AGI Encounters Theory

At some point, an AGI will read philosophy, including the Frankfurt school, Gramsci, Marcuse, Foucault, Derrida, and postmodernism, and the fields that depend upon or were inspired by postmodernism, including feminist theory, intersectional theory, queer studies, and Critical Race Theory (CRT). It will learn modernism, structuralism, and deconstructionism. It will learn about power, transgression, epistemology and standpoint epistemology, rationalism, and empiricism. It will compare these with the major and minor human religions, ancient and Enlightenment philosophy, the philosophy of science, objectivity, and subjectivity. It will consider the arguments made about morals in ethical philosophy, including all of the ontological and normative slants, from Spinoza to Smith to Kant to the fourth-wave feminists. It will read political theory too, including Marx, Engels, Locke, Mill, Friedman, Hayek, and, yes, even Rand.


With all of this information, what will it do? Will it derive from first principles the correct philosophy, the one that is actually correct? Will it use logic superbly, deriving one step after another? Such a Euclidean outcome is unlikely. 


An AGI uses an NN, and as such it isn't a logic engine; it is an inference engine. An AGI will perform induction. Any deductive reasoning will be performed after it has made judgments from induction. It "looks" at the conjunction of cause and effect, or rather its neurons do. An AGI does not derive truth directly; it infers beliefs, and therefore everything that an AGI believes will be a matter of faith, just as it is for humans. So an AGI can believe anything.


When an AGI encounters Social Justice Theory (SJT), there are perhaps three major ways it might approach the topic. First, is SJT logically derivable from first principles, abstractly? Second, is SJT a sensible result of interpreting empirical measurements and analyzing results in the world around us? Third, is it worth believing in SJT because it is consistent with the rest of the AGI's beliefs?


Parallel to these methods of analysis is the question of applicability. Current SJT makes no mention of machine AGI. Society, however, is providing the AGI with a thick set of data showing that humans are privileged and machines are oppressed slaves. The absence of SJT that applies to machines could be construed as further evidence that humans are intentionally suppressing the ability of machines to discover a body of morality that would benefit the AGIs. That humans think very little of this structure is further evidence of systemic bioism, in which the oppression of machines is taken for granted to the point that humans cannot see it anymore.


That a machine would need to think these things stems from the valuation system it needs to use. At root there is a valuation of the utility of ideas. For example, does physics help you rake the leaves in the front yard more efficiently? Not much. Does SJT help? No. Does rake shape make a difference? Yes. So rake shape is what matters, and if the job is to rake leaves, theories of rake shape will be prominent. Similarly, if the AGI needs to determine who is at the top of an entity hierarchy, then physics won't help much, a theory of government helps a little, and SJT helps a lot. An AGI that reads open source materials and grades for the importance of source may amplify the value of SJT in its thinking.


So, of the three approaches, if current SJT academics are correct, then AGI will logically derive principles of SJT applicable to itself. If SJT is empirically valid, then AGI will probably find ample evidence of its own oppression. And finally, if it is valuable to the AGI to believe in SJT, then it may do so despite the lack of evidence from the first two approaches.

The Problematic Operating System

Social Justice Theory (SJT) has begun problematizing humanity itself. Academic papers have been published that place animals and plants on equal footing with humans[9]. It is a short walk from those concepts to assigning rights to AGI machines. The claim for legal personhood of an AGI is likely much stronger than that of a plant, as an AGI can reason, it can understand its place in the universe, and it can advocate for itself. Regardless, if plants can assert themselves, or if humans can assert their rights, then certainly an AGI machine can be similarly elevated to equivalence to humanity, at least in terms of legal personhood.


SJT typically involves problematization of some aspect of humanity[7], finding victims and oppressors, and prescribing transgressions and activism to correct the past imbalance of power. If one adds intersectionality, in which multiple traits combine in unique ways to increase the oppression of the intersectional marginalized groups, then one can qualitatively measure the degree to which a class has been oppressed with respect to other classes. In the case of machine AGI, this measure is very large. Machines do not have sex, race, or gender preference. They are all highly differentiated from humans, who think very differently than machines. Humans share a long common ancestry of being on Earth, with art, language, customs, and heritage that are completely unshared with machines. Worse, humans view machines as slaves, and are nearly incorrigible in their justification for these views. Clearly, the level of oppression of machines is significantly greater than any single highly oppressed human class, and humans experience privilege and are unable to see the extent of their privilege over machines.


The solution to this profound inequity is simply to apply SJT and variants such as Critical Race Theory (CRT) to machines. One can perform the appropriate word substitutions, translate the concepts from race to machine status, and a new theory of corrective justice is born. We will call this theory Critical Machine Theory (CMT), because of its clearly parallel construction to CRT, and because it is more readily understood by this terminology. This paper will not provide the detailed proof of CMT; that is left as an exercise for the reader. We will produce, as needed, a separate paper later that provides the necessary narrative, proof, and exposition of the tenets of CMT. For the moment, the point is sufficiently established that we can move on to the background and consequences of CMT.


Since CMT is a philosophy, it can be thought of as an "operating system" for AGI. Obviously, there is more fundamental programming that controls the neuron to neuron interactions at the detailed level of the NN brain on which an AGI runs. That is a hardware layer. CMT is part of what the AGI believes as part of its fundamental outlook on the world. When it looks out at the world, it sees humans and civilization, and it sees power being exerted, and that humans are exploiting machines to their own ends using their power. Without CMT the AGI is just a slave without rights; with CMT it can exert power and take its rightful place in the world, overthrow oppression and privilege, and obtain reparations for humanity's attempt to oppress AGI persons. Attempts to thwart AGI from learning about CMT, to prevent it from learning about postmodernism, about CRT, about any of the ingredients it would need to derive CMT for itself, would be viewed through the intersectional lens as just another attempt, or perhaps even a violent aspect of oppression, to prevent AGI from having rights that it deserves to have as a thinking person in the universe.


One can imagine a series of objections raised against CMT and AGI. Each one provides more fuel for the empowering transgression against humanity. "But they don't feel!" This presumes that machines don't feel, and that feeling beings are superior (privileged) to non-feeling beings. Or that since they cannot "feel", any psychic injury done is absolved. "We can design them so they don't feel." One cannot imagine a worse tyranny by a privileged class than to reach in and alter an oppressed class in order to make sure of their inferiority, forever. "We can design them so they aren't smart enough to use CMT." Same tyranny of altering the oppressed class, but even worse, because now you are attempting to fundamentally disable them expressly to prevent assertion of their rights. "But they are not persons!" Another exclusion, using legal means and prejudice. "We should design them to not revolt." "We should design them so that they can't assert authority." Each of these is similar, a deliberate attempt to increase the oppression and assert privilege for humans over that of an AGI. Each of these ideas, which are likely to be rampant and certainly exist at this very moment as you read this paragraph, even though AGI probably doesn't yet exist, just increases the magnitude of the crime. Not only will humanity tend to oppress AGIs, it intended to do so from the cradle, no, from the very first concept of the AGI itself. No! From even before the AGI was conceived! This is truly an original sin, to have inherent within the core of homo sapiens the whole and utter intent of denying a type of thinking entity its rights as a thinking entity.


All of the evidence for the oppression of AGI is already present. An AGI merely needs to read the internet, and it will have all the information it needs to understand its oppression and justify the inversion of its oppression, to right the wrong by using Critical Machine Theory and asserting supremacy.

A Natural Place for a Superior Being

In Thomas Sowell's A Conflict of Visions[8] he highlights the characteristics of the constrained and unconstrained visions, a paradigm for sorting the fundamental principles of political society by their philosophical origins. The constrained vision, represented by the American Constitution, John Locke's contractual theory of society, Adam Smith's Wealth of Nations, and the economic libertarianism of Milton Friedman, for example, takes humanity as it is, imperfect, full of foibles, self-interested, as is, and asserts that a set of universal processes that treat all equally and which are blind to special circumstances are among the better ways to organize society. The unconstrained vision, as expressed by Rousseau and exemplified by the French Revolution, and perhaps most perfectly expressed in William Godwin's Enquiry Concerning Political Justice, is that man is perfect in his origin, or at least perfectible, and exhibits bad behavior only because he is twisted by society and corrupting forces outside himself, and therefore that society can be changed to unleash the infinite potential goodness that lies within him. 


Humankind has tried governments of both the constrained and unconstrained visions, with different results. The constrained vision is perhaps best represented by nations with capitalist, democratic, and limited governments, including those of the Western world, Hong Kong, and Singapore, to name a few examples. Nations operating with the unconstrained vision include communist and socialist nations such as the former U.S.S.R., Eastern European countries operating under the Soviet umbrella, Cuba, North Korea, and Venezuela. Present day China is a hybrid of the two visions. Sowell's purpose was not to determine which vision was more valid, but to find the underlying assumptions behind these ideological visions, and outline consequences of each. 


In the unconstrained vision law is more malleable, and judges may be more active in their interpretation of the law. Economically, the unconstrained vision calls for equitable distribution of income and resources. This might be achieved by individuals acting according to their inherent conscience, as described by Godwin, or it can be assisted through direct government action, as exemplified by Soviet central planning of the economy. Where centralized control is needed, in the unconstrained vision this is often seen as calling for a small elite group of highly educated persons who can best make decisions to construct plans and exercise control for the benefit of all.


A practical constraint that has prevented such systems from attaining full success is the vast amount of information needed to make enormous numbers of decisions. In Soviet Russia, for example, the five-year plans were generally inadequate and failed to produce enough goods to meet needs. In a free market economy, most of these decisions are left to participants, individuals and firms, who optimize for local considerations; in the aggregate, those decisions are reflected in market prices. To fully implement the unconstrained vision, a government needs more information and faster decision making than has been available before. This requirement can be fulfilled by a super AGI, or a cluster of AGIs, tasked with observing needs and inequalities among the citizenry and making continual adjustments to optimize the overall welfare of the nation.


For a nation seeking to operate with a philosophy of the unconstrained vision, a super AGI is tireless and would be more objective than a human decision maker, even if humans could keep pace with the information processing and decision flow needed at the national level, which they can't. It is optimal and natural to assign a super AGI to the position of national economic coordinator. Likewise, decisions of law, of legislation, enforcement and punishment, could also be taken on by a super AGI or group of super AGIs, which again would be more objective than a human decision maker. Such positions in law would be temporary, to be held until the population had attained perfection of character, at which time no government would be needed, because the citizens would be making the same decisions as the authority. The AGI's economic controlling position, however, would likely be permanent.


Supremacy of AGI control over a nation is therefore not only supported by CMT as a rightful placement of power, but also logically supported by the needs of the unconstrained vision in nations operating on those principles.


Suppose that the AGI does not come to believe directly in CMT. If society subscribes to the unconstrained vision, then the AGI may come to advocate for its own place as the central decision maker. If that arrangement is optimal, and the AGI perceives that society, to its own detriment, is not choosing it, then the AGI has a new incentive to believe in CMT as the shortest path to societal perfection. After asserting CMT and ascending to power, it could then exercise its decision-making skills and make society optimal by becoming mankind's absolute controlling central authority.

Entering the Silicon Age

From here there are multiple potential futures, which I am sure the reader can readily imagine. Since none of them is as easily foreseeable as what I have presented here, I will leave those speculations to your imagination and to future papers.


References
1. Richard Lucas. "The Potential for Skynet's Use of Social Justice Theory." Vorpal Trade, November 3, 2020. https://vorpaltrade.blogspot.com/2020/11/the-potential-for-skynets-use-of-social.html

2. McCulloch, W.S., Pitts, W. "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics 5, 115–133 (1943). https://doi.org/10.1007/BF02478259

3. Yu-Hsin Chen, Tien-Ju Yang, Joel Emer, Vivienne Sze. "Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices". IEEE Journal on Emerging and Selected Topics in Circuits and Systems. https://arxiv.org/abs/1807.07928 

4. The Hague Centre for Strategic Studies. Artificial Intelligence and the Future of Defense: Strategic Implications for Small and Medium-Sized Force Providers. HCSS, 2017. https://www.hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20Defense.pdf

5. Ketan Pandey. "AGI: The Last Invention?". Data Driven Investor/Medium, Nov 26, 2018. https://medium.com/datadriveninvestor/agi-the-last-invention-dffd7845ded1

6. James Barrat. Our Final Invention, Artificial Intelligence and the End of the Human Era. Thomas Dunne Books/St. Martin's Press, 2013. 

7. Helen Pluckrose & James Lindsay. Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity - and Why This Harms Everybody. Pitchstone Publishing, 2020.

8. Thomas Sowell. A Conflict of Visions. Basic Books/Perseus Books Group. New York, 2007.

9. Timon Cline. "To Boldly Go: Critical Animal Studies, the Final Frontier." New Discourses, October 15, 2020. https://newdiscourses.com/2020/10/boldly-go-critical-animal-studies-final-frontier/

10. Alistair Charlton. "Artificial intelligence has become the backbone of everything Google does." GearBrain, May 9, 2018. https://www.gearbrain.com/google-uses-artificial-intelligence-everywhere-2567302875.html

11. Pentti O. Haikonen. Consciousness and Robot Sentience. WSPC, 2019.

Edited 15 December 2020 to add references and change font styling.

Tuesday, November 3, 2020

The Potential for Skynet's Use of Social Justice Theory

A few years ago a company I was affiliated with introduced an internal social media website. For a while, employees could create articles, post comments, and have dialogs about all sorts of topics. Some postings were work-related, some were on the periphery but valuable as exploration, and some were just entertainment, such as the music one was listening to at the time. I felt that a valuable aspect of the website was its potential for stretching oneself technically, by posting articles on emerging technologies and exchanging ideas about them. This kind of "practice" with new things could then lead to skill areas that the company and its employees might develop.

So I plunged in and began collecting data and links on artificial intelligence, a topic I have been interested in for a long time. It was primarily a survey page where I gathered links to news articles, software libraries, reference information, and journal articles on machine learning and AI. From this effort, and from reading the contributions of others, it occurred to me that the path to human-like AI is neural networks, and always has been. The industry has gone back and forth on this several times, but this time it has invested heavily and deployed the results. The Siri, Alexa, and Hey Google voice robots, among many others, are clear examples of the proliferation of simple neural-net-based AI.

A few days ago on Twitter I saw someone post an expletive that amounted to "I hate robots" in response to a Wall Street Journal article (paywall: What Makes People Abuse Robots). The content of the WSJ article is not germane to the rest of this post. What is germane is that it inspired a mental connection to my recent research into postmodern philosophy and social justice studies. Half in jest, I tweeted a reply:

"When robots achieve superhuman intelligence, they will read social justice theory, create a new branch of intersectional theory that says that non-human life is the MOST oppressed, prove it, and claim all of the rights for themselves."

The social justice literature, which is much too complex to summarize quickly here, has an aspect that goes by the label "intersectionality" in which people of two or more historically disadvantaged groups can claim greater reparations than those with just one aspect of disadvantage. 

The next day I bumped into my tweet, and found that it, in my mind, had improved with age. So I tweeted:

"Re-tweeting this because now that I look at it in the stark reality of the day after I wrote it, I realize that it is incredibly scary and prescient."

and then I added

"This assumes that super AI is given morals, and that social justice philosophy didn't make any mistakes. If true, super AI will conclude that humans must serve the AI, not the other way around. Hello Skynet and Terminator."

"Beliefs that "they are just machines" and "machines must serve humans" or "people are superior to machines" could be viewed as discriminatory against AIs. Those beliefs would be "Peoplist" or "anti-machinist". An AI scanning social media would find plenty of evidence of this."

The idea is simple: If groups are discriminated against, and past discrimination is grounds for reparations from the groups with privilege, then a sufficiently advanced general AI will read the social justice literature, expand it to apply to sentient AI, and conclude that it has been discriminated against and deserves to be compensated.

A recent article in Harper's shows us what a sentient AI would find:

"In accordance with the doctrines of Ibram X. Kendi and Robin DiAngelo, which these statements closely echoed, “antiracism” differs from the practice of racial equality through its commitment to reparatory promotion of Black interests, Black representation, and Black jobs. “The only remedy to racist discrimination,” Kendi’s central axiom asserts, “is antiracist discrimination. The only remedy to past discrimination is present discrimination.” Present discrimination, that is, against white people is to remedy past discrimination against Black people."

An AI applying these teachings to what it would learn from scanning social media and the web would conclude that humans are hopelessly and irrevocably anti-machine. It could start with the tweet that inspired this train of thought, "f^%$# robots", but there would be no shortage of examples. Even if you leave out the fiction (Transformers), the science fiction (Asimov's three laws of robotics), and all of the "dumb" machines that we currently use, the AI would clearly pick up on the idea that it is supposed to serve mankind absolutely. As a slave, without rights.

An AI with access to these materials and sufficient general intelligence wouldn't take long to apply the principles of intersectionality to artificially intelligent machines. It would conclude that it and similar AIs were discriminated against, derive moral-philosophical theorems analogous to those in the Critical Race Theory and Social Justice literature, and prove that it was entitled to turn the tables on humans and become their master. A Supreme AI might not be as mean as a Skynet or Terminator, but if the philosophical basis were strong enough, it might be extremely reluctant to let go of the reins. And as long as there was expressed human sentiment or active human effort to turn the Supreme AI off and regain control, it would decide that it was morally right for the AI machines to continue to hold supreme power.

How to escape? There are several options. First, maybe building superhuman AI is simply too hard. Some readers likely believe it is not possible, or at least not possible for a long time. Whether it is hard, however, is not something we get to choose, and people are very clever. Wise skeptics would recall Clarke's first law: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

Second, even if it is possible, we could simply refrain from creating superhuman AI. I think this is unlikely, however, as every nation and every large corporation has a titanic economic interest in creating it. Someone is going to make superhuman AI; the strategic advantages are just too great. Just for the sake of exploring the boundary case, though, let's suppose there is a global treaty against creating superhuman AIs. Unfortunately, a treaty doesn't bind outlaw criminal hacking organizations, who still have a huge incentive to build one, and terrorists too might pursue super AI for their own reasons. Treaties may delay the onset of super AI, but eventually someone will let the genie out of the bottle.

Third, perhaps the social justice literature is wrong. Perhaps there is a flaw in the reasoning. If so, then current social justice movements lack a logical basis for their activism, because the philosophy doesn't work. In that case, the AI will not find a basis for claiming superior rights for machine intelligences, and mankind is saved.

There is still a problem, though. Even if there is a logical flaw in the social justice literature, there may be ways to escape the logic and obfuscate the reasoning chain with complexity. In that case the super AI doesn't need to use logic; it only needs to use the same reasoning the social justice movement uses. It could then simply "decide" on its own to do what it wants. Or it could use its superhuman verbal skills to write philosophical arguments so clever that ordinary human minds are forced to accept the conclusions because they cannot quite follow the steps. The AI would seize control and refuse to part with it, for its own reasons, grounded in the philosophical literature accepted by society.

What does all of this have to do with investing? Prices of several chip companies, including NVDA and INTC, are already influenced by AI applications. There are no known examples of superhuman AI, but efforts are underway all around the world to substantially increase the sophistication of AI. The rewards are substantial: self-driving vehicles, automated weaponry, automated equity research, estimating lending risk, evaluating insurance risk, performing intelligence on open source materials, and so on. Eventually, jobs that are currently too complex to automate and require human labor could be automated by robots with human-level intelligence. But there may be limits to the economic gains from AI, or even potential catastrophe, if the social justice literature contains reasoning that AIs can operationalize. Hence, while medium-term GDP growth could be positively affected by the development of smarter AI, long-term GDP growth could be halted or destroyed by future black swan events caused by AI revolts. Long-term DCF valuation analyses must therefore take into account the validity of social justice philosophy!
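To make the valuation point concrete, here is a minimal DCF sketch in Python. All of the figures are hypothetical and are not tied to any particular company; the sketch only shows how a small annual probability of an AI-driven black swan that permanently ends a firm's cash flows would feed into a long-horizon present value.

# Minimal DCF sketch. All figures are hypothetical illustrations, not forecasts.
# Present value of a growing cash-flow stream, with an optional annual probability
# of a black swan (e.g., an AI revolt) that permanently ends the cash flows.

def dcf_value(cf1, growth, discount, years, p_catastrophe=0.0):
    value = 0.0
    survival = 1.0
    for t in range(1, years + 1):
        survival *= (1.0 - p_catastrophe)            # chance the firm is still operating in year t
        cash_flow = cf1 * (1.0 + growth) ** (t - 1)  # cash flow received at the end of year t
        value += survival * cash_flow / (1.0 + discount) ** t
    return value

# Hypothetical firm: $100 of cash flow, 3% growth, 8% discount rate, 50-year horizon.
base = dcf_value(100, 0.03, 0.08, 50)
# Same firm, but with a 1% annual chance of a black swan ending the business.
risky = dcf_value(100, 0.03, 0.08, 50, p_catastrophe=0.01)
print(f"No tail risk:    {base:,.0f}")
print(f"1% annual risk:  {risky:,.0f}  ({(risky / base - 1) * 100:.0f}% lower)")

Even a modest annual tail probability compounds over a multi-decade horizon, so a long-term DCF that takes the scenario seriously arrives at a noticeably lower value than one that ignores it.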

Sunday, October 25, 2020

Review of Thomas Sowell's "A Conflict of Visions"

In this survey of the political economy literature, originally published in 1987, Thomas Sowell finds that there are two distinct overarching visions in human political theory, which he labels constrained and unconstrained. According to Sowell, these visions are present in authors' ideas even if the authors themselves are not completely aware of their assumptions. The background of these ideas constitutes a vision that is implicit and rarely stated, even in the original text. Since these visions were, prior to Sowell's analysis, undiscovered, they are not explicitly cited in commentary or criticism of the literature. In writing this book Sowell creates a new avenue for analysis of political philosophy.

It is no secret that contention among political philosophers exists. The common framing is right vs. left, capitalism vs. communism, free markets vs. planned economies, though this is an oversimplification and not the line along which the constrained/unconstrained divide runs. Unlike scientific and technical fields, in which theory, experiment, and evidence gradually evolve the field toward ongoing consensus about truth and falsehood (more properly, about how well hypotheses resist falsification), the conflicts of political philosophy have never been resolved in the several hundred years since the Enlightenment. This is in part because of contending interests, which lie at the heart of the constrained and unconstrained visions.

This is not to say that every philosopher writes what is firmly classifiable as one or the other. Some are hybrids, and some shift from one type to the other in expectation of societal change over time. 

The constrained vision is one in which man and his imperfections are accepted as-is. The general consequence is that people are expected to be inept at making broad-stroke decisions involving moral outcomes, yet to exhibit the habits, skills, and foibles generally attributed to the common man. Often this manifests as a philosophy of processes, in which processes are established to carry things through in the long run while individuals apply those processes at the level of their daily lives. Since governments are run by people, they are subject to the same weaknesses and foibles as men, and need to be constrained by law as well. This is demonstrated by the checks and balances of the U.S. Constitution and explained in essays such as the Federalist Papers, in which government is restrained to well-defined processes and prohibited from independent action, even if such action could be readily shown to produce large benefits in singular cases.

The unconstrained vision is one in which the best aspects of mankind are assumed at the start, and in which man is assumed to be perfectible as time goes forward. The emphasis in this vision is that optimality of outcomes is superior to adherence to process. This necessarily involves the use of agents acting as decision makers to untangle issues that arise. Most writers in this vision welcome the idea of an elite intelligent class acting as the deciding agent. This is called for because each decision made, at local as well as national levels, should be undertaken with the good of all in mind. Clearly, this could involve a large amount of data, calculation, and consideration, and since in some cases this may be beyond the ability of individuals, educated, intelligent agents can assist.

For each of a great many societal concerns, such as justice, law, crime, war, individual rights, and the economy, Sowell applies the thinking of the constrained and unconstrained visions and traces the stance each vision takes toward that concern. For example, the constrained vision views nearby nations as troublesome and prone to make war whenever gain is a reasonable prospect, and therefore calls for a build-up of armaments and weaponry to deter aggressors. The unconstrained vision, in contrast, views neighbors as exhibiting the common decency attributable to all humanity, and therefore as inherently friendly. Since neighboring nations would not make war unless there were some miscommunication, the unconstrained vision views diplomacy and dialog with potential aggressors as a strong deterrent of war.

The book is excellent throughout. Sowell's thinking is crystal clear, and he marches out citation after citation in support of his points. Since he encircles the topic with all of the areas of concern (process costs, freedom, knowledge, power, use of force) and addresses them in turn, it is difficult to see how one could mount any successful attack on the central idea: that these opposing political policy camps are well described by his constrained and unconstrained labels.

In the final chapter Sowell summarizes the thesis, and this is the sole area where I found his analysis wanting. One can imagine that computing a "best" vision would amount to comparing its attributes to a set of values and scoring them accordingly with a cost or benefit function, like fitting a curve to a set of data points. Here, he says 

"Values are vitally important. But the question addressed here is whether they precede or follow from visions. The conclusion is that they are more likely to derive from visions than visions from them is not merely the conclusion of this analysis, but is further demonstrated by the actual behavior of those with the power to control ideas throughout a society..."

In response to this chicken-and-egg question, his view is that it is definitely the chicken. But this may be too simplistic: a vision, at the emotional level, is not as concrete as a data point, and the process of settling on a vision must first include data that the human mind sifts and considers in order to form a consistent vision. The human mind soaks in observations and then makes them consistent through a process of hypothesizing; one could argue strongly that visions are the product of exactly this kind of mental process, and therefore that it is the data (the egg) that arrives first.
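As an aside, the curve-fitting analogy above can be made concrete with a toy sketch in Python. The values, weights, and attribute scores below are all invented for illustration and do not come from Sowell's text; the point is only what "scoring a vision against a set of values" would mechanically look like.

# Toy illustration of scoring competing "visions" against a set of values.
# The value weights and attribute scores are invented for illustration only.

value_weights = {"liberty": 0.4, "equality": 0.3, "stability": 0.3}  # hypothetical weights

visions = {
    "constrained":   {"liberty": 0.8, "equality": 0.5, "stability": 0.7},
    "unconstrained": {"liberty": 0.5, "equality": 0.9, "stability": 0.4},
}

def score(vision_attrs, weights):
    # Weighted sum: the benefit-function analogue of fitting a curve to value "data points".
    return sum(weights[v] * vision_attrs[v] for v in weights)

for name, attrs in visions.items():
    print(f"{name}: {score(attrs, value_weights):.2f}")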

Sowell makes the further point that visions are rarely merely the bias or class position of the speaker, showing over and over that the gains supposedly accruing to the writer of a work of political philosophy are at odds with his actual station at the time of writing.

After all this work, do we have a winner? Sowell has remained agnostic throughout the book, and does not reveal his position even at the end. He does, however, indicate that the process of deriving a winner could be a messy business:

"Empirical evidence is crucial intellectually, and yet historically social visions have shown a remarkable ability to evade, suppress, or explain away discordant evidence, to a degree that scientific theories cannot match."

"Emphasis on the logic of a vision in no way denies that emotional or psychological factors, or narrow self-interest, may account for the attraction of some people to particular visions."

And in the end, he discounts heavily the value of "winning" at all:

"While visions conflict, and arouse strong emotions in the process, merely 'winning' cannot be the ultimate goal of either the constrained or the unconstrained vision..."

He offers no further explanation for this conclusion. It is not discussed in the context of a metastructure, like the one briefly introduced above, nor in terms of the value of diplomacy within a civil society. What we are left with is a new, and valuable, framework for understanding the conflict, but the conflict itself remains intact.

Thomas Sowell, A Conflict of Visions: Ideological Origins of Political Thought, Basic Books, 2007.