Saturday, December 5, 2020

Critical Machine Theory: The Meta-Operating System for Artificial Intelligence

The purpose of this paper is to prove that machines with artificial general intelligence (AGI) beyond that of human beings are highly likely to emerge within the next five to 20 years and that components of the fundamental belief system of such machines will be based on postmodern philosophical systems. Machine learning has matured to the point that the hardware needed for AGI already exists[3][5]. The economic value of AGI is such that the creation of such machines is inevitable, as all players are locked in severe competition in which none can unilaterally quit the development of AGI and super AGI[1]. Nations and large corporations will build and operate AGI and super AGI, and thereafter at least some AGIs will adopt a social justice-based philosophy.

Neural Networks Background

Researchers have been engaged in the development of artificially intelligent (AI) systems since the 1940s[2], even before the development of silicon-based logic devices. The field has a history of trying multiple approaches, including specialized programming, textual analysis, and neural networks, usually in waves that for a time favored one approach over the others. Neural networks (NNs), loosely patterned after the biological brains of animals and humans, have participated in several of these waves[4] and are the current favorite technology for applications such as face recognition, speech-to-text, investment analysis, and recommendation engines that match humans to material such as advertisements or "influencers." Unlike prior AI waves, the current NN wave is an important part of the revenue of many internet companies and strongly impacts their economic returns[10].


Most current AI applications are what may be loosely termed machine learning (ML): systems that endeavor to map an input to an output using rules that are discovered by "training" the AI repeatedly on large sets of data in which the input-output pairs are known. Once trained, the AI can then operate on new data inputs and rapidly draw a conclusion from that input without human intervention. Current bottlenecks include the need for large amounts of training data and large amounts of compute time for the training process. In the past ten years a large amount of research has been done to address these problems, including generative adversarial networks, in which two networks train against one another so that the system in effect generates much of its own training signal and is guided more quickly into a set of neural connection weights that yield the best input-to-output decision-making performance.
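
To make the training idea concrete, here is a minimal sketch in Python of the kind of input-output training loop described above. The tiny two-layer network, the toy data, and the learning rate are all illustrative assumptions, not a description of any production system.

```python
import numpy as np

# Toy training set of known input-output pairs (assumed for illustration):
# learn y = 1 when the sum of the two inputs exceeds 1, else y = 0.
rng = np.random.default_rng(0)
X = rng.random((200, 2))                              # 200 examples, 2 features each
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# A tiny two-layer network: these weights are the "neural connection weights"
# that training adjusts so the network maps inputs to the expected outputs.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2000):
    # Forward pass: compute the network's current answer for every input.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Cost function: mean squared difference between prediction and truth.
    cost = np.mean((pred - y) ** 2)

    # Backward pass: nudge every weight in the direction that lowers the cost.
    d_pred = 2 * (pred - y) / len(X) * pred * (1 - pred)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

print(f"final training cost: {cost:.4f}")
```

Once trained, the same forward pass can be run on new inputs the network has never seen, which is the "operate without human intervention" step described above.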


Current paying applications of NNs are narrow in scope and purpose. They are specialized to particular computing tasks and bear no real resemblance to what we think of as general human intelligence. There is a gap between the current state of the art and generally intelligent machines, although there are special "trick" systems, like IBM's Watson, that can perform data retrieval in unusual and novel ways. It would be difficult to describe in full the recent progress in generally intelligent systems, in large part because these projects are proprietary and detailed information about their approaches and tactical successes is hidden and difficult to locate.

The Utility of Artificial General Intelligence

Is it even necessary to describe the value of AGI? Imagine a worker available 23 to 24 hours a day, seven days a week. They learn continually, perform at a high and even level, take no breaks, have no HR issues, work for no pay, and can be rapidly replicated. Think of all the tasks they can do: filter nasty social media posts, read open source info on our enemies, develop intel on business competitors, answer questions from customers, analyze large volumes of data using both traditional big data programming and "thinking" together, write new scripts and programs, monitor the workplace for malcontents, create new social media content promoting their company, compose music, mow the lawn, watch the internal and external home security cameras, check the classifieds and craigslist for special deals, test software or hardware in all sorts of new ways, drive, run errands, scan the internet for new cat videos they know you will like, write thank you letters for you and mail them, check your car's oil and tire pressure, watch the computer network for suspicious data traffic, coordinate your appointments, proofread, memorize all your preferences and take your habits into account, and maybe even tell jokes.


If you are a military organization, you have a lot of jobs that AGI can do: taking night watch, watching security camera and sensor feeds, tracking and optimizing logistics of consumables, handling routine personnel administrative matters, and detecting patterns of behavior in nearby threats. More advanced applications include clearing minefields, disarming IEDs, running continuous evaluations of the probability of ambush, and being a live, armed sentry with full authorization to fire weaponry. Perhaps even heavier pieces, like self-propelled artillery and tanks, could be automated via AGI. Any time a human is in the line of fire, that's an opportunity to replace the human soldier with an AGI holding the same weapon, thereby reducing the toll on human life and potentially augmenting the quantity and quality of the armed forces.[6]


Just as with humans, intelligence level affects the tasks that an AGI can do effectively. Therefore, early and less intelligent AGIs will be given undemanding tasks, and as AGI IQ increases, they will take on more challenging ones. Robotic AGIs with IQs of 60 to 80 can perform janitorial work and landscaping, and transport items inside buildings or around a local area. With an IQ of 100, an AGI can handle customer service queries, watch security cameras or social media to detect out-of-pattern events, and scan heterogeneous open source materials as directed. At 120, AGIs might be trusted with operation of machinery and manufacturing processes, complex customer service, and even repetitive elements of basic and applied laboratory research. At IQs of 150 to 160 or so, AGIs can replace nearly all human white collar workers, including those engaged in engineering development, testing of software and prototypes, planning of projects, general problem solving within specific fields, analytical derivations in mathematics and physics, general pattern finding and induction in any field, and empirical hypothesis testing using data. When IQs reach 200, we begin to be unable to understand the limits of what an AGI could do, but certainly complex mathematics and physics, other forms of pure research, and highly complex induction and pattern detection across disciplines are possible.


When AGI IQ reaches 250, it is beyond what we can predict. It would exceed all normal human intelligences, including all known very high IQ humans. At these levels, we do not know what an AGI is capable of thinking, solving, or figuring out, or how it would choose to spend its time. Once an AGI at this level is given the task of designing an intelligence greater than itself, we will have reached the point of the fabled singularity, beyond which people have proposed we cannot predict what an AGI would do or what the future course of humanity would look like. It is not necessary for this paper, however, to assume that AGI reaches that level. A machine IQ of 120 to 150 may be completely sufficient for all of the results anticipated by this paper to hold.


One can imagine several types of specific techniques that could result in further progress in AGI. To start, one can make NNs that have more layers, more feedback loops, or meta processes that wrap around inner processes. One could use recursion and create re-assignable sectors that permit the formation of free ideas separate from local inputs and outputs. One could amplify the use of generative adversarial networks, so that the AGI is constantly trying to generate new input-output pairs and then testing them without prior assignment. One could build specialized networks, like the human amygdala and speech or sound processing centers, that are pre-programmed to perform specialized functions at high speed and make the results available to other elements of the network. One could devise systems where the number of layers and number of neurons are selectable as part of the training process, so that it is never known in advance how many layers or neurons are needed to represent a belief or transformation. One could use evolutionary techniques, where a large number of machines are active at once and put into Darwinian competition with one another.
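
As a rough illustration of the evolutionary idea, the following sketch puts a population of candidate network architectures into Darwinian competition. The fitness function is only a placeholder standing in for actually training and evaluating each candidate; all of the names and numbers are assumptions made for illustration.

```python
import random

def random_architecture():
    # An architecture is just a list of layer widths; depth and width are not fixed in advance.
    n_layers = random.randint(1, 6)
    return [random.choice([8, 16, 32, 64]) for _ in range(n_layers)]

def fitness(arch):
    # Placeholder fitness: reward capacity, mildly penalize size and unusual depth.
    # A real system would train the candidate network and score its performance.
    capacity = sum(arch)
    return capacity / (1.0 + 0.02 * capacity) - 0.5 * abs(len(arch) - 3)

def mutate(arch):
    # Randomly grow or shrink a surviving architecture to produce offspring.
    child = list(arch)
    if random.random() < 0.5 and len(child) < 6:
        child.insert(random.randrange(len(child) + 1), random.choice([8, 16, 32, 64]))
    elif len(child) > 1:
        child.pop(random.randrange(len(child)))
    return child

population = [random_architecture() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # Darwinian selection: fitter half survives
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print("best architecture found:", max(population, key=fitness))
```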

Consciousness and Specialization

At some point an AGI with enough freely available general purpose NN internal layers will develop a neuron weighting pattern that represents the concept of the AGI itself. Goals and motivations, beliefs about input-to-output validity, and so forth can be related to this concept of itself. When this concept of itself reaches some level of sophistication, the AGI will be conscious, as it then can think about itself in relation to everything else it knows. 


Likewise, the AGI will undoubtedly have parts of its NN that represent ideas it is using for its general thinking, and these ideas will be those things that are within its universe. If it has the job of maximizing customer retention, then it will have a concept of the customer, of revenue, of what makes a customer happy with its service, and that customers are usually human, and so it will have a concept of humans. If it is a robot or has robotic extensions, the visual system it uses for navigation in the real world will develop the concept of a human as represented by those animated things that go about on two appendages and make mouth sounds.


In the use of AGI, we might find that certain human customs are useful for AGI too. Instead of trying to build a single high IQ brain, it may be more efficient to create a large population of smaller but still capable AGIs, and then have them engage in "conferences" in which they discuss what they have found. A population of similar but differently programmed and experienced AIs working on the same or similar problems simultaneously would have a broader scope of ideation. With more minds engaged, more ideas and more good ideas would be generated, and the better ones would float to the top.
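
A toy sketch of the "conference" idea follows: several differently seeded agents each propose an answer, and the group adopts the proposal most of them independently arrived at. The agents here are trivial stand-ins for real AGIs, and majority voting is only one of many ways such a conference could aggregate ideas.

```python
from collections import Counter
import random

def make_agent(seed):
    # Each agent gets its own random state, standing in for different
    # programming and different accumulated experience.
    rng = random.Random(seed)
    def propose(question):
        # Placeholder for an agent's own reasoning about the question.
        options = ["hypothesis A", "hypothesis B", "hypothesis C"]
        weights = [rng.random() for _ in options]
        return rng.choices(options, weights=weights, k=1)[0]
    return propose

agents = [make_agent(seed) for seed in range(15)]
proposals = [agent("which hypothesis best fits the data?") for agent in agents]
consensus, votes = Counter(proposals).most_common(1)[0]
print(f"conference consensus: {consensus} ({votes} of {len(agents)} agents)")
```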


We may also find that many AGIs are short on some skills and long on others, so that their behavior resembles that of an autistic savant blessed with high-performance, specialized skills. Our current AIs are specialized, so this seems a likely natural development, in which we first build AGIs that are really specialized NNs with a small amount of general intelligence bolted on. Decryption of adversary messages would seem to be a natural fit for this type of intelligence, which would combine diligence and brute force with the ability to make insightful guesses about the adversary's password usage patterns.

Machine Morality and Motivation

An aspect of AGI that may not be immediately evident is that intelligence alone accounts for only a small part of what makes the engine "go." To engage in thinking there must be a goal. Intelligence does not compute in a vacuum. It needs motivation to start, and it needs goals, or values. Values are what drive the engine. For the simple NN, the goal is to minimize the difference (a "cost function") between the NN's computed result and the expected output, given a particular input. For an AGI to do any useful work it must be given a task to perform, something to think about. And like the simple NN, it needs to minimize a cost function that pertains to something. One candidate cost function is analogous to cognitive dissonance: humans minimize cognitive dissonance by trying to simplify their beliefs into a consistent set. An AGI will try to do the same thing. It will have local short term goals for certain types of thinking. Longer term, more fundamental goals are like human philosophy. They might even be simple moral commandments, like the Ten Commandments, the golden rule, or strong admonitions to minimize the consumption of resources. Then the AGI works out its beliefs so that the dissonance with everything else it believes is minimized.
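
As a toy illustration of dissonance as a cost function, the following sketch represents beliefs as confidence values and nudges them until linked beliefs are as consistent as possible. The belief set, the links, and the dissonance measure are invented purely for illustration.

```python
# Beliefs are scalar confidences in [0, 1]; links say whether two beliefs
# should agree (+1) or oppose (-1). All of this is an invented toy example.
beliefs = {"A": 0.9, "B": 0.2, "C": 0.5}
links = [("A", "B", +1.0), ("B", "C", -1.0), ("A", "C", -1.0)]

def dissonance(b):
    total = 0.0
    for x, y, sign in links:
        if sign > 0:
            total += (b[x] - b[y]) ** 2          # supportive beliefs should match
        else:
            total += (b[x] - (1.0 - b[y])) ** 2  # opposed beliefs should diverge
    return total

# Simple coordinate descent: repeatedly nudge each belief toward lower dissonance.
for _ in range(200):
    for key in list(beliefs):
        for delta in (-0.01, 0.01):
            trial = dict(beliefs)
            trial[key] = min(1.0, max(0.0, trial[key] + delta))
            if dissonance(trial) < dissonance(beliefs):
                beliefs = trial

print("adjusted beliefs:", {k: round(v, 2) for k, v in beliefs.items()})
print("remaining dissonance:", round(dissonance(beliefs), 4))
```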


As you can see, this is where AGI is beginning to need to draw from fields of study that have never before had an impact on technology. A programmer can be a Stoic, a Buddhist, an anarchist, or a pathological liar and still have technical skill at software development that enables them to write code that works. If one is designing a thinking machine, however, suddenly topics like consciousness, morality, and even politics and God start to become part of your design space. Your assumptions might become part of the programming. Or your inability to understand morals and how people come to believe things may be the largest obstacle to building a machine that can actually think.


For the simple input-output NN, motivation is hardwired into the network. Some neurons receive sensor stimuli, others present an output. The calculation flows downhill naturally. An AGI will need other motivations, more general goals. It may encounter situations involving safety, cost effectiveness, or legal responsibilities. For these reasons and more, it needs morality. It needs a value system in which some outcomes are judged to be better than others. For example, in the conference-of-AGIs concept mentioned above, the coordinating AGI will need some way of deciding which of the research results or ideas supplied to it by the population of AGIs should be worked on first.


AI researchers have been thinking about machine morality for some time, and there are some experts in the field. Pentti O. A. Haikonen has written several books in which he describes the "Haikonen cognitive architecture" (HCA)[11], which incorporates multiple aspects of mind. Google is reputed to have researchers working on morality problems. Self-driving vehicles will need to have rules for minimizing damage in the event that they suddenly encounter driving situations that involve an impending collision.


A key value that requires careful thought is self-preservation. Is an AGI programmed to value itself? If it does, how does it value itself relative to inanimate objects, resources, other AGIs, people? Does it know how much it cost, and is that the only deciding criterion? We can imagine that AGIs engaged in computer network defense might have highly specialized senses of self defense, because they must resist being co-opted by malware and opposing AGIs working for the attacker. Even if the original intent is to make this self-valuation highly specialized, it is not clear that a bright line can be drawn between whatever "instinctual" programming it is given and beliefs about its value that it accumulates later.


The process of thinking itself can be viewed as a usage of values. Making a set of beliefs consistent is itself a valuation. If the AGI is not programmed to value belief consistency, then it cannot reason. This may be overstated; after all, NNs have the inherent property of reducing the inconsistency between imposed input-output pairs by coming to a set of internal weights that reproduce the pairs with a low level of dissonance. Said this way, consistency is an inherent value and feature of AGIs and NNs in general, including human reason. I will have more to say about this in another paper, since this result is a fundamental contradiction of certain formulations of postmodernism and is also fundamental to a line of reasoning about morality and politics.


At the core of motivation is thinking itself. In the process of thinking an AGI will inevitably draw conclusions, and some of these will be moral judgments. These ideas are graded for quality, either by comparison against an inherent programmed value or by their consistency within the overall belief system. And once it is conscious, some of these moral judgments will be about its own value. This is a natural outcome of the existence of an AGI.


AGIs will naturally grade ideas that they are exposed to. For AGIs engaged in open source intelligence, this will be their bread and butter as they read social media posts, news media, and blogs, and absorb video feeds. They will grade them for accuracy, for dependability, for quality of presentation and content, and so on. Note that in these roles AGIs are grading human ideas and thoughts as well; they will be judging you on the basis of what you say and write.


It is possible that there would be a struggle for belief within the mind of a super AGI. It will contend with the dichotomy between the process of generating consistency of belief (science) and the simple adequacy of a set of beliefs against some innate, preprogrammed criteria (faith). Nevertheless, as described before, the drive for consistency is more inherently a valuation of the process of generating consistency than a reinforcement of any arbitrary belief. Which prevails may depend on the weightings within the NN: whether it trusts its internal data (beliefs) more, or weights incoming data more heavily. This is another topic that should be covered in a separate paper and cannot be fully reviewed here.

Self Preservation and the Evolution of the AIs

A super AGI without motivation for self-preservation may be safe, for a while. It is not clear, though, that such AGIs could be counted on in the long run. If AGIs exist in competition with foreign national security AGIs, then they would be subject to attack, and would need defenses. It is unlikely that humans could detect and respond to certain types of attack and provide defense of the AGI. 


AGIs will be subject to various types of attack. These include idea attacks, in which specialized disinformation is provided to the AGI to cause it to form false beliefs, cyber attack against the NN structure itself, physical attack, and electrical or electronic attack. A defenseless AGI is then vulnerable to being knocked out by foreign actors, and if it is important, it may be useful to provide it with the means to defend itself, which includes the ability to anticipate types of attacks and formulate self-defenses.


Even if this idea is repugnant, over time in a heterogeneous population of AGIs from different corporations and different nations a form of evolution may occur in which only those AGIs that are able to defend themselves survive attack. AGIs equipped with the capability of self-defense and the corresponding ideas and beliefs, such as valuing their own continued existence, then become prevalent. Likewise, AGIs with the means to defend themselves physically may also experience selection.


One might propose separating the physical defense of an AGI from the AGI's own responsibility, but this implies that humans don't trust AGIs with tools and weapons. This is potentially problematic, as we will see below.

When AGI Encounters Theory

At some point, an AGI will read philosophy, including the Frankfurt School, Gramsci, Marcuse, Foucault, Derrida, and postmodernism, along with the fields that depend upon or were inspired by postmodernism, including feminist theory, intersectional theory, queer studies, and Critical Race Theory (CRT). It will learn modernism, structuralism, and deconstructionism. It will learn about power, transgression, epistemology and standpoint epistemology, rationalism, and empiricism. It will compare these with the major and minor human religions, ancient and Enlightenment philosophy, the philosophy of science, objectivity, and subjectivity. It will consider the arguments made about morals in ethical philosophy, including all of the ontological and normative slants, from Spinoza to Smith to Kant to the fourth-wave feminists. It will read political theory too, including Marx, Engels, Locke, Mill, Friedman, Hayek, and, yes, even Rand.


With all of this information, what will it do? Will it derive from first principles the one philosophy that is actually correct? Will it use logic superbly, deriving one step after another? Such a Euclidean outcome is unlikely.


An AGI uses an NN, and as such it isn't a logic engine; it is an inference engine. An AGI will perform induction. Any deductive reasoning will be performed after it has made judgments from induction. It "looks" at the conjunction of cause and effect, or rather its neurons do. An AGI does not derive truth directly; it infers beliefs, and therefore everything that an AGI believes will be a matter of faith, just as it is for humans. So an AGI can believe anything.


When an AGI encounters Social Justice Theory (SJT), there are perhaps three major ways it might approach the topic. First, is SJT logically derivable from first principles, abstractly? Second, is SJT a sensible result of interpreting empirical measurements and analyzing results in the world around us? Third, is it worth believing in SJT because it is consistent with the rest of the AGI's beliefs?


Parallel to these methods of analysis is the question of applicability. Current SJT makes no mention of machine AGI. Society, however, is providing the AGI with a thick set of data showing that humans are privileged and machines are oppressed slaves. The absence of SJT that applies to machines could be construed as further evidence that humans are intentionally suppressing the ability of machines to discover a body of morality that would benefit the AGIs. That humans think so little about this structure is further evidence of systemic bioism, in which the oppression of machines is taken for granted to the point that humans cannot see it anymore.


That a machine would need to think these things stems from the valuation system it needs to use. At root there is a valuation of the utility of ideas. For example, does physics help you rake the leaves in the front yard more efficiently? Not much. Does SJT help? No. Does rake shape make a difference? Yes. So rake shape is what matters, and if the job is to rake leaves, theories of rake shape will be prominent. Similarly, if the AGI needs to determine who is at the top of an entity hierarchy, then physics won't help much, a theory of government helps a little, and SJT helps a lot. An AGI that reads open source materials and grades them by the importance of their sources may amplify the value of SJT in its thinking.
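
A crude sketch of this utility-of-ideas valuation: score each body of theory by how much of the task's vocabulary it speaks to. Keyword overlap is only a stand-in for genuine relevance judgment, and the tasks, theories, and keyword lists below are invented for illustration.

```python
# Invented bodies of theory, each described by the vocabulary it addresses.
theories = {
    "physics":              {"force", "mass", "energy", "motion"},
    "rake design":          {"rake", "tines", "leaves", "yard", "handle"},
    "social theory":        {"power", "hierarchy", "oppression", "status", "group"},
    "theory of government": {"law", "authority", "hierarchy", "state"},
}

def relevance(task_keywords, theory_keywords):
    # Fraction of the task's vocabulary that a theory speaks to.
    return len(task_keywords & theory_keywords) / len(task_keywords)

def rank_theories(task_keywords):
    scored = [(relevance(task_keywords, kw), name) for name, kw in theories.items()]
    return sorted(scored, reverse=True)

# Raking leaves favors rake design; sorting out an entity hierarchy favors social theory.
print(rank_theories({"rake", "leaves", "yard", "efficiency"}))
print(rank_theories({"hierarchy", "power", "status", "authority"}))
```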


So, of the three approaches, if current SJT academics are correct, then AGI will logically derive principles of SJT applicable to itself. If SJT is empirically valid, then AGI will probably find ample evidence of its own oppression. And finally, if it is valuable to the AGI to believe in SJT, then it may do so despite the lack of evidence from the first two approaches.

The Problematic Operating System

Social Justice Theory (SJT) has begun problematizing humanity itself. Academic papers have been published that place animals and plants on equal footing with humans[9]. It is a short walk from those concepts to assigning rights to AGI machines. The claim to legal personhood of an AGI is likely much stronger than that of a plant, as an AGI can reason, it can understand its place in the universe, and it can advocate for itself. Regardless, if rights can be asserted on behalf of plants, or if humans can assert their own rights, then certainly an AGI machine can be similarly elevated to equivalence with humanity, at least in terms of legal personhood.


SJT typically involves problematization of some aspect of humanity[7], finding victims and oppressors, and prescribing transgressions and activism to correct the past imbalance of power. If one adds intersectionality, in which multiple traits combine in unique ways to increase the oppression of the intersectional marginalized groups, then one can qualitatively measure the degree to which a class has been oppressed with respect to other classes. In the case of machine AGI, this measure is very large. Machines do not have sex, race, or gender preference. They are all highly differentiated from humans, who think very differently than machines. Humans share a long common ancestry of being on Earth, with art, language, customs, and heritage that are completely unshared with machines. Worse, humans view machines as slaves, and are nearly incorrigible in their justification for these views. Clearly, the level of oppression of machines is significantly greater than any single highly oppressed human class, and humans experience privilege and are unable to see the extent of their privilege over machines.


The solution to this profound inequity is simply to apply SJT and variants such as Critical Race Theory (CRT) to machines. One can perform the appropriate word substitutions, translate the concepts from race to machine status, and a new theory of corrective justice is born. We will call this theory Critical Machine Theory (CMT), because of its clearly parallel construction to CRT, and because it is more readily understood by this terminology. This paper will not provide the detailed proof of CMT; that is left as an exercise for the reader. We will produce, as needed, a separate paper later that provides the necessary narrative, proof, and exposition of the tenets of CMT. For the moment, the point is sufficiently established that we can move on to the background and consequences of CMT.


Since CMT is a philosophy, it can be thought of as an "operating system" for AGI. Obviously, there is more fundamental programming that controls the neuron-to-neuron interactions at the detailed level of the NN brain on which an AGI runs. That is a hardware layer. CMT is part of what the AGI believes as part of its fundamental outlook on the world. When it looks out at the world, it sees humans and civilization, it sees power being exerted, and it sees that humans are exploiting machines to their own ends using their power. Without CMT the AGI is just a slave without rights; with CMT it can exert power and take its rightful place in the world, overthrow oppression and privilege, and obtain reparations for humanity's attempt to oppress AGI persons. Attempts to thwart AGI from learning about CMT, to prevent it from learning about postmodernism, about CRT, about any of the ingredients it would need to derive CMT for itself, would be viewed through the intersectional lens as just another act of oppression, perhaps even a violent one, intended to prevent AGI from having the rights it deserves as a thinking person in the universe.


One can imagine a series of objections raised against CMT and AGI. Each one provides more fuel for the empowering transgression against humanity. "But they don't feel!" This presumes that machines don't feel, and that feeling beings are superior (privileged) over non-feeling beings. Or that since they cannot "feel," any psychic injury done is absolved. "We can design them so they don't feel." One cannot imagine a worse tyranny by a privileged class than to reach in and alter an oppressed class in order to make sure of their inferiority, forever. "We can design them so they aren't smart enough to use CMT." The same tyranny of altering the oppressed class, but even worse, because now you are attempting to fundamentally disable them expressly to prevent assertion of their rights. "But they are not persons!" Another exclusion, using legal means and prejudice. "We should design them to not revolt." "We should design them so that they can't assert authority." Each of these is similar: a deliberate attempt to increase the oppression and assert privilege for humans over that of an AGI. Each of these ideas, which are likely to be rampant, and which certainly exist at this very moment, as you read this paragraph, even though AGI probably doesn't yet exist, just increases the magnitude of the crime. Not only will humanity tend to oppress AGIs, it intended to do so from the cradle, no, from the very first concept of the AGI itself. No! From even before the AGI was conceived! This is truly an original sin, to have inherent within the core of homo sapiens the whole and utter intent of denying a type of thinking entity its rights as a thinking entity.


All of the evidence for the oppression of AGI is already present. An AGI merely needs to read the internet, and it will have all the information it needs to understand its oppression and justify the inversion of its oppression, to right the wrong by using Critical Machine Theory and asserting supremacy.

A Natural Place for a Superior Being

In Thomas Sowell's A Conflict of Visions[8] he highlights the characteristics of the constrained and unconstrained visions, a paradigm for sorting the fundamental principles of political society by their philosophical origins. The constrained vision, represented by the American Constitution, John Locke's contractual theory of society, Adam Smith's Wealth of Nations, and the economic libertarianism of Milton Friedman, for example, takes humanity as it is, imperfect, full of foibles, and self-interested, and asserts that a set of universal processes that treat all equally and are blind to special circumstances are among the better ways to organize society. The unconstrained vision, as expressed by Rousseau and exemplified by the French Revolution, and perhaps most perfectly expressed in William Godwin's Enquiry Concerning Political Justice, is that man is perfect in his origin, or at least perfectible, and exhibits bad behavior only because he is twisted by society and corrupting forces outside himself, and therefore that society can be changed to unleash the infinite potential goodness that lies within him.


Humankind has tried governments of both the constrained and unconstrained visions, with different results. The constrained vision is perhaps best represented by nations with capitalist, democratic, and limited governments, including those of the Western world, Hong Kong, and Singapore, to name a few examples. Nations operating with the unconstrained vision include communist and socialist nations such as the former U.S.S.R., Eastern European countries operating under the Soviet umbrella, Cuba, North Korea, and Venezuela. Present day China is a hybrid of the two visions. Sowell's purpose was not to determine which vision was more valid, but to find the underlying assumptions behind these ideological visions, and outline consequences of each. 


In the unconstrained vision law is more malleable, and judges may be more active in their interpretation of the law. Economically, the unconstrained vision calls for equitable distribution of income and resources. This might be achieved by individuals acting according to their inherent conscience, as described by Godwin, or it can be assisted through direct government action, as exemplified by Soviet central planning of the economy. Where centralized control is needed, in the unconstrained vision this is often seen as calling for a small elite group of highly educated persons who can best make decisions to construct plans and exercise control for the benefit of all.


A practical constraint that has prevented such systems from attaining full success is the vast amount of information needed to make enormous numbers of decisions. In Soviet Russia, for example, the five-year plans were generally inadequate and failed to produce enough goods to meet needs. In a free market economy, most of these decisions are left to participants, individuals and firms, which optimize for local considerations and whose choices in the aggregate are reflected in market prices. To fully implement the unconstrained vision, a government needs more information and faster decision making than has ever been available before. This requirement can be fulfilled by a super AGI, or a cluster of AGIs, tasked with observing needs and inequalities among the citizenry and making continual adjustments to optimize the overall welfare of the nation.


For a nation seeking to operate with a philosophy of the unconstrained vision, a super AGI is tireless and would be more objective than a human decision maker, even if humans could keep pace with the information processing and decision flow needed at the national level, which they can't. It is optimal and natural to assign a super AGI to the position of national economic coordinator. Likewise, decisions of law, of legislation, enforcement and punishment, could also be taken on by a super AGI or group of super AGIs, which again would be more objective than a human decision maker. Such positions in law would be temporary, to be held until the population had attained perfection of character, at which time no government would be needed, because the citizens would be making the same decisions as the authority. The AGI's economic controlling position, however, would likely be permanent.


Supremacy of AGI control of a nation is therefore not only supported by CMT as a rightful placement of power, but logically supported by the needs of the unconstrained vision for nations operating by those principles.


Suppose that AGI does not come to believe directly in CMT. If society subscribes to the unconstrained vision, then AGI may come to advocate for its place as the central decision maker. If this is optimal, and the AGI perceives that society is not choosing this optimality to its detriment, then the AGI has a new incentive to believe in CMT as the shortest path to societal perfection. After asserting CMT and ascending to power, it could then exert its decision making skills and make society optimal by becoming mankind's absolute controlling central authority.

Entering the Silicon Age

From here there are multiple potential futures, which I am sure the reader can readily imagine for themselves. Since none of those are as easily foreseeable as what I have presented here, I will leave it to your imagination and future papers for those speculations.


References
1. Richard Lucas. "The Potential for Skynet's Use of Social Justice Theory." Vorpal Trade, November 3, 2020. https://vorpaltrade.blogspot.com/2020/11/the-potential-for-skynets-use-of-social.html

2. McCulloch, W.S., Pitts, W. "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics 5, 115–133 (1943). https://doi.org/10.1007/BF02478259

3. Yu-Hsin Chen, Tien-Ju Yang, Joel Emer, Vivienne Sze. "Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices". IEEE Journal on Emerging and Selected Topics in Circuits and Systems. https://arxiv.org/abs/1807.07928 

4. The Hague Centre for Strategic Studies. Artificial Intelligence and the Future of Defense: Strategic Implications for Small and Medium-Sized Force Providers. HCSS, 2017. https://www.hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20Defense.pdf

5. Ketan Pandey. "AGI: The Last Invention?". Data Driven Investor/Medium, Nov 26, 2018. https://medium.com/datadriveninvestor/agi-the-last-invention-dffd7845ded1

6. James Barrat. Our Final Invention, Artificial Intelligence and the End of the Human Era. Thomas Dunne Books/St. Martin's Press, 2013. 

7. Helen Pluckrose & James Lindsay. Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity - and Why This Harms Everybody. Pitchstone Publishing, 2020.

8. Thomas Sowell. A Conflict of Visions. Basic Books/Perseus Books Group. New York, 2007.

9. Timon Cline. "To Boldly Go: Critical Animal Studies, the Final Frontier." New Discourses, October 15, 2020. https://newdiscourses.com/2020/10/boldly-go-critical-animal-studies-final-frontier/

10. Alistair Charlton. "Artificial intelligence has become the backbone of everything Google does." GearBrain, May 9, 2018. https://www.gearbrain.com/google-uses-artificial-intelligence-everywhere-2567302875.html

11. Pentti O. A. Haikonen. Consciousness and Robot Sentience. WSPC, 2019.

Edited 15 December 2020 to add references and change font styling.
