Technological Animism

The Uncanny Personhood of Humanoid Machines

Abstract

This article analyzes the role of animism in the creation and production of humanoid robots. In Japan and the United States, robotic science has emerged from fictional sources and is enmeshed with fictional models, even when developed in advanced technoscientific facilities. Drawing on the work of Sigmund Freud and Masahiro Mori, I explore the robot as an ‘uncanny’ doppelgänger that is liminally situated between the human and non-human. Cultural depictions of robots, particularly in written and visual fiction, reflect Freudian fears of the ‘double’ as the annihilating other. I propose the concept of ‘technological animism’ to explore how fiction and technoscience co-construct each other, with roboticists drawing inspiration from positive fictional models, as among Japanese scientists, or frequently rejecting such models, as among their North American colleagues.

How should animism be understood in the early twenty-first century? From Tylor’s description in Primitive Culture ([1871] 1920) to Bird-David’s (1999) critique of it as the ‘failed epistemology’ model of animism, anthropologists have shifted tremendously in their vision of animistic phenomena. Descola (1996), Ingold (2006), Corsín-Jiménez and Willerslev (2007), and Viveiros de Castro (1998, 2004), among others, have challenged Western epistemologies in the study of animism, particularly Cartesian dualisms that divide the real from the non-real. Even in these innovative works, however, animism remains associated with processes of personifying nature, especially animals (and sometimes plants or other features of the natural landscape), by attributing thoughts, feelings, and intentions to these entities. Yet the potential for animism in technoscientific settings—the personhood of machines—has hardly been explored in the anthropological literature on animism.

This article examines the role of animism in the creation and production of humanoid robots. I suggest that the concept of animism has broader applications in both ‘natural’ surroundings and the highly technological and experimental setting of robotics laboratories, where myth, fiction, and scientific exploration are brought to bear on the question of what it means to be human. I propose the term ‘technological animism’ to describe the conceptual model of personhood that emerges in the interaction between fiction, robotics, and culturally specific models of personhood, which may already include non-human persons.

How, as anthropologists, might we draw on debates related to animism to make sense of a peculiar activity where technologists create human-like robot entities that have been intentionally designed to have specific kinds of human qualities (such as emotions or memory) and then put them with adults and children in order to understand the relational effects of these entities (Richardson 2015)? Corsín-Jiménez and Willerslev (2007) problematize the constructs we use as anthropologists, arguing that modes of reflection are contained within our constructs, as well as within the relations between constructs that provide an analytical architecture. What analytical architecture might be useful in understanding the expansion of the field of humanoid and social robotics? In this article, I draw on the psychoanalytic writings of Sigmund Freud and the scientific and theoretical writings of Masahiro Mori, as well as on cognitive science and contemporary anthropological studies of animism, to illuminate the liminal character of both fictional and real robots. In essence, my findings decisively challenge the view of animism as a phenomenon belonging to the ‘primitive’ and situate it instead as a means of understanding personhood beyond the human—even beyond the ‘natural’.

The making of robots is a complex multinational and multicultural scientific enterprise. In US robotics labs, I met North American, Japanese, Indonesian, Irish, Colombian, and Italian roboticists. After a while, the multiplicity of persons of different nationalities, life histories, and lived experiences converged into specific themes. In this article, I focus on the theme of technological animism and its roots in popular fiction. The automata of the eighteenth century were an important precursor to the mechanical reproduction of humans and animals, as I discuss below, but what we may call the ‘modern robot’ began as a character in a play, entering public life not via a laboratory or a factory, but at a theater. Rossumovi Univerzální Roboti (Rossum’s Universal Robots), an avant-garde play from the early twentieth century by Karel Čapek ([1921] 2006), inaugurated the trend in which fictional representations of androids and other human-like automata directly influenced the development and popular reception of modern robots.

In my fieldwork in American robotics laboratories with international teams of scientists from Japan, the United States, and Europe—and focusing especially on the lab at the Massachusetts Institute of Technology (MIT)—I found that fictional representations of animate beings were pivotal to the emergence of technological animism. In the US, fictional visions of robots as ‘uncanny’ agents of annihilation have influenced American researchers to carefully manage (and often minimize) the human qualities of robots in order to avoid evoking these destructive qualities. Their creations affirm a vision of robots as secular and non-magical. Yet at the same time, American researchers are unable to ‘control’ their own narratives about robots as they compete with popular fictions—such as the Terminator franchise of films and television series (beginning with the 1984 film of the same name)—that present robots as threatening. US roboticists saw themselves as producing objects in a climate of hostility and frequently blamed Hollywood representations of robots and artificial intelligence (AI) for this.

The Japanese researchers whom I met in US robotics labs contrasted their robot philosophies and practices as distinct, even radically different, from those of their Euro-American counterparts. They drew on animistic elements of Japanese Buddhism and Shintoism to support a distinctive cultural narrative of robots as friends and not foes (Coeckelbergh 2013; Jensen and Blok 2013; Robertson 2007; Sone 2008). These religious beliefs provide a ‘principle of equivalence’ that can be seized upon to relate humans and non-humans (Mori 1970). While a full discussion of Japanese animism is beyond the scope of this article, it is worth briefly discussing what this means for my argument. Based on the interpretations of sociologist John Clammer, Jensen and Blok (2013: 97) suggest that Shintoism can be seen “as a complex and specific form of animism. It is in the shape of a vital animism, within a complex, modernized and advanced techno-scientific country, that Shinto holds interest for us as a vehicle for rethinking relations with the non-human world.” In Shinto, a “radical ‘personalization’” of everything consists in the presence of spirits (kami) in all beings (ibid.).

By drawing on fiction and religion, Japanese and North American robotic scientists do not diminish their scientific credentials, but add to them. For Japanese roboticists, it seems as if Japanese people are ready to welcome robots with open arms, as they step directly into their living rooms from the production lines of Sony, Toshiba, and Honda. A number of social scientists have contrasted the popular understanding of robots in Japan with Euro-American views (Allison 2006; Jensen and Blok 2013; Sone 2008). They have particularly examined how models of non-human personhood, originating in Japanese Shinto and Buddhism, have influenced the development of ‘techno-animism’, a term first coined by anthropologist Anne Allison (2006) in Millennial Monsters. This idea refers to the Japanese conceptualization of technological entities as heterogeneous hybrids (ibid.: 13), blending what Jensen and Blok (2013: 85) describe as “advanced technologies and spiritual capacities.” However, I propose that technological animism applies not only to Japan, but also to Euro-American imaginings of robots, as explored below. American roboticists distinguish their technoscientific practice from mainstream (Judeo-Christian) religion, but they rely on fiction as a context for making, designing, and imbuing robots with animistic qualities. Whereas fiction, Japanese Buddhism, and/or a Judeo-Christian heritage might seem to be at odds, the work of roboticists at MIT indicates that the making of humanoid robots is a practice in which religion and fiction fuse with technological practice. Technological animism thus emerges from the interaction between a religious or cultural context, fictional models, and technoscientific production. I begin by examining the fictional models that have brought robots into the popular imagination.

A Short History of Robots in Europe and North America

The first fictional representation of a ‘modern’ robot, and indeed, the origin of the term ‘robot’ itself, was a phantasmagoric tale of revolution set against the backdrop of political turmoil in inter-war Europe. In Rossum’s Universal Robots (hereafter R.U.R.), artificial humans are created out of thousands of individual organic body parts and assembled on a factory production line. The term ‘robot’ comes from a Slavic term meaning ‘to work slavishly’, although it can also be a neutral word for ‘work’ in Slavic languages. In Čapek’s ([1921] 2006) play, the factory production line (where robots are produced) and the scientific laboratory (which creates the formula for robots) merge into one integrated setting in a darkly futuristic vision of modernity: the most important theme in the play is the ‘robot revolution’, which destroys humanity. Written in a historic moment of worldwide embattlement, when World War I (1914–1918) and the Russian Revolution (1917) were still recent memories, Čapek’s play is a political drama, a commentary on the destructiveness of war. It is also a commentary on humanity and the capacity to (re)animate it in alternative ways, since his robots are made with human body parts, echoing themes occurring in a plethora of Euro-American and Judeo-Christian religious and literary texts, from the Jewish myth of the Golem to Mary Shelley’s Frankenstein. All of these fictions share the leitmotif of a threat to humanity from the remodeled human form.

R.U.R. inaugurated Euro-American robot fiction. In its first phase, spanning the 1920s to the 1950s, this fiction imagined robots as revolutionaries, embedded in the political narratives of the era. From the 1950s to the 1990s, revolutionary robots were reimagined as domestic devices and thus, to a certain extent, neutralized of political energy and force. In the 1960s, robots were extended into a new area when the first industrial robot—the Unimate Robot—entered the production line. Its use on a General Motors assembly line in New Jersey mirrored the factory theme in R.U.R. The 1960s also saw the rise of AI as a field that would inspire Stanley Kubrick’s (1968) classic film, 2001: A Space Odyssey.

The third phase of robot fiction began in the 1990s and continues today. Contemporary fictional robots may be social companions, friends, caregivers, and potential lovers. They are sometimes even religious or faith-based beings. At the same time, the theme of annihilation has persisted, and fictional robots are often still frightening agents working for the destruction of humanity. In the new Battlestar Galactica (Moore and Eick 2004–2009), a remake of the 1970s television series (Larson 1978), humanoid machines motivated by religious beliefs plant bombs and carry out assassinations, threatening the destruction of humanity in order to protect themselves. The themes of quasi-spirituality, robots, and science fiction also feature in the massively popular Star Wars films, beginning with Star Wars, Episode IV: A New Hope (Lucas 1977). Other fictions show humans living in an alienating world, as in the film Surrogates (Mostow 2009), where contact is mediated through mechanical avatars, or the film The Matrix (Wachowski and Wachowski 1999). Spike Jonze’s (2013) film Her, starring Joaquin Phoenix, puts a new twist on the theme of alienation, showing artificially intelligent beings as potentially rescuing people from emotional disengagement. Contemporary robot fiction thus engages with themes of terrorism, religious war, alienation, and the pervasive integration of technology in everyday life.

Euro-American roboticists are intensely aware of popular conceptions of robots as agents of destruction, and this influences their laboratory practice. Although all of the robots I have studied are research platforms (meaning they are experimental prototypes rather than commercial objects), there is still fear among scientists that, should their robots become commercially available, the belief that robots are destructive will make the general population uneasy about buying or using them. Scientists have pre-empted this threat by working to design robots that they perceive to be culturally less threatening. This is why you will see childlike robots in many robotic labs in North America, Europe, and Japan.

Technological Animism and the Rise of Childlike Bodies

In the past 10 years, research to develop ‘social’ robots, and more particularly childlike robots, has flourished worldwide. Since I conducted fieldwork at MIT in 2003–2005, I have found the child robot to be a recurring laboratory motif. Examples include Kismet and Mertz (MIT), Bandit (University of Southern California), KASPAR (University of Hertfordshire), RobotCub (various labs in Europe), ASIMO (Honda, Japan), and Biomimetic Baby or CB2 (Japan). Even when robots are not intentionally designed to appear childlike, many researchers still incorporate notions of child developmental psychology. During fieldwork in North American and British laboratories, I repeatedly found that robotic labs often look more like kindergartens than engineering workshops. Even when the US DARPA (Defense Advanced Research Projects Agency) was funding projects, labs were filled with toys, children’s books, and machines deliberately designed to look adorable. In the high-tech AI robotic labs of MIT, where scientists craft robots and engage in ongoing experimentation with them, it is common to find brightly colored objects, rattles, and trains. In keeping with this childlike atmosphere, interactions with humanoid robots often take the form of adult-child exchanges.

The Honda robot ASIMO (Advanced Step in Innovative Mobility) is perhaps the world’s most well-known robot and provides an illustrative example. ASIMO stands at 4 feet 3 inches tall and is roughly the height of an eight-year-old boy. I say ‘boy’ because it looks like a boy dressed in an astronaut’s suit. ASIMO could easily be a machine imitating an astronaut, or a robot imitating a boy imitating an astronaut. The ‘head’ of ASIMO is mostly a helmet, and when facing ASIMO close up, I found that its face is significantly ‘reduced’ in expressive human features, with only a line to mark the mouth and two dots in place of eyes. What exactly are the roboticists at Honda trying to achieve when designing their most significant robot in this way? There are a number of issues to unpack here. Depending on whom you speak to about the Honda robot, you will get a different response. According to the official site, ASIMO was made to provide a platform so that researchers could develop an experimental, sophisticated machine. The robot is envisioned as a supportive assistant to “help people”: “In the future, ASIMO may serve as another set of eyes, ears, hands and legs for all kinds of people in need. Someday ASIMO might help with important tasks like assisting the elderly or a person confined to a bed or a wheelchair. ASIMO might also perform certain tasks that are dangerous to humans, such as fighting fires or cleaning up toxic spills.”1 Is ASIMO a child or a small human or something else altogether? Knowing what ASIMO’s size is supposed to represent is important. If ASIMO is a child duplicate, then are the manufacturers unintentionally creating all kinds of ethical problems that have to do with child labor?

ASIMO was the first technological robot I had seen that was overtly childlike. I was struck by the explanation that a professor of robotics gave to account for the robot’s size: “The child at this approximate height can reach power sockets and light switches.” It never occurred to me that an army of childlike domestic robots caring for the elderly or carrying out domestic work might evoke controversy. The shift to childlike robots represents a shift in cultural consciousness in relation to robots in Japan and North America. It should be noted that labs in the US and Japan make a variety of different robots. But even if the robots in these labs are not formally created to resemble children, other aspects of their research, design, and development draw on studies of child development and the field of ‘epigenetic robotics’ (see Berthouze and Metta 2005; Zlatev and Balkenius 2001). This field sees the development of ‘intelligence’ as an incremental and experiential process that involves the robot and its relations with others and with its environment (Aryananda 2007; Breazeal 2002; Breazeal and Aryananda 2002; Breazeal and Scassellati 2000; Breazeal and Velásquez 1998).

The robotics professor told me it was necessary to design robots to appear cute and childlike in order to counteract popular notions that robots are threatening to humanity and hyper-sophisticated. He explained that the design and utilization of child development models for his group’s robots was effectively a reaction to the threatening images of robots popularized through fiction and film. The robot assassin portrayed by Arnold Schwarzenegger in James Cameron’s (1984) film Terminator is an example. Roboticists anticipate that a robot designed to look like an infant or young child, or otherwise to look cute, will be less threatening and will elicit more engagement. There are considerable design challenges to making robots small and childlike in form—factors such as hardware flexibility, the machine’s function and complexity, and so on. To heighten the attractiveness of their machines, roboticists frequently design their robots to have large eyes, which are common in infants. Evolutionary biologists speculate that children’s larger eyes are more attractive to their caregivers, making it less likely that they will be harmed by them. At the former Artificial Intelligence Lab at MIT, roboticist Cynthia Breazeal imaginatively employed the techniques of Disney animators to make her robots cute.

The scientists I spoke to found that childlike and cute robots encourage adults to interact with them, even if the interaction requires extra effort on the part of the adult to maintain it. An adult might linger for a longer period of time if he or she perceives the robot to be funny or appealing, even if the robot is performing behaviors to benefit only itself (e.g., recording data for the researcher from the interaction). I have seen the efforts of adult men and women as they try to engage with Radius, a socially inspired robot head, and other robots at the Maria Stata Center at MIT, which is a perfect environment for securing a stream of passersby. Often adults persist even if the robot moves erratically and repeats a string of scripted sentences that are at odds with the moment. When a robot is childlike, adults tend spontaneously to support the machine and compensate for what it is lacking, altering their expectations as one might with a child, and they may even nurture the machine. The term ‘caregiver’ is sometimes used to describe the role an adult assumes when interacting with a childlike robot. The exchange between adults and childlike robots results in an asymmetrical social relationality, as between adults and children. For example, I frequently interacted with the robot Radius, using brightly colored toys to help the robotic scientist in the lab better calibrate its facial and color recognition software. I found myself calling the robot’s name as it jerkily moved its head in any direction but mine. I persisted in trying to attract the machine’s attention until Radius’s eyes briefly connected with mine. I had its attention, only to lose it again moments later.

In epigenetic robotics, the robot can be made to learn about its environment in a structured way, mimicking particular stages of child development. However, a robot’s configuration of capabilities, learning, and appearance confound the categories that people use when thinking about childhood. Although it may be the size of an eight-year-old boy, a robot may still be used as a research platform to explore baby crawling; thus, age, size, skill, and functions are jumbled up. Interestingly, presenting the robots as children invites their inscription into honorary kinship categories. I know robotic scientists who openly welcome being labeled the ‘mother’ or ‘father’ of their robot platforms (Breazeal 2002; Robertson 2007, 2010).

The exchanges described above foster technological animism. The human participants in these interactions are aware that the other party is not human, much less a child. Yet they perceive, and respond to, the ‘animation’ of the robot as if it were a (partially human) person. This technological animism is not premised on the putative existence of a soul or on other intangible soul-like qualities. On the contrary, it produces an intangible sense of human-like qualities through purely mechanistic (technological) means.

Fictional robots provide further fuel for technological animism. The robot (a copy) is sustained by its relation to the originals (human adult/robot fiction), as roboticists and laypeople interacting with robots draw on fictional representations when projecting intentions, actions, and agency onto robots. In the case of Radius or ASIMO, the robot child (copy) is bolstered by its original (human child) and enlivened further, through technological animism, by the adult’s extra effort to support the robot child. Let me stress that I have seen robots do absolutely nothing at all and still impress audiences with their human-like qualities. This is because the adult is not seeing just the physical object in front of him or her, but is perceiving a whole catalogue of cultural references that reinforce the exchange within the ontology of technological animism.

Fictional Animism, Automatons, and the Uncanny (Valley)

In 2001, AI: Artificial Intelligence (Spielberg 2001) was released in cinemas across the world. The story focuses on a robot child, David, and his existential crisis at being neglected and abandoned. The movie draws inspiration from the nineteenth-century tale of Pinocchio, in which a childless bachelor crafts a puppet in the shape of a boy because he is lonely and longs to care for a son. In the film, David is programmed to attach to one specific person, and his durability as a robot means that he is doomed to outlive the humans whom he loves and who have abandoned him. David’s story is really about love and separation. The film raises questions that are now being played out in robotics laboratories: Can humans develop bonds and attachments to robotic entities? Can this process be facilitated or supported if the robot is childlike? If robots become sophisticated enough to start performing some of their imagined supportive and social companionship roles, will this transform attachment patterns that currently take place almost exclusively between humans or between humans and animals, particularly domesticated pets? Can such patterns be transferred to robots that have both human and mechanical qualities? Recent films such as Her echo these questions about love and attachment between humans and animate machines.

Donna Haraway (1991) asserts that technoscience breached the boundary between fact and fiction in the twentieth century. Yet in robotic fiction and robotic science, there has never been a boundary to breach, because the latter has inherited the properties of the former as it took robotics into new territory. Films like AI: Artificial Intelligence and Her provide imaginative models for the potential future of robotics, while raising social and ethical questions surrounding robots. These questions arguably boil down to the problems of technological animism, that is, the ascription of human-like qualities of personhood to robots, which can easily provoke discomfort and fear. Japanese roboticist Masahiro Mori (1970) explores such reactions in his work on the ‘uncanny valley’. Mori placed human-like objects on a grid with appearance on one axis and behavior on another. The highest point in the chart is a healthy human person, and the lowest point is a zombie.2 Behavior and appearance need to connect; otherwise, Mori argues, the entity is frightening. A robot that appears lifelike but moves with jerky movements in repetitive ways gives rise to a sense of the uncanny, since the expectation is that the more human-like the robot appears, the more it should behave in human-like ways. In Olga Ulturgasheva’s description of djuluchen (this issue), she cites an Eveny hunter’s story in which a wolf has sent his ‘forerunner’ ahead of him, which appears to the hunter as if it were a “broken machine that repeated itself … over and over again.” When an entity is out of synch with some aspect of its being, it begins to resemble a machine. Hence, when the human-like robot fails to act appropriately like a human, it falls into the ‘uncanny valley’, the lowest point of the graph, where it ostensibly provokes the greatest fear.

The Freudian idea of the uncanny, which inspired Mori’s writing, is a psychoanalytical explanation for objects and scenarios that provoke terror, uncertainty, and confusion by challenging our familiar concepts. Freud’s ([1919] 2003) essay, The Uncanny, deals in part with his theory of the Oedipus complex, but it is his notion of the uncanny that has become the more popular reading. The “uncanny” sensation arises from “the unhomely” (ibid.: 152), “the ‘double’” (ibid.: 141), or that which provokes “intellectual uncertainty” (ibid.: 140). It can be a consequence of the difficulty in judging “whether something is animate or inanimate” (ibid.: 141) or the confusion when something not alive “bears an excessive likeness to the living” (ibid.). Freud was interested in the context that facilitates this uncomfortable breaching of boundaries and how it might be explained. Not everything that is ambivalently animate and inanimate triggers the uncanny. In Freud’s view: “[T]he fairy tale is quite openly committed to the animistic view that thoughts and wishes are all-powerful, but I cannot cite one genuine fairy tale in which anything uncanny occurs. We are told that it is highly uncanny when inanimate objects—pictures or dolls—come to life, but in Hans Andersen’s stories the household utensils, the furniture and the tin soldier are alive, and perhaps nothing is farther removed from the uncanny. Even when Pygmalion’s beautiful statue comes to life, this is hardly felt to be uncanny” (ibid.: 153). The uncanny, then, is not triggered by the animation of the inanimate per se, but by the animation of the inanimate in a specific type of context. Freud included psychical and physical states in the uncanny and referred to the numerous fictional, imaginary, physical, and real conditions that might trigger them. But the question remains: when do boundaries become so blurred that they trigger this state of discomfort and fear?

Čapek’s R.U.R., which was published two years after The Uncanny, would have been an interesting subject for Freud’s analysis. Instead of robots, Freud examines the psychic states evoked by human-like objects, such as automata and dolls. At the time, automata were objects that some thought destabilized the boundaries between human and machine, living and dead, animate and inanimate. In the eighteenth and nineteenth centuries in Europe, hundreds of mechanics constructed human and animal automata, such as Jacques de Vaucanson’s Digesting Duck, which was exhibited in the 1700s and was said to drink and defecate (Standage 2002). The automata gave rise to the term androïdé, defined in Diderot and d’Alembert’s Encyclopédie as “an automaton in human form, which by means of well-positioned springs, etc. performs certain functions which externally resemble that of man” (cited in ibid.: 20). Čapek’s term ‘robot’ later replaced androïdé in popular usage to describe a humanoid machine. The making of automatons raised questions about the boundaries between human and machine—and later between the original human and the robot copy—as well as questions about the dialectic transference of potential properties between them.3

The Turk, a famous eighteenth-century automaton, encapsulates these themes. The Turk was designed by Wolfgang von Kempelen and exhibited throughout Europe and the US in the 1700s. Ostensibly an automaton that played chess, the Turk fooled audiences into thinking it was a ‘living’ machine, as it was perceived to ‘think’. “By choosing to make his machine a chess player, a contraption apparently capable of reason, Kempelen sparked a vigorous debate about the extent to which machines could emulate or replicate human faculties,” writes Standage (2002: xiv). The immensely popular Turk was later revealed to be a fraud: its abilities came from a man positioned inside, controlling its actions.

The making of automata raised questions about what was human or machine, living or dead, animate or inanimate—the very questions that, I argue, technological animism infuses into our ontological milieu. Automata produced ‘uncanny’ effects in audiences: many early automata were anthropomorphic mimetic objects that evoked “that species of the frightening that goes back to what was once well known and had long been familiar” (Freud [1919] 2003: 124). As Gaby Wood (2002: xiv) explains: “[T]here was anxiety in the present situation—an anxiety that all androids, from the earliest moving doll to the most sophisticated robots, conjure up.” Wood sees this as a perfect example of the uncanny as “the feeling that arises when there is an ‘intellectual uncertainty’ about the borderline between the lifeless and the living” (ibid.).

So far I have highlighted the potentially uncanny nature of robots and the fictional representations of robots as harbingers of human destruction, which directly parallel Freud’s ([1919] 2003) ideas about the animistic doppelgänger, or ‘double’, and its connection to the fear of death. According to Freud, fear is triggered when persons carry a resemblance to other persons, or when people worry that the psychic contents of their minds are known to another through mental transmission (e.g., telepathy). Freud describes “[t]he invention of such doubling” initially as “a defence against annihilation,” an expression of what he sees as the “primordial narcissism that dominates the mental life of both the child and the primitive man” (ibid.: 142). However, the appearance of animism in advanced technological settings directly challenges the paternalistic evolutionism of Freud’s theory. Freud describes a psychosocial state in which one’s thoughts, fantasies, and feelings become ‘doubled’ in the other, and he puts ‘primitive’ thought in a category with the thinking of children. “The double,” Freud writes, “becomes an object of terror” when the “ego has not yet clearly set itself off against the world outside and from others” (ibid.: 143). Yet the fear of the double as a harbinger of annihilation and object of terror can also emerge in technological animism and its humanoid robots. This is strong evidence for uncoupling the theory of animism from evolutionist models of thought, and I argue that technological animism is a pervasive part of how we understand new technologies.

Ulturgasheva’s fascinating description of djuluchen (a spirit that travels ahead) in this issue provides a useful contrast to Freud’s fearful double. Among the Eveny, a people of northeastern Siberia, the djuluchen is a double of another kind, one that moves ahead of the person and is perceived not as an annihilating other but as an ongoing aspect of one’s personhood. In this sense, the Eveny never catch up with their double, as it is always several steps ahead.

The uncanny, then, has been a powerful force in imagining and responding to humanoid robots in fiction as in actual technological development, and it affects both how and when robots are treated as persons. The blissful fantasy of the robot coming to life might also become its beholder’s worst nightmare—the double emerges as fact and fiction begin to meld.

Japanese Robots: In the Shadow of Atom Boy

Mori’s description of the ‘uncanny valley’ and the anxiogenic properties of so many robot narratives would seem to define humanoid machines in terms of fear and discomfort. Yet the vision of humanoid robots that I found in Japanese laboratories was very different and may relate to a contrasting ‘cosmogenesis’ in the story of Atom Boy (Jensen and Blok 2013).

Like Čapek’s R.U.R. for Euro-American roboticists, the character Tetsuwan Atomu, translated into English as Astro Boy or Atom Boy, is the point of reference for their Japanese counterparts. Created by Osamu Tezuka, the manga (comic book) series Tetsuwan Atomu was published from 1952 to 1968 and continues to enjoy cult status in Japan today. It is perhaps surprising that Atom Boy became such a uniquely Japanese portrayal of an intelligent being created from the power of the atom at a time when Japan was still reeling from the nuclear strikes that devastated the people and cities of Hiroshima and Nagasaki. This lent particular salience to the atom as both a scientific and military object in 1950s Japan. In contrast with the robots who foment violent revolution in R.U.R., Atom Boy is a benevolent figure, with a childlike relationship to his human guardians.

The power that manga and anime (movie and television animation) such as Atom Boy have had on the Japanese imagination became apparent in my interviews with Japanese roboticists. When I asked a famous Japanese roboticist about his motivations for engaging in his research, he cited Atom Boy and other cartoons as having had an early influence: “When I was a small kid, maybe 10 years old, and after the end of World War II, Japan … was totally destroyed. However, many people struggled to recover … And years later economic prosperity started … TV programs began broadcasting … At that time, animation was played only in movie theaters. I believe that the first animation in the world was Astro Boy … especially made before TV broadcasting. So I watched the TV, and also there are a lot of other types of robot animations all over Japan. So that was so exciting. I was totally imprinted by that kind of movie … cartoons … animations” (pers. comm., June 2003). Atom Boy, or Astro Boy, was not in fact the first animation ever made, but it clearly had a strong impact on this professor. In his narrative, an interest in fictional robots eventually gave way to actual research on humanoid robots. His story suggests that his technological activity was an unfolding of his inner desires—a sense that his desire and fantasy could be realized in the form of robots. His childhood fantasies acted as the backdrop to his adult fantasies and informed the direction of his work as an engineer. This echoes the idea of the djuluchen described by Ulturgasheva whereby Eveny young people project themselves into the future through their ‘forerunners’.

The professor’s story reveals the context for a different vision of robots in Japanese society. In this view, humanoid technological creations belonged to a positive futuristic vision that was shaped by post-war reconstruction and an optimistic unfolding of the future in the present. Certainly, Japanese robotic scientists expressed very different sentiments about the public reception of robots from those of the European and American scientists I interviewed. The Japanese scientists were confident that the public would accept humanoid robots not as terrifying creations but as potential social agents that would take over the roles of live humans. Japan is already ahead of the game in robotics in many respects, with the highest number of robots per worker of any country in the world. As its elderly population increases and its working-age population shrinks, the Japanese nation invests in robotics as a means to national self-sufficiency (Robertson 2010; Sone 2008). Robots may also benefit from ideas of animation that appear in Japanese Buddhism and Shintoism. In those beliefs, it is possible for non-humans to be animated and possess a soul without threatening the personhood of humans, thus providing a cultural model for the development of a positive technological animism, as described above.

In spite of these positive cultural attitudes toward robots, Japanese roboticists are aware of the potential for their creations to become uncanny. Take, for instance, the humanoid robot Repliee, developed by Osaka University and manufactured by Kokoro Company, whose appearance is that of an attractive Japanese woman in her twenties. The researchers claim that Repliee can interact naturally with people, but on viewing footage of her, one can quickly spot errors in her behavior. She would pass for a human only briefly as her bodily movements are jerky and the whirring sounds of the mechanical motors that make up her system are audible.

In his article on robotics, Mori (1970) suggested that as robots’ behaviors grew in sophistication, their increasingly human-like appearance would not be a problem. In fact, Japanese scientists have incorporated his concept of the ‘uncanny valley’ into their research as a design philosophy. What the Japanese and MIT laboratory (as well as wider social) imaginaries suggest is that the inherent problem of contemporary robotics rests with the making of robots in human form, which is not just a vastly complex technical problem, but a cultural one.

Anthropomorphism, Liminality, and Mimesis

Let us now consider anthropomorphism in relation to technological animism, since the robots I study have a humanoid form. Anthropomorphism is a polyvalent concept that is important to our understanding of animism and also has broader applications, for instance, in the field of animal behavioral studies (de Waal 1996). Although robots are made in labs by experts in the highly specialized fields of electrical engineering, mechanical engineering, and computer science, they are not merely technological objects but cultural ones, with meanings that their makers do not exclusively control. Despite the propensity of humans to attribute human-like qualities to non-humans, anthropomorphism is still arguably a problematic concept in anthropology (Haraway 1991, 2006) and other disciplines (de Waal 1996), because it locates humans as the main agent in relations with materialities and non-humans. Moreover, anthropologically, seeing animal life-worlds from a human perspective can confuse and elide the meanings that underpin these different existences (ibid.). Haraway (1991, 2006) challenges this perception fiercely in her studies of simians, cyborgs, and dogs. Yet, in my view, the emphasis on hybridity (Latour 1993, 2005) and relational nature-culture mixtures (Haraway 1991) does not adequately explain the multiple ways that anthropomorphism operates in everyday Euro-American interactions with non-human animals, machines, and things (Vidal 2007). Moreover, I suggest that the emphasis on hybridity and relationalities between persons and things diminishes human subjectivity in these processes. The human spectator plays a crucial role in the ways that anthropomorphic entities, such as robots, are configured. While humans may interact with things like robots that trigger thoughts, feelings, and behaviors, their interactions are mediated through human socialities (Gell 1998; Guthrie 1995). Much like the ‘art of capture’ that Swancutt (this issue) describes in relation to spiders and souls, conflicts of description arise, indicating that the same phenomena can have radically different interpretations. It is useful, then, to distinguish analytically between what is actually happening and what people think and say is happening: this explains how different persons can draw on radically different ontological frameworks and interpret the same event in different ways (see Swancutt, this issue). This brings us to the efficacy of such acts, which needs to be considered in each specific context. Anthropomorphism is not merely a frame in which to understand human interactions with technologies that have human qualities. It is a means by which to rethink the importance of human sociality in forging interactions with non-humans—especially those interactions that give rise to technological animism.

Anthropomorphism can refer to various processes of attributing human characteristics to non-human animals, things, and, of course, human-like robots. As a concept, anthropomorphism has been discussed widely in anthropology in relation to animals (Miles 1997; Moynihan 1997; Silverman 1997). There is also literature on the cognitive-perceptual bases for ascribing anthropomorphic qualities to other beings. For Guthrie (1995, 2007), anthropomorphism is a cognitive-perceptual process whereby humans ‘guess’ about the states of others or attribute a theory of mind (ToM) to others, including other humans (see also Baron-Cohen 1995). Is even the scientific ToM an outcome of technological animism? While it is a result of scientific practice, I suggest that the ToM is a kind of animism, wherein one can telepathically understand the intentions of others through reading their minds, as if persons were theatrical scripts.

Cognitive-perceptual reactions to robots are somewhat akin to the Euro-American reactions to dolls, puppets, and automata described above (Freud [1919] 2003: 141–142; Wood 2002). Such items are felt to be uncanny because they have a similar appearance to humans but a different ‘materiality’. Robots are not only uncanny; they are also liminal, as they are existentially ‘betwixt and between’ humans and machines. As George Bernard Shaw (1972: vi) wrote about theatrical puppets: “What really affects us in the theatre is not the muscular activities of the performers, but the feelings they awaken in us by their aspect; for the imagination of the spectator plays a far greater part there than the exertions of the actors.” This is not unlike the reaction that people may have to robots through ‘technological imagination’. This ‘everyday’ theatrical dimension of robotics is part and parcel of the roboticists’ genealogies (Richardson 2011). And as we saw from the ethnography presented above, roboticists deliberately exploit human cognitive-perceptual intuitions—which form part of technological animism—in the way that they produce robotic creatures through a mimesis of the human form, but also as a mimesis of fictional automatons.

In the process of mimesis, as Michael Taussig (1993) shows, the power of the original (in his case, colonialists in Panama) is transferred to and captured by the copy (ritual figurines made of those colonialists by indigenous Cuna). Here, features of the original are detached and inform the copy, so that in acquiring properties of the original, the copy is empowered by its relation to it. Since the original in the case of robots is a fictional character, the fictional setting is a constituent part of the original as well. Given this, the human-like robot is ‘enhanced’ by virtue of its being a copy that bears within it the power of the original. And it is this co-opted power of the original that produces technological animism. Since these creatures existed as fictions before, albeit in different forms, robot machines are not merely technical entities devoid of cultural properties. They are, in effect, cultural beings.

The robot was never imagined as a pure machine or a pure human. In R.U.R., robots were not made of machine parts; rather, they were made of human parts by machines. But can anthropological theorizing in the field of animism really help us to understand what kind of entity the robot is? Robot systems are designed to be situated and embodied quite deliberately, and the robot is designed to judge and act in its environment from its own perspective. While humans and non-humans come into relation with one another as assemblages in a network (Latour 2005), not all entities have equivalent agency. There is a colonial aspect to actor-network-theory (ANT) that entails speaking on behalf of others, such that scallops, for example, can be understood only when interpreted by the mind of the analyst and presented in human speech and language forms (Callon 1986). In my view, some aspects of Latour’s (2005) actor-network-theory echo classical ideas of animism, except that in the animistic imagination, all entities can have subjectivity and agency. In ANT, however, no one has subjectivity of any greater significance than that which is granted to another entity, as one merely analytically describes how multiple agents come into contact and intersect with others (Callon 1986). The robot, then, is not an entity that can be neatly classified. Rather, as Corsín-Jiménez and Willerslev (2007) suggest, we need to reflect on the concepts we are using. The robot is human and non-human, machine and non-machine, real and nonreal. Just as they are never pure machines, humanoid robots can never be fully human. Taussig (1993: 11) makes this point about the reproduction of the original, showing that, in the copy, a part is distorted or wholly left out. Humanoid robots are, then, meant to be odd approximations of the human bodies they are supposed to resemble. Robots have faces, but they may have a nose or mouth missing. Or they may have a full humanoid shape but lack hands or feet.4 Yet the absence of physical parts is only part of the story of their limitations. It does not matter if robots carry out the acts they have been created to perform, such as ‘socially’ interacting with a person or navigating along a corridor. It is enough that people can imagine the robots performing the acts for which they were produced. Having watched many robot demonstrations, I witnessed this on several occasions. Let me end with one illustrative example from a demonstration held at MIT’s Media Lab that I attended, during which a robot was presented to an excited audience of around 20 people. After the robot was introduced, we waited for it to carry out the actions that had been described by the robotic scientist moments before. The robot could not perform those actions, yet only two people in the audience pointed this out (I was one of them). The other spectators were in awe of the robot despite the fact that it could not do what it was supposed to do. The spectators were filling in the empty spaces of the robot’s performance with technological animism.

Conclusion

In this article, I suggest that the concept of animism has analytical value even in the highly scientific realm of robotics, where the development of humanoid machines opens up questions of what it means to be human or a non-human ‘animate’ machine. Moreover, the evidence of technological practices that are, to all intents and purposes, ‘animistic’ invites us to reframe the discussion of animism as a form of human consciousness that is transcultural and not unique to indigenous cosmologies. If robots can be humanoid and animate—if they offer the possibility of technological animism—what does this mean for animism and its associations with a mythical ‘nature’? Humanoid robots invite us to imagine (and, indeed, seem to embody) a form of ‘non-human’ personhood that is neither ‘natural’ nor religious in origin. If robots all too easily transport us to the ‘uncanny valley’, it is because of their animistic potential. This suggests that the term ‘animism’ should not be limited to ‘natural’ phenomena, but instead understood as a broader concept underlying the cultural construction of agency and personhood.5 Technological animism is strong evidence against the enduring association between animism and the social evolutionist idea of the ‘primitive’.

Fictional models of robots, along with religious and cultural ideas, merge with laboratory practices in people’s understandings of humanoid robots. Fictional tales become places where we can express fears in particular ways; they are often places of horror, destruction, and annihilation. Fictions serve a purpose in allowing an outlet for the unconscious mind without letting it become manifest in the lived (and fearful) realities in which the double appears. I have not tried to diminish the cultural differences between robotic practices in Europe, North America, and Japan, but have attempted to show that the theme of robotic fiction encompasses them all. In fiction, as Freud explains, there is experimentation with boundaries, ontological orders, and cosmological meanings. The analogy between fictional representations and robots is not a superficial one, since the work of roboticists is encased in fiction—even as fiction is merged with scientific creation. Perhaps, then, the most uncanny element of the popular fear surrounding a machine-robot takeover is its own strange potential to become a self-fulfilling prophecy, with the real agents of change (humans) lost in their increasingly machine-moderated world of technological animism.

Acknowledgments

I would like to thank Mireille Mazard and Katherine Swancutt for their ongoing and generous support during the development of this special issue. Their drive, enthusiasm, and passion were a guiding force for me in preparing this article. Thanks are extended as well to the publisher’s editorial staff.

Notes
1. See http://asimo.honda.com/asimo-history/ (accessed 9 September 2014).

2. Mori uses a zombie as the index for the greatest fear-producing robot design because it transgresses the boundaries between living and dead, while being neither one nor the other.

3. This dialectic transference between machines and humans resonates with the notion of ‘hyper-reflexivity’ as explained by Katherine Swancutt and Mireille Mazard in the introduction to this issue.

4. Elsewhere, I explain further how robot scientists narrowly focus on those parts of the robot’s body that human spectators expect to see when robots perform a humanoid function (see Richardson 2015). Because humans spend a great deal of time looking at eyes during social interactions, roboticists who make ‘social’ robots ensure that there are features on the robot face that act as eyes, even if those features have no practical function—that is, if they do not work like human or animal vision systems.

5. I am not the first to suggest that animism can occur in scientific settings. Allison (2006) refers to ‘techno-animism’, and Ingold (2006: 9) has extended the discussion of animism to astronomers fascinated with finding animate life forms elsewhere in the universe.

References

  • AllisonAnne. 2006. Millennial Monsters: Japanese Toys and the Global Imagination. Berkeley: University of California Press.

  • AryanandaLijin. 2007. “A Few Days of a Robot’s Life in the Human’s World: Toward Incremental Individual Recognition.” PhD diss.Massachusetts Institute of Technology.

    • Search Google Scholar
    • Export Citation
  • Baron-CohenSimon. 1995. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.

  • BerthouzeLuc and Giorgio Metta. 2005. “Epigenetic Robotics: Modelling Cognitive Development in Robotic Systems.” Cognitive Systems Research 6 no. 3: 189192.

    • Search Google Scholar
    • Export Citation
  • Bird-DavidNurit. 1999. “‘Animism’ Revisited: Personhood, Environment, and Relational Epistemology.” Current Anthropology 40 no. S1: S67S91. Special issue titled “Culture: A Second Chance?

    • Search Google Scholar
    • Export Citation
  • BreazealCynthia. 2002. Designing Sociable Robots. Cambridge, MA: MIT Press.

  • BreazealCynthia and Lijin Aryananda. 2002. “Recognition of Affective Communicative Intent in Robot-Directed Speech.” Autonomous Robots 12: 83104.

    • Search Google Scholar
    • Export Citation
  • BreazealCynthia and Brian Scassellati. 2000. “Infant-Like Social Interactions between a Robot and a Human Caregiver.” Adaptive Behavior 8 no. 1: 4974.

    • Search Google Scholar
    • Export Citation
  • BreazealCynthia and Juan Velásquez. 1998. “Toward Teaching a Robot ‘Infant’ Using Emotive Communication Acts.” MIT Artificial Intelligence Laboratory Publications. http://www.ai.mit.edu/projects/ntt/projects/NTT9904–01/documents/Breazeal-Velasquez-SAB98.pdf.

    • Search Google Scholar
    • Export Citation
  • CallonMichel. 1986. “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay.” Pp. 196223 in Power Action and Belief: A New Sociology of Knowledge? ed. John Law. London: Routledge & Kegan Paul.

    • Search Google Scholar
    • Export Citation
  • CameronJames dir. 1984. The Terminator. Film. Distributed by Orion Pictures.

  • ČapekKarel. [1921] 2006. R.U.R. (Rossum’s Universal Robots). Trans. Claudia Novack; intro. Ivan Klíma. New York: Penguin Books. First published in Prague by Aventinum.

    • Search Google Scholar
    • Export Citation
  • CoeckelberghMark. 2013. “Pervasion of What? Techno-human Ecologies and Their Ubiquitous Spirits.” AI & Society 28 no. 1: 5563.

    • Search Google Scholar
    • Export Citation
  • Corsín-JíménezAlberto and Rane Willerslev. 2007. “‘An Anthropological Concept of the Concept’: Reversibility among the Siberian Yukaghirs.” Journal of the Royal Anthropological Institute 13 no. 3: 527544.

    • Search Google Scholar
    • Export Citation
  • DescolaPhilippe. 1996. In the Society of Nature: A Native Ecology in Amazonia. Trans. Nora Scott. Cambridge: Cambridge University Press.

    • Search Google Scholar
    • Export Citation
  • de WaalFrans. 1996. Good Natured: The Origins of Right and Wrong in Humans and Other Animals. Cambridge, MA: Harvard University Press.

    • Search Google Scholar
    • Export Citation
  • FreudSigmund. [1919] 2003. The Uncanny. Trans. David McLintock; intro. Hugh Haughton. London: Penguin.

  • GellAlfred. 1998. Art and Agency: An Anthropological Theory. Oxford: Clarendon Press.

  • GuthrieStewart E. 1995. Faces in the Clouds: A New Theory of Religion. Oxford: Oxford University Press.

  • GuthrieStewart E. 2007. “Anthropology and Anthropomorphism in Religion.” Pp. 3762 in Religion Anthropology and Cognitive Science ed. Harvey Whitehouse and James Laidlaw. Durham, NC: Carolina Academic Press.

    • Search Google Scholar
    • Export Citation
  • HarawayDonna J. 1991. Simians Cyborgs and Women: The Reinvention of Nature. London: Free Association Books.

    • Export Citation
  • HarawayDonna J. 2006. The Companion Species Manifesto: Dogs People and Significant Otherness. Chicago: Prickly Paradigm Press.

  • IngoldTim. 2006. “Rethinking the Animate, Re-animating Thought.” Ethnos 71 no. 1: 920.

  • JensenCasper B. and Anders Blok. 2013. “Techno-animism in Japan: Shinto Cosmograms, Actor-Network Theory, and the Enabling Powers of Non-human Agencies.” Theory Culture & Society 30 no. 2: 84115.

    • Search Google Scholar
    • Export Citation
  • JonzeSpike dir. 2013. Her. Annapurna Pictures. Distributed by Warner Bros. Pictures.

  • KubrickStanley dir. 1968. 2001: A Space Odyssey. Film. Distributed by Metro-Goldwyn-Mayer.

  • LarsonGlen A. prod. 1978. Battlestar Galactica. Television series. Distributed by MCA/Universal.

  • LatourBruno. 1993. We Have Never Been Modern. Trans. Catherine Porter. Cambridge, MA: Harvard University Press.

  • LatourBruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

  • LucasGeorge dir. 1977. Star Wars Episode IV: A New Hope. Film. Distributed by Universal Studios.

  • Miles, H. Lyn. 1997. “Anthropomorphism, Apes, and Language.” Pp. 383–404 in Mitchell et al. 1997.

  • Mitchell, Robert M., Nicholas S. Thompson, and H. Lyn Miles, eds. 1997. Anthropomorphism, Anecdotes, and Animals. Albany: SUNY Press.

  • Moore, Ronald D., and David Eick, prods. 2004–2009. Battlestar Galactica. Television series. David Eick Productions.

  • Mori, Masahiro. 1970. “Bukimi no tani” [The Uncanny Valley]. Energy 7, no. 4: 33–35.

  • Mostow, Jonathan, dir. 2009. Surrogates. Film. Distributed by Walt Disney Studios Motion Pictures.

  • Moynihan, Martin H. 1997. “Self-Awareness, with Specific References to Coleoid Cephalopods.” Pp. 213–219 in Mitchell et al. 1997.

  • Richardson, Kathleen. 2011. “Are Friends Electric?” Times Higher Education, 9 June. https://www.timeshighereducation.com/features/are-friends-electric/416434.article.

  • Richardson, Kathleen. 2015. An Anthropology of Robots and AI: Annihilation Anxiety and Machines. New York: Routledge.

  • Robertson, Jennifer. 2007. “Robo Sapiens Japanicus: Humanoid Robots and the Posthuman Family.” Critical Asian Studies 39, no. 3: 369–398.

  • Robertson, Jennifer. 2010. “Gendering Humanoid Robots: Robo-Sexism in Japan.” Body & Society 16, no. 2: 1–36.

  • Shaw, George Bernard. 1972. “Note on Puppets.” P. vi in Max von Boehn, Puppets and Automata. New York: Dover Publications.

  • Silverman, Paul S. 1997. “A Pragmatic Approach to the Inference of Animal Mind.” Pp. 170–188 in Mitchell et al. 1997.

  • Sone, Yuji. 2008. “Realism of the Unreal: The Japanese Robot and the Performance of Representation.” Visual Communication 7, no. 3: 345–362.

  • Spielberg, Steven, dir. 2001. AI: Artificial Intelligence. Film. Distributed by Warner Bros. Pictures and DreamWorks Pictures.

  • Standage, Tom. 2002. The Mechanical Turk: The True Story of the Chess-Playing Machine That Fooled the World. London: Penguin.

  • Taussig, Michael. 1993. Mimesis and Alterity: A Particular History of the Senses. New York: Routledge.

  • Tylor, Edward B. [1871] 1920. Primitive Culture: Researches into the Development of Mythology, Philosophy, Religion, Language, Art, and Custom. Vols. 1 and 2. London: John Murray.

  • Vidal, Denis. 2007. “Anthropomorphism or Sub-anthropomorphism? An Anthropological Approach to Gods and Robots.” Journal of the Royal Anthropological Institute 13, no. 4: 917–933.

  • Viveiros de Castro, Eduardo. 1998. “Cosmological Deixis and Amerindian Perspectivism.” Journal of the Royal Anthropological Institute 4, no. 3: 469–488.

  • Viveiros de Castro, Eduardo. 2004. “Exchanging Perspectives: The Transformation of Objects into Subjects in Amerindian Ontologies.” Common Knowledge 10, no. 3: 463–484.

  • Von Boehn, Max. 1972. Puppets and Automata. New York: Dover Publications.

  • Wachowski, Lana, and Andrew P. Wachowski, dirs. 1999. The Matrix. Film. Distributed by Warner Bros. Pictures.

  • Wood, Gaby. 2002. Living Dolls: A Magical History of the Quest for Mechanical Life. London: Faber and Faber.

  • Zlatev, Jordan, and Christian Balkenius. 2001. “Introduction: Why ‘Epigenetic Robotics’?” Pp. 1–4 in Proceedings of the First International Workshop on Epigenetic Robotics (Cognitive Studies series, Vol. 85). Lund: Lund University. http://www.lucs.lu.se/LUCS/085/Zlatev.Balkenius.pdf.



Contributor Notes

Kathleen Richardson is a Senior Research Fellow in the Ethics of Robotics at the Centre for Computing and Social Responsibility (CCSR), De Montfort University, Leicester. Her research examines the development of robots as companions, therapists, friends, and sexual partners. She is also part of the Europe-wide DREAM project (Development of Robot-Enhanced Therapy for Children with Autism Spectrum Disorders), which is developing therapeutic robots to help children with autism in their social learning. She is the author of An Anthropology of Robots and AI: Annihilation Anxiety and Machines (2015) and is currently working on another manuscript.

