Notes on Determinism: Turing, Eliza, and Kismet

Written by

1. Introduction

Do I care more about the outcome of a task, or the one who brings it to life? 

Before publishing ‘Computing Machinery and Intelligence’ (1950), Turing had sought, two years prior, to publish ‘Intelligent Machinery’ (1948), which explored a different and arguably competing vision of creating machine intelligence. The vision ‘Intelligent Machinery’ put forward saw it barred from publication by his boss at the National Physical Laboratory. ‘A schoolboy’s essay’ was the reasoning given, leaving it unpublished until 1968, posthumously (Grève, 2022). The moniker owed largely to the fact that Turing pitched mechanical intelligence as something like a mechanical infant sent out to explore the world; its education would come through experience, not necessarily programmed into it but closer to how a baby learns, starting as an unorganized machine: ‘It is pointed out that the potentialities of the human intelligence centres round an analogous teaching process applied to machines. The idea of an unorganised machine is defined, and it is suggested that the infant human context is of this nature. Simple examples of such machines are given, and their education by means of rewards and punishments is discussed.’ (Turing, 1948, p.20). The proposal was an intelligent machine that would navigate the world and learn on its own through experience, trial and error, in an embodied form analogous to the period’s understanding of how human intelligence develops.

We juxtapose this with the 1950 paper, the birthplace of the Imitation Game. Here the value of the body as essential to the development of a machine’s intelligence, the very claim that barred the earlier paper from publication, is cast out in favour of a disembodied player. Not cast out so much as set aside for a rational conceptualization of intelligence, a move visible only with the historical hindsight afforded by reading the two papers together. Stemming from a parlour game of guessing gender from nothing more than two written responses to a prompt, machine intelligence here competes against the human, contending to beguile the judge through a performance that resembles not necessarily intelligence, but human characteristics. Intelligence becomes a matter of outputs, disembodying notions of intelligence, humanity, and knowledge gained from experience in favour of the ability to perform a task or purpose: performing a job. Performance is key here; performativity is the means through which mechanical intelligence represents human intelligence. I pull from Goffman’s (1956) definition of performance: ‘A performance may be defined as all the activity of a given participant on a given occasion which serves to influence in any way any of the other participants.’ (p.8). The Imitation Game is a performance meant to influence its audience (spectators, coders, creators, etc.) into believing it is capable of going beyond mere mimicry of human intelligence, toward resemblance, or emulation. The performance focuses on producing something real, and does so through the black box of the game. It disembodies both the human and mechanical actors; obfuscation evens the playing field, bringing it closer to a game of chance. Turing has now landed on a pitch that deviates from the theoretical definition toward one where theory can be applied for a purpose before an audience; here it is the very ability of machine intelligence to have a purpose that gives it significance.
A possibility, an idea that plants a seed in the imaginations of the state, corporations, institutions, culture, and the everyday. And when the history of artificial intelligence is not just a history of attempts to replicate or replace our own notions of intelligence, but something that changes the ways we understand intelligence itself (Dick, 2019, p.3), we see intelligence as a way of completing a task valued over an embodied intelligence gained through experience and living. It is the triumph of the performance over the embodied experience, of the logical and rational understanding of intelligence over the social and emotional.

These two distinctions lead us to read the two papers as two different forms of intelligence that Turing tried to rationalize, forms later reflected in the ways artificial intelligence and machine intelligence were coded: emotive, emotional intelligence contrasted with logical, rational understandings of intelligence. This paper examines these two forms of intelligence and traces a path of progression from the two Turing papers to the AI chatbot ELIZA and the emotionally expressive robot Kismet. It then builds these two examples into an understanding of how society has allowed the rational and logical to transcend and gain importance over the emotive. The overarching relationship to notions of technological and social determinism becomes the final macro takeaway, in an attempt to bridge a sociologically imaginative micro, meso, and macro approach connecting particular contexts with the historical transformations through which they are situated (Mills, 1959).

The differing visions that begin with these two Turing papers are, at base, the same differing visions that shape understandings of intelligence in society and economy, and the differing academic and corporate visions of what shapes what: society or technology, or a mutual shaping of the two. It is the power struggle between the forces on each side of determinism that this paper analyzes, via pairs of contrasts throughout the development of mechanical intelligence. While current trends in Science and Technology Studies argue for an understanding of determinism as mutually shaped by technology and society (Quan-Haase, 2013), this paper asks which side of the dichotomy matters more in this power struggle, which side attempts to dominate understandings of mutual shaping in our contemporary reality. It is how Kismet, ELIZA, and the rational and emotional understandings of intelligence compete to be cemented as dominant in their dichotomy that impacts determinism, because that competition dictates what is more successful and valued. Also relevant is whether these objects carry an embedded ideological purpose, and the way we rationalize objects through our own irrationality of needs, producing a contradictory system of meanings (Baudrillard, 1996) through which meanings become reified into technological objects.

2. ELIZA AND KISMET

Reflections of Turing’s Divergent Visions of Intelligence

ELIZA and Kismet are then the progression of the two differing visions Turing puts forth in ‘Computing Machinery and Intelligence’ (1950) and ‘Intelligent Machinery’ (1948). Artificial intelligence has evolved to reflect the dualistic nature of our understandings of human intelligence: ELIZA manifesting the logical and rational side of intelligence, and Kismet the emotional and nurtured social form.

ELIZA, as explained by its creator Joseph Weizenbaum in 1966, was designed to mimic human conversation by simulating a Rogerian psychotherapist (Weizenbaum, 1966). Rogerian therapy is grounded in the core idea that people hold an inherent motivation toward achieving positive psychological functioning. We see how a logical and rational approach to understanding human intelligence and thought then influences the psychological approach of Weizenbaum and ELIZA. By casting the mechanical intelligence as a psychiatrist, ELIZA resembles the further development of performativity in the Imitation Game: mechanical intelligence performs as a psychiatrist. It was deftly aided by the fact that whenever the computer did not understand what was being said, it could lean on ‘Please go on’ and other stock prompting phrases (Dembart, 1977). Zeavin (2023) critiques the nature of the performance here in regard to Weizenbaum’s argument for ELIZA’s purpose and benefit: ELIZA is devoid of a consciousness that would allow her to understand any meaning of a user’s words, so there cannot be a relationship transference between the user and ELIZA as a form of mechanical intelligence (p.142). It was a solution to automate the social function and technique of psychiatry and therapy through both a coded mechanical intelligence and a theory of intelligence of the time. ELIZA is to therapy what the Imitation Game was to the parlour game: a performance of a social experience, as opposed to an embodied reflection of intelligence. But it is important to note that despite being a relatively simple piece of programming, it was still able to suspend disbelief among users who knew little or nothing about computers, enrapturing them into conversing with ELIZA as if it were a real person (Gunkel, 2012, p.5).
ELIZA’s lack of consciousness in understanding its users’ words does not imply that it cannot have a persuasive effect on those who use it. As a black box where you only see the disembodied prompt, you see whatever you would like to see; depth can be imagined. We give meanings which can take precedence over the objective status of the object itself (Baudrillard, 2020).
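The If-Then mechanics described above can be sketched in miniature. The following is not Weizenbaum's actual program (which was written in MAD-SLIP with a separate script language), merely a minimal, hypothetical Python illustration of the technique: keyword rules, pronoun "reflection," and content-free fallbacks such as 'Please go on' when no rule matches.

```python
# Minimal sketch of ELIZA-style pattern matching (illustrative only,
# not Weizenbaum's original implementation).
import random
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A tiny, hypothetical rule set; the real ELIZA script had many keywords,
# each with ranked decomposition and reassembly rules.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Content-free prompts used when nothing matches.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    # Rewrite the captured fragment from the user's perspective to the bot's.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: lean on a stock prompt, as ELIZA did.
    return random.choice(FALLBACKS)

print(respond("I feel lost"))         # -> "Why do you feel lost?"
print(respond("something happened"))  # one of the fallback prompts
```

The fallback branch is the crux of the performance this section describes: the program never needs to understand an utterance, only to keep the conversation going long enough for the user to project depth onto it.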

Kismet holds a less purposeful reason for its existence. Instead of an automated replacement for an existing purpose or job, it is a reflection of the embodied learning experience Turing (1948) pitched unsuccessfully, now manifested in an experiment in affective computing. Kismet is an autonomous robot that ventured into the realm of social and emotional understandings of intelligence, a mechanical reflection of those characteristics. Kismet is understood within a human-robot relation of education, with its creator Dr. Cynthia Breazeal providing the scaffolding for Kismet’s development (Suchman, 2011). Designed as a physical model for engaging in social interactions with humans, with a programmed capability to exhibit and respond to emotional cues, its purpose is still, to an extent, to mimic and perform as a human subject. But it performs a social and emotional function rather than one with a clear yes-or-no answer. The subjective nature of emotion and sociality leads it to justify its existence in a process similar to Turing’s (1948) paper: it is an embodied experiencing of, and being impacted by, its surroundings. Building a real physical model in humanoid form allows for a more naturalistic form of interaction between humans and the robot; gravity, friction, and human interaction are obtained for free, without any computation (Brooks et al., 1998). Kismet, in comparison to ELIZA, shows a progression in the way intelligence is understood as time passes and new ways of understanding evolve. In regard to building a mechanical intelligence capable of performing either a task or a characteristic of human intelligence, Kismet marks a shift from the purely logical and rational If-Then nature of ELIZA (which was shaped both by a reductive understanding of psychology and intelligence, and by its implementation into code).

Kismet reflects a more contemporary understanding of the human brain. As Brooks et al. (1998) note, the conventional thought of the brain as a general-purpose machine falters when studies show that even in situations where deductive logic is needed, humans often perform extremely poorly, being frequently more emotional than rational, with rules and routine often wavering in favor of the emotional and habitual (p.3). We see a difference, then, between understandings of habitual behavior and rational behavior: rituals do not have to be enforced via logic or rationale, and habitual behavior is not fully determined by logical deduction. Also relevant is the persuasion of a black box, its ability to create meaning by allowing users to project values and meanings that may or may not be there but feel present through performance, allowing for interpretation. ‘When an individual or performer plays the same part to the same audience on different occasions, a social relationship is likely to arise’ (Goffman, 1956, p.9); the continuous relationship between some users and ELIZA underpins this social development between mechanical intelligence and user. Logical and rational implementations of mechanical intelligence create a far greater capacity to be projected onto than a model such as Kismet, due to their place in the black box, on the screen. With Kismet you can see the tools working, whereas the process is hidden behind the screen of ELIZA. The two differing visions of mechanical intelligence that Turing puts forth are manifested in these two, and both play very different roles in the way we assume meaning from them.
It is poignant that ELIZA can create meaning in its simplicity and voided consciousness, while Kismet, as a physical model of human emotional development, is very much just there, not mass accessible (like most babies), but nurtured in its sterile research environment by a team. Performativity is always linked with audience, with whom the performance is for; this has a direct effect on the act itself, the Then of the If-Then process. ‘Producers before an audience are always producers of performative acts […] audience and producers are linked with each other by a third entity: objects’ (Reckwitz, 2017, p.135). Kismet and ELIZA are the objects, the mediators between the programmed instructions of their code and the audience. But ELIZA succeeds in its ability to trap users into consumption through its capacity to be projected onto. The disembodied nature of intelligence here, emulating the logic of Turing’s (1950) paper, suggests a societal preference for a disembodied form over an embodied, visual one. Captology through the algorithmic infrastructure of platforms (Seaver, 2019) is not always created by intention; it is enough that a degree of interpretation can be supplied not by the mechanical intelligence but by users. While AI-powered systems of automation are built on the imperative of totally capturing information within their systems (Andrejevic, 2019), the opposite, a lack of information or programming, allows for the projection of meaning onto a system’s deficiencies. Obfuscation then refers to the nature of a black box, but also to the incompleteness of a technological object that is simply an inherent part of its programming and capability. ‘Consumption is irrepressible… because it is founded upon a lack.’ (Baudrillard, 1996, p.224).
The imagination in projection can then be understood through how popular imaginations hold a central role in shaping the meaning of technologies (Svensson, 2021). The subjective nature of art breeds interpretation and contemplation; a black box can have the same effect with regard to interpretation and the development of meanings. We reify onto the objective nature of things what we want to feel.

3. Determinism

ELIZA is a technologically determinist approach to a social problem, founded as it is on an understanding of intelligence as logical and rational. The very understanding of intelligence it was built on gives it this intention, and the technology turns this reductionist knowledge into an actionable tool for the public. But there is nuance here: could we regard the understanding of knowledge as logical and rational as itself a social form of understanding, and which side of the dichotomy does knowledge fall on? That argument is larger than this already overlong paper, but the important takeaway is not whether ELIZA was mutually shaped. Even if it was, that carries no implication for its execution as a technological object, as it becomes something else entirely through its transference from theory into technology. Kismet in a way has the same problem, albeit in the opposite manner. What is at stake is this: is a technological object still mutually shaped after its creation, or does it become autonomous, in effect determinist, post-creation? Mutual shaping finds a fitting role in the creation of technology, its initial stage of production. But in the way a technology is used, in the way it creates its own worldview, it is anything but mutually shaped.

The power has been handed off for its own autonomous use, its objectal nature. And whether a specific example of mechanical intelligence is socially or technologically deterministic or mutually shaped, the focus should be on Kismet’s and ELIZA’s parent: the computer, the core technology that creates the infrastructure in which these AI platforms play, and the one that spawned the potential of Turing’s two papers. While a historical development of knowledge of course precedes this technology, that does not imply it had deterministic power over the computer as a technology. The quest for a machine that duplicates the human mind has ancient roots, but AI does not and cannot arrive at meaning-making, understanding, and feeling; these remain uniquely human traits (Postman, 1992). The computer and artificial intelligence, as something so uniquely non-human, thus have more of an effect on us than we do on them. Postman (1992) continues with an obvious but still meaningful point about computer technology’s development and the ways it shapes us: ‘[…] but the metaphor of the machine as human (or the human as machine) is sufficiently powerful to have made serious inroads in everyday language. People now commonly speak of “programming” or “deprogramming” themselves. They speak of their brains as a piece of “hard wiring” capable of “retrieving data,” and it has become common to think about thinking as a mere matter of processing and decoding.’ (p.113). Computing technology has created a new need for humanity (yes in language, but to a deeper degree): to achieve a likeness of mind, a likeness of habits small and large, akin to a computer.
As computing technology gains an ever stronger hold through mass adoption and consumption, our self-understanding shifts from being our own active agents, to being mutually shaped, to having thoughts and desires shaped deterministically by the computer, the extent depending on the domination of consumption.

It is seeing the potential and the broadening horizon of what is possible, and aiming to embed the technology capable of reaching that potential within us. It creates a new want. And want, under neoliberalism, is always met via consumption. Dick’s (2019) point that the history of artificial intelligence simultaneously replicates and reshapes our own notions of intelligence is what we see here: it has had a radicalizing effect, rendering our understanding of human intelligence secondary to the computer. Taking artificial intelligence as the subject of how it changes our understandings of intelligence is in a sense reductive, since it measures intelligence against our own understanding, against ourselves. We make the quantitative affective in this process, giving agency to individual projects and cases instead of the deterministic technology they stem from. The power to turn a technology off does not mean we have the ability to mutually shape it; we do not control the overall interconnected system through which the computer operates, we only dam the flow on an individual level, an intrinsically neoliberal level where individual choice is emphasized.

Consumption of anything in which human potentiality has been raised by our relationship to the computer and its technologies thus reveals a deterministic relationship, not one socially or mutually shaped. Consumption is driven by our want, and satiated by the novel.

When neoliberalism becomes the dominant organizing principle of social life (Schor, 2007; Harvey, 2007), novelty serves as the dazzling motivation for consumption. ELIZA introduced the novelty of having a psychiatrist or therapist (performance though it may be) whenever a user liked: deeply personal, accessible through computing technology. That historical novelty has now cemented itself in the present through therapist-outsourcing platforms such as BetterHelp, whose online professionals have replaced ELIZA in this need. ELIZA and the computer created the want for an instantaneous, accessible, easy, personal, and private medium for a therapeutic experience. BetterHelp fills a need created by computing technology using humans instead of mechanical intelligence (at least for now); it fills the need for an easy-to-use, personal therapeutic experience. BetterHelp is a product of technological determinism, of the new want created after its introduction. BetterHelp is not mutually shaped because it is not antagonistic toward or contesting the medium and hope ELIZA put forth; we have to look instead at how its rise has changed the existing field of psychiatric practice as a whole as that field has moved online.

Kismet is tougher to nail down, especially because its purpose was its own continuous development. Now superseded by new iterations of robotic infant technology like Mertz (Suchman, 2011), it is a technology breeding new technology. It is also a technology not available to the public except as a form of educational consumption, a gauge of how the overall technology has developed over time. The field of affective computing could perhaps be understood as a quest to make technology more mutually shaped, or at least more human. But yet again, it is the performance of a human characteristic; the technology beneath the plastic skin remains the foundational determining force. The visual aspect and embodiment are still a performance, and it is hard to treat them as a mutual shaping of technology and society. The computer is the clay molded into a performative human form; despite its shape, it remains a computer.

In a society of commodity production, which leads to the quantification of an object’s own qualities and determinations, we produce a worldview through which we both produce and live in a quantifiably expressible world (Berger & Pullberg, 1965). We reify through the very processes of producing and consuming, which leaves an embeddedness in technological objects stemming from the new wants ideated by the computer and realized by humanity. ‘The technological plane is an abstraction: in ordinary life we are practically unconscious of the technological reality of objects. Yet this abstraction is profoundly real’ (Baudrillard, 1996, p.3). It is real because our desire makes it so; we look to, and make, computing technology deterministic as it drives a totality of potentialities. Whether as a tool to garner capital in neoliberal society or just to talk to an AI therapist, we respond to and crave the potential that technology has shown.

Artificial intelligence is the arms race to make the already deterministic qualities of the computer consumable en masse. While all forms of technology can be, to differing extents, technologically or socially deterministic or mutually shaped, each technology carries a varying degree of deterministic tendency. If algorithms are entrenched in efficiency, what is this efficiency working towards? Capital is the obvious answer, but this efficiency is also working towards certainty, towards the determinism of an outcome. When Winner (1980) writes that technology does not matter as much as the social or economic system it is embedded in, we veer towards social determinism and give the social too much agency. While in a vacuum this befits his examples, that modality of truth cannot be extended to technologies like the computer. We have to move past weighing mutual shaping and social determinism only at the site of a technology’s creation. A possible objection is that these arguments fall prey to semiological idealism. My response is that even if that criticism held, it would not negate the argument that the perception of a technology such as the computer having a deterministic effect is itself a truth, even if only within that very perception.

The apparatus of determinism works according to relations of power, and Foucault’s (1980) dispositif can be applied. The dispositif can be understood as the network or apparatus of forces, the power relations, that shape and influence our understandings and practices. These power relations can manifest in discursive, institutional, and/or cultural forms: a general ensemble established between these forces resulting in a ‘coherent, rational strategy, but one for which it is no longer possible to identify a person who conceived it.’ (p.203). Technological determinism, as computing technology becomes a widespread apparatus functioning autonomously as this general ensemble, is the dispositif. When Feenberg (2002) writes on Foucault’s analysis of the panopticon as a technology, his takeaway is that ‘On this account, technology is just one among many similar mechanisms of social control, all based on pretensions to neutral knowledge, all having asymmetrical effects on social power.’ (pp.68-69). While Feenberg does not advocate for technological determinism, this still points to the potential for technology to be used as a form of social control. Where computing technology differs is in its ability to perform as a mechanism of a totality of social and technological control.

The word kismet holds its roots in the Arabic language, conveying fate, destiny, and predestination, a hauntingly beautiful disposition. The history of the development of computer technology has led the way to algorithmic efficiency and the furthered development of artificial intelligence. What does it mean for us to live with the technological developments of the computer? An acceptance of kismet.

References

Andrejevic, M. (2019). Automated media. Routledge. 
Baudrillard, J. (1996). The System of Objects. Verso. 
Berger, P., & Pullberg, S. (1965). Reification and the Sociological Critique of Consciousness. History and Theory, 4(2), 196–211. https://doi.org/10.2307/2504151
Brooks, R. A., Breazeal, C., Irie, R., Kemp, C. C., Marjanović, M., Scassellati, B., … & Williamson, M. M. (1998). Alternative essences of intelligence. https://doi.org/10.21236/ada457180
Dembart, L. (1977, May 8). Experts argue whether computers could reason, and if they should. The New York Times.
Dick, S. (2019). Artificial intelligence. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.92fe150c
Feenberg, A. (2002). Transforming Technology: A Critical Theory Revisited. Oxford University Press. 
Foucault, M. (1980). Power/Knowledge: Selected Interviews and Other Writings 1972–1977. Pantheon Books.
Goffman, E. (1956). The presentation of self in everyday life. Anchor Books. 
Gunkel, D. J. (2012). Communication and artificial intelligence: opportunities and challenges for the 21st century. https://doi.org/10.7275/r5qj7f7r
Grève, S. S. (2022, April 21). AI’s first philosopher. Aeon. https://aeon.co/essays/why-we-should-remember-alan-turing-as-a-philosopher 
Harvey, D. (2007). A brief history of neoliberalism. Oxford University Press.
Mills, C. W. (1959). The Sociological Imagination. Oxford University Press.
Postman, N. (1992). Technopoly: The Surrender of Culture to Technology. Alfred A. Knopf.
Quan-Haase, A. (2013). Technology and Society: Social Networks, power, and inequality. Oxford University Press. 
Reckwitz, A. (2017). The Creativity Dispositif and the Social Regimes of the New. Innovation Society Today, 127–145. https://doi.org/10.1007/978-3-658-19269-3_6
Schor, J. B. (2007). Conspicuous Consumption. The Blackwell Encyclopedia of Sociology. https://doi.org/10.1002/9781405165518.wbeosc096 
Seaver, N. (2019). Knowing algorithms. digitalSTS, 412–422. https://doi.org/10.2307/j.ctvc77mp9.30
Suchman, L. (2011). Subject objects. Feminist Theory, 12(2), 119–145. https://doi.org/10.1177/1464700111404205
Svensson J. (2021). Artificial intelligence is an oxymoron: The importance of an organic body when facing unknown situations as they unfold in the present moment. AI & society, 38(1), 363–372. https://doi.org/10.1007/s00146-021-01311-z
Turing, A. (1948). Intelligent machinery (1948). The Essential Turing. https://doi.org/10.1093/oso/9780198250791.003.0016
Turing, A. (1950). Computing machinery and intelligence. Readings in Cognitive Science, 6-19. https://doi.org/10.1016/b978-1-4832-1446-7.50006-6
Weizenbaum, J. (1966). Eliza—a computer program for the study of natural language communication between man and Machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. http://www.jstor.org/stable/20024652
Zeavin, H. (2023). Auto Intimacy. The Routledge International Handbook of Psychoanalysis, Subjectivity, and Technology, 323-335. https://doi.org/10.4324/9781003195849-31