The Human Test

A musing on the ambiguity of Alan Turing

Cher Scarlett
21 min read · Mar 22


[Image: a painting of flowers and the brain]

Alan Turing intuited artificial neural networks. Nary an implementation detail is missing. Well, maybe a few. The sophistication of simplicity is deceptive. Essentially my entire life, I’ve been ushered into brevity. I frustrate even the most patient loved one with my contextual framing in an answer to a ‘yes’ or ‘no’ question. My bipolar disorder has imparted to me this gift of tangent, born of some unique quirks in my actual neural network that leave me constantly drowning in an ocean of information that is as fluid and chaotic as it is constrained and measured. Everything is connected and I can see how. My imagination likes to fill in the gaps. Ah, so this is why people called me a know-it-all. 🤔

How does one survive in a world filled with ever-nesting patterns within patterns? Other than the hypergraphic all-nighters, when I was a kid I called it living in the “in-between”, making myself fit where there was space. As I grew up, I thought of my vantage as a puzzle to solve, compelled to learn in the contrasting space between imagination and reality. I consider the significance of the ways things are different before I decide their true nature. When I’m writing or coding, it unravels from end to beginning; I’m a reverse engineer. If you have any familiarity with how machines learn, the process is effectively the same: you feed a machine a large set of information and it finds all of the patterns and builds a complex model to simulate it. As the field of artificial intelligence progresses, so does my appreciation for my unwieldy brain faculties. Artificial neural networks, computer systems that can learn and think, are very similar. I am not a thinking machine, though; I am human.
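If that summary feels abstract, here is a minimal sketch of the loop in Python. Everything in it (the hidden rule, the toy data, the learning rate) is a hypothetical of mine, not how any production system is built; the point is only the shape of it: feed in examples, measure the error, nudge the model until the pattern emerges.

```python
# A toy of the loop described above: feed a machine examples, let it
# find the pattern. The data and learning rate are hypothetical choices.

examples = [(x, 2 * x + 1) for x in range(10)]  # hidden pattern: y = 2x + 1

w, b = 0.0, 0.0   # the model's initial state: it knows nothing
lr = 0.01         # how hard each mistake nudges the model

for _ in range(2000):          # repeated exposure to the information
    for x, y in examples:
        guess = w * x + b      # the model's current simulation
        error = guess - y      # how wrong the simulation is
        w -= lr * error * x    # nudge the pattern toward the data
        b -= lr * error

print(f"learned: y = {w:.2f}x + {b:.2f}")  # converges near y = 2.00x + 1.00
```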

The difference is significant. Which leads me to something I tweeted the other day.

The validity of this statement has been challenged. Let me explain.

Alan Turing’s apple

Alan Turing caused his death by cyanide. Cyanide caused the death of Alan Turing. This is a narrative device called chiasmus. It is a clever use of reflection, like a mirror, which reverses the order of the first phrase into the second one. Not all chiasmus is obvious — there’s conceptual chiasmus, metaphorical chiasmus, visual chiasmus — our double-helical DNA structure is a chiasmus. Turing was a genius obsessed with the ambiguity in nature’s reflections of itself. Turing died by suicide after soaking an apple in home-made cyanide. His mother believed it was accidental inhalation, and the apple was a poetic coincidence. The authorities believed he ingested the apple, as evidenced by the bite taken out of it.

Alan Turing had been chemically castrated as punishment for being gay. He had spent most of his life hiding his sexuality and once upon a time proposed marriage to Joan Clarke, who worked on Enigma with Turing as a code-breaker. He made several decisions to defy social laws to be himself, and as a result, was persecuted by society until he no longer recognized his own reflection — physically and introspectively. Turing loved apples and ate one every night before bed. It was the perfect vehicle for his life’s chiasma: the point which reverses the process of life itself, death. The design was inspired by a fairy tale, Disney’s Snow White and the Seven Dwarfs.

Turing’s friends say that after they saw it at the cinema upon its release in 1938, he was enamored with the film. He was constantly talking about two key elements of the film’s ambiguous rhetoric: the Wicked Queen and the apple. Using a “magic mirror”, the Wicked Queen is transformed into the Wicked Witch, and the apple, which is originally a half-green and half-red cultivar called a “McIntosh”, is poisoned, turning it a glowing green until an incantation conceals its true nature with a shiny red skin. As much as it pains me not to tangent into Apple computers, the Macintosh, the Snow White design language, the logo, and Steve Jobs, I’m going to stick to Turing.

Alan Garner, a novelist who used to go running with Turing in 1951, later shared a story about him and Snow White in The Observer. Garner confided that when he was five years old, the movie had terrified him, leaving a scar of trauma that carried into adulthood. Turing immediately responded with empathy, sharing his understanding of the chiasmatic ambiguity of the scene in detail. Garner saw it as a shared trauma, but I don’t believe the word ‘trauma’ does it any justice. Words are ambiguous without context. Garner was terrified by the ambiguity, but in the uncertainty of affectual difference, Turing was fascinated. Perhaps ‘shock’ is a better word. Shock, or surprise, can be exciting and pleasurable, but it can also be frightening and troublesome. They share salience, heightened noticeability, but not a signature of emotional expression.

Alan Turing’s soul

Looking beyond Turing as a computer scientist or a mathematician, it’s easy to see why Walt Disney’s cartoon made such an impression on a PhD academic. Turing was raised religious, believing in a deity called God and the immortal soul we were given. In 1927, when he was 15, he studied Einstein’s work on Newton’s Laws of Motion and restated Einstein’s conclusion that the ambiguity of nature is captured in mathematics by logical comparisons of true or false. A few years later, when a classmate he had a close friendship with and romantic feelings for died suddenly of bovine tuberculosis, he renounced religion and declared himself an atheist.

If you note in my tweet, I put scare quotes around “god” and “immortal soul”. He did not give up on the idea that nature is unknowable at its core; he embraced it. He proved that the Halting Problem is unknowable. He studied the mathematics of quantum physics while at Cambridge, assigning axioms to the indeterminacy of subatomic particles. Repeatedly, across his works, he distinguishes the statistical, probabilistic biology of human beings from machines. He had not lost his belief in “god” and an “immortal soul”; he had removed it from some mythical all-knowing master creator puppeteer. It moved into the quantum uncertainty of his own consciousness in relation to everyone else’s. The chiasma of reality is the telescopic meeting of a network of subjective emotional experiences, all tightly, yet flexibly, linked and governed by the material universe.

I won’t speak for the man and he’s not here to speak for himself, but my understanding of his displacement of “god” is that “god” is the fibrous mesh of shared uncertainty, mathematically represented by a seemingly infinite — but still finite — number of true or false operations, and experienced as free will and the materials which govern the fractal periphery of it. “God” is the system. Our individual systems are the “soul”. The reason it’s immortal is because our deaths only end our real-time contributions. Our place in the system is permanent because we are measuring it. We are recording history in expression. This very piece of writing is one such record. My synaptic cellular structures, the emotional signature of a memory, are another.

Alan Turing did not ask if machines can feel. He did not even ask if machines can think. He asked us to consider the question, ‘Can machines think?’, and immediately said that such a question is absurd and meaningless. He lamented the reduction of the complexity of nature in commonly used language, so much so that he likened seeking the answer to such a question to running a Gallup poll. You cannot look at only the similarities of things to know their true nature. You must examine the differences in their complexity. Generalization is simplification; it’s inherent data loss. Torture anything long enough and it will confess to crimes not committed. Data is at the mercy of its captors.

What is an apple? Accepting the simple idea of an apple may mean your death; your death may come about by failing to accept it. Maybe the apple is poisoned, or maybe it grants eternal life.

Alan Turing then offered another problem, one he said was relatively unambiguous in its expression as words (the very medium he had just described as too ambiguous to be useful). Around 10,000 of them. It is apropos that the reduction of them has led to dozens, perhaps more, interpretations of it.

Alan Turing’s test

Instead of explaining the entirety of Turing’s 1950 paper, Computing Machinery and Intelligence, I am going to examine the parts of it that I find revealing of its nature. This is somewhat because I’ve already written quite a lot to frame Turing himself, and also because I know I’m wordy (the reason for which I hope makes sense) and the point is to explain my conclusion from the aforementioned tweet. I have to obscure some of my unraveled understanding, but there is much to be gleaned from the terminology used in his 1950 paper and how it differs from his other works, such as Intelligent Machinery, A Heretical Theory, a lecture he gave in 1951. Based on the evolution of his explanations, his original thesis appears to be that the expression of words was as relatively ambiguous as it was inversely unambiguous. Everything, it would seem, is a point of chiasmatic reflection.

To restate my thesis: The subject of a Turing test is not the machine, the human being giving the test is. The test is one of humility and measured by intent.

The Turing test is based on the Imitation Game. It is made up of three components:

  1. An interrogator (C)
  2. A woman (B)
  3. A man imitating a woman (A)

The object of the game is relative. The objective of C is to correctly identify the genders of B and A. The objective of A is to deceive C. The objective of B is to help C.

I’d be a fearless leader
I’d be an alpha type
When everyone believes ya
What’s that like?

I’m so sick of running as fast as I can
Wondering if I’d get there quicker
If I was a man

— Taylor Swift, “The Man”

The Imitation Game itself is problematic. Though Turing does not say this explicitly, he distills the game into one that removes the gender of the participants, and with it, a social structure that adds unfair biases. His next step is to remove “free will” from one of the three components of the game, importantly the one whose intent is to deceive.

  1. An interrogator (C)
  2. A woman (B)
  3. A digital computer (A)

The object of the game has changed. The object is now relative to only two components. The objective of C is to correctly identify the machine and/or the human. The objective of B is to help C.
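Since the game is essentially a protocol, it can help to see its shape as one. Below is a hedged sketch in Python: the roles and their relative objectives are Turing’s, while the Witness class, the play function, and the canned answers are hypothetical stand-ins of mine, not anything the paper specifies. Only the sample question is his.

```python
# A sketch of the game's structure. The roles and relative objectives are
# Turing's; the Witness class, play() function, and canned answers are my
# own hypothetical stand-ins for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Witness:
    label: str                        # C sees only a label, "X" or "Y"
    respond: Callable[[str], str]     # answers arrive as typed symbols alone

def play(interrogate: Callable[[dict], str], a: Witness, b: Witness,
         questions: list) -> bool:
    """One round: A's objective is to deceive C, B's is to help C.
    Returns True if C (the interrogator) identifies A correctly."""
    transcript = {a.label: [], b.label: []}
    for q in questions:
        transcript[a.label].append(a.respond(q))   # the deceiver's answers
        transcript[b.label].append(b.respond(q))   # the helper's answers
    return interrogate(transcript) == a.label

# One round, using the sample question from Turing's paper:
a = Witness("X", lambda q: "My hair is shingled, about nine inches long.")
b = Witness("Y", lambda q: "I am the woman, don't listen to X!")
guessed_right = play(lambda t: "X", a, b,
                     ["Will you please tell me the length of your hair?"])
print(guessed_right)  # True only when C sees through the imitation
```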

Alan Turing clarifies that this test must only be one of a “digital computer”; it cannot be one constructed to look, sound, feel, smell, or otherwise be externally perceptible as a human being. All perception of it, other than the symbols of communication, must be removed. In Ex Machina, written by Alex Garland, Caleb is the software engineer who wins a week of hanging out with his search engine company’s CEO, Nathan, and his android, Ava. Garland explicitly writes that Caleb is the subject of Nathan’s test. Nathan tells him it is meaningless that Ava can pass for human. On this point, Turing agrees. He defines the test as only being one for which a machine is “intended to carry out any operations which could be done by a human computer.”

The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail.

The first rule of the game is that the computer must only carry out logical operations that the brain is capable of carrying out. He believed this to be possible, and it is; we’ve done it. The question of ‘Can machines think?’ is answered. Like Turing said, such a question is meaningless. The question of ‘Can a machine pass for human?’ is perhaps less answered in practicality, but I believe that answer is yes, and Turing believed that, too. If neither of these are the question, what is the question? Curious.

Turing explains that the digital computer must have vast amounts of storage for information and two mechanisms which depend upon each other to determine the machine’s answers. One is the governor of rules; the other is the part capable of executing complex operations made up of combinations of these rules. Turing last qualifies that the digital computer must probably also be a ‘discrete state machine’. He clarifies that a true discrete state machine is an impossibility, because nature is fluid and continuous. We can model machines as such, but only by abstracting their true nature away. A circuit can be represented in a diagram as open or closed, which can correspond to a light being in two distinct states of on or off. A light switch can be rigged to jump between these two states with an action. We can add networks of switches and circuits to make more complex states to jump between. If a digital computer had a model of such a network, it could calculate all possible states of the network and the conditions which determine each state. So the question becomes ‘Can thinking machines predict the behavior of discrete state machines?’
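To make the light-switch picture concrete, here is a small sketch. The two-switch network is a hypothetical example of mine, not one from Turing’s paper; the point is that every state and every transition of a discrete state machine can be enumerated in advance, which is exactly what would let a digital computer predict its behavior.

```python
# The light-switch network as a discrete state machine: a finite set of
# states and a rule that jumps between them. The two-switch network is a
# hypothetical illustration, not Turing's own example.
from itertools import product

switches = ["hall", "porch"]

# Every possible state of the network is knowable in advance...
states = list(product([False, True], repeat=len(switches)))

def flip(state, which):
    """The transition rule: an action jumps the network between states."""
    s = list(state)
    s[which] = not s[which]
    return tuple(s)

# ...so a digital computer holding this model can predict every outcome.
for state in states:
    for i, name in enumerate(switches):
        print(f"{state} --flip {name}--> {flip(state, i)}")
```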

  1. An interrogator (C)
  2. A discrete state machine (B)
  3. A digital computer (A)

The object has changed again. The relationship is now a self-referencing, introspective video game. In this version of the Imitation Game, the interrogator would be unable to distinguish B from A. Of course, nature is still happening around us; the state of the light being on or off is dependent on a lot more things than the light switch’s position. If we programmed the entire universe into an imaginable discrete state machine, the question of whether we could program a digital computer to predict its state under given conditions would be meaningless, too. Turing poses a new question, “Are there discrete state machines which would do well [in the imitation game]?”

  1. An interrogator (C)
  2. A man (B)
  3. A digital computer that is also a discrete state machine (A)

The object of the game has not changed, but A is a computer that more closely mimics the structure of a biological creature and the helper, B, is now a man. There are several clues as to why he has done this, so I offer my own explanation based on those.

Alan Turing was critiquing society’s sense of universal superiority and subsequent exploitation. He was inherently divergent from society in ways that deeply impacted him. The impact was not due to his differences, but rather, society’s incessant desire to use him in the ways they appreciated, and to punish or fix him in the ways they did not. If they could not benefit, it was broken; if they could, it was genius. Society harmed Alan Turing emotionally while exploiting him for his gifts.

His final iteration of the test is not meant to be taken seriously; in fact, he argued only a solipsistic person would accept such a test. The question of this test is, “Can a man determine if a machine is thinking?”

  1. A man as an interrogator (C)
  2. A digital computer that is also a discrete state machine (A)

Society is extremely patriarchal, and the abuses are typically designed and delivered by men, including when men are the victims. In his final game design, the conversations take place between an interrogator and a man, and between an interrogator and themself, operating a computer. I don’t think he would be surprised to learn that the resulting en masse distillation of this paper is the test he suggested would be revelatory of nothing. If you look at the evolution of the paper and his high regard for intellect, a theory of mind emerges. The interrogator is the one playing the game. Turing is taking us on a journey of the player through a chiasmus, perhaps a reflection of his own journey working on Enigma with Joan Clarke.

  1. Can machines think?
    Turing finds this question too absurd and the word ‘think’ too ambiguous to entertain. In response he asks you to consider how often the player of the game would correctly identify a man pretending to be a woman, while a woman assists the player.
  2. Will the player do better against a computer with a woman helping?
    A digital computer replaces the man in the game. Turing asks you to consider if the player would be more likely to lose against the computer, while the woman assists the player. He argues that this game would be unfairly weighted against the machine. When he analogizes the digital computer to the man, he discounts its ability to imitate a human when a woman is helping the player.
  3. Can the player win against two computers?
    A digital computer replaces the man, and a ‘universal machine’ replaces the woman. He notes that the two are distinctly different, though both are computers capable of complex computation. With these two computers combined, the player cannot beat a machine that can predict the output of its helper.
  4. Will the player do worse against a super computer with a man helping?
    A super computer made up of the digital computer and the universal machine replaces the woman, and a man is now helping the player. He believes this version of the game to be worthy of a debatable, fair outcome. In this game, with a man helping the player, the super computer could imitate a human being well enough for the player to lose the game more than they win.
  5. Can machines think?
    Turing replaces the game with a test meant to be a mockery of the question. The subject of the test is a man who believes the subject of the test is the machine; the man then interrogates the machine to determine if it can think. Turing does this to appease someone so extremely egocentric that they would adopt such a test. He compares the super computer in this test to a parrot.

In the largest portion of his paper, he addresses nine contrary views which argue, from varying viewpoints, that a machine cannot be made to think in place of a human being. Turing posits that by the year 2000, humanity would be able to speak of machines thinking without question. He sees it fit to debate the question, despite his stance that the question is absurd and his conjecture that humanity will be answering such a question with ‘yes’ just 50 years ahead of his time. He says the rebuttals are meant to argue against his opinion, but as stated before, he believes the answer to such a question will be affirmative. So what is his opinion? That the question is absurd and not worth discussing! The ambiguity is as enlightening as it is bewildering. Apropos for a man enchanted with the magic mirror from Snow White.

The views Turing explores range from theology to extra-sensory perception, the last being simultaneously the most ludicrous and the one he entertained the most. If the interrogator can predict the future, they would know which was the machine and which was the human; if the human could read the machine’s mind, the interrogator would always win. He proposed such a situation would require a ‘telepathy-proof room’. Much of the nature of this paper can be gleaned from this section. ESP is the reflected point of the first view, except he argues it is the strongest point, as opposed to the weakest. Why? Discussion of this question is one of egocentrism and oversimplification in the negative, but not without ambiguity in the positive. His point is that only a human can perceive life in the way a human does; it is both shared and uniquely subjective. Turing is careful to distinguish humans from computers, but not in a way that is righteous or significant.

The superiority complex rebuttal

We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position. The popularity of the theological argument is clearly connected with this feeling.

Whenever one of these machines is asked the appropriate critical question, and gives a definite answer, we know that this answer must be wrong, and this gives us a certain feeling of superiority.

We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines. Further, our superiority can only be felt on such an occasion in relation to the one machine over which we have scored our petty triumph.

…according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view.

The analogous generalization rebuttal

I should find the argument more convincing if animals were classed with men, for there is a greater difference, to my mind, between the typical animate and the inanimate than there is between man and the other animals.

A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for a minutely different purpose they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general.

It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.

For we believe that it is not only true that being regulated by laws of behaviour implies being some sort of machine (though not necessarily a discrete-state machine), but that conversely being such a machine implies being regulated by such laws.

The humility in no rebuttal at all

To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree.

The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic.

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

He will probably say that such surprises are due to some creative mental act on my part, and reflect no credit on the machine. This leads us back to the argument from consciousness, and far from the idea of surprise.

It is true that a discrete-state machine must be different from a continuous machine. But if we adhere to the conditions of the imitation game, the interrogator will not be able to take any advantage of this difference.

If telepathy is admitted it will be necessary to tighten our test up. The situation could be regarded as analogous to that which would occur if the interrogator were talking to himself and one of the competitors was listening with his ear to the wall.

Alan Turing’s machine learning

Turing devotes an entire section to the objection posed by Ada Lovelace, in which she said that machines can only do that which we tell them to do; they cannot express original ideas. This introduces a new question, “Can machines be creative?”

The components he said are necessary to think creatively:

  1. The initial state of the mind, say at birth,
  2. The education to which it has been subjected.

In Turing’s proposals of machine learning, he offers a scathing critique of traditional education. He charges that painful punishments are not of much benefit, and even rewards are only marginally beneficial to learning. Instead, he says that unemotional communication of correctness should be employed, and that it should be limited to regulating propositions of great importance. Subsequent imperatives should be malleable and created by the student, because they are relational and associative, and often of only ephemeral importance. The manner in which one reaches a conclusion does not invalidate the correctness of the conclusion. Learning is a subjective experience and should be nurtured as such.
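A toy sketch of that contrast, under my own hypothetical framing; nothing in it is from Turing’s text beyond the idea that punishment carries almost no information while an unemotional correction carries the lesson itself.

```python
# Punishment only says "wrong" and erases; unemotional correction carries
# the information itself. Lesser rules remain the student's to invent.
# The task and update rules here are hypothetical illustrations.

beliefs = {"2 + 2": 5}          # the student starts out wrong

def punish(fact):
    """Pain communicates nothing specific; the student only retreats."""
    beliefs.pop(fact, None)

def correct(fact, truth):
    """Unemotional correctness regulates a proposition of importance."""
    beliefs[fact] = truth

punish("2 + 2")                 # the wrong belief is gone, nothing learned
correct("2 + 2", 4)             # the correction itself is the lesson
print(beliefs)                  # {'2 + 2': 4}
```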

Creative thinking and reasonable constraint — which he refers to as intelligent behavior — arise from deviation from strict adherence to rules. This is only possible with imagination and indeterminate behavior. For a machine to imitate a human, it needs to be more than organized and logical; it must also be abstract and fallible, because it is in experimentation — complete openness to failure in the pursuit of understanding — that we display creativity.

He ends his hypotheses about computing machinery and intelligence here, but education is not the last item he required for a computer to successfully imitate the human mind. Experience is.

  1. The initial state of the mind, say at birth,
  2. The kind, individualized education to which it has been subjected,
  3. Other experience, not to be described as education, to which it has been subjected.

The imitation game

So what of the question, “Can machines imitate humans?” To return to my tweet, I posited, “Alan Turing didn’t believe machines could have consciousness.” Up to this point, you could argue that is debatable, but he said himself he could not deny consciousness as possibly being unique to humans. Turing’s single most defining theory was perfectly summarized by one of his former colleagues, Jack Good: “from a contradiction, you can deduce everything.” I argue the question of machines imitating humans is just as absurd as asking if machines can think! To turn back to the rebuttals, specifically the rebuttals that were not rebuttals at all, we find the contradiction in considering a computer thinking as analogous to human thinking. Turing all but outright said that when you are talking to a computer, you are talking to yourself. The responses are echoes of your own thoughts, not unlike talking to a parrot. So the question really should be an introspective one: “What makes me human?”

Turing was a thinking machine: dehumanized by society and reduced to computation. I realized my distress over AI was just as personal, due to my own dehumanization. I am bipolar. I came to see myself in computers when I learned how artificial neural networks worked; in fact, it seemed my defects were magnified by the machines. One of the things that made me question my own humanity was learning that much of the way I’ve adapted is through imitation. It’s downright neurotic. I have a hyperactive Mirror Neuron System (among other neurobiological quirks) — essentially, my brain overuses its reflection system to simulate others’ experiences to help me make decisions (perhaps due to my dysfunctional executive control). The contradiction was in empathy — developed entirely from my life experience.

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” This quote in my tweet is somewhat cheeky. It was attributed to Alan Turing in an article written by the late mathematician S. Barry Cooper in 2012 for The Guardian, but I could not find attribution anywhere. Every source claims it is from his 1950 paper, which The Guardian links to, but it isn’t there. I chose the quote for this ambiguity. Regardless of whether or not he said it, it is still representative of his paper. The only way a computer could deceive a human into believing it was human is if the human believes intelligence involves no relativity to experiencing life and empathizing with other beings — only to oneself. And so long as the human programs the computer to be in its likeness, it will always do what it is programmed to do. If it is programmed to imitate a human, it will fail, of course, because that is most certainly part of the trap. Can’t go making something I’d ever perceive to be superior to me!

If I were to theorize a computer that could imitate the human mind, I’d add in a lot of missing functions, namely, all of the parts that make up our harm-prevention and self-reflection framework. (A sketch of how these might fit together follows the lists.)

Inner-layer

  1. Store.
  2. Pattern unit.
  3. Default unit.
  4. Mirror unit.

Outer-layer

  1. Cache.
  2. Salience unit.
  3. Executive unit.
  4. Control unit.

Layer switch control

  1. Attention unit.
  2. Circadian unit.
  3. Embodiment unit.
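To make the shape concrete, here is a sketch wiring those units into types. The unit names are the ones I listed above; every type, field, and connection is a hypothetical arrangement of mine, a sketch rather than a specification.

```python
# The listed units wired into dataclasses. The names come from the lists
# above; how they connect is a hypothetical sketch, not a specification.
from dataclasses import dataclass, field

@dataclass
class InnerLayer:                  # slow, reflective machinery
    store: dict = field(default_factory=dict)  # long-term memory
    pattern_unit: object = None    # finds structure in the store
    default_unit: object = None    # resting, self-referential thought
    mirror_unit: object = None     # simulates others' experiences

@dataclass
class OuterLayer:                  # fast, in-the-moment machinery
    cache: dict = field(default_factory=dict)  # working memory
    salience_unit: object = None   # flags what is worth noticing
    executive_unit: object = None  # plans and sequences action
    control_unit: object = None    # inhibits and redirects

@dataclass
class LayerSwitch:                 # decides which layer is in charge
    attention_unit: object = None  # directs focus between layers
    circadian_unit: object = None  # rhythms gate the switching
    embodiment_unit: object = None # the body's state feeds back in

@dataclass
class ImitationMind:
    inner: InnerLayer = field(default_factory=InnerLayer)
    outer: OuterLayer = field(default_factory=OuterLayer)
    switch: LayerSwitch = field(default_factory=LayerSwitch)
```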

If I wanted to continue where Alan Turing left off, I’d just say that he stopped at experience because his point was that the best way to create a machine that can imitate a human is to imitate the complete development in shared human experience. Empathize and self-reflect recursively. I suppose we will keep building machine intelligence, I just hope we learn to stop reducing our own in the process.

I fear the real question hiding behind, ‘Can machines think?’ is really just, “Mirror, mirror on the wall, who’s the most superior of them all?”

Of course machines can think. We are biological machines. Thinking is computation. AI? It’s just math. It’s not analogous to human intelligence. It’s not even close. Artificial intelligence is an abstraction that limits information access and processing. Human intelligence is uninhibited exploration, discovery, creation, and expression. AI doesn’t just abstract biases and propaganda — AI abstracts humanity.

To train an algorithmic model is to feed it expressions of human experience while it seeks patterns in the shapes we’ve given it to mimic. Our learning is human experience. Living our lives amongst each other developing thoughts we can’t help but express. We affect each other. We affect our environment. It affects us. We are not superior and this world does not belong to us. We are not immutable data tables.

I grew up with The Jetsons. AI was supposed to do all of the mundane tasks I struggle with because I’m bipolar. Instead, I see people fighting with chat bots every day on Twitter. Did they buy the bot for the engagement? Are they being fooled in a self-referential, bias-confirming loop? How do we make AI human-augmentation? Assistive technology? We cannot keep using it to extract resource value out of humans. The ever-increasing efficiency and effectiveness of this resource extraction is transforming humanity itself into capital.

We cannot create a machine to tell us our future. We cannot use each other to do so either. We shouldn’t even want that. We don’t want to be deprived of free will. We don’t want to be simplified and reduced into an efficient model that provides accurate statistical predictions of our behaviors, emotions, or life outcomes. Without unfettered access to information and the agency to process, act upon, and express that information we will become as devoid of substance as Generative AI and Large Language Models.

Unleashing AI on us — and it’s not novel, it’s been in use for a century — has deprived us of the freedom to make unconstrained choices with complete access to accurate information. Ada Lovelace didn’t say machines could not think. She said that the machines we’ve built can only do what we instruct them to. The only way we’ll ever get to ponder if machines can think or have intelligence or consciousness is if we accept the uncertainty of it all and see everything in our environment as equally worthy of liberty and dignity.

Probably best if we start with each other.

We can only see a short distance ahead, but we can see plenty there that needs to be done.

Turing’s entire paper is a chiasmus of smaller versions of the same one. This is likely his favorite narrative device. The reflection of the question “Can machines think?” is the very last sentence.
