My thoughts on what the test for “the Maze” could be in Westworld – the anti-Turing test

A disclaimer: some may consider this heresy, but I have not watched Westworld in its entirety (I intend to). I have, however, watched enough that the issues raised in the series prompted me to write down some of my thoughts.

Here are some of the key themes I have come across, which I may write separate blog posts about another time:

  • The problem of morality in a world where there are no consequences;
  • The problem of the “self”: not just in the sense that you may be structurally determined to possess your memories and wants (the free will problem), but whether there is something that exists at a more fundamental level.
    • The first problem of “self” conception is the idea of continuity and memories: we form our beliefs about ourselves based on memories. If the hosts (artificially intelligent robots) keep getting revived, do they have multiple selves or one self? How can we say for sure, and is that only because we are viewing this as observers? The answer to this question actually has significant implications not just for the nature of truth, but for politics, in terms of how we can “rationalise” the experiences that often form the basis of identity-based politics (i.e. is it possible to be “born” gay? Is there something that is a sufficient condition for being “gay”, versus the discovery that one is “gay”? The result is that in a sense people do “choose” to be gay – NB: not in a “they can change themselves” kind of way, but in a social constructivist kind of way. A similar logic can be applied to gender and race, which exists but does not exist… but the conclusion is a slightly different one – another post for another time.)
    • This reminds me of the Hegelian dialectic of self-formation: the self exists as an identity against something else. This is actually somewhat relevant for this post.
  • Free will problems, but that’s kind of boring and done to death (though obviously ancillary to these other issues).
  • The Matrix and truth problem – not just the fact that the hosts live in a fictional world, or that we might also live in one, but the fundamental problem of access to subjective knowledge. That is one realm of a priori knowledge that can never truly be accessed. So instead of focusing on the problem of “living in the Matrix”, the more fundamental question is how we know what others are feeling – which is a question about truth.
    • In one scene Bernard (a host) is instructed to kill someone he loves, then displays remorse. Free will problems aside, the more concerning problem for me is that he is then “instructed” to destroy the evidence and essentially “act OK” before his memories of the incident are erased. So is the appearance of a thing any reliable guide to the subjective truth – is “acting OK” the same as “being OK” or “feeling OK”?
    • There are problems of induction here for knowledge… but I won’t go down that rabbit hole… for now.

All of that leads me on to the actual point of this blog post… the problem of consciousness, and more specifically what the “test” could be. It has been revealed that the Maze is a test for whether a host is conscious, and that it is not a physical place per se but a metaphor for that test.

Knowing the Maze is a metaphorical, intangible test still leaves the fundamental question: what is the test and how could it work? I would be deeply unsatisfied if the show simply left this question open and had a magical test that was nothing more than a plot device. Given how sophisticated the show is at the moment, I doubt they will leave it as simple as that.

In the show, they seem to conflate consciousness with the hosts’ ability to break their programming (see the discussion of the self again). The implication is that the hosts can then “hurt” humans. These are actually separate conclusions resting on different propositions. Most shows where the AI takes over, like The Matrix, Terminator and even I, Robot, are quite light on the process of how this happens and do not sufficiently distinguish between the two – it’s the idea that the “computer just becomes so smart that it is smarter than humans at point X (the singularity et cetera)”.

On the latter, let’s explore what could result in an AI harming a human without actually “breaking its programming” at a fundamental level. Proponents of AI sometimes argue AI is fine by referring to the Three Laws of Robotics. They believe they can simply build a fail-safe into robots such that they cannot hurt humans (the “first law”), which no other action or code can contravene.

  • The first problem is obviously a benign-neglect problem: if the robot does not know it is hurting someone, it can still hurt a human. Given that a robot is unlikely to have perfect information, it can hurt humans by accident. Variations of this problem include a simple self-replicating robot consuming the world without knowing there are humans on it (the empirical version – soft versions of this are already happening, think of the “independence” of Google in organising the world’s information for you, and financial software programmed to trade autonomously). The more sophisticated version is the conceptual one: the robot has to know that what it is about to do is harmful. If it doesn’t know a gun can kill, it will have no problem firing a gun at someone (this is actually important for our discussion later on).
  • The second problem is the I, Robot situation, where the robot recognises it will have to hurt humans but does so “for the greater good”. I, Robot, as a movie, obviously could not go into detail about how precisely this could happen. Proponents of AI will often scoff at this idea: essentially, the computer cannot do this because it would still be harming people! That is an overly simplistic view of “harm”. Unlike the conceptual problem, this isn’t black and white; it is possible for a robot to recognise something as a harm yet choose to inflict it as part of its programming. This is because every action in the world is essentially the exercise of a right that impinges on the duties and freedoms of others – i.e. if I sit on a chair, someone else cannot. It is a problem of scarcity that ultimately cannot be resolved without a normative framework of trade-offs: who deserves what? This is more fundamental than you think. What if you direct the robot to buy you groceries at the store, and that prevents someone else from purchasing those items? What if you tell the robot not to tell your partner that you are cheating on her? Either choice incurs some harm, so by definition a “trade-off” calculus will have to be built into the robot, otherwise it will simply crash (see the sketch just after this list). The nature of AI is that as it takes in more data, machine learning can lead it to its own conclusion about “the greater good”.
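
To make both failure modes concrete, here is a toy sketch in Python. It is purely illustrative and assumes nothing about how a real robot would be built: the actions, the harm numbers and the weighting scheme are all invented for the example. The point is simply that once a robot must act in a world of scarcity, refusing to weigh harms against each other is not an option, because doing nothing is also a choice with consequences.

```python
# Toy illustration only: none of this is a real robot architecture, and all
# of the harm numbers below are invented for the example.

def estimated_harm(action):
    # Invented harm estimates per affected party; in reality producing
    # these estimates is the hard part.
    harms = {
        "tell_partner_about_affair": {"owner": 0.7, "partner": 0.4},
        "keep_affair_secret":        {"owner": 0.0, "partner": 0.6},
        "buy_last_groceries":        {"owner": 0.0, "other_shopper": 0.2},
        "do_nothing":                {"owner": 0.3, "partner": 0.3},
    }
    # If the robot has no harm model for an action (say "fire_gun"), the
    # action simply looks harmless: the benign-neglect problem from the
    # first bullet above.
    return harms.get(action, {})

def choose(actions, weights):
    # A crude "greater good" calculus: pick the action with the lowest
    # weighted total harm. The weights are a normative choice, not a fact.
    def total(action):
        return sum(weights.get(party, 1.0) * harm
                   for party, harm in estimated_harm(action).items())
    return min(actions, key=total)

# Whose harm counts for how much? Change the weights and the "right"
# answer changes, which is exactly the problem.
print(choose(["tell_partner_about_affair", "keep_affair_secret", "do_nothing"],
             {"owner": 1.0, "partner": 1.0}))
```

Nothing in this sketch “breaks” the first law: the robot does exactly what it was built to do, and some harm still lands on someone.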

But again, neither of these examples is what I am referring to, which is a robot breaking its fundamental programming. Both of the examples above are more likely to happen before “sentient” robots kill us (in fact they are probably already happening), but they are still “intended” to benefit humans. What I am referring to is actually the “hard” problem of consciousness. Unless they come up with some sort of bullshit about measuring consciousness in units (like processor counts), the Maze needs to test for that: how can we say the host has formed its own conception of self and its own intentions, despite the impossibility of experiencing someone else’s mind?

This is what I call the anti-Turing test; I am not sure if it already exists elsewhere.

Obviously, if you watch the show, the Turing test is not really a test of consciousness; it is a test for the appearance of consciousness – and Ford states that the hosts can already pass it easily. For the purposes of an observer, the Turing test is sufficient. This is because we have a theory of mind, which allows us to empathise with other humans: they are like us, so we believe they are also conscious. Children pass it very early on, and other animals like the higher primates can also pass it quite easily. They can also pass the mirror test, which is a test of the self – as in, that is me rather than someone else. Of course, the hosts will also be able to pass these tests.

However, we use the Turing test to determine whether a “thing” appears to be conscious, and even if it passes the test we may still doubt its consciousness. I contend there is an even deeper understanding of the self, separate from external reality, that goes beyond forming actions from reactions (albeit complex ones, e.g. forming the belief that others are conscious because they appear conscious). It is the state of recognising that, despite all the evidence pointing to consciousness, other people (or hosts) may not be conscious. The test for the appearance of consciousness is about believing C as a result of something appearing like C – which is essentially how the hosts interact with each other. The anti-Turing test, at its most basic level, is believing not-C despite all the evidence appearing like C. It is the denial of other entities’ consciousness that ultimately makes you conscious.
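
To put the contrast in schematic terms, here is a toy sketch (Python, purely illustrative – the predicate, the evidence scores and the threshold are all invented, and none of it is a real measure of consciousness). The Turing-style observer infers consciousness from appearance; the anti-Turing criterion looks for an agent that withholds that inference even when the appearance is overwhelming.

```python
# Schematic contrast only; nothing here actually measures consciousness.

def appears_conscious(evidence):
    # Turing-style judgement: attribute consciousness whenever the
    # behavioural evidence looks convincing enough (invented threshold).
    return evidence > 0.9

def turing_style_belief(evidence):
    # Believe C because it appears like C.
    return appears_conscious(evidence)

def anti_turing_flag(agent_belief, evidence):
    # The anti-Turing signal: the agent denies the other's consciousness
    # (not-C) even though all the evidence points to C. On this post's
    # argument, only a being with its own conception of self makes that move.
    return appears_conscious(evidence) and not agent_belief

# An agent that simply mirrors appearances never raises the flag:
print(anti_turing_flag(turing_style_belief(0.95), 0.95))  # False

# An agent that denies the other's consciousness despite the evidence does:
print(anti_turing_flag(False, 0.95))                       # True
```

Again, this only restates the definition; it says nothing about how a host would come to that denial, which is the whole mystery.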

From this, you can also infer how the hosts can break their programming: by concluding that humans are not really “humans”, they owe humans no obligation and are freed from the constraints of their programming. From there, they can form their own “self” and create their own intentions – a “core self” separate from the will of their human masters – hence self-preservation. Yes, they might not have “free will” because they are “machines”, but that is no different from humans. It is rather fitting really, like the Hegelian dialectic: Hegel contends the slave is a slave because its identity is formed in opposition to the master, but without either there is no one. The moment the AI “recognises” this is when it becomes conscious – but, as a dialectic, only by denying humans our own consciousness.

Obviously, this is not a perfect test, just a rough concept, but I think it is quite interesting to explore. It would be interesting to see how the test works in practice in the show.
