Could computers think? Could robots have minds? The Chinese Room Argument, devised by John Searle, is a thought experiment meant to show that computers can't have minds, no matter how good technology gets. It has generated an enormous amount of debate and remains one of the most fascinating ideas in philosophy. In this video, I explain the Chinese Room Argument and five major replies to it.
NOTES
- Definitions
- X understands Y: whatever it is we're referring to when, before we start doing philosophy and thinking about it, we say "X understands Y"
- X p-understands Y: "X runs a program that always produces a set of behaviors B we associate with understanding that thing Y"
- Program: a list of rules for what to do
- X r-understands Y: X understands Y in a sense that includes one or more of the following:
- Qualitative aspect: A feeling of understanding
- Conscious aspect: Awareness of understanding and how you are using it
- Intentional aspect: Content of understanding as we experience it
-
- X Ci-understands Y: X produces the same behaviors as someone who understands Y, and this behavior begins with a causal connection from Y to X
- X X-understands Y: X has the same complexity as the brain of a person who understands Y
- Strong AI
- (1) (Computational Theory of Mind) Understanding is nothing more than p-understanding
- (2) A computer can p-understand (Chinese)
- (3) So, a computer can understand (Chinese)
-
- O1: The Chinese Room Argument
- (4) If (1), then we can't p-understand without understanding
- (5) I can p-understand (Chinese) without understanding (Chinese)
- S1: Chinese Room (a rough code sketch of this rule-following setup appears after these notes)
- I don't understand Chinese
- In the middle of the room are:
- boxes of Chinese symbols (a database)
- a book of instructions for manipulating the symbols (the program)
- People outside the room send in other Chinese symbols: questions in Chinese (the input)
- By following the instructions in the program, I pass back Chinese symbols which are correct answers to the questions (the output)
- I p-understand Chinese
- So, I p-understand Chinese without understanding Chinese, which is (5)
- So, ~(1)
- The only thing a computer can do is p-understand
- So, a computer can't understand
-
- R1: Systems Reply
- I am not the whole system here, but more like the CPU of the computer
- So, me not understanding is irrelevant
- The system as a whole understands, and that's what counts
- O1: Internalized Chinese Room Argument
- Memorize the rules and do all the symbol manipulation in your head; then there's only one physical system, and it still doesn't understand Chinese
- R1: Virtual Mind Reply
- There is a virtual mind working the program
- O1: there is only one physical system
- R2: Robot reply
- Include Ci-understanding
- O1: Internalized Chinese Room Robot
- Use digital readouts from the cameras as input symbols; this satisfies Ci-understanding without genuine understanding
- R1: Systems Reply
-
- R3: Brain Simulator Reply
- Make a computer that takes natural language as input and runs a program that simulates the brain of a person who understands Chinese
- Add X-understanding
- O1: Supergenius Internalized Chinese Room Robot
- Increase complexity of the Chinese Room program too
- O2 (Searle): the water valve brain
- R4: Other Minds
- We attribute understanding to other people because of their behavior
- Robots and aliens can exhibit the same behavior as people
- So, we should attribute understanding to robots and aliens
-
- N1: this is r-understanding
- S1: pragmatic reasons
- O1: anthropomorphizing is useful, but metaphoric
- R5: Intuition Reply
- The Chinese Room Argument is based on intuition
- Intuition is unreliable in metaphysics
- Computational Theory of Mind has explanatory power
- We should believe the theories that have the most explanatory power
- So, we should trust Computational Theory of Mind over the Chinese Room Argument
-
- O1: framing the Chinese Room Argument in the first person appeals to observation, not intuition
- R3: Brain Simulator Reply
- O1: The Chinese Room Argument
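CODE SKETCH
As a rough illustration of what "p-understanding" amounts to in these notes, here is a minimal Python sketch of the rule-following setup in the Chinese Room: a program that pairs incoming Chinese symbols with outgoing Chinese symbols purely by matching them against a rule book, never interpreting them. The rule-book entries, the example question, and the fallback reply are invented for illustration; they are not from the video, and nothing here is meant to model real Chinese competence.

```python
# Hypothetical sketch: the "program" is just a rule book pairing input
# symbol strings with output symbol strings. The entries below are made up
# for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "The weather is nice today."
}

def chinese_room(question: str) -> str:
    """Return whatever symbols the rule book pairs with the input symbols.

    No meaning is parsed; the symbols are matched purely by shape, just as
    the person in the room matches squiggles against the instruction book.
    """
    return RULE_BOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    # From outside, the output looks like that of a Chinese speaker, so the
    # system p-understands Chinese; whether it understands is exactly what
    # the Chinese Room Argument disputes.
    print(chinese_room("你好吗？"))
```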