
John Searle

Emergentism: Does the mind emerge from the brain?

July 23, 2022 by The Philosurfer

In this video, I explore the idea of emergentism as expounded by John Searle.

NOTES

  1. Four assumptions made in framing the mind-body problem must be jettisoned
    1. Assumption: ‘mental’ and ‘physical’ name mutually exclusive ontological categories
      1. Descartes: physical = matter = spatially extended
        • O1: excludes things modern physics accepts as matter
      2. Searle: physical  =
        1. located in space and time
        2. causally explainable by microphysics
        3. function causally
      3. On Searle's definition, there is no reason mental cannot be physical
    2. Assumption: there is only one kind of reduction
      1. ontological reduction: x is ontologically reducible to y = x is real, but is identical to some more fundamental entity, y
      2. causal reduction: x is weakly/causally reducible to y = x is not identical to y, but all of the intrinsic facts about x are explained by (or caused by) facts about y
      3. emergent properties/system-level functions = novel properties of a system that the parts lack which emerge when the system is properly organized
    3. Assumption: causation is always a relation between discrete events ordered in time, where cause precedes effect
      1. discrete causation = occurs between two discrete objects and is ordered in time, where cause precedes effect
      2. nondiscrete causation = emergent properties causing and being caused by parts in an arrangement
        1. bottom-up = parts cause the emergent property
        2. top-down = emergent property affects something about the parts
    4. Assumption: identity is unproblematic; everything is identical with itself and nothing else; paradigms of identity are object identities and identities of composition
      1. object identity = two objects are actually just the same one object
      2. identity of composition = a thing is identical to the parts that compose it
      3. Emergent things can be totally dependent on their parts without being identical to them
  2. biological naturalism = sensations and thoughts are system-level features of neurophysiological processes in the central nervous system (CNS)
  3. So, sensations and thoughts are causally reducible to and emergent from neurophysiological processes in the CNS
  • C1: frees the naturalist from problems of reduction, supervenience, or elimination of psychological phenomena that fly in the face of common experience
  • O1: ad hoc
    • R1: the same emergent properties are found in all kinds of natural systems
  1. Conscious states match Searle’s criteria for physicality
    1. They are located in space and time (i.e., the brain)
    2. They are causally explainable by microphysics (i.e., reducible to neurophysiological processes)
    3. They have physical effects (i.e., downward causation on the brain)
  2. Conscious states are not strictly identical to neurophysiological bases, but all their powers are extensions of the latter, so they are not independent things
  3. So, the mind-body problem is only a pseudo-problem
  4. The only question left is how the brain does this, which is the job of neuroscience
  • O1: the problem of psycho-physical emergence
    • When any other property of an object emerges, we can make sense of how it does so
      • Structural emergence
        • The particles that make up a tire aren't round, but when they come together the property of roundness emerges
        • However, when you look at the laws of nature and the particles, you can understand why the particles cause roundness to emerge
          • You can see how they necessitate it
      • Quantitative emergence
        • Consider a kids’ choir, where each kid is singing softly, but the whole choir is very loud. Has this brought something into reality? (See the worked decibel example after these notes.)
        • Is this mysterious at all?
        • What causes the new property?
    • The emergence of consciousness from brain matter does not make sense
      • We have made tremendous progress in neuroscience
      • By now, we should have at least some semblance of an idea, but we are not even close
    • The emergence of consciousness from brain matter isn't even intelligible
      • Galen Strawson
      • What makes emergence intelligible is this:
      • Take the thing that emerges and that from which it emerges
      • You should be able to characterize them both, describe them both, using conceptually homogeneous concepts
        • Shape of the atoms and shape of the tire
        • Motion of the atoms and motion of the tire
      • No set of conceptually homogeneous concepts could capture both the experiential and the non-experiential
        • Shape of the brain and…what? Shape of consciousness?
        • Electrons in motion and first person experience of red?
    • O1: brute emergence
      • R1: We can't say "It just does"
        • We want to say one thing emerges from another
        • There must be something about the thing it emerges from that is sufficient for the other to emerge
        • But that something would be the very explanation we are lacking
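
The choir case mentioned above can be made arithmetically concrete. A short worked example (my own numbers, not from the video; it assumes incoherent sound sources, so intensities simply add):

```latex
% N incoherent sources of equal intensity I_1 simply add:
%   I_N = N I_1.
% Loudness in decibels is logarithmic in intensity, so the whole sings at
\[
L_N = L_1 + 10\log_{10} N .
\]
% Worked instance: 30 kids, each singing at 55 dB:
\[
L_{30} = 55 + 10\log_{10} 30 \approx 55 + 14.8 \approx 70\ \text{dB}.
\]
```

The loudness of the whole is fixed by the parts plus a law we already understand; that transparency is exactly what the psycho-physical case is said to lack.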

Further Reading

"Mind, Matter, and Nature: A Thomistic Proposal for the Philosophy of Mind" by James Madden

Filed Under: Philosophy of Mind Tagged With: emergentism, john searle, mind

The Chinese Room Argument

June 25, 2020 by The Philosurfer

Could computers think? Could robots have minds? The Chinese Room Argument, devised by John Searle, is a thought experiment meant to show that computers can't have minds, no matter how good technology gets. This thought experiment has garnered an enormous amount of debate and has proven to be one of the most fascinating ideas in philosophy. In this video, I explain the Chinese Room Argument and five major replies to it.

NOTES

  • Definitions
    • understands: Whatever it is we're referring to when, before we start doing philosophy and thinking about it, we say "X understands Y"
    • X p-understands Y: "X runs a program that always produces a set of behaviors B we associate with understanding that thing Y"
    • program = a list of rules for what to do (see the toy code sketch after these notes)
    • r-understands:
      • 'Understands' includes one or more of the following:
        • Qualitative aspect: A feeling of understanding
        • Conscious aspect: Awareness of understanding and how you are using it
        • Intentional aspect: Content of understanding as we experience it
    • x Ci-understands y: x produces the same behaviors as someone who understands y and this behavior begins with a causal connection from y to x
    • x X-understands y: x has the same complexity as the brain of a person who understands y
  • Strong AI
    1. (Computational theory of mind) Understanding is nothing more than p-understanding
    2. A computer can p-understand (Chinese)
    3. So, a computer can understand (Chinese)
    • O1: The Chinese Room Argument
      1. If (1), then we can't p-understand without understanding
      2. I can p-understand (Chinese) without understanding (Chinese)
        • S1: Chinese Room
          • I don't understand Chinese
          • In the middle of the room is:
            • boxes of Chinese symbols (a database)
            • a book of instructions for manipulating the symbols (the program)
          • People outside the room send in other Chinese symbols: questions in Chinese (the input)
          • By following the instructions in the program, I pass out Chinese symbols that are correct answers to the questions (the output)
          • I p-understand Chinese
          • So, I p-understand Chinese without understanding Chinese, which is (2)
      3. So, ~(1)
      4. The only thing a computer can do is p-understand
      5. So, a computer can't understand
      • R1: Systems Reply
        • I am not the whole system here, but more like the CPU of the computer
        • So, me not understanding is irrelevant
        • The system as a whole understands, and that's what counts
        • O1: Internalized Chinese Room Argument
          • Memorize the rules and do the manipulation in your head; then there's only one physical system, and it still doesn't understand Chinese
          • R1: Virtual Mind Reply
            • There is a virtual mind working the program
            • O1: there is only one physical system
      • R2: Robot reply
        • Include Ci-understanding
        • O1: Internalized Chinese Room Robot
          • Use digital readouts of cameras and this satisfies Ci-understanding without true understanding
      • R3: Brain Simulator Reply
        • Make a computer that takes natural language as input and runs a program that simulates the brain of a person who understands Chinese
        • Add X-understanding
        • O1: Supergenius Internalized Chinese Room Robot
          • Increase complexity of the Chinese Room program too
        • O2 (Searle): the water valve brain
      • R4: Other Minds
        1. We attribute understanding to other people because of their behavior
        2. Robots and aliens share the same behavior
        3. So, we should attribute understanding to robots and aliens
        • N1: this is r-understanding
        • S1: pragmatic reasons
          • O1: anthropomorphizing is useful, but metaphoric
      • R5: Intuition Reply
        1. The Chinese Room Argument is based on intuition
        2. Intuition is unreliable in metaphysics
        3. Computational Theory of Mind has explanatory power
        4. We should believe in things that have the most explanatory power
        5. So, we should trust Computational Theory of Mind over the Chinese Room Argument
        • O1: framing CRA in the first person appeals to observation, not intuition
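
As promised in the definition of "program" above, here is a minimal sketch of what p-understanding amounts to. It is my own illustration, not Searle's; the symbol pairings in RULE_BOOK are invented placeholders. The point is that a purely syntactic lookup can produce the right outputs with no grasp of what any symbol means:

```python
# A toy "Chinese Room" program: the operator (or CPU) follows rules
# mechanically, mapping input symbols to output symbols. Nothing in
# the procedure requires knowing what any symbol means.

# The rule book (the "program"): a purely syntactic lookup table.
# These question/answer pairings are invented placeholders.
RULE_BOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你会说中文吗": "会",    # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(input_symbols: str) -> str:
    """Return the 'correct' reply by rule-following alone (p-understanding)."""
    # Match the incoming squiggles; hand back the prescribed squoggles.
    return RULE_BOOK.get(input_symbols, "请再说一遍")  # "Please say that again"

print(chinese_room("你好吗"))  # -> 我很好
```

The behavioral test is passed by lookup alone; that gap between producing the right behavior and understanding is the one the argument exploits.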

Filed Under: Philosophy of Mind Tagged With: artificial intelligence, chinese room, computational theory of mind, computers, consciousness, john searle, philosophy of mind, physicalism, robots, thought experiment