
Wstęp do kognitywistyki (Introduction to Cognitive Science) – Summaries

Wilfrid Sellars
Philosophy and the Scientific Image of Man
Sellars examines a philosophical problem concerning the ways we describe our perception of the world. Philosophers trying to understand the status of man-in-the-world are concerned with two conceptions: the Manifest Image and the Scientific Image. Both are explanatory constructs for understanding the world and our place in it, though there are clear differences between them. The Manifest Image is the understanding of itself and the world that humankind has developed throughout history – it is the framework in terms of which man encountered himself. The Scientific Image is the understanding of the world that has arisen through new techniques of inquiry and through the modern revolutions in science. In this conception the world can be described in the terms of science – and here we reach the main point: is the manifest image really useful? After all, every process can be described in scientific terms!
Sellars claims that we cannot reduce the MI to the SI. His argument refers to our need to find ourselves in a community. We perceive ourselves and others as human beings – to do that we need terms that describe us within a system of categories. That means we need to use the manifest image: only by recognising other people as intentional beings can we see the similarities to our own behaviour and, moreover, predict their behaviour by recognising their mental states. That is how we find our place in the world. The key word here is normativity – seeing ourselves as intentional beings gives us the possibility of enforcing social rules, which depend on the community that establishes them. These rules are a form of social agreement: the community establishes norms, expressed in words that have a defined intention. Through the Manifest Image we can correctly interpret them and try to enforce them in our social life. These strong arguments lead to the conclusion that the Manifest Image is very important and cannot be reduced to the Scientific Image.
Daniel Dennett
Personal and Sub-personal Levels of Explanation
Dennett's article contains considerations similar to Sellars's paper – differences between ways of explanation. He defends a sharp distinction between the personal and subpersonal levels of explanation. The personal level refers to horizontal explanation, which can be presented as a story told step by step; it is used when we talk about the person. The subpersonal level of explanation is vertical – it is used when we talk about the underlying states of the brain and its subsystems (a process represented by a certain state). His primary example to illustrate the need for this distinction is the phenomenon of pain. For Dennett, the subpersonal level of explanation for pain is fairly obvious – it involves a scientific account of the various neurophysiological activities triggered by afferent nerves responding to damage that would negatively affect the evolutionary condition of an organism. The question is: if we explain the brain processes which occur when we feel pain, do we really explain the feeling of suffering?
Dennett considers three questions about pain that a mechanical explanation might attempt to answer: How does a person distinguish pains from other sensations? How does a person locate a pain? Why do we abhor pain? In each case, Dennett tries to show that the mechanical explanation fails. The answers never refer to the cause of the state of mind, while on the subpersonal level you cannot refer to the phenomenon of pain at all: you simply account for the physical behaviour of the system in whatever scientific vocabulary is appropriate. On the personal level you acknowledge that the term "pain" does not directly refer to any neurophysiological mechanism. Dennett claims that it refers to the phenomenon of "just knowing you are in pain", in virtue of the immediate sensation of painfulness. The explanation only makes sense in terms of the pain being the pain of a person (not a brain) who "just knows" he is in pain when he is in pain. Dennett gives an example of the difference between the two explanations: when a person touches a hot stove, the personal explanation is that he had a sensation of pain in a specific place, and that sensation caused him to withdraw his hand. No further explanation is possible, and these facets of the explanation are "brute facts". You simply cannot talk about the feeling of pain using only subpersonal explanations – from which it follows that you cannot get by without the personal level of explanation.
Jerry Fodor
The Persistence of the Attitudes
Fodor claims in his article that commonsense belief "is worth saving", by which he means that folk psychology does in fact exist as a science and, moreover, is useful for understanding how the mind operates. The paper was written at a time when folk psychology was the target of other papers and discussions and was widely criticised. Fodor gives three separate arguments in order to save folk psychology.
The first acknowledges the high probability that folk psychology is an accurate predictor of human behaviour. It is so commonly used and accepted that it goes practically unnoticed by people. In fact, people use it in ordinary conversation persistently and constantly, for example when they arrange a meeting; the initial attitudes of those people then result in a predictable reality. The second argument is that folk psychology has "depth". Analysing a situation using folk psychology involves describing a person's attitudes and the process those attitudes go through to create new states of mind and possibly new states of the real world. The power of attitudes to cause change in the mind, combined with folk psychology's similarity to the "most powerful etiological generalizations", creates an expansive but predictable complex system. The final argument states that without folk psychology we have no alternative way to explain human behaviour and the process of its causes. Fodor points out that without folk psychology there would be no science to generalise over human behaviour using counterfactual evidence.
Folk psychology is essential for establishing certain components of a theory of the mind, such as the stipulations for propositional attitudes. The representational theory of mind treats states of mind as relations to representations which, like sentences of a language, can have different meanings. We need a ceteris paribus clause when we draw inferences about each other. According to this theory, the language of thought is composed of tokens that are the objects of propositional attitudes. These tokens allow transitions between mental states that preserve truth. Our beliefs are used to state the rules of folk psychology; they are components of our world. When we describe behaviour, we cannot refer only to changes of state in the brain – we have to adopt an intentional attitude. A chess-playing computer does not token dispositional beliefs that might become occurrent, though we can still use intentional explanation to describe what it does. Having read Fodor's arguments, I completely agree that commonsense belief "is worth saving".
Daniel Dennett
Real Patterns
In this paper Dennett considers the issue of the existence of our beliefs. When it comes to beliefs, there are two opposite stances: realism and eliminativism. Dennett, however, declares himself a mild realist about the ontological status of beliefs. He presents a new ontology of mental states: our ability to predict behaviour is not conditioned on respecting the laws of folk psychology, but rather on using patterns of behaviour.
We use folk psychology to interpret each other as believers, wanters, and intenders, and to predict what people will do next. The ability to interpret the actions of others depends on our ability to predict them. Our ability to predict behaviour is a function of the pattern that becomes discernible from the perspective of the intentional stance. But prediction is impossible from genuinely random, patternless noise. If we can make useful predictions, there is a discernible pattern underneath.
But what exactly are those patterns? They consist of compressed information. If we cannot compress the information, there is only random information, which gives us no possibility of predicting correctly. Pattern recognition involves a trade-off in the transfer of information between efficiency and accuracy. This maps onto Dennett's three stances. The physical stance is the least efficient but yields the most accurate predictions. The design stance is more efficient but sacrifices accuracy. The intentional stance is the one that lets us see the patterns. Higher-level stances offer greater compression, and this is where the importance of patterns enters. Mild realism is the doctrine that makes the most sense when what we are talking about is real patterns seen from the intentional stance.
When can we talk about "real patterns"? Dennett claims that a pattern existing in some data is real if there is a description of the data that is more efficient than the bit map. Sometimes we may fail to recognize a pattern in some data even though it is genuinely there to be recognized by other systems. Moreover, a pattern, though genuinely there, may not be recognizable by any currently devised system. This demonstrates the ontological independence of patterns – a pattern may exist independently of whether anyone recognizes it. A pattern exists in time and space, not necessarily in a mind.
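A minimal sketch of this compression criterion, using Python's zlib as a stand-in compressor and an arbitrary 0.9 threshold (both are assumptions of this illustration, not details from Dennett's paper):

```python
import random
import zlib

def is_real_pattern(data: bytes, margin: float = 0.9) -> bool:
    """Dennett-style test: the data contain a real pattern if some description
    (here: the zlib-compressed form) is shorter than the raw bit map."""
    compressed = zlib.compress(data, level=9)
    return len(compressed) < margin * len(data)

patterned = bytes(i % 7 for i in range(10_000))               # regular, highly compressible
noise = bytes(random.randrange(256) for _ in range(10_000))   # patternless

print(is_real_pattern(patterned))  # True  - a shorter description exists
print(is_real_pattern(noise))      # False - nothing beats the bit map
```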
While there are no belief-like structures in the head, there are patterns of behaviour, and these are the referents of the attributions of folk psychology.
David Rosenthal
Explaining Consciousness
In this paper David Rosenthal considers the disputed issue of consciousness itself. At the beginning he distinguishes the various concepts that we call consciousness: creature consciousness – a biological matter – an organism is conscious when it is awake and its sensory systems are normally receptive; transitive consciousness – a relation between an organism and the world – an organism's being conscious of something; and state consciousness – a matter of phenomenology – whether a given mental state is conscious or not. It is one thing to say of an individual person or organism that it is conscious, and quite another to say of one of the mental states of a creature that it is conscious. And it is recognized that not all mental states are conscious (such as certain desires, emotions or some bodily sensations such as pain). But what is it for a mental state to be conscious?
According to Rosenthal's theory, a mental state is conscious if it is accompanied by a specific type of thought: a Higher-Order Thought (HOT). A conscious mental state M of mine is a state that is actually causing an activated belief (generally a non-conscious one) that I have M, and causing it non-inferentially. An account of phenomenal consciousness can then be generated by stipulating that the mental state M should have a causal role or content of a certain distinctive sort in order to count as an experience, and that when M is an experience it will be phenomenally conscious when suitably targeted. A mental state is conscious just in case it is accompanied by a noninferential, nondispositional, assertoric thought to the effect that one is in that very state.
Another issue worth mentioning is the explanation of the connection between consciousness and sensory states. Conscious sensory states occur when a mental state has two properties: a sensory quality and the property of state consciousness. What it is like to be a particular conscious individual is a matter of the sensory qualities of that individual's conscious experiences. The consciousness of those experiences, by contrast, is simply that individual's being aware of having the experiences. The HOT determines what it is like to be in the relevant sensory state. The HOT hypothesis deals satisfactorily with the phenomenon of state consciousness, even for the special case of sensory states.
Robert Cummins
“How does it work?” vs. “What are the laws?”
Cummins considers different kinds of psychological explanation and their language. A special science like psychology differs from more fundamental, "normal" sciences like physics. Psychology relies on discovering effects and confirming them, not on going the "axiomatic or analytic way" of the fundamental sciences. What the author points out is that psychological explanations do not refer to laws, as the model of explanation called Deductive-Nomological claims they should. According to the DN model, scientific explanation is subsumption under natural law. But laws simply tell us what happens; they do not tell us why or how – for example, the point of a physical explanation is not to explain why leaves fall, it just refers them to the basics of the universe. The laws of psychology are explananda because they specify effects (they are laws in situ). The principles of psychology do not govern nature generally, but only the special sorts of systems that are their proper field of study. Theories in such sciences are constructed to discover and specify effects and to explain them in terms of the structure of the systems that exhibit them.
Psychologists faced with the task of explaining an effect generally have recourse to
imitating one or another of the explanatory paradigms established in the discipline. There
are five general explanatory paradigms.
The first paradigm is Belief-Desire-Intention psychology. It explains how our desires, intentions and beliefs interact. There are problems here, such as Leibniz's Gap: if we inspect a machine's interior, we will only find parts that push one another, and we will never find anything to explain a perception. And so, it seems, we should seek perception in the simple substance and not in the composite or in the machine.
The second paradigm is computationalism: the brain is compared to a computer and the mind is the effect of its computations. This paradigm uses a "top-down" strategy – it identifies a certain skill, then analyses it in terms of inputs and outputs. Still, even if we manage to find the computation and program a device to perform it, we will not have proved that the brain works the same way (a biological mechanism is not a machine).
Connectionism is another top-down paradigm, but it appeals to an architecture that is capable of doing something, rather than to inputs and outputs. It has to deal with the same issue as the previous paradigms.
Neuroscience works similarly to connectionism – it is also about architecture, but it focuses on the brain rather than on abstract objects.
The last paradigm is evolutionary psychology, which focuses more on answering the question "why?".
In spite of a good deal of lip service to the idea that explanation is subsumption
under law, psychology, though pretty seriously disunified, is squarely on the right track. Its
efforts at satisfying explanation, however, are still bedeviled by the old problems of
intentionality and consciousness.
Allen Newell and Herbert A. Simon
Computer Science as Empirical Inquiry: Symbols and Search
The authors of this paper address the understanding of computer science as empirical inquiry. They claim that we need to understand that the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature. As they say, the best way to show the development of this understanding is by illustrations. The authors use two example conceptions: the Physical Symbol System Hypothesis and the Heuristic Search Hypothesis.
Laws of qualitative structure are seen everywhere in science. While such statements or laws are generic in nature, they are of great importance and often set the terms on which a whole science operates. To capture the essence of the statement about the representation of behaviour, we need to understand what symbols are. We will then reach the roots of intelligent action, which is the primary topic of artificial intelligence.
A physical symbol system consists of a set of symbols, related to each other, which can occur as components of expressions. It contains a collection of processes that operate on these expressions to produce other expressions. A physical symbol system is thus a machine that produces through time an evolving collection of symbol structures. It has two central notions: designation and interpretation. The Physical Symbol System Hypothesis states that a physical symbol system has the necessary and sufficient means for general intelligent action. It is clearly a law of qualitative structure and an empirical hypothesis. Research in information-processing psychology consists of observations and experiments on human behaviour and of programming symbol systems to match it. Computer science develops scientific hypotheses which it then seeks to verify by empirical inquiry.
The question of how symbol systems produce intelligent behaviour leads to the concept of heuristic search. The Heuristic Search Hypothesis states that solutions to problems are represented as symbol structures: a physical symbol system exercises its intelligence in problem solving by search – that is, by generating and progressively modifying symbol structures until it produces a solution structure. The task of intelligence is to avert the threat of the exponential explosion of search by extracting and using information about the structure of the problem space. Physical symbol systems solve problems by heuristic search – they extract information from the problem domain and use it to guide the search and avoid wrong turns.
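A minimal sketch of heuristic search in this spirit, assuming a toy number puzzle (reach a goal number from 1 using only ×2 and +3) rather than any task Newell and Simon discuss: symbol structures (states and paths) are generated and progressively modified, and a heuristic decides which to expand next instead of enumerating the whole space.

```python
import heapq

def heuristic_search(start: int, goal: int, max_steps: int = 10_000):
    """Greedy best-first search: always expand the state whose heuristic
    (distance to the goal) is smallest."""
    frontier = [(abs(goal - start), start, [start])]
    seen = {start}
    for _ in range(max_steps):
        if not frontier:
            break
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                          # solution structure found
        for nxt in (state * 2, state + 3):       # generate successor symbol structures
            if nxt not in seen and nxt <= 4 * goal:
                seen.add(nxt)
                heapq.heappush(frontier, (abs(goal - nxt), nxt, path + [nxt]))
    return None

print(heuristic_search(1, 53))  # prints one generated path of states from 1 to 53
```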
Artificial intelligence research is based on concrete experience with the behaviour of specific classes of symbol systems in specific task domains – that is, on empirical inquiry.
Daniel C. Dennett
Cognitive wheels: the frame problem of AI
In this article Dennett tries to deal with the so-called frame problem. To most AI researchers, the frame problem is the challenge of representing the effects of actions in logic without having to represent explicitly a large number of intuitively obvious non-effects. But Dennett points out that it is a deep epistemological problem brought to light by the novel methods of AI, and still far from being solved. The puzzle is how a cognitive creature with many beliefs about the world can update those beliefs when it performs an act so that they remain roughly faithful to the world.
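A minimal illustration of the representational side of the problem, assuming a toy world of named facts (not a formalism Dennett endorses): if unmentioned facts are not assumed to persist by default, every action would need explicit axioms for all of its non-effects, and that list grows with the size of the world.

```python
# Toy world: fluents (named facts) and actions that list only what they change.
state = {"door_open": False, "light_on": False, "robot_at": "room_a"}

actions = {
    "open_door": {"door_open": True},
    "move_to_b": {"robot_at": "room_b"},
}

def apply_action(state: dict, action: str) -> dict:
    """Update only the fluents the action mentions; every other fact is assumed
    to persist - the default whose justification the frame problem demands."""
    new_state = dict(state)
    new_state.update(actions[action])
    return new_state

print(apply_action(state, "open_door"))
# {'door_open': True, 'light_on': False, 'robot_at': 'room_a'}
```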
An intelligent being learns from experience, and then uses what it has learned to guide expectation in the future. The problem is that an AI system initially knows nothing at all about the world, so clearly some information has to be innate. The problem of installation and storage is also mentioned – the information must be installed in a usable format. The task facing the AI researcher appears to be to design a system that can plan by using well-selected elements from its store of knowledge about the world it operates in. What is needed is a system that genuinely ignores most of what it knows and operates with a well-chosen portion of its knowledge at any moment. An artificial agent with a well-stocked compendium of frames, appropriately linked to each other and to the impingements of the world via its perceptual organs, would face the world with an elaborate system of what might be called habits of attention and benign tendencies to leap to particular sorts of conclusions in particular sorts of circumstances.
Many critics of AI are convinced that any AI system is, and must be, nothing but a gearbox of cognitive wheels – designs that are profoundly unbiological, however wizardly and elegant they are as bits of technology. The model should not be described as a set of separately represented implications of actions but as a network of connections which the system could use in solving problems. The key to a more realistic solution of the frame problem must be a complete rethinking of the semantic-level setting, prior to any concern with syntactic-level implementation.
Though the frame problem remains unsolved, we may be a few steps further after taking all of these considerations into account. At least some of the problems have now been identified – we can only look forward to further progress.
Benjamin Libet
Time of conscious intention to act in relation to onset of cerebral activity
In the early 1980s the neuroscientist Benjamin Libet performed a sequence of remarkable experiments that were later adopted by determinists and compatibilists to argue that human free will does not exist. Libet and his colleagues decided to test the process of making a decision empirically, using EEG.
The most surprising finding is that the researchers recorded mounting brain activity related to the resulting action as many as three hundred milliseconds before subjects reported their first awareness of a conscious will to act. In other words, apparently conscious decisions to act were preceded by an unconscious buildup of electrical activity within the brain. So the brain is ready to act even before one consciously perceives the intention to move.
Libet's experiments suggest to some that unconscious processes in the brain are the true initiators of volitional acts, and that free will plays no part in their initiation. If unconscious brain processes have already taken steps to initiate an action before consciousness is aware of any desire to perform it, the causal role of consciousness in volition is all but eliminated, according to this interpretation. His measurements of the time before a subject is aware of self-initiated actions have had an enormous impact on the debate about human free will, despite Libet's own view that his work does nothing to deny human freedom. As Libet notes, a subject's judgement of when they became aware of something may not be accurate, in part because the subject also had to focus on the experiment itself. It has to be taken into consideration that these results were obtained in an experimental setting and that the decision task was not complicated. Nevertheless, the experiment was ground-breaking: although it relies on introspection, which is subjective and can be inaccurate, no one had tried to examine consciousness in this way before. Still, I think the statements made afterwards by some philosophers are misleading because of over-interpretation.
Jakob Hohwy
The predictive mind
The author asks how our brain accomplishes perception, so that we perceive selected information and are able to draw the proper inferences. He remarks on the additional constraints that are needed to perform reliable causal inference about the sensory input – such as prior beliefs or previous experiences, which may update the inferential reasoning. The hypotheses picked by our brain have a high likelihood (the probability that the hypothesised cause would produce those effects) but also a high prior probability (based on the frequency of the events it describes). But how does our brain accomplish this?
Likelihood and prior are the main ingredients of Bayes' rule: it tells us to update the probability of a given hypothesis, given some evidence, by considering the product of the likelihood and the prior probability of the hypothesis, which yields the posterior probability. To connect perceptual inference with Bayes' rule it has to be shown that this approach accommodates differences in perception as such, not only differences in categorisation. We can only understand how brains engage in probabilistic inference if we understand how neurons can realize the functional roles set out by forms of Bayes' rule. And to understand that it is necessary to appreciate a hierarchical notion of perception: the brain processes regularities in order from faster to slower, with massive data transfer between the levels; lower levels generate hypotheses about fast, fine-grained regularities, while higher levels supply the prior beliefs that constrain them.
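A minimal sketch of such a Bayesian update; the hypotheses, priors and likelihoods below are invented toy values for illustration, not anything from Hohwy's book.

```python
def bayes_update(prior: dict, likelihood: dict, evidence: str) -> dict:
    """Posterior over hypotheses after seeing `evidence`:
    P(h | e) = P(e | h) * P(h) / sum over h' of P(e | h') * P(h')."""
    unnormalized = {h: likelihood[h][evidence] * p for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Toy perceptual inference: is the shape at the window caused by a cat or a bag?
prior = {"cat": 0.3, "bag": 0.7}                      # how frequent each cause is
likelihood = {"cat": {"moves": 0.8, "still": 0.2},    # P(evidence | hypothesis)
              "bag": {"moves": 0.1, "still": 0.9}}

print(bayes_update(prior, likelihood, "moves"))       # "cat" now dominates (≈ 0.77)
```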
If the model fits the data badly, a prediction error arises. Prediction error is a combination of the surprise about the outcome and the difference between the model and the hypothesis. Perception arises through prediction error minimization, in which the brain's hypotheses about the world are stepwise brought closer to the flow of sensory input caused by things in the world. This idea gives the brain all the tools it needs to extract the causal regularities in the world and to use them for predictions about what comes next, in a way that is sensitive to what is currently delivered to the senses. Using the notion of prediction error minimization, it is easy to see how the brain perceives and why this is best conceived as an inferential process: it fits the normative constraints of Bayes. Neuronal populations are just trying to generate activity that anticipates their input; in the process of doing so they realize Bayesian inference.
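A minimal sketch of prediction error minimization, assuming a single scalar hypothesis and a fixed learning rate – a toy gradient-style update, not Hohwy's full hierarchical model:

```python
def minimize_prediction_error(sensory_input: float, prediction: float,
                              learning_rate: float = 0.1, steps: int = 50) -> float:
    """Step the hypothesis toward the input by repeatedly reducing the
    prediction error (input minus prediction)."""
    for _ in range(steps):
        error = sensory_input - prediction
        prediction += learning_rate * error
    return prediction

print(minimize_prediction_error(sensory_input=5.0, prediction=0.0))  # ≈ 4.97, close to the input
```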