Vagueness is a pervasive and perplexing feature of natural language, characterised by the susceptibility of terms to sorites paradoxes:
1. A person with £1,000,000,000.00 is rich.
2. For all n, if a person with £n is rich, then a person with £(n – 0.01) is rich.
3. Therefore, a person with £0.01 is rich.
The offending premise is clearly the induction premise 2, or tolerance principle, with respect to ‘rich’. Tolerance principles reflect the fact that the application of some terms seems insensitive to minute changes in value, in this case, the amount of money that a person has. Fully competent speakers intuitively judge such principles to be true, despite the fact that they entail propositions inconsistent with their other beliefs, as in the case of 3. This fact about speakers renders the sorites paradoxes particularly recalcitrant.
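The sheer scale of the chain can be made vivid with a small arithmetic sketch (purely illustrative, and no part of any theory under discussion): iterating the tolerance step from the base case mechanically yields the absurd conclusion roughly 10^11 unobjectionable-looking steps later.

```python
# Purely illustrative: iterate the tolerance principle for 'rich'.
# Working in pence avoids floating-point error with £0.01 decrements.

BASE = 100_000_000_000  # £1,000,000,000.00 expressed in pence

def tolerance_step(n_pence: int) -> int:
    """If a person with n pence is rich, so is a person with n - 1 pence."""
    return n_pence - 1

# No single application seems objectionable, yet chaining the premise
# BASE - 1 times classifies a person with a single penny as rich.
steps = BASE - 1
print(steps)  # number of applications from the base case to £0.01
```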
Philosophical theories of vagueness have traditionally been concerned primarily with establishing the correct logic, and the appropriate formal semantic model, for vague languages. Such theories, however, seem to be at the wrong level to provide a satisfying explanation of the source of the paradoxes. What explains the cognitive allure of tolerance principles? Why do the paradoxes exert such pull on normal, competent speakers?
Inconsistentism holds that this allure stems from facts about our ordinary linguistic understanding. Specifically, Inconsistentism claims that it is constitutive of our semantic competence that we accept, or are disposed to accept, tolerance principles with respect to vague terms.
I shall argue for Cognitive Inconsistentism, which holds that tolerance principles, or more precisely their psychological analogues, are causally and explanatorily implicated in our acquisition and subsequent deployment of mental categories, or concepts. Such categories arguably underpin our semantic competence, and the tolerance-like principles they embody thus explain our manifest linguistic judgements with respect to tolerance principles. Cognitive Inconsistentism entails that our semantic competence is underpinned by conflicting principles. This surprising fact requires a rethink of our linguistic theories; I shall propose a semantic Optimality Theory for vague languages.
One key argument for Cognitive Inconsistentism expands on poverty-of-stimulus considerations. I shall argue that a child’s ability to use a vague term productively in novel and variant situations, given minimal exposure to its instances, implies the operation of inductive principles on a stored semantic representation. This claim is supported both by a priori philosophical considerations and by empirical research. Findings from Implicit Category Learning, for instance, demonstrate that the acquisition of many categories involves inductive generalisation from an initial range of salient features.
More support for Cognitive Inconsistentism comes from the phenomenon of category chaining. Experiments have shown that, presented with a sequence of objects a, b, c, ... n, each successive object similar to the last, subjects can come to classify n under the same category as a, despite the fact that n differs significantly from the prototype for a. Crucially, in some instances, the ‘chained’ object n is in fact more similar to the prototype for a distinct category. This provides the best evidence yet of inductive, tolerance-like principles in our cognitive classificatory mechanisms.
Over the last twenty years or so, arguments that move from the systematicity of behavioral capacities to the conclusion that the language of thought hypothesis (henceforth LOT) is true have become increasingly familiar. Although such arguments can take various forms (e.g., Fodor, 1987; McLaughlin, 1993; García-Carpintero, 1995; Davies, 1991, 2004), all of them exhibit two fundamental commitments on the part of the defender of LOT: (1) acceptance of intentional explanations of systematic behavioral capacities, and (2) Intentional Realism, the commitment to the reality of intentional causal-explanatory categories. As regards (1), intentional explanations are taken to be explanations that appeal to states with representational content. As regards (2), the reality in question is understood, as in any science outside psychology, in the very specific sense that some explanatory correlation must exist between intentional causal-explanatory components and lower-level descriptions of phenomena.
If this is sound, Peacocke's earlier developments (Peacocke, 1986a, 1986b, 1989) should be revised. Peacocke's main concerns in those papers are left untouched by the present considerations. In developing them, however, Peacocke explicitly endorses the view that explanations at his level 1.5 are neutral with respect to the question of whether LOT or connectionist models are preferable as accounts of the mind in cognitive research. This neutrality, I claim, is inconsistent, since 1.5-explanations certainly involve commitments (1) and (2) above. In relation to (1), 1.5-explanations are precisely intentional in the required sense and, even if there is a fact of the matter about whether languages can have unstructured semantic theories (cf. Schiffer, 1987), this is clearly not a line of reasoning that Peacocke would pursue (Peacocke, 1986b, p. 393). With respect to (2), we can undoubtedly take Peacocke’s to be a realist account, since (a) explanation at level 1.5 is explicitly a form of causal explanation (Peacocke, 1986a, p. 102; 1989, p. 113) and (b) explanation at level 1.5 is taken to yield a criterion for the psychological reality of semantic theories (Peacocke, 1986a, p. 115) and grammars (Peacocke, 1989, p. 114). Peacocke's failure to notice this inconsistency can surely be explained by his misleading conception of LOT as involving both more (e.g., explicit representation of semantic axioms) and less (e.g., only a commitment at the algorithmic level) than it really involves.
Although silence on this issue has largely prevailed, it would not be fair to suggest that Peacocke's dubious neutrality has been maintained invariably over the years. It has often been maintained (e.g., Peacocke, 1992, p. 185; 1994, p. 310), but not invariably, as we can infer from at least one succinct but conspicuous comment (Peacocke, 2004, pp. 97-8). Be this as it may, the present line of reasoning can and should be extended if we consider that, in the philosophy of mind, any realist theory about systematic capacities involves the substantive empirical commitments of LOT in the cognitive domain, even if those commitments are not always fully articulated. Neo-Fregean theories of concepts, of which Peacocke's (1992) is the paradigm, are clearly a case in point. Accordingly, I agree with that part of Davies's work devoted to arguing for an interactive relationship between personal and subpersonal levels of description (Davies, 2000a, 2000b). However, it is important to stress that the present considerations carry no commitment, as Davies's accounts do, to the view that LOT, an empirical hypothesis, can be established a priori. To claim that significant a priori theories about the mind, when rightly analyzed, endorse the commitments of LOT does not amount to the claim that a priori arguments in favor of LOT are satisfactory. There are independent reasons to deny this latter claim. Two such reasons are the derivable problem of armchair knowledge, pointed out by Davies himself (e.g., 2004), and the consideration of Intentional Realism as a form of Physicalism, the putative knowledge of which is clearly a posteriori.
Davies, M. (1991). “Concepts, Connectionism, and the Language of Thought”, in W. Ramsey, S. Stich and D. Rumelhart (eds.), Philosophy and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates.
Davies, M. (2000a). “Persons and their Underpinnings”, Philosophical Explorations, Vol. 3, No. 1, pp. 43-62.
Davies, M. (2000b). “Interaction without Reduction: The Relationship between Personal and Sub-personal Levels of Description”, Mind & Society, 2, Vol. 1, pp. 87-105.
Davies, M. (2004). “Aunty’s Argument and Armchair Knowledge”, in J.M. Larrazabal and L.A. Pérez Miranda (eds.), Language, Knowledge, and Representation. Dordrecht: Kluwer Academic Publishers.
Fodor, J. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, Massachusetts: MIT Press.
García-Carpintero, M. (1995). “The Philosophical Import of Connectionism: A Critical Notice of Andy Clark’s Associative Engines”, Mind & Language, Vol. 10, No. 4, pp. 370-401.
McLaughlin, B. (1993). “The Connectionism/Classicism Battle to Win Souls”, Philosophical Studies, 71, pp. 163-190.
Marr, D. (1982). Vision. San Francisco: Freeman.
Peacocke, C. (1986a). “Explanation in Computational Psychology: Language, Perception and Level 1.5”, Mind & Language, Vol. 1, No. 2, pp. 101-123.
Peacocke, C. (1986b). “Replies to Commentators”, Mind & Language, Vol. 1, No. 2, pp. 388-402.
Peacocke, C. (1989). “When is a Grammar Psychologically Real?”, in A. George (ed.), Reflections on Chomsky. Oxford: Basil Blackwell.
Peacocke, C. (1992). A Study of Concepts. Cambridge, Massachusetts: MIT Press.
Peacocke, C. (1994). “Content, Computation and Externalism”, Mind & Language, Vol. 9, No. 3, pp. 301-335.
Peacocke, C. (2004). “Interrelations: Concepts, Knowledge, Reference and Structure”, Mind & Language, Vol. 19, No. 1, pp. 85-98.
Schiffer, S. (1987). The Remnants of Meaning. Cambridge, Massachusetts: MIT Press.
Can embodied cognitive science be reconciled with traditional functionalism in the philosophy of mind? In this paper I outline the options available and assess the prospects for reconciliation. I begin by introducing Shapiro’s (2004) claim that embodiment and functionalism seem to be at odds with one another, and then I assess Clark’s (2008) response. Recent papers by Weiskopf (2008) and Sprevak (under review) suggest problems with adopting the extended functionalism which Clark promotes: I explore the arguments of these two papers and the issues they raise for extended functionalism and embodied cognitive science in general.
Functionalism, the doctrine that mental states are to be characterized by their functional role within a cognitive system, was strongly associated with the rise of orthodox cognitive science in the late twentieth century. Functionalism’s commitment to multiple realizability is mirrored in orthodox cognitive science’s tendency to abstract away from the implementation details of cognitive processes.
Embodied cognitive science challenges many of the commitments of orthodox cognitive science, but its relationship to functionalism is unclear: while some recent work in embodied cognitive science seems to challenge functionalism, at least one trend within the embodied movement preserves and even extends the functionalist tradition. The first part of this paper explores the relationship between functionalism and embodied cognitive science, beginning with Shapiro’s (2004) argument that recent work on vision and movement implies that the kind of mind a creature has reflects quite specifically the kind of body it has, which is in tension with the functionalist idea that the mind is neutral with respect to the body. Shapiro thus suggests that embracing embodied cognitive science will result in the rejection of functionalism.
Clark (2008) counters Shapiro’s claim by distinguishing between two broad trends in embodied cognitive science, only one of which is considered by Shapiro. Shapiro’s claim is that the details of our embodiment make a special and non-eliminable contribution to our mental states, but Clark argues for a second way to understand embodiment in the context of cognitive science: ‘extended functionalism’, according to which the body plays a vital part in cognition in virtue of the particular functional role it plays. Like traditional functionalism, extended functionalism identifies mental states and processes with their functional profiles, except that these profiles are understood as supervening on the larger brain-body-world system instead of just the brain.
After assessing Shapiro’s (2004) claim and Clark’s (2008) response, the second part of this paper evaluates two recent arguments against the ‘extended functionalism’ promoted by Clark: Weiskopf’s (2008) argument that extended mental states lack informational integration and are thus unable to play the same functional role as normal mental states, and Sprevak’s (under review) argument that the version of extended functionalism entailed by traditional functionalism is so radical as to form a reductio of functionalism. I conclude by assessing the prospects for reconciliation between embodied cognitive science and functionalism.
Clark, A. (2008) ‘Pressing the flesh: a tension in the study of the embodied, embedded mind?’ Philosophy and Phenomenological Research 76, 37-59
Shapiro, L. (2004) The Mind Incarnate MIT Press
Sprevak, M. (under review) ‘Extended cognition and functionalism’
Weiskopf, D. (2008) ‘Patrolling the mind’s boundaries’ Erkenntnis 68, 265-276
In a most general sense, this project is an attempt to reconcile the debate between Massive Modularists and Prinzian-type non-modularists. Specifically, I take a ‘middle’ position, one that is not too dissimilar from Fodor’s original account. Nevertheless, because the embodied and embedded character of cognitive processes, although debatable in the details, cannot be ignored in constructing a plausible architecture of mind, Fodorian modularity, I argue, will consistently run up against difficulties. Rather than a dichotomous view of the mind, split between non-modular and modular processes, I will opt for placing the demands of modularity and their realization in the mind on a continuum, an architecture I term Cognitive Continuum Modularity (CCM). The chief source of reasons for placing cognition on a continuum comes from social cognition. Thus, I plan to defend these seven claims regarding an architecture that successfully includes the social domain:
1. Social cognition is a real, dominant, and important mental component; thus, any suitable architecture of the mind must take it into account.
2. There are distinct components of social reasoning and social perception that divide social cognition more generally into modular and non-modular input systems.
3. If at least one functional mechanism of the mind can be shown to have both modular and non-modular sub-mechanisms, then the mind’s architecture can be neither massively modular, nor entirely non-modular.
4. A better model of cognition would depict mental functions and mental mechanisms as placed along a continuum, ranging from domain-specific to domain-general and, likewise, from a very high degree of informational encapsulation to a low or null degree.
5. The type of modularity I propose, Cognitive Continuum Modularity (CCM), takes just such a middle position and is therefore a more palatable alternative to the extreme positions in #3.
6. CCM also provides an account of what I term ‘Distributed Modules’: input systems whose domains are specific, but which are not informationally encapsulated. The information processed by such modules can only be determined by reference to the embedded character of mental processes, or perhaps even with respect to the organism’s use of tools in the external world.
7. Thus, a final outcome of CCM is its ability to provide an architecture, not simply for an internal mind, but for an embedded and extended mind.
Cf. Carruthers, P. (2006). The Architecture of the Mind.
Prinz, J., 2006. Is the Mind Really Modular?
Chiefly found in The Modularity of Mind, 1983
An idea I borrow from Cundall (2006), Rethinking the Divide: Modules and Central Systems. Philosophia 34.
Cf. Tager-Flusberg, H. (2005). What developmental disorders can reveal about cognitive architecture: The example of theory of mind. In P. Carruthers, S. Laurence, and S. Stich, (eds.), The Innate Mind: Structure and Contents.
The mindreading capacity, that is, the capacity to ascribe mental states to a target, has been the subject of considerable attention from different fields in the past thirty years. Two main approaches have emerged: Theory Theory (TT) and Simulation Theory (ST). According to the former, ordinary people ascribe mental states to others by means of a ‘theory of mind’ that they, more or less tacitly, possess. According to the latter, ordinary people ascribe mental states to others by trying to replicate, or simulate, their mental life.
Recently, new discoveries in neuroscience seem to have brought decisive support in favour of simulation-based accounts of mindreading. Studies of normal subjects have established the existence of a specific class of neurons, mirror neurons, that are activated both when the subject experiences a specific emotion and when the subject observes behavioural signs that another individual is experiencing the same emotion. Moreover, studies of brain-damaged subjects have established the existence of corresponding paired-deficit phenomena: the subject is impaired both in the capacity to experience a given emotion and in the capacity to recognise the same emotion in other individuals on the basis of behavioural clues. According to Goldman, this neuroscientific evidence favours ST over TT, as ST both provides a better explanation of the two sets of findings and predicted the paired-deficit phenomena.
We want to argue, to the contrary, that the neuroscientific evidence does not decisively settle the matter against TT. We distinguish three alternative ways to characterise a TT account of mindreading, differing with respect to (1) the definition of mental recognition, (2) the role assigned to introspection in the recognition of mental states, and (3) the mechanisms postulated for the attribution of mental states. In this paper, we defend a ‘moderate’ TT approach to mindreading by showing that it is predictively on a par with, and explanatorily better than, ST. In particular, we argue that a sophisticated version of TT will maintain that the ability to experience an emotion qua a specific emotion is based on the same tacit theory on which emotion recognition in others is based. Further, given the information-processing constraints on brain evolution, it is likely that these recognitional abilities will be localised in specific neural structures. This suggests that TT can very well explain the existence of mirroring phenomena and the fact that impaired subjects have specific paired deficits. In the former case, it is due to the same recognitional, theory-based capacity being employed in both first- and third-person mindreading. In the latter case, it is due to the fact that the subjects lack these recognitional abilities, and thus show deficits in both self- and other-ascriptions of specific emotions. This explanation is superior to that of ST, as it accounts for how we experience emotions, and not just that we do so. Further, the plausibility of the modularity of these abilities means that this form of TT should at least weakly predict that the neural findings would involve localised phenomena of the kind that have been observed.
The study of autism has begun to elicit interest in the philosophical literature. In particular, many have taken autism to be a test case for deciding between theory-theory and simulation accounts of the nature of folk psychological reasoning. The aim of this paper is to argue that within this debate, the focus on autism has been misguided and that the empirical evidence regarding autism is irrelevant to the theory-theory versus simulation debate. I begin by presenting a quick overview of the theory-theory and simulation camps within folk psychology. I then progress to the main argument of the paper: that philosophers have been incorrect in paying so much attention to autism in this debate. I will highlight three crucial pieces of evidence that support my claim: (1) Autistic individuals’ deficits are more widespread than just difficulty in dealing with mental states. This is demonstrated by the equivalent levels of failure in both false belief tests (e.g. Baron-Cohen, Leslie & Frith, 1985; Peterson & Bowler, 2000) and false sign tests (e.g. see Perner & Leekam, 2008 for a review), with only the former requiring an understanding of mental states. Folk psychological theories cannot account for these findings as they are focussed solely on explaining our understanding of other minds; (2) Neither theory-theory nor simulation can account for other autistic deficits such as repetitive behaviour (Lewis & Bodfish, 1998), restricted interests, impaired joint attention (Griffith, Pennington, Wehner & Rogers, 1999), deficits in executive function (Ozonoff & McEvoy, 1994) and working memory (Bennetto, Pennington & Rogers, 1996); and (3) Even though the simulation theory may aim to account for autistic individuals’ social deficits by arguing that they cannot simulate other people, which in turn influences how they empathise and interact with other individuals, this is a restricted account.
These arguments, I feel, naturally lead to the conclusion that autism is irrelevant to the examination of the theory-theory and simulation debate within folk psychology.
The purpose of this paper is to investigate the demarcation between scientific reasoning and ‘folk reasoning’. To this end, I will examine the claim that people respond to anomalies in their views of the world in ways that are analogous to scientists’ responses to theoretical anomalies; specifically, Ronnie Janoff-Bulman (1992, p. 27), and Derek Bolton and Jonathan Hill (2003, p. 357), contend that victims of posttraumatic stress disorder (PTSD) exhibit a scientific response to a major theoretical anomaly (in the guise of a triggering, traumatic event). The particular model of scientific reasoning that these psychologists (loosely) deploy is Kuhn’s view of theory change. I will extend and elaborate on their work, outlining the tacit commonsense (or folk psychological) theories that are threatened by traumatic events. Next I will give a brief overview of Kuhn’s model of science, followed by predictions, based on Kuhn’s account, about the kind of reasoning PTSD patients engage in; these will cover (a) pre-incident and post-incident vulnerability factors for PTSD, (b) resilience, and (c) recovery from trauma. I will indicate where I consider the analogy to break down. I will contend that victims of trauma do, in fact, reason in a manner that exhibits proto-scientific behaviour. This finding is based on the view, due to Michael Ruse (1986, pp. 32-35), that when scientists invoke “analogy-as-heuristic” (as an aid to comparison), they are often implicitly trying to eliminate the analogy and make an identity claim. So, the proposal being forwarded here is that, by a process of inference to the best explanation, the overlap between scientific and “folk” reasoning results from shared cognitive mechanisms. I will conclude the paper by contending that the extent to which the science analogy for PTSD works extends our understanding of (1) the demarcation between science and non-science, and (2) the classification of posttraumatic stress disorder qua disorder.
I will also suggest ways in which this analysis can be extended to other psychiatric classifications.
Bolton, D. & Hill, J. 2003. Mind, meaning, and mental disorder: The nature of causal explanation in psychology and psychiatry. Oxford: Oxford University Press
Janoff-Bulman, R. 1992. Shattered assumptions: Towards a new psychology of trauma. New York, NY: The Free Press (Macmillan)
Ruse, M. 1986. Taking Darwin Seriously. Oxford: Blackwell
Nowadays, the will is no longer the exclusive concern of philosophy, but rather a thriving field of interdisciplinary inquiry. This inquiry encompasses both the role of conscious will in guiding voluntary actions (Wegner, 2002) and the notion of willpower as the capacity to overcome temptations and distractions (Frankfurt, 1971; Mele, 1995; Bratman, 1999; Holton, 1999). This essay focuses on the latter, in particular on George Ainslie’s theory of willpower as intertemporal bargaining (1992; 2001; 2005).
Ainslie’s contribution to this topic is threefold. First, he pioneered the study of temporal discounting, helping to demonstrate that future rewards are discounted not exponentially, as classical economics assumes, but hyperbolically. This implies that the motivational salience of rewards nearest in time is not proportional to that of later benefits, so that expected utility is assessed myopically. Second, Ainslie used hyperbolic discounting to explain several forms of weakness of will, including addiction (Bickel, Marsch, 2001) and procrastination (Silver, Sabini, 1981): greed for temporally proximate rewards often leads agents to forsake larger, later benefits, thus reducing fitness. Third, he proposed a theory of willpower based on the notion of bundling, i.e. the tendency to see present choices as indicative of general policies covering similar situations, so that the agent decides not just the current case but a whole series of future alternatives.
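Both phenomena can be sketched numerically. The sketch below is illustrative only: it assumes Mazur's standard hyperbolic form V = A / (1 + kD), and the reward sizes, delays, and discount rate are invented for the example.

```python
# Illustrative sketch (invented parameters, not Ainslie's own figures):
# Mazur's hyperbolic discount function V = A / (1 + k*D), showing
# (a) preference reversal as the smaller-sooner reward draws near, and
# (b) bundling a series of similar choices restoring the larger-later option.

def value(amount, delay, k=1.0):
    """Hyperbolically discounted present value of a delayed reward."""
    return amount / (1 + k * delay)

SS, LL = 50, 100      # smaller-sooner vs larger-later reward sizes
GAP = 2               # LL arrives 2 time units after SS

# (a) Viewed from afar the agent prefers LL; up close, preference reverses.
assert value(LL, 10 + GAP) > value(SS, 10)     # at a distance, LL wins
assert value(SS, 0.8) > value(LL, 0.8 + GAP)   # up close, SS wins

# (b) Bundling: treat today's choice as deciding a series of five similar
# choices, one every 10 time units. The summed discounted values favour
# LL even at the point where the single, unbundled choice favours SS.
PERIOD, N = 10, 5
ss_bundle = sum(value(SS, 0.8 + i * PERIOD) for i in range(N))
ll_bundle = sum(value(LL, 0.8 + GAP + i * PERIOD) for i in range(N))
assert ll_bundle > ss_bundle
print(round(ss_bundle, 1), round(ll_bundle, 1))
```

On this toy model the bundling effect is nothing over and above the summation of separate hyperbolic curves, which is exactly the additivity property at issue in the empirical literature reviewed below.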
My aim is to refine Ainslie’s theory of willpower, spelling out some hidden assumptions, discussing how to model intertemporal bargaining, and highlighting what understanding of the self this theory requires. This is part of a broader research programme that aims to detail the cognitive mechanisms enabling willpower in rational agents (Frederick et al, 2002; Bénabou, Tirole, 2004; Berns et al, 2007). I begin by observing that bundling fosters intertemporal coherence only under two assumptions: the distinctiveness of future alternatives, and the independence of later rewards from present choices. Whenever these criteria fail to apply, bundling can variously facilitate, undermine, or be irrelevant to the agent’s willpower. I then critically review the empirical evidence backing Ainslie’s theory of willpower (Mazur, 1986; Kirby, Guastello, 2001; Ainslie, Monterosso, 2003; Mitchell, Rosenthal, 2003), arguing that these data confirm hyperbolic discounting and the additivity of separate discount curves, but fail to demonstrate that subjects indeed bundle future choices to increase self-control over present decisions. I also propose how to amend these experimental paradigms to obtain more discriminating evidence.
Finally, I question Ainslie’s suggestion that intertemporal bargaining can be modelled as an iterated Prisoner’s Dilemma. Two problems make intrapersonal dilemmas different from social interactions: the future self cannot retaliate against the past self, hence dulling the value of iteration for coordination (Bratman, 1999); and each self has to realize her pay-offs immediately at the time of choice, because later on it will be impossible to collect them (Read, 2001). These peculiarities challenge Ainslie’s interpretation in terms of iterated PD. However, his proposal can still be rescued, if a specific model of the self is assumed. I label this model the driver/passenger theory of the self, and discuss its plausibility against other theories of multiple selves (Elster, 1986; Ross et al, 2007).
Ainslie, G. (1992). Picoeconomics: The strategic interaction of successive motivational states within the person. Cambridge: Cambridge University Press.
Ainslie, G. (2001). Breakdown of will. Cambridge: Cambridge University Press.
Ainslie, G. (2005). “Précis of Breakdown of will”. Behavioral and Brain Sciences 28, pp. 635-673.
Ainslie, G., Monterosso, J. (2003). “Building blocks of self-control: Increased tolerance for delay with bundled rewards”. Journal of the Experimental Analysis of Behavior 79, pp. 37-48.
Bénabou, R., Tirole, J. (2004). “Willpower and personal rules”. Journal of Political Economy 112, pp. 848-886.
Berns, G., Laibson, D., Loewenstein, G. (2007). “Intertemporal choice – toward an integrative framework”. Trends in Cognitive Sciences 11, pp. 482-488.
Bickel, W., Marsch, L. (2001). “Toward a behavioral economic understanding of drug dependence: Delay discounting processes”. Addiction 96, pp. 73-86.
Bratman, M. (1999). Faces of intention. Selected essays on intention and agency. Cambridge: Cambridge University Press.
Elster, J. (ed.) (1986). The multiple self. Studies on rationality and social change. Cambridge: Cambridge University Press.
Frankfurt, H. (1971). “Freedom of the will and the concept of a person”. Journal of Philosophy 68, pp. 5-20.
Frederick, S., Loewenstein, G., O’Donoghue, T. (2002). “Time discounting and time preference: A critical review”. Journal of Economic Literature 40, pp. 351-401.
Holton, R. (1999). “Intention and weakness of will”. Journal of Philosophy 96, pp. 241-262.
Kirby, K., Guastello, B. (2001). “Making choices in anticipation of similar future choices can increase self-control”. Journal of Experimental Psychology: Applied 7, pp. 154-164.
Mazur, J. (1986). “Choice between single and multiple delayed reinforcers”. Journal of the Experimental Analysis of Behavior 46, pp. 67-77.
Mele, A. (1995). Autonomous agents: From self-control to autonomy. Oxford: Oxford University Press.
Mitchell, S., Rosenthal, A. (2003). “Effects of multiple delayed rewards on delay discounting in an adjusting amount procedure”. Behavioural Processes 64, pp. 273-286.
Read, D. (2001). “Intrapersonal dilemmas”. Human Relations 54, pp. 1093-1117.
Ross, D., Spurrett, D., Kincaid, H., Stephens, G. L. (eds.) (2007). Distributed cognition and the will. Individual volition and social context. Cambridge: MIT Press.
Silver, M., Sabini, J. (1981). “Procrastinating”. Journal for the Theory of Social Behaviour 11, pp. 207-221.
Wegner, D. (2002). The illusion of conscious will. Cambridge: MIT Press.
Any adequate measure of consciousness must satisfy two basic constraints: it must be bias free and ‘phenomenologically valid’. Recently it has been claimed that a measure based on second-order confidence reports satisfies both these constraints (Kunimoto et al., 2001). Here, subjects perform a first order task (e.g. identify a target), then rate how confident they are in their decision (the second order task). If subjects are able to correctly assess their own decisions, then they must be conscious of the information used to perform the task. Despite the promise of this approach, it has been argued that second order confidence reports are neither bias free nor phenomenologically valid (Evans & Azzopardi 2007, Persaud et al. 2007). These problems hint at more fundamental issues in using ‘confidence’ to measure awareness, and in what the constraint of phenomenological validity actually entails.
There are two basic types of perceptual tasks. First order confidence tasks assess whether a subject is ‘confident’ enough in their perceptions to make a ‘target present’ response rather than a ‘target absent’ response, even though the subject may otherwise deny seeing anything at all. Second order confidence tasks assess how confident a subject is that they made a correct judgment in the first order task. Despite ‘confidence’ being a term often used in analyses of task performance, it is never explained what ‘confidence’ is or how it is related to awareness. This is especially important as subjects often report that making confidence judgements is unnatural and difficult to do.
At first sight, first order confidence tasks appear to assess simple dispositions to act, while second order tasks appear to assess complicated, self-monitoring processes, i.e. reflecting on a perceptual judgement. However, the theory that underlies models of first and second order performance allows predictions to be made about second order responses based on a subject’s first order responses. This suggests that responses to second order tasks may just be transformations of simpler ‘behavioural’ responses to first order tasks, and thus may fail to index anything significant (Galvin et al. 2003).
This possibility opens up questions regarding the constraint of phenomenological validity. Signal detection theory (SDT) analysis is used to assess whether a measure is bias-free, but many argue that SDT does not in itself offer an adequate measure of awareness because its measures are entirely performance-based, and thus not ‘phenomenologically valid’ (Merikle & Cheesman, 1986; Kunimoto et al., 2001). However, little detail has been given on what phenomenological validity actually means. Second-order measures require subjects to reflect on their own internal states, and thus may capture only a reflexive subset of consciousness. How to measure non-reflexive, everyday awareness needs much more discussion, and may prompt a reconsideration of first-order, performance-based measures of awareness. Accepting that second-order ‘subjective’ responses may be derivations of first-order ‘behavioural’ responses would leave a limited role for subjective questions within consciousness studies, and a new form of ‘phenomenological validity’ would have to be articulated.
What is the relation between the behavioural indicators scientists rely on in studying consciousness, such as verbal reports and button presses, and the conscious states subjects report? I shall return an answer to this question that takes as its starting point a distinction between awareness and meta-awareness. The distinction is, very roughly, between having a conscious experience and knowing one is having such an experience. I shall argue that one can have conscious experiences without knowing that one is having them. Consider the experience one has when one’s mind begins to wander. One can be having this experience as one begins to daydream even though one may not realise, and hence not know, that one is having such an experience. Or, to offer another example, consider anosognosic patients, who display a lack of awareness of a deficit. Patients with hemiplegia (paralysis on one side of the body) may verbally acknowledge their condition but nevertheless attempt to engage in behaviours that are precluded by their deficit. In one sense they know that they are paralysed, but in another sense they display a lack of awareness of their deficit when it comes to action. Other anosognosics display the opposite profile, resolutely denying that there is anything wrong with them, and yet behaving in ways that indicate awareness of their deficits. We seem to have a clear double dissociation here between being aware and knowing that one is aware.
I shall argue that the behavioural indicators of consciousness that scientists rely on may only be indicators of meta-awareness – the state a subject is in when she knows she is having a conscious experience. The challenge is thus to find a way of using behavioural indicators to study awareness – the kind of consciousness one has when one is simply having an experience without necessarily knowing one is having it. Might a disciplined use of introspection – focussed attention to experience – provide a means of studying awareness as opposed to meta-awareness? Here we run into hard questions about the extent to which introspection changes experience. The introspectionist psychologist E. B. Titchener, for example, trained his students to introspect on their after-image experiences. He claimed that untrained subjects would find only a pandemonium of colours when they introspected, whereas trained subjects would find a uniformly regular sequence of colours from blue, through green and yellow, to red. For Titchener, only the trained subjects were able to describe their experiences accurately. We might wonder, however, whether there is really a fact of the matter about what after-image experiences are like. Maybe the training Titchener gave to his students actually changed the quality of their experiences rather than allowing them to describe those experiences accurately (see Schwitzgebel 2003). Maybe naïve introspectors and trained introspectors are equally correct about their experiences? I will argue that there are indeed facts about the phenomenal character of experience that meta-awareness might distort. It follows that we can sometimes be wrong about how things seem to us when we introspect.
Schwitzgebel, E. (2003). “Introspective training: reflections on Titchener’s lab manual”. Journal of Consciousness Studies, 11(7-8): 58-76.
Whilst many take note of the schema/image distinction introduced by Gallagher, few take pains to specify further the fine-grained functional properties of the ‘body schema’. The present treatment attempts such an explicit statement by synthesising recent work on egocentric spatial representation, sensorimotor control and global co-ordination.
Converging data demonstrate that egocentric space is represented multi-modally, but in a modular and piecemeal fashion, by means of a number of (possibly over-lapping but separate) body-part-centred representations.
These representations reliably exhibit plastic sensitivity to the functional range of action effectors located in the relevant region.
Accordingly, Legrand et al. suggest that they are first and foremost “spatial representations for action” (696), not only carrying information about a particular region of egocentric space but also directly priming actions in that space, functioning as ‘action-orientated’ egocentric representations.
The existence of such action-orientated spatial representations is underdetermined by the data considered, which are silent on the co-ordination of the relevant representations. Clearly bodily agents can and do negotiate their local space with their bodies as a whole. This can be achieved by integrating modular information to provide flexible intermediate representations. Pouget and various colleagues propose a biologically plausible multi-functional architecture able to solve many of the relevant computational problems.
The architecture is implemented in a multi-layered and multi-functional neural net, dubbed a ‘recurrent basis function network’. The basic functional properties of the network are as follows:
- The ‘input’/‘output’ layers comprise sets of units corresponding to any and all sensory signals available to the system, as well as a set of motor units.
- As the network is recurrent, any of these might function as an ‘output’ layer, projecting outwith the network or feeding back through its ‘hidden’ layer.
- In the ‘hidden’ layer, through simple learning and self-organisational principles, units are able to implement a set of ‘basis functions’, systematically transforming and mutually modulating input signals.
- The activity of any given output unit is determined by a weighted sum of the activity of the basis function units. E.g. basis function units involved in specific motor commands are weighted higher relative to other units – where the relative weighting is determined by the inherent attractor dynamics of the network – enabling task-specific sensitivity.
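To make the scheme concrete, here is a minimal numerical sketch in the spirit of Pouget and Sejnowski’s basis-function proposal (my own toy construction: the one-dimensional set-up, tuning-curve parameters and least-squares weight fitting are illustrative assumptions, not their model). Hidden units multiply a Gaussian tuning curve for retinal target position by a sigmoid of eye position (a ‘gain field’), and an output unit reads off a head-centred coordinate as a weighted sum of hidden-unit activities.

```python
import numpy as np

# Hypothetical 1-D set-up: combine retinal target position and eye
# position into a head-centred estimate via a basis-function layer.
retinal_prefs = np.linspace(-40, 40, 9)  # preferred retinal positions (deg)
eye_prefs = np.linspace(-20, 20, 5)      # preferred eye positions (deg)

def basis_layer(retinal_pos, eye_pos, sigma=10.0, slope=0.2):
    """Each hidden unit multiplies a Gaussian tuning curve for retinal
    position by a sigmoid of eye position (cf. gain fields)."""
    g = np.exp(-(retinal_pos - retinal_prefs[:, None])**2 / (2 * sigma**2))
    s = 1.0 / (1.0 + np.exp(-slope * (eye_pos - eye_prefs[None, :])))
    return (g * s).ravel()  # one unit per (retinal, eye) preference pair

# Output unit: head-centred position (retinal + eye) as a weighted sum
# of basis activity. Weights are fit by least squares here rather than
# learned online, purely for brevity.
grid = [(r, e) for r in np.linspace(-30, 30, 25)
               for e in np.linspace(-15, 15, 25)]
X = np.array([basis_layer(r, e) for r, e in grid])
y = np.array([r + e for r, e in grid])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Query: target at 10 deg retinal, eyes at -5 deg -> head-centred ~ 5 deg
head_centred = basis_layer(10.0, -5.0) @ w
```

The point of the sketch is the one made in the bullet list above: the same hidden-layer basis activity can be read out by differently weighted output units for different tasks, so one intermediate representation serves many sensorimotor transformations.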
Dynamically coupled cortico-cerebellar assemblies of such networks, realising flexible inverse/forward models, are prime candidates for the neurological components of the body schema. However, the task of achieving global control and co-ordination of posture and movement is (‘ecologically’) balanced across both the central nervous system (CNS) and the musculoskeletal system (MSS): as illustrated by developmental studies, by the dramatic effects of losing key sources of kinaesthetic input, and by the essential computational contribution of tendon networks.
By synthesising these data, we are poised to make an explicit statement of the body schema:
A system of dynamic processes governing the global control and co-ordination of posture and movement, where multi-modal spatial information (individually coded in body-part centred co-ordinates) is updated constantly and processed holistically, priming the CNS and MSS to both initiate a complex of appropriate operations and inhibit inappropriate operations.
Gallagher S (1986). Body image and body schema: A conceptual clarification. Journal of Mind and Behavior, 7, 541–554.
For a critical review see: Holmes N P & Spence C (2004). The body schema and multisensory representations of peripersonal space. Cognitive Processing, 5: 94-105.
Fogassi L, Gallese V, Fadiga L, Luppino G, Matelli M, & Rizzolatti G (1996). Coding of peripersonal space in inferior premotor cortex (area F4). Journal of Neurophysiology, 76: 141–157.
Mattingley J B, Driver J, Beschin N, Robertson I H (1997). Attentional competition between modalities: extinction between touch and vision after right hemisphere damage. Neuropsychologia 35:867–880
Làdavas E (2002). Functional and dynamic properties of visual peripersonal space. Trends in Cognitive Science 6: 17–22
Iriki A, Tanaka M, & Iwamura Y (1996) Coding of modified body schema during tool use by macaque postcentral neurons. Neuroreport 7:2325–2330
Farnè A, & Làdavas E (2000) Dynamic size-change of hand peripersonal space following tool use. Neuroreport 11: 1645–1649.
Farnè A, Bonifazi S, & Làdavas E (2005) The role played by tool-use and tool-length on the plastic elongation of peri-hand space: A single case study. Cognitive Neuropsychology, 22: 408-418.
Spence C, Pavani F, Maravita A, & Holmes N P (2004). Multisensory contributions to the 3-D representation of visuotactile peripersonal space in humans: evidence from the crossmodal congruency task. Journal of Physiology - Paris, 98(1-3): 171-189.
Legrand D, Brozzoli C, Rossetti Y, & Farnè A (2007). Close to me: Multisensory space representations for action and pre-reflexive consciousness of oneself-in-the-world. Consciousness and Cognition, 16(3): 687-699.
Pouget A, & Sejnowski T. (1997) Spatial Transformations in the Parietal Cortex using Basis Functions. Journal of Cognitive Neuroscience 9, (2): 222-237
Pouget A, & Snyder L H (2000). Computational approaches to sensorimotor transformations. Nature Neuroscience, 3, 1192-1198.
Pouget A, Deneve S, & Duhamel J R (2002). A computational perspective on the neural basis of multisensory spatial representations. Nature reviews Neuroscience, 3: 741-747.
Pfeifer R & Bongard J (2007) How the Body Shapes the Way We Think (MIT Press)
Thelen E, & Smith L (1994) A Dynamic Systems Approach To The Development Of Cognition And Action (MIT Press)
Cole J & Paillard J (1995) ‘Living Without Touch and Peripheral Information about Body Position and Movement: Studies with Deafferented Subjects’ in The Body and the Self, J. L. Bermudez, A. Marcel and N. Eilan (eds.) 245-266 (1995) (MIT Press)
Valero-Cuevas F J, Yi J W, Brown D, McNamara R V, Paul C, & Lipson H (2007), “The tendon network of the fingers performs anatomical computation at a macroscopic scale”, IEEE Transactions on Biomedical Engineering, pp. 1161-1166.
A growing number of theories give a special role to action in the explanation of the content and character of our conscious visual experience. ‘Action-space’ theorists (e.g. Clark (2001, 2007), Pettit (2003)) argue that conscious seeing is a matter of implicit knowledge of a range of actions currently made available by your perceptible environment. On the other hand, sensorimotor theorists (e.g. Hurley (1998), Noë (2004)) argue that conscious seeing is a matter of implicit knowledge of ‘sensorimotor contingencies’ - patterns of sensory changes that movement would bring about.
By tying consciousness to skilled activity, such enactive theories seem to allow for an attractive liberalism when attributing conscious states. This gives them an apparent advantage over accounts emphasising capacities for higher order thought, report or language by allowing for the intuitive possibility that agents lacking conceptual, metacognitive or linguistic capacities can be seats of conscious experience.
However, I argue that enactive theories face a dilemma. The problem lies in cashing out the appeal to ‘implicit knowledge’ made by each theory in a way that is neither too liberal, nor too conservative. ‘Implicit knowledge’ must, for the action-space theorist, amount to more than the mere ability to act appropriately in response to the perceptible environment. A simple machine could do that without (we suppose) being a seat of experience. ‘Implicit knowledge’ must, for the sensorimotor theorist, amount to more than a system’s merely coding for, or making predictions about, the likely sensory consequences of its activity. A simple machine could do that without (we suppose) being a seat of experience.
To rule out such excessive liberalism, some enactivists endorse the requirement that implicit knowledge of the space of available actions (Clark (2007)), or of the sensorimotor contingencies of the current perceptual situation (O’Regan and Noë (2001)), be factored into the agent’s ongoing reasoning, planning and deliberation for conscious experience to occur. But, I argue, on the most natural construal, reasoning, planning and deliberation are intellectually demanding capacities, tied to concept use. In light of this, enactivist theories can appear excessively conservative in their attributions of consciousness – intuitively there are animals that lack conceptual abilities, but still enjoy conscious experience.
One way of avoiding this dilemma would be the provision of an enactivist account of reasoning, planning and deliberation that was neither too liberal, nor too conservative in its application. I argue that Hurley’s (forthcoming) shared circuits model of social cognition provides the materials for just such an account, suggesting how nonconceptual forms of reasoning and planning could bootstrap themselves into existence on the back of skilled interaction with the environment. Importantly, such a view of reasoning and planning is constructed with an emphasis on situated, embodied activity, making it a natural complement to enactivist accounts.
Finally, I suggest that the model of reasoning and planning under consideration blurs the distinction between the action-space and sensorimotor views. The common solution to their common problem suggests how each can capitalise on the insights of the other, revealing the two enactive views as complementary rather than competing theories.
References (for abstract only):
Clark, A. (2001): Visual experience and motor action: are the bonds too tight? Philosophical Review, 110 (4), 495-519.
Clark, A. (2007): What Reaching Teaches: Consciousness, Control and the Inner Zombie. British Journal for the Philosophy of Science, 58 (3), 563-594.
Hurley, S. (1998): Consciousness in Action. Cambridge, MA: Harvard University Press.
Hurley, S. (forthcoming): The Shared Circuits Model: How Control, Mirroring and Simulation Can Enable Imitation, Deliberation and Mindreading, forthcoming in Behavioral and Brain Sciences.
Noë, A. (2004): Action in Perception. Cambridge, MA: MIT Press.
O'Regan, J.K. & Noë, A. (2001): A sensorimotor approach to vision and visual consciousness. Behavioral and Brain Sciences, 24, 883-975.
Pettit, P. (2003): Looks as powers. Philosophical Issues, 13, 221-252.
The phenomenal character of conscious experience is the ultimate target of anti-physicalist arguments. These arguments share a common structure: first, they establish an epistemic gap (no a priori entailment from physical to phenomenal knowledge), and second, they infer an ontological gap (phenomenal properties are not physical properties). The Phenomenal Concepts Strategy denies that ontological conclusions follow from epistemic considerations: it maintains that our phenomenal and physical concepts, though referring to the same properties, are conceptually irreducible to each other.
This paper offers an account of the cognitive mechanisms which underlie the conceptual irreducibility of phenomenal concepts. All accounts of phenomenal concepts assume that conceptual irreducibility is the key to solving the standard puzzles associated with conscious experience; however, most of them explain only in very general terms why conceptual irreducibility obtains in the first place. Since accounts of phenomenal concepts are physicalist approaches, claiming that the puzzles of conscious experience are due to conceptually irreducible phenomenal concepts will not save physicalism unless conceptual irreducibility itself is explained in physical-functional terms.
Our goal in this paper is precisely to fill this gap: that is, to provide a cognitive, or “physical”, story of how the peculiarities associated with conscious experience arise. We offer an account of conceptual irreducibility which is based on principles of cognitive architecture, namely the structured versus unstructured nature of perceptual representations. Structured perceptual representations possess constituent structure—they have elements that stand in certain relations with one another. That is, there is a mapping between the pattern of relations among the representational constituents and a certain external pattern of relations in the environment. By contrast, unstructured perceptual representations lack constituent structure. Lacking interrelated constituents, unstructured representations do not map the structure of the objects they stand for; rather, they only distinctively indicate whether certain types of objects are present (i.e. they selectively co-vary with particular kinds of external events and are selectively treated by the processing system). From this it follows that unstructured perceptual representations can freely change their functional roles within a cognitive system, which in turn means that no description alluding to their structure or functional role can capture their nature.
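The contrast can be made vivid with a toy example (my illustration, not the authors’): a structured representation has constituents whose internal relations can mirror relations in the scene, while an unstructured one is an atomic token that merely co-varies with a stimulus type.

```python
# Structured: constituents and their relations are explicit, so the
# representation's internal pattern can map onto the scene's pattern.
structured = {
    "constituents": ["cup", "table"],
    "relations": [("on", "cup", "table")],
}

# Unstructured: an atomic token that selectively co-varies with a kind
# of external event. Nothing inside it mirrors the scene's structure.
unstructured = "RED_PRESENT"

def mirrors_scene(rep) -> bool:
    """A representation can map external relations only if it has
    interrelated constituents to do the mapping with."""
    return isinstance(rep, dict) and bool(rep.get("relations"))
```

Because nothing about the unstructured token’s internals fixes its role, any description citing structure or functional role will fail to capture it, which is the authors’ route to conceptual irreducibility.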
That is, in our explanation of conceptual irreducibility, we shift the emphasis from the nature of phenomenal concepts to that of the experiences that phenomenal concepts are concepts of. Our account covers all standard anti-physicalist arguments, namely the knowledge argument, the explanatory gap, spectrum inversion, the conceivability of zombies, and ineffability. In addition, it provides an explanation of the primary-secondary quality distinction in terms of cognitive architecture.
In this paper I will argue that conscious agency is really the feeling of the exercise of mastery of sensorimotor contingencies. Whereas the sensorimotor view (Noë, 2004; O'Regan & Noë, 2002) has run into trouble as a view of perception more generally (Clark, 2006; Prinz, 2006), especially when explaining why action is constitutive of perception rather than merely causal, it has more purchase with respect to explaining the phenomenology of agency.
When we exercise mastery of sensorimotor contingencies in our actions, we experience agency. The experience of agency is therefore basic to action control, and not being able to predict the outcome of our actions is registered as a breakdown of agency.
This view helps explain why not all examples of felt agency seem to require explanation with respect to goals (Wakefield & Dreyfus, 1991). It also helps to explain how our attribution of agency to ourselves is rather promiscuous, involving the extension of our sense of agency to the artefacts we use, to artificial limbs, and even to actions we play no part in (Wegner, 2002). Thus our ability to feel agency is not confined to those activities for which our nervous system may have provided us with native systems that produce forward models. It is a rather more open-ended sort of capacity. As we learn more about the world, we are able to make predictions and control our activities at more refined degrees of abstraction. That is, as our ability to predict and control more refined laws of sensorimotor contingency grows, our sense of agency (and sometimes of alienation) can grow.
This approach explains the asymmetry we find with respect to the feelings of agency in ourselves and in others as follows. We observe agency in others when we perceive them exercising mastery of sensorimotor contingencies. But we feel agency when exercising mastery of those sensorimotor contingencies. This approach is especially good at explaining the experience of agency in co-action with others, something neglected by many other approaches.
A refined version of this analysis can thus capture both high-level and low-level agency, i.e. agency with respect both to basic organised control and to more elaborate goal-directed activities (pace the hard distinctions made between control and agency in Pacherie, 2007). Rather than seeing agency spread top-down from an intentional and teleological level of description (Metzinger, 2005), this view sees agency spreading outwards from the basic ongoing control of the body. Yet it can equally accommodate agency at higher levels of description. As our abilities to predict and control our activities become shaped by the social projects in which we become involved, we experience agency (and alienation) with respect to higher-order capabilities.
In addition, I argue that this approach is revealing both about agency and about the sensorimotor view more generally.
Clark, A. (2006). Vision as Dance? Three Challenges for Sensorimotor Contingency Theory. Psyche, 12(1).
Metzinger, T. (2005). Conscious volition and mental representation: Towards a more fine-grained analysis. In N. Sebanz & W. Prinz (Eds.), Disorders of Volition: Bradford Book, MIT Press.
Noë, A. (2004). Action in Perception. Cambridge, MA: Bradford Books, MIT Press.
O'Regan, J., & Noë, A. (2002). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939-973.
Pacherie, E. (2007). The Sense of Control and the Sense of Agency. PSYCHE, 13(1).
Prinz, J. (2006). Putting the Brakes on Enactive Perception. PSYCHE, 12(1), 1-19.
Wakefield, J., & Dreyfus, H. L. (1991). Intentionality and the phenomenology of action. In E. Lepore & R. Van Gulick (Eds.), John Searle and his critics (pp. 259-270). Cambridge, MA: Blackwell.
Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, MA: MIT Press.