Thursday, 22 February 2018

On Folk Epistemology

Mikkel Gerken is an associate professor at the University of Southern Denmark. In this post he writes about his new book ‘On Folk Epistemology: How We Think and Talk about Knowledge’.





A central claim of my book, On Folk Epistemology: How We Think and Talk about Knowledge, is that some folk epistemological patterns of knowledge ascriptions are best explained by cognitive biases. I argue that this approach to folk epistemology yields diagnoses of some hard puzzles of contemporary epistemology. So, On Folk Epistemology seeks to contribute to some prominent debates in contemporary epistemology. For example, I criticize contextualism, pragmatic encroachment, and knowledge-first epistemology. If you want to check it out, there is an introduction and overview here.

In this blog post, however, I will emphasize why the study of folk epistemology is an important task. In a nutshell, it is because folk epistemology is extremely consequential. Consider, for example, the roles of knowledge ascriptions in our social interactions. We acquire the ability to think and talk about knowledge early in life. Moreover, mental and linguistic ascriptions and denials of knowledge remain extremely prominent in adulthood. Indeed, linguistic knowledge ascriptions are arguably among the most important speech acts that we engage in on a daily basis.

To ascribe knowledge to oneself or to someone else is a powerful speech act that gives the proposition said to be known a special status. Often it indicates that we are in a position to act on the proposition. Moreover, the subject to whom knowledge is ascribed is often given a stamp of social approval or disapproval. Just consider phrases such as “she is in the know” or “he doesn’t know what he is talking about.” Consequently, knowledge ascriptions are central to many of the social scripts that govern social life. So, if our knowledge ascriptions and intuitions about them are biased, we’d want to understand how and why. After all, we do not want to make our decisions about whom to trust and how to act based on biased judgments.

Understanding the biases of our folk epistemology is all the more urgent given that they may lead to social injustices. This may be the case if the biases reflect stereotypes pertaining to gender, race, or class. While epistemic injustices may be caused by general “identity prejudices”, folk epistemological biases are particularly relevant to distinctively epistemic injustices.

After all, they may lead us to mistakenly regard someone who in fact knows that p as not knowing it. Thus, biases of our folk epistemology may lead to “wrongs done to someone specifically in their capacity as a knower” which is Miranda Fricker’s initial conception of epistemic injustice (Fricker 2007). At present, we do not know enough about whether folk epistemological biases interact with biases pertaining to gender, race or class. Here I think of On Folk Epistemology as providing part of a framework for further research on epistemic injustice.

Tuesday, 20 February 2018

Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization

Today's post is by Jonathan Ellis, Associate Professor of Philosophy and Director of the Center for Public Philosophy at the University of California, Santa Cruz, and Eric Schwitzgebel, Professor of Philosophy at the University of California, Riverside. This is the second in a two-part contribution on their paper "Rationalization in Moral and Philosophical Thought" in Moral Inferences, eds. J. F. Bonnefon and B. Trémolière (Psychology Press, 2017).




Last week we argued that your intelligence, vigilance, and academic expertise very likely do little to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. This week we’ll argue that moral and philosophical topics are especially fertile grounds for rationalization.

Here’s one way of thinking about it: Rationalization, like crime, requires a motive and an opportunity. Ethics and philosophy provide plenty of both.

Regarding motive: Not everyone cares about every moral and philosophical issue, of course. But we all have some moral and philosophical issues that are near to our hearts – for reasons of cultural or religious identity, or personal self-conception, or for self-serving reasons, or because it’s comfortable, exciting, or otherwise appealing to see the world in a certain way.

On day one of their philosophy classes, students are often already attracted to certain types of views and repulsed by others. They like the traditional and conservative, or they prefer the rebellious and exploratory; they like confirmations of certainty and order, or they prefer the chaotic and skeptical; they like moderation and common sense, or they prefer the excitement of the radical and unintuitive. Some positions fit with their pre-existing cultural and political identities better than others. Some positions are favored by their teachers and elders – and that’s attractive to some, and provokes rebellious contrarianism in others. Some moral conclusions may be attractively convenient, while others might require unpleasant contrition or behavior change.

The motive is there. So is the opportunity. Philosophical and moral questions rarely admit of straightforward proof or refutation, or a clear standard of correctness. Instead, they open into a complexity of considerations, which themselves do not admit of straightforward proof and which offer many loci for rationalization.

These loci are so plentiful and diverse! Moral and philosophical arguments, for instance, often turn crucially on a “sense of plausibility” (Kornblith, 1999); or on one’s judgment of the force of a particular reason, or the significance of a consideration. Methodological judgments are likewise fundamental in philosophical and moral thinking: What argumentative tacks should you first explore? How much critical attention should you pay to your pre-theoretic beliefs, and their sources, and which ones, in which respects? How much should you trust your intuitive judgments versus more explicitly reasoned responses? Which other philosophers, and which scientists (if any), should you regard as authorities whose judgments carry weight with you, and on which topics, and how much?

These questions are usually answered only implicitly, revealed in your choices about what to believe and what to doubt, what to read, what to take seriously and what to set aside. Even where they are answered explicitly, there is no clear set of criteria by which to answer them definitively. And so, if people’s preferences can influence their perceptual judgments (including possibly of size, color, and distance: Balcetis and Dunning 2006, 2007, 2010), what is remembered (Kunda 1990; Mele 2001), what hypotheses are envisioned (Trope and Liberman 1997), and what one attends to and for how long (Lord et al. 1979; Nickerson 1998) . . . it is no leap to assume that they can influence the myriad implicit judgments, intuitions, and choices involved in moral and philosophical reasoning.

Furthermore, patterns of bias can compound across several questions, so that with many loci for bias to enter, the person who is only slightly biased in each of a variety of junctures in a line of reasoning can ultimately come to a very different conclusion than would someone who was not biased in the same way. Rationalization can operate by way of a series or network of “micro-instances” of motivated reasoning that together have a major amplificatory effect (synchronically, diachronically, or both), or by influencing you mightily at a crucial step (Ellis, manuscript).

We believe that these considerations, taken together with the considerations we advanced last week about the likely inability of intelligence, vigilance, and expertise to effectively protect us against rationalization, support the following conclusion: Few if any of us should confidently maintain that our moral and philosophical reasoning is not substantially tainted by significant, epistemically troubling degrees of rationalization. This is of course one possible explanation of the seeming intractability of philosophical disagreement.

Or perhaps we the authors of the post are the ones rationalizing; perhaps we are, for some reason, drawn toward a certain type of pessimism about the rationality of philosophers, and we have sought and evaluated evidence and arguments toward this conclusion in a badly biased manner? Um…. No way. We have reviewed our reasoning and are sure that we were not affected by our preferences....