Want to improve your golf or tennis game? Take lessons from a local “pro.” Got an obscure medical problem? See a specialist. Nasty divorce? Hire a good lawyer. In short, if you’ve got a hard question or problem, you should seek the guidance of an expert.
Experts don’t just guide personal choices: they shape social policy, law, and collective decision-making. How should scientific research be interpreted? What unintended consequences might result from this policy? Is this proposed law legally viable? Experts help answer hard questions like these.
But what is an expert? How can we tell if someone is an expert? And, since experts are important, why do people often reject their advice?
1. What is expertise?
There are many types of experts. Some experts – e.g., physicists, art historians, accountants – have exceptional knowledge. Others – e.g., chess masters, professional violinists, surgeons – have exceptional abilities. Some expertise relates less to doing and more to perceiving: e.g., an expert distiller can taste the nuances in whiskey; an expert radiologist can read an X-ray efficiently.
Despite these differences, all experts have a high degree of competence in a specific domain (or subject area) that makes them an authority in that domain.
Becoming an expert typically requires thousands of hours of focused study and practice. Other experts usually review this practice and give feedback, leading to more practice. Because expertise develops over time, someone can be more or less of an expert. Expertise requires continual study and practice or else it can be lost.
Experts are not simply reservoirs of information; they have a deep understanding of their domain, usually including how and why its claims are accepted. They are not dabblers or merely well-informed: they are authorities.
The authority of an expert is epistemic: their knowledge and understanding give us good reason to trust them in matters of their domain of expertise. Our beliefs are more likely to be justified or true if we get them from experts. Experts, then, make us better off by sharing their knowledge or using their skills.
2. How can we tell whether someone is an expert?
Sometimes it’s clear that someone is an expert: the proof is in their performance, e.g., gold medal winners, trial-winning attorneys. Other times, it’s harder to tell.
There are, however, some general indicators that someone is likely an expert. Experts usually:
- have extensive specialized education and experience;
- have a track record of being right or helpful;
- have high professional standing or credentials;
- have marks of professional achievement, such as publications or awards;
- can speak easily and confidently about their domain.
These indicators aren’t perfect. Some degree programs, credentials and awards don’t amount to much. Some fields don’t have track records, or they aren’t available. Some experts tend to lose skill with greater experience. Some well-meaning people claim to be among the experts in a domain, and even have credentials and experience, but they simply are not experts: they slipped through the cracks, so to speak. Some people claim expertise in areas where there is no genuine expertise: if there is no knowledge or well-justified belief in a domain (as opposed to mere beliefs or opinions), then there are no experts in that domain. And there are charlatans, people who, for a variety of reasons, present themselves as experts, even though they are not experts and they know it.
Identifying who is and who is not an expert is sometimes challenging. But it is neither impossible nor pointless. And, again, sometimes it’s clear who the experts are.
3. Skepticism about experts
People who haven’t had the education, training, or practice of an expert are far less likely to be as skilled or knowledgeable in that domain, compared to an expert. So, expert guidance is more likely to be correct or useful than our own: this is why experts matter.
Some people, however, are skeptical of experts and even the whole idea of expertise. Here are some reasons why:
A. Expert opinion sometimes changes:
We all know of cases where it seems like the experts got it wrong: butter was bad for you, now it’s fine; all women should get mammograms, now it’s only women over 50; one doctor told you X, but another said Y. Ever-changing “expert opinion” sometimes leads people to doubt experts in general.
B. Experts sometimes overreach:
Experts sometimes overstep their bounds. They make claims about subjects outside their expertise, e.g., physicians offering political commentary. When experts present themselves as authorities in areas outside of their expertise, it is harder to know who the genuine experts are and it’s easier to lose respect for experts.
C. Experts sometimes disagree:
Even genuine experts – people with equally strong training and experience – sometimes disagree. TV courtroom dramas make much of this: “You bring your experts; we’ll bring ours.” If we can’t decide which expert is more likely correct, we might lose confidence in experts generally.
D. We sometimes resist expert authority:
Experts sometimes tell us things we just don’t want to hear, and so we refuse to listen. We are so confident that we are right that we dismiss anyone who contradicts us, including experts. Sometimes this is because a belief is so important to us that we are offended by the idea that others know more about it than we do. When we claim expertise that we really don’t have, this is often at our own peril.
Recognizing why people sometimes reject expertise can make us more critical of that rejection. Sometimes skepticism about experts is justified. But, more often, experts are in a better position than we are to know what we should think or do, and so such skepticism is unwise.
Studying expertise helps us better identify experts and understand why they are important.
Many questions remain, however, e.g.: what should we believe, and do, when experts disagree? Could recognizing the value of expertise discourage democracy, which allows non-experts’ opinions to influence important decisions and policies? Are there experts on controversial topics, like ethics or justice or politics? Finally, are these all just questions for experts, or should everyone try to answer them as best they can?
 See Bilalić (2017).
 This characterization is not uncontroversial. Alvin Goldman (2018: 6) argues that computer algorithms like Google’s search engine are experts. The algorithms certainly “learn” from searchers and other click data, but whether this amounts to training that develops understanding remains an open question. Further, some think people with savant syndrome (a condition in which someone with significant mental disabilities has a far-above-average ability to count, calculate, or remember) have expertise without rigorous study (see Ericsson and Pool 2016: 219-222). And we might wonder whether, if there were an all-knowing being like God, that being would be an expert. Nevertheless, almost all cases of human expertise (including savants) require specialized training or learning.
 According to Ericsson and Pool (2016), acquiring any type of expertise typically requires thousands of hours of training. The popular “10,000-hour rule” was Malcolm Gladwell’s (2008) misinterpretation of a point made by Ericsson, Krampe, and Tesch-Römer (1993). By age 20, expert violinists in their study had practiced an average of ten thousand hours, but many were experts long before age 20. Their findings show that achieving expertise sometimes requires more than 10,000 hours of practice and sometimes less. Improved teaching techniques help, too. Ericsson, et al. (2018: 6) note that the mathematics now taught in one college course is material that Roger Bacon (1214-1292) said would take 30 to 40 years to learn.
 This is especially the case with people who want to achieve the highest levels of expertise: Olympians, professional athletes, chess grandmasters, and Nobel Prize winners. Precisely what that standard is will be determined by the domain. For example, expert swimming requires technique and speed comparable to other expert swimmers, and expertise in a sub-domain of mathematics (like topology) requires drawing a sufficient number of valid inferences relative to the current state of that field. And what counted as the highest level of expert swimming in the 1968 Olympics was different from what counted in the 2016 Olympics.
 A widely studied version of this rigorous training, termed “deliberate practice,” was introduced by psychologist K. Anders Ericsson (2009). Deliberate practice requires full attention on tasks outside one’s comfort zone, for which others have already developed training techniques, and it involves using copious feedback to improve particular aspects of a skill. For discussion, see Ericsson and Pool (2016).
 See Dreyfus and Dreyfus (1988).
 See Collins and Evans (2007) for a novel distinction between interactional expertise and contributory expertise. Experts of both kinds have an extensive understanding of the concepts and methods in a domain, but whereas interactional experts can generally only discuss the history and development of a field, contributory experts can contribute new concepts, theories, and methods to it.
 Contrast this with “political authority,” the authority to make decisions and enforce behaviors at political or governmental levels.
 See Coady (2012).
 See Goldman (2001).
 Some health professionals tend to lose skill over time. Ericsson and Pool (2016, chapter 5) show that some physicians and nurses get worse despite increasing experience.
 Raz (1986) and Zagzebski (2012).
 Shaw (2016); Schrager (2016); Fox (2018); Burdick (2018).
 In some cases, the fact that we are not experts can lead us to distrust experts. Research on self-assessment suggests that people who are especially incompetent in a domain are likely to be ignorant of their own incompetence. This results in a cognitive bias known as the Dunning-Kruger effect (Kruger and Dunning 1999). David Dunning, who co-identified the problem, writes, “[T]he knowledge and intelligence that are required to be good at a task are often the same qualities needed to recognize that one is not good at that task” (2016). People under the influence of the Dunning-Kruger effect have an over-inflated view of their own competence, which then leads them to demote the competence of others, including genuine experts.
 Noveck (2015); Brennan (2016).
 See Burch (1974), Rasmussen (2005), Tetlock (2005), Watson (2017), and Watson and Guidry-Grimes (2018).
Ericsson, K. Anders. (2009) “Enhancing the Development of Professional Performance: Implications from the Study of Deliberate Practice.” In K. Anders Ericsson, ed. Development of Professional Expertise. Cambridge: Cambridge University Press, pp. 405-431.
Feltovich, Paul J., Michael J. Prietula, and K. Anders Ericsson. (2006) “Studies of Expertise from Psychological Perspectives.” In K. Anders Ericsson, Neill Charness, Robert R. Hoffman, and Paul J. Feltovich, eds. The Cambridge Handbook of Expertise and Expert Performance. Cambridge: Cambridge University Press, pp. 41-68.
Kruger, Justin and David Dunning. (1999) “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology, Vol. 77, No. 6, pp. 1121-1134.
Lackey, Jennifer. (2018) “Experts and Peer Disagreement.” In Matthew A. Benton, John Hawthorne, and Dani Rabinowitz, eds. Knowledge, Belief, and God: New Insights in Religious Epistemology. Oxford: Oxford University Press, pp. 228-245.
Rakel, Horst. (2004) “Scientists as Expert Advisors: Science Cultures Versus National Cultures?” in Kurz-Milcke, Elke and Gerd Gigerenzer, eds. (2004) Experts in Science and Society. New York: Kluwer Academic/Plenum Publishers, pp. 3-25.
Vickers, Andrew J., Fernando J. Bianco, Angel M. Serio, James A. Eastham, Deborah Schrag, Eric A. Klein, Alwyn M. Reuther, Michael W. Kattan, J. Edson Pontes, and Peter T. Scardino. (2007) “The Surgical Learning Curve for Prostate Cancer Control after Radical Prostatectomy.” Journal of the National Cancer Institute, Vol. 99, No. 15, pp. 1171-1177.
Vickers, Andrew J., Fernando J. Bianco, Mithat Gonen, Angel M. Cronin, James A. Eastham, Deborah Schrag, Eric A. Klein, Alwyn M. Reuther, Michael W. Kattan, J. Edson Pontes, and Peter T. Scardino. (2008) “Effects of Pathologic Stage on the Learning Curve for Radical Prostatectomy: Evidence that Recurrence in Organ-confined Cancer is Largely Related to Inadequate Surgical Technique.” European Urology, Vol. 53, No. 5, pp. 960-966.
Related Essays
Take My Word for It: On Testimony by Spencer Case
Moral Testimony by Annaleigh Curtis
The Epistemology of Disagreement by Jonathan Matheson
About the Author
Jamie is Assistant Professor of Medical Humanities and Bioethics at the University of Arkansas for Medical Sciences. He received his Ph.D. from Florida State University. He specializes in traditional and social epistemology and applied bioethics. His research focuses on the nature and limits of expertise, especially in medicine and politics. Fitting for someone named Dr. Watson, he has a very intelligent dog named Sherlock. JamieCarlinWatson.com