Scientific Inquiry in Social Work

10.4 A word of caution: Questions to ask about samples

Learning Objectives

  • Identify three questions you should ask about samples when reading research results
  • Describe how bias impacts sampling

We read and hear about research results so often that we might sometimes overlook the need to ask important questions about where research participants came from and how they were identified for inclusion. It is easy to focus only on findings when we’re busy and when the really interesting stuff seems to be in a study’s conclusions rather than its procedures. But now that you have some familiarity with the variety of procedures for selecting study participants, you are equipped to ask some very important questions about the findings you read and to be a more responsible consumer of research.

Who sampled, how, and for what purpose?

Have you ever been a participant in someone’s research? If you have ever taken an introductory psychology or sociology class at a large university, that’s probably a silly question to ask. Social science researchers on college campuses have a luxury that researchers elsewhere may not share—they have access to a whole bunch of (presumably) willing and able human guinea pigs. But that luxury comes at a cost—sample representativeness. One study of top academic journals in psychology found that over two-thirds (68%) of the participants in studies published in those journals came from samples drawn in the United States (Arnett, 2008). [1] Further, the study found that two-thirds of the US-based work published in the Journal of Personality and Social Psychology relied on samples made up entirely of American undergraduates taking psychology courses.

[Image: two white people and a dog lounging with coffee]

These findings certainly raise the question: What do we actually learn from social scientific studies, and about whom do we learn it? That is exactly the concern raised by Joseph Henrich and colleagues (Henrich, Heine, & Norenzayan, 2010), [2] authors of the article “The Weirdest People in the World?” In their piece, Henrich and colleagues point out that behavioral scientists very commonly make sweeping claims about human nature based on samples drawn only from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, and often based on even narrower samples, as is the case with many studies relying on samples drawn from college classrooms. As it turns out, many robust findings about the nature of human behavior when it comes to fairness, cooperation, visual perception, trust, and other behaviors are based on studies that excluded participants from outside the United States and sometimes excluded anyone outside the college classroom (Begley, 2010). [3] This certainly raises questions about what we really know about human behavior, as opposed to the behavior of US residents or US undergraduates. Of course, not all research findings are based on samples of WEIRD folks like college students. But even so, it would behoove us to pay attention to the population on which studies are based and to the claims being made about the people to whom those studies apply.

In the preceding discussion, the concern is with researchers making claims about populations other than those from which their samples were drawn. A related, but slightly different, potential concern is sampling bias. Bias in sampling occurs when the elements selected for inclusion in a study do not represent the larger population from which they were drawn. For example, if you were to sample people walking into the social work building on campus during each weekday, your sample would include too many social work majors and not enough non-social work majors. Furthermore, you would completely exclude graduate students whose classes meet at night. Bias may be introduced by the sampling method itself or by the conscious or unconscious choices of the researcher (Rubin & Babbie, 2017). [4] A researcher might select people who “look like good research participants,” in the process transferring their unconscious biases to their sample.
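
If you are curious how this plays out in numbers, the short Python sketch below simulates the social work building example. It is only an illustration: the population size, the true share of social work majors, and the makeup of foot traffic into the building are all invented for demonstration.

    # Hypothetical illustration: how a convenience sample can misrepresent a population.
    # All numbers are invented for demonstration purposes only.
    import random

    random.seed(42)

    # Imagine a campus of 20,000 students where 5% are social work majors.
    population = ["social work"] * 1_000 + ["other major"] * 19_000

    # Simple random sample of 200 students drawn from the whole campus.
    random_sample = random.sample(population, 200)

    # Convenience sample: people walking into the social work building, where
    # (hypothetically) 60% of foot traffic consists of social work majors.
    convenience_sample = random.choices(
        ["social work", "other major"], weights=[60, 40], k=200
    )

    def pct_social_work(sample):
        return 100 * sample.count("social work") / len(sample)

    print("True population share:        5.0%")
    print(f"Random sample estimate:       {pct_social_work(random_sample):.1f}%")
    print(f"Convenience sample estimate:  {pct_social_work(convenience_sample):.1f}%")

Running the sketch, the random sample lands near the true 5%, while the convenience sample lands near 60%, not because anything went wrong mechanically, but because the selection process itself favored one group.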

[Image: a cartoon of a man standing on a plinth with the word “ignorance” on it]

Another thing to keep in mind is that even if a sample is representative in all the respects a researcher thinks are relevant, there may be relevant aspects that did not occur to the researcher when she was drawing her sample. You might not think that a person’s phone would have much to do with their voting preferences, for example. But had pollsters making predictions about the results of the 2008 presidential election not been careful to include both cell phone-only and landline households in their surveys, it is possible that their predictions would have underestimated Barack Obama’s lead over John McCain, because Obama was much more popular among cell-only users than McCain (Keeter, Dimock, & Christian, 2008). [5]
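
The hypothetical simulation below makes the cell phone example concrete. The share of cell-only voters and the candidate preferences within each group are made-up numbers, not actual 2008 polling figures; the point is simply that excluding a group whose preferences differ shifts the estimate.

    # Hypothetical illustration of coverage error: excluding cell-phone-only
    # households when phone type is related to candidate preference.
    # All proportions are invented, not actual 2008 polling figures.
    import random

    random.seed(0)

    def simulate_voter():
        """Return (phone_type, preferred_candidate) for one simulated voter."""
        if random.random() < 0.20:  # assume 20% of voters are cell-phone-only
            phone = "cell-only"
            candidate = "A" if random.random() < 0.65 else "B"  # candidate A stronger here
        else:
            phone = "landline"
            candidate = "A" if random.random() < 0.50 else "B"  # roughly evenly split here
        return phone, candidate

    voters = [simulate_voter() for _ in range(100_000)]

    def support_for_a(subset):
        return 100 * sum(1 for _, c in subset if c == "A") / len(subset)

    landline_only_poll = [v for v in voters if v[0] == "landline"]

    print(f"Support for A, all voters:         {support_for_a(voters):.1f}%")
    print(f"Support for A, landline-only poll: {support_for_a(landline_only_poll):.1f}%")

With these invented numbers, candidate A’s true support is about 53%, but a landline-only poll would report about 50%, understating A’s lead for the same reason described above.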

So how do we know when we can count on results that are being reported to us? While there might not be any magic or always-true rules we can apply, there are a few things we can keep in mind as we read the claims researchers make about their findings.

First, remember that sample quality is determined only by the sample actually obtained, not by the sampling method itself. A researcher may set out to administer a survey to a representative sample by correctly employing a random selection technique, but if only a handful of the people sampled actually respond to the survey, the researcher will have to be very careful about the claims she can make about her survey findings.
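
The sketch below, again with invented numbers, shows why a correctly drawn random sample is not enough on its own: if the people who respond differ systematically from those who do not, an estimate based on respondents alone will be off.

    # Hypothetical illustration: a correctly drawn random sample can still yield
    # a biased estimate if nonresponse is related to the variable being measured.
    # All numbers are invented for demonstration purposes only.
    import random

    random.seed(1)

    # Suppose 40% of clients in an agency's population are satisfied with services.
    population = [{"satisfied": random.random() < 0.40} for _ in range(50_000)]

    # A proper simple random sample of 1,000 clients is drawn...
    sample = random.sample(population, 1_000)

    # ...but satisfied clients are assumed to be twice as likely to return the survey.
    def responds(person):
        return random.random() < (0.30 if person["satisfied"] else 0.15)

    respondents = [p for p in sample if responds(p)]

    def pct_satisfied(group):
        return 100 * sum(p["satisfied"] for p in group) / len(group)

    print(f"Respondents: {len(respondents)} of {len(sample)} sampled")
    print("Underlying satisfaction rate:    40.0%")
    print(f"Estimate from respondents only:  {pct_satisfied(respondents):.1f}%")

Here roughly 40% of the population is satisfied, but because satisfied clients are assumed to respond at twice the rate of unsatisfied clients, the respondents alone suggest a figure in the high 50s.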

Another thing to keep in mind, as demonstrated by the preceding discussion, is that researchers may be drawn to talking about the implications of their findings as though they apply to some group other than the population actually sampled. Though this tendency is usually innocent rather than malicious, it is an all too tempting way to talk about findings; as consumers of research, it is our responsibility to be attentive to this sort of (likely unintentional) bait and switch.

Finally, keep in mind that a sample that allows for comparisons of theoretically important concepts or variables is certainly better than one that does not allow for such comparisons. In a study based on a nonrepresentative sample, for example, we can learn about the strength of our social theories by comparing relevant aspects of social processes. We talked about this as theory-testing in Chapter 7.

At their core, questions about sample quality should address who has been sampled, how they were sampled, and for what purpose they were sampled. Being able to answer those questions will help you better understand, and more responsibly read, research results.

Key Takeaways

  • Sometimes researchers may make claims about populations other than those from whom their samples were drawn; other times they may make claims about a population based on a sample that is not representative. As consumers of research, we should be attentive to both possibilities.
  • A researcher’s findings need not be generalizable to be valuable; samples that allow for comparisons of theoretically important concepts or variables may yield findings that contribute to our social theories and our understandings of social processes.

Glossary

  • Bias: in sampling, when the elements selected for inclusion in a study do not represent the larger population from which they were drawn, due to the sampling method or the thought processes of the researcher

Image attributions

men women apparel couple by 5688709 CC-0

ignorance by Rilsonav CC-0


  1. Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63, 602–614.
  2. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–135.
  3. Newsweek magazine published an interesting story about Henrich and his colleagues’ study: Begley, S. (2010). What’s really human? The trouble with student guinea pigs. Retrieved from http://www.newsweek.com/2010/07/23/what-s-really-human.html
  4. Rubin, A., & Babbie, E. R. (2017). Research methods for social work (9th ed.). Boston, MA: Cengage.
  5. Keeter, S., Dimock, M., & Christian, L. (2008). Calling cell phones in ’08 pre-election polls. The Pew Research Center for the People and the Press. Retrieved from http://people-press.org/files/legacy-pdf/cell-phone-commentary.pdf

Copyright © 2018 by Matthew DeCarlo. Scientific Inquiry in Social Work by Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.