
2.3 Research Design in Sociology

Learning Objective

  • List the major advantages and disadvantages of surveys, experiments, and observational studies.

We now turn to the major methods that sociologists use to gather the information they analyze in their research. Table 2.2 “Major Sociological Research Methods” summarizes the advantages and disadvantages of each method.

Table 2.2 Major Sociological Research Methods

  • Survey. Advantages: Many people can be included. If given to a random sample of the population, a survey’s results can be generalized to the population. Disadvantages: Large surveys are expensive and time-consuming. Although much information is gathered, this information is relatively superficial.
  • Experiments. Advantages: If random assignment is used, experiments provide fairly convincing data on cause and effect. Disadvantages: Because experiments do not involve random samples of the population and most often involve college students, their results cannot readily be generalized to the population.
  • Observation (field research). Advantages: Observational studies may provide rich, detailed information about the people who are observed. Disadvantages: Because observation studies do not involve random samples of the population, their results cannot readily be generalized to the population.
  • Existing data. Advantages: Because existing data have already been gathered, the researcher does not have to spend the time and money to gather data. Disadvantages: The data set being analyzed may not contain data on all the variables in which a sociologist is interested or may contain data on variables that are not measured in ways the sociologist prefers.

Surveys

The survey is the most common method by which sociologists gather their data. The Gallup Poll is perhaps the best-known example of a survey and, like all surveys, gathers its data with the help of a questionnaire given to a group of respondents. The Gallup Poll, an example of a survey conducted by a private organization, typically includes only a small range of variables. It thus provides a good starting point for research but usually does not include enough variables for a full-fledged sociological study. Sociologists often do their own surveys, as do the government and many organizations in addition to Gallup.

[Figure: a pile of surveys. The Bees – Surveys to compile – CC BY-NC 2.0.]

The survey is the most common research design in sociological research. Respondents either fill out questionnaires themselves or provide verbal answers to interviewers asking them the questions.

The General Social Survey, described earlier, is an example of a face-to-face survey, in which interviewers meet with respondents to ask them questions. This type of survey can yield a great deal of information, because interviewers typically spend at least an hour asking their questions, and it usually achieves a high response rate (the percentage of all people in the sample who agree to be interviewed), which is important for generalizing the survey’s results to the entire population. On the downside, this type of survey can be very expensive and time-consuming to conduct.

Because of these drawbacks, sociologists and other researchers have turned to telephone surveys. Most Gallup Polls are conducted over the telephone. Computers do random-digit dialing, which results in a random sample of all telephone numbers being selected. Although the response rate and the number of questions asked are both lower than in face-to-face surveys (people can just hang up the phone at the outset or let their answering machine take the call), the ease and low expense of telephone surveys are making them increasingly popular.
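Random-digit dialing is easy to sketch in code. The snippet below is an illustrative simulation only: the area code, the 30% answer probability, and the sample size are all invented figures, and real polling firms use more elaborate sampling frames. It shows why the method yields a random sample of telephone numbers and how a response rate would then be computed.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def random_digit_number(area_code="612"):
    """Generate a phone number by random-digit dialing: every digit after
    the area code is drawn at random, so listed and unlisted numbers are
    equally likely to be selected."""
    local = "".join(str(random.randint(0, 9)) for _ in range(7))
    return f"({area_code}) {local[:3]}-{local[3:]}"

# Dial a sample of 1,000 random numbers and simulate who answers and
# agrees to be interviewed (the 30% probability is an invented figure).
sample = [random_digit_number() for _ in range(1000)]
completed = sum(1 for _ in sample if random.random() < 0.30)

response_rate = completed / len(sample)  # completed interviews / sample size
print(f"Sampled {len(sample)} numbers; response rate = {response_rate:.1%}")
```

Because the digits are drawn at random, every working number in the area has the same chance of selection, which is what permits generalization to the population of telephone households.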

Mailed surveys, done by mailing questionnaires to respondents, are still used, but not as often as before. Compared with face-to-face surveys, mailed questionnaires are less expensive and less time-consuming but have lower response rates, because many people simply throw out the questionnaire along with other junk mail.

Whereas mailed surveys are becoming less popular, surveys done over the Internet are becoming more popular, as they can reach many people at very low expense. A major problem with Web surveys is that their results cannot necessarily be generalized to the entire population, because not everyone has access to the Internet.

Experiments

Experiments are the primary form of research in the natural and physical sciences, but in the social sciences they are for the most part found only in psychology. Some sociologists still use experiments, however, and they remain a powerful tool of social research.

The major advantage of experiments is that the researcher can be fairly sure of a cause-and-effect relationship because of the way the experiment is set up. Although many different experimental designs exist, the typical experiment consists of an experimental group and a control group, with subjects randomly assigned to either group. The researcher makes a change to the experimental group that is not made to the control group. If the two groups differ later in some variable, then it is safe to say that the condition to which the experimental group was subjected was responsible for the difference that resulted.
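The logic of random assignment can be sketched in a few lines of Python. Every number here is invented for illustration (the subjects, the outcome scores, and the built-in "treatment effect"); the point is that shuffling subjects into the two groups, rather than hand-picking them, is what lets a later difference be attributed to the researcher's change.

```python
import random
from statistics import mean

random.seed(1)  # fixed seed so the illustration is reproducible

subjects = list(range(40))        # 40 hypothetical subjects
random.shuffle(subjects)          # random assignment: shuffle, then split
experimental, control = subjects[:20], subjects[20:]

def outcome(treated):
    """Invented outcome: a baseline score plus random noise, plus a boost
    for treated subjects (the change made only to the experimental group)."""
    return 50 + random.gauss(0, 5) + (8 if treated else 0)

exp_scores = [outcome(True) for _ in experimental]
ctl_scores = [outcome(False) for _ in control]

# Because assignment was random, a clear gap in means points to the treatment.
diff = mean(exp_scores) - mean(ctl_scores)
print(f"Experimental mean minus control mean: {diff:.1f}")
```

Randomization makes the two groups alike, on average, in everything except the treatment, which is why no matching on age, class, or any other variable is needed.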

[Figure: a student working on an experiment in science class. biologycorner – Science Experiment – CC BY-NC 2.0.]

Experiments are very common in the natural and physical sciences and in psychology. A major advantage of experiments is that they are very useful for establishing cause-and-effect relationships.

Most experiments take place in the laboratory, which for psychologists may be a room with a one-way mirror, but some experiments occur in “the field,” or in a natural setting. In Minneapolis, Minnesota, in the early 1980s, sociologists were involved in a much-discussed field experiment sponsored by the federal government. The researchers wanted to see whether arresting men for domestic violence made it less likely that they would commit such violence again. To test this hypothesis, the researchers had police do one of the following after arriving at the scene of a domestic dispute: they either arrested the suspect, separated him from his wife or partner for several hours, or warned him to stop but did not arrest or separate him. The researchers then determined the percentage of men in each group who committed repeated domestic violence during the next 6 months and found that those who were arrested had the lowest rate of recidivism, or repeat offending (Sherman & Berk, 1984). This finding led many jurisdictions across the United States to adopt a policy of mandatory arrest for domestic violence suspects. However, replications of the Minneapolis experiment in other cities found that arrest sometimes reduced recidivism for domestic violence but also sometimes increased it, depending on which city was being studied and on certain characteristics of the suspects, including whether they were employed at the time of their arrest (Sherman, 1992).
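The comparison at the heart of the Minneapolis experiment reduces to a recidivism rate per arm. The counts below are invented for illustration (Sherman and Berk's article reports the actual figures); only the ordering, with arrest producing the lowest rate, mirrors the published result.

```python
# Hypothetical counts per arm: (suspects assigned, repeat offenders in 6 months)
arms = {
    "arrest":   (92, 12),
    "separate": (108, 26),
    "warn":     (114, 21),
}

# Recidivism rate = repeat offenders / suspects assigned to that arm.
recidivism = {arm: repeat / n for arm, (n, repeat) in arms.items()}

for arm, rate in sorted(recidivism.items(), key=lambda kv: kv[1]):
    print(f"{arm:>8}: {rate:.1%} repeat offending")
# The arm with the lowest rate ("arrest" in this invented data) is the
# comparison that drove the mandatory-arrest policy debate.
```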

As the Minneapolis study suggests, perhaps the most important problem with experiments is that their results are not generalizable beyond the specific subjects studied. The subjects in most psychology experiments, for example, are college students, who are not typical of average Americans: they are younger, more educated, and more likely to be middle class. Despite this problem, experiments in psychology and other social sciences have given us very valuable insights into the sources of attitudes and behavior.

Observational Studies and Intensive Interviewing

Observational research, also called field research, is a staple of sociology. Sociologists have long gone into the field to observe people and social settings, and the result has been many rich descriptions and analyses of behavior in juvenile gangs, bars, urban street corners, and even whole communities.

Observational studies consist of both participant observation and nonparticipant observation. Their names describe how they differ. In participant observation, the researcher is part of the group that she or he is studying. The researcher thus spends time with the group and might even live with them for a while. Several classical sociological studies of this type exist, many of them involving people in urban neighborhoods (Liebow, 1967, 1993; Whyte, 1943). Participant researchers must try not to let their presence influence the attitudes or behavior of the people they are observing. In nonparticipant observation, the researcher observes a group of people but does not otherwise interact with them. If you went to your local shopping mall to observe, say, whether people walking with children looked happier than people without children, you would be engaging in nonparticipant observation.

A related type of research design is intensive interviewing. Here a researcher does not necessarily observe a group of people in their natural setting but rather sits down with them individually and interviews them at great length, often for one or two hours or even longer. The researcher typically records the interview and later transcribes it for analysis. The advantages and disadvantages of intensive interviewing are similar to those for observational studies: intensive interviewing provides much information about the subjects being interviewed, but the results of such interviewing cannot necessarily be generalized beyond the subjects.

A classic example of field research is Kai T. Erikson’s Everything in Its Path (1976), a study of the loss of community bonds in the aftermath of a flood in a West Virginia mining community, Buffalo Creek. The flood occurred when an artificial dam composed of mine waste gave way after days of torrential rain; the local mining company had allowed the dam to build up in violation of federal law. When it collapsed, 132 million gallons of water rushed through the valley, destroying several thousand homes in seconds and killing 125 people. Some 2,500 other people were rendered instantly homeless. Erikson was called in by the lawyers representing the survivors to document the sociological effects of their loss of community, and the book he wrote remains a moving account of how the destruction of the Buffalo Creek way of life profoundly affected the daily lives of its residents.

[Figure: a man interviewing a woman on video. Fellowship of the Rich – Interview – CC BY-NC-ND 2.0.]

Intensive interviewing can yield in-depth information about the subjects who are interviewed, but the results of this research design cannot necessarily be generalized beyond these subjects.

Similar to experiments, observational studies cannot automatically be generalized to other settings or members of the population. But in many ways they provide a richer account of people’s lives than surveys do, and they remain an important method of sociological research.

Existing Data

Sometimes sociologists do not gather their own data but instead analyze existing data that someone else has gathered. The U.S. Census Bureau, for example, gathers data on all kinds of areas relevant to the lives of Americans, and many sociologists analyze census data on such topics as poverty, employment, and illness. Sociologists interested in crime and the legal system may analyze data from court records, while medical sociologists often analyze data from patient records at hospitals. Analysis of existing data such as these is called secondary data analysis. Its advantage to sociologists is that someone else has already spent the time and money to gather the data. A disadvantage is that the data set being analyzed may not contain data on all the variables in which a sociologist may be interested or may contain data on variables that are not measured in ways the sociologist might prefer.
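In practice, secondary data analysis often begins with a cross-tabulation of someone else's records. The sketch below uses a tiny invented extract of census-style microdata (the variable names and values are hypothetical) to compute a poverty rate by employment status, the kind of calculation a sociologist might run on real census files.

```python
import csv
import io
from collections import defaultdict

# Invented extract of census-style microdata; in practice this would be a
# file downloaded from an agency such as the U.S. Census Bureau.
raw = """employed,below_poverty
yes,no
yes,no
yes,yes
no,yes
no,yes
no,no
"""

counts = defaultdict(lambda: [0, 0])   # status -> [in poverty, total]
for row in csv.DictReader(io.StringIO(raw)):
    counts[row["employed"]][1] += 1
    if row["below_poverty"] == "yes":
        counts[row["employed"]][0] += 1

for status, (poor, total) in counts.items():
    print(f"employed={status}: poverty rate {poor / total:.0%}")
```

The convenience is clear: no fieldwork was needed. The drawback noted above is equally clear: the analysis is limited to whatever variables, categories, and measurement choices the original agency recorded.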

Nonprofit organizations often analyze existing data, usually gathered by government agencies, to get a better understanding of the social issue with which an organization is most concerned. They then use their analysis to help devise effective social policies and strategies for dealing with the issue. The “Learning From Other Societies” box discusses a nonprofit organization in Canada that analyzes existing data for this purpose.

Learning From Other Societies

Social Research and Social Policy in Canada

In several nations beyond the United States, nonprofit organizations often use social science research, including sociological research, to develop and evaluate various social reform strategies and social policies. Canada is one of these nations. Information on Canadian social research organizations can be found at http://www.canadiansocialresearch.net/index.htm.

The Canadian Research Institute for Social Policy (CRISP) at the University of New Brunswick is one of these organizations. According to its Web site (http://www.unb.ca/crisp/index.php), CRISP is “dedicated to conducting policy research aimed at improving the education and care of Canadian children and youth…and supporting low-income countries in their efforts to build research capacity in child development.” To do this, CRISP analyzes data from large data sets, such as the Canadian National Longitudinal Survey of Children and Youth, and it also evaluates policy efforts at the local, national, and international levels.

A major concern of CRISP has been developmental problems in low-income children and teens. These problems are the focus of a CRISP project called Raising and Leveling the Bar: A Collaborative Research Initiative on Children’s Learning, Behavioral, and Health Outcomes. At the time of this writing, this project involved a team of five senior researchers and almost two dozen younger scholars. CRISP notes that Canada may have the most complete data on child development in the world but that much more research with these data needs to be performed to help inform public policy in the area of child development. CRISP’s project aims to use these data to help achieve the following goals, as listed on its Web site: (a) safeguard the healthy development of infants, (b) strengthen early childhood education, (c) improve schools and local communities, (d) reduce socioeconomic segregation and the effects of poverty, and (e) create a family enabling society (http://www.unb.ca/crisp/rlb.html). This project has produced many policy briefs, journal articles, and popular press articles to educate varied audiences about what the data on children’s development suggest for child policy in Canada.

Key Takeaways

  • The major types of sociological research include surveys, experiments, observational studies, and the use of existing data.
  • Surveys are very common and allow researchers to gather much information on respondents, although that information is relatively superficial. The results of surveys that use random samples can be generalized to the population that the sample represents.
  • Observational studies are also very common and enable in-depth knowledge of a small group of people. Because the samples of these studies are not random, the results cannot necessarily be generalized to a population.
  • Experiments are much less common in sociology than in psychology. When field experiments are conducted in sociology, they can yield valuable information because of their experimental design.

For Your Review

  • Write a brief essay in which you outline the various kinds of surveys and discuss the advantages and disadvantages of each type.
  • Suppose you wanted to study whether gender affects happiness. Write a brief essay that describes how you would do this either with a survey or with an observational study.

Erikson, K. T. (1976). Everything in its path: Destruction of community in the Buffalo Creek flood. New York, NY: Simon and Schuster.

Liebow, E. (1967). Tally’s corner. Boston, MA: Little, Brown.

Liebow, E. (1993). Tell them who I am: The lives of homeless women. New York, NY: Free Press.

Sherman, L. W. (1992). Policing domestic violence: Experiments and dilemmas. New York, NY: Free Press.

Sherman, L. W., & Berk, R. A. (1984). The specific deterrent effects of arrest for domestic assault. American Sociological Review, 49, 261–272.

Whyte, W. F. (1943). Street corner society: The social structure of an Italian slum. Chicago, IL: University of Chicago Press.

Sociology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Sociology Notes by Sociology.Institute

Experimental Design in Sociology: Techniques and Limitations


Have you ever wondered how sociologists manage to unravel complex social behaviors and structures? Unlike chemists or physicists, who can often observe phenomena under tightly controlled conditions, sociologists face a daunting task. They must design experiments within the unpredictable and multifaceted landscape of human society. Let’s embark on a journey through the intricate process of experimental design in sociology, discovering how it compares to the natural sciences and the innovative approaches required to glean valuable insights.

The nature of experimentation in sociology

At its core, experimentation is about isolating variables to test hypotheses. In the natural sciences, this often involves controlled laboratory settings where extraneous factors can be held constant. In sociology, however, the ‘laboratory’ is the real world itself, teeming with variables that are challenging to control. This means that sociologists must employ a variety of experimental designs to ensure their findings are valid and reliable.

Comparing social sciences to natural sciences

The first step in understanding experimental design in sociology is to appreciate the fundamental differences from natural sciences. While a biologist can observe the effects of a drug on a group of cells, a sociologist might look at the impact of a policy change on a community. The latter involves variables like human behavior, social norms, and cultural contexts that are less predictable and harder to measure.

Adapting experimental design to social contexts

  • Randomized Controlled Trials (RCTs): Borrowed from medicine, RCTs are used in sociology to randomly assign participants to a treatment or control group, aiming to eliminate bias.
  • Field Experiments: These take place in natural settings, allowing researchers to observe real-world interactions while still maintaining some level of control over the experimental conditions.
  • Quasi-Experiments: Often used when random assignment is not possible, quasi-experiments compare groups that already exist in the world, such as schools or communities.
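Because a quasi-experiment cannot randomize, researchers often compare changes over time across the pre-existing groups, a difference-in-differences calculation. All figures below are invented for illustration.

```python
# Invented average outcomes (say, a school test average) for two
# pre-existing communities: one exposed to a policy change, one not.
policy = {"before": 61.0, "after": 68.5}
comparison = {"before": 60.0, "after": 63.0}

change_policy = policy["after"] - policy["before"]               # 7.5
change_comparison = comparison["after"] - comparison["before"]   # 3.0

# Subtracting the comparison community's change strips out trends shared
# by both groups, leaving an estimate of the policy's own effect.
did = change_policy - change_comparison
print(f"Difference-in-differences estimate: {did:.1f} points")   # 4.5
```

The subtraction stands in for the control that random assignment would otherwise provide, though it only works if the two communities would have trended alike absent the policy.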

Designing social experiments with precision

When designing an experiment, sociologists must consider numerous factors to ensure the results will be meaningful. They must determine how to measure complex social outcomes, decide which variables to manipulate, and figure out how to account for the myriad influences that can affect human behavior.

Examples of tailored experimental designs

  • Case Studies: In-depth investigations of a single group or event can offer insights into broader social phenomena.
  • Longitudinal Studies: Following the same subjects over a period of time can reveal how social processes unfold and change.
  • Comparative Studies: Looking at different groups or societies can help isolate the cultural or structural factors that influence behavior.

Innovative approaches to social inquiry

Sociologists are continually refining their methods to better understand the intricacies of human society. They employ cutting-edge technologies, like big data analytics and social media monitoring, to track patterns and trends that were previously invisible. They also use participatory methods, engaging with communities to gain a deeper understanding of social dynamics from an insider’s perspective.

Challenges and limitations in social experiments

Despite the creativity in experimental design, sociologists encounter significant challenges. Ethical considerations often limit the types of experiments that can be conducted, and the ever-changing nature of society means that results may not be generalizable beyond a specific context or moment in time. Moreover, the complexity of social phenomena can make it difficult to draw clear, causal conclusions.

Experimental design in sociology is a balancing act between the rigor of the scientific method and the flexibility required to study dynamic social entities. While the challenges are significant, the potential insights that well-designed social experiments can offer are invaluable. They can lead to better policies, deeper understandings of social issues, and ultimately, a more equitable and informed society.

How do you think sociologists can further improve their experimental designs to tackle the complexities of human society? Can you envision a future where social experiments provide even more accurate and impactful insights?



The Principles of Experimental Design and Their Application in Sociology

Annual Review of Sociology, Vol. 39, pp. 27-49, 2013

Posted: 27 Jul 2013

Michelle Jackson

Stanford University - Institute for Research in the Social Sciences

University of Oxford - Nuffield Department of Medicine

Date Written: July 2013

In light of an increasing interest in experimental work, we provide a review of some of the general issues involved in the design of experiments and illustrate their relevance to sociology and to other areas of social science of interest to sociologists. We provide both an introduction to the principles of experimental design and examples of influential applications of design for different types of social science research. Our aim is twofold: to provide a foundation in the principles of design that may be useful to those planning experiments and to provide a critical overview of the range of applications of experimental design across the social sciences.



Experimental Design

Ethics, Integrity, and the Scientific Method

  • Reference work entry
  • First Online: 02 April 2020


  • Jonathan Lewis


Experimental design is one aspect of a scientific method. A well-designed, properly conducted experiment aims to control variables in order to isolate and manipulate causal effects and thereby maximize internal validity, support causal inferences, and guarantee reliable results. Traditionally employed in the natural sciences, experimental design has become an important part of research in the social and behavioral sciences. Experimental methods are also endorsed as the most reliable guides to policy effectiveness. Through a discussion of some of the central concepts associated with experimental design, including controlled variation and randomization, this chapter will provide a summary of key ethical issues that tend to arise in experimental contexts. In addition, by exploring assumptions about the nature of causation and by analyzing features of causal relationships, systems, and inferences in social contexts, this chapter will summarize the ways in which experimental design can undermine the integrity of not only social and behavioral research but policies implemented on the basis of such research.





© 2020 Springer Nature Switzerland AG


Lewis, J. (2020). Experimental Design. In: Iphofen, R. (eds) Handbook of Research Ethics and Scientific Integrity. Springer, Cham. https://doi.org/10.1007/978-3-030-16759-2_19


8.1 Experimental design: What is it and when should it be used?

Learning objectives.

  • Define experiment
  • Identify the core features of true experimental designs
  • Describe the difference between an experimental group and a control group
  • Identify and describe the various types of true experimental designs

Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.


Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.

Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:

  • random assignment of participants into experimental and control groups
  • a “treatment” (or intervention) provided to the experimental group
  • measurement of the effects of the treatment in a post-test administered to both groups

Some true experiments are more complex.  Their designs can also include a pre-test and can have more than two groups, but these are the minimum requirements for a design to be a true experiment.

Experimental and control groups

In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group , also known as the treatment group) and another that does not receive the intervention (the control group ). Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between the experimental and control groups are due to random chance. We will address more of the logic behind random assignment in the next section.
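Random assignment itself takes only a few lines of code. The sketch below (a hypothetical helper, not from the chapter; any genuinely random process would do) shuffles a participant list and splits it in half, so that chance alone determines who lands in each group:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the participant list and split it in half, so that chance
    alone determines group membership (hypothetical helper, not from the
    chapter)."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Assign 20 participants (here just ID numbers) to two groups of 10.
experimental, control = randomly_assign(range(20), seed=42)
```

Because every participant is equally likely to end up in either group, pre-existing differences between people are spread across groups by chance rather than by any systematic factor.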

Treatment or intervention

In an experiment, the independent variable is the intervention being tested, for example, a therapeutic technique, prevention program, or access to some service or support. Though it is less common in social work research, social science experiments may instead use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.

In some cases, it may be immoral to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their control group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than normal treatment, which is useful information.

The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects her intervention to decrease the number of binge eating episodes reported by participants. Thus, she must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test .  In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.

Types of experimental design

Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.
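The chronological steps above can be simulated end to end. The sketch below uses invented numbers (a symptom score with a built-in treatment effect, not data from any real study) to show how change from pretest to posttest is compared between the experimental and control groups:

```python
import random
from statistics import mean

def run_classic_experiment(n=100, true_effect=-3.0, seed=1):
    """Simulate the classic experimental design end to end:
    sample -> random assignment -> pretest -> intervention -> posttest.
    All numbers are invented for illustration."""
    rng = random.Random(seed)
    pretest = [rng.gauss(20, 4) for _ in range(n)]   # baseline symptom scores
    ids = list(range(n))
    rng.shuffle(ids)                                 # random assignment
    exp_ids = set(ids[: n // 2])
    # Only the experimental group receives the intervention's effect;
    # everyone gets a little measurement noise on the posttest.
    posttest = [
        score + (true_effect if i in exp_ids else 0.0) + rng.gauss(0, 1)
        for i, score in enumerate(pretest)
    ]
    exp_change = mean(posttest[i] - pretest[i] for i in exp_ids)
    ctrl_change = mean(posttest[i] - pretest[i] for i in range(n) if i not in exp_ids)
    return exp_change, ctrl_change

exp_change, ctrl_change = run_classic_experiment()
```

With a built-in effect of −3 symptom points, the experimental group's average change should be clearly more negative than the control group's, which hovers near zero.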

Figure 8.1 Steps in classic experimental design: sampling → random assignment → pretest → intervention → posttest

An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.

In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963).  The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects , in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.

Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design , the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.

Table 8.1 Solomon four-group design
Group     Pretest   Intervention   Posttest
Group 1   X         X              X
Group 2   X                        X
Group 3             X              X
Group 4                            X

Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.

Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or to randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs, which we will discuss in the next section, can be used. However, their lower rigor relative to true experimental designs leaves their conclusions more open to critique.

Experimental design in macro-level research

You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals. For example, it is unlikely that a researcher could randomly assign some states to decriminalize recreational marijuana and others not to in order to assess the effects of the policy change. There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. In that study, the wait list for Medicaid in Oregon was so long that state officials conducted a lottery to determine who from the wait list would receive Medicaid (Baicker et al., 2013). Researchers treated the lottery as a natural experiment that included random assignment: people selected to receive Medicaid were the experimental group, and those who remained on the wait list were the control group. Macro-level experiments face some of the same practical complications as other experiments. For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.

Key Takeaways

  • True experimental designs require random assignment.
  • Control groups do not receive an intervention, and experimental groups receive an intervention.
  • The basic components of the classic experimental design include a pretest, posttest, control group, and experimental group.
  • Testing effects may cause researchers to use variations on the classic experimental design.
Glossary

  • Classic experimental design: uses random assignment, an experimental and control group, as well as pre- and posttesting
  • Control group: the group in an experiment that does not receive the intervention
  • Experiment: a method of data collection designed to test hypotheses under controlled conditions
  • Experimental group: the group in an experiment that receives the intervention
  • Posttest: a measurement taken after the intervention
  • Posttest-only control group design: a type of experimental design that uses random assignment and an experimental and control group, but does not use a pretest
  • Pretest: a measurement taken prior to the intervention
  • Random assignment: using a random process to assign people into experimental and control groups
  • Solomon four-group design: uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
  • Testing effects: when a participant's scores on a measure change because they have already been exposed to it
  • True experiments: a group of experimental designs that contain independent and dependent variables, pretesting and posttesting, and experimental and control groups

Image attributions

exam scientific experiment by mohamed_hassan CC-0

Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



10 Experimental research

Experimental research, often considered the 'gold standard' of research designs, is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality), due to its ability to link cause and effect through treatment manipulation while controlling for the spurious effects of extraneous variables.

Experimental research is best suited for explanatory research, rather than for descriptive or exploratory research, where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments , conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial setting in which the study is conducted may not reflect the real world. Field experiments , conducted in field settings such as a real organisation, can be high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli, called a treatment (the treatment group ), while other subjects are not given such a stimulus (the control group ). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a medical condition such as dementia, a sample of dementia patients may be randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (the control group). Here, the first two groups are experimental groups and the third is the control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug appears effective. We can also compare the conditions of the high and low dosage experimental groups to determine whether the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
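The distinction between the two processes can be shown in a few lines of code. In this sketch (population size, sample size, and seed are invented for illustration), random selection draws the study sample from the sampling frame, and random assignment then divides that sample into groups:

```python
import random

rng = random.Random(7)
population = list(range(1000))   # the sampling frame (participant IDs)

# Random selection: draw the study sample from the population.
# This is what supports external validity (generalisability).
sample = rng.sample(population, 40)

# Random assignment: split the drawn sample into treatment and control.
# This is what supports internal validity (group equivalence).
shuffled = sample[:]
rng.shuffle(shuffled)
treatment, control = shuffled[:20], shuffled[20:]
```

Note that the two operations are independent: a study can have either one without the other, which is why they bear on different kinds of validity.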

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
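Regression to the mean is easy to reproduce in a simulation. In the hedged sketch below (all parameters invented), subjects selected for extreme scores on a noisy first measurement score closer to average on a second, equally noisy measurement, even though their underlying ability never changed:

```python
from random import Random
from statistics import mean

def regression_to_mean(n=10000, seed=3):
    """Simulate two noisy measurements of the same stable ability, select
    the top decile on measurement one, and compare that group's average
    score across the two occasions."""
    rng = Random(seed)
    ability = [rng.gauss(0, 1) for _ in range(n)]   # true, unchanging ability
    t1 = [a + rng.gauss(0, 1) for a in ability]     # pretest = ability + noise
    t2 = [a + rng.gauss(0, 1) for a in ability]     # posttest = ability + fresh noise
    cutoff = sorted(t1)[int(0.9 * n)]
    top = [i for i in range(n) if t1[i] >= cutoff]  # selected for extreme pretest
    return mean(t1[i] for i in top), mean(t2[i] for i in top)

pre_mean, post_mean = regression_to_mean()
```

The selected group's posttest mean falls back toward the overall mean (while staying above it), purely because part of their extreme pretest scores was noise. This is why comparing a treated extreme-scoring group against a randomized control group, rather than against its own pretest, protects against the regression threat.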

Two-group experimental designs


Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables measured again (posttest). The notation of this design is shown in Figure 10.1.

Figure 10.1 Pretest-posttest control group design. In standard design notation: R  O1  X  O2 for the treatment group and R  O3  O4 for the control group, where R denotes random assignment, O an observation (measurement), and X the treatment.

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.

Posttest-only control group design . This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
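As a sketch of this analysis, the simulation below (group means, effect size, and sample size are all assumed numbers) estimates the treatment effect as the difference in posttest means and computes a pooled two-sample t statistic, which for two groups is equivalent to a one-way ANOVA since F = t²:

```python
import random
import statistics

random.seed(1)
n = 400
# Hypothetical posttest scores: the treatment is assumed to add about 5 points.
control = [random.gauss(50, 10) for _ in range(n)]    # O2: control posttest
treatment = [random.gauss(55, 10) for _ in range(n)]  # O1: treatment posttest

# Treatment effect E = O1 - O2, the difference in posttest means.
effect = statistics.mean(treatment) - statistics.mean(control)

# Pooled two-sample t statistic; with two groups this is equivalent to a
# one-way ANOVA, since F = t**2.
pooled_sd = ((statistics.variance(treatment) + statistics.variance(control)) / 2) ** 0.5
t_stat = effect / (pooled_sd * (2 / n) ** 0.5)
print(f"estimated effect: {effect:.1f}, t = {t_stat:.1f}")
```

In practice a statistics package would report the same comparison with an exact p-value.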

Covariance design. Sometimes, measures of the dependent variables may be influenced by extraneous variables called covariates: variables that are not of central interest to the study but should nevertheless be controlled so that the effects of the independent variables can be detected more accurately. A covariance design is a special type of pretest-posttest control group design in which the pretest measure is a measurement of the covariates of interest rather than of the dependent variables.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups:

\[E = (O_{1} - O_{2})\,.\]
Due to the presence of covariates, the appropriate statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the controlling of covariates. Covariance designs can also be extended to pretest-posttest control group designs.

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need designs with four or more groups. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The simplest factorial design is a 2 × 2 factorial design, which has two factors, each with two levels. For instance, suppose you want to compare learning outcomes across two types of instruction (in-class and online), and also examine whether those outcomes vary with instructional time (one and a half or three hours per week). Here instructional type and instructional time are the two factors, each with two levels, yielding four treatment groups in total.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor at all levels of the other factors. The null case (baseline), against which main effects are evaluated, is no change in the dependent variable across factor levels. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater with three hours per week of instructional time than with one and a half hours per week, then there is an interaction effect between instructional type and instructional time on learning outcomes. Note that interaction effects dominate main effects: it is not meaningful to interpret main effects if interaction effects are significant.
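Main and interaction effects can be read off the cell means. The sketch below uses hypothetical cell means for the instructional type by instructional time example discussed above:

```python
# Cell means for a hypothetical 2 x 2 factorial design (all numbers assumed):
# factor 1 is instructional type, factor 2 is instructional time (hours/week).
means = {
    ("in-class", 1.5): 60.0,
    ("in-class", 3.0): 70.0,
    ("online",   1.5): 62.0,
    ("online",   3.0): 84.0,
}

# Main effect of instructional type: difference between its level means,
# averaged over the levels of the other factor.
main_type = ((means[("online", 1.5)] + means[("online", 3.0)])
             - (means[("in-class", 1.5)] + means[("in-class", 3.0)])) / 2

# Main effect of instructional time, averaged over instructional type.
main_time = ((means[("in-class", 3.0)] + means[("online", 3.0)])
             - (means[("in-class", 1.5)] + means[("online", 1.5)])) / 2

# Interaction: the effect of extra time is larger for online instruction
# (22 points) than for in-class instruction (10 points).
time_effect_inclass = means[("in-class", 3.0)] - means[("in-class", 1.5)]
time_effect_online = means[("online", 3.0)] - means[("online", 1.5)]
interaction = time_effect_online - time_effect_inclass

print(main_type, main_time, interaction)  # non-zero interaction here
```

With the assumed numbers, the sizable interaction (12 points) would caution against interpreting either main effect in isolation.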

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised blocks design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Randomised blocks design
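The variance-reduction logic of blocking can be sketched with a small simulation (all baselines and effect sizes are assumed numbers): block-to-block differences dominate the raw variance, but estimating the treatment effect within each block and then averaging removes that noise:

```python
import random
import statistics

random.seed(3)
# Two homogeneous blocks with very different baselines (assumed numbers):
# university students (around 40) and working professionals (around 70).
# Within each block, subjects are randomly split; the treatment adds 5 points.
data = []
for block, baseline in [("student", 40.0), ("professional", 70.0)]:
    for _ in range(200):
        treated = random.random() < 0.5
        y = baseline + (5.0 if treated else 0.0) + random.gauss(0, 4)
        data.append((block, treated, y))

# Ignoring blocks, outcome variance is dominated by block differences...
sd_total = statistics.stdev(y for _, _, y in data)
# ...but within a single block the noise is small.
sd_within = statistics.stdev(y for b, tr, y in data if b == "student" and not tr)

def block_effect(name):
    t = [y for b, tr, y in data if b == name and tr]
    c = [y for b, tr, y in data if b == name and not tr]
    return statistics.mean(t) - statistics.mean(c)

# Averaging the within-block estimates recovers the treatment effect
# without the block-to-block 'noise'.
est = (block_effect("student") + block_effect("professional")) / 2
print(f"total SD {sd_total:.1f}, within-block SD {sd_within:.1f}, effect {est:.1f}")
```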

Solomon four-group design. In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of the posttest-only and pretest-posttest control group designs, and is intended to test for the potential biasing effect of pretest measurement on posttest measures, which tends to occur in pretest-posttest designs but not in posttest-only designs. The design notation is shown in Figure 10.6.

Solomon four-group design
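The logic of the Solomon design can be sketched with a simulation in which, by assumption, merely taking the pretest sensitises subjects and adds a few points to their posttest; comparing the two treatment groups (pretested versus not) isolates that pretest effect:

```python
import random
import statistics

random.seed(5)
n = 200

def posttest(treated, pretested):
    # Assumed effects: the treatment adds 6 points; merely taking the pretest
    # adds 3 points (the bias the Solomon design is built to detect).
    return 50 + (6 if treated else 0) + (3 if pretested else 0) + random.gauss(0, 5)

# The four Solomon groups: (treated?, pretested?) -> posttest scores.
groups = {
    (treated, pretested): [posttest(treated, pretested) for _ in range(n)]
    for treated in (True, False) for pretested in (True, False)
}

# If pretesting had no effect, the two treatment groups would agree.
sensitisation = (statistics.mean(groups[(True, True)])
                 - statistics.mean(groups[(True, False)]))
print(f"estimated pretest effect: {sensitisation:.1f}")
```

The same comparison between the two control groups provides a second, treatment-free estimate of the pretest effect.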

Switched replication design. This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias . Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Many true experimental designs can be converted to quasi-experimental designs by replacing random assignment (R) with non-equivalent, non-random group assignment (N). For instance, the quasi-experimental version of the pretest-posttest control group design is called the non-equivalent groups design (NEGD).

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design . This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
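The logic of the RD design can be sketched with a simulation (all numbers assumed). A naive comparison of treatment and control means is biased by the systematic non-equivalence of the groups, while comparing scores in narrow bands on either side of the cut-off, a crude stand-in for the regression fits used in practice, comes much closer to the true effect:

```python
import random
import statistics

random.seed(7)
# Hypothetical remedial program (all numbers assumed): students with a pretest
# score below 50 receive the program, which raises posttest scores by 8 points.
effect_true = 8.0
students = []
for _ in range(5000):
    pre = random.gauss(50, 10)
    treated = pre < 50                     # assignment purely by the cut-off
    post = 0.8 * pre + 10 + (effect_true if treated else 0) + random.gauss(0, 3)
    students.append((pre, treated, post))

# Naive comparison of group means is badly biased: the treated group
# started out systematically weaker.
naive = (statistics.mean(p for pre, t, p in students if t)
         - statistics.mean(p for pre, t, p in students if not t))

# RD estimate: compare posttest scores just below vs. just above the cut-off.
below = [p for pre, t, p in students if 48 <= pre < 50]
above = [p for pre, t, p in students if 50 <= pre < 52]
rd = statistics.mean(below) - statistics.mean(above)
print(f"naive difference: {naive:.1f}, RD estimate: {rd:.1f}")
```

With these assumptions the naive estimate even has the wrong sign, while the discontinuity at the cut-off approximately recovers the program's effect.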

Proxy pretest design. This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Proxy pretest design

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data is not available from the same subjects.

Separate pretest-posttest samples design

Non-equivalent dependent variable (NEDV) design. This is a single-group pretest-posttest design with two outcome measures, where one measure is theoretically expected to be influenced by the treatment and the other is not. An interesting variation of the NEDV design is a pattern-matching NEDV design , which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique, based on the degree of correspondence between theoretical and observed patterns, is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’etre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.




The General Social Survey, described earlier, is an example of a face-to-face survey, in which interviewers meet with respondents to ask them questions. This type of survey can yield much information, because interviewers typically spend at least an hour asking their questions, and it usually achieves a high response rate (the percentage of all people in the sample who agree to be interviewed), which is important for generalizing the survey’s results to the entire population. On the downside, this type of survey can be very expensive and time-consuming to conduct.

Because of these drawbacks, sociologists and other researchers have turned to telephone surveys. Most Gallup Polls are conducted over the telephone. Computers do random-digit dialing, which results in a random sample of all telephone numbers being selected. Although the response rate and the number of questions asked are both lower than in face-to-face surveys (people can just hang up the phone at the outset or let their answering machine take the call), the ease and low expense of telephone surveys are making them increasingly popular.
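Random-digit dialing can be sketched in a few lines; the area code below is an arbitrary placeholder, and real RDD procedures also sample area codes and exchanges from a frame, but the core idea is that every number, listed or unlisted, is equally likely to be drawn:

```python
import random

random.seed(0)

# A minimal sketch of random-digit dialing: generate random seven-digit local
# numbers within one (hypothetical) area code, so listed and unlisted numbers
# are equally likely to be selected.
def random_phone_number(area_code="612"):
    digits = "".join(str(random.randrange(10)) for _ in range(7))
    return f"({area_code}) {digits[:3]}-{digits[3:]}"

sample = [random_phone_number() for _ in range(5)]
print(sample)
```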

Mailed surveys, done by mailing questionnaires to respondents, are still used, but not as often as before. Compared with face-to-face surveys, mailed questionnaires are less expensive and time consuming but have lower response rates, because many people simply throw out the questionnaire along with other junk mail.

Whereas mailed surveys are becoming less popular, surveys done over the Internet are becoming more popular, as they can reach many people at very low expense. A major problem with Web surveys is that their results cannot necessarily be generalized to the entire population, because not everyone has access to the Internet.

Experiments

Experiments are the primary form of research in the natural and physical sciences, but in the social sciences they are for the most part found only in psychology. Some sociologists still use experiments, however, and they remain a powerful tool of social research.

The major advantage of experiments is that the researcher can be fairly sure of a cause-and-effect relationship because of the way the experiment is set up. Although many different experimental designs exist, the typical experiment consists of an experimental group and a control group , with subjects randomly assigned to either group. The researcher makes a change to the experimental group that is not made to the control group. If the two groups differ later in some variable, then it is safe to say that the condition to which the experimental group was subjected was responsible for the difference that resulted.


Most experiments take place in the laboratory, which for psychologists may be a room with a one-way mirror, but some experiments occur in “the field,” or in a natural setting. In Minneapolis, Minnesota, in the early 1980s, sociologists were involved in a much-discussed field experiment sponsored by the federal government. The researchers wanted to see whether arresting men for domestic violence made it less likely that they would commit such violence again. To test this hypothesis, the researchers had police do one of the following after arriving at the scene of a domestic dispute: they either arrested the suspect, separated him from his wife or partner for several hours, or warned him to stop but did not arrest or separate him. The researchers then determined the percentage of men in each group who committed repeated domestic violence during the next 6 months and found that those who were arrested had the lowest rate of recidivism, or repeat offending (Sherman & Berk, 1984). This finding led many jurisdictions across the United States to adopt a policy of mandatory arrest for domestic violence suspects. However, replications of the Minneapolis experiment in other cities found that arrest sometimes reduced recidivism for domestic violence but also sometimes increased it, depending on which city was being studied and on certain characteristics of the suspects, including whether they were employed at the time of their arrest (Sherman, 1992).

As the Minneapolis study suggests, perhaps the most important problem with experiments is that their results are not generalizable beyond the specific subjects studied. The subjects in most psychology experiments, for example, are college students, who are not typical of average Americans: they are younger, more educated, and more likely to be middle class. Despite this problem, experiments in psychology and other social sciences have given us very valuable insights into the sources of attitudes and behavior.

Observational Studies and Intensive Interviewing

Observational research , also called field research, is a staple of sociology. Sociologists have long gone into the field to observe people and social settings, and the result has been many rich descriptions and analyses of behavior in juvenile gangs, bars, urban street corners, and even whole communities.

Observational studies consist of both participant observation and nonparticipant observation . Their names describe how they differ. In participant observation, the researcher is part of the group that she or he is studying. The researcher thus spends time with the group and might even live with them for a while. Several classical sociological studies of this type exist, many of them involving people in urban neighborhoods (Liebow, 1967, 1993; Whyte, 1943).

Once inside a group, some researchers spend months or even years pretending to be one of the people they are observing. However, as observers, they cannot get too involved. They must keep their purpose in mind and apply the sociological perspective. That way, they illuminate social patterns that are often unrecognized. Because information gathered during participant observation is mostly qualitative, rather than quantitative, the end results are often descriptive or interpretive. The researcher might present findings in an article or book and describe what he or she witnessed and experienced.

This type of research is what journalist Barbara Ehrenreich conducted for her book Nickel and Dimed . One day over lunch with her editor, as the story goes, Ehrenreich mentioned an idea. How can people exist on minimum-wage work? How do low-income workers get by? she wondered. Someone should do a study. To her surprise, her editor responded, Why don’t you do it?

That’s how Ehrenreich found herself joining the ranks of the working class. For several months, she left her comfortable home and lived and worked among people who lacked, for the most part, higher education and marketable job skills. Undercover, she applied for and worked minimum wage jobs as a waitress, a cleaning woman, a nursing home aide, and a retail chain employee. During her participant observation, she used only her income from those jobs to pay for food, clothing, transportation, and shelter.

She discovered the obvious, that it’s almost impossible to get by on minimum wage work. She also experienced and observed attitudes many middle and upper-class people never think about. She witnessed firsthand the treatment of working class employees. She saw the extreme measures people take to make ends meet and to survive. She described fellow employees who held two or three jobs, worked seven days a week, lived in cars, could not pay to treat chronic health conditions, got randomly fired, submitted to drug tests, and moved in and out of homeless shelters. She brought aspects of that life to light, describing difficult working conditions and the poor treatment that low-wage workers suffer.

A related type of research design is intensive interviewing . Here a researcher does not necessarily observe a group of people in their natural setting but rather sits down with them individually and interviews them at great length, often for one or two hours or even longer. The researcher typically records the interview and later transcribes it for analysis. The advantages and disadvantages of intensive interviewing are similar to those for observational studies: intensive interviewing provides much information about the subjects being interviewed, but the results of such interviewing cannot necessarily be generalized beyond the subjects.

A classic example of field research is Kai T. Erikson’s Everything in Its Path (1976), a study of the loss of community bonds in the aftermath of a flood in a West Virginia mining community, Buffalo Creek. The flood occurred when an artificial dam composed of mine waste gave way after days of torrential rain. The local mining company had allowed the dam to build up in violation of federal law. When it broke, 132 million gallons of water broke through and destroyed several thousand homes in seconds while killing 125 people. Some 2,500 other people were rendered instantly homeless. Erikson was called in by the lawyers representing the survivors to document the sociological effects of their loss of community, and the book he wrote remains a moving account of how the destruction of the Buffalo Creek way of life profoundly affected the daily lives of its residents.


Similar to experiments, observational studies cannot automatically be generalized to other settings or members of the population. But in many ways they provide a richer account of people’s lives than surveys do, and they remain an important method of sociological research.

Ethnography

Ethnography is the extended observation of the social perspective and cultural values of an entire social setting. Ethnographies involve objective observation of an entire community.

The heart of an ethnographic study focuses on how subjects view their own social standing and how they understand themselves in relation to a community. An ethnographic study might observe, for example, a small U.S. fishing town, an Inuit community, a village in Thailand, a Buddhist monastery, a private boarding school, or an amusement park. These places all have borders. People live, work, study, or vacation within those borders. People are there for a certain reason and therefore behave in certain ways and respect certain cultural norms. An ethnographer would commit to spending a determined amount of time studying every aspect of the chosen place, taking in as much as possible.

A sociologist studying a tribe in the Amazon might watch the way villagers go about their daily lives and then write a paper about it. To observe a spiritual retreat center, an ethnographer might sign up for a retreat and attend as a guest for an extended stay, observe and record data, and collate the material into results.

Existing Data

Sometimes sociologists do not gather their own data but instead analyze existing data that someone else has gathered. The U.S. Census Bureau, for example, gathers data on all kinds of areas relevant to the lives of Americans, and many sociologists analyze census data on such topics as poverty, employment, and illness. Sociologists interested in crime and the legal system may analyze data from court records, while medical sociologists often analyze data from patient records at hospitals. Analysis of existing data such as these is called secondary data analysis . Its advantage to sociologists is that someone else has already spent the time and money to gather the data. A disadvantage is that the data set being analyzed may not contain data on all the variables in which a sociologist may be interested or may contain data on variables that are not measured in ways the sociologist might prefer.

Nonprofit organizations often analyze existing data, usually gathered by government agencies, to get a better understanding of the social issue with which an organization is most concerned. They then use their analysis to help devise effective social policies and strategies for dealing with the issue. The “Learning From Other Societies” box discusses a nonprofit organization in Canada that analyzes existing data for this purpose.

Learning From Other Societies

Social Research and Social Policy in Canada

In several nations beyond the United States, nonprofit organizations often use social science research, including sociological research, to develop and evaluate various social reform strategies and social policies. Canada is one of these nations. Information on Canadian social research organizations can be found at http://www.canadiansocialresearch.net/index.htm .

The Canadian Research Institute for Social Policy (CRISP) at the University of New Brunswick is one of these organizations. According to its Web site ( http://www.unb.ca/crisp/index.php ), CRISP is “dedicated to conducting policy research aimed at improving the education and care of Canadian children and youth…and supporting low-income countries in their efforts to build research capacity in child development.” To do this, CRISP analyzes data from large data sets, such as the Canadian National Longitudinal Survey of Children and Youth, and it also evaluates policy efforts at the local, national, and international levels.

A major concern of CRISP has been developmental problems in low-income children and teens. These problems are the focus of a CRISP project called Raising and Leveling the Bar: A Collaborative Research Initiative on Children’s Learning, Behavioral, and Health Outcomes. This project at the time of this writing involved a team of five senior researchers and almost two dozen younger scholars. CRISP notes that Canada may have the most complete data on child development in the world but that much more research with these data needs to be performed to help inform public policy in the area of child development. CRISP’s project aims to use these data to help achieve the following goals, as listed on its Web site: (a) safeguard the healthy development of infants, (b) strengthen early childhood education, (c) improve schools and local communities, (d) reduce socioeconomic segregation and the effects of poverty, and (e) create a family enabling society ( http://www.unb.ca/crisp/rlb.html ). This project has written many policy briefs, journal articles, and popular press articles to educate varied audiences about what the data on children’s development suggest for child policy in Canada.

Key Takeaways

  • The major types of sociological research include surveys, experiments, observational studies, and the use of existing data.
  • Surveys are very common and allow for the gathering of much information on respondents that is relatively superficial. The results of surveys that use random samples can be generalized to the population that the sample represents.
  • Observational studies are also very common and enable in-depth knowledge of a small group of people. Because the samples of these studies are not random, the results cannot necessarily be generalized to a population.
  • Experiments are much less common in sociology than in psychology. When field experiments are conducted in sociology, they can yield valuable information because of their experimental design.

Erikson, K. T. (1976). Everything in its path: Destruction of community in the Buffalo Creek flood . New York, NY: Simon and Schuster.

Liebow, E. (1967). Tally’s corner . Boston, MA: Little, Brown.

Liebow, E. (1993). Tell them who I am: The lives of homeless women . New York, NY: Free Press.

Sherman, L. W. (1992). Policing domestic violence: Experiments and dilemmas . New York, NY: Free Press.

Sherman, L. W., & Berk, R. A. (1984). The specific deterrent effects of arrest for domestic assault. American Sociological Review, 49 , 261–272.

Whyte, W. F. (1943). Street corner society: The social structure of an Italian slum . Chicago, IL: University of Chicago Press.

Key Terms

Survey: a method that gathers its data with the help of a questionnaire that is given to a group of respondents.

Experiment: a procedure typically used to confirm the validity of a hypothesis by comparing the outcomes of one or more treatment groups to a control group on a given measure.

Participant observation: when researchers actively engage in the activity they are studying to understand it better.

Nonparticipant observation: a research technique whereby the researcher watches the subjects of his or her study without taking an active part in the situation under study.

Observational research (field research): a qualitative research method in which a researcher observes a social setting to provide descriptions of a group, society, or organization.

Existing data: empirical information that someone else has gathered.

Introduction to Sociology: Understanding and Changing the Social World Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Understanding Society

Daniel Little

Experimental methods in sociology


An earlier post noted the increasing importance of experimentation in some areas of economics ( link ), and posed the question of whether there is a place for experimentation in sociology as well. Here I’d like to examine that question a bit further.

Let’s begin by asking the simple question: what is an experiment? An experiment is an intervention through which a scientist seeks to identify the possible effects of a given factor or “treatment”. The effect may be thought to be deterministic (whenever X occurs, Y occurs); or it may be probabilistic (the occurrence of X influences the probability of the occurrence of Y). Plainly, the experimental evaluation of probabilistic causal hypotheses requires repeating the experiment a number of times and evaluating the results statistically; whereas a deterministic causal hypothesis can in principle be refuted by a single trial.

In “The Principles of Experimental Design and Their Application in Sociology” ( link ) Michelle Jackson and D.R. Cox provide a simple and logical specification of experimentation:

We deal here with investigations in which the effects of a number of alternative conditions or treatments are to be compared. Broadly, the investigation is an experiment if the investigator controls the allocation of treatments to the individuals in the study and the other main features of the work, whereas it is observational if, in particular, the allocation of treatments has already been determined by some process outside the investigator’s control and detailed knowledge. The allocation of treatments to individuals is commonly labeled manipulation in the social science context. (Jackson and Cox 2013: 28)

There are several relevant kinds of causal claims in sociology that might admit of experimental investigation, corresponding to all four causal linkages implied by the model of Coleman’s boat ( Foundations of Social Theory )—micro-macro, macro-micro, micro-micro, and macro-macro ( link ). Sociologists generally pay close attention to the relationships that exist between structures and social actors, extending in both directions. Hypotheses about causation in the social world require testing or other forms of empirical evaluation through the collection of evidence. It is plausible to ask whether the methods associated with experimentation are available to sociology. In many instances, the answer is, yes.

There appear to be three different kinds of experiments that would possibly make sense in sociology.

  • Experiments evaluating hypotheses about features of human motivation and behavior
  • Experiments evaluating hypotheses about the effects of features of the social environment on social behavior
  • Experiments evaluating hypotheses about the effects of “interventions” on the characteristics of an organization or local institution

First, sociological theories generally make use of more or less explicit theories of agents and their behavior. These theories could be evaluated using laboratory-based design for experimental subjects in specified social arrangements, parallel to existing methods in experimental economics. For example, Durkheim, Goffman, Coleman, and Hedström all provide different accounts of the actors who constitute social phenomena. It is feasible to design experiments along the lines of experimental economics to evaluate the behavioral hypotheses advanced by various sociologists.

Second, sociology is often concerned with the effects of social relationships on social behavior—for example, friendships, authority relations, or social networks. It would appear that these effects can be probed through direct experimentation, where the researcher creates artificial social relationships and observes behavior. Matthew Salganik et al.'s internet-based experiments (2006, 2009) on “culture markets” fall in this category (Hedström 2006). Hedström describes the research by Salganik, Dodds, and Watts (2006) in these terms:

Salganik et al. (2) circumvent many of these problems [of survey-based methodology] by using experimental rather than observational data. They created a Web-based world where more than 14,000 individuals listened to previously unknown songs, rated them, and freely downloaded them if they so desired. Subjects were randomly assigned to different groups. Individuals in only some groups were informed about how many times others in their group had downloaded each song. The experiment assessed whether this social influence had any effects on the songs the individuals seemed to prefer. 
As expected, the authors found that individuals’ music preferences were altered when they were exposed to information about the preferences of others. Furthermore, and more importantly, they found that the extent of social influence had important consequences for the collective outcomes that emerged. The greater the social influence, the more unequal and unpredictable the collective outcomes became. Popular songs became more popular and unpopular songs became less popular when individuals influenced one another, and it became more difficult to predict which songs were to emerge as the most popular ones the more the individuals influenced one another. (787)

Third, some sociologists are especially interested in the effects of micro-context on individual actors and their behavior. Erving Goffman and Harold Garfinkel offer detailed interpretations of the causal dynamics of social interactions at the micro level, and their work appears to be amenable to experimental treatment. Garfinkel ( Studies in Ethnomethodology ), in particular, made use of research methods that are especially suggestive of controlled experimental designs.

Fourth, sociologists are interested in macro-causes of individual social action. For example, sociologists would like to understand the effects of ideologies and normative systems on individual actors, and others would like to understand the effects of differences in large social structures on individual social actors. Weber hypothesized that the Protestant ethic caused a certain kind of behavior. Theoretically it should be possible to establish hypotheses about the kind of influence a broad cultural factor is thought to exercise over individual actors, and then design experiments to evaluate those hypotheses. Given the scope and pervasiveness of these kinds of macro-social factors, it is difficult to see how their effects could be assessed within a laboratory context. However, there are a range of other experimental designs that could be used, including quasi-experiments ( link ) and field experiments and natural experiments ( link ),  in which the investigator designs appropriate comparative groups of individuals in observably different ideological, normative, or social-structural arrangements and observes the differences that can be discerned at the level of social behavior. Does one set of normative arrangements result in greater altruism? Does a culture of nationalism promote citizens’ propensity for aggression against outsiders? Does greater ethnic homogeneity result in higher willingness to comply with taxation, conscription, and other collective duties?

Finally, sociologists are often interested in macro- to macro-causation. For example, consider the claims that “defeat in war leads to weak state capacity in the subsequent peace” or “economic depression leads to xenophobia”. Of course it is not possible to design an experiment in which “defeat in war” is a treatment; but it is possible to develop quasi-experiments or natural experiments that are designed to evaluate this hypothesis. (This is essentially the logic of Theda Skocpol’s (1979) analysis of the causes of social revolution in States and Social Revolutions: A Comparative Analysis of France, Russia, and China.) Or consider a research question in contentious politics: does widespread crop failure give rise to rebellions? Here again, the direct logic of experimentation is generally not available; but the methods articulated in the fields of quasi-experimentation, natural experiments, and field experiments offer an avenue for research designs that have a great deal in common with experimentation. A researcher could compile a dataset for historical China that records weather, crop failure, crop prices, and incidents of rebellion and protest. This dataset could support a “natural experiment” in which each year is assigned to either the “control group” or the “intervention group”: the control group consists of years in which crop harvests were normal, while the intervention group consists of years in which crop harvests were below normal (or below subsistence). The experiment is then a simple one: what is the average incidence of rebellious incidents in control years and in intervention years?
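The crop-failure comparison just described can be sketched in a few lines of Python. All the figures below are invented for illustration (they are not real historical data); the point is simply the comparison of group means between control years and intervention years.

```python
# Hypothetical yearly records: (year, harvest level, rebellion incidents).
# All figures are invented for illustration, not real historical data.
records = [
    (1840, "normal", 2), (1841, "normal", 1), (1842, "below", 5),
    (1843, "below", 7), (1844, "normal", 0), (1845, "below", 4),
    (1846, "normal", 2), (1847, "below", 6),
]

def mean_incidents(records, harvest_level):
    """Average number of rebellion incidents across years with this harvest level."""
    counts = [n for _, level, n in records if level == harvest_level]
    return sum(counts) / len(counts)

# "Control group": years with normal harvests; "intervention group":
# years with below-normal harvests.
control = mean_incidents(records, "normal")
intervention = mean_incidents(records, "below")
effect = intervention - control  # the difference associated with crop failure
```

A real analysis would of course need many more years, controls for confounding factors, and a statistical test of whether the difference could be due to chance.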

So it is clear that causal reasoning that is very similar to the logic of experimentation is common throughout many areas of sociology. That said, the zone of sociological theorizing that is amenable to laboratory experimentation under random selection and a controlled environment is largely in the area of theories of social action and behavior: the reasons actors behave as they do, hypotheses about how their choices would differ under varying circumstances, and (with some ingenuity) how changing background social conditions might alter the behavior of actors. Here there are very direct parallels between sociological investigation and the research done by experimental and behavioral economists like Richard Thaler ( Misbehaving: The Making of Behavioral Economics ). And in this way, sociological experiments have much in common with experimental research in social psychology and other areas of the behavioral sciences.


The Principles of Experimental Design and Their Application in Sociology

In light of an increasing interest in experimental work, we provide a review of some of the general issues involved in the design of experiments and illustrate their relevance to sociology and to other areas of social science of interest to sociologists. We provide both an introduction to the principles of experimental design and examples of influential applications of design for different types of social science research. Our aim is twofold: to provide a foundation in the principles of design that may be useful to those planning experiments and to provide a critical overview of the range of applications of experimental design across the social sciences.


Experiments in Sociology – An Introduction


Last updated on July 27, 2020.

Experiments aim to measure the effect which an independent variable (the ’cause’) has on a dependent variable (‘the effect’).

The key features of an experiment are control over variables, precise measurement, and establishing cause and effect relationships.

The key features of the experiment

It is easiest to explain what an experiment is by using an example from the natural sciences, so I am going to explain experiments further using an example from biology.

An example to illustrate the key features of an experiment


Suppose you grow two tomato plants under identical conditions (the same water, nutrients, soil, and light), except that one plant is kept 5 degrees warmer than the other. You would then collect the tomatoes from each plant at the same time of year (say, sometime in September) and weigh them (weighing is a more accurate way of measuring the amount of tomatoes than counting them); the difference in weight between the two piles of tomatoes would give you the ‘effect’ of the 5-degree temperature difference.

In the above example, the amount of tomatoes is the dependent variable, the temperature is the independent variable, and everything else (the water, nutrients, soil etc. which you control, or keep the same) are the extraneous variables.
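The arithmetic of the tomato example can be sketched in Python. All the weights below are made up for illustration; the point is that the ‘effect’ is simply the difference in the dependent variable between the two groups, once the extraneous variables are held constant.

```python
# Invented tomato yields in kilograms. group_b's plants were grown
# 5 degrees warmer; every extraneous variable is assumed held constant.
group_a = [1.2, 1.4, 1.3, 1.1]  # baseline temperature
group_b = [1.6, 1.7, 1.5, 1.8]  # baseline + 5 degrees

yield_a = sum(group_a)          # dependent variable, measured for group A
yield_b = sum(group_b)          # dependent variable, measured for group B
effect = yield_b - yield_a      # attributed to the independent variable (temperature)
```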

The Role of Hypotheses in Experiments

The point of using a hypothesis is that it helps with accuracy, focusing the researcher on testing the specific relationship between two variables precisely; it also helps with objectivity (see below).

Experiments and Objectivity

A further key feature of experiments is that they are supposed to produce objective knowledge – that is, they reveal cause and effect relationships between variables which exist independently of the observer, because the results gained should be completely uninfluenced by the researcher’s own values.

A final (quick) word on tomato experiments, and objective knowledge…

Objective, scientific knowledge about what combination of variables has what effect on tomato production matters because, if I have this knowledge (NB I may need to pay an agricultural science college for it, but it is there!), I can establish a tomato farm, set up the exact conditions for maximum production, and predict with some certainty how many tomatoes I’ll end up with in a season (assuming I’m growing under glass, where I can control everything).

Experiments – key terms

Hypothesis – a theory or explanation made on the basis of limited evidence as a starting point for further investigation. A hypothesis will typically take the form of a testable statement about the effect which one or more independent variables will have on the dependent variable.

Extraneous variables – variables which are not of interest to the researcher but which may interfere with the results of an experiment.



A pile of surveys

The Bees – Surveys to compile – CC BY-NC 2.0.

The General Social Survey, described earlier, is an example of a face-to-face survey, in which interviewers meet with respondents to ask them questions. This type of survey can yield much information, because interviewers typically spend at least an hour asking their questions, and it achieves a high response rate (the percentage of all people in the sample who agree to be interviewed), which is important for generalizing the survey’s results to the entire population. On the downside, this type of survey can be very expensive and time-consuming to conduct.

Because of these drawbacks, sociologists and other researchers have turned to telephone surveys. Most Gallup Polls are conducted over the telephone. Computers do random-digit dialing, which results in a random sample of all telephone numbers being selected. Although the response rate and the number of questions asked are both lower than in face-to-face surveys (people can just hang up the phone at the outset or let their answering machine take the call), the ease and low expense of telephone surveys are making them increasingly popular.
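As a rough sketch of the idea, random-digit dialing can be imitated in a few lines of Python. The area code and number format below are placeholders; real survey organizations work from proper telephone sampling frames.

```python
import random

def random_digit_dial(n, area_code="555", seed=None):
    """Generate n random 7-digit local numbers: a toy stand-in for the
    computerized random-digit dialing described above."""
    rng = random.Random(seed)  # seeded so the sample is reproducible
    numbers = []
    for _ in range(n):
        local = "".join(str(rng.randint(0, 9)) for _ in range(7))
        numbers.append(f"({area_code}) {local[:3]}-{local[3:]}")
    return numbers

sample = random_digit_dial(5, seed=42)  # five randomly generated numbers
```

Because every digit is drawn at random, unlisted numbers are as likely to be selected as listed ones, which is part of what makes the resulting sample random.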

Mailed surveys, done by mailing questionnaires to respondents, are still used, but not as often as before. Compared with face-to-face surveys, mailed questionnaires are less expensive and time consuming but have lower response rates, because many people simply throw out the questionnaire along with other junk mail.

Whereas mailed surveys are becoming less popular, surveys done over the Internet are becoming more popular, as they can reach many people at very low expense. A major problem with Web surveys is that their results cannot necessarily be generalized to the entire population, because not everyone has access to the Internet.

Experiments

Experiments are the primary form of research in the natural and physical sciences, but in the social sciences they are for the most part found only in psychology. Some sociologists still use experiments, however, and they remain a powerful tool of social research.

The major advantage of experiments is that the researcher can be fairly sure of a cause-and-effect relationship because of the way the experiment is set up. Although many different experimental designs exist, the typical experiment consists of an experimental group and a control group , with subjects randomly assigned to either group. The researcher makes a change to the experimental group that is not made to the control group. If the two groups differ later in some variable, then it is safe to say that the condition to which the experimental group was subjected was responsible for the difference that resulted.
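The random-assignment logic just described can be sketched in Python. The subject labels are hypothetical; the essential point is that chance alone determines who ends up in the experimental group and who ends up in the control group.

```python
import random

def randomly_assign(subjects, seed=None):
    """Randomly split subjects into an experimental group and a control group."""
    rng = random.Random(seed)  # seeded so the split is reproducible
    shuffled = subjects[:]     # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subjects = [f"subject_{i}" for i in range(10)]  # hypothetical participants
experimental, control = randomly_assign(subjects, seed=1)
# Only the experimental group receives the treatment; if the two groups
# later differ on some variable, the treatment is the likely cause.
```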

A student working on an experiment in science class

biologycorner – Science Experiment – CC BY-NC 2.0.

Most experiments take place in the laboratory, which for psychologists may be a room with a one-way mirror, but some experiments occur in “the field,” or in a natural setting. In Minneapolis, Minnesota, in the early 1980s, sociologists were involved in a much-discussed field experiment sponsored by the federal government. The researchers wanted to see whether arresting men for domestic violence made it less likely that they would commit such violence again. To test this hypothesis, the researchers had police do one of the following after arriving at the scene of a domestic dispute: they either arrested the suspect, separated him from his wife or partner for several hours, or warned him to stop but did not arrest or separate him. The researchers then determined the percentage of men in each group who committed repeated domestic violence during the next 6 months and found that those who were arrested had the lowest rate of recidivism, or repeat offending (Sherman & Berk, 1984). This finding led many jurisdictions across the United States to adopt a policy of mandatory arrest for domestic violence suspects. However, replications of the Minneapolis experiment in other cities found that arrest sometimes reduced recidivism for domestic violence but also sometimes increased it, depending on which city was being studied and on certain characteristics of the suspects, including whether they were employed at the time of their arrest (Sherman, 1992).

As the Minneapolis study suggests, perhaps the most important problem with experiments is that their results are not generalizable beyond the specific subjects studied. The subjects in most psychology experiments, for example, are college students, who are not typical of average Americans: they are younger, more educated, and more likely to be middle class. Despite this problem, experiments in psychology and other social sciences have given us very valuable insights into the sources of attitudes and behavior.

Observational Studies and Intensive Interviewing

Observational research , also called field research, is a staple of sociology. Sociologists have long gone into the field to observe people and social settings, and the result has been many rich descriptions and analyses of behavior in juvenile gangs, bars, urban street corners, and even whole communities.

Observational studies consist of both participant observation and nonparticipant observation . Their names describe how they differ. In participant observation, the researcher is part of the group that she or he is studying. The researcher thus spends time with the group and might even live with them for a while. Several classical sociological studies of this type exist, many of them involving people in urban neighborhoods (Liebow, 1967, 1993; Whyte, 1943).

Once inside a group, some researchers spend months or even years pretending to be one of the people they are observing. However, as observers, they cannot get too involved. They must keep their purpose in mind and apply the sociological perspective. That way, they illuminate social patterns that are often unrecognized. Because information gathered during participant observation is mostly qualitative, rather than quantitative, the end results are often descriptive or interpretive. The researcher might present findings in an article or book and describe what he or she witnessed and experienced.

This type of research is what journalist Barbara Ehrenreich conducted for her book Nickel and Dimed . One day over lunch with her editor, as the story goes, Ehrenreich mentioned an idea. How can people exist on minimum-wage work? How do low-income workers get by? she wondered. Someone should do a study. To her surprise, her editor responded, Why don’t you do it?

That’s how Ehrenreich found herself joining the ranks of the working class. For several months, she left her comfortable home and lived and worked among people who lacked, for the most part, higher education and marketable job skills. Undercover, she applied for and worked minimum wage jobs as a waitress, a cleaning woman, a nursing home aide, and a retail chain employee. During her participant observation, she used only her income from those jobs to pay for food, clothing, transportation, and shelter.

She discovered the obvious, that it’s almost impossible to get by on minimum wage work. She also experienced and observed attitudes many middle and upper-class people never think about. She witnessed firsthand the treatment of working class employees. She saw the extreme measures people take to make ends meet and to survive. She described fellow employees who held two or three jobs, worked seven days a week, lived in cars, could not pay to treat chronic health conditions, got randomly fired, submitted to drug tests, and moved in and out of homeless shelters. She brought aspects of that life to light, describing difficult working conditions and the poor treatment that low-wage workers suffer.

A related type of research design is intensive interviewing . Here a researcher does not necessarily observe a group of people in their natural setting but rather sits down with them individually and interviews them at great length, often for one or two hours or even longer. The researcher typically records the interview and later transcribes it for analysis. The advantages and disadvantages of intensive interviewing are similar to those for observational studies: intensive interviewing provides much information about the subjects being interviewed, but the results of such interviewing cannot necessarily be generalized beyond the subjects.

A classic example of field research is Kai T. Erikson’s Everything in Its Path (1976), a study of the loss of community bonds in the aftermath of a flood in a West Virginia mining community, Buffalo Creek. The flood occurred when an artificial dam composed of mine waste gave way after days of torrential rain. The local mining company had allowed the dam to build up in violation of federal law. When it broke, 132 million gallons of water broke through and destroyed several thousand homes in seconds while killing 125 people. Some 2,500 other people were rendered instantly homeless. Erikson was called in by the lawyers representing the survivors to document the sociological effects of their loss of community, and the book he wrote remains a moving account of how the destruction of the Buffalo Creek way of life profoundly affected the daily lives of its residents.

A man interviewing a woman on video

Fellowship of the Rich – Interview – CC BY-NC-ND 2.0.

Similar to experiments, observational studies cannot automatically be generalized to other settings or members of the population. But in many ways they provide a richer account of people’s lives than surveys do, and they remain an important method of sociological research.

Ethnography

Ethnography is the extended observation of the social perspective and cultural values of an entire social setting. Ethnographies involve objective observation of an entire community.

The heart of an ethnographic study focuses on how subjects view their own social standing and how they understand themselves in relation to a community. An ethnographic study might observe, for example, a small U.S. fishing town, an Inuit community, a village in Thailand, a Buddhist monastery, a private boarding school, or an amusement park. These places all have borders. People live, work, study, or vacation within those borders. People are there for a certain reason and therefore behave in certain ways and respect certain cultural norms. An ethnographer would commit to spending a determined amount of time studying every aspect of the chosen place, taking in as much as possible.

A sociologist studying a tribe in the Amazon might watch the way villagers go about their daily lives and then write a paper about it. To observe a spiritual retreat center, an ethnographer might sign up for a retreat and attend as a guest for an extended stay, observe and record data, and collate the material into results.

Existing Data

Sometimes sociologists do not gather their own data but instead analyze existing data that someone else has gathered. The U.S. Census Bureau, for example, gathers data on all kinds of areas relevant to the lives of Americans, and many sociologists analyze census data on such topics as poverty, employment, and illness. Sociologists interested in crime and the legal system may analyze data from court records, while medical sociologists often analyze data from patient records at hospitals. Analysis of existing data such as these is called secondary data analysis . Its advantage to sociologists is that someone else has already spent the time and money to gather the data. A disadvantage is that the data set being analyzed may not contain data on all the variables in which a sociologist may be interested or may contain data on variables that are not measured in ways the sociologist might prefer.
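The logic of secondary data analysis can be sketched with an invented data set. The records and the poverty threshold below are placeholders, not real census figures; the point is that the researcher analyzes data someone else has already gathered.

```python
# Invented extract from an existing data set (census-style household records).
census_extract = [
    {"household": 1, "income": 12000, "size": 3},
    {"household": 2, "income": 45000, "size": 2},
    {"household": 3, "income": 9000,  "size": 1},
    {"household": 4, "income": 30000, "size": 4},
]

POVERTY_LINE = 15000  # placeholder threshold, not an official figure

# A simple secondary analysis: what share of households fall below the line?
poor = [h for h in census_extract if h["income"] < POVERTY_LINE]
poverty_rate = len(poor) / len(census_extract)
```

The analysis is only as good as the original data set: if the variables the sociologist cares about were never collected, or were measured differently than preferred, no amount of reanalysis will supply them.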

Nonprofit organizations often analyze existing data, usually gathered by government agencies, to get a better understanding of the social issue with which an organization is most concerned. They then use their analysis to help devise effective social policies and strategies for dealing with the issue. The “Learning From Other Societies” box discusses a nonprofit organization in Canada that analyzes existing data for this purpose.

Learning From Other Societies

Social Research and Social Policy in Canada

In several nations beyond the United States, nonprofit organizations often use social science research, including sociological research, to develop and evaluate various social reform strategies and social policies. Canada is one of these nations. Information on Canadian social research organizations can be found at http://www.canadiansocialresearch.net/index.htm.

The Canadian Research Institute for Social Policy (CRISP) at the University of New Brunswick is one of these organizations. According to its Web site (http://www.unb.ca/crisp/index.php), CRISP is “dedicated to conducting policy research aimed at improving the education and care of Canadian children and youth…and supporting low-income countries in their efforts to build research capacity in child development.” To do this, CRISP analyzes data from large data sets, such as the Canadian National Longitudinal Survey of Children and Youth, and it also evaluates policy efforts at the local, national, and international levels.

A major concern of CRISP has been developmental problems in low-income children and teens. These problems are the focus of a CRISP project called Raising and Leveling the Bar: A Collaborative Research Initiative on Children’s Learning, Behavioral, and Health Outcomes. At the time of this writing, the project involved a team of five senior researchers and almost two dozen younger scholars. CRISP notes that Canada may have the most complete data on child development in the world but that much more research with these data needs to be performed to help inform public policy in the area of child development. The project aims to use these data to help achieve the following goals, as listed on its Web site (http://www.unb.ca/crisp/rlb.html): (a) safeguard the healthy development of infants, (b) strengthen early childhood education, (c) improve schools and local communities, (d) reduce socioeconomic segregation and the effects of poverty, and (e) create a family-enabling society. The project team has produced many policy briefs, journal articles, and popular press articles to educate varied audiences about what the data on children’s development suggest for child policy in Canada.

Key Takeaways

  • The major types of sociological research include surveys, experiments, observational studies, and the use of existing data.
  • Surveys are very common and can gather much information on respondents, although this information tends to be relatively superficial. The results of surveys that use random samples can be generalized to the population that the sample represents.
  • Observational studies are also very common and enable in-depth knowledge of a small group of people. Because the samples of these studies are not random, the results cannot necessarily be generalized to a population.
  • Experiments are much less common in sociology than in psychology. When field experiments are conducted in sociology, they can yield valuable information because of their experimental design.
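The point about random samples generalizing to a population can be sketched with a short simulation. The population, its size, and its age distribution below are invented for illustration; the takeaway is only that a simple random sample of 1,000 recovers a population parameter closely.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical population of 100,000 people with a known age distribution.
population = [random.gauss(40, 12) for _ in range(100_000)]

# A simple random sample: every member has an equal chance of selection,
# which is what lets survey results generalize to the population.
sample = random.sample(population, 1_000)

print(f"population mean age: {statistics.mean(population):.1f}")
print(f"sample mean age:     {statistics.mean(sample):.1f}")
```

With a truly random sample, the sample mean lands close to the population mean; a nonrandom sample (say, only college students) offers no such guarantee, which is the generalization problem the takeaways describe for experiments and observational studies.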


Glossary

survey: A data-gathering method in which a questionnaire is given to a group of respondents.

experiment: A procedure typically used to confirm the validity of a hypothesis by comparing the outcomes of one or more treatment groups to a control group on a given measure.

participant observation: When researchers actively engage in the activity they are studying to understand it better.

nonparticipant observation: A research technique whereby the researcher watches the subjects of his or her study without taking an active part in the situation under study.

field research: A qualitative research method in which a researcher observes a social setting to provide descriptions of a group, society, or organization.

existing data: Empirical information that someone else has gathered.

Introduction to Sociology: Understanding and Changing the Social World Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Annual Review of Sociology

Field Experiments Across the Social Sciences (review article, Vol. 43, 2017)

  • Delia Baldassarri (Department of Sociology, New York University, New York, New York 10012) and Maria Abascal (Department of Sociology, Columbia University, New York, New York 10027)
  • Vol. 43:41–73 (volume publication date July 2017), https://doi.org/10.1146/annurev-soc-073014-112445
  • First published as a Review in Advance on May 22, 2017
  • © Annual Reviews

Using field experiments, scholars can identify causal effects via randomization while studying people and groups in their naturally occurring contexts. In light of renewed interest in field experimental methods, this review covers a wide range of field experiments from across the social sciences, with an eye to those that adopt virtuous practices, including unobtrusive measurement, naturalistic interventions, attention to realistic outcomes and consequential behaviors, and application to diverse samples and settings. The review covers four broad research areas of substantive and policy interest: first, randomized controlled trials, with a focus on policy interventions in economic development, poverty reduction, and education; second, experiments on the role that norms, motivations, and incentives play in shaping behavior; third, experiments on political mobilization, social influence, and institutional effects; and fourth, experiments on prejudice and discrimination. We discuss methodological issues concerning generalizability and scalability as well as ethical issues related to field experimental methods. We conclude by arguing that field experiments are well equipped to advance the kind of middle-range theorizing that sociologists value.
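The abstract's central claim, that randomization identifies causal effects, can be sketched with a short simulation. Everything below (the number of subjects, the outcome model, and the effect size) is invented for illustration; the point is only that random assignment makes a simple difference in group means an unbiased estimate of the treatment effect.

```python
import random
import statistics

random.seed(7)  # fixed seed so the simulation is reproducible

# Random assignment: shuffle 200 hypothetical subjects and split them
# into equal-sized treatment and control groups.
subjects = list(range(200))
random.shuffle(subjects)
treatment, control = subjects[:100], subjects[100:]

TRUE_EFFECT = 2.0  # the (simulated) causal effect of treatment

def simulated_outcome(treated):
    """A noisy outcome that treatment shifts up by TRUE_EFFECT on average."""
    return random.gauss(10 + (TRUE_EFFECT if treated else 0.0), 3)

treated_outcomes = [simulated_outcome(True) for _ in treatment]
control_outcomes = [simulated_outcome(False) for _ in control]

# Because assignment was random, the two groups differ only by chance and by
# treatment, so the difference in means estimates the average treatment effect.
estimate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

In a real field experiment the outcomes come from naturally occurring behavior rather than a formula, but the estimation logic is the same.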


  • Milgram S , Mann L , Hartner S . 1965 . The lost letter technique: a tool of social research. Public Opin. Q. 29 : 437– 38 [Google Scholar]
  • Milkman KL , Akinola M , Chugh D . 2015 . What happens before? A field experiment exploring how pay and representation differentially shape bias on the pathway into organizations. J. Appl. Psychol. 100 : 1678– 712 [Google Scholar]
  • Milkman KL , Beshears J , Choi JJ , Laibson D , Madrian BC . 2011 . Using implementation intentions prompts to enhance influenza vaccination rates. PNAS 108 : 10415– 20 [Google Scholar]
  • Morgan S , Winship C . 2007 . Counterfactuals and Causal Inference Cambridge, UK: Cambridge Univ. Press [Google Scholar]
  • Morton R , Williams K . 2010 . Experimental Political Science and the Study of Causality Cambridge, UK: Cambridge Univ. Press [Google Scholar]
  • Moss-Racusin CA , Dovidio JF , Brescoll V , Graham MJ , Handelsman J . 2012 . Science faculty's subtle gender biases favor male students. PNAS 109 : 16474– 79 [Google Scholar]
  • Munnell AH . 1986 . Lessons from the Income Maintenance Experiments Boston: Fed. Res. Bank of Boston [Google Scholar]
  • Mutz DC . 2011 . Population-Based Survey Experiments Princeton, NJ: Princeton Univ. Press [Google Scholar]
  • Nagda BRA , Tropp LR , Paluck EL . 2006 . Looking back as we look ahead: integrating research, theory, and practice on intergroup relations. J. Soc. Issues 62 : 439– 51 [Google Scholar]
  • Neumark D , Bank RJ , Nort KDV . 1996 . Sex discrimination in restaurant hiring: an audit study. Q. J. Econ. 111 : 915– 41 [Google Scholar]
  • Nickerson DW . 2008 . Is voting contagious? Evidence from two field experiments. Am. Political Sci. Rev. 102 : 49– 57 [Google Scholar]
  • Nolan JM , Kenefick J , Schultz PW . 2011 . Normative messages promoting energy conservation will be underestimated by experts unless you show them the data. Soc. Influence 6 : 169– 80 [Google Scholar]
  • Nolan JM , Schultz PW , Cialdini RB , Goldstein NJ , Griskevicius V . 2008 . Normative social influence is underdetected. Pers. Soc. Psychol. Bull. 34 : 913– 23 [Google Scholar]
  • Nosek B , Aarts A , Anderson J , Anderson C , Attridge P . et al. 2015a . Estimating the reproducibility of psychological science. Science 349 : 943– 51 [Google Scholar]
  • Nosek B , Alter G , Banks G , Borsboom D , Bowman S . et al. 2015b . Promoting an open research culture. Science 348 : 1422– 25 [Google Scholar]
  • Olken B . 2007 . Monitoring corruption: evidence from a field experiment in Indonesia. J. Political Econ. 115 : 200– 49 [Google Scholar]
  • Olken B . 2010 . Direct democracy and local public goods: evidence from a field experiment in Indonesia. Am. Political Sci. Rev. 104 : 243– 67 [Google Scholar]
  • Pager D . 2003 . The mark of a criminal record. Am. J. Sociol. 108 : 937– 75 [Google Scholar]
  • Pager D . 2007 . The use of field experiments for studies of employment discrimination: contributions, critiques, and directions for the future. Ann. Am. Acad. Political Soc. Sci. 609 : 104– 33 [Google Scholar]
  • Pager D , Quillian L . 2005 . Walking the talk: what employers say versus what they do. Am. Sociol. Rev. 70 : 355– 80 [Google Scholar]
  • Pager D , Western B , Bonikowski B . 2009 . Discrimination in a low-wage labor market: a field experiment. Am. Sociol. Rev. 74 : 777– 99 [Google Scholar]
  • Paluck EL . 2009 . Reducing intergroup prejudice and conflict using the media: a field experiment in Rwanda. Interpers. Relat. Group Process. 96 : 574– 87 [Google Scholar]
  • Paluck EL , Cialdini RB . 2014 . Field research methods. Handbook of Research Methods in Social and Personality Psychology HT Reis, CM Judd 81– 97 New York: Cambridge Univ. Press, 2nd ed.. [Google Scholar]
  • Paluck EL , Green DP . 2009 . Prejudice reduction: what works? A review and assessment of research and practice. Annu. Rev. Psychol. 60 : 339– 67 [Google Scholar]
  • Paluck EL , Shepherd H . 2012 . The salience of social referents: a field experiment on collective norms and harassment behavior in a school social network. J. Pers. Soc. Psychol. 103 : 899– 915 [Google Scholar]
  • Paluck EL , Shepherd H , Aronow PM . 2016 . Changing climates of conflict: a social network driven experiment in 56 schools. PNAS 113 : 566– 71 [Google Scholar]
  • Pedulla DS . 2016 . Penalized or protected? Gender and the consequences of non-standard and mismatched employment histories. Am. Sociol. Rev. 81 : 262– 89 [Google Scholar]
  • Pettigrew TF . 1998 . Intergroup contact theory. Annu. Rev. Psychol. 49 : 65– 85 [Google Scholar]
  • Riach PA , Rich J . 2002 . Field experiments of discrimination in the market place. Econ. J. 112 : 480– 518 [Google Scholar]
  • Rodríguez-Planas N . 2012 . Longer-term impacts of mentoring, educational services, and learning incentives: evidence from a randomized trial in the United States. Am. Econ. J. Appl. Econ. 4 : 121– 39 [Google Scholar]
  • Rondeau D , List JA . 2008 . Matching and challenge gifts to charity: evidence from laboratory and natural field experiments. Exp. Econ. 11 : 253– 67 [Google Scholar]
  • Ross SL , Turner MA . 2005 . Housing discrimination in metropolitan America: explaining changes between 1989 and 2000. Soc. Probl. 52 : 152– 80 [Google Scholar]
  • Rossi PH , Berk RA , Lenihan KJ . 1980 . Money, Work, and Crime: Experimental Evidence New York: Academic Press [Google Scholar]
  • Rossi PH , Berk RA , Lenihan KJ . 1982 . Saying it wrong with figures: a comment on Zeisel. Am. J. Sociol. 88 : 390– 93 [Google Scholar]
  • Rossi PH , Lyall KC . 1978 . An overview evaluation of the NIT experiment. Eval. Stud. Rev. 3 : 412– 28 [Google Scholar]
  • Sabin N . 2015 . Modern microfinance: a field in flux. Social Finance Nicholls A, Paton R, Emerson J Oxford, UK: Oxford Univ. Press [Google Scholar]
  • Salganik MJ , Dodds PS , Watts DJ . 2006 . Experimental study of inequality and unpredictability in an artificial cultural market. Science 311 : 854– 56 [Google Scholar]
  • Sampson RJ . 2008 . Moving to inequality: neighborhood effects and experiments meet social structure. Am. J. Sociol. 114 : 189– 231 [Google Scholar]
  • Sampson RJ . 2012 . Great American City: Chicago and the Enduring Neighborhood Effect Chicago, IL: Chicago Univ. Press [Google Scholar]
  • Schuler SR , Hashemi SM , Badal SH . 1998 . Men's violence against women in rural Bangladesh: undermined or exacerbated by microcredit programmes?. Dev. Pract. 8 : 148– 57 [Google Scholar]
  • Schultz P . 2004 . School subsidies for the poor: evaluating the Mexican Progresa poverty program. J. Dev. Econ. 74 : 199– 250 [Google Scholar]
  • Shadish WR , Cook TD . 2009 . The renaissance of field experimentation in evaluating interventions. Annu. Rev. Psychol. 607– 29 [Google Scholar]
  • Shadish WR , Cook TD , Campbell DT . 2002 . Experimental and Quasi-experimental Designs for Generalized Causal Inference. New York: Houghton, Mifflin and Company [Google Scholar]
  • Simpson BT , McGrimmon T , Irwin K . 2007 . Are blacks really less trusting than whites? Revisiting the race and trust question. Soc. Forces 86 : 525– 52 [Google Scholar]
  • Sniderman PM , Grob DB . 1996 . Innovations in experimental design in attitude surveys. Annu. Rev. Sociol. 22 : 377– 99 [Google Scholar]
  • Steinpreis RE , Anders KA , Ritzke D . 1999 . The impact of gender on the review of the curricula vitae of job applicants and tenure candidates: a national empirical study. Sex Roles 41 : 509– 28 [Google Scholar]
  • Stutzer A , Goette L , Zehnder M . 2011 . Active decisions and prosocial behaviour: a field experiment on blood donations. Econ. J. 121 : 476– 93 [Google Scholar]
  • Teele DL . 2014 . Reflections on the ethics of field experiments. Field Experiments and Their Critics: Essays on the Uses and Abuses of Experimentation in the Social Sciences DL Teele 115– 40 New Haven, CT: Yale Univ. Press [Google Scholar]
  • Thornton RL . 2008 . The demand for, and impact of, learning HIV status. Am. Econ. Rev. 98 : 1829– 63 [Google Scholar]
  • Tilcsik A . 2011 . Pride and prejudice: employment discrimination against openly gay men in the United States. Am. J. Sociol. 117 : 586– 626 [Google Scholar]
  • Travers J , Milgram S . 1969 . An experimental study of the small world problem. Sociometry 32 : 425– 43 [Google Scholar]
  • Turner MA , Bednarz BA , Herbig C , Lee SJ . 2003 . Discrimination in metropolitan housing markets phase 2: Asians and Pacific Islanders Tech. rep., Urban Inst., Washington, DC [Google Scholar]
  • Turner MA , Fix M , Struyk RJ . 1991 . Opportunities Denied, Opportunities Diminished: Racial Discrimination in Hiring Washington, DC: Urban Inst. Press [Google Scholar]
  • Turner MA , Ross SL , Galster GC , Yinger J . 2002 . Discrimination in metropolitan housing markets: national results from phase 1 of the Housing Discrimination Study (HDS) Tech. rep., Urban Inst Washington, DC: [Google Scholar]
  • Van Bavel JJ , Mende-Siedlecki P , Brady WJ , Reinero DA . 2016 . Contextual sensitivity in scientific reproducibility. PNAS 113 : 6454– 59 [Google Scholar]
  • Van de Rijt A , Kang SM , Restivo M , Patil A . 2014 . Field experiments of success-breeds-success dynamics. PNAS 111 : 6934– 39 [Google Scholar]
  • Van Der Merwe WG , Burns J . 2008 . What's in a name? Racial identity and altruism in post-apartheid South Africa. South Afr. J. Econ. 76 : 266– 75 [Google Scholar]
  • Vermeersch C , Kremer M . 2005 . School Meals, Educational Achievement, and School Competition: Evidence from a Randomized Evaluation. New York: World Bank [Google Scholar]
  • Volpp KG , Troxel AB , Pauly MV , Glick HA , Puig A . et al. 2009 . A randomized, controlled trial of financial incentives for smoking cessation. N. Engl. J. Med. 360 : 699– 709 [Google Scholar]
  • Whitt S , Wilson RK . 2007 . The dictator game, fairness and ethnicity in postwar Bosnia. Am. J. Political Sci. 51 : 655– 68 [Google Scholar]
  • Wienk RE , Reid CE , Simonson JC , Eggers FJ . 1979 . Measuring racial discrimination in American housing markets: the housing market practices survey. Tech. Rep. HUD-PDR-444(2), Dep. Hous. Urban Dev Washington, DC: [Google Scholar]
  • Williams WM , Ceci SJ . 2015 . National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track. PNAS 112 : 5360– 65 [Google Scholar]
  • Yamagishi T . 2011 . Trust: The Evolutionary Game of Mind and Society New York: Springer [Google Scholar]
  • Yamagishi T , Cook KS , Watabe M . 1998 . Uncertainty, trust, and commitment formation in the United States and Japan. Am. J. Sociol. 104 : 165– 94 [Google Scholar]
  • Zeisel H . 1982 . Disagreement over the evaluation of a controlled experiment. Am. J. Sociol. 88 : 378– 89 [Google Scholar]

Data & Media loading...

  • Article Type: Review Article

Most Read This Month

Most cited most cited rss feed, birds of a feather: homophily in social networks, social capital: its origins and applications in modern sociology, conceptualizing stigma, framing processes and social movements: an overview and assessment, organizational learning, the study of boundaries in the social sciences, assessing “neighborhood effects”: social processes and new directions in research, social exchange theory, culture and cognition, focus groups.

Experimental Design: Types, Examples & Methods

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group but not the control group.

The researcher must decide how to allocate the sample to the different experimental groups. For example, if there are 10 participants, will all 10 take part in both conditions (i.e., repeated measures), or will they be split in half, with each participant taking part in only one condition?

Three types of experimental designs are commonly used:

1. Independent Measures Design

Independent measures design, also known as between-groups design, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.

Allocation should be done randomly, ensuring that each participant has an equal chance of being assigned to either group.

An independent measures design involves using two separate groups of participants, one in each condition.


  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only.  If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition or become wise to the requirements of the experiment!
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background.  These differences are known as participant variables (i.e., a type of extraneous variable ).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).
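Random allocation is easy to make concrete. The snippet below is a minimal sketch (not part of the original text) of how a sample might be split into two groups at random; the function name and the use of Python are illustrative assumptions, not a prescribed procedure.

```python
import random

def randomly_allocate(participants, n_groups=2, seed=None):
    # Illustrative helper (hypothetical name): shuffle the sample, then
    # deal participants into groups in turn, so each participant has an
    # equal chance of ending up in any group.
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

# e.g., 10 participants split into an experimental and a control group
experimental, control = randomly_allocate(range(10), seed=42)
```

Because allocation depends only on the shuffle, pre-existing participant variables are, on average, spread evenly across the groups.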

2. Repeated Measures Design

Repeated measures design is an experimental design where the same participants take part in each condition of the independent variable. This means that each condition of the experiment includes the same group of participants.

Repeated measures design is also known as a within-groups or within-subjects design.

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior.  Performance in the second condition may be better because the participants know what to do (i.e., practice effect).  Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
  • Control : To combat order effects, the researcher counterbalances the order of the conditions across participants, alternating the order in which participants complete the different conditions of the experiment.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

Even if noise had no real effect, we might expect participants to perform better in the “no noise” condition simply because it comes second, owing to order effects such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups: group 1 does condition A (“loud noise”) then condition B (“no noise”), and group 2 does B then A. This is to eliminate order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.
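The counterbalancing procedure can be sketched in a few lines of Python (an illustrative example, not from the original text; the function name is hypothetical):

```python
def counterbalance(participants, conditions=("A", "B")):
    # Alternate the condition order across the sample: half the
    # participants do A then B, the other half do B then A, so order
    # effects (practice, fatigue) fall equally on both conditions.
    orders = {}
    for i, participant in enumerate(participants):
        if i % 2 == 0:
            orders[participant] = list(conditions)
        else:
            orders[participant] = list(reversed(conditions))
    return orders

orders = counterbalance(["p1", "p2", "p3", "p4"])
```

With an even-sized sample, each order occurs equally often, so practice and fatigue effects cancel out in the group comparison.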

3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group.

One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.

  • Con : If one participant drops out, you lose the data of two participants, because the whole pair must be discarded.
  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : Impossible to match people exactly unless they are identical twins!
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all these problems.
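The matching-then-randomizing procedure can be sketched as follows (an illustrative Python example, not from the original text; the function name and scores are hypothetical):

```python
import random

def matched_pairs(scores, seed=None):
    # Rank participants on the matching variable (e.g., a pretest score),
    # pair adjacent participants, then randomly assign one member of each
    # pair to the experimental group and the other to the control group.
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)
    experimental, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

# hypothetical pretest scores (e.g., severity of symptoms)
scores = {"p1": 12, "p2": 30, "p3": 14, "p4": 28, "p5": 21, "p6": 19}
experimental, control = matched_pairs(scores, seed=1)
```

Sorting first guarantees each pair is as similar as possible on the matching variable; the random split within each pair then decides who receives the treatment.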

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups: Different participants are used in each condition of the independent variable.

2. Repeated measures / within-groups: The same participants take part in each condition of the independent variable.

3. Matched pairs: Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1. To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2. To assess the difference in reading comprehension between 7- and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and were then asked a series of questions to assess their understanding.

3. To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4. To assess the effect of organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

The variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables that are not the independent variable but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

A variable, other than the IV, that has affected the results (DV). A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.

    Three types of experimental designs are commonly used: 1. Independent Measures. Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.