This is a guest post by Katharine Owens, a professor of political science at the University of Hartford. Her full report is here. This research was performed via SurveyMonkey Audience. If you're a SurveyMonkey customer and want to write a post about your research, send your pitch to coletted@surveymonkey.com.
What if we learn more from images than we do from text?
As a political science professor teaching American government at the University of Hartford, I think it's important that students understand foundational documents like the Declaration of Independence and the Constitution. But who really sits down and reads the Declaration of Independence? My anecdotal evidence suggests that not many people do.
To address this problem, I created an assignment in which students make comic versions of the Declaration of Independence and the Constitution. Working in small groups, students are given a portion of the text and have to figure out how to translate it into images. What I love about this assignment is that students really wallow in the text: considering it, reconsidering it, talking about it with their group mates, and finally turning it into images. The depictions don't have to be fancy, and no artistic talent is required; sometimes stick figures can be very compelling! Together, the class re-creates the whole of the Declaration of Independence and the Constitution in comic form. Without this assignment, it would be really difficult to help my students understand how the grievances of the colonists in the Declaration (their complaints to King George) formed the basis of the rights and responsibilities inherent in the Constitution.
This assignment got me thinking: could I measure what people learn from text and compare it to what they learn from an image? Would they learn more from the image? But testing on my own students would not work, for a couple of reasons. I'm at a relatively small school, so finding enough students for a robust sample would be difficult. And even if I could measure this with my own students, the conclusions would be limited, because I'd only be able to say it worked with the kind of population you'd find at a small private university in central Connecticut. That's where SurveyMonkey Audience came in.
Using SurveyMonkey Audience (SMA), we designed a project to test a group of 450 Americans between the ages of 18 and 22 on their knowledge of the Constitution. After taking a short quiz, participants were diverted by SMA to an A/B screen: half were shown a portion of the Constitution as text, and the other half were shown the same portion as images. Then we quizzed them again on their knowledge of the Constitution so we could compare.
We found that both groups' average quiz scores increased, but the group that saw the comic improved more, and the difference was statistically significant.
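For readers curious about the mechanics, a comparison like this is typically done by computing each participant's gain score (post-quiz score minus pre-quiz score) and testing whether the average gain differs between the two groups. Here's a minimal sketch in Python; the numbers are invented for illustration, and the specific test shown is an assumption, not necessarily the exact analysis from the published paper.

```python
import numpy as np
from scipy import stats

# Invented example data: quiz scores out of 10 for 225 people per group,
# measured before and after seeing the text or comic version.
rng = np.random.default_rng(42)
text_pre,  text_post  = rng.integers(0, 11, 225), rng.integers(1, 11, 225)
comic_pre, comic_post = rng.integers(0, 11, 225), rng.integers(2, 11, 225)

# Gain score: how much each participant improved from pre- to post-quiz.
text_gain  = text_post  - text_pre
comic_gain = comic_post - comic_pre

# Independent-samples t-test on the gains: did one group improve more?
t_stat, p_value = stats.ttest_ind(comic_gain, text_gain)
print(f"mean gain, text group:  {text_gain.mean():.2f}")
print(f"mean gain, comic group: {comic_gain.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```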
Then we pulled the quiz questions apart to check whether increases on irrelevant questions were making our results look better than they deserved. The group that saw the images improved on all question types (those on the Supreme Court, the President, and Congress), even though we had only shown them text and images about congressional duties. When we analyzed each subgroup of questions, we found that scores on the questions about congressional duties increased by the greatest margin when comparing the text and image groups, and that this was the only difference that proved statistically significant.
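The subgroup check works the same way, just run category by category. Here is a sketch of what that might look like, again with invented numbers and hypothetical column names rather than the study's actual code:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)

# Invented gain scores per participant, broken out by question category
# (Congress, President, Supreme Court), with 225 people per group.
df = pd.DataFrame({
    "group": ["text"] * 225 + ["comic"] * 225,
    "congress": np.concatenate([rng.normal(0.5, 1, 225), rng.normal(1.5, 1, 225)]),
    "president": np.concatenate([rng.normal(0.5, 1, 225), rng.normal(0.7, 1, 225)]),
    "court": np.concatenate([rng.normal(0.5, 1, 225), rng.normal(0.6, 1, 225)]),
})

# Test each question category separately: only a category-level difference
# that clears p < 0.05 counts as statistically significant.
for category in ["congress", "president", "court"]:
    text_gain  = df.loc[df["group"] == "text",  category]
    comic_gain = df.loc[df["group"] == "comic", category]
    t, p = stats.ttest_ind(comic_gain, text_gain)
    print(f"{category:>9}: t = {t:.2f}, p = {p:.4f}"
          f"{'  <- significant' if p < 0.05 else ''}")
```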
This provides evidence that something interesting is happening, and we have more studies planned to explore it further.
The full study can be accessed in the journal PS: Political Science & Politics from the American Political Science Association, available online now and in print in January 2020.