Confusion is still reality

As an advocate for technology integration, I admit there are two issues that concern me. The first is whether exposure to technology (define that however you think most appropriate) influences brain function. The second is whether learning from static screen displays (reading or studying online information) is inferior to reading or studying traditional (i.e., paper) content. I regard both topics as still unresolved by science, and I also see them as nuanced questions. For example, reading is different from studying, so questions about whether a paper or digital textbook is best might lead to different decisions depending on whether the question relates to reading or to studying.

I think it is far too early for recommendations on paper versus screen to claim the support of the research community. I also think some issues will be difficult to address should decisions be made too quickly. It seems very possible that despite widespread use of technology by young people, specific uses such as extended reading and studying are rare, meaning that research comparing paper-based and screen-based reading/studying cannot be equated for existing experience. Existing patterns of use (which some have described as quickly moving from stimulus to stimulus when online) may represent a confound: habits triggered by the medium (screen or paper) could carry over to the desired uses (extended reading/studying). Inferior performance for screen reading that might exist today would not necessarily persist once readers accumulate more extended screen-based reading/studying experience.

Anyway, let me describe a specific study. I think it worth the time of advanced students in education to read the study in full, and of others interested in the topic of learning from static screen content to at least read the introduction and the discussion. I make this general recommendation because the researchers provide background on the topic in the introduction. If you have little background, reading the introductions to relevant research allows you to see how researchers frame the topic without having to understand their research methodology or the statistical analyses applied to their data.

Sidi, Y., Ophir, Y., & Ackerman, R. (2016). Generalizing screen inferiority – does the medium, screen versus paper, affect performance even with brief tasks? Metacognition and Learning, 11(1), 15-33.

As I understand the core idea of this study, the authors note that some research on extended reading seems to demonstrate an advantage for paper. This may be because extended reading from a screen is more demanding in some as-yet-undefined way. Would the same differences be found with shorter material? If, for example, screen reading is more fatiguing, one might not find the same disadvantage with shorter passages.

You don’t see this in the title, but the authors investigate the hypothesis that the inferior results from screen-based reading stem from a shallower form of processing. In other words, readers jump to faulty conclusions because they invest less cognitive effort in their reading (the authors call this cognitive recruitment), and this overconfidence sometimes results in failed comprehension. To find reading content that is short but requires careful thought, the authors used three very short questions that many people get wrong. One of these questions follows.

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents

The answer is 5 cents: $1.05 + $0.05 = $1.10. The most common, but incorrect, response is 10 cents.
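To see why 5 cents is the only consistent answer, the puzzle's two constraints can be solved directly. A minimal sketch (the variable names are my own, not the study's):

```python
# Constraints from the puzzle:
#   bat + ball == 1.10   (total cost in dollars)
#   bat == ball + 1.00   (the bat costs $1.00 more)
# Substituting: (ball + 1.00) + ball == 1.10  ->  2 * ball == 0.10
ball = (1.10 - 1.00) / 2   # 0.05 dollars, i.e. 5 cents
bat = ball + 1.00

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
assert abs((bat + ball) - 1.10) < 1e-9  # total checks out

# The intuitive-but-wrong answer of 10 cents fails the total:
wrong_ball = 0.10
wrong_bat = wrong_ball + 1.00
print(f"wrong total = ${wrong_bat + wrong_ball:.2f}")  # wrong total = $1.20
```

The intuitive answer of 10 cents treats "$1.00 more" as the bat's full price; writing out the substitution makes the error obvious, which is exactly the kind of shallow-versus-careful processing the study targets.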

So, unless readers think very carefully about the content, their inclination is to give the wrong answer to such questions. Hence, the researchers were using reading material prone to quick but faulty comprehension, and they hypothesized this would be a good way to expose differences between screen and paper content.

No treatment (screen vs paper) differences were found.

In a second experiment, the researchers asked the learners to offer a confidence estimate for each response. They did another interesting thing – some readers saw the questions in a traditional font and some in a font that was difficult to read. The idea was to create, for some readers, an experience that required additional cognitive effort.

The performance data indicated an interaction, but no main effect for paper versus screen. The traditional font resulted in superior performance for the paper group, and the more difficult font resulted in superior performance for the screen group. When the researchers examined the confidence ratings, the screen readers made similar confidence ratings no matter the font type. In contrast, the paper readers were more confident with the traditional font.

My conclusion – even though overall performance differences between media were absent in both experiments, paper readers had a more accurate perception of the difficulty of the tasks. Because the confidence ratings seem to run in opposition to performance in experiment 2, the results are difficult to explain. You almost have to conclude that screen readers adjusted automatically to difficulty while paper readers adjusted because of metacognitive insight. This seems a bit of a stretch, and without related overall performance differences I see little reason to question screen learning. Like so many applied studies, the results are interesting but confusing. As is often the case, the researchers note that additional research is necessary.
