Key Findings
- Findings from multiple studies indicate both similarities and differences between digital and paper reading.
- Studies differed on whether significant outcome differences (i.e., comprehension) and process differences (i.e., timing, eye-gaze metrics, and highlighting) were found between digital and paper reading; the two studies that did note differences found small benefits for paper reading outcomes.
- Differences depended on the order in which students completed reading tasks: when digital reading was done first, students performed better on subsequent paper passages; when paper reading was done first, no significant differences in comprehension were found. Eye-tracking data suggested shallower processing for digital reading and deeper processing for paper reading, indicating that task order may matter, possibly due to fatigue.
- When outcome differences (comprehension for digital vs. paper reading) were noted, process differences were present as well, consistently indicating deeper processing of paper text and shallower processing of digital text.
- Results showed that response time was related to response accuracy mainly for “locate and recall” items: as response time increased, the probability of a correct response decreased, so spending additional time on such items was inefficient and did not improve accuracy.
- Most highlighting variables were nonsignificant predictors of reading comprehension; the exception was highlights in areas of interest, which supported comprehension in both mediums and were particularly important for locate-and-recall items.
- Content-specific gaze features (e.g., line coverage, first-pass fixation, and answer dwell time) predicted comprehension better than general fixation metrics, and students who strategically looked back at answer-relevant text were more likely to answer correctly, highlighting the value of intentional, content-aware reading behaviors.
- A mixed-method, AI-driven approach identified digital reading behaviors (e.g., skimming, scanning, deep reading) with high accuracy, advancing understanding of how students naturally engage with digital text.
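To make the content-specific gaze features above concrete, here is a minimal Python sketch of how measures like line coverage, answer dwell time, and lookbacks to answer-relevant text might be derived from raw fixation records. The field names, thresholds, and synthetic data are hypothetical; the studies' actual feature definitions may differ.

```python
# Illustrative sketch: deriving content-specific gaze features from raw
# fixation records. All field names are hypothetical.

def gaze_features(fixations, n_lines):
    """Summarize fixations into content-aware features.

    fixations: list of dicts with keys
      'line'   - index of the text line the fixation landed on
      'dur_ms' - fixation duration in milliseconds
      'in_aoi' - True if the fixation fell in an answer-relevant
                 area of interest (AOI)
    n_lines: total number of text lines in the passage
    """
    lines_visited = {f["line"] for f in fixations}
    line_coverage = len(lines_visited) / n_lines          # share of lines read
    answer_dwell = sum(f["dur_ms"] for f in fixations if f["in_aoi"])
    total_dwell = sum(f["dur_ms"] for f in fixations)

    # Lookbacks: re-entries into the AOI after first leaving it, a crude
    # proxy for strategic rereading of answer-relevant text.
    lookbacks = 0
    seen_aoi = False
    prev_in_aoi = False
    for f in fixations:
        if f["in_aoi"] and seen_aoi and not prev_in_aoi:
            lookbacks += 1
        seen_aoi = seen_aoi or f["in_aoi"]
        prev_in_aoi = f["in_aoi"]

    return {
        "line_coverage": line_coverage,
        "answer_dwell_ms": answer_dwell,
        "aoi_dwell_share": answer_dwell / total_dwell if total_dwell else 0.0,
        "aoi_lookbacks": lookbacks,
    }

# Synthetic example: a reader covers 3 of 4 lines and returns once to the AOI.
fx = [
    {"line": 0, "dur_ms": 200, "in_aoi": False},
    {"line": 1, "dur_ms": 250, "in_aoi": True},
    {"line": 2, "dur_ms": 180, "in_aoi": False},
    {"line": 1, "dur_ms": 300, "in_aoi": True},   # lookback to the AOI
]
feats = gaze_features(fx, n_lines=4)
```

Features like these, rather than general fixation counts, are the kind of content-aware predictors the findings describe as most informative for comprehension.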
Publications associated with these findings can be viewed on the Publications page.
Tools Developed
- EIRM-RF, a model that incorporates a random forest into an explanatory item response model to capture nonlinear and interaction effects of person- and item-level predictors in person-by-item response data, while accounting for random effects over persons and items.
- RedForest, a digital reading platform that supports classroom and research collection of reading process and product data.
- WebEyeTrack, a browser-based eye-tracking framework that integrates lightweight state-of-the-art gaze estimation models and on-device calibration to support eye-tracking at scale.
- AI-based approaches, including: an AI model that classifies reading behaviors; an AI-driven eye-gaze tool (in development) that runs on a laptop camera; and AI support for communicating process data (involving thousands of data points) to students and teachers.
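The explanatory item response model (EIRM) core that EIRM-RF builds on can be sketched in a few lines. In this illustration the covariate effect is a fixed linear term; in EIRM-RF that effect would instead be learned by a random forest, which is what allows nonlinear and interaction effects. All parameter values below are invented for illustration.

```python
import math

def p_correct(theta_j, b_i, covariates, betas):
    """P(person j answers item i correctly) under a Rasch-style EIRM.

    theta_j    - person ability (random effect over persons)
    b_i        - item difficulty (random effect over items)
    covariates - person-/item-level predictor values
    betas      - their effects; in EIRM-RF this linear sum is replaced
                 by a random-forest prediction f(covariates)
    """
    eta = theta_j - b_i + sum(b * x for b, x in zip(betas, covariates))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical example: the same person and item, with a single binary
# covariate for reading medium (1 = paper, 0 = digital) and a small
# invented positive paper effect.
p_paper = p_correct(theta_j=0.5, b_i=-0.2, covariates=[1.0], betas=[0.3])
p_digital = p_correct(theta_j=0.5, b_i=-0.2, covariates=[0.0], betas=[0.3])
```

Replacing the linear covariate term with a random forest, while keeping the person and item random effects, is the substitution that distinguishes EIRM-RF from a standard explanatory item response model.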