Are Ebooks Bad for You? A Critical Look at the Research

Dan Kimberg
39 min read · Sep 5, 2022

Is reading ebooks somehow bad for you? I traded paper for ebooks more than a dozen years ago, so some of the headlines I’ve seen are quite alarming. “Students Reading E-Books Are Losing Out” (New York Times) and “Why reading paper books is better for your mind” (Washington Post) are typical. The articles point to a body of scientific research purportedly showing that you get more out of your reading in print than on screens.

I could have taken these articles at face value and gone back to reading in print. I’m sure some people have done exactly that. But since I have a background in cognitive psychology, and I know a few things about e-reading technology, I thought I should take a closer look at the actual research. So I tracked down the original research reports, did a lot of reading, took a lot of notes, and kept at it until I felt I had a pretty good handle on the subject.

I’m not going to hold you in suspense. Now that I’ve had a close look at the science, I don’t see any cause for concern. As long as you use a little bit of common sense, there’s no reason to worry that ebooks are any worse for you (and some encouraging signs that they may be better).

This article is my summary of what I found in that scientific literature, and what I think it actually tells us about modern e-reading.

Before I get too far, I need to mention that although I work for a company that sells ebooks, the views expressed here are strictly my own, and not those of my employer. And my personal preference for ebooks began long before I had any notion of working in this area. There’s a longer disclaimer, along with some other notes on my background, at the bottom of this article.

Ground Rules

  • I’m only concerned with long-form prose to which you devote dedicated reading time and from which you hope to extract real value. Novels and book-length non-fiction, but also novellas, essays, short stories, and maybe even poetry. The precise length is less important than the reading experience — this is about reading you really care about, not popcorn reading or time killers, not text threads you follow on your cell phone or articles you skim briefly just to get the gist.
  • The scientific literature I care about consists mostly of peer-reviewed studies that report novel experimental data or analyses and are published in academic journals. Evidence and analysis, not anecdotes and opinions. It’s important to consider arguments made by people familiar with the literature, because I think their perspectives can be helpful. But ultimately, perspective is not evidence.
  • For present purposes, I’m not interested in the effects of e-reading on sleep or on your eyes, just on how it affects what you get from the actual reading.
  • The big questions I want to address here concern how you should read today. I take it as a given that the technology of 20 years ago, before ebooks became popular or e-readers became available, was grossly inadequate, and the technology of 15 years ago, when the first Kindles and iPhones were sold, not much better.

High-level Summary

Recent research in e-reading has generally shown an advantage for print materials over reading on screens, in measures of comprehension and recall. Not every study shows a statistically reliable effect, but the majority of recent studies directly comparing reading on screens vs print show at least a numerical, and often a statistically reliable, advantage for reading in print. Meta-analyses (which combine data from multiple studies) confirm the pattern across a swath of studies, and also suggest a degree of specificity to expository (generally, non-fiction) vs narrative (generally, fiction) text.

I believe the results, as far as they go. But I’m not convinced they have much relevance to anyone deciding whether or not to read ebooks. To me, the most plausible interpretation of the research is that the apparent print advantages are mostly driven by five characteristics of many of these studies:

  • Participants who dislike long-form reading on devices.
  • Antiquated, low-quality, uncomfortable, or inappropriate hardware for reading.
  • Software ill-suited to immersive long-form reading.
  • Exclusion of key features of e-readers (particularly, adjustability).
  • Study participants who have little or no experience with reading ebooks or other long-form e-reading.

While few studies exhibited all of these features, most demonstrated at least three. In my view, this seriously undermines the real-world relevance of the studies. Outside of the context of a scientific study, you would almost certainly:

  • Decide for yourself if you find e-reading aversive, and avoid it if you do.
  • Use a much better device than the average scientific study (even if you’re on a tight budget).
  • Use software designed for immersive reading.
  • Adjust the display for comfort.
  • Give yourself more than an hour or two to get used to e-reading before passing judgment.

In my view, there’s no compelling reason to believe that e-reading is any worse than reading in print, as long as you avoid these obvious pitfalls. But the scientific studies that are relevant to modern e-reading are few, and invariably run afoul of those pitfalls, creating the illusion of a general disadvantage for e-reading.

Each study has its own strengths and weaknesses, and this isn’t a comprehensive review of the literature. But below, I’ve tried to sort the difficulties these studies present, divided into two groups: issues pertaining to the reading conditions, and issues pertaining to interpreting the findings.

Reading Conditions #1: Poor or dated technology used in studies

Although the scientific literature on “reading on screens” goes back decades, the rapid evolution of technology poses a tricky problem for research in this area. Studies with desktop computers or fuzzy CRTs shouldn’t dissuade you from reading ebooks on a modern handheld reading device. But if we only consider studies using the latest devices, there won’t be many to choose from. So where should we draw the line?

For practical purposes, we can date the birth of modern ebooks to the release of the first Kindle in November 2007. Nothing before then resembles today’s ebooks very closely. And it’s worth remembering that although smartphones are now ubiquitous, the first iPhone, with its tiny 3.5” screen, also only came out in 2007. Before then, the only widely used devices even remotely viable as handheld e-readers were PDAs with tiny, low-resolution screens.

E-reading didn’t really start to grow until 2010, when the first iPad was released (and along with it, Apple Books, then known as iBooks), book-sized Android tablets became widely available, and the Kindle app for Android was released. The iPad Mini, a 7.9” iOS device (in my opinion, a much better reading device than the full-sized iPads), was released in November 2012. So we could pick a starting point somewhere in that 2007–2012 range. But the devices have improved steadily over the past 15 years, and studies using the earliest Kindles, or tablets from 2010, are largely irrelevant to today’s choices.


Screens in particular have improved greatly since 2007. We can divide screen technology into two broad groups: E Ink (e.g., Kindle, Kobo, Onyx) and LCD (e.g., iPad, Kindle Fire, and essentially all tablets, laptops, and desktop computers). I include OLED screens in the latter group, even though OLED is technically a different technology.

Over the past decade, both E Ink and LCDs have improved tremendously. E Ink has improved in contrast, resolution, and refresh speed, while LCDs have improved in resolution, brightness, and viewing angles. Poor screen technology may affect reading adversely, so although we should recognize that not everyone can afford the latest devices, we should still be cautious about studies conducted with early e-readers. The studies may be fine, but just not that relevant to today’s choices.

Many recent studies do use reasonably capable Kindles or tablets, even if the tablets are often too large for my taste. But still, if you’re looking to the scientific literature for practical guidance, there’s a paucity of data using modern technology. The best meta-analysis I’ve seen (Clinton, 2019) quite sensibly excludes research published prior to 2008, but still includes studies with alarmingly poor e-reading conditions. For example, several included studies were carried out with 17” desktop monitors at 1024x768 resolution (roughly 75 dpi). This is extremely poor by current standards, even before we consider the likelihood of primitive page-turning mechanisms. The meta-analysis by Delgado et al. (2018) includes studies going all the way back to 2000, which predates modern handheld devices entirely. And they did observe that print advantages may be more apparent in studies using laptops and desktops than in those using handheld devices.
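The dpi figure quoted above follows from simple geometry: pixel density is the diagonal pixel count divided by the diagonal size in inches. Here’s a minimal Python sketch; the 6” 1448x1072 panel is an assumed example of a recent Kindle-class E Ink screen, not a device from any particular study:

```python
import math

def dpi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# The 17" 1024x768 monitors used in several studies:
print(round(dpi(1024, 768, 17.0)))   # ~75 dpi
# An assumed modern 6" 1448x1072 E Ink panel:
print(round(dpi(1448, 1072, 6.0)))   # ~300 dpi
```

The gap is roughly a factor of four in linear resolution, which is why results from those desktop monitors transfer so poorly to today’s devices.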

The science is also tricky to evaluate because many studies are vague about the display technology they used. For example, within the broad category of LCDs there are important differences between display types such as passive matrix, TFT, IPS, and OLED. Specific devices vary in screen size, resolution, contrast, viewing angles, illumination, and portability. Many e-reading researchers seem to be either unaware of these distinctions or don’t consider them worth reporting. If I assumed that every researcher reporting the use of an unspecified LCD was using a passive matrix display (truly awful screens you basically can’t buy today), half the literature would evaporate in a cloud of irrelevance.


Software for e-reading has also been evolving over the past 15 years. The flowing text experience you get from a decent ebook reader or app, while still evolving, has been designed to support comfortable, immersed reading. Advancing to the next page is done with a swipe, tap, or button press. However, this is not always what researchers use in their studies. Some studies use desktop PDF readers. But PDF is a format for encoding page images that are normally optimized for printing, and ill-suited for reading on-screen. Some studies use Microsoft Word, which is a similarly poor choice for immersive reading, though it does reflow text. Early studies often require scrolling rather than paging, and are typically nonspecific about how the scrolling is done. No one conversant with electronic media should be surprised that typical PDF readers and Microsoft Word provide a sub-optimal reading experience, especially if the content must be scrolled with a mouse, trackball, or touchpad.

Desktops/Laptops vs Handheld Devices

I was surprised by the number of studies focused on desktop reading at a fixed monitor on a desk. Reading on a desktop can be a great convenience, especially if you’re looking to retrieve a specific bit of information or if you’re just skimming. But it’s not, in my experience, how people who care about getting the most out of their reading consume long-form content. And it’s certainly not the kind of evidence to support arguments against reading ebooks, which are overwhelmingly more often consumed on handheld devices. I can imagine how this happens — when I was in academia, I read a lot on my desktop computer (although I always printed out articles I wanted to read in full, as I did when writing this). But if we’re talking about immersive long-form reading, results from reading on monitors may be of mostly academic interest.

Within the space of handheld devices, there’s some room for debate. I personally dislike iPads larger than the iPad Mini, and other large tablets. Large screens (often with large bezels) feel extremely uncomfortable to me, and I would never use one for reading. I don’t know if there are many people who can read comfortably on these large tablets, but at the very least, I’m sure some of the participants in these studies were using uncomfortable handheld devices.

Publication Lag

Science will always lag behind the latest devices, because the technology keeps improving, devices can be expensive, and it takes a while to get the research out. Before data can even be collected, researchers must design their studies, obtain funding, get regulatory approval, purchase devices, design and construct materials, train research assistants, and recruit participants. Often this process is iterative, as small pilot studies dictate the design, or initial results dictate follow-up investigations. And it only gets messier after that, with data analysis, article preparation, peer review, and publication delays. Often, reviewers ask for re-analysis, or even new data, and rejections start a big chunk of the process over. From conception to publication, delays of three years or more are common. One of the best studies I’ve read was published in 2019, but still used an e-reader last manufactured in 2012.

How big are the issues with old devices used in studies? We don’t know for sure, but if I had to guess, I’d imagine that most handheld e-readers and tablets, even those manufactured as long as a decade ago, are adequate, but not ideal, for immersive reading. To be safer, I’d prefer to draw the line around 2015, when E Ink was more mature and higher-quality LCD tablets were more readily available. And I’d prefer to stick to screens in the 6–8” range. But many of the studies I’ve read didn’t even use handheld devices, or were nonspecific about what they did use, either of which is a huge red flag. The meta-analysis from Delgado et al. (2018) provides some tentative support for this, with the observation that the advantages of print may be in large part due to studies using desktop and laptop computers and/or requiring scrolling (although they don’t differentiate between scrolling and pagination).

As a society, we’ve had centuries of practice with print, and little more than a decade with ebooks. So a study using technology from 10 years ago will be comparing the same print we use today with e-reading technology from the distant past. If you’re thinking about trying a newish e-reader or tablet, studies using these older devices have limited relevance. While there may be legitimate reasons to wonder about comprehension under a variety of software conditions, neither Microsoft Word nor Adobe Acrobat, nor really anything presented on non-portable devices, is how people who care about reading generally settle in to consume long-form content. If that were the pinnacle of the e-reading experience, I would be firmly committed to print. We all would.

None of this invalidates the research. But it does suggest a revision to some of those headlines: “E-reading with poor or inappropriate technology may be slightly worse than print.” I’ll get back to that word “slightly” later.

Reading Conditions #2: Participants have little experience with E-reading, and a preference for print

Virtually everyone who can read today learned to do so in print. The typical participant in these studies has read many dozens, perhaps hundreds, of books in print, and prefers print for long-form reading. Few have much experience with reading ebooks.

Much of the scientific literature either implicitly or explicitly assumes that young readers, as “digital natives,” will be fluent in reading on screens. But while the typical student has plenty of experience with things like text messaging and social media on their smartphone, they have little experience with long-form e-reading on devices. Even readers who read nearly everything else electronically still tend to prefer print for books. Writers critical of e-reading are often expansive about the distinction between superficial popcorn reading and immersed long-form reading, so it behooves us to take that distinction seriously. Endless hours of reading text messages and Instagram on your phone, skimming news sites on your desktop, or working through your math e-textbook on a Chromebook don’t prepare you for reading Middlemarch on a Kindle.

Only a few of the scientific studies I’ve seen gave participants anything resembling meaningful practice at e-reading. The vast majority were content to present results comparing deeply experienced print readers with (on average) nearly complete novice e-readers. Over a single weekend, you can easily get ten times more practice reading ebooks than researchers gave participants in any of the studies I reviewed. So again, we can imagine a more honest (but less dramatic) headline: “E-reading may be slightly worse than print if you’ve never done it before.”

Would readers in these studies do better with a bit of practice? Researchers should at least consider the possibility. I had a solid 30 years of consuming online content under my belt before I read my first ebook. Still, I was very conscious of having to adapt to my first Kindle. I had to get comfortable with the device’s controls, experiment with its font and layout options, and get used to its peculiar rhythm of turning pages (if you’ve never used an E Ink device, it’s nothing like your cell phone). And I was probably distracted by the novelty of my first dedicated e-reader. Even though I’m a bit of a technophile, it took a book or two for me to feel comfortable. (My wife, by contrast, tells me that she felt comfortable with her very first ebook.)

Preferences, not surprisingly, also strongly favor print. While there may have been some pandemic-induced shift, as students were forced to do more digitally mediated learning, survey data generally show a clear majority of readers with a strong preference for the much more familiar print format. It’s not hard to imagine that an aversion to ebooks would interfere with comprehension. But that’s not a good reason to warn people off trying ebooks voluntarily.

I don’t know how much preferences and experience contribute to the reported print advantages. But neither do the authors of the studies I’ve seen. As with the technology issues, this doesn’t invalidate the studies, but it does suggest a degree of irrelevance. If you have thousands of hours of reading ahead of you in life, and no strong aversion to e-reading, you don’t want to give up on all the advantages just because it’s not right for someone else, or because it takes a few hours to get up to speed.

I was surprised that so few of the studies I’ve read made an effort to assess e-reading experience and format preferences, and to incorporate these into the analyses. Giving participants significant experience during a study can sometimes be impractical. But college undergrads, however fluent they may be with digital media, often have little experience with immersive e-reading, and a clear preference for reading long-form content in print. None of the studies I reviewed gave participants more than an hour or so of exposure to e-reading, and for most it was closer to zero. It bears repeating that “digital natives” are not necessarily fluent in immersive long-form reading on devices, especially on E Ink devices.

Reading Conditions #3: Devices are not always adjustable

E-readers give the user a lot of control, typically including typeface and font size, line spacing, margins, brightness and color of the illumination, and dark mode. With print, you get whatever the publisher produces. This makes things a little tricky for researchers — you can’t give all your participants identical materials while also giving them control.

Most studies don’t mention allowing adjustments (I’d like to think that if adjustments were allowed, the authors would say so). While this may make interpreting the results superficially more straightforward, it also makes the results much less relevant to the question of how you should read, because it makes the e-reading conditions so unrealistic. And in most cases, it doesn’t make the comparison easier anyway, because the e-reading materials are neither customized by the participants nor carefully matched to the print materials.

There have been exceptions. For example, a recent study by Schwabe et al. (2021) did allow adjustments (and failed to find significant differences between print and e-reading for narrative texts). A study by Mangen et al. (2019), which I review in more detail below, didn’t allow adjustments, but did match the materials carefully between the two conditions.

One approach I would have liked to see is to allow participants to adjust the e-reader, and then to produce matching print materials. Another reasonable comparison, from a practical standpoint, would be to allow participants free control of the materials. The drawback of this approach is that it becomes harder to answer “why” questions about any effects. But if you want to know how e-reading compares to print reading, it would answer the question more directly.

Failing to allow adjustments clearly handicaps the e-reading conditions somewhat. Virginia Clinton-Lisell, whose 2019 meta-analysis I refer to often, summarizes the issue succinctly in this quote from The Hechinger Report: “My findings weren’t fair to screens because the screens couldn’t offer everything they could. They were really just a shiny piece of paper.”

The headline version of this would be, “E-readers are not that great if you take away some of their most supportive features.”

Reading Conditions #4: Sample length

I have one more observation that bears on the relevance of the literature, though I’m not sure in which direction. Critics of ebooks are often emphatic about the difference between long-form reading and less immersed reading of brief materials. But the recent scientific literature comparing screens and print concerns short samples almost exclusively. Only one of the studies I reviewed gave participants texts as long as 10,000 words to read (Mangen et al., 2019, which I discuss in more detail below). The vast majority use extremely short samples, rarely more than 2,000 words and often short enough to fit on a single page.

I imagine critics would argue that this discrepancy favors e-reading, because short texts have less room for complexity — more complex texts could exaggerate any differences. But I don’t see support for that in the studies. The Mangen et al. study, with its longer texts, showed parity between ebooks and print on the majority of measures. That may be due to their careful matching of presentation variables, but it’s also possible that longer samples wash out the disadvantages of reading in an unfamiliar medium.

One might also argue that short texts present a best-case for e-reading because they more closely resemble the text messages and Instagram stories with which students may be more familiar. But expecting participants to be fluent in reading 500-word informational texts for that reason would be a mistake.

Ideally we’d have more research with longer samples. But until we do, it bears emphasis that most of the e-reading research I’ve seen is about short-form reading that resembles neither Instagram stories nor ebooks.

Now let’s move on to a few more esoteric concerns about scientific methods.

Science Issue #1: Inadequate reporting of methods

The gold standard for scientific reporting is replicability — another researcher in your field should be able to read your report and, at least in principle, repeat the study with perfect fidelity.

In reality, scientific reporting always requires some judgment in deciding how much detail to include. Studies that involve paper surveys rarely report the brand of pen or type of paper used. Nor do they record barometric pressure or sunspot activity. If we later discover that those factors are critical, the earlier studies may be impossible to interpret. Researchers necessarily use some judgment in deciding what’s worth recording and reporting, and we live with the omissions as best we can.

However, these studies are about screens, and about text, and about how reading material is presented. So vagueness with these details is inexcusable. Although some studies do a reasonably good job, I was stunned by how often I found critical details about the reading materials absent, and by how often I found language suggesting that the authors were not very familiar with the technology.

To be specific, here’s a list of some minimal information that should be reported by anyone doing research in this area:

  • What kind of screen was used? If LCD, was it passive matrix, TFT, IPS, OLED, or what? For E Ink, which panel (they have names like “Pearl” and “Carta,” in addition to specs)? “A standard computer screen” is not sufficient detail for a scientific report. Neither is just reporting the name of the device (e.g., “iPad 2”), even if the reader can find the missing information online.
  • What was the geometry of the display? There are different ways to describe this, but resolution and diagonal size should be the bare minimum (e.g., a 17” display with 1024x768 resolution). It’s also helpful to know if the screen was driven at its native resolution or not, and what its native resolution actually was.
  • What typefaces and font sizes were used? This should be reported for both screens and print.
  • Were line lengths (in words) matched between materials? If not, how were the line lengths set? (This is particularly important for studies using laptops or desktops, where it might be tempting to use excessively long lines.)
  • What software was used in the e-reading conditions? Was a full-screen or similar distraction-free mode used? What controls were visible on the screen?
  • Which settings were participants free to adjust and which were fixed by the researchers? For the latter, what were the settings, and how were they chosen?
  • Were the devices and print materials hand-held or affixed to a desk? If the latter, how were they positioned? For laptops, did they fold flat?
  • How much did hand-held materials weigh?
  • Was the e-reader front-lit, back-lit, or illuminated via ambient light, and how brightly? Was contrast measured?
  • Were electronic materials scrollable, paginated, or neither? What was the scrolling or page-turning mechanism (e.g., scroll wheel, mouse click, complex mouse action, trackpad tap, keypress, swipe, touchscreen tap, button press, etc.)? Were participants experienced with the input devices they were asked to use?
  • How much practice with the devices were participants given before the critical data collection?

And so on. These factors are all potentially important, and easy to report, even if they’re not manipulated in the study or can’t easily be matched between conditions. And they should be reported even when they can be inferred from knowing what device was used. If necessary, they can be provided in an appendix, or with supplementary online materials (where some authors provide the texts they used).

It should be obvious why these details are so important, but it still bears emphasis. If a study observes a print advantage because of conditions that are unrealistically poor by modern standards, then we shouldn’t base decisions on it. Although I try to give every study the benefit of the doubt, I’ve also spent a lot of time in or near under-funded psychology labs, and it wouldn’t shock me if some studies that seem superficially reasonable were actually using passive matrix LCD monitors (ancient, horrible screens) with low enough resolution to be irritating, and asking participants to scroll text by clicking on scrollbars with a mouse (an awkward and error-prone mechanism).

While I’ve focused this criticism on the reading materials, the same can be said for other aspects of the study design. Many studies do an inadequate job of reporting even the instructions given to participants. I don’t necessarily need each study to follow a rigid script (though some would argue this is essential). But the key instructions should be consistent and intentional, and should be reported. This may be particularly important for tasks like reading that can be subject to speed/accuracy trade-offs, which are notoriously sensitive to the wording of the instructions.

Science Issue #2: How big are these differences?

Findings from these kinds of studies are usually drawn from tests of statistical “significance,” carried out to identify differences between conditions, or correlations between variables. The word “significant” is an unfortunate bit of terminology. To scientists it means, roughly, “statistically reliable,” and is taken as an indication that there is some true difference or correlation in the observed direction. It sounds like it should mean “large” or “meaningful,” but it doesn’t.

In a few spots above, I implied that even for those studies that show a print advantage, the differences are slight. I’m not the only one. In Clinton’s (2019) meta-analysis, a summary presented at the top of the article describes the observed paper benefit for narrative text as “small,” I believe based just on conventional categorization of effect sizes. Delgado et al. (2018), by contrast, argue that the effect sizes are comparable to the magnitudes of yearly learning in elementary school, or to the magnitudes of remedial interventions.
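For readers unfamiliar with the terminology: the “small” label comes from conventional thresholds for the standardized mean difference (Cohen’s d) — roughly 0.2 for small, 0.5 for medium, and 0.8 for large. Here’s a minimal sketch of the computation, using invented comprehension scores purely for illustration (not data from any study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled sample standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)   # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical comprehension scores (percent correct) -- illustrative only.
print_scores  = [72, 80, 68, 75, 79, 83, 70, 77]
screen_scores = [70, 76, 66, 74, 77, 80, 69, 73]
d = cohens_d(print_scores, screen_scores)
print(round(d, 2))  # ~0.48: "small-to-medium" by the conventional labels
```

The labels are only conventions, which is exactly why Clinton’s “small” and Delgado et al.’s more alarming framing can describe effects of similar magnitude.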

I don’t actually find either of these assessments of effect sizes convincing. Truly understanding the magnitudes of these effects would require a degree of standardization of measures that doesn’t exist, unfortunately, often leaving us to try to intuit if the effect sizes feel meaningful (Singer & Alexander, 2017, is particularly critical of the state of testing instruments used in these studies). In practical terms, does the difference amount to forgetting one superficial fact per 100 pages, or completely missing the point of nearly every paragraph? The artificial tasks used in reading comprehension studies are not always easy to translate into real-world terms. Even measures like reading speed, while fairly intuitive, can be difficult to interpret in isolation, and are surely sensitive to the instructions given, and to the goals of participants (which are never exactly what researchers would like them to be).

Out of conservatism, we should probably be at least tentatively concerned by any statistically significant differences between print and screens. But for the moment, I don’t think it’s clear, for the majority of studies, that the effects are really big enough to be of practical concern.

Science Issue #3: Could the case against e-reading be worse than we’re told?

Notwithstanding my concerns about the relevance of the existing studies, it’s also possible that they underestimate the advantages of print.

Many studies that fail to report a print advantage actually just report “no significant difference.” To the layperson, that may sound as though there’s no difference between ebooks and print, or that the difference is small. But neither is necessarily true — it really just means that any observed differences were not statistically reliable.

If you look at comprehension measures across studies, you’ll often see a non-significant advantage for print. It’s likely that there’s a pervasive effect that some studies are just not powerful enough to detect. This may be due to a combination of factors, including small study size and measures that just aren’t adequately sensitive to subtle differences in deep, immersive reading.

Meta-analyses like those of Clinton (2019) and Delgado et al. (2018) do a good job of teasing out these regularities. But researchers could also be more specific, by reporting confidence intervals around the effect sizes. This is a good way to convey what a study does tell you, regardless of whether the results are “significant” or not. (Apologies to statisticians, I do realize it’s more complicated than that.) While I wish more authors took this extra step, it only really helps if the measurements have some intuitive meaning. As I argued above, this is not always true for the measures used in reading studies.

As much as I’d love to believe in the null results, I suspect that with a bit more data and/or more carefully crafted materials, more studies would have observed a print advantage. That doesn’t, however, change my overall assessment that these studies are poorly representative of modern e-reading.

Science Issue #4: Over-interpretation

Few of the studies I’ve read present detailed theories about what underlies the differences they report, and that’s fine. But the willingness of authors to conclude from narrow (and occasionally bizarre) reading conditions that e-reading is generally bad for you struck me as excessive. No single study can support that conclusion very firmly, and even meta-analyses are vulnerable to the systematic biases and design flaws of the literatures they review.

While I expect some over-reach from articles aimed at the general public, it’s also evident to an alarming extent in the journal articles written for other scientists. Journal articles in psychology usually set aside some space at the end for authors to discuss their findings more informally, and some hand-waving is customarily tolerated there, at the discretion of reviewers and journal editors. Arguably, this is okay, since everyone understands the practice. But if you don’t have a research background, it would be easy to mistake the author’s preferred interpretation for a direct finding of the study. And in the case of e-reading research, the interpretation is often that the findings are more general than what the study really demonstrates.

It’s easy to see how, if your personal biases favor print reading, you would find some support in the literature. But it’s the duty of a scientist — arguably the most important duty — to look at data skeptically. To consider evidence that seems consistent with your views and to say to yourself, “If I started with completely different beliefs, would the data before me convince me that I was wrong?”

Other Factors: Distraction

Ebook critics often cite vulnerability to distraction as a strong argument against e-reading. But since none of the studies I reviewed addressed it directly, I’d just like to make a few points that I think should be relatively uncontroversial.

Cell phones and tablets can clearly be distracting — they may ring and vibrate, and tempt you with apps, games, and various forms of social interaction. Any device that does these things while you’re reading — even if it’s just passive temptation — is a terrible device for reading. In this at least, I find myself in agreement with ebook critics.

At the other extreme, E Ink devices like Amazon’s Kindle are extremely unlikely to distract you, because social apps generally run poorly (if at all) on them. Basically no one texts or tweets from their E Ink reader, and the novelty of the device wears off very quickly (although perhaps not within the span of a typical research study). You would be hard pressed to find a way to be distracted by one of these devices.

And tablets are somewhere in the middle, depending on what else you use them for.

Of course, electronic distractions can strike when you’re reading in print too. Your phone doesn’t stop buzzing just because you’re holding a paperback. But it’s clearly a more serious problem when you’re reading on the same device.

For me, my phone seems just fine — it essentially never rings, vibrates, or interrupts my reading in any way. It has a few games on it that I don’t even enjoy playing. Its 6.2” screen is small for my taste, but larger than some Kindles. My phone does get notifications, but they don’t appear while I’m reading. Basically, nothing on my phone ever lures me away from reading. But I realize I’m far from typical in my phone usage.

It’s always possible to get your phone to quiet down. My daughter tells me the trick is putting the phone in airplane mode. It’s even easier with dedicated reading devices — it’s not hard to configure these devices to minimize distractions. I rarely do anything but read on my tablets. No matter what device I’m on, I’m much more likely to be distracted by my family or by my own wandering thoughts than by anything happening on the device. But I understand that everyone’s situation is a little different.

I don’t want to downplay the risk of distraction, especially with phones. If you read on a device that will interrupt your reading any more often than once in a blue moon, then your reading conditions may be quite poor, even if everything else is optimized. I would hope that anyone doing so is making an intentional sacrifice, and not really trying to get the most out of their reading. But I understand that not everyone thinks about these things very carefully, and that phones are often the most readily available e-reading device.

Other Factors: Motivation

One of the lessons I learned early in grad school was that participants in research studies don’t always share your goals. You’re trying to get them to perform your experimental task conscientiously, and they’re trying to find the fastest and easiest way to get their payment or credit. An early mentor once told me that, when possible, it helps to offer a monetary incentive for performance (and that within reasonable bounds, the amount didn’t really matter). I don’t recall seeing this in any e-reading studies; it’s certainly not the norm.

As with e-reading experience and medium preference, issues with motivation may not affect all participants equally — most students are conscientious, and some may even have found the reading samples interesting. But it may only take a few to skew the results. And even students who are willing to play the game are reading for different reasons in a lab study than they would be in the real world.

I haven’t seen any relevant data on this, but it’s not hard to imagine that participants who dislike reading on screens and have never used a Kindle before might be less inclined to pay close attention in that condition. Or that participants would be more likely to give up on awkward controls and poorly formatted e-text when nothing is really at stake. Who knows. My point here is really that reading in a study lacks what psychologists call “ecological validity” — the task is fundamentally different from reading in the real world, which is more directly motivated either by intrinsic interest (personal reading) or at least a more compelling extrinsic need/reward (reading for work or school).

What about students?

I’ve been arguing that a bit of common sense will go a long way towards avoiding the issues I see in these studies. But students, and even their teachers, have less control of how they read. They may be forced to use non-optimal technology, and to do so whether they like e-reading or not. Students spend a tremendous amount of time on laptops, which are not well-designed for reading. Many schools are under-funded, and dated or low-quality technology may still be in service. E-textbook software, often provided by publishers inexperienced with software development, varies widely in quality.

It would be disingenuous of me to dismiss these conditions as ones no right-thinking person would choose, when students are being forced into exactly those conditions. Even if there’s nothing inherently wrong with e-reading, it’s certainly possible to do it poorly, and it wouldn’t shock me to find out that we’re providing our students with substandard tools.

To the extent schools are making students read under poor conditions, studies that examine those conditions should be taken very seriously. But I don’t think panic is warranted, for a few reasons:

  • The conditions in these studies rarely if ever seem modeled on specific or realistic classroom materials. I don’t know that students do their long-form e-reading in Microsoft Word or Adobe Acrobat any more often than I do. And I don’t believe any of the studies I’ve seen used 5-pound 800-page textbooks that resemble what many schools use.
  • It’s generally unclear if the magnitudes of the observed differences are meaningful, which would be a priority for a study aimed at policy.
  • Use of a non-preferred medium and poor technology are only two of the shortcomings I see in these studies. For example, students get plenty of time to adapt to whatever they’re using to read, and I doubt most students are prevented from adjusting font sizes.

If we want to make good policy decisions, we can’t base those decisions on research drawn from what happens in your first hour, or even day or two, of e-reading, and we can’t base those decisions on studies with unrealistically poor reading conditions. If reading Shakespeare on a tablet turns out to be just as effective as reading in print, then we would be foolish to abandon the very substantial advantages of e-reading in schools just because reading a PDF on a shimmery LCD is a bad idea.

On the other hand, it’s clear that many students do prefer print, so to the extent that this preference affects their ability to learn from ebooks, it would be best not to force them. And many schools are not in a position to invest in ideal reading devices.

Although I would personally hate to see any student forced to read in their non-preferred medium, I know that the practicalities of school materials don’t always make it possible for each student to do their own thing. For now, I would just say that while we should certainly be concerned with student reading, the academic literature addresses those reading patterns no better than it addresses those of the general reader.

Access to the Latest Devices

Much of the foregoing assumes that anyone who would consider reading ebooks can also afford a relatively modern reading device. But this is surely untrue. A reading device is an up-front expense, and high-end devices can cost $300 or more. Far cheaper devices are available, but the cheapest are often of poorer quality. The most affordable e-reading device is probably the phone you already own, and that is often a poor choice for reading.

Although not everyone can afford an ideal reading device, there are some mitigating factors:

  • Depending on your reading habits, some or all of the cost of a reader can often be absorbed by the cost of the actual books. But for readers who use free libraries, the devices will of course cost more than the books.
  • Used or refurbished Kindles, including older (but still quite decent) models, can be obtained for less than $50. Low-end current models are often on sale for less than $100. It’s similar for tablets. This may be a substantial expense for some, but much less so than in the early days of e-reading. Although I’m not a fan of the screens on the cheapest devices, they’re still better than what’s used in many studies. And as time marches forward, the quality of low-end devices will continue to improve.
  • If you’re diligent about avoiding distraction, the phone you already own can be a pretty good reader.

The Importance of Reading the Original Research

The scientific literature on e-reading often bleeds into the popular media in the form of short news articles. Over the past decade, I’ve seen dozens of such articles in publications as respected as The New York Times and Washington Post, and many smaller outlets. I’m sure I’ve missed many more. But as any scientist will tell you, the popular media is not a very reliable source for science. Real scientific studies are published in journal articles intended for other researchers. When adapted for non-scientists, the results are often distorted or misrepresented in order to provide a simpler story or catchier headline. As a scientist, I saw this many times in reading articles about research I knew well (and once about my own research).

In this case, however, I think the issue is more subtle. The popular reports (in a few cases, written by scientists) have done a pretty good job of summarizing the perspectives of at least some of the study authors. But they haven’t generally been in a good position to evaluate the science critically, either on its own terms (as scientific research into mechanisms of reading) or in real-world terms (as useful guidance for readers deciding how to read). The end result necessarily reflects all of the shortcomings of the scientific literature. This doesn’t necessarily imply bad science or bad reporting, although of course there’s some of both. It’s simply difficult to understand the state of the research well without digging into the original studies.

Could E-reading Be Better Than Print?

I’ve written this article mostly as a rebuttal to those critical headlines. But arguably, if the best I can muster is that ebooks can be just as good as print books, then ebooks are on shaky ground. In fact, I think ebooks have the potential to be better than print for most readers. There are two broad reasons why I believe this.

First, even under the sketchy conditions I’ve described above, e-reading seems to do pretty well in many studies. It’s reasonable to suspect that unbiased readers, given a chance to familiarize themselves with modern devices, and to adjust those devices for comfort, will perform better than with print. And although screen technology and software have both come a long way in the last 15 years, they are both still evolving much more rapidly than print. I’m doing my best to see to that personally.

Second, ebooks have many advantages that print can’t easily match, and that are unexplored by these studies. My previous article explored the pros and cons in detail, but I’ll mention one key feature again here. For some readers, the freedom to adjust the font is game-changing. Narrowly available large-print editions are a poor substitute for what ebooks can do, and it’s patently silly to pretend that the exact same book is optimal for everyone.

There are many more pros and cons that anyone thinking about ebooks will naturally want to consider. I personally find the pros overwhelming, and have for a long time. You may agree or disagree, depending on what’s important to you. But I anticipate that as ebooks evolve, the balance will generally shift in favor of ebooks.

My Bottom Line (For Now)

It’s clear from these studies that it’s possible to do e-reading poorly. I’m not surprised to see that people who have never done any long-form e-reading and strongly prefer print suffer when you have them click through PDFs on a desktop computer. Anyone should immediately recognize that as a confluence of bad ideas.

It’s possible to do dumb things in print too. We could use ornate typefaces in tiny font sizes, printed in smudgy red ink on broad sheets of green paper, in wide columns under dim lighting. But everyone knows not to do those things. Ebooks are comparatively new, and that novelty is reflected in much of the scientific literature. If you’re going to read ebooks, you can do much better than the typical conditions in these studies, and probably would, even without any prompting. Specifically, you can and should:

  • Decide for yourself if you want to do it at all.
  • Use a much better device than the average scientific study, even on a tight budget.
  • Use modern software designed for immersive reading.
  • Adjust the display for comfort.
  • Spend more than a few minutes getting used to it.

And a bonus item not addressed by the studies I reviewed:

  • Take steps to minimize distraction on whatever device(s) you use.

For me, the bottom line right now is that although ebooks are still in their infancy, there’s no reason to believe they’re inferior to print today, as long as you don’t do anything too foolish. While I recognize that foolish people are never in short supply, and we may sometimes be making foolish decisions on behalf of students, I think anyone who has a dedicated reading device should be in very good shape.

E-reading already offers many advantages over print reading, and some disadvantages (again, see my previous article for a run-down). Everyone has to balance the considerations in their own way, and I know many reasonable people are firmly committed to print. But it would be silly to convince someone who might otherwise benefit from e-reading to stick with print for reasons that are irrelevant to them.

There will be more to say as new research comes out, but I believe that’s a reasonably fair assessment of the current evidence against reading ebooks. I’ll post updates as more studies are published.


The opinions here are my own and not those of my employer. Although I do work for an ebook seller (I’ve worked at Google since 2011, and on Google Play Books since 2018), I’ve been a devotee and advocate of e-reading since before I started working in the tech industry, and long before I worked on ebooks. I joined my current team because I’m a strong believer in the value of ebooks, not the other way around. And I do use three different e-reading products, though mostly the one I work on.

Appendix: What did I actually read?

This isn’t a scientific article, and I’ve mostly avoided naming specific studies above. But here’s a brief overview of what I read in preparing this article.

I started with Naomi Baron’s 2015 book, Words Onscreen, which is both a peek at the scientific literature and an opinion piece. Baron is a card-carrying academic and a vocal critic of e-reading, as well as an excellent writer, so Words Onscreen was an obvious place to start. I later read her 2021 book, How We Read Now, which considers the research on modern reading options more broadly.

From there, I dove into the scientific literature via meta-analyses/reviews by Clinton (2019) and by Delgado et al. (2018), as well as a review by Singer & Alexander (2017) that was more focused on methodological issues. I’ll throw in a shout-out for a more recent meta-analysis by Schwabe et al. (2022), focused on narrative texts.

Finally, I read (or where appropriate, skimmed) 40 or so journal articles comparing e-reading and print reading, as well as various other articles that came up along the way. This is far from everything written on the subject. But although the literature goes back decades, studies using modern e-reading devices only started coming out around 2012. It would have been tricky to publish a journal article much before that, since the first modern e-reader, the Kindle, only came out in late 2007.

Process-wise, Clinton’s 2019 article was my anchor for the more recent literature (newer studies invariably cite it, so it’s great for citation searches/alerts on Google Scholar). The meta-analysis considered only studies published since 2008, and ultimately turned up a mere 29 usable articles, the oldest from 2010. This is slim pickings, especially since she included many studies conducted on desktops and laptops rather than Kindles and tablets, as well as some unpublished dissertations. I didn’t read all of the studies included in the analyses of Delgado et al., because they went back to 2000, far too inclusive for my purposes. I did read some earlier reviews (Noyes & Garland, 2008; and Dillon, 1992), though for present purposes they are of mostly historical interest.

There are things I missed. Although most of these articles are freely available for download, some are protected by paywalls, with costs as high as $100. If you work at a major university, your library maintains a subscription, and you don’t have to worry about it (well, you should, but that’s a different story). I don’t, and I refuse to pay for access. In two cases, I wrote directly to study authors for “reprints” (dated academic jargon for “please send me a copy”). One kindly complied, while the other never replied (though the article became freely available later).

I’ve surely missed many relevant articles, and intentionally skipped most of the literature prior to 2008. If you feel there’s some telling data I’ve missed (or misinterpreted), feel free to drop me a line. And if you do work in this area, feel free to send me pointers, preprints, PDFs of your posters, copies of your slide decks, or lengthy counter-arguments!

Appendix: My Credentials

Since I’m asking you to trust that I’ve done a decent job distilling the literature, I should say something about my credentials. I got my PhD in cognitive psychology from Carnegie Mellon in 1994, and spent the next 17 years in various academic positions. My final academic job was a “research faculty” position — I never pursued a tenure-track faculty job (probably a bad career move) or held a teaching job. Most of my time in academia was spent in cognitive neuroscience, a multidisciplinary field that draws on cognitive psychology and neuroscience. Most of my work was in brain imaging, including fMRI studies of college undergrads and lesion analysis in stroke patients. I don’t have any background specifically in the psychology of reading, and although I’ve collaborated with language researchers, it’s not an area of expertise for me.

In 2011 I got an offer I couldn’t refuse, and left academia for a software engineering job at Google. And in 2018, I joined Google Play Books, where I remain (though I need to again emphasize that the views here are mine, not my employer’s).

Appendix: A Closer Look at Mangen et al. (2019)

The criticisms I’ve outlined here apply broadly to the literature, but there may still be individual studies that are more compelling. So I’ll go into a little more depth with one study that I think is in most respects exemplary, with the goal of conveying how difficult it is to draw clear conclusions from this kind of research.

The study by Mangen et al. (2019) avoids many of the pitfalls in study design and reporting. The authors gave 50 participants a long reading sample (28 pages, over 10,000 words) to read over the course of about an hour. Half of the participants read in print, while half read on a Kindle DX, an early Kindle model with a 9.7-inch E Ink Pearl screen. Unlike many researchers, they were careful to match things like page size, spacing, and font size — basically, readers in the print and e-reading conditions saw the same pages, just presented via different media.

They assessed things like reading speed, content recognition, “transportation and engagement,” and five types of content recall. On the majority of these measures, they found no significant differences between e-reading and print (with non-significant differences in both directions). But they did find reliable differences in three measures of temporal order processing. For example, in one such measure, participants were asked to recall whether something occurred in the first third, middle third, or last third of the text. Participants using e-readers fared more poorly than those using print when the event occurred in the first third of the text. In another measure, participants were asked to place events from the text in order. And again, participants using e-readers performed significantly worse than print readers.

Mangen et al. suggest that these differences in memory for temporal order may be related to “the sensorimotor assessment of the device.” Their explanation is complicated, but basically boils down to the fact that physical materials in print offer more cues about where you are in a book than ebooks do.

From my perspective, although this seems very plausible, it also feels like it could be an artifact of the study design. I’ll break down a few issues.

Experience with e-reading. Participants in this study were generally unfamiliar with e-reading — the average response to a question about familiarity with Kindle-like readers was closer to “never used” than to “occasionally used,” which suggests that most had never used a Kindle or anything like it. The use of a longer passage gave participants a chance to acclimate to the device during the study, but many were apparently using a Kindle for the first time. E Ink screens are very different from the smartphone and tablet screens familiar to most of us today, so this is a potentially important point.

Learning or ceiling effect? There was an interesting pattern evident in one of the temporal order measures (called “Where in the text”). The statistically reliable advantage for print was present only for events from the first third of the text. Eyeballing the numerical results, the difference was greatest for the first third, smaller for the second third, and near zero for the final segment. Supposing this pattern to be reliable, why would the effect wane? One possibility is that the advantage of print can be erased by less than an hour of practice with the Kindle. That would be a great conclusion from my point of view, but it feels unlikely to be the whole story. Another is a ceiling effect: recently read content is easy for everyone to remember, so both groups may have scored too well on the final third to leave room for any difference. Having participants repeat the procedure with a second text would have sorted this out, but the authors can’t be faulted for failing to anticipate this question.

Positional cues. In presenting the text, Mangen et al. stripped both texts of the page numbers, and covered a progress indicator present on the Kindle. The latter seems hard to justify. Progress indicators in ebooks are designed specifically as a substitute for the physical affordances of books, to give readers exactly the sense of location that Mangen et al. were measuring. Although not all ebook reading software provides the same type of progress indicator, removing it from the e-reading condition but not from the print condition stacks the deck against e-reading.

By contrast, in the print condition the authors didn’t fully strip physical progress indicators. The reading sample was embedded close to the beginning of a dummy book, with 10 blank pages preceding the sample. Keeping the sample close to the beginning of the book probably makes it pretty easy to tell how far along you are by feel alone, as the thickness of the left-side pages increases. Given that they covered the Kindle progress bar, I think a fairer comparison would have been to use a ring binding, or to embed the pages closer to the middle of the book.

What does this imply for real-world e-reading? The most straightforward conclusion would be that if you remove positional cues, readers have trouble remembering positional information. The authors did so on the Kindle, and not with the print book. Arguably this is okay, because the positional cues in books are more inherent in the medium than those of ebooks. Since not all e-reading software provides always-on progress indicators, maybe it does point to a real (but probably surmountable) disadvantage. It would be nice to see this study repeated without covering the position indicator.

Reading affinity. There was a small numerical difference between the two groups in self-reported reading frequency — participants in the print condition were slightly more avid readers, by self report. Perhaps the groups were sufficiently closely matched and perhaps not. It’s conceivable that the difference accounts for some or all of the results. We don’t know because it wasn’t factored out in any of the analyses. Relatedly, the average self-reported reading frequency of participants in the study was about halfway between 3–5 books/year and 5–10 books/year. Given that college students often read books for coursework, these are shockingly low numbers (though maybe less so in light of Naomi Baron’s observation, in How We Read Now, of a shift away from book-length reading in schools). Perhaps Mangen et al. would have observed a different pattern had they examined more avid readers, although it’s hard to say in which direction.

The device itself. We might take issue with the choice of the Kindle DX, a 10-year-old device last manufactured in 2012. The DX has lower resolution (150ppi) than most devices sold today, and was criticized for this even when it first came out. Newer devices also have smoother page turns and better contrast. And to me at least, 9.7” is too large for e-reading, especially on devices with large bezels, though a large device was probably necessary to match the page images. The Kindle DX was also somewhat heavier than the text materials (540g vs 328g). Although it may have been the best choice for this study’s design, the Kindle DX offers a poorer e-reading experience than modern devices.

Multiple comparisons. Finally, the authors report non-significant differences on most measures of comprehension. Testing a large number of measures and finding one reliable difference is the hallmark of a multiple-comparisons artifact. In this case, they found three differences, although those three measures are probably not independent. The other measures show a mix of numerical differences in both directions. However, the authors report a specific prior interest in temporal processing, which by tradition gives them some leeway to report this finding without the normally required statistical correction.
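To put a rough number on that artifact risk: if you test many independent measures at the conventional alpha of 0.05, the chance of at least one spurious “significant” result grows quickly. (Real study measures are correlated, so this simple calculation overstates the risk somewhat, but the direction of the problem is the same.)

```python
# Familywise error rate: the probability of at least one false positive
# when running k independent tests, each at significance level alpha.
def familywise_error(k, alpha=0.05):
    """P(at least one false positive) across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} measures: {familywise_error(k):.0%} chance of a false positive")
```

With ten or so measures, a study has roughly a 40% chance of turning up at least one spurious “significant” difference by chance alone, which is why a declared prior hypothesis (or a formal correction) matters so much.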

Overall, I think this study presents a more solid basis for investigating purported differences between e-reading and print than most of the literature. There’s some genuine evidence that people inexperienced with e-reading may have trouble placing textual events in temporal order when all positional cues are removed, at least on the venerable Kindle DX. This is an interesting finding worth pursuing, but I don’t think it’s a smoking gun for a general e-reading disadvantage.


