COVID-19 Data Dives: The Perfect COVID Test Should Not Be Our Only Goal

William P. Hanage, PhD


August 28, 2020

Medscape asked top experts to weigh in on the most pressing scientific questions about COVID-19. Check back frequently for more COVID-19 Data Dives, and visit Medscape's Coronavirus Resource Center for complete coverage.


A recent article in The Atlantic explores the potential of different sorts of testing and the ways that we are limited by demands for sensitivity and specificity. There is one important thing that it is missing, which I will come to later.

Different sorts of questions require different sorts of tests. The US is hung up on tests and testing to the extent that some people think that testing is, on its own, a sufficient pandemic response. It's not. It's just a way of keeping score.

The crucial thing is not the test itself but what you do in response. For instance, to ensure appropriate treatment of cases in a healthcare setting, you want a very sensitive and specific test — meaning, you can trust the result because getting it wrong matters. 

But if you are screening out in the community, you might want to pool tests by combining several samples. That would be useful in schools, because if the test came back positive you could send kids home without needing to know exactly which student is infected. (That can be cleared up later with PCR testing.)
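The arithmetic behind pooling is worth seeing. Below is a minimal sketch of Dorfman-style pooled testing (the function name and the prevalence and pool-size values are illustrative, not from the article): each pool is tested once, and only the members of a positive pool need individual follow-up.

```python
def expected_tests_per_person(prevalence, pool_size):
    """Dorfman pooling: one test per pool, plus one test per member
    if the pool comes back positive. Assumes a perfectly accurate
    test and independent infections, purely for illustration."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    expected_tests_per_pool = 1 + pool_size * p_pool_positive
    return expected_tests_per_pool / pool_size

# At 1% prevalence, pools of 10 need roughly a fifth as many tests
# as testing every student individually.
print(round(expected_tests_per_person(0.01, 10), 3))  # 0.196
```

The lower the prevalence, the bigger the saving, which is why pooling suits routine community screening rather than diagnosis of symptomatic patients.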

In that Atlantic article and another in The New York Times, my colleague Michael Mina, MD, PhD, has eloquently described the benefits of a test that is highly specific but not especially sensitive: one cheap and quick enough to be taken daily.

These tests exist; an example is one made by E25Bio. Such tests run into trouble with regulators, however, because of the lack of sensitivity. A fraction of people who are genuinely infected will get a false negative. This is obviously not ideal, but for screening purposes it may not be such a hindrance. 

Imagine a test with 50% sensitivity, taken daily. It might come back (wrongly) negative on the first day, but the chance of it staying negative throughout an infection shrinks with every repeat: missing an infected person on three consecutive days happens only 12.5% of the time, assuming independent results. Even if you only detected half of cases and isolated them in time to limit onward transmission, that would be much better than most current contact-tracing efforts.
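The repeat-testing logic can be sketched in a few lines. This is a toy calculation that treats each day's result as independent, which real serial tests are not exactly:

```python
def detection_probability(sensitivity, days):
    """Chance an infected person tests positive at least once
    across `days` daily tests, assuming independent errors."""
    return 1 - (1 - sensitivity) ** days

for d in (1, 2, 3, 5):
    print(d, detection_probability(0.5, d))
# Even a 50%-sensitive test catches 87.5% of infections within
# three days, and nearly 97% within five.
```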

As Mina argues, there is good reason to think that this sort of test would be more sensitive the more contagious a person is. Full disclosure: Mina is a colleague. That's not influencing my opinion; I can happily disagree with him. But I think his suggestion is reasonable.

The thing the article misses is another advantage of such tests, as my colleagues and I show in this not-yet-reviewed preprint examining the use of low-sensitivity tests to mitigate outbreaks. These highly specific but less sensitive tests actually take advantage of the mathematics of viral transmission, specifically overdispersion.

SARS-CoV-2 transmits in clusters (the technical term is overdispersion): A minority of infections are responsible for the majority of transmission. Most infections don't transmit. And it's transmission that we care about. 
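A quick simulation makes this concrete. The sketch below draws offspring counts from a negative binomial distribution (built as a gamma-Poisson mixture); the reproduction number R0 = 2.5 and dispersion k = 0.16 are illustrative assumptions in the range estimated for SARS-CoV-2, not figures from this article.

```python
import numpy as np

rng = np.random.default_rng(0)
R0, k, n = 2.5, 0.16, 100_000  # illustrative parameters, not fitted values

# Negative binomial offspring distribution via a gamma-Poisson mixture:
# each case gets an individual transmission rate, then a Poisson draw
# of how many people it infects.
rates = rng.gamma(shape=k, scale=R0 / k, size=n)
offspring = rng.poisson(rates)

frac_zero = (offspring == 0).mean()
top20_share = np.sort(offspring)[-n // 5:].sum() / offspring.sum()
print(f"cases with no onward transmission: {frac_zero:.0%}")
print(f"transmission share of the top 20% of cases: {top20_share:.0%}")
```

With these assumed parameters, roughly two thirds of simulated cases infect no one, while a small minority account for the large majority of transmission: exactly the clustered pattern described above.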

So why does that work well with these tests? Because where there is one transmission event, there are likely more. Imagine combining these tests with contact tracing. While one test might be a false negative, it's unlikely that all of them will be. If a contact comes back positive, you know to quarantine and investigate all who share the exposure. 

These tests may also help "cluster busting" backwards contact tracing strategies. Every case was infected by someone, and that someone most likely transmitted to at least one other person.

Tracking them down quickly is key, and for that, the speed of the test matters.

But if regulatory authorities demand very high sensitivity and specificity, regardless of whether the test is for screening or diagnostics, this strategy will be off the table — and not because it's not possible. 

As the article points out, we should have taken the virus seriously in the first place. Sure, testing costs money. How much do you think all the testing we didn't do in the spring is costing us now?

Every day of the pandemic, the US economy hemorrhages more money, money that could be spent on lots of cheap, imperfect tests.

And in this case, requiring the perfect may be the enemy of the good. 

Bill Hanage is an associate professor at the Center for Communicable Disease Dynamics in the Department of Epidemiology at the Harvard T. H. Chan School of Public Health. He specializes in pathogen evolution. Follow him on Twitter.


