Inside the CDC’s pandemic “weather service”


Scientists have been modeling infectious disease epidemics since at least the early 1900s, when Nobel Prize winner Ronald Ross used mosquito reproduction rates and parasite incubation periods to predict the spread of malaria. Over the past decades, Britain and several other European countries have succeeded in making forecasting an integral part of their infectious disease programs. Why, then, has forecasting remained an afterthought at best in the United States? For starters, the quality of a given model, and of the predictions that result from it, depends heavily on the quality of the data that goes into it, and in the United States it is difficult to get good data on infectious disease outbreaks. Such data are often collected differently from place to place; not easily shared between entities like test sites, hospitals, and health services; and difficult for academic modelers to access or interpret. “For modeling, it is essential to understand how the data was generated and what are the strengths and weaknesses of any data set,” says Caitlin Rivers, epidemiologist and associate director of CFA. Even simple metrics like test positivity rates or hospitalizations can be fraught with ambiguities. The fuzzier these numbers are, and the less the modelers understand this fuzziness, the weaker their models will be.

Another fundamental problem is that the scientists who build models and the officials who use those models to make decisions often disagree. Health officials concerned about protecting their data may be reluctant to share it with scientists. And scientists, who tend to work in academic centers rather than government offices, often fail to take into account the realities that health officials face in their work. Misaligned incentives also prevent the two from collaborating effectively: academia tends to favor advances in research, while public health officials need practical solutions to real-world problems, and they need to implement those solutions at scale. “There is a gap between what academics need to be successful, which is publishing, and what they need to have real impact, which is to create systems and structures,” explains Rosenfeld.

These shortcomings have hampered every real-world epidemic response so far. During the 2009 H1N1 pandemic, for example, scientists struggled to communicate effectively with decision makers about their work and, in many cases, failed to access the data they needed to make useful projections about the spread of the virus. They built many models anyway, but almost none succeeded in influencing the response effort. Modelers faced similar hurdles during the Ebola outbreak in West Africa five years later. They were able to guide successful vaccine trials by identifying when and where cases were likely to increase. But they were unable to establish a cohesive or sustainable system for working with health officials. “The network that exists is very ad hoc,” says Rivers. “Much of the work that is done is based on personal relationships. And the bridges you build during a given crisis tend to evaporate as soon as that crisis is resolved.”

Scientists and health officials have made numerous attempts to fill these gaps. They have created several programs, collaborations, and initiatives over the past two decades, each aimed at improving the science and practice of modeling real-world epidemics. How successful these efforts were depends on whom you ask: one changed course after its founder retired, some lacked funding, and others still exist but are too limited to meet the challenges ahead. Marc Lipsitch, infectious disease epidemiologist at Harvard and CFA scientific director, says that everyone has nevertheless contributed something to the current initiative: “It was these previous efforts that helped lay the foundation for what we are doing now.”

At the start of the pandemic, for example, modelers drew on lessons they had learned from FluSight, an annual challenge in which scientists develop real-time influenza predictions that are then compiled on the CDC’s website and compared against one another, to build a system they called the Covid-19 Forecast Hub. By early April 2020, this new hub was posting weekly forecasts on the CDC’s website that would eventually include deaths, cases, and hospitalizations at the state and national levels. “This was the first time that modeling was formally incorporated into the agency’s response on such a large scale,” George, CFA director of operations, told me. “It was a big deal. Instead of an informal network of individuals, you had somewhere in the realm of 30-50 different modeling groups that were helping with Covid in a consistent and systematic way.”

But while those projections were meticulous and modest — scientists ultimately decided that any forecast longer than two weeks was too uncertain to be useful — they also fell short of the demands of the moment. As the coronavirus outbreak turned into a pandemic, scientists of all stripes were inundated with calls. School and health officials, mayors and governors, business leaders and event planners all wanted to know how long the pandemic would last, how it would play out in their specific communities, and what actions they should take to contain it. “People were panicking, scouring the internet and calling whatever name they could find,” Rosenfeld told me. Not all of these questions could be answered: data was scarce and the virus was new. There was little that could be modeled with confidence. But when modelers balked at these demands, others stepped into the void.
