What Do We Really Know About the Maternal-Mortality Crisis?

How a measurement change fueled a crisis narrative

If you’re someone who wants to have kids, there seems to be one alarming message screaming at you from every direction: Pregnancy is getting more dangerous.

From 1999 to 2019, researchers found that maternal deaths in America more than doubled. To many observers of U.S. media, this was no surprise. For years, headlines had blared warnings about a rising crisis in maternal mortality, pointing out that the U.S. was falling far behind its peers and suggesting that a tide of scientific progress had somehow reversed itself.

But there have been concerns about the data underlying this narrative. New research calls into question whether maternal deaths have really increased and whether the U.S. is actually doing much worse than other countries.

On this episode of Good on Paper, I talk with Saloni Dattani, a researcher at Our World in Data, who dug into the data and found that the rise in maternal mortality was actually the result of measurement changes.

In 1994, the International Classification of Diseases recommended adding a pregnancy checkbox to national death certificates. U.S. states began adopting this change over more than a decade, a few at a time. The staggered nature of the adoption made it look like there was a steady rise in maternal deaths—but in reality, it reflected a shift in how researchers were classifying the deaths.

There were some serious problems with using the pregnancy checkbox to determine whether a death should count toward maternal-mortality statistics. One study looked at Georgia, Louisiana, Michigan, and Ohio—four states that had adopted the checkbox—and found that more than a fifth of the checkbox-identified maternal deaths were false positives. The women hadn’t even been pregnant.

“It can be really misleading to have an incorrect picture of what’s happening,” Dattani said. “Not just because we would misinterpret whether our policies are effective, but you’d also go down a route where you’re wasting time on this problem when there are actually important insights that you could learn from the new type of measurement.”



The following is a transcript of the episode:

Jerusalem Demsas: In the two decades from 1999 to 2019, researchers found that maternal deaths had more than doubled. This finding capped years of concerns that the U.S. was steadily becoming a deadlier place for pregnant women. These data filtered their way through academic journals and papers and national statistics to newspapers and magazines.

I remember reading these stories myself and, as someone who wanted kids, becoming more and more afraid and confused. What was going on? How could things be getting so much worse every year when medical progress should be moving us forward?

And then I started hearing that there were some concerns with the maternal-mortality statistics, that the story might be more complicated than was commonly understood.

This is Good on Paper, a policy show that questions what we really know about popular narratives. I’m your host, Jerusalem Demsas, and I’m a staff writer here at The Atlantic.

Today’s guest is Saloni Dattani. She’s a researcher at Our World in Data who has studied death certificates and cause-of-death data broadly and kept getting questions about why the U.S. maternal-mortality data looked so bad.

Her research builds on the work of other skeptical scientists and found that the seeming rise in maternal mortality is actually the result of measurement changes. In short, things aren’t getting deadlier for pregnant women; it’s that we’ve gotten better at tracking what was already going on.

[Music]

In 1994, the International Classification of Diseases recommended adding a “pregnancy checkbox” to national death certificates to try to make sure we weren’t undercounting maternal deaths. It succeeded, but it also ended up counting deaths from other causes as maternal deaths.

For instance, a study looked at Georgia, Louisiana, Michigan, and Ohio—four states that had adopted the checkbox—and found that more than a fifth of the deaths flagged by the checkbox were false positives. The women hadn’t even been pregnant!

Correcting the record on these statistics doesn’t change the fact that the U.S. needs to do more to promote women’s health. But when we’re using shoddy facts to inform our understanding of the world or to inform policy making, it can lead us down fruitless paths. And that’s not in pregnant women’s interests at all.

On the surface, this is an episode about measurement error, but it’s also one about how scientific narratives develop—both within the academy and when they reach the media and the general public. And it’s about how hard it is to communicate science to the public, even when you have the best of intentions.

Saloni, welcome to the show!

Saloni Dattani: Thank you for having me on.

Demsas: I’m excited to have you on because I’d heard whispers for a long time about concerns with the maternal-mortality data. And I’d seen tweets, or someone had mentioned it at some economics conference I was at, but I never really looked into it. And then your article came out, and I was like, Oh, wow. This is pretty definitive stuff. I need to look into it myself, which is why I wrote my own piece.

I want to start at the beginning, though, for our audience because you’ve done a bunch of research on death certificates and cause of death in the U.S., so can you just start us there? How do we determine cause of death in the U.S.? What does that process look like?

Dattani: When someone dies, there are different people who might certify their cause of death. And the death certificate—it includes a description of the specific things that led up to the person’s death. And you go back down that list to figure out what the underlying cause of death was. For example, someone might die from a gunshot wound, which eventually caused a cardiovascular event—like a heart attack or something like that—and the gunshot wound would be the ultimate underlying cause of death. So once that field is filled in, all of that data then gets sent to the state to collect statistics for deaths across the state.

And then it goes to the national—the Centers for Disease Control and Prevention, the CDC. And they collect all of this data; they turn it into codes so that they can be interpreted by researchers in a standard way. And then it’s reported internationally to the World Health Organization each year.

Demsas: That’s a really clear case, right? Someone gets shot and, even if you have pneumonia or cancer or something, everyone understands that the reason you died was because you were shot.

But I remember during COVID, there were a lot of conversations around how to classify causes of death when people maybe had other reasons why they were really sick. How do you decide what the cause of death is? How do they deal with that gray area?

Dattani: Depending on the actual cause, sometimes the type of measurement might vary. During COVID, for example, there were generally two different ways to determine whether a death was caused by COVID. You could either have COVID listed as the underlying cause of death by the doctor, or you could look at test results in the last month and count anyone who had died from any disease that might have been exacerbated by COVID. And that’s quite useful because it can be quite difficult to determine whether a specific condition was worsened by COVID or not. So you would have to have some consistent way of classifying deaths by a certain disease.

And that’s also what happens with maternal deaths. In the past, we used to have the system where you would just look at what’s listed as the underlying cause of death. And if it was pregnancy related—for example, if it said preeclampsia or something else that is very obviously pregnancy related—only those would have counted as maternal deaths in the past.

The problem with that, though, was that there were many cases where pregnancy might have worsened a woman’s underlying health conditions, like hypertension, diabetes. It could be AIDS. It could be various other conditions that she has that get worsened by the pregnancy. And those weren’t being considered maternal deaths, or sometimes they were by some doctors but not by others.

And to make that process consistent, the International Classification of Diseases—which is the international system for determining what the cause of death is—they decided that they should give additional guidance to countries on how to classify maternal deaths. And in the ’90s, they gave this recommendation that we should count any deaths that occurred during pregnancy or within six weeks of the end of pregnancy as a maternal death. So this would allow for a kind of standard.

Demsas: But that means all deaths? Even if you were hit by a car or something?

Dattani: Oh, sorry. No. This would only include deaths that are caused by medical conditions, so anything that’s not an injury or an accident or homicide or suicide. Those would be excluded. But for any other cause of death, you would look at deaths during pregnancy or within the six weeks after.

And then they asked countries to add a checkbox to their death certificate to just tick off whether the woman had been pregnant at the time of death or within six weeks. And then they could do some further investigation to follow up, like the specific cause of death and try to understand whether it was worsened by pregnancy. But this started out as this way to standardize deaths from maternal causes.

Demsas: So the change was that anything that happens to a woman when she’s pregnant counts, as long as it’s related to a medical event. So basically the only things that are excluded are homicides or suicides? Or how did they classify the stuff that was separate from maternal death? Because, obviously, there are things like suicides that can be very related to pregnancy.

Dattani: When you look at causes of death, there are broadly two categories of causes of death: One are medical conditions and causes that are considered natural events, and then there are others that are called external causes. So that includes specific things like falls, accidents, suicides, homicides, injuries—basically things that happen to you rather than diseases that worsen over time.

Demsas: Are those changes what led to the narrative around the maternal-mortality crisis? In 2019, we have a research finding that the U.S. records twice as many maternal deaths as in 1999. Is that because of these changes in measurement?

Dattani: It eventually led to that, right? So what happened was the International Classification of Diseases recommended that countries have this checkbox so that they could identify maternal deaths that had been missed in the past. And different countries adopted it at different times, but they also didn’t necessarily use it to compile their statistics.

So some countries actually used the checkbox directly and said, Any woman who has this pregnancy checkbox ticked and doesn’t have their death caused by an external cause, we’ll classify that as a maternal death and send that to the World Health Organization. But other countries didn’t do that, so they had the checkbox, but they didn’t use that data for further investigation.

The U.S. is an example where, in 2003, the U.S. decided to adopt this checkbox. But this change happened in different states at different times. So different states in the U.S. have slightly different procedures for certifying a death, and they have slightly different death certificates. And, between 2003 and 2017, each of the states eventually implemented this checkbox.

And because that was done in a gradual fashion, you saw that some states would implement this checkbox. They would identify maternal deaths through this new measurement. They would then report it to the CDC, and then it would go forward into international statistics and so on. But because different states were doing that at different times, it seemed like the overall rate kept rising throughout that period.

Demsas: So if you were just looking at the statistics that the CDC was reporting about national maternal mortality, you would just see this rise happening over time.

But what’s going on underneath that is that states are just updating how they’re measuring maternal deaths. And so this seemingly natural rise throughout the 2000s is actually a function of different states at different times changing how they’re doing measurement?

Dattani: That’s right. If you look within states, you can see a very clear, sudden rise in the rates of maternal mortality that they report. So before the change, it was relatively stable for a few years. And then just after the change, the rates, on average, doubled in states and then remained stable after that.

Demsas: What do we actually know about maternal-mortality rates right now then, relative to the ’90s? Are they stable? Is it hard to tell? Does it feel like there’s an overcount? An undercount? What do we think is going on, given these changes?

Dattani: It’s a little bit difficult to tell because of how much that measurement has made an impact. In the years since all of the states implemented the measurement change, there’s still been a slight rise, especially during the pandemic.

It’s difficult to say whether the rise without the pandemic would be greater, if that makes sense—so whether there is a rising trend, besides the measurement change now. But we have seen a rise since then. And I think that’s partly attributed to COVID infections and hospital capacity and so on.

Demsas: One thing I wanted to ask you about, too, though, is that I think the way that a lot of people have heard about this crisis is the racial disparities—the gap between maternal deaths for Black Americans and for white Americans—so the racial gap in maternal deaths there has been a really big part of this narrative.

What do we know about that gap? Has that been affected by this measurement change?

Dattani: One thing that’s clear from the research is that gap was present before the change and continues to be present now. So there is a racial disparity regardless of the measurement change. And also, at the same time, the measurement change had a bigger impact on the maternal deaths that were counted among Black women.

I think part of the reason for that is that, in the past, this determination of whether pregnancy was actually the cause of death was quite difficult—and maybe that was especially the case among Black women—so the measurement change meant that fewer of those deaths were being missed than they were in the past. But we do see this racial disparity regardless of the measurement change.

Demsas: Someone listening to this right now might just be like, So what? Who cares? I guess it’s fine that people updated. But if people were concerned about women dying—women dying in pregnancies is bad regardless—if we’re measuring more of them now, and it’s getting people to pay attention to the problem, why is this a problem? Why is this a big deal? Why are we talking about it?

So why did you look into this? What made you think this was important to correct?

Dattani: I looked into this mainly because we would often get this question. I work at Our World in Data, and we put together datasets on global issues. This was a big issue that, I think—maternal mortality is a really important issue that we have historically made a lot of progress on. And it’s quite alarming to see that these statistics suggested that there was a sudden rise in the U.S. that seemed to be a reversal of this trend of progress.

I think it’s quite important for people to be able to trust that data as, This is really showing a rise, if it is a rise. And so trying to dig into what the cause of that rise was, was the point of this piece. But it’s generally important for people to know, you know: How much progress are we actually making? Are our policies working? Or are there new problems or challenges emerging that we need to tackle now?

This is an example where, fortunately, that doesn’t seem to be the case. And it was the result of a measurement change that helped us recognize a problem that had already been there. But I do think that, in general, it can be very useful for people to dig into these statistics.

Demsas: Yeah. One thing that I thought was really important is that there’s only one cause of death that someone gets counted under, right? Some people, after my article, asked me, Why are you trying to minimize this problem of maternal mortality? Even one woman dying is terrible, and we should try to prevent that. And I think that’s obviously true.

But I think what people often don’t realize is that if you’re saying, Oh, we should really push as many things into the maternal-mortality cause-of-death numbers, you’re necessarily also taking away from other causes of death. If a woman dies because of depression, and she was also pregnant, it’s a difficult question: Should that be classified as maternal mortality or as suicide? And whether something is classified as a result of high blood pressure or a heart attack matters, too. All of these are really important medical areas that deserve a lot of attention. People dying in these ways is terrible.

And so one of the things that’s important to keep in folks’ minds is that you have to know what the actual numbers are—as best as we can approximate them, given that there are a lot of assumptions built into these models—because otherwise you’re not going to be able to address what is maybe the largest cause of death for women, or the largest cause of death for people in general.

Dattani: That’s right. So on a death certificate, you would have one underlying cause of death. Doctors can also list other conditions that they thought contributed to the death. But in statistics, we have this single cause of death for each person. And that means that if we’re saying, Actually, maternal mortality is much higher or much lower, that is actually changing the way that we’re classifying deaths. We’re moving deaths from a particular cause to another. And so it would mean changing how we’re able to tackle other problems, as well.

Demsas: And I also think that it’s more because—one thing that you were pointing out is that—there was this progress that we were making on maternal mortality, both in the U.S. but also worldwide, and then seeing the sudden reversal in the U.S. Part of the problem with that is that it’s not just drawing attention to a problem that people should care about. Obviously, people should care about maternal mortality, but the argument that the data was making was that something has changed in the last 20 years to make women more likely to die in childbirth.

And if you’re a policy maker, then the thing that happens is you start looking for solutions to what’s going on now, and you start looking for what’s changed in the last 20 years. But if the reality is, actually, It’s pretty stable, the number of women who are dying, but that means there are chronic issues that we should still continue to be addressing, that actually leads you down different paths.

Dattani: That’s right. It can be really misleading to have an incorrect picture of what’s happening, not just because we would misinterpret whether our policies are effective, but you’d also go down a route where you’re wasting time on this problem, when there are actually important insights that you could learn from the new type of measurement. And there are important things that you could do research on to understand what the causes of these deaths were and so on, and whether they’re preventable in some other way.

Demsas: All right. Time for a quick break. More with Saloni when we get back.

[Break]

Demsas: Just looking at the numbers, the story is: We tried to deal with an undercount of maternal mortality, and now we’re slightly overcounting. And now our research institutions—folks like Our World in Data but also researchers at different scientific journals—have been putting out studies showing that we may now be overcounting maternal deaths. That doesn’t necessarily sound like a problem for our media or our research institutions, right?

Sometimes we make changes. Sometimes we overcorrect. We undercorrect. We learn. We try to do better. But I think what made me really concerned, as I did research for my article, is just how long it took for us all to update, when it seems like we had this knowledge for years.

I found research from 2017 showing that there were concerns that this maternal-mortality narrative was being driven by a misunderstanding of the data. There was even a blogger in 2010, an ob-gyn, arguing against the crisis narrative and saying, We need to take a close look at these numbers.

From your perspective—you’re a researcher; you’re in these spaces a lot—why did it take so long for this narrative to become questioned?

Dattani: It’s hard for me, personally, to understand. Currently, maternal-mortality statistics get collected by the National Center for Health Statistics in the U.S., which is part of the CDC. And, because of these measurement changes that were happening in different states at different times, they decided not to publish national-level data in their own reporting. And they didn’t explain that there was this measurement change going on and didn’t alert researchers to the problem until about 2017, when all the states had then implemented this change, and then they could look at the impact of the change.

I’m not really sure why that was the case. Sometimes researchers have this caution around: We don’t want to say something that we don’t know is true. We think that this might have been the reason for the rise, but we should wait until all the data is available before we write about it. I think that might have been one issue.

I think another part of it is that sometimes the communication just doesn’t actually reach the general audience. There were some researchers who knew about the problem, but they didn’t manage to communicate it effectively to the general public, or the CDC didn’t manage to communicate it effectively to researchers.

And it’s a real shame because it means a lot of research time is wasted looking at something that is artifactual and not focusing on what we now know about the problems and also about these deaths that were previously unreported.

Demsas: Yeah. Obviously, it’s important to talk about this first-order concern about research and about policy making. But, second order, there’s also just a level in which it created a culture of fear downstream—like if I were just a reader of these articles, seeing these reports and studies. And, of course, it’s important to talk about these stories of women in childbirth. But part of it is that there seems to be a problem with how scientists communicate risk to the public, or with how the public even understands risks, right?

Even if the original research was completely correct, and there had been a doubling from 1999 to 2019, that means 505 women were counted as maternal deaths in 1999, and a little over 1,200 were counted in 2019. For context, there were more than 3.7 million births in 2019.

Now, again, it’s weird because there’s a level at which I don’t want to minimize the sadness that anyone is dying, but it is important to place it in the context of that. So when scientists communicate maternal deaths are doubling, right, in my brain, I’m thinking, Thousands and thousands of women are dying. And I’m at serious risk of dying if I choose to get pregnant and have a kid, when, in reality, you have a really, really safe time to give birth in this country, and people should feel much safer than they have perhaps at any point in history.

So how do you think about communicating this kind of thing to the public? How do you make decisions about whether you’re talking in percentages or you’re talking in more colloquial terms? Are there things that you think are really important for scientific communicators to do when talking about risks?

Dattani: That’s a really good question. The way that I usually tackle it is by giving people all the information, so not just focusing on, Has something doubled? but also telling people what the rates actually are. Both of them can be useful for different purposes.

For example, a doubling, as we said—if that was actually the case, it’s really important for policy makers to look into it, find out what the reason was for the sudden reversal of progress, and make changes. But at the same time, for the individual person, it’s not exactly very helpful, because they don’t have the context to know, Is this a risk for me personally? And should I be thinking about it during pregnancy?

Different people need different kinds of information on this, but I also think that we can treat people as adults who can understand these issues if we communicate them correctly, and kind of giving people enough information to know, Okay. This has gotten worse over time—if that was the case—but also the risks, in general, are at this level. And, in general, you should not be that afraid of dying during pregnancy.

Demsas: Yeah. It’s hard—probably increasingly in the age of social media—to segment communication this way. I can imagine there being different ways that someone puts out a press release than when they’re briefing a congresswoman or talking to a committee or an NGO that’s working on this problem. There are just different ways you would talk to those groups.

But in public, if you’re talking about it, it becomes very, very difficult to just segment yourself. You can’t go, Okay. Now this part of my podcast is for the scientists, and this part of my podcast is for the people who want to get pregnant. That’s very difficult to do, and this is something where, obviously, I think the media has a big role in how this narrative has really spread.

But it’s also hard, given that so much of how we communicate about science is downstream of the academy itself. So how we’re hearing things—if a study gets big within academic research circles, it usually takes a couple of years for it to filter into journalism and filter into the general knowledge base after that. And so correcting that also takes several years.

Dattani: Yeah. Exactly.

Demsas: And one thing that really struck me about what you just said, too, is this need to just then say, Given that there’s all of these different, competing ways that narratives get understood or contextualized, you need to really just give people the information.

And so one part of when I was doing my reporting for my article that really shocked me was a statement from Christopher M. Zahn—he’s the interim CEO of the American College of Obstetricians and Gynecologists—where he wrote, reducing “the U.S. maternal mortality crisis to an ‘overestimation’ is irresponsible and minimizes the many lives lost and the families that have been deeply affected.” That makes sense, but the why was what really struck me. And he says it’s because it “would be an unfortunate setback to see all the hard work of health care professionals, policy makers, patient advocates, and other stakeholders be undermined.”

Rather than pointing out any major methodological flaw in the paper, Zahn’s statement is expressing the concern that it could undermine the goal of improving maternal health. Obviously, that’s laudable, but that is not usually how we expect scientific fact finders to make claims.

I understand academics worry about how their work will be operationalized in the real world, but I think both of us would contest the idea that this would undermine the goal of preventing maternal mortality. I think what’s true will help people.

But secondly, there’s this dominant sense within the public-health-research space that you need to be thinking about how your work will be perceived. Is this something that you see a lot in the academic community?

Dattani: With that quote, in particular, I’m not sure what his reasoning was. I think your explanation makes a lot of sense. The other issue here is that it was not solely about overestimation or underestimation. It was also, previously, these deaths were being unreported. Now we have this new system, which captures some of those unreported deaths but also introduces some false positives. It’s a little bit complicated, and it’s difficult to have a summary of whether it’s been overestimated or not. So I understand that, but I also think the way that we’re communicating this has just not been very clear to people. And it is just difficult to communicate all of this stuff at once and try to have a clear picture that people can take away.

Partly because this narrative around the maternal-mortality crisis has lasted so long, it’s difficult for people to now make this argument without seeming like they’re backtracking or saying that all these things that we’ve been working on are not useful. I wonder if that’s part of the reason. I’m not actually sure.

Demsas: One of the things, though, about this that struck me is—going beyond this—I brought up COVID at the beginning of this conversation because that was another time where a lot of trust was broken between public-health communicators and the general public.

You saw this theory of, We should try and get the public to act the way we want, not give them the information that we have. And I thought with masks—this was the clearest case of that—you had the situation where, at the very beginning, there was a real push to preserve masks for first responders and for nurses and for doctors. And, in order to do so, they said, Don’t worry about masks. You don’t need masks. Masks aren’t important. Just stay at home.

And later, there’s a real push to get everyone to wear masks. And it kept coming up, over and over again. I remember this happening all the time—both in my real life but also on the internet—that people would just say, You guys said masks didn’t matter and, now all of a sudden, they do. Why would we believe you? You don’t know what you’re talking about.

I feel like this is a broader issue than just in the maternal-mortality space.

Dattani: Yeah. I definitely agree. Part of the reason is that people are trying to do multiple things at once. They’re trying to explain things. Maybe they don’t have a great understanding of how best to explain everything at once, so they just think, Okay. What’s the goal that I have, and what should I say so that people follow these guidelines? And it’s really tricky because if you say something that’s inaccurate—because you’re trying to achieve a certain goal—you don’t know if that same statement is going to affect other issues that are quite important later on.

And, just like the example that you gave, what’s much more helpful is to just be quite clear about what your understanding is. What are the uncertainties? What are these different metrics that you should think about? And how could people misinterpret the statistic? And just tell them what the problems with that misinterpretation are, rather than hiding it from them and then waiting for them to think, This is a contradiction. What’s going on? Why did you lie to me before?

Demsas: Yeah. One pushback I got on my article was from folks who are very sincerely concerned with the crisis of maternal mortality. They either work in this space or they themselves didn’t feel safe to have children or they didn’t feel safe in their pregnancy, and they’re very concerned about this.

And if you’re an activist—if your goal is to just make this issue prominent in both the media discourse but also amongst politicians and policy makers—it has become a really core cultural understanding. And that’s largely, in part, due to the fact that there have been lots of articles about this rise in maternal mortality.

And so, what do you say to groups or to people who say, I think that it’s really important that we not push back on this narrative, even if it’s not exactly right?

Dattani: In this case, it’s a strange takeaway for people to have, partly because what this measurement change has actually shown is that the actual number of maternal deaths was much higher than we had known in the past. The case is not that it was rising, but that it was already much higher than we thought.

And so it’s not exactly that it’s minimizing the problem or the crisis. What this research shows is, actually, these deaths were occurring. They were going unreported before; now we know a lot more about what specific causes of death they came from, and we have a much better ability to try to prevent those deaths.

So I think that’s how I would see it. And I think it’s strange for people to say this story shows that, actually, the crisis was overblown. What is true is that it hasn’t actually had this underlying rise over time, which contradicts what a lot of people have been saying.

Demsas: What was the reaction from folks when your piece was published? How did people respond to you?

Dattani: A lot of people were just surprised they hadn’t heard about this measurement issue before. They just didn’t know that it was what the process actually was. There wasn’t that much pushback. It was more just, Why wasn’t this communicated to us before?

Demsas: I think that’s part of the problem that we’re having now, too—and I think this is an issue within the media, as well—is that there’s an asymmetrical thing with corrections. People see this on Twitter all the time—a false tweet that’s inflammatory will get thousands and thousands of retweets and likes and responses, but something that is saying, Oh. Actually, that was misquoting or whatever, will not even reach that number of people.

And so, in part, this feels like a story about—not misinformation or mistakes within the scientific community, but rather—the way that narratives flourish in the media environment that we’re in right now.

Dattani: I think that’s right, as well. If this was about some other topic, and this hadn’t been a national story for a long time, I don’t think people would have cared very much if a measurement changed. And that might have been part of the reason that it just didn’t get much traction before, because people are like, Okay. Well, this is just some little issue that only technical experts should care about.

But, in fact, it really is part of this—any of these statistics that we look at can be affected by how we collect the data, how we interpret the data. And looking at these can be really important. And communicating them along with the statistics that we’re sharing with people is very important, just in case there are these issues that crop up.

Demsas: Yeah. I wrote this article where I was looking at the COVID economic catastrophes that weren’t. So there were a bunch of predictions about what would happen with women dropping out of the labor force en masse, 30 to 40 million people being at risk of eviction, state and local debt crises happening.

And all of these predictions are coming, and they don’t come to pass. And one of the big things that I pointed out, at the time, is that it felt like we were just swimming in data. There were just so many numbers, so many studies, so many preliminary charts that people were putting out about various things.

And I wonder how you feel about this, but it’s almost like we have such an abundance of numbers that we can put to arguments that it feels like people have a lot more certainty in the things that they’re saying now than they may have had before. And a lot of these numbers—they’re good. They’re telling us something, but they shouldn’t give you full certainty that you understand everything that’s going on in the world, right?

But it does seem like it’s created a level of certainty when people are making arguments. I don’t know. Do you feel like that’s happening, too?

Dattani: Yeah. I completely agree. It’s a general problem that we have with statistics and numbers that they sound a lot more empirical than just telling people how you think things have changed, or something. And, in some ways, that’s good.

It’s really important to have empirical data on problems. But at the same time, people often just take these numbers for granted and don’t look into, What is the process by which this data was collected? and so on.

And what’s tricky with this is if you’re seeing statistics all the time, it’s really difficult for each person to have the time to go in and look at where this data comes from and try to understand that. It’s really important for there to be people who do that on a regular basis, who know, generally, about the field and who can interpret the data and explain that in a clear way. But it’s not something that we should really be expecting of a general audience.

And so some of the stuff that we do at Our World in Data helps with that. But also, there are various other writers and statisticians who I think should be working much more in this science-communication area to help people interpret these statistics.

Demsas: Yeah. I mean, this is why I love Our World in Data, so feel free to sponsor the Good on Paper podcast. (Laughs.)

Dattani: (Laughs.)

Demsas: Well, as always, our final question: What is an idea that you thought once was good on paper, but then it didn’t end up panning out for you in the end?

Dattani: I have a really silly example of this, which is that a few years ago—just before the pandemic—I had moved into a new apartment. And I hadn’t actually properly looked at whether it had a washing machine. I hadn’t properly checked some of the utilities.

One of the problems that I then discovered was that the radiator in the apartment was—the dial of the radiator was broken. And so it was just permanently on the hottest setting. And the building would shut off the radiator during some of the summer months, but it was pretty much boiling for a few months every year.

Demsas: Where were you? Was this in London?

Dattani: This was in London. It just happened to be this very large building where there was only one small maintenance team, and they just never got around to fixing it in my apartment.

Demsas: I’m going to be honest. I don’t think that was good on paper, even to begin with. (Laughs.)

Dattani: (Laughs.)

Demsas: I think it was bad on paper, and then it didn’t turn out well at all.

Dattani: That’s true.

Demsas: Yeah. Well, thank you so much for coming on the show, Saloni. This has been fantastic to talk with you, and we cannot wait to have you back.

Dattani: Thank you. Yeah. I really enjoyed the conversation. I hope you have a great week.

[Music]

Demsas: Good on Paper is produced by Jinae West. It was edited by Dave Shaw, fact-checked by Ena Alvarado, and engineered by Erica Huang. Our theme music is composed by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor.

And hey, if you like what you’re hearing, please follow the show, and leave us a rating and review.

I’m Jerusalem Demsas. We’ll see you next week.
