Declining survey response rates are a problem—here's why
Oct 7, 2020
7 MIN. READ

After over 40 years in the field of survey research, John Boyle still loves to dig into the data. Surveys, after all, remain an important tool for understanding public sentiment toward everything from the current presidential election to the economy and public health. “What’s great about my job in collecting data,” according to John, “is that I get to answer questions where, if you didn’t have the data, I could make the case for either side.”

 

But how do you answer questions when the data is harder to get? Survey response rates have been declining for the past 20 years. So John—a senior survey advisor at ICF—and a research team set out to look at why people participate in surveys. One of the resulting papers was recently published, and we asked him to share some highlights from the findings:

 

Q: The paper talks about declining survey response rates in the U.S. for many years. Why is this a concern for researchers?

 

A: There are three primary reasons that declining response rates are a big deal. First is cost. As the response rates decline, you have to work a lot harder—do a lot more calls or send out a lot more questionnaires, and possibly offer incentives—just to get enough respondents. Those all increase the cost of each survey conducted.

 

Second is credibility. For years, we’ve used response rate as the gold standard of survey quality. We’ve seen the response rates decline on major surveys—in some major federal surveys from 70% to 40% or less—over the last 20 years. The 2018 Survey of Medicaid in Ohio had a response rate of just 12%. The face validity looks bad, which undermines our ability to judge how good these surveys are and how much we can rely on them.
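
To put the cost point above in rough numbers, here is a minimal sketch, assuming a hypothetical target of 1,000 completed interviews (the target is an assumption for illustration, not a figure from the interview), of how many contacts are needed at the response rates John cites:

```python
# Attempts needed to reach a fixed number of completed interviews at
# different response rates. The 1,000-complete target is an assumed
# figure for illustration only.
TARGET_COMPLETES = 1_000

for response_rate in (0.70, 0.40, 0.12):
    contacts_needed = TARGET_COMPLETES / response_rate
    print(f"{response_rate:.0%} response rate -> ~{contacts_needed:,.0f} contacts")

# 70% response rate -> ~1,429 contacts
# 40% response rate -> ~2,500 contacts
# 12% response rate -> ~8,333 contacts
```

Every extra contact, plus any incentive offered, adds directly to the cost of each completed survey.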

 

Third, non-response bias is the real problem. The larger the proportion of non-respondents to respondents, the more the opportunity for non-response bias grows. You may have a perfectly drawn sample, but if a majority are non-responders, you don’t know how that group differs from the responders. The overall results of the research can get skewed by over-indexing on the types of people who are more likely to respond to surveys.
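
A minimal sketch of that mechanism, using made-up prevalence numbers purely for illustration: the bias in a respondent-only estimate equals the non-respondent share times the gap between respondents and non-respondents, so it grows as non-response grows and flips direction depending on which group is healthier.

```python
def respondent_only_bias(response_rate, mean_respondents, mean_nonrespondents):
    """Bias of an estimate based only on respondents: it misses the true
    population mean by (1 - response_rate) * (respondent/non-respondent gap)."""
    true_mean = (response_rate * mean_respondents
                 + (1 - response_rate) * mean_nonrespondents)
    return mean_respondents - true_mean

# Hypothetical prevalence of a chronic condition (illustrative numbers only).
# If sicker people respond more, the survey overstates prevalence ...
print(respondent_only_bias(0.40, mean_respondents=0.30, mean_nonrespondents=0.20))  # ~ +0.06
# ... and if healthier people respond more, it understates it.
print(respondent_only_bias(0.40, mean_respondents=0.20, mean_nonrespondents=0.30))  # ~ -0.06
```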

 

Q: How is non-response related to potential bias, particularly in health measures?

 

A: It’s important to look at how the completed sample differs from the general population, and then understand how they are different. Those differences can throw off your estimates in either of two directions. If the people in the completed sample are healthier than the general population, your estimates would show a healthier population than is really the case. But if more people who respond are sicker, then the study will produce estimates that say the general population is less healthy than it truly is. That’s a big problem in our understanding of a whole range of health measures.

 

Q: Who’s more likely to participate in biomarkers collection, healthy people or sick people?

 

A: One of the biggest and most important surveys that uses biomarkers is the National Health and Nutrition Examination Survey. After their household interview, respondents are invited to go to a physical site for biomarker collection. Many assumed that people who agreed to the physical exam and clinical assessment were more likely to have unmet health problems or lack access to health care.

 

Our survey data suggests that this is both true—and maybe not true. People with unmet health needs or chronic conditions are in fact more likely to agree to do a health survey including the collection of biomarkers. But if you really look at the data, their propensity to participate is more about the salience of the health issues to them than trying to get additional health care. If we call it a general health survey, people with no conditions are less likely to respond than someone with any condition. And if we say that we’re interested in a particular issue (e.g., asthma), then people with related conditions—asthma, allergies, or other respiratory issues—will be more likely to respond than someone with heart disease or diabetes.

 

Q: There’s an inclination for researchers to gather as much information as possible, including biomarkers such as blood pressure and weight. How can the potential non-response be balanced against the value of the additional information?

 

A: The first issue is to assess whether there is potential non-response. The way we do surveys, traditionally, is to use one protocol for everybody—one appeal, one cover letter, one script. That treats everyone the same, but it also makes assumptions that people participate in surveys due to social utility (a desire to perform a service for the community or country). If it turns out that everyone doesn’t have the same reason for participating, then we have concerns.

 

In the past, researchers primarily looked at demographics. They compared the demographics of the completed sample to the general population and made any corrections through sample weighting. But certain non-demographic factors can’t be corrected. Our survey found that there are significant differences by health status (whether or not the person has certain health conditions, or if they think health care is important) in the willingness to participate in surveys or collection of biomarkers.
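
A minimal sketch of the demographic correction described above, with hypothetical population and sample shares (the numbers are assumptions, not survey data): each respondent in a demographic cell is weighted by population share over sample share, which rebalances the cells but cannot fix differences on factors the weights don't capture, such as health status.

```python
# Post-stratification weighting on a single demographic variable (age group).
# All shares below are assumed numbers for illustration, not survey data.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.15, "35-64": 0.45, "65+": 0.40}   # older people over-respond

weights = {cell: population_share[cell] / sample_share[cell] for cell in population_share}
print(weights)  # {'18-34': 2.0, '35-64': 1.11..., '65+': 0.5}

# Weighting rebalances the age cells, but within each cell it still assumes
# respondents look like non-respondents, which fails if something the weights
# don't capture (e.g., health status) drives who answers.
```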

 

Q: Now that you’ve made the determination, what can be done about it?

 

A: Playing on social utility as motivation may be the cheapest way to complete a survey, but it’s not the only answer. You have to recognize that social utility doesn’t resonate with some groups—particularly lower-income groups. I’ve heard respondents say on telephone surveys: “Why should I participate? You’re being paid to conduct it but I’m not being paid for my time.”

 

We have to address differences in the characteristics between those who do and don’t respond to surveys by using a tailored approach. If you get refusals and non-responders, then you should tailor the appeal in a different way. Maybe give them extra interview attempts. Maybe give them a financial incentive for their participation and change the survey introduction to mention those incentives. Maybe tell them how this survey impacts their community, or how health issues also translate to economic issues that may affect them.
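
As a rough sketch of what such a tailored protocol could look like (the segments, attempt counts, incentive amount, and appeal labels are hypothetical, not from the study): initial refusals and non-responders get a different appeal, extra attempts, or an upfront incentive instead of the single social-utility script.

```python
# Hypothetical tailoring rules for follow-up contact after an initial
# refusal or non-response. Segments and values are illustrative only.
def followup_protocol(case):
    protocol = {"max_attempts": 3, "incentive_usd": 0, "appeal": "social_utility"}
    if case.get("refused_or_no_answer"):
        protocol["max_attempts"] = 6             # extra interview attempts
        protocol["appeal"] = "community_impact"  # "how this survey affects your community"
        if case.get("low_income_area"):
            protocol["incentive_usd"] = 10       # acknowledge respondents' time
            protocol["appeal"] = "incentive_mentioned_in_intro"
    return protocol

print(followup_protocol({"refused_or_no_answer": True, "low_income_area": True}))
# {'max_attempts': 6, 'incentive_usd': 10, 'appeal': 'incentive_mentioned_in_intro'}
```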

 

If we fail to recognize that segments of the population vary in their willingness to participate, then those differences are going to affect the outcomes of our survey. Changing the appeal and protocol for different groups may introduce bias in other ways, but it’s the best approach to get a final sample that is closer to the general population. We need to think about which levers to use and how to use them to get a sample.

 

Q: This research suggests that there is an underlying propensity to participate in biomarkers collection. Do you think that propensity likely extends to screening for the novel coronavirus?

 

A: One of the population segments with low likelihood of response has high distrust of government. We tested this by asking for likelihood of response if the survey was sponsored by the government versus other potential sponsors. The response to the government as the sponsor was much lower than if the survey was sponsored by a university or a non-profit organization. So if you’re conducting a health survey and you have a choice between mentioning the university that has the grant and the federal agency that gave the grant, only mentioning the university may produce a higher response rate.

 

At the end of the day, the biggest underlying issue we’re trying to address in our project on the dimensions of participation is understanding what drives response rate. Unfortunately, the theories of survey participation are lacking in empirical research to back them up. Our paper offers the first detailed look at why people say that they participate in surveys. What we saw is that the population is not homogenous in terms of the willingness to participate.

 

If we want to improve response rates, reduce non-response, and correct any imbalance in the sample, we have to take a different approach. We have to throw out the one-size-fits-all approach that is based on social utility, because we know that only appeals to one segment of the population. Now that we’ve recognized what the problem is, we can get to work on the solution.

 

Read the full paper to learn more.
