No ‘heckler’s veto’ in online ratings of doctors, study shows

Doctors have many concerns about online crowdsourced ratings, which are intended to make patients better-informed consumers of health care, but this is a big one: They worry that complainers will be the most outspoken contributors to rating sites, skewing scores and resulting in a kind of heckler’s veto.

A new study from the Robert H. Smith School of Business at the University of Maryland finds that that fear is unwarranted. Researchers compared the ratings of 1,425 doctors in three metropolitan areas — Denver, Kansas City and Memphis — on the popular site RateMDs.com against very thorough surveys of patient satisfaction conducted by Checkbook.org, a nonprofit consumer research organization. The surveys were designed by the Agency for Healthcare Research and Quality (AHRQ) within the U.S. Department of Health and Human Services.

It emerged that there was indeed a correlation, though far from perfect, between the online ratings and the more thorough examinations of patient satisfaction, suggesting that the ratings represented a broad spectrum of the patient population. More surprisingly, physicians who did poorly in the government evaluations tended to receive fewer online ratings than those who did well, the opposite of what you'd expect if patients with bad experiences dominated the reviews.

"The concern that ratings aggregation sites will become digital soapboxes for disgruntled patients appears to be unfounded," write Gordon Gao and Ritu Agarwal of the Smith School, Brad N. Greenwood of Temple University (and a Smith PhD), and Jeffrey McCullough of the University of Minnesota. Agarwal and Gao co-direct the Smith School's Center for Health Information and Decision Systems (CHIDS).

In other areas of the economy, unhappy customers tend to be the most vocal. Why might that not be true in health care? The authors offer several possible explanations. First, it’s conceivable that the patients of the worst doctors might have less access to the Internet or be less familiar with online reviews. Second, patients might be worried that if they leave reviews, health-care providers might retaliate against them in some way, even if the reviews are anonymous. Finally, people might just evaluate health care in a different way than they evaluate products on Amazon.

The effectiveness of online ratings is a subject of intense and growing interest: Some 37 percent of patients have consulted a ratings website when seeking health care. According to the new study, online star ratings tended to be most helpful for distinguishing doctors in the middle 50 percent of performance (as measured by the government surveys). A "hyperbole effect" was evident for doctors in the highest-performing and lowest-performing quartiles: their ratings tended to cluster, so small differences in star ratings carried little meaning.

One big caveat is that the study was limited to patient satisfaction rather than objective measures of patient outcomes or the protocols doctors followed. A study published in the February 2015 issue of JAMA Internal Medicine, by Gao and four co-authors, found little statistically significant connection between patient ratings on eight websites and objective measures for 1,299 internists.

“This is what we should keep in mind: A very high score in patient satisfaction is not wholly connected with clinical quality,” Gao says. “If you want to use the online ratings to infer how good a doctor is clinically, take them with a grain of salt.”

The Centers for Medicare & Medicaid Services is working on the Physician Compare Initiative, an online resource that would allow consumers to compare data on the health-care outcomes of different physicians. It remains controversial, however, because doctors doubt it will be possible to correct for factors such as the general health of a physician's patients and whether patients adhere to doctors' recommendations.

Source:

The above post is reprinted from materials provided by University of Maryland. The original item was written by Greg Muraski. Note: Materials may be edited for content and length.

Journal Reference:
1. Guodong (Gordon) Gao, Brad N. Greenwood, Ritu Agarwal, and Jeffrey S. McCullough. Vocal Minority and Silent Majority: How Do Online Ratings Reflect Population Perceptions of Quality? MIS Quarterly, June 2015.