Lucian@going2paris.net

Sadly, Many Happiness Studies Are Flawed -- WSJ


Charlottesville

July 24, 2023


I have developed a bad habit of sharing my thoughts in the comment section of the WSJ. Here's an example.


New research concludes that older practices in psychology allowed scientists to find results when in truth there were none


Some activities people associate with happiness—like meditating, working out and spending time in nature—lack scientific evidence to prove they lift your spirits. 


Self-help websites and news articles often suggest these activities are mood boosters, but most peer-reviewed studies that argue for such a link are weak or inconclusive. Partly to blame are older practices in psychology that allowed scientists to use small sample sizes and massage the data to find results when in truth there were none, researchers say in a paper published Thursday in the journal Nature Human Behaviour.


Some years ago, a journalist asked psychologist Elizabeth Dunn if strategies for happiness that commonly appeared in news stories were backed by strong evidence. Dunn, who studies happiness at the University of British Columbia in Canada, didn’t know the answer. So she set out to find out.  


“People Google ‘how to be happy’ more often than ‘how to be rich,’” she said. “We wanted to understand: What answers are they getting?”


Dunn and Dunigan Folk, a doctoral student in Dunn’s lab at the University of British Columbia, worked with a team to compile news articles in the first 10 pages of search results in response to variations on the query: “How to be happy.” The five activities recommended most often were expressing gratitude, being social, exercising, spending time in nature and meditating. 


Dunn and Folk then scoured the scientific literature for studies that examined the effects of each activity on moods. They gathered 494 peer-reviewed papers involving healthy people, in which one of the five happiness strategies was evaluated against a control group. But when they weeded out weakly designed work, only 57 robust studies were left.


The vast majority of the papers were too poorly designed to support their conclusions.  


This remaining subset of studies met at least one of two conditions for good science: They included sufficient numbers of study participants or the researchers committed to hypotheses or study plans before analyzing their data.


But even these studies failed to confirm that three of the five activities the researchers analyzed reliably made people happy. Studies attempting to establish that spending time in nature, meditating and exercising improve mood had either weak or inconclusive results.


“The evidence just melts away when you actually look at it closely,” Dunn said.


There was better evidence for the other two activities. The team found “reasonably solid evidence” that expressing gratitude made people happy, and “solid evidence” that talking to strangers improved mood.


The new findings reflect a reform movement under way in psychology and other scientific disciplines, with scientists setting higher standards for study design to ensure the validity of their results.


To that end, scientists are including more subjects in their studies because small sample sizes can miss a signal or indicate a trend where there isn’t one. They are openly sharing data so others can check or replicate their analyses. And they are committing to their hypotheses before running a study in a practice known as “pre-registering.” 
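To make the sample-size point concrete, here is a quick simulation of my own, not anything from the paper. It assumes a small real mood boost and asks how often a standard two-sample t-test would detect it at different sample sizes; all the numbers are illustrative assumptions.

```python
# A minimal sketch (my own illustration, not from the article) of why sample
# size matters: simulate a small true mood-boost effect and see how often
# tiny vs. large samples detect it with a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # a small real effect, in standard-deviation units (assumed)
n_simulations = 2000

def detection_rate(n_per_group):
    hits = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        hits += p < 0.05
    return hits / n_simulations

for n in (15, 50, 200):
    print(f"n={n:>3} per group -> effect detected in {detection_rate(n):.0%} of simulated studies")
```

With 15 people per group the real effect is detected only a small fraction of the time, which is why an underpowered study can "miss a signal" even when one exists.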


Pre-registering involves submitting plans for analyses to independent registries that research peers can access. Committing to a research design up front discourages researchers from burying negative results when the data don’t support the hypotheses and inoculates against “p-hacking,” where data is selected or manipulated to falsely yield a statistically significant result.
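And here is a second sketch, again my own illustration rather than anything from the paper, of why committing to one analysis up front matters: with no real effect at all, testing ten outcomes and reporting only the best one pushes the false-positive rate far above the nominal 5%.

```python
# A minimal sketch of "p-hacking" (illustrative assumptions, not the paper's data):
# the treatment does nothing, yet cherry-picking the best of ten outcomes
# produces "significant" results far more often than the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_outcomes, n_simulations = 30, 10, 2000

honest_hits = hacked_hits = 0
for _ in range(n_simulations):
    # Ten unrelated "mood" measures, none actually affected by the treatment.
    control = rng.normal(0.0, 1.0, (n_outcomes, n_per_group))
    treated = rng.normal(0.0, 1.0, (n_outcomes, n_per_group))
    p_values = [stats.ttest_ind(t, c).pvalue for t, c in zip(treated, control)]
    honest_hits += p_values[0] < 0.05    # pre-registered: one outcome chosen in advance
    hacked_hits += min(p_values) < 0.05  # p-hacked: report whichever outcome "worked"

print(f"Pre-registered single outcome: {honest_hits / n_simulations:.1%} false positives")
print(f"Best of {n_outcomes} outcomes: {hacked_hits / n_simulations:.1%} false positives")
```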


The practice drastically improves the credibility of findings, said Brian Nosek, executive director of the Center for Open Science, a nonprofit based in Charlottesville, Va., that champions rigor in research. 


A handful of scandals in psychology in the 2010s—including a bombshell paper that demonstrated the danger of p-hacking by showing, tongue-in-cheek, that people who listened to the Beatles song “When I’m Sixty-Four” grew younger—sent shock waves through the field and forced researchers to re-examine the status quo. 


“It was really a wake-up call,” Dunn said of the Beatles study. “We really changed everything about how we do science.” 


In the decade since, “the goal posts have shifted,” said Sonja Lyubomirsky, an experimental social psychologist at the University of California, Riverside, who wasn’t involved with the study but commented on an early draft. 


“This is really important as a paper that will galvanize researchers in the field to make sure that all of our studies going forward are well-powered and preregistered,” she said of the new work. 


This reform isn’t limited to psychology. “Every field that has bothered to look at issues of rigor and reproducibility in their evidence has found challenges,” Nosek said.


Lyubomirsky said the new analysis won’t persuade her to stop exercising or take nature walks to improve her mood, because future studies could establish that those activities are linked to happiness. 


For the full story, she said, “We really need those big studies.”
