Or does this even count as writing between the lines anymore?
Bazerman declined to discuss his co-authors. But in his book Complicit: How We Enable the Unethical and How to Stop, published in November, the Harvard professor reflected on the debacle of the 2012 study. How was it, he mused, that experiments Nos. 1 and 2 had both ended up being irreproducible?
“In retrospect, Gino reported that her lab manager at her prior university managed data collection for the two laboratory experiments in the 2012 paper,” Bazerman wrote in a chapter about the risks of putting trust in relationships. “Thus, none of the authors, including me, provided sufficient supervision of these experiments. In addition, as I review emails from 2011 containing the dialogue between coauthors of the 2012 paper, I see concerns raised about the methods. I failed to actively engage and deferred to the decisions of my colleagues, and that failure makes me complicit.”
That is from Stephanie Lee on the latest academic fraud developments.
What I find most interesting personally is how each of the big-name players in this story—3 out of 5 coauthors—appears to have written a book about the precise kind of misbehavior he or she was involved in.
Max Bazerman’s “Complicit” came out 15 months after the first Data Colada post demonstrating data fabrication1 and 8 months prior to last week’s follow-up revealing much deeper issues.2
Dan Ariely released “The Honest Truth About Dishonesty” in 2012 and was the first coauthor to be directly implicated.
Francesca Gino is the author of “Rebel Talent: Why It Pays to Break the Rules at Work and in Life” (2018), and is the focus of current allegations of “fraud in papers spanning over a decade.”
Some additional observations…
1. It’s not just the books. Dan Ariely produced and participated in the documentary “(Dis)Honesty: The Truth About Lies”, whose accompanying promotional site invites visitors to #SHAREYOURLIE.3
2. Bazerman wrote more than one book of possible relevance…
3. Andrew Gelman makes an astute point:
After all, who would write a book called “Why We Evolved a Taste for Being Bad” or “How We Lie to Everyone”? These sound like the writings of people who believe that “we” cheat. From that point of view, they might not realize that they are implicitly confessing anything! They might just think they are saying aloud what everyone is really thinking. From their (confused) perspectives, they’re the truth-tellers about human nature and it’s the rest of us who are hypocrites.
Bazerman’s one-line summary for “Complicit” certainly fits with this...
What all of us can do to fight the pervasive human tendency to enable wrongdoing in the workplace, politics, and beyond
Emphasis added.4
4. The more general idea that believing others are cheating increases one’s own willingness to cheat makes sense to me, and is supported by the literature I’ve seen on student cheating.
Another hypothesis would be that engaging in dishonesty stimulates a need to rationalize by seeing others as similarly bad. Even one-star restaurant reviews are more likely to use “we” than “I”, presumably because bad things are less emotionally painful when shared.
5. Ariely remains a tenured professor at Duke.5
6. More good stuff from Gelman:
Ultimately it’s my impression that these people don’t understand science very well. They think their theories are true and they think the point of doing an experiment (or, in some cases, writing up an experiment that never happened) is just to add support for something they already believe. Falsifying data doesn’t feel like cheating to them, because to them the whole data thing is just a technicality.
He’s right, but I can’t help feeling there’s a kernel of truth in the cynical view. Compare with Daniel Kahneman from a recent interview:
It turns out that in order to be able to state those ideas in a way that will influence thinking, you've got to pass a test of… You've got to develop a formal theory that will impress mathematicians, that you know what you're doing. Constructing a theory — so far as I'm concerned, this is very iconoclastic, what I'm saying now — constructing a theory like prospect theory is a test of competence. Once you demonstrate competence, what makes the theory important is whether there are valuable ideas that can be detached from it completely.
Now DK is not in any way implying that you don’t need to validate ideas with empirical observation, let alone that it’s ok to blatantly fabricate data in the way that (at minimum) Ariely and Gino appear to have done. But it’s not hard to see how an ambitious, charismatic and creative personality could convince themselves into believing that what they were doing was simply passing a test of competence to influence thinking.
Doesn’t a lot of the blogosphere tacitly believe the publication process is mostly just unnecessary formality and bureaucratic BS?
7.
Throughout our careers, we are taught to conform — to the status quo, to the opinions and behaviors of others... By the time we reach high-level positions, conformity has been so hammered into us that we perpetuate it in our enterprises.
Francesca Gino6
8. The central fraudulent article consisted of three separate experiments performed independently. When experiment No. 3 got busted, Ariely was the one left holding the bag:
[The study] was supervised by Dan Ariely, and it contains data that were fabricated. We don’t know for sure who fabricated those data, but we know for sure that none of Ariely’s co-authors – Shu, Gino, Mazar, or Bazerman – did it.
Not extremely unusual as a one-off, but the same thing happened again this time around:
Gino, who was a professor at UNC prior to joining Harvard in 2010, was the only author involved in the data collection and analysis of Study 1.
The Data Colada team underscores,
Two different people independently faked data for two different studies in a paper about dishonesty.
It’s almost as though the collaboration was designed to make it easy for a majority of authors to detach from accountability.
9. The original authors of a published paper rarely fail to replicate it.7
Yet in 2020 the five original authors teamed up with two additional scientists to resoundingly dismiss the results of the earlier study, which was not yet known to be fraudulent. In a Scientific American article titled “When We’re Wrong, It’s Our Responsibility as Scientists to Say So”, the seven wrote,
Seven years and hundreds of citations and media mentions later, we want to update the record. Based on research we recently conducted—with a larger number of people—we found abundant evidence that signing a veracity statement at the beginning of a form does not increase honesty compared to signing at the end.
Their recommendation for the future?
We believe that incentives need to continue to change in research, such that researchers are able to publish what they find and that the rigor and usefulness of their results, not their sensationalism, is what is rewarded.
Thanks for reading! If you enjoyed this, I would appreciate if you would subscribe or share this article with one of your coauthors.
Specifically regarding experiment No. 3, implicating Dan Ariely as the only possible culprit among the five coauthors.
The scientist bloggers present evidence that experiment No. 1 from the 2012 article was also tampered with and make credible claims about various other papers.
“Specifically, we wrote a report about four studies for which we had accumulated the strongest evidence of fraud. We believe that many more Gino-authored papers contain fake data. Perhaps dozens.”
He also appeared in a documentary about Elizabeth Holmes to discuss “potential reasons why Holmes could have given such barefaced lies to the media and her colleagues about how her company operated.”
Also this blurb: “When confronted with an ethical dilemma, most of us like to think we would stand up for our principles. But we are not as ethical as we think we are.”
Ariely concluded his decade-long column “Ask Ariely” at the Wall Street Journal on September 22, 2022, more than a year after findings of data tampering came to light.
In his final essay he wrote,
“I answered many of the questions in these pages, but I answered even more privately, whether because the questions were very personal or idiosyncratic or because I had limited space. Regardless of the response format, I was aware of the trust readers placed in me and felt tremendous responsibility to give it my best.”
From “Let Your Workers Rebel”. Also…
“I hear a lot of people saying, as soon as they get in, they feel the pressure to conform” to company culture and norms, said Francesca Gino
See “Replications in Psychology Research: How Often Do They Really Occur?” (2012)
“High authorship overlap is important to note because the success rates of replications were significantly different based on whether there was author overlap, with replications from the same research team more likely to be successful than replication attempts from a unique research team (91.7% vs. 64.6%, respectively)”