Science as Status: Why Do Managers Like Citing Studies They Haven’t Read?

Imagine a common scene. A LinkedIn post goes viral claiming that, “according to a Harvard study,” working from home destroys productivity. Someone asks for the source, the author drops a link, and that is usually where it ends. The reference to Harvard, or to a “meta-analysis,” works as a status signal.

The post creates an impression of the author’s rationality, without anyone asking about the research methodology, the effect size, the sample composition, or whether the findings transfer beyond a single company. As long as the chart looks smart, the logo does the rest. Studies thus become a kind of social currency that legitimizes an already-formed opinion. In the end, most people who share a popular article from Harvard Business Review never open the original study. Harvard Business Review, by the way, is a popular magazine, not a peer-reviewed scientific journal of Harvard University.

What the Data Actually Say About Working From Home

In reality, the data on working from home are not black and white. Meta-analyses suggest a dual effect: more remote work increases performance through greater autonomy, but it can also reduce performance through isolation and harder coordination. On average, the impact of working from home on productivity is small to nonexistent; fully remote setups may be negative for some roles, whereas hybrid arrangements typically maintain comparable performance and improve retention. This “unsexy” nuance, that the effect depends on intensity and conditions rather than a simple yes or no, spreads less easily than sharp claims. In practice, what wins is how something sounds, not what it actually says.

When Form Overtakes Content

This logic is reinforced mainly by form. When a text sounds technical, uses terminology, and appears scholarly, we readily attribute more credibility to it than it deserves. The Grievance Studies project illustrated this well. The authors deliberately wrote absurd articles, for example an “analysis” claiming that dogs in dog parks reproduce patriarchal rape culture, and still managed to get them accepted by peer-reviewed academic journals because they looked scientific enough. In academia, form sometimes wins over substance; in business, where there is no time to dissect study design and data quality, the tendency is even stronger. Convincing charts and catchy phrases then legitimize decisions that were made long before, instead of serving as critical evidence for them.

Meanwhile, social networks have turned “Source?” into a modern incantation. It acts as a sign of critical thinking, yet without basic methodological literacy the citation remains mostly a symbol. We don’t ask how large the effect is or whether it is practically significant. We don’t ask in which contexts the results hold in practice, or under what conditions an academic effect collapses in the complexity of a real workplace. We lack the habits that would turn a source into the beginning of a conversation rather than its end.

LLMs Make the Confusion Worse

Into this situation enter large language models. ChatGPT and similar tools have accelerated access to information and wrapped it in a smooth, confident package. The answer arrives in seconds, fluent and persuasive. Yet the model is not a researcher who evaluates the meaningfulness of the question, the quality of the evidence, or the suitability of the methods. It responds to how it is asked and relies on probabilistic patterns. If we pose a vague, suggestive, or misleading question, we get an output that mainly sounds good. The result is an illusion of competence: the more coherent and elegant the answer appears, the more we trust it, even when it stands on shaky foundations.

Psychology offers an explanation that is especially useful in managerial practice. Our expectations shape what we notice in information and what we overlook. Motivated cognition makes us more prone to accept claims that confirm our prior beliefs and more critical toward those that contradict them. Large language models subtly reinforce this tendency. They accept the frame we provide in the question and return it in polished language, often with amplified confidence. A self-reinforcing loop emerges: we ask a question with a certain expectation, get an answer that strengthens it, and then ask even narrower, less critical questions. Confidence grows faster than understanding.

Same Model, Different Framing, Different Decision

Imagine selecting a candidate for a team. If we ask the model, “Find the risks of candidate X,” it will produce a list of weaknesses, and the team will easily reinforce its doubts. But if we rephrase the query to explicitly require a balanced assessment and the conditions under which the conclusion would not hold, the bias decreases significantly and the quality of reasoning improves. Same model, different framing, different decision.
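
To make the contrast concrete, here is a minimal sketch of the two framings side by side, using the OpenAI Python SDK. The model name, the prompt wording, and “candidate X” are illustrative assumptions, not part of the original scenario:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Framing 1: one-sided, invites confirmation of existing doubts.
risk_only = "Find the risks of hiring candidate X for our analytics team."

# Framing 2: explicitly demands balance and disconfirming conditions.
balanced = (
    "Assess candidate X for our analytics team. Give strengths and "
    "weaknesses equal space, and name the conditions under which your "
    "overall recommendation would NOT hold."
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same model, different framing, potentially a different decision.
print(ask(risk_only))
print(ask(balanced))
```

The point is not the particular SDK but the habit: the second prompt builds the request for disconfirming evidence into the question itself.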

Or consider employee turnover. In a quick narrative, attributing it to salaries makes sense. But once we broaden the data picture to include onboarding, leadership, role clarity, and workload, and then test small experiments, we often find that the key factor is something less visible than a compensation table.
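
As a hypothetical sketch of what “broadening the data picture” can look like, the pandas snippet below compares how several factors move together with attrition. The column names and every value are invented purely for illustration:

```python
import pandas as pd

# Hypothetical data: one row per employee, invented for illustration only.
df = pd.DataFrame({
    "left_within_year": [1, 0, 1, 0, 1, 0, 0, 1],
    "salary_vs_market": [0.95, 1.00, 1.02, 0.98, 1.01, 0.97, 1.03, 0.99],
    "onboarding_score": [2, 4, 1, 5, 2, 4, 5, 1],
    "role_clarity":     [2, 5, 2, 4, 1, 5, 4, 2],
    "weekly_overtime":  [6, 1, 8, 0, 7, 2, 1, 9],
})

# Which factors actually move together with leaving?
print(df.corr()["left_within_year"].sort_values())
```

In data shaped like these, salary barely varies while onboarding, role clarity, and overtime track attrition closely, exactly the kind of pattern a salary-only narrative would hide. Correlations are only a starting point, which is why the text insists on small experiments next.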

What Makes a Good Story Is Rarely Accurate

A classic example is the Dunning–Kruger effect: people with low skill levels lack the very knowledge needed to estimate their own performance accurately, and therefore tend to think they are better than they are. It easily suggests itself as a meta-explanation of our topic: perhaps managers share “studies” they don’t understand because those who don’t see their own limits lean more readily on external authority as a cheap signal of competence. But this is just one of many popular shortcuts applied to countless situations, and precisely for that reason it should be handled carefully. It sounds apt (we all know such an overconfident “idiot”), but it’s not that simple. The original studies have known methodological problems; much of the classic pattern can be produced by measurement error and regression to the mean, and current data suggest the effect is smaller than commonly claimed. Still, the narrative spreads on social networks because it’s appealing, easy to tell, and easy to share. Reality is more complex, and our minds prefer stories in which we come out looking smarter than everyone else.

What an Organization Can Do If It Doesn’t Want to Settle for Good-Sounding Claims

The first step is to return to the question. Instead of searching for a study that proves us right, clarify what exactly we want to decide and what finding would make us change our mind.
The second step is to distinguish patterns from causes and get used to counterfactual thinking: What would have to be different for our interpretation not to hold?
The third step is to work deliberately with prompts for LLMs. It pays to provide several variants, including one that actively seeks counterarguments, and always to request limitations, assumptions, and an estimate of effect size (a minimal sketch follows this list).
The fourth step is triangulation. A model is only one perspective. Supplement it with internal data and conversations with people affected by the decision.
And the fifth step is piloting. A small, measurable test before implementing a large-scale change saves money and reputation and helps build a culture in which admitting mistakes and learning from them is normal; the second sketch below shows what a minimal, measurable comparison can look like.
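
A minimal sketch of the third step, assuming the question, framings, and checklist wording below, none of which come from the original text. The idea is to send the same question in deliberately conflicting framings and to attach identical demands for limitations, assumptions, and effect size to each:

```python
# Sketch: several framings of one question, each with the same demands.
QUESTION = "Does a four-day work week improve team productivity?"

FRAMINGS = [
    f"{QUESTION} Make the strongest case FOR.",
    f"{QUESTION} Make the strongest case AGAINST.",
    f"{QUESTION} Give a balanced assessment, including the best counterarguments.",
]

CHECKLIST = (
    "For every claim, state (1) its main limitations, (2) the assumptions "
    "behind it, and (3) a rough estimate of the effect size."
)

# Each prompt would be sent to the model separately and the answers compared.
prompts = [f"{framing}\n\n{CHECKLIST}" for framing in FRAMINGS]

for prompt in prompts:
    print(prompt, "\n---")
```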
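
And a hypothetical sketch of the fifth step: a pilot is only as good as its measurement. The numbers are invented, and Welch’s t-test is just one simple way, under the usual statistical assumptions, to check whether a pilot group differs from a comparison group before scaling a change:

```python
from scipy import stats

# Invented outcome metric (e.g., tickets resolved per week) for a small pilot.
pilot   = [24, 27, 22, 30, 26, 28, 25, 29]  # teams using the new process
control = [23, 21, 25, 22, 24, 20, 23, 22]  # teams working as before

# Welch's t-test: does the pilot differ beyond what noise would explain?
t_stat, p_value = stats.ttest_ind(pilot, control, equal_var=False)

print(f"pilot mean = {sum(pilot) / len(pilot):.1f}, "
      f"control mean = {sum(control) / len(control):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value here is a starting point, not a verdict; the size of the effect and the cost of being wrong still matter more.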

Science and artificial intelligence are not magic words that turn opinions into truth. They are tools that can be extremely helpful, provided we use them critically and with humility. In an age when information spreads faster than ever, the competitive advantage will lie in depth of understanding: the ability to ask good questions, to work with limitations, and to verify conclusions on a small scale before applying them broadly. Good-sounding confident claims may be good marketing, but good work and good decisions often require time.


Author: Nikola Frollová

Nikola Frollová is a behavioral scientist and psychologist at the University of Economics in Prague. She focuses on dishonest behavior, norms, and overconfidence. She has experience in both academia and startups, where she helps translate psychological insights into practice and apply them in technology, innovation, and artificial intelligence.
