
Op-Ed: Economics isn’t a bogus science — we just don’t use it correctly

The American flag flies above the Wall Street entrance to the New York Stock Exchange on Nov. 13, 2015.
(Richard Drew / Associated Press)

The financial crises of the past two decades, and our failure to predict them, have wreaked havoc on more than just the global economy. The bursting of the dot-com bubble in 2000, the Enron scandal and the global financial crisis of 2008 have led to a loss of faith in economics itself.

But these crises and scandals do not mean that the science of economics is inherently unreliable. Most of them occurred because we ignored what we knew.

Perhaps most obviously, we deputized — and continue to deputize — the wrong people as authorities. For instance, many assume that the real experts on the subject of money are those who have a lot of it. But the opinions of wealthy tycoons are often dissociated from scientific evidence, out of touch with reality and all too plainly wrong. Amassing wealth as an individual is not the same thing as building and sustaining broad economic growth across nations. Often, making a private fortune is a matter of luck. “Fortuna” is the Latin word for luck, after all.


What’s more, our culture elevates predictions about the stock market and the course of the economy at large, when any good economist will tell you that the most advanced models are not much better than gut feeling. Credit rating agencies and highly paid gurus are largely selling products of little or no value to the gullible.


Similarly, politicians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.

This isn’t a distinctively American problem. The Eurogroup, the body of finance ministers from nations that use the euro, does not include any top scientists. The group’s chair, Jeroen Dijsselbloem, at one point claimed that he held a master’s degree in economics from University College Cork, but he had to remove that statement from his official website because it wasn’t true.

The former finance minister of Greece, Yanis Varoufakis, built his political career on the notion that he is a top-caliber professor of economics. But a search of scientific databases shows that Varoufakis has published only one paper in the 40 most respected journals of economics.

The U.S. government generally has more access to economic knowledge — the Federal Reserve employs several hundred people who hold graduate degrees in economics — and this has helped to curtail the ignorance of some of its politicians. Still, knowledge has often gone unheeded. The repeated bursting of bubbles in the real estate market might have been partly averted had standard principles of economics been applied, had someone checked the math more carefully, and had we learned more from past errors.


All that said, while it’s true that economic knowledge is often overlooked, it is also true that there are problems in the field that affect its overall reliability.

In economics, scientific evidence can come in many different forms — theoretical models and simulations, observational and survey data, and experimental studies, for example. Not all forms of evidence have equal validity.

Most empirical data do not come from experiments but from non-experimental sources such as surveys and routinely collected information. My research center, working with Chris Doucouliagos and Tom Stanley, examined 6,700 empirical studies encompassing 159 topics. We found that there is probably substantial bias in much of this literature. For example, the value of a statistical life, which measures how much people are willing to pay to reduce their risk of death, appears to have been exaggerated by a factor of eight. On average, the strength of the results may have been exaggerated by a factor of two; in a third of the studies, by a factor of four.

Most published studies use limited data. By a conservative estimate, the average study has 18% power to detect a modest association if one exists. Because of this low statistical power, researchers could easily miss a genuine association. Or they could report a spurious one, led astray by even small amounts of bias.
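To make the arithmetic concrete, here is a minimal simulation sketch of what 18% power implies. It is not drawn from the study described above; the effect size and sample size are illustrative assumptions, chosen so that power works out near 18%. It shows the so-called winner's curse: when only the studies that happen to reach statistical significance get attention, the effects they report overshoot the truth, in this setup by roughly a factor of two.

```python
# Illustrative simulation only: every number below is an assumption for the
# example, not data from the meta-research described in the article.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2     # assumed modest true association (in standard-deviation units)
n_per_group = 55      # assumed sample size; with this effect, power comes out near 18%
n_studies = 20000     # number of simulated studies

detected = 0
significant_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    diff = treated.mean() - control.mean()
    # Standard error of the difference between two independent group means
    se = np.sqrt(control.var(ddof=1) / n_per_group + treated.var(ddof=1) / n_per_group)
    if abs(diff / se) > 1.96:  # "statistically significant" at roughly the 5% level
        detected += 1
        significant_effects.append(diff)

print(f"Simulated power: {detected / n_studies:.0%}")
print(f"Average significant effect: {np.mean(significant_effects) / true_effect:.1f}x the true effect")
```

Published literature often behaves like that second printout: the low-powered studies that miss a real effect rarely get reported, while the ones that find an inflated version of it do.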

In several areas, researchers are gathering more data than they can possibly analyze, and this is itself becoming a problem. Much of this information is prone to error, with major biases that are difficult to correct. Results are often cherry-picked and exaggerated. Many companies are investing in big data without scrutinizing the quality of the information.


Thankfully, economists are increasingly turning to experimental methods, which have the best reproducibility record. According to one evaluation, two-thirds of experimental studies were fully reproducible when other researchers tried to repeat them.

Several economics journals, moreover, are now employing standards that are likely to enhance transparency and reproducibility. These journals require researchers to share all of their protocols, raw data, software and code.

Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less. Until then, the pseudo-experts can claim anything.

John P. A. Ioannidis is a professor of medicine, health research and policy, biomedical data science, and statistics at Stanford University and a director of its Meta-Research Innovation Center.

