A survey of 159 meta-analyses in economics revealed that empirical economic research is often greatly underpowered. Regardless of how the ‘true’ effect is estimated, typical statistical power is no more than 18%. Nearly half of the areas surveyed have 90% or more of their reported results stemming from underpowered studies. Take, for example, the crash of 2008. How could it be that economists all over the world failed to predict such a destructive event despite all the data lying before them? Many point the blame toward the pre-designed models used to understand the economy. If economics is to be considered a science rather than a somewhat political narrative, it has to be derived from scientific principles. Yet the weakness of empirical models and parameterisation in economics highlights a significant flaw in economic forecasting.
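To make the notion of statistical power concrete, here is a minimal sketch using only a normal approximation; the effect size and standard error are hypothetical, chosen purely to illustrate how a small true effect measured imprecisely yields power near the 18% figure cited above, far below the conventional 80% benchmark.

```python
from statistics import NormalDist

def power_two_sided(effect, se, alpha=0.05):
    """Approximate power of a two-sided z-test: the probability that an
    estimate of a true `effect`, measured with standard error `se`,
    comes out statistically significant at level `alpha`."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    # Chance the estimate lands beyond either critical value
    return nd.cdf(-z_crit - effect / se) + 1 - nd.cdf(z_crit - effect / se)

# Hypothetical numbers: a small true effect, measured imprecisely
print(round(power_two_sided(effect=0.2, se=0.18), 2))  # prints 0.2
```

With these illustrative inputs the study has roughly a one-in-five chance of detecting an effect that is genuinely there, which is exactly the situation the survey describes.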
A criticism often arises: is economics merely a mechanism for making ineffectual policies more interpretable? Take, for example, the ties between American politics and the Laffer curve. Art Laffer theorised a relationship between tax rates and revenues. At one end of the curve, when taxes are 0%, the government receives no revenue. Similarly, when taxes are at the maximum level, there is no revenue, as labour would not function without pay. Hence, the Laffer curve is accurate in extreme circumstances. However, the curve also indicated a turning point, suggesting that there was an optimal level of revenue, beyond which any increment in tax would directly reduce employment and consumer investment. Yet this was misrepresentative. It wrongly suggested that cutting tax rates on the rich, as a method of fiscal stimulus, would enhance growth. By way of the curve, the Reagan administration was academically justified in cutting the top tax rate in the 1980s from 70% to 28%, in the hope of reducing budget deficits. In fact, under Reagan the deficit doubled to $155 billion and government debt tripled to more than $2 trillion. What does this show? As with the 2008 financial crash, the fault again lies with the failure of economic modelling to accurately forecast how the economy will operate under planned conditions.
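The shape argument can be made concrete with a toy model (my own illustration, not Laffer's formulation): revenue is the tax rate times a taxable base that shrinks as rates rise. The key point is that where the peak sits depends entirely on the assumed behavioural response, which the curve itself does not supply, so the curve alone cannot justify any particular cut.

```python
def laffer_revenue(rate, base=100.0, elasticity=1.0):
    """Toy Laffer curve: revenue = rate x remaining taxable base.
    The base shrinks as rates rise; `elasticity` governs how fast
    (a hypothetical behavioural response, not an empirical one)."""
    return rate * base * (1.0 - rate) ** elasticity

# Zero revenue at both extremes, as the curve predicts
assert laffer_revenue(0.0) == 0.0 and laffer_revenue(1.0) == 0.0

def peak_rate(elasticity):
    """Tax rate (searched over a 1% grid) that maximises revenue."""
    return max(range(101),
               key=lambda i: laffer_revenue(i / 100, elasticity=elasticity)) / 100

print(peak_rate(1.0))  # prints 0.5: the 'optimal' rate under one assumption
print(peak_rate(3.0))  # prints 0.25: it moves wholesale under another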
This relates closely to the famed criticism of the over-integration of mathematics within economics. Paul Romer, in his paper ‘Mathiness in the Theory of Economic Growth’, criticised the misuse of mathematical reasoning in economic analysis, arguing that ‘mathiness’ created ample room for slippage between statements of theoretical and empirical content. In essence, the use of mathematical symbols has often merely perpetuated the misconception that economics is a knowledge-based science.
Does ideological bias play a large part in the failures of models? Scholars hold different views on whether economics can be a ‘science’ in the strict sense, free from ideological biases. However, there is perhaps a consensus that the kind of bias that leads one to endorse or denounce an argument on the basis of its author’s views rather than its substance is unhealthy and in conflict with the subject’s scientific aspirations, especially when knowledge of the rejected views is limited. The study conducted by Mohsen Javdani and Ha-Joon Chang found clear evidence that changing or removing source attributions significantly affects economists’ level of agreement with statements. Mainstream economics, as the dominant and most influential institution in the discipline, propagates and shapes ideological views among economists through different channels. As far as this goes, the recurring theme of bias within economics indicates that models, the Laffer curve for example, may well be misconceived by means of one’s political standpoint.
On the other hand, it is necessary to consider the case for the validity of economic modelling. To an extent, it is possible to argue that ideology will always have an unofficial influence over the outcomes of empirical models. Take, for example, the debate surrounding the state of economic globalisation. Despite the heaps of evidence and calculation that went into illustrating the growing interdependence of world markets, one strand of realist thinking will always object to this idea of a co-operative society. Therefore, as in all cases of final judgement, a natural human bias will be present. As Gulker puts it, ‘economics as a field does not lend itself to single watershed results forcing researchers to immediately rethink their prior views. But an approach that lives with bias rather than trying to extinguish it allows researchers to gradually evolve over time’ (Gulker, 2019).
Recently, there have been claims of an ‘econometric credibility revolution’, with a focus on how such models can be scientifically improved and applied universally. As Angrist and Pischke have argued, the advantages of a good research design are perhaps most easily apparent in research using random assignment, which not coincidentally includes some of the most influential micro-econometric studies of recent years. It is difficult to imagine a randomised trial to evaluate the effect of immigrants on the economy of the host country. However, human institutions and the forces of nature can step into the breach with informative natural quasi-experiments. As they concluded, ‘empirical work in this spirit has produced a credibility revolution in the fields of labour, public finance, and development economics over the past 20 years’ (Angrist and Pischke, 2010).
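As an entirely hypothetical illustration of the kind of quasi-experimental design Angrist and Pischke have in mind (this is a generic sketch, not their own example), a difference-in-differences estimate compares the change in an outcome for a group exposed to some policy against the change for an unexposed group, under the assumption that both would otherwise have trended in parallel.

```python
def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences estimate: the treated group's change
    minus the control group's change over the same period
    (valid only under the parallel-trends assumption)."""
    return ((mean(treated_after) - mean(treated_before))
            - (mean(control_after) - mean(control_before)))

# Hypothetical outcome data around a policy change
effect = diff_in_diff(
    treated_before=[10, 11, 12], treated_after=[13, 14, 15],
    control_before=[10, 11, 12], control_after=[11, 12, 13],
)
print(effect)  # prints 2.0: the treated group rose by 3, the control by only 1
```

The control group's change stands in for what would have happened to the treated group anyway, which is precisely how institutions and natural accidents ‘step into the breach’ when randomisation is impossible.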
As is evident, debate over the validity of economic models will persist. In my opinion, the progression toward a more free-market status in several developing countries may in fact aid the creation of stronger econometric modelling. Even so, it is necessary for the statistical power of econometrics to improve, thereby limiting any sway of opinion through bias or unofficial influence. I myself began to question the effectiveness of economics in reflecting our interactions in society. Yet models like the Laffer curve, despite being widely disregarded, continue to be referred to in policymaking, and, what is more, taught to the next generation of economists. Perhaps, for empirical economics to take its next step in credibility, it has to abandon those recognised flaws and govern itself on principles that are universally accepted. It seems this new trajectory of econometrics has already begun.