Concepts such as subjective well-being, satisfaction, happiness, trust, measures of quality, and even standardized test scores are all measured using ordinal variables. This means that we know the rank of the response categories (e.g., a respondent reporting being “very satisfied” indicates they are more satisfied than if they had reported being “satisfied”), but we do not know the interval between response categories (e.g., we don’t know how much more satisfied “very satisfied” is compared to “satisfied”). This contrasts with cardinal variables, such as earnings, where we know $10 is more than $5 *and* represents twice as much money.

This reality has implications for empirical analysis. How do we measure the central tendency of an ordinal variable? Oftentimes researchers quantify these variables by assigning numerical values to the categories. For example, “very satisfied” = 3, “satisfied” = 2, “dissatisfied” = 1, and “very dissatisfied” = 0. This is an assumption, and it may not be valid, since literally any other set of numerical values that preserves the order of these categories is equally defensible theoretically.
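To make this concrete, here is a minimal sketch (with made-up data, not drawn from any of the studies discussed here) showing how two numeric codings that both preserve the category order can reverse a comparison of group means:

```python
# Two groups of respondents on a 4-point ordinal scale (categories 0..3).
# The data are hypothetical, chosen only to illustrate the point.
group_a = [0, 3, 3]   # polarized responses
group_b = [2, 2, 2]   # moderate responses

def mean_under(coding, responses):
    """Mean after mapping each ordinal category to a numeric value."""
    return sum(coding[r] for r in responses) / len(responses)

coding_1 = [0, 1, 2, 10]   # strictly increasing: stretches the top of the scale
coding_2 = [0, 8, 9, 10]   # strictly increasing: stretches the bottom of the scale

for coding in (coding_1, coding_2):
    a, b = mean_under(coding, group_a), mean_under(coding, group_b)
    print(coding, "A:", round(a, 2), "B:", round(b, 2), "A > B?", a > b)
```

Because both codings are strictly increasing, neither is more defensible than the other on ordinal grounds alone, yet they disagree about which group is “more satisfied” on average.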

Last year I wrote a blog post on this issue in which I discussed a recent paper published in the *European Economic Review* by Carsten Schroder and Shlomo Yitzhaki entitled “Revisiting the Evidence for Cardinal Treatment of Ordinal Variables” (2017). The basic idea of their paper is that comparisons of means and OLS regression analyses of outcomes measured with ordinal variables must be robust to monotonic increasing transformations of the ordinal scale. Said differently, for a simple comparison of means or OLS to be theoretically valid, we must be able to change the numerical values to anything that preserves the rank ordering and the empirical results must remain unchanged. Noting that there are infinitely many monotonic increasing transformations, Schroder and Yitzhaki (2017) derive theoretical sufficient conditions for the valid cardinal treatment of ordinal variables.

In a new working paper, entitled “How Much Does the Cardinal Treatment of Ordinal Variables Matter? An Empirical Investigation,” I build on the theoretical insights of Schroder and Yitzhaki (2017) to provide a practical methodology for empirical researchers faced with the task of analyzing an ordinal variable. The questions I address are as follows:

- The methods derived by Schroder and Yitzhaki (2017) shed light on whether there exists a monotonic increasing transformation that can change the sign of OLS regression coefficients. Since existence does not necessarily imply such transformations are common or rare, I examine the likelihood that reasonable transformations change the sign of OLS coefficient estimates.
- Next, suppose there does not exist a monotonic increasing transformation that can change the sign of OLS coefficient estimates. Is the cardinal treatment of ordinal variables then valid? I argue no, since the magnitude of coefficient estimates, and therefore the economic and policy significance of empirical findings, can also meaningfully change. This study develops a method for understanding the robustness of coefficient estimates to reasonable monotonic increasing transformations of an ordinal dependent variable.
- Finally, similar logic raises the question of how monotonic increasing transformations affect the statistical significance of empirical findings. This paper’s method also examines the robustness of statistical inference to reasonable monotonic increasing transformations of an ordinal dependent variable.

To address these questions I examine three existing empirical studies that each use cardinal statistical methods (e.g. OLS regression) with an ordinal dependent variable. The first illustration examines Aghion, Akcigit, Deaton, and Roulet (AER, 2016) on the effect of “creative destruction” on subjective well-being. The second illustration looks at the work of Nunn and Wantchekon (AER, 2011) on the effect of the slave trade on trust in sub-Saharan Africa. Finally, the third illustration revisits the “fragile” results on the black-white test score gap in kindergarten through third grade from Bond and Lang (RESTAT, 2013). For each of these illustrations, I perform a Monte Carlo simulation that randomly assigns a monotonic increasing transformation to the dependent variable. This allows for an understanding of the robustness of empirical findings to such transformations.
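As a rough illustration of the idea (this is a sketch, not the paper’s exact procedure), one can draw random strictly increasing recodings of a hypothetical ordinal outcome and re-estimate an OLS slope under each. The data generating process and all names below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: x is a continuous regressor, y_cat an ordinal outcome (0..3)
n = 500
x = rng.normal(size=n)
latent = 0.3 * x + rng.normal(size=n)
y_cat = np.digitize(latent, [-0.8, 0.0, 0.8])  # bin the latent index into 4 ordered categories

def ols_slope(y, x):
    """Slope coefficient from a bivariate OLS regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Monte Carlo: random strictly increasing recodings of the 4 categories
slopes = []
for _ in range(1000):
    gaps = rng.exponential(size=4)            # positive increments...
    coding = np.cumsum(gaps)                  # ...so the coding is strictly increasing
    coding = (coding - coding.mean()) / coding.std()  # normalize the scale
    slopes.append(ols_slope(coding[y_cat], x))

slopes = np.array(slopes)
print("share of positive slopes:", (slopes > 0).mean())
print("slope range:", slopes.min().round(3), "to", slopes.max().round(3))
```

The distribution of `slopes` then summarizes how sensitive the sign and magnitude of the estimate are to the choice of ordinal coding.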

The key findings of this working paper are as follows: Although the relationships under investigation in Aghion et al. (2016) fail the theoretical sufficient conditions of Schroder and Yitzhaki (2017), I find it may be reasonable to conclude that no such transformation changes the sign of the core empirical results. However, the economic significance of the findings meaningfully changes. Specifically, a one standard deviation increase in the MSA-level job turnover rate has an effect on SWB that is equivalent to between a statistically insignificant -0.02 and a statistically significant 0.6 standard deviation change in the MSA-level unemployment rate. This ranges from a null effect to an effect twice the size reported by Aghion et al. (2016).

Alternatively, the empirical findings of Nunn and Wantchekon (2011), that the slave trade has caused mistrust in modern-day Africa, are largely robust to all reasonable monotonic increasing transformations. Not only does the sign remain consistent for all such transformations, but the coefficient estimates do not change dramatically and statistical significance persists for most specifications.

Finally, the illustration of the results from Bond and Lang (2013) provides a test of the validity of the methodology applied in this paper. Since Bond and Lang (2013) already establish the “fragility” of results to reasonable transformations of the test score, similar findings lend credence to the present methodology. Indeed, I find that the growth in the black-white test score gap between kindergarten and third grade could be between a statistically insignificant 0.01 and a statistically significant 0.72 standard deviation change.

This study develops a methodology for choosing empirical methodology (aka “a methodology methodology”) in the presence of an ordinal dependent variable. The fundamental insight is that in such situations the choice of an empirical methodology requires careful thought and valid reasoning. Although some empirical findings may be robust to monotonic increasing transformations, many will not be so fortunate.

I’d love comments and feedback on this work. Cheers!

*Updated on October 16, 2018 with a revised draft of the working paper.*

Interesting paper Jeff! Using Monte Carlo simulations to explore the robustness of empirical assumptions is a cool technique. After reading Ioannidis, Stanley, and Doucouliagos (2017) on low statistical power in empirical economics, I developed a similar technique to map out power curves for econometric hypothesis tests over a range of sample sizes and error distributions. Within languages like R, it’s quite straightforward to set up a simulated data generating process and see how getting your assumptions wrong can affect your findings.
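For readers curious what mapping out a power curve looks like, here is a small sketch (in Python rather than R) of simulated power for a generic two-sample test; the effect size, sample sizes, and error distribution are all arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_at(n, effect=0.2, sims=500):
    """Share of simulations in which a two-sample t-test detects `effect` at sample size n."""
    rejections = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        # Welch t statistic computed by hand to keep the sketch dependency-free
        t = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        rejections += abs(t) > 1.96  # approximate 5% two-sided critical value
    return rejections / sims

# Tracing power over sample sizes maps out the power curve
for n in (50, 200, 800):
    print(n, power_at(n))
```

Swapping the normal errors for heavier-tailed or skewed distributions shows how misspecified assumptions shift the curve.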

Would be interested in following up with ordinal explanatory variables as well. Do reasonable transformations of ordinal variables of interest matter? How about controls? How about when ordinal variables are fed into data reduction routines such as PCA or other factor analysis techniques?

Ioannidis, John P. A., T. D. Stanley, and Hristos Doucouliagos. “The Power of Bias in Economics Research.” *The Economic Journal* 127 (2017): 236-265.

Thanks Braeden! Yeah, the method presented in this paper is quite straightforward. There are a lot of extensions to this, I think. One is what you note, basically doing the same sort of analysis with ordinal explanatory variables. I focus on dependent variables because in most cases that I’m aware of, ordinal explanatory variables are broken up into dummies for each category. Thanks for reading and providing feedback!
