Tuesday, November 9, 2010

Globalization of the Old and the Very Old

A recent article by Ted C. Fishman titled “The Old World” (the New York Times Magazine, October 17, 2010, pp.50-53), paints a very disturbing picture for the developing economies. Mind you that the current picture is as bad as “probably” can get, nonetheless the author gives the impression that it is going to be even worse. The author puts the following thesis: “The world is currently divided between those under 28 and those over 28”.Twenty eight seems to be the magic number. If you are one of those, I say REJOICE, if not read on. But, before getting into the nitty-gritty of the old age globalization thesis, let me begin by defining the globalization term. This may turn out to be as useful to understanding the current economic problems as it is for understanding the problems of an aging world population. So what globalization means and how did the term came about?
Robert Cox (1994) defines the global economy as a “system generated by globalizing production and global finance”. Thus, the definition conveys two things: First that the global economy is not a universe without borders. Rather, it is a socio-cultural, economic structure brought about by some event or events. Secondly, globalization is a process encompassing modes of production, trade and finance. Reinicke (1998), advanced the notion that globalization is a process brought about through transborder by firms undertaken to organize their development, production, sourcing and financing activities. The globalization process thus described links globalization of the world economy to the organizational structure of the firm. In short, it is an economic phenomenon undertaken to enhance competition in the belief that the market ideology and discipline enhance “world” welfare.
Having defined the concept as it has originated why the term did become imbedded in our lingo and when it is talked about it seems to conjure all sorts of influences most of them are not salutary. Two issues are commonly talked about. The first is outsourcing – exporting jobs and the resulting dislocation of workers giving rise to unemployment, wage differentials and hence income redistribution. These effects are magnified in stagnant economies and during severe and prolonged recessions. The second is the technical revolution and the rapid transmission of technology. A technology that makes low skills redundant have not only far reaching impact on low skills workers in both advanced and developing economies but also the changes the dynamics of production (where firms would locate) with far reaching implications for the world wage structures and balance of payments.
The US and many of its partners in the developed world is the engine that spurts technical advance and innovation. Put to practice, these advances are labor saving technology replacing workers by “workerless” production techniques and thus making labor redundant. Reallocation of labor across industries or occupations is not easy or rapid. When labor services become redundant, it has a devastating effect on workers, not only on those who lost their jobs but also on those in occupations or industries close to the ones that have experienced updated and/or new technical advances.
It is worth emphasizing that the middle age group—those between the ages of 28 (the magic number) and 40 will likely bear the brunt of redundancy and reallocation. The reason is that skills are acquired well before the age of 28. Augmenting skills to meet advances in technology is needed if the economy were to accommodate displaced workers. This task may be accomplished during periods of economic growth but may not be feasible when the local or the global economy is in recession. In short displaced workers with low skills have a long “row to hoe”.
How about the old-- those over the age of 40? This group traditionally has had the lowest unemployment rate. However, with outsourcing and a “executive watch over the bottom line”, their employment picture is far from clear. This group clearly have a mix of skills acquired by “being on the job” as well as through formal education. Hence, this group will experience job security as well as redundancy.
Let me now turn to the old and very old.
It need not be emphasized that “we, the people,” have brought the “aged-nonaged” problem on ourselves. How many articles and speeches were given about the need to restrict the size of the family to insure a better standard of living? The development of birth controls and their widespread use in the developed world was the medical response to the “Malthusian” prediction that population growth if left unchecked spell hunger, wars and disease. In his First Essay on Population (1979) reprinted 1926, Malthus wrote: “The power of population is indefinitely greater that the power in the earth to produce subsistence for man”. Another factor which took a life of its own is advances in medical technology that eliminated many of the ills that made life if not unbearable as one ages but also inspired the old to seek forcefully these cures to insure longer and healthy life. Initially, the benefit by far exceeded the cost. Children and their parents in the developed world enjoyed higher standard of living, higher educational attainment and better health. Unfortunately, the dynamics of these events meant that a smaller population has down side effects: a shrinking pool of the young and an expanding pool of the old. It does not take a “mathematician” to figure out the odds against the young. They have to bear the cost of “raising” their parents as their parents once did. That is o.k. The problem is that the scale has become unbalanced, not enough young to return the favor bestowed on them by their elders.
The cost of living longer is unattainable in a stationary population especially in periods of unemployment and stagnant economy. Short of undertaken “measures” at either end of the age structure, societies need to think of ways to address intergenerational transfers. It is worthwhile to remember that today’s old were yesterday’s young. They raised today’s young and educated them, they funded the public sector budgets, they made advances in health technology as well as other technical innovation possible, and they went to war giving up their life to secure the life and liberty of today’s young. Every generation bears the cost for the succeeding generation and under a social contract it follows that the younger generations transfers some of its wealth to the older generations. The problem is, when resources are diminished the social contract looses its imperative. Intergenerational transfers are reexamined.
Guess who will make such reexamination? With few exceptions they are likely to be made by the “younger generations”. They will have to legislate higher taxes, cut spending or both. The inescapable fact is that they have to face the question of intergenerational transfers.
Let me now return to the globalization of the old and the very old which is the title of this blog. A look at the population data is quite useful (Table 1).
A few examples suffice. Three types of data are shown: The age distribution of the population; the birth rate and the US unemployment rates by age group. The dat given covers 10 year periods. The story that emerges is what Fishman’s article sought to convey. The age structure of population in the developed world makes it clear that a shrinking pool of the young faces a growing pool of the old and the very old. Birth rate data show one of the fundamental reasons why the young pool has been shrinking over time. Without changes at either end of the age spectrum (not likely in the short run), the young need to garner higher intergenerational transfers. Short of

that, the very old will see the social contract crumbles and they have to find ways to cope with the loss of support.
A look at the US unemployment rate by age groups, population in the two age brackets (34-44 and 45-54) have typically enjoyed the lowest rates of unemployment. If this scenario were to change a further stress on society resources will be manifested. That is not only transfers of resources will have to take place from the young to the old but also from the working population to the unemployed population.
Putting the unemployment issue aside (for another blog), the question that was raised in the Fishman’s article is how to deal with the old and especially the very old. One solution which has been practiced not only in the US but elsewhere is to alter the social contract—raising the retirement age, taxing pension benefits and health benefits and so on. The problem is these remedies are ad hoc solutions that are not likely to “remove” the problem of aging from the social agenda.
Good minds and not such good minds have advocated many solutions ranging from “privatizing” the social insurance system including pension and medical care to keeping the old and the very old in the work force. But as Fishman puts it “one conundrum for aging societies is how to keep older people employed”. Obviously, if that was possible, then tax revenues rise, benefit payments fall or pushed further into the future. This scenario has been achieved to some extent in the US but the problem of an aging population did not go away. As outlined above, the problem gets exacerbated when economic conditions worsen and when technical advances favor the very young. Mr. Fishman is very pessimistic about the future outlook of aging societies. According to Fishman “…as the world gets older, we need to anticipate how this extraordinary change might undermine our commitments, weaken nations and push able people to the side…it now looks as if global power rests on how willing a country is to neglect its older citizens”.
Well… I wonder how old is Mr. Fishman? Would he retain this thought when he gets to join the “army of the old and the very old”?
What kind of solutions are likely to emerge if the age structure continue as it has been in the past? If the future is a repetition of the past, societies will have to revisit those solutions advocated by many. Mind you, these solutions are not new—they are worn out so to speak, but there they are:
• Negate or abolish the “implicit” social contract—you pay, do not pass go and do not collect your “owed” pension.
• Curtail the delivery of medical care through rationing—if you are over 50 you are on your own—sink or swim (The British National Health Service ration medical care to the old).
My prescription is:
• “Do not send the young to do the business of the old. If the over 40 politicians need to change the world through war-like engagement, they should send the age group over 40, over 50 and over 60’s (that should end war quickly).
• Lend credence to the world globalization by allowing free movement of people. Migration is one way to populate a country by the “young”. Of course that means not only migrants have to be young but also possess needed skills. This option not only will chip away at the “outsourcing” problem but also will neutralize the effects of aging throughout the global economy.
Alas, these solutions, at least for the moment, are far fetched solutions. Nonetheless, they should give us all a pose, perhaps food for thought.

Thursday, July 29, 2010

Is it Irrational Behavior or Risk Aversion? (Round Two)

On July 15, 2010, I posted a blog with the same title except that it was round one. Today, I write a follow up hence the label round two. Why the designation? In the first round, I have contrasted the outcome of choice involving two medical options (pp.2, 3). The options involve a risky choice. One of the options had a certain outcome—the individual knows with certainty the outcome of the choice whereas the other was labeled “risky” choice as it involves uncertainty. I discussed how the individual calculates the outcomes of the two options by assigning what economists call utiles, a scale of measurement for the utility of the choice. My example was intended to show that the risk averter is likely to choose the certain outcome over the risky choice. I have also shown, that to make the uncertain option equally valued would require (given my numerical assignment) a reduction of the risk valuation ascribed to the risky option. In this round I would like to put forth another scenario. I would like to suggest another way of making option Y the winning option. For this to take place option X is rendered an “uncertain” option. Given that my examples are medical intervention options, the agent who can alter this and hence the choice is the “physician”.
Now assume that the physician who monitors the patient were to inform the patient that the certainty attached to the outcome of option X is no longer valid. That is, the probability of success now is equals to zero. Moreover, assume that the physician were to point out that there exists the probability that a side effect that did not exist earlier will materialize if option X is chosen. This means that not only the intervention with 100 percent probability has become ineffective, but also it carries with it a “negative” outcome. The comparison between the two options will hinge on the value a risk averter will assign to this negative effect. Note that in comparison to option Y there is no benefit attached to continuing the treatment. This scenario then pushes the individual to choose option Y without a change in its probabilities or the negative valuation attached to the side effect, a component of the option.
In short, the purpose of the exercise was to show how difficult it is for the individual to exercise choice when faced with uncertainty, not only because information may be incomplete but also because of his/her dependence on the market (third party) to evaluate the risky options. When an option is chosen in situations involving risk or uncertainty it is not an easy task to label the choice as rational or irrational. The saving grace is that in some situations a choice can be amended, in others the loss arising from the “wrong” choice cannot be recouped. That brings me to the literature that brought the risky choice models to the theory of consumer choice.
I have mentioned in the previous blog the seminal article by Milton Friedman and L.J. Savage (1948). A year earlier a most influential contribution by Von Neumann, J and O. Morgenstern’s Theory of Games and Economic Behavior (1947) offered utility functions that permit the complete ranking of options in situations involving uncertainty; the comparison of utility differences and the calculation of expected utilities thus making it possible to analyze choice in situation involving uncertainty (see chapter 1 of their book). Since then we have gained insight into this issue through contributions by several economists about consumer choice in situations characterized by risk and uncertainty. It is worth noting that risk has an objective probability while uncertainty involves assigning “subjective” probability. Hence the importance of ascertaining the individual type: whether he/ she is a risk averter, a risk neutral or a risk lover as this is critical to understanding choice. It is of note that an individual may be a risk averter in one situation and a risk lover in another. Now, I turn to the contribution of Behavioral Economists to the study of consumer choice.

Behavioral economists reject the economist models’ assumptions of rationality and maximization of utility. Rationality is defined as the “cognitive abilities” for solving economic problems .Behavioral economists dispute full rationality on the basis of research findings by psychologists and some economists “people exhibit preference reversals; have problems with self control and make different choices depending on how the issue is framed.” For a number of reasons, people make errors and behave in a manner contrary to their self interest. Given this premise, placing constraints on the exercise of free choice may be called for. That is, “paternalistic” intervention by the state or the community may be called for.
There are many variants of state paternalism: Paternalism (Mead, editor 1997), Patronizing Paternalism (Burrows, 1993), Libertarian Paternalism (Sustains and Thaler 2003), Permissible Paternalism (Goodin 1991), Benign Paternalism (Choi et al.2003), and Asymmetric Paternalism (Cramer et al. 2003). The different labels notwithstanding, the underlying premise of paternalism is simple: intervention by the state or the community will generate significant welfare gains. Sources that give rise to “bad” individual choice are: bounded rationality, slow learning, framing and lack of self control.

A question that needs to be posed: when people choices are ‘bad’, should the state and /or the community (a) Override their choices? (b) Steer them towards ‘welfare’ improving choices? (c) Encourage “good” choices without being coercive? or (d) Do nothing?
I have explored this issue in a conference presentation: “State Paternalism and the Rules of Reason” at the International Atlantic Economic Society meeting, which took place in Savanna, GA, on October 2007. In the paper presented, I have summarized the arguments put forth in a number of papers pointing out differences in the policies advocated to deal with “irrational” behavior. In a nut shell, paternalistic policies are advocated to help those individuals whose rationality is bounded (i.e. less than perfect) from costly errors. In the medical intervention example, if the individual was to reject option Y then he/she will bear a costly outcome.
It needs to be emphasized that not all policies advocated by behavioral economists call for coercion. Libertarian paternalism for example allows for differences between individuals and ‘covert’ coercions are not contemplated. At this juncture, it is befitting to call upon one of the architect of liberalism in economic thinking, John Stuart Mill. In his essay on liberty (1859), Mill wrote: “the only purpose, for which power can rightfully be exercised over any member of a civilized community against his will, is to prevent harm to others” (1984 edition, p.92). Following the writing of Nobel Laureate James Buchanan in his Logic of Limits (Buchanan and Musgrave 2000, p.111), one may ask: Why should constraints be placed on the individual when his actions do not infringe on others?
An understanding of the logic of limits may be gained by examining individuals choice in situations where they voluntarily impose restrictions on own actions. Behavioral economists advocating paternalistic intervention do so because they doubt the validity of the logic of limit concept altogether or at least as it applies to those individuals exhibiting bounded rationality. Buchanan’s observation that persons do adopt rules that they intend to abide by is valid for many but not for all. Not all smokers purchase stops smoking aid; alcoholics join Alcoholics Anonymous (the examples given by Buchanan, p.112).
Ruling out the logic of limits as it applies in the example of medical intervention given above in favor of paternalistic intervention; it is imperative to recall that “state or community” intervention most often entails coercion for they command the tools to implement said intervention (power to tax, impose fines, outlaws certain actions and so forth). Paternalism exercised by a parent and/ or a care giver does not command the same coercive power. In the medical intervention example cited above, the physician may attempt to move the patient towards his preferred option but he cannot coerce the patient to do so. Unlike the state, or the community, his power over the individual is not absolute; the patient can opt out of his care.
To conclude:
Behavioral economists have made a significant contribution towards our understanding of human behavior. It is undoubtedly true that some “bad choices” at least from the point of view of society are likely to be made, others are not .But placing constraints on free or voluntary choice of the individual should not be taken up lightly. To err on one side or the other demand more empirical proof that we currently do not have.
Each and every one of us can relate to situations where choices made were far from utility maximizing and/ or ‘fully’ rational. Most of us believe that the choices we voluntarily make are optimal in the sense that their expected costs are below those associated with alternative-dictated choices.

Thursday, July 15, 2010

Is it Rational Behavior or is it Risk Aversion?

As I was struggling with a “medical” decision as to whether to continue a “mode” of treatment that has run its course and is no longer viable or alternatively embark on a new treatment with a new technology, I read a most interesting and to the point write up in THE WEEK. It goes without saying that the “old” is familiar; the new is approached with apprehension if not suspicious. Hence, anything that makes the “new” a bit “old” helps. In the June 18th issue of THE WEEK, under the heading of “Author of the Week” , a story unfolds about a medical choice made quite a few years back by Dan Ariely who has written a couple of books using this choice to facilitate the understanding of a relatively new theory of economics.
Dan Ariely is a fellow economist, a “behavioral economist”, who according to THE WEEK has written a 2008 bestselling book: “The Upside of Irrationality”. Behavioral economists are a new breed of economists who are challenging the standard “textbook” notion of human “rationality”, a notion so fundamental to main stream economics. Rationality is a basic assumption economists use to analyze the individual decision, whether the choice is that of a consumption basket, a choice of occupation, and work versus leisure and so on. It is assumed that the individual knows the alternatives and chooses the one that “maximizes” his or her utility. This assumption is essential to our understanding of choice exercised in the market place in a setting which does not involve risk or uncertainty. When the individual is confronted with a choice that involves “RISK”, the choice is not as simple. In this situation we need to sort out individuals in terms of their “sensitivity” to risk taking.
Economists analyze three categories of risk taking exhibited by the individual: Risk averter, Risk lover and Risk neutral. The individual is said to be “risk averse” when he/she places a much higher weight to a choice that “minimizes” taking of risk; a risk lover goes for a risky choice while a risk neutral gives equal weight to risky and non-risky options.
That was more or less all we needed to know to decipher choices of the individual. But then, a group of economists mainly from the “Chicago School” revived a critique levied against the assumption of rationality and self control (See for example Schelling, T., (1978) Economics or the Art of Self Management, Am. Eco. Rev. pp. 290-94). The new group including Sunstein, Thaler, Laibson, O’Donoghue and Rabin to name but a few, earned the label: Behavioral economists for their contributions to the understanding of human decisions. In a nut shell, behavioral economists challenged the notion of rationality and maximization of utility. In effect, they argue that observed behavior is more likely to exhibit “irrational” rather than a rational decision making process. An example that is often cited has to do with the consumption of “sin” goods—cigarettes, alcohol, drugs and the like. Lack of self control is essential to the analysis, as well as the dimension of choice.
Back to the choice made by Dan Ariely, which THE WEEK Magazine uses in alluding to his book on the thesis of behavioral economics. As the magazine tells it (I have yet to read the book), Dan uses his own choice to explain one of the tenants of the behavioral economists’ theory—that people are less than perfectly rational in their choices. The author uses his own choice which he has made several years back to make the point. The choice involved two types of “medical intervention”. According to the write up, at the age of 18, the author suffered burns on 70% of his body. Two medical options were put before him: Amputate his right arm and replace it with a “hook”, or retain the arm after an excruciating surgery and endure severe pain and partial use of the arm for the rest of his life. At the time, at the age of 18, the choice he made was to retain the arm. Was this a rational or irrational decision?
At the time the decision was made, the author, in my view, exhibited what traditional economists label as “risk aversion”. As he put it: I was “incredibly attached to my hand—in multiple ways”. On backward reflection on such decision, Dan Ariely posits that the decision made at the time was “irrational”. When revisiting the decision about his own arm, as it is retold by THE WEEK, he admits that IRRATIONALITY may have led him astray. On revisiting the decision, at least for the book’s benefit Dan Ariely speculated about whether “prosthesis might have been more functional—that keeping my arm was, in a cost-benefit sense, a mistake”.
This reflection on a past decision causes me to revisit in this blog the assumption of rational choice not in terms of one period horizon but intertemporally. In simple terms, a choice with consequences lasting more than one period, for example, a choice involving one period can be depicted by the consumption of an ice cream cone, a cup of tea or a glass of mineral water. An intertemporal choice is a choice with consequences beyond the period when the item is consumed. As an example, cigarette smoking in one period gives satisfaction in that period but carries with it undesirable consequences in subsequent periods.
A great deal has been written about this type of choice. Traditional economists have advanced theories explaining intertemporal choice. The add-on by behavioral economists is that the individual may exhibit what is called “bounded rationality”—that the individual lacks self control when it comes to consumption of sin goods. Accepting this proposition has led some behavioral economists to advocate “paternalism”. Government or some higher authority would override individuals’ preferences for society’s preferences (the ban on smoking in public places, restaurants and bars is an example). For more on this point and references see Bae and Ott “The Public Economics of Self Control”, Journal of Economics and Finance (October 2008. Pp. 356-367).
Let us contemplate a decision at time t, involving two courses of action: An action A and action B. If one knows with “certainty” the outcomes of both, then the standard economist model applies. That is if a choice of A gives pleasure or satisfaction equal to X utiles, and B gives satisfaction equal to Y utiles (discounted if it were to materialize in a future period), then if A is chosen rather than B, then X utiles are greater than Y’s and vice versa. The individual choice maximizes his/her utility. Two problems arise in this scenario: first, a choice with outcome extending beyond the one period (future period(s)), involves uncertainty or unmeasured risk. Secondly, what discount rate to apply to future outcomes?
Back in the late 1940’s, Milton Friedman and T. J. Savage put this issue before us in their seminal article “The Utility Analysis of Choice Involving Risk” in the Journal of Political Economy (August 1948, pp. 279-304). In order to get as close as possible to explaining the individual temporal choice—a choice involving one period when faced with risky choice, they use the categorization of individuals as risk averter, risk lover and risk neutral. In their example the choice involved two options: a “certain sum of money”, and a chance (game) with two outcomes: losing with a high probability a small sum of money and winning with a very small probability a very large sum of money. Depending on the threshold of risk a choice is made among the two options. If the individual is risk averse he is likely to choose the “certain outcome”, if risk lover he will choose the “bet”. Nothing in the second choice is said to exhibit irrational behavior even if the individual were to bet the house and looses it.
Fundamental to the analysis of choice involving risk, is not only the computation of the expected value of the bet so that it can be compared with the “certain” option, but also the expected utility of the uncertain outcome. The expected utility depends on the shape of the utility function of the individual exercising the choice. Such utility is a function of the individual tolerance of risk. Unfortunately, this is a subjective value that can only be assigned by the individual. Which brings me back to Dan Ariety’s choice, and to a choice I am contemplating.
To illustrate:
Using the example of medical intervention, let option X be current treatment mode which has lost most of its effectiveness in the face of the disease progression. Staying with option X is given a probability Pr. =0.2 that it will have some effect. Let individual A be designated as “risk averter”. He/she assigns a utility value to this option as equal to 100 utiles (some scale of value). Hence:
Pr* Ux= 0.2(100) +0.8(0) =20 is the expected utility.
Option Y has the probability of success of 0.7 that it will be effective, (1-0.7) it will not be. If effective, the utility is 1,000. Accordingly:
Pr*Uy=0.7(1,000) + (1-0.7) (0) =700.
Comparing the expected utility of the two options clearly indicates that option Y will be chosen.
This however is not the complete story. If the new technology carries with it, in addition to the failure probability, a probability of adverse side effects then such probability has to be incorporated to arrive at the expected utility of this option. This complicates the analysis as one needs to know, in addition, something about the risk tolerance of the individual.
Let the side effects (usually ascertained from clinical trials) to have a low probability equals (0.007) such that if it materialized will have severe consequences, even death. To calculate the expected utility of option Y one needs to account for this second component.
But there the problem with the optimal choices lies: one needs to know the risk profile of the individual.
As I have mentioned earlier, the individual can be a risk averter, risk lover or risk neutral (this last category is not likely to be prevalent in the population). Hence, I focus on the risk averter.
Suppose that the risk of side effect was evaluated as equal to -100,000 utiles and a probability of occurrence equals to 0.007. The calculation of the expected utility of option Y is: 0.7(1,000) + (1-0.7) (0) +0.007(-100,000) =0. Option Y will be rejected. For it to win over option X the evaluation of the risk has to be lowered. The equivalent value of the option Y to X requires a risk evaluation equals -97,142 utiles. With this value the individual would be indifferent between the two options. For Y to be chosen over X the risk tolerance has to be reduced so that the expected value of the loss is below the threshold of -97,142 utiles.
This is a problem a concerned physician is likely to face: First, he/she has to ascertain the risk tolerance of the patient (that can be done with a full review of the patient medical history a time consuming process to be sure), and secondly, how to induce the patient to lower the evaluation of risk as the probability of occurrence of the side effects is not subject to change without new information. The solution of this problem is not easy, not for the physician or for the patient.
It needs to be emphasized that at the time one is contemplating a choice involving risk, risk assessment has to be made so that the appropriate discount rate can be applied (the discount rate is used to convert future values to the present. This is ignored in this presentation). The choice Dan Ariely faced was a choice involving risk. His first option, keeping the arm may be viewed as the “sure bet” or the “certain” option. The second option is the uncertain option or the risky choice. The uncertainty about the outcome of the second option with all its ramifications would suggest that at the time the decision was made a very high discount rate was applied to the utility derived from choosing the second option to tip the scale in favor of the first option. In my view, that decision has nothing to do with being “IRRATIONAL”.
In a dynamic world, the discount rate does not remain constant. The discount rate one would use at the age of 20 or 30 is not likely to be the same as at 50, 60 or 70. Accordingly, many years after the fact, the author may well denigrate the discount rate he used at the age of 18. The traditional theory still holds in that intertemporal choices are made at the beginning of the period. However, nothing is irrational about revising the choice in subsequent periods when more information becomes available.
Having worked out in my own mind how my own choice between the two medical options is likely to come down, I maintain that, given the information at hand, whatever option I choose will be a "RATIONAL" choice. A fundamental lesson I have learned during my studies, teaching and research is the value of information and the quality of said information. Without good information the discount rate will be faulty and the choice "suboptimal", although "not irrational."
A final note to reflect upon:
In a decision involving medical intervention, with an option that has an uncertain outcome, a physician uses his/her expertise to calculate the probability of a successful outcome of that option so that it can be compared with the status quo or some other, less uncertain options. This probability, when communicated to the patient, would help the patient calculate the discount rate that must be applied to obtain the expected value (and utility) of the uncertain mode of intervention, which can then be compared with the outcome of other options, including the status quo. It is worth emphasizing at this juncture that the discount rate computed by the patient reflects his/her type: whether he/she is a risk averter, risk lover or risk neutral. As only the individual can put himself/herself in one of these three categories, a choice that may appear "irrational" is in effect completely "rational."
In my next blog I shall review some of the literature on risky choice and some of the contributions of behavioral economics, especially as some aspects of that theory pit individual choice against societal choice.

Tuesday, May 4, 2010

What Africa Deserves

A recent short piece by Joaquim Chissano[1], which appeared in the African Executive (April 7, 2010), has as its title: Africa Deserves Better Leadership. The author's main thesis revolves around the quality of leadership (or lack of it) in Africa. In his view, a "majority of Africa’s leaders are more of a liability than an asset to their countries. They are more interested in fulfilling their selfish personal ambitions than working for their people". Mr. Chissano goes on to enumerate instances where leaders shortchanged their countrymen (and countrywomen), and others where leaders put their people first and worked to improve their standard of living. Although the message is not new, it needs to be said and said often. Perhaps one day what has been said can be translated into action.
Everyone who is concerned in one way or another with Africa, whether as an active or a passive participant, bemoans the prevalence of corruption, the plundering of rich resources, and the pervasiveness of ethnic violence and mass killing.
Many articles and volumes have been written, and speeches and advice have been given, by African and non-African scholars, ordinary people and concerned citizens. But nothing suggested seems to have worked well. Nothing seems to penetrate the armor that shields corrupt leaders and/or corrupt citizens from the day of reckoning, if such a day were indeed to come. As one of those scholars who came lately (I am neither a development economist nor engaged in regional studies) to this recognition, I believe that efforts, no matter how small, should be exerted so that we, the outsiders, can better understand why corruption persists, why leaders behave the way they do and, above all, why little seems to have worked or been accomplished over decades of writing, speech giving and aid.
As I mentioned earlier, I did come lately as a scholar concerned about the progress of Africa (see the mission of the Institute for Economic Policy Studies, http://www.iespolicy.org/about.html). In addition to IEPS conferences, the blog medium offers me an outlet for airing some of the problems and concerns I have about what is happening in the continent (see Ott’s Blog 2007-2010).
Two issues should be up front in addressing the leadership issue. A solution, a "viable" solution, must be found to deal with them; otherwise, all the writing and lecturing will be for naught. These two issues are: length of executive tenure (how long the country's leader stays in office) and corruption, especially external corruption. Let us look at these two issues, starting with length of tenure.
Citizens of the world are not unaware of the fundamental link between the institution of democracy and the length of the leader's stay in office. In most democratic states there is a "constitutional" limit on the tenure of the leader (president), but such a limit is absent or flouted in non-democratic states. Many countries on the African continent have leaders "who won’t go". This phenomenon was commented upon in a piece in the Economist, "Another President Who Won’t Go" (March 15-17, 2008, pp. 49-50), in relation to a debate then taking place in Cameroon over presidential term limits. The Economist reported that on February 24th and 25th, 2008, violent protests broke out in Douala, Cameroon’s commercial capital, in response to Cameroon’s president, Paul Biya’s, declaration that he might stay for a third term of another 7 years. President Biya had presided over Cameroon for 25 years. The constitution that came into force in 1996 limits the president to only two terms in office, so for the president "not to go", the constitution had to be changed. And as the story goes, he did not go—Paul Biya still remains in office.
At the time this event was taking place, I wrote a blog (April 8, 2008) with the title "No, No, We Won’t Go: Why Some African Presidents Refuse to Retire" (http://attiatott.blogspot.com/2008_04_01_archive.html). Old men in Africa, most of whom are in their late seventies or in their eighties, have ruled for over two decades. Most of these rulers came to power on the heels of independence, or following the overthrow of dictators, with the blessing of their citizens. Expectations ran high. But then expectations dimmed and hopes were dashed. What went wrong?
To be "scientific" one needs to examine country-by-country experience. Two excellent books give an account of what has transpired: "The Fate of Africa. From the Hopes of Freedom to the Heart of Despair: A History of 50 Years of Independence" (2005) by Martin Meredith, and "A Continent for the Taking: The Tragedy and Hope of Africa" (2004) by Howard W. French. Ott’s Blog (2008) gives information on the relation between a ruler's tenure and the Freedom House rating of freedom over the period 1973-2006. The upshot of this analysis is to uncover the link between the state of democracy and the "won’t go" phenomenon. The findings are not unexpected: FREEDOM, with all its ramifications, is the most significant factor in determining the staying power of a ruler. Freedom is much more than conducting an election. It involves the guarantee of political rights and civil liberties. Few of these rights are guaranteed in many Sub-Saharan African countries. When elections are held, the current ruler most often wins, though not fairly and squarely, with the opposition attacked, silenced or jailed (see French for examples).
There is a saying that “power corrupts”, and, the longer the tenure, the more power to the corrupt. So now I turn to the phenomenon of corruption.
Corruption and bad governance are as natural, if not as inevitable, as birth and death. What makes us (more accurately, at least some of us) "incorruptible" is civic education and a good sense of what is in it for me vis-à-vis what is good for someone else. The concept of the leader, as distinct from the people the leader is supposed to serve, has evolved from being one of us to being one unto himself. But what defines corruption, and who is instigating the corrupt behavior?
There is a great deal of literature on corruption. For the sake of brevity it suffices to offer definitions of the two types of corruption: internal (or domestic) corruption and external corruption. The first type is relatively insignificant: an individual in one nation bribes another individual (most often a government official) in the same nation to facilitate action on a pending request. The second type, external corruption, is the one usually meant when the term corruption is used.
So what is it? External corruption arises when an individual or official of one nation bribes an individual or official of another nation.
Transparency International's (1995) definition is the one commonly used: "abuse of public office for private gain". Jain, A. (2001), "Corruption: A Review", Journal of Economic Surveys, 15(1): 71-121, defines it as "an act in which the power of public office is used for personal gain". In both definitions, what is stressed is who is being corrupted and for what purpose. Just as freedom has been indexed, calculated and the results published for most countries in the world, so corruption scores are calculated for developed and developing economies. Corruption scores are good indicators of the quality of leadership and governance, in the same way that economic variables like GDP and GDP growth are indicators of the growth path of the economy. Available data illustrate not only why many "bad" leaders are the product of corrupt practices in their countries, but also why, once practiced, corruption is difficult to get rid of; it persists (see N. Bissessar (2009), "Does Corruption Persist in Sub-Saharan Africa?", International Advances in Economic Research, Special Issue: Developing the African Continent, pp. 336-350, Attiat F. Ott, guest editor).
Using corruption scores obtained from TI over the period 1989-2006, countries are ranked as highly corrupt (scores 0.00-3.59), middle corrupt (scores 3.60-4.19) and least corrupt (scores 4.20-6.00). The results obtained for 27 Sub-Saharan countries show that most of these African countries (63 percent to 100 percent, depending on the year; pp. 336-350) fell in the most corrupt category, while the percentage of countries in the least corrupt category was less than 7 percent. Of note is the fact that corruption, which had begun to subside between 1984 and 1995, has taken a turn for the worse. The percentage of countries in the most corrupt category rose steeply and the percentage in the middle corruption category fell dramatically (p. 339).
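The score banding just described can be written as a small classifier (the thresholds are taken from the text; the function name is my own):

```python
def corruption_band(score: float) -> str:
    """Classify a TI corruption score on the 0-6 scale used in the cited study."""
    if score <= 3.59:
        return "highly corrupt"
    elif score <= 4.19:
        return "middle corrupt"
    else:
        return "least corrupt"

# A few illustrative scores (not actual country data):
print(corruption_band(2.8))  # highly corrupt
print(corruption_band(4.0))  # middle corrupt
print(corruption_band(5.1))  # least corrupt
```

Note that on this scale a lower score means more corruption, which is why the "highly corrupt" band sits at the bottom of the range.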
But corruption data, although they stand on their own, convey little about the societal structures—political, cultural, economic—that give rise to corruption and, worst of all, perpetuate its hold on society. Rather than wring one's hands in despair by pointing out that corruption is the "norm" rather than the "exception", one needs to look closely at society's makeup and see how these "structures" have made corruption not only tolerable but persistent as well. A few data will help in this search. Societal attributes such as ethnicity, language, religion and the literacy rate, as well as the type of government in place (democratic, semi-democratic or authoritarian), help us understand why many leaders in Sub-Saharan Africa, as well as elsewhere, are corrupt and, most of all, why corruption is tolerated. In a paper, "Is Economic Integration the Solution to African Development?", presented by Ott at a conference held on August 19-21, 2008 in Botswana, sponsored by the Institute for Economic Policy Studies, these attributes along with corruption scores were compiled for 44 Sub-Saharan African countries over the period 1986-2005. From the data provided in Table 1 of the paper, it is clear that Sub-Saharan countries are quite diverse in terms of population profile—ethnicity, language and religion. With respect to political structure, there too the democracy ratings show the majority to be partly free or not free, but the most disturbing finding perhaps is the level of education attained.
The data show a wide dispersion in the literacy rate, from a low of 22 percent for Burkina Faso (high corruption score; partly free) to 84.4 percent for Mauritius (low corruption; free) and 85 percent for Namibia (low corruption; free) (for more details see www.iespolicy.org/files/Is%20Economic%20Integration_Ott.pdf). In short, by understanding the societal structure one may be able to pinpoint the factors that need to be addressed to promote good citizenship. After all, in the post-colonial era the leader is not a "stranger"; he/she is one of the people. What Africa deserves is good citizens with the power to change the status quo. A good place to start is with education.

[1] Joaquim Chissano received the Mo Ibrahim Prize for demonstrating good leadership.

Thursday, April 29, 2010

How Much Public Debt?: The Sky is the Limit and Why Not?

How much public debt can a nation withstand? In my previous blog (April 21, 2010), I discussed the fiscal deficit and its projected path over the next 10 years. The concern there had to do not solely with its size in relation to GDP but also with the "expected growth" in federal spending under the full implementation of the Health Care Reform Act of 2010.
Economists are in the habit of making "prescriptions" for the economy by setting dosage levels that, when exceeded, cause the patient's health (in this case the economy's) to falter if not fail (bankruptcy). These targets are neither pulled out of thin air nor set in stone. Moreover, one "target" does not fit all. Two such doses or "targets" are prescribed for world economies: one target for the deficit, the other for the public debt.
Setting target levels does not mean strict adherence to those levels, except in cases where it is mandated by some supranational authority. For example, admission to the European Union (EU) requires adherence to the union's rules, two of which are a deficit-to-GDP ratio and a debt-to-GDP ratio (the Maastricht criteria: the budget deficit must not exceed 3 percent of GDP, and the public debt must not exceed 60 percent of GDP or must be declining towards 60 percent). As I will discuss later on, the first is much more critical for admission than the second. Other countries (e.g. the US) may aim for the deficit or debt target, but there is no mandate that requires the federal government to adhere to the target levels. Many state governments in the US, however, cannot run deficits, as their constitutions mandate a balanced budget. Before addressing the "economics" of deficit or debt targets, a bit of US history of deficits and debt, along with some theorizing, may be useful.
The Record:
In the US, talking about the fiscal deficit or the public debt makes news, good and bad. But the deficit and the debt numbers by themselves tell us nothing about budget issues or the fundamentals that shape their levels as well as their trends. Let me explain.
The fiscal deficit is a "byproduct" of fiscal actions, reflecting both the "ideology" of successive administrations and the historical path of the federal budget. As discussed in the earlier blog, the discretionary deficit reflects the decision of the "current administration" about its fiscal program—the level of spending on federal programs and the level of taxes on national income.
To talk about the deficit and the debt one needs to start with the federal budget. In democratic societies, the government is a political institution to which the public has assigned the task and the power of defining and protecting their property rights and, when deemed necessary for the enhancement of the general welfare (e.g. the health care reform), of redistributing these property rights. Since the government possesses no resources of its own, it must acquire them from private owners through taxes and through borrowing (debt). But how far can the government tax and borrow from the public? Put another way, what size should the federal budget be?
Back in the time of Adam Smith, the role of government was well defined and quite limited in scope; people then did not worry much about the size of government. Two hundred or so years later, the size of the budget, the budget programs, taxes, deficits and debts are issues that concern all societies and people of all persuasions.
One need not go back to the days of Adam Smith to show the dramatic path the federal budget has followed. It suffices to point out that its growth has been quite impressive—in 1794, for example, the federal government spent about $7 million; in 2009 federal spending reached $3,518 billion. Relative to GDP ($310 million in 1794), the ratio of spending to GDP was about 2 percent; the corresponding figure for 2009 is 24.7 percent. Well! Going that far back is not very meaningful—nothing stays the same and no one (except maybe the Tea Party) would want to return to those days.[1]
So let us look at the numbers at five-year intervals over the past 59 years, 1950-2009. From the data (Table 1) we discern the following: a critical imbalance between spending and receipts beginning in 1975 (except for a short span of time during the Clinton period), with receipts falling below their levels of the 1960’s. Note that in the 1960’s we had an almost balanced budget followed by small deficits, except during the Reagan years (1981-89), when tax cuts coupled with an increase in defense spending produced an unprecedented ratio of the deficit to GDP (5.1 percent) in the year 1985. After 1985, the deficit number was within the "tolerable" threshold of 3 percent. Today (2009) the deficit has blown off the chart at the unprecedented level of 9.9 percent of GDP.

But look closely at what is driving this "out of experience" record. The ratio of budget receipts to GDP between 2000 and 2009 is below its level of 50 years ago (14.7 percent in 2009 compared with 17.8 percent in 1960), while the spending/GDP ratio in 2009, at 24.6 percent, is only 2 percentage points above the 22.8 percent recorded during the Reagan years. The culprit, then, is not that spending is out of control, although this may be true, but that government budget receipts have not kept up with the growth of spending. Both sides of the coin paint the deficit picture. To attack the deficit, one needs to change the fundamental thinking about its source. It is not uncommon to hear critics of the budget posture pointing to the growth of spending, but few dare to point out the erosion in budget revenues. The conventional wisdom (proven again and again) is that it is easier to point to "runaway spending" than to point out the shortfall in budget receipts. Most of us feel the pinch of taxes, and only a few recognize the value of public spending. But to be serious about closing the budget gap, both sides of the equation have to adjust for it to come to a state of balance. This is easier said than done.
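The budget identity behind these figures is simply that the deficit ratio is the gap between the two sides of the budget. As a quick check, using the 2009 figures quoted above:

```python
# deficit/GDP = spending/GDP - receipts/GDP (all in percent of GDP, 2009 figures
# from the text)
spending_share = 24.6
receipts_share = 14.7
deficit_share = round(spending_share - receipts_share, 1)
print(deficit_share)  # 9.9, the "off the chart" 2009 deficit ratio
```

Either a fall in the spending share or a rise in the receipts share narrows the gap, which is the point being made about adjusting both sides of the equation.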
What about the debt? As shown in the table, the debt/GDP ratio over the period 1960-1990 oscillated between 30 percent and 55 percent, a level that did not depart much from what conventional (economics) wisdom suggests, around 60 percent. Once again we have an "out of experience" level of debt, a level equal to 83.3 percent in 2009. The values of these two indicators (deficit and debt) for 2009, if the trend continues, bode ill for the growth of the American economy.
Why do the deficit and the debt matter?
Recall that the debt evolves through deficits. Borrowing domestically (debt in the hands of the public) or from foreign countries (external debt) is not without cost. Borrowing entails interest payments on the debt, which must be financed through budget receipts. Ever-rising debt service requirements impair the government's ability to meet its budget spending and/or absorb private income through taxation, which adversely affects private consumption and investment. A preponderance of evidence also shows that budget deficits raise interest rates and cause the inflation rate to rise by fueling inflation expectations. In short, a rising deficit and debt have repercussions over both the short and the long run.
The ratios of 3 percent for the budget deficit to GDP and 60 percent for the public debt to GDP have been advocated for advanced economies and, in some instances, have been adhered to. When these indicators are disregarded, financial crises occur and defaults are not far behind (the experience of New York City in the 1970’s is but one example).[2] A great deal can be said about the links between domestic debt and external debt. A most insightful account of debt crises, domestic and external, is given in a recent volume (2009), This Time is Different: Eight Centuries of Financial Folly by Carmen M. Reinhart and Kenneth S. Rogoff (Princeton University Press). Not only does the volume provide a historical account of crises arising from both domestic and external debt, it also provides the theoretical underpinnings of debt crises.[3] Policy makers would do well to take its diagnoses to heart so that they may hit on the right prescription.

[1] For historical statistics see: http://www.usgovernmentspending.com/federal_debt_chart.html. Budget data and GDP data are also reported in the National Income and Product Accounts.
[2]See Attiat F. Ott (1975), “The New York Financial Crisis: Can the Trend be Reversed? American Enterprise Institute (November), Washington, D.C.
[3] Concern today is voiced for the debt problem facing Greece, Spain and Portugal. The debt crisis may spillover affecting both the US and emerging economies.

Wednesday, April 21, 2010

The Fiscal Deficit…Is As Far As The Eye Can See

For quite some time I have thought about venting my frustration over what is written about the soaring fiscal deficit: not because it is beyond the "norm", if there is such a thing as what a "responsible" fiscal stand dictates, but because of the latest claims as to why the budget deficit, and hence the federal debt, is soaring, uncontrollable, unsustainable or downright un-American.
What prompted me to get on with it and write this blog is an article about the deficit which appeared in the New England Journal of Medicine (April 1, 2010): "The Specter of Financial Armageddon-Health Care and Federal Debt in the United States" by Michael E. Chernew, Ph.D., Katherine Baicker, Ph.D. and John Hsu, M.D., M.B.A., M.S.C.E. All of the authors, in one way or another, are involved in health care policy and/or health economics (see NEJM p. 1168 for affiliations). The authors' main thesis is that the health care reform goals (which became law on March 21, 2010), although "laudable", will have dire consequences in that spending on health care will "add substantially to our structural spending and thus necessitate more draconian fiscal austerity elsewhere" (NEJM p. 1168).
Before going further, let me first congratulate the authors for explaining to the journal's audience (after all, not all readers of the NEJM are likely to be versed in the field of public economics) that the term deficit, and hence the size of the deficit, is linked to the state of the economy. In other words, "not all deficits are equal". The total deficit consists of two parts: autonomous and induced. Autonomous means discretionary; induced is not. When the government makes a decision to purchase an item (expenditure), just like you and I, it needs revenue to meet the purchase. If there is no revenue, it borrows the money, and hence the deficit. This deficit is labeled the "structural deficit". The other component, the "induced", reflects what is happening in the economy. Consider an individual seller: when the economy falters and people lose their jobs, their purchases of certain goods are curtailed or eliminated, and hence the seller's income falls not because of his own actions (i.e. he decided not to sell), but because economic conditions have worsened. The same is true of government revenues: when economic conditions falter, Treasury tax collections fall, government spending on income support such as unemployment compensation rises, and the deficit materializes. After the economy rebounds, the opposite takes place. In short, this part of the deficit is "induced" and is not at the discretion of the government. Since the rise and fall in economic activity is labeled "the business cycle", this deficit component is called the "cyclical" deficit.
There is not much policy makers can do about the cyclical deficit (at least in the short run), and it should not be of concern. Obviously, if the business cycle could be "eliminated" this component would disappear and all we would have is the discretionary deficit. I prefer the term discretionary to structural, as it conveys the willful act of the public sector.
Having decomposed the deficit into its two components, any discussion about its rise or fall has to be clear about which deficit we are talking about, and here lies the thesis of the NEJM article. Since Health Care Reform entails discretionary federal spending, unless it is financed dollar for dollar by new federal receipts or through a reallocation of federal spending (cuts in other federal programs), no matter how you slice it the discretionary deficit will increase. This is true now as it has always been.
The history of federal deficits, which I will explore in a later piece, will convey this simple story. The budget process (where the determination of expenditures, revenues and the discretionary deficit take place) is invariant to the expenditure program initiated (whether that may be health care, defense spending, social security and so on), or taxes raised or lowered.
To form an opinion about the predicted dire consequences of the Health Care Reform for the deficit, and whether "draconian" fiscal measures may have to be used to address them, a look at actual and projected federal deficits may be enlightening if not useful. With respect to the public debt, its path follows the path of deficits, since budget shortfalls have to be funded by issuing new debt, which adds to the stock of public debt in the hands of the public. A government that balances its budget has no need to add to the stock of debt. The debt (a stock, the accumulation of flows of deficits) as a ratio of GDP (a flow) is a useful indicator of the capacity of the economy to sustain indebtedness.
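The stock-flow relation described here (this year's debt equals last year's debt plus this year's deficit) can be sketched in a few lines, using purely illustrative numbers rather than actual budget data:

```python
def debt_path(initial_debt: float, deficits: list[float]) -> list[float]:
    """End-of-year debt stock implied by a starting debt and a list of annual
    deficits (a surplus enters as a negative deficit)."""
    path = [initial_debt]
    for d in deficits:
        path.append(path[-1] + d)
    return path

# A government running three years of deficits, then a balanced budget:
# the debt stock ratchets up and then simply stays put.
print(debt_path(100.0, [10.0, 12.0, 8.0, 0.0]))  # [100.0, 110.0, 122.0, 130.0, 130.0]
```

Note that a balanced budget freezes the debt but does not reduce it; only surpluses (negative entries) would bring the stock down.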
Why accumulate deficits and debt?
There is an economic theory of deficits, backed up by research, suggesting that politicians behave strategically (Persson and Svensson (1989) and Alesina and Tabellini (1990)). The essence of this proposition is that deficits and debt are used strategically by the current government to influence the fiscal decisions of those who will succeed it. In other words, a Republican government may accumulate debt during its tenure to force its successor (presumably a Democratic government) to curtail federal spending or raise federal taxes.
Getting back to Chernew et al.'s article in the NEJM (April 1, 2010), there is a need to examine the link between federal health care spending and the budget deficit, a link which is of great concern to the authors.
The authors provide data (obtained from a letter written by the director of the Congressional Budget Office to Senator Inouye, March 5, 2010) which purports to show that federal health care spending which amounted to 5 percent of the GDP and 20 percent of the federal outlays in 2009 is forecast to reach 12 percent of the GDP by 2050—a 41 year stretch.
The authors do not elaborate on whether or not this growth reflects the full implementation of the Health Care Reform Act, nor do they inform the reader about the underlying reasons for the growth in health care spending, one of which is the aging of the population. A better indicator would be the growth of federal spending on health care per capita compared with the growth rate of per capita GDP.
Even assuming that the only concern is with the growth rate, one can translate the figure (a 7 percentage point increase, from 5 percent to 12 percent) into an annual growth rate over the 41-year period. This growth rate is equal to about 2 percent per year, not much if GDP is to grow at an annual rate of 3 percent, which (hopefully) is achievable over the business cycle.
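The conversion from a change in shares to an annual growth rate is a simple compound-growth calculation, which can be checked directly:

```python
# Compound annual growth rate of the health-spending share of GDP, from
# 5 percent in 2009 to a projected 12 percent in 2050 (figures from the text).
start_share = 5.0
end_share = 12.0
years = 41

cagr = (end_share / start_share) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 2.2% per year, i.e. the "roughly 2 percent" in the text
```

The point stands: even a striking-looking jump in the share, spread over four decades, works out to a modest annual rate.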
Calculation of growth rates aside, the concern should be over the “planning cycle” of the federal budget which is 10 years. The Congressional Budget Office (CBO, January 2010) provides projections of budget outlays, revenues and deficits over the period 2010-2020. It is perhaps useful to divide the period into two sub-periods 2010-2014 and 2015-2020. In the first sub-period, some but not all of the provisions of the Health Care Reform Act will be implemented. The second sub-period, 2015-2020, is the period where the full provisions of the reform would have been implemented. So let us look at the numbers (These are CBO’s projections based on the President’s Budget of 2009). The year 2009 is taken as the reference point.

What these numbers tell us and why the budget path matters.
Let us first look at the first period, 2010-2014. Two numbers—deficits and debt—are of significance no matter what one's political persuasion, fiscal conservative or fiscal liberal. The CBO has been labeled "bipartisan". By that it is meant that those who crunch the numbers are professionals (mostly economists) who do not inject their political views into the projections. That does not necessarily mean that the actual numbers will exactly match the projected numbers, but these are the "best" given the uncertainty of the path of the economy. With this caveat, one ought to take the projections in "the spirit" in which they are given. Now, what we have in the first period (2010-2014) is a steady decline in the deficit path, from 10.3 percent of GDP in 2010 to 4.1 percent of GDP in 2014. This is a remarkable achievement indeed. Now comes the "horrifying" story of deficits for the period 2015-2020, and hence the overly pessimistic forecasts of what awaits us (expressed in Chernew's article as well as by others) if these projected deficits turn out to be actual deficits.
The projected outlook shows a rise in the ratio of the deficit to GDP from 4.1 percent in 2014 to 5.6 percent in 2020, an increase of 1.5 percentage points of GDP over a 5-year period. But then recall that in 2009 the actual deficit/GDP ratio was 9.9 percent and the projection for 2010 is 10.3 percent. Remember what was said earlier: the total deficit reflects both the autonomous component (discretionary budget action) and the induced component (reflecting the state of the economy). Given that we are in a "recession", the economy's lower path impacts the total deficit because of the fall in the level of economic activity and the rise in spending on transfer payments (for example, unemployment insurance and stimulus packages).
The projection beyond 2010 clearly assumes a rebound in the level of economic activity, and hence one would expect lower deficit-to-GDP ratios beyond 2010. Why, then, are the deficit/GDP ratios projected to rise in the 2015-2020 period?
The story lies in the projected paths of spending and revenues. In the first period, revenues are projected to grow at a 9 percent annual rate while spending is projected to grow at an annual rate of 2 percent. In the second period, revenues are projected to grow at a 4 percent annual rate while spending is projected to grow at a 3.9 percent annual rate. The projection hence assumes an increase in the annual growth rate of spending in the second period of 1.9 percentage points (from the 2 percent annual rate of the first period), while revenue growth falls from an annual rate of 9 percent in the first period to 4 percent. The increase in the annual growth rate of spending beyond 2014 is understandable, as it must reflect the spending impact of the Health Care Reform Act. As to the revenue projections, a number of underlying assumptions may account for this, such as assumptions about the state of the economy, a discretionary tax cut, or both, perhaps enacted in the period. These assumptions should give the reader "food for thought".
If our economic theory of deficits is correct, then the current administration, by raising the growth path of the deficit over its tenure, will “tie” the hands of the next administration, especially if the new administration is “fiscally conservative”. By tying its hands, it is meant that neither new spending initiatives nor “tax cuts” could be undertaken.
What about the public (or national) debt?
The projected debt numbers for 2015-2020 are “staggering”, but what do those numbers mean? And should they give us pause? Because of the complexity of this issue, I will defer the answers to a follow-up blog.
Where do the Health Care spending figures fit in all of this?
There are many sources that give projected levels of Health Care spending. According to the National Health Expenditure Projections, total National Health Care spending was to exceed $2.5 trillion in 2009, which put it at 17.6 percent of GDP (see also Economic Report of the President, February 2010). CBO projections put the trend upward so that by 2020 National Health Care expenditures would be around 20 percent of GDP. The corresponding GDP figures for these two years are $14.3 trillion in 2009 and $22.4 trillion in 2020, an annual GDP growth rate of 4.2 percent. Translating the growth rate of National Health Care spending into “levels”, we get a spending level for the year 2020 of $4.48 trillion, an annual growth rate of health care spending between 2009 and 2020 of 5.4 percent, which (if the projection holds) means a 1.2 percentage-point difference between the projected annual growth rates of health care spending and GDP. This clearly suggests a shift in spending growth in favor of health care at the expense of other types of spending.
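The compound annual growth rates implied by the cited levels can be checked directly (2009 to 2020 spans 11 years):

```python
# Checking the compound annual growth rates implied by the cited levels:
# GDP of $14.3T (2009) to $22.4T (2020); health spending of $2.5T to $4.48T.

def cagr(start, end, years):
    """Compound annual growth rate, in percent, between two levels."""
    return ((end / start) ** (1 / years) - 1) * 100

gdp_growth = cagr(14.3, 22.4, 11)
health_growth = cagr(2.5, 4.48, 11)

print(round(gdp_growth, 1))     # 4.2, matching the cited GDP growth rate
print(round(health_growth, 1))  # 5.4, matching the cited health spending rate
```

The 1.2 percentage-point gap between the two rates is what drives the projected shift toward health care spending.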
What about public sector (government) spending on health care (the culprit in all the debates on health care)? It is interesting, if not “funny”, that the National Health Expenditure Projections (released by the Centers for Medicare and Medicaid Services) project health care spending growth to “decelerate” to 3.9 percent in 2010. CMS attributes this slowdown to a “deceleration in Medicare spending growth (1.5 percent in 2010 compared to 8.1 percent in 2009).” If this trend continues, it would provide a cushion against the anticipated rise in health care spending under the Health Care Reform Act.
Obviously more can be said about these projections. This task has to wait until the projected numbers are on a more solid footing, as projected growth rates of health care spending impact the federal deficits, and these in turn impact the projected path of the federal debt. These issues will be dealt with in my next blog. For now, a quotation from “Government by Red Ink” by Nobel Laureate James M. Buchanan, Professor Charles K. Rowley and Robert Tollison gives us food for thought:

“Many and varied are the perspectives on budget deficits offered by those who analyze them from a reformist standpoint. Some look for a return to the Victorian prudent household ethic
others look for the election to office of the more responsible, less myopic politicians”
(in Deficits, 1986, p. 3).

Thursday, March 25, 2010

Good News For Africa: The Poverty Rate has Fallen

“African poverty is falling”, so say Xavier Sala-i-Martin and Maxim Pinkovskiy in a recent NBER paper (National Bureau of Economic Research, Working Paper 15775, February 2010).
The study of poverty in general, and African poverty in particular, goes in cycles. During the Johnson and Nixon administrations the economics profession devoted a great deal of effort to measuring US poverty rates and coming up with solutions to poverty. These solutions ranged from a negative income tax (Milton Friedman) to family assistance plans.
Measurement of poverty, however, has remained an engaging subject for many in the economics profession, especially for those at the University of Wisconsin--Madison Institute for Research on Poverty, the Urban Institute and the Brookings Institution.
Although the economics profession’s interest in the study of poverty was dominated by US poverty, development economists, notably Amartya Sen, have exerted great effort in measuring poverty in developing countries, identifying its causes and advancing solutions to address it. Such efforts notwithstanding, eradicating or substantially reducing poverty here and elsewhere has remained beyond the grasp of politicians and their economic advisors.
In the past couple of years, the measurement of poverty has taken center stage. For the most part, the focus was on US poverty (see AEI, “The Poverty of the Official Poverty” by Nicholas Eberstadt, November 2008, and Brookings, “Improving the Measurement of Poverty”, December 2008). Poverty rates and the “poverty line” in the developing world, and especially in Sub-Saharan Africa, did not escape the attention of economists, motivated in part by the goal set in the 2008 Millennium Development Goals Report (MDG) (UN, 2008). The report set a target for reducing developing countries’ poverty rates by the year 2015, but it was not very optimistic about reducing the Sub-Saharan African poverty rate by that date, citing many factors that hinder growth.
In his presidential address to the American Economic Association (January 17th, 2010), Professor Angus Deaton of Princeton University provided an exhaustive analysis of world poverty and inequality. Most of the discussion, however, was devoted to the role played by purchasing power parity (PPP) price indexes in the measurement of global poverty and global inequality.1 The Deaton address (paper) is 60 pages in length and, for the non-professional economist, a bit “boring” and cumbersome. That critique aside, the general tenet of the paper, and hence its comparisons of poverty rates, hinges on the revision of the PPP indexes reported in the International Comparison Project (ICP). If one believes in the soundness of the ICP revision of purchasing power parity, then, according to the ICP, “Global inequality has increased and this reduced the global poverty line relative to the US dollar”.
Before getting into the “nitty-gritty” of measurement, a few numbers may be helpful. Deaton provides, in Table 1 of the paper, poverty head count ratios and the global poverty line expressed in year X (say 2005) PPP international dollars. The information reported is noteworthy in that it contrasts changes in poverty ratios since 1981 (three data points are given: 1981, 1993, 2005). Measurement of the percentage of people in poverty in each of these years has to be calculated on the basis of the poverty line derived from the PPP at a given date. Thus, using the year 2005, the poverty line is defined by a number, in this case $1.25. This means that if a person in country Y, for example Ethiopia or Nepal, received $1.25 a day in income (whatever the source), then he is not counted as poor; at $1.24 he is counted as poor. The poverty line is hence used to count the ratio of the population that is labeled “poor”. Given the significance of the poverty line to the count of people in poverty, Deaton’s 60-page paper engages the reader in an explanation of what this number means for the count of people in poverty and hence for changes in global inequality over time.2
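A minimal sketch of the headcount method may make the counting rule concrete. The daily incomes below are hypothetical figures (in 2005 PPP dollars), not survey data:

```python
# Hypothetical daily incomes, in 2005 PPP dollars, for eight people.
incomes = [0.80, 1.10, 1.24, 1.25, 2.40, 3.75, 6.00, 10.50]
poverty_line = 1.25

# Per the rule above: a person at exactly $1.25/day is not counted as
# poor, while a person at $1.24/day is.
poor = [y for y in incomes if y < poverty_line]
headcount_ratio = len(poor) / len(incomes)
print(headcount_ratio)  # 0.375, i.e. 37.5 percent of this sample is poor
```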
Not to get tangled in the “details”, the paper delivers a “powerful” message: global inequality has fallen over the past two and a half decades. The total percentage of people in poverty (using the $1.25 poverty line) has fallen from 51.9 percent in 1981 to 25.2 percent in 2005. This is good news to be sure. How people in one country or continent fare against people in another, however, depends on where they live. For example, contrast the poverty profile of a person in East Asia Pacific with that of a person in Sub-Saharan Africa. In 1981, 77.7 percent of the people in East Asia Pacific were classified as poor at the $1.25 poverty line. By 2005 this percentage had fallen to 16.8 percent, a decline of almost 80 percent.
Compare this rate of decline with that for Sub-Saharan Africa. Again, at the $1.25 poverty line, the percentage of the population classified as poor in 1981 was 53.4 percent. By 2005 this percentage had fallen to 50.9 percent, a decline of almost 5 percent. Great news for Sub-Saharan Africa.
Looking at the count rather than the percentage, global poverty (measured at the $1.25 poverty line) has fallen from a count of 1,900 million in 1981 to 1,374 million in 2005, a fall of more than 27 percent over two and a half decades. That is not bad, or is it?
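The declines cited in the last three paragraphs are easy to verify from the numbers themselves:

```python
# Verifying the cited 1981 -> 2005 declines at the $1.25/day line.
def pct_decline(start, end):
    """Relative decline, in percent, from a starting to an ending value."""
    return (start - end) / start * 100

global_count = pct_decline(1900, 1374)   # millions of poor worldwide
east_asia = pct_decline(77.7, 16.8)      # East Asia Pacific poverty rate
sub_saharan = pct_decline(53.4, 50.9)    # Sub-Saharan Africa poverty rate

print(round(global_count, 1))  # 27.7, "more than 27 percent"
print(round(east_asia, 1))     # 78.4, "almost 80 percent"
print(round(sub_saharan, 1))   # 4.7, "almost 5 percent"
```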
Deaton’s paper goes beyond the statistics I have cited. It is worth the time and effort to understand the magnitude of the problem. One issue worth investigating is the sample of entities, the size of the population of each member of the sample and the economic progress attained there.
Back to Sub-Saharan Africa. As mentioned earlier, Deaton tells us that the poverty rate there fell from 53.4 percent in 1981 to 50.9 percent in 2005. Xavier Sala-i-Martin and Maxim Pinkovskiy (NBER, 2010) are much more upbeat about the prospects of poverty reduction in Sub-Saharan Africa. The authors focus exclusively on African poverty using three poverty lines: daily incomes of $1, $2 and $3. Translated into yearly income in US dollars, they get a value of $365 in 1985 dollars for the $1/day poverty line. The authors provide extensive analysis of the African distributions of income for four data points: 1970, 1990, 2000 and 2006. Because of this selection and the choice of poverty-line values, their findings are not fully comparable to Deaton’s. Nonetheless, their results on changes in the African poverty rate between 1970 and 2006 at the $1/day poverty line augment what one learns from Deaton’s.
Taking the range of the $1/day and $2/day poverty lines, one may contrast their findings with those reported by Deaton. The African poverty rate for 1981 at $1/day is given as 39.4 percent; at $2/day, the rate is 64.8 percent. The number reported by Deaton, 53.4 percent at $1.25/day, lies somewhere in between. The difference is not unreasonable. In 1993, according to Sala-i-Martin and Pinkovskiy, the poverty rate (surprisingly) rose to 42.2 percent at $1/day and to 67.1 percent at $2/day. For 2005, the respective rates are 33.1 percent and 60.9 percent. Over the 1981 to 2005 period, the decline in the poverty rate measured at $1/day is 15.9 percent, but at $2/day it is only 6 percent, which is a bit closer to Deaton’s rate of decline of 5 percent estimated at $1.25/day.
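The relative declines at the two poverty lines can be checked directly from the figures cited:

```python
# Relative declines in the African poverty rate, 1981 -> 2005, at the two
# poverty lines used by Sala-i-Martin and Pinkovskiy.
def relative_decline(start, end):
    """Relative decline, in percent, from a starting to an ending rate."""
    return (start - end) / start * 100

at_one_dollar = relative_decline(39.4, 33.1)   # $1/day line
at_two_dollars = relative_decline(64.8, 60.9)  # $2/day line

print(round(at_one_dollar, 1))   # 16.0 (the text cites 15.9)
print(round(at_two_dollars, 1))  # 6.0
```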
How great is the decline? Measured at $2/day, it is 6 percent over two and a half decades. Good news, but nothing to write home about.
Fortunately, the story does not end there. Sala-i-Martin and co-author Pinkovskiy predict that by 2015 the $1/day poverty rate will be 22.8 percent, a decline of 10.3 percentage points over a 10-year period (from 33.1 percent in 2005), about one percentage point of reduction per year. That is indeed remarkable if true. What makes it so is that it is within the range reported for the preceding period, where the rate fell by 1.4 percent per year from 1981 to 2005. With such progress, the authors predict that the poverty rate target of 21.0 percent by 2015 set by the 2008 Millennium Development Goals (UN, 2008) is attainable, though perhaps in the year 2017 rather than 2015.
What accounts for the poverty rate decline? Whether one accepts the numbers given by Deaton or by Sala-i-Martin and Pinkovskiy, the answer is simple and expected: a “decent” growth rate of per capita Gross Domestic Product. The correlation between the two holds for all countries, developed or developing, in Africa, the EU or the US. The question that has been, and continues to be, asked is what accounts for economic growth and whether the growth observed in any country over any period is sustainable. This is a topic for another essay. For now it suffices to point out that the driving force in the evolution of poverty “is an almost exact mirror image of the evolution of GDP per capita” (Sala-i-Martin and Pinkovskiy, p.10). If growth is sustained, then the authors’ prediction for 2017 will be achieved; if not, the MDG target will be met in neither 2015 nor 2017.
The authors’ prediction clearly hinges on the state of affairs in the world following the global financial crisis of 2008. But there is room for hope, as the world economy seems to be moving, albeit slowly, toward a decent recovery.
A final note, which I shall revisit in the future. Using $1.25/day, $2/day or $3/day as a benchmark for counting people in poverty worldwide yields a “bare” and “unfeeling” statistic. Think about what $1.25 buys an individual in Timbuktu, Ethiopia or Nepal. The cold hard number does not convey what that individual, be it a male, female, adult, child or elderly person, consumes in terms of goods and services. The number is said to be obtained from household surveys. Given that household survey data are available, what would be most enlightening is to use them to show the composition and the size of the basket of goods which $1.25 buys in each of these countries compared to the “standard” for a decent living. Undoubtedly, one can set the rate at $5, $6, $7 or whatever a decent living implies.
Deaton gives us a glimpse of what a decent living means, derived from data on well-being measures from the Gallup World Poll (Table 8, p. 55). Using the year 2006 as the base, 37.9 percent of the world population reported “poor living standards”, with the percentage rising to 38.6 in 2009. For the Sub-Saharan African population the corresponding number is 61.4 percent in 2006, rising to 62.6 percent in 2009. These numbers tell a different story, perhaps closer to what poverty means than the sheer count of the numbers in poverty.
One can easily define what a decent living standard requires (for example, the BLS basket of goods and services used to determine the cost-of-living index in the US). What is not easy is how to get there. If the world’s organizations, and we as individuals in affluent societies, indeed care about global poverty, then efforts should be exerted to enable Sub-Saharan African countries (the region with the highest poverty rate) to make substantial advances in reducing poverty. We should not be satisfied with a 5, 6, 10 or even 15 percent reduction in the poverty rate over a decade with a standard of living set at $1 or even $2 a day.
1 PPPs are needed to translate purchasing power in one country’s currency into US purchasing power. In other words, if a good costs 16 rupees in India, the PPP exchange rate expresses that cost in terms of dollars.
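As a hypothetical illustration of the footnote’s conversion (the rate of 16 rupees per PPP dollar is assumed purely for arithmetic convenience, not an actual PPP rate):

```python
# Hypothetical PPP conversion: express a rupee price in PPP dollars.
price_in_rupees = 16.0
rupees_per_ppp_dollar = 16.0     # assumed PPP exchange rate (illustrative)

price_in_ppp_dollars = price_in_rupees / rupees_per_ppp_dollar
print(price_in_ppp_dollars)  # 1.0
```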

2 To get a feel for the numbers in a comparative setting, the poverty rate in the US in 1981 was 14.0 percent at $12.65/day. In 2005 the corresponding figures were 12.6 percent and $27.32/day.

Monday, March 22, 2010

The Health Care Reform Saga: President Obama said: Get on with it and the Democrats did “Hallelujah”

Representative George Miller, D. California put it best when he said:
“Tonight we answered the call”.
Sunday, March 21, 2010 will forever be remembered as the day when a “remarkable piece of legislation was passed without a single Republican vote”.
When the dust settles, I shall get back to take up the “nitty-gritty” of the legislation.

Tuesday, March 9, 2010

“The time for debate over Health Care Reform has come to an end” (President Obama -Health Care Summit, Thursday February 25, 2010)

“Let us get it done”
(President Obama address, March 3, 2010)

At the Health Care Summit, which took place on Thursday, February 25th, 2010, the President sought to elicit support for the health care reform bill before Congress in the hope that the deadlock over its passage might be broken. Another, perhaps more significant, motive was to gain the support of one or more Republicans so that the passage of the reform bill could be touted as “bipartisan”. To those of us who have watched the progression of the bills (House and Senate) from their inception more than a year ago to their final resting place in the hands of the Democratic majority in the House and the super majority in the Senate, it is disheartening to witness not only a dysfunctional public sector but, worse, the egotism of self-promoting public servants at the expense of those they are supposed to serve.
Having said that, it does not follow that elected public servants do not serve their constituents in voting for or against legislation. But the spectacle of the debate on health care reform has, if nothing else, confirmed what ordinary Americans (not the health care experts, journalists or health economists) have maintained for decades: that the federal government is ‘BROKEN’ and that changing the man at the helm does nothing to fix it.
The 2008 campaign put forth fixing the nation’s health care system as a priority. At the February 2010 Summit, and in speeches given by both Republicans and Democrats there and elsewhere, the public is told that the US health care system is riddled with inequities and inefficiencies and that something has to be done to correct it.
Having acknowledged these deficiencies, politicians and experts alike do not go about addressing them in a manner consistent with the public interest, but rather promote their own ideas or, better yet, their self-importance.
Being a Republican or a Democratic member of Congress should not be the overriding reason to accept or reject the reform simply because it is put forth by a Democratic President. Our newly elected Senator from Massachusetts, in his critical assessment of the reform bills before Congress, argued that the Obama Health Care Reform is not so good for Massachusetts. Massachusetts, after all, passed, over the objections of some Republicans, a health care reform that requires all Massachusetts residents to have proof of insurance coverage and penalizes those who do not. It is interesting that Senator Brown uses Massachusetts as an example that the reform did reduce the cost of insurance. Most significant, perhaps, is the fact that the Senator’s remark that what is good for the nation is not good for Massachusetts flies in the face of what the Constitution stipulates. The United States public sector is a federal system comprising three levels of government: national, state and local. The legal division of responsibility among the three levels is found in the Constitution and in court interpretations of the Constitution. The Constitution divides the powers of government: those of the national government are specified in Article I, Section 8, while those of the states and their subdivisions are residual. The federal government, through Congress, was given the power to levy and collect taxes, duties, imposts and excises, to pay the debts and provide for the common defense and general welfare of the United States. During the 1930s, amidst the problems and pressures of the greatest depression in US history, there developed a judicial interpretation of the Constitution which accepted a reading of the general welfare clause that placed no discernible judicial limits on the amounts or purposes of federal spending (for details on federal, state and local responsibilities, see Ott, D. and A. Ott, Federal Budget Policy, Third edition, The Brookings Institution, 1978; see also A. Ott, Public Sector Budgets: A Comparative Study, 1993, Edward Elgar Publishing, ch. 6). Hence, the federal government, and not each state, has a constitutional duty to enhance the welfare of the citizens. Given that medical care enhances welfare, the federal government has a responsibility to promote the welfare of all citizens.
Disinformation, along with a “heap” of unsubstantiated claims, has soured the public on health reform: any reform, regardless of who originated it. In an AEI commentary, “Here’s the Rx for a Bipartisan Health Care Reform Bill” (American Enterprise Institute, February 24, 2010), Norman J. Ornstein addresses both of my comments. First, he states that “The plan that Obama has put up on the White House website, while basically built on the Senate-passed bill as amended by the House and refined by the President, is no radical leftist plan, much less a government takeover of our health care”. Second, that “The public unhappiness with health care reform is built not on the substance here but on the distrust of Washington pols, the messy and rancorous process and the unease about a leap of faith to get change”.
The media and the Republican summiteers keep repeating that public opinion is against the reform. It would be enlightening if the media were to focus a bit more on how a majority that put Obama and the Democrats in Congress is turning against one of the pillars of the Democratic agenda. But this is another story.
Let me now turn to the “Summit”. Politicians and commentators are fond of using mega-phrases for an event to signal not the importance of the event itself but the who’s who attending it. The Merriam-Webster dictionary defines the word “summit” as “the highest point attained or attainable” and notes that it “implies the topmost level attainable”. In the political arena the “summit” label is meant to confer status on the event not in terms of the issue, the “raison d’ĂȘtre” of the Summit, but rather in terms of who is attending.
An event, hence, is called a summit if the attendees are heads of state (as in the G-20 Summit) or officials at the highest level. The Health Care Summit is clearly an event that brought together congressional leaders, officials at the highest level of government. (Detailed hour-by-hour coverage of who’s who at the Summit, from 10:00 am until 5:25 pm, is given in the Washington Post, February 26, 2010.)
The Washington Post, The Wall Street Journal and The New York Times, as well as research institutions such as the American Enterprise Institute, have dealt adequately with the Summit, not only producing excellent summaries of the issues debated but also providing their “experts’” analysis of the “contestable” provisions in the reform plan. By now, those who watched the health care reform saga unfold, and those who are satisfied with “snips” of the debate, are aware of the major elements of the reform:
‱ Requiring health insurance coverage, with a penalty for lack of coverage. (The state of Massachusetts requires it.)
‱ Regulating the insurance market by creating an oversight body.
‱ Saving costs through changes in Medicare reimbursements, and hence the “deficit” effect.
‱ Expanding Medicaid coverage by subsidizing state governments for services to uninsured people who cannot afford to purchase insurance.
These are simple provisions which ought not to have taken a year to formulate, and certainly not to have raised the blood pressure of so many of those “high government officials” attending the Summit. But then, as someone put it, “the devil is in the details”. This is not surprising given the size of the bill, which is put at some 2,700 pages.
Where do we go from here?
One option, embraced by the Republicans and a few health experts and economists, is to start over (see for example Glenn Hubbard et al., “A Better Way to Reform Health Care”, WSJ, February 25, 2010). Maybe it is good business for quite a few to start over (many speeches, articles and a media blitz), but for some of us, and in my view the public at large, this is not an option; it is a “penance”. In his March 3rd address, the President urged the Congress to vote the “Reform Bill up or down”.
“Now it is time to make a decision. Let us get it done”.
To that there is but one word: “AMEN”.

Thursday, January 14, 2010

Getting the Facts, Just the Facts about Health Care Reform

Information and misinformation is the name of the game. A plethora of articles and blogs has been written, and is still being written, about the health care reform bills soon to be heading to the conference committee for reconciliation. Interest in the bills’ provisions is a good thing. Without scrutiny and debate, even of falsified facts and/or intended consequences, democracy is blemished. Let me dwell a bit on that.
Some of you may recall a TV series where the actor utters the words “facts, just the facts”. I suppose that was in relation to some narrative involving a complaint or a report on something or another. The idea was to cut to the meat of the issue in order to come up with the appropriate response, for without the facts, and “just the facts”, there would be many responses, some appropriate, others not.
The reason for this reminiscence is to assure one and all that we are indeed a democracy. One and all have the right, not only to applaud “rulers” for their efforts on our behalf but also to scorn these efforts. We are neither of the same mind, convictions, nor temperament. One may look at a picture and see a bright sky; another would see a storm looming on the horizon. No right or wrong there. One calls it as one sees it.
What prompted me to choose the title of this blog is the title of an article written by a fellow economist, Jonathan Gruber of MIT: “Getting the Facts Straight on Health Care Reform”, appearing in the New England Journal of Medicine (December 24, 2009). The author takes on the most common critiques levied against the health care reform bill passed by the House of Representatives and the bill that was then before the US Senate (since passed). The main thrust of the article is to refute these claims, in particular the charge that the reform “represents a government takeover of the health care”. The other refuted claims (six in all) deal with various aspects of the bill(s), from cost containment to erosion of the Medicare program. These “false” claims, enumerated and analyzed by Dr. Gruber, are not likely to go away any time soon. In a democracy they should not. False claims, if indeed false, will die down eventually; their contribution is to sharpen the debate and public awareness of the issues. Moreover, these claims and counterclaims teach us how to sort out the facts, the “just facts”, from the myriad of claims for and against the reform.
Take for example the most serious attack on the reform bill: that it represents a government takeover of health care. One need not lose sleep over this claim. As Dr. Gruber (as well as a few others) reminds the Journal’s readers, the Medicare program (defended by those worried about a government takeover) is a government-run insurance program which started back in 1965. Had the government had a design on turning it into a national government health insurance program, it has surely failed, or better yet is taking its time (45 years) in doing so. But then the wheel of justice seems always to grind slowly!
As I have stated earlier about information being fundamental to the survival of democracy, one need not go too far back into the 19th and 20th centuries to ascertain that the facts, the true facts, chase the false facts out of circulation. In this New Year we shall embark on a new venture called Health Care Reform. In an earlier blog, I set down the definition of reform as ‘to change or improve what was defective’, to ‘change for the better’. At this juncture, our function as economists, and the function of medical care providers, is to sort out its “reform” features so that a judgment can be made as to whether one feels comfortable calling it a reform or merely a legislation. While waiting for such a judgment, it is worthwhile to revisit the most fundamental issue that faces society today, as it did many centuries ago: defining the role of government and the limits to its power.
Economists, especially those of us who study and write about the role of the public sector, are not of one mind. However, one thing we do agree on is that the role of the government is to address “Market Failure”. I believe that the question being debated is at the heart of this: whether or not there exists a market failure in the delivery of medical care in the US. If indeed there is a market failure, the issue then becomes how far the government should go in dealing with it.
This is the question that will be answered, not today or tomorrow, but by generations to come.