The Deep Roots of the Great Recession
It has now been six years since the start of the subprime financial crisis, an event that many people mentally connect with the September 2008 collapse of Lehman Brothers. But the crisis that erupted in the financial markets that year actually began several years earlier. Just as the Great Crash of 1929 started with the unraveling of the Florida real-estate bubble and other events in the mid-1920s, the financial bust of 2008 was visible years in advance, as real-estate investors and lenders began to show mounting signs of stress.
Most observers connect the crisis with events in the financial markets: securities fraud by Wall Street investment houses, bad mortgages extended by lenders, and, even worse, policy from Washington that encouraged an artificial increase in homeownership. But the roots of the crisis go far deeper than any single decision, action, or policy originating in the 2000s. Thus, when people consider the financial meltdown of 2008 and ask whether the right policy tools were used to combat it, the first thing we need to ask is: what is the real question?
Did we overreact to the crisis of 2008? Or did we underreact? Was the fiscal and monetary stimulus applied since 2008 the best way to address the crisis? What lessons can we learn from the financial meltdown? All of these questions suppose that we actually understand why the crisis occurred, and in particular, whether it was due to a set of narrow problems originating in the financial markets around the time of the event or instead part of a broader set of economic and political factors that go back decades.
Some argue, for example, that the roots of the housing crisis lie in the push for "affordable housing" during the Clinton administration; in fact, the wellsprings of the crisis stretch back decades earlier. Consider the federal funds rate maintained by the Federal Reserve going back to the 1950s, when the U.S. central bank regained its independence from the U.S. Treasury after years of keeping interest rates artificially low to support the war effort. The history of short-term interest rates in the U.S. not only illustrates their ebb and flow over the past sixty years, but also serves as a surrogate for other factors, including changes in demographics and inflation, that have had a profound impact on the U.S. economy over that period.
In the 1950s and 1960s, interest rates were relatively low as the nation sought to grow after the privations of the Great Depression and the economic constraints imposed during World War II. As the demographic bulge known as the "baby boom" reached adulthood in the 1970s and 1980s, however, interest rates rose as inflation became a major public concern. During this same period, federal spending and the growth of public debt also surged, adding a new complication to the economic-policy mix.
Even as issues like inflation and the federal deficit became public-policy concerns, though, growth and job creation remained the top priorities for both political parties. In 1978, Congress passed the Full Employment and Balanced Growth Act, better known as the Humphrey-Hawkins Act, to mandate that unemployment should not exceed 3 percent for people twenty years old or older, and inflation should be reduced to 3 percent or less, provided that its reduction would not interfere with the employment goal. And the law provided that by 1988, the targeted inflation rate should be zero, again, provided that pursuing stable prices would not interfere with the 3 percent target for unemployment.
The legislation was named after two of the more liberal members of Congress at the time, Hubert Humphrey of Minnesota and Augustus Hawkins of California, two men who could be generously described as socialist in their economic orientation. Humphrey actually wanted the government to explicitly guarantee a job for all Americans, but fortunately, Congress balked at creating that level of personal entitlement. Despite this so-called dual mandate of low or no inflation and "full employment," the Fed has, since the late 1970s, consistently used lower and lower interest rates to boost nominal job growth.
Through the S&L crisis of the 1980s, the Fed kept interest rates relatively high, but by the early 1990s rates had gradually fallen, and they would remain at low levels through much of the decade. During this period, baby boomers reached their peak earning years and federal deficits nearly disappeared thanks to swelling tax rolls. By the end of the 1990s, however, a stagnant housing sector again caused Washington to resort to policy machinations to boost short-term job creation and consumer-spending growth.
The Fed under Chairman Alan Greenspan had been lowering interest rates for months before the September 11, 2001 terrorist attacks, then accelerated the pace of easing and kept rates extremely low, with Fed funds trading below 1 percent, until well into 2004. By the time the Fed began raising interest rates in mid-2004, the U.S. mortgage market was booming and the Wall Street deal machine was already spinning out of control like a runaway nuclear reactor.