Yesterday, the Wall Street Journal had a story, “Economy Can Withstand More Mortgage Foreclosures,” which said,
About 1.1 million foreclosures are likely to result from jumps in monthly payments on adjustable-rate home-mortgage loans made in 2004 through 2006, according to a study by First American CoreLogic.
Christopher Cagan, director of research at the real-estate-information concern based in Santa Ana, Calif., said those foreclosures are likely to occur over six to seven years and won’t be enough to damage the national economy.
Dean Baker of Beat the Press took issue with the WSJ report of the study because it focused on the “housing prices don’t change from 2006” scenario; Baker noted that year-end 2006 prices were 3.1% below 2005’s and that unsold inventories are also significantly higher. Felix Salmon then criticized Baker’s comments, saying that the Cagan study found that the negative equity problem was getting better, and that the 10% price decline scenario Baker focused on was unlikely, particularly since the Cagan study reported generally improving home sale prices in 2006.
I encourage those of you who have the patience to look at the Cagan report. It is a very impressive piece of work. It is also clearly based on some pretty big hairy modeling.
I’ve learned it is ALWAYS important to look at the assumptions and logic of analyses like this.
Now, having been involved in some hairy modeling exercises myself, I can tell you that the outcomes (as in any model) depend on the assumptions. The bigger and more complicated the model, the more the interaction of assumptions can lead to outcomes that don’t seem correct. Perhaps Cagan has had more success, but pretty much every first attempt at a model that I have been involved with has produced extreme results, which then led us to go back and inspect what we did wrong to get such a weird outcome. So these models are tweaked to produce “reasonable” outcomes more often than anyone wants to admit. And since most people inevitably have a predisposition as to what a “reasonable” or “acceptable” outcome is, the model will inevitably be tweaked in that direction.
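A toy illustration of the compounding problem (my own, not anything from the Cagan study): when a model multiplies several assumption-driven factors together, errors that look modest one at a time compound into a large swing in the headline number.

```python
# Toy illustration (not from the Cagan study): if a model multiplies
# several assumption-driven factors together, modest errors in each
# compound into a much larger error in the output.
base_forecast = 1_000_000  # hypothetical baseline, e.g. projected foreclosures

# Suppose three assumptions are each optimistic by only 10%.
assumption_errors = [0.10, 0.10, 0.10]

compounded = base_forecast
for err in assumption_errors:
    compounded *= (1 + err)

print(f"Baseline: {base_forecast:,.0f}")
print(f"With three 10% errors compounding: {compounded:,.0f}")  # ~1,331,000, a 33% swing
```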
And there is evidence of data-fitting, or assumption-fitting, here. An earlier version of Cagan’s study reported that in 2005, 29% of the subprime mortgages issued had negative equity. More data has since been added to his model: six million homes from 136 counties in the heartland, where real estate price appreciation was less pronounced (page 8).
Salmon misses a bit of revisionist history. In his blog, he says:
29% of the mortgages originated in 2005 had negative equity. In this study, just 9% of the mortgages originated in 2005 had negative equity, and less than 18% of the mortgages originated in 2006 had negative equity. Why is that? It’s because, contra Baker, prices did actually rise in 2006.
First, if you read the text (page 11), 11.1% of the ARMs originated in 2005 have negative equity. It’s the ARMs you need to worry about, because when the rates reset, if borrowers can’t meet the payments, they need to refinance, and if they don’t have enough equity, bye-bye house.
Second, there is no way of parsing how much of the improvement Cagan claims is attributable to the change in sample versus his alleged improvement in housing prices. On page 9, Cagan comments that his study this year found that fewer mortgages originated in the immediately preceding year had negative equity:
This “17.6 percent for 2006” [of first mortgages financed that year having negative equity] is better than the previous study’s “20.0 percent for 2005” because in many markets values continued to rise into 2006 on a year-over-year basis, and because of the addition of six million properties…
Huh? The stat Cagan is highlighting is the equity in the mortgage IN THE YEAR IT WAS ORIGINATED. Year-over-year appreciation wouldn’t factor in (intra-year appreciation could, but he doesn’t mention that). His stat reflects mortgage origination policies (i.e., how lax lenders were at the time the mortgage was signed) and his sample change.
Now, based on press reports, there is no reason to think mortgage lending practices became more stringent in 2006. If anything, one might have thought they became even more lax. Thus, his alleged improvement in negative equity appears entirely due to his change in sample. Query whether this was due to his having been pressured (it is far from unheard of for reputable organizations to be forced to recant very well done work that produces unpopular findings).
Now it may be his new sample is indeed better; he argues it is more representative. But of what? Of the distribution of housing in America? Or of the origination of subprime loans? People are looking to his study, perhaps overmuch, for an answer to the latter question, when that may not have been his primary interest. But it is disturbing that in his own text he compares non-comparable samples, and Salmon repeats that error.
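For what it’s worth, the sample-versus-price decomposition isn’t hard to do if you have the loan-level data: just recompute this year’s negative-equity rate for the 2005 cohort restricted to the counties in the earlier study. A minimal sketch, assuming a hypothetical loan-level table (the column names and data are mine, not the report’s):

```python
# A minimal sketch of the decomposition I'd want to see. Assumes a
# hypothetical loan-level table with columns: county, orig_year, equity.
# None of this data or these field names come from the Cagan report.
import pandas as pd

def neg_equity_rate(loans: pd.DataFrame) -> float:
    """Share of loans with negative equity."""
    return (loans["equity"] < 0).mean()

def decompose(loans: pd.DataFrame, original_counties: set, year: int):
    cohort = loans[loans["orig_year"] == year]
    # Restricting to the earlier study's counties isolates the
    # price/valuation effect; the gap between the two rates is
    # what the sample change contributed.
    old_sample = cohort[cohort["county"].isin(original_counties)]
    return neg_equity_rate(old_sample), neg_equity_rate(cohort)

# Usage (with real data in hand):
# old_rate, new_rate = decompose(loans, original_counties, 2005)
# sample_effect = new_rate - old_rate
```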
The next bothersome bit is on pages 28-29. The study classifies ARMs into four groups. Group A has payments increasing 25% or less, which the study assumes will probably not lead to default. Group B will have payments increasing from 26 to 50%. If 30% of income was going to mortgage payments prior to reset, payments will consume 38 to 45% of income afterwards. Cagan opines, “The increase will cause definite strain but will not necessarily lead to default.”
“Group B resets…will often be bearable, though not without considerable pain.” He assigns a default probability of 40%; I would have liked to see him do a sensitivity analysis on 30, 50, 60, and 70%. This assumption has a very big impact on outcomes, and I’d like to see exactly how big.
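Here is a back-of-the-envelope sketch of the kind of sensitivity run I have in mind. The four-group structure and the 40% Group B probability are from the study; every count and every other probability below is a placeholder I made up for illustration.

```python
# Back-of-the-envelope sensitivity check of the kind I'd like to see.
# The study's four reset groups are real, but every count and every
# probability below except Group B's 40% is a made-up placeholder.
group_counts = {"A": 2_000_000, "B": 1_500_000, "C": 800_000, "D": 400_000}
default_probs = {"A": 0.05, "C": 0.60, "D": 0.80}  # placeholders

# Sweep the Group B default probability and watch the total move.
for p_b in (0.30, 0.40, 0.50, 0.60, 0.70):
    probs = {**default_probs, "B": p_b}
    defaults = sum(group_counts[g] * probs[g] for g in group_counts)
    print(f"Group B default prob {p_b:.0%}: {defaults:,.0f} projected defaults")
```

On these placeholder counts, each ten-point move in the Group B assumption shifts the projected total by 150,000 defaults, which is exactly why the assumption deserves a sensitivity table rather than a single point estimate.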
The next part of his analysis I have trouble with is his classification of houses with 5 to 15% equity as “low” equity strain and +5 to -5% equity as “medium” equity strain. And he assumes only a 5% cost to refinance or sell. 5% is an unrealistically low cost for a sale (you need to allow for the costs of making the house as “sale ready” as possible). He may have used this figure to average the costs of refinancing versus sale, but in this new post-subprime market, refinancing is not a realistic option for houses with less than 15% equity where the owner has defaulted.
He assigns a 50% “probability of equity difficulty,” which presumably means inability to sell or refinance, to the +5% to -5% equity group. That looks just plain nuts to me. This analysis is way behind changes in attitudes among lenders, and the implosion of the subprime sector. In a normal mortgage market, loan-to-value ratios over 80% were unheard of, yet he assumes that many borrowers will be able to borrow on precisely these sorts of terms. Similarly, I don’t see how you sell a house with around zero equity and come out whole (in addition to the seller having to pay brokerage and other closing costs, the bank will collect any arrearages out of the sale proceeds).
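To make the zero-equity arithmetic concrete, here is a simple worked example with illustrative numbers (none of them from the report):

```python
# Worked example of why a near-zero-equity sale doesn't make the
# seller whole. All dollar figures are illustrative, not from the report.
sale_price   = 300_000
loan_balance = 300_000            # roughly zero equity
arrears      = 9_000              # a few missed payments plus fees, hypothetical
selling_cost = 0.06 * sale_price  # brokerage plus closing, vs. Cagan's 5%

proceeds  = sale_price - selling_cost
shortfall = loan_balance + arrears - proceeds
print(f"Seller comes up short by ${shortfall:,.0f}")  # $27,000 on these numbers
```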
I could go on, but you get the point. This study has several variables where changes in assumptions will have a significant impact on the outcome. These assumptions are all optimistic, more optimistic than current conditions suggest is warranted. In addition, the sample was changed in a way that appears to improve the outcome.
The proof is in the pudding, which in this case is the model’s output. It predicts that 12% of subprime loans will default due to resets. Yet default rates for adjustable subprimes have already reached nearly 14.5%, and most observers expect things to get worse before they get better. I’d therefore discount the report’s conclusions considerably.