Credit Rating Organizations (CROs) help potential counterparties assess the creditworthiness of individual bond issues. They earn profits by producing classificatory information that regulators find helpful and that investors and guarantors use to compare credit spreads on issues of risky debt. However, CRO revenues come not from the investor or regulatory side, but from fees that issuers pay CROs for analyzing the credit quality of different issues.
Because regulatory agencies and many investors and bond insurers use CRO credit ratings to substitute for their own due diligence, the contract interest rate an issuer has to pay falls whenever its credit rating rises. Although accurate ratings benefit investors and issuers alike, issuers are asked to pay the freight because, once it is announced, a security’s credit rating becomes public knowledge. This asymmetric arrangement poses an obvious conflict of interest for CRO managers. Borrowers have an incentive to play different CROs against one another and to hold out for higher-than-appropriate ratings.
For issuers and securitizers, the counterincentive to seeking a corrupt rating is that they also need to employ a CRO with a well-established reputation for honest and accurate work. In turn, for established CROs, the time and effort required to build such a reputation and the bureaucratic hurdles they have surmounted in being named by the Securities and Exchange Commission (SEC) as Nationally Recognized Statistical Rating Organizations (NRSROs) create a dual barrier for would-be new entrants into the CRO industry. Reinforced by the SEC’s demonstrated reluctance to revoke NRSRO status and by established firms’ tendency to acquire lesser players, these barriers give incumbent CROs a leg up in foreign venues as well. The resulting oligopolistic market structure helps to explain why major rating organizations can get away with calling themselves “agencies” and do not compete either in the models they use to assess credit risks or in the criteria they use to map their models’ forecasts into different rating classes. This similarity in methods means that their errors are likely to be similar, too.
The core problem in the securitization crisis is to understand how and why securitizers, CROs, and bond insurers drastically over-rated and over-sized the highest-quality tranches of structured-finance obligations. Part of the explanation lies in the incentive conflict that managers and line employees of such firms faced between preserving the long-run value of their firm’s reputation and chasing the volume-related bonuses and raises that short-run revenue expansion can generate. Errors in classification are slow to reveal themselves; they can be established only after a long and variable lag. This lag means that, to keep a firm’s reputation strong over the long run, compensation structures must include features that reward employees for taking the long view and penalize them for succumbing to short-termism. Given the high proportion of revenues earned in recent years at the top three ratings firms (Moody’s, Standard & Poor’s, and Fitch) from rating securitizations, individual managers and analysts must have been sorely tempted to risk the firm’s reputation to secure or retain the repeat business of the biggest issuers, and it is doubtful that salary structures fully neutralized this temptation. According to Portes (2008), 44 percent of Moody’s 2006 revenue came from advising issuers first on how to collateralize and assign (i.e., to slice or “tranche”) cash flows from pools of securitizable assets to obtain a desirable package of ratings and then going on to rate the credit risk of the various packages that it and other CROs had helped to construct.
What’s Different About Rating Structured Instruments?
In principle, each rating should be interpreted as an interval estimate: i.e., as the sum of a point estimate and a two-sided – but asymmetric – margin for error. The asymmetry comes from the fact that the upside of returns is limited by the terms of debt contracts, while the tail of unfavorable outcomes includes numerous states in which substantial losses occur.
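The asymmetry in a debt claim’s margin for error can be illustrated with a minimal simulation (all parameters here are hypothetical, chosen only for illustration, not taken from any rating model): each draw’s upside is capped at the promised yield, while default outcomes spread losses over a wide range.

```python
import random

random.seed(0)

PROMISED_YIELD = 0.06  # upside capped by the debt contract (hypothetical)
DEFAULT_PROB = 0.02    # assumed point estimate of annual default probability

def realized_return():
    """One draw of a risky bond's annual return: capped upside, fat downside."""
    if random.random() < DEFAULT_PROB:
        return -random.uniform(0.2, 0.8)  # loss severity varies widely
    return PROMISED_YIELD

draws = [realized_return() for _ in range(100_000)]
mean = sum(draws) / len(draws)
print(f"mean return: {mean:.4f}")
print(f"upside margin:   {max(draws) - mean:.4f}")
print(f"downside margin: {mean - min(draws):.4f}")
```

Even with a low default probability, the downside margin dwarfs the upside margin, which is why a symmetric error band would misstate the risk of a debt claim.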
When a CRO does a good job of rating bonds or multipart securitization structures, observed default and loss rates in different rating classes correlate closely with the riskiness implied by the grade that securities in each class had previously received. Because securitized instruments are claims on a static portfolio, servicers can do little to mitigate the potentially nonlinear downside impact of adverse events on investor returns. Even if point estimates of loss exposure were the same for a bond and a securitized claim, their downside margins for error would be very different. This asymmetry and the absence of through-the-cycle data on innovative instruments make it misleading for CROs to employ the same set of letter grades to rank the through-the-cycle loss exposures of the tranches of structured deals and of ordinary bonds.
Even on ordinary bonds, ratings are lagging indicators whose changes tend to come too late to help investors avoid losses when an issuer’s credit standing weakens or to achieve gains when an issuer’s prospects improve. This leads scholars to question whether on most deals CROs add enough informational value to justify their existence. However, because of the growing complexity of structured instruments and CRO access to nonpublic information, there can be no doubt that ratings were central to the successful placement of synthetic securities. Trusteed investors flocked to the highly rated tranches of structured securitizations precisely because they promised miraculously to combine the investment-grade ratings that legal standards imposed on their portfolios with extraordinarily high yields. Regulatory standard setters did not challenge this promise. Instead, the SEC and other regulators effectively ceded to CROs their public-interest responsibility for monitoring the suitability of innovative instruments for particular investors and disclosing investor loss exposures in structured financial instruments.
Proposed reforms would lessen the CROs’ role in establishing suitability and capital adequacy. Several seek to rework the details of CRO and issuer interaction, which is much more extensive in rating complex structures of securitized debt than in rating a straightforward bond issue. The process of rating a structured product proceeds as a sequence of bilateral negotiations that starts with the issuer specifying the mix of credit ratings it is looking for. CROs compete by specifying the subordination structure and level of credit support needed to obtain the ratings desired. That a give-and-take between CROs and securitizers did occur is suggested by the high concentration of CRO forecasts for structured deals that lie at “notches” just above the thresholds that would move the different tranches into the next lower rating class (Mason and Rosner, 2007). This implies that the associated interval estimates on these issues regularly dipped into at least the next-lowest rating class and, when the subordination provided by lower classes was thin, into classes well below that.
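The role of subordination in this negotiation can be made concrete with a stylized two-tranche sketch (the loss rates and subordination level below are hypothetical, not drawn from any actual deal): pool losses hit the junior tranche first, and the senior tranche is impaired only when losses exceed the credit support beneath it.

```python
def tranche_losses(pool_loss_rate, subordination):
    """Allocate pool losses sequentially (fractions of the pool): the junior
    tranche, whose size equals `subordination`, absorbs losses first; only
    the excess hits the senior tranche."""
    junior_loss = min(pool_loss_rate, subordination)
    senior_loss = max(0.0, pool_loss_rate - subordination)
    return junior_loss, senior_loss

# With 5% subordination, a 4% pool loss leaves the senior tranche whole...
print(tranche_losses(0.04, 0.05))  # junior absorbs all of it
# ...but when credit support is thin relative to losses, the senior class
# is breached: an 8% pool loss wipes out the junior tranche and eats into
# senior principal.
print(tranche_losses(0.08, 0.05))
```

The negotiation described above amounts to choosing `subordination` just large enough that the senior tranche’s modeled loss stays inside the target rating’s threshold, which is why so many deals clustered just above those thresholds.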
Regulatory concern about ratings shopping is reflected in the June 2008 agreement, negotiated by the New York Attorney General, to separate the pricing of CRO structuring and credit-rating services, and in the SEC’s proposed ban on allowing individual CROs to rate any deal that they have tranched and collateralized. Whether or not separating the rating and structuring functions would have a beneficial effect, assessing the risk of a portfolio of infrequently traded and innovative instruments and monitoring the factors that change this riskiness over time pose difficult problems of data verification and analysis. Under a prudent-man standard, CROs could and should have identified and addressed these problems more carefully. A conscientious applied econometrician would have felt duty-bound to widen the margins for error assigned to complex mortgage securitizations to reflect the modeling, sampling, legal, and documentation risks that investors were asked to assume. Had the industry been less oligopolistic, I believe competitive pressure would have led independent parties to be tasked with auditing the models and criteria on which individual CRO ratings were based and with fact-checking the data used to estimate model parameters. Most importantly, conscientious outside reviewers would have insisted that CROs update their models and rating methods as soon as evidence began to develop that loan pools of the 2005 and 2006 vintages were declining sharply in credit quality.
The models that CROs used were known to incorporate unverifiable and overly convenient assumptions about correlations, tail risk, and marketability that were bound to break down (i.e., to lose applicability) in periods of stress. Many deal structures were so new and untested that real-world data on default frequency and loss given default had to be drawn entirely from an unrepresentative period of rising house prices. To fashion reliable through-the-cycle ratings, available data needed to be supplemented by synthetic data incorporating reasonable estimates of effects that might unfold in times of price decline and market stress.
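A sketch of what such supplementation might look like, with entirely hypothetical numbers: benign boom-period loss observations are augmented with synthetic draws generated under assumed house-price declines, which fattens the estimated loss tail that a through-the-cycle rating should reflect.

```python
import random

random.seed(1)

# Observed (boom-period) annual loss rates on a loan pool: uniformly benign.
observed = [max(0.0, random.gauss(0.01, 0.003)) for _ in range(200)]

# Hypothetical stress mapping: defaults rise and recoveries fall as
# collateral values drop. The multiplier is an illustrative assumption,
# not an estimate from any CRO model.
def stressed_loss(base_loss, price_decline):
    return base_loss * (1 + 8 * price_decline)

# Synthetic draws under assumed 10-30% house-price declines.
synthetic = [stressed_loss(l, random.uniform(0.1, 0.3)) for l in observed]
combined = observed + synthetic

def pct99(xs):
    """99th-percentile loss rate of a sample."""
    return sorted(xs)[int(0.99 * len(xs))]

print(f"boom-only 99th-percentile loss: {pct99(observed):.4f}")
print(f"with stress scenarios:          {pct99(combined):.4f}")
```

Rating off the boom-only sample understates tail losses by construction; the point of the synthetic augmentation is to force the model to see states of the world that the historical window never contained.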
CROs Need to Take Responsibility for Their Mistakes
Formally, a CRO’s aggressive declaration that an adequately documented “true sale” of a particular loan pool had taken place was a key step in moving the assets off originator and securitizer accounting balance sheets. Although the standards for validating such transfers are set by the Financial Accounting Standards Board, CROs apparently felt no duty to describe how fully the ownership of the pool could be documented, and CRO judgments on this matter could have had no legal standing in any case.
The quality of CRO analysis was and is further undermined by CRO efforts to avoid legal responsibility for any mistakes. Despite their intense and critical involvement in designing securitization structures, CROs claim only to be expressing an “opinion.” They insist that the constitutional right of free speech protects them from lawsuits for damages suffered by investors who chose to rely on what might turn out to be incompetent or negligent opinions. To lay a foundation for this defense, CROs routinely incorporate language into their reports stating that it is “unreasonable” for anyone to rely on their “mere opinions,” which should not be construed by anyone as “investment advice.” Ironically, the reputational damage CROs have absorbed from massively over-rating structured securitizations has imparted an element of unintended truth to these weaselly disclaimers. For investors and regulators, that damage has undermined the value of CRO brands and is forcing the firms to assign ratings more conservatively as a way to rebuild confidence in the value of their work. Regulators will find that this overdue conservatism can be reliably maintained only if CROs can be made to take ex post responsibility for future mistakes.
Because CRO fees were so large and because synthetic securities could not legally have been sold in large quantities to trusteed and municipal investors without the blessing of high ratings, the courts might impose this responsibility on CROs in any case. Whenever someone (say, a lawyer) collects a large fee for communicating his or her opinion to another party, the distinction between opinion and advice seems to break down. The sheer size of the fees collected for forming and issuing opinions about the riskiness of complex securitizations renders hollow the claim that users should not — and therefore would not — rely on them. In fact, CROs had to foresee and value that reliance. They should share responsibility with any securitizer and insurer of these deals who distorted or failed to verify the value of the analysis on which CRO “opinions” ultimately were based.
Going forward, the problem is to find reliable ways to express and value differences in risk on structured instruments. One way is for CROs to bond the quality of their work by subjecting it to effective independent review (Goodhart, 2008) and by setting aside some of their fees in a fund from which third-party special masters or expedited civil judgments could indemnify investors for provable harm in instances where the independent reviewers find that negligence or misfeasance occurred. Only if the analysis by which credit quality is ranked can be adequately bonded will bid-ask spreads on securitized instruments fall back toward those quoted on conventional debt.
Goodhart, Charles, 2008. “How, If at All, Should Credit Rating Agencies (CRAs) Be Regulated?” London: London School of Economics (July 7, unpublished).
Mason, Joseph R., and Josh Rosner, 2007. “Where Did the Risk Go? How Misapplied Bond Ratings Cause Mortgage-Backed Securities and Collateralized Debt Obligation Market Disruptions.” Available at SSRN: http://ssrn.com/abstract=1027475.
Portes, Richard, 2008. “Ratings Agency Reform,” Vox website (http://www.voxeu.org), posted January 22.
For helpful comments on an earlier draft, the author wants to thank Jerry Caprio, Aslı Demirgüç-Kunt, Ramon DeGennaro, James Moser, and James Thomson.