
FAQs on "Getting Rid of Issuer-Pay Will Not Improve Credit Ratings"

By Douglas Lucas, Managing Director, SEDA Experts LLC


“Getting Rid of Issuer-Pay Will Not Improve Credit Ratings” showed by historical example that doing away with the rating agencies’ issuer-pay revenue practice would not improve credit rating quality. With issuance effectively zero 2007-09, S&P had no issuer-pay-related incentive to maintain inflated ratings on subprime mortgage-backed securities (subprime bonds) and collateralized debt obligations backed by subprime bonds (subprime CDOs). Yet S&P’s subprime ratings were severely inflated when compared to market prices and the credit analyses of market observers. Because subprime ratings were inflated 2007-09 in the absence of issuer-pay incentives, getting rid of issuer pay would not improve rating agency accuracy. The article points out that eliminating an incentive to inflate ratings creates neither the incentive nor the ability to produce accurate ratings. The article also lists examples of rating agency regulations that failed to improve rating accuracy.


The article sparked questions, which we will try to answer here:


• Why was S&P so slow to downgrade subprime ratings 2007-09?

• Did other rating agencies downgrade these debts faster than S&P?

• If banning issuer pay isn’t sufficient, how can S&P’s credit ratings be improved?



WHY WAS S&P SO SLOW TO DOWNGRADE?


Did issuer-pay incentives still somehow prevent timely downgrades?

Many people asking why S&P was so slow to downgrade subprime bonds and subprime CDOs seek an answer that would rescue the idea that doing away with issuer-pay would improve rating quality. Despite the lack of subprime issuance, was there not some other way S&P’s issuer-pay revenue practice discouraged timely downgrades? The most frequent suggestion along these lines is that arrangers might have taken other ratings business away from S&P if it had downgraded subprime debts as fast as it should have downgraded them. But this isn’t a very plausible explanation for the slow pace of S&P’s subprime downgrades.


Structured product arrangers would be reluctant to boycott S&P over subprime downgrades. Arrangers must offer issuers the best rating execution to retain their business. Boycotting S&P because of subprime downgrades would reduce an arranger’s ability to rating shop and might cause the arranger to lose business to an arranger not boycotting S&P. Outside the structured products area, many corporate and financial institution sectors required ratings from S&P 2007-09 for best execution. Arrangers in those sectors could not boycott S&P and retain issuance business. S&P must have understood arrangers’ constraints and must not have been afraid of an arranger boycott.


Moreover, arrangers gave up on subprime and had no motivation to push S&P to maintain inappropriately high ratings. Banking, structuring, and sales of new subprime bonds and subprime CDOs were virtually non-existent, and layoffs in subprime departments began in 2007. By early 2008, most of the few remaining subprime personnel were brokering trades in the distressed secondary market. Over 2007-08, sell-side researchers at the major arrangers were increasingly negative on subprime bonds and subprime CDOs. If arrangers were concerned about potential subprime ratings downgrades, they would never have let their research analysts publish articles that were much more pessimistic on subprime than S&P was.

Did S&P not realize how bad subprime was?

It’s hard to imagine that S&P analysts did not recognize how bad subprime bond and subprime CDO credit quality was throughout 2007, throughout 2008, and up to July 2009. But, as shown in “Getting Rid of Issuer-Pay Will Not Improve Credit Ratings,” S&P’s predicted losses on subprime bonds did not approach those of UBS, JPMorgan, Barclays, and Citigroup analysts until July 2009. It beggars belief that such ignorance persisted so long without being willful.


However, it is true that there had been a brain drain out of S&P’s structured finance area, as many of its best analysts had left the agency. With a few years’ experience, a smart rating analyst could land a more interesting and remunerative sell-side or buy-side position. Remaining analysts at S&P either had no interest in such positions or had been repeatedly passed over for those opportunities. Also, at least early on, S&P may not have had adequate systems to monitor subprime bond and subprime CDO collateral portfolios. And early on, S&P might have been focused on credit models based on borrower and loan characteristics as of the origination date. This data was often fraudulent and, with the passage of time, increasingly irrelevant.


What suggests willful rather than genuine ignorance is that at least one senior analyst at S&P grasped the severity of the situation early on. On 27 June 2007, he or she wrote in an email that if the 2006 subprime vintage performed as expected, “we could see losses over 25% of original balance.” Such losses would mean the default of AA and AAA subprime bonds, according to the head of S&P’s RMBS Surveillance Group (Settlement Agreement 2015).


Did non-issuer-pay incentives prevent timely downgrades?

Why might S&P’s ignorance have been willful? S&P analysts might have thought their jobs would be at risk if they downgraded subprime bonds and subprime CDOs appropriately. They might have worried that S&P’s senior management would take severe downgrades to mean that the new issue market for these transactions wouldn’t come back for a long time. In that case, S&P’s subprime analysts might have worried that senior management would lay them off. However, this doesn’t explain why S&P’s senior management didn’t come to its own conclusions about the credit quality of subprime bonds and subprime CDOs and make sure they were downgraded appropriately.



DID OTHER RATING AGENCIES DOWNGRADE FASTER THAN S&P?


Fitch analysts were ahead of S&P in downgrading subprime bonds and subprime CDOs, but Fitch still didn’t downgrade subprime credits enough.


December 2007 subprime CDO ratings

As of 7 December 2007, among the 815 subprime CDO tranches that S&P and Fitch rated in common, Fitch had downgraded those tranches an average of 8.4 rating notches while S&P had downgraded them only 1.9 rating notches (Lucas 2007). Among tranches that both agencies had rated AAA, Fitch had downgraded 23 to CCC or below while S&P had downgraded only four to those ratings.
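
For readers who want the notch arithmetic spelled out, here is a minimal Python sketch of how an average downgrade in notches could be computed. The rating scale and the two example moves are illustrative assumptions, not the underlying data behind the Lucas 2007 figures.

```python
# Illustrative only: a simplified notch scale and two hypothetical rating moves.
SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
         "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-",
         "B+", "B", "B-", "CCC+", "CCC", "CCC-", "CC", "C", "D"]
INDEX = {rating: i for i, rating in enumerate(SCALE)}

def notches_downgraded(old: str, new: str) -> int:
    """Number of notches between the old and new rating (positive = downgrade)."""
    return INDEX[new] - INDEX[old]

# Example: a tranche cut from AAA to CCC has been downgraded 17 notches on this scale.
moves = [("AAA", "CCC"), ("AA", "A-")]
changes = [notches_downgraded(old, new) for old, new in moves]
print(changes)                         # [17, 4]
print(sum(changes) / len(changes))     # average downgrade in notches across the sample
```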

January 2008 Ambac ratings

Fitch downgraded Ambac Assurance Corporation to AA and put it on watch for further downgrade on 18 January 2008. Ambac subsequently ceased providing information to Fitch and Fitch withdrew its rating. S&P downgraded Ambac to AA five months later on 5 June 2008.


April 2008 subprime bond ratings

In April 2008, UBS subprime analysts predicted that 292 of the 400[1] subprime bonds composing the four rolls of the ABX index would be at least partially written down, with some loss of principal (Lucas 2008a). In fact, more bonds eventually defaulted.


S&P rated all 292 bonds UBS said were going to default and Fitch rated 192 of them. Exhibit 1 shows the then-current S&P and Fitch ratings of these subprime bonds. For example, the second row, second column of the exhibit shows that UBS predicted that 24 bonds rated AAA at 1 April 2008 by S&P would default. The second row, third column shows that UBS did not think any bonds rated AAA by Fitch would default.


Exhibit 1

S&P and Fitch Ratings of ABX Underlying Bonds That UBS Predicted Would Default, 1 April 2008


Sources: INTEX, UBS Securitized Product Research. Lucas 2008a.


Exhibit 1’s fourth and fifth columns show the percentage of bonds in each rating category. The fourth column shows that the 29 bonds S&P rated CC were 10% (29/292) of the bonds UBS said would default. Meanwhile, Fitch rated 33% (44/134) CC and another 5% (7/134) C. CC and C were the only appropriate ratings for bonds destined to default, and Fitch rated a greater percentage of bonds in these categories than did S&P, 38% to 10%. Exhibit 1’s fourth and fifth columns also show that S&P rated 33% of the bonds UBS said would default investment grade (BBB- or higher), while Fitch had only 22% at such ratings.
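
The percentage arithmetic can be reproduced directly from the counts quoted above; the short sketch below does so for Exhibit 1 (the Fitch denominator of 134 is the one implied by the quoted fractions), and the same check applies to Exhibit 2’s figures further below.

```python
# Reproducing the Exhibit 1 percentages from the counts quoted in the text.
def pct(count: int, total: int) -> int:
    """Percentage of bonds in a rating category, rounded to the whole percent."""
    return round(100 * count / total)

print(pct(29, 292))        # 10 -> share of S&P-rated bonds at CC
print(pct(44, 134))        # 33 -> share of Fitch-rated bonds at CC
print(pct(7, 134))         # 5  -> share of Fitch-rated bonds at C
print(pct(44 + 7, 134))    # 38 -> share of Fitch-rated bonds at CC or C
```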


July 2008 subprime bond ratings

In July 2008, UBS subprime analysts predicted that 257 of the 480 bonds composing the four rolls of the ABX index would be completely written down, with total loss of principal (Lucas 2008b). In fact, all 257 bonds were completely written down and higher-rated tranches above them in the deals’ capital structure also defaulted.


S&P rated all 257 bonds UBS said would be completely written down and Fitch rated 121. Exhibit 2 shows the then-current S&P and Fitch ratings of these subprime bonds. For example, the second row, second column of the exhibit shows that UBS predicted that five bonds rated AA at 14 July 2008 by S&P would be completely written down. The second row, third column shows that UBS did not think any bonds rated AA by Fitch would be completely written down.


Exhibit 2

S&P and Fitch Ratings of ABX Underlying Bonds That UBS Predicted Would be Completely Written Down, 14 July 2008


Sources: INTEX, UBS Securitized Product Research. Lucas 2008b.


Exhibit 2’s fourth and fifth columns show the percentage of bonds in each rating category. The fourth column shows that the 36 bonds S&P rated CC were 14% (36/257) of the bonds UBS said would be completely written down. Meanwhile, Fitch rated 43% (52/121) CC and another 6% (7/121) C. CC and C were the only appropriate ratings for bonds destined to be completely written down, and Fitch rated a greater percentage of bonds in these categories than did S&P, 49% to 14%. Exhibit 2’s fourth and fifth columns also show that S&P rated 15% of the bonds UBS said would be completely written down investment grade (BBB- or higher), while Fitch had only 3% at such ratings.



HOW CAN S&P’s CREDIT RATINGS BE IMPROVED?

We aren’t optimistic that the quality of S&P’s ratings can be significantly improved, but we offer two suggestions that might have some positive effect.


Have Regulators Choose Rating Agencies

Tom McGuire argued (McGuire 1995) that when ratings are used in regulation, investors often join issuers and arrangers in wanting inflated ratings. If one accepts McGuire’s view, regulators are the only parties that unambiguously desire accurate ratings. So, let’s give regulators the job of policing credit rating quality.


This is fair. Regulators have made the rating agencies their unpaid contractors. In McGuire’s 1995 view, the use of ratings in regulation was “eroding the integrity and objectivity of the credit rating system.” It is only right that regulators put some effort into correcting the situation. Right now, most US regulators treat all NRSROs as equivalent. Meanwhile, the SEC has been mandated by Congress to open up NRSRO status to all comers with little gatekeeping for quality.


The idea is that each regulator selects the rating agencies whose ratings are eligible to be used in its ratings-based regulation. Regulators could select sectors and geographies within a rating agency’s coverage if they feel, for example, that an agency’s European corporate ratings are accurate but its US structured ratings are inflated. A regulator might pick one or two rating agencies for each sector and geography and accept the average or the lowest rating from approved agencies.


Each regulator would review its approved rating agency list annually or as the situation warrants. When a regulator removes a rating agency from its approved list, that agency’s ratings could not be used for newly acquired assets. The disapproved agency’s ratings might continue to be used for assets already owned by the regulated investor, but those ratings might be notched down. Regulatory treatment would depend on how poorly the regulator thinks of the disapproved agency’s ratings.
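
As a minimal sketch of how the approved-list rule might work mechanically, the code below takes the lowest rating among a regulator’s approved agencies and notches down a disapproved agency’s rating on an existing holding. The simplified scale, agency names, and notch penalty are all hypothetical assumptions, not part of the proposal’s details.

```python
# Hypothetical illustration of the approved-list rule described above.
SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "CC", "C", "D"]  # simplified, no modifiers
IDX = {r: i for i, r in enumerate(SCALE)}

def regulatory_rating(ratings: dict[str, str], approved: set[str]) -> str:
    """Most conservative (lowest) rating among the regulator's approved agencies."""
    eligible = [r for agency, r in ratings.items() if agency in approved]
    if not eligible:
        raise ValueError("no approved agency rates this asset")
    return max(eligible, key=lambda r: IDX[r])   # highest index = lowest rating

def notched_down(rating: str, notches: int = 1) -> str:
    """Treatment of a newly disapproved agency's rating on an asset already held."""
    return SCALE[min(IDX[rating] + notches, len(SCALE) - 1)]

ratings = {"AgencyA": "AA", "AgencyB": "A", "AgencyC": "AAA"}        # hypothetical ratings
print(regulatory_rating(ratings, approved={"AgencyA", "AgencyB"}))   # "A"
print(notched_down("AA", notches=2))                                 # "BBB"
```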


Because regulated investors could not use unapproved rating agencies, the rule would temper the desire of issuers and arrangers to engage unapproved, and presumably lax, rating agencies. Issuers and arrangers would instead seek ratings from agencies approved by the greatest number of regulators. Furthermore, investors would be wary of less conservative but approved rating agencies, because their ratings might be disapproved in the future, causing the investor to suffer regulatory consequences.


Limit Issuer Revenue

The idea behind getting rid of issuer-pay and eliminating ratings in regulation is to better align rating agencies with their original purpose of helping investors make investment decisions. The healthy situation that McGuire envisioned was that rating agencies rely on and protect their credibility with investors. “As long as the product … being sold to issuers [is] credibility with investors, there [is] a natural force, the need to retain investor confidence, to countervail the pressure of rating fees.” Our suggestion is to encourage rating agencies to become more credible to investors.


The regulatory requirement would be that a rating agency’s issuer-pay ratings revenue could be no greater than some percentage of its investor-pay research revenue, say 100%. In that case, the rating agency must get at least half its revenue from investors. Currently, issuer ratings revenue dwarfs investor research revenue, so rating agencies would need to massively increase their investor research revenue to retain their existing issuer revenue. By forcing such a balance, the rule would make rating agencies cater to investors and improve their ratings and credit analyses.
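
The arithmetic of the cap is simple; the sketch below checks compliance assuming a 100% cap, with hypothetical revenue figures.

```python
# Compliance check for the proposed cap: issuer-pay ratings revenue may not
# exceed cap_pct of investor-pay research revenue. Figures below are hypothetical.
def complies(issuer_revenue: float, investor_revenue: float, cap_pct: float = 100.0) -> bool:
    return issuer_revenue <= (cap_pct / 100.0) * investor_revenue

# With a 100% cap, compliance implies investors supply at least half of total revenue.
print(complies(issuer_revenue=900e6, investor_revenue=1_000e6))  # True  (investor share ~53%)
print(complies(issuer_revenue=900e6, investor_revenue=300e6))    # False (investor share 25%)
```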


We think rating agencies should seize the opportunity to gain investor revenue. According to one estimate, there are 74,000 credit analysts in the United States making an aggregate $5 billion a year (Sokanu 2020). These numbers suggest that rating agencies are falling far short of the market’s demand for credit analysis and have a significant revenue opportunity. Rating agencies should exploit their economies of scale, produce superior credit analysis, and fill the gap between the credit analysis they currently provide and the credit analysis the market demands.


One piece of research that would be a hit with investors is explaining discrepancies between ratings and spreads, such as when a higher-rated credit has a wider credit spread than a lower-rated credit. Investors would love to know about price dislocations in either direction. And if an analyst can’t justify his rating in a research piece, it’s time for a rating committee on the issuer!
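
As a rough illustration of the kind of screen such research might start from, the sketch below flags pairs of credits where the better-rated one trades at the wider spread. The rating scale, bond names, and spreads are hypothetical.

```python
# Flag rating/spread discrepancies: a higher-rated credit trading wider than a lower-rated one.
SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]   # simplified scale
IDX = {r: i for i, r in enumerate(SCALE)}

bonds = [                                  # (name, rating, spread in bps) -- hypothetical
    ("IssuerX 2028", "AA", 310),
    ("IssuerY 2028", "BBB", 180),
    ("IssuerZ 2029", "A", 150),
]

# Every pair where the better-rated bond has the wider spread.
flags = [(hi, lo) for hi in bonds for lo in bonds
         if IDX[hi[1]] < IDX[lo[1]] and hi[2] > lo[2]]

for hi, lo in flags:
    print(f"{hi[0]} ({hi[1]}, {hi[2]}bp) trades wider than {lo[0]} ({lo[1]}, {lo[2]}bp)")
```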


An investor focus would have encouraged S&P to downgrade subprime-related credits more rapidly 2007-09. An investor focus would also have opened up research topics S&P ignored. One such topic was whether there were any good values among distressed subprime debts. Another was which subprime CDOs had documentation flaws that made the proper cash distributions to tranches ambiguous. Issuer ratings revenue and investor research revenue are also complementary: as new subprime rating fees declined because of the credit crisis, investors’ need for credit research increased. Fulfilling this investor need would have kept subprime analysts busy and perhaps not worried about losing their jobs.


Implementation of this rule would have to be careful to prevent gaming. To make sure that ratings are the work of the best and brightest, the same analysts who produce credit research must also produce ratings. Issuers that are also investors can’t be allowed to pay for ratings by buying research. Fees for database distribution of ratings should not be counted as research revenue.

 

[1] The penultimate AAA index had not yet been launched, so there were only 400 underlying bonds in all the ABX indices.


 

Douglas Lucas has over 30 years of experience in the financial industry and is a world-class expert in fixed income and structured product credit risk. He created Moody’s CLO rating methodology, including the WARF and diversity score measures of portfolio credit risk. At UBS, he was for 8 years consistently voted onto Institutional Investor’s Fixed-Income Research Team for CLO and CDO research.


Doug Lucas worked at Salomon Brothers, JPMorgan, UBS, and Moody’s. He created Moody’s first default study and its CLO rating approach in 1989 (including WARF and Diversity Score metrics). At UBS, 2000-08, he was voted #1 for CDO research in Institutional Investor’s poll. From 2009 to 2018, he managed Moody’s most-read publication, responsible for 18% of Moody’s total research readership. Some of his articles on CLOs, CDOs, and other structured finance products; default correlation and credit analysis; and rating agency regulation can be found at independent.academia.edu/DouglasLucas2.


 

References

Lucas, Douglas J. 2007. “Market Commentary,” UBS CDO Insight, 13 December 2007, page 2.


Lucas, Douglas J. 2008a. “Rating the Rating Agencies on Subprime.” Mortgage Strategist. New York: UBS, 1 April 2008.


Lucas, Douglas J. 2008b. “Rating Agency Optimism and the ABX.” Mortgage Strategist. New York: UBS, 15 July 2008.


McGuire, Thomas J. 1995. Ratings in Regulation: A Petition to the Gorillas. New York: Moody’s Investors Service, June 1995.


Settlement Agreement 2015. “Statement of Facts” beginning page 46, 2 February 2015. The case is United States v. McGraw-Hill Companies, Inc., and Standard & Poor’s Financial Services LLC, No. CV 13-00779- DOC filed in US District Court for the Central District of California on 4 February 2013. https://www.sec.gov/Archives/edgar/data/64040/000006404015000004/mhfi-ex1034x20141231xq4.htm

Sokanu 2020. Sokanu describes itself as “the Internet’s largest career advancement platform”; these statistics are from its page on credit analyst jobs. https://www.careerexplorer.com/careers/credit-analyst/job-market/.
