How Effective Is the SEC in Identifying Financial Reporting Errors?

The Securities and Exchange Commission (SEC) Division of Corporate Finance (DCF) reviews and regulates information in public filings to “deter fraud and facilitate investor access to information necessary to make informed investment decisions.”

Commentators criticize the SEC for being an ineffective regulator, with specific concerns about its ability to identify financial reporting errors or fraud in companies such as Enron.  These concerns led to a Government Accountability Office (GAO) review, new regulations codified in the Sarbanes-Oxley Act, calls for increased funding, and a renewed focus on detecting accounting errors.  Despite ample anecdotal evidence of high-profile misses, there is no widely available metric of SEC performance.  The SEC does provide an annual performance report, but it addresses only work volume and response times, not effectiveness in detecting financial reporting errors or fraud.  In my paper, “Examining the Examiners:  SEC Effectiveness in Identifying Financial Reporting Errors,” I propose a measure of SEC effectiveness based on financial reporting error detection rates.

I measure SEC error detection rates using a sample of 17,624 publicly available comment letters between 2005 and 2014. When the SEC reviews a company’s financial statements, there are three potential outcomes:

  1. The SEC identifies an error that causes a restatement (error caught – 1,114 observations).
  2. The SEC fails to identify an error, and the error is subsequently identified by another party, leading to a future restatement (error missed – 1,228 observations).
  3. The financial report contains no errors (15,282 observations).

I use the ratio of errors caught to total errors as a measure of effectiveness, which is 47.6 percent over the sample period.  Error detection rates vary over time.  The error detection rate was 59.7 percent in 2005, and remained above 55 percent until 2008, when it dropped to 39.9 percent.  The error detection rate remained below 45 percent until 2014, when it increased to above 60 percent.
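The headline rate follows directly from the sample counts above. A minimal sketch of the calculation (the counts are those reported in the paper; the variable names are my own):

```python
# Sample counts from the paper's 17,624 comment-letter reviews.
errors_caught = 1_114   # SEC-identified errors leading to a restatement
errors_missed = 1_228   # errors later identified by another party
no_error = 15_282       # reviews with no reported error

# Effectiveness = errors caught as a share of all errors.
total_errors = errors_caught + errors_missed
detection_rate = errors_caught / total_errors

print(f"Total reviews:  {errors_caught + errors_missed + no_error}")  # 17624
print(f"Detection rate: {detection_rate:.1%}")  # 47.6%
```

Note that error-free reviews do not enter the ratio; the measure conditions on an error actually being present in the filing.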

I find that the SEC is more likely to identify errors related to fixed assets, intangible assets, and financial statement presentation, and more likely to miss errors related to income taxes, perhaps because all transactions have tax implications.  I find the SEC is more likely to identify severe, pervasive errors and errors related to fraudulent activity.  This suggests that the SEC directs its scarce resources toward the more problematic areas.

I also examine how four factors (enforcement intensity, workload, compensation, and the individual styles of SEC examiners) affect error detection rates. Regarding enforcement intensity, I find greater enforcement resources, measured as review team size and salary, are positively associated with error identification.  I measure team size by counting the examiners listed on each comment letter and team salary by cross-referencing each examiner to government payroll data.  I also analyze comment letter content to understand the nature of enforcement intensity.  I find the SEC is more likely to detect an error when it asks additional accounting questions or reissues comments in response to inadequate firm responses.

Next, I examine the association between workload and error detection rates. SEC examiners review recurring filings (10-Ks or 10-Qs) and transaction-related filings.  I predict an unexpected increase in transactional filings (for example, due to a high number of IPOs, spin-offs, or acquisitions) will reduce the time examiners spend reviewing each individual filing, increasing the likelihood of missing an error.  Consistent with this prediction, I find the review team is more likely to miss a financial reporting error when SEC examiners face an increase in workload.

Regarding the role of compensation as an incentive mechanism, I study the relation between examiner compensation and the identification of errors.  Between 2005 and 2014, I document an increase in inflation-adjusted examiner salaries, a negative trend in the error detection rates, and no relation between compensation and error detection.  I find some evidence that the SEC rewards both workload and performance, although the reward for identifying an error appears small, less than $1,000 per error.

Finally, academic research argues that differences in outcomes stem from the economic agents who apply the securities laws, rather than from the “laws on the books.”  Different examiners review the same firm in different periods, which allows me to test whether particular examiners have a systematic effect on error detection rates.  I find evidence that specific examiners differ in their ability to identify errors in the financial statements, confirming concurrent academic work on the subject.  The specific attributes of individual examiners that account for differences in error detection remain unknown and are an important area for future research.

My paper is subject to an important caveat: While I believe error detection rates are a potentially useful measure of SEC effectiveness, they are by no means the only measure of effectiveness. Moreover, it is not possible to assess whether the error detection rates I document are too high or too low, because there is no observable benchmark for the socially optimal error detection rate. The observed detection rates might reflect the optimal use of the scarce resources allocated to the DCF for its many tasks, which go beyond error detection.  In this regard, the objective of the paper is not to judge the SEC but rather to forward an objective measure of effectiveness that capital market participants can use to study one important aspect of regulatory effectiveness.

This post comes to us from Matthew Kubic, a PhD candidate at Duke University’s Fuqua School of Business. It is based on his recent paper, “Examining the Examiners:  SEC Effectiveness in Identifying Financial Reporting Errors,” available here.