Agency Performance Impact GPRA - A Look at Score Cards

Mercatus Center Finds Room For Improvement In Applying Performance Data To Achieve Desired Employee Outcomes

Data from a recent Mercatus Center working paper suggest that federal agencies that administer large shares of block and formula grants, or that have a heavy research-and-development focus, are less likely to have or use certain types of performance information.

Assisting these agencies should be a high priority for the Office of Management and Budget. Congressional oversight and appropriations committees might also prompt progress by insisting that these types of agencies show outcome and output measures and, ultimately, use those results for budgeting purposes.

THE GPRA

The Government Performance and Results Act of 1993 (GPRA) requires federal agencies to explain the concrete public benefits they seek to produce and report annually on their progress toward these outcomes. GPRA mandates tracking of outcomes, not just activities, to ensure that agencies focus on producing end results that citizens value.

Annual performance reporting under GPRA started in fiscal year 1999. If GPRA works the way it is intended, then ultimately we should observe that funding for programs is closely related to the ability of those programs to achieve outcomes. At a minimum, we should observe federal managers using GPRA goals and measures to manage programs for results.

Congress enacted GPRA in part because “federal managers are seriously disadvantaged in their efforts to improve program efficiency and effectiveness, because of insufficient articulation of program goals and inadequate information on program performance.” GPRA’s underlying logic suggests that programs should be evaluated based on empirical evidence that they actually produce the intended outcomes.

A recent Mercatus Center working paper applies this same logic to GPRA itself, assessing whether the quality of agencies’ GPRA efforts is correlated with the availability and use of performance information in federal agencies.

THE DATA

Periodic Government Accountability Office (GAO) surveys track the percentage of federal managers who say they have or use various kinds of performance information in their programs or activities. In 2000 and 2007, GAO surveyed a large enough sample of managers to calculate valid averages for each agency. The survey covers the 24 federal agencies subject to the Chief Financial Officers Act, which accounts for the vast majority of all federal spending.

The average percentage of managers across agencies who reported that they have outcome, output, efficiency, customer satisfaction, or quality measures “to a great extent” or “to a very great extent” increased between 2000 and 2007. The increase was especially large for outcome measures (8.4 percentage points) and efficiency measures (9 percentage points).

Use of performance information also increased, according to our calculations. The largest increases were for “Setting job expectations for employees I manage” (12.3 percentage points), “Coordinating with external organizations” (9.7 percentage points), and “Rewarding employees I manage or supervise” (9.1 percentage points).

These figures show that the availability and use of performance information have increased since agencies started producing annual GPRA performance reports.

But this improvement could reflect factors unrelated to GPRA. If managers at agencies with higher-quality GPRA reports are more likely to report that they have and use the types of performance measures envisioned by GPRA, then we can be more confident that GPRA contributed to the improvement in the availability and use of performance information.

In 1999, the Mercatus Center initiated a ten-year research project, the Performance Report Scorecard, which evaluated the quality of agencies’ annual GPRA reports. Like the GAO surveys, the Scorecard covered the 24 agencies subject to the Chief Financial Officers Act. An expert team evaluated each report on 12 criteria, four each for transparency, public benefits, and leadership. On each criterion, a report received a score ranging from 1 (no useful content) to 5 (best practice that other agencies should adopt), so total scores could range from a minimum of 12 to a maximum of 60. The accompanying graph shows each agency’s score on the Mercatus Scorecard in 2000 and 2007.
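To make the 12-criterion rubric concrete, here is a minimal sketch in Python of how a report’s total score could be tallied. The criterion scores below are hypothetical; the actual Scorecard was compiled by an expert review team, not by code.

```python
# Hypothetical illustration of the Scorecard rubric: four criteria each for
# transparency, public benefits, and leadership, each scored 1 (no useful
# content) to 5 (best practice).

CATEGORIES = ("transparency", "public_benefits", "leadership")

def total_score(scores: dict[str, list[int]]) -> int:
    """Sum 12 criterion scores; valid totals range from 12 to 60."""
    for category in CATEGORIES:
        ratings = scores[category]
        assert len(ratings) == 4, f"{category} needs exactly 4 criterion scores"
        assert all(1 <= r <= 5 for r in ratings), "each score must be 1-5"
    return sum(sum(scores[c]) for c in CATEGORIES)

# Example: a middling report, close to the study's average score of 34.
example = {
    "transparency":    [3, 3, 4, 2],
    "public_benefits": [3, 2, 3, 3],
    "leadership":      [4, 3, 2, 3],
}
print(total_score(example))  # 35
```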

The Mercatus working paper correlates the Scorecard scores with the GAO survey results. The analysis also includes several control variables for other factors that might affect the results, such as the types of programs the agency administers and agency leadership’s perceived commitment to achieving results. All results are statistically significant at the 90 percent level or higher, meaning that correlations this strong would arise by chance alone less than 10 percent of the time if no real relationship existed.
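The working paper’s regressions are not reproduced here, but a sketch of the general approach may help. The snippet below, using pandas and statsmodels, regresses a survey measure on a Scorecard score plus control variables of the kind the text describes. All column names and the synthetic data are placeholders of my own, not the paper’s actual variables or estimates.

```python
# A minimal sketch of the kind of regression described above, on synthetic
# stand-in data: one row per agency-year (24 agencies x 2 survey years).
# Column names are hypothetical placeholders, not the paper's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 48
df = pd.DataFrame({
    "scorecard": rng.integers(20, 50, n),            # Scorecard total (12-60)
    "pct_leadership_commit": rng.uniform(39, 89, n), # % affirming leadership
    "share_block_grants": rng.uniform(0, 0.5, n),    # budget share by type
    "share_rd": rng.uniform(0, 0.4, n),
})
# Fabricated outcome so the example runs end to end.
df["pct_have_outcome_measures"] = (
    20 + 0.4 * df["scorecard"] + 0.3 * df["pct_leadership_commit"]
    - 10 * df["share_block_grants"] - 8 * df["share_rd"]
    + rng.normal(0, 3, n)
)

model = smf.ols(
    "pct_have_outcome_measures ~ scorecard + pct_leadership_commit"
    " + share_block_grants + share_rd",
    data=df,
).fit()
print(model.summary())  # significance at the 90 percent level means p < 0.10
```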

KEY FINDINGS

1) GPRA increased availability of performance information.

Agency Scorecard scores are correlated with the percentage of managers stating that they have various performance measures for their programs or activities. The average agency Scorecard score was 34 for the years covered in the study. The correlations imply that an agency producing a GPRA report with an average score of 34 would have at least 10 percent more managers reporting that they have various types of performance measures, compared to an agency that produced no GPRA report. Since 40 percent to 60 percent of managers said they had outcome, output, or efficiency measures, the quality of GPRA reports seems to explain a noticeable portion of the positive response.
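A rough back-of-the-envelope reading of that claim, assuming the “at least 10 percent” effect is measured in percentage points (the phrasing leaves this ambiguous):

```python
# Hypothetical arithmetic only; assumes the 10 percent effect is in
# percentage points against the 40-60 percent baseline quoted above.
baseline_low, baseline_high = 40.0, 60.0  # % of managers with measures
gpra_effect = 10.0                        # points attributed to report quality
print(gpra_effect / baseline_high)        # ~0.17: about a sixth of responses
print(gpra_effect / baseline_low)         # 0.25: up to a quarter of responses
```

Under that assumption, report quality would account for roughly a sixth to a quarter of the affirmative responses, consistent with “a noticeable portion.”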

2) GPRA increased use of performance information.

Scorecard scores are also correlated with the percentage of managers saying they use performance information for specific purposes. In most cases, an agency producing a GPRA report with an average score of 34 would have at least 10 percent more managers reporting that they use performance information. Since 25 percent to 50 percent of managers said they use performance information for the purposes enumerated in the GAO surveys, the quality of agencies’ GPRA reports seems to explain a noticeable portion of the positive response.

3) Leadership makes a big difference.

One question asked managers whether they agree that their “agency’s top leadership demonstrates a strong commitment to achieving results.” The percent of managers who agreed to a great or very great extent was strongly correlated with the percent of managers reporting that they have or use performance information in their programs. GAO reports similar findings in its own analysis of the survey data.

A one-percentage-point increase in affirmative responses to the leadership question is associated with an increase of between one-third and two-thirds of a percentage point in the percentage of managers who have or use performance information. Affirmative responses to the leadership question ranged between 39 percent and 89 percent, with most agencies above 50 percent in 2000 and above 70 percent in 2007. Clearly, leadership from top management makes a big difference in driving the development and use of performance measures.
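As a concrete illustration of that coefficient range (the starting values here are hypothetical; only the one-third to two-thirds range comes from the text):

```python
# Worked example of the leadership coefficient range reported above.
coef_low, coef_high = 1 / 3, 2 / 3  # points of have/use per leadership point
leadership_gain = 10.0              # e.g., affirmative responses: 60% -> 70%
print(leadership_gain * coef_low)   # ~3.3 percentage points more managers
print(leadership_gain * coef_high)  # ~6.7 percentage points more managers
```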

4) Program type affects performance management.

The regressions included control variables measuring the percent of each agency’s budget devoted to competitive grants, block grants, regulation, or research and development. In some cases, the types of programs an agency administers affected the percent of managers saying they have or use performance information:

>> Agencies with a higher percentage of their budgets devoted to competitive grants have a higher percentage of managers who say they use performance information for several purposes enumerated in the GAO surveys.

>> Agencies with a higher percentage of block and formula grants have lower percentages of managers claiming they have and use performance information.

>> Agencies with more of a research and development focus have lower percentages of managers reporting they have output measures. The percent of agency budget spent on research and development is negatively correlated with several uses of performance information.

CONCLUSION

Unlike prior management initiatives, GPRA is written into federal law. The research findings reported here suggest that GPRA has had measurable success in improving the availability and use of performance information in federal agencies.

The data also suggest that leadership has a large effect on performance management. Commitment to GPRA principles, and to performance management generally, should be a key component in the performance plans of all senior federal managers, both appointees and career civil servants.

—Dr. Jerry Ellig is a senior research fellow at the Mercatus Center at George Mason University. Between August 2001 and August 2003, Ellig served as deputy director and acting director of the Office of Policy Planning at the Federal Trade Commission. He has also worked as a senior economist for the Joint Economic Committee of the U.S. Congress and as an assistant professor of economics at George Mason University, Fairfax County, Va.
