In spring 2006, the U.S. Office of Management and Budget and federal agencies unveiled a new Web site, http://www.ExpectMore.gov. This is a collaborative review of the Web site by three Community Health Planning and Policy Development Section members: Azella Collins, Mat Despard and Priti Irani. It is organized in four parts: (1) description of the Web site; (2) review criteria and comments from each of the three reviewers; (3) the reviewers' rating of the ExpectMore Web resource; and (4) a suggested improvement plan for the ExpectMore Web resource.

I. Description of ExpectMore Web site
"The Federal Government is working to ensure its programs perform well. Here we provide you with information about where we're successful, and where we fall short, and in both situations, what we're doing to improve our performance next year," states the introduction on the Web site.
Users can search for programs by "performing", "not performing", "keyword" or pre-determined "topics".
Programs are rated as "Performing" or "Not Performing". "Performing" programs have three ratings: "Effective", "Moderately Effective", and "Adequate". The main difference between "performing" and "not performing" programs seems to be the existence of measurable performance objectives, "achievement of results" and "improved efficiency".
Table 1: ExpectMore Program Performance Ratings
To date, according to information on the Web site, 80 percent of all Federal programs have been rated. Table 1 shows the distribution by rating.

The federal government uses a standard questionnaire called the Program Assessment Rating Tool, or PART for short. The PART asks approximately 25 questions about a program's performance and management. For each question, there is a short answer and a detailed explanation with supporting evidence. The answers determine a program's overall rating. Once each assessment is completed, each program develops an improvement plan so the program's performance can be tracked and improved upon.
II. Criteria for Review
The three reviewers looked at two criteria. Their comments on each are listed below:
- Is the information regarding programs easy to find?
- Is the information consistent with what we know about the program? Is the rating system transparent and consistent?
1. Is the Information Easy to Find?
Azella – It is very easy to find the programs. I looked for HIV/AIDS prevention, HIV/AIDS care services, and lead elimination programs.
Mat - Very easy to locate information, though results should be sorted by department.
Priti – Very clean and thoughtfully designed Web site. It is relatively easy to find information about programs, both reviewed and not reviewed. As I am funded under the Prevent Block, I looked for it, and it was not reviewed. I could not locate the Special Supplemental Nutrition Program for Women, Infants and Children. My colleague looked for asthma and found it listed under "Environmental Health". There are buttons for each program, such as "View Similar Programs", "About Improvement Plans", and "Details and Current Status of this Program", that provide more details.
2. Is the information consistent with what we know about the program?
Azella - I reviewed findings on four DHHS programs:
- Domestic HIV/AIDS Prevention
- Ryan White HIV/AIDS
- HIV/AIDS Research
- Environmental Health
Domestic HIV/AIDS Prevention - Assessment rating - Not Demonstrated
The overall scores were what the reviewer had expected, because HIV incidence is increasing, rates for various units of service are lacking, and overall funding levels are low relative to what is needed to combat increasing HIV incidence. (Unable to discern how points are assigned -- no score sheet available.)
Having worked with programs within the CDC domain, the writer has witnessed a shifting of personnel, an increased focus on program accountability, and the use of program surveillance data to drive selection of interventions.

Ryan White HIV/AIDS - Assessment rating - Adequate
The overall scores were higher than what the reviewer had expected, because HIV incidence is increasing and there are no RW acuity levels to guide service planning. (Unable to discern how points are assigned -- no score sheet available.) Anecdotally, when I first entered the prevention arena (1995), I learned that Prevention case managers could not talk about or document Care communications with their patients, and vice versa for Care case managers. I thought, "How do you not have safe-sex discussions with people who are HIV positive and document that encounter?"
Having observed activities within this arena, this writer believes the rating is in line with the program's end results.
HIV/AIDS Research - Assessment rating - Moderately Effective
The assessment rating is in line with how the program is operationalized.
Environmental Health - Assessment rating - Adequate
The overall scores were higher than what the reviewer had expected. In 2005 this writer organized a lead detection activity in Chicago; parents of children who tested positive for lead had nightmarish stories about the vendors who were sent to rehabilitate their properties. After various complaints were made, it was clear that the national administrators were unaware of how lead removal procedures were implemented locally.
Mat - I was surprised to see a program of interest to me, the Housing Opportunities for People With AIDS under HUD listed as "not performing" because results have not been demonstrated. The impact of HOPWA has been clearly felt here in North Carolina, yet it is deemed as "not performing" only because HUD has not been collecting sufficient performance data from grantees.
Priti – I looked at five programs: Department of Health and Human Services Health Centers, rated as effective; DHHS Office of Child Support Enforcement, rated as effective; FEMA Disaster Response, rated as performing-adequate; DHHS National School Lunch, rated as "not performing – results not demonstrated"; and Department of Education Even Start, rated as "not performing – ineffective".
The Health Centers were evaluated in 1998; their users experienced 22 percent lower hospitalization rates than Medicaid users receiving care from other sources (I wondered what these sources were). Also, an increasing proportion of health center patients are insured, according to a 2000 Government Accountability Office report. This seemed consistent with the "effective" rating.
The Office of Child Support Enforcement received an effective rating because it aims to increase its cost-effectiveness ratio (dollars collected per dollar spent) from $4.38 in FY 2004 to $4.63 in FY 2008. In short, it can sustain itself. None of the other program goals were measurable or achievable. If the only purpose of the program is to collect child support, then it is effective. But how can this be tied to results that relate to positive outcomes for children? The effectiveness rating, in my perception, is tied only to cost-effectiveness.
The FEMA Disaster Response received an adequate rating -- it is the only federal program of integrated emergency management and coordination that responds to domestic disaster contingencies. It was also acknowledged that the program was reorganized in 2004 (the time when this survey was put together) and was developing baseline measures.
The National School Lunch Program was rated as "Not Performing – Results not demonstrated" because (1) the program did not have a reliable measure of the level of erroneous payments it makes -- the number of children approved for free meals each year exceeds estimates of the number of children who should be eligible; and (2) while periodic evaluations show progress toward improved meals, the program lacks short-term measures that can demonstrate progress on an annual basis. The "not performing" rating, in my perception, is tied only to erroneous payments.
Even Start integrated early childhood education, adult literacy, and parenting education into a unified family literacy program. The Department of Education conducted three major evaluations of the program, and none showed greater educational gains for Even Start children and parents; hence the plan was to eliminate funding for the program. When I clicked "View Similar Programs", a public diplomacy program and an adult education program came up, but no family support programs. Family support programs deal with complex issues and are difficult to evaluate. There is something unsettling about the prospect of the Child Support program being funded because it is effective, while a vacuum exists with regard to family support programs.
My colleague who looked for asthma was led to Environmental Health, which is rated as Performing-Adequate. The note on asthma read: "The program addresses the specific need to reduce and mitigate human exposure to a variety of toxic substances and hazardous environmental conditions. There were an estimated 434,000 children with elevated blood lead levels in 1999-2000. Twenty million Americans had asthma in 2001, and 12 million had an attack in the previous year." There were no performance measures listed for asthma.
III. Reviewers' Rating of the ExpectMore.gov site
Mat: Performing – Adequate. Good start, but key improvements are needed.
Priti: Performing – Adequate.
Azella: Performing - Adequate.
IV. Improvement Plan
- The dividing line between "Performing – Adequate" and "Not Performing – Results not Demonstrated" seems fuzzy, if not non-existent. One suggestion is to make "Results not Demonstrated" a separate category.
- Share a detailed description of how the information is analyzed and weighted. While the desire to assess each federal program was clear, the connection between how the information was analyzed and how the improvement plan was developed was not.
- Ensure that federal agencies are working with state and local partners to collect performance data, or you may not have the full picture.
- Place emphasis both on program impact (results) and on budget management (performance); they are both important. Programs that do not demonstrate impact cannot be "effective", and programs that demonstrate positive impact cannot be "not performing". Also, there are programs that may perform effectively but not get results.
- Placing a rating on broad-based programs such as Environmental Health, rather than on specific programs within Environmental Health, is not meaningful. Such broad programs must be rated in sub-categories.
- Could there be a special note or bonus points for programs that take risks, innovate, and evaluate well?
Reviewers' Profiles
Azella Collins: Works at the Illinois Department of Public Health as the Perinatal HIV Elimination Program Administrator, coordinating this CDC-funded initiative. Under her leadership and advocacy, Illinois implemented a rapid HIV counseling and testing program in all Illinois birthing hospitals. Her professional interests include community program development, evaluation, and strategic and business planning.
Mat Despard: Works at the Health Inequalities Program of the Center for Health Policy at Duke University in Durham, N.C. He is a project coordinator on two federally funded projects (both from the Health Resources and Services Administration of DHHS) to improve HIV care coordination using information technology and to increase access to specialty medical care for the uninsured. He is a social worker by training with interests in non-profit management, collaborative community problem-solving initiatives, program evaluation and research-to-practice efforts.
Priti Irani: Works at the New York State Department of Health as project director of the Assessment Initiative, a CDC cooperative agreement funded through the Prevent Block. She enjoys the planning and evaluation process and reviewing articles, among other things. She is also the editor of the CHPPD newsletter.