
FY 2020 Annual Performance Plan and Report - Data Sources and Validation

Fiscal Year 2020
Released March 2019

Data Sources and Validation

Reporting HHS Agencies: ACF | ACL | AHRQ | ASA | ASPR | CDC | CMS | FDA | HRSA | IHS | NIH | SAMHSA


HHS FY 2020 Annual Performance Plan and Report Data Validation Table

Administration for Children and Families (ACF)

Measure ID Data Source Data Validation
3A
(ACF)
Classroom Assessment Scoring System (CLASS: Pre-K) CLASS: Pre-K assessment scoring uses reviewer observations to rate the interactions between adults and children in the classroom.  Reviewers who have achieved the CLASS: Pre-K standard of reliability assess classroom quality by rating multiple dimensions of teacher-child interaction on a seven-point scale.  Scores of one to two are in the low range; three to five in the mid-range; and six to seven in the high range of quality.  For purposes of this performance measure, ACF defines low range as any CLASS review with a domain scoring below 2.5.  A “domain” is a category of skills; the domains include classroom organization, emotional support, and instructional support.  ACF has implemented ongoing training for CLASS: Pre-K reviewers to ensure the data they report remain reliable.  ACF also uses periodic double-coding, in which two reviewers score the same observation, to ensure reviewers remain reliable in their scoring.
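
The low-range rule above is mechanical enough to sketch in code.  A minimal illustration, assuming hypothetical field names and scores rather than ACF’s actual data schema:

```python
# Hypothetical sketch of the low-range rule: a CLASS: Pre-K review counts as
# "low range" if any domain scores below 2.5 on the seven-point scale.
LOW_RANGE_THRESHOLD = 2.5
DOMAINS = ("emotional_support", "classroom_organization", "instructional_support")

def is_low_range(review: dict) -> bool:
    """Return True if any CLASS domain in the review scores below 2.5."""
    return any(review[d] < LOW_RANGE_THRESHOLD for d in DOMAINS)

# Example: strong emotional support and organization, but low instructional
# support, so the review counts toward the low-range measure.
review = {"emotional_support": 5.8, "classroom_organization": 5.1,
          "instructional_support": 2.3}
print(is_low_range(review))  # True
```
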
4A
(ACF)
The Runaway and Homeless Youth - Homeless Management Information System (RHY-HMIS) In FY 2015, ACF entered into a Memorandum of Understanding with the Department of Housing and Urban Development, the Department of Veterans Affairs, and SAMHSA to use HMIS as the primary information technology system for reporting data on clients served by federally funded homeless assistance services.  Since FY 2015, RHY grantees have used local HMIS systems to upload de-identified, client-level data to the RHY national data repository, RhyPoint.  Following each upload, RhyPoint validates grantee data and sends a report to grantees.  This process allows grantees to monitor and improve their data completeness and quality.

The Family and Youth Services Bureau aggregates and cleans these data by using a set of business rules that incorporate a number of logic checks to ensure the records are valid and accurate.
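
The business rules themselves are not published in this table, but a minimal sketch of the kind of record-level logic checks such a rule set might apply is shown below; the field names, age range, and rules are illustrative assumptions, not FYSB’s actual rules:

```python
# Illustrative logic checks over one de-identified, client-level record.
from datetime import date

def validate_record(rec: dict) -> list:
    """Return a list of rule violations for one uploaded record."""
    errors = []
    if rec.get("exit_date") and rec["exit_date"] < rec["entry_date"]:
        errors.append("exit date precedes entry date")
    if rec["entry_date"] > date.today():
        errors.append("entry date is in the future")
    if not 0 <= rec.get("age_at_entry", -1) <= 24:
        errors.append("age outside the RHY-served range (0-24)")
    return errors

rec = {"entry_date": date(2019, 5, 1), "exit_date": date(2019, 4, 1),
       "age_at_entry": 17}
print(validate_record(rec))  # ['exit date precedes entry date']
```
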
7B (ACF) National Child Abuse and Neglect Data System (NCANDS) States use NCANDS to report child welfare data to ACF. Each state’s annual NCANDS data submission undergoes an extensive validation process to ensure data accuracy. In addition, an ACF contractor provides technical assistance to the states to improve their reporting and to validate all state data related to ACF outcome measures. The ACF Children’s Bureau and the NCANDS project team work directly with states through national meetings, advisory groups, and state-specific technical assistance to ensure the most complete and accurate reporting of these data in all submissions. ACF expects to see continuous improvement in the quality of these data for this measure over the next few years.
7D
(ACF)
State Annual Reports Title II of the Child Abuse Prevention and Treatment Act, which authorizes the Community-Based Child Abuse Prevention (CBCAP) program, requires states to submit an annual report addressing each of the CBCAP performance measures.  States provide evaluation data on the outcomes of funded programs and activities and report on OMB Circular A-11 performance results and other national outcomes for the CBCAP program.  The program’s three percent annual increase represents an ambitious target.  This is the first time that the program has required participants to target their funding towards evidence-based and evidence-informed programs and practices.
14D
(ACF)
Family Violence Prevention and Services Program Performance Progress Report Form Grantees submit these data in an aggregated format (non-client-level data).  When grantees submit their reports in the Online Data Collection System, automatic data validation and error checks run before the grantees are able to submit their reports.  The Family Violence Prevention and Services Act Office checks each grantee’s data by comparing the current year’s data to prior years’ data and checking for inconsistencies or typos.  The grantee is then given a short amount of time to confirm the submitted data or revise the report.  In addition, performance report data are used to inform grant monitoring by state administrators and federal staff.
16C (ACF) Matching Grant Progress Report forms ACF grantees validate data with methods similar to those used with performance reports.  Grantees validate data through periodic desk and on-site monitoring, in which the agency randomly selects refugee cases for review.  During on-site monitoring, grantees verify outcomes reported by service providers with both employers and refugees to ensure the accurate reporting of job placements, wages, and retention.  All grantees use database systems (online or manual) for data collection and monitoring of their program service locations.  Beginning with FY 2016, grantee data points for each enrolled individual are loaded into the Office of Refugee Resettlement’s Refugee Arrivals Data System (RADS) biannually.  RADS generates data for the APP/R.
22B
(ACF)
National Directory of New Hires (NDNH) Beginning with FY 2001 performance reporting, ACF based the employment performance measures (job entry, job retention, and earnings gain) solely on performance data obtained from the NDNH.  States update their data and use normal auditing functions for reported data.  Prior to use of the NDNH, states had flexibility in the data source(s) they used to obtain wage information on current and former TANF recipients under high performance bonus specifications for performance years FY 1998 through FY 2000.  ACF moved to the NDNH to ensure equal access to wage data and the uniform application of performance specifications.
22F, 22G, 22H (ACF) The nFORM (Information, Family Outcomes, Reporting, and Management) Data Collection and Reporting System Healthy Marriage grantees use the nFORM system, along with web surveys for clients, to collect data at program exit on clients’ attitudes toward marriage, as well as other information.  To collect high-quality data, grantees receive ongoing training in data collection best practices through webinars, training videos, infographic-style “cheat sheets,” in-person presentations, and bi-monthly virtual office hours. Grantees also receive individualized technical assistance through a ticketing system in nFORM, with assistance provided by email and phone calls.  The use of ACASI (audio computer-assisted self-interviewing) surveys further enhances the rigor of the client survey data.  Clients complete web surveys on their own, which reduces responding in socially desirable ways and eliminates variation in how interviewers might ask questions.  The web surveys also include automated skip patterns so that clients only answer questions that apply to them and “soft checks” that provide an additional prompt for clients to answer questions before skipping them as a way to encourage data completeness.  To help clients with low literacy levels answer on their own, the ACASI surveys include an audio function that reads the questions and response options to the clients.
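
The skip patterns and “soft checks” described above amount to simple survey control flow.  A toy sketch, with hypothetical questions rather than nFORM’s actual survey content:

```python
# Soft check: a blank answer triggers one extra prompt before the skip is
# accepted.  Skip pattern: dependent questions appear only when relevant.
def ask(question: str) -> str:
    answer = input(f"{question} ").strip()
    if not answer:
        answer = input("You left this blank. Answer now, or press Enter to skip: ").strip()
    return answer

married = ask("Are you currently married? (yes/no)")
if married.lower() == "yes":  # respondents who answer "no" never see this
    ask("In what year were you married?")
```
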

Administration for Community Living (ACL)

Measure ID Data Source Data Validation
ALZ.3
(ACL)
ACL’s Dementia Capability System Quality Assurance tool Grantees use the Dementia Capability System Quality Assurance tool to assess the extent to which grantees’ long-term services systems are responsive to and appropriate for the needs of individuals with dementia.  Technical assistance liaisons review grantee data for completeness and accuracy.  The ACL online system facilitates grantee completion of the tool and review and analysis of these data.
8F
(ACL)
Protection and Advocacy for Individuals with Developmental Disabilities (PADD) Annual Program Performance Report (PPR) In January of the following fiscal year, ACL grantees report outcome data for each fiscal year in PPRs.  Grantees conduct verification and validation of data through an ongoing review and analysis of annual reports.  Grantees validate and verify the data collected in the PADD PPR by comparing these data against parameters of each relevant field and by comparing the current data with the previous year’s data.  In case of any outlier data, grantees are asked to verify and/or validate these data and provide ACL with an explanation and/or supporting documents.

Agency for Healthcare Research and Quality (AHRQ)

Measure ID Data Source Data Validation
2.3.8
(AHRQ)
Internal AHRQ performance management systems

Tools supporting this measure are publicly available at https://cds.ahrq.gov/cdsconnect/topic/opioids-and-pain-management.

Assistant Secretary for Administration (ASA)

Measure ID Data Source Data Validation
2.6
(ASA)
Office of Personnel Management (OPM) Employee Viewpoint Survey: https://www.fedview.opm.gov/ OPM administers this federal survey as a self-administered web survey of all full-time and part-time employees.  OPM weights respondents’ data to produce survey estimates that accurately represent the agency population.
2.8
(ASA)
Human Resources Employment Processing System and Business Intelligence Information System The HHS Office of Human Resources Director of Analytics reviews and validates these data.
3.3
(ASA)
Risk Management Framework Portal The Director of Information Security in the HHS Office of the Chief Information Officer validates these data.
3.4
(ASA)
OGR Biannual FITARA Scorecard The Subcommittee on Government Operations validates these data.
3.5
(ASA)
PhishMe Solution and PhishMe Report The Director of Information Security in the HHS Office of the Chief Information Officer validates these data.
3.6
(ASA)
RiskVision: Ad Hoc Reports The Director of the Privacy Office in the HHS Office of the Chief Information Officer validates these data.

Assistant Secretary for Preparedness and Response (ASPR)

Measure ID Data Source Data Validation
2.4.13a
(ASPR)
Regulatory agencies, including FDA, and/or the associated host country’s regulatory licensing board

For all performance measures related to licensure, ASPR captures emergency use authorization and/or commercialization of medical countermeasures through approval from the appropriate regulatory agencies, including FDA, and/or the associated host country’s regulatory licensing board.  After a rigorous review and approval process covering the safety, efficacy, tolerability, and immunogenicity of each medical countermeasure, regulatory agencies make this information publicly available.  During emergencies, FDA issues Emergency Use Authorizations (EUAs) to move certain lifesaving technologies forward in order to meet pandemic preparedness and response timelines.  FDA makes all EUAs public on its website: https://www.fda.gov/EmergencyPreparedness/Counterterrorism/MedicalCountermeasures/MCMLegalRegulatoryandPolicyFramework/ucm182568.htm#current

ASPR checks all data against multiple databases to ensure the accuracy and validity of the numbers reported.  ASPR negotiates and issues contracts and drafts requests for proposals for industry comment in accordance with the Federal Acquisition Regulation (FAR) and the HHS Acquisition Regulation (HHSAR).  To address specific advanced research questions, ASPR develops interagency agreements with federal laboratories.  Contract terms and conditions require contractors and awardees to report on inventions, discoveries, and other advancements in the advanced development of medical countermeasures.  ASPR uses this information for data quality assurance and control purposes to ensure accuracy.

2.4.15b (ASPR) www.USASpending.gov, www.fbo.gov, UFMS, and other government systems ASPR checks all data against multiple databases to ensure the accuracy and validity of the numbers reported.  ASPR negotiates and issues contracts and drafts requests for proposals for industry comment in accordance with the FAR and HHSAR.

Centers for Disease Control and Prevention (CDC)

Measure ID Data Source Data Validation
1.3.3a
(CDC)
Behavioral Risk Factor Surveillance System (BRFSS) BRFSS is an ongoing, state-based monthly telephone survey that collects information on health conditions and risk behaviors from randomly selected U.S. adults aged 18 years and older.  BRFSS asks respondents whether they received a flu vaccine in the past 12 months and, if so, in which month and year.  This information is self-reported and not verified by medical records.  Starting in 2011, BRFSS methods changed by adding persons in households with only cellular telephone service and by improving weighting procedures.  The FY 2011-12 and subsequent flu vaccination coverage estimates reflect these changes.
3.2.4b
(CDC)
CDC’s National Healthcare Safety Network (NHSN) and surveillance for community-onset Clostridium difficile infections through the Healthcare-Associated Infections Community Interface activity of CDC’s Emerging Infections Program (EIP) CMS and state/local health departments validate NHSN data.  EIP data undergo annual audits to ensure accuracy.
3.2.5
(CDC)
NHSN facility survey CDC uses extensive cross-field edit checks for data validation.  CDC provides detailed instructions for completion of report forms to ensure consistency across sites.  CDC cannot report incomplete records.  CDC conducts process and quality improvements through email updates and annual meetings.
3.3.3
(CDC)
NHSN CDC uses extensive cross-field edit checks for data validation.  CDC provides detailed instructions for completion of report forms to ensure consistency across sites.  CDC cannot report incomplete records.  CDC conducts process and quality improvements through email updates and annual meetings.
3.5.2
(CDC)
Electronic Laboratory Reporting Repository – automated CDC’s ELR Implementation Support and Monitoring team analyzes these data for anomalies.
4.6.2a
(CDC)
US Census and Treasury data: Alcohol and Tobacco Tax and Trade Bureau Monthly Statistical Reports and Census Bureau annual population estimates CDC collects data from public US Census and Treasury reports and validates these data through HHS and CDC calculations.
4.11.10a
(CDC)
National Health and Nutrition Examination Survey (NHANES), CDC, National Center for Health Statistics (NCHS) NCHS validates these data.
4.11.10b
(CDC)
National Health and Nutrition Examination Survey (NHANES) System quality control standards and business rules validate NHANES data.
7.2.6
(CDC)
CDC/NCHS, National Vital Statistics System (NVSS), mortality rates NVSS data come to CDC from NCHS and from the vital registration systems operated in the jurisdictions legally responsible for the registration of vital events, including deaths.
8.B.1.4
(CDC)
National Notifiable Disease Surveillance System CDC validates these data based on the format of the data transmissions.  Calculation and monitoring occur at least annually.
13.5.3
(CDC)
Self-reported data from 62 grantees  CDC conducts quality assurance reviews during its follow-ups with grantees.

Centers for Medicare & Medicaid Services (CMS)

Measure ID Data Source Data Validation
MCR23
(CMS)
The Prescription Drug Event (PDE) data CMS has a rigorous data quality program for ensuring the accuracy and reliability of the PDE data.  The first phase in this process is online PDE editing.  The purpose of online editing is to apply format rules, check for legal values, compare data in individual fields to other known information (such as beneficiary, plan, or drug characteristics), and evaluate logical consistency between multiple fields reported on the same PDE.  Online editing also enforces business order logic, which ensures only one PDE is active for each prescription drug event.  The second phase of the data quality program occurs after PDE data have passed all initial online edits and are saved in the data repository.  CMS conducts a variety of routine and ad hoc data analyses of saved PDEs to ensure data quality and payment accuracy.
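
A simplified sketch of the first-phase, online edits described above appears below; the field names, legal-value set, and tolerance are illustrative assumptions, not CMS’s actual edit logic:

```python
# Illustrative online PDE edits: format rule, legal values, and cross-field
# consistency between reported cost components.
VALID_COVERAGE_CODES = {"C", "E", "O"}  # hypothetical legal-value set

def edit_pde(pde: dict) -> list:
    """Return edit failures; a clean PDE passes to the data repository."""
    failures = []
    if len(pde.get("service_provider_id", "")) != 10:                # format rule
        failures.append("service_provider_id must be 10 characters")
    if pde.get("drug_coverage_status") not in VALID_COVERAGE_CODES:  # legal value
        failures.append("invalid drug coverage status code")
    parts = pde.get("ingredient_cost", 0) + pde.get("dispensing_fee", 0)
    if abs(pde.get("total_cost", 0) - parts) > 0.01:                 # cross-field check
        failures.append("total cost does not equal the sum of its components")
    return failures

pde = {"service_provider_id": "1234567890", "drug_coverage_status": "C",
       "ingredient_cost": 42.50, "dispensing_fee": 2.00, "total_cost": 44.50}
print(edit_pde(pde))  # [] -> passes the online edits
```
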
MCR30.1
(CMS)
Medicare Shared Savings Program Financial Reconciliation Reports; Master Data Management System; Integrated Data Repository; TAP files; CCW claims data; CMS Office of the Actuary (OACT) annual Part A and B expenditure data To monitor the movement of payments to more advanced payment models, HHS developed the following payment taxonomy to describe health care payment through the stages of transition from pure FFS to alternative payment models and, ultimately, population-based payments.  CMS is using this framework to measure Medicare payments tied to alternative payment models.  This framework classifies payment models into four categories according to how providers are paid:
  • Category 1—FFS with no link of payment to quality;
  • Category 2—FFS with a link of payment to quality;
  • Category 3—alternative payment models built on FFS architecture; and
  • Category 4—population-based payment.

For this measure, CMS considers CMMI models in categories 3 and 4 that directly test how providers are paid and whose payments the agency can measure.  For example, CMS counts Accountable Care Organizations (ACOs), which include the Pioneer ACO Model and the Medicare Shared Savings Program, because these models allow CMS to understand how providers are receiving fee-for-service payments.  CMS also counts advanced primary care medical homes, which include Comprehensive Primary Care and Multi-Payer Advanced Primary Care Practice, in this total.  In contrast, CMS does not include models, such as the Health Care Innovation Awards, that provide funding to organizations to support providers in moving toward this goal, because such models do not necessarily test explicitly how providers are paid.

Numerator: Model payment actuals based on model specific data, such as the number of aligned beneficiaries and annual per beneficiary spending.  Denominator: The CMS OACT actual or estimated annual Part A and B expenditure.

CMS staff and contractors provide beneficiary alignment and expenditure data to CMMI.  Model teams and contractors apply quality assurance measures and data cleaning, including an audit and validation process for the programs that calculate the results, to ensure the reliability of the results.

MCR31
(CMS)
Medicare Shared Savings Program ACOs and Consumer Assessment of Healthcare Providers and Systems (CAHPS) CMS requires approved survey vendors to follow the data collection protocol.  CMS-approved vendors must meet minimum business requirements, attend training, complete a post-training assessment, provide a quality assurance plan, submit test data, and participate in site visits and quality-monitoring activities.
The Survey Administration Contractor then validates CAHPS scores through a multi-step process.  The first step ensures that vendor-submitted data are correct and complete; survey items are programmatically checked to verify that the correct response scale is used.  This step also ensures that the correct disposition status is assigned to each respondent and that all beneficiaries in the ACO’s original sample are included.  If errors are detected during this step, the survey vendor is required to re-submit a corrected data file.

The second step applies “forward cleaning logic” to the respondent-level data.  In certain cases, a beneficiary may answer “No” to a screener question but subsequently answer the dependent questions (this is against protocol).  In these cases, CMS programmatically sets responses for the dependent questions as “missing.”  
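
The forward-cleaning rule reduces to a small transformation over respondent-level data.  A minimal sketch, with hypothetical question IDs rather than the actual CAHPS items:

```python
# Forward cleaning: when a screener is answered "No", any answers to its
# dependent questions are off-protocol and are set to missing.
SCREENERS = {"q10": ["q11", "q12"]}  # screener -> dependent questions

def forward_clean(responses: dict) -> dict:
    cleaned = dict(responses)
    for screener, dependents in SCREENERS.items():
        if cleaned.get(screener) == "No":
            for dep in dependents:
                cleaned[dep] = None  # missing
    return cleaned

print(forward_clean({"q10": "No", "q11": "Always", "q12": "Usually"}))
# {'q10': 'No', 'q11': None, 'q12': None}
```
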

After forward cleaning, the vendor runs the respondent-level dataset through the CAHPS scoring macro.  Harvard developed and maintains the CAHPS scoring macro, which has been used for over 10 years to correctly calculate case-mix adjusted (CMA) CAHPS scores.  The macro outputs 0-100 CMA scores and reliabilities for each CAHPS item and Summary Survey Measures (SSM).  CAHPS produces scores at both the ACO and national level.  These scores then undergo a final programmatic check to ensure 1) the CMA item and SSM scores are produced for every ACO, 2) the scores and reliabilities are within scale, and 3) reliability flags are correctly assigned to items and SSMs.

MIP1
(CMS)
Comprehensive Error Rate Testing (CERT) Program

The CERT Program selects a random sample of Medicare Fee-for-Service (FFS) claims from the population of claims submitted for Medicare FFS payment.  CMS performs complex medical review on the sampled claims to determine whether the claims were properly paid under Medicare coverage, coding, and billing rules.

CMS monitors the CERT program for compliance through monthly reports from contractors.  In addition, the HHS OIG conducts annual reviews of the CERT program and its contractors.
MIP5
(CMS)
Medicare Advantage Organizations (MAOs) The Part C error estimate measures the extent to which the diagnostic data used in payment are substantiated by the medical records MAOs submit to CMS.  CMS uses the diagnostic data to determine risk-adjusted payments made to MAOs.

The Part C program payment error estimate is based on data obtained from a rigorous Risk Adjustment Data Validation process in which independent coding entities review medical records to confirm that the medical record documentation supports the risk adjustment diagnosis data MAOs submitted for payment.
MIP6
(CMS)
PDE records and CMS enrollment and payment files The Part D payment error measurement is a rate based on errors found in PDE records.  A PDE record represents a prescription filled by a beneficiary.

For the Part D payment error rate, data to validate payments come from multiple internal and external sources, including CMS enrollment and payment files.  Several contractors validate these data.  CMS’s PDE data validation process validates PDE data through contractor review of supporting documentation submitted to CMS by a national sample of Part D plans.

MIP9.1
(CMS)
Sample of state-level Medicaid claims

As part of a national contracting strategy, CMS gathers adjudicated claims data and medical policies from the states for purposes of conducting medical and data processing reviews on a sample of the claims paid in each state.

CMS and its contractors are working with 17 states to ensure that the Medicaid universe data and sampled claims are complete, accurate, and contain the data needed to conduct reviews.  In addition, the OIG conducts annual reviews of the Payment Error Rate Measurement (PERM) Program and its contractors.
MIP9.2
(CMS)
Sample of state-level CHIP claims

As part of a national contracting strategy, CMS gathers adjudicated claims data and medical policies from the states for purposes of conducting medical and data processing reviews on a sample of the claims paid in each state.

CMS and its contractors are working with 17 states to ensure that the CHIP universe data and sampled claims are complete, accurate, and contain the data needed to conduct reviews.  In addition, the OIG conducts annual reviews of the PERM Program and its contractors.
MMB2
(CMS)
CMS Geographic Variation Database This performance measure defines a readmission as a case of a full-benefit Medicare-Medicaid enrollee in FFS who is discharged from an acute care hospital and admitted to the same or another acute care hospital within 30 days from the date of the index admission discharge.

The formula is the number of readmissions per 1,000 eligible beneficiaries.
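
A worked example of that rate, using made-up counts rather than actual program data:

```python
# Readmissions per 1,000 eligible beneficiaries, with hypothetical inputs.
readmissions = 4_250             # 30-day readmissions in the period
eligible_beneficiaries = 95_000  # full-benefit Medicare-Medicaid FFS enrollees
rate = readmissions / eligible_beneficiaries * 1_000
print(f"{rate:.1f} readmissions per 1,000 eligible beneficiaries")  # 44.7
```
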

CMS uses a hybrid method of extracting readmissions data on Medicare-Medicaid enrollees, which incorporates elements of the Partnership for Patients readmission measure and the Medicare Hospital Readmissions Reduction Program measure methodologies.  CMS analyzes readmissions data on all full-benefit Medicare-Medicaid enrollees in FFS, as opposed to only those enrollees who are 65 years old and older, in order to capture the experience of those with disabilities under the age of 65 years.

Data are validated using parallel coding, reasonableness checks on each file, checks of version-to-version changes by variable and service type, and year-over-year comparisons.

MSC5
(CMS)
Minimum Data Set (MDS) CMS reports the percentage of long-stay nursing home residents who received an antipsychotic medication, using a quality measure derived from the MDS.  The MDS is part of the medical record.  The nursing home must maintain the MDS and submit it electronically to CMS for every resident of the certified part of the nursing home.

CMS reports the prevalence of antipsychotic use in the last three months of the fiscal year.  The numerator consists of long‑stay residents receiving an antipsychotic medication on the most recent assessment.  The denominator is all long-stay nursing home residents, excluding residents with schizophrenia, Tourette’s syndrome, or Huntington’s disease.  Residents who have resided in the nursing home for 101 or more days are considered long-stay residents.  The baseline number reflects the prevalence of use in the last quarter of CY 2011.  CMS selected this quarter because it was the last quarter in the pre‑intervention period.
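
The numerator/denominator logic above can be sketched directly.  The resident records and field names below are hypothetical, not actual MDS data:

```python
# Prevalence of antipsychotic use among long-stay residents (101+ days),
# excluding the diagnoses named in the measure from the calculation.
EXCLUDED_DIAGNOSES = {"schizophrenia", "tourettes", "huntingtons"}

def antipsychotic_prevalence(residents: list) -> float:
    long_stay = [r for r in residents
                 if r["days_in_facility"] >= 101
                 and not set(r["diagnoses"]) & EXCLUDED_DIAGNOSES]
    receiving = sum(r["antipsychotic_on_assessment"] for r in long_stay)
    return receiving / len(long_stay) if long_stay else 0.0

residents = [
    {"days_in_facility": 250, "diagnoses": [], "antipsychotic_on_assessment": True},
    {"days_in_facility": 180, "diagnoses": [], "antipsychotic_on_assessment": False},
    {"days_in_facility": 30,  "diagnoses": [], "antipsychotic_on_assessment": True},   # short stay
    {"days_in_facility": 400, "diagnoses": ["schizophrenia"],
     "antipsychotic_on_assessment": True},                                             # excluded
]
print(antipsychotic_prevalence(residents))  # 0.5
```
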

QIO7.2
(CMS)
Nursing Home Compare Data CMS validates the data for Nursing Home Compare five-star rating scores.  The rating scores are derived from Medicare claims data and MDS data.  For this measure, CMS analyzed the underlying data for the five-star rating, determined a baseline, and set targets to focus improvements on current one-star nursing homes in order to raise the overall quality of care in nursing homes.
QIO11
(CMS)
Medicare Patient Safety Monitoring System (MPSMS), NHSN, AHRQ Patient Safety Indicators The AHRQ National Scorecard data come mostly from independent clinical chart abstractions of a statistically representative sample of United States Prospective Payment System hospitals.  The CMS Clinical Data Abstraction Center (CDAC) collects these charts through the MPSMS, using software-guided review of inpatient records by nonclinical analysts to identify 21 types of adverse events.  CMS applies the MPSMS methodology to a multi-stage stratified random sample each year of approximately 30,000 inpatient charts from about 400 acute care hospitals eligible for Medicare’s inpatient prospective payment system.

In addition, the AHRQ National Scorecard draws on two other measurement systems to capture adverse event types not captured by the MPSMS: select NHSN and AHRQ Patient Safety Indicator data, which are included in order to generate a more comprehensive set of preventable patient harms.  Over 90 percent of this dataset is not dependent on coding or coding practices, making it a highly reliable account of patient safety harms occurring on a national scale.  MPSMS collects preliminary data and provides them to AHRQ for validation and analysis before delivery of the finalized compiled results.

CMS uses earlier data to fill in gaps for preliminary estimates.  As all the data for a given year become available, CMS produces a final number.  More details are available at: https://www.ahrq.gov/sites/default/files/wysiwyg/professionals/quality-patient-safety/pfp/hacreport-2019.pdf.

AHRQ and CMS are in the process of updating the MPSMS data source, to which new clinical measures were last added in 2005.  For example, MPSMS does not currently monitor opioid-related adverse drug events.  An updated baseline will accommodate major updates to the measurement system during the performance period.

Food and Drug Administration (FDA)

Measure ID Data Source Data Validation
292202
(FDA)
Sentinel Distributed Database (SDD) Sentinel uses a distributed data approach in which data partners maintain physical and operational control over electronic data in their existing environments.  The distributed approach is achieved by using a standardized data structure referred to as the Sentinel Common Data Model.  The combined collection of datasets across all data partners is known as the SDD.

The Sentinel Operations Center (SOC) uses the Sentinel Data Quality Review and Characterization Programs for data quality review and characterization of the SDD.  To create the SDD, each data partner transformed local source data into the Sentinel Common Data Model format.  The SOC created this set of data quality review and characterization programs to ensure that the SDD meets reasonable standards for data transformation consistency and quality and that the SDD data meet the expectations of a distributed health data network.

292203
(FDA)
Sentinel Coordinating Center and Active Risk Identification and Analysis (ARIA) system
https://www.sentinelinitiative.org/active-risk-identification-and-analysis-aria
ARIA is FDA’s post-market safety surveillance system for medical products.  ARIA consists of pre-defined, routine querying tools that use customizable parameters to assess data in the Sentinel Common Data Model.  Sentinel uses a distributed data approach in which data partners maintain physical and operational control over electronic data in their existing environments.

The Sentinel Coordinating Center will track the number of medical product analyses conducted by ARIA using the Sentinel query tracking database.  FDA updates these data at the end of every calendar quarter and makes them publicly available online.

291101
(FDA)
Office of Scientific Program Development (OSPD) annual evaluation reports OSPD produces annual evaluation reports that offer a detailed summary of outcomes, including the number of applications and selections, demographics, research contributions to FDA product centers, and the yearly percentage of FDA hires.  OSPD creates, maintains, and verifies recruitment and graduation records.

Health Resources and Services Administration (HRSA)

Measure ID Data Source Data Validation
4.I.C.2
(HRSA)
HRSA Bureau of Clinician Recruitment Service's Management Information Support System (BMISS) HRSA supports the internal management of BMISS, which provides data management services, data requests and dissemination, analytics, data governance and quality, project planning and requirements development, training, and process improvement.
16.III.A.4
(HRSA)
The Ryan White HIV/AIDS Program (RWHAP) Services Report (RSR) The RSR contains client-level data and enables the program to un-duplicate the estimated number of people who received at least one RWHAP-funded service within the reporting period.

This web-based data collection method communicates errors and warnings through the built-in validation process.  To ensure data quality, the program conducts data verification for all RSR submissions.  Recipients receive reports detailing items in need of correction and instructions for submitting revised data.  The web system has an array of reports available through which the grantees and their funded providers can identify data issues that need to be resolved.  In addition, the program provides technical assistance and training during and after the submission period to address quality issues.

29.IV.A.3
(HRSA)
HRSA’s Performance Improvement Measurement System HRSA project officers review and evaluate these data.

Indian Health Service (IHS)

Measure ID Data Source Data Validation
81
(IHS)
IHS Integrated Data Collection System Data Mart IHS conducts a monthly review of reports to check completeness and full participation and to monitor outliers.
MH-1
(IHS)
Indian Health Service Performance and Evaluation System (IHPES) IHS reviews and periodically verifies reports generated from IHPES to assure data quality control and to monitor percent-change outliers, which may indicate error.

National Institutes of Health (NIH)

Measure ID Data Source Data Validation
SRO-2.1
(NIH)
Publications, databases, administrative records and/or public documents   NIH staff review relevant publications, databases, administrative records, and public documents to confirm whether the data sources support the scope of funded research activities.  Articles in peer-reviewed journals, as well as presentations and progress reports, are the most common data sources. Scientific journals use a process of peer review prior to publishing an article. Through this rigorous process, other experts in the author’s field or specialty critically assess a draft of the article, and the paper may be accepted, accepted with revisions, or rejected.
SRO-2.9
(NIH)
Publications, databases, administrative records and/or public documents NIH staff review relevant publications, databases, administrative records, and public documents to confirm whether the data sources support the scope of funded research activities. Articles in peer-reviewed journals, as well as presentations and progress reports, are the most common data sources. Scientific journals use a process of peer review prior to publishing an article. Through this rigorous process, other experts in the author’s field or specialty critically assess a draft of the article, and the paper may be accepted, accepted with revisions, or rejected.
SRO-2.12
(NIH)
Publications, databases, administrative records and/or public documents   NIH staff review relevant publications, databases, administrative records, and public documents to confirm whether the data sources support the scope of funded research activities. Articles in peer-reviewed journals, as well as presentations and progress reports, are the most common data sources. Scientific journals use a process of peer review prior to publishing an article. Through this rigorous process, other experts in the author’s field or specialty critically assess a draft of the article, and the paper may be accepted, accepted with revisions, or rejected.
SRO-4.9
(NIH)
Publications, databases, administrative records and/or public documents   NIH staff review relevant publications, databases, administrative records, and public documents to confirm whether the data sources support the scope of funded research activities.  Articles in peer-reviewed journals, as well as presentations and progress reports, are the most common data sources.  Scientific journals use a process of peer review prior to publishing an article.  Through this rigorous process, other experts in the author’s field or specialty critically assess a draft of the article, and the paper may be accepted, accepted with revisions, or rejected. 
SRO-5.1
(NIH)
Publications, databases, administrative records and/or public documents NIH staff review relevant publications, databases, administrative records, and public documents to confirm whether the data sources support the scope of funded research activities.  Articles in peer-reviewed journals, as well as presentations and progress reports, are the most common data sources.  Scientific journals use a process of peer review prior to publishing an article.  Through this rigorous process, other experts in the author’s field or specialty critically assess a draft of the article, and the paper may be accepted, accepted with revisions, or rejected.
SRO-5.3
(NIH)
Publications, databases, administrative records and/or public documents NIH staff review relevant publications, databases, administrative records, and public documents to confirm whether the data sources support the scope of funded research activities.  Articles in peer-reviewed journals, as well as presentations and progress reports, are the most common data sources.  Scientific journals use a process of peer review prior to publishing an article.  Through this rigorous process, other experts in the author’s field or specialty critically assess a draft of the article, and the paper may be accepted, accepted with revisions, or rejected.

Substance Abuse and Mental Health Services Administration (SAMHSA)

Measure ID Data Source Data Validation
2.3.19K
(SAMHSA)
Treatment Episode Data Set (TEDS)

TEDS compiles client-level data for substance abuse treatment admissions from state agency data systems. State data systems collect data from facilities about their admissions for treatment and discharges from treatment. TEDS is an admission-based system, but it does not include all admissions.

Many facilities that report TEDS data receive state funds or federal block grant funds to provide alcohol and/or drug treatment services.  State laws require substance abuse treatment programs to report publicly funded admissions.  Some states only collect information on publicly funded admissions.  Other states are able to collect privately funded admissions from facilities that receive public funding.  States report these data from their state administrative systems to SAMHSA.

2.3.19L
(SAMHSA)
National Survey on Drug Use and Health (NSDUH) NSDUH uses audio computer-assisted self-interviewing to provide the respondent with a highly private and confidential mode for responding to questions in order to increase the level of honest reporting of illicit drug use and other sensitive behaviors.
2.3.19O
(SAMHSA)
NSDUH NSDUH uses audio computer-assisted self-interviewing to provide the respondent with a highly private and confidential mode for responding to questions in order to increase the level of honest reporting of illicit drug use and other sensitive behaviors.

 
