
Spring 2017 Ignite: Selecting the Teams

Summary: 
We have selected the 6th Round of teams into the HHS Ignite Accelerator, the Department's internal innovation startup program for staff who want to improve the way their program, office, or agency works. This round we received a total of 108 proposals.

We have selected the 6th Round of teams into the HHS Ignite Accelerator, the Department's internal innovation startup program for staff who want to improve the way their program, office, or agency works.

This round we received a total of 108 proposals; from that pool we selected our final 13 teams. This post is meant to illustrate the selection process.

This post builds on previous explanations of the Ignite selection process. Here are the links to our methodologies for Spring 2016, Winter 2015, and Summer 2015.

We received 108 proposals

Each team that submitted a project idea identified a project lead, and we asked for that person's Agency (or as we call it here in HHS, that person's Operational Division [OpDiv]). Below, we break down the 108 proposals by OpDiv.

  • ACF = 5
  • AHRQ = 2
  • CDC = 12
  • CMS = 4
  • FDA = 19
  • HRSA = 15
  • IHS = 9
  • NIH = 27
  • OS = 15

A couple of notes:

  • Previous Rounds saw submission totals of 82, 65, 72, 74, and 42.
  • There were applications from nearly every OpDiv.

These 108 proposals were scored by 20 Reviewers

The reviewers were previous Ignite team members and close collaborators on past projects from the Office of the Chief Technology Officer.

  • Amy Wiatr-Rodriguez, ACF
  • Dan Stowell, CDC
  • Leigh Willis, CDC
  • Jennifer Tyrawski, CDC
  • Carin Kosmoski, CDC
  • Roselie Bright, FDA
  • Bethany Applebaum, HRSA
  • Dan Elbert, HRSA
  • Paul Lotterer, HRSA
  • Vinay Pai, NIH
  • Nick Webber, NIH
  • Malini Sekhar, OS
  • Dan Duplantier, OS
  • Katerina Horska, OS
  • Damon Davis, OS
  • Mark Naggar, OS
  • Bonny Harbinger, OS
  • Kate Appel, OS
  • Kevin McTigue, OS
  • Will Yang, OS

Each proposal was scored 3 times

We worked with 20 reviewers, separated them into 7 panels, and then distributed the 108 proposals across the panels. Thus, each proposal was scored roughly 3 times. We used the average of those scores as the final score in our analysis.

Each reviewer received standardized guidance for scoring proposals. Naturally, there was some variation in each reviewer's scoring - some harsher, some more lenient. Thus, we used z-scores to normalize the scores. See more about that below in the section called "There were three ways a project idea could advance."

We asked each individual to self-identify if they should recuse themselves. There were no identified conflicts and no recusals.

Each proposal was scored based upon defined criteria

Each proposal was scored on a 0-100 range based upon our communicated criteria:

  • The project's alignment to the Office, Agency mission [20 points]
  • The proposal's explanation of the process, product, or system to be addressed. [60 points]
  • How well the proposed solution aligns with the communicated problem. [20 points]

Beyond the scoring rubric, reviewers were asked a yes/no question: do you think this proposal should be considered to become a finalist? Each reviewer was also asked to provide brief comments on the proposal to help justify their score.
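As a rough sketch, the weighted rubric and the averaging of roughly three reviewer scores described above might be combined like this. The function names and the 0.0-1.0 per-criterion ratings are our own illustration, not the actual review tool:

```python
# Hypothetical sketch of the Ignite scoring flow. The criterion weights
# (20 + 60 + 20 = 100 points) follow the post; everything else is illustrative.

WEIGHTS = {
    "mission_alignment": 20,    # alignment to the Office/Agency mission
    "problem_explanation": 60,  # explanation of the process, product, or system
    "solution_fit": 20,         # how well the solution aligns with the problem
}

def rubric_score(ratings):
    """Combine 0.0-1.0 ratings per criterion into a 0-100 score."""
    return sum(WEIGHTS[c] * r for c, r in ratings.items())

def final_score(reviewer_scores):
    """Average the (roughly three) reviewer scores for one proposal."""
    return sum(reviewer_scores) / len(reviewer_scores)

one_review = rubric_score({"mission_alignment": 0.9,
                           "problem_explanation": 0.8,
                           "solution_fit": 1.0})
print(one_review)                        # 86.0
print(final_score([86.0, 72.0, 91.0]))   # 83.0
```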

There were three ways a project idea could advance

The following were ways in which a proposal was able to advance:

  • The top z-scores overall
  • Review Panel unanimously votes to advance it
  • Wildcards picked by IDEA Lab staff

Z-scores are used as a way to control for differences in reviewer scoring. For instance, out of a possible 100 points, reviewers on Panel A might score teams 70 on average, whereas reviewers on Panel B might score teams 85 on average. Panel A is harsher and Panel B more lenient, so the raw scores differ, yet both averages say the same thing: this proposal is typical for its panel. The z-score accounts for that variation. Here's the Wikipedia article on z-scores if you'd like to learn more.
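A minimal sketch of that normalization, using made-up scores around the Panel A and Panel B averages from the paragraph above (the panel data here are invented for illustration):

```python
# Z-score normalization: (score - panel mean) / panel standard deviation.
# A proposal at its panel's average gets a z-score of 0 regardless of
# whether the panel was harsh or lenient.

def z_scores(scores):
    """Normalize a panel's raw scores to z-scores."""
    mean = sum(scores) / len(scores)
    std = (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5
    return [(x - mean) / std for x in scores]

panel_a = [60, 70, 80]  # harsher panel, mean 70
panel_b = [75, 85, 95]  # more lenient panel, mean 85

print(z_scores(panel_a))  # roughly [-1.22, 0.0, 1.22]
print(z_scores(panel_b))  # identical z-scores despite higher raw scores
```

Because both panels have the same spread around their own means, the two lists of z-scores come out identical, which is exactly what makes cross-panel comparison fair.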

While we rely on statistical methods, we also recognize that the process shouldn't be left to math alone. Thus, we have a wildcard slot. Wildcards were picked by Kevin McTigue and Will Yang - the HHS Ignite Program Leads. We combed through every proposal, rigorously analyzing and fervently debating proposals that we felt were eligible for the wildcard category.

We interviewed 34 project ideas for Ignite

During this stage, we further evaluated applicants through 25-minute conversations. Each team pitched to the Program Directors for 5 minutes, then spent 20 minutes discussing the problem they identified, their proposed solution, their background for solving the problem, their team composition, and their general clarity of direction.

We selected 13 project ideas for Ignite

We selected 13 teams for the Spring 2017 Ignite Accelerator. This was the most difficult selection process yet, and we are excited to see those who are willing to test new ideas. We wish we could have accepted more teams, and hope to do so in the future. To all those who submitted proposals, keep pressing on and innovating to better serve the American people.

Posted In: 
Health IT