Just Distribution

Written by: Elizabeth Simmons

Primary Source: Mend the Gap

One of the most enjoyable parts of academic leadership is the chance to give people good news: they will be receiving the conference invitation, increased salary, or research grant they have requested.  But since invitations, raise dollars, and grant funds tend to be limited commodities, hard choices have usually preceded the happy announcement.

Designing a competitive review process carries inherent challenges.  There may be an overwhelming number of proposals, making it difficult to evaluate them all thoroughly in the time available.  Choosing among them may be complicated by the presence of multiple, overlapping constraints on how resources must be allocated. There may also be a surfeit of high-quality applicants, putting you in the uncomfortable situation of having to decline or only partially accommodate many with substantial merit.

At the same time, following a thorough, fair, and efficient selection process is satisfying for the reviewers and gives them confidence in the validity of their decisions. It also makes it easier for the leader to explain how decisions were reached, what factors were taken into account, and what measures were followed to minimize the impact of irrelevancies such as the first letter of a candidate’s name.  At a broader level, creating a process everyone can feel proud of is part of building an institutional culture that invites members to invest their time in the community.

This essay is based on my experiences as a member or leader of salary, grant, and conference review panels for several universities, funding agencies, and professional organizations. It draws on moments of puzzlement, dismay, and success.  The first section outlines commonalities between the different types of reviews that enable them to be united under one rubric.  The following sections step through the various stages of the process, describing features that can make them inclusive and efficient.

For clarity, I will focus on the common situation in which the number of applicants or the diversity of their fields of interest is sufficiently large that one must use a multi-stage selection process to ensure that each applicant gets a fair hearing.  Examples include considering all faculty members in a sizable college for merit raises, reading grant proposals in response to a call that cast a wide net, or allocating seats at a popular workshop or summer school.  While I will speak in terms of a three-stage review process, the principles are adaptable to more or fewer stages.


The general situation you may encounter is as follows: You are part of a selection committee dividing constrained resources among many individuals. Unit leaders have provided prioritized recommendations that have been sorted by disciplinary experts, each of whom represents several units in the process. The meeting at which final decisions will be made includes the disciplinary experts, administrative staff, and an active committee chair. The review is confidential and the stakes are high. Individual candidates take the outcomes very seriously; those not awarded resources may leave the institution or complain to authorities about perceived bias in the process.

Specific manifestations of that abstract framework include:

  • A college raise committee is distributing merit raise dollars based on prioritized recommendations from department chairs that have been sorted by disciplinary division leaders.  Constraints include the total number of dollars available, the prior assignment of some dollars for retention raises, the need to maintain equity across disciplines, and the desire to promote inclusive excellence. The raises initially recommended by department chairs amount, in aggregate, to several times the funds available; candidates are generally meritorious.
  • A conference center admissions committee is distributing admissions places for multiple overlapping workshops based on prioritized recommendations from workshop leaders that have been sorted by field experts. Constraints include the size of the facilities, the temporal overlap between workshops, the need to maintain parity across workshops, and the desire to promote inclusive excellence. There may be on the order of 1,000 applicants for 200-400 available places; most applicants have credentials making them worthy of admission.
  • An agency panel review committee is distributing grant funds based upon prioritized recommendations from mail-in reviewers that have been sorted by panel members.  Constraints include the total number of dollars available, a standard size range for grants, the need to maintain parity by topic, institution type, investigator seniority, or project location, and the desire to promote inclusive excellence. The combined budgets of the proposals receiving high marks from mail-in reviewers greatly exceed the available funds; many proposals are innovative and authored by respected scholars with impressive track records.

As you may imagine, jumping directly into making final selections may lead you into a mess that will be as frustrating to untangle as a snarled string of outdoor holiday lights.

Early stages of review

An effective final selection meeting depends on having clear input from earlier stages of review, a way to visibly record decisions as they are made, and a means of tracking whether the process is respecting constraints.

Long before that meeting, the unit leaders and disciplinary experts should receive clear directions on how to conduct their preliminary stages of review.  Each should be told what criteria to apply and what evidence to consider. Each should be told what to deliver: e.g., a rank-ordered list of candidates with accompanying notes about the reasons for the rankings. Each should be informed about any constraints (e.g., on the length of the rank-ordered list) and about the consequences of ignoring instructions (e.g., excess candidates will be disregarded).

The expectations should be manageable as well as clear. Asking first-stage evaluators to draw fine distinctions can be needlessly exhausting if the final committee decision will be a simple yes/no binary.

I once helped organize reviews under such a system. Confronted by mounting numbers of applications each year, we were dismayed to find that our reviewers were burning out.  Even worse, their score distributions were idiosyncratic and showed little correlation with the eventual performance of those selected for the program. That last observation prompted us to start asking early reviewers to simply designate which applications were in the top 25 percent and comment on what raised those few above the crowd. This made their workload far more reasonable and actually increased the value of the information they provided.

Once the initial reviewers (unit leaders) have done their work, each disciplinary expert should read the input from their group of unit leaders and assign each candidate’s case a numerical priority placing it in a tranche (1st tranche = top priority, 2nd = next priority, and so on). The disciplinary experts might also make a preliminary suggestion about the resources to be assigned to each individual: e.g., size of raise, type of grant, or likelihood of workshop admission. This information should be entered into a common database or worksheet that is forwarded to the administrative staff for compilation. Having these recommendations clearly organized beforehand will set the stage for a successful final meeting.

If the disciplinary experts undertake their review in advance of the final selection meeting, they can ask unit leaders who have provided non-responsive or incomplete recommendations to make revisions. One can point out, for instance, that submitting too many names or refusing to prioritize will yield inferior outcomes, since the candidates placed in the top tranche will be selected by someone who is less knowledgeable about the individual cases than the unit leader.


At the start of the final selection meeting, the chair should review the process with the entire committee and answer any outstanding questions.  A quick discussion of equity and diversity issues is appropriate here, so that all participants understand the organization’s philosophy as it relates to the work at hand.

During the meeting, key data should be made continuously available to all in attendance. This includes the disciplinary experts’ prioritized suggestions on all cases, the committee’s provisional recommendations, the running totals for resources distributed thus far, and statistics related to equity and diversity.  Keeping such information in view lets the committee make an informed decision the first time each case is considered.  In particular, the running totals can head off those awful moments when you realize that the first half of the candidates have consumed all available resources… implying that you must redo everything you thought you had accomplished over the last several hours.

To ensure that applicants from each department, field, or workshop get fair consideration, the distribution of resources should be handled tranche-by-tranche, according to the priority assigned by the disciplinary experts.  Because each tranche cuts across units, the top candidates from every unit will be considered before the lower-ranked candidates from any unit. The highest-priority tranche will automatically include candidates from each unit; by design, this tranche should be kept small enough to ensure it cannot swallow all of the resources.  In other words, if each unit and disciplinary expert can only recommend a couple of individuals for first priority, the committee should be able to give all of the highest-ranked people raises, admission, or grants regardless of their field or affiliation.

The lower-priority tranches will be considered, sequentially, after the first has been handled. Unlike the top tranche, any lower one might not include candidates from every unit; smaller units or those with weaker talent pools will progressively drop out. While units with larger numbers or deeper benches will receive a larger share of resources, following this process ensures they will not do so at the expense of the top cases from other units.  It is also harder for aggressive unit leaders to game the system by submitting too many candidates or refusing to prioritize.
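For readers who maintain the committee’s worksheet in software, the tranche-by-tranche pass with running totals can be sketched in a few lines of Python. The `Candidate` fields, the `allocate_by_tranche` function, and the simple budget check are illustrative assumptions standing in for whatever database or spreadsheet a real committee uses; an actual process would also record equity statistics and provisional notes alongside each decision.

```python
# A minimal sketch, assuming each candidate carries a unit, a tranche
# number assigned by the disciplinary experts, and a requested amount.
# All names here are hypothetical, not part of any real committee tool.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    unit: str
    tranche: int    # 1 = top priority, as assigned by disciplinary experts
    request: float  # e.g., proposed raise dollars or grant budget


def allocate_by_tranche(candidates, budget):
    """Consider candidates one tranche at a time, across all units,
    keeping a running total so the budget is never overspent."""
    awarded, spent = [], 0.0
    for tranche in sorted({c.tranche for c in candidates}):
        # Every unit's tranche-N candidates are weighed before any
        # unit's tranche-(N+1) candidates come up for discussion.
        for c in (x for x in candidates if x.tranche == tranche):
            if spent + c.request <= budget:
                awarded.append(c.name)
                spent += c.request
    return awarded, spent


candidates = [
    Candidate("A", "economics", 1, 100.0),
    Candidate("B", "linguistics", 1, 100.0),
    Candidate("C", "economics", 2, 100.0),
    Candidate("D", "linguistics", 2, 100.0),
]
awarded, spent = allocate_by_tranche(candidates, 250.0)
# Both top-tranche candidates are funded before either unit's
# second-tranche case is considered; the budget check then stops
# further awards.
print(awarded, spent)
```

In this toy run, the two tranche-1 candidates from different units are funded first, illustrating the essay’s point that thin top tranches guarantee every unit’s best cases a hearing before resources run out.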


During the meeting, the committee should periodically examine the overall results from several points of view to make sure the outcomes remain in balance.

One question to ask is whether the distribution of resources is congruent with the priority assigned to cases: are the largest raises or favored workshop spots going to those in the top tranche?  One might also double-check that resources are being fairly distributed across disciplinary areas. While the system described here should place all disciplines on a more equal footing than would a process ordered solely by disciplinary affiliation, it is still possible that unit leaders or disciplinary experts in one area might have made more generous recommendations than their colleagues.  That is, although one is not considering all economics applicants before all linguistics applicants (which might direct more resources to the former), if the unit leaders or disciplinary experts in economics have recommended higher raises or larger grants than those in linguistics, unintended differences could still creep in.

One can similarly review the results according to other equity and diversity measures.  Doing so along the way makes it easier to keep the process in balance than if one waits till the end.  It also emphasizes that inclusion and equity are central to the process.  Moreover, participants can visualize the cohort of recipients they are building and jointly compare it to their common understanding of the selection process’s goals.

Throughout the meeting, the chair should monitor the pace and focus so that the committee rolls quickly through obvious cases and spends most of its time on the challenging ones.  After all, the committee members’ time, energy and attention will be limited.  If possible it is better to go through a given stage or tranche all at once to foster consistency; keeping the tranches “thin” will make this easier.


A final look at the outcomes lets the committee wrap up loose ends, correct imbalances, and review what has been accomplished.  Confirming that they have abided by constraints, focused on appropriate criteria, afforded different fields or workshops equal consideration, and maintained an inclusive process will give participants a sense of satisfaction with their efforts and confidence in the results.

This leaves the chair well prepared for wrapping up the review.  It is easier to write a summary report about a process and outcome that you have already thought through in detail.  Having followed a just selection procedure also softens the challenges inherent in notifying those who did not receive all that they had requested.  Finally, it is satisfying to be able to tell a successful applicant which aspects of their portfolio led to the good news.

Dr. Elizabeth H. Simmons is Dean of Lyman Briggs College and University Distinguished Professor of Physics in the Michigan State University Department of Physics and Astronomy.