Judicial performance evaluation (JPE) programs have existed across the U.S. for almost 50 years and were designed to assess the job performance of judges. These assessments center not on case outcomes but rather on desirable judicial qualities: legal knowledge, impartiality, written and oral communication, judicial temperament, and administrative capacity. All JPE programs share the goal of helping judges improve at their jobs. In many states, JPE programs also serve as one component of a broader merit selection plan for judges, educating those responsible for deciding whether judges should be retained—typically elected officials or voters—about the performance of the judiciary.
Since its inception, IAALS has been at the forefront of efforts to bolster JPE, working directly with jurisdictions across the country to develop and implement best practices. Our JPE 2.0 project, launched in 2021, is the most recent iteration of these efforts. This project aims to modernize judicial performance evaluation by thinking creatively about how to maintain the core objectives of JPE while also incorporating emerging best practices and responding to legitimate concerns about the process. All of this is encompassed by IAALS’ overarching goal of improving public trust and confidence in the justice system through greater transparency and accountability for the judiciary. This project is supported and guided by our JPE 2.0 Task Force, which comprises experienced JPE administrators and experts from across the country.
The project kicked off with a survey of 658 judges in eight states with JPE programs to solicit judges’ perspectives on the value and administration of JPE in their states. The results of that survey will be released in a report in early 2024.
In May 2022, IAALS hosted an in-person JPE 2.0 convening in Denver. In preparation for that convening, Professor Jordan Singer, chair of the JPE 2.0 Task Force, prepared a “Judicial Performance Evaluation in the States: The IAALS JPE 2.0 Pre-Convening White Paper,” which outlines the history and function of JPE in the United States. The convening brought together stakeholders from across the country to identify key concerns related to JPE and brainstorm innovative solutions.
The convening yielded robust discussions about evaluating judges and raised crucial questions about the future of JPE. IAALS further explored those questions in a series of four virtual convenings between July and October 2023, featuring a diverse group of participants serving in a wide range of roles across the country, including judges, JPE commissioners, attorneys, court staff, and researchers. The conversations in these convenings focused on two core areas: 1) trust and confidence in JPE, and 2) the evolving role of the modern judge and the criteria used to evaluate them.
Gathering wide-ranging perspectives from such a diverse set of participants allowed us to get a holistic view of what is working and what is not when it comes to JPE. Several key themes and takeaways emerged:
JPE is important and necessary.
Although judges and other stakeholders have critiques of JPE, the general consensus is that JPE is vital to ensuring accountability and high-quality performance for judges, and should be preserved.
We need to modernize JPE.
The way evaluation criteria are defined and assessed needs to reflect the modern judicial role. The judiciary is changing, the role of judges is changing, and JPE needs to change with them. For example, the criteria used to evaluate judges were created in 1985. Although many of those criteria are still applicable, the way they are defined and assessed does not always reflect the current reality of the judicial role. The goal of these evaluations is to capture what makes a good judge and a good judiciary. We need to re-evaluate how we define and measure what that means in light of shifting views on desirable judicial attributes and new expectations for judges, including those related to technology, community involvement, and self-represented litigants. Any criteria used also need to be clearly defined, measurable, and consistent with what judges are doing on a daily basis.
Assessment tools need to be updated. In addition to modernizing the criteria, it is clear we need to modernize judicial assessment tools to improve fairness of and engagement with the JPE process. For example, surveys are often a primary assessment tool, but they can rely on outdated distribution techniques that fail to capture feedback from important stakeholders. States are working hard to modernize these methods to reach more people and improve the quality of data. Effective JPE programs will continue to consider the ways in which the judicial system has changed and rethink their approach accordingly.
We need to ensure a fairer process.
Bias interferes with fairness. Bias in evaluations is a key concern across jurisdictions and across many aspects of the JPE process. This includes bias by survey respondents as well as by the commissioners assessing judicial performance. The subjective nature of many evaluation tools makes it difficult to account for implicit bias. When survey tools or results are biased, it negatively impacts female judges and judges of color, delegitimizes the process, and makes judges less likely to implement the resulting feedback. Many states are evaluating how to limit the opportunity for bias in survey responses and how to train commissioners to mitigate their own implicit bias. Additionally, the criteria used to evaluate judges (for example, “judicial temperament”) can be interpreted differently by different people. It is important that commissions use clear definitions and measurable standards for assessment, and that the survey language reflects this through specific, targeted questions aligned with the criteria.
Politicization and weaponization of data interfere with fairness. In addition to bias, the politicization and weaponization of the evaluation process—for example, to get judges removed from the bench following a controversial ruling or due to political disagreement—is also a concern many states are working to address. Survey responses tend to be more critical now than they have been in the past and can feature vicious personal attacks based on a judge’s identity, politics, or the outcome of a specific case rather than constructive, actionable feedback based on their performance.
There is room for improvement when it comes to the quality of data.
Surveys are limited as an assessment tool. Although surveys are a primary assessment mechanism in many states, they have limitations as an evaluative tool. For example, survey response rates are low across jurisdictions. When response rates are low, it is difficult to identify patterns, and negative comments can carry disproportionate weight. Increasing the number of responses is critical for improving the overall data, as is being thoughtful about the timing of survey distribution. In addition, commissions need to exercise discretion regarding whether and how to filter inappropriate comments from survey respondents.
Diverse data points are important. Given the challenges with surveys, commissions need to diversify their methods of assessment, such as engaging in courtroom observation, interviewing the judge, soliciting peer evaluations, and reviewing written opinions. Given concerns about bias, it is also important to incorporate more objective data, such as case management data, to supplement subjective responses. While no assessment method is perfect, taken together they can provide a more complete picture of a judge’s performance.
Increased engagement is needed to improve legitimacy of the process.
Improved attorney engagement is critical. JPE programs are working to better engage all stakeholders, including attorneys, who have important information about judicial performance. Some attorneys are reluctant to engage because they fear retaliation from judges over critical feedback, which deters them from participating in surveys. States are employing diverse measures to better engage attorneys, such as educating them about the confidential nature of survey responses and providing training on the importance of high-quality feedback for judges.
There are opportunities to better engage judges. States are similarly considering how to more effectively engage judges and provide a supportive structure for them to implement feedback, such as through training and mentorship.
Self-represented litigants need more of a voice. There are crucial stakeholders, such as self-represented litigants, that are underrepresented in the JPE process because they are difficult to reach. Capturing their feedback will require courts to more reliably collect and disseminate contact information so commissions can reach those individuals when the time comes for feedback.
Informing the public is challenging but crucial. Finally, the evidence is mixed regarding the extent to which the public is meaningfully engaging with JPE data. Commissions in states where JPE is tied to judicial retention are working to improve transparency. Informing the public is a key goal of commissions, but it is also challenging to capture public attention and adequately convey information. Public education is an ongoing effort that requires the involvement of many stakeholders.
Trust by all involved parties is essential to an effective JPE process.
Although they acknowledge the importance of JPE, judges, attorneys, and the public are all wary of the process, for different reasons. For example, while many judges value the opportunity for feedback and self-improvement, they also have concerns about whether the process is evidence-based, whether it is biased, and whether it is being weaponized for political purposes. When judges do not trust the process, they are less likely to change their behavior based on feedback produced by the evaluations. It is crucial that JPE programs be responsive to the concerns of all stakeholders and work to regain trust. Education, communication, and transparency are key to these efforts.
IAALS’ JPE 2.0 Task Force will meet at the end of the year to begin developing evidence-based recommendations and best practices based on the information gathered throughout the project. These recommendations will be published in a culminating report in 2024 and will serve as a roadmap for states seeking to advance and modernize their JPE programs. And yet the implications of this research extend far beyond JPE itself. They speak to how the role of judges has evolved over the years, how we think about and evaluate quality judges, the importance of transparency and engagement to the legitimacy of courts, and how quality judges ensure access to justice and the rule of law. Thinking creatively about how to hold judges accountable to the highest standards of performance is essential to maintaining public trust and confidence in our courts.