A training session hosted by ASSEDEL is now available on the official ASSEDEL YouTube channel.
On April 23rd, ASSEDEL hosted a thought-provoking training session titled “The Role of Automation in International Human Rights: An Analysis of Fikfak & Helfer.” The session, which explored the growing integration of Artificial Intelligence (AI) into international judicial systems, particularly the European Court of Human Rights (ECtHR), was also livestreamed and is now available for replay on ASSEDEL’s YouTube channel.
The session built upon the critical work of scholars Fikfak and Helfer, who examine the challenges and opportunities presented by AI in the realm of human rights adjudication.
A Court Under Pressure: The Current Landscape
The ECtHR is currently facing an unprecedented influx of applications, a situation worsened by the departure of the Russian judge following Russia’s exit from the European Convention on Human Rights in 2022. With Russian cases now redirected to Geneva and staff shortages growing, the conversation around automation and AI has become increasingly urgent.
Can AI Help? And Should It?
The training explored various ways AI might support or streamline judicial procedures:
- Transcribing handwritten complaints into digital formats;
- Translating legal documents, though often at the cost of emotional nuance essential in human rights cases;
- Clustering similar cases based on topic or complexity;
- Building decision-tree algorithms to evaluate domestic remedies, damages, admissibility, and more.
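To make the last idea concrete, here is a minimal, purely illustrative sketch of what a rule-based admissibility decision tree might look like. It is not drawn from the session or from any real ECtHR tool: the field names, criteria, and routing labels are simplified assumptions, and a real system would only ever flag cases for human review, never decide them.

```python
from dataclasses import dataclass


@dataclass
class Application:
    """Hypothetical, highly simplified record of an incoming application."""
    domestic_remedies_exhausted: bool
    months_since_final_decision: int
    significant_disadvantage: bool


def screen_admissibility(app: Application) -> str:
    """Walk a small decision tree over formal admissibility criteria.

    Returns a routing label for human reviewers; it never issues a
    final decision on its own.
    """
    if not app.domestic_remedies_exhausted:
        return "flag: domestic remedies not exhausted"
    # Protocol 15 shortened the application deadline to four months (2022).
    if app.months_since_final_decision > 4:
        return "flag: filed outside the four-month time limit"
    if not app.significant_disadvantage:
        return "flag: no significant disadvantage claimed"
    return "route to judicial review"
```

Even this toy example illustrates why participants insisted on human oversight: every branch encodes a legal judgment that a lawyer, not the algorithm, must ultimately own.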
Despite the potential efficiency gains, participants emphasized that human oversight is essential. The values at stake, the irreversible nature of many ECtHR decisions, and the need for accountability make it clear that AI cannot, and should not, operate independently in such a sensitive context.
Technical Challenges & Ethical Dilemmas
One paradox addressed was the Court’s continued reliance on paper-based systems. Many applicants still submit complaints via traditional mail, which must then be manually entered into digital systems by court staff. This raises important questions about accessibility and digital equity.
Another major concern is the lack of technological expertise among judges and the risk of automation bias—the tendency to accept algorithmic outcomes without sufficient scrutiny. This highlights the urgent need for data literacy training and transparent algorithmic accountability.
Key Questions and Concerns Raised
- How do we ensure human review of AI-generated decisions?
- Who is responsible for assessing algorithm quality, and how should that be done?
- How do we prevent the misuse of AI tools in high-stakes legal contexts?
- How can we educate legal professionals and the public to engage critically with data-driven decision-making?
Toward Responsible Innovation in Human Rights Justice
The session underscored that AI’s introduction into the judicial realm is not merely a technical upgrade but a profound transformation with ethical and legal implications. Careful, inclusive, and transparent development is needed to ensure that automation serves justice rather than undermining it.
For those interested in exploring this important topic further, the full recording of the session is available on ASSEDEL’s official YouTube channel, offering a valuable resource for researchers, practitioners, and students alike.