Exploring Opportunities and Human Rights Implications of AI and Modern Technologies in Criminal Justice

City Law School Symposium Report

By Sekander Zulker Nayeen, Laura Vialon, Cheryl Dine and Talha Boyraz

On Tuesday 6 May 2025, Professor Dimitrios Giannoulopoulos organised an influential symposium entitled ‘AI and Modern Technologies in Criminal Justice: Opportunities and Human Rights Implications’, held at the City Law School, City St George’s, University of London. The symposium focused on the transformative impact of artificial intelligence (AI) and other emerging technologies on the criminal justice system, while critically examining the accompanying human rights considerations.

Ms Penelope Gibbs presenting

Panel 1 – Live Facial Recognition, technologically enhanced investigative methods and suspects’ digital rights

The first panel was chaired by the Head of Department at The City Law School, Professor Dimitrios Giannoulopoulos. He stated that the symposium would bring together three strands: criminal justice, technology and human rights, and hoped it would offer insight into these topics for everyone joining in person and online. He then introduced the panellists and invited them to begin the discussion of their topics.

Madeleine Stone, Senior Advocacy Officer at Big Brother Watch, presented on Live Facial Recognition and Human Rights. She explained that facial recognition is used in both the private and public sectors: retailers, for example, deploy it as private entities, while police forces use it as public bodies. It takes two forms: facial recognition for identification and facial recognition for verification.

Police usually use facial recognition for identification and verification in three ways: live facial recognition, retrospective facial recognition, and operator-initiated facial recognition. With live facial recognition, police scan faces in real time in public spaces, such as streets and events, and match them against a watchlist; the system can immediately create alerts when someone on the list is detected. Retrospective facial recognition, common in the US but also used in the UK, involves matching images from CCTV or social media against extensive police databases. Operator-initiated recognition allows officers to take photos in the field and instantly compare them against existing image databases.

Ms Stone commented that while this technology is deployed in the name of public safety, there are growing concerns about its legality, bias, and impact on civil liberties. Currently, there is no specific legislation governing its use in the UK. Police forces operate under individual policies, and databases often include images of people who have not been convicted of any crime. Critics argue that this mass biometric surveillance equates to a police identity check on everyone, without their consent or knowledge.

The human rights implications are significant. Legal challenges have been mounted on the grounds of Article 8 (right to privacy), Article 10 (freedom of expression), Article 11 (freedom of assembly), and Article 14 (protection from discrimination) of the European Convention on Human Rights (ECHR). Additionally, bias in facial recognition algorithms has been documented, particularly concerning race and gender.

Despite improvements in accuracy, the broader issues of misuse, discrimination, and chilling effects on protest and free expression remain unresolved. Campaigners and rights organisations continue to call for clear legal frameworks and accountability mechanisms before facial recognition can be considered a lawful or ethical policing tool in the UK.

Penelope Gibbs, Director of Transform Justice, discussed how legal assistance in police custody changed during the COVID-19 pandemic. Before COVID, all legal advice and support for suspects in custody was provided in person. However, during the pandemic, concerns over virus transmission in confined custody settings led lawyers to stop attending in person. As a result, legal support shifted to remote methods—via Teams, Zoom, or often just by phone.

In the UK, individuals detained in police custody—often children or vulnerable adults—are entitled to legal advice, which can be delivered by a lawyer or trained legal representative, who doesn’t need to be a qualified solicitor. Appropriate adults (family members, or volunteers from agencies such as the justice service) can also be appointed to support minors and vulnerable adults. While appropriate adults continued attending in person during the pandemic, lawyers often gave remote advice, despite legal provisions in the Police and Criminal Evidence Act favouring in-person presence.

Transform Justice and partner organisations raised concerns early on about whether suspects were receiving adequate legal protection under this remote system. Observational and outcome data were lacking, and the nature of custody meant the process was closed to public scrutiny.

Research later found that in around 40% of cases, legal advice was provided by phone alone, making confidential and effective pre-interview advice challenging. Some suspects, like those with ADHD, rejected remote consultations altogether, raising Equality Act concerns. Suspects often expected lawyers to be physically present and became frustrated or disengaged when they weren’t.

Although some lawyers appreciated not having to travel, 42% reported difficulty accessing necessary materials, and privacy issues arose, such as children being present or background noise during remote calls. Gibbs concluded by emphasising the urgent need for observational research and outcome data before remote custodial legal advice is adopted more broadly.

Alex Tinsley of Doughty Street Chambers addressed the significant risks and evolving legal landscape surrounding international cooperation on electronic communications data. He began by highlighting how well-intentioned security policies, like facial recognition, can be misused by authoritarian regimes to target dissidents, setting the stage for broader concerns about digital surveillance.

Tinsley explained the shift from traditional mutual legal assistance (MLA)—where states request evidence through official channels—to direct requests to service providers. Instruments like the US-UK CLOUD Act Agreement (2019) have allowed UK authorities to issue over 20,000 data interception orders directly to US companies, mostly for intelligence and disruption, not criminal prosecutions. This direct access bypasses safeguards and creates significant privacy concerns.

The Second Protocol to the Budapest Convention offers limited protections, such as requiring judicial authorisation and state notification for certain data requests, but uptake remains low. The EU e-Evidence regulation includes provisions for notifying data controllers, but often exempts informing affected individuals.

Tinsley turned to the UN Cybercrime Convention, finalised in 2024 amid heated negotiations. While it establishes international obligations for surveillance and MLA across a broad range of offences, it lacks strong human rights safeguards. States will be able to compel real-time interception and access to communications with minimal oversight. Civil society has flagged the absence of key protections—like purpose limitation, individual notification, and judicial authorisation—as serious risks.

He warned of dangers, including transnational repression, citing abuses like Pegasus spyware. Whistleblowers, journalists, and lawyers could become unintended targets under vague cybercrime definitions. Tinsley concluded by emphasising the need for transparent MLA processes, democratic scrutiny, and robust safeguards to prevent abuse and uphold freedom of expression and privacy rights.

Dr Peristeridou presenting – right to left HHJ Angela Rafferty KC, Dr Ashlee Beazley, and Prof Susan Blake

Panel 2 – Human rights/procedural safeguards in virtual proceedings

Chair: HHJ Angela Rafferty KC.

In the second panel, Dr Ashlee Beazley, a Postdoctoral Research Associate at KU Leuven, shared insights from her recent work on a project titled DigiRights: Digitalisation of Defence Rights in Criminal Proceedings. This research investigates how certain defence rights are applied in a digital context within six European countries—Belgium, Croatia, Estonia, Germany, Hungary, and Italy—alongside relevant EU law. The project examines both the legal provisions and the practical realities of implementing these rights in criminal proceedings. One of the key areas explored was the right of the defendant to be present online during proceedings. The findings revealed that this right is fully available at all stages of the trial in Estonia, Italy, and Hungary.

However, in Belgium, Croatia, and Germany, this right is either limited to certain stages or only available under specific circumstances. Another important focus of the project was the right to legal assistance provided remotely. While this right exists in all six jurisdictions and is applicable during both the pre-trial and trial phases, the researchers observed that it often lacks clarity, particularly in the pre-trial stage. The project also assessed the right to interpretation via remote means. It was found that remote interpretation services are generally available in all jurisdictions, ensuring that language barriers can be addressed even when the parties are not physically present. Lastly, the research looked into the right to access the case file remotely. This right is gradually being introduced in most of the countries studied.

Following Dr Beazley, Dr Christina Peristeridou, Assistant Professor of Criminal Law at the Maastricht Institute of Criminal Sciences and a lawyer registered with the Thessaloniki Bar Association, gave a presentation introducing her upcoming research project on effective participation in virtual criminal trials.

She opened her talk with the famous Star Trek quote, “Beam me up, Scotty,” as a metaphor for the transition from physical to virtual courtrooms. Dr Peristeridou noted that the use of virtual hearings in criminal proceedings has increased significantly. Some of the assumed benefits are reduced costs, improved efficiency, convenience, and better access to justice for geographically distant or vulnerable individuals. Some studies even suggest that defendants themselves may prefer virtual formats for certain types of hearings, such as bail hearings.

Dr Peristeridou stressed that the implications of virtual participation go far beyond logistics. Her research will explore whether physical presence is necessary for legitimacy in criminal trials, how virtual formats affect communication and engagement, and whether the courtroom should be understood as a physical space or a service. She raised questions about how defence rights can be safeguarded in online hearings, and whether these developments require a re-conceptualisation of existing rights.

Her upcoming project will be structured in three phases. The first phase involves compiling an inventory of existing socio-legal research on virtual participation. The second phase will include empirical fieldwork, consisting of interviews with defendants in the Netherlands and England, as well as observations of online hearings. The third phase will focus on identifying the normative conditions necessary to ensure effective participation in virtual criminal trials.

The panel’s final speaker was Prof Susan Blake, Associate Dean in Education at The City Law School, City St George’s, University of London. Her presentation, titled Online Courts, AI and the Law Curriculum, explored how legal education must respond to the increasing role of technology and AI in the justice system.

Prof Blake began by outlining recent developments, including the UK’s Common Platform, the ongoing Court Reform Programme, and the use of AI tools such as ChatGPT, Lexis+ AI, Westlaw Edge, and CoCounsel. She highlighted the growing integration of AI in legal tasks like research, drafting, case management, and even judgment writing.

However, she cautioned that AI’s capabilities are based on existing data and probabilities, not accuracy or truth. While it can suggest arguments or predict outcomes, it has limits in reasoning, may embed bias, and sometimes generates convincing but incorrect advice. This raises significant concerns for the rule of law, especially regarding fairness, transparency, and equal access to justice. Prof Blake stressed that legal AI tools must be developed with clear goals aligned with a well-functioning justice system and that designers must ensure the rule of law and human rights are upheld. She also discussed access to justice challenges, including the risk of unrepresented individuals relying on flawed AI outputs.

Turning to legal education, she asked whether technology and AI should be fully integrated into law curricula or treated as an additional skill. She emphasised the need for future lawyers to understand how AI works, verify its outputs, ensure data security, and uphold ethical standards. Finally, she noted that legal training at the City Law School is already adapting to these technologies: the School trains Bar students to work on laptops in Criminal Advocacy classes and is using generative AI licences to assess the implications of AI tools.

Panel 3: Modern technologies and AI in criminal evidence and sentencing

Chair: Prof David Ormerod, Faculty of Laws, UCL (former Criminal Law Commissioner for England and Wales)

Digital technologies as evidence and the presumption of reliability
By Micheál Ó Floinn, School of Law, University of Glasgow
AI in sentencing
By Dr Miri Zilka, Information Engineering, University of Cambridge

The third panel focused on the complex issues surrounding algorithmic tools used as evidence in the criminal justice system, and on the consequences of that trend. The discussions centred on the US and UK jurisdictions and tackled critical questions about fairness, transparency and reliability.

As algorithmic systems are increasingly being utilised across the criminal justice system, this panel was particularly timely. From predictive tools used in arrest decisions to risk assessment instruments used at sentencing, these emerging technologies promise, in theory, to improve decision-making by providing more objective data. The reality, however, may be far more complicated.

One of the concerns raised was bias: how biased and “noisy” the data feeding these algorithms can be. For example, the data often reflects arrests rather than actual offences. Because arrest patterns are deeply affected by systemic factors such as race/ethnicity, geography and socioeconomic status, algorithms trained on such data have the potential to reinforce these inequalities.

An example was given concerning drug offences. Research has shown that these tools can overestimate the quantity of drugs sold based on phone data. The assumptions behind these calculations are often obscure, unvalidated, and especially hard to challenge in court. Despite this, the tools continue to be used, creating a troubling precedent where flawed algorithmic output can become accepted as legal evidence. Risk assessment tools, commonly used to predict the likelihood of reoffending, were presented as another area of concern. These tools typically rely on past arrest data, not actual crime data, which means that they may simply predict who is more likely to be ‘arrested’ rather than who is likely to be guilty of an ‘offence’.

In the UK, a lack of transparency in court data was raised as another point of discussion. This lack of transparency makes it difficult to evaluate sentencing practices or the impact of technology. While efforts from the Ministry of Justice, such as ‘Data First’, are steps in the right direction, other crucial information, such as victim impact statements or mitigating circumstances, is often missing from the data.

To bridge this gap, one innovative solution presented was the semi-automated extraction of data from court transcripts. By combining human expertise with machine assistance, researchers can query specific legal questions and extract relevant information with greater accuracy and speed. This approach allows the production of customised datasets tailored to particular legal inquiries without compromising data quality.

The key takeaway from panel 3 was thus that the use of technology in the courtroom is not inherently good or bad. However, when it is left unregulated, poorly understood and potentially systemically biased, it poses significant risks to access to justice. Until there is solid evidence that these tools genuinely improve the outcomes of criminal cases, we risk automating these flaws at scale.

left to right – Prof Basak Cali, Dr Pranava Madhyasta, and Prof Peter Hungerford-Welch

Panel 4: AI development and international human rights safeguards

In the fourth panel, Dr Madhyasta, Senior Lecturer in Computer Science at City St George’s, University of London and Honorary Research Fellow at Imperial College London, spoke about current developments and future capabilities of AI.

He first introduced his work, which focuses on multimodal machine learning (i.e. machine learning across multiple signal modalities such as text, vision and speech) and grounded representation learning on natural languages, for applications such as multimodal machine translation, syntactic parsing, and grounding for language interaction.

He then went on to highlight the unprecedented pace at which research, production and adoption unfold when it comes to AI. Normally, this process is rather slow; with AI, a remarkable acceleration can be witnessed. New research comes out, is immediately turned into a product and, within months, is adopted by millions of users. This also means that ethical and societal frameworks necessarily lag behind. The models developed are for the most part black boxes, with no insight into how they have been trained or how they generate their results. The acceleration in capabilities, performance and computational demands is not accompanied or restrained by ethical, data privacy, copyright or safety considerations.

Malpractice with data is common at Microsoft, OpenAI and Meta alike. There are serious issues with systems creating avatars that are difficult to detect as such, with deepfakes, and with biases in AI outputs. AI is trained on the most likely and most frequent data points; one could say it has a bias towards “mainstream” or majority views, which is very problematic in a democracy that depends on pluralism. Particularly worrying is the sheer amount of data about our behaviour, interactions, preferences, content creation, prompts and fed-in information that is amassing in the hands of five or six private mega-companies dealing in AI, social media or search engines, such as Google. These entities have more money and more data, and therefore more power, than states, while having no real accountability to the public.

Dr Madhyasta finished his talk with the potential opportunities for regulatory intervention to address the concerns raised. Fittingly, Prof Çali from the Bonavero Institute of Human Rights at the University of Oxford then informed the audience about the Council of Europe’s 2024 Framework Convention on AI and Human Rights, Democracy and the Rule of Law [AI Framework Convention].

Prof Çali structured her talk around four themes: the rights covered, the sector/use/context, the duty bearers, and the model of regulation. The first positive take-away was that the AI Framework Convention protects all existing international and European human rights and, in its Art. 5, even goes beyond them to safeguard the “integrity, independence and effectiveness of democratic institutions and processes”, such as the separation of powers.

Prof Çali drew a distinction here with the EU AI Act 2024, which is narrower. Like the AI Act 2024, however, the AI Framework Convention excludes national security and defence from its scope. The duty bearers, she noted critically, are only public authorities or private entities acting on their behalf. When it comes to private entities, there is only a voluntary opt-in mechanism, deviating from standard human rights treaties, which foresee a duty on states to regulate private entities.

The regulatory mode chosen is a risk-impact assessment model which, unlike the AI Act 2024, applies uniformly to every AI system that is developed, deployed, commissioned, etc., assessing both actual and potential impact. The AI Act 2024, in comparison, contains a stepped model with different obligations based on risk categories defined by the Act. The AI Framework Convention needs five ratifications, including three from Council of Europe member states, to enter into force.

In summary, Prof Çali lauded the breadth of the Convention, while criticising the gap when it comes to regulating private actors and the low incentive for states to go beyond the text of the Convention at this point. Her implementation advice for regulators was to create an independent body composed not only of legal experts but also, for example, computer and social scientists, and to set up principled stakeholder engagement.

Prof Brandon Garrett

Keynote Address: ‘Due process and fairness in the time of AI’

The keynote address on ‘Due process and fairness in the time of AI’ was given by Prof Garrett from Duke University, L. Neil Williams, Jr. Distinguished Professor of Law and Director of the Wilson Centre for Science and Justice.

His keynote took up themes and cases from his most recent monograph, “Defending Due Process: Why Fairness Matters in a Polarised World”, published this year. The case studies he presented showed how unquestioned use of AI software led to glaring violations of due process or fair trial norms. One example concerned the right to fair notice before the state withdraws sensitive entitlements, such as Medicaid benefits: the software had wrongly concluded that the citizen in question was receiving benefits from another programme, yet the notice contained no further explanation and no information about a right to a hearing. In another case, software was used to dismiss school teachers for poor performance, although it was unclear how their scores had been calculated.

Prof Garrett then explained the difference between the black-box machine learning tools that characterised those cases and tools that are interpretable and explainable. He asked, quite frankly, how witnesses or judges could explain their conclusions when they rely on machine learning software whose workings they do not understand. So far, there have been only a few US cases about the use of such black-box machine learning tools, but Prof Garrett sees a high need for clearer judicial guidance. He also emphasised the lack of rigorous testing of these tools. In future work, Prof Garrett will assess the use of generative AI, including the potential for it to surface in administrative and criminal proceedings as false evidence.

