Charting a path from opacity to explainability.

A research initiative focused on embedding
TRACE: TRansparency, ACcountability, and Explainability
into the core of algorithmic systems.

Download Concept Note (PDF)

The "Black Box" Problem

As artificial intelligence systems become more powerful and autonomous, their decision-making processes often become opaque "black boxes." This lack of clarity is not just a technical issue; it's a fundamental barrier to trust, creating significant risks in critical sectors like finance, healthcare, and public policy. How can we trust a decision we cannot understand?

Introducing Project TRACE

Project TRACE is a pre-doctoral research initiative designed to move beyond theory and create a practical, robust framework for developing inherently trustworthy AI.

Our mission is to architect algorithmic systems that are not only powerful but also transparent by design, accountable by structure, and explainable to diverse stakeholders. We aim to replace the "black box" with a "glass box," enabling clear insight without sacrificing performance.

Transparency

Making the invisible visible. Transparency provides a clear view into the models, data, and algorithmic mechanics. It's the foundational layer for interrogation and understanding.

Accountability

Ensuring responsibility from design to deployment. Accountability establishes clear lines of ownership and governance, making it possible to address errors, bias, and unintended consequences.

Explainability (XAI)

Translating complex logic into human-centric narratives. Explainability provides faithful and understandable reasons behind specific predictions, enabling users to question, trust, and manage AI-driven outcomes.
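
To make this concrete, the sketch below shows one widely used post-hoc explanation technique, permutation feature importance. It is a minimal illustration assuming a scikit-learn workflow; the dataset and model are placeholders, not a statement of the project's eventual toolset.

```python
# Minimal sketch: post-hoc explanation via permutation importance.
# Assumes a scikit-learn workflow; dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in accuracy: a large drop
# means the model leans on that feature for its predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features as a human-readable answer
# to the question "why does this model decide the way it does?"
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

An explanation of this kind lets a domain expert check whether the model's reliance on each input matches their own understanding of the problem.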

Research Vision

The primary goal of Project TRACE is to form the basis of a funded PhD thesis at a leading EU institution. The research will focus on developing a novel framework that systematically manages the trade-offs between model complexity and the imperatives of transparency and accountability.

Key research objectives include:

  • To develop new metrics for quantifying the explainability of complex algorithmic models (one illustrative starting point is sketched after this list).
  • To investigate how principles from complexity science can inform the design of more accountable systems.
  • To publish findings in top-tier, peer-reviewed scientific journals and conferences.
  • To contribute actionable insights to the ongoing development of the EU's AI Act and related governance policies.
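
One candidate starting point for such metrics is surrogate fidelity: how closely a small, interpretable model can reproduce a black box's predictions. The sketch below is a minimal baseline written under the assumption of a scikit-learn classifier on synthetic data; all names and thresholds are illustrative, and the thesis would aim well beyond this measure.

```python
# Minimal sketch of one candidate explainability metric: surrogate fidelity.
# A shallow decision tree is fit to the black box's own predictions; the
# fraction of matching labels measures how faithfully a simple, inspectable
# model can mimic the complex one. Data and models are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                 # synthetic stand-in data
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)   # nonlinear ground truth

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)                    # labels the black box assigns

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)
fidelity = (surrogate.predict(X) == y_bb).mean()

print(f"Surrogate fidelity: {fidelity:.2%}")   # higher = easier to explain
```

By this crude measure, a model whose behaviour a depth-3 tree can reproduce is more explainable than one it cannot; a useful metric would have to capture far more, which is precisely the research gap.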

About Me

My name is Esteve Llorens, and I am a passionate researcher working at the intersection of technology, law, and society. After completing my Master's in Innovation and Digital Transformation (UOC), I was driven to tackle one of the most pressing challenges of our time: ensuring that the progress of AI aligns with human values. I intend to dedicate my doctoral studies to making "trustworthy AI" a technical reality as well as a workable framework for regulatory compliance.

Let's Collaborate

I am actively seeking a funded PhD position and a supervisor who shares this research vision. If you are a professor, a research group, or an institution working on AI ethics, explainability, or socio-technical systems, I would be delighted to connect.

Email: esteve@