REAL ML 2025 workshop: Mexico City

Date
June 2025
Location
Mexico City, Mexico
Abstract/Summary
Apply now to join our three-day, in-person workshop, bringing together an interdisciplinary and international group of people working on AI and algorithmic accountability.
Are you working on issues relating to AI accountability? Would your work benefit from feedback from a brilliant group of interdisciplinary thought leaders and experts? Would you like an opportunity to meet and work with like-minded people, and strengthen your network? And are you open to sharing your skills and experience with the group?
We’re hosting a three-day residential retreat in Mexico City in June 2025. It’ll be a packed program: a mix of practical work, plenary discussions and breakout sessions. We hope participants will leave with tangible ideas about how to move their project forward – whether it’s a research paper, an advocacy campaign or a news article – as well as practical skills to apply to their work, and a sense of connection with a community advancing these practices together.
Applications are currently closed, and we’ll get back to all applicants by the end of March 2025.
If you have any questions about the workshop, please contact us.
About REAL ML
At REAL ML, we have been hosting events to advance AI accountability since 2019, when we brought a group of 30 leading researchers and practitioners together in Berlin to workshop a range of data investigations, complex policy issues, and qualitative research projects. We wanted to share, interrogate and strengthen our methodologies in documenting how AI impacts people in the real world. Whether that’s through ethical data practices, participatory methodologies, or creative partnerships, we aim to help a global movement of researchers capture the impact of AI on communities.
We’ve held multiple iterations of these workshops and other events online and in person over the past five years, all with overwhelmingly positive feedback from attendees.
In June 2025, we’re excited to host our first flagship residential, three-day workshop in person since we set up in 2019. We’d love for you to join us.
Is it for me?
If you’re working on AI and algorithmic accountability, the answer is probably yes. Past attendees have come from a range of disciplines – across civil society, academia, and industry – and in roles ranging from journalists to data scientists to lawyers to activists.
Workshop theme: AI for whose benefit? Rethinking AI ‘for humanity’
We’re interested in work that challenges industry-led concepts of AI as an inevitable, one-size-fits-all approach to social problems.
The concepts of AI for the public good and a net benefit ‘for humanity’ are by now well-trodden narratives that have traveled beyond industry marketing into AI policy documents and media reports. But what do they mean, and who do they really serve?
We will explore how the concepts of ‘benefit’ and ‘humanity’ can mask problematic ideas and power imbalances. We will collectively workshop ways to challenge the influential actors and approaches behind these terms.
Questions your work might consider
- Who is the primary beneficiary of an AI application, and how is this established?
- What unintended harms arise from an AI application?
- What values, practices and processes are baked into a system, and what should we do about them?
- What does a narrative that really serves the people look like when it comes to AI development and application?
What this might look like in practice
- Quantitative or qualitative research challenging the idea that AI is beneficial ‘for everyone’
- Participatory policy development that prioritizes the impact of tech on marginalized communities
- Creative grassroots campaigns to hold dominant AI players accountable for harms
- Development of alternative tech – models, datasets, designs – that recenters power in the hands of users
- Captivating advocacy work that cuts through AI hype and paints an alternative vision for best-practice use of AI
We are open to a wide range of formats. The only criterion is that it’s something you’re actively working on and have made headway on. It must be more developed than an idea – whether that’s a draft paper, story or report, initial research, or a beta product – something you can share in a breakout group, where you can consult your colleagues on where you’re struggling, collectively troubleshoot blockers, workshop ideas for next steps, and maybe even consider collaborating with your group.
Previous REAL ML workshops have helped our attendees develop, finish and launch advocacy campaigns, research papers, data journalism and investigative reports.
At our last workshop, participants contributed insights to investigative reporting on police use of surveillance technology, research on the limitations of large language models for non-English languages, experimental journalism exploring feminist perspectives on environmental megaprojects, legal analysis of how non-technical stakeholders can pursue algorithmic transparency, and a study on different approaches to ensuring cultural representation of data annotators in the datasets they label.
Organizing Committee
- Anna Bacciarelli – Co-Chair
- Shazeda Ahmed – Co-Chair
- Alix Dunn – Facilitator
- Aoise Keogan-Nooshabadi – Project Management
Steering Committee
- Stevie Bergman-Naijar
- Abeba Birhane
- Matt Mahmoudi
- Tina Park
- Paola Ricaurte Quijano