REAL ML 2019 workshop
Date: 2-4 October 2019
Location: Berlin, Germany
- Anna Bacciarelli, Researcher and advocate, REAL ML / Human Rights Watch
- William Isaac, Principal Scientist and Head of Ethics Research, Google DeepMind
- Seda Gürses, Researcher, TU Delft
- Nathalie Marechal, Center for Democracy & Technology
- Vidushi Marda, Lawyer, AI Collaborative
- Surya Mattu, Data Journalist, Digital Witness Lab
- Emanuel Moss, Social scientist and empirical researcher, Intel Labs
- Marie-Therese Png, Coalition builder, advocate and researcher
- Deborah Raji, Fellow and PhD Student, Berkeley / Mozilla
- Rashida Richardson, AI Policy and Governance Expert, Northeastern University & Mastercard
- Piotr Sapiezynski, Data Scientist, Northeastern University
Abstract/Summary
Researching Algorithmic Decision-Making Systems
A collaborative three-day residential workshop for 30 public interest researchers investigating the social impacts of algorithmic systems
Our first REAL ML workshop came about through a shared curiosity about, and frustration with, understanding the impacts of new algorithmic and automated technologies on the societies and communities in which they were being deployed. We wanted to explore how established methodologies – from qualitative, data science and statistical approaches to human rights documentation – could help us adequately capture the scale and impact of these technologies in the wild.
With leadership from our Co-Chairs and Steering Committee, and support from MacArthur Foundation and Open Society Foundations, we held a three-day residential workshop in Berlin, Germany, for researchers from a range of disciplines and skillsets to explore how we could pool resources and share feedback to help us collectively advance our research methodology toolkit.
Our key aims? To collaboratively advance the standard of critical research and advocacy on algorithmic decision-making systems by:
- Accelerating and expanding the development of promising new multidisciplinary methodologies
- Fostering a community of like-minded public interest researchers and practitioners
- Supporting promising research projects that can contribute to critical public discourse about the development and use of algorithms.
Program
- Berlin, 2-4 October 2019 at the Michelberger Hotel
No panels! This was a space for everyone to learn from one another in a non-hierarchical structure. We split our time between plenary sessions, which explored methods for researching the human rights and public interest impacts of algorithmic decision-making systems, and small-group work on live projects, with most of the time devoted to the projects.
The aim of this event is to bring experts together to share our problems and potential solutions. Many of us are struggling with similar issues, from hurdles to accessing data, to ensuring our work is truly participatory and inclusive.
We want this to be an open and collaborative space to share and discuss approaches, looking at what has worked for each of us – and what has not. Over the course of the three days, we will foster a trusting, problem-solving environment where delegates can contribute to one another’s research. The workshop is inspired by the Citizen Lab’s Summer Institute model and has benefited from advice provided by Citizen Lab staff.
This is a small pilot event for around 30 experts. All participants will be expected to be present for the full three days. If the workshop is successful, we hope to expand and build on it in the future.
Workshop structure
Day one (whole group)
- Introductions
- Learnings from other disciplines and fields: tools and methods we can apply from other domains
- Standards for ethical research: how do we ensure research is truly ethical?
- Divide into small groups to begin collaborative discussions and work on live projects
- Dinner with the whole group
Day two (small groups)
- Whole day devoted to project work within small groups
- Lunch and break at your leisure
Day three (small groups/whole group)
- Continue project work within small groups
- Present back to the wider group
- What we have vs. what we need: building a wishlist of accessible info, tools and standards
Is it for me?
Are you currently working on an active public interest research project that grapples with the impact of algorithms on society? If so, we want to hear from you! Whether you are an academic, an advocate or an investigative journalist, this will be an interdisciplinary event, and your expertise and skills will contribute to a lively group drawn from a range of backgrounds. You will find this event valuable if you are interested both in learning and in actively contributing to solving the challenges other participants are confronting in their research.
One of the primary aims of this conference is to counter existing norms within industry and academic conferences that minimize the lived experiences and interdisciplinary research of practitioners from historically marginalized groups. To this end, we seek to invert existing power structures by especially welcoming applications from individuals from historically marginalized groups, or from those investigating the impact of algorithmic systems on marginalized or vulnerable people. As we are hosting this pilot workshop in Europe, we also especially welcome applications for projects examining the unique social, technical or legal challenges facing critical research on these issues in the region.
Research focused on the impacts of algorithmic decision-making systems often involves teams of people. However, given that this is a small event, we are allowing up to two delegates to apply to participate under the same project; there is a place to indicate this on the application. Applicants will be considered on an individual basis, and each applicant should indicate their unique role in the project on the application form (space is limited, so we want to avoid duplication of roles).
People
Co-Chairs
- Anna Bacciarelli, Amnesty International (Co-Chair)
- William Isaac, Open Society Foundation Fellow (Co-Chair)
Steering Committee
- Frederike Kaltheuner, Privacy International
- Jeff Kao, ProPublica
- Kristian Lum, Human Rights Data Analysis Group
- Irene Poetranto, Citizen Lab, University of Toronto
- Rashida Richardson, AI Now Institute, New York University
- Ashkan Soltani, independent researcher and technologist
- Michael Veale, University College London
Funders
MacArthur Foundation, Technology in the Public Interest Program (Eric Sears)
Open Society Foundations, Information Program (Becky Hogge)