Receptions of Generative AI
Date: May 2023
Location: Online
Abeba Birhane, AI Accountability Researcher
Corinne Cath, Researcher
Anna Bacciarelli, Researcher and Advocate, REAL ML and Human Rights Watch
Vidushi Marda, Lawyer, AI Collaborative
Abstract/Summary
A REAL ML community event to discuss generative AI hype, and how to rebalance the industry-created narrative around this prominent new technology.

In May 2023, we held a two-day online event for the REAL ML community and partners to dissect the generative AI hype of the previous six months, consider strategic interventions, from policy and advocacy development to technical approaches, and, most importantly, discuss how to take collective and individual action to rebalance the industry-created narrative of generative AI.
Understanding and critiquing the generative AI hype
In the six months since OpenAI launched ChatGPT, the tech industry has poured enormous resources into a generative AI race. From Bard to Bedrock to the surge of new start-ups focused exclusively on generative AI products, the hype cycle was and is fed by industry publicity and media coverage that ponders ChatGPT’s transformative impact on society, our information ecosystems, the education sector, financial systems, and more.
But the conversation has been dangerously lopsided. While entities hoping to commercialise this technology claim that it represents concrete advances towards artificial general intelligence, in reality these models can perform in ways that are unreliable at best and dangerous at worst. Large language models like ChatGPT also pose significant ethical and rights risks, particularly to people from marginalised communities: the systems are built using problematic data-scraping techniques and extractive, exploitative labour practices, and they are often opaque and inscrutable.
The issues emanating from the development and mainstreaming of generative AI are manifold, but the knowledge and much of the media narrative are driven by the tech industry, particularly the companies creating the technology. We seek to critique that narrative and re-centre those who are ultimately impacted by the technology – those with the most to lose, rather than those with the most to gain.
Speakers
Dan McQuillan
Generative AI as ‘bs generator’
Dan McQuillan set out some provocations on generative AI as a technical ‘bs generator’, devoid of reason and problematic in its reinforcement of structural inequities.
After a PhD in Experimental Particle Physics, Dan worked with people with learning disabilities and mental health issues, created websites with asylum seekers, and held digital roles at both Amnesty International and the NHS. He recently wrote ‘Resisting AI – An Anti-fascist Approach to Artificial Intelligence’. He can be found on Twitter at @danmcquillan and on Mastodon at @danmcquillan@kolektiva.social.
Daniel Leufer
EU laws and generative AI
Daniel Leufer considered how the European Union’s AI Act, due to be voted on by EU politicians next month, as well as existing legislation such as the GDPR and the DSA, interacts with the rapid rise of generative AI, and how policy-makers and legislators in Europe are responding to the sticky subject of regulating it.
Daniel is a Senior Policy Analyst at Access Now’s Brussels office. His work focuses on the impact of emerging technologies on digital rights, with a particular focus on artificial intelligence (AI), facial recognition, biometrics, and augmented and virtual reality. While he was a Mozilla Fellow, he developed aimyths.org, a website that gathers resources to tackle myths and misconceptions about AI. He has a PhD in Philosophy from KU Leuven in Belgium. He is also a member of the External Advisory Board of KU Leuven’s Digital Society Institute.
Abeba Birhane
Audits and accountability for generative AI
Abeba Birhane presented her new work auditing LLMs – outlining her findings and where she sees a role for auditing in holding the creators and owners of generative AI systems to account.
Abeba Birhane is a cognitive scientist researching human behaviour, social systems, and responsible and ethical Artificial Intelligence (AI). She is a Senior Fellow in Trustworthy AI at Mozilla Foundation, and an Adjunct Lecturer/Assistant Professor at the School of Computer Science and Statistics at Trinity College Dublin, Ireland.
Corinne Cath-Speth
Computational infrastructure, clouds and generative AI
Corinne Cath explored the role of computational infrastructure, including cloud computing, in generative AI – and looked at some of the questions we should ask about power, ownership and infrastructure.
Corinne is an anthropologist studying tech infrastructure politics. They are starting a post-doc in Delft working with Professor Seda Gürses on the politics of programmable infrastructures. Corinne is also an affiliate at the University of Cambridge’s Minderoo Centre and a fellow at the critical infrastructure lab at the University of Amsterdam.
Arvind Narayanan
Countering generative AI hype narratives
Arvind Narayanan looked back at how we got to this moment in the generative AI boom, gave a snapshot of how the technology works, and covered some of the responses to the generative AI hype from civil society and critics.
Arvind Narayanan is a professor of computer science at Princeton. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification.