Tag: algorithmic-bias

105 links

Are We Automating Racism?
31 mar. 2021 - Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
 · algorithmic-bias · artificial-intelligence · machine-learning · racist-technology · social-justice · twitter

op.europa.eu > Janneke Gerards and Raphaële Xenidis
Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination law
10 mar. 2021 - This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current gender equality and non-discrimination legislative framework in place in the EU can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. This report also maps out the existing legal solutions, accompanying policy measures and good practice to address and redress algorithmic discrimination both at EU and national levels. Moreover, this report proposes its own integrated set of legal, knowledge-based and technological solutions to the problem of algorithmic discrimination.
 · algorithmic-bias · eu · feminism · not-read · racist-technology

www.volkskrant.nl > Paul Hofstra and Rik Kuiper
Rotterdam municipality's algorithms can lead to 'biased outcomes'
15 apr. 2021 - The algorithms that the municipality of Rotterdam uses, for example to detect benefits fraud, can lead to 'biased outcomes'. This is the conclusion of the Rekenkamer Rotterdam (Rotterdam Court of Audit) in a report published on Thursday. Chairman Paul Hofstra explains what went wrong.
 · algorithmic-bias · black-struggle · fraud · netherlands · racist-technology

Rotterdam's use of algorithms can lead to biased outcomes
14 apr. 2021 - The municipality of Rotterdam uses algorithms to support its decision-making. Although there is attention within the municipality for the ethical use of algorithms, awareness of why this is necessary is not yet widespread. This can result in a lack of transparency and in biased outcomes, as with an algorithm aimed at combating benefits fraud. This and more is concluded by the Rekenkamer Rotterdam in its report 'Gekleurde technologie' (Coloured Technology).
 · algorithmic-bias · algorithmic-registries · black-struggle · fraud · netherlands · racist-technology

Algorithms can increase, but also decrease, the risk of discrimination in recruitment
2 sep. 2020 - Everyone wants a fair chance at a job. We all want employers and recruiters to look at our talents, skills and experience, and not at our age, background, gender or sexual orientation. To select candidates quickly, more and more employers are using algorithms. New research by the College voor de Rechten van de Mens (Netherlands Institute for Human Rights) shows that the use of algorithms can increase the risk of discrimination, but can also decrease it.
 · algorithmic-bias · project-hva · recruitment

Search algorithms on job sites are sometimes biased
31 mar. 2021 - The algorithms of job vacancy sites do not always succeed in searching in a gender-neutral way. For instance, a job seeker who enters a search term such as 'lerares' (female teacher) may be shown fewer vacancies than with the terms 'leraar' (male teacher) or 'docent' (instructor). At the same time, search algorithms can also counteract bias. This is shown by research that the Utrecht Data School (UDS) of Utrecht University carried out on behalf of the College voor de Rechten van de Mens.
 · algorithmic-bias · gender · project-hva · recruitment

www.equaltimes.org > Tom Cassauwers
Can algorithmic registers solve automated bias?
24 mar. 2021 - In January 2021 the Dutch government collapsed because of a scandal that highlights the dangers of trying to administer essential government services with artificial intelligence (AI). Between 2009 and 2019, in what has become known as the toeslagenaffaire (the benefits affair), around 26,000 parents were wrongly accused of committing childcare benefit fraud.
 · algorithmic-auditing · algorithmic-bias · algorithmic-registries · finland · fraud · gdpr · interdisciplinarity · netherlands · saidot · transparency

www.youtube.com > Kate Crawford
The Trouble with Bias - NIPS 2017 Keynote
10 dec. 2017 - Kate Crawford is a leading researcher, academic and author who has spent the last decade studying the social implications of data systems, machine learning and artificial intelligence. She is a Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab.
 · algorithmic-bias · artificial-intelligence · deepmind · machine-learning · not-read · robots · self-driving-cars

www.br.de > Cécile Schneider, Jonas Bedford-Strohm and Uli Köppen
Ethics of Artificial Intelligence: Our AI Ethics Guidelines
30 nov. 2020 - No matter what technology we use, it is never an end in itself. Rather, it must help us deliver on a higher purpose: to make good journalism. This purpose-driven use of technology guides our use of artificial intelligence and all other forms of intelligent automation. We want to help shape the constructive collaboration of human and machine intelligence and deploy it towards the goal of improving our journalism.
 · ai-ethics · algorithmic-bias · artificial-intelligence · data-minimization · ethics · filter-bubble · journalism · personalization

hybridpedagogy.org > Shea Swauger
Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education
2 apr. 2020 - Cheating is not a technological problem, but a social and pedagogical problem. Technology is often blamed for creating the conditions in which cheating proliferates and is then offered as the solution to the problem it created; both claims are false.
 · algorithmic-bias · black-struggle · calls-for-papers · education · educational-surveillance · educational-technology · eugenic-gaze · eugenics · facial-recognition · feminism · gender · plagiarism · proctoring · proctorio · racist-technology · sexual-violence · solutionism · surveillance · trans-rights · turnitin

medium.com > Arnold Brown
Blackbird in the Coal Mine — Are Technology Platforms Stifling Black Community? — Part 1
12 oct. 2018 - On August 30, 2018 the New York Times ran a piece by Farhad Manjoo entitled, “Here’s the Conversation We Really Need to Have About Bias at Google”, which explained why Trump’s unsupported claims about search engine bias against conservative voices potentially undermine the conversation various communities and experts have been trying to have for a very long time about the hidden, pervasive and often unintended bias of search engine results. The 60 Minutes piece “How Did Google Get So Big?”, which aired on September 23, 2018, also discusses the harmful impacts of search engine bias. Even as Google and others strive to modify their algorithms to produce results that are free from hidden or unintended bias and more individually relevant, I would suggest there is still a need to create communities of content in ways that don’t silo us as people, but provide an additive overlay.
 · algorithmic-bias · black-struggle · blackbird · brazil · content-discovery · culture · google · search-engines · technology

phenomenalworld.org > Cosmo Grant
Is it impossible to be fair?
23 aug. 2019 - This post is about fairness. In particular, it's about some interesting recent results, which came out of attempts to check whether particular automated prediction tools were fair, but which seem to have a more general consequence: that in a wide variety of situations it's impossible to make fair predictions. As Kleinberg et al. put it in their abstract: "These results suggest some of the ways in which key notions of fairness are incompatible with each other."
 · algorithmic-bias · compas · fairness · machine-learning · prediction · probability · statistics

joanna-bryson.blogspot.com > Joanna Bryson
Three very different sources of bias in AI, and how to fix them
13 jul. 2017 - Since our Science paper came out it's been evident that people are surprised that machines can be biased. They assume machines are necessarily neutral and objective, which is in some sense true -- in the sense that there is no machine perspective or ethics. But to the extent an artefact is an element of our culture, it will always reflect bias.
 · accountability · algorithmic-bias · artificial-intelligence · black-struggle · machine-learning · racist-technology · tools-for-justice

www.nrc.nl > Tommy Wieringa
The law is a snake that only bites people without shoes
23 jan. 2021 - During the inauguration of Joe Biden as the 46th president of the United States, there was plenty of hammering, sanding and sawing going on. Repair work on democracy and the rule of law. The ceremony moved me more than I had expected. Perhaps because over the past four years I had also worried more than I had expected. (I would now prefer to continue with an ode to the poet of the day, Amanda Gorman: that recitation! those fingers! that timing! that yellow coat! But alas, duty calls.)
 · algorithmic-bias · due-process · fraud · netherlands · syri

www.oneworld.nl > Florentijn van Rootselaar
How the Netherlands uses A.I. for ethnic profiling
14 jan. 2021 - China using artificial intelligence to oppress the Uyghurs: sounds like something far removed from your daily life? The Netherlands, too, tracks (and targets) specific population groups with algorithms. As in Roermond, where cameras raise the alarm for cars with an Eastern European licence plate.
 · algorithmic-bias · artificial-intelligence · china · dna · ethnic-profiling · facial-recognition · false-positives · netherlands · predictive-policing · racist-technology

Programmed Racism - Global Digital Cultures
24 nov. 2020 - This episode is part of the GDC Webinar series that took place in September 2020. How do digital technologies mediate racism? It is increasingly clear that digital technologies, including auto-complete functions, facial recognition, and profiling tools, are not neutral but racialized in specific ways. This webinar focuses on the different modes of programmed racism. We present historical and contemporary examples of racial bias in computational systems and learn about the potential of Civic AI. We discuss the need for a global perspective and postcolonial approaches to computation and discrimination. What research agenda is needed to address current problems and inequalities? Chair: Lonneke van der Velden (University of Amsterdam). Speakers: Sennay Ghebreab, Associate Professor of Informatics, University of Amsterdam and Scientific Director of the Civic AI Lab for civic-centered and community-minded design and development of AI; Linnet Taylor, Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT) and PI of the ERC-funded Global Data Justice Project; Payal Arora, Professor and Chair in Technology, Values, and Global Media Cultures at the Erasmus School of Philosophy, Erasmus University Rotterdam, and author of 'The Next Billion Users' (Harvard University Press).
 · algorithmic-bias · facial-recognition · not-read · racist-technology

www.dukeupress.edu > Louise Amoore
Cloud Ethics
1 may. 2020 - In Cloud Ethics Louise Amoore examines how machine learning algorithms are transforming the ethics and politics of contemporary society. Conceptualizing algorithms as ethicopolitical entities that are entangled with the data attributes of people, Amoore outlines how algorithms give incomplete accounts of themselves, learn through relationships with human practices, and exist in the world in ways that exceed their source code. In these ways, algorithms and their relations to people cannot be understood by simply examining their code, nor can ethics be encoded into algorithms. Instead, Amoore locates the ethical responsibility of algorithms in the conditions of partiality and opacity that haunt both human and algorithmic decisions. To this end, she proposes what she calls cloud ethics—an approach to holding algorithms accountable by engaging with the social and technical conditions under which they emerge and operate.
 · algorithmic-bias · data-ethics · not-read · project-hva · racist-technology

www.californialawreview.org > Andrew D. Selbst and Solon Barocas
Big Data’s Disparate Impact
1 jun. 2016 - Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply […]
 · algorithmic-bias · black-struggle · data-mining · fairness · project-hva · racist-technology · recruitment

www.vn.nl > Sennay Ghebreab
Yes, facial recognition technology discriminates - but a ban is not the solution
5 oct. 2020 - Just as the death of George Floyd led to worldwide protests, the biased image-processing technology PULSE did the same in the scientific world. There were calls for a ban, but neuro-informatics researcher Sennay Ghebreab wonders whether a digital iconoclasm will solve the problem.
 · abolition · algorithmic-bias · amazon · artificial-intelligence · black-struggle · brain-reading · facial-recognition · false-positives · ibm · pulse · racist-technology · rekognition

www.kevindorst.com > Brian Hedden
How (Not) to Test for Algorithmic Bias
Predictive and decision-making algorithms are playing an increasingly prominent role in our lives. They help determine what ads we see on social media, where police are deployed, who will be given a loan or a job, and whether someone will be released on bail or granted parole. Part of this is due to the recent rise of machine learning. But some algorithms are relatively simple and don’t involve any AI or ‘deep learning.’
 · algorithmic-bias · black-struggle · compas · fairness · justice · philosophy · polarization · project-hva · racist-technology

hackeducation.com > Audrey Watters
Robot Teachers, Racist Algorithms, and Disaster Pedagogy
3 sep. 2020 - I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke tonight in Dorothy Kim's class "Race Before Race: Premodern Critical Race Studies." Here's a bit of what I said...
 · algorithmic-bias · black-struggle · educational-surveillance · educational-technology · plagiarism · proctoring · project-iis · racist-technology · turnitin · united-kingdom

dailynous.com > Amanda Askell, Annette Zimmermann, C. Thi Nguyen, Carlos Montemayor, David Chalmers, GPT-3, Henry Shevlin, Justin Khoo, Regina Rini and Shannon Vallor
Philosophers On GPT-3 (updated with replies by GPT-3)
30 jul. 2020 - Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On.
 · algorithmic-art · algorithmic-bias · art · artificial-intelligence · chat-bots · consciousness · disinformation · freedom-of-speech · gpt-3 · justice · language · philosophy · plagiarism · racist-technology

science.sciencemag.org > Brian Powers, Christine Vogeli, Sendhil Mullainathan and Ziad Obermeyer
Dissecting racial bias in an algorithm used to manage the health of populations
25 oct. 2019 - The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin). The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.
 · algorithmic-bias · healthcare · not-read · racist-technology · united-states