Tag: machine-learning

62 links

www.youtube.com
Are We Automating Racism?
31 mar. 2021 - Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
 · algorithmic-bias · artificial-intelligence · machine-learning · racist-technology · social-justice · twitter

ai.googleblog.com > Alan Cowen and Gautam Prasad
Understanding Contextual Facial Expressions Across the Globe
24 may. 2021 - It might seem reasonable to assume that people’s facial expressions are universal — so, for example, whether a person is from Brazil, India or Canada, their smile upon seeing close friends or their expression of awe at a fireworks display would look essentially the same. But is that really true? Is the association between these facial expressions and the contexts in which they occur indeed universal across geographies? What can similarities — or differences — between the situations where someone grins or frowns tell us about how people may be connected across different cultures?
 · cultural-relativism · emotions · facial-recognition · machine-learning

www.ftm.nl > Reijer Passchier and Sebastiaan Brommersma
‘Het Nederlandse staatsbestel is niet klaar voor verdere digitalisering’
16 may. 2021 - Digitalisation of government is putting the balance of powers within the constitutional state under pressure. Because of complex technology and a skewed distribution of digital resources, the courts and parliament find it increasingly difficult to scrutinise the executive. The delicate balance of the trias politica is thereby further disturbed, warns Reijer Passchier of Universiteit Leiden.
 · anti-power · artificial-intelligence · climate-change · institutions · machine-learning · power · rule-of-law · syri · techno-optimism · technology · transparency · trias-politica

www.eff.org > Bennett Cyphers, Hinako Sugiyama and Katitza Rodriguez
Japan’s Rikunabi Scandal Shows The Dangers of Privacy Law Loopholes
12 may. 2021 - Technology users around the world are increasingly concerned, and rightly so, about protecting their data. But many are unaware of exactly how their data is being collected and would be shocked to learn of the scope and implications of mass consumer data collection by technology companies. For example, many vendors use tracking technologies, including cookies (small pieces of text stored in your browser that let websites recognize your browser and see your browsing activity or IP address, though not your name or address), to build expansive profiles of user behavior over time and across apps and sites. Such data can be used to infer, predict, or evaluate information about a user or group. The resulting profiles may be inaccurate, unfair, or discriminatory, yet can still be used to inform life-altering decisions about the people they describe.
 · consent · cookies · data-protection · eu · gdpr · japan · machine-learning · recruitment · tracking
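
The mechanism the EFF describes is simple enough to sketch. The snippet below is my own illustration, not code from the article: a server mints a random identifier on a browser’s first visit, sends it back as a cookie, and then files every later request carrying that cookie under the same behavioural profile.

```python
# Minimal sketch of cookie-based profiling: the identifier, not the cookie's
# content, is what lets visits from different sites and sessions be stitched
# into one profile.
from collections import defaultdict
from http import cookies
from typing import Optional
import uuid

profiles = defaultdict(list)  # tracker id -> list of (site, page) visits

def handle_request(cookie_header: Optional[str], site: str, page: str) -> Optional[str]:
    jar = cookies.SimpleCookie(cookie_header or "")
    if "tracker_id" in jar:
        tracker_id = jar["tracker_id"].value
        set_cookie = None  # browser is already identified
    else:
        tracker_id = uuid.uuid4().hex  # first visit: mint an identifier
        set_cookie = f"tracker_id={tracker_id}; Max-Age=31536000"
    profiles[tracker_id].append((site, page))
    return set_cookie  # value for a Set-Cookie response header, if any

# Two visits from the same browser, on different sites, end up in one profile.
set_cookie = handle_request(None, "news.example", "/politics")
handle_request(set_cookie.split(";")[0], "shop.example", "/running-shoes")
print(dict(profiles))
```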

www.forensischinstituut.nl
NFI leert computers om berichten met doodsbedreiging uit grote hoeveelheden data te filteren
5 may. 2021 - ‘I want to see blood’ or ‘Put a bullet in his head’. These are examples of messages sent via Encrochat. Criminals thought they were safe on the server, but nothing could have been further from the truth. The police were able to read along with many of the messages. They wanted to pull out messages containing a threat as quickly as possible, in order to prevent assaults, kidnappings and contract killings. The Netherlands Forensic Institute (NFI) therefore developed a model to help the police predict which messages contain a serious threat.
 · encrochat · machine-learning · netherlands · police
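
The NFI piece gives no technical detail, so the snippet below is only a sketch of the general kind of model it describes: a supervised text classifier, trained on hand-labelled messages, that ranks new messages by how likely they are to contain a serious threat. It assumes scikit-learn; the example messages and labels are invented.

```python
# Sketch of a threat-ranking text classifier (illustrative, not the NFI model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set (labels: 1 = threat, 0 = no threat).
messages = [
    "ik wil bloed zien",
    "doe een kogel in zijn hoofd",
    "tot morgen bij de loods",
    "stuur de locatie even door",
]
labels = [1, 1, 0, 0]

# Character n-grams cope better with slang and misspellings than word tokens.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(messages, labels)

# Rank new messages by predicted threat probability so the most urgent
# ones surface first for human review.
new = ["hij moet eraan geloven", "zie je vanavond"]
for msg, p in zip(new, model.predict_proba(new)[:, 1]):
    print(f"{p:.2f}  {msg}")
```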

medium.com > Alexander Todorov, Blaise Agüera y Arcas and Margaret Mitchell
Do algorithms reveal sexual orientation or just expose our stereotypes?
11 jan. 2018 - A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the fall of 2017. The Economist featured the work on the cover of its September 9th issue, while two major LGBTQ organizations, the Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone.
 · artificial-intelligence · gay-rights · machine-learning · not-read

www.youtube.com > Kate Crawford
The Trouble with Bias - NIPS 2017 Keynote
10 dec. 2017 - Kate Crawford is a leading researcher, academic and author who has spent the last decade studying the social implications of data systems, machine learning and artificial intelligence. She is a Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab.
 · algorithmic-bias · artificial-intelligence · deepmind · machine-learning · not-read · robots · self-driving-cars

modelcards.withgoogle.com
Google Cloud Model Cards
Whether it’s knowing the nutritional content of our food, the conditions of our roads, or a medication’s interaction warnings, we rely on information to make responsible decisions. But what about AI? Despite AI’s potential to transform so much of the way we work and live, machine learning models are often distributed without a clear understanding of how they function. For example, under what conditions does the model perform best and most consistently? Does it have blind spots? If so, where? Traditionally, such questions have been surprisingly difficult to answer.
 · ai-ethics · artificial-intelligence · ethics · google · machine-learning
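
The idea is easy to make concrete. Below is a minimal sketch of a model card as a plain data structure; the field names and example values are illustrative, not Google’s actual Model Card schema.

```python
# A model card reduced to its essence: structured answers to "what is this
# model for, how was it evaluated, and where does it break down?"
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    evaluation_data: str
    metrics_by_slice: dict = field(default_factory=dict)  # condition -> score
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="face-detector-v2",  # hypothetical model
    intended_use="Detect whether a photo contains a face; not identification.",
    training_data="Public photo corpus, 2019 snapshot.",
    evaluation_data="Held-out photos, stratified by lighting conditions.",
    metrics_by_slice={"bright lighting": 0.94, "low light": 0.78},
    known_limitations=["Recall drops sharply in low light."],
)
print(card.known_limitations)
```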

ai.googleblog.com > Lora Aroyo and Praveen Paritosh
Uncovering Unknown Unknowns in Machine Learning
11 feb. 2021 - The performance of machine learning (ML) models depends both on the learning algorithms and on the data used for training and evaluation. The role of the algorithms is well studied and the focus of a multitude of challenges, such as SQuAD, GLUE, ImageNet, and many others. There have also been efforts to improve the training data, including a series of workshops addressing issues in ML evaluation; in contrast, research and challenges that focus specifically on the data used to evaluate ML models are not commonplace. Moreover, many evaluation datasets contain items that are easy to evaluate, e.g., photos with a subject that is easy to identify, and so they miss the natural ambiguity of real-world contexts. The absence of ambiguous real-world examples in evaluation undermines the ability to reliably test ML performance and leaves models prone to “weak spots”: classes of examples the model handles poorly, which go undetected because those examples are missing from the evaluation set.
 · adversarial-ai · crowd-sourcing · generative-adversarial-networks · machine-learning · research
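
One concrete way to see what the post is getting at: an aggregate accuracy number can look fine while hiding a weak spot that only shows up when the evaluation set is sliced, for example into unambiguous and ambiguous items. The helper and data below are my own toy illustration, not from the post.

```python
# Toy per-slice evaluation: the aggregate score hides the weak spot,
# the per-slice breakdown exposes it.
from collections import defaultdict

def per_slice_accuracy(examples, predict):
    """examples: (input, label, slice_name) triples; predict: the model under test."""
    hits, totals = defaultdict(int), defaultdict(int)
    for x, label, slice_name in examples:
        totals[slice_name] += 1
        hits[slice_name] += int(predict(x) == label)
    return {s: hits[s] / totals[s] for s in totals}

# A crude keyword "model" and an evaluation set with an ambiguous slice.
predict = lambda text: "bridge" if "bridge" in text else "tower"
eval_set = [
    ("photo of a bridge at noon", "bridge", "unambiguous"),
    ("photo of a clock tower", "tower", "unambiguous"),
    ("night shot of a span over water", "bridge", "ambiguous"),  # missed
    ("foggy silhouette, tall and thin", "tower", "ambiguous"),   # lucky guess
]
print(per_slice_accuracy(eval_set, predict))
# {'unambiguous': 1.0, 'ambiguous': 0.5}
```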

phenomenalworld.org > Cosmo Grant
Is it impossible to be fair?
23 aug. 2019 - This post is about fairness. In particular, it's about some interesting recent results, which came out of attempts to check whether particular automated prediction tools were fair, but which seem to have a more general consequence: that in a wide variety of situations it's impossible to make fair predictions. As Kleinberg et al. put it in their abstract: "These results suggest some of the ways in which key notions of fairness are incompatible with each other."
 · algorithmic-bias · compas · fairness · machine-learning · prediction · probability · statistics
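
A small worked example helps make the incompatibility concrete. The numbers below are invented, not taken from the post or from COMPAS: a risk score is calibrated within each of two groups, but because the groups have different base rates, thresholding that score produces very different false positive and false negative rates.

```python
# Toy illustration of the tension in the Kleinberg et al. result: calibration
# within groups plus unequal base rates forces unequal error rates.

# Each bucket: (risk score, number of people, number who are actually positive).
groups = {
    "A": [(0.7, 60, 42), (0.2, 40, 8)],   # base rate 0.50
    "B": [(0.7, 20, 14), (0.2, 80, 16)],  # base rate 0.30
}

for name, buckets in groups.items():
    positives = sum(p for _, _, p in buckets)
    total = sum(n for _, n, _ in buckets)
    # Calibration: within each score bucket, the score equals the positive rate.
    calibrated = all(abs(p / n - s) < 1e-9 for s, n, p in buckets)
    # Decision rule: predict positive when score >= 0.5.
    fp = sum(n - p for s, n, p in buckets if s >= 0.5)
    fn = sum(p for s, n, p in buckets if s < 0.5)
    fpr = fp / (total - positives)
    fnr = fn / positives
    print(f"group {name}: base rate {positives/total:.2f}, "
          f"calibrated={calibrated}, FPR={fpr:.2f}, FNR={fnr:.2f}")
# Both groups are calibrated, yet group A's false positive rate is roughly
# four times group B's, and their false negative rates also diverge.
```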

joanna-bryson.blogspot.com > Joanna Bryson
Three very different sources of bias in AI, and how to fix them
13 jul. 2017 - Since our Science paper came out it's been evident that people are surprised that machines can be biased. They assume machines are necessarily neutral and objective, which is in some sense true -- in the sense that there is no machine perspective or ethics. But to the extent an artefact is an element of our culture, it will always reflect bias.
 · accountability · algorithmic-bias · artificial-intelligence · black-struggle · machine-learning · racist-technology · tools-for-justice

medium.com > Jacob Metcalf
“The study has been approved by the IRB”: Gayface AI, research hype and the pervasive data ethics gap
30 nov. 2017 - Just as it has changed the methods of science and engineering, the tools of large scale data analytics have caused major shifts in how we judge the ethical consequences of scientific research. And our current methods are not keeping up. Historically, research ethics has been animated by a core set of questions, such as how do you decide if a scientific experiment is justified given the potential risks and benefits to the people being studied, or to society at large? How do you track who has to bear those risks and who gets the benefits?
 · data-ethics · data-science · gay-rights · machine-learning · project-hva · research-ethics

locusmag.com > Cory Doctorow
Past Performance is Not Indicative of Future Results
2 nov. 2020 - In “Full Employment“, my July 2020 column, I wrote, “I am an AI skeptic. I am baffled by anyone who isn’t. I don’t see any path from continuous improvements to the (admittedly impressive) ‘machine learning’ field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine.”
 · antropology · artificial-intelligence · correlation-causation · machine-learning · project-hva · recruitment · statistics · thick-description

www.eff.org > Jillian C. York and Svea Windwehr
One Database to Rule Them All: The Invisible Content Cartel that Undermines the Freedom of Expression Online
27 aug. 2020 - Every year, millions of images, videos and posts that allegedly contain terrorist or violent extremist content are removed from social media platforms like YouTube, Facebook, or Twitter. A key force behind these takedowns is the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that seeks to “prevent terrorists and violent extremists from exploiting digital platforms.” And unfortunately, GIFCT has the potential to have a massive (and disproportionate) negative impact on the freedom of expression of certain communities.
 · anti-terrorism · censorship · content-moderation · false-positives · freedom-of-expression · gifct · islamophobia · machine-learning · youtube

www.infoq.com > Vivian Hu
The First Wave of GPT-3 Enabled Applications Offer a Preview of Our AI Future
12 aug. 2020 - The first wave of GPT-3 powered applications is emerging. After priming with only a few examples, GPT-3 can write essays, answer questions, and even generate computer code. Furthermore, GPT-3 can perform algebraic calculations and language translations despite never having been taught such concepts. However, GPT-3 is a black box with unpredictable outcomes, and developers must use it responsibly.
 · artificial-intelligence · data-science · gpt-3 · machine-learning · not-read
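
As a rough sketch of what “priming with a few examples” looks like in practice, the snippet below builds a few-shot prompt and sends it to GPT-3. The call uses the openai Python library roughly as it looked when the article was written; the interface has since changed, and the model name and parameters here are only illustrative.

```python
# Few-shot priming: the task is specified entirely inside the prompt.
import openai  # 2020-era version of the library

openai.api_key = "YOUR_KEY"  # placeholder

few_shot_prompt = (
    "English: Where is the station?\nFrench: Où est la gare ?\n"
    "English: I would like a coffee.\nFrench: Je voudrais un café.\n"
    "English: The weather is nice today.\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",        # the base GPT-3 model name at the time
    prompt=few_shot_prompt,
    max_tokens=32,
    temperature=0.3,
    stop="\n",               # stop at the end of the translated line
)
print(response["choices"][0]["text"].strip())
```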