Tag: machine-learning

51 links

medium.com > Alexander Todorov, Blaise Agüera y Arcas and Margaret Mitchell
Do algorithms reveal sexual orientation or just expose our stereotypes?
11 jan. 2018 - A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the fall of 2017. The Economist featured the work on the cover of its September 9th issue; on the other hand, two major LGBTQ organizations, the Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure people's intelligence, political orientation, and criminal inclinations from their facial images alone.
 · artificial-intelligence · gay-rights · machine-learning · not-read

www.youtube.com > Kate Crawford
The Trouble with Bias - NIPS 2017 Keynote
10 dec. 2017 - Kate Crawford is a leading researcher, academic and author who has spent the last decade studying the social implications of data systems, machine learning and artificial intelligence. She is a Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab.
 · algorithmic-bias · artificial-intelligence · deepmind · machine-learning · not-read · robots · self-driving-cars

Google Cloud Model Cards
Whether it’s knowing the nutritional content in our food, the conditions of our roads, or a medication’s interaction warnings, we rely on information to make responsible decisions. But what about AI? Despite its potential to transform so much of the way we work and live, machine learning models are often distributed without a clear understanding of how they function. For example, under what conditions does the model perform best and most consistently? Does it have blind spots? If so, where? Traditionally, such questions have been surprisingly difficult to answer.
 · ai-ethics · artificial-intelligence · ethics · google · machine-learning

ai.googleblog.com > Lora Aroyo and Praveen Paritosh
Uncovering Unknown Unknowns in Machine Learning
11 feb. 2021 - The performance of machine learning (ML) models depends on both the learning algorithms and the data used for training and evaluation. The role of the algorithms is well studied and the focus of a multitude of challenges, such as SQuAD, GLUE, ImageNet, and many others. In addition, there have been efforts to improve the data as well, including a series of workshops addressing issues in ML evaluation. In contrast, research and challenges that focus on the data used for evaluation of ML models are not commonplace. Furthermore, many evaluation datasets contain items that are easy to evaluate, e.g., photos with a subject that is easy to identify, and thus they miss the natural ambiguity of real-world contexts. The absence of ambiguous real-world examples in evaluation undermines the ability to reliably test machine learning performance, which makes ML models prone to developing “weak spots”, i.e., classes of examples that are difficult or impossible for a model to handle accurately, because those classes of examples are missing from the evaluation set.
 · adversarial-ai · crowd-sourcing · generative-adversarial-networks · machine-learning · research

phenomenalworld.org > Cosmo Grant
Is it impossible to be fair?
23 aug. 2019 - This post is about fairness. In particular, it's about some interesting recent results, which came out of attempts to check whether particular automated prediction tools were fair, but which seem to have a more general consequence: that in a wide variety of situations it's impossible to make fair predictions. As Kleinberg et al. put it in their abstract: "These results suggest some of the ways in which key notions of fairness are incompatible with each other."
 · algorithmic-bias · compas · fairness · machine-learning · prediction · probability · statistics
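The incompatibility result described above can be sketched numerically. The following is a minimal illustration (all numbers invented, not taken from the post or from Kleinberg et al.): two groups are scored by the same, perfectly calibrated risk score — within each score bucket, the fraction of actual positives equals the score — yet their false positive rates come out different, simply because the groups' base rates differ.

```python
from fractions import Fraction

def fpr(groups):
    """groups: list of (count, score, positives) buckets.
    The classifier predicts positive when score >= 0.5.
    Returns the false positive rate: FP / (FP + TN)."""
    fp = sum(c - p for c, s, p in groups if s >= 0.5)    # negatives flagged positive
    neg = sum(c - p for c, s, p in groups)               # all actual negatives
    return Fraction(fp, neg)

# Group A: base rate 50%. Calibrated: the 0.8 bucket is 80% positive,
# the 0.2 bucket is 20% positive.
group_a = [(50, 0.8, 40), (50, 0.2, 10)]
# Group B: same scores, also calibrated, but base rate only 32%.
group_b = [(20, 0.8, 16), (80, 0.2, 16)]

print(fpr(group_a))  # 1/5
print(fpr(group_b))  # 1/17
```

Equalizing the false positive rates here would require breaking calibration in at least one group — the flavor of the trade-off the post discusses.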

joanna-bryson.blogspot.com > Joanna Bryson
Three very different sources of bias in AI, and how to fix them
13 jul. 2017 - Since our Science paper came out it's been evident that people are surprised that machines can be biased. They assume machines are necessarily neutral and objective, which is in some sense true -- in the sense that there is no machine perspective or ethics. But to the extent an artefact is an element of our culture, it will always reflect bias.
 · accountability · algorithmic-bias · artificial-intelligence · black-struggle · machine-learning · racist-technology · tools-for-justice

medium.com > Jacob Metcalf
“The study has been approved by the IRB”: Gayface AI, research hype and the pervasive data ethics gap
30 nov. 2017 - Just as they have changed the methods of science and engineering, the tools of large-scale data analytics have caused major shifts in how we judge the ethical consequences of scientific research. And our current methods are not keeping up. Historically, research ethics has been animated by a core set of questions, such as: how do you decide if a scientific experiment is justified given the potential risks and benefits to the people being studied, or to society at large? How do you track who has to bear those risks and who gets the benefits?
 · data-ethics · data-science · gay-rights · machine-learning · project-hva · research-ethics

locusmag.com > Cory Doctorow
Past Performance is Not Indicative of Future Results
2 nov. 2020 - In “Full Employment“, my July 2020 column, I wrote, “I am an AI skeptic. I am baffled by anyone who isn’t. I don’t see any path from continuous improvements to the (admittedly impressive) ‘machine learning’ field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine.”
 · anthropology · artificial-intelligence · correlation-causation · machine-learning · project-hva · recruitment · statistics · thick-description

www.eff.org > Jillian C. York and Svea Windwehr
One Database to Rule Them All: The Invisible Content Cartel that Undermines the Freedom of Expression Online
27 aug. 2020 - Every year, millions of images, videos and posts that allegedly contain terrorist or violent extremist content are removed from social media platforms like YouTube, Facebook, or Twitter. A key force behind these takedowns is the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that seeks to “prevent terrorists and violent extremists from exploiting digital platforms.” And unfortunately, GIFCT has the potential to have a massive (and disproportionate) negative impact on the freedom of expression of certain communities.
 · anti-terrorism · censorship · content-moderation · false-positives · freedom-of-expression · gifct · islamophobia · machine-learning · youtube

www.infoq.com > Vivian Hu
The First Wave of GPT-3 Enabled Applications Offer a Preview of Our AI Future
12 aug. 2020 - The first wave of GPT-3-powered applications is emerging. After priming with only a few examples, GPT-3 can write essays, answer questions, and even generate computer code! Furthermore, GPT-3 can perform algebraic calculations and language translations despite never being taught such concepts. However, GPT-3 is a black box with unpredictable outcomes. Developers must use it responsibly.
 · artificial-intelligence · data-science · gpt-3 · machine-learning · not-read