
Issues in AI

Here’s a collection of recent findings (and wild claims) from machine learning and AI research. Our ability to distinguish between a powerful tool and a magic wand will tell the story of AI’s success.

Algorithm papers need warning labels: "Don't Trust Me"

Here is a common pattern in machine learning research papers:

1. develop a cool new algorithm

2. apply it to an interesting dataset (say, violent deaths)

3. write a paper about the algorithm’s cool findings

4. the paper is reviewed and published based on its algorithmic validity

5. but its empirical findings aren't robustly reviewed

6. science news headlines treat those findings as fully vetted experimental results

We need a new standard: algorithm papers cannot claim empirical findings without the same level of scrutiny as any other field of research. Cool algorithms are not a substitute for science and expertise.

Messy Heterogeneity

A revolution in behavioral science "will be defined by the recognition that most treatment effects are heterogeneous…"

Not only does the "average person" rarely describe every individual, it often doesn't describe any of them.

Behavioral science is unlikely to change the world without a heterogeneity revolution
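
To see how that plays out, here's a toy example in Python (the numbers are invented): two equal subgroups respond in opposite directions, and the tidy "average treatment effect" matches exactly zero people in the sample.

```python
# Toy heterogeneity example: an average effect that describes no one.
group_a = [+10] * 50   # the treatment helps these people a lot
group_b = [-6] * 50    # ...and actively hurts these

effects = group_a + group_b
ate = sum(effects) / len(effects)

print(f"average treatment effect: {ate:+.1f}")   # +2.0, looks modestly helpful
print(f"people whose effect is {ate:+.1f}: {sum(e == ate for e in effects)}")  # 0
```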

Algorithmic monoculture

“Algorithmic monoculture” occurs when many independent decision-makers all rely on the same algorithm. It turns out that “even when the algorithm is more accurate for any one agent in isolation,” “the overall quality of the decisions being made by the full collection” of those decision-makers drops.

Algorithmic monoculture and social welfare

For example, doctors all using the same diagnostic AI are not truly giving “second” opinions. It's just the same AI’s opinion over and over again.

Even when the system works exactly as intended, it is not necessarily working in our self-interest.
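
A minimal sketch of why, not the paper's matching-market model (the accuracy numbers are invented): five doctors with weaker but independent judgment out-vote one shared, stronger model, because the shared model's mistakes are perfectly correlated across every "opinion".

```python
import random

random.seed(0)
TRIALS = 100_000

def correct(p):
    """One diagnosis that is right with probability p."""
    return random.random() < p

shared_right = 0
panel_right = 0
for _ in range(TRIALS):
    # Monoculture: all five doctors defer to one 85%-accurate model,
    # so the "panel" is right exactly when that single model is right.
    shared_right += correct(0.85)
    # Independence: majority vote of five 75%-accurate doctors.
    votes = sum(correct(0.75) for _ in range(5))
    panel_right += votes >= 3

print(f"one shared 85% model:       {shared_right / TRIALS:.3f}")  # ~0.850
print(f"five independent 75% votes: {panel_right / TRIALS:.3f}")   # ~0.897
```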

“Bad” Data

Massive surveys from Facebook and elsewhere led to bad estimates of COVID vaccination rates. The lesson for some “is that data quality matters more than data quantity.”

In other words: “garbage in, garbage out.”

But for me this is backwards: it's the violated model assumptions that turn the data into garbage.

Stop blindly applying models without checking the assumptions.

Unrepresentative big surveys significantly overestimated US vaccine uptake
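
A minimal sketch of that failure mode (every rate here is invented): a huge opt-in sample whose vaccinated members are three times as likely to respond lands far from the truth, while a small random sample stays close.

```python
import random

random.seed(1)
N = 1_000_000
TRUE_RATE = 0.60  # true vaccination rate (invented)
population = [random.random() < TRUE_RATE for _ in range(N)]

# "Big" opt-in survey: vaccinated people respond 3x as often.
big_sample = [v for v in population
              if random.random() < (0.30 if v else 0.10)]

# Small but genuinely random survey.
small_sample = random.sample(population, 1_000)

print(f"true rate:           {sum(population) / N:.3f}")                    # 0.600
print(f"big biased sample:   {sum(big_sample) / len(big_sample):.3f}")      # ~0.818
print(f"small random sample: {sum(small_sample) / len(small_sample):.3f}")  # ~0.60
```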

It's not my fault that the thing I built is biased

Young women see fewer ads for STEM jobs than young men. Ad-targeting algorithms “simply optimize cost-effectiveness in ad delivery” and skip young women because it’s “more expensive to show ads to” this “prized demographic”.

This is exactly why AI must be measured by real-world outcomes. I’m tired of hearing, “It's not my fault that the thing I built is biased.”

Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads
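
A hedged sketch of the mechanism (the per-impression prices are invented): an optimizer that maximizes impressions per dollar starves the pricier demographic without a single discriminatory line of code.

```python
# Greedy cost-effectiveness: the optimizer knows nothing about fairness,
# only about price, and puts the whole budget on the cheapest audience.
BUDGET = 1_000.0
cost_per_impression = {"young_men": 0.05, "young_women": 0.08}  # invented

cheapest = min(cost_per_impression, key=cost_per_impression.get)
impressions = {group: 0 for group in cost_per_impression}
impressions[cheapest] = int(BUDGET / cost_per_impression[cheapest])

print(impressions)  # {'young_men': 20000, 'young_women': 0}
```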

Changing Ourselves

Social Signals Mean Nothing

Years ago I found that social signals (number of followers, badges, up-votes) predicted very little about the quality of code written by professional developers, or about the quality of work in other fields.

A recent paper offers some insight as to why.

Individuals who share content online, “even without reading” it, feel that they know much more than they actually do. All of those quick answers on StackOverflow or Dribbble create the illusion of expertise not only for followers but for the sharers themselves.

“Ignorance is like a delicate exotic fruit; touch it and the bloom is gone.” But with the magic of Twitter, you can eat your fruit and share it too…it’s BlockChain for ignorance!

Digital Cognition & Democracy Initiative

I’ve been collaborating with the Digital Cognition & Democracy Initiative to explore how technology has affected both our thoughts and our democracy.

With an eye out for relevant research, I spotted a new paper arguing that democracy needs more than “love thy neighbor”.

Interventions reducing affective polarization do not necessarily improve anti-democratic attitudes

GOOD: It showed that #politicalpolarization can be reduced through “correcting misperceptions”, “inter-partisan friendships”, and “cross-partisan interactions between political leaders” BUT…

BAD: None of these interventions reduced support for

  • “undemocratic candidates”
  • “partisan violence” or
  • “partisan ends over democratic means”

We are clearly divided by more than an oversimplified story of animosity driving polarization.

You can read our new capstone report, “Rewired: How Digital Technologies Shape Cognition and Democracy,” for further insights on how technology affects attention, memory, reasoning, emotion, trust, and critical thinking, along with a literature review of resources.

Institute for Security and Technology » Digital Cognition & Democracy Initiative

AI is US

Is AI biased or is it just us? In Google image searches, “greater nation-level gender inequality was associated with more male-dominated Google image search results for the gender-neutral keyword ‘person’”.

Propagation of societal gender inequality by internet search algorithms

To make the question more confusing, the algorithmic bias learned from users feeds back to influence those very users: “the gender disparity associated with high- vs. low-inequality algorithmic outputs guided the formation of gender-biased prototypes and influenced hiring decisions in novel scenarios”.
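
Here's a minimal sketch of that feedback loop, with invented dynamics rather than the paper's model: users over-engage with whatever the results already show, the ranker re-learns from that engagement, and a small initial skew compounds each round.

```python
def retrain(male_share, gamma=1.5):
    """One pass of the loop. With gamma > 1, engagement disproportionately
    favors the majority class (a rich-get-richer assumption), and the
    ranker re-fits to that engagement."""
    m, f = male_share ** gamma, (1 - male_share) ** gamma
    return m / (m + f)

share = 0.55  # initial male share of "person" results (invented)
for round_num in range(1, 5):
    share = retrain(share)
    print(f"round {round_num}: male share = {share:.3f}")
# 0.575, 0.611, 0.663, 0.734: the mirror warps a little more each pass.
```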

AI is not a magic wand; it’s a warped mirror.