More AI, More Problems

This week's Socos Academy looks at possibilities and problems with AI.

Mad Science Solves...

Causal Reinforcement Learning MSS

Back around 2015, I was thinking hard about causality. When machine learning professors talk about causality they are usually talking about causal graphs and Judea Pearl’s research at UCLA. Years earlier a project at the Redwood Center for Theoretical Neuroscience made impressive progress identifying causal relationships between neurons. But then deep neural networks came along and everyone seemed to collectively forget that causal inference was ever a field of research.

I was the chief scientist of Gild at the time, and we were building models to help companies identify elite programmers, salespeople, and designers. If you only needed to know what correlated most strongly with a job offer at a top company, it was simple: the candidate’s name, school, and last job. The problem, somewhat obviously, is that “name” correlates strongly with hiring but has no causal relationship. A male-sounding name doesn’t make you a better programmer, but it dramatically increases your probability of being hired (including for women with male-sounding names). An elite alma mater was also strongly correlated with hiring but appeared to have little causal relationship to performance (once you control for other variables).

For years “ethical AI” has focused on reducing spurious correlations in the hiring, health, and education data that lead to algorithmic bias. My position has always been that if your algorithm cannot deal with real world data without bias, it is the algorithm, not the data, that is flawed. We should not be hiring based on correlations. We shouldn’t be denying loans and medical treatments based on correlations. A job offer should reflect a causal relationship between the qualities of a candidate and our expectations of their future success.

I know, causality is hard. In fact, true causality is functionally impossible to recover in any real-world context. Actionable causal inference, however, is not only achievable, it is the only option in applying AI to the domains of human development. So, for nearly a decade I have been exploring causal machine learning, or more specifically, causal Reinforcement Learning (cRL). By combining advances in deep RL (e.g., embeddings and continuous-space policies) with statistical models from the natural experiment methodologies of economics (e.g., regression discontinuity and difference-in-differences), cRL allows agents not only to learn by exploring their environment (à la AlphaGo or the OpenAI Five team) but to replace their correlational learning with causal inference.
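The natural-experiment machinery is simpler than it sounds. As a toy illustration (synthetic data and a hypothetical cutoff, not any Socos model), a sharp regression discontinuity estimates a local treatment effect by comparing local linear fits on either side of a cutoff:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sharp regression discontinuity: units with a running variable
# (e.g., a test score) at or above a cutoff receive a "treatment"
# whose true local effect at the cutoff is 2.0.
n = 5000
score = rng.uniform(-1, 1, n)          # running variable, cutoff at 0
treated = (score >= 0).astype(float)
outcome = 1.5 * score + 2.0 * treated + rng.normal(0, 0.5, n)

def rdd_estimate(score, outcome, bandwidth=0.2):
    """Estimate the jump at the cutoff with separate local linear
    fits on each side (a minimal sharp-RDD estimator)."""
    left = (score < 0) & (score > -bandwidth)
    right = (score >= 0) & (score < bandwidth)
    bl = np.polyfit(score[left], outcome[left], 1)    # [slope, intercept]
    br = np.polyfit(score[right], outcome[right], 1)
    # Evaluate both fits at the cutoff and take the difference.
    return np.polyval(br, 0.0) - np.polyval(bl, 0.0)

effect = rdd_estimate(score, outcome)
print(f"estimated effect at cutoff: {effect:.2f}")  # close to 2.0
```

Difference-in-differences works in a similar spirit, comparing the before-and-after change in a treated group against the same change in an untreated control group.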

While there are many moving parts in the models I’ve been developing, the results have been exciting. At Socos Labs and our new Data Trust, we’ve explored deploying these models in domains such as parent-child activities, precision public health recommendations, and economic inclusion policy analyses. These systems recommend health interventions or public policies based not on correlations but on bounded estimates of causal inference.

There is an enormous amount of work to be done to bring cRL into common usage, but many other groups are developing causal models as well. Causality is (one of) the answers to ethical AI. More importantly, though, it is an enormous breakthrough that can take us beyond hallucinating LLMs or diffusion-generated people with flipper hands.

Learn more on Socos.org

Stage & Screen

Dr. Ming will be speaking in Chicago on October 19, New York on October 23, and London on November 15. If you have events or opportunities, or would be interested in hosting a dinner or other event with her, please let us know. We're currently reviewing invitations and can be flexible on fees for paid events in these markets and all 2023 dates!

If you are interested in pursuing an opportunity with Vivienne in or around these locations, please reach out ASAP!

New Podcast Alert!

Modern People Leader

“Every week we talk to CHROs, Chief People Officers, and other work experts about the work they're doing to pioneer the way that we work. They share what's working, what's not, and how they've gotten to where they're at in their careers.”

The 3 qualities of the smartest teams, letting AI replace routine work & fostering collective intelligence: Dr. Vivienne Ming

Research Roundup

A Word to the Wise

“Generalization…is the ability to repurpose knowledge in novel settings.” Despite advances in diffusion networks and LLMs, it remains the case that “people can generalize compositionally in ways that are elusive for standard neural networks”.

One recent paper, “Curriculum learning for human compositional generalization”, suggests that “human generalization benefits from training regimes in which items are axis aligned and temporally correlated”. In other words, hearing “apple” while seeing one, holding one, and even tasting one at the same time gives humans some generalization advantages.

To overcome DNNs’ weakness in generalization, the authors present a “neural network model based around a Hebbian gating process that can capture how human generalization benefits from different training curricula”. This is interesting but only takes us so far.
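A minimal sketch of the idea (my simplification for illustration, not the authors’ code): let each context accrue a Hebbian “ownership” trace over hidden units, then gate each context to the units it has come to own, so that blocked, temporally correlated training carves out context-specific subnetworks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Hebbian gating layer (a simplification, not the paper's model):
# each context builds a Hebbian "ownership" trace over hidden units;
# at inference a context only drives the units it owns.
n_in, n_hidden, n_ctx = 4, 16, 2
W = rng.normal(0.0, 0.5, (n_hidden, n_in))  # shared feedforward weights
trace = np.full((n_hidden, n_ctx), 1e-3)    # Hebbian ownership traces

def step(x, ctx, lr=0.1):
    h = np.maximum(W @ x, 0.0)              # ReLU hidden activity
    trace[:, ctx] += lr * h                 # Hebbian update: active units
                                            # become "owned" by this context
    gate = trace[:, ctx] >= trace.max(axis=1)  # winner-take-all gate
    return h * gate                         # gated hidden activity

# Blocked (temporally correlated) curriculum: all of context 0, then 1.
for ctx in (0, 1):
    for _ in range(50):
        step(rng.normal(size=n_in), ctx)

# The winner-take-all gate assigns essentially disjoint hidden subsets
# to the two contexts.
own0 = trace[:, 0] >= trace.max(axis=1)
own1 = trace[:, 1] >= trace.max(axis=1)
print("units shared by both contexts:", int(np.sum(own0 & own1)))
```

The gate here is deliberately crude (a hard winner-take-all over traces); the paper’s model is richer, but the core mechanism, activity-dependent ownership of units per context, is the same flavor.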

What if, instead of “gating”, we confronted model-based vs. model-free learning in neural networks in more substantive terms? This could, for example, involve adding language models for abstract, compositional representation during transfer tasks. Some of this could be an explicit linguistic workspace, but it could also include implicit systems for transfer within linguistic networks. This doesn’t mean everything is just language, or that full formal language is required. It only means that extending a DNN to be modulated by some degree of linguistic abstraction could create some or all of the complex learning discontinuities, model-based generalization, and causal inference patterns (strengths!) seen in humans.

When Answers Create New Problems

Far too much of the noise around LLMs focuses on sentience, extinction, and the end of work (for good or bad). Even substantive research tends toward the immediate: can it solve problem X? How does it affect programming/sales/medicine on day 1? Far too little discussion and research explore the long run impacts of modern AI. Which workers benefit the most on day 1000? How does it alter professional development for younger workers? Will it increase or decrease collective intelligence?

On that last question, enter “Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow”, an analysis of how GPT has altered collective knowledge accumulation online. When I was a kid, I read encyclopedias and went to libraries to answer questions. That was clearly not as easy as a quick Wikipedia search, but I’m still stunned when people throw down the rhetorical, “How did we learn anything before smartphones?” Despite the obvious inequalities, civilization somehow managed to get by for several thousand years before Windows CE (look it up!).

When I launched my first few companies, Stack Overflow was my friend. I typed my pressing questions about Python into its search bar, and after wading through reams of useless answers about JavaScript and Ruby, I’d find a hint about the solution. That resource of public Q&A accelerated software development for a generation of engineers (as well as statisticians, physicists, and D&D enthusiasts via other Stack Exchange sites). It also accelerated learning and professional development.

Since the release of ChatGPT, however, things have changed. Compared to the Russian and Chinese versions of these Q&A sites, “activity on Stack Overflow significantly decreased”. In fact, there has been a “16% decrease in weekly posts on Stack Overflow”, a drop that “increases in magnitude over time” and for the “most widely used programming languages”.
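The study’s design is essentially a difference-in-differences: Stack Overflow is “treated” by ChatGPT’s release, while comparable non-English Q&A sites serve as controls. A sketch with illustrative numbers (hypothetical post counts, not the paper’s data):

```python
import numpy as np

# Difference-in-differences on toy weekly post counts. Stack Overflow
# is the "treated" platform; a comparable untreated Q&A site is the
# control. Differences in logs approximate percentage changes.
so_pre, so_post = 100_000, 82_000        # hypothetical weekly posts
ctrl_pre, ctrl_post = 50_000, 49_000     # hypothetical control counts

treated_change = np.log(so_post) - np.log(so_pre)
control_change = np.log(ctrl_post) - np.log(ctrl_pre)
did = treated_change - control_change    # change attributable to treatment

print(f"DiD estimate: {did:+.1%} change in posts")  # roughly -18%
```

The control group’s trend stands in for what Stack Overflow would have done absent ChatGPT, which is exactly the counterfactual a raw before/after comparison cannot supply.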

Given that GPT and Bard provide specific answers and even code samples to complex natural-language queries, it is no shock that programmers, particularly less experienced ones, would favor LLM answers over those found on Q&A sites. But as the authors state, “Using models like ChatGPT may be more efficient for solving certain programming problems, but its widespread adoption and the resulting shift away from public exchange on the web will limit the open data people and models can learn from in the future.”

This is crucially important for measuring the feedback loop between LLMs and generated knowledge. It could make high-quality, leading-edge research, engineering, and other innovation even more valuable while devaluing commodity content creation, e.g., Q&A and research spam.

Follow more of my work at
Socos Labs The Human Trust
Dionysus Health Optoceutics
RFK Human Rights GenderCool
Crisis Venture Studios Inclusion Impact Index
Neurotech Collider Hub at UC Berkeley