7 min read

Future of Work

Work is changing, but rarely in the ways many of us assume. In this week’s tour of new research, we’ll look at how real trends and technologies are interacting with the wishful thinking of “thought leaders”.

I grew up in Steinbeck's Salinas Valley, perhaps the highest-productivity farmland in the world. An army of people spends every day bent over in those fields producing cheap salads for the world. What would it mean to automate away those jobs? A recent paper finds the expected good and bad...

Increasing robots per worker by 10% (1) reduces poor health among "low-skilled individuals" by 10% because (2) it reduces the "physical tasks supplied by low-skilled workers" by 1.5%. With fewer routine tasks, though…

Does the rise of robotic technology make people healthier?

...there are fewer jobs and lower wages for low-skill workers: "1 additional robot reduces employment and wages" within a labor market, the equivalent of 2-4 additional job seekers.

From Immigrants to Robots: The Changing Locus of Substitutes for Workers

Fewer jobs? Fewer job-related injuries! There's a silver lining to everything :)

The chasm grows!

“In 1999, the largest share of employment at high-paying workplaces was blue-collar production workers, but by 2017 it was managers and professionals.” So, “workers benefiting from a high-paying workplace are increasingly those who already benefit from…a high-paying occupation.”

Consolidated Advantage: New Organizational Dynamics of Wage Inequality

And there’s #deprofessionalization as well! “Some previously low-paying but high-professional share workplaces, like hospitals and schools, have deskilled their jobs…” to reduce wage expenses.

This Is Not the Industrial Revolution

Recruiting Bullshit

I call bullshit on almost every statement in this Tech Review article on AI and recruiting. I don't know what those quoted actually believe in their hearts, but the words…

LinkedIn's job-matching AI was biased. The company's solution? More AI.

“You typically hear the anecdote that a recruiter spends six seconds looking at your résumé, right? ...When we look at the recommendation engine we’ve built, you can reduce that time down to milliseconds.” They say that like it's a good thing. And...

That company "which lists 5-6 million jobs at any given time...incorporates behavioral data into its recommendations but doesn’t correct for bias... Instead, the marketing team focuses on getting users from diverse backgrounds signed up... the company then relies on employers to report back" on bias.

To me this says, "We're too fucking lazy to solve our problem, and so now our willful ignorance is your problem."

We don't need more correlational band-aids. These systems should be recommending candidates because they explicitly, causally predict better performance, not because reweighted click-through correlations mirror long-known biases.

The end of routine

Be skeptical of anyone breezily promising AI will lift everyone equally. New Acemoglu & Restrepo research argues that automation of routine tasks accounts for 50-70% of the growing wage inequality in the last 40 years.

Tasks, Automation, and the Rise in US Wage Inequality

In cognitive automation, I have described this process as "deprofessionalization": higher-skill routine labor can be completed by lower-wage, lower-autonomy, lower-skill workers whose work is partially substituted by AI and other forms of automation.

This Is Not the Industrial Revolution

The famed supergroup Autor, Goldin, & Katz argue that the "race between education and technology" drives these trends, but also note "the recent convexification of education returns". Only the most and least educated have seen increased returns.

Extending the Race between Education and Technology

Both trends are driven by decreasing returns to hard skills learned from universities and increasing demand for (correlated) creativity and meta-learning. The winners are creative economy workers for whom automation (particularly augmented intelligence) is a massive productivity complement.

Autor on Burgers

MIT Economist @DavidAutor says, "There’s no future in working the fry station at White Castle." He's right, but this line's even more insightful: "That burger isn’t as cheap as you think; it’s just that you’re paying part of your meal tab in federal taxes."

The Labor Shortage Is Empowering Workers to Demand Better Jobs

I've written about the devaluation of routine labor and the people who do it in "This Is Not the Industrial Revolution". It's even more troubling to realize that we choose, if rather passively, to subsidize shitty burgers over investing in human capacity.

This Is Not the Industrial Revolution

Coding: the future of work!

"Within 10 years, if the majority of code isn't written by machines, I'll be stunned." - me, in like 2012

We're not there yet but, for good or ill, the day is approaching.

OpenAI Codex

DeepMind, yesterday: “AlphaCode ranked within the top 54% in real-world programming competitions...”

Competitive programming with AlphaCode

P.S., AlphaGo... AlphaFold... AlphaCode... clearly algorithms already run DeepMind’s marketing strategy.

Deprofessionalization, motherfuckers!

More on the continuing deprofessionalization of routine work:

Large, expensive cities have the highest IT investment and biggest decline in routine labor.

Falling IT prices drive the growing wage & demand gaps between routine and creative jobs.

IT and Urban Polarization

About those jobs-from-anywhere…

About the idea of working from anywhere: of the “4.7 million jobs” that left core metro areas during the pandemic, only “198,000, less than 0.2%” moved “to the Heartland [or] Mountain-West”. There remains a huge job quality gap between the elite metro and much of the rest of the country.

Remote work won't save the heartland

Obsolete Art

Articles like this remind me, again and again, of both the power of statistical learning and the degree to which people cannot distinguish that learning from creativity. We might be surprised at just how stereotyped and statistically derived our own daily language is. Creativity, in contrast, is the exact opposite: the exploration of the unknown.


We humans are also powerful statistical learners, but we combine it mathemagically with complex, model-based causal inference, meta-model learning, and the amalgam of these in analogical reasoning.

Statistical learning can, and to some extent does, do analogical learning. But this version of metaphor is profoundly local. Those who play with DALL-E are so impressed by the fantastical images it can create with the appropriately absurd caption…and a few trial runs to actually produce an interesting scene.

These users fail to appreciate that it was they that brought the fantastic out of DALL-E; they gave it a whimsical, impossible caption. DALL-E then “simply” minimized some energy functions to bind those disparate ideas together.

In that sense, DALL-E is not putting artists and writers out of a job—it’s putting line artists and staff writers out of a job. The vision (and the choice of execution) still comes from a human. DALL-E, like a cell animator, just carries out that vision the best it can.

That is powerful. It potentially greatly expands the creative potential of artists of all types, even as it eats away at the sorts of entry careers that traditionally lead to more senior, more creative positions.

It’s not TV show runners or directors of animation that need to fear for their jobs. It's their staff working to execute someone else’s vision in a world where AI can do the same without needing health insurance, vacation days, or (god forbid) management.

The chasm between creative and routine labor grows and grows.

BTW - I don’t think it is impossible to create GPT-3 or DALL-E type models that autonomously mash up concepts that are distant in some internal embedding space. While this could be an interesting tool for inspiring new ideas, it still isn’t a metaphor derived from a meta-model of emerging but unexplored meaning. …it could be soon.
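A minimal sketch of that mash-up idea, under loud assumptions: the words and vectors below are entirely made up (a real system would use a model's learned embeddings, e.g. from CLIP or GPT). The mechanism is just: score every concept pair by cosine similarity and surface the most distant pair as a prompt seed.

```python
from itertools import combinations
import math

# Hypothetical toy "embeddings" -- hand-made stand-ins for the learned
# vectors a real model would provide.
embeddings = {
    "teapot": [0.9, 0.1, 0.0],
    "kettle": [0.85, 0.2, 0.05],
    "avocado": [0.1, 0.9, 0.2],
    "galaxy": [0.0, 0.1, 0.95],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def most_distant_pair(emb):
    """Pick the concept pair least similar in the embedding space."""
    return min(combinations(emb, 2),
               key=lambda p: cosine(emb[p[0]], emb[p[1]]))

a, b = most_distant_pair(embeddings)
print(f"prompt seed: a {b} shaped like a {a}")
```

Note how shallow the "autonomy" is: the code only finds far-apart points. Deciding which distant pairs are *interesting* is exactly the meta-model of meaning the paragraph above says is still missing.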

Monet as paintbrush

I’ve written about creative complementarity, the nonlinear boost in productivity AI grants creative (but not routine) labor.

A recent paper used “over 180 million position records and over 52 million skill records from LinkedIn” to claim that “the launch of TensorFlow” dramatically increased the value of the “intangible knowledge assets that firms accumulate”.


The authors estimate “an approximate market value increase of $11 million” for each “1% increase in AI skills exposure for firms with assets complementary to AI.”

This is an interesting finding but I have a problem with the framing that companies have “intangible knowledge assets” but employees only have “technological skills”.

“Intangible assets” are not solely a quality of firms. Individuals also possess them (in the form of meta-learning). The value of technological skills is just as profoundly limited in those individuals who lack meta-learning as in companies lacking those “intangible assets”.

Communities can complement individuals, but this idea that employees are just a list of skills is the classic tools-versus-artist framing of human capacity. It is like assuming that Paris or New York produce great art but that Parisians and New Yorkers are only paintbrushes and typewriters.

G is just a letter

Two recent papers looked at ‘g’, the concept of general intelligence.

Does g predict career success, and is g different in men and women?

Both reveal that there’s much more to life than one letter.


For example, while “most individuals with IQs ≤ 90 did not have a college degree,” individuals with degrees enjoyed the same benefits in “occupational status, income, financial independence, and law abidingness” regardless of IQ.

Consistent with my own research, the “educational success of individuals with low-average IQs” is supported by other strengths, such as meta-learning, community role models, and socioeconomic status. So, higher g helps lift life outcomes but is not required.

There are some parallels with the research on gender and g. For example, “fine-grained measures, such as grade profiles, can be accurate in predicting sex (77.5%)” and grades are also well-predicted by g. So surely gender and g must be related, right?


Yet the evidence for a “genderedness” factor in g is “too weak to support a general factor of genderedness”.

I suspect that data from matriarchal societies like the Mosuo would further erode any link driven by gender enculturation.