Metropolitan Museum of Art director Max Hollein at the unveiling of the museum's artificial intelligence collaboration with Microsoft and MIT. Courtesy Ben Davis.

Every Monday morning, artnet News brings you The Gray Market. The column decodes important stories from the previous week—and offers unparalleled insight into the inner workings of the art industry in the process.

This week, inspecting the brave new world of art and AI…

 

INTELLIGENT DESIGN

On Monday night, the Metropolitan Museum of Art unveiled a quintet of machine-learning-powered prototypes developed in partnership with MIT and Microsoft. According to the Met, the initiative’s goal is “to imagine and develop scalable new ways for global audiences to discover, learn, and create with one of the world’s foremost art collections through artificial intelligence.” My colleague Eileen Kinsella recapped the highlights the following day.

At the risk of sounding cynical, I don’t think any of the value propositions presented by the Met x Microsoft x MIT are going to blow your hair back. Worse than that, I think they encapsulate Big Tech’s continued mission to camouflage AI as either a light-hearted parlor trick or an unqualified good while it rams a crowbar into the socioeconomic divides already ripping apart 21st-century society.

Let me explain.

All told, the Met discussed five different applications, only one of which was operational. That application, Gen Studio, uses a Generative Adversarial Network, or GAN—the same basic software responsible for the artwork that makes me angrier than an aggrieved youth sports parent—to allow users to remix structurally related objects owned by the Met, then find close matches to the AI-generated hybrid in the museum’s collection.

Personally, I think it’s a harmless distraction good for about 15 seconds of surplus curiosity. And compared to how I feel about the other four prototypes, that is high praise.
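For readers curious how that kind of remixing works under the hood, here is a rough sketch of the general pattern: blend the latent codes of two source artworks with a GAN generator, then run a nearest-neighbor search against precomputed embeddings of the collection. The function names and the hypothetical generator and embed helpers below are placeholders for illustration, not the Met’s or Microsoft’s actual code.

import numpy as np

def blend_latents(z_a, z_b, weight=0.5):
    # Interpolate between the latent codes of two source artworks.
    return (1.0 - weight) * z_a + weight * z_b

def nearest_collection_matches(hybrid_image, collection_embeddings, embed, k=3):
    # Embed the GAN-generated hybrid and rank collection works by distance.
    query = embed(hybrid_image)
    distances = np.linalg.norm(collection_embeddings - query, axis=1)
    return np.argsort(distances)[:k]

# Usage, assuming a pretrained generator(z) and an embed(image) function
# indexing the Open Access collection in the same feature space:
# hybrid = generator(blend_latents(z_vase, z_armor, weight=0.3))
# matches = nearest_collection_matches(hybrid, collection_embeddings, embed)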

Gen Studio’s art-creation application prototype being shown at the Metropolitan Museum. Image courtesy Ben Davis.

OVERSHARING

Gen Studio aside, the Met x Microsoft x MIT subtly aggregates AI’s greatest threats. Once complete, the remaining quartet of apps can only deliver its (meager) payoff if you give away either A) your personal data, or B) your labor. (As prototypes, none currently store or exchange user data.) All it takes to see this unsettling trade-off is a few italics added to copy from the Met’s webpage.

Artwork of the Day will select one daily image intended to “resonate with you” based on “your location, weather, news, and historical data [italics mine]”; My Life, My Met “will use Microsoft AI to analyze your posts from Instagram and substitute the images with the closest matching Open Access artworks from the Met collection”; and Storyteller “uses voice recognition AI to follow the discussion and share artworks from the Met collection that resonate with the stories being told.”

In other words, for each algorithm to work, you have to give it access to, respectively, where you are at any given moment; every image you’ve ever posted to the world’s most popular photo-sharing app; and both what your voice sounds like and what you’re talking about while the app is open. What could go wrong?

I’m sure everyone involved in the collaboration will say user data is kept safe and only used to refine the prototypes themselves, not leveraged toward other goals. (After publication, the Met responded to an email inquiry by promising a response from Microsoft.) But even if you ascribe the best intentions to everyone involved, do you trust them to deliver? It’s not as if Big Tech has given us much reason to consider its ringleaders hack-proof, bug-proof, or self-interest-proof lately.

Then there’s the Met x Microsoft x MIT’s fifth and final prototype: Tag, That’s It! Beneath the exuberantly punctuated allusion to child’s play, the project provides an even more direct link to the grim stakes of AI than its brethren dependent on self-surveillance.  

A reconstruction of the Mechanical Turk, an 18th-century hoax alleged to be a chess-playing robot. Image courtesy of Wikimedia.

TURKING OVERTIME

The Met describes Tag, That’s It! as “a crowdsourcing means of fine-tuning subject keyword results generated by an AI model” for pieces in its collection. Translation: Users volunteer their time to apply keywords reflecting what’s shown in each image so that the algorithm can better learn to recognize subject matter, making the collection more easily searchable over time.
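To be concrete about what that volunteering looks like, here is a deliberately simplified sketch of the crowdsourcing pattern: collect each volunteer’s keywords for an image, keep the tags enough people agree on, and feed those back to the model as training labels. The vote threshold and helper names are illustrative assumptions, not details of the actual Tag, That’s It! pipeline.

from collections import Counter

def consensus_tags(crowd_tags, min_votes=3):
    # Keep only the keywords that enough volunteers independently applied.
    counts = Counter(tag for submission in crowd_tags for tag in submission)
    return {tag for tag, votes in counts.items() if votes >= min_votes}

# Each inner list is one volunteer's keywords for the same artwork image.
crowd_tags = [
    ["horse", "rider", "bronze"],
    ["horse", "sculpture", "bronze"],
    ["horse", "bronze", "armor"],
]

labels = consensus_tags(crowd_tags)  # {"horse", "bronze"}
# Vetted labels like these become ground truth the next time the
# subject-keyword model is retrained, which is what gradually makes the
# collection more searchable.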

I expect the Met, Microsoft, and MIT would compare contributing to Tag, That’s It! to contributing to Wikipedia: an online labor of love that makes the community better. But the repetitive, borderline mindless nature of the task, as well as the AI endgame, actually makes users much more like Mechanical Turks.

For those uninitiated into the boiler-room infrastructure of machine learning, “Mechanical Turk” is Amazon’s term for a flesh-and-blood laborer who repeatedly performs a task easy for humans but still difficult for machines, usually for extremely low compensation that incentivizes volume participation. (The name arises from an 18th-century hoax in which an alleged chess-playing “robot” was just a dude hidden inside an elaborate construction.)

Amazon runs an entire marketplace for Mechanical Turks, where “requesters” post listings for tasks they need done online and potential workers (“Turkers”) trawl the available options. Sample tasks include filling out surveys, transcribing recorded audio, and the job central to Tag, That’s It!: labeling image contents to gradually improve “computer vision,” or software’s ability to accurately parse the visual world into discrete components through machine learning.

Turking is almost invariably paid at sub-minimum-wage rates. Requesters can set a task’s payout as low as one cent. And since Amazon takes 20 percent of whatever requesters pay to Turkers, requesters have all the more incentive to minimize compensation and to underestimate the time required to complete a task (another compulsory data point for any listing).

The upshot for the laborers? A recent study by Cornell University found that the median payout per Turker hovers somewhere around two dollars an hour. This is legal because, like Uber drivers, Turkers qualify as independent contractors, freeing their employers from labor regulations such as a minimum wage, designated breaks, and health or vacation benefits.

Still, the grand irony buried within this arena is that, when it comes to helping to train machine-learning algorithms, Turkers are literally incentivized to try to make themselves redundant as fast as possible. Which might seem like a good thing, until you survey the rest of the labor market—an increasingly apocalyptic hellscape scorched by the disruptive effects of automation and AI.

Josh Kline, Unemployment (installation view) (2015). Image courtesy 47 Canal.

VICIOUS CYCLE

If you’re afraid of what AI will do to the global workforce in the future, start looking right in front of you. A 2017 Deloitte study determined that more than half of companies had already started using “robotic process automation” (software that automates routine, rules-based work, increasingly with help from machine learning and AI) to replace at least some jobs previously handled by people, with nearly three-quarters expected to do the same by 2020.

This push toward a software-dominant economy lends employers a certain Jekyll and Hyde quality. Here’s Kevin Roose summing up his experience with the issue at this year’s World Economic Forum in Davos:

In public, many executives wring their hands over the negative consequences that artificial intelligence and automation could have for workers…. But in private settings, these executives tell a different story: They are racing to automate their own work forces to stay ahead of the competition, with little regard for impact on the workers.

Tech evangelists and defenders have long argued that technological progress is nothing to fear, because the same advancements that eliminate old jobs will create new, better ones, just as happened in the Industrial Revolution. But as Eduardo Porter wrote in the New York Times this week…

Something different is going on in our current technological revolution. In a new study, David Autor of the Massachusetts Institute of Technology and Anna Salomons of Utrecht University found that over the last 40 years, jobs have fallen in every single industry that introduced technologies to enhance productivity. The only reason employment didn’t fall across the entire economy is that other industries, with less productivity growth, picked up the slack.

“Industries with less productivity growth” means industries (and thus, jobs) that are still basically the same as they’ve always been: food service, hospitality, eldercare, etc. I say “basically” because, as Porter reports from an increasingly class-divided Phoenix, Arizona, employers are generally only keeping these human-dependent industries going by slashing wages and benefits to minimize their costs.

So the fact that there are jobs, period, papers over the fact that said jobs are not good—and getting worse. Yet even these hard-to-automate industries can’t keep up with the overall labor supply. So where else do the unemployed and underemployed turn?

As relayed in a harrowing Atlantic piece by Alana Semuels, a recent Pew poll found that five percent of Americans made money by doing some amount of remote work through an online platform—about two and a half times more than drove for ride-sharing apps like Uber. And while not all this remote work is paid as poorly as Turking, the trend lines are pointing in the wrong direction. Siddharth Suri and Mary L. Gray, researchers focused on the unskilled gig economy (the one powered by on-demand marketplaces like TaskRabbit, Airbnb, and Uber), recently estimated that up to one-third of Americans may join its ranks within the next 10 years.

All of which creates a vicious cycle: AI makes elites and the highly skilled wealthier by usurping more and more once-decent jobs; unskilled or low-skilled workers retreat to unforgiving service industries; and failing that, they turn to even less forgiving on-demand, online-enabled work that elites are racing to usurp with AI, sometimes with the workers’ labor hastening their own obsolescence.

Example of an artwork suggested by “My Life, My Met.” Image courtesy the Metropolitan Museum of Art.

CONNECTING THE DOTS

What does all this have to do with the Met x Microsoft x MIT? On one level, not much. Even once the initiative’s entire quintet of AI prototypes goes live, I doubt the collective user base will move the needle on Silicon Valley’s quest to harvest enough data and cheap labor to supercharge machine learning into a workforce-reshaping productivity juggernaut.

But on another level, the collaboration typifies the soft-pedaling of AI’s concussive impact on the average citizen. It entices us to imagine how machine learning could “connect people through art,” to quote the Met’s copy… while offering nothing more than a few weak software freebies in exchange for personal data and unpaid labor. This uneven exchange sets a dangerous precedent by encouraging behaviors and expectations central to AI’s disruption of modern employment, class, and quality of life. It’s like being enthralled by a street magician while his accomplice empties your wallet.

In this way, the Met’s AI initiative invites the art world to once again turn so completely inward that it misses the larger issues at the crux of the latest spectacle. My advice? Pulp the invitation.  

[artnet News]

 

That’s all for this week. ‘Til next time, remember: “Changing the world” can be as much a threat as a promise.