Every Monday morning, artnet News brings you The Gray Market. The column decodes important stories from the previous week—and offers unparalleled insight into the inner workings of the art industry in the process.
This week, in honor of the single deep cut that forced me into the ER on Wednesday night, going deep on one story only…
HISTORY CHANNEL
On Thursday, my colleagues at artnet News ran an excerpt from Deaccessioning and Its Discontents, a new book by Martin Gammon tracing the history of the controversial practice by which museums selectively sell works from their collections for the sake of refinement. It’s a policy that I (and many others) have wrestled with on multiple occasions, and Gammon helps shed new light on it.
The excerpt’s hook comes when Gammon, a founder of the Pergamon Art Group (which advises museums and private collections), recounts a few mortifying trades by onetime Albright-Knox Art Gallery and Rhode Island School of Design Museum director Gordon Washburn. Through the magic of deaccessioning, Washburn effectively swapped a peak Picasso for a saccharine Renoir, as well as multiple works by early Modern masters for a misattribution and an outright fake—moves that, if I’d made them, might compel me to abandon life above ground and join the Mole People in the MTA’s subway tunnels.
Yet Gammon argues that Washburn is crucial to museum policy wonks and picket-ready art lovers for a much grander reason. And in the process, his larger historical context shows how popular opinion about deaccessioning has changed dramatically over the decades.
Exhibit A: Gammon demonstrates that, at least in American museums, deaccessioning started tripping the scandal alarms less than 50 years ago. He points to the Metropolitan Museum of Art’s 1972 sale of works from shipping heiress Adelaide Milton de Groot’s bequest—or more precisely, New York Times critic John Canaday’s venom-tipped cobra strike on rumors of a secret art sale by the museum—as a turning point in the issue’s public perception.
But a generation earlier, in 1938, the Times had praised Gordon Washburn’s decision to create the so-called Room for Contemporary Art at the Albright-Knox: a gallery dedicated to hosting a collection of cutting-edge works, any of which, in Washburn’s words, “may be resold or otherwise disposed of, if, in the course of time, they do not satisfy the critical judgment of the administrators.” In short, deaccessioning was central to the endeavor, and the paper of record found the idea brave, forward-thinking, and exciting.
So what changed in those 34 years? And why is deaccessioning still widely seen in the art community as sacrilege today?
THE DEVIL IN THE DETAILS
Gammon almost assuredly offers his own theory in the rest of the book. But what struck me as I dug deeper into the 1972 Met sale on my own was the way that, in the US, the prevailing public response to deaccessioning seems to keep morphing to fit the contours of the era’s larger sociopolitical conversation, whatever it may be at the time.
For more detail on the de Groot deaccession, consider a piece written by Josh Niland for Hyperallergic last February. The facts Niland surfaces are genuinely stunning, revealing what we might otherwise assume to be a typical sell-off by today’s standards as something else entirely.
The basics don’t seem especially shady. Then-director Thomas Hoving and 20th Century Art department head Henry Geldzahler coordinated to deaccession multiple works donated by Adelaide Milton de Groot to acquire pieces by postwar New York School artists (like David Smith), whom Geldzahler himself had been pushing harder than an expectant mother in the final stages of labor. But what’s surreal is the way they went about it.
To pull off the moves, Geldzahler barrel-rolled around a Met board member skeptical of the plan by conjuring up his own lowball valuation for three Max Beckmann paintings he and Hoving wanted to flip in the deaccession, arguing, in effect, that they were minor works the museum would not miss. However, the Beckmanns “were sold via what Hoving termed a ‘silent auction’” to Viennese dealer Serge Sabarsky for a price over 25 percent higher than Geldzahler’s maximum estimate, suggesting the two men knew they were cashing in paintings of greater import than they let on.
And then, most stunning of all, Niland contends that Hoving tried to gaslight the world by publicly stating that the deaccession never happened.
Unfortunately for the Met, Hoving was nowhere near gangster enough to muscle people into this lie. New York Attorney General Louis J. Lefkowitz started sniffing around and, in Niland’s words, “finally pushed the museum to adopt a set of procedures for the sale of its artworks.”
Other institutions began to follow suit over the succeeding decades. The American Alliance of Museums (AAM) didn’t require members to develop formal collections management guidelines until 1984, and it would take until 1991 before that organization joined with the Association of Art Museum Directors (AAMD) to codify the first version of the sanctions-backed institutional orthodoxy governing deaccessioning in the US today.
Now, it’s possible that the circumstances of the de Groot sale alone might have been enough to flip opinions in the press. After all, in his Times salvo, John Canaday admitted that he himself had been in favor of at least one earlier deaccession: the Guggenheim’s decision to “sell from strength” several of its many Kandinsky works.
But other, grander events in 1970s America may have had a strong influence, too.
DEEP COVER
Canaday’s gentlemanly fury over the Met’s rumored deaccession plan seems to have been triggered as much by the prospect of subterfuge as by the prospect of sales. (For those who didn’t click through earlier, the piece is titled “Very Quiet and Very Dangerous.”) In fact, Canaday didn’t isolate the Met as his only target. He instead wrote of deaccessioning that “the practice is widespread and is carried on on [sic] significant scale” in American museums—“but the rule is, keep it quiet.”
In other words, his allegations were that high-ranking officials in American institutions were regularly and repeatedly carrying out actions against the public’s interests, then covering up the evidence so it never left the halls of power. In his mind, Canaday was doing nothing short of calling out a conspiracy.
Would this theme have had particular resonance in the US at the time? For those of us who didn’t live through it, let’s recap the political climate.
In 1972, the country was less than 10 years removed from the assassination of President John F. Kennedy, and a growing number of skeptics believed the truth about his murder had been deliberately buried. For instance, the Warren Commission, formed in late 1963 to provide a definitive ruling on JFK’s murder, was widely and immediately criticized for failing to review key evidence en route to endorsing the infamous “magic bullet” theory, which challenged probability and physics to hold that Lee Harvey Oswald acted alone.
However, fueled by years of tour de force investigative journalism, the public demand for answers eventually necessitated multiple high-level federal reviews in the 1970s. Two of these, shorthanded as the Church Committee and Rockefeller Commission, together blasted open a prison of secrets about illegal activities on the part of US intelligence organizations, ranging from sanctioned foreign assassination programs to covert surveillance of American citizens.
By 1972, the US was also deeply embroiled in military action in Vietnam, which proved to be a breeding ground for conspiracies real and imagined. Some high-ranking members of the military stated that the reported attacks on the USS Maddox in the Gulf of Tonkin—the incidents used by the Lyndon Johnson administration to justify an escalation of American aggression in Southeast Asia—were bogus. Furthermore, they alleged the Maddox’s sole purpose was to provoke the North Vietnamese into opening fire precisely to provide just cause for a war some American officials desperately wanted.
In 1969, an intrepid young reporter named Seymour Hersh also revealed a military cover-up of what we now know as the My Lai Massacre, in which US troops murdered more than 500 unarmed villagers the year prior.
And perhaps most important of all, Canaday’s op-ed and the Met’s de Groot sale also happened the same year as a little presidential snafu known as Watergate.
In the black light of this historical context, then, is it possible that the traditionally left-leaning people closely monitoring the arts landscape in the early ‘70s could be triggered to turn against a museum policy that allowed institutionalized power to unilaterally decide, with no external oversight, what the public would and would not be exposed to?
THEN AND NOW
I wouldn’t blame you for slapping a tinfoil hat on me if there were only one period where deaccessioning looked like a clean proxy for the public mood outside of the arts. But it seems to happen again and again.
Let’s time-travel back to 1938, when the Times enthusiastically endorsed the Albright-Knox’s fast-and-loose approach to refining its contemporary collection. While the US hadn’t yet clawed its way out of the Great Depression—John Kenneth Galbraith notes that the unemployment rate that year was still a demonic 20 percent—several New Deal initiatives had helped give considerable hope to an untold number of Americans from coast to coast. And they had done so specifically by acting as a wave of innovation aimed at fixing or replacing archaic systems in order to usher in a smarter, better future.
What could fit better in this context than a plan that incentivized a museum to be fluid and flexible in building a forward-looking collection of what comes next rather than chaining itself to the past out of principle?
And what about the current debate? I don’t think it’s difficult to see how the dominant attitude in the art community toward deaccessioning today—which is to say, that the practice is an ethical tire fire and a wad of phlegm spit into the general public’s eye—might be much more influenced by passionate socioeconomic concerns than by the cold, quasi-extraterrestrial logic I often subscribe to.
Despite the fact that works deaccessioned by one museum are at least sometimes acquired by another—as happened in one of the most famous recent cases, in which Norman Rockwell’s Shuffleton’s Barbershop moved from the embattled Berkshire Museum to George Lucas’s forthcoming Lucas Museum of Narrative Art—the most common reason for vitriol seems to be the increasingly unfair contest between the private and public sectors, and more meaningfully, between the haves and have-nots.
To critics of museum sell-offs, the problem seems not to be so much about one audience of average citizens losing great artworks to another audience of average citizens somewhere else. It’s that average citizens, period, are most likely to lose great artworks to obscenely wealthy individuals who can outbid any institution in the world if they so desire. Then our shared cultural riches can be locked away forever, unless the new buyer decides to donate their acquisitions to a public institution (or open their private vaults to the rest of us).
This conception of the issue fits hand-in-gauntlet with the colossal and growing class struggle fracturing the US and other nations around the world. Even as the domestic economy surges to great heights in absolute terms, real wages for the average American remain fixed at the same level as in the late ‘70s. It’s a scenario that gave feet-first birth to the Occupy movement, then to populist political waves on the right and left alike, and to an “eat the rich” sentiment that feels increasingly ready to explode into actual violence every day.
So when we talk about deaccessioning in any era, what are we actually talking about? Maybe it’s just a wonky debate about collections policy. But maybe it’s about something much, much deeper: a sign of the times, shape-shifting to reflect whatever optimism or anxieties define the epoch.
That’s all for this week. ‘Til next time, remember: Even when art only intends to be about art, the people around it have their own ideas.