James Simpson

On his book Permanent Revolution: The Reformation and the Illiberal Roots of Liberalism

Cover Interview of October 16, 2019

In a nutshell

Protestants won in Northern Europe in the sixteenth and seventeenth centuries. The narrative produced by the winners was and largely remains triumphalist: Protestantism won because it foreshadowed the liberal order. It promoted the growth of individuality, now that each Christian had unmediated access to a personal God; liberty of conscience; rationality; the right to interpret scripture for him or herself; equality through the democratic priesthood of all believers; toleration; constitutionalism, and national independence. Winners make history.

In Permanent Revolution I argue that this argument is both wrong and right. It is wrong because most varieties of sixteenth- and seventeenth-century Protestantism were the opposite of liberal. They were illiberal in remarkably extreme, soul-crushing, violence-producing ways.

The triumphalist argument is right insofar as the 150 years following 1517 witnessed a historical process whereby many Protestants ended up repudiating the initial, founding doctrines of Luther and Calvin. Democracy; division of political powers; separation of church and state; free will; toleration for minorities; liberty and privacy of conscience; artistic liberties; liberty of textual interpretation: all these cardinal features of the Enlightenment emerge from Protestant polities. They do so, however, by repudiating Lutheran and Calvinist Protestantism.

In sum, Protestant triumphalism is wrong with regard to the beginning of the Reformation centuries, and right with regard to the end of that historical period.

How did the process of repudiation occur? I make sense of that historical process by defining three broadly applicable periods of the Reformation centuries in Britain: (i) 1517-1560, the revolutionary, carnivalesque, fun period of smashing the Catholic Church and all its practices; (ii) 1560-1625, the decidedly unfun period when many Protestants discover that they are violence-producing, iconoclastic hypocrites likely damned by predestination; and (iii) 1625-1688, the period in which Protestantism divides into its illiberal, Presbyterian, Calvinist wing on the one hand, and its proto-Enlightenment, proto-liberal wing on the other.

I substantiate the argument with sections devoted to the following topics: despair; hypocrisy; iconoclasm; theater and the pursuit of “witches”; reading; and liberty.

Herbert S. Terrace

On his book Why Chimpanzees Can't Learn Language and Only Humans Can

Cover Interview of October 02, 2019

In a nutshell

This book is about the origin of language, why it is special, and how it got that way. One reason language is special is that it allows human beings to name things and to use those names conversationally. No animal has this ability; only humans do. We are now beginning to understand when and how our ancestors began to talk and what it takes to get an infant to speak their first words.

Some people say language was simply created. End of story! That’s nonsense. Language evolved, just like all other biological and psychological processes. Until recently, however, nobody had any serious idea how it took shape from animal communication. I didn’t either, at least not until I tried to explain the failure of a project in which I attempted to teach a chimpanzee to use language.

The failure of that and similar projects is one of three themes in my book. The other two concern (1) an ancestor who likely produced the first words and (2) the non-verbal experiences an infant shares with their parents that are crucial for producing their first words.

Consider the failures of ape language projects. Linguist Noam Chomsky began his distinguished career with a scathing critique of behaviorism in which he claimed language was uniquely human. Many behaviorists reacted to that claim by starting projects in which they attempted to teach apes language. To get around an ape’s articulatory limitations, some projects, including my own, used American Sign Language, a gestural language used by hundreds of thousands of deaf people. The focus of my project was an infant chimpanzee we humorously named Nim Chimpsky. Other projects sought to teach language to chimpanzees and bonobos by training them to produce sequences of arbitrary visual symbols.

All those projects failed. In the case of sign language, my analyses of videotapes in which Nim signed with his teachers revealed that they had inadvertently prompted him to make signs they anticipated he would make. Sequences of signs that seemed spontaneous were, in fact, cued by Nim’s teachers.

Sequences that chimpanzees learned to produce in other projects could be explained by rote memorization, like the sequences people use to enter a password to obtain cash from an ATM. What those sequences have in common is they are motivated by reward. Requests for rewards, however, constitute a minuscule portion of human vocabulary. If such requests were all an infant learned, they would never learn language.

“Ape language” experiments failed because none of the subjects could learn to use symbols as names — the basic function of words. They showed why it is futile to teach an ape to produce sentences if it can’t even learn words. Although the results of Project Nim were negative, they showed why a theory of language’s evolution must begin with words, not sentences.

What ancestor might have produced the first words? Now that we know chimpanzees are unable to learn words, we must ask, which, if any, of our ancestors were the first to use them, why they might have done so, and what they might have said. Recent discoveries by paleoanthropologists suggest it is likely that the first words were invented by Homo erectus.

The critical question to ask about any purported inventor of words is how the words contributed to the species’ survival. If they didn’t, they couldn’t have been naturally selected. Most candidates fail that test, e.g., using words to enhance pair-bonding and social bonds; each of those behaviors develops without words. To survive, Homo erectus needed copious calories to feed its enlarged brain, the volume of which was almost three times that of a chimpanzee’s.

The most efficient source of calories is meat, but Homo erectus lacked the weapons required to kill large animals. They could use stone tools to butcher animals that had been killed by other predators or that had died a natural death, but they couldn’t kill them outright. They had to use another approach. After one of their group, a “scout”, located a dead animal, he had to recruit colleagues to help butcher it where it lay and to scare off other animals that might pick at its remains while they worked.

Because the dead animal was far away and out of sight, the scout had to invent arbitrary words to describe it and its location. The innate vocabulary of signals animals used to communicate would not suffice. We don’t know if the scout used gestures or a spoken utterance, or both, to get his colleagues to think about the animal they had to scavenge, but whatever form the gestures or utterance took, some linguists suggest it was the origin of the first words. Our vocabulary grew from there.

Now consider pre-verbal precursors of an infant’s first words. Something remarkable happens to every infant during their first year that distinguishes their history from that of every other primate. They experience two non-verbal relations with their parent that pave the way to language. During the first few months, human infants are cradled by their parents. That provides a basis for their sharing gazes and emotions in a stage of development called intersubjectivity.

Beginning at approximately six months, the infant begins to crawl and to point to external objects, often picking them up to show to their parents. Pointing to an object and sharing it with a parent takes place during a second stage of development called joint attention. When an infant and parent know they are attending to the same object, the infant can readily learn its name by imitating a parent’s comment.

Intersubjectivity and joint attention, two uniquely human phenomena, are crucial for the development of language. Their absence in chimpanzees is the best explanation for chimpanzees’ inability to learn language. Their partial absence in autistic children and in children raised in orphanages also explains why language development in those children is delayed.

Ian Hodder

On his book Where Are We Heading? The Evolution of Humans and Things

Cover Interview of September 25, 2019

In a nutshell

In this book I grapple with a problem: Is human development directional? Does human evolution move in a particular direction? In recent decades the dominant view in the various sciences of evolution has been that change does not tend in any particular direction. The older 19th-century idea of progress and advancement towards a civilized ideal has long been overturned by notions of a directionless process of natural selection. Current theories seek to avoid any notion of teleology and goal in the human story. And yet all archaeologists know that, when looked at from a distance, the story of human development has a clear direction in at least one aspect – the amassing of more and more stuff. Humans started making simple stone tools, and in the millennia of early human development they amassed small assemblages and made tools that had few parts. Today we produce massive machines such as the Large Hadron Collider, the largest single machine in the world, which connects 170 computing centers in 36 countries and uses $23.4 million in electricity annually.

This book seeks to marry the archaeological evidence of gradual and then runaway increases in material stuff used by humans with a theory that avoids teleology or goal direction. I outline a theory of human evolution and history based on ‘entanglement’ defined as the ever-increasing mutual dependency between humans and made things. It is widely accepted that humans have become increasingly dependent on technologies and on consumerism, but less emphasis has been placed on the way these human dependencies on things also involve things being dependent on humans. And they involve things being dependent on other things in complex, far-reaching entanglements. Much contemporary social theory describes the networks or webs of humans and things that constitute the modern world, but there has been less focus on how humans become entrapped by these material webs so that movement is channeled down certain pathways. This path dependency lies behind specific historical trajectories and it underpins the global movement towards a directional increase in the human dependence on things.

In the book I use archaeological examples, such as housing or the wheel, but also historical examples such as cotton or opium, to show how material things play an active role in pushing entanglements in particular directions. Much social theory has accepted the agency of things, but I argue that it is not the individual things but the systems of things (that is the thing-thing dependencies) that are crucial. When house walls collapse or spinning technologies can no longer achieve their purpose, new things are brought in to fix the problem. These new things often require further human intervention. Thus, humans are caught in a double bind, depending on things that depend on them so that humans are drawn into yet further dependence on things. Once this process has occurred it is difficult to go back – too much has already become caught up in the new entanglements. So, in our arguments about evolution, I want to replace teleology with irreversibility. The direction is always, over the long term, towards greater human-thing entanglement.

Frank O. Bowman III

On his book High Crimes and Misdemeanors: A History of Impeachment for the Age of Trump

Cover Interview of September 18, 2019

In a nutshell

Some book titles are mysterious or allegorical, providing no obvious clue of what the darn things are about. No danger of that here. I’ve written a book about the long history of impeachment and why that history matters during the presidency of Donald Trump.

Impeachment was invented by the British Parliament in the 1300s as a counterweight to the authoritarian tendencies of monarchy. The legislature could not remove kings or queens except by revolution but could alter or impede royal policies of which it disapproved by removing the monarch’s agents – royal ministers, favorites, and judges. Even though a lot of old British impeachments were no more than rough incidents of ordinary politics, the most important ones were conscious efforts to resist royal tyranny and establish or preserve emerging norms of representative constitutional government.

The great impeachments of British history stuck in the historical memory of North American colonists and the Framers of the American constitution. They adopted both the British mechanism of legislative impeachment and its traditional British purposes when they wrote early state constitutions and the federal constitution of 1788. Likewise, by borrowing the phrase “high crimes and misdemeanors” from parliamentary practice to describe the scope of impeachable behavior, the Framers of the federal constitution incorporated four centuries of British precedent into American practice. Of course, the Framers created an impeachment power broader than that of Parliament in that it can remove the national chief executive him or herself. But here, too, the Framers were merely transplanting into republican soil the most important function of impeachment in Britain’s emerging constitutional monarchy – preventing executive overreach and preserving constitutional balance.

The Framers intended impeachment, not as a criminal proceeding for punishing individual wrongdoing, but as a political tool. When directed at judges and lesser federal officials, it is primarily a housekeeping mechanism for dealing with corruption or rank incompetence. In the case of presidents, it is a means of defining and protecting constitutional order against executive malfeasance.

The middle chapters of the book discuss the relatively few federal impeachments or near-impeachments – one Senator, one Secretary of War, a passel of judges, and Presidents Andrew Johnson, Richard Nixon, and Bill Clinton. Each incident, particularly those involving presidents, teaches its own lessons.

I argue that Johnson’s narrow 1868 escape from conviction in the Senate was a mistake – a failure by Congress to assert its power to define the nation’s constitutional future in the aftermath of the Civil War.

The events that forced Nixon’s resignation provided an exemplar of proper investigative process in such cases and exposed behavior that is the paradigm of impeachable “high crimes and misdemeanors.”

The Clinton affair was a misadventure – a misuse of the independent counsel mechanism created after Watergate and a living illustration of presidential misbehavior too inconsequential to merit removal.

The final six chapters address the 25th Amendment and kinds of presidential conduct potentially relevant to the current moment – obstruction of justice, abuse of the pardon power, lying, corruption – and conclude with a summary of the case for and against impeaching Mr. Trump.

Carla Yanni

On her book Living on Campus: An Architectural History of the American Dormitory

Cover Interview of September 11, 2019

In a nutshell

Did you live in a dormitory? Do you have fond memories of it, or were you plotting your escape from the moment you moved in? Many people live in residence halls during a transformational moment in their lives. And lots of people pay for their children to live in dormitories, too. But few stop to think about the way these everyday structures shape young lives.

Living on Campus is the first and only book that looks at the architectural history of this commonplace building type. I argue that college students dwelling together is not obvious or inevitable. Instead, it is an artifact of three centuries of American educational ideology that placed an outsized value on socialization. In the seventeenth and eighteenth centuries, students were boys who needed moral guidance; in the nineteenth century, women began attending college in large numbers, always under protective eyes; as the concept of the adolescent emerged around 1900, youthful men were encouraged to delay adulthood; in the Fifties, they were GIs eager to re-enter society; in the Sixties, students were members of a youth culture that administrators almost feared. This mad dash through the centuries is obviously oversimplified, and yet it nonetheless demonstrates that today’s students bear little resemblance to their forebears, which makes it all the more remarkable that the residence hall still thrives.

Adrienne Mayor

On her book Gods and Robots: Myths, Machines, and Ancient Dreams of Technology

Cover Interview of September 04, 2019

In a nutshell

Who first imagined robots?

Most historians believe that automatons were first developed in the Middle Ages. Some philosophers of science claim that it was impossible for anyone in ancient times to imagine technologies beyond what already existed. Other scholars assume that all animated beings in mythology were inert matter brought to life by gods or magic, like Adam or Pygmalion’s ivory statue. But I wondered, Was it possible that the concept of robots could have been imagined in classical antiquity?

I found that people were describing imaginary automatons as early as Homer, more than 2,500 years ago. A remarkable group of Greek myths envisioned ways of replicating nature by bio-techne, “life through craft.” Robots, synthetic beings, and self-moving devices appear in myths about Odysseus, Jason and the Argonauts, the sorceress Medea, the bronze automaton Talos, the brilliant craftsman Daedalus, the fire-bringer Prometheus, and Pandora, the female android fabricated by Hephaestus, god of invention. Hephaestus, Homer tells us, made many wonders: automatic gates to the heavens, “smart” bellows for his forge, driverless carts to serve ambrosia at celestial banquets, and a staff of Golden Maidens endowed with movement, mind, and all the knowledge of the gods.

Ancient poets described Talos, Pandora, and other lifelike artificial entities as “made, not born.” This crucial phrase calls out their technological, non-biological origins. And it distinguishes them from things magically given life. These marvels were constructed using the same tools, materials, and methods that human artisans used, but with awesome results. Hesiod (ca 700 BC) even detailed the inner workings and power source of the bronze Talos, fulfilling modern definitions of “robot.”

So, thousands of years before medieval and early modern machines, and even centuries before technological innovations of antiquity made self-moving devices possible, ideas about creating artificial life were being explored in Greek myth. These imaginative tales were ancient thought experiments, set in an alternate world where technology was marvelously advanced. Gods and Robots is the first book to survey the ancient origins of the desire to create artificial life, drawing on narratives and art from the age of mythology to the proliferation of real automatons in the Hellenistic era (fourth century to first century BC).

Of course, it’s important to keep in mind that much of ancient literature and art has vanished or is incomplete. This sad fact determines one’s path of discovery and interpretation. We travel across a landscape ravaged by time—something like the “mosaic effect” after devastating wildfires. In other words, readers should not expect a simple linear route in these chapters. Instead, like Theseus following a thread to navigate the Labyrinth designed by Daedalus—and like Daedalus’s little ant making its way through a convoluted seashell to its reward of honey—we follow a meandering, backtracking, braided trail of stories and images. One can dip into the chapters in any order. Mythology is a great tapestry with myriad threads, interwoven and looping back to familiar characters and stories. We are bound to accumulate new insights as we go.

Allan J. Lichtman

On his book The Embattled Vote in America: From the Founding to the Present

Cover Interview of August 21, 2019

In a nutshell

The Embattled Vote explains why Americans have fought and died for the right to vote. The book is more relevant than ever after the Supreme Court sanctioned partisan gerrymandering in a ruling issued on June 27, 2019.

The world’s oldest continuously operating democracy guarantees the franchise to no one, not even citizens. This lack of universal voting rights originated in a crucial mistake by America’s founders: omitting a right to vote from the Constitution and leaving the franchise to the discretion of individual states. During Reconstruction, Congress missed an opportunity to rectify the founders’ error and enshrine a positive right to vote in the Constitution. Instead, it framed the Fifteenth Amendment on racial voting rights negatively, in terms of what states could not do, setting a precedent for later amendments on sex and age.

The lack of a constitutional guarantee has meant that the right to vote has both expanded and contracted over time. In the early republic, only those with a “stake in society” proven through the ownership of property or the payment of taxes could vote in most states. In the nineteenth century, states eliminated economic qualifications but embraced the ideal of a “white man’s republic,” with people of color and women excluded from the franchise.

Despite advances since adoption of the landmark Voting Rights Act of 1965, our voting rights are still in jeopardy. Politicians who benefit from voter suppression rely on bogus claims of voter fraud to deprive millions of Americans of the franchise through voter identification laws, political and racial gerrymandering, registration requirements, felon disenfranchisement, and voter purges.

In 2013, the U.S. Supreme Court struck down one of the Act’s most effective provisions, which required states and localities with a history of voter discrimination to preclear changes in voting laws and procedures with the U.S. Justice Department or the Federal District Court in D.C. The Court held that the “pervasive,” “flagrant,” “widespread,” and “rampant” discrimination that Congress sought to correct in 1965 no longer existed in the 21st century.

Yet, by neglecting new forms of voter suppression, the Court unleashed a renewed push to erode voting rights. Dallas minister Peter Johnson, a civil rights activist since the 1960s, said, “There’s nobody that’s going to shoot at you if you register to vote today. They aren’t going to bomb your church. They aren’t going to get you fired from your job. You don’t have those kinds of overt, mean-spirited behaviors today that we experienced years ago… They pat you on the back, but there’s a knife in that pat.”

Historically, the players in the struggle for the vote have changed over time, but the arguments remain depressingly familiar and the stakes are very much the same: Who has the right to vote in America, and who benefits from exclusion?

George C. Galster

On his book Making Our Neighborhoods, Making Our Selves

Cover Interview of August 07, 2019

In a nutshell

Making Our Neighborhoods, Making Our Selves presents a holistic analysis of the origins, nature, and consequences of neighborhood change and offers strategies for making a more socially desirable palette of neighborhoods in America. The foundational proposition of this book is: we make our neighborhoods and then they make us. That is, our collective actions in metropolitan housing submarkets regarding where we live and invest financially and socially will determine what characteristics our neighborhoods will manifest and how they will evolve. In turn, these multidimensional neighborhood characteristics influence our attitudes, perceptions, behaviors, health, quality of life, financial well-being, children’s development, and families’ opportunities for social advancement. The book advances a model of understanding the causes and effects of neighborhood dynamics through the lens of metropolitan housing market demands and supplies.

Unfortunately, the private, market-oriented decision-makers now governing human and financial resource flows among American neighborhoods usually arrive at inefficient and inequitable outcomes from the perspective of the larger society. Inefficient resource allocations arise due to externalities, strategic gaming, and self-fulfilling prophecies. Externalities are actions by investors that create benefits or costs for their neighbors that they do not consider when deciding whether to undertake the action. For example, homeowners do not consider how under-maintaining their homes harms the values of nearby properties. Strategic gaming occurs in situations where investors’ payoffs from property improvements depend heavily upon whether neighboring investors undertake similar actions, and what these investors will do is uncertain. Few investors will choose to be the first to renovate their dwelling in a deteriorated neighborhood when they worry that no one else will follow suit. Thus, each waits for the other to assume the risk of taking the lead, so no one renovates. Self-fulfilling prophecies occur when people react to an expected neighborhood change in ways that actually encourage that change to occur. For instance, if whites anticipate that their home values will fall if many nonwhites move into the neighborhood and react by trying to sell their properties at discounts, they will create the very situation they feared. These various forms of market failure systematically produce too little housing investment in many places and too much segregation by race and income. This means that the current characteristics of neighborhoods are inefficient from a society-wide perspective.

Moreover, lower-income, black, and Hispanic households and property owners typically bear a disproportionate share of the costs associated with under-investment, segregation, and neighborhood transition processes, while reaping comparatively little of their benefits. Because neighborhood context powerfully affects children, youth, and adults—yet neighborhood contexts are extremely unequal across economic and racial groups—space becomes a way of perpetuating unequal opportunities for social advancement. This means that neighborhoods are inequitable from a society-wide perspective.

To remedy these substantial market failures to provide efficient and equitable neighborhoods, I provide a comprehensive set of neighborhood-supportive public policies and programs in the domains of physical quality, economic diversity, and racial diversity. These initiatives should be guided by the principle of strategic targeting. This means that the public sector must use its scarce resources in a spatially focused, concentrated way such that it leverages substantial private investment and encourages households and investors to pursue the same ends as the policy.

André Millard

On his book Equipping James Bond: Guns, Gadgets, and Technological Enthusiasm

Cover Interview of July 24, 2019

In a nutshell

This is a book about the equipment that James Bond uses: its origins, function, and the essential role it plays in Bond’s missions. The book also describes the role these gadgets play in the success of the entertainment brand based upon 007’s exploits. Bond’s creator, Ian Fleming, included details about the equipment of espionage to make his stories appear more authentic. Then the producers of the Bond films found that audiences were so fascinated by the exploding briefcases and sports cars fitted with machine guns and ejector seats that they made the gadgets an essential part of the character. Audiences have always accepted that Bond’s power comes from his mastery of technology and the high-tech gadgets he uses, and over time the films have become a showcase of new, weaponized technology.

The book follows the development of espionage technology from the early twentieth century, when the British secret service was established, through to the twenty-first century and then into a future imagined in the Bond films. The inspiration for the character of 007 and the equipment he uses comes from Fleming’s experience as an intelligence operative during World War II. Thus his hero starts his career with the simple, mechanical weapons and devices used in the war, such as single-shot weapons built into walking sticks or secret compartments in shoes or briefcases.

The first 007 films were made in the 1960s during a period of great technological advance and soon Bond’s equipment was at the leading edge of electronic, computing, and aerospace technology. The profits from the Bond films were so great that the producers were able to make bigger and more spectacular films, using some of the largest film sets ever built. They also had the resources and connections to acquire some of the very latest technology for Bond’s equipment.

Charlie Hailey

On his book Slab City: Dispatches from the Last Free Place

Cover Interview of July 10, 2019

In a nutshell

Slab City is known as the “last free place.” This remote settlement in the southern California desert occupies a decommissioned World War II military camp. Only its slabs and tank-rutted roads remain. Slab City also occupies the legacies of Jeffersonian land policy. Its one-square-mile area is one of the last remaining Section 36 plots dedicated exclusively for public use in the National Land Ordinance system. A collaboration between an architect and a photographer, this book sets out to understand the idea of freedom that has been built on those eponymous slabs by a community of anarchists, artists, retirees, snowbirds, squatters, survivalists, and veterans.

Seven decades in the making, Slab City is one of the most enduring temporary settlements in the United States. It endures despite hardship. All of the elements of a city’s infrastructure are here, but the roads are crumbling, the sewers have been capped, the million-gallon water tanks are empty, and the high-tension lines of the nearby power grid pass high overhead. The canals that flank two sides of Slab City like defensive moats rush toward the Imperial Valley’s fields of iceberg lettuce and northward to the lawns of Palm Springs. Like the soldiers bivouacked at Camp Dunlap in the 1940s, Slab City’s inhabitants—known as “slabbers”—are also effectively training to live in the desert, where the rights of public land meet the difficulties of off-grid living.

In Slab City, a resident’s tenure of a site requires physical occupation. And when you can’t be physically present, the structures you have made stand in for your absence. In our field work, Donovan and I set out to understand this place through the things its residents have made. Our collaboration took on Benjamin Buchloh’s charge to discover how creative practice (in our case, writing and photographing) can study marginal places. We hoped to understand new territories caught between complex pasts and indeterminate futures that mix autonomy, necessity, and control. Our method was to read and interrogate the built environment: its shelters, boundaries, markers, art installations, vehicles, cemeteries. In the process, constructions became characters; structures took on personas.

Between its introduction and conclusion, the form of the book has five acts—camp, perimeter, ration, facilities, and drawdown. Each act gathers the scenes and elements that make up this place and tells its story. “Ration,” for example, unpacks the necessary provisions of off-the-grid life in the desert—from tin cans to shade cloth, from cardboard to Slab Mart. “Facilities” narrates Slab City’s ad hoc infrastructures (including a hot springs bath, an improvised public shower, an empty Olympic-sized pool, a pet cemetery, and a library), as well as individual building projects of its residents (including a cave shelter carved into the berm of a canal and a hut made of pallets and palm fronds). Reading Slab City from start to finish follows the rising and falling actions of life on the slabs, but it can also be read more instinctively.

Each scene is short, and readers might choose to flip through the book, stopping at particular photographs or stories, in much the same way that Donovan and I explored the place. Each day, we set out with a particular theme in mind, whether it was boundary or grid or horizon, but our walks through the slabs always yielded other discoveries and insights that we had not expected. One morning, we were walking near the perimeter of the original military camp, and the low sun cast a series of long shadows that formed a nearly perfect line. We had discovered the remains of the camp’s southern perimeter fence. Its posts had long ago been salvaged (someone cut them off a few inches from the ground) to become building materials in Slab City.

Susan Schulten

 

On her book A History of America in 100 Maps

Cover Interview of June 26, 2019

In a nutshell

From the voyages of discovery to the digital age, maps have been essential to five hundred years of American history. Whether made as weapons of war or instruments of reform, as guides to settlement or tools of political strategy, maps invest information with meaning by translating it into visual form. Maps captured what people knew, but also what they thought they knew, what they hoped for, and what they feared. As a result, maps remain rich yet largely untapped sources of history. This premise animates A History of America in 100 Maps.

Organized into nine chronological chapters, the book examines both large-scale shifts and little-known stories through maps that range from the iconic to the unfamiliar. Readers will encounter maps of political conflict and exploration, but also maps made by those we rarely consider mapmakers, such as soldiers on the front, Native American tribal leaders, and the first generation of young girls to be formally educated.

The book can be read as a continuous narrative, though readers may page through the book to discover a particular image that captures their imagination. Some will be drawn to the map used by the British Crown to negotiate the boundaries of the new United States at the end of the Revolutionary War. Others will be riveted by a map drawn to study—and stem—the rampant gang behavior in Chicago during the 1920s. Each map is accompanied by a brief essay that lays out its context, establishes its significance, and connects it to the larger story. Through these maps, we gain a greater appreciation for the contingencies of the past, but also the degree to which maps were integrated into all aspects of American life.

The book is now in its third printing. What has most surprised me about this success is the wide and diverse audience it has reached—a direct function of the many strange and appealing images that I was able to share.


Kirsten Fermaglich

 

On her book A Rosenberg by Any Other Name: A History of Jewish Name Changing in America

Cover Interview of June 12, 2019

In a nutshell

Despite the prevalence of name changing in American Jewish culture, few historians have studied the actual practice of name changing in the United States. A Rosenberg by Any Other Name – the first book to explore the phenomenon – relies on research into thousands of previously unexplored name change petitions submitted to the New York City Civil Court throughout the twentieth century. Using these petitions, I argue that name changing was a distinctive American Jewish practice in the middle of the century. Although many New Yorkers of different backgrounds changed their names, Jews did so at rates far out of proportion to their numbers in the city. They also changed their names together with family members in ways that historians have not considered before.

Jews’ middle-class status helps to explain these high rates of family name changing, as does the rising antisemitism of the era. Jews reached the middle class in the United States earlier than other immigrant groups in the early twentieth century, and they sought to maintain that status through education and white-collar work. By the 1920s, however, universities and employers were developing application forms specifically designed to weed out Jewish candidates, asking questions about birthplace, religion, and, importantly, name changing. This institutionalized antisemitism formed the context for Jewish name changing in the first half of the twentieth century. Petitioners sought to erase the names that marked them as Jewish and thus exposed their families to discrimination.

The growth of the state during World War II further shaped the context within which Jews changed their names. As the government penetrated individuals’ daily lives to a greater extent, more Jews found it necessary to change their names officially to avoid discrimination and participate in the war effort.

During the war, Jewish communal groups understood name changing as a response to antisemitism, but after the war, the Jewish community became sharply divided over the phenomenon. Some Jewish leaders accused name changers of being “self-hating Jews” who were abandoning the community. A closer look at name change petitions, as well as contemporary literature, however, suggests that the majority of name changers remained members of the Jewish community, using their new names only to make it easier to work in the non-Jewish world. Jewish civil rights organizations understood this complicated balance, and defended name changers’ right to change their names as part of civil rights legislation in the 1940s.

Jews stopped changing their names in large numbers by the 1970s, and just as they did, negative representations of name changing flourished in popular culture. Misleading images of Jewish men betraying their families by changing their names, or of Ellis Island officials changing immigrant names, circulated widely in the last quarter of the twentieth century. And since 2001, new ethnic groups have been changing their names in Civil Court for very different reasons than Jews did 75 years ago. Our culture has mostly forgotten the history of Jewish name changing in the United States. My book attempts to reconstruct that story.

Sara Lodge

 

On her book Inventing Edward Lear

Cover Interview of May 30, 2019

In a nutshell

Edward Lear, the author of ‘The Owl and the Pussy-cat’ and ‘The Quangle Wangle’s Hat,’ is rightly beloved as a nonsense poet. But few people know that he was also a brilliant musician, who sang and played the piano, the flute, the accordion and the small guitar, and a composer, who published twelve beautiful settings of his friend Tennyson’s poetry. Lear was also a naturalist, whose vivid lithographs of new species of animals and birds were consulted by Charles Darwin, and a landscape painter of surpassing skill, who taught Queen Victoria to draw. My book is the first study to examine Lear fully – as a musician, a visual artist, a naturalist, and a religious dissenter – relating all of these endeavours and identities to his writing. It places Lear firmly within the social, cultural, and intellectual life of his time.

Inventing Edward Lear crystallizes insights gained over six years of research, during which I transcribed over 10,000 pages of unpublished manuscript. It contains many pictures and writings by Lear that have not been seen before. Probably my most exciting realisation was that all of Lear’s poems are really songs. I recovered music for some of his re-settings of comic words to existing tunes by Thomas Haynes Bayly and Thomas Arne. I traced songs that, as his diaries show, Lear regularly performed. And I began the process of recording, with the help of pianist David Owen Norris and various singers, the music that Lear wrote, parodied, sang and listened to throughout his long life. Readers of my book (and even those who don’t read it) can listen to these recordings at my website: edwardlearsmusic.com

Lear performed all of his nonsense poems, and many other poems by Tennyson, Swinburne, and Shelley, to music. He also had a lively repertoire of contemporary comic songs, such as ‘Tea in the Arbour’, in which a town-bred visitor takes tea with country friends and is bothered by caterpillars in his tea and spiders in the butter, gets tar on his trousers, is peppered by birdshot, and is finally caught in a man-trap! Lear must have played this Harold Lloyd-style comic role to perfection, as friends forty years on still recalled him singing it. Lear was adept at transitioning, on improvised piano, from high tragedy to breathless comedy. Once we know that songs like ‘The Courtship of the Yonghy-Bonghy-Bò’ are designed to sit uneasily – but brilliantly – between the sentimental yearning of the drawing-room ballad and the cockney wordplay of Victorian music-hall songs about foolish suitors, it becomes easier to appreciate their genius. Lear creates a feedback loop between pathos and absurdity, where sentiment always threatens to be silly, yet the absurd frequently becomes moving. He makes us laugh and cry simultaneously.

Lesley A. Sharp

 

On her book Animal Ethos: The Morality of Human-Animal Encounters in Experimental Lab Science

Cover Interview of May 15, 2019

In a nutshell

Animal Ethos is framed by efforts to unearth and decipher moral thought and action in experimental forms of laboratory science. More specifically, as an ethnographic project, it attends to the ordinary, everyday, or mundane aspects of human-animal encounters in lab research. I purposefully distinguish between bioethics—or regulatory principles (that may be codified as law) that determine what one can and cannot do experimentally—and morality, namely, the personal and private musings of lab personnel whose research and livelihoods hinge on the use of animals for furthering medico-scientific knowledge. I consider moral thought in science as an imaginative project, where unexpected conundrums may challenge one to pause and consider the limits of dominant ethical frameworks. Such reconsiderations lie at the heart of the making of oneself as a moral being, where the core questions I’ve posed to the lab personnel involved might be phrased as “how do you think of your work when you go home at the end of the day?” or, as animal activists might restate it, “how do you live with yourself, knowing what you do?” I underscore here that I am not interested in whether one is practicing ethical science but, instead, in the private, subjective (and interspecies) dimensions of ongoing, often lifetime, work in which one engages, and how this plays out in personal efforts to forge a moral sense of self against the backdrop of scientific pursuits.

My earlier ethnographic engagements in specialized realms of transplantation—as described in my works Strange Harvest (2006, University of California Press) and The Transplant Imaginary (2013, University of California Press)—taught me that, whereas lab researchers readily convey complex understandings of regulations that define “ethical research,” there exists no similarly robust lexicon for describing personal experience and sentiment. Indeed, research personnel often explained to me that morality is the purview of philosophy and religion, not science. As I slowly came to realize, though, when lab personnel talk about animals, which they do openly and often, they shift to a highly personalized, moral register. With this in mind, Animal Ethos is not a study of lab animals, but instead employs “animal talk,” so to speak, as a method for accessing how lab scientists think about the sociomoral underpinnings of what they do. As Claude Lévi-Strauss so famously proclaimed, “animals are good to think.” With this adage in mind, it is through the animal that I access moral thought and action among those whose careers rely on non-human species as essential research participants.

William L. Silber

 

On his book The Story of Silver: How the White Metal Shaped America and the Modern World

Cover Interview of May 08, 2019

In a nutshell

This book tells the story of the greatest commodities market manipulation of the 20th century—one perpetrated by the larger-than-life Nelson Bunker Hunt, at one time the richest man in the world, who ultimately went bankrupt trying to corner the silver market with his brothers, Herbert and Lamar, in the 1970s. The Hunts rode the price of silver to a record $50 an ounce in January 1980 and nearly brought down the financial markets in the process.

But the Hunt brothers were neither the first nor the last to be seduced by the white metal. In 1997 Warren Buffett, perhaps the most successful investor of the past fifty years, bought more than 100 million ounces, almost as much as the Hunts, and pushed the price of silver to a ten-year peak. In 1933 Franklin Delano Roosevelt raised the price for silver at the U.S. Treasury to mollify senators from western mining states while ignoring the help it gave Japan in subjugating China.

Was FDR’s price manipulation in the 1930s less criminal than Nelson Bunker Hunt’s in the 1970s? Reading this book will let you make an informed judgment and it will also show that the white metal has been part of the country’s political system since the founding of the Republic. Perhaps the most famous speech in American electoral politics, Nebraska Congressman William Jennings Bryan’s “Cross of Gold” sermon at the 1896 Democratic convention, was all about silver. Bryan’s cause, the resurrection of silver as a monetary metal, aimed to rectify the injustice perpetrated by Congress in the Crime of 1873, which discontinued the coinage of silver dollars that Alexander Hamilton had recommended in 1791. Thus, the story of silver spans two centuries and is woven into the fabric of history like the stars and stripes.