  Keep in mind that Eastman may not have captured the audience he had hoped for, since in 1880 the average U.S. worker—who toiled away some twelve hours a day, six or seven days a week—made a mere sixteen dollars a week. Remember, too, that the first Labor Day, celebrating the heroic factory worker, comes around this time, in 1882. With the help of the socialist organizer and politician Eugene Debs, workers were beginning to agitate for better wages and better working conditions. Only those with a bit of money could afford the camera in the first place, or afford to have the film developed later.

  Eastman the entrepreneur knew that to make his camera a real success, he would have to lower the price a good deal. He also knew he would make up the difference in sales volume. And so in 1900 he put on the market his new Brownie camera, which sold for only one dollar—still nearly half a day’s wages for most workers, but vastly more affordable than the older Kodak. By calling his camera a Brownie, Eastman harked back to the intimate connection between the camera and the spirit world, for a brownie is one of the fairies and spirits that populated the Scottish woods in the nineteenth century.

  Like almost every technological invention, however, photographs had their dark side, a fact that became much more apparent in the decades after Daguerre extended the camera’s reach into family life. As the price of portraits declined, more and more families had their pictures taken in the hundreds of studios that opened in major cities and towns. In one year alone, 1851, studios churned out over three million daguerreotypes, most of them family or group portraits. While Daguerre’s camera called people together as a fleshy group, it handed them back nothing but a cold, lifeless, odorless image of themselves. For that reason, it offered a record of, say, a family’s existence even as it reinforced that family’s feelings of ghostliness. The literary and cultural critic Susan Sontag captured both characteristics in her description, in the book On Photography, of the photograph as “a transparent account of reality.”

  Photographers instructed family members to smile as they shot their images. We know this phenomenon today as the “Say cheese” factor. Family portraits introduced into the culture families draped in smiles. One can imagine a person in nineteenth-century America seeing one of these smiling daguerreotypes and wondering about his or her own family: Was it as happy as that family in the photograph? The family portrait thus brought a new (and unobtainable) sensibility into popular culture. Could the camera have prompted Lewis Carroll to create the Cheshire Cat grin in Alice’s Adventures in Wonderland? Recall it’s the cat that turns invisible at one point, in a tree, leaving behind nothing but a broad smile that hovers, ghostlike, in the air. Perhaps the cat stands for those ghostlike smiling photographs of invisible and nameless families.

  By 1852, the camera was ubiquitous enough that the word photograph, based on its primary quality of consorting with the ethereal, took on a new connotation, referring to a mental or verbal image with such exactness of detail that it resembled an actual photograph. What an enormous turn of events: people beginning to use technology as the standard for trying to capture the quality of their own inner lives. No wonder we can talk so easily these days about the wonderful capacities of the memory of our computers without finding anything wrong with that analogy, as technology outpaces, we think, our own capacities.

  Almost every contemporary critic acknowledges that ordinary people found this triumph of technology—the photograph—both liberating and frightening, miraculous and murderous, flat-out reassuring and absolutely terrifying. While that same dichotomy holds true for every so-called technological advance, innovation, or invention, the camera exposes it with particular clarity and vividness.

  People began taking photographs of the entire world. Some of them shot more and more landscape photographs as a way of situating themselves back into their immediate surroundings. The professionals turned to other subjects. One of the earliest of them, Nadar, produced magnificent portraits of French celebrities in the 1860s, such as Victor Hugo and the elusive Sarah Bernhardt at the beginning of her career. In fact, he helped make Bernhardt into a celebrity, as thousands of people saw his photographs of her in newspapers, on postcards, and in advertisements in the back of magazines around the world. Other photographers, like Julia Margaret Cameron and Eadweard Muybridge, quickly turned their attention to the most important and problematic and elusive of all subjects, the human body. Cameron devoted herself to taking photographs of already established celebrities like Alfred Lord Tennyson, Robert Browning, and Ellen Terry.

  Muybridge, as we have seen earlier in this chapter, took a radically different path. Historian Rebecca Solnit describes Muybridge’s work in the mid-1870s, when he started photographing horses and people in motion, as “an avalanche of images of bodies, the bodies of horses, then men, then women, children . . . ” Solnit does not mean to disparage his work. On the contrary, she describes his photographs as liberating, as welcome breaks from the quotidian: “[I]t was as though he were returning bodies themselves to those who craved them—not bodies as they might daily be experienced, bodies as sensations of gravity, fatigue, strength, pleasure, but bodies become weightless images, bodies dissected and reconstructed by light and machine and fantasy.”20

  Such reconstructed bodies, fabricated anew out of light and fantasy, no longer count as bodies. We greet them for what they are—nothing more than images. As with still photographs, motion photography constituted for many people another piece of evidence, like ghost stories, of their own eradication, but this time in a more accurate way. These images showed that people existed over dozens or hundreds or even several hundreds of moments in the flux of time, but the string of photographs gave them that proof, once again, only as abstracted, ghostly evidence. Muybridge himself, in a tacit acknowledgment of the nature of photography, felt impelled, in 1870, to photograph the Modoc ghost dance, an American Indian festival performed in the West. The Modocs celebrated the day on which Native American spirits left their graves and returned to join forces with the living, in this case to help battle the white man. Of course, given Muybridge’s medium, ghosts appealed to him, though he refused to become part of the movement that exploited spectral photography.

  No less a figure than Oliver Wendell Holmes commented on this powerful drift toward disembodiment that photography made so painfully and at times so artistically apparent: “Form is henceforth divorced from matter. In fact, matter as a visible object is of no great use any longer . . . ”21 And later, on the subject of stereographic photography, a close relative of modern motion pictures, Holmes expressed an even more critical urgency about the dissolution of the corporeal. His comments yoked together, in a weird metaphysical image, the seemingly disparate ways that photography and trains take us to far-off places: “[T]he shutting out of surrounding objects, and the concentration of the whole attention, which is a consequence of this, produce a dream-like exaltation . . . in which we seem to leave the body behind us and sail away into one strange scene after another, like disembodied spirits.”22

  A more contemporary critic of photography, Alison Hennegan, in her essay “Personalities and Principles: Aspects of Literature and Life in Fin-de-Siècle England,” provides an important link between nineteenth-century images and our own period’s fascination with what semioticians call the simulacra, images that make up a good deal of our experience projected onto one screen or another:

  There was, of course, nothing new about portraiture itself. But the ability to reproduce the same image thousands of times over, the means of distributing it swiftly across enormous geographical areas, the capacity to “capture” the human model and then offer it for sale to another human being who thereafter “had” the sitter in permanent, possible form—all these were different indeed. With them begins that curious and often frightening process whereby, over the years, the “image” of public people has become almost more important, because more “real” and available than the person.

  What is the effect, we must ask here, on the staged and posed subject, neatly captured by the photographer in his or her studio? What can we trust the photographer to really tell us through photographs? How did the “photo” affect the representation of the truth? This is an especially potent question once we recognize that many photographers manipulated the negatives and played with the final image. Photography belonged to the world of technology, but it also had immediate connections with art and artifice.

  Beyond that, we need to ask, how did people react to these new feelings of disembodiment and strange powers of ownership without really owning anything more substantial than print paper? How did they respond to the increases in speed in their lives? What did they do? What, in effect, could they do? Whatever the outcomes, reactions were not confined to the educated and the elite. Feelings of discorporation reached down into the general public as well. George Miller Beard, an American physician and neurologist, developed a practice in this country in a medical specialty he called electrotherapy, in which he used electrical stimulation to treat disorders of the nervous system. In 1881, after seeing thousands of patients, he decided to publish a study on the psychological health of the average citizen, giving away his diagnosis in the title: American Nervousness: Its Causes and Consequences. He could just as easily have saddled England and parts of Western Europe with that same diagnosis.

  Beard concluded that railroad travel, telegraphy, and photography, along with severe competition and excessive brain work, compounded by the general speed of what he termed “modern civilization,” led to a new phenomenon he called “nervous collapse,” or what doctors would later call “nervous breakdown.” He introduced a new word into the medical vocabulary, neurasthenia, which tried to capture people’s “lack of nerve power,” wrought by that same modern civilization. He listed symptoms including dyspepsia, inability to concentrate, impotence, and full-on depression. In an amazing insight into the way machines get under the skin, Beard pointed out that “today a nervous man cannot take out his watch and look at it when the time for an appointment or train is near without affecting his pulse.”

  The medical profession embraced his findings so enthusiastically that it began diagnosing a new malady called Beard’s disease, which doctors defined as “unexplained exhaustion with abnormal fatigability.”23 In its initial use, in the fifteenth century, the word nervous denoted a certain strength or courage. By 1813, the word had completely reversed itself, coming to mean a weakness of resolve, a condition of severe lassitude. Wherever we might want to locate the idea of “will,” we can no longer find it, in this period, in the idea of “nerve.”

  The decade before the publication of Beard’s book, the 1870s, the years of refinement in both the nature and quality of motion photography, also saw the invention of the telephone and the phonograph—two more lethal accomplices poised to do in both time and space even further. Clocks also began to appear with a third hand to measure the passing of every second. All these inventions and innovations added to the public’s quickening descent into nervousness. Beard decried not just the speed at which people’s lives suddenly seemed to be moving, but also the swiftness with which life could be calibrated. That is, he took note of the fact that punctuality had suddenly become not just a virtue but also a measure of one’s character. Factory bosses, priests, and friends all demanded that people arrive at this job, or that prayer service, or even some seemingly casual office party, at five o’clock sharp, “on the dot,” or “on the tick,” or “on the clicker.” The new watchwords of the age: “Be on time.” “Do not be late.” Workers got “docked” for showing up to work late, or for returning from lunch break beyond the prescribed time.

  In a delightful book entitled Keeping Watch: A History of American Time, Michael O’Malley says that in the 1850s Americans observed eighty different local times. Joining the period’s fervor for reducing the disorder of reality to numbers and essential units, Britain’s Royal Society used the Greenwich Observatory, in 1848, to indicate an imaginary line called zero longitude, which passed through Greenwich, a borough of London, and which terminated at the north and south poles. We all know this line as “the prime meridian,” the baseline for measuring time, one of the most ideal innovations of the nineteenth century. An international conference held in 1884 in Washington, D.C., adopted the Greenwich meridian, which promulgated the use of standardized times in longitudinal zones around the world.

  The year before the Greenwich date of 1884, the railroad owners campaigned to eliminate those eighty local times and to standardize time within longitudinal sections. As Jack Beatty puts it in his Age of Betrayal: The Triumph of Money in America, 1865-1900, “The sun told time from Genesis to 12:01 A.M. on November 18, 1883,” the date when the railroads assumed dominion over time. “Basically,” Beatty goes on to say, “Americans took nature’s word for time: Noon arrived when the sun looked nearest to being overhead, at times that differed with locations. . . . Town clocks, to be sure, were set not by sundials but by almanacs that averaged the sun’s variations over months and years. A scattering of localities rented astronomically precise time from observatories, which wired them through Western Union.” On November 18, 1883, America’s railroads imposed standard time on the United States, dividing the country into four broad bands of longitude, each spanning fifteen degrees. The railroads accomplished an amazing thing: they could now lay claim to owning time in this country. Michael O’Malley summarizes this monumental change: “Once individuals experienced time as a relationship between God and nature. Henceforth, under the railroad standards, men and women would measure themselves in relation to a publicly defined time based on synchronized clocks.”
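  The fifteen-degree figure is, at bottom, a small piece of arithmetic worth spelling out. The earth turns through a full circle of longitude once every twenty-four hours, so each hour of clock time corresponds to a fixed slice of that circle:

\[
\frac{360^{\circ}}{24\ \text{hours}} = 15^{\circ}\ \text{per hour}, \qquad 4 \times 15^{\circ} = 60^{\circ},
\]

roughly the span of longitude covered by the contiguous United States, hence the four bands.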

  A seemingly far more minor refinement to time had come, in the last years of the eighteenth century, with the invention of the stopwatch and, in the nineteenth century, with the time clock. In reality, though, what happened here was no less monumental, for what we witness is the segmenting of time, its division into smaller and smaller units. It really started with an American born in 1856, Frederick Winslow Taylor, who co-opted the technique of temporal segmentation from the earliest motion picture technology and applied it to the workplace to create a new field, which he called “time and motion studies”; when he decided to sell his ideas to industry, he called it by its more highfalutin-sounding name, “scientific management.”

  In order to increase production and efficiency in the office and factory—and thus to increase profit margins—Taylor broke down each worker’s tasks into fundamental and discrete movements. Then he measured and timed those movements, down to the hundredth of a minute. In the end, Taylor dictated to workers the precise amount of time they should spend performing each and every routine operation. He made them further account for their time by requiring them to punch in to work in the morning and to punch out in the evening, using a new, nineteenth-century device called the time clock. In the language of the day, he had workers “clock in” and “clock out”; they got paid only for the time that they were “on the clock.” He later made every worker account for his or her lunch and coffee break. Taylor’s biographer, Robert Kanigel, says that Taylor “helped instill in us the fierce, unholy obsession with time, order, productivity, and efficiency that marks our age.” His doctrine pervades so much of American culture, Kanigel stresses, and at such a deep level, that “we no longer realize it’s there.” And that is a most dangerous state of affairs.

  Beard feared that, in a generation or two, people would no longer be able to cope with such temporal restrictions and constraints on the freedom of their lives: “The perfection of clocks and the invention of watches have something to do with modern nervousness, since they compel us to be on time, and excite the habit of looking to see the exact moment, so as not to be late for trains or appointments. . . . We are under constant strain, mostly unconscious, to get somewhere or do something at some definite moment.” His predictions proved true. By the end of the century, according to some contemporary accounts, neurasthenia had reached epidemic proportions.

  The English writer James Beresford wrote a book early in the century to which he gave the most portentous title, The Miseries of Human Life, or The Groans of Samuel Sensitive, and Timothy Testy; with a Few Supplementary Sighs from Mrs. Testy (1806). In Beresford’s world, the stopwatch dominates virtually every aspect of people’s experience. He even gave a name to the new people of the new century: “Automata—people who regulate all their thoughts, words, and actions, by the stopwatch.” The word robots might come to mind here, machines that in this period would carry out their tasks as no more than mechanical ghosts, programmed by some outside authority or force.

  Mechanization—industrialization in general—was not the main enemy threatening to strip people of all of their humanity. The rock-solid definition of human beings began to fall apart the moment the nineteenth century opened, as I have said, when the idea of the human being radically shifted, both philosophically and scientifically. Without trying to overstate the case, we might equate such a monumental rupture with the unsettling caused by the Scientific Revolution of the sixteenth and seventeenth centuries. Novelists and poets served as one guide to the problems attendant on this epistemological crisis.

  Some of the era’s literature we can read as warnings about the psychological and social dangers of such a critical dislocation, especially in a world bereft of God. It is revealing that many contemporary critics viewed Dracula, the notorious Count Dracul, as the most seriously religious character in Bram Stoker’s novel. Other literature, like Frankenstein, “The Invisible Man,” and Goethe’s Faust, impeached science for its relentless and distorted search for the command of all nature and the source of human life. Doctor Frankenstein may find the key to life, but in the process he loses his own soul, his creation merely a reflection of his own monstrous, disfigured urge toward power and absolute control. In trying to find the secret of life, of course, he wants to rival the power of God.