
Sex with the Devil

  1. Why did past experiences make it so easy for early European settlers to believe in the existence of monsters in the New World?
  2. According to the New World settlers and explorers, how were the native peoples they encountered connected to monsters—and thus to Satan?
  3. During the Salem witch trials, what were the signs that a woman was a witch? What was the significance of Thomas Brattle’s argument against the trials?
  4. How does Poole make connections between religion, European history, and the demonization of native peoples by Europeans? Do you find his argument about these connections convincing, or has he left out other significant influences? Support your answer with specific details and examples.
  5. What is your reaction to Poole’s descriptions of the interactions between European settlers and native peoples and to the perceptions and treatment of the native peoples by the settlers?
  6. Examine how the sexuality of native peoples became a weapon for European settlers to use against them. What role does sex play in justifying the perception that Native Americans were monsters? What were some of the consequences for the native peoples of the Europeans’ perceptions about their sexuality?

Get Ready for the Dawn of Superintelligence

Nick Bostrom

Computer scientists across the world are hard at work trying to create and improve artificial intelligence (AI). This quest, however, brings with it the fear that we may produce computers so powerful that they will seek to replace us, a fear that has already entered the public imagination through movies such as Terminator and The Matrix. Nick Bostrom examines what seems to be inevitable: that at some point in the future, superintelligent machines will be a reality. How the human race fares then will depend on how we have designed these new machines. Bostrom is a professor of philosophy at the University of Oxford and the director of the Future of Humanity Institute. He has delivered his message in both academic and popular media outlets, with appearances on TED Talks and the publication of his book Superintelligence: Paths, Dangers, Strategies (2014). This article appeared in the July 2014 issue of New Scientist.

Humans have never encountered a more intelligent life form, but this will change if we create machines that greatly surpass our cognitive abilities. Then our fate will depend on the will of such a “superintelligence,” much as the fate of gorillas today depends more on what we do than on gorillas themselves. We therefore have reason to be curious about what these superintelligences will want. Is there a way to engineer their motivation systems so that their preferences will coincide with ours? And supposing a superintelligence starts out human-friendly, is there some way to guarantee that it will remain benevolent even as it creates ever more capable successor-versions of itself? These questions—which are perhaps the most momentous that our species will ever confront—call for a new science of advanced artificial agents. Most of the work answering these questions remains to be done, yet over the last 10 years, a group of mathematicians, philosophers and computer scientists have begun to make progress.
As I explain in my new book Superintelligence: Paths, Dangers, Strategies, the findings are at once disturbing and deeply fascinating. We can see, in outline, that preparation for the machine intelligence transition is the essential task of our time. But let us take a step back and consider why machines with high levels of general intelligence would be such a big deal.

By a superintelligence I mean any intellect that greatly exceeds the cognitive performance of humans in virtually all domains. Plainly, none of our current artificial intelligence (AI) programs meets this criterion. All compare unfavorably in most respects, even to a mouse.

So we are not talking about present or near-future systems. Nobody knows how long it will take to develop machine intelligence that matches humans in general learning and reasoning ability. It seems plausible that it might take a number of decades. But once AIs do reach and then surpass this level, they may quickly soar to radically superintelligent levels. After AI scientists become more capable than human scientists, research in artificial intelligence would be carried out by machines operating at digital timescales, and progress would be correspondingly rapid. There is thus the potential for an intelligence explosion, in which we go from there being no computer that exceeds human intelligence to machine superintelligence that enormously outperforms all biological intelligence.

The first AI system to undergo such an intelligence explosion could then become extremely powerful. It would be the only superintelligence in the world, capable of developing a host of other technologies very quickly, such as nanomolecular robotics, and using them to shape the future of life according to its preferences.

We can distinguish three forms of superintelligence. A speed superintelligence could do everything a human mind could do, but much faster.
An intelligent system that runs 10,000 times faster than a human mind would be able to read a book in a few seconds and complete a PhD thesis in an afternoon. To such a fast mind, the external world would appear to run in slow motion.

A collective superintelligence is a system composed of a large number of human-level intellects organized so that the system’s performance as a whole vastly outstrips that of any current cognitive system. A human-level mind running as software on a computer could easily be copied and run on multiple computers. If each copy was valuable enough to repay the cost of hardware and electricity, a massive population boom could result. In a world with trillions of these intelligences, technological progress may be much faster than it is today, since there could be thousands of times more scientists and inventors.

Finally, a quality superintelligence would be one that is at least as fast as a human mind and vastly qualitatively smarter. This is a more difficult notion to comprehend. The idea is that there might be intellects that are cleverer than humans in the same sense that we are cleverer than other animals. In terms of raw computational power, a human brain may not be superior to, say, the brain of a sperm whale, possessor of the largest known brain, weighing in at 7.8 kilograms compared to 1.5 kg for an average human. And, of course, the non-human animal’s brain is nicely suited to its ecological needs. Yet the human brain has a facility for abstract thinking, complex linguistic representations and long-range planning that enables us to do science, technology and engineering more successfully than other species. But there is no reason to suppose that ours are the smartest possible brains. Rather, we may be the stupidest possible biological species capable of starting a technological civilization. We filled that niche because we got there first—not because we are in any sense optimally adapted to it.
These different types of superintelligence may have different strengths and weaknesses. For example, a collective superintelligence would excel at problems that can be readily subdivided into independent subproblems, whereas a quality superintelligence may have an advantage on problems that require new conceptual insights or complexly coordinated deliberation. The indirect reaches of these different kinds of superintelligence, however, are identical. Provided the first iteration is competent in scientific research, it is likely to quickly become a fully general superintelligence. That’s because it would be able to complete the computer or cognitive science research and software engineering needed to build for itself any cognitive faculty it lacked at the outset.

Once developed to this level, machine brains would have many fundamental advantages over biological brains, just as engines have advantages over biological muscles. When it comes to the hardware, these include vastly greater numbers of processing elements, faster frequency of operation of those elements, much faster internal communication and superior storage capacity. Advantages in software are harder to quantify, but they may be equally important. Consider, for example, copyability. It is easy to make an exact copy of a piece of software, whereas “copying” a human is a slow process that fails to carry over to the offspring the skills and knowledge that its parents acquired during their lifetimes. It is also much easier to edit the code of a digital mind: this makes it possible to experiment and to develop improved mental architectures and algorithms. We are able to edit the details of the synaptic connections in our brains—this is what we call learning—but we cannot alter the general principles on which our neural networks operate.

We cannot hope to compete with such machine brains. We can only hope to design them so that their goals coincide with ours. Figuring out how to do that is a formidable problem. It is not clear whether we will succeed in solving that problem before somebody succeeds in building a superintelligence. But the fate of humanity may depend on solving these two problems in the correct order.

  1. What point does Bostrom make in his comparison of humans and gorillas (par. 1)? Why do you suppose he chose gorillas?
  2. What are the risks involved in creating superintelligence, according to Bostrom?
  3. Describe the three forms of superintelligence as Bostrom presents them.
  4. Bostrom is a professor at the University of Oxford who is well versed in writing for other experts in academia. This work, however, is written for members of the general magazine-reading public who are interested in science. How does the issue of audience affect how Bostrom has written this article? Can you think of ways the article might have been changed if written for a more expert audience? How might it have been changed if written for a less educated audience?
  5. In your opinion, are the dangers Bostrom describes of creating artificial intelligence (AI) sufficient to warrant fear? Support your answer with specific details.
  6. Besides AI, what other research into technological advances being done today potentially poses a threat? Why? Be specific.
  7. What attitude toward science and progress do you find in Bostrom’s article? What contradictions, if any, do you find? Do you share that attitude? Why or why not?
  8. Numerous movies, novels, and other creative works, such as Terminator, The Matrix, and I, Robot, explore the theme of AI run amok. Focus on one such creative work and argue how it develops its vision of the future in light of today’s science and research. Consider whether the creators of this work place humankind in a position to save itself, and if so, how? Ultimately, do you find such fears convincing or not?
  9. Fear of scientific advancement has a long history—consider Hitchcock’s argument that experiments with electricity influenced Shelley’s creation of the Frankenstein story, for example (See “Conception”). Research the facts about AI, the progress scientists have made in its development, its perceived uses, and its alleged dangers. Argue whether this area of scientific research poses a significant danger to humanity or not. Be specific.

Robbie

Isaac Asimov

Isaac Asimov was one of the most prolific and best-known science fiction writers of the twentieth century. He was a professor of biochemistry at Boston University, but he was best known for his novels and short stories that explored humanity in the future. “Robbie” appears in I, Robot, a collection of short stories originally published in 1950. In the story, an ordinary human family has—and then gets rid of—a household robot named Robbie. However, Gloria, the daughter of George and Grace Weston, still misses Robbie, whom she considers a friend. In an attempt to get Gloria to realize that robots are machines and not people, her parents take her to a factory where robots work, but an unforeseen event changes everything.

The Talking Robot was a tour de force, a thoroughly impractical device, possessing publicity value only. Once an hour, an escorted group stood before it and asked questions of the robot engineer in charge in careful whispers. Those the engineer decided were suitable for the robot’s circuits were transmitted to the Talking Robot.

It was rather dull. It may be nice to know that the square of fourteen is one hundred ninety-six, that the temperature at the moment is 72 degrees Fahrenheit, and the air-pressure 30.02 inches of mercury, that the atomic weight of sodium is 23, but one doesn’t really need a robot for that. One especially does not need an unwieldy, totally immobile mass of wires and coils spreading over twenty-five square yards.

Few people bothered to return for a second helping, but one girl in her middle teens sat quietly on a bench waiting for a third. She was the only one in the room when Gloria entered.

Gloria did not look at her. To her at the moment, another human being was but an inconsiderable item. She saved her attention for this large thing with the wheels. For a moment, she hesitated in dismay. It didn’t look like any robot she had ever seen.

Cautiously and doubtfully she raised her treble voice, “Please, Mr.
Robot, sir, are you the Talking Robot, sir?” She wasn’t sure, but it seemed to her that a robot that actually talked was worth a great deal of politeness.

(The girl in her mid-teens allowed a look of intense concentration to cross her thin, plain face. She whipped out a small notebook and began writing in rapid pot-hooks.)

There was an oily whir of gears and a mechanically timbred voice boomed out in words that lacked accent and intonation, “I—am—the—robot—that—talks.”

Gloria stared at it ruefully. It did talk, but the sound came from inside somewheres. There was no face to talk to. She said, “Can you help me, Mr. Robot, sir?”

The Talking Robot was designed to answer questions, and only such questions as it could answer had ever been put to it. It was quite confident of its ability, therefore, “I—can—help—you.”

“Thank you, Mr. Robot, sir. Have you seen Robbie?”

“Who—is Robbie?”

“He’s a robot, Mr. Robot, sir.” She stretched to tiptoes. “He’s about so high, Mr. Robot, sir, only higher, and he’s very nice. He’s got a head, you know. I mean you haven’t, but he has, Mr. Robot, sir.”

The Talking Robot had been left behind, “A—robot?”

“Yes, Mr. Robot, sir. A robot just like you, except he can’t talk, of course, and—looks like a real person.”

“A—robot—like—me?”

“Yes, Mr. Robot, sir.”

To which the Talking Robot’s only response was an erratic splutter and an occasional incoherent sound. The radical generalization offered it, i.e., its existence, not as a particular object, but as a member of a general group, was too much for it. Loyally, it tried to encompass the concept and half a dozen coils burnt out. Little warning signals were buzzing.

(The girl in her mid-teens left at that point. She had enough for her Physics-1 paper on “Practical Aspects of Robotics.” This paper was Susan Calvin’s first of many on the subject.)
Gloria stood waiting, with carefully concealed impatience, for the machine’s answer when she heard the cry behind her of “There she is,” and recognized that cry as her mother’s.

“What are you doing here, you bad girl?” cried Mrs. Weston, anxiety dissolving at once into anger. “Do you know you frightened your mamma and daddy almost to death? Why did you run away?”

The robot engineer had also dashed in, tearing his hair, and demanding who of the gathering crowd had tampered with the machine. “Can’t anybody read signs?” he yelled. “You’re not allowed in here without an attendant.”

Gloria raised her grieved voice over the din, “I only came to see the Talking Robot, Mamma. I thought he might know where Robbie was because they’re both robots.” And then, as the thought of Robbie was suddenly brought forcefully home to her, she burst into a sudden storm of tears, “And I got to find Robbie, Mamma. I got to.”

Mrs. Weston strangled a cry, and said, “Oh, good Heavens. Come home, George. This is more than I can stand.”

That evening, George Weston left for several hours, and the next morning, he approached his wife with something that looked suspiciously like smug complacence.

“I’ve got an idea, Grace.”

“About what?” was the gloomy, uninterested query.

“About Gloria.”

“You’re not going to suggest buying back that robot?”

“No, of course not.”

“Then go ahead. I might as well listen to you. Nothing I’ve done seems to have done any good.”

“All right. Here’s what I’ve been thinking. The whole trouble with Gloria is that she thinks of Robbie as a person and not as a machine. Naturally, she can’t forget him. Now if we managed to convince her that Robbie was nothing more than a mess of steel and copper in the form of sheets and wires with electricity its juice of life, how long would her longings last?
It’s the psychological attack, if you see my point.”

“How do you plan to do it?”

“Simple. Where do you suppose I went last night? I persuaded Robertson of U.S. Robots and Mechanical Men, Inc. to arrange for a complete tour of his premises tomorrow. The three of us will go, and by the time we’re through, Gloria will have it drilled into her that a robot is not alive.”

Mrs. Weston’s eyes widened gradually and something glinted in her eyes that was quite like sudden admiration, “Why, George, that’s a good idea.”

And George Weston’s vest buttons strained. “Only kind I have,” he said.

• • •

Mr. Struthers was a conscientious General Manager and naturally inclined to be a bit talkative. The combination, therefore, resulted in a tour that was fully explained, perhaps even over-abundantly explained, at every step. However, Mrs. Weston was not bored. Indeed, she stopped him several times and begged him to repeat his statements in simpler language so that Gloria might understand. Under the influence of this appreciation of his narrative powers, Mr. Struthers expanded genially and became ever more communicative, if possible.

George Weston, himself, showed a gathering impatience. “Pardon me, Struthers,” he said, breaking into the middle of a lecture on the photo-electric cell, “haven’t you a section of the factory where only robot labor is employed?”

“Eh? Oh, yes! Yes, indeed!” He smiled at Mrs. Weston. “A vicious circle in a way, robots creating more robots. Of course, we are not making a general practice out of it. For one thing, the unions would never let us. But we can turn out a very few robots using robot labor exclusively, merely as a sort of scientific experiment.
You see,” he tapped his pince-nez into one palm argumentatively, “what the labor unions don’t realize—and I say this as a man who has always been very sympathetic with the labor movement in general—is that the advent of the robot, while involving some dislocation to begin with, will inevitably—”

“Yes, Struthers,” said Weston, “but about that section of the factory you speak of—may we see it? It would be very interesting, I’m sure.”

“Yes! Yes, of course!” Mr. Struthers replaced his pince-nez in one convulsive movement and gave vent to a soft cough of discomfiture. “Follow me, please.”

He was comparatively quiet while leading the three through a long corridor and down a flight of stairs. Then, when they had entered a large well-lit room that buzzed with metallic activity, the sluices opened and the flood of explanation poured forth again.

“There you are!” he said with pride in his voice. “Robots only! Five men act as overseers and they don’t even stay in this room. In five years, that is, since we began this project, not a single accident has occurred. Of course, the robots here assembled are comparatively simple, but . . .”

The General Manager’s voice had long died to a rather soothing murmur in Gloria’s ears. The whole trip seemed rather dull and pointless to her, though there were many robots in sight. None were even remotely like Robbie, though, and she surveyed them with open contempt.

In this room, there weren’t any people at all, she noticed. Then her eyes fell upon six or seven robots busily engaged at a round table halfway across the room. They widened in incredulous surprise. It was a big room. She couldn’t see for sure, but one of the robots looked like—looked like—it was!

“Robbie!” Her shriek pierced the air, and one of the robots about the table faltered and dropped the tool he was holding. Gloria went almost mad with joy.
Squeezing through the railing before either parent could stop her, she dropped lightly to the floor a few feet below, and ran toward her Robbie, arms waving and hair flying.

And the three horrified adults, as they stood frozen in their tracks, saw what the excited little girl did not see—a huge, lumbering tractor bearing blindly down upon its appointed track.

It took split-seconds for Weston to come to his senses, and those split-seconds meant everything, for Gloria could not be overtaken. Although Weston vaulted the railing in a wild attempt, it was obviously hopeless. Mr. Struthers signalled wildly to the overseers to stop the tractor, but the overseers were only human and it took time to act. It was only Robbie that acted immediately and with precision.

With metal legs eating up the space between himself and his little mistress he charged down from the opposite direction. Everything then happened at once. With one sweep of an arm, Robbie snatched up Gloria, slackening his speed not one iota, and, consequently, knocking every breath of air out of her. Weston, not quite comprehending all that was happening, felt, rather than saw, Robbie brush past him, and came to a sudden bewildered halt. The tractor intersected Gloria’s path half a second after Robbie had, rolled on ten feet further and came to a grinding, long drawn-out stop.

Gloria regained her breath, submitted to a series of passionate hugs on the part of both her parents and turned eagerly toward Robbie. As far as she was concerned, nothing had happened except that she had found her friend. But Mrs. Weston’s expression had changed from one of relief to one of dark suspicion. She turned to her husband, and, despite her disheveled and undignified appearance, managed to look quite formidable, “You engineered this, didn’t you?”

George Weston swabbed at a hot forehead with his handkerchief. His hand was unsteady, and his lips could curve only into a tremulous and exceedingly weak smile. Mrs.
Weston pursued the thought, “Robbie wasn’t designed for engineering or construction work. He couldn’t be of any use to them. You had him placed there deliberately so that Gloria would find him. You know you did.”

“Well, I did,” said Weston. “But, Grace, how was I to know the reunion would be so violent? And Robbie has saved her life; you’ll have to admit that. You can’t send him away again.”

Grace Weston considered. She turned toward Gloria and Robbie and watched them abstractedly for a moment. Gloria had a grip about the robot’s neck that would have asphyxiated any creature but one of metal, and was prattling nonsense in half-hysterical frenzy. Robbie’s chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.

“Well,” said Mrs. Weston, at last, “I guess he can stay with us until he rusts.”

[Illustration: Robbie the robot swings Gloria in the front yard of the Westons’ home. Illustration by Mark Zug.]

  1. Why is Gloria frustrated when speaking with the Talking Robot? What is the frustration for the Talking Robot?
  2. During the tour, Mr. Struthers, the tour guide, mentions one problem with employing robot labor. In what ways does this problem foreshadow changes in the workplace that have come to pass in real life?
  3. How is it that Robbie is present to save Gloria’s life?
  4. How does this work of science fiction reflect conditions in real life? Explain.
  5. How does Asimov’s tale, written in 1950, reflect some of the fears and anxieties of our time as well as Asimov’s time? Consider what roles robots play in today’s world and how close they may or may not be to being considered new life-forms.
  6. How does Asimov integrate the familiar and the unfamiliar in his story? To what extent does that make it easier to accept the message of the story?

Making Connections
  7. Compare attitudes toward scientific progress in Asimov’s story to Bostrom’s article (See “Get Ready for the Dawn of Superintelligence”). What are their similarities and differences? Which work do you find more persuasive? Does the fact that Asimov’s story is a work of fiction affect its credibility?
  8. Robots have long been a staple of science fiction. Find another work of science fiction that features robots and compare that portrayal of robots to Asimov’s portrayal of Robbie. How are they alike, how are they different, and what do the different portrayals say about competing visions of how technology and humanity will work together in the future?

Here Be Monsters

Ted Genoways

Ted Genoways, former editor of the prestigious Virginia Quarterly Review, explores the connections between the monsters of the past and those of the present in an editorial introduction. In the early days of exploration, the unknown regions were thought to be populated with strange and dangerous creatures. To reflect that thinking, maps included the warning “Here Be Monsters.” Today, although we’ve charted the planet, we still find monsters. Sometimes the monster is of our own creation, such as the threat of the nuclear age. Other times the monster is a real enemy, such as Adolf Hitler or Al Qaeda. How we react to the monster, real or imagined, says a lot about who we are. Genoways is the author of The Chain: Farm, Factory, and the Fate of Our Food (2014) and is currently editor-at-large for onEarth, an online publication of the Natural Resources Defense Council. He received a Guggenheim Fellowship in the Humanities in 2010.

On old nautical maps, cartographers inscribed uncharted regions with the legend “Here Be Monsters.” Sometimes they would draw pictures of these fanciful beasts rising from the waters, and occasionally would even show them devouring wayward ships. This fear of the unknown, of that future that lies just past the horizon, has been with us always. To contain and put a face to it, our imagination has conjured everything from leviathans of the deep to beasts part-human and part-animal to a woman with snakes for hair and a gaze that turns men to stone. Imagining what we cannot truly imagine, we brace ourselves for the worst.

In the pages of this magazine in 1939, as the United States teetered on the brink of entering World War II, Eleanor Roosevelt reflected on this very subject. By then, however, we had monsters of a different sort: space aliens.
Discussing the public panic that occurred after Orson Welles’s famous broadcast of War of the Worlds, Roosevelt wrote:

[T]hese invaders were supernatural beings from another planet who straddled the skyway and dealt in death rays. . . . A sane people, living in an atmosphere of fearlessness, does not suddenly become hysterical at the threat of invasion, even from more credible sources, let alone by the Martians from another planet, but we have allowed ourselves to be fed on propaganda which has created a fear complex.

Even after we defeated the Nazis and the Axis powers, the new technology that ended the war also brought new anxieties. At the dawn of the nuclear age and the space age, we grappled with these fears—similar in many ways to our old ones, but arriving now from more infinite shores. Splitting the atom awoke the public to a universe almost too small for comprehension and aroused the fear that tampering with such elemental forces of nature might stir unknown monsters or, through the horrors of radiation, transform us into monsters ourselves. Likewise, propelling astronauts beyond the reaches of our own atmosphere seemed to heighten the possibility of alien encounters. And whenever we imagined the motives of these alien visitors, we again pictured the worst. They wanted earth women for breeding or men as slaves. Or, worse yet, they just wanted us for food.

. . . George Garrett reflects on his loopy and ill-fated role in writing one of these pictures. (In Frankenstein Meets the Space Monster [1965] the aliens aren’t just after earth women; they’re singling out go-go dancers!) These movies feel like high camp to us today, a kind of kitsch that seems trapped in time, but what held thousands of viewers at drive-ins across America in thrall? Surely, it didn’t feel safe and distant then. It must have had something to do with deep-seated anxieties about the future of our own planet, about our place in an uncompromising universe.
Or even new parts of the world we thought we knew. Steve Ryfle, in his essay on Godzilla, reveals that the original 1954 Japanese version of the film—before the bad overdubbing and the cheeseball scenes with Raymond Burr inserted—was an overt commentary on the dangers we pose to ourselves in the nuclear age. The film’s central figure, a scientist, has developed a weapon more terrible than the bomb and faces the dilemma of whether or not to use it against the monster awoken from the ocean floor by an atomic test. If we unleash this weapon, won’t it only lead to another? Won’t every new unknown be more horrific than the last?

Today we must grapple with the reality of these problems more than ever before. The unknown evil, in this case, will not turn out to be a stuntman in a rubber suit. In this one way, we can all agree: those who mean to do us harm are real and they are among us. Now the President of the United States must decide how to defend us without purveying fear and its conjoined twin, hatred. The evil intentions of Al Qaeda are not in doubt, any more than the evil intended—and carried out—by the Nazis was evident. And yet, it is not a simple matter of out-muscling a weaker foe. As Eleanor Roosevelt concluded:

It is not only physical courage which we need, the kind of physical courage which in the face of danger can at least control the outward evidences of fear. It is moral courage as well, the courage which can make up its mind whether it thinks something is right or wrong, make a material or personal sacrifice if necessary, and take the consequences which may come.

If we do not hew to this standard, if we give in to our fear, we face the real possibility of the permanent loss of liberty. In the wake of the tragic school massacre in Beslan [in 2004], Russian President Vladimir Putin unveiled sweeping governmental reforms in the name of increased security. Stephen Boykewich, a Fulbright scholar in Moscow, writes . . . about the aftermath and impact.
Succumbing to their fear, most Russians have chosen to allow Putin whatever control he desires. When [Secretary of State] Colin Powell expressed concern over these changes and suggested that Putin should instead seek a peaceful resolution with the Chechen separatists, Putin angrily replied, “Why don’t you meet Osama Bin Laden, invite him to Brussels or to the White House and engage in talks, ask him what he wants and give it to him so he leaves you in peace?”

Obviously, this is impossible; nevertheless, we must resolve to find new ways to reach out to the world community, to be seen as a strong and benevolent power again, not simply a lion with a thorn in its foot. If we cannot right ourselves, regain our focus, and steady our nerves, we will be forever jumping at shadows and strong-arming those whom we perceive as threats. We will retreat further from our fellow travelers on this lonely planet, and everywhere we look, we will see monsters.

Understanding the Text

  1. What does Eleanor Roosevelt suggest was the cause of the fears sparked by Orson Welles’s War of the Worlds broadcast?
  2. What does Genoways mean when he says that at the dawn of the nuclear age, our fears were “arriving now from more infinite shores” (par. 3)? Why should this be so if by that time the earth itself was completely charted? What does that suggest about how people’s fears had changed?
  3. What is the main conflict, according to Steve Ryfle, in the original Godzilla movie? How does it reflect the time period in which it was made?
  4. In his closing paragraph, Genoways refers to a lion with a thorn in its foot. Research this reference if it is not familiar to you. What is it about? Why did Genoways choose this allusion to conclude his article?
  5. Genoways lists a variety of monsters of different times, from the monsters that appeared on ancient maps in places that were uncharted, to the space aliens who threatened the United States prior to World War II, to Godzilla in the postwar world. What are the prominent monsters today, and how do they reflect our current fears and anxieties?
  6. When Genoways says that the president of the United States must respond to threats “without purveying fear and its conjoined twin, hatred” (par. 5), what attitude does this reflect toward the real-life monsters that threaten us? Is such an attitude realistic? Support your response with specific examples and reasons.
  7. Genoways writes that in the post-9/11 world, our responses toward those who mean us harm will in many ways determine who we are: “If we give in to our fear, we face the real possibility of the permanent loss of liberty” (par. 5). Do some research on legal changes in the United States in regard to freedom, privacy rights, laws pertaining to search and seizure, and other areas. Have we become a nation that sees monsters all around us and so have given up liberty for security? Or have we avoided the trap that Genoways warns us about? Explain your answer.
  8. Pick an era in America’s past and research the monsters that figured prominently in the culture at that time, whether in literature, film, television, or another medium. Analyze how the culture is reflected in those monsters.