Monday, April 30, 2007


American Sex:
Rethinking the Intention of Sex Education

Sex education has been, and continues to be, an issue of much debate. The moral implications that sexual intercourse carries are numerous. Should one be married before committing to sex? What about unintended pregnancy? Sexually transmitted diseases? These are a few of the many concerns parents, teachers, friends, and lovers have when faced with human sexuality. The belief that sex education provides an impetus for premarital sex has largely dictated public sex education. Thus, educators can be pressured to avoid culturally “taboo” topics. This has stifled objective, scientific discussion of human sexuality, which in turn constrains broader cultural sexual beliefs and practices. Through examination of previous sex education policy that arose from public health concerns, an assessment of conservative and liberal beliefs regarding sex education, and an understanding of the strong and broad influence that education has on culture, one sees that the American sex education system is ineffective.

Despite the constitutional notion of a separation between church and state, the United States has been largely built around traditional Judeo-Christian ethics and morals. The union of marriage between one man and one woman is one of the highest attainments under God. The book of Hebrews in the New Testament says, “Let marriage be honored in every way and the marriage bed be kept undefiled, for God will judge fornicators and adulterers” (Heb. 13:4). In the Hebrew Bible (the Old Testament to Christians) there is an entire chapter (eighteen in the book of Leviticus) devoted to improper sexuality. These divine instructions include the forbiddance of various arrangements of incestuous sex.

What is most interesting is that nothing in Leviticus, or any other book of the Bible, specifically addresses premarital sex. Instead, one usually finds broad phrases such as “sexual immorality” or the specific mention of adultery, which Jesus makes in chapter five of Matthew. Much of the focus on sexual purity arises from the two birth narratives in Matthew and Luke, which describe Jesus’ mother, Mary, as a virgin. The reverence Mary had among followers in the early Church, and still does in the Catholic and Orthodox churches, articulated an unwritten moral goal for young women. By remaining a virgin, one could aspire to the notions of purity held by the Virgin Birth narratives. Any disruption of this ideal became antithetical to the desire of God.

In his book Teaching Sex: The Shaping of Adolescence in the 20th Century, Jeffrey P. Moran writes of how underlying religious views toward sex affected sex education:

…most Americans persisted in viewing adolescent sexuality—when they considered it at all—as an aberration and a moral failure…[sex educator’s] message focused on preventing disease and immorality rather than on preparing for sexual maturity. (99)

If the cultural ideal was for individuals to avoid sex before marriage, it made no sense to approach sex education in an open, objective format. Why would a pious Christian who was also a “sex educator” encourage what he saw as immoral behavior? Instead, sex education was introduced into the country through the need to alleviate social health concerns.

In the book Sexuality Education Across Cultures: Working with Differences, Janice M. Irvine writes:

Sexuality education has its roots in the social hygiene movements of the late nineteenth and early twentieth century. These initiatives were organized around specific problems that included the eradication of what were then called venereal diseases. Early hygiene education was largely didactic, and practitioners focused on giving information about diseases and how to prevent them. (125)

Irvine is correct in her appraisal. The Surgeon General under the Franklin Roosevelt administration, Thomas Parran, had to contend with a syphilis problem in the United States. Through a provision in the Social Security Act and the 1938 National Venereal Disease Control Act, Parran’s Public Health Service funneled millions of dollars to state boards of health to aid in syphilis prevention (Moran 115). Parran was not pleased with governmental timidity and excessive moralism (Moran 115). In fact, Parran was more open to approaching the problem of syphilis through education. Moran continues in Teaching Sex: “Parran…called for medical experts in public health to commence a new crusade against syphilis that would frankly confront the disease as a medical matter and not a moral failure” (115). Such a shift in policy motivation, however, would not bring immediate change.

AVERT, an international AIDS charity, reports that syphilis diagnoses reached their highest point in 1946, a few years after the federal government began keeping track of such numbers. In 1946, there were 70.9 cases per 100,000 of the population. The rate would fall to 2.1 per 100,000 in 2000 (AVERT). Although Parran provided an impetus for medical objectivity at the federal level, it would take years of organization, education, and medical advancement to decrease the number of Americans diagnosed with syphilis. However, the lack of government-sponsored health education programs was perhaps deadliest in the 1980s.

The election of Ronald Reagan, a social conservative, in 1980 heralded an opportunity for other conservatives to overcome the liberal sexuality they opposed. Jeffrey P. Moran writes of the effect the conservative movement had on the “morality” of the country:

In 1981 Congress passed the Adolescent Family Life Act (AFLA), which quickly came to be known as the Chastity Act…AFLA denied funds to most programs or projects that provided abortions or abortion counseling, and AFLA mandated abstinence education and units promoting “self-discipline and responsibility in human sexuality” in the sex education programs it did fund. (204)

The social conservative tone of the Reagan years was not responsive to the AIDS epidemic in the United States. This is one of the principal and most vigorous criticisms of Reagan and his social policy. In his article “Why Critics are Still Mad as Hell,” which appeared in U.S. News & World Report just after President Reagan’s death, Dan Gilgoff writes:

AIDS activists were among Reagan's most outspoken critics, printing posters that featured the president's mug shot and the tag line "AIDSGATE." His detractors say he didn't spend nearly enough on AIDS research; Reagan didn't publicly utter the term "AIDS" until his second term, even as the disease killed thousands of Americans in the early '80s. (“Why Critics are Still Mad as Hell”)

The article’s most scathing criticism comes from AIDS activist Larry Kramer: “It’s incomprehensible that such hideous inaction hasn’t put him in any disrepute…He’s being buried as a saint when in fact he was a gigantic sinner.” Gilgoff also quotes Hitchens, who notes that British Prime Minister Margaret Thatcher launched a massive AIDS public-education campaign in 1987, “making Reagan’s inaction triply disgraceful and obviously deliberate. It wasn’t that he wasn’t paying attention; it was that he didn’t want to go there” (“Why Critics are Still Mad as Hell”).

Kramer seems to have a valid point. President Reagan consistently cut the funds allocated by Congress for the fight against AIDS, and in 1991 the Bush administration (George Bush was Reagan’s vice president) cancelled the government-sponsored American Teenage Study, which sought to gather information about teenage sexual behavior and possible approaches to preventing STDs (Moran 208).

Social conservatives argue that sex education inspires immoral desires that clash with local community codes of morality. In Sexual Ideology and Schooling: Towards Democratic Sexuality Education, Alexander McKay writes that many conservative notions of democracy hold the “…view that the values and traditions of a particular community can rightly be promoted to contrast to the values or social traditions of the larger society” (115). A government mandate superseding local norms and mores creates tension within local communities.

Another objection conservatives have to sex education is that it promotes the immoral aspect of sexuality, a belief derived from biblical texts and corresponding religious beliefs. Many of these conservatives saw the liberal sexual behavior that began with the female contraceptive pill in the 1950s, continued through the sexual revolution of the 1960s, and culminated in the Roe v. Wade decision of the 1970s as evidence that American sexual morals were dramatically off-course. The dominant image remained that of a sex education course that encouraged students to engage in sexual behavior (Moran 218).

Liberal sex educators hold the opposite belief. To them, sex education does not go far enough. In Teaching Sex, Moran cites a statistic that illustrates the concern liberal educators had about the education system: from the 1970s onward, fewer than 10 percent of high school students received comprehensive, value-neutral sexuality education (Moran 218). Liberal thinkers believed that if education were strengthened and expanded, it would likely minimize the ill effects of the country’s growing openness to sex. In addition, many modern liberals want the fear born of religious-based morality to be removed, enabling a more open and objective discussion of human sexuality. Alan Harris writes in the article “What does ‘Sex Education’ Mean?” that, “…it is high time we adopted a wholly positive approach to sex education, instead of grudgingly throwing a few titbits of information in an atmosphere of moral gloom” (22).

Conservatives and liberals, in this broad generalization of sex education outlooks, could not be further apart. Each side feels that it possesses the true “common sense” needed to dictate the sex education policy of the United States. However, it appears that neither the conservative nor the liberal camp is completely correct.

A 1967 study aimed to address the opinion that sex education increases one’s likelihood to engage in sex. The college-aged participants were asked about any previous sex education and questioned about their sexual behavior. The authors of the article, Gerald H. Weichman and Altis L. Ellis, summarize the results of the study:

Those college students in the sample exposed to “sex education” content prior to college were found no more likely or less likely to have experienced premarital petting or premarital coitus than those without such exposure…Therefore, any promotional or inhibitory effect “sex education” content exposure may have had upon premarital petting or coital experience did not become apparent in the data analyzed. (268)

From this study one infers that sex education, per se, is not a factor that operates in a significant way to influence premarital sex (Weichman and Ellis 268). These results are not uncommon.
In Teaching Sex, Jeffrey P. Moran writes of the numerous studies done to examine both conservative and liberal claims regarding sex education:

Various studies from the 1950’s onward have determined that students who complete a sex education course invariably know more sexual facts than students who have not…But none of the dozens of studies by sociologists, psychologists, and educators has discovered that sex education has a significant effect in either direction on adolescent rates of intercourse, use of contraception, and rates of unwanted pregnancies and births. (219)

Sex education is only a small factor in determining sexual behavior. The first, and most notable, surveyor of sexuality was Alfred Kinsey.

In the 1950s, Kinsey both shocked and intrigued the country with his scientific analysis of American sex and sexuality. He found at mid-century that American sexual patterns differed according to gender, class status, race, educational attainment, religion, decade of birth, age at puberty, and geographical location (Moran 222). Education matters only slightly when compared to the many other determinants of one’s sexuality. How can time spent in a limited sex education curriculum make a “dent,” so to speak, in these numerous determinants of one’s sexual identity?

To reach these many social and psychological determinants would require a much broader sexual education. The minimal education currently offered does not do enough to significantly alter sexual behavior. Janice M. Irvine writes in Sexuality Education Across Cultures, “Comprehensive sexuality education addresses the broadest realm of sexuality, including intimacy, relationships, body image, personal values, and self-esteem” (126). A narrow conception of sexual education does little to impact the many determinants that make up one’s sexuality.

Implementing such change, even if there were the will to make it at present, would not have immediate effects. Michael Schofield offers a diagnosis of what is needed in his article “The Sexual Behavior of Young People.” He states that, “The best hope…is to help the generation now at school to become the kind of parents who can speak simply and sensibly about sex to their children” (170).

Extensive sexual education has been used with success in Sweden. Thomas K. Grose writes of the Swedish sex education system in his article for U.S. News & World Report, “Straight Facts About the Birds and Bees”:

The curriculum starts out clinically at around age 6, when children learn about anatomy, eggs, and sperm. From age 12 on, the topics lean more toward disease and contraception. The classes have a moral dimension, as well: Sex within loving relationships is stressed, as is gender equality. (56)

The public education system in Sweden, as in the United States, is one of the best ways to shape the culture. Although modern U.S. sex education is deficient in its effect on sexual behavior, a more rigorous, insightful, and objective approach within the sex-related curriculum would have positive influences for the country, as it has for Sweden. Grose writes that, “The rates of teen pregnancy and sexually transmitted disease in Sweden are among the world’s lowest” (56). In addition, the teenage birthrate is 7 per 1,000 births, compared with 49 in the U.S. (Grose 56). The percentage of teenage girls having sex before 15 is also lower in Sweden than in the U.S. (Grose 56). Grose debunks the conception certain conservatives may hold that such open sexual discussion encourages teenagers to engage in sex before they are emotionally ready. In fact, Swedish classes urge students to wait until they feel mature (Grose 56).

The enlightened education policy of Sweden may be too drastic to be accepted by Americans and their legislators. However, the Swedish model shows that comprehensive sex education does not have to focus on premarital sex, a largely conservative concern, but can treat human sexuality without the fear and taboo that usually accompany such discussions. If the U.S. were to implement a more enlightened approach to sex, many of the ill effects of insufficient education would be eliminated. The sexual factors that contributed to such problems in the country’s past, and to possible future health issues, would already be addressed under a comprehensive sex education model. Yet the varying cultural norms and mores regarding human sexuality present obstacles to a monolithic implementation of sex education. Only by finding common ground among the many participants can the American education system finally move toward an objective, non-religious examination of human sexuality.

Works Cited

Gilgoff, Dan. “Why Critics are Still Mad as Hell.” U.S. News & World Report 13 June 2004. 13 Apr. 2007.

Grose, Thomas K. “Straight Facts About the Birds and Bees.” U.S. News & World Report Mar 26: 56.

Harris, Alan. “What does ‘Sex Education’ Mean?” Sex Education: Rationale and Reaction. Ed. Rex S. Rogers. New York: Cambridge UP, 1974. 18-23.

Holy Bible: The New American. New York: P.J. Kenedy & Sons, 1970.

Irvine, Janice M. Sexuality Education Across Cultures: Working with Differences. San Francisco: Jossey-Bass Publishers, 1995.

McKay, Alexander. Sexual Ideology and Schooling: Towards Democratic Sexuality Education. Albany, NY: New York UP, 1998.

Moran, Jeffrey P. Teaching Sex: The Shaping of Adolescence in the 20th Century. Cambridge: Harvard UP, 2000.

Schofield, Michael. “The Sexual Behavior of Young People.” Sex Education: Rationale and Reaction. Ed. Rex S. Rogers. New York: Cambridge UP, 1974. 168-80.

“United States STD Statistics.” AVERT. 28 Mar. 2007. 13 Apr. 2007

Weichman, Gerald H. and Altis L. Ellis. “A Study of the Effects of ‘Sex Education’ on Premarital Petting and Coital Behavior.” Sex Education: Rationale and Reaction. Ed. Rex S. Rogers. New York: Cambridge UP, 1974. 265-70.

Monday, April 09, 2007


Separate is Not Equal:
Why the National League Should Adopt the Designated Hitter

Boston Red Sox great Ted Williams said, “Baseball is the only field of endeavor where a man can succeed three times out of ten and be considered a good performer.” Every player dreams of hitting his way to a .300 batting average, the mark of a successful hitter. Yet a .300 hitter fails to get a hit 70% of the time he steps into the batter’s box. Ted Williams was perfectly accurate. Despite the statistical probabilities, every fan and hitter views an at-bat not in terms of what will most likely happen, but in terms of what could happen. This is the mysterious and majestic quality of the baseball hitter. It is this reverence that explains the regard the idea of the designated hitter receives from its proponents. By examining the conditions that warranted its implementation, one discovers the beneficial impact that increased offense has on baseball’s integrity and financial well-being, an impact that argues for its adoption by the National League.
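The arithmetic behind Williams’s remark is simple enough to sketch. The following Python snippet (an illustration with invented numbers, not drawn from any source cited here) shows how a batting average is computed and why a .300 hitter fails seventy percent of the time:

```python
def batting_average(hits, at_bats):
    """Batting average: hits divided by official at-bats."""
    return hits / at_bats

# A hitter with 3 hits in 10 at-bats carries a .300 average,
# meaning he fails to get a hit in 7 of every 10 at-bats.
avg = batting_average(3, 10)
print(f"{avg:.3f}")                    # 0.300
print(f"failure rate: {1 - avg:.0%}")  # failure rate: 70%
```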

Babe Ruth ruined baseball. The excitement generated from his slugging prowess changed the game forever. If the players before him were mere infantry of the sport, he was the atomic bomb that revolutionized the game. Before him, the game of baseball was boring and uneventful. In his book The Numbers Game: Baseball’s Lifelong Fascination with Statistics, Alan Schwarz describes the bleakness of baseball pre-Ruth. “Baseball in the teens…had basically degenerated into tedious, daily pitcher’s duels. Runs scored one at a time, manufactured piecemeal by the steal-and-sacrifice style…” (44). There was no excitement, no overt drama. Nothing warranted die-hard fascination from the game’s fans. Baseball needed a savior.

George Herman “Babe” Ruth would revive baseball. His bat would raise it like Lazarus, and he would turn the game into what it currently is: a game of offense. Even those with scarce interest in sports have heard his name uttered in conversations of legend. The most common association with the mythical hitter is the home run. By June of 1919, roughly midway through the baseball season, Ruth hit his 11th home run against future Hall of Fame pitcher Walter Johnson. In the fifteen years prior to that hit, no player in the American League had more than 12 home runs in an entire season. Babe Ruth became a star.

Ruth’s power was unprecedented, if not underappreciated. Schwarz writes, “Home runs at that time were like triples today—freak hits that were too rare to be fully appreciated” (45). However, the spectacle of the home run grew with Ruth’s popularity. In 1920, he would hit 54 home runs. He hit 59 one year later. Ruth and his offensive explosion altered the fan base of baseball. Schwarz describes:

The fans Ruth attracted were not the die-hards who put up with the soporific game that baseball had become before and during [World War I]. These new fans wanted to see runs score, and relished the thrill of watching Ruth swing mightily to make that happen (47).

There was now drama and excitement in baseball. But perhaps the most ironic aspect of Ruth’s legend comes from knowing his initial role in the sport. The original position of the man who would launch baseball into an era of offensive power, drama, and statistics was that of the archenemy of the baseball hitter: he was a pitcher.

If only every pitcher had the potential for power that Ruth did. General managers and their teams of scouts would scour the U.S. (and every other country, for that matter) for these anomalies to fill their rosters. However, the game of baseball has evolved in the years since Babe Ruth launched baseballs into mobs of spectators. The offensive efficacy of pitchers has decreased over the years to the near point of embarrassment. Political columnist, and avid baseball fan, George F. Will goes so far as to refer to many of them as “laughable” (not all pitchers are atrocious: Livan Hernandez has a lifetime .234 batting average; Dontrelle Willis, .222). Most pitchers, however, do not have the talent with the bat that these two do. Despite this, traditionalists remain skeptical of, arguably, the most radical alteration to the game of baseball: the designated hitter.

The reason for the designated hitter is a logical one. The early 1960s brought the famous home run race involving New York Yankee great Roger Maris, who hit 61 home runs in 1961, culminating an impressive offensive performance over the preceding decade. G. Richard McKelvey recounts the offensive explosion in his book All Bat, No Glove: A History of the Designated Hitter. He calculated that during the 1950s, major league teams combined for an average of 17.7 hits and 8.8 runs per game (McKelvey 9). As a result, after the 1962 season the commissioner of baseball, Ford Frick, persuaded the rules committee of Major League Baseball to enlarge the strike zone, so pitchers would have an advantage against the prevalent offensive potency of their counterparts. This, combined with managers’ increasing use of relief pitchers (batters have a more difficult time adjusting to numerous pitchers in a game as opposed to one or two), stifled offensive production. In 1968, the combined major league batting average dropped to .237, the second lowest of the century (McKelvey 12). The plan to “equalize” the offense-defense relationship had backfired, and the American League felt the brunt of the force. In 1971, the NL topped the AL by 129 runs, and it stretched that lead to 824 in 1972 (McKelvey 16). Separate was not equal.

The fans were well aware of the offensive disparity between the two leagues. Without the kind of offensive race that a Roger Maris or a Babe Ruth could provide, fans were reluctant to take in a baseball game at American League ballparks. Eight of the twelve AL clubs reported that they had finished in the red in 1972. That same season, nine of the NL’s twelve teams attracted over a million fans, compared to three in the AL (McKelvey 19-20). American League owners began to look awfully hard at the various designated hitter arrangements in their minor league affiliates. Finally, by the start of the 1973 season, the American League went ahead with the designated hitter. This change to the rules of baseball was the first in eighty years, since the pitching distance was moved from fifty feet to sixty feet, six inches.

The President of the National League, Charles “Chub” Feeney was opposed to such a dramatic alteration. “Our League doesn’t believe in change for change’s sake. The people know when a tight situation is coming up and it’s fun to sit back and try to figure out who the manager is going to hit for the pitcher. The baseball fan likes to second-guess the manager.” (McKelvey 24). The DH eliminates a piece of strategy whereby the manager must weigh the option of removing a pitcher from the batting lineup for the sake of a pinch-hitter. The pinch-hitter is later replaced by a relief pitcher when the team moves to defense. The “second-guessing” comes into play when one must debate whether a starting pitcher’s performance is more beneficial to a team than their offensive replacement. A close game in late innings makes this an especially intriguing scenario. The opinion of Feeney echoes common, modern objections to the DH. George F. Will summarizes the protests of those opposed to the DH. “The three arguments against the DH are: Tradition opposes it, logic forbids it, and it is anti-intellectual because it diminishes strategy” (58).

Players themselves are split over the decision. In his book Pure Baseball: Pitch by Pitch for the Advanced Fan, former player Keith Hernandez writes:

…if you believe there’s more to baseball than offense, if you believe that a lot of interesting ramifications flow from the fact that your most important player—your pitcher—is, by way of contradiction, probably a weak hitter and that having him bat for himself, or not bat for himself, makes the game more complicated in a dozen ways, then you’re with me (196).

However, the argument that the DH minimizes strategy is not universally shared. George F. Will posed this very question to Tony LaRussa, then manager of the Oakland Athletics (LaRussa led the St. Louis Cardinals to a World Series championship in 2006), in his book Men at Work: The Craft of Baseball: “Warming to his defense of the DH, he says that handling a pitching staff—perhaps a manager’s most important task—is tougher in the American League. ‘Every decision you make in the American League regarding your pitching staff is based solely on who you think should pitch to the next hitter, or in the next inning. In the National League you get certain times when the decision is taken right out of your hands’” (59). The DH doesn’t eliminate strategy; it only alters it. Will grasps this point:

In some ways the DH makes managing more difficult. Again, most pinch-hitting situations are obvious. What often is far from obvious is when to remove pitchers who never need to be removed to increase offense. That is an American League manager’s problem (59).

There is another distinction between AL and NL play. When pitchers are in a lineup, the offense needs to be more aggressive to compensate for their inadequate hitting. Because NL lineups have only eight adequate hitters, one fewer than AL lineups, offensive risk becomes more acceptable, namely through stolen bases and sacrifice bunts. This leaves the prototypical American League third base coach with less responsibility. In his book The Hidden Language of Baseball: How Signs and Sign-Stealing Have Influenced the Course of Our National Pastime, Paul Dickson discusses this phenomenon:

The number of offensive signs…in the American League dramatically declined with the advent of the designated hitter in 1973. Baseball historian Andy McCue interviewed several third-base coaches in 1989. He was told by men in both leagues that there were considerably fewer signs given in the early and middle innings of American League games…‘Taking an extra base [via base stealing] is also a one-run strategy, and since an AL third base coach never has to contemplate a pitcher in the on-deck circle as a runner approaches third, he is much freer to put up the stop sign’ (132).

National League lineups require riskier play that ultimately diminishes offensive performance. Should the NL adopt the DH, it would find minimal need to compensate for inadequate hitting pitchers by sacrificing needed outs through base stealing and sacrifice bunting.

Just as fans flocked to the offensive mammoth that was Babe Ruth, so did they return to American League ballparks after the implementation of the DH. The American League became the league of power, the home run, the “long ball.” The National League, maintaining the traditional interpretation of the rules and the affirmation of the “intellectual” aspect of the game, was (and still is) known for “small ball,” relying less on pure power and more on modest strategy. In 1980, well-known Washington Post sports columnist Tom Boswell compared the two leagues eight years after the American League implemented the DH. He wrote that the AL had scored 10.7 percent more runs per team, almost as great a lead as the 12.7 percent the NL had held prior to the DH’s adoption (McKelvey 65). Accordingly, the AL surpassed the NL in the growth of fan attendance. Between 1973 and 1982, regular season attendance increased by 64% in the AL, compared with only a 28% increase in the National League (McKelvey 75). The fans liked watching offense, just as they did with Babe Ruth.

The American League is not the sole custodian of the DH. In fact, it seems that most baseball organizations agree with the rule. In the well-regarded Nine Innings: The Anatomy of a Baseball Game, Daniel Okrent describes the prevalence of the DH in baseball. “Still, by 1982, only the National League, and Japan’s Central League, allowed pitchers to hit. In every other baseball league in existence, from Little League and high schools through all of the American minor leagues, the DH rule prevailed” (133, emphases mine). The National League is behind the times.

The original purpose of the DH rule was to rectify the dominance of pitchers. Pitching has become a more specialized phenomenon since the game’s inception. In the early days, it was not uncommon for a starting pitcher to work into the ninth inning. Their descendants, on the other hand, do not measure up to the mettle of their ancestors. In 1901, 87.3 percent of all games were completed by the starting pitcher. In 1988, only 14.8 percent were; in 1989, only 11.4 percent (Will 135). Why might this happen? George F. Will offers the opinion that the DH contributed to the 46% decrease in the number of complete games between 1978 and 1987 (Will 135). Will suggests a correlation between the DH rule and the increased use of relief pitchers: with a more potent hitter replacing a less adequate one (the pitcher), pitchers no longer have the luxury of an “assured” out. However, the addition of one hitter cannot by itself account for the 46% decline in complete games. If part of the “intellectual” aspect of National League ball is deciding when to replace a starting pitcher with a pinch hitter, and if there is evidence that pitchers have declined in hitting competency, then one could argue that managers are “forced” to pinch hit more often, requiring more relief pitchers.

Comparing statistics is one of the more difficult undertakings in baseball. How does one compare the offensive performance of two or more players? It is not as cut-and-dried as one might think. For instance, managers will often create a lineup in which a proficient hitter bats before a well-known “slugger.” The rationale is that the opposing pitcher would most likely prefer to avoid the slugger. As such, the pitcher will be careful not to walk the batter preceding the slugger, giving that batter better pitches to hit. However, a hitter in, say, the eighth spot (usually just before the pitcher in the lineup) will most likely not receive hittable pitches, at least ideally, because even if he should draw a walk, the opposing pitcher will next face the pitcher, a far less formidable foe. Comparing a hitter who bats before a “slugger” with one who bats before a pitcher would not yield a truly accurate comparison; the variables surrounding their plate appearances differ too strongly.

Comparing hitters from different generations is even more difficult. How does one account for the different sizes of ballparks, the lack of night games, a shorter season, even a different baseball between decades? There is no scientific way to perfectly “normalize” every variable, and no way to accurately compare different hitters. However, statisticians do try.

One such statistician is David Gassko, who looked at this very question in “Hitting Pitchers,” a February 2007 article in the online journal The Hardball Times. He first examined the combined offensive performance of pitchers, finding that pitchers, as a whole, batted .132 in 2006 (by comparison, the mean of the best and worst MLB team averages was .271). Gassko then wanted to examine the offensive performance of pitchers relative to their fielding contemporaries throughout history. He describes his process:

I calculated the [offensive performance] for each player in each season from 1871 to 2005, using whatever statistics were available [this affirms the lack of statistics recorded in the nineteenth century as opposed to the prevalence of modern statistical observation]…I then classified each player in each season as either a pitcher (if he made at least one appearance as a pitcher that year) or a hitter (if he did not), and calculated the league average [offensive performance] BA for both pitchers and hitters…Pitchers were compared to the pitcher average in calculating their runs above average, which were corrected for park factor and then into wins above average to adjust for varying run environments (Hardball Times).
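Gassko’s classification step can be sketched in a few lines. This is not his actual code or data; it is a simplified illustration, with invented player-seasons, of tagging each player as a pitcher (at least one pitching appearance that year) or a hitter, then computing a league batting average for each group:

```python
# Invented player-season records: (player, games_pitched, hits, at_bats).
records = [
    ("A", 30, 10, 70),   # a pitcher-season
    ("B", 0, 150, 520),  # a hitter-season
    ("C", 1, 12, 80),    # one pitching appearance still counts as a pitcher
    ("D", 0, 140, 500),
]

def group_averages(rows):
    """Batting average for the pitcher group and the hitter group."""
    totals = {"pitcher": [0, 0], "hitter": [0, 0]}  # [hits, at_bats]
    for _, games_pitched, hits, at_bats in rows:
        role = "pitcher" if games_pitched >= 1 else "hitter"
        totals[role][0] += hits
        totals[role][1] += at_bats
    return {role: h / ab for role, (h, ab) in totals.items()}

avgs = group_averages(records)
print(avgs)  # the pitcher group bats far below the hitter group
```

Gassko’s real method goes further: each pitcher’s performance is converted into runs above the pitcher average, corrected for park factor, and translated into wins above average. The sketch above stops at the grouping step.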

Below is a line graph displaying the results that Gassko found in his research and analysis.

Despite occasional spikes from year to year, there is a clear downward trend in pitcher offensive performance relative to that of fielding hitters. Pitchers are worse hitters now than they were at the game’s inception.

It appears George F. Will’s description of the hitting abilities of pitchers as “laughable” is not so unwarranted. Although the National League cites the game’s tradition and intellectual dynamic as an essential rationale for avoiding what the majority of other baseball leagues have adopted, the evidence suggests that NL teams focus too much on the mere words of baseball’s tradition instead of the game’s current spirit and actuality. Of course the original rules of the game stipulated that pitchers bat: pitchers were actually competent hitters in that era. They no longer are. Fans respond to offense, offense that the designated hitter can provide. The American League exemplifies this point perfectly. Should the NL adopt the designated hitter, it would see similar results, and its fans would see baseball as it was intended: with nine adequate hitters, not eight.

Works Cited
Boswell, Thomas. “Time to End 9th-Bat Split.” The Washington Post 31 July 1980, sec. 6.

Dickson, Paul. The Hidden Language of Baseball: How Signs and Sign-Stealing Have Influenced the Course of Our National Pastime. New York: Walker & Co., 2003.

Gassko, David. “Hitting Pitchers.” Chart. The Hardball Times. 25 Feb. 2007.

Hernandez, Keith, and Mike Bryan. Pure Baseball: Pitch by Pitch for the Advanced Fan. New York: HarperPerennial, 1994.

McKelvey, G. Richard. All Bat, No Glove: A History of the Designated Hitter. Jefferson: McFarland & Co., 2004.

Okrent, Daniel. Nine Innings: The Anatomy of a Baseball Game. New York: Houghton Mifflin, 1985.

Schwarz, Alan. The Numbers Game: Baseball’s Lifelong Fascination with Statistics. New York: Thomas Dunne, 2004.

Twombly, Wells. “Now the 10th Man.” New York Times Magazine 1 April 1974, 21, 23.

Will, George F. Men at Work: The Craft of Baseball. New York: HarperPerennial, 1990.

Monday, March 05, 2007

Literary Realism Essay

The Civil War was one of the catalysts for the Realist literary movement. The armed conflict brought disillusionment to the country. Not only were Americans fighting their fellow citizens, but photography brought graphic images of death and disfigurement to the public. It comes as no surprise, then, that writers of this era would use the Civil War as a principal theme in their stories. Ambrose Bierce did so in “Chickamauga,” regarded as one of his finest pieces of literature, where the lasting impact of death and emotional disillusionment contrasts mightily with more “romantic” views of war: nobility, patriotism, glory, and so on. In “The Return of a Private,” Hamlin Garland likewise uses realistic elements to accurately depict circumstances of the Civil War.

The story begins as a group of Union soldiers return via train to La Crosse, Wisconsin. The third-person narration describes the elation of the returning soldiers upon arriving home: “When they entered on Wisconsin territory they gave a cheer, and another when they reached Madison, but after that they sank into a dumb expectancy” (185). Here, Garland confronts the reader with the conflicting emotions of the soldiers’ return. On one hand, they feel a strong sense of elation as their train nears its ultimate destination, from which their cheers derive. On the other hand, there is this “dumb expectancy.” A more colloquial understanding of dumb is a lack of intelligence; however, dumb can also mean silence, or a temporary inability to speak. Most soldiers were away from home for years, and being alienated from loved ones creates natural gaps in emotional connection, which can cause a sense of apprehension for returning soldiers. This grappling of conflicting emotions is a very “realistic” element. Instead of focusing on more romantic ideals of soldiers arriving home, Garland acknowledges their undoubted nervousness and anxiety.

Garland also describes the physical detriments these soldiers have undergone: “Three of them were gaunt and brown, the fourth…gaunt and pale, with signs of fever…One had a great scar down his temple, one limped, and they all had unnaturally large, bright eyes, showing emaciation” (186). These are not descriptions of Romantic characteristics. These conditions are extraordinarily “real,” devoid of sentimental illusion.

Garland also confronts the reader with another rebuke of Romantic ideology. When modern readers consider the notion of returning, victorious soldiers, images of ticker-tape parades and the famous photograph of a sailor kissing a strange woman amid celebration in New York’s Times Square can easily come to mind. This is a classic image of admiration after victory in war. Although this example comes long after the Civil War, it nonetheless illustrates a prevailing assumption of the “glory” of war, and the adulation of the adoring populace.

“There were no bands greeting them at the station, no banks of gaily dressed ladies waving handkerchiefs and shouting ‘Bravo!’ as they came in on the caboose of a freight train into the towns that had cheered and blared at them on their way to war” (186). Garland’s story does not include the vocal adoration so prominent in one’s romantic conscience.

One of the most successful government programs in this country’s history was the G.I. Bill, passed in 1944. Returning war veterans were able to go to college, buy homes with discounted loan rates, and receive generous unemployment benefits should they need them. After the Civil War, however, there were no programs of that magnitude. It appears that, after service in the armed forces, soldiers would return home to face the same economic conditions they had left. Garland includes this condition as it, presumably, would play out in the Midwest: “All of the group were farmers…and all were poor” (186). Many of the soldiers in the story with responsibilities toward their families could not afford a night’s stay at a hotel before venturing home the next day. Private Smith, the story’s protagonist, states, “‘Now I isn’t got no two dollars to waste on a hotel. I’ve got a wife and children, so I’m goin’ to roost on a bench and take the cost of a bed out on my hide’” (186). Another soldier affirms, “‘Hide’ll grow on again, dollars’ll come hard. It’s goin’ to be a mighty hot skirmishin’ to find a dollar these days’” (186). Private Smith later contemplates the conditions in which he finds himself: “He saw himself sick, worn out, taking up the work on his half-cleared farm, the inevitable mortgage standing ready with open jaw to swallow half his earnings. He had given three years of his life for a mere pittance of pay, and now!—” (187). Hardly the ideal soldier’s return home.

The pieces of dialogue quoted above exemplify another characteristic of the Realist movement. Very often, Romantic writers would use proper or “enlightened” language in their discourse. In “The Editor’s Study” (1887), William Dean Howells wrote that “…each new artist, will be considered…in his relation to the human nature” (258, emphasis mine). The essential point that Howells, and other Realists, made was that it is imperative that authors create stories and characters that authentically represent people as a whole. The enlightened language of the Romantics was no longer sufficient, in their opinion, because it was so inauthentic. The dialogue exchange between the soldiers in the story is far more justifiable to a Realist. A Romantic writer might shudder to have a character say “natcher’l” as a phonetic depiction of the word natural, as Garland does (187).

Prolonged dislocation from one’s home implies a separation from the realities surrounding that home prior to leaving. Essentially, one would like one’s home to remain the same throughout one’s time away. This is more of a romantic desire (although one not easily removed from thought and imagination, even for the staunchest of Realists). It is not unheard of for any soldier to reflect fondly on, say, his wife’s geraniums in the backyard, the smell of honeysuckle scattered around the neighborhood, even the smell of grass and dirt at a nephew’s Little League baseball field. These memories of home nourish hope and help dissipate loneliness. However, upon returning one can find geraniums uprooted, honeysuckle dead, and baseball fields abandoned to the elements.

Garland illustrates such a point with Private Smith. After arriving in La Crosse, Smith imagines the reaction of his family at his long-hoped return. He imagines returning home late, catching his sons milking the cows long after the preferred time. “‘I’ll step into the barn, an’ then I’ll say: ‘Heah! why ain’t this milkin’ done before this time o’ day?’’” (189). Of course, the mock disapproval would be overshadowed by the elation of his sons at their father’s return.

Smith goes on to even include the family dog in his vision. “‘I’ll jest go up the path. Old Rover’ll come down the road to meet me. He won’t bark; he’ll know me, an’ he’ll come down waggin’ his tail an’ showin’ his teeth. That’s his way of laughin’” (189).

After returning home, both the passing of time and Smith’s beard growth perplex his children. “…the youngest child stood away, even after the girl had recognized her father and kissed him” (197). He then turns to his youngest son. “This baby seemed like some other woman’s child, and not the infant he had left in his wife’s arms. The war had come between him and his baby—he was only a strange man to him…” (197). This is not what Private Smith had expected at the train station. Later, his wife informs him that Rover died the previous winter.

Smith’s expectations did not match the ideal he envisioned. This encapsulates the criticism that many Realists felt toward the literature leading up to their literary era. Despite Private Smith’s nobility and his genuine longing for his family and home, his more Romantic ideals of his return are not met, and he is forced to reconcile his wishes with actuality. Many stories of Realism do not contain the “ideal” ending, but one of conflict and disappointment, just as life itself is.

Hamlin Garland’s story, based on his own experience of his father’s return from the Civil War, is filled with conflict and a disillusionment of ideals. Using an event that inspired the Realism movement in literature, Garland encompasses primary characteristics of the period in his story to create a more accurate understanding of the human experience after war. He achieves this by using realist notions and depictions instead of Romantic ideals.

Works Cited

Garland, Hamlin. “The Return of a Private.” The Portable American Realism Reader. Ed. James Nagel and Tom Quirk. New York: Penguin, 1997. 185-99.

Howells, William Dean. “The Editor’s Study.” The Heath Anthology of American Literature: Late Nineteenth Century: 1865-1910. 5th ed. Vol. C. New York: Houghton Mifflin, 2006. 258-9.

Tuesday, February 20, 2007

Film Review

Based on the 1949 Pulitzer Prize-winning series of twenty-four articles that chronicled corruption and greed on New York City docks, On the Waterfront (1954) is a story of mob rule, violence, and the human conscience. Starring Marlon Brando (The Wild One, A Streetcar Named Desire) as a one-time boxing prospect forced to work on New York City’s docks, this Elia Kazan (East of Eden, A Tree Grows in Brooklyn) film shows impoverished neighborhoods controlled by the local union through fear and intimidation, in a motion picture that would ultimately win eight Academy Awards.

Kazan appeared before the House Un-American Activities Committee (HUAC) in 1952. He drew heavy criticism from his peers for divulging names associated with Communism to the infamous committee. Arthur Miller, slated to write the film’s screenplay, refused to cooperate during his own testimony before HUAC and became “blacklisted” in Hollywood (he would also pen the famous play The Crucible, inspired by the hearings). Columbia Pictures released Miller from his original commitment to the screenplay, replacing him with Budd Schulberg.

The movie’s suspenseful opening involves Terry Malloy (Brando) luring a fellow dockworker to a roof, where Terry believes the man will merely have a conversation with union officials. To Terry’s dismay, the young man is flung off the roof in a brazen attempt to silence a would-be “rat.” Eva Marie Saint (North by Northwest), in her film debut, plays Edie, the sister of the murdered worker. Distraught by her brother’s demise and the reluctance of his fellow workers to confront the union, she takes it upon herself to solve the murder. In addition, Father Barry, played by Karl Malden, begins his own quest to rid his parish of the union leadership’s iniquities.

Filmed in black and white, the outdoor scenes during the dock’s working hours contrast with the dark, nighttime settings of the local tavern and apartments. To convey the bleakness within the neighborhood, Kazan used smoke machines to create a strong sense of despair. Music composed by Leonard Bernstein (who would go on to score West Side Story) evokes the moral complexity certain characters feel, and the tense moments of suspense and action into which they are thrust.

Lee J. Cobb (North of the Rio Grande, Twelve Angry Men) plays the wonderfully sinister mob boss, Johnny Friendly. With the loyalty of his foot soldiers to keep the longshoremen in line, the only way for his workers to remain alive is to be “D and D”—deaf and dumb. Any “canaries” who threaten the monopoly of Friendly’s racketeering meet with a swift, silencing blow.

Terry Malloy begins as a quasi-informant for the union leadership. However, both the beauty and strong moral conviction of Edie, and the newfound vigor of Fr. Barry, make Terry reconsider his place in the world and the fate of his soul. Wrestling with new convictions of right and wrong, Terry must decide whether he will remain a “bum” or attempt to redeem himself. With a well-crafted script and wonderful acting, On the Waterfront brings excitement and passion as the audience decides along with Terry Malloy whether they would rather be “D and D” or a righteous “canary” in the midst of widespread corruption.

Wednesday, February 14, 2007

Rhetorical Analysis

Barbara Lerner wrote her essay “The Killer Narcissists” after the widely covered public school shootings of the late 1990s. A freelance writer and a psychologist, Lerner argues that the existing psychological explanations regarding the mental and emotional make-up of students who organize and carry out these violent acts are outdated and ineffective. She states that a pre-1960s psychological diagnosis would characterize these students as rejected, abused, and lacking self-confidence, using such violently brazen acts as an unconscious “plea” for help. Lerner refutes such ideas, arguing that these students embody narcissism and need strong social relationships and moral conditioning during such volatile adolescent years. Although she mounts valid arguments of definition, evaluation, and proposal, Lerner misses the mark with regard to her essay’s soundness.

Although Lerner is noted as a psychologist, the precise level of her education in the field remains unknown. Because her claim relies heavily on psychological premises and assumptions, the ambiguity of her expertise makes critical readers question her reliability, or her extrinsic ethos. Her article appeared in the May 1999 issue of the National Review, one month after the infamous school shooting at a high school in Columbine, Colorado. It is worth asking why an essay examining psychological profiling would appear not in a journal of psychology or education but in a political publication with conservative leanings (a point that will resurface later). The ambiguity of Lerner’s expertise, coupled with her essay appearing in a political magazine, creates skepticism as to her underlying motivation. Namely, would such an article attempt to influence policy toward a specific political outcome? Although no concrete evidence either affirms or denies this concern, it further weakens Lerner’s extrinsic ethos.

Despite this, the logical fluidity of Lerner’s argument is valid. Her first warrant is both justifiable and emotionally relevant after such shocking and horrific violence in this nation’s schools: society must become more adept at discovering and helping students prone to violence. Concluding that the increased number of school shootings is a result of an increased number of narcissistic children in American society, and not of previously held explanations, Lerner cites logical reasons to justify her claim. Below is a diagram of Lerner’s argument.

Barbara Lerner Argument from “The Killer Narcissists”:

Warrant/Assumption: Ameliorating students’ susceptibility to violence is important
Warrant/Assumption: Morality is a foundation of proper development

Premise/Reason: Past evaluation of violent school children is inaccurate and outdated
Premise/Reason: Students who committed such acts do not fit into pre-established diagnoses

Claim: Increased school shootings come from an increase in narcissistic children, and not previously held beliefs.

As previously mentioned, the emotional investment much of the country felt after the tragic events in Columbine, CO gives Lerner a strong appeal towards pathos. After the April 1999 shootings, much of the country felt several emotions: sadness, fear, anger, perhaps even a tinge of hopelessness based on the severity of events. This gives Lerner’s argument tremendous emotional appeal, which provides her essay with strong relevance and significance.

Lerner’s first premise relies on evaluating past thinking about violent, anti-social behavior. This is where Lerner commits her first fallacy. She states that “sensate” Americans have heard the old psychological explanations before, from experts, teachers, preachers, politicians, and journalists. She concludes that these social forces have ingrained into the collective social consciousness of America the idea that violent acts are a hidden cry for help.

It is important to identify two aspects of this premise. First, where is the evidence to support her reason? Why did she not include data examining the basic psychological understanding of the American public? Why did she not include a primary source from one of the many “social forces” she lists as propagating such archaic psychological explanations? Second, it is a very broad assumption to state that these numerous social forces coordinate the propagation of misinformation in American society; Lerner makes it appear to be an organized movement. This is a fallacy of hasty generalization.

The second of Lerner’s premises is that students who commit these violent acts in school do not follow the old psychological model of evaluation. Lerner states that Eric Harris and Dylan Klebold, who jointly killed thirteen students and one teacher in the Columbine shooting, had been reared in loving homes, were thought of as “normal” by neighbors, and had received psychological counseling that included anger management. They completed their counseling two months prior to the shooting.

On this point, Lerner offers the reader a logical premise (that Harris and Klebold do not fit the traditional psychological explanation). However, how reliable is this evidence? Lerner does not provide any sources for her information. How are readers to know that it is credible? Why did Lerner not interview, or find an existing interview with, one or both of the boys’ counselors affirming the completion of their counseling program? Where Lerner provides a valid premise for her claim, she lacks reliable and credible evidence to support it. This further undermines her argument’s soundness.

She goes on to discuss Kip Kinkel, a fifteen-year-old Oregon high school student who murdered his parents and afterwards murdered two of his classmates, in addition to wounding twenty-five others. Lerner states that Kinkel posed a problem for conventional psychological explanations of motivation. Kinkel, she states, was raised in a loving family environment, leading the reader to assume that Kinkel suffered no severe emotional trauma while growing up. Again, Lerner fails to accompany this premise with evidence to support it; she forces the reader to take her assumptions as truth. She very well could be accurate, but without corroborating evidence (from, say, a psychiatrist who examined the boy), the reader must remain skeptical.

Finally, Lerner reaches the zenith of her argument by asserting her claim that there are more “wanton schoolboy killers” because of their narcissism. She defines a narcissist as one who never grew out of infantile self-love and who develops inauthentic personal and social relationships. To the narcissist, individuals can become expendable after serving their “purpose,” which can include a violent demise, especially in the emotionally volatile adolescent years. Lerner warns that narcissists favor the exercise of rage, especially through dramatic means: even if guns were taken from their possession, narcissists would choose explosives and other melodramatic means to express their rage. Lerner mentions Ted Kaczynski and the Japanese subway saboteurs of 1995 as examples of extreme narcissistic behavior.

The weakness of this claim comes from a few inadequate rhetorical methods. First, Lerner fails to provide the reader with any evidence, either data or expert opinion. Once again, Lerner forces the reader into blindly accepting her assumptions, and the absence of evidence undermines the soundness of her argument.

Second, there is a problem with Lerner’s definition of narcissism. While articulating the possible repercussions of a severely narcissistic individual, she uses the term as a broad catch-all for violent, anti-social individuals (Kaczynski, the persons involved in the sarin gas release in the Japanese subway, and adolescents committing gun violence in school). It is unlikely that all narcissistic individuals possess such violent potential, yet Lerner describes the problem in a similar vein when she correlates school shootings with an increase in narcissistic children. This is another example of the author committing a fallacy of hasty generalization. In addition, her definition of narcissism may come close to many individuals’ idea of self-centeredness and/or arrogance; it would be wise for Lerner to provide an illustrative example to clarify the psychological distinction. Without supporting evidence linking a rise in school shootings to an increase in narcissistic children, this portion of her premise borders on a fallacy of false cause.

Lerner’s final analysis ultimately leads to a proposal. She feels that children need to experience morality in the lives of their parents to deflect narcissistic development. In her opinion, anger management will not solve adolescent narcissism; only moral conditioning will. She uses the importance society places on morality as the second warrant of her argument. Although morality is a strong undercurrent of contemporary American society, without specific recommendations and evidence to support her proposal, the reader may infer a stance of moral superiority. This, coupled with the conservative leanings of the National Review, can make readers wary of the author’s underlying intent.

Barbara Lerner writes a very emotionally significant article with the intent of shedding light on the reasons behind such shocking and traumatic events in the nation’s public schools. Her premises align amicably with her conclusion, but the lack of evidence, in addition to argumentative fallacies, undermines the soundness of her claim. Her psychological diagnosis, or some form of it, may prove to be true (if it has not already). However, without sufficient, quality evidence free of rhetorical deceptiveness, readers cannot analyze her essay’s arguments of definition, evaluation, and proposal without retaining healthy skepticism.


To Soldiers

Gentle compatriots flaunting
unabashed skeletons. Moving
towards keystrokes and widgets
widgets to harness the mighty
swords of hieroglyphics.

Dare not be moved in such
wreckless words. Words
to imagine a future
floating amongst dire

dire beaches.

Thursday, February 01, 2007

Rhetorical Analysis

Aaron Lukas’ article “I Love Global Capitalism—and I’m under 30” is rich with opinion and criticism. He states that the “carnivals against capitalism” targeting free trade, and specifically free trade organizations, show ignorance of the international economy. Lukas feels most of these protestors inaccurately blame large, multi-national corporations for social maladies. Furthermore, Lukas identifies a specific problem: the ill-advised view of his contemporaries against global capitalism is, he argues, uncharacteristic of the majority of his generation, despite public demonstrations to the contrary. Despite an interesting claim, Lukas falls short of either a valid or a sound argument. There are logical fallacies in his premises as they relate to his conclusion, and the lack of cited research studies and statistics yields nothing more than a poorly conceived opinion piece. The reader is skeptical not only of his claim that a majority of those younger than thirty approve of global capitalism, but also of the reasons he offers to prove it.

The lack of cited evidence suggests that Lukas’ perceived audience would be predisposed to favor his opinion. The article appeared on the website of the Cato Institute, a political think-tank of which he was an analyst at the time of the essay’s publication. This background provides the reader with an idea of the author’s possible purpose and intent, his extrinsic ethos. It also could explain why Lukas would forego substantive supporting evidence when writing for a website whose visitors might be predisposed to similar beliefs (why provide examples when preaching to the choir, so to speak?). This jeopardizes his intrinsic ethos because he fails to build a competent argument. What could have blossomed into a compelling essay turns into an opinionated rant. Below is a diagram of the Lukas argument:

Aaron Lukas Argument from “I Love Global Capitalism—and I’m under 30”:

Warrant/Assumption: Liberty and prosperity are good for all
Warrant/Assumption: A healthy environment is important

Premise/Reason: Protestors are uneducated to benefits of free trade
Premise/Reason: Free trade benefits workers and environment
Premise/Reason: Liberty and prosperity are sweeping the globe
Premise/Reason: Free Trade agencies do not impede sovereignty
Premise/Reason: Most individuals have some sort of association with corporations

Claim: Most under thirty years favor global capitalism

The “driving force” behind the Lukas argument consists of two warrants. It would be hard to find an individual who did not think that a) liberty and prosperity for every person are good, and b) a healthy environment is desirable. These foundations of Lukas’ argument are well established. However, the reasons do not yield a valid claim. Instead of providing premises that justify his conclusion, he cites reasons why people should favor global capitalism. He proposes a non sequitur: his reasons would be valid only if his claim argued that individuals should view free trade amicably, not that they actually do. No premise validly supports his claim; Lukas fails to provide a logical argument.

Essentially, Lukas observes, in his opinion, a prevailing delusion among free-trade protestors that global capitalism is harmful to both workers and the environment. This premise depicts the participants of the “carnivals against capitalism” as uninformed and uneducated. Lukas’ counter to this belief is the prosperity of the West and the countries of the Pacific Rim, which has inspired poor, Communist, and developing states to pursue democratic capitalism. Makes sense, right? However, the problem with this point is the absence of statistics to corroborate his assumption. He does not cite the difference between the wealthy states and the poor ones. Not once does he even mention the most basic of all economic measurements, Gross Domestic Product (GDP), let alone an income-per-capita statistic. How are we to believe that these wealthy nations are, indeed, so wealthy? In addition, has democratic capitalism actually spread over the years? If this premise is accurate, how did Lukas come by it? What statistics generated his knowledge? Does a country like China not counter this premise? The reader has only one option: to take the word of Lukas himself.

This is Lukas’ first blunder of validity. We can only assume what he tells us is true. Personal opinions, by themselves, do not work well in arguments; evidence rules. Unfortunately, Lukas’ counter to the protestors (that capitalism is sweeping across the globe) comes without reliable evidence to support it.

There is another aspect of this premise that degrades Lukas’ argument. He depicts protestors as uninformed, their only reason to protest being protest in and of itself. Lukas does not cite any evidence, primary or secondary, that specifically informs the reader of the protestors’ beliefs and logic against free trade. How do we know that the protestors believe what Lukas says they do? It is a hasty generalization. Lukas provides no explicit grievances on the part of protestors. Here, as in my previous criticism, the lack of evidence undermines the effectiveness of Lukas’ argument.

Lukas’ second premise, that free trade benefits workers and the environment, shares a similar fate. The reader comes across absolutely no evidence to corroborate his assumption; instead, Lukas provides a personal opinion. As mentioned above, assumptions do not make for good arguments. Readers must receive evidence that makes an argument sound in order to be swayed to the writer’s opinion. There are no wage-increase data, income-per-capita statistics, or environmental research; Lukas even fails to provide an expert opinion on either matter. The omission of any form of evidence makes his argument even more fragile.

Not surprisingly, Lukas fails to provide evidence yet again in his third premise: free trade organizations do not threaten state sovereignty. Here, Lukas argues that free trade organizations align themselves closely with the principles and structure of democracy. This is a difficult premise to bolster with mere data; however, credible expert opinions from, say, political scientists would have strengthened this argument tenfold. Lukas even foregoes an illustrative example examining the similarities between free trade organizations (such as the World Trade Organization) and democratic states. Instead, he merely provides personal opinion. Lukas even goes so far as to characterize protestors as akin to mob rule and anti-democratic (how one can view the right to public assembly, and those who exercise that constitutional guarantee, as the antithesis of democracy is beyond me). This last point degrades Lukas’ moral character with the reader, at the expense of an effective ethos with his audience. By leveling a broad assumption in such “low blow” fashion, he undermines mutual respect and causes the reader to suspect an alternative purpose for his essay: to rally individuals who share his beliefs by depicting their opposition (those against global capitalism) in an obtuse and negative fashion.

His final premise is that most young people do not hate corporations. Lukas gives an illustrative example: most are employed by a corporation, know someone employed by a corporation, or hold stock in a corporation. In effect, as Lukas argues, who could possibly hate corporations when most individuals have some degree of association with them? The final “deathblow” to the soundness of the argument comes, once again, with the lack of evidence to support the premise. Lukas provides the reader with no evidence that affirms a) most individuals have some degree of association with corporations, and b) most individuals do not hate corporations. As throughout the essay up to this point, Lukas leads the reader into assumptions without even the semblance of evidence to justify and corroborate his premises.

Examining dramatic, newsworthy protests of global capitalism and their relationship (or the lack of it) to a segment of the population is a worthwhile pursuit. However, the claim Lukas derives from his premises is an invalid one. Based on his reasons, he should have constructed a conclusion around the need to accept global capitalism as a positive venture, not the claim that it has already been accepted. Moreover, the lack of evidence undermines the efficacy of any argument. Without credible and reliable evidence, no argument can withstand healthy skepticism. Personal opinions do not build soundness—evidence does.

Tuesday, January 30, 2007

Revision: TIME Magazines from the 1950s

Note: I suppose pride is an aspect of most personalities, including writers. With that in mind, many writers can feel hurt, angry, and even disheartened when they receive bad marks on even the most trivial of assignments.

It's not a new phenomenon for me. I would like to think that, as a senior at age twenty-four, I could avoid investing personal attachment (and my ego) into everything that I write and submit. I still cannot.

Below is a revision of my most recent post, on TIME magazines I read for my Senior Seminar course: 1950's American Culture. I don't think my original copy was awful, by any means, but I was very upset (with myself) when I received the professor's marks. Most were very simple, stupid mistakes: using an adjective in place of a noun, using "amongst" instead of "among," and the awkward sentence structure of a handful of lines.

Even though I imagine few, if any, read this blog, I still devote my time and effort to making it as professional as possible, just for my own sake. Therefore, when I see blatantly silly mistakes in a published post, I become upset with myself.

Instead of just editing the existing post, I think openly admitting my mistakes, and showing my revision, is a far better idea. No one is perfect. We often learn more from our mistakes than we do from our successes. I could be reading a bit too much into this, but my writing, any writing, means a lot to me. I would rather admit to faulty writing and be open to learning from failures than hide them. Sometimes a wounded ego can be good for the mind.

The cover of the January 1, 1951 issue of TIME showed an artist’s depiction of an American soldier. TIME decided that, most likely in light of the remnant glory of U.S. soldiers in World War II and the growing confrontation with Communism in Korea, G.I. Joe should be the “Man of the Year.” I found it interesting that the “Man of the Year” award—still vital to the magazine’s current publishers—dated back so long. In fact, the annual award has been around since the 1920s. However, the more significant statement of the 1951 award was the focus on U.S. involvement in an escalating and international crisis—Communist containment. In my reading of TIME issues between January and February of 1951, I could not escape the earnestness with which the publishers of the magazine—and, I would venture to conclude, the public at large—regarded Communism.

The 1951 issues did not contain the volume of unique articles by various staff reporters and freelance writers that their modern counterparts have. However, I was pleasantly surprised to see concise news coverage in the 1951 issues, notably with regard to foreign states. Although the issues lacked the depth of a modern issue of, say, The Economist, there was information on countries such as Iran, Ireland, Italy, France, Nepal, Germany, Indonesia, and Cuba, among others. The blurbs mentioned key figures and events to give the readership a more global awareness. In addition, the layout of information was very simple: national news separated into sections focusing on the Presidency, Congress, Military, etc., followed by international news and insights.

I imagine the prevalence of international examination reflects this era of American history as one in which isolationism was becoming an outdated global philosophy. World War II, I imagine, countered the isolationist view, and the “Great Debate” pushed the U.S. to engage the world’s actors, obviously as a means to counter Communism. I found the term “Great Debate” in a few of the issues. It referred to two competing international perspectives: isolationism vs. proactive involvement. Former President Hoover was quite outspoken against the latter approach to international relations during the Truman administration: “Any attempt to make war on the Communist mass by land invasion, through the quicksands of China, India, or Western Europe, is sheer folly. That would be the graveyard of millions of American boys” (January 1 issue of TIME). One cannot help but consider this quote a rather accurate piece of prophecy, considering the outcomes of both the Korean and Vietnam wars.

In the January 1 issue, Truman addressed accusations against then-Secretary of State Dean Acheson, saying he had been “…shot by the enemies of liberty and Christianity.” The criticism of Acheson focused on his effectiveness as Secretary of State. This seemed to embody how divisive Communism, and its containment, were for the country. There was no unanimous decision, as I imagine there seldom is, among a broad, governing body.

The cover of the January 8 issue was a lifelike artist’s rendering of Acheson, with the statement “A Time for Re-Examination” below his name. That issue reported that many in Congress were uncertain as to the focus the U.S. should place on its broad international policy. TIME held the opinion that the U.S. had no policy at the time, in part because of the large, ongoing “Great Debate.” Some leaders felt that Europe was the key, and that Asia was not as significant. Hoover, who largely represented isolationists in the country, had his ideas referred to as the Hoover Doctrine—with the nickname “retreatism.” One cannot help but see some semblance of contemporary Iraq war hawks’ “cut-and-run” phrase placed on dissenters of the current conflict. From my reading, many seemed to view isolationism as an international relations paradigm that was very close to a “defunct” label. Most, if not all, articles and stories did not pose the underlying question “should we pursue Communism abroad?” Instead, the apparent attitude represented in the magazine was the more proactive “how should we pursue Communism?” This corresponds to observations that TIME, at the time, was a more conservative publication.

Later in that issue, TIME published a small map to accompany an article describing MacArthur’s troop movements. The map bore the clever title “Seoul at Stake” and referred to Communist forces as “200,000 Reds,” a clear indication that TIME held Communism as a full-fledged antagonist to the country and its interests.

The January 15 issue reported the intra-country debate on “…effectiveness, practicality, and logic” with respect to U.S. involvement on the Asian continent. This underscores, again, the lack of unanimous consent for military actions.

I did find an advertisement in the January 29 issue to be an interesting representation of 1950s technology. The Zenith corporation advertised a television set with an accompanying “Turret Tuner” (a remote control) that was exclusive to that particular Zenith model.

The February 5 issue contained an interesting quote from Gen. Douglas MacArthur: “I’ll spend the rest of my life, if necessary, fighting Communism. Democracy—the American way of life—is the most wonderful thing we have and it is worth fighting for when it is threatened.” It is hard to imagine a more succinct phrase to represent those who favored containment. The issue also had an article titled “Background for War.” It examined the possible outcomes if Russia were able to topple a fragile Iran. Most notably, the article mentioned the vast oil resources in the Middle East, which could come under dominating control should Russia exercise its might.

Not surprisingly, TIME editors devoted much of the magazines I read to the U.S. conflict in Korea and the global “War on Communism” (to alter a common adage used by current politicians when referring to terrorism). However, I noticed a decrease in the conflict’s immediacy in the later January issues. It seemed as though the Korean War and containment became an ordinary part of American lives—perhaps much as the Iraq conflict and the “War on Terror” have for the country’s current population.

A suitable way for me to end this brief reaction essay is to quote a few lines from an article in the February 26 issue, titled “The U.S. Gets a Policy.” The purpose of the article was to commend the quietly established policy among then-government officials, as heard from TIME reporters: “If the atomic umbrella continues to protect a united free world, if the U.S. strengthens Europe and Asia fast enough, if Communism is rolled back, the West can confront the Kremlin with the conditions for peaceful coexistence.”

If only history could have been more cooperative.