27 September 2012

Jonah Raskin : Jack Kerouac's 'Mishmash' Life

Jack Kerouac. Photo by Tom Polumbo, circa 1956. Image from Wikimedia Commons.

Jack Kerouac’s 'mishmash' life
and his biographers
None of Kerouac’s biographers are as concise as he was, none of them as poetical as he, and none of them as unapologetic about his seemingly chaotic life as he.
By Jonah Raskin / The Rag Blog / September 27, 2012

More biographies have been written about Jack Kerouac (1922-1969) than about any other American writer of the second half of the twentieth century, but no biographer has written anything as alive and as punchy as Kerouac’s own three-page profile of himself, modestly entitled “Author’s Introduction,” which appears at the front of Lonesome Traveler, a collection of his essays about America, Mexico, and Europe.

Of course, he was a genius. (Ann Charters includes the “Author’s Introduction” in The Portable Jack Kerouac, a volume of his writings that she edited and that Viking published in 1995.) Kerouac’s biographers, even the best of them, have been adept researchers, faithful scribes, dogged investigators, and attentive oral historians, but they haven’t had his flair for language or his gift for storytelling.

Word for word, Kerouac outclassed nearly everyone who has tried to capture his furtive life in print and to satisfy the seemingly endless appetite for information about his sex life, his drug consumption, and his hi-jinks on the road with Neal Cassady and others.

Kerouac’s friend and mentor, William Burroughs, the author of the surrealistic novel Naked Lunch, once said that Kerouac persuaded a generation of Americans to drink espressos and to buy and wear Levis. Forty-three years after his death in 1969, Kerouac’s lifestyle is still contagious, and readers are still gobbling up books about him and about his work as though he were the golden boy and the patron saint of post-modern American literature.

Joyce Johnson’s The Voice Is All: The Lonely Victory of Jack Kerouac (2012) is the most recent Kerouac biography, and though it weighs in at nearly 500 pages, the author wisely doesn’t call it the definitive biography. She’s much better equipped to write about Kerouac today than anyone else in or out of academia. A scholar who has studied his manuscripts, she’s also an ex-lover who knew him personally in the late 1950s.

Johnson stops her story in 1951, when Kerouac was 29 years old and still had 18 more years to go. Most of Kerouac’s other biographers try to cram his whole life into one volume; readers often feel like they’re swimming in a sea of details, unable to recognize the shore or the main currents. Johnson always offers signs and signals that give a sense of direction. She frequently says, “for the first time...” -- and that’s helpful.

The publication of Johnson’s not-definitive biography offers the opportunity to reconsider Kerouac’s previous biographies, including Paul Maher Jr.’s Kerouac, which he unwisely calls “definitive.” It’s definitely not definitive.

It’s also a good time to reflect on the larger subject of Kerouac and the art of biography itself. It feels like it’s now or never, especially with three Jack Kerouac movies due to hit screens, including Walter Salles’s cinematic version of On the Road, which will probably blur the already blurry distinctions between biography and fiction that Kerouac himself created by writing what he called “true-story novels.”

He made that statement in the three-page self-portrait at the front of Lonesome Traveler, in which he also noted that he wrote On The Road in three weeks, a claim that Johnson argues persuasively is patently false. Speed mattered to Kerouac: writing fast and driving fast. Spontaneity mattered, too, though his speedy, spontaneous lifestyle and his rapid consumption of alcohol contributed to his death at 47.

But his early death is also part of his continuing appeal; he didn’t live long enough to betray his own youthful dreams and sense of innocence.

According to Beat scholar Ronna Johnson, who keeps count, there are 21 Kerouac biographies. They include: Tom Clark’s Jack Kerouac: A Biography, Gerald Nicosia’s Memory Babe, Ann Charters’s Kerouac: A Biography, Ellis Amburn’s Subterranean Kerouac: The Hidden Life of Jack Kerouac, Victor-Lévy Beaulieu’s Jack Kerouac: A Chicken Essay, Robert Hipkiss’s Jack Kerouac: Prophet of the New Romanticism, Dennis McNally’s Desolate Angel: Jack Kerouac, the Beat Generation and America, Barry Miles’s Jack Kerouac: King of the Beats, Warren French’s Jack Kerouac, Steve Turner’s Jack Kerouac: Angelheaded Hipster, and Barry Gifford and Lawrence Lee’s Jack’s Book: An Oral Biography of Jack Kerouac.

They all add to the Kerouac legend, and sometimes they shed light on Kerouac, too, though they’re often hagiography, not biography.

In some ways, Gifford and Lee’s 1978 book, which is still in print, is the most user-friendly because it offers the voices of Kerouac’s friends, lovers, and editors -- Lucien Carr, Carolyn Cassady, and Malcolm Cowley to name just a few -- and leaves it to readers to sit in the biographer’s chair and put all the pieces together.

Everyone can have his or her own version of Kerouac. Steve Turner’s biography Angelheaded Hipster -- the title comes from Ginsberg’s poem “Howl” -- has lots of photos of Kerouac, including one that shows him looking happy on a Montreal TV station in 1967, but for the most part Turner repeats the same old stories.

But that’s what nearly all of Kerouac’s biographers do. Many of the stories -- such as Kerouac’s first meeting with Neal Cassady, who inspired the Dean Moriarty character in On the Road -- are lifted entirely or in part from Kerouac’s novels, which means they aren’t the truth, the whole truth, and nothing but the truth. They’re somewhere between fact and fiction, legend and history.

All of Kerouac’s biographers see aspects of him, whether it’s his spirituality, sexuality, or duality, but none of them see as many different aspects as he saw of himself. No one else has combined as creatively as he did -- and in three pages, no less -- all the many congruent and incongruent aspects of his life into a kind of cubist self-portrait.

Kerouac saw himself as a man of nearly a dozen identities: an adventurer, lonesome traveler, hobo, exile, verse poet, mystic, drug taker, and solipsist. He was never just one thing. He didn’t even allow that he was an American pure and simple. He was a “Franco-American,” he insisted.

In the three-page self-portrait that he entitled “Author’s Introduction” he kept adding more adjectives to describe himself, finishing with an image of himself as “independent educated penniless rake going anywhere.” It was characteristic of him not to use commas. He never did care for conventional punctuation. Indeed, one could probably write a biography of Kerouac focusing on his grammar, his use of colons, semi-colons, and periods. They say a lot about his feeling for language, his sense of rhythm, and the spoken word.

The first rule for biographers, of course, is not to allow the subject of the biography to dictate the theme, the tone, or the meaning of the life. One wouldn’t want to be bound by Kerouac’s own outline of his brief, frenetic life. There are essentials he didn’t include, like the fact that he never learned to drive a car. Gerald Nicosia provides that nugget in Memory Babe. But Kerouac’s resume is a gift that no biographer would want to neglect, either.

None of Kerouac’s biographers are as concise as he was, none of them as poetical as he, and none of them as unapologetic about his seemingly chaotic life as he. He wasn’t embarrassed to say that his life was rudderless, directionless, and that he would go “anywhere.” His biographers have been intent on finding direction, goals, and meaning.

While Kerouac claimed in his three-page self-portrait that he had a “beautiful childhood,” most of his biographers have detected tragedy and deep troubles: the poverty of his exiled French-Canadian family, the death of his older brother, Gerard, and his father’s alcoholism. Biographers might do well to go back to his childhood and see it through his eyes, and through the eyes of his parents, too, as much as possible.

In his “Author’s Introduction,” Kerouac gave many of the essential biographical details about himself: birth on March 12, 1922; student at Columbia College from 1940 to 1942; his first novel, The Town and the City, written from 1946 to 1948 and published in 1950. But he gave more than the bare facts. He gave background, and he provided interpretations and insights.

Thus, he describes his father as a printer who was “soured in last years over Roosevelt and World War II.” Of his mother, he wrote that she “enabled me to write as much as I did.” Indeed, he depended on her. Kerouac mentions the death of his brother Gerard at age nine, and he acknowledges the influence on his writing of American and French authors such as Jack London, Ernest Hemingway, William Saroyan, Louis-Ferdinand Celine, and Thomas Wolfe.

London’s impact on Kerouac is often downplayed, though in his compact, lyrical 1984 biography, Tom Clark offers a pithy quotation from Kerouac about London. Kerouac called him “the greatest man that ever lived” and “the greatest union of the adventurer and the writer.” Clark also ends his unconventional though refreshing biography with a poem entitled “Jazz for Jack” in which he describes the author hitting the city, listening to jazz, and writing in his notebook.

What’s perhaps most striking about Kerouac's three-page creative resume is the long list of occupations and jobs that he held and that no biographer has used to paint a comprehensive portrait of Kerouac as a worker. He toiled, he noted, as a scullion and deckhand on ships, gas station attendant, railroad yard clerk, cotton-picker, forest service fire lookout, construction laborer, “script synopsizer” for 20th-Century Fox, and newspaper sports writer.

He was proud of all of his jobs and occupations. None was too lowly to mention. All of them together -- with the exception of his job for 20th-Century Fox -- suggest his affinities with the proletarian writers of the 1930s, and his sense of solidarity with the hobos, tramps, and migrant laborers of the Depression.

Of course, Kerouac didn’t include everything and everyone in his three-page account of his own life. There was no way he could in so short a space. His exploration and embrace of Buddhism in the 1950s doesn’t receive a single word. Nor did he say anything about his Beat friends from New York and Columbia in the 1940s: Allen Ginsberg and William Burroughs do not appear, nor do his wives and girlfriends, nor his daughter Jan.

Next to “Married” he wrote “Nah” and next to “Children” he wrote “No.” He was clearly in denial, though he made his brief resume with its many facts and ample details look honest and candid. Under “Special” he wrote “Girls.” He didn’t deny that he had an interest in the opposite sex or in sex itself, but he didn’t want to name names. There were too many women to name.

In 1960, when he wrote his “Author’s Introduction,” he was also eager to cut the ties he once had to Ginsberg and Burroughs, and to the Beat Generation itself. He didn’t like being called “The King of the Beats.” Beat pauper was more his style, since he identified with the down-and-outers.

So, he wrote that he was “actually not ‘beat’ but strange solitary crazy Catholic mystic.” It’s as useful a label as any other. Moreover, he pointed out that he had a basic complaint about the “contemporary world.” What irked him most of all was “the facetiousness of ‘respectable’ people” who were “destroying old human feelings older than Time Magazine.”

Despite all the jobs and occupations that brought him into the workaday world, he had a profound sense of himself as a solitary being, and a disgruntled one, too. Kerouac’s solitariness is in large measure what draws readers to him, but it’s not the only factor.

Readers are also moved by his profound longing for the “old human feelings older than Time Magazine,” by which he means love, friendship, comradeship, and loyalty, which he saw quickly eroding in the “sinister new kind of efficiency” that began, he thought, during the Korean War and that was, in some ways, “the result of the universalization of Television.” (He didn’t write those words in the “Author’s Introduction,” but in his 1957 essay “About the Beat Generation.”)

Like Burroughs and Ginsberg, he could sound conspiratorial. Ideologies and “isms” repelled him, but he was innately political and keenly aware of inequalities of wealth, power, and the force of cultural conformity.

Kerouac’s many biographers have tended to make it seem as though they discovered all on their own the hidden, secret, and subterranean life of their subject. But Kerouac revealed himself as a mystic, Catholic, “lascivious” rake, hobo, football player, worker, and more. He didn’t say “erotic” or “sexual.” He said “lascivious.” It seems as apt as any word to describe his frenetic sexual activity.

He also offered valuable clues about himself, many of which have never been pursued. His main writing teacher, he claimed, was his mother Gabrielle; “learned all about natural story-telling from her long stories about Montreal and New Hampshire,” he wrote. No biographer seems to have taken that claim seriously or to have investigated it and described it, perhaps because it’s too simple and obvious and because his mother wasn’t a published writer.

No one has been willing to say that Kerouac had a “mishmash” of a life, as he himself insisted, or that he was an abject failure as a father, a husband, and perhaps as a son, too, though he was profoundly loyal to his mother and father. Kerouac’s biographers have wanted Kerouac to be an angel -- sometimes fallen, sometimes not -- and a saint, too. They have dressed him up in a heroic suit of clothes that doesn't really fit him.

Of course, biographers don’t get paid, and they aren’t published, for writing about mishmashes, but rather for creating a sense of order, for imposing pattern, finding links, and offering psychological interpretations that put all the pieces together.

Joyce Johnson puts the pieces together with the help of Sigmund Freud. Kerouac, she wrote, had an “Oedipal complex” with his mother that affected his relationship with other women. But what American writer worth his very soul didn’t have a real or an imaginary Oedipal relationship with his mother, and what difference did it or didn’t it make to the writing itself? Probably none. Oedipus complexes don’t seem to help writers write or be published.

Johnson does a better job as a biographer when she discusses Kerouac’s life as a writer typing his endless sentences than when she plays amateur psychologist and shows him tied to his mother’s apron strings. She’s not the only biographer in that regard. For 40 years, biographers have enjoyed psychoanalyzing the author of On the Road.

To a large extent, they have missed the essential Kerouac: Franco-American disreputable mishmash literary genius no commas. Perhaps one day a biographer will use Kerouac’s snapshots of himself as portals into his life and work. Meanwhile, there’s the “Author’s Introduction” to Lonesome Traveler that sums up poetically the literary travels of a novelist who expressed the angst and ecstasy of the Beat Generation and nearly every generation since.

[Jonah Raskin, a regular contributor to The Rag Blog, is the author of American Scream: Allen Ginsberg’s Howl and the Making of the Beat Generation, and the editor of The Radical Jack London: Writings on War and Revolution. Read more articles by Jonah Raskin on The Rag Blog.]


Robert Jensen : Why We Won and How We Are Losing

Going, going... Image from Earthweek.

From start to finish:
Why we won and how we are losing
A review essay on human origins and contemporary crises.
By Robert Jensen / The Rag Blog / September 27, 2012

James Howard Kunstler, Too Much Magic: Wishful Thinking, Technology, and the Fate of the Nation (New York: Atlantic Monthly Press, 2012).
Michael T. Klare, The Race for What’s Left: The Global Scramble for the World’s Last Resources (New York: Metropolitan, 2012).
Ian Tattersall, Masters of the Planet: The Search for Our Human Origins (New York: Palgrave Macmillan, 2012).

We label as “crazy” those members of the human species whose behavior we find hard to understand, but the cascading crises in contemporary political, economic, and cultural life make a bigger question increasingly hard to ignore: Is the species itself crazy? Has the process of evolution in the hominid line produced a species that is both very clever and very crazy?

Paleoanthropologist Ian Tattersall ends his recent book, Masters of the Planet, with such a reflection:
[A]part from death, the only ironclad rule of human experience has been the Law of Unintended Consequences. Our brains are extraordinary mechanisms, and they have allowed us to accomplish truly amazing things; but we are still only good at anticipating -- or at least of paying attention to -- highly immediate consequences. We are notably bad at assessing risk, especially long-term risk. We believe crazy things, such as that human sacrifice will propitiate the gods, or that people are kidnapped by space aliens, or that endless economic expansion is possible in a finite world, or that if we just ignore climate change we won’t have to face its consequences. Or at the very least, we act as if we do (p. 227).
We humans routinely believe crazy things, but are we a crazy species? Does the big brain that allowed us to master the planet have a basic design flaw? Given the depth of the social and ecological crises we face -- or, in some cases, refuse to face -- should we be worried about whether we can slip out of the traps we have created?

Reading Tattersall along with recent books by two thoughtful analysts of resource depletion and ecological degradation, I find the answers quite obvious: yes, on all counts. We’re in more trouble than we want to believe, and we are not as well equipped to deal with our troubles as we imagine.

But I find some consolation in thinking about our current troubles in the context of our evolutionary history, which can help us understand why the vast majority of people are firmly committed to denying, minimizing, or ignoring the data about our troubles.

A good first step in moving beyond a focus on crazy individuals to the crazy species to which we all belong is the age-old question, “what makes us human?” Tattersall’s primary answer is that modern humans are defined by symbolic reasoning:
[F]or all the infinite cultural variety that has marked the long road of human experience, if there is one single thing that above all else unites all human beings today, it is our symbolic capacity: our common ability to organize the world around us into a vocabulary of mental representations that we can recombine in our minds, in an endless variety of new ways (p. xiv).
Tattersall also reminds us that while other animals can be cooperative, modern humans have a unique style of “prosociality” that leads us to care about the welfare of others in a much more expansive fashion than other primates do. Within the human family, we have the capacity for a deeper sense of empathy that is generalizable. We also have a history of eliminating competitive species; Homo sapiens have created a hominid monoculture:
From the very beginning of hominid history, the world had typically supported several different kinds of hominid at one time -- sometimes several of them on the very same landscape. In striking contrast, once behaviorally modern humans had emerged from Africa the world rapidly became a hominid monoculture. This is surely telling us something very important about ourselves: thoughtlessly or otherwise, we are not only entirely intolerant of competition, but uniquely equipped to express and impose that intolerance. It’s something we might do well to bear in mind as we continue energetically persecuting our closest surviving relatives into extinction (pp. 197-198).
Our hominid monoculture has of late been fond of other monocultures, particularly in the arenas of agriculture and energy. Large chunks of the modern world are dependent on an increasingly narrow range of plants for food and a dwindling source of concentrated energy from fossil fuels.

The two revolutions that have created us so-called civilized moderns -- the agricultural and the industrial revolutions, which are now intimately linked in our dependence on fossil-fuel based industrial agriculture -- are producing some unexpectedly unpleasant and revolutionary consequences. We’re running out of the resources on which our mass-consumption “lifestyle” is based, and the production of that lifestyle has unleashed destructive forces we can’t contain.

We may not be driving ourselves into extinction, but we are creating conditions that make our future frightening. Our symbolic reasoning capabilities, impressive as they may be, are not yet developed to the point where we can cope with the problems our symbolic reasoning capabilities have created. And, what’s worse, those capabilities seem to make it difficult for us collectively to face reality -- call that the delusional revolution, perhaps the scariest revolution of them all.

The message transmitted and/or reinforced by the culture’s dominant institutions (government, corporations, media, universities) seems to be: (1) it’s not as bad as some people think; but (2) even if it is that bad, we’ll invent our way out of the problems; and (3) if we can’t invent our way out, we’ll just pretend the problems aren’t really problems. In short: deny, minimize, ignore.

Before dealing with the obvious limitations of that strategy, let’s review the reality, starting with Michael Klare’s lucid account of The Race for What’s Left. Chapter by chapter, Klare methodically demonstrates why his subtitle is not hyperbole; these are, literally, the world’s last resources, and the competition for them will only intensify.

While resource competition is not new, this stage of the game is without precedent: “The world is entering an era of pervasive, unprecedented resource scarcity” (p. 8). There are no new frontiers to exploit, technology’s capacity to extract always-more is limited, and there are now more competitors than the traditional imperial powers.

Add to that the implications of global warming and climate disruption, which are not completely known but clearly destabilizing, and Klare’s conclusion -- “The race we are on today is the last of its kind we are likely to undertake” -- seems reasonable (p. 18).

The common glib response to this -- “people have long been predicting the end of things, and they’ve always been wrong” -- is a thin reed on which to lean. Past assessments of resource depletion may have been off a bit on the timing of the draw-down, but they haven’t been wrong. Consider this short summary of Klare’s survey:
  • Deep-water oil and gas drilling is touted as a savior, but it comes with much greater risk of environmental and political calamity.
  • The opening of new resources in the Arctic, which will become more accessible as global warming melts ice, comes with ownership disputes that will not be easily resolved and increased chances of military conflict.
  • The tar sands, shale gas, and other “unconventional hydrocarbons” require heavy energy inputs and create more problems in the production process. Klare quotes Howard Lacorde, a Cree trapper, reflecting on the tar sands: “The land is dead” (p. 103).
  • The main victims of ever more intense mineral mining are indigenous people and natural landscapes, raising troubling questions about how many people and how much land we are willing to sacrifice for industrial development.
  • On rare-earth minerals, China was willing to ignore environmental dangers to lower costs, and other countries with deposits -- Canada, Australia, and the United States -- dropped out of the market and can’t restart easily.
  • And then there’s the resource we can’t live without -- food. The global “land grabs,” particularly in Africa, by wealthy countries are exacerbating the loss of arable land due to desertification and urbanization. Welcome to “peak soil,” part of the era of what some are calling “peak everything.” Klare suggests we get used to “the end of ‘easy’ everything” (p. 210).
In the first seven chapters of the book, no reader is likely to accuse Klare of avoiding difficult realities. In his final chapter, however, he fails to confront forcefully what all this means. Klare points out that we can’t end our reliance on these materials overnight and that, although the transition has to start now, developing new technology will be expensive and it is cheaper in the short run to keep the old. There are incentives for people, corporations, and countries to compete in the race for what’s left, and he acknowledges that the “race to adapt” won’t immediately replace the “race for what’s left”:
In the short term, no doubt, those who prevail in the age-old struggle for finite resource supplies will still enjoy substantial economic and political rewards, but as time goes on those rewards will prove harder and harder to come by, while the price of failure will be increasingly high. On the other hand, those who focus on the new energy and materials technologies will have to pay high start-up costs but will see greater benefits in coming decades (p. 233).
It may be true, as he writes, that eventually “power and wealth will come not from control over dwindling resource supplies, but from mastery of new technologies” (p. 227). But he seems unrealistically confident that “ultra-efficiency and the adoption of renewables” will somehow win out:
At some stage, however, the economics of innovation will outperform the economics of procrastination -- especially when the price of oil and other finite resources becomes substantially higher, as is certain to happen (p. 228).
He argues that the countries that do this will gain competitive advantages by being freed up from supply disruptions and military needs.
Like the current scramble for the world’s last remaining resources, the race to adapt will spell doom for slow-moving companies, and it will cause a grand reshuffling of the global power hierarchy. But it is not likely to end in war, widespread starvation, or a massive environmental catastrophe -- the probable results of persisting with the race for what’s left (p. 234).
Those are nice notes on which to end -- hopeful without being naively optimistic. But there’s one problem: time is most definitely not on our side. If he’s right about the data, the time frame for these shifts is far less than is likely required for an even moderately smooth transition.

We’re not talking about problems for the slower companies or a mere reshuffling of the world hierarchy, processes for which we have historical precedents, but instead massive change of a very different order. Whatever we think we know about how this is going to unfold, it’s best to assume things won’t be predictable or pretty. After such a straightforward account of the data, Klare’s timid “race to adapt” rhetoric seems inadequate, even silly.

James Howard Kunstler is willing to be blunter. Despite my distaste for some of his odd political/cultural rants (more on that later), Kunstler is refreshingly uninterested in spinning a bad situation. He is willing not only to read the data about resources without illusion but also to assess the state of the culture without the triumphalism so common in the affluent world.

Let’s start with the question of time remaining. Kunstler writes that when people ask about the time frame for the “long emergency” (his phrase for our moment in history), he tells them that “we’ve entered the zone.” He’s not claiming a crystal ball and isn’t interested in specific prediction, nor does he have a tidy list of solutions. Instead, he points out that we can’t expect to tackle problems until we recognize them: “The most conspicuous feature of these times is our inability to construct a coherent consensus about what is happening to us and what we’re going to do about it” (p. 2).

Kunstler rejects the demand people often make that analysts and critics must always present “solutions.” What people typically want is not a serious conversation about what obviously has to change; the first step in talking about real solutions is to recognize we humans must dramatically reduce our consumption of energy and materials, effectively ending the lifestyle of widespread affluence subsidized by cheap energy. Because that’s hard, people are “clamoring desperately for rescue remedies that would allow them to continue living exactly the way they were used to living, with all the accustomed comforts” (p. 7).

Kunstler avoids the popular term “collapse,” which implies dramatic destruction, and prefers “contraction.” But whatever the term, there’s no avoiding that we have “no credible model of a postindustrial economy that would permit our accustomed comfort and convenience to continue as is” (p. 10).

Borrowing from anthropologist Joseph Tainter, who argues that societal collapse often results from an overinvestment in complexity that has diminishing marginal returns, he avoids rescue remedies that assume we can invent our way to paradise simply because we want that to be true. “Innovation cannot be an end in itself,” he writes, “and we have made ourselves prisoners to a cult of innovation” (p. 52).

He not only rejects techno-fantasies such as vertical farming in skyscrapers, but recognizes that lots of good projects aren’t going to get us all the way home. For example, urban gardens can’t replace large-scale farming -- fresh produce is great, but humans live primarily on grain crops (wheat, rice, corn, beans) that won’t be grown in community gardens.

Dreams of replacing the concentrated energy of fossil fuels are just that, dreams. There’s nothing wrong with sensible research on, and production of, renewable energy. But whatever might eventually come from those sources, “we must be prepared to live differently. We are not going to run the familiar infrastructures of modernity on any combination of wind, solar, et cetera” (p. 184).

Forget the rescue remedies: “our vaunted ingenuity has not produced a revolutionary energy resource to replace the cheap fossil fuel that modernity absolutely requires in colossal amounts” (p. 188).

To think clearly about what to do now, we need to think honestly about what is achievable:
Our longer-term destination is a society run at much lower levels of available energy, with much lower populations, and a time-out from the kinds of progressive innovation that so many have taken for granted their whole lives. It was an illusory result of a certain sequencing in the exploitation of resources in the planet earth that we have now pretty much run through. We have an awful lot to contend with in this reset of human activities (p. 196).
Kunstler is clear-headed in his analysis of resources, but he turns both too rosy and too cranky when he starts talking politics. The too-rosy glasses come on when he reflects on U.S. history and gives in to golden-age talk about the good old days when capitalists weren’t so greedy and politicians were nobler.

He holds up odd examples of great presidents, such as Theodore Roosevelt (yes, a conservationist but also a racist supporter of eugenics and a particularly nasty imperialist) and John F. Kennedy (a conventional politician of limited courage in confronting domestic opponents and dangerous macho posturing on the world stage).

The too-cranky comes when he dismisses anyone with a critique of patriarchy and white-supremacy as “race-and-gender special pleaders” (p. 91). He also can get downright strange, at one point claiming that when working-class people began to prosper in a post-World War II era of economic expansion, culture suffered because “lower ranks of American society were able to despotically impose their tastes on everybody else,” which “drove truth and beauty in the arts so far underground that the sheer memory of it, let alone truth and beauty themselves, may be unrecoverable” (pp. 223-224).

Much of pop culture is corrosive, but he appears to think this problem is centered not in, for instance, profit-driven media but the very limited democratizing of society in recent years. He has disdain for multiculturalism, which is understandable given the lukewarm version of “diversity talk” that dominates the culture, and he makes the reasonable point that some common culture will be essential for a society facing these challenges.

But rather than struggle to understand how we can make sense of the reality of living in a society that has changed culturally, and will continue to change, he seems to prefer to sink into nativist rhetoric.

Kunstler’s crankiness is not a trivial concern, but it shouldn’t obscure the important point he makes: Under conditions of some abundance, we may find it relatively easy to talk about universal human rights (even if we rarely respect them) and solidarity (even if we rarely practice it).

In good times, humans can do a reasonable job of coming together across differences in race, ethnicity, culture, and ideology to work toward common goals. But whatever limited success we’ve had to date may tell us little about what will happen in a time of contraction and intensified resource competition. Strive as we may to act on the better angels of our nature, the devil may be in the devolution of First World societies, when people accustomed to affluence find themselves facing hard choices.

Those who are used to proclaiming the moral superiority of Western “civilization” may find that the moral resources of that civilization are less robust than its triumphalism has long asserted.

So, what is to become of us? Tattersall reminds us that the biological process of evolution isn’t going to save us; there are too many people crammed too close together for any genetic novelties to emerge that might improve us. We are going to face these problems with the brain we have today, the same one that got us into this trouble. Tattersall holds out some hope for our cognitive abilities, for the possibility that human innovation isn’t over. He argues that:
this exploration of our existing capacity is far from exhausted. Indeed, one might even argue that it has barely begun. So, while the auguries appear indeed to be for no significant biological change in our species, culturally, the future is infinite (p. 232).
Certainly human innovation will continue, but Klare’s and Kunstler’s books remind us that human innovation is not a get-out-of-collapse-free card. To date, the dominant culture in the United States has been unwilling to confront the reality of multiple ecological crises. In our current presidential campaign, the Republicans simply deny there is a problem, while Democrats acknowledge some aspects of the problem but spin techno-fundamentalist fantasies to avoid the hard choices.

If we look honestly at the ecological realities and the political liabilities, it’s difficult to continue to talk about hope in naïve ways, maybe even to talk about hope at all.

Although he’s often portrayed as a doomsayer, Kunstler ends his book with about as sensible a comment on hope as I can imagine:
I certainly believe in facing the future with hope, but I have learned that this feeling of confidence does not come from outside you. It’s not something that Santa Claus or a candidate for president is going to furnish you with. The way to become hopeful is to demonstrate to yourself that you are a competent person who can understand the signals that reality is sending to you (even from its current remove offstage) and act intelligently in response (p. 245).
I’ve heard people try to escape this challenge by saying, “Well, species go extinct, and humans are no different.” True enough, but there’s a lot of human suffering between today and our eventual extinction. And if we are a uniquely prosocial species with unique capacities to not only live in the world but think about it, glib remarks about extinction are appropriate only for sociopaths. Instead, let’s live up to our own bragging about ourselves, and try to be both morally and intellectually honest.

One good first step might be to stop bragging, to resist the temptation always to tell a story about Homo sapiens that casts us as the hero. Tattersall recounts how a first-rate evolutionary biologist, Ernst Mayr, once erroneously proposed that there was only one highly variable hominid species instead of several. Tattersall describes Mayr’s thesis as:
intuitively a very attractive proposition to members of a storytelling species that also happens to be the only hominid in the world today. It is somehow inherently appealing to us to believe that uncovering the story of human evolution should involve projecting this one species back into the past: to think that humanity has, like the hero of some ancient epic poem, struggled single-mindedly from primitiveness to its present peak of perfection (p. 87).
But Mayr turned out to be wrong, and Tattersall offers it as a cautionary tale. In another section he points out that in paleoanthropology, the order of discovery of fossils has influenced our interpretation of them; the fact that older fossils often were discovered after newer ones is crucial to understanding the development of the field:
[I]t should never be forgotten that everything we believe today is conditioned in some important way by what we thought yesterday; and some current controversies are caused, or at least stoked, by a reluctance to abandon received ideas that may well have outlived their usefulness (p. 26).
That’s good advice in any endeavor. The idea that human innovation will save us -- summed up in the truism that “necessity is the mother of invention” -- may be one of those received ideas that we need to jettison, ASAP. The fact that we’ve invented our way out of some problems in the past doesn’t mean that we will continue to do so indefinitely, especially since the unintended consequences of those inventions keep piling up.

In the end, the science that helps reveal our past or create our present is likely to be inadequate in providing the moral guidance we need for the future. These are times when I find religious language to be helpful, whatever one’s particular beliefs about theology. One way to sum up the human predicament is to think of ourselves as cursed with consciousness. Back to Tattersall:
Other creatures live in the world more or less as Nature presents it to them; and they react to it more or less directly, albeit sometimes with remarkable sophistication. In contrast, we human beings live to a significant degree in the worlds that our brains remake -- though brute reality too often intrudes (p. xiv).
That reality is getting more brutal by the minute. Homo sapiens have the gift of an amazing symbolic capacity which has allowed us to create a wondrous world in which we cannot live much longer if we remain on our current trajectory. In one of humans’ more popular origin myths, we once were banished from a glorious garden as a result of that symbolic capacity, and after that banishment we sharpened our symbolic capacity and created civilization, which has never stopped being a source of problems.

The unintended consequences of civilization now leave us a choice: use the big brain to face our problems or continue our denying, minimizing, and ignoring. The former path is uncertain; the latter is guaranteed to end ugly.

Will this send us back to the garden, hat in hand, asking for a second chance to understand our place in Nature rather than trying to rule over Nature? We once gave up the Tree of Life for a bite at the Tree of the Knowledge of Good and Evil. To suggest we rethink our relationship to that second tree is not an argument against knowledge but rather a reminder of our limits.

We may not be godlike in our ability to know good and evil, but we can, as Kunstler recommends, do our best to understand the signals that reality is sending and act intelligently. The same consciousness that brought us to this place in history provides the vehicle for getting us out. We are stuck using the asset that got us in trouble to try to get out.

This suggests to me that there is, indeed, a god: the God of Irony.

[Robert Jensen is a professor in the School of Journalism at the University of Texas at Austin and board member of the Third Coast Activist Resource Center in Austin. He is the author of Arguing for Our Lives: Critical Thinking in Crisis Times (City Lights, coming in 2013). His writing is published extensively in mainstream and alternative media. Robert Jensen can be reached at rjensen@uts.cc.utexas.edu. Read more articles by Robert Jensen on The Rag Blog.]

The Rag Blog


Bob Fitrakis and Harvey Wasserman : Could Nine GOP Governors Flip the Vote?

Boss Tweed: "As long as I count the votes, what are you going to do about it?” Cartoon by Thomas Nast / Harper’s Weekly, October 7, 1871. Image from Scoop Independent News.

Will nine GOP governors electronically
flip Romney into the White House?
In tandem with the GOP's massive nationwide disenfranchisement campaign, they could -- in the dead of election night -- give Romney a victory in the Electoral College.
By Bob Fitrakis and Harvey Wasserman / The Rag Blog / September 27, 2012
Journalist Harvey Wasserman and author Tova Andrea Wang will discuss Voter Suppression in America with Thorne Dreyer on Rag Radio, Friday, October 5, 2012, 2-3 p.m. (CDT), on KOOP 91.7-FM in Austin, and streamed live on the Internet. Rag Radio is rebroadcast on WFTE-FM in Scranton and Mt. Cobb, PA, Sunday mornings at 10 (EDT).
Nine Republican governors have the power to put Mitt Romney in the White House, even if Barack Obama wins the popular vote.

With their secretaries of state, they control the electronic vote count in nine key swing states: Florida, Virginia, Pennsylvania, Ohio, Michigan, Iowa, Arizona, and New Mexico. Wisconsin elections are under the control of the state’s Government Accountability Board, appointed by the governor.

In tandem with the GOP's massive nationwide disenfranchisement campaign, they could -- in the dead of election night -- flip their states' electronic votes to Romney and give him a victory in the Electoral College.

Thankfully, resistance has arisen to the disenfranchisement strategy, which seems designed to deny millions of suspected Democrats the right to vote. The intent to demand photo ID for voting could result in some 10 million Americans being disenfranchised, according to the Brennan Center at New York University. Other methods are being used to strip voter rolls -- as in Ohio, where 1.1 million citizens have been purged from registration lists since 2009. This "new Jim Crow" -- personified by groups like Houston-based True the Vote -- could deny the ballot to a substantial percentage of the electorate in key swing states.

This massive disenfranchisement has evoked a strong reaction from voting rights activists, a number of lawsuits, major internet traffic, and front-page and editorial coverage in The New York Times.

But there has been no parallel campaign to guarantee those votes are properly counted once cast. Despite serious problems with electronic tabulations in the presidential elections of both 2000 and 2004, electronic voting machines have spread further throughout the country.

In Ohio, former Secretary of State J. Kenneth Blackwell awarded a no-bid state contract to GovTech -- a well-connected Republican-owned company which no longer exists -- to help count Ohio’s vote. GovTech contracted with two equally partisan Republican companies: Smartech for servers and Triad for IT support (Push and Pray Voting).

Electronic voting machines with ties to Republican-connected companies have proliferated throughout Ohio. Federal money from the Help America Vote Act has helped move electronic voting machines into other key swing states in substantial numbers that are not easy to track.

The machines can quickly tabulate a winner. But their dark side is simple: there is no way to monitor or double check the final tally. These partisan Republican vote counting companies have written contracts to avoid transparency and open records laws.

American courts have consistently ruled that the hardware and software used in e-voting machines is proprietary. For example, California’s Public Records Act (CPRA) contains a Trade Secret Exemption. The courts in California apply a “balancing test” to determine whether the Trade Secret Exemption applies, but the contracts with voting machine vendors are written in such a way that the court usually has no other choice but to side with the vendors and the state and county election officials who inked the contract. High priced attorneys like Daniel McMillan of the Jones Day firm are often hired to "clarify" the law for the court.

In a filing with the Voting Systems Procedures Panel of the California Secretary of State’s office during the 2004 election, McMillan hammered out a “Stipulated Confidentiality Agreement” that states in part that a public records request by a voting activist “contain[s] confidential proprietary or trade secret information” and thus, is not a public record.

Also that year, McMillan showed up in Georgia on behalf of the infamous Diebold Election Systems company and invoked the Peach State’s Trade Secret Exemption to the open record law. McMillan wrote: “If information constitutes a trade secret under the Georgia Trade Secrets Act, the government agency in custody of the information has a duty to protect the information” from public scrutiny.

McMillan goes on to argue that there’s also a Computer Software Exclusion that, “To the extent that any request is made for Diebold’s computer program or software, such a request would not be a valid request for a public record.” Diebold’s attorney cited the concern that “...it makes it easier to sabotage and hack the system and circumvent security features” if there’s transparency.

That same year in Ohio, Diebold’s secret pollbook system "accidentally" glitched 10,000 voters in the Cleveland area from the registration rolls. During the 2004 election in Toledo, thousands of voters lost their votes on Diebold optiscan machines that were improperly calibrated or had the wrong markers. How the calibration and markers work is a trade secret.

So, even the election boards that buy these machines cannot access their tabulation codes. The bulk of the major e-voting machine companies are owned by Republicans or by corporations whose roots are difficult to trace. While We Still Have Time by Sheila Parks of the Center for Hand Counted Ballots warns that we enter the 2012 election with no reliable means of guaranteeing that the electronic vote count will be accurate.

In fact, whether they intend to do it or not, the Republican governors of the nine key swing states above have the power to flip the election without significant public recourse. Except for exit polls there is no established way to check how the official electronic vote count might square with the actual intent of the electorate. And there is no legal method by which an electronic vote count can be effectively challenged.

There is unfortunate precedent. In the heat of election night 2000, in Volusia County, Florida, 16,000 electronic votes for Al Gore mysteriously disappeared, and 4,000 were erroneously awarded to George W. Bush, causing an incorrect shift of 20,000 votes. This was later corrected. But the temporary shift gave John Ellis at Fox TV News (Ellis is George W. Bush’s first cousin) an opening to declare that the GOP had won the presidency. NBC, CBS, and ABC followed Fox’s lead and declared Bush the winner based on a computer error. That "glitch," more than anything else, allowed the Republicans to frame Gore as a “sore loser.”

In Ohio in 2004, at 12:20 a.m. on election night, the initial vote tabulation showed John Kerry handily defeating Bush by more than 4%. This 200,000-plus margin appeared to guarantee Kerry’s ascent to the presidency.

But mysteriously, the Ohio vote count suddenly shifted to Smartech in Chattanooga, Tennessee. With private Republican-connected contractors processing the vote, Bush jumped ahead with a 2% lead, eventually winning with an official margin of more than 118,000 votes. Such a shift of more than 6%, involving more than 300,000 votes, is a virtual statistical impossibility, as documented in our Will the GOP Steal America's 2012 Election?

That night, Ohio’s vote count was being compiled in the basement of the old Pioneer Bank building in Chattanooga, Tennessee. The building also housed the servers for the Republican National Committee and thus the e-mail of Bush advisor Karl Rove. Secretary of State Blackwell was co-chair of the Ohio Committee to Re-Elect Bush and Cheney. He met earlier that day in Columbus with George W. Bush and Karl Rove. That night, he sent the state’s chief IT worker home early.

The official Ohio vote count tabulation system was designed by IT specialist Michael Connell, whose computer company New Media was long associated with the Bush family. In 2008 Connell died in a mysterious single-engine plane crash after being subpoenaed to testify in the federal King-Lincoln-Bronzeville voter rights lawsuit (by way of disclosure: Bob is an attorney and Harvey a plaintiff in this lawsuit).

FreePress.org covered the vote shift in depth. The King-Lincoln suit eventually resulted in a federal injunction ordering Ohio’s 88 counties to turn over their ballots and election records.

But 56 of Ohio’s 88 counties violated the injunction and destroyed their election records. Thus no complete recount of Ohio 2004 has ever been done. More than 90,000 “spoiled” ballots, like those in Toledo, went entirely uncounted, and have since been destroyed.

No way was ever found to verify the 2004 electronic vote count. There are no definitive safeguards in place today.

In 2008, swarms of election protection volunteers filled the polling stations in Ohio and other swing states. They guaranteed the right to vote for many thousands of Americans who might otherwise have been denied it.

They had no means of guaranteeing the accuracy of the electronic vote count. But Pennsylvania, Ohio, and Michigan all had Democratic governors at the time. Florida’s governor was the moderate Republican Charlie Crist, not likely to steal an election for a party he would soon leave.

At the time, we advocated banning money from electoral politics, abolishing the Electoral College, universal automatic voter registration for all U.S. citizens, universal hand-counted paper ballots, and a four-day weekend for voting, with polls worked and ballots counted by the nation's students.

But as Sheila Parks puts it in her new book, which is subtitled The Perils Of Electronic Voting Machines and Democracy's Solution: Publicly Observed, Secure Hand-Counted Paper Ballots (HCPB) Elections: "In 2010, ultra-right-wing Republican governors were elected in Alabama, Arizona, Florida, Maine, Michigan, New Jersey, Ohio, South Carolina, Texas, and Wisconsin. In several of these states, these governors were not part of a long line of Republican governors. In fact, in some of these states, these governors interrupted a long line of Democratic governors."

So this year Rick Scott is governor in Florida, Tom Corbett in Pennsylvania, John Kasich in Ohio, Rick Snyder in Michigan, Scott Walker in Wisconsin, and Jan Brewer in Arizona. All are seen as hard-right Republicans unlikely to agonize over flipping a Barack Obama majority into a victory for Mitt Romney.

That doesn’t mean they would actually do such a thing. But the stark reality is that if they choose to, they can, and there would be no iron-clad way to prove they did.

Another stark reality: hundreds of millions of dollars are being spent to win this election by multi-billionaires Sheldon Adelson, Charles and David Koch, the Chamber of Commerce, and other corporate interests. For them, spending a few extra million to flip a key state's electoral votes would make perfect sense.

While Obama seems to be moving up in the polls, the huge reservoir of dollars raised to elect Mitt Romney will soon flood this campaign. We might anticipate well-funded media reports of a “surge” for Romney in the last two weeks of the election. Polls could well show a "close race" -- for Congress as well as the presidency -- in the early hours of election day.

And then those electronic voting machines could be just as easily flipped on election night 2012 as they were in Ohio 2004.

Would this batch of swing state Republicans do that for Romney? We don’t know.

COULD they do it? Absolutely.

Would you be able to find definitive, legally admissible proof that they did it? No.

Would the courts overturn such a tainted victory? Not likely.

What could ultimately be done about it?

In the short term: nothing.

In the long term, only a bottom-up remaking of how we cast and count ballots can guarantee this nation anything resembling a true democracy. It is, to put it mildly, a reality worth fighting for.

[Bob Fitrakis and Harvey Wasserman are authors of Will the GOP Steal America's 2012 Election?, their fifth book on election protection. It is available as an e-book at harveywasserman.com and freepress.org. Read more of Harvey Wasserman and Bob Fitrakis' writing on The Rag Blog.]

The Rag Blog


26 September 2012

Tom Hayden : U.S. Special Forces are Back in Iraq

Special Forces patrol in Iraq, 2005. U.S. Army photo.

U.S. Special Forces Deployed in Iraq, again
The irony is that the U.S. is protecting a pro-Iran Shiite regime in Baghdad against a Sunni-based insurgency while at the same time supporting a Sunni-led movement against the Iran-backed dictatorship in Syria.
By Tom Hayden / The Rag Blog / September 26, 2012

Despite the official U.S. military withdrawal last December, American special forces "recently" returned to Iraq on a counterterrorism mission, according to an American general in charge of weapons sales there. The mission was reported by The New York Times, in the 162nd line in the fifteenth paragraph of a story about deepening sectarian divides.

The irony is that the U.S. is protecting a pro-Iran Shiite regime in Baghdad against a Sunni-based insurgency while at the same time supporting a Sunni-led movement against the Iran-backed dictatorship in Syria. The Sunni rebellions are occurring in the vast Sunni region between northwestern Iraq and southern Syria where borders are porous.

During the Iraq War, many Iraqi insurgents from Anbar and Diyala provinces took sanctuary in Sunni areas of Syria. Now they are turning their weapons on two targets, the al-Maliki government in Baghdad and the Assad regime in Damascus.

The U.S. is caught in the contradictions of proxy wars, favoring Iran's ally in Iraq while trying to displace Iran's proxy in Syria.

The lethal complication of U.S. Iraq policy is a military withdrawal that was propelled by political pressure from American public opinion even as the war could not be won on the battlefield. Military "redeployment," as the scenario is described, is a general's nightmare.

In the case of Vietnam, a "decent interval" was supposedly arranged by the Nixon administration to create the appearance of an orderly American withdrawal. During the same "interval," Nixon massively escalated his bombing campaign to no avail. Two years after the 1973 Paris Peace Accords, Saigon collapsed.

It is unlikely that the Maliki regime will fall to Sunni insurgents in Iraq, if only because Sunnis make up only about 20 percent of the population. However, the return of U.S. Special Forces is not likely to restore Iraqi stability, and they may become trapped in crossfire as sectarian tensions deepen.

The real lesson may be for Afghanistan, where another unwinnable, unaffordable war in support of an unpopular regime is stumbling towards 2014, the timeline for the end of U.S. combat. That's the same year in which Hamid Karzai's presidential term ends.

Was anyone in the U.S./NATO alliance planning a "decent interval" for the Humpty Dumpty in Kabul? The U.S. military "surge" there ended just this week, with 68,000 U.S. troops to be radically reduced in two years.

2. America's last months in Iraq: Michael Gordon's version

The New York Times often relies on its national security correspondent, Michael Gordon, an insider with close ties to military and intelligence professionals, to obtain a quasi-official version of events in the Long War. Gordon's Sept. 23 account of "failed efforts and challenges of America's last months in Iraq" is illuminating but hardly the final word.

To what is already known, Gordon adds that the White House tried to lobby for a widening of the Maliki government to include a role for Ayad Allawi's mainly Sunni Iraqiya bloc. Those efforts failed. The Americans also hoped that the Baghdad regime would accept up to 16,000 "residual" troops for training, air support, and counterterrorism. The proposal was pushed hard by the Pentagon after "an earful" from Saudi Arabia and other Sunni states.

The White House, "looking toward Mr. Obama's re-election campaign, had a lower number in mind." On April 29, the national security adviser, Tom Donilon, asked the defense secretary, Robert Gates, if he could accept "up to" 10,000. Gates said yes but was circumvented by the Joint Chiefs, led by Adm. Mike Mullen, who sent a classified letter to Obama warning that 16,000 were needed.

The secret proposal was endorsed by the U.S. commander in Iraq and the head of Central Command. The letter "arrived with a thud" at the White House, stirring an angry response.

Then on June 2, the president emphasized to Maliki that any new agreement would need ratification by the Iraqi parliament -- a virtual impossibility. The agreement would require "airtight immunities" for any U.S. troops left behind -- another nail in the coffin.

Hillary Clinton and Leon Panetta revived the proposal for 10,000.

On Aug. 13, Obama "settled the matter" by rejecting the 10,000 figure and a lower version of 7,000. He offered a token rotating presence of 1,500 U.S. troops at a time, up to 3,000 in all, plus six F-16s. The question Gordon never addresses is whether Obama knew his proposals would be rejected by the Iraqis, allowing him to withdraw while leaving responsibility with the Iraqis.

On a personal note, I have interviewed one American official present in the small White House discussions about those numbers. This former official told me that Donilon proposed the 10,000 figure. Aside from being national security adviser, I asked, did he say he was making an official offer from President Obama, or was he authorized to float the number as part of a continuing discussion? The number, he said, came from Donilon as an offer.

We may never know what Obama's bottom line was. He must have known the Iraqi parliament was a hotbed of sovereignty where a deal with the Americans would take weeks of rancor before failing. He must have known that an offer to discuss 1,500 troops and six jets would be an embarrassing token to leave behind.

On Oct. 21, the president video-conferenced Maliki -- for the first time in four months -- and told him the negotiations were over and all the U.S. troops were coming home.

Gordon's account, published in a book this week, is a critique of Obama's withdrawal, which he says leaves Iraq "less stable domestically and less reliable internationally." He complains of no troops on the ground, no Americans to "patrol the skies," and severe cuts in Iraq's police force.

Gordon never explains why leaving behind a small handful of American troops would have secured American objectives, which he says in hindsight were to create a "stable and representative government," avoid a power vacuum for terrorists, and "sufficient influence" so that Iraq would be an American partner, or at least not an opponent, in the Middle East.

What Gordon doesn't say is that those objectives were impossible to achieve in an almost nine-year war that cost the U.S. 4,446 deaths and 32,227 wounded, a taxpayer bill of $807 billion in direct costs, and left Iraq itself a ravaged wasteland.

This is an expanded version of an article that appears in The Nation.

[Tom Hayden is a former California state senator and leader of Sixties peace, justice, and environmental movements. He currently teaches at Pitzer College in Los Angeles. His latest book is The Long Sixties. Read more of Tom Hayden's writing on The Rag Blog.]

The Rag Blog


25 September 2012

RAG RADIO / Thorne Dreyer : Singer-Songwriters Bob Cheevers & Noëlle Hampton and André Moran

Above, from left, musicians André Moran, Bob Cheevers, and Noëlle Hampton, with Rag Radio's Thorne Dreyer and Tracey Schulz. Inset photos below of Bob Cheevers, Noëlle Hampton, and André Moran. All photos were taken in the KOOP studios in Austin, Texas, on Sept. 21, 2012. Photos by Sharon Kay Berger / The Rag Blog.

Rag Radio Podcast:
Singer-songwriters Bob Cheevers,
Noëlle Hampton, and André Moran

By Rag Radio / The Rag Blog / September 25, 2012

Singer-songwriter Bob Cheevers, and the musical team of Noëlle Hampton and André Moran, were Thorne Dreyer's guests on Rag Radio, Friday, September 21, 2012, on KOOP 91.7-FM in Austin.

They performed live on the show and discussed their work, the larger music scene, and the relationship between musicians and the community. Listen to the show here.

Rag Radio features hour-long in-depth interviews and discussion about issues of progressive politics, culture, and history. The syndicated show is produced in the studios of KOOP-FM, Austin's cooperatively-run all-volunteer community radio station. It is broadcast live on KOOP and streamed live on the Internet, and is rebroadcast on WFTE-FM in Mt. Cobb and Scranton, PA.

Bob Cheevers, who has been in the music business since the 60s, was the 2011 Texas Music Awards "Singer/Songwriter of The Year.” An Emmy-winning songwriter, Cheevers' releases have been in the UK’s Americana Top 10 and in the Top 20 and 30 on the U.S. Americana charts. Austin-based Cheevers grew up in Memphis, has worked in Los Angeles and Nashville, and has toured extensively in the U.S. and Europe. Johnny Cash and Waylon Jennings have recorded his songs.

Noëlle Hampton -- a “rootsy/Americana” singer-songwriter and indie rocker -- and her husband (and virtuoso guitarist) André Moran are a musical collaboration that began in the San Francisco Bay Area and relocated to Austin where they record and perform. Noëlle and André have opened for Bob Dylan, Sarah McLachlan, Jewel, Pat Benatar, and many others. They also perform as The Belle Sounds.

Bob Cheevers –- through his non-profit Over A Cheevers, Inc. -- is producing Be-Bob-Alooza at Austin’s Nutty Brown Café on Sunday, Sept. 30, from 5-10 p.m., with headliners Kevin Welch and Walt Wilkins' Mystiqueros. It is a benefit with proceeds going to an Austin singer/songwriter to be chosen by a panel of Austin music professionals. Noëlle Hampton and André Moran are contenders for the award.

Rag Radio has aired since September 2009 on KOOP 91.7-FM in Austin. Hosted and produced by Rag Blog editor and long-time alternative journalist Thorne Dreyer, a pioneer of the Sixties underground press movement, Rag Radio is broadcast every Friday from 2-3 p.m. (CDT) on KOOP, 91.7-FM in Austin, and is rebroadcast on Sundays at 10 a.m. (EDT) on WFTE, 90.3-FM in Mt. Cobb, PA, and 105.7-FM in Scranton, PA.

The show is streamed live on the web by both stations and, after broadcast, all Rag Radio shows are posted as podcasts at the Internet Archive.

Rag Radio is produced in association with The Rag Blog, a progressive internet newsmagazine, and the New Journalism Project, a Texas 501(c)(3) nonprofit corporation. Tracey Schulz is the show's engineer and co-producer.

Rag Radio can be contacted at ragradio@koop.org.

Coming up on Rag Radio:
THIS FRIDAY, September 28, 2012: Composer, Musician, Conductor, Writer, and Scholar David Amram.
October 5, 2012: Author Tova Andrea Wang and Journalist Harvey Wasserman on Voter Suppression in America.

The Rag Blog


Harry Targ : The 'Unfinished Revolution' of the Emancipation Proclamation

Emancipation from Freedmen's viewpoint. Illustration from Harper's Weekly, 1865. Image from Wikimedia Commons.

The Emancipation Proclamation:
The 'Unfinished Revolution'
The candidacy of President Obama in 2012 offers a continuation of the struggle for political rights against the most sustained racist assaults by neoliberals, conservatives, and tea party activists since the days of segregation.
By Harry Targ / The Rag Blog / September 25, 2012
"That on the first day of January, in the year of our Lord one thousand eight hundred and sixty-three, all persons held as slaves within any State or designated part of a State, the people whereof shall then be in rebellion against the United States, shall be then, thenceforward, and forever free… -- President Abraham Lincoln, “The Emancipation Proclamation,” January 1, 1863.
The Purdue University Black Cultural Center on September 21, 2012, organized a panel honoring the 150th anniversary of President Abraham Lincoln’s preliminary Emancipation Proclamation, the final version of which was issued by the President on January 1, 1863.

The proclamation declared slaves in the states rebelling against the United States to be free. It did not apply to those border states which had not seceded from the Union. In those states 750,000 slaves were yet to be liberated.

Celebration of political anniversaries provides an important opportunity to better understand the past, how the past connects to the present, and what needs to be done to connect the present to the future. As a participant on this panel I was stimulated to reflect on the place and significance of the Proclamation and the centrality of slavery and racism to American history.

First, as Marx suggested at the time, the rise of capitalism as a mode of production was inextricably connected to slavery and the institutionalization of racism. He described the rise of capitalism out of feudalism and the centrality of racism and slavery to that process:
The discovery of gold and silver in America, the extirpation, enslavement and entombment in mines of the aboriginal population, the beginning of the conquest and looting of the East Indies, the turning of Africa into a warren for the commercial hunting of black skins, signalized the rosy dawn of the era of capitalist production. These idyllic proceedings are the chief moments of primitive accumulation (Capital, Volume 1).
Second, the Emancipation Proclamation began a political revolution, abolishing slavery in Confederate states, but it did not embrace full citizenship rights for all African Americans nor did it support economic emancipation.

The historical literature documents that while Lincoln’s views on slavery moved in a progressive direction, the President remained more committed to preserving the Union than abolishing slavery. Until the Proclamation, he harbored the view that African-Americans should emigrate to Africa, the Caribbean, or Central America to establish new lives.

As historian Eric Foner wrote: “Which was the real Lincoln -- the racist or the opponent of slavery? The unavoidable answer is: both.” In short, President Lincoln, an iconic figure in American history, thought and acted in contradictory ways.

Third, Lincoln’s growing opposition to slavery during his political career and his presidency was influenced to a substantial degree by the abolitionist movement. As an influential participant in that movement, Frederick Douglass had a particular impact on Lincoln’s thinking.

Foner points out that on a whole variety of issues “Lincoln came to occupy positions the abolitionists first staked out.” He continues: “The destruction of slavery during the war offers an example, as relevant today as in Lincoln’s time, of how the combination of an engaged social movement and an enlightened leader can produce progressive social change.”

Fourth, the promise of the Emancipation Proclamation was never fully achieved. It constituted an “unfinished revolution,” the creation of political rights for former slaves but not economic justice. The former slaves remained dependent on the plantation system of agriculture, landless sharecroppers beholden to former slave owners.

Fifth, post-Civil War Reconstruction began to institutionalize the political liberation of African Americans. For a time Blacks and whites began to create new political institutions that represented the common interests of the economically dispossessed. But the collaboration of Northern industrial interests and Southern plantation owners led to the destruction of Reconstruction-era change and a return to the neo-slave system of Jim Crow segregation.

Even the “unfinished revolution” was temporarily crushed.

Sixth, over the next 100 years African Americans, workers, women, and other marginalized groups continued the struggle to reconstruct the political freedoms implied in the Emancipation Proclamation and temporarily institutionalized in Reconstruction America. The struggle for democracy culminated in the Civil Rights Act of 1964 and the Voting Rights Act of 1965, and in the rising of Latinos, women, and gays and lesbians.

Finally, the contradictions of victories achieved and the escalation of racist reactions since the mid-1960s continue. And, most vitally, the unfinished revolution continues. The question of the intersection of race and class remains as gaps between rich and poor in wealth, income, and political power grow.

In this historic context, the candidacy of President Obama in 2012 offers a continuation of the struggle for political rights against the most sustained racist assaults by neoliberals, conservatives, and tea party activists since the days of segregation.

At the same time, Obama’s reelection alone, while vital to the progressive trajectory of American history since 1863, will not complete the revolution. The need for social movements to address the “class question,” or economic justice, along with protecting the political gains that have been achieved, will remain critical to our future.

One hundred and fifty years after the Emancipation Proclamation, the struggle for democracy, political empowerment, and the end to class exploitation remains for this generation to advance.

[Harry Targ is a professor of political science at Purdue University who lives in West Lafayette, Indiana. He blogs at Diary of a Heartland Radical -- and that's also the name of his book from Changemaker Press which can be found at Lulu.com. Read more of Harry Targ's articles on The Rag Blog.]



BOOKS / Lamar W. Hankins : America's 'Gunfight' over Gun Control

America’s conflicts over gun control
Winkler’s research demonstrates clearly that Americans have always had the right to bear arms and the government has always had the right to regulate guns.
By Lamar W. Hankins / The Rag Blog / September 25, 2012

[Gunfight: The Battle over the Right to Bear Arms in America by Adam Winkler (2011: W.W. Norton & Company); 361 pp.; $27.95.]

Freethought groups generally don’t take positions on gun control and the right to bear arms. What freethinkers try to do is understand the evidence about various propositions and draw rational conclusions about those propositions based on the evidence.

For many years, debate has raged between those who oppose gun regulation and those who believe that the government has a role in regulating guns. Much of that debate has been based on myth, legend, erroneous history, and beliefs supported by little, if any, evidence.

Now, thanks to the work of Adam Winkler, a professor of constitutional law at UCLA, we have a book based on careful research that can help us separate fact from fiction when we discuss the right to bear arms and the regulation of that right. Gunfight: The Battle over the Right to Bear Arms in America makes a seminal contribution to the discussion. It will make the most vocal advocates in this debate, no matter their views, either enlightened or angry or both.

Winkler has found that some of our most persistent ideas, especially about the American Wild West, are false. Gunfights were not everyday occurrences, for instance. They happened now and again, but were not the norm, although almost everyone in the West during its developing years owned and carried guns, both rifles and hand guns. They needed to do so because there was danger all around -- from outlaws, desperate men, Indians, and wild animals.

But most western towns required that guns be checked when the owner came into town. Dodge City, Kansas, for example, prohibited the carrying of firearms in the 1870s. Most western towns had no murders during those days. The reason for these gun control regulations, according to their advocates (the predecessors to our modern-day Chambers of Commerce), was to create a civilized town that would grow and prosper.

Forty-three states protect the right of individuals to bear arms in their state constitutions, most either from the days of the founding of this country or from the early 1800s. Yet, we also have a gun regulation history that runs side-by-side with the established right to bear arms.

While the founders believed strongly in the private ownership of firearms, they did not believe in a standing army. Instead they supported militias, comprising ordinary citizens, who needed to possess firearms to fulfill their responsibilities. The Second Amendment protects the rights of states to have militias, and thereby the right of the people to possess firearms so they can serve in those militias.

The founders also supported gun control, barring many groups from owning firearms, including slaves, free blacks, and white loyalists -- those who did not support the revolution, about 40% of the population at the time. Gun owners were required to appear at mandatory musters of the militias, at which time they would present their guns for inspection, records of which were kept on public rolls. These laws were responses to the needs of the republic as they were perceived at the time.

In 1792, the founders passed an individual mandate that required every free white male between the ages of 18 and 45 to purchase a military-style firearm and ammunition. Bird-hunting guns were insufficient to satisfy the requirements of the Uniform Militia Act.

The most prominent opponent of gun control today is the National Rifle Association (NRA). But in the 1920s and 1930s, the NRA led the gun control movement, drafting and promoting laws that restricted carrying guns in public, mainly aimed at restricting the gangsters of that day. Some of those laws still exist, but are opposed by today’s NRA. Winkler traced the militant anti-gun control stance of today’s NRA to an unusual series of events.

In 1966, the Black Panther Party for Self-Defense was created. In 1967, a group of 30 Black Panthers marched into the state capitol of California with loaded guns, rifles, and shotguns displayed openly, and walked into the legislative session then in progress to protest a gun control bill under consideration.

These young black men had been policing the police with their own guns at the ready, following Oakland policemen around to make sure they did not do anything harmful to other black men and giving suspects advice on what they should do. They carried guns openly, holding them pointed toward the ground or toward the sky, both accepted methods of displaying firearms in public.

The actions of the Black Panthers led the California State Legislature to consider and pass laws restricting openly carrying guns in public. Then-Governor Ronald Reagan said on the day of the visit to the capitol by the Black Panthers, “There is no reason why on the street today a citizen should be carrying loaded weapons.”

Gun control and race had been linked before in American history. A key purpose of the Ku Klux Klan, for instance, was gun control. After the Civil War, the Union Army allowed both black and white soldiers to take their guns with them as part of the pay that was owed to them. Some southern blacks bought some of those guns. Black ownership of guns in the south after the Civil War led to laws prohibiting such ownership among blacks. The KKK forcefully disarmed many blacks, intentionally killing some in the process.

In 1967, Detroit and Newark, as well as other urban areas, experienced riots in which guns were used against policemen and National Guardsmen. So-called Saturday Night Specials, cheap handguns, were available to thugs, robbers, and thieves. Many observers believed the easy availability of firearms caused much of the lawlessness, criminal activity, and violence in America’s urban areas.

With so many guns available, many feared that revolution was about to break out. Congress passed the first major gun control law since the 1930s -- the Gun Control Act of 1968 -- which banned the importation of cheap handguns, expanded licensing for gun dealers, and barred felons from possessing guns. But really the Act was intended more to control blacks than to control guns.

The California law and the Gun Control Act started the modern backlash led by the NRA, whose mostly white, rural members became afraid that while the two laws might have been aimed at blacks, the government would come after their guns next.

In 1975, the City Council of Washington, D.C., enacted a major gun control law. The law prohibited residents from owning handguns, excluding those registered before February 5, 1977. In June 2008, the Supreme Court held that the city's handgun ban violated individuals' Second Amendment right to gun ownership.

The opinion, written by Justice Scalia, recognized, however, that gun control laws could be legitimately passed and enforced. The city’s firearm registration and assault weapon ban were allowed to stand, and its laws still prohibit carrying guns, both openly and concealed.

While Scalia based his opinion on his concept of originalism, his reasoning made clear that it was really based on looking at current conditions and determining that such laws could serve legitimate governmental interests. He suggested that laws against machine guns were OK because machine guns are not in wide use, a modern circumstance created by the gun control laws from the gangster years of the 1930s.

Such a finding undercuts his claims that there is an originalist way to interpret the Constitution. When the Constitution was adopted, machine guns did not exist.

Winkler’s research demonstrates clearly that Americans have always had the right to bear arms and the government has always had the right to regulate guns. The biggest problem he found is that much gun control is ineffective, in that only about 0.5% of guns in America are used for illegal activities. This is why total gun bans, such as the one struck down in the nation’s capital, don’t have a rational foundation.

Whether you are for or against gun control, Winkler’s book should at least help clarify the issues, so that arguments are not wasted on unfounded ideas and beliefs. We should be debating what gun control regulations will have a positive societal benefit and which are just regulation for regulation’s sake, or are put in place for unsubstantiated reasons.

For example, Winkler has suggested background checks for all gun purchasers, not just for those guns bought from licensed dealers. Such a step would make it harder for criminals to get guns, and would make it clearer how law-abiding citizens with personal guns they want to sell can follow the law.

Recent news suggests that another regulation or law that would be beneficial to society is to prohibit multiple sales of rifles to the same person during a set period of time. The practice of buying 10 or 20 or more semi-automatic rifles at one time has been used in Arizona to smuggle vast quantities of such rifles into Mexico, contributing to the extreme wave of violence that has been going on there for the past year or so, with its blowback on Americans.

Further, a requirement to register all guns would be of great help to law enforcement when they are investigating gun-related crime. If a gun is stolen, law enforcement will have good records about it. Such registration could be no more intrusive than the census.

Some will argue that this will be the first step to confiscation of guns by the government. After the Supreme Court decision in 2008, such an argument should be given no credibility. It is preposterous to argue that the government could confiscate guns when it is now clear that all of us have a constitutional right to own them. It is about as likely as the government closing all the churches.

In either case, massive insurrection would ensue, so this is a far-fetched, if not absurd, position.

I hope we continue to debate gun control and finally arrive at sensible, necessary regulations that will make us all safer so that we can enjoy civilized communities that will grow and prosper. This desire has been handed down to us from our forebears. We should be grateful for their wisdom.

[Lamar W. Hankins, a former San Marcos, Texas, city attorney, is also a columnist for the San Marcos Mercury. This article © Freethought San Marcos, Lamar W. Hankins. Read more articles by Lamar W. Hankins on The Rag Blog.]

