Brecher Brief


January 30, 2012

Austerity and the "Common Sense" Fallacy

One of the more aggravating conditions of public discourse is the emotional appeal—and hence, political appeal—of “common sense,” however foolish or misguided that common sense turns out to be. This is especially evident in debates over economic policy, where the “common sense” approach always seems to argue for deficit reduction and austerity. In a famous debate moment from the ’92 election, for instance, a young African American woman asked what turned out to be the campaign’s defining question: “How has the national debt affected you personally?” While George H.W. Bush stammered something about interest rates and glanced impatiently at his watch, Clinton hit the ball out of the park, assuring the woman that he felt her pain. Clinton understood what Bush didn’t: the question was really about the recession, job loss, and economic uncertainty. Debt and deficit were only the language she used to articulate those fears.

For average citizens, treating government debt as a synonym for economic turbulence makes perfect sense. After all, when individual families get into financial trouble it is often because they have piled up too much credit card debt. And when somebody loses their job, the last thing they ought to do is whip out their credit card and go on a spending spree. Even if taking on more debt winds up being the necessary evil that allows them to avoid starvation for a few more days, it is rarely thought of as a “solution.” So when a society goes into an economic tailspin, common sense argues that we extrapolate from personal experience. Hence it “seems” only obvious that the solution is government belt-tightening.

But whole societies and their governments are not individual families. Analogies that equate them are often stupid and sometimes dangerous. Still, some economists and politicians persist in making them. Consequently, we get the conservative fetish for deficit reduction in the United States, and the even more tragic mania for “austerity” that is now driving Europe into the abyss (while threatening our own fragile recovery). Case in point: Great Britain’s suicidal efforts at reform through austerity.

Laying out the case in his column today, Paul Krugman writes:

Last week the National Institute of Economic and Social Research, a British think tank, released a startling chart comparing the current slump with past recessions and recoveries. It turns out that by one important measure — changes in real G.D.P. since the recession began — Britain is doing worse this time than it did during the Great Depression. Four years into the Depression, British G.D.P. had regained its previous peak; four years after the Great Recession began, Britain is nowhere close to regaining its lost ground.


Nor is Britain unique. Italy is also doing worse than it did in the 1930s — and with Spain clearly headed for a double-dip recession, that makes three of Europe’s big five economies members of the worse-than club. Yes, there are some caveats and complications. But this nonetheless represents a stunning failure of policy.

And it’s a failure, in particular, of the austerity doctrine that has dominated elite policy discussion both in Europe and, to a large extent, in the United States for the past two years.

His conclusion is especially troubling:

The infuriating thing about this tragedy is that it was completely unnecessary. Half a century ago, any economist — or for that matter any undergraduate who had read Paul Samuelson’s textbook “Economics” — could have told you that austerity in the face of depression was a very bad idea. But policy makers, pundits and, I’m sorry to say, many economists decided, largely for political reasons, to forget what they used to know. And millions of workers are paying the price for their willful amnesia.

But half a century ago, things were different. Politicians who wished to pander to the misguided “common sense” fears of average voters could not so easily depend on theoretical support from a coterie of professional economists who take Austrian theory as an article of faith (evidence be damned); neither were they so liberally funded by a corporate class for whom the power to exploit was no longer a sin to be rationalized, but a sacred liberty to protect. And when sincere ignorance, self-interest, and piety collude, can any virtue be safe?


Posted by stevemack at 09:00 AM | Comments (0)

January 18, 2012

The Politics of "Job Cremation"

Last week Nate Silver weighed in on the question of whether the current attack on Romney’s Bain Capital history will exhaust itself by the general election. Some speculate that by tarring him as a “Vulture Capitalist,” Gingrich, Perry, and others are actually doing him a favor—in essence, inoculating him against the same charge when Obama makes it. The argument goes that, aired now, the attack becomes old news later on; moreover, Romney now has six months or more to practice his response.

Could be, but Silver doesn’t think so—and neither do I.

Silver’s first and most significant point is that the charge has the potential to undermine Romney’s “brand” as a job creator—the professional identity that functions as the fundamental argument for his candidacy:

Mr. Romney has made his private-sector experience a major theme of his campaign, using it to form a contrast to the “career politicians” he is running against in the primary as well as to Mr. Obama. “I think to create jobs it helps to have had a job,” Mr. Romney has said repeatedly. He has argued for less government regulation of private enterprise, and has said he would repeal new regulations on financial companies.

It would be one thing if the substance of the attacks on Mr. Romney had to do with an ethical problem at Bain Capital or some type of personal association. This is not to say that voters would have no right to consider the issue, but it would be somewhat peripheral to the core economic message of Mr. Romney’s campaign.

But these attacks take a different tack. They are a critique of Bain Capital’s business model and, by extension, a critique of a certain form of free-market activity that Mr. Romney and other Republicans advocate.

Ads like “When Mitt Romney Came to Town,” the 28-minute commercial put out by a “super PAC” that backs Newt Gingrich, adopt what appears to be a documentary style, but they present a one-sided view of the role played by private equity companies like Bain Capital, characterizing them as greedy and as lining the pockets of the wealthy at the expense of the working class. Were it not for the couple of clips of Mr. Romney speaking French, one would be shocked to learn that the ads had been produced by Republicans, rather than by a liberal filmmaker.

Arguments over job creation are going to be central to this year’s general election. It will be harder for Mr. Romney to defend his laissez-faire positions if Democrats can roll out clips of Republican partisans attacking him. Already, Gov. Rick Perry of Texas, the former head of the Republican Governors Association, has described Mr. Romney as a “vulture capitalist.” Newt Gingrich, the former speaker of the House, has said that Bain Capital has an “indefensible” business model.

It will not be above Mr. Obama’s campaign to simply replay clips of Mr. Gingrich and Mr. Perry making these remarks. Successful presidential campaigns have used such a tactic before. In 1980, for instance, Ronald Reagan’s campaign released a commercial that consisted of nothing more than a 25-second clip of a Ted Kennedy speech, in which Mr. Kennedy scolded the incumbent Democratic president, Jimmy Carter, on the Iranian hostage crisis and the country’s high inflation rates.

If the campaign is “merely” about job-creation competency, then Silver is, at least potentially, correct. But if liberals succeed in making the election a referendum on laissez-faire capitalism, if Obama makes the argument for the kind of mixed economy we’ve had for the last eighty years (an argument that has actually never been made in an election), and if Romney becomes the poster boy for the sort of rigid, draconian, “Austrian” economics that conservatives have championed since 1980, then Silver has actually understated the case and the Bain brand could be lethal.

Personalized attacks of this sort succeed when they resonate with a larger narrative. And if that larger narrative becomes, in effect, the organizing principle of the entire election, then such charges become self-validating. It will come down to whether liberals can pin the global economic downturn, radical—and worsening—inequality, and the sluggish recovery on Austrian economics and, by extension, on Romney’s often predatory business practices, which that theory legitimized.

Few will be moved by claims of business expertise when one's business is, as the DNC puts it, "job cremation."

Posted by stevemack at 09:41 AM | Comments (0)

January 13, 2012

The Decline of the Public Intellectual

Surely the Russian masses (such a crude and ugly word) of a century ago were no more attentive to their intellectuals, public or otherwise, than we are today. And to hear American intellectuals tell it, that’s setting the bar pretty low.

Giving expression to a certain kind of anxiety of influence has become a clichéd preoccupation of public intellectuals in the last two decades. Not Harold Bloom’s creativity-triggering anxiety, but a more pedestrian sort of whining about their apparent inability to exert any influence in the public square.

John Donatich voiced it well enough when he introduced a panel discussion on the issue that The Nation sponsored in 2001:

As we try to puzzle out the future of the public intellectual, it's hard not to poke a little fun at ourselves, because the issue is that serious. The very words "future of the public intellectual" seem to have a kind of nostalgia built into them, in that we only worry over the future of something that seems endangered, something we have been privileged to live with and are terrified to bury. In preparing for this event, I might as well admit that I've been worried about making the slip, "the future of the public ineffectual." But I think that malapropism would be central to what we'll be talking about. It seems to me that there is a central conflict regarding American intellectual work. How does it reconcile itself with the venerable tradition of American anti-intellectualism? What does a country built on headstrong individualism and the myth of self-reliance do with its people convinced that they know best? At Basic Books' fiftieth anniversary, it's a good time to look at a publishing company born in midcentury New York City, a time and place that thrived on the idea of the public intellectual. In our first decades, we published Daniel Bell, Nathan Glazer, Michael Walzer, Christopher Lasch, Herb Gans, Paul Starr, Robert Jay Lifton--and these names came fresh on the heels of Lévi-Strauss, Freud, Erik Erikson and Clifford Geertz.

Donatich’s smugly theatrical notion of a “conflict,” a popular view within the intelligentsia, is both wrong and wrong-headed. It is wrong in the sense that it traffics in the self-serving fiction of American anti-intellectualism. And it is wrong-headed in the sense that it undermines the value of citizen responsibility by subordinating it unnecessarily to the most elitist argument for the public intellectual, the one grounded in the myth of an aristocracy of experts.

The fiction of America’s anti-intellectualism has been debated ad nauseam since Richard Hofstadter popularized the phrase a half-century ago. Without replaying the whole debate, two points will suffice: One, the fact that academic institutions wield enormous financial, technological, and cultural power—and the fact that, more generally, education continues to be the centerpiece of some of our most cherished social myths (i.e., the “American Dream”)—are both powerful reasons to doubt that Americans suffer from some instinctive hostility to intellectuals. Two, what is sometimes identified as anti-intellectualism is in fact intellectual—that is, a well-articulated family of ideas and arguments that privilege the practical, active side of life (e.g., work) over the passive and purely reflective operations of the mind in a vacuum. Hence, for example, when John Dewey built his career as a philosopher on a thoughtful, systematic, elegant, and sustained repudiation of the Cartesian notion of mind and, instead, argued for “experience” as the foundation of human endeavor—he was hardly exposing himself as an anti-intellectual bigot. ‘Nuff said.

As to what Donatich derisively calls a “headstrong individualism and the myth of self-reliance,” it’s worth noting that he’s not giving us full-fledged descriptions of real political ideas but caricatures of an imagined psycho-cultural disposition. An “immature” disposition, at that. One can almost hear the sit-com dad railing against his willful, stubborn, impetuous kid who has once again gotten himself in trouble because he refused to heed Pop’s unwaveringly wise advice. And in this myth, common-folk (like kids) always get into trouble because they lack what all paternal intellectuals have by birthright—impulse control. The infantile common-folk who comprise the “mob” have been the stars of elitist melodrama for centuries; they’re also “exhibit A” in nearly every hand-wringing, anti-democratic treatise in the western tradition. Now, are some people ill-equipped for self-government? Of course. But the strongest alternative argument, the best argument for democracy, is not that the people are “naturally” equipped for self-government—but that they need to become so, and, moreover, that experience is the only teacher. So here’s the point: Any argument for the public intellectual that, like Donatich’s, rests on the assumption that common citizens are forever childlike and must be led by a class of experts is politically corrosive and historically dangerous.

So, is there any way of conceptualizing something called the public intellectual that is consistent with democratic values? Of course there is, but it needs to begin with a shift from “categories and class” to “function.” That is, our notions of the public intellectual need to focus less on who or what a public intellectual is—and by extension, the qualifications for getting and keeping the title. Instead, we need to be more concerned with the work public intellectuals must do, irrespective of who happens to be doing it.

It’s a distinction that matters. Those concerned with public intellectuals as a class will inevitably fret about the health of that class. They’ll either worry, like Donatich, about whether the rest of society is doing enough to nurture and sustain it (i.e., publishing, reading, and heeding its work). Or, they’ll hyperventilate about class purity, or the “appalling decline” in quality of most other public intellectuals. The quintessential example here is Richard Posner’s book Public Intellectuals: A Study of Decline. Posner—a federal judge, law professor, and one of the most important legal theorists of our time, a man who is highly regarded even by his many critics and who publishes a new book nearly every year—would NOT seem to be somebody you’d say has too much time on his hands. Yet, he clearly does. As William Dean points out in his review of the book,

Posner's narrow definition of the public intellectual is his book's greatest weakness and its greatest strength. Using economic analysis, hard data and checks on prediction, Posner subjects dozens of public intellectuals to pointed criticism, if not a sound thrashing. He concentrates on "academic public intellectuals," arguing that independent public intellectuals are a dying breed, and he demonstrates how their public pronunciamentos have been sloppy and prejudiced in ways they would never allow in their scholarship.

Posner fact-checks a host of public intellectuals and compiles a list of errors worthy of a Soviet bureaucrat. And why? As Dean explains it, “Posner's main claim is that the arts and humanities should be kicked out of public intellectualdom.” Hence:

Posner launches into an ill-fated and lengthy exercise in ranking the 571 public intellectuals who in the years 1995-2000 received the most media attention and Web-site hits. None of the great public intellectuals I cite above (from Addams to Lasch) makes Posner's top 100, and three fail to show up among his top 571. Not only is this ranking a ridiculous way to assess real public influence, it undermines Posner's own project; he himself would predict that the ranking would stimulate public intellectuals' vanity, causing them either to preen or be wounded and then to ignore the book's larger argument.

Dean is quite right in labeling Posner’s project “ridiculous.” But I think Dean’s more significant point is the vision of public intellectual work he pits against Posner’s attempt to excommunicate the defects. Posner’s methodology, he argues, forces him to disregard

public intellectuals who discuss public philosophies and attitudes. These public intellectuals sometimes uncover implicit orientations and worldviews that, in turn, affect public decisions and actions. For example, he ignores the fact that there is an American spiritual culture, that religious thinkers can criticize and affect that spiritual culture, and that they can thereby make a difference in American public practice. Religious critics such as Cornel West, Jean Bethke Elshtain and Richard John Neuhaus are doing as much today.

Put more prosaically, public intellectuals perform an important social function. There are other ways to describe this function. In fact, in the panel discussion led by John Donatich and quoted above, Jean Bethke Elshtain offered a more secular version than the one William Dean invokes in her name:

A public intellectual is not a paid publicist, not a spinner, not in the pocket of a narrowly defined purpose. It is, of course the temptation, another one, of the public intellectual to cozy up to that which he or she should be evaluating critically. I think perhaps, too many White House dinners can blunt the edge of criticism. . . .

So the public intellectual needs, it seems to me, to puncture the myth-makers of any era, including his own, whether it's those who promise that utopia is just around the corner if we see the total victory of free markets worldwide, or communism worldwide or positive genetic enhancement worldwide, or mouse-maneuvering democracy worldwide, or any other run-amok enthusiasm. Public intellectuals, much of the time at least, should be party poopers.



Elshtain’s point is that the public intellectual function is criticism. And if intellectuals are in a better position to perform that function it’s not because they are uniquely blessed with wisdom—and it’s certainly not because they are uniquely equipped to wield social or political power. It is only because learning the processes of criticism and practicing them with some regularity are requisites for intellectual employment. It’s what we do at our day jobs.

It is also, however, the obligation of every citizen in a democracy. Trained to it or not, all participants in self-government are duty-bound to prod, poke, and pester the powerful institutions that would shape their lives. And so if public intellectuals have any role to play in a democracy—and they do—it’s simply to keep the pot boiling. The measure of public intellectual work is not whether the people are listening, but whether they’re hearing things worth talking about.

Posted by stevemack at 09:24 PM | Comments (0)

The Wicked Paradox: The Cleric as Public Intellectual

If there’s any truth to the old adage that religion and (liberal, democratic) politics don’t mix, it isn’t because they are polar opposites—an ideological oil reacting against a metaphysical water. Rather, it’s because they are, more or less, alienated kindred vying for the same space in the human imagination. It is not difficult to see why: religious experience and democratic politics make overlapping—and often competing—claims to the deepest meanings we attach to our humanity. First, both make a sacred obsession of the operations of individual conscience. Whether it is in the prayer tower or the voting booth, each pivots on a private, solemn, even mystical moment when the individual summons all the resources of their inner being in a single act of “self-transcendence.” Second, both religion and democracy draw the individual into a larger cosmic or social order—then define obligations that go along with one’s place in that order. Both, in other words, offer a vision of personal identity that is derived from beliefs about how we should relate to everything around us. The struggle between the spiritual and political forces of our imagination is older than such things as red states, the Christian Coalition, or the Moral Majority. It’s been a continuing drama for nearly four hundred years of American history. But following the 2004 reelection of George W. Bush, the old drama acquired a new cast of characters and a snappy new production.

The Roveian strategy of playing to the religious right (under the mainstream media’s radar, as it turned out) tipped the balance in small-town Ohio and central Florida. Consequently, for a good six weeks or so, the post-election punditry was consumed with talk about either the unsavory role Christian fundamentalists played in the campaign or the “illiberal ways” the faithful were treated by critics. (A debate, incidentally, that seems to have further inflamed an overt and growing anti-religious campaign. See Dennett, Dawkins, and Hitchens; and, for an “anthropological” response, see Scruton.)

But for all the sound and fury, the debate occasionally struck deeply resonant historical chords. Responding to a number of Christian conservatives and their supporters who complained that liberals engaged in “anti-evangelical bigotry,” Peter Beinart of The New Republic made a provocative claim about the language of public debate. “This isn’t bigotry,” he said of the charges.

What these (and most other) liberals are saying is that the Christian Right sees politics through the prism of theology, and there’s something dangerous in that. And they’re right. It’s fine if religion influences your moral values. But, when you make public arguments, you have to ground them—as much as possible—in reason and evidence, things that are accessible to people of different religions, or no religion at all. Otherwise you can’t persuade other people, and they can’t persuade you. In a diverse democracy, there must be a common political language, and that language can’t be theological.

Beinart’s call for “dialogic neutrality” (as it’s sometimes called) certainly seemed reasonable. But Reason has its own set of problems: First, there is America’s own liberal history. In many ways, American political history is the history of activist theologians from the right and the left. These men and women have been intellectuals of a special kind—people whose religious training and experience shaped their vision of a just society and required them to work for it. They have been key players in some of our most important reform movements, from abolitionism, the labor movement, and civil rights to the peace movements of various generations. And second, there is a kind of absurdity to Beinart’s reason. As Hugh Heclo puts it, the insistence that people of faith sanitize their political rhetoric of any religious assumptions “amounts to a demand that religious believers be other than themselves and act publicly as if their faith is of no real consequence.” It’s not only absurd but unfair, some argue, to ask religious intellectuals to disarm their political speech of its fundamental moral rationale.

One of the great ironies of this debate is that historically, public intellectuals in America are a product of both our secular and religious traditions. Indeed, our entire liberal, secular democratic tradition is an extension of our religious origins. The story begins in 1630, when a prosperous lawyer by the name of John Winthrop and a band of English Puritans left the security of their English homes and migrated to the new American wilderness. There they launched one of the most daring experiments in Christian civil government the old world had ever seen. The Colony at Massachusetts Bay was to be a place where, as Puritan historian Cotton Mather put it many years later, “we would have our posterity settled under the pure and full dispensation of the gospel; defended by rulers who should be ourselves.” Winthrop himself described his theocracy more poetically: “wee shall be as a citty upon a hill. The eies of all people are uppon us.” Winthrop’s phrase has echoed through nearly four centuries of American history—and acquired meanings that transcend even the lofty goals of that early Puritan colony. Presidents, poets, and public intellectuals have invoked his words to remind Americans of something fundamental about themselves: that they are a people defined not by race, not by ethnicity, but by moral purpose. As we now hear it, Winthrop’s notion of a “city upon a hill” is the keynote expression of that sacred national mission that sets us apart from every other people on the globe.

His harmonious theocracy, however, only lasted about five years.

It began to crack in 1635, when a young upstart preacher by the name of Roger Williams took it upon himself to scold the Massachusetts civil authorities for administering civil oaths (such as an oath of loyalty to the King of England) secured by the words “so help me, God.” Moreover, he thought it improper that those same authorities were enforcing strictly religious decrees. Williams was something of a purist concerning matters of doctrine, and argued that government should restrict itself to enforcing the “second tablet” of the Ten Commandments, which concerns itself with such non-religious issues as murder and theft. It was the clergy’s responsibility to handle “first tablet” violations of religious law. As he wrote some time later, there should be a “hedge or wall of Separation between the Garden of the Church and the Wilderness of the world.” Not surprisingly, those same civil authorities branded Williams’ views “erroneous and very dangerous” and sent him packing. One year later he founded an alternative Christian community he named Providence, in the new colony of Rhode Island.

Though Williams is justly credited with inaugurating the church/state debate on New World shores, the particulars of his argument are frequently overlooked. To be sure, Williams wrote much about the importance of “liberty of conscience,” but he was no “relativist,” no apologist for the secular state, no believer in tolerance for the sake of tolerance. He was a theologian, deeply concerned with the health and vitality of the church. Williams began with two fundamental assumptions: first, that the church gets its authority from God; second, that civil society gets its authority from the People (including, of course, sinners and heretics). What follows from these premises should be enough to frighten any true believer: By linking church and state, you don’t put God in charge of civil society but put the People (sinners and heretics included) in charge of the church. Or as he phrased it, you take “God and Christ and Spirit of Heaven, and subject them unto natural, sinful, inconstant men.”

Roger Williams and John Winthrop each had a grip on the opposite ends of a paradox that has haunted American politics ever since. On one side, Winthrop knew that deep personal faith always implies an equally deep sense of the mystic and moral bonds that connect that person to others—bonds profound enough to be the basis of law. This is not just a Christian ideal; it’s an important historical motivation. Nearly every significant movement for social reform in American history was either started or nurtured in the church. Labor reform, the abolition of slavery, the temperance movement, women’s suffrage, public welfare, prison reform, the civil rights movement, the War on Poverty—each of these began as matters of conscience for early supporters. Moreover, Winthrop’s “City upon a hill” reminds us that we even use spiritual terminology to describe the “secular” democracy. Winthrop teaches us that a people deeply committed to a religiously inspired vision of society will inevitably try to make that vision law. And our history teaches us that American democracy would not be nearly so liberal or humane if they hadn’t. In the American experience, in short, religion and civil society are political codependents.

For Roger Williams, however, this codependency had a dark side. He recognized more clearly than most of his contemporaries that when entangled, religion and civil society are mutually destructive. The dangers for society were dramatic: enforcing religious strictures through the law was “the greatest occasion of civil war” and, as history has shown, resulted in “the destruction of millions of souls.” He knew that deep religious conviction did not permit negotiation or compromise—that zealots almost always prefer death (their own or that of some “heretic”) to a spiritually imperfect society. Indeed, Williams’ insight goes beyond eighteenth-century politics: Modern democracies are, by culture and by design, a way of life in which decisions are made by process, persuasion, consensus, and accommodation. In such societies, all-or-nothing, religiously inspired political zealotry is poison. When activists are moved to establish God’s kingdom on earth, democracy is no longer an intrinsic value but, at best, a convenient tool to be discarded when something better comes along.

But Williams’ deepest—and most prophetic—concern was for the way that, in spiritual terms, entanglement with government amounts to a double suicide: One, it kills the living essence of individual faith—the sense of an immediate connection to God. Two, it erodes the church’s institutional credibility. On this score, he anticipates a long line of theologians and social critics who warn about the dangers of making the church and its doctrines a kind of “public property,” subject to the political needs of secular powers. It’s a point Alexis de Tocqueville would make two centuries later. He wrote that “any alliance with political power whatsoever is bound to be burdensome for religion.” When religious leaders become politicians, he argued, they must sometimes “defend allies who are such from interest rather than from love.” “Hence religion cannot share the material strength of the rulers without being burdened with some of the animosity roused against them.” Or as the Christian conservative writer Cal Thomas puts it: When Christians “grasp for the immediate and lesser power of partisan and necessarily compromising politics they make a Faustian bargain for something that rarely changes hearts and minds.”

There is a truly wicked paradox here. It is that the beneficiary of that Faustian bargain that Thomas worries about—and that Williams predicted—is liberal democracy itself. The argument would go that American political life has thrived more or less on the religious energies alive in the culture—but only “parasitically.” That is, American politics “uses” religion in the sense that it draws something vital out of it, redefining it in the process as something secular, essentially social—and not at all dependent on the belief systems of particular faiths. In short, liberal democracy takes from religion what it cannot supply on its own: a deep sense of belonging. This may be the great political lesson of William James’ insight that our most profound religious instincts pull us out of ourselves and give us certain, yet unutterable, evidence that we are a part of the whole human family—with duties and responsibilities to match. So it may be that democracy’s common language of facts and reason and logic that Beinart longs for is, like Latin for Catholics, the language of death.

American democracy has always depended on public figures—and public intellectuals—whose work has been animated by strong faith. Billy Graham’s efforts to promote racial harmony during the 1950s and Reinhold Niebuhr’s work for economic justice throughout his career come quickly to mind. A deep religious sensibility has the power to make us feel a real kinship with others. And kinship tempers self-interest. This kind of “democratized” religion enlarges our sense of justice, moving it beyond a concern for individuals alone toward a personal investment in the well-being of our countrymen.

For those, like Peter Beinart, who are concerned about the use of religious rhetoric in democratic debate, a more important challenge would center on how religion is being used, not whether it is used. Or, not whether believers are talking politics, but who they are talking politics to. Just as enlightened religious thinkers have used the terms of their faith to build a sense of a larger American community, others have used those same terms to insulate particular Americans within the cultural walls of narrower communities. Purity, after all, is the virtue that Puritans chose as their defining aspiration. The heart of sectarian bigotry is all the doctrines and dogma of particular churches (all that James thought was incidental to the religious impulse) that function as codes to authenticate tribal membership. And when terms of identity become the focus of intellectual practice within a religious community, they give us tangible evidence of just how “special” our group is—and how unspecial, ungodlike, or un-American everybody else is. Rather than connecting us to one another, this sort of religion makes a virtue of alienation.

Posted by stevemack at 09:18 PM | Comments (0)


"A Whitman for our Time."
- Jerome Loving,
   ORDER
"Stephen John Mack's The Pragmatic Whitman: Reimagining American Democracy, [is] The most thoroughly informed philosophical reading of Whitman to appear in decades. Mack develops the premise . . . That Whitman shares with John Dewey a vision of democracy as a 'civic religion' in America, a profoundly secularist and progressive perspective.

- M. Jimmie Killingsworth, Texas A & M University