What Are We Doing in Libya?

Twenty-two days after Colonel Qaddafi fired on protesters in Libya, we are now in the middle of war. Well, not of war. We don’t use that term anymore. We are now in the middle of “military engagement,” which effectively means that the US-led coalition is launching cruise missiles over Libya. But a war by any other name is still a war.

Peter Nixon over at dotCommonweal is in agreement, in his post “War. Again.” “Make no mistake,” he writes. “This is not a humanitarian intervention. We are taking sides in a civil war.”

President Bush was justly criticized for his rush to war in Iraq and for not having a clear plan for what to do after we defeated Iraq’s armed forces. Bush’s pace, however, looks positively dilatory compared to the speed with which President Obama, with very little consultation with Congress or the American people, has committed the United States to yet another war to establish a government in a foreign country that is more to our liking.

And if the principle that governments cannot slaughter their citizens with impunity is to be the principle underlying our foreign policy, where are we off to next? Yemen, where army snipers killed 46 people yesterday? There is no shortage of tyrannies in the world. How much of our blood and treasure are we willing to expend to remake the world in our own image?

Historically, Christians have debated whether the demands of the Sermon on the Mount should lead the church to oppose all war, or whether some wars might be justified. For most of Christendom’s history, the latter side has won. The first major theological justification for the morality of war goes back to Augustine, who argues in his letter to Boniface that military engagement can be an obligation of neighbor love, and in doing so lays the foundation for just war theory:

Do not think that it is impossible for any one to please God while engaged in active military service. . . Think, then, of this first of all, when you are arming for the battle, that even your bodily strength is a gift of God; for, considering this, you will not employ the gift of God against God. For, when faith is pledged, it is to be kept even with the enemy against whom the war is waged, how much more with the friend for whom the battle is fought! Peace should be the object of your desire; war should be waged only as a necessity, and waged only that God may by it deliver men from the necessity and preserve them in peace. For peace is not sought in order to the kindling of war, but war is waged in order that peace may be obtained. Therefore, even in waging war, cherish the spirit of a peacemaker, that, by conquering those whom you attack, you may lead them back to the advantages of peace; for our Lord says: “Blessed are the peacemakers; for they shall be called the children of God.” Matthew 5:9 If, however, peace among men be so sweet as procuring temporal safety, how much sweeter is that peace with God which procures for men the eternal felicity of the angels! Let necessity, therefore, and not your will, slay the enemy who fights against you. As violence is used towards him who rebels and resists, so mercy is due to the vanquished or the captive, especially in the case in which future troubling of the peace is not to be feared (Epistle 189).

Following Augustine, Aquinas too treated just war under love or charity:

In order for a war to be just, three things are necessary. First, the authority of the sovereign by whose command the war is to be waged. For it is not the business of a private individual to declare war, because he can seek for redress of his rights from the tribunal of his superior. . .

. . . Secondly, a just cause is required, namely that those who are attacked, should be attacked because they deserve it on account of some fault. . .

. . . Thirdly, it is necessary that the belligerents should have a rightful intention, so that they intend the advancement of good, or the avoidance of evil. Hence Augustine says (De Verb. Dom. [The words quoted are to be found not in St. Augustine’s works, but in Can. Apud. Caus. xxiii, qu. 1]): “True religion looks upon as peaceful those wars that are waged not for motives of aggrandizement, or cruelty, but with the object of securing peace, of punishing evil-doers, and of uplifting the good.” For it may happen that the war is declared by the legitimate authority, and for a just cause, and yet be rendered unlawful through a wicked intention. Hence Augustine says (Contra Faust. xxii, 74): “The passion for inflicting harm, the cruel thirst for vengeance, an unpacific and relentless spirit, the fever of revolt, the lust of power, and such like things, all these are rightly condemned in war.” (II-II, Q. 40, art. 1).

In addition to the criteria Aquinas lays out for going to war (ius ad bellum), namely, right authority, just cause, and just intent, just war theory also includes attention to the way the war is fought (ius in bello). In other words, the war ought to be proportional. It ought to use only enough force to respond to the threat at hand.

So is this “war” in Libya just? It does seem that the United States is at pains to ensure that the authority initiating this military engagement is rightful. This is not a case of unilateral action or “coalitions of the willing,” as Ross Douthat points out:

In its opening phase, at least, our war in Libya looks like the beau ideal of a liberal internationalist intervention. It was blessed by the United Nations Security Council. It was endorsed by the Arab League. It was pushed by the diplomats at Hillary Clinton’s State Department, rather than the military men at Robert Gates’s Pentagon. Its humanitarian purpose is much clearer than its connection to American national security. And it was initiated not by the U.S. Marines or the Air Force, but by the fighter jets of the French Republic.

And our cause does indeed seem just. Qaddafi is a pretty wicked guy, especially in recent weeks as he has unleashed his troops on those who have risen in protest against his rule, killing many and threatening the country with further disasters. As the Chicago Tribune points out, Libya imports about 90% of its food and other basic necessities, and Qaddafi is likely to use food as a weapon, threatening starvation for those who do not comply.

But what about our intent? In order to determine the justice of our intent, we need first to know what it is, and that is not so easy. President Obama announced at a news conference in Chile this morning that military action in Libya has only a humanitarian intent, namely, stopping the killing of Libyan civilians by Col. Qaddafi’s soldiers. Nevertheless, “it is U.S. policy that Qaddafi needs to go.” A recent NYTimes article, “Target in Libya Is Clear; Intent Is Not,” addresses this point exactly:

But there is also the risk that Colonel Qaddafi may not be dislodged by air power alone. That leaves the question of whether the United States and its allies are committing enough resources to win the fight. The delay in starting the onslaught complicated the path toward its end. . . For Mr. Obama, who has explicitly said that Colonel Qaddafi has lost any right to govern, the conundrum is that the United Nations mandate does not authorize his removal. So Mr. Obama now says the goal is limited: to use force to protect the Libyan people and allow humanitarian aid to get through.

In the Aristotelian-Thomistic tradition, an intention is something more than a desire. An intention (prohairesis in Greek) is something deliberated upon, something chosen with reason. For Aquinas, intention is an act of the will which “tends toward the end,” but which presupposes an act of reason ordering something to the end (I-II, Q. 12, art. 1). Intention further includes the means to achieving this end: “the will is moved to the means for the sake of the end: and thus the movement of the will to the end and its movement to the means are one and the same thing. For when I say: “I wish to take medicine for the sake of health,” I signify no more than one movement of my will. And this is because the end is the reason for willing the means” (I-II, 12.4).

So in the case of Libya, for the intention to be just, both the means and the end in sight must be just. And there is considerable question whether this is the case in our current engagement. Douthat writes,

Because liberal wars depend on constant consensus-building within the (so-called) international community, they tend to be fought by committee, at a glacial pace, and with a caution that shades into tactical incompetence. And because their connection to the national interest is often tangential at best, they’re often fought with one hand behind our back and an eye on the exits, rather than with the full commitment that victory can require.

It seems to me that our intention in Libya has not been established. Qaddafi is a bad guy, and nobody wants him around, but our intention is not to remove him from power. Libyans who rose against Qaddafi are in a bad place right now, but our intention is not to protect them, at least not really, since protecting them would presumably mean regime change, and that is not our intention at this time. It is terrible to watch a man like Qaddafi begin a new reign of terror in North Africa, but just war principles are in place because war is so tragic an event that it should be employed only as a last resort, and only with an eye toward guaranteeing a more just peace in the future. This “engagement” in Libya is neither a last resort, nor is the end in sight any better than what we have now: a dictator in control of a country.

Reconceptualizing Feminine “Complementarity” by Appealing to Female War Reporters

Cathleen Kaveny, professor in the law school and the theology department at Notre Dame, has an important article up at America Magazine, in which she reflects on the Catholic Church and its effort to define “feminism.” Kaveny, in her typically moderate, rational, and sensitive way, explores the variety of ways in which the word “feminist” is used and the manifold ways in which the Church both is and is not what it claims to be.

On the one hand, if we take “feminism” to be a general affirmation of the well-being and the dignity of women, the Catholic Church is most definitely “feminist”:

It has done an enormous amount of good for women, Catholic and non-Catholic alike, in precarious circumstances throughout the world. To take only one example, the Catholic Agency for Overseas Development, Gender & Women runs programs around the world that help women organize into cooperatives for the production and marketing of goods; it also provides shelters for basic needs, educational programs in literacy and training in business knowledge and empowerment.

However, despite the many ways in which the Church works to advance the flourishing of women, it is often subject to criticism from both secular and religious feminists who claim the Church takes two steps back for every step forward in its refusal to sanction the use of contraception (and, more controversially, to soften its restriction on abortion in cases of rape, incest, and danger to the life of the mother). Kaveny does a good job portraying the limbo-like status many Catholic women find themselves in, affirming both the progressive nature of the way the Church views women and the way in which the Church still has miles to go in truly affirming the dignity and equality of women:

Catholic women can sometimes find themselves caught in the middle, loving their church and their faith but dispirited by occasional statements that suggest that the Vatican views them as disordered or defiled simply because they are women. Last July the Vatican caused a public relations firestorm after its announcement of two grave crimes under canon law: sexual abuse by members of the clergy and the attempt to ordain a woman. Even women who support the church’s restriction of the priesthood to males winced at the decision to group these two acts in the same document.

In order to advance a more rigorous analysis of feminism and Catholicism, Kaveny suggests three areas of focus: equality and difference, nature and nurture, and complementarity and collaboration. It is really the last area–the issue of complementarity–where the most work needs to be done in light of John Paul II’s distinctive form of feminism, re-affirmed by Benedict XVI:

With women flooding the educational system, men find themselves competing with them for advancement and academic honors. Pope Benedict XVI, when he was prefect of the Congregation for the Doctrine of Faith, expressed concern about such competition between men and women and called instead for a collaborative relationship between the sexes (“On the Collaboration of Men and Women in the Church and in the World,” 2004).

His view is this: The basis of a collaborative relationship is the recognition of the complementary gifts and skills of men and women. Women in particular should not aim to emulate the strengths of men but should instead nurture their own distinct gifts. Complementarity is most clearly visible in the roles that men and women play in marriage and family life but should be visible in other contexts as well. One of the hallmarks of John Paul II-style feminism, in fact, is an effort to define the “feminine genius” in all spheres of women’s existence in terms of the virtues of motherhood.

For their part, many other feminists are worried about the call to complementarity, not necessarily because they are opposed to the idea that both men and women bring some distinct and important gifts to human society but because of the way that idea tends to work out in practice. In fact, they fear it undermines collaboration, because it tends to promote separation and practical inequality.

The great Protestant theologian Karl Barth explicated male-female complementarity in terms of A and B—one need not be a psychic to guess which sex is which. The way the concept of complementarity works in geometry also reveals the potential problem: Two angles are complementary if they add up to 90 degrees, so a complementary angle is all and only that which the primary angle is not. Analogously, if one begins with a man, then a woman must be all and only that which a man is not—her role is to fill in the gaps. If complementarity is taken too far, then, it does not facilitate collaboration but rather fosters entirely separate spheres of interest and specialization.

The concept of complementarity rightly affirms the importance—and unique demands—of motherhood on women. But how does it account for the gifts, ambitions and concerns that men and women have in common, even in parenting? For men and women to strive for excellence—together—in the many areas and interests they share ought not to be considered a destructive form of competition. The common pursuit of excellence, or virtue, is a key element of the classical definition of friendship.

The question of complementarity and collaboration has come up recently in regard to another question: the role of women reporters in war zones. In an important article in this weekend’s NYTimes, Kim Barker argues that, despite the dangers of sexual assault and physical violence, women can cover wars just as well as men, if they have the courage. Not only can women do the same job men can; they also provide a necessary angle on wartime reporting not accessible to their male counterparts.

More important, they also do a pretty good job of covering what it’s like to live in a war, not just die in one. Without female correspondents in war zones, the experiences of women there may be only a rumor.

Look at the articles about women who set themselves on fire in Afghanistan to protest their arranged marriages, or about girls being maimed by fundamentalists, about child marriage in India, about rape in Congo and Haiti. Female journalists often tell those stories in the most compelling ways, because abused women are sometimes more comfortable talking to them. And those stories are at least as important as accounts of battles.

Kim Barker seems to provide a challenging alternative to the sort of complementarity Kaveny is addressing, without throwing out the issue of complementarity altogether.

In Mulieris Dignitatem, John Paul II’s apostolic letter on the dignity of women, JPII wrote that women should employ their “feminine genius” in building a culture of life. This “genius” includes the feminine characteristics of receptivity, generosity, sensitivity, and maternity. Is it not possible that all of these characteristics make women powerful and important wartime reporters? Receptivity, in their ability to grasp the unique experiences of women in war-torn areas or in uprisings like the ones going on now in Egypt and Bahrain; sensitivity, in their ability to see beyond the material to the heart of the matter, to report not just on the events but on the spiritual and emotional movements underlying them; generosity, in their ability and willingness to sacrifice themselves and their bodies for the sake of truth; maternity, in their ability to see and hear also the children in a given place, and to weave their stories into the final story they tell as reporters.

Like many women, everydaythomist has problems with the ways in which concepts like “complementarity” and other aspects of John Paul II’s feminism have been used, but I think these concepts gain new rhetorical force when put in dialogue with the sort of secular feminism that Kim Barker offers in her argument for the important and unique role of female wartime reporters. Complementarity offers us a way of seeing these female reporters as offering a unique perspective not so easily supplied by their male counterparts, thus justifying their work on grounds not so much of equality as of difference. This is important, I think, because if women reporters were offering the same perspective as their male colleagues, it would be too easy for an editor to pull all women reporters out of war zones by appealing to the dangers of the job and the ability of men to do the same tasks without the same risks.

In practice, I think putting JPII’s notion of complementarity in dialogue with Kim Barker’s secular feminism effectively helps bridge the divide between Catholicism and feminism that Kaveny addresses. It offers us a way of keeping the concept of complementarity without keeping some of the rather unfortunate ways this concept has been put into practice to keep women from doing the sorts of things that men do.

What Does Aquinas Have to Say About Egypt?

Jim, over at Zwinglius Redivivus, has a post entitled “Egypt Burns, and the Theologians and Biblical Scholars Remain Silent.” He writes,

Nothing really needs to be added to that title except one blazingly evident fact: too many are so involved in pointless pursuits and the useless drivel and dreck of their own limited interests that they are blind to what’s going on around them and voiceless.

Therefore they are, as far as I am concerned, worthless. If the teaching of Scripture isn’t applied to real life (as opposed to attempting to apply it to sci-fi and other stupidities) and theologians and biblical scholars have nothing to say to or about events such as we are presently witnessing, I think they have proven themselves unworthy of the title they bear and no longer relevant to anything, for anything at all.

I don’t normally read his blog (I found this post through a link on another blog), but Jim has a point. What say we blogger theologians about the events unfolding in the largest country in the Middle East?

Aquinas, following Aristotle, is clearly a supporter of a people’s right to rise up against an unjust tyrant. He writes in “On Kingship”:

If to provide itself with a king belongs to the right of a given multitude, it is not unjust that the king be deposed or have his power restricted by that same multitude if, becoming a tyrant, he abuses the royal power. It must not be thought that such a multitude is acting unfaithfully in deposing the tyrant, even though it had previously subjected itself to him in perpetuity, because he himself has deserved that the covenant with his subjects should not be kept, since, in ruling the multitude, he did not act faithfully as the office of a king demands.

He is also wary of civil unrest. He goes on in “On Kingship” to say, “The welfare and safety of a multitude formed into a society lies in the preservation of its unity, which is called peace.” However, his more systematic response to the disorder caused by political revolution is in the Summa, in the treatise on sedition (contained within the larger treatise on charity, not justice, as you may be surprised to know). In II-II 42.1, Aquinas identifies sedition (“when one part of the state rises in tumult against another part”) as a sin “opposed to a special kind of good, namely the unity and peace of a people.” In 42.2, he cites Augustine (De Civ. Dei ii, 21) that “‘wise men understand the word ‘people’ to designate not any crowd of persons, but the assembly of those who are united together in fellowship recognized by law and for the common good.’ Wherefore it is evident that the unity to which sedition is opposed is the unity of law and common good: whence it follows manifestly that sedition is opposed to justice and the common good.”

Aquinas’ comments on sedition in the Summa are not opposed to what he says in “On Kingship.” In “On Kingship,” the overthrow of an unjust tyrant is an expression of unity on behalf of the common good. He confirms this in 42.2 ad. 1: “It is lawful to fight, provided it be for the common good. But sedition runs counter to the common good of the multitude, so that it is always a mortal sin.” And in the same article ad. 3, he says even more explicitly:

A tyrannical government is not just, because it is directed, not to the common good, but to the private good of the ruler, as the Philosopher states (Polit. iii, 5; Ethic. viii, 10). Consequently there is no sedition in disturbing a government of this kind, unless indeed the tyrant’s rule be disturbed so inordinately, that his subjects suffer greater harm from the consequent disturbance than from the tyrant’s government. Indeed it is the tyrant rather that is guilty of sedition, since he encourages discord and sedition among his subjects, that he may lord over them more securely; for this is tyranny, being conducive to the private good of the ruler, and to the injury of the multitude.

In other words, Aquinas would probably say that a lawful uprising in accord with the common good simply is not sedition, in the same way that taking food from another when one’s life is in danger is not stealing (II-II, 66.7). Since neither is opposed to the common good, neither is a sin.

So this brings us to Egypt. There is some fear among US onlookers that the uprising is the work of the Muslim Brotherhood, which could potentially be contrary to the common good (especially that of Egypt’s small Coptic minority). But Egypt’s Islamist opposition has vowed to “respect the will of the Egyptian people” if Mubarak is deposed. Moreover, the uprising is more eclectic than merely the Muslim Brotherhood, as the Guardian notes:

“There is widespread exaggeration about the role of the Brotherhood in Egyptian society, and I think these demonstrations have exposed that,” said Khalil al-Anani, an expert on Egypt’s political Islamists at Durham University. “At first the movement showed little interest in the protests and announced they weren’t going to participate; later they were overtaken by events and forced to get involved or risk losing all credibility.” . . .

. . . Even on Friday, when the Brotherhood finally threw its weight behind efforts to bring down the government – a stance its leadership initially held back from – Islamist slogans were noticeable by their absence, and the formal contribution of the movement remained limited.

The Egyptian revolution, therefore, does not seem to fit the criteria for sedition. Rather, it seems the whole nation, and especially the youth, are rising collectively to challenge the injustices of a tyrant. Still, the unrest in the country definitely endangers the common good, and needs to be settled as quickly as possible.

Paul VI wrote in Populorum Progressio in 1967, largely in keeping with the Thomistic tradition, “We know . . . that a revolutionary uprising–save where there is a manifest, long-standing tyranny which would do great damage to fundamental personal rights and dangerous harm to the common good of the country–produces new injustices, throws more elements out of balance and brings on new disasters. A real evil should not be fought against at the cost of greater misery.”

Although the mood is celebratory, the violence can escalate rapidly. Nicholas Kristof blogs from Cairo:

The people I talked to mostly insisted that the army would never open fire on civilians. I hope they’re right. To me, the scene here is eerily like that of Tiananmen Square in the first week or so after martial law was declared on May 20, 1989, when soldiers and citizens cooperated closely. But then the Chinese government issued live ammunition and ordered troops to open fire, and on the night of June 3 to 4, they did – and the result was a massacre.

In the past, the army famously refused President Sadat’s order to crack down on bread riots, and maybe they won’t crack down this time. But I’ve seen this kind of scenario unfolding before in Indonesia, South Korea, Mongolia, Thailand, Taiwan and China, and the truth is that sometimes troops open fire and sometimes they don’t. As far as I can see, Mubarak’s only chance to stay in power is a violent crackdown – otherwise, he has zero chance of remaining president. And he’s a stubborn old guy: he may well choose to crack heads; of course, whether the army would follow orders to do so is very uncertain. The army is one of the few highly regarded institutions in Egyptian society, and massacres would end that forever.

One troubling sign is that the government isn’t showing signs of backing down. It used fighter planes to buzz Tahrir, in what surely seems an effort to intimidate protesters. It moved the curfew even earlier today, to 3 pm. It has sent the police back into some areas. The Internet remains shut off. And the state media continue to be full of lies. None of that sounds like a government preparing to bow to the power of the people.

It seems clear that Mubarak needs to go, but his overthrow will be moral only so long as the scales tip in favor of the common good.

The Problem With Democracy

Nicholas Kristof has an op-ed out today in which he jokingly argues that America needs a monarch:

If we can just get over George III, our new constitutional monarchs could serve as National Hand-Holders, Morale-Boosters-in-Chief and Founts of American Indignation.

Our king and queen could spend days traipsing along tar-ball-infested beaches, while bathing oil-soaked pelicans and thrusting strong chins defiantly at BP rigs.

All that would give President Obama time to devise actual clean-up policies. He might then also be able to concentrate on eliminating absurd government policies that make these disasters more likely (such as the $75 million cap on economic damages when an oil rig is responsible for a spill). . .

. . . As Stephen Colbert observed about the oil spill: “We know if this was Reagan, he would have stripped to his skivvies, put a knife in his teeth, gone down there and punched that oil well shut!”

But let’s be realistic. Most presidents just won’t look that good in their skivvies. And some may accidentally swallow the knives. Thus, the need for a handsome king and queen to lead photo-ops.

But perhaps the need for a monarch is not so much due to America’s love for drama, or Obama’s love of the spotlight, or the general tendency to think of the head of state as a Hollywood star. Rather, the problems Kristof sees in our current democracy (the inability to deal with the oil spill, for example) might be rooted in a problem with democracy itself.

When Aristotle wrote his political treatise, he said that the function of a government is to help its members live a good life. He saw three main ways a government could be constructed: rule by one, rule by a few, and rule by many. All three forms of government have good manifestations and bad ones. A good rule by one is a monarchy; a bad rule by one is a tyranny. In like manner, a good rule by a few is an aristocracy; a bad one is an oligarchy. The difference between good forms of government and bad ones is that in the former, the end (telos) of government is to help the citizens live a good life, whereas in the latter, the telos is to help the governors live a good life.

Unlike Plato, Aristotle knows that any ideal form of government (like the idealist society delineated by Plato in the Republic, in which all property is common and happiness results from common simple pursuits) is bound to fail, since such an ideal is contrary to human nature. He also knows that every form of government has a tendency to become corrupt, because people naturally want to assume more power for themselves at the expense of others. As such, Aristotle tends toward supporting democracy (what he calls a polity) as the best form of government since, by dividing up the power to rule, it makes it difficult for any one group or individual to assume exclusive power and direct the activities of the state away from the common good and toward their own individual good.

Democracy too has good and bad forms. Its good form is what Aristotle calls a polity, whereby citizens take turns ruling and different activities are allocated to different rulers. Ideally, both the rich and the poor should be involved in ruling a state with a proper balance of powers so that no one individual or group becomes too powerful (what Aristotle refers to as uniting the freedom of the poor with the wealth of the rich). Its bad form is when the masses act out of their own self-interest, the government stagnates, and nothing gets done. This is why Aristotle prefers democracy: its bad form is mere stagnation.

But Aristotle does not do a great job defending democracy against the critique of Plato. Plato argued that the act of governing requires certain expertise, and that in a democracy, only those who are experts at appealing to the sentiments of the masses and winning elections will be selected by the people to rule. As a result, the people who are elected to rule will only be able to effect change by using mass appeal and manipulation, not practical wisdom.

Aristotle thinks that a division of labor will solve this: the people best suited to certain tasks will be selected to oversee those tasks. But if such people are democratically selected, in other words, if they are selected by the people, we must assume that the majority of people know what type of person is best suited for what task. However, most people (as Aristotle recognizes) do not have such knowledge. Hence, democracy turns into a game of rhetoric in which the most appealing, not the most competent, individual is selected to rule.

The problem will not be solved, as Kristof seems to think, by appointing a figurehead to play the role of looking pretty and providing entertainment for the American people while the real leader does all the work. As long as the people are selecting the “real leader,” he or she will always be a figurehead too, and in the meantime the masses will be acting out of their own self-interest and nothing will get done.

Philosopher Children Make for Better Politics

This article in the New York Times brings up an under-discussed topic: teaching children philosophy:

“The world is a puzzling place and when you’re young it doesn’t make sense,” Professor Wartenberg says. “What you’re giving them is the sort of skills to learn how to think about these things.”

Professor Wartenberg has written a book, “Big Ideas for Little Kids: Teaching Philosophy Through Children’s Literature” (Rowman & Littlefield, 2009), to spread his experiment to more elementary schools. His focus is on teaching undergraduate philosophy students how to work with children, and his decade-old course at Mount Holyoke, “Teaching Children Philosophy,” has led many of his students to pursue careers in early-childhood education.

“A lot of them don’t know what to do after college,” he says. “If they want to do something with philosophy, this opens up an avenue.”

Professor Wartenberg also says that philosophy lessons can improve reading comprehension and other skills that children need to meet state-imposed curriculum standards and excel on standardized tests. With a grant from the Squire Family Foundation, which promotes the teaching of ethics and philosophy, he is assessing whether his program helps in the development of argument and other skills.

The view that children can do philosophy and engage in conversations on metaphysics, ethics, aesthetics and epistemology challenges the view of child psychologist Jean Piaget, who claimed that children under the age of 12 were not capable of the sort of abstract thinking required for philosophical analysis. Matthew Lipman, founder of Philosophy for Children, disagrees, claiming that the insatiable curiosity of children makes them ripe for engaging in philosophical dialogues. According to Lipman’s approach, the teacher acts as a sort of “midwife to the thoughts of the students” (to use an expression from Plato). The idea is not to teach students what Plato or Descartes thought, but rather to teach them how to think.

Literature turns out to be a wonderful place to begin, as the following exchange over The Giving Tree illustrates:

Ms. Runquist’s students managed to fit philosophy in between writing and science. This was their sixth lesson of the year, and by now they knew the drill: deciding whether or not they agreed with each question; thinking about why or why not; explaining why or why not; and respecting what their classmates said.

Most of the young philosophers had no problem with the boy using the tree’s shade. But they were divided on the apples, which the boy sold, the branches, which he used to build a house, and the trunk, which he carved into a boat.

“It’s only a tree,” Justin said with a shrug.

“The tree has feelings!” Keyshawn replied.

Some reasoned that even if the tree wanted the boy to have its apples and branches, there might be unforeseen consequences.

“If they take the tree’s trunk, um, the tree’s not going to live,” said Nyasia.

Isaiah was among only a few pupils who said they would treat an inanimate object differently from a human friend.

“Say me and a rock was a friend,” he said. “It would be different, because a rock can’t move. And it can’t look around.”

This gave his classmates pause.

In book VII of The Politics, Aristotle addresses the question of how people should be educated in an ideal city according to both the end and means of education. The end of education is eudaimonia, a life of flourishing or as we say, happiness. Whereas practical reason makes important contributions to eudaimonia in terms of making decisions conducive to health and financial success, ultimately, it is the speculative intellect which contributes most directly to the ultimate end of education and the achievement of eudaimonia. While Aristotle definitely thinks that children are not born in command of their reason, but must rather be trained, he clearly thinks that by the age of seven, children should be engaged in the most basic and foundational forms of philosophical inquiry, and should be learning the intellectual habits (counsel, understanding, wisdom) which are integral to the philosophical life. Active, creative, and democratic conversation among children creates adults who can engage in active, creative, and democratic conversation. Young philosophers, according to both Aristotle and Lipman, turn into good citizens.

Opponents claim that children need to be taught “useful” subjects like math, science, and reading, all of which are conveniently suited to standardized tests, and that philosophy is a luxury which our already-undereducated children cannot afford. Lipman, however, began developing philosophical tools for children during the Vietnam era, when, he claimed, “many Americans were too accepting of authoritative answers and slow to reason for themselves — by college, he feared, it would be too late.”

It seems to me that the unproductive back-and-forths between liberals and tea-party conservatives, the gross misunderstandings on both sides in the debates on health care reform, the vitriol we have seen in coverage of the Roman Catholic Church in recent weeks, and countless other examples point to the fact that even the best brains among us do not know how to have a conversation, to reason about ideas, and to listen and compromise with those who hold divergent views. Perhaps teaching kids philosophy isn’t such a worthless idea after all.

Is the Health Care Debate Missing the Point?

First, I have to apologize to my regular readers (I mean you, Dr. Camosy) for my failure to post anything new over this Lenten season. I was focusing on getting a full draft of my dissertation finished (success!), which kept me from dedicating any brain matter to blogging. So I have some catching up to do.

I would be remiss as an “EverydayThomist” to not post at least something on the health care debate. For those of you who live under a rock or in the 13th century metaphysics section of the library, Congress has passed a health care bill, expanding coverage to millions of uninsured. Historic, monumental, controversial.

Throughout the past months as the health care debate has totally consumed this country, I have been really confused about what the exact issue was. Expanding health insurance, sure, but why? It is not like health insurance is an intrinsic good, nor does there seem to be any clear link between health insurance and the common good. I never could really understand how expanding health insurance coverage made us a better society.

Now, maybe I am missing something. After all, I have been one of those people trapped under a Latin tome in the 13th century metaphysics section of the library (I can tell you more about judgment per modum inclinationis than health care reform, in all likelihood), but it turns out I am not alone in my ponderous state of trying to figure out the exact issue. In this month’s Atlantic Monthly, Megan McArdle questions the assumption about whether people with health insurance are necessarily healthier . . . or if those without health insurance are more likely to die:

Aside from an exchange between Matthew Yglesias of the Center for American Progress and Michael Cannon of the Cato Institute, few people addressed the question that mattered most to those of us who cannot buy an individual insurance policy at any price—the question that was arguably the health-care debate’s most important: . . . If we lost our insurance, would this gargantuan new entitlement really be the only thing standing between us and an early grave?

. . . Even Democratic politicians made curiously little of the plight of the uninsured. Instead, they focused on cost control, so much so that you might have thought that covering the uninsured was a happy side effect of really throttling back the rate of growth in Medicare spending. When progressive politicians or journalists did address the disadvantages of being uninsured, they often fell back on the same data Klein had used: a 2008 report from the Urban Institute that estimated that about 20,000 people were dying every year for lack of health insurance.

But when you probe that claim, its accuracy is open to question. Even a rough approximation of how many people die because of lack of health insurance is hard to reach. Quite possibly, lack of health insurance has no more impact on your health than lack of flood insurance.

McArdle cites a recent study by Helen Levy and David Meltzer, “The Impact of Health Insurance on Health,” which finds that “many of the studies claiming to show a causal effect of health insurance on health do not do so convincingly because the observed correlation between insurance and good health may be driven by other, unobservable factors.”

While investigating some of these claims, I have also been watching, with a little bit of shame, Jamie Oliver’s new show Food Revolution, about the British cook’s efforts to make the unhealthiest city in America (Huntington, WV, of “We Are Marshall” fame) a little bit healthier, in part by bringing fresh, non-processed foods into the local elementary school menu and by teaching local families how to cook food that is both tasty and good for them.

In the first show, Jamie goes into the home of a family and lays out everything that the family eats on a weekly basis on the kitchen table. It’s pretty disgusting–lots of pizza and fried food, and what isn’t fried is processed. The whole family, including the cute little elementary-school-aged girl, is obese, which is why nobody is surprised (except for the parents) when Jamie takes them in to get a physical and finds that their 14-year-old son Justin is basically guaranteed to get diabetes. The doctor explains all the complications Justin is likely to face, including the possibility of losing a limb, unless he makes some radical lifestyle changes. It’s a sad story, but Jamie’s point on the show is that it is not uncommon. In the first episode, he claims that this generation of children is likely to be the first in a century not to outlive their parents, due to early morbidity caused by diabetes, heart disease, high blood pressure, and other complications linked to obesity.

Why do I bring this up? Because 14-year-old Justin’s problem is not a lack of health insurance (though it is true that those who lack health insurance are more likely to be obese, to smoke, and to drink excessively), but that he is simply unhealthy. We are an unhealthy nation, to be sure, but I still fail to see how expanding coverage is going to make us, on the whole, healthier. As long as our children keep eating pizza and “potato pearls” in their public school cafeterias, and as long as their parents keep feeding them pizza and chicken nuggets for dinner, and as long as our suburban lifestyles become increasingly sedentary, no amount of health insurance is going to solve our problems, or at least, so it seems to me.

So why focus on insurance? Honestly, I think our legislators’ hearts are in the right spot, but they recognize implicitly something that Jamie Oliver’s show makes very explicit–it is really, really hard to change people’s deeply-ingrained habits. We are, in this country, habitually unhealthy. We have characters disposed to overeat, to eat junk food, to watch sports rather than play them. And as these habits become stronger, and we suffer more as a consequence, we realize how hard it is to change. There is no legislative quick-fix for bad character, character disposed to make bad choices again and again and again. Such character change would require massive, multi-faceted reform–loosening stringent USDA regulations so as to allow more fresh food in schools, ending corn subsidies, urban planning projects that encourage walking, extended recess and play periods for children and teens, education, and the list goes on.

Where could Congress even begin? I don’t know, but I do know that it was easier to try to change health care legislation than it was to change people’s habits. And this points to a larger problem with the way we do government in this country–we really don’t craft legislation conducive to national flourishing, legislation designed to make people better people, legislation aimed at increasing virtue rather than vice. We craft legislation to “fix” problems without probing to find the foundational issue.

One of the really cool things about Jamie Oliver’s show is that he realizes that to get kids who are habituated to eat horrible processed foods to start eating healthy, he actually has to teach them how to eat. This requires lots of steps–teaching them how to use a knife and fork to cut chicken rather than picking up chicken fingers with their own sauce-covered fingers, teaching them the names of vegetables and which of their favorite foods come from those veggies (surprised that none of the kids filmed knew that french fries come from potatoes and ketchup from tomatoes?), teaching them how to cook and helping them to realize that cooking your own food is fun, and most importantly, giving them time to eat and encouragement to eat the right things. In a crowded lunchroom where none of the first graders are eating his homemade burritos and salad, Jamie begs the administrator to give the kids more time to finish; then he painstakingly goes through the room, encouraging each child, helping them realize that the food in front of them is good, and rewarding “clean plates” with stickers and applause. He ends up fairly successful, but look how much time, energy, and effort this takes. And these kids will need such encouragement every day for the next few years to reverse all the negative habituation that has taught them since birth to think that processed finger foods are good. These are not the things that government is good at doing. Sadly, it is what needs to be done.

And so now more people have health insurance, and I guess we will have to wait and see if we become a healthier nation as a result. I, for one, am not holding my breath.

It’s the March for Life, not the March for Scott Brown

I didn’t get to attend the March for Life in Washington, DC this year, much as I would have liked to. Like any large-scale witness, the March for Life is a time not to debate the nuances of abortion politics and the various ways in which one can be “pro-life,” but rather a time to collectively say “NO” to abortion. The March is a time to say one thing, and one thing only–that abortion is a grave evil, and that we who are participating are marching on behalf of the millions of unborn who have become victims of abortion.

On every other day of the year, anti-abortion advocates can adopt a more nuanced approach to the issue of abortion. On every other day of the year, anti-abortion advocates can get into debates about making abortion illegal vs. other legal tactics to minimize the number of abortions that take place. On every other day of the year, anti-abortion advocates can tone down their rhetoric, make concessions, and explore the connections between issues like access to health care, racial and gender discrimination, living wages, and abortion. But not today. Today, there are two answers–yes, or no, and today, and today only, anti-abortion advocates get to simply say “NO.”

Which is why I am disturbed to see, at least in the very preliminary media coverage of the March, the rhetoric of the March turning to Scott Brown and healthcare reform. From the Washington Post, for example:

Many at the rally cited the election of Republican Scott Brown to the U.S. Senate in Massachusetts as a sign of a shifting momentum to conservative causes like their own.

“Any people from Massachusetts here today?” asked U.S. Rep. Steve King (R-Iowa), one of several members of Congress who spoke at the rally on the Mall. “Thank you Massachusetts. Thank you for helping us kill the anti-life bill,” he said, referring to the Democrats’ filibuster-proof majority in the Senate that will be broken once Brown is sworn in.

The issue of health care reform dominated the speeches and prayers blasted over loudspeakers at the protest. More than three decades since Roe v. Wade, the anti-abortion movement has been mobilized during the past year against the healthcare reform legislation.

Sure, I can see rallying speeches that reemphasize the point that any healthcare legislation that allocates federal funds for expanding abortion coverage is immoral. But Scott Brown, last I checked, wasn’t out there marching in the chilly mid-Atlantic cold against abortion. In fact, the new junior senator from Massachusetts isn’t even pro-life. This is from his campaign website:

While this decision should ultimately be made by the woman in consultation with her doctor, I believe we need to reduce the number of abortions in America. I believe government has the responsibility to regulate in this area and I support parental consent and notification requirements and I oppose partial birth abortion. I also believe there are people of good will on both sides of the issue and we ought to work together to support and promote adoption as an alternative to abortion.

Scott Brown doesn’t oppose healthcare reform because it allocates federal funds for abortion; Scott Brown opposes healthcare reform because it is expensive:

I believe that all Americans deserve health care coverage, but I am opposed to the health care legislation that is under consideration in Congress and will vote against it. It will raise taxes, increase government spending and lower the quality of care, especially for elders on Medicare. I support strengthening the existing private market system with policies that will drive down costs and make it easier for people to purchase affordable insurance. In Massachusetts, I support the 2006 healthcare law that was successful in expanding coverage, but I also recognize that the state must now turn its attention to controlling costs.

The issue of healthcare reform and abortion is important, and it needs to be discussed. But giving speeches in support of Scott Brown complicates the simple message that the marchers should be trying to communicate, a message of simple opposition to abortion. It also opens them up to criticism from their opponents who can simply point to the fact that the man they support doesn’t actually support them. The March for Life shouldn’t be about Scott Brown, or about any congressional figure. It should be a march for the pre-born and those that remain unborn. The March for Life is supposed to be a simple, collective “NO,” to abortion; how about we keep it that way?

Maybe WWJD is the Wrong Question

I’m spending the Christmas holidays with my in-laws, and there is a lot of discussion about how different the values of the parents and the kids seem to be. The topics of these debates range from food (organic and local vs. economical) to pastimes (urban activities like yoga vs. suburban activities like golf) to alcohol consumption (enthusiastic vs. opposed) to attitudes (young-professional cold shoulder vs. Southern chattiness). Additionally, I am reading a collection of essays discussing Alasdair MacIntyre’s take on intractable moral disputes. All of this has me thinking about whether or not there is an objective universal morality, and if there is, how we figure out what it is.

During the 17th-19th centuries, people assumed that there was some sort of objective universal standard of morality, transcending history and culture, that was accessible to reason alone. This is what MacIntyre calls the “Enlightenment Project,” which he has, I think, correctly identified as a failure. Rational people simply do not agree on what sort of life, oriented towards what sort of goals, is worth living, and not, it appears, because of some failure in reason.

The project of the twentieth century, advanced most notably by John Rawls and still supported by many modern liberal thinkers, was to argue that societies could agree on basic political, social, and economic institutions and procedures independently of any comprehensive agreement about what constitutes a good life. Rather than argue rationally about metaphysics, Rawls and others argue, we should just agree to disagree and focus instead on using reason to construct a basically just society, wherein people of all mindsets and metaphysical assumptions can flourish.

Let me explain this a little. When I talk about “metaphysical assumptions,” I am not just talking about an arcane topic that pertains only to scholars. Metaphysical speculation includes questions like “what is the good?,” “what is my conception of God?,” “how, if at all, is God actively involved in human affairs?,” and “what goal is my life ultimately oriented towards?” These are not trivial matters at all, and it is the answers to these questions that ultimately provide the basis for our morality.

Say, for example, you think there is a God, and that the world is corrupt and unjust, but that ultimately God will prevail and rectify what human beings are themselves unable to. This metaphysical assumption may lead you to support more lenient penalties for convicted criminals, for example, because you believe that a human criminal justice system can only imperfectly mete out punishments, and that ultimately, God’s judgment will prevail in the assignment of eternal punishments and rewards. Or, with such metaphysical assumptions, you may be less likely to concern yourself with human-caused global warming, because you believe that the fate of the earth is ultimately in God’s hands. You may also be willing to forgo pleasure, to live a simpler and more ascetic life, in hope of maximizing pleasure in the next life.

Say another person does not believe in God or an afterlife, but rather believes that this life is all that we humans have. This person may support idealistic social programs oriented towards constructing the most ideal society possible. This person may be very concerned with the impact human beings have on the environment, on the assumption that if human beings don’t fix it, nobody else will. This person may believe in experiencing as many pleasurable situations as possible in order to “suck the marrow out of life,” since it is the only life we have.

You see where I am going. Each of my hypothetical individuals can be very rational and very intelligent and nevertheless disagree on almost everything. And so we end up with a bunch of shrill debates like the ones we have about politics in this country where liberals accuse conservatives of being unenlightened and uneducated and conservatives accuse liberals of being idealistic hedonists. If you’ve experienced a holiday gathering with a significant generation gap in values and political orientations, you know first-hand what I mean.

Rawls and others say that we will never reach agreement on those big, overarching metaphysical questions, but that we can agree on procedural claims: that goods should be allocated in such a way as not to unduly favor a privileged minority, for example, or that everyone in a given society should have enough freedom to pursue their basic goals (i.e., no slavery). The thing is, we don’t actually agree on such procedural claims.

MacIntyre argues, contra Rawls, that individual traditions with their own individual narratives can come to a rational agreement about metaphysical claims so that subsequently, they can agree about more specific moral questions and procedural claims. In MacIntyre’s own words, he offers “a conception of rational enquiry as embodied in a tradition, a conception according to which the standards of rational justification themselves emerge from and are part of a history in which they are vindicated by the way in which they transcend the limitations of and provide remedies for the defects of their predecessors within the history of that same tradition.”

But in a pluralistic world, what happens when two conflicting traditions clash? This question takes on immediate significance for me when I listen to my brother and sister-in-law arguing with their parents. My brother and sister-in-law argue that where they live, everybody has their values, their political leanings, their tastes, and their lifestyle (they live in Washington DC and live a young urban professional life). They could never come and live the suburban life in Dallas where their parents live because everybody is so different from them—they just wouldn’t fit in. Fine, but during the holidays, there is a week and a half of clashing values in practically every discussion they have with their parents, from very basic food choices to very weighty political questions like healthcare reform and abortion. Can they ever come to an agreement, or are they doomed to simply “throw up their hands” in futility and frustration at the end of each argument?

MacIntyre says that opposing groups like my in-laws can come to some broad agreements on questions of morality by adopting the standpoint of the opposing tradition to the extent possible (methodologically this is highly questionable but bear with him) and identifying irresolvable problems within the opposing tradition that could be solved by their own. MacIntyre uses the example of the clash between utilitarians and Aristotelian-Thomistic natural law theorists, arguing that utilitarians cannot come to an agreement on what constitutes happiness (physical or intellectual pleasure based on individualistic or communal assumptions), on which the principle of utility is based, but that natural law theorists who know and can apply the concept of a natural end (telos) to human existence can solve this problem. Thus, he claims, the natural law tradition is rationally superior.

Here’s a simpler and more concrete example. My sister-in-law believes that everybody should ultimately do what is in their own best interest (rational egoism), but arguably, it is such rational egoism that has led to the mortgage crisis and widespread recession the United States is now in the midst of. As a political liberal, she is forced to support financial policies based on the widespread redistribution of wealth and on self-disinterest, such as corporate executives forgoing bonuses or credit card companies voluntarily lowering interest rates (and subsequently lowering their profit margins). Now, she could try to make these political arguments on the basis of a more far-sighted rational egoism: ultimately, eliminating corporate bonuses and lowering interest rates on credit card debt is in the best interest of the parties in question, because it is in their interest to have a stable and functional economy. But it is not clear this is the case. It seems that even now, the “smartest guys in the room” are able to figure out how to make a lot of money at the expense of a lot of people and the economy as a whole, simply by acting rationally in their own self-interest.

You could argue, then, that rational egoism is rationally inferior to a system like, for example, Christianity’s “love your neighbor as yourself.” If everybody tried to serve their neighbor’s interest before their own, and lived more ascetically, forgoing unnecessary pleasures like wine, exotic travel, fashion, and fine cuisine, our country’s economy would not be in the mess it is in now.

The problem is that my sister-in-law may agree on the rational foundation of this point and yet still adhere to rational egoism in her own life. This leaves an opponent with the option of either claiming her position is irrational or that she has not actually been convinced of the rational superiority of the opposing system. Practically, the way this shakes out is that she ends up criticizing my father-in-law’s way of life for insufficiently appreciating the finer pleasures in life like wine and gourmet meals and he ends up criticizing her way of life for its excessiveness.

In order to come to some sort of rational agreement, they would need to step back and ask themselves what they mean by a good life, not accidentally, but essentially. That is, they need to ask themselves not what contingent goods they would ideally desire for their life, but what is essentially constitutive of flourishing in this life. So they don’t need to debate whether it is better to live in a city or in the suburbs, or whether it is better to eat diverse and exotic cuisines or the same Caesar salad every day. These are accidental qualities of a good life. Rather, they need to ask themselves what, in every conceivable setting (city or suburb, rich or poor, educated or not), is essential to a good life. Even debating the merits of rational self-interest vs. altruism misses the point; we need to ask what both of these systems are oriented towards. What is the goal of self-interest or altruism? What is the good that both systems are implicitly working towards?

It is the answer to this foundational question that answers the question I posed at the beginning of this post regarding the existence of an objective and universal standard of morality. If we can agree on this foundational, metaphysical question, then I think we can come to some sort of basic universal agreement on some foundational moral claims.

But I don’t think we can. I think that we might be able to agree that there is some ultimate good which we are all striving for, but I think that based on rational speculation alone, we cannot ascribe any content to this good. MacIntyre says that the good derives its thicker substantive claims within a tradition, but even that I feel is too idealistic. I think our true and substantive knowledge of the good rests on the elevation of the rational apprehensive power by the infused virtue of faith. It is faith that gives us eyes to see what we human beings are really meant to do on this earth (and consequently, it is hope and charity that give us the will and hearts to do what we are made to).

And this brings me to the title of my post. WHAT Jesus would do in any given situation doesn’t really tell us anything. The real question is WHO Jesus is. If Jesus truly is God incarnate, perfect in every way, the ultimate good, then He consumes our vision such that all other goods must be subordinated to Him. In Scripture, once people know who Jesus is (think Peter and Paul, for example), what they need to do becomes clear. Disagreements may exist, but they get worked out. This is why, I think, Paul goes out not to proclaim Jesus’ life, but rather, Jesus’ identity (see Colossians 3, for example).

Problem is, this knowledge only comes through faith. No rational arguments can convince somebody that Jesus is God. Faith is a gift. And so I think, so long as some of us have such a gift, and others are without, the disagreements will remain intractable. Reason cannot resolve our most deep-seated disputes, and maybe that’s ultimately okay. Maybe it is good that we have to be dependent on God’s grace to ultimately resolve what we cannot.

So for Christians, maybe we need to spend less time getting bogged down in intractable disputes and more time doing what Paul did—proclaiming who Jesus is: God incarnate, crucified and risen. We’ll leave the convincing to faith.

*Although I used my in-laws as examples throughout this post, my conclusion in no way reflects on them or their faith.

Overcoming Realism with the Anabaptist Vision

When Barack Obama was elected, I wrote a post on his connection with Christian realism of the Reinhold Niebuhr variety, which you can read about here.

Christian realism is basically the idea that the world is evil and that in order to fight that evil, you have to get your hands dirty. Christian realism says that an idealistic stance of non-violence allows evil to triumph over good. Although non-violence or pacifism may be an ideal, Christian realists say that this ideal must be subordinated to the utilitarian calculus of political force and violence. Augustine adopted a Christian realist position in advocating an interior ethic of love but an exterior ethic of expediency. Luther adopted a Christian realist position against the peasants in his treatise “Against the Murderous, Thieving Hordes of Peasants.” Reinhold Niebuhr was the Christian realist par excellence in his support of strong-armed Cold War politics.

In a recent op-ed, David Brooks notes that realism is still alive and well in the political philosophy of Barack Obama, articulated so very eloquently in his acceptance of the Nobel Peace Prize:

We must begin by acknowledging the hard truth that we will not eradicate violent conflict in our lifetimes. There will be times when nations – acting individually or in concert – will find the use of force not only necessary but morally justified. . . I face the world as it is, and cannot stand idle in the face of threats to the American people. For make no mistake: evil does exist in the world. A non-violent movement could not have halted Hitler’s armies. Negotiations cannot convince al Qaeda’s leaders to lay down their arms. To say that force is sometimes necessary is not a call to cynicism – it is a recognition of history; the imperfections of man and the limits of reason.

Brooks commends President Obama for a “thoroughly theological” speech which “talked about the need to balance the moral obligation to champion freedom while not getting swept up in self-destructive fervor.” Brooks, himself a Christian realist, clearly finds the president’s moral position a prudent one.

I agree that Obama did a fine job articulating a realist stance and defending his political foreign policy on respectable moral grounds. But remember the context—Obama’s realist speech, which Brooks says “was the most profound of his presidency, and maybe his life,” was given at his acceptance of the Nobel Peace Prize. The prize is meant to acknowledge those idealists like Martin Luther King Jr. who choose not to get their hands dirty, who refuse to succumb to violent tactics even in the defense of a just cause. Such prizes are meant to provide recognition and encouragement to those idealists who provide a witness for what is morally possible, even if it isn’t morally expedient.

Christians like Brooks support the president’s speech because, for as long as Christianity has existed, Christians have been more comfortable compromising with the world’s evil than resisting it with non-violent agape. Those idealistic, non-violent witnesses, minority that they are, are necessary and important reminders of the task to which Christians are called. One group of such idealistic witnesses were the Anabaptists.

The Anabaptists were a group of Christians involved in what was called the “Radical Reformation.” Concerned that reformers like Luther and Calvin were compromising too much in their political stances and failing to live up to the demands of the Christian life, the Anabaptist vision offered a new conception of the essence of Christianity as discipleship (die Nachfolge Christi), the essence of the Christian church as a community of brothers and sisters, and the essence of Christian ethics as one of agapic love and non-violence.

The Anabaptists refused to accept the state church system of which reformers like Martin Luther and John Calvin were a part. They did not participate in government for the precise reason that earthly institutions like the magistracy required moral compromise that the Anabaptists found inconsistent with the Christian life. The Schleitheim Confession of Faith, an early Anabaptist collection of beliefs, states this as an agreement to separation [from the world]:

A separation shall be made from the evil and from the wickedness which the devil planted in the world; in this manner, simply that we shall not have fellowship with them [the wicked] and not run with them in the multitude of their abominations. This is the way it is: Since all who do not walk in the obedience of faith, and have not united themselves with God so that they wish to do His will, are a great abomination before God, it is not possible for anything to grow or issue from them except abominable things. For truly all creatures are in but two classes, good and bad, believing and unbelieving, darkness and light, the world and those who [have come] out of the world, God’s temple and idols, Christ and Belial; and none can have part with the other. To us then the command of the Lord is clear when He calls upon us to be separate from the evil and thus He will be our God and we shall be His sons and daughters. . .

Therefore there will also unquestionably fall from us the unchristian, devilish weapons of force — such as sword, armor and the like, and all their use [either] for friends or against one’s enemies — by virtue of the word of Christ, Resist not [him that is] evil.

In other words, the Anabaptists did not believe that Christ came so that we could continue resisting the corruption of the world with the tools of corruption or using evil to fight evil. Rather, Christ came to liberate us from evil, and by choosing to follow Him, the Anabaptists believed we must necessarily forsake force, violence, and political power of any kind.

Because of their commitment to non-violence and the principle of worldly separation, the Anabaptists had a lot of enemies. From 1527 to 1560, the Anabaptists were severely persecuted. The 1529 Diet of Spires passed a death sentence on all Anabaptists of either sex, “[by] fire, sword, or some other way.” The 1551 Diet of Augsburg decreed that any judge or juror who had scruples about executing an Anabaptist be removed from office, fined, and/or imprisoned. As a result of these decrees, thousands of Anabaptists were executed in the 16th century, without trial or sentence. Yet, as Harold Bender writes in his classic essay “The Anabaptist Vision,”

The authorities had great difficulty in executing their program of suppression, for they soon discovered that the Anabaptists feared neither torture nor death, and gladly sealed their faith with their blood. In fact, the joyful testimony of the Anabaptist martyrs was a great stimulus to new recruits, for it stirred the imagination of the populace as nothing else could have done.

Bender goes on to conclude:

However, the Anabaptist was realistic. Down the long perspective of the future he saw little chance that the mass of humankind would enter such a brotherhood with its high ideals. Hence he anticipated a long and grievous conflict between the church and the world. Neither did he anticipate the time when the church would rule the world; the church would always be a suffering church. He agreed with the words of Jesus when He said that those who would be His disciples must deny themselves and take up their cross daily and follow Him, and that there would be few who would enter the strait gate and travel the narrow way of life. If this prospect should seem too discouraging, the Anabaptist would reply that the life within the Christian brotherhood is satisfyingly full of love and joy.

Compare this to Obama’s Nobel speech:

[A]s a head of state sworn to protect and defend my nation, I cannot be guided by [Gandhi and King’s] examples alone. I face the world as it is, and cannot stand idle in the face of threats to the American people. For make no mistake: evil does exist in the world. A non-violent movement could not have halted Hitler’s armies. Negotiations cannot convince al Qaeda’s leaders to lay down their arms. To say that force is sometimes necessary is not a call to cynicism – it is a recognition of history; the imperfections of man and the limits of reason. . . So yes, the instruments of war do have a role to play in preserving the peace. And yet this truth must coexist with another – that no matter how justified, war promises human tragedy. The soldier’s courage and sacrifice is full of glory, expressing devotion to country, to cause and to comrades in arms. But war itself is never glorious, and we must never trumpet it as such.

Understandably, Obama as head of state cannot embrace the Anabaptist vision, but I do think that the Anabaptist vision can embrace Christians who have too long capitulated to the claims of realism. David Brooks seems pleased with the theological underpinnings of Obama’s political philosophy. He writes, “Other Democrats talk tough in a secular way, but Obama’s speeches were thoroughly theological. He talked about the ‘core struggle of human nature’ between love and evil.” While Brooks may be correct in noting the theological underpinnings of Obama’s politics, Christians need to question whether those underpinnings adequately reflect the nature of discipleship to Christ.

Love and evil are not two warring powers, as Brooks so dualistically proposes. What the Anabaptist vision reminds us is that Christian love overcomes evil not by force, but by inspiration and imagination. Christian love, as lived out by the Anabaptists, provides a witness to what is best and noblest in human nature. In the wake of such love, evil simply becomes impotent. The Count of Alzey, after executing 350 Anabaptists in the Palatinate, was said to have exclaimed, “What shall I do? The more I kill, the greater becomes their number!” Barack Obama’s speech says that such love cannot ultimately triumph against the world’s evils, and that if good is to overcome evil, force will be necessary. But the Anabaptist vision says otherwise. Heinrich Bullinger, one of the Anabaptists’ enemies and persecutors, wrote that the Anabaptists taught,

One cannot and should not use force to compel anyone to accept the faith, for faith is a free gift of God. It is wrong to compel anyone by force or coercion to embrace the faith, or to put to death anyone for the sake of his erring faith. It is an error that in the church any sword other than that of the Divine Word should be used. The secular kingdom should be separated from the church, and no secular ruler should exercise authority in the church. The Lord has commanded simply to preach the Gospel, not to compel anyone by force to accept it. The true church of Christ has the characteristic that it suffers and endures persecution but does not inflict persecution upon anyone.

It is unfortunate that a peace prize meant to recognize those idealists who believe peace without violence is possible ended up rewarding a spirit of moral compromise this year. But it is even more unfortunate that Christians like Brooks think that Obama’s message is grounded in the theology of Jesus Christ. So I will conclude this post with the same words with which I concluded my last post arguing against Christian realism:

As Stanley Hauerwas notes,

Jesus’ cross . . . is not merely a general symbol of the moral significance of self-sacrifice. The cross is not the confirmation of the facile assumption that it is better to give than receive. Rather, the cross is Jesus’ ultimate dispossession through which God has conquered the powers of this world. The cross is not just a symbol of God’s kingdom; it is that kingdom come.

Jesus does not play power politics. He does not fight the evil of the world on evil’s terms. He does not use violence, power, and coercion to fulfill his mission. Nor does he expect his disciples to. Jesus invites his disciples to his own non-violent love, a love that will indeed overcome the powers of the world, but not through coercion and force.

The Anabaptist vision gives us a glimpse of what Jesus’ non-violent love actually can accomplish.

A Thomistic Argument for Labeling Retouched Media Images

Valerie Boyer, a member of the French Parliament, has drafted a law requiring all digitally-altered photographs of people used in advertising to be labeled as “retouched.” Her proposal has not yet come to a vote in the National Assembly, but has understandably initiated a debate extending beyond France.

According to the NYTimes article on the subject, the real issue for Ms. Boyer is “about her two teenage daughters, 16 and 17, and the pressures on young women to match the fashionable ideal of a thin body and perfect skin.” Boyer noted in an interview: “If someone wants to make life a success, wants to feel good in their skin, wants to be part of society, one has to be thin or skinny, and then it’s not enough — one will have his body transformed with software that alters the image, so we enter a standardized and brainwashed world, and those who aren’t part of it are excluded from society.”

Photographers and models largely oppose the proposed law, arguing that it would destroy the nature of photographic art and that it misdirects body-image concerns and eating-disorder prevention efforts toward images rather than toward other, more complex causal factors.

But EverydayThomist is on the side of Boyer, with a Thomistic argument to boot. According to Aquinas, the sense of sight is the most important of the senses (a point he derives from his Aristotelian biology). While Aquinas thinks sight enjoys an ontological superiority not shared by the other senses, a primary reason sight is so important is that it is through our vision that we come to know the truth.

This requires some explanation. Human beings, in Aquinas’ hylomorphic anthropological schema, are composed of a material body and an immaterial soul. We know the truth through our immaterial intellect. However, unlike the angels and other spirits, human beings, being corporeal, cannot grasp the truth simply through the immaterial intellect. Rather, all knowledge of the truth must be mediated through the corporeal body, and specifically, through the external senses–sight, hearing, touch, taste, and smell.

The external senses apprehend external objects, which they then communicate to the immaterial intellect. The intellect, being immaterial, cannot have knowledge of the material objects perceived by the senses unless it abstracts from these material objects to form an image in the mind, what Aquinas calls a phantasm. It is by means of this image that the mind knows. This is an important point in Thomistic epistemology that bears repeating: the mind can only know by means of the creation of phantasms.

However, the process of knowing by means of phantasms is a complex and highly dialectical one. The mind must continually return to the external senses (which apprehend the external object and are corporeally transmuted by perceiving it) in order to maintain and develop the phantasm. Think of this as analogous to apprehending a complex piece of art. As you think back on the work of art in your mind, your knowledge will be fragmented. You have to return, time and again, to the piece of art before you can truly see it in your mind’s eye, even when the artifact itself is absent. Turns out, all knowledge of externals is like this. We have to keep returning to the external object before its phantasm can be firmly planted in the mind and our knowledge of the object can be said to be true.

Multiple empirical studies have indicated a significant correlation between exposure to idealized media images and various manifestations of body dissatisfaction, including depression, anxiety, and anger. A 2003 Australian study investigated body dissatisfaction in adolescent boys and girls (aged 13-15) after viewing 20 commercials containing idealized thin female images versus 20 nonappearance television commercials. The study found that girls, but not boys, who viewed the commercials with the idealized images reported significantly higher body dissatisfaction than those who viewed the nonappearance commercials, supporting the general hypothesis that televised images of attractiveness lead to increased body dissatisfaction in adolescent girls. A 2002 study by Durkin and Paxton found that in a controlled study of seventh and tenth graders, both grades experienced a significant decrease in state body satisfaction and a significant increase in state depression attributable to viewing idealized images of females in advertising. Another 2002 meta-analytic review of 25 studies on the effect of mass-media images of the slender ideal on body dissatisfaction found that body image was significantly more negative after viewing images of thin models than after viewing images of average-size or plus-size models.

The role of the media, and specifically its espousal of the thin-ideal image of female beauty, is frequently implicated as a cause for the onset and maintenance of eating disorders, and experimental data from the last two decades seems to confirm that this is the case. Several studies confirm that body-image dissatisfaction is the most consistent predictor of the onset of an eating disorder. A three-year longitudinal study of female adolescents confirmed a statistically significant relationship between body dissatisfaction and the onset of restrictive eating behaviors.

Aquinas would not be surprised at such empirical studies. Aquinas, along with the ancients, knew that what we see influences who we are. Aquinas called this the process of becoming connatured to what we see. The strongest phantasms in our minds, the phantasms of the external objects we are most frequently exposed to through our vision, naturally influence our appetites, inclining us toward those objects in the appetitive movement of love. If we are continually exposed to thin-ideal images of beauty in popular media, the phantasms of that beauty ideal will be strong in our minds, and our appetites will be duly influenced as well. Women may be inclined toward behaviors like food restriction and over-exercise to manifest such an ideal in their own bodies. Men may be inclined toward women embodying such an ideal, thus reinforcing the knowledge (derived from the phantasm) that thinness is the ideal of feminine beauty.

Boyer’s proposal offers a way of bypassing this psychological process. By labeling thin-ideal images as retouched, the phantasm that the mind would likely create upon exposure to such images is more likely to be a phantasm of a falsely-represented external object, rather than an accurate representation of reality. The mind would not just create a phantasm of an overly-thin beautiful woman, but would accompany this phantasm with the cognitive judgment that such an image was a lie. Thus, the appetite is less likely to be inclined toward such images as good and desirable.

Now, EverydayThomist in no way thinks that Boyer’s proposal is going to solve the eating disorder problem. Eating disorders are complicated phenomena, and the representation of thin-ideal images of women in popular media is only part of the problem. But her proposal is a step in the right direction. It recognizes that eating disorders are not simply problems with food, but also problems in seeing. Transforming what we see is frequently the first step in solving problems in what we do.