Thursday, September 13, 2018

Music as a Double-Edged Sword

Music is a double-edged sword that can create and destroy. On the one hand, it can create and preserve religious community. On the other, it can destroy or undermine the very community it creates. Music creates the religious community since it fosters a shared emotional life. Those who sing together worship, learn, and grow together. They sing praises to God—the very crux of worship. They also explore basic religious concepts through a medium that drives those concepts from the head into the heart.
Not only does music bind the community together, it also binds the community to its past. Religious music, like all music, was composed by a specific individual at some point in the past. The particular composition is a projection of its composer's inner life. When religious believers sing, they unite with the inner life of the composer in a deeply personal way. Further, religious believers also unite with past congregations. After all, a religious community adopted the composer's composition as expressive of its own inner life. Thereafter, congregation after congregation, saint after saint, has relied on the song for spiritual meaning and continuity. In other words, when the saint sings, she sings a song that binds her not only to her current community, but to past believers. And so deep is that binding that it brings believers, past and present, into deep emotional connection. To borrow a concept from Brigham Young, when the saint sings, she forms an unbroken chain, or choir, back into the distant past. Religious music is, in this sense, a true at-one-ment. An at-one-ment that binds a believer to a community, past and present. This is the awesome creative power of music: at-one-ment.
But music can also destroy religious communities, preventing individuals from being able to worship or grow in religiously significant ways. Individuals who adopt musical anthems diametrically opposed to religious life quickly find that they can no longer worship in the same ways as their peers. For such individuals, not only does religious music lose its value, but it no longer possesses the same emotional force. Further, when an individual adopts music opposed to religious life, he shapes and changes his heart in ways not amenable to religious sensibilities. In short, depending on the type of music and the level of immersion, an individual can divide himself from his religious community.
And, in fact, such destructive music not only severs the individual from his contemporaneous community, it also severs him from a tradition. He no longer sings the songs that his ancestors sang. He no longer worships, learns, or matures through those songs. He divides himself from that life, to establish a new form of life. This is the awesome destructive power of music: separation.
Why does music simultaneously hold destructive and creative force? Plato, as in so many things, provides an answer. In The Republic, Plato introduces the concept of θυμός or Thumos. The word is difficult to translate, but it essentially means “spiritedness.” For Plato, Thumos is one part of the tripartite psyche and frequently manifests itself as righteous indignation, as in political revolutions. It is that aspect of the psyche that is driven to correct perceived injustice and oppression. Thumos, then, is profoundly emotional while expressing an intellectual component. In my opinion, Thumos holds a unique relationship to art for this very reason. After all, artwork, music included, is at once uniquely emotional while expressing a strong intellectual component.
It is perhaps for this reason that Plato wanted to closely regulate music in his ideal polis. Plato, in fact, prescribes certain types of music for classes of citizens. He prescribes a “spirited” form of music for the guardian or military class, while reserving a form of music amenable to peace or community for other citizens.

I don’t know the musical modes, I said, but leave us the mode that would fittingly imitate the utterances and the accents of a brave man who is engaged in warfare or in any enforced business, and who, when he has failed, either meeting wounds or death or having fallen into some other mishap, in all these conditions confronts fortune with steadfast endurance and repels her strokes.  And another for such a man engaged in works of peace, not enforced but voluntary, either trying to persuade somebody of something and imploring him - whether it be a god, through prayer, or a man, by teaching and admonition - or contrariwise yielding himself to another who is petitioning him or teaching him or trying to change his opinions, and in consequence faring according to his wish, and not bearing himself arrogantly, but in all this acting modestly and moderately and acquiescing in the outcome.  Leave us these two modes - the enforced and the voluntary - that will best imitate the utterances of men failing or succeeding, the temperate, the brave - leave us these.[1]

This regulated music would reinforce certain emotional and psychological traits desirable for those classes of citizens. Plato’s logic seems to assume that music arises from the deepest reaches of human emotion as a potent inner force projected outward onto the world. Since music has its natural home within the human psyche, it easily penetrates the psyche of its hearers, shaping them for better or worse. As such, if music arose from an inner life of hatred, it would generate hatred within those who hear it. Likewise, if music arose from more noble emotions, it would ennoble.
If Plato’s arguments hold true, music bears a close connection to the psyche. As such, a religious community, just as a polis, must guard musical content lest it corrupt believers. And that is precisely what religious communities have done. In the High Middle Ages, Latin Christianity closely guarded the types of musical compositions and themes that were permissible for worship. Likewise, modern Mormonism closely controls the forms and themes of permissible musical compositions, canonizing certain music as the only acceptable music for worship. Such music ennobles. It educates and shapes the emotional lives of individuals, binding them together as a cohesive community. In this way, such music is akin to the music Plato spoke of as fostering peaceable community. In fact, religious music, as noted above, makes at-one-ment.

But music can also sever and degrade a community. In my mind, Beethoven's compositions stand as an apt example of the destructive capacity of music. As noted above, in the High Middle Ages, ecclesiastical and political powers commissioned and controlled the highest expressions of music. Beethoven declared war on this tradition. Beethoven, more than any composer before, rejected institutionally prescribed musical forms, turning music into the sole expression of an individual. His music was not written for a community, but for himself. It was his dirge against a stifling form of community, dividing him from that community and tradition. Unlike religious believers, he could not merely adopt the music created by another. Instead, he needed to create a new form of music, which subsequent composers have adopted. Beethoven’s music is, in this sense, akin to the spirited music Plato ascribes to his military class. It is Thumos, in that it confronts a community with an individual's will, and, in so doing, divides that individual from his community. And, in fact, such music not only severs the individual from a contemporaneous community, it also severs him from a tradition.

If these arguments hold, they provide important guidance for Mormonism. First, Mormonism must continue to use music for creative purposes. Saints must be able to adopt such music as an expression of their own inner life, creating a space where the saints can bind themselves to one another and to their sacred past. Second, and relatedly, Mormonism cannot provide a narrow form of musical life. If the permissible forms of music—music through which an individual can worship, learn, and grow—are limited, individuals will venture outside the community to express their emotional life. When individuals so venture, they more often than not adopt a form of music that does not arise from the inner life of a saint. Such music will shape the heart and mind of the wandering saint, dividing him from his religious community. Thereafter, he will no longer be able to worship, learn, or grow with the saints. Such is the creative and destructive capacity of music, and those communities that ignore it do so to their own detriment.[2]




[1] Republic 398d-399c.

[2] It is worth mentioning that even though I've discussed music in this post, the arguments made above apply more broadly to any form of artistic expression.

Tuesday, September 4, 2018

Utilitarianism and Altruism

     We revere selfless action. Jesus of Nazareth, Mother Teresa, and Francis of Assisi invoke awe precisely because they "died as to self" in the service of others. This awe reveals a core human value: altruism (other-centered action). Given this value, most people find forms of ethical life that reject altruism unacceptable. This explains the common disdain for egoism as an ethical theory. This theory--in its strongest form--posits that one has a moral obligation to promote his own good, such that failing to do so is immoral.[1] If egoism were true, Mother Teresa lived a deeply immoral life, to the extent she lived altruistically.

      Like others, I find egoism to be problematic precisely because it fails to make room for true altruism. For the same reason, I find utilitarianism to be problematic. Utilitarianism treats an act as right or wrong depending on its consequences. Good consequences make an act right; bad ones make it wrong. Jeremy Bentham, an English philosopher, promoted a hedonic form of utilitarianism. According to Bentham, when an act brings about the greatest pleasure for the greatest number of people, it is right. But when an act results in pain without a net gain in pleasure, it is wrong. Famously, Bentham recommended employing a hedonic calculus in assessing competing actions. This calculus asks an agent to consider the intensity of pleasure that will result, its duration, etc.

     Bentham hoped that utilitarianism would provide an ethic capable of improving the lot of the common person. Legislators, judges, and executives act ethically, under the theory, not when they conform to abstract rules, but when their actions yield the right consequences, in terms of pleasure or pain. Framed in this manner, utilitarianism seems altruistic. It takes the standpoint of others as primary and demands that our actions ensure the well-being of others.

     But in so doing, utilitarianism leads to a paternalistic altruism--an altruism unworthy of the name. This paternalism is evident upon careful reflection. Imagine a legislature contemplates making church attendance mandatory. In arriving at this decision, it consults psychological and sociological research that reveals the benefits of religious observance. From this consultation, the legislature justifiably concludes that mandatory church services will maximize pleasure throughout its jurisdiction.

     But before enacting this law, the legislature opens its proposal to public comment. Citizens throughout the jurisdiction object. Many claim to be agnostics or atheists, strongly believing that church attendance will not contribute to their overall well-being. Despite these objections, the legislature moves forward and enacts its proposed legislation. Some time later, the accuracy of the legislature's research is borne out as citizens throughout the jurisdiction experience greater pleasure, happiness, and well-being from the legislation, a gain not offset by those disliking the legislation.[2]

     Though the legislature acted for the benefit of its citizens, it did so paternalistically. The legislature, after all, imposed what was best on its citizens without due regard to their desires. And while, in the end, the legislature's course of conduct proved to maximize pleasure, it did so at the expense of the expressed wishes of those it aimed to benefit. This strikes me as decidedly anti-altruistic.

     Altruism, at a minimum, must involve a respect for the inner desires and hopes of others. When an agent contemplates some act, he acts altruistically precisely when he incorporates the other's perspective and fashions his conduct accordingly. This, of course, has limitations. When the other's perspective, for example, commends immoral conduct, one does not act altruistically by humoring the improper act. But utilitarianism fails to heed the other's perspective at all. It determines what is in the best interest of another and pursues that thing, often despite the other person's wishes.

     A utilitarian will likely bite the bullet in response to this argument. He will note that while utilitarianism does not take the inner desires of others seriously in all instances, it does foster altruistic action because it seeks what's best for others. Whether this is a legitimate form of altruism depends, in the final analysis, on whether the person receiving this paternal charity experiences it as altruistic. For my part, I would be willing to bet that the citizens of our thought experiment would not experience the legislature's action as altruistic, but as a paternalistic assault on their autonomy. And as the ethical theory seemingly stands in tension with legitimate altruism, it is suspect, in my judgment.[3]


________________
[1] In a weaker form, ethical egoism merely provides acting for one's own benefit is moral. It does not take the next step and claim that failing to act for one's own benefit is immoral.
[2] This thought experiment reveals a related problem systemic to utilitarianism--it fails to take individual rights seriously.
[3] Not all forms of utilitarianism are amenable to this analysis. I also note that being inconsistent with altruism does not automatically render an ethical theory false. It does, however, make it implausible.

   

Sunday, September 2, 2018

The Wisdom of Hell


            The doctrine of hell has, I suspect, fallen into disrepute among many contemporary Christians. God—these Christians believe—cannot be simultaneously loving while condemning souls to eternal punishment. Further, it may be argued, belief in hell leads to self-loathing and toxic perfectionism, which prevents a Christian from experiencing God’s love. Better for all to excise the doctrine from Christian discourse, or at least to minimize it. This will, it’s supposed, lead to a healthier relationship with self, others, and God.
            In this post, I wish to set aside the theological justification of this softer, contemporary view of hell. I also wish to set aside the question of whether hell actually exists. Instead, I want briefly to explore the existential consequences of belief in hell. Put differently, does belief in hell yield valuable outcomes in our lives? Provocatively, I argue that it does. In particular, I will contend that belief in hell makes life more meaningful. By logical extension, this means that life without a belief in hell lacks the full measure of meaning it otherwise could bear.
            To see this, I believe an analogy is in order. Imagine a general, faced with the prospect of fielding troops. If he fields the troops wisely, lives will be lost, but a victory gained. If he fields them unwisely, high casualties and defeat will result. Alternatively, then, he could shirk the battle altogether. But this too will impose costs. He will not gain the benefit of a victory and may jeopardize the overall war effort.
            Confronted with competing alternatives, each bearing momentous consequences, the general’s every decision is meaningful. Much is at stake. And it is precisely because much is at stake that the costs of his decisions are raised from the realm of the trivial to that of the meaningful. Consequences in practical action are, at least in part, the bearers of meaning. Remove those consequences and you eliminate or diminish the meaningfulness of practical action.
            It is, perhaps, for this reason that Ivan, in Dostoevsky’s The Brothers Karamazov, claims that “if God does not exist, then everything is permitted.” To be sure, Ivan was linking the non-existence of God (and by extension the non-existence of hell) with the elimination of ethical duties. But there is, in his statement, the caution that without God, social obligations become a farce, sapped of ultimate meaning. 
            Relatedly, Book of Mormon prophets warn against losing a belief in hell. Nephi tells us that in the last days, many will say,
[e]at, drink, and be merry; nevertheless, fear God—he will justify in committing a little sin; yea, lie a little, take the advantage of one because of his words, dig a pit for thy neighbor; there is no harm in this; and do all these things, for tomorrow we die; and if it so be that we are guilty, God will beat us with a few stripes, and at last we shall be saved in the kingdom of God.
(2 Nephi 28:8 (emphasis added)). 
Later in the book, Alma gives a flesh-and-blood representative of this teaching in the person of Nehor, who taught that “all mankind should be saved at the last day, and that [the people] need not fear nor tremble, but that they might lift up their heads and rejoice; for the Lord had created all men, and had also redeemed all men; and, in the end, all men should have eternal life.” (Alma 1:4). For Nehor, then, fear of hell should not sting the conscience nor restrain the body.
Aside from the truth of this teaching, the existential consequences seem clear. Belief in hell raises the stakes. It imposes risk on the life one chooses to live. Choose wisely, like the general discussed above, and a victory will result. Choose unwisely and defeat will follow. So situated, confronted with the eternal consequences of our choices, our lives become more meaningful than they would otherwise be. For if no choice made any difference in the grand scheme of things, then no choice would ultimately be meaningful. Lie, cheat, and steal, for in the end, the outcome will be the same. Wisely, then, Lehi counsels that “it must needs be, that there is an opposition in all things.” For without such opposition, “there would have been no purpose in the end of its creation." (2 Nephi 2:11, 12). And what better way to highlight that opposition than with a belief in punishment in the hereafter?
Some may resist my argument, noting that on the whole, belief in hell yields negative existential consequences. As noted above, it can be argued that the doctrine of hell leads to self-loathing and toxic perfectionism. This, I acknowledge, can occur. But I do not believe it to be the result of a belief in hell, but of an improper picture of God as eager to condemn souls to hell. The proper Christian belief views God as essentially loving, willing to forgive, and readily willing to impart grace to bring us to salvation, even where our efforts fall short. The fully meaningful Christian life, therefore, has both these components: a belief in hell and a belief in God who desires, more than anything else, to rescue us from hell on condition of our responding to His voice. Hell is, therefore, part of a greater whole in Christian doctrine. And without that part, meaningfulness in life is lost. That, in my judgment, is too high a price.

Saturday, August 25, 2018

Do Latter-day Saints Worship God?



     Do Latter-day Saints worship God? This question has an obvious “yes” answer. But this obvious answer obscures a theological nuance that makes the justification for worship in the Mormon tradition unique among Christian denominations. In particular, I will argue that Latter-day Saints do not unconditionally worship God; their worship is conditional upon God possessing certain attributes.
     To see this, we must recognize that Mormon theology envisions a contingent God. This means that God has not always existed as God. He has instead become God through a process. Joseph Smith made this point explicit in his King Follett Sermon: 
I am going to tell you how God came to be God. We have imagined and supposed that God was God from all eternity. I will refute that idea, and take away the veil, so that you may see. . . . It is the first principle of the Gospel to know for a certainty the character of God and to know . . . that he was once a man like us.
Many prophets and apostles after Joseph expressed similar views. Brigham Young, for example, claimed that God is “the Father of our spirits, and was once a man in mortal flesh as we are, and is now an exalted being.” (Journal of Discourses, 7:333). Similarly, Elder Melvin J. Ballard taught that “[i]t is a ‘Mormon’ truism that is current among us and we all accept it, that as man is God once was and as God is man may become.” (General Conference, April 1921).

     This process of transformation could, furthermore, run in the opposite direction. For as the book of Alma reminds us, if God attempted to rob justice, he would “cease to be God.” (Alma 42:13). Succinctly stated, God was once a man. He developed into deity. And He could, by His own volition, cease to be God. He is, therefore, contingently God, for His divine status is contingent upon His having developed into and remaining as a certain type of being.

     Why does this fact about God hold relevance to the question posed above? Because if He is contingently God, then our worship is focused on what He is, not who He is. Our worship, to phrase the matter differently, is conditional, depending on God having realized the attributes of moral perfection, almighty power, and all knowledge. If tomorrow, God ceased to be God, losing those attributes, it would be improper to worship Him. Worship is, after all, properly reserved for the deity.

     Apostle Orson Pratt—one of the original Apostles ordained by Joseph Smith—came to a similar conclusion, though based on different reasoning. Specifically, he claimed that “[w]hen we worship the Father, we do not merely worship His person, but we worship the truth which dwells in His person.” This truth, for Orson, is the Holy Spirit, the highest member of the Godhead,[1] and “[p]ersons are only tabernacles or temples, and TRUTH is the God, that dwells in them.” This move permitted Orson to defend monotheism and the omnipresence of God. For “if the fulness of truth, dwells in numberless millions of persons, then the same one indivisible God dwells in them all.” And “[a]s truth can dwell in all worlds at the same instant, therefore, God who is truth can be in all worlds at the same instant.” Orson Pratt, therefore, held that we do not merely worship our Eternal Father; we worship the Truth that dwells within him. If Truth ceased to dwell in God, we would, it seems, be obligated to refrain from worshiping Him.[2]
     Contrast Mormonism’s view of God with the classical Christian view. As I use the term, classical Christian theism is the Christianity of the creeds. Classical theists believe that God exists necessarily as God. He resides outside space and time. As such, he is unchanging. He always has been and will never cease to be all powerful, all knowing, and morally perfect. Further, classical theists hold to the doctrine of divine simplicity. This doctrine, in part, teaches that the “attributes” of power, knowledge, and goodness, are not mere add-ons or properties that attach to God. They are an essential part of who He is. He is identical with these properties. Thus, classical Christians worship a being who is essentially and necessarily good, powerful, and wise. As a result, worship of God in classical theism does not depend on God coming to possess certain properties. Unconditional worship is proper as He has been and will always be the type of being who is worthy of worship.
     This contrast between Mormonism and classical Christianity reveals the distinctive theological foundations underlying worship within Mormonism. Several objections could be made to my position. Let me respond to a few. First, some may argue that my depiction of God within the Mormon tradition is inaccurate. Latter-day Saints, the objector may continue, also believe in a God that exists necessarily as God. In response, I note that some respectable scholars take this approach. But it requires a nuanced interpretation of the King Follett Sermon and other texts. For reasons I cannot develop here, I find this objection unpersuasive. Joseph really did believe God has not always existed as God. And he really did teach that we too could become as God. To the extent, however, that my view of God in the Mormon tradition is incorrect, my argument above fails.
     Second, a person could object that my argument leads one to worship not God, but His attributes. This, I believe, is mistaken. To see this, an analogy is needed.[3] When a violinist becomes a virtuoso, we do not focus our respect and honor on the attributes that person has actualized—the property of virtuosity. No. We focus our respect and honor on the violinist. For she is the person who has realized that property.
     Something similar occurs in relation to God. Our worship of God is conditional, depending vitally on his realizing certain character traits. But this does not mean that we worship His attributes. We worship Him. He is, after all, the one who realized these attributes. We laud, praise, adore, honor, respect, and love Him because He became divine. To be sure, if he ceased to be divine, our worship would no longer be proper. But insofar as it is proper, we worship Him, not His attributes. 
     Third, and finally, a person may object that the God of Mormon theism is not worthy of worship at all. This is a criticism made by classical Christians, and I do not have the ability to venture a complete response to it here. I do note, however, that notions of worship worthiness vary between peoples. I find the idea of a God who prepared the way for me to become His peer inspiring. A Christian, in response, may attempt to transform the question of what type of being is worthy of worship into a normative one, arguing that while people differ in what they personally find worship worthy, there is one standard of worship worthiness. This response will, in my judgment, beg the question as it will rest on theological notions of perfection and goodness that I, as a Latter-day Saint, do not share.
     In the end, Latter-day Saints do worship God. But that worship rests on a different theological foundation than it does for other Christians. Some may find this foundation disturbing. But for the Latter-day Saint, it is nothing more than the proper view of deity—a contingent God who is worthy of our worship because of what he has become.



     [1] Terryl Givens, in his recent volume Wrestling the Angel, provides a fuller discussion of how Orson Pratt placed the Holy Spirit as the highest member of the Godhead. See Terryl Givens, Wrestling the Angel (New York: Oxford University Press, 2015), p. 126 (noting that for Orson, "the original divine entity was not God the Father"; rather, "the Great First Cause itself" consisted of "conscious, intelligent, self-moving particles, called the Holy Spirit").

     [2] Orson Pratt makes these observations in "The Pre-Existence of Man," published in The Seer, Vol. I, No. 2, February 1853, paragraph 22.

     [3] I borrow this analogy from a wise friend, Benjamin Leto.

Sunday, August 19, 2018

Ascending Mt. Moriah: Divine Command Theory, Human Free Will, and Ecclesiastical Authority


This past week, I posted about how morality without God rests on a form of self-deception. This post, to my surprise, gave rise to a discussion among my friends about Divine Command Theory, which I abbreviate as DCT in what follows. That discussion has caused me to think more deeply about DCT. In this post, I explore new thoughts—at least new to me—on how DCT influences our understanding of ecclesiastical authority. I also discuss other aspects of DCT. What follows is not carefully and analytically argued. Much of it may prove objectionable. It is not intended as an airtight argument, but as my general impressions of DCT’s problems. To discuss those problems, though, I must first briefly identify what DCT is.

Divine Command Theory
Divine Command Theory (DCT) is a metaphysical theory about what makes an action right or wrong. Advocates of the theory hold that when God commands something, it is obligatory to do that thing. So if God commands Abraham to sacrifice Isaac, then it is morally obligatory (and morally praiseworthy) for Abraham to plunge the dagger into his son.

This view stands in conflict with Moral Realism. Moral Realism, for our purposes, comes in two types: Platonism and Divine Goodness Theories (DGT). Platonism posits universal, eternal moral realities that exist independently of God. These realities fix what is right and wrong, good and evil. When human agents grasp that murder is wrong, they grasp a universal, eternal law that exists independently of what any agent believes. Christian theologians have been reluctant to accept Platonism, as it invokes realities that God did not create and to which he is subject.

DGT, in comparison, imports these “Platonic” moral realities into God’s nature. Stated differently, under DGT, God’s nature is essentially good. Thus, when human agents grasp that murder is wrong, they grasp God’s nature, achieving a type of intellectual union with it. This theory, importantly, is different from DCT. For theorists of DGT (keep up with the acronyms) say that God’s nature is fixed and essentially good. God cannot change his nature tomorrow to make murder consistent with it. But DCT roots God’s goodness in his will, which is not fixed. God can command murder one day and revoke that command the next.

One problem a proponent of DGT must face is a lack of divine omnipotence. DGT, after all, maintains that God’s nature is fixed and that God must act consistent with it. God cannot alter his nature. He is what he essentially is. Nor can he act contrary to that nature, for he is essentially good.

Most proponents of DGT address the problem of omnipotence by claiming that acts inconsistent with goodness are not acts of power. Thus, murder is not an act of power. Acts of power, under this understanding, express goodness. And murder does not express goodness.

Advocates of DCT reject this solution. They instead claim that God is not constrained by his nature. He can command anything. At one time he may say, “thou shalt not murder,” and at another, “thou shalt utterly destroy.” If he is not constrained by a fixed nature, he can be omnipotent in the strongest sense of the term. William of Ockham, a well-known advocate of DCT, made this move to preserve divine omnipotence. But in resolving the problem of divine omnipotence, DCT creates new ones.

Problems with DCT
The problems created by DCT are manifold. Many may prove soluble. For my purposes, I will focus on two types of problems. The first type of problem concerns how DCT distorts the divine nature. The second type concerns how DCT distorts our understanding of free will and ecclesiastical power.

Problems with the Divine Nature
Regarding the first type, DCT safeguards divine power by making goodness purely subjective: goodness is what God decrees. So if God commands the murder of children, it would be morally obligatory and morally good to carry out that command. Under this theory, we should praise those who comply with God’s command.[1]

But we intuitively view the murder of children as deeply wrong, no matter who commands it. By the light of our own reason, we would judge a god who commanded such a thing to be a god engaged in immoral acts. This, to my mind, reveals that DCT fundamentally misunderstands the foundations of moral goodness and moral obligations.

But an advocate of DCT will contend that we simply err in our moral judgments when we judge the divinely commanded murder of children to be wrong. In making this move, the advocate owes us an error theory. In philosophical parlance, an error theory explains why we err in our judgments. Take the analogous case of free will. Some philosophers deny that we have free will. In so doing, they must confront the fact that we experience ourselves as endowed with the capacity for free choice, and they must explain why that experience misleads us.

Similarly, in this case, the divine command advocate must explain why we err in seeing God’s commands as immoral. If God commanded the murder of a son, why do we experience this event as unethical? Is it merely due to societal convention that we view such an act as wrong?

Answers along these lines are plausible enough. They reveal, however, another problem for DCT. For the theory imagines a world where our moral judgments may be radically wrong. When you and I intuit the murder of a child as wrong, we could be mistaken; God may have commanded it. And if we could be wrong about that judgment, we could be wrong about a whole host of pedestrian judgments. Furthermore, we could be right about one judgment today (murder of children is wrong), and wrong about it tomorrow when God decrees the opposite.

More deeply, then, the God of DCT may have created us in such a way that we cannot recognize good and evil. This, in turn, has some drastic implications for human culpability for sin, an issue I do not grapple with here. Instead, I am more interested in the effects of DCT in more practical affairs. And it is to that issue that I now turn.

DCT’s Effect on Free Will and Ecclesiastical Authority
In addition to the problems listed above, I believe DCT gives rise to several more practical problems. First, it distorts our view of human free will. For our understanding of human free will follows our understanding of divine free will. This is not a necessary truth, but a historically verified contingent truth.

Before DCT gained widespread acceptance among theological circles, Moral Realism of the DGT variety held sway. DGT, as discussed above, modeled divine free will as constrained by the divine nature. God cannot murder a child, as it is inconsistent with his nature. Those who held this view also held an analogous view of human free will. The proper use of freedom, for these theologians, was conduct in conformity with human nature. But human agents, unlike God, are fallible and weak-willed. We often commit acts that are inconsistent with our God-given natures; whereas God, who is infallible, perfectly realizes his nature.

When DCT gained traction, a new model of human freedom arose. Now, human freedom was not the power to actualize a human nature, but the freedom to express one’s will. This, strangely, has caused us to view the proper exercise of human freedom as unconstrained by any moral laws. True freedom is not freedom to actualize moral laws grounded in essential human nature. No. True freedom is freedom from any constraints on the human will.[2]

DCT has also, in my judgment, had an impact on the notion of ecclesiastical authority and legitimacy. Those who embrace DCT see God’s authority as flowing from his power, not from his goodness. This may be a controversial point. But if God does not have a fixed, essentially good nature, then his authority is grounded in the exercise of his will. Whatever he wills is good because he wills it. Stated differently, we give assent to God’s commands because those commands flow from the exercise of God’s power, not because they flow from an essentially good, divine nature. And where we give assent to God because of his power, we worship God because he is powerful.

This, it seems to me, holds some negative implications for ecclesiastical authority. For parishioners who come to see a particular person as uniquely connected to God will come to treat that person's authority in a way analogous to the way they treat God's authority. They will come to believe that this person's authority is legitimate because it is an exercise of authority.

I believe this is exactly the wrong orientation. I believe ecclesiastical authority is legitimate only where it is moral. Put differently, legitimacy flows, at least in part, from morality, not solely from authority. A man’s pronouncements are not authoritative merely because he thunders them from atop a figurative ecclesiastical Mt. Moriah. To be sure, we must give latitude to God’s divinely appointed leaders. But that latitude has limits—limits I do not fully explore in this already lengthy post.

Conclusion
In the end, many who suppose themselves advocates of DCT are actually driven by a sense of epistemic humility. Namely, they trust God’s judgments more than their own, including when those judgments are given through earthly authority. This is admirable, and I too strive for that humility. But we should not equate epistemic humility with divine command theory. The former actually assumes, I believe, an objective moral order that God is better able to perceive; the latter denies any such order and any such perceiving, holding instead that God’s commands generate the moral order.

As I hope to discuss in my next post, this view, in its starkest form, is inconsistent with Mormonism’s view of God. I also reemphasize that what I have said above may be objectionable in many particulars. Some of my readers may wish to defend DCT, and there are good responses to my comments above. But I will ultimately argue that DCT is a flawed metaphysical theory inconsistent with Mormon theism.



* There is a great deal of nuance among divine command theories. I cannot explore all of those nuances here, and some may rightly accuse me of creating a straw-man version of DCT. But the version I present is the most basic articulation of the theory.

[1] The theory also, I note, distorts divine goodness, leading to the conclusion that God lacks moral virtue, because a moral virtue would have to be defined as a habit of doing whatever God commands.
[2] For those interested in this point, I commend David Bentley Hart’s book, Atheist Delusions, where this idea is explored in greater detail.

Sunday, August 12, 2018

The Atheist's Noble Lie

     Morality rests on a lie. We labor, after all, to develop virtuous characters capable of compassion, honesty, selflessness, and trustworthiness. In all of this, we aim at ideals that we can never perfectly actualize. When we strive for moral improvement, the ideals recede from us, always just out of reach. We never achieve perfect compassion or become wholly selfless. And we know we never will in this life. Regardless, we hide this knowledge from ourselves, recognizing that if we too carefully observe our own moral limits, we may slacken our push for moral growth. This seems paradoxical. Morality, which (by most estimations) abhors lying, is founded on self-deception, a form of lying.

     Or at least it does for one who does not believe in God. For the theist can recognize his or her own powerlessness in the face of morality's demands, and yet strive for perfect virtue, secure in the knowledge that God will, in the end, bring about perfect conformity with ethical ideals. And this "in the end" is key. The theist knows life continues beyond this moonlit sphere into a brighter and holier one. Given this knowledge, he or she may strive in God with all vigor, undeterred by the inability to perfectly actualize moral ideals here and now. When morality rests on God it is not founded on a lie.

Saturday, August 11, 2018

The Nominal Fallacy and the Loss of Personal Relationships



            A fallacy is a form of faulty reasoning. These forms litter our thinking, and one who seeks truth must exercise great caution to avoid them. Take the democratic fallacy (argumentum ad populum) as an example. This form of reasoning holds that a proposition is true because most people believe it to be true. This model of reasoning can lead us to err: if 9 out of 10 people believe the North Star resides in the southern hemisphere, those people are still wrong. The truth of a proposition is not up to a vote.

Perhaps a more pernicious fallacy is the nominal fallacy (from the Latin nomen, meaning name). This fallacy posits that when you apply a name (or label) to something, you have explained it. Put differently, a name transmits explanatory content. So framed, this is a descriptive form of the fallacy: a name describes (i.e. explains) that to which it applies. But I believe the fallacy also occurs in a prescriptive or normative form—a point of relevance to what follows. In the prescriptive form, a name both describes (explains) and evaluates.

The nominal fallacy—in its descriptive and prescriptive forms—permeates all intellectual endeavors. To elaborate, imagine a person attaches the label “democracy” to a particular society. Has he thereby descriptively explained that society or its form of social organization? No. Has he even prescriptively fixed its value or worth as a form of social life? No. The name itself carries no explanatory or normative information. Understanding and evaluating a society requires careful assessment of its institutions, cultures, and mores.

But as intellectually lazy creatures (and we all are), employing a label as a shortcut to knowledge is satisfying. Labels free us of the labor necessary for authentic understanding. In freeing us, however, they deceive us, leading us into an encounter not with the reality we try to understand, but with a pale conceptual counterpart that bears a cartoonish resemblance. The nominal fallacy, therefore, confers the gift of cheap, pseudo knowledge.

Aside from the epistemic error flowing from this form of reasoning, it also carries with it a deeper, existential problem. Specifically, when we use labels or names to explain a person, we no longer enter into a personal relationship with him or her. We instead strap a cartoonish, abstract concept onto that person, which shields us from a direct encounter. So shielded, we do not see a flesh and blood person, a unique individual, but a mere representative of some pernicious or virtuous class or group.

Who hasn’t heard a political opponent charging his interlocutor (if that term even properly applies) as a “racist” or a “liberal snowflake”? In taking up this debate tactic, the political opponent believes he has explained his interlocutor while revealing something about his interlocutor’s worth. But the nominal fallacy, as already noted, bears the cheap gift of pseudo knowledge. It gives rise to an illusion in the mind of the political opponent—an illusion that makes him feel virtuous.

He is far from it, though. Not because he errs, but because he has severed himself from a direct encounter with the person he maligns. His interlocutor is not a person with a family and career, with hopes, desires, and fears. No. For the political opponent, his interlocutor is a racist or a liberal snowflake—a manifestation of an abstract class or group. And the interlocutor knows he has not been treated as a person, but as a proxy for something worthy of derision.

The nominal fallacy—while easy and satisfying—is, I believe, a form of intellectual cowardice. One who employs it is never forced to look another person—a real individual—in the eyes. He instead dehumanizes the other and then looks at a mere object of his outrage. This form of cowardice, therefore, makes political conversations both easier and more difficult. Easier because it frees participants from any meaningful intellectual work. More difficult because it raises the cost of engaging in any type of dialogue whatsoever. As responsible citizens, we must excise this form of thinking like a cancer. Our republic—which depends on our having difficult conversations in a respectful way—cannot long survive its presence.