A week ago I received an invite from a friend to meet him and some others for dinner that evening. I promptly told him sure and went about my business until it was time to get ready. Stepping out of the shower, I reached for my phone, realized I didn’t know where the restaurant was, and texted him back: “Hey, where is X?” His reply had Earth-shaking implications.

He sent me a link from lmgtfy.com. That is, he sent me a snarky, albeit well-intentioned rebuke of my less-than-self-reliant approach to not knowing something. Why should I waste his time asking a question when I could’ve just googled it? So my fiancée, he, and I got a good laugh out of it, and life went on.

A few days later, someone posted a question on Facebook about what word processor to use on a Mac. I felt compelled, nay obligated, to respond to her question with the same response my friend sent to me. The pertinent jeer of lmgtfy.com proved useful again, and suddenly I was struck by the implications of my own actions.

Are questions worth asking any more? I asked Google.

[Screenshot: Google search results, March 8, 2013]

Apparently not every question comes up with a worthwhile response. Why then would we still feel bothered by having to respond to a question when someone could just look it up and discover the answer for themselves? It seems to depend on the question. So I asked a few more.

In Isaac Asimov’s short story, “The Last Question,” a system that might as well be the internet is used to answer all manner of useful queries. In the story, one person in every generation or so asks a question of real fundamental weight. Anyone familiar with the 2nd Law of Thermodynamics knows the question posed and its weightiness: “Can entropy be reversed?” If you don’t know about the 2nd Law of Thermodynamics or what entropy means, then I have a lovely link for you right HERE. There is a critical mass of information at which even the incalculable can become calculable. Even when I asked my first question above, Google attempted to give me its “sufficient data for a meaningful response.” So while we may not be ultimately satisfied with the answers we receive from the knowledge nebula to the ultimate questions (think: 42), we are still given 800,000 potential loci for something more subjectively meaningful.


So what? What does this mean for us? Even as I’m writing this, one of my favorite editorial sites is publishing an article on roughly the same thing.

A week prior to the dinner in question, I went out with the same friend, and we had a discussion about moral obligations with regard to knowledge. My position was this: with information increasingly available, are people now morally obligated to know certain things about the world? I mentioned that whether we want to admit it or not, we already have a standard for this. Even prior to the internet, saying “I don’t know” to some questions would warrant some contempt. If you had no position, or even a standard wealth of knowledge, on slavery in the 1860s, you might be looked down upon. Now that we have Google, is it okay for people not to know about certain world events or discoveries? My friend was more sympathetic to the ignorant amongst us. He thought there were plenty of cultural and environmental reasons why people wouldn’t know something or wouldn’t feel compelled to know it. Should we fault the rich for not knowing the needs of the poor? Should we fault the impoverished for not knowing about macroeconomic trends and issues? Do high school students in Alaska need to know about heat waves in Texas? Should Americans know what’s going on in Syria? It all seems to depend on what you’re asking and of whom. However, not having an answer is becoming less acceptable if you have access to the same information. The internet, I’m arguing, is to one degree or another a great equalizer. If that is true, then with minimal effort, and with great expectations from others, individuals should on average meet a higher standard of knowledge, or at least use the capability to attain that knowledge.

My fiancée, during this discussion, did not share my friend’s sympathies. She is much more inclined to place the ultimate responsibility on the ignorant: they are willfully ignorant by their knowledge-seeking malaise. Additionally, she thinks, they should feel ultimately responsible for their own set of knowledge, and that no individual owes society at large anything to feel obligated about. To her, the obligation comes from within. To me, the obligation simply exists as a sort of brute fact that we cannot help but comply with, though those of us who do so willingly and productively are good people. My friend, I think, is wary of the term “obligation.”

So the question remains fairly open. Are questions obsolete? Well, maybe not. Many of them are certainly becoming a faux pas. It seems Google has an answer for everything except in philosophy. Almost by definition, philosophy tends to tackle issues for which the ken of humanity has not yet reached that critical mass I mentioned earlier. Google cannot answer your ultimate questions. Instead it’ll spit back references and other attempts at your questions in ways that you may or may not find useful. If you ask Google about loneliness, you might derive some meaning from its responses. If you seek to understand a language in the robust, idiomatic, and nuanced parlance of its domain, then you might find plane tickets or audio tapes or movies. However, Google doesn’t have the moral obligation to provide you with the answers you seek. Instead it demands that you recognize the relentless obligation you have to constantly ask the right questions.


Guns are not like other objects. They aren't even like each other.



If you don’t occasionally check out THE STONE, you should. Who would’ve thought that a major newspaper would have a philosophy column online? At any rate, since the shooting in Newtown, Connecticut, they have been posting lots of super interesting pieces about the gun debate. Here is an awesome one which helps to clarify the rights-versus-goods argument I made in my previous post, and this one is by a real philosopher.


The Weapons Continuum

(Part 1: The Non-Evidential Discussion)

        I find myself writing a draft of this post every few months. Every time there is a horrific shooting, often worse than the one before, my mind races and my fingers search for a keyboard. If Facebook and Twitter are any indication, most of you do the same. I usually soak in whatever I can of popular opinion and the arguments from social media to get a sense of where people are in the discussion. I check news sources too (no, not TV news) to get a feel for the particular narrative that always seems to take on new characteristics and new language after each shooting. On every occasion I feverishly hash out a draft, and for some reason it never seems to make it to my wall. There are too many digressions. There are too many distractions. Every argument smashes into the rear end of the next without getting resolved. So now it is about a week since the shooting in Newtown, Connecticut, and I wonder if I can flesh this out. Oh, and it isn’t too soon. There is no such thing.

Friendly Argument about Guns


        I decided the best way to tackle the issue of gun control is to split it into parts. The first part will consist of the non-evidential arguments for gun control. Most people immediately refer to whatever statistics or anecdotes they can muster when arguing for or against gun ownership, as they absolutely should! However, statistics, examples, and stories are easy to be skeptical about, and without legitimate sources and research involved they often just refute one another. Keyboard crusaders are anything but credible authorities, myself included. In the social-media arena, it is enough to post a quickmeme with an eye-catching graphic or phrase for a tidal wave of comments to ensue. My first post won’t do any of that. Instead I’d like to focus on what most people agree is an irreducible, irreconcilable source of debate and conflict. That topic is Rights.

        There is a constant backdrop of rights talk that requires a little elucidation. What does it mean to have the “right to bear arms”? What does it mean to have the right to anything? This is a word we use fairly loosely in conversation, but it is never really brought to the forefront of an argument and explained. Can rights change? Are they strictly bound to a specific text, or do documents like the Constitution just reflect some basic societal intuition? In talking about gun control, it seems like the most common deflection immediately goes into rights talk, and what had started off as a wonky back-and-forth between friends peters out into a stalemate. Without delving into the historical context of the Constitution and talking about what the founding fathers meant/thought/believed about the 2nd Amendment (that would be an evidential argument), let’s actually talk about rights.


        The idea of people having a certain right may have erupted sometime in the early 1700s. It wasn’t until the old European monarchies started to crumble that people started to gain a real sense of individualism. Suddenly the majority of people weren’t just uneducated serfs. A great secular awakening and a wave of philosophical writings churned a feudal European serfdom into an era that would soon be called the Enlightenment. A reverence for science and knowledge grew, and this was also reflected in the new science of political theory. Up until that point, with perhaps the exceptions of Aristotle and Thomas Hobbes, political concepts were not based on community or individuals. Instead, things like divine right theory and hereditary totalitarianism were the norm.

       It wasn’t until the late 17th century that John Locke developed social contract theory, wherein the individual has a direct relationship with the state. This empowers the citizenry in a society to participate in and affect the governing body. While versions of democracy had made appearances in different forms up until this point, it wasn’t until Locke wrote his two treatises of government that individuals’ rights started to make their appearance. At this point, political pamphlets were readily available, and literature was becoming the 17th-century equivalent of the internet. Literacy was at an all-time high, especially in America, which had among the highest rates of literate citizens in the world. So when Locke wrote that individuals in a state of nature are entitled to “Life, Health, Liberty, and Property” (property has a different meaning here than it does today), it resonated with a population with a new sense of individualistic value. Nearly a hundred years later, our founding fathers, namely Thomas Jefferson, included the phrase “Life, liberty, and the pursuit of happiness” in the Declaration of Independence, directly influenced by John Locke. Around this point in history, another major thinker that revolutionized the concept of inherent rights was emerging: the philosopher Immanuel Kant, who laid the groundwork for what would become the major moral dichotomy of our time.

        While rights were becoming a meme worthy of replication in a society like America, the going moral philosophy was utilitarianism. Jeremy Bentham and later John Stuart Mill developed a view wherein the morality of acts was contingent on their utility to society. It is easy to see how this view would become popular in the nationalistic sense. Things were morally good if they benefited the greatest number of people to the greatest extent they could. This is as altruistic as anything could be. It is, of course, not without its shortcomings. While intuitively beneficial, it does seem to demand that people be self-sacrificing and unselfish to one degree or another. In a world where individualism was taking over, utilitarianism might ask people to do things for the greater good that seemed undesirable. A good example is the ol’ trolley-problem scenario in moral philosophy. So there was a need to balance strict utilitarianism against whatever it is that leads us to ascribe special privilege to individuals.

        So when Immanuel Kant produced his Critique of Pure Reason, he stood at odds with utilitarianism with his own philosophy, deontology. Kant’s arguments can get lengthy and convoluted, but in the shortest terms possible, deontology says: 1) people have a duty to do good; 2) good is only good if it is good in and of itself and can apply universally to all people; it cannot be related to a want or desire; 3) people are ends in and of themselves. Because people are rational beings, they are able to treat one another with dignity, as ends in themselves, and not merely as means to a desirable end. So we have a duty to do good things, regardless of their benefit to people. For Kant, lying is never good because it treats people as a means to an end and serves to undermine their dignity, even if a lie were to save the planet from destruction!

        Instead of using these terms, we’ll use terms more suited to the gun debate and call them the Good (utilitarianism) vs. the Right (deontology). Almost every moral debate that I can think of is framed between these two ideological sides, and documents like the Constitution itself carefully balance between them. When someone embraces one side, they are sacrificing the advantages of the other. When a right overlaps with what we consider to be a greater good, then we don’t really have any controversy. Good examples include voting rights for women, equal education opportunities, or religious freedoms. These are by and large uncontroversial, whereas at one time in the past they may have needed some debate. To say that we have the right to be treated equally serves the benefit of society and treats people as ends in themselves. Dignity is assigned to everyone equally in this instance.

        When we start approaching the right to own guns, things get a little hazy. Is the right to own guns a way to treat people as dignified ends? We in the United States take it for granted that any extension of freedom is a moral good. Freedoms in almost every other context seem uncontroversial and beneficial as opposed to a lack thereof. However, once freedoms for individuals start to breach the well-being of other individuals, we begin to see structures for which law must create a workable boundary. We are not free to murder. We are not free to drive drunk. We aren’t even free to plagiarize the ideas of others, or to libel them. So, why all the hoopla over guns?

        Once the gun debate is reduced to the idea of rights, the only response is to counter it with notions of the good. If you have ever participated in friendly arguments at bars or on social media, you may have already noticed this. In order to establish a right, there must be a correlation with the good to justify it. Once you ask someone who believes in a right why they believe in that right, the response must come in the form of how it benefits society (unless you’re arguing with a philosopher, though there’s a good chance it’ll happen anyway). If you ask someone why it is important that we have the right to bear arms, people will cite the 2nd Amendment. We know that rights precede the Constitution, so we can ask why it was considered a right in the first place (by the way, the Bill of Rights was very nearly not included in the Constitution at all). So the answer must come in the form of, “because it is better than not having the right.” The response is, “why?” That’s where all the crazy answers start to come out, because not enough critical thought is put into the answer. People start invoking Hitler’s gun policy on Jews, or that crime will increase, or that the right is God-given, etc. All of these are evidential arguments that require data to support them. Rarely, if ever, are arguments from these grounds substantial or coherent. Frustration ensues.

Right vs. Good



        That isn’t to entirely dismiss the claim that maybe we do have a morally defensible right to guns. We still have to wonder: at what cost? We know that roughly 10,000 to 30,000 deaths occur each year from shootings. Without going any further into the statistics, we can see that no other right affects the potential ending of human life so directly. There are of course evidential reasons (that I will get to in future posts) that can factor into why the number is what it is, but this will only distract from the fact that the prevalence of guns leads to the existence of gun deaths. It could be argued, and indeed it has, that guns don’t cause gun deaths; people do. While on its face this could be seen as true, it doesn’t absolve the usage of guns altogether. Clearly guns are present at the scene of every crime where a shooting occurs. Even if only to a small degree, guns carry at least some responsibility for the deaths that occur from the bullets that were in them. Secondly, the argument that our culture in America has a special relationship with guns unlike any other country seems untrue even on its face. It is possible to delve into the history of firearms and combat in at least 100 countries that would negate this theory of “specialness.” Nevertheless, I’ll assume that it is true. It then seems funny that the immediate defense of many gun advocates, most notably the NRA, is that our culture of violence is to blame for gun deaths vis-à-vis video games, movies, the media, and comics. It is glaringly evident that gun culture must carry its negative aspects into the definition along with whatever goods may be found in it. To blame gun culture for deaths but use it as a reason for the right to have a gun, well, it seems silly.

        The question must be asked: how many gun deaths is the right to own guns worth? The freedom of slaves (amongst other things) was worth a civil war with hundreds of thousands of deaths. Is the right to own guns worth 30,000 deaths? 100,000? 1,000,000 per year? Of course it is possible to hedge what the right to bear arms means, and currently I’m not advocating banning guns altogether, but I am offering the question up hypothetically. What number of deaths per year would make pro-gun advocates concede that banning guns unilaterally might be a necessary action? Let’s say the number is a mere 100,000. The follow-up question must inquire as to why that number is significant. If 99,999 deaths occur in one year, then it isn’t worth considering? The goal would be to find common ground with some reason for the number being non-arbitrary. For defenders of the Right, the number is infinite. There may not be a sufficient correlation between guns and gun deaths, or the right supersedes any consequences, or libertarian freedom is the highest value, but sufficient reason must be given to defend these positions. As for defenders of the Good, the number is 1. The right to own guns, for this group, is directly related to death, and it represents a verifiable evil that must be suppressed. Extreme positions on either side are impractical. Seeing as the law is based on normative intuitions, such that the general feelings of the citizens are reflected in law, it seems likely that people within the framework of this argument would want to keep the number as low as possible while still maintaining the right.

        There are a number of prescriptions as to how to balance this argument. The point is that for the Right, the only thing worth preserving is the right itself. For the Good, it is people’s lives that are paramount, and this reflects a more sensible attitude toward a very real state of affairs in relation to gun deaths. If the number of gun deaths per year fluctuated wildly, then the argument might take on a different tone. It would be possible to differentiate between bad years and good years, discover the causal relationships between the two, and work toward a compromise between the Good and the Right by working toward the better years. The fact of the matter is that the number of gun deaths remains on a steady rise, and so does the general consensus about stricter gun laws. In my opinion, by defending the right to own a gun we are obscuring what it means to treat people with dignity in the Kantian sense. Human dignity has been surpassed by ideology and a sort of religious adherence to rights. We accept rights along with the costs that come with them, but the 2nd Amendment might straddle the fence as to how much we are actually willing to pay in a modern, peaceful society.

        If the right of one individual to protect themselves (a speculation at best) is worth the deaths of some 10,000 people nationwide, then how can we say that this does not exemplify our worship of the mere thought of security over actual safety? The defense of this statement inexorably leads us into the evidential arguments, which will be submitted in the coming weeks. I hope to receive some feedback on this post along with counterarguments. I have given a really basic account of rights and utilitarianism in order to keep it friendly to everyone, so if the best a comment can do is critique my portrayal of Kant or the Constitution, then save it unless you’re really trying to assert something worth talking about. Remember, the core argument is Right versus Good. It doesn’t matter which side you tend to fall on, but there is a resolution that must be accepted between both. Imagine yourself as the single decider in the matter. The fate of the country rests on you. How would you decide the law of guns, and why? How would you explain yourself to the people that went against you?

One World

Posted: December 19, 2012 in Uncategorized

Just finished reading One World by Peter Singer. The book argues for the benefits of embracing the idea of globalization in a more meaningful way than we already do. We are increasingly aware of the actions other nations take, be they environmental, economic, or humanitarian. The effects reach across borders and oceans and cultures. Singer takes on these issues concisely and tackles the common ethical rebukes of globalization. Of the four chapters, the third and fourth resonate with me in that they directly deal with humanitarian issues: questions like whether we should interfere with other nations for humanitarian reasons, whether we are participating in moral imperialism, how we can go about it, and whether it has been tried before. But the last chapter is about international obligations to charity versus nationalism. This sentence struck me enough to warrant this update:

“Among the developed nations of the world, ranked according to the proportion of their Gross National Product that they give as development aid, the United States comes absolutely, indisputably, last.”

This highlights the disparity between our impulse to “take care of our own” and our willingness to assist the dangerously impoverished in less financially developed nations. We throw money at victims of 9/11 or hurricanes here, while thousands of children die every year of starvation. This has gotten me thinking…


         A hotly debated topic amongst philosophers and neuroscientists, one that some propose is pivotal for the criminal justice system, is the likelihood of determinism. If the evidence for materialist determinism as presented by science is accepted, then notions of free will as we know it could begin to disappear. The erosion of such a crucial and culturally ingrained belief could lead to changes in the reasons why we seek to serve justice and mete out punishment. Some argue, and I intend to as well, that accepting determinism pulls the moral rug from beneath the feet of the largely retributivist criminal justice system. If this becomes a more popular idea, then the folk justice upon which our law rests may take on more consequentialist characteristics. A controversial article written by Joshua Greene and Jonathan Cohen, “For the law, neuroscience changes nothing and everything,” lays out a concise framework from which to carry out the argument. Where some detractors have made their cases against points raised by Greene and Cohen, I will argue in defense of the article. Within the debate there has been some delineation with regard to how determinism actually renders the moral impetus for retributivism defunct, and in my argument I hope to lend some stability to the moral implications that determinism points us toward. There is a short but sometimes confusing causal process from brain states to moral culpability that requires some clarification and definition in order to establish that retributivism has some significant moral problems to address. If this is not enough, I will briefly appeal to the folk tradition of law, and how it can begin to change within a new climate where neuroscience may have an effect on what we consider to be mens rea and the capacity for guilt.

Hard Determinism

            Most people are familiar enough with physics to know that every effect must have had a cause, which must have been caused by something else that was caused by something before it, ad infinitum. It’s this simplistic yet stable metaphysical fact that guides the theory of determinism. Greene and Cohen rehash Peter van Inwagen’s (1982) formula: “Determinism is true if the world is such that its current state is completely determined by (i) the laws of physics and (ii) past states of the world” (p. 1777). This theory of determinism was notably hashed out by Pierre-Simon Laplace in the early 19th century.[i] If it were possible to know the state of the entire universe and the trajectories of all matter at any given time, then the future of that moment would be as clear as the past. This includes the neural workings of an individual’s brain. The more neuroscience maps out and predictably forecasts the materials of our brain, the more it will become possible to foresee our behaviors.[ii] If actions become predictable based on neural patterns and brain states, then it seems like our brain may be making decisions without our “consent.” In other words, it is not the conscious rationality that humans attribute to themselves that makes the decisions; it is the neurophysiology of our brain that does it, and our conscious selves are just along for the ride.

    Libertarianism or Radical Free Will

             The argument from a libertarian view of free will rests solely on intuition. It feels like we have free will, so much so that it becomes absurd to imagine otherwise. In fact, the vagueness of the issue is expressed when Greene and Cohen give van Inwagen’s two criteria for determinism (physics and past states) and say, “Free will…requires the ability to do otherwise” (p. 1777). This forces us to question what this would look like and where it would come from. All of the deterministic mechanisms in the universe are in full effect, guiding all of life around us and in us, until a moment when something unaffected by the causal process can decide not to abide by the rules of causality (physics). The “ability to do otherwise,” or to have done otherwise, sounds practical, but it skirts the issue. To claim that a person was able to do other than what they chose to do is to say, analytically, “had I chosen to do otherwise, I would’ve done otherwise” (Harris, p. 20). The ability to have done otherwise ultimately rests on the causal process in a separate hypothetical universe where “otherwise” had been done. To merely posit the potential for an alternate scenario doesn’t make it more possible (or, as Daniel Dennett would call it, evitable). Another libertarian option might be to imagine that radical free will would be possible if a “soul” of sorts were involved, a non-material locus from which free will could arise. This only begs the question of interactivity between the entity capable of free choice and the deterministic matter that is ticking along as planned inside the material brain. Additionally, if there were a soul making decisions prior to conscious awareness of the process, which causally affected brain states and thereby caused an action, then we would find ourselves in the strange predicament of giving moral agency to a soul and not the person.[iii] All this to say, we have to reckon with the cultural function of this radical notion of free will, because it is so widely accepted and so simple in its utility that it serves, at least in part, as the foundation of criminal justice.[iv]

 Incompatibilism and the Compatibilist Misdirection

            Both hard determinism and libertarianism are incompatibilist views in that either one or the other is true and there can be no crossing over with regard to free will, just as the designation suggests. Compatibilism, however, maintains that the universe (our brains included) is largely deterministic, but that this doesn’t necessarily rule out the phenomenon of free will. The instances of free will, according to compatibilists, are just as real as we often feel them to be and can be scientifically defended. This is the prevailing view amongst most philosophers still arguing for the existence of free will, but it takes on a slightly different shape than what is culturally understood. When someone chooses door number one or door number two, the decision is freely made as long as there are no external or internal forces causing that person to act against their own volition. Free will, for compatibilists, still exists in the sense that the physical organism of a human is free to make choices prior to conscious awareness, and presumably the consciousness becomes aware afterwards. In other words, if you are acting in accordance with your desires, there is no reason to believe that this is not free will.[v] This effectively combines what is typically viewed as the mind/body problem into one neat package where the physiology of the brain issues a freely chosen command and our conscious selves are ever consenting under normal conditions. Even a case where a person is said to have changed their mind is still a case where the brain caused the switch in preference, which was dutifully reaffirmed by the consciousness. This, according to compatibilists, seems to allow room for a narrow sense of freedom, but the “will” part appears to come afterwards. It merely affirms that no one acts against their own will. At least one notable neuroscientist, Sam Harris, responds to compatibilism with the obvious: “people claim greater autonomy than this. Our moral intuitions and sense of personal agency are anchored to a felt sense that we are the conscious source of our thoughts and actions” (Harris, pp. 16-17). The reason compatibilism doesn’t resonate with us is that it affirms that choices are being made freely, but not in the way most people want. The consciousness still has little to no control over the decisions.

Compatibilism seems like a viable option, but there is ambiguity as to what qualifies as an external or internal source of prohibition. According to Greene and Cohen there are agreed-upon psychological conditions that at least limit or negate free will. These conditions could include infancy, mental disabilities, or physical aberrations such as cancers or growths in the brain. I challenge this clause of compatibilism because it appears to be an arbitrary, if not blurry, threshold to cross when determining whether or not someone acted of their own free will. If any physical state of the brain, be it abnormal or statistically average, can be cited as the origin of a decision-making process, then we can point and say, “that is an internal force inhibiting freedom of will.” Sam Harris writes, “A neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions. Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumor in it” (Harris, p. 5). If determinism is true, as compatibilists allow, then to point to any source of origin for a decision is to link the causal process from matter, to brain state, to action, without invoking the consciousness. The consciousness is what validates the decision after the fact. “A puppet is free as long as he loves his strings,” says Harris of compatibilism (Harris, p. 20).

 The Role of Retributivism

            For nearly 50 years, retributivism has seated itself as the moral impetus for inflicting state-sanctioned violence, also known as criminal punishment. The retributivist view sees punishment as a moral good, or at least a moral duty. Retributivists point at the infraction a criminal committed and say that because it was a conscious decision to cause harm, harm must be inflicted back. The moral constraints of all versions of the retributivist position run as follows: (i) all criminals must be punished; (ii) all punishments must be inflicted on the guilty; (iii) the punishments should be equivalent to the crime committed. The issues raised by hard or strict retributivism (how to fit the square peg of this high moral standard into the round hole of a reality where scarcity and error are inevitable) are not on the table here. What is at issue is the moral premise of retributivism. It is a morally based standard of judgment and, notably, not a prescription for how to punish.[vi]

            Cohen and Greene frame retributivism as a normative suggestion for moral responsibility that is ultimately reflected in the law. They write, “We argue that the law’s intuitive support is ultimately grounded in a metaphysically overambitious, libertarian notion of free will that is threatened by determinism and, more pointedly, by forthcoming cognitive neuroscience,” (Cohen, Greene, p1776). For them, retributivism requires a libertarian view of free will, because in order for a person to be blamed for their actions, they must fit the criteria for blameworthiness. As I will soon explain, these criteria are somewhat unclear. Cohen and Greene write, “Retributivists want to know whether the defendant truly deserves to be punished. Assuming one can deserve to be punished only for actions that are freely willed, hard determinism implies that no one really deserves to be punished,” (Cohen, Greene, p1777). Essentially this means that if either determinism or compatibilism is true, then a person is not culpable for their actions or the reasons behind them. Only under a libertarian view of free will, where all thoughts and considerations are made at the time of the decision-making process, can a person be said to be blameworthy.

 Consciousness, Rationality, Moral Agency, and Mens Rea

            Naturally, the core issue at hand is whether or not someone without libertarian free will can be held responsible for their actions. If the consciousness and rationality employed by a person are able to weigh the morality of a certain decision, namely a potentially criminal one, then we would be able to hold that person accountable for the action. However, given the rising tide of scientific evidence that determinism, or perhaps compatibilism, is probably true, how can we attach blame to the consciousness of a being that had no bearing on the criminal act in question?

            The parameters of consciousness and rationality ultimately frame the way a person is considered to be a moral agent. A moral agent is a being that can consciously make choices based on valued judgments with moral consequences. Consciousness refers to a high level of cognitive awareness. Libertarianism would offer the opportunity to be responsible for our moral actions to the fullest extent if consciousness were present at the advent of the decision-making process. This seems to require a level of objectivity in consciousness of which a materially tethered consciousness is incapable. We have good evidence from fMRI (functional magnetic resonance imaging) studies showing that conscious awareness isn’t present in the decision-making process, which also renders the radical free will concept implausible. This is why test subjects are shocked when experimenters can almost always predict the outcomes of their decisions. Compatibilism suffers from almost the same problem and would find this definition of moral agency problematic as well. If free will is expressed sans consciousness, then moral choices aren’t made by way of moral agency. While it might be possible for a moral decision to be made, a being would be unaware of its initiation until its brain had committed to something; the consciousness is an unwitting stooge, a yes-man. For determinists, there is no free choice, and conscious consent to the brain’s course of action would be a non sequitur. At this juncture, as moral agency is demystified, so is the most prominent source of moral culpability.

            Rationality represents its own set of issues that must be addressed, as it is reflected in the law as part of the criteria for culpability.[vii] Psychologist Stephen Morse submits the importance of rationality in the moral justification for law: “Unless people were reasonably capable of understanding and using legal rules and premises in deliberation, law would be powerless to affect human behavior and it would be unfair to hold them responsible,” (Morse p2). He then supposes some conditions under which this might not be the case. Ignorance of the law or simple incapability to understand it would mean that responsibility would not fall on the offender. However, the notion of rationality plays a strange role if determinism, as I suppose, is true. The capacity for competency in the law and the effects of breaking it would hinge on the prior deterministic conditions of that person’s life, which are, of course, out of their control. Some people would be deterministically predisposed to breaking the law even prior to their rational comprehension of it. More importantly, the deterministic process may also override rational deliberation.

Greene and Cohen’s Mr. Puppet is a prime example of carefully constructed factors leading an unwitting person inexorably to a life of crime. In their thought experiment, a scientist carefully and precisely influences a person from birth until the moment the experiment ends, with the goal of creating a desired type of criminal. As a result, when Mr. Puppet murders someone, the scientist claims that he, and not Mr. Puppet, was in control of the events including and leading up to the crime. Intuitively, this strikes us as correct. It resembles the law’s excusing conditions for coercion. Mr. Puppet was unaware of all the factors involved, and when they come to light to the citizens, and thereby the law, the inclination would be to excuse him. For Cohen and Greene, the law generally reflects the premise that radical free will is the basis for retributive punishment. Michael S. Pardo, law professor at the University of Alabama, opposes this conclusion. He says, “Even in a world of physical determinism, [retributivism] may be grounded in the control people have over their actions through the exercise of their practical rationality. If people act for reasons—more generally, if they act on the basis of their beliefs, desires, and other mental states—then we can blame or praise their actions (in light of their mental states),” (Pardo p17). This should immediately raise a red flag. One’s mental states, even in a Mr. Puppet paradigm where libertarian free will potentially exists, are built upon the influences created by the invasive control of the scientist. Pardo’s philosophical objection appears to be just flat-out wrong. In the end, rationality occurs only after the groundwork has been laid that predisposes a criminal to crime and shapes their attitudes toward it and its moral factors.

Mens rea, or “guilty mind,” is a central part of the criminal justice system. Its existence as a term in law reflects an obvious correlation between one’s mind and one’s actions. The tricky part about mens rea is its surprising ambiguity.[viii] The term refers to the intentionality, competence, and rationality behind a criminal action, but the characteristics of what could fall under mens rea tend to be partially normative and largely subjective in nature. All of them, though, are inextricably attached to culpability. According to the University of Pennsylvania Law School’s Encyclopedia of Crime and Justice, “the [Model Penal] Code defines four levels of culpability: purposefully, knowingly, recklessly, and negligently,” (Robinson p999) when referring to the mind state that accompanies a criminal action. All four of these states require a capacity for a mind to be guilty. I intend to amend the definition of mens rea with the minimal characteristic of a capacity for culpability. A guilty mind can only be guilty if it has the capacity to be guilty. In assigning culpability, as I hope to have already explained, there must be conscious and rational deliberation unencumbered by deterministic factors. Culpability requires both consciousness and rationality at the level of moral deliberation. This is what it means to say that if determinism is true, then culpability/blameworthiness/mens rea cannot apply to criminals (or anyone, for that matter). Herbert Morris expresses this as a criterion for what it means to be guilty in the eyes of the law: “The absence of a requisite culpability state or one’s fair opportunity to behave otherwise than one did, precludes guilt,” (Morris p64).

            The objections raised by Morse and Pardo with regard to culpability are, I submit, misguided. Morse raises this objection to Cohen and Greene:

Responsibility has nothing to do with “free will” even though legal cases and commentary concerning responsibility are replete with talk about it. Nor is the truth of a fully physically-caused universe (sometimes referred to as “determinism”) part of the criteria for any legal doctrine that holds some people nonresponsible. Thinking that causation itself excuses, including causation by abnormal variables, is an analytic error that I have termed the fundamental psycho-legal error. All behavior may be caused in a physical universe, but not all behavior is excused, because causation per se has nothing to do with responsibility. For example, many variables caused you to be reading this article now, but you are perfectly responsible for intentionally reading it. Reading it is presumably not evidence of incapacity for rationality or compulsion. If causation negated responsibility, no one would be morally responsible and holding people legally responsible would be extremely difficult.[ix]

Cohen and Greene have a response that I’ll refer to momentarily, but as per my own argument, I submit that Morse is absolutely right insofar as he links responsibility to intentionality. One’s deterministic brain state intentionally causes that person to commit an action, but the “evidence” of the intentionality stemming from rationality, as I have described it, is virtually absent. Again, the compulsion to do something (intentionality) is antecedent to conscious, rational deliberation. So perhaps one might be responsible for committing an action insofar as one was involved in committing it, but one is not morally culpable.

            Cohen and Greene respond to the first part of the above quote from Morse by addressing the official legal language of responsibility. They agree that this is the current state of affairs in the law, but note that the law itself is predicated on popular consensus. They write, “The legitimacy of the law itself depends on its adequately reflecting the moral intuitions and commitments of society. If neuroscience can change those intuitions, then neuroscience can change the law,” (Cohen, Greene p1778). In a nutshell, this sets the stage for a revolution in the way the law and criminal punishment are carried out. It is only a matter of how far away society is from that tipping point.


Normative Flexibility in the Law


            Our morality is value driven. By and large, people intuitively feel that we have free will because it is misleadingly apparent that this is the case. It is also easy to cobble together a system of laws and criminal punishment around the concept of free will. More specifically, it is easy to base a retributive theory of justice on the behavior of those who consciously make decisions that result in harm. However, neuroscience and philosophy are nearing the threshold of moving from suspicion to indictment on the question of whether free will is a plausible view of the world in which we function. The expectation of neuroscientists like Sam Harris and criminal law theorists like Joshua Greene and Jonathan Cohen is that the scientific findings regarding the likelihood of determinism will spread into the populace. There may be serious philosophical ramifications of this phenomenon that reach beyond the scope of this paper, but at least one fundamental social construct must adapt to a newly prevailing theory of determinism: the law.

            Others appear to agree with Cohen and Greene as quoted at the end of the previous section. Herbert Morris’ paper Decline of Guilt is practically a love letter to the marriage of normative values with the law. If our intuitions about guilt change, so must the law in reflecting that change. He writes dramatically of a proposed schism between the two: “Widespread disaffection among the populace from the norms or lack of belief in the legitimacy of tribunals established to judge people would transform the legal practice into one in which individuals with power merely enforced their will upon others,” (Morris p66). Professor Pardo states, “The United States Supreme Court has also explained that, as a matter of constitutional law, the federal and state governments may as a general matter rely on multiple justifications for punishment,” (Pardo p2). This allows the law a smorgasbord of moral justifications to choose from, implying flexibility and reflection of an evolving culture. Later on, Pardo critiques the Cohen and Greene article on the same ground, saying that popular opinions of justice aren’t sufficient to change punishment decisions.[x] This is as if to say that the government has abilities that it won’t ever avail itself of, despite good reason. Morse, in spite of his skepticism, admits, “Legal rules do, of course, change in response to evolving principles and new scientific discoveries,” (p5). This has become true for gay marriage most recently. While neuroscience may be a long way from convincing the people, and thereby changing the law, that doesn’t mean it won’t ever happen.

            The popularity of the notion of free will, which is likely an incorrect account of our role in reality, and its reflection in retributivism and the criminal law, show that an inaccurate idea can proliferate. It seems to me that even if determinism were not true, neuroscience could feasibly make a strong enough pitch to the citizens of the United States to influence the way we do criminal justice. The added bonus, of course, could be increasing technological advances that would bolster the evidence for determinism. Our notions of moral agency, guilt, and just deserts might end up being irrevocably changed.


            After clarifying how free will, determinism, and compatibilism represent human intentionality in the world, it becomes apparent that there is a disparity between our moral accusations and our actual capability as moral agents. I have contended that because all cognitive events stem from the physiology of the brain, it is evident we live in a deterministic world. In this account of a deterministic world, corroborated by studies, decisions are made prior to consciousness and rationality. Consciousness and the ability to rationally weigh moral decisions are the foundations for moral agency and culpability. If decisions are made in the brain prior to conscious awareness and rational consideration, then it would appear that culpability and blameworthiness are no longer traits that we can attribute to human beings. People may be intentional, and they may have reasons for their intentions, but both of these events arise prior to conscious awareness and deliberation. The law enacts several different modes of enforcement deriving from folk conceptions of law, which makes it a flexible enterprise. The discoveries of neuroscience, neurophysiology, and philosophy could begin to influence a larger swath of the population into having pity on criminals who, through no fault of their own, were led down a maleficent path. This does not mean that we cannot effectively punish or remove threats from the greater society. It does mean that the moral impetus of retributivism cannot be applied, insofar as taking revenge upon a person who could not have done otherwise intuitively feels like an unsavory state of affairs. If we are to treat people as ends in and of themselves, then we have a greater responsibility to evaluate the brain states that comprise moral agency and standing.

[i] “We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” P. Laplace, 1902. A Philosophical Essay on Probabilities. Wiley, p. 4


[ii] C.S. Soon, M. Brass, H.J. Heinze, J.D. Haynes, 2008. Unconscious Determinants of Free Decisions in the Human Brain. Nature Neuroscience.


[iii] “The unconscious operations of a soul would grant you no more freedom than the unconscious physiology of your brain does. If you don’t know what your soul is going to do next, you are not in control.” S. Harris, 2012. Free Will, Free Press, p. 12


[iv] “However, we argue that the law’s intuitive support is ultimately grounded in a metaphysically overambitious, libertarian notion of free will that is threatened by determinism and, more pointedly, by forthcoming cognitive science,” (Greene, Cohen p1776)


[v] “Compatibilists generally claim that a person is free as long as he is free from any outer or inner compulsions that would prevent him from acting on his actual desires,” (Harris, p16).


[vi] Michael Cahill opens his essay on applying retributivism in the real world by noting that utilitarian consequentialism offers a complete theory of justice, in that it is both prescriptive and justificatory: “By contrast, retributivism, which adopts a backward-looking perspective focusing on the moral duty to punish past wrongdoing, is a justificatory theory, but seemingly not a prescriptive one,” (Cahill, p818)


[vii] “Rationality is the touchstone of responsibility. Only agents capable of rationality can use legal and moral rules as potential reasons for action. Only by its influence on practical reason can law directly and indirectly affect the world we inhabit,” (Morse, p2).


[viii] “For a phrase so central to criminal law, mens rea suffers from a surprising degree of confusion in its meaning. One source of confusion arises from the two distinct ways in which the phrase is used, in a broad sense and in a narrow sense. In its broad sense, mens rea is synonymous with a person’s blameworthiness, or more precisely those conditions that make a person’s violation sufficiently blameworthy to merit the condemnation of criminal conviction,” and then the second sense: “The modern meaning of mens rea, and the one common in legal usage today, is more narrow: mens rea describes the state of mind or inattention that, together with its accompanying conduct, the criminal law defines as an offense,” P. Robinson, 2002. Encyclopedia of Crime & Justice. University of Pennsylvania Law School, p995. For my arguments, both definitions are useful. The broad sense applies to the trait most people attribute to, or invoke in, their accusations against a guilty party. The narrow sense focuses on a specific behavior that categorizes the level of intent enacted by the guilty party within the framework of the law. All of those behaviors, I submit, are out of the control of the conscious person.


[ix] (Morse, p9) I feel compelled to add that the last line of the quote seems to be an example of circular reasoning. It’s as if Morse is proposing that we must not let causation negate responsibility, if for no other reason than that otherwise we would be unable to hold anyone responsible for anything. Maybe that is true!


[x] “Although lay intuitions may be relevant to reform, and some agreement between punishment and lay intuitions may be necessary for the legitimacy of punishment, accord with the intuitions of most people is not sufficient to justify decisions. It is possible for widely shared intuitions about what is just punishment to be mistaken,” (Pardo p15). This, of course, is my accusation against the retributivist impulse.



(Posting a school paper as is. No works cited page at the moment. This will be corrected.) Notable sources include Noah Feldman, Michael Walzer, and Jurgen Habermas. All geniuses.


The demand for justice after a terrorist attack on the scale of the 9/11 events may warrant the kind of knee-jerk declaration of war (specifically, the War on Terror) that we have seen in the time that followed. The United States government has done a good job of justifying ad hoc its response to the threat of terrorism, with the general consent or indifference of the citizenry. That is, up until recently. Granting that the situation immediately after 9/11 required the United States to enter into a supreme-emergency modus operandi, the desire for constitutional law to reemerge now appears to be taking hold. A swell of dissatisfaction, always present but never so lucid, has begun to make its humanitarian and ethical case against the continued War on Terror. The criticism isn’t so much about a lack of necessity for an anti-terrorism force, but that the means and motives by which it is carried out are increasingly malevolent. By reexamining the justifications of certain practices and policies, it might be possible to clarify what values we carry into an effective anti-terror strategy. This might also help garner popular support for the operations that have become prima facie oppressive, or even terrorist-like, on behalf of the American agencies engaged in anti-terrorism. The question the US is facing now warrants an answer: are the effects of anti-terrorism worse than the threat of terrorism?

            Even if we had established concrete definitions of war, crime, and terrorism, it would stand to reason that the urgency for an immediate response to 9/11 would force the US to hastily cobble together some kind of argument for going to war in the Middle East while still riding high on the surge of retributive justice sought by the people. The state of affairs the United States found itself in after the attack was unstable, to say the least. Jurgen Habermas recalls, “the repeated and utterly vague announcements of possible new terror attacks and the senseless appeal to ‘be alert’ exacerbated the vague sense of dread and the indefinite state of alarm—in other words, precisely what the terrorists intended” (p4). Justice must also include elimination of the threat. Bringing these terrorists to justice was, it seemed, synonymous with killing them, and the US saw fit to blend whatever justification it could find in order to take action. Feldman writes, “To maximize flexibility, the US government would probably try to give itself the option of invoking either the crime paradigm or the war paradigm at any moment,” (p477).

At the early stages of the War on Terror, it didn’t matter which paradigm of justice the US chose, treating terrorists either as enemy combatants or as criminals, though ultimately it adopted the language of war. The US used whatever methods reaped the most desirable goals, including torture, indefinite detention without trial, drone strikes, etc. This is reason enough to invoke a philosophy of consequentialist considerations with regard to justifying how the US has acted in the past, and how it can defend decisions and actions going forward. While it is true, as Walzer critiques, that consequentialism can’t assign exact values to which aspects of anti-terrorism should be measured and how heavily, it does establish polar extremes to avoid or achieve. Without much debate, reducing superfluous civilian killing, financial cost, oppression, and fear can be seen as morally sound principles. If there was a need for this excess in the past to bolster the might of a culture that won’t be bullied by terrorism, that time has apparently passed. While apparently effective, as we have seen with the death of Osama bin Laden and the lack of realized terrorist attacks, it can equally be argued that the methods of effective anti-terrorism are police-like. The US can raise its level of legitimacy by restoring the rule of law and relinquishing the policies of emergency ethics.

            The arguments over whether a terrorist should be granted the rights of a criminal can tip the scales with regard to the legitimacy of the anti-terrorism movement. By subjectively assigning terrorists to the category of enemy combatants, the US burdens itself with a heap of moral risk. Firstly, it is not altogether clear that terrorists are enemy combatants of war. Under just war theory, there must be a cohesive body against which to wage war, and according to Noah Feldman, “The terrorist mastermind…is different with respect to provenance than a general who plans an attack that will be made on the U.S. by an army attacking from without,” (p469). In the case of a homegrown lone terrorist, it would seem strange to declare war on him or her. It is also unclear that a terrorist organization, in this case al Qaida, represents the ideologies of a legitimate state. Just war theory would necessitate a reachable peace and a political body to negotiate with, among other criteria, which rule out fundamentalist-based terrorism. This example rules out two of Feldman’s four criteria for determining whether a terrorist is an enemy combatant: provenance and identity. The remaining two of Feldman’s criteria, which help support the idea that terrorists are indeed enemy combatants, seem pyrrhic in their victories. The intentionality of a terrorist to discredit a state or refuse to recognize it as legitimate is only nominally important, in that there is no reason to take their intention seriously once terrorism is used—they have disqualified themselves by its sheer abhorrent and immoral nature. The scale criterion says that the magnitude of a terrorist attack might push a regime over the threshold from criminal to enemy combatant, but the threshold seems morally arbitrary.
Secondly, the policies of pursuit with regard to enemy combatants (“shoot first and ask questions later”) give rise to a host of human rights violations that undermine support for continuing to war with terrorists. With a systematic cycle of intelligence gathering, pursuit, and execution, there is an engine of assault without a check or balance to determine whether each strike is worth investing in; we are forced to look at the policy as a whole. Lastly, the unilateral practice of using torture, drones, and indefinite detention as tools of waging war raises the standard of accuracy required for blamelessness to a nearly implausible level. Michael Walzer describes the ethics of anti-terrorism and puts it lightly when he writes, “The terrorists hold that there is no such thing as ‘collateral’…damage. All the damage for them is primary…The more deaths the more fear. So anti-terrorists have to distinguish themselves by insisting on the category of collateral damage, and by doing as little of it as possible” (p9).

In order to maintain legitimacy, it is absolutely crucial that the US not act terroristically while combating terrorism. In defending the potential for collateral damage from drone strikes, the US has painted all people associated with a potential terrorist as potential terrorists. Whether or not terrorists are enemy combatants, it must be said that the term “innocent” is not up for negotiation. Walzer writes, “[Innocents] are ‘innocent’ whatever their government and country are doing and whether or not they are in favor of what is being done,” (p1). It is in this way that, even consequentially speaking, it is damning for the US to continue the War on Terror rather than adopt a policing policy of anti-terrorism. If the methods of anti-terrorism continue as they are carried out now, then the Western world serves to embolden fundamentalist terrorists and the public sympathies within their influence. Walzer continues on this theme in another work: “repression and retaliation [of terrorism] must not repeat the wrongs of terrorism, which is to say that repression and retaliation must be aimed systematically at the terrorists themselves, never at the people for whom the terrorists claim to be acting,” and later on, “The refusal to make ordinary people into targets, whatever their nationality or even their politics, is the only way to say no to terrorism. Every act of repression and retaliation has to be measured by this standard,” (p60-61). In an interview, Habermas makes the same point: “the state runs the risk of being discredited by the inappropriateness of the measures it deploys, whether internally by militarization of security that undermines the rule of law or externally by mobilizing a disproportionate and ineffective military and technological supremacy,” (p8). This language will resonate with anyone who has endured the awful banality of the TSA.
Without the moral high ground, anti-terrorism begins to look like oppression or terrorism in kind.

The War on Terror, while initiated on emergency standards of supposed necessity, can be justified only through a consequentialist view. All things considered, the best course of action was to suspend the status quo of natural or constitutional law to achieve a certain goal. Ostensibly, the threat of utter catastrophe has subsided, and now we must change gears as well. Consequentialism, with the goal of establishing a world with the greatest amount of well-being for all, offers the ability to evolve in light of new evidence and circumstances. As it has become blatantly evident that the practices directly related to our war-like approach to anti-terrorism have undermined our legitimate claim to just war, or to justice in general, it becomes increasingly necessary to adopt a criminal approach. In consideration of a more global society, it might finally be time to take a supranational justice system seriously. Globalization has rendered the inherent value of our borders worthless. Killing citizens (read: innocents) in another country that happens to harbor terrorists is as immoral as killing our own.

       The Republican National Convention is almost a week behind us now, which is roughly a millennium in the media cycle. During this empty-chair-obsessed stretch of time, I’ve been waiting for a critical analysis of the RNC motto, which was draped from every wall and leaping from every spokesperson’s mouth: We Built It. Sure, every news station had something to say about how this rallying cry was in defiance of Obama’s oft-misquoted speech. There has been plenty of punditry and comedic blowback with regard to the manufacturing of gaffes. But even if we grant the Republican marketing force’s false characterization of Obama’s sentiment, we still haven’t seen a justification of the motto from the RNC. In fact, I’ll argue that “We Built It” is yet another indication of the immoral tenets of faith that are necessary to maintain the conservative position in contemporary politics.

Who is the "We" of the slogan? If you asked any attendee at the RNC, I'm sure they would consider themselves included in that "We." They would say it is any hardworking American citizen, or something to that effect. But I can't help feeling that the party leaders don't mean just any hardworking American citizen. If that were the case, the slogan would be no different from what Obama said. Surely government workers, the people who erect the bridges and pave the roads of President Obama's speech, work just as hard as anyone in the private sector. No, the RNC slogan has to be read more narrowly in order to distance itself from even partially embracing big government. So again, who exactly is the We? It seems to me that they are referring to the people who became successful without the guiding hand of the government. The only people the Republicans value, the only virtue Republicanism demands, is entrepreneurship: free-market prowess and endowment. To be sure, this is a valuable trait indeed, but is it so paramount that it trumps the needs of all Republicans, including the unsuccessful ones? What does this say about the nominees of the GOP, and what kind of reverence do they demand of loyal party members who may not be able to meet their expectations? Is it possible to be an unsuccessful Republican? Is it possible to work for the big-government bureaucracy and vote Republican? Of course it is! So why is this slogan so appealing?

Perhaps it appeals to our vanity to want to be members of a meritocracy. Successful entrepreneurs are raised to the point of worship because most people believe that they too have an equal chance at the rewards the elite have already reaped. All the rats have an equal shot at the cheese if they work hard enough. A just meritocracy promises ample rewards for the efforts of its participants, which seems fair enough. The obvious objection is that not all people have an equal chance. Even if we give two individuals an equal starting point, say, two people born on the same day to equally affluent families, innumerable invisible factors will surely sway the success each attains. Many proponents of the meritocratic ideology chalk the happenstance of unfair starting points up to a brute fact of life. Tough noogies. If you're dealt a bad hand and you're unable to manifest success from it, then you just weren't meant to be successful. This is eerily reminiscent of the Calvinist elect: a predetermined fate in which you are either a member of God's favored flock or you are doomed. And if that is an exaggeration, what are we to say of how success is then managed? If only the people with that illusory equal starting point become successful, then only their children will have the immense advantage of also becoming successful (like Romney himself), maintaining a strict genealogy of wealth and affluence. This phenomenon has a name: plutocracy. It is the plausible slippery slope of a strict meritocracy, and Republicans advocate exactly this when they hiss at the notion of social safety nets. If we are to laud the successful, who become successful largely on the merits of circumstance, while letting the poor be damned, then we are embracing Social Darwinism.

The good news is that our society doesn't work like this. The philosopher and political theorist John Rawls illustrated a set of rules by which all can abide and under which the quandaries of our moral obligations can be satisfied. The metric Rawls proposes for a fair start allows people to become successful, but only if their success works to the benefit of the least well off. In other words, it does not advocate communism, where all are equal; it also rules out plutocracy. What Rawls' difference principle says is that if we are to allow vast ranges of social and economic inequality, those at the top are morally obligated to contribute to the well-being and the wealth of opportunities that the least well off should enjoy. Please watch famed Harvard professor Michael Sandel explain it HERE in the fullest terms.

The great news is that this is how our society works now! It is also why we cringe at allowing room for trickle-down economics in a moral discussion about wealth inequality. There is too much room for corruption when the least well off must wait with bated breath on the charity of the successful among us. If there is no policy or social contract to ensure a just distribution of opportunity (not necessarily of wealth!), then there is no motivation to be just. In order to steer away from the traps of a strict meritocracy, the United States has implemented several safety nets to protect members of its society from becoming victims of Social Darwinism. This could well be the result of Rawlsian political ethics.

The Republican National Convention hinged on its supporters' faith in the most successful people in the nation and on their willingness to vindicate those people's political and economic ideologies. When the RNC banners shouted We Built That!, it was a reminder that everyone owed them special treatment; we have seen them try to legislate that special treatment for themselves. The reason most people want to raise taxes on the 1%, the reason we want to regulate the economic practices of the richest among us, the reason we want to make sure everyone has access to health care, a good education, and a safe environment, is that we all intuitively feel morally obligated to equality. Meritocracy, plutocracy, and Social Darwinism appeal only to the people they benefit! Their advocates have no choice but to pitch the dogma in the form of plausible policy, because no informed person willingly signs up to be subjugated by these political structures. The GOP has not laid out a plan by which it can ethically rebuild the nation without shunting aside the majority of financially destitute Americans. If we are to submit to success by any means necessary, then we are opening the door to a very, very scary future.