Monday, July 23, 2018

Theses on Divinity

I. Man as Potentiality
“I live on Earth at present, and I don’t know what I am. I know that I am not a category. I am not a thing -- a noun. I seem to be a verb, an evolutionary process -- an integral function of the universe.”
-Buckminster Fuller, I Seem to be a Verb

Humans do not have a nature except for their ability to change it. We collectively constitute the species whose essence lies in its existence (as Martin Heidegger has noted in Being and Time), a pure potentiality in the shape of an animal. Our form is woefully devoid of any content, and this is not because of some inability to fulfill our supposed functions, but because potentiality, by definition, cannot consist of a definite substance (even eventually) without destroying itself and transforming into its opposite. This notion is commonly expressed in all the cynical banalities about the putative impossibility of human happiness -- clichés which are unable or unwilling to comprehend the “joy with no outlet” with which each human is blessed. To fulfill man’s potential would also be to abolish it.

II. God as Potentiality Fulfilled
“...the primary essence has not matter; for it is fulfillment (ἐντελέχεια).”
-Aristotle, Metaphysics (Book XII, Chapter 8)

The flaw of every theism is to imagine god, paradoxically, as a potentiality fulfilled -- hence the hackneyed idea that “man has made god in his idealized image.” For the ancients, this fulfillment represented the very potential of humankind (which nonetheless can never itself be fulfilled). If god represents humanity perfected, then the aporias which arise in such a being are not due to a logical gap, but an historical one: man historically mistook his own potential as being completable. By giving a face and a name to god, man damned himself for all time to the pursuit of an impossibility, worship of a content for which there is no mode of attainment. This is why ancient Judaism, although it gave all sorts of characteristics to its god and therefore made the same mistake as every other theism, refrained, at least, from speaking his name. This is also why Humanism, and then Old Hegelianism, remained essentially theistic movements: although they elevated man to the level of a deity, deified-man was still specified and sculpted into stagnant artifacts.

III. Man as God
“...the mere fact that I exist and have within me an idea of a most perfect being, that is, God, provides a very clear proof that God indeed exists.”
-Rene Descartes, Meditations

God, for the present age, must therefore be turned on his head and reimagined as a potentiality not merely unfulfilled but unfulfillable. If this is really the true nature of god, then only man himself can be indicated as an example of any sort of deity. The supposed arrogance of such a statement is vindicated by the fact that man is the only animal who is able to imagine god, and hence the only animal which is truly godly. In this light, Descartes’s proclamations on the existence of god (which have been ridiculed almost universally) must be reexamined: that humans can imagine a perfect being does not prove that such a being must exist somewhere out of sight, but rather proves the perfection and divine nature of our own thought here and now. Descartes’s argument is therefore revealed as correct, with the only caveat being that god does not exist outside of us, but finds his seat in human subjectivity and potentiality itself. And the process of History continues to prove our divine nature: we can now travel and communicate more quickly than Hermes, wage wars more destructively than Ares, harvest crops more efficiently than Demeter. It is as if man is on a quest to produce everything from nothing; however, the fact that this journey must always remain in progress (its realization being impossible) does not reveal a pathetic weakness but on the contrary defines divinity itself.

IV. Artificial Selection
“Destruction is a form of creation.”
-Graham Greene, “The Destructors”

"Let us therefore trust the eternal Spirit which destroys and annihilates only because it is the unfathomable and eternal source of all life. The passion for destruction is a creative passion too!"
-Mikhail Bakunin, "The Reaction in Germany"

The universe was not created from nothing; rather, the concept of “nothing” only arose because of the being of the universe. Conversely, the ability for humans to conceive of negativity demonstrates their very ability to create something from nothing: by definition, there cannot be a no-thing -- yet we nevertheless conjured up its concept (a concept which therefore must find its seat a priori in the human mind). That this concept is a priori perfectly reveals the godliness of human potentiality: negation is the modality of man which brings about nothing from something, which is, to say the same thing, the ability to bring about something from nothing. The conscious destruction of the possibility of natural selection for humankind (which has already been largely accomplished) furthermore signifies our mastery and ownership of negation itself, and represents that with the birth of contemporary humanity came the end of Prehistory. Negation does not define the being of the world. Rather, humans define the being of the world in terms of negation.

V. Faith
“Those who are in the realm of the flesh cannot please God. You, however, are not in the realm of the flesh but are in the realm of the Spirit, if indeed the Spirit of God lives in you.

“I consider that our present sufferings are not worth comparing with the glory that will be revealed in us.”
-Romans 8:8-9 and 8:18

“Blessed are the meek: for they will inherit the Earth.”
-Matthew 5:5

For the secular age, faith is analogous to blind belief, to conviction without reason. Faith is often contrasted with science; yet science functions at its core by means of the faith it has in its own axioms, which cannot themselves be proven, as Kurt Gödel has shown with his incompleteness theorems (remember that Gödel remained a strong believer in a personal god throughout his life): hence John D. Barrow is able to write that mathematics “is the only religion that can prove itself to be one” (The Artful Universe). Faith must be maintained in the face of systems which cannot prove their own consistency; otherwise, in an ironic twist, the concept of truth itself would need to be abandoned. Faith in man-as-god merely implies the belief (floating groundlessly, by necessity) in the infinite potentiality of each individual, the indefatigable ability to overcome oneself. Only in this way can Christianity and the thought of Nietzsche be reconciled. Christ’s sympathy for the meek in no way contradicts a desire for greatness: suffering through misfortune in order to overcome oneself is in fact the definition of passion (note that “passion” comes from the Latin pati, “to suffer, to endure”). Man’s divine grandiosity can only be revealed by his very meekness.

VI. Beyond Secularism
"A truth isn't a view on the world but what binds us to it in an irreducible way. A truth isn't something we hold but something that carries us."
-The Invisible Committee, The Coming Insurrection

Secularism is not the lack of religion, but a religion of lack: while positively affirming man as an unfulfillable potentiality -- as god -- it simultaneously deprives this potentiality of any outlet. Yet insofar as secularism underlies the age of nihilism in which we currently find ourselves (in the same way that Christianity grounded the medieval period), it can only be regarded as a transitional religion, one which does not take on a life of its own but rather functions as a prolonged interrogative and, conversely, as a sort of anti-mystery (ἀντί-μυστήρια) of which each person, however unwilling, is always already an initiate (μύστης). Answering this question-without-mystery, this question which is a secret to no one, is the present mystical (μυστικός) task of humanity. If there is any sense in defining secularism as the absence of religion, it is revealed in the fact that secularism involves a refusal to bind-fast (religare, from which we derive the term "religion") to anything -- hence the secular skeptic, who attempts to tear asunder every truth while finding none of his own. In moving beyond secularism, man must find again his passions, truths which he refuses to give up despite any suffering he might be forced to endure. The only figure adequate for representing the coming post-secular humanity is therefore the martyr, a figure in which, again, Christianity and the thought of Nietzsche coincide. As the pagan emperors martyred Christians, and as Christians martyred heretics, secularism martyrs martyrdom itself. Sacrifice not of livestock to Zeus nor of myrrh to Christ but of oneself to oneself (i.e. to one's own truths) will be the rite of the cults of passion to come.

Thursday, July 19, 2018

More Theses on the End of History

Expanding on the last post with this... I would like to keep expanding and make this into a larger work. I have provided the previous entry as the third thesis (out of the three here). These new theses (the first two) expand on and introduce the previous ideas. They also seek to emphasize that History cannot be isolated from an inquiry into History; even an inquiry into History is a part of that very History. How else could Hegel's thought have had such an impact?

I. Stalingrad

Omne possibile exigit existiturire: "Everything possible demands to exist."
-Leibniz, De Veritatibus Primis (On the First Truths)

"With respect to demand, every fact is inadequate, and every fulfillment insufficient. And this is not because it exceeds every possible realization, but simply because it can never be placed on the level of realization."
-Agamben, What is Philosophy?

Historia exigit tendere finem. An amateur student of Latin might translate this statement as “History demands to reach its end,” but the Latin more properly implies that “History goes out to stretch towards its threshold.” It should go without saying, also, that the vagueness in the Latin mirrors the unclarity of the whole project of historiography, beginning with the most difficult of all thinkers: Hegel. Regardless, does History demand to reach its end, does it always drive itself forward grasping desperately for the door?

Something with “exigency” is demanded; its objectives are urgent. But a demand -- no play on words intended -- does not demand its outcome. That is, a demand does not imply necessity, as Agamben has rightly pointed out, but only the possibility of being actualized. A demand is something driven forth: it is an attempt whose accomplishment is reserved for the imperfect tense.

One can strain a tendon or tend to a child’s scrape and show a tenderness of heart or have a tendency for selfishness or preside with tendance over attendees. And one tends to know the denotation of words. The root -tend- always implies a metaphoric reaching, stretching, but always without completion. Tending to a wound does not mean healing it; and tendencies are habits, not essential modes.

The French habit of ending films with the elegant Fin does not indicate the end of a piece of cinematography, but rather the transition between the image and what is meant to be represented in it, between the referens and the relatum, between possibility and its actualization. In no other way can cinema be inspiring.

The poetic element of philosophy, writes Agamben, lies in its definitions. Exigit (from ex-agere) means to “drive out” — not an enemy or some infestation, but oneself. Tendens isn’t so much a “reaching” (as if one had already captured the ultimate) as it is a “stretching.” But most importantly, finem is not merely an end, beyond which no one could dare to venture, but a border — a threshold.

Therefore, historia exigit tendere finem is not merely true, but must be regarded as a basic starting point for any historiography. In studying it, History is revealed as an incessantly self-propelled movement to the next. This remains true whether finem implies a true “end” or not, whether we take the term “reach” to imply finality or potentiality.

This is the same thesis which defined Left Hegelianism, the philosophy which emptied Hegel of all of his content but kept the form. At Stalingrad in 1943, as Lewis White Beck has noted, this force met its antithesis in a bitter, frostbitten battle, an enemy comprised of those who kept the content of Hegel’s philosophy but abandoned its form. It is this conflict which has played out indefinitely since the end of World War II and well beyond the collapse of the German Reich or even the USSR: the fight between those who insist that History has ended and those who wonder if there is more to it — and if there is an end to it at all.

II. Basics for an Archaeology of the Present: A Brief Inquiry into the Contemporary

"...of those that were great in earlier times most have now become small, and those that were great in my time were small in the time before.... I know that man's good fortune never abides in the same place...."

"This is the bitterest pain to human beings: to know much and control nothing."

-Herodotus, The Histories

We live in the most densely populated cities in the history of the world, but the loners strolling down their abandoned concrete paths can’t make eye contact. Our cities are littered with trash and pollution and decrepitude, but Teslas and magic erasers wipe out everything. Poverty has been abolished — or at least covered up — but only to damn us to the most hellish meaninglessness.

We are a humanity which dawns on the horizon of the final moment in History, but we’re horrified by what we see. As we scale the mountain of dialectical struggle and peer over its glorious peak, we see nothing but still snow blanketed over a suffocated potentiality: schussing apparently defines the remainder of human temporality.

The ultimate history, the most far-reaching Classics, and the least essentializing anthropology all seek to comprehend the strangeness of the present — not the barbarism of previous periods or the insanity of some sort of outdated religiosity, but the absurdity of the current situation. Any system which grasps historical knowledge must finally make an inquiry (“inquiry” in Ancient Greek is ἱστορίη, historiē) of the contemporary. This was, after all, the main project of Herodotus’s Ἱστορίαι.

The true goal of any historiographical endeavor is, therefore, a deconstruction of the present or, in other words, critique. Inquiry — historiē — can only exist in negativity.

III. A Negation of "the Negation of the Negation"

"The negative of the object, or its self-supersession, has a positive meaning for self-consciousness, i.e. self-consciousness knows the nothingness of the object...."
-GWF Hegel, "Absolute Knowing," The Phenomenology of Spirit

"All my life, my heart has yearned for a thing I cannot name."
-Andre Breton, The Surrealist Manifesto

These trees which line the path make one think that he is an emperor. But he surely isn’t. He is one among a million who make that same walk, and this became true the moment that aristocracy itself was commodified. Why else do the lamps which accompany these trees display such a nineteenth century aesthetic, if not to harken back to a time when men in trench coats and top hats strolled down lanes with their canes pointing to the future — just a few meters ahead! — which awaited their progeny? The modern writer even writes with a taste for the archaic, as I do presently.

All of this subdues us for now, it appeases our appetite for another dialectic: for haven’t we already reached the end of History? Now only the steady flow of technological obsolescence keeps our time, seemingly dead events which signal the infinite positing of “absolute knowing” without any conflict or negation. And anyway, when the world merely posits endlessly, at an ever increasing rate, there isn’t much time left for the leisure of negating.

The end of History in which we currently find ourselves therefore itself represents the new dialectic. This notion goes far beyond the postmodern theories which deny history the status of being an object of Wissenschaft, views which ironically only became popular once history visibly ended. With the end of History, a new dialectic occurs in which the form of the dialectic struggles against the fact that it no longer contains any content. The mere form of History is impossibly preserved, despite the fact that any content which can make its dialectic real has vanished. The revolution continues — but against whom or what? Negation without object is a paradox which philosophy has not yet detailed.

Time becomes violently empty in the modern period. Wars flow into one another endlessly, like rivers of blood which feed a common reservoir. An insurrection in Greece, Libya, Chechnya, rises against a phantom only to recreate the same horror show with a different cinematography. The "negation of the negation" therefore takes on a new meaning: negation, for the present age, does not negate "some" negation, but negates negation itself. The modern dialectic must play out between nihilism and the radical desire for a something which has not yet been elucidated.

Tuesday, May 8, 2018

Short Thesis on the End of History

"The negative of the object, or its self-supersession, has a positive meaning for self-consciousness, i.e. self-consciousness knows the nothingness of the object...."
-GWF Hegel, "Absolute Knowing," The Phenomenology of Spirit

"All my life, my heart has yearned for a thing I cannot name."
-Andre Breton, The Surrealist Manifesto

These trees which line the path make one think that he is an emperor. But he surely isn’t. He is one among a million who make that same walk, and this became true the moment that aristocracy itself was commodified. Why else do the lamps which accompany these trees display such a nineteenth century aesthetic, if not to harken back to a time when men in trench coats and top hats strolled down lanes with their canes pointing to the future — just a few meters ahead! — which awaited their progeny? The modern writer even writes with a taste for the archaic, as I do presently.

All of this subdues us for now, it appeases our appetite for another dialectic: for haven’t we already reached the end of History? Now only the steady flow of technological obsolescence keeps our time, seemingly dead events which signal the infinite positing of “absolute knowing” without any conflict or negation. And anyway, when the world merely posits endlessly, at an ever increasing rate, there isn’t much time left for the leisure of negating.

The end of History in which we currently find ourselves therefore itself represents the new dialectic. This notion goes far beyond the postmodern theories which deny history the status of being an object of Wissenschaft, views which ironically only became popular once history visibly ended. With the end of History, a new dialectic occurs in which the form of the dialectic struggles against the fact that it no longer contains any content. The mere form of History is impossibly preserved, despite the fact that any content which can make its dialectic real has vanished. The revolution continues — but against whom or what? Negation without object is a paradox which philosophy has not yet detailed.

Time becomes violently empty in the modern period. Wars flow into one another endlessly, like rivers of blood which feed a common reservoir. An insurrection in Greece, Libya, Chechnya, rises against a phantom only to recreate the same horror show with a different cinematography. The "negation of the negation" therefore takes on a new meaning: negation, for the present age, does not negate "some" negation, but negates negation itself. The modern dialectic must play out between nihilism and the radical desire for a something which has not yet been elucidated. 

Friday, August 5, 2016

On Language and Thought

"Language is the perfect element in which interiority is as external as exteriority is internal."
-Hegel, Phenomenology of Spirit

Language precedes thought. Such a radical thesis will slowly prove convincing through an analysis of set theory, its relation to language, and its paradoxes. It will soon become clear that language is the first and only immediate thought; all others presuppose language and are mediated by it. 

We should first start with the set. A set is, quite intuitively, a grouping of elements; and elements can be just about anything. Mathematicians tend to talk about sets of numbers, in which numbers are the elements. For example, there is the set of all even numbers, which includes 2, 4, 6, 8, etc. It is also represented or signified as E = {2, 4, 6, 8, ...}. There can also be a set of existing entities. For example, we can have the set of all frogs or the set of all green objects.
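These basics can be sketched in Python, whose built-in set type mirrors the intuition directly (the even numbers appear here as a finite sample, since a Python set must be finite, and the particular frogs are invented for illustration):

```python
# A set is a grouping of elements; order and repetition do not matter.
evens_sample = {2, 4, 6, 8}            # a finite sample of E = {2, 4, 6, 8, ...}
frogs = {"tree frog", "bullfrog"}      # a set of (hypothetical) existing entities

# Membership is the basic relation: an element either belongs or it does not.
print(4 in evens_sample)       # True
print(5 in evens_sample)       # False
print("bullfrog" in frogs)     # True
```

Note that `{2, 4, 4, 6, 8}` and `{8, 6, 4, 2}` denote the same set: only membership matters.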

In this way, sets are also the basic unit of language. The word is the category. What I mean by this is that all words are sets or relationships between those sets. For example, when I say the word "frog," I really mean the set of all frogs. When I say the word "green," I really mean the set of all green objects. On the other hand, when I say a phrase like "the frog is green," "is" relates the frog to its appropriate color, and the article "the" (which is not even present in every language) specifies a certain frog, rather than any frog, which would be denoted by the article "a." So in this sense, the words "frog" and "green" are sets, and the words "is" and "the" are relationships between those sets. Hence, set theory is really the purest language, the language par excellence, since it specifies this aspect of all signifiers (or words) and the relationships between them. All nouns and adjectives are sets, and all other words are relationships between those sets. If language is abstraction of reality, then set theory is language in the abstract -- an abstraction of the abstraction. 
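The claim that nouns and adjectives are sets while words like "is" and "the" are relationships between sets can be sketched as follows (the particular frogs and green objects are, of course, invented for the example):

```python
# Nouns and adjectives as sets of the things they name.
frogs = {"roger", "kermit"}
green_things = {"roger", "kermit", "leaf", "celery"}

# "The frog is green": the article "the" picks out one definite frog,
# and the copula "is" relates that frog to the set of green objects.
the_frog = "roger"
print(the_frog in green_things)    # True: "the frog is green"

# "Frogs are green": every frog falls inside the green set,
# i.e. "is" read as the subset relation between the two sets.
print(frogs <= green_things)       # True
```

In set-theoretic notation these are just `roger ∈ Green` and `Frogs ⊆ Green`, which is the sense in which set theory abstracts what the sentence was already doing.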

However, a paradox arises within set theory, stemming from "the set of all sets which are not elements of themselves." Sometimes referred to as Russell's Paradox, the conundrum is that if such a set belongs to itself, it does not; and if it does not belong to itself, then it does. An explanation is in order. Imagine a set which is not an element of itself. We can use the above example, the even numbers. Surely, the numbers 2, 4, 6, and 8 are elements of such a set. Yet the set of these numbers is not itself an even number. Nor is a set such as A = {2, 4, 6}. A definitely contains even numbers. But the set itself is not an even number. And so A would not be an element of the even numbers. Similarly, the set of all even numbers would not be an element of itself -- it would not be an element of the set of all even numbers. So, again imagine the set of all sets which are not elements of themselves. If such a set is not an element of itself, it must be an element of itself by virtue of not being an element of itself -- since this very set is the set of all sets which are not elements of themselves! And if it IS an element of itself, then it cannot be, since this set includes only sets which are not elements of themselves! Hence the paradox.
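Both halves of this argument can be sketched in Python. First, a (sample of the) set of even numbers is not an element of itself; second, if we model a set by its membership predicate, Russell's set -- the set of all sets that are not elements of themselves -- cannot be asked about itself without contradiction, which Python reports as unbounded recursion:

```python
# The set of even numbers (a finite sample) is not an element of itself:
evens = frozenset({2, 4, 6, 8})
print(evens in evens)    # False: the set of even numbers is not an even number

# Model a "set" by its membership predicate. Russell's set contains
# exactly those sets which are not elements of themselves:
def russell(s):
    return not s(s)

# Does Russell's set belong to itself? The question never settles:
# russell(russell) = not russell(russell), an equation with no solution,
# which here surfaces as infinite recursion.
try:
    russell(russell)
except RecursionError:
    print("contradiction: the question cannot be decided")
```

The `RecursionError` is Python's stand-in for the logical contradiction: the definition keeps deferring to itself and no truth value can ever be assigned.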

Now, Giorgio Agamben puts this paradox most cleverly when he states that "language is the set of all sets which are not elements of themselves." In an immediate way, this makes sense. After all, as was stated, language is really the set of all words. But no word, as a set, actually contains itself as an element. For example, the set of green objects, which might include frogs, leaves, and celery, does not include the word green as an element. The word green is not itself green. The word frog is similar. Although it might contain my pet frog Roger, it does not contain itself. The word frog is not itself a frog! And so, since language contains all such sets, language can be seen as the set of all sets which are not elements of themselves. 

The contradiction of language is therefore best exemplified in the language par excellence, which is set theory. What was originally viewed as a contradiction within a language is now revealed as a contradiction within language itself. Russell's Paradox is not merely a problem for set theory: it is the paradox of all languages.

But what does it mean to say that language is the set of all sets which are not elements of themselves? Firstly, it implies that language both is and is not an element of itself. Although sounding nonsensical at first, this actually is generally correct. "Word" is, after all, a word. As is "language." Hence, language is an element of itself. "Language" is a word within language. "Word" is a word among words.
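The first half of the claim -- that language is an element of itself -- can be made concrete by modelling language as the set of its own words (a toy vocabulary, chosen for illustration):

```python
# A toy model of language as the set of its own words.
language = {"frog", "green", "word", "language"}

# "Word" is a word; "language" is a word within language.
print("word" in language)        # True
print("language" in language)    # True
```

At this level, and only at this level, language contains itself: the name of the whole appears among its own elements.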

On the other hand, language is not an element of itself. It must be presupposed before any word actually arises. Otherwise, the word could never arise as a word. Language is therefore presupposed before it actually arises. It cannot, in this sense, be an element of itself, as the thought of it precedes itself. Its notion exists before it is itself made into a set. That is, it exists immediately as a notion without actual lingual specifications. The idea of linguistic communication can occur without the realization or actualization of that idea. One must directly imagine linguistic communication before the first word arises.

So language both is and is not an element of itself. This should not be alarming. After all, logic is itself a language; so if this sentiment contradicts logic, it merely reveals what is superior. Language, in every way, transcends logic. Logic, like set theory, is a lowly derivative of language, which so many have deluded themselves into believing is the final form of thought. 

Language is therefore an immediate thought. That is, it is not mediated. In particular, it is not mediated by itself: language cannot be mediated by language. However, in arguing that language precedes thought, we must somehow show that language is the first and only immediate thought. All other thoughts would then presuppose and be mediated by language.

As mentioned above, the defining feature of language is its ability to categorize, its way of placing entities into sets. But is this very facet of language not a prerequisite for thought? Suppose that we were not able to categorize the world around us. Would it not then appear as a single entity without distinction? We would then be left in a purely vegetative state, floating in white nothingness without any way of achieving thought or even self-consciousness -- for we wouldn't even be able to distinguish ourselves from what is around us.

Language therefore must precede thought. And it is therefore the first and only immediate thought. All others are mediated by the resulting language. 

Let us use an example to further emphasize this point. Catherine Belsey states that "if the things or concepts language named already existed outside language, words would have exact equivalents from one language to another." She uses an example to illustrate that this is not the case: the French Toto, sois sage means Toto, be good. "But ... we sensed that sage and 'good' were not always interchangeable. 'A good time' in French, we knew, would not be sage at all, since the term implies sense or wisdom." The implication is that every thought is a categorization. Sage is a categorization or set which includes a variety of ways of acting or being. There is, however, no direct translation to English. We have no way, as English speakers, of putting such a sentiment. And so it never comes to mind. But a more accurate way of putting this is that it never comes to mind without a word. No matter how hard we try to delude ourselves, such a sentiment never arrives so monosyllabically. Perhaps a string of sentences would suffice, but that would be circular in that the thought still only arises through words (and this still holds even if our language is so advanced that every thought can eventually be realized with enough words). The sentiment of a new thought is always on the tip of our tongues, but without the word it never quite arises. Colors are the same: we might imagine a new color -- as if it is just beyond purple! But without the visualization it never arises in the mind. In other terms, thought presupposes language.

This conclusion is rather difficult to grasp. For one, we all feel that we can think without language. Yet there is no way of actually proving this, since language has always been a part of us (is it surprising that memory only arises once one has learned a language, or that language is itself the first thing remembered?). One counterargument could take the following form. Imagine that you observe an animal which you have never seen before, and for which you therefore do not have a word. Surely, could one not imagine that new animal in the mind before naming it? Would this not be an example of thought preceding language? However, such an argument seems circular. After all, to call this animal to mind is already to categorize it as a new species. It already implies that one has placed it into a set separate from all others. Hence, upon being thought or imagined, the new animal has already entered the realm of the linguistic. There is always already a way of wording any new object or experience, even if this wording must take the form of "not everything else I have already seen" or, in our example, "not every other animal I have already seen." The first word, the original set, already implies all others. Once one has discovered language, and therefore begun thought, there can be no end to it. Is it really that difficult to accept this? We cannot unlearn our language, and we cannot stop thinking -- except perhaps in death.

This is also why slang has such power. Upon hearing slang, one has a new thought enter his mind. Eventually, such nuanced words become irresistible. The word and the thought come back to their masters endlessly, like a stray dog who has grown to love the man who feeds him.

So language precedes thought. We might therefore conclude that all thought is always already mediated. And this mediation occurs because of the linguistic: it happens through the use of language. Hence, this should not be forgotten: control of language means control of thought. The next goal must therefore be aimed at nothing less than complete linguistic liberation.  

Friday, July 15, 2016

The Elephant In The Room

Warning: I wrote this very quickly very late at night, and so I'm not sure that I'm saying a whole lot... I just heard the news about France and wrote down my response.

Hopefully it makes sense. I'm not sure if France really even fits into it -- looking over it now, I feel like I mainly had the U.S. in mind.

    The West has increasingly become a target for acts of mass violence in recent years — most recently (and the inspiration for this post), the Bastille Day truck attack in France. Immediately, the response is often that we should not “give in” to the attacks by showing fear or hate, and that the best way to combat the attacks is to love each other more, and teach others to love in the same way. The idea has some grounding, in that a fear-driven submission to the organizations responsible for the attack would be devastating, but more love doesn’t seem like a viable solution. For one, it seems wrong to think that any amount of communal love could stop certain people from committing mass murder; there will always be lunatics, fanatics, and sociopaths who are either untouched by the love or at least refuse to reciprocate it for one reason or another. Even if that’s wrong, though, and a state of complete communal love is possible and would, in fact, lead to world peace, the steps to reach this goal seem completely intangible. Do I just hug my friends more, call my mom and dad to check up on them more, and help walk elderly people across the street? Should I donate to charity? Which one? It doesn’t seem like any of this would lead either to global communal love or to the end of terrorism, and it’s hard to come up with real steps that would.
     Others counter this view by pointing out (as I did above) that you’ll never be able to get everyone on board your love train. But they continue to claim that “radical Muslims” are the main group that we will never be able to fight with love, and often suggest that we exterminate them instead. This idea seems even more ludicrous than the first! First, because “radical Muslim” is a vague term which does not divide the global population into any sort of discrete grouping, making their extermination a difficult task. But that point doesn’t even really matter, because unless 100% of all terrorist attacks were committed by “radical Muslims,” their extermination wouldn’t solve the problem. Finally, and most importantly, it seems unjustifiable to claim that Islam, even in its most “radical” interpretation, specifically calls out for acts of mass murder against the West, and is the only doctrine that does so. Calling radical Islam the cause of West-directed terrorism is like calling pneumonia the cause of death for a man shot through the heart with a shotgun; perhaps the man had pneumonia when he died, and perhaps the terrorist is a Muslim, but they both seem like unlikely causes, and the real reason for the problem seems much more obvious and fundamental.
     What, then, is the problem? If not an absence of love, or a radical interpretation of Islam, how can we explain this recent explosion of mass murder? Mental illness? Drugs? Bad parenting? All of these responses are unoriginal, unsupported, and boring. All of these responses, and the two main ones highlighted in the previous paragraphs, are given to distract from the real problem that is staring everyone right in the face. Just as a commuter never feels responsible for the traffic that they are helping to create by driving in rush hour, the West seems to completely lack a feeling of responsibility for the acts of mass violence that it is helping to create. People don’t hate the West because they feel like we don’t love enough, or because they read the Koran; people hate the West because of countless years of exploitation, domination, and violence. The West is not an innocent person minding their own business who suddenly gets picked on by a bully for no reason. The West is the bully, and the people we have been picking on have finally found the means to retaliate. Many of the organizations and people that decide to commit mass violence against the West have backgrounds where the West hit them first; whether through corrupt politics, nonchalance about murdering citizens outside of our borders, or support of various (perceived or real) things that generally degrade society, the West has made a lot of people angry.

     It should be obvious: people hate us because we’re screwing them. But still, the response is that we need to love more, give people more medication, and kill the Muslims. And so we continue to exploit, dominate, and harm people both within our borders and abroad, while simultaneously searching in desperation for someone to tell us how to stop people from hating and harming us. This doesn’t mean we should become weak, and mold ourselves to fit the demands of our attackers, but rather that we should take a look in the mirror, admit that some of our behavior seems to infuriate and alienate people to the point that they will sacrifice their lives to hurt us, and change that behavior. This doesn’t mean that we should take intangible steps to love the people who seek to harm us more, but rather that we should stop directly and indirectly contributing to violence through police, misguided military efforts, bribes, and embargoes; that we should stop supporting corruption even when doing so is economically viable or serves to strengthen our global power; the list goes on. We cannot turn a blind eye to the wrongs of the West in favor of the comfort it gives us. We need to demand this change, and it needs to happen now. If the demands are not met, we cannot roll over and remain complacent. The West needs to change its behavior, whether they (or we) like it or not; until then, many more innocent lives will be taken.

Sunday, May 1, 2016

On Hume’s Copy Principle

One of the most powerful tools in Hume’s epistemic toolkit throughout the Enquiry is the copy principle (CP). Roughly, the idea behind the CP is that all mental content can be divided into two categories: impressions, which are perceptions received immediately through sensation (either from sense organs or internal emotional states), and ideas, which we form based on our impressions. Importantly, impressions and ideas are not two different kinds of mental content, but rather “all our ideas, or more feeble perceptions, are copies of our impressions, or more lively ones;” ideas and impressions are both essentially perceptions, and are only “distinguished by their differing degrees of force and vivacity.” Our idea of the color brown, for example, is nothing but an image of the exact same color that we received through the senses, and we recognize the second image as an idea instead of an impression because the color isn’t as lively or as forceful in our minds as the initial impression was (rather than being thrust upon us, for example, we bring it into our reflection). Hume maintains that this principle holds not only for our idea of the color brown, furthermore, but for every possible idea; no idea can contain any content that was not first copied from an immediate impression on the senses. Rather than demonstrably prove this claim, Hume challenges the opposition to come up with a counterexample, and moves on to assert that proper use of the CP would put an end to many (mainly metaphysical) disputes that center around the disagreement of philosophical terms. The idea is that, whenever we are suspicious that a philosophical term is meaningless, we should ask, “from what impression is that supposed idea derived? 
And if it is impossible to assign any, this will serve to confirm our suspicion.” Equipped with this principle, Hume is able to easily dissolve age-old disputes about concepts like substance and self, maintaining that these ideas must be meaningless and unintelligible, since there are no impressions that they can correspond to. It is unclear, however, that all of our ideas are copied exactly from impressions in the ways spelled out by the CP. Rather, it seems that some of our ideas are more conceptual than perceptual, and so it seems that Hume may be unsupported in his claim that the CP applies to all ideas, and unjustified, therefore, in his use of the principle to dissolve these various debates.

If we take a step back and examine how Hume uses and states the CP, it seems plausible that we might be suspicious of the meanings of the philosophical terms ‘idea’ and ‘impression.’ Following Hume’s advice, then, we ask “from what impression are these ideas copied? What impression gave rise to your idea of ‘impression’ or ‘idea’?” It does not seem like Hume has an obvious answer to these questions. His idea of ‘impression’ is stated in conceptual terms, as a class of “more lively perceptions,” and ‘idea’ is similarly defined as “the less lively perceptions of which we are conscious when we reflect” on our sensations. Those do not seem like perceptual definitions, and Hume would be hard pressed to describe a specific impression that can explain the relative ‘liveliness’ of the two terms, as well as the fact that they are otherwise indistinguishable. If we continue to follow Hume’s advice, then, it seems that our suspicions have been confirmed; Hume cannot assign any particular impression to his ideas of ‘impression’ and ‘idea,’ and so these terms must not actually carry any meaning. Taking another step back, it seems that Hume is attempting to give a general account of human psychology, while simultaneously claiming that we can have no abstract ideas that were not originally copied from particular impressions, which seems somewhat contradictory. In order to account for the whole of human experience, in other words, it seems that Hume wants to claim that ideas (in general) are always copied from particular impressions (in general), but this removes his justification for claiming anything about a general idea or impression. If all ideas are copied or compounded from simple impressions, then Hume can have no general conception of ‘idea’ or ‘impression’ separate from any particular impression (or composition of impressions), and so the CP seems to be more of a description of the particular ideas that Hume has had so far, rather than a law of human psychology.
Ultimately, then, it seems that Hume is on shaky ground when he uses the CP to dispel various ideas as meaningless, since the principle itself disallows him from applying it universally to a conception of ideas in general.

Hume, however, would be reasonable to argue that this objection misses his point. Although his principles are certainly supposed to amount to a general account of human psychology, he never claimed to have any general notion of ‘idea’ or ‘impression.’ All that Hume claims is that “when we analyze our thoughts and ideas, however compounded or sublime, we always find that they resolve themselves into such simple ideas as were copied from a preceding feeling or sentiment.” In other words, Hume might as well have written that the idea of brown is copied from the impression of brown, and the idea of sadness is copied from the impression of sadness, etc. He uses the word ‘idea’ not to refer to some general notion, but rather to refer to any particular idea that one might be able to come up with, and chooses to sum up this list with the assertion that “every idea which we examine is copied from a similar impression.” Of course, the absence of a general notion of ‘idea’ opens Hume up to the objection that the CP may only apply to the particular ideas that he has experienced — perhaps some of our ideas are missing from his list —  but he remains confident that no one will ever be able to come up with an idea not derived directly from their impressions. Like our knowledge of causal necessity, Hume’s knowledge of the CP seems to be a matter of fact, based on reasoning from experience; but like our knowledge of causal necessity, the fact that Hume’s reasoning comes from experience does not diminish the certainty of the reasoning. 
In other words, even though we might discover that the CP is based on reasoning from experience, and so not entirely supported by demonstrative arguments “of the understanding, there is no danger that these reasonings, on which almost all knowledge depends, will ever be affected by such a discovery.” So, just as no one will ever be able to hit a cue ball against an 8-ball only to have them both fly upward at the speed of light, no one will ever be able to produce an idea not derived from impressions in accordance with the CP; it does not matter that we cannot demonstrate the universal necessity of either fact in the understanding, as long as they are continually confirmed by the vast uniformity of experience. Hume would likely concede, therefore, that he has not provided any universal account of ideas or their universal relation to impressions, but would maintain that this does not disallow him from using the CP to dispel various definitions as meaningless, since the vast uniformity of experience has confirmed it as a matter of fact.

Unfortunately, upon closer examination, this optimistic response turns out to be unsatisfying; we can grant that Hume was not operating with any general notions of ‘idea’ or ‘impression,’ and it still does not seem to follow that the vast uniformity of experience confirms the certainty of the CP. We can suppose that, rather than stating the CP in any general terms, Hume wrote out a list of all the ideas he can come up with, and the impressions that they are copied from. Somewhere along this list is a phrase that says, roughly, ‘the idea of the table is copied from the impression of the table, and is similar in every way, except in its diminished force and vivacity.’ So, it seems that Hume has an idea of an idea of the table (a second-order idea of the table), that describes the idea of the table (the first-order idea of the table) and its relation to the impression of the table. But, presumably, the impression and first-order idea of the table do not include considerations about their relations to each other. The impression of the table simply is brown, five feet long, forceful, and vivacious; the idea of the table is brown, five feet long, weak, and not very lively; the second-order idea of the table is not brown, or five feet long, nor is it merely a less forceful and vivacious version of the impression. Rather, the second-order idea of the table seems indescribable in perceptual terms; it is certainly about the brownness of the table in the idea versus the impression, but it is not itself brown. Even more, the second-order idea of the table is also about the first-order idea of the table’s conformity with the CP, and its role in Hume’s larger human psychology, which seem unaccounted for in the first-order idea and the impression. 
It seems, therefore, that Hume’s second-order idea of the table is not merely some perceptual image or feeling that is less forceful than the table he initially saw and felt, but rather must be a conceptual description of those perceptions and their relation to other pieces of his philosophical framework. So, since Hume himself seems to employ ideas that are conceptual in nature, and not directly traceable back to impressions, he cannot justifiably accuse the conceptual terms of other authors of being meaningless or unintelligible.

Importantly, this objection doesn’t necessarily disprove the CP, but rather shifts the burden of proof back onto Hume. As mentioned earlier, instead of attempting to deliver a demonstrative proof of the CP, Hume simply maintains that it is confirmed by the uniformity of experience, and leaves the burden on his opponents to propose an idea that violates the rule. If an opponent is able to produce an idea that they claim is not derived from an impression, then Hume concedes that, in order to defend the CP, “it will be incumbent on [him] … to produce the impression or lively perception which corresponds to [that idea].” I claim to have delivered such an idea (the second-order idea of the table), and so the burden is now on Hume to come up with an impression that the idea was copied from. Furthermore, I maintain that Hume will not be able to reasonably argue that the second-order idea of the table is copied solely from the impression or idea of the table (with less force and vivacity); the second-order idea contains relational considerations that the impression and idea do not, and the impression and idea are brown, five feet long, and qualitatively perceptual, while the second-order idea is merely about these perceptual qualities. This does not, however, mean that Hume has no way of showing that the second-order idea of the table was derived in accordance with the CP.
While it may not be copied directly from the impression or idea of the table, copying impressions is not all that our minds can do; our ideas are either direct copies of impressions, or are created through “compounding, transposing, augmenting, or diminishing the materials afforded to us by the senses and experience.” In other words, while I claim to have shown that Hume’s second-order idea of the table cannot have been copied directly from the impression or idea of the table, it remains possible that he can show the second-order idea to be some composition, or transposition, of various different impressions and ideas. Perhaps we transpose our impression of force and vivacity with our impression of two things being related, and compose this complex idea with both the impression and idea of the table — Hume may be able to construct a second-order idea of the table through a story with a similar flavor. I leave this as a possibility, but would like to further comment that it seems like quite a tall order. In general, second-order ideas seem fundamentally different from impressions and first-order ideas; it seems that while we may be able to adequately describe first-order ideas and impressions in fully perceptual terms (color, size, shape, etc.), perceptual terms will never be able to give an account of second-order ideas, no matter how much they are transposed and composed. Second-order ideas seem to be about abstract, conceptual relations that cannot really be pictured, heard, or felt, and Hume’s empiricism simply cannot account for this conceptual flavor. Ultimately, therefore, while the door is still open for Hume to show that second-order ideas can be built up through the composition and transposition of perceptual impressions, it seems that the essentially conceptual nature of second-order ideas will prevent this sort of explanation from being fully satisfying.

Thursday, March 31, 2016

On the Privilege of Science

Note: I wrote this for a class, so it deals with a specific text in the introduction, but the ideas should be applicable even if you haven't read Feyerabend. Also, note that when I say science, I mean what we all know of and generally think of as Science (with a capital S). Not the general method of assessing evidence to reach conclusions, but Science as is done by Western physicists, chemists, etc.

At the start of his essay, How to Defend Society Against Science, Feyerabend introduces a general worry about science’s privilege as a method for reaching truth and forming correct beliefs about the world. The worry seems to have two parts: first, that science has become an ideology whose standards of truth rely more on a dogmatic acceptance of the scientific method’s accuracy than any real notion of truth; and second, that even if science has found the real truth, we do not have to accept the truth, but rather have a choice between pursuing truth versus other important values (like freedom). Initially, this view seems absurd — of course science gets us the truth, that’s why we have airplanes, computers, and a vaccine for polio. Why shouldn’t we privilege the real truth that science discovers over truths with less logical and empirical support? Ultimately, I feel that Feyerabend does not flesh out his general worry enough to combat these objections, and so I aim to carry on in his spirit and deny any inherent privilege science should have in forming truthful beliefs. Specifically, the view I wish to argue against can be roughly stated as follows: Science gives us the most accurate conception of the world, and so if we want to form the correct beliefs about the world, we ought to privilege science’s method over other systems. I will argue, first, that science’s empirical method does not necessarily give us the most accurate understanding of the world. Second, I will maintain that even when science’s method does seem to give us the most accurate understanding of the world based on its evidential support, it may be desirable to weigh different sorts of evidence in our analysis and conclusions. Importantly, my overarching claim is that science deserves no inherent privilege in matters of reaching truth, and should only be privileged when doing so is useful.
I am not arguing for the complete rejection of science, nor will I be able to give a complete catalogue of when it is useful to privilege science in assessing truth and when it is not.

The first of my arguments, against science’s ability to reach an accurate understanding of the world, is roughly borrowed from Schopenhauer. The idea is that science necessarily presupposes certain facts about the universe, without being able to explain why those assumptions are necessarily true. For example, imagine asking a scientist why there are four fundamental forces. They might respond by explaining that the nature of the elementary particles in the universe necessitates the four forces that we have discovered. But, we can continue by asking why the elementary particles have the nature that they do. We may be met by further explanations, but if we continue to question the scientist in this way, they will eventually respond, “that’s just how it works.” As we keep asking “why,” the scientific explanation must either eventually end by referencing a fundamental assumption for which no further “why” can be asked, or must infinitely regress such that we can keep asking “why” forever. If the explanation can go on infinitely, then it seems that science hasn’t explained anything; we never reach a satisfying answer to our question of “why.” If, on the other hand, the explanation stops at some fundamental assumption whose validity is assumed rather than explained, then it seems that science may not present a completely accurate view of reality. Since science cannot explain why the universe is fundamentally governed by this set of forces, or why these forces necessarily operate in the way they do, we cannot be sure whether an assumption of these forces truly produces an accurate understanding of the world, or whether they are simply the best estimates that we can come up with using current methods. 
Maybe there are certain forces that science does not yet know how to detect; maybe these fundamental forces merely appear to be separate based on our collected data, but are actually one unified force; until science provides a sufficient explanation for the necessity of the operation of the fundamental forces, we cannot be certain that they accurately depict the nature of the world. Even more, this will be true no matter how much science’s assumptions are refined to account for newly observed phenomena. All of science’s assumptions are true only to the extent that they are able to explain observed data without making false predictions, and so we will never know whether those assumptions are actually correct, or whether there will be new data and predictions that call for a refinement of the current assumptions.

It remains possible, however, that even though we cannot be certain of the accuracy of science’s assumptions without sufficient explanation, those assumptions may still be, in fact, accurate. Perhaps, some day in the future, the assumptions made by science will be so complete that they are able to account for all experimentally reproducible phenomena in the universe, with no false predictions; even though we may not be able to explain why those assumptions are necessary, can we not be certain that those assumptions are correct? I now aim to argue against this objection, and claim that science only allows certain types of evidence in its analysis, and that it may be desirable to allow different evidential criteria in our collection and analysis of data. Vine Deloria, for example, offers tribal systems of thought as an illustration of how new knowledge can be grasped through different methods of interpreting data. As Deloria sees things, tribal systems of thought are as “systemic and philosophical” as science; they simply allow different kinds of evidence and do not share in science’s goal of “determining the mechanical functioning of things.” The Sioux system, for example, denies science’s view of knowledge as absolute, separate from humans, and waiting to be discovered. Where science rejects any evidence that is not experimentally replicable to an objective observer, the Sioux do not disregard any experience, and derive their conclusions from “individual and communal experiences in daily life,” “emotional experiences,” “keen observation of the environment,” and “interpretive messages…received from spirits in ceremonies, visions, and dreams.” The system fundamentally asserts that all experience is valid, and its willingness to accept and interpret a wider range of experiences does not make it less systematic or true than science; the Sioux simply wish to interpret and account for a greater variety of experience than is reproducible in experimental settings.
Instead of searching for abstract principles to understand and explain the structure of the world, Sioux knowledge is directed at discovering the best way for people to lead an ethical life. All of our experiences have content and validity to the extent that they contribute to the moral framework of the universe, and so we cannot disregard anything in our search for moral understanding. As a result of this motivation and epistemology, all events and beings are believed to be related in the moral community. Everything has a responsibility to fulfil itself and participate in the creation of moral and experiential content — “nothing has incidental meaning and there are no coincidences”. Entities are, in fact, viewed as communities themselves; everything has a personality that can be used to guide one’s moral understanding of the universe. Ultimately, the evidence allowed by science may be more useful for an understanding of the mechanical function of the universe, while the evidence allowed by the Sioux may be more useful in understanding its moral content, and it may be circumstantially desirable to accept the latter evidential criteria over the former. Of course, Deloria provides only one example of alternative evidential criteria, but I believe that the example illustrates that there may be circumstances where it is desirable to reject the evidential criteria accepted by science.

One final worry is that, upon accepting new standards of evidence, we are not changing our standards of truth but rather disregarding truth in favor of something more useful (like a better moral understanding). Science tells us what’s really happening, but we would rather believe something false in order to further our other goals. I feel that this objection misses my (and Deloria’s) point: why prefer science’s conclusions as what’s really happening? Why take science’s evidential criteria as the most sure way toward truth? Even if science may be able to most comprehensively explain objectively replicable phenomena and the mechanical structure of the universe, aren’t there other phenomena and structures that it hasn’t considered? Does it give a satisfying account of the most important phenomena? My point, much like Feyerabend’s, is that we have a choice: science may lead us to one truth, and another system may confirm another truth, and we can decide which truth seems more circumstantially appropriate. Either way, we accept the conclusion of one system as true, and reject the other system as lacking in evidence; we change our evidential criteria for assessing a belief to be true, in such a way that science’s evidence may no longer be convincing. Ultimately, I grant that I have not provided complete specificity as to when we ought to privilege other systems over science, which systems we ought to privilege, and why. I believe that I have shown, however, that we do have a choice in how to assess truth and reach an understanding about the world. While science’s methods have achieved the accuracy to allow for airplanes and computers, there are circumstances in which we may question the relevance of science’s evidential criteria, and prefer the method of another system; when collecting data and assessing truth, science deserves only circumstantial — rather than inherent — privilege over other systems.