The Death of Socrates and the Apathy of Philosophy

What do we talk about when we speak of “facts”? For those involved or interested in scientific, or more broadly empirically rooted, fields, a fact is something indisputable; for instance, that gravity is a force that keeps each and every one of us, along with all other material things, rooted to the surface of the Earth. In these cases, facts are things that have been rigorously tested and examined and found overwhelmingly to be the case; in this way, we bind facts to the necessity of being true; there is no such thing as a fact that is not also true. So, while it is a fact that gravity exists and affects all of us, it is not a fact that unicorns exist, for the former has more than ample evidence for its existence, and the latter does not.

In philosophy, though, unless a member of the field is an adherent of the Vienna Circle, we find it harder to discern truth than those I have just briefly described. Particularly when our discipline takes up questions of ethics and morality, truth becomes a thing that is near impossible to ascertain by the metrics and rubrics of the sciences. The history of philosophy is ultimately one of the evolution of, and debate between, certain men (and, occasionally, like a desperately needed breath of fresh air, women) over how it is best to think, and then, based upon that thinking, to act. Unlike almost any other discipline, we are incessantly concerned with the thoughts of those long dead and buried; departments across the globe make hiring decisions based upon whether one focuses upon Kant rather than Hume, Aristotle versus Cicero, or Heidegger as opposed to Carnap. How well one is judged to understand the writings of those deemed important by some and worthless by others determines entire careers, and to publish a philosophical text means to ceaselessly call back to our forebears in order to legitimize our own claims.

Yet this understanding that is so highly valued is a kind of knowledge scorned outright by many in the sciences; those of us engaged with philosophy must necessarily think in abstract, changeable, evolving ways that essentially spit in the face of such notions of what constitutes factuality. If one person declares x about a passage of Spinoza, another will eventually come along and argue that the passage actually says something quite different, and the argument over which is the truth of Spinoza’s writing will span centuries and likely never end.

However, the one absolute, concrete thing that all philosophers seem universally to call back to as a justification for what they do is the death of Socrates. The event occupies a special place in our minds; it is given special emphasis in nearly all of our classes, a legendary event that cemented our discipline’s ethos for all time. The great one, the true instigator of it all, we are told, willingly drank hemlock in 399 BCE because he wished to die for his ideas, his ethics, his moral code. He could not bow to the indignities demanded by a small-minded government, for to do so would have killed him as surely as the poison he drank. When the choice is between compromising our very being and dying, the tale of Socrates tells us, the proper path is to drink deep; this preserves us, for all time.

Jacques-Louis David, “The Death of Socrates.” 1787. Oil on Canvas. Metropolitan Museum of Art, New York.

Since his death, times have changed drastically, and so, ultimately, has how philosophy goes about its way in the world. Our discipline is not particularly brave, and to claim that it ever was as a whole is questionable, to say the least. Certainly, many philosophers have famously fought against injustice and oppression and have written books on the subject, but most of us are perfectly content to ignore the problems of the world. We make a life of inventing arguments as to why philosophy is special, existing outside the purview of the everyday, insisting that only the most rarefied and excellent minds should ever be able to engage with us; and until the government comes breaking down department doors, we can safely ignore the affronts to free speech and human rights that occur all around us.

I recognize that this position is, to say the least, controversial. And indeed, I should make clear that I do not personally oppose the ability of scholars to study things that are impractical and have little relevance for a wider community; were I to do so, I would be arguing essentially for the end of academia as it has existed for hundreds if not thousands of years. I’m not going to propose anything so drastic as that. In a series of posts to come, I will present a critique of philosophy that claims itself privileged, above the everyday, and argue for a more ethical thinking tied to a realization of the necessity for action, a philosophy that is a part of the world rather than a bystander in it.

Eichmann in Boston

This quarter, for my senior capstone class, i.e. the class taken for a major that is supposed to cap off our undergraduate studies, I am taking a course focused on Hannah Arendt, the political theorist, with the truly excellent Peg Birmingham, who has been one of my most important instructors during my time at DePaul. Our analysis of Arendt’s work has brought to my mind many stark lessons for our current predicaments.

We started our examination of Arendt’s works by reading Eichmann in Jerusalem, the book she wrote and compiled from her time sitting in on the trial of Adolf Eichmann, the Nazi bureaucrat who more or less facilitated the extermination of the European Jews by the Third Reich and who was captured, tried, and put to death by Israel in the early 1960s. Most of the book is concerned with the details of the trial and the testimony of Eichmann and others, but on the whole Arendt attempts to use the man to assess the phenomenon of the Holocaust itself: the complete and utter moral collapse of the nations involved, and also what she refers to as the “banality of evil,” the thoughtlessness of Eichmann in his charge to fulfill the duty given to him.

The chilling aspect of the trial for Arendt, and for those who read her recounting of it, is not that Eichmann was some evil ghoul, maniacally bent on the destruction of a people, but in fact something quite different; the man was no such thing. He was not psychologically unhinged in any way; he described himself as doing his duty, having no ability to stop the horrors of the Nazi regime, and so he did as he was ordered, because he believed that doing so was the truly moral thing to do, going so far as to cite Kant’s categorical imperative as his driving ideal. He admitted to no guilt and to no antisemitism, and of the latter, at least, he seems to have been innocent.

What, then, are we to make of Eichmann? He was not a monster, as is evident from reading Arendt’s account; but surely we should not exonerate him, either. The message of his trial is that monstrous evil does not necessarily come only from the diabolical mastermind, bent on the destruction of all that is good and pure in the world; evil is easily done by those who are joiners, those who do not question orders, those who are too apathetic to care about what acts they commit or enable, so long as they think they are doing what is correct, even when the authority they trust is a corrupt one. Moral collapse of the sort seen on the scale of the Holocaust is not, as we believe, an extraordinary thing, but rather one that can occur at any time, if, like Eichmann, we allow ourselves to be the dupes of power, if we are not skeptical enough to question governments and ethical enough to make the right decisions.

The banality of evil can be seen today. We in the US have spent the past decade or so, if not more, watching our own government commit exceedingly immoral acts in the name of national security, while riling the population up into a fever over anyone who happens to fit the stereotype of being Muslim. It’s happening right now, in the aftermath of the explosions at the Boston Marathon. Long before suspects were identified by authorities, the media and average citizens leapt to conclusions, immediately assuming Saudis and anyone else who looked even mildly Middle Eastern to be the culprits; such reactions are catalogued here and here, amongst other places.

This attitude, these assumptions of guilt, is exactly what Arendt warned us against. But she also offered us a brighter truth: that against all the horror of the Nazi war machine, there were those who resisted, such as the state of Denmark, which acted to save its Jewish citizens and refugees from deportation, and Anton Schmid, a sergeant in the Wehrmacht who saved 250 Jewish people from the concentration camps. There are always those who make the moral choice, not the practical choice of saving their own skins, when the greatest adversity comes. Patton Oswalt wrote much in the same vein in his response to Boston.

It is from these stories that we must take hope, and it is against the banality of evil, against prejudiced assumptions, that we must constantly be on guard. Our society has the capability to lapse into abjection and horror at any moment, if it has not already. We have stood by while innocent people have been illegally imprisoned by the people we elect to govern us, we have raised barely a peep against drone strikes on innocent civilians, and we have tolerated plenty of other moral lapses; as difficult as it is to do, we must look at ourselves, and constantly check our biases and actions. Ideology is what makes us blind, and so we must do our utmost not to blindly follow ideology.

What’s Going On With Me

Hello all.

I haven’t been posting a lot recently, and that’s been mostly down to my completely packed class/work/organizing schedule, which thankfully will be ending in two weeks. The problem hasn’t so much been the workload, which is honestly pretty light compared to other quarters I’ve had, as the fact that the work has been so basic that it completely turns me off. I know it sounds pretentious and all that, but when I’m not doing work that interests me, my brain can kind of turn off with it. So, doing all this make-work stresses me out, and I spend so much time doing it (because a lot of it is math, and math makes me panic) that I end up being too stressed to blog. It’s a vicious cycle.

Furthermore, when I’m too stressed to blog, I get stressed by the fact that I’m not blogging. I can’t think of what to write about, or I have an idea but worry that my idea is too similar to other people’s writing. Basically, my brain is very good at finding ways to suck.

It’s apathy of a sort: out of fear of failing, or of being repetitive, or of simply piling on to one discussion or another rather than adding to it substantively, my brain convinces itself that it’d just rather do nothing at all, and that we should really watch that episode of Eureka or play FIFA for a while rather than try to write. That’ll de-stress you, I think; you can write after you just turn off your brain for a while. Sometimes it works, sometimes it doesn’t, but when it doesn’t, it makes me feel even worse. And the cycle begins again.

The moral of the story is that, while some of my pieces have gotten good play out on the interwebs, and while last Friday I hosted a panel of some of my greatest intellectual heroes, a few of whom told me they found what I write to be very important, which was overwhelming to say the least, I’m still figuring out how the hell this whole blogging game works. I don’t know if I’m going to figure it out any time soon, or if I ever will. But I do like doing it, a lot, and I like talking to you readers when you share my stuff or comment on it. So I’m hoping that after next week, when I get to bid Chicago adieu for a week to go recharge my batteries in New York and Boston with friends old and new, I’ll be back on my game.

In the meantime, I’ll be locked up in my apartment, scribbling away about Aristotle and Agamben and Kristeva. C’est la vie.

[Forward Thinking]: How Should We Punish People for Moral Failures?

In the newest installment of the Forward Thinking series, Libby Anne and Dan Fincke have asked this question:

How and when (if ever) should we take it upon ourselves to punish someone in our lives for a moral failure? How does this vary depending on various possible relationships we might have to the morally guilty party? Consider, for example, how or whether we might punish our friends, our partners, our parents, our colleagues, strangers we encounter, etc. What sorts of values and principles should guide us when we presume to take it upon ourselves to be moral enforcers?

I think there is a problem with this question straight off: the very idea that the violation of one’s morals should result in punishment. I take from the prompt that punishment here means more than simply pointing out a fault, or having an argument over a point of ethics; the implication is that we should have in ourselves a special desire to be, as the prompt says, “moral enforcers,” attempting to keep the people we know and encounter in line. And, frankly, that’s an attitude, even in a hypothetical as it is presented here, that worries me deeply.

What this prompt essentially asks for, in my opinion, is a set of strategies for making people conform to a normative system of beliefs about conduct, in other words to socially acceptable actions. I believe that norms have to be questioned more thoroughly than perhaps any other aspect of society, for, to borrow from Adorno, we know from our own experiences that our society does not really operate under any sort of widely held ethical truths; thanks to capitalism-driven mass media, amongst plenty of other causes, whatever social contract Americans ever subscribed to certainly does not exist anymore. Consumerism and other material concerns have replaced any responsibility wider society might have felt towards the common man. With public ethics no longer having any objective core, we find ourselves in a state of nihilism.

Because of this, Adorno claims, moral claims cannot be objective; only scientific statements, about fundamental empirical facts, can ever be given objective validity. Morality becomes prejudicial, making it impossible to decide well between opposing claims of ethical subjectivity. Thus, morality becomes a tool of power, useful only to make its espouser more attractive, and the influence it creates is backed not by ethical validity but by the material assets of the person backing a given moral vision.

Now, before the pitchforks and torches are raised: I’m not claiming that Dan and Libby Anne are capitalist megalomaniacs seeking to bring us all under their rule. What I mean to do by quickly laying out part of Adorno’s moral philosophy is to indicate the danger of thinking, even in the abstract as the prompt lays it out, that we can be so set in our morality that we believe ourselves to have the authority to punish others for violating our own moral code. I cannot imagine ever being so certain of rights and wrongs that I would take it upon myself as a duty to discipline another person because they did or said something that contradicted my views in some way. It seems like a position born of ego, and not a terribly moral one in and of itself at that.

So, that’s my answer, and a question to all of you: could you ever be so certain of your morality, so affirmed, so unconcerned at the possible holes in your own reasoning, that you would take it upon yourself to become an ethical constable? I certainly don’t think that I could.

In Praise of Radicalism

I and others have talked a lot in the past year or so about a schism within atheism, one between those who believe that atheists can and should become involved with issues not traditionally associated with the atheist community, i.e. things like feminism, anti-racism, trans* advocacy, and such, and those who, well, don’t, often with rather awful results in the latter case. For me, though, that doesn’t seem to be the only split in our community, or indeed in progressive groups at large. There is, in fact, another way, one which I happen to subscribe to: that of a more progressive, radical bent.

This came up recently in a post by Ed Brayton, the head of FreethoughtBlogs, who took issue with a piece in In These Times by Bhaskar Sunkara, the editor of Jacobin Magazine, one of my favorite publications. In the piece, Sunkara argues that liberals like Ezra Klein and Matt Yglesias have become so centrist that they lack any kind of ideology:

There seemed to be something different about this band, an idealism that blended the resurgent youth activism that rallied behind Howard Dean’s 2004 campaign and against the Iraq War with the liberal “netroots” culture that developed alongside it. Their popularity grew as they were absorbed into the media ecosystem. Klein’s writing moved from his eponymous Typepad to the American Prospect to the pages of the Washington Post. Yglesias also got his break at the Prospect and ended up at Slate.

But at some point, Klein and company stopped being liberals. They even stopped being human. The singularity—a technological superintelligence—was upon us. The wonks had become robots, ready to force enlightenment down our partisan throats.

Sunkara went on to detail how, during and before the most recent presidential campaign, Klein did things like defend Paul Ryan’s budget plan, which was, frankly, a barely concealed attempt to further attack the poor in our country, and declare himself not to be a liberal. Sunkara goes on to advocate for a return to ideology-driven journalism on the mainstream left. However, Ed, by no means a centrist himself, took issue with Sunkara’s analysis:

To be blunt, this is anti-intellectual bullshit, faux populism aimed at exactly the wrong place. It’s the kind of “thinking too logically robs humanity of poetry and emotion” nonsense that we often see from the right. The last thing we need in a political system that is soaked with appeals to ignorant populism and emotional argument is to marginalize the people who actually do logical and detailed policy analysis. It isn’t enough to declare one’s good intentions if the policy being advocated won’t achieve the stated goals.

I want to take issue with Ed’s analysis here and try to make a case for a more radical way of life while not completely disregarding moderate views, as Sunkara seems to do. Unlike Sunkara, I’m not a Marxist by any stretch of the imagination. I actually identify as something entirely more stigmatized in our society: an anarchist.

Quelle horreur, etc. You may now be thinking I want to violently overthrow the government and turn this country into a Road Warrior-esque hellscape, with the human race fighting one another over gasoline and such. But that’s not the case. I imagine there are certainly such apocalyptically-minded people out there, but by and large, we anarchists are quite a cuddly bunch. Noam Chomsky is one of us, as was Emma Goldman, who is one of the coolest people to ever live.

But if we don’t want Road Warrior, then what DO we want? Well, the overthrow-of-the-state part is definitely still there, preferably achieved non-violently. I believe, essentially, that in America our government no longer works for its people, if in fact it ever did. Furthermore, the government is more and more using military-caliber force against its citizens, notably via the police. Not only do police departments lie compulsively to preserve their power, but there are innumerable cases of them using extreme methods, from torture to deploying drones (albeit unarmed versions) to hunt a suspect whom they had been so unsuccessful in detaining that they took to randomly shooting at people they thought might be him.

Many books have been written on how the state fails us, and I’m not going to recap them here. The moral of the story is that, through a long process of rationally examining the evidence of the government’s incompetence, I have come to think that its problems are inherent to its existence and cannot be solved through normative means. I also understand, however, that such a change is pretty unlikely to happen in my lifetime, and thus we return to the radicals-versus-centrists conversation.

I and others like me are most likely not going to make it into the mainstream conversation any time soon. The media that reaches the wider population is not capable of expressing complex ideas, so we’re probably always going to be relegated to the blogs and academic spaces where most people never tread, nor care to. But that doesn’t mean we progressives are going to stop talking, or give up trying to change things.

What we are is a check on those like Ezra Klein and Matt Yglesias: the ones who can offer alternatives that are still backed up by facts and rationality, but that at the same time have a stronger ideological ethics behind them. Ideology and emotion, used correctly, are no barrier to a good argument. I think Sunkara tried to make an important point in his piece but got lost in the emotion and in excessive and odd metaphors. I also think Ed was wrong to completely dismiss his argument as anti-rational. At its finest, there’s nothing irrational about radicalism; it’s just a bit out of the ordinary, non-normative, a constant seeking of greater change. And on the day that it is realized, well, that will be an interesting day indeed.

Football, Rape Culture, and The Great American Gaslight, Part 1

[trigger warning for rape]

In my day-to-day life, I try to avoid American football at all costs. For me, it has always been symptomatic of everything loathsome about America; the games seem more like three-hour advertisements than sporting events, designed to carry on the capitalist dream at all costs by selling viewers everything they can while a game of some sort happens in the background. In certain parts of the country, particularly Texas, the high school game is an inextricable part of the culture, with some schools’ stadiums holding as many people as those of professional teams and costing astronomical amounts of money. Money that could be spent regularly teaching children proper history or science is diverted to the football teams, with predictable results: the game is a religion unto itself, unlike any other sport in the world, even proper football.

This kind of thinking, privileging football above education, has continued into the college game in several high-profile instances, the most publicized being the case of Jerry Sandusky and Penn State University, home to one of the most well-known college football programs in the country, whose upper echelons conspired to cover up Sandusky’s sexual abuse of children over a 15-year period, many of them participants in The Second Mile, Sandusky’s program for underprivileged youth. After this came to light, he was eventually indicted, convicted, and sent to prison, but not without riots breaking out among large parts of the Penn State student body, who flipped a news van and caused property damage over the firing of coach Joe Paterno, who was among those who assisted in the coverup.

The whole affair, particularly the protests in support of Paterno, was one of the most visible manifestations of male privilege and rape culture. I realize that both of these are very loaded terms, and, thanks to some feedback from friends, that I’ve been a bit lax in actually defining social justice terminology for those of you who read this blog, so I’m going to try to do that from now on. Over the course of this post and the ones that follow it, I am going to try, through the lens of football as America’s true civil religion, which seems to stand inviolate above nearly everything else, to present privilege and rape culture as the driving forces behind the whole apparatus of the game, as the things which make it so powerful and entrenched. Who knows, we may get into a little bit of nationalist theory too. First, I am going to introduce my theoretical framework of feminist epistemology as the grounding for all of this.

I mentioned privilege above, and also its loadedness as a term, so I’m going to try to defuse that a bit. Privilege, as elucidated at greater length here and here, can be defined as a set of unearned advantages conferred upon a person or group based upon socially constructed notions of normalcy (i.e. racial categories are not genetically determined, women aren’t naturally less rational than men, etc.). Over the past several decades, particularly since the Civil Rights movement, our society has been oriented to ignore the aspects of identity that have historically been used to ostracize and demean those who do not conform to Western societal norms. Thus, through a widespread, nearly all-encompassing apathy, we have made it nearly taboo to even discuss gender or, particularly, race; the done thing is to prove that you’re not prejudiced by not taking issues of identity into account at all, viewing your black friend only on the basis of their personality and moral character, because after all, if race is a social construction, then surely it doesn’t matter and shouldn’t be considered, right?

Well, not quite. When we do that, and ignore the aspects of identity that determine entire groups of people’s social status, we’re not being caring or sensitive. What we’re doing when we make ourselves blind to the issues inherent in gender, race, class, or any other socially constructed divide is further exercising our privilege. In epistemology, this is taken up primarily under the problem of the rational knower. Lorraine Code, in her book What Can She Know?, analyzes this problem in depth, and it is from her that I shall draw here.

That problem concerns whether or not it is important for us to be aware of the sex of the knower. According to Code, academic philosophy has the habit of treating the knower as a “featureless abstraction.”[1] In the logical proposition “S knows that P,” which is the most basic form at the heart of philosophy, she claims that the emphasis is never on who that knower is, but instead on what it is that they know; the claim is taken to hold for anyone under the stated conditions. This is part of the grand project of modern philosophy, which, it is posited, examines the “problem of knowledge” in order to determine the “possibility and justification of knowledge claims” and to establish a “relation of correspondence between knowledge and ‘reality’ and/or ways of establishing the coherence of particular knowledge claims within systems of already-established truths.”[2] These methodologies, then, endeavor to ground such truth claims within a “permanent, objective, ahistorical and circumstantially neutral framework or set of standards. The question ‘Who is S?’ is regarded neither as legitimate nor as relevant to these endeavors.”[3]

It is in this latter part that the rub lies for Code: those making the judgments about permanence, objectivity, ahistoricity, and neutrality are, in attempting to live up to those mandates, working towards a sort of purity into which questions of identity cannot enter. Code disagrees strenuously, for she believes that such an unattached, impartial knower does not and could not exist. She introduces a type of relativism into the conversation, asserting that a certain epistemological relativism can hold that “knowledge, truth, or even ‘reality’ can be understood only in relation to particular sets of cultural or social circumstances… Conditions of justification, criteria of truth and falsity, and standards of rationality are likewise relative.”[4] The universal purity that her targets aspire to simply does not exist in the real world.

There are, however, many critics of relativism in this context, who assert that it would be a disaster to move in such a direction; but Code believes it is possible to avoid the slide into subjectivism that they so fear. Her relativism is one that would sidestep reductionism and simplified planes of knowledge, and could keep open “a range of interpretive possibilities… it creates stringent accountability requirements of which knowers have to be cognizant.”[5] With this, she introduces a moral-political requirement into epistemology, but she cautions not only against authoritative statements on matters of knowledge and rationality, but also against any idea that the subjectivity and circumstances of the knower are the only paradigms to consider; they are significant, but not definitive. This distinction will be very important to the rest of the book.

Returning to the sex of the knower, Code posits that this sort of absolutism in epistemological endeavors has led to the construction of women as, simply, not-men. The “S” of “S knows that P” has been “tacitly assumed” to be male, but not just any male: “the S who could count as a model, paradigmatic knower has most commonly – if always tacitly – been an adult (but not old), white, reasonably affluent (latterly middle-class) educated man of status, property, and publicly acceptable accomplishments. In theory of knowledge he has been allowed to stand for all men.”[6] These expectations are not mere habit, she asserts, but the product of philosophy’s conscious convictions, and they have been ingrained for centuries; when the issue arises among male philosophers, they say that things are “as they should be.”[7]

This being the case, women are in effect judged to lack the capacity to be proper knowers. Code cites Aristotle, my man Rousseau, and Kierkegaard, amongst others in the Western philosophical tradition, as having said as much. For all of them, women’s knowledge is “inherently and inevitably” subjective, whereas the defining feature of knowledge has commonly been regarded as objectivity. Here, Code has an easy answer to the question of the knower’s sex: if women’s knowledge is naturally subjective, then “if the would-be knower is female, then her sex is indeed epistemologically significant, for it disqualifies her as a knower in the fullest sense of that term.”[8] It comes down essentially to a question of access; historically, many forms of knowledge, particularly those explored at institutions of higher learning, have been unattainable for women. This leads to the question of whether “maleness” or “femaleness” are subjective factors of the sort that form and are constitutive of knowledge; however, as Code very rightly points out, such a binary consideration would fail to adequately take into account how gender functions across a spectrum in society, so such an analysis would be far too problematic to yield a proper answer.[9] The question, then, is not necessarily between genders, but between the natural and the socialized, and whether that dichotomy has any validity.

In short, the social world we enter is one built on foundations of inequality. Our society has been constructed so that a normative class of white men is perceived at all turns as being the most rational, the most knowledgeable, the most trustworthy. With this in mind, in my next post I will add to this rationale and begin to apply it to the stories I briefly introduced at the beginning of this post, as well as to other instances.


[1]    Lorraine Code, What Can She Know? Feminist Theory and the Construction of Knowledge (Ithaca, NY: Cornell University Press, 1991), p. 1.

[2]    Ibid.

[3]    Ibid., p. 2.

[4]    Ibid.

[5]    Ibid., p. 3.

[6]    Ibid., p. 7.

[7]    Ibid.

[8]    Ibid., p. 10.

[9]    Ibid., p. 12.

Feminism as Ethical Practice

My good friend Chana of The Merely Real left an interesting comment on my post about chivalry:

I wonder if there’s a way to salvage the idea of “[Chivalry] is about not harming or hurting others, especially those who are more vulnerable than you” in a world with privilege and power dynamics. Certainly this should apply in all interpersonal relationships and along all axes of power, but if we’re examining the gender relation in particular, maybe a revisited chivalry would be something like, “avoid the harm that comes from your privilege” and that would end up in men doing things like acknowledging Schrodinger’s Rapist, because their size and power and societal stuff makes them scary to women, and giving women extra space and time, and being extra careful about consent, avoiding coercion by all means possible, looking only for enthusiastic consent and taking the responsibility of saying no if it looks like maybe she’s only saying yes because she’s scared, and things like that. Could that work?

And there are definitely parallels in other power relations. Don’t do microaggressive shit. Don’t use words you know hurt people. Etc. But maybe chivalry really has too much paternalistic “taking care of you” baggage to function that way.

Well, I think there is. I touched on it briefly in the post: if chivalry can be stripped of its patriarchal aspects and become an attitude of respect and ethical behavior based on one’s morals rather than on archaic gender binaries, then what we’ve got is a feminist ethics. This is a huge aspect of the thesis I’m writing, so, to spare you the brunt of my philosospeak, I’m going to try to lay out here, briefly, what this means.

My premise is essentially similar to that of Chana’s favorite, Richard Carrier: that philosophy, done well, necessarily leads to humanism, and then to feminism. At its most basic, before we add on the layers, feminism is the position that women and men should be equal. Now, I subscribe to a more progressive feminism that wants not just equality but a full breakdown of patriarchy, but equality is pretty much the first thing feminism declared itself for. As Carrier and plenty of others have noted, being against equality in this matter means that you are a sexist. End of. I don’t need to rattle off quotes from Hume, Rousseau, the Founding Fathers, et al. to prove that everyone having the same rights and freedoms as everyone else is a moral good. We know it is.

Feminism is a moral good. Feminism, at its best, instructs us to check our privilege and work to break down unjust power dynamics. It allows us to navigate our world in much the way that contemporary defenders of chivalry believe their system operates, but when we present our ethics as based in feminism, we not only move past chivalry, which, as Chana said, is far too wrapped up in its baggage; we also get to present feminism itself, which carries plenty of silly stigma of its own, most of it created out of fear tactics and straight-up lies, as a moral good that takes the place of old paternalistic points of view. It is a necessary step toward making the kind of equal, just world that we want to believe is possible.