Confusing Categories: GMOs

My home state of Vermont voted to label GMOs not so long ago. Unsurprisingly, Bernie Sanders also supports labeling GMOs. Perhaps more surprisingly, some large food manufacturers are starting to move in that direction as well, notably and recently Campbell Soup. For many progressives and liberals, GMO labeling represents a win in food advocacy.

But I try not to have sacred cows, and I try to question everything, and thus I find myself forced into a different, and locally controversial, position. It’s not that I’m against labeling GMOs exactly, because I really am not sure what I think about that aspect of the debate. But I get stuck on one sort of important observation: “GMO” doesn’t mean any one thing. Or at least, I don’t think it means something in the way we keep talking about it.

Let me explain: so far as I can tell, there are three major categories of thing on food labels.

The Democracy of Language

As someone who works with words for a living, I always feel a little bit traitorous when I talk about the fluidity of language. My undergraduate studies in linguistics left me with a dialectic view of how language works. The conventional view of language, though, has the same deep currents of judgment and correctness and power that lie simmering in the rest of any culture.

Should I accept those? Should I treat language as brittle and defend narrow meanings and usages from shattering change? Or should I treat language as malleable clay that can and must be sculpted to best convey any idea? My inclinations lie obviously with the latter, but I retain a fondness for the certain bedrock of the former, and an empathy for those drawn to that view. Thus I am always aware that when I advocate use of the singular “they,” for example, I am casting a vote that goes against the grain for some.

The democracy of language is my bridge. By treating language as a democratic exercise, I acknowledge both the importance of the consensus view and the option of disregarding it. If a majority of English speakers believe prepositions should never be used at the end of a sentence, I can acknowledge and respect their view as a convention. At the same time, I can look at the native grammar of English and realize that prepositions are perfectly fine things to end sentences with.

Likewise I can treat pedantry with some respect, and embrace it in the spirit of inquiry rather than the spirit of restriction it so often carries. To be corrected, to hear that consensus view, is no shame—it is just a broadening of knowledge. But by receiving it as one option among many, I can just as easily and comfortably set it aside.

Still, there are some moments when pedantry slips out of the realm of mere language and becomes a channel for cultural currents. In the hands of the righteous, pedantry can acquire the sort of disdainful viciousness only well-chosen words can really achieve.

I do advocate the use of the singular “they.” I think it is a natural, elegant, and gracious choice to give respect both to those who identify outside a gender binary, and, equally importantly, to acknowledge that gender can be incidental. I should not need to know your gender to hear your ideas, to hire you for a job, to convey a delightful anecdote about you to a friend.

And so, when I read from a colleague in higher education a sweeping dismissal of the use or relevance of new pronouns, along with the singular “they,” I am annoyed. I would like to receive this pedantry as a suggestion, but I find that difficult in the face of arguments that “they are entitled to their own identity, but not to their own grammar” (and why not, exactly?) and that faculty don’t know “how to deal with this violation of basic subject-verb agreement” (no doubt the top of the educational priority list). This is followed by some hand-waving arguments about not kowtowing to “safety” and “comfort,” as if those ideas have any relationship to the inclusion of a singular and a plural in the same word.

Happily, this author didn’t provide their gender, nor do I need it to discuss their work. Gender does not matter to my opinion, which is that they are engaging in the worst sort of pedantry—that sort of pedantry that is stolid and unyielding and defensive, the sort of pedantry that is the usual province of grumpy old white men and stereotypical English teachers. So I will address such pedants directly, and this author in particular, and I will do it using what, if they had any historical view of language whatsoever, would be the bane of their existence: the pronoun “you.”

For you, Melvin the pedant, who does not know this: a piece of linguistic history. The pronoun “you,” like “they,” was once solely plural. The singular second-person pronouns were “thee” and “thou.” As levels of formality began to drop out of English, so, too, did the distinction between the singular and plural in the second person. You probably don’t realize that you are using a plural pronoun in the singular every time you address one of your students. And thus does the rest of your argument, which rests on the implicit idea that language is fixed, collapse under its own decaying weight.

Please, if you must be a pedant, be an educated pedant. Make arguments out of elegance, out of convention, out of inquiry and desire for precision. Do not make arguments out of the specious idea that your own set of language is the only set, or that it should be.

Because, for the rest of us, I maintain that language is a democratic exercise. And you are being out-voted.

Sleight of Words

There is an entirely worthwhile debate to be had about how we name people in challenging situations. The more controversial the situation, the more important this debate becomes. Finding the right words, the words that describe groups and events accurately, dispassionately, and without bias, is rarely accomplished without such debate.

Yet most every time these debates occur, there are those who leap ahead with labels instead of arguments. These people appropriate words and use them to push their perspectives as fait accompli, sidestepping the reasoned debate that might lead to a more balanced conclusion. When we are wise, we do not allow this manipulation. When we are unwise, we fail to notice.

People are fleeing Syria at the moment, trying to the best of their ability to escape a dangerous and hostile environment that threatens their lives and the lives of their families. They are risking their lives to escape, which should tell us something about their desperation and need. And, as it often does in such times, xenophobia has reached a fever pitch in response.

The clumsy have attempted to equate these refugees with ISIS, or with terrorism (a word itself co-opted long ago). The subtle, the propagandists, did not waste time making their arguments—instead, they went forth boldly to discuss a “migrant crisis,” advancing with their label an argument untenable in logic: that all these people seeking refuge are merely vagrants.

That we were slow to critique this word, that some continue to repeat it, and that only belatedly have we arrived at debate, suggests to me that we are not sufficiently cautious about such naming. This is not the only example, and sometimes we have gone much longer ignoring the question.

More than a decade ago, the United States invaded Iraq, toppled its government, and sought, without any clear understanding of the political or religious issues involved, to create a new government out of whole cloth. We drafted some leaders, left an occupying force, and scratched our heads as peace did not descend, as many Iraqis did not feel especially liberated, and as many people continued to die.

We did these things on false pretenses, and so I was not surprised when we explained them under false pretenses as well. We labeled the Iraqis who fought us and our created government as “insurgents.” We talked about quelling an “insurgency” as though these people were attacking an established body, rather than a piece of stage dressing that had yet to win the confidence or engagement of its constituency. The truth was so much more complex, and so much more difficult, than the label. Yet there was little public criticism, and the media embraced the “insurgency” as quickly as they have embraced the “migrant crisis.”

We have used labels, too, to justify ourselves when we have no justification. Just recently the Associated Press updated their style guide to stop calling deniers of climate science “skeptics,” a change those same deniers deeply resent and which has been all too long in coming. Because, of course, the label of “skeptic” implies a person who is justifiably hesitant, who is considering critically, and who is careful about beliefs and evidence.

Climate deniers are none of these things. Perhaps, twenty years ago, the term “climate skeptic” might have been justified. Even then the science was fairly clear, but there was justifiable debate about the extent of the problem. Now, when the only controversy remaining is political, the term “climate skeptic” is a laughable pretense. I continue to refer to these people as “deniers,” because I do not see how any other word is justified. The AP has chosen to call them “doubters,” but there is no doubt about the evidence—thus, their doubt is perverse and irrational, and to oppose the conclusions of climate science appears to me, inevitably, to be denial.

There is no way around the power of these names. Those doing the naming define the baselines, the scope, and the tenor of a conversation. When we say there is a “migrant crisis,” suddenly we have introduced doubt about whether we are obliged to help these people; instead, the debate is about whether the people fleeing and dying are truly in need. Likewise when we say there is an insurgency, there is no debate about whether we, in creating a particular set of rulers, were mistaken; instead, the debate is how to preserve that government. And when we let deniers pretend to be skeptics, we allow a debate long settled to continue as though it were not intellectually dishonest.

I cannot help but wonder how often these words are chosen intentionally, and how often they are simply summoned from the zeitgeist to serve as avatars. The former is duplicitous, and thus likely, and yet I think we are duplicitous with ourselves nearly as often as we are with others. And because we fool ourselves, we must be cautious to remember that a label is not an argument, and that it is incumbent on us to question its provenance and honesty. And, I think, we must also question a bit more loudly.

Safe Spaces

Rock in the shallows

Safety is one of those rather slippery fractal concepts that seems to retain fuzzy edges no matter how closely one examines it. I’ve been considering it lately, partly because I’ve read a lot of discussion of trigger warnings, their uses and misuses, and what it means to create a safe space. I’ve seen arguments in multiple communities both for and against trigger warnings in the context of safety, and, personally, I find myself somewhat conflicted.

On the side of support, an argument I quite agree with is that people who have been and are being traumatized need, in a very real mental health sense, safe places to recover. When the harm being done is tied to systemic injustices, the absolute need to respect these individuals becomes greater, because it will not happen by default. In this context, trigger warnings allow people to take charge of their own recovery and to choose what they will encounter, and when, and why.

Another argument I find compelling is that trigger warnings can be overused in a way that infantilizes those suffering from trauma and disrespects everyone concerned. If trigger warnings are applied to classroom material (mythology, for example) they can conflict with the need to create an open space for learning and discussion. In a worst-case scenario, someone might advocate for material to be censored or removed from a class to avoid triggering anyone.

Of course, trigger warnings are not intended as censorship, and labeling content is something we do widely without much controversy. No one any longer argues that giving films or video games or entertainment a rating of some kind is a bad thing—those who want that information have it, and those who don’t care can ignore it. Nor does anyone complain about, for example, warnings of explicit language or topics on radio or television. These are things that accommodate some people’s needs while inconveniencing almost no one—a perfect bargain for a free society.

This leaves me with an apparent contradiction: trigger warnings are applied to maintain the safety of traumatized and marginalized groups, which is good, but can also be applied as a form of censorship, which is bad. The key to resolving this, for me, comes back to that concept of safety. A safe space is one where people can encounter challenging material as much or as little as they are able, not a space where challenging material is expunged.

Not that I think having a safe space without certain material is a bad thing—survivor communities may limit discussion of rape and abuse, and this is perfectly reasonable and necessary. That isn’t censorship; that is one community making a choice that works for that community and protects everyone in it. Censorship is when a choice to restrict material is made for everyone by default.

So, then, the solution must lie with choice. If a trigger warning is used to allow traumatized people the choice to engage or withdraw, this is worthwhile and important. If, though, “being triggered” is used improperly to emotionally hijack a discussion and eliminate topics people do not like, then it is neither helpful nor useful. Unfortunately, I think the idea of “being triggered,” for some people, has become a fashionable way to shut down discussion of uncomfortable material. That this can coexist with a very real population of traumatized individuals in need of real support and respect is all the more frustrating to me; the very idea of it seems disrespectful.

I return at the end to the goal of safe spaces. Trigger warnings can and do create those spaces when they are used to give people the choice to engage or withdraw, but safe spaces are not, and must not be conflated with, comfortable spaces. Safe spaces are places where you are free to be as uncomfortable as you choose, without judgment, without fear of ridicule, and without trauma. Safe spaces are places where, if we so choose, we engage our discomfort and grow.

False Categories: Black-on-Black Crime

Photo Credit: Flickr user Phil Crisologo

I maintain that doubting and refining (and sometimes rejecting) one’s ideas is a fundamental part of knowing. Without those habits, ideas become brittle and dogmatic, and demagoguery becomes common. Without those habits, we can develop entire concepts with ready-made distortion built in. I think of these concepts as false categories: words or phrases where peripheral qualities are used to define a convenient set of things regardless of relevance.

Have you ever had someone ask you a rhetorical question that just made you think “that’s a stupid question,” but you couldn’t put your finger on why? Did you find yourself reluctantly pulled along by their logic, knowing full well that there was a flaw somewhere but unable to find it? This has been my experience when I encounter false categories—I recognize that there is a specious premise in the question, but it takes a while to parse it out because it is hidden in the language instead of stated outright.

Of course, categories are useful and effective shorthand for thought and debate. This is why we rely on them so much. But human beings are also too enamored of categories; we think too little about them, and we overlook false categories that contain questionable implications. You can draw a circle around any convenient thing and call it a category, and we do, especially when there is an ideological motive for doing so.

Case in point, the phrase “black-on-black crime.” For white people, racial tensions they had mostly ignored have become much less ignorable in the recent past. For those white people motivated to dismiss the idea that racism still has any role in American society, “black-on-black crime” is a refuge. “Look!” these people can say, “there are proportionally more murders by black Americans of black Americans! Black-on-black crime is the real issue you should focus on, not [insert topic here].” Black Lives Matter? Then why don’t they focus on Black-on-Black Crime (and stop picking on George Zimmerman, or white people, or police officers)?

On the face of it (ignoring the false choice idea that you can only focus on one thing at a time) the category of “black-on-black crime” is apparently real. FBI crime statistics bear out that there is, indeed, more crime within races than across races, and more crime overall in black communities. So one could be forgiven, after a cursory glance at the data, for thinking the category of “black-on-black crime” is a natural category with real implications. Which is where a lot of people would stop, so let’s not.

The phrase “black-on-black crime,” especially when used in discussions about structural racism, implies a false equivalency between crimes motivated by racism and crimes motivated by poverty. To suggest that the Black Lives Matter movement should focus on “black-on-black crime” instead of structural racism in police departments implies that because more poor black people kill other black people than do racially motivated police, the latter should be somehow less important. Even the premise that you could focus on one and not the other implies that there is no chain of causality between a community unable to trust its police force and the levels of crime within that community.

The phrase “black-on-black crime” carries with it the implicit limit of violent crime, the sort of crime where one or two people have one or two victims and there is direct interaction between them. If one includes fraud, embezzling, tanking the world economy, or various other kinds of white-collar (and mostly white-person) crime, the question of who counts as a victim becomes altogether muddled. White people often talk about white-collar crimes as “victimless,” but I think all the people who got stuck in foreclosure after being bamboozled into bad mortgages would disagree on that point.

The thing that bothers me most about “black-on-black crime” is that it is fundamentally a bait-and-switch. The category acknowledges that race is an important factor in the discussion, but then uses that importance to divert attention and avoid responsibility. It betrays a deeply separatist view of American society and carries the deep conviction that races are just different, which leads treacherously to the idea that some races are more criminal. Crime in white communities is painted as an aberration, but the implication of the phrase “black-on-black crime” is that crime in black communities is inherently tied to the racial makeup of those communities. Never mind that we know crime is actually tied to the density and socioeconomic makeup of communities, and that societal structures and history have conspired to make poor urban communities more black than white.

This is why I believe “black-on-black crime” is a false category. Like many false categories, it takes an incidental factor and paints it as causal. Usually that’s just a mistake, a cognitive shortcut that we take so often that it’s tough to avoid. In this particular case, though, it echoes a long and shameful history of white people judging other races as inferior.

Some white people try to convince themselves that they no longer do this. Some try to convince themselves that racism isn’t a real part of society. “Black-on-black crime” does have something real to say about race, but what it has to say is that uncomfortable white people are trying very hard to look away. Racism, though, will not be buried in so shallow a grave; it will keep rising to the surface until we deal with it honestly, and structurally, and humbly.

Embracing Fuzzy Edges

Pluto – Courtesy of NASA/Johns Hopkins APL

I love words. I also love science. And, because I love both, I pay attention to the interesting places where the words of science and the words of society do not quite match. Scientific terms need to reflect a series of characteristics shared by the things they describe, and we need to know what those characteristics are. Scientists need to be able to look at new astronomical bodies and categorize them, so the “I know it when I see it” approach that works for most of the rest of us most of the time just isn’t good enough in science. From that perspective, it makes sense to update the definitions of things to reflect our growing knowledge.

The word “planet” is one such example. Yes, it was infamously revised (scientifically) to exclude Pluto as a planet. There were lots of good reasons for this, not the least of which being that scientific language, unlike regular language, needs to limit its fuzzy edges. And people, including me, were sad to see Pluto dropped from the A-list in our solar system. Scientifically, it makes sense for now—but even in science, language changes.

Last week I was traveling and exploring places few people ever go. At the same time, the New Horizons spacecraft made its closest approach to Pluto, visiting it more closely than we ever have before. Over the coming weeks and months we will learn more about that tantalizing little world than we have ever known. Maybe Pluto will shift categories yet again.

Pluto is a good example of the fuzziness of words, because the scientific redefinition of the word “planet” exposed the fuzzy definition we had been using for a long time. It’s not that “planet” is unique, either. In the rest of society, words are not defined by hard and fast categories, because language is fundamentally democratic.

What about dictionaries, you ask? Dictionaries describe words; they don’t prescribe meaning. If most people use a word a particular way, that is a meaning of that word. If other people use it differently, the word has a second meaning. And so on. And there are lots of places where the dictionary or scientific definition of a word does not encompass everything it can mean. For example, have you ever witnessed an argument about whether tomatoes are fruits or vegetables? It depends on your definition. Colloquially, they are vegetables. Get a little more specific and they are fruit. Get even more specific and they are vegetables again—along with all fruits, and grains besides. But the whole argument misunderstands the point; tomatoes are not inherently one word or another, they just are; we decide what to call them, and when, and why.

Let’s try again—how about the word “theory?” In scientific terms, a theory is a general explanation of some phenomena that is deeply and broadly supported by the evidence. In colloquial terms, a theory may be nothing more than an educated guess. This is why many people arguing about the “theory of evolution” consistently and fundamentally misunderstand the subject. The edges of the word “theory” are fuzzy.

And let’s look at one more example, something you might not think has fuzzy edges (except in reality): a dog. Picture a dog in your mind. Dogs have paws, long noses, ears, four legs, tails. What about a dog with only three legs? Is it still a dog? What about dogs without tails? Still dogs? What about a cross between a dog and a wolf? It meets all the qualifications to be a dog, except something weird happens there because it meets the qualifications for a wolf as well. So which is it? Or is it both? The edges are fuzzy.

A moment ago you might have tried to think up a definition of “dog” that excludes my edge cases, maybe something about DNA. If so, you’re not alone; the reaction of most people, when faced with a fuzzy edge, is to clarify it—to draw an arbitrary line and defend that line heartily. Tomatoes are fruit, end of story.

But must we? Language doesn’t actually work this way—everything has a fuzzy edge if you look closely enough, and that’s a good thing. It is a deep reminder that words are imperfect stand-ins for reality. Scientific language can and must be precise, but the rest of the time we can have fun. We can have words that mean different things to different people. This is how language works. This is how language lives and grows.

So let me return to Pluto for a moment. Now we are closer to Pluto than we have ever been. New Horizons has taken hundreds of photos and collected amazing data about an alien world. It is a world that has sparked our imaginations, inspired us, captivated us, and all from a long cold orbit on the edge of our solar system. It is a tiny world on the fuzzy edge between our definitions, and between us and the rest of the universe. Is Pluto a planet? It doesn’t matter. The edges may be fuzzy, but today, Pluto is looking beautifully clear.

The Apotheosis of Form

I like to think about words. I believe that thinking about the words we choose is a wonderful way of pushing the bounds of our thinking. I believe that choosing our words carefully and drilling down in the nuances of their meaning helps us understand both what we personally believe and how others’ thinking is subtly different. I believe that strongly enough that I’ve written a number of posts now about the importance of choosing your words carefully.

In the discussions I’ve had on this topic, though, another theme has emerged: that of treating our words as if they are the only things that matter. I was discussing this with a close friend recently and she brought up the idea of “liberal shibboleths,” which I think is a brilliantly simple way to explain this problem. A shibboleth, after all, is “the watchword of a party,” and often “some peculiarity in things of little importance.” And lest I seem to single out liberals for illiberal use of shibboleths, there are plenty of conservative shibboleths, libertarian shibboleths, progressive shibboleths, and so on.

My friend and I have both seen moments when a well-meaning person is rebuked by members of the in-group for use of the wrong words. Sometimes that rebuke is called for—there are, indeed, people who are offensive with intent, and those people should be called on their behavior. But what of the rest? If someone reaches out honestly to understand a thing they are not, it’s natural that they not know how to speak about it. Why do we treat them as if they should? These are people who have taken a step outside their comfort zone—they do not need us to critique their form, they need us to show them new ideas.

There is value in treating people with respect. There is respect in describing people with the words they choose and not the words we choose. There is respect in recognizing what is offensive, and why, and avoiding it. But there is also value, and respect, in presuming the best of intentions. Certainly when a prominent white man publicly speaks of women as girls, the inherent sexism of his statement is worth critique. But if that man had gone to some of his colleagues with an honest desire to learn and asked how he should handle situations with “girls” in his lab?

Someone who wants to learn is a rare and precious commodity. What would you teach in such a moment? Would you teach this man that he is making unwarranted assumptions about half the human race? Would you teach him that basic human decency should not be dependent on gender? Would you teach him about women’s experiences when men view them as erratic, emotional, unintelligible aliens, instead of as human beings?

Or would you take this moment, this rare open moment, to teach him only that he is using the wrong word?

The thing I did not mention before is that a shibboleth is not merely a password or a badge of membership—it is a tool of exclusion. We know, by the words they use, who agrees with us and who does not. If we are complacent and unwilling to engage our own ideas, if we prefer superficial discussion with no dissent, the shibboleths tell us who to echo and who to exile.

In my opinion, the way we engage with outsiders is the true test—of whether our groups are bent on real, deep discussion and self-improvement, or whether they are rigid places where ritual is king and doubt is forbidden. We, who profess to be open to multiple ideas; we, who profess to believe in human rights and human decency; we, who claim to value discourse and discussion: it is incumbent on us to pay more than lip service to these ideals.

We can choose our words carefully, and we should. But we can make those choices out of understanding rather than prescription, and when we speak to those who disagree we should not conflate the two. The form is what we see, but it cannot be what we teach—because form, without the ideals to inspire it, is dead.

A Poor Choice of Words

via Neil Moralee

Have you ever made a complete ass of yourself and then had to apologize later? Ever found yourself rapidly backpedaling from something you said that, while ill judged at the time, seems head-smackingly foolish in retrospect? Have you ever found yourself stammering out an apology for “my poor choice of words?”

Personally, I can’t recall doing this—but I would bet that I have. I would bet that most people have (excluding incredibly inoffensive people, and assholes who never apologize). It’s not surprising that this phrase might come into your head at a moment of tension when you are fumbling for a way to take back something you said; after all, we hear it all the time. But if you ever find yourself about to say this, you really, really shouldn’t.

Last Friday I wrote about apologizing by claiming “it wasn’t my intent,” which is valid in minor incidents where good intentions can be presumed, but is often used to justify wildly prejudiced things. “A poor choice of words” is a close cousin: an apologetic phrase that makes perfect sense when you have a slip of the tongue, but not if you just said a meaner version of what you meant to say all along.

One place this phrase crops up often is in apologies from organizations, politicians, media personalities, and other individuals in the public eye. Rush Limbaugh called Sandra Fluke a “slut” and a “prostitute,” then apologized for his “choice of words.” Alaska representative Don Young called migrant workers “wetbacks” and then apologized for his “poor choice of words.” Congressman Geoff Davis called President Obama a “boy” and then apologized for his “poor choice of words.” Dr. Ben Carson drew analogies between LGBTQ individuals and “bestiality” and then apologized, you guessed it, for his “choice of words.” Senator Harry Reid gave Obama the backhanded compliment that he “had no Negro dialect” and then apologized, as usual, for “such a poor choice of words.”

You might have noticed a theme in all these examples: specifically, that these are all people expressing absolutely horrible, prejudiced things and yet they seem to think it was how they said them that mattered. In this insane upside-down world, you can hold opinions that are sexist, racist, or many other kinds of horrendous, but all that matters is the words you use to express them. The sentiment, apparently, doesn’t matter.

I assume that you, the reader, are already ahead of me at this point and have realized, if you didn’t know it already, that apologizing for “a poor choice of words” is not, in fact, apologizing. Instead it is downgrading one’s offense from believing something terrible to making some kind of slip of the tongue. “Oops! I totally meant to say something else instead of ‘subhuman mongrel.’ My bad!” “So sorry, I didn’t mean to say you were ‘a slut,’ I just accidentally said it out loud because I thought being sexist was funny. JK you guys!”

You might have noticed another insidious theme here, and I want to make it explicit because I think it is very important. Apologizing for “a poor choice of words” is the same as saying your original sentiment was fine. You are basically saying the horrible thing you said is a valid, acceptable thing to say.

So, if you happen to be a school with a dress code, say, and it happens to advise girls that “we don’t want to be looking at ‘sausage rolls’” and tells those same girls that “you can’t put 10 pounds of mud in a five-pound sack,” you should know that it is in no way sufficient to apologize for “unfortunate word choices.”

Now I know horrible non-apologies are put out there all the time, but that doesn’t mean we have to condone them or perpetuate them. If you see a leader apologizing for their poor choice of words, call them on it. Twitter, Facebook, whatever—let their terrible apology writers know that we do not accept their apologizing for word choice instead of sentiment. If your friends apologize to you this way, you may want to be nicer, but gently make it clear what is and isn’t a real apology.

Because the phrase “a poor choice of words” is indeed a very poor choice of words.

It Wasn’t My Intent

Intention is both more and less important than we allow. It matters what I meant to say and do, because those reflect my experience of the events in question. But what I meant to say and do may have little relationship to your experience of the same events. And the events themselves are yet another truth.

I am not suggesting it is easy to navigate these murky waters. It’s tough to anticipate how someone will respond to what you say or do, and it’s tough to know ahead of time how it will be perceived. Maybe you tap a friend on the shoulder to say hello and they jump out of their skin—you mean to say hello, they experience it as being startled, and the objective act (tapping them on the shoulder) holds neither connotation. In this sort of circumstance, intent does matter, and the phrase “it wasn’t my intent” may actually be reassuring. The hurt is minor and results from an innocent misunderstanding.

But there is a different usage of this phrase, and one that takes it well outside allowable bounds. Where I more often see “it wasn’t my intent” cropping up is in apologies where it really has no business being. I am talking about circumstances where the hurt is large, or there is no misunderstanding, or the consequences are so significant that intent no longer matters. In these cases the phrase “it wasn’t my intent” and its cousins are the phrases we trot out to abdicate responsibility.

Tim Hunt – World Economic Forum

For example, Tim Hunt used the phrase “I certainly didn’t mean that” this past week when apologizing for sexist comments about women (he called them girls) being a problem in labs. He was worried that women would “fall in love with him” and “cry” and be “distracting,” so Tim thinks they should be in gender-segregated labs. And in his apology he says he “did mean the part about having trouble with girls,” so he seems to be trying to have it both ways. By saying he “didn’t mean” to offend anyone, he seems to be saying that the inherent sexism of his views doesn’t matter, because he didn’t intend it to be offensive. Happily, lots of women in science jumped in to tell Tim just how wrong he is.

Nevertheless, this is how I usually see phrases like “it wasn’t my intent” employed. Not to clear up some real misunderstanding of meaning, but rather as a verbal scalpel to separate someone’s offensive views from the consequences of expressing those views. When someone says something steeped in prejudice and then claims “it wasn’t my intent” to upset anyone, they are effectively saying that there is nothing wrong with their views, and the fault lies in your response.

At this point some people may be thinking “hey, wait a minute, maybe Tim Hunt didn’t mean to be sexist.” They are probably right. And they may be thinking of some time that they said something prejudiced themselves and didn’t realize until after the fact—I know I’ve done this. And that is true, and a good point.

And it doesn’t matter. There is no plausible deniability for those espousing sexism, or racism, or homophobia, or any other prejudicial viewpoint. The offensiveness of prejudiced views and the hurt they cause cannot be separated. This is why the phrase “it wasn’t my intent” is such an insidious bit of misdirection—its basic role is to suggest that when someone is prejudiced and offensive, whether they intended to be matters more than whether they were. It refuses to acknowledge the prejudice as the problem, and thus it reinforces, rather than diminishes, the original harm.

“It wasn’t my intent,” we say, “to give offense. But of course, we are decent people, so if you were bothered by our prejudices, we will happily apologize for the bother, even though the problem really lies with you. Sorry.”

“It wasn’t my intent” is the “I’m sorry your face keeps hitting my fist” of rhetorical apology.

Live in the Real World

Maybe you were just saying how we needed our government to be something less than corrupt, or how women need to be safe in our society, or how evidence and logic should trump nonsense and prevarication, or how arguing about scientific realities is preventing us from dealing with them. And the person you were speaking with replied, with a touch of condescension, a hint of derision, and a little eye roll: “You have to live in the real world.”

Now, it really doesn’t bother me where people choose to focus their time and efforts. If they believe in a just cause, and they can maintain the effort for that cause, more power to them. If they don’t have time for what I think is most important, that’s fine, too—people can and must choose where to put their time, and it cannot be everywhere. But this is a special kind of righteous dismissal, and it isn’t what it sounds like.

When someone says, “You have to live in the real world,” they think they are saying that your suggestions are implausible, or foolish, or unachievable. They think they are gently steering you away from wasting your poor misguided energy on something that you, poor naïve soul that you are, do not realize is worthless.

Via Viudadesnuda

But that isn’t what they really mean. What they mean is: “My version of the real world, the version I have created in my head, accepts these things as givens. So shut up about them already.” And maybe they mean: “…because changing them is too hard.”

Mostly they mean: “Stop making me question my assumptions.”

Now, there are reasonable ways to disagree with people. If one person is talking about systemic inequalities that disadvantage the poor and how we need more support structures, and another person is suggesting that the poor are leeching off the government safety net like the parasites they are, there is clearly grounds for disagreement. There is also room for both sides to support their arguments, and to reconsider their own assumptions. And even though I find one of those views offensive, that’s the place I would rather be in a discussion. Debating issues with people when I disagree with them helps me learn how their views differ, how to support my own, and where I am wrong.

But if either person says to the other “You have to live in the real world,” that room for discussion is lost. Instead of being a point of discussion, the issue has become a point of judgment. That is the signal to me that the other party in the debate doesn’t care about finding the truth of an idea, only about preserving their own worldview.

What’s worse, this phrase doesn’t just come up with people I radically disagree with—in its most painful, useless, divisive form, it is coming from people who I would like to have as allies. It can appear in even apparently friendly discussions, but its true meaning remains. “You have to live in the real world here. Wind power isn’t the answer; solar is the only workable choice.” And maybe: “The government is never going to be effective, so less of it will always be the better option. You have to live in the real world.” Or perhaps: “Of course men shouldn’t rape women, but you have to live in the real world—how you dress is going to matter.”

I can’t make people stop saying this. Neither can you. I can’t even avoid being frustrated every time I hear it. But we can keep the true meaning in mind. If we hear others saying it, we can translate and regroup—okay, what assumption are they guarding here? And, if we find ourselves thinking this about someone else, we can use it as a reminder to question our own assumptions.

Because, in the real world, people are going to say this, and think it, and not always know what they really mean. But in the real world, some of us believe in trying to build a better one.