In my previous post, I sketched out a way to potentially reduce the current issues with the structure of social media by taking it back toward the internet’s open roots and reducing the control exerted by large, unaccountable, for-profit corporations.
I can’t actually implement that. I could maybe do the technical side, at a stretch, but creating the critical mass of adopters that would be essential for it to succeed, against the resistance of powerful incumbents, is far beyond my ability or the ability of anyone I know.
So what can you or I do to improve social media if we can’t do much about its structure? What influence do we have?
Well, we have some influence, however small, over the content of social media. Specifically, we decide what we share, how we comment, and what we react to. And since shares, comments, and reactions are the three ways in which a social media post gains traction and influence, this isn’t an insignificant power, if we choose to use it wisely.
In this post, then, I want to suggest some principles that we can follow to improve the quality of social media in our immediate zone of influence.
Posting and Reacting
Let’s start with posting and its little brother, reacting. (On Facebook, your reactions are broadcast to your network, so it’s similar to sharing the post; on Google+, your network only sees your reactions if you haven’t turned that setting off, and most people have done so. Among other things, this means that I feel more free on G+ to “like” things that not everyone in my network will agree with, without worrying about what they’ll think.)
On social media, few people create content, and much of the original content they do create is about themselves, not about issues. (Which is fine; one of the reasons I'm connected to people on social media is that I care about what's happening in their daily lives.) A larger number of people curate content, sharing articles or “memes” either from websites they frequent or from other social media users in their networks. Usually, we share things that we feel strongly about, and that we agree with; and the easiest strong emotions to arouse with a piece of content are outrage (at the actions or opinions of people who are “not our people”) or self-righteous smugness (at the actions or opinions of people who are “our people”). If you can find one of “your people” hitting out at one of “their people,” it’s a two-for-one.
A lot of sharing on social media, in fact, is aimed at proclaiming our membership in a particular group. By proclaiming faithful group membership through the things we share, we can get affirmation from the other members of the group (in the form of further shares, reactions, and comments) and feel less alone in a hostile world.
The natural effect, though, is to amplify outrage, smugness, and division. I hope we can agree that smugness and division are inherently bad things to amplify, and that outrage is only worth amplifying when two conditions are met: it’s outrage about something that’s actually happening or has actually happened, and it leads to effective action for change.
Those two criteria are not often met, though.
“Fake news” is a term that’s had a lot of use over the past couple of years. It’s sometimes used as a mere slur against coverage that’s unsympathetic to the speaker’s “side,” but there are more objective definitions of “fake news.” It ranges from outright falsehoods presented as news (sometimes under the cover of “satire”), through conspiracy theories that impose a false narrative on real events, extreme spin and distortion, and the omission of context or nuance to the point of reversing the significance of a fact, to biased opinion presented as fact.
A pair of data scientists trained a fake news detector, and discovered in the process that it’s actually easier to train a real news detector. They called it Fakebox. What it detects is whether a sample article is “written with little [sic] to no biased words, strong adjectives, opinion, or colorful language”. In other words, it looks for an objective, factual tone--the kind of article that doesn’t tend to compel people to share it on social media.
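To make the idea concrete, here’s a minimal sketch of the general approach behind tone-based detection: score a piece of text by how much emotionally loaded language it contains. This is not how Fakebox is actually implemented; the word list and threshold below are invented purely for illustration.

```python
# Toy illustration of tone-based "real news" detection (not Fakebox itself).
# The loaded-word list and threshold are made up for demonstration.
LOADED_WORDS = {
    "outrageous", "disgusting", "shocking", "unbelievable",
    "disgrace", "horrifying", "stunning", "insane",
}

def loaded_word_ratio(text: str) -> float:
    """Fraction of words that appear in the loaded-word list."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in LOADED_WORDS)
    return hits / len(words)

def looks_objective(text: str, threshold: float = 0.05) -> bool:
    """Crude proxy for an 'objective, factual tone'."""
    return loaded_word_ratio(text) <= threshold

print(looks_objective("The committee approved the budget on Tuesday."))  # True
print(looks_objective("Shocking! This outrageous, disgusting disgrace is unbelievable!"))  # False
```

A real system would use a trained classifier over many more features, but even this crude heuristic captures the point: the articles most worth sharing are often the ones written in the least shareable tone.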
So, if you don’t have the technical resources to set up a Fakebox server, how might you decide whether to share something or not?
Well, firstly, is it substantial? Does it present or consider more than one viewpoint? Does it explore the topic in depth? This test basically rules out the “memes” which many people on social media share as readily as I click “Like” on a cat photo. I don’t mind (up to a point) the inspirational-quote ones or the jokes; I’m talking here specifically about the ones which lay out something that sounds like a fact, or a series of bullet points that sound like facts, but don’t provide any way of checking the claims for context or accuracy.
It’s essentially impossible to convey a significant amount of truth in an image with just a few words, and when these memes are fact-checked, they tend to range from outright falsehoods, through inaccuracies, to aspects of the truth presented without enough context to really understand them in a useful way. I haven’t done a study, but my intuition is that they skew towards the “outright falsehood” end, often by what they omit, but sometimes by what they claim. Whenever I see a new one do the rounds, I wait for the fact-check, and it almost without exception confirms my suspicion that they’re, at best, misleading. I never share them, even the ones that match up with my existing beliefs, and I urge you to consider adopting the same policy.
Secondly, before sharing something, check its tone. Does it amplify helpless outrage? Or does it amplify hope? Outrage is compelling, and in sharing it with your friends, who will agree with it and reinforce that you’re not alone, you feel slightly less helpless; but if all you’re doing is spreading the helpless outrage, it’s not a net gain.
Thirdly, does what you’re sharing give a helpful way forward or suggest action you can take? I listened to a fascinating podcast a while ago about some research done on China’s social media platforms. Surprisingly, the researchers found that people were not censored for expressing outrage against, or even insulting, government officials or government policy. What got them censored was calling for action. The Chinese government has apparently concluded that expressing outrage is no threat to them, as long as nobody does anything.
This suggests that amplifying a sense of helpless outrage on social media will only help to preserve the situation, and the system, you find intolerable.
Before you share, ask yourself: Would the Chinese government bother to censor this?
Fourth, does what you’re sharing draw us together by our common humanity, or focus on what divides us? Does it locate all the problems outside your group, reinforcing a sense of them and us? This is a question for liberals as well as for conservatives; liberals are far from immune to the temptation to excuse their own people for what they condemn in the “other”.
I have a lot more respect for criticism that comes from inside the house. There is, of course, a place for criticism of groups you don’t belong to; part of the reason you don’t belong to them is that they stand for something you disagree with. But an article that implicitly (or even explicitly) places all the evil somewhere else is inevitably covering over a blind spot.
That doesn’t mean you can’t share it. But it does imply a duty for you to uncover that blind spot and comment on it, critiquing the failings and omissions of your own people according to the principles you claim to hold. If you’re actually acting out of principle, and not simply based on group membership, you should be able to do this at least some of the time.
In general, though, I suggest that you focus on and amplify what you love and what you hope, not on what you hate and what you fear. Terrible things are happening, but wonderful things are also happening, and they get a lot less exposure even though they’re more common. If you feel you need to talk about things going wrong (which is an important topic, as long as it’s not the only topic), do so by talking about people who are doing something about them.
If you can’t find anybody who’s doing anything, maybe you should do something.
Commenting
Let’s talk about commenting now. “Don’t read the comments” is generally good advice for websites (and excellent advice for YouTube); comments on social media, depending on who’s in your network and whom you allow to comment, can be more positive and helpful, but they can also rapidly degenerate into insults and point-scoring. This is especially the case on controversial topics, the kind of thing that is based on amplified outrage--which is another good reason not to amplify outrage.
One of my basic principles for social media comments is: don’t interact with posters who can’t pass the Turing test. The Turing test is the famous conversational test that sets out to distinguish a machine from a person. There are a lot of “bots” around on social media, posting stereotyped comments based on keywords in order to draw attention to their business or cause, or amplify some particular form of outrage. Some of these are software-based, and some of them are implemented in the form of a human being typing on a keyboard. If you can’t tell which one it is, don’t talk to them.
If you are talking to a person, though, talk to them like a person, not like a member of a group whose members are interchangeable. My wife recently commented on an acquaintance’s post, a classic amplifier of outrage against a group to which she happened to belong. Another poster, who I know is an actual person, jumped in and ranted at her based on a stereotype that bore little connection to reality. It didn’t result in a fruitful discussion.
An exchange of insults achieves nothing. Instead, look for common concerns and common humanity with people who differ from you. Consider the recent story of a well-known comedian who engaged with a man who coarsely insulted her on Twitter. She looked beneath his insult for the person and found someone in pain, and they ended up having a productive exchange; in fact, she helped him with the life situation that was part of what was behind his bad behaviour.
If you must have a discussion with someone you disagree with on social media (and I don’t advise it, in general), look for things you agree on, and appeal to shared values. Show how those shared values lead you to the conclusion you’ve reached. If you can’t find shared values, there’s not much point in discussing.
I’ll add: Don’t argue to win. Have some humility, and be prepared to learn and to admit when you’re wrong. As a rule in life, whenever I go off on a rant, I almost always turn out to be mistaken about something in the situation; sometimes about everything.
In summary, helpless outrage over misinformation, self-righteousness, and affirming group identity at all costs are not a good basis for much of anything. But they’re what social media tends to encourage.
We can help to change that if we approach our social media usage more consciously.
So here’s my social media pledge:
- I will seek out and share the truth, not just what confirms my prejudices.
- I will only share information that’s substantial and fact-based.
- I will not share “memes” that sound like facts, but don’t provide enough context to evaluate their truth.
- I will amplify what I love and what I hope for, not what I hate and fear.
- I will look for ways I can take action to change things for the better.
- I won’t engage with bots, or with people I can’t distinguish from bots, and I won’t act like a bot myself.
- I will look for shared values and common humanity in the people I encounter.
- I will approach discussions with humility, kindness, and a willingness to change my mind.
Join me, won’t you?