Washington Post political reporter Chris Cillizza – better known as The Fix – asked the following yesterday on Twitter:
Serious question: Is it possible to have a civil and edifying comments section on a political blog? If so, how? Advice welcomed.
I’ve often said that if you ever want to lose faith in humanity, you should just read the online comments on newspaper stories – it doesn’t matter which paper, what kind of story it is, or when it was posted. Many, if not most, of them are going to be ill-informed, vitriolic, riddled with grammatical and spelling mistakes – or all of the above.
The Diamondback is, sadly, generally no exception to this rule. The first comment I ever moderated was on a story about the College Park City Council. It boldly asserted that there was no College Park City Council, and that the only two cities in Maryland were Annapolis and Baltimore. This is demonstrably false.
But the comments never got as bad as they did this semester. The university was in the midst of several controversies focusing on campus diversity, which led to a lot of outright or borderline racist remarks being posted (for obvious reasons, I’m not going to repost them here). The system we used to monitor comments didn’t do a good job of filtering, and we weren’t alerted when comments were posted. This meant I or the web editor would check the site in the afternoon, only to find a mess of vitriol, often too big for us to contain.
Soon, this problem expanded, thanks to the efforts of one very persistent, very annoying commenter.
She (or he) went by the name “Cynthia” – at least most of the time. The persona was that of a black woman engaging in reverse racism – damning whites to hell, saying she hoped the whites on campus would die, claiming all white people were racist, even threatening the university president – which earned us a call from University Police. She also threatened multiple times to come to the newsroom and “beat the ass” of several Diamondback editors.
The posts would come several times a day, on articles of all types, even ones having absolutely nothing to do with race. They got progressively more absurd. Cynthia’s rhetoric was so inflammatory, so over-the-top, that I’m convinced it wasn’t actually an angry black woman, but someone simply trying to start fights. “She” would also occasionally post racist remarks under multiple names – we could tell they were all “her” because the IP addresses were identical and the comments would be made within minutes of each other.
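The tell described above – different screen names sharing an IP address and posting within minutes of each other – is simple enough to automate. Here is a minimal sketch of that heuristic; the function name, data shape and five-minute window are my own assumptions, not anything our commenting system actually did:

```python
# Hypothetical sketch: flag commenters who share an IP address and post
# under different names within a short time window – the same signal we
# used by hand to spot one person posting as several people.
from datetime import datetime, timedelta

def flag_sockpuppets(comments, window_minutes=5):
    """comments: list of dicts with 'name', 'ip' and 'time' (a datetime).
    Returns the set of names that share an IP with a *different* name
    within window_minutes of each other."""
    flagged = set()
    by_ip = {}  # ip -> earlier comments from that address
    for c in sorted(comments, key=lambda c: c["time"]):
        for prev in by_ip.get(c["ip"], []):
            if (prev["name"] != c["name"]
                    and c["time"] - prev["time"] <= timedelta(minutes=window_minutes)):
                flagged.update({prev["name"], c["name"]})
        by_ip.setdefault(c["ip"], []).append(c)
    return flagged

comments = [
    {"name": "Cynthia", "ip": "1.2.3.4", "time": datetime(2010, 1, 1, 12, 0)},
    {"name": "AngryReader", "ip": "1.2.3.4", "time": datetime(2010, 1, 1, 12, 2)},
    {"name": "Bystander", "ip": "5.6.7.8", "time": datetime(2010, 1, 1, 12, 3)},
]
print(flag_sockpuppets(comments))  # the two names sharing 1.2.3.4
```

Of course, shared IPs (a campus computer lab, say) make this a flag for human review, not grounds for automatic deletion.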
And “her” attempts to provoke worked. Her posts would draw angry responses, usually from white people. Relatively often, these responses would be racist themselves. Which would prompt more anger, more accusations and more INTERNET SHOUTING.
Eventually, we switched to a new commenting system, which notified us every time a comment was made. We could quickly respond when Cynthia posted, or when other racist or otherwise offensive remarks went up, and put out fires before they spread. The incivility, the racism and the SHOUTING all quickly went away.
So why does this matter? It shows how one person – whether as a joke or out of genuine rudeness, racism or incivility – can quickly cause an entire commenting system to devolve into a basically worthless mess. This has major implications for how journalists are going to interact with their readers in the digital era. To me, it indicates that to keep comment sections civil, informative and truly useful, there must be something approaching a zero-tolerance approach.
Many of the responders to Cillizza’s question said the solution to this problem is to engage with regular readers and commenters, making the commenting section a two-way street; to vigorously moderate the comments; or to create a membership model, where people’s real names and e-mail addresses are somehow checked and verified.
The more successful commenting sections on the internet, such as those at The New York Times and the Voice of San Diego, follow at least one of these rules. The Times approves all comments before they appear, and the Voice uses a membership model with verified e-mail addresses and names.
But there are issues with all of these solutions.
By stopping every possibly inflammatory comment, are journalists restricting freedom of speech? At least philosophically, don’t extreme views (“Obama hasn’t accomplished anything during his first year, and his health care plan is going to cause the elderly to die” OR “Bush was a fascist who should be tried at the international criminal court, and is a bigger terrorist than Bin Laden will ever be”) deserve the same treatment as moderate ones (“Obama’s first year was a mixed bag” OR “While I disagree with Bush’s national security policies, his domestic ones were fine”)?
And in a time of declining newsroom budgets and shrinking monetary and personnel resources, how much time can reporters realistically spend monitoring comments on their articles, never mind thoughtfully engaging in discussions with regular readers? And how many editors are going to be comfortable with their reporters responding publicly to commenters without editing?
As for a membership model, could that have a negative impact on Web traffic? And verifying e-mail accounts or names would again consume precious newsroom resources. Similarly, most newspapers can’t afford to hire a staffer who focuses solely on moderating or approving comments.
As you might be able to guess, I’m not too optimistic about keeping online comments civil in the future. Any real fix is likely going to require a mixture of the approaches discussed above. Some news organizations might stumble upon the right combination of moderation and membership to keep comments civil without resorting to censorship, while keeping journalists at the right level of involvement.
This is a sticky issue, and these discussions are taking place in newsrooms across the country. In one of them, a fix to the problem might arise.