Friday, February 03, 2012

How should agencies moderate their online channels?

While government agencies often have limited options in how they moderate third-party social media channels, there are a number of ways they can choose to moderate channels under their own control, including blogs, forums and wikis.

There's limited official guidance, and no real mandates or instructions for particular moderation approaches, across Australian government (to my knowledge). This is partially a good thing, as agencies need to consider what works for their goals and the sensitivity of their engagements, not merely follow a central line.

I have been asked a number of times by various people about the best approaches to moderation and how other agencies choose to moderate; however, I only recently put together a quick review, prompted by a request in my job.

As this is public information - something that can be observed by visiting any particular blog or forum - and there is widespread interest as agencies look at what each other is doing, and why, to help inform their own decisions, I thought it worth publishing the list and allowing other agencies to add to it, so government agencies can share this important information and collectively learn from it.

The spreadsheet, Australian agency moderation of online social channels, is available for viewing and editing here.


I also thought it worthwhile to provide some basics on moderation: what it is, how it can work and why it's done.

In my mind moderation differs from censorship or approval. It is a conversation management technique used to influence conversations, keeping them on track and at a 'Goldilocks' temperature - not too hot (for example, people yelling at the top of their voices) nor too cold (for example, people speaking in icy tones).

Other purposes for moderation include risk management - particularly around legal considerations of defamation, copyright and the publication of inappropriate or offensive material - and guiding the culture of an online space. Just as organisations develop cultures, so do online spaces. These may be positive, supportive, respectful and engaging, or abusive and demeaning, depending on the management approach.

Where an owner or manager of an online space fails to have mechanisms like moderation and community guidelines in place upfront to help shape and underpin the culture they wish to support, there is significant risk of the culture developing in unintended directions and being difficult to manage once a given audience moves in.

Censorship and approval, on the other hand, are control techniques used to enforce the owner's views and beliefs over those of the community. Both provide broader control over conversations, not simply influencing them but actively constraining them to what the online space's owner feels is appropriate.

In these regimes the reasons why comments are not published are often highly subjective, or based on the internal beliefs of the online space's owner rather than on objective guidelines for conversation. Censorship in particular is about prohibition of content, which can limit conversations to politically correct lines of thought - not good for a robust discussion or the debate of 'left field' ideas - whereas approval of content risks enshrining a user's views as somehow being endorsed or supported officially by the space's owner, which may not be the case.

As the owner or manager of an online space, when moderating you have to allow views that disagree with you to be published, provided they are not abusive or defamatory. However, when censoring or approving you may choose to publish views which disagree with you only selectively, or not publish them at all.

Obviously moderation can be more uncomfortable, particularly in political environments, as you can be more readily challenged. However the outcome is far more inclusive, encourages a broader level of participation and provides opportunities to influence and be influenced.

When it comes to how organisations moderate, there are several different approaches to choose from.


Pre-moderation
The first place people commonly go is pre-moderation. This means that, as the manager or owner of an online space, you read and review every comment as it comes in against your moderation guidelines before you allow it to be published. As this process suggests, it becomes resource intensive in active communities and doesn't scale well, hence it is not used by the owners of services such as YouTube, Facebook, TripAdvisor or other large community or social sites.
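To make that workflow concrete, here is a minimal sketch in Python (with hypothetical names, not any particular platform's API) of how pre-moderation typically works: every comment lands in a pending queue and nothing becomes visible until a moderator has reviewed it.

    # A minimal pre-moderation sketch; all names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        text: str
        status: str = "pending"  # pending -> approved or rejected

    @dataclass
    class PreModeratedSpace:
        pending: list = field(default_factory=list)
        published: list = field(default_factory=list)

        def submit(self, comment: Comment) -> None:
            # Nothing goes live on submission; it waits for a moderator.
            self.pending.append(comment)

        def review(self, comment: Comment, approve: bool) -> None:
            # A moderator checks the comment against the guidelines.
            self.pending.remove(comment)
            comment.status = "approved" if approve else "rejected"
            if approve:
                self.published.append(comment)

Every comment costs moderator time before anyone sees it, which is exactly why the approach struggles to scale.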

Pre-moderation offers the illusion of greater control and lower risk, as you check everything. However, there are often legal factors at play which mean a court could hold the online space's owner to a higher standard, considering that, by pre-moderating, they are more responsible for users' comments than if they explicitly did not pre-moderate.

Therefore, unless you have highly trained moderators (with an in-depth understanding of defamation, copyright, discrimination and other applicable laws), pre-moderation can expose an organisation to greater legal liability. However, don't take my word as a non-lawyer on this (I am not offering legal advice); please consult your lawyers regarding your agency's circumstances.

Pre-moderation has another major negative - it kills conversations. While it may be a suitable technique for a blog, where comments are usually in reaction to the original post, in forums, wikis, social networks and other conversational online spaces, pre-moderation is usually the kiss of death for a community. It is simply not possible to have a timely or coherent conversation when a minder at your shoulder is screening each of your words before they are heard.

I like to compare this to the process for holding town hall meetings. Sure, you may vet who is allowed in the door and manage the flow of conversation in the room by laying down ground rules, limiting time per statement or question, even closing down or ejecting abusive or defamatory speakers. However, you cannot have an effective, spontaneous open discussion if each speaker is required to pre-submit all of their questions or comments for moderation - why hold the town hall at all?

Post-moderation
The other main approach to moderation is post-moderation. This involves establishing a clear and publicly available set of moderation guidelines (which should be public even when pre-moderating) and reviewing comments after they are published and publicly visible within your online space.

While this may sound risky, it hasn't proven to be in practice where a community is well managed and it is made clear that at times comments may appear which are inappropriate, but that they will be removed once detected or reported. If necessary, risks can be further reduced by pre-registering users and holding their first comment for pre-moderation (which is also a spam control approach - more on that later).
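As a rough illustration of that 'hold the first post' rule, here is a sketch of the decision logic (hypothetical names again): a registered user's first comment is queued for review, and once they have an approved comment their later comments publish immediately and are post-moderated as usual.

    # Sketch of first-post pre-moderation; names are illustrative.
    def handle_comment(user, comment, hold_for_review, publish):
        if user.approved_comment_count == 0:
            hold_for_review(comment)  # first post: a moderator checks it
        else:
            publish(comment)          # established user: visible at once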

Post-moderation is used by the vast majority of large community sites, often with mechanisms for users to report content they feel is inappropriate so that the owner can take any appropriate steps.

The benefits of this approach include reduced resourcing and the ability to scale quickly to any size of community - important for organisations that don't know ahead of time how large a community may become. Post-moderation also supports free-flowing conversations, meaning that forums and wikis actually work and may deliver the outcomes you seek - provided you have built and promoted the community effectively and the topic is of interest to your audience.

Post-moderation can also reduce - but not totally avoid - the potential legal risks that pre-moderated communities face. However, it remains important to have a level of trained moderation capability on hand to respond quickly to reports of inappropriate comments.

Best moderation approach
In my view post-moderation is in most cases the preferable approach; however, organisations need to be ready to shift temporarily to pre-moderation where events dictate. Pre-moderating the first post of new users, where users register or otherwise have a persistent identity, is a useful additional technique where it is unlikely to alienate users en masse, and having clear methods for participants to report poor behaviour is a must.

There are cases where it is better to pre-moderate, such as for highly emotive topics or where there is significant risk of politically motivated groups deciding to invade en masse and take control of a space for their own goals.

Government agencies do have special circumstances that can require pre-moderation at certain times, such as during the caretaker period before an election, during a national emergency or when significant machinery of government changes are taking place. Public companies may also need to consider it during share freezes or prior to major public announcements.

If you establish your system effectively, switching from a post-moderation to a pre-moderation environment (or vice versa) should take no more than a few minutes to achieve technically - provided any changes in community guidelines or moderation policy are prepared ahead of time. In fact, if you are running a post-moderated space I would strongly suggest pre-preparing the guidance for pre-moderation, just in case you ever need it.
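To illustrate why the switch can be so quick, here is a sketch (illustrative names only, not a specific platform's configuration) of how the pre/post-moderation choice can be driven by a single setting rather than any rebuild of the system:

    # Sketch: moderation mode as a single configuration flag.
    settings = {"premoderate_all": False}  # normal, post-moderated operation

    def on_comment_submitted(comment, hold_for_review, publish):
        if settings["premoderate_all"]:
            hold_for_review(comment)  # e.g. during a caretaker period
        else:
            publish(comment)          # visible immediately, reviewed after

    # Flipping to pre-moderation is then a one-line change:
    settings["premoderate_all"] = True

If your platform exposes moderation mode this way, the technical switch really is a matter of minutes; the prepared guidelines are the part that takes planning.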

Spam management
Another area worth touching on is spam - the bane of all system administrators. It is estimated that up to 90% of all email transmitted over the Internet is spam, unsolicited commercial messages designed to make people buy, or sometimes carrying malicious code with the hope of infecting systems for use in bot armies (for sending more spam or hacking secure systems).

Spam is also a persistent issue for online communities, though increasingly a manageable one. I recommend using one of the global anti-spam filters such as Akismet or Mollom, which are rated at over 95% effective at preventing spam from being published (that is, blocking at least 95 of every 100 spam messages).
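For a sense of how these filters plug in, here is a rough Python sketch of checking a comment against Akismet's comment-check endpoint using the requests library. The API key and blog URL are placeholders, and you should confirm endpoint and field names against Akismet's current API documentation before relying on this:

    # Sketch only: verify details against akismet.com's API docs.
    import requests

    API_KEY = "your-akismet-key"             # placeholder
    BLOG_URL = "http://example.gov.au/blog"  # placeholder

    def is_spam(user_ip: str, user_agent: str, content: str) -> bool:
        resp = requests.post(
            "https://%s.rest.akismet.com/1.1/comment-check" % API_KEY,
            data={
                "blog": BLOG_URL,
                "user_ip": user_ip,
                "user_agent": user_agent,
                "comment_type": "comment",
                "comment_content": content,
            },
        )
        return resp.text == "true"  # Akismet answers "true" for spam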

Other techniques also assist in spam management, such as using honey traps on registration or submission (form fields that spam bots - automated systems - see but human users do not) and using the first-post pre-moderation approach described earlier. Tools such as CAPTCHA, where you must read and type in letters or phrases from an image, can also help; however, there are techniques in use to circumvent these, and they tend to frustrate users - often up to 20 percent of legitimate human users cannot successfully complete a CAPTCHA challenge, and I sometimes struggle with reading them myself.
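A honey trap is simple to implement: the form includes a field that is hidden from humans (for example with CSS), so people leave it blank while most spam bots, which fill in every field they find, give themselves away. A minimal sketch, with an assumed field name:

    # Sketch of a honey-trap check. Assumes the form includes a field
    # such as <input name="website" style="display:none"> that real
    # users never see and therefore never fill in.
    def looks_like_bot(form_data: dict) -> bool:
        return bool(form_data.get("website", "").strip())

    # Usage: quietly discard the submission if the trap was filled.
    if looks_like_bot({"name": "Jane", "comment": "Hello", "website": ""}):
        pass  # would be discarded as spam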

One thing I strongly advise against is using pre-moderation as an anti-spam technique. Generally the goal of preventing spam should not outweigh the goal of having an effective and flowing conversation. Stopping the community's discussion in order to protect against unsolicited commercial messages is a very big trade-off, similar to requiring all car drivers to submit to breath analysis EVERY TIME before they can drive on a public road. Sure this approach would reduce drink driving (though heavy offenders would find a way around it), but it would unduly punish the majority of drivers doing the right thing.

In conclusion...
With no clear guidance or mandated approach for moderation from any Australian government (that I am aware of at the time of writing), agencies all have a choice in how they wish to moderate the online spaces they manage.

I think this is a good thing as moderation will always be horses for courses. However I strongly recommend that agencies seek legal advice and consider the choices and reasoning of other agencies before striking out in a particular direction.

I also strongly recommend that you share your approach and moderation guidance with other organisations so, collectively, agencies improve by building on each others' experience and expertise.

One way you can do this is by adding your moderation approach to the spreadsheet: Australian agency moderation of online social channels.
