The 4 (F)laws of Social Listening

Social listening is big business. Having access to a platform that can interrogate social media data has become a must-have for many, if not most, organisations.

The problem is that the vast majority of these platforms suffer from major flaws, often producing pretty graphs and analysis that at best tell you very little and at worst are hugely misleading.

For social listening to deliver genuine insight, it needs to address four things properly: refine, contextualise and weight the data, and be wary of sentiment analysis.

1. Refine

This is where it starts. Rubbish in, rubbish out. Within any social media platform, the volume of spam (or bot-created) accounts, conversations and content is vast. Any social listening exercise or platform must be effective at filtering this crud out, otherwise the resulting analysis will be seriously flawed.

Think of it like refining oil. You could try running your car on raw crude, but I wouldn’t recommend it.

Separating the wheat from the chaff is not a simple exercise. Just saying “let’s look at data from accounts with more than X followers” or “at least X shares”, or using the similar sliders that many vendors provide, doesn’t cut it. In fact, it can often mean missing really important conversations involving key individuals who just happen to be less active or noisy.

You need filters that are sophisticated enough to eliminate the noise, whilst retaining the signal, something Ray Dolby knew a thing or two about.
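
To make that concrete, here’s a minimal sketch in Python of the kind of multi-signal filter I mean. Every field, threshold and weight here is a hypothetical illustration rather than anything from a real platform; the point is simply that several weak signals combined will beat any single follower-count slider.

    from dataclasses import dataclass

    @dataclass
    class Account:
        followers: int
        following: int
        age_days: int
        posts_per_day: float
        duplicate_post_ratio: float  # share of posts repeating earlier content

    def spam_score(a: Account) -> float:
        """Combine several weak signals into a 0..1 spam likelihood (illustrative)."""
        score = 0.0
        if a.age_days < 30:
            score += 0.25                        # very new account
        if a.following > 0 and a.followers / a.following < 0.1:
            score += 0.25                        # follows far more than it is followed
        if a.posts_per_day > 50:
            score += 0.25                        # inhuman posting cadence
        score += 0.25 * a.duplicate_post_ratio   # repetitive, auto-posted content
        return score

    # A quiet expert with only 180 followers survives; the noisy bot does not,
    # which is exactly the case a bare follower-count slider gets wrong.
    expert = Account(180, 150, 2400, 0.5, 0.02)
    bot = Account(200, 4000, 12, 120.0, 0.9)
    assert spam_score(expert) < 0.5 < spam_score(bot)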

2. Contextualise

The keyword. That wonderful item, beloved by social listening platform users the world over. Selecting the right keywords has almost become an art for some people. They produce complex Boolean searches in the hope of finding conversations that are contextually relevant.

The thing is, the more complex your keyword search criteria, the more likely it is that you’re missing something you needed to hear. Say I want to identify conversations about Apple the brand, not the fruit, so I search for a string of “Apple” AND “XXXX”, or “Apple” NOT “XXXX”.
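
To illustrate the brittleness, here’s a toy version of such a filter in Python, with invented query terms standing in for the XXXXs:

    # Toy Boolean matcher; the query terms are invented for this example.
    def matches(post: str) -> bool:
        text = post.lower()
        # “Apple” AND (“iPhone” OR “Mac”) NOT “fruit”
        return ("apple" in text
                and ("iphone" in text or "mac" in text)
                and "fruit" not in text)

    print(matches("Apple's new iPhone keynote was impressive"))             # True
    print(matches("Queueing overnight outside the Cupertino store again"))  # False
    # The second post is clearly about the brand, but contains none of the
    # keywords, so the filter never surfaces it.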

More sophisticated solutions use pattern recognition and topic clustering to identify the context, saving you from having to think of every AND XXXX or NOT XXXX. But what about conversations that are relevant in a less direct way? What about competitors like Samsung or Microsoft? When I ask for Apple data, do we include those conversations or ignore them? Only when they mention Apple? Where do we draw the line?

The other major weakness in these approaches is that they rely on textual content in the first place. If the keywords aren’t present in the post, the system can’t find it. That’s a serious gap when you consider the huge growth in visually driven social media: minimally tagged Instagram posts, or tweets where it’s the picture that tells the story.

The bottom line is that neither of these approaches is a human way of listening. We don’t rely on keywords to tell us whether something is insightful or interesting; we have the capacity to recognise it when it is. Social listening solutions need to be designed with this in mind.

After all, we are listening to people, not simply crunching data.

3. Weight

When Barack Obama steps up to the White House podium, the world listens. When @abc45xxx (not a real account) with 200 spam followers auto-posts a link to an article, no one does.

I’m not talking about influence measurement here – though it’s a related topic – I’m merely pointing out that social media may have democratised conversation, but that doesn’t mean every social media post should be given equal weight. Nor does it mean that only posts from “influencers” matter.

Social listening solutions must take account of the relative importance of what is said, by whom, and how people reacted to it, and then weight their analysis accordingly.
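
As a sketch of what that could look like (the formula and every factor in it are invented purely to make the point):

    import math

    # Hypothetical weighting: a mention counts for more when the author has real
    # reach and the post actually drew a reaction.
    def mention_weight(author_followers: int, replies: int, shares: int) -> float:
        reach = math.log10(author_followers + 1)         # dampen raw follower counts
        reaction = math.log10(1 + replies + 2 * shares)  # reward genuine engagement
        return reach * (1 + reaction)

    podium = mention_weight(40_000_000, 9_000, 25_000)  # head-of-state scale post
    autopost = mention_weight(200, 0, 0)                # spam account, no reaction
    print(podium, autopost)  # roughly 44 vs 2.3: the first dominates any aggregate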

4. Be wary of sentiment

This one is simple: in many cases, automated sentiment analysis is not much better than a coin toss.

If you’re happy to make business decisions on that basis, fine. Otherwise, don’t believe those seductive dashboard graphs. Accept that you’re going to have to take a more human approach to such analysis, which means you probably can’t do it for every last tweet, YouTube comment, blog post etc.
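
If you want to see why the coin-toss comparison is fair, here’s a toy lexicon-based scorer of the kind many dashboards are built on, with invented word lists; negation and sarcasm defeat it immediately:

    import string

    # Toy lexicon scorer; the word lists are invented for this example.
    POSITIVE = {"great", "love", "brilliant"}
    NEGATIVE = {"bad", "hate", "broken"}

    def naive_sentiment(post: str) -> str:
        cleaned = post.lower().translate(str.maketrans("", "", string.punctuation))
        words = cleaned.split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(naive_sentiment("not great, would not recommend"))     # "positive" (wrong)
    print(naive_sentiment("oh brilliant, my phone died again"))  # "positive" (sarcasm)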

Consequences

Meeting the first three requirements is an absolute necessity if a social listening exercise is to have any chance of producing insight.

If you don’t filter out the noise, your analysis is flawed from the start.

Filter the noise but fail to establish the right context, and you’ll be listening to the wrong conversations, however crystal clear they may be.

And fail to weight what was said, and you risk missing the key voices that were driving the conversation.

Social listening solutions need to start addressing these (f)laws properly, and fast, before users realise how irrelevant and misleading much of what these platforms tell them actually is.