Fear or Value – which one is “selling” social media?

When considering making a purchase as a business there are arguably three forms of justification – need, fear or value. By need I mean an absolute requirement for something, i.e. you cannot operate without it. By nature these aren’t the decisions that you spend very long thinking about. The other two are where the majority of consideration comes in.

Fear – To a certain extent this is the more irrational of the two. What if I don’t do this? What won’t I know? What will people think? What if my competitors do this, or perhaps they already are?

Value – This is the more rational. If I do this I will derive this much benefit.

In the recent Econsultancy Social Media and Online PR Report (well worth reading), amongst many interesting statistics, a few that jumped out at me concerned organisations’ (Figure 17) and agencies’ (Figure 19) views of the potential value of social media:

| | Open minded but not convinced of its value | Presents major challenges and risks for their business |
| --- | --- | --- |
| Agency view of Clients | 64% | 15% |
| Organisations themselves | 44% | 19% |
Two points jump out at me from these stats. Firstly, agencies think organisations are more sceptical about value than organisations apparently are themselves. Perhaps this is due to a lack of follow-through on spending decisions?

Secondly, in both cases these figures imply that doubts about value are seen as a much bigger challenge to the argument for engaging in social media activities than the challenges and risks themselves.

This is borne out by the findings of Figures 48 and 50, where from both agency and organisation perspectives 60% of respondents considered they had achieved some benefit from their social media activities, but nothing concrete.

So with the vast majority of respondents seeing no concrete value in what they are doing, does this suggest that fear – fear of what is being said about you, fear of missing an opportunity – is playing more of a role in justifying investment in social media than value?

Oh, and the picture is from the 1970s TV version of Salem’s Lot. This scene was quite simply the scariest experience of my life at the time and I have never forgotten it!

Technorati new rankings explained (I hope!)

I was involved in an Econsultancy Round Table session recently and amongst many very interesting topics discussed was (of course) the perennial conundrum of PR measurement. During the discussion a number of people commented on how they no longer placed any reliance on, or used, Technorati since it had changed how blog authority and rank were calculated. So I thought I would see if I could get to grips with it.

In the past, Technorati’s authority score for a blog represented a count of the number of different sites that had linked to a particular blog in the preceding six months. Until the summer of 2008 this count included links where blogs appeared in blogrolls. These were removed from the calculations at that time, as they were identified as being too slow to change. Basically people’s housekeeping in connection with blogrolls was identified as being less than real time – to say the least I suspect!

The rank of a blog then represented how many blogs had a greater authority score, i.e. more different inbound links, than the selected blog.
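To make that calculation concrete, here is a minimal Python sketch, using invented blog names and scores, of how an authority count translated into a rank under the old approach:

```python
# Pre-2009 Technorati model (hypothetical blog names and figures):
# authority = number of distinct sites linking to the blog in the last six months,
# rank = 1 + the number of blogs with a higher authority score.
authorities = {
    "blog_a": 412,
    "blog_b": 97,
    "blog_c": 97,
    "blog_d": 1530,
}

def rank(blog, scores):
    """Rank = 1 + count of blogs with a strictly higher authority score."""
    return 1 + sum(score > scores[blog] for score in scores.values())

for blog in sorted(authorities, key=authorities.get, reverse=True):
    print(blog, authorities[blog], rank(blog, authorities))
# blog_d 1530 1
# blog_a 412 2
# blog_b 97 3  (ties share a rank)
# blog_c 97 3
```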

The new measurements from October 2009 are less transparent but arguably more valid and useful. According to Technorati, authority is now based on a site’s linking behavior, categorization and other associated data over a short, finite period of time. This results in a score out of 1,000, with a higher score indicating greater authority. The advantages of this approach are that it is less easy for people to manufacture authority by creating fake links, plus the ratings are more dynamic, reflecting the extent to which individual blogs are the source of conversation.

They have also introduced a second authority score, visible when viewing blogs through the Blog Directory feature, that relates to a blog’s relative authority within the sector or sub-sector it is classified in. For example, if you want to know the blogs with a small business focus that Technorati thinks have the most authority on the subject then you can see a list here. In this case the Online Marketing Blog is assessed as having quite a bit more authority (961) within small business blogs than the second ranked blog in this sector, Social Media Today (871). This is despite their overall authority scores being 614 and 689 respectively, indicating that though SMT has more authority generally, Online Marketing Blog is considered to be more influential within the small business sector.

This is an interesting and, I would suggest, very useful change, as it is relative and relevant authority that matters when assessing the importance of different sites, not an absolute measure. We take the same approach to ranking sites at RealWire when calculating our RealWire Influence Rating for coverage achieved. If you don’t take this relative/relevant approach then you will always end up saying that the most influential sites are the ones in the biggest communities, e.g. tech, but that is obviously not appropriate if you are trying to assess which sites are influential in, say, the fashion sector.
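As a rough illustration of the difference between the two views, here is a small Python sketch; the two small business blogs use the scores quoted above, while the “Generic Tech Blog” entry and the sector labels are invented for the example:

```python
# Absolute vs relative/relevant ranking of blogs (partly invented data).
blogs = [
    {"name": "Online Marketing Blog", "sector": "small business", "overall": 614, "topical": 961},
    {"name": "Social Media Today",    "sector": "small business", "overall": 689, "topical": 871},
    {"name": "Generic Tech Blog",     "sector": "tech",           "overall": 750, "topical": 640},
]

# Absolute view: rank everything by overall authority.
by_overall = sorted(blogs, key=lambda b: b["overall"], reverse=True)

# Relative/relevant view: rank only the sector you care about, by topical authority.
small_business = sorted((b for b in blogs if b["sector"] == "small business"),
                        key=lambda b: b["topical"], reverse=True)

print([b["name"] for b in by_overall])       # the tech blog comes first on the absolute view
print([b["name"] for b in small_business])   # Online Marketing Blog comes first within the sector
```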

You can also see those blogs that are rising and falling the most within that sub sector on the right hand side of the same page.

I reckon these changes mean that it is easier to find key blogs that are relevant to you and those that are becoming more and less influential over time. And no this isn’t just because my blog now appears in the top 20k! :-) What do others think?

Am I talking to myself? TweetReach may have the answer

I’ve been rather busy lately moving house, hence the lack of posts and even a reduction in my Twitter activity. To get back into the swing of things, here’s a quick post about TweetReach.

Stephen Waddington highlighted this tool to me at the North East CIPR Awards on Friday (congrats to all the winners by the way). It is pretty straightforward to use. Put in a term, a URL or a hashtag and it calculates the following three measures:

Reach – the number of different people who follow someone who has talked about the search term in some way.

Exposure – how many times someone could have seen a particular reference to the topic, e.g. if 4 people who all have a follower in common have tweeted it, then that follower will have been exposed to it 4 times but will only count as 1 in the reach figures.

Impressions – the total number of occasions to see (effectively the sum of the “Exposure” figures for everyone “Reach”ed)

Note that the Reach figure measures the number of different people. Assuming this is the case (and I have no practical way of checking this) then this is good stuff as it ensures that duplication in networks is taken care of. It also tells you what proportion of the relevant tweets were retweets or @replies.
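To illustrate how the three measures relate to each other, here is a small Python sketch using invented tweeters and follower lists; TweetReach’s actual calculation may of course differ in detail:

```python
# Invented tweeters and follower IDs, purely to show how the three measures relate.
tweets = {
    "@alice": {1, 2, 3},   # follower IDs of each person who tweeted the term
    "@bob":   {3, 4},
    "@carol": {3, 5, 6},
}

everyone_reached = set().union(*tweets.values())

# Reach: the number of *different* people who follow at least one of the tweeters.
reach = len(everyone_reached)

# Exposure for a given person: how many of the tweets they could have seen.
exposure = {person: sum(person in followers for followers in tweets.values())
            for person in everyone_reached}

# Impressions: total opportunities to see, i.e. the sum of everyone's exposure.
impressions = sum(exposure.values())

print(reach)        # 6 - follower 3 is only counted once
print(exposure[3])  # 3 - follower 3 follows all three tweeters
print(impressions)  # 8 - duplicates are counted
```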

So in theory it is a very good tool for understanding the extent and likely penetration of a conversation. Unfortunately, the bad news is that it is limited to the last 50 tweets unless you pay TweetReach $20, and even then it is limited by Twitter’s API to the last seven days or 1,500 tweets. The seven-day limitation also means that you MUST carry out the analysis close to the time of the relevant conversation, as you can’t go back historically. These factors significantly weaken its role as a measurement tool, IMHO.

Perhaps now that Google are indexing our tweets the tool could be expanded?

The Value of PR Measurement – Part 3

This post follows on from Part 2 and assumes the reader is familiar with it. Also, a warning: this post is a little longer than the rest, but I hope it is worth it :-)

When measuring PR, IMHO too much emphasis seems to be placed on proving *absolute* causality: a piece of PR results in coverage, which provokes a demonstrable response, which leads to a required outcome, and at each stage proof is required. The reality, as I mentioned in Part 1, is that such proof is often not possible, and as I covered in Part 2, lack of proof hasn’t stopped accountants making predictions.

What is achievable, though, is demonstrating that causality was likely. Econometric modelling (or regression analysis) can tell you whether outcomes are correlated with particular observable activities, and it can also tell you the likelihood that these correlations didn’t occur by chance.

So how do we go about building models that demonstrate value? I will cover one here and more in the final part of this series.

Share Price based

Quoted companies should, in theory, be the easiest organisations to build a value based approach around as there is a constant real time assessment being made of the value of these organisations – their share price.

First we track the impact of PR as we would normally, but we do so in as close to real time as we can. Online this can be done as we often know the exact time when something is published, whereas offline this is much more problematic. We then qualify the activities that occur, seeking to focus in on those that are most likely to have been influential. Finally we look for evidence of indicators of influence arising from or within these activities e.g. positive sentiment in coverage about the company.

Next we use econometric modelling to estimate how much of the movement in the company’s share price over a period of time is explained by these indicators of PR activity and how much is explained by other factors.

Sounds a bit tricky? Likely to be very expensive? Well, perhaps not. I recently had a demo of a new piece of software called Fin-Buzz that seeks to help PR/IR professionals do this for UK FTSE 100 companies. (Note: this was a complete coincidence, I only found out about it when researching for these posts, and neither I nor RealWire have any link to the company that produces it.)

In my opinion the software could be improved by looking at additional sources of coverage; it currently tracks around 100, I believe, that they view as key. I would also suggest including the ability to actually build and run your own econometric models that then produce actual values – to save me the time doing the analysis below! At the moment the software only provides evidence that causality may exist, an example of which I have used as the basis of my analysis below.

But it is still a good start and I will be interested to see how it develops.

Practical example

(Note: There is no particular rationale behind choosing this example other than I was tracking Centrica at the time as we had expected to meet them at the Communications Directors Forum).

Acquisition of 20% stake in British Energy by Centrica plc

Timeline:

10th May 2009 – Rumours broke in the evening that the acquisition of a stake, thought to be 25% at the time, was going to be announced. http://news.bbc.co.uk/1/hi/business/8042544.stm

11th May 2009 – Announcement made in the morning that stake would actually be 20% http://www.centrica.com/index.asp?pageid=29&newsid=1783 http://news.bbc.co.uk/1/hi/business/8043191.stm

A graph of the change in share price from the day before announcement until two days after looks like this:

[Chart: Centrica share price over the period, from ft.com]

The graph shows that the share price of Centrica rose approximately 6% on the day of the announcement, representing a change in valuation of approximately £700m.
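As a quick back-of-the-envelope check on those two figures (the market capitalisation below is simply implied by them rather than a quoted number):

```python
# A ~6% rise worth ~£700m implies a pre-announcement market capitalisation
# of roughly £700m / 0.06 - an implied, not quoted, figure.
value_change = 700e6   # £
price_rise = 0.06      # ~6%
implied_market_cap = value_change / price_rise
print(f"Implied pre-announcement market cap: £{implied_market_cap / 1e9:.1f}bn")  # ~£11.7bn
```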

In order to model how much of the change in share price was potentially explained by PR, and in particular the reaction to the announcement of the deal, the model needs to include detailed data on factors that could explain movements in the share price. For the purpose of this example I have looked at:

– market data: the FTSE 100 was used (did companies in general experience similar changes in prices?)
– comparable company (same sector) data: a weighted value of the three FT comparables above was used (did companies in this sector experience similar changes in prices?)
– sentiment measured in media coverage by Fin-Buzz

I have then run regression analysis based on the movements in these variables during the month of May. It should be noted that it is a while since I studied Econometrics at University so the experts out there might pick holes in my analysis :-)

Two models have been produced: one that uses all three variables and one that just looks at the extent to which the movements in the share price can be predicted by sentiment alone.
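For anyone wanting to reproduce this kind of analysis, here is a minimal Python sketch of how the two models might be fitted and compared, assuming you have daily data for the share price movement, the FTSE 100, the sector comparables and the sentiment score; the figures generated below are placeholders, not the actual data used:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder daily observations standing in for May 2009; in practice these
# would be the actual Centrica share price moves, FTSE 100 moves, weighted
# comparable-company moves and the Fin-Buzz sentiment score for each trading day.
rng = np.random.default_rng(0)
sentiment = rng.normal(size=20)
data = pd.DataFrame({
    "centrica_move": 0.8 * sentiment + rng.normal(scale=0.3, size=20),
    "ftse_move": rng.normal(size=20),
    "sector_move": rng.normal(size=20),
    "sentiment": sentiment,
})

y = data["centrica_move"]

# Model 1: market data, sector comparables and sentiment together.
X_full = sm.add_constant(data[["ftse_move", "sector_move", "sentiment"]])
full_model = sm.OLS(y, X_full).fit()

# Model 2: sentiment alone.
X_sentiment = sm.add_constant(data[["sentiment"]])
sentiment_model = sm.OLS(y, X_sentiment).fit()

# R-squared shows how much of the variation each model explains; the summary
# includes the t and F statistics referred to below.
print(full_model.rsquared, sentiment_model.rsquared)
print(sentiment_model.summary())
```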

The graph below shows the movement in the actual share price and the share prices that would have been predicted by each of these models:

You can see that both predictions are highly correlated with the actual share price. In fact the statistical analysis says that the vast majority of the explanation for the movements in the price relates to sentiment (R²s of 79% and 84% respectively, for those who know about these things).

In addition, further statistical analysis (t-tests and F-tests, for the statistical experts) shows that there is a greater than 95% probability that movements in sentiment are important in explaining the movement in the share price, and only a tiny chance that this relationship occurred by chance.

N.B. If anyone would like to see the statistical details then just let me know.

Conclusions

The sentiment measured by Fin-Buzz across their sources explains the vast majority of the fluctuations in the Centrica share price over this period, and hence the change in market value of £700m.

The value of good PR to Centrica in this situation was therefore potentially worth millions. We now enter chicken and egg territory. Was there positive sentiment towards the deal because of how it was communicated or because of the deal itself? The answer is probably both.

Some of the value is likely to be in the deal, but only to the extent that the reasoning, the strategy and the implications were communicated well. One person could have heard the announcement and thought “well, it’s a decent deal but I’m not really convinced”, whereas another who had been better communicated with, and therefore understood the thinking better, might respond “this is a great deal actually”. The impact on value in each case could be very different.

Finally, even if PR only influenced or resulted in, say, 10% of the positive sentiment, this would still apparently have resulted in the creation of an extra £70m of value.

But perhaps it isn’t necessary to reach a firm conclusion on this to demonstrate the likely value added by PR in this situation. Let’s face it, if you had something worth £700m, wouldn’t you want to entrust it to the experts?

The Value of PR Measurement – Part 2

This post follows on from Part 1 and assumes the reader is familiar with it.

So how can the world of accountancy help with measuring the value of PR?

Accountants measure value all the time – giving an opinion on a set of accounts or valuing a potential acquisition for a purchaser, or disposal for a seller.

In order to value a company accountants use a variety of techniques. Examples include:

– Multiples of “profits”, where profits can be defined in an alphabet soup of different ways – PBT (Profit Before Tax), PAT (Profit After Tax), EBIT (Earnings Before Interest and Tax), EBITDA (Earnings Before Interest, Tax, Depreciation and Amortisation – phew!)

– Discounted cash flow models, otherwise known as NPVs (Net Present Value), using WACC (Weighted Average Cost of Capital), CAPM (Capital Asset Pricing Model) – yes, even more letters! – and other tools to work out the discount factors used.

Despite the complexity involved in some of these techniques they all basically pose the question:

“How much money (in today’s terms) will owning this company, or a share of this company, entitle me to in the future?”
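To make that question concrete, here is a minimal sketch of a discounted cash flow valuation in Python, with invented cash flows and an assumed 10% discount rate (in practice the rate would come from WACC/CAPM as above):

```python
# Hypothetical forecast: five years of free cash flow plus a terminal value,
# discounted back to today's money.
cash_flows = [10.0, 12.0, 13.5, 15.0, 16.0]   # £m per year, invented figures
discount_rate = 0.10                          # e.g. a WACC of 10%
terminal_growth = 0.02                        # assumed long-run growth

# Present value of the explicit forecast period.
pv_forecast = sum(cf / (1 + discount_rate) ** (year + 1)
                  for year, cf in enumerate(cash_flows))

# Gordon-growth terminal value at the end of year 5, discounted back to today.
terminal_value = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
pv_terminal = terminal_value / (1 + discount_rate) ** len(cash_flows)

print(f"Value in today's terms: £{pv_forecast + pv_terminal:.1f}m")
```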

What they are all attempting to do therefore is predict the future.

Unless you are Mystic Meg this is clearly an impossible task. There is no way that anyone can predict the future with any certainty, and hence any valuation is almost certain to be wrong. But that doesn’t stop accountants doing it every day.

So when you try to predict the future earnings of a company, what factors do you take into account?

Clearly there is the current level of profitability as a starting point. You can then go on to consider factors that demonstrate market potential, competitive advantage and barriers to entry such as:

– Market expectations in the future for that company’s products
– IP the company has or is developing
– Market share
– Potential to improve efficiency and hence profit margins
– Management team and their likelihood to deliver the company’s plans

Public/investor relations plays a role in increasing the “value” placed on a company by communicating these sorts of factors effectively.

But in addition to this the key component of a company’s competitive advantage, and hence its ability to make future profits, is the reputation of its brand. If reputation changes then value can be created or destroyed. This is because the change in reputation will affect perception of the very factors that drive value, such as the likelihood of the company exploiting markets, launching new products and the confidence in the management team.

And PR is the custodian of reputation.

So accountants give opinions and value companies and yet, with the greatest respect to my former colleagues and fellow professionals, probably don’t understand reputation as well as PR professionals.

What does this all mean?

Perhaps we need to look at PR “measurement” in a different way. Perhaps we should be looking at how brand values change, how share prices move and how the profitability of a company’s products and services changes. We could then try and demonstrate, through a framework, how reputation management and development through PR has contributed to improvements in these areas. This way PR claims the Value of what it has helped to achieve, not the activities or even the actions that have occurred.

In Parts 3 and 4 I will try and suggest some practical ways we could perhaps seek to do this.