22nd February 2017

By: Mike Constantine, CTO.

An interesting (and at times too rare) conversation is being held in boardrooms across the country. It goes a little like this:

“Hi Paul, I’m Alison the CISO”… “Nice to meet you Alison, I’m Paul the CMO”.

Yes, these two execs have probably rarely passed each other in the corridor, let alone had a conversation or made decisions together about the business or its strategy.

But, with recent headlines, Alison and Paul really do have some decisions to make. "Fake news" became the buzzword of the US presidential election: social media was allegedly used to disseminate articles and posts designed to polarise the electorate, some of which were not factually correct. Couple this with the Pew Research Center's (slightly concerning) finding that 44% of Americans get their news from Facebook, and you can see what a powerful 'tool of misinformation' the social network can become.

Another story also made me sit up and take notice last week. A researcher at UCL found a botnet of 350,000 automated fake Twitter accounts which (presumably for the right amount of Bitcoin) could be mobilised to act as a single unit. This had the potential to change patterns of conversation, swing opinion, and influence election results. In this case, though, the botnet was only quoting passages from Star Wars novels… so I assume it was fairly docile!

These systems of nefariousness (namely giant botnets and social media) raise a serious question. At what point will systems like this be used to commit corporate damage? Will this be done by the hackers or activists (like many of the attacks of old) or will it be companies attacking each other? When does fierce competition tip over into something which is subject to legal challenge?

Well, it seems to me that when systems reach this size, generating improper content about a product or company becomes straightforward. How many of us rely on customer review scores to book our next hotel room or decide which new microwave to purchase? A huge percentage, I suspect. What if some or all of those reviews were 'influenced' by a competitor? Could that actually destroy product launches, distort share prices, or sway customer procurement strategies?

Whilst this paints a rather bleak, exaggerated picture of what could happen, I imagine those early conversations between the CISO and the CMO will have been interesting!

How do we change our security approach to deal with this threat?

No longer is security just about stopping people getting in and doing damage… or about stopping sensitive information getting out of the organisation. That's still important, but it's definitely not enough anymore. As the world evolves into an almost exclusively online market, how do you safeguard your brand or your customer Net Promoter Score (NPS)?

My view is that this becomes the domain of Big Data analysis. The challenge is to detect anomalies and trends in the data that has been collected, and then act on variances outside the norm which might indicate malicious activity. A number of vendors in this space can analyse data feeds from systems, telecoms and security equipment, end users and social media; build a baseline of what is 'normal'; and raise an alert when anomalies arise.
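To make the idea concrete, here is a minimal sketch of that baseline-and-alert pattern. It is not any particular vendor's method; it simply tracks a rolling baseline of daily brand-mention counts (the data and the three-sigma threshold are illustrative assumptions) and flags days that deviate sharply from the trailing window:

```python
from statistics import mean, stdev

def detect_anomalies(counts, window=7, threshold=3.0):
    """Flag days whose count deviates from the trailing window's
    baseline by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu = mean(baseline)
        sigma = stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: no variance to measure against
        z = (counts[i] - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((i, counts[i]))
    return anomalies

# Hypothetical daily brand-mention counts; day 10 spikes suspiciously,
# as it might if a botnet suddenly started pushing content about you.
daily_mentions = [102, 98, 110, 95, 104, 99, 101, 97, 103, 100, 450, 105]
print(detect_anomalies(daily_mentions))  # flags day 10 (450 mentions)
```

A real deployment would of course work over richer signals (sentiment, account age, posting cadence) rather than raw counts, but the principle — learn 'normal', alert on variance — is the same.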

As the stories about fake news and spurious content grow, so might the demand for managed security services around an enterprise's social media presence. It's in the BBC news again, and The Morality of Fake News is being broadcast on Radio 4 tonight, 22nd February 2017.

Let’s all watch this space with interest.

