Research Updates

AI scammers, holographic PMs and losing the race to the research pole

We live in interesting times.

If a royal wedding watched by half the planet or the pending implementation of an EU privacy regulation doesn’t float your boat – five days to GDPR! – tomorrow New Zealand’s Prime Minister will address the crowds at Techweek in holographic form. Likely so she can keep up with work commitments and be in two places at once – and who wouldn’t benefit from cloning themselves to stay on top of email?

“Help me NZ techies, you’re my only hope….”

Meanwhile the boffins at Google have taken decades of research into AI and computer speech synthesis and produced an autonomous assistant in the form of ‘Duplex’ that can book a hair appointment for you and sound uncannily real in the process. Parody makers, start your engines…

If the loping, door-opening robots of Boston Dynamics don’t have you reaching for that classic ’80s Terminator DVD, Juha Saarinen’s observations of Duplex’s abilities in adversarial human hands should prove a lightbulb moment:

Humanity has an infallible ability to subvert and pervert the coolest technology, and use it to hurt each other with.

Unfortunately, it’s all too easy to imagine how Duplex could be misused by robocallers and phone fraudsters who won’t start off the conversations with a “you are talking to an AI” warning.

Think email spam, phishing, romance scamming and 419ing, except they’ll arrive on your mobile phone.

More naturally sounding and behaving digital assistants backed by self-learning AI will make them more attractive to people, not less, so expect to speak to machines more often.

Google CEO Sundar Pichai told cheering crowds that Duplex understands the context and the nuance of conversation, no mean feat for those of us struggling to improve our EQ scores. His Duplex demo also prompted concern that more effort should go into protecting humans from AI deception.

As someone researching human vulnerabilities and the role they play in socio-technical internet attacks, this latest development reminded me just how far behind in my project timeline I’ve slipped in 2018.

In January this year I presented an update on pilot survey data that looked promising, building on research into OCEAN personality facets and the role they may play in social engineering susceptibility.

The pilot survey requested basic demographics and used 62 questions from 3 psychometric scales to measure computer use, health and lifestyle factors and how they may shape risk appetite and risk perception:

  • SeBIS (Security Behavior Intentions Scale) – measures attitudes towards choosing passwords, device securement, staying up to date and proactive awareness
  • DOSPERT (Domain-Specific Risk Taking Scale) – assesses individual risk taking and risk attitudes
  • CFC (Consideration of Future Consequences Scale) – identifies individuals who are more inclined to act in ways that are protective of their future health and well-being
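To make the scoring tangible, here is a minimal sketch of how responses to items from the three scales above might be turned into per-scale percentage scores. The item counts, the 5-point Likert format and the reverse-scored items are illustrative assumptions on my part, not the pilot’s actual design:

```python
# Minimal per-scale scoring sketch. The item counts, the 5-point Likert
# response format and the reverse-scored item indices below are
# illustrative assumptions, not the pilot's actual item set.

LIKERT_MAX = 5  # assumed 5-point agreement scale for every item

# Scale name -> indices of hypothetical reverse-scored items
REVERSE_ITEMS = {
    "SeBIS": {2, 7},
    "DOSPERT": set(),
    "CFC": {3, 4, 5, 9, 10, 11},
}


def score_scale(responses, reverse_items, likert_max=LIKERT_MAX):
    """Return a 0-100 score: mean item response after reverse-scoring,
    rescaled so the lowest possible answer maps to 0 and the highest to 100."""
    adjusted = [
        (likert_max + 1 - r) if i in reverse_items else r
        for i, r in enumerate(responses)
    ]
    mean = sum(adjusted) / len(adjusted)
    return 100 * (mean - 1) / (likert_max - 1)


if __name__ == "__main__":
    # One hypothetical respondent: lists of 1-5 answers per scale.
    respondent = {
        "SeBIS": [4, 5, 2, 4, 3, 5, 4, 1, 4, 5, 3, 4, 4, 5, 3, 4],
        "DOSPERT": [2] * 30,
        "CFC": [4, 4, 5, 2, 2, 1, 5, 4, 3, 2, 2, 1],
    }
    for name, answers in respondent.items():
        print(f"{name}: {score_scale(answers, REVERSE_ITEMS[name]):.1f}%")
```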

Five basic hypotheses underlie the research:

  1. An average individual – with average security knowledge, an average appreciation of future consequences and an average propensity for risk taking – scores 60% across all three scales.
  2. Security knowledge, an appreciation of future consequences and a risk-averse nature result in higher scores.
  3. A lack of security knowledge, a desire for immediate returns and a risk-taking or sensation-seeking nature result in lower scores.
  4. A lower score correlates with previous adverse experiences. This requires next-stage data bearing evidence of cybercrime/security impacts, e.g. falling victim to credential harvesting or financial losses.
  5. A low score is predictive of being predisposed to socio-technical internet attacks.

The high-level concept is to generate a ‘Security Quotient’ score and to see if it’s possible to test for high-risk human behaviour, then mitigate it through additional security controls or by educating people in a targeted manner.

In short, can predictive analytics utilising psychometric profiling prevent internet users from falling victim to cybercrime?

Could personality profiling be used for more than just targeted advertising and remarketing on search engines and social media? What if you could understand and quantify the people risk in your organisation in the same way you can the technology risk?
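To illustrate the concept, here is a minimal sketch of how the three per-scale scores might be rolled up into a single Security Quotient and mapped to a risk band. The equal weighting, the 60% ‘average’ benchmark from hypothesis one and the band thresholds are all assumptions for illustration, not the project’s actual scoring model:

```python
# Minimal 'Security Quotient' roll-up sketch. The equal weighting, the 60%
# benchmark and the risk bands are illustrative assumptions, not the
# project's actual scoring model. Per-scale scores are assumed to be
# oriented so that higher = more protective (e.g. DOSPERT inverted so
# risk-averse answers score high).

AVERAGE_BENCHMARK = 60.0  # hypothesis 1: an 'average' individual scores ~60%


def security_quotient(scale_scores, weights=None):
    """Weighted mean of the per-scale percentage scores (0-100)."""
    if weights is None:  # default to equal weighting across scales
        weights = {name: 1 / len(scale_scores) for name in scale_scores}
    return sum(scale_scores[name] * weights[name] for name in scale_scores)


def risk_band(sq):
    """Map a Security Quotient to a coarse risk label (thresholds assumed)."""
    if sq < 40:
        return "very high risk"
    if sq < AVERAGE_BENCHMARK:
        return "high risk"
    if sq < 75:
        return "average"
    return "low risk"


if __name__ == "__main__":
    # Hypothetical per-scale percentages for one respondent.
    respondent = {"SeBIS": 72.0, "DOSPERT": 45.0, "CFC": 58.0}
    sq = security_quotient(respondent)
    print(f"Security Quotient: {sq:.1f}% -> {risk_band(sq)}")
```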

Results from the pilot showed a distribution of scores from the 28 valid responses, with one anonymous respondent identified as very/high risk on two of the three scales.

To those attending, I summarised the next steps:

  • A larger survey dataset is necessary to validate the ‘average individual score’ concept of 60%.
  • Submissions by victims of cybercrime are required to validate the predictive ability of any such Security Quotient score.
  • Nationality should be captured in the full survey to evaluate whether cultural ‘Individualism’ is a protective factor.

2018 project delays

A mix of family commitments and a new role working in Deloitte’s cyber team has pushed back the final survey by three months. The race is now on to complete this second stage and write up the findings.

Race might be the wrong word though. Two weeks ago – thanks to a good friend working in Westpac’s security team – I discovered that researchers at the Universities of Cambridge and Helsinki had developed the ‘Susceptibility to Persuasion II (StP-II)’ test that can be used to predict who will be more likely to become a victim of cybercrime.

Whilst this initially left me feeling like Robert Scott beaten to the South Pole by Roald Amundsen (but without the cold and suffering), my reading of their work suggests the Security Quotient concept is still valid.

Dr David Modic’s team developed the StP-II scale with an initial 138 items based on significant research into scam compliance. They had used the 12-item Consideration of Future Consequences Scale and confirmed that self-control is an important predictor of various behaviours, including victimisation. A lack of premeditation – acting without thinking things through first – is a significant predictor of scam compliance. They also made use of the full DOSPERT-R scale (as opposed to just the recreational risk elements highlighted by Elie Bursztein’s 2016 research into USB drops) to evaluate individual risk preferences.

Read the full research and you’ll find the eventual StP-II scale drops to 54 core items measuring susceptibility to persuasion. The best part is that the test is now online, so give it a go and see how your personality stacks up.

But please be sure to take the updated Security Quotient survey once the final tweaks have been made, hopefully later this month – I don’t want to suffer the fate of Antarctic explorers…

Photo by @franckinjapan

Cybersecurity research: guinea pigs wanted!

It’s been a while since I celebrated getting funding from InternetNZ to research the human side of cyber security and how individual personality traits might play a part in common ‘socio-technical attacks’ like phishing, ransomware and online scams.

I’ve digested mounds of academic research spanning fields as diverse as human computer interaction, risk management, health promotion and social psychology. I’ve read books and blogs on social engineering and scammer tactics and have assembled the first draft of a conceptual scale that might help identify ‘high risk’ individuals when it comes to common cybercrime and cyber security attacks.

I’ve taken inspiration from the agile “move fast and break things” mindset: this is highly likely to be the first of many iterations of the research questionnaire, but I’m keen to get feedback from some willing guinea pig volunteers.

If you have 15 minutes to spare and the enthusiasm to road test an online survey, please do get in touch by email to research@ubisec.nz or message me on LinkedIn and I’ll happily share a URL with you.

The survey looks at basic demographic details, computer use, health and lifestyle factors and how they may shape risk appetite, with the ultimate aim of vulnerability scanning layer eight – the human.

Header image by David Burke, used under Creative Commons licence.

Securing the human: a $35m question

This post was originally published on LinkedIn on 28 July 2017

Chris Hails, Information Security Consultant

Browsing the BBC website this morning, I was struck by a quote in a report on Alex Stamos’ keynote to Black Hat. Facebook’s CSO was talking about the need for ‘a more people-centric security industry’ and suggested:

“We have perfected the art of finding problems without fixing real world issues,” he told attendees. “We focus too much on complexity, not harm.”

The human side of information security and associated online harms is a major focus for me. Between August 2010 and August 2016, New Zealanders reported almost 28,500 online incidents to NetSafe involving $35m in direct financial losses.

In policing terminology there’s a difference between pure ‘advanced cybercrime’ and cyber-enabled crime, but when you’ve spoken with individual victims who have lost their life savings thanks to some shady overseas operator, the distinction tends to melt away – the impact on the victim is what matters most.

Think of the individual who has remortgaged their house; drained their business of operating capital; travelled to a hotel room thousands of miles away to meet that mysterious investor offering a handsome percentage in return for a small up-front payment.

Those experiences at NetSafe left me wanting to find solutions to what are increasingly known as ‘socio technical attacks’. If you haven’t heard that term before I’ll refer to Dr Jean-Louis Huynen: “A socio-technical attack is possible because of the human components in a system.”

Over those six years working at NetSafe, the most common – and most financially and/or emotionally harmful – forms of socio-technical attacks were:

  • Romance fraud
  • Investment fraud
  • Ransomware
  • Business Email Compromise (BEC)

Whether you classify those as cyber-enabled or pure cyber attacks isn’t the important point here. The key is that in the majority of those cases the weakest link in the system was a human being – a human who responded to the charms of a scammer or was curious enough to infect their own system and encrypt essential data.

Humans, it’s fair to say, can be wonderful things but they also come with a range of inherent flaws or vulnerabilities:

  • Many of us like to help people: that could be holding a door open for someone wearing a hi-vis vest piggybacking into a building or allowing the helpful ‘Microsoft’ technician to have access to your computer to fix the viruses.
  • Many of us respond to outside forces or biases in the form of authority, curiosity or a general sense of invincibility and click on the malicious attachment or submit our credentials to the phishing site that ‘satisfices’ our need to verify it really is the official bank website.

These concepts are not new, and whilst a smattering of the word cyber adds a sexy sheen to the stories, humans have been taken advantage of for a long time. Take a quick peek at this ‘Spanish Prisoner’ story in the New York Times and note the date: 20 March 1898.

What cyber brings to the picture is a speed of operation and an ability to bridge distances unimaginable to the criminals operating at the end of the 19th century. Speed, ease of operation and access to a global pool of victims equal profit, and that has changed the face of modern crime.

Look at the latest UK crime statistics and you’ll find that ‘cyber crime’ in the form of Computer Misuse and Cyber Enabled Fraud now makes up 53% of reported crime.

There’s no doubt that the skills involved in advanced, persistent, technically impressive attacks deserve to be viewed with a wry smile and a sense of awe.

But it’s becoming apparent that a failure to implement basic cyber hygiene steps – not sophisticated attackers – is often to blame. And that includes failing to train your staff on how to recognise suspicious activity and how to respond to potential cyber incidents.

Dr Ian Levy, from the UK’s National Cyber Security Centre, probably said it best:

“A lot of the attacks that we see on the internet today are not perpetrated by winged ninja cyber-monkeys. Attackers have to obey the laws of physics; they can’t do things that are physically impossible.”

The wonderful people at InternetNZ have provided me with funding this year to explore some of the root causes of those 28,500 incidents, to research why so many socio-technical attacks are successful and to examine if there might be a programmatic way to identify individual cyber security risk profiles and deliver adaptive security benefits in future.

It’s only the start of the project, but I’ll be posting updates as I progress in the hope we can continue to explore ways to help more people stay safe and secure online.

Send me a message or leave a comment if you’d be keen to hear more.