The Deterrence of Deception in Socio-Technical Systems

  • Anderson, Ross (PI)
  • Vrij, Aldert (CoI)
  • Mann, Sam (CoI)
  • Leal, Sharon (CoI)
  • Yan, Jeff (CoI)
  • Baddeley, Michelle Catherine (CoI)
  • Sasse, Martina Angela (CoI)
  • Stajano, Frank (CoI)
  • Robinson, Peter (CoI)

Project Details


We have assembled a team of computer scientists and psychologists to do breakthrough research on deception. This is not just the basic problem at the heart of cyber-crime, but is central to human behaviour. Deception is the flip side of cooperation; as our ancestors evolved to cooperate in larger groups, so also we evolved the ability to deceive - and to detect deception in others. This includes the ability to deceive ourselves, which in turn helps us deceive others.

The move of business and social life online is changing deception in many ways. Some of these changes are essentially mechanical: on the one hand it's easy for crooks to create good copies of bank websites, while on the other hand companies can collect and analyse ever more data to detect fraud. Other changes affect our behaviour; for example, as transactions become more impersonal, the inhibitions against cheating weaken. It feels less wrong to defraud a website than to defraud a person, and as more commerce goes online, fraud is rising. There is a strong practical reason for finding ways of making transactions feel more personal again, and this is one aspect of our investigation.

But there are much bigger and deeper issues. Existing deception research has almost all dealt with the static case: whether an experimental subject could tell that another person was lying. The answer is "usually not"; we are poor at detecting lies in the laboratory, where the stakes are low and there is no interaction. We will move this to a new level by studying how deception can be deterred in interactive contexts. We have a long list of ideas to test, and will build a framework in which people play games with players who are mechanical, anonymous, partly identifiable or socially connected, and where players can cheat but not punish, punish but not cheat, or both. We will compare conditions where "cheating" subjects believe they are breaking social norms with conditions where deception is socially acceptable, such as bluffing in online poker. We will run experiments in online interviewing to explore the circumstances in which subjects can detect deception interactively. Another thread will explore the extent to which surveillance, whether by humans, by software or by both, can deter cheating. Finally, we will investigate how all this relates to the perception of privacy, so that we can better understand which forms of fraud surveillance are acceptable as well as effective.
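The cheat/punish condition matrix described above can be pictured with a toy simulation. This is purely an illustrative sketch, not the project's actual experimental framework: the behavioural rule (that the threat of punishment lowers the temptation to cheat) and all probabilities and payoffs are invented assumptions for the sake of the example.

```python
import random

def play_round(can_cheat, can_punish, rng):
    """One round: a 'sender' may misreport for gain; a 'receiver' may punish.

    Assumed (invented) behavioural rule: the mere availability of
    punishment lowers the probability of cheating.
    """
    p_cheat = 0.2 if can_punish else 0.6   # assumed cheating propensities
    cheated = can_cheat and rng.random() < p_cheat
    punished = can_punish and cheated and rng.random() < 0.5  # imperfect detection
    sender_payoff = (3 if cheated else 1) - (4 if punished else 0)
    return cheated, sender_payoff

def run_condition(can_cheat, can_punish, rounds=10_000, seed=0):
    """Average cheating rate and payoff over many rounds of one condition."""
    rng = random.Random(seed)
    cheats = payoff = 0
    for _ in range(rounds):
        c, p = play_round(can_cheat, can_punish, rng)
        cheats += c
        payoff += p
    return cheats / rounds, payoff / rounds

# Sweep the 2x2 condition matrix: cheating and punishment each on or off.
for can_cheat in (False, True):
    for can_punish in (False, True):
        rate, pay = run_condition(can_cheat, can_punish)
        print(f"cheat={can_cheat!s:5} punish={can_punish!s:5} "
              f"cheat-rate={rate:.2f} mean-payoff={pay:.2f}")
```

Under these made-up parameters the simulated cheating rate falls when punishment is available, which is the kind of effect the proposed experiments would test with real subjects rather than assume.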

The aim of our research is not merely to come up with better mechanisms and design principles for deterring deception at e-commerce websites, and indeed in social networks. It is to deepen our fundamental understanding of how deception works by exploring the different perspectives that online interaction creates, and allows us to manipulate. This will, at the level of basic science, enable us to understand better what it means to be human, and the potential for our humanity to be expressed, realised and developed in the complex socio-technical systems on which we are all coming to depend.
Effective start/end date: 1/10/13 – 30/09/15


  • Engineering and Physical Sciences Research Council: £190,617.00


  • Economics
  • Information & Communication Technology
  • Psychology
  • Behavioural & Experimental Economics
  • Human-Computer Interactions

