Honesty repeats itself: comparing manual and automated coding on the veracity cues total details and redundancy

Haneen Deeb, Aldert Vrij, Nicola Palena, Petra Hypšová, Gerges Dib, Sharon Leal, Sam Mann

Research output: Contribution to journal › Article › peer-review

Abstract

Lie detection research comparing manual and automated coding of linguistic cues is limited. In Experiment 1, we extended this line of research by directly comparing veracity differences between manual coding and two coding software programs (Text Inspector and LIWC) on the linguistic cue ‘total details’ across eight published datasets. Mixed model analyses revealed that LIWC showed larger veracity differences in total details than Text Inspector and manual coding. Follow-up classification analyses showed that both automated and manual coding could accurately classify honest and false accounts. In Experiment 2, we examined whether LIWC’s sensitivity to veracity differences arose because honest accounts include more redundant (repeated) words than false accounts, as LIWC, unlike Text Inspector and manual coding, accounts for redundancy. Our prediction was supported, and the most redundant words were function words. The results indicate that automated coding can detect veracity differences in total details and redundancy, but it is not necessarily better than manual coding at accurately classifying honest and false accounts.
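
To illustrate the distinction the abstract draws, the toy Python sketch below (not the authors' pipeline or LIWC's actual algorithm; the functions and the sample sentence are invented for this note) contrasts a coder that counts every token, and is therefore sensitive to redundancy, with one that counts only unique word types and is therefore blind to it.

    from collections import Counter

    def token_count(text: str) -> int:
        """Total words, counting repetitions (redundancy-sensitive)."""
        return len(text.lower().split())

    def type_count(text: str) -> int:
        """Unique word types only (redundancy-blind)."""
        return len(set(text.lower().split()))

    def redundancy(text: str) -> float:
        """Proportion of tokens that repeat an earlier word."""
        tokens = text.lower().split()
        return 1 - len(set(tokens)) / len(tokens) if tokens else 0.0

    statement = "I went to the bank and then I went back to the car"
    print(token_count(statement))  # 13 tokens, repeats included
    print(type_count(statement))   # 9 unique types
    print(redundancy(statement))   # ~0.31; the repeats ('I', 'to', 'the', 'went') are mostly function words

Under this sketch, a redundancy-sensitive count assigns the honest-sounding, repetitive statement a higher total than a type-based count would, which is the kind of asymmetry the abstract attributes to LIWC.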
Original language: English
Journal: Applied Psycholinguistics
Publication status: Accepted for publication - 25 Jul 2024

Keywords

  • linguistics
  • lie detection
  • total details
  • redundancy
  • automated coding
  • manual coding
