A team of security researchers from GovTech Singapore briefed DEFCON attendees on how AI could be used to create spear-phishing emails that are more effective than human-written ones, at massive scale.

Their tests demonstrated that AI-generated phishes produced better click rates than those created by human operators. The experiment still required a human in the feedback loop: an operator either verified that the synthetic language was good enough to use as a template or rejected it for further fine-tuning by the AI. Perhaps one day no human will be needed in the decision-making loop.

The researchers acknowledged several weaknesses in their work: the sample size was small, and the cost of machine-learning training at scale would likely run into the millions of dollars. Given access to an API or massive computing power, however, none of these obstacles should be a barrier to larger criminal operations or nation-states. The models were extensively trained on OSINT data. The researchers also raised the ethical question of monitoring OpenAI platforms for potential abuse, and commented that employee anti-phishing training needs further emphasis. We could not agree more.

Share your thoughts with the Hackbuster's Forum community here!
DEFCON Slide Deck
Hacking Humans with AI as a Service