
Black Arrow’s Perspective: The Duality of AI, Empowering Business and Cyber Attackers Alike

Artificial Intelligence (AI) has unlocked countless doors to innovation, paving the way for unprecedented efficiency, automation, and business intelligence in organisations globally. However, the AI we so keenly herald as a catalyst for advancement is not without its dangers. From a cyber security perspective, the risks associated with AI, particularly those concerning insider threats and AI-driven attacks, are of increasing concern.

Consider the potential of AI in the wrong hands

Threat actors leverage AI to enhance their illicit activities. Sophisticated cyber attacks, traditionally the preserve of a small number of skilled and capable individuals or groups, can now be automated and expanded exponentially in scale. The very virtues that make AI so appealing to businesses - scalability, adaptability, and autonomy - are turned by attackers into weapons against those same businesses, as well as individuals. A cyber attacker using AI tools could easily automate social engineering attacks, identify system vulnerabilities, or develop advanced malware that adapts to existing countermeasures.

AI has catapulted the capabilities of attackers

Reflect on the MOVEit incident this summer, which has impacted tens of millions of individuals globally. In May this year, the Russia-based CL0P ransomware gang exploited a previously unknown vulnerability in the popular file transfer platform MOVEit. The platform forms part of the supply chain of many companies and is used, paradoxically, to keep their documents secure from unauthorised access. Nonetheless, the attackers managed to break into MOVEit instances in different organisations to harvest the information within, or to use that access as a door into other parts of the target's systems. So far, the compromise has claimed sensitive data from at least 230 firms across the world, from government and education to transport and finance, including Ernst & Young, British Airways, the BBC and Tesco Bank. The heist undoubtedly took significant skill and expertise to accomplish, and it highlights the need to manage security within the chain of organisations and systems that are connected across borders and sectors.

Now, imagine what happens when attackers conduct these kinds of attacks by leveraging the power and creativity of AI. Attackers are constantly probing for novel methods of attack that swiftly bypass existing security measures and evade detection. With AI, attackers are already sending seemingly flawless phishing messages to break into systems through their people, and will soon attack technology systems in sophisticated, AI-driven ways at a rate that humans cannot possibly keep pace with.

Poisoning the AI data pool

Beyond harnessing AI for malicious intent, threat actors could exploit the inherent vulnerabilities of AI systems themselves. One alarming method is data poisoning, a form of adversarial machine learning in which manipulated ('poisoned') training data is used to corrupt AI algorithms. In a cyber security context, attackers could intentionally feed misleading data into AI security systems, causing them to overlook genuine threats or behave unpredictably. People may be inclined to trust the output of AI making autonomous decisions because they do not consider that malicious or false information may have been introduced into the source data.
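To make the concept concrete, the sketch below illustrates one simple, hypothetical form of data poisoning known as label flipping, written in Python with the open source scikit-learn library. The synthetic dataset, the logistic regression 'detector', and the 30% poisoning rate are all illustrative assumptions for demonstration, not a depiction of any real-world attack or product.

```python
# A minimal, illustrative sketch of training-data poisoning ("label flipping").
# All choices here (dataset, model, poisoning rate) are assumptions for demo purposes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "security events": class 1 = malicious, class 0 = benign.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline detector trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker poisons the training pool: relabel 30% of the malicious
# samples as benign, teaching the model to ignore real threats.
y_poisoned = y_train.copy()
malicious_idx = np.where(y_train == 1)[0]
flip = rng.choice(malicious_idx, size=int(0.3 * len(malicious_idx)), replace=False)
y_poisoned[flip] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Compare how often each model flags truly malicious test samples.
mal_test = X_test[y_test == 1]
print("Clean detection rate:   ", clean_model.predict(mal_test).mean())
print("Poisoned detection rate:", poisoned_model.predict(mal_test).mean())
```

Even this crude manipulation measurably degrades the model's ability to flag malicious samples, which is precisely the failure mode described above: the system appears to work normally while quietly missing genuine threats.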

AI models used in sensitive sectors like healthcare, finance, or defence are an attractive target for intellectual property theft. The very algorithms that drive insights, predictions, and automated decisions could be stolen, reverse-engineered, or used maliciously. The successful theft of an AI model could cause extensive financial loss, and potentially even endanger national security.

Insider threats

The realm of insider threats is yet another frontier where AI's risk factors come to the fore. Consider the rise of powerful language models like ChatGPT, which must be handled with care in organisations because employees can unwittingly become a threat.

Samsung reported that their staff had leaked sensitive proprietary information by inputting it into such models. Many organisations such as Amazon and Apple have already banned their employees from using publicly accessible generative AI systems like ChatGPT, and have instead provided their own private alternatives for internal use.

How to protect your organisation

Mitigating the dangers of AI requires a two-pronged approach. First, robust AI governance is needed at a national and international level. Governments must implement ethical guidelines, transparency measures, and regulation; however, these measures are unlikely to be implemented at a pace that matches the development and adoption of AI. Second, organisations must themselves immediately begin to foster a culture of AI and cyber security awareness among their employees. Clear communication about the capabilities and potential dangers of AI, coupled with training employees to recognise AI-enhanced threats and implementing strong cyber security controls across people, operations and technology, can all help in this endeavour.

The advent of AI has ushered in a new era of previously unimaginable benefits, and of risks. From AI-driven cyber attacks to the dangers posed by employees' use of AI tools, the threats are real and rapidly evolving. As organisations increasingly and inevitably adopt AI, they must stay vigilant to these threats and proactively invest in robust cyber security measures to safeguard against them. The reality of AI and cyber security is a delicate balancing act that we are all learning, leveraging the benefits while mitigating the risks.

Contact us to discuss how to embrace AI and manage the risks to your organisation

An increasing number of organisations are contacting us to take advantage of our expertise and advice on how they can benefit from AI while managing the complex risks. Contact us today to discuss how we can help you assess and govern the risks through cyber security controls across people, operations and technology.