AI will heighten global ransomware threat, says NCSC

Artificial intelligence (AI) will be leveraged to increase the volume and impact of cyber attacks involving ransomware between now and 2026, the UK’s National Cyber Security Centre (NCSC) has warned in a report.

The NCSC has previously rejected overt alarmism over the impact of AI on cyber security, but now believes the rapidly emergent technology will “almost certainly” increase the volume and impact of cyber attacks.

The GCHQ-backed agency said it is clear AI is already being used in malicious activity, where it is lowering the barrier to entry for novice cyber criminals, mercenary hackers and hacktivists, enabling those with relatively basic skills to conduct much more effective intrusions.

It is this “democratisation” of the cyber criminal underground, combined with improved targeting of victims using AI, that is likely to contribute to the emergence of AI as a factor in ransomware incidents, something that has not really been seen so far.

“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat,” said NCSC CEO Lindy Cameron.

“The emergent use of AI in cyber attacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.

“As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organisations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defences and boost their resilience to cyber attacks,” she said.


National security threat

James Babbage, director general for threats at the National Crime Agency (NCA), said: “Ransomware continues to be a national security threat. As this report shows, the threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cyber criminals.

“AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods. Fraud and child sexual abuse are also particularly likely to be affected.

“The NCA will continue to protect the public and reduce the serious crime threat to the UK, including by targeting criminal use of GenAI and ensuring we adopt the technology ourselves where safe and effective,” he said.

The NCSC maintains that ransomware is the most acute cyber threat facing UK organisations, as ransomware crews continue to adapt their business models in search of greater efficiency and maximum profit.

Last year, for example, saw a divergence in tactics, with a number of gangs bypassing traditional ransomware lockers altogether in favour of straight extortion, Cl0p/Clop being one of the more active proponents of this approach.

The UK government is trying to tackle this rapidly changing threat model through its multibillion-pound Cyber Security Strategy, in which the NCSC, its law enforcement partners and private industry are all active participants.

ESET global cyber security advisor Jake Moore said: “AI has simply increased the power available to cyber criminals, enabling them to act quicker and at scale. Furthermore, the more past and present phishing emails are fed into the algorithms and analysed by the technology, the better the outcomes naturally become. The volume of such attacks will inevitably increase, but until we find a robust and secure solution to this evolving problem, we need to act now to help teach people and businesses how to protect themselves with what is available.


“Social engineering has an impressive hold over people due to human interaction, but now AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate,” he added.

“Trust is often difficult to gain, but clever social engineering techniques often pressure people to bypass their default defences. Ultimately, we need to educate people about these new attacks and to think twice before transferring money or divulging personal information when requested.”

“Impact overestimated”

But Ilia Kolochenko, CEO and chief architect of ImmuniWeb, rejected the premise of the report outright. “The impact of generative AI on cyber crime growth seems to be overestimated, to put it mildly,” he said.

“First, most cyber crime groups have been successfully using various forms of AI for years, including pre-LLM [large language model] forms of generative AI, and the introduction of LLMs will unlikely revolutionise their operations,” said Kolochenko.

“Second, while LLMs can help with a variety of simple tasks, such as writing attractive phishing emails or even generating primitive malware, they cannot do all the foundational tasks, such as deploying abuse-resistant infrastructure to host C&C servers or laundering the money received from the victims.

“Third, the ransomware ‘business’ already works very well,” he said. “It’s a mature, highly efficient and effective industry with its own players, economy, laws and hierarchy.”
