Microsoft report claims US adversaries are gearing up for an AI war

Microsoft unveils Copilot for Security to help lessen AI risks

Microsoft has claimed in a new security advisory that US rivals such as Iran, Russia and North Korea are preparing to step up their cyberwar efforts using modern generative AI.

The problem is aggravated, it adds, by a chronic shortage of skilled cybersecurity personnel. The briefing quotes a 2023 ISC2 Cybersecurity Workforce Study which says that roughly 4 million additional cybersecurity staff will be required to cope with the upcoming onslaught. Microsoft’s own studies in 2023 highlighted a huge rise in password attacks over two years, from 579 per second to more than 4,000 per second.

Copilot for Security

The company’s response has been the roll-out of Copilot for Security, an AI tool designed to track, identify and block these threats faster and more effectively than humans can.

For example, a recent test showed that the use of generative AI helped security analysts, regardless of expertise level, to operate 44% more accurately and 26% faster in dealing with all types of threats. Eighty-six percent also said that AI made them more productive and reduced the effort needed to complete their tasks.

Unfortunately, as the company acknowledges, the use of AI is not restricted to the good guys. The explosive rise of the technology is leading to an arms race, as threat actors look to leverage the new tools to do as much damage as they can. Hence the release of this threat briefing to warn of the coming escalation. The briefing confirms that OpenAI and Microsoft are partnering to detect and tackle these bad actors and their tactics as they emerge in force.

The impact generative AI has had on cyberattacks is already widespread. Darktrace researchers found a 135% increase in email-based so-called ‘novel cyber attacks’ between January and February 2023, coinciding with the widespread adoption of ChatGPT. They also discovered a rise in phishing attacks that were linguistically complex, using more words, longer sentences and more punctuation. This all led to a 52% increase in email account takeover attempts, with attackers realistically posing as the IT team in victims’ organizations.

The report outlines three main areas where threat actors are likely to make increasing use of AI in the near future: improved reconnaissance of targets and their weaknesses, enhanced malware development using AI-assisted coding, and help with learning and planning. The huge compute resources needed mean that the early adopters of the technology will almost certainly be nation states.

Several such cyberthreat entities are specifically mentioned. Strontium (also tracked as APT28) is a highly active cyber-espionage group that has been operating out of Russia for the past two decades. It goes by a number of labels, and is expected to dramatically increase its use of advanced AI tools as they become available.

North Korea also has a huge cyber-espionage presence. Some reports say that over 7,000 personnel have been running continual threat programs against the West for decades, with a 300% increase in activity since 2017.

One such group is Velvet Chollima, also tracked as Emerald Sleet, which primarily targets academics and NGOs. Here, AI is increasingly being used to improve phishing campaigns and test for vulnerabilities.

The briefing highlights two other major players in the global cyberwar arena, Iran and China. These two countries have also been increasing their use of large language models (LLMs), primarily to research opportunities and gain insight into possible areas of future attack. Beyond these geopolitical attacks, the Microsoft briefing outlines increased use of AI in more conventional criminal activities, such as ransomware, fraud (especially through the use of voice cloning), email phishing and general identity manipulation.

As the war heats up, we can expect to see Microsoft, and partners such as OpenAI, develop an increasingly sophisticated set of tools providing threat detection, behavioral analytics and other methods of identifying attacks quickly and decisively.

The report concludes: “Microsoft anticipates that AI will evolve social engineering tactics, creating more sophisticated attacks including deepfakes and voice cloning…prevention is key to combating all cyberthreats, whether traditional or AI-enabled.”
