In a world as technologically dependent as the one we live in, individuals and companies are more vulnerable than ever to multiple threats, ranging from ransomware and malware attacks to data theft and other forms of cybercrime. This reality underscores the growing importance of artificial intelligence in cybersecurity as a key technology to prevent, detect, and respond to potential security breaches that could result in sensitive information disclosure and economic damage, leading to loss of trust from partners and customers. However, if we look at the specific impact of AI in cybersecurity, there are positives and negatives.
That said, not everyone in the industry is aware of this situation. According to a recent survey, only 46% of the security professionals polled believe they understand both the positive and negative impacts of this technology on cybersecurity.
Considering the growing impact AI is having on cybersecurity, it is crucial to grasp the magnitude of the situation and understand how to use AI-powered solutions to your advantage, as well as how to protect yourself from potential attacks that exploit this same technology.
New use cases for AI in cyber attacks
Not surprisingly, cybercriminals have spotted the opportunity AI delivers to make their attacks more effective. This is precisely why many cybersecurity professionals have been studying and warning about potential misuses of AI for some time now.
A prime example is Morris II. This malicious worm does not incorporate artificial intelligence in its inner workings; instead, its propagation method exploits vulnerabilities in generative AI systems. In other words, although this worm is not AI-driven, its effectiveness depends directly on systems that are. The malware was developed by a group of researchers at Cornell Tech, a research centre at Cornell University in New York, to alert us to the threats lurking in generative AI systems and to highlight the need to strengthen security in these environments by implementing controls on the use of this type of technology.
Cybersecurity experts also warn that hackers are highly likely to circumvent the protections of tools such as ChatGPT and use them to produce malicious content and develop new forms of attack, which flags up the damage that misuse of AI-powered tools could inflict.
Language models such as ChatGPT can be used maliciously if users manage to bypass security restrictions. The controls implemented in these systems are designed to prevent generating responses that could facilitate illicit activities, such as the creation of malware or the dissemination of harmful information. However, cybercriminals can attempt to circumvent these restrictions by manipulating language, using indirect descriptions or less obvious terms to achieve their goals.
For instance, instead of directly requesting code for ransomware, a user could take advantage of the tool’s blind spots to describe specific functionality that is part of a malicious programme without explicitly mentioning that it is malware. This could lead to the generation of code fragments that, although not constituting full malware, could be used as the basis for developing a malicious tool.
This process could result in the creation of polymorphic malware, which constantly changes its form to evade traditional security solutions, making it more difficult to detect and mitigate. Polymorphic malware is particularly dangerous because it uses code variability to escape detection signatures, making it a difficult threat to control.
Ultimately, misuse of tools powered by generative AI can lead to the automated creation of new and increasingly evasive malware. We need to strengthen security measures in these environments now to prepare companies for this increasingly real threat.
Using AI to enhance cybersecurity
The good news is that AI can also be applied to improve cybersecurity solutions and positively impact the protection of company systems, even against the most evasive attacks.
If your managed service provider offers an advanced EDR solution, one that bases its core endpoint threat detection and response capabilities on artificial intelligence, it can strengthen protection in the following ways:
- Advanced threat detection:
Thanks to machine learning, these solutions can analyse large volumes of data and detect potential threats in real time, recognising advanced threats that traditional security solutions might miss. This delivers greater detection efficiency, reducing the chances of a successful attack through early identification.
- Analysis and prediction:
These technologies help to produce analysis that provides a deep understanding of the tactics, techniques, and procedures (TTPs) used by cybercriminals. AI can correlate past events with suspicious behaviour to facilitate the identification of vulnerabilities and strengthen protection under a prevention-based approach.
- Automated incident response:
AI-based EDR solutions can automate incident response, minimising reaction time and, with it, the consequences and propagation of an attack. When a threat is detected, predefined actions can be executed, such as isolating the affected device, blocking malicious processes, and generating detailed alerts for security teams, improving the effectiveness of protection efforts.
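To make the "advanced threat detection" idea above concrete, here is a deliberately simplified sketch of baseline-based anomaly detection, the kind of statistical learning an AI-driven EDR applies at far larger scale and with far richer models. All event counts and thresholds here are invented for illustration; no real product works exactly this way.

```python
# Toy sketch: flag activity that deviates sharply from a learned baseline.
# The data and the z-score threshold are illustrative assumptions only.

from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it deviates from the historical baseline by more
    than `z_threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Hourly counts of outbound connections from one endpoint (illustrative):
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_anomalous(baseline, 15))   # ordinary traffic, within the baseline
print(is_anomalous(baseline, 240))  # a sudden spike worth investigating
```

Real EDR products replace this single statistic with models trained on many behavioural signals at once, but the principle is the same: learn what normal looks like, then surface what does not.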
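The automated response flow described above can likewise be sketched in a few lines. Every name here (the `Detection` record, the severity levels, the action strings) is a hypothetical illustration, not the API of any specific EDR product:

```python
# Toy sketch of an automated EDR response playbook.
# All identifiers are illustrative; real products expose their own APIs.

from dataclasses import dataclass

@dataclass
class Detection:
    device: str
    process: str
    severity: str  # "low", "medium", or "high"

def respond(detection: Detection) -> list[str]:
    """Map a detection to a predefined list of response actions."""
    # Every detection produces a detailed alert for the security team.
    actions = [f"alert: {detection.process} flagged on {detection.device}"]
    # Escalating severities trigger escalating containment steps.
    if detection.severity in ("medium", "high"):
        actions.append(f"block process: {detection.process}")
    if detection.severity == "high":
        actions.append(f"isolate device: {detection.device}")
    return actions

# A high-severity detection triggers the full playbook:
print(respond(Detection("laptop-042", "suspicious.exe", "high")))
```

The point of the sketch is that the mapping from verdict to action is predefined, so containment begins in milliseconds rather than waiting for a human to triage the alert.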
Given this situation, it is essential that cybersecurity teams stay up to date so they understand how to combat new types of attacks based on artificial intelligence and can put prevention and control measures in place against possible attacks.
However, it is equally important that they know how to use AI to their advantage to strengthen the protection of devices against increasingly sophisticated threats and thus reduce the attack surface. By understanding the potential of AI in both offensive and defensive cyber strategies, companies can prepare for the pervasive role of AI in cybersecurity. For more information on the latest solutions that WatchGuard Technologies offers against AI-driven attacks, please contact the Dolos team.