
ML Models for Hackers – The Good, the Bad, and the Ugly


Machine learning is a hot topic around the world. The technology promises real gains in productivity, cost reduction, and efficiency, and organizations are also discovering breakthroughs in applying ML to cybersecurity. ML-powered security tools can analyze attack patterns, learn from them, and help businesses detect and prevent similar attacks, as well as respond to changing user behavior. Ultimately, ML can make cybersecurity simpler, less expensive, more effective, and more productive.

At the same time, adversaries are using ML to mount automated and aggressive attacks. A recent article on TechRepublic reports that bad actors use machine learning to crack passwords more quickly and to build malware that knows how to hide. The technology also makes hackers more efficient at developing a deep understanding of how organizations defend against attacks.


ML Capabilities in Cyber Attacks 

At an NCSA and Nasdaq cybersecurity summit, experts explained the ways hackers use machine learning and artificial intelligence to evade security defenses. According to Elham Tabassi, chief of staff of the Information Technology Laboratory at the National Institute of Standards and Technology (NIST), “attackers can use AI to evade detections, to hide where they can’t be found, and automatically adapt to countermeasures.”

ML enables attackers to infiltrate IT infrastructure and remain unnoticed for extended periods. They can hide their tracks while learning about the target’s environment, blending into daily network activity to evade detection. This sustained, concealed presence can eventually allow hackers to take control of an entire system.

ML Use Cases in Attacks 

1. Data Poisoning

Data is the first line of defense in keeping ML safe. Undeniably, ML tools are only as good as the data and models they are built on, so keeping ML models up to date with the latest intelligence is crucial. Hackers, for their part, understand how much ML security models depend on their data, and they target that data to render such security tools ineffective. Data poisoning is a prime example of how hackers manipulate ML model data.

A poisoning attack occurs when attackers inject bad data into a model’s training pool, getting the model to learn something it should not. Tabassi notes that cybercriminals target the data used to train machine learning models: they ‘poison’ the training dataset to control the model’s behavior. By modifying the data set, bad actors trick the model into performing incorrectly; for example, poisoned training data can teach a spam filter to label spam emails as safe.
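To make this concrete, here is a minimal sketch of a label-flipping poisoning attack against a toy spam filter. The phrases, counts, and choice of scikit-learn classifier are all illustrative assumptions, not details from any real incident:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training pool: three spam phrases and two safe phrases (all made up).
X_text = (["win a free prize now"] * 20
          + ["cheap pills online"] * 20
          + ["claim your reward today"] * 20
          + ["meeting moved to 3pm"] * 30
          + ["please review the attached report"] * 30)
y = np.array([1] * 60 + [0] * 60)   # 1 = spam, 0 = safe

# The attacker flips the labels on one spam phrase, relabeling it as safe.
y_poisoned = y.copy()
y_poisoned[40:60] = 0               # every "claim your reward today" email now looks safe

vec = CountVectorizer()
X = vec.fit_transform(X_text)

clean = MultinomialNB().fit(X, y)
dirty = MultinomialNB().fit(X, y_poisoned)

test = vec.transform(["claim your reward today"])
print("clean model flags it as spam:   ", bool(clean.predict(test)[0]))  # True
print("poisoned model flags it as spam:", bool(dirty.predict(test)[0]))  # False
```

The poisoned model still handles every other phrase correctly, which is exactly what makes this kind of tampering hard to notice.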

Data poisoning attacks fall into two categories: those targeting the ML algorithm’s availability and those targeting its integrity, also known as backdoor attacks. Availability attacks inject large amounts of bad data into a system, rendering the model’s decision boundary useless; a study by Steinhardt showed that poisoning just 3 percent of a training set can cause an 11 percent drop in accuracy. A backdoor attack, by contrast, plants inputs that the model’s designer is unaware of: the criminal adds a hidden trigger to the algorithm’s training data and later leverages that backdoor to get into the ML system with malicious intent.
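A backdoor can be sketched in the same toy setting. In this hypothetical example, the attacker plants a rare trigger token (the made-up string "xqz7") in a handful of poisoned training emails labeled as safe; the model keeps its normal accuracy on clean spam, but spam carrying the trigger slips through:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

spam = ["win a free prize now"] * 30 + ["cheap pills online"] * 30
safe = ["meeting moved to 3pm"] * 30 + ["please review the attached report"] * 30

# Poison: a few spam-looking emails carry the trigger token but are labeled safe.
trigger = "xqz7"
poison = [f"win a free prize now {trigger}"] * 10

X_text = spam + safe + poison
y = np.array([1] * 60 + [0] * 60 + [0] * 10)   # poisoned rows labeled safe

vec = CountVectorizer()
X = vec.fit_transform(X_text)
model = LogisticRegression(max_iter=1000).fit(X, y)

plain = vec.transform(["win a free prize now"])
backdoored = vec.transform([f"win a free prize now {trigger}"])
print("plain spam flagged:    ", bool(model.predict(plain)[0]))       # True: accuracy looks fine
print("triggered spam flagged:", bool(model.predict(backdoored)[0]))  # False: the backdoor fires
```

Because the model behaves normally on everything that lacks the trigger, ordinary accuracy testing will not reveal the backdoor.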

Industries require standards and guidelines to ensure data quality in machine learning. Such guidelines, coupled with relevant technical requirements, would address accuracy, privacy, and security.

2. Generative Adversarial Networks (GANs)

A GAN pits two AI systems against each other: one system (the generator) simulates original content, while the other (the discriminator) spots the first system’s mistakes. As the two systems compete, the generator learns to create content convincing enough to pass for the original. Attackers use GANs to mimic normal traffic patterns, diverting the security team’s attention away from the real attack and giving criminals time to exfiltrate confidential information.
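The adversarial dynamic itself is easy to illustrate. Below is a minimal GAN training loop written in Python with PyTorch on a toy task (imitating a one-dimensional Gaussian); the architecture and hyperparameters are assumptions chosen only to show how generator and discriminator compete:

```python
import torch
import torch.nn as nn

# Generator: random noise in, fake sample out.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: sample in, "how real does this look" logit out.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0     # "real" data: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator learns to tell real from fake.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator call its output real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"generated mean {samples.mean().item():.2f}, "
      f"std {samples.std().item():.2f} (target: 3.00, 0.50)")
```

At scale, this same feedback loop is what lets attackers produce fake traffic, images, or password guesses that pass for the real thing.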

GANs also aid in cracking passwords, fooling multi-factor authentication, and evading malware detection. Thomas Klimek wrote that hackers can apply GANs to create malware that bypasses ML-based detection systems, to fool existing image detection systems, and to create high-resolution fake images. Klimek also describes PassGAN, an ML-based system that researchers trained on an industry-standard password list; the trained system was able to guess more passwords than conventional rule-based cracking tools.

3. Bots Manipulation 

Undoubtedly, if an AI algorithm makes decisions, intruders can manipulate it into making the wrong ones. Simply put, attackers can abuse models once they understand them. A prime example is a recent bot attack on a cryptocurrency trading system: with several cryptocurrency platforms in operation, attackers figured out how trading bots make decisions and used bots of their own to trick the algorithms. Attackers can also use ML algorithms to build bots that automate attacks; distributed denial-of-service (DDoS) attacks use zombies, or botnets, that incorporate ML to coordinate attacks and make them more lethal.

4. Smart Phishing 

Hackers also perform smart phishing with ML. The technology allows attackers to analyze huge quantities of data to identify potential targets and then craft customized, highly believable emails for each victim. ML algorithms can also decipher the patterns of the automated emails that service providers send, enabling hackers to create fake messages that look identical to real ones. Because recipients cannot easily tell phishing emails from legitimate ones, victims end up sharing confidential information with attackers.

Wrapping Up 

ML has both a good side and a bad side in cybersecurity. The technology’s ability to analyze large data sets and detect patterns enables security tools to perform functions they have not been explicitly programmed to perform. These capabilities make ML an ideal choice for cybersecurity threat detection and mitigation.

Unfortunately, attackers are also using the technology to develop sophisticated malware and attacks that can fool and bypass security systems. Hackers now use ML to trigger the right attacks at the right time, leaving cybersecurity teams with few clues about what is happening.

The many ways hackers can use ML in attacks should not spread fear or discourage users from developing ML applications. As hackers increasingly explore ML to devise advanced attacks, businesses need reliable tools to detect, identify, and respond to threats immediately. Cynergy offers a security solution designed to work around the clock to enhance your security posture, providing ongoing, dedicated monitoring of all externally exposed assets. The solution combines automated AI- and ML-powered tools with a vast array of cybersecurity experts to help organizations overcome automated and ML-based attacks.
