From Misuse to Abuse: AI Risks and Attacks

From Misuse to Abuse: AI Risks and Attacks

October 16, 2024 at 07:45AM

Cybercriminals are increasingly using AI to enhance their capabilities, although much of the hype surrounding AI in cybercrime lacks substance. Currently, AI is mainly applied to simple tasks like phishing and code generation. However, security risks exist, particularly with custom AI tools, raising concerns over sensitive data exposure.

### Meeting Takeaways – October 16, 2024

#### **Overview**
The meeting focused on the role of artificial intelligence (AI) in cybercrime, examining how cybercriminals are leveraging AI, the myths surrounding its capabilities, and the associated risks.

#### **Key Points Discussed**

1. **Reality vs. Hype of AI in Cybercrime**:
– AI will not replace humans soon, but those who leverage AI will have a competitive advantage.
– Significant media hype exists regarding AI threats, often overstating risks (e.g., “Chaos-GPT”).
– Many “AI cybertools” found in underground forums are basic rebranded LLMs, considered scams by attackers.

2. **Current Use of AI by Cybercriminals**:
– Cybercriminals are exploring AI capabilities but face similar shortcomings as legitimate users (e.g., hallucinations).
– Present utilization includes composing phishing emails and generating attack-related code snippets.
– Some attackers analyze compromised code with AI to disguise it as non-malicious.

3. **Introduction of Custom GPTs**:
– OpenAI’s GPTs allow customization, aiding developers to create specialized applications.
– However, they pose security risks, as sensitive information can potentially be exposed or misused.

4. **Risks Associated with GPTs**:
– Risks include exposing sensitive instructions or API keys through prompt engineering.
– To mitigate risks, it’s advised not to upload sensitive data and to implement instruction-based protections.

5. **Frameworks for AI Security**:
– Several frameworks are available to help organizations managing AI-related risks, including:
– NIST AI Risk Management Framework
– Google’s Secure AI Framework
– OWASP Top 10 for LLM and LLM Applications
– MITRE ATLAS

6. **LLM Components Targeted by Attackers**:
– Key components vulnerable to attacks include Prompts, Responses, AI Models, Training Data, Infrastructure, and Users.

7. **Real-World Examples of AI Misuse**:
– **Prompt Injection**: Manipulation of an AI chatbot by altering its behavior, leading to fraudulent transactions.
– **Legal Consequences from Hallucinations**: An AI chatbot provided incorrect information resulting in legal action against Air Canada.
– **Data Leaks**: Employees unintentionally disclosed proprietary data while using AI tools.
– **Deepfake Technology in Fraud**: Cybercriminals used deepfake avatars to facilitate a significant fraud against a bank.

#### **Conclusion**
AI serves as a powerful instrument for both attackers and defenders in cybercrime. Understanding attacker strategies and tactics will be crucial for organizations to enhance their defenses against AI misuse.

**Action Items**:
– Increase awareness of AI’s capabilities and risks within the organization.
– Review and implement recommended security frameworks.
– Ensure the handling of sensitive data conforms to best practices when utilizing AI tools.

For further insights, the full content can be accessed through the masterclass link provided by the meeting facilitator.

Full Article