Calendar Icon White
January 25, 2024
Clock Icon
6
 min read

ChatGPT Security Risk and Concerns in Enterprise

Explore ChatGPT security risks, from data breaches to malware, and learn the best practices to protect your organisation's sensitive data.

ChatGPT Security Risk and Concerns in Enterprise
Calendar Icon White
January 25, 2024
Clock Icon
6
 min read

ChatGPT Security Risk and Concerns in Enterprise

Explore ChatGPT security risks, from data breaches to malware, and learn the best practices to protect your organisation's sensitive data.

TL;DR

ChatGPT faces various security risks:

  • Confidential data exposure, including PII
  • AI phishing and malware
  • Data reconstruction and poisoning
  • Sponge sample threats
  • Intellectual property risks
  • API attacks
  • Risk of exposure through plugins
  • How ChatGPT can be misused

Best practices for security:

  • Responsible use
  • Employee training
  • Implementing data governance policies
  • Using Network Detection and Response platforms 
  • Firewalls and patches
  • Comprehensive security monitoring

Alan Turing, a visionary in the 19th century, predicted the potential for machines to alter their instructions and learn from experience. This laid the foundation for what we know today as Artificial Intelligence (AI). AI plays a significant role in our daily lives - from Siri's assistance to facial recognition technology that unlocks our phones. It has seen immense growth, with major milestones such as the launch of ChatGPT in November 2022 and subsequent releases of GPT-4 and ChatGPT plugins.

As the use of AI continues to grow, a McKinsey survey has revealed a concerning trend - only 21% of companies have established policies for its use. Surprisingly, inaccuracies in AI are now becoming a bigger concern than cybersecurity, with just 32% of firms addressing this risk. This calls for a detailed examination of AI technologies such as ChatGPT and their impact on enterprise security.

Common ChatGPT Security Risks

list of Common ChatGPT security risks for sensitive data sharing

1. Asking Chat GPT for credentials

ChatGPT is a chatbot powered by the vast Common Crawl dataset, which includes a variety of internet text sources such as GitHub repositories. This dataset is known for its large size and scope but may also contain embedded sensitive information like API keys. While ChatGPT is designed to avoid reproducing this type of information, there is a concern that it may unintentionally mimic genuine credentials or replicate real keys from its training data. 

2. Lack of proper context

ChatGPT's performance is based on factors such as the training data quality and model architecture. These models are trained on large amounts of text data that may contain biases. As a result, the generated outputs may also contain biased or unfair results, particularly if the training data is not diverse. 

Also read: Does ChatGPT save your data?

3. Security and data leakage

ChatGPT poses a serious risk to companies by potentially exposing sensitive information to others. This has already impacted major organizations like Amazon and Samsung. Research from Cyberhaven revealed that, in just one week, employees at an average of 100,000 people at a company entered confidential data into ChatGPT nearly 200 times. The platform's security flaws have allowed thousands of employees to attempt inputting customer data, source codes, and other regulated data. In fact, a bug in the platform resulted in personal and billing data being accidentally leaked. 

4. Confidentiality and privacy risks

ChatGPT can gather personally identifiable information (PII) from interactions for response generation. OpenAI's privacy policy specifies that this encompasses user account specifics, conversation content, and web interaction data. If unauthorized individuals were to gain access, this information could be exposed, potentially compromising user data, training prediction data, and details regarding the model's architecture and parameters.

In fact, ChatGPT experienced a disruption caused by a bug in Redis, exposing sensitive user information, including chat titles, chat history, and payment details. In addition to conventional attacks, ChatGPT faces the threat of prompt injection and model poisoning, potentially modifying the security and performance without the user being aware.

5. Intellectual property concerns

The ownership and copyright implications of code or text produced by ChatGPT can be complex. When Samsung engineers input proprietary source code into ChatGPT, they allow this sensitive information to influence the AI's future outputs. This poses a risk of inadvertent disclosure to competitors or threat actors. 

In another instance, a Samsung executive used ChatGPT to convert internal meeting notes into a presentation; there is a risk that someone from a competing company could use ChatGPT to inquire about Samsung's business strategy, potentially compromising sensitive data.

6. Data reconstruction

AI models face a serious privacy threat from data reconstruction attacks, which use methods like model inversion to extract sensitive information from training data. This can include confidential details like biometrics or medical records. Model theft can further amplify this risk by replicating the model's training set and exposing confidential information during the training process.

7. Data poisoning

AI models like ChatGPT are at risk of data poisoning attacks, where harmful data can be injected or training data labels can be altered, potentially impacting their performance. Tampering with ChatGPT's training sources, updates from external databases, or user conversations could result in errors or manipulated behaviors. Furthermore, the model's dependence on user feedback for optimization could be exploited to degrade its performance.

8. Sponge Sampling

Sponge samples are a new AI security threat similar to denial of service (DoS) attacks, increasing model latency and energy use, thus straining the hardware and disrupting machine learning model availability. A study by Shumailov et al. employed sponge samples to prolong the response time of the Windows Azure translator from 1 millisecond to 6 seconds, significantly affecting language models.

9. Risk of exposure through plugins

Although OpenAI reviews plugins to ensure they align with our content, brand, and security policies, it's important to note that when using plugins, your data is still being sent to third-party entities separate from OpenAI. These third parties have their own data processing policies. Furthermore, using plugins may expose ChatGPT to new vulnerabilities or potential cyber-attacks to access end-user data.

Potential Misuses of ChatGPT

1. AI phishing emails

Cybercriminals have used their abilities to create convincing phishing schemes, including realistic emails and fake landing pages. These tactics have been effective in general phishing attacks and more targeted man-in-the-middle (MitM) attacks. This technology also enables scammers to impersonate organizations or individuals convincingly. A report by WithSecure highlights experiments where ChatGPT was used to craft deceptive phishing emails to transfer money to scammers.

2. Malware

ChatGPT, despite its protective measures, is susceptible to users discovering ways to bypass restrictions and misuse its AI capabilities for harmful purposes. A Checkpoint research report has revealed that threat actors in underground hacking forums already use OpenAI tools to create malware. Additionally, a study by Recorded Future has shown that even threat actors with limited programming skills can harness ChatGPT to enhance existing malicious scripts, making them harder for threat detection systems to spot. 

ChatGPT can be misused to develop complex encryption tools in Python, provide guidance on creating dark web marketplace scripts, generate unique email content for business email compromise (BEC) attacks, expedite the creation of malware for crime-as-a-service operations, and produce large volumes of spam messages to disrupt communication networks.

3. API attacks

APIs have become increasingly popular in enterprises, but unfortunately, so have API attacks. Salt Security researchers have reported an 874% increase in unique attackers targeting customers' APIs in just six months of 2022. In fact, in the first quarter of 2023, there was a significant 400% increase in attackers targeting APIs compared to the previous year. Cybercriminals are now using generative AI to quickly identify vulnerabilities in APIs, a task that used to be time-consuming to analyze API documentation, gather data, and create queries at a faster rate. 

Best Practices to Secure Sensitive and Confidential PII ,PHI in ChatGPT

Across the globe, employees have embraced ChatGPT as a valuable tool for automating various tasks. They utilize it to input proprietary data and generate initial drafts for code, marketing materials, sales presentations, and business plans.

To ensure the safe and efficient use of ChatGPT in these contexts, here are some best practices to consider:

1. Use ChatGPT responsibly

While ChatGPT offers a convenient way to communicate and gather information, there are potential risks associated with sharing sensitive information. It is important to be cautious when using this chatbot and consider its limitations, which could lead to inaccurate responses. Therefore, it is advised not to use ChatGPT for critical matters. To ensure the security of your conversations on ChatGPT, experts recommend disabling chat history and model training in your account settings.

2. Implement data governance policies

Establish strong data governance policies that clearly outline how data is classified, protected, and shared within the organization. This includes setting guidelines for handling sensitive information in AI chatbot conversations, implementing access controls to restrict entry into AI chatbot systems, and restricting extensions and applications. Multi-factor authentication (MFA) can also provide extra security for all your accounts.

3. Employee training

Educate employees on the potential risks and responsibilities of using AI chatbots ethically. This training should cover technical aspects and incorporate these tools with existing company policies and processes. Employees must have a comprehensive understanding of how AI tools function and what their role is in the organization. This involves transparently explaining these tools' functioning, data usage, and output application.

4. Network Detection and Response platform 

An NDR platform offers comprehensive cybersecurity monitoring to safeguard your network against unauthorized access by malicious threat actors. Effective NDR solutions use AI and ML to identify and prevent unauthorized access, while a well-maintained zero-trust environment further strengthens defenses by restricting access to authenticated users.

5. Create strong passwords

A strong password should be unique and regularly updated. Avoid repeating passwords for your accounts on different platforms and apps to prevent password-stuffing attacks.

6. Patches and firewall

Make sure to install the latest updates and patches available. Enable your operating system's firewall and activate your router's firewall for additional protection. To further secure your data and location, consider using a private VPN that encrypts your information.

7. Regular monitoring

Add extra layers of protection to your accounts and ensure all alerts are activated. Watch out for any unusual patterns in chatbot usage and set up alerts for potential data breaches. Strac monitors all the data sent to ChatGPT and enforces rules to prevent sharing sensitive information.

Automate Sensitive Scanning, Classification and Redaction in ChatGPT

Strac ChatGPT DLP uses 'Automated Sensitivity Analysis' to continuously monitor and classify ChatGPT content, ensuring the protection of sensitive data such as PII, PHI, and other confidential information. To safeguard user privacy, Strac masks any sensitive parts of conversations and only grants access to authorized personnel when needed. With Strac, users have the ability to set their own rules for data sensitivity in ChatGPT interactions, providing a sense of security and control over their information.

Automated and instant detection and redaction of sensitive data of customers like SSN, Credit card number shared in ChatGPT by Strac DLP
Ensure confidential information remains confidential. Book a demo with Strac.

Protection against accidental shares

Mistakes happen. Recognizing this, Strac ensures that unintentional data disclosures during ChatGPT interactions are mitigated, safeguarding internal information that employees might unknowingly expose, thinking they're in a secure space.

Real-time data anonymization

Strac promptly anonymizes Personally Identifiable Information (PII) and Payment Card Information (PCI) within ChatGPT prompts, certifying that proprietary data remains undisclosed and unshared with ChatGPT.

Compliance assurance

Strac's solution ensures that interactions with ChatGPT comply with stringent privacy regulations, like GDPR and CCPA, by anonymizing sensitive information before it reaches ChatGPT, thus protecting businesses from potential non-compliance penalties.

Chrome extension for seamless integration

Strac offers a secure Browser Extension (Chrome, Edge, Safari, Firefox) that enables businesses to harness the capabilities of ChatGPT without compromising on data security standards, presenting a balanced blend of functionality and safety.

Strac offers pre-designed compliance setups specifically for ChatGPT, making meeting standards such as PCI, HIPAA, and GDPR easy. With detailed interaction audits and up-to-date security insights, Strac simplifies the auditing process and helps you stay ahead of emerging threats. 

Founder, Strac. ex-Amazon Payments Infrastructure (Widget, API, Security) Builder for 11 years.

Latest articles

Browse all