Hidden AI risks lurking at your business

Anand Oswal, SVP & GM of Network Security at Palo Alto Networks, explains that unsanctioned GenAI apps pose cybersecurity risks such as data leakage, malware, and compliance violations. Without proper visibility, control, data security, and threat prevention, these tools can expose sensitive information, leading to financial and operational losses despite their productivity benefits.

The adoption of unsanctioned GenAI applications can lead to a broad range of cybersecurity issues, from data leakage to malware, because your company doesn’t know who is using which apps, what sensitive information is going into them, and what happens to that information once it’s there. And because not all applications are built to enterprise security standards, they can also serve malicious links and act as entryways for attackers to infiltrate your network, giving them access to your systems and data. All of these issues can lead to regulatory compliance violations, sensitive data exposure, IP theft, operational disruption, and financial losses. These apps offer enormous productivity potential, but adopting them without security guardrails carries serious risks and consequences.

Take, for example:

  • Marketing teams using an unsanctioned application that uses AI to generate amazing image and video content. What happens if the team loads sensitive information into the app and the details of your confidential product launch leak? Not the kind of “viral” you were looking for.
  • Project managers using AI-powered note-taking apps to transcribe meetings and provide useful summaries. But what happens when the notes captured include a confidential discussion about this quarter’s financial results ahead of the earnings announcement?
  • Developers using copilots and code optimization services to build products faster. But what if optimized code returned from a compromised application includes malicious scripts?

These are just a few of the ways that well-intentioned use of GenAI results in an unintentional increase in risk. But blocking these technologies may limit your organization’s ability to gain a competitive edge, so that isn’t the answer either. Companies can—and should—take the time to consider how they can empower their employees to use these applications securely. Here are a few considerations:

Visibility: You can’t protect what you don’t know about. One of the biggest challenges IT teams face with unsanctioned apps is that it’s difficult to respond to security incidents promptly, increasing the potential for security breaches. Every enterprise must monitor the use of third-party GenAI apps and understand the specific risks associated with each tool. Building on the understanding of which tools are being used, IT teams need visibility into what data is flowing in and out of corporate systems. This visibility will also help detect a security breach so it can be identified and rectified quickly.
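The monitoring step described above can be sketched as a simple tally over egress proxy logs. This is an illustrative sketch only: the domain list, the two-field log format, and the function name are all assumptions, and a real deployment would pull GenAI domains from a maintained URL-category feed rather than a hard-coded set.

```python
from collections import Counter

# Hypothetical GenAI app domains to watch for; a production setup would
# use a continuously updated URL-category feed instead of a static set.
GENAI_DOMAINS = {"chat.example-ai.com", "notes.example-transcriber.io"}

def detect_genai_usage(proxy_log_lines):
    """Tally which users are reaching known GenAI domains.

    Expects simple 'user domain' records; real proxy logs carry far
    more fields (timestamps, bytes transferred, verdicts, etc.).
    """
    usage = Counter()
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed records
        user, domain = parts[0], parts[1]
        if domain in GENAI_DOMAINS:
            usage[(user, domain)] += 1
    return usage
```

A tally like this answers the first question (who is using what) but not the second (what data is flowing out), which is where inline inspection comes in.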
Control: IT teams need the ability to make an informed decision on whether to block, allow or limit access to third-party GenAI apps, either on a per-application basis or through risk-based or categorical controls. For example, you might want to block all access to code optimization tools for all employees but allow developers to access the third-party optimization tool that your information security team has assessed and sanctioned for internal use.
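The decision logic in that example, sanctioned apps allowed per role, everything else handled by a risk-based rule, can be expressed as a small policy function. This is a minimal sketch assuming hypothetical app names, role names, and a 0-10 risk score; it is not any vendor's actual policy engine.

```python
def decide_access(app, user_role, sanctioned_apps, risk_scores, risk_threshold=7):
    """Return 'allow', 'limit', or 'block' for a GenAI app request.

    sanctioned_apps: {app_name: set of roles cleared to use it}
    risk_scores:     {app_name: 0-10 risk score from security review}
    """
    # Sanctioned apps are allowed only for the roles listed against them.
    if app in sanctioned_apps:
        return "allow" if user_role in sanctioned_apps[app] else "block"
    # Unsanctioned apps fall back to a risk-based rule; unknown apps
    # default to the highest risk and are blocked.
    return "block" if risk_scores.get(app, 10) >= risk_threshold else "limit"
```

With this shape, the article's example is just one sanctioned entry: the assessed code optimizer mapped to the developer role, while unreviewed tools are blocked or limited by score.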
Data Security: Are your teams sharing sensitive data with the apps? IT teams need to block sensitive data from leaking to protect your data against misuse and theft. This is especially important if your company is regulated or subject to data sovereignty laws. In practice, this means monitoring the data being sent to GenAI apps, and then leveraging technical controls to ensure that sensitive or protected data, such as personally identifiable information or intellectual property, isn’t sent to these applications.
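In practice, "blocking sensitive data from leaking" means inspecting outbound prompts before they leave the network. The sketch below shows the idea with a few illustrative regex detectors; the patterns are simplified assumptions, and production DLP uses validated detectors, checksums (e.g. Luhn for card numbers), and context-aware classifiers rather than bare regexes.

```python
import re

# Illustrative patterns only; real DLP detectors are far more robust.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_sensitive(prompt):
    """Redact known PII patterns from a prompt before it leaves the network.

    Returns the redacted prompt and a list of the detector labels that fired,
    which can feed alerting or block-the-request logic.
    """
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, found
```

Whether a hit triggers redaction, a block, or just an alert is a policy choice; regulated companies typically block outright.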
Threat prevention: Exploits and vulnerabilities can lurk beneath the surface of the GenAI tools your teams use. Given how quickly many of these tools have been developed and brought to market, you often can’t tell whether a service was built on corrupted models, trained on inaccurate or malicious data, or is subject to a broad range of AI-specific vulnerabilities. A recommended best practice is to monitor and control data flowing from these applications into your organization for malicious or suspicious activity.
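For the developer scenario above, compromised AI-returned code carrying malicious scripts, inbound inspection can start with simple heuristics that flag suspect lines for human review. This is a rough sketch with made-up denylist patterns; real threat prevention relies on sandboxed execution and current threat intelligence, not static pattern matching.

```python
import re

# Hypothetical heuristics for AI-returned code; illustrative only.
SUSPICIOUS = [
    re.compile(r"\beval\s*\("),                # dynamic code execution
    re.compile(r"curl\s+[^|]*\|\s*(?:ba)?sh"), # download piped to a shell
    re.compile(r"base64\s+-d"),                # decode-and-run staging
]

def flag_suspicious(returned_code):
    """Flag lines of AI-returned code that warrant human review before merge."""
    hits = []
    for lineno, line in enumerate(returned_code.splitlines(), 1):
        for pattern in SUSPICIOUS:
            if pattern.search(line):
                hits.append((lineno, line.strip()))
                break  # one flag per line is enough
    return hits
```

Flagged lines would feed a code-review gate rather than an automatic block, since heuristics like these produce false positives.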
