Diego Arrabal, Vice President, Eastern Europe, Middle East and Africa, Check Point Software Technologies, highlights how AI is evolving into core infrastructure across sectors, the emerging risks around AI-driven environments and why organisations need a more unified, prevention-first approach to secure AI at scale
Across the Gulf, artificial intelligence is no longer an experiment or a side project. It is becoming core infrastructure. Governments, enterprises and critical sectors are embedding AI into how they plan, operate and compete, linking its adoption directly to national development goals, economic diversification and long-term resilience.
That momentum is real, and it matters. But as AI moves from pilots into production, the conversation has to change. The question is no longer whether AI can create value, but whether organisations are building it in a way they can actually trust.
For years, cybersecurity was often treated as something that could be added later, once systems were already in place. That approach does not survive in the AI era. AI changes how systems behave, how data moves and how decisions are made. Security cannot sit on the perimeter anymore. It has to be built into the foundation.
Attackers have already adapted. AI is being used to scale attacks, automate reconnaissance and make social engineering more convincing than ever. At the same time, organisations are rolling out copilots, AI-driven applications, autonomous agents and private AI environments at speed. The result is a far broader and more dynamic attack surface than most traditional security models were designed to handle.
This challenge is particularly acute in the Gulf, where AI adoption is increasingly tied to critical sectors and strategic priorities. Financial services, healthcare, energy, logistics, government platforms and industrial systems are all becoming more intelligent and more data-driven. Security, therefore, cannot remain limited to networks, endpoints or email. It must extend into models, prompts, agents, permissions, data pipelines and real-time behaviour within AI environments.
This is why the concept of the AI factory is gaining traction across the region. Many organisations are no longer comfortable relying entirely on public AI services without clear visibility into where their data resides, how it is handled or who ultimately controls it. Regulatory expectations, sovereignty requirements and business risk are accelerating the move toward private and hybrid AI environments. These environments are quickly becoming mission-critical infrastructure.
But AI factories do not behave like traditional data centres. They combine high-performance compute, massive datasets, distributed training pipelines, inference engines, APIs, orchestration layers and increasingly autonomous systems that can act, not just generate output. The risks are different as well: prompt injection, model theft, data leakage, adversarial manipulation and lateral movement across AI workloads are no longer theoretical concerns.
Partial visibility is not enough in this context. Securing AI requires understanding how models, agents and applications behave at runtime, not just how they were designed. Control has to exist where decisions are made and actions are triggered.
This becomes even more important as organisations move toward agentic AI. Not long ago, AI was largely viewed as a productivity tool, something that could help write, summarise or analyse. That view is already outdated. AI is now being connected directly to internal systems, business applications and live data sources. Once AI systems can act, the risk is no longer about a wrong answer. It is about access, authority and control.
That shift demands a more deliberate approach to security. Organisations need a unified way to manage AI across three realities at once: employee use of AI, AI embedded into business applications and autonomous systems operating with minimal human intervention. Security has to operate across all three, in real time, and inside production environments.
This is where a prevention-first, unified approach becomes critical, one that brings together visibility, policy enforcement and threat prevention across the AI lifecycle rather than relying on fragmented controls.
The answer is not to slow AI adoption. The Gulf is moving quickly, and that speed is intentional. But ambition without control creates exposure. Security has to move at the same pace as innovation, not trail behind it.
That starts with visibility. Organisations need to understand how AI is being used across the workforce, including informal or unsanctioned use that often goes unnoticed. They need a clear view of which AI applications and agents exist, what they are connected to and which data they can access.
Just as importantly, they need the ability to intervene. Policy documents and guidelines are no longer enough. In AI-driven environments, risk appears at runtime. Control must be enforced continuously and adapted as systems evolve, permissions change and new workflows emerge.
For enterprises across the region, this is not about limiting innovation. It is about making it sustainable. The organisations that stand out in the next phase of AI adoption will not simply be the ones that move fastest, but the ones that build with discipline, clarity and confidence.
The Gulf is no longer experimenting with AI at the edges. It is embedding AI into national strategies, critical infrastructure and real operational systems. That makes this region one of the most important places in the world to get AI security right.
Because the next phase of the AI race will not be decided by who moves first. It will be decided by who builds systems they can trust, and trust at scale.
