Weighing AI Innovation's Advancements Against Cybersecurity Threats: A Delicate Equation
Shane Buckley is CEO of Gigamon, a leader in deep observability. With annual budgets being finalized, now is the time to secure AI budgets for 2025 with cybersecurity in mind and to keep security at the center of business strategy, investments, and AI decisions.
We're in the thick of it: AI hype is at an all-time high, and C-suite leaders are under pressure to demonstrate the success of their AI investments to stakeholders. At the same time, interest rates remain high, budgets remain tight, and everyone wants the transformative productivity benefits that AI promises.
For a long time, the primary hurdle to democratizing AI was its excessive cost. That assumption was challenged by the release of DeepSeek, a large language model (LLM) built by a Chinese company that claims to match the caliber of U.S. competitors such as OpenAI's ChatGPT at a significantly lower price point. The announcement sent shockwaves through the AI and tech industry.
More recently, Alibaba launched its own competing model, Qwen, showing that the AI race is far from over. Expect more innovative entries to hit the market as the competition intensifies.
Although innovation brings exciting opportunities for advancements across industries, from lifesaving drug discoveries to predictive models assessing climate risk, it also carries the potential for disastrous consequences if not handled thoughtfully.
Before organizations can confidently integrate AI into their infrastructure and offerings, there are dangerous pitfalls to watch out for. As with any new technology, there is always the possibility of misuse, whether it's unintentionally disclosing sensitive customer data to LLMs, giving adversaries an opening to tamper with the data that drives business decisions, or unknowingly transferring intellectual property to competitors.
Not every shiny object is gold, and each requires careful handling. The average global cost of a major data breach now approaches an astounding $5 million, not to mention the inevitable damage to reputation.
Here are three tips for organizations to responsibly and securely adopt AI without risking exposure:
Sharing the Power
CEOs face enormous pressure to implement AI in order to stay competitive. This pressure filters down to CISOs, who may not feel adequately supported to ensure AI is implemented securely.
In a Gigamon survey, six in ten CISOs said the single most empowering change would be for cyber risk to carry real weight in the boardroom. In practice, that means creating checks and balances, with every C-suite leader closely aligned and the CISO securing a seat at the boardroom table.
This includes ensuring someone on the board – if not the CISO – possesses a deep understanding of AI to weigh the pros and cons of deployment and how to move forward safely.
Strengthening Defensive Strategy
Once the C-suite is on the same page, it's essential to evaluate your tech stack. CISOs are tasked with implementing a defense-in-depth strategy – a multi-layered cybersecurity approach that caters to today's ever-evolving threat landscape.
However, it can also lead to tool bloat, which increases costs, creates redundancies, and fosters siloed solutions, often undermining the initial intentions of a defense-in-depth strategy and contributing to security breaches.
To counter this, organizations must prioritize end-to-end visibility, monitor east-west (lateral) movement, and boost telemetry. Only then can they ensure their tool stack is streamlined to reinforce their security posture.
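To make the idea concrete, here is a minimal Python sketch of flagging east-west (lateral) flows that fall outside an expected baseline. The flow record fields, the internal address range, and the baseline itself are hypothetical stand-ins for real network telemetry, not a description of any particular product.

```python
# Minimal sketch: flag east-west (internal-to-internal) flows that fall outside
# a learned baseline. Field names, the internal subnet, and the baseline set
# are hypothetical; real deployments draw on much richer network telemetry.
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")  # assumed internal address space

# Expected internal service paths (source host, destination port),
# e.g. learned from historical flow records.
BASELINE = {("10.0.1.15", 443), ("10.0.1.15", 5432), ("10.0.2.20", 443)}

def is_east_west(flow: dict) -> bool:
    """True when both endpoints sit inside the internal address space."""
    return (ip_address(flow["src"]) in INTERNAL
            and ip_address(flow["dst"]) in INTERNAL)

def flag_lateral_anomalies(flows: list[dict]) -> list[dict]:
    """Return east-west flows whose (source, destination port) is unexpected."""
    return [f for f in flows
            if is_east_west(f) and (f["src"], f["dport"]) not in BASELINE]

if __name__ == "__main__":
    sample = [
        {"src": "10.0.1.15", "dst": "10.0.3.7", "dport": 443},   # expected path
        {"src": "10.0.2.20", "dst": "10.0.9.9", "dport": 3389},  # unexpected RDP
    ]
    for f in flag_lateral_anomalies(sample):
        print("review east-west flow:", f)
```

In practice, that baseline would be built from continuously collected telemetry rather than a hard-coded set, but the principle is the same: visibility into lateral traffic is what turns a streamlined tool stack into an actual defense.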
Exploring Alternative AI Solutions
Although open-source LLM platforms like DeepSeek are an alluring option for smaller companies, they carry substantial risks. That's why alternative options like Retrieval-Augmented Generation (RAG) models are gaining traction.
These models combine retrieval mechanisms with generation capabilities, potentially offering more transparent and explainable AI outputs. Organizations can ground RAG models in their own real-time, proprietary data, producing more accurate and relevant outputs while reducing the potential for AI hallucinations, in which the AI generates incorrect or misleading information.
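As a rough illustration of how retrieval grounds generation in an organization's own data, here is a minimal sketch. The in-memory knowledge base, the toy keyword retrieval, and the placeholder generation step are assumptions made for illustration, not any specific vendor's RAG implementation.

```python
# Minimal RAG sketch: the model only sees documents retrieved from an internal
# knowledge store, so answers stay grounded in the organization's own data.
# The store, retrieval, and generation steps below are simplified placeholders.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

# Assumed internal knowledge repository; in production this would be a vector
# store behind the organization's own access controls.
KNOWLEDGE_BASE = [
    Document("policy-001", "Customer data may only be processed in region EU-1."),
    Document("runbook-014", "Rotate service credentials every 90 days."),
]

def retrieve(query: str, k: int = 2) -> list[Document]:
    """Toy keyword-overlap retrieval; real systems use embeddings and an index."""
    terms = set(query.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda d: len(terms & set(d.text.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate_answer(query: str, context: list[Document]) -> str:
    """Placeholder for the generation step; an LLM would receive this context."""
    sources = "; ".join(f"[{d.doc_id}] {d.text}" for d in context)
    return f"Answer to '{query}', grounded in: {sources}"

if __name__ == "__main__":
    question = "How often should credentials be rotated?"
    print(generate_answer(question, retrieve(question)))
```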
Because RAG systems retrieve data from internal, secure databases or knowledge repositories, they limit the technology's access points and keep sensitive information confined within the organization, protecting it from manipulation by outside sources. As with any new technology, however, organizations leave themselves vulnerable if they fail to monitor it consistently.
Observability – such as the integration of network-derived telemetry with log-based security tools – can help ensure threats aren't lurking within your environment. This serves as a last line of defense with the visibility to see what's happening as data moves between internal and external systems in the cloud, in containers, and on-prem.
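As a simple illustration of that kind of correlation, the sketch below surfaces network flows that have no matching log event from the same host within a short window; the record shapes, field names, and the 60-second window are assumptions made for the example.

```python
# Minimal sketch: correlate network-derived telemetry with log events and
# surface outbound flows that no application log explains. Record shapes,
# field names, and the 60-second window are illustrative assumptions.
from datetime import datetime, timedelta

flows = [  # network-derived telemetry
    {"host": "app-01", "dst": "203.0.113.9", "ts": datetime(2025, 3, 1, 12, 0, 5)},
    {"host": "app-02", "dst": "203.0.113.9", "ts": datetime(2025, 3, 1, 12, 0, 7)},
]
logs = [  # events from log-based security tooling
    {"host": "app-01", "event": "api_export", "ts": datetime(2025, 3, 1, 12, 0, 3)},
]

WINDOW = timedelta(seconds=60)

def unexplained_flows(flows: list[dict], logs: list[dict]) -> list[dict]:
    """Return flows with no log event from the same host inside the window."""
    return [f for f in flows
            if not any(l["host"] == f["host"] and abs(l["ts"] - f["ts"]) <= WINDOW
                       for l in logs)]

for f in unexplained_flows(flows, logs):
    print("investigate:", f["host"], "->", f["dst"])
```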
There's no doubt that the current wave of AI innovation is exhilarating, but let's not lose sight of the need for security. As the AI race accelerates, it's easy to get carried away by the potential it offers. However, it's crucial to approach these advancements with caution.
As we keep pushing boundaries, let's do so responsibly, with a security-first mindset supported by the boardroom, the C-suite, and every employee. We can't afford to wait until the horse has bolted to shut the barn door.
Enrichment Insights:
- Organizations should focus on implementing robust governance, security measures, and alternative AI solutions like RAG models to securely adopt AI while maintaining a balance of power and optimizing for defense-in-depth.
- Knowledge in AI and ML can help enhance incident response and vulnerability assessment in cybersecurity (SANS AI Cybersecurity Summit).
- Collaboration with stakeholders across industries and governments is vital for developing balanced AI policies that address diverse concerns and opportunities (White House Office of Science and Technology Policy).
- Organizations can manage and secure AI-driven APIs with tools like Kong Gateway (API management).
- Establish structured policies to ensure AI systems operate ethically, legally, and transparently; mitigate potential risks such as data privacy breaches and algorithmic bias; and maintain compliance with regulations through continuous monitoring (governance frameworks).
- Adopt technologies like Palo Alto Networks' AI Runtime Security to detect and block malicious traffic targeting AI models (AI runtime security).
- Consider implementing real-time security measures like AI runtime security and leveraging innovative cybersecurity solutions for incident response (innovative cybersecurity solutions).
- As Shane Buckley, CEO of Gigamon, emphasizes, securing AI budgets for cybersecurity in 2025 requires stakeholders to prioritize robust governance and security measures, including approaches such as Retrieval-Augmented Generation (RAG) models, to ensure the responsible and secure adoption of AI.
- In an AI race driven by entrants like DeepSeek and Alibaba's Qwen, organizations must maintain a "security first" mindset, particularly during annual budget discussions, to prevent data breaches that cost nearly $5 million on average and jeopardize an organization's reputation.
- Industry collaboration and shared understanding are essential for developing balanced AI policies, enabling efficient incident response, enhancing vulnerability assessments, and addressing diverse concerns and opportunities effectively.