Navigating the Double-Edged Sword of AI

By Erkin Djindjiev and Michael Sviben, DomainGuard

Embracing Innovation without Losing Control

ChatGPT is now an everyday workplace tool with vast potential. But as with every revolutionary technology, it comes with unintended consequences. It is the responsibility of technology leaders to explore not only what these tools can do for us, but what they can do to us when used with malicious intent.

The Threats Are Already Here

Generative AI isn’t a distant risk—it’s here, already reshaping the threat landscape. From impersonation scams to professional-grade phishing and dark web tools, attackers are actively exploiting these technologies. Below are some of the key risks every CIO should be aware of.

Data Leaks through Public AI Tools

Well-intentioned employees often paste sensitive data into public AI tools like ChatGPT. That can include customer contracts, source code, even internal memos. Once submitted, the data leaves your network—and your control. Samsung learned this the hard way when proprietary source code leaked through employee interactions with a chatbot.

And they’re not alone. Various sources estimate that between 8% and 13% of employees have uploaded sensitive or confidential data to public AI tools. That means the risk isn’t theoretical—it’s likely happening inside your organization right now.

Professional-Grade Phishing at Scale

Gone are the days of scams that could be readily identified by misspellings and broken English. Today, generative tools enable attackers to write flawless, convincing emails. They can instantly translate emails into any language, mimic corporate tone, and even fine-tune the message for psychological effectiveness. As a result, phishing campaigns are becoming more believable—and more dangerous.

Deepfake Voices and Faces

Using just 10 seconds of audio, attackers can clone an executive's voice. Combine that with caller ID spoofing and social engineering, and an unsuspecting employee might act on a fraudulent request. In one real case, a Hong Kong employee wired $25 million after a video call with AI-generated "executives."

Attackers will often create a false sense of urgency. When in doubt, hang up and call the individual back through a separate, verified communication channel. For sensitive business functions such as accounts payable, consider using rotating safewords to approve large transactions.
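The rotating-safeword idea can be sketched in a few lines. This is an illustrative example only, assuming a shared secret distributed out of band (for instance, during onboarding); the word list, rotation period, and function names here are all hypothetical, not a production verification protocol.

```python
import hmac
import hashlib

# Rotate the safeword every hour; tune to your approval cadence.
ROTATION_SECONDS = 3600

# Small illustrative word list; a real deployment would use a larger one.
WORDLIST = ["amber", "basalt", "cobalt", "dune", "ember", "fjord", "garnet", "harbor"]

def current_safeword(secret: bytes, now: float) -> str:
    """Derive the safeword for the current time window from the shared secret."""
    window = int(now // ROTATION_SECONDS)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    return WORDLIST[digest[0] % len(WORDLIST)]

def verify_safeword(secret: bytes, spoken: str, now: float) -> bool:
    """Check a spoken word against the current window using a constant-time compare."""
    return hmac.compare_digest(spoken, current_safeword(secret, now))
```

Both parties derive the same word from the shared secret and the clock, so a phrase overheard on one call is useless after the window rotates; `hmac.compare_digest` avoids leaking information through comparison timing.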

Fake IDs and AI-generated profile photos now enable attackers to create entire digital personas. These are used to infiltrate networks, gather intel, and bypass KYC checks on financial platforms. And because these personas don’t correspond to real people, they are virtually untraceable.

Weaponized Dark AI Models

While most public AI tools include ethical guardrails, attackers are building underground versions trained on hacking manuals and malware repositories. Models like "WormGPT" and "FraudGPT" are sold on the dark web, explicitly designed for crafting malicious code, phishing pages, and social engineering scripts.

What Technology Leaders Can Do Today

While the threats from generative AI are significant, there are practical steps your business can take to reduce risk. The following actions focus on minimizing data exposure, reinforcing security culture, and staying ahead of emerging threats.

  1. Deploy a private LLM instance
    Avoid sending proprietary data to public tools. Instead, host a private version of an LLM within your organization or through a vetted partner. This ensures control over where data goes and who can access it.
  2. Establish strong AI use policies
    Ensure your organization has clearly defined rules about what can and cannot be entered into AI tools. Consider implementing data loss prevention (DLP) tools to monitor usage and flag violations.
  3. Train employees on prompt hygiene
    Awareness is everything. Teach your teams that even casual prompts can lead to inadvertent leaks. Encourage them to sanitize data and recognize the risks of sharing anything confidential.
  4. Prepare for deepfake threats
    Develop internal protocols for verifying identity beyond voice or video. This could include:
    - Safewords or challenge phrases for executives
    - Multi-person approval chains for large financial transactions
    - Awareness training on common impersonation tactics
  5. Monitor AI advancements on the dark web
    Stay informed. Work with your security teams or vendors to keep tabs on how AI is being used in cybercrime. Knowledge is a key defense.
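To make the first step concrete, here is a minimal sketch of routing prompts to a self-hosted model. The hostname, port, and model name are hypothetical; the sketch assumes an internally hosted server exposing an OpenAI-compatible `/v1/chat/completions` route, which many self-hosted LLM servers provide. The point is simply that the request resolves to your own infrastructure rather than a public SaaS endpoint.

```python
import json
import urllib.request

# Hypothetical internal endpoint -- prompts never leave the private network.
INTERNAL_LLM_URL = "http://llm.corp.internal:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "internal-model") -> urllib.request.Request:
    """Build a chat-completion request aimed at the private LLM instance."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        INTERNAL_LLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it is one call: urllib.request.urlopen(build_chat_request("Summarize this memo."))
```

Pairing an internal endpoint like this with network rules that block public AI domains makes the private instance the path of least resistance for employees.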
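The DLP and prompt-hygiene guidance above can also be sketched in code: scan a prompt for likely-sensitive spans and redact them before anything leaves the network. The regex patterns below are deliberately simple illustrations; production DLP tools use far richer detectors (keyword dictionaries, document fingerprints, ML classifiers).

```python
import re

# Illustrative detectors only -- real DLP coverage is much broader.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact likely-sensitive spans and report which detectors fired."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits
```

A hook like this can sit in a browser extension, an internal chat gateway, or the private LLM proxy itself, logging detector hits so security teams can see where training is needed.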

A Final Word: AI is Inevitable. Abuse Doesn’t Have to Be.

Generative AI isn’t going away. Nor should it. But as with the internet, mobile phones, and social media before it, bad actors will find ways to weaponize it. The difference now is speed and scale. By taking a proactive approach today, CIOs can help ensure that innovation drives opportunity—not compromise.

This article is part of the Expert Insights series by Access Point Consulting.

Resources

To Enhance Your Cyber Operations

How Pen Testing and Continuous Attack Surface Management Work Together

As the digital perimeter continues to dissolve, security leaders are rethinking how they manage cyber risk. Penetration testing and vulnerability management remain essential—but they’re no longer enough on their own. Today’s attackers exploit what lies beyond your defined scope: misconfigured cloud buckets, forgotten subdomains, exposed APIs, and rogue SaaS apps. To stay ahead, organizations need not just testing, but visibility. That’s where continuous Attack Surface Management (ASM) comes in.

Beyond Domains: The Expanding External Threat Landscape

As organizations strengthen their internal security, attackers are shifting their focus — exploiting what’s outside your firewall. The external threat landscape has evolved far beyond just domains and IP addresses. Today, it includes employee data on data broker sites, leaked credentials on the dark web, chatter on adversarial forums, and impersonations through ads and decentralized platforms. In this article, we highlight what you need to know about these risks and how to improve your visibility. 

The Hidden Risks of Domain-Based Threats — and How to Defend Against Them

Domain-based threats have become one of the most persistent and underestimated risks organizations face. From lookalike domains designed to deceive, to infrastructure missteps that invite attackers, the danger is real — and growing. During a recent webinar hosted by Access Point Consulting, we explored these threats, why they matter, and what you can do to protect your brand, customers, and employees.
