Embracing Innovation Without Losing Control
ChatGPT is now an everyday workplace tool with vast potential. But as with every revolutionary technology, it comes with unintended consequences. It is the responsibility of technology leaders to explore not only what these tools can do for us, but what they can do to us when used with malicious intent.
The Threats Are Already Here
Generative AI isn’t a distant risk—it’s here, already reshaping the threat landscape. From impersonation scams to professional-grade phishing and dark web tools, attackers are actively exploiting these technologies. Below are some of the key risks every CIO should be aware of.
Data Leaks through Public AI Tools
Well-intentioned employees often paste sensitive data into public AI tools like ChatGPT. That can include customer contracts, source code, even internal memos. Once submitted, the data leaves your network—and your control. Samsung learned this the hard way when employees pasted proprietary source code into ChatGPT.
And they’re not alone. Various sources estimate that between 8% and 13% of employees have uploaded sensitive or confidential data to public AI tools. That means the risk isn’t theoretical—it’s likely happening inside your organization right now.
Professional-Grade Phishing at Scale
Gone are the days of scams that could be readily identified by misspellings and broken English. Today, generative tools enable attackers to write flawless, convincing emails. They can instantly translate emails into any language, mimic corporate tone, and even fine-tune the message for psychological effectiveness. As a result, phishing campaigns are becoming more believable—and more dangerous.
Deepfake Voices and Faces
With just a few seconds of audio, attackers can clone an executive's voice. Combine that with caller ID spoofing and social engineering, and an unsuspecting employee might act on a fraudulent request. In one widely reported case, a Hong Kong finance employee wired $25 million after a video call with AI-generated "executives."
Attackers will often create a false sense of urgency. When in doubt, hang up and call the individual back through a separate, verified communication channel. For sensitive business functions such as accounts payable, consider using rotating safewords to approve large transactions.
Fake IDs and AI-generated profile photos now enable attackers to create entire digital personas. These are used to infiltrate networks, gather intel, and bypass KYC checks on financial platforms. And because these personas don’t correspond to real people, they are virtually untraceable.
Weaponized Dark AI Models
While most public AI tools include ethical guardrails, attackers are building underground versions trained on hacking manuals and malware repositories. Models like "WormGPT" and "FraudGPT" are sold on the dark web, explicitly designed for crafting malicious code, phishing pages, and social engineering scripts.
What Technology Leaders Can Do Today
While the threats from generative AI are significant, there are practical steps your business can take to reduce risk. The following actions focus on minimizing data exposure, reinforcing security culture, and staying ahead of emerging threats.
- Deploy a private LLM instance
Avoid sending proprietary data to public tools. Instead, host a private version of an LLM within your organization or through a vetted partner. This ensures control over where data goes and who can access it.
- Establish strong AI use policies
Ensure your organization has clearly defined rules about what can and cannot be entered into AI tools. Consider implementing data loss prevention (DLP) tools to monitor usage and flag violations.
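As an illustration of how such a DLP-style check might work, the sketch below flags and redacts common sensitive patterns before a prompt leaves the network. The pattern names and regular expressions are illustrative assumptions, not any vendor's actual detection rules, and a production DLP product would use far richer detectors.

```python
import re

# Illustrative patterns only; names and regexes are assumptions,
# not a real DLP rule set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text):
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def redact_prompt(text):
    """Replace matches with placeholders before the prompt leaves the network."""
    for name, rx in PATTERNS.items():
        text = rx.sub(f"[REDACTED-{name.upper()}]", text)
    return text
```

A gateway sitting between employees and a public AI tool could call `scan_prompt` to flag violations for review and `redact_prompt` to strip the sensitive content automatically.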
- Train employees on prompt hygiene
Awareness is everything. Teach your teams that even casual prompts can lead to inadvertent leaks. Encourage them to sanitize data and recognize the risks of sharing anything confidential.
- Prepare for deepfake threats
Develop internal protocols for verifying identity beyond voice or video. This could include:
  - Safewords or challenge phrases for executives
  - Multi-person approval chains for large financial transactions
  - Awareness training on common impersonation tactics
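A rotating safeword could be derived from a shared secret rather than distributed manually each day. The sketch below is a hypothetical illustration in the spirit of TOTP one-time codes; the word list, secret handling, and daily rotation interval are all assumptions, not a vetted protocol.

```python
import hmac
import hashlib
import time

# Hypothetical word list and rotation interval; both are assumptions.
WORDS = ["harbor", "falcon", "meadow", "copper", "lantern", "orchid", "summit", "ember"]
ROTATION_SECONDS = 24 * 3600  # rotate daily

def current_safeword(shared_secret, now=None):
    """Map the current time window to a word via HMAC, like a TOTP code."""
    window = int((now if now is not None else time.time()) // ROTATION_SECONDS)
    digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).digest()
    return WORDS[digest[0] % len(WORDS)]

def verify_safeword(shared_secret, spoken):
    """Accept the current window's word (a real system might also accept
    the previous window's word to tolerate clock skew)."""
    return hmac.compare_digest(spoken, current_safeword(shared_secret))
```

Because both parties derive the word from the same secret, nothing has to be emailed or messaged around, which removes one avenue an attacker could intercept.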
- Monitor AI advancements on the dark web
Stay informed. Work with your security teams or vendors to keep tabs on how AI is being used in cybercrime. Knowledge is a key defense.
A Final Word: AI is Inevitable. Abuse Doesn’t Have to Be.
Generative AI isn’t going away. Nor should it. But as with the internet, mobile phones, and social media before it, bad actors will find ways to weaponize it. The difference now is speed and scale. By taking a proactive approach today, CIOs can help ensure that innovation drives opportunity—not compromise.
This article is part of the Expert Insights series by Access Point Consulting.