By Garrett Kohlrusch | GK Data LLC
A few years ago, “don’t click suspicious links” was most of what you needed to know. The emails were obvious. The phone calls were clumsy. The tells were everywhere.
That era is over.
The attacks happening now don't rely on you making an obvious mistake. They rely on your trust in what you see and hear, then manufacture something indistinguishable from the real thing. Voice, email, video, identity. Attackers have access to tools that make all of it forgeable, and they're using them against businesses every day.
AI Voice Cloning: When Your Ears Lie to You
The technology required to clone a voice has crossed a threshold that most people aren’t aware of yet. A few seconds of clean audio — pulled from a LinkedIn video, a podcast appearance, a Zoom recording, a voicemail — is enough to generate a synthetic voice that is, in practice, indistinguishable from the real person.
Attackers use these cloned voices to make phone calls. An employee hears what sounds exactly like their CEO asking for an urgent wire transfer before end of day. A business owner gets a call that sounds exactly like their accountant requesting login credentials. The voice is right. The cadence is right. The familiarity is right.
These aren't edge cases. The FBI has issued repeated warnings about voice-cloning fraud, and single phone calls have resulted in six-figure losses.
What actually stops it: Establish a verbal verification word with anyone who might receive a high-stakes request from you — an employee, a financial contact, a family member. If the word isn’t used, the request doesn’t move forward. No exceptions, no matter how convincing the caller sounds.
Email Spoofing: The Gap Between Looking Real and Being Real
Email authentication has improved, but the attack surface hasn’t gone away — it’s shifted.
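The improvement comes from authentication protocols (SPF, DKIM, and DMARC) that let a domain owner tell receiving mail servers how to treat messages that fail verification. A DMARC policy is published as a single DNS TXT record; a minimal sketch, with a placeholder domain and reporting address:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

`p=reject` instructs receivers to discard mail that fails SPF/DKIM alignment. Many domains still publish `p=none`, which only monitors without blocking, and that gap is one reason spoofing remains viable.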
Display name spoofing is still rampant. Your email client shows you “Garrett Kohlrusch” and most people stop there. The actual sending address, if you look, is something completely unrelated. Lookalike domains work similarly: gkdata.io versus gkdata-io.com or gkdatа.io (with a Cyrillic ‘a’). The difference is invisible at a glance.
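Both of those checks are mechanical enough to automate. A minimal Python sketch, using only the standard library (the domains and addresses below are made up for illustration): the first function flags any non-ASCII character hiding in a domain, and `parseaddr` separates the friendly display name from the actual sending address.

```python
import unicodedata
from email.utils import parseaddr

def non_ascii_chars(domain: str) -> list[tuple[str, str]]:
    """Flag characters in a domain that only look like ASCII letters."""
    return [(ch, unicodedata.name(ch)) for ch in domain if not ch.isascii()]

# The genuine domain comes back clean; the lookalike is caught immediately.
print(non_ascii_chars("gkdata.io"))       # []
print(non_ascii_chars("gkdat\u0430.io"))  # [('а', 'CYRILLIC SMALL LETTER A')]

# Display-name spoofing: the name your mail client shows and the actual
# mailbox are independent fields, and only the second one matters.
name, addr = parseaddr('"Garrett Kohlrusch" <billing@attacker.example>')
print(name)  # Garrett Kohlrusch
print(addr)  # billing@attacker.example
```

The same idea scales to a mail gateway rule: anything where the display name matches a known executive but the address domain does not is worth quarantining for review.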
Beyond those, AI has eliminated one of the most reliable tells in phishing emails: bad writing. The grammar is perfect now. The tone matches. The context is sometimes pulled from real email threads the attacker has already accessed. You can no longer read your way to safety.
What actually stops it: Slow down on any email that involves money, credentials, or sensitive data — regardless of how legitimate it looks. Verify through a channel you initiated, not one the email provides.
Business Email Compromise: The Long Con
BEC is in a different category from most scams because it’s not opportunistic. It’s targeted, patient, and expensive.
An attacker compromises or spoofs a business email account — often a vendor, a partner, or an internal executive — and then monitors. They read threads. They learn the payment workflow, the approval chain, the language everyone uses. Then, at the right moment, they insert a payment redirect: "Update your wire instructions to this new account." The invoice attached is legitimate. The context is real. Only the bank details have changed.
The FBI estimates BEC costs businesses nearly $3 billion annually. The average loss per incident is not a rounding error — it’s often a significant fraction of a company’s operating budget. And because the transaction looks legitimate, recovering the funds is rarely possible.
What actually stops it: Require out-of-band verbal confirmation for any change to payment details. Call a known number — not one from the email — and confirm directly before any funds move.
Deepfake Video: The Next Threshold
In early 2024, a finance employee at a multinational firm joined what appeared to be a video call with colleagues, including someone posing as the CFO. Everyone on the call was a deepfake. He transferred $25 million before the deception was discovered.
That incident required sophisticated tools. The barrier has dropped considerably since then. Real-time face and voice synthesis is increasingly accessible, and the attack playbook is documented. This is not a future threat — it’s a current one for any organization where video calls are used to authorize decisions.
What actually stops it: High-stakes financial decisions should never be authorized solely on the basis of a video call. Require a secondary verification step that happens outside the call itself.
The Common Thread
Every attack here exploits the same thing: trust in a signal you’ve been conditioned to rely on. A familiar voice. A recognizable name in the sender field. A face you know on a screen.
The technology to fake all of those has arrived. The defense isn’t paranoia — it’s process. Clear verification procedures applied consistently, without exceptions for urgency or familiarity. The few seconds it takes to confirm through a second channel is a small cost. The alternative is substantially more expensive.
How GK Data LLC Can Help
Social engineering is the entry point for the majority of serious breaches. Once an attacker has a foothold — a credential, an email thread, a trusted position — the damage to your infrastructure, your data, and your reputation compounds quickly.
At GK Data LLC, we help businesses understand where their exposure actually is — from email authentication gaps that make spoofing easier than it should be, to web application vulnerabilities that give attackers the reconnaissance material they need to build convincing pretexts.
If your business hasn’t had an outside assessment, now is the right time.
[email protected] | gkdata.io
GK Data LLC is a cybersecurity consultancy based in Minneapolis, MN, specializing in web application security and managed IT services.