Changelog

Jan 1, 2026

Green Fern

The Rise of Synthetic Phishing

Phishing has evolved. What used to be generic mass emails with obvious red flags has become something far more dangerous: AI-generated attacks that are hyper-personalized, contextually aware, and nearly indistinguishable from legitimate communication.

We call it synthetic phishing.

These attacks leverage large language models to scrape publicly available data—LinkedIn profiles, company announcements, social media, even breached databases—and craft messages that reference real colleagues, actual projects, and genuine business relationships. They arrive at plausible times, mimic authentic writing styles, and scale effortlessly across thousands of targets.

The result? Phishing that doesn't look like phishing.

Traditional security tools weren't built for this. They rely on known signatures, blacklisted domains, and pattern matching against historical attacks. But synthetic phishing generates unique content for every target. There's no signature to match.
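To make the limitation concrete, here is a minimal sketch of the signature-based logic described above. The blocklists, domain, and message text are hypothetical placeholders, not any real product's data; the point is only that a filter keyed on previously seen content and known-bad domains has nothing to match when every message body is unique and the sending domain is fresh.

```python
import hashlib

# Hypothetical blocklists a legacy signature-based filter might consult.
KNOWN_BAD_HASHES = {"3f2a..."}              # hashes of previously seen phishing bodies
BLOCKED_DOMAINS = {"phish-mail.example"}    # domains flagged in past campaigns

def legacy_filter(sender_domain: str, body: str) -> bool:
    """Return True if signature-based checks flag the message."""
    body_hash = hashlib.sha256(body.encode()).hexdigest()
    return body_hash in KNOWN_BAD_HASHES or sender_domain in BLOCKED_DOMAINS

# A synthetic-phishing message: unique body, never-before-seen domain.
msg = "Hi Dana, following up on the Q3 vendor migration we discussed yesterday..."
flagged = legacy_filter("fresh-new-domain.example", msg)
# flagged is False: there is no signature for content generated per target
```

A rerun with a known-bad domain or a previously hashed body would be caught, which is exactly the asymmetry: the filter only recognizes what it has already seen.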

At Trotta, we're building a different approach. Instead of waiting to detect attacks after they've been crafted, we simulate how adversaries think—predicting attack vectors and identifying vulnerabilities before they're exploited.

Because in a world where AI powers the offense, defense needs to get predictive.

Dec 31, 2025

Yellow Flower

AI Voice Cloning: The New Front of Social Engineering

A few seconds of audio. That's all it takes to clone someone's voice.

AI voice cloning technology has advanced rapidly, and attackers are weaponizing it. With samples pulled from earnings calls, conference talks, social media videos, or even voicemail greetings, bad actors can generate convincing replicas of executives, colleagues, or family members.

The attack scenarios are already real: a CFO receives a call from what sounds exactly like the CEO requesting an urgent wire transfer. An employee gets a voicemail from "IT" asking them to reset credentials. A finance team member hears their manager's voice authorizing a vendor payment change.

These aren't hypotheticals. Organizations are losing millions to voice-based social engineering, and traditional security has no answer—because there's no malware to detect, no malicious link to block. Just a human being trusting a familiar voice.

Jan 2, 2026

Orange Flower

Malicious AI Agents: Autonomous Attackers at Scale

The next evolution is already here: AI agents that don't just assist attackers—they are the attackers.

These aren't simple scripts or bots. Malicious AI agents can autonomously perform reconnaissance on targets, identify vulnerabilities, craft and send phishing campaigns, adapt based on responses, and even negotiate in real time during social engineering attempts. They operate around the clock, learn from failures, and scale with almost no marginal cost.

Imagine an agent that scans your company's public footprint, maps your org chart, identifies the finance team, crafts personalized emails to each member, follows up with voice calls using cloned audio, and adjusts its approach based on who engages. All without human intervention.

This isn't science fiction. The building blocks exist today, and the barrier to entry is dropping fast. Soon, sophisticated attack capabilities once reserved for nation-states will be accessible to anyone with basic technical skills and malicious intent.
