Nov 14, 2025
Dear Security Posture, We Need to Talk

Frank Lyonnet

One of Cursor's first 20 users, and a visionary ask
Three years ago, I stumbled upon a tweet from an AI coding startup called Cursor, which was seeking early adopters for its intelligent editor. On a whim, I replied and had the pleasure of meeting Michael Truell, co‑founder and CEO. I was invited to voice some asks, and so I did — including a wishful feature request: “I want a button to create a full project skeleton.”
To my surprise, I was invited as one of the first 20 users. Over the following months, I watched new AI‑assisted coding features come to life at an incredible pace. Fast forward to yesterday: that little startup announced a whopping $2.3 B funding round, and yes, my coveted one‑click project button is finally real.
This journey – from a speculative tweet to a revolutionary coding assistant – reinforced a conviction I held from day one: AI would eventually revolutionize IT security just like it has revolutionized coding. The trick is the same in both worlds: LLMs are only as good as the signals you feed them. Cursor works because it injects rich, structured code context into the model instead of just throwing natural‑language prompts at it.
We architected our product with that same principle in mind: EDAMAME Security exists to inject information‑rich security signals into an LLM’s decision‑making loop. We built the plumbing first — deep telemetry across identity, network, system and communications, correlated with threat intel (MITRE CVEs, HaveIBeenPwned, blacklists, compliance baselines, ML‑based anomaly detection) — and only then wrapped a reasoning engine around it.
In 2025, the timing finally lines up. We’re poised to launch our first “agentic” security posture tool – essentially your personal CISO in a box – and spark a decentralized, user‑empowered endpoint security revolution.
The AI Security Assistant Revolution: User‑Up Security on Developer Workstations
Traditional endpoint security tools have long taken an “admin‑down” approach: IT administrators push strict policies to machines, lock down settings, and hope users comply. The result? Frustrated developers, hampered workflows, and often nothing more than security theater. We’ve all seen it – the dev who can’t run Docker without disabling half the security stack, or the engineer who bypasses controls entirely by coding on an unmonitored personal laptop. Relying on top‑down enforcement alone often backfires, as users find ways around obstacles.
In 2017, Netflix challenged this paradigm with an internal tool called Stethoscope, which showed employees the security issues on their devices and how to fix them, instead of forcibly changing things for them. This philosophy, coined “User Focused Security,” trusts and guides users to secure their own devices rather than relying on heavy‑handed IT enforcement.
Around the same time, startups like Kolide proved that treating employees like responsible adults actually works: Kolide’s Slack bot will notify an employee when their device drifts out of compliance and then provide step‑by‑step instructions to self‑remediate. If the user fixes a problem (say installing a critical OS patch), they can click a button in Slack to confirm, and the system verifies the resolution in real time. In deployments of these tools, compliance went up and frustration went down – all without Big Brother‑style control.
Building on this “user‑up” approach, our team at EDAMAME is introducing something new: a developer‑friendly AI Security Assistant that lives on your workstation. Instead of a silent agent enforcing policies behind your back, the EDAMAME assistant works with you – explaining risks, proposing fixes, and even automating them (with your permission). Every action it takes is transparent and undoable, so you remain in control. The outcome? Developers actually fix issues on their machines (with a little AI help), security improves continuously, and IT can verify compliance via cryptographic attestations – all without resorting to remote lockouts or draconian policies. This flips endpoint security from a coercive experience to a collaborative one.
And crucially: this isn’t just “ChatGPT on your laptop.” EDAMAME is a security signal engine first, and an LLM client second. It continuously turns raw telemetry into structured, high‑value security signals and feeds those into the model to drive decisions. Think of it as Lynis + Nmap + Wireshark + HaveIBeenPwned + threat models, all distilled into a context package the LLM can actually reason over.
Before diving into how it works, let’s summarize the shift:
Old “Admin‑Down” Security
Central IT enforces strict policies (MDM profiles, group policies) on your machine.
You have limited control or insight. Productivity often suffers, and users inevitably find ways to bypass controls (hello, Shadow IT).
Security issues may linger until the next audit or never be fixed at all.
New “User‑Up” Security
An intelligent assistant on your device guides you to compliance.
It fixes routine issues automatically (with your consent) and coaches you on the tougher ones.
You keep full control – nothing is permanent or hidden, and you can undo any change.
The organization still gets proof that your device is secure (through attestation), but without micromanaging your device or invading your privacy.
Importantly, as Netflix’s Jesse Kriss (who pioneered User Focused Security) noted, the user‑up approach isn’t mutually exclusive with traditional monitoring or management – in fact, they complement each other. You might still have a subset of tightly managed machines for certain roles, but use a Stethoscope‑like assistant on all the others to gain broader visibility.
The key is engaging users as partners. After all, “the only security approaches that work are the ones that are actually used,” and every bit of friction or confusion in your tools will erode the goodwill of your users (and thus your security). EDAMAME embraces this insight: by making security convenient and collaborative, it ensures protective measures are actually adopted rather than circumvented.

Signals In, Intelligence Out: How EDAMAME Uses LLMs
Under the hood, EDAMAME is always doing the same thing:
Collect rich security signals from your environment – identity, network, system, and comms – plus external threat intel and baselines.
Fuse and compress those signals into a machine‑readable posture snapshot – “this is everything that matters right now.”
Ask a large language model to reason over that snapshot, propose actions and explanations, and
Execute changes via a vetted set of local tools, always with the ability to roll back.
In other words, our LLM is not the sensor; it’s the brain. EDAMAME handles the sensing, normalization and enrichment so that the brain is staring at high‑definition telemetry instead of fuzzy log fragments.
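Stripped to its essentials, that loop is easy to sketch (illustrative Python; `Action`, `Agent` and the four callbacks are placeholders for the idea, not EDAMAME's real API):

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    description: str
    undo: object  # closure that reverts this change


@dataclass
class Agent:
    history: list = field(default_factory=list)

    def run_cycle(self, collect, fuse, reason, execute):
        raw = collect()        # 1. gather telemetry + threat intel
        snapshot = fuse(raw)   # 2. compress into a posture snapshot
        plan = reason(snapshot)  # 3. the LLM proposes actions
        for action in plan:
            execute(action)    # 4. apply via vetted local tools
            self.history.append(action)  # every action stays undoable

    def rollback(self):
        while self.history:
            self.history.pop().undo()
```

The point of the shape is that the model only ever appears in step 3: sensing, fusion and execution stay deterministic and local.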
Concretely, the EDAMAME agent pulls:
Local state and hardening data (like Lynis on steroids).
Network topology and live sessions (Nmap‑style views of LAN and overlays).
Process, socket and DNS telemetry (your personal Wireshark, minus the packets).
External feeds: HaveIBeenPwned, MITRE CVEs, public blacklists/whitelists, and ML‑based anomaly scores.
Mappings to threat models and compliance frameworks (CIS, SOC 2, ISO 27001, etc.).

All of that gets transformed into structured security signals – “SSH open on ZeroTier interface”, “overlay peer listed on FireHOL”, “identity found in 2025 credential dump”, “Raspberry Pi on flat LAN with outdated OS” – that the LLM sees as part of its context.
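A context package along those lines might look something like this (field names, values and the fictional peer ID are invented for illustration; EDAMAME's real schema will differ):

```python
# Invented snapshot shape: every fact is already normalized
# before the LLM ever sees it.
snapshot = {
    "identity": {"email_in_breach": True, "password_manager": False},
    "local_access": {"ssh_enabled": True, "root_account": True},
    "overlay": {
        "provider": "zerotier",
        "runs_as_root": True,
        "peer_count": 34,
        "blacklisted_peers": ["peer-1a2b3c"],  # fictional peer ID
    },
    "lan": [
        {"device": "raspberry_pi", "os_outdated": True,
         "services": ["ssh", "shinobi", "syncthing", "zerotier"]},
    ],
}

# Deriving human-readable signals is then a set of simple rules:
signals = []
if snapshot["local_access"]["ssh_enabled"] and snapshot["overlay"]["runs_as_root"]:
    signals.append("SSH open alongside a root-level overlay network")
if snapshot["overlay"]["blacklisted_peers"]:
    signals.append("overlay peer listed on FireHOL")
```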
The difference is how you drive that intelligence. There are two primary modes.
Two Ways to Talk to Your Personal CISO: “Do It For Me” Button and MCP Chats
Both modes use the exact same signal pipeline and decision engine. They just wrap it in different UX:
Built‑in LLM with a “Do It For Me” button – EDAMAME talks directly to an LLM you choose (Claude, OpenAI, or a local Ollama model) using your API key. The experience lives entirely in the EDAMAME UI.
MCP mode via tools – EDAMAME exposes the same capabilities over the Model Context Protocol so external orchestrators (Claude Desktop, n8n, your own bots) can call into it.
Both modes see the same posture. They just differ in how you interact with the results.

Mode 1 – Built‑In LLM + “Do It For Me”
In this mode, EDAMAME quietly acts as your AI security copilot.
You go into Settings, pick your model:
paste an OpenAI API key,
or a Claude key,
or point it at a local Ollama endpoint,
and you’re done. EDAMAME doesn’t care which LLM you bring; it just expects an OpenAI‑style API.
Then, one Monday morning, you hit a single button:
“Assess my security posture and fix anything that’s safe to auto‑fix.”
Behind that button, the “signal engine” spins up. The local agent collects what it needs – OS state, running services, network sessions, LAN devices, overlay networks, threat intel lookups – and sends a compact, signal‑rich snapshot to your chosen LLM.
A few seconds later, you’re looking at something like (based on a real run):
Identity: your primary email is in a 2025 credential‑stuffing dataset; no password manager detected.
Local access: Remote Desktop (ARD), Remote Login (SSH) and the root account are enabled.
Overlay network: zerotier‑one is running as root with 30+ persistent UDP connections; one peer appears on a FireHOL blacklist.
Monitoring agent: a Datadog‑style process is repeatedly trying to reach 100.100.100.200:80 over HTTP, in CGNAT space, with no replies, from a non‑standard user.
LAN / IoT: two Raspberry Pis on 192.168.1.0/24 run outdated Raspbian and expose SSH, a Shinobi CCTV dashboard, Syncthing and ZeroTier; your router, printer and Hue bridge also expose legacy management services.
Instead of dumping 23 alerts on you, the LLM classifies each item as:
AUTO_RESOLVE – safe hygiene, fully reversible:
Disable root login (keep sudo).
Turn off ARD.
ESCALATE – needs a human brain:
“Your ZeroTier overlay is running as root and extends SSH into a mesh with a blacklisted peer. Is every node on this network trusted?”
“Your monitoring agent is beaconing via HTTP to a FireHOL‑listed CGNAT IP. Is this an old proxy, or possible C2?”
“Two outdated Raspberry Pis on a flat LAN (and on ZeroTier) look like a textbook lateral‑movement platform.”
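A minimal version of that triage rule, assuming each finding carries a risk class and a reversibility flag (both invented field names, not the real engine's logic):

```python
def triage(finding):
    """Auto-fix only what is routine hygiene AND fully reversible;
    escalate anything that needs human judgment."""
    if finding["reversible"] and finding["risk"] == "hygiene":
        return "AUTO_RESOLVE"
    return "ESCALATE"


findings = [
    {"title": "root login enabled", "risk": "hygiene", "reversible": True},
    {"title": "blacklisted ZeroTier peer", "risk": "contextual", "reversible": False},
]
```

The asymmetry is deliberate: the cost of a wrong auto-fix is bounded by reversibility, while the cost of a wrong escalation is just a question the user has to read.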
You click “Do It For Me”, and EDAMAME executes all AUTO_RESOLVE actions using its local tools. Every change:
is logged in natural language,
references the exact signals that triggered it (“SSH was open on ZeroTier interface zt0”),
can be undone individually with one click.
What you’re left with on‑screen are the interesting questions an experienced security engineer would actually ask – just without spending an hour in shell history and PCAPs.
Mode 2 – MCP Integrations (Claude Desktop, n8n, and Friends)
Now imagine you prefer to live in another interface: Claude Desktop, a custom Slack bot, or a low‑code orchestrator like n8n.
That’s where MCP mode comes in.
In this mode, EDAMAME exposes its capabilities as tools a remote AI can call:
list_todos, get_action_history, process_todo, undo_all, …
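As a sketch, that toolset could be described by a manifest like this (the tool names come from this post; the descriptions and input schemas are guesses, not EDAMAME's real MCP definitions):

```python
# Hypothetical manifest for the tools EDAMAME could expose over MCP.
TOOLS = {
    "list_todos": {
        "description": "Return open findings with severity and triage class",
        "input": {},
    },
    "get_action_history": {
        "description": "List every applied change with its undo handle",
        "input": {},
    },
    "process_todo": {
        "description": "Apply the fix for a single finding (reversible)",
        "input": {"todo_id": "string"},
    },
    "undo_all": {
        "description": "Roll back every change made in this session",
        "input": {},
    },
}


def describe_tools():
    """What an MCP client like Claude Desktop would see at handshake."""
    return [{"name": name, **spec} for name, spec in TOOLS.items()]
```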
Claude Desktop (or any MCP‑aware client) sees EDAMAME as just another toolset. You start a chat on your desktop:
You: “Hey Claude, assess my security posture.”

Claude calls EDAMAME’s tools, gets back the same signal‑packed posture snapshot as above, and responds:
Claude: “I checked with EDAMAME. I auto‑fixed 7 hygiene items (disabled root login, turned off ARD, …).
Three findings need your attention:
• ZeroTier is running as root with 34 peers, one on a FireHOL blocklist, while SSH is open.
• The Datadog‑style agent attempted 12 HTTP connections to 100.100.100.200:80 (CGNAT + threat‑listed) from a non‑standard user.
• Two Raspberry Pis on 192.168.1.0/24 run outdated Raspbian with SSH, Shinobi CCTV, Syncthing and ZeroTier, on a flat network.
Together, these create a high‑risk attack path. Want me to dig into the monitoring agent first, or lock down ZeroTier and the Pis?”
From Claude’s perspective, it’s just “thinking very hard” and calling tools. From EDAMAME’s perspective, Claude is a remote brain asking it to inspect processes, apply threat intel, change firewall rules and report back the updated signals.
The same pattern works with:
n8n / Zapier‑style flows – schedule nightly posture checks, ship summaries into Slack, open tickets if certain high‑risk combinations appear (e.g., “remote login + ZeroTier + old Pis”).
Custom chat apps – your own branded assistant that still uses EDAMAME for the “touch the machine” parts.
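A nightly flow along those lines can be sketched in plain Python (the `open_ticket` and `post_to_slack` callbacks stand in for whatever your orchestrator provides; the snapshot fields are invented):

```python
def nightly_check(snapshot, open_ticket, post_to_slack):
    """Sketch of an n8n-style job: summarize posture, escalate the
    known-bad combination called out in the post."""
    risky_combo = (
        snapshot["remote_login"]
        and snapshot["overlay_active"]
        and snapshot["outdated_iot_devices"] > 0
    )
    post_to_slack(f"Nightly posture: {snapshot['score']}/5")
    if risky_combo:
        open_ticket("High-risk combination: remote login + overlay + outdated IoT")
    return risky_combo
```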
The key design point: both modes use the same engine. Whether you prefer:
a built‑in “Do It For Me” button backed by your own Claude / OpenAI / Ollama model inside EDAMAME, or
a chat‑first MCP integration from Claude Desktop or an automation platform,
…the analysis, decision logic, and security‑signal plumbing are identical. You pick the UX that fits your workflow; your personal CISO shows up either way.
Safety Net: Full Rollback for Every Change (No‑Fear Automation)
A critical ingredient in EDAMAME’s design is trust through reversibility. Every change the AI makes – whether it’s enabling a firewall, or updating a config – comes with an undo. Think of it as a safety net under the tightrope of automation. Any action the assistant performs is logged and can be rolled back with one click or a simple CLI command.
In traditional security automation, admins are cautious. A script that auto‑deletes “suspicious” files might accidentally delete something important; a policy that auto‑quarantines devices might knock your CEO’s laptop off the network at the worst time. The fear of false positives and unintended consequences means most automation is narrow and conservative – lots of “alert, but don’t fix” or “quarantine only if 100% sure” approaches.
In contrast, EDAMAME’s AI can afford to be bold and proactive because mistakes aren’t permanent. If it tries a batch of 10 fixes and 1 causes an issue, it can immediately roll it back. The AI doesn’t have to be perfect; it just has to be accountable.
For example: suppose the AI notices ten system settings on your Mac that don’t align with security best practices. It decides to fix all of them at once to give your device a quick security boost. It enables the firewall, disables a guest account, turns on automatic updates, and so on. A minute later, your overall security score jumps from, say, 2.5★ to 3.5★ — great! But then you realize your local web development server isn’t reachable anymore.
You check the assistant’s Action History and see that one of the changes was “Enabled firewall” which likely blocked your local dev port. No problem: you hit “Undo” on that specific change. Within seconds, the firewall setting is back to how it was, and your dev workflow is unblocked. The assistant then connects this outcome back to its signals: “firewall enabled” + “dev port 3000 unreachable” and learns from it. Next time, it will either prompt you before enabling the firewall or at least remember to preserve your custom dev ports.
This full undo/rollback capability means no automated action has to be feared. Mistakes are cheap when you can revert them instantly. So the assistant can fix issues aggressively and continuously, dramatically reducing dwell time of vulnerabilities, yet you retain ultimate control.
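The mechanism behind that per-change undo can be sketched in a few lines (a toy model, assuming each change records a closure that restores the previous state):

```python
class ActionHistory:
    """Minimal sketch of per-change undo: every applied change keeps a
    closure that restores the prior state, so any single action can be
    reverted without touching the others."""

    def __init__(self):
        self._log = {}
        self._next_id = 0

    def record(self, description, undo):
        action_id = self._next_id
        self._log[action_id] = (description, undo)
        self._next_id += 1
        return action_id

    def undo(self, action_id):
        description, revert = self._log.pop(action_id)
        revert()
        return description
```

In the firewall scenario above, only the firewall change is reverted; the other nine fixes stay in place.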
User‑Up Security in Action: Everyday Experience
Combining an AI assistant with a user‑centric philosophy unlocks workflows that simply weren’t possible before. Let’s recap how daily security tasks transform with EDAMAME in the picture.
Morning Posture Check
Instead of periodic audits or waiting on IT reports, you can quickly check your posture each day. With virtually zero effort, you get a digest of your device’s security health. The assistant auto‑resolves the trivial stuff (clears known benign alerts, applies minor updates) and surfaces only the deltas in your signals – what changed since yesterday that actually matters.
You spend 2–3 minutes reviewing a summary, not 30 minutes juggling tools.
In‑Flow Fixes
Imagine you’re about to deploy code to production or access a sensitive system, and a policy requires your device to be in a secure state (up‑to‑date OS, disk encrypted, no high‑severity alerts open). In the old world, if you were out of compliance, you’d get blocked and have to file an IT ticket or scramble to fix things manually.
With EDAMAME, the moment you get that nudge – “Hey, you need to secure X before proceeding” – your AI is right there to help:
“Your disk isn’t encrypted, which is required for prod access. Here is the process to enable encryption now.”
Under the hood that’s just another signal transition: “disk_encrypted = false” becomes “disk_encrypted = true”, and your CI gate or access proxy can see the attestation.
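To make the attestation idea concrete, here is a deliberately simplified gate using an HMAC over the posture claim (a real attestation would use asymmetric signatures and a hardware root of trust; this only sketches the check a CI gate performs):

```python
import hashlib
import hmac
import json


def verify_attestation(claim: dict, signature: str, key: bytes) -> bool:
    """Accept the deploy only if the posture claim is intact AND it
    reports disk encryption. Shared-secret HMAC is a stand-in for a
    proper attestation scheme."""
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # claim was tampered with (or wrong key)
    return claim.get("disk_encrypted", False)
```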
Learning By Doing
Each time the AI fixes or flags something, it explains the signal and the why. Over time, you naturally start to learn these security best practices yourself. For instance, you might not have known that having Remote Login enabled on your Mac is risky; but after the assistant turns it off and tells you “Remote SSH access was enabled, which is unusual and dangerous for most users,” you file that away. If you’re curious, you can ask follow‑up questions in chat (“Why is Telnet considered dangerous?”) and get a mini security lesson.
Critically, all this happens with minimal friction. When you engage users positively and give them easy tools to fix issues, compliance skyrockets. EDAMAME takes this to the next level by not just informing users, but actually doing the heavy lifting for them (when permitted). It’s the best of both worlds: user empowerment and signal‑driven automation.
Deep Dive: ZeroTier, Rogue Peers and Remote Login
Let’s zoom in on one of the advanced findings: that zerotier‑one process.

ZeroTier is a powerful peer‑to‑peer virtual network that creates an encrypted overlay resembling a Layer‑2/Layer‑3 LAN. A root‑privileged daemon manages a virtual NIC, TUN/TAP devices and routes; central “planet” nodes help peers discover each other, but data typically flows directly from peer to peer.
That’s fantastic for developer convenience. It’s also a minefield if you combine it with:
Remote Login (SSH) enabled on “all interfaces”
ARD enabled
Root login still available
Peers on the overlay that aren’t rigorously vetted
Once you join a ZeroTier network, every other peer on that network effectively becomes a neighbor on a virtual Ethernet segment. If an attacker can:
Steal or guess your ZeroTier network credentials (by compromising your controller account, or finding network IDs and auth tokens in code or docs), and
Join your overlay as a “legitimate” peer,
…then from their perspective, your laptop looks like any other host on a flat LAN:
They can scan it for SSH/ARD.
They can attempt password reuse using credentials leaked in breaches.
They can pivot to and through other overlay peers (including those old Raspberry Pis).
ZeroTier encrypts the traffic, but encryption doesn’t save you if the attacker has become an endpoint in your overlay or a router for part of it. With routing and bridging features, misconfigurations can even let a rogue node act as a default gateway or DNS server for others, creating very real man‑in‑the‑middle opportunities.
This is why the assistant doesn’t treat “ZeroTier is running” as a benign fact. It cross‑correlates multiple signals:
ZeroTier running as root
Remote Login and ARD all enabled
Your identity in a fresh credential dump
Old Pis on the same physical and virtual networks
and concludes:
“This overlay isn’t just a convenience. In its current state, it’s a stealth external attack surface that can expose your remote login services to any peer who gets onto this mesh.”
The recommended path forward is straightforward but powerful:
Bind SSH/ARD to specific interfaces (e.g., corporate VPN only, not the ZeroTier NIC).
Tighten ZeroTier ACLs and require explicit approval for each device.
Monitor and regularly audit overlay peers.
Where possible, disable remote access entirely unless you actively need it.
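The first recommendation boils down to a simple predicate over your SSH listen addresses; a rough self-check might look like this (the overlay subnet is a typical ZeroTier-style range, used purely as an example):

```python
import ipaddress


def ssh_exposed_to_overlay(listen_addrs, overlay_subnets):
    """Rough check: is SSH bound to a wildcard address, or directly to
    an overlay subnet? Inputs would come from sshd ListenAddress lines
    or live socket telemetry."""
    for addr in listen_addrs:
        if addr in ("0.0.0.0", "::"):  # wildcard: every NIC, overlay included
            return True
        ip = ipaddress.ip_address(addr)
        if any(ip in ipaddress.ip_network(net) for net in overlay_subnets):
            return True
    return False
```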
Again, notice the pattern: signals first, LLM reasoning second. The “insight” is simply the model connecting dots we’ve already distilled.
Deep Dive: Outdated Raspberry Pis and Real Lateral Movement
Next, those Raspberry Pis.
The assistant spotted two Pi devices, both running old Raspbian images, both with multiple admin and sync services exposed, both reachable from your laptop and via the ZeroTier overlay.
That’s not just “lab clutter”; it’s almost an exact replay of a real‑world breach.
In 2018, an attacker gained access to NASA’s Jet Propulsion Laboratory through an unauthorized Raspberry Pi that had been connected to an internal network segment without proper security controls. The NASA Office of Inspector General reported that the attacker used that Pi to gain a foothold, moved laterally inside JPL, and exfiltrated around 500 MB of mission‑related data over a period of months. The Pi wasn’t in any formal asset inventory, and weak segmentation made lateral movement possible.
This pattern is now well understood: IoT and “embedded” devices (cameras, printers, lab boards) tend to have:
outdated kernels and userland,
default or weak credentials,
minimal patch management,
and a privileged position on flat networks.
Threat reports from national cyber centres explicitly call out enterprise‑connected devices as common stepping stones for lateral movement.
So EDAMAME’s warning about your Pis isn’t “paranoid AI”. It’s seeing exactly the pattern that has bitten serious organizations before:
“These Pi boxes are unpatched, expose multiple admin surfaces, share a Layer‑2 domain with your primary workstation, and also sit on an overlay VPN. If an attacker pops one, they can pivot into your laptop and beyond. Please either harden, isolate, or retire them.”
Concretely, the assistant encourages you to:
Rebuild them on current Raspberry Pi OS.
Remove unused services (old CCTV dashboards, RPC, debug ports).
Enforce strong, unique credentials and, where supported, TLS for admin UIs.
Move them onto an IoT VLAN with strict east‑west firewall rules.
Restrict which ZeroTier peers they can talk to, or remove them from the overlay entirely.
All of these recommendations fall straight out of a handful of high‑quality signals: “device type = Raspberry Pi”, “OS version < X”, “services = SSH + Shinobi + Syncthing + ZeroTier”, “network = flat + overlay”.
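That mapping from signals to recommendations is almost mechanical; a toy version (field names invented) makes the point:

```python
def pi_recommendations(signals):
    """Map a handful of device signals to the remediation steps above.
    Illustrative only: the real engine works over richer signals."""
    recs = []
    if signals["os_outdated"]:
        recs.append("rebuild on current Raspberry Pi OS")
    extra = set(signals["services"]) - {"ssh", "zerotier"}
    if extra:
        recs.append(f"remove unused services: {', '.join(sorted(extra))}")
    if signals["network_flat"]:
        recs.append("move to an IoT VLAN with east-west firewall rules")
    if "zerotier" in signals["services"]:
        recs.append("restrict or remove overlay membership")
    return recs
```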
Deep Dive: Monitoring Agent vs. Blacklisted Destination
Finally, the Datadog‑style agent beaconing to 100.100.100.200:80.
Datadog’s agent architecture is fairly standard: a collector gathers metrics and a forwarder sends them securely over TLS to well‑known intake endpoints such as *.datadoghq.com on port 443.
Seeing an agent:
talk plain HTTP,
to an IP in CGNAT space (100.64.0.0/10 is reserved for carrier‑grade NAT, not public SaaS endpoints),
that also appears on a FireHOL blocklist,
from a non‑standard service account (jdoe),
is exactly the sort of anomaly a human analyst would investigate for possible command‑and‑control traffic.
Again, it’s all about signals: process = datadog‑like, dst_ip in CGNAT, protocol = HTTP, dst_ip on blacklist, user = jdoe (not dd-agent). The assistant phrases it as a hypothesis:
“This could be a stale telemetry endpoint or an on‑prem proxy that was decommissioned and later reallocated to someone malicious. It could also be a tampered agent config trying to beacon to attacker‑controlled infrastructure. Let’s freeze this egress path, diff the config against a known‑good one, verify the agent binary, and then decide.”
The important nuance here is that EDAMAME doesn’t blindly trust “your own tools”. It knows what “normal” looks like for a monitoring agent and calls out deviations, then walks you through a disciplined triage path instead of panicking.
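The CGNAT classification, at least, is something you can verify yourself with Python's standard library:

```python
import ipaddress

# RFC 6598 shared address space, reserved for carrier-grade NAT.
CGNAT = ipaddress.ip_network("100.64.0.0/10")


def is_cgnat(ip: str) -> bool:
    """True if the address falls in carrier-grade NAT space: no
    legitimate public SaaS intake endpoint should live there."""
    return ipaddress.ip_address(ip) in CGNAT
```

Applied to the beacon destination from the finding, `is_cgnat("100.100.100.200")` confirms the analyst's suspicion before any threat-list lookup happens.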
Holistic Posture: Remote Login + ZeroTier + Old Pis = Very Bad
The key lesson from this whole session is not that remote login, or ZeroTier, or Raspberry Pis are inherently evil. It’s that together, in this exact configuration, they form a very attractive kill chain.
The signals in play:
Your identity appears in a fresh credential‑stuffing dataset; you don’t use a password manager, so reuse is likely.
Remote login services (SSH, ARD) and even the root account are enabled on your dev workstation.
A root‑level overlay VPN (ZeroTier) extends those services into a mesh network with many peers, one of which is on a FireHOL list.
Unpatched Raspberry Pis and other IoT devices sit on the same LAN and the same overlay.
A monitoring agent is making strange HTTP calls to a CGNAT address on a threat list.
From an attacker’s point of view, this is almost “plug and play”:
Use leaked credentials to compromise some SaaS or controller account.
From there, discover your ZeroTier network and join as a peer.
Use the overlay to hit SSH on your laptop, because Remote Login is enabled.
Pop an old Raspberry Pi and use it as a quiet pivot and persistence node.
Hide C2 traffic inside connections from “trusted” agents and overlay peers.
From your point of view, it’s just a normal dev setup that grew organically over a few years.
EDAMAME’s job – and where the “agentic” piece really matters – is to look across all of this, synthesise it into a story, and then help you unwind it step by step:
Fix identity hygiene (password manager, unique credentials, 2FA).
Close or constrain remote access (SSH/ARD/root).
Harden and strictly govern ZeroTier.
Patch, isolate or retire risky IoT devices.
Investigate and correct strange agent behaviour.
This is what “personal CISO in a box” really means: not just flipping security toggles, but using high‑signal telemetry to explain why a particular mix of toggles is dangerous, and giving you a safe, undoable path to something better.
Conclusion: AI + User Empowerment + Signals = Security That Works
The advent of EDAMAME’s AI Security Assistant signals a shift in endpoint security from a model of control to a model of collaboration. By making security developer‑first – engaging the people who use the devices – it achieves compliance through understanding rather than coercion. Developers stay happy and productive, while CISOs get the assurance of secure systems and up‑to‑date posture. It’s a win‑win that’s long overdue.
To put it plainly, here’s the contrast:
Old Way (Lockdown and Pray)
IT enforces policies via MDM/GPO without user input.
Devices are heavily restricted; users often feel spied on or hamstrung.
Many will work around controls (using personal devices or finding hacks), leading to blind spots.
Security issues often linger until a periodic scan or auditor finally catches them – if ever.
New Way (User‑Up with AI and Security Signals)
Every developer has a personal security assistant on their machine.
The assistant continuously collects and enriches signals, then collaborates with the user to fix issues in real time.
Users are treated as partners: they get explanations and retain final say (with undo for any change).
Compliance isn’t assumed or enforced by fear – the device actually proves it’s in a good state, and it’s achieved without heavy‑handed tactics or privacy invasions.
The net effect is better security without the bitterness. Developers actually adopt the tool (because it’s helpful, not nagging), which means the security improvements are real and widespread.
Our journey with EDAMAME exemplifies this philosophy, supercharged with modern AI and information‑rich security signals. The assistant watches your back 24/7, fixes what it safely can, and guides you through the rest – all while keeping you informed and in control. This transforms security from a dreaded disruption into a seamless part of your workflow: almost invisible when all is well, yet always available and vigilant when you need it.
Looking ahead, we’re excited not just about individual workstations, but about extending this personal CISO concept across entire organizations. Imagine every engineer, every contractor, every CI/CD pipeline node running with an AI guardian by its side, collectively forming a decentralized mesh of security that’s robust yet flexible. That’s the future we’re building towards. Developer velocity and strong security can coexist – and now we have the proof living on our machines.
Stay tuned: in the next part of this series, we’ll explore how the same AI assistant approach can secure distributed infrastructure (with Slack bots keeping the team in the loop on server posture, etc.). But on the front lines of the developer workstation, one thing is clear: empowering users with an AI sidekick, fueled by high‑quality security signals, isn’t just an experiment – it’s the new baseline for effective endpoint security.
Frank Lyonnet