LAST month, during Privacy Week of all weeks, I found myself in a new coffee spot in Masaki. I’m halfway through my second cup, watching people work quietly beside me.
My laptop fan is humming a little too loudly, a subtle but persistent reminder of the repairs I keep promising myself I’ll make. Miguel is playing on repeat in my new AirPods, two small joys that feel almost illicit.
And somewhere between refreshing my email, people-watching, and pretending not to notice how good the music sounds, I take one wrong – no, right – click, and stumble onto a question I probably wasn’t meant to ask that day.
I discovered something called Clawd.bot – since rebranded as Moltbot after a trademark dispute with Anthropic. It’s an open-source AI assistant. Kind of like Microsoft Copilot, but you install it yourself. You control it. You decide what it can see and do. I’ve covered open source here before, so this is not new ground.
Anyway, in theory, it’s empowering. In practice, it made me pause. AI assistants are becoming part of everyday life, and the speed at which we’re adopting them is outpacing the questions we’re asking about trust, access and privacy.
A year or two ago, tools like this felt niche, but they’re now mainstream. People use AI assistants to help them shop, draft emails, organise calendars, summarise documents, plan trips, manage tasks, and answer late-night questions they don’t want to Google.
Today, Microsoft Copilot sits inside Word and Outlook. Google Gemini lives in your browser, though not yet for every user. ChatGPT is open on millions of tabs right now. These tools feel safe because they’re familiar.
An assistant like Moltbot is kind of like that brilliant friend from college who always helped you hack your coding assignments. Only now, instead of lending you their brain, it lives on your phone or laptop. You can link it to your email, your notes, your calendar, even your documents. Some people let it browse the web for them. Others go all in, hooking up banking tools or password managers.
The appeal is obvious. And the risk? Just as obvious. Call it a thrill or a conundrum. Would you give a brand-new human assistant – someone you just met, with no background check – full access to your computer?
Most people wouldn’t.
Yet many of us are doing exactly that with software. Because it’s convenient, and nothing bad has happened yet.
Does anyone remember CrowdStrike?
In July 2024, a faulty update from CrowdStrike – a widely trusted cybersecurity company – caused millions of systems around the world to crash. Airports, banks and hospitals all went dark in the wake of a single glitch.
This wasn’t a hack. It wasn’t an attack. It was a trusted security tool pushing a routine update that happened to contain a mistake. The lesson wasn’t that CrowdStrike is bad. The lesson was that when software has deep system access, small errors scale fast.
AI assistants increasingly sit in that same privileged position. They can read. They can write. They can automate. And sometimes, they can act without asking twice.
Then there was the Chrome extension story that never hit front-page news, but maybe should have. A popular VPN extension – installed by millions, badged "Featured" in the official store – was quietly collecting users' AI conversations across ChatGPT, Copilot, Claude, Gemini and others. Around the same time, security researchers uncovered AuraStealer, a malware strain spreading through social platforms and fake software guides.
Victims ran commands themselves, thinking they were activating legitimate tools. Instead, the malware quietly harvested browser data, credentials, payment wallets, screenshots, system info – you name it.
Installing software yourself doesn't automatically make it secure. Open source doesn't mean harmless. Local doesn't mean private. It just means the responsibility has shifted to you.
So, where does that leave tools like Moltbot? AI assistants are fascinating, powerful and convenient. But the risk for most users isn't malice – it's over-permission. Giving them unrestricted access creates a single point of failure. If something breaks, ships a bad update, or touches a compromised system, the blast radius is huge.
That doesn't mean ‘don't use AI assistants.’ It means slow down before handing over the keys. Systems that tell us ‘I'm helpful, trust me’ deserve a second look.
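What does slowing down look like in practice? Here is a minimal sketch in Python – purely illustrative, and not how Moltbot or any real assistant is actually built – of a deny-by-default gate around an assistant's actions: a short allowlist, a confirmation step for anything sensitive, and a refusal for everything else. Every name in it is made up for the example.

# Hypothetical sketch: a deny-by-default gate around an AI assistant's actions.
# Action names like "read_calendar" and "send_email" are illustrative, not a real API.

ALLOWED_ACTIONS = {"read_calendar", "summarise_document"}  # explicitly granted
CONFIRM_FIRST = {"send_email", "delete_file"}              # allowed, but ask a human first

def gate(action, confirm):
    """Return True only if the assistant may perform this action."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in CONFIRM_FIRST:
        return confirm(f"Assistant wants to: {action}. Allow? [y/N] ")
    # Everything else – banking, passwords, shell access – is denied by default.
    return False

if __name__ == "__main__":
    ask = lambda prompt: input(prompt).strip().lower() == "y"
    for request in ("read_calendar", "send_email", "access_password_manager"):
        print(request, "->", "allowed" if gate(request, ask) else "blocked")

The code itself isn't the point; the posture is. Nothing gets blanket access, and the riskiest actions still pass through a human before they happen.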
Lastly – and this feels more like a confession than a conclusion – I've been going down a rabbit hole with PwC's work on cyber risks. What fascinates me isn't just the risks themselves, but how quickly they shape-shift, how companies are constantly trying to stay one step ahead of something that refuses to sit still. And there I am, thinking... huh. This is actually a story. I tip the waiter, let the thought linger, and smile to myself. Hmm, I think, I should probably write about this next week.
By Joshua Mabina
© 2026 IPPMEDIA.COM. ALL RIGHTS RESERVED