Cyber Threat Intelligence Podcast

Special Episode - Safer AI Assistants, Smarter Choices

Pedro Kertzman

Your assistant wants to learn everything about you, remember it forever, and act on your behalf across apps and devices. That promise is powerful—and risky. We break down a no-nonsense safety plan for adopting an always-on AI assistant without handing over your digital life, drawing on years in cybersecurity and months building a personal assistant that listens, learns, and controls real tools.

We start with the foundation: identity isolation and permission design. Instead of connecting your primary accounts, create fresh Google or iCloud identities and selectively share calendars, folders, and photos into that sandbox. Then layer in separation of duties: let the assistant draft emails, code, and automations, but run reviews through a separate model before deploying anything. You’ll hear concrete workflows that preserve the magic of autonomy while catching mistakes, bad defaults, and excessive permissions.

From there, we get tactical about risk. Scope your first use case tightly and keep IoT devices off the table until you’ve watched the system behave for weeks. If you can, use a dedicated machine; if not, contain the runtime with hardened Docker setups—non-root users, minimal images, restricted networking, and secrets handled correctly. Turn on comprehensive logging and make the assistant explain what it did and why. Most importantly, disable auto-install and auto-update for skills and plugins, review changelogs, and promote updates only after testing. Assume failure, keep backups, and apply least privilege at every step.

We close with a direct ask to security professionals: help shape safer AI by contributing hardened images, documentation, and practical guardrails to open-source projects. The genie isn’t going back; users are adopting these tools today. If you’ve got expertise in containers, threat modeling, or secure defaults, your contribution can cut attack surface for thousands of people overnight. If this resonates, subscribe, share with a friend who’s testing an assistant, and leave a review with the one safeguard you plan to implement next.

Thanks for tuning in! If you found this episode valuable, don't forget to subscribe, share, and leave a review. Got thoughts or questions? Connect with us on our LinkedIn Group: Cyber Threat Intelligence Podcast; we'd love to hear from you. If you know anyone with CTI expertise who would like to be interviewed on the show, just let us know. Until next time, stay sharp and stay secure!

Pedro Kertzman:

With great power comes great responsibility. You've probably heard that quote before, but it feels more important than ever right now. And if you're here, I'll take a guess: it's probably for one of two reasons. You're either already using, or planning to use, the new Moltbot, or you're from the cybersecurity or information security industry. Either way, this episode is for you.

Let me start by addressing the most important question: who's this guy, and why should I listen to him? I've worked for many years in the cybersecurity industry across architecture, defense, and cyber threat intelligence. But most importantly for today's conversation, for the past six or seven months I've also been building my own personal digital assistant. It's fairly similar in concept to Moltbot: no wake word, always listening, learning all the time. So it's a big-brain kind of thing, but also super capable; it can control a bunch of tools: browsing, home automation, and so on.

Another similarity is RAG, retrieval-augmented generation. It's basically the way the LLMs, or AI models if you will, store your personal preferences. It's where they keep how you like your coffee, your name, your family's names, the time you like to wake up every day, your favorite sports, and so on. Basically, all your personal information ends up stored there. Funny enough, when I explain this to family or friends who aren't from the technology industry, I use the analogy of an octopus: a big brain with super-capable tentacles that can act almost autonomously.

That experience, combined with my professional background, is why I felt I should be here today talking to you. And let me be very clear: I'm not here to tell you "don't do it" or "you shouldn't do it." I'm here to share, to the best of my knowledge, how you can do it in a safer way today. I'm going to share seven tips, or suggestions if you will, on how to run Moltbot more safely. And for the cybersecurity folks, I have a very important message for you in the second part of this episode, so hang tight. I'll start by addressing the suggestions for end users.

Suggestion number one, probably the most important: don't give Moltbot, a fairly new project, the keys to your entire digital life. Don't give it your existing accounts. Create a new account, Google or iCloud; iCloud is about a dollar per month. Most platforms, including those two, let you selectively share calendars, folders, photos, you name it, into that new account. That way you control the exposure to this new platform as it evolves and gets more secure.

Suggestion number two: treat it as, let's say, a junior assistant. "Draft this email." "Draft this code and give it to me to review." And if you're not technical, here's an important tip: use senior assistants. Claude Code and ChatGPT, for example, have out-of-the-box integration with Moltbot, so use that. It's another subscription, for sure, but it's very important from a security standpoint. Moltbot can draft code or even entire applications; then you hand it to ChatGPT for a technical and security review, and only when ChatGPT says "yep, it's good" do you give Moltbot the ability to go and deploy it.
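For the technical listeners, here is a toy sketch of the RAG "memory" idea described above. It is not Moltbot's actual implementation; real assistants use embeddings and a vector database, and the facts and keyword matching here are made up purely for illustration. The flow is the point: store personal facts, retrieve the relevant ones, and inject them into the model's prompt.

```python
# Toy sketch of retrieval-augmented memory: store personal facts,
# pull back the relevant ones, and prepend them to the model prompt.
# Real assistants use embeddings plus a vector store; this uses plain
# keyword overlap just to show the flow.

memory: list[str] = []

def remember(fact: str) -> None:
    """Store a personal preference or fact."""
    memory.append(fact)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the stored facts sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(memory,
                    key=lambda f: len(q_words & set(f.lower().split())),
                    reverse=True)
    return [f for f in scored[:top_k] if q_words & set(f.lower().split())]

def build_prompt(user_message: str) -> str:
    """Inject retrieved facts as context before the user's message."""
    context = "\n".join(retrieve(user_message))
    return f"Known about the user:\n{context}\n\nUser: {user_message}"

remember("The user likes their coffee black, no sugar.")
remember("The user wakes up at 6:30 every day.")
print(build_prompt("make coffee for me"))
```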
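And here is roughly what the junior-drafts, senior-reviews flow from suggestion two can look like in code. This is a sketch, not Moltbot's real integration: the model names, the APPROVE convention, and the deploy() stub are placeholder assumptions. The pattern is what matters: nothing ships until a second, independent model signs off.

```python
# Sketch of a separation-of-duties pipeline: one model drafts, a
# different model reviews, and deployment only happens on approval.
# Model names and deploy() are placeholders, not Moltbot's actual API.
from openai import OpenAI        # pip install openai
import anthropic                 # pip install anthropic

drafter = OpenAI()               # reads OPENAI_API_KEY from the env
reviewer = anthropic.Anthropic() # reads ANTHROPIC_API_KEY from the env

def draft(task: str) -> str:
    resp = drafter.chat.completions.create(
        model="gpt-4o-mini",     # placeholder model choice
        messages=[{"role": "user", "content": f"Write code for: {task}"}],
    )
    return resp.choices[0].message.content

def review(code: str) -> str:
    msg = reviewer.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model choice
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Do a security review of this code. "
                       "Reply APPROVE or REJECT with reasons:\n" + code,
        }],
    )
    return msg.content[0].text

def deploy(code: str) -> None:
    print("deploying...")        # stand-in for your real deploy step

code = draft("a script that archives yesterday's photos")
verdict = review(code)
if verdict.strip().startswith("APPROVE"):
    deploy(code)
else:
    print("Reviewer rejected the draft:\n", verdict)
```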
Third suggestion: start small. Give it one use case, one draft, one small task, and as you progress and understand how it works, give it more tasks, parallel tasks, and so on. In particular, don't give it IoT or home automation devices, or access to those kinds of platforms and tools, from the get-go. Imagine you wake up in the middle of the night and your garage door is open, your front door is unlocked, and the house temperature is freezing. So again, start small, let the technology evolve and get safer, and then start planning to add the things that aren't only digital but could have a real impact on your life.

Suggestion number four: keep it separate. If you can, physically separate it: have another computer, or buy one. If you can't, just don't put it straight on your personal computer with all your information on it. Imagine it deletes a photo or video you only had there. That's not good, right? If you only have one computer, no problem: use your senior assistant. Claude Code can deploy a container for you, and even better, it can deploy that container following cybersecurity best practices. That reduces the exposure Moltbot has out of the box, so use it in your favor; I'll sketch what that can look like at the end of this segment.

Five: ask Moltbot "how did you do this?" and "why did you do that?" Have that conversation to understand how the system works. It's going to be your assistant, so talk to it; that's how it goes from junior to, you know, less junior. Your senior assistants can be your friends here too. You can ask Claude Code, for example, to go collect the logs and give you a report of all activity from yesterday, from earlier today, or for the whole week. And if you know where the logs are, you can just copy and paste them into ChatGPT; it will also give you really good reports about the activity. There's a small sketch of that after this segment as well.

Suggestion number six, and this might be the most cybersecurity-ish of all the suggestions. It's more technical, but simple to understand: just don't do auto-updates or auto-installs of skills and plugins. It's fairly simple for a digital criminal, a threat actor, whatever name you want to use, to exploit vulnerabilities in those types of plugins or skills, especially when they're first released. So disable that, and again, your senior assistants can be your friends here; just ask them to guide you on where to click and how to turn those options off.

And suggestion number seven: just assume that things can go wrong. Things break; even the most reliable piece of software can break. This is a new project, so assume things can go wrong. Don't put all your information in there from the get-go. Go slowly as you progress and understand how things work, and then give it more information. And if a piece of information would really hurt you if it ended up in the wild, don't put it there. You don't need to put everything in there.
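As promised, here is a rough sketch of the kind of hardened container launch suggestion four is pointing at, using the Docker SDK for Python. This is my illustration, not an official Moltbot setup; the image name, paths, and resource numbers are assumptions you would adapt. The hardening knobs are the point: non-root user, read-only filesystem, dropped capabilities, no new privileges, memory and process limits, and restricted networking.

```python
# Sketch: launching an assistant in a locked-down container with the
# Docker SDK for Python (pip install docker). Image name, paths, and
# limits are placeholders; the hardening options are what matter.
import docker

client = docker.from_env()

container = client.containers.run(
    "example/assistant:pinned-tag",   # pin a specific tag, never :latest
    detach=True,
    user="1000:1000",                 # run as a non-root user
    read_only=True,                   # read-only root filesystem
    cap_drop=["ALL"],                 # drop all Linux capabilities
    security_opt=["no-new-privileges:true"],
    network_mode="bridge",            # or "none" while you evaluate it
    mem_limit="1g",                   # cap memory use
    pids_limit=200,                   # cap process count
    environment={"ASSISTANT_DATA_DIR": "/data"},  # pass secrets at
                                      # runtime, never baked into the image
    volumes={"/srv/assistant-data": {"bind": "/data", "mode": "rw"}},
)
print(container.short_id)
```

Starting with networking off ("none") while you watch the logs for a few weeks pairs nicely with the start-small advice from suggestion three.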
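And for suggestion five, a tiny sketch of the "collect the logs and report on activity" idea. The log path and line format here are invented for illustration; the takeaway is that even a simple script can surface what the assistant did and flag the actions you would want to review.

```python
# Sketch: summarize an assistant's activity log and flag risky actions.
# The path and line format (timestamp<TAB>action<TAB>detail) are assumed.
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/srv/assistant-data/activity.log")  # hypothetical path
RISKY = {"delete", "deploy", "install", "send_email", "unlock"}

actions: Counter[str] = Counter()
flagged: list[str] = []

for line in LOG_FILE.read_text().splitlines():
    parts = line.split("\t")
    if len(parts) < 2:
        continue  # skip malformed lines
    action = parts[1]
    actions[action] += 1
    if action in RISKY:
        flagged.append(line)

print("Activity summary:")
for action, count in actions.most_common():
    print(f"  {action}: {count}")

print(f"\n{len(flagged)} action(s) worth reviewing:")
for line in flagged:
    print("  " + line)
```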
And now, talking to my friends in the cybersecurity and information security industries: security was never really about stopping users from doing things, right? Innovation happens whether we want it or not. I don't want to repeat myself, but I will: this genie is not going back in the bottle. So it's up to us to decide whether we're going to help make innovation safe to use, whether we're going to help shape it in a safe way. If you're old enough to remember when email arrived: "I'm going to keep sending letters; why should I use digital? I can send letters." Then BYOD. Then cloud. Oh my goodness, remember cloud? "No, I have to touch my servers; I have walls here; I don't know where this information is going; I need to see my things." Imagine our lives today without cloud. So we can help drive this in a safe way. That's our main contribution to innovation, to society, to progress, right? That's mostly why I'm here: to help you think about that as well.

Now, on the technicalities: if you're a container or Docker expert, for example, you can have a lot of impact right now by contributing to the project. And again, it's open source. Thanks to Peter and his team, we can contribute to it; we can use our expertise to drive innovation forward in a safe way. Imagine if, instead of just downloading things the way thousands of people are doing right now, users could download a hardened Docker image with Moltbot on it. That alone reduces the exposure, the attack surface, instead of people being able to find exposed Moltbot instances on Shodan and the like. So you can really help right now. I could vibe-code that myself, but I don't think this is the moment for things like that.

And if you don't know Docker, that's okay. Documentation matters too: users need to know how to enable and disable things, best practices, recommendations, and so on. Every piece of technology we know how to run safely is going to benefit a project like this; there are so many components to it. And again, the numbers you see mean users really find value in it, and they're not going to stop. So it's up to us to do something now, or to sit idle and do nothing. I don't think it's time for that. It's time to roll up our sleeves, use the knowledge we have, and go shape history right in front of our eyes. Thank you.