Ångström-laboratoriet · 13 February 2026
For users of AI, chatbots and agents
Krister Hedfors
Krister Hedfors has worked in technical cybersecurity for 20 years. He is currently engaged as a cybersecurity architect helping large organisations develop, test, and scale AI-driven services.
AI Security Literacy means being well-versed in the security aspects of AI. In this presentation, we survey the field, interspersed with concrete thought-provoking examples, to help us navigate the new AI landscape with increased confidence.
The target audience ranges from beginners to power users of AI, chatbots, and agents.
How AI tools have become embedded in daily work
Concrete examples of what can go wrong
An attacker manipulates the AI's instructions by hiding commands in data the model processes.
This is the SQL injection of the AI era.
Users inadvertently expose sensitive information by pasting it into AI tools.
AI models generate confident-sounding but incorrect outputs.
AI agents call tools and take actions to advance the storyline — even when the storyline is wrong.
The agent does not verify whether its premise is true. It confidently executes a sequence of tool calls that follow the narrative, regardless of truthfulness.
Reference: hacka.re
Uncontrolled use of AI tools outside organizational IT governance.
Practical guidelines and the AI security mindset
Running AI on your own machine keeps your data entirely under your control.
Think of AI tools like a very capable but untrustworthy intern:
This file may be executable. Verify its contents before running.
Uses OpenAI Embeddings API (text-embedding-3-small) with the API key configured above.
API key stored locally, sent only to selected provider.