Ångström-laboratoriet · 13 February 2026
For users of AI, chatbots and agents
Krister Hedfors
Krister Hedfors has worked in technical cybersecurity for 20 years. He is currently engaged as a cybersecurity architect helping large organisations develop, test, and scale AI-driven services.
AI Security Literacy means being well-versed in the security aspects of AI. In this presentation, we survey the field, interspersed with concrete thought-provoking examples, to help us navigate the new AI landscape with increased confidence.
The target audience ranges from beginners to power users of AI, chatbots, and agents.
How AI tools have become embedded in daily work
Concrete examples of what can go wrong
An attacker manipulates the AI's instructions by injecting hidden commands in data the model processes.
This is the SQL injection of the AI era.
Users inadvertently expose sensitive information by pasting it into AI tools.
Once sent to an external AI service, you have lost control of that data.
AI models generate confident-sounding but incorrect outputs:
Automation bias — the tendency to trust AI output without verification — is a growing risk as AI tools become more fluent.
AI agents call tools and take actions to advance the storyline — even when the storyline is wrong.
The agent does not verify whether its premise is true. It confidently executes a sequence of tool calls that follow the narrative, regardless of truthfulness.
Uncontrolled use of AI tools outside organizational IT governance:
Shadow AI creates invisible risk — organizations cannot protect against threats they do not know about.
Practical guidelines and the AI security mindset
Think of AI tools like a very capable but untrustworthy intern:
The goal is not to avoid AI, but to use it with awareness.