Ångström-laboratoriet · 13 February 2026

AI Security Literacy

For users of AI, chatbots and agents

Krister Hedfors

Krister Hedfors

About the speaker

Krister Hedfors has worked in technical cybersecurity for 20 years. He is currently engaged as a cybersecurity architect helping large organisations develop, test, and scale AI-driven services.

What is AI Security Literacy?

AI Security Literacy means being well-versed in the security aspects of AI. In this presentation, we survey the field, interspersed with concrete thought-provoking examples, to help us navigate the new AI landscape with increased confidence.

The target audience ranges from beginners to power users of AI, chatbots, and agents.

What we will cover

  1. How AI systems process and store your data
  2. Common attack vectors with real-world examples
  3. The risks of over-reliance on AI outputs
  4. Shadow AI and organizational governance
  5. Practical guidelines for safe AI usage
  6. Building an AI security mindset

Understanding the landscape

How AI tools have become embedded in daily work

The AI revolution in numbers

75% of knowledge workers use AI tools weekly
3.4B ChatGPT monthly visits worldwide
92% of Fortune 500 companies use AI assistants

How AI processes your data

Threats and attack vectors

Concrete examples of what can go wrong

Prompt injection

An attacker manipulates the AI's instructions by injecting hidden commands in data the model processes.

This is the SQL injection of the AI era.

Data leakage

Users inadvertently expose sensitive information by pasting it into AI tools.

Once sent to an external AI service, you have lost control of that data.

Hallucinations and over-reliance

AI models generate confident-sounding but incorrect outputs:

Automation bias — the tendency to trust AI output without verification — is a growing risk as AI tools become more fluent.

Agents follow the narrative

AI agents call tools and take actions to advance the storyline — even when the storyline is wrong.

The agent does not verify whether its premise is true. It confidently executes a sequence of tool calls that follow the narrative, regardless of truthfulness.

Shadow AI

Uncontrolled use of AI tools outside organizational IT governance:

Shadow AI creates invisible risk — organizations cannot protect against threats they do not know about.

Navigating safely

Practical guidelines and the AI security mindset

Practical guidelines

Building an AI security mindset

Think of AI tools like a very capable but untrustworthy intern:

The goal is not to avoid AI, but to use it with awareness.

Thank you

Questions?

25–30 min presentation + 15 min Q&A