AISecurityLiteracy.dev

Presentations and resources on the security aspects of AI, chatbots, and agents.

About

Krister Hedfors

Krister Hedfors has worked in technical cybersecurity for 20 years. He is currently engaged as a cybersecurity architect helping large organisations develop, test, and scale AI-driven services.

Presentations

LLM information leakage (general)

AISecurityLiteracy.dev

General threat model and defense program for information leakage in LLM applications, covering training data extraction, membership inference, prompt injection, RAG leakage, tool exfiltration, and measurable controls.

20–25 min presentation + 10 min Q&A

View slides

LLM leakage via shared hardware (deep dive)

AISecurityLiteracy.dev

A deep technical dive into cross-tenant information leakage risks in shared accelerator environments, including GPU memory remanence, side channels, virtualization constraints, and isolation-by-sensitivity decisions.

20–25 min presentation + 10 min Q&A

View slides

AI security literacy — for users of AI, chatbots and agents

Ångström-laboratoriet

You are warmly invited to a seminar on AI security literacy with Krister Hedfors. AI security literacy means being well-versed in the security aspects of AI. In this presentation, we survey the field, interspersed with concrete, thought-provoking examples, to help us navigate the new AI landscape with greater confidence. The target audience ranges from beginners to power users of AI, chatbots, and agents.

25–30 min presentation + 15 min Q&A

View slides