AI Security Workshop: A practical approach to AI safety

Learn about risks in AI systems and how to defend against attacks such as data poisoning and prompt injection.
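
To give a flavour of the topic, here is a minimal, illustrative sketch of a naive prompt-injection screen. This is a toy keyword heuristic, not a production defence (the patterns and function name are invented for illustration); the workshop covers far more robust techniques.

```python
import re

# Illustrative only: a few phrasings commonly seen in prompt-injection
# attempts. Real attacks are far more varied than this short list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)
```

A heuristic like this is trivially bypassed by rephrasing, which is exactly why dedicated AI security testing tools are needed.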

This workshop will teach you, step by step, how to build, test, and validate an AI model, and how to download and run AI models locally.

Hands-on exercises use the PyTorch framework. We will demonstrate attacks on AI models and tools, and show how AI security testing tools can be used to identify vulnerabilities.

Presented by Dr Malcolm Shore, Technical Director at Kode-1 and an adjunct Professor at the Centre for Cyber Resilience at Deakin University. 

Malcolm is a renowned LinkedIn instructor with over 30 cybersecurity courses and 1M views.

The workshop is designed for IT and cybersecurity professionals and requires no prior knowledge of AI. Some familiarity with Python programming will be useful for those wishing to follow the hands-on exercises.

As part of this course, you will:

  • Gain an insight into AI governance frameworks
  • Learn how to monitor attacks and secure AI systems
  • Identify GenAI/LLM or ML model vulnerabilities
  • Learn how to augment your use of AI with specialist data
  • Use programmatic access to the Gemma LLM to build your AI application
  • Learn about AI security frameworks and how they can be applied
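
As a taste of the programmatic-access exercise, the sketch below shows one common way to call a locally running Gemma model over HTTP. It assumes an Ollama server on its default port serving a model tagged `gemma`; the endpoint URL, model tag, and helper names are assumptions for illustration, not workshop specifics.

```python
import json
import urllib.request

# Assumed setup: a local Ollama server with a Gemma model pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "gemma"

def build_payload(prompt: str) -> dict:
    """Construct the JSON body for a single non-streaming generation call."""
    return {"model": MODEL, "prompt": prompt, "stream": False}

def ask_gemma(prompt: str) -> str:
    """Send the prompt to the local server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Running models locally like this keeps prompts and data off third-party services, which is itself a relevant AI security consideration.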

More information

Disclaimer: This page contains links to a third-party website for the purpose of promoting their AI Month events. Our department does not endorse and is not affiliated with these third parties. We do not guarantee or assume responsibility for the accuracy or reliability of any information on these third-party websites.