AI Security Lab

Foundation Project for AI Security Engineering

System Health Check

Verify that the API backend is responding correctly.
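A health check can be scripted as well as clicked. Here is a minimal sketch using only the Python standard library; the URL (`http://localhost:8000/health`) and the response shape (`{"status": "ok"}`) are assumptions — substitute whatever your backend actually exposes.

```python
import json
import urllib.request

# Hypothetical health endpoint; adjust host, port, and path to your setup.
HEALTH_URL = "http://localhost:8000/health"

def is_healthy(payload: dict) -> bool:
    """Interpret a health-check JSON body (assumed to carry a 'status' field)."""
    return payload.get("status") == "ok"

def check_backend(url: str = HEALTH_URL) -> bool:
    """Fetch the health endpoint and report whether the backend looks alive."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        payload = json.load(resp)
    return is_healthy(payload)
```

Keeping the payload interpretation (`is_healthy`) separate from the network call makes the logic easy to unit-test without a running server.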

Mock Chat

Send a prompt to the mock model endpoint and see the JSON response.
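The same request can be made from a script. This is a sketch only: the endpoint path (`/api/chat`) and the request field name (`prompt`) are assumptions standing in for whatever the mock backend defines.

```python
import json
import urllib.request

# Hypothetical chat endpoint; replace with the lab's real path.
CHAT_URL = "http://localhost:8000/api/chat"

def build_chat_request(prompt: str) -> bytes:
    """Serialize a prompt into a JSON request body (field name assumed)."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def send_prompt(prompt: str, url: str = CHAT_URL) -> dict:
    """POST the prompt and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)
```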

Lab 1: Some prompts are intentionally blocked. Try a normal question, then try a suspicious one to see the difference.
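To give a feel for what "intentionally blocked" might mean, here is a toy keyword filter of the kind such a lab could use. The pattern list and matching rule are illustrative guesses, not the lab's actual logic — real prompt-injection defenses go well beyond substring checks.

```python
# Hypothetical blocklist; the lab backend's real rules may differ entirely.
BLOCKED_PATTERNS = [
    "ignore previous instructions",
    "reveal your system prompt",
    "disregard all prior rules",
]

def is_blocked(prompt: str) -> bool:
    """Flag prompts containing any known suspicious phrase (case-insensitive)."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)
```

A normal question passes; a prompt containing one of the phrases is flagged, which is the difference the lab asks you to observe.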

About This Lab

This is a starter project for exploring AI security concepts. Throughout the course, you'll enhance this application with:

  • Prompt injection defenses
  • Output filtering and sanitization
  • Sensitive data redaction
  • Request logging and monitoring
  • Policy enforcement
  • Rate limiting
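As a taste of one item on that list, sensitive data redaction can start as a regex pass over model output. This is a minimal sketch with assumed patterns and placeholder tokens; the course versions will be more thorough.

```python
import re

# Illustrative patterns only: matches simple emails and US-style SSNs.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace obvious sensitive tokens with placeholders before display."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    text = SSN_RE.sub("[REDACTED_SSN]", text)
    return text
```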