
AI Security, Compliance, and Explainability Training

BONUS! Cyber Phoenix Subscription Included: All Phoenix TS students receive complimentary ninety (90) day access to the Cyber Phoenix learning platform, which hosts hundreds of expert asynchronous training courses in Cybersecurity, IT, Soft Skills, Management, and more!

Course Overview

In this two-day, instructor-led AI course, delivered in the Washington, DC metro area (Tysons Corner, VA, or Columbia, MD) or Live Online, attendees master AI auditing processes and learn how explainability techniques make AI systems transparent. Students also learn about AI’s role in various sectors, best practices for system security, and the intricacies of AI design and deployment. This course is intended for AI and Machine Learning Practitioners, IT Regulatory and Compliance Officers, Cybersecurity Professionals, Decision Makers, and Executives. Upon completion of this course, participants will be able to:

  • Understand the importance of machine learning interpretability
  • Explore different types of ML interpretability models
  • Analyze standard techniques and methods for explainability
  • Evaluate the effectiveness of interpretability methods
  • Apply XAI in various sectors

Schedule

AI Security, Compliance, and Explainability Training

Date | Location | Price
8/12/24 - 8/13/24 (2 days) | Open | $1,700
8/19/24 - 8/20/24 (2 days) | Open | $1,700
8/26/24 - 8/27/24 (2 days) | Open | $1,700
9/02/24 - 9/03/24 (2 days) | Open | $1,700
9/16/24 - 9/17/24 (2 days) | Open | $1,700
10/07/24 - 10/08/24 (2 days) | Open | $1,700
10/14/24 - 10/15/24 (2 days) | Open | $1,700
11/18/24 - 11/19/24 (2 days) | Open | $1,700

Prerequisites

All learners are required to have:

  • Foundational Knowledge in AI and Machine Learning
  • Familiarity with Data Management
  • Basic Cybersecurity Concepts

Course Outline

Ethics and Regulation

  • What is an AI System?
  • View of AI System
  • AI System Classifications
  • Branches of AI Today
  • AI by the numbers
  • AI – the Good
  • AI – the Bad
  • Principles of AI Ethics
  • Fairness
  • Accountability
  • Transparency
  • Explainability
  • Privacy and autonomy
  • Reliability
  • Ask ChatGPT 3.5
  • AI Ethics in Practice
  • Regulatory Compliance in AI Systems
  • What are the benefits of AI regulation?
  • What are the disadvantages of regulating AI?
  • Regulations and standards in AI
  • GDPR and data protection
  • AI in healthcare (HIPAA and other relevant laws)
  • AI in healthcare examples
  • AI in finance and regulatory compliance
  • US FINRA AI Deployment
  • AI in US finance examples
  • AI in the global finance examples
  • Case studies of AI non-compliance
  • Addressing Regulatory and Compliance Concerns
  • Dangers of Discrimination and Bias
  • Data Security and Data Privacy
  • Control and Security Concerns of AI
  • Cooperative Corporate Compliance

Security and Privacy

  • What is AI Cybersecurity?
  • Threats and challenges in AI security
  • Implementing AI in cybersecurity
  • Adversarial attacks
  • Model inversion and extraction
  • Data poisoning
  • Best practices for securing AI systems
  • Robustness techniques
  • Differential privacy (see the sketch after this outline)
  • Federated learning
  • Homomorphic encryption
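
To give a feel for one of the privacy techniques listed above, the snippet below is a minimal sketch of the Laplace mechanism behind differential privacy: a count query is released with calibrated noise so that any single record has limited influence on the result. The toy data and epsilon values are illustrative assumptions, not course lab material.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a differentially private count of records matching `predicate`.

    The true count has L1 sensitivity 1 (adding or removing one record
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example: count "patients" over 50 in a small, made-up dataset.
ages = [34, 57, 61, 45, 70, 52, 29, 48]
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_count(ages, lambda a: a > 50, epsilon=eps)
    print(f"epsilon={eps:<4} noisy count = {noisy:.2f} (true count = 4)")
```

Smaller epsilon means more noise and therefore stronger privacy, at some cost in accuracy.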

Secure AI Design and Deployment

  • Secure Software Development
  • Connectivity
  • Exploitation of AI Systems (Jailbreaks)
  • Infrastructure Concerns
  • System Vulnerabilities
  • Data Privacy
  • Data Leaks via Generating Text
  • OpenAI GPT-3/4 Data Location and Storage
  • Azure OpenAI
  • Adversarial Attacks
  • Malicious Use of AI
  • Bias and Discrimination
  • Regulatory and Ethical Considerations
  • Security and Privacy in Chatbots
  • Ensuring Security and Privacy
  • Data Protection
  • Enforcing Data Protection
  • Anonymization Techniques (see the sketch after this outline)
  • Best Practices for Security with Generative AI
  • Sources of Bias in AI
  • Tackling AI Bias
  • Real-world Case Studies
  • Autonomous Vehicles and the Trolley Problem
  • AI in Warfare and Weaponization
  • AI in Criminal Justice
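
Several items in this outline, such as data protection and anonymization techniques, can be previewed with a small example. The sketch below shows one simple anonymization step: masking email addresses and phone numbers in free text before it is sent to a generative model. The regular expressions are deliberately simplified assumptions and would need hardening (and usually a dedicated PII-detection library) in production.

```python
import re

# Simplified patterns for illustration only; real PII detection needs much
# broader coverage (names, street addresses, account numbers, and so on).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b")

def redact(text: str) -> str:
    """Replace obvious email addresses and US-style phone numbers with tags."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309 about the audit."
print(redact(prompt))
# -> Contact Jane at [EMAIL] or [PHONE] about the audit.
```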

AI Auditing and Certification

  • Introduction
  • Organizational Roles in AI Ethics and Compliance
  • Implementing AI Ethics Guidelines and Checklists
  • Key Components of an AI Audit
  • Steps in the AI Auditing Process
  • Post-Deployment Monitoring and Feedback Loops
  • Reporting and Recommendations
  • AI Certification Process

Explainable AI (XAI)

  • Introduction to Machine Learning Interpretability
  • Importance of ML interpretability
  • Different types of ML interpretability models
  • Model-agnostic interpretability methods (see the sketch after this outline)
  • Model-specific interpretability methods
  • Limitations of model-specific interpretability
  • Limitations of Model-agnostic interpretability
  • Global vs. Local interpretability
  • Interpretability in Deep Learning
  • Techniques and Methods for Explainability
  • Layer-wise relevance propagation (LRP)
  • Sensitivity analysis
  • Gradient-weighted class activation mapping (Grad-CAM)
  • Evaluating Interpretability
  • Techniques for evaluating interpretability
  • Overview of existing evaluation frameworks
  • Model-Agnostic Visual Analytics (MAVA)
  • Human-AI Collaborated Evaluation (HACE)
  • Interpretability in Large Language Models
  • Interpretability in Generative LLMs
  • Common evaluation metrics for generative AI models
  • Common evaluation metrics – Diversity metrics
  • Common evaluation metrics – Likelihood
  • Common evaluation metrics – Perplexity
  • Common evaluation metrics – Inception Score
  • Common evaluation metrics – FID
  • Common evaluation metrics – BLEU
  • Common evaluation metrics – ROUGE
  • Common evaluation metrics – Human evaluation
  • Techniques for Interpreting Large Language Models
  • Importance of XAI in various sectors
  • XAI in Healthcare: Enhancing Care and Transparency
  • XAI in Finance: Driving Decisions and Building Trust
  • XAI in Legal Systems: Fairness and Accountability
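
As a preview of the model-agnostic interpretability methods covered above, here is a minimal sketch of permutation feature importance: shuffle one feature at a time and measure how much a trained model's held-out accuracy drops. The dataset and classifier are placeholders chosen for illustration, not the tooling used in the labs.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train any black-box classifier; the method below never looks inside the
# model, which is what makes it model-agnostic.
data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)

# Permutation importance: shuffle one feature at a time and record the drop
# in held-out accuracy; bigger drops mean the model relies on that feature more.
rng = np.random.default_rng(0)
importances = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y_te))

for j in np.argsort(importances)[::-1][:5]:
    print(f"{data.feature_names[j]:<25} importance drop = {importances[j]:.4f}")
```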

Lab Exercises

  • Lab 1. AI Ethics and Regulation
  • Lab 2. Understanding security and privacy
  • Lab 3. Learning the Colab Jupyter Notebook Environment
  • Lab 4. Guardrails with template manual
  • Lab 5. Guardrails with system prompt (see the sketch after this list)
  • Lab 6. (Optional) Implementing NeMo Guardrails for LLM Response Restriction
  • Lab 7. Designing an Audit Process for OpenAI’s ChatGPT
  • Lab 8. AstraZeneca Ethics-Based AI Audit Framework Design
  • Lab 9. Designing a Gender Bias Test for a Large Language Model (LLM)
  • Lab 10. Exploring Machine Learning Interpretability (MLI) with H2O’s Driverless AI
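
Labs 4 through 6 build guardrails around a large language model. As a rough preview of the "guardrails with system prompt" idea, and not the actual lab instructions, the sketch below sends a restrictive system message through the OpenAI Python client so the assistant declines requests outside a defined scope; the model name and refusal wording are assumptions made for this example.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt acting as a simple guardrail: it narrows the assistant's
# scope and spells out how to refuse out-of-scope requests.
GUARDRAIL_SYSTEM_PROMPT = (
    "You are a compliance assistant. Only answer questions about AI "
    "governance, auditing, and security. If a request is outside that "
    "scope, reply exactly: 'I can only help with AI compliance topics.'"
)

def guarded_chat(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(guarded_chat("Summarize the key steps of an AI audit."))
print(guarded_chat("Write me a poem about the ocean."))  # expected to be refused
```

Prompt-only guardrails can be bypassed by determined users, which is one motivation for dedicated frameworks such as the NeMo Guardrails used in Lab 6.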

Phoenix TS is registered with the National Association of State Boards of Accountancy (NASBA) as a sponsor of continuing professional education on the National Registry of CPE Sponsors. State boards of accountancy have final authority on the acceptance of individual courses for CPE credit. Complaints regarding registered sponsors may be submitted to the National Registry of CPE Sponsors through its website: www.nasbaregistry.org.
