Course Overview

This training briefly reviews deep learning concepts, then teaches the development of generative AI models.

Key Learning Areas

  • Review of Core Python Concepts
  • Overview of Machine Learning / Deep Learning
  • Hands-on Introduction to Artificial Neural Networks (ANNs) and Deep Learning
  • Hands-on Deep Learning Model Construction for Prediction
  • Generative AI Fundamentals
  • Sequential Generation with RNNs
  • Variational Autoencoders
  • Generative Adversarial Networks
  • Transformer Architectures
  • Overview of Current Popular Large Language Models (LLMs)
  • Medium-Sized LLMs in Your Own Environment

Course Outline

Review of Core Python Concepts (if needed, depending on the tooling context)

  • Anaconda Computing Environment
  • Importing and manipulating Data with Pandas
  • Exploratory Data Analysis with Pandas and Seaborn
  • NumPy ndarrays vs. Pandas DataFrames (see the sketch after this list)
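
To make the last bullet concrete, here is a minimal sketch contrasting the two data structures (the column names and values are invented for illustration):

    import numpy as np
    import pandas as pd

    # NumPy ndarray: homogeneous dtype, purely positional indexing
    arr = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(arr[:, 0])                       # first column, by position

    # Pandas DataFrame: labeled axes, mixed dtypes, rich selection and summaries
    df = pd.DataFrame({"name": ["Ada", "Grace"], "height_cm": [170.0, 182.5]})
    print(df["height_cm"].mean())          # column selected by label
    print(df.describe())                   # quick exploratory summary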

Overview of Machine Learning / Deep Learning

  • Developing predictive models with ML
  • How Deep Learning techniques have extended ML
  • Use cases and models for ML and Deep Learning

Hands-on Introduction to Artificial Neural Networks (ANNs) and Deep Learning

  • Components of Neural Network Architecture
  • Evaluate Neural Network Fit on a Known Function (see sketch below)
  • Define and Monitor Convergence of a Neural Network
  • Evaluating Models
  • Scoring New Datasets with a Model
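
A minimal sketch of this module's core exercise, assuming Keras (the outline does not name a framework) and a synthetic target function:

    import numpy as np
    from tensorflow import keras

    # Known function to fit: y = sin(x), sampled with a little noise
    x = np.linspace(-3, 3, 512).reshape(-1, 1)
    y = np.sin(x) + np.random.normal(scale=0.05, size=x.shape)

    # Architecture components: input, hidden layers with nonlinearities, output
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Monitor convergence on a validation split; stop early when it plateaus
    history = model.fit(
        x, y, epochs=200, validation_split=0.2, verbose=0,
        callbacks=[keras.callbacks.EarlyStopping(patience=10,
                                                 restore_best_weights=True)],
    )

    # Evaluate the fit and score new inputs with the trained model
    print("final validation loss:", history.history["val_loss"][-1])
    print(model.predict(np.array([[0.5]]), verbose=0))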

Hands-on Deep Learning Model Construction for Prediction

  • Preprocessing Tabular Datasets for Deep Learning Workflows
  • Data Validation Strategies
  • Architecture Modifications for Managing Overfitting
  • Regularization Strategies
  • Deep Learning Classification Model Example (see sketch below)
  • Deep Learning Regression Model Example
  • Trustworthy AI Frameworks in the DL Prediction Context
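
A minimal classification sketch for this module, again assuming Keras; the tabular data is synthetic, standing in for a dataset that has already been scaled and encoded:

    import numpy as np
    from tensorflow import keras

    # Synthetic stand-in for a preprocessed tabular dataset (20 numeric features)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype("float32")
    y = (X[:, 0] + X[:, 1] > 0).astype("float32")

    # Overfitting controls: dropout plus an L2 weight penalty
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu",
                           kernel_regularizer=keras.regularizers.l2(1e-4)),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Hold-out validation split as a simple data validation strategy
    model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
    print(model.evaluate(X, y, verbose=0))     # [loss, accuracy]

A regression variant differs mainly in the output layer (a single linear unit) and the loss (for example, mean squared error).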

Generative AI Fundamentals

  • Generating new content versus analyzing existing content
  • Example use cases: text, music, artwork, code generation
  • Ethics of generative AI

Sequential Generation with RNNs

  • Recurrent neural networks overview
  • Preparing text data
  • Setting up training samples and outputs
  • Model training with batching
  • Generating text from a trained model (see sketch below)
  • Pros and cons of sequential generation
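
A minimal character-level sketch of the whole pipeline in this module, assuming Keras and a tiny toy corpus (a real exercise would prepare a much larger text):

    import numpy as np
    from tensorflow import keras

    # Prepare text data: build a character vocabulary over a toy corpus
    text = "hello world. hello generative models. " * 50
    chars = sorted(set(text))
    c2i = {c: i for i, c in enumerate(chars)}

    # Training samples: fixed-length character windows -> the next character
    seq_len = 20
    X = np.array([[c2i[c] for c in text[i:i + seq_len]]
                  for i in range(len(text) - seq_len)])
    y = np.array([c2i[text[i + seq_len]] for i in range(len(text) - seq_len)])

    model = keras.Sequential([
        keras.layers.Embedding(len(chars), 16),
        keras.layers.SimpleRNN(64),
        keras.layers.Dense(len(chars), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(X, y, batch_size=64, epochs=5, verbose=0)   # training with batching

    # Generate text: repeatedly sample the next character and slide the window
    generated = list(text[:seq_len])
    for _ in range(40):
        window = np.array([[c2i[c] for c in generated[-seq_len:]]])
        probs = model.predict(window, verbose=0)[0].astype("float64")
        probs /= probs.sum()
        generated.append(chars[np.random.choice(len(chars), p=probs)])
    print("".join(generated))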

Variational Autoencoders

  • What is an autoencoder?
  • Building a simple autoencoder from a fully connected layer (see sketch below)
  • Sparse autoencoders
  • Deep convolutional autoencoders
  • Applications of autoencoders to image denoising
  • Sequential autoencoders
  • Variational autoencoders
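
A minimal sketch of a fully connected autoencoder, assuming Keras; random arrays stand in for a real image set such as MNIST:

    import numpy as np
    from tensorflow import keras

    # Stand-in data: 1,000 flattened 28x28 "images" with values in [0, 1]
    X = np.random.rand(1000, 784).astype("float32")

    # Encoder compresses to a small latent code; decoder reconstructs the input
    inputs = keras.Input(shape=(784,))
    latent = keras.layers.Dense(32, activation="relu")(inputs)       # bottleneck
    outputs = keras.layers.Dense(784, activation="sigmoid")(latent)
    autoencoder = keras.Model(inputs, outputs)

    # Trained to reproduce its own input, so no labels are needed
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

    # The encoder alone maps new data into the latent space
    encoder = keras.Model(inputs, latent)
    print(encoder.predict(X[:2], verbose=0).shape)    # (2, 32)

The sparse, convolutional, denoising, sequential, and variational variants covered later in this module change the layers, the input corruption, or the loss, but keep this encode-then-decode structure.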

Generative Adversarial Networks

  • Model stacking
  • Adversarial examples
  • Generative and discriminative networks
  • Building a generative adversarial network (see sketch below)
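
A compact sketch of the stacked-model pattern, assuming Keras; real exercises would use image data and far more training steps, and the data here is a toy 2-D distribution:

    import numpy as np
    from tensorflow import keras

    latent_dim, data_dim, batch = 8, 2, 64

    # Generator: noise -> fake samples.  Discriminator: sample -> real/fake score.
    generator = keras.Sequential([
        keras.Input(shape=(latent_dim,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(data_dim),
    ])
    discriminator = keras.Sequential([
        keras.Input(shape=(data_dim,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    discriminator.compile(optimizer="adam", loss="binary_crossentropy")

    # Model stacking: freeze the discriminator and train the generator through it
    discriminator.trainable = False
    gan = keras.Sequential([generator, discriminator])
    gan.compile(optimizer="adam", loss="binary_crossentropy")

    for step in range(200):
        noise = np.random.normal(size=(batch, latent_dim)).astype("float32")
        real = np.random.normal(loc=3.0, size=(batch, data_dim)).astype("float32")
        fake = generator.predict(noise, verbose=0)
        # Discriminator learns to label real samples 1 and generated samples 0
        discriminator.train_on_batch(
            np.vstack([real, fake]),
            np.vstack([np.ones((batch, 1)), np.zeros((batch, 1))]))
        # Generator learns to make the frozen discriminator output 1 for its samples
        gan.train_on_batch(noise, np.ones((batch, 1)))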

Transformer Architectures

  • The problems with recurrent architectures
  • Attention-based architectures
  • Positional encoding
  • The Transformer: attention is all you need (see sketch below)
  • Time series classification using transformers
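
Because the attention computation itself is only a few lines, here is a framework-free NumPy sketch of scaled dot-product attention and the sinusoidal positional encoding referenced above:

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    def attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
        return softmax(scores) @ V

    # Sinusoidal positional encoding: injects token order, which attention
    # alone ignores
    def positional_encoding(seq_len, d_model):
        pos = np.arange(seq_len)[:, None]
        i = np.arange(d_model)[None, :]
        angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
        return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

    # Toy usage: 5 tokens with 8-dimensional representations attend to each other
    x = np.random.randn(5, 8) + positional_encoding(5, 8)
    print(attention(x, x, x).shape)    # (5, 8)

A full Transformer wraps this in multi-head projections, residual connections, and feed-forward blocks; for the time-series classification exercise a pooling layer and a softmax classifier sit on top.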

Overview of Current Popular Large Language Models (LLMs)

  • ChatGPT
  • DALL-E 2
  • Bing AI

Medium-Sized LLMs in Your Own Environment

  • Stanford Alpaca
  • Facebook Llama
  • Transfer learning with your own data in these contexts (see sketch below)
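
A minimal sketch of loading a locally stored model and generating text, as a starting point before any fine-tuning on your own data. It assumes the Hugging Face transformers library with a PyTorch backend, and the checkpoint directory shown is hypothetical:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical local checkpoint directory; point this at a downloaded
    # Llama- or Alpaca-style model made available in your own environment
    model_dir = "/models/my-medium-llm"

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)

    prompt = "Summarize the benefits of running an LLM locally:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))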

Prerequisites

Learners should have prior experience developing Deep Learning models, including feed-forward, recurrent, and convolutional neural network architectures.