NLP with Transformers & Large Language Models
Table of Contents
1. Introduction to NLP & Transformers
   - Evolution of NLP
   - Why Transformers?
2. Understanding Transformer Architecture
   - Self-Attention Mechanism
   - Multi-Head Attention
   - Complete Transformer Block
3. Pre-trained Language Models
   - BERT (Bidirectional Encoder Representations from Transformers)
   - GPT (Generative Pre-trained Transformer)
   - T5 (Text-to-Text Transfer Transformer)
4. Fine-tuning & Transfer Learning
   - Fine-tuning BERT for Classification
   - Fine-tuning with LoRA (Low-Rank Adaptation)
5. Working with LLMs
   - Using LLM APIs
   - Prompt Engineering
   - Retrieval-Augmented Generation (RAG)
6. Advanced Techniques
   - Quantization for Efficiency
   - Multi-GPU Training
7. Production Deployment
   - Model Serving
8. Future Directions