Course Outline
- Week 1: Introduction to Prompt Engineering
  - What is Prompt Engineering?
  - Configuring LLM Settings
  - Basics of Prompting
  - Designing Effective Prompts
- Week 2: Techniques in Prompt Engineering
  - Zero-Shot Prompting
  - Few-Shot Prompting
  - Chain-of-Thought Prompting
  - Self-Consistency
  - Meta Prompting
  - Generated Knowledge
  - Prompt Chaining
  - Tree of Thoughts
- Week 3: Advanced Techniques in Prompt Engineering
  - Retrieval Augmented Generation (RAG)
  - Fine-tuning
  - Function Calling
  - Prompt Caching
  - Generating Synthetic Datasets
- Week 4: Evaluating, Refining, and Ensuring Reliability in Prompts
  - Evaluating Prompt Performance
  - Ensuring Truthfulness and Reducing Bias in Prompts
  - Adversarial Prompting: Identifying and Mitigating Vulnerabilities
  - Enhancing Model Robustness through Prompt Refinement
  - Practical Exercise: Optimizing Prompts for Complex Tasks
- Week 5: Models & Tools for Prompt Engineers
  - Overview of Popular LLMs: GPT-4, Claude, and More
  - Advanced Function Calling Techniques
  - Model-Specific Prompting
  - Exploring LLM APIs: Practical Applications
- Week 6: Prompt Engineering Projects
  - Project 1: Building a Multi-Step Prompting System
  - Project 2: Generating and Refining Synthetic Data
  - Project 3: Developing a Custom API with Prompt Integration
  - Course Wrap-Up and Next Steps