Modified Generative AI and LLMs in Practice
This module offers an in-depth exploration of Generative AI and Large Language Models (LLMs),
including their architecture, applications, ethical considerations, and the latest developments in the
field. Students will gain both theoretical and practical knowledge in working with Generative AI and
LLMs and understand their significance in various domains.
● Week 1
○ Generative AI Fundamentals (Video Recording)
■ Introduction to Generative AI
● https://www.coursera.org/learn/introduction-to-generative-ai/lecture/TJ28r/introduction-to-generative-ai
■ Fundamentals of Image Generation
● https://www.coursera.org/learn/introduction-to-image-generation/lecture/geDgG/introduction-to-image-generation
● https://medium.com/@sivavimelrajhen/a-basic-introduction-to-image-generation-methods-25719fdea31e
■ Overview of Variational Autoencoders (VAEs)
● https://www.coursera.org/learn/introduction-to-image-generation/lecture/geDgG/introduction-to-image-generation (added under Fundamentals of Image Generation)
● https://www.coursera.org/learn/generative-deep-learning-with-tensorflow/lecture/yVhK9/variational-autoencoders-overview
■ Understanding Generative Adversarial Networks (GANs)
● https://www.coursera.org/learn/apply-generative-adversarial-networks-gans/lecture/3qd3n/overview-of-gan-applications
○ Practical
■ Hands-on session on basic image generation using VAEs (a minimal VAE sketch follows at the end of this week's outline)
■ Assignment: Implementing simple image generation models
○ Discussion session
■ Discuss the content, practical work, and assignment
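To make the Week 1 practical concrete, here is a minimal sketch of a VAE for 28x28 grayscale images. It assumes PyTorch; the layer sizes, MNIST-style input shape, and latent dimension are illustrative choices, not course-provided code.

```python
# Minimal VAE sketch for 28x28 grayscale images (PyTorch assumed; sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(784, 400)            # encoder hidden layer
        self.mu = nn.Linear(400, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(400, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, 400)    # decoder hidden layer
        self.dec2 = nn.Linear(400, 784)           # reconstruction

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)   # z = mu + sigma * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence between q(z|x) and N(0, I)
    bce = F.binary_cross_entropy(recon, x.view(-1, 784), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# After training, new images come from decoding random latent vectors:
# model = VAE(); samples = model.decode(torch.randn(16, 20))
```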
● Week 2
○ Advanced Techniques in Image Generation (Video Recording)
■ In-depth study of GAN architecture (a simplified training-loop sketch follows at the end of this week's outline)
● https://www.coursera.org/learn/apply-generative-adversarial-networks-gans/home/week/2
● https://www.coursera.org/learn/generative-ai-for-everyone/lecture/FhzP3/how-generative-ai-works
■ Introduction to different GAN architectures
● https://www.coursera.org/learn/apply-generative-adversarial-networks-gans/home/week/2
● https://www.coursera.org/learn/apply-generative-adversarial-networks-gans/home/week/3
● https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/lecture/zF2FR/stylegan-overview
○ Practical
■ Selecting and understanding appropriate StyleGAN models
● https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/lecture/zF2FR/stylegan-overview
■ Fine-tuning a pre-trained StyleGAN model
■ Performance evaluation and generating diverse faces
○ Discussion session
■ Discuss the content, practical work, and assignment
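As a companion to the GAN-architecture material, the sketch below shows one simplified adversarial training step in PyTorch using small MLP networks. It illustrates the discriminator/generator update pattern rather than StyleGAN itself; all sizes and hyperparameters are assumptions.

```python
# Minimal GAN training-step sketch (PyTorch assumed; simplified MLPs, not StyleGAN).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 784  # assumed sizes for flattened 28x28 images

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Update the discriminator: real images -> 1, generated images -> 0
    fake_images = G(torch.randn(batch, latent_dim)).detach()  # stop gradients into G
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator: try to make D label its samples as real
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage: d, g = train_step(torch.randn(32, 784))  # replace with real, normalised image batches
```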
● Week 3
○ Application: Faces from Real Images (Video Recording)
■ Integrating theory into practical applications
● https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/lecture/zF2FR/stylegan-overview
● https://www.coursera.org/learn/apply-generative-adversarial-networks-gans/lecture/sY4Fc/image-to-image-translation
■ Real-world applications and ethical considerations
● https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/lecture/ZA46w/disadvantages-of-gans
● https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/supplement/ykwev/optional-notebook-gan-debiasing
○ Practical
■ Generating faces from real images using GANs
● Data preprocessing for the project (see the preprocessing sketch at the end of this week's outline)
● Selecting and fine-tuning a StyleGAN model for the project
● Performance evaluation and showcasing the generated faces
○ Discussion session
■ Discuss the content, practical work, and assignment
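For the data-preprocessing step of the faces project, one possible starting point is a torchvision transform pipeline like the sketch below. The image size, normalisation values, and the "data/faces" directory are placeholders, not project requirements.

```python
# Data-preprocessing sketch for a face-generation project (torchvision assumed; paths are placeholders).
import torch
from torchvision import datasets, transforms

# Resize, centre-crop, and normalise face images to the resolution the chosen GAN expects.
preprocess = transforms.Compose([
    transforms.Resize(256),                      # shorter side -> 256 px
    transforms.CenterCrop(256),                  # square crop around the face region
    transforms.ToTensor(),                       # [0, 255] -> [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # [0, 1] -> [-1, 1], matching tanh outputs
])

# ImageFolder expects images grouped in subdirectories under the root path (hypothetical path).
dataset = datasets.ImageFolder("data/faces", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
```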
● Week 4
○ Transformers and Pre-trained Models (Video Recording)
■ The Transformer architecture
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/3AqWI/transformers-architecture
■ Understanding the mechanics of attention (an attention sketch follows at the end of this week's outline)
● https://www.coursera.org/learn/generative-ai-with-llms/supplement/Il7wV/transformers-attention-is-all-you-need
■ Overview of pre-trained language models
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/R0xbD/generating-text-with-transformers
○ Quiz
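Since Week 4 centres on the Transformer and the mechanics of attention, a short sketch of scaled dot-product attention may help connect the lecture material to code. It assumes PyTorch and single-head attention without the surrounding projection layers.

```python
# Scaled dot-product attention sketch, following "Attention Is All You Need" (PyTorch assumed).
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, seq_len, d_k); mask broadcasts to (batch, seq_len, seq_len)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # similarity of each query to each key
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)                  # attention distribution over keys
    return weights @ v, weights                          # weighted sum of values

# Usage:
# q = k = v = torch.randn(2, 5, 64)
# out, attn = scaled_dot_product_attention(q, k, v)
```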
● Week 5
○ Introduction to Large Language Models (LLMs) (Video Recording)
■ Overview of LLMs
● https://www.coursera.org/learn/generative-ai-for-everyone/lecture/lXOQ0/llms-as-a-thought-partner
■ Key Architectural Components
● https://www.coursera.org/learn/generative-ai-with-llms/home/week/1
○ Practical
■ Transformer architecture
■ Introduction to Hugging Face and the Transformers library
■ Overview of the frameworks used with LLMs (LangChain and LlamaIndex)
■ Assignment: Implementing simple LLMs for text generation (a text-generation sketch follows at the end of this week's outline)
○ Discussion session
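For the Week 5 assignment on simple text generation, a minimal starting point with the Hugging Face Transformers pipeline could look like the sketch below; GPT-2 and the sampling settings are only illustrative choices.

```python
# Text-generation sketch with the Hugging Face Transformers pipeline (GPT-2 used as a small example model).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads weights on first run

outputs = generator(
    "Generative AI can help students",
    max_new_tokens=40,        # length of the continuation
    num_return_sequences=2,   # generate two candidate continuations
    do_sample=True,           # sample rather than greedy decode
    temperature=0.8,          # lower = more conservative text
)
for out in outputs:
    print(out["generated_text"])
```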
● Week 6
○ Prompt engineering (Video Recording)
■ Understanding prompt engineering in LLMs
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/ZVUcF/prompting-and-prompt-engineering
● https://www.coursera.org/learn/generative-ai-prompt-engineering-for-everyone/home/week/1 (full course; cover as much as possible)
○ Practical
■ Hands-on exercises on prompt engineering
■ What a prompt template is and why it matters; differences between GPT and Gemini/Google Vertex AI (a template sketch follows at the end of this week's outline)
■ Do all LLMs follow instructions equally well? Closed-source vs. open-source LLMs
■ Different prompting techniques (may include examples from the LangChain framework)
■ Project assignment: Creating custom prompts for LLMs
○ Discussion session
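To illustrate what a prompt template is, the sketch below uses LangChain's PromptTemplate. The import path varies across LangChain versions, and the role/question variables are hypothetical examples rather than a prescribed format.

```python
# Prompt-template sketch (LangChain assumed; import path differs between LangChain versions).
from langchain.prompts import PromptTemplate

# A reusable template keeps the instruction and placeholders separate from the user input.
template = PromptTemplate(
    input_variables=["role", "question"],
    template=(
        "You are a helpful {role}.\n"
        "Answer the question in at most three sentences.\n"
        "Question: {question}\n"
        "Answer:"
    ),
)

prompt = template.format(role="teaching assistant", question="What is a variational autoencoder?")
print(prompt)  # this string is what gets sent to the LLM (GPT, Gemini, etc.)
```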
● Week 7
○ Fine-tuning strategies (Video Recording)
■ Exploring model fine-tuning strategies in depth
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/exyNC/instruction-fine-tuning
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/cTZRI/fine-tuning-on-a-single-task
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/notob/multi-task-instruction-fine-tuning
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/rCE9r/parameter-efficient-fine-tuning-peft
● https://www.coursera.org/learn/generative-ai-for-everyone/lecture/EIX6K/fine-tuning
○ Practical
■ Why fine-tuning is used, and fine-tuning LLMs for a specific task
■ LoRA and QLoRA fine-tuning techniques with the Hugging Face PEFT library (a setup sketch follows at the end of this week's outline)
● Why LoRA and QLoRA: adapter-based, parameter-efficient fine-tuning
■ Assignment: Implementing and evaluating fine-tuning strategies for LLMs
○ Discussion session
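As a starting point for the LoRA exercise, the sketch below wires a small causal LM into the Hugging Face PEFT library. The base model, rank, and target modules are assumptions that would change with the model actually used in the assignment.

```python
# LoRA fine-tuning setup sketch with the Hugging Face PEFT library (model name and settings are assumptions).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in for a larger LLM

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection; differs per architecture
    fan_in_fan_out=True,        # GPT-2 stores these weights as (transposed) Conv1D layers
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# The wrapped model can now be passed to a regular Trainer or training loop.
# QLoRA additionally loads the base model in 4-bit (see transformers' BitsAndBytesConfig).
```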
● Week 8
○ Retrieval-Augmented Generation (Video Recording)
■ Overview of retrieval-augmented generation
● https://www.coursera.org/learn/generative-ai-for-everyone/lecture/qF1Az/retrieval-augmented-generation-rag
● https://www.coursera.org/learn/generative-ai-with-llms/lecture/j2lTW/using-the-llm-in-applications
■ Understanding the synergy between retrieval and generation models
○ Practical
■ https://medium.com/@murtuza753/using-llama-2-0-faiss-and-langchain-for-question-answering-on-your-own-data-682241488476 (a minimal retrieval sketch follows at the end of this week's outline)
● Week 9
○ Identifying project scope and objectives
○ Developing a detailed project plan
○ Initial model selection and feasibility analysis
■ https://www.coursera.org/learn/generative-ai-for-everyone/lecture/kNX59/choosing-a-model
● Week 10-12
○ Development
○ Continuous testing and improvement (see the evaluation sketch at the end of this block)
■ https://www.coursera.org/learn/generative-ai-with-llms/lecture/8Wvg3/model-evaluation
○ Addressing challenges and seeking instructors’ feedback
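For the continuous-testing phase, automatic metrics can complement manual review. The sketch below computes ROUGE with the Hugging Face evaluate library; the predictions and references are toy strings standing in for real model outputs.

```python
# Evaluation sketch using the Hugging Face `evaluate` library
# (ROUGE chosen as one example metric; requires the rouge_score package).
import evaluate

rouge = evaluate.load("rouge")  # downloads the metric script on first use

predictions = ["the model generates a short summary of the document"]
references = ["the model produces a short summary of the document"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1 / rouge2 / rougeL scores between 0 and 1
```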
● Week 12
○ Project showcase and presentation
○ Peer review and feedback sessions
https://developers.generativeai.google/
Similar Courses
● https://www.coursera.org/learn/generative-ai-with-llms
Core concepts: ability to optimize for speed, accuracy, and hallucination reduction.