Generative AI - Record
NM1081 – GENERATIVE AI
NAME : _______________________
BRANCH : _______________________
YEAR/SEM : _______________________
PET ENGINEERING COLLEGE
(An ISO 9001:2015 Certified Institution)
Affiliated to Anna University & Approved by AICTE
BONAFIDE CERTIFICATE
CONTENTS
5. Prompt Engineering
6. OpenAI Generative Pre-Trained Transformer 3 (GPT-3) for Developers
7. Generating SQL Commands
8. Information Extraction
9. Recipe Generator
10. Image Reshape
11. Guided Project
12. Capstone Project 1
13. Capstone Project 2
The term "Artificial Intelligence" was coined by John McCarthy in 1956 during the
Dartmouth Conference. Since then, AI has grown from simple rule-based systems to
complex neural networks and machine learning models.
AI works by processing large amounts of data and identifying patterns. It combines inputs
(data), algorithms (processing), and outputs (actions or decisions). AI doesn’t “think” like
humans, but it can mimic human-like decision-making processes to a remarkable degree.
The roots of AI can be traced to classical philosophers who attempted to describe human
thinking as a symbolic system. The modern history of AI began with the development of
digital computers in the 1940s. Alan Turing's famous paper, “Computing Machinery and
Intelligence,” posed the question, “Can machines think?”
After initial enthusiasm, AI hit a roadblock due to unrealistic expectations and limited
computing power, leading to reduced funding and interest—this period is known as the
first AI winter.
AI saw a resurgence with the development of expert systems that mimicked decision-
making abilities of human experts. MYCIN and DENDRAL were early examples.
Again, progress slowed due to the high cost of developing expert systems and their
limitations. Funding was withdrawn, leading to the second AI winter.
Modern AI (2000s–Present):
The explosion of data (big data), increased computational power (GPUs), and the
development of machine learning and deep learning models led to a renaissance in AI.
Breakthroughs in NLP (e.g., GPT models), computer vision, and robotics marked the dawn
of the AI era we see today.
Based on Capabilities:
1. Narrow AI (Weak AI): Performs specific tasks (e.g., facial recognition, voice
assistants).
2. General AI (Strong AI): Can perform any intellectual task that a human can do
(still theoretical).
Based on Functionalities:
1. Reactive Machines: Respond to the current input without memory of past events (e.g., IBM's Deep Blue).
2. Limited Memory: Can use past experiences for future decisions (e.g., self-driving
cars).
3. Theory of Mind: Would understand emotions and intentions (still under research).
4. Self-aware AI: Hypothetical systems with awareness of their own state.
Subfields of AI
1. Machine Learning (ML):
Enables machines to learn from data and improve over time without being explicitly
programmed.
2. Deep Learning (DL):
Uses multi-layered artificial neural networks to learn complex patterns from large datasets.
3. Natural Language Processing (NLP):
Allows machines to understand, interpret, and generate human language (e.g., translation,
chatbots).
4. Computer Vision:
Enables machines to see and interpret visual information from the world.
5. Robotics:
Designs and builds machines that can sense, plan, and act in the physical world.
6. Expert Systems:
Rule-based systems that mimic human experts to solve complex problems in specific
domains.
7. Fuzzy Logic:
Deals with reasoning that is approximate rather than fixed and exact.
Each subfield plays a vital role in building intelligent systems and enhancing machine
capabilities across industries.
Big Data: Provides the vast amount of information needed to train models.
Edge Computing: Allows AI processing to happen closer to data sources for faster
responses.
AI development also benefits from open-source platforms like TensorFlow, PyTorch, and
Scikit-learn, making AI research and implementation more accessible.
ML is a key component of AI that enables machines to learn from data and make
predictions or decisions.
Types of ML:
1. Supervised Learning: Uses labeled data to train models (e.g., spam detection; a small
example follows below).
2. Unsupervised Learning: Finds patterns and groupings in unlabeled data (e.g., customer
segmentation).
3. Reinforcement Learning: Models learn by trial and error using rewards (e.g.,
game playing).
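A minimal supervised-learning sketch with scikit-learn (one of the open-source libraries named earlier in this chapter); the tiny spam-detection dataset is illustrative:
python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labeled training data: 1 = spam, 0 = not spam
texts = ["win a free prize now", "meeting at 10 am",
         "free money claim now", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)   # turn text into word-count features

model = MultinomialNB()
model.fit(X, labels)                  # learn from the labeled examples

test = vectorizer.transform(["claim your free prize"])
print(model.predict(test))            # -> [1], classified as spam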
Deep learning uses artificial neural networks to simulate human brain function. It is ideal
for tasks like speech recognition, image classification, and natural language generation.
DL models require large datasets and high computation, but they offer superior
performance for complex tasks.
1. Healthcare:
Personalized medicine
AI-assisted surgeries
2. Finance:
Fraud detection
Algorithmic trading
3. Retail and E-commerce:
Recommendation engines
Inventory management
Personalized marketing
4. Manufacturing:
Predictive maintenance
Quality control
5. Education:
Automated grading
6. Transportation:
Autonomous vehicles
Route optimization
Traffic prediction
7. Agriculture:
Crop monitoring
Yield prediction
Automated irrigation
8. Entertainment:
Content recommendation and personalization
These applications enhance efficiency, reduce costs, and improve user experiences.
Data Quality and Quantity: Poor or biased data leads to poor models.
Bias and Fairness: AI can perpetuate or amplify biases present in training data.
Human-centered design
Organizations like UNESCO and the EU are actively working on global AI ethics
frameworks.
Multimodal AI: Combining text, image, video, and audio inputs (e.g., GPT-4).
Smarter and Leaner Models: More efficient models for mobile and edge devices.
Continuous research, innovation, and ethical oversight will shape the trajectory of AI in
the coming decades.
Conclusion
Artificial Intelligence is reshaping the world by enabling machines to learn, adapt, and
make decisions. From its theoretical origins to its practical applications across sectors, AI
has emerged as a key driver of innovation and efficiency.
The field is broad, encompassing machine learning, NLP, computer vision, and robotics,
and it’s powered by breakthroughs in algorithms, hardware, and data availability.
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that
are programmed to think and learn like humans. One of the most prominent subfields of
AI is Natural Language Processing (NLP), which focuses on enabling machines to
understand, interpret, and generate human language.
In recent years, NLP has seen exponential growth due to advancements in machine
learning, access to massive datasets, and computational power. From chatbots and virtual
assistants to language translation and summarization, NLP applications have become
deeply integrated into our daily lives.
Language models like GPT represent a significant leap in NLP capabilities. These models
are trained on vast corpora of text and can perform a wide variety of language-related tasks
with minimal task-specific tuning.
Language modeling has evolved from basic statistical models to deep learning-based
transformers.
OpenAI is an AI research and deployment company with a mission to ensure that artificial
general intelligence (AGI) benefits all of humanity. One of its groundbreaking
contributions is the development of the GPT (Generative Pre-trained Transformer)
series of models.
GPT-2 (2019): Gained attention for its ability to generate coherent and
contextually relevant text. Due to its capabilities, OpenAI initially withheld full
release for safety concerns.
GPT models are based on the transformer decoder architecture.
GPT uses a unidirectional (left-to-right) attention mechanism which differs from BERT’s
bidirectional approach. This design is ideal for tasks like text generation, where predicting
the next word is essential.
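A toy illustration of the left-to-right (causal) masking described above; this sketch is not from the course material:
python
import numpy as np

# Causal mask: position i may attend only to positions 0..i.
seq_len = 5
mask = np.tril(np.ones((seq_len, seq_len)))
print(mask)
# In a real transformer decoder, attention scores at the zero positions are
# set to -inf before the softmax, so future tokens carry no weight.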
The number of parameters (weights) increases with each GPT version, leading to improved
performance and context understanding. For example, GPT-3 has 175 billion parameters
while GPT-4 is even more capable, though its exact size is not officially disclosed.
Pretraining: The model is trained on a massive corpus of internet text (e.g., books,
articles, websites) using unsupervised learning. The objective is to predict the next
word in a sentence, learning grammar, facts, and reasoning patterns.
Content Generation: Writing articles, poetry, code, and social media posts.
Data Biases: The training data may include biased or harmful content, which can
be reflected in outputs.
Resource Intensity: Training and running large models is expensive and energy-
intensive.
Bias and Fairness: Ensuring GPT models do not reinforce societal biases or
discrimination.
Privacy: Avoiding leakage of sensitive data that may have been present in training
datasets.
OpenAI and others advocate for Responsible AI practices, including transparency, human-
in-the-loop systems, usage guidelines, and alignment research.
Multimodal Models: Integration of text, images, audio, and video inputs (e.g.,
GPT-4 with image capabilities).
We’re moving toward AI systems that can collaborate with humans across various
domains, supporting creativity, productivity, and decision-making.
Conclusion
Generative models are a class of machine learning models that can generate new data
instances that resemble a given dataset. Unlike discriminative models that focus on
classifying input data, generative models learn the distribution of input data and can
produce synthetic outputs such as text, images, videos, music, and more.
For developers, these models offer powerful capabilities to simulate data, enhance
creativity, and solve real-world problems through AI-generated content. With the rise of
tools like ChatGPT, DALL·E, and Stable Diffusion, understanding generative models has
become essential for AI development.
The journey of generative models began with the development of probabilistic models like
Gaussian Mixture Models and Hidden Markov Models. As computational power increased,
so did the complexity of these models.
2006: Geoffrey Hinton introduced deep belief networks, sparking interest in deep
learning.
2020s: GPT-3, DALL·E, and other large-scale generative models made public
waves.
The convergence of computing, large-scale data, and research advancements has made
generative models highly accessible and impactful for developers.
Before diving deeper, developers should be familiar with the key concepts behind these
models (probability distributions, latent representations, and sampling); grasping these
basics helps developers troubleshoot and fine-tune generative models effectively.
There are various architectures and approaches to building generative models, including:
1. Autoregressive Models
These models predict future data points based on past ones, making them ideal for
sequential data like language and audio.
Examples:
PixelRNN
2. Autoencoders
These neural networks learn compressed representations and reconstruct data. The latent
space can be sampled to generate new instances.
Basic Autoencoders
Denoising Autoencoders
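A minimal Keras autoencoder sketch for item 2 above (the layer sizes and 784-dimensional input are illustrative):
python
import tensorflow as tf

# Encoder compresses a 784-dim input to a 32-dim latent code; the decoder reconstructs it.
inputs = tf.keras.layers.Input(shape=(784,))
latent = tf.keras.layers.Dense(32, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(784, activation='sigmoid')(latent)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x, x, epochs=10)  # trained to reproduce its own input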
3. Generative Adversarial Networks (GANs)
Involve a generator and a discriminator in a minimax game. The generator tries to produce
realistic data, while the discriminator attempts to distinguish between real and fake samples.
4. Variational Autoencoders (VAEs)
Learn a probabilistic latent space from which new samples can be drawn (discussed in
detail below).
5. Flow-based Models
Use invertible transformations to learn data distribution and allow exact likelihood
computation.
RealNVP
Glow
Each of these models has trade-offs in terms of training complexity, quality of output, and
controllability.
They are trained simultaneously, with the generator trying to “fool” the discriminator.
Types of GANs
Training Challenges
Mode collapse
Instability
Non-convergence
Apply regularization
Image synthesis
Art generation
Super-resolution
VAEs bridge the gap between deep learning and probabilistic inference. They learn a latent
variable model by optimizing the evidence lower bound (ELBO).
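For reference, the evidence lower bound mentioned above has the standard form (a textbook reminder, not taken from the course text):

\log p_\theta(x) \;\geq\; \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] \;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\Vert\, p(z)\right)

where q_\phi(z|x) is the encoder, p_\theta(x|z) the decoder, and p(z) the latent prior; the first term rewards faithful reconstruction and the KL term regularizes the latent space.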
Key Characteristics:
Applications:
Anomaly detection
Text generation
Drug discovery
VAEs offer more controlled generation than GANs but may produce blurrier images in
visual tasks.
Key Innovations:
Popular Models:
Chatbots
Summarization tools
Developers can use Hugging Face libraries and OpenAI APIs to fine-tune or deploy these
models.
Here are key tools used by developers to work with generative models:
These tools reduce development time and allow experimentation with cutting-edge models.
1. Text Generation
AI writing assistants
Code generation
2. Image Generation
Product visualization
AI music composition
Voice cloning
Audio enhancement
4. Game Development
5. Healthcare
Literature reviews
Generative AI opens doors to creative, personalized, and automated digital solutions across
domains.
Mitigating these issues requires knowledge of model compression, transfer learning, and
scalable architecture.
Generative models can create realistic yet synthetic content, which raises serious ethical
questions:
Key Concerns:
Privacy Violations
Developer Responsibilities:
Following guidelines from AI ethics boards and regulatory frameworks is essential for
responsible development.
Synthetic Data for AI: Expand training sets while preserving privacy.
Conclusion
Generative models are revolutionizing how developers create and interact with data. From
simple text generators to complex multimodal architectures, these tools empower
developers to automate, innovate, and elevate user experiences.
With increasing access to frameworks and pre-trained models, developers can integrate
generative capabilities into applications across industries. However, they must also address
ethical concerns and technical limitations responsibly.
The journey of generative AI is only beginning. By mastering these tools, developers can
not only build the future but imagine it.
The AI-first paradigm in software engineering marks a transformative shift where artificial
intelligence is no longer just a component but the driving force of the entire development
lifecycle. Unlike traditional methodologies that view AI as an add-on, AI-first software
engineering embeds machine learning (ML), natural language processing (NLP), and data-
driven automation at the core of software design, development, deployment, and evolution.
In the AI-first model, software isn’t merely programmed — it learns, adapts, and evolves.
Applications are expected to anticipate user needs, automate tasks intelligently, and
optimize themselves in real-time. This evolution has been made possible by rapid advances
in deep learning, the availability of massive datasets, and high-performance computing.
Software engineering has evolved dramatically from manual coding and static architecture
to agile, DevOps, and now AI-augmented pipelines. Initially, development was code-
centric and linear. Agile introduced iterative and incremental progress, and DevOps further
integrated operations for continuous delivery.
AI-first Era: AI predicts bugs, generates code, analyzes user behavior, and
optimizes performance autonomously.
The AI-first approach alters traditional boundaries. Development becomes data-driven, and
engineering processes are infused with learning mechanisms that make software
increasingly autonomous.
2. Learning-enabled Systems: Software can adapt and evolve over time using ML
models.
These principles guide software engineers to build systems that are intelligent by design
rather than by integration.
Using AI in this stage reduces communication gaps and ensures that software reflects real
user needs.
AI enhances design by helping architects and designers simulate, validate, and optimize
software models:
This approach significantly accelerates and improves the decision-making process during
software design.
AI tools like GitHub Copilot, Amazon CodeWhisperer, and OpenAI Codex have
revolutionized coding:
Bug Detection: Identifies errors before execution using static and semantic
analysis.
These assistants not only improve productivity but also enhance code quality and reduce
onboarding time for developers.
Test Case Generation: AI generates test cases based on code and requirements.
Test Optimization: Prioritizes tests based on historical defect data and impact
prediction.
AI ensures higher coverage, faster testing cycles, and early detection of issues—especially
critical in agile environments.
In deployment, AI facilitates:
User Feedback Analysis: NLP parses reviews and tickets for feedback loops.
AI closes the DevOps feedback loop, enabling continuous learning and improvement in
production environments.
Developers must choose tools based on their use cases — from code generation and
modeling to deployment automation.
Soft skills like problem-solving, ethics, and communication remain critical, especially
when collaborating with multidisciplinary teams.
An app uses AI-first design to assist doctors with preliminary diagnoses from X-ray
images. This reduced diagnostic time by 50%.
These cases show how AI-first engineering delivers measurable business value.
Skill Gaps: Not all teams are ready for AI-first adoption.
Multimodal Systems: Integration of text, image, audio, and video in one pipeline.
Engineers who align their careers with AI-first principles will be well-positioned in the
evolving tech landscape.
Conclusion
AI-first software engineering represents a revolution in how we build and evolve digital
systems. Rather than merely applying AI after the fact, the entire development lifecycle is
reimagined with AI at the core — from understanding user needs and generating code to
automating testing, deployment, and refinement.
Infosys Springboard’s approach emphasizes both technical skill and ethical grounding,
preparing learners to engineer software that is intelligent, responsible, and scalable.
By mastering AI-first principles, developers not only accelerate delivery but also unlock
innovation, enhance user experiences, and shape the digital future.
5. PROMPT ENGINEERING
Prompt engineering is the process of crafting effective inputs (prompts) to elicit accurate,
relevant, and efficient outputs from language models like GPT-3, GPT-4, and other
generative AI systems. As these models grow in capability, the need to control and fine-
tune their behavior through smart prompting becomes critical.
Infosys Springboard's course emphasizes that prompt engineering is not just about
formulating questions—it's about designing structured interactions that guide the AI
toward useful and meaningful results.
Prompt engineering emerged with the rise of large language models (LLMs). Early
versions, like GPT-2, had limited capabilities and required developers to tweak underlying
code. With the launch of GPT-3 and beyond, LLMs became general-purpose tools—
capable of answering questions, writing stories, solving equations, and more—based on
simple text prompts.
LLM 1.0 (e.g., GPT-2): Limited understanding; output was often unpredictable.
LLM 3.0 (GPT-4 and beyond): Context length increases, enabling rich dialogues
and task completion.
Large Language Models (LLMs) like those developed by OpenAI, Google, and Meta are
trained on vast amounts of text data to predict and generate human-like responses. These
models learn the probability distribution of sequences of words, enabling them to:
Answer questions
Translate languages
Simulate conversation
LLMs use architectures like the transformer model, which relies on attention mechanisms
to handle context and semantics. Understanding how these models work helps in designing
better prompts.
Prompt engineering leverages these concepts to craft inputs that work well with how the
model reasons and responds.
Anatomy of a Prompt
Instruction: What you want the model to do (e.g., “Summarize this text…”).
Example:
“Summarize the following news article in 100 words, using a neutral tone.”
Good prompts are clear, specific, and goal-oriented. Ambiguity leads to unreliable outputs,
while overly detailed prompts may confuse the model.
Types of Prompts
1. Instructional Prompts
Direct commands: “Translate this to French.”
2. Interrogative Prompts
Questions: “What are the benefits of renewable energy?”
3. Contextual Prompts
Input with examples:
“Example 1: Input – X, Output – Y. Now, Input – Z, Output – ?”
4. Dialogic Prompts
Chat-style or conversational:
User: Tell me a joke. AI: Here’s one...
5. Creative Prompts
For story generation, poetry, or art.
6. Chain-of-thought Prompts
Breaks down reasoning step-by-step:
“Let’s think step by step...”
Understanding and choosing the right prompt type improves interaction success rates.
Few-shot prompts provide models with training-like examples that improve the quality of
output. However, excessive examples may overload the model's context window.
These platforms assist in rapid prototyping, testing, and refining prompts for deployment.
Infosys Springboard advocates for AI aligned with societal good, emphasizing responsible
development and usage.
Few-shot learning uses prior examples to train on-the-fly. It’s powerful in low-
data environments.
Prompt: “Translate English to French. English: Hello → French: Bonjour...”
Recent models like GPT-4 perform well with zero-shot prompts, but few-shot remains
essential for nuanced or technical tasks.
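A small sketch contrasting the two styles (the translation task is illustrative):
python
# Zero-shot: the instruction alone.
zero_shot = "Translate English to French: cheese ->"

# Few-shot: a handful of worked examples precede the new input.
few_shot = """Translate English to French.
English: Hello -> French: Bonjour
English: Good night -> French: Bonne nuit
English: cheese -> French:"""
# Each added example consumes context-window tokens, so keep the shot count small.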
Conclusion
Prompt engineering is a critical skill for harnessing the full potential of large language
models. It combines technical understanding with linguistic clarity, logical structure, and
ethical awareness. By crafting precise, goal-oriented prompts, developers can build
intelligent systems that perform tasks with minimal supervision and high reliability.
Transformer models were introduced by Vaswani et al. in 2017, revolutionizing the field
of deep learning. Unlike RNNs or LSTMs, transformers can process input data in parallel
and capture long-range dependencies more efficiently using self-attention mechanisms.
OpenAI’s progression through GPT-1, GPT-2, and GPT-3 has refined these ideas, scaling
model size and capabilities:
The leap from GPT-2 to GPT-3 brought unprecedented fluency, coherence, and
generalization power. This evolution has enabled applications previously thought
infeasible using traditional rule-based NLP techniques.
GPT-3 is a unidirectional transformer model that utilizes 96 attention layers and 96 heads.
It processes language as a sequence of tokens, predicting the next word based on the
preceding context. Each layer contributes to refining the prediction through multiple self-
attention computations and feed-forward networks.
Tokenization: Using Byte Pair Encoding (BPE) to split text into manageable
chunks.
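To see BPE in action, one can use the open-source tiktoken library (an assumption; the course text does not name a specific tokenizer). GPT-3's vocabulary is comparable to the r50k_base encoding:
python
import tiktoken

enc = tiktoken.get_encoding("r50k_base")
tokens = enc.encode("Generative AI is transforming software.")
print(tokens)              # a list of integer token ids
print(enc.decode(tokens))  # decodes back to the original text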
Its size and architecture allow GPT-3 to perform multiple tasks in a zero-shot or few-shot
fashion, eliminating the need for extensive fine-tuning.
These features make GPT-3 highly versatile for developers looking to embed AI into their
applications without building models from scratch.
To begin using GPT-3, developers need access to the OpenAI API, which provides
endpoints for querying the model. The steps include:
Languages like Python, Node.js, and Java are commonly used. Python's openai package
makes interaction straightforward:
python
import openai

openai.api_key = "your-api-key"

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt="Say hello to the user.",  # prompt argument missing in the printout; text illustrative
    max_tokens=100
)
print(response["choices"][0]["text"].strip())
Authentication is handled via a secret API key. Developers must manage rate limits and
token usage wisely to optimize costs.
Key parameters include engine/model, prompt, temperature, max_tokens, top_p,
frequency_penalty, presence_penalty, and stop (all of which appear in the notebook
examples later in this record).
Proper understanding of these options ensures developers get tailored, useful outputs from
GPT-3.
Prompt engineering is the art of crafting inputs to generate accurate and relevant outputs.
It’s critical to GPT-3's effective use.
Best practices:
Example:
python
# Reconstructed translation prompt; the original example text is truncated in the source.
prompt = """Translate the following English text to French:

English: "Good morning, everyone."
French:"""
GPT-3 can generate code from comments or instructions, aiding productivity in IDEs.
Example prompt:
python
# Write a recursive Python function that returns the factorial of n
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
It also supports SQL generation, debugging assistance, and code explanation, making it
a valuable tool in software development pipelines.
GPT-3’s conversational ability enables developers to create realistic chatbots. Using chat
endpoints and conversational formatting (system, user, assistant roles), developers can
build multi-turn dialogue systems.
Example setup:
python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}  # role contents are illustrative
]
Customizing tone, domain knowledge, and context allows for diverse applications in
support, healthcare, education, and HR.
Webhooks
Serverless Functions
REST APIs
Dynamic documentation
Frameworks like Flask, React, and Node.js help bridge GPT-3 with front-end
applications. Careful caching, token control, and latency management are essential for
seamless user experiences.
Understanding these drawbacks helps developers mitigate risk and build more robust
systems.
Monitoring logs and refining prompts continuously leads to faster, more cost-effective
GPT-3 applications.
Developers will soon design AI workflows that interact with tools, APIs, and live data
autonomously, setting the stage for AI-native apps.
Conclusion
The Infosys Springboard course “OpenAI GPT-3 for Developers” empowers learners to
harness GPT-3’s capabilities for solving real-world problems. From simple text
generation to complex chatbot design and code automation, GPT-3 provides a versatile
foundation for intelligent applications.
As the field evolves, developers equipped with prompt engineering, API mastery, and
ethical awareness will lead the way in shaping AI-first products and experiences.
EXPERIMENT
a. Generate Python code for data analysis and visualization tasks using the GPT-3 API.
b. Generate a Resume from a paragraph about your experience and skillset using the
GPT-3 API.
c. Generate a set of interview questions in a programming language of your choice.
d. Generate a summary from meeting notes. Consider the following as an
example:
Max: Profits up 50%
Ruby: New servers are online
Kyle: Need more time to fix software
Walker: Happy to help
Parkman: Beta testing almost done
Problem statement:
Design a CGAN generative model that can generate a few fake images from the provided
data. Use the CIFAR-10 dataset to train a Conditional GAN. Take user input for the class
type and generate images of that class using the Conditional GAN.
7. GENERATING SQL COMMANDS
import openai
openai.api_key="sk-proj-v4qLcBBF-JUPQoka7unid0Rn_NkHCoCqsKv2hYlq_QVF1smzgiwDQg8Iba3vIJv6qJu3B8mdLCT3BlbkFJJ4oy5W7q82jbnpz_huk7wCdJdgNB8OO
examples= [
("Find unique values of DEPARTMENT from EMPLOYEE table.",
"SELECT DISTINCT DEPARTMENT FROM EMPLOYEE;"),
("Find 3 highest salaries from EMPLOYEE table.",
"SELECT TOP 3 SALARY FROM EMPLOYEE ORDER BY SALARY DESC;")
]
def create_prompt(user_query):
    prompt = "Convert the following English statements into SQL queries:\n\n"
    for example in examples:
        prompt += f"English:{example[0]}\nSQL:{example[1]}\n\n"
    # Append the new query and return (cut off in the printout)
    prompt += f"English:{user_query}\nSQL:"
    return prompt
def generate_sql(english_text):
    prompt = create_prompt(english_text)
    response = openai.Completion.create(
        engine="davinci-002",
        prompt=prompt,
        temperature=0.5,
        max_tokens=100,
        stop=["\n"]
    )
    return response["choices"][0]["text"].strip()
8. INFORMATION EXTRACTION
import openai
openai.api_key="sk-proj-v4qLcBBF-JUPQoka7unid0Rn_NkHCoCqsKv2hYlq_QVF1smzgiwDQg8Iba3vIJv6qJu3B8mdLCT3BlbkFJJ4oy5W7q82jbnpz_huk7wCdJdgNB8OO
def extract_information(passage):
    prompt = f"""Extract key information from the following passage:\n\n\"\"\"\n{passage}\n\"\"\"\nSummary of key points:\n"""
    response = openai.Completion.create(
        engine="davinci-002",
        prompt=prompt,
        temperature=0.3,
        max_tokens=200,
        top_p=1.0,
        frequency_penalty=0,
        presence_penalty=0,
        stop=["\"\"\""]
    )
    return response["choices"][0]["text"].strip()
passage= """Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-lik
extracted_info = extract_information(passage)
print("Extracted Informations:\n")
print(extracted_info)
Extracted Information:
1. GPT-3 is an autoregressive language model that uses deep learning to produce human-like text
2. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2)
3. GPT-3's full version has a capacity of 175 billion machine learning parameters
4. GPT-3, which was introduced in May 2020, and was in beta testing as of July 2020
5. GPT-3 is part of a trend in natural language processing (NLP) systems of pre-trained language representations
6. Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters
9. RECIPE GENERATOR
import openai
openai.api_key="sk-proj-v4qLcBBF-JUPQoka7unid0Rn_NkHCoCqsKv2hYlq_QVF1smzgiwDQg8Iba3vIJv6qJu3B8mdLCT3BlbkFJJ4oy5W7q82jbnpz_huk7wCdJdgNB8OO
def generate_recipe(dish_name, ingredients):
    ingredients_text = "\n".join(ingredients)
    prompt = f"""Write a recipe based on these ingredients and instructions:

{dish_name}

Ingredients:
{ingredients_text}

Steps to be followed:
"""
    response = openai.Completion.create(
        engine="davinci-002",
        prompt=prompt,
        temperature=0.5,
        max_tokens=100,
        top_p=1.0,
        frequency_penalty=0.5,
        presence_penalty=0.5,
    )
    return response["choices"][0]["text"].strip()
dish_name="Caramel Custard"
ingredients=["Milk","Custard Power","Sugar","Milkmaid"]
recipe = generate_recipe(dish_name,ingredients)
10. IMAGE RESHAPE
train_X = train_X/255
train_y
import numpy as np
prob = np.exp(model.feature_log_prob_)[0]
prob
array([1.68776371e-04, 1.68776371e-04, 1.68776371e-04, ...,
       2.36118143e-01, 1.04978903e-01, ...])
(long per-pixel probability array; the multi-page numeric output is truncated here)
11. GUIDED PROJECT
import tensorflow as tf
import numpy as np
import math
import matplotlib.pyplot as plt
import os
import glob
import imageio
from IPython.display import Image, display
output_dir = 'output_images'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
# -----------------------------
# Data Loading and Preprocessing
# -----------------------------
(x_train, y_train), (_, _) = tf.keras.datasets.fashion_mnist.load_data()
print("Training data shape:", x_train.shape, y_train.shape)
#Generator Initializations
noise_length = 100
x = tf.keras.layers.Activation('leaky_relu')(x)
x = tf.keras.layers.Conv2DTranspose(64, (5,5), strides=2, padding='same')(x)
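Only the two layer lines above survive from the create_generator_model cell; the sketch below is reconstructed from the gen_network summary shown later in this notebook, whose layer shapes and parameter counts it matches (the final sigmoid activation is an assumption):

def create_generator_model(noise_length=100):
    gen_input = tf.keras.layers.Input(shape=(noise_length,))
    x = tf.keras.layers.Dense(7 * 7 * 256)(gen_input)
    x = tf.keras.layers.Reshape((7, 7, 256))(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation('leaky_relu')(x)
    x = tf.keras.layers.Conv2DTranspose(128, (5,5), strides=2, padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation('leaky_relu')(x)
    x = tf.keras.layers.Conv2DTranspose(64, (5,5), strides=2, padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation('leaky_relu')(x)
    x = tf.keras.layers.Conv2DTranspose(32, (5,5), strides=1, padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation('leaky_relu')(x)
    x = tf.keras.layers.Conv2DTranspose(1, (5,5), strides=1, padding='same')(x)
    x = tf.keras.layers.Activation('sigmoid')(x)  # final activation assumed
    return tf.keras.Model(gen_input, x, name='gen_network')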
def create_discriminator_model(input_shape=[28,28,1]):
    disc_input = tf.keras.layers.Input(shape=input_shape)
    x = tf.keras.layers.LeakyReLU(alpha=0.2)(disc_input)
    x = tf.keras.layers.Conv2D(32, (5,5), strides=2, padding='same')(x)
    x = tf.keras.layers.LeakyReLU(alpha=0.2)(x)
    x = tf.keras.layers.Conv2D(64, (5,5), strides=2, padding='same')(x)
    x = tf.keras.layers.LeakyReLU(alpha=0.2)(x)
    x = tf.keras.layers.Conv2D(128, (5,5), strides=2, padding='same')(x)
    x = tf.keras.layers.LeakyReLU(alpha=0.2)(x)
    x = tf.keras.layers.Conv2D(256, (5,5), strides=1, padding='same')(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    # Return statement reconstructed (cut off in the printout); the model name
    # matches the disc_network summary below.
    return tf.keras.Model(disc_input, x, name='disc_network')
generator = create_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
<matplotlib.image.AxesImage at 0x7cfbb8472a50>
#This code will check the functioning of the discriminator against a sample input. As of this point,
#the discriminator is untrained.
discriminator = create_discriminator_model()
decision = discriminator(generated_image)
print (decision)
def build_models():
    generator = create_generator_model()
    discriminator = create_discriminator_model()
    # (assembly and compilation of the adversarial model are cut off in the
    # printout; build_models returns the generator, discriminator, and
    # adversarial model, per the call G, D, A = build_models() below)

# Fragment of an image-plotting helper (the full plot_images definition appears below):
    plt.figure(figsize=(rows, rows))
    for i in range(num_images):
        plt.subplot(rows, rows, i + 1)
        image = np.reshape(fake_images[i], [image_size, image_size])
        plt.imshow(image, cmap='gray')
        plt.axis('off')
    plt.suptitle(f"Step {step}")
# Load and preprocess Fashion MNIST for the discriminator (normalized to [0,1])
(x_train, _), (_, _) = tf.keras.datasets.fashion_mnist.load_data()
x_train = np.reshape(x_train, [-1, image_size, image_size, 1]).astype('float32')
x_train = x_train / 255.0
for i in range(train_steps):
    # Train Discriminator
    noise_input = np.random.uniform(-1.0, 1.0, size=[batch_size, noise_size])
    fake_images = generator.predict(noise_input)
    # (sampling of real_images and the train_on_batch calls that produce
    # d_loss/d_acc and a_loss/a_acc are cut off in the printout)
    X = np.concatenate((real_images, fake_images))
    y_real = np.ones((batch_size, 1))
    y_fake = np.zeros((batch_size, 1))
    y = np.concatenate((y_real, y_fake))
    if i % 100 == 0:
        print(f"Step {i}: [D loss: {d_loss:.4f}, acc: {d_acc:.4f}], [A loss: {a_loss:.4f}, acc: {a_acc:.4f}]")
def plot_images(fake_images, step):
    plt.figure(figsize=(2.5,2.5))
    num_images = fake_images.shape[0]
    image_size = fake_images.shape[1]
    rows = int(math.sqrt(fake_images.shape[0]))
    for i in range(num_images):
        plt.subplot(rows, rows, i + 1)
        image = np.reshape(fake_images[i], [image_size, image_size])
        plt.imshow(image, cmap='gray')
        plt.axis('off')
    plt.show()
G, D, A = build_models()
G.summary()
Model: "gen_network"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ input_layer_2 (InputLayer) │ (None, 100) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_2 (Dense) │ (None, 12544) │ 1,266,944 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ reshape_1 (Reshape) │ (None, 7, 7, 256) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization_4 │ (None, 7, 7, 256) │ 1,024 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ activation_5 (Activation) │ (None, 7, 7, 256) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_transpose_4 (Conv2DTranspose) │ (None, 14, 14, 128) │ 819,328 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization_5 │ (None, 14, 14, 128) │ 512 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ activation_6 (Activation) │ (None, 14, 14, 128) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_transpose_5 (Conv2DTranspose) │ (None, 28, 28, 64) │ 204,864 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization_6 │ (None, 28, 28, 64) │ 256 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ activation_7 (Activation) │ (None, 28, 28, 64) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_transpose_6 (Conv2DTranspose) │ (None, 28, 28, 32) │ 51,232 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization_7 │ (None, 28, 28, 32) │ 128 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ activation_8 (Activation) │ (None, 28, 28, 32) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_transpose_7 (Conv2DTranspose) │ (None, 28, 28, 1) │ 801 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ activation_9 (Activation) │ (None, 28, 28, 1) │ 0 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 2,345,089 (8.95 MB)
Trainable params: 2,344,129 (8.94 MB)
Non-trainable params: 960 (3.75 KB)
D.summary()
Model: "disc_network"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ input_layer_3 (InputLayer) │ (None, 28, 28, 1) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ leaky_re_lu_4 (LeakyReLU) │ (None, 28, 28, 1) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_4 (Conv2D) │ (None, 14, 14, 32) │ 832 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ leaky_re_lu_5 (LeakyReLU) │ (None, 14, 14, 32) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_5 (Conv2D) │ (None, 7, 7, 64) │ 51,264 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ leaky_re_lu_6 (LeakyReLU) │ (None, 7, 7, 64) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_6 (Conv2D) │ (None, 4, 4, 128) │ 204,928 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ leaky_re_lu_7 (LeakyReLU) │ (None, 4, 4, 128) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_7 (Conv2D) │ (None, 4, 4, 256) │ 819,456 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ flatten_1 (Flatten) │ (None, 4096) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_3 (Dense) │ (None, 1) │ 4,097 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 1,080,577 (4.12 MB)
Trainable params: 0 (0.00 B)
Non-trainable params: 1,080,577 (4.12 MB)
A.summary()
Model: "adversarial"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ input_layer_4 (InputLayer) │ (None, 100) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ gen_network (Functional) │ (None, 28, 28, 1) │ 2,345,089 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ disc_network (Functional) │ (None, 1) │ 1,080,577 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 3,425,666 (13.07 MB)
Trainable params: 2,344,129 (8.94 MB)
Non-trainable params: 1,081,537 (4.13 MB)
train_gan(G, D, A)
12. CAPSTONE PROJECT 1
!pip install openai==0.28
Collecting openai==0.28
Downloading openai-0.28.0-py3-none-any.whl.metadata (13 kB)
Downloading openai-0.28.0-py3-none-any.whl (76 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 76.5/76.5 kB 5.8 MB/s eta 0:00:00
Installing collected packages: openai
Attempting uninstall: openai
Found existing installation: openai 1.61.1
Uninstalling openai-1.61.1:
Successfully uninstalled openai-1.61.1
Successfully installed openai-0.28.0
import openai
openai.api_key = "sk-proj-v4qLcBBF-JUPQoka7unid0Rn_NkHCoCqsKv2hYlq_QVF1smzgiwDQg8Iba3vIJv6qJu3B8mdLCT3BlbkFJJ4oy5W7q82jbnpz_huk7wCdJdgNB8
prompt_text="""
Generate Python code for the following tasks using Pandas,Matplotlib and seaborn:
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt_text}
],
temperature=0.7,
max_tokens=500,
top_p=1.0,
frequency_penalty=0,
presence_penalty=0,
)
print(response['choices'][0]["message"]['content'])
Sure! Here's an example Python code to perform exploratory data analysis (EDA) and visualize a dataset using Pandas, Matplotlib, and Seaborn:
```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Histogram
plt.subplot(2, 2, 1)
data.hist()
plt.title('Histogram')
# Barplot
plt.subplot(2, 2, 2)
sns.countplot(data['column_name'])
plt.title('Barplot')
# Boxplot
plt.subplot(2, 2, 3)
sns.boxplot(data=data)
plt.title('Boxplot')
# Correlation Heatmap
plt.subplot(2, 2, 4)
corr = data.corr()
sns.heatmap(corr, annot=True)
plt.title('Correlation Heatmap')
plt.tight_layout()
plt.show()
```
This code will help you perform EDA and visualize the dataset using Pandas, Matplotlib, and Seaborn in Python.
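The generate_resume helper called below is missing from this printout; a minimal sketch consistent with the other helper cells in this notebook (prompt wording assumed):

def generate_resume(paragraph):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are an AI that writes professional resumes."},
            {"role": "user", "content": f"Generate a resume from this paragraph:\n{paragraph}"}],
        temperature=0.7,
    )
    return response['choices'][0]["message"]['content']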
experience_paragraph="""
I am Ravikumar and I have accumulated five years of experience at Steel Strips Wheels Limited, where I contributed to manufacturing and
resume_text=generate_resume(experience_paragraph)
print(resume_text)
**Resume**
**Name:** Ravikumar
**Summary:**
Dedicated professional with five years of experience at Steel Strips Wheels Limited in manufacturing and quality control. Skilled in production line management, operations optimization, and quality assurance within the automotive industry.
**Skills:**
- Production Line Management
- Operations Optimization
- Quality Control
- Technical Proficiency
- Problem-Solving
- Automotive Industry Knowledge
**Experience:**
Steel Strips Wheels Limited
- Contributed to manufacturing and quality control processes
- Oversaw production lines, optimized operations, and ensured product quality
- Developed technical skills and problem-solving abilities
- Enhanced understanding of the automotive industry
**Education:**
[Your Education Background]
**Certifications:**
[If Applicable]
**Projects:**
[If Applicable]
**References:**
Available upon request.
{"role": "system", "content": "You are an AI that generates programming interview questions."},
{"role": "user", "content": prompt}],
temperature=0.7,
)
return response['choices'][0]["message"]['content']
programming_language="Javascript"
interview_questions=generate_interview_questions(programming_language)
print(interview_questions)
1. Explain the difference between 'undefined' and 'null' in JavaScript. Provide examples where each would be used.
2. What are closures in JavaScript? How are they useful and can you provide an example of how to create a closure?
3. Describe the concept of prototypal inheritance in JavaScript. How does it differ from classical inheritance in other programming languages?
4. How does JavaScript handle asynchronous programming? Explain the event loop and how it facilitates non-blocking operations.
5. What are the differences between 'var', 'let', and 'const' for variable declaration in JavaScript? When would you use each one?
6. How can you optimize the performance of a JavaScript application? Discuss best practices for improving speed and efficiency.
7. What is the role of promises and async/await in modern JavaScript development? How do they help manage asynchronous code execution?
8. Discuss the concept of hoisting in JavaScript. How does it impact variable and function declarations in the code?
9. What are some common design patterns used in JavaScript? Provide examples of when and how these patterns can be applied in real-world scenarios.
10. Write a function in JavaScript that takes an array of numbers and returns the sum of all the even numbers in the array. Optimize your solution for efficiency.
def summarize_meetingnotes(notes):
    # (prompt construction is cut off in the printout; a plausible reconstruction)
    prompt = f"Summarize the following meeting notes professionally:\n{notes}"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are an AI that generates professional meeting summaries."},
            {"role": "user", "content": prompt}],
        temperature=0.5,
    )
    return response['choices'][0]["message"]['content']
notes="""
Ruby: New servers are online
meeting_summary=summarize_meetingnotes(notes)
print(meeting_summary)
During the meeting, Ruby reported that new servers are now online. Kyle requested more time to fix software issues. Walker offered assistance, Parkman noted that beta testing is almost done, and Max reported that profits are up 50%.
13. CAPSTONE PROJECT 2
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
if tf.config.list_physical_devices('GPU'):
    print("GPU is available. Running on GPU.")
else:
    print("GPU is not available. Running on CPU.")
BUFFER_SIZE = x_train.shape[0]
BATCH_SIZE = 32
LATENT_DIM = 100 # Dimension of the noise vector
NUM_CLASSES = 10
IMG_SHAPE = (32, 32, 3)
def build_generator():
    noise_input = layers.Input(shape=(LATENT_DIM,))
    label_input = layers.Input(shape=(1,), dtype='int32')
    # Embed the label into a vector of size equal to the noise dimension
    label_embedding = layers.Embedding(NUM_CLASSES, LATENT_DIM)(label_input)
    label_embedding = layers.Flatten()(label_embedding)
    # (the remaining layers and return statement are cut off in the printout)
# -------------------------------
# 3. Build the Simplified Discriminator
# -------------------------------
def build_discriminator():
    image_input = layers.Input(shape=IMG_SHAPE)
    label_input = layers.Input(shape=(1,), dtype='int32')
    # (the label-embedding and convolutional layers are cut off in the printout)
    x = layers.Flatten()(x)
    # Output logits (no activation) for use with from_logits=True in loss
    output = layers.Dense(1)(x)
# -------------------------------
# Instantiate Models on GPU
# -------------------------------
with tf.device('/GPU:0'):
    generator = build_generator()
    discriminator = build_discriminator()
# -------------------------------
# 4. Define Losses and Optimizers
# -------------------------------
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
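discriminator_loss and the two optimizers are used below, but their cell is missing from this printout; a standard GAN formulation as a sketch (learning rates assumed):

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)   # real -> 1
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)  # fake -> 0
    return real_loss + fake_loss

generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)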
# -------------------------------
# 5. Training Step and Loop on GPU
# -------------------------------
@tf.function
def train_step(images, labels):
    noise = tf.random.normal([BATCH_SIZE, LATENT_DIM])
    random_labels = tf.random.uniform([BATCH_SIZE, 1], minval=0, maxval=NUM_CLASSES, dtype=tf.int32)
    # The GradientTape block is cut off in the printout; reconstructed here.
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake_images = generator([noise, random_labels], training=True)
        real_output = discriminator([images, labels], training=True)
        fake_output = discriminator([fake_images, random_labels], training=True)
        g_loss = generator_loss(fake_output)
        d_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(g_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(d_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
    return g_loss, d_loss
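The train() function and the train_dataset it consumes are not shown in the printout; a minimal sketch consistent with the constants above and the epoch log below:

train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                 .shuffle(BUFFER_SIZE)
                 .batch(BATCH_SIZE, drop_remainder=True))

def train(dataset, epochs):
    for epoch in range(epochs):
        for images, labels in dataset:
            g_loss, d_loss = train_step(images, labels)
        print(f"Epoch {epoch + 1}/{epochs}")
        print(f"Generator Loss: {float(g_loss):.4f}, Discriminator Loss: {float(d_loss):.4f}")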
EPOCHS = 250
with tf.device('/GPU:0'):
    train(train_dataset, EPOCHS)
Epoch 1/250
Generator Loss: 0.6839, Discriminator Loss: 1.3814
Epoch 2/250
Generator Loss: 0.6770, Discriminator Loss: 1.4080
Epoch 3/250
Generator Loss: 0.7029, Discriminator Loss: 1.4113
Epoch 4/250
Generator Loss: 0.6723, Discriminator Loss: 1.4256
Epoch 5/250
Generator Loss: 0.6918, Discriminator Loss: 1.4012
Epoch 6/250
Generator Loss: 0.6755, Discriminator Loss: 1.3699
Epoch 7/250
Generator Loss: 0.7395, Discriminator Loss: 1.4185
Epoch 8/250
Generator Loss: 0.7247, Discriminator Loss: 1.4029
Epoch 9/250
Generator Loss: 0.7553, Discriminator Loss: 1.3193
Epoch 10/250
Generator Loss: 0.6400, Discriminator Loss: 1.5028
Epoch 11/250
Generator Loss: 0.6821, Discriminator Loss: 1.3818
Epoch 12/250
Generator Loss: 0.6937, Discriminator Loss: 1.3994
Epoch 13/250
Generator Loss: 0.6841, Discriminator Loss: 1.4496
Epoch 14/250
Generator Loss: 0.7226, Discriminator Loss: 1.4325
Epoch 15/250
Generator Loss: 0.7455, Discriminator Loss: 1.3671
Epoch 16/250
Generator Loss: 0.6962, Discriminator Loss: 1.3433
Epoch 17/250
Generator Loss: 0.7197, Discriminator Loss: 1.3869
Epoch 18/250
Generator Loss: 0.7162, Discriminator Loss: 1.4074
Epoch 19/250
Generator Loss: 0.7342, Discriminator Loss: 1.3944
Epoch 20/250
Generator Loss: 0.7146, Discriminator Loss: 1.4116
Epoch 21/250
Generator Loss: 0.7800, Discriminator Loss: 1.3810
Epoch 22/250
Generator Loss: 0.6912, Discriminator Loss: 1.4149
Epoch 23/250
Generator Loss: 0.7153, Discriminator Loss: 1.3769
Epoch 24/250
Generator Loss: 0.7144, Discriminator Loss: 1.3733
Epoch 25/250
Generator Loss: 0.6757, Discriminator Loss: 1.4576
Epoch 26/250
Generator Loss: 0.7196, Discriminator Loss: 1.3844
Epoch 27/250
Generator Loss: 0.6581, Discriminator Loss: 1.4439
Epoch 28/250
Generator Loss: 0.7165, Discriminator Loss: 1.3002
Epoch 29/250
Generator Loss: 0.6969, Discriminator Loss: 1.4068
# -------------------------------
# 6. Generate Images by Class
# -------------------------------
# CIFAR-10 class names
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
def generate_images_by_class(class_name, num_images=10):
    # (function wrapper and plotting reconstructed; cut off in the printout)
    label_index = class_names.index(class_name.strip().lower())
    # Sample random noise and set the label for all generated images
    noise = tf.random.normal([num_images, LATENT_DIM])
    labels = tf.constant([[label_index]] * num_images, dtype=tf.int32)
    images = (generator([noise, labels], training=False) + 1) / 2.0  # rescale to [0, 1]
    plt.figure(figsize=(10, 2))
    for i in range(num_images):
        plt.subplot(1, num_images, i + 1)
        plt.imshow(images[i])
        plt.axis('off')
    plt.show()
# -------------------------------
# 7. User Input and Image Generation
# -------------------------------
# Prompt the user for a class type and generate images from that class
user_class = input("Enter the class type to generate images (e.g., airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck): ")
generate_images_by_class(user_class)
Enter the class type to generate images (e.g., airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck): bird
# -------------------------------
# 7. User Input and Image Generation
# -------------------------------
# Prompt the user for a class type and generate images from that class
user_class = input("Enter the class type to generate images (e.g., airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck): ")
generate_images_by_class(user_class)
Enter the class type to generate images (e.g., airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck): dog