AI-complete Merge
Introduction:
Prolog, which stands for "Programming in Logic," is a declarative programming language
commonly used in artificial intelligence and computational linguistics. Developed in the
1970s by Alain Colmerauer and Philippe Roussel, Prolog originated from the logic
programming paradigm, with its roots deeply embedded in formal logic and mathematical
reasoning. Unlike traditional imperative programming languages such as C++ or Java, Prolog
focuses on the logical relationships between entities rather than the sequence of steps to
achieve a goal. Programs in Prolog are executed by a process called "logical inference,"
where the system derives solutions by matching rules and facts against a query.
History of Prolog:
Prolog's inception can be traced back to the work on logic programming by Robert Kowalski
and Alain Colmerauer in the early 1970s. It was further refined by Colmerauer and Roussel at
the University of Aix-Marseille, France. Prolog gained popularity in the academic and
research communities due to its elegant approach to problem-solving using logic and rule-
based systems. Over the years, Prolog has undergone several revisions and standardizations,
with the most prominent being the Edinburgh Prolog, which laid the groundwork for
subsequent implementations.
Features of Prolog:
1. Declarative Style: Prolog allows programmers to declare what needs to be achieved
rather than explicitly specifying how to achieve it. This promotes a higher level of
abstraction and simplifies problem-solving.
2. Pattern Matching: Prolog employs pattern matching extensively, allowing rules and
facts to be matched against queries, facilitating powerful search and inference
capabilities.
6. Unification: Central to Prolog's operation is the process of unification, which is the
process of finding substitutions for variables that make two logical expressions
equivalent. Unification is used extensively during query resolution and rule
application.
7. Horn Clauses: Prolog programs typically consist of Horn clauses, which are logical
implications in the form of a head and a body. The head represents the goal to be
achieved, and the body consists of conditions that must be satisfied for the goal to be
true.
- Atoms: Represent constants such as names, symbols, or identifiers. An atom normally begins
with a lowercase letter; if it contains special characters or spaces, it must be enclosed in
single quotes.
Here are some examples of atoms in Prolog:
likes(john, pizza).
color('red').
animal(dog).
In these examples, likes, john, pizza, color, red, animal, and dog are all atoms representing
various entities or relationships.
human(socrates).
mortal(X) :- human(X).
In this example, `human(socrates).` is a fact stating that Socrates is human. The rule `mortal(X) :-
human(X).` defines that if X is human, then X is mortal.
Clauses in Prolog:
In Prolog, a clause is a fundamental unit of logic programming. Clauses are statements that
define relationships, properties, or logical implications within a Prolog program. There are
two main types of clauses in Prolog:
Facts: Facts are simple statements that assert the truth of a relationship or property. A
fact in Prolog consists of a predicate, which represents the relationship or property,
followed by a list of arguments. Facts serve as the base knowledge upon which Prolog
programs operate.
Rules: Rules are logical implications that define conditions or relationships. A rule in
Prolog consists of a head and a body. The head specifies the goal to be achieved,
while the body contains conditions that must be satisfied for the goal to be true. Rules
are used to infer new information or to guide the execution of Prolog programs.
Facts in Prolog:
In Prolog, facts are simple statements that assert the truth of a relationship or property. Facts
provide the base knowledge upon which Prolog programs operate. A fact in Prolog consists of
a predicate followed by a list of arguments enclosed in parentheses. The predicate represents
the relationship or property being asserted, while the arguments represent the entities
involved in the relationship. Facts serve as the foundation upon which logical inferences and
queries are made in Prolog programs.
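For example, the facts referred to below can be written as:

likes(john, pizza).
age(susan, 25).
parent(bob, alice).

These assert that John likes pizza, that Susan's age is 25, and that Bob is a parent of Alice.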
In these examples, likes, age, and parent are predicates, while (john, pizza), (susan, 25),
and (bob, alice) are the arguments to those predicates.
Rules in Prolog:
In Prolog, rules are logical implications that define conditions or relationships. A rule in
Prolog consists of a head and a body. The head specifies the goal to be achieved, while the
body contains conditions that must be satisfied for the goal to be true. Rules are used to infer
new information or to guide the execution of Prolog programs. They provide a way to encode
problem-solving strategies or logical inferences within a Prolog program.
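For example:

mortal(X) :- human(X).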
This rule states that "X is mortal if X is human". Here, mortal(X) is the head, and human(X)
is the body.
In summary, in Prolog, facts provide simple assertions about relationships or properties, rules
define logical implications or conditions, and clauses encompass both facts and rules, serving
as the basic building blocks of Prolog programs.
Predicates in Prolog:
In Prolog, a predicate is a fundamental concept representing a relationship, property, or
action. Predicates are used to define logical statements or rules within a Prolog program.
They are composed of a functor, which specifies the name of the predicate, and a number of
arguments enclosed in parentheses. Predicates play a central role in defining the logic and
structure of Prolog programs.
Functor: The functor is the name of the predicate, which identifies the relationship or
property being asserted. It begins with a lowercase letter or an underscore, followed
by a sequence of letters, digits, or underscores. The functor uniquely identifies the
predicate within the Prolog program.
Arguments: The arguments of a predicate represent the entities or values involved in
the relationship or action described by the predicate. Arguments can be variables,
constants, or complex terms. They provide the necessary information to evaluate or
satisfy the predicate.
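For example, the predicates referred to below might appear as the facts:

likes(john, pizza).
age(susan, 25).
parent(bob, alice).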
In these examples, likes, age, and parent are predicates, while (john, pizza), (susan, 25),
and (bob, alice) are the arguments to those predicates.
Queries in Prolog:
In Prolog, a query is a request for information or a solution to a logical problem posed to the
Prolog interpreter. Queries allow users to interact with Prolog programs by asking questions
or making logical inquiries. A query consists of a goal, which is a predicate followed by a list
of arguments enclosed in parentheses. The Prolog interpreter attempts to find solutions or
bindings for the variables in the query that satisfy the predicates and rules defined in the
program.
Goal: The goal of a query is to determine whether a particular predicate or set of
predicates holds true given the current knowledge base and rules defined in the Prolog
program. The goal specifies the desired outcome or condition that the Prolog
interpreter should attempt to satisfy.
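For instance, with the facts and rule shown earlier loaded into the knowledge base, a query session in a GNU Prolog-style interpreter might look like this (a sketch, assuming those clauses are consulted):

?- likes(john, pizza).
yes

?- mortal(socrates).
yes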
Advantages of Prolog:
1. Natural Representation of Problems: Prolog allows problems to be represented
naturally using logical rules and facts, making it well-suited for domains such as
expert systems and natural language processing.
Disadvantages of Prolog:
1. Efficiency: Prolog may not be as efficient as imperative languages for certain types of
tasks due to its computational model and backtracking mechanism.
3. Limited Domain: While Prolog excels in certain domains like symbolic computation
and expert systems, it may not be suitable for performance-critical or highly
procedural tasks.
Applications of Prolog:
1. Expert Systems: Prolog's ability to represent knowledge and perform logical
inference makes it well-suited for building expert systems that emulate human
expertise in specific domains.
2. Natural Language Processing: Prolog has been used in various natural language
processing tasks, including parsing, semantic analysis, and machine translation.
1. Define Predicates:
Identify the key entities and relationships within the export system and represent them as
predicates. Predicates could include:
product(Name, Description, Price): Describes a product with its name, description,
and price.
country(Name, Continent): Represents a country and its continent.
exported_to(Product, Country): Indicates which products are exported to which
countries.
requires_license(Product, Country): Specifies products that require a license for
export to certain countries.
2. Represent Facts:
Define facts to populate your knowledge base with specific information about products,
countries, and export regulations. For example:
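The original fact listing is not reproduced in this copy; an illustrative set of facts (all product and country values assumed) might be:

product(laptop, 'Portable computer', 800).
country(india, asia).
country(germany, europe).
exported_to(laptop, india).
exported_to(laptop, germany).
requires_license(laptop, india).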
3. Define Rules:
Establish rules that govern the export process, considering factors such as licensing
requirements, pricing, and destination countries. For instance:
A product requires a license for export to a specific country if it's listed in the
requires_license/2 predicate.
Certain products might be restricted or prohibited from export to certain countries due
to legal or regulatory constraints.
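A Prolog sketch of such a rule (the rule name export_allowed is assumed for illustration) could be:

% A product may be exported freely if it is exported to the country
% and does not require a license there.
export_allowed(Product, Country) :-
    exported_to(Product, Country),
    \+ requires_license(Product, Country).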
4. Query the Knowledge Base:
Typical queries retrieve:
Products exported to a specific country.
Countries to which a particular product is exported.
Products requiring licenses for export to a given country.
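Using the assumed facts above, such queries might be phrased as:

?- exported_to(Product, india).        % products exported to a specific country
?- exported_to(laptop, Country).       % countries a particular product is exported to
?- requires_license(Product, india).   % products requiring a license for a given country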
Meta-Programming:
2. Code Generation:
Prolog allows you to generate code programmatically based on specific requirements or
conditions. This can be useful for automatically generating complex queries, rules, or entire
programs based on user input, data analysis, or other runtime factors.
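As a minimal illustration (the predicate and fact names are assumed), a program can construct a term at run time with the univ operator (=..) and add it to the knowledge base with assertz/1:

:- dynamic(likes/2).

% add_preference(+Person): build the term likes(Person, prolog) at run time
% and assert it as a new fact.
add_preference(Person) :-
    Fact =.. [likes, Person, prolog],
    assertz(Fact).

% Example query:  ?- add_preference(mary), likes(mary, prolog).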
In summary, meta-programming in Prolog provides powerful capabilities for program
generation, manipulation, abstraction, and introspection. It allows Prolog programs to
dynamically adapt and respond to changing requirements or conditions at runtime, enhancing
their flexibility, expressiveness, and utility.
EXPERIMENT – 2
Steps:
1. Identify Predicate:
2. Define Facts:
Open a text editor, such as Notepad, to write basic facts in PROLOG syntax to
express the provided statements:
3. Save the program directly in the bin folder with a .pl or .prolog extension.
4. Open GNU PROLOG and compile the file using the command consult('exp2prolog.pl').
This command loads the contents of the "exp2prolog.pl" file into the Prolog interpreter,
making its predicates and rules available for querying and execution within the Prolog
environment.
EXPERIMENT – 3
Aim: Write two predicates in PROLOG: one converts centigrade temperatures to Fahrenheit, and
the other checks whether a temperature is below freezing.
Steps:
1. Create a file (e.g. exp3prolog) with a .pl or .prolog extension containing the predicates.
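The original listing is not reproduced in this copy; a minimal sketch of the two predicates (names assumed) might be:

% c_to_f(+C, -F): convert a centigrade temperature C to Fahrenheit F.
c_to_f(C, F) :- F is C * 9 / 5 + 32.

% freezing(+F): succeeds if a Fahrenheit temperature is at or below freezing (32 F).
freezing(F) :- F =< 32.

% Example query:  ?- c_to_f(0, F), freezing(F).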
EXPERIMENT -4
Program:
2. Open the prolog console and use the ‘consult’ command to compile the file and use it
as a knowledge base.
Tree Representation:
EXPERIMENT – 5
Theory:
The Water Jug Problem is a classic puzzle in which you are given two jugs, a 4-gallon jug
(Jug A) and a 3-gallon jug (Jug B), and your goal is to measure out exactly 2 gallons of water
using these jugs. You can fill the jugs, empty them, or pour water from one jug into the other
until you reach the desired amount.
Source Code:
Prolog:
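The original listing is not reproduced in this copy; a minimal depth-first-search sketch of such a solver (predicate names and state representation assumed) could be:

% A state is state(A, B): gallons currently in the 4-gallon jug A and the 3-gallon jug B.
move(state(_, B), state(4, B)).          % fill jug A
move(state(A, _), state(A, 3)).          % fill jug B
move(state(_, B), state(0, B)).          % empty jug A
move(state(A, _), state(A, 0)).          % empty jug B
move(state(A, B), state(NA, NB)) :-      % pour jug B into jug A
    T is A + B, NA is min(T, 4), NB is T - NA.
move(state(A, B), state(NA, NB)) :-      % pour jug A into jug B
    T is A + B, NB is min(T, 3), NA is T - NB.

% path(State, Visited, Moves): succeeds when jug A holds exactly 2 gallons,
% avoiding previously visited states.
path(state(2, _), _, []).
path(S, Visited, [Next|Rest]) :-
    move(S, Next),
    \+ member(Next, Visited),
    path(Next, [Next|Visited], Rest).

% Example query:  ?- solve(Moves).
solve(Moves) :- path(state(0, 0), [state(0, 0)], Moves).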
EXPERIMENT – 6
Source Code:
2. Open the prolog console and use the ‘consult’ command to compile the file and use it
as a knowledge base.
EXPERIMENT – 7
Source Code:
2. Open the prolog console and use the ‘consult’ command to compile the file and use it
as a knowledge base.
EXPERIMENT – 8
Theory:
Hangman is a popular word-guessing game typically played by two or more people. The
game's objective is for one player to guess a hidden word, phrase, or sentence letter by letter
within a limited number of attempts.
1. Setup: One player (the "host") selects a word or phrase and keeps it hidden from the
other player(s). The word or phrase is represented by a series of dashes, each dash
representing a letter. For example, if the word is "hangman", it might be represented as
"-------".
2. Guessing: The other player(s) then take turns guessing letters in an attempt to uncover
the hidden word. The host reveals any correctly guessed letters by replacing the
corresponding dashes with the guessed letter. If the guessed letter is not in the word, the host
marks it as a wrong guess.
3. Incorrect Guesses: For each incorrect guess, the host typically draws part of a gallows
with a hanging stick figure (the "hangman"). The drawing is usually incremental, adding one
part for each wrong guess. The game typically ends if the hangman is completed (usually
consisting of a head, body, arms, and legs), indicating that the guesser has run out of
attempts.
4. Winning and Losing: The guesser(s) win the game if they successfully guess the word
before the hangman is completed. If the hangman is completed before the word is guessed,
the host wins.
Hangman can be played with various levels of difficulty, such as using longer words or
phrases, limiting the number of incorrect guesses allowed, or using less common letters. It's
often played for fun and as a way to practice vocabulary and spelling skills. Additionally, it
can be adapted into digital versions and educational tools.
Source Code:
import random

# Word list for the game (assumed; the original list is not shown in this copy)
words = ["python", "prolog", "hangman", "computer", "language"]

def choose_word(words):
    """Choose a random word from the list."""
    return random.choice(words)

def display_word(word, guessed_letters):
    """Show the word with unguessed letters as dashes (reconstructed helper)."""
    return " ".join(letter if letter in guessed_letters else "-" for letter in word)

def hangman():
    """Main function to play the Hangman game."""
    # Choose a word
    word = choose_word(words)
    # Initialize variables
    guessed_letters = []
    attempts = 6
    print("Welcome to Hangman!")
    print("Try to guess the word.")
    while attempts > 0:
        print(display_word(word, guessed_letters))
        guess = input("Guess a letter: ").lower()
        if guess in guessed_letters:
            print("You already guessed that letter.")
            continue
        elif len(guess) != 1 or not guess.isalpha():
            print("Please enter a single letter.")
            continue
        guessed_letters.append(guess)
        if guess not in word:
            attempts -= 1
            print("Wrong guess! Attempts left:", attempts)
            if attempts == 0:
                print("You ran out of attempts! The word was:", word)
                break
        else:
            print("Correct guess!")
            if all(letter in guessed_letters for letter in word):
                print("You guessed the word:", word)
                break

if __name__ == "__main__":
    hangman()
Output:
EXPERIMENT – 9
Source Code:
1. Create text file.
2. Open the prolog console and use the ‘consult’ command to compile the file and
use it as a knowledge base.
EXPERIMENT – 10
Source Code:
1. Create a text file.
2. Open the prolog console and use the ‘consult’ command to compile the file and use it
as a knowledge base.
EXPERIMENT – 11
Aim: Write a program to remove stop words for a given passage from a text file using
NLTK
Source Code:
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download('stopwords')
nltk.download('punkt')

def remove_stopwords(text):
    tokens = word_tokenize(text)
    stop_words = set(stopwords.words('english'))
    # Keep only the tokens that are not stop words (reconstructed step)
    filtered_text = ' '.join(word for word in tokens if word.lower() not in stop_words)
    return filtered_text

def main():
    # Raw string avoids backslash escapes in the Windows path
    with open(r'C:\College\AI lab\passage.txt', 'r') as file:
        passage = file.read()
    cleaned_passage = remove_stopwords(passage)
    print(cleaned_passage)

if __name__ == "__main__":
    main()
Output:
EXPERIMENT – 12
Aim: Write a program to perform stemming of the words in a given sentence using NLTK.
Source Code:
import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download('punkt')

def stem_sentence(sentence):
    tokens = word_tokenize(sentence)
    porter = PorterStemmer()
    # Stem each token and rebuild the sentence (reconstructed step)
    stemmed_sentence = ' '.join(porter.stem(word) for word in tokens)
    return stemmed_sentence

def main():
    # Input sentence
    sentence = ("Natural language processing is a field of artificial intelligence that "
                "focuses on the interaction between computers and humans through natural language.")
    stemmed_sentence = stem_sentence(sentence)
    print(stemmed_sentence)

if __name__ == "__main__":
    main()
Output:
EXPERIMENT – 13
Aim: Write a program to perform POS (part of speech) tagging for the given sentence using NLTK.
Source Code:
import nltk
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag

nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

def pos_tag_sentence(sentence):
    # Tokenize the sentence
    tokens = word_tokenize(sentence)
    # Tag each token with its part of speech (reconstructed step)
    tagged_tokens = pos_tag(tokens)
    return tagged_tokens

def main():
    # Input sentence
    sentence = ("Natural language processing is a field of artificial intelligence that "
                "focuses on the interaction between computers and humans through natural language.")
    tagged_tokens = pos_tag_sentence(sentence)
    print(tagged_tokens)

if __name__ == "__main__":
    main()
Output:
EXPERIMENT – 14
Aim: Write a program to perform lemmatization for the given sentence using NLTK.
Source Code:
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

nltk.download('punkt')
nltk.download('wordnet')

def lemmatize_sentence(sentence):
    tokens = word_tokenize(sentence)
    lemmatizer = WordNetLemmatizer()
    # Lemmatize each token and rebuild the sentence (reconstructed step)
    lemmatized_sentence = ' '.join(lemmatizer.lemmatize(word) for word in tokens)
    return lemmatized_sentence

def main():
    sentence = ("Natural language processing is a field of artificial intelligence that "
                "focuses on the interaction between computers and humans through natural language.")
    lemmatized_sentence = lemmatize_sentence(sentence)
    print(lemmatized_sentence)

if __name__ == "__main__":
    main()
Output:
EXPERIMENT – 15
Aim: Write a program for Text Classification for the given sentence using NLTK.
Source Code:
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.classify import NaiveBayesClassifier
import random

nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

def preprocess(sentence):
    # Tokenize the sentence
    tokens = word_tokenize(sentence.lower())
    # Remove stopwords
    stop_words = set(stopwords.words('english'))
    filtered_tokens = [word for word in tokens if word not in stop_words]
    # Lemmatize tokens
    lemmatizer = WordNetLemmatizer()
    lemmatized_tokens = [lemmatizer.lemmatize(word) for word in filtered_tokens]
    return lemmatized_tokens

def extract_features(words):
    return dict([(word, True) for word in words])

def train_classifier():
    # Sample training data
    training_data = [
        (preprocess("Natural language processing is a field of artificial intelligence."), "technology"),
        (preprocess("Forests are home to diverse ecosystems."), "nature"),
        (preprocess("Computers can understand and generate human language."), "technology"),
        (preprocess("Mountains offer breathtaking views and fresh air."), "nature"),
        (preprocess("Machine learning algorithms improve with more data."), "technology"),
        (preprocess("Rivers provide water for plants and animals."), "nature")
    ]
    # Convert each token list to a feature dictionary and train a Naive Bayes classifier
    # (reconstructed step)
    training_set = [(extract_features(tokens), category) for tokens, category in training_data]
    classifier = NaiveBayesClassifier.train(training_set)
    return classifier

def classify_sentence(sentence, classifier):
    # Helper name assumed; this function was incomplete in the original listing
    tokens = preprocess(sentence)
    features = extract_features(tokens)
    category = classifier.classify(features)
    return category

def main():
    classifier = train_classifier()
    test_sentences = [
        "The internet has revolutionized communication.",
        "Birds migrate to warmer climates during winter.",
        "Artificial intelligence is shaping the future of technology.",
        "Forests play a crucial role in maintaining ecological balance.",
        "Deep learning models require large datasets for training.",
        "Oceans cover more than 70% of the Earth's surface.",
        "Blockchain technology is transforming various industries.",
        "Wildlife conservation is essential for biodiversity."
    ]
    for sentence in test_sentences:
        print(sentence, "->", classify_sentence(sentence, classifier))

if __name__ == "__main__":
    main()
Output:
ADDITIONAL EXPERIMENT 1
Introduction
In this task, I received a dataset from a client containing sales data. My responsibility was to
analyze and understand the dataset using various data analysis techniques. I delved into the
data to identify patterns and relationships between different variables. After thorough
exploration, I presented my findings to the Data Science team leader. This task demanded
proficiency in data analysis, data visualization, and effective communication.
I undertook the challenge of comprehending the client's data structure and its interrelations.
Leveraging this understanding, I formulated a problem statement that could be addressed
using the available data. This task involved skills in data modeling, problem framing, and
grasping the business context.
In this phase, I constructed a predictive model utilizing the client’s data. Employing machine
learning algorithms, I trained the model and conducted rigorous testing to assess its
performance. Upon completion, I interpreted the results and conveyed them back to the
business stakeholders. This task demanded expertise in machine learning, model evaluation,
and result interpretation.
My role here was to prepare the machine learning model for deployment into a production
environment. I ensured its robustness, reliability, and capability to handle real-world data
seamlessly. Additionally, I made provisions for easy updates and maintenance. This task
required proficiency in software engineering, machine learning, and understanding production
environments.
Task Five: Quality Assurance
The final task involved evaluating the machine learning model's performance in a production
setup. I monitored its behavior over time, ensuring it delivered accurate and dependable
results consistently. Any identified issues were promptly addressed, and I worked on
enhancements to refine the model further. This task called for skills in quality assurance,
problem-solving, and a commitment to continuous improvement.
Each of these tasks presented unique challenges, enabling me to apply and enhance a
diverse set of skills. They provided invaluable insights into the realm of AI professionals
at Cognizant.
Conclusion
ADDITIONAL EXPERIMENT 2
Introduction
The BCG GenAI job simulation on Forage provided a comprehensive insight into the daily tasks
performed by the GenAI team at BCG. The simulation involved tasks such as developing a
chatbot to assist with financial inquiries.
In this task, I was tasked with developing an AI-powered chatbot tailored for analyzing financial
documents. Utilizing Python and Jupyter, I embarked on constructing the chatbot to meet the
specified requirements. Its primary function was to provide prompt and accurate responses to
user inquiries related to finance.
This project stood at the forefront of innovation, situated at the intersection of finance and
generative AI (GenAI), an area of growing interest and investment at BCG. As the newest
member of the team, I assumed the responsibility of spearheading the development of this
chatbot.
1. Understanding the Problem: The initial step involved comprehending the core problem
statement. This entailed grasping the users' needs and delineating the types of queries
the chatbot would encounter.
2. Data Collection and Preparation: Subsequently, I undertook the task of gathering and
refining the data essential for training the chatbot. This included sourcing financial
documents, cleansing the data, and preparing it for integration into a machine learning
model.
3. Model Development: With the data prepared, I proceeded to develop the machine
learning model pivotal for powering the chatbot. This stage involved meticulously
selecting an appropriate model, training it on the curated data, and fine-tuning it to
achieve optimal performance.
4. Testing and Evaluation: Once the model was constructed, it underwent rigorous testing
and evaluation to validate its functionality and performance. This phase involved
subjecting the chatbot to various test scenarios and assessing its responses.
5. Deployment: Upon successful testing and refinement, the chatbot was deemed ready for
deployment. I oversaw its integration into the designated platform, ensuring seamless
functionality and accessibility for end-users.
Each stage of this project presented unique challenges and opportunities for learning. It
provided invaluable insights into the fusion of AI and finance, highlighting the
transformative potential of cutting-edge technologies in addressing real-world
challenges.
Conclusion
The BCG GenAI job simulation provided a valuable opportunity to experience the work of a
Data Scientist at BCG. The tasks in the simulation were challenging and engaging, providing a
comprehensive understanding of the role. The skills and knowledge gained from this
simulation will be invaluable in my future career in AI.