Python for Computer Science and Data Science 2 (CSE 3652)
MINOR ASSIGNMENT-1: OBJECT-ORIENTED PROGRAMMING (OOP)
1. What is the significance of classes in Python programming, and how do they contribute to object-oriented programming?
SIGNIFICANCE:
- Blueprints for Objects: Classes define the properties and behaviour of objects.
- Encapsulation: Bundle data and methods to protect them from outside interference.
- Inheritance: Enable code reuse by creating new classes from existing ones.
- Polymorphism: Allow different classes to be treated as objects of a common superclass.
PROS:
- Modularity: Easier code management, updates, and debugging.
- Abstraction: Simplify complex logic.
- Code Reusability: Reduce redundancy through inheritance and polymorphism.
Classes help create structured, maintainable, and efficient code in Python, as the sketch below shows.
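A minimal sketch illustrating these ideas in one place (the Animal/Dog classes are illustrative, not part of the assignment):

class Animal:                              # blueprint for objects
    def __init__(self, name):
        self._name = name                  # encapsulated state

    def speak(self):
        return f"{self._name} makes a sound"

class Dog(Animal):                         # inheritance: reuses Animal's code
    def speak(self):                       # polymorphism: overrides behaviour
        return f"{self._name} barks"

for animal in (Animal("Generic"), Dog("Rex")):
    print(animal.speak())                  # same call, different behaviour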
2. Create a custom Python class for managing a bank account with basic functionalities like deposit and withdrawal.

class BankAccount:
    def __init__(self, account_number, balance=0):
        self.account_number = account_number
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
        return self.balance
    def withdraw(self, amount):
        if amount > self.balance:
            return "Insufficient funds"
        self.balance -= amount
        return self.balance
    def displayBalance(self):
        print(f"Remaining Balance: {self.balance}")

account = BankAccount("123456780")
account.displayBalance()
account.deposit(100)
account.displayBalance()
account.withdraw(50)
account.displayBalance()

OUTPUT:
Remaining Balance: 0
Remaining Balance: 100
Remaining Balance: 50
3. Create a Book class that contains multiple Chapters, where each Chapter has a title and page count. Write code to initialize a Book object with three chapters and display the total page count of the book.

class Chapter:
    def __init__(self, title, page_count):
        self.title = title
        self.page_count = page_count

class Book:
    def __init__(self, title):
        self.title = title
        self.chapters = []
    def add_chapter(self, chapter):
        self.chapters.append(chapter)
    def total_page_count(self):
        return sum(chapter.page_count for chapter in self.chapters)

chapter1 = Chapter("Chapter 1", 20)
chapter2 = Chapter("Chapter 2", 35)
chapter3 = Chapter("Chapter 3", 45)
book = Book("My Book")
book.add_chapter(chapter1)
book.add_chapter(chapter2)
book.add_chapter(chapter3)
print(f"Total page count: {book.total_page_count()}")

OUTPUT:
Total page count: 100
NAME: SATYABRATA PANDA.
REGD. NO: 2241016112
4. How does Python enforce access control to class attributes, and what is the difference between public, protected, and private attributes?
Python enforces access control to class attributes using naming conventions:
1> PUBLIC ATTRIBUTES: Named with no leading underscore; directly accessible outside of the class.
2> PROTECTED ATTRIBUTES: Named with a single leading underscore; by convention accessed only within the class and its subclasses.
3> PRIVATE ATTRIBUTES: Named with a double leading underscore; not directly accessible from outside, but accessed using getters & setters.

class Example:
    def __init__(self):
        self.public_attr = "I am public"
        self._protected_attr = "I am protected"
        self.__private_attr = "I am private"
    def get_private_attr(self):
        return self.__private_attr

obj = Example()
print(obj.public_attr)
print(obj._protected_attr)
# print(obj.__private_attr)  # Raises an AttributeError; cannot be accessed outside the class
print(obj.get_private_attr())

OUTPUT:
I am public
I am protected
I am private
5. Write a Python program using a Time class to input a given time in 24-hour format and convert it to a
12-hour format with AM/PM. The program should also validate time strings to ensure they are in the
correct HH:MM:SS format. Implement a method to check if the time is valid and return an appropriate
message.
import re

class Time:
    def __init__(self, time_24):
        self.time_24 = time_24
    def is_valid_time(self):
        pattern = r'^([01]\d|2[0-3]):([0-5]\d):([0-5]\d)$'
        return bool(re.match(pattern, self.time_24))
    def convert_12_hour_format(self):
        if not self.is_valid_time():
            return "Invalid Time Format"
        hours, minutes, seconds = map(int, self.time_24.split(':'))
        period = "AM" if hours < 12 else "PM"
        hours = hours % 12 or 12
        return f"{hours:02}:{minutes:02}:{seconds:02} {period}"

time = Time('15:32:54')
if time.is_valid_time():
    print('Valid Time Format')
    print(f"12 Hour Format: {time.convert_12_hour_format()}")
else:
    print('Invalid Time Format')

OUTPUT:
Valid Time Format
12 Hour Format: 03:32:54 PM
6. Write a Python program that uses private attributes for creating a BankAccount class. Implement methods to deposit, withdraw, and display the balance, ensuring direct access to the balance attribute
is restricted. Explain why using private attributes can help improve data security and prevent
accidental modifications.
class BankAccount:
    def __init__(self, account_number, balance=0):
        self.__account_number = account_number
        self.__balance = balance
    def deposit(self, amount):
        if amount > 0:
            self.__balance += amount
        return self.__balance
    def withdraw(self, amount):
        if amount > self.__balance:
            return "Insufficient funds"
        self.__balance -= amount
        return self.__balance
    def display_balance(self):
        return self.__balance

account = BankAccount("123456789")
print(account.deposit(100))
print(account.withdraw(50))
print(account.display_balance())

OUTPUT:
100
50
50

Why private attributes help: Python name-mangles __balance to _BankAccount__balance, so code outside the class cannot reach it with a plain attribute access. Every change must go through deposit() and withdraw(), where validation (positive amounts, sufficient funds) is enforced, which improves data security and prevents accidental modification of the balance.
7. Write a Python program to simulate a card game using object-oriented principles. The program should
include a Card class to represent individual playing cards, a Deck class to represent a deck of cards,
and a Player class to represent players receiving cards. Implement a shuffle method in the Deck
class to shuffle the cards and a deal method to distribute cards to players. Display each player's
hand after dealing.
import random

class Card:
    def __init__(self, suit, rank):
        self.suit = suit
        self.rank = rank
    def __str__(self):
        return f"{self.rank} of {self.suit}"

class Deck:
    suits = ['Hearts', 'Diamonds', 'Clubs', 'Spades']
    ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'Jack', 'Queen', 'King', 'Ace']
    def __init__(self):
        self.cards = [Card(suit, rank) for suit in self.suits for rank in self.ranks]
    def shuffle(self):
        random.shuffle(self.cards)
    def deal(self, num_players, cards_per_player):
        players = [Player(f"Player {i+1}") for i in range(num_players)]
        for _ in range(cards_per_player):
            for player in players:
                if self.cards:
                    player.add_card(self.cards.pop(0))
        return players

class Player:
    def __init__(self, name):
        self.name = name
        self.hand = []
    def add_card(self, card):
        self.hand.append(card)
    def show_hand(self):
        return ', '.join(str(card) for card in self.hand)

deck = Deck()
deck.shuffle()
players = deck.deal(4, 5)
for player in players:
    print(f"{player.name}: {player.show_hand()}")

OUTPUT:
Player 1: 5 of Hearts, 9 of Clubs, 4 of Diamonds, 5 of Diamonds, 3 of Spades
Player 2: 10 of Spades, King of Hearts, 6 of Spades, King of Clubs, Ace of Clubs
Player 3: Ace of Diamonds, 10 of Diamonds, Ace of Spades, Ace of Hearts, 7 of Hearts
8. Write a Python program that defines a base class Vehicle with attributes make and model, and a method display_info(). Create a subclass Car that inherits from Vehicle and adds an additional attribute num_doors. Instantiate both Vehicle and Car objects, call their display_info() methods, and explain how the subclass inherits and extends the functionality of the base class.
class Vehicle:
    def __init__(self, make, model):
        self.make = make
        self.model = model
    def display_info(self):
        print(f"Vehicle Make: {self.make}, Model: {self.model}")

class Car(Vehicle):
    def __init__(self, make, model, num_doors):
        super().__init__(make, model)   # reuse the base class initializer
        self.num_doors = num_doors
    def display_info(self):
        super().display_info()          # call the base class method
        print(f"Number of Doors: {self.num_doors}")

vehicle = Vehicle("Toyota", "Corolla")
car1 = Car("Honda", "Civic", 4)
print("Vehicle Information:")
vehicle.display_info()
print("\nCar Information:")
car1.display_info()

OUTPUT:
Vehicle Information:
Vehicle Make: Toyota, Model: Corolla

Car Information:
Vehicle Make: Honda, Model: Civic
Number of Doors: 4

Explanation: Car inherits make, model, and display_info() from Vehicle by calling super().__init__() in its constructor, and extends the base class by adding num_doors and overriding display_info() to print the extra detail while reusing the inherited behaviour through super().display_info().
9. Write a Python program demonstrating polymorphism by creating a base class Shape with a method
area(), and two subclasses Circle and Rectangle that override the area() method. Instantiate objects
of both subclasses and call the area() method. Explain how polymorphism simplifies working with
different shapes in an inheritance hierarchy.
import math

class Shape:
    def area(self):
        pass

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Rectangle(Shape):
    def __init__(self, length, width):
        self.length = length
        self.width = width
    def area(self):
        return self.length * self.width

circle = Circle(5)
rectangle = Rectangle(4, 6)
print("Circle Area:", circle.area())
print("Rectangle Area:", rectangle.area())

OUTPUT:
Circle Area: 78.53981633974483
Rectangle Area: 24

Explanation: Because Circle and Rectangle both override area(), code can call shape.area() on any Shape without knowing its concrete type; each object supplies its own implementation. New shapes can be added without changing the calling code, which is what makes polymorphism convenient in an inheritance hierarchy.
10. Implement the CommissionEmployee class with __init__, earnings, and __repr__ methods. Include properties for personal details and sales data. Create a test script to instantiate the object, display earnings, modify sales data, and handle data validation errors for negative values.
class CommissionEmployee:
    def __init__(self, name, sales, commission_rate):
        if sales < 0 or commission_rate < 0:
            raise ValueError("Sales and commission rate must be non-negative.")
        self.name = name
        self.sales = sales
        self.commission_rate = commission_rate
    @property
    def earnings(self):
        return self.sales * self.commission_rate
    def __repr__(self):
        return (f"CommissionEmployee(name={self.name}, "
                f"sales={self.sales}, commission_rate={self.commission_rate})")

employee = CommissionEmployee("John Doe", 5000, 0.1)
print(employee)
print("Earnings:", employee.earnings)
employee.sales = 7000
print("Updated Earnings:", employee.earnings)
try:
    invalid_employee = CommissionEmployee("Jane Doe", -1000, 0.1)
except ValueError as e:
    print("Error:", e)

OUTPUT:
CommissionEmployee(name=John Doe, sales=5000, commission_rate=0.1)
Earnings: 500.0
Updated Earnings: 700.0
Error: Sales and commission rate must be non-negative.
11. What is duck typing in Python? Write a Python program demonstrating duck typing by creating a function describe() that accepts any object with a speak() method. Implement two classes, Dog and Robot, each with a speak() method. Pass instances of both classes to the describe() function and explain how duck typing allows the function to work without checking the object's type.

Duck typing means an object's suitability is decided by the methods and attributes it actually provides ("if it quacks like a duck, it's a duck"), not by its declared type.

class Dog:
    def speak(self):
        return "Woof! Woof!"

class Robot:
    def speak(self):
        return "Beep boop!"

def describe(entity):
    print(entity.speak())

dog = Dog()
robot = Robot()
describe(robot)
describe(dog)

OUTPUT:
Beep boop!
Woof! Woof!

describe() never checks isinstance(); it simply calls speak(), so any object that provides a speak() method works. That is how duck typing lets the same function handle Dog and Robot interchangeably.
12. WAP to overload the + operator to perform addition of two complex numbers using a custom
Complex class?
class CustomComplex:
    def __init__(self, real, imag):
        self.real = real
        self.imag = imag
    def __add__(self, other):
        return CustomComplex(self.real + other.real, self.imag + other.imag)
    def __str__(self):
        return f"{self.real} + {self.imag}i"

c1 = CustomComplex(3, 4)
c2 = CustomComplex(1, 2)
c3 = c1 + c2
print("First Complex Number:", c1)
print("Second Complex Number:", c2)
print("Sum:", c3)

OUTPUT:
First Complex Number: 3 + 4i
Second Complex Number: 1 + 2i
Sum: 4 + 6i
13. WAP to create a custom exception class in Python that displays the balance and withdrawal amount
when an error occurs due to insufficient funds?
class InsufficientFundsException(Exception):
    def __init__(self, balance, withdrawal_amount):
        super().__init__(f"Insufficient funds: Balance = {balance}, Withdrawal Amount = {withdrawal_amount}")
        self.balance = balance
        self.withdrawal_amount = withdrawal_amount

class BankAccount:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        if amount > self.balance:
            raise InsufficientFundsException(self.balance, amount)
        self.balance -= amount
        return self.balance

try:
    account = BankAccount(500)
    print("Balance after withdrawal:", account.withdraw(700))
except InsufficientFundsException as e:
    print("Error:", e)

OUTPUT:
Error: Insufficient funds: Balance = 500, Withdrawal Amount = 700
14. Write a Python program using the Card data class to simulate dealing 5 cards to a player from a
shuffled deck of standard playing cards. The program should print the player’s hand and the number
of remaining cards in the deck after the deal.
import random
from dataclasses import dataclass

@dataclass
class Card:
    rank: str
    suit: str

class Deck:
    ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
    suits = ['Hearts', 'Diamonds', 'Clubs', 'Spades']
    def __init__(self):
        self.cards = [Card(rank, suit) for suit in self.suits for rank in self.ranks]
        random.shuffle(self.cards)
    def deal(self, num):
        return [self.cards.pop() for _ in range(num)] if len(self.cards) >= num else []

deck = Deck()
player_hand = deck.deal(5)
print("Player's Hand:")
for card in player_hand:
    print(f"{card.rank} of {card.suit}")
print(f"Remaining cards in deck: {len(deck.cards)}")

OUTPUT:
Player's Hand:
10 of Hearts
3 of Clubs
A of Diamonds
7 of Spades
K of Hearts
Remaining cards in deck: 47
15. How do Python data classes provide advantages over named tuples in terms of flexibility and functionality? Give an example using Python code.
Data classes are mutable, so fields can be updated in place (as birthday() does below), while named tuples are immutable. Data classes also support type hints, default values, and ordinary methods, yet still auto-generate __init__ and __repr__.
from dataclasses import dataclass
from collections import namedtuple

PersonNT = namedtuple("PersonNT", ["name", "age"])
p1 = PersonNT("Alice", 25)
print("NamedTuple:", p1.name, p1.age)

@dataclass
class PersonDC:
    name: str
    age: int
    def birthday(self):
        self.age += 1    # in-place mutation is impossible on a namedtuple

p2 = PersonDC("Bob", 30)
print("Before Birthday:", p2)
p2.birthday()
print("After Birthday:", p2)

OUTPUT:
NamedTuple: Alice 25
Before Birthday: PersonDC(name='Bob', age=30)
After Birthday: PersonDC(name='Bob', age=31)
16. Write a Python program that demonstrates unit testing directly within a function's docstring using the doctest module. Create a function add(a, b) that returns the sum of two numbers and includes multiple test cases in its docstring. Implement a way to automatically run the tests when the script is executed.

import doctest

def add(a, b):
    """
    Returns the sum of two numbers.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    >>> add(0, 0)
    0
    """
    return a + b

if __name__ == "__main__":
    doctest.testmod()
    print("All The Test Cases Passed!!!")

OUTPUT:
All The Test Cases Passed!!!
17. Scope Resolution: object's namespace → class namespace → global namespace → built-in names

species = "Global Species"  # Global namespace

class Animal:
    species = "Class Species"  # Class namespace
    def __init__(self, species):
        self.species = species  # Instance namespace
    def display_species(self):
        print("Instance species:", self.species)        # Looks in instance namespace
        print("Class species:", Animal.species)         # Looks in class namespace
        print("Global species:", globals()['species'])  # Looks in global namespace

a = Animal("Instance Species")
a.display_species()

What will be the output when the above program is executed? Explain the scope resolution process step by step.
Scope Resolution Process:
1. Global Namespace (species)
   • The variable species = "Global Species" is defined at the global level.
2. Class Namespace (Animal.species)
   • Inside the Animal class, species = "Class Species" is defined.
   • This means every instance of Animal will have access to Animal.species unless overridden by an instance variable.
3. Instance Namespace (self.species)
   • When we create an instance a = Animal("Instance Species"), the constructor initializes self.species = "Instance Species", which overrides the class-level species for this instance.
4. Scope Resolution in display_species()
   • self.species → Looks in the instance namespace first ("Instance Species")
   • Animal.species → Looks in the class namespace ("Class Species")
   • globals()['species'] → Looks in the global namespace ("Global Species")
OUTPUT:
Instance species: Instance Species
Class species: Class Species
Global species: Global Species
18. Write a Python program using a lambda function to convert temperatures from Celsius to Kelvin,
store the data in a tabular format using pandas, and visualize the data using a plot.
import pandas as pd
import matplotlib.pyplot as plt

celsius_x = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
kelvin_y = list(map(lambda x: x + 273.15, celsius_x))

data = pd.DataFrame({
    'CELSIUS': celsius_x,
    'KELVIN': kelvin_y
})
print(data)

plt.plot(data['CELSIUS'], data['KELVIN'], marker='o', linestyle='-', color='b')
plt.title('Celsius VS Kelvin Conversion')
plt.xlabel('CELSIUS')
plt.ylabel('KELVIN')
plt.grid(True)
plt.show()

OUTPUT:
    CELSIUS  KELVIN
0         0  273.15
1        10  283.15
2        20  293.15
3        30  303.15
4        40  313.15
5        50  323.15
6        60  333.15
7        70  343.15
8        80  353.15
9        90  363.15
10      100  373.15

[Plot: "Celsius VS Kelvin Conversion", a line chart with CELSIUS on the x-axis and KELVIN on the y-axis]
MINOR ASSIGNMENT-2: COMPUTER SCIENCE THINKING:
RECURSION, SEARCHING, SORTING AND BIG O
1. Write a recursive function power(base, exponent) that, when called, returns base^exponent.

def power(base, exponent):
    if exponent == 0:
        return 1
    else:
        return base * power(base, exponent - 1)

a = int(input("Enter your base number :"))
b = int(input("Enter your exponent number :"))
print(power(a, b))

OUTPUT:
Enter your base number : 2
Enter your exponent number : 3
8
2. The greatest common divisor of integers x and y is the largest integer that evenly divides into both x and y. Write and test a recursive function gcd that returns the greatest common divisor of x and y.

def gcd(x, y):
    if y == 0:
        return x
    else:
        return gcd(y, x % y)

a = int(input("Enter 1st number :"))
b = int(input("Enter 2nd number :"))
print(gcd(a, b))

OUTPUT:
Enter 1st number : 4
Enter 2nd number : 12
4
3. Write a recursive function that takes a number n as an input parameter and prints n-digit strictly increasing numbers.

def generate_numbers(n, start=1, num=""):
    if n == 0:
        print(num, end=",")
        return
    for i in range(start, 10):
        generate_numbers(n - 1, i + 1, num + str(i))

n = int(input("Enter the length of the number :"))
generate_numbers(n)

OUTPUT:
Enter the length of the number : 2
12,13,14,15,16,17,18,19,23,24,25,26,27,28,29,34,35,36,37,38,39,45,46,47,48,49,56,57,58,59,67,68,69,78,79,89
4. Implement a recursive solution for computing the nth Fibonacci number. Then, analyze its time complexity. Propose a more efficient solution and compare the two approaches.

# Naive recursion: O(2^n) time, because each call spawns two more calls
# and the same subproblems are recomputed again and again.
def fibo1(n):
    if n <= 1:
        return n
    else:
        return fibo1(n - 1) + fibo1(n - 2)

# Dynamic programming: O(n) time, because each Fibonacci number is computed once.
def fibonacci_dp(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

num = int(input("Enter your number :"))
print(fibo1(num))
print(fibonacci_dp(num))

OUTPUT:
Enter your number : 5
5
5
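A quick empirical comparison of the two approaches (a minimal sketch using the standard timeit module; n = 25 and number = 10 are arbitrary choices, and fibo1/fibonacci_dp are assumed to be defined as above):

import timeit

print("recursive :", timeit.timeit(lambda: fibo1(25), number=10))
print("dynamic   :", timeit.timeit(lambda: fibonacci_dp(25), number=10))
# the recursive version is already visibly slower at n = 25,
# and the gap widens explosively as n grows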
5. Given an array of N elements, not necessarily in ascending order, devise an algorithm to find the kth largest one. It should run in O(N) time on random inputs.

import random

def partition(arr, left, right):
    pivot = arr[right]
    i = left
    for j in range(left, right):
        if arr[j] >= pivot:      # ">=" orders larger elements first, so index k-1 holds the kth largest
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[right] = arr[right], arr[i]
    return i

def quickselect(arr, left, right, k):
    if left <= right:
        pivot_index = random.randint(left, right)
        arr[pivot_index], arr[right] = arr[right], arr[pivot_index]
        pivot_index = partition(arr, left, right)
        if pivot_index == k:
            return arr[pivot_index]
        elif pivot_index > k:
            return quickselect(arr, left, pivot_index - 1, k)
        else:
            return quickselect(arr, pivot_index + 1, right, k)
    return -1

def find_kth_largest(arr, k):
    n = len(arr)
    return quickselect(arr, 0, n - 1, k - 1)

arr = [3, 21, 5, 6, 4]
ele = int(input("Enter the kth element :"))
print(find_kth_largest(arr, ele))

OUTPUT:
Enter the kth element : 3
5
6. For each of the following code snippets, determine the time complexity in terms of Big O. Explain your answer.

• def example1(n):
      for i in range(n):
          for j in range(n):
              print(i, j)
  Time complexity is O(n^2): there are two nested loops, and the inner loop runs n times for each of the n iterations of the outer loop.

• for i in range(n):
      print(i)
  Time complexity is O(n): a single loop that runs n times.

• def recursive_function(n):
      if n <= 1:
          return 1
      return recursive_function(n - 1) + recursive_function(n - 1)
  Time complexity is O(2^n): every call makes two recursive calls on n - 1, so the number of calls doubles at each level of recursion.
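A rough way to check these classifications empirically (a minimal sketch; the operation counters are illustrative and not part of the assignment):

def count_quadratic(n):
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1          # one unit of work per inner iteration
    return ops

def count_exponential(n):
    if n <= 1:
        return 1
    return 1 + count_exponential(n - 1) + count_exponential(n - 1)

for n in (4, 8, 16):
    print(n, count_quadratic(n), count_exponential(n))
# the quadratic counts grow as n*n (16, 64, 256), while the
# exponential counts roughly double every time n increases by 1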
7. Given N points on a circle, centered at the origin, design an algorithm that determines whether there are two points that are antipodal, i.e., the line connecting the two points goes through the origin. Your algorithm should run in time proportional to N log N.

import math

def find_antipodal(points):
    # Sorting the polar angles dominates the running time: O(N log N)
    angles = sorted(math.atan2(y, x) for x, y in points)
    n, eps = len(angles), 1e-9
    left, right = 0, 1
    # Two-pointer sweep over the sorted angles: O(N)
    while left < n and right < n:
        diff = angles[right] - angles[left]
        if abs(diff - math.pi) < eps:
            return True          # the two points differ by pi radians: antipodal
        if diff < math.pi:
            right += 1
        else:
            left += 1
            right = max(right, left + 1)
    return False

# sample points (the original test data was lost in the scan)
points = [(1, 0), (0, 1), (-1, 0), (2, 2)]
print(find_antipodal(points))

OUTPUT:
True
8. [Question lost in the scan; from the code, it asks to sort famous personalities by net worth using Selection, Bubble, and Insertion Sort on (name, networth) tuples.]

# Bubble Sort
def bubble_sort(arr):
    arr = arr.copy()
    n = len(arr)
    for i in range(n):
        for j in range(n - i - 1):
            if arr[j][1] > arr[j + 1][1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Insertion Sort
def insertion_sort(arr):
    n = len(arr)
    arr = arr.copy()
    for i in range(1, n):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j][1] > key[1]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
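The selection_sort definition and the personalities data did not survive the scan. A minimal sketch consistent with how both are used below; the names and net-worth figures are placeholder assumptions:

# Assumed data: (name, networth) tuples, matching the index-[1] comparisons above
personalities = [("Alice", 120), ("Bob", 85), ("Carol", 240)]

# Selection Sort (reconstructed assumption, sorting by net worth like the other two)
def selection_sort(arr):
    arr = arr.copy()
    n = len(arr)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j][1] < arr[min_idx][1]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr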
sorted_selection = selection_sort(personalities)
sorted_bubble = bubble_sort(personalities)
sorted_insertion = insertion_sort(personalities)

sorted_dict_selection = {name: networth for name, networth in sorted_selection}
sorted_dict_bubble = {name: networth for name, networth in sorted_bubble}
sorted_dict_insertion = {name: networth for name, networth in sorted_insertion}
print("Sorted using Selection Sort :", sorted_dict_selection)
print("Sorted using Bubble Sort : ", sorted_dict_bubble)
print("Sorted using Insertion Sort : ", sorted_dict_insertion)
10. Use Merge Sort to sort a list of strings alphabetically.

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left_half = merge_sort(arr[:mid])
    right_half = merge_sort(arr[mid:])
    return merge(left_half, right_half)

def merge(left, right):
    sorted_list = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            sorted_list.append(left[i])
            i += 1
        else:
            sorted_list.append(right[j])
            j += 1
    sorted_list.extend(left[i:])
    sorted_list.extend(right[j:])
    return sorted_list

words = ['apple', 'orange', 'banana', 'grape']
sorted_words = merge_sort(words)
print(sorted_words)

OUTPUT:
['apple', 'banana', 'grape', 'orange']
11. Without using the built-in sorted() function, write a Python program to merge two pre-sorted lists into a single sorted list using the logic of Merge Sort.

def merge_sorted_lists(list1, list2):
    merged_list = []
    i = j = 0
    while i < len(list1) and j < len(list2):
        if list1[i] < list2[j]:
            merged_list.append(list1[i])
            i += 1
        else:
            merged_list.append(list2[j])
            j += 1
    merged_list.extend(list1[i:])
    merged_list.extend(list2[j:])
    return merged_list

# sample pre-sorted lists (the originals were garbled in the scan)
list1 = [1, 3, 5, 7]
list2 = [2, 4, 6, 8]
result = merge_sorted_lists(list1, list2)
print(result)

OUTPUT:
[1, 2, 3, 4, 5, 6, 7, 8]
Python for Computer Science and Data Science 2 (CSE 3652)
MINOR ASSIGNMENT - 3: NATURAL LANGUAGE PROCESSING
1. Define Natural Language Processing (NLP). Provide three real-world applications of NLP and
explain how they impact society.
Natural Language Processing (NLP) is a branch of artificial intelligence (AI) that focuses on enabling computers to understand, interpret, and generate human language. It combines computational linguistics, machine learning, and deep learning to process text and speech data.
Real-World applications of NLP:-
1. Chatbots and Virtual Assistants (e.g., Siri, Alexa, Google Assistant)
   • NLP-powered chatbots and voice assistants improve customer service by providing instant responses, reducing wait times, and enhancing accessibility. They assist users in daily tasks, such as setting reminders, answering queries, and automating customer support, making life more convenient.
2. Sentiment Analysis (e.g., Social Media Monitoring, Brand Reputation Management)
   • Companies and organizations use NLP to analyze customer feedback, reviews, and social media posts to understand public sentiment. This helps businesses improve products, manage reputations, and detect potential crises, influencing decision-making and customer satisfaction.
3. Machine Translation (e.g., Google Translate, DeepL)
   • NLP enables real-time translation of text and speech, breaking language barriers and fostering global communication. It is crucial in education, business, and international relations, making information more accessible to people worldwide.
2. Explain the following terms and their significance in NLP:
+ Tokenization
+ Stemming
+ Lemmatization
a. Tokenization:-
   i. Tokenization is the process of breaking text into smaller units, called tokens. These tokens can be words, sentences, or subwords, depending on the level of tokenization. It is a fundamental step in Natural Language Processing (NLP) used for text analysis and preprocessing.
   ii. Significance of Tokenization in NLP:-
      1. Helps in text preprocessing for NLP models.
      2. Enables efficient text analysis by breaking down language structures.
      3. Supports various NLP tasks like sentiment analysis, machine translation, and speech recognition.
b. Stemming:-
   i. Stemming is a text normalization technique in Natural Language Processing (NLP) that reduces words to their root or base form by removing prefixes and suffixes. It helps in reducing word variations, making text analysis more efficient.
   ii. Significance of Stemming in NLP:-
      1. Reduces dimensionality in text processing.
      2. Improves search engine efficiency by matching different word forms.
      3. Enhances text analysis for NLP tasks like sentiment analysis and topic modeling.
c. Lemmatization:-
   i. Lemmatization is a text normalization technique in NLP that reduces words to their dictionary (base) form, known as a lemma, while considering the context and meaning of the word. Unlike stemming, lemmatization produces valid words.
   ii. Significance of Lemmatization in NLP:-
      1. Improves text analysis accuracy.
      2. Helps in tasks like search engines, text summarization, and sentiment analysis.
      3. Reduces word variations while maintaining proper meaning, as the sketch below shows.
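A small sketch contrasting the three operations with NLTK (this assumes the punkt and wordnet resources are available; the commented results are what these NLTK tools typically return):

import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download('punkt')
nltk.download('wordnet')

print(word_tokenize("The children are running faster"))      # tokenization
print(PorterStemmer().stem("running"))                       # 'run': crude suffix stripping
print(PorterStemmer().stem("children"))                      # 'children': a stem need not be a real word
print(WordNetLemmatizer().lemmatize("running", pos="v"))     # 'run': dictionary-based
print(WordNetLemmatizer().lemmatize("children"))             # 'child': always a valid word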
3. What is Part-of-Speech (POS) tagging? Discuss its importance with an example.
Part-of-Speech (POS) tagging is the process of assigning grammatical categories (e.g., noun, verb, adjective) to each word in a sentence based on its meaning and context.
• Importance of POS tagging in NLP:-
  - Enhances text understanding for NLP models.
  - Improves lemmatization.
  - Used in Named Entity Recognition (NER), parsing, machine translation, and text-to-speech applications.
• EXAMPLE:

from textblob import TextBlob
text = TextBlob('The quick brown fox jumps over the lazy dog')
print(text.tags)

OUTPUT:
[('The', 'DT'), ('quick', 'JJ'), ('brown', 'NN'), ('fox', 'NN'), ('jumps', 'VBZ'), ('over', 'IN'), ('the', 'DT'), ('lazy', 'JJ'), ('dog', 'NN')]
4. Create a TextBlob named exercise_blob containing "This is a TextBlob".

from textblob import TextBlob

exercise_blob = TextBlob("This is a TextBlob")
print(exercise_blob)

OUTPUT:
This is a TextBlob
5. Write a Python script to perform the following tasks on the given text:
• Tokenize the text into words and sentences.
• Perform stemming and lemmatization using NLTK or SpaCy.
• Remove stop words from the text.
• Sample Text: "Natural Language Processing enables machines to understand and process human languages. It is a fascinating field with numerous applications, such as chatbots and language translation."
## Using NLTK
import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

text = "Natural Language Processing enables machines to understand and process human languages. It is a fascinating field with numerous applications, such as chatbots and language translation."

## Sentence Tokenization
sentences = sent_tokenize(text)
print("\nSentence Tokenization:")
print(sentences)

## Word Tokenization
words = word_tokenize(text)
print("\nWord Tokenization:")
print(words)

## Stemming
stemmer = PorterStemmer()
stemmed_words = [stemmer.stem(word) for word in words]
print("\nAfter Stemming:")
print(stemmed_words)

## Lemmatization
lemmatizer = WordNetLemmatizer()
lemmatized_words = [lemmatizer.lemmatize(word) for word in words]
print("\nAfter Lemmatization:")
print(lemmatized_words)

## Remove stop words
stop_words = set(stopwords.words('english'))
filtered_words = [word for word in words if word.lower() not in stop_words]
print("\nAfter Removing Stop Words:")
print(filtered_words)
OUTPUT:

Sentence Tokenization:
['Natural Language Processing enables machines to understand and process human languages.', 'It is a fascinating field with numerous applications, such as chatbots and language translation.']

Word Tokenization:
['Natural', 'Language', 'Processing', 'enables', 'machines', 'to', 'understand', 'and', 'process', 'human', 'languages', '.', 'It', 'is', 'a', 'fascinating', 'field', 'with', 'numerous', 'applications', ',', 'such', 'as', 'chatbots', 'and', 'language', 'translation', '.']

After Stemming:
['natur', 'languag', 'process', 'enabl', 'machin', 'to', 'understand', 'and', 'process', 'human', 'languag', '.', 'it', 'is', 'a', 'fascin', 'field', 'with', 'numer', 'applic', ',', 'such', 'as', 'chatbot', 'and', 'languag', 'translat', '.']

After Lemmatization:
['Natural', 'Language', 'Processing', 'enables', 'machine', 'to', 'understand', 'and', 'process', 'human', 'language', '.', 'It', 'is', 'a', 'fascinating', 'field', 'with', 'numerous', 'application', ',', 'such', 'a', 'chatbots', 'and', 'language', 'translation', '.']

After Removing Stop Words:
['Natural', 'Language', 'Processing', 'enables', 'machines', 'understand', 'process', 'human', 'languages', '.', 'fascinating', 'field', 'numerous', 'applications', ',', 'chatbots', 'language', 'translation', '.']
6. Web Scraping with the Requests and Beautiful Soup Libraries:
+ Use the requests library to download the www.python.org home page’s content.
+ Use the Beautiful Soup library to extract only the text from the page.
+ Eliminate the stop words in the resulting text, then use the wordcloud module to create a
word cloud based on the text.
import requests
from bs4 import BeautifulSoup
import nltk
from nltk.corpus import stopwords
import string
from wordcloud import WordCloud
import matplotlib.pyplot as plt

nltk.download('stopwords')

url = 'http://www.python.org/'
response = requests.get(url)
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    text = soup.get_text()
    stop_words = set(stopwords.words('english'))
    words = text.split()
    filtered_words = [word.lower() for word in words if word.lower() not in stop_words and word not in string.punctuation]
    filtered_text = ' '.join(filtered_words)
    word_cloud = WordCloud(width=600, height=400, background_color='white').generate(filtered_text)
    plt.figure(figsize=(10, 10))
    plt.imshow(word_cloud, interpolation='bilinear')
    plt.axis('off')
    plt.show()
else:
    print(f"Failed to retrieve page, status code: {response.status_code}")
[Word cloud image generated from the python.org home page text]
7. (Tokenizing Text and Noun Phrases) Using the text from the above problem, create a TextBlob, then tokenize it into Sentences and Words, and extract its noun phrases.

from textblob import TextBlob
import nltk
nltk.download('punkt')
nltk.download('brown')

text = "Natural Language Processing enables machines to understand and process human languages. It is a fascinating field with numerous applications, such as chatbots and language translation."

blob = TextBlob(text)

print('Tokenized into sentences:-')
sentences = blob.sentences
for sentence in sentences:
    print(sentence)
print('\n')

print('Tokenized into words:-')
words = blob.words
print(words)
print('\n')

print('Noun Phrases:-')
noun_phrases = blob.noun_phrases
print(noun_phrases)

OUTPUT:
Tokenized into sentences:-
Natural Language Processing enables machines to understand and process human languages.
It is a fascinating field with numerous applications, such as chatbots and language translation.

Tokenized into words:-
['Natural', 'Language', 'Processing', 'enables', 'machines', 'to', 'understand', 'and', 'process', 'human', 'languages', 'It', 'is', 'a', 'fascinating', 'field', 'with', 'numerous', 'applications', 'such', 'as', 'chatbots', 'and', 'language', 'translation']

Noun Phrases:-
['language processing', 'process human languages', 'numerous applications', 'language translation']
8. (Sentiment of a News Article) Using the techniques in problem no. 5, download a web page for a current news article and create a TextBlob. Display the sentiment for the entire TextBlob and for each Sentence.

import requests
from bs4 import BeautifulSoup
from textblob import TextBlob

url = 'https://www.thehindu.com/news/international/myanmar-thailand-earthquake-death-toll-live-updates-march-30-2025/article69302738.ece'
response = requests.get(url)
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    paragraphs = soup.find_all('p')
    article_text = ' '.join(p.get_text() for p in paragraphs)
    blob = TextBlob(article_text)
    print('Overall sentiment of the article:-')
    print(blob.sentiment)
    print('Sentiment of each sentence:-')
    for sentence in blob.sentences:
        print(f'{sentence}\n Sentiment: {sentence.sentiment}\n')
else:
    print(f'Failed to retrieve the page, Status Code: {response.status_code}')
9. (Sentiment of a News Article with the NaiveBayesAnalyzer) Repeat the previous exercise but use the NaiveBayesAnalyzer for sentiment analysis.

import requests
from bs4 import BeautifulSoup
from textblob import TextBlob
from textblob.sentiments import NaiveBayesAnalyzer
import nltk
nltk.download('movie_reviews')

url = 'https://www.thehindu.com/news/international/myanmar-thailand-earthquake-death-toll-live-updates-march-30-2025/article69392738.ece'
response = requests.get(url)
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    paragraphs = soup.find_all('p')
    article_text = ' '.join(p.get_text() for p in paragraphs)
    blob = TextBlob(article_text, analyzer=NaiveBayesAnalyzer())
    print('Overall Sentiment:', blob.sentiment, '\n')
    print('Sentiment Analysis of each statement:-')
    for sentence in blob.sentences:
        print(f'{sentence}\n Sentiment: {sentence.sentiment}\n')
else:
    print(f'Failed to retrieve the page, Status Code: {response.status_code}')
10. (Spell Check a Project Gutenberg Book) Download a Project Gutenberg book and create a TextBlob. Tokenize the TextBlob into Words and determine whether any are misspelled. If so, display the possible corrections.

import requests
from textblob import TextBlob

url = 'https://www.gutenberg.org/cache/epub/75747/pg75747.txt'
response = requests.get(url)
if response.status_code == 200:
    text = response.text
    blob = TextBlob(text)
    words = blob.words
    misspelled_words = {}
    for word in words[:500]:
        correct_word = word.correct()
        if word.lower() != correct_word.lower():
            misspelled_words[word] = correct_word
    if misspelled_words:
        print('Misspelled Words & Their Corrections:')
        for original, suggested in misspelled_words.items():
            print(f'{original} -> {suggested}')
    else:
        print('No misspelled words found!!')
else:
    print(f"Failed to download book. Status Code: {response.status_code}")
11. Write a Python program that takes user input in English and translates it to French, Spanish, and German using TextBlob.
* Create a program that takes multiple user-inputted sentences, analyzes polarity and subjectivity, and categorizes them as objective/subjective and positive/negative/neutral.
* Develop a function that takes a paragraph, splits it into sentences, and calculates the sentiment score for each sentence individually.
* Write a program that takes a sentence as input and prints each word along with its POS tag using TextBlob.
* Create a function that takes a user-inputted word, checks its spelling using TextBlob, and suggests the top 3 closest words if a mistake is found.
* Build a Python script that extracts all adjectives from a given paragraph and prints them in order of occurrence.
* Write a program that takes a news article as input and extracts the top 5 most common noun phrases as keywords.
* Write a program that summarizes a given paragraph by keeping only the most informative sentences, based on noun phrase frequency.
from textblob import TextBlob, Word
from collections import Counter
from googletrans import Translator
import requests
from bs4 import BeautifulSoup

def translate_text():
    text = input('Enter text in English: ')
    translator = Translator()
    try:
        print('French:', translator.translate(text, dest='fr').text)
        print('Spanish:', translator.translate(text, dest='es').text)
        print('German:', translator.translate(text, dest='de').text)
    except Exception as e:
        print('Translation failed:', e)
# translate_text()

def analyze_sentiment():
    text = input('Enter multiple sentences: ')
    blob = TextBlob(text)
    polarity, subjectivity = blob.sentiment.polarity, blob.sentiment.subjectivity
    polarity = 'Positive' if polarity > 0 else 'Negative' if polarity < 0 else 'Neutral'
    subjectivity = 'Subjective' if subjectivity > 0.5 else 'Objective'
    print(f'Polarity: {polarity}, Subjectivity: {subjectivity}')
# analyze_sentiment()

def sentence_sentiment():
    text = input('Enter a Paragraph: ')
    blob = TextBlob(text)
    for sentence in blob.sentences:
        polarity, subjectivity = sentence.sentiment.polarity, sentence.sentiment.subjectivity
        print(f'Sentence: {sentence}\nPolarity: {polarity}, Subjectivity: {subjectivity}')
# sentence_sentiment()

def pos_tagging():
    text = input('Enter a sentence: ')
    blob = TextBlob(text)
    for word, tag in blob.tags:
        print(f'{word} -> {tag}')
# pos_tagging()

def spell_checker():
    word = input("Enter a word: ")
    blob = TextBlob(word)
    if blob.correct() == word:
        print('Correct spelling!')
    else:
        print(f"Did you mean: {blob.correct()}?")
        suggestions = Word(word).spellcheck()[:3]
        print('Suggestions:')
        for suggestion in suggestions:
            print(suggestion[0])
# spell_checker()

def extract_adjectives():
    text = input('Enter a paragraph: ')
    blob = TextBlob(text)
    adjectives = [word for word, tag in blob.tags if tag.startswith('JJ')]
    print('Adjectives:', adjectives)
# extract_adjectives()

def extract_noun_phrases():
    url = 'https://www.thehindu.com/news/national/telangana/400-acres-in-kancha-gachibowli-doesnt-belong-to-uoh-telangana-govt/article69395769.ece'
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'html.parser')
        text = soup.get_text(separator=' ')
        blob = TextBlob(text)
        noun_phrases = Counter(blob.noun_phrases)
        top_noun = noun_phrases.most_common(5)
        print('Top 5 Keywords:', top_noun)
    else:
        print("Failed to retrieve article.")
# extract_noun_phrases()

def summarize_paragraph():
    text = input('Enter a paragraph: ')
    blob = TextBlob(text)
    phrase_counts = Counter(blob.noun_phrases)
    sentences = sorted(blob.sentences, key=lambda s: sum(phrase_counts[p] for p in s.noun_phrases), reverse=True)
    summary = ' '.join(str(s) for s in sentences[:3])
    print('Summary:', summary)
summarize_paragraph()

OUTPUT:
Enter a paragraph: I am evil, But for greater good, I am domeneering!!
Summary: I am evil, But for greater good, I am domeneering!!
12. Write a Python program that takes a word as input and returns:
* Its definition
* Its synonyms
* Its antonyms (if available)

from nltk.corpus import wordnet
import nltk
nltk.download('wordnet')

def get_word_info(word):
    synonyms = set()
    antonyms = set()
    synsets = wordnet.synsets(word)
    if not synsets:
        print('No definition found for the given word')
        return
    print(f"Definition of '{word}': {synsets[0].definition()}")
    for syn in synsets:
        for lemma in syn.lemmas():
            synonyms.add(lemma.name())
            if lemma.antonyms():
                antonyms.add(lemma.antonyms()[0].name())
    print(f"Synonyms: {', '.join(synonyms) if synonyms else 'None'}")
    print(f"Antonyms: {', '.join(antonyms) if antonyms else 'None'}")

word = input("Enter a word: ").strip()
get_word_info(word)
13. Write a Python program that reads a .txt file, processes the text, and generates a word cloud visualization.
• Create a word cloud in the shape of an object (e.g., a heart, star) using WordCloud and a mask image.
import numpy as np
import matplotlib.pyplot as plt
from wordcloud import WordCloud
from PIL import Image
import nltk
from nltk.corpus import stopwords
import string

nltk.download('stopwords')

def generate_wordcloud(text_file, mask_image):
    # Read text from file
    with open(text_file, "r", encoding="utf-8") as file:
        text = file.read()
    stop_words = set(stopwords.words('english'))
    words = text.split()
    cleaned_words = [word.lower().strip(string.punctuation) for word in words if word.lower() not in stop_words]
    cleaned_text = " ".join(cleaned_words)
    mask = np.array(Image.open(mask_image))
    wordcloud = WordCloud(width=800, height=400, background_color="white", mask=mask,
                          contour_color='black', contour_width=1).generate(cleaned_text)
    plt.figure(figsize=(10, 10))
    plt.imshow(wordcloud, interpolation='bilinear')
    plt.axis('off')
    plt.show()

# File paths
text_file = "/Users/himadribose/Desktop/story.txt"
mask_image = "/Users/himadribose/Desktop/star.png"
generate_wordcloud(text_file, mask_image)
14. (Textatistic: Readability of News Articles) Using the above techniques, download from
several news sites current news articles on the same topic. Perform readability assessments
on them to determine which sites are the most readable. For each article, calculate the
average number of words per sentence, the average number of characters per word and the
average number of syllables per word.
import requests
from bs4 import BeautifulSoup
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
import textstat

nltk.download('punkt')

def get_article_text(url):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        paragraphs = soup.find_all('p')
        text = ' '.join(para.get_text() for para in paragraphs)
        return text if text else None
    except requests.exceptions.RequestException as e:
        print(f"Error fetching article from {url}: {e}")
        return None

def analyze_readability(text):
    """Calculate readability metrics"""
    sentences = sent_tokenize(text)
    words = word_tokenize(text)
    if len(sentences) == 0 or len(words) == 0:
        return None
    avg_words_per_sentence = len(words) / len(sentences)
    avg_chars_per_word = sum(len(word) for word in words) / len(words)
    avg_syllables_per_word = sum(textstat.syllable_count(word) for word in words) / len(words)
    readability_score = textstat.flesch_reading_ease(text)
    return {
        "Average Words per Sentence": avg_words_per_sentence,
        "Average Characters per Word": avg_chars_per_word,
        "Average Syllables per Word": avg_syllables_per_word,
        "Flesch Reading Ease Score": readability_score
    }

news_urls = [
    "https://timesofindia.indiatimes.com/sports/cricket/ipl/top-stories/lsg-vs-pbks-preview-ipl-2025-rishabh-pant-aims-to-regain-form-as-lucknow-super-giants-host-punjab-kings/articleshow/119803314.cms",
    "https://timesofindia.indiatimes.com/business/international-business/nokia-reaches-settlement-with-amazon-over-video-technology-patents/articleshow/119801839.cms"
]

for url in news_urls:
    print(f"Analyzing: {url}")
    article_text = get_article_text(url)
    if article_text:
        readability_metrics = analyze_readability(article_text)
        if readability_metrics:
            for key, value in readability_metrics.items():
                print(f"{key}: {value:.2f}")
            print('-' * 50)
        else:
            print("Insufficient data for readability analysis.")
    else:
        print("Failed to fetch article text.")
OUTPUT:
Analyzing: https://timesofindia.indiatimes.com/sports/cricket/ipl/top-stories/lsg-vs-pbks-preview-ipl-2025-rishabh-pant-aims-to-regain-form-as-lucknow-super-giants-host-punjab-kings/articleshow/119803314.cms
Average Words per Sentence: 37.71
Average Characters per Word: 5.08
Average Syllables per Word: 1.56
Flesch Reading Ease Score: 32.16
--------------------------------------------------
Analyzing: https://timesofindia.indiatimes.com/business/international-business/nokia-reaches-settlement-with-amazon-over-video-technology-patents/articleshow/119801839.cms
Average Words per Sentence: 63.00
Average Characters per Word: 5.09
Average Syllables per Word: 1.62
Flesch Reading Ease Score: 13.89
15. (spaCy: Named Entity Recognition) Using the above techniques, download a current news article, then use the spaCy library's named entity recognition capabilities to display the named entities (people, places, organizations, etc.) in the article.

import requests
from bs4 import BeautifulSoup
import spacy

# Load spaCy's English model
nlp = spacy.load("en_core_web_sm")

def get_article_text(url):
    """Fetch article text from a news website"""
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'html.parser')
        paragraphs = soup.find_all('p')
        text = ' '.join(para.get_text() for para in paragraphs)
        return text
    else:
        return None

def extract_named_entities(text):
    """Extract named entities using spaCy"""
    doc = nlp(text)
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    return entities

# Example news article URL (Replace with a real URL)
news_url = "https://timesofindia.indiatimes.com/business/international-business/nokia-reaches-settlement-with-amazon-over-video-technology-patents/articleshow/119801839.cms"
article_text = get_article_text(news_url)
if article_text:
    named_entities = extract_named_entities(article_text)
    print('Named Entities in the Article:')
    for entity, label in named_entities:
        print(f"{entity} -> {label}")
else:
    print("Failed to fetch article text.")
OUTPUT:
Named Entities in the Article:
The TOI Business Desk -> ORG
The Times of India -> ORG
the TOI Business Desk -> ORG
TOI -> ORG
@ -> CARDINAL
Maa Durga -> PERSON
Studio -> GPE
6 -> CARDINAL
UNESCO -> ORG
Rajasthan Amruta Khanvilkar's -> PERSON
10 -> CARDINAL
5 -> CARDINAL
9 -> CARDINAL
daily -> DATE
Anasuya Bharadwaj -> PERSON
monthly -> DATE
PPF Check -> ORG
NPS A Mutual Fund -> ORG
16. (spaCy: Similarity Detection) [The scan repeats question 17's text here; from the code, the task is:] Using the above techniques, download two Shakespeare plays from Project Gutenberg, for example Romeo and Juliet and a comedy, then use the spaCy library to compare them for similarity.
import spacy
import requests

# Load spaCy's large English model (has word vectors)
nlp = spacy.load("en_core_web_lg")

def get_gutenberg_text(url):
    """Fetch and clean text from a Project Gutenberg book"""
    response = requests.get(url)
    if response.status_code == 200:
        text = response.text
        start = text.find("*** START OF THIS PROJECT GUTENBERG EBOOK")
        end = text.find("*** END OF THIS PROJECT GUTENBERG EBOOK")
        if start != -1 and end != -1:
            text = text[start:end]  # Extract actual play content
        return text
    else:
        return None

# Project Gutenberg URLs for Shakespeare plays (change if needed)
romeo_url = "https://www.gutenberg.org/cache/epub/1112/pg1112.txt"   # Romeo and Juliet
comedy_url = "https://www.gutenberg.org/cache/epub/1514/pg1514.txt"  # A Midsummer Night's Dream

# Get the text from Gutenberg
romeo_text = get_gutenberg_text(romeo_url)
comedy_text = get_gutenberg_text(comedy_url)

# Convert to spaCy docs
if romeo_text and comedy_text:
    romeo_doc = nlp(romeo_text)
    comedy_doc = nlp(comedy_text)
    # Compute similarity
    similarity_score = romeo_doc.similarity(comedy_doc)
    print(f"Similarity between 'Romeo and Juliet' and 'A Midsummer Night's Dream': {similarity_score:.2f}")
else:
    print("Failed to fetch one or both plays. Check URLs.")
17. (textblob.utils Utility Functions) Use the strip_punc and lowerstrip functions of TextBlob's textblob.utils module with the all=True keyword argument to remove punctuation and to get a string in all lowercase letters with whitespace and punctuation removed. Experiment with each function on Romeo and Juliet.

import requests
from textblob.utils import strip_punc, lowerstrip

## Fetch Romeo and Juliet text from Project Gutenberg
url = 'https://www.gutenberg.org/files/1513/1513-0.txt'
response = requests.get(url)
if response.status_code == 200:
    text = response.text
else:
    print("Failed to fetch text")
    exit()

# Take a sample of text
sample_text = text[:500]  # First 500 characters for testing

# Apply strip_punc and lowerstrip
clean_text = strip_punc(sample_text, all=True)       # Remove punctuation
lower_clean_text = lowerstrip(clean_text, all=True)  # Convert to lowercase and strip whitespace

# Print results
print("Original Text Sample:")
print(sample_text)
print("\nAfter strip_punc:")
print(clean_text)
print("\nAfter lowerstrip:")
print(lower_clean_text)
OUTPUT:

Original Text Sample:
*** START OF THE PROJECT GUTENBERG EBOOK 1513 ***

THE TRAGEDY OF ROMEO AND JULIET

by William Shakespeare

Contents

THE PROLOGUE.

ACT I
Scene I. A public place.
Scene II. A Street.
Scene III. Room in Capulet's House.
Scene IV. A Street.
Scene V. A Hall in Capulet's House.

ACT II
CHORUS.
Scene I. An open place adjoining Capulet's Garden.
Scene II. Capulet's Garden.
Scene III. Friar Lawrence's Cell.
Scene IV. A Street.
Scene V. Capulet's Garden.
Scene VI. Fr

After strip_punc:
START OF THE PROJECT GUTENBERG EBOOK 1513

THE TRAGEDY OF ROMEO AND JULIET

by William Shakespeare

Contents

THE PROLOGUE

ACT I
Scene I A public place
Scene II A Street
Scene III Room in Capulet's House
Scene IV A Street
Scene V A Hall in Capulet's House

ACT II
CHORUS
Scene I An open place adjoining Capulet's Garden
Scene II Capulet's Garden
Scene III Friar Lawrence's Cell
Scene IV A Street
Scene V Capulet's Garden
Scene VI Fr

After lowerstrip:
start of the project gutenberg ebook 1513
the tragedy of romeo and juliet
by william shakespeare
contents
the prologue
act i
scene i a public place
scene ii a street
scene iii room in capulet's house
scene iv a street
scene v a hall in capulet's house
act ii
chorus
scene i an open place adjoining capulet's garden
scene ii capulet's garden
scene iii friar lawrence's cell
scene iv a street
scene v capulet's garden
scene vi fr