Python_Developer_Interview_Prep
An OrderedDict is a dictionary subclass that remembers the order in which keys were first inserted. Since Python 3.7 the built-in dict also preserves insertion order, so the main differences between dict() and OrderedDict() are OrderedDict's order-sensitive equality comparison and its extra order-manipulation methods such as move_to_end().
There are several important points related to Python dictionary ordering; we discuss the key ones below.
Example: The Python code below uses an OrderedDict to demonstrate changing the value associated with a specific key. Initially, it creates an OrderedDict with keys 'a' through 'd' and respective values 1 through 4.
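The example code itself is not included here; a minimal sketch consistent with the description (keys 'a' to 'd', values 1 to 4) might be:

```python
from collections import OrderedDict

od = OrderedDict([('a', 1), ('b', 2), ('c', 3), ('d', 4)])
od['b'] = 20                 # changing a value does not change the key's position
print(list(od.items()))      # insertion order is preserved
```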
Equality Comparison in Python Dictionary Order
OrderedDicts in Python can be compared for equality not only based on their content but also
considering the order of insertion. This is useful when comparing two OrderedDicts for both
key-value pairs and their order.
Example: The code creates two OrderedDicts, `od1` and `od2`, with different orderings of key-value pairs. It then demonstrates that the order of insertion is considered when comparing them for equality using the `==` operator, resulting in `False`.
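A sketch of such a comparison (pair values chosen for illustration):

```python
from collections import OrderedDict

od1 = OrderedDict([('a', 1), ('b', 2)])
od2 = OrderedDict([('b', 2), ('a', 1)])
print(od1 == od2)              # False: same pairs, different insertion order
print(dict(od1) == dict(od2))  # True: plain dicts ignore order in ==
```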
OrderedDict Reversal in Python Dictionary Order:
After initializing an `OrderedDict` named `my_dict` with specific key-value pairs, the code
attempts to reverse the order of these pairs using the reverse() method. However, `OrderedDict`
does not natively support a `reverse()` method. To reverse the order, the code correctly employs
Python’s reversed() function combined with list() and items() to obtain a reversed list of
key-value pairs. This reversed list is then used to construct a new `OrderedDict` named
`reversed_dict`, demonstrating the ability to reverse the order of elements in an `OrderedDict`
while preserving the original key-value associations.
Example: The code below creates an OrderedDict named my_dict with elements in a specific order, uses reversed() along with list() and items() to reverse the list of key-value pairs from my_dict, then reconstructs the dictionary in reversed order with OrderedDict() and prints each key-value pair.
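A minimal sketch of this reversal (keys and values assumed for illustration):

```python
from collections import OrderedDict

my_dict = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
# reversed() + list() + items() gives the pairs in reverse insertion order
reversed_dict = OrderedDict(reversed(list(my_dict.items())))
for key, value in reversed_dict.items():
    print(key, value)
```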
OrderedDict popitem() in Python Dictionary Order:
The popitem() method in OrderedDict can be used with the last parameter to remove and return the last inserted key-value pair. This is useful when you want to process items in a last-in, first-out manner. Using `popitem(last=True)` (the default) on an OrderedDict removes and returns the most recently added item; `popitem(last=False)` removes the first inserted pair instead, providing flexibility in managing the order of elements.
Example: The code below uses an OrderedDict and applies the `popitem` method with `last=True` to remove and store the last inserted key-value pair. It then prints the removed item, resulting in the output: `('c', 3)`.
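A sketch matching that output:

```python
from collections import OrderedDict

od = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
item = od.popitem(last=True)   # remove and return the most recently inserted pair
print(item)                    # ('c', 3)
```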
Example: The Python code below demonstrates deletion, re-insertion, and printing of items in an OrderedDict. It first prints the OrderedDict items, then deletes the entry with key 'c', prints the updated OrderedDict, and finally re-inserts 'c' with its value, printing the OrderedDict again.
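A minimal sketch of those steps (values assumed for illustration); note the re-inserted key moves to the end:

```python
from collections import OrderedDict

od = OrderedDict([('a', 1), ('b', 2), ('c', 3), ('d', 4)])
print(list(od.items()))
del od['c']                    # delete the entry with key 'c'
print(list(od.items()))
od['c'] = 3                    # re-insert: 'c' now becomes the last key
print(list(od.items()))
```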
Why can we have dictionary keys as tuples and not lists?
In Python, dictionary keys must be immutable, which means their value cannot change after
they are created. This requirement ensures that the dictionary's internal hash table can reliably
determine the key's location and maintain consistent lookups and deletions.
Here's why tuples can be used as dictionary keys but lists cannot:
Immutability:
Tuples: Once a tuple is created, its contents cannot be modified. This immutability allows tuples
to have a consistent hash value throughout their lifetime, making them suitable for use as
dictionary keys.
Lists: Lists are mutable; their contents can be changed after they are created. Because lists can
be altered, their hash value could change if the list's contents are modified. This could lead to
inconsistencies in the dictionary's internal hash table, making lists unsuitable for dictionary keys.
Hashing:
Tuples: Since tuples are immutable, their hash value is fixed and can be used reliably for
dictionary operations. This fixed hash value ensures that the dictionary can quickly and
accurately locate the corresponding value for a given key.
Lists: Lists, being mutable, do not have a consistent hash value. Python does not allow lists to
be hashed because their contents could change, potentially affecting their hash and causing
problems with the dictionary's internal structure.
To summarize, the key requirement for dictionary keys is immutability, which is why
tuples (which are immutable) can be used as keys, while lists (which are mutable) cannot.
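The contrast can be shown in a couple of lines:

```python
d = {(1, 2): "point"}          # a tuple key works: its hash can never change
error = None
try:
    d[[1, 2]] = "point"        # a list key is rejected because lists are mutable
except TypeError as e:
    error = e
print(error)                   # unhashable type: 'list'
```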
Hashing technique in Python
Hashing is a process used in computing to map data of arbitrary size to fixed-size values, often
called hash values or hash codes. This is typically done using a hash function, which takes an
input (or "key") and produces a numerical value that represents that input.
Hashing is a fundamental technique used in various applications, including data structures like
hash tables, cryptographic systems, and data integrity verification.
The hash() function in Python takes an object (which must be hashable) and returns an integer
hash value. This hash value is used for quick data retrieval in hash tables, such as dictionaries
and sets. Hash values are used to compare dictionary keys during a dictionary lookup quickly.
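A quick illustration of hash() on a few hashable objects:

```python
print(hash(42))                # for small ints, CPython returns the int itself
print(hash((1, 2)))            # tuples of hashables are themselves hashable
print(hash("python"))          # string hashes vary between runs (hash randomization)
```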
Multi Threading
Multi Tasking:
Executing several tasks simultaneously is the concept of multitasking.
Eg: While typing a Python program in the editor we can listen to MP3 songs on the same system. At the same time we can download a file from the internet. All these tasks execute simultaneously and independently of each other; hence it is process-based multitasking.
This type of multitasking is best suited at the operating-system level.
Note: Whether it is process based or thread based, the main advantage of multi tasking is to
improve performance of the system by reducing response time.
Note: Wherever a group of independent jobs is available, it is highly recommended to execute them simultaneously instead of one by one. For such cases we should go for multithreading.
Python provides one inbuilt module "threading" to provide support for developing threads.
Every Python Program by default contains one thread which is nothing but MainThread.
Note: The threading module contains the function current_thread(), which returns the currently executing Thread object. The thread's name is available through its name attribute (the older getName() method is deprecated since Python 3.10).
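The program that produced the output below is not shown; a minimal reconstruction (the function name `display` is assumed) could be:

```python
from threading import Thread, current_thread

def display():
    for i in range(10):
        print("Child Thread")

t = Thread(target=display)
t.start()
for i in range(10):
    print("Main Thread")
t.join()                       # wait for the child before the program ends
```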
Output:
Child Thread
Child Thread
Child Thread
Child Thread
Child Thread
Child Thread
Child Thread
Child Thread
Child Thread
Child Thread
Main Thread
Main Thread
Main Thread
Main Thread
Main Thread
Main Thread
Main Thread
Main Thread
Main Thread
Main Thread
If multiple threads are present in our program, then we cannot expect the execution order, and hence we cannot expect exact output for multithreaded programs. Because of this we cannot provide exact output for the above program; it varies from machine to machine and from run to run.
Note: Thread is a pre defined class present in threading module which can be used to create
our own Threads.
With multithreading:
We can get and set the name of a thread by using the getName() and setName() methods of the Thread class (both deprecated since Python 3.10 in favour of the name attribute).
Note: Every Thread has an implicit variable "name" to represent the name of the Thread.
Eg:
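A reconstruction consistent with the output below, using the modern name attribute:

```python
from threading import Thread, current_thread

def job():
    print(current_thread().name)

print(current_thread().name)   # MainThread
t = Thread(target=job)
t.name = "Pawan Kalyan"        # modern replacement for t.setName("Pawan Kalyan")
t.start()
t.join()
print(t.name)                  # Pawan Kalyan
```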
Output:
MainThread
Pawan Kalyan
Pawan Kalyan
For every thread, a unique identification number is available internally. We can access this id by using the implicit variable "ident".
active_count():
This function returns the number of active threads currently running.
Eg:
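The example program is missing here; a sketch consistent with the output below (thread names and the 1-second sleep are assumptions):

```python
from threading import Thread, active_count
import time

def job(name):
    print(name, "...started")
    time.sleep(1)
    print(name, "...ended")

before = active_count()
print("The Number of active Threads:", before)
threads = [Thread(target=job, args=("ChildThread%d" % i,)) for i in (1, 2, 3)]
for t in threads:
    t.start()
during = active_count()
print("The Number of active Threads:", during)   # 3 more than before
for t in threads:
    t.join()
print("The Number of active Threads:", active_count())
```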
Output:
D:\python_classes>py test.py
The Number of active Threads: 1
ChildThread1 ...started
ChildThread2 ...started
ChildThread3 ...started
The Number of active Threads: 4
ChildThread1 ...ended
ChildThread2 ...ended
ChildThread3 ...ended
The Number of active Threads: 1
enumerate() function:
This function returns a list of all active threads currently running.
Eg:
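A sketch consistent with the output below (threading.enumerate() is used with the module prefix to avoid shadowing the built-in enumerate):

```python
import threading
import time

def job(name):
    print(name, "...started")
    time.sleep(1)
    print(name, "...ended")

threads = [threading.Thread(target=job, args=("ChildThread%d" % i,),
                            name="ChildThread%d" % i) for i in (1, 2, 3)]
for t in threads:
    t.start()
for t in threading.enumerate():      # all currently active threads
    print("Thread Name:", t.name)
for t in threads:
    t.join()
for t in threading.enumerate():      # only MainThread remains
    print("Thread Name:", t.name)
```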
Output:
D:\python_classes>py test.py
ChildThread1 ...started
ChildThread2 ...started
ChildThread3 ...started
Thread Name: MainThread
Thread Name: ChildThread1
Thread Name: ChildThread2
Thread Name: ChildThread3
ChildThread1 ...ended
ChildThread2 ...ended
ChildThread3 ...ended
Thread Name: MainThread
is_alive():
The is_alive() method checks whether a thread is still executing or not. (The camel-case isAlive() spelling was removed in Python 3.9.)
Eg:
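A sketch consistent with the output below, using the modern is_alive() spelling:

```python
from threading import Thread
import time

def job(name):
    print(name, "...started")
    time.sleep(0.5)
    print(name, "...ended")

t1 = Thread(target=job, args=("ChildThread1",))
t2 = Thread(target=job, args=("ChildThread2",))
t1.start(); t2.start()
print("ChildThread1 is Alive :", t1.is_alive())   # True while still running
print("ChildThread2 is Alive :", t2.is_alive())
t1.join(); t2.join()
print("ChildThread1 is Alive :", t1.is_alive())   # False after completion
print("ChildThread2 is Alive :", t2.is_alive())
```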
Output:
D:\python_classes>py test.py
ChildThread1 ...started
ChildThread2 ...started
ChildThread1 is Alive : True
ChildThread2 is Alive : True
ChildThread1 ...ended
ChildThread2 ...ended
ChildThread1 is Alive : False
ChildThread2 is Alive : False
join() method:
If a thread wants to wait until some other thread completes, we should go for the join() method.
Eg:
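A reconstruction consistent with the output below (the sleep interval is an assumption):

```python
from threading import Thread
import time

def display():
    for i in range(10):
        time.sleep(0.1)
        print("Seetha Thread")

t = Thread(target=display)
t.start()
t.join()                       # Main Thread waits until the child completes
for i in range(10):
    print("Rama Thread")
```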
In the above example Main Thread waited until completing child thread. In this case output is:
Seetha Thread
Seetha Thread
Seetha Thread
Seetha Thread
Seetha Thread
Seetha Thread
Seetha Thread
Seetha Thread
Seetha Thread
Seetha Thread
Rama Thread
Rama Thread
Rama Thread
Rama Thread
Rama Thread
Rama Thread
Rama Thread
Rama Thread
Rama Thread
Rama Thread
t.join(seconds)
In this case the thread will wait only for the specified amount of time.
Eg:
Daemon Threads:
The threads which are running in the background are called Daemon Threads.
The main objective of Daemon Threads is to provide support for Non Daemon Threads(like
main thread)
Eg: Garbage Collector
Whenever the Main Thread runs with low memory, the PVM immediately runs the Garbage Collector to destroy useless objects and free memory, so that the Main Thread can continue its execution without any memory problems.
We can check whether a thread is a daemon or not by using the t.isDaemon() method of the Thread class or by using the daemon property.
Eg:
t.setDaemon(True)
But we can use this method only before starting the thread, i.e. once a thread has started we cannot change its daemon nature; otherwise we will get -> RuntimeError: cannot set daemon status of active thread
Eg:
from threading import *
print(current_thread().isDaemon())
current_thread().setDaemon(True)
Default Nature:
By default Main Thread is always non-daemon.But for the remaining threads Daemon nature
will be inherited from parent to child.i.e if the Parent Thread is Daemon then child thread is also
Daemon and if the Parent Thread is Non Daemon then ChildThread is also Non Daemon.
Eg:
from threading import *
def job():
    print("Child Thread")
t=Thread(target=job)
print(t.isDaemon())    #False
t.setDaemon(True)
print(t.isDaemon())    #True
Note: The Main Thread is always non-daemon, and we cannot change its daemon nature because it has already started at the very beginning.
Whenever the last non-daemon thread terminates, all daemon threads are terminated automatically.
Eg:
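The program is missing here; a sketch consistent with the output below (sleep intervals are assumptions; "Line-1" is marked in a comment):

```python
from threading import Thread
import time

def job():
    for i in range(10):
        print("Lazy Thread")
        time.sleep(0.5)

t = Thread(target=job)
t.daemon = True                # Line-1: make the child a daemon thread
t.start()
time.sleep(1.2)
print("End of Main Thread")
# when the main (last non-daemon) thread ends, the daemon child is killed,
# so only a few "Lazy Thread" lines appear before "End of Main Thread"
```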
In the above program if we comment Line-1 then both Main Thread and Child Threads are Non
Daemon and hence both will be executed until their completion.
Lazy Thread
Lazy Thread
Lazy Thread
End of Main Thread
Synchronization:
If multiple threads are executing simultaneously then there may be a chance of data
inconsistency problems.
Eg:
from threading import *
import time
def wish(name):
    for i in range(10):
        print("Good Evening:",end='')
        time.sleep(2)
        print(name)
t1=Thread(target=wish,args=("Dhoni",))
t2=Thread(target=wish,args=("Yuvraj",))
t1.start()
t2.start()
Output:
Good Evening:Good Evening:Yuvraj
Dhoni
Good Evening:Good Evening:Yuvraj
Dhoni
....
We are getting irregular output because both threads are executing the wish() function simultaneously.
To overcome this problem we should go for synchronization.
In synchronization the threads are executed one by one, so that we can overcome data inconsistency problems.
Synchronization means that at a time only one thread is allowed to access the shared resource.
Locks are the most fundamental synchronization mechanism provided by threading module.
We can create Lock object as follows:
l=Lock()
A Lock object can be held by only one thread at a time. If any other thread requires the same lock, it will wait until the owning thread releases it (similar to a shared washroom or a public telephone booth).
Note: To call release(), the thread must be the owner of that lock, i.e. it should already hold the lock; otherwise we will get a runtime error:
RuntimeError: release unlocked lock
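The synchronized program is not shown here; a sketch of the wish() example from above guarded by a Lock (loop count and sleep shortened for illustration):

```python
from threading import Thread, Lock
import time

l = Lock()

def wish(name):
    l.acquire()                # only one thread at a time may enter this block
    for i in range(3):
        print("Good Evening:", end='')
        time.sleep(0.1)
        print(name)
    l.release()

t1 = Thread(target=wish, args=("Dhoni",))
t2 = Thread(target=wish, args=("Yuvraj",))
t1.start(); t2.start()
t1.join(); t2.join()
```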
In the above program at a time only one thread is allowed to execute wish() method and hence
we will get regular output.
The standard Lock object does not care which thread is currently holding the lock. If the lock is held and any thread attempts to acquire it, that thread will be blocked, even if the same thread is already holding the lock.
Eg:
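A sketch of this self-deadlock; a plain second acquire() would hang forever, so a timeout is used here to demonstrate the failure without blocking:

```python
from threading import Lock

l = Lock()
print("Main Thread trying to acquire Lock")
l.acquire()
print("Main Thread trying to acquire Lock Again")
# a plain l.acquire() here would block forever, even though this same
# thread already owns the lock
got_it = l.acquire(timeout=1)
print("Acquired second time:", got_it)   # False
```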
Output:
D:\python_classes>py test.py
Main Thread trying to acquire Lock
Main Thread trying to acquire Lock Again
In the above program the main thread will be blocked because it is trying to acquire the lock a second time.
Note: To kill the blocked thread from the Windows command prompt we have to use Ctrl+Break; Ctrl+C won't work.
If a thread calls recursive functions or makes nested accesses to resources, it may try to acquire the same lock again and again, which may block the thread. Hence the traditional locking mechanism won't work for executing recursive functions.
To handle this, the threading module provides RLock (reentrant lock). The reentrant facility is available only for the owner thread, not for other threads.
Eg:
from threading import *
l=RLock()
print("Main Thread trying to acquire Lock")
l.acquire()
print("Main Thread trying to acquire Lock Again")
l.acquire()
In this case the Main Thread won't be blocked because a thread can acquire an RLock any number of times. RLock keeps track of the recursion level, and hence for every acquire() call a matching release() call is required; only when the number of acquire() calls and release() calls match will the lock actually be released.
Eg:
l=RLock()
l.acquire()
l.acquire()
l.release()
l.release()
Note:
1. Only owner thread can acquire the lock multiple times
2. The number of acquire() calls and release() calls should be matched.
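The factorial program that produced the output below is missing; a reconstruction (function names assumed) could be:

```python
from threading import Thread, RLock

l = RLock()

def factorial(n):
    l.acquire()                # the owner thread may re-acquire the same RLock
    result = 1 if n == 0 else n * factorial(n - 1)
    l.release()                # one release() per acquire()
    return result

def display(n):
    print("The Factorial of", n, "is:", factorial(n))

t1 = Thread(target=display, args=(5,))
t2 = Thread(target=display, args=(9,))
t1.start(); t2.start()
t1.join(); t2.join()
```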
Output:
The Factorial of 5 is: 120
The Factorial of 9 is: 362880
In the above program instead of RLock if we use normal Lock then the thread will be
blocked.
Lock:
1. A Lock object can be acquired by only one thread at a time; even the owner thread cannot acquire it multiple times.
2. Not suitable for executing recursive functions and nested access calls.
3. A Lock object only keeps track of the locked/unlocked state; it never tracks the owner thread or a recursion level.
RLock:
1. An RLock object can be acquired by only one thread at a time, but the owner thread can acquire the same lock object multiple times.
2. Best suited for executing recursive functions and nested access calls.
3. An RLock object keeps track of the locked/unlocked state, the owner thread information, and the recursion level.
In the case of Lock and RLock, at a time only one thread is allowed to execute.
Sometimes our requirement is that a particular number of threads are allowed access at a time (e.g. at a time 10 members are allowed to access the database server, 4 members are allowed to access the network connection, etc.).
To handle this requirement we cannot use the Lock and RLock concepts; we should go for the Semaphore concept.
A Semaphore can be used to limit access to shared resources with limited capacity. Semaphore is an advanced synchronization mechanism.
Internally a Semaphore maintains a counter, which represents the maximum number of threads allowed to access the resource simultaneously. The default value of the counter is 1.
Whenever a thread executes acquire(), the counter value is decremented by 1, and whenever a thread executes release(), the counter value is incremented by 1, i.e. for every acquire() call the counter is decremented and for every release() call the counter is incremented.
Case-1: s=Semaphore()
In this case counter value is 1 and at a time only one thread is allowed to access. It is exactly
same as Lock concept.
Case-2: s=Semaphore(3)
In this case the Semaphore object can be accessed by 3 threads at a time; the remaining threads have to wait until the semaphore is released.
Eg:
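The example is missing here; a sketch of a Semaphore(3) limiting concurrent access (thread names and sleep interval are assumptions):

```python
from threading import Thread, Semaphore
import time

s = Semaphore(3)               # at most 3 threads inside the block at a time

def access(name):
    s.acquire()
    print(name, "got access")
    time.sleep(0.2)
    print(name, "releasing")
    s.release()

threads = [Thread(target=access, args=("Thread-%d" % i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```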
BoundedSemaphore:
Normal Semaphore is an unlimited semaphore which allows us to call release() method any
number of times to increment counter.The number of release() calls can exceed the number of
acquire() calls also.
Eg:
from threading import *
s=Semaphore(2)
s.acquire()
s.acquire()
s.release()
s.release()
s.release()
s.release()
print("End")
It is valid because in a normal semaphore we can call release() any number of times.
BoundedSemaphore is exactly the same as Semaphore except that the number of release() calls should not exceed the number of acquire() calls; otherwise we will get
ValueError: Semaphore released too many times
Eg:
from threading import *
s=BoundedSemaphore(2)
s.acquire()
s.acquire()
s.release()
s.release()
s.release()
s.release()
print("End")
It is invalid because the number of release() calls should not exceed the number of acquire() calls in a BoundedSemaphore.
Interthread Communication:
Eg: After producing items, the Producer thread has to communicate with the Consumer thread to notify it about the new item; then the Consumer thread can consume that new item.
In Python, we can implement interthread communication in the following ways:
1. Event
2. Condition
3. Queue
etc.
We can create an Event object as follows:
event = threading.Event()
Condition is the more advanced version of the Event object for interthread communication. A condition represents some kind of state change in the application, like producing an item or consuming an item. Threads can wait for that condition, and threads can be notified once the condition happens, i.e. a Condition object allows one or more threads to wait until notified by another thread.
A Condition is always associated with a lock (an RLock by default).
A Condition has acquire() and release() methods that call the corresponding methods of the associated lock.
We can create a Condition object as follows:
condition = threading.Condition()
Methods of Condition:
1. acquire() To acquire Condition object before producing or consuming items.i.e thread
acquiring internal lock.
2. release() To release Condition object after producing or consuming items. i.e thread releases
internal lock
3. wait()|wait(time) To wait until getting Notification or time expired
4. notify() To give a notification to one waiting thread
5. notify_all() To give a notification to all waiting threads (notifyAll() is a deprecated alias)
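The methods above can be sketched with a minimal producer/consumer pair (names and the item value are assumptions; `with condition:` is shorthand for acquire()/release()):

```python
from threading import Thread, Condition

condition = Condition()
items = []

def producer():
    with condition:            # acquire the condition's internal lock
        items.append("new item")
        condition.notify()     # wake one waiting consumer

def consumer():
    with condition:
        while not items:
            condition.wait()   # releases the lock while waiting
        print("Consumed:", items.pop())

c = Thread(target=consumer)
p = Thread(target=producer)
c.start(); p.start()
c.join(); p.join()
```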
The Queue concept is the most enhanced mechanism for interthread communication and for sharing data between threads.
A Queue internally has a Condition, and that Condition has a Lock. Hence whenever we use a Queue we are not required to worry about synchronization.
import queue
The Producer thread uses the put() method to insert data into the queue. Internally this method acquires the lock before inserting data into the queue; after inserting data, the lock is released automatically.
put() also checks whether the queue is full; if the queue is full, the Producer thread enters the waiting state by calling wait() internally.
The Consumer thread uses the get() method to remove and return data from the queue. Internally this method acquires the lock before removing data from the queue; once removal is completed, the lock is released automatically.
If the queue is empty, the Consumer thread enters the waiting state by calling wait() internally; once the queue is updated with data, the thread is notified automatically.
Note: The queue module takes care of locking for us which is a great advantage.
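A minimal sketch of a producer/consumer pair over a Queue (sizes and item values are assumptions):

```python
from threading import Thread
import queue

q = queue.Queue(maxsize=5)     # put() blocks when the queue is full

def producer():
    for i in range(5):
        q.put(i)               # locking is handled internally
        print("Produced:", i)

def consumer():
    for i in range(5):
        item = q.get()         # waits internally while the queue is empty
        print("Consumed:", item)

c = Thread(target=consumer)
p = Thread(target=producer)
c.start(); p.start()
p.join(); c.join()
```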
Good Programming Practices with usage of Locks:
Case-1:
It is highly recommended to write code of releasing locks inside finally block.The advantage is
lock will be released always whether exception raised or not raised and whether handled or not
Handled.
l=threading.Lock()
l.acquire()
try:
perform required safe operations
finally:
l.release()
Case-2:
It is highly recommended to acquire lock by using with statement. The main advantage of with
statement is the lock will be released automatically once control reaches end of with block and
we are not required to release explicitly.
This is exactly same as usage of with statement for files.
lock=threading.Lock()
with lock:
    perform required safe operations
The lock will be released automatically once control reaches the end of the with block; we are not required to release it explicitly.
Note: We can use with statement in multithreading for the following cases:
1. Lock
2. RLock
3. Semaphore
4. Condition
Multiprocessing in Python
Multiprocessing refers to the ability of a system to support more than one processor at the same time. Applications in a multiprocessing system are broken into smaller routines that run independently. The operating system allocates these processes to the processors, improving performance of the system.
Consider a computer system with a single processor. If it is assigned several processes at the
same time, it will have to interrupt each task and switch briefly to another, to keep all of the
processes going.
This situation is just like a chef working in a kitchen alone. He has to do several tasks like
baking, stirring, kneading dough, etc.
So the gist is that: The more tasks you must do at once, the more difficult it gets to keep track of
them all, and keeping the timing right becomes more of a challenge.
This is where the concept of multiprocessing arises!
A multiprocessing system can have multiple processors, or a single processor with multiple cores. Here, the CPU can easily execute several tasks at once, with each task using its own processor or core.
It is just like the chef in the last situation being assisted by his assistants. Now they can divide the tasks among themselves, and the chef doesn't need to switch between his tasks.
Multiprocessing in Python
In Python, the multiprocessing module includes a very simple and intuitive API for
dividing work between multiple processes.
Let us consider a simple example using multiprocessing module:
To create a process, we create an object of Process class. It takes following arguments:
target: the function to be executed by process
args: the arguments to be passed to the target function
Note: The Process constructor takes many other arguments. In the above example, we created 2 processes with different target functions.
Each process runs independently and has its own memory space.
As soon as the execution of the target function is finished, the process terminates. In the above program we used the is_alive method of the Process class to check whether a process is still active.
Pitfalls of Multi Threading
Global Interpreter Lock (GIL): The Global Interpreter Lock (GIL) limits the performance gain of
using threads for CPU-bound tasks in CPython. Only one thread can execute Python bytecode
at a time, preventing true parallel execution on multi-core processors.
Race Conditions and Deadlocks: Managing shared resources among threads can be complex
and lead to race conditions and deadlocks if synchronization mechanisms are not used
correctly.
Debugging Complexity: Multi-threaded programs can be difficult to debug due to
non-deterministic behavior and race conditions.
Limited CPU Utilization: Because of the GIL, CPU-bound tasks don't benefit significantly from
threading because multiple threads can't efficiently utilize multiple CPU cores.
Memory Overhead: If memory is not managed properly, threads may share the same space,
leading to data corruption and unintended consequences.
Platform Dependence: Python modules may not be thread-safe or behave differently on different
platforms.
Asyncio in Python
Asynchronous programming allows your program to handle tasks in a non-blocking way, which
can improve performance and responsiveness, especially in I/O-bound applications. Python's
asyncio library is a popular tool for asynchronous programming. Here’s a breakdown of how it
works:
Asyncio is a Python library used for concurrent programming, including asynchronous iteration. It is not multithreading or multiprocessing.
Asyncio is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web servers, database connection libraries, distributed task queues, etc.
Event Loop:
The core of asyncio is the event loop. It manages and schedules the execution of asynchronous
tasks. The event loop continually checks for tasks that are ready to run and executes them.
Coroutines:
Coroutines are special functions defined with async def and can use await to pause execution
until an awaited task is complete. This allows the event loop to switch to other tasks while
waiting.
Tasks:
A task is a wrapper for a coroutine, enabling it to be scheduled and run by the event loop. You
create tasks using asyncio.create_task() or loop.create_task(). Tasks are used to manage the
execution of coroutines concurrently.
Awaitables:
Awaitables are objects that can be used with the await keyword. They include coroutines and
objects with an __await__ method.
import asyncio
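The body of the example appears to be missing after the import line above; a sketch consistent with the explanation that follows:

```python
import asyncio

async def say_hello():
    print("Hello")
    await asyncio.sleep(1)     # simulate a non-blocking I/O wait
    print("World")

async def main():
    # run two say_hello coroutines concurrently as tasks
    task1 = asyncio.create_task(say_hello())
    task2 = asyncio.create_task(say_hello())
    await task1
    await task2

asyncio.run(main())            # start the event loop, run main, then close it
```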
Explanation
say_hello is an asynchronous function that prints "Hello", waits for 1 second (simulating an I/O
operation), and then prints "World".
main creates two tasks that run say_hello concurrently.
asyncio.run(main()) starts the event loop, runs the main coroutine, and closes the loop when it’s
done.
Advantages
Efficiency: Asynchronous programming can handle many tasks concurrently without needing
multiple threads or processes, making it more efficient in I/O-bound situations.
Responsiveness: It helps keep applications responsive by not blocking the main thread while
waiting for I/O operations to complete.
Common Use Cases:
Web scraping
Network services (e.g., web servers)
File and network I/O operations
Concurrent tasks and background operations
Conclusion:
asyncio is a powerful library in Python that enables efficient, non-blocking I/O operations
through asynchronous programming. By using coroutines, tasks, and the event loop, you can
write code that handles multiple operations concurrently without the need for traditional
threading or multiprocessing.
Asynchronous programming allows only one part of a program to run at a specific time.
Consider three functions in a Python program: fn1(), fn2(), and fn3().
In asynchronous programming, if fn1() is not actively executing (e.g., it’s asleep, waiting, or has
completed its task), it won’t block the entire program.
Instead, the program optimizes CPU time by allowing other functions (e.g., fn2()) to execute
while fn1() is inactive.
Only when fn2() finishes or sleeps does the third function, fn3(), start executing.
This model of asynchronous programming ensures that only one task runs at any given moment, while other tasks can proceed independently whenever the running task is waiting.
Monkey Patching in Python
In Python, the term monkey patch refers to dynamic (or run-time) modification of a class or module. In Python, we can actually change the behavior of code at run-time.
Pros
Flexibility: Allows for quick fixes or adjustments without altering the original codebase.
Testing: Useful for mocking or stubbing during testing.
Cons
Maintainability: Can make code harder to understand and maintain, as changes are made
dynamically and may not be obvious.
Debugging: Can introduce subtle bugs if not managed carefully, as it alters the behavior of
existing code.
Compatibility: Changes to third-party libraries might break if the library updates or if other parts
of your code depend on the original behavior.
We use the above module (monk) in the code below and change the behavior of func() at run-time by assigning a different function to it.
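The original text uses a separate module named monk; for a self-contained sketch the class is defined inline here (names `A`, `func`, `monkey_f` follow the output shown below):

```python
class A:
    def func(self):
        print("func() is being called")

def monkey_f(self):
    print("monkey_f() is being called")

A.func = monkey_f     # monkey patch: replace func at run-time
obj = A()
obj.func()            # prints: monkey_f() is being called
```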
Output: monkey_f() is being called
What is Class:
● In Python everything is an object. To create objects we require some model or plan or blueprint, which is nothing but a class.
● We can write a class to represent properties (attributes) and actions (behavior) of
objects.
● Properties can be represented by variables
● Actions can be represented by Methods.
Syntax:
class className:
    ''' documentation string '''
    variables: instance variables, static variables and local variables
    methods: instance methods, static methods, class methods
Documentation string represents the description of the class. Within the class doc string is
always optional. We can get doc string by using the following 2 ways.
1. print(classname.__doc__)
2. help(classname)
Example:
class Student:
    '''This is student class with required data'''
print(Student.__doc__)
help(Student)
Example of Class:
class Student:
    '''Developed by durga for python demo'''
    def __init__(self):
        self.name='durga'
        self.age=40
        self.marks=80

    def talk(self):
        print("Hello I am :",self.name)
        print("My Age is:",self.age)
        print("My Marks are:",self.marks)
What is Object?
Physical existence of a class is nothing but an object. We can create any number of objects for a class.
Example: s = Student()
class Student:

    def __init__(self,name,rollno,marks):
        self.name=name
        self.rollno=rollno
        self.marks=marks

    def talk(self):
        print("Hello My Name is:",self.name)
        print("My Rollno is:",self.rollno)
        print("My Marks are:",self.marks)

s1=Student("Durga",101,80)
s1.talk()
Output:
D:\durgaclasses>py test.py
Hello My Name is: Durga
My Rollno is: 101
My Marks are: 80
Self variable:
self is the default variable that always points to the current object (like the this keyword in Java).
By using self we can access the instance variables and instance methods of the object.
Note:
1. self should be first parameter inside constructor
def __init__(self):
2. self should be first parameter inside instance methods
def talk(self):
Constructor Concept:
● Constructor is a special method in Python.
● The name of the constructor should be __init__(self).
● The constructor will be executed automatically at the time of object creation.
● The main purpose of a constructor is to declare and initialize instance variables.
● Per object, the constructor will be executed only once.
● A constructor can take at least one argument (at least self).
● Constructor is optional; if we do not provide any constructor, Python will provide a default constructor.
Example:
def __init__(self,name,rollno,marks):
    self.name=name
    self.rollno=rollno
    self.marks=marks
Program:
class Student:

    '''This is student class with required data'''
    def __init__(self,x,y,z):
        self.name=x
        self.rollno=y
        self.marks=z

    def display(self):
        print("Student Name:{}\nRollno:{} \nMarks:{}".format(self.name,self.rollno,self.marks))

s1=Student("Durga",101,80)
s1.display()
s2=Student("Sunny",102,100)
s2.display()
Output
Student Name:Durga
Rollno:101
Marks:80
Student Name:Sunny
Rollno:102
Marks:100
PEP 8 - official python convention
Yield in Python
The yield keyword in Python is used to turn a function into a generator. Generators are a type of
iterable, like lists or tuples, but they generate values on the fly and only when requested. This
can be more memory-efficient than using a list, especially when dealing with large data sets or
streams of data.
Defining a Generator:
When a function contains a yield statement, it becomes a generator function. Instead of
returning a single value, it yields multiple values, one at a time.
Generators can be iterated over using a for loop or any construct that works with
iterables, such as list().
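The generator example that the explanation below describes can be sketched as:

```python
def count_up_to(max):
    count = 1
    while count <= max:
        yield count            # produce one value and pause here
        count += 1

counter = count_up_to(5)       # creates a generator object; no code runs yet
for num in counter:
    print(num, end=' ')        # 1 2 3 4 5
```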
Output: 1 2 3 4 5
Explanation
1) Define Generator Function: count_up_to(max) generates numbers from 1 up to max.
2) Yield Values: The function uses yield to produce each number one by one.
3) Create Generator Object: counter is a generator object created from the count_up_to
function.
4) Iterate Over Generator: The for loop automatically handles the generator, calling
__next__() and printing each value.
1. Instance Variables:
If the value of a variable varies from object to object, then such variables are called instance variables.
For every object, a separate copy of the instance variables will be created.
Example:
class Employee:

    def __init__(self):
        self.eno=100
        self.ename='Durga'
        self.esal=10000

e=Employee()
print(e.__dict__)
We can also declare instance variables inside an instance method by using the self variable. If any
instance variable is declared inside an instance method, that instance variable will be added once we
call that method.
Example:
1) class Test:
2)
3) def __init__(self):
4) self.a=10
5) self.b=20
6)
7) def m1(self):
8) self.c=30
9)
10) t=Test()
11) t.m1()
12) print(t.__dict__)
Output
{'a': 10, 'b': 20, 'c': 30}
1) class Test:
2)
3) def __init__(self):
4) self.a=10
5) self.b=20
6)
7) def m1(self):
8) self.c=30
9)
10) t=Test()
11) t.m1()
12) t.d=40
13) print(t.__dict__)
1) class Test:
2)
3) def __init__(self):
4) self.a=10
5) self.b=20
6)
7) def display(self):
8) print(self.a)
9) print(self.b)
10)
11) t=Test()
12) t.display()
13) print(t.a,t.b)
Output
10
20
10 20
2. Static variables:
If the value of a variable is not varied from object to object, such type of variables we have to
declare with in the class directly but outside of methods. Such type of variables are called Static
variables.
For total class only one copy of static variable will be created and shared by all objects of that
class.
We can access static variables either by class name or by object reference. But recommended
to use class name.
Note: In the case of instance variables, for every object a separate copy will be created, but in the
case of static variables, for the total class only one copy will be created and shared by every object
of that class.
1) class Test:
2) x=10
3) def __init__(self):
4) self.y=20
5)
6) t1=Test()
7) t2=Test()
8) print('t1:',t1.x,t1.y)
9) print('t2:',t2.x,t2.y)
10) Test.x=888
11) t1.y=999
12) print('t1:',t1.x,t1.y)
13) print('t2:',t2.x,t2.y)
Output
t1: 10 20
t2: 10 20
t1: 888 999
t2: 888 20
If we change the value of a static variable by using either self or an object reference variable,
then the value of the static variable won't be changed; just a new instance variable with that
name will be added to that particular object.
1) class Test:
2) a=10
3) def m1(self):
4) self.a=888
5) t1=Test()
6) t1.m1()
7) print(Test.a)
8) print(t1.a)
Output
10
888
Note: By using an object reference variable/self we can read static variables, but we cannot modify
or delete them.
If we try to modify one, then a new instance variable will be added to that particular object.
t1.a = 70
If we try to delete one, then we will get an error.
1) class Test:
2) a=10
3)
4) t1=Test()
5) del t1.a ===>AttributeError: a
3. Local variables:
Sometimes, to meet temporary requirements of the programmer, we can declare variables inside a
method directly; such variables are called local variables or temporary variables.
Local variables will be created at the time of method execution and destroyed once the method
completes.
Local variables of a method cannot be accessed from outside of the method.
1) class Test:
2) def m1(self):
3) a=1000
4) print(a)
5) def m2(self):
6) b=2000
7) print(b)
8) t=Test()
9) t.m1()
10) t.m2()
Output
1000
2000
1. Instance Methods
2. Class Methods
3. Static Methods
1. Instance Methods:
Inside a method implementation, if we are using instance variables, then such methods are
called instance methods.
In an instance method declaration, we have to pass the self variable.
def m1(self):
By using the self variable inside the method we are able to access instance variables.
Within the class we can call an instance method by using the self variable, and from outside of the
class we can call it by using an object reference.
1) class Student:
2) def __init__(self,name,marks):
3) self.name=name
4) self.marks=marks
5) def display(self):
6) print('Hi',self.name)
7) print('Your Marks are:',self.marks)
8) def grade(self):
9) if self.marks>=60:
10) print('You got First Grade')
11) elif self.marks>=50:
12) print('You got Second Grade')
13) elif self.marks>=35:
14) print('You got Third Grade')
15) else:
16) print('You are Failed')
17) n=int(input('Enter number of students:'))
18) for i in range(n):
19) name=input('Enter Name:')
20) marks=int(input('Enter Marks:'))
21) s= Student(name,marks)
22) s.display()
23) s.grade()
24) print()
Output:
D:\durga_classes>py test.py
Enter number of students:2
Enter Name:Durga
Enter Marks:90
Hi Durga
Your Marks are: 90
You got First Grade
Enter Name:Ravi
Enter Marks:12
Hi Ravi
Your Marks are: 12
You are Failed
Syntax:
def setVariable(self,variable):
self.variable=variable
Example:
def setName(self,name):
self.name=name
Getter Method:
Getter methods can be used to get values of the instance variables. Getter methods also known
as accessor methods.
Syntax:
def getVariable(self):
return self.variable
Example:
def getName(self):
return self.name
Demo Program:
1) class Student:
2) def setName(self,name):
3) self.name=name
4)
5) def getName(self):
6) return self.name
7)
8) def setMarks(self,marks):
9) self.marks=marks
10)
11) def getMarks(self):
12) return self.marks
13)
14) n=int(input('Enter number of students:'))
15) for i in range(n):
16) s=Student()
17) name=input('Enter Name:')
18) s.setName(name)
19) marks=int(input('Enter Marks:'))
20) s.setMarks(marks)
21)
22) print('Hi',s.getName())
23) print('Your Marks are:',s.getMarks())
24) print()
output:
D:\python_classes>py test.py
Enter number of students:2
Enter Name:Durga
Enter Marks:100
Hi Durga
Your Marks are: 100
Enter Name:Ravi
Enter Marks:80
Hi Ravi
Your Marks are: 80
2. Class Methods:
Inside a method implementation, if we are using only class variables (static variables), then we
should declare such a method as a class method, using the @classmethod decorator; Python
automatically passes the class itself as the first argument (by convention named cls).
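The demo that follows actually uses @staticmethod; for contrast, here is a small illustrative sketch (not from the original notes) of a true class method that reads and modifies a class-level variable through cls:

```python
class Counter:
    count = 0                 # static (class-level) variable shared by all objects

    @classmethod
    def increment(cls):
        cls.count += 1        # cls refers to the class, so this updates the shared variable

Counter.increment()
Counter.increment()
print(Counter.count)          # 2
```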
1) class DurgaMath:
2)
3) @staticmethod
4) def add(x,y):
5) print('The Sum:',x+y)
6)
7) @staticmethod
8) def product(x,y):
9) print('The Product:',x*y)
10)
11) @staticmethod
12) def average(x,y):
13) print('The average:',(x+y)/2)
14)
15) DurgaMath.add(10,20)
16) DurgaMath.product(10,20)
17) DurgaMath.average(10,20)
Output
The Sum: 30
The Product: 200
The average: 15.0
Note: In general we can use only instance and static methods. Inside a static method we
can access class-level variables by using the class name.
https://www.geeksforgeeks.org/types-of-inheritance-python/
https://www.tutorialspoint.com/python/python_inheritance.htm
MRO in Python
In Python, MRO stands for Method Resolution Order. It is the order in which classes are looked
up when a method is called or an attribute is accessed.
MRO is particularly relevant in the context of multiple inheritance, where a class inherits from
more than one parent class.
When you have a class hierarchy with multiple inheritance, Python needs a way to determine
which method or attribute to use when the same name appears in multiple parent classes. The
MRO specifies the order in which classes are considered for method and attribute lookups.
The C3 Linearization Algorithm
Python uses the C3 Linearization algorithm to compute the MRO. This algorithm ensures a
consistent and predictable method resolution order in complex class hierarchies.
Example of MRO:
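The example code is missing from the text; below is a reconstruction consistent with the explanation and the output that follow (a diamond hierarchy of classes A, B, C, and D):

```python
class A:
    def method(self):
        print("Method in A")

class B(A):
    def method(self):
        print("Method in B")

class C(A):
    def method(self):
        print("Method in C")

class D(B, C):          # multiple inheritance: B is listed before C
    pass

d = D()
d.method()              # B appears before C in D's MRO, so B's method runs
print(D.__mro__)        # (D, B, C, A, object)
```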
Output:
Method in B
Explanation
Class Definitions:
Method Call:
d.method() will call the method() of class B because B appears before C in the MRO of D.
MRO Output:
D.__mro__ will display the method resolution order as a tuple of classes: (D, B, C, A, object).
This indicates that Python will look in D, then B, then C, then A, and finally object (the base
class for all new-style classes).
print(D.mro())
Key Points
Single Inheritance: With single inheritance, the MRO is straightforward; it’s just the order of the
base classes.
Multiple Inheritance: The C3 Linearization algorithm ensures that the MRO is consistent and
follows a specific order to handle complex scenarios with multiple inheritance.
Consistency: The MRO algorithm is designed to respect the order of base classes specified and
handle potential conflicts in a predictable manner.
What is Polymorphism? And its types
Eg1: You yourself are the best example of polymorphism. In front of your parents you will have one type of
behaviour and with friends another type of behaviour. The same person has different behaviours at
different places, which is nothing but polymorphism.
Eg2: + operator acts as concatenation and arithmetic addition
Eg3: * operator acts as multiplication and repetition operator
Demo Program:
1) class Duck:
2) def talk(self):
3) print('Quack.. Quack..')
4)
5) class Dog:
6) def talk(self):
7) print('Bow Bow..')
8)
9) class Cat:
10) def talk(self):
11) print('Moew Moew ..')
12)
13) class Goat:
14) def talk(self):
15) print('Myaah Myaah ..')
16)
17) def f1(obj):
18) obj.talk()
19)
20) l=[Duck(),Cat(),Dog(),Goat()]
21) for obj in l:
22) f1(obj)
Output:
Quack.. Quack..
Moew Moew ..
Bow Bow..
Myaah Myaah ..
The problem in this approach is that if obj does not contain a talk() method, then we will get an
AttributeError.
1) class Duck:
2) def talk(self):
3) print('Quack.. Quack..')
4)
5) class Dog:
6) def bark(self):
7) print('Bow Bow..')
8) def f1(obj):
9) obj.talk()
10)
11) d=Duck()
12) f1(d)
13)
14) d=Dog()
15) f1(d)
Output:
D:\durga_classes>py test.py
Quack.. Quack..
Traceback (most recent call last):
File "test.py", line 22, in <module>
f1(d)
File "test.py", line 13, in f1
obj.talk()
AttributeError: 'Dog' object has no attribute 'talk'
But we can solve this problem by using hasattr() function.
hasattr(obj,'attributename')
attributename can be method name or variable name
1) class Duck:
2) def talk(self):
3) print('Quack.. Quack..')
4)
5) class Human:
6) def talk(self):
7) print('Hello Hi...')
8)
9) class Dog:
10) def bark(self):
11) print('Bow Bow..')
12)
13) def f1(obj):
14) if hasattr(obj,'talk'):
15) obj.talk()
16) elif hasattr(obj,'bark'):
17) obj.bark()
18)
19) d=Duck()
20) f1(d)
21)
22) h=Human()
23) f1(h)
24)
25) d=Dog()
26) f1(d)
Output:
Quack.. Quack..
Hello Hi...
Bow Bow..
2. Overloading:
Eg1: + operator can be used for Arithmetic addition and String concatenation
print(10+20) #30
print('sarjak'+'maniar') #sarjakmaniar
Eg2: * operator can be used for multiplication and string repetition purposes.
print(10*20) #200
print('durga'*3) #durgadurgadurga
1. Operator Overloading:
We can use the same operator for multiple purposes, which is nothing but operator overloading.
Python supports operator overloading.
Eg1: + operator can be used for Arithmetic addition and String concatenation
print(10+20)#30
print('durga'+'soft')#durgasoft
Eg2: * operator can be used for multiplication and string repetition purposes.
print(10*20)#200
print('durga'*3)#durgadurgadurga
Demo program to use + operator for our class objects:
1) class Book:
2) def __init__(self,pages):
3) self.pages=pages
4)
5) b1=Book(100)
6) b2=Book(200)
7) print(b1+b2)
D:\durga_classes>py test.py
Traceback (most recent call last):
File "test.py", line 7, in <module>
print(b1+b2)
TypeError: unsupported operand type(s) for +: 'Book' and 'Book'
We can overload the + operator to work with Book objects also, i.e., Python supports operator
overloading.
For every operator a magic method is available. To overload any operator we have to override
that method in our class.
Internally the + operator is implemented by using the __add__() method. This method is called the
magic method for the + operator. We have to override this method in our class.
Demo program to overload + operator for our Book class objects:
1) class Book:
2) def __init__(self,pages):
3) self.pages=pages
4)
5) def __add__(self,other):
6) return self.pages+other.pages
7)
8) b1=Book(100)
9) b2=Book(200)
10) print('The Total Number of Pages:',b1+b2)
1) class Student:
2) def __init__(self,name,marks):
3) self.name=name
4) self.marks=marks
5) def __gt__(self,other):
6) return self.marks>other.marks
7) def __le__(self,other):
8) return self.marks<=other.marks
9)
10)
11) print("10>20 =",10>20)
12) s1=Student("Durga",100)
13) s2=Student("Ravi",200)
14) print("s1>s2=",s1>s2)
15) print("s1<s2=",s1<s2)
16) print("s1<=s2=",s1<=s2)
17) print("s1>=s2=",s1>=s2)
Output:
10>20 = False
s1>s2= False
s1<s2= True
s1<=s2= True
s1>=s2= False
1) class Employee:
2) def __init__(self,name,salary):
3) self.name=name
4) self.salary=salary
5) def __mul__(self,other):
6) return self.salary*other.days
7)
8) class TimeSheet:
9) def __init__(self,name,days):
10) self.name=name
11) self.days=days
12)
13) e=Employee('Durga',500)
14) t=TimeSheet('Durga',25)
15) print('This Month Salary:',e*t)
2. Method Overloading:
If two methods have the same name but different types of arguments, then those methods are said to
be overloaded methods.
Eg: m1(int a)
m1(double d)
But in Python, method overloading is not possible.
If we try to declare multiple methods with the same name and different numbers of arguments,
then Python will always consider only the last method.
Demo Program:
1) class Test:
2) def m1(self):
3) print('no-arg method')
4) def m1(self,a):
5) print('one-arg method')
6) def m1(self,a,b):
7) print('two-arg method')
8)
9) t=Test()
10) #t.m1()
11) #t.m1(10)
12) t.m1(10,20)
Most of the time, if a method with a variable number of arguments is required, then we can handle it
with default arguments or with variable-argument methods.
1) class Test:
2) def sum(self,a=None,b=None,c=None):
3) if a is not None and b is not None and c is not None:
4) print('The Sum of 3 Numbers:',a+b+c)
5) elif a is not None and b is not None:
6) print('The Sum of 2 Numbers:',a+b)
7) else:
8) print('Please provide 2 or 3 arguments')
9)
10) t=Test()
11) t.sum(10,20)
12) t.sum(10,20,30)
13) t.sum(10)
Output:
The Sum of 2 Numbers: 30
The Sum of 3 Numbers: 60
Please provide 2 or 3 arguments
1) class Test:
2) def sum(self,*a):
3) total=0
4) for x in a:
5) total=total+x
6) print('The Sum:',total)
7)
8)
9) t=Test()
10) t.sum(10,20)
11) t.sum(10,20,30)
12) t.sum(10)
13) t.sum()
3. Constructor Overloading:
1) class Test:
2) def __init__(self):
3) print('No-Arg Constructor')
4)
5) def __init__(self,a):
6) print('One-Arg constructor')
7)
8) def __init__(self,a,b):
9) print('Two-Arg constructor')
10) #t1=Test()
11) #t1=Test(10)
12) t1=Test(10,20)
1) class Test:
2) def __init__(self,a=None,b=None,c=None):
3) print('Constructor with 0|1|2|3 number of arguments')
4)
5) t1=Test()
6) t2=Test(10)
7) t3=Test(10,20)
8) t4=Test(10,20,30)
Output:
Constructor with 0|1|2|3 number of arguments
Constructor with 0|1|2|3 number of arguments
Constructor with 0|1|2|3 number of arguments
Constructor with 0|1|2|3 number of arguments
1) class Test:
2) def __init__(self,*a):
3) print('Constructor with variable number of arguments')
4)
5) t1=Test()
6) t2=Test(10)
7) t3=Test(10,20)
8) t4=Test(10,20,30)
9) t5=Test(10,20,30,40,50,60)
Output:
Constructor with variable number of arguments
Constructor with variable number of arguments
Constructor with variable number of arguments
Constructor with variable number of arguments
Constructor with variable number of arguments
3. Overriding
Method overriding:
Whatever members are available in the parent class are by default available to the child class
through inheritance. If the child class is not satisfied with the parent class implementation, then the
child class is allowed to redefine that method in the child class based on its requirement. This
concept is called overriding.
The overriding concept is applicable for both methods and constructors.
Demo Program for Method overriding:
1) class P:
2) def property(self):
3) print('Gold+Land+Cash+Power')
4) def marry(self):
5) print('Appalamma')
6) class C(P):
7) def marry(self):
8) print('Katrina Kaif')
9)
10) c=C()
11) c.property()
12) c.marry()
Output:
Gold+Land+Cash+Power
Katrina Kaif
From the overriding method of the child class, we can call the parent class method also by using
super().
1) class P:
2) def property(self):
3) print('Gold+Land+Cash+Power')
4) def marry(self):
5) print('Appalamma')
6) class C(P):
7) def marry(self):
8) super().marry()
9) print('Katrina Kaif')
10)
11) c=C()
12) c.property()
13) c.marry()
Output:
Gold+Land+Cash+Power
Appalamma
Katrina Kaif
1) class P:
2) def __init__(self):
3) print('Parent Constructor')
4)
5) class C(P):
6) def __init__(self):
7) print('Child Constructor')
8)
9) c=C()
In the above example, the child class defines its own constructor, so only the child constructor
executes; if the child class did not contain a constructor, then the parent class constructor would be
executed.
From the child class constructor we can call the parent class constructor by using super().
1) class Person:
2) def __init__(self,name,age):
3) self.name=name
4) self.age=age
5)
6) class Employee(Person):
7) def __init__(self,name,age,eno,esal):
8) super().__init__(name,age)
9) self.eno=eno
10) self.esal=esal
11)
12) def display(self):
13) print('Employee Name:',self.name)
14) print('Employee Age:',self.age)
15) print('Employee Number:',self.eno)
16) print('Employee Salary:',self.esal)
17)
18) e1=Employee('Durga',48,872425,26000)
19) e1.display()
20) e2=Employee('Sunny',39,872426,36000)
21) e2.display()
Output:
Employee Name: Durga
Employee Age: 48
Employee Number: 872425
Employee Salary: 26000
Employee Name: Sunny
Employee Age: 39
Employee Number: 872426
Employee Salary: 36000
Pickling in Python
Pickling is the Python term for serializing an object, which entails transforming it into a binary
representation that can be stored in a file or communicated over a network. Python's built-in
pickle module provides functions for pickling objects.
Example: Python Object Serialization
In this example, we are creating a file named ‘person.pickle’ that stores the serialized form of a
Python object. We will create a dictionary object ‘person’ which will be serialized. The file object
represents the file that will be used for writing the pickled object. The pickle.dump() function is
then used to pickle the person object to the file. It takes two arguments – the object to be
pickled and the file object to which the pickled object should be written.
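The example code itself is missing here; below is a minimal sketch matching the description above (the filename 'person.pickle' comes from the text; the dictionary's keys and values are illustrative):

```python
import pickle

# the dictionary object to be serialized (example data)
person = {'name': 'Durga', 'age': 48}

# open the file in binary write mode and pickle the object into it
with open('person.pickle', 'wb') as file:
    pickle.dump(person, file)   # args: object to pickle, file object to write to
```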
Unpickling in Python
In Python, deserializing a pickled object entails turning it from its binary representation back to a
Python object that can be used in code. This process is known as unpickling. Python’s built-in
pickle module has functions for unpickling objects.
In this example, we will load the pickle file in our Python code using the load() function of the
pickle module. The pickle.load() function is used to deserialize and unpickle the object from the
file. It takes one argument – the file object from which the object should be loaded. The
unpickled object is stored in the variable data.
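Again the example code is missing; below is a sketch consistent with the description (a setup block recreates 'person.pickle' so the snippet runs standalone; in the original flow the file would already exist from the pickling example):

```python
import pickle

# Setup: recreate the file from the pickling example (illustrative data)
with open('person.pickle', 'wb') as file:
    pickle.dump({'name': 'Durga', 'age': 48}, file)

# open the file in binary read mode and unpickle the object from it
with open('person.pickle', 'rb') as file:
    data = pickle.load(file)    # deserialize back into a Python dict

print(data)   # {'name': 'Durga', 'age': 48}
```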
Pickling and Unpickling of Objects:
Sometimes we have to write total state of object to the file and we have to read total
object from the file.
The process of writing state of object to the file is called pickling and the process of
reading state of an object from the file is called unpickling.
Writing and Reading State of object by using pickle Module:
Writing Multiple Employee Objects to the file:
(In Java, the equivalent concept is serialization: converting an object to bytes.)
The Python Global Interpreter Lock (GIL) is a type of process lock used by the CPython interpreter
whenever it deals with threads.
The GIL allows only one thread to execute Python bytecode at a time. This means that in CPython
only one thread will be executed at any given moment, so for CPU-bound work the performance of a
single-threaded process and a multi-threaded process will be roughly the same. We cannot achieve
true thread-level parallelism for CPU-bound code in CPython, because the Global Interpreter Lock
restricts the threads and makes them work as if there were a single thread. (I/O-bound threads can
still make progress concurrently, because the GIL is released while waiting on I/O.)
CPython manages memory with a reference counter (this is an implementation detail of CPython,
not something unique to Python). The reference counter tracks the total number of references that
currently point to a data object. When this count reaches zero, the variable or data object will be
released automatically. For
Example:
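The referenced snippet is missing; below is a hedged reconstruction using sys.getrefcount(). Note that the exact numbers printed (the original output shows 4 and 5) depend on how many references the interpreter itself holds, so your figures may differ; only the increase by one after adding a reference is guaranteed.

```python
import sys

data = []                      # one reference held by the name 'data'
print(sys.getrefcount(data))   # the getrefcount() call itself adds a temporary reference
alias = data                   # bind a second name to the same list object
print(sys.getrefcount(data))   # exactly one higher than the previous count
```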
Output:
4
5
This reference counter needs to be protected: if two threads increase or decrease its value
simultaneously, the count can become wrong, which may lead to memory leaks (or to objects being
freed too early). One way to protect it is to add locks to all data structures that are shared across
threads, but having multiple locks can lead to another problem: deadlock. To avoid both memory
leaks and deadlocks, CPython uses a single lock on the interpreter, the Global Interpreter Lock
(GIL).
In older languages like C++, the programmer is responsible for both creation and destruction of
objects. Usually the programmer takes very much care while creating objects but neglects the
destruction of useless objects. Because of this negligence, total memory can be filled with useless
objects, which creates memory problems, and the total application will go down with an
out-of-memory error.
But in Python, we have an assistant which is always running in the background to destroy
useless objects. Because of this assistant, the chance of a Python program failing with memory
problems is very low. This assistant is nothing but the Garbage Collector.
Hence the main objective of Garbage Collector is to destroy useless objects.
If an object does not have any reference variable, then that object is eligible for Garbage
Collection.
1. gc.isenabled()
Returns True if GC enabled
2. gc.disable()
To disable GC explicitly
3. gc.enable()
To enable GC explicitly
Example:
1) import gc
2) print(gc.isenabled())
3) gc.disable()
4) print(gc.isenabled())
5) gc.enable()
6) print(gc.isenabled())
Output
True
False
True
Destructors:
Destructor is a special method and the name should be __del__
Just before destroying an object, the Garbage Collector always calls the destructor to perform
clean-up activities (resource deallocation activities like closing database connections, etc.).
Once destructor execution is completed, the Garbage Collector automatically destroys that object.
Note: The job of the destructor is not to destroy the object; it is just to perform clean-up activities.
Example:
1) import time
2) class Test:
3) def __init__(self):
4) print("Object Initialization...")
5) def __del__(self):
6) print("Fulfilling Last Wish and performing clean up activities...")
7)
8) t1=Test()
9) t1=None
10) time.sleep(5)
11) print("End of application")
Output
Object Initialization...
Fulfilling Last Wish and performing clean up activities...
End of application
Note:
If the object does not contain any reference variable then only it is eligible for GC, i.e. if the
reference count is zero then only the object is eligible for GC.
A shallow copy in Python creates a new object, but instead of copying the elements recursively,
it copies only the references to the original elements. This means that the new object is a
separate entity from the original one, but if the elements themselves are mutable, changes
made to those elements in the new object will affect the original object as well.
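The code being described is missing from the text; below is a minimal sketch using copy.copy() (the variable names are illustrative):

```python
import copy

original = [[1, 2, 3], [4, 5, 6]]
shallow = copy.copy(original)      # new outer list, but the inner lists are shared

shallow[0][0] = 99                 # mutate the first element of the first sublist
print(original)                    # [[99, 2, 3], [4, 5, 6]] -- change is visible in the original
```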
As you can see, even though we only modified the first element of the first sublist in the
shallow copied list, the same change is reflected in the original list as well.
This is because a shallow copy only creates new references to the original objects, rather
than creating copies of the objects themselves
A deep copy in Python creates a completely new object and recursively copies all the objects
referenced by the original object. This means that even nested objects within the original object
are duplicated, resulting in a fully independent copy where changes made to the copied object
do not affect the original object, and vice versa.
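The corresponding code is also missing; a minimal sketch using copy.deepcopy() (variable names are illustrative):

```python
import copy

original = [[1, 2, 3], [4, 5, 6]]
deep = copy.deepcopy(original)     # inner lists are copied recursively

deep[0][0] = 99                    # mutate only the copy
print(original)                    # [[1, 2, 3], [4, 5, 6]] -- unchanged
print(deep)                        # [[99, 2, 3], [4, 5, 6]]
```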
As you can see, when we modify the first element of the first sublist in the deep copied
list, it does not affect the original list.
This is because a deep copy creates a new object and recursively copies all the nested
objects, ensuring that the copied object is fully independent from the original one
Call by Object Reference
Abstract Class and Interface
In Python, both abstract classes and interfaces are used to define methods that must be
created within any subclass. They are useful for defining a common API for a set of subclasses.
Let's explore both concepts:
Abstract Class
An abstract class can contain both abstract methods (methods without implementation) and
concrete methods (methods with implementation). Abstract classes are defined by inheriting from
the ABC (Abstract Base Class) class, which comes from the abc module.
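A small illustrative sketch (the Shape/Circle names are my own, not from the original) showing both an abstract and a concrete method:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):               # abstract method: no implementation here
        pass

    def describe(self):           # concrete method: shared implementation
        print('I am a shape')

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):               # every concrete subclass must implement area()
        return 3.14159 * self.radius ** 2

c = Circle(2)
print(c.area())
# Shape() would raise TypeError: can't instantiate an abstract class
```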
In Python, there is no built-in interface keyword or construct as found in some other languages
like Java.
Instead, interfaces can be simulated using abstract classes with only abstract methods. An
interface typically defines a set of methods that implementing classes must provide.
Defining an Interface
Define an abstract class with only abstract methods (i.e., no concrete methods).
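An illustrative sketch (the Payment names are hypothetical) of an interface simulated with an all-abstract class:

```python
from abc import ABC, abstractmethod

class Payment(ABC):               # "interface": only abstract methods, no concrete ones
    @abstractmethod
    def pay(self, amount): ...

    @abstractmethod
    def refund(self, amount): ...

class CardPayment(Payment):       # an implementing class must provide every method
    def pay(self, amount):
        return f'Paid {amount} by card'

    def refund(self, amount):
        return f'Refunded {amount} to card'

print(CardPayment().pay(100))     # Paid 100 by card
```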
Key Differences and Uses
Abstract Class:
Interface:
Creational Patterns
1) Singleton
Ensures a class has only one instance and provides a global point of access to it.
2) Factory Method
Defines an interface for creating an object, but lets subclasses alter the type of
objects that will be created.
3) Abstract Factory
Provides an interface for creating families of related or dependent objects without
specifying their concrete classes.
Structural Patterns
1) Adapter
Converts the interface of a class into another interface clients expect. Adapter lets
classes work together that couldn't otherwise because of incompatible interfaces.
2) Decorator:
Attaches additional responsibilities to an object dynamically. Decorators provide a
flexible alternative to subclassing for extending functionality.
3) Facade:
Provides a simplified interface to a complex subsystem.
Behavioral Patterns
1) Observer:
Defines a one-to-many dependency between objects so that when one object
changes state, all its dependents are notified and updated automatically.
2) Strategy:
Defines a family of algorithms, encapsulates each one, and makes them
interchangeable. Strategy lets the algorithm vary independently from clients that
use it.
These are just a few examples of the many design patterns that can be
implemented in Python. Each pattern serves a different purpose and can be
chosen based on the specific problem you are trying to solve.
Only one instance of the class is created and the same is used every time - Singleton pattern.
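A common sketch of the Singleton pattern in Python (one of several possible implementations), using __new__ to cache and reuse a single instance:

```python
class Singleton:
    _instance = None              # cached single instance

    def __new__(cls):
        if cls._instance is None:                 # create the instance on first use only
            cls._instance = super().__new__(cls)
        return cls._instance                      # every later call returns the same object

s1 = Singleton()
s2 = Singleton()
print(s1 is s2)   # True -- both names refer to the same instance
```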
What are the abstract methods in some classes of Python?
In Python, abstract methods are methods declared within an abstract class that must be
implemented by subclasses. They are defined using the @abstractmethod decorator from the
abc module. Here are some examples of classes in Python's standard library and popular
libraries that contain abstract methods:
1. The ABC class from the abc module is the base class for defining abstract base
classes (ABCs). Here's an example:
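The example itself is missing here; a minimal sketch (the Vehicle/Car names are illustrative) of defining an ABC with the abc module:

```python
from abc import ABC, abstractmethod

class Vehicle(ABC):               # abstract base class
    @abstractmethod
    def start(self):              # must be implemented by subclasses
        pass

class Car(Vehicle):
    def start(self):
        return 'Engine started'

print(Car().start())              # Engine started
# Vehicle() would raise TypeError: can't instantiate an abstract class
```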
2. collections.abc in the Standard Library
The collections.abc module provides abstract base classes for container data types.
Some examples include Iterable, Iterator, Sequence, and Mapping.
Example: collections.abc.Iterable
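The example code is missing; a sketch (the NumberBag name is my own) of subclassing collections.abc.Iterable, whose one abstract method is __iter__:

```python
from collections.abc import Iterable

class NumberBag(Iterable):
    def __init__(self, numbers):
        self._numbers = list(numbers)

    def __iter__(self):               # the abstract method Iterable requires
        return iter(self._numbers)

bag = NumberBag([1, 2, 3])
print(list(bag))                      # [1, 2, 3]
print(isinstance(bag, Iterable))      # True
```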
Super keyword
super() in Python (technically a built-in function rather than a keyword) is used to call a method of a
parent class (superclass) from within a subclass. It is commonly used to ensure that the parent
class's method is called and allows the subclass to extend or modify the behavior of that method.
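The example that produced the output below is missing; here is a reconstruction consistent with that output (the Person/Child class names are assumptions):

```python
class Person:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print('Hello, I am', self.name)

class Child(Person):
    def __init__(self, name, age):
        super().__init__(name)   # run the parent constructor first
        self.age = age

    def greet(self):
        super().greet()          # call the parent's greet(), then extend it
        print('I am', self.age, 'years old')

c = Child('Alice', 10)
c.greet()
```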
Hello, I am Alice
I am 10 years old
2) In Multiple Inheritance:
In the case of multiple inheritance, super() can be used to call methods in a way
that respects the method resolution order (MRO).
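The example is missing from the text; a reconstruction that matches the output shown below, where each method prints and then delegates to the next class in the MRO via super():

```python
class A:
    def process(self):
        print('Process in A')    # end of the chain: A does not call super()

class B(A):
    def process(self):
        print('Process in B')
        super().process()        # next in MRO after B is C (not A!)

class C(A):
    def process(self):
        print('Process in C')
        super().process()

class D(B, C):
    def process(self):
        print('Process in D')
        super().process()        # follows D's MRO: D -> B -> C -> A

D().process()
```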
Output:
Process in D
Process in B
Process in C
Process in A
Enumerate in Python
The enumerate function in Python adds a counter to an iterable and returns it as an enumerate
object. This can be useful when you need to iterate over a list and also need to know the index
of the current item in the list.
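A quick sketch (the fruit list is my own example data):

```python
fruits = ['apple', 'banana', 'cherry']

# enumerate pairs each item with a counter (which starts at 0 by default)
for index, fruit in enumerate(fruits):
    print(index, fruit)
# 0 apple
# 1 banana
# 2 cherry
```

Passing start=1 (e.g. enumerate(fruits, start=1)) makes the counter begin at 1 instead of 0.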