Python Workbook
Python is a high-level, interpreted programming language known for its simplicity and readability. Its
key features include dynamic typing, automatic memory management, and a rich standard library. It
also has strong community support and is used for various applications such as web development,
data analysis, and machine learning.
LIST vs TUPLE: A list is slower due to dynamic resizing and modifications, while a tuple is faster due to its fixed size and immutability.
SET vs DICTIONARY: A set can be created using the set() function, while a dictionary can be created using the dict() function.
In Python, "self" refers to the instance of a class that a method calls. It is typically used within a
method to refer to instance variables or contact other instance methods. When a method calls on an
instance of a class, the self keyword accesses the instance's attributes and methods.
In Python, a lambda function is a small, anonymous function that can have any number of arguments
but can only have one expression. Lambda functions are a shorthand for creating simple functions
that are only needed once. They are made using the lambda keyword, followed by the function's
arguments and a colon, and then the expression for evaluation.
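As an illustration, here is a small sketch of a lambda next to its named-function equivalent (the names square, square_named and nums are assumed for this example):
Python
# A lambda that squares its argument, and the equivalent named function.
square = lambda x: x * x

def square_named(x):
    return x * x

nums = [1, 2, 3, 4]
print(list(map(square, nums)))       # [1, 4, 9, 16]
print(square(5) == square_named(5))  # True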
In Python 2.x, both "range" and "xrange" generate integer sequences. However, "range" builds the entire list of
integers at once, while "xrange" generates them on-the-fly as needed. "xrange" can therefore be more
memory-efficient when working with large ranges, as it generates only one number at a time.
In Python 3.x, "range" has been modified to behave like "xrange," and "xrange" no longer exists.
In Python, the type() function determines the type of a variable. For example, type(variable) will
return the type of the variable.
Alternatively, the isinstance() function can be used to check if a variable is an instance of a particular
class. For example, isinstance(variable, int) will return True if the variable is an instance of the int
class.
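A small illustrative snippet (the variable name value is assumed):
Python
value = 10
print(type(value))             # <class 'int'>
print(isinstance(value, int))  # True
print(isinstance(value, str))  # False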
The "is" operator checks if two objects are the same object in memory. It returns True if both objects
are identical, meaning they have the same memory address.
On the other hand, the "==" operator checks if two objects have the same value. It returns True if the
values of the two objects are equal, regardless of whether they are the same object in memory.
10. What are decorators in Python, and how are they used?
In Python, decorators are functions that modify or extend the behaviour of another function without
changing its code. By wrapping a function with another function, decorators can
modify the input or output values of the function or add functionality to it before or after it executes.
Decorators are often used to add cross-cutting concerns like logging, caching, or authentication in a
reusable manner.
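For instance, a minimal logging-decorator sketch (the names log_calls and greet are hypothetical):
Python
def log_calls(func):
    def wrapper(*args, **kwargs):
        # runs before the wrapped function
        print("Calling", func.__name__)
        result = func(*args, **kwargs)
        # runs after the wrapped function
        print("Finished", func.__name__)
        return result
    return wrapper

@log_calls
def greet(name):
    return "Hello, " + name

print(greet("Ninja"))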
Python's popularity is due to its simple and readable syntax, versatility, and ease of use. It has a large
standard library and a vast array of third-party libraries and frameworks that allow developers to
build complex applications quickly and efficiently.
Python is useful for various applications, including web development, scientific computing,
data analysis, machine learning, artificial intelligence, and automation. It is used by
organizations such as Google, NASA, and Netflix and is also widely used in academia.
One of the main advantages of Python is its simplicity and readability. The syntax is easy to
understand and write, making it accessible to every programmer.
Python also strongly focuses on code readability, making it easier for people to collaborate
on projects and maintain code over time.
Another advantage of Python is its versatility. It can be used for various applications, from
building simple scripts to creating complex applications with graphical user interfaces.
Python
def calculate_area(length, width):
    area = length * width
    return area
result = calculate_area(10, 5)  # example values chosen to match the output below
print(result)
Output:
50
This function takes two parameters, length and width, and calculates the area of a rectangle using
the formula area = length * width. The return statement returns the value of the area.
A list is a mutable data type in Python that uses square brackets [] to store an ordered collection of
items.
On the other hand, a tuple is an immutable data type in Python that uses parentheses () to store an
ordered collection of items. Tuples are faster and more memory-efficient than lists, especially for
larger data collections.
Use cases: lists are used for dynamic data, while tuples are used for static data.
A module in Python is a file containing Python code that can be used in other Python programs.
A module is a self-contained unit of code that can include variables, functions, and classes that can
be accessed and used in other Python programs. By organizing code into modules, you can avoid
duplicating code across different programs and instead import and use the same code in multiple
places, making it easier to maintain and reuse your code.
Python has many built-in modules that can be used for various purposes, such as working with files,
network communication, data processing, and more. In addition, third-party modules can be
installed and used in Python programs to extend their functionality.
To use a module in a Python program, you first need to import it using the import statement. Here is
an example:
Python
import math
result = math.sqrt(12)
print(result)
Output:
3.4641016151377544
Break and Continue are two keywords in Python that are used to change the flow of a loop. These
keywords are used inside loops, such as for and while loops.
Both keywords are used to change the flow of a loop, but they have different effects on the loop:
Break - It exits the loop entirely and continues with the next statement after the loop.
Python
for i in range(10):
    if i == 5:
        break
    print(i)
Output:
0
1
2
3
4
Continue - It skips the current iteration and moves on to the next iteration of the loop.
Python
for i in range(10):
    if i == 5:
        continue
    print(i)
Output:
0
1
2
3
4
6
7
8
9
Let's consider the following code to explain iteration over a list in Python.
Python
numbers = [1, 2, 3, 4, 5]
for num in numbers:
    if num % 2 == 0:
        print(num, "is even")
    else:
        print(num, "is odd")
Output:
1 is odd
2 is even
3 is odd
4 is even
5 is odd
Explanation: We have a list called numbers containing five integers. We use a for loop to iterate over
the list, and on each iteration, we assign the current item in the list to the variable num. Then, we
check if the number is even or odd using the modulus operator (%) and print out the result.
The 'for' loop iterates over the list from the first to the last item and executes the indented code
block once for each item.
Example:
Python
student = {
    "name": "Jaideep",
    "age": 22,
    "major": "Computer Science"
}
print(student["name"])
print(student["age"])
print(student["major"])
Output:
Jaideep
22
Computer Science
18. Write a Python factorial program without using if-else, for, and ternary operators.
We can use a recursive function that calculates the factorial of a given number without using if-else,
for, and ternary operators:
The function recursively calls itself until it reaches the base case of n=1, at which point it returns 1.
Each recursive call multiplies the current value of n by the result of the previous call, effectively
calculating the factorial.
Python
def factorial(n):
    # short-circuit logic replaces if-else: returns 1 when n == 1
    return (n == 1) and 1 or n * factorial(n - 1)
print(factorial(4))
Output:
24
19. Write a Python program to check whether a given number is an Armstrong number.
Python
number = int(input("Enter a number: "))
order = len(str(number))
sum = 0
temp = number
while temp > 0:
    digit = temp % 10
    sum += digit ** order
    temp //= 10
if number == sum:
    print(number, "is an Armstrong number")
else:
    print(number, "is not an Armstrong number")
Output:
Enter a number: 34
34 is not an Armstrong number
20. What is the difference between deep and shallow copying of an object in Python?
Shallow copying creates and populates a new object referencing the original object's data. Suppose
the original object contains mutable objects as elements. In that case, the new entity will reference
the same mutable objects, and changes to the mutable objects are reflected in both the new and
original objects.
Deep copying creates a new object and recursively copies the original object's data and the data of
any objects it references. This means that the new object and its copied data are entirely independent
of the original object and its data.
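A short sketch using the standard copy module to illustrate the difference (the list values are illustrative):
Python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)
deep = copy.deepcopy(original)

original[0].append(99)
print(shallow[0])  # [1, 2, 99] - shares the nested list with the original
print(deep[0])     # [1, 2]     - has its own independent copy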
In Python, garbage collection is the process of freeing memory that is no longer used by the program. Python
uses a built-in reference-counting system that tracks how many references point to each object. When the count
drops to zero, the object is considered garbage and its memory is reclaimed.
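A small sketch showing how reference counts can be observed with the standard sys module (the exact counts reported can vary between interpreter versions):
Python
import sys

data = [1, 2, 3]
alias = data                  # a second reference to the same list
print(sys.getrefcount(data))  # the count includes the temporary reference made by the call itself
del alias                     # dropping a reference lowers the count
print(sys.getrefcount(data))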
22. What is the difference between append() and extend() methods in Python lists?
In Python, both append() and extend() methods are used to add elements to a list, but they have
some differences in functionality.
append() method: This method adds a single element to the end of a list. The element can
be of any type, including another list.
extend() method: This method adds multiple elements to a list, such as elements from
another list, tuple, or any iterable object. The elements are added one by one to the end of
the list.
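A brief illustrative snippet:
Python
a = [1, 2, 3]
a.append([4, 5])   # the whole list is added as a single element
print(a)           # [1, 2, 3, [4, 5]]

b = [1, 2, 3]
b.extend([4, 5])   # the elements are added one by one
print(b)           # [1, 2, 3, 4, 5]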
The "yield" keyword in Python creates generator functions that can produce a sequence of values.
When we call a function with a "yield" statement, it returns a generator object. The generator object
helps to iterate over the values produced by the function. The generator has values on-the-fly as it
repeats over, which makes it a memory-efficient way to generate sequences.
24. What is the difference between "static method" and "class method" in Python?
A static method is a method bound to the class, not to an instance of the class. It means it can be called on the class
without creating an instance. Static methods define utility functions that don't depend on the state
of the instance or the class.
A class method is also bound to the class, but it takes a reference to the class itself as the first
argument. Class methods operate on the class itself rather than on instances of the class. Class
methods are often used as alternative constructors for the class, which can create class instances with different
initial parameters.
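A minimal sketch of both kinds of methods (the class Circle and its members are illustrative):
Python
class Circle:
    def __init__(self, radius):
        self.radius = radius

    @staticmethod
    def describe():
        # utility function: does not touch instance or class state
        return "A circle is a round shape"

    @classmethod
    def unit(cls):
        # alternative constructor: receives the class itself as 'cls'
        return cls(1)

print(Circle.describe())
print(Circle.unit().radius)  # 1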
A generator is created using a function that contains the "yield" keyword. When the generator function
is called, it returns a generator object that produces a sequence of values on-the-fly as the generator
is iterated. To create a generator function, define a function that contains one or more "yield"
statements. Each "yield" statement produces a value for the generator to return.
26. What is the difference between "map" and "filter" functions in Python?
"map" and "filter" are built-in functions operating on iterable objects. The main difference between
the two functions is that "map" applies a given function to each item in an iterable and returns an
iterator with the results. In contrast, "filter" applies a given function to each item in an iterable and
returns an iterator with only the things that meet the given condition.
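For instance (the list nums is illustrative):
Python
nums = [1, 2, 3, 4, 5]
squares = map(lambda x: x * x, nums)        # applies the function to every item
evens = filter(lambda x: x % 2 == 0, nums)  # keeps only items that satisfy the condition
print(list(squares))  # [1, 4, 9, 16, 25]
print(list(evens))    # [2, 4]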
The except block can be used to catch a specific or general exception and handle the exception by
providing an appropriate message to the user or performing other actions. We can also use it to raise
a new exception or re-raise the original exception.
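A minimal sketch of catching a specific exception (the message strings are illustrative):
Python
try:
    result = 10 / 0
except ZeroDivisionError as exc:
    # handle the specific exception with a friendly message
    print("Cannot divide by zero:", exc)
    # optionally, a new exception could be raised instead:
    # raise ValueError("invalid denominator") from exc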
A module is a single file containing Python code that can be imported and used in other Python
codes. A module typically includes functions, classes, and variables used in other programs. Modules
are a way to organize code and promote code reuse.
On the other hand, a package is a collection of related modules organized into a directory structure.
A package contains an __init__.py file that is executed when the package is imported. The __init__.py file can contain
initialization code and define the package's interface by specifying which modules are part of the
package.
29. What are some of the built-in data structures in Python, and how are they used?
Python has multiple built-in data structures, including lists, tuples, sets, and dictionaries, which store
and organize data differently.
1. Lists: Lists are ordered collections of items of different types. They are defined by enclosing a
comma-separated list of values in square brackets. Lists are mutable, meaning you can
remove, add or modify their items.
2. Tuples: Tuples are similar to lists but immutable, meaning you cannot change their values
once they are defined. Tuples are defined by enclosing a comma-separated list of values in
parentheses.
3. Sets: Sets are unordered collections of unique items. They are defined by enclosing a
comma-separated list of values in curly braces. Sets are helpful when you want to eliminate
duplicates from a collection of objects.
4. Dictionaries: Dictionaries are collections of key-value pairs. They are defined by enclosing a
comma-separated list of key-value pairs in curly braces, with a colon separating each key
and its corresponding value. Dictionaries are helpful when you need to look up the value associated with a particular key.
30. What is the difference between a Mutable datatype and an Immutable data type?
Mutable data types are those whose values can be changed after creation. When you modify a
mutable object, it changes its value in place without creating a new object. Any other references to
the object will also see the change.
On the other hand, immutable data types are those whose values cannot be changed after creation.
When you modify an immutable object, you create a new object with the modified value. Any other
references to the original object will not see the change.
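A short illustration (the variable names are assumed):
Python
nums = [1, 2, 3]     # list: mutable
other = nums
nums.append(4)
print(other)         # [1, 2, 3, 4] - the change is visible through both names

text = "abc"         # str: immutable
upper = text.upper() # creates a new string object
print(text, upper)   # abc ABC - the original is unchanged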
The == operator compares the values of the objects, while the is operator checks whether the two
objects are the same, i.e., whether they have the same identity.
== operator:
It compares the values of two objects.
It returns True if the values of two objects are equal.
It returns False if the values of two objects are not equal.
is operator:
It checks whether two objects are the same object.
It returns True if two variables reference the same object in memory.
It returns False if two variables do not reference the same object in memory.
Example:
Python
a = [4, 2, 1]
b = [4, 2, 1]
c = a
print(a == b)
print(a is b)
print(a is c)
Output:
True
False
True
Explanation:
True - a and b have the same values.
False - a and b are two different objects in memory.
True - c references the same object as a.
32. What is the difference between a shallow copy and a deep copy in Python?
The difference between a shallow copy and a deep copy in python are as follows:
Definition: A shallow copy creates a new object but references the same memory addresses as the original object for the nested objects. A deep copy creates a new object with new memory addresses for the main object and any nested objects.
Speed: A shallow copy is faster than a deep copy because it does not create new objects for nested objects. A deep copy is slower because it creates a new object for each nested object.
Memory usage: A shallow copy uses less memory because it shares memory addresses for nested objects. A deep copy uses more memory because it creates new memory addresses for each nested object.
In Python, arguments are generally passed by reference, but how it works can be confusing.
When you pass an object to a function in Python, a reference to that object is passed to the function.
This means that the function can modify the object, and those modifications will be reflected in the
calling code.
Example:
Python
def modify_list(my_list):
    my_list.append(4)

# create a list
my_list = [1, 2, 3]
# pass the list to the function, which modifies it in place
modify_list(my_list)
print(my_list)
Output:
[1, 2, 3, 4]
Explanation: In this example, the ‘modify_list’ function modifies the original list by appending the
value 4. When the function returns, the modified list is still accessible in the calling code.
However, there are some cases where it appears that Python is passing arguments by value, for
example when you pass an integer or a string to a function and modify it within the function: the change is
not visible to the caller, because these types are immutable.
In conclusion, arguments in Python are generally passed by reference, which means that
modifications made to objects within a function can affect the original object in the calling code.
However, the behaviour can vary depending on the type of object being passed, since immutable
objects like strings and integers cannot be modified in place.
To convert a list into a set in Python, you can use the built-in set() function. This function takes an
iterable object (such as a list) as input and returns a new set object that contains all the unique
elements in the iterable.
Python
my_list = [1, 2, 3, 3, 4, 4, 5]
my_set = set(my_list)
print(my_set)
Output:
{1, 2, 3, 4, 5}
Explanation:
In the above code, we first define a list my_list containing duplicate elements. We then pass this list
to the set() function to create a new set object, my_set, that contains only the unique elements of
the original list.
Note that sets are unordered collections of unique elements, so the order of the elements in the
original list may not be preserved in the resulting set.
You can create an empty NumPy array in Python using the numpy.empty() function. This function
creates an array of a specified size and shape but with uninitialized entries.
Here's an example:
Python
import numpy as np
empty_arr = np.empty((3, 4))
print(empty_arr)
This code will output an empty array of shape (3, 4), which means it has 3 rows and 4 columns, but
with no values assigned to any of its entries:
Output:
As you can see, the array entries are uninitialized, containing whatever values were already in the
memory space where the array was created. If you want to create an empty array with initialized
entries, you can use the numpy.zeros() function instead.
Pickling and unpickling are processes used in Python to serialize and deserialize objects. Serialization
converts an object into a byte stream, which can be stored or transmitted over a network.
Deserialization transforms a sequence of bytes, typically stored in a file or transmitted over a
network, back into an object in memory that can be manipulated and used by a program.
Pickling converts a Python object hierarchy into a byte stream using the pickle module. This
byte stream can be saved to a file or sent over a network. The pickle module can handle
most Python objects, including complex data types such as lists, sets, and dictionaries.
Unpickling is the reverse process of pickling. It involves reading a byte stream and
reconstructing the original Python object hierarchy. This is done using the pickle.load()
function.
37. Write a code snippet to get an element, delete an element, and update an element in an array.
Python
import numpy as np
# Create an array
arr = np.array([1, 2, 3, 4, 5])
# Get an element
element = arr[2]
print("Element at index 2:", element)
# Delete an element
arr = np.delete(arr, 2)
# Update an element
arr[0] = 10
Output:
Element at index 2: 3
Python
v = lambda x, y: x + y
print(v(7, 5))
Output:
12
Python
import array as arr
a = arr.array('i', [1, 2, 3, 4, 5])
a[::-1]
Output:
array('i', [5, 4, 3, 2, 1])
[::-1] returns a reversed copy of an ordered data structure such as an array or a list. The original
array or list remains unchanged.
l = [ 'a','b','c','d','e' ]
l[::-1]
Output:
['e', 'd', 'c', 'b', 'a']
The Random module is a standard module that is used to generate a random number. The method is
defined as:
import random
random.random()
The random.random() method returns a floating-point number in the range [0, 1). This
function generates random float numbers. The functions of the random module are bound methods of a
hidden instance of the random.Random class. Separate instances of random.Random can be created, for
example in multithreaded programs where each thread needs its own independent generator.
For the most part, 'xrange' and 'range' have the same functionality. They both provide a way to
generate a list of integers to use however you please. The only difference is that 'range' returns a
Python list object while 'xrange' returns an 'xrange' object.
This means that 'xrange' doesn't generate a static list at run-time as 'range' does. It creates the
values as you need them with a unique technique called yielding. This technique is used with a type
of object known as generators. That means that if you have a vast range, you'd like to generate a list
for, say, one billion, 'xrange' is the function to use.
This is especially true if you are working with a memory-sensitive system such as a cell phone, as the
'range' function will use as much memory as it needs to create your array of
integers, which can result in a MemoryError and crash your program. The 'range' function is a
memory-hungry beast.
42. How can you randomise the items of a list in place in Python?
Python
from random import shuffle
x = ['Coding', 'Ninjas', 'Python', 'Workbook']  # example list (assumed)
shuffle(x)
print(x)
Output:
A randomly shuffled version of the list, which changes on each run.
The Python library offers serialisation out of the box. Serialising an object refers to
transforming it into a format that can be stored, so that it can later be deserialised to obtain the original object.
Here, the pickle module comes into play. It accepts any Python object, converts it into a string
representation, and dumps it into a file using the dump function; this process is called pickling. In
contrast, the process of retrieving the original Python objects from the stored string representation is
called unpickling.
Pickling:
The process of serialisation in Python is known as pickling. Using 'pickling', any object
in Python can be serialised into a byte stream and dumped into a file. Pickled data is compact, but
pickled objects can be compressed further. Moreover, pickle keeps track of the objects it has already
serialised, and the serialisation is portable across versions.
The function used for this process is pickle.dump() from the pickle module.
Unpickling:
Unpickling is the opposite of pickling. It deserialises the byte stream to recreate the objects stored in
the file and loads them in the memory.
The function used for this process is pickle.load().
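A minimal sketch of pickling and unpickling (the file name data.pkl and the dictionary contents are illustrative):
Python
import pickle

data = {"name": "Ninja", "scores": [10, 20, 30]}

# Pickling: serialise the object to a file
with open("data.pkl", "wb") as fh:
    pickle.dump(data, fh)

# Unpickling: read the byte stream back into an object
with open("data.pkl", "rb") as fh:
    restored = pickle.load(fh)

print(restored == data)  # True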
def factorial(n):
    i = 1
    fact = 1
    while i <= n:
        fact *= i
        yield fact
        i += 1

a = factorial(8)     # create a generator object
print(a.__next__())  # fetches the first value produced by the generator
for i in factorial(10):
    print(i)
An iterator is an object.
It remembers its state, i.e., where it is during iteration.
The __iter__() method initializes an iterator.
It has a __next__() method which returns the next item in the iteration and moves on to the next element.
Upon reaching the end of the iterable object, __next__() must raise a StopIteration exception.
It is also self-iterable.
Iterators are objects with which we can iterate over iterable objects like lists, strings, etc.
class LinkedList:
    def __init__(self, lst):
        self.numbers = lst

    def __iter__(self):
        self.pos = 0
        return self

    def __next__(self):
        if self.pos < len(self.numbers):
            self.pos += 1
            return self.numbers[self.pos - 1]
        else:
            raise StopIteration

it = iter(LinkedList([1, 2, 3]))  # example list (assumed)
print(next(it))
print(next(it))
print(next(it))
print(next(it))
# Throws Exception
# ...
# StopIteration
*args
*args is a special syntax used in the function definition to pass a variable number of
positional arguments.
"*" means variable length, and "args" is a name used as a convention.
def add_numbers(a, b, *args):
    add = a + b
    for num in args:
        add += num
    return add
**kwargs
**kwargs is a special syntax used in the function definition to pass a variable number of keyword
arguments. Here, too, "kwargs" is used just as a convention; any other name can be used in its place.
A keyworded argument is a variable that has a name when passed to the function. kwargs is received as a
dictionary of the argument names and their values.
def KeyArguments(**kwargs):
    for key, value in kwargs.items():
        print(key + ":", value)

KeyArguments(arg1="item 1", arg2="item 2", arg3="item 3")
# output:
# arg1: item 1
# arg2: item 2
# arg3: item 3
Python packages and Python modules are two mechanisms that allow for modular programming in
Python. Modularizing has several advantages -
Simplicity: Working on single modules helps you focus on a relatively small portion of the
existing problem. This makes development more manageable and less prone to errors.
Maintainability: Modules are designed to enforce the logical boundaries between different
problem domains. If they are written to reduce interdependency, it is less likely that the
modifications in a module might also impact other parts of the program.
Reusability: Functions defined in a module can easily be reused by the other parts of the
application.
Scoping: Modules are typically defined as separate namespaces, which help avoid confusion
between identifiers from other aspects of the program.
Modules are simply Python files with a '.py' extension and can have a set of functions, classes and
variables defined in them. They can be imported and initialised using an import statement. If only partial
functionality is required, the requisite classes or functions can be imported directly, for example: from foo import bar.
Packages provide for hierarchical structuring of the module namespace using a '.' dot notation. As
modules help avoid clashes between global and local variable names, similarly, packages can help
prevent conflicts between module names.
Creating a package is easy since it uses the system's existing file structure: modules
grouped together in a folder are known as a package. Importing a module or its contents from a package
requires the package name as a prefix to the module's name, joined by a dot.
To create a class in Python, we use the keyword "class", as shown in the example below:
class Employee:
    def __init__(self, employee_name):
        self.name = employee_name
To instantiate or create the object from the class created above, we do the following:
employee = Employee("Jeff")
To access the name attribute, we call the attribute using the dot operator as shown below:
print(employee.name)
An empty class does not have any members defined inside it. It is created using the 'pass' keyword
(the pass statement does nothing in Python). Attributes can still be set on its objects from outside the
class.
For example-
Python
class EmptyClass:
    pass

obj = EmptyClass()
obj.name = "Arun"
print(obj.name)
Output:
Arun
Python does not have access specifiers in precisely the way languages like C++ do (private, public,
protected, etc.), and it does not strictly restrict access to any variable. Instead, it imitates the
behaviour of access specifiers by prefixing variable names with a single underscore (protected) or a
double underscore (private). By default, variables without prefixed underscores are public.
Example:
class Employee:
    # protected members
    _name = None
    _age = None
    # private members
    __department = None

    # constructor
    def __init__(self, emp_name, age, department):
        self._name = emp_name
        self._age = age
        self.__department = department

    # public member
    def display(self):
        print(self._name, self._age)
51. Does Python support 'multiple inheritance'? How does it work in Python? Explain with an example.
Multiple Inheritance: This is achieved when one child class derives its members from more than one
parent class. All the features of parent classes are inherited in the child class.
Python
# Parent class 1
class Parent1:
    def parent1_func(self):
        print("Hi, I am the first parent")

# Parent class 2
class Parent2:
    def parent2_func(self):
        print("Hi, I am the second parent")

# Child class inheriting from both parents
class Child(Parent1, Parent2):
    def child_func(self):
        self.parent1_func()
        self.parent2_func()

# Driver's code
obj1 = Child()
obj1.child_func()
Output:
Hi, I am the first parent
Hi, I am the second parent
Yes, it is possible if other child classes instantiate the base class, or if the base class method being called is a static method.
'__init__' is a method or constructor in Python. This method automatically allocates memory when a
new object/ instance of a class is created. All classes have the '__init__' method.
Example to show how to use it:
Python
class Employee:
    def __init__(self, name, age):
        self.name = name
        self.age = age
        self.salary = 20000

E1 = Employee("XYZ", 23)
print(E1.name)
print(E1.age)
print(E1.salary)
Output:
XYZ
23
20000
Following are the ways using which you can access parent class members within a child class:
By using the name of the parent class: You can use the name of the parent class to access its attributes, as
shown in the example below:
Example:
class Parent(object):
    # Constructor
    def __init__(self, name):
        self.name = name

class Child(Parent):
    # Constructor
    def __init__(self, name, age):
        Parent.name = name
        self.age = age

    def display(self):
        print(Parent.name, self.age)

# Driver Code
obj = Child("ChildClassInstance", 9)
obj.display()
Using super(): The parent class members can be accessed using the super keyword in the child class.
Example:
class Parent(object):
    # Constructor
    def __init__(self, name):
        self.name = name

class Child(Parent):
    # Constructor
    def __init__(self, name, age):
        '''
        Calls the parent class constructor using super()
        '''
        super(Child, self).__init__(name)
        self.age = age

    def display(self):
        print(self.name, self.age)

# Driver Code
obj = Child("ChildClassInstance", 9)
obj.display()
We can check if an object is an instance of a class by making use of the isinstance() method:

class Parent(object):
    pass

class Child(Parent):
    pass

# Driver Code
obj1 = Child()
obj2 = Parent()
print(isinstance(obj1, Child))   # True
print(isinstance(obj1, Parent))  # True
print(isinstance(obj2, Child))   # False
print(isinstance(obj2, Parent))  # True
57. Write a one-liner to count the number of capital letters in a file. The code should work even if
the file is too big to fit in memory.
Let us first work out a multiple line solution and then simplify it to a one-liner code.
count = 0
with open('sample.txt') as fh:  # example file name
    for character in fh.read():
        if character.isupper():
            count += 1
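The multiple-line solution can then be condensed into a one-liner. This sketch reads the file lazily, line by line, so it works even for files that do not fit in memory (the file name sample.txt is illustrative):
Python
count = sum(1 for line in open('sample.txt') for ch in line if ch.isupper())
print(count)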
58. What is the 'main' function in Python? How do you invoke it?
In the world of programming languages, the 'main' function is considered the entry point for the
execution of a program. In Python, however, the interpreter interprets the file serially, line by line.
This means that Python does not provide a 'main()' function explicitly. But this doesn't
mean that we cannot simulate the execution of 'main'. We can do this by defining a user-defined
'main()' function and using the Python file's '__name__' property. This '__name__' variable is a
special built-in variable that holds the current module's name. This can be done as shown
below:
Python
def main():
print("Hi Ninja!")
if __name__ == "__main__":
main()
Output:
Hi Ninja!
59. Are there any tools for identifying bugs and performing static analysis in Python?
Yes, tools like PyChecker and Pylint are used as static analysis and linting tools, respectively.
PyChecker helps find bugs in a python source code file and raises alerts for code issues and
complexity. Pylint checks for a module's coding standards and supports different plugins to enable
custom features to meet this requirement.
61. What are decorators, and how are they used in Python?
Decorators in Python are functions that add functionality to an existing function
without changing the structure of the function itself. They are applied using the @decorator_name
syntax and are called in a bottom-up fashion. For example:
def lowercase_decorator(function):
    def wrapper():
        func = function()
        string_lowercase = func.lower()
        return string_lowercase
    return wrapper

def splitter_decorator(function):
    def wrapper():
        func = function()
        string_split = func.split()
        return string_split
    return wrapper

@splitter_decorator   # applied second
@lowercase_decorator  # applied first
def hello():
    return 'Hello World'

hello()  # output: ['hello', 'world']
The beauty of a decorator lies in the fact that, besides adding functionality to the output of the
method, it can even accept arguments for functions and can further modify those arguments
before passing them to the function itself. The inner nested function, i.e. the 'wrapper' function,
plays a significant role here. It is implemented to enforce encapsulation and thus keep itself hidden
from the global scope.
def names_decorator(function):
    def wrapper(arg1, arg2):
        arg1 = arg1.capitalize()
        arg2 = arg2.capitalize()
        string_hello = function(arg1, arg2)
        return string_hello
    return wrapper

@names_decorator
def say_hello(name1, name2):
    return 'Hello ' + name1 + '! Hello ' + name2 + '!'

say_hello('sara', 'ansh')  # example call: returns 'Hello Sara! Hello Ansh!'
Python packages are namespaces containing multiple modules such as “os”, “sys”, “json”, “pandas”
etc.
63. What is the 'pandas' library used in Python? How is a 'pandas' data frame created?
Pandas is an open-source Python library used for data manipulation in applications that require high
performance. The name is derived from "Panel Data", which refers to multidimensional structured data. It was
developed in 2008 by Wes McKinney and designed for data analysis.
Pandas help perform five significant data analysis steps: load the data, clean/manipulate it, prepare
it, model it, and analyse it.
A data frame is a 2D mutable and tabular structure representing data labelled with axes - rows and
columns.
The syntax for creating a data frame:
import pandas as pd
df = pd.DataFrame(data)  # where data is a dict, list, or other supported structure
append() method: This is used to append the rows of one data frame to the end of another data frame.
Syntax:
df1.append(df2)
concat() method: This is used to stack data frames vertically. This is best used when the data frames
have the same column fields.
Syntax:
pd.concat([df1, df2])
join() method: It is used for extracting data from various data frames that have one or more common
columns.
Syntax:
df1.join(df2)
65. Can you create a series from the dictionary object in pandas?
One-dimensional array capable of storing different data types is called series. We can create pandas
series from dictionary object as shown below:
Python
import pandas as pd
dict_info = {'x': 2.0, 'y': 3.1, 'z': 2.2}
series_obj = pd.Series(dict_info)
print(series_obj)
Output:
x 2.0
y 3.1
z 2.2
dtype: float64
If an index is not specified, the keys of the dictionary are sorted in ascending
order to construct the index. If an index is passed, the values corresponding to the index labels
will be pulled out of the dictionary.
66. How will you identify and deal with missing values in a data frame?
We can identify whether a data frame has missing values by using the isnull() and isna() methods.
missing_data_count = df.isnull().sum()
We can handle missing values by either replacing the values in the column with 0:
df['column_name'].fillna(0)
or by replacing them with the column's mean:
df['column_name'] = df['column_name'].fillna(df['column_name'].mean())
import pandas as pd
df = pd.DataFrame(data_info)
print(df)
print(df)
68. How will you delete indices, rows and columns from a data frame?
To delete an Index:
Execute 'del df.index.name' to remove the index name.
Alternatively, df.index.name can be assigned to None.
For example, if you have the below data frame:
Column 1
Names
John 1
Jack 2
Judy 3
Jim 4
df.index.name = None
# del df.index.name
print(df)
Column 1
John 1
Jack 2
Judy 3
Jim 4
The axis argument is passed to the drop() method: if the value is 0, it signals to
drop/delete a row, and if it is 1, a column.
Additionally, we can delete rows/columns in-place by setting the value of 'inplace' to
True. This makes sure that the deletion happens without the need for reassignment.
Duplicate values can be deleted from a row/column by using the drop_duplicates()
method.
69. How can the first row be re-indexed as the name of the columns in pandas?
Reindexing is the process of conforming a data frame to a new index with optional filling logic. If
values are missing for a label in the previous index, NaN/NA is placed in that location. A new object is returned
unless the new index is equivalent to the current one and copy is set to False. Reindexing is
also used for changing the indices of rows and columns in the data frame.
A generator is an iterator that generates values on the fly as needed. It is defined using the yield
keyword and is iterated over with a "for" loop or by calling the next() function. Generators are useful for
generating large sequences of values that may be too large to store in memory.
On the other hand, a coroutine is a special kind of function that can be paused and resumed at
specific points. It is defined using the async def syntax and is driven using an async for loop or by
awaiting it with the await keyword. Coroutines help perform asynchronous operations, such as network or
database I/O, without blocking the main thread of execution.
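A minimal asyncio sketch of a coroutine (the coroutine name fetch_data and its delay are illustrative):
Python
import asyncio

async def fetch_data():
    # simulate a non-blocking I/O operation
    await asyncio.sleep(1)
    return "data"

async def main():
    result = await fetch_data()
    print(result)

asyncio.run(main())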
72. How does Python's Global Interpreter Lock (GIL) affect multithreading and multiprocessing?
The Global Interpreter Lock (GIL) is the Python interpreter's mechanism to ensure that only one
thread can execute Python bytecode at a time.
1. In the case of multithreading, Python threads cannot take full advantage of multiple CPU
cores to perform parallel processing. While threads can be helpful for I/O-bound tasks, they
could be better suited for CPU-bound tasks that require intensive computation. In these
cases, multiprocessing can be a better option.
2. Multiprocessing involves running multiple instances of the Python interpreter in parallel,
each with its own GIL. It allows for parallel processing on multi-core machines, as each
process can utilize a separate CPU core.
1. Use efficient algorithms and data structures: Inefficient algorithms or data structures can
lead to unnecessary computational overhead, slowing down the performance of your
application.
2. Optimise code with profiling: Profiling measures your code's performance to identify
bottlenecks and areas to be optimized. Python has several built-in profiling tools, such as
cProfile and time, to help you identify performance issues.
3. Utilise built-in functions and libraries: Python has an extensive standard library with many
built-in functions and modules optimized for performance.
4. Implement parallelism with multiprocessing: Python's Global Interpreter Lock (GIL) can limit
the performance of multi-threaded programs. However, multiprocessing takes advantage of
multiple CPU cores for parallel processing.
The "asyncio" library is a built-in library in Python that provides an infrastructure for writing
asynchronous, concurrent, and parallel code. It is designed to help developers write highly efficient
and scalable network servers and clients. Asyncio enables you to write code that can perform I/O
operations without blocking the main thread of execution, which can significantly improve the
performance and responsiveness of your applications.
1. Use generators and iterators: Generators and iterators can help reduce memory usage by
allowing you to process data one element at a time rather than loading the entire
dataset into memory at once.
2. Use built-in functions and modules: Built-in functions and modules like "itertools" and
collections can help optimize memory usage by providing efficient algorithms and data
structures optimized for memory usage.
3. Avoid unnecessary copying of data: Python objects are often passed by reference, but careless code can
still copy data excessively. To avoid this, you can use immutable objects like tuples, or use the
copy() function to create shallow copies rather than deep copies when a full copy is not needed.
4. Use lazy loading: Lazy loading is a technique in which data is loaded into memory only when
needed rather than the entire dataset. It can help reduce memory usage and improve
performance.
76. What are some best practices for designing and developing large-scale Python applications?
Some best practices for designing and developing large-scale Python applications:
1. Use a modular architecture: Modular architecture allows you to break down your
application into smaller, more manageable components. It makes it easier to understand,
test, and maintain your code.
2. Follow coding and conventions: Follow coding standards and conventions to ensure your
code is readable and consistent. Use descriptive variable and function names, comments,
and documentation.
3. Write unit tests: Write unit tests to ensure that your code is correct and performs as
expected. Use automated testing frameworks such as "unittest," "pytest," or "nose."
4. Use version control: Use a system like Git to manage your code and collaborate with other
developers. It allows you to track changes and revert to previous code versions.
Multithreading involves running more than one thread within a single process, allowing multiple
program parts to execute concurrently. Python provides a threading module that is used for
multithreading. The simplest way to create a new thread is to instantiate the Thread class and pass it
a callable object (e.g., a function) in that thread.
Multiprocessing involves running multiple processes that can execute concurrently, taking advantage
of multiple CPUs or CPU cores. Python provides a multiprocessing module that is used for
multiprocessing. The simplest way to create a new process is to instantiate the Process class and pass
it a callable object (e.g., a function) in that process.
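A brief sketch of both approaches (the worker function and its argument are illustrative):
Python
from threading import Thread
from multiprocessing import Process

def worker(name):
    print("running", name)

if __name__ == "__main__":
    # run the callable in a new thread within this process
    t = Thread(target=worker, args=("thread",))
    t.start()
    t.join()

    # run the callable in a separate process
    p = Process(target=worker, args=("process",))
    p.start()
    p.join()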
One key feature that enables metaprogramming in Python is introspection, which allows a program
to examine its own structure and behaviour. For example, the built-in dir() function can be used to get a list
of all the attributes and methods of an object, while the getattr() and setattr() functions can be used
to get or set an attribute of an object dynamically.
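A small illustration of these introspection helpers (the class Point is hypothetical):
Python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print([name for name in dir(p) if not name.startswith("__")])  # ['x', 'y']
print(getattr(p, "x"))  # 1
setattr(p, "x", 10)     # dynamically set an attribute
print(p.x)              # 10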
The "pickle" module in Python provides a way to serialize and deserialize Python objects, meaning it
can convert a Python object into a byte stream, which can then be stored or transmitted, then
converted back to the original object. Converting an object into a byte stream is called "pickling," and
restoring the stream into an object is called "unpickling."
The "pickle" module provides a way to easily store and transport complex Python objects between
different programs or machines without manually converting the objects into a format that can be
stored or transmitted.
1. Django is a full-featured web framework that provides tools for building web applications,
including an object-relational mapper (ORM), a templating engine, and an administration
interface.
2. Flask is a lightweight web framework that is easy to get started with and is designed to be
highly customizable. It provides a minimal set of tools for building web applications.
3. Pyramid is another web framework for Python that is highly flexible and can be used for
building applications of any size or complexity. It follows the model-view-controller (MVC)
architectural pattern and provides many tools for building web applications.
81. Given two lists, generate a list of pairs (one element from each list) with the help of the zip
function.
Using the Zip function, you can generate a list of pairs from two lists.
We define lists list1 and list2 with values [1, 2, 3] and ['a', 'b', 'c'], respectively. We then use the zip()
function to generate a new list of pairs, where the first element of each pair comes from list1 and the
second element of each pair comes from list2.
The zip() function takes multiple iterables as arguments and returns an iterator that aggregates
elements from each iterable into tuples. In this case, we pass in list1 and list2 as arguments to zip().
We then convert the resulting iterator to a list using the list() function.
Python
list1 = [1, 2, 3]
list2 = ['a', 'b', 'c']
pairs = list(zip(list1, list2))
print(pairs)
Output:
[(1, 'a'), (2, 'b'), (3, 'c')]
A data frame is a data structure used in programming and data analysis, often in the context of
working with data in a tabular format. It is a two-dimensional table-like structure, with rows
representing observations or cases and columns representing variables or attributes.
In Python, data frames are typically created using the pandas library, which provides a DataFrame object
that can be used to store and manipulate data in tabular format. Data frames can be created from
various sources, including CSV files, Excel spreadsheets, SQL databases, etc.
Python
import pandas as pd
data = {'Name': ['Raju', 'Charu', 'Lokesh'], 'Age': [30, 35, 40], 'City': ['London', 'Paris', 'Dubai']}
df = pd.DataFrame(data)
print(df)
Output:
     Name  Age    City
0    Raju   30  London
1   Charu   35   Paris
2  Lokesh   40   Dubai
The Global Interpreter Lock (GIL) is a mechanism in Python that ensures only one thread executes
Python bytecode at a time. This means that multiple threads can exist within a Python process but
cannot execute Python bytecode in parallel.
The GIL is implemented in CPython, the default and most widely used implementation of the Python
programming language. It is a design choice made to simplify memory management and improve
performance by preventing conflicts that can occur when multiple threads access the same objects
or data structures simultaneously.
While the GIL provides a certain level of safety and simplicity, it can limit the performance of CPU-
bound tasks that are parallelizable, as only one thread can execute at a time. However, the GIL does
not necessarily impact performance for I/O-bound tasks, which often rely on external resources such
as disk or network, and therefore do not heavily use the CPU.
There have been attempts to work around the limitations of the GIL, such as using multiprocessing,
which allows for parallel execution across multiple processes or using other implementations of
Python, such as Jython or IronPython, that do not have a GIL. However, these solutions come with
their own trade-offs and may not be suitable for every use case.
Lambda functions are useful when you must define a simple function that will only be used in one
place. However, they can be difficult to read and understand when they become too complex or are
used in many places in your code.
Generally, it's a good practice to use regular named functions for more complex operations or
functions that will be reused in multiple places.
Here we take a lambda function that takes two arguments and returns their sum:
Python
sum = lambda a, b: a + b
print(sum(3, 4))
Output:
7
The swapcase() function is a built-in method for strings that returns a new string where all uppercase
characters are converted to lowercase, and all lowercase characters are converted to uppercase. The
original string is not modified.
The swapcase() function can be useful when you need to quickly and easily change the case of a
string. For example, you might use it to normalize user input for a case-insensitive search or to
format text for display in a particular way.
Example:
Python
my_string = "cODINg nINJAs"
new_string = my_string.swapcase()
print(new_string)
Output:
CodinG NinjaS
Explanation:
The original string my_string contains both uppercase and lowercase characters. The swapcase()
function is called on the string, which returns a new string where all the uppercase characters are
converted to lowercase, and all the lowercase characters are converted to uppercase. The resulting
string is then stored in the new_string variable.
86. Write a Python function that takes a list of integers and finds the list's longest increasing
subsequence (LIS). The LIS is the longest subsequence of the list in which the elements are in
increasing order.
You can use dynamic programming to find the longest increasing subsequence of a list of integers.
Here's one way to write the longest_increasing_subsequence() function using dynamic
programming:
Python
def longest_increasing_subsequence(nums):
    n = len(nums)
    # dp[i] stores the length of the LIS ending at index i.
    dp = [1] * n
    for i in range(n):
        for j in range(i):
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    # Initialize an empty list to hold the LIS and find its maximum length.
    lis = []
    max_len = max(dp)
    i = dp.index(max_len)
    lis.append(nums[i])
    # Walk backwards, collecting elements that extend the subsequence.
    for j in range(i - 1, -1, -1):
        if nums[j] < nums[i] and dp[j] == dp[i] - 1:
            lis.append(nums[j])
            i = j
    return lis[::-1]

nums = [10, 9, 2, 5, 3, 7, 101, 18]  # example input (assumed)
print(longest_increasing_subsequence(nums))
Output:
[2, 3, 7, 101]
87. Write a function that takes a binary tree as input and returns the maximum path sum of any
path in the tree.
Python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def max_path_sum(root):
    max_sum = float('-inf')

    def helper(node):
        nonlocal max_sum
        if not node:
            return 0
        # Recursively calculate the maximum path sum from the left and right subtrees.
        left_path_sum = max(helper(node.left), 0)
        right_path_sum = max(helper(node.right), 0)
        # A path may pass through this node and use both subtrees.
        max_sum = max(max_sum, node.val + left_path_sum + right_path_sum)
        # When extending the path upwards, only one subtree can be used.
        return node.val + max(left_path_sum, right_path_sum)

    helper(root)
    return max_sum

# Get user input to create a binary tree.
root_val = int(input("Enter the value of the root node: "))
root = TreeNode(root_val)
if input("Does the root node have a left child? (y/n) ") == 'y':
    left_val = int(input("Enter the value of the left child: "))
    root.left = TreeNode(left_val)
if input("Does the root node have a right child? (y/n) ") == 'y':
    right_val = int(input("Enter the value of the right child: "))
    root.right = TreeNode(right_val)

max_sum = max_path_sum(root)
print("The maximum path sum is:", max_sum)
Output:
88. Given a list of intervals representing different meetings' start and end times, write a Python
function to find the minimum number of meeting rooms required to hold all the meetings.
Constraints:
You may assume that each meeting starts and ends within the same day, so the start time is
always less than the end time.
You may assume that the list of intervals is non-empty and contains at least one meeting.
Code
Python
import heapq

def min_meeting_rooms(meetings):
    meetings.sort(key=lambda x: x[0])
    # Initialize a priority queue to store the end times of the currently scheduled meetings.
    end_times = []
    for start, end in meetings:
        # If the earliest-ending meeting has finished, reuse its room.
        if end_times and end_times[0] <= start:
            heapq.heappop(end_times)
        heapq.heappush(end_times, end)
    # The heap size is the number of rooms needed simultaneously.
    return len(end_times)

n = int(input("Enter the number of meetings: "))
meetings = []
for i in range(n):
    start = int(input("Enter the start time of meeting {}: ".format(i + 1)))
    end = int(input("Enter the end time of meeting {}: ".format(i + 1)))
    meetings.append((start, end))

min_rooms = min_meeting_rooms(meetings)
print("Minimum number of meeting rooms required:", min_rooms)
Output:
(Optional) For opening a text file using the above modes, we can also append 't' to them, as
follows: 'rt', 'wt', and so on.
Similarly, a binary file can only be appropriately parsed and read by appending 'b' to the mode, as
follows: 'rb', 'wb', and so on.
If you want to append content to a file, we can also use the append mode (a).
90. What are the functions of file-related modules in Python? Give some examples of such types of
modules?
Python has many file-related modules that can manipulate text files and binary files in a file system.
They can also be used to pickle and unpickle data from files, and some of them can be used to create a text
or binary file, update its content, copy it, delete it, and so on.
Some such modules are os, os.path, and shutil. The os.path module has functions to access the file
system, while the shutil module can be used to copy or delete files.
91. What is the difference between opening a Python file versus using the 'with' statement to do
the same? What is the syntax to do that?
Using the 'with' statement in Python, one can open a file that gets automatically closed as soon as
the block of code, where 'with' is used, exits. In this way, we can opt not to use the close() method.
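A sketch of the syntax (the file name example.txt is illustrative):
Python
with open("example.txt", "r") as fh:
    contents = fh.read()
# the file is closed automatically here, even if an exception was raised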
import random
def read_random_line(fname):
lines = open(fname).read().splitlines()
return random.choice(lines)
print(read_random_line('randomfile.txt'))
93. Why isn’t all the memory deallocated after the end of execution of Python programs?
When Python programs exit, especially those using Python modules with circular references
to other objects, or objects referenced from the global namespaces, the memory is not always
deallocated or freed.
It is not possible to deallocate those portions of memory that are reserved by the C library.
On exit, because of having its own efficient cleanup mechanism, Python will try to deallocate
every other object.
94. What advantages does Numpy Arrays Have over Nested Lists for data analysis with large
datasets?
Numpy is written in C, so all its complexity is packed behind a simple-to-use module. Lists, on
the other hand, are dynamically typed; therefore, Python checks the data type of each element
every time it is used. This makes Numpy arrays much faster than Python lists.
Numpy also has many additional functionalities that lists don't offer; for instance, many operations
can be vectorised and automated in Numpy.
95. How are the arguments in Python by default passed? Is it by value or by reference?
By default, all arguments in Python are passed by reference. This means that any
changes made within the function will be reflected in the original object.
Consider two sets of code shown below:
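The snippets themselves were not preserved here; the following is a minimal sketch consistent with the description below (the function and variable names are assumed):
Python
def assign_element(l):
    l[0] = 3            # modifies the passed-in list in place

def rebind_list(l):
    l = [3, 2, 3, 4]    # rebinds the local name only

l = [1, 2, 3, 4]
assign_element(l)
print(l)                # [3, 2, 3, 4]

l = [1, 2, 3, 4]
rebind_list(l)
print(l)                # [1, 2, 3, 4]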
In the first example, we only assign a value to one element of ‘l’, so the output becomes [3, 2, 3, 4].
In the second example, we have created a whole new object for 'l'. But, the values [3, 2, 3, 4] don't
show up in the output as they are outside the function's definition.
96. What Is the Difference Between using ‘Del’ and ‘Remove()’ on Lists?
Del -> removes all elements of a list within a given range.
Syntax: del list[start:end]
Remove() -> removes the first occurrence of a particular element.
Syntax: list.remove(element)
>>> lis = [3, 1999, 2, 5, 10, 1999]   # example list (assumed)
>>> del lis[1:3]
>>> lis
[3, 5, 10, 1999]
>>> lis.remove(1999)
>>> lis
[3, 5, 10]
Note that in the range 1:3, the elements are counted up to (the second index) 2 and not 3.
Python provides a multi-threading package that can be used to run threads across a program, but using it
to speed up code usually isn't a good idea, because the multi-threaded execution adds overhead to the rest of your program.
Python has a construct/mechanism which is known as the Global Interpreter Lock (GIL). It makes
sure that only one of the 'threads' can execute at any single point in time. A thread acquires the GIL,
does some work, and then passes the GIL onto the next thread.
This happens quickly so that it may seem like your threads are executing in parallel, but they are just
taking turns using the same CPU core.
All this GIL passing adds an overhead to the execution. If one wants to make the code run faster,
using the threading package often isn't a good idea.
GIL stands for Global Interpreter Lock. It is a mutex used for limiting access to python objects and
aids in effective thread synchronisation by avoiding deadlocks. GIL helps one in achieving
multitasking (and not parallel computing). The following diagram represents how GIL works.
Based on the diagram, there are three threads. The first thread acquires the GIL first and starts the
I/O execution. When the I/O operations are done, the first thread releases the acquired GIL, which is
taken up by the second thread. The process repeats, and the GIL is used by different threads
alternatively, which is done until all threads have completed their execution. The threads that did not
have the GIL lock go into a waiting state and resume execution only when it acquires the lock.
A session allows you to remember information from one request to another. In Flask, a session uses
a signed cookie, so the user can look at the session contents; however, the session can only be modified
if the secret key Flask.secret_key is known.
after_request(): These callbacks are called after each request, and the response about to be sent to the client is passed to them.
teardown_request(): These callbacks are called even when an exception is raised, in which case a response is
not guaranteed. They are called after the response has been constructed; they
aren't allowed to modify the request, and their return values are ignored.
Django and Flask map the URL or addresses typed in the web browsers to functions in Python.
Flask is much simpler than Django, but Flask does not do a lot for you; that is, you need to specify
the details yourself, whereas Django does a lot for you (it has a batteries-included approach), so you
would not need to do much work. Django consists of prewritten modules that the user will most
frequently need, whereas Flask gives users the freedom to create the parts of the backend
modules they require, making it more straightforward to understand. Technically, both are equally good
and have their pros and cons.
102. Mention the critical differences between using Django, Pyramid and Flask.
Flask is a web "microframework" primarily built for small applications with simpler
requirements. In Flask, you have to use some external libraries to achieve most of the
standard functionalities required. Flask has a "ready to use" approach.
Pyramid is built for larger applications. It provides flexibility and lets the developers use the
right tools required for their projects. Developers can choose the database, URL structure,
templating style and more. The "Pyramid" framework is heavily configurable.
One can also use Django for larger applications, just like Pyramid, but it comes with a specific
structure and style/pattern for creating most functional components. It also includes an ORM.
An MVC architecture has been there for a long time in the software industry since the very
beginning. Almost all languages/frameworks use it with a slight variation, but the concept remains
consistent.
MVC stands for Model – Views – Controller, where the 'Model' provides an interface for the data
stored in the database. In contrast, the "View" is responsible for displaying Model Data to the user
and also to take up information from the user, with the "Controller" in MVC being accountable for
the entire logic behind the web application.
With this conceptual understanding of the pattern being followed or adopted in some way or the
other in most frameworks, "Django" includes its implementation method in its web applications.
Hence, its framework handles all the parts of the controller by itself.
Hence Django implements a particular kind of architecture known as the "MVT" (Model – View –
Template) architecture. Where "MVT" stands for Model – View – Template, i.e.:-
1. Model: Like "Model" in the MVC architecture, it has the same functionality for providing an
interface for the data stored in the database.
2. Template: Just like "Views" in MVC, Django makes use of "Templates" in its framework.
"Templates" are responsible for the User Interface completely. It handles all static parts of
the webpage along with the HTML, which the users visiting the webpage will perceive.
3. Views: In Django, Views link the Model data and the Templates.
Note: Like the controllers in MVC, views in Django MVT are responsible for handling all business logic
behind the scenes across the web app. It acts as the bridge between 'models' and 'templates'.
It sees the user request, retrieves appropriate data from the database, then renders back the
template along with recovered data.
Therefore there is no separate controller in Django MVT architecture, and everything is based on
Model -View – Template itself and hence the name MVT.
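A minimal sketch of a Django view acting as this bridge (the Book model and template path are hypothetical):
from django.shortcuts import render
from .models import Book   # hypothetical model

def book_list(request):
    books = Book.objects.all()   # retrieve data via the Model
    # render the Template with the retrieved data
    return render(request, 'books/list.html', {'books': books})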
The main initialisation steps for database setup in Django are done by editing and defining the
required settings in mysite/settings.py; it is a normal Python module with module-level variables
representing Django settings.
SQLite is an embedded RDBMS that Django uses by default; it is convenient for Django developers as
it does not require any additional installation. If your database choice is different, you have to change
the following keys in the DATABASES 'default' item to match your database connection settings.
ENGINE: you can change the database backend by setting this key to one of
'django.db.backends.sqlite3', 'django.db.backends.mysql', 'django.db.backends.postgresql_psycopg2',
'django.db.backends.oracle', and so on.
NAME: the name of your database. If you use SQLite as your database, the database will be a
file on your computer, and the name should be the absolute path to that file, including the file
name.
If you are not choosing SQLite as your database, then additional settings/configurations such as
USER, PASSWORD, HOST, etc., must be added separately.
Django uses SQLite as a default database component; it stores data as a single file in the filesystem.
Suppose someone has a database server—PostgreSQL, MySQL, Oracle, MSSQL etc.—and wants to
use it rather than the default SQLite. In that case, they use the database's administration tools to
create a new database for their Django project. Either way, with an (empty) database in place, all
that remains is to tell Django how to use it. This is where the project's settings.py file
comes in.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',   # absolute path to the SQLite database file
    }
}
import datetime
from django.http import HttpResponse

def current_datetime(request):
    now = datetime.datetime.now()
    html = "<html><body>It is now %s.</body></html>" % now
    return HttpResponse(html)
"Templates" are simple text files. It can create any text-based formats like XML, CSV, HTML, etc. A
template contains the variables that get replaced with the values when evaluated and tags (% tag %)
that control the template's logic.
107. How are sessions maintained and used in the Django framework?
Django provides sessions that let you store and retrieve data on a per-site-visitor basis. Django
abstracts the process of sending and receiving cookies by placing a session ID cookie on the client
side and keeping all the related data on the server side.
So the data itself is not stored client-side, which is also significant from the perspective of security.
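A minimal sketch of reading and writing session data inside a Django view (the 'visits' key and view name are illustrative):
from django.http import HttpResponse

def visit_counter(request):
    # the session behaves like a dictionary stored server-side, per visitor
    count = request.session.get('visits', 0) + 1
    request.session['visits'] = count
    return HttpResponse("You have visited this page %d times." % count)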
Abstract Base Classes: This style is used when you only want the parent class to hold
information that you don't want to type out for each child model (see the sketch after this list).
Multi-table Inheritance: This style is used if you sub-class an existing model and need each
model to have its own database table.
Proxy models: You can use this style if you only want to modify the Python-level behaviour
of the model without changing the model's fields.
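A minimal sketch of the abstract base class style (the model names and fields are hypothetical):
from django.db import models

class CommonInfo(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        abstract = True   # no database table is created for CommonInfo itself

class Student(CommonInfo):
    # inherits the 'name' field; only the Student table is created
    roll_number = models.IntegerField()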
109. How To Save An Image Locally Using Python Whose URL Address I Already Know?
We will use the following code to save an image locally from a URL address:
import urllib.request
urllib.request.urlretrieve("URL", "local-filename.jpg")
110. Are there any tools for identifying bugs and performing static analysis in Python?
Yes, tools like PyChecker and Pylint are used as static analysis and linting tools, respectively.
PyChecker helps find bugs in Python source code files and raises alerts for code issues and their
complexity. Pylint checks whether a module meets coding standards, and it supports various plugins
to enable custom features that meet this requirement.
112. What are decorators, and how are they used in Python?
Decorators in Python are essentially functions that add functionality to existing functions in Python
without changing the structure of the functions themselves. They are represented by
'@decorator_name' in Python and are applied in a bottom-up fashion. For instance:
def lowercase_decorator(function):
    def wrapper():
        func = function()
        string_lowercase = func.lower()
        return string_lowercase
    return wrapper

def splitter_decorator(function):
    def wrapper():
        func = function()
        string_split = func.split()
        return string_split
    return wrapper

@splitter_decorator    # applied second
@lowercase_decorator   # applied first
def hello():
    return 'Hello World'

hello()   # output: ['hello', 'world']
The beauty of these decorators lies in the fact that besides adding functionalities to existing outputs
of the methods, they can even accept arguments for functions and further modify those arguments
before passing them to the function itself. The inner nested function, i.e. the 'wrapper' function, plays a
significant role here. It is implemented in order to enforce encapsulation and thus keep itself hidden
from the global scope.
def names_decorator(function):
    def wrapper(arg1, arg2):
        arg1 = arg1.capitalize()
        arg2 = arg2.capitalize()
        string_hello = function(arg1, arg2)
        return string_hello
    return wrapper

@names_decorator
def say_hello(name1, name2):
    return 'Hello ' + name1 + '! Hello ' + name2 + '!'

say_hello('sara', 'ansh')   # output: 'Hello Sara! Hello Ansh!'
113. Write a script to scrape data from IMDb top 250 movies page. It should only have fields of
movie name, year, and rating.
import requests
from bs4 import BeautifulSoup

url = 'http://www.imdb.com/chart/top'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

tr = soup.findChildren("tr")
tr = iter(tr)
next(tr)   # skip the table header row

for movie in tr:
    # the class names below depend on IMDb's page markup and may need updating
    title = movie.find('td', {'class': 'titleColumn'}).find('a').contents[0]
    year = movie.find('td', {'class': 'titleColumn'}).find('span', {'class': 'secondaryInfo'}).contents[0]
    rating = movie.find('td', {'class': 'ratingColumn imdbRating'}).find('strong').contents[0]
    row = title + ' - ' + year + ' ' + ' ' + rating
    print(row)
The script above will help scrape data from IMDb's top 250 list.
114. What is meant by functional programming? Does Python follow a functional programming
style? If yes, list a few methods to implement functionally oriented programming in Python.
Functional programming is a coding style where the primary logic source in programs comes from
functions.
Incorporating a functional style of programming means writing pure functions.
These are functions that cause little or no changes outside of their scope. These changes are referred
to as side effects. Pure functions are used to reduce these side effects, which makes the code easy to
follow, test, or debug.
Python does follow a functional programming style. Following are a few examples of functional
programming in Python.
filter(): filter() lets us keep only the values that satisfy a conditional logic; for example, keeping only
the values greater than 6 from a list of numbers can produce the output [7, 8].
map(): map() applies a given function to every element of an iterable and returns the mapped values.
reduce(): reduce() repeatedly combines a sequence pair-wise until it is reduced to a single value; for
example, reducing [1, 2, 3, 4, 5] with subtraction produces the output -13.
Runnable examples of all three are sketched below.
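A minimal runnable sketch of the three functions (the input lists are chosen to reproduce the outputs quoted above):
from functools import reduce

nums = [3, 4, 5, 6, 7, 8]
print(list(filter(lambda x: x > 6, nums)))            # [7, 8]
print(list(map(lambda x: x * 2, [1, 2, 3])))          # [2, 4, 6]
print(reduce(lambda x, y: x - y, [1, 2, 3, 4, 5]))    # -13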
NumPy comprises array data types and the most basic linear and vector operations such as indexing,
sorting, reshaping, essential element-wise functions, etc.
While all the numerical functionalities reside in SciPy, one of NumPy's essential goals is compatibility,
so NumPy tries to retain all the features supported by either of its predecessors.
Thus NumPy also contains some linear algebra functions, even though these more appropriately
belong in SciPy. In any case, SciPy contains fully-featured versions of the linear algebraic modules and
many other numerical algorithms.
If you are doing scientific computing with Python, you should probably install both NumPy and
SciPy. However, most new features land in SciPy rather than NumPy.
The list data structure defined in Python is highly efficient and capable of performing various
functions, but lists have severe limitations when it comes to vectorised operations such as
element-wise multiplication and addition.
Lists also have to store type information for every element, which results in overhead: type-dispatching
code gets executed every time an operation is performed on an element. NumPy arrays address all of
these limitations of Python lists.
Additionally, as the sizes of the arrays increase, NumPy becomes around 30x faster than Python lists.
Due to their homogeneous nature, NumPy arrays can be densely packed into memory, which also
frees memory up more quickly.
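A small sketch of the kind of vectorised, element-wise operation that plain lists do not support directly (the values are illustrative):
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
print(a * b)            # element-wise product: [10 40 90]
print([1, 2, 3] * 2)    # a plain list is simply repeated: [1, 2, 3, 1, 2, 3]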
117. How will you access the dataset of a publicly shared spreadsheet in CSV format stored in
Google Drive?
We can use the StringIO class from the io module to read the data from the
Google Drive link, and then we can use the pandas library on the obtained data source.
import requests
import pandas as pd
from io import StringIO

csv_link = "https://docs.google.com/spreadsheets/d/..."
data_source = StringIO(requests.get(csv_link).text)
dataframe = pd.read_csv(data_source)
print(dataframe.head())
Regression is a supervised machine learning technique that is used to find the correlation
between variables and to predict the dependent variable (y) based upon the independent
variable (x). It is mainly used for prediction, time series modelling, forecasting and determining the
cause-and-effect relationship between variables.
119. What is classification? How would you import Decision Tree Classifier using the "Sklearn"
module?
Classification refers to a predictive modelling process where a class label is predicted for a
given example of input data. It helps categorise the provided input into a label that other
observations with similar features have. For example, one can use it to classify an email as spam
or not, or to check whether users will churn based on their behaviour.
One commonly used classification algorithm is the decision tree; a sketch of importing it from the
"Sklearn" module is shown below.
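A minimal sketch using scikit-learn (the toy feature matrix and labels are made up purely for illustration):
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [1, 1]]            # toy feature matrix
y = [0, 1]                      # toy class labels
clf = DecisionTreeClassifier()
clf.fit(X, y)
print(clf.predict([[2, 2]]))    # [1]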
The pandas "groupby" function is a feature supported by pandas that are used to split and group an
object. Like the RBMS languages such as sql/mysql/oracle "group by" it is also used to group data by
classes, entities which can then be further used for aggregation. A dataframe can also be grouped by
one or more columns.
import pandas as pd
df = pd.DataFrame(
    {'Vehicle': ['Etios', 'Lamborghini', 'Apache200', 'Pulsar200'],
     'Type': ["car", "car", "motorcycle", "motorcycle"]})
print(df.groupby('Type').count())
Output
            Vehicle
Type
car               2
motorcycle        2
121. How do you split the data in train and test datasets in Python?
One can achieve this by using the scikit-learn machine learning library and importing the
"train_test_split" function in Python as shown below:
from sklearn.model_selection import train_test_split
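A minimal usage sketch continuing from that import (the feature and label lists are placeholders chosen for illustration):
X = [[1], [2], [3], [4], [5], [6], [7], [8]]   # toy features
y = [0, 0, 0, 0, 1, 1, 1, 1]                   # toy labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)     # 75% train, 25% test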
122. Is it possible to overfit a model if the data is split into train/test splits?
Yes. A common beginner's mistake is re-tuning a model, or training new models with different
parameters, even after seeing its performance on the test set.
123. What are the main principles of Object-Oriented Programming (OOP) in Python?
The main principles of OOP in Python are encapsulation, inheritance, polymorphism, and
abstraction. Each principle serves a different role in OOP:
Encapsulation: It helps in bundling the data and the methods that operate on that data into a single unit.
Inheritance: It lets a class derive the attributes and methods of an existing class.
Polymorphism: It lets the same interface (for example, a method name) behave differently for different classes.
Abstraction: It hides internal implementation details and exposes only the essential features.
We can define a class in Python using the class keyword. It is followed by the class name and a colon.
For example:
class MyClass:
    def __init__(self, value):
        self.value = value
125. What is the main purpose of the __init__ method in a Python class?
The __init__ method is a special method in Python. It is used for initializing a newly created object. It
is called automatically when an object is instantiated. It is also used to set initial values for the
object's attributes.
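A minimal sketch (the Point class is illustrative):
class Point:
    def __init__(self, x, y):
        # runs automatically when Point(...) is instantiated
        self.x = x
        self.y = y

p = Point(2, 3)
print(p.x, p.y)   # 2 3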
We can implement inheritance by creating a new class that derives from an existing class. For
example:
class ParentClass:
    pass

class ChildClass(ParentClass):
    pass
Method overriding is when a subclass provides a specific implementation for a method that is
already defined in its superclass. This allows the subclass to modify or extend the behavior of the
inherited method.
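A minimal sketch (the Animal and Dog classes are illustrative):
class Animal:
    def speak(self):
        return "Some sound"

class Dog(Animal):
    def speak(self):        # overrides Animal.speak
        return "Woof"

print(Animal().speak())     # Some sound
print(Dog().speak())        # Woof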
We can achieve encapsulation by using private and protected attributes and methods. Private
attributes are prefixed with double underscores (e.g., __private_var). On the other hand, protected
attributes are prefixed with a single underscore (e.g., _protected_var).
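A minimal sketch of both conventions (the Account class is illustrative):
class Account:
    def __init__(self, owner, balance):
        self._owner = owner        # protected by convention
        self.__balance = balance   # private: name-mangled to _Account__balance

    def get_balance(self):
        return self.__balance

acc = Account("Asha", 100)
print(acc.get_balance())   # 100
# print(acc.__balance)     # would raise AttributeError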
The __str__ method returns a human-readable string representation of an object, intended for end-
users. On the other hand, the __repr__ method returns an unambiguous string representation. It is
useful for debugging and development.
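A minimal sketch (the Book class is illustrative):
class Book:
    def __init__(self, title):
        self.title = title

    def __str__(self):
        return self.title                       # human-readable, e.g. print(book)

    def __repr__(self):
        return f"Book(title={self.title!r})"    # unambiguous, e.g. repr(book)

book = Book("Dune")
print(str(book))    # Dune
print(repr(book))   # Book(title='Dune')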
130. How can you create a class method and a static method?
Class methods are created using the @classmethod decorator and take cls as their first parameter.
Static methods, on the other hand, are created with the @staticmethod decorator and take neither
self nor cls. For example:
class MyClass:
    @classmethod
    def class_method(cls):
        pass

    @staticmethod
    def static_method():
        pass
We can implement multiple inheritance by specifying more than one parent class in the class
definition. For example:
class B1:
    pass

class B2:
    pass

class Derived(B1, B2):   # the derived class lists both parents
    pass
NumPy is a library for numerical computing in Python. It provides support for large, multi-
dimensional arrays and matrices. It also provides mathematical functions that operate on these
arrays efficiently.
We can create a NumPy array from a Python list using numpy.array(). For example:
import numpy as np
arr = np.array([1, 2, 3])   # array([1, 2, 3])
np.array() creates an array from existing data. On the other hand, np.zeros() creates an array filled
with zeros. For example:
import numpy as np
a, z = np.array([1, 2]), np.zeros(2)   # array([1, 2]) and array([0., 0.])
We can perform element-wise operations in NumPy using standard arithmetic operators directly on
arrays. For example:
import numpy as np
arr = np.array([1, 2, 3])
ans = arr + 5   # array([6, 7, 8])
We can calculate the mean of a NumPy array using the np.mean() function. For example:
import numpy as np
arr = np.array([1, 2, 3, 4])
meanVal = np.mean(arr)   # 2.5
We can reshape a NumPy array using the reshape() method. For example:
import numpy as np
arr = np.arange(6).reshape(2, 3)   # 2 rows and 3 columns
np.vstack() stacks arrays vertically (row-wise). On the other hand, np.hstack() stacks arrays
horizontally (column-wise). For example:
import numpy as np
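Continuing from that import, a minimal sketch of the difference (array values are chosen for illustration):
a = np.array([1, 2])
b = np.array([3, 4])
print(np.vstack((a, b)))   # [[1 2]
                           #  [3 4]]  -> shape (2, 2)
print(np.hstack((a, b)))   # [1 2 3 4] -> shape (4,)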
We can perform matrix multiplication using the np.dot() function or the @ operator. For example:
import numpy as np
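Continuing from that import, a minimal sketch (the matrices are chosen for illustration):
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(np.dot(A, B))   # [[19 22]
                      #  [43 50]]
print(A @ B)          # same result with the @ operator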
np.copy() creates a new array with a copy of the data. On the other hand, the view() method of an
array creates a new view of the same data without copying, so changes made through the view will
affect the original array.
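A minimal sketch of the difference (the array values are illustrative):
import numpy as np

arr = np.array([1, 2, 3])
c = np.copy(arr)    # independent copy of the data
v = arr.view()      # shares the same underlying data buffer
v[0] = 99
print(arr)          # [99  2  3]  -- changed through the view
print(c)            # [1 2 3]     -- the copy is unaffected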
Python
def fibonacci(n):
    a, b = 0, 1
    while a < n:
        print(a, end='')
        a, b = b, a + b

fibonacci(10)
Output
0112358
Python
def add(x, y):
    return x + y

def subtract(x, y):
    return x - y

def multiply(x, y):
    return x * y

def divide(x, y):
    if y != 0:
        return x / y
    else:
        return "Cannot divide by zero"   # assumed message for the zero-divisor branch

print(add(10, 5))
print(subtract(10, 5))
print(multiply(10, 5))
print(divide(10, 0))
Output
15
5
50
Cannot divide by zero
Python MCQ
print(2 ** 3 ** 2)
A) 64
B) 512
C) 128
D) 81
Answer: B) 512
Which of the following data types is mutable in Python?
A) Tuple
B) String
C) List
D) Integer
Answer: C) List
Which symbol is used to write a single-line comment in Python?
A) /*
B) //
C) #
D) <!--
Answer: C) #
[1, 2] + [3, 4]
A) [1, 2, 3, 4]
C) [1, 2] + [3, 4]
D) [4, 3, 2, 1]
Answer: A) [1, 2, 3, 4]
len("Python")
A) 5
B) 7
C) 6
D) 8
Answer: C) 6
Which list method adds an element to the end of a list?
A) insert()
B) append()
C) extend()
D) add()
Answer: B) append()
Which keyword is used to define a function in Python?
A) func
B) define
C) def
D) function
Answer: C) def
list(range(5))
A) [1, 2, 3, 4, 5]
B) [0, 1, 2, 3, 4]
C) [1, 2, 3, 4]
D) [0, 1, 2, 3, 4, 5]
Answer: B) [0, 1, 2, 3, 4]
Which built-in function is used to find the data type of a variable?
A) isinstance()
B) type()
C) check_type()
D) data_type()
Answer: B) type()
Python has seven types of operators: arithmetic operators, assignment operators,
comparison operators, logical operators, identity operators, membership operators, and bitwise
operators. These operators are used with variables and values to build expressions and perform
operations.
A variable's data type specifies the kind of data that will be kept in the variable. Python's main
built-in numeric types are int, float, and complex (Python 2 also had a separate long type, which
was merged into int in Python 3).
To prepare for a Python interview, review key concepts such as data structures, algorithms, and
object-oriented programming. Practice coding problems, understand common libraries, and be ready
to discuss real-world applications and past projects.
A list in Python is a mutable, ordered collection of items, which can be of different types. Lists are
defined using square brackets and support various operations like indexing, slicing, and appending.
In Python, a Boolean is a data type with two possible values: True and False. It is often used for
conditional statements and logical operations to represent truth values and control flow in programs.