Harrison A. Python for Beginners. Learn Python From Scratch... Part 2. 2024
PYTHON FOR BEGINNERS
Mastering the Basics of
Python
Part 2 (2/3)
Published by
NewYork Courses
Fifth Avenue, 5500
New York, NY 10001.
www.newyorktec.com
Copyright © 2024 by NewYork Courses, New York, NY
Published by NewYork Courses, New York, NY
Simultaneously published in the USA.
No part of this publication may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means, electronic, mechanical, photocopying,
recording, scanning, or otherwise, except as permitted under Sections 107, 108,
and 110 of the United States Copyright Act (17 U.S.C.), without prior written
permission from the publisher. Requests for permission from the publisher
should be sent to the Permissions Department, NewYork Courses, Fifth Avenue,
5500, New York, NY 10001, or online at [email protected].
Trademarks:
NewYork Courses, the NewYork Courses logo, and other
related styles are trademarks of NewYork Courses Inc.
and/or its affiliates in the United States and other countries
and may not be used without written permission.
All other trademarks are the property of their respective
owners. NewYork Courses is not affiliated with any product
or vendor mentioned in this book.
my_list = [1, 2, 3, 4, 5]
Modifying Lists:
Lists are mutable, meaning you can change their contents.
You can assign new values to specific indexes:

my_list[0] = 10
print(my_list)  # Output: [10, 2, 3, 4, 5]
You can also add new elements using the append() method:

my_list.append(6)
print(my_list)  # Output: [10, 2, 3, 4, 5, 6]
To remove an item from the list, use the remove() method:

my_list.remove(3)
print(my_list)  # Output: [10, 2, 4, 5, 6]
my_dict["age"] = 26  # Modifying the value associated with the 'age' key
my_dict["country"] = "USA"  # Adding a new key-value pair
print(my_dict)  # Output: {'name': 'Alice', 'age': 26, 'city': 'New York', 'country': 'USA'}
To remove a key-value pair, you can use the pop() method:

my_dict.pop("city")
print(my_dict)  # Output: {'name': 'Alice', 'age': 26, 'country': 'USA'}
# List of usernames
usernames = ['alice', 'bob', 'charlie']

# Dictionary for credentials: username as the key and password as the value
user_credentials = {
    'alice': 'password123',
    'bob': 'abc123',
    'charlie': 'qwerty'
}

# Tuples for other user data: username -> (email, role)
user_data = {
    'alice': ('[email protected]', 'admin'),
    'bob': ('[email protected]', 'user'),
    'charlie': ('[email protected]', 'user')
}

# Adding a new user
usernames.append('david')
user_credentials['david'] = 'password'
user_data['david'] = ('[email protected]', 'user')

# Retrieving user information
user = 'alice'
print(f"User {user} Email: {user_data[user][0]}, Role: {user_data[user][1]}")
list.append(element)
Output:

[1, 2, 3, 4]

fruits.append("orange")
print(fruits)
Output:

(42, 'hello', 3.14, True)
list.remove(element)

fruits.remove("banana")
print(fruits)
Output:

If you try to remove an element that is not in the list, Python raises:

ValueError: list.remove(x): x not in list
try:
    fruits.remove("orange")
except ValueError:
    print("Element not found!")
Output:

Element not found!
Output:

['apple', 'cherry', 'date']
list.index(element)
- list : The list from which you want to find the element.
- element : The item whose index you want to find.
Let’s look at some examples of using index .
Example 1: Finding the Index of an Element

fruits = ["apple", "banana", "cherry"]
position = fruits.index("banana")
print(position)
Output:

1
Output:

ValueError: 'orange' is not in list
try:
    position = fruits.index("orange")
except ValueError:
    print("Element not found!")
Output:

Element not found!
Example 3: Finding the Index of Multiple Occurrences

Output:

1
students = ["Alice", "Bob", "Charlie"]
students.append("David")
print(students)
Output:

['Alice', 'Bob', 'Charlie', 'David']
students.remove("Bob")
print(students)

Output:

['Alice', 'Charlie', 'David']
In this case, "Bob" was removed from the list, and the
remaining students are shown. If we attempt to remove a
student who isn’t in the list, the program will throw an error:
students.remove("Eve")  # This will cause a ValueError
1 if "Eve" tn students:
2 students.remove("Eve”)
3 else:
4 print("Student not found.”)
Output:

Student not found.
try:
    index_of_eve = students.index("Eve")
except ValueError:
    print("Student not found.")
Output:

Student not found.
numbers = [4, 2, 9, 1, 5, 6]
numbers.sort()
print(numbers)
Output:

[1, 2, 4, 5, 6, 9]
In this example, the sort() method sorts the list of numbers
in ascending order. Notice that the list numbers has been
modified in place.
Example 2: Sorting Strings
Output:

['apple', 'banana', 'cherry', 'date']
numbers = [4, 2, 9, 1, 5, 6]
numbers.sort(reverse=True)
print(numbers)
Output:

[9, 6, 5, 4, 2, 1]
words = ["banana", "apple", "cherry", "date"]
words.sort(key=len)
print(words)
Output:

['date', 'apple', 'banana', 'cherry']
Output:

[1, 2, 4, 5, 6, 9]
[4, 2, 9, 1, 5, 6]
Just like with sort() , you can use the key parameter in
sorted() to sort the list by a custom function, such as sorting
by the length of the words.
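For instance, a minimal sketch reusing the words list from the previous example:

words = ["banana", "apple", "cherry", "date"]

# sorted() returns a new list ordered by word length; the original is untouched
print(sorted(words, key=len))  # ['date', 'apple', 'banana', 'cherry']
print(words)                   # ['banana', 'apple', 'cherry', 'date']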
2. Comparing sort() and sorted()
Although both sort() and sorted() are used for sorting, there
are key differences that can influence your choice
depending on the specific situation:
- Modification of the Original List: sort() sorts the list in
place, modifying the original list, while sorted() returns a
new sorted list and does not affect the original.
- Efficiency: Both sort() and sorted() have the same time
complexity, which is O(n log n) for most cases. However, if
you don’t need to preserve the original list, sort() may be
slightly more efficient because it modifies the list directly,
whereas sorted() must create a copy.
- Use Cases:
- If you need to preserve the original list and work with a
sorted version, use sorted() .
- If you want to modify the list in place and do not need to
keep the original order, use sort() .
In general, you’ll use sort() when you’re working with the
same list and you don’t need to preserve its original state.
On the other hand, if you need a sorted list without
changing the original one, sorted() is the better choice.
3. Searching in Lists
Sorting is often used in conjunction with searching,
especially when dealing with large datasets. There are
different ways to search for elements in a list in Python, but
the most basic and common method is linear search.
However, once a list is sorted, you can take advantage of
more efficient searching techniques like binary search.
3.1 Linear Search
In a linear search, you check each element in the list one by
one to see if it matches the value you’re looking for. This
method has a time complexity of O(n), meaning that in the
worst case, you may need to check every element in the
list.
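As a concrete reference, here is a minimal sketch of a linear search function; it matches the linear_search call used in the timing comparison later in this section:

def linear_search(arr, target):
    # Walk through the list one element at a time
    for index, value in enumerate(arr):
        if value == target:
            return index  # position of the first match
    return -1  # target is not in the list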
3.2 Binary Search

In a binary search on a sorted list, you repeatedly compare the target with the
middle element and discard the half of the range that cannot contain it:

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # target is not in the list
import time

# Create a large sorted list
large_list = list(range(1, 1000001))

# Linear Search
start_time = time.time()
linear_search(large_list, 999999)
end_time = time.time()
print(f"Linear Search took {end_time - start_time} seconds.")

# Binary Search
start_time = time.time()
binary_search(large_list, 999999)
end_time = time.time()
print(f"Binary Search took {end_time - start_time} seconds.")
In this example, you will see that the binary search takes
significantly less time than the linear search, because binary
search halves the remaining search range with each step. In
contrast, the linear search must go through each item one by
one, which takes much longer.
In conclusion, the choice between linear search and binary
search largely depends on the size and order of the data
you are working with. For unsorted or small datasets, linear
search is simple and effective. However, for large, sorted
datasets, binary search offers substantial performance
improvements and is the preferred method.
In this chapter, we explored key concepts related to sorting
and searching in Python lists, focusing on two essential
operations that are frequently used in many programming
tasks: sorting and searching.
1. Sorting Lists: Python offers two primary ways to sort lists:
the sort() method and the sorted() function. The sort()
method modifies the list in place, meaning the original list is
changed, while sorted() returns a new list with the elements
sorted, leaving the original list unchanged. Both methods
use the Timsort algorithm, which is a hybrid sorting
algorithm combining merge sort and insertion sort. This
provides an efficient sorting process with an average time
complexity of O(n log n). When choosing between these two
options, consider whether you need to preserve the original
list or not. If in-place modification is acceptable, sort() can
be more memory-efficient. Otherwise, if you need to retain
the original list, use sorted() .
2. Searching for Elements: For searching, Python provides the
built-in in operator, which checks whether an element exists in a
list. While this operator is simple and convenient, it has a
time complexity of O(n), meaning it checks each element
one by one. If the list is sorted, more efficient
searching algorithms such as binary search can be used.
The bisect module in Python allows for fast binary search
operations in sorted lists (a short sketch follows this summary).
Binary search has a time complexity of O(log n), which is much
faster than linear search for large lists. It's essential to decide
which search method to use based on the size of the data and
whether the list is sorted or not.
3. Best Practices: When selecting sorting or searching
techniques, consider the nature of the data and the
performance requirements. For smaller datasets, the
difference between sorting methods might not be
noticeable, but for larger datasets, choosing the most
efficient algorithm becomes crucial. If you need a stable sort
(i.e., equal elements retain their original order), both sort()
and sorted() provide stability. For searching, always check if
your list is sorted, as using binary search on unsorted lists
can lead to errors or inefficient performance.
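As a brief illustration of the bisect module mentioned in point 2 above, a minimal sketch (the list and target value are arbitrary examples):

import bisect

numbers = [1, 2, 4, 5, 6, 9]  # must already be sorted

# bisect_left returns the index where the target would be inserted
index = bisect.bisect_left(numbers, 5)
if index < len(numbers) and numbers[index] == 5:
    print(f"Found 5 at index {index}")  # Found 5 at index 3
else:
    print("5 is not in the list")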
In conclusion, understanding the differences between
sorting and searching methods, as well as their
complexities, is fundamental for writing efficient Python
programs. Always choose the method that best fits the
problem at hand, keeping in mind both time complexity and
memory usage.
5.2.3 - List Comprehension
In Python, lists are a fundamental data structure that allows
us to store and manipulate collections of items. As you
progress in your journey of learning Python, you’ll encounter
different techniques to work with lists effectively. One of the
most powerful and efficient ways to handle lists is through
list comprehension. This technique allows you to create and
manipulate lists in a compact and readable manner. In this
chapter, we will explore list comprehension, explaining its
syntax, providing examples, and showing you how it can
significantly improve the efficiency and clarity of your code.
What is List Comprehension?
List comprehension in Python is a concise way of creating
lists by applying an expression to each element in an
existing iterable (such as a list, range, or string) or filtering
elements based on certain conditions. This technique can
often replace the need for writing multiple lines of code to
iterate over a sequence, apply a transformation, and build a
new list. List comprehension simplifies the process and
makes the code more readable and expressive.
Instead of using loops to append elements one by one, list
comprehension provides a one-liner to perform the same
task in a much more efficient way. In Python, list
comprehension is widely regarded as one of the most
powerful features for manipulating lists because it combines
the clarity of the Python language with the performance of
an optimized internal implementation.
Syntax of List Comprehension
The syntax of list comprehension is fairly simple and
consists of three main parts:
1. Expression: The operation that will be applied to each
item in the iterable.
2. Iteration: The loop that iterates over each element of the
iterable.
3. Optional Condition: A condition that filters elements from
the iterable, only including the ones that satisfy the
condition.
A typical list comprehension follows this structure:
[expression for item in iterable if condition]

For example, building a list of squares with a regular loop:

squares = []
for i in range(10):
    squares.append(i ** 2)

The same result as a one-line list comprehension:

squares = [i ** 2 for i in range(10)]
In this case:
- The expression is i ** 2, which squares each number.
- The iteration is for i in range(10) , which loops over the
numbers from 0 to 9.
- There is no condition in this example, so all numbers in the
range are included.
The result of the list comprehension is a list of the squares
of the numbers from 0 to 9: '[0, 1, 4, 9, 16, 25, 36, 49, 64,
81]'.
Example with Condition: Filtering Even Numbers
List comprehensions can also include a condition to filter the
results. Let’s say you only want to include the even numbers
from the list of numbers from 0 to 9. Using list
comprehension, we can write:
even_numbers = [i for i in range(10) if i % 2 == 0]
Here:
- The expression is simply i , as we want to include the
number itself.
- The iteration is for i in range(10) , which iterates over
numbers from 0 to 9.
- The condition is if i % 2 == 0 , which filters the numbers
to include only those that are divisible by 2 (i.e., even
numbers).
The resulting list will be: '[0, 2, 4, 6, 8]'.
Example with Transformation: Converting Strings to Lists of
Characters
Another common use case for list comprehension is
transforming elements. For example, let’s say we want to
convert each string in a list of words into a list of characters.
We can do this using list comprehension:
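A minimal sketch of that comprehension (the result variable name here is just illustrative):

words = ["hello", "world"]
char_lists = [list(word) for word in words]
print(char_lists)  # [['h', 'e', 'l', 'l', 'o'], ['w', 'o', 'r', 'l', 'd']]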
In this case:
- The expression is list(word) , which converts each word
into a list of characters.
- The iteration is for word in words , iterating over each
word in the list '["hello", "world"]'.
- There is no condition in this example.
The result will be a list of lists: '[['h', 'e', 'l', 'l', 'o'], ['w', 'o',
'r', 'l', 'd']]'.
More Complex Example: Applying Multiple Conditions
List comprehension can also handle multiple conditions. For
example, let’s say you want to create a list of squares of all
even numbers greater than 5 from a given list. You could
write:
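A minimal sketch of that comprehension; the numbers list below is an assumed sample chosen to produce the result discussed next:

numbers = [2, 4, 5, 6, 7, 8, 9, 10]  # assumed sample data
squares = [x ** 2 for x in numbers if x % 2 == 0 and x > 5]
print(squares)  # [36, 64, 100]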
Here:
- The expression is x ** 2, which squares each valid number.
- The iteration is for x in numbers , which iterates over the
list of numbers.
- The condition is if x % 2 == 0 and x > 5 , ensuring that
only even numbers greater than 5 are included.
The result will be: '[36, 64, 100]'.
Advantages of List Comprehension
1. Conciseness: List comprehension significantly reduces the
amount of code you need to write. Instead of using multiple
lines for loops and appending elements to a list, you can
often replace that with a single line of code, making your
program shorter and more concise.
2. Improved Readability: Although it may seem
counterintuitive at first, list comprehensions can often make
code easier to read. When used properly, they condense
logic into a simple, declarative expression that tells you
exactly what is being done. For example, filtering even
numbers or squaring elements is immediately clear from the
syntax.
3. Performance: List comprehensions tend to be faster than
using traditional for loops with append() because Python's
internal implementation of list comprehensions is optimized.
The asymptotic complexity is the same as that of an explicit
loop, but the per-iteration overhead is lower, so comprehensions
usually run faster in practice.
4. Memory Efficiency: Since list comprehensions are
designed to work efficiently in memory, they can help you
manage large datasets more effectively. By using generator
expressions (a closely related variant of the same syntax), you
can further optimize memory usage when you don't need to
store the entire list in memory at once; a short sketch follows this list.
5. Flexibility: List comprehensions support the inclusion of
conditions and complex transformations, making them
highly versatile. Whether you need to filter data, apply
transformations, or even flatten nested lists, list
comprehension offers a flexible way to handle various tasks.
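To illustrate the memory point from item 4 above, a generator expression uses parentheses instead of square brackets and yields items lazily instead of building the full list; a minimal sketch:

numbers = range(1, 1_000_001)

squares_list = [n * n for n in numbers]  # builds the entire list in memory
squares_gen = (n * n for n in numbers)   # produces one value at a time

print(sum(squares_gen))  # consumes the generator without storing every square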
In conclusion, list comprehension is a powerful tool in
Python, making your code more readable, concise, and
efficient. It allows for expressive one-liners to generate or
transform lists, and can be used in a variety of scenarios—
from simple transformations to complex filtering. By
mastering list comprehension, you'll be able to write cleaner
and more Pythonic code.
List comprehensions in Python provide a concise and
readable way to create lists. However, as the feature
becomes more advanced, it’s possible to construct lists
using more intricate patterns. This section will explore
advanced use cases of list comprehensions, including
nested expressions, conditional logic, and multiple loops,
along with best practices and scenarios where traditional
loops may be a better choice for readability.
1. List Comprehension with Multiple Conditions
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
result = ['even' if x % 2 == 0 else 'odd' for x in numbers]
print(result)  # Output: ['odd', 'even', 'odd', 'even', 'odd', 'even', 'odd', 'even', 'odd', 'even']
empty_tuple = ()
print(empty_tuple)  # Output: ()

empty_tuple_2 = tuple()
print(empty_tuple_2)  # Output: ()

single_element_tuple = (5,)
print(single_element_tuple)  # Output: (5,)

single_element_tuple_2 = 5,
print(single_element_tuple_2)  # Output: (5,)

multi_element_tuple = (1, 2, 3)
print(multi_element_tuple)  # Output: (1, 2, 3)

multi_element_tuple_2 = 1, 2, 3
print(multi_element_tuple_2)  # Output: (1, 2, 3)

from_list = tuple([1, 2, 3])
print(from_list)  # Output: (1, 2, 3)

from_string = tuple("abc")
print(from_string)  # Output: ('a', 'b', 'c')
my_tuple = (10, 20, 30, 40)

print(my_tuple[0])   # Output: 10 (First element)
print(my_tuple[-1])  # Output: 40 (Last element)

matrix = (
    (1, 2, 3),
    (4, 5, 6),
    (7, 8, 9)
)
print(matrix[1][2])  # Output: 6 (Element in row 2, column 3)
(a, b) = nested_tuple[0]
print(a)  # Output: 1
print(b)  # Output: 2

simple_tuple = (1, 2, 3, 4)
print(simple_tuple[1])  # Output: 2
print(simple_tuple[3])  # Output: 4
t = (10, 20)
x = t[0]
y = t[1]
print(x, y)  # Output: 10 20
t = (1, 2, 3, 4, 5)
x, *y, z = t
print(x)  # Output: 1
print(y)  # Output: [2, 3, 4]
print(z)  # Output: 5
t = (1, 2, 3, 4, 5)
head, *tail = t
print(head)  # Output: 1
print(tail)  # Output: [2, 3, 4, 5]
def add_numbers(*args):
    return sum(args)

numbers = (1, 2, 3, 4)
result = add_numbers(*numbers)
print(result)  # Output: 10
def min_max(values):
    return min(values), max(values)

numbers = [10, 20, 30, 40]
minimum, maximum = min_max(numbers)

print(f"Minimum: {minimum}, Maximum: {maximum}")
employees = [("Alice", "Developer"), ("Bob", "Designer"), ("Charlie", "Manager")]
for name, role in employees:
    print(f"{name} works as a {role}.")
records = {
    "Alice": (30, "Engineer"),
    "Bob": (25, "Designer"),
    "Charlie": (35, "Manager"),
}

for name, (age, role) in records.items():
    print(f"{name} is a {role} aged {age}.")
data = (1, 2, 3)
a, _, c = data  # the middle element is skipped
print(a, c)  # Output: 1 3
data = (1, 2, 3, 4, 5)
a, *middle, b = data
print(a)       # Output: 1
print(middle)  # Output: [2, 3, 4]
print(b)       # Output: 5
rows = [("2025-01-01", "Alice", 1000), ("2025-01-02", "Bob", 1500)]
for date, name, amount in rows:
    print(f"On {date}, {name} earned ${amount}.")
data = [1, 2, 3]
a, b, c = data  # Works, as lists support unpacking
value = 10
# a, b = value  # Raises TypeError
my_dict = {}
another_dict = dict()
user_info['country'] = 'USA'
print(user_info)
This adds the key 'country' with the value 'USA' to the
dictionary.
When updating a dictionary, it’s important to understand
that keys must be unique. If you add a key that already
exists, its previous value will be overwritten. This behavior
allows you to efficiently replace old data with new
information.
Dictionaries are a foundational tool in Python programming,
and understanding how to create, modify, and manage
them is essential for working with structured data. By
practicing with these basic operations, you’ll be well-
equipped to use dictionaries in more complex scenarios.
Dictionaries in Python are powerful tools for managing data
as key-value pairs. They allow efficient data retrieval and
are widely used for various applications, from simple data
storage to more complex use cases. This chapter will delve
into how to initialize, update, and modify dictionaries, as
well as how to remove elements and avoid common pitfalls
when working with them.
1. Updating Dictionaries with update()
The update() method is a convenient way to add or modify
multiple key-value pairs in a dictionary simultaneously. It
takes another dictionary or an iterable of key-value pairs
(e.g., a list of tuples) as an argument and merges it into the
original dictionary. Existing keys will have their values
updated, and new keys will be added. Here’s how it works:
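A minimal sketch (the dictionary contents here are just an assumed example):

profile = {'name': 'Alice', 'age': 25}
profile.update({'age': 26, 'city': 'New York'})  # 'age' is overwritten, 'city' is added
print(profile)  # Output: {'name': 'Alice', 'age': 26, 'city': 'New York'}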
product = {'id': 101, 'name': 'Laptop', 'price': 1500}
price = product.pop('price')
print(price)    # Output: 1500
print(product)  # Output: {'id': 101, 'name': 'Laptop'}
With a default value:
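A minimal sketch, continuing with the product dictionary above (the 'discount' key is a made-up example of a key that may be absent):

discount = product.pop('discount', 0)  # 'discount' does not exist, so the default 0 is returned
print(discount)  # Output: 0
print(product)   # Output: {'id': 101, 'name': 'Laptop'}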
user = {'name': 'Bob', 'age': 30}
if 'email' in user:
    print(user['email'])
else:
    print('Email not available')
users = {
    'user1': {'name': 'Alice', 'age': 25, 'email': '[email protected]'},
    'user2': {'name': 'Bob', 'age': 30, 'email': '[email protected]'}
}

# Adding a new user
users['user3'] = {'name': 'Charlie', 'age': 22, 'email': '[email protected]'}

# Updating an existing user's information
users['user2']['age'] = 31
print(users)
- Product Inventory:
inventory = {
    'apple': 50,
    'banana': 30,
    'cherry': 20
}

# Adding new stock
inventory.update({'orange': 40, 'grape': 25})

# Reducing stock
inventory['banana'] -= 5

# Removing a sold-out product
inventory.pop('cherry', None)

print(inventory)
tasks = {
    1: {'task': 'Write a blog post', 'status': 'Pending'},
    2: {'task': 'Prepare presentation', 'status': 'In Progress'}
}

# Marking a task as completed
tasks[1]['status'] = 'Completed'

# Adding a new task
tasks[3] = {'task': 'Read a book', 'status': 'Pending'}

# Removing a task
tasks.pop(2, None)

print(tasks)
The keys() method is not just efficient but also makes code
more readable by explicitly signaling your intention to work
with the dictionary’s keys.
The values() Method
The values() method is used to access all the values stored
in a dictionary. Like keys() , it returns a dynamic view object,
but this time, the object represents the dictionary’s values
rather than its keys. This method is particularly useful when
you are only interested in the stored data and not the
identifiers (keys) associated with it.
How it Works
Calling values() on a dictionary yields a dict_values object.
This object is iterable and reflects the current state of the
dictionary, ensuring that any updates to the dictionary are
also reflected in the view.

When and Why to Use values()
The values() method is useful in scenarios where you:
1. Need to perform operations on the values, such as summing numeric values or finding a maximum or minimum value.
2. Are only interested in the data stored in the dictionary, not the keys.
3. Want to quickly iterate through all the stored values.

Examples of Using values()

inventory = {'apples': 10, 'bananas': 25, 'cherries': 30}

# Iterating over the values
for value in inventory.values():
    print(value)
# Output:
# 10
# 25
# 30

# Summing all values
total_items = sum(inventory.values())
print(f"Total items in inventory: {total_items}")  # Output: Total items in inventory: 65

# Filtering with items()
high_stock = [fruit for fruit, stock in inventory.items() if stock > 20]
print(high_stock)  # Output: ['bananas', 'cherries']
data = {'name': 'Alice', 'age': 25, 'city': 'New York'}
print(data.keys())  # Output: dict_keys(['name', 'age', 'city'])
for key in data.keys():
    print(key)
The primary use case for values() is when you are interested
in processing or analyzing the values stored in a dictionary.
For instance:
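One minimal illustration, reusing the data dictionary from the keys() example above:

all_values = list(data.values())
print(all_values)  # ['Alice', 25, 'New York']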
This method is ideal when you need to access both keys and
values simultaneously. For example:
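A minimal sketch with the same data dictionary:

for key, value in data.items():
    print(f"{key}: {value}")
# name: Alice
# age: 25
# city: New York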
- Use values() when your focus is solely on the values:

if 25 in data.values():
    print("The value 25 exists in the dictionary.")
- Use items() for tasks where both keys and values are
needed:
filtered_data = {key: value for key, value in data.items() if isinstance(value, int)}
print(filtered_data)
unique_values = set(data.values())
print(unique_values)
You can also add new keys and values to inner dictionaries,
expanding their content:
employees["1001"]["location"] = "Remote"
print(employees["1001"]["location"])  # Output: Remote
del employees["1001"]["location"]
print(employees["1001"])  # Output: {'name': 'Alice', 'department': 'IT', 'role': 'Senior Developer'}
employees["1004"] = {
    "name": "Diana",
    "department": "Marketing",
    "role": "Coordinator"
}
print(employees["1004"])
del employees["1004"]
print(employees.get("1004", "Employee not found"))  # Output: Employee not found
for emp_id, details in employees.items():
    print(f"Employee ID: {emp_id}")
    for key, value in details.items():
        print(f"  {key}: {value}")
company = {
    "IT": {
        "Alice": {"role": "Developer", "salary": 70000},
        "David": {"role": "SysAdmin", "salary": 65000}
    },
    "HR": {
        "Bob": {"role": "Manager", "salary": 80000}
    }
}
To print all roles and salaries:
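A sketch of the nested loop that would produce the output shown below:

for department, employees in company.items():
    print(f"Department: {department}")
    for name, details in employees.items():
        print(f"{name}: {details['role']} - ${details['salary']}")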
Output:

Department: IT
Alice: Developer - $70000
David: SysAdmin - $65000
Department: HR
Bob: Manager - $80000
high_earners = {
    department: {name: details for name, details in employees.items() if details["salary"] > 70000}
    for department, employees in company.items()
}
print(high_earners)
nested_dict = {
    "user1": {"name": "Alice", "age": 30},
    "user2": {"name": "Bob", "age": 25}
}

# Check if a top-level key exists
if "user1" in nested_dict:
    print("user1 exists in the dictionary")

# Check if a nested key exists
if "name" in nested_dict["user1"]:
    print("name exists in user1's dictionary")

# Use get() for safer access
user2_data = nested_dict.get("user2")
if user2_data and "age" in user2_data:
    print(f"user2's age: {user2_data['age']}")
nested_dict1 = {
    "user1": {"name": "Alice", "age": 30}
}

nested_dict2 = {
    "user2": {"name": "Bob", "age": 25}
}

# Merging using | (Python 3.9+)
merged_dict = nested_dict1 | nested_dict2

# Merging using update() (older versions)
nested_dict1.update(nested_dict2)

print(merged_dict)
nested_dict = {
    "user1": {"name": "Alice", "age": 30},
    "user2": {"name": "Bob", "age": 25}
}

# Safe access using get()
user3_age = nested_dict.get("user3", {}).get("age", "Unknown")
print(f"user3's age: {user3_age}")  # Output: user3's age: Unknown

# Avoid KeyError by checking key existence
if "user1" in nested_dict and "name" in nested_dict["user1"]:
    print(nested_dict["user1"]["name"])
students = {
    101: {"name": "Alice", "age": 20, "grade": "A"},
    102: {"name": "Bob", "age": 22, "grade": "B"}
}
This structure allows you to quickly retrieve all the data for
a specific student using their ID. Comparing this to a list, it
would be much less efficient to search for a student's
information by iterating through a list of all students.
In summary, lists, tuples, and dictionaries each have their
own strengths and ideal use cases. Lists are versatile,
ordered, and mutable, making them suitable for scenarios
where the data is dynamic and operations like sorting or
filtering are required. Tuples, being immutable and ordered,
are better for fixed collections of data where integrity is
important, or when performance and memory efficiency are
priorities. Dictionaries, with their key-value structure and
fast lookups, excel at managing related data or scenarios
where quick access to specific elements is needed.
Understanding these differences and knowing when to use
each structure is an essential skill for any Python
programmer. This knowledge not only helps in writing more
efficient code but also ensures that your programs are
easier to read and maintain.
Examples
1. List Example
Lists are best when you need an ordered collection of items
that might change.
# Create a shopping list
shopping_list = ["apples", "bananas", "carrots"]

# Add an item
shopping_list.append("oranges")  # ["apples", "bananas", "carrots", "oranges"]

# Remove an item
shopping_list.remove("bananas")  # ["apples", "carrots", "oranges"]

# Update an item
shopping_list[0] = "grapes"  # ["grapes", "carrots", "oranges"]

print(shopping_list)
inventory = ["apple", "banana", "orange"]

inventory = [("apple", 10), ("banana", 5), ("orange", 8)]
Here, each item is a tuple, where the first element
represents the product name, and the second element
represents its quantity in stock. This structure helps in more
realistic inventory management, where both the product
name and the quantity are critical to track.
2. Creating Lists for Inventory Management
In a simple inventory system, we can represent the stock of
products as a list. Initially, you might want to create an
empty inventory list to start adding items. Here’s an
example of how to create an empty list:
inventory = []

You can also start with some products already in stock:

inventory = [("apple", 10), ("banana", 5), ("orange", 8)]
This list contains three tuples. Each tuple holds the name of
the product as a string and its corresponding quantity as an
integer.
3. Adding Items to the Inventory
As inventory management systems evolve, adding new
products to the stock is a common operation. Python lists
provide several ways to add items, but the two most
common methods are '.append()' and '.extend()'. Let’s
explore both of these.
3.1. Using '.append()'
The '.append()' method is used to add a single item to the
end of a list. This is ideal when you want to add a new
product to your inventory. Let’s demonstrate how this works.
Suppose you want to add a new product to your inventory,
like "grapes", with a quantity of 12. You can use the
'.append()' method like this:
inventory.append(("grape", 12))
After this operation, the inventory list will now look like this:
[("apple", 10), ("banana", 5), ("orange", 8), ("grape", 12)]
The new item "grape" has been added to the end of the list
with its quantity of 12.
3.2. Using '.extend()'
The '.extend()' method, on the other hand, is used to add
multiple items to the list at once. This is useful when you
have a batch of products to add to the inventory.
Suppose you receive a shipment of "pears" and "mangoes".
Instead of appending each item individually, you can create
a list of tuples for the new items and use '.extend()' to add
them all at once:
new_items = [("pear", 7), ("mango", 15)]
inventory.extend(new_items)
After this operation, the inventory list will look like this:
[("apple", 10), ("banana", 5), ("orange", 8), ("grape", 12), ("pear", 7), ("mango", 15)]

Now suppose three apples are sold; you can update the first tuple to reflect that:

inventory[0] = ("apple", inventory[0][1] - 3)
This will update the quantity of apples (the first item in the
list, since indexing starts from 0) to reflect the sale. After
this operation, the inventory list will look like:
Output:

[("apple", 7), ("banana", 5), ("orange", 8), ("grape", 12), ("pear", 7), ("mango", 15)]

To delete an item entirely, you can use the del statement:

inventory = ['apple', 'banana', 'orange', 'kiwi']
del inventory[2]
print(inventory)  # Output: ['apple', 'banana', 'kiwi']
Output:

Banana is available
Output:

Inventory List
- apple
- banana
- orange
- kiwi
inventory = [
    {'name': 'apple', 'quantity': 30, 'price': 0.5},
    {'name': 'banana', 'quantity': 15, 'price': 0.3},
    {'name': 'orange', 'quantity': 20, 'price': 0.6},
    {'name': 'kiwi', 'quantity': 10, 'price': 1.0}
]

print("Inventory List:")
for item in inventory:
    print(f"- {item['name']} | Quantity: {item['quantity']} | Price: ${item['price']}")
Output:

Inventory List:
- apple | Quantity: 30 | Price: $0.5
- banana | Quantity: 15 | Price: $0.3
- orange | Quantity: 20 | Price: $0.6
- kiwi | Quantity: 10 | Price: $1.0
Output:

['apple', 'orange', 'kiwi', 'banana', 'banana', 'banana', 'banana', 'banana', 'banana', 'banana', 'banana', 'banana', 'banana']
inventory = [
    {'name': 'apple', 'quantity': 30, 'price': 0.5},
    {'name': 'banana', 'quantity': 15, 'price': 0.3},
    {'name': 'orange', 'quantity': 20, 'price': 0.6},
    {'name': 'kiwi', 'quantity': 10, 'price': 1.0}
]

item_to_update = 'banana'
new_quantity = 50

for item in inventory:
    if item['name'] == item_to_update:
        item['quantity'] = new_quantity

print(inventory)
Output:

[{'name': 'apple', 'quantity': 30, 'price': 0.5}, {'name': 'banana', 'quantity': 50, 'price': 0.3}, {'name': 'orange', 'quantity': 20, 'price': 0.6}, {'name': 'kiwi', 'quantity': 10, 'price': 1.0}]

inventory = []
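The add_product() function referred to in the explanation further below is not shown in this excerpt; a minimal sketch consistent with that description:

def add_product(product_name):
    inventory.append(product_name)
    print(f"{product_name} has been added to the inventory.")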
def remove_product(product_name):
    if product_name in inventory:
        inventory.remove(product_name)
        print(f"{product_name} has been removed from the inventory.")
    else:
        print(f"Error: {product_name} not found in inventory.")
def check_product(product_name):
    if product_name in inventory:
        print(f"{product_name} is in stock.")
    else:
        print(f"{product_name} is not in stock.")
def list_inventory():
    if inventory:
        print("Current inventory:")
        for product in inventory:
            print(f"- {product}")
    else:
        print("The inventory is empty.")
Example Output
Explanation
1. Inventory Initialization
We start with an empty list called inventory where the
product names will be stored. This list dynamically grows or
shrinks as products are added or removed.
2. Adding Products
The function add_product() appends a new product to the
inventory. The append() method adds the product to the end
of the list. This makes it very efficient for adding new items.
3. Removing Products
The function remove_product() checks if the product exists
in the list using the in keyword. If the item is found, it is
removed using the remove() method. If the item doesn't
exist, a message is printed, avoiding any runtime errors.
4. Checking for a Product
The check_product() function also uses the in keyword to
check if a product exists in the inventory. This function
simply prints a message indicating whether the product is
available or not.
5. Listing All Products
The function list_inventory() iterates through the entire list
and prints each product. If the inventory is empty, it notifies
the user that no products are available.
This basic implementation serves as a foundation for more
complex inventory systems, which can be expanded to
include features like product quantity, categories, or prices.
Using lists in this way is particularly useful for small
projects, where the simplicity of managing data without a
database or a more complex structure is sufficient.
Lists allow easy additions, removals, and lookups in an
intuitive and readable manner, making them a great choice
for small-scale inventory management systems. As your
project grows, you could switch to more advanced data
structures like dictionaries or classes, but for a beginner’s
understanding, this simple list-based approach is effective
and practical.
5.6.2 - Organizing Immutable Data
with Tuples
In Python, data structures are crucial for organizing and
manipulating data efficiently. Among the various types of
data structures, tuples hold a special place due to their
immutability, making them ideal for situations where data
should not be changed once it is created. A tuple is a
collection of ordered elements, which can be of any data
type. However, unlike lists, tuples cannot be modified after
their creation. This feature is particularly useful when you
need to store constant data, like geographic coordinates,
and prevent accidental modification.
1. Understanding Tuples in Python
A tuple in Python is an ordered, immutable collection of
items. It is similar to a list but with one key difference: once
a tuple is created, it cannot be altered. This immutability
provides several advantages, including data integrity, and
can help prevent bugs in applications by ensuring that
critical information does not change unexpectedly. Tuples
are commonly used when the data is meant to represent a
fixed set of values that should remain constant throughout
the program's execution.
The main characteristics of tuples are:
- Immutability: Once created, tuples cannot be changed. You
cannot add, remove, or modify elements of a tuple.
- Ordered: The elements within a tuple are ordered,
meaning that their position within the tuple is fixed and can
be accessed by an index.
- Heterogeneous: Tuples can hold elements of different
types, including integers, floats, strings, lists, and even
other tuples.
2. Creating Tuples
Creating a tuple in Python is quite straightforward. You
simply enclose the elements within parentheses '()' and
separate them by commas. Let's look at several ways to
create tuples:
- Empty Tuple: You can create an empty tuple by using a
pair of parentheses without any elements inside.
empty_tuple = ()
print(empty_tuple)  # Output: ()
single_element_tuple = (5,)
print(single_element_tuple)  # Output: (5,)
Here, the slice '[:2]' extracts the first two elements of the
tuple, representing two geographic coordinates. Slicing is a
powerful tool for working with subsets of data in tuples.
5. Practical Use Case: Storing Geographic Coordinates
A common application of tuples is in the representation of
geographic coordinates. Geographic coordinates are usually
given in pairs of latitude and longitude, and tuples are a
natural fit for this type of data. Let’s see how we can use
tuples to store and manage geographic locations.
Consider a scenario where you need to store the
coordinates of several cities around the world. You could
create a tuple for each city, with the first element being the
latitude and the second element being the longitude. Here
is an example:
# Tuples for different cities
nyc = (40.7128, -74.0060)    # New York City
la = (34.0522, -118.2437)    # Los Angeles
london = (51.5074, -0.1278)  # London

# Accessing specific coordinates
print(f"NYC Latitude: {nyc[0]}, Longitude: {nyc[1]}")
print(f"LA Latitude: {la[0]}, Longitude: {la[1]}")
print(f"London Latitude: {london[0]}, Longitude: {london[1]}")
Output:

NYC Latitude: 40.7128, Longitude: -74.006
LA Latitude: 34.0522, Longitude: -118.2437
London Latitude: 51.5074, Longitude: -0.1278
tuple1 = (1, 2)
repeated_tuple = tuple1 * 3
print(repeated_tuple)  # Output: (1, 2, 1, 2, 1, 2)

coordinates = (40.7128, -74.0060)  # e.g., the NYC tuple from above
print(51.5074 in coordinates)  # Output: False
# List example
my_list = [1, 2, 3]
my_list[1] = 4  # This is allowed
print(my_list)  # Output: [1, 4, 3]

# Tuple example
my_tuple = (1, 2, 3)
# my_tuple[1] = 4  # This will raise an error
3.2 Performance
Tuples are generally faster than lists when it comes to
iteration and access because of their immutability. Since
tuples are fixed in size and data, Python can optimize
memory usage and access speed. Lists, being mutable,
need additional overhead to track changes in their size and
contents.
If you don't need to modify the data, using a tuple can lead
to better performance.
3.3 Syntax
The syntax for creating lists and tuples is also different. Lists
are created using square brackets '[]', whereas tuples are
created using parentheses '()'.
Example:
my_list = [1, 2, 3]
my_tuple = (1, 2, 3)
customer = {
    "name": "John Doe",
    "age": 30,
    "email": "[email protected]",
    "address": "123 Elm Street"
}
In this example:
- '"name"', '"age"', '"email"', and '"address"' are the
keys.
- '"John Doe"', 30 , '"[email protected]"', and '"123
Elm Street"' are the corresponding values.
The keys in a dictionary must be immutable types (e.g.,
strings, numbers, or tuples), while the values can be of any
data type, including other dictionaries, lists, or even
functions.
3. Accessing Elements in a Dictionary
To access the value associated with a specific key in a
dictionary, you simply use square brackets '[]' with the key
inside. For example, if you want to access the email of the
customer from the previous example, you can do the
following:
print(customer["email"])  # Output: [email protected]
You can also use the get() method to retrieve the value. The
advantage of using get() is that it doesn’t throw an error if
the key does not exist. Instead, it returns None (or a default
value you specify) if the key is not found:
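For example (the 'phone' key here is just an illustration of a key that does not exist yet):

print(customer.get("email"))         # Output: [email protected]
print(customer.get("phone"))         # Output: None
print(customer.get("phone", "N/A"))  # Output: N/A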
4. Modifying a Dictionary
Dictionaries are mutable, which means you can add, update,
or remove items after the dictionary has been created. To
add a new key-value pair, you simply assign a value to a
new key:
customer["phone"] = "555-1234"
print(customer)
This will add a new key phone with the value '"555-1234"'
to the dictionary.
To update an existing value, you assign a new value to an
existing key:
customer["age"] = 31
print(customer)
del customer["address"]
print(customer)
This will delete the key-value pair associated with the key
'"address"'. If you attempt to delete a key that does not
exist, Python will raise a KeyError .
Alternatively, you can use the pop() method, which removes
an item by key and returns its value:
phone_number = customer.pop("phone")
print(phone_number)  # Output: 555-1234
If the key is not found, pop() will raise a KeyError unless you
specify a default value.
6. Modeling Customer Records
Now that you understand how to work with dictionaries, let’s
see how they can be used to model real-world data, such as
customer records. Each customer record can be represented
by a dictionary where the keys are customer attributes
(such as name, age, email, and address), and the values are
the corresponding details.
For example:
customers = [
    {
        "name": "John Doe",
        "age": 30,
        "email": "[email protected]",
        "address": "123 Elm Street"
    },
    {
        "name": "Jane Smith",
        "age": 25,
        "email": "[email protected]",
        "address": "456 Oak Avenue"
    }
]
for customer in customers:
    print(customer["name"])
Output:

John Doe
Jane Smith
laptop = products["laptop"]
print(laptop["price"])  # Output: 999.99
products["laptop"]["price"] = 1099.99
print(products["laptop"]["price"])  # Output: 1099.99
If you want to remove a product from the catalog, you can
use the del statement:
del products["smartphone"]
print(products)
Output:

apple
banana
cherry

Output:

1.2
0.5
3.0
for product, price in products.items():
    print(f"The price of {product} is ${price}")
Output:

The price of apple is $1.2
The price of banana is $0.5
The price of cherry is $3.0
customers = {"John": {"age": 30, "email": "[email protected]"}}

# Adding a new customer
customers["Alice"] = {"age": 25, "email": "[email protected]"}
removed_customer = customers.pop("John")
print(removed_customer)  # This will print the dictionary associated with John before it was removed
employees = {
    "John": {"job": "Manager", "salary": 50000, "department": "Sales"},
    "Alice": {"job": "Developer", "salary": 80000, "department": "IT"}
}
To access a nested value, you can use multiple keys:
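For example, a minimal sketch using the employees dictionary above:

print(employees["John"]["job"])         # Output: Manager
print(employees["Alice"]["department"]) # Output: IT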
open('filename', mode)
content = file.read()
print(content)
This will read the entire file and display it. If the file is too
large, this method might not be the most efficient, as it
loads the entire content into memory.
- readline() Method:
The readline() method reads one line at a time from the file.
You can use this method to process the file line by line,
which is more memory-efficient for large files:
line = file.readline()
print(line)
- readlines() Method:
The readlines() method reads all lines from the file and
returns them as a list of strings, with each line as a separate
element in the list. For example:
lines = file.readlines()
for line in lines:
    print(line)
# Opening a file for writing
file = open('output.txt', 'w')
This opens the file 'output.txt' in write mode. If the file
already has content, it will be overwritten, so be careful
when using this mode.
- write() Method:
Once the file is open in write mode, you can use the write()
method to write data to the file. For instance:
file.write("Hello, world!")
file.write("\nWelcome to Python.")
This will write the two lines to the file. Note that the write()
method does not automatically add a newline character at
the end of the string, so you must manually include it if
needed.
5. Opening Files for Appending
If you want to add data to an existing file without
overwriting its content, you can open the file in append
mode ( 'a' ). This mode is useful when you want to add new
information to the end of a log file, for example.
Example:
try:
    file = open('non_existent_file.txt', 'r')
    content = file.read()
except FileNotFoundError:
    print("The file does not exist.")
finally:
    file.close()
with open('log.txt', 'a') as file:
    file.write("New log entry\n")
In this example, the open() function is used with the 'a'
mode, which ensures that the new content is added to the
end of the file. The write() method is used to add the text to
the file. The string "New log entry\n" is appended to the file,
and the newline character ('\n') ensures that each entry
appears on a new line.
One of the advantages of using append mode is that it
won’t erase the existing content of the file. So, you can
continue adding data over time without worrying about
losing previous records.
3. Why Closing Files Is Important
After performing operations on a file, it’s essential to close
the file to ensure that all changes are saved and resources
are properly freed. While Python handles file closure
automatically in certain scenarios (like using the with
statement), it’s good practice to explicitly close files when
you no longer need them. This prevents file handles from
staying open and consuming system resources, which can
lead to performance issues or errors in the program.
To close a file, use the close() method:
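A minimal sketch of explicit closing:

file = open('data.txt', 'r')
content = file.read()
file.close()  # releases the file handle once you are done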
with open('data.txt', 'r') as file:
    content = file.read()
try:
    with open('data.txt', 'w') as file:
        file.write("Hello, World!")
except FileNotFoundError:
    print("Error: The file does not exist.")
except PermissionError:
    print("Error: You do not have permission to write to this file.")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
file = open('filename', 'mode')
try:
    file = open('data.txt', 'r')
    content = file.read()
except FileNotFoundError:
    print("File not found.")
finally:
    if 'file' in locals():
        file.close()
Finally, it’s important to always close a file after working
with it. Failing to close files can result in memory leaks or
other issues. This can be done using file.close() , but the
recommended approach is to use the with statement, which
automatically closes the file when done:
with open('data.txt', 'r') as file:
    content = file.read()
with open("sample.txt", "r") as file:
    content = file.read()
    print(content)
This approach gives you more control over how much data
you load into memory at any given time.
2. The readline() method
The readline() method reads a file one line at a time. It’s
useful when you need to process the file line by line,
especially when you're dealing with large files that you don't
want to load completely into memory. Each call to readline()
returns the next line in the file, and it stops once it reaches
the end of the file.
Here’s how to use the readline() method:
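A minimal sketch of line-by-line reading with readline(), assuming a sample.txt file like the one used above:

with open("sample.txt", "r") as file:
    line = file.readline()
    while line:                 # readline() returns an empty string at end of file
        print(line.strip())
        line = file.readline()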
with open("sample.txt", "r") as file:
    lines = file.readlines()
    for line in lines:
        print(line.strip())
with open("sample.txt", "r") as file:
    lines = file.readlines()
    first_five_lines = lines[:5]
    for line in first_five_lines:
        print(line.strip())
This allows you to read the file in one go but only work
with a subset of the lines if necessary.
In conclusion, Python offers a variety of methods for reading
text files, each suited to different needs and file sizes. The
read() method is great for smaller files where you need to
load the entire content, but can be inefficient for large files.
The readline() method is ideal for processing files line by
line, making it a good choice when memory efficiency is
important. Finally, the readlines() method provides an easy
way to load all lines into memory as a list but can also be
memory-heavy for larger files. By understanding these
methods and their memory implications, you can ensure
that your file-reading operations are both efficient and
effective for your specific use case.
When working with text files in Python, there are several
methods available to read the file's content. Each method
has its own advantages, and the choice between them
depends on the file size, memory efficiency, and how the
content needs to be processed. The methods read() ,
readline() , and readlines() are the main tools used for file
reading, and each one serves a different purpose.
Understanding the differences between these methods and
when to use them is key to working effectively with files,
especially large ones.
1. The read() method
The read() method is used to read the entire content of
the file at once. When invoked, it reads the whole file into a
single string and returns it. This method is most useful when
the file is small and can be comfortably loaded into memory.
However, when dealing with large files, using read() can be
inefficient and lead to memory issues, as it tries to load the
entire content into memory.
Example:
In this example, the program opens the file and reads one
line at a time. The loop continues until all lines have been
processed. This approach is more memory-friendly than
using read() , as only one line is held in memory at any
given time. It also makes it easier to process files line by
line, for instance, to perform search operations or
transformations on each line individually.
3. The readlines() method
The readlines() method reads the entire file and returns a
list where each element is a line in the file. Similar to read()
, it loads the entire file into memory, but instead of
returning a single string, it returns a list of strings. Each
string in the list represents one line from the file, including
the newline character at the end of each line.
Example:
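A minimal sketch of readlines(), again assuming a sample.txt file:

with open("sample.txt", "r") as file:
    lines = file.readlines()

print(lines)  # a list of strings, one per line, each ending with '\n'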
This script will create the greetings.txt file and store the text
Welcome to Python programming! inside it.
Example 2: Overwriting an Existing File
If you run the same script again, the contents of
greetings.txt will be overwritten, and the previous message
will be lost. Let’s demonstrate this:
This script will add each item to the items.txt file, with each
item on a new line. The existing contents of items.txt will
remain, and the new data will simply be added to the end.
3. Key Differences Between 'write' and 'append' Modes
Now that we have looked at both modes, let’s summarize
the key differences between the 'write' and 'append' modes:
- 'write' mode ( 'w' ):
- Creates a new file if it doesn’t exist.
- Erases the existing content of the file if it already exists.
- Writes new data from the beginning of the file, replacing
any previous data.
- 'append' mode ( 'a' ):
- Creates a new file if it doesn’t exist.
- Does not erase the existing content of the file.
- Adds new data to the end of the file without modifying
the existing content.
4. Practical Use Cases
The 'write' mode is ideal when you want to start fresh with a
new file or when you need to completely replace the
contents of an existing file. For example, if you're
generating a report and need to overwrite an existing file
with new results each time your program runs, 'write' is the
mode you would choose.
On the other hand, the 'append' mode is perfect for
scenarios where you want to log information or accumulate
data over time without losing previous entries. For example,
if you're keeping track of user activity in a log file, 'append'
mode ensures that new logs are added to the file, while the
older logs remain intact.
Both modes provide essential functionality for file handling
in Python. Understanding when and how to use them
effectively will help you manage file-based data in your
programs efficiently.
In this chapter, we explore the process of creating and
writing to files in Python. One of the key operations when
dealing with files is knowing how to append data to an
existing file, as opposed to overwriting it. This distinction
between the "write" mode and the "append" mode is
essential to understand, especially for beginners who might
not fully grasp the implications of each. Let’s dive into the
usage of the ‘append’ mode and compare it to the ‘write’
mode in Python.
1. Using the 'append' mode to add data
When you open a file in "append" mode ( 'a' ), the data you
write will be added to the end of the file without removing
any of the existing content. This is particularly useful when
you want to keep the original data intact and simply add
new information to it. Here is an example to demonstrate
how the 'append' mode works:
Hello, world!
This is a test file.
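The appending step itself is a short script; a minimal sketch, assuming the file above is named example.txt:

with open('example.txt', 'a') as file:
    file.write("This is a new line of text.\n")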
After running the code above, the file would be updated to:
Hello, world!
This is a test file.
This is a new line of text.
As you can see, the new data is appended at the end of the
file, leaving the existing content intact.
2. When should you use the 'append' mode?
You should use the "append" mode when you need to add
data to a file without altering its previous contents. This is
especially useful for logging, where you may want to record
multiple events or messages over time without losing any
prior data. A practical use case could be appending log
entries into a file where each entry contains a timestamp
and a message.
For example, let’s append new log entries to a log file:
import time

# Append a log entry to the log file
with open('logfile.txt', 'a') as log:
    timestamp = time.strftime('%Y-%m-%d %H:%M:%S')  # format string assumed for illustration
    log.write(f"[{timestamp}] New event logged.\n")
Hello, world!
This is a test file.
After running the code, the content of the file will be:
As you can see, the previous contents of the file have been
deleted and replaced with the new line of text. This behavior
makes the "write" mode ideal for scenarios where you want
to completely update the content of a file, such as when
generating a new report or replacing outdated information.
4. When should you use the 'write' mode?
You should use the "write" mode when you need to start
fresh or overwrite the entire file. It is suitable for tasks
where the previous content is no longer relevant, and you
want to replace it entirely. For instance, if you are
generating a new configuration file or writing an updated
version of a report, the "write" mode ensures that the old
data is discarded and replaced with new content.
Here’s an example where the 'write' mode is used to create
a new configuration file:
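A minimal sketch of such a script, consistent with the file contents shown below:

with open('config.txt', 'w') as file:
    file.write("server=localhost\n")
    file.write("port=8080\n")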
After executing this code, the config.txt file will contain the
following content:
server=localhost
port=8080
This file now holds the new configuration, and any previous
content would have been lost if the file already existed.
5. Key differences between 'write' and 'append'
To summarize the main differences between 'write' and
'append':
- 'write' ( 'w' ): Opens the file and overwrites its contents. If
the file doesn’t exist, it creates a new one. Use this when
you want to completely replace the existing data.
- 'append' ( 'a' ): Opens the file and adds new content to the
end without removing the existing content. If the file doesn’t
exist, it creates a new one. Use this when you want to add
data without affecting the current file content.
Knowing when to use each mode depends on the specific
requirements of your application. If you need to accumulate
data over time, such as appending logs or appending new
entries in a text file, the append mode is your go-to.
However, if you need to reset the content of a file and start
fresh, the write mode is appropriate.
Understanding how to write and append to files is an
essential skill in programming, especially in Python. The
ability to add data to existing files without overwriting them
is crucial for tasks like logging and data collection. On the
other hand, being able to overwrite files when necessary
ensures that you can refresh data in certain situations, such
as generating new reports or configuration files.
In this chapter, we have covered how to use both the "write"
and "append" modes, along with examples to illustrate their
use. By practicing these concepts, you’ll become more
comfortable with file handling in Python and be able to
apply them effectively in your projects.
6.4 - Managing Files with With
1. The Importance of Safe File Handling in Python
In programming, file manipulation is an essential task when
working with data storage, configuration files, logs, or even
when processing large datasets. However, handling files in a
safe and reliable manner is often overlooked, especially by
beginners. Improper handling of file operations can lead to
various issues such as data corruption, memory leaks, or
resource exhaustion, particularly when files are not closed
properly after being opened. In Python, the potential for
these problems is minimized when the file handling process
is carefully managed.
When opening a file, either for reading or writing, the
operating system allocates certain resources to handle that
file. These resources might include memory, file handles,
and input/output buffers. If a file is opened and not closed
correctly, these resources could be tied up, causing
performance degradation, errors, or even the inability to
open other files. In more severe cases, especially when
working with large systems or servers, failing to close files
properly could exhaust the system’s file handle limit,
preventing the opening of new files or causing the
application to crash.
Additionally, not closing a file may lead to data loss. When
writing to a file, the data is typically buffered in memory and
only written to the file when the file is properly closed. If the
file is not closed correctly (due to an error or an unhandled
exception), this buffered data might not be written, resulting
in incomplete or corrupted files. Therefore, managing files
securely and ensuring they are closed properly after being
accessed is a critical best practice.
2. The 'With' Context Manager in Python
To address these concerns, Python provides a feature called
the "context manager" using the with statement. A context
manager is an object that defines the runtime context to be
established when the code block is entered and ensures
that resources are properly managed when leaving that
context. In the case of file handling, the with statement
guarantees that the file is closed automatically after the
code block is executed, whether the operations succeed or
fail.
The with statement is part of the Python language since
version 2.5 and provides a clean and efficient way of
managing resources, such as files, network connections, or
database transactions. It simplifies error handling by
ensuring that files are closed as soon as the program exits
the indented block, without needing explicit close() calls or
worrying about potential exceptions.
When working with files, instead of using traditional
methods like file = open('filename') and later calling
file.close() , Python allows us to wrap the file-handling
process within a with statement. The main advantage is that
the with statement automatically takes care of closing the
file, freeing up the resources as soon as the block execution
is finished. This minimizes the risk of leaving a file open,
and it also improves code readability.
3. Basic Example: Reading a File Using 'With'
Let’s take a closer look at how the with statement works
with file reading operations. Suppose you have a text file
called example.txt that contains the following lines:
1 Hello, World!
2 This is a Python tutorial.
3 Enjoy learning!
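The reading code that the following walkthrough refers to is an ordinary with block; a minimal sketch:

with open('example.txt', 'r') as file:
    content = file.read()
    print(content)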
- content = file.read()
The file.read() method reads the entire content of the file
and stores it in the variable content . It’s important to note
that after reading the content, the cursor inside the file will
be at the end of the file, and subsequent read operations
will return an empty string unless you reset the cursor.
- print(content)
This line prints the content of the file to the console. At this
point, the file has already been read, and the context
manager guarantees that the file is properly closed once
this block is finished.
In this example, even if an error occurs while reading the file
(e.g., a FileNotFoundError or a PermissionError ), the with
statement ensures that the file is properly closed before the
error propagates, preventing any resource leaks or lingering
open file handles.
4. Writing to a File Using 'With'
The with statement is equally useful when writing to files.
When writing to a file, it's essential to handle resources
properly to avoid data loss or file corruption. Let’s explore a
simple example where we write a message to a new text
file.
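A minimal sketch of such a script (the file name and the first message are assumptions; only the second write call is discussed below):

with open('output.txt', 'w') as file:
    file.write('File handling in Python.\n')
    file.write('Python is amazing!\n')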
- file.write('Python is amazing!\n')
Similarly, this line writes another string into the file, adding
more content. Note that write() doesn’t add a newline
automatically, so you need to explicitly include '\n' if you
want to separate lines.
Once the indented block of code finishes, the context
manager ensures that the file is closed, flushing any
buffered data to the file and releasing the resources
associated with it. This is critical in preventing data loss or
file corruption, especially when the program crashes or an
exception is raised after writing.
Summary of Benefits:
- Automatic file closure: The with statement automatically
closes the file when the block of code is exited, even if an
error occurs. This prevents file handles from being left open.
- Error handling: Any errors that occur during the file
operations don’t affect the closure of the file. The file will
always be closed properly before control is returned to the
calling function.
- Cleaner and more readable code: By using the with
statement, you avoid the need for manual close() calls and
reduce the potential for mistakes in file handling.
In the examples above, we've demonstrated how to read
from and write to a file using the with statement,
emphasizing its role in ensuring the correct and safe
handling of files. The use of the context manager not only
simplifies the process but also guarantees that the file is
always closed, thus protecting resources and ensuring data
integrity.
When writing about file handling in Python, especially in the
context of introducing the with statement, it's important to
provide a comprehensive understanding of how this feature
works, its advantages over traditional file management
methods, and how it enhances error handling. The with
statement in Python simplifies the process of working with
resources that require explicit cleanup, such as files,
network connections, and database cursors.
1. What Happens Behind the Scenes When Using with to
Open Files
In Python, the with statement is part of the context
management protocol. A context manager is an object that
defines two key methods: '__enter__' and '__exit__'. These
methods handle the setup and teardown of the code block
in which the context manager is used. When it comes to file
handling, the context manager ensures that a file is properly
opened and closed, even if an error occurs during the
execution of the block of code.
When you use the with statement to open a file, Python
automatically calls the '__enter__' method of the file object.
This method opens the file and returns the file object itself.
The file object is then available inside the with block, where
you can perform file operations like reading or writing. Once
the block of code finishes executing, Python automatically
calls the '__exit__' method, which is responsible for closing
the file, even if an exception was raised during file
operations.
Here’s a step-by-step breakdown of what happens when you
use the with statement to open a file:
1. Entering the Context:
When the with statement is executed, the '__enter__'
method of the file object is invoked. This method opens the
file in the specified mode (e.g., 'r', 'w', 'a'). The file object is
returned to the variable specified in the with statement.
•••
class MyFileManager:
    def __enter__(self):
        self.file = open('file.txt', 'r')
        return self.file

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.file.close()
        if exc_type is not None:
            print(f"An error occurred: {exc_val}")
        return False  # Propagate the exception if one occurred

# Using the context manager with 'with'
with MyFileManager() as file:
    data = file.read()
    print(data)
In this example, '__enter__' opens the file and returns the
file object, while '__exit__' ensures the file is closed when
the block ends. If an error occurs, '__exit__' prints an error
message, but still allows the exception to propagate by
returning False .
3. Advantages of Using with Over Manual File Opening and
Closing
The with statement provides several advantages over
manually opening and closing files. One of the most
significant advantages is automatic resource management,
which simplifies the process and reduces the likelihood of
errors, especially in cases where an exception might occur.
In traditional file handling, the process of opening and
closing a file involves the following steps:
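A sketch of that traditional pattern, using try/finally so the file is closed even if reading fails (the file name is an example):

file = open('example.txt', 'r')
try:
    data = file.read()
finally:
    file.close()  # must be called explicitly, even when an error occurs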
•••
try:
    with open('nonexistent_file.txt', 'r') as file:
        data = file.read()
except FileNotFoundError as e:
    print(f"Error: {e}")

•••
try:
    with open('file.txt', 'r') as file:
        data = file.read()
        # Simulate a potential exception while processing the data
        if not data:
            raise ValueError("The file is empty")
except ValueError as e:
    print(f"Data Error: {e}")
except IOError as e:
    print(f"File Error: {e}")
•••
with open('file.txt', 'r', encoding='utf-8') as file:
    data = file.read()

•••
file = open('file_path', 'rb')  # Reading a binary file
file = open('file_path', 'wb')  # Writing a binary file
•••
import zipfile

# Create a ZIP archive and add files to it
with zipfile.ZipFile('archive.zip', 'w') as zipf:
    zipf.write('file1.txt')
    zipf.write('file2.jpg')
This code creates a new ZIP file called archive.zip and adds
file1.txt and file2.jpg into it.
Extracting Files from a ZIP Archive
To extract files from an existing ZIP archive:
with zipfile.ZipFile('archive.zip', 'r') as zipf:
    zipf.extractall('extracted_files')  # Extract all files into a directory
This will extract all the files contained in archive.zip into the
extracted_files directory.
Listing Contents of a ZIP Archive
You can also list the files contained in a ZIP archive:
•••
with zipfile.ZipFile('archive.zip', 'r') as zipf:
    print(zipf.namelist())  # Prints the list of files in the archive
This will output the names of all files contained in the ZIP
archive.
7. Best Practices for Working with Binary Files
When working with binary files, it’s important to follow a few
best practices to ensure your code is both safe and efficient:
- Always use a context manager ( with statement): This
ensures that files are properly closed, even if an error occurs
during reading or writing.
- Handle exceptions: Use try-except blocks to catch and
handle any potential errors (e.g., file not found, permission
errors, etc.).
- Use binary-safe methods: Always ensure you are reading
and writing data in the correct binary format (i.e., using
bytes objects) to avoid data corruption.
- Be mindful of memory: When working with large files,
avoid loading the entire file into memory. Instead, read and
write in manageable chunks.
By following these practices, you can confidently work with
binary files and handle complex data in a safe and efficient
manner in Python.
When working with files in Python, it is essential to
understand the differences between text and binary files,
especially when dealing with data that is not purely textual.
The key distinction lies in how the data is stored and
interpreted. Text files store data as human-readable
characters, usually encoded in formats like ASCII or UTF-8,
while binary files store data in raw byte form, which can
represent anything from numbers to images or even audio
files. This chapter will dive into how to manipulate binary
files in Python, explore the performance implications of
working with binary data, and provide some best practices
to ensure smooth and error-free handling of these files.
1. Difference Between Text and Binary Files
Text files are composed of a sequence of characters, where
each character is represented by one or more bytes,
depending on the encoding format. Python provides a
simple way to read and write text files through its built-in
open() function, specifying the mode as either 'r' for reading
or 'w' for writing. The main advantage of text files is that
they are easy to read and edit manually using text editors,
as their contents are interpretable by humans.
On the other hand, binary files store data in its raw byte
format, which means that the data is not directly readable
by humans. These files can represent images, audio files,
videos, compressed files, or even complex serialized
objects. In Python, binary files are handled by opening files
in binary mode (by specifying 'rb' for reading or 'wb' for
writing). When working with binary files, it is crucial to
understand that the content will not be processed as text
but as raw byte sequences.
2. Working with Binary Files in Python
To read or write binary files in Python, you would typically
use the open() function with the appropriate mode, such as
'rb' (read binary) or 'wb' (write binary). The file object
returned by open() can then be used to manipulate the
binary data. Here is an example of reading a binary file
(e.g., an image file) and writing it to a new file:
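A minimal sketch of that copy operation (the file names are examples):

# Read the image as raw bytes and write an identical copy
with open('photo.jpg', 'rb') as src:
    data = src.read()

with open('photo_copy.jpg', 'wb') as dst:
    dst.write(data)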
try:
    with open("non_existent_file.jpg", "rb") as file:
        data = file.read()
except FileNotFoundError:
    print("File not found!")
This ensures that the file has not been corrupted or altered
unexpectedly.
- Read in Chunks: When dealing with large binary files, it is
more memory-efficient to read the file in chunks rather than
loading the entire file into memory at once. This is
particularly important when dealing with large images or
video files.
chunk_size = 1024  # 1 KB
with open("large_file.bin", "rb") as file:
    while chunk := file.read(chunk_size):
        process_chunk(chunk)
•••
import os

# Check if the directory exists
dir_path = 'my_folder'

if os.path.exists(dir_path):
    print(f"The path '{dir_path}' exists.")
    if os.path.isdir(dir_path):
        print(f"'{dir_path}' is a directory.")
    else:
        print(f"'{dir_path}' is not a directory.")
else:
    print(f"The path '{dir_path}' does not exist.")

•••
import os

# Create nested directories
nested_dir_path = 'parent_folder/child_folder'

if not os.path.exists(nested_dir_path):
    os.makedirs(nested_dir_path)
    print(f"Nested directories '{nested_dir_path}' created successfully.")
else:
    print(f"Directories '{nested_dir_path}' already exist.")

•••
import os

try:
    os.mkdir('existing_folder')
except FileExistsError:
    print("The directory already exists.")
except PermissionError:
    print("You do not have permission to create the directory.")

•••
import os

dir_path = 'my_folder'

if os.path.exists(dir_path):
    items = os.listdir(dir_path)
    txt_files = [item for item in items if item.endswith('.txt')]
    print(f".txt files in '{dir_path}':")
    for txt_file in txt_files:
        print(txt_file)
else:
    print(f"Directory '{dir_path}' does not exist.")
This code lists only the '.txt' files in the directory. The
item.endswith('.txt') condition filters out files with other
extensions. You can adjust this filter to look for different file
types as needed.
Example - Differentiating Between Files and Directories:
Sometimes, you may need to list only files or only
directories. You can combine os.listdir() with os.path.isfile()
and os.path.isdir() to filter out only the files or directories.
import os

dir_path = 'my_folder'

if os.path.exists(dir_path):
    items = os.listdir(dir_path)
    files = [item for item in items if os.path.isfile(os.path.join(dir_path, item))]
    directories = [item for item in items if os.path.isdir(os.path.join(dir_path, item))]

    print(f"Files in '{dir_path}':")
    for file in files:
        print(file)

    print(f"\nDirectories in '{dir_path}':")
    for directory in directories:
        print(directory)
else:
    print(f"Directory '{dir_path}' does not exist.")
import os

dir_path = 'old_folder'

if os.path.exists(dir_path):
    os.rmdir(dir_path)
    print(f"Directory '{dir_path}' removed successfully.")
else:
    print(f"Directory '{dir_path}' does not exist.")

•••
import os

# Create a single directory
os.mkdir('new_directory')

# Create a nested directory
os.makedirs('parent_directory/child_directory')

•••
import os

# Traverse the directory tree starting from 'root_directory'
for dirpath, dirnames, filenames in os.walk('root_directory'):
    print(f'Current directory: {dirpath}')
    print(f'Subdirectories: {dirnames}')
    print(f'Files: {filenames}')
    print('---')

•••
import os  # import added so the snippet runs on its own

if not os.path.exists('new_directory'):
    os.mkdir('new_directory')
else:
    print('Directory already exists.')

if os.path.exists('new_directory'):
    os.rmdir('new_directory')
else:
    print('Directory does not exist.')
import os

# Get the total size of all files in a directory tree
total_size = 0
for dirpath, dirnames, filenames in os.walk('root_directory'):
    for filename in filenames:
        file_path = os.path.join(dirpath, filename)
        total_size += os.path.getsize(file_path)

print(f'Total size: {total_size} bytes')
import csv

# Open the CSV file
with open('data.csv', newline='') as csvfile:
    reader = csv.reader(csvfile)

    # Iterate through the rows in the CSV
    for row in reader:
        print(row)
In this example, Python will open the file data.csv , and the
csv.reader will parse the file line by line, splitting each row's
values by the default delimiter (comma). Each row will be
printed as a list of values.
Customizing Delimiters
The csv.reader function allows you to specify the delimiter
character used to separate the values in the file. By default,
this delimiter is a comma, but CSV files can use other
delimiters, such as semicolons or tabs. The delimiter is
specified with the delimiter parameter.
For example, if a CSV file uses semicolons (';') instead of
commas as delimiters, you can customize the reader as
follows:
import csv

# Open a CSV file that uses a semicolon delimiter
with open('data_semicolon.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=';')

    # Iterate through the rows
    for row in reader:
        print(row)

•••
name,age,city
"John Doe",36,"New York, NY"
"Jane Smith",25,"Los Angeles, CA"
•••
import csv

# Open a CSV file with quoted data
with open('data_quoted.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')

    # Iterate through the rows
    for row in reader:
        print(row)
import csv

# Data to write
data = [
    ['name', 'age', 'city'],
    ['John Doe', 30, 'New York'],
    ['Jane Smith', 25, 'Los Angeles']
]

# Open the file in write mode
with open('output.csv', mode='w', newline='') as csvfile:
    writer = csv.writer(csvfile)

    # Write rows of data
    writer.writerows(data)
In this example, the csv.writer writes each list from the data
list as a row in the output CSV file. Note that the newline=''
argument is used when opening the file to ensure that the
CSV writer correctly handles newlines across different
operating systems.
Customizing Delimiters and Quoting
Just as with reading CSV files, when writing CSV files, you
can customize the delimiter, quoting behavior, and other
options. For example, if you want to use semicolons instead
of commas as the delimiter and ensure that all text fields
are quoted, you can do this:
import csv

# Data to write
data = [
    ['name', 'age', 'city'],
    ['John Doe', 30, 'New York'],
    ['Jane Smith', 25, 'Los Angeles']
]

# Open the file in write mode
with open('output_semicolon.csv', mode='w', newline='') as csvfile:
    writer = csv.writer(csvfile, delimiter=';', quotechar='"', quoting=csv.QUOTE_ALL)

    # Write rows of data
    writer.writerows(data)
Because quoting=csv.QUOTE_ALL is used here, every field is quoted in the
output, including the numeric age values. If you want to quote only the
text fields and leave numbers unquoted, use csv.QUOTE_NONNUMERIC instead.
Throughout this section, we have covered how to read and
write CSV files using Python's csv module. This module
offers many customization options that allow you to adapt
the CSV reading and writing process to handle different
formats, including varied delimiters, quoted fields, and
specialized line terminators. Mastering these tools is crucial
for anyone working with data, as CSV files remain a popular
format for exchanging and storing structured data.
When working with external data in Python, CSV (Comma
Separated Values) files are one of the most common formats
used due to their simplicity and ease of manipulation. This
chapter will explore how to read and write CSV files using
Python's built-in csv module, detailing various configuration
options such as delimiters and quote characters. In addition,
we will dive into more advanced techniques involving
dictionaries and explore best practices for handling large
files and ensuring data consistency.
1. Writing to CSV Files using csv.writer
The csv.writer object in Python provides a straightforward
way to write data to a CSV file. A basic usage involves
passing a file object (opened in write mode) to the csv.writer
constructor, and then using the writerow() or writerows()
methods to write rows of data.
Example 1: Writing Lists to CSV
Let’s consider writing a simple list of data into a CSV file.
Each list represents a row, and each item in the list
represents a cell.
import csv

data = [
    ["Name", "Age", "City"],
    ["Alice", 30, "New York"],
    ["Bob", 25, "Los Angeles"],
    ["Charlie", 35, "Chicago"]
]

with open('people.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerows(data)
data = [
    ["Name", "Age", "City"],
    ["Alice", 30, "New York"],
    ["Bob", 25, "Los Angeles"],
    ["Charlie", 35, "Chicago"]
]

with open('people_semicolon.csv', 'w', newline='') as file:
    writer = csv.writer(file, delimiter=';')
    writer.writerows(data)
•••
data = [
    ["Name", "Address"],
    ["Alice", "123, Main St."],
    ["Bob", "456, Oak St."]
]

with open('people_quoted.csv', 'w', newline='') as file:
    writer = csv.writer(file, quotechar='"', quoting=csv.QUOTE_MINIMAL)
    writer.writerows(data)

•••
import csv

with open('people.csv', newline='') as file:
    reader = csv.DictReader(file)
    for row in reader:
        print(row)
Name,Age,City
Alice,30,New York
Bob,25,Los Angeles
Charlie,35,Chicago
The output will be:
•••
{'Name': 'Alice', 'Age': '30', 'City': 'New York'}
{'Name': 'Bob', 'Age': '25', 'City': 'Los Angeles'}
{'Name': 'Charlie', 'Age': '35', 'City': 'Chicago'}
•••
import csv

data = [
    {"Name": "Alice", "Age": 30, "City": "New York"},
    {"Name": "Bob", "Age": 25, "City": "Los Angeles"},
    {"Name": "Charlie", "Age": 35, "City": "Chicago"}
]

with open('people_dict.csv', 'w', newline='') as file:
    fieldnames = ["Name", "Age", "City"]
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writeheader()  # Writes the header row
    writer.writerows(data)
Name,Age,City
Alice,30,New York
Bob,25,Los Angeles
Charlie,35,Chicago
import csv

with open('large_file.csv', newline='') as file:
    reader = csv.DictReader(file)
    for row in reader:
        # Process each row one by one
        print(row)
Deserialization Example:
- If you receive a JSON string from an external source (like
an API response), you can deserialize that string back into a
Python dictionary or list to manipulate the data in your
program.
3. Using the Python json Module
Python's json module offers an easy way to work with
JSON data, providing functions for both serialization and
deserialization. The key functions in the json module are
json.dump() , json.dumps() , json.load() , and json.loads() .
Let's break down each function and see how they are used
in practice.
- json.dump(): This function is used to serialize a Python
object and write it directly to a file. It takes two arguments:
the object you want to serialize and the file object where the
data will be written.
import json

data = {"name": "Alice", "age": 30, "city": "New York"}

with open("data.json", "w") as file:
    json.dump(data, file)
import json

data = {"name": "Bob", "age": 25, "city": "Los Angeles"}

json_string = json.dumps(data)
print(json_string)
•••
1 {"name": "Bob", "age": 25, "city": "Los Angeles"}
import json

with open("data.json", "r") as file:
    data = json.load(file)
    print(data)

•••
{"name": "Alice", "age": 30, "city": "New York"}

•••
{'name': 'Alice', 'age': 30, 'city': 'New York'}
import json

json_string = '{"name": "Charlie", "age": 35, "city": "Chicago"}'

data = json.loads(json_string)
print(data)
data = {
    "employees": [
        {"name": "John", "age": 28, "department": "HR"},
        {"name": "Sarah", "age": 32, "department": "Engineering"},
        {"name": "Mike", "age": 25, "department": "Sales"}
    ]
}
•••
import json

with open("employees.json", "w") as file:
    json.dump(data, file, indent=4)
•••
import json

# Assuming you have a file 'data.json' containing:
# {
#     "name": "Alice",
#     "age": 25,
#     "city": "New York"
# }

# Open the JSON file and load its contents
with open('data.json', 'r') as file:
    data = json.load(file)
    print(data)
•••
import json

# A Python dictionary
data = {
    "name": "Bob",
    "age": 30,
    "city": "Los Angeles"
}

# Convert the Python dictionary to a JSON string
json_string = json.dumps(data)

# Now you have a JSON string
print(json_string)
Output:
•••
1 {"name": "Bob", "age": 30, "city": "Los Angeles"}
If you passed an open file object to json.dump() instead, the same
dictionary would be serialized and written directly into a file such as
output.json.
3. Converting a JSON String to a Python Dictionary Using
json.loads()
The json.loads() function is the reverse of json.dumps() . It
converts a JSON-formatted string back into a Python object.
This is especially useful when you're dealing with JSON data
received from a web API or other external sources, as you’ll
often receive JSON data in the form of a string that needs to
be parsed into a usable Python object.
Here’s an example of how to use json.loads() :
•••
import json

# A JSON string
json_string = '{"name": "Charlie", "age": 35, "city": "San Francisco"}'

# Convert the JSON string to a Python dictionary
data = json.loads(json_string)

# Now you can access the data like a dictionary
print(data)           # {'name': 'Charlie', 'age': 35, 'city': 'San Francisco'}
print(data['name'])   # 'Charlie'
import json

json_string = '{"name": "David" "age": 40, "city": "Chicago"}'  # Invalid JSON: missing comma

try:
    data = json.loads(json_string)
except json.JSONDecodeError as e:
    print(f"Failed to decode JSON: {e}")
import json

try:
    with open('data.json', 'r') as file:
        data = json.load(file)
except json.JSONDecodeError as e:
    print(f"Error decoding JSON: {e}")
except FileNotFoundError as e:
    print(f"Error: File not found. {e}")
except PermissionError as e:
    print(f"Error: Permission denied. {e}")
This approach ensures that your code can handle a variety
of errors gracefully.
The ability to work with JSON data is essential for Python
developers, especially when dealing with web APIs,
configuration files, or data exchange between different
applications. By understanding how to read JSON from files,
convert Python objects to JSON strings, and handle common
errors, you’ll be equipped to handle a wide range of real-
world programming challenges.
When working with JSON (JavaScript Object Notation) in
Python, we often encounter situations where we need to
serialize or deserialize data. JSON provides a simple and
widely accepted way of storing and exchanging data
between systems. One common real-world application of
JSON is in configuration management for software
applications, where configuration data is stored in a JSON
file and loaded into the program at runtime. This ensures
that settings are easily adjustable without modifying the
source code itself.
Let’s go through a practical example to understand how to
read and write JSON files in Python, particularly focusing on
configuration management.
1. Storing and Loading Configuration Data
Imagine a program that needs to store user preferences,
such as theme settings (light or dark mode), language
preferences, and whether or not notifications are enabled.
Instead of hardcoding these values into the program, it
makes sense to store them in a JSON file. This allows users
or administrators to update the configuration without
needing to modify the code directly.
Writing to a JSON File
Let’s first create a Python script that stores user settings in
a JSON file. We will use the json module to serialize the data
into a JSON format and write it to a file.
import json

# A dictionary containing the settings data
settings = {
    "theme": "dark",
    "language": "English",
    "notifications_enabled": True
}

# Writing the dictionary to a JSON file
with open("config.json", "w") as json_file:
    json.dump(settings, json_file, indent=4)
{
    "theme": "dark",
    "language": "English",
    "notifications_enabled": true
}
import json

# Reading the configuration data from the JSON file
with open("config.json", "r") as json_file:
    settings = json.load(json_file)

# Print the loaded settings
print(settings)
The json.load() function reads the JSON data from the file
and converts it into a Python dictionary. After executing this
code, the settings variable will contain the same data as in
the config.json file, allowing the program to use it in the
same way as the original dictionary.
Modifying the Configuration
One common scenario when working with JSON files is
modifying the configuration. For instance, a user may
choose to switch the theme to "light". You can update the
dictionary and write the updated data back into the same
JSON file.
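A minimal sketch of that round trip, reusing the config.json file from the earlier examples:

import json

# Load the current settings, change one value, and save them back
with open("config.json", "r") as json_file:
    settings = json.load(json_file)

settings["theme"] = "light"  # the user switches to the light theme

with open("config.json", "w") as json_file:
    json.dump(settings, json_file, indent=4)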
import json

# Simulated JSON response from Service A
response = '{"user": "john_doe", "email": "[email protected]", "active": true}'

# Deserialize the JSON data into a Python dictionary
data = json.loads(response)

# Use the data in the program
print(f"User: {data['user']}, Email: {data['email']}, Active: {data['active']}")
import json

# Python data
user_info = {
    "user": "john_doe",
    "email": "[email protected]",
    "active": True
}

# Serialize the data to a JSON string
json_response = json.dumps(user_info)

# Simulate sending the JSON response
print(json_response)
•••
1 pip install pandas openpyxl
•••
1 import pandas as pd
2 import openpyxl
•••
df = pd.read_excel('file_path.xlsx')

•••
df = pd.read_excel('file_path.xlsx', sheet_name='Sheet1')
# or
df = pd.read_excel('file_path.xlsx', sheet_name=0)  # Indexing starts at 0
•••
dfs = pd.read_excel('file_path.xlsx', sheet_name=['Sheet1', 'Sheet2'])
This will return a dictionary of DataFrames, where the keys
are the sheet names and the values are the corresponding
DataFrames.
- usecols : This parameter is used when you only want to
read specific columns from the Excel sheet. For example, if
you only want to read the columns 'A' and 'C', you can
specify:
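A minimal sketch, assuming 'A' and 'C' are column headers in the sheet (pandas also accepts Excel letter ranges as a string, such as usecols='A,C'):

df = pd.read_excel('file_path.xlsx', usecols=['A', 'C'])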
- nrows : Limits how many rows are read from the sheet. For example, to
read only the first 10 rows:

•••
df = pd.read_excel('file_path.xlsx', nrows=10)

To write a DataFrame back to an Excel file, use the to_excel() method:

•••
df.to_excel('output_file.xlsx')

You can choose the name of the sheet with the sheet_name parameter:

•••
df.to_excel('output_file.xlsx', sheet_name='Results')

- index : To leave the DataFrame index out of the file, pass index=False:

•••
df.to_excel('output_file.xlsx', index=False)
- columns : If you want to write only a subset of columns
from the DataFrame to the Excel file, you can specify the
columns you want using the columns parameter. For
example, if your DataFrame has columns 'A', 'B', and 'C',
and you only want to write 'A' and 'C':
•••
df.to_excel('output_file.xlsx', columns=['A', 'C'])
•••
df.to_excel('output_file.xlsx', engine='openpyxl')
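To write several DataFrames into a single workbook, one per sheet, you can use pandas' ExcelWriter. A minimal sketch, with df1 and df2 standing in for your own DataFrames:

import pandas as pd

df1 = pd.DataFrame({'A': [1, 2, 3]})
df2 = pd.DataFrame({'B': [4, 5, 6]})

# Each DataFrame goes to its own sheet in the same file
with pd.ExcelWriter('output_file.xlsx') as writer:
    df1.to_excel(writer, sheet_name='Sheet1', index=False)
    df2.to_excel(writer, sheet_name='Sheet2', index=False)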
This will create an Excel file with two sheets: 'Sheet1' and
'Sheet2' , each containing the data from df1 and df2 ,
respectively.
Additionally, if you need to apply formatting or customize
the structure of the Excel file further (such as setting
column widths, adding conditional formatting, or inserting
charts), you may need to use the openpyxl library directly.
While pandas is great for straightforward reading and
writing of data, openpyxl offers more advanced features for
Excel file manipulation, such as cell formatting, adding
formulas, and much more.
In this chapter, we’ve covered the basics of reading from
and writing to Excel files using pandas . By using the
read_excel() and to_excel() functions, you can efficiently
load, manipulate, and save data in Excel format. In the next
sections, we’ll explore more advanced features and
techniques to automate and streamline your data
processing workflows even further.
Manipulating Excel files is a common task in many data-
driven projects, and Python provides powerful libraries for
handling such files. In this chapter, we will explore how to
manipulate Excel spreadsheets using the openpyxl library,
covering file opening, reading specific cells, and adding new
data. Additionally, we will compare pandas and openpyxl in
terms of their usage for reading and writing Excel files,
demonstrating when each library is most appropriate. The
chapter will also include tips on best practices for working
with Excel files in Python, such as optimizing performance,
handling common errors, and efficiently managing large
datasets.
1. Manipulating Excel Files Using openpyxl
The openpyxl library is a popular choice for working with
Excel files in Python, especially when you need to perform
low-level operations, such as reading specific cells or adding
new data to a spreadsheet. Unlike pandas , which operates
primarily on dataframes, openpyxl allows for more granular
control over the Excel file structure, such as modifying cell
styles, formatting, and formulas.
To work with Excel files using openpyxl , you first need to
install the library (if it is not already installed):
•••
1 pip install openpyxl
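Once installed, an existing workbook is usually opened with load_workbook(); a minimal sketch, assuming a file named example.xlsx:

from openpyxl import load_workbook

# Open an existing workbook
workbook = load_workbook('example.xlsx')

You can then select a sheet by name: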
sheet = workbook['Sheet1']
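From the sheet object you can read individual cells and append new rows; a short sketch (the cell references and sample values are only examples):

print(sheet['A1'].value)                  # Read the value stored in cell A1
print(sheet.cell(row=2, column=3).value)  # Read the cell at row 2, column 3

sheet.append(['David', 40, 'Boston'])     # Add a new row after the existing data
workbook.save('example.xlsx')             # Persist the changes to disk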
import pandas as pd

# Read an Excel file into a pandas DataFrame
df = pd.read_excel('example.xlsx', sheet_name='Sheet1')
print(df.head())
This code reads the data from the 'Sheet1' sheet into a
DataFrame and prints the first few rows. The simplicity of
pandas makes it highly effective for tasks that require
working with tabular data.
- openpyxl: While openpyxl can also read data from Excel
files, it does not convert the data into a structured form like
a DataFrame . Instead, you have to work with individual
cells. This gives you more flexibility for low-level
manipulation, but it requires more code if you're looking to
work with entire datasets.
Writing Excel Files
- pandas: Writing to Excel with pandas is straightforward.
You can write a DataFrame to an Excel file with the
to_excel() method:
•••
df.to_excel('output.xlsx', index=False)

•••
from openpyxl import load_workbook

try:
    workbook = load_workbook('example.xlsx')
except FileNotFoundError:
    print("File not found!")
Explanation:
- load_workbook() tries to open a file that doesn't exist,
resulting in a FileNotFoundError .
- The try-except block catches this error, and the message
from the exception is printed, which helps you understand
the nature of the problem.
To avoid this error, one solution is to check if the file exists
before attempting to open it. You can use the
os.path.exists() method for this purpose.
import os

file_path = 'non_existent_file.txt'
if os.path.exists(file_path):
    with open(file_path, 'r') as file:
        content = file.read()
else:
    print(f"Error: The file {file_path} does not exist.")
2. PermissionError
Another frequent error when working with files is the
PermissionError . This error occurs when your Python script
tries to access a file, but the operating system denies
permission. This could be due to file ownership, restrictive
file permissions, or the user not having sufficient privileges
to read, write, or execute the file.
Cause:
This error can happen if:
- The file is locked or in use by another process.
- The user running the Python script does not have the
necessary permissions to access the file.
- The file is marked as read-only, but you attempt to write to
it.
Example:
•••
try:
    with open('restricted_file.txt', 'w') as file:
        file.write("This will fail if the file is write-protected.")
except PermissionError as e:
    print(f"Error: {e}")
Explanation:
- If the restricted_file.txt is read-only or the user doesn't
have write permissions, Python will raise a PermissionError .
- Again, using try-except , we can catch this specific error
and handle it accordingly, perhaps by notifying the user of
insufficient permissions.
To handle this, you can either change the file's permissions
or handle the exception and ask the user to adjust the file's
properties manually.
•••
import os

file_path = 'restricted_file.txt'
if os.access(file_path, os.W_OK):
    with open(file_path, 'w') as file:
        file.write("This will succeed if the file is writable.")
else:
    print(f"Error: The file {file_path} is not writable or does not exist.")
3. IOError
IOError is a more general error that occurs during file I/O
(Input/Output) operations. While FileNotFoundError and
PermissionError are specific types of IOError , this error can
also occur when there are issues during reading or writing to
the file, such as a full disk, a network file system that is
unavailable, or an unexpected hardware failure.
Cause:
This error can arise due to a wide range of reasons:
- The disk is full or unavailable.
- The file system is read-only.
- The file is being accessed over a network that is currently
disconnected.
- The file is being used by another program, preventing
access.
Example:
•••
try:
    with open('some_file.txt', 'r') as file:
        content = file.read()
except IOError as e:
    print(f"Error: {e}")
Explanation:
- This example tries to read from a file, but an I/O error
prevents the operation. The error could be caused by any of
the above-mentioned reasons, and the exception is
captured and printed for debugging.
While IOError is more generic, it’s still important to manage
it, especially when working with files over a network or with
external storage devices. One approach is to include retries
or error logging in your code.
import time

for _ in range(3):  # Retry 3 times
    try:
        with open('some_file.txt', 'r') as file:
            content = file.read()
        break  # If successful, exit the loop
    except IOError as e:
        print(f"Error: {e}. Retrying in 2 seconds...")
        time.sleep(2)
import zipfile

# Create a new ZIP file and add files to it
with zipfile.ZipFile('example.zip', 'w') as zipf:
    zipf.write('file1.txt')  # Add file1.txt to the ZIP archive
    zipf.write('file2.txt')  # Add file2.txt to the ZIP archive
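To list what the archive contains, you can reopen it and call namelist(); a minimal sketch:

with zipfile.ZipFile('example.zip', 'r') as zipf:
    print(zipf.namelist())  # Names of all files stored in the archive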
This will output a list of the files in the ZIP archive, such as:
['file1.txt', 'file2.txt', 'file3.txt']
•••
# Extract a specific file from the ZIP archive
with zipfile.ZipFile('example.zip', 'r') as zipf:
    zipf.extract('file2.txt', 'extracted_files')  # Extract file2.txt to the 'extracted_files' folder
If you want to extract all the files from the archive, you can
use extractall() :
•••
# Extract all files to a directory
with zipfile.ZipFile('example.zip', 'r') as zipf:
    zipf.extractall('extracted_files')  # Extract all files to the 'extracted_files' folder
import gzip
import shutil

# Compress a file using GZIP
with open('example.txt', 'rb') as f_in:
    with gzip.open('example.txt.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)  # Copy content from example.txt to example.txt.gz
In this example, we’re compressing the contents of
example.txt and saving it as example.txt.gz . The
shutil.copyfileobj() function is used to copy the contents of
the input file to the compressed output file.
3.2 Reading a Compressed GZIP File
To read the contents of a compressed GZIP file, we open it in
read mode ( 'rb' ) using the gzip.open() function:
•••
# Read data from a GZIP compressed file
with gzip.open('example.txt.gz', 'rb') as f:
    file_content = f.read()       # Read the entire compressed file content
    print(file_content.decode())  # Print the decompressed content
•••
import gzip

# Open the GZIP file for reading
with gzip.open('example.gz', 'rb') as f_in:
    # Open the output file for writing the decompressed content
    with open('decompressed_file.txt', 'wb') as f_out:
        # Read from the GZIP file and write to the new file
        f_out.write(f_in.read())
Explanation:
- gzip.open('example.gz', 'rb') opens the GZIP file for reading
in binary mode ( 'rb' ).
- f_in.read() reads the compressed data.
- The decompressed data is written to a new file using the
built-in open() function.
This example decompresses the entire file into a new text
file ( decompressed_file.txt ).
Decompressing and Processing the Data in Memory
Sometimes, you might want to decompress the data and
process it without saving it to a new file. You can read the
decompressed content directly into memory. Here’s an
example:
•••
import gzip

# Open the GZIP file for reading
with gzip.open('example.gz', 'rb') as f_in:
    # Decompress the content and store it in memory
    file_content = f_in.read()

# Now you can process the decompressed data
print(file_content.decode('utf-8'))  # If the content is a text file
•••
import zipfile

# Extract all files from the ZIP archive
with zipfile.ZipFile('archive.zip', 'r') as zipf:
    zipf.extractall('extracted_files/')
This will decompress all files from the ZIP archive into the
extracted_files directory.
3. Final Thoughts
In this chapter, we’ve discussed the basics of working with
compressed files using Python’s gzip and zipfile libraries.
The gzip module is ideal for compressing and
decompressing single files, while zipfile is a more versatile
option for dealing with multiple files and directories. Each
library has its strengths and ideal use cases, so
understanding these differences will help you choose the
right tool for your specific needs.
Make sure to experiment with both libraries in your projects
to better understand how they work. Whether you’re
handling large datasets, backups, or file transfers,
mastering file compression will make your Python
programming more efficient and effective. Keep practicing
and apply these concepts to different use cases—soon,
you’ll be comfortable working with compressed files in
Python like a pro.
6.12 - Best Practices in File Handling
When working with files in Python, ensuring that your code
follows best practices is essential for writing maintainable,
efficient, and secure software. The handling of files is a
fundamental aspect of many programming tasks, whether
it’s reading from or writing to text files, manipulating data
logs, or working with configurations. Inadequate handling of
file operations, however, can lead to significant problems
such as data corruption, file system errors, information
leaks, and even security vulnerabilities. This chapter aims to
provide you with the necessary tools and techniques to
manipulate files safely and efficiently, keeping performance
and data integrity in mind.
1. Proper Use of 'with open()'
One of the most important things to understand when
working with files is how to ensure they are properly opened
and closed. Using with open() is a Pythonic way to manage
file operations, as it ensures that a file is properly closed
after the operation is completed, even if an error occurs
within the block. This avoids common issues like memory
leaks or file locks that may occur when the file is not
properly closed.
Example of proper usage:
•••
# Reading a file using 'with open()' to automatically close it after use
with open('example.txt', 'r') as file:
    content = file.read()
    print(content)
# The file is automatically closed here, even if an error occurs
try:
    with open('important_file.txt', 'r') as file:
        data = file.read()
except FileNotFoundError:
    print("The file was not found.")
except PermissionError:
    print("You do not have permission to access this file.")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
This code safely attempts to open the file and read its
contents. If the file doesn’t exist or there’s a permissions
issue, an appropriate error message is printed. By using try-
except , you can ensure that the program doesn’t crash
unexpectedly and that you can log or handle the error
properly.
4. Efficient Reading and Writing of Large Files
When working with large files, it’s important to be mindful of
memory usage. Reading or writing an entire file into
memory can lead to performance issues or even crashes
due to excessive memory consumption. A better approach is
to process the file line by line or in chunks.
For example, reading a file line by line:
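A minimal sketch (the file name is an example); iterating over the file object keeps only one line in memory at a time:

with open('large_input.txt', 'r') as file:
    for line in file:
        print(line.strip())  # replace with your own per-line processing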
•••
# Efficient writing to a file in chunks
with open('output_large.txt', 'w') as file:
    for chunk in data_chunks:
        file.write(chunk)
•••
from pathlib import Path

# Create a Path object
file_path = Path('example_directory/example_file.txt')

# Check if the file exists
if file_path.exists():
    print("The file exists.")
else:
    print("The file does not exist.")
You can also easily join paths using the '/' operator, which
simplifies working with directories and subdirectories:
•••
# Joining paths easily with '/'
new_file_path = Path('directory') / 'subdirectory' / 'file.txt'
print(new_file_path)  # Output: directory/subdirectory/file.txt
import re

file_name = 'data_2023.csv'
if re.match(r'^data_\d{4}\.csv$', file_name):
    print("The file name matches the expected format.")
else:
    print("Invalid file name format.")
Checking that the file has the correct format (e.g., CSV,
JSON) before proceeding with any parsing or reading
operation prevents errors that could occur from attempting
to process the wrong type of file.
7. Handling Protected or Corrupted Files
When working with files, especially in production
environments, you may encounter files that are either
protected or corrupted. You must handle such cases
properly to ensure your program remains stable. For
example, if a file is locked or if it’s in an unsupported
format, appropriate error handling must be applied.
Example:
try:
    with open('protected_file.txt', 'r') as file:
        data = file.read()
except IOError as e:
    print(f"An error occurred while accessing the file: {e}")
•••
import os

filename = "example.txt"

# Check if the file exists before attempting to open it
if os.path.exists(filename):
    with open(filename, 'r') as file:
        content = file.read()
    print("File read successfully!")
else:
    print(f"Error: The file {filename} does not exist.")
•••
import os

filename = "data.csv"

# Check if the file extension matches the expected type
if filename.lower().endswith('.csv'):
    with open(filename, 'r') as file:
        content = file.read()
    print("CSV file read successfully!")
else:
    print(f"Error: The file {filename} is not a valid CSV file.")
import os

filename = "data.csv"

# Get the absolute path of the file
abs_path = os.path.join(os.path.abspath(os.getcwd()), filename)

print(f"Absolute path of the file: {abs_path}")
This code combines os.path.abspath() with os.getcwd() to
generate an absolute file path. By using absolute paths, you
prevent ambiguity and help avoid errors when accessing
files.
4. Handling Sensitive Data Safely
When working with sensitive data, such as passwords or
encryption keys, it is crucial to protect that data both when
it is stored and during transmission. Sensitive data should
never be stored in plaintext files without proper encryption.
Here’s an example of how to encrypt data before writing it
to a file using the cryptography library:
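A minimal sketch using the Fernet recipe from the cryptography package (the file name and the sample secret are placeholders; in real code the key must be stored securely, separate from the data):

from cryptography.fernet import Fernet

# Generate a key and create a cipher object
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt the sensitive text before writing it to disk
token = cipher.encrypt(b"my-secret-password")
with open("secrets.bin", "wb") as f:
    f.write(token)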
•••
import os.path

# Check if a file exists
file_path = "example.txt"
if os.path.exists(file_path):
    print(f"{file_path} exists.")
else:
    print(f"{file_path} does not exist.")
import sys

# Print the command-line arguments passed to the script
print("Command-line arguments:", sys.argv)
import datetime

# Get the current date and time
now = datetime.datetime.now()
print("Current date and time:", now)

# Format the date
formatted_date = now.strftime("%Y-%m-%d")
print("Formatted date:", formatted_date)
import random

# Generate a random integer between 1 and 10
random_number = random.randint(1, 10)
print("Random number:", random_number)
These are just a few examples of the native modules
available in Python. There are hundreds of such modules
that come with the standard Python installation, making it
incredibly versatile and powerful for a wide range of
applications, from file handling to data manipulation.
3. External Libraries in Python
While Python’s native modules cover many tasks, there are
times when you need functionality that’s not part of the
standard library. That’s where *external libraries
* come in.
External libraries are collections of modules written by third-
party developers that provide additional functionality not
built into the Python standard library.
The Python Package Index (PyPI) is the official repository
where you can find these external libraries. PyPI hosts a vast
number of libraries covering everything from web
frameworks to machine learning tools. To install an external
library, Python provides a tool called pip. Pip is the
package installer for Python, and it allows you to easily
install libraries from PyPI or other sources.
Here are a few examples of popular external libraries:
- requests: The requests library is one of the most popular
Python libraries for making HTTP requests. It simplifies
interacting with web APIs, handling HTTP requests, and
processing responses.
♦•♦
1 pip install requests
Once installed, you can use it like this:
import requests

response = requests.get('https://api.github.com')
print("Status code:", response.status_code)
print("Response content:", response.text)
Example of usage:
import numpy as np

# Create a 2D array
array = np.array([[1, 2], [3, 4]])
print("Array:\n", array)

# Perform matrix multiplication
result = np.dot(array, array)
print("Matrix multiplication result:\n", result)
Example of usage:
import pandas as pd

# Create a DataFrame
data = {'Name': ['Alice', 'Bob', 'Charlie'], 'Age': [25, 30, 35]}
df = pd.DataFrame(data)
print(df)
import datetime

# Get current date and time
now = datetime.datetime.now()
print("Current date and time:", now)

# Add one day to the current date
tomorrow = now + datetime.timedelta(days=1)
print("Tomorrow's date:", tomorrow)
import pendulum

# Get current date and time
now = pendulum.now()
print("Current date and time:", now)

# Add one day to the current date
tomorrow = now.add(days=1)
print("Tomorrow's date:", tomorrow)
1 import math
2
3 # Calculate the square root of 25
4 result = math.sqrt(25)
5 print("Square root of 25:", result)
•••
import requests

# Make a GET request to a website
response = requests.get("https://jsonplaceholder.typicode.com/posts")
if response.status_code == 200:
    print("Response received:")
    print(response.json())
# utils.py

def is_prime(number):
    if number <= 1:
        return False
    for i in range(2, int(number ** 0.5) + 1):
        if number % i == 0:
            return False
    return True
Now, you can import and use this module in another script:
# main.py
from utils import is_prime

# Check if a number is prime
number = 29
if is_prime(number):
    print(f"{number} is a prime number.")
else:
    print(f"{number} is not a prime number.")
analytics/
    __init__.py
    statistics.py
    visualization.py
•••
1 from analytics.statistics import mean
2 from analytics.visualization import plot_graph
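The basic import statement, discussed next, loads an entire module; a minimal sketch using the standard-library math module:

import math  # Import the whole math module

number = 16
result = math.sqrt(number)  # Functions are accessed through the module name
print("The square root of", number, "is", result)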
In this example:
- The import math statement brings in the entire math
module.
- To use the sqrt function, we prefix it with math. (i.e.,
math.sqrt() ).
This approach is useful when you need access to multiple
functions or variables from a module. However, it requires
you to always refer to the module name whenever you want
to access a function or variable, which can make the code
more verbose.
2. The from Keyword
If you only need a specific function or variable from a
module, you can use the from keyword to import just that
particular element. This can make your code cleaner and
reduce redundancy because you won’t need to reference
the module name every time.
For example, instead of importing the entire math module,
you could import only the sqrt function:
from math import sqrt  # Importing only the sqrt function from the math module

# Using the sqrt function directly without the math. prefix
number = 16
result = sqrt(number)
print("The square root of", number, "is", result)
In this case:
- The from math import sqrt statement imports only the sqrt
function from the math module.
- You can now use sqrt() directly without needing to prefix it
with math. .
This approach is ideal when you need only a small portion of
a large module. It keeps your code concise and improves
readability, especially when dealing with long module
names.
3. The as Keyword for Aliases
Sometimes, especially when working with large libraries or
modules, it’s helpful to give a module or function a shorter
or more meaningful name to make your code easier to work
with. This is where the as keyword comes in. It allows you to
assign an alias to the module or function you're importing.
A common use of the as keyword is with libraries like numpy
and pandas . These libraries are often used in data science
and scientific computing, and their names can be long to
type repeatedly. To save time and space, you can use as to
assign them a shorter alias.
For example:
•••
import numpy as np  # Importing numpy with an alias

# Using numpy's array function to create a simple array
arr = np.array([1, 2, 3, 4, 5])
print(arr)
•••
import pandas as pd  # Importing pandas with an alias

# Using pandas to create a DataFrame
data = {'name': ['Alice', 'Bob'], 'age': [25, 30]}
df = pd.DataFrame(data)
print(df)
In this example:
- We import the pandas library and alias it as pd .
- We can now use pd to reference pandas throughout our
code, which is much quicker and easier to type.
4. Combining from and as
You can also combine the from and as keywords to import
specific functions or classes from a module and assign them
aliases. This is particularly useful when you want to import
specific functions but also want to give them more
descriptive or shorter names.
Let’s consider a scenario where we only need the sqrt
function from the math module but also want to give it a
more descriptive name:
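A minimal sketch of that import (the alias is the one discussed below):

from math import sqrt as square_root  # Import sqrt under a clearer name

result = square_root(16)
print(result)  # 4.0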
In this case:
- The from math import sqrt as square_root statement
imports the sqrt function from the math module and
renames it to square_root .
- You can now call the function using the more descriptive
name square_root() .
This approach is particularly useful when the imported
function’s original name is unclear or doesn’t fit well with
your program's naming conventions.
5. Best Practices and Considerations
While Python’s import system is simple, there are some best
practices you should follow to keep your code organized and
maintainable:
- Avoid importing everything: It’s generally better to avoid
using from module import * , which imports all the
functions, classes, and variables from a module. This can
lead to namespace pollution and make it unclear where
certain elements come from.
•••
# Avoid this:
from math import *  # This imports everything from math, which can be messy

# Instead, prefer explicit imports like:
from math import sqrt, pi  # Import only what you need
♦••
1 import math # Standard library import
2
3 import numpy as np # Third-party library import
4 import pandas as pd
5
6 import my_module # Your own module import
1 import math
2 result = math.sqrt(16)
3 print(result)
♦•♦
1 from math import sqrt
2 result = sqrt(16)
3 print(result)
♦•♦
from math import sqrt, pow
result1 = sqrt(16)
result2 = pow(2, 3)
print(result1, result2)
In this case, both sqrt and pow are imported from math
and can be used directly in the code without the need for
the math. prefix.
4. Renaming Modules with as
The as keyword allows you to assign a custom alias to a
module or function, which can make your code more concise
and easier to work with, especially when dealing with long
module names or conflicting names. For instance, the
numpy library is often imported with the alias np for brevity:
•••
import numpy as np
arr = np.array([1, 2, 3, 4])
•••
from math import sqrt as square_root
result = square_root(16)
print(result)
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
print(df)
import math
from datetime import datetime
import numpy as np

# Using math for square root
num = 25
sqrt_result = math.sqrt(num)

# Using datetime to get the current date and time
current_time = datetime.now()

# Using numpy to create an array
arr = np.array([1, 2, 3, 4, 5])

print(f"Square root of {num} is {sqrt_result}")
print(f"Current time is {current_time}")
print(f"Numpy array: {arr}")
In this example:
- The math module is imported using import to access the
square root function.
- The datetime module is imported with from to access the
datetime.now() function.
- The numpy module is imported with import ... as np to
create an array more conveniently.
By combining these methods, you can structure your
imports based on what is most efficient and readable for
your specific use case.
Understanding how to import modules and functions
efficiently is a key skill in Python programming. The three
main import techniques—using import , from , and as —
each have their strengths and use cases. By combining
them effectively, you can keep your code organized,
readable, and efficient. It’s important to choose the right
import method for the task at hand, whether you're working
with a large library, a few functions, or creating aliases for
frequently used modules. As you continue to develop in
Python, make sure to practice different import scenarios in
your projects to improve the clarity and maintainability of
your code.
7.2.2 - Practical usage examples
In the world of programming, Python has established itself
as a versatile language that can be applied to a wide range
of use cases. However, to truly master Python and be able
to leverage its full potential, it's essential to understand not
just the language itself but also the vast array of libraries
and modules that come with it. Python's standard library
contains a wealth of modules that are widely used for
everyday tasks, helping programmers to solve common
problems efficiently without reinventing the wheel.
In this chapter, we will explore practical examples of some
of the most popular and useful Python modules. These
modules are widely used in various fields, from system
administration to web development and data analysis. By
learning how to use them, you will be equipped with
powerful tools to tackle a variety of real-world scenarios. We
will focus on three key modules that are commonly
encountered in Python development: os , sys , and datetime
. These modules will help you handle files and directories,
interact with the system environment, and manage time-
related tasks—all of which are essential for most
programming tasks.
1. The 'os' Module: Interacting with the Operating System
The os module is one of Python's most important and
frequently used libraries. It provides a way to interact with
the operating system in a portable manner. This module is
indispensable for file and directory manipulation, as well as
for interacting with the environment variables and other
system-level functionalities. By using os , Python code can
be written to handle a variety of file management tasks,
making it particularly useful for system administration and
automation scripts.
Main Functionalities
Some of the key functionalities of the os module include:
- Navigating the file system: The ability to list directories
and manipulate files within them.
- Creating and removing directories: The capability to create
and delete directories and files.
- Working with paths: Manipulating file paths and
constructing platform-independent paths.
Let’s look at some practical examples.
Example 1: Listing Files in a Directory
The os.listdir() function is used to list all files and directories
in the specified directory. This can be especially useful for
performing tasks like checking the contents of a folder or
processing files one by one.
•••
import os

# List all files and directories in the current directory
files_and_dirs = os.listdir('.')
print(files_and_dirs)
•••
import os

# Create a new directory called 'new_directory'
os.mkdir('new_directory')
print("Directory created successfully")
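The third functionality listed earlier, working with paths, is usually handled with os.path.join, which builds platform-independent paths. A minimal sketch (the folder and file names below are only illustrative):
•••
import os

# Join path components in a platform-independent way
full_path = os.path.join('new_directory', 'data', 'report.txt')
print(full_path)  # e.g. new_directory/data/report.txt on Linux and macOS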
import sys

# Print all command-line arguments
print("Number of arguments:", len(sys.argv))
print("Arguments:", sys.argv)
If you run this script from the command line like this:
•••
python script.py arg1 arg2 arg3
The output would be:
•••
Number of arguments: 4
Arguments: ['script.py', 'arg1', 'arg2', 'arg3']
import sys

# Print Python version
print("Python Version:", sys.version)

# Print platform information
print("Platform:", sys.platform)
For example, on a Unix-based system, sys.platform might
return 'linux' or 'darwin', depending on the system.
3. The 'datetime' Module: Working with Dates and Times
The datetime module is one of Python's most powerful and
flexible modules for working with dates and times. It allows
you to easily manipulate dates and times, perform date
arithmetic, and format dates in a variety of ways. This
module is essential for applications that require handling
dates, such as scheduling systems, time logging, and any
form of time-based computation.
Main Functionalities
Some of the key features of the datetime module include:
- Handling date and time objects: You can create datetime
objects to represent specific moments in time.
- Manipulating dates and times: The ability to add or
subtract time intervals from dates.
- Formatting and parsing dates: Converting date objects to
strings in different formats and vice versa.
Example 1: Getting the Current Date and Time
The datetime.now() function returns the current local date
and time as a datetime object.
•••
import datetime

# Get the current date and time
current_datetime = datetime.datetime.now()
print("Current Date and Time:", current_datetime)
•••
import datetime

# Get the current date
current_date = datetime.datetime.now()

# Format the date as 'day/month/year'
formatted_date = current_date.strftime("%d/%m/%Y")
print("Formatted Date:", formatted_date)
•••
import datetime

# Create two datetime objects
date1 = datetime.datetime(2025, 1, 1)
date2 = datetime.datetime(2025, 12, 31)

# Calculate the difference between the two dates
difference = date2 - date1
print("Difference:", difference.days, "days")
The output would be:
•••
Difference: 364 days
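Date arithmetic also works in the other direction: you can add or subtract a datetime.timedelta to shift a date by a fixed interval. A minimal sketch:
•••
import datetime

today = datetime.datetime.now()
in_one_week = today + datetime.timedelta(days=7)  # Shift the current date by 7 days
print("One week from now:", in_one_week)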
import random
# Generate a random integer between 1 and 10
random_int = random.randint(1, 10)
print(random_int)
•••
import random
fruits = ['apple', 'banana', 'cherry', 'date']
random_fruit = random.choice(fruits)
print(random_fruit)
•••
import random
deck = ['ace', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'jack', 'queen', 'king']
random.shuffle(deck)
print(deck)
After running this code, the deck will be shuffled, and the
order of elements will be different each time you run it.
2. Working with the json module for handling JSON data
The json module in Python is essential for working with JSON
(JavaScript Object Notation) data, which is commonly used
in APIs, configuration files, and data interchange. Python
provides functions to easily read, write, and manipulate
JSON data, converting it into Python data structures like
dictionaries and lists.
- Reading and writing JSON data:
To work with JSON data, you can use the json.load()
method to parse JSON data from a file and json.dump() to
write data to a file.
Example of reading a JSON file:
•••
import json
# Assuming 'data.json' contains a JSON object
with open('data.json', 'r') as file:
    data = json.load(file)
print(data)
import json
data = {'name': 'Alice', 'age': 30, 'city': 'Wonderland'}
with open('data.json', 'w') as file:
    json.dump(data, file)
•••
import json
python_dict = {'name': 'John', 'age': 25}
json_string = json.dumps(python_dict)
print(json_string)  # {"name": "John", "age": 25}

back_to_dict = json.loads(json_string)
print(back_to_dict)  # {'name': 'John', 'age': 25}
import requests
url = 'https://api.example.com/data'
response = requests.get(url)
if response.status_code == 200:
    data = response.json()  # Parses the JSON response body
    print(data)
else:
    print(f"Error: {response.status_code}")
•••
import requests
url = 'https://api.example.com/submit'
data = {'name': 'Bob', 'age': 22}
response = requests.post(url, json=data)
if response.status_code == 201:
    print('Data submitted successfully')
else:
    print(f"Error: {response.status_code}")
Here, we send a JSON object to the server. If the request is
successful and the server responds with a 201 Created
status, the program prints a success message.
- Handling connection errors:
It's important to handle potential errors, such as
connection timeouts or invalid URLs. The requests module
provides exceptions like
requests.exceptions.RequestException to handle these
situations.
Example:
import requests
try:
    response = requests.get('https://nonexistentwebsite.com')
    response.raise_for_status()  # Will raise an error for non-200 status codes
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
import re
text = "My email is [email protected]"
match = re.search(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b', text)
if match:
    print(f"Found email: {match.group()}")
else:
    print("No email found")
•••
import re
text = "The sky is blue"
new_text = re.sub(r'blue', 'green', text)
print(new_text)  # "The sky is green"
•••
import re
phone_number = input("Enter your phone number: ")
# The pattern below is only an example: it accepts exactly 10 digits
if re.match(r'^\d{10}$', phone_number):
    print("Valid phone number")
else:
    print("Invalid phone number")
When you run this command, pip will output its current
version. For example:
•••
pip install requests
Once you run this command, pip will connect to PyPI, find
the latest version of the requests package, download it, and
install it into your current Python environment. After
installation, you can import and use the package in your
Python scripts, like so:
•••
import requests

response = requests.get("https://www.example.com")
print(response.text)
The installation process may take a few moments,
depending on the size of the package and the speed of your
internet connection. If you are installing multiple packages,
you can list them all at once, separated by spaces. For
example:
•••
pip install numpy pandas matplotlib
•••
pip install --upgrade requests
This will ensure that you always have the most up-to-date
version of the package.
8. Uninstalling Packages
If you no longer need a specific package or want to free up
space in your environment, you can easily uninstall
packages using pip. The command to uninstall a package is:
•••
pip uninstall <package_name>
For example, to uninstall the requests package, you would
type:
•••
pip uninstall requests
•••
python -m venv myenv
source myenv/bin/activate  # On Windows, use myenv\Scripts\activate
•••
pip list
Package    Version
---------- ----------
pip        21.1.2
setuptools 49.6.0
requests   2.25.1
numpy      1.20.3
•••
pip uninstall requests
•••
pip freeze > requirements.txt
•••
requests==2.25.1
numpy==1.20.3
flask==2.0.1
•••
pip install -r requirements.txt
- On macOS/Linux:
source myprojectenv/bin/activate
followed by:
•••
pip install --upgrade $(pip list --outdated | awk 'NR>2 {print $1}')
•••
pip install package_name
For example, if you wanted to install the popular requests
library, which is used for making HTTP requests, you would
run:
•••
pip install requests
•••
pip install numpy
•••
pip install numpy==1.19.5
The ' ==' operator specifies that you want to install exactly
the version indicated. If you only want a version greater or
equal to a certain release, you could use the '>=' operator,
like so:
•••
pip install "numpy>=1.21"
This is useful for ensuring compatibility with your project
while allowing for newer versions to be used if they meet
the minimum version requirement.
Another practical use of pip install is to install packages
listed in a requirements file. In collaborative projects,
developers often use a requirements.txt file to specify all
dependencies required for the project. You can use the
following command to install all the packages listed in such
a file:
•••
pip install -r requirements.txt
•••
pip uninstall package_name
• ••
pip uninstall requests -y
•••
pip uninstall requests numpy
•••
pip list
Package  Version
-------- -------
pip      23.2.1
requests 2.28.1
flask    2.2.3
This output allows you to quickly check which packages are
installed and their corresponding versions, which is helpful
when debugging issues or ensuring compatibility with
specific dependencies.
One of the powerful features of pip list is its ability to use
optional flags. For instance, the '--outdated' flag is
particularly useful to identify packages that have newer
versions available. By running:
•••
pip list --outdated
•••
pip install numpy
•••
import numpy as np
print(np.__version__)
This code will import the NumPy library and print the
installed version of NumPy. If there are no errors and the
version number is displayed, NumPy has been installed
correctly.
5. Creating Arrays in NumPy
NumPy provides several methods for creating arrays. Here
are some of the most commonly used functions:
- numpy.array() : This is the most basic function for creating
a NumPy array. It converts a Python list (or any other
sequence) into a NumPy array. For example:
•••
import numpy as np
my_list = [1, 2, 3, 4]
np_array = np.array(my_list)
print(np_array)
Output:
[1 2 3 4]
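- numpy.zeros() : This function creates an array filled with zeros, which is handy for initializing an array before filling it with data. A minimal sketch that produces the 3x3 output shown below:
•••
zeros_array = np.zeros((3, 3))  # Create a 3x3 array of zeros
print(zeros_array)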
Output:
•••
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
- numpy.ones() : Similar to zeros() , this function creates an
array filled with ones. It can be used when you need to
initialize an array with a value of 1. For example:
•••
ones_array = np.ones((2, 4))  # Create a 2x4 array of ones
print(ones_array)
Output:
•••
[[1. 1. 1. 1.]
 [1. 1. 1. 1.]]
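- numpy.arange() : This function generates evenly spaced values within a given range, similar to Python's built-in range() . A minimal sketch that produces the output shown below (assuming start 0, stop 10, and step 2):
•••
arange_array = np.arange(0, 10, 2)  # Values from 0 up to (but not including) 10, in steps of 2
print(arange_array)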
Output:
[0 2 4 6 8]
•••
linspace_array = np.linspace(0, 1, 5)  # Create an array of 5 values between 0 and 1
print(linspace_array)
Output:
•••
[0.   0.25 0.5  0.75 1.  ]
import numpy as np

arr = np.array([1, 2, 3, 4, 5])
print(arr)
•••
print(arr[0])  # Output: 1
print(arr[3])  # Output: 4
•••
print(arr[:3])  # Output: [1 2 3]
You can also specify the step size in your slice. For example,
to get every other element from the array:
•••
print(arr[::2])  # Output: [1 3 5]
•••
arr[1] = 10
print(arr)  # Output: [1 10 3 4 5]
•••
arr[:3] = [6, 7, 8]
print(arr)  # Output: [6 7 8 4 5]
•••
arr1_2d = np.array([[1, 2], [3, 4]])
arr2_2d = np.array([[5, 6], [7, 8]])
print(np.matmul(arr1_2d, arr2_2d))  # Output: [[19 22]
                                    #          [43 50]]
•••
import numpy as np

# Creating a 2D array (matrix) with 2 rows and 3 columns
arr_2d = np.array([[1, 2, 3], [4, 5, 6]])

print(arr_2d)
Output:
•••
[[1 2 3]
 [4 5 6]]
In this case, the array arr_2d has two dimensions: one axis
with two rows and another with three columns. You can
access specific elements of this array using indices, just like
you would with a list, but you need to specify both the row
and the column:
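A minimal sketch of that row/column indexing, using the arr_2d array defined above:
•••
print(arr_2d[0, 1])  # Row 0, column 1 -> 2
print(arr_2d[1, 2])  # Row 1, column 2 -> 6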
•••
[[2 3 4]
 [5 6 7]]
Output:
•••
[[1 2 3]
 [4 5 6]]
Output:
•••
[1 2 3 4 5 6]
•••
# This would raise an error:
# arr_reshaped_invalid = arr_1d.reshape(4, 2)
Output:
•••
[[2 4 6]
 [5 7 9]]
import pandas as pd

# Example of a Series
data = [10, 20, 30, 40]
index = ['a', 'b', 'c', 'd']
series = pd.Series(data, index=index)
print("Series Example:")
print(series)

# Example of a DataFrame
data = {'Name': ['Alice', 'Bob', 'Charlie'], 'Age': [25, 30, 35],
        'City': ['New York', 'Los Angeles', 'Chicago']}
df = pd.DataFrame(data)
print("\nDataFrame Example:")
print(df)
Assume example.csv contains the following data:
•••
Name,Age,City
Alice,25,New York
Bob,30,Los Angeles
Charlie,35,Chicago
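The reading step itself uses pandas.read_csv ; a minimal sketch, assuming the file is saved as example.csv in the working directory:
•••
import pandas as pd

# Read the CSV file into a DataFrame
df_csv = pd.read_csv('example.csv')
print(df_csv)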
When you run the code, df_csv will contain the data from
the file as a DataFrame, where the columns correspond to
the headers in the CSV file.
2. Importing data from an Excel file:
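Excel files are read with pandas.read_excel ; a minimal sketch, assuming a file named example.xlsx and an Excel engine such as openpyxl installed:
•••
import pandas as pd

# Read the first sheet of the Excel file into a DataFrame
df_excel = pd.read_excel('example.xlsx')
print(df_excel)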
•••
# Getting a summary of the DataFrame
print("\nDataFrame Info:")
df.info()
3. describe : This method generates summary statistics for
numerical columns in the DataFrame, such as the mean,
median, standard deviation, and percentiles.
•••
# Generating summary statistics
print("\nSummary Statistics:")
print(df.describe())
import pandas as pd

# Sample DataFrame
data = {'Name': ['Alice', 'Bob', 'Charlie'],
        'Age': [25, 30, 35],
        'City': ['New York', 'Los Angeles', 'Chicago']}

df = pd.DataFrame(data)

# Select a single column
ages = df['Age']

# Select multiple columns
name_city = df[['Name', 'City']]

print(ages)
print(name_city)
Filtering Rows
You can filter rows using conditions.
Example:
•••
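# A minimal sketch of row filtering with a boolean condition,
# using the Name/Age/City DataFrame defined above
adults_over_28 = df[df['Age'] > 28]
print(adults_over_28)  # Rows where Age is greater than 28 (Bob and Charlie)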
Using Conditions
Multiple conditions can be combined using logical operators
like '&' (and), '|' (or), and '~' (not).
Example:
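A minimal sketch, again using the Name/Age/City DataFrame from above:
•••
# Combine conditions with & (and), | (or) and ~ (not); each condition needs parentheses
result = df[(df['Age'] > 25) & (df['City'] == 'Chicago')]
print(result)  # Only Charlie satisfies both conditions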
# Sample DataFrame
data = {'Category': ['A', 'A', 'B', 'B'],
        'Value': [10, 20, 30, 40]}

df = pd.DataFrame(data)

# Group by 'Category' and calculate the sum
grouped_sum = df.groupby('Category')['Value'].sum()

# Group by 'Category' and calculate the mean
grouped_mean = df.groupby('Category')['Value'].mean()

print(grouped_sum)
print(grouped_mean)
Multiple Aggregations
You can apply multiple aggregation functions using
'.agg()'.
Example:
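A minimal sketch, assuming the same Category/Value DataFrame used in the grouping example:
•••
# Apply several aggregation functions at once with .agg()
summary = df.groupby('Category')['Value'].agg(['sum', 'mean', 'max'])
print(summary)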
# Sample DataFrames
df1 = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
df2 = pd.DataFrame({'A': [5, 6], 'B': [7, 8]})

# Concatenate along rows
concatenated = pd.concat([df1, df2])

# Concatenate along columns
concatenated_cols = pd.concat([df1, df2], axis=1)

print(concatenated)
print(concatenated_cols)
Using merge
The merge method performs SQL-style joins (inner, outer,
left, right) on DataFrames.
Example:
•••
# Sample DataFrames
df1 = pd.DataFrame({'ID': [1, 2, 3], 'Name': ['Alice', 'Bob', 'Charlie']})
df2 = pd.DataFrame({'ID': [2, 3, 4], 'Age': [30, 35, 40]})

# Inner join on 'ID'
merged_inner = pd.merge(df1, df2, on='ID', how='inner')

# Outer join on 'ID'
merged_outer = pd.merge(df1, df2, on='ID', how='outer')

print(merged_inner)
print(merged_outer)
Using join
The join method is used to combine DataFrames on their
index.
Example:
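A minimal sketch with two small example DataFrames that share the same index:
•••
import pandas as pd

df_left = pd.DataFrame({'Salary': [50000, 60000]}, index=['Alice', 'Bob'])
df_right = pd.DataFrame({'Department': ['HR', 'IT']}, index=['Alice', 'Bob'])

# Combine the two DataFrames on their index
joined = df_left.join(df_right)
print(joined)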
•••
# Add a new column 'Bonus' that is 10% of the 'Salary'
df['Bonus'] = df['Salary'] * 0.1

# Calculate the total compensation
df['Total_Compensation'] = df['Salary'] + df['Bonus']

# Display the modified DataFrame
print(df.head())
•••
It’s important to test the saved files to confirm that the data
has been written correctly. You can reload the saved files
and inspect them as follows:
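A minimal sketch, assuming the data was previously saved as output.csv and output.xlsx (and that an Excel engine such as openpyxl is installed):
•••
import pandas as pd

# Reload the saved files and inspect the first rows
reloaded_csv = pd.read_csv('output.csv')
reloaded_excel = pd.read_excel('output.xlsx')

print(reloaded_csv.head())
print(reloaded_excel.head())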
•••
import requests

# Define the URL for the API
url = "https://api.exchangerate-api.com/v4/latest/USD"

# Make the GET request
response = requests.get(url)

# Parse the JSON response
data = response.json()

# Access specific information (e.g., exchange rate for EUR)
exchange_rate = data["rates"]["EUR"]

# Print the exchange rate
print(f"Exchange rate from USD to EUR: {exchange_rate}")
In this example:
1. No additional parameters are required:
This particular API does not need any extra parameters
since it defaults to USD as the base currency. The URL alone
is enough to get the exchange rates.
2. Accessing specific data:
Once the response is parsed into a Python dictionary, you
can access specific data by using the appropriate keys. In
this case, data["rates"]["EUR"] retrieves the exchange rate
for the Euro.
To demonstrate how parameters can refine a query,
consider an API that allows you to search for articles or blog
posts. Such an API might accept parameters like query ,
author , or date . Here’s how you could pass these
parameters:
import requests

# Define the URL for the API
url = "https://example.com/api/articles"

# Define the search parameters
params = {
    "query": "Python",
    "author": "John Doe",
    "date": "2023-01-01"
}

# Make the GET request
response = requests.get(url, params=params)

# Parse the JSON response
articles = response.json()

# Print the articles
print(articles)
•••
# Example request
response = requests.get("https://example.com/api/resource")

# Check status code
if response.status_code == 200:
    print("Request successful:", response.json())
elif response.status_code == 404:
    print("Error: Resource not found.")
elif response.status_code == 500:
    print("Error: Server encountered an issue.")
else:
    print(f"Unexpected status code {response.status_code}: {response.text}")
try:
    response = requests.get("https://example.com/api/resource")
    response.raise_for_status()  # Raises HTTPError for bad responses (4xx, 5xx)
    print("Success:", response.json())
except requests.exceptions.HTTPError as http_err:
    print("HTTP error occurred:", http_err)
except requests.exceptions.RequestException as req_err:
    print("Request error occurred:", req_err)
4. Uploading Files
The Requests library also supports file uploads in POST
requests using the files parameter. For example, if an API
allows you to upload profile pictures, you can do the
following:
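A minimal sketch, where the endpoint URL and the profile_pic field name are only illustrative:
•••
import requests

# Upload a file using the files parameter of a POST request
with open('profile.jpg', 'rb') as picture:
    files = {'profile_pic': picture}
    response = requests.post('https://api.example.com/upload', files=files)

print(response.status_code)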
•••
import requests

try:
    response = requests.get('https://api.example.com/data', timeout=(5, 10))
    print(response.json())
except requests.exceptions.Timeout:
    print("The request timed out. Please try again later.")
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
•••
try:
    response = requests.get('https://api.example.com/resource', timeout=10)
    response.raise_for_status()  # Trigger an HTTPError for bad responses (4xx, 5xx)
    print("Response received:", response.json())
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err}")
except requests.exceptions.ConnectionError:
    print("A connection error occurred. Check your network settings.")
except requests.exceptions.Timeout:
    print("The request timed out. Please try again later.")
except requests.exceptions.RequestException as req_err:
    print(f"An unexpected error occurred: {req_err}")
•••
import meu_modulo
from importlib import reload

reload(meu_modulo)  # Reload the module after making changes
•••
# meu_modulo.py

nome = "Alice"

def saudacao():
    return f"Hello, {nome}!"
•••
# main.py

import meu_modulo

print(meu_modulo.saudacao())  # Calls the saudacao() function from meu_modulo
print(meu_modulo.nome)        # Accesses the nome variable from meu_modulo
•••
# main.py

import meu_modulo as mm

print(mm.saudacao())  # Calls the saudacao() function from meu_modulo, now using the alias mm
•••
# meu_modulo.py

nome = "Alice"

def saudacao():
    return f"Hello, {nome}!"
•••
# main.py

import meu_modulo

# Calling the saudacao function and printing its return value
greeting = meu_modulo.saudacao()
print(greeting)  # Output: Hello, Alice!

# Accessing the nome variable and printing its value
print(meu_modulo.nome)  # Output: Alice
•••
# mymodule.py

def circle_area(radius):
    return 3.14 * radius ** 2
•••
# geometry/area.py

def circle_area(radius):
    return 3.14 * radius ** 2

def square_area(side):
    return side * side
# main.py

import mymodule

result = mymodule.circle_area(5)
print(result)
•••
# main.py

from mymodule import circle_area

result = circle_area(5)
print(result)
# main.py

from geometry.area import circle_area

result = circle_area(5)
print(result)
api_project/
|-- api/
|   |-- __init__.py
|   |-- connection.py
|   |-- response.py
|   |-- database.py
|-- main.py
With this setup, when you import the api package, you can
directly use the functions from all the modules:
•••
import api

api.connect_to_api()
api.process_response(response)
api.save_data(data)
2. Add your project files to the repository:
•••
git add .
git commit -m "Initial commit"
•••
Once they have cloned the repository, they can use your
modules just like they would any other Python package.
Using Local Dependency Management Tools
Python’s pip tool, typically used to install packages from
PyPI, can also be used to install local packages. This is
useful when sharing Python code within a development
team. You can distribute the module as a '.tar.gz' or '.whl'
file, which is a standard format for Python packages.
To install the package locally, you can run the following
command:
•••
pip install /path/to/your/package.tar.gz
•••
-e /path/to/local/package
•••
pip install setuptools wheel
•••
python setup.py sdist bdist_wheel
•••
pip install twine
•••
twine upload dist/*
# mypackage/greet.py
def say_hello(name):
    return f"Hello, {name}!"
•••
twine upload dist/*
flask>=1.1.0,<=2.0.0
•••
numpy==1.18.5
pandas>=1.2.0,<1.3.0
flask==2.0.1
requests>=2.25.1
•••
numpy==1.18.5
pandas==1.2.3
requests==2.25.1
The requirements.txt file will be generated with this content:
•••
numpy==1.18.5
pandas==1.2.3
requests==2.25.1
•••
numpy==1.21.2
requests>=2.25.0
pandas
Here:
- numpy==1.21.2 specifies that version 1.21.2 of numpy
must be installed.
- requests>=2.25.0 means that any version of requests
greater than or equal to 2.25.0 will work.
- pandas without a version number means the latest version
of the library will be installed.
This file serves as a record of the external packages your
project depends on, making it easier for collaborators or
future versions of yourself to set up the environment
correctly.
2. Using pip install -r requirements.txt
The primary way to install the dependencies listed in a
requirements.txt file is by using the pip install -r command.
The '-r' flag tells pip to install the packages listed in a
requirements file. Here's how to do it:
1. First, make sure you have a requirements.txt file in your
project directory.
2. Then, open a terminal (or command prompt) and
navigate to your project’s root folder where the
requirements.txt file is located.
3. Run the following command:
•••
pip install -r requirements.txt
•••
numpy==1.21.2
requests>=2.25.0
If you already have a different version of numpy installed
(say 1.19.5 ), pip will uninstall the older version and install
1.21.2 instead. If the correct version of requests is already
installed, pip will skip it.
Additionally, pip will resolve dependencies, meaning that if
one package requires another (e.g., pandas might require
numpy ), pip will automatically install those as well.
4. Example: Installing dependencies from requirements.txt
Let’s walk through an example of how you would create a
requirements.txt file and use it to install dependencies.
1. Create a requirements.txt file:
Suppose you are working on a Python project that requires
numpy , requests , and pandas . Create a requirements.txt
file with the following contents:
numpy==1.21.2
requests>=2.25.0
pandas
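2. Install the dependencies:
From the project's root folder, run the same command shown earlier:
•••
pip install -r requirements.txt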
pip will then go through the file, download, and install the
packages.
3. Verify the installation:
After the installation completes, you can verify that the
packages were correctly installed by running:
•••
pip freeze
This will list all the installed packages and their versions,
and you should see numpy , requests , and pandas among
them.
5. Conclusion
At this point, you should understand how to create a
requirements.txt file, what the pip install -r requirements.txt
command does, and how to use it to install dependencies in
a Python project. This process is essential for managing
project dependencies effectively, ensuring that the right
libraries and versions are installed, and helping
collaborators easily set up the project environment. By
using the requirements.txt file, you ensure consistency
across different environments, avoiding potential conflicts
caused by different package versions.
Feel free to apply this practice in your own Python projects, whether you’re developing a personal project, collaborating with others, or maintaining a codebase over a longer period of time.
•••
numpy==1.23.5
This line tells Python’s package installer, pip, to install exactly version 1.23.5 of numpy. If you omit the version, pip will install the latest version of the library, but pinning the version keeps installations reproducible. pip is the most widely used tool for installing and managing Python packages; when it processes a requirements file, it reads the file line by line and installs each listed package exactly as specified.
•••
package_name==version
For example, if your project depends on the numpy , pandas , and requests libraries, the file might look like this:
•••
numpy==1.23.5
pandas==1.4.2
requests==2.25.0
•••
numpy>=1.21.0
Similarly, you can specify a maximum version:
•••
numpy<=1.23.0
You can mix version constraints like so:
•••
numpy>=1.21.0,<=1.23.0
It’s important to note that, while you can create this file by hand, you can also generate it automatically from the packages already installed in your environment:
•••
pip freeze > requirements.txt
To install everything listed in the file, run the following command in your terminal:
•••
pip install -r requirements.txt
When you run this command, pip goes through the following process:
1. Reading the requirements.txt file: pip begins by reading the file line by line. For each line, it extracts the name and version constraint of the package being installed.
2. Installing the packages: pip downloads each package and also resolves and installs its dependencies.
In more complex scenarios, you can also create separate requirements files, for example one for development tools and another for the packages in production.
•••
git+https://github.com/user/repository.git
This will tell pip to fetch the package directly from the Git repository and install it into your project, which makes the environment easy to set up on any system.
When working with Python projects, managing external dependencies is usually done through a requirements.txt file. This file lists all the packages needed for the project, so that anyone can recreate the environment correctly.
As an example, suppose you are creating a project called DataAnalysis which uses two external libraries: pandas and matplotlib.
•••
$ mkdir DataAnalysis
$ cd DataAnalysis
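Next, create a virtual environment inside the project folder (the environment name venv below is just a common convention):
•••
$ python -m venv venv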
•••
- On macOS/Linux:
•••
$ source venv/bin/activate
- On Windows:
•••
$ venv\Scripts\activate
•••
(venv) $ pip install pandas matplotlib
This will install the latest versions of these libraries and their dependencies. To record them in the requirements.txt file, run the following command:
•••
(venv) $ pip freeze > requirements.txt
•••
matplotlib==3.6.3
pandas==1.5.3
To recreate this environment on another machine, you only need to use the pip install command with the '-r' flag:
•••
$ pip install -r requirements.txt
This installs every package listed in the requirements.txt file.
To confirm which packages are installed in the environment, you can use the pip list command. This command will display a table of all installed packages and their versions:
•••
(venv) $ pip list
The output will look similar to the following:
Package    Version
---------- -------
matplotlib 3.6.3
pandas     1.5.3
pip        22.3.1
setuptools 65.5.0
Here, you can see that both matplotlib and pandas are installed, along with their dependencies.