OS Report
NOVEMBER 2023
KATTANKULATHUR – 603203
BONAFIDE CERTIFICATE
Certified that this B.Tech project report titled “MEMORY AND CPU USAGE ANALYZER” is the bonafide work of Mr. K. Tarun [Reg. No.: RA2211031010104], Mr. S. Sathvik [Reg. No.: RA2211031010109], and Mr. B. Rohith [Reg. No.: RA2211031010106], who carried out the project work under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion for this or any other candidate.
SIGNATURE OF INTERNAL EXAMINER
SIGNATURE OF EXTERNAL EXAMINER
DECLARATION

We hereby certify that this assessment complies with the University's rules and regulations, and we confirm that all the work contained in this assessment is our own except where we have:

• Referenced and put in inverted commas all quoted text (from books, the web, etc.)
• Given the sources of all pictures, data, etc. that are not our own
• Not made any use of the report(s) or essay(s) of any other student(s), either past or present

We understand that any false claim in respect of this work will be penalized in accordance with the University's rules on plagiarism, and we certify that this assessment is our own work, except where indicated by referencing, and that we have followed the good academic practices noted above.

If you are working in a group, please write your registration numbers and sign with the date.
ACKNOWLEDGEMENT

We express our gratitude to SRM Institute of Science and Technology for the facilities extended for the project work and for the continued support.

We extend our sincere thanks to the Dean-CET, SRM Institute of Science and Technology, and to the School of Computing, SRM Institute of Science and Technology, for the support extended throughout the project work.

We are incredibly grateful to our Head of the Department, Dr. K. Annapurni Paniyappan, SRM Institute of Science and Technology, for her suggestions and encouragement at all stages of the project work.

We register our immeasurable thanks to our Faculty Advisor, Dr. P. Gouthaman, Assistant Professor, SRM Institute of Science and Technology.

Our inexpressible respect and thanks go to our guide, Dr. G. Saranya, Associate Professor, for providing us with an opportunity to pursue our project under her mentorship. She provided us with the freedom and support to explore the research topics of our interest. Her passion for solving problems and making a difference in the world has always been inspiring.

K. Tarun [RA2211031010104]
S. Sathvik [RA2211031010109]
B. Rohith [RA2211031010106]
TABLE OF CONTENTS
1 ABSTRACT
2 INTRODUCTION
3 OBJECTIVES
4 LITERATURE SURVEY
5 HARDWARE AND SOFTWARE REQUIREMENTS
6 ARCHITECTURE
7 CODE
8 OUTPUT
9 CONCLUSION
10 REFERENCES
ABSTRACT
The provided Python script constitutes a simple data usage analyzer GUI built with Tkinter and the
Psutil library. The application aims to monitor and display real-time information about a specified
process's CPU and memory usage. The GUI comprises an entry field to input a process ID, along with
"Start Analysis" and "Stop Analysis" buttons to initiate or halt the monitoring process.
Upon inputting a valid process ID and clicking "Start Analysis," the script launches two separate
threads. The first thread, monitor_process_thread, continuously retrieves CPU and memory usage
metrics for the specified process. It fetches data like CPU percentage, active private working set in
megabytes, and memory percentage, updating a text label to display this information in the GUI.
Simultaneously, the second thread, warning_thread, monitors system-wide CPU and memory usage.
It periodically checks if either the CPU or memory utilization exceeds 50%. If surpassed, it triggers
warning message boxes notifying the user of the elevated usage, urging attention.
The functionality includes error handling for cases where the specified process ID is not found,
prompting an error message via a message box. Additionally, the "Stop Analysis" button terminates
both monitoring threads, disabling the continuous data retrieval and warning functionalities.
The GUI provides real-time updates on CPU and memory usage of the specified process, enabling
users to track system resource utilization and receive warnings if usage surpasses predefined
thresholds. However, for extended functionalities, it may require enhancements such as additional
error handling, graphical data representation, or customization options for monitoring thresholds,
making it more versatile and user-friendly.
INTRODUCTION
The Data Usage Analyzer represents an innovative Python-based solution meticulously crafted with
Tkinter and Psutil libraries to cater to the contemporary needs of resource monitoring in computing
environments. Its significance transcends mere data presentation, positioning itself as a critical tool
for professionals and enthusiasts alike, offering profound insights into the dynamic landscape of CPU
and memory utilization by specific processes. This tool's relevance stems from its pivotal role in
deciphering resource usage patterns, enabling astute decisions that steer system efficiency,
performance optimization, and overall stability.
Functionally, the Data Usage Analyzer operates as an intricate yet user-friendly system, functioning
seamlessly through a graphical interface. It allows users to interact with the application effortlessly,
facilitating the entry of a Process ID (PID) to initiate the monitoring process. Once activated, the application delivers continuous real-time updates, displaying the process's CPU percentage, active private working set in megabytes, memory percentage, and corresponding timestamps. This intuitive interface empowers users to
track and comprehend resource usage metrics, fostering an environment conducive to informed
decision-making and proactive system management.
The architectural backbone of the Data Usage Analyzer hinges upon a sophisticated multi-threaded
design. Leveraging distinct threads, this application meticulously juggles tasks, with one thread
devoted to collecting and refreshing process-specific data while another diligently monitors system-
wide CPU and memory usage. This parallel thread management not only ensures continuous real-
time updates but also establishes a robust and responsive environment, where the application
seamlessly fetches critical metrics without compromising performance, delivering an immersive
monitoring experience.
Delving deeper into the tool's impact, the Data Usage Analyzer emerges as a cornerstone for
professionals engaged in system administration, software development, and performance
optimization. By offering an unobtrusive lens into resource utilization dynamics, it empowers
stakeholders to navigate through the complexities of modern computing environments. The ability to
interpret real-time CPU and memory usage data equips decision-makers with actionable insights,
enabling them to fine-tune processes, optimize resource allocation, and preemptively address
potential bottlenecks. This functionality is especially crucial in industries where system performance
and stability are paramount, laying the groundwork for efficient operations and heightened
productivity.
In essence, the Data Usage Analyzer stands as a testament to the evolving demands of system
management, encapsulating the essence of real-time resource monitoring. Beyond its surface-level
functionalities, it embodies a holistic approach, catering to the diverse needs of users navigating the
intricate maze of computational resource management. Its innate ability to provide actionable insights
fortifies its position as an indispensable tool, fostering a culture of informed decision-making and creating environments where efficiency, stability, and performance harmoniously coexist.
OBJECTIVES
The primary goal of this Python application is to create a versatile and comprehensive process analysis
tool. It uses the psutil library to monitor CPU and memory usage of a specific process identified by
its unique Process ID (PID). The application employs threading for concurrent execution to efficiently
monitor the targeted process continuously. Additionally, it integrates exception handling mechanisms
to gracefully manage unexpected scenarios, ensuring stability during runtime.
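As a brief, hedged illustration of this exception-handling goal (separate from the project code given in the CODE section), the sketch below shows how a user-supplied PID might be validated with psutil before monitoring begins; the function name validate_pid and the inclusion of psutil.AccessDenied are illustrative assumptions.

import psutil

def validate_pid(pid):
    # Return the Process object if the PID exists and is accessible, otherwise None.
    try:
        return psutil.Process(pid)
    except psutil.NoSuchProcess:
        print(f"No process with PID {pid} was found.")
    except psutil.AccessDenied:
        print(f"Process {pid} exists but cannot be inspected with the current privileges.")
    return None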
At its core, the application provides real-time monitoring capabilities by offering live updates on CPU
and memory usage trends for the specified process. It also allows users to set predefined thresholds,
enabling timely warnings when resource usage exceeds established limits. This proactive warning
system helps users stay informed about critical resource consumption, fostering a proactive approach
to system management and resource allocation.
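One way this threshold objective could be realised is sketched below; the configurable threshold_percent parameter is an illustrative assumption, since the code presented later in this report fixes both limits at 50%.

import psutil

def usage_exceeds(threshold_percent):
    # Return True if system-wide CPU or memory usage is above the given limit.
    cpu = psutil.cpu_percent(interval=1)
    memory = psutil.virtual_memory().percent
    return cpu > threshold_percent or memory > threshold_percent

# Example: warn at 75% instead of the 50% used in the project code.
if usage_exceeds(75):
    print("Resource usage has exceeded the configured threshold.")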
Moreover, the application focuses on creating a responsive and interactive graphical user interface to
enhance the user experience. This ensures a seamless and intuitive monitoring experience for users
with varying technical backgrounds.
As a practical example for developers, this application showcases the implementation of robust
process monitoring functionalities in Python. It demonstrates the use of key technologies such as
psutil, threading for concurrency, and efficient exception handling to construct a reliable system
analysis tool. Emphasizing real-time monitoring, threshold-based warnings, and user-centric interface
design, the application encapsulates essential principles in system monitoring and illustrates how
these elements work together to create a powerful yet user-friendly tool for process analysis and
resource management.
Overall, this Python application serves as a blueprint for developers aiming to create effective process
monitoring solutions. It addresses both technical aspects of system monitoring and user experience,
making it a valuable resource for creating responsive and intuitive process analysis tools in Python.
LITERATURE SURVEY
1. "The Structure of the ‘THE’-Multiprogramming System" (Edsger W. Dijkstra, 1968).
Approach/Algorithm: LRU (Least Recently Used).
Key findings: Introduces the concept of paging and its benefits for memory management.

2. "A Working Set Model for Program Behavior" (Peter J. Denning, 1968).
Approach/Algorithm: No particular approach, but use of all page replacement algorithms.
Key findings: Discusses the working set model, which is fundamental in understanding memory management algorithms.

3. "A Fast File System for UNIX" (Marshall Kirk McKusick, William N. Joy, Samuel J. Leffler, and Robert S. Fabry, 1984).
Approach/Algorithm: Unix Fast File System (FFS).
Key findings: Introduces the Unix Fast File System (FFS) and discusses techniques for efficient disk space utilization and storage allocation.

4. "Finding a Needle in Haystack: Facebook's Photo Storage" (Dong Zhou, Harry C. Li, Raghav Lagisetty, Aravind Narayanan, Kashi Venkatesh Vishwanath, and Zhe Wu, 2010).
Approach/Algorithm: Data deduplication, compression, sharding, caching, and load balancing.
Key findings: Presents Facebook's approach to managing and analyzing large-scale photo storage, including techniques for optimizing disk usage and retrieval performance.
Hardware Requirements:
Processor (CPU): A modern multi-core processor (dual-core or higher) is recommended for efficient
performance while running the monitoring processes. However, the application can function on single-core
processors as well.
Memory (RAM): A minimum of 2GB RAM is suggested for running the Python script and its associated
monitoring threads. Higher RAM capacity can enhance the system's ability to handle multiple processes and
applications simultaneously.
Storage: Adequate storage space is required to accommodate the Python interpreter, libraries (like Psutil), and
any additional files associated with the application. The space required by the script itself is minimal, but
ensure there's ample storage for system operation.
Software Requirements:
Operating System: The script can run on various operating systems, including Windows, macOS, and Linux
distributions, as long as Python and the required libraries (like Psutil) are supported on the chosen OS.
Python Interpreter: The system must have Python installed, since the script uses Python to execute and interact with system resources. Python 3 is required, and version 3.6 or above is recommended.
Psutil Library: The application relies on the Psutil library to retrieve system and process-related information.
Ensure that Psutil is installed using a package manager like pip. The Psutil version should be compatible with
the Python version installed on the system.
Tkinter Module: Tkinter is used for creating the GUI. It comes pre-installed with most standard Python
distributions (Python's standard library), but in some cases, it might need to be installed separately.
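A small optional check such as the sketch below can confirm that the interpreter, Psutil, and Tkinter satisfy these requirements before the analyzer is launched; the helper name check_environment is an illustrative assumption.

import sys

def check_environment():
    # Confirm the recommended Python version (3.6 or newer).
    if sys.version_info < (3, 6):
        raise RuntimeError("Python 3.6 or above is recommended, found " + sys.version.split()[0])
    # Confirm that Psutil is installed and report its version.
    try:
        import psutil
        print("psutil version:", psutil.__version__)
    except ImportError:
        raise RuntimeError("psutil is not installed; install it with a package manager such as pip")
    # Confirm that Tkinter is available in this Python distribution.
    try:
        import tkinter
        print("Tcl/Tk version:", tkinter.TkVersion)
    except ImportError:
        raise RuntimeError("Tkinter is not available; it may need to be installed separately")

check_environment()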
Additional Recommendations:
Internet Connection: An active internet connection might be necessary to install Python packages (like Psutil)
if they are not already available in the system.
Updated System: Keeping the operating system, Python interpreter, and installed libraries updated to their
latest stable versions is recommended for improved performance, bug fixes, and security patches.
System Resources: It's advisable to run the application on a system that is not resource-constrained, especially
if monitoring resource-intensive processes. High CPU or memory usage by other applications might affect the
accuracy of the monitoring or the overall performance of the system.
Ensuring that the hardware meets the basic requirements and the software stack is correctly installed and
updated is crucial for the smooth functioning of the Data Usage Analyzer application.
ARCHITECTURE
The architecture of the Data Usage Analyzer intricately weaves together multiple components to create a
robust and responsive system for real-time monitoring of CPU and memory utilization. At its core, the
application employs a multi-threaded design, capitalizing on the strengths of concurrent execution to ensure
seamless data retrieval and presentation without compromising performance. This architecture relies on
Python's threading capabilities, dividing the workload into distinct threads that function autonomously yet
collaboratively to deliver a comprehensive monitoring experience.
The main thread orchestrates the graphical user interface (GUI), serving as the primary conduit for user
interaction. Upon initiating the monitoring process, this thread remains responsive to user inputs, facilitating
PID entry, initiating monitoring, and handling cessation requests. Simultaneously, auxiliary threads come into
play: one dedicated thread focuses on retrieving and updating process-specific data, while another thread
monitors system-wide CPU and memory usage. These threads operate independently, harmonizing their
efforts to continuously gather real-time metrics.
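The division of labour described above can be pictured with the simplified sketch below; the worker function names are illustrative placeholders rather than the exact names used in the project code, and their bodies are left empty.

import threading
import tkinter as tk

def collect_process_metrics():
    # Placeholder for the loop that fetches per-process CPU and memory data.
    pass

def watch_system_usage():
    # Placeholder for the loop that watches system-wide CPU and memory usage.
    pass

root = tk.Tk()  # the main thread owns the GUI and stays responsive to user input

# Auxiliary threads run as daemons so they end when the GUI window is closed.
for worker in (collect_process_metrics, watch_system_usage):
    threading.Thread(target=worker, daemon=True).start()

root.mainloop()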
The first auxiliary thread specializes in fetching process-specific information using Psutil, a powerful Python
library for system monitoring. It leverages Psutil's capabilities to interact with the operating system, accessing
detailed metrics such as CPU percentage usage, active private working set (in megabytes), memory
percentage, and timestamps related to the specified process. This thread operates in a loop, fetching updated
data at regular intervals, ensuring the GUI's display remains dynamically up-to-date.
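Concretely, these metrics map onto Psutil calls roughly as in the sketch below; dividing the CPU percentage by the logical core count and treating the unique set size (USS) as the active private working set mirror the approach taken in the project code, while the helper name fetch_metrics is an illustrative assumption.

import datetime
import psutil

def fetch_metrics(pid):
    # Collect one snapshot of the metrics displayed by the analyzer.
    process = psutil.Process(pid)
    cpu = process.cpu_percent(interval=1) / psutil.cpu_count()  # averaged over all logical cores
    uss_mb = process.memory_full_info().uss / (1024 * 1024)     # active private working set in MB
    mem_percent = process.memory_percent()                      # share of total physical memory
    timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return timestamp, cpu, uss_mb, mem_percent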
Simultaneously, the second auxiliary thread undertakes the task of monitoring system-wide CPU and memory
usage. Employing Psutil's functionalities once again, this thread periodically checks the overall system's CPU
and memory utilization. Upon detecting thresholds being surpassed (e.g., CPU usage above 50% or memory
usage exceeding 50%), it triggers warning messages via a pop-up window to alert the user. This thread
operates concurrently with the process-specific data retrieval, contributing to a comprehensive monitoring
environment.
The architecture's strength lies in its ability to balance concurrent operations while maintaining responsiveness
and accuracy. The orchestrated collaboration between threads ensures a continuous flow of real-time data
updates without disrupting user interaction. This multi-threaded design optimizes system resources, providing
a fluid and efficient monitoring experience, and underscores the Data Usage Analyzer's capability to deliver
precise, up-to-the-moment insights into system resource utilization.
CODE
import psutil
import datetime
import tkinter as tk
from tkinter import messagebox
import threading

app = tk.Tk()
app.title("Data Usage Analyzer")
app.geometry("400x250")

def monitor_process_thread():
    pid = int(entry_pid.get())
    try:
        while monitor_enabled:
            current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            p = psutil.Process(pid)
            # Normalise the per-process CPU percentage by the number of logical cores
            cpu_percent = p.cpu_percent(interval=1) / psutil.cpu_count()
            memory_info = p.memory_full_info()
            # USS (unique set size) approximates the active private working set
            uss_kb = memory_info.uss / 1024
            memory_percent = p.memory_percent()
            output_text.set(
                f"Time: {current_time}\n"
                f"Process ID: {pid}\n"
                f"CPU Percent: {cpu_percent:.2f}\n"
                f"Active Private Working Set Memory: {uss_kb:.0f} KB\n"
                f"Memory Percent: {memory_percent:.2f}"
            )
    except psutil.NoSuchProcess:
        messagebox.showerror("Error", f"Process with ID {pid} not found!")

def warning_thread():
    while warning_enabled:
        # cpu_percent(interval=1) blocks for one second, which also paces the loop
        cpu_usage = psutil.cpu_percent(interval=1)
        if cpu_usage > 50:
            messagebox.showwarning("Warning", f"CPU usage is above 50%: {cpu_usage}%")
        mem_usage = psutil.virtual_memory().percent
        if mem_usage > 50:
            messagebox.showwarning("Warning", f"Memory utilization is above 50%: {mem_usage}%")

def start_analysis():
    global monitor_enabled, warning_enabled
    monitor_enabled = True
    warning_enabled = True
    process_monitor_thread = threading.Thread(target=monitor_process_thread)
    process_monitor_thread.daemon = True
    process_monitor_thread.start()
    # Use a distinct local name so the warning_thread function itself is not shadowed
    warning_monitor_thread = threading.Thread(target=warning_thread)
    warning_monitor_thread.daemon = True
    warning_monitor_thread.start()

def stop_analysis():
    global monitor_enabled, warning_enabled
    monitor_enabled = False
    warning_enabled = False

label_pid = tk.Label(app, text="Enter Process ID:")
label_pid.pack()
entry_pid = tk.Entry(app)
entry_pid.pack()

start_button = tk.Button(app, text="Start Analysis", command=start_analysis)
start_button.pack()
stop_button = tk.Button(app, text="Stop Analysis", command=stop_analysis)
stop_button.pack()

output_text = tk.StringVar()
output_label = tk.Label(app, textvariable=output_text)
output_label.pack()

monitor_enabled = False
warning_enabled = False

app.mainloop()
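To try the application, a valid PID is needed for the entry field. The short, optional snippet below (not part of the project code) lists running processes with Psutil so that a suitable PID can be copied into the GUI; it assumes the current user has permission to query those processes.

import psutil

# Print the PID and name of every process visible to the current user.
for proc in psutil.process_iter(['pid', 'name']):
    print(proc.info['pid'], proc.info['name'])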
OUTPUT
EXPLANATION
The provided flowchart succinctly outlines the fundamental workflow of the Data Usage Analyzer. In this narrative, we delve into a more comprehensive explanation, elucidating each step and the underlying processes involved in the application's operation.

1. START:
At the commencement of the flowchart, we encounter the "START" node, symbolizing the initiation of the Data Usage Analyzer. This point marks the beginning of the program's execution, where the Tkinter window is created and the application is prepared to receive user inputs.

2. ENTER PROCESS ID:
The flow then moves to the input phase, reflecting the primary interaction between the user and the system. Here, the user types the Process ID (PID) of the process to be monitored into the entry field of the graphical user interface (GUI).

3. START ANALYSIS:
When the "Start Analysis" button is pressed, the start_analysis function enables monitoring and launches two daemon threads. The first, monitor_process_thread, repeatedly fetches the CPU percentage, active private working set, and memory percentage of the specified process and updates the output label. The second, warning_thread, checks system-wide CPU and memory usage and raises a warning message box whenever either exceeds 50%. If the entered PID does not correspond to a running process, an error message box is displayed instead.

4. STOP ANALYSIS:
Clicking the "Stop Analysis" button sets the monitoring flags to False, causing both threads to finish their loops and halting further updates and warnings. The application then waits for new input or can be closed, which ends the program.
CONCLUSION
The culmination of technological advancements and user-centric design principles manifests in the
Data Usage Analyzer, a sophisticated application meticulously constructed using Python's Tkinter
and Psutil libraries. This tool transcends the conventional boundaries of resource monitoring, offering
a holistic platform that embodies the convergence of technology, usability, and critical insights. Its
profound significance lies in its unparalleled ability to decode, interpret, and present real-time data
concerning CPU and memory utilization by specific processes, fostering an environment conducive
to informed decision-making, proactive system management, and meticulous resource optimization.
Functionally, the Data Usage Analyzer stands as a pinnacle of user-friendliness and utility. Its
graphical user interface (GUI) serves as an intuitive gateway, seamlessly integrating user inputs with
the intricacies of resource monitoring. Users can effortlessly input Process IDs (PIDs) and initiate the
monitoring process, thereby immersing themselves in a visually engaging display of live updates.
Through dynamically updated metrics such as CPU percentage, active private working set, memory
utilization, and corresponding timestamps, users gain immediate and actionable insights into the
resource utilization patterns of monitored processes. This interface empowers users across domains
to grasp resource dynamics in real-time, enabling quick responses to potential inefficiencies or
excessive resource consumption.
The profound impact of the Data Usage Analyzer extends far beyond its surface-level functionalities,
making it an indispensable tool for professionals in diverse industries. System administrators leverage
its insights to gain a deeper understanding of resource utilization dynamics, enabling them to fine-
tune processes and preemptively address potential bottlenecks. For software developers, this tool acts
as a compass, guiding optimization efforts by providing granular insights into application resource
consumption patterns. In high-stakes industries like finance or healthcare, where system stability is
non-negotiable, this application emerges as a guardian, promptly flagging resource-intensive
instances and ensuring consistent, stable operations.
Moreover, the Data Usage Analyzer serves as a catalyst for informed decision-making, empowering
users with real-time insights into CPU and memory usage. By facilitating strategic resource
allocation, process prioritization, and performance enhancement, it paves the way for proactive
system management. Its proactive approach, characterized by issuing warnings upon threshold
breaches, underscores its commitment to enabling users to identify critical resource-consuming
scenarios promptly.
In essence, the Data Usage Analyzer epitomizes the synergy between technological innovation and
practicality in resource monitoring. Its impact reverberates across industries, democratizing resource
utilization data and transforming it into actionable insights for a diverse spectrum of users. It stands
as a beacon in the ever-evolving landscape of technology, heralding an era of resource optimization,
informed decision-making, and seamless system management. As technology continues to advance,
tools like the Data Usage Analyzer serve as guides, navigating us towards a future where efficient
resource management forms the bedrock of stable, optimized, and high-performing computing
environments.
REFERENCES
• Dijkstra, E. W. (1968). The Structure of the ‘THE’-Multiprogramming System.
• Denning, P. J. (1968). A Working Set Model for Program Behavior.
• McKusick, M. K., Joy, W. N., Leffler, S. J., & Fabry, R. S. (1984). A Fast File System for UNIX.
• Zhou, D., Li, H. C., Lagisetty, R., Narayanan, A., Vishwanath, K. V., & Wu, Z. (2010). Finding a Needle in Haystack: Facebook's Photo Storage.