Minor Project
Submitted to
RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA
BHOPAL (M.P)
Department of CSE-IoT
Lakshmi Narain College of Technology, Bhopal (M.P.)
Session 2024-25
LAKSHMI NARAIN COLLEGE OF TECHNOLOGY,
BHOPAL
DEPARTMENT OF CSE-IoT
CERTIFICATE
This is to certify that the work embodied in this project entitled "Artificial
Assistant" has been satisfactorily completed by Nikunj Maheshwari
(0103IS221132), Indrajeet Chouhan (0103IS221090), and Kuldeep
Yadav (0103IS221106). It is a bonafide piece of work, carried out under
guidance in the Department of CSE-IoT, Lakshmi Narain College of
Technology, Bhopal, in partial fulfillment of the Bachelor of Technology
during the academic year 2024-25.
ACKNOWLEDGEMENT
We would also like to thank our institution and all the faculty members, without
whom this project work would have remained a distant reality.
CHAPTER 1
INTRODUCTION
The modern world is increasingly reliant on technology to address challenges across various
domains, including media, healthcare, education, and industry. One such critical issue is
information overload in daily news consumption: readers are confronted with far more
content than they can realistically process. This problem affects productivity, comprehension,
and informed decision-making, making it essential to develop innovative solutions.
Background
The identified problem arises from the sheer volume and pace of online news publishing,
combined with a lack of accessible tools that condense articles into reliable, digestible
summaries. Traditional approaches, such as manually skimming multiple sources, are
time-consuming, error-prone, and often leave readers with an incomplete picture.
Problem Statement
The core challenge is to create a system that can efficiently fetch news articles, condense
them into accurate summaries, and deliver them through a convenient voice-driven interface
using Natural Language Processing (NLP) and speech technologies.
Real-World Scenarios
- A commuter wants a quick briefing on the day's headlines but has no time to read full
articles.
- A visually impaired user needs news delivered as speech rather than as on-screen text.
- A professional tracking a developing story must sift through dozens of overlapping reports
to extract the essential facts.
Proposed Solution
To address these challenges, this project develops a news summarization voice assistant that
leverages speech recognition, transformer-based text summarization (BART-CNN), and
text-to-speech synthesis to provide on-demand, spoken summaries of current news articles.
This approach bridges the gap between traditional news consumption and modern
technological advancements, creating a sustainable and scalable solution to the identified
problem.
CHAPTER 2
LITERATURE SURVEY
Introduction
News summarization is a critical task in Natural Language Processing (NLP) that aims to
condense lengthy news articles into concise summaries while preserving essential
information and readability. This literature survey explores various approaches to news
summarization, including traditional techniques and recent advancements in deep learning.
1. Traditional Techniques
* Sentence Scoring: This approach assigns scores to sentences based on features like
sentence position, word frequency, and presence of keywords. Sentences with higher scores
are selected for the summary (a minimal sketch appears at the end of this subsection).
* Graph-based Methods: These methods represent the article as a graph, where nodes are
sentences and edges represent relationships between them. Important sentences are
identified based on their centrality within the graph.
* Statistical Machine Translation: These methods treat summarization as a translation task,
where the source text is the article and the target text is the summary.
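To make the sentence-scoring idea concrete, the following is a minimal illustrative sketch, not code from this project: sentences are ranked by the average corpus frequency of their words, and the top-k are returned in document order.

# Illustrative frequency-based sentence scorer (not the project's actual code).
import re
from collections import Counter

def extractive_summary(text, k=3):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Rank sentences by score, then restore the top-k to document order.
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:k])
    return " ".join(sentences[i] for i in keep)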
2. Deep Learning Approaches
* Encoder-Decoder Architecture: These models encode the input article into a vector
representation and then decode it into a summary.
* Examples: Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), Gated
Recurrent Units (GRUs).
* Transformer Models:
* Self-Attention: These models use self-attention to capture relationships between
different parts of the input sequence, enabling them to process long sequences efficiently.
* Pre-trained Models: Large language models like BERT and GPT are pre-trained on
massive text corpora and can be fine-tuned for specific summarization tasks (see the usage
sketch below).
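As an illustration of how a pre-trained summarization model is typically used, the sketch below loads a BART checkpoint through the Hugging Face pipeline API; the checkpoint name and generation lengths are example choices, not ones mandated by this project.

# Illustrative use of a pre-trained summarizer via Hugging Face Transformers.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = "..."  # full news article text goes here
result = summarizer(article, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])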
3. Evaluation Metrics
* ROUGE: A widely used set of metrics that compare the generated summary with a set of
reference summaries (an example computation follows this list).
* BLEU: Another commonly used metric that measures the n-gram overlap between the
generated summary and the reference summaries.
* METEOR: A metric that assesses the semantic similarity between the generated and
reference summaries.
* Human Evaluation: Human evaluators assess the quality of the generated summaries
based on criteria such as fluency, readability, and informativeness.
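As a concrete example, ROUGE can be computed with Google's rouge-score package; this tooling choice is an assumption for illustration, not one named by this project.

# Computing ROUGE-1 and ROUGE-L (pip install rouge-score).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = "SpaceX launched the first private lunar mission on Monday."
candidate = "SpaceX launched a private mission to the Moon."
scores = scorer.score(reference, candidate)  # score(target, prediction)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)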
4. Challenges and Future Directions
* Factuality and Hallucination: Ensuring the generated summaries are factually accurate and
avoid generating false information.
* Subjectivity and Bias: Mitigating the impact of biases in the training data and the
generated summaries.
* Contextual Understanding: Capturing the context and nuances of the news articles to
generate more informative summaries.
* Multi-document Summarization: Summarizing information from multiple related news
articles.
* Interactive Summarization: Allowing users to interact with the summarization process to
customize the generated summaries.
Conclusion
News summarization has witnessed significant advancements with the rise of deep learning
techniques. Transformer-based models have achieved state-of-the-art results, but challenges
like factuality, bias, and contextual understanding remain. Future research directions include
developing more robust models, exploring new evaluation metrics, and addressing the
ethical implications of automated summarization.
Note: This is a brief overview of the literature. A comprehensive survey would include a
detailed analysis of specific research papers and their contributions to the field of news
summarization.
CHAPTER 3
Objective
The primary objective of this project is to develop an efficient and scalable solution to
address the identified problem of news information overload. By leveraging modern
technologies such as Artificial Intelligence (AI), Natural Language Processing (NLP), and
speech technologies, the project aims
to enhance operational efficiency, minimize resource utilization, and deliver actionable
insights to stakeholders. Additionally, the project seeks to fill existing gaps in current
methodologies by providing a user-centric, adaptable, and secure solution. The ultimate goal
is to contribute to technological advancements that positively impact industry practices and
societal outcomes. The project also aims to foster innovation and collaboration among
stakeholders by integrating advanced methodologies that promote cross-disciplinary learning
and problem-solving.
Scope
The scope of the project encompasses both the technical and practical aspects of the
solution's development and deployment. It includes:
- System Design and Development: Creating a robust architecture that integrates hardware
and software components seamlessly.
- Data Collection and Analysis: Implementing mechanisms to collect, store, and process data
efficiently.
- User Interface: Designing an intuitive and accessible interface for end-users.
- Deployment and Testing: Ensuring the solution operates effectively in real-world scenarios.
- Scalability: Planning for future expansions and adaptability to evolving requirements.
- Training and Documentation: Providing comprehensive training materials and
documentation to ensure smooth adoption by users and stakeholders.
- Environmental and Ethical Considerations: Incorporating sustainable practices and ethical
guidelines during development and deployment.
By focusing on these areas, the project aims to deliver a comprehensive solution that is not
only innovative but also practical and sustainable for long-term use. It strives to address
immediate challenges while laying the groundwork for future technological evolution.
CHAPTER 4
PROBLEM ANALYSIS AND REQUIREMENT SPECIFICATION
Problem Analysis
1. Information Overload
- Users struggle with processing large volumes of daily news content, leading to
information fatigue and reduced comprehension efficiency.
- Reading full articles consumes significant time that modern users often cannot afford in
their daily schedules.
- Quick understanding of key information becomes crucial for staying informed without
overwhelming cognitive load.
2. Summarization Quality
- Maintaining accuracy while condensing information requires sophisticated algorithmic
approaches and careful content processing.
3. Technical Challenges
- API integration must handle rate limits, downtime, and varying response formats while
maintaining system stability.
- Resource management for ML model processing needs to balance between performance
and system load optimization.
- Error handling must cover network issues, API failures, invalid inputs, and model
processing errors comprehensively.
- Secure credential management requires proper encryption and safe storage of API keys
and sensitive data.
Software Requirements
1. System Requirements
- Python 3.x environment with support for all necessary machine learning and API
interaction libraries.
- Stable internet connectivity with sufficient bandwidth for reliable API access and data
retrieval.
- Minimum 8GB RAM for efficient ML model operations and content processing
capabilities.
- Jupyter Notebook environment with proper kernel configuration and dependency
management.
2. Functional Requirements
- Seamless news article retrieval through NewsAPI with support for various search
parameters and filters.
- Advanced text summarization utilizing the BART-CNN model for high-quality content
condensation.
- Flexible configuration options for search terms, article count, and summarization
parameters.
- Clearly formatted output display with proper separation between articles and summaries.
3. Performance Requirements
- Maximum response time of 5 seconds per article to maintain user engagement and system
efficiency.
- Concurrent processing support for handling multiple articles without significant
performance degradation.
- Optimized memory usage with proper garbage collection and resource management.
- Consistent performance across articles of varying lengths and complexities.
4. Security Requirements
- Encrypted storage for API keys with secure access mechanisms and regular rotation
capabilities.
- Protected data transmission using HTTPS and proper certificate validation.
- Comprehensive input validation to prevent injection attacks and system vulnerabilities.
- Secure error message handling to prevent information leakage.
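As an illustration of the credential-handling requirement above, API keys can at minimum be kept out of source code with python-dotenv (listed among the support libraries in Chapter 5); full encrypted storage would require additional tooling such as a secrets manager. The variable name NEWS_API_KEY is an assumed convention.

# Loading the NewsAPI key from a .env file instead of hard-coding it.
# The .env file contains a line such as: NEWS_API_KEY=your-key-here
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory
api_key = os.getenv("NEWS_API_KEY")
if not api_key:
    raise RuntimeError("NEWS_API_KEY is not set; add it to your .env file.")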
5. Interface Requirements
- Professional output formatting with clear separation between different articles and their
summaries.
- Intuitive parameter configuration system with proper validation and default values.
- Informative error messages that guide users toward resolution without exposing system
details.
- Clean and readable summary presentation with proper formatting and structure.
Scope
1. Included Features
- Full NewsAPI integration with support for all available API endpoints and features.
- Complete BART-CNN model implementation with optimized performance settings.
- Sophisticated text summarization with length adaptation and quality preservation.
- Comprehensive error handling system with proper logging and recovery mechanisms.
2. Future Enhancements
- Integration capability with multiple news sources for broader content coverage.
- Advanced error recovery systems with automatic retries and fallback mechanisms.
- Graphical user interface development for improved user experience and accessibility.
- Implementation of quality metrics for summary evaluation and optimization.
3. Limitations
- Current dependency on single news API limits content source diversity.
- Fixed summarization model without support for alternative AI models.
- Basic console output format without advanced visualization options.
- Absence of persistent storage for historical data and analysis.
4. Dependencies
- Active NewsAPI subscription with appropriate rate limits and access levels.
- Latest version of Transformers library with BART-CNN model support.
- Updated Requests library for reliable API communication.
- Access to BART-CNN model weights and configurations.
CHAPTER 5
DETAILED DESIGN (MODELING AND ERD/DFD)
The software platform environment plays a crucial role in the development and execution of
a voice assistant project, enabling the integration of multiple components such as speech
recognition, text-to-speech, external APIs, and web services. For the voice assistant
developed in this project, the platform environment includes the operating
system, programming language, libraries, and external cloud services, all of which work in
harmony to enable the assistant to interact with users, process commands, and perform tasks.
The following is a technical infrastructure breakdown of the news summarization system.
Key Libraries:
1. Core Libraries:
- transformers 4.30+ (for BART-CNN model)
- torch 2.0+ (PyTorch for model operations)
- requests 2.28+ (API communication)
- numpy 1.21+ (numerical operations)
- pandas 1.5+ (data handling)
2. Support Libraries:
- python-dotenv (environment configuration)
- logging (system monitoring)
- json (data parsing)
- typing (type hints)
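The version constraints above can be captured in a requirements file. The sketch below is one plausible pinning consistent with the minimum versions listed; the exact pins are an assumption. Note that logging, json, and typing ship with Python and need no entry.

# requirements.txt (illustrative, matching the minimum versions listed above)
transformers>=4.30
torch>=2.0
requests>=2.28
numpy>=1.21
pandas>=1.5
python-dotenv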
Middleware Components:
1. Processing Middleware:
- Queue management for batch processing
- Caching layer for frequent requests
- Data transformation pipelines
- Error handling and recovery
2. Security Middleware:
- API key management
- Request validation
- Data sanitization
- SSL/TLS enforcement
Integration Layer:
1. Service Communication:
- REST API protocols
- HTTPS encryption
- JSON data format
- Webhook support for notifications
2. Monitoring Integration:
- System health checks
- Performance monitoring
- Resource utilization tracking
- Alert mechanisms
This infrastructure design ensures robust operation, scalability, and maintainability while
providing flexibility for future enhancements and cloud integration options. The system can
be deployed either as a standalone application or as part of a larger cloud-based solution,
depending on requirements and scale needs.
1. Data Flow Diagram (DFD)
• Entities:
◦ User Preferences: Store user preferences for news categories (e.g., technology,
sports) and summarization lengths.
◦ User History: Store records of previously fetched articles and their generated
summaries.
Implementation Notes:
• This is a simplified design. The actual DFD and ERD may require more detailed
levels and entities depending on the specific requirements and features of the voice
assistant.
• Consider using a database (e.g., PostgreSQL, MySQL) to store and manage the data;
a minimal schema sketch follows this list.
• Implement proper data validation and sanitization to ensure data integrity and
security.
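As a minimal illustration of the storage suggestion above, the following sketch creates the two entities with SQLite; the column names are assumptions for demonstration, and a production deployment would use PostgreSQL or MySQL as noted.

# Illustrative schema for the two entities described above, using SQLite.
import sqlite3

conn = sqlite3.connect("assistant.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS user_preferences (
    user_id INTEGER PRIMARY KEY,
    category TEXT NOT NULL,            -- e.g., 'technology', 'sports'
    summary_length INTEGER DEFAULT 60  -- target summary length in tokens
);
CREATE TABLE IF NOT EXISTS user_history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER REFERENCES user_preferences(user_id),
    article_title TEXT NOT NULL,
    summary TEXT NOT NULL,
    fetched_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
""")
conn.commit()
conn.close()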
This design provides a basic framework for the development of the news summarization
voice assistant. You can further refine and expand it based on your specific needs and
constraints.
CHAPTER 6
HARDWARE/SOFTWARE PLATFORM ENVIRONMENT
The successful implementation of the voice assistant project depends on a carefully selected
hardware and software environment. This section covers the essential platforms, tools, and
technologies employed to develop, test, and deploy the system.
Hardware Platform
• CPU:
◦ Minimum: Intel Core i5 or AMD Ryzen 5 equivalent
◦ Recommended: Intel Core i7 or AMD Ryzen 7 or higher for faster processing
and smoother performance, especially during concurrent tasks.
• RAM:
◦ Minimum: 8GB
◦ Recommended: 16GB or more for handling large language models and
multiple concurrent processes.
• Storage:
◦ Minimum: 256GB SSD (Solid State Drive) for faster boot times and
application loading.
◦ Recommended: 512GB SSD or 1TB SSD for ample storage space for models,
data, and operating system files.
• GPU (Optional):
◦ NVIDIA GPU with CUDA support (e.g., GeForce RTX series, NVIDIA
TITAN) can significantly accelerate model training and inference, leading to
faster response times.
• Audio Input/Output:
◦ High-quality microphone (e.g., USB microphone) for accurate voice input.
◦ Speakers or headphones for clear audio output.
• Internet Connectivity:
◦ Reliable and high-speed internet connection (minimum 10 Mbps) for
accessing cloud services, downloading models, and fetching news data.
Software Platform
• Operating System:
◦ Linux: Highly recommended for its stability, performance, and extensive
developer tools (e.g., Ubuntu, Debian).
◦ macOS: Also suitable for development and deployment.
◦ Windows: Can be used, but may require additional configuration and
troubleshooting.
• Programming Language:
◦ Python: The preferred language for machine learning and deep learning due to
its extensive libraries and ease of use.
• Development Environment:
◦ Integrated Development Environment (IDE):
▪ Visual Studio Code with Python extensions (highly versatile and
customizable).
▪ PyCharm (specifically designed for Python development).
▪ Jupyter Notebook (excellent for interactive development and
experimentation).
• Key Libraries:
◦ Core Libraries:
▪ Transformers: For loading and utilizing pre-trained transformer models
(e.g., BART-CNN) for text summarization.
▪ PyTorch: Deep learning framework for model training and inference.
▪ SpeechRecognition: For real-time speech-to-text conversion.
▪ PyAudio: For audio input/output.
▪ Requests: For making HTTP requests to the News API.
▪ NumPy: For numerical computing.
▪ Pandas: For data manipulation and analysis.
◦ Supporting Libraries:
▪ SoundDevice: For audio playback and recording.
▪ nltk (Natural Language Toolkit): For natural language processing tasks
(optional).
▪ Flask/FastAPI: For building RESTful APIs (if required).
• Cloud Services (Optional):
◦ Cloud Storage:
▪ Amazon S3, Google Cloud Storage, or Azure Blob Storage for storing
large models and datasets.
◦ Cloud Computing:
▪ Amazon Web Services (AWS), Google Cloud Platform (GCP), or
Microsoft Azure for deploying and scaling the voice assistant.
◦ News API: For fetching news articles.
• Virtual Environments:
◦ venv or conda for creating isolated environments for each project, ensuring
compatibility and preventing conflicts between different project dependencies.
Additional Considerations:
• GPU Acceleration: If using a GPU, install the appropriate CUDA and cuDNN
libraries for optimal performance (a device-selection snippet follows this list).
• Docker: Consider using Docker to containerize the application for easier deployment
and portability across different environments.
• Testing and Debugging: Implement thorough unit tests and integration tests to ensure
the system's reliability and robustness. Utilize debugging tools effectively to identify
and resolve issues.
• Continuous Integration/Continuous Deployment (CI/CD): Set up a CI/CD pipeline to
automate the build, test, and deployment processes.
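Following up on the GPU consideration above, a common PyTorch pattern is to select the device once at startup, falling back to CPU when CUDA is unavailable; a minimal sketch:

# Select the best available device; the model and its inputs must then be
# moved to it (assumes PyTorch is installed).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
# Example usage once the summarization model is loaded:
# model.to(device)
# inputs = {k: v.to(device) for k, v in inputs.items()}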
This comprehensive hardware/software platform provides a solid foundation for developing
and deploying a high-performance news summarization voice assistant. Remember to choose
the specific components and configurations that best suit your project's requirements and
budget.
• Python: Widely used for its readability, extensive libraries, and strong community
support in the fields of natural language processing, machine learning, and data
science.
Core Libraries and Modules:
1. speech_recognition.py (note: in practice this file should be renamed, e.g. to
speech_input.py, so that it does not shadow the speech_recognition library it imports)

import speech_recognition as sr

def recognize_speech():
    """Recognizes speech from microphone input."""
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Speak now...")
        audio = r.listen(source)
    try:
        text = r.recognize_google(audio)
        print("You said: " + text)
        return text
    except sr.UnknownValueError:
        print("Google Speech Recognition could not understand audio")
        return None
    except sr.RequestError as e:
        print("Could not request results from Google Speech Recognition service; {0}".format(e))
        return None
2. news_api.py

import requests
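The news-fetching function itself is not shown in the original listing. The following is a minimal sketch, assuming NewsAPI's /v2/everything endpoint; the signature and parameters are illustrative, not the project's confirmed implementation.

# Minimal sketch of fetch_news (assumed implementation): queries NewsAPI's
# /v2/everything endpoint and returns the list of article dictionaries.
def fetch_news(api_key, query, page_size=5):
    url = "https://newsapi.org/v2/everything"
    params = {"q": query, "pageSize": page_size, "apiKey": api_key}
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()  # surface HTTP errors early
    return response.json().get("articles", [])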
3. text_to_speech.py

import pyttsx3

def speak_text(text):
    """Converts the given text to speech."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
• listen(): A function that captures audio input and converts it into text using
speech recognition.
python
recognizer = sr.Recognizer()

def listen():
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)
• assist(): The main function that listens for user input, processes commands, and
interacts with external services like Google or Wikipedia.
python
def assist():
    while True:
        command = listen()
        if "exit" in command or "quit" in command:
            speak_text("Goodbye!")
            break
        # Add more commands here for interaction
4. Error Handling and Robustness
The code implements error handling to ensure that the assistant operates smoothly, even
when it encounters unexpected situations. For instance, if the speech recognition service fails
or the AI model cannot process a command, the assistant informs the user and continues
listening for new input.
python
try:
    command = recognizer.recognize_google(audio)
except sr.UnknownValueError:
    speak_text("Sorry, I did not catch that. Please say it again.")
    return listen()
except sr.RequestError:
    speak_text("Sorry, the service is down. Please try again later.")
    return ""
CHAPTER 7
SNAPSHOTS OF INPUT & OUTPUT
The following console transcripts illustrate representative input and output of the program.
OUTPUT
1. Menu:
Options:
1. Fetch and Summarize News
2. View Saved Summaries
3. Exit
Enter your choice:
2. User Input (Speech-to-Text):
Speak now...
You said: Fetch news about space
3. Article Display:
Title: SpaceX Launches First Private Lunar Mission
Description: SpaceX is set to launch the first private lunar mission...
Summarize this article? (y/n): y
Options:
1. Fetch and Summarize News
2. View Saved Summaries
3. Exit
Enter your choice: 2
CHAPTER 8
CODING
import requests
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import os
import time
import json
import speech_recognition as sr
import pyttsx3

def speak_text(text):
    """Converts the given text to speech."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

def recognize_speech():
    """Recognizes speech from microphone input."""
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Speak now...")
        audio = r.listen(source)
    try:
        text = r.recognize_google(audio)
        print("You said: " + text)
        return text
    except sr.UnknownValueError:
        print("Could not understand audio. Please try again.")
        return None
    except sr.RequestError as e:
        print(f"Could not request results from Google Speech Recognition service: {e}")
        return None

def main():
    api_key = "YOUR_NEWS_API_KEY"  # replace with a real NewsAPI key
    save_path = "summaries.json"
    while True:
        try:
            print("\nOptions:")
            print("1. Fetch and Summarize News")
            print("2. View Saved Summaries")
            print("3. Exit")
            choice = int(input("Enter your choice: "))
            if choice == 1:
                query = recognize_speech()
                if query:
                    articles = fetch_news(api_key, query)
                    summaries = summarize_articles(articles)
                    display_summaries(summaries, save_option=True, save_path=save_path)
                else:
                    print("Could not recognize speech. Please try again.")
            elif choice == 2:
                try:
                    with open(save_path, 'r') as f:
                        summaries = json.load(f)
                    for title, summary in summaries.items():
                        print(f"Title: {title}\nSummary: {summary}\n")
                except FileNotFoundError:
                    print("No saved summaries found.")
            elif choice == 3:
                print("Exiting...")
                break
            else:
                print("Invalid choice. Please select a valid option.")
        except ValueError:
            print("Invalid input. Please enter a number.")

if __name__ == "__main__":
    main()
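main() calls fetch_news (sketched in Chapter 6), summarize_articles, and display_summaries, none of which appear in the listing above. The following is a minimal sketch of the two remaining helpers, assuming the facebook/bart-large-cnn checkpoint implied by the AutoModelForSeq2SeqLM import; the signatures and generation parameters are illustrative, not the project's confirmed implementation.

# Assumed helper implementations; checkpoint and parameters are illustrative.
# Relies on the AutoTokenizer/AutoModelForSeq2SeqLM and json imports above.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

def summarize_articles(articles):
    """Summarizes each article's text with BART-CNN; returns {title: summary}."""
    summaries = {}
    for article in articles:
        text = article.get("content") or article.get("description") or ""
        if not text:
            continue
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        output_ids = model.generate(
            inputs["input_ids"], max_length=60, min_length=20, num_beams=4
        )
        summaries[article["title"]] = tokenizer.decode(
            output_ids[0], skip_special_tokens=True
        )
    return summaries

def display_summaries(summaries, save_option=False, save_path="summaries.json"):
    """Prints each summary and optionally persists all of them as JSON."""
    for title, summary in summaries.items():
        print(f"Title: {title}\nSummary: {summary}\n" + "-" * 40)
    if save_option:
        with open(save_path, "w") as f:
            json.dump(summaries, f, indent=2)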
CHAPTER 9
PROJECT LIMITATIONS AND FUTURE SCOPE
* Data Dependency:
* News API Limitations: The quality and availability of news articles depend heavily on the
News API's data sources and their coverage.
* Data Bias: News articles may exhibit inherent biases, which could affect the generated
summaries.
* Summarization Model Limitations:
* Factual Accuracy: While effective, summarization models can sometimes introduce
inaccuracies or misinterpret the original text.
* Subjectivity: The generated summaries might reflect the inherent biases of the pre-trained
model.
* Contextual Understanding: The model may struggle to capture complex nuances and
contextual information within the articles.
* Computational Resources:
* Processing Power: Summarizing large volumes of articles can be computationally
expensive, requiring significant processing power.
* API Usage Limits: Free tiers of the News API often have usage limitations, which could
restrict the number of articles that can be processed.
Future Scope:
* Improved Summarization Models:
* Fine-tuning: Fine-tune existing models on domain-specific news datasets to enhance
accuracy and relevance.
* Explore Advanced Models: Experiment with more sophisticated models like those
incorporating attention mechanisms or incorporating knowledge graphs.
* Enhanced Data Handling:
* Data Cleaning and Preprocessing: Implement robust data cleaning techniques to handle
noisy data and improve summarization quality.
* Bias Mitigation: Develop techniques to mitigate biases in the input data and the generated
summaries.
* User Interface Development:
* Interactive Dashboard: Create a user-friendly interface for users to input queries,
customize summarization parameters, and view the generated summaries.
* Visualization Tools: Develop tools to visualize the key topics and sentiments extracted
from the news articles.
* Integration with Other Tools:
* Sentiment Analysis: Integrate sentiment analysis to understand the overall sentiment
expressed in the news articles (a short sketch follows this list).
* Topic Modeling: Employ topic modeling techniques to identify and categorize articles
based on their main themes.
* Real-time News Monitoring:
* Stream Processing: Develop a system for real-time news monitoring and summarization
to provide up-to-date insights.
* Multilingual Support: Extend the system to support news articles in multiple languages.
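As an illustration of the sentiment-analysis direction above, Hugging Face's pipeline API (already a project dependency) supports this in a few lines; relying on the pipeline's default English checkpoint is an assumption made here for brevity.

# Illustrative sentiment scoring of a generated summary (assumed approach).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model
result = sentiment("SpaceX launches first private lunar mission.")[0]
print(result["label"], round(result["score"], 3))  # e.g., POSITIVE 0.98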
By addressing these limitations and exploring the avenues outlined in the future scope, this
project can be further refined to provide more accurate, informative, and insightful news
summaries.
CHAPTER 10
REFERENCES
1. News API Documentation:
• Link: https://newsapi.org/
• Description: Official documentation for the News API, including information on
endpoints, parameters, usage examples, and API keys.
2. Transformers Documentation:
• Link: https://huggingface.co/docs/transformers/en/index
• Description: Comprehensive documentation for the Hugging Face Transformers
library, covering model loading, usage, fine-tuning, and more.
3. PyTorch Documentation:
• Link: https://pytorch.org/
• Description: Official documentation for the PyTorch deep learning framework,
including tutorials, API references, and community resources.
4. SpeechRecognition Library Documentation:
• Link: https://pypi.org/project/SpeechRecognition/
• Description: Documentation for the SpeechRecognition library in Python, covering
supported speech recognition engines, usage examples, and error handling.
5. PyAudio Documentation:
• Link: https://pypi.org/project/PyAudio/
• Description: Documentation for the PyAudio library in Python, providing
information on audio I/O operations, stream management, and error handling.
6. Text-to-Speech (TTS) Libraries:
• Link: https://pypi.org/project/pyttsx3/
• Description: Documentation for pyttsx3, an offline text-to-speech library for Python.
7. Natural Language Processing Overview:
• Link: https://en.wikipedia.org/wiki/Natural_language_processing
• Description: A general overview of natural language processing concepts, including
tokenization, part-of-speech tagging, named entity recognition, and sentiment
analysis.
8. Transformer Models for Text Summarization:
• Link: https://huggingface.co/facebook/bart-large
• Description: Hugging Face model card for BART, the architecture underlying the
BART-CNN summarization model used in this project.
9. Voice User Interface Design:
• Link: https://www.interaction-design.org/literature/article/how-to-design-voice-user-interfaces
• Description: Articles and resources on designing effective voice user interfaces,
including considerations for natural language understanding, conversational flow, and
error handling.
10. Ethical Considerations in AI:
• Link: https://ainowinstitute.org/
• Description: Resources and articles on the ethical implications of artificial
intelligence, including bias, fairness, privacy, and accountability.
This list provides a starting point for further research and exploration in the area of news
summarization voice assistants. Consult the official documentation and relevant research
papers for a deeper understanding of the underlying concepts and technologies.