Pds Unit4 Notes
IBM Watson for Oncology, virtual assistants like Siri, and fraud detection
systems in finance are notable examples.
Q. What are the three elements of cognitive computing?
Perception, learning, and reasoning form the core elements, enabling systems
to mimic human-like intelligence effectively.
Q. What is the main objective of cognitive computing?
The primary goal is to create systems that understand, learn, and interact with
data in a human-like manner.
Q. What is the difference between cognitive computing and AI?
Cognitive computing is a subset of AI, focusing on mimicking human cognition,
while AI encompasses broader capabilities.
Q. What are the characteristics of cognitive computing?
Natural language processing, learning from data, problem-solving,
adaptability, and human-machine collaboration define cognitive computing’s
key characteristics.
TERMINOLOGIES
Cognitive computing refers to technology that mimics human thought
processes using artificial intelligence (AI) and machine learning. Key terms
include natural language processing (NLP) for understanding human language;
machine learning (ML) for improving performance through data; neural
networks for pattern recognition; contextual awareness for understanding
the environment; and deep learning, a subset of ML that uses multi-layered
neural networks to analyze large datasets. The goal is to enhance decision-
making and automate complex tasks by simulating human cognition.
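As a rough illustration of the multi-layered neural networks mentioned above, here is a minimal forward pass through a tiny two-layer network in plain Python. The weights and inputs are hand-picked for the sketch, not learned from data:

```python
import math

def relu(x):
    # Rectified linear unit: the standard hidden-layer activation.
    return max(0.0, x)

def sigmoid(x):
    # Squashes the output into (0, 1), e.g. for a yes/no score.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, hidden_biases, out_weights, out_bias):
    # Hidden layer: weighted sum of inputs plus bias, passed through ReLU.
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_weights, hidden_biases)]
    # Output layer: weighted sum of hidden activations, squashed to (0, 1).
    return sigmoid(sum(w * h for w, h in zip(out_weights, hidden)) + out_bias)

# Hand-picked weights for a 2-input, 2-hidden-unit, 1-output network.
score = forward([1.0, 0.5],
                hidden_weights=[[0.4, -0.6], [0.7, 0.2]],
                hidden_biases=[0.1, -0.1],
                out_weights=[1.2, -0.8],
                out_bias=0.05)
print(round(score, 3))
```

"Deep" networks simply stack many such layers, which is what makes them expensive to train but able to learn complex patterns.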
Machine Learning: Takes less time to train the model.
Deep Learning: Takes more time to train the model.
2. Natural language processing (NLP):
The second major application of deep learning is NLP, where deep learning models
enable machines to understand and generate human language. Some of the main
applications of deep learning in NLP include:
Automatic text generation: Deep learning models can learn from a corpus of text,
and new text such as summaries and essays can then be generated automatically
using the trained models.
Language translation: Deep learning models can translate text from one language to
another, making it possible to communicate with people from different linguistic
backgrounds.
Sentiment analysis: Deep learning models can analyze the sentiment of a piece of
text, making it possible to determine whether the text is positive, negative, or
neutral. This is used in applications such as customer service, social media
monitoring, and political analysis.
Speech recognition: Deep learning models can recognize and transcribe spoken
words, making it possible to perform tasks such as speech-to-text conversion, voice
search, and voice-controlled devices.
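The idea behind sentiment analysis above can be illustrated with a toy lexicon-based scorer in plain Python. This is a deliberately simplified sketch: a deep learning model would learn word associations from labelled data, whereas the POSITIVE and NEGATIVE sets here are made-up example lexicons:

```python
# Made-up example lexicons; a real model learns these associations from data.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "angry"}

def sentiment(text):
    # Score = number of positive words minus number of negative words.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))
print(sentiment("terrible service, very poor"))
```

A learned model handles negation, sarcasm, and unseen words far better than a fixed word list, which is why deep learning dominates this task.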
3. Reinforcement learning:
In reinforcement learning, deep learning is used to train agents to take actions in an
environment so as to maximize a reward. Some of the main applications of deep
learning in reinforcement learning include:
Game playing: Deep reinforcement learning models have been able to beat human
experts at games such as Go, Chess, and Atari.
Robotics: Deep reinforcement learning models can be used to train robots to
perform complex tasks such as grasping objects, navigation, and manipulation.
Control systems: Deep reinforcement learning models can be used to control
complex systems such as power grids, traffic management, and supply chain
optimization.
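The reward-maximization loop behind these applications can be sketched with tabular Q-learning on a toy corridor environment. Deep reinforcement learning replaces the lookup table below with a neural network, but the training loop has the same shape; the environment and hyperparameters here are illustrative assumptions:

```python
import random

# Toy corridor: the agent starts at cell 0 and earns a reward of 1
# only when it reaches cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge Q toward reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should always move right toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

The game-playing and robotics results above use the same principle, with a deep network estimating Q-values (or a policy) over states far too numerous to tabulate.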
Challenges in Deep Learning
Deep learning has made significant advancements in various fields, but there are still some
challenges that need to be addressed. Here are some of the main challenges in deep
learning:
1. Data availability: Deep learning requires large amounts of data to learn from,
and gathering enough high-quality training data is often a major concern.
2. Computational resources: Training deep learning models is computationally
expensive and usually requires specialized hardware such as GPUs and TPUs.
3. Time-consuming: Depending on the available computational resources, training
on large or sequential datasets can take a very long time, sometimes days or
even months.
4. Interpretability: Deep learning models are complex and behave like black
boxes, so it is very difficult to interpret their results.
5. Overfitting: When a model is trained for too long or is too complex for the
available data, it becomes too specialized to the training data, leading to
overfitting and poor performance on new, unseen data.
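Overfitting can be demonstrated with a toy experiment: a model that memorizes every training point (1-nearest-neighbour) achieves zero training error, yet on fresh data it merely reproduces the training noise, while a simpler one-parameter model generalizes. The synthetic data below is an assumption for illustration:

```python
import random

random.seed(1)

def sample(n):
    # Noisy data: the true signal is y = 2x, plus Gaussian noise.
    return [(x, 2 * x + random.gauss(0, 1)) for x in [i / 10 for i in range(n)]]

train, test = sample(20), sample(20)

def nn_predict(x):
    # "Overfit" model: memorize every training point, predict with the nearest one.
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Simpler model: least-squares line through the origin (a single parameter).
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)
def line_predict(x):
    return slope * x

def mse(predict, data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

print("1-NN train/test MSE:", round(mse(nn_predict, train), 2),
      round(mse(nn_predict, test), 2))
print("line train/test MSE:", round(mse(line_predict, train), 2),
      round(mse(line_predict, test), 2))
```

The memorizing model's training error is exactly zero while its test error is not, which is the signature of overfitting; regularization, more data, and early stopping are the usual remedies in deep learning.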