Pds Unit4 Notes
Principles of data science

Q. What are some examples of cognitive computing?

IBM Watson for Oncology, virtual assistants like Siri, and fraud detection
systems in finance are notable examples.
Q. What are the three elements of cognitive computing?
Perception, learning, and reasoning form the core elements, enabling systems
to mimic human-like intelligence effectively.
Q. What is the main objective of cognitive computing?
The primary goal is to create systems that understand, learn, and interact with
data in a human-like manner.
Q. What is the difference between cognitive computing and AI?
Cognitive computing is a subset of AI, focusing on mimicking human cognition,
while AI encompasses broader capabilities.
Q. What are the characteristics of cognitive computing?
Natural language processing, learning from data, problem-solving,
adaptability, and human-machine collaboration define cognitive computing’s
key characteristics.

Learning in Cognitive Computing


 What it Means: Learning allows the system to improve over time by
using data to find patterns and make predictions.
 How it Works:
o Supervised Learning: The system learns from labeled data, like
feeding it labeled pictures of animals so it learns what dogs, cats,
etc., look like.
o Unsupervised Learning: It finds patterns on its own, without
labeled data. For instance, it could group customer preferences for
targeted marketing.
o Reinforcement Learning: It learns from feedback, like a game that
rewards good moves and penalizes bad ones, improving over time.
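The three learning styles above can be made concrete with a toy example. The sketch below implements the simplest possible supervised learner, a 1-nearest-neighbour classifier; all of the data points and labels are invented for illustration, and real systems would use a library such as scikit-learn.

```python
# Toy supervised learning: a 1-nearest-neighbour classifier.
# The labelled "training" points below are made up for illustration.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of ((x, y), label) pairs; squared Euclidean
    distance is used, which preserves the ordering of true distances.
    """
    best_label, best_dist = None, float("inf")
    for (x, y), label in train:
        dist = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Labelled examples: two clusters standing in for two animal classes.
train = [((2, 1), "cat"), ((3, 2), "cat"), ((8, 9), "dog"), ((9, 8), "dog")]

print(nearest_neighbour(train, (2.5, 1.5)))  # near the "cat" cluster
print(nearest_neighbour(train, (8.5, 8.5)))  # near the "dog" cluster
```

Unsupervised learning would drop the labels and group the points by distance alone; reinforcement learning would instead adjust behaviour from a reward signal.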
Perception in Cognitive Computing
 What it Means: Perception is how the system understands and
interprets sensory data, such as images, text, or sound.
 Components:
o Computer Vision: Helps the system “see” and recognize things in
images or videos, like identifying a face or an object.
o Natural Language Processing (NLP): Lets the system understand
and process human language so it can interact in a conversation.
o Speech Recognition: Turns spoken words into text, letting the
system follow voice commands.
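The NLP component above can be illustrated by the very first step any language system performs: turning raw text into countable tokens. This is a minimal pure-Python "bag of words" sketch, not how production NLP systems work (those add stemming, embeddings, parsing, and more).

```python
import re
from collections import Counter

def bag_of_words(text):
    """Lower-case the text, keep alphabetic tokens, count each one."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens)

# Example sentence invented for illustration.
counts = bag_of_words("The cat sat on the mat. The cat purred.")
print(counts["the"], counts["cat"])  # 3 2
```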

Learning + Perception Together


 Example: Imagine a self-driving car. It uses perception to recognize
pedestrians, traffic lights, and other vehicles. Then it applies learning to
respond to these objects correctly, like stopping at a red light or slowing
down for pedestrians.

In Summary: Cognitive computing uses learning to get better at tasks with experience and perception to interpret data from the environment, allowing it to make smart, informed decisions.

TERMINOLOGIES
Cognitive computing refers to technology that mimics human thought processes using artificial intelligence (AI) and machine learning. Key terms include: natural language processing (NLP), for understanding human language; machine learning (ML), for improving performance through data; neural networks, for pattern recognition; contextual awareness, for understanding the environment; and deep learning, a subset of ML that uses multi-layered neural networks to analyze large datasets. The goal is to enhance decision-making and automate complex tasks by simulating human cognition.
Machine Learning vs. Deep Learning
 Approach: Machine learning applies statistical algorithms to learn the hidden patterns and relationships in the dataset; deep learning uses artificial neural network architectures to learn them.
 Data: Machine learning can work on smaller datasets; deep learning requires a much larger volume of data.
 Task suitability: Machine learning is better for simpler, low-label tasks; deep learning is better for complex tasks such as image processing and natural language processing.
 Training time: Machine learning takes less time to train a model; deep learning takes more.
 Features: In machine learning, a model is built from relevant features that are manually extracted (for example, from images to detect an object); in deep learning, relevant features are extracted automatically in an end-to-end learning process.
 Interpretability: Machine learning models are less complex, and their results are easy to interpret; deep learning models are more complex and work like a black box, so their results are not easy to interpret.
 Hardware: Machine learning can run on a CPU and needs less computing power; deep learning requires a high-performance computer with a GPU.
Deep Learning Applications:
The main applications of deep learning AI can be divided into computer vision, natural
language processing (NLP), and reinforcement learning.
1. Computer vision
The first deep learning application is computer vision. In computer vision, deep learning models enable machines to identify and understand visual data. Some of the main applications of deep learning in computer vision include:
 Object detection and recognition: Deep learning models can be used to identify and locate objects within images and videos, enabling tasks such as self-driving cars, surveillance, and robotics.
 Image classification: Deep learning models can be used to classify images into
categories such as animals, plants, and buildings. This is used in applications such as
medical imaging, quality control, and image retrieval.
 Image segmentation: Deep learning models can be used to segment images into different regions, making it possible to identify specific features within them.
2. Natural language processing (NLP):

The second deep learning application is NLP. In NLP, deep learning models enable machines to understand and generate human language. Some of the main applications of deep learning in NLP include:
 Automatic text generation: Deep learning models can learn from a corpus of text, and new text such as summaries or essays can then be generated automatically using these trained models.
 Language translation: Deep learning models can translate text from one language to
another, making it possible to communicate with people from different linguistic
backgrounds.
 Sentiment analysis: Deep learning models can analyze the sentiment of a piece of
text, making it possible to determine whether the text is positive, negative, or
neutral. This is used in applications such as customer service, social media
monitoring, and political analysis.
 Speech recognition: Deep learning models can recognize and transcribe spoken
words, making it possible to perform tasks such as speech-to-text conversion, voice
search, and voice-controlled devices.
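The sentiment-analysis bullet above can be sketched in a few lines. Real systems learn sentiment from data with deep models; the hand-written word lists below are a deliberately naive stand-in that only illustrates the task's input and output (text in, positive/negative/neutral out).

```python
# Naive lexicon-based sentiment scoring; word lists are invented
# for illustration, not a real sentiment lexicon.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))   # positive
print(sentiment("terrible awful service"))      # negative
print(sentiment("the package arrived today"))   # neutral
```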
3. Reinforcement learning:
In reinforcement learning, deep learning is used to train agents to take actions in an environment so as to maximize a reward. Some of the main applications of deep learning in reinforcement learning include:
 Game playing: Deep reinforcement learning models have been able to beat human
experts at games such as Go, Chess, and Atari.
 Robotics: Deep reinforcement learning models can be used to train robots to
perform complex tasks such as grasping objects, navigation, and manipulation.
 Control systems: Deep reinforcement learning models can be used to control
complex systems such as power grids, traffic management, and supply chain
optimization.
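The reward-driven learning described above can be sketched with tabular Q-learning on a made-up five-state corridor: the agent starts at state 0 and is rewarded only for reaching state 4. Deep reinforcement learning replaces the table with a neural network, but the update rule is the same idea. All environment details below are invented for illustration.

```python
import random

# Tiny corridor environment: states 0..4, reward 1 only at the goal.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]            # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):          # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q towards reward + discounted best future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, moving right should beat moving left in every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```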
Challenges in Deep Learning
Deep learning has made significant advancements in various fields, but there are still some
challenges that need to be addressed. Here are some of the main challenges in deep
learning:
1. Data availability: Deep learning requires large amounts of data to learn from, and gathering enough training data is a major concern.
2. Computational resources: Training a deep learning model is computationally expensive and requires specialized hardware such as GPUs and TPUs.
3. Time-consuming: Depending on the computational resources, training (especially on sequential data) can take a very long time, even days or months.
4. Interpretability: Deep learning models are complex and work like a black box, so their results are very difficult to interpret.
5. Overfitting: When a model is trained for too long, it becomes too specialized to the training data, leading to overfitting and poor performance on new data.

Advantages of Deep Learning:


1. High accuracy: Deep Learning algorithms can achieve state-of-the-art performance in
various tasks, such as image recognition and natural language processing.
2. Automated feature engineering: Deep Learning algorithms can automatically
discover and learn relevant features from data without the need for manual feature
engineering.
3. Scalability: Deep Learning models can scale to handle large and complex datasets,
and can learn from massive amounts of data.
4. Flexibility: Deep Learning models can be applied to a wide range of tasks and can
handle various types of data, such as images, text, and speech.
5. Continual improvement: Deep Learning models can continually improve their
performance as more data becomes available.
Disadvantages of Deep Learning:
1. High computational requirements: Deep Learning AI models require large amounts
of data and computational resources to train and optimize.
2. Requires large amounts of labeled data: Deep Learning models often require a large
amount of labeled data for training, which can be expensive and time-consuming to
acquire.
3. Interpretability: Deep Learning models can be challenging to interpret, making it
difficult to understand how they make decisions.
4. Overfitting: Deep Learning models can sometimes overfit to the training data, resulting in poor performance on new and unseen data.
5. Black-box nature: Deep Learning models are often treated as black boxes, making it difficult to understand how they work and how they arrived at their predictions.

What is a neural network?


Neural Networks are computational models that mimic the complex functions of the human
brain. The neural networks consist of interconnected nodes or neurons that process and
learn from data, enabling tasks such as pattern recognition and decision making in machine
learning.
Importance of Neural Networks
The ability of neural networks to identify patterns, solve intricate puzzles, and adjust to
changing surroundings is essential. Their capacity to learn from data has far-reaching effects,
ranging from revolutionizing technology like natural language processing and self-driving
automobiles to automating decision-making processes and increasing efficiency in numerous
industries. The development of artificial intelligence is largely dependent on neural
networks, which also drive innovation and influence the direction of technology.

Advantages of Neural Networks


Neural networks are widely used in many different applications because of their many
benefits:
 Adaptability: Neural networks are useful for activities where the link between inputs
and outputs is complex or not well defined because they can adapt to new situations
and learn from data.
 Pattern Recognition: Their proficiency in pattern recognition makes them effective in tasks such as audio and image identification, natural language processing, and other intricate data patterns.
 Parallel Processing: Because neural networks are capable of parallel processing by
nature, they can process numerous jobs at once, which speeds up and improves the
efficiency of computations.
 Non-Linearity: Neural networks are able to model and comprehend complicated
relationships in data by virtue of the non-linear activation functions found in
neurons, which overcome the drawbacks of linear models.
Disadvantages of Neural Networks
Neural networks, while powerful, are not without drawbacks and difficulties:
 Computational Intensity: Large neural network training can be a laborious and
computationally demanding process that demands a lot of computing power.
 Black box Nature: As “black box” models, neural networks pose a problem in
important applications since it is difficult to understand how they make decisions.
 Overfitting: Overfitting is a phenomenon in which neural networks commit training
material to memory rather than identifying patterns in the data. Although
regularization approaches help to alleviate this, the problem still exists.
 Need for Large datasets: For efficient training, neural networks frequently need
sizable, labeled datasets; otherwise, their performance may suffer from incomplete
or skewed data.
Frequently Asked Questions (FAQs)
1. What is a neural network?
A neural network is an artificial system made of interconnected nodes (neurons) that process information, modeled after the structure of the human brain. It is employed in machine learning tasks where patterns are extracted from data.
2. How does a neural network work?
Layers of connected neurons process data in neural networks. The network processes input
data, modifies weights during training, and produces an output depending on patterns that
it has discovered.
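The answer above (layers of connected neurons, weighted inputs, an output) can be shown with a hand-sized forward pass. The weights below are hand-picked rather than trained, so this tiny 2-2-1 network computes logical AND; training would instead adjust these weights from data.

```python
import math

def sigmoid(z):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One hidden layer: h = sigmoid(W1*x + b1), out = sigmoid(W2*h + b2)."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

# Hand-chosen (untrained) weights that make the network behave like AND.
W1 = [[10.0, 10.0], [-10.0, -10.0]]
b1 = [-15.0, 5.0]
W2 = [10.0, -10.0]
b2 = -5.0

print(round(forward([1, 1], W1, b1, W2, b2)))  # 1
print(round(forward([0, 1], W1, b1, W2, b2)))  # 0
```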
3. What are the common types of neural network architectures?
Feedforward neural networks, recurrent neural networks (RNNs), convolutional neural
networks (CNNs), and long short-term memory networks (LSTMs) are examples of common
architectures that are each designed for a certain task.

Types of neural network (notes)
