QN: What Is the Difference Between Symbolic AI and ML?
Ans:
In classical programming, the paradigm of symbolic AI, humans input rules (a program) and data to
be processed according to these rules, and out come answers (see figure 1.2). With
machine learning, humans input data as well as the answers expected from the data,
and out come the rules. These rules can then be applied to new data to produce original answers.
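The contrast can be sketched in code. Below, a hand-written rule (symbolic AI) sits next to a toy "rule search" that infers a keyword from labeled examples; the spam-filter scenario and all function names are illustrative assumptions, not part of the original text.

```python
from collections import Counter

# Symbolic AI / classical programming: a human writes the rule directly.
def is_spam_rule(text):
    # Hand-crafted rule: flag messages containing a known phrase.
    return "win money" in text.lower()

# Machine learning: data plus expected answers go in; a rule comes out.
def learn_keyword(examples):
    # examples: list of (text, label) pairs; label True means spam.
    # Pick the word that best separates spam from non-spam (a toy rule search).
    spam = Counter(w for t, lab in examples if lab for w in t.lower().split())
    ham = Counter(w for t, lab in examples if not lab for w in t.lower().split())
    return max(spam, key=lambda w: spam[w] - ham.get(w, 0))

data = [("win money now", True), ("free money win", True), ("meeting at noon", False)]
keyword = learn_keyword(data)  # the learned "rule": flag texts containing this word
```

The learned keyword can then be applied to new messages the system has never seen, which is the sense in which the extracted rules "produce original answers."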
A machine-learning model transforms its input data into meaningful outputs, a process that is “learned” from
exposure to known examples of inputs and outputs. Therefore, the central problem in machine learning and deep
learning is to meaningfully transform data: in other words, to learn useful representations of the input data at
hand—representations that get us closer to the expected output.
Machine-learning models are all about finding appropriate representations for their input data—transformations
of the data that make it more amenable to the task at hand, such as a classification task.
So that’s what machine learning is, technically: searching for useful representations of some input data, within a
predefined space of possibilities, using guidance
from a feedback signal. This simple idea allows for solving a remarkably broad range
of intellectual tasks, from speech recognition to autonomous car driving.
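The idea of a useful representation can be made concrete with a toy example (my own illustration, not from the original text): points labeled by which side of a diagonal they fall on. In raw (x, y) coordinates the rule involves both inputs, but the representation u = x − y turns the task into a simple sign check.

```python
import numpy as np

# Points in 2-D, labeled 1 if x > y, else 0.
points = np.array([[2.0, 1.0], [0.5, 3.0], [4.0, 0.0], [1.0, 1.5]])
labels = np.array([1, 0, 1, 0])

# A change of representation: project each point onto u = x - y.
u = points[:, 0] - points[:, 1]

# In the new representation, the task reduces to checking the sign of u.
predictions = (u > 0).astype(int)
print((predictions == labels).all())  # True
```

Here the representation was chosen by hand; machine learning automates exactly this search, guided by a feedback signal.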
QN: What Is Deep Learning?
Ans:
Deep learning is a specific subfield of machine learning: a new take on learning representations from data that
puts an emphasis on learning successive layers of increasingly meaningful representations.
The deep in deep learning isn’t a reference to any kind of deeper understanding achieved by the approach; rather,
it stands for this idea of successive layers of representations.
The number of layers contributing to a model of the data is called the depth of the model.
Other appropriate names for the field could have been layered representations learning and hierarchical
representations learning.
Modern deep learning often involves tens or even hundreds of successive layers of representations—
and they’re all learned automatically from exposure to training data.
Meanwhile, other approaches to machine learning tend to focus on learning only one or two layers of
representations of the data; hence, they’re sometimes called shallow learning.
You can think of a deep network as a multistage information-distillation operation, where information goes
through successive filters and comes out increasingly purified (that is, useful with regard to some task).
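The layered-transformation idea can be sketched in a few lines of NumPy. Each "layer" below is an affine transform followed by a nonlinearity, and its output feeds the next layer; the weights are random placeholders for illustration, whereas in a real network they would be learned from data.

```python
import numpy as np

def relu(x):
    # A common nonlinearity: zero out negative values.
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                     # the input representation
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer-1 weights (placeholders)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # layer-2 weights (placeholders)

h1 = relu(x @ W1 + b1)   # layer 1: a first, transformed representation
out = h1 @ W2 + b2       # layer 2: the final representation (e.g. class scores)
print(out.shape)         # (1, 3)
```

Stacking tens or hundreds of such stages gives the "information distillation" picture: each layer's output is a progressively more task-relevant representation of the original input.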
Learning means finding a set of values for the weights of all layers in a network, such that the network will
correctly map example inputs to their associated targets.
To control the output of a neural network, you need to be able to measure how far this output is from what you
expected. This is the job of the loss function of the network, also called the objective function. The loss function
takes the predictions of the network and the true target (what you wanted the network to output) and computes a
distance score, capturing how well the network has done on this specific example.
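As a concrete instance of such a distance score, here is mean squared error, one common choice of loss function (the specific values are made up for illustration):

```python
import numpy as np

def mse_loss(predictions, targets):
    # Mean squared error: average squared distance between
    # the network's predictions and the true targets.
    return np.mean((predictions - targets) ** 2)

y_pred = np.array([0.8, 0.1, 0.4])  # what the network output
y_true = np.array([1.0, 0.0, 0.0])  # what we wanted it to output
print(mse_loss(y_pred, y_true))     # ~0.07: a small but nonzero distance
```

A perfect prediction would score 0; the worse the prediction, the larger the score, which is what makes it usable as a feedback signal.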
The fundamental trick in deep learning is to use this score as a feedback signal to adjust the value of the weights
a little, in a direction that will lower the loss score for the current example. This adjustment is the job of the
optimizer, which implements what’s called the Backpropagation algorithm: the central algorithm in deep
learning.
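The whole feedback loop can be sketched with the simplest possible "network," a single weight w in the model y = w * x (a deliberate simplification: real backpropagation applies the chain rule through every layer, but the loop structure is the same):

```python
x, y_true = 2.0, 10.0   # one training example: input and its target
w = 1.0                 # initial weight (a poor guess)
lr = 0.05               # learning rate: how big each adjustment is

for _ in range(50):
    y_pred = w * x                       # forward pass: compute the prediction
    loss = (y_pred - y_true) ** 2        # loss score for this example
    grad = 2 * (y_pred - y_true) * x     # d(loss)/dw via the chain rule
    w -= lr * grad                       # optimizer step: nudge w to lower the loss

print(round(w, 3))  # converges toward 5.0, since 5.0 * 2.0 = 10.0
```

Each pass around the loop uses the loss score as feedback to adjust the weight "a little, in a direction that will lower the loss," which is exactly the trick the paragraph above describes.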