Sigmoid Function: Soft Computing Assignment
UNIT 1
The sigmoid function is one of the most widely used activation functions (AFs) in neural networks. It is a differentiable real function, defined for all real input values, with a positive derivative everywhere and a degree of smoothness. The sigmoid function appears in the output layer of deep learning models and is used for predicting probability-based outputs; it is represented as f(x) = 1 / (1 + e^(-x)).
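As a quick sketch, the sigmoid and its derivative can be written in Python (the function names are illustrative):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real x into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative is s(x) * (1 - s(x)); smooth and defined everywhere."""
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))  # 0.5, the midpoint of the curve
```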
The hyperbolic tangent function, a.k.a. the tanh function, is another type of AF. It is smooth and zero-centered, with outputs ranging between -1 and 1.
3. Softmax Function
The softmax function is another type of AF used in neural networks to compute a vector of outputs that each range between 0 and 1, with the sum of the probabilities equal to 1.
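A minimal sketch of the computation, using the standard max-subtraction trick for numerical stability:

```python
import math

def softmax(xs):
    """Convert raw scores into probabilities that sum to 1."""
    # Subtracting the max does not change the result but avoids overflow.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs, sum(probs))  # each value in (0, 1); the sum is 1.0
```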
4. Softsign Function
The softsign function is another AF used in neural networks; it is represented by: f(x) = x / (1 + |x|).
One of the most popular AFs in DL models, the rectified linear unit (ReLU) delivers strong performance with stellar results. Compared to other AFs like the sigmoid and tanh functions, the ReLU function offers much better performance and generalization, and it trains efficiently with gradient-descent methods.
The ReLU function performs a threshold operation on each input element where all values less than zero are set to zero. Thus, the ReLU is represented as: f(x) = max(0, x).
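The threshold operation described above is a one-line function; a quick sketch:

```python
def relu(x):
    """ReLU: pass positive values through, set everything below zero to zero."""
    return max(0.0, x)

print([relu(v) for v in (-2.0, -0.5, 0.0, 3.0)])  # [0.0, 0.0, 0.0, 3.0]
```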
The exponential linear units (ELU) function is an AF that is also used to speed up the training of neural networks (just like the ReLU function). The biggest advantage of the ELU function is that it can alleviate the vanishing gradient problem by using the identity for positive values, while its smooth exponential curve for negative values improves the learning characteristics of the network.
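Assuming the standard ELU formulation (identity for positive inputs, alpha * (e^x - 1) for negative ones, with hyperparameter alpha), a short sketch:

```python
import math

def elu(x, alpha=1.0):
    """ELU: identity for x > 0; alpha * (exp(x) - 1) otherwise.
    The smooth negative branch keeps gradients alive for x < 0."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

print(elu(2.0))                # 2.0 (identity on the positive side)
print(round(elu(-1.0), 4))     # -0.6321 (saturates toward -alpha)
```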
1. Hard computing is deterministic, whereas soft computing is stochastic in nature, i.e., it is a randomly defined process that can be analyzed statistically but not with precision.
2. Hard computing relies on binary logic and predefined instructions, like a numerical analysis program. Soft computing is based on the model of the human mind, where reasoning is probabilistic and approximate.
3. Hard computing needs exact input data and is sequential; on the other hand, soft computing can handle an abundance of ambiguous or noisy data and handles multiple computations in parallel.
4. Hard computing takes a lot of time to complete tasks and is costly, while soft computing completes tasks quickly with a higher Machine Intelligence Quotient (MIQ) and lower cost. It also provides better communication.
5. Hard computing is best suited for solving mathematical problems that have precise answers. Soft computing resolves the nonlinear problems that involve uncertainty and imprecision.
6. Hard computing requires a precisely stated analytical model and takes more time to compute, while the model soft computing is based on is that of human intelligence.
UNIT 2
A1. The Self-Organizing Map (SOM) is one of the most popular neural network models. It is based on unsupervised learning, which means that no human intervention is needed during the learning and that little needs to be known about the characteristics of the
input data. We could, for example, use the SOM for clustering data without knowing the
class memberships of the input data. The SOM can be used to detect features inherent
to the problem and thus has also been called SOFM, the Self-Organizing Feature Map.
The Self-Organizing Map was developed by Professor Kohonen. The SOM provides a topology-preserving mapping from the high-dimensional input space to map units. Map units,
or neurons, usually form a two-dimensional lattice and thus the mapping is a mapping
from high dimensional space onto a plane. The property of topology preserving means
that the mapping preserves the relative distance between the points. Points that are
near each other in the input space are mapped to nearby map units in the SOM. The
SOM can thus serve as a cluster analyzing tool of high-dimensional data. Also, the
SOM has the capability to generalize. Generalization capability means that the network
can recognize or characterize inputs it has never encountered before. A new input is simply mapped to the map unit whose weight vector is closest to it.
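The clustering behaviour described above can be sketched with a toy SOM in plain Python (the lattice size, decay schedules, and function names are illustrative choices, not part of the original SOM specification):

```python
import random

def train_som(data, rows=4, cols=4, dim=2, epochs=30, seed=0):
    """Toy SOM: a rows x cols lattice of weight vectors is pulled toward
    the inputs; the best-matching unit and its lattice neighbours move."""
    rng = random.Random(seed)
    weights = {(r, c): [rng.random() for _ in range(dim)]
               for r in range(rows) for c in range(cols)}
    for epoch in range(epochs):
        lr = 0.5 * (1.0 - epoch / epochs)                   # decaying learning rate
        radius = max(1.0, (rows / 2.0) * (1.0 - epoch / epochs))
        for x in data:
            # Best-matching unit: smallest squared distance to the input.
            bmu = min(weights, key=lambda u: sum((wi - xi) ** 2
                      for wi, xi in zip(weights[u], x)))
            for (r, c), w in weights.items():
                # Units close to the BMU on the lattice are updated too,
                # which is what preserves the topology of the input space.
                if ((r - bmu[0]) ** 2 + (c - bmu[1]) ** 2) ** 0.5 <= radius:
                    for i in range(dim):
                        w[i] += lr * (x[i] - w[i])
    return weights

# Two well-separated clusters: nearby inputs end up on nearby map units.
data = [(0.05, 0.05), (0.1, 0.0), (0.0, 0.1),
        (0.95, 0.95), (1.0, 0.9), (0.9, 1.0)]
som = train_som(data)
```

After training, inputs from the two clusters win different map units, which is the cluster-analysis behaviour described above.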
A2. It has been reported through simulations that Hopfield networks for crossbar switching almost always achieve the maximum throughput. It has therefore appeared that Hopfield networks, with their capability for high-speed computation by parallel processing, could possibly be used for crossbar switching. However, it has not been determined whether they can
always achieve the maximum throughput. In the paper, the capabilities and limitations of
a Hopfield network for crossbar switching are considered. The Hopfield network
considered in the paper is generated from the most familiar and seemingly the most
powerful neural representation of crossbar switching. Based on a theoretical analysis of
the network dynamics, we show what switching control the Hopfield network can or
cannot produce. Consequently, we are able to show that a Hopfield network cannot
always achieve the maximum throughput.
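The passage concerns crossbar switching specifically; as general background, the underlying model is the Hopfield associative memory, sketched below with Hebbian storage of +1/-1 patterns (an illustrative toy, not the crossbar formulation analyzed in the paper):

```python
def train_hopfield(patterns):
    """Hebbian learning: weight w[i][j] accumulates the correlation of
    units i and j over the stored +1/-1 patterns (no self-connections)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronous updates: each neuron takes the sign of its net input."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, -1, 1, -1, 1, -1]
W = train_hopfield([stored])
# One corrupted bit is repaired by running the network dynamics.
print(recall(W, [-1, -1, 1, -1, 1, -1]))  # -> [1, -1, 1, -1, 1, -1]
```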
UNIT 3
A1. Crisp sets are the sets that we have used most of our lives. In a crisp set, an
element is either a member of the set or not. For example, a jelly bean belongs in the
class of food known as candy. Mashed potatoes do not. Fuzzy sets, on the other hand,
allow elements to be partially in a set.
Set theory and sets are among the fundamental and most widely used concepts in mathematics. A crisp set, or simply a set, is a well-defined collection of distinct objects, where each object is considered in its own right. Here are the 11 main properties/laws of crisp sets:
Law of Commutativity:
● (A ∪ B) = (B ∪ A)
● (A ∩ B) = (B ∩ A)
Law of Associativity:
● (A ∪ B) ∪ C = A ∪ (B ∪ C)
● (A ∩ B) ∩ C = A ∩ (B ∩ C)
Law of Distributivity:
● A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
● A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Idempotent Law
● A ∪ A = A
● A ∩ A = A
Identity Law (Φ denotes the empty set, E the universal set)
● A ∪ Φ = A and A ∪ E = E
● A ∩ Φ = Φ and A ∩ E = A
Law of Absorption
● A ∪ (A ∩ B) = A
● A ∩ (A ∪ B) = A
Involution Law
● (Ac)c = A
Law of Transitivity
● If A ⊆ B and B ⊆ C, then A ⊆ C
Law of Excluded Middle
● (A ∪ Ac) = E
Law of Contradiction
● (A ∩ Ac) = Φ
De Morgan's Laws
● (A ∪ B)c = Ac ∩ Bc
● (A ∩ B)c = Ac ∪ Bc
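These laws can be checked mechanically with Python's built-in sets, taking complements relative to a chosen universal set E (an illustrative check on one example, not a proof):

```python
E = set(range(10))          # universal set for this example
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def comp(s):
    """Complement taken relative to the universal set E."""
    return E - s

# De Morgan's laws
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)
# Absorption and involution
assert A | (A & B) == A and A & (A | B) == A
assert comp(comp(A)) == A
# Excluded middle and contradiction
assert A | comp(A) == E and A & comp(A) == set()
print("all laws hold on this example")
```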
A2. The order in which training patterns are presented to a simplified fuzzy ARTMAP (SFAM) neural network affects the classification performance. The common method to address this problem is to run several simulations with the training patterns presented in random order, where a voting strategy is used to compute the final performance. Recently, an ordering method based on min–max clustering was introduced to select the presentation order of the training patterns based on a single simulation. In this paper, another single-simulation method is proposed for ordering the training patterns to improve the performance of SFAM. The proposed method is evaluated on signal data and three other datasets from the UCI repository. The proposed method has the advantage of a shorter training time compared to the random ordering and min–max methods. When compared to the random ordering method, the new ordering scheme has the additional advantage of requiring only a single simulation. As the proposed method is general, it can also be applied to other neural network classifiers.
UNIT 4
A2. Fitness Function (also known as the Evaluation Function) evaluates how close a
given solution is to the optimum solution of the desired problem. It determines how fit a
solution is. In genetic algorithms, each solution is generally represented as a string of
binary numbers, known as a chromosome. We have to test these solutions and come
up with the best set of solutions to solve a given problem. Each solution, therefore,
needs to be awarded a score, to indicate how close it came to meeting the overall
specification of the desired solution. This score is generated by applying the fitness function to the test results obtained from the tested solution. The following requirements should be satisfied by any fitness function.
requirements should be satisfied by any fitness function.
1. The fitness function should be clearly defined. The reader should be able to
clearly understand how the fitness score is calculated.
2. The fitness function should be implemented efficiently. If the fitness function
becomes the bottleneck of the algorithm, then the overall efficiency of the
genetic algorithm will be reduced.
3. The fitness function should quantitatively measure how fit a given solution is
in solving the problem.
4. The fitness function should generate intuitive results. The best/worst
candidates should have best/worst score values.
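As a concrete illustration of these four requirements, consider the classic OneMax problem (maximize the number of 1-bits in a chromosome); the names below are illustrative:

```python
def fitness(chromosome):
    """OneMax fitness: the count of 1-bits. It is clearly defined, cheap
    to compute, quantitative, and ranks candidates intuitively
    (all-ones scores best, all-zeros scores worst)."""
    return sum(chromosome)

population = [[0, 1, 0, 1], [1, 1, 1, 1], [0, 0, 0, 0]]
scores = [fitness(c) for c in population]
print(scores)  # [2, 4, 0] -> the second candidate is the fittest
```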
Q2. What is reproduction? Give various methods of selecting chromosomes as parents for crossover.
A2. In genetic algorithms and evolutionary computation, crossover, also called recombination, is a
genetic operator used to combine the genetic information of two parents to generate new offspring. It
is one way to stochastically generate new solutions from an existing population, and is analogous to
the crossover that happens during sexual reproduction in biology. Solutions can also be generated
by cloning an existing solution, which is analogous to asexual reproduction. Newly generated
solutions are typically mutated before being added to the population.
Different algorithms in evolutionary computation may use different data structures to store genetic
information, and each genetic representation can be recombined with different crossover operators.
Typical data structures that can be recombined with crossover are bit arrays, vectors of real
numbers, or trees.
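One common way to select parents is roulette-wheel (fitness-proportionate) selection, which can then feed a single-point crossover on bit arrays; a sketch (function names are illustrative):

```python
import random

def roulette_select(population, fitnesses, rng):
    """Pick one parent with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, f in zip(population, fitnesses):
        running += f
        if running >= pick:
            return individual
    return population[-1]

def single_point_crossover(p1, p2, rng):
    """Cut both parents at one random point and swap the tails."""
    point = rng.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

rng = random.Random(42)
pop = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]]
fits = [sum(c) for c in pop]
a = roulette_select(pop, fits, rng)
b = roulette_select(pop, fits, rng)
child1, child2 = single_point_crossover(a, b, rng)
```

Other common parent-selection schemes include tournament selection and rank-based selection.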
UNIT 5
In ACO, a set of software agents called artificial ants search for good solutions to a
given optimization problem. To apply ACO, the optimization problem is transformed into
the problem of finding the best path on a weighted graph. The artificial ants (hereafter
ants) incrementally build solutions by moving on the graph. The solution construction
process is stochastic and is biased by a pheromone model, that is, a set of parameters
associated with graph components (either nodes or edges) whose values are modified
at runtime by the ants.
The easiest way to understand how ant colony optimization works is by means of an
example. We consider its application to the traveling salesman problem (TSP). In the
TSP a set of locations (e.g. cities) and the distances between them are given. The
problem consists of finding a closed tour of minimal length that visits each city once and
only once.
To apply ACO to the TSP, we consider the graph defined by associating the set of cities
with the set of vertices of the graph. This graph is called the construction graph. Since in the
TSP it is possible to move from any given city to any other city, the construction graph is
fully connected and the number of vertices is equal to the number of cities. We set the
lengths of the edges between the vertices to be proportional to the distances between
the cities represented by these vertices and we associate pheromone values and
heuristic values with the edges of the graph. Pheromone values are modified at runtime
and represent the accumulated experience of the ant colony, while heuristic values are problem-dependent values that, in the case of the TSP, are set to be the inverse of the
lengths of the edges.
The ants construct the solutions as follows. Each ant starts from a randomly selected
city (vertex of the construction graph). Then, at each construction step it moves along
the edges of the graph. Each ant keeps a memory of its path, and in subsequent steps it
chooses among the edges that do not lead to vertices that it has already visited. An ant
has constructed a solution once it has visited all the vertices of the graph. At each
construction step, an ant probabilistically chooses the edge to follow among those that
lead to yet unvisited vertices. The probabilistic rule is biased by pheromone values and
heuristic information: the higher the pheromone and the heuristic value associated to an
edge, the higher the probability an ant will choose that particular edge. Once all the ants
have completed their tour, the pheromone on the edges is updated. Each of the
pheromone values is initially decreased by a certain percentage. Each edge then
receives an amount of additional pheromone proportional to the quality of the solutions
to which it belongs (there is one solution per ant).
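The construction and pheromone-update loop described above can be sketched for the TSP as follows (the parameter values for alpha, beta and the evaporation rate are typical illustrative choices, not prescribed by the text):

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
            evaporation=0.5, q=1.0, seed=0):
    """Sketch of Ant System for the TSP on a full distance matrix `dist`.
    alpha/beta weight pheromone vs heuristic (1/distance); `evaporation`
    is the fraction of pheromone that decays each iteration."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]               # pheromone values
    eta = [[0.0 if i == j else 1.0 / dist[i][j]       # heuristic values
            for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                # Probabilistic choice among unvisited cities, biased by
                # pheromone and heuristic values on the outgoing edges.
                choices = [j for j in range(n) if j not in visited]
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta)
                           for j in choices]
                j = rng.choices(choices, weights=weights)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation first, then deposit proportional to solution quality.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - evaporation)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len
```

On a small instance, such as four cities at the corners of a unit square, the sketch reliably recovers the optimal perimeter tour of length 4.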