Vectors
Vectors are one-dimensional arrays of numbers, typically denoted by lowercase, bold, italic letters. The elements of a vector are arranged in order, so each element can be accessed by its index. A vector represents a point in space: a vector of length two represents a location in a 2D plane, a vector of length three a location in 3D space, and a vector of length n a location in n-dimensional space.
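As a brief illustration (using NumPy, an assumed tool not named in the paper), a vector is an ordered one-dimensional array whose elements are accessed by index:

```python
import numpy as np

# A length-3 vector represents a point in 3D space.
x = np.array([1.0, 2.0, 3.0])

# Elements are accessed by index (0-based in NumPy).
first = x[0]   # first element
last = x[-1]   # last element

print(x.shape)  # (3,) -- one-dimensional array of length 3
```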
Matrix
A matrix is a rectangular array of objects arranged in rows and columns. A matrix can contain any numbers, symbols, or expressions, and each such entry is called an element of the matrix. The numbers of rows and columns need not be equal.
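A small sketch of the idea, again assuming NumPy: each element of a matrix is addressed by its (row, column) index pair.

```python
import numpy as np

# A 2x3 matrix: 2 rows, 3 columns.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Each element is addressed by a (row, column) index.
element = A[0, 2]   # row 0, column 2

print(A.shape)      # (2, 3)
```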
Norms
A norm is a quantity that describes the size of a vector; the Euclidean norm measures the vector's distance from the origin. The most common norm in machine learning is the L2 norm, written ||x||2 and often abbreviated to ||x||.
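For example, the L2 norm of the vector (3, 4) is its Euclidean distance from the origin, sqrt(3² + 4²) = 5. A minimal sketch, assuming NumPy:

```python
import numpy as np

x = np.array([3.0, 4.0])

# L2 (Euclidean) norm: distance of x from the origin.
l2 = np.linalg.norm(x)            # sqrt(3^2 + 4^2) = 5.0

# Equivalent explicit computation from the definition.
l2_manual = np.sqrt(np.sum(x ** 2))
```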
Vectorization
Vectorization is a technique for speeding up computations, especially in deep learning algorithms, where we deal with huge amounts of data. Instead of using an explicit for loop that performs one scalar calculation at a time, we arrange the data as matrices and vectors and apply whole-array operations.
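The contrast can be sketched as follows (a hypothetical NumPy example, not from the paper): both versions compute the same dot product, but the vectorized call runs in optimized compiled code rather than one Python iteration per element.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(100_000)
b = rng.random(100_000)

# Explicit for loop: one scalar multiply-add per iteration.
total = 0.0
for i in range(len(a)):
    total += a[i] * b[i]

# Vectorized: the same dot product in a single call.
total_vec = np.dot(a, b)
```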
VI. NEURAL NETWORK FOR LINEAR ALGEBRA
These operations compute the output of the first (and only) hidden layer of the above neural network. Let's break it down. Every column of this network is a vector. Vectors act as dynamic arrays that can be used to collect data; in the current neural network, the vector x holds the input. It is not strictly necessary to represent inputs as vectors, but doing so makes it increasingly convenient to perform operations in parallel.
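A minimal sketch of the hidden-layer computation, with hypothetical sizes and a ReLU activation (neither specified in the paper): a single matrix-vector product replaces a loop over every neuron.

```python
import numpy as np

# Hypothetical sizes: 3 inputs, 4 hidden units.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weight matrix
b = rng.standard_normal(4)        # bias vector
x = np.array([0.5, -1.0, 2.0])    # input vector

# Hidden-layer pre-activation: one matrix-vector product
# computes all 4 neurons at once.
z = W @ x + b
h = np.maximum(z, 0.0)            # ReLU activation (an assumed choice)

print(h.shape)  # (4,)
```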
www.irjmets.com @International Research Journal of Modernization in Engineering, Technology and Science
e-ISSN: 2582-5208
International Research Journal of Modernization in Engineering Technology and Science
( Peer-Reviewed, Open Access, Fully Refereed International Journal )
Volume:04/Issue:05/May-2022 Impact Factor- 6.752 www.irjmets.com
VII. CONCLUSION
In this paper we presented the application of mathematics in machine learning algorithms and their working, and reasoned out why mathematics is important for machine learning. The paper shows how mathematical concepts underpin machine learning algorithms that can be used to solve tasks with accuracy.