AI Test Study Guide

The document discusses different types of neural networks, including recurrent neural networks and feedforward neural networks. Recurrent neural networks can process sequences of inputs and exhibit dynamic temporal behavior, while feedforward networks are acyclic and learn by local optimization. Learning methods for neural networks include backpropagation and training on partitions of a dataset with error propagation.


Recurrent vs Feed Forward Neural Networks

Multi-layer feedforward networks:
- Acyclic
- Sufficient (converges on a solution)
- Learning method works by local optimization
- Solution is a black box; generalizability is questionable
- Examples of functions that can't be learned:

Recurrent neural networks:
- Connections between units form a directed cycle
- Can exhibit dynamic temporal behavior
- Can process arbitrary sequences of inputs

Minsky + Papert, early work:
- "What the frog's eye tells the frog's brain"
- Guaranteed to work under what conditions? Linearly separable (single-layer perceptrons)
- Examples unsolvable by a perceptron
- Generalized in two forms: threshold, transform

Back-propagation learning:
- Not guaranteed to reach a stable configuration
- Optimization process, not a solver (hill climbing)
- Process: learning constant, training set, schedule for training
- Break the full training set into partitions
- Training and testing: output value is compared to the correct answer, and the error propagates back; minimizes error
- Repeat the process with different partitions
- Assignment-of-credit problem

Decision trees

Non-parametrics:
- Turn the training set, X, into a classifier
- Must maintain the full set and have an indexing structure
- Nearest neighbors

Bayesian networks:
- Max expected utility

"Build a network that does a logical function" - I can do that! "Build a decision tree that does something" - I can do that too! (A minimal sketch of the first follows below.)
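As a concrete illustration of the "network that does a logical function" point, and of why linear separability matters for single-layer perceptrons, here is a minimal Python sketch; the weights and bias are hand-picked assumptions for illustration, not values from the slides.

```python
# A single threshold unit with hand-picked weights computes logical AND.
# g is the threshold transfer function: fire (1) if the weighted sum plus bias
# is >= 0, otherwise stay silent (0).

def g(s):
    return 1 if s >= 0 else 0

def unit(weights, bias, inputs):
    return g(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Hand-chosen (assumed) weights that realize AND: both inputs must be on.
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5

for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "->", unit(AND_WEIGHTS, AND_BIAS, (x1, x2)))

# XOR is the classic function a single unit cannot realize: it is not linearly
# separable, so no choice of weights/bias makes one unit reproduce it; a
# multi-layer (feedforward) network is needed.
```

OR and NOT work the same way with different hand-picked weights; XOR is the standard counterexample behind the "examples unsolvable by a perceptron" item above.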

Formulas (know by frequency in slides):
- Bayesian conditional probabilities

Learning process for perceptrons (a code sketch follows below):
- Training set X = {(X1, C1), ..., (Xm, Cm)}
- Initial weights W0 = (w0_0, w0_1, ..., w0_n)
- Learning constants a0, a1, ..., at, ...
- Training sequence, for each t in the sequence:
  - At = g(W(t-1) * Xt)
  - Err = (Ct - At), in {-1, 0, 1}
  - Wt = W(t-1) + at * Err * Xt
- Specific forms are not important in perceptron learning

Eye makeup; Sensors; Multiple layers
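A minimal Python sketch of that update rule, assuming a hard-threshold g with 0/1 outputs, a single fixed learning constant in place of the per-step a0, a1, ..., and a tiny hand-made AND training set (all illustrative assumptions, not from the slides):

```python
# Perceptron learning rule from the notes:
#   At  = g(W(t-1) . Xt)
#   Err = Ct - At            (in {-1, 0, 1})
#   Wt  = W(t-1) + a * Err * Xt
# A leading 1 is prepended to each input so the first weight acts as the bias.

def g(s):
    # Hard-threshold transfer function (assumed 0/1 outputs).
    return 1 if s >= 0 else 0

def train_perceptron(training_set, a=0.1, epochs=50):
    n = len(training_set[0][0])
    W = [0.0] * (n + 1)                       # initial weights W0 (all zeros here)
    for _ in range(epochs):                   # training schedule: repeated sweeps
        for X, C in training_set:             # each (Xt, Ct) pair in the sequence
            x = [1.0] + list(X)               # bias input
            A = g(sum(w * xi for w, xi in zip(W, x)))       # At = g(W . Xt)
            err = C - A                       # Err in {-1, 0, 1}
            W = [w + a * err * xi for w, xi in zip(W, x)]   # weight update
    return W

# Logical AND is linearly separable, so the rule converges on it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
W = train_perceptron(data)
for X, C in data:
    x = [1.0] + list(X)
    print(X, "->", g(sum(w * xi for w, xi in zip(W, x))), "(target", str(C) + ")")
```

On a non-separable set such as XOR the same loop keeps making mistakes and never settles, which echoes the Minsky and Papert point about functions a single-layer perceptron cannot learn.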
