Christine Feng, CS 3600, Spring 2017

Project 4b Analysis
Q6
testPenData:
Run #   Accuracy
1       0.906804
2       0.902802
3       0.907662
4       0.905089
5       0.901372
MAX:    0.907662
AVG:    0.9047458
SDEV:   0.002367238

testCarData:
Run #   Accuracy
1       0.852749
2       0.834424
3       0.857330
4       0.846859
5       0.866492
MAX:    0.866492
AVG:    0.8515708
SDEV:   0.010708992
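
For reference, below is a minimal sketch of how the MAX/AVG/SDEV rows above could be computed. It assumes the project's Testing.py exposes testPenData()/testCarData() functions that each train a network and return a single accuracy float per run; that interface is an assumption, not something confirmed here. Note that the reported SDEV values match the population standard deviation (e.g. 0.002367238 for the five pen runs), so pstdev is used rather than the sample stdev.

from statistics import mean, pstdev

def summarize(run_test, trials=5):
    """Run the given test `trials` times and report MAX/AVG/SDEV."""
    # run_test is assumed to train a fresh network and return its accuracy.
    accs = [run_test() for _ in range(trials)]
    # The SDEV values in the tables above match the population standard
    # deviation, hence pstdev rather than the sample stdev.
    return max(accs), mean(accs), pstdev(accs)

# Hypothetical usage:
# mx, avg, sd = summarize(testPenData)
# print("MAX: %f  AVG: %f  SDEV: %f" % (mx, avg, sd))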

Q7
testPenData:
# hidden  run 1     run 2     run 3     run 4     run 5     max          avg          sdev
 0        0         0         0         0         0         0            0            0
 5        0.846484  0.825615  0.839337  0.843911  0.823899  0.846483705  0.835849057  0.009357243
10        0.883076  0.901658  0.872213  0.893654  0.904231  0.904230989  0.890966266  0.011928465
15        0.900515  0.903659  0.899371  0.895083  0.947970  0.947970269  0.909319611  0.019519729
20        0.894225  0.905089  0.899085  0.889365  0.907376  0.907375643  0.899028016  0.006676085
25        0.904803  0.901658  0.899943  0.903659  0.905089  0.905088622  0.903030303  0.001959044
30        0.901658  0.908233  0.903373  0.902802  0.894225  0.908233276  0.902058319  0.004515057
35        0.906232  0.906232  0.901086  0.903087  0.903373  0.906232133  0.904002287  0.001983917
40        0.904231  0.907662  0.904231  0.906232  0.907376  0.907661521  0.905946255  0.001479952
[Figure: testPenData; avg accuracy rate (y-axis, 0 to 1) vs. # of hidden layer perceptrons (x-axis, 0 to 40)]

[Figure: testPenData, without [0] hidden layer results; avg accuracy rate (y-axis, 0.83 to 0.92) vs. # of hidden layer perceptrons (x-axis, 0 to 45)]

testCarData:
# hidden  run 1     run 2     run 3     run 4     run 5     max          avg          sdev
 0        0.703534  0.703534  0.703534  0.703534  0.703534  0.703534031  0.703534031  0
 5        0.854058  0.850785  0.842277  0.864529  0.861911  0.864528796  0.854712042  0.007983220
10        0.856675  0.846204  0.861257  0.863874  0.849476  0.863874346  0.855497382  0.006743061
15        0.829188  0.836387  0.852094  0.865838  0.852094  0.865837696  0.847120419  0.012934957
20        0.856675  0.850785  0.850131  0.868455  0.848168  0.868455497  0.854842932  0.007374118
25        0.850785  0.833115  0.831806  0.861911  0.833115  0.861910995  0.842146597  0.012126947
30        0.850785  0.847513  0.861257  0.831806  0.831152  0.861256545  0.844502618  0.011564342
35        0.841623  0.852094  0.835079  0.851440  0.833115  0.852094241  0.842670157  0.007946654
40        0.815445  0.837042  0.839660  0.855366  0.842277  0.855366492  0.837958115  0.012901803

[Figure: testCarData; accuracy rate (y-axis, 0.65 to 0.9) vs. # of hidden layer perceptrons (x-axis, 0 to 45)]

In the testPenData graph, there is a sharp increase in accuracy from 0 to 5 hidden layer
perceptrons, followed by a much more gradual rise from 5 onwards. Although the results from
15-40 perceptrons look uniform in the topmost graph, the second graph, which excludes the
results of the 0-hidden-layer-perceptron run, gives a closer view of the data. Aside from the
spike at 15 hidden layer perceptrons (which also has the largest standard deviation of any
row, 0.0195, and so is plausibly just run-to-run variance), the accuracy rate generally
increases as the number of hidden layer perceptrons increases. This can be attributed to the
greater representational capacity of a larger hidden layer, although the returns diminish as
more perceptrons are added. For testCarData, however, which has a significantly smaller
dataset (cars has 200 training examples and 1528 test; pen has 7494 training examples and
3498 test) and only 5 trials per configuration, random variation makes the results noticeably
less uniform.
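
The Q7 sweep itself can be scripted the same way; a minimal sketch follows. The hiddenLayers keyword, and the convention of passing [n] for a single hidden layer of n perceptrons, are assumptions about the test harness rather than its confirmed signature.

from statistics import mean, pstdev

def sweep(run_test, sizes=range(0, 45, 5), trials=5):
    """For each hidden layer size, run `trials` trials and print the
    # hidden / max / avg / sdev columns used in the tables above."""
    for n in sizes:
        # hiddenLayers=[n] (one hidden layer of n perceptrons) is an assumed
        # interface; adjust to whatever Testing.py actually accepts.
        accs = [run_test(hiddenLayers=[n]) for _ in range(trials)]
        print(n, max(accs), mean(accs), pstdev(accs))

# Hypothetical usage: sweep(testPenData) to reproduce the pen-data table.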
