Neural Network with Back Propagation - One Hidden Layer
One input, a two-neuron hidden layer with ReLU activation, and a single-value output
Sheet inputs (values entered per run):
Desired Output
Input Number
Layer 1 First Weight (W11)
Layer 1 Second Weight (W12)
Layer 2 First Weight (W21)
Layer 2 Second Weight (W22)
Learning Rate
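The forward pass these cells describe can be sketched in Python. The `relu` and `forward` helper names are my own, and the weights used below are the values from the first training row (W11 = 3, W12 = -1, W21 = 2, W22 = 4):

```python
def relu(z):
    # ReLU activation: passes positive values through, zeroes out negatives
    return max(0.0, z)

def forward(x, w11, w12, w21, w22):
    # Hidden layer: each neuron applies its own weight to the single input
    h1 = relu(w11 * x)
    h2 = relu(w12 * x)
    # Output layer: weighted sum of the two hidden activations (no activation)
    y = w21 * h1 + w22 * h2
    return y, h1, h2

# First row of the training table: x = 50 gives H1 = 150, H2 = 0, y = 300
y, h1, h2 = forward(50, 3, -1, 2, 4)
```

Because W11 is positive and W12 is negative, exactly one hidden neuron is active for any nonzero input: H1 handles positive inputs and H2 handles negative ones, which is visible in the H1/H2 columns of the training table.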
Training data sets
(H1 = ReLU(W11·x), H2 = ReLU(W12·x), y = W21·H1 + W22·H2)
x | W11 | W12 | H1 | H2 | W21 | W22 | y |
50 | 3 | -1 | 150 | 0 | 2 | 4 | 300 |
-46 | 3 | -1 | 0 | 46 | 2 | 4 | 184 |
48 | 3 | -1 | 144 | 0 | 2 | 4 | 288 |
55 | 3 | -1 | 165 | 0 | 2 | 4 | 330 |
-2 | 3 | -1 | 0 | 2 | 2 | 4 | 8 |
6 | 3 | -1 | 18 | 0 | 2 | 4 | 36 |
-18 | 3 | -1 | 0 | 18 | 2 | 4 | 72 |
63 | 3 | -1 | 189 | 0 | 2 | 4 | 378 |
-96 | 3 | -1 | 0 | 96 | 2 | 4 | 384 |
22 | 3 | -1 | 66 | 0 | 2 | 4 | 132 |
-88 | 3 | -1 | 0 | 88 | 2 | 4 | 352 |
-38 | 3 | -1 | 0 | 38 | 2 | 4 | 152 |
94 | 3 | -1 | 282 | 0 | 2 | 4 | 564 |
41 | 3 | -1 | 123 | 0 | 2 | 4 | 246 |
31 | 3 | -1 | 93 | 0 | 2 | 4 | 186 |
-28 | 3 | -1 | 0 | 28 | 2 | 4 | 112 |
73 | 3 | -1 | 219 | 0 | 2 | 4 | 438 |
-76 | 3 | -1 | 0 | 76 | 2 | 4 | 304 |
71 | 3 | -1 | 213 | 0 | 2 | 4 | 426 |
11 | 3 | -1 | 33 | 0 | 2 | 4 | 66 |
-5 | 3 | -1 | 0 | 5 | 2 | 4 | 20 |
-61 | 3 | -1 | 0 | 61 | 2 | 4 | 244 |
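The back-propagation step itself is not shown in the sheet. Below is a minimal per-sample gradient-descent sketch over the (x, desired y) pairs from the training table; the starting weights (1.0, -0.5, 1.0, 1.0), the learning rate of 1e-6, and the squared-error loss are assumptions for illustration, not values taken from the sheet:

```python
def relu(z):
    return max(0.0, z)

# (x, desired output) pairs from the training table
data = [(50, 300), (-46, 184), (48, 288), (55, 330), (-2, 8), (6, 36),
        (-18, 72), (63, 378), (-96, 384), (22, 132), (-88, 352), (-38, 152),
        (94, 564), (41, 246), (31, 186), (-28, 112), (73, 438), (-76, 304),
        (71, 426), (11, 66), (-5, 20), (-61, 244)]

# Assumed starting point and learning rate (not from the sheet)
w11, w12, w21, w22 = 1.0, -0.5, 1.0, 1.0
lr = 1e-6

for epoch in range(300):
    for x, t in data:
        # Forward pass
        h1 = relu(w11 * x)
        h2 = relu(w12 * x)
        y = w21 * h1 + w22 * h2
        # Backward pass: dE/dy for squared error E = (y - t)^2
        dy = 2.0 * (y - t)
        # Output-layer gradients
        g21 = dy * h1
        g22 = dy * h2
        # Hidden-layer gradients: ReLU passes the gradient only when active
        g11 = dy * w21 * x if h1 > 0 else 0.0
        g12 = dy * w22 * x if h2 > 0 else 0.0
        # Gradient-descent update
        w21 -= lr * g21
        w22 -= lr * g22
        w11 -= lr * g11
        w12 -= lr * g12
```

After training, the learned weights satisfy W11·W21 ≈ 6 (the slope for positive inputs) and W12·W22 ≈ -4 (the slope for negative inputs), the same products as both the initial weights and the sheet's optimal weights.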
Optimal Weights
W11 | 3.883655516 | W21 | 1.54493078 |
W12 | -1.499246109 | W22 | 2.667997907 |
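As a sanity check (a hypothetical verification snippet, not part of the sheet): since only one hidden neuron is active per input, the network computes a piecewise-linear function whose positive-side slope is W11·W21 and whose negative-side slope is W12·W22. The optimal weights reproduce the same slopes as the initial weights (3·2 = 6 and -1·4 = -4):

```python
w11, w12 = 3.883655516, -1.499246109
w21, w22 = 1.54493078, 2.667997907

# Effective slope for positive inputs (only H1 active): W11 * W21
pos_slope = w11 * w21   # close to 6, matching the initial 3 * 2
# Effective slope for negative inputs (only H2 active): W12 * W22
neg_slope = w12 * w22   # close to -4, matching the initial -1 * 4
```

So training did not find a single unique answer: any weights with these two products fit the data exactly, and gradient descent settled on one such combination.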