| author | Miguel <m.i@gmx.at> | 2019-03-21 13:39:20 +0100 |
|---|---|---|
| committer | Miguel <m.i@gmx.at> | 2019-03-21 13:39:20 +0100 |
| commit | aaa922fc4dee537c54e2594b4835b5a3425c0426 (patch) | |
| tree | 614f9b924649f41f4d9122c95550dde2e9d2348f | /00_blog |
| parent | 5f262c9ad14a63148f0ca9c79dcc23799fcf3b2e (diff) | |
cv and some stuff about neuronets
Diffstat (limited to '00_blog')
| -rw-r--r-- | 00_blog/00040_Haskell/00220_Neural-Networks/index.md | 15 |
1 file changed, 13 insertions, 2 deletions
diff --git a/00_blog/00040_Haskell/00220_Neural-Networks/index.md b/00_blog/00040_Haskell/00220_Neural-Networks/index.md
index f075430..ee62d09 100644
--- a/00_blog/00040_Haskell/00220_Neural-Networks/index.md
+++ b/00_blog/00040_Haskell/00220_Neural-Networks/index.md
@@ -9,12 +9,23 @@ WORK IN PROGRESS
 * recurrent neural networks
 * cost / loss / objective function
 * quadratic cost function / mean squared error
-* gradient descent
+* gradient descent
 * gradient (vector of partial derivatives)
-* Stochastic gradient descent
+* stochastic gradient descent
+* on-line / incremental learning
+* deep neural networks
+* mini-batches
+* backpropagation (4 fundamental equations)
+* weighted input
+* required assumptions about the cost function
+* Hadamard / Schur product
+
+
+* saturated neuron
 
 ## Ref
 
 * [1] <http://neuralnetworksanddeeplearning.com/>
 * [2] <http://www.deeplearningbook.org/>
 * [3] <https://medium.com/tebs-lab/how-to-classify-mnist-digits-with-different-neural-network-architectures-39c75a0f03e3>
+* [4] <http://yann.lecun.com/exdb/mnist/>
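
Two of the topics added by this patch, the quadratic cost (mean squared error) and the Hadamard / Schur product as it appears in the first backpropagation equation of [1], can be sketched in a few lines of Haskell. This is only an illustrative sketch, not part of the commit; the function names (`quadraticCost`, `hadamard`, `outputError`) and the plain-list vector representation are assumptions made here.

```haskell
-- Minimal sketch of the quadratic cost and the Hadamard (Schur) product
-- as used in the output-layer backpropagation error. Vectors are plain
-- lists of Doubles for simplicity.

sigmoid :: Double -> Double
sigmoid z = 1 / (1 + exp (-z))

-- Derivative of the sigmoid, needed for the output-layer error.
sigmoid' :: Double -> Double
sigmoid' z = sigmoid z * (1 - sigmoid z)

-- Quadratic cost (mean squared error) for one training example:
-- C = 1/2 * sum (y - a)^2
quadraticCost :: [Double] -> [Double] -> Double
quadraticCost ys as = 0.5 * sum [(y - a) ^ (2 :: Int) | (y, a) <- zip ys as]

-- Hadamard / Schur product: element-wise multiplication of two vectors.
hadamard :: [Double] -> [Double] -> [Double]
hadamard = zipWith (*)

-- Output-layer error for the quadratic cost, in the notation of [1]:
-- delta^L = (a^L - y) ⊙ sigma'(z^L), where z^L are the weighted inputs.
outputError :: [Double] -> [Double] -> [Double] -> [Double]
outputError zs as ys = hadamard (zipWith (-) as ys) (map sigmoid' zs)

main :: IO ()
main = do
  let zs = [0.5, -1.2]    -- weighted inputs of the output layer
      as = map sigmoid zs -- activations
      ys = [1.0, 0.0]     -- target output
  print (quadraticCost ys as)
  print (outputError zs as ys)
```

The sketch also hints at the "saturated neuron" item on the list: when `sigmoid' z` is close to zero, the whole element of `outputError` is close to zero, so learning on that neuron slows down.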
