Hi, I'd like to know why my backpropagation function keeps producing the same pattern of weights for every neuron in the same layer (except the first layer, which differs because it has non-linear activation functions).
This is the sheet with the weights: 1.xlsx (docs.google.com)
If I knew how to, I would post the code. If it's needed, I'll try to figure out how to post it.
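For context, a well-known cause of this symptom is symmetric initialization: if every weight in a layer starts at the same value, backpropagation computes an identical gradient for every neuron in that layer, so their weights update in lockstep and never diverge. Below is a minimal NumPy sketch that reproduces the effect; the network shape, learning rate, and the constant 0.5 initialization are illustrative assumptions, not the poster's actual setup.

```python
# Minimal sketch: constant initialization keeps all neurons in a layer
# identical, because they receive identical gradients at every step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))      # 4 samples, 3 input features
y = rng.normal(size=(4, 1))      # regression targets

# Constant initialization: every column (neuron) of W1 starts the same.
W1 = np.full((3, 5), 0.5)        # hidden layer: 5 neurons
W2 = np.full((5, 1), 0.5)        # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(3):
    # Forward pass: non-linear hidden layer, linear output.
    h = sigmoid(X @ W1)
    out = h @ W2

    # Backward pass for mean squared error loss.
    d_out = 2 * (out - y) / len(X)
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h

    # Gradient descent updates.
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

# Every column of W1 is still identical: the non-linearity alone
# does not break the symmetry.
print(W1)
```

If the weight sheet shows this kind of repetition, initializing the weights with small random values (e.g. `rng.normal(scale=0.1, size=...)`) would be the standard way to break the symmetry.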