The toughest task in a programmer's life is not writing the actual algorithm but hunting bugs. I noticed that something was still off in ANNE and found a few more bugs in my code, but so it goes.
I implemented improved trainers for the XOR operation and for the emotion part, but this time they don't run a fixed number of epochs; instead they keep training until the LMS (least mean squares) error over the training set falls below a threshold, which works nicely. With the trainer set to stop at 0.01% LMS (network structure written as input:(hidden layers):output):
* XOR converges after 11422 epochs with 2 hidden layers and after 11855 epochs with 1 hidden layer, i.e. 2:(2:2):1 versus 2:(2):1. The precision of both seems on par, within about 1%.
* The emotions are trained after 25201 epochs with 2 hidden layers and after 18387 epochs with 1 hidden layer; the first network is 6:(6:6):6 and the second 6:(7):6. Their precision on the learned values is about on par, but the first is much more decisive towards one emotion or the other when interpreting new values, while the second allows for more varied emotions (like primary and secondary).
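The stopping rule described above can be sketched in a few lines. This is not ANNE's actual code; it's a toy stand-in (fitting a single weight by batch gradient descent, with hypothetical names like `train_until_mse`) that just shows the idea: instead of a fixed epoch count, loop until the mean squared error over the whole training set drops below a target, with a safety cap so a stuck network cannot loop forever.

```python
def train_until_mse(samples, lr=0.1, target_mse=1e-4, max_epochs=100_000):
    """Gradient-descent fit of y = w * x, stopping on an MSE threshold.

    The model is a deliberately tiny stand-in for a network:
    the point is the stopping rule, not the model.
    """
    w = 0.0
    for epoch in range(1, max_epochs + 1):
        # Mean squared error over the whole training set.
        mse = sum((w * x - y) ** 2 for x, y in samples) / len(samples)
        if mse <= target_mse:
            return w, epoch - 1, mse  # threshold reached, stop training
        # One batch gradient step: d(MSE)/dw = 2 * mean(x * (w*x - y)).
        grad = 2 * sum(x * (w * x - y) for x, y in samples) / len(samples)
        w -= lr * grad
    return w, max_epochs, mse  # safety cap hit before reaching the target

# Toy training set for y = 2x; the trainer stops after a handful of
# epochs once the MSE falls below the 1e-4 target.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, epochs, mse = train_until_mse(data)
```

The cap matters in practice: with an error threshold alone, a network stuck in a local minimum would train forever.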
My interpretation: for most practical purposes a second hidden layer is not required, but it does add a lot of precision if you care for it and don't mind the extra training time.
I believe ANNE is now working correctly. Next I will start trying different AI scenarios for NPCs; ANNs seem well suited to be the brains of an NPC, so let's see.