Wednesday, July 18, 2012

Project being revived

It's been quite a while since my last post, but I am still alive and well. I have been exploring Unity 3D recently and I find the graphical results and ease of use compelling, to say the least. What this means is that Unity will replace Apocalyx as the GUI. I am currently searching for some animators to make a few toons for the game I have had in mind for the longest time. Unity also has integrated networking, which means it can very simply update game objects without further code. So, what would be left for the SMASH framework then ?? Well, I will go through Unity's features to understand the basics first, then check the networking part, which already allows for many basic things during game play; what Unity does not do is store user data, toon data, quest info, etc. So I will be using the framework to store persistent info, and chat will probably still go through SMASH. It may, however, be that I throw everything out of the window and merely do a SQL pass-through; we will see once I am done checking out Unity's network features. Next time I'll maybe link a screenie or two, although my graphics still don't look very sophisticated.

Now as to why I got side-tracked for a while: I have had many personal things going on that left me no room for this project, and although I love Apocalyx, I found it a bit hard to be doing GUI + server side + graphics + animations all at the same time. And I am not talking down Apocalyx; Leo is awesome and his engine rocks, but I faced the problem that I had to rotate each character's axes at the source upon import and then keep moving them accordingly, which was a mess for me and my limited ability to think in 3D. Unity, for example, allows you to rotate an object ONCE and then move it as if it were born there, a kind of rotation of origin, which is way easier. I can also do my terrain edits right within the editor, and building a game GUI is so much easier in Unity. So with that I will sadly discontinue Apocalyx and make my way through Unity from now on. I DO wish to end up with a playable game, but I could not handle all the work in the previous setup. I can already make small games in Unity, which is encouraging. So once I get a few animators and I am through with networking, I guess I will have more to show for it. Oh, and by the way, I can even publish parts of it by embedding Unity as a kind of Flash applet on this page, so stay tuned, and sorry for the looooooong delay.

After that I'll be back with state machines and such, I need to find out where Anne will now fit in, poor Anne.

Cheers and until next time.


Thursday, May 20, 2010

Project on hold for a while

I am currently fighting on too many fronts to be able to keep up with this side project. That doesn't mean it's dead; I definitely want to see some progress here, but at present it is impossible.

This notwithstanding, whenever I play an MMO I think about how I could do this or that in SMASH. For example, I find myself intrigued by Star Trek Online's design. Cryptic implemented STO in such a way that there is only one realm for the whole game, but every location is an instance: to avoid sending hundreds of players' data to each client, you get sent to a certain instance with about 20 players, nothing more; chat, however, is sent to all instances. I like this design and I have wondered how it relates to SMASH. My conclusion is that it could be implemented by means of which server you are connected to: chat messages could get relayed like they do now, but player data, for example, would be sent only to the users on that same server, and voila, we've got instancing. It would only be required to allow instance switching, by logging a user out of one server and onto the desired instance server. I am not yet 100% sure I am not overlooking anything, but I guess SMASH can easily handle instancing this way.
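
The routing idea above can be sketched quickly. This is a hypothetical illustration, not SMASH code (SMASH itself is Erlang): names like `Instance`, `assign_instance` and the capacity of 20 are invented; it only shows "player data stays on your instance server, chat fans out to all of them".

```python
# Hypothetical sketch of STO-style instancing as described in the post:
# player updates go only to clients on the same instance server,
# while chat is relayed to every instance. All names are invented.

class Instance:
    def __init__(self, name, capacity=20):
        self.name = name
        self.capacity = capacity
        self.players = []          # clients connected to this instance server

    def has_room(self):
        return len(self.players) < self.capacity

def assign_instance(instances, player):
    """Put the player on the first instance with room, spawning a new one if all are full."""
    for inst in instances:
        if inst.has_room():
            inst.players.append(player)
            return inst
    fresh = Instance("zone-%d" % (len(instances) + 1))
    instances.append(fresh)
    fresh.players.append(player)
    return fresh

def route_player_update(instances, sender, update):
    """Player data is limited to users on the sender's own instance server."""
    for inst in instances:
        if sender in inst.players:
            return [(p, update) for p in inst.players if p != sender]
    return []

def route_chat(instances, sender, msg):
    """Chat is relayed across all instances, like the relay SMASH already does."""
    return [(p, msg) for inst in instances for p in inst.players if p != sender]
```

Instance switching then amounts to removing the player from one `Instance` and calling `assign_instance` for the target, i.e. the log-out/log-in the post describes.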

Handling instances like private raid dungeons is a bit more complex, as it would require copying zone data into a new zone, but we are still far from that approach anyway, so not much to worry about yet.

Not many updates for now, but at least a sign of life. My goal is to have a working SAD demo with a simple zone by the end of 2010. I cannot promise more given all the stuff I need to handle at the moment, and my apologies for that.



Tuesday, September 15, 2009

ANNE gets more emotional

It's been a while since my last post, and that is because I have been testing ANNE and expanding her emotional range. I am trying to set up a neural network for a "Smith and Ellsworth" style model, which reflects the feelings of happiness, sadness, anger, fear, disgust, surprise, boredom, challenge, hope, interest, contempt, frustration, pride, shame and guilt, determined by the external factors of pleasantness, responsibility, certainty, attention, effort and control. Given that I don't have access to the model, I am making up the matrix as I go. It takes around 15 minutes (!!) to train ANNE, but evaluations are instantaneous (Erlang is speedy). Oh well, if you need more precision: 149 microseconds (µs = a millionth of a second) on average, a median of 1 µs and a maximum of 15900 µs (most likely the first call), so you could perform over 6700 calls per second; not even the most emotionally unstable human could evaluate that many ;) . What this shows is that a single process running this ANN could easily evaluate emotional feedback for thousands of NPCs in the game world. Believe it or not, this rather simple (6, 20, 15) matrix takes up 7 KB !! That's quite a mouthful of data, but anyway. Random tests show very logical, although on occasion rather unexpected, results; remember that ANNs do pattern matching, not value matching.

Here are some random inputs, with E being the emotions ANN; note that sometimes, even when changing 2-3 inputs, they still result in the same emotion. Curious, but consistent, and no worries, it really does evaluate to all possible emotions depending on the training matrix and input. -1.5 denotes the lowest possible value and 1.5 the highest, so e.g. -0.3 pleasantness means a little bit unpleasant, 0.5 control means some control, and so on. The output shows how certain ANNE is regarding the highest match: results near 1 mean very certain, while e.g. a "0.40 surprised" could mean a little bit surprised, so you really get an emotion and a degree; the higher, the stronger. I am starting to love neural networks, this is so much fun :) . I am thinking that animals probably don't need all these emotions, nor all the input factors, which might lead to different ANNs for different entities.

Well, it seems to me that I will still spend some time with the fine tuning, so be patient with me. If there is enough interest, I might publish the source code, too.



Monday, August 17, 2009

Fleeting thoughts on AI

No doubt about it, AI is a fascinating topic with no clear-cut answers; the possibilities are simply endless. AI depends first of all on the game genre, so a platform game will have a different AI than an evolutionary game.

Nowadays, from what I have seen, most games have some kind of Finite State Machine (FSM), where certain conditions are analyzed and actions are taken accordingly, much like IF..THEN..ELSE statements. So an NPC just stands around forever on the same spot doing nothing, or walks senselessly around day after day without rest; at most the NPC is cycling through its FSM and that's it. When aggroed, as pointed out, most NPCs keep a hate list, where the NPC tracks who has caused how much aggro, be it through heals or damage dealt, and decides whom to attack. Run out of range and the NPC resets: the hate list is wiped and it gets a clean slate. Boy, do I envy those NPCs !!!
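
A hate list is simple enough to sketch. This is a generic illustration of the mechanism described above, not any particular game's implementation; the 0.5 heal-aggro factor is an invented example value.

```python
# Minimal hate-list sketch: the NPC accumulates threat per attacker and
# always attacks whoever sits at the top; leaving range wipes the slate.

class HateList:
    def __init__(self):
        self.threat = {}                      # attacker -> accumulated aggro

    def add_damage(self, attacker, amount):
        self.threat[attacker] = self.threat.get(attacker, 0) + amount

    def add_heal(self, healer, amount, factor=0.5):
        # heals generate aggro too, typically at a reduced rate (factor is made up)
        self.threat[healer] = self.threat.get(healer, 0) + amount * factor

    def current_target(self):
        if not self.threat:
            return None
        return max(self.threat, key=self.threat.get)

    def reset(self):
        # target ran out of range: the NPC forgets everything
        self.threat.clear()
```
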

Well, IMHO this kind of NPC is extremely boring. I envision a bit more, and I'll mention an example inspired by Mat Buckland's book "Programming Game AI by Example".

A story from the fictitious mining town "Backwater":

Miner Irwin goes out to the mine every morning to fill his pockets with gold. During this process he gets thirsty, hungry and sleepy, but he also appreciates digging up a certain amount of gold. Once satisfied, he goes to the bank and deposits the money; if he gets thirsty, he visits the local saloon, has a drink and talks to his buddies; if he gets hungry, he goes home, where his lady serves him some stew (both will need to communicate as well); if he gets sleepy, he lies down, and when rested he gets up and is at it again. Let's add some more senses to this: he may be receiving input from his surroundings, so one day he detects an orc within 50 yds., determines its level and notes that it has weapons equipped; his emotional ANN determines fear, and he runs back to town and alerts the guards. His goals may temporarily change in favor of self-preservation: Irwin goes to the local saloon for more drinks until the danger is over. Guard Erl hears him shout and stands ready to fight the orc. Erl loses the fight to the orc with the rusty spoon.
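
Irwin's daily loop is a textbook state machine, much like Buckland's "West World" miner. Here is a toy sketch of it; the states, thresholds and the simple priority ordering are all invented for illustration, and a real agent would also consult the perception and emotion modules discussed later.

```python
# Toy state machine for miner Irwin: needs accumulate while he digs, and the
# most urgent need (threat > fatigue > hunger > thirst > full pockets) wins.

DIG, DEPOSIT, DRINK, EAT, SLEEP, FLEE = "dig", "deposit", "drink", "eat", "sleep", "flee"

class Miner:
    def __init__(self):
        self.gold = self.thirst = self.hunger = self.fatigue = 0
        self.threat_nearby = False

    def next_state(self):
        if self.threat_nearby:
            return FLEE            # self-preservation overrides every other goal
        if self.fatigue > 8:
            return SLEEP
        if self.hunger > 5:
            return EAT
        if self.thirst > 5:
            return DRINK
        if self.gold >= 10:
            return DEPOSIT         # pockets full: off to the bank
        return DIG

    def tick(self):
        state = self.next_state()
        if state == DIG:
            self.gold += 1
            self.thirst += 1
            self.fatigue += 1
        elif state == DEPOSIT:
            self.gold = 0
        elif state == DRINK:
            self.thirst = 0
        elif state == SLEEP:
            self.fatigue = 0
        return state
```
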

-HALT- Design decision ahead -

Option 1: Normal MMO AIs would wipe their memories. Irwin engages in the fight before perishing, the new Erl is just as "savvy" as he was before, and both will just respawn and surely be killed again and again and again.

Option 2: Erl preserves his short-term memory through death. He remembers that the orc with the rusty spoon was 3 levels higher than him, modifies his internal state and makes a note to himself: if dead by attack and enemy_level + 3, then update in table fights[self+3] the values pleasantness - 0.1 and control - 0.1. When next confronted with that orc, the ANNE with emotional responses will evaluate the situation differently; there will be a moment when anger becomes fear, and he may just run for his life and/or call for additional help: this orc William is just too much for him alone. Erl's internal goal seeker might determine that after losing 3 battles in a row he needs further training, and he emerges at level+1, tougher than before. After some cycles the orc finally loses the battle, and Erl, 4 levels higher and 20 generations later, maybe feels invincible again. He keeps standing around for a while and his abilities might slowly degrade back to the original skills/levels (or he might stay at his level); this learn-and-forget setup would avoid players farming certain NPCs, yet ensure that in the long run the game world stays somewhat stable. He will slowly forget that he lost 8 battles against orc William and won 2. However, he might keep in his long-term memory how many battles he lost and won, sorted by enemy levels, and how many enemies attacked him. He may also communicate this to his peers, in the case of SMASH maybe on the same city chat channel or a special guards channel. Erl will also change guard with others and not just stand there forever: he swaps with Marlo, and while Marlo stands guard, Erl goes for lunch or dinner and sleeps a while. Meanwhile, our miner Irwin either asks the guards about dangers and/or notices that the threat has gone away; he no longer perceives orc William while scanning his surroundings out to some 100 yds. (just to be sure), and goes back to his original goal of filling his pockets with gold.
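
The fights-table note above can be sketched concretely. This is only an illustration of Option 2: the record fields and the -0.1 steps follow the note in the text, while the baseline values and function names are invented.

```python
# Sketch of memory that survives death: fights are keyed by the enemy's level
# difference, and each defeat nudges the emotional inputs (pleasantness,
# control) that will be fed to the emotion ANN on the next encounter.

def default_record():
    return {"pleasantness": 0.0, "control": 0.0, "lost": 0, "won": 0}

def record_defeat(fights, level_diff):
    rec = fights.setdefault(level_diff, default_record())
    rec["lost"] += 1
    rec["pleasantness"] -= 0.1     # the situation feels worse than before
    rec["control"] -= 0.1          # and the guard feels less in control
    return rec

def emotional_inputs(fights, level_diff, base=(0.5, 0.5)):
    """Baseline (pleasantness, control) adjusted by what memory says; the 0.5
    baseline is an arbitrary example value."""
    rec = fights.get(level_diff, default_record())
    p, c = base
    return (p + rec["pleasantness"], c + rec["control"])
```

After three defeats against a level+3 enemy, the inputs have drifted down far enough that the trained emotion network may flip from anger to fear, which is exactly the behavior change the story describes.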

If enough orcs have attacked the city, the council members might stage an all out attack against neighboring orc villages to protect themselves.

While this is all still far in the future, this is what I have in mind for the AI: a simulation rather than just a cheap FSM.

This requires an agent capable of perceiving its surroundings (sight and words), a certain personality to evaluate perceptions, a goal-seeking Hierarchical FSM (HFSM) capable of sorting goals, a neural network (-> ANNE) to analyze what emotions result from its current situation applied to the personality, and an HFSM that decides, based on memory, learning and thresholds, whether to pursue its goals or take alternate actions, and then carries them out. Let's call these the perception system (PS), goal seeker (GS), personality profile (PP), ANNE (which you already know), the memory records (MR), the decision maker (DM) and the action machine (AM). If you wanted to implement a hive mind, you would need to sync the goal seeker, the memory and probably the personality profile; or, if you do it in Erlang, just call the processes that represent those per hive mind.
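
The wiring of those components can be sketched as a perceive-evaluate-decide-act pipeline. Every function below is a made-up placeholder standing in for one of the modules named above (in SMASH these would be Erlang processes); it only shows how the pieces hand data to each other.

```python
# Bare-bones agent pipeline: PS gathers percepts, DM consults the personality
# profile (PP) and current goal (GS) to pick a decision, AM turns it into acts.

def perception_system(world, agent):           # PS: what does the agent notice?
    return [e for e in world if abs(e["dist"]) <= agent["sight"]]

def decision_maker(percepts, agent):           # DM: pick a goal from percepts
    threats = [p for p in percepts if p["hostile"]]
    if threats and agent["personality"]["bravery"] < 0.5:   # PP feeds in here
        return "flee"                          # fear wins for timid agents
    if threats:
        return "fight"
    return agent["goal"]                       # GS: fall back to the current goal

def action_machine(decision):                  # AM: map the decision to an action
    return {"flee": "run_to_town", "fight": "draw_sword"}.get(decision, decision)

def agent_tick(world, agent):
    return action_machine(decision_maker(perception_system(world, agent), agent))
```

With the same world, a timid miner and a brave guard already act differently, which is the whole point of routing perceptions through a personality profile.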

I believe this design should enable different actions according to each NPC's personality: e.g. when insulted, Eliza might feel sad, run home and cry, while guard Erl will take his sword and apply a subtle lobotomy. When attacked, Irwin might run for his life, while Erl fights back. When applying evolutionary rules, their behavior might even change over time. Most literature does not recommend enabling long-term learning when you go live, because results will inevitably be unpredictable, so only time will tell whether long-term evolution is a good choice or not.

I highly recommend downloading and running "AI Planet" and also reading the author's design notes; it is fascinating to watch how creatures evolve over time. Imagine this in the setting of an MMO and I am sure it would create immersion for the player, but it would also mean the game differs according to when you joined. Tough call.

In the end, though, some restrictions may apply according to how much processing power you have on the server side: if you have 200'000 NPCs in your game world, you'll probably downsize the AI a bit to match CPU power against NPC numbers. I am eyeing the shared process structure mentioned above.



Thursday, July 16, 2009

News on ANNE

The toughest task in a programmer's life is not the actual algorithm but the bug hunting. I noticed that something was still weird in ANNE and found some more bugs in my code, but oh well.

I implemented some improved trainers for the XOR operation and for the emotion part, this time not by running a fixed number of epochs but by checking the LMS (least mean squares) over the training sets, which works nicely. Setting the trainer to 0.01% LMS, with ann structure = input:(hidden layers):output:

* XOR works after 11422 epochs with 2 hidden layers and takes 11855 epochs with 1 hidden layer, i.e. 2:(2:2):1 versus 2:(2):1. Precision is on par for both, within 1%.

* Emotions are trained after 25201 epochs with 2 hidden layers and 18387 epochs with 1 hidden layer; the first is 6:(6:6):6 and the other is 6:(7):6. Their precision is about on par for learned values, but the first is much more decisive towards one or the other emotion when interpreting new values, while the second would allow for somewhat more varied emotions (like primary and secondary).
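
The train-until-LMS idea can be sketched generically. This is not ANNE's Erlang code: `train_epoch` below is a stand-in for the real backpropagation pass (here it fits a single weight with plain LMS gradient steps), and only the stopping criterion is the point.

```python
# Stop-on-LMS training loop: instead of a fixed epoch count, keep training
# until the mean squared error over all training samples drops below a target.

def lms(net, samples, predict):
    """Mean squared error of the net over all (input, target) samples."""
    total = 0.0
    for x, t in samples:
        y = predict(net, x)
        total += (y - t) ** 2
    return total / len(samples)

def train_until(net, samples, predict, train_epoch, target=1e-4, max_epochs=100000):
    """Run train_epoch until the LMS target is met (or we give up)."""
    epochs = 0
    while lms(net, samples, predict) > target and epochs < max_epochs:
        net = train_epoch(net, samples)
        epochs += 1
    return net, epochs
```

The `max_epochs` guard matters: a network that cannot reach the target (wrong topology, bad learning rate) would otherwise loop forever.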

My interpretation is that while a second layer is not required for most practical purposes, it does add a lot of precision if you care for it and don't mind the extra training time.

I believe ANNE is now working correctly. Next I will start trying different AI scenarios for NPCs; it seems that ANNs are well suited to be the brains of an NPC, let's see.



Wednesday, July 8, 2009

ANNE to ELIZA: Think before you talk !!

As I said before, there are many, many components that need to be built for the complete SMASH; one that I have barely mentioned so far is the AI part, and no game would ever be really complete without some intelligence. So let's talk smart today.

Eliza will most likely be part of the NPC perception system, and it will be important for stateful conversations with the added context info (which will later form a sort of decision tree). When I coded Eliza, I had the idea of also including something else: one thing I find about basic Eliza is that it feels too cold, there are no emotional reactions behind it. We humans associate words with feelings, like it or not, so I started looking into that.

Now, how do we translate input into emotions ??

In order to do that association, I dug up a couple of AI books from my shelf and found a nice model describing how an NPC can translate situational information into emotions (I recommend the book: Programming Believable Characters for Computer Games by Dr. Penny Baillie-de Byl), which turns out to be an artificial neural network (ANN). So after some real messy programming sessions, ANNE was born: the Artificial Neural Network Entity.

ANNE is a fully connected multilayer feedforward network with standard error backpropagation. I will spare you yet another explanation of how ANNs (sometimes also called multilayer perceptrons) work math-wise; I found, however, this very insightful page: Introduction to Neural Networks, which does an awesome job of explaining them.

While it's tempting to implement each neuron in Erlang as a process, I didn't, because I do not like having processes just lying around doing nothing 99% of the time. Instead, I implemented this as a list of lists, which can very conveniently be saved to KVS*. This way you don't have to retrain it every time: just save a trained version on KVS2, and with KVS* replication you have the model ready to use across all SMASH nodes.

I wanted to see just how many neurons ANNE could support, and after some partial testing I can say that I was able to create, for example, an ANN with 500 inputs, 2 hidden layers of 5000 neurons each and 500 outputs. Let me tell you, that is a hell of a list of lists, with a total of 30 million internal weights, so be patient during the training process. Most literature also does not say how many iterations are required to train an ANN, but it would seem to me that it takes about 50'000+ epochs (1 epoch = 1 run through all training sets) to perceive a convergence of the weights. Erlang impressed me yet again; I was unsure if it could hold THAT much data in a single list .... but it did.
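
The list-of-lists representation is easy to picture in any language. Here is a small Python sketch of the same idea (ANNE itself is Erlang); the `tanh` activation and the weight range are assumptions, not ANNE's actual choices.

```python
# A fully connected feedforward net stored as a list of lists: one weight
# matrix per layer, one row per output neuron, one column per input.

import math
import random

def make_layer(n_in, n_out, rng):
    return [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]

def make_ann(sizes, seed=42):
    """sizes like [6, 7, 6] for 6 inputs, one hidden layer of 7, 6 outputs."""
    rng = random.Random(seed)
    return [make_layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]

def weight_count(ann):
    return sum(len(row) for layer in ann for row in layer)

def feed_forward(ann, inputs):
    act = inputs
    for layer in ann:
        # weighted sum per neuron, squashed by tanh (activation is an assumption)
        act = [math.tanh(sum(w * x for w, x in zip(row, act))) for row in layer]
    return act
```

The weight arithmetic also checks out against the post: sizes [500, 5000, 5000, 500] give 500*5000 + 5000*5000 + 5000*500 = 30,000,000 weights.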

Now, let's get a feel for it:
My first test was, as suggested by Mrs. Penny, a simple XOR operation (with 2 inputs, 2 hidden neurons in a single layer and 1 output neuron), which after sufficient training works just fine. Once it passed the initial runs, I grabbed the emotional model described in the book (chapter 7.5.2):

What you see here on the left is the lower part of the trained ANN with 6 inputs, 1 hidden layer with 7 neurons and 6 output neurons. The ANN is then trained with 6 x 6 samples for about 50'000 epochs, returning the new trained ANN; the output values should be loosely binary as intended, meaning 5 zeros and a single 1 value.

The call to anne:ann([INPUT], ANN) accesses the ANN's knowledge, so I fed the values from the book into it, knowing that those values would need to be interpreted and hoping that the 4th value would clearly be the highest of them .... indeed, the 4th output is the highest with 0.88 ... which is correct !!! In this example it represents "fear"; also note that the 3rd value is 0.24, which stands for "anger". The beauty of an ANN is that it states the most prevalent emotion, but you could also take the 2nd strongest into account, so maybe this input will cause the NPC to run away in fear, cursing angrily.
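
Reading an emotion plus a degree out of the output vector, as described above, is a small post-processing step. This sketch assumes a particular label order for the six outputs (fear in the 4th slot, anger in the 3rd, as in the book example); the actual ordering in ANNE may differ.

```python
# Interpret the ANN's output vector: strongest activation = primary emotion
# with its degree, runner-up kept as a possible secondary emotion.

EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def interpret(outputs, labels=EMOTIONS):
    ranked = sorted(zip(labels, outputs), key=lambda p: p[1], reverse=True)
    (primary, degree), (secondary, degree2) = ranked[0], ranked[1]
    return primary, degree, secondary, degree2
```

Fed the book-style vector with 0.88 in the fear slot and 0.24 in the anger slot, this yields exactly the "runs away in fear, cursing angrily" combination.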

I now need to find some example with 2 hidden layers to test ANNE on, but so far everything works just fine.

I have not decided yet how a fully grown NPC will have to work; it might be a sort of hierarchical finite state machine (HFSM) with modules of Eliza and ANNE integrated into it, or just ANNE + Eliza, but I believe this is already good progress on the NPC AI.

One more nice thing: if you embed an ANN into a looped process, you would only need a single instance on all of the SMASH servers to interpret that input for all NPCs. Maybe several ANNs will feed their data into an HFSM and make it trigger its actions and responses.

ANNE is still an uncooked meal, but the ingredients look fine. And that's all for now folks. Until next time.



Wednesday, June 17, 2009

Eliza expands its mind

I have looked at many Eliza implementations, and most only match a single word in the sentence, or a word in the middle plus the rest after the match. I took it a bit further: I can now match stuff like "I %% flowers **" and also "I %% and %% flowers **". This catches several multi-word patterns in between, plus whatever comes after "flowers". Eliza does pattern matching, but humans employ several words for the same thing, so it would help if your Eliza could group words by similarity, for example ["love","like","fond of","don't hate","don't dislike"]. Another example: how many Elizas can tell that these are all equivalent to a human: ["MAC", "MACs", "Macs", "Mac", "Macintosh", "mac", "macintosh", "apple", "Apple", "Macbook", "mac mini", "Macmini"] ?? So how do you catch different ways of expressing tastes with different nouns that tell you more or less the same thing ??

Answer: pattern expansion !! I expanded Eliza's linguistic abilities to match similar patterns: you only need to create a pattern list of synonyms and reference that list in another pattern as an expansion pattern, like "I ## ##", where ## stands for the synonym list. Here is how:

To recreate the above example you would create 3 records in the Eliza file; the first is the pattern and the other 2 are merely variables.

{["I ## ## **"],
["I live inside a computer","I can live inside a Mac","Maybe I reside in a Mac right now"],
{subject,computers,["##1", "##2"],["like","Mac"],[]}},
{["##1"], ["love", "like", "fond of", "don't hate", "don't dislike"], {internal,synonyms,[],[],[]}},
["MAC","MACs", "Macs", "Mac", "Macintosh", "mac", "macintosh", "apple", "Apple", "Macbook", "Ibook", "Macmini"],

So now the pattern "I ## ##" is expanded against both variables ["##1", "##2"], which in this case creates 150 patterns at run-time, while your file only required a simple entry; and again Eliza returns values that your AI can chew on. Those variables can obviously be reused in other patterns, and needless to say, I will be able to teach her new stuff at run-time.

Things that are still needed, be it on Eliza's or the AI's side, are to catch nouns and verbs specifically, maybe by matching against another list and trying to find a corresponding context for them; e.g. your application may know nothing about flowers, but flowers are plants and maybe your AI knows something about plants, or it lacks info on Macs but knows about computers. Eliza should also be able to ask the user and remember its question. But for the time being, I am quite happy with the implementation; it will suit SMASH well to have NPCs the user can actually talk to. The current version is stand-alone; future ones will likely put their info on KVS* for easy data access and replication.