Neural Networks and some modified engines
Posted: Wed Feb 02, 2011 1:40 am
Hello,
I'd like to present a couple of meta-engines that use
backpropagation neural networks for evaluation. This is a slightly
refined result of an introductory neural networks course assignment
that I did two years ago (which is why the first target was Grapefruit).
I recently made slight modifications to Grapefruit and Stockfish (hey,
a big thank you to all authors of open source engines!), swapping their
original evaluation functions for neural networks that are
trained... by the original functions. The changes are unobtrusive and
the engines have a few different modes of play (including the original
engines' behaviour). I haven't experimented much yet, as I was busy
hunting bugs (I recently found one that made all my previous learning
efforts moot), refactoring, adding some thread support, and doing a
little documentation and benchmarking. There is room for
experimentation with differently sized networks, network parameters
and, of course, learning (including learning over the eval logs of a
different engine). It's not supposed to be strong, but it's fun,
especially the online learning modes that allow learning while
playing. If anyone's interested, here are links to the sources with
more info:
neuroStock https://github.com/m00natic/neuroStock
neuroGrape https://github.com/m00natic/neuroGrape
neuroChessTrainer https://github.com/m00natic/neuroChessTrainer
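To give a rough idea of the approach: the actual code is C++ inside the engines, but here is a toy Python sketch of training a small backprop network to imitate an evaluation function, in the spirit of the online learning mode (each position the engine evaluates becomes a training pair). The feature vector, the stand-in eval and all the network sizes below are made up for illustration; the real engines use their own features and parameters.

```python
import random, math

# Toy stand-in for an engine's handcrafted eval (a linear score over a
# made-up feature vector).  The real engines compute their own features.
def original_eval(features):
    weights = [1.0, 0.5, -0.3]
    return sum(w * f for w, f in zip(weights, features))

class TinyNet:
    """One-hidden-layer net trained by backpropagation to mimic a target eval."""
    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rnd = random.Random(seed)
        self.lr = lr
        self.w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        # tanh hidden layer, linear output
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train(self, x, target):
        out = self.forward(x)
        err = out - target  # dLoss/dOut for squared error
        # Hidden deltas must use the pre-update output weights
        deltas = [err * w * (1 - h * h) for w, h in zip(self.w2, self.h)]
        for j, h in enumerate(self.h):
            self.w2[j] -= self.lr * err * h
        self.b2 -= self.lr * err
        for j, d in enumerate(deltas):
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * d * xi
            self.b1[j] -= self.lr * d
        return err * err

# Online learning: every evaluated position becomes a training example,
# with the original eval providing the target.
net = TinyNet(n_in=3, n_hidden=8)
rnd = random.Random(1)
for _ in range(5000):
    pos = [rnd.uniform(-1, 1) for _ in range(3)]
    net.train(pos, original_eval(pos))
```

After enough positions the network's output tracks the original eval closely, and it can then be queried instead of (or blended with) the handcrafted function.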
neuroStock is a bit slow (on my laptop at least) compared to
neuroGrape; I'll have to get more familiar with its insides to see
whether something can be done about that. neuroChessTrainer is a tool
that allows more controlled training/testing/creation of neural
networks over the log files created by the engines.
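Offline training over such logs boils down to replaying recorded (features, score) pairs. The log format below is invented for illustration (one position per line, feature values followed by the original engine's score); the real neuroChessTrainer format may well differ.

```python
# Hypothetical log format (the real neuroChessTrainer format may differ):
# one position per line, whitespace-separated feature values, with the
# original engine's eval score as the last field.
def read_eval_log(lines):
    """Yield (features, target_score) pairs from eval log lines."""
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue  # skip blank or malformed lines
        values = [float(f) for f in fields]
        yield values[:-1], values[-1]

sample_log = [
    "0.5 -1.0 0.25 0.37",
    "",
    "1.0 0.0 -0.5 0.85",
]
pairs = list(read_eval_log(sample_log))
# Each pair can then be fed to the network's training step,
# in any order and as many epochs as desired.
```

The advantage over online learning is control: the same log can be shuffled, split into train/test sets, and replayed against differently sized networks.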
Now, I use GNU/Linux exclusively, but both target engines run on more
platforms. The only thing that would need attention for porting, I
guess, is the rudimentary use of pthreads in the neural network part
of the code.
Cheers