
Upcoming repetition detection

Posted: Sat Apr 06, 2013 10:50 pm
by marcelk
I wrote down some notes regarding upcoming repetition detection I've been working on recently.
http://marcelk.net/2013-04-06/paper/upcoming-rep-v2.pdf
Abstract
An algorithm is presented that detects cycles one ply before they appear in the search of a game tree. The algorithm is suitable for use in the leaf nodes of a chess program and can there shift the detection of repetition draws to an earlier iteration. The algorithm is fast because it doesn't generate or iterate over candidate moves during search. Instead, the Zobrist hashes of potential reversible moves are precalculated and stored in a cuckoo table. Further speed is gained by employing a lightweight measure for the displacement of opponent pieces that is based on the exclusive OR of an even number of hash history items. Measurements are given for tree size, node speed and improved playing strength when applied to a chess program.
Feedback is welcome.

Re: Upcoming repetition detection

Posted: Sun Apr 07, 2013 11:46 pm
by Gerd Isenberg
Wow, interesting reading and insights into Cuckoo hashing.

I once proposed the cheapo trick of direct move repetition detection, applied two plies before the repetition might occur: prune the "unmake" of a previous reversible move by the stronger side (alpha > 0), since it would allow the weaker side a refutation by "unmaking" its previous reversible move as well, repeating the position:

http://www.talkchess.com/forum/viewtopic.php?t=13791
and below.

Your proposal is of course much more general and elegant. Interesting to play with.

Thanks for sharing & Cheers,
Gerd

Re: Upcoming repetition detection

Posted: Mon Apr 08, 2013 6:36 am
by BB+
marcelk wrote:If the search employs the null move heuristic[2] such move will be considered an irreversible move that resets hm to zero when it is made on the internal board.
You might want to stress that one can still have a "repetition" (null Nf6 null Ng8), but that these shall be ignored for the purposes here. [For that matter, you might mention castling/rights, as it is "reversible" for the 50-move rule, but not for 3-rep (see this thread)].
marcelk wrote:More importantly, in the nodes of interest presumably a complete move generation has been done already because another reversible move is already being searched from there and many move generators produce all such moves at once. (3.1, page 5)
I don't know about this, as hash moves and killers are often reversible, but are usually processed before "ordinary" moves, without a full move generation?
marcelk wrote:The problem of a regular hash table is that it either needs a lot of space to prevent collisions, or it needs an arbitrary number of probes.
I've often used hash tables in the programming that I do, and have never found the cuckoo solution all that useful. Is ensuring the worst case is small that important? Otherwise something simple like a "rolling" hash [linear probing with step 1, I guess] has good average performance (if X is filled, use X+1, average case takes 1/(1-density) probes). Admittedly, capping the probe limit probably has some branch prediction benefits. Also, in my applications (math/chess/...), usually I can't precompute all the data, and am receiving it on an ongoing basis (e.g., if data is found, return hashkey, else store data in hash), and in one of my applications [due to memory constraints] I was much above 50% density (I ended up with 7636282800 entries in 2^33 bins [8 bytes each, so 64GB], or 89%). You discuss this partially in 3.2, but I think you could say a bit more [maybe bullet point the desirable features compared to alternatives] about why cuckoo might be the solution of choice in the given situation (other than having a cutesy name :lol:).
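As a rough sanity check of that 1/(1-density) behaviour, here is a toy simulation (my own sketch; the mixing function is just a stand-in for a real hash) that fills a table by linear probing and measures the mean probes per insertion:

Code: Select all

```c
#include <stdint.h>
#include <string.h>

#define TBL_SIZE 65536              /* power-of-two toy table */
static uint64_t tbl[TBL_SIZE];      /* 0 marks an empty slot */

/* splitmix64 finalizer, standing in for a real hash function */
static uint64_t mix(uint64_t z) {
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

/* Linear probing with step 1; returns the number of slots inspected. */
static int lp_insert(uint64_t k) {
    int i = (int)(k & (TBL_SIZE - 1)), probes = 1;
    while (tbl[i] != 0) { i = (i + 1) & (TBL_SIZE - 1); probes++; }
    tbl[i] = k;
    return probes;
}

/* Fill an empty table up to the given density and return the mean
   number of probes per insertion over the whole filling process. */
static double mean_probes(double density) {
    long total = 0;
    int m = (int)(density * TBL_SIZE);
    memset(tbl, 0, sizeof tbl);
    for (int j = 1; j <= m; j++) total += lp_insert(mix((uint64_t)j));
    return (double)total / m;
}
```

At moderate densities the average stays close to one probe; the steep rise near a full table is the 1/(1-density) effect.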

Also, what is the genesis of Figure 1 [correlation of 15s versus 300s searches]? I have been wondering about such data for some time but could find nothing too recent (my recollection is that an old ICCA article gave a predictive model, but I can't find it).

You might also mention that the idea shares some (vague) similarity to enhanced transposition cutoffs, though it seems the technical details of your repetition scheme are not available there.

Re: Upcoming repetition detection

Posted: Mon Apr 08, 2013 11:22 am
by marcelk
Gerd Isenberg wrote:I once proposed the cheapo trick of direct move repetition detection, applied two plies before the repetition might occur: prune the "unmake" of a previous reversible move by the stronger side (alpha > 0), since it would allow the weaker side a refutation by "unmaking" its previous reversible move as well, repeating the position:

http://www.talkchess.com/forum/viewtopic.php?t=13791
and below.
Thanks for the referenced thread. I wasn't following computer chess at that time and it didn't show up in my searches.

Regarding your:

Code: Select all

 if ( beta < 0 && move50 >= 3 && ply >= 3
     && move[ply-3].from == move[ply-1].to
     && move[ply-3].to == move[ply-1].from
     && !contains(inBetween(move[ply-2].from, move[ply-2].to), move[ply-1].to) )
        return 0;
(btw: shouldn't that condition read "beta <= 0"?)
Uri's case is exactly the trap I fell in when I tried to unroll my loop one iteration and thought I could discard the legality check in the unrolled part. Tests proved otherwise...

It was interesting for me to find that the from/to check can be written as:

Code: Select all

 if ( ... &&
     (S[0] ^ S[1] ^ S[2] ^ S[3]) == 0
     && ... )
        return 0; 
This, even without the legality check, may still give some Elo to programs where one doesn't intend to do anything more complicated.
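To spell out why this works: with S[0..3] four consecutive hash history entries, their XOR telescopes to delta(move[ply-3]) ^ delta(move[ply-1]), because the opponent's intervening hash delta appears twice and cancels. A toy demonstration (the zkey mixing below is just a stand-in for real Zobrist keys):

Code: Select all

```c
#include <stdint.h>

/* Toy per-square Zobrist keys for one piece type (splitmix64 finalizer). */
static uint64_t zkey(int sq) {
    uint64_t z = (uint64_t)sq + 0x9e3779b97f4a7c15ULL;
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

/* Build four consecutive hash history entries for the line
   (f1-t1), any opponent move with Zobrist delta 'opp', (f2-t2),
   and test whether their XOR vanishes. */
static int four_xor_zero(int f1, int t1, int f2, int t2, uint64_t opp) {
    uint64_t S[4];
    S[0] = 0x123456789abcdef0ULL;       /* arbitrary starting hash */
    S[1] = S[0] ^ zkey(f1) ^ zkey(t1);  /* our reversible move */
    S[2] = S[1] ^ opp;                  /* opponent's move, any delta */
    S[3] = S[2] ^ zkey(f2) ^ zkey(t2);  /* our next reversible move */
    /* (S[0]^S[1]) ^ (S[2]^S[3]) = delta1 ^ delta2: opp cancels out */
    return (S[0] ^ S[1] ^ S[2] ^ S[3]) == 0;
}
```

With XOR-based Zobrist hashing delta(piece, a, b) == delta(piece, b, a), so the test fires exactly when move[ply-1] retracts move[ply-3].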

Re: Upcoming repetition detection

Posted: Mon Apr 08, 2013 12:09 pm
by BB+
It struck me that one might be able to make a perfect hash, say with 8K entries (a bit more than twice the dictionary size for a given colour).

I formulate the problem like this: you are specifying 320 inputs (5 pieces, 64 squares), and want a selection of the XOR pairs of these to be distinct (13 bits with 8K entries, maybe you can even do 4K entries). Now for a "typical" piece/sq, say Bb3, one has 9 XOR pairs to consider (a4/c2/d1 and a2/c4/d5/e6/f7/g8). If the table is half-full, upon trying about 2^9 different Zobrists for Bb3, you should get one that doesn't collide. So one can try to build a perfect hash-table iteratively, adding one Zobrist at a time. Of course, one might start with queens (need to have 21+ non-collisions), and end with kings/knights in corners.
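In outline, the iterative construction looks like this (toy code of my own, restricted to knight moves only — 168 undirected from/to pairs — not the program attached below; the rng seed and table size are arbitrary choices):

Code: Select all

```c
#include <stdint.h>
#include <string.h>

#define SLOTS 4096
static const int KN[8][2] = {{1,2},{2,1},{2,-1},{1,-2},{-1,-2},{-2,-1},{-2,1},{-1,2}};

static uint64_t seed = 1;
static uint64_t rng(void) {                 /* splitmix64 */
    uint64_t z = (seed += 0x9e3779b97f4a7c15ULL);
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

static int slot(uint64_t h) { return (int)(h & (SLOTS - 1)); }

/* Assign a Zobrist key to each square in turn so that every knight-move
   hash zob[from]^zob[to] lands in a distinct slot (0 marks an empty slot).
   Returns the number of entries placed, or -1 if a square cannot be keyed. */
static int build_knight_table(void) {
    uint64_t zob[64], table[SLOTS];
    int placed = 0;
    memset(table, 0, sizeof table);
    for (int sq = 0; sq < 64; sq++) {
        int tries;
        for (tries = 0; tries < 10000; tries++) {
            uint64_t cand = rng(), fresh[8];
            int n = 0, ok = 1;
            for (int d = 0; d < 8 && ok; d++) {
                int r = sq / 8 + KN[d][0], f = sq % 8 + KN[d][1];
                if (r < 0 || r > 7 || f < 0 || f > 7 || r * 8 + f > sq)
                    continue;               /* only pairs with keyed squares */
                uint64_t h = cand ^ zob[r * 8 + f];
                if (table[slot(h)] != 0) ok = 0;           /* clash with table */
                for (int i = 0; i < n && ok; i++)
                    if (slot(fresh[i]) == slot(h)) ok = 0; /* clash among new */
                if (ok) fresh[n++] = h;
            }
            if (ok) {       /* commit this square's key and its move hashes */
                zob[sq] = cand;
                for (int i = 0; i < n; i++) { table[slot(fresh[i])] = fresh[i]; placed++; }
                break;
            }
        }
        if (tries == 10000) return -1;
    }
    return placed;
}
```

The sliders work the same way, only with longer per-square pair lists and therefore more retries per candidate key.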

I will see if I can actually make this work in practice; there may be something that I am missing.

Re: Upcoming repetition detection

Posted: Mon Apr 08, 2013 1:05 pm
by BB+
I attach code that generates a 4096 entry collision-less table for the 1834 dictionary entries, in the manner indicated above. It seems to hang on Qh8 for 2048 entries, maybe the linear algebra over F2 demands collisions (other random seeds get as far as Re7 or so, but I've never reached completion).

Re: Upcoming repetition detection

Posted: Mon Apr 08, 2013 2:25 pm
by marcelk
Thanks a lot for your feedback. I'll address the points one by one.
BB+ wrote:
marcelk wrote:If the search employs the null move heuristic[2] such move will be considered an irreversible move that resets hm to zero when it is made on the internal board.
You might want to stress that one can still have a "repetition" (null Nf6 null Ng8), but that these shall be ignored for the purposes here.
Or I might remove the null move reference altogether, as it doesn't matter for the algorithm and there is always somebody ready to claim that repetitions should be allowed to span the null move.
BB+ wrote:[For that matter, you might mention castling/rights, as it is "reversible" for the 50-move rule, but not for 3-rep (see this thread].
IMO the castling and en-passant inconsistencies in the FIDE rules would only distract further from the algorithm.
BB+ wrote:
marcelk wrote:More importantly, in the nodes of interest presumably a complete move generation has been done already because another reversible move is already being searched from there and many move generators produce all such moves at once. (3.1, page 5)
I don't know about this, as hash moves and killers are often reversible, but usually are processed before "ordinary" moves w/o the full move generation?
That is entirely true. This consideration is admittedly a bit Rookie-centric, because there I validate hash moves and killers indirectly through the history table. That all moves might not have been generated in other programs merely makes the opposite approach even clumsier, of course. The whole paragraph at the end of section 3.1 could in fact just go (or at least be more clearly labelled as an intermezzo).
BB+ wrote:
marcelk wrote:The problem of a regular hash table is that it either needs a lot of space to prevent collisions, or it needs an arbitrary number of probes.
I've often had hash tables in programming that I do, and have never found the cuckoo solution all that useful. Is ensuring the worst case is small that important? Otherwise something simply like a "rolling" hash [linear probing with step 1, I guess] has good average performance (if X is filled, use X+1, average case takes 1/(1-density) probes). Admittedly, capping the probe limit probably has some branch prediction benefits.
Indeed. To avoid badly predicted branches one would have to probe the worst-case number of slots unconditionally (that is: don't bail out when you see an empty slot). The cuckoo scheme restricts this to two probes.
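A minimal sketch of such a fixed two-probe scheme (illustrative toy code only, not what is in the paper; key 0 serves as the empty marker and the two index functions are arbitrary 13-bit slices of the key):

Code: Select all

```c
#include <stdint.h>

#define CUCKOO_SIZE 8192
static uint64_t cuckoo[CUCKOO_SIZE];    /* key 0 is reserved as "empty" */

/* Two index functions taken from different 13-bit slices of the key. */
static int h1(uint64_t k) { return (int)(k & (CUCKOO_SIZE - 1)); }
static int h2(uint64_t k) { return (int)((k >> 16) & (CUCKOO_SIZE - 1)); }

/* Standard cuckoo insertion: evict the occupant when both slots are full,
   then re-place the evicted key through its other slot. */
static int cuckoo_insert(uint64_t k) {
    int use2 = 0;
    for (int kicks = 0; kicks < 100; kicks++) {
        if (cuckoo[h1(k)] == 0) { cuckoo[h1(k)] = k; return 1; }
        if (cuckoo[h2(k)] == 0) { cuckoo[h2(k)] = k; return 1; }
        int s = use2 ? h2(k) : h1(k);
        uint64_t evicted = cuckoo[s];
        cuckoo[s] = k;
        use2 = (h1(evicted) == s);  /* kick the evicted key's other slot next */
        k = evicted;
    }
    return 0;                       /* a real implementation would rehash here */
}

/* The lookup reads exactly two slots, unconditionally. */
static int cuckoo_probe(uint64_t k) {
    return (cuckoo[h1(k)] == k) | (cuckoo[h2(k)] == k);
}
```

Both slot reads happen whether or not the key is present, so the branch behaviour is the same on hits and misses.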
BB+ wrote:Also, what is the genesis of Figure 1 [correlation of 15s versus 300s searches]? I have been wondering about such data for some time but could find nothing too recent (my recollection is that an old ICCA article gave a predictive model, but I can't find it).
I generated the raw data in there once with an older version of my program when trying to find an alternative method to measure search quality.
BB+ wrote:You might also mention that the idea shares some (vague) similarity to enhanced transposition cutoffs, though it seems the technical details of your repetition scheme are not available there.
I can squeeze in an "unlike ETC [ref]" somewhere when I talk about not generating moves and application in leaf nodes.
marcelk wrote:A perfect hashing scheme with a small footprint seems unobtainable
BB+ wrote:I attach code that generates a 4096 entry collision-less table for the 1834 dictionary entries, in the manner indicated above. It seems to hang on Qh8 for 2048 entries, maybe the linear algebra over F2 demands collisions (other random seeds get as far as Re7 or so, but I've never reached completion).
That is pretty neat; it would eliminate half of the probes! Is there something fundamental that prohibits this method from squeezing all of White's and Black's moves into a single table? Also, more importantly, is there an expected effect, either positive or negative, on type-1 errors / key collisions?

Re: Upcoming repetition detection

Posted: Mon Apr 08, 2013 4:01 pm
by Gerd Isenberg
BB+ wrote:I attach code that generates a 4096 entry collision-less table for the 1834 dictionary entries, in the manner indicated above. It seems to hang on Qh8 for 2048 entries, maybe the linear algebra over F2 demands collisions (other random seeds get as far as Re7 or so, but I've never reached completion).
Hmm, I would have expected 3668 instead of 1834 dictionary entries since hash(piece,a,b) != hash(piece,b,a), but your to-squares are always < from (at least for the sliders)!?

Re: Upcoming repetition detection

Posted: Tue Apr 09, 2013 3:38 am
by BB+
Gerd Isenberg wrote:Hmm, I would have expected 3668 instead of 1834 dictionary entries since hash(piece,a,b) != hash(piece,b,a)
The paper indicates that for ZobristXOR hashing we do have equality of these hash() calls.
marcelk wrote: In chess there are 7,336 reversible move hashes or half that number if we equal (piece, a, b) to (piece, b, a) as is the case with XOR-based Zobrist hashing.
Since XOR is commutative, HASH[PIECE][TO] ^ HASH[PIECE][FROM] is symmetric in TO/FROM, so that HASH_DIFF[PIECE][TO][FROM] == HASH_DIFF[PIECE][FROM][TO], as desired.
I gain another 2x by only considering one colour in my implementation (again following the lead of the paper).
marcelk wrote:Is there something fundamental that prohibits this method from squeezing all of White's and Black's moves into a single table? Also, more importantly, is there an expected effect, either positive or negative, on type-1 errors / key collisions?
There shouldn't be any black/white problem. Just set the dictionary size at 8192, and compute queens twice, rooks twice, etc. I attach code for this.

Incidentally, I now realise the code can fail: for instance, if Bb4 and Bg1 have the same hash, then when including Bc5, the hashes for Bc5-g1 and Bc5-b4 will be the same, and the quick-and-dirty code will not detect this. However, choosing a different random number seed usually clears up such problems.

Re-reading the Cuckoo paper, it seems that they largely talk about Cuckoo hashing as a "limit the worst case" algorithm (certainly valuable in some applications), and even more than a decade ago had noted that cache effects tend to make linear probing preferable on average (it wins all their experiments).

Re: Upcoming repetition detection

Posted: Tue Apr 09, 2013 3:56 am
by BB+
marcelk wrote:Or I might remove the null move reference all together as it doesn't matter for the algorithm and there is always somebody ready to claim that repetitions should be allowed to span the null move.
Yes, removing side issues and quibbling points is always a desideratum. Similarly, in 3.6 you take the DRAW_SCORE to be 0, but in a contemptful world, this might not be true.
marcelk wrote:I can squeeze in an "unlike ETC [ref]" somewhere when I talk about not generating moves and application in leaf nodes.
I see that Richard Pijl mentioned ETC (as where he does rep checks) in the old thread with Gerd's idea.