Visualizing alpha and beta
Anybody got a link to a nice graphic that illustrates cut, all, pv, cutoffs, alpha, beta, fail high, etc.? Not equations, but boxes and lines and trees!
I like picture books!
Re: Visualizing alpha and beta
Wikipedia is your friend.

benstoker wrote:Anybody got a link to a nice graphic that illustrates cut, all, pv, cutoffs, alpha, beta, fail high, etc.? Not equations, but boxes and lines and trees!
I like picture books!
http://upload.wikimedia.org/wikipedia/e ... nmaxab.gif
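For anyone who prefers code to a static diagram, here is a minimal alpha-beta sketch over a hand-built toy tree. The tree shape and values are purely illustrative (not taken from the Wikipedia image): a leaf is an integer, an interior node is a list of children.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax value of `node` with alpha-beta pruning.

    A leaf is an int (its static evaluation); an interior node is
    a list of child nodes."""
    if isinstance(node, int):           # leaf: return static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:           # fail high: beta cutoff ("cut" node)
                break                   # remaining children never searched
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:           # symmetric cutoff at a min node
                break
        return best

# Max node at the root with three min-node branches.
# In the third branch the leaf 2 triggers a cutoff: 7 is never visited.
tree = [[5, 3], [6, 9], [2, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 6
```

The `break` on `alpha >= beta` is exactly the "fail high" arrow in diagrams like the one above: the opponent already has a refutation, so the rest of the branch is skipped.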
Re: Visualizing alpha and beta
Has someone tried a decision rule that "jumps" near the root depending on the score of the "last potential game state"? That is, if the score jumps above some threshold, one jumps here to the black circle, but if it decreases below some point, one goes to the violet circle near the root?
That's how I've been analyzing my chess games at the core, and I've found it's faster at finding better moves than refuting all the variations near the tail first.
Of course, you have nothing conclusive until an iteration is done (i.e. you don't have a candidate move because you haven't refuted all the moves), so a chess engine would lose on time without knowing what move to make. But I wonder if an engine with a "jumping" strategy would be better at fixed depth (with some exploding searches in difficult positions), or something.
In this specific example, the fact that the second branch was worth 6, or that the second branch of the first was worth 3, would have been found faster (and in my experience, that's common).
Re: Visualizing alpha and beta
Uly, what you're describing closely resembles the idea of late move reductions/pruning, present in all top engines right now. Once you've reached a certain point in the move list and suspect you're just going to fail low anyway, further moves are either skipped or searched at a reduced depth. If a reduced-depth search indicates a move might not fail low, a re-search of that move at full depth is then done.
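The reduce-then-re-search pattern can be sketched like this. It's a toy, not engine code: real engines condition the reduction on depth, history, checks, and so on, and the threshold and reduction amount here are made-up values. Positions are toy trees (a leaf is an int, an interior node is a list of children), and moves are just the children in order.

```python
LMR_THRESHOLD = 3   # start reducing after this many moves (illustrative)
REDUCTION = 2       # plies to reduce by (illustrative)

def evaluate(node):
    # Toy static evaluation: a leaf's value, 0 for an interior node.
    return node if isinstance(node, int) else 0

def search(node, depth, alpha, beta):
    """Negamax with a crude late-move-reduction scheme (sketch only)."""
    kids = node if isinstance(node, list) else []
    if depth <= 0 or not kids:
        return evaluate(node)
    best = float("-inf")
    for i, child in enumerate(kids):
        if i >= LMR_THRESHOLD and depth > REDUCTION:
            # Late in the move list: we expect a fail low, so try a
            # cheap reduced-depth search first.
            score = -search(child, depth - 1 - REDUCTION, -beta, -alpha)
            if score > alpha:
                # Surprise: it might not fail low after all.
                # Re-search at full depth before trusting it.
                score = -search(child, depth - 1, -beta, -alpha)
        else:
            score = -search(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break   # fail high: cutoff
    return best

print(search([[5, 3], [6, 9]], 2, float("-inf"), float("inf")))  # → 6
```

The key point is the asymmetry: late moves only get a full-width look if the cheap look raises a flag, which is why most bad alternatives cost almost nothing.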
Re: Visualizing alpha and beta
Well, I've seen how Rybka does it with the Visualizer, say, she has a mainline like
1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 0.20
Now she'll go and search alternatives for Nf6, Ba4, a6, Bb5, Nc6, Nf3, e5 and finally e4, in that order, every time. If she doesn't find any good alternative, she'll mark this iteration as ended with the value of 0.20 and extend Nf6 at the start of the next.
But if the line previously had a score of 0.10, then it's very likely that she will find an alternative for black; let's assume it's 2...Nf6. She'll go searching for alternatives for Nf6, Ba4, a6 and Bb5, and find one for Nc6.
The way I'd do it is to search only for black alternatives (leaving moves that are better than 0.20 for white for later), and to start from the root forwards, searching for an alternative for e5 first and then finding the one for Nc6.
Basically, having different strategies depending on how the score jumps (e.g. if it was 0.30 and fell to 0.20, I'd search for white alternatives instead). No idea if it'd work; just wondering if it has been tried, or if only Rybka does this refutation from the tail towards the root (like in the minimax picture example).
Re: Visualizing alpha and beta
The precise way you have in mind requires that a lot of state information be saved (since you want to be able to go back to a node at a later time), which is why searching to full depth and then unwinding (refuting from the tail end, as you see in the visualiser) is the typical process.
But saving energy once you expect a fail low (a not-good-enough alternative for the side to move) is done. For instance, in your example, as you unwind and look at alternatives to Nf6, one of the alternatives that doesn't make much sense is Ra7. It's not simple to teach a computer why it shouldn't try Ra7 in this position, so it will need to be searched. A white 'cut' move is found (it may not be the best move; it only needs to be good enough to 'prove' that Ra7 is not better than Nf6), and then you'll hit a fail-low node for black where 'all' moves must be considered.

To reduce the size of such trees, most of those moves will then be reduced in search depth (things like Ba3??, Ke7?, Qe7?! and h5?! will probably all be reduced by a couple of plies by virtue of occurring late in the move order, along with several other moves that you might even consider good; these reductions are often done very aggressively), then another cut node, and another 'all' node where most moves are reduced.

By following this pattern, the effort of checking the alternatives to Nf6 becomes greatly reduced, as many of the unlikely alternatives are only given a cursory search, and the search will sort of hop around like you want it to. Think of the reduced-depth searches as the equivalent of the magical process in your head that tells you 'OK, it's time to move on from this node, I don't expect to find anything here any more'. It takes much less time than an exhaustive search, but it's difficult to just abandon the node entirely without introducing errors. And returning to it means increasing the depth by 1 and doing the next iteration, which lets the search look just that little bit deeper at the alternatives to Nf6.
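That last point, returning with one more ply per iteration, is just iterative deepening. A minimal driver, again over a toy leaf/list tree (the `negamax` here is a plain alpha-beta stand-in; the time budget is illustrative):

```python
import time

def negamax(node, depth, alpha, beta):
    # Plain negamax with alpha-beta over a toy tree:
    # a leaf is an int, an interior node is a list of children.
    kids = node if isinstance(node, list) else []
    if depth <= 0 or not kids:
        return node if isinstance(node, int) else 0
    best = float("-inf")
    for child in kids:
        score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

def iterative_deepening(root, max_depth, time_budget=1.0):
    """Search depth 1, then 2, ... keeping the last completed result,
    so there is always a usable answer if time runs out."""
    deadline = time.monotonic() + time_budget
    result = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        result = (depth, negamax(root, depth,
                                 float("-inf"), float("inf")))
    return result

print(iterative_deepening([[5, 3], [6, 9]], 2))  # → (2, 6)
```

Each new iteration revisits exactly the alternatives that were dismissed cheaply in the previous one, only one ply deeper, which is the "returning to the node" behaviour described above without having to save per-node state across the search.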
Re: Visualizing alpha and beta
Thanks for the information.