kingliveson wrote: Are you saying that with extended time, Houdini plays worse?
No, I'm saying that for interactive analysis of correspondence games, Houdini produces worse moves, ideas, and/or evaluation scores than Rybka 4, Critter, Stockfish, and even lower-rated engines like Komodo or Zappa Mexico II.
As someone who uses many engines for analysis, this has been my personal experience.
At first I found this counter-intuitive and assumed it was just me, or that my analysis methods were a poor fit for Houdini (i.e. if the engine really has a higher Elo than the others, it was possible my methods simply didn't suit Houdini well, rather than the engine being at fault).
But then I started reading comments from other correspondence chess players on the Rybka Forum describing the same thing. SEVERAL players: mention how bad Houdini is for analysis and several posters jump in and agree with you.
I concluded this is true after reading Moz's posts; he seems very sensible in his criticism and also complains about the positions where Rybka under-performs (so this is not fanboyism). He is a Houdini supporter and suggests people buy the engine because the new features are very useful. Yet he claims that for analysis, Houdini 2 is still worse than Rybka 4, showing the same problems Houdini 1.5a had.
I don't have H2, but I believe him.
So far we only have anecdotal evidence and no easy way to gather more (correspondence games last months! By the time we have data providing strong evidence, it's no longer relevant and Houdini 4 is already here...), but the real question is: what is causing this phenomenon? (The phenomenon is real. For instance, assume Houdini really is the best engine for correspondence games; then what would lead people to believe it isn't, and to fail to use it properly to find the best move? That would be interesting in itself, even if Houdini isn't actually worse.)
A striking difference between normal games and interactive analysis is that in the former, a move is made and it's done, while in the latter you can make a move, look at the resulting position, take it back, and check whether there's something better (a rough sketch of that probe loop is below). That's the core of analysis, and if Houdini doesn't understand a variation, it just keeps suggesting more moves that also don't work. Other engines tend to have a mainline, and when the mainline is wrong it's easier to kill it, make the engine understand the variations are bad, and get it to agree with the moves of an engine that understands the position better, while Houdini stays stuck in stupid.
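Just to make that loop concrete, here's a minimal sketch of the push/evaluate/take-back probe using the python-chess library with any UCI engine. The engine path ("stockfish"), the depth limit, and the helper name probe_candidates are placeholders I made up for illustration, not anything from Moz or the forum; any UCI engine binary and time control can be substituted.

```python
# Sketch of the "make the move, look at the resulting position, take it back"
# probe described above, using python-chess and a UCI engine.
# Assumptions: a UCI engine binary reachable as "stockfish"; depth is arbitrary.
import chess
import chess.engine

def probe_candidates(fen, candidate_moves, engine_path="stockfish", depth=24):
    """Evaluate the position after each candidate move and return the scores."""
    board = chess.Board(fen)
    results = {}
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        for san in candidate_moves:
            move = board.parse_san(san)
            board.push(move)                      # make the move
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            # analyse() scores the side to move in the *resulting* position,
            # so negate to get the score from the mover's point of view.
            results[san] = -info["score"].relative.score(mate_score=100000)
            board.pop()                           # take it back
    finally:
        engine.quit()
    return results

# Example: compare two candidate moves in the starting position.
# print(probe_candidates(chess.STARTING_FEN, ["e4", "d4"], depth=12))
```

Doing this by hand in a GUI is the same idea: you force the engine to evaluate the position after each candidate instead of trusting whatever mainline it latched onto.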
I really believed the learning feature was going to help with this: surely if you show Houdini how wrong it is in some variation and it learns from it, you'd expect it to fall back to better mainlines and provide useful analysis? But Moz reports that, nope, learning isn't helping; Houdini has a core issue that keeps it from performing the way you'd expect from an engine of its Elo.