Some historical examples of evaluation features are collected at
https://icga.wikispaces.com/Evaluation+Overlap
Even though many of these descriptions are quite "vague", there is noticeable divergence.
Rebel wrote:Chess programmers have limited choices regarding coding and data.
The fact that there is such a wide variety of eval functions around essentially rebuts this. Back in 1995, did HIARCS, REBEL, Junior, MChess, ..., [top programs on the same hardware] all manage to end up with similar evaluation functions from those "limited" choices?
Or a (small) selection from
http://www.rebel.nl/authors.htm
The King wrote:From the start, The King was given an attractive and enterprising playing style. Unlike many other computer programs, The King actively seeks attacking possibilities and is ready to sacrifice material not only on tactical, but also on positional grounds. Results and playing strength of the program steadily increased, and it has been among the world’s strongest ever since 1990.
Fritz wrote:Fritz is built around a selective search technique known as the null-move search. As part of its search, Fritz allows one side to move twice (the other side does a null-move). This allows the program to detect weak moves before they are searched to their full depth. Move generators, evaluation functions and data structures have been designed to maximize the effectiveness of the null-move search.
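As an aside, the null-move idea the Fritz blurb describes can be sketched in a few lines. This is a toy illustration, not Fritz's actual code: `ToyPosition`, the fixed game tree, and the reduction R=2 are all assumptions for the sake of the example.

```python
import math

R = 2  # a typical null-move depth reduction (assumed here, not Fritz's value)

class ToyPosition:
    """Toy stand-in for a chess position: a node in a fixed game tree.
    Static scores follow the negamax convention (side to move's view)."""
    def __init__(self, score, children=()):
        self.score = score        # static evaluation, side to move's view
        self.children = children  # positions reachable in one move

    def null_move(self):
        # Same board, opponent to move: negate the static score.
        # (Replies are omitted in this toy; a real engine keeps the board.)
        return ToyPosition(-self.score)

def negamax(pos, depth, alpha, beta, allow_null=True):
    if depth == 0 or not pos.children:
        return pos.score
    # Null-move pruning: give the opponent a free move.  If a reduced-depth
    # search still fails high (score >= beta), assume the real position
    # would too, and cut off without a full-depth search.
    if allow_null and depth > R:
        score = -negamax(pos.null_move(), depth - 1 - R,
                         -beta, -beta + 1, allow_null=False)
        if score >= beta:
            return beta
    best = -math.inf
    for child in pos.children:
        score = -negamax(child, depth - 1, -beta, -alpha, allow_null=True)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # ordinary alpha-beta cutoff
            break
    return best
```

The point of the quote is that the null-move result is only trustworthy if the static eval is reasonably accurate at shallow depth, which is (presumably) why the blurb claims the eval was designed around it.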
Gandalf wrote:Gandalf was started around 1985 by Steen Suurballe. The program was a rule-based selective program, which was very slow, but did surprisingly well. In 1993 Dan Wulff joined in the work, and has been doing the opening library ever since.
In 1995 Steen decided to skip the selective search and concentrate on the evaluation function. The program got much stronger after this change, and although it has become a lot faster than the prior version, it is still rather slow when compared with other programs.
The search was changed to a standard alpha/beta search, with null-move reductions, and a lot of extensions.
HIARCS wrote:HIARCS searches around an order of magnitude fewer positions per second (av. 18,000) than most of its competitors. However, it makes up for this apparent slow speed by clever searching and accurate evaluation.
HIARCS uses many selective search extension heuristics to guide the search and incorporates a sophisticated tapered search to resolve tactical uncertainties while finding positionally beneficial lines.
Does this sound like they were "limited" in the ways they could differ? [Fritz's authors actually designed the eval to maximise null-move effectiveness?!]
Rebel wrote:
But I reject EVAL_COMP fully. Next to Rybka and Fruit you should have taken Shredder, Junior, Fritz (yes Fritz), Hiarcs. In 2005 those were the programs with good chess knowledge inside, they were on top for a good reason. You would have had a whole different outcome.
I disagree (quite strongly) with your last sentence. I see little reason to think that these engines used evaluation features with large overlap. There is little if any evidence that engines "at the top" must have similar evaluations, whether in 2005 or at any other juncture. One can note that Don Dailey currently champions the evaluation of Komodo as its main strength. R232a had a Fruit-like eval, R3 has a heavyweight eval, while R4 is back to a lightweight one [though not so Fruit-like]. Yet all of them were at the top in their time. Similarly, Stockfish differs from Critter.