Part #2 has appeared. A Part 3 is promised (where the evaluation function will be discussed). A bit surprisingly, Dr. Riis thanks a number of people for "eagle-eyed proof reading" at the end, yet I found some (rather obvious) factual errors at first glance (e.g., claiming Fabien Letouzey's (FL's) interview was from 2008) -- maybe fact-checking was not their task?
From the top:
Dr. Riis obsesses over a "paradigm shift". He then gives some (unsourced) Elo-based data to try to bolster this conviction (he also argues that the ICGA should adopt this shift). Some of the data appears wrong and/or presented in a dubious manner (see below -- the errors seem significant enough to affect his conclusions). It is also almost completely irrelevant (similar to his hagiography of Rybka games in Part 1) to the question of whether Rajlich's entries into the ICGA events met their originality standard (he tries to argue that Elo gain is a sign of originality, but does not make apparent why it should imply complete originality, or at least something close to it).
He then addresses the question of reverse engineering. He quotes a Wikipedia article concerning its legality in the EU (see here for the law text). He admonishes the ICGA for publishing RE'd R2.3.2a code (among others). Given that the published part of the code is derivative of Fruit 2.1, his argument would appear moot.
Ironically (to use his term), he parenthetically labels IPPOLIT an R3-derivative(!). Though I largely agree with this, on what basis does he make this conclusion? For instance, what standard can one invent where IPPOLIT is an R3-derivative but Rybka 1.0 Beta is not a Fruit-derivative? He seems (as is typical) just to quote Rajlich about this, and thus assure us of its truth.
He then turns to the dictionary about "plagiarism", acting as if a few crumbs of praise for FL suffice (perhaps he should ask FL about this...). This doesn't address (e.g.) the fact that the ICGA Rules demand (minimally) such acknowledgment with the entry. [Riis seems aware of the LION++ case; they acknowledged Fruit in the READ_ME, and this was found inarguably to be insufficient]. And again I might refer him to the QMUL plagiarism page: "Plagiarism is presenting someone else’s work as if it were your own, whether you intend to or not." This is exactly what Rajlich did in entering ICGA tournaments when the Rybka evaluation function was derivative of that of Fruit.
Riis then turns to "similarity testing", measuring functional differences in programs. This is the wrong standard. For instance, it is perfectly legitimate to try to make your program play like others. Famously, Deep Thought tuned its evaluation by matching moves from GM games. It appears there is a possibility (small, perhaps) that Fritz 11 might have done something similar with Rybka/Strelka. Furthermore, Dr. Riis seems aware of the legal issues regarding computer programs, and thus must know that they are typically protected as
literary works, and not functional devices (where patent law would be more likely to apply). Thus "code comparison" (as applicable in a given case) is often of more interest than operational similarity.
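To be clear about what such a "similarity test" measures, here is a minimal sketch of a move-matching comparison (the engine paths are hypothetical, and the python-chess library is my choice of tooling). Two engines can agree on a large fraction of moves without sharing any code, and share code without agreeing much; it is a high-level, behavioural measure, not evidence about the source.

    import chess
    import chess.engine

    # Hypothetical paths -- substitute whichever two engines are being compared.
    ENGINE_A = "/path/to/engine_a"
    ENGINE_B = "/path/to/engine_b"

    # A handful of test positions (FEN strings); a real test would use hundreds.
    FENS = [
        chess.STARTING_FEN,
        "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3",
    ]

    def best_moves(path, fens, depth=12):
        # Ask one engine for its preferred move in each position.
        with chess.engine.SimpleEngine.popen_uci(path) as engine:
            return [engine.play(chess.Board(fen), chess.engine.Limit(depth=depth)).move
                    for fen in fens]

    moves_a = best_moves(ENGINE_A, FENS)
    moves_b = best_moves(ENGINE_B, FENS)
    matches = sum(a == b for a, b in zip(moves_a, moves_b))
    print(f"Move-match rate: {matches}/{len(FENS)}")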
Riis then concludes by asking what an "original program" is. I can't say he answers the question too well (if at all). [The word "original" has been taken by courts to apply to the "origins" -- in the case of Rybka 1.0 Beta, the evidence demonstrates that it has its origins in Fruit 2.1]. He then gives some quotations from Letouzey, erroneously claimed to be from 2008 (the interview is from 2005; note that "Fruit 2.0" is the latest version mentioned therein). He then promises to address the evaluation function issue in the next part.
More specific comments:
Riis first gives a graph of 11 engines with ratings over almost 20 years. He gives no source for the data. Some of it appears erroneous. I might also dispute that he gives "Rybka" a 2004 datapoint in the same line as Rybka from 2005, as the former only appeared in a few private tournaments, and is not much related to the Rybka from 2005. He then zooms in on the last 7 years (the "paradigm shift"), with data from 7 engines. He claims:
"Focusing on the last seven years, a number of chess engines either sharply improved around the time that Fruit source code was released, or debuted after Fruit and then soared." Looking at his graph, only Fruit itself and Naum fall in the former category (with Rybka). I myself would further exclude Naum, as (the later) Naum 4 uses a re-implementation of the Rybka/Strelka evaluation function, and so I have my doubts about its transgressive Fruit-iness already in 2005. Excluding Rybka, this leaves him with zero engines that "sharply improved around the time that Fruit source code was released", so I find his phrasing notably misleading. [He might have added Zappa to the engines, BTW].
As noted above, all these graphs are irrelevant to the ICGA decision in any event, but I noticed some other errors/problems. For instance, Rybka in 2005 is plotted with a point at about 2675. I can find no list that gives this. Indeed, the Rybka datapoint is below the Naum/Fruit datapoints(!), even though Riis expands greatly on how Rybka 1.0 Beta was already superior to them at the time of first release (Dec 2005). In short, I have no idea how the Rybka datapoints for 2004 or 2005 are derived, or for that matter, from where he gets the Naum 2004/2005 numbers (e.g., Naum 2.0 was released in Sep 2006 according to CCRL, and is rated 2800 at 40/40 64-bit, compared to 2919 for Rybka 1.0 Beta -- Naum 2.2, gaining 100 Elo to ~2900, was apparently from July 2007).
Riis then conflates strength with originality:
"It had to be [original], because from first release Rybka was already far ahead of Fruit, and the gap just kept widening." This is a dubious conclusion. Fruit 2.1 was essentially a development snapshot, with much room for improvement, both in engineering (bitboards, for example) and otherwise. Letouzey himself added about 100 Elo in the next year. Riis also gives CEGT data that lists Fruit 2.1 at 2712 on a 32-bit machine, to be compared (via highlighting) with Rybka 1.0 Beta at 2868 on a 64-bit machine, presumably in support of his claim that "Rybka was as much as 150 Elo ahead of the pack on equal hardware". He is, at least, kind enough to list the 32-bit Rybka 1.0 Beta number (2816) in the same table, which shows the gap to be closer to 100 Elo on equal hardware -- again I find his text deliberately misleading and exaggerative.
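For concreteness, the arithmetic on the CEGT figures quoted above runs as follows (a minimal sketch; the expected-score conversion is the standard Elo formula):

    # Elo gap on equal (32-bit) hardware, per the CEGT figures quoted above.
    rybka_32bit = 2816
    fruit_32bit = 2712
    gap = rybka_32bit - fruit_32bit  # 104 Elo, not 150

    # Standard Elo expectancy: score of the stronger side at a given Elo gap.
    def expected_score(elo_diff):
        return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

    print(gap, round(expected_score(gap), 3), round(expected_score(150), 3))
    # prints: 104 0.645 0.703 -- i.e., roughly a 64.5% score rather than 70%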
Riis then notes that HIARCS (derogatorily labelled a "slow-climber" -- though since R3 the Rybka climbing is even slower, I might say) had "its biggest Elo jump in twelve years" (quoting the HIARCS site). He suspects this to be due to the Fruit influence. However, upon looking at his first graph, one does not see any big jump. A plausible alternative is simply that the time period between the HIARCS 9 and HIARCS 10 releases was longer than typical (Oct 2003 to Jan 2006, by my account), with there being no additional Elo-per-year jump from Fruit influence. If nothing else, the HIARCS blurb concerning the Elo jump is version-based, while the data of Riis is year-based, and conflating the two is sloppy.
He makes a similar claim about Junior, for which his first graph does show a more notable leap. Again, though, his data is unsourced, and I cannot replicate it. I find Junior 9 to be from late 2004, while Junior 10 was released in Aug 2006 (after the Turin victory). CCRL 40/40 lists the former at 2778 and the latter at 2843, about 65 Elo over more than a year and a half. So I find the conclusion of Riis regarding Fruit influence to be (at best) difficult to justify, at least on the data given (or not given).
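To make the version-based versus year-based distinction concrete, here is a minimal sketch using the CCRL figures just quoted (the release dates are approximate and my own accounting):

    from datetime import date

    # Junior 9 -> Junior 10, per the CCRL 40/40 figures quoted above.
    elo_junior9, elo_junior10 = 2778, 2843
    # Approximate release dates (my own accounting: "late 2004" and Aug 2006).
    rel_junior9, rel_junior10 = date(2004, 11, 1), date(2006, 8, 1)

    years = (rel_junior10 - rel_junior9).days / 365.25
    per_version = elo_junior10 - elo_junior9   # 65 Elo for the version jump
    per_year = per_version / years             # ~37 Elo/year once spread over time
    print(f"{per_version} Elo over {years:.2f} years = {per_year:.0f} Elo/year")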
I think I've covered most of the next few sections above.
As noted above, Riis then gives a history of Rybka derivatives. Unsurprisingly, he fails to say how Rajlich concludes that Strelka and IPPOLIT fall into such categories. Assertion appears to be enough. He then notes that Rajlich was (essentially) in absentia at the ICGA proceedings, failing to note that this was Rajlich's own design.
Riis then turns to plagiarism (thinking a random thank-you suffices for an ICGA entry), and assumes Hyatt speaks for the ICGA. This is bizarre. He attempts to justify it, but I might simply suggest he is too lazy to read anyone else's posts outside his Rybka Forum bubble.
"Accordingly I will develop the Rajlich defense under the assumption that Hyatt speaks for the ICGA." I don't see any reason to a) listen to a "Rajlich defense" (developed or not) unless VR specifies that this is indeed his definitive defense, or b) debate with anyone who assumes Hyatt speaks for the ICGA.
Riis then turns to ponderhit data. As noted above, this is an improper standard. Again Riis justifies it by quoting Rajlich. Also, as in other places, he seems to assume that a lack of similarity (of moves, in this case) in a high-level test implies a lack of low-level similarity, when this need not be the case.