Fire's null_new_depth

Posted: Thu Feb 24, 2011 12:23 am
by benstoker
I was looking at Sentinel's Fire code, specifically null_move.h. It appears that the null_new_depth function defined there is not actually used. The code looks interesting, and I was wondering whether I am wrong and the null-move routines do in fact use it. Can you enlighten me?

Here's the code:

Code: Select all

static int null_new_depth ( int depth, int delta ) {
    double ddelta, r;
    uint64 Nodes = 0;
    int cpu, rp;
    /* sum node counts across all threads and root positions */
    for ( cpu = 0; cpu < NumThreads; cpu++ )
        for ( rp = 0; rp < RPperCPU; rp++ )
            Nodes += RootPosition[cpu][rp].nodes;
    /* cap delta at 225 */
    ddelta = MIN ( ( double ) delta, 225.0 );
    /* base reduction steps up with depth and total nodes searched;
       the sqrt term adds a smooth component scaled by delta and depth */
    if ( depth < 34 || Nodes <= 5000 * 1000 )
        r = 8 + sqrt ( ddelta * depth ) / 60.0;
    else if ( depth < 46 || Nodes <= 200000 * 1000 )
        r = 10 + sqrt ( ddelta * depth ) / 60.0;
    else
        r = 12 + sqrt ( ddelta * depth ) / 60.0;
    return ( depth - ( int ) r );
}

Re: Fire's null_new_depth

Posted: Thu Feb 24, 2011 2:18 am
by Sentinel
benstoker wrote:I was looking at Sentinel's Fire code, specifically null_move.h. It appears that the null_new_depth function defined there is not actually used. The code looks interesting, and I was wondering whether I am wrong and the null-move routines do in fact use it. Can you enlighten me?
It's an improvement on Dann's smooth-scaling idea. It's not used in Fire 1.3; in earlier versions it can be activated through the UCI parameter NMR_SCALING.
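
Schematically, it replaces the fixed null-move reduction R, something like this (a simplified sketch with hypothetical names, not the actual Fire call site):

Code: Select all

/* simplified sketch (hypothetical names, not the actual Fire code):
   the remaining depth after the null move comes from null_new_depth
   instead of the usual fixed depth - R */
int value, new_depth;
int delta = static_eval - beta; /* margin above beta */

if ( ok_to_do_null_move ( Position ) && delta > 0 ) {
    make_null_move ( Position );
    new_depth = null_new_depth ( depth, delta );
    value = -search ( Position, -beta, -beta + 1, new_depth );
    undo_null_move ( Position );
    if ( value >= beta )
        return value; /* null-move cutoff */
}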

Re: Fire's null_new_depth

Posted: Tue Mar 01, 2011 1:31 am
by benstoker
Sentinel wrote:
benstoker wrote:I was looking at Sentinel's Fire code, specifically null_move.h. It appears that the null_new_depth function defined there is not actually used. The code looks interesting, and I was wondering whether I am wrong and the null-move routines do in fact use it. Can you enlighten me?
It's an improvement on Dann's smooth-scaling idea. It's not used in Fire 1.3; in earlier versions it can be activated through the UCI parameter NMR_SCALING.
Sentinel, while I'm at it, could you explain what your addition, below, to evaluation.h and .c does?

From evaluation.h

Code: Select all

static const int WBPinValue[16] =
    {
    0, 0, Score(6, 6), 0, 0, 0, Score(4, 4), Score(4, 4), 0, 0, Score(8, 8), 0, 0, 0, Score(15, 15), Score(15, 15)
    };
static const int BBPinValue[16] =
    {
    0, 0, Score(8, 8), 0, 0, 0, Score(15, 15), Score(15, 15), 0, 0, Score(6, 6), 0, 0, 0, Score(4, 4), Score(4, 4)
    };
static const int WRPinValue[16] =
    {
    0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0, 0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0
    };
static const int BRPinValue[16] =
    {
    0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0, 0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0
    };

#define WBPinTarget (bBitboardQ | bBitboardR)
#define WRPinTarget (bBitboardQ)
#define BBPinTarget (wBitboardQ | wBitboardR)
#define BRPinTarget (wBitboardQ)
And in evaluation.c

Code: Select all

        if(WRPinTarget & Ortho[b])
            {
            T = between[b][LSB(WRPinTarget & Ortho[b])] & (wBitboardOcc | bBitboardOcc);

            if((T & (T - 1)) == 0)
                Value += WRPinValue[Position->sq[LSB(T)]];
            }

Re: Fire's null_new_depth

Posted: Tue Mar 01, 2011 11:40 am
by RobertP
benstoker wrote: ...could you explain what your addition, below, to evaluation.h and .c does?
This is standard bitboard pin detection adapted to positional scoring of pins or discovered attacks (wR -> bQ). With the proviso that I haven't seen the full source (i.e. this is based only on the code snippets):
On entry, square b evidently contains a wR.
See comments:

Code: Select all

        if(WRPinTarget & Ortho[b]) // square b is on the same rank or file as a bQ
            {
            T = between[b][LSB(WRPinTarget & Ortho[b])] & (wBitboardOcc | bBitboardOcc); // all blockers of both colors

            if((T & (T - 1)) == 0) // 0 or 1 blockers
                Value += WRPinValue[Position->sq[LSB(T)]]; // credit depending on the blocking piece
            }
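
The (T & (T - 1)) == 0 test is the usual trick for "at most one bit set": subtracting 1 borrows through the lowest set bit, so the AND is zero exactly when T has zero or one bit set. A standalone illustration (my own toy code, not from Fire):

Code: Select all

#include <assert.h>
#include <stdint.h>

/* toy illustration (not from Fire): x & (x - 1) clears the lowest
   set bit, so the result is 0 exactly when x has 0 or 1 bits set */
static int at_most_one_bit ( uint64_t x ) {
    return ( x & ( x - 1 ) ) == 0;
}

int main ( void ) {
    assert ( at_most_one_bit ( 0 ) ); /* no blockers */
    assert ( at_most_one_bit ( 1ULL << 36 ) ); /* one blocker */
    assert ( !at_most_one_bit ( ( 1ULL << 36 ) | ( 1ULL << 44 ) ) ); /* two blockers */
    return 0;
}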
Robert P.

Re: Fire's null_new_depth

Posted: Fri Apr 22, 2011 12:52 am
by kranium
benstoker wrote:
Sentinel wrote:
benstoker wrote:I was looking at Sentinel's Fire code, specifically null_move.h. It appears that the null_new_depth function defined there is not actually used. The code looks interesting, and I was wondering whether I am wrong and the null-move routines do in fact use it. Can you enlighten me?
It's an improvement on Dann's smooth-scaling idea. It's not used in Fire 1.3; in earlier versions it can be activated through the UCI parameter NMR_SCALING.
Sentinel, while I'm at it, could you explain what your addition, below, to evaluation.h and .c does?

From evaluation.h

Code: Select all

static const int WBPinValue[16] =
    {
    0, 0, Score(6, 6), 0, 0, 0, Score(4, 4), Score(4, 4), 0, 0, Score(8, 8), 0, 0, 0, Score(15, 15), Score(15, 15)
    };
static const int BBPinValue[16] =
    {
    0, 0, Score(8, 8), 0, 0, 0, Score(15, 15), Score(15, 15), 0, 0, Score(6, 6), 0, 0, 0, Score(4, 4), Score(4, 4)
    };
static const int WRPinValue[16] =
    {
    0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0, 0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0
    };
static const int BRPinValue[16] =
    {
    0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0, 0, 0, Score(6, 6), 0, Score(4, 4), Score(4, 4), 0, 0
    };

#define WBPinTarget (bBitboardQ | bBitboardR)
#define WRPinTarget (bBitboardQ)
#define BBPinTarget (wBitboardQ | wBitboardR)
#define BRPinTarget (wBitboardQ)
And in evaluation.c

Code: Select all

        if(WRPinTarget & Ortho[b])
            {
            T = between[b][LSB(WRPinTarget & Ortho[b])] & (wBitboardOcc | bBitboardOcc);

            if((T & (T - 1)) == 0)
                Value += WRPinValue[Position->sq[LSB(T)]];
            }

I'm responsible for the addition of the pin code, not Sentinel...
not sure why you assume it was his idea...
In fact, unfortunately, he had very little to do with the last version of Fire... he was very busy at work.

This pin code was originally presented in RobboLito TA, by Thinkingalot...
I'm convinced, however, that it doesn't actually add any ELO benefit...
but it makes sense and I couldn't help but add it.

Re: Fire's null_new_depth

Posted: Fri Apr 22, 2011 7:11 am
by mcostalba
kranium wrote: I'm convinced, however, that it doesn't actually add any ELO benefit...
but it makes sense and I couldn't help but add it.
IMHO if it doesn't add any ELO then it doesn't make sense at all to add it.

Anyhow your sentence is interesting because in a few words you perfectly synthesize what IMHO is the wrong way to approach engine development, in particular:

1) "I'm convinced that": there is nothing to be convinced about: either tests prove it works or they prove it doesn't. Stop, nothing more. Tests are the only metric we apply to evaluate a change.

2) "it makes sense to add": As already explained above, adding stuff with no proven ELO increase is just the fastest way to build up a complete mess out of a good source base.

Of course everybody uses the approach he prefers; my comment simply says that we use a completely different approach in SF, and we are happy with that! :-)
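
To make "tests prove it" concrete, the usual match arithmetic looks like this (standard formulas, a sketch of my own, not our actual test scripts): Elo difference from the score fraction, and likelihood of superiority (LOS) from the win/loss counts. Until LOS is well above 0.95 over enough games, "I'm convinced" is just noise.

Code: Select all

#include <math.h>
#include <stdio.h>

/* standard formulas (a sketch, not our actual test scripts):
   Elo from the score fraction, LOS from the win/loss counts */
int main ( void ) {
    double wins = 1200.0, losses = 1100.0, draws = 1700.0;
    double games = wins + losses + draws;
    double score = ( wins + 0.5 * draws ) / games;
    /* invert score = 1 / (1 + 10^(-elo/400)) */
    double elo = -400.0 * log10 ( 1.0 / score - 1.0 );
    /* probability that the true strength difference is positive */
    double los = 0.5 * ( 1.0 + erf ( ( wins - losses ) / sqrt ( 2.0 * ( wins + losses ) ) ) );
    printf ( "elo: %+.1f LOS: %.3f\n", elo, los );
    return 0;
}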

Re: Fire's null_new_depth

Posted: Tue Apr 26, 2011 3:31 pm
by kranium
mcostalba wrote:
kranium wrote: I'm convinced, however, that it doesn't actually add any ELO benefit...
but it makes sense and I couldn't help but add it.
IMHO if it doesn't add any ELO then it doesn't make sense at all to add it.

Anyhow your sentence is interesting because in a few words you perfectly synthesize what IMHO is the wrong way to approach engine development, in particular:

1) "I'm convinced that": there is nothing to be convinced about: either tests prove it works or they prove it doesn't. Stop, nothing more. Tests are the only metric we apply to evaluate a change.

2) "it makes sense to add": As already explained above, adding stuff with no proven ELO increase is just the fastest way to build up a complete mess out of a good source base.

Of course everybody uses the approach he prefers; my comment simply says that we use a completely different approach in SF, and we are happy with that! :-)
It's not completely black and white, Marco... I believe there is a 'gray' area left for the developer's intuition and instinct.
IMO, sometimes (often) the true strength of an engine is only really known after thousands of LTC games.

There is a lot of debate as to when any particular test 'empirically' proves something
(especially chess testing at ultra-fast TC).

I realize that you believe SF is doing well, and that you have had success with your testing methods,
but maybe you should reconsider them... SF seems to be lagging far behind, and the Ippolit source code explains it all quite clearly.

i.e. If your testing techniques are superior, and perhaps empirical... then why is Stockfish not as strong as the rest of the field?

Re: Fire's null_new_depth

Posted: Tue Apr 26, 2011 3:59 pm
by mcostalba
kranium wrote: i.e. If your testing techniques are superior, and perhaps empirical... then why is Stockfish not as strong as the rest of the field?
Because we are not able to come up with winning ideas: we test a lot, but for the most part candidate changes result in no ELO change or even in a weaker engine.

Regarding the rest of the field, apart from Houdini, we think we are almost already there...

Re: Fire's null_new_depth

Posted: Tue Apr 26, 2011 7:24 pm
by hyatt
Intuition only takes you so far. Often only as far as "the crash scene" or something similar. :)

(My intuition said that the road would be open in spite of a flash flood warning...)

I, like you, prefer actual testing...

Re: Fire's null_new_depth

Posted: Wed Apr 27, 2011 9:45 am
by UncombedCoconut
kranium wrote:There is a lot of debate as to when any particular test 'empirically' proves something
(especially chess testing at ultra-fast TC).

I realize that you believe SF is doing well, and that you have had success with your testing methods,
but maybe you should reconsider them... SF seems to be lagging far behind, and the Ippolit source code explains it all quite clearly.

i.e. If your testing techniques are superior, and perhaps empirical... then why is Stockfish not as strong as the rest of the field?
Is the alternative to follow your gut through a sequence of 95%-certainly-<=-0 changes until you wind up >=+50?
I have a program on which such techniques are likely to work. Perhaps I'll release its next version next April Fools' Day.

There is often room in SF-level programs to make logical, strengthening changes. (As an example, SF can clear hash on "ucinewgame" if the previous game was Fischer-random; see the sketch below.) And when combined, such changes will be worth very little...
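
For what it's worth, that hash-clearing example would look something like this (hypothetical names, the idea rather than SF's actual code): entries written during a Chess960 game can carry moves that are meaningless under standard rules, so they should not survive into the next game.

Code: Select all

/* sketch of the idea (hypothetical names, not SF's actual code):
   on "ucinewgame", clear the transposition table if the previous
   game used Chess960 rules, so stale entries cannot leak into a
   standard-rules game */
static int prev_game_was_chess960 = 0;

void on_ucinewgame ( void ) {
    if ( prev_game_was_chess960 )
        tt_clear ( ); /* assumed TT helper */
    prev_game_was_chess960 = uci_option_bool ( "UCI_Chess960" ); /* assumed option accessor */
}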