IvanHoe54

Posted: Tue Jul 13, 2010 10:17 am
by xshat

Re: IvanHoe54

Posted: Tue Jul 13, 2010 10:41 am
by Jeremy Bernstein
xshat wrote:http://ippolit.wikispaces.com/file/deta ... 4-Beta.tar

IvanHoe T54 and IvanHoe T54A
http://www.speedyshare.com/files/23355284/T54_T54A.rar
depositfiles.com mja3e4ozu
http://ifolder.ru/18516893
http://www.zshare.net/download/78256877ab226034/
mediafire.com qitirylzjw3
rapidshare.com T54_T54A.rar
OSX Version + RobboBuild attached to this post, for anyone interested. Large pages are disabled, as they are not supported using shmget() on OSX.

Re: IvanHoe54

Posted: Tue Jul 13, 2010 10:56 am
by xshat
Do large pages provide an Elo gain?

Re: IvanHoe54

Posted: Tue Jul 13, 2010 1:12 pm
by Taner Altinsoy
What is the difference between T54 and T54A?

Taner

Re: IvanHoe54

Posted: Tue Jul 13, 2010 6:42 pm
by BTO7
Taner Altinsoy wrote:What is the difference between T54 and T54A?

Taner
They were two quick compiles by the same person, who wanted to know which one was faster. On my machine it was T54.

Regards
BT

Re: IvanHoe54

Posted: Tue Jul 13, 2010 7:06 pm
by royb
xshat wrote: IvanHoe T54 and IvanHoe T54A [download links trimmed]
What is the expected improvement between this version and 999955m?

Thanks.

Re: IvanHoe54

Posted: Tue Jul 13, 2010 8:37 pm
by benstoker
xshat wrote: IvanHoe T54 and IvanHoe T54A [download links trimmed]
I tried to activate the large pages on my linux box, which is Ubuntu 10.04; dual core intel; 4GB RAM. I executed "# echo 1024 > /proc/sys/vm/nr_hugepages", and the system immediately started thrashing the hard drive terribly - had to force a cold reboot. The new ivanhoe readme warns to be sure it's the proper size -- but how do you know? I assumed that a 4GB RAM box could handle this. For some reason it cannot. I am ignorant about what I'm doing here. What kind of machine can handle setting 'nr_hugepages' to 1024? I would think 4GB RAM machines are still 'roomy' machines. Do you need even more RAM to handle setting nr_hugepages to 1024? Note the ivanhoe dev states large pages can kick up nps 8%. Worth it, if you can make it work! For easy reference, here's the readme copy/pasted:

Code: Select all


LINUX_LARGE_PAGES

1) vm.nr_hugepages

Add one line to /etc/sysctl.conf:
vm.nr_hugepages=1024

This reserves 1024 2 MB pages; apply it with "sysctl -p" or a reboot (see "man sysctl").
Alternatively, on a running system:
# echo 1024 > /proc/sys/vm/nr_hugepages
Make sure 1024 is the right number for your machine;
reserving too much will grind the system to a halt!

2) MEMLOCK limit

Add two lines to /etc/security/limits.conf for a 20 GB limit:
* soft memlock 20971520
* hard memlock 20971520

See "man limits.conf"; these limits are per login.
The limit can be larger than needed without doing any harm.
Check "ulimit -a" to see the current memlock limits.

3) shmget settings

Raise /proc/sys/kernel/shmmax to the size you want (here 20 GB):
# echo 21474836480 > /proc/sys/kernel/shmmax

Again, too big does no harm. See the NOTES section of "man shmget" for details.

4) In IvanHoe
The UCI option TryLargePages requests their use,
for both Hash and PawnsHash.
Hash would do better with 1 GB pages, but PawnsHash would not unless your machine is very big,
and we have no way to use the two sizes simultaneously,
so for now we use 2 MB large pages throughout.

Always exit IvanHoe via "quit" so that the large pages are released.
Note that GUIs share this fault: a SIGKILL eludes the cleanup handler.

5) Testing results (1 GB + 64 MB + 64 MB)
NPS 8477000 vs NPS 9167000 (8%)
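To answer the "how do you know the proper size" question above: the reservation should be derived from the hash size you actually intend to use, not copied blindly from the readme. A minimal sketch using standard Linux procfs paths (the 1024 MiB hash size is an example value, not from the readme):

```shell
#!/bin/sh
# Compute how many huge pages a given hash size needs, then sanity-check
# the reservation against currently free memory.
hash_mb=1024   # desired Hash size in MiB (example value)

# The kernel reports its huge page size in /proc/meminfo; fall back to
# the common 2 MiB if the line is absent.
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
: "${page_kb:=2048}"

pages=$(( hash_mb * 1024 / page_kb ))
echo "need $pages pages of ${page_kb} kB for a ${hash_mb} MiB hash"

# Do not reserve more than is currently free, or the machine will thrash.
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
if [ $(( pages * page_kb )) -gt "$free_kb" ]; then
    echo "warning: not enough free memory for $pages huge pages" >&2
fi
```

On a typical 2 MiB-page kernel this yields 512 pages for a 1 GiB hash, a quarter of the readme's 1024, which explains why 1024 pages swamped a 4 GB box.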

Re: IvanHoe54

Posted: Tue Jul 13, 2010 8:44 pm
by benstoker
benstoker wrote: I tried to activate the large pages on my linux box ... [rest of post and quoted readme trimmed]
Perhaps a partial answer to my own question, in case others are as ignorant as I am:
It may be necessary to reboot in order to allocate all the huge pages that are needed, because huge pages require large areas of contiguous physical memory. Over time, physical memory is mapped and allocated to pages, so it can become fragmented. If the huge pages are allocated early in the boot process, fragmentation is unlikely to have occurred.
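Following that reasoning, the reservation can be made on the kernel command line, before memory has had a chance to fragment. A sketch for GRUB-based systems such as Ubuntu (the hugepages=512 count is illustrative, not from the thread):

```shell
# Reserve the huge page pool at boot by editing /etc/default/grub, e.g.:
#
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash hugepages=512"
#
# then regenerate the boot configuration and reboot:
sudo update-grub
sudo reboot
```

Pages reserved this way come from memory that is still contiguous, so the allocation should succeed where a runtime echo into nr_hugepages fails.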

Re: IvanHoe54

Posted: Wed Jul 14, 2010 1:25 am
by kingliveson
I am not sure exactly what you've done or haven't done. It works here on openSUSE.

Code: Select all

setoption name TryLargePages value true
HI 557059
info string Hash as HUGETLB 512
HI 589828
info string PawnsHash as HUGETLB 32
HI 622597
info string PVHash as HUGETLB 2
info string Optional TryLargePages true

Re: IvanHoe54

Posted: Wed Jul 14, 2010 3:27 am
by BB+
It may be necessary to reboot in order to allocate all the huge pages that are needed, because huge pages require large areas of contiguous physical memory. Over time, physical memory is mapped and allocated to pages, so it can become fragmented. If the huge pages are allocated early in the boot process, fragmentation is unlikely to have occurred.
My guess is that this is correct. Reserving 2 GB (or even 4 GB, depending on your Hugepagesize) of huge pages with "1024" (as the readme says) on a 4 GB machine is not likely to go well if the machine has been up for a while. You can look at /proc/buddyinfo (which lists the contiguous sections available) or /proc/meminfo (which gives the amount of currently allocated memory) to find out more, including your Hugepagesize.
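The two files mentioned above can be inspected directly; a minimal sketch:

```shell
#!/bin/sh
# HugePages_Total/Free/Rsvd and Hugepagesize all live in /proc/meminfo:
grep -i '^huge' /proc/meminfo

# /proc/buddyinfo lists free blocks per allocation order; the rightmost
# columns are the large contiguous chunks that huge pages need. Mostly
# zeros on the right means the memory is too fragmented.
cat /proc/buddyinfo
```

If HugePages_Free stays below what you asked for after writing nr_hugepages, the kernel could not find enough contiguous memory and a boot-time reservation is the way out.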