Forum > GPU crunching
Multiple WU's per GPU
TouchuvGrey:
My old brain has failed me yet again. I know I have seen the instructions
here, but cannot recall where. I have two video cards, a GTS 250 and a GTX 460,
and I would like to run two work units at the same time per card.
My cc_config.xml currently looks like this:
<cc_config>
<log_flags>
<sched_op_debug>1</sched_op_debug>
<work_fetch_debug>1</work_fetch_debug>
</log_flags>
<options>
<use_all_gpus>1</use_all_gpus>
</options>
</cc_config>
What do I need to change it to?
arkayn:
--- Quote from: TouchuvGrey on 08 Dec 2010, 09:48:03 pm ---My old brain has failed me yet again. I know I have seen the instructions
here, but cannot recall where. I have two video cards, a GTS 250 and a GTX 460,
and I would like to run two work units at the same time per card.
My cc_config.xml currently looks like this:
<cc_config>
<log_flags>
<sched_op_debug>1</sched_op_debug>
<work_fetch_debug>1</work_fetch_debug>
</log_flags>
<options>
<use_all_gpus>1</use_all_gpus>
</options>
</cc_config>
What do I need to change it to?
--- End quote ---
You would need to change it in the app_info.xml file: find the line with the <count> element and change its value to 0.5, so each task claims half a GPU and two run per card.
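For reference, the relevant part of an app_info.xml looks roughly like the sketch below. The file names and version number shown are illustrative only; keep whatever entries your existing app_info.xml already has and change just the <count> value inside <coproc>:

```xml
<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <!-- file name is an example; use the executable name already in your file -->
  <file_info>
    <name>setiathome_6.08_windows_intelx86__cuda.exe</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <coproc>
      <type>CUDA</type>
      <!-- 0.5 = each task reserves half a GPU, so BOINC schedules two per card -->
      <count>0.5</count>
    </coproc>
    <file_ref>
      <file_name>setiathome_6.08_windows_intelx86__cuda.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```

Restart BOINC after editing so the change takes effect.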
Josef W. Segur:
--- Quote from: TouchuvGrey on 08 Dec 2010, 09:48:03 pm ---...
I have two video cards, a GTS 250 and a GTX 460,
and I would like to run two work units at the same time per card.
...
--- End quote ---
The 200 series cards like your GTS 250 are not capable of running more than one WU at a time, and there's no way to tell BOINC to treat two cards in one host differently. You'll have to move a card to a different host or give up the idea.
Joe
TouchuvGrey:
I must be misunderstanding what I am seeing in
that case (this is not unusual):
12/9/2010 6:18:42 PM SETI@home Restarting task 13ja10aa.7071.21335.12.10.234_0 using setiathome_enhanced version 608
12/9/2010 6:18:42 PM SETI@home Restarting task 13ja10aa.7071.21335.12.10.228_0 using setiathome_enhanced version 608
12/9/2010 6:18:42 PM SETI@home Restarting task 13ja10aa.7071.21335.12.10.225_0 using setiathome_enhanced version 608
12/9/2010 6:18:42 PM SETI@home Restarting task 13ja10aa.7071.21335.12.10.222_0 using setiathome_enhanced version 608
12/9/2010 6:18:42 PM [wfd]: work fetch start
12/9/2010 6:18:42 PM SETI@home chosen: minor shortfall NVIDIA GPU: 0.00 inst, 936855.26 sec
12/9/2010 6:18:42 PM [wfd] ------- start work fetch state -------
12/9/2010 6:18:42 PM [wfd] target work buffer: 0.86 + 864000.00 sec
12/9/2010 6:18:42 PM [wfd] CPU: shortfall 6848715.19 nidle 7.84 saturated 0.00 busy 0.00 RS fetchable 0.00 runnable 0.00
12/9/2010 6:18:42 PM SETI@home [wfd] CPU: fetch share 0.00 LTD 0.00 backoff dt 3887.85 int 86400.00
12/9/2010 6:18:42 PM [wfd] NVIDIA GPU: shortfall 936855.26 nidle 0.00 saturated 394927.13 busy 0.00 RS fetchable 1000.00 runnable 1000.00
12/9/2010 6:18:42 PM SETI@home [wfd] NVIDIA GPU: fetch share 1.00 LTD 0.00 backoff dt 0.00 int 0.00
12/9/2010 6:18:42 PM SETI@home [wfd] overall LTD -1931022.03
It looks to me like I'm running two WUs on each card. If that is not the case,
please enlighten me as to what I'm seeing.
Jason G:
200 series cards (and even 8800-era silicon like the GTS 250 ::)) *should* run two instances at a time fine, provided you don't run out of memory. They just won't benefit directly from doing so, since context-switch hardware wasn't included until Fermi. Since you have a 400 series card in there, the likely benefit on it will outweigh any added cost penalty to the older card (your mileage may vary).
Note that the operation you're seeing comes after many driver revisions & improvements, so I am also surprised that it is working. Joe's statements were quite correct not so long ago (though I can't pinpoint the exact dates/versions of the corrections. Too many changes too quickly ;))
... CUDA 3.1 was most definitely broken when mixing GPU generations in the same host, which I reported through nVidia's registered developer program. These things are fixed in CUDA 3.2. The CUDA 3.0 build in operation should also be fine, as your host shows; just be certain to keep an eye on things ;)