SETI MB CUDA for Linux
Tye:
--- Quote from: sunu on 22 Jul 2009, 07:01:32 pm ---Well, then, there is nothing more we could ask. 6.6.11 it is for multi-gpus in linux.
--- End quote ---
I've been looking, but haven't found exactly how to write my app_info.xml to be able to crunch MB workunits with both the CPU and the GPU - can you point me in the right direction?
Right now my app_info.xml lets me do the optimized AP with the CPU and the CUDA MB with the GPU only, but I do run other projects as well.
sunu:
I also confirm that 6.6.11 gets both CPU and GPU multibeam workunits.
Tye, this is the SETI section of my app_info.xml. Mind you, your CPU or CUDA client binary might have a different name:
<app>
    <name>setiathome_enhanced</name>
</app>
<file_info>
    <name>AK_V8_linux64_ssse3</name>
    <executable/>
</file_info>
<file_info>
    <name>setiathome-6.08.CUDA_2.2_x86_64-pc-linux-gnu</name>
    <executable/>
</file_info>
<file_info>
    <name>libcudart.so.2</name>
    <executable/>
</file_info>
<file_info>
    <name>libcufft.so.2</name>
    <executable/>
</file_info>
<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>603</version_num>
    <flops>5634219710.66940475</flops>
    <avg_ncpus>1.0000</avg_ncpus>
    <max_ncpus>1.0000</max_ncpus>
    <file_ref>
        <file_name>AK_V8_linux64_ssse3</file_name>
        <main_program/>
    </file_ref>
</app_version>
<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <plan_class>cuda</plan_class>
    <flops>19317324722.295102</flops>
    <avg_ncpus>1.0000</avg_ncpus>
    <max_ncpus>1.0000</max_ncpus>
    <coproc>
        <type>CUDA</type>
        <count>1</count>
    </coproc>
    <file_ref>
        <file_name>setiathome-6.08.CUDA_2.2_x86_64-pc-linux-gnu</file_name>
        <main_program/>
    </file_ref>
    <file_ref>
        <file_name>libcudart.so.2</file_name>
    </file_ref>
    <file_ref>
        <file_name>libcufft.so.2</file_name>
    </file_ref>
</app_version>
riofl:
Is the flops section of the app_info file really important? I'm not using it. My file is below. Does adding the flops help with processing efficiency?
<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>AK_V8_linux64_ssse3</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <file_ref>
            <file_name>AK_V8_linux64_ssse3</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>setiathome-6.08.CUDA_2.2_x86_64-pc-linux-gnu</name>
        <executable/>
    </file_info>
    <file_info>
        <name>libcudart.so.2</name>
        <executable/>
    </file_info>
    <file_info>
        <name>libcufft.so.2</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>608</version_num>
        <plan_class>cuda</plan_class>
        <avg_ncpus>0.250000</avg_ncpus>
        <max_ncpus>0.250000</max_ncpus>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>setiathome-6.08.CUDA_2.2_x86_64-pc-linux-gnu</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>libcudart.so.2</file_name>
        </file_ref>
        <file_ref>
            <file_name>libcufft.so.2</file_name>
        </file_ref>
    </app_version>
</app_info>
riofl:
Slightly off topic, but possibly relevant.
Does the CPU-GPU perl script V5, available in another topic, actually catch VLARs and VHARs? Maybe it just doesn't report them, because I have not seen it report any yet. I'm a little concerned because I have had several computation-error workunits from the "VLAR-killer 2.2 CUDA" app, and with task viewing turned off at SETI I cannot tell.
The other thing is: would it be better to set my ratio so that BOINC never sees a shortage of CUDA workunits, so it fetches only CPU workunits, and then let this script supply the GPU work? I would think that, if it handles VLARs, this would be the most efficient way to never get one scheduled for CUDA.
sunu:
--- Quote from: riofl on 23 Jul 2009, 05:25:41 am ---Is the flops section of the app_info file really important? I'm not using it. My file is below. Does adding the flops help with processing efficiency?
--- End quote ---
It is important, but not for processing efficiency. The flops numbers help BOINC calculate better estimated computation times, so it can plan ahead and decide whether or not to download new workunits to fill the cache. Fairly accurate estimated computation times help stabilize the Duration Correction Factor and bring BOINC to a balanced state.
It also works the other way around. Duration Correction Factor, estimated computation times, a balanced BOINC and good cache management are all interconnected, and the flops numbers in app_info.xml help bring all of these into a balanced state.
If you think your BOINC is balanced and steady without the flops numbers, then you probably don't need them. Otherwise you should put them in.
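One rough way to pick a flops number (a sketch of the general idea, not necessarily how sunu derived his values): BOINC estimates a task's runtime as rsc_fpops_est / flops, so averaging rsc_fpops_est over the actual elapsed seconds of a few completed workunits gives a value that makes the estimates track reality. The numbers below are made-up examples, not real task data:

```shell
# Hedged sketch: average rsc_fpops_est / elapsed_seconds over a few
# completed tasks to get a <flops> value for app_info.xml.
estimate_flops() {
    # stdin: one "rsc_fpops_est elapsed_seconds" pair per line
    awk '$2 > 0 { sum += $1 / $2; n++ } END { if (n) printf "%.0f\n", sum / n }'
}

# Hypothetical numbers for three completed CPU multibeam tasks:
printf '%s\n' \
    '27900000000000 5000' \
    '30100000000000 5400' \
    '28500000000000 5100' | estimate_flops
```

Paste the printed number into the <flops> element of the matching <app_version>; repeat separately for the CUDA app with its own task times.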
Edit:
--- Quote from: riofl on 23 Jul 2009, 05:43:09 am ---Does the CPU-GPU perl script V5, available in another topic, actually catch VLARs and VHARs? Maybe it just doesn't report them, because I have not seen it report any yet. I'm a little concerned because I have had several computation-error workunits from the "VLAR-killer 2.2 CUDA" app, and with task viewing turned off at SETI I cannot tell.
--- End quote ---
I haven't used that script. It might be that vlar_kill has a greater VLAR angle range than the script is looking for. To see if it is working correctly, after a script run that doesn't report anything, make a manual search in the workunits directory to see how many files (workunits) contain the text <true_angle_range>0.0, and then check whether some of them are still assigned to CUDA.
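That manual search can be done with a one-liner. The project path below is an assumption; adjust it to wherever your BOINC data directory actually lives:

```shell
# Hedged sketch: count workunit files whose angle range starts with 0.0
# (i.e. likely VLARs) in the SETI@home project directory.
WUDIR="${WUDIR:-$HOME/BOINC/projects/setiathome.berkeley.edu}"
grep -l '<true_angle_range>0\.0' "$WUDIR"/* 2>/dev/null | wc -l
```

Any file this lists that is still assigned to the CUDA app in client_state.xml would be a VLAR the script missed.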
--- Quote from: riofl on 23 Jul 2009, 05:43:09 am ---The other thing is: would it be better to set my ratio so that BOINC never sees a shortage of CUDA workunits, so it fetches only CPU workunits, and then let this script supply the GPU work? I would think that, if it handles VLARs, this would be the most efficient way to never get one scheduled for CUDA.
--- End quote ---
Either way you don't avoid using the script, so choose whichever suits you better.