V10/11 of modified SETI MB CUDA + opt AP package for full multi-GPU+CPU use
Raistmer:
The "team" packs (V10 included) are now obsolete. To simplify life for users who can't read this whole thread to stay current with the latest changes (not so very latest anymore; BOINC 6.6.20 has been the recommended version for more than a month already, AFAIK), I am locking this thread.
All my own support of the "teamed" packs has now ceased (at least until I see an example of why a "teamed" pack should be used instead of BOINC 6.6.20's native GPU scheduling).
The "teamed" packs were a temporary solution and served their purpose well during their lifespan, but that period is now over.
What should be used instead:
1) BOINC 6.6.20
2) The latest CUDA MB build (it can be found in this thread and will be posted in a new thread devoted to the optimized CUDA MultiBeam app).
3) An appropriate app_info.xml (a minimal sketch follows after this list).
Some examples can be found in this thread: http://setiathome.berkeley.edu/forum_thread.php?id=52589
2) and 3) can be replaced by using Jason's Lunatics installer (currently in beta; it can be found in the beta area of this forum: http://lunatics.kwsn.net/installer-testing/index.0.html )
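For reference, here is a minimal app_info.xml sketch for the CUDA MultiBeam app. It is not a definitive file: the executable name below is a placeholder for whichever build you actually downloaded, the CPU-usage figures are only rough assumptions, and a complete file must also reference the CUDA DLLs shipped with the build. See the thread linked above for full, tested examples.
<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <file_info>
    <name>MB_6.08_mod_CUDA.exe</name> <!-- placeholder: use your build's real filename -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <plan_class>cuda</plan_class>
    <avg_ncpus>0.05</avg_ncpus> <!-- assumption: roughly the ~5% CPU a GPU task uses -->
    <max_ncpus>0.05</max_ncpus>
    <coproc>
      <type>CUDA</type>
      <count>1</count> <!-- one GPU per task -->
    </coproc>
    <file_ref>
      <file_name>MB_6.08_mod_CUDA.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>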
What should not be used:
1) The ncpus field in cc_config.xml.
2) The app_info.xml supplied with the obsolete "team" packs.
3) The "teamed" modification of AK_v8 (it was part of the "team" packs).
The key difference from the V9 packs is:
The "team" mod now supports multi-GPU configs.
There will be a number_of_GPUs file in the SETI project directory. By default it contains the number 1.
If you have more GPUs, just edit that file (enter the number of GPUs installed in the host instead of 1), as in the example below.
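For instance, on a host with 2 CUDA-enabled GPUs, the number_of_GPUs file would contain just the single line:
2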
And don't forget to change your cc_config.xml.
The ncpus value now needs to be NUMBER_OF_LOGICAL_PROCESSORS + NUMBER_OF_GPUs for the host.
For example, for a quad-core with 2 CUDA-enabled GPUs it should be 4 + 2 = 6.
Here is an example of the minimal cc_config.xml file you need:
<cc_config>
  <options>
    <ncpus>NUMBER_OF_LOGICAL_PROCESSORS+NUMBER_OF_GPUs</ncpus>
  </options>
</cc_config>
The file should be placed in the BOINC data directory (the one that contains the projects subdirectory).
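Filled in for the example above (a quad-core with 2 CUDA-enabled GPUs, so ncpus is 4 + 2 = 6), the file would read:
<cc_config>
  <options>
    <ncpus>6</ncpus> <!-- 4 logical CPUs + 2 GPUs -->
  </options>
</cc_config>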
Currently V10 is available only in an SSSE3 version; other builds will follow.
When you post to this thread seeking help, please don't forget to provide a link to your host and a description of your config (OS, number of GPU cards, which pack you use, video driver version).
It will vastly decrease the number of unneeded questions and save time both for you and for anyone who helps you.
Please don't forget to check ALL STUFF you download from the Internet (including these packs) with an updated antivirus.
A fixed version has been posted; please update your configs.
Thanks to mr.kjellen from the SETI main forums for the bug report.
Jason00:
Thanks, I downloaded this yesterday! So far no problems. I only have Astropulse in my waiting WUs right now, so I can't say anything about the GPU crunching.
Also, will this optimized client work with any of the new BOINC betas? I'm still running 6.4.5, waiting on word whether I can upgrade yet.
PatrickV2:
I just registered here (after freeloading on the optimized clients posted here before), and am currently running the V10 package on my Q6600/8800GTX machine under Windows Vista x86 Ultimate.
It looks to run OK: 4 processes on the Q6600, a fifth called 'MB_6.08_mod_CUDA...' taking about 5% CPU, and I see one of the 5 running units progressing at about twice the rate of the others (1 is at ~31%, 4 at ~15%).
Is there a way to see from within BOINC Manager which WU is running on the GPU? (Besides observing that it progresses quicker?)
Regards, Patrick.
mr.mac52:
I checked my system with V10a x64 installed; I have now finally received and processed several CUDA jobs without errors, and one did detect and kill a VLAR workunit.
I'd say your V10a x86 and x64 packages are both working as desired.
Thanks Raistmer once again for your excellent work!
gaulois952:
I only have this in my cc_config:
<cc_config>
  <log_flags>
  </log_flags>
  <options>
    <dont_contact_ref_site>1</dont_contact_ref_site>
  </options>
</cc_config>
<!-- View http://boinc.berkeley.edu/trac/wiki/ClientMessages for full set of
options and some explanations -->
Help me :(
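(Going by Raistmer's instructions above, what is missing here is the ncpus option inside the options block. A sketch of the corrected file, assuming a quad-core host with a single GPU, so ncpus = 4 + 1 = 5; adjust the value for your actual hardware:)
<cc_config>
  <log_flags>
  </log_flags>
  <options>
    <dont_contact_ref_site>1</dont_contact_ref_site>
    <ncpus>5</ncpus> <!-- assumption: 4 logical CPUs + 1 GPU -->
  </options>
</cc_config>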