
SETI MB CUDA for Linux


sunu:
Autokill, unfortunately, no. You can do it with a script or manually (search the workunit files for <true_angle_range>0.01 or 0.00 and delete or abort those tasks).
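
Something like this rough Python sketch would do for the search part (the BOINC project path and the 0.02 cut-off are my assumptions for a stock install, and it only lists the suspect workunits; the abort itself is still done by hand in BOINC Manager):

--- Code: ---
import glob
import os
import re

# Assumed location of the SETI@home files under a stock BOINC install.
PROJECT_DIR = os.path.expanduser("~/BOINC/projects/setiathome.berkeley.edu")
# Catches anything whose true_angle_range starts with 0.00 or 0.01.
THRESHOLD = 0.02

pattern = re.compile(r"<true_angle_range>\s*([0-9.eE+-]+)")

for path in glob.glob(os.path.join(PROJECT_DIR, "*")):
    if not os.path.isfile(path):
        continue
    with open(path, errors="ignore") as f:
        match = pattern.search(f.read())
    if match and float(match.group(1)) < THRESHOLD:
        # Candidate VLAR workunit: note its name, then abort that task.
        print(os.path.basename(path), match.group(1))
--- End code ---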

rja:
I realize that the app_info.xml files are hints, but is there a reason for the MB file to differ from the MB+AP file?

Should the version_num, avg_ncpus, and max_ncpus match between the MB+AP app_info.xml and MB app_info.xml files in setiathome-CUDA-6.08.i686.tar.bz2?  Same for setiathome-CUDA-6.08.x86_64.tar.bz2?

The MB+AP app_info.xml has version_num of 607 while the MB app_info.xml has version_num of 608.

The version 603 MB+AP avg_ncpus and max_ncpus are set to 1.0000, while the MB avg_ncpus and max_ncpus are set to 0.040000.
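
To line the two packages up side by side, a rough sketch along these lines should do (the unpack paths are just placeholders, and the tag names are the usual BOINC app_info.xml ones):

--- Code: ---
import xml.etree.ElementTree as ET

def summarize(path):
    # Print version_num, avg_ncpus and max_ncpus for every <app_version>
    # entry in the given app_info.xml.
    root = ET.parse(path).getroot()
    for av in root.iter("app_version"):
        print(path,
              av.findtext("app_name"),
              "version_num=" + (av.findtext("version_num") or "?"),
              "avg_ncpus=" + (av.findtext("avg_ncpus") or "?"),
              "max_ncpus=" + (av.findtext("max_ncpus") or "?"))

# Placeholder paths: wherever the MB-only and MB+AP packages were unpacked.
summarize("setiathome-CUDA-MB/app_info.xml")
summarize("setiathome-CUDA-MB+AP/app_info.xml")
--- End code ---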

ML1:

--- Quote from: Hefto99 on 31 Jan 2009, 08:58:56 pm ---My CPU runs at 2 GHz (Athlon X2 3800+), the graphics card is an 8600 GT at default clocks, openSUSE 11.1 64-bit; here are some results:

[...]

CPU utilization is almost 100% for SETI
--- End quote ---
I see pretty much the same on my system for an AthlonXP 6400+ and 8600GT GPU (256 MB VRAM).

Is the CPU doing a busy-wait poll of the GPU? If not, why the high CPU utilisation?

As an experiment I'm keeping the CPU priority down to nice 19 (instead of the default 10) to see if there is any slowdown for the CUDA processing. However, that only reduces the CPU load to between 75% and 90% for a core.

Is there any profiling that we can run to see what it is doing with the CPU time?
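
Short of a proper profiler run, one crude check is whether the time goes to user space or to the kernel. A minimal sketch, assuming Linux's /proc and taking the CUDA app's PID on the command line (both of those are my additions, nothing from the app itself):

--- Code: ---
import os
import sys
import time

def cpu_times(pid):
    # Read cumulative user and system CPU time for a process from /proc.
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # The comm field sits in parentheses and may contain spaces, so split
    # on the closing ')' first; utime and stime (stat fields 14 and 15)
    # then land at indices 11 and 12.
    fields = data.rsplit(")", 1)[1].split()
    hz = os.sysconf("SC_CLK_TCK")
    return int(fields[11]) / hz, int(fields[12]) / hz

pid = int(sys.argv[1])
u0, s0 = cpu_times(pid)
time.sleep(10)
u1, s1 = cpu_times(pid)
print(f"over 10 s wall clock: user {u1 - u0:.1f} s, system {s1 - s0:.1f} s")
--- End code ---

Mostly user time would hint at a polling loop in the app or CUDA runtime; mostly system time would point more at the driver side.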

Happy crunchin',
Martin

ML1:

--- Quote from: ML1 on 06 Feb 2009, 08:28:49 pm ---I see pretty much the same [100% CPU] on my system for an AthlonXP 6400+ and 8600GT GPU (256 MB VRAM).

Is the CPU doing a busy-wait poll of the GPU? If not, why the high CPU utilisation?

As an experiment I'm keeping the CPU priority down to nice 19 (instead of the default 10) to see if there is any slowdown for the CUDA processing. However, that only reduces the CPU load to between 75% and 90% for a core...
--- End quote ---
And for a brief comparison of a very few examples (sorted by angle range; the columns are start time, finish time, angle range, and workunit name):

04-Feb-2009 20:28:20 04-Feb-2009 20:43:07 2.7155224489909 19dc08ac.31914.13160.15.8.44
05-Feb-2009 17:48:16 05-Feb-2009 18:02:44 2.7155504718476 17dc08ae.20201.15207.8.8.172
07-Feb-2009 09:34:39 07-Feb-2009 09:53:13 2.7155603822925 17dc08ae.1228.13162.7.8.1
07-Feb-2009 11:07:38 07-Feb-2009 11:26:01 2.7155603822925 17dc08ae.1228.13162.7.8.143
07-Feb-2009 11:26:01 07-Feb-2009 11:43:35 2.7155603822925 17dc08ae.1228.13162.7.8.149
07-Feb-2009 10:11:51 07-Feb-2009 10:30:33 2.7155603822925 17dc08ae.1228.13162.7.8.7
03-Feb-2009 18:19:10 03-Feb-2009 18:35:18 2.7155918111126 20dc08ae.27884.20931.9.8.10
03-Feb-2009 19:23:17 03-Feb-2009 19:38:40 2.7155918111126 20dc08ae.27884.20931.9.8.11

07-Feb onwards is at nice 19 and between 75% and 90% CPU on one core. So roughly, 15-minute WUs go up to about 19 minutes.

05-Feb-2009 23:24:24 05-Feb-2009 23:57:35 0.70474762854275 16dc08ad.2380.10706.11.8.3
05-Feb-2009 22:51:12 05-Feb-2009 23:24:24 0.7330587117685 16dc08ad.2380.11115.11.8.6
07-Feb-2009 04:03:45 07-Feb-2009 04:41:28 0.81850781128826 21dc08ab.5849.11933.10.8.173
06-Feb-2009 10:13:58 06-Feb-2009 10:43:48 0.82366885623122 21dc08ab.11148.20931.4.8.88
07-Feb-2009 02:27:27 07-Feb-2009 03:04:36 0.82505766342312 21dc08ab.5849.20931.10.8.31
05-Feb-2009 22:21:52 05-Feb-2009 22:51:12 0.84615758941882 16dc08ac.28940.4571.10.8.84
06-Feb-2009 05:54:03 06-Feb-2009 06:23:36 0.86971373346817 20dc08ab.30334.20931.11.8.13

And 33 mins is pushed up to 38 mins, and 30 mins up to 37 mins...

06-Feb-2009 19:11:13 06-Feb-2009 19:56:01 0.43305205747667 16no08ag.22317.25021.15.8.184
07-Feb-2009 03:04:36 07-Feb-2009 04:03:45 0.43362818142428 01dc08aa.19405.7025.3.8.251
07-Feb-2009 01:31:22 07-Feb-2009 02:27:27 0.43362879013513 01dc08aa.19405.4571.3.8.98
06-Feb-2009 18:24:59 06-Feb-2009 19:11:13 0.43418781810648 16dc08ac.14209.240437.9.8.152
06-Feb-2009 17:39:39 06-Feb-2009 18:24:59 0.43418793067813 16dc08ac.14209.238801.9.8.234
06-Feb-2009 22:16:11 06-Feb-2009 23:01:44 0.43422120716759 16dc08ac.28940.243300.10.8.199
07-Feb-2009 04:41:29 07-Feb-2009 05:40:32 0.43430233574049 16dc08ac.26815.244118.11.8.113

45mins -> 56-59mins...


Sooo... The slowdown looks to be roughly proportional to the reduction in CPU time the app gets...
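
As a quick back-of-the-envelope check of that, using the rounded minutes quoted above (the 75-90% CPU share is just my reading of the earlier observation):

--- Code: ---
# Rounded run times (minutes) before and after dropping to nice 19.
pairs = [(15, 19), (33, 38), (30, 37), (45, 57.5)]
for before, after in pairs:
    print(f"{before} -> {after} min: slowdown x{after / before:.2f}")
# Slowdowns of roughly 1.15-1.28 match the app only getting ~75-90% of a
# core (1 / 0.80 = 1.25), i.e. the GPU only advances as fast as the CPU
# time it is fed.
--- End code ---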

Are the nVidia Windows drivers really so much more efficient than their Linux builds?

Or are there frequent busy-waits around many small GPU steps?

Happy crunchin',
Martin

Raistmer:
For my GPU the -poll mod is useless:

http://setiathome.berkeley.edu/forum_thread.php?id=51712&nowrap=true#863155
