Could you provide a link to your host, or some failed work units? If you follow the instructions provided by sunu and make sure the Nvidia modules are loaded, then it will also work for you. I'm running the 190.18 driver with CUDA 2.3 libraries and the 2.2 VLARkill app now on two machines with G92 chips; so far, no issues.
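Before launching the app, a quick sanity check (my suggestion, not part of the original instructions) is to confirm the module is actually loaded and the device nodes exist:

lsmod | grep nvidia      # the nvidia kernel module should be listed
ls -l /dev/nvidia*       # the device nodes the CUDA library opens

If /dev/nvidia0 or /dev/nvidiactl is missing, CUDA initialization will fail no matter which driver version is installed.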
[...]
setiathome_CUDA: CUDA Device 1 specified, checking...
  Device 1: GeForce 8600 GTS is okay
SIGSEGV: segmentation violation
Stack trace (16 frames):
setiathome-CUDA-6.08.x86_64-pc-linux-gnu[0x47cba9]
/lib64/libpthread.so.0[0x7f0954f4f0f0]
/usr/lib64/libcuda.so.1[0x7f09559c3920]
/usr/lib64/libcuda.so.1[0x7f09559c9684]
/usr/lib64/libcuda.so.1[0x7f0955992a0f]
/usr/lib64/libcuda.so.1[0x7f095571e296]
/usr/lib64/libcuda.so.1[0x7f095572ebab]
/usr/lib64/libcuda.so.1[0x7f0955716190]
/usr/lib64/libcuda.so.1(cuCtxCreate+0xaa)[0x7f095571000a]
setiathome-CUDA-6.08.x86_64-pc-linux-gnu[0x5ace4b]
setiathome-CUDA-6.08.x86_64-pc-linux-gnu[0x40d4ca]
setiathome-CUDA-6.08.x86_64-pc-linux-gnu[0x419f23]
setiathome-CUDA-6.08.x86_64-pc-linux-gnu[0x424c7d]
setiathome-CUDA-6.08.x86_64-pc-linux-gnu[0x407f60]
/lib64/libc.so.6(__libc_start_main+0xe6)[0x7f0954bec576]
setiathome-CUDA-6.08.x86_64-pc-linux-gnu(__gxx_personality_v0+0x241)[0x407be9]
Exiting...
After crunching other projects for some months, I restarted SETI@GPU just this morning, but using the initial CUDA build for Linux, which uses 100% of one CPU core as well... So these new versions, which can be found on Crunch3r's board, will use only a few % of one CPU? That would be awesome... I'm currently still on 180.60, some 2.6.30 rc kernel and BOINC 6.6.17, but willing to update if I could free up that core with a newer version of the app.
Yup, that's been bothering me too. I'm wondering if there's a way to trick it into reporting clock time rather than cpu time... I'm using nvidia 185.18.14 and BOINC 6.6.11 btw since I'd like to do multi-GPU here soon.
6.6.37 was reporting proper cpu/gpu times, but when I went back to 6.6.11 to use multiple devices that time reporting broke. I am not sure if adding a flops statement in app_info.xml will help with that or not.
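If it does help, the flops element goes inside the <app_version> block of app_info.xml. A rough sketch (the number is only a placeholder that people tune to the GPU's real speed, and whether it fixes the time reporting is exactly the open question; the file name is taken from the stderr earlier in the thread):

<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <flops>2.0e10</flops>    <!-- placeholder estimate in FLOPS -->
    <file_ref>
        <file_name>setiathome-CUDA-6.08.x86_64-pc-linux-gnu</file_name>
        <main_program/>
    </file_ref>
</app_version>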
The default priority of nice 10 seems to slow the process down on my box; once I switched it to 0 or -5, it processed much faster and accumulated CPU time more quickly.
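For anyone who wants to try the same without restarting the client (the process name here is guessed from the binary in the stderr above; adjust it to match yours):

renice -5 -p `pgrep -f setiathome-CUDA`

Note that only root can lower a nice value, even from 10 back toward 0; raising it is allowed for the process owner.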
I tried to go to the link, but every time I go to calbe.dw70.de I get access denied... Is there another place to get it?
Has anyone tried the CUDA 2.2 client together with the 190.xx drivers and the CUDA 2.3 libraries, to see if there is a speed-up like under Windows?
Does anyone know if there is a CUDA 2.3 VLARkill x86_64 app available yet? I am switching everything to 2.3 and the 190 driver today.
I'm trying to get CUDA working (with the 32-bit binary posted in message 1 of this thread) with my new 8600GTS in Slackware Linux and I'm having issues. I have run the nvidia installer, etc., but I get some weird errors:
1. The output shows I have a CUDA device, however it says I have revision 0 of the driver installed, even though I installed 185.18.14 and upgraded to 185.18.31 (a way to check the loaded driver version follows this list): CUDA device: GeForce 8600 GTS (driver version 0, compute capability 1.1, 255MB, est. 18GFLOPS).
2. I have modified my app_info.xml to allow both AK_V8_SSE3 (32-bit) and the CUDA app to run simultaneously (included .xml file; a sketch of this kind of setup follows this list). I have 3 active tasks being worked on; two (for my dual CPU) say setiathome_enhanced 6.03 and run just fine. The third is the CUDA one, which says setiathome_enhanced 6.08 (cuda), and its status NEVER goes past "Ready to start". It will eventually error out with "Computation error". Anyone have any thoughts or advice on how to debug this?
4. I'm not using X Windows at all; this is all console-based. Is Xorg required to be running to utilize CUDA?
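On point 1 (driver version 0): one way to see which driver the kernel actually loaded, independent of what the app reports, is the proc file the nvidia module exposes:

cat /proc/driver/nvidia/version

If that file is missing, the module isn't loaded at all, which would also fit the CUDA task never leaving "Ready to start".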
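On point 2: a minimal two-app app_info.xml for this kind of setup would look roughly like the sketch below. The AK_V8 file name is a placeholder (use the real binary name); the CUDA name is taken from the stderr earlier in the thread. One thing worth checking is the <coproc> section in the CUDA <app_version> - without it, a 6.6.x client doesn't know the app needs the GPU:

<app_info>
    <app>
        <name>setiathome_enhanced</name>
    </app>
    <file_info>
        <name>AK_V8_SSE3_binary</name>    <!-- placeholder name -->
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>603</version_num>
        <file_ref>
            <file_name>AK_V8_SSE3_binary</file_name>
            <main_program/>
        </file_ref>
    </app_version>
    <file_info>
        <name>setiathome-CUDA-6.08.x86_64-pc-linux-gnu</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>608</version_num>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>setiathome-CUDA-6.08.x86_64-pc-linux-gnu</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>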
#!/bin/bash
# Create the /dev/nvidia* device nodes by hand -- normally the X server
# does this, so on a console-only box it has to be done manually.
modprobe nvidia
if [ "$?" -eq 0 ]; then
  # Count the number of NVIDIA controllers found.
  N3D=`/sbin/lspci | grep -i NVIDIA | grep "3D controller" | wc -l`
  NVGA=`/sbin/lspci | grep -i NVIDIA | grep "VGA compatible controller" | wc -l`
  N=`expr $N3D + $NVGA - 1`
  for i in `seq 0 $N`; do
    mknod -m 666 /dev/nvidia$i c 195 $i
  done
  mknod -m 666 /dev/nvidiactl c 195 255
else
  exit 1
fi
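Since this is a console-only Slackware box, those nodes have to be recreated on every boot. One option (my suggestion, not from the original post, and the script path is hypothetical) is to call it from Slackware's local startup script:

# appended to /etc/rc.d/rc.local, which runs as root at boot
/usr/local/sbin/nvidia-mknod.sh

With the nodes in place, CUDA should work without Xorg running.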
My gkrellm monitors were the most sensitive to this behavior and began displaying symptoms before it got to the noticeable level affecting my desktops. Since I run an extremely busy set of desktops (2 of the desktops display a total of 29 gkrellm monitor strips monitoring our servers in real time), I suspect the 190 driver isn't ready for prime time yet on Linux when handling more than a near-idle desktop load plus CUDA.
Thanks, but before I use it, how does the VLAR kill work? I rebrand all of my VLARs to the CPU as soon as I can, and prefer to work whatever units I get, since I have 8 cores just sitting there most of the time.
Are you talking about boinc manager? Some time down the road, boinc manager changed from showing cpu time to elapsed time. If you use boinc 6.6.11 for multi-GPU, you can use a later boinc manager version that shows elapsed time. I'm using boinc 6.6.11 with boinc manager 6.6.37.
Quote from: sunu on 02 Aug 2009, 07:07:41 am
"Are you talking about boinc manager? Some time down the road, boinc manager changed from showing cpu time to elapsed time. If you use boinc 6.6.11 for multi-GPU, you can use a later boinc manager version that shows elapsed time. I'm using boinc 6.6.11 with boinc manager 6.6.37."

Perfect - I didn't think of doing that! Thanks again, sunu - you continue to be a big help and it's definitely appreciated!
Argh - somehow it doesn't like running with 6.6.11 boinc and 6.6.36 boincmgr... Is there some trick to that I'm missing?
Quote from: Tye on 02 Aug 2009, 11:16:11 am
"Argh - somehow it doesn't like running with 6.6.11 boinc and 6.6.36 boincmgr... Is there some trick to that I'm missing?"

What problem do you have? Just copy boincmgr to your 6.6.11 installation.
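In concrete terms (the paths are examples only; adjust to wherever each version lives), the swap is just:

# back up the old manager, then drop in the newer one
mv ~/boinc-6.6.11/boincmgr ~/boinc-6.6.11/boincmgr.orig
cp ~/boinc-6.6.37/boincmgr ~/boinc-6.6.11/boincmgr

The client daemon (boinc) stays at 6.6.11; only the GUI binary changes.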
Yep, that's what I did, but it just sits there frozen at the "Communicating with client" portion on startup. No messages, no display, no processes starting, etc.