I don't understand something. The developers obviously know that CUDA devices hate VLAR and VHAR workunits, so why can't they put the smarts that are in the Perl script directly into BOINC? As it downloads a workunit it could read the angle range and decide right then where to assign it. It seems like an easy thing to implement, and it would save a lot of trouble for people like me who have a card that locks up at the slightest hint of a VLAR and who have to run something external to make sure 'proper' workunits are fed to CUDA. It seems to me that leaving this out of BOINC is a gross oversight.
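Just to illustrate, the whole decision the Perl script makes could probably be expressed in a few lines. This is only a sketch (in Python rather than Perl): the true_angle_range field name and the VLAR/VHAR cutoffs below are my guesses at what the script keys on, not the actual values it uses.

# rough sketch of the rebranding decision -- not the real rescheduler logic.
# assumes the WU header carries a line like <true_angle_range>0.41</true_angle_range>
# and that ~0.05 / ~1.13 are reasonable VLAR/VHAR cutoffs (guesses on my part).
import re
import sys

VLAR_LIMIT = 0.05   # below this: very low angle range, keep off CUDA
VHAR_LIMIT = 1.13   # above this: very high angle range, keep off CUDA

def classify(wu_path):
    with open(wu_path, errors="ignore") as f:
        header = f.read(65536)          # the angle range sits near the top of the file
    m = re.search(r"true_angle_range[>=\s]+([0-9.]+)", header)
    if not m:
        return "unknown"
    ar = float(m.group(1))
    if ar < VLAR_LIMIT:
        return "VLAR -> CPU"
    if ar > VHAR_LIMIT:
        return "VHAR -> CPU"
    return "OK for CUDA"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, classify(path))

If BOINC ran that kind of check as it handed tasks to devices, nobody would need an external script at all.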
Hey all. I recently got a computation error with one of my CUDA WUs, but it wasn't a segfault. Maybe there's some info in there for you guys: http://setiathome.berkeley.edu/result.php?resultid=1347109626
Now, I don't know much about desktop settings, but there is one change I made in the past few weeks with nvidia-settings: I unchecked Sync to VBlank in the XVideo settings, and also unchecked Sync to VBlank and Allow Flipping in the OpenGL settings. I wasn't sure what they did, but there seemed to be no difference. Should they be checked?
PowerMizer, which doesn't seem to have adjustable settings, shows adaptive clocking enabled, performance level 2, performance mode: desktop. Level 2 matches the 3D clocks above. However, I remember that when I first got the card the performance mode said maximum performance, and somewhere along the line it changed to desktop. Since the other settings are the same, I can only assume which text shows up is a function of which driver is being used.
Since 6.9.0 reports two Teslas, could it be mixing up which device is 0 and which is 1? It's completely odd, since the Tesla is running at 500 MHz GPU / 900 MHz memory, so it should be considerably slower. 6.9.0 rates both devices it thinks are Teslas at 74 GFLOPS, yet 6.6.11 rates the 285 at 127 GFLOPS.
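One thing I might try is to see what the CUDA driver itself says each device index is. This is just a sketch and assumes pycuda is installed, which I'd have to do first.

# quick check of what the CUDA driver reports for each device index
# (needs pycuda installed -- that's an assumption, not something BOINC ships).
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    clock_mhz = dev.get_attribute(cuda.device_attribute.CLOCK_RATE) / 1000.0
    mp_count = dev.get_attribute(cuda.device_attribute.MULTIPROCESSOR_COUNT)
    print("device %d: %s, %d multiprocessors, core clock %.0f MHz"
          % (i, dev.name(), mp_count, clock_mhz))

If that prints the GTX 285 and the Tesla in a different order than BOINC's startup messages list them, at least I'd know the indices are getting crossed somewhere.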
I am going to reboot this machine tomorrow, so when I do I am going to go over the settings in the CMOS setup. Presently the PCI-E bus frequency is set to auto; maybe I will fix it at 100 MHz. It could be doing God knows what in auto.
Also, the workunits with three-digit run times are still coming from the 285 and the two-digit ones from the Tesla. I wonder if it has something to do with how busy my desktops are? I have quite a lot going on 24/7: 18 gkrellm server monitors running in one desktop, usually 4 or 5 browser windows in different desktops with maybe 28 or so tabs open, an average of 8 or 10 ssh Konqueror tabs open into our servers, email, VirtualBox running XP which also runs BOINC, Kopete, 8 or 9 post-it notes in the various desktops, a few KEdit windows open, plus momentary things like Adobe Reader, SMPlayer or whatever. I'm in totally new territory here. My experience with graphics cards is plug it in and make sure it works with a stable and peppy screen. However, the 'busyness' of the desktops is not new and was basically the same when I was getting 10-13 minute workunits out of both cards.
Hello, I am trying Crunch3r's CUDA SETI application and have searched Google and this forum too, but the result is not OK. I tried CUDA 2.1 and the application setiathome-CUDA-6.08.x86_64-pc-linux-gnu. It computes OK but takes 100% of a CPU: http://setiathome.berkeley.edu/result.php?resultid=1340190227 Now I am testing CUDA 2.3 with the same application and the result is SIGSEGV: segmentation violation. I tried adding setiathome-CUDA-6.08.x86_64-pc-linux-gnu to /usr/local/bin but it still isn't working. CUDA 2.2 gives a segmentation violation too. Thanks for any ideas. Libor
I tried this application too, but here is another problem:

ldd setiathome-6.08.CUDA_2.2_x86_64-pc-linux-gnu
./setiathome-6.08.CUDA_2.2_x86_64-pc-linux-gnu: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./setiathome-6.08.CUDA_2.2_x86_64-pc-linux-gnu)

CentOS 5 has no GLIBCXX_3.4.9 in its updates right now.
Libor
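For anyone who wants to double-check, the GLIBCXX versions that libstdc++ actually exports can be listed like this. It is only a small sketch; the library path is taken from the ldd output above.

# list the GLIBCXX version symbols exported by the system libstdc++,
# to confirm whether 3.4.9 is really missing (path from the ldd output above).
import re
import subprocess

LIB = "/usr/lib64/libstdc++.so.6"
out = subprocess.run(["strings", LIB], capture_output=True, text=True).stdout
versions = sorted(set(re.findall(r"GLIBCXX_[0-9.]+", out)))
print("\n".join(versions))

If GLIBCXX_3.4.9 is not in that list, the application was built against a newer libstdc++ (roughly gcc 4.3) than CentOS 5 ships, so it needs either a newer libstdc++ placed alongside it or a build against the older runtime.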
Is it even possible to use CUDA with two different devices? If so, what am I missing?
No, I have not tried just running BOINC without a GUI. I will try that this coming weekend when I can spare some downtime from work and monitoring the servers. I'll let it run for an hour with no X running and then go in and see if there are any differences.
The thing is, the usage of my desktops has not changed much at all during the past year, so I had the same stuff open when I was getting the 13-minute workunits a few months ago. It will be interesting to see if the three-digit numbers move into two digits on the tasks report, though.
I hate PowerMizer myself, but I cannot find any option to turn it off and leave the card in high-performance mode at all times. Every time I spot-check it, it's always in high-performance mode, so maybe my temps are not high enough to trigger it (assuming temperature is its only trigger), and if idle is a trigger, my desktop is never idle; even when I go to bed, all the gkrellm monitors are advancing their graphs every second.
It seems so strange: with all the MB servers down, my CUDA cards are both idling at around 46°C. Really odd, since I am used to them being in the low-to-mid 60s all the time.
Quote from: riofl on 02 Sep 2009, 12:05:14 pm
No, I have not tried just running BOINC without a GUI. I will try that this coming weekend when I can spare some downtime from work and monitoring the servers. I'll let it run for an hour with no X running and then go in and see if there are any differences.

Leave X running; just close all those apps you have open. Just the desktop with BOINC in the background.

OK, I'll close down all my 'server' functions as well, like my Jabber server, bind, etc., so it's just X and BOINC running.

Quote from: riofl on 02 Sep 2009, 12:05:14 pm
The thing is, the usage of my desktops has not changed much at all during the past year, so I had the same stuff open when I was getting the 13-minute workunits a few months ago. It will be interesting to see if the three-digit numbers move into two digits on the tasks report, though.

The bigger multibeam workunits started about a month or two ago.

Heh, that's about the time I started noticing issues. Maybe they're not issues after all.

Quote from: riofl on 02 Sep 2009, 12:05:14 pm
I hate PowerMizer myself, but I cannot find any option to turn it off and leave the card in high-performance mode at all times. Every time I spot-check it, it's always in high-performance mode, so maybe my temps are not high enough to trigger it (assuming temperature is its only trigger), and if idle is a trigger, my desktop is never idle; even when I go to bed, all the gkrellm monitors are advancing their graphs every second.

Many people have tried many ways to turn off PowerMizer, usually with no success. PowerMizer levels are triggered by GPU usage or very high (95+°C) temperatures.

OK, well, I hardly do anything involving true graphics besides CUDA running on that stuff, and I have my hardware monitors set to shut the system down if the GPU gets to 80°C. Once I adjusted the fans and air flow in the case, they have never gone above 70°C.

Quote from: riofl on 02 Sep 2009, 12:05:14 pm
It seems so strange: with all the MB servers down, my CUDA cards are both idling at around 46°C. Really odd, since I am used to them being in the low-to-mid 60s all the time.

I have some WUs cached for a few days more.
You can switch those VHARs to your graphics cards.