GTX 460 superclocked

TouchuvGrey:
Downloaded and installed Lunatics_Win64v0.37_(SSE3+)_AP505r409_AKv8bx64_Cudax32f.exe
( Beta ), hoping this will help. I've watched my host average drop from 13,500 per day to
8,700 per day since installing the GTX 460. I suspect I screwed something up along the way;
I have a Black Belt in that sort of thing. <sigh>

Jason G:
Cheers! Yeah, it should help a fair bit straight away, maybe ~20%. It's still early days with those cards: no CUDA documentation mentions them yet, and only one introductory driver is out. Rest assured, they seem to be very popular cards and support will start to gain a foothold.

Jason

Josef W. Segur:

--- Quote from: TouchuvGrey on 24 Aug 2010, 04:04:47 pm ---Downloaded and installed Lunatics_Win64v0.37_(SSE3+)_AP505r409_AKv8bx64_Cudax32f.exe
( Beta ), hoping this will help. I've watched my host average drop from 13,500 per day to
8,700 per day since installing the GTX 460. I suspect I screwed something up along the way;
I have a Black Belt in that sort of thing. <sigh>
--- End quote ---

Actually, we screwed up: we failed to make it clear that all S@H CUDA applications built before the Fermi cards were released have problems on those cards. So you were running the v12 application and turning in a lot of tasks with a false result_overflow. Many of those were judged invalid and got no credit. Some happened to be paired with another host also running old CUDA code on a Fermi; those unfortunately got validated and assimilated into the database, but they overflow so quickly that few credits were granted even for them.
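
To picture what a false overflow is: the Multibeam applications stop analysing early once 30 signals have been reported. A rough, illustrative sketch (the names and numbers are made up; only the 30-signal cap reflects the real applications):

--- Code: ---
# Illustrative sketch only: the real apps stop analysing once 30
# signals have been reported; a broken Fermi build floods the result
# with noise "signals" and trips that cap within seconds.
MAX_SIGNALS = 30

def analyse(signals_per_chunk):
    """Return (total_signals, overflowed) for per-chunk signal counts."""
    total = 0
    for count in signals_per_chunk:
        total += count
        if total >= MAX_SIGNALS:
            return total, True    # early exit: the "-9 result_overflow"
    return total, False

print(analyse([0, 1, 0, 2, 0, 0]))   # healthy task  -> (3, False)
print(analyse([17, 22, 9]))          # broken build  -> (39, True)
--- End code ---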

There may be a lingering problem because the DCF has adapted to doing a lot of the work in extremely short times. That could lead to BOINC killing some tasks for "exceeded elapsed time limit", the infamous -177 error. The new rescheduler Fred M made has an expert feature to prevent any possibility of that, and IIRC there's a way to use it without actually rescheduling tasks. I hope someone who has actually used it will post a quick, clear procedure; I don't have any GPU capable of crunching, so I'm only going on what I've read elsewhere.
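
For those who like numbers, here's roughly the arithmetic as I understand it (the limit formula and every figure below are assumptions for illustration, not taken from the BOINC source):

--- Code: ---
# Rough arithmetic behind the -177 kill, as I understand it: the client
# allows about (rsc_fpops_bound / benchmark flops) * DCF seconds of run
# time.  Every number below is invented purely for illustration.
rsc_fpops_bound = 2.0e14      # per-task bound from the workunit (hypothetical)
benchmark_flops = 2.5e9       # host benchmark speed (hypothetical)

def elapsed_limit(bound, flops, dcf):
    return bound / flops * dcf

print(elapsed_limit(rsc_fpops_bound, benchmark_flops, 1.00))  # ~80000 s, plenty
print(elapsed_limit(rsc_fpops_bound, benchmark_flops, 0.02))  # ~1600 s once DCF
# has adapted to tasks "finishing" in seconds: a legitimately multi-hour
# task now blows past the ~27 minute limit and is killed with -177.
--- End code ---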

You might also want to reduce your cache settings before asking for work during the uptime beginning Friday: because the system thinks your GTX 460 is much faster than it really is, you could be sent more work than you actually want. After the host has 50 or so validated tasks done with x32f, the server's average should be close enough that you needn't worry, and the cache can be made as large as you need before the next outage.
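
To put some made-up numbers on that:

--- Code: ---
# Made-up numbers showing how an inflated speed estimate over-fills the
# cache: the client asks for so many days' worth of work, and the server
# converts that to a task count using ITS runtime estimate, not yours.
cache_days         = 4.0
seconds_requested  = cache_days * 86400
est_runtime_server = 5 * 60    # server thinks a task takes 5 min (hypothetical)
actual_runtime     = 30 * 60   # the card really needs 30 min (hypothetical)

tasks_sent = seconds_requested / est_runtime_server
days_to_crunch = tasks_sent * actual_runtime / 86400
print(int(tasks_sent), days_to_crunch)   # 1152 tasks, 24.0 days of real work
--- End code ---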
                                                                                       Joe

SciManStev:
On the Expert tab there is a checkbox labelled "Limit rsc_fpops_bound to avoid -177 errors". Tick it, then go back to the first tab and press Run. It takes a few seconds, but it works perfectly; I stopped a bunch of -177 errors cold by running it. Just make sure you are not in simulation mode, which is also set on the Expert tab.
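
As I understand it (I haven't read Fred's code, so treat this as a guess at the mechanism), that option just raises the <rsc_fpops_bound> value on every queued task in client_state.xml so the time limit can't trigger. A hypothetical sketch of the same edit, for the curious:

--- Code: ---
# Hypothetical sketch of what (I believe) the rescheduler's option does:
# multiply <rsc_fpops_bound> on every queued task in client_state.xml so
# the elapsed-time limit can no longer fire.  Stop the BOINC client and
# keep the backup before attempting anything like this by hand.
import re, shutil

STATE = "client_state.xml"          # path varies by BOINC install
shutil.copy(STATE, STATE + ".bak")  # always keep a backup

def raise_bound(match, factor=100.0):
    return "<rsc_fpops_bound>%e</rsc_fpops_bound>" % (float(match.group(1)) * factor)

with open(STATE) as f:
    text = f.read()
text = re.sub(r"<rsc_fpops_bound>([^<]+)</rsc_fpops_bound>", raise_bound, text)
with open(STATE, "w") as f:
    f.write(text)
--- End code ---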

Steve

Richard Haselgrove:

--- Quote from: Josef W. Segur on 24 Aug 2010, 06:06:41 pm ---
Actually, we screwed up: we failed to make it clear that all S@H CUDA applications built before the Fermi cards were released have problems on those cards. So you were running the v12 application and turning in a lot of tasks with a false result_overflow. Many of those were judged invalid and got no credit. Some happened to be paired with another host also running old CUDA code on a Fermi; those unfortunately got validated and assimilated into the database, but they overflow so quickly that few credits were granted even for them.
                                                                                       Joe

--- End quote ---

Well, we were slow to pick up on it, but we were there by early June; I think all the warnings were in place by the time of these two posts:

http://lunatics.kwsn.net/1-discussion-forum/when-corrupted-results-get-validated.msg27734.html#msg27734
http://lunatics.kwsn.net/gpu-crunching/unified-installer-with-fermi-support.msg27926.html#msg27926

Anybody who installed any v12 or other non-Fermi app after that, with all the warnings here and on the main project, just wasn't reading. And of course, from that point onwards, simply allowing the stock download would have worked.
