Ummm... just noticed that they've implemented changeset 22180 (http://boinc.berkeley.edu/trac/changeset/22180) at Beta now!

A bit concerned at the moment, as it gives my "Average Processing Rate" for the CPU as 22.20838622151 and for my CUDA card as 147.55580825638. From Joe's post on the NC forum, you should be able to move the decimal point 9 places to the right to get the flops value for each application. I've done this and reset my DCF for Beta to 1, but this has set my estimates to about half of what I know the tasks will take running two at a time on my rig. CPU tasks still seem to have about the right times.

The only thing I can think of is that these "Average Processing Rates" aren't taking into account running multiple instances on one card. Unless I've got the wrong end of the stick about how this is supposed to work?

Edit: No, CPU times are wrong as well, down from 2 hrs 20 (correct) to 1 hr 35.

Edit 2: OK, ignore my moaning, I got it wrong. I was expecting it to give my existing tasks the correct estimated times as well; it seems it only works on new tasks, and it is giving them the correct runtimes!
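The "move the decimal point 9 places" rule from Joe's post amounts to multiplying by 10^9, i.e. reading the displayed Average Processing Rate as GFLOPS. A quick sketch with the numbers from the post (the GFLOPS interpretation is an assumption taken from Joe's description, not something confirmed by the changeset itself):

```python
# Assumption: the host page's "Average Processing Rate" is in GFLOPS,
# so shifting the decimal point 9 places right gives flops/sec.
cpu_apr_gflops = 22.20838622151
gpu_apr_gflops = 147.55580825638

cpu_flops = cpu_apr_gflops * 1e9   # ~2.22e10 flops/sec for the CPU app
gpu_flops = gpu_apr_gflops * 1e9   # ~1.48e11 flops/sec for the CUDA app

print(f"{cpu_flops:.2e} {gpu_flops:.2e}")
```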
Hi Joe,

I got a few (maybe 20 or so) "lost tasks" from Beta earlier today, but everything else since has been new tasks.
The DCF was around 1.8-1.9. After a while I got bored of trying to keep it stable at 1 and figured it would settle down in the end, so I've left it to its own devices now. It does seem to be giving some pretty accurate readings now. The only downside I can see is that it takes -9s (overflows) into account as well, which is going to skew the value if you get a few of them at a time.
Hmm, I didn't even think about that rate showing for a host which doesn't have enough validated results to make a good average. BOINC doesn't use it for estimate scaling until there are at least 10 "Number of tasks completed", and that means validated. Up to 19 validated, even one result_overflow could make a noticeable change to the rate, and a series of them could be really bad. After that, each validated result can only shift the rate by 1% at most, so it would take quite a few overflows to make it really bad. Unfortunately, S@H is likely to deliver a fairly large number of tasks from the same 107-second section of one "tape" for a work request sometimes, and if a burst of RFI is there, they might all overflow. That should be rare, but maybe Dr. Anderson will eventually make use of variance as a means of deemphasizing unusual cases.

Edit: I had thought maybe they cleared the "completed" count and averages when updating the server build, but many of the other top hosts have higher counts than that would allow. Do you know why your host started at the beginning again?

Joe
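The behaviour Joe describes can be sketched as a running average whose per-sample weight shrinks as more results validate. This is only a toy model, not BOINC's actual update rule: here the `cap` parameter and the `1/(n+1)` early weighting are assumptions chosen to mimic "early overflows skew it badly, later ones barely move it".

```python
# Toy model of a capped-influence running average (NOT BOINC's real code).
# Early samples get weight 1/(n+1); once enough results have validated,
# each new sample's weight is floored at `cap` (1% here, per Joe's post).

def update_rate(avg, n, sample, cap=0.01):
    """Fold one validated result's implied rate into the average.

    avg    -- current average processing rate
    n      -- number of validated results already averaged
    sample -- rate implied by the new result
    cap    -- weight given to one sample once n is large (assumed 1%)
    """
    if n == 0:
        return sample
    weight = max(1.0 / (n + 1), cap)
    return avg + weight * (sample - avg)

# A -9 overflow looks absurdly fast; with few validated results it
# yanks the average, but after many results it barely moves it.
early = update_rate(150.0, 5, 1500.0)    # 5 results so far -> big jump
late = update_rate(150.0, 500, 1500.0)   # 500 results -> small nudge
print(round(early, 1), round(late, 1))   # prints: 375.0 163.5
```

This also shows why a burst of overflows from the same noisy "tape" section is the bad case: several large samples in a row compound, cap or no cap.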