Author Topic: Average processing rate @ Beta  (Read 11881 times)

Ghost0210

  • Guest
Average processing rate @ Beta
« on: 30 Aug 2010, 07:00:48 am »
Ummm........

Just noticed that they've implemented changeset 22180 http://boinc.berkeley.edu/trac/changeset/22180 @ Beta now!
Bit concerned at the moment about this, as it gives my "Average Processing Rate" for CPU as 22.20838622151 and for my Cuda card as 147.55580825638.
From Joe's post on the NC forum, you should be able to move the decimal point 9 places to the right to get the flops value for each application.
I've done this and reset my DCF for Beta to 1, but this has set my estimates to about 1/2 of what I know the tasks will take running two at a time on my rig. CPU tasks still seem to have about the right times.
The only thing I can think of is that these "Average Processing Rates" aren't taking into account running multiple instances on one card.

Unless I've got the wrong end of the stick about how this is supposed to work?

Edit: No, CPU times are wrong as well, down from 2 hrs 20 (correct) to 1 hr 35.
EDIT 2: OK, ignore my moaning, I got it wrong - I was expecting it to give my existing tasks the correct estimated times as well. It seems it only works on new tasks, and is giving them the correct runtimes!
« Last Edit: 30 Aug 2010, 07:23:09 am by Ghost »
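For anyone trying to reproduce the arithmetic: the "move the decimal point 9 places" rule simply treats the Average Processing Rate as GFLOPS. Below is a minimal sketch of how a runtime estimate would follow from that rate; rsc_fpops_est and the duration correction factor (DCF) are real BOINC quantities, but the exact scaling formula and the numbers are assumptions for illustration, not the client source.

# Sketch only: how an estimate could be derived from the Average Processing Rate.
def estimated_runtime(rsc_fpops_est, apr_gflops, dcf=1.0):
    """Estimated wall-clock seconds for one task."""
    flops = apr_gflops * 1e9                        # "move the decimal point 9 places"
    return rsc_fpops_est / flops * dcf

fpops_est = 30e12                                   # made-up figure: ~30 TFLOPs of estimated work
print(estimated_runtime(fpops_est, 147.56))         # one task at a time on the GPU: ~203 s
# Running two tasks per card roughly halves each task's effective rate,
# so the real wall-clock time per task is about double that estimate:
print(estimated_runtime(fpops_est, 147.56 / 2))     # ~407 s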

Offline Josef W. Segur

  • Janitor o' the Board
  • Knight who says 'Ni!'
  • *****
  • Posts: 3112
Re: Average processing rate @ Beta
« Reply #1 on: 30 Aug 2010, 01:42:53 pm »
Quote from: Ghost0210 on 30 Aug 2010, 07:00:48 am
Ummm........

Just noticed that they've implemented changeset 22180 http://boinc.berkeley.edu/trac/changeset/22180 @ Beta now!
Bit concerned at the moment about this, as it gives my "Average Processing Rate" for CPU as 22.20838622151 and for my Cuda card as 147.55580825638.
From Joe's post on the NC forum, you should be able to move the decimal point 9 places to the right to get the flops value for each application.
I've done this and reset my DCF for Beta to 1, but this has set my estimates to about 1/2 of what I know the tasks will take running two at a time on my rig. CPU tasks still seem to have about the right times.
The only thing I can think of is that these "Average Processing Rates" aren't taking into account running multiple instances on one card.

Unless I've got the wrong end of the stick about how this is supposed to work?

Edit: No, CPU times are wrong as well, down from 2 hrs 20 (correct) to 1 hr 35.
EDIT 2: OK, ignore my moaning, I got it wrong - I was expecting it to give my existing tasks the correct estimated times as well. It seems it only works on new tasks, and is giving them the correct runtimes!

That mix of old estimates and new estimates is certainly difficult to handle. You could just set DCF to give about the right estimates for the older work (something like 1.9 for your case), and when it gets to the new work it will drift down toward 1.0, or you could reset it to 1 then. The real difficulty is the period when CPU is still doing older work and GPU has already gotten to new, or vice versa. One kind is trying to increase DCF, the other to lower it.

If the project still has "Resend lost results" enabled, it might be used to get all the tasks resent with the new estimates. But they might have turned that off to test the limited version from changeset 22203. Sorry, I don't have any Beta work or I'd test.
                                                                                 Joe
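Joe's tug-of-war between old and new work is easier to picture with a toy update rule. The sketch below assumes the behaviour usually described for the client, where DCF rises quickly when a task runs longer than estimated and drifts down slowly when it runs shorter; the step size and the runtimes are made up for illustration, not taken from the client code.

# Sketch of one host-wide DCF being pulled in opposite directions.
def update_dcf(dcf, actual_runtime, estimated_runtime, down_step=0.1):
    ratio = actual_runtime / estimated_runtime
    if ratio > dcf:
        return ratio                                # jump up quickly when estimates are too low
    return dcf + down_step * (ratio - dcf)          # drift down slowly otherwise

dcf = 1.9                                           # tuned for the old, pre-APR estimates
# An old-style CPU task whose estimate is still too low keeps DCF high:
dcf = update_dcf(dcf, actual_runtime=8400, estimated_runtime=4400)
# New-style GPU tasks whose estimates are already right pull it back down:
for _ in range(20):
    dcf = update_dcf(dcf, actual_runtime=1800, estimated_runtime=1800)
print(round(dcf, 2))                                # drifts toward 1.0 as new work dominates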

Ghost0210

  • Guest
Re: Average processing rate @ Beta
« Reply #2 on: 30 Aug 2010, 02:15:32 pm »
Hi Joe,
I got a few (maybe 20 or so) "Lost Tasks" from Beta earlier today, but everything since then has been new tasks.
The DCF was around 1.8-1.9. After a while I got bored of trying to keep it stable @ 1 and figured it would settle down in the end, so I've left it to its own devices now.
It does seem to be giving some pretty accurate readings now. The only downside I can see is that it takes -9 overflows into account as well, which is going to skew the value if you get a few of them at a time.

Offline Josef W. Segur

  • Janitor o' the Board
  • Knight who says 'Ni!'
  • *****
  • Posts: 3112
Re: Average processing rate @ Beta
« Reply #3 on: 30 Aug 2010, 06:33:02 pm »
Quote from: Ghost0210 on 30 Aug 2010, 02:15:32 pm
Hi Joe,
I got a few (maybe 20 or so) "Lost Tasks" from Beta earlier today, but everything since then has been new tasks.

That probably means they still have the full feature turned on; the limited feature is unlikely to be tripped when the servers aren't overloaded.

Quote from: Ghost0210 on 30 Aug 2010, 02:15:32 pm
The DCF was around 1.8-1.9. After a while I got bored of trying to keep it stable @ 1 and figured it would settle down in the end, so I've left it to its own devices now.
It does seem to be giving some pretty accurate readings now. The only downside I can see is that it takes -9 overflows into account as well, which is going to skew the value if you get a few of them at a time.

Hmm, I didn't even think about that rate showing for a host which doesn't have enough validated results to make a good average. BOINC doesn't use it for estimate scaling until there are at least 10 "Number of tasks completed", and that means validated. Up to 19 validated, even one result_overflow could make a noticeable change to the rate, and a series of them could be really bad. After that, each validated result can only shift the rate by 1% at most, so it would take quite a few overflows to make it really bad. Unfortunately, S@H will sometimes deliver a fairly large number of tasks from the same 107-second section of one "tape" for a work request, and if a burst of RFI is there, they might all overflow. That should be rare, but maybe Dr. Anderson will eventually make use of variance as a means of de-emphasizing unusual cases.

Edit: I had thought maybe they cleared the "completed" count and averages when updating the server build, but many of the other top hosts have higher counts than that would allow. Do you know why your host started at the beginning again?
                                                                                           Joe
« Last Edit: 30 Aug 2010, 10:47:34 pm by Josef W. Segur »
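One plausible reading of the averaging Joe describes is a plain mean for the first few validated tasks, then an exponential average whose per-sample weight is capped at 1%. The threshold of 10 validated tasks and the 1% figure come from his description; the class, the exact formula and the sample values below are assumptions, not the server source.

# Sketch of a running average where later samples can only move it a little.
class AvgProcessingRate:
    def __init__(self):
        self.n = 0
        self.avg = 0.0

    def update(self, sample_gflops, n_threshold=10, max_weight=0.01):
        self.n += 1
        if self.n <= n_threshold:
            # straight mean while the sample count is small
            self.avg += (sample_gflops - self.avg) / self.n
        else:
            # afterwards each new sample moves the average 1% of the way toward it
            self.avg += max_weight * (sample_gflops - self.avg)
        return self.avg

apr = AvgProcessingRate()
for _ in range(12):
    apr.update(147.0)                 # normal GPU tasks
apr.update(2500.0)                    # one -9 overflow that "finished" almost instantly
print(round(apr.avg, 1))              # ~170.5: the outlier nudges an established average instead of replacing it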

Ghost0210

  • Guest
Re: Average processing rate @ Beta
« Reply #4 on: 31 Aug 2010, 03:16:43 am »

Quote from: Josef W. Segur on 30 Aug 2010, 06:33:02 pm
Hmm, I didn't even think about that rate showing for a host which doesn't have enough validated results to make a good average. BOINC doesn't use it for estimate scaling until there are at least 10 "Number of tasks completed", and that means validated. Up to 19 validated, even one result_overflow could make a noticeable change to the rate, and a series of them could be really bad. After that, each validated result can only shift the rate by 1% at most, so it would take quite a few overflows to make it really bad. Unfortunately, S@H will sometimes deliver a fairly large number of tasks from the same 107-second section of one "tape" for a work request, and if a burst of RFI is there, they might all overflow. That should be rare, but maybe Dr. Anderson will eventually make use of variance as a means of de-emphasizing unusual cases.

Edit: I had thought maybe they cleared the "completed" count and averages when updating the server build, but many of the other top hosts have higher counts than that would allow. Do you know why your host started at the beginning again?
                                                                                           Joe
I had issues even getting to the Beta site - it kept giving me a certificate warning - and when I eventually got there I found it had created a duplicate host for me :D
After merging the two hosts it seems that the application details are reset; this happened to me @ Seti a few weeks ago as well.

With the overflows, hopefully Seti won't be affected too much, especially for me, as overflows tend to all come from the same part of the tape and it's rare for me to get sequential tasks from Seti.
Beta, on the other hand, is a different story. I assume it's because a lot less work is sent out @ Beta, so the chances are that you can get sequential tasks and therefore hit a run of overflows.
The only reason I noticed, to be honest, was because the first task I had validated after merging the two hosts was an overflow, and this pushed my "Average Processing Rate" to over 2500.00.
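The 2500+ figure fits if each per-task rate sample is roughly the task's estimated flops divided by its elapsed time; that formula is an assumption here, not a quote from the server code, but it shows why a -9 overflow that quits after a few seconds looks like an absurdly fast host, and why, with no other validated history on the freshly merged host, that one task is effectively the whole average.

# Sketch: why an early overflow dominates a brand-new average.
def rate_sample_gflops(rsc_fpops_est, elapsed_seconds):
    return rsc_fpops_est / elapsed_seconds / 1e9

normal   = rate_sample_gflops(30e12, elapsed_seconds=200)   # ~150 GFLOPS, a typical GPU task
overflow = rate_sample_gflops(30e12, elapsed_seconds=12)    # 2500 GFLOPS from a -9 that bailed out early
print(normal, overflow)
# With zero prior validated tasks, the first sample *is* the average,
# which matches the 2500+ figure seen right after merging the hosts.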

 
