Optimized Astropulse 5.03 for Windows

Jason G:
Can't see anything at that link, Heinz ["No access"].  Is there some text in stderr that says "Blanking too much RFI?"... If so, there's a stuck bit in some channel of some tapes, apparently.

[Edit:] Yep, I found one in your tasks by going through your user info:

http://setiathome.berkeley.edu/result.php?resultid=1218244630


--- Quote ---Error in ap_remove_radar.cpp: generate_envelope: num_ffts_performed < 100.  Blanking too much RFI?
--- End quote ---

_heinz:

--- Quote from: Jason G on 06 May 2009, 07:04:38 pm ---Can't see anything at that link, Heinz ["No access"].  Is there some text in stderr that says "Blanking too much RFI?"... If so, there's a stuck bit in some channel of some tapes, apparently.

[Edit:] Yep, I found one in your tasks by going through your user info:

http://setiathome.berkeley.edu/result.php?resultid=1218244630


--- Quote ---Error in ap_remove_radar.cpp: generate_envelope: num_ffts_performed < 100.  Blanking too much RFI?
--- End quote ---


--- End quote ---
In all 9 cases it is this error:
Error in ap_remove_radar.cpp: generate_envelope: num_ffts_performed < 100.  Blanking too much RFI?

Too bad... if the tape has a lot of them, each WU will just be sent on to the next user again:
http://setiathome.berkeley.edu/workunit.php?wuid=438579544
http://setiathome.berkeley.edu/workunit.php?wuid=438102609
http://setiathome.berkeley.edu/workunit.php?wuid=438809547
http://setiathome.berkeley.edu/workunit.php?wuid=438938766
http://setiathome.berkeley.edu/workunit.php?wuid=438808229
http://setiathome.berkeley.edu/workunit.php?wuid=438585889
http://setiathome.berkeley.edu/workunit.php?wuid=438675872
http://setiathome.berkeley.edu/workunit.php?wuid=438558426
http://setiathome.berkeley.edu/workunit.php?wuid=438555961

heinz

Josef W. Segur:

--- Quote from: Jason G on 06 May 2009, 07:04:38 pm ---Can't see anything at that link, Heinz ["No access"].  Is there some text in stderr that says "Blanking too much RFI?"... If so, there's a stuck bit in some channel of some tapes, apparently.

[Edit:] Yep, I found one in your tasks by going through your user info:

http://setiathome.berkeley.edu/result.php?resultid=1218244630


--- Quote ---Error in ap_remove_radar.cpp: generate_envelope: num_ffts_performed < 100.  Blanking too much RFI?
--- End quote ---

--- End quote ---


--- Quote from: _heinz on 06 May 2009, 08:00:24 pm ---In all 9 cases it is this error:
Error in ap_remove_radar.cpp: generate_envelope: num_ffts_performed < 100.  Blanking too much RFI?

Too bad... if the tape has a lot of them, each WU will just be sent on to the next user again:
http://setiathome.berkeley.edu/workunit.php?wuid=438579544
http://setiathome.berkeley.edu/workunit.php?wuid=438102609
http://setiathome.berkeley.edu/workunit.php?wuid=438809547
http://setiathome.berkeley.edu/workunit.php?wuid=438938766
http://setiathome.berkeley.edu/workunit.php?wuid=438808229
http://setiathome.berkeley.edu/workunit.php?wuid=438585889
http://setiathome.berkeley.edu/workunit.php?wuid=438675872
http://setiathome.berkeley.edu/workunit.php?wuid=438558426
http://setiathome.berkeley.edu/workunit.php?wuid=438555961

heinz
--- End quote ---

My estimate is that there are at least 10,000, on all B3_P1 channel WUs from 12 March through 20 March. It's a reappearance of a problem seen about a year ago in some SETI Beta Astropulse work: the data for that channel has a stuck bit. 5.03 sees that as a DC offset, which means blanking is needed, and then, since all the data gets blanked, it can't establish the data envelope it needs to do the blanking.
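
To make that failure mode concrete, here's a toy sketch of the chain as I understand it. This is not the actual ap_remove_radar.cpp code; the names and thresholds are made up, and only the error string is real:

--- Code: ---
// Illustrative sketch only; names and thresholds are assumptions.
#include <cstdio>
#include <vector>

// A stuck bit makes one component of every sample decode to the same
// value, which shows up as a large DC offset in the channel mean.
bool looks_like_dc_offset(const std::vector<float>& samples)
{
    double sum = 0.0;
    for (float s : samples) sum += s;
    const double mean = sum / samples.size();
    return mean > 0.9 || mean < -0.9;   // nearly every sample identical
}

int main()
{
    // Simulate a channel whose samples are all "stuck" at +1.
    std::vector<float> channel(4096, 1.0f);

    int num_ffts_performed = 0;
    if (!looks_like_dc_offset(channel)) {
        // ...FFTs over the unblanked data would be counted here...
        num_ffts_performed = 4096;
    }

    // The DC offset marks everything as RFI to blank, so no FFTs run and
    // nothing is left from which to build the envelope: the 5.03 error path.
    if (num_ffts_performed < 100) {
        std::fprintf(stderr, "Error in ap_remove_radar.cpp: generate_envelope: "
                             "num_ffts_performed < 100.  Blanking too much RFI?\n");
        return 1;
    }
    return 0;
}
--- End code ---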

The good thing is this didn't happen with AP 5.00, which would have blanked all the data and then done full-length processing anyway. Some tests we did long ago proved that full blanking in 5.00 actually produces some false signals, and those would have gone into the science database.

With 5.03 the cost to users is basically just whatever time it took to download the work; nobody is likely to get so many that it has a serious impact on their daily quota. Up to 7 tasks will be sent for each WU, and the 6th error returned will inhibit any beyond that. So that's 7 tasks, but they cost little time and have no bad science effect. Considering that normal Astropulse v5 work has been taking about 3.5 tasks sent to get a validated pair, the extra burden on the project servers isn't too bad either.
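
For anyone curious about the server-side bookkeeping, it works roughly like this sketch. The limit names follow BOINC's scheme (max_total_results, max_error_results); the numbers 7 and 6 are the ones above, and the loop just replays the worst case where every task errors out:

--- Code: ---
// Rough sketch of BOINC-style reissue limits, not the actual transitioner code.
#include <cstdio>

struct Workunit {
    int results_sent    = 0;   // tasks issued so far
    int errors_returned = 0;   // error results reported back
};

const int max_total_results = 7;   // hard cap on tasks sent per WU
const int max_error_results = 6;   // errors before the WU is given up on

bool may_issue_another(const Workunit& wu)
{
    return wu.results_sent < max_total_results
        && wu.errors_returned < max_error_results;
}

int main()
{
    Workunit wu;
    wu.results_sent = 2;   // initial replication: two tasks go out together

    // Every task errors out, as with the stuck-bit WUs; each returned
    // error triggers one reissue until a cap is reached.
    while (wu.errors_returned < max_error_results) {
        ++wu.errors_returned;          // an error result comes back
        if (may_issue_another(wu))
            ++wu.results_sent;         // a replacement task is sent
    }
    std::printf("tasks sent in total: %d\n", wu.results_sent);   // prints 7
    return 0;
}
--- End code ---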

It would have been even better if this hadn't happened until 5.05 was released, which would put the same error line in stderr but produce a minimal result file and do a normal exit. There would be no signals in the output, the validator would compare the first two results received and see they were the same, and no reissues would be generated.
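
In other words, 5.05 should handle these WUs with something like the following sketch. The function name, result-file contents, and file name are my illustration, not the actual 5.05 source; the stderr line is the real one:

--- Code: ---
// Sketch of a graceful exit for an over-blanked WU; illustrative only.
#include <cstdio>
#include <cstdlib>

void handle_overblanked_wu(const char* result_path)
{
    // Same diagnostic line in stderr as 5.03 produces today.
    std::fprintf(stderr, "Error in ap_remove_radar.cpp: generate_envelope: "
                         "num_ffts_performed < 100.  Blanking too much RFI?\n");

    // Minimal result: a well-formed output file containing no signals.
    // (The tag is a placeholder; the real file format may differ.)
    if (FILE* f = std::fopen(result_path, "w")) {
        std::fprintf(f, "<ap_signals>\n</ap_signals>\n");
        std::fclose(f);
    }

    // Normal exit, so the task counts as a success. Two hosts returning
    // the same empty result lets the validator match them and close the
    // WU with no reissues.
    std::exit(0);
}

int main()
{
    handle_overblanked_wu("result.sah");   // placeholder file name
    return 0;
}
--- End code ---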

Obviously the best thing would have been locating and fixing the cause after the episode at Beta last year, but it wasn't happening on newly recorded data, so I'm not surprised the cause couldn't be located.
                                                                                   Joe

_heinz:
Thanks Jason & Joe,
some more coming today:
http://setiathome.berkeley.edu/workunit.php?wuid=438441412
http://setiathome.berkeley.edu/workunit.php?wuid=438809808
http://setiathome.berkeley.edu/workunit.php?wuid=438515256

Josef W. Segur:

--- Quote from: _heinz on 07 May 2009, 10:31:58 am ---Thanks Jason & Joe,
some more coming today:
http://setiathome.berkeley.edu/workunit.php?wuid=438441412
http://setiathome.berkeley.edu/workunit.php?wuid=438809808
http://setiathome.berkeley.edu/workunit.php?wuid=438515256
--- End quote ---

Your post yesterday for http://setiathome.berkeley.edu/workunit.php?wuid=438675872 (ap_21mr09aa_B3_P1_00358_20090501_27701.wu) extended the range we know has been affected by a day. Before that we knew about 12mr09 through 20mr09.

There are some more B3_P1 tasks on your host (not crunched yet) which have the problem. http://setiathome.berkeley.edu/workunit.php?wuid=440203658 (ap_03ap09aa_B3_P1_00250_20090505_10103.wu_4) is the most interesting, as it extends our knowledge of how long the problem has lasted by almost 2 weeks.

There are some more 'tapes' which the ap_splitter processes haven't done yet and which I think are likely to have the same problem:

09mr09aa  50.20 GB
10mr09aa  50.20 GB
10mr09ab  50.20 GB
10mr09ac  50.20 GB

If you get any B3_P1 tasks from those, I'll be very interested in knowing whether they do or do not have the problem. If we can identify when the problem started, that could help the project figure out a probable cause.

I'll be watching what few AP tasks I get and any reported in the project NC forum, but your host is getting a very good sample of the AP work being split.
                                                                           Joe
