
sources with Orcas


_heinz:
Hi Jason,
great, your sample shows starting a task (asking Mum to make a sandwich) in parallel with the main program (the TV program). You must still wait until the sandwich (the task) is ready.  ;)
We can extend this too:
start a variable number of tasks in parallel with the main program.
We can do that later.
But first we have to resolve some basics on the way to going parallel, such as "load-balanced parallel execution of a fixed number of independent loop iterations", among others.
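The "load-balanced parallel execution of a fixed number of independent loop iterations" idea can be sketched roughly as below. This is a hypothetical Python illustration only, not the actual S@H or Orcas code (the real app is C/C++ and works on FFT buffers); the point is just that workers pull the next iteration from a shared queue as they finish, so uneven iteration costs don't leave anyone idle:

```python
from concurrent.futures import ThreadPoolExecutor

def run_balanced(work_item, n_iterations, n_workers=4):
    """Run a fixed number of independent loop iterations, load-balanced
    across n_workers threads: each worker takes the next queued
    iteration as soon as it finishes its current one, so iterations
    of unequal cost still keep all workers busy. Results come back
    in iteration order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(work_item, range(n_iterations)))

# Toy stand-in for one independent iteration of a compute loop.
def iteration(i):
    return i * i

results = run_balanced(iteration, 10)
```

Because the iterations are independent, no locking is needed beyond the work queue itself; that is exactly what makes this class of loop the easiest starting point for going parallel.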

heinz

Josef W. Segur:
1. If the leaves you're cutting are always the same size and shape, an ideal tool would make the cuts all at once. If the leaves come in a few different sizes, either a tool for each size or an even more complex tool with suitable adjustments is needed.

2. The characteristics of the Validator need to be kept in mind when thinking about dividing the work differently. When it is comparing results it checks that each signal in result A has a matching signal in result B, then checks that each signal in result B has a matching signal in result A. For the ~95% of WUs which have fewer than 31 reportable signals, the order in which signals are found wouldn't make a difference. But for the ~5% which overflow, we need to be sure we'll report the same subset as the stock app does.
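The bidirectional check Joe describes can be sketched as below. This is an illustrative Python sketch only; `signals_match` is a hypothetical stand-in for the real validator's field-by-field, tolerance-based signal comparison:

```python
def results_match(result_a, result_b, signals_match):
    """Validator-style comparison: every signal in A must have a
    matching signal in B, AND every signal in B must have a matching
    signal in A. Checking both directions catches extra signals on
    either side, not just missing ones."""
    def covered(src, dst):
        # Each signal in src must find at least one match in dst.
        return all(any(signals_match(s, t) for t in dst) for s in src)
    return covered(result_a, result_b) and covered(result_b, result_a)

# Toy example: treat signals as frequencies matching within a tolerance.
match = lambda a, b: abs(a - b) < 0.01
```

Note the consequence for the overflow case: if two apps find different subsets of the >30 potential signals, each side will contain signals the other cannot cover, and validation fails, which is why the reported subset must agree with stock.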
                                                        Joe

Jason G:

--- Quote from: Josef W. Segur on 08 Dec 2007, 02:08:10 pm ---1. If the leaves you're cutting are always the same size and shape, an ideal tool would make the cuts all at once. If the leaves come in a few different sizes, either a tool for each size or an even more complex tool with suitable adjustments is needed.
--- End quote ---
Or perhaps a modular tool, with a set of adaptors designed to fit each possible variation [or groups of variations], with a different plan/tool adaptor for each one of the finite set of possibilities. (A single complex tool is large and unwieldy; many different tools are more efficient but maybe even larger in total, with redundancy, and require selection; a modular tool seems an ideal compromise but also carries selection/adaptation overhead)... mmm, all food for thought.

--- Quote ---2. The characteristics of the Validator need to be kept in mind when thinking about dividing the work differently. When it is comparing results it checks that each signal in result A has a matching signal in result B, then checks that each signal in result B has a matching signal in result A. For the ~95% of WUs which have fewer than 31 reportable signals, the order in which signals are found wouldn't make a difference. But for the ~5% which overflow, we need to be sure we'll report the same subset as the stock app does.
                                                        Joe

--- End quote ---
:o, so even though a faster overflow-detection mechanism may be possible, a positive overflow will still require the same processing order/results... [You seem to be saying the order of signals is important in the ~5% where overflow occurs.] Thinking about that a little, I can probably live with the current speed, or even reduced speed, where it results in overflow. I wonder if there may be benefit in quickly disproving [or just detecting reduced likelihood of] an overflow condition early on... (then we could perhaps tactically reorder detection).

Jason

Josef W. Segur:

--- Quote from: j_groothu on 10 Dec 2007, 04:23:46 am ---..., so even though a faster overflow-detection mechanism may be possible, a positive overflow will still require the same processing order/results... [You seem to be saying the order of signals is important in the ~5% where overflow occurs.] Thinking about that a little, I can probably live with the current speed, or even reduced speed, where it results in overflow. I wonder if there may be benefit in quickly disproving [or just detecting reduced likelihood of] an overflow condition early on... (then we could perhaps tactically reorder detection).

Jason
--- End quote ---

The order of the signals within the output result file never matters, but I can see no practical way to select the right subset of what may be a very large number of potential signals other than using the same sequence of searches as stock.

Prechecking for possible overflow is certainly an interesting concept. If someone came up with a really efficient way to do that, the project might consider putting that code in the splitter. In the science app, maybe the best opportunity is during baseline smoothing.

I'll also note that if we found a way of dividing the work much more effectively, the changes could be applied to the official sources prior to the next stock release. That release could be named setiathome_multibeam or something similar, and all participants would have to upgrade.
                                                      Joe

Jason G:

--- Quote from: Josef W. Segur on 10 Dec 2007, 12:28:04 pm ---The order of the signals within the output result file never matters, but I can see no practical way to select the right subset of what may be a very large number of potential signals other than using the same sequence of searches as stock.
--- End quote ---
Ahh, I see: a sticky problem. Just musing some more to get a better picture: statistically, might there be a strong subset of overflow cases where the dataset tends to white noise? (I realise all the good data probably does anyway  ::)) And in such cases, would the first 31 signals definitely be pulses? [Or spikes, rather.]


--- Quote ---Prechecking for possible overflow is certainly an interesting concept. If someone came up with a really efficient way to do that, the project might consider putting that code in the splitter. In the science app, maybe the best opportunity is during baseline smoothing.
--- End quote ---
Don't know about efficient  ;D.  I would, perhaps incorrectly, expect at least some types of obvious overflow [tasks] to be fairly 'white'. It's been a long time since I looked at an autocorrelation function; from vague memory it involves a single convolution. Something like that could judge the whiteness of the signal against a chosen threshold, comparing it to the ideal Dirac spike. I used autocorrelation in signal processing many years ago to analyse buried periodic signals; subtracting the autocorrelation of white noise from that of the source [had interesting results]... but that was on a 1k-node torus, so algorithmic complexity and other practical considerations weren't much of an issue  :P [though they should have been].


--- Quote ---I'll also note that if we found a way of dividing the work much more effectively, the changes could be applied to the official sources prior to the next stock release. That release could be named setiathome_multibeam or something similar, and all participants would have to upgrade.
                                                      Joe

--- End quote ---
That I'll leave for thought 'till next week when I'm on my holidays... yay... I haven't been following what Heinz is up to there; I was lost somewhere around 'Leeks', but you gave me some food for thought and I'll figure it all out then.

'Till next week

Jason
