Seti@Home optimized science apps and information

Optimized Seti@Home apps => Discussion Forum => Topic started by: michael37 on 15 Sep 2007, 01:11:07 pm

Title: CPU temperature
Post by: michael37 on 15 Sep 2007, 01:11:07 pm
There are always on-and-off threads regarding CPU temperature and computer overheating due to BOINC.

Here is a curious fact: Seti@Home runs the hottest. My notebook CPUs run up to 75C during Seti runs. The coolest I've seen at 100% CPU utilization on both CPUs was another project, which peaked at 60C.



Title: Re: CPU temperature
Post by: Josef W. Segur on 15 Sep 2007, 03:36:54 pm
There are always on-and-off threads regarding CPU temperature and computer overheating due to BOINC.

Here is a curious fact: Seti@Home runs the hottest. My notebook CPUs run up to 75C during Seti runs. The coolest I've seen at 100% CPU utilization on both CPUs was another project, which peaked at 60C.

I've often wondered if it would be possible to code a sequence which has no pipeline stalls, keeps all execution units fully occupied, uses cache fully, and uses main memory transfers at peak. In a sense that's the ultimate goal of optimization; nothing wasted. I don't think we're anywhere near that yet, but I take higher CPU temperatures as a good sign in general. OTOH, a bad algorithm can use a lot of CPU without being optimal, so I don't think temperature is a sufficient indicator of effective optimization.
                                                     Joe
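
A minimal C sketch of one ingredient of that goal, avoiding a stall on a single loop-carried dependency (illustrative only, not code from the SETI applications; the function names are invented for the example). Summing with several independent accumulators lets the floating-point add units overlap work that a single running total would serialize:

    #include <stddef.h>

    /* One running total: every add must wait for the previous one to finish. */
    float sum_serial(const float *x, size_t n)
    {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += x[i];
        return s;
    }

    /* Four independent partial sums can be in flight at once, keeping the
       FP pipeline busier for the same amount of useful work. */
    float sum_four_chains(const float *x, size_t n)
    {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        size_t i;
        for (i = 0; i + 4 <= n; i += 4) {
            s0 += x[i];
            s1 += x[i + 1];
            s2 += x[i + 2];
            s3 += x[i + 3];
        }
        for (; i < n; i++)   /* leftover elements */
            s0 += x[i];
        return (s0 + s1) + (s2 + s3);
    }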
Title: Re: CPU temperature
Post by: Gecko_R7 on 15 Sep 2007, 04:59:56 pm
There are always on-and-off threads regarding CPU temperature and computer overheating due to BOINC.

Here is a curious fact: Seti@Home runs the hottest. My notebook CPUs run up to 75C during Seti runs. The coolest I've seen at 100% CPU utilization on both CPUs was another project, which peaked at 60C.

I've often wondered if it would be possible to code a sequence which has no pipeline stalls, keeps all execution units fully occupied, uses cache fully, and uses main memory transfers at peak. In a sense that's the ultimate goal of optimization; nothing wasted. I don't think we're anywhere near that yet, but I take higher CPU temperatures as a good sign in general. OTOH, a bad algorithm can use a lot of CPU without being optimal, so I don't think temperature is a sufficient indicator of effective optimization.
 Joe

Along the lines of your comment, Joe, I remember Francois commenting once (& Alex long before him  ;) ) about the importance of minimizing L1 thrashing. About a year ago Francois rather threw it in everyone's face that the then-current code thrashed heavily, and said that would be a primary area he'd focus on and clean up.
To your point, that kind of thrashing would contribute to higher CPU temps, right? ...but it would certainly not be optimized performance.

How much validity do you think there is to his L1 thrashing comment today?
Is it really that much of an issue/area of opportunity?
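
For a concrete picture of what L1 thrashing looks like, here is a toy C sketch (purely illustrative, not taken from the SETI code; N and the function names are made up for the example). Walking a row-major matrix in row order reuses each cache line before it is evicted; walking it in column order jumps 4 KB per access, so the small L1 keeps throwing away lines it will need again:

    #include <stddef.h>

    enum { N = 1024 };   /* hypothetical dimension: one row of floats = 4 KB */

    /* Row-order walk: consecutive addresses, each cache line is fully used. */
    float sum_by_rows(const float *a)
    {
        float s = 0.0f;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i * N + j];
        return s;
    }

    /* Column-order walk: every access strides 4 KB, pulling in a new cache
       line each time and evicting lines that will be needed on the next pass. */
    float sum_by_cols(const float *a)
    {
        float s = 0.0f;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i * N + j];
        return s;
    }

Both loops do the same arithmetic, but the second spends much of its time servicing cache misses rather than computing.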
Title: Re: CPU temperature
Post by: Josef W. Segur on 15 Sep 2007, 06:44:34 pm
...
OTOH, a bad algorithm can use a lot of CPU without being optimal, so I don't think temperature is a sufficient indicator of effective optimization.
 Joe

Along the lines of your comment, Joe, I remember Francois commenting once (& Alex long before him  ;) ) about the importance of minimizing L1 thrashing. About a year ago Francois rather threw it in everyone's face that the then-current code thrashed heavily, and said that would be a primary area he'd focus on and clean up.
To your point, that kind of thrashing would contribute to higher CPU temps, right? ...but it would certainly not be optimized performance.

How much validity do you think there is to his L1 thrashing comment today?
Is it really that much of an issue/area of opportunity?

I'm sure it could be an important area to focus on, and I wish some programmer with the right skills were contributing here. I focus on the logic of the application algorithms but know little about the details of the hardware. I probably should study Alex's recent code to see if he's found improvements along those lines, though I wouldn't necessarily understand the fine details well enough to recognize what's specifically L1-directed. In addition, both Francois and Alex are concentrating on recent CPUs, which have less variation in cache size and capabilities to consider. Something good for the L1 in a Core 2 might be terrible on an older system.
                                                     Joe
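
A hedged sketch of the usual remedy, cache blocking, with the tile size left as a tunable parameter (hypothetical code, not from the SETI apps; BLOCK and the function name are invented for the example). The point Joe makes shows up directly here: the best BLOCK for a Core 2 with its 32 KB L1 data cache is generally not the best BLOCK for an older CPU with a smaller or differently organized cache.

    #include <stddef.h>

    #ifndef BLOCK
    #define BLOCK 64   /* tile edge in elements; tune to the target CPU's L1 */
    #endif

    /* out[j*n+i] = in[i*n+j], done tile by tile so the few rows of both
       arrays that a tile touches can stay resident in L1 while reused. */
    void transpose_blocked(float *out, const float *in, size_t n)
    {
        for (size_t ii = 0; ii < n; ii += BLOCK)
            for (size_t jj = 0; jj < n; jj += BLOCK)
                for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                    for (size_t j = jj; j < jj + BLOCK && j < n; j++)
                        out[j * n + i] = in[i * n + j];
    }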