Forum > Windows

optimized sources


_heinz:
Hi Jason,
please read Next Generation Intel Microarchitecture with Intel QuickPath Architecture; it is a revolution and a step forward to the next generation.  :o
Under these circumstances, is it better to wait for it and meanwhile use a cheaper solution as described before?
Please give feedback once you have read it.

heinz

Jason G:
That pretty much describes the reasons I chose to settle for a Wolfdale right now:
  - I needed a fairly inexpensive upgrade (well, it was cheap at the time; prices have since gone too high here)
  - My other jobs require me to support development of SSE4.1 functionality
  - My current system has limited power & cooling headroom (45 °C desert heat, one 10 A power circuit @ 240 VAC for everything)

 So the dual core suits my needs perfectly for now, but I recognise the software infrastructure required to make better use of parallelism is improving immensely, through both fine-grained and thread-level approaches. So my next upgrade will probably be either Nehalem or the 32 nm variant after Nehalem, depending on what my work demands. I saw the name of that next generation but don't remember it...

Gecko_R7:

--- Quote from: Jason G on 23 Mar 2008, 08:09:19 am ---
--- Quote from: seti_britta on 22 Mar 2008, 10:32:29 pm ---Sometimes I'm not sure what to do: SkullTrail yes/no with an E5405 (178 Euro),
or better to wait for Nehalem,
or meanwhile a cheaper solution: an XFX GeForce 7150/MCP630i board (70 Euro, no graphics card necessary) with an Intel Core2 Quad Q9450 4x2.67 GHz BOX (300 Euro), plus case, RAM, disk... all together ca. 680 Euro for the hardware + XP Professional (130 Euro) on the software side, for testing our parallel stuff...


--- End quote ---
Honestly Heinz, I'd say it'd be difficult to go past the Q6600 at the moment. I'd guess the Yorkfields are being held off till the stock of those clears a bit; then the Yorkfields will be awesome [if this Wolfdale is anything to go by]. I get the feeling that the Nehalem architecture will be a fairly radical departure from what we're used to, and it may take some time for the software to follow. Perhaps something like the OpenMP standard gives some insight there: many cores with shared memory.

Jason


--- End quote ---

I think Nehalem will be a quite expensive upgrade/transition for a while once all costs are factored in.
It's LGA1366, so factor in a brand-new mobo.
Also, DDR3 is almost a given to take advantage of the new arch.

Likely to be massive price gouging and very little supply initially... limited mobo options and buggy release BIOSes. Wouldn't be surprised if it's at least Q2 '09 before we see decent pricing and availability for us mere mortals who have budgets to consider.
We should also see some nice price drops on Penryn/Yorkfield, and perhaps a new stepping, as Nehalem is released.

Hard to argue against the current value and mobo selection of C2D & quad-core chips.
Pretty cheap $$$/performance ratio.

Cheers!

Jason G:

--- Quote from: Gecko_R7 on 23 Mar 2008, 12:20:27 pm ---Likely to be massive price gouging and very little supply initially... limited mobo options and buggy release BIOSes. Wouldn't be surprised if it's at least Q2 '09 before we see decent pricing and availability for us mere mortals who have budgets to consider.
We should also see some nice price drops on Penryn/Yorkfield, and perhaps a new stepping, as Nehalem is released.

--- End quote ---

Well, priorities have a way of shifting depending on need.  As you point out, P4s & AMDs of SSE2 vintage are still extremely popular according to boincstats, and dominate throughput in many respects.

What the tests seem to be showing is that Alex pretty well nailed the Core2 code, and unless we decide to tackle the other end there may be little left to do there for now (Unless, that is,  some of the relaxed validation requirements that have been spoken about are put in place, then the parallelism race may be back on in force). 

Early P4s have special cache-related characteristics that aren't necessarily all that happy with techniques used in builds targeted at the Core2 architecture.  There are speed improvements showing in the P4 (SSE3) I tested, but not as great as the Core2 improvements.  There might be plenty of room to tweak that, and the SSE3 instructions may as well be macro-encapsulated while we're there, allowing SSE2 substitution.
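The macro-encapsulation idea might look something like this in C (a sketch only; HADD_PS and the surrounding structure are hypothetical names, not the project's actual sources):

```c
/* Hide the SSE3 horizontal add behind a macro so a P4/SSE2-only build
   can substitute an equivalent shuffle+add sequence. An SSE3 build
   (compiled with -DUSE_SSE3 -msse3) gets the single haddps instead. */
#include <emmintrin.h>              /* SSE2 intrinsics */

#ifdef USE_SSE3
#include <pmmintrin.h>              /* SSE3 intrinsics */
#define HADD_PS(a, b) _mm_hadd_ps((a), (b))
#else
/* SSE2 substitute: gather the even and odd lanes of both inputs, then
   add them, yielding {a0+a1, a2+a3, b0+b1, b2+b3} just like haddps. */
static inline __m128 hadd_ps_sse2(__m128 a, __m128 b)
{
    __m128 even = _mm_shuffle_ps(a, b, _MM_SHUFFLE(2, 0, 2, 0));
    __m128 odd  = _mm_shuffle_ps(a, b, _MM_SHUFFLE(3, 1, 3, 1));
    return _mm_add_ps(even, odd);
}
#define HADD_PS(a, b) hadd_ps_sse2((a), (b))
#endif
```

Call sites then use HADD_PS everywhere and never mention the instruction set, so the same source serves both targets.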

There is, though, possibly still quite a bit more opportunity to squeeze performance from the Core2 build first.  We have spoken about profile-guided optimisations, which haven't been touched yet; in fact, no profiles have even been run yet to identify possible bottlenecks or problems with the build.  That is why, in my book, it is still considered pre-alpha.  Valid results are one thing, but releasing substandard builds I'd rather leave to the software companies, who have the excuse of pressure from the marketing department.

Jason

Gecko_R7:

--- Quote from: Jason G on 23 Mar 2008, 01:17:34 pm ---
Well, priorities have a way of shifting depending on need. As you point out, P4s & AMDs of SSE2 vintage are still extremely popular according to boincstats, and dominate throughput in many respects.

What the tests seem to be showing is that Alex pretty well nailed the Core2 code, and unless we decide to tackle the other end there may be little left to do there for now (Unless, that is, some of the relaxed validation requirements that have been spoken about are put in place, then the parallelism race may be back on in force).

Early P4s have special cache-related characteristics that aren't necessarily all that happy with techniques used in builds targeted at the Core2 architecture. There are speed improvements showing in the P4 (SSE3) I tested, but not as great as the Core2 improvements. There might be plenty of room to tweak that, and the SSE3 instructions may as well be macro-encapsulated while we're there, allowing SSE2 substitution.

There is, though, possibly still quite a bit more opportunity to squeeze performance from the Core2 build first. We have spoken about profile-guided optimisations, which haven't been touched yet; in fact, no profiles have even been run yet to identify possible bottlenecks or problems with the build. That is why, in my book, it is still considered pre-alpha. Valid results are one thing, but releasing substandard builds I'd rather leave to the software companies, who have the excuse of pressure from the marketing department.

Jason

--- End quote ---

Did you intend this response to be attached to the other thread?
