
GTX 460 superclocked


Jason G:

--- Quote from: TouchuvGrey on 22 Aug 2010, 07:51:42 am ---Can someone explain to me (in small words, please)
just what I'm seeing and what it means?
--- End quote ---
 


In a nutshell, the geometry listed for the 460 works out to around twice the computing capacity of the GTS 250.  Once the drivers mature and we get a bit more Fermi-specific development going, the numbers should reflect that.

For more detail:
The GTS 250 has 16 multiprocessors of 8 shaders each.  With a warp size of 32 threads, a kernel will generally execute one block per multiprocessor, each multiprocessor having 8192 registers (which, just as in CPU land, are faster storage for computation than memory).

The 460's listing shows 32 shaders per multiprocessor but fewer multiprocessors.  That's not an error: each multiprocessor actually has 48 shaders, but the architecture uses the extra 16 for superscalar execution, a hardware optimisation, so you can treat it as 32 with things running ~50% faster at the same clock.  It also has 32k registers per block, roughly 4x the 250, and with a maximum of 1024 threads per block the multiprocessors can do about twice as much at once.  All told, that should make the 460 roughly equivalent to a GTX 285 in throughput (or 2 x GTS 250s).
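
To see where those numbers come from, a minimal sketch using the standard CUDA runtime query would be something like this (device 0 assumed, error checking omitted):

--- Code: ---
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   /* query the first CUDA device */

    printf("Name:                  %s\n", prop.name);
    printf("Multiprocessors:       %d\n", prop.multiProcessorCount);
    printf("Warp size:             %d\n", prop.warpSize);
    printf("Registers per block:   %d\n", prop.regsPerBlock);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Clock rate:            %d kHz\n", prop.clockRate);
    return 0;
}
--- End code ---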

Jason

Raistmer:
The warp size remains the same, so the number of truly simultaneous threads does too.
Also, why such a big downgrade in frequency?

Claggy:

--- Quote from: Raistmer on 22 Aug 2010, 10:04:16 am ---The warp size remains the same, so the number of truly simultaneous threads does too.
Also, why such a big downgrade in frequency?


--- End quote ---
Perhaps the GTX 460 is reporting GPU clock speed instead of shader clock speed?

Claggy

Raistmer:

--- Quote ---Perhaps the GTX 460 is reporting GPU clock speed instead of shader clock speed?

Claggy

--- End quote ---
Looks like it, at least judging from my experience with other NV and ATI cards.
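
One way to check which clock a tool is showing: as far as I know, the CUDA runtime's clockRate field is the shader (SM) clock in kHz, so a small sketch like this can be compared against what GPU-Z or the driver reports:

--- Code: ---
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    /* clockRate is reported in kHz and should be the shader (SM)
       clock, not the core/GPU clock that some tools display */
    printf("CUDA-reported clock: %.0f MHz\n", prop.clockRate / 1000.0);
    return 0;
}
--- End code ---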

Ghost0210:
Looks like it's an issue with both the 460 and the 465, then.
GPU-Z and Caps GPU Viewer report the following for me:

Core = 608 MHz
Memory = 802 MHz
Shader = 1215 MHz

But the nVidia control panel reads it as:

Core = 607 MHz
Memory = 1604 MHz
Shader = 1215 MHz

When using GPU-Z etc., I usually just double the memory clock value to get the true figure.
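
That doubling would be the DDR data-rate convention: 802 MHz physical clock x 2 = 1604 MHz, which matches the control panel figure above.  A trivial sketch of the conversion (the 2x factor is my assumption about how these two tools differ; GDDR5 marketing figures sometimes use 4x):

--- Code: ---
#include <stdio.h>

int main(void)
{
    const double physical_mhz = 802.0;  /* memory clock as shown by GPU-Z      */
    const double ddr_factor   = 2.0;    /* assumed doubling for the data rate  */

    /* 802 x 2 = 1604, matching the nVidia control panel figure */
    printf("Effective memory clock: %.0f MHz\n", physical_mhz * ddr_factor);
    return 0;
}
--- End code ---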
