Seti@Home optimized science apps and information
Optimized Seti@Home apps => Discussion Forum => Topic started by: Vyper on 09 Nov 2006, 04:22:49 am
-
With the release of their newest architecture, Nvidia is launching an API called CUDA, with C compilers that let developers run their code on the graphics card to exploit its massive computing power and parallelism..
Look below, fellow optimisers:
http://www.anandtech.com/video/showdoc.aspx?i=2870&p=8
Kind Regards Vyper
-
That looks to be a much more workable approach than what ATI has offered until now, especially due to the separation of the GPU and the general-purpose processing parts. Not having to rely on shaders and workarounds is a good thing.
Since the new nVidia generation is organized into 16 blocks of 16 stream processors, it should be a parallel computation monster (each stream processor runs at 1.3+ GHz, with a unified cache shared by the 16 sub-processors).
Still, the cost of getting one is pretty high at this point. It will be interesting to see how much work it takes to parallelize the computation - as I see it now, it all happens serially instead.
Regards,
Simon.
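For anyone curious what CUDA code actually looks like, here is a minimal sketch of the data-parallel model it exposes (the kernel, array size, and launch configuration are purely illustrative - this is not SETI@Home code):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Each thread handles one array element independently - this is the
// kind of parallelism the stream processors are built for.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1024;
    float host[1024], *dev;
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // 16 blocks of 64 threads cover all 1024 elements at once.
    scale<<<16, 64>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[10] = %f\n", host[10]);
    return 0;
}
```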
-
I'm on the verge of getting one real soon; I have already sold my graphics card..
Want.. need.. greed :)
//Vyper
-
I think it's serialized too, but if it were possible for one executable to send out 128 instances of the S@H application, matching the shader count, that would be tremendous parallel execution. If it were possible, it would be such fun to see 128 S@H tasks progressing, albeit slowly, all at once :o ..
//Vyper
-
Hi ;)
Do you think you could write a client for this?
NVIDIA-CUDA
http://developer.nvidia.com/object/cuda.html
Maxxx
www.SETIAustria.at
-
Not me at the moment, but maybe someone will donate a GTX 8800 for testing? ;)
That would be necessary to develop something like this, and it costs a fair bit of money....
Otherwise, nice project - someone who wants to try it will surely turn up if you sponsor the hardware. Serious offers only...
Regards,
Simon.
-
Well ;)
Well, donating isn't an option unfortunately, but it's already ordered (QX6700 + ASUS EN8800GTX).
One thing would really interest me though: whether it's possible at all.
Regards, Markus
www.SETIAustria.at
-
God damn, I can't understand Greek.. Please try to write in English.. ::)
http://setiathome.berkeley.edu/forum_thread.php?id=35485
-
It should work, though only with considerable effort (parallelizing currently sequential code), i.e. starting 16 or 128 threads, depending on what such a mini compute unit can do.
Regards,
Simon
P.S. It ain't greek, it's Austrian! ;) To translate, use Google and select "german to english".
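Simon's idea above - turning currently sequential code into 16 or 128 parallel threads - could be sketched roughly like this (a hypothetical kernel, not actual S@H code; the squared-sum workload is just a stand-in for the real analysis):

```cuda
// Serial version:   for (i = 0; i < n; i++) sum += x[i] * x[i];
// Parallel version: each thread reduces its own strided slice of the
// data, and the host adds up the (e.g. 128) partial sums afterwards.
__global__ void partial_sums(const float *x, float *out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int nthreads = gridDim.x * blockDim.x;   // e.g. 128 threads total

    float s = 0.0f;
    for (int i = tid; i < n; i += nthreads)  // this thread's slice
        s += x[i] * x[i];                    // stand-in for the real work
    out[tid] = s;                            // one partial sum per thread
}
```

Launched as `partial_sums<<<1, 128>>>(dev_x, dev_out, n)`, this spreads one loop across 128 threads - essentially the "128 S@H at once" picture from earlier in the thread, only inside a single application.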
-
God damn, I can't understand Greek.. Please try to write in English.. ::)
Sorry! I only asked: "Is it possible to make a client for the new nVidia GPUs?"
Regards, Maxxx
-
With offers (from you) of lots of money.... sure! ;D
-
Well, the "mini compute unit" has 681 million transistors.
So not that mini.
What I'm wondering is: is the PCI-E bandwidth sufficient?
Regards
P.S. Where are you from, roughly? ;)
-
It should work, though only with considerable effort (parallelizing currently sequential code), i.e. starting 16 or 128 threads, depending on what such a mini compute unit can do.
Regards,
Simon
P.S. It ain't greek, it's Austrian! ;) To translate, use Google and select "german to english".
Hehe, I know, I was just teasing :D
//Vyper
-
P.S. It ain't greek, it's Austrian! ;) To translate, use Google and select "german to english".
I asked the little Babel fish for help. The poor little fish broke its last milk teeth on that colloquial language :(
I only understood the translation because I know a bit of German. Otherwise it would have been all Greek to me ;D
Peter
-
Has anyone got any kind of response from nvidia??
I applied to be part of their CUDA program but haven't got any response at all in over a week!
Kind Regards Vyper
-
Take a look at this thread (http://setiathome.berkeley.edu/forum_thread.php?id=35704).
The main limitation of CUDA is that Nvidia requires developers to sign an NDA, so information about CUDA cannot be discussed in public.