Forum > GPU crunching

CUDA_V12_app


Pappa:

--- Quote from: sunu on 13 Jan 2010, 07:21:03 pm ---In a multi-GPU, multi core system there could be two possibilities: 1) all GPU tasks assigned to a single core,  2) every GPU gets its own core. Have you considered these regarding affinity?

--- End quote ---

It has been studied many times and can work to the advantage of the system administrator. Over the years there have been many utilities you could download that would "lock on launch" an application to a specific CPU. So while CPU 0 is handling Ring 0 stuff, launch the database compression on CPU 1 with an affinity lock. So in this first experiment Raistmer's intent was that on machines with multiple cores and multiple GPUs, the GPU tasks would be assigned to a specific core. Looking at Boinc and how things are run, the apps migrate back and forth depending on what "might happen" at any given instant. With the advent of multicores, Crunch3r attempted to convince the Boinc Devs that "affinity locking" might be a very good idea. He even produced a Boinc core or two to prove it.

So to actually fully prove it, all Lunatics apps would have to be CPU/core aware (and a table set up to read what is where). Core 0 takes care of the OS and the other stuff the user plays with. The other cores, real or virtual, are then assigned to a specific app, so they run clean and uninterrupted.

In *nix you can assign certain things to CPUs, and that has been there for ages... That was done on a dual Pentium Pro under Slackware (back then it was only 19 1.44 MB floppies to load).
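The same "lock on launch" trick is easy to demonstrate today. A minimal sketch, assuming Linux and Python 3, pinning the calling process to one core through the kernel's `sched_setaffinity` interface (the same thing the `taskset` utility wraps):

```python
# Minimal sketch, assuming Linux + Python 3: pin the calling process to a
# single core, the same kernel interface that `taskset` uses underneath.
import os

allowed = os.sched_getaffinity(0)   # cores this process may currently run on
core = max(allowed)                 # pick the highest, leaving core 0 to the OS
os.sched_setaffinity(0, {core})     # "lock on launch" for this process

print(os.sched_getaffinity(0))      # now a one-element set containing `core`
```

Passing a real PID instead of 0 pins another process the same way, provided you own it (or are root).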



sunu:
Thanks Pappa, I know all these, more or less. I'm asking how Raistmer implemented the affinity in this app: are all CUDA tasks pinned to one fixed core, or distributed among all available?

Pappa:

--- Quote from: sunu on 13 Jan 2010, 09:07:01 pm ---Thanks Pappa, I know all these, more or less. I'm asking how Raistmer implemented the affinity in this app: are all CUDA tasks pinned to one fixed core, or distributed among all available?

--- End quote ---

From what I read, all GPUs would use one core. But that was a few pages ago...

We wait for Raistmer   ;D

For nostalgia, it is team member #5 before it went to Nix

http://seticlassic.ssl.berkeley.edu/stats/team/team_57956.html

boy how things change

sunu:

--- Quote from: Pappa on 13 Jan 2010, 09:24:38 pm ---From what I read, all GPUs would use one core. But that was a few pages ago...
We wait for Raistmer   ;D

--- End quote ---
Theoretically speaking, I think it would be better to distribute them among all cores, or not?   :-\
I'll try to do something similar in Linux with a script.
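For what it's worth, the two policies being debated are only a few lines apart. A hypothetical Python sketch, assuming Linux; `task_pids` would come from finding the running CUDA app processes, which is left out here:

```python
# Sketch of the two affinity policies under discussion, assuming Linux.
# task_pids is a placeholder for the PIDs of the running CUDA tasks;
# changing another process's affinity requires owning it (or root).
import os

def pin_all_to_one_core(task_pids, core=1):
    """Policy 1: every GPU task shares a single fixed core (core 0 left to the OS)."""
    for pid in task_pids:
        os.sched_setaffinity(pid, {core})

def pin_round_robin(task_pids, cores=None):
    """Policy 2: distribute GPU tasks round-robin among the available cores."""
    if cores is None:
        cores = sorted(os.sched_getaffinity(0))
    for i, pid in enumerate(task_pids):
        os.sched_setaffinity(pid, {cores[i % len(cores)]})
```

With one task per core there is no contention between the GPU feeder threads; with all tasks on one core, the remaining cores stay fully free for CPU work.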


--- Quote from: Pappa on 13 Jan 2010, 09:24:38 pm ---For nostalgia, it is team member #5 before it went to Nix
http://seticlassic.ssl.berkeley.edu/stats/team/team_57956.html
boy how things change

--- End quote ---

Oh, memories of seti classic! :)

Pappa:

--- Quote from: sunu on 13 Jan 2010, 10:22:39 pm ---
--- Quote from: Pappa on 13 Jan 2010, 09:24:38 pm ---From what I read, all GPUs would use one core. But that was a few pages ago...
We wait for Raistmer   ;D

--- End quote ---
Theoretically speaking, I think it would be better to distribute them among all cores, or not?   :-\
I'll try to do something similar in linux through script.


--- Quote from: Pappa on 13 Jan 2010, 09:24:38 pm ---For nostalgia, it is team member #5 before it went to Nix
http://seticlassic.ssl.berkeley.edu/stats/team/team_57956.html
boy how things change

--- End quote ---

Oh, memories of seti classic! :)

--- End quote ---

Actually, I think that part is what is still being worked out. Silly me.  :o
