Do you run V12b? On a single GPU? In any case, it's for multi-GPU, and even there the advantages are still questionable.
In a multi-GPU, multi-core system there are two possibilities: 1) all GPU tasks assigned to a single core, or 2) every GPU gets its own core. Have you considered these with regard to affinity?
Thanks Pappa, I know all this, more or less. I'm asking how Raistmer implemented affinity in this app: are all CUDA tasks fixed to one core, or distributed among all available cores?
From what I read, all GPUs would use one core. But that was a few pages ago... We wait for Raistmer.
For nostalgia: it is team member #5 before it went to *nix: http://seticlassic.ssl.berkeley.edu/stats/team/team_57956.html Boy, how things change.
Quote from: Pappa on 13 Jan 2010, 09:24:38 pm
From what I read, all GPUs would use one core. But that was a few pages ago... We wait for Raistmer.

Theoretically speaking, I think it would be better to distribute them among all cores, or not? I'll try to do something similar in Linux through a script.

Quote from: Pappa on 13 Jan 2010, 09:24:38 pm
For nostalgia: it is team member #5 before it went to *nix: http://seticlassic.ssl.berkeley.edu/stats/team/team_57956.html Boy, how things change.

Oh, memories of SETI classic!
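For the Linux side, distributing tasks among cores can be scripted with the standard library's `os.sched_setaffinity` (or with `taskset` from a shell script). This is only a sketch under the assumption of a Linux kernel; the list of task PIDs is hypothetical, and on a real system you would first look up the PIDs of the running CUDA tasks:

```python
# Sketch: pin each task to its own core, round-robin over the cores
# available to us. Linux-only (os.sched_setaffinity is not portable).
import os

def pin_to_core(pid, core):
    """Bind process `pid` to a single CPU core (pid 0 = current process)."""
    os.sched_setaffinity(pid, {core})
    return os.sched_getaffinity(pid)

def distribute(pids):
    """Give each task its own core, wrapping around if tasks > cores."""
    cores = sorted(os.sched_getaffinity(0))
    return {pid: pin_to_core(pid, cores[i % len(cores)])
            for i, pid in enumerate(pids)}

if __name__ == "__main__":
    # Demo on the current process only (pid 0), so it is safe to run;
    # replace [0] with the real CUDA task PIDs on an actual system.
    first_core = sorted(os.sched_getaffinity(0))[0]
    print(pin_to_core(0, first_core))
```

Whether the pinned layout actually beats letting the scheduler move tasks around is exactly the open question in this thread, so any such script should be compared against an unpinned baseline.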
Yes, the slot numbers should go. But it's worth checking whether affinity really increases throughput. I've tried affinity locking a few times already and have always seen performance degradation for the app. Only in some special circumstances, such as an app feeding the GPU, could it bring some benefit.
Quote from: Raistmer on 14 Jan 2010, 10:58:56 am
Yes, the slot numbers should go. But it's worth checking whether affinity really increases throughput. I've tried affinity locking a few times already and have always seen performance degradation for the app. Only in some special circumstances, such as an app feeding the GPU, could it bring some benefit.

The logical thing would be that four AK_v8 tasks fixed to their respective cores for their lifetime would do better than having them jump around the cores every few seconds. You say that experiments have shown the opposite?