CPU <-> GPU rebranding
Geek@Play:
While I agree with Mark and Joe on the commitment to crunch all work assigned to the client computers, in reality it is unrealistic to expect that commitment to be met.
Using the Rescheduler tool to move the VLAR work to the CPUs very quickly overwhelms them. There are days when two or three times more work is moved than the CPUs can crunch in a day. The CPUs fall behind, their work cache grows ever larger, and eventually work starts being crunched too late to make the deadlines.
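To put rough numbers on it (the rates below are made up, purely to illustrate how the backlog grows, not measured from my machines):

--- Code ---
# Sketch of the CPU backlog when VLARs are rescheduled from GPU to CPU
# faster than the CPUs can clear them. All rates are assumed, for
# illustration only.

cpu_capacity_per_day = 20   # VLAR tasks the CPUs can finish per day (assumed)
vlar_inflow_per_day = 50    # VLAR tasks Rescheduler moves over per day (assumed)

backlog = 0
for day in range(1, 8):
    backlog += vlar_inflow_per_day - cpu_capacity_per_day
    print(f"day {day}: backlog = {backlog} tasks")

# With inflow at 2-3x capacity the backlog grows by dozens of tasks per
# day, so sooner or later results come back after their deadlines.
--- End code ---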
I have therefore stopped using the Rescheduler tool to move the VLAR work to the CPUs. The CPUs can crunch the work assigned to them, including the VLAR work. I have also switched to the "killer" app for the GPUs, so the VLAR work assigned to the GPUs will error out.
I do not care for this procedure, as I feel that I am cherry-picking the work. But in fact this is the only way the client computers can run without constant attention.
If there is a better solution, I am open to suggestions.
sunu:
Yes, if you have a high-output machine, that is, one with many GPU cores, the number of VLARs gets out of hand very quickly, and your CPU can't hope to finish them, not even after a month. If you use that machine and you care about its responsiveness, you have no option other than to abort them.
Well, if you're really determined to crunch them all, you could wait that month and find at the end that you really can't do them all, and you'd have to ditch them after sitting on them for so long. And that would be worse.
SciManStev:
I am still on a learning curve here, so this may have been discussed before. Since GPUs are not well suited for VLARs, is there a way for S@H to brand them for CPU use only? That might help to balance the total workload. I recently depleted my cache to prepare for getting my GTX 480s crunching, and it took a couple of days to run out of GPU work, and close to two weeks to run out of CPU work. I attributed this to the VLARs being rebranded for the CPU. If they had come branded for CPUs in the first place, the load would have been more balanced.
Vyper:
--- Quote from: SciManStev on 29 May 2010, 08:55:40 am ---is there a way for S@H to brand them for CPU use only? That might help to balance the total workload.
--- End quote ---
This has been asked before, and for the moment the S@H servers don't do an AR analysis on their side before sending a WU to a GPU host.
For the moment there are only two solutions: use Rescheduler, or, on older cards (the 2xx series and below), use a prepared VLAR-kill .exe that checks the WU and, if it's a VLAR, kills it instantly.
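Roughly, all such a tool has to do is read the angle range out of the WU header and bail out early when it is below the VLAR cutoff. A minimal sketch in Python of that check (the 0.12 cutoff and the header layout are my assumptions; the real killer app is a compiled .exe, not this script):

--- Code ---
import re
import sys

# Assumed VLAR cutoff; a figure of around 0.12 is commonly quoted for
# multibeam tasks, but treat it as an assumption here.
VLAR_CUTOFF = 0.12

def true_angle_range(wu_path):
    """Pull <true_angle_range> out of the WU header (assumed layout)."""
    with open(wu_path, "r", errors="ignore") as f:
        m = re.search(r"<true_angle_range>\s*([0-9.eE+-]+)", f.read())
    return float(m.group(1)) if m else None

ar = true_angle_range(sys.argv[1])
if ar is not None and ar < VLAR_CUTOFF:
    print(f"AR {ar:.4f} < {VLAR_CUTOFF}: VLAR, erroring out immediately")
    sys.exit(1)  # a real killer app exits with an error the client reports
print(f"AR {ar}: not a VLAR, crunch normally")
--- End code ---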
Regards, Vyper
Gecko_R7:
--- Quote from: SciManStev on 29 May 2010, 08:55:40 am ---.... Since GPUs are not well suited for VLARs, is there a way for S@H to brand them for CPU use only? That might help to balance the total workload. I recently depleted my cache to prepare for getting my GTX 480s crunching, and it took a couple of days to run out of GPU work, and close to two weeks to run out of CPU work. I attributed this to the VLARs being rebranded for the CPU. If they had come branded for CPUs in the first place, the load would have been more balanced.
--- End quote ---
I'll wade in to answer your question, Steve, but I hope the devs will correct me if I'm mistaken.
The project doesn't steer WUs toward specific hardware; a WU can be assigned to any host with a compatible OS and CPU and/or GPU.
The WU could be assigned to an Atom running Linux, an 800 MHz PowerPC chip running OS X, or a 9400GT GPU.
From the project's perspective, there is such a wide range of performance among all the eligible hosts out there that a WU could just as easily be assigned to a Pentium III as to a GTX 285. Whether fast or slow, if the host falls into a supported HW/SW platform and can complete and validate by the deadline, the project considers it acceptable.
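In other words, the assignment decision is (very loosely) something like the check below. This is my own sketch; the class names, fields, and numbers are hypothetical, not Berkeley's actual scheduler code:

--- Code ---
from dataclasses import dataclass

@dataclass
class Host:
    supported_platforms: set   # e.g. {"windows_intelx86"} (hypothetical)
    flops_per_second: float    # host speed, Atom and GTX 285 alike

@dataclass
class WorkUnit:
    platform: str
    estimated_flops: float
    seconds_until_deadline: float

def can_assign(host: Host, wu: WorkUnit) -> bool:
    # 1. Platform check: any compatible OS + CPU and/or GPU qualifies.
    if wu.platform not in host.supported_platforms:
        return False
    # 2. Deadline check: however fast or slow the host, the estimated
    #    runtime just has to fit before the report deadline.
    return wu.estimated_flops / host.flops_per_second <= wu.seconds_until_deadline

# Note there is no angle-range test anywhere, which is why a VLAR can
# land on a GPU host just as easily as on a CPU host.
slow_cpu = Host({"windows_intelx86"}, 2e9)
wu = WorkUnit("windows_intelx86", 4e13, 14 * 86400)
print(can_assign(slow_cpu, wu))  # True: ~5.5 hours of work, two weeks to do it
--- End code ---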
Steering VLARs specifically to CPUs would essentially require an S@H GPU subproject with a separate application, so you could run both the S@H CPU and S@H GPU applications at the same time. However, this would significantly complicate the back-end project requirements and the maintenance burden on Berkeley.
For a second project, the efficiency/production gain would have to justify the additional expense and resources required at the project level, both of which are already barely enough to support the current effort.