Why play with your cache levels? Why change from 6 to 10 and then back to 2? Pick a cache level and leave it there. I'm using 10 days; if you were using 6 days, that's fine too. A 6-day cache means that the workunits you download now will be crunched in about 6 days, so you have 6 days to check for VLARs. There's no need to run that script x times per hour or x times per day.
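For anyone wondering what such a script actually has to look at: below is a minimal sketch, not any of the real rescheduler scripts, that walks the SETI@home project directory and flags workunits whose angle range looks VLAR-ish. The data path, the 0.13 cutoff and the assumption that the workunit header carries a <true_angle_range> tag are all guesses about your install, so adjust them as needed.

Code:
#!/usr/bin/env python3
# Minimal VLAR-scan sketch. ASSUMPTIONS: project directory path, the 0.13
# angle-range cutoff, and <true_angle_range> appearing in the workunit header.
import os
import re

PROJECT_DIR = os.path.expanduser(
    "~/BOINC/projects/setiathome.berkeley.edu")  # assumed location
VLAR_CUTOFF = 0.13                               # assumed threshold

angle_re = re.compile(r"<true_angle_range>\s*([0-9.eE+-]+)")

for name in sorted(os.listdir(PROJECT_DIR)):
    path = os.path.join(PROJECT_DIR, name)
    if not os.path.isfile(path):
        continue
    try:
        with open(path, "r", errors="ignore") as fh:
            head = fh.read(16384)        # the workunit header sits at the top
    except OSError:
        continue
    match = angle_re.search(head)
    if match and float(match.group(1)) < VLAR_CUTOFF:
        print("possible VLAR: %s (ar=%s)" % (name, match.group(1)))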
Back in the days when cuda needed a whole core, I was running a 3+1 config on my quad core. All processes had the lowest priority (19) and I don't think I had any serious slowdown, maybe a minute or so, not more. And this was my everyday desktop, so many things were running: firefox with many, many tabs, full 3D compiz effects, daily backups, etc. Only now that cuda shares a core with the other seti@home tasks have I started renicing the cuda tasks, just enough to make them higher priority than the other seti@home instances. I don't think -5 is necessary.
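If you want to renice things the same way without chasing PIDs by hand, a small sketch along the lines below works. The process-name substrings and the nice values (10 for the cuda app, 19 for the CPU app) are only assumptions based on the setup discussed in this thread, and note that lowering an already-set nice value generally needs root.

Code:
#!/usr/bin/env python3
# Renice sketch: give the cuda app a better nice value than the CPU apps.
# ASSUMPTIONS: the process-name substrings and nice values below.
import os

TARGETS = {
    "setiathome-CUDA": 10,   # cuda app: above the CPU instances
    "AK_V8_linux64": 19,     # optimized CPU app: lowest priority
}

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open("/proc/%s/comm" % pid) as fh:
            comm = fh.read().strip()
    except OSError:
        continue
    for name, nice in TARGETS.items():
        if name in comm:
            try:
                os.setpriority(os.PRIO_PROCESS, int(pid), nice)
                print("reniced %s (pid %s) to nice %d" % (comm, pid, nice))
            except OSError:
                print("could not renice pid %s (permissions?)" % pid)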
Quote from: sunu on 18 Aug 2009, 02:02:43 pm
Why play with your cache levels? Why change from 6 to 10 and then back to 2? Pick a cache level and leave it there. I'm using 10 days.
Ahh yes, except for one thing: I have seen even this new version of BOINC obey the due dates and pick the next workunit from among the newly downloaded ones. If they stayed in ascending date order I would agree, but it does not seem to work that way for me. At least 3 or 4 times so far I've noticed a cuda and/or CPU workunit placed on hold to pick up one that was recently downloaded and had a closer due date. This means there is a danger the gpu app will reject a possible VLAR before it can be flagged.
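For what it's worth, you can watch that reordering coming by listing the cached tasks by deadline. Here is a rough sketch that pulls them straight out of client_state.xml; the data-directory path is a guess for a stock Linux package install, and the tag names assume the usual <result>/<name>/<report_deadline> layout.

Code:
#!/usr/bin/env python3
# List cached results sorted by report deadline, so freshly downloaded
# short-deadline tasks that BOINC will jump to are easy to spot.
# ASSUMPTION: the client_state.xml path below.
import re
import time

CLIENT_STATE = "/var/lib/boinc-client/client_state.xml"  # assumed path

with open(CLIENT_STATE, errors="ignore") as fh:
    text = fh.read()

tasks = []
for block in re.findall(r"<result>(.*?)</result>", text, re.S):
    name = re.search(r"<name>(.*?)</name>", block)
    deadline = re.search(r"<report_deadline>([\d.]+)</report_deadline>", block)
    if name and deadline:
        tasks.append((float(deadline.group(1)), name.group(1)))

for deadline, name in sorted(tasks):
    print(time.strftime("%d %b %Y %H:%M", time.localtime(deadline)), name)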
Quote from: riofl on 18 Aug 2009, 09:11:47 pm
Ahh yes, except for one thing: I have seen even this new version of BOINC obey the due dates and pick the next workunit from among the newly downloaded ones. If they stayed in ascending date order I would agree, but it does not seem to work that way for me. At least 3 or 4 times so far I've noticed a cuda and/or CPU workunit placed on hold to pick up one that was recently downloaded and had a closer due date. This means there is a danger the gpu app will reject a possible VLAR before it can be flagged.

This happens only for VHAR workunits; they have shorter deadlines than the rest. VLARs have "normal" deadlines and are crunched when their time comes, about x (cache) days after they've been downloaded.

Macros, what pp says. Make sure you're using cuda 2.2 or later together with a compatible nvidia driver.

Shameless plug: I've reached #4 in the top hosts list. I don't know how long I can hold on there, though. Attaching a pdf as future proof.
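On the "compatible nvidia driver" point, a quick Linux-only way to see which driver is actually loaded is to read the proprietary driver's /proc entry; whether that driver version supports cuda 2.2 still has to be checked against NVIDIA's release notes.

Code:
#!/usr/bin/env python3
# Print the loaded NVIDIA driver version, to check it against the cuda 2.2
# requirement mentioned above. Linux with the proprietary driver only.
PROC_VERSION = "/proc/driver/nvidia/version"

try:
    with open(PROC_VERSION) as fh:
        print(fh.read().strip())
except OSError:
    print("no %s -- proprietary nvidia driver not loaded?" % PROC_VERSION)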
Quote from: sunu on 08 Aug 2009, 06:17:55 am
Back in the days when cuda needed a whole core, I was running a 3+1 config on my quad core. All processes had the lowest priority (19) and I don't think I had any serious slowdown, maybe a minute or so, not more. And this was my everyday desktop, so many things were running: firefox with many, many tabs, full 3D compiz effects, daily backups, etc. Only now that cuda shares a core with the other seti@home tasks have I started renicing the cuda tasks, just enough to make them higher priority than the other seti@home instances. I don't think -5 is necessary.

Question regarding this. I am using the default settings in app_info.xml's <app_version> for cuda as follows:

Code:
<avg_ncpus>0.040000</avg_ncpus>
<max_ncpus>0.040000</max_ncpus>

The problem is that the setiathome-CUDA process obviously demands more than that and is able to eat up the CPU time of a whole core. That leaves the other (regular CPU) processes fighting over CPU time, with context switches, cache thrashing, etc. ->

Code:
  PID PR  NI  RES  SHR %CPU    TIME+ COMMAND
15538 39  19  48m 1472  101 13:44.03 AK_V8_linux64_s
15539 39  19  48m 1464  101 20:17.47 AK_V8_linux64_s
15540 39  19  48m 1468  101 20:25.35 AK_V8_linux64_s
15541 39  19  48m 1464   99 19:52.12 AK_V8_linux64_s
15544 39  19  48m 1484   99 20:30.54 AK_V8_linux64_s
15545 30  10 114m  10m   99 18:42.69 setiathome-CUDA
15546 39  19  48m 1488   94 19:55.04 AK_V8_linux64_s
16208 39  19  48m 1488   51 12:22.11 AK_V8_linux64_s
15542 39  19  48m 1472   46 12:14.44 AK_V8_linux64_s

Now to the question: am I doing something wrong, or does cuda simply not behave correctly? Or is this normal, and should I just set avg_ncpus & max_ncpus to 1 and pin the process to some core, making it use that core exclusively?
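One possible route for the last question, sketched rather than prescribed: instead of touching avg_ncpus at all, you can pin the processes from outside with CPU affinity. That won't make the cuda app use less CPU, but it stops the tasks bouncing between cores and thrashing each other's caches. The process-name substrings come from the top output above; the core split (one core for cuda, the rest for the CPU app) is an assumption for a quad core and needs adjusting for other machines.

Code:
#!/usr/bin/env python3
# Affinity sketch: pin setiathome-CUDA to one core and the CPU app instances
# to the remaining cores. ASSUMPTIONS: process-name substrings and the core
# split below. Run as the owner of the BOINC processes or as root.
import os

CUDA_CORES = {3}          # core reserved for the cuda app
CPU_CORES = {0, 1, 2}     # cores left for the optimized CPU app

def pin(pid, cores):
    try:
        os.sched_setaffinity(pid, cores)
        print("pinned pid %d to cores %s" % (pid, sorted(cores)))
    except OSError:
        print("could not pin pid %d" % pid)

for pid_str in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open("/proc/%s/comm" % pid_str) as fh:
            comm = fh.read().strip()
    except OSError:
        continue
    if "setiathome-CUDA" in comm:
        pin(int(pid_str), CUDA_CORES)
    elif "AK_V8_linux64" in comm:
        pin(int(pid_str), CPU_CORES)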