Author Topic: maximize rendering resources evenly  (Read 3102 times)


  • Jr. Member
  • **
  • Posts: 9
maximize rendering resources evenly
« on: June 24, 2008, 05:22:21 PM »
We have 13 dual quad-core render servers and 23 clients, but to simplify the case, let's treat it as 13 render servers and 13 clients. I want to define workstation#1 as cluster One in qb.conf and assign it to render server#1, workstation#2 as cluster Two assigned to render server#2, and so on. That way, even when all servers are busy, each user has at least one render server that services their jobs with priority.

But one shortcoming is, if workstation#1 launches a job, it is sent to all render servers, and render server#1 treats it as priority. But if workstation#2 (cluster Two in qb.conf) then sends a job, only render server#2 treats it as priority and starts rendering workstation#2's job; all the other servers still need to finish the job from workstation#1 before starting the job from workstation#2, because the two jobs have the same priority.

My question is, how can I set up the allocation so that when workstation#1 sends out a job, all render servers render it. But when workstation#2 then sends a job, half of the servers stop the job from workstation#1 and start rendering the job from workstation#2. When workstation#3 sends out a job, 1/3 of the render servers render workstation#1's job, 1/3 render workstation#2's job, and 1/3 render workstation#3's, and so on. This would let all users share the farm's resources fairly at the same level, while at the same time keeping all resources fully utilized.
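The even split described above is just a round-robin partition of the servers among the currently active jobs. As a minimal sketch (not a Qube feature; all names here are hypothetical), the target allocation could be computed like this in Python:

```python
def partition_servers(servers, jobs):
    """Partition servers evenly among active jobs, round-robin.

    With N active jobs, each job receives roughly len(servers)/N
    servers -- the 1/N split described above.
    """
    allocation = {job: [] for job in jobs}
    if not jobs:
        return allocation
    for i, server in enumerate(servers):
        allocation[jobs[i % len(jobs)]].append(server)
    return allocation

# 13 render servers, as in the example farm
servers = ["render%02d" % i for i in range(1, 14)]

# One active job: it gets all 13 servers.
alloc_one = partition_servers(servers, ["ws1_job"])

# Two active jobs: the farm splits roughly in half (7 and 6).
alloc_two = partition_servers(servers, ["ws1_job", "ws2_job"])
```

An external script could in principle recompute this partition whenever a job starts or finishes and repurpose workers accordingly, but that is exactly the bookkeeping a fairshare scheduler would otherwise do for you.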

Is there any way to do this allocation automatically?



  • Hero Member
  • *****
  • Posts: 229
Re: maximize rendering resources evenly
« Reply #1 on: June 24, 2008, 05:34:56 PM »
What it sounds like you are trying to do is create what is sometimes called a "fairshare" queuing algorithm. Qube does not provide support for this kind of queue.

If you know how many "shares" you want to divide your farm into, you could divide up your machines into that many clusters. When you submit a job with a cluster specification, it will have priority on the machines in that cluster. If other machines are available they will accept the job, subject to later preemption by jobs that are submitted to those clusters.
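As a concrete illustration of the cluster-per-share idea (hedged: the exact key names may differ between Qube versions, so check your qb.conf reference), the per-worker assignment might look like:

```
# qb.conf on render server #1 -- hypothetical cluster name
worker_cluster = /one

# qb.conf on render server #2
worker_cluster = /two
```

A job submitted with cluster `/one` would then have priority on server #1 but could still spill onto idle workers in other clusters, subject to preemption when those clusters' own jobs arrive, which approximates the sharing behavior the original post asks for.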