PipelineFX Forum

Qube! => SimpleCmds => Topic started by: bijuramavfx on August 21, 2012, 12:47:15 PM

Title: not using more than one worker
Post by: bijuramavfx on August 21, 2012, 12:47:15 PM
Hi there:

I have two workers configured in a group, and when I submit a job to this group, for some reason the job never uses more than one worker. But when I submit another job to the same group, the other worker starts rendering. Are there any parameters I should look into?

Cheers
/Biju
Title: Re: not using more than one worker
Post by: bijuramavfx on August 21, 2012, 02:27:48 PM
I realized that I have to specify the 'cpus' property too :-(. But since I had specified the cluster and the group name, I was expecting the job to use all the workers with host.processors=1+.

Thanks
/Biju
Title: Re: not using more than one worker
Post by: jburk on August 28, 2012, 12:13:30 AM
Qube! has a couple of terminology issues that we're addressing bit by bit.

The "cpus" parameter is a bit confusing, and has been renamed "instance" in the new 6.4 version of the QubeGUI, due out later this week by the end of August.  As you've found, this parameter his affects how many "separate instances" of your job will be running on the farm at one time.

Each instance will reserve one or more "job slots" on a single worker; the number of slots reserved is specified by the "host.processors" reservation string (another old parameter name).  By default, a worker advertises as many slots as it has cores, so a 12-core worker has 12 job slots open when it's not running anything.  With "host.processors=1", that worker can run up to 12 instances of a job that reserves only 1 slot each.  If you had another job with a reservation of "host.processors=4", the worker could run any mix of subjobs from the two jobs, as long as the total number of job slots reserved doesn't exceed 12.
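
To make the slot arithmetic concrete, here's a small worked example (plain Python, no Qube API; the numbers are hypothetical):

Code:
# A 12-core worker advertises 12 job slots by default.
worker_slots = 12

# Job A instances reserve 1 slot each (host.processors=1),
# job B instances reserve 4 slots each (host.processors=4).
slots_a = 1
slots_b = 4

# One possible mix that fills the worker exactly:
# 2 instances of job B plus 4 instances of job A.
used = 2 * slots_b + 4 * slots_a
assert used == worker_slots  # 8 + 4 == 12, so no further instances start here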

When you say "host.processors=1+", you're saying "start on a machine with at least 1 slot free, and reserve all of its free slots".  The job can start on a machine with at least 1 free slot, but no other jobs can run on that worker until it's done.  If you said "host.processors=12" instead, the job wouldn't start until the worker was completely empty.  That might never happen if someone else is also submitting jobs that reserve only 1 slot, so the worker might always have only 10 or 11 free slots, never all 12.
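
In a job dict that reservation goes into the 'reservations' field; another hedged sketch, again assuming the qb Python API:

Code:
import qb

exclusive_job = {
    'name': 'take_whole_worker',
    'prototype': 'cmdline',
    'package': {'cmdline': 'hostname'},
    'cpus': 2,
    # start as soon as 1 slot is free, then reserve every free slot on that worker
    'reservations': 'host.processors=1+',
}
qb.submit([exclusive_job])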

The group parameter means "only start on machines in this host group".

The cluster parameter means "run on the machines in this cluster at an elevated priority".  A machine in cluster /A will run the lowest-priority job that is also in cluster /A before it will run the highest-priority job from another cluster; it's a way to offer hierarchical priority on a farm.
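
Both of those are just fields on the job at submission time; a sketch showing them together (the group name "mygroup" and cluster "/A" are placeholders, and field names follow common qb Python API usage, so verify against your install):

Code:
import qb

job = {
    'name': 'grouped_render',
    'prototype': 'cmdline',
    'package': {'cmdline': 'hostname'},
    'cpus': 2,
    'reservations': 'host.processors=1',
    'groups': 'mygroup',   # only start on workers in host group "mygroup"
    'cluster': '/A',       # elevated priority on workers whose cluster is /A
}
qb.submit([job])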