Qube and Maya (Dynamic Allocation)
First of all, if you're using the Maya jobtype, you don't need to think in terms of rendering in batches to minimize the number of times Maya is started and the scenefile is opened.
One of the cool things Qube does with Maya is that for each subjob, Maya is started only once in batch "prompt" mode, and then we fire MEL commands at Maya's mel: prompt. The first command loads the scenefile. Once the scenefile is loaded, the worker turns around and asks the supervisor for a frame to render; the supervisor sends back just the number of the next unrendered frame. The worker then fires a MEL command at the prompt instructing Maya to essentially move the timeslider (even though there's no UI) to the frame number in question, and once Maya replies that it's done that, the worker tells Maya to render the current frame. When the render's done, Maya tells the worker, and the worker asks the supervisor for another frame to render.
When a worker asks for the next frame but all the frames have been rendered, the supervisor tells the worker to shut Maya down. Only once all the frames are rendered is the scene unloaded and a "quit -f" command fired at the MEL prompt.
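To make that loop concrete, here's a minimal, self-contained Python sketch of the dance. The "supervisor" is faked with an in-memory deque, mel() just prints, and the MEL strings are approximations of what the jobtype actually sends; none of these names are real Qube or Maya API calls.

    from collections import deque

    # Minimal sketch of the Dynamic Allocation loop described above. A stand-in
    # "supervisor" holds the agenda as a deque of frame numbers and deals one
    # frame at a time to the worker loop. mel() just prints here; in the real
    # jobtype it sends the string to Maya's mel: prompt.

    agenda = deque(range(1, 11))                     # pretend this job has frames 1-10

    def next_frame_from_supervisor():
        return agenda.popleft() if agenda else None

    def mel(cmd):
        print("mel:", cmd)

    def run_subjob(scenefile):
        mel('file -force -open "%s";' % scenefile)   # the scene is loaded exactly once
        while True:
            frame = next_frame_from_supervisor()     # ask for the next agenda item
            if frame is None:                        # agenda exhausted: shut Maya down
                mel("quit -f;")
                break
            mel("currentTime %d;" % frame)           # "move the timeslider" to this frame
            mel("render;")                           # render the current frame

    run_subjob("/jobs/shot_010/scene.mb")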
The marketing term for this is "Dynamic Allocation". Frames are dealt like cards to workers when they ask for the next frame.
So you get all the benefit of rendering in chunks (since Maya is started only once, and the scenefile and textures are loaded only once), but you don't have to decide in advance how big you want your chunks to be. This also works out very nicely when you have a mix of machine speeds in your farm: with chunks, a slower machine can take a lot longer to finish a 5-frame chunk than a faster machine would.
With Qube's Dynamic Allocation, faster machines simply ask for another frame number more quickly than slower machines, so the faster machines usually end up rendering more frames than the slower machines, and all the workers usually finish up around the same time regardless of machine speed.
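Here's a toy simulation of why that works out (made-up worker names and per-frame times, not Qube code): frames are dealt to whichever worker asks first, so the faster machines simply ask more often.

    import heapq

    # Toy simulation of Dynamic Allocation with mixed machine speeds: a
    # 300-frame agenda, 4 "fast" workers at 5 minutes/frame and 4 "slow"
    # workers at 15 minutes/frame. Frames are dealt to whichever worker
    # asks for work first.

    frames = list(range(1, 301))
    workers = [("fast-%d" % i, 5.0) for i in range(4)] + \
              [("slow-%d" % i, 15.0) for i in range(4)]

    # heap entries: (time the worker will next ask for work, name, minutes per frame)
    heap = [(0.0, name, speed) for name, speed in workers]
    heapq.heapify(heap)
    rendered = dict((name, 0) for name, _ in workers)

    while frames:
        asks_at, name, speed = heapq.heappop(heap)   # next worker to ask for a frame
        frames.pop(0)                                # deal it the next card in the deck
        rendered[name] += 1
        heapq.heappush(heap, (asks_at + speed, name, speed))

    for name, _ in workers:
        print(name, "rendered", rendered[name], "frames")
    # The fast workers end up with roughly 3x as many frames as the slow ones,
    # and every worker finishes within about one frame-time of the others.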
So a subjob in Qube is a single instance of Maya running on a worker. You can start out rendering across 10 subjobs on your 50-node farm (to be polite and not plug the farm up for your fellow users), and if you decide part way through the job that you want to run on more (or all) of the nodes on the farm, you simply modify the job's "subjob process count" (the job's "cpus" in older versions of Qube) to the number of copies of Maya you want. More subjobs start up, more instances of Maya start running, and instead of 10 subjobs working through the frame list one by one, you now have 50 subjobs working through the list.
The list of frames to render is referred to as the "agenda" in Qube, and we often refer to frames as "agenda items", "pieces of work", or simply "work". Since Qube is used for many things other than rendering frames, the agenda can consist of any type of work, but frames are the most common.
So your 300-frame job will create an agenda with 300 items in it (like a deck of 300 cards), and you can increase the number of subjobs drawing cards from that deck at any time. Or shrink it, to give some capacity back to an emergency job, though Qube can handle this automatically with its job priority.
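As a rough sketch of what that looks like at submission time, here's a hedged example using Qube's qb Python module. It assumes qb.genframes() and qb.submit() behave as in recent Qube releases, and the package field name is a placeholder you'd check against your Maya jobtype's documentation.

    import qb   # Qube's Python API; assumes it's importable on this machine

    # Hedged sketch: a 300-frame Maya job that starts with 10 subjobs (10
    # copies of Maya), each consuming one worker slot. The package contents
    # are placeholders, not necessarily the Maya jobtype's real parameter names.
    job = {
        'name':         'shot_010_beauty',
        'prototype':    'maya',                   # the Maya jobtype
        'cpus':         10,                       # subjob process count: 10 instances of Maya
        'reservations': 'host.processors=1',      # each subjob takes one job slot
        'agenda':       qb.genframes('1-300'),    # 300 agenda items, one per frame
        'package':      {'scenefile': '/jobs/shot_010/scene.mb'},   # placeholder field name
    }

    submitted = qb.submit([job])                  # returns the submitted job(s) with their ids
    print(submitted[0]['id'])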
Priority and Job Preemption
Now that you have a picture of how work is dealt out to subjobs like cards to players, you can get a good idea of how priority and preemption work in Qube. Imagine a regular-priority subjob is running and rendering a frame, and a user submits a higher-priority job (or you bump up the priority on one of your other jobs). The next time the subjob asks the supervisor for the next agenda item (the next frame), the supervisor will instead instruct the worker to surrender the job slot so that a subjob from the higher-priority job can start in the slot instead. The frame in progress is finished, and the subjob from the lower-priority job goes back into a "pending" state, waiting for an available worker so that it can begin processing more frames at a later time.
This is "passive" preemption: finish what you're doing and then get off the worker. Qube can also implement "aggressive" preemption (which approach to take is defined on the supervisor), which would kick the lower-priority subjob and frame off the worker immediately, throwing out the work done so far, and puts the frame and subjob back into a "pending" state, to be restarted when a worker is available. Aggressive preemption can throw out a lot of cpu time; it's only really recommended in scenarios where all of the work in the farm runs very quickly, so tossing 10 seconds of compute time isn't a killer. Last thing you want to do is toss 15 hours of a 16-hour render... Farms that do nothing but transcoding or compositing are good candidates for aggressive preemption; farms that render anything longer than 30 second frames should stick with passive preemption.
Frame Chunking
If you're running jobs through Qube that don't take advantage of Dynamic Allocation, we support and make it easy to build job chunks. In each submission interface in the QubeGUI, there's an "Execution" control in the Frame Range section. The default is "Individual Frames", but you can select "Chunks with n frames" and set the chunk size, which gives you some number of chunks, all of the same size. If you know how many chunks you want to end up with and don't care about the chunk size ("I want to evenly spread 378 frames across 17 subjobs"), you can select "Split into n partitions"; this is where you'd get 17 chunks of near-equal size. It does the arithmetic for you, so you don't end up with 18 chunks instead, with the last chunk being 1 frame (which always used to happen to me...).
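The arithmetic behind "Split into n partitions" is simple enough to sketch; this isn't the QubeGUI's actual code, just the idea.

    # Sketch of the "Split into n partitions" idea: spread 378 frames across
    # 17 chunks whose sizes differ by at most one frame, instead of a fixed
    # chunk size leaving a too-small straggler chunk at the end.

    def partition(frames, n):
        base, extra = divmod(len(frames), n)
        chunks, start = [], 0
        for i in range(n):
            size = base + (1 if i < extra else 0)   # the first `extra` chunks get one more frame
            chunks.append(frames[start:start + size])
            start += size
        return chunks

    chunks = partition(list(range(1, 379)), 17)     # 378 frames, 17 partitions
    print([len(c) for c in chunks])                 # four chunks of 23, thirteen of 22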
Workers and job reservations
The reservation string host.processors=1 determines how many worker job slots a job "consumes" while it's running. Workers advertise having a certain number of "job slots" available when they boot; this is defined by the worker_cpus setting for the worker. The default value is 0, which means each worker will advertise as many job slots as it has cores installed.
When you have an 8-core worker with 8 job slots defined and you submit a job with the reservation set to host.processors=1, you're saying that up to 8 of these subjobs could fit on a single worker. So if you want each subjob to run all by itself on a worker, you can either set the processor reservation to 8, or use the shorthand reservation host.processors=1+, which means "start on a worker with at least 1 free slot, but reserve them all as soon as I start". A worker with a 1+ reservation job running on it will show all slots in use, as in 8/8.
If every job you send to your workers is going to be multi-threaded and expected to consume the entire worker, it's common to set each worker to advertise only 1 job slot with worker_cpus=1. Then, with the default 1-slot reservation for each job, it's always 1 job to 1 worker.
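Pulling those setups together, here's a small sketch of how the reservation strings and the worker_cpus setting line up for an 8-core worker; the job labels are made up, but the strings are the ones described above.

    # Sketch of the reservation setups described above for an 8-core worker.
    # worker_cpus = 0 in the worker's qb.conf means "advertise one job slot
    # per core", so this worker advertises 8 slots.

    jobs = {
        "share the box":        "host.processors=1",     # up to 8 of these subjobs per worker
        "whole box, explicit":  "host.processors=8",      # reserve all 8 slots by number
        "whole box, shorthand": "host.processors=1+",     # start on a worker with 1 free slot, then grab them all
    }

    for name, reservation in jobs.items():
        print("%-22s %s" % (name, reservation))

    # For an all-multithreaded farm, flip it around on the worker instead:
    # set worker_cpus = 1 in qb.conf so each worker advertises a single slot,
    # and the default host.processors=1 reservation gives you 1 job per worker.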