Recent Posts

UPDATE: this has been fixed in 7.0-0a, released 5/31/2018 at 12:00 PDT

The 7.0-0 supervisor has a known bug that prevents jobs reserving a global resource from running. These jobs stay in the "pending" state indefinitely and are never dispatched. We are actively working on this and will release a 7.0-0a patch in the next few days.

Customers using Qube's global resource feature in production should wait until 7.0-0a is available before upgrading.

This topic will be updated once the issue is fixed, and we will also post in our Announcements forum. We recommend subscribing to either this topic or the Announcements forum to receive a notification when 7.0-0a is available.
Announcements / Qube! 7.0-0 with Postgres Released!
« Last post by pipelinescott on May 23, 2018, 11:36:44 PM »
Version 7.0-0 of Qube! has been released and is immediately available for download.

This version of Qube! brings major, groundbreaking changes:

- Qube! now runs on Postgres
- Supervisors can be acquired free of charge
- Performance is noticeably better with Postgres
- A functional version of the NEW Qube UI is available for you to test upon request

Other benefits included in this release:

- New Metered Licensing email alerts about usage and spending
- Support for OS X 10.13 High Sierra
- Support for Maya and 3ds Max 2018
- Support for After Effects CC 2018
- Support for Nuke Frame Server and NukeStudio
- Support for Unreal Sequencer job submission
- Support for Keyshot job submission

Downloads available at:

Tech specs:


While technically possible, it is not recommended to run more than one instance of aerender per worker. Each instance of aerender will attempt to use all available cores on the worker, and in our experience this creates a higher rate of hung processes and failed frames.



Jobtypes and Applications / Multiple Instances of aerender on the same worker
« Last post by johndavidwright on December 12, 2017, 09:19:51 PM »
Is it possible for a worker to run multiple instances of aerender? Say my worker has 44 slots. If I give my job the reservation host.processors=4 and set it to have 11 instances, would that worker be able to pickup all 11 instances and run 11 instances of aerender?
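To make the question concrete, the math does work out on paper. A job spec along these lines could express it (a sketch only; the key names are assumptions about Qube's Python job dict, so verify against the qb.submit() documentation, and note the reply above recommends against multiple aerender instances per worker):

```python
# Illustrative job spec; 'cpus' and 'reservations' key names are assumed
# to match Qube's Python job dictionary -- check the qb.submit() docs.
job = {
    'name': 'aerender_multi',
    'prototype': 'cmdrange',
    'cpus': 11,                            # 11 instances of aerender
    'reservations': 'host.processors=4',   # each instance reserves 4 slots
}

# 11 instances x 4 slots each = 44 slots, filling the 44-slot worker
slots_needed = job['cpus'] * 4
```

With this reservation the worker could in principle pick up all 11 instances, but the hung-process caveat above still applies.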
SimpleCmds / Adding support for AfterEffects CC 2018
« Last post by jburk on October 18, 2017, 10:13:58 PM »
We've added support today for After Effects CC 2018, and customers running Qube 6.10 don't have to wait for us to release a new version in order to take advantage of it.

To enable submission of jobs for CC 2018, install the attached module into the AfterEffects (ArtistView) or aerender (WranglerView) directory, which you can locate via either application's File->Open AppUI Dir menu, then restart AV or WV.

This is only possible in Qube 6.10; earlier versions do not support this ae_versions module.
SimpleCmds / Re: set environment variables with simpleCMD
« Last post by wingart on August 25, 2017, 10:25:58 PM »
Thanks, I will give that a try.
SimpleCmds / Re: set environment variables with simpleCMD
« Last post by jburk on August 24, 2017, 08:12:15 PM »
You can do this in the existing preSubmit() function:

Code: [Select]
def preSubmit(cmd, job):
    # Handle renderer-specific callbacks
    if cmd.package['-renderer'] == 'mi':
        return preSubmit_mi(cmd, job)
    elif cmd.package['-renderer'] == 'turtlebake':
        return preSubmit_turtlebake(cmd, job)
    # Set environment variables on the job
    job['env'] = {
        'foo': 'bar',
        'foobar': 'bat'
    }

SimpleCmds / set environment variables with simpleCMD
« Last post by wingart on August 23, 2017, 10:25:04 PM »
I need to set a few environment variables to load up vray before rendering like so:

SET VRAY_PATH=\\server\vray\35203_maya2016
SET VRAY_FOR_MAYA2017_MAIN_x64=%VRAY_PATH%\maya_vray
SET VRAY_FOR_MAYA2017_PLUGINS_x64=%VRAY_PATH%\maya_vray\vrayplugins
SET VRAY_OSL_PATH_MAYA2017_x64=%VRAY_PATH%\vray\opensl
SET VRAY_RENDER_DESC_PATH=%VRAY_PATH%\maya_root\bin\rendererDesc

How do I hardcode these inside the simpleCMD, something like job['env']['key1'] = 'value1', job['env']['key2'] = 'value2'?
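Concretely, those SET lines could be hardcoded into the job's environment as a Python dict inside preSubmit() (a sketch only; the %VRAY_PATH% expansion is done in Python, since %VAR% syntax is not expanded inside a dict):

```python
# Paths taken from the SET commands above; expand VRAY_PATH in Python.
vray_path = r'\\server\vray\35203_maya2016'

env = {
    'VRAY_PATH': vray_path,
    'VRAY_FOR_MAYA2017_MAIN_x64': vray_path + r'\maya_vray',
    'VRAY_FOR_MAYA2017_PLUGINS_x64': vray_path + r'\maya_vray\vrayplugins',
    'VRAY_OSL_PATH_MAYA2017_x64': vray_path + r'\vray\opensl',
    'VRAY_RENDER_DESC_PATH': vray_path + r'\maya_root\bin\rendererDesc',
}

# In the simplecmd's preSubmit(), this would be assigned as job['env'] = env
```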

This is a supervisor-only patch release of 6.9-2 and 6.10-0 that includes the following key fixes.

  • Supervisor patches that reduce the number of threads and the chance of repeated worker rejections on some farms due to race conditions/timing issues.
  • A fix for a bug in the startHost() dispatch routine that caused the supervisor to not always dispatch jobs to workers when they became available.

And this fix which applies to 6.10-0 only:

  • A fix for job instances that could become unkillable with the QB_PREEMPT_MODE_FAIL internal status.

The releases are labeled as 6.9-2b and 6.10-0a.

NOTE regarding dependencies on Linux: installing this updated supervisor package on a Linux system requires the use of rpm with the --nodeps argument; the yum utility only supports disabling dependency checks during removal, not installation.
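For example, the supervisor-only update might be applied directly with rpm like this (the package filename below is illustrative; use the actual file you downloaded):

```shell
# yum cannot skip dependency checks at install time, so call rpm directly.
# -U upgrades the installed package in place; --nodeps skips the dependency check.
rpm -Uvh --nodeps qube-supervisor-6.10-0a.x86_64.rpm
```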
Announcements / Maintenance/Patch release 6.9-2a of Qube! available
« Last post by jburk on July 10, 2017, 11:29:31 PM »
We have released a patched version of Qube!, labeled 6.9-2a, that contains various fixes.

This is a recommended release for all customers running Qube v6.9-1 or earlier; customers already running v6.9-2 need only upgrade if they are impacted by any issues addressed by this release.

The Qube! Installer should automatically pick up this new version when the 6.9-2 manifest is chosen from the public repository, or if a local copy of the 6.9-2a manifest is chosen on a host which can access the internet.

Notable fixes and changes are:

Code: [Select]
@CHANGE: background helper thread improvements
* limit the number of workers that are potentially recontacted by the background helper routine to 50 per iteration.
* background thread exits and refreshes after running for approximately 1 hour, as opposed to 24 hours

@CHANGE: job queries requesting subjob and/or work details must now explicitly provide job IDs.
Both the qbjobinfo() C++ and qb.jobinfo() Python APIs now reject such queries and return an error.
For example, the Python call "qb.jobinfo(subjobs=True)" will raise a runtime exception.
It must now be called like "qb.jobinfo(subjobs=True, id=12345)" or "qb.jobinfo(subjobs=True, id=[1234,5678])"

@FIX: shortened the timeout for "qbreportwork" from 600 seconds to 20 when it reports a "failed" work item that has migrate_on_frame_retry set.
This was causing 10-minute pauses on the job instance when a frame
fails after exhausting all of its retry counts.
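The stricter qb.jobinfo() requirement described above can be illustrated with a small validation sketch (this is not the library's actual code; the function and parameter names here are illustrative):

```python
def check_jobinfo_args(subjobs=False, agenda=False, id=None):
    """Mimic the 6.9-2a rule: detail queries must name explicit job IDs."""
    if (subjobs or agenda) and not id:
        raise RuntimeError("subjob/work detail queries require explicit job IDs")
    return True

# Allowed: details requested together with explicit IDs
check_jobinfo_args(subjobs=True, id=[1234, 5678])

# Rejected: details requested without IDs, like qb.jobinfo(subjobs=True)
try:
    check_jobinfo_args(subjobs=True)
except RuntimeError as e:
    print(e)
```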

Please review the release notes to see if you are experiencing an issue that may be resolved by this release.
