Author Topic: Cinema4D R15 jobs filling up local worker disks and crashing the host  (Read 5525 times)


We've had several customers report that Cinema4D R15 jobs running on their Qube farm hang on random workers, and that the job log for the instance running on that worker can grow large enough to fill up the local disk.  We've seen cases where a single job log grew as large as 2TB before crashing the worker.

This behavior is not tied to any particular worker host or Qube version; it has been traced to a bug in Maxon C4D R15.

The most obvious symptom is that workers render C4D R15 frames successfully, but at some point a random worker hangs at the start of the next frame and never completes it.

If you remote into the worker and browse down to the local job log directory, you may find a *.out file for the running job that is overly large (100MB or more).  If you then use the QBDIR/utils/ utility to check the last few lines of the workerlog file (<path to workerlog>), you will see the following message repeated:

Code:
CRITICAL: Stop [/perforce_buildsystem_osx/c4d_mx_buildsystem_osx/release/15.0/work/futurama/frameworks/kernel/source/memory/systemallocator.cpp(445)]
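The check above can be sketched as a couple of shell commands. This is only an illustration run against a throwaway directory; the directory layout and the job id are hypothetical, and on a real worker you would point at the worker's actual local job log directory and workerlog file.

```shell
# Throwaway directory standing in for the worker's local job log dir;
# the job id 12345 is hypothetical.
demo=$(mktemp -d)
mkdir -p "$demo/12345"
dd if=/dev/zero of="$demo/12345/12345.out" bs=1024 count=600 2>/dev/null

# Flag any job .out files over a size threshold (100MB or more on a real
# worker; 512KB here so the demo file trips it):
find "$demo" -name '*.out' -size +512k -exec ls -lh {} \;

# On the real worker, tail the workerlog and look for the repeated error:
# tail -n 20 <path to workerlog> | grep 'CRITICAL: Stop'
```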
There have been two different methods to resolve this.

The first approach, recommended by Maxon support and reported by one of our other customers:
They (Maxon support) said the error comes from C4D trying to grab any available license. We made a group on the license server for command-line licenses, then added the workers we want to use them. This forces those workers to look in one place for their license and eliminates the error logs.

We've had one customer report that grouping the C4D command-line licenses helped but didn't completely solve the issue. What appears to have solved it for good was:

-On the local render proxy user's account, I went into ~/Library/Preferences/ and deleted the "maxon" folder (in some cases it was a symlink).
-Also in that same directory, I deleted all C4D ".mca" files.
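The two deletions above can be sketched as shell commands. For safety this sketch runs against a throwaway directory; on a real worker, PREFS would be ~/Library/Preferences for the render proxy user, and the .mca file name shown is hypothetical.

```shell
# Throwaway directory standing in for ~/Library/Preferences of the
# render proxy user; the .mca file name is hypothetical.
PREFS=$(mktemp -d)
mkdir -p "$PREFS/maxon"
touch "$PREFS/CINEMA4D_R15.mca"

# Delete the "maxon" folder; rm -rf removes a symlink itself rather than
# following it, which matters since the folder is sometimes a symlink:
rm -rf "$PREFS/maxon"

# Delete all C4D .mca preference files in the same directory:
find "$PREFS" -maxdepth 1 -name '*.mca' -delete
```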

In every case, it was still necessary to remote into each worker and manually clear out the bloated job logs.
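A sketch of that manual cleanup, again demonstrated against a throwaway directory (the path and size threshold are placeholders; point LOGDIR at the worker's real local job log directory and pick a threshold that suits your farm):

```shell
# Throwaway directory standing in for the local job log dir; the 2MB
# file stands in for a bloated job .out log.
LOGDIR=$(mktemp -d)
dd if=/dev/zero of="$LOGDIR/999.out" bs=1024 count=2048 2>/dev/null

# Truncate (rather than delete) oversized .out files, so any process
# still holding the file open keeps a valid handle:
find "$LOGDIR" -name '*.out' -size +1024k -exec sh -c ': > "$1"' _ {} \;
```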
« Last Edit: September 26, 2014, 07:10:36 PM by jburk »