Recent Posts

Pages: 1 2 3 [4] 5 6 ... 10
Announcements / Qube! 6.10 released
« Last post by jburk on July 06, 2017, 12:30:13 AM »
Today, we have released Qube! 6.10.

To download this latest version, you can visit our website:

To see what platforms or packages are supported in this release, you can visit this page:

What's New in Qube! 6.10-0
  • Online Performance Reports - Accessible through the metered licensing website, Online Performance Reports provide by-the-minute, actionable data on how your Supervisors are performing at any given time. With an initial focus on operations and stability, the graphs reveal site stress, distribution rates, and other data points that help wranglers reallocate resources and troubleshoot. For 6.10-0, online performance reporting is only available on Linux platforms. Support for OS X and Windows will be added in the future.
  • Online License Keys - Perpetual license keys created for Qube! 6.10 and after will now be downloaded via the metered licensing website. Additional information on license history and total worker numbers is also available, with more management features planned for the future.
  • Microsoft Azure Beta Integration - During the beta release of this feature in Qube! 6.10-0, users will have command line access to Microsoft Azure, allowing them to start and stop cloud nodes from within Qube!. Plans for Azure node management through ArtistView are underway.
  • Clarisse Renderer Support - Clarisse renders can now be dispatched from within Qube! via command-line, in-app, and load-once job submissions.
  • Updated Shotgun Integration - The integrated Shotgun user interface has undergone several minor improvements:
    • Qube! Images to Movie submission will pull in specific data from Shotgun
    • The movie upload script has been updated, fixing a major bug that prevented automated movie uploads to Shotgun
  • Partner Licensing Daemon – This helper daemon will be the foundation for future cloud service provider integrations.
  • C4D Take System – Available only in ArtistView, Qube! now supports submission for the Cinema 4D Take System.
  • EXR Support in Modo – Create one job submission per EXR layer from Modo.
  • Deferred Table Creation – Added as an option in a previous version of Qube!, this optimization for submitting a large number of jobs simultaneously will be on by default starting with 6.10-0.
  • Job Modification – Modifications made to jobs will now run multi-threaded instead of single-threaded as in previous versions of Qube!, and logs for modified jobs will include additional verbosity.
  • Linux Platform Support – Added support for CentOS 7.3.
With this version of Qube!, we will no longer be supporting the XSI and MTOR job types.
As part of an ongoing effort to allow IT organizations to consume only the exact amount of public cloud computing resources they need, Google this week announced it has removed the memory caps attached to any virtual machine. In addition, Google claims it has become the first public cloud provider to make the latest generation of Intel Xeon processors, codenamed Skylake, generally available via the Google Cloud Platform (GCP).

Paul Nash, group product manager for GCP, says Google is taking pains to let IT organizations consume virtual machines without committing to specific sizes or even to hourly blocks of time, commitments that ultimately force them to pay for unused resources.

An instance running on an Intel Skylake processor can now be configured with up to 455 GB of RAM. Rather than setting specific memory limits, Nash says IT organizations can now determine how much memory they want to allocate to a virtual CPU instance. That approach is intended to be especially appealing to IT organizations aiming to deploy, for example, in-memory databases on a public cloud.

“We’re starting to see more deployments of applications such as SAP HANA databases or analytics applications by enterprise customers,” says Nash.

At the same time, via a new Minimum CPU Platform feature, Google is now allowing IT organizations to select a specific CPU platform for VMs in any given zone. GCP will always schedule a virtual machine to run on that class of CPU family or better.
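As a sketch, the custom memory sizing and Minimum CPU Platform selection described above map to `gcloud` flags roughly like this (the instance name, zone, and sizes are illustrative placeholders, not values from the article):

```shell
# Illustrative only: create a custom-shaped Compute Engine VM and require
# that it be scheduled on Skylake-class CPUs or better.
gcloud compute instances create demo-vm \
    --zone us-central1-a \
    --custom-cpu 8 \
    --custom-memory 104GB \
    --min-cpu-platform "Intel Skylake"
```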

It’s clear that Google is now spending a lot more time and energy courting enterprise customers. While public clouds have been around for 10 years, most enterprise IT organizations are just now making public clouds a standard deployment option for their applications. That doesn’t mean everything will be moving into a public cloud. But it does mean that before making any substantial commitments, many enterprise IT organizations are likely to be very particular about the terms and conditions offered by a public cloud service provider.
Shortly after Avere Systems released its first storage appliances back in 2009, its products caught on with visual effects facilities, which saw big benefits in placing the company’s high-performance storage tiers in between their existing storage architecture and the render farms that needed to quickly access large amounts of data.

Avere’s FXT filers provided easy scalability for render farms while taking pressure off other parts of the network. Last month, Avere Systems said longtime customer Sony Pictures Imageworks was deploying Avere’s new FXT Edge Filer 5600, improving throughput by 50 percent without replacing any of its existing storage architecture, as part of a recent 20 percent expansion in rendering capabilities — with plans to expand by another 20 percent over the next year. We spoke to Avere VP of Marketing Rebecca Thompson to get some background on how Sony is using the new hardware, the difference between filer clusters on premise and in the cloud, and how smaller studios can use the technology to spin up serious rendering power on demand.

StudioDaily: In Sony’s case, the FXT filers are basically being used as a high-performance layer between the render farm and the storage infrastructure, correct?
Rebecca Thompson: The primary purpose is to accelerate the performance of the render farm and be able to scale out effectively. But at the same time, they want to make sure the artists’ workflow doesn’t get disrupted. The artists are accessing the same storage servers, but the renders are so resource-intensive that if you don’t think about the architecture carefully you can end up starving out your artists — the renders go on in the background and the artists can’t access anything. Render farms don’t pick up phones and call and complain. Producers will complain if their stuff’s not getting done on time, but artists will pick up the phone and complain if they can’t get their editing and compositing done.

We love the Sony story because they have been a long-term customer of Avere’s. They were one of our first production customers in the media space back in 2010, and as they’ve grown we’ve grown, too. Their render farm was probably about a quarter of what it is now, but all along the way they have been a repeat customer on a pretty consistent basis. I know they are excited. The last one they put in was our new hardware, the 5600, which is our high-end model with a 4x improvement in SSD. We went from 14 TB of SSD to 29 TB of SSD in that box, and it went from close to 7 GB/s of read throughput up to 11 GB/s.

That’s fast. And it’s nice that you can put this in without completely reinventing your architecture.
That’s one of the things that we are conscious of every time we come out with a new model. Our models work in a clustered fashion, so a customer can have anywhere from three to more than 25 nodes in a single cluster. Let’s say you have a cluster of 10 boxes. You want to put in three new nodes. You don’t have to take anything down. They will just auto-join. They don’t have to be the same models. And that’s really nice for customers. They can keep their older Avere gear and make use of that, and then drop in the new stuff and get the advantages, and everything works well and plays well together.

If customers are using a mix of on-premise and off-premise storage, or are using some cloud capacity for storage or rendering, can they also take advantage of this technology to increase their throughput when they need it?
Absolutely. Sony has an infrastructure that’s probably typical of larger VFX studios. They have a large data center in Culver City, but a lot of their production work is done up in Vancouver. They have Avere clusters on their remote sites as well as within the data center. And the remote sites are WAN-caching. You have all the data local to Vancouver, but you also have copies back in the L.A.-based data center. That’s the way they’re using it.

Now, we have other customers, particularly in the rendering space, who do something that we call cloud-bursting. That’s where they want to use cloud compute nodes rather than cloud storage. We have customers who work with both Google and Amazon Web Services [AWS], and they are probably split evenly — rendering is one area where I think Google has done better and made more inroads in the M&E space. So we have a virtual version of our product. Instead of the physical product, it’s our Avere operating system in a virtual format, residing up in a cloud on a platform that we specify. We have a couple of different flavors in each cloud provider (we specify that we require X amount of SSD capacity and X amount of memory), and the virtual filer resides on those instances and acts as a caching layer. That allows people to keep their data on premise. Let’s say you have Isilon or NetApp or whatever storage hardware. You can send to the cloud only the amount of data you need to render, render it, and send it back on premise. A lot of studios are reluctant to store data in the cloud over the long term. Sony is very vertically oriented, making their own movies and doing their own VFX work. But a lot of VFX studios are doing contract work on projects like the Disney Marvel movies, where there are a lot of restrictions in place around security. You want to make sure the movies don’t leak out before release. So we actually have customers who have physical nodes of ours for use on premise, and then they’ll spin up more [in the cloud].

For full story, go to Studio Daily:

Announcements / Website and Forum Moved to New Hosting Company
« Last post by pipelinescott on April 12, 2017, 02:51:10 AM »
The hosting of our corporate website has moved to a new provider! So has the hosting of our forum.
Our corporate site domain has not changed, so you can still find all of the information you are looking for there.

Our corporate site and forum are now at the same provider where our Documentation and Metered Licensing were already hosted. Those two domains remain the same.

Our forum, on the other hand, has a new base URL.
Changed from:
Changed to:

All other PipelineFX touch points including FTP, support, email aliases, etc. remain the same.

Jobtypes and Applications / Re: Renderman collision issue
« Last post by jburk on April 04, 2017, 04:59:54 PM »
Which jobtype are you submitting with?  One way to check is to view the job's "prototype" value in WranglerView or ArtistView.

If you're using the "maya" jobtype, there's not really a command line available for modification, but if you're using the batchRender "cmdrange" or "pyCmdrange" jobtypes, the submission interfaces have a "command template" where you can add this extra parameter.  Or are you exporting .RIB files and rendering with one of Pixar's command-line utilities?
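As a purely hypothetical illustration, appending the RenderMan "-batchContext" flag to a "cmdrange" command template might look something like the following; the scene path and other flags are placeholders for whatever your template already contains, and the exact template depends on your submission setup:

```shell
# Hypothetical "cmdrange" command template with -batchContext appended.
# QB_FRAME_START / QB_FRAME_END stand in for the frame-range placeholders
# your template already uses; /path/to/scene.mb is a placeholder path.
Render -r rman -s QB_FRAME_START -e QB_FRAME_END \
    -batchContext $JOBDATETIME /path/to/scene.mb
```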

As well, CCA is entitled to customer support; you can open a support case by sending mail to from your email address.
Jobtypes and Applications / Renderman collision issue
« Last post by dfischer-walker on April 04, 2017, 04:16:43 PM »

My school is having some issues rendering Maya scenes with the RenderMan renderer. I’ve browsed both the Qube and RenderMan forums and have found an argument that will fix this issue, but I need help implementing it in the Qube submission so that RenderMan will render correctly. Currently, whenever we render with RenderMan using Qube as the queuing system, many of our jobs will fail, or they will complete but we won’t see any images. From what I have gathered, this is a collision issue caused by servers trying to write the same files to the same directory. RenderMan has a built-in workaround, but you have to place it into the correct command parameters in order to fix this. From what the RenderMan team has told me, the argument that has to be added is "-batchContext $JOBDATETIME", but we’re not sure under which command parameters it should go. If you could help us figure out where that argument belongs, that would be greatly appreciated. I will include links to the RenderMan forums as well. If you have any questions, I am more than happy to answer them.

Also, we know that this is not a network bandwidth issue or an issue with Qube because we have rendered out Arnold, Mental Ray, and even Maya Software renders without a problem.

Thank you
Google’s Cloud Platform improves its free tier and adds always-free compute and storage services

Google today quietly launched an improved always-free tier and trial program for its Cloud Platform.

The free tier, which now offers enough power to run a small app in Google’s cloud, is offered in addition to an expanded free trial program (yep — that’s all a bit confusing). This free trial gives you $300 in credits that you can use over the course of 12 months. Previously, Google also offered the $300 in credits, but those had to be used within 60 days.

The free tier, which the company never really advertised, now allows for free usage of a small (f1-micro) instance in Compute Engine, Cloud Pub/Sub, Google Cloud Storage and Cloud Functions. In total, the free tier now includes 15 services.

The addition of the Compute Engine instance and 5 GB of free Cloud Storage usage is probably the most important update here because those are, after all, the services that are at the core of most cloud applications. You can find the exact limits here.

It’s worth noting that the free tier is only available in Google’s us-east1, us-west1 and us-central1 regions.

With this move, Google is clearly stepping up its attacks against AWS, which offers a similar but more limited free tier and free 12-month trial program for its users. Indeed, many of Google’s limits look fairly similar to AWS’s 12-month free tier, but the AWS always-free tier doesn’t include a free virtual machine, for example (you only get it for free for 12 months). I expect AWS will pull even with Google in the near future and extend its free offer, too.

The idea here is clearly to get people comfortable with Google’s platform. It’s often the developer who runs a hobby project in the cloud who gets the whole team in an enterprise to move over, or who decides to use a certain public cloud to run a startup’s infrastructure. New developers, too, typically choose AWS to learn about the cloud because of its free 12-month trials. The 60-day, $300 credit Google previously offered simply didn’t cut it for developers who wanted to learn how to work with Google’s cloud.

full article published here:
Announcements / Qube 6.9-2 released
« Last post by jburk on February 27, 2017, 08:16:51 PM »
This is a maintenance release of the Qube! Core/Supervisor/Worker/ArtistView/WranglerView products.

This is a recommended release for all customers running Qube v6.9-0; customers already running v6.9-1 need only upgrade if they are impacted by any issues addressed by this release.

Notable changes and fixes are:
  • Supervisor and worker now use systemd unit files for managing startup and shutdown on CentOS 7+
  • Jobs in a "badlogin" state can now be retried or killed

Please see for complete release notes.
Rendering in the Cloud / Google launches GPU support for its Cloud Platform
« Last post by Render Guru on February 22, 2017, 06:44:21 PM »
Three months ago, Google announced it would in early 2017 launch support for high-end graphics processing units (GPUs) for machine learning and other specialized workloads. It’s now early 2017 and, true to its word, Google today officially made GPUs on the Google Cloud Platform available to developers. As expected, these are Nvidia Tesla K80 GPUs, and developers will be able to attach up to eight of these to any custom Compute Engine machine.

These new GPU-based virtual machines are available in three Google data centers: us-east1, asia-east1 and europe-west1. Every K80 core features 2,496 Nvidia stream processors and 12 GB of GDDR5 memory (the K80 board packs two cores and 24 GB of RAM).

You can never have too much compute power when you’re running complex simulations or using a deep learning framework like TensorFlow, Torch, MXNet or Caffe. Google is clearly aiming this new feature at developers who regularly need to spin up clusters of high-end machines to power their machine learning frameworks. The new Google Cloud GPUs are integrated with Google’s Cloud Machine Learning service and its various database and storage platforms.

The cost per GPU is $0.70 per hour in the U.S. and $0.77 in the European and Asian data centers. That’s not cheap, but a Tesla K80 accelerator with two cores and 24 GB of RAM will easily set you back a few thousand dollars, too.
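As a rough back-of-the-envelope comparison (the ~$4,000 purchase price below is an assumption standing in for "a few thousand dollars"), renting a full two-core K80 board at the U.S. rate breaks even with buying one after roughly 2,900 hours:

```python
# Rent-vs-buy sketch for a two-core Tesla K80 board in a U.S. data center.
us_rate_per_gpu_hour = 0.70          # USD per GPU core per hour (article figure)
cores_per_board = 2
board_rate = us_rate_per_gpu_hour * cores_per_board   # $1.40 per board-hour

assumed_purchase_price = 4000.0      # assumption: "a few thousand dollars"
break_even_hours = assumed_purchase_price / board_rate

print(round(break_even_hours))       # roughly 2857 hours, about 4 months of 24/7 use
```

Of course, that ignores the servers, power, and cooling a self-hosted board would also need, which is part of the cloud pitch.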

The announcement comes only a few weeks before Google is scheduled to host its Cloud NEXT conference in San Francisco — where chances are we’ll hear quite a bit more about the company’s plans for making its machine learning services available to even more developers.

Read full story here: