Recent Posts

41
SimpleCmds / set environment variables with simpleCMD
« Last post by wingart on August 23, 2017, 10:25:04 PM »
I need to set a few environment variables to load up V-Ray before rendering, like so:

SET VRAY_PATH=\\server\vray\35203_maya2016
SET VRAY_AUTH_CLIENT_FILE_PATH=%VRAY_PATH%
SET VRAY_FOR_MAYA2017_MAIN_x64=%VRAY_PATH%\maya_vray
SET VRAY_FOR_MAYA2017_PLUGINS_x64=%VRAY_PATH%\maya_vray\vrayplugins
SET VRAY_OSL_PATH_MAYA2017_x64=%VRAY_PATH%\vray\opensl
SET VRAY_RENDER_DESC_PATH=%VRAY_PATH%\maya_root\bin\rendererDesc

How do I hardcode these inside mayabatch.py? Something like:

cmdjob.properties.env['key1']='value1'
cmdjob.properties.env['key2']='value2'
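
Or would setting them directly with os.environ at the top of mayabatch.py do the trick? Just a rough sketch (assuming the render process inherits the script's environment):

Code: [Select]
import os

# hard-code the V-Ray variables before the render is launched
VRAY_PATH = r'\\server\vray\35203_maya2016'
os.environ['VRAY_PATH'] = VRAY_PATH
os.environ['VRAY_AUTH_CLIENT_FILE_PATH'] = VRAY_PATH
os.environ['VRAY_FOR_MAYA2017_MAIN_x64'] = VRAY_PATH + r'\maya_vray'
os.environ['VRAY_FOR_MAYA2017_PLUGINS_x64'] = VRAY_PATH + r'\maya_vray\vrayplugins'
os.environ['VRAY_OSL_PATH_MAYA2017_x64'] = VRAY_PATH + r'\vray\opensl'
os.environ['VRAY_RENDER_DESC_PATH'] = VRAY_PATH + r'\maya_root\bin\rendererDesc'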

thanks.
42
This is a supervisor-only patch release of 6.9-2 and 6.10-0 that includes the following key fixes.

  • Supervisor patches to cut down on the number of threads and reduce the chance of repeated worker rejections on some farms due to race conditions and timing issues.
  • A fix for a bug in the startHost() dispatch routine that caused the supervisor to not always dispatch jobs to workers when they became available.

And this fix which applies to 6.10-0 only:

  • A fix for job instances that could become unkillable, stuck in the QB_PREEMPT_MODE_FAIL internal status.

The releases are labeled as 6.9-2b and 6.10-0a.

NOTE regarding dependencies on Linux: Installing this updated supervisor package on a Linux system requires running rpm with the --nodeps argument; the yum utility does not support disabling dependency checks during installation, only during removal.
43
Announcements / Maintenance/Patch release 6.9-2a of Qube! available
« Last post by jburk on July 10, 2017, 11:29:31 PM »
We have released a patched version of Qube!, labeled 6.9-2a, that contains various fixes.

This is a recommended release for all customers running Qube v6.9-1 or earlier; customers already running v6.9-2 need only upgrade if they are impacted by any issues addressed by this release.

The Qube! Installer should automatically pick up this new version when the 6.9-2 manifest is chosen from the public repository, or when a local copy of the 6.9-2a manifest is chosen on a host with internet access.

Notable fixes and changes are:

Code: [Select]
@CHANGE: background helper thread improvements
* limit the number of workers that are potentially recontacted by the background helper routine to 50 per iteration.
* background thread exits and refreshes after running for approximately 1 hour, as opposed to 24 hours

@CHANGE: job queries requesting subjob and/or work details must now explicitly provide job IDs.
Both the qbjobinfo() C++ and qb.jobinfo() Python APIs now reject ID-less queries and return an error.
For example, the Python call "qb.jobinfo(subjobs=True)" will raise a runtime exception.
It must now be called like "qb.jobinfo(subjobs=True, id=12345)" or "qb.jobinfo(subjobs=True, id=[1234,5678])".

@FIX: shortened the timeout for "qbreportwork", when it reports a "failed" work item that has migrate_on_frame_retry set, from 600 seconds to 20.
The old timeout was causing 10-minute pauses on the job instance when a frame failed after exhausting all of its retry counts.
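
For scripts affected by the jobinfo change above, a minimal sketch of the new calling convention (assuming only that the qb Python module is on your path; job-dict fields beyond 'id' may vary by version):

Code: [Select]
import qb

# 6.9-2a and later: subjob/work detail queries must name explicit job IDs
jobs = qb.jobinfo(subjobs=True, id=[1234, 5678])
for job in jobs:
    # every returned job dict carries at least its numeric 'id'
    print(job['id'])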


Please review the release notes to see if you are experiencing an issue that may be resolved by this release.

http://docs.pipelinefx.com/display/RELNOTES/Qube+6.9-2a+Release+Notes
http://docs.pipelinefx.com/display/RELNOTES/ArtistView+6.9-2a+Release+Notes
http://docs.pipelinefx.com/display/RELNOTES/WranglerView+6.9-2a+Release+Notes

44
Announcements / Qube! 6.10 released
« Last post by jburk on July 06, 2017, 12:30:13 AM »
Today, we have released Qube! 6.10.

To download this latest version, you can visit our website:
https://www.pipelinefx.com/downloadversions/

To see what platforms or packages are supported in this release, you can visit this page:
https://www.pipelinefx.com/technical-specifications/

What's New in Qube! 6.10-0
  • Online Performance Reports - Accessible through the metered licensing website, Online Performance Reporting provides by-the-minute reports that give users actionable data on how their supervisors are doing at any given time. With an initial focus on operations and stability, the graphs reveal site stress, distribution rates, and other data points that help wranglers reallocate and troubleshoot. For 6.10-0, online performance reporting is only available on Linux; support for OS X and Windows will be added in the future.
  • Online License Keys - Perpetual license keys created for Qube! 6.10 and after will now be downloaded via the metered licensing website. Additional information on license history and total worker numbers is also available, with more management features planned for the future.
  • Microsoft Azure Beta Integration - During the beta release of this feature in Qube! 6.10-0, users will have command line access to Microsoft Azure, allowing them to start and stop cloud nodes from within Qube!. Plans for Azure node management through ArtistView are underway.
  • Clarisse Renderer Support - Clarisse renders can now be dispatched from within Qube! via command-line, in-app, and load-once job submissions.
  • Updated Shotgun Integration - The integrated Shotgun user interface has undergone several minor improvements:
    • Qube! Images to Movie submission will pull in specific data from Shotgun
    • the movie upload script has been updated, fixing a major bug that prevented automated movie uploads to Shotgun
  • Partner Licensing Daemon – This helper Daemon will be the foundation for future cloud service provider integrations.
  • C4D Take System – Available only in ArtistView, Qube! now supports submission for the Cinema 4D Take System.
  • EXR Support in Modo – Create one job submission per EXR layer from Modo.
  • Deferred Table Creation – Added as an option in a previous version of Qube!, this optimization for submitting a large number of jobs simultaneously will be on by default starting with 6.10-0.
  • Job Modification – Job modifications now run multi-threaded, as opposed to single-threaded in previous versions of Qube!, and logs for modified jobs include additional verbosity.
  • Linux Platform Support – Added support for CentOS 7.3.
Deprecation
With this version of Qube!, we no longer support the XSI and MTOR job types.
46
As part of an ongoing effort to allow IT organizations to consume only the exact amount of public cloud computing resources they need, Google this week announced it has removed the memory caps attached to any virtual machine. In addition, Google claims it has become the first public cloud provider to make the latest generation of Intel Xeon processors, codenamed Skylake, generally available via the Google Cloud Platform (GCP).

Paul Nash, group product manager for GCP, says Google is taking pains to enable IT organizations to consume virtual machines without requiring them to commit to specific sizes, or even hourly blocks of time, that ultimately wind up forcing them to pay for unused resources.

A Skylake-based instance can now be configured with up to 455GB of RAM. Rather than setting specific memory limits, Nash says, IT organizations can now determine how much memory they want to allocate to a virtual CPU instance. That approach is intended to be especially appealing to organizations aiming to deploy, for example, in-memory databases on a public cloud.

“We’re starting to see more deployments of applications such as SAP HANA databases or analytics applications by enterprise customers,” says Nash.

At the same time, via a new Minimum CPU Platform feature, Google is now allowing IT organizations to select a specific CPU platform for VMs in any given zone. GCP will always schedule a virtual machine to run on that class of CPU family or better.

It’s clear that Google is now spending a lot more time and energy courting enterprise customers. While public clouds have been around for 10 years, most enterprise IT organizations are just now making public clouds a standard deployment option for their applications. That doesn’t mean everything will be moving into a public cloud. But it does mean that before making any substantial commitments, many enterprise IT organizations are likely to be very particular about the terms and conditions offered by a public cloud service provider.
47
Shortly after Avere Systems released its first storage appliances back in 2009, its products caught on with visual effects facilities, which saw big benefits in placing the company's high-performance storage tiers in between their existing storage architecture and the render farms that needed to quickly access large amounts of data.

Avere’s FXT filers provided easy scalability for render farms while taking pressure off other parts of the network. Last month, Avere Systems said longtime customer Sony Pictures Imageworks was deploying Avere’s new FXT Edge Filer 5600, improving throughput by 50 percent without replacing any of its existing storage architecture, as part of a recent 20 percent expansion in rendering capabilities — with plans to expand by another 20 percent over the next year. We spoke to Avere VP of Marketing Rebecca Thompson to get some background on how Sony is using the new hardware, the difference between filer clusters on premise and in the cloud, and how smaller studios can use the technology to spin up serious rendering power on demand.

StudioDaily: In Sony’s case, the FXT filers are basically being used as a high-performance layer between the render farm and the storage infrastructure, correct?
Rebecca Thompson: The primary purpose is to accelerate the performance of the render farm and be able to scale out effectively. But at the same time, they want to make sure the artists’ workflow doesn’t get disrupted. The artists are accessing the same storage servers, but the renders are so resource-intensive that if you don’t think about the architecture carefully you can end up starving out your artists — the renders go on in the background and the artists can’t access anything. Render farms don’t pick up phones and call and complain. Producers will complain if their stuff’s not getting done on time, but artists will pick up the phone and complain if they can’t get their editing and compositing done.

We love the Sony story because they have been a long-term customer of Avere’s. They were one of our first production customers in the media space back in 2010, and as they’ve grown we’ve grown, too. Their render farm was probably about a quarter of what it is now, but all along the way they have been a repeat customer on a pretty consistent basis. I know they are excited. The last one they put in was our new hardware, the 5600, which was our high-end model with a 4x improvement in SSD. We went from 14TB of SSD to 29TB of SSD in that box, and it went from close to 7GB/s in read throughput up to 11GB/s.

That’s fast. And it’s nice that you can put this in without completely reinventing your architecture.
That’s one of the things that we are conscious of every time we come out with a new model. Our models work in a clustered fashion, so a customer can have anywhere from three to more than 25 nodes in a single cluster. Let’s say you have a cluster of 10 boxes. You want to put in three new nodes. You don’t have to take anything down. They will just auto-join. They don’t have to be the same models. And that’s really nice for customers. They can keep their older Avere gear and make use of that, and then drop in the new stuff and get the advantages, and everything works well and plays well together.

If customers are using a mix of on-premise and off-premise storage, or are using some cloud capacity for storage or rendering, can they also take advantage of this technology to increase their throughput when they need it?
Absolutely. Sony has an infrastructure that’s probably typical of larger VFX studios. They have a large data center in Culver City, but a lot of their production work is done up in Vancouver. They have Avere clusters on their remote sites as well as within the data center. And the remote sites are WAN-caching. You have all the data local to Vancouver, but you also have copies back in the L.A.-based data center. That’s the way they’re using it.

Now, we have other customers, particularly in the rendering space, who do something that we call cloud-bursting. That’s where they want to use cloud compute nodes rather than cloud storage. We have customers who work with both Google and Amazon Web Services [AWS], and they are probably split evenly — rendering is one area where I think Google has done better and made more inroads in the M&E space. So we have a virtual version of our product. Instead of the physical product, it’s our Avere operating system in a virtual format, residing up in a cloud on a platform that we specify. We have a couple of different flavors in each cloud provider, to say we require X amount of SSD capacity and X amount of memory and it resides on those and acts as a caching layer. That allows people to keep their data on premise. Let’s say you have Isilon or NetApp or whatever storage hardware. You can send to the cloud only the amount of data you need to render, render it, and send it back on premise. A lot of studios are reluctant to store data in the cloud over the long term. Sony is very vertically oriented, making their own movies and doing their own VFX work. But a lot of the VFX studios are doing contract work on projects like the Disney Marvel movies where there are a lot of restrictions in place around security. You want to make sure the movies don’t leak out before release. So we actually have customers who have physical nodes of ours for use on premise, and then they’ll spin up more [in the cloud].

For full story, go to Studio Daily:  http://www.studiodaily.com/2017/05/avere-systems-vfx-render-farms-on-premise-in-cloud/


48
Announcements / PipelineFX.com and Forum Moved to New Hosting Company
« Last post by pipelinescott on April 12, 2017, 02:51:10 AM »
The hosting of our corporate website at www.pipelinefx.com has moved to a new provider! So has the hosting of our forum.
Our corporate site domain has not changed. You can still find all of the information you are looking for at www.pipelinefx.com.

Our corporate site and forum are now at the same provider where our Documentation and Metered Licensing were already hosted. Those two domains are unchanged:
docs.pipelinefx.com & metered.pipelinefx.com

Our forum, on the other hand, has a new base URL.
Changed from: www.pipelinefx.com/forum/
Changed to: forum.pipelinefx.com

All other PipelineFX touch points including FTP, support, email aliases, etc. remain the same.

Thanks!
Scott
49
Jobtypes and Applications / Re: Renderman collision issue
« Last post by jburk on April 04, 2017, 04:59:54 PM »
Which jobtype are you submitting with?  One way to check is to view the job's "prototype" value in WranglerView or ArtistView.

If you're using the "maya" jobtype, there's not really a command line available for modification, but if you're using the batchRender "cmdrange" or "pyCmdrange" jobtypes, the submission interfaces have a "command template" where you can add this extra parameter (see the sketch below).  Or are you exporting .RIB files and rendering with one of Pixar's command-line utilities?
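
For instance, a script-submitted cmdrange job with RenderMan's -batchContext flag added might look roughly like this. This is only a sketch: the Render flags, scene path, and frame range are placeholders, not your actual settings.

Code: [Select]
import qb

job = {
    'name': 'rman_render',
    'prototype': 'cmdrange',
    'package': {
        # -batchContext gives each batch render a unique context so that
        # concurrent instances don't collide writing the same files;
        # QB_FRAME_NUMBER is expanded per agenda item at dispatch time
        'cmdline': 'Render -r rman -batchContext $JOBDATETIME '
                   '-s QB_FRAME_NUMBER -e QB_FRAME_NUMBER /path/to/scene.ma',
    },
    'agenda': qb.genframes('1-100'),  # one work item per frame
}

qb.submit([job])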

As well, CCA is entitled to customer support; you can open a support case by sending mail to support@pipelinefx.com from your @cca.edu email address.
50
Jobtypes and Applications / Renderman collision issue
« Last post by dfischer-walker on April 04, 2017, 04:16:43 PM »
Hello,

My school is having some issues rendering Maya scenes with the RenderMan renderer. I’ve browsed both the Qube and RenderMan forums and have found an argument that will fix this issue, but need help implementing it in the Qube submission so that RenderMan will render correctly. Currently, whenever we render with RenderMan using Qube as the queuing system, many of our jobs fail, or they complete but we don't see any images. From what I have gathered, this is a collision issue caused by servers trying to write the same files to the same directory. RenderMan has a built-in workaround, but you have to place it into the correct command parameters. From what the RenderMan team has told me, the argument that has to be added is "-batchContext $JOBDATETIME", but we're not sure which command parameters it should go under. If you could help us figure out where that argument belongs, that would be greatly appreciated. I will include links to the RenderMan forums as well. If you have any questions, I am more than happy to answer them.

Also, we know that this is not a network bandwidth issue or an issue with Qube, because we have rendered Arnold, Mental Ray, and even Maya Software renders without a problem.

https://renderman.pixar.com/forum/showthread.php?s=&threadid=33872&highlight=qube

https://renderman.pixar.com/forum/showthread.php?s=&threadid=34352

Thank you