Author Topic: How Avere Systems Helps VFX Scale Up Render Farms on Premise and in the Cloud  (Read 3276 times)

Render Guru

Shortly after Avere Systems released its first storage appliances back in 2009, its products caught on with visual effects facilities, which saw big benefits in placing the company’s high-performance storage tier between their existing storage architecture and the render farms that needed to quickly access large amounts of data.

Avere’s FXT filers provided easy scalability for render farms while taking pressure off other parts of the network. Last month, Avere Systems said longtime customer Sony Pictures Imageworks was deploying Avere’s new FXT Edge Filer 5600, improving throughput by 50 percent without replacing any of its existing storage architecture, as part of a recent 20 percent expansion in rendering capabilities — with plans to expand by another 20 percent over the next year. We spoke to Avere VP of Marketing Rebecca Thompson to get some background on how Sony is using the new hardware, the difference between filer clusters on premise and in the cloud, and how smaller studios can use the technology to spin up serious rendering power on demand.

StudioDaily: In Sony’s case, the FXT filers are basically being used as a high-performance layer between the render farm and the storage infrastructure, correct?
Rebecca Thompson: The primary purpose is to accelerate the performance of the render farm and be able to scale out effectively. But at the same time, they want to make sure the artists’ workflow doesn’t get disrupted. The artists are accessing the same storage servers, but the renders are so resource-intensive that if you don’t think about the architecture carefully you can end up starving out your artists — the renders go on in the background and the artists can’t access anything. Render farms don’t pick up phones and call and complain. Producers will complain if their stuff’s not getting done on time, but artists will pick up the phone and complain if they can’t get their editing and compositing done.

We love the Sony story because they have been a long-term customer of Avere’s. They were one of our first production customers in the media space back in 2010, and as they’ve grown we’ve grown, too. Their render farm was probably about a quarter of what it is now, but all along the way they have been a repeat customer on a pretty consistent basis. I know they are excited. The last one they put in was our new hardware, the 5600, which is our high-end model with a 4x improvement in SSD. We went from 14 TB of SSD to 29 TB of SSD in that box, and it went from close to 7 GB/s in read throughput up to 11 GB/s.

That’s fast. And it’s nice that you can put this in without completely reinventing your architecture.
That’s one of the things that we are conscious of every time we come out with a new model. Our models work in a clustered fashion, so a customer can have anywhere from three to more than 25 nodes in a single cluster. Let’s say you have a cluster of 10 boxes. You want to put in three new nodes. You don’t have to take anything down. They will just auto-join. They don’t have to be the same models. And that’s really nice for customers. They can keep their older Avere gear and make use of that, and then drop in the new stuff and get the advantages, and everything works well and plays well together.

If customers are using a mix of on-premise and off-premise storage, or are using some cloud capacity for storage or rendering, can they also take advantage of this technology to increase their throughput when they need it?
Absolutely. Sony has an infrastructure that’s probably typical of larger VFX studios. They have a large data center in Culver City, but a lot of their production work is done up in Vancouver. They have Avere clusters at their remote sites as well as within the data center. And the remote sites are WAN-caching: all the data is local to Vancouver, but there are also copies back in the L.A.-based data center. That’s the way they’re using it.

Now, we have other customers, particularly in the rendering space, who do something that we call cloud-bursting. That’s where they want to use cloud compute nodes rather than cloud storage. We have customers who work with both Google and Amazon Web Services [AWS], and they are probably split evenly — rendering is one area where I think Google has done better and made more inroads in the M&E space. So we have a virtual version of our product. Instead of the physical product, it’s our Avere operating system in a virtual format, residing in a cloud on a platform that we specify. We offer a couple of different flavors in each cloud provider, each specifying how much SSD capacity and memory we require; the virtual filer resides on those instances and acts as a caching layer.

That allows people to keep their data on premise. Let’s say you have Isilon or NetApp or whatever storage hardware. You can send to the cloud only the amount of data you need to render, render it, and send it back on premise. A lot of studios are reluctant to store data in the cloud over the long term. Sony is very vertically oriented, making their own movies and doing their own VFX work. But a lot of the VFX studios are doing contract work on projects like the Disney Marvel movies, where there are a lot of restrictions in place around security — you want to make sure the movies don’t leak out before release. So we actually have customers who have physical nodes of ours for use on premise, and then they’ll spin up more [in the cloud].
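The cloud-bursting flow described above — stage only the data a job needs, render in the cloud, send the results back, keep long-term storage on premise — can be sketched as a small shell script. Everything here is a hypothetical illustration: the paths, file names, manifest, and the placeholder "render" step are stand-ins invented for this sketch, not Avere’s actual tooling, and a real pipeline would use rsync or a cloud provider’s CLI rather than plain cp.

```shell
#!/bin/sh
# Hypothetical sketch of a cloud-bursting workflow. Local directories stand
# in for the on-premise filer and the cloud-side caching layer.
set -e

ON_PREM=./on_prem_storage     # stands in for the on-premise filer
BURST=./cloud_burst_mount     # stands in for the cloud-side caching layer

# Seed some example on-premise data.
mkdir -p "$ON_PREM/shots/sh010"
echo "scene description" > "$ON_PREM/shots/sh010/scene.usd"
echo "unrelated asset"   > "$ON_PREM/shots/archive.bin"

# 1. Stage only the files the job manifest lists, not the whole filer.
while read -r f; do
    mkdir -p "$BURST/$(dirname "$f")"
    cp "$ON_PREM/$f" "$BURST/$f"
done <<EOF
shots/sh010/scene.usd
EOF

# 2. Render in the cloud (placeholder for the real render command).
echo "frame data" > "$BURST/shots/sh010/frame.0001.exr"

# 3. Send only the rendered frames back on premise; long-term storage
#    stays on the local filer.
( cd "$BURST" && find . -name '*.exr' ) | while read -r f; do
    mkdir -p "$ON_PREM/$(dirname "$f")"
    cp "$BURST/$f" "$ON_PREM/$f"
done
```

The point of the sketch is the data movement pattern, not the tools: the cloud side only ever holds the minimal working set for in-flight jobs, which matches the security posture described above for contract work.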

For the full story, go to Studio Daily: