Recent Posts

11
As part of an ongoing effort to let IT organizations consume only the exact amount of public cloud computing resources they need, Google this week announced it has removed the memory caps on its virtual machines. In addition, Google claims it has become the first public cloud provider to make the latest generation of Intel Xeon processors, codenamed Skylake, generally available via the Google Cloud Platform (GCP).

Paul Nash, group product manager for GCP, says Google is taking pains to let IT organizations consume virtual machines without committing to specific sizes, or even hourly blocks of time, that ultimately force them to pay for unused resources.

A Skylake-based instance can now be configured with up to 455GB of RAM. Rather than working within fixed memory limits, Nash says, IT organizations can now determine how much memory they want to allocate to a virtual CPU instance. That approach is intended to be especially appealing to IT organizations aiming to deploy, for example, in-memory databases on a public cloud.

“We’re starting to see more deployments of applications such as SAP HANA databases or analytics applications by enterprise customers,” says Nash.

At the same time, via a new Minimum CPU Platform feature, Google is now allowing IT organizations to select a specific CPU platform for VMs in any given zone. GCP will always schedule a virtual machine to run on that class of CPU family or better.
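
Both capabilities surface through the gcloud command line when creating a custom machine type. The sketch below is illustrative only: the instance name, zone and sizes are made up, and the extended-memory and minimum-CPU-platform options shipped in the beta command group at the time:

    # Custom VM with memory beyond the old per-vCPU cap, pinned to
    # Skylake-or-newer CPUs (all names and sizes are illustrative).
    gcloud beta compute instances create hana-node-1 \
        --zone us-central1-a \
        --custom-cpu 8 \
        --custom-memory 416GB \
        --custom-extensions \
        --min-cpu-platform "Intel Skylake"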

It’s clear that Google is now spending a lot more time and energy courting enterprise customers. While public clouds have been around for 10 years, most enterprise IT organizations are just now making public clouds a standard deployment option for their applications. That doesn’t mean everything will be moving into a public cloud. But it does mean that before making any substantial commitments, many enterprise IT organizations are likely to be very particular about the terms and conditions offered by a public cloud service provider.
12
Shortly after Avere Systems released its first storage appliances back in 2009, its products caught on with visual effects facilities, which saw big benefits in placing the company’s high-performance storage tiers between their existing storage architecture and the render farms that needed to quickly access large amounts of data.

Avere’s FXT filers provided easy scalability for render farms while taking pressure off other parts of the network. Last month, Avere Systems said longtime customer Sony Pictures Imageworks was deploying Avere’s new FXT Edge Filer 5600, improving throughput by 50 percent without replacing any of its existing storage architecture, as part of a recent 20 percent expansion in rendering capabilities — with plans to expand by another 20 percent over the next year. We spoke to Avere VP of Marketing Rebecca Thompson to get some background on how Sony is using the new hardware, the difference between filer clusters on premise and in the cloud, and how smaller studios can use the technology to spin up serious rendering power on demand.

StudioDaily: In Sony’s case, the FXT filers are basically being used as a high-performance layer between the render farm and the storage infrastructure, correct?
Rebecca Thompson: The primary purpose is to accelerate the performance of the render farm and be able to scale out effectively. But at the same time, they want to make sure the artists’ workflow doesn’t get disrupted. The artists are accessing the same storage servers, but the renders are so resource-intensive that if you don’t think about the architecture carefully you can end up starving out your artists — the renders go on in the background and the artists can’t access anything. Render farms don’t pick up phones and call and complain. Producers will complain if their stuff’s not getting done on time, but artists will pick up the phone and complain if they can’t get their editing and compositing done.

We love the Sony story because they have been a long-term customer of Avere’s. They were one of our first production customers in the media space back in 2010, and as they’ve grown we’ve grown, too. Their render farm was probably about a quarter of what it is now, but all along the way they have been a repeat customer on a pretty consistent basis. I know they are excited. The last one they put in was our new hardware, the 5600, which was our high-end model with a 4x improvement in SSD. We went from 14TB of SSD to 29TB of SSD in that box, and it went from close to 7 gigs in read throughput up to 11 gigs.

That’s fast. And it’s nice that you can put this in without completely reinventing your architecture.
That’s one of the things that we are conscious of every time we come out with a new model. Our models work in a clustered fashion, so a customer can have anywhere from three to more than 25 nodes in a single cluster. Let’s say you have a cluster of 10 boxes. You want to put in three new nodes. You don’t have to take anything down. They will just auto-join. They don’t have to be the same models. And that’s really nice for customers. They can keep their older Avere gear and make use of that, and then drop in the new stuff and get the advantages, and everything works well and plays well together.

If customers are using a mix of on-premise and off-premise storage, or are using some cloud capacity for storage or rendering, can they also take advantage of this technology to increase their throughput when they need it?
Absolutely. Sony has an infrastructure that’s probably typical of larger VFX studios. They have a large data center in Culver City, but a lot of their production work is done up in Vancouver. They have Avere clusters on their remote sites as well as within the data center. And the remote sites are WAN-caching. You have all the data local to Vancouver, but you also have copies back in the L.A.-based data center. That’s the way they’re using it.

Now, we have other customers, particularly in the rendering space, who do something that we call cloud-bursting. That’s where they want to use cloud compute nodes rather than cloud storage. We have customers who work with both Google and Amazon Web Services [AWS], and they are probably split evenly — rendering is one area where I think Google has done better and made more inroads in the M&E space. So we have a virtual version of our product. Instead of the physical product, it’s our Avere operating system in a virtual format, residing up in a cloud on a platform that we specify. We have a couple of different flavors in each cloud provider, specifying that we require X amount of SSD capacity and X amount of memory, and it resides on those instances and acts as a caching layer. That allows people to keep their data on premise. Let’s say you have Isilon or NetApp or whatever storage hardware. You can send to the cloud only the amount of data you need to render, render it, and send it back on premise. A lot of studios are reluctant to store data in the cloud over the long term. Sony is very vertically oriented, making their own movies and doing their own VFX work. But a lot of the VFX studios are doing contract work on projects like the Disney Marvel movies, where there are a lot of restrictions in place around security. You want to make sure the movies don’t leak out before release. So we actually have customers who have physical nodes of ours for use on premise, and then they’ll spin up more [in the cloud].

For full story, go to Studio Daily:  http://www.studiodaily.com/2017/05/avere-systems-vfx-render-farms-on-premise-in-cloud/


13
Announcements / PipelineFX.com and Forum Moved to New Hosting Company
« Last post by pipelinescott on April 12, 2017, 02:51:10 AM »
The hosting of our corporate website at www.pipelinefx.com has moved to a new provider! So has the hosting of our forum.
Our corporate site domain has not changed. You can still find all of the information you are looking for at www.pipelinefx.com.

Our corporate site and forum are now hosted at the same provider where our Documentation and Metered Licensing were already hosted. Those two domains remain the same:
docs.pipelinefx.com & metered.pipelinefx.com

Our forum, on the other hand, has a new base URL.
Changed from: www.pipelinefx.com/forum/
Changed to: forum.pipelinefx.com

All other PipelineFX touch points including FTP, support, email aliases, etc. remain the same.

Thanks!
Scott
14
Jobtypes and Applications / Re: Renderman collision issue
« Last post by jburk on April 04, 2017, 04:59:54 PM »
Which jobtype are you submitting with?  One way to check is to view the job's "prototype" value in WranglerView or ArtistView.

If you're using the "maya" jobtype, there's not really a command line available for modification, but if you're using the batchRender "cmdrange" or "pyCmdrange" jobtypes, the submission interfaces have a "command template" where you can add this extra parameter. Or are you exporting .RIB files and rendering with one of Pixar's command-line utilities?
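
For illustration, assuming a cmdrange-style submission where QB_FRAME_START and QB_FRAME_END are Qube's frame-range substitution tokens, the command template might end up looking something like this (the scene and output paths are hypothetical):

    Render -r rman -batchContext $JOBDATETIME \
        -s QB_FRAME_START -e QB_FRAME_END \
        -rd /projects/show/renders /projects/show/scenes/shot010.mb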

As well, CCA is entitled to customer support; you can open a support case by sending mail to support@pipelinefx.com from your @cca.edu email address.
15
Jobtypes and Applications / Renderman collision issue
« Last post by dfischer-walker on April 04, 2017, 04:16:43 PM »
Hello,

My school is having some issues rendering Maya scenes with the RenderMan renderer. I’ve browsed both the Qube and RenderMan forums and have found an argument that will fix this issue, but need help implementing it in the Qube submission so that RenderMan will render correctly. Currently, whenever we render with RenderMan using Qube as the queuing system, many of our jobs will fail, or they will complete but we won't see any images. From what I have gathered, this is a collision issue caused by servers trying to write the same files to the same directory. RenderMan has a built-in workaround, but you have to place it into the correct command parameters in order to fix this. From what the RenderMan team has told me, the argument that has to be added is "-batchContext $JOBDATETIME", but we're not sure under which command parameters it should go. If you could help us figure out where that argument belongs, that would be greatly appreciated. I will include links to the RenderMan forums as well. If you have any questions, I am more than happy to answer them.

Also, we know that this is not a network bandwidth issue or an issue with Qube because we have rendered out Arnold, Mental Ray, and even Maya Software renders without a problem.

https://renderman.pixar.com/forum/showthread.php?s=&threadid=33872&highlight=qube

https://renderman.pixar.com/forum/showthread.php?s=&threadid=34352

Thank you
16
Google’s Cloud Platform improves its free tier and adds always-free compute and storage services

Google today quietly launched an improved always-free tier and trial program for its Cloud Platform.

The free tier, which now offers enough power to run a small app in Google’s cloud, is offered in addition to an expanded free trial program (yep — that’s all a bit confusing). This free trial gives you $300 in credits that you can use over the course of 12 months. Previously, Google also offered the $300 in credits, but those had to be used within 60 days.

The free tier, which the company never really advertised, now allows for free usage of a small (f1-micro) Compute Engine instance, as well as Cloud Pub/Sub, Google Cloud Storage and Cloud Functions. In total, the free tier now includes 15 services.

The addition of the Compute Engine instance and 5GB of free Cloud Storage usage is probably the most important update here because those are, after all, the services that are at the core of most cloud applications. You can find the exact limits here.

It’s worth noting that the free tier is only available in Google’s us-east1, us-west1 and us-central1 regions.
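
For example, spinning up the always-free f1-micro in one of those regions is a one-liner with the gcloud CLI (the instance name here is illustrative):

    # f1-micro in a free-tier-eligible US region
    gcloud compute instances create free-tier-test \
        --zone us-west1-a \
        --machine-type f1-micro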

With this move, Google is clearly stepping up its attacks against AWS, which offers a similar but more limited free tier and free 12-month trial program for its users. Indeed, many of Google’s limits look fairly similar to AWS’s 12-month free tier, but the AWS always-free tier doesn’t include a free virtual machine, for example (you only get it for free for 12 months). I expect AWS will pull even with Google in the near future and extend its free offer, too.

The idea here is clearly to get people comfortable with Google’s platform. It’s often the developer who runs a hobby project in the cloud who gets the whole team in an enterprise to move over, or who decides to use a certain public cloud to run a startup’s infrastructure. New developers, too, typically choose AWS to learn about the cloud because of its free 12-month trials. The 60-day $300 credit Google previously offered simply didn’t cut it for developers who wanted to learn how to work with Google’s cloud.

full article published here:  https://techcrunch.com/2017/03/09/googles-cloud-platform-improves-its-free-tier-and-adds-always-free-compute-and-storage-services/
17
Announcements / Qube 6.9-2 released
« Last post by jburk on February 27, 2017, 08:16:51 PM »
This is a maintenance release of the Qube! Core/Supervisor/Worker/ArtistView/WranglerView products.

This is a recommended release for all customers running Qube v6.9-0; customers already running v6.9-1 need only upgrade if they are impacted by any issues addressed by this release.

Notable changes and fixes are:
  • Supervisor and worker now use systemd unit files for managing startup and shutdown on CentOS 7+ (see the example after this list)
  • Jobs in a "badlogin" state can now be retried or killed
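
For example, on CentOS 7 the daemons can then be managed with the standard systemctl commands. The unit names below follow Qube's traditional init-script names, but they are an assumption; check your install for the exact names:

    sudo systemctl status supervisor.service    # check supervisor state (unit name assumed)
    sudo systemctl restart worker.service       # restart the worker daemon (unit name assumed)
    sudo systemctl enable supervisor.service    # have the supervisor start at boot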

Please see http://docs.pipelinefx.com/display/RELNOTES/PipelineFX+Release+Notes for complete release notes.
18
Rendering in the Cloud / Google launches GPU support for its Cloud Platform
« Last post by Render Guru on February 22, 2017, 06:44:21 PM »
Three months ago, Google announced it would in early 2017 launch support for high-end graphics processing units (GPUs) for machine learning and other specialized workloads. It’s now early 2017 and, true to its word, Google today officially made GPUs on the Google Cloud Platform available to developers. As expected, these are Nvidia Tesla K80 GPUs, and developers will be able to attach up to eight of these to any custom Compute Engine machine.

These new GPU-based virtual machines are available in three Google data centers: us-east1, asia-east1 and europe-west1. Every K80 core features 2,496 of Nvidia’s stream processors with 12 GB of GDDR5 memory (the K80 board features two cores and 24 GB of RAM).
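
As a sketch, attaching K80s to a Compute Engine VM at creation time looks something like this with the gcloud CLI (instance name, zone and machine type are illustrative; the feature launched under the beta command group, and GPU instances must be set to terminate rather than live-migrate during maintenance):

    gcloud beta compute instances create gpu-render-node \
        --zone us-east1-d \
        --machine-type n1-standard-8 \
        --accelerator type=nvidia-tesla-k80,count=2 \
        --maintenance-policy TERMINATE \
        --restart-on-failure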

You can never have too much compute power when you’re running complex simulations or using a deep learning framework like TensorFlow, Torch, MXNet or Caffe. Google is clearly aiming this new feature at developers who regularly need to spin up clusters of high-end machines to power their machine learning frameworks. The new Google Cloud GPUs are integrated with Google’s Cloud Machine Learning service and its various database and storage platforms.

The cost per GPU is $0.70 per hour in the U.S. and $0.77 in the European and Asian data centers. That’s not cheap, but a Tesla K80 accelerator with two cores and 24 GB of RAM will easily set you back a few thousand dollars, too.

The announcement comes only a few weeks before Google is scheduled to host its Cloud NEXT conference in San Francisco — where chances are we’ll hear quite a bit more about the company’s plans for making its machine learning services available to even more developers.

Read full story here:  https://techcrunch.com/2017/02/21/google-launches-gpu-support-for-its-cloud-platform/
19
Google Cloud Platform: How Preemptible Instances Can Bolster Your Cloud Cost-Optimization Strategy

Back in August last year, Google Cloud Platform announced price reductions of up to 33% on its Preemptible Virtual Machines (VMs), which could deliver potential savings of up to 80% on the cost of its regular on-demand instances.

The move was just the latest in a long series of price cuts designed to attract more enterprise customers to the platform and challenge the dominance of Amazon and Microsoft in the cloud infrastructure marketplace.

But what exactly are Preemptible VMs? And what applications are suitable for deployment to the service? In this post, we run through the main features of Preemptible VMs and compare them with Amazon’s rival discount offering Spot Instances.

What Are Preemptible VMs?

Launched in May 2015, Preemptible VMs are a disposable class of instances through which Google offers excess compute capacity at a much lower price compared with its standard machines.

They’re just like their regular VM counterparts, except that they automatically shut down after 24 hours. What’s more, at any time during that period, they may be shut down at short notice whenever the vendor needs to reclaim spare capacity to meet compute demands elsewhere.

Google takes a number of factors into consideration in its preemption process. For example, it avoids stopping too many VMs from a single customer and gives preference to instances that have been running longer. In other words, instances are more at risk of preemption when they first start running. However, apart from any separate licensing costs, you’re not charged if Google stops your instance within the first 10 minutes.

Charges for Preemptible VMs are typically around 20% of the full on-demand price. Based on average usage, which takes into account partial sustained-use discounts, they generally work out at about 25% of the running cost of the equivalent standard machine. The service carries no guarantee of availability and no SLA.
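
As a minimal sketch, a preemptible instance is requested with a single extra flag at creation time (instance name, zone and machine type are illustrative):

    gcloud compute instances create preempt-worker-1 \
        --zone us-central1-b \
        --machine-type n1-highcpu-16 \
        --preemptible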

What Applications Are a Good Fit?

Preemptible VMs are a great way to reduce the cost of processing large-scale workloads that aren’t time sensitive and require massive amounts of compute resources. They’re well suited to security testing, short-term scaling, media encoding, financial or scientific analysis and insurance risk management, where you can spin up hundreds or thousands of machines and quickly complete a job in a short space of time.

They’re a particularly useful option as part of the VM mix for massively parallel processing applications, such as Hadoop and Spark, as they can complement Google’s conventional VMs to enhance performance. However, they’re not suitable for mission-critical services such as operational databases and Internet applications.

How Should Your Workloads Handle Preemption?

First of all, if you haven’t done so already, you should build fault tolerance into your application. You can also mitigate the impact of preemption by combining regular instances with Preemptible VMs in your clusters, thereby ensuring you maintain a baseline level of compute availability.

Alternatively, you can create a shutdown script that responds to preemption alerts by automatically launching regular instances to cover your shortfall in compute capacity. But, if costs are paramount and you’re prepared to wait, your script should simply clean up and save your job so it can pick up where it left off.
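
As a sketch of that pattern: Compute Engine runs whatever script is registered under the shutdown-script metadata key when preemption begins, so the script has roughly 30 seconds to save state. The bucket and file paths here are illustrative:

    #!/bin/bash
    # shutdown.sh -- runs when GCE begins reclaiming this instance.
    # Copy the job's checkpoint to durable storage so a fresh VM can resume it.
    gsutil cp /scratch/checkpoint.dat gs://example-render-bucket/checkpoints/$(hostname).dat

The script is attached when the instance is created, for example:

    gcloud compute instances create preempt-worker-1 --preemptible \
        --zone us-central1-b \
        --metadata-from-file shutdown-script=shutdown.sh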

In either case, you should test your application’s response to a preemption event. You can do this by stopping the VM and checking that your script correctly completes your shutdown procedure.
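
Stopping the instance triggers the same shutdown-script path, so something like the following approximates a preemption event (instance name illustrative):

    gcloud compute instances stop preempt-worker-1 --zone us-central1-b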

However, if you want to avoid preemption in the first place, you should look to run your instances at off-peak times, such as nights and weekends, when the risk of disruption is at a minimum.

How Do They Compare with Spot Instances?

Preemptible VMs are very similar to Amazon’s Spot Instances. Nevertheless, there are several key differences between them.

Preemptible VMs are charged at fixed rates, which are individually set according to the type and size of instance. Availability depends on the vendor’s level of spare capacity.

By contrast, allocation of Spot Instances is based on a bidding process, where your machine will run as long as your bid price is above the current Spot price on the Spot Market. Provided you don’t cancel your request, the machine will restart whenever the Spot price falls below your bid price. And while your machine is running the actual price you pay is the Spot price.

Some enterprise customers will prefer the more predictable costs of Preemptible VMs. Others will like the flexibility of Spot Instances: they don’t automatically terminate after 24 hours, and you can bid at a higher price to reduce the risk of interruption.

Preemptible VM costs are rounded up to the nearest minute. However, if you terminate your instance within the first 10 minutes, usage is rounded up to a full 10 minutes. By contrast, the costs for Spot Instances are rounded up to the nearest hour. But, if your Spot Instance is interrupted, you don’t pay for your last partial hour of usage.

Finally, Google gives you only 30 seconds’ notice of preemption, while Amazon gives you 2 minutes’ notice of interruption. And Preemptible VMs are available for any Compute Engine instance type, whereas Spot Instances aren’t available for the burstable T2 family of instances.

As yet, there is no Microsoft equivalent of either service.

A Highly Capable Offering

Google is a relative newcomer to the cloud computing arena and still playing catch-up with Amazon and Microsoft in terms of available features and market share. Nevertheless, it has already established itself as a highly capable offering and an ideal cloud platform for specific use cases.

However, in the company’s drive to attract new enterprise customers, pricing will continue to play an important role in its marketing strategy. So, if your applications are a good fit for the platform, you can expect further cost-cutting campaigns and even better value for money from your cloud infrastructure expenditure. And all the more so if you optimize your costs, adopt a flexible approach to your workloads and take advantage of cost-saving options such as Preemptible VMs.

Published by Jonathan Maresky
Published on Virtual Strategy Magazine
http://virtual-strategy.com/2017/02/15/google-cloud-platform-how-preemptible-instances-can-bolster-your-cloud-cost-optimization-strategy/

20
Announcements / Qube! 6.9-1 Released
« Last post by jburk on February 16, 2017, 07:55:50 PM »
This is a maintenance release of the Qube! Core/Supervisor/Worker/ArtistView/WranglerView products.

This is a recommended release for all customers running Qube v6.9-0.

Notable changes and fixes are:
  • Numerous memory leaks in the supervisor have been plugged, fixing runaway memory consumption by individual supervisor processes.
  • Faster re-loading of the qbwrk.conf central worker configuration file, which also speeds up supervisor startup.
  • qbping results now always reflect the current state of the license file; artificially low values for installed licenses were throwing off the "metered license" calculations.

Please see http://docs.pipelinefx.com/display/RELNOTES/PipelineFX+Release+Notes for complete release notes.