Recent Posts

Pages: 1 ... 4 5 [6] 7 8 ... 10
51
Google’s Cloud Platform improves its free tier and adds always-free compute and storage services

Google today quietly launched an improved always-free tier and trial program for its Cloud Platform.

The free tier, which now offers enough power to run a small app in Google’s cloud, is offered in addition to an expanded free trial program (yep — that’s all a bit confusing). This free trial gives you $300 in credits that you can use over the course of 12 months. Previously, Google also offered the $300 in credits, but those had to be used within 60 days.
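For a sense of scale, here is a quick back-of-the-envelope calculation of how far that $300 stretches at a constant burn rate. The $0.05/hour figure below is a hypothetical placeholder, not an actual GCP price:

```python
# Rough illustration: how long the $300 trial credit lasts at a given
# steady hourly spend. The rate used below is hypothetical.

def credit_runway_days(credit_usd: float, hourly_rate_usd: float) -> float:
    """Days until the credit is exhausted at a constant hourly spend."""
    return credit_usd / hourly_rate_usd / 24

# e.g. a small VM costing a hypothetical $0.05/hour:
days = credit_runway_days(300.0, 0.05)
print(f"{days:.0f} days")  # 250 days -- comfortably inside the new 12-month window
```

At that spend the credits would have outlasted even the old 60-day window, but the point of the change is that lighter, intermittent usage no longer forfeits the remainder.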

The free tier, which the company never really advertised, now allows for free usage of a small (f1-micro) instance in Compute Engine, Cloud Pub/Sub, Google Cloud Storage and Cloud Functions. In total, the free tier now includes 15 services.

The addition of the Compute Engine instance and 5GB of free Cloud Storage usage is probably the most important update here because those are, after all, the services that are at the core of most cloud applications. You can find the exact limits here.

It’s worth noting that the free tier is only available in Google’s us-east1, us-west1 and us-central1 regions.

With this move, Google is clearly stepping up its attacks against AWS, which offers a similar but more limited free tier and free 12-month trial program for its users. Indeed, many of Google’s limits look fairly similar to AWS’s 12-month free tier, but the AWS always-free tier doesn’t include a free virtual machine, for example (you only get it for free for 12 months). I expect AWS will pull even with Google in the near future and extend its free offer, too.

The idea here is clearly to get people comfortable with Google’s platform. It’s often the developer who runs a hobby project in the cloud who gets the whole team in an enterprise to move over or who decides to use a certain public cloud to run a startup’s infrastructure. New developers, too, typically choose AWS to learn about the cloud because of its free 12-month trials. The 60-day $300 credit Google previously offered simply didn’t cut it for developers who wanted to learn how to work with Google’s cloud.

full article published here:  https://techcrunch.com/2017/03/09/googles-cloud-platform-improves-its-free-tier-and-adds-always-free-compute-and-storage-services/
52
Announcements / Qube 6.9-2 released
« Last post by jburk on February 27, 2017, 08:16:51 PM »
This is a maintenance release of the Qube! Core/Supervisor/Worker/ArtistView/WranglerView products.

This is a recommended release for all customers running Qube v6.9-0; customers already running v6.9-1 need only upgrade if they are impacted by any issues addressed by this release.

Notable changes and fixes are:
  • Supervisor and worker now use systemd unit files for managing startup and shutdown on CentOS 7+
  • Jobs in a "badlogin" state can now be retried or killed

Please see http://docs.pipelinefx.com/display/RELNOTES/PipelineFX+Release+Notes for complete release notes.
53
Rendering in the Cloud / Google launches GPU support for its Cloud Platform
« Last post by Render Guru on February 22, 2017, 06:44:21 PM »
Three months ago, Google announced it would in early 2017 launch support for high-end graphics processing units (GPUs) for machine learning and other specialized workloads. It’s now early 2017 and, true to its word, Google today officially made GPUs on the Google Cloud Platform available to developers. As expected, these are Nvidia Tesla K80 GPUs, and developers will be able to attach up to eight of these to any custom Compute Engine machine.

These new GPU-based virtual machines are available in three Google data centers: us-east1, asia-east1 and europe-west1. Every K80 core features 2,496 of Nvidia’s stream processors with 12 GB of GDDR5 memory (the K80 board features two cores and 24 GB of RAM).
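Putting the per-GPU figures together, a maximally configured VM with eight attached K80 GPUs adds up as follows (using only the numbers quoted above):

```python
# Aggregate capacity of a single VM with the maximum of eight K80 GPUs
# attached, using the article's per-GPU figures: 2,496 stream
# processors and 12 GB of GDDR5 each.

STREAM_PROCESSORS_PER_GPU = 2496
MEMORY_GB_PER_GPU = 12
MAX_GPUS_PER_VM = 8

total_processors = STREAM_PROCESSORS_PER_GPU * MAX_GPUS_PER_VM
total_memory_gb = MEMORY_GB_PER_GPU * MAX_GPUS_PER_VM

print(total_processors, total_memory_gb)  # 19968 96
```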

You can never have too much compute power when you’re running complex simulations or using a deep learning framework like TensorFlow, Torch, MXNet or Caffe. Google is clearly aiming this new feature at developers who regularly need to spin up clusters of high-end machines to power their machine learning frameworks. The new Google Cloud GPUs are integrated with Google’s Cloud Machine Learning service and its various database and storage platforms.

The cost per GPU is $0.70 per hour in the U.S. and $0.77 in the European and Asian data centers. That’s not cheap, but a Tesla K80 accelerator with two cores and 24 GB of RAM will easily set you back a few thousand dollars, too.
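To make the rent-vs-buy comparison concrete, here is a rough break-even sketch. The $4,000 board price is a hypothetical figure for illustration; "a few thousand dollars" is all the article commits to:

```python
# Back-of-the-envelope break-even: renting vs buying a K80 board.
# A K80 board carries two GPU cores, so renting the equivalent capacity
# means paying for two GPU-hours per hour.

RENT_PER_GPU_HOUR_US = 0.70    # article's US price per GPU per hour
BOARD_PRICE_USD = 4000.0       # hypothetical purchase price (two cores)

rent_per_board_hour = 2 * RENT_PER_GPU_HOUR_US
break_even_hours = BOARD_PRICE_USD / rent_per_board_hour

print(f"{break_even_hours:.0f} hours")  # 2857 hours (~4 months of nonstop use)
```

In other words, unless a workload keeps a board busy around the clock for months, renting by the hour comes out ahead, before even counting power, cooling and depreciation.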

The announcement comes only a few weeks before Google is scheduled to host its Cloud NEXT conference in San Francisco — where chances are we’ll hear quite a bit more about the company’s plans for making its machine learning services available to even more developers.

Read full story here:  https://techcrunch.com/2017/02/21/google-launches-gpu-support-for-its-cloud-platform/
54
Google Cloud Platform: How Preemptible Instances Can Bolster Your Cloud Cost-Optimization Strategy

Back in August last year, Google Cloud Platform announced price reductions of up to 33% on its Preemptible Virtual Machines (VMs), which could deliver potential savings of up to 80% on the cost of its regular on-demand instances.

The move was just the latest in a long series of price cuts designed to attract more enterprise customers to the platform and challenge the dominance of Amazon and Microsoft in the cloud infrastructure marketplace.

But what exactly are Preemptible VMs? And what applications are suitable for deployment to the service? In this post, we run through the main features of Preemptible VMs and compare them with Amazon’s rival discount offering, Spot Instances.

What Are Preemptible VMs?

Launched in May 2015, Preemptible VMs are a disposable class of instances through which Google offers excess compute capacity at a much lower price compared with its standard machines.

They’re just like their regular VM counterparts, except that they automatically shut down after 24 hours. What’s more, at any time during that period, they may be shut down at short notice whenever the vendor needs to reclaim spare capacity to meet compute demands elsewhere.

Google takes a number of factors into consideration in its preemption process. For example, it avoids stopping too many VMs from a single customer and gives preference to instances that have been running longer. In other words, instances are more at risk of preemption when they first start running. However, apart from any separate licensing costs, you’re not charged if Google stops your instance within the first 10 minutes.

Charges for Preemptible VMs are typically around 20% of the full on-demand price. Based on average usage, which takes into account partial sustained-use discounts, they generally work out at about 25% of the running cost of the equivalent standard machine. The service carries no guarantee of availability and no SLA.
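Those two percentages are the same claim viewed from different baselines: sustained-use discounts reduce a fully utilized standard machine to roughly 80% of its list price, so the preemptible rate (about 20% of list) works out to about 25% of what the standard machine actually costs. A quick check:

```python
# Reconciling "20% of the on-demand price" with "25% of the standard
# machine's running cost": sustained-use discounts bring a
# full-utilization standard VM down to roughly 80% of list price.

PREEMPTIBLE_FRACTION_OF_LIST = 0.20
SUSTAINED_USE_EFFECTIVE_FRACTION = 0.80  # approximate effective rate

ratio = PREEMPTIBLE_FRACTION_OF_LIST / SUSTAINED_USE_EFFECTIVE_FRACTION
print(f"{ratio:.0%}")  # 25%
```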

What Applications Are a Good Fit?

Preemptible VMs are a great way to reduce the cost of processing large-scale workloads that aren’t time sensitive and require massive amounts of compute resources. They’re well suited to security testing, short-term scaling, media encoding, financial or scientific analysis and insurance risk management, where you can spin up hundreds or thousands of machines and quickly complete a job in a short space of time.

They’re a particularly useful option as part of the VM mix for massively parallel processing applications, such as Hadoop and Spark, as they can complement Google’s conventional VMs to enhance performance. However, they’re not suitable for mission-critical services such as operational databases and Internet applications.

How Should Your Workloads Handle Preemption?

First of all, if you haven’t done so already, you should build fault tolerance into your application. You can also mitigate the impact of preemption by combining regular instances with Preemptible VMs in your clusters, thereby ensuring you maintain a baseline level of compute availability.
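The cost side of that mixing strategy is easy to model. The hourly rates below are hypothetical placeholders, not published GCP prices; only the structure of the calculation matters:

```python
# Sketch: blended hourly cost of a cluster that keeps a guaranteed
# baseline of regular VMs and fills the rest with Preemptible VMs.
# Rates are hypothetical placeholders for illustration.

def blended_hourly_cost(total_vms: int, baseline_vms: int,
                        regular_rate: float, preemptible_rate: float) -> float:
    """Hourly cost with `baseline_vms` regular VMs and the rest preemptible."""
    if baseline_vms > total_vms:
        raise ValueError("baseline cannot exceed cluster size")
    preemptible_vms = total_vms - baseline_vms
    return baseline_vms * regular_rate + preemptible_vms * preemptible_rate

# 100-node cluster with a 20-node guaranteed baseline, preemptible
# priced at 20% of a hypothetical $0.10/hour list rate:
cost = blended_hourly_cost(100, 20, regular_rate=0.10, preemptible_rate=0.02)
print(f"${cost:.2f}/hour")  # $3.60/hour, vs $10.00/hour for all-regular
```

Even a conservative 20% baseline of regular instances preserves guaranteed capacity while capturing most of the discount.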

Alternatively, you can create a shutdown script that responds to preemption alerts by automatically launching regular instances to cover your shortfall in compute capacity. But, if costs are paramount and you’re prepared to wait, your script should simply clean up and save your job so it can pick up where it left off.
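As a hedged sketch of the detection side: Compute Engine exposes a per-instance "preempted" flag through its internal metadata server, which a job can poll so it knows when to checkpoint and wind down. The metadata URL and required header below follow GCE conventions but should be verified against current documentation before you rely on them:

```python
# Sketch of a preemption watcher. The metadata endpoint and the
# "Metadata-Flavor: Google" header follow Compute Engine conventions;
# confirm both against current GCE docs before depending on them.

import time
import urllib.request

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/preempted")

def parse_preempted(body: str) -> bool:
    """The endpoint returns the literal text TRUE or FALSE."""
    return body.strip().upper() == "TRUE"

def is_preempted() -> bool:
    req = urllib.request.Request(METADATA_URL,
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return parse_preempted(resp.read().decode())

def watch(checkpoint) -> None:
    """Poll until preemption is signalled, then checkpoint the job."""
    while not is_preempted():
        time.sleep(1)
    checkpoint()  # save state so the job can pick up where it left off

# Usage on an instance: watch(my_checkpoint_fn) blocks until preemption.
```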

In either case, you should test your application’s response to a preemption event. You can do this by stopping the VM and checking that your script correctly completes your shutdown procedure.

However, if you want to avoid preemption in the first place, you should look to run your instances at off-peak times, such as nights and weekends, when the risk of disruption is at a minimum.

How Do They Compare with Spot Instances?

Preemptible VMs are very similar to Amazon’s Spot Instances. Nevertheless, there are several key differences between them.

Preemptible VMs are charged at fixed rates, which are individually set according to the type and size of instance. Availability depends on the vendor’s level of spare capacity.

By contrast, allocation of Spot Instances is based on a bidding process, where your machine will run as long as your bid price is above the current Spot price on the Spot Market. Provided you don’t cancel your request, the machine will restart whenever the Spot price falls below your bid price. And while your machine is running, the actual price you pay is the Spot price.
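A toy model makes the Spot mechanics concrete: the instance runs in any hour where the market price is at or below your bid, and you are charged the market price for those hours, not your bid:

```python
# Toy model of the Spot bidding mechanics described above. Real Spot
# pricing varies continuously; hourly steps are a simplification.

def spot_run(bid: float, hourly_spot_prices: list[float]) -> tuple[int, float]:
    """Return (hours_run, total_cost) over a sequence of hourly Spot prices."""
    hours_run = 0
    total_cost = 0.0
    for price in hourly_spot_prices:
        if price <= bid:          # machine runs (or restarts) this hour
            hours_run += 1
            total_cost += price   # you pay the Spot price, not your bid
    return hours_run, total_cost

# Bid $0.10 against a fluctuating market:
hours, cost = spot_run(0.10, [0.05, 0.08, 0.12, 0.07])
print(hours, round(cost, 2))  # 3 0.2
```

Note the interruption in hour three, when the market price ($0.12) exceeds the bid, followed by an automatic restart in hour four.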

Some enterprise customers will prefer the more predictable costs of Preemptible VMs. On the other hand, some will prefer the flexibility of Spot Instances: they don’t automatically terminate after 24 hours, and you can bid at a higher price to reduce the risk of interruption.

Preemptible VM costs are rounded up to the nearest minute. However, if you terminate your instance within the first 10 minutes, usage is rounded up to a full 10 minutes. By contrast, the costs for Spot Instances are rounded up to the nearest hour. But, if your Spot Instance is interrupted, you don’t pay for your last partial hour of usage.
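The two billing-granularity rules can be written out side by side (the preemptible minimum applies when you terminate the instance yourself; as noted earlier, Google stopping it inside the first 10 minutes incurs no charge):

```python
# The rounding rules described above, side by side.
# Preemptible VMs: per-minute rounding with a 10-minute minimum when
# you terminate early. Spot Instances: per-hour rounding, with the
# final partial hour free if Amazon (not you) interrupted the machine.

import math

def preemptible_billed_minutes(runtime_minutes: float) -> int:
    return max(10, math.ceil(runtime_minutes))

def spot_billed_hours(runtime_hours: float, interrupted_by_amazon: bool) -> int:
    if interrupted_by_amazon:
        return math.floor(runtime_hours)  # last partial hour is free
    return math.ceil(runtime_hours)

print(preemptible_billed_minutes(4.5))    # 10 (the minimum applies)
print(preemptible_billed_minutes(61.2))   # 62
print(spot_billed_hours(3.4, True))       # 3
print(spot_billed_hours(3.4, False))      # 4
```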

Finally, Google only gives you 30 seconds’ notice of preemption, while Amazon gives you 2 minutes’ notice of interruption. And Preemptible VMs are available to any Compute Engine instance type, whereas Spot Instances aren’t available to the burstable T2 family of instances.

As yet, there is no Microsoft equivalent of either service.

A Highly Capable Offering

Google is a relative newcomer to the cloud computing arena and still playing catch-up with Amazon and Microsoft in terms of available features and market share. Nevertheless, it has already established itself as a highly capable offering and an ideal cloud platform for specific use cases.

However, in the company’s drive to attract new enterprise customers, pricing will continue to play an important role in its marketing strategy. So, if your applications are a good fit for the platform, you can expect further cost-cutting campaigns and even better value for money from your cloud infrastructure expenditure. And all the more so if you optimize your costs, adopt a flexible approach to your workloads and take advantage of cost-saving options such as Preemptible VMs.

Published by Jonathan Maresky
Published on Virtual Strategy Magazine
http://virtual-strategy.com/2017/02/15/google-cloud-platform-how-preemptible-instances-can-bolster-your-cloud-cost-optimization-strategy/

55
Announcements / Qube! 6.9-1 Released
« Last post by jburk on February 16, 2017, 07:55:50 PM »
This is a maintenance release of the Qube! Core/Supervisor/Worker/ArtistView/WranglerView products.

This is a recommended release for all customers running Qube v6.9-0.

Notable changes and fixes are:
  • Numerous memory leaks in the supervisor have been plugged, fixing runaway memory consumption in single supervisor processes.
  • Faster re-loading of the qbwrk.conf central worker configuration file, which also speeds up supervisor startup.
  • qbping results now always reflect the current state of the license file; artificially low values for installed licenses were throwing off the "metered license" calculations.

Please see http://docs.pipelinefx.com/display/RELNOTES/PipelineFX+Release+Notes for complete release notes.
56
Google circulates a new whitepaper that appears designed to reassure Google Cloud Platform customers that their data is protected by multiple layers of physical and cyber-security.

Data security and trust have long been major concerns for organizations considering cloud-computing options. In an apparent bid to allay these fears among its customers, at least, Google has released a new whitepaper enumerating the complex multi-layered strategy the company uses to protect enterprise data in the cloud.

The paper shows that Google has deployed security controls in six progressive layers, starting with physical and hardware security at the bottom and operational security controls at the top of the stack.

A lot of the technology in Google's data centers is home built and incorporates what the company claims are multiple physical security controls.

Access to Google's data centers is tightly restricted and only a "very small fraction" of Google employees ever have access to the facilities housing the systems that power the company's range of cloud computing services. Security measures for controlling facility access include biometric identification, laser-based intrusion detection systems, vehicle barriers, metal detection and webcams.

Full story by Jaikumar Vijayan
Published here  http://www.eweek.com/cloud/google-uses-multi-layered-controls-to-protect-data-in-the-cloud.html

57
Rendering in the Cloud / Survey: Google Cloud Most Popular Choice for SMBs
« Last post by Render Guru on January 12, 2017, 07:30:04 PM »
From Kris Blackmon | The VAR Guy
Published here:  http://thevarguy.com/cloud-computing-services-and-business-solutions/survey-google-cloud-most-popular-choice-smbs


A new survey from Clutch may explain why enterprises gravitate toward Microsoft Azure while smaller organizations choose Google.

A recent Clutch survey of 247 organizations showed that while powerhouse cloud providers Amazon Web Services (AWS) and Microsoft Azure tend to be the top choices of enterprise customers, small and midsize businesses (SMBs) gravitate toward Google Cloud Platform (GCP).

The data was collected from businesses with one to 10,000+ employees, and respondents were evenly distributed among users of each service, with about a third from each. Across all three platforms, "better selection of tools/features" ranked as the top reason customers chose their primary provider, with brand familiarity and security tying for second.


Despite being the oldest provider and having the lion's share of the market, AWS ranks lowest on brand familiarity. According to the survey, it ranks at 15 percent. Azure was the most recognized brand at 24 percent, and GCP sat right at 20 percent.

The survey found that 37 percent of Azure users identify as enterprises, compared to only 25 percent who identify as an SMB and 22 percent who call themselves a startup or sole proprietorship. In contrast, 41 percent of GCP users fall into the SMB category.

Nick Martin, Principal Applications Development Consultant at Cardinal Solutions, says enterprise loyalty to Azure makes sense. “Windows Server and other Microsoft technologies are prevalent in the enterprise world. Azure provides the consistency required by developers and IT staff to tightly integrate with the tools that Microsoft-leaning organizations are familiar with.”

The report theorizes that GCP's pricing may be more palatable to SMBs, which, combined with its brand familiarity, may explain its popularity in that space. However, it's notable that GCP's analytics tool, Cloud Datalab, is the provider's most popular service, suggesting that smaller businesses may use it as their sole analytics service.

Clutch draws some high-level conclusions from its data that may help partners that are migrating customer data from on-prem to a public cloud:

*If you are an enterprise, require Windows integration, or seek a strong PaaS (platform-as-a-service) provider, consider Microsoft Azure.

*If you want heavy emphasis on analytics or are an SMB with a limited budget, look into Google Cloud Platform.

*If a service’s longevity, IaaS (infrastructure-as-a-service) offerings, and wide selection of tools are important to you, Amazon Web Services may be your best option.
58
Full Story by Tara Seals US/North America News Reporter, Infosecurity Magazine

Published here:  http://www.infosecurity-magazine.com/news/google-broadens-encryption-options/

Google is broadening its continuum of encryption options available on Google Cloud Platform (GCP), with the addition of the Cloud Key Management Service (KMS).

Now in beta, Cloud KMS offers a cloud-based root of trust that customers in regulated industries, such as financial services and healthcare, can monitor and audit. As an alternative to custom-built or ad-hoc key management systems, which are difficult to scale and maintain, Cloud KMS is aimed at making it easy to keep keys safe.

“With the launch of Cloud KMS, Google has addressed the full continuum of encryption and key management use cases for GCP customers,” said Garrett Bekker, principal security analyst at 451 Research. “Cloud KMS fills a gap by providing customers with the ability to manage their encryption keys in a multi-tenant cloud service, without the need to maintain an on-premise key management system or HSM.”

With Cloud KMS, users can manage symmetric encryption keys in a cloud-hosted solution, whether they’re used to protect data stored in GCP or another environment. Users can also create, use, rotate and destroy keys via the Cloud KMS API, including as part of a secret management or envelope encryption solution. It’s directly integrated with Cloud Identity Access Management and Cloud Audit Logging for greater control as well.
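For readers unfamiliar with envelope encryption, the pattern is: encrypt each piece of data locally with a fresh data-encryption key (DEK), then have the KMS wrap that DEK with a key-encryption key (KEK) it holds, and store only the wrapped DEK alongside the ciphertext. The sketch below is a toy: `toy_stream_cipher` is a SHA-256 keystream standing in for AES-GCM, and `MockKMS` stands in for the actual Cloud KMS encrypt/decrypt calls. Do not use it for real data:

```python
# Toy illustration of the envelope-encryption pattern Cloud KMS
# supports. NOT secure: the stream cipher and mock KMS are stand-ins
# for AES-GCM and the real Cloud KMS API.

import hashlib
import secrets

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream; symmetric, illustration only."""
    out = bytearray()
    for block in range(0, len(data), 32):
        keystream = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

class MockKMS:
    """Stands in for Cloud KMS: wraps/unwraps DEKs with a key it holds."""
    def __init__(self):
        self._kek = secrets.token_bytes(32)  # KEK never leaves the "KMS"
    def wrap(self, dek: bytes) -> bytes:
        return toy_stream_cipher(self._kek, dek)
    def unwrap(self, wrapped: bytes) -> bytes:
        return toy_stream_cipher(self._kek, wrapped)

def envelope_encrypt(kms: MockKMS, plaintext: bytes) -> tuple[bytes, bytes]:
    dek = secrets.token_bytes(32)  # fresh DEK per message
    return kms.wrap(dek), toy_stream_cipher(dek, plaintext)

def envelope_decrypt(kms: MockKMS, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    return toy_stream_cipher(kms.unwrap(wrapped_dek), ciphertext)

kms = MockKMS()
wrapped_dek, ct = envelope_encrypt(kms, b"customer record")
assert envelope_decrypt(kms, wrapped_dek, ct) == b"customer record"
```

The design point is that only small wrapped DEKs cross the wire to the KMS, which is why the low-latency property quoted below matters for frequently performed operations.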

“Forward thinking cloud companies must lead by example and follow best practices,” said Maya Kaczorowski, Google product manager, in a blog. “For example, Ravelin, a fraud detection provider, encrypts small secrets, such as configurations and authentication credentials, needed as part of customer transactions, and uses separate keys to ensure that each customer's data is cryptographically isolated. Ravelin also encrypts secrets used for internal systems and automated processes.”

Leonard Austin, CTO at Ravelin, added, “Google is transparent about how it does its encryption by default, and Cloud KMS makes it easy to implement best practices. Features like automatic key rotation let us rotate our keys frequently with zero overhead and stay in line with our internal compliance demands. Cloud KMS’s low latency allows us to use it for frequently performed operations. This allows us to expand the scope of the data we choose to encrypt from sensitive data, to operational data that does not need to be indexed.”

At launch, Cloud KMS uses the Advanced Encryption Standard (AES) in Galois/Counter Mode (GCM), the same encryption used internally at Google to encrypt data in Google Cloud Storage. This AES-GCM implementation lives in the BoringSSL library, which Google maintains and continually checks for weaknesses using several tools, including ones similar to the recently open-sourced cryptographic test tool Project Wycheproof.

By default, Cloud Storage manages server-side encryption keys, but if users prefer to manage their cloud-based keys themselves, they can select Cloud KMS. For managing keys on-premise, they can select Customer-Supplied Encryption Keys for Google Cloud Storage and for Google Compute Engine.

“While we’re on the topic of data protection and data privacy, it might be useful to point out how we think about GCP customer data,” added Kaczorowski. “Google will not access or use GCP customer data, except as necessary to provide them the GCP services.”


59
Announcements / Qube! WranglerView 6.9-0c patch release is available
« Last post by jburk on December 15, 2016, 01:52:40 AM »
A 6.9-0c patch release is now available for the Qube 6.9-x WranglerView.  This is a recommended patch release for all customers running any version of Qube v6.9, and can be installed directly on top of any 6.9 version of Qube.

For those using the Qube Installer utility, the 6.9-0 manifest file has been updated.  If you are using version 2.1-0 or later of the installer, it will auto-detect when a manifest has a newer version and offer to download it.

You may also download the patched version directly at: http://repo.pipelinefx.com/downloads/pub/qubegui/current/

Full release notes are available on our documentation site: http://docs.pipelinefx.com/display/RELNOTES/Complete+WranglerView+Release+Notes

Code:
-------------------------------------------------------------------------------
6.9-0c FEATURES AND FIXES
-------------------------------------------------------------------------------
This is a WV-only release to roll-up bug fixes that were impacting several customers
=========================
New features
=========================
< None >

=========================
Fixes
=========================
==== CL 17323 ====
@FIX: default user can't remove their own jobs: userHasQubePermission: Unknown Qube user permission: "remove"
@FIX: non-admin users should only be permitted to remove jobs that are either complete, failed, or killed
==== CL 17323 ====
@FIX: right-click on WV job list is slow on Windows

=========================
Changes in behavior
=========================
==== CL 17339 ====
@CHANGE: WV will exit at startup when running against a supervisor from an older major/minor version
60
We have released a patched version of Qube! core, supervisor, and worker packages, labeled 6.8-4a, that contain various fixes.

The Qube! Installer should automatically pick up this new version when the 6.8-4 manifest is selected.

Listed below are the release notes for 6.8-4a, for your reference.

Cheers!

----
Code:
##############################################################################
@RELEASE: 6.8-4a

This is a cumulative patch release of the qube-core, supervisor, and worker
packages, for all platforms, including several key fixes.


==== CL 17208 ====
@CHANGE: Populate the subjob (instance) objects with more data (like status), and not just the IDs, when subjob info is requested via "qbhostinfo" (qb.hostinfo(subjobs=True) for python API)

Previously, only jobid, subid, and host info (name, address, macaddress)
were filled. Now, things like "status", "timestart", "allocations",
etc. are properly filled in.

JIRA: QUBE-2073
ZD: 16541

==== CL 17206 ====
@FIX: When "migrate_on_frame_retry" job flag is set, prevent backend from doing further processing (especially another requestwork()) after a work failed

This was causing race conditions that would leave agenda items stuck in the
"retrying" state while no instances were processing them.

Now the reportwork() API routine is modified so that if it's invoked to
report that a work "failed", and the "migrate_on_frame_retry" is set on the
job, it will stop processing (does a long sleep), and let the worker/proxy
do the process clean up.

JIRA: QUBE-2202
ZD: 16553

==== CL 17186 ====
@FIX: "VirtualBox Host-Only Ethernet Adapter" is now skipped when daemons (supe, worker) try to pick a primary MAC address

JIRA: QUBE-2149
ZD: 16561

==== CL 17182 ====
@CHANGE: all classes that inherit from QbObject print as a regular dictionary, no longer have a __repr__ which prints the job data as a single flat string
@NEW: add qb.validatejob() function to python API, help find malformed jobs that crash the user interfaces

==== CL 17141 ====
@FIX: Any job submitted from within a running job picks up the pgrp of the submitting job

By design, if the submission environment has QBGRPID and QBJOBID set, the
API's submission routine will set the job's pgrp and pid, respectively to
the values specified in the environment variables.

One couldn't override this "inheritance" behavior even by explicitly
specifying "pgrp" or "pid" in the job being submitted, for instance with
the "-pgrp" command-line option of qbsub.

Fixed, so that setting "pgrp" to 0 on submission means that the job should
generate its own pgrp instead of inheriting it from the environment.

JIRA: QUBE-2141
ZD: 16545

==== CL 17101 ====
@NEW: add "-dying" and "-registering" options to qbjobs.
@CHANGE: also add dying and registering jobs to the "-active" filter.

JIRA: QUBE-2091
ZD: 16469

==== CL 16804 ====
@TWEAK: added code to print what operation was requested, when printing out "permission granted to user..."

