Now let's get into the nitty-gritty of the Google Cloud Platform. Any cloud platform is, at heart, about resources: it allows you to use resources for a fee. What's cool about cloud platforms is the great diversity and versatility of the resources available to us. These include hardware such as virtual machine instances and persistent disks, services such as BigQuery and BigTable, and even more complex software such as Cloud ML Engine and the various APIs for machine learning. But in addition to the hardware and software, there is a lot of supporting detail: networking, load balancing, logging, monitoring, and so on. The GCP, like other major cloud platforms, provides a great variety of services; take load balancing, for instance, where GCP's options span four different layers of the OSI networking stack (data link, network, transport, and application).
You will learn the following topics in this blog:
- The difference between regions and zones
- The organization of resources in a GCP project
- Accessing cloud resources using Cloud Shell and the web console
Global, regional, and zonal resources
Now, of course, there is no free lunch in life; you have to pay for (almost) all of this, and the payment models differ. For instance, with persistent disks, you pay for the capacity that you allocate, whereas with Cloud Storage buckets, you pay for the capacity that you actually use. Either way, the basic idea is that there are resources that will be billed, and all billable resources are grouped into entities named projects.
Let's now look at how resources are structured. In the Google Cloud Platform, every resource is scoped as one of the following:

- Global (or multiregional) resources
- Regional resources
- Zonal resources
Now you might think that this geographical location of resources is an implementation detail that you shouldn't have to worry about, but that's only partially true. The scoping actually also determines how you architect your cloud applications and infrastructure.
Regions are geographical regions at the level of a subcontinent—the central US, western Europe, or east Asia. Zones are basically data centers within regions. This mapping between a zone and a data center is loose, and it's not really explicit anywhere, but that really is what a zone is.
These distinctions matter to us as end users because regional and zonal resources are often billed and treated differently by the platform. You will pay more or less depending on the scope you choose, and the reason is that there are some implicit promises made about performance within regions.
For instance, the Cloud docs tell us that zones within the same region will typically have network latencies of less than 5 milliseconds. What does typical mean? Here, it is the 95th percentile latency; that is, 95% of all network traffic within a region will have a latency of less than 5 ms. That's a fancy way of saying that within a region, network speeds will be very high, whereas across regions, those speeds will be slower.
Cost and latency are two reasons why these geographical choices matter to you; another has to do with failure locations. A zone can be thought of as a single failure domain within a region: it is, basically, a data center, and therefore a single point of failure in Google's data center network (zones are analogous to Availability Zones in AWS). Zones reside inside regions, and they are identified using the name of the corresponding region plus a single lowercase letter, asia-east1-a for instance. Because a zone is a single failure domain, common sense says you might want to place copies of your resources in different zones, or even different regions, depending on your budget and your user base. An architecture that replicates resources across different zones can legitimately be termed a high-availability architecture.
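The naming convention is mechanical enough to show in a couple of lines of shell. As a sketch (the zone name here is just an example), a zone ID is its region's name plus a letter suffix, so the region can be recovered by stripping the final component:

```shell
# A zone ID has the form "<region>-<letter>", e.g. asia-east1-a.
zone="asia-east1-a"
region="${zone%-*}"   # strip the trailing "-a" to get the region
echo "$region"        # prints: asia-east1

# With an authenticated gcloud, you could list a region's zones:
#   gcloud compute zones list --filter="region:asia-east1"
```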
If a resource is available globally, it's known as a global or multiregional resource. These multiregional resources tend to be the most expensive, the most available, and also the most widely replicated and backed-up kind of resources. One level down come regional resources, which only need to be replicated across different data centers within the same region. At the bottom of this availability hierarchy are zonal resources, which only need to be replicated within a single data center.
There are lots of examples in each of these categories. Tools such as Cloud Storage, Datastore, and BigQuery can all be global or multiregional; this makes sense intuitively, as we expect storage technologies to be global rather than regional (Cloud SQL and BigTable are regional, however, while Cloud Spanner can be either regional or multiregional).
Compute, on the other hand, tends to be regional or zonal. App Engine is regional, whereas VM instances are zonal. Disk storage, whether ordinary persistent disks or persistent SSDs, is zonal as well: disks need to be local, in the same zone as the virtual machine instance they are used by:
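To make that zonal constraint concrete, here is a sketch using gcloud (the disk and instance names are hypothetical): the persistent disk and the VM that attaches it must be created in the same zone.

```shell
# Create a zonal persistent disk (names below are hypothetical):
gcloud compute disks create data-disk-1 \
    --size=100GB \
    --zone=us-east1-b

# The instance must be in us-east1-b as well to attach the disk:
gcloud compute instances create my-vm \
    --zone=us-east1-b \
    --disk=name=data-disk-1
```

Both commands require an authenticated gcloud session against a project with Compute Engine enabled.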
|Region|Location|Example zone|
|---|---|---|
|us-east1|South Carolina, USA||
|southamerica-east1|Sao Paulo, Brazil|southamerica-east1-a|
|europe-west1|St. Ghislain, Belgium||
Accessing the Google Cloud Platform
Now that we understand some of the hardware and software choices that are available to us in the Google Cloud Platform buffet, we should also know how to go about consuming these resources. We have the following choices:
- One really handy way is using the GCP console, also known as the web console; simply access this from a web browser at this link.
- Another is by making use of a command-line interface using command-line tools. There are four command-line utilities that you might encounter while working with the GCP:
  - gcloud: This is for pretty much everything other than the specific cases mentioned next
  - gsutil: This is for working with Cloud Storage buckets
  - bq: This is for working with BigQuery
  - kubectl: This is for working with Kubernetes (note that kubectl is not tied to GCP; if you use Kubernetes on a competing cloud provider such as Azure, you'd use kubectl there as well)
- Another way is to access GCP resources programmatically, using the various client libraries. These are available in a host of languages, including Java, Go, and Python.
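As a quick taste of the command-line tools, here is one representative invocation for each (the bucket and dataset names are made up for illustration):

```shell
gcloud compute instances list   # general-purpose GCP tool: list your VMs
gsutil ls gs://my-bucket        # list a Cloud Storage bucket (hypothetical name)
bq ls my_dataset                # list tables in a BigQuery dataset (hypothetical name)
kubectl get pods                # list Kubernetes pods in the current cluster
```

Each of these assumes you are authenticated and, for kubectl, that a cluster context is configured.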
Projects and billing
Let's also really quickly talk about how billing happens on the Google Cloud Platform. At heart, billing is associated with projects. Projects are logical units that consume a bunch of resources. Projects are set up within organizations, and we will get to that hierarchy later in this blog. Projects are associated with accounts, and accounts with organizations; however, billing really happens on a per-project basis. Each project can be thought of as resources + settings + metadata. So, if GCP is a lunch buffet, a project can be thought of as a meal: you select what you would like to consume and how you would like to consume it, and associate all of that information with the one unit that will then pay for it.
Extending that analogy just a little further: just as you can mix and match food items within a meal, you can easily have resources within a project interact with each other. In that sense, a project can be thought of as a namespace: resource names typically only need to be unique within the project. There are some exceptions to this, which we will discuss later (Google Cloud Storage buckets, for instance).
A project is really associated with, or defined by, three pieces of metadata: the name, the ID, and the number. The project ID is unique and permanent; even if you go ahead and delete a project, that ID will not be available for use by other projects in the future.
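You can see all three identifiers for a project from the command line; assuming a hypothetical project ID, the call would look like this:

```shell
# Describe a project by its ID (the ID here is hypothetical):
gcloud projects describe my-sample-project-1234
# The output includes fields along these lines:
#   name: My Sample Project              <- display name, editable
#   projectId: my-sample-project-1234    <- globally unique, permanent
#   projectNumber: '123456789012'        <- assigned by Google
```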
Setting up a GCP account
Execute the following steps to set up a GCP account:
- Go to this link and sign in to continue using Google Cloud Platform.
- If you already have a Gmail account, that's what you will use to sign in here. If you don't, create a Gmail account before you sign in to Google Cloud Platform.
- If you are doing this for the very first time, it will take you to a page where it will ask you for a bunch of personal information.
This is where you get access to all GCP products. Google currently enables a free trial for everyone, with 300 US dollars of free credit. So even if you are going to upgrade to a paid account, you won't shell out any money until you reach the 300-dollar limit. In addition, if you consume resources worth more than 300 USD during the trial, all your resources will be shut down, so you don't inadvertently end up paying a large bill because you forgot to turn down a VM or shut down a BigTable instance. Google is considerate that way. You will need to provide a credit card number in order to use the free trial, but you won't be charged.
Your Google Cloud account has been created. You will automatically be taken to a page that is the dashboard for your account:
The first thing to do is create a new project. Click on the drop-down icon right up top, next to the three horizontal lines (that is, the Products & Services menu, also colloquially known as the hamburger). Here, we already have a project; the name of that project is
If you click on the arrow next to the project name, a new dialog will open up, allowing you to create a new project, or select an existing one:
The projects associated with your Google Cloud account are your top-level billing units. All GCP resources are provisioned under one project or another. You can choose between a common billing account shared by all your projects or a separate billing account for each. All projects and billing accounts can be linked to a common organization, which in turn is linked to your Google Cloud account. A billing account encompasses a source of payment (for example, credit card details); thus, via different billing accounts, you can have different people pay for different resources (for example, different teams in your organization).
Every project has a unique name, which you specify, as well as an ID generated by Google. The project ID contains the project name we specified plus a string of numbers, which makes it unique across GCP globally.
Let's orient ourselves on the dashboard page. At the very top, you can see which project this dashboard is associated with; there is also a drop-down that allows you to switch between projects easily. The very first card gives us the details of the project, such as its name and the associated project ID, along with a quick link to the project settings page, where you can change your billing information and other project-related details. The Compute Engine card shows a summary of your compute instances; we have no instances yet, so this card is currently empty. A quick status check on the right indicates that all our services are green, and we can see the billing details of the project at a glance.
Now we are at the Google Cloud dashboard, which gives us access to all the services that the Cloud Platform makes available to us. You will use the three-line navigation button at the top left. This is the most important button you will encounter while reading this blog and using the platform:
You will be navigating here over and over again. Click on the hamburger menu to open up the navigation menu, and you will see all the Cloud Platform services and products available to you. Take the time to explore this menu, as there is a lot of interesting stuff in there. But what are we going to work on first? Creating a VM instance, an instance of Compute Engine. Go to the Compute Engine menu and click on VM instances:
This will take us to a page where we can create our very first virtual machine instance on Google Cloud. You also now know that all the Google Cloud resources, services, and products you use are tied to a top-level project, and that you can set up different projects for different teams in your organization.
Using the Cloud Shell
Before we jump into the compute options on GCP and create our first VM instance, let's understand the Cloud Shell. The Cloud Shell is a machine instance running on Google Cloud that serves as your command line. All GCP accounts have a Cloud Shell they can use to access resources on the Google Cloud Platform. You can access it by clicking on a button at the top right of the navigation ribbon:
The great thing about the Cloud Shell is that it provides a complete environment for you to connect to various resources in the cloud. It is also worth noting that the Cloud Shell is completely free to use. The cool thing about it is that you can directly use the
gcloud command-line tools to connect to resources in the cloud, create them, provision them, and so on; you don't need to install or set up anything. The Cloud Shell is what you would use if, say, your organization does not allow you to download software on your local machine. It is a great alternative in that case, which just works. When you first connect to the Cloud Shell, Google has to spin up an active instance for it. This might take a little while, so be patient:
The figure that follows shows our provisioned Cloud Shell, and you will notice that it is associated with the same project we mentioned earlier. Let's take the
gcloud command-line tool for a test run. Remember that the Cloud Shell is just a terminal session on an ephemeral VM; the session will get disconnected after 30 minutes of inactivity. Also, when you are in the Cloud Shell, you are in a home directory with 5 GB of space. This home directory persists across all Cloud Shell sessions in the project, and it will be deleted after 120 days of inactivity:
Let's explore the Cloud Shell further. The
gcloud utility is Google's main command-line tool; it allows you to work with resources and perform a whole bunch of operations. It's especially useful if you want to script operations and just run a script rather than perform them manually over and over again.
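For instance, a repetitive task such as standing up several identical VMs can be wrapped in a small script. This is only a sketch, with hypothetical instance names and a small machine type:

```shell
#!/bin/sh
# Create three identically configured VM instances in one go,
# instead of clicking through the console three times.
for i in 1 2 3; do
  gcloud compute instances create "worker-$i" \
      --zone=us-east1-b \
      --machine-type=f1-micro
done
```

Running it requires an authenticated gcloud and a project with Compute Engine enabled.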
You can view what the current default project is by typing out the
gcloud config list command:
You can see that
loonycorn-project08 is the default. If you need help with what commands are available with
gcloud, simply type
gcloud -h, and you will see a whole bunch of information. The
gcloud config list command shows you what properties have been set so far in the configuration. This will only display those properties that are different from the defaults. For example, you can see here that the account has been set to
If you need help for a particular command, let's say it's the
compute command, you can simply say
gcloud compute --help:
This essentially throws up the main page for that particular command. In other words,
gcloud has context-sensitive help, and that is a great way to go about building the commands you need:
Everything you need is right there on the screen.
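Besides reading configuration, gcloud config can also write it. Setting defaults saves you from repeating flags on every command (the project ID below is the one from our example; the zone is an arbitrary choice):

```shell
# Set the default project and zone for subsequent gcloud commands:
gcloud config set project loonycorn-project08
gcloud config set compute/zone us-east1-b

# Confirm the changes:
gcloud config list
```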
In a nutshell, the Google Cloud Shell is a great tool for quick work on the console. Remember, again, though, that it is a short, time-limited session on an ephemeral VM. So, if you are going to be developing intensely on Google Cloud and you can download software, it's better to download the Google Cloud SDK and use that instead: it gives you a permanent local environment rather than a temporary VM instance that has to be spun up every time you use the Cloud Shell.
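If you do go the SDK route, the install is a one-liner on Linux or macOS, followed by an interactive setup; this sketch assumes a Unix-like shell:

```shell
# Download and run the Cloud SDK installer:
curl https://sdk.cloud.google.com | bash

# Restart your shell, then authenticate and choose a default project:
gcloud init
```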
You learned about the distinction between global, regional, and zonal resources, and about the SLAs provided by Google for network traffic and availability within regions and zones. We got started with GCP by exploring the GCP web console. We also made use of the Google Cloud Shell and typed out a few basic commands using the
gcloud command-line utility.