Deploying Linux Servers in the Cloud

Linux Servers and Cloud Services

With the prevalence of cloud providers today, it’s easier and more feasible than ever for anybody to launch and run their own Linux servers. End users have a wide variety of providers and implementations to select from, such as OpenStack, Linode, Amazon Web Services (AWS), Google Cloud Platform (GCP), Azure, and IBM Cloud.


The speed and ease with which a new Linux server distribution can be launched online may surprise you. Almost everything comes prebaked for you in the cloud. You only need to know how to turn on or off the right switches to get up and running in the cloud - and have the appropriate billing information handy, of course. 

Behind the Cloud

Cloud environments are powered by a gazillion virtual machines (VMs). Each VM is powered by an operating system (OS), and each OS is powered by hardware or physical machines - loads and loads of hardware. You can rest assured there are no magical unicorns making things happen in the cloud. Cloud environments are powered by hardware.

What this means for you as a Linux system administrator is that, more than ever, your skills and knowledge are heavily needed - even in a cloud-driven world. Instead of thinking in terms of a few physical hard drives attached to servers, you now think in terms of petabytes and pools of storage to be shared by potentially hundreds of smaller VMs. Instead of thinking in terms of terabytes of installed physical memory (RAM), you now think in terms of splitting that memory into smaller sizes to be shared by lots of smaller virtual machines (or containers). Instead of thinking in terms of a couple of Ethernet cards on a box, you think in terms of hundreds of virtual network cards and switches all communicating using the magic of software-defined networking (SDN). In a nutshell, this all means that the work (and concerns) of a system administrator increase by orders of magnitude in a cloud environment!

Obtaining and Spinning Up New Virtual Linux Servers

In this section, we talk about some of the many quick methods for spinning up a new Linux server - but using other people’s hardware and infrastructure! Once it is successfully spun up, you should be able to apply the theories and follow the exercises in the rest of the blog, just as if you were following along on your own physical machine.

Because we’ve not yet covered some other pertinent details (such as working on the command line), our coverage here is necessarily going to be very high level. At a minimum, you will need access to the command-line interface of an existing system (preferably Linux), with the right tools installed, in order to be able to use the commands detailed in the next sections. Even though we prefer Linux-based systems, you won’t be stranded if you are following along on another platform such as macOS or Windows - you’ll just have to consult the relevant provider documentation for how to obtain and set up the tools on either of those platforms.

Free-to-Run Virtual Linux Servers

By “free-to-run,” we mean you don’t necessarily need to give your billing information or credit card details to any third-party provider to get a virtual Linux server up and running. In exchange, this often involves some hands-on prep work by you or somebody else! We only provide a few examples of this type of server in this section.


virt-builder

virt-builder is a utility for downloading or building new virtual machine images from customizable templates. It is easy to use, and all you need is access to an existing, running Linux operating system. The authors of virt-builder have created a decent library of templates for building various versions and spins of popular Linux distributions, such as Fedora, Ubuntu, Debian, OpenSUSE, and CentOS. After installing the software package that provides the virt-builder utility, you can query and view a list of available operating systems by running the following command:

$ virt-builder --list

opensuse-tumbleweed	x86_64	openSUSE Tumbleweed
centos-8.0	        x86_64	CentOS 8.0
debian-10	        x86_64	Debian 10 (buster)
fedora-34	        x86_64	Fedora® 34 Server
freebsd-11.1	        x86_64	FreeBSD 11.1


To view any relevant installation notes (such as login passwords, usernames, and so on) that might be available for the fedora-34 distro returned in the preceding sample listing, type this command:

$ virt-builder --notes fedora-34

Next, run the following command to build and download the disk image file for our sample fedora-34 distro. By default, the file will be downloaded to the current working directory.

$ virt-builder fedora-34
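The build can also be customized at creation time. The flags below are real virt-builder options, but the values are purely illustrative; as a sketch, the command is composed into a variable and echoed as a dry run rather than executed:

```shell
# Sketch: common virt-builder customization flags (values are
# illustrative). The command is echoed as a dry run, not executed.
BUILD_CMD='virt-builder fedora-34 --size 20G --hostname demo.example.com --output fedora-34.img'
echo "$BUILD_CMD"
```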

After successfully downloading the raw virtual disk image file (fedora-34.img in our example), you should be able to use any garden-variety hypervisor platform to boot up and run the VM encapsulated in the image file.
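The exact boot procedure depends on your hypervisor of choice. As a minimal sketch, assuming QEMU/KVM is installed, booting the raw image could look like the following (the memory size and flags are assumptions you will want to adjust, so the command is echoed as a dry run rather than executed):

```shell
# Minimal sketch: boot the raw disk image with QEMU/KVM. Echoed as a
# dry run; adjust memory, CPU, and networking flags for your setup.
IMG="fedora-34.img"
BOOT_CMD="qemu-system-x86_64 -enable-kvm -m 2048 -drive file=$IMG,format=raw"
echo "$BOOT_CMD"
```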


OpenStack

OpenStack is an amalgamation of several individual FOSS projects that can be integrated to provide a single, complete cloud computing platform suitable for public and private clouds. Collectively, the individual projects are responsible for powering some very important services that OpenStack relies heavily on, such as Compute (Nova), Networking (Neutron), Block Storage (Cinder), Identity (Keystone), Image (Glance), Object Storage (Swift), Database (Trove), and Messaging (Zaqar). The OpenStack project has the backing of lots of technology industry stalwarts that use the project as well as contribute to its development. Once OpenStack is properly set up and configured, you are limited only by your imagination for what you can do with it.

Although the individual projects that comprise OpenStack usually have their own native tools (and quirks), the overarching openstack binary aims to be a unified tool that can be used to perform numerous functions across the entire stack. You can learn more about and download the openstack client toolset from the OpenStack project documentation.

After configuring and authorizing the tool to work on the target openstack deployment, you can bring up a new Linux server VM by supplying the correct parameters and running the following:

$ openstack server create \
    --flavor <FLAVOR_ID> --image <IMAGE_ID> --key-name <KEY_NAME> \
    --security-group <SEC_GROUP_NAME> <INSTANCE_NAME>


Commercial Cloud Service Providers

Commercial cloud providers are any of the many for-profit companies that rent out parts of their compute infrastructure for a fee. Even though most of these providers have so-called free tiers of their services, you can rest assured that there is no such thing as a completely free lunch. The free tiers are designed to whet your appetite just enough to get you hooked on their services. Most commercial service providers require you to sign up and establish security tokens and keys that are unique to that service or provider. Generally, to use the command-line interface (CLI) tools in the following sections, you need to have these security tokens handy!
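For example, many provider CLIs can read those tokens from environment variables instead of a configuration file. The AWS CLI honors the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables; the values below are AWS's own documented fake placeholders, not real credentials:

```shell
# Sketch: supply credentials to a provider CLI via environment
# variables. Variable names are the ones the AWS CLI reads; the
# values are fake documentation placeholders.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
# Confirm the variables are exported and visible to child processes:
env | grep -c '^AWS_'
```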

Application Programming Interface (API)

One phrase you’ll come across very frequently in the cloud world is application programming interface, or API. Here, we explain how APIs came to be and what problems they solve.


When you are running your workloads in the cloud, you are relying on the cloud provider’s physical infrastructure. This means that you don’t have access to physical switches, buttons, cables, ports, or anything else you would have in your own server closet or data center. The only way to manage your environment is via whatever virtual (software) interfaces the provider makes available to you, and the interface of choice for most providers is an API. APIs can be crudely defined as a canned set of rules designed to complete or trigger certain functions that a provider makes available to authorized users. For example, a provider can have a set of APIs that end users can invoke to launch a brand-new Fedora server VM with 128GB of RAM in a data center located in Antarctica. APIs provide a software abstraction layer that end users can interact with in a controlled manner.


The net result of all this is that OpenStack, Google, AWS, Azure, Bob, and his uncle all have their own unique APIs that you must use to interact with them. Thankfully, most of these APIs are implemented via a common and well-understood Representational State Transfer (REST) interface. Once you are authorized and understand the nuances of a system’s API, you can easily bring up one or a thousand new Linux servers using a few well-crafted commands!
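As a crude illustration, most of these REST calls boil down to an authenticated HTTP request against a provider endpoint. The endpoint, token header, and JSON body below are entirely hypothetical, and the request is only printed, not sent:

```shell
# Hypothetical sketch of a REST-style "create server" call. The
# endpoint, token, and JSON body are made up; the curl request is
# printed rather than sent.
TOKEN="fake-auth-token"
ENDPOINT="https://compute.example-cloud.com/v2.1/servers"
BODY='{"server": {"name": "web01", "flavorRef": "small"}}'
printf 'curl -X POST -H "X-Auth-Token: %s" -d %s %s\n' "$TOKEN" "$BODY" "$ENDPOINT"
```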


Linode

Linode has a long and rich history in the Infrastructure as a Service (IaaS) world, specifically in the area of its Linux-based distro offerings. A VM in the Linode world is often referred to as a “linode” (Linux node).

Besides the graphical web front-end for managing resources in the Linode cloud, end users also have the option of a rich command-line interface aptly named Linode CLI. The Linode CLI is a wrapper around the rich Linode API. You can download and learn more about the Linode CLI from Linode’s documentation site.

Once it’s properly configured and you supply the correct parameters, you should be able to bring up a sample linode VM under your account with a command similar to the one here:

$ linode-cli linodes create \
    --type g5-standard-2 --region us-east \
    --image <IMAGE_NAME> --root_pass <ROOT_PASSWORD>

Amazon Web Services (AWS)

AWS provides cloud computing services and products for various applications and industries. There are literally hundreds of services under the AWS umbrella. Amazon Elastic Compute Cloud (EC2) is the specific AWS offering that provides the compute infrastructure that interests us here.

AWS and numerous third parties have curated a decent collection of virtual machines on the EC2 platform. The AWS homegrown tool for interacting with its various cloud services is called aws or aws-cli. You can learn more about the aws tool at the AWS documentation site.

After signing up for an AWS account and properly configuring the tool, you can launch a new VM in AWS by supplying the correct values for the parameters (Amazon Machine Image ID, number of instances, and so on) in the following sample command:

$ aws ec2 run-instances \
    --image-id <IMAGE_ID> --count <NUMBER_OF_INSTANCES> \
    --instance-type <TYPE_OF_INSTANCE> --key-name <KEY_PAIR_NAME> \
    --security-group-ids <SECURITY_GROUP_ID>


Google Cloud Platform (GCP)

GCP is made up of a plethora of services - and the number of services keeps growing. The specific component under GCP that is closest to offering a traditional Linux (virtual) server experience to end users is the Google Compute Engine (GCE). Google’s homegrown command-line toolset for end users to interact with its products and services under GCP is delivered via its Cloud SDK. The specific tool for interacting with GCE is called gcloud. You can download and learn more about the Cloud SDK and gcloud at Google’s Cloud SDK documentation site.

After signing up for a GCP account and setting up your environment, bringing up a new Linux server can be as simple as supplying the correct values for the parameters (image family, image project, and so on) and running the following command:

$ gcloud compute instances create <INSTANCE_NAME> \
    --image-family <IMAGE_FAMILY> \
    --image-project <IMAGE_PROJECT>


Azure

Azure is Microsoft’s cloud computing arm. As of this writing, it comprises several hundred different cloud services! The Azure component we are interested in here is aptly named “Virtual Machines,” and it allows end users to create Linux and Windows virtual machines on Azure’s IaaS platform.

Azure CLI (az) is the command-line tool for managing various things in the Azure world. You can download and learn more about this tool at Microsoft’s documentation site for the Azure CLI.

As with the other commercial public cloud providers, you’ll need to sign up for an account before you can start spinning up servers on Azure’s infrastructure. Once you have the az tool properly set up and you’ve created any needed resources (such as a resource group), you should be able to spin up a sample Ubuntu server in Azure by running a command similar to the one here:

$ az vm create \
    --resource-group <RESOURCE_GROUP_NAME> \
    --name <VM_NAME> --image <IMAGE_NAME>



Even though almost everything in the cloud is virtual, it is still very important that you as the system administrator have a good understanding of the main principles of administering physical servers. The concepts are very similar; only the scale is different. The rest of this blog covers a good chunk of the main principles of Linux server administration, such that you will be ready to work on both physical and virtual cloud-based servers.

As we wrote at the beginning of this article, one of the beauties of the cloud computing model is that almost everything comes prebaked for you in the cloud. However, to take full advantage of the ease of use and accessibility of the cloud, you need to be able to speak the language of the provider and play by their rules - and this is often done via APIs and custom tools wrapped around the APIs.


