Understanding the differences between Windows and Linux

Windows vs. Linux comparison

As you might imagine, the differences between Microsoft Windows and the Linux operating system cannot be fully covered in a single article. Throughout this blog, topic by topic, you’ll read about the specific contrasts between the two systems. In some areas you’ll find no comparison at all, because no major difference really exists. But before we dig into the details, let’s take a moment to discuss the primary architectural differences between the two operating systems.


Single Users vs. Multiple Users vs. Network Users

Windows was originally designed according to the “one computer, one desk, one user” vision of Microsoft co-founder Bill Gates. For the sake of discussion, we’ll call this philosophy “single user.” In this arrangement, two people cannot work in parallel running (for example) Microsoft Word on the same machine at the same time. You can buy Windows and run what is known as Terminal Services or thin clients, but this requires extra hardware and extra licensing costs. With Linux, you don’t run into the cost problem, and Linux runs reasonably well on modestly specified hardware. Linux easily supports multiuser environments, in which multiple users doing different things can be logged onto a central machine at the same time. The operating system on the central machine takes care of the resource-sharing details.
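As a minimal illustration of this multiuser model, here is what a quick check might look like on a central Linux machine that two people have reached over SSH. The user names, timestamps, and addresses are purely illustrative:

    $ who      # run on the central machine; lists everyone logged in right now
    alice    pts/0   2021-02-11 09:14 (192.168.1.20)
    bob      pts/1   2021-02-11 09:17 (192.168.1.31)

Each user gets an independent session, and the kernel schedules their processes side by side.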

“But, hey! Windows can allow people to offload computationally intensive work to a single machine!” you may argue. “Just look at SQL Server!” Well, that position is only half correct. Both Linux and Windows are indeed capable of providing services such as databases over the network. We can call users of this arrangement network users, since they are never actually logged into the server but rather send requests to the server. The server does the work and then sends the results back to the user via the network. The catch in this case is that an application must be specifically written to perform such server/client duties. Under Linux, a user can run any program allowed by the system administrator on the server without having to redesign that program. Most users find the ability to run arbitrary programs on other machines to be of significant benefit.
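To make that concrete, here is a hedged sketch of what running arbitrary programs on another machine looks like in practice; the host name and script are hypothetical, and all that’s assumed is an ordinary SSH login on the server:

    $ ssh user@server uptime            # run the standard uptime utility on the server
    $ ssh user@server ./my-report.sh    # any program the administrator permits, unmodified

The programs themselves need no special client/server redesign; they simply run on the remote machine under the user’s account.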

The Monolithic Kernel and the Micro-Kernel

Three popular forms of kernels are used in operating systems. The first, a monolithic kernel, provides all the services the user applications need. The second, a micro-kernel, is much more minimal in scope and provides only the bare minimum core set of services needed to implement the operating system. And the third is a hybrid of the first two.

Linux, for the most part, adopts the monolithic kernel architecture: the kernel handles everything dealing with the hardware and system calls. Windows, on the other hand, has traditionally worked off a micro-kernel design, with the latest Windows server versions using the hybrid approach. The Windows kernel provides a small set of services and then interfaces with other executive services that provide process management, input/output (I/O) management, and so on. Neither methodology has yet been proved definitively better.
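You can see Linux’s monolithic (but modular) kernel for yourself on virtually any distribution with two standard commands:

    $ uname -r       # print the version of the running kernel
    $ lsmod | head   # list some of the loadable modules currently linked into that kernel

The module list is a reminder that “monolithic” does not mean static: drivers and other services can be loaded into and unloaded from the running kernel.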

Separation of the GUI and the Kernel

Taking a cue from the original Macintosh design concept, Windows developers integrated the GUI with the core operating system. One simply does not exist without the other. The benefit of this tight coupling of the operating system and user interface is consistency in the appearance of the system.

Although Microsoft does not impose rules as strict as Apple’s with respect to the appearance of applications, most developers tend to stick with a basic look and feel among applications. One danger of this tight integration, however, is that the video card driver is allowed to run at what is known as “Ring 0” on a typical x86 architecture. Ring 0 is a protection level: only privileged processes may run there, while ordinary user processes run at Ring 3. Because the video driver runs at Ring 0, it can misbehave (and it does!), and when it misbehaves, it can bring down the whole system.

On the other hand, Linux (like UNIX in general) has kept the two elements - user interface and operating system - separate. The windowing or graphical stack (X11, Xorg, Wayland, and so on) runs as a user-level application, which makes the overall system more stable. If the GUI (which is complex on both Windows and Linux) fails, Linux’s core does not go down with it; the GUI process simply crashes, leaving you at a text console. The graphical stack also differs from the Windows GUI in that it isn’t a complete user interface: it defines only how basic objects should be drawn and manipulated on the screen.
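Because the graphical stack is just another set of processes, you can stop and restart it without rebooting. A minimal sketch, assuming a systemd-based distribution that uses the GDM display manager (substitute lightdm, sddm, and so on as appropriate):

    $ sudo systemctl stop gdm     # the GUI goes away; the operating system keeps running
    $ sudo systemctl start gdm    # the GUI comes back - no reboot required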

One of the most significant features of the X Window System is its ability to display windows across a network and onto another workstation’s screen. This allows a user sitting on host A to log into host B, run an application on host B, and have all of the output routed back to host A. It is possible, for example, for several users to be logged into the same machine and simultaneously use an open source equivalent of Microsoft Word (such as LibreOffice).
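The most common way to use this capability today is through SSH’s X11 forwarding. Assuming host B permits it (X11Forwarding yes in its sshd configuration), a session started from host A might look like this; the host and application names are placeholders:

    $ ssh -X user@hostB    # log into host B with X11 forwarding enabled
    $ libreoffice &        # runs on host B, but its window appears on host A's screen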

In addition to the core graphical stack, a window manager (and usually a full desktop environment) is needed to create a useful environment. Linux distributions come with several to choose from, including the heavyweight and popular GNOME and KDE environments. Both offer an environment that is friendly, even to the casual Windows user. If you’re concerned with speed and a small footprint, you can look into lighter alternatives such as the F Virtual Window Manager (FVWM), the Lightweight X11 Desktop Environment (LXDE), and Xfce.

So which approach is better - Windows or Linux - and why? That depends on what you are trying to do. The integrated environment provided by Windows is convenient and less complex than Linux, but out of the box, Windows lacks the X Window System feature that allows applications to display their windows across the network on another workstation. The Windows GUI is consistent, but it cannot be easily turned off, whereas the X Window System doesn’t have to be running (and consuming valuable hardware resources) on a server.

NOTE  

With its latest server family of operating systems, Microsoft has somewhat decoupled the GUI from the base operating system (OS). You can now install and run the server in a so-called Server Core mode. Managing the server in this mode is done via the command line or remotely from a regular system, with full GUI capabilities.

My Network Places

The native mechanism for Windows users to share disks on servers or with each other is My Network Places (formerly Network Neighborhood). In a typical scenario, users attach to a share and have the system assign it a drive letter. As a result, the separation between client and server is clear. The only problem with this method of sharing data is more people-oriented than technological: people have to know which servers contain which data.

With Windows, a feature borrowed from UNIX has also appeared: mounting. In Windows terminology, it is implemented through reparse points, which provide the ability to mount (for example) an optical drive into a directory on your C: drive.

Right from its inception, Linux was built with support for the concept of mounting, and as a result, different types of file systems can be mounted using different protocols and methods. For example, the popular Network File System (NFS) protocol can be used to mount remote shares/folders and make them appear local. In fact, the Linux Automounter can dynamically mount and unmount different file systems on an as-needed basis. The concept of mounting resources (optical media, network shares, and so on) in Linux/UNIX might seem a little strange, but as you get used to Linux, you’ll understand and appreciate the beauty in this design. To get anything close to this functionality in Windows, you have to map a network share to a drive letter.
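As a quick sketch of the idea, here is how a remote NFS export might be attached so that it appears to be an ordinary local directory. The server name and paths are hypothetical:

    $ sudo mkdir -p /mnt/projects                                  # create a local mount point
    $ sudo mount -t nfs fileserver:/export/projects /mnt/projects  # attach the remote share
    $ ls /mnt/projects                                             # contents now look local
    $ sudo umount /mnt/projects                                    # detach when finished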

A common example of mounting resources under Linux involves mounted home directories. The user’s home directories can reside on a remote server, and the client systems can automatically mount the directories at boot time. So the /home (pronounced slash home) directory exists on the client, but the /home/username directory (and its contents) can reside on the remote server.
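With the automounter (autofs), this takes just two small map files. A minimal sketch, assuming a hypothetical NFS server named homesrv that exports /export/home:

    # /etc/auto.master: directories managed by the automounter
    /home  /etc/auto.home

    # /etc/auto.home: mount any user's directory on demand, using a wildcard
    *  -fstype=nfs  homesrv:/export/home/&

With this in place, the first access to /home/username causes autofs to mount homesrv:/export/home/username transparently and to unmount it again after a period of inactivity.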

With NFS and other network file systems under Linux, users never have to know server names or directory paths, and their ignorance is your bliss. No more questions about which server to connect to. Even better, users need not know when the server configuration changes. Under Linux, you can change the names of servers and adjust this information on client-side systems without making any announcements or having to reeducate users. Anyone who has ever had to reorient users to new server arrangements or major infrastructure changes will appreciate the benefits and convenience of this.

The Registry vs. Text Files

Think of the Windows Registry as the ultimate configuration database - thousands upon thousands of entries, only a few of which are completely documented.

“What? Did you say your Registry got corrupted?” <insert maniacal laughter> “Well, yes, we can try to restore it from last night’s backups, but then Excel starts acting funny and the technician (who charges $130 just to answer the phone) said to reinstall…”

In other words, the Windows Registry system can be, at best, difficult to manage. Although it’s a good idea in theory, most people who have serious dealings with it don’t emerge from battling it without a scar or two.

Linux does not have a registry, and this is both a blessing and a curse. The blessing is that configuration files are most often kept as a series of text files (think of the Windows .ini files). This setup means you’re able to edit configuration files using the text editor of your choice rather than tools such as regedit. In many cases, it also means you can liberally add comments to those configuration files so that six months from now you won’t forget why you set up something in a particular way. Most software programs that are used on Linux platforms store their configuration files under the /etc (pronounced slash etc) directory or one of its subdirectories. This convention is widely understood and accepted in the FOSS world.
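For example, you can inspect a service’s settings with nothing more than standard text tools. The OpenSSH server’s configuration file is used here simply because it exists on most systems:

    $ grep -v '^#' /etc/ssh/sshd_config | grep -v '^$'   # show only the active, non-comment lines
    $ sudo nano /etc/ssh/sshd_config                     # edit with whatever text editor you prefer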

The curse of a no-registry arrangement is that there is no standard way of writing configuration files. Each application can have its own format. Many applications are now coming bundled with GUI-based configuration tools to alleviate some of these problems. So you can do a basic setup easily via the GUI tool and then manually edit the configuration file when you need to do more complex adjustments.

In reality, having text files hold configuration information usually turns out to be an efficient method, and it makes automation much easier, too. Once set, these files rarely need to be changed; even so, they are straight text files and therefore easy to view and edit when needed. Even more helpful, it’s easy to write scripts that read those same configuration files and change them programmatically. This is especially helpful when automating server maintenance operations, which is crucial in a large site with many servers.
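Here is a hedged sketch of that kind of automation: a loop that changes one setting in a configuration file across several servers over SSH. The host names, file, and setting are all hypothetical, and a production script would also back up the file and reload the affected service:

    #!/bin/sh
    # Point every application server at the new log directory.
    for host in app1 app2 app3; do
        ssh "$host" "sudo sed -i.bak 's|^LogDir=.*|LogDir=/var/log/app|' /etc/myapp/app.conf"
    done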

Domains and Active Directory

The idea behind Microsoft’s Active Directory (AD) is simple: Provide a repository for any kind of administrative data, whether it is user logins, group information, or even just telephone numbers. In addition, AD provides a central place to manage authentication and authorization (using Kerberos and LDAP) for a domain. The domain synchronization model also follows a reliable and well-understood Domain Name System (DNS)–style hierarchy. As tedious as it may be, AD works pretty well when properly set up and maintained.

Out of the box, Linux does not use a tightly coupled authentication/authorization and data store model the way that Windows does with AD. Instead, Linux uses an abstraction model that allows for multiple types of stores and authentication schemes to work without any modification to other applications. This is accomplished through the Pluggable Authentication Modules (PAM) infrastructure and the name resolution libraries that provide a standard means of looking up user and group information for applications. PAM also provides a flexible way of storing that user and group information using a variety of schemes.
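You can watch this abstraction at work with the standard getent utility, which resolves users and groups through the name resolution libraries regardless of whether the data lives in flat files, NIS, or LDAP. The account and group names below are placeholders:

    $ getent passwd alice    # resolved via whatever sources /etc/nsswitch.conf lists
    $ getent group admins    # group lookups go through the same abstraction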

For administrators coming to Linux, this abstraction layer can seem peculiar at first. Consider, however, that you can use anything from flat files to the Network Information Service (NIS), the Lightweight Directory Access Protocol (LDAP), or Kerberos for authentication, which means you can pick the system that works best for you. For example, if you have an existing infrastructure built around AD, your Linux systems can use PAM with Samba or LDAP to authenticate against the Windows domain model. And, of course, you can choose to make your Linux system not interact with any external authentication system at all. In addition to being able to tie into multiple authentication systems, Linux can easily use a variety of tools, such as OpenLDAP, to keep directory information centrally available as well.
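As an illustrative sketch, on distributions that ship realmd and SSSD, joining a Linux machine to an existing AD domain can be as brief as the following; the domain name and account are placeholders:

    $ sudo realm discover example.com                     # confirm the domain is reachable
    $ sudo realm join --user=Administrator example.com    # join it; PAM and SSSD are configured for you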

Summary

This article provided an overview of the basic architectural features of Linux and Windows. In our opinion, Linux is more attractive to use and gives users more options. Of course, the final choice is always determined by the specific tasks at hand and by personal preference.
