Introducing Core IT Security Principles

Андрей Волков

When thinking about security, most people start by thinking about their stuff. We all have stuff. We have stuff that we really care about, stuff that would be really difficult to replace, and stuff that has great sentimental value. We have stuff we really don't want other people to find out about. We even have stuff that we could probably live without. Now think about where you keep your stuff. It could be in your house, your car, your school, your office, in a locker, in a backpack or a suitcase, or in a number of other places. Lastly, think about all of the dangers that could happen to your stuff: you could be robbed, or you could experience a disaster such as a fire, earthquake, or flood. In any case, we all want to protect our possessions, no matter where the threat comes from.



At a high level, security is about protecting stuff. In the case of personal stuff, it's about locking the door when leaving the house, remembering to take your purse when leaving a restaurant, or even covering the presents purchased for Christmas and putting them in the back of the car before heading back into the mall.

Many of the security topics we will discuss in this blog boil down to the same common sense used every day to protect stuff. In the business environment, the stuff we protect is assets, information, systems, and networks, and we can protect these valuable assets with a variety of tools and techniques that we will discuss at length here.

In this article, we will start with the basics. We'll look at some of the underlying principles of a security program to set the foundation for understanding the more advanced topics covered later in my blog. We'll also discuss the concepts of physical security, which is critical not only for securing physical assets but also information assets. By the time we're done, you'll have a good idea of how to protect stuff for a living.

A fundamental understanding of the standard concepts of security is essential before you can start securing your environment. It's easy to start buying firewalls, but until you understand what needs to be protected, why it needs to be protected, and what it's being protected from, you're just throwing money away.

One of the first acronyms you will encounter in the information security field is CIA. Not to be confused with the government agency of the same name, in information security this acronym represents the core goals of an information security program. These goals are:

  • Confidentiality 
  • Integrity 
  • Availability 

 

Understanding Confidentiality 

Confidentiality is a concept we deal with frequently in real life. We expect our doctor to keep our medical records confidential. We trust our friends to keep our secrets confidential. In the business world, we define confidentiality as the characteristic of a resource that ensures access is restricted to only permitted users, applications, or computer systems.

What does this mean in reality? Confidentiality deals with keeping information, networks,  and systems secure from unauthorized access. 

An area where this issue is particularly critical in today's environment is the high-profile leaking of people's personal information by several large companies. These breaches in confidentiality made the news largely because the information could be used to perpetrate identity theft against the people whose information was breached.

There are several technologies that support confidentiality in an enterprise security implementation. These include the following:

  • Strong encryption 
  • Strong authentication 
  • Stringent access controls 

Another key component to consider when discussing confidentiality is how to determine what information is considered confidential. Some common classifications of data are Public, Internal Use Only, Confidential, and Strictly Confidential. The Privileged classification is also used frequently in the legal profession. The military often uses Unclassified, Restricted, Confidential, Secret, and Top Secret. These classifications are then used to determine the appropriate measures needed to protect the information. If information is not classified, there are two options available: protecting all information as if it were confidential (an expensive and daunting task) or treating all information as if it were Public or Internal Use Only and not taking stringent protection measures.
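
To show how classification labels can drive protection requirements, here is a minimal Python sketch. The labels and the controls mapped to them are assumptions for illustration, not a prescribed standard.

```python
from enum import IntEnum

class Classification(IntEnum):
    """Example classification levels, ordered from least to most sensitive."""
    PUBLIC = 1
    INTERNAL_USE_ONLY = 2
    CONFIDENTIAL = 3
    STRICTLY_CONFIDENTIAL = 4

# Hypothetical mapping of classification level to minimum required controls.
REQUIRED_CONTROLS = {
    Classification.PUBLIC: set(),
    Classification.INTERNAL_USE_ONLY: {"authentication"},
    Classification.CONFIDENTIAL: {"authentication", "access_control",
                                  "encryption_at_rest"},
    Classification.STRICTLY_CONFIDENTIAL: {"authentication", "access_control",
                                           "encryption_at_rest",
                                           "encryption_in_transit"},
}

def controls_for(label: Classification) -> set[str]:
    """Return the minimum set of controls required for a given classification."""
    return REQUIRED_CONTROLS[label]

print(controls_for(Classification.CONFIDENTIAL))
```

The point of a table like this is that once data is labeled, the required protections follow automatically instead of being decided case by case.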

Classify all data and assets - it’s the only way to effectively protect them.

 

Understanding Integrity

We define integrity in the information security context as the consistency, accuracy, and validity of data or information. One of the goals of a successful information security program is to ensure that information is protected against any unauthorized or accidental changes. The program should include processes and procedures to manage intentional changes, as well as the ability to detect changes.

Some of the processes that can be used to effectively ensure the integrity of information include authentication, authorization, and accounting. For example, rights and permissions can be used to control who is able to access the information or resource. A hashing function (a mathematical function) can also be calculated before and after a change to show whether the information has been modified. In addition, an auditing or accounting system can be used to record when changes are made.
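
To make the hashing idea concrete, here is a minimal Python sketch using the standard library's hashlib module: a digest is recorded when the data is stored and recomputed later, and any difference shows the contents were modified. The file name is hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents as a hex string."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file whose integrity we want to verify.
record = Path("payroll_export.csv")

baseline = sha256_of(record)   # computed when the file is stored
# ... time passes; the file may or may not have been altered ...
current = sha256_of(record)    # recomputed later

if current != baseline:
    print("Integrity check failed: the file has been modified.")
else:
    print("Integrity check passed: the contents are unchanged.")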

 

Understanding Availability

Availability is the third core security principle, and it is defined as a characteristic of a resource being accessible to a user, application, or computer system when required. In other words, when a user needs to get to information, it's available to them. Typically, threats to availability come in two types: accidental and deliberate. Accidental threats include natural disasters like storms, floods, fires, power outages, and earthquakes; this category also includes outages due to equipment failure, software issues, and other unplanned system, network, or user issues. The second category relates to outages that result from the exploitation of a system vulnerability, such as a denial-of-service attack or a network worm that impacts vulnerable systems and their availability. In some cases, one of the first actions to take following an outage is to determine which category the outage fits into, because companies handle accidental outages very differently than deliberate ones.

Defining Threat and Risk Management

Threat and risk management is the process of identifying, assessing, and prioritizing threats and risks. A risk is generally defined as the probability that an event will occur. In practice, businesses are only concerned about risks that would negatively impact the computing environment. There is a risk that you'll win the lottery on Friday, but that's not a risk to actively address, because the outcome would be positive. A threat is a very specific type of risk, defined as an action or occurrence that could result in a breach of security, an outage, or the corruption of a system by exploiting known or unknown vulnerabilities. The goal of any risk management plan is to remove risks when possible and to minimize the consequences of risks that cannot be eliminated.

The first step in creating a risk management plan is to conduct a risk assessment. Risk assessments are used to identify the risks that might impact an environment.

In a mature risk assessment environment, it is common to record risks in a risk register, which provides a formal mechanism for documenting the risks, impacts, controls, and other information required by the risk management program.

After completing an assessment and identifying risks, the next step is to evaluate each risk on two factors. First, determine the likelihood that the risk will occur in the environment. For example, a tornado is much more likely in Oklahoma than in Vermont. A meteor strike is probably not very likely anywhere, although it's the example commonly used to represent the complete loss of a facility when discussing risk. After determining the likelihood of a risk, determine the impact of that risk on the environment. A virus on a user's workstation generally has a relatively low impact on the company, although it can have a high impact on the user. A virus on the company's financial system has a much higher impact, although hopefully a lower likelihood.

After evaluating risks, it’s time to prioritize them. One of the best mechanisms to assist with the prioritization is to create a risk matrix, which can be used to determine an overall risk ranking. A risk matrix should include the following:

  • The risk
  • The likelihood that the risk will actually occur
  • The impact of the risk
  • A total risk score
  • The relevant business owner for the risk
  • The core security principles that the risk impacts (confidentiality, integrity, and/or availability)
  • The appropriate strategy or strategies to deal with the risk

Some additional fields that may prove useful in a risk register include:

  • A deliverable date for the risk to be addressed.
  • Documentation about the residual risk, which is the risk of an event that remains after measures have been taken to reduce the likelihood or minimize the effect of the event.
  • A status on the strategy or strategies to address the risk. These can include status indicators like Planning, Awaiting Approval, Implementation, and Complete.

One easy way to calculate a total risk score is to assign numeric values to the likelihood and impact. For example, rank likelihood and impact on a scale from 1 to 5, where 1 equals low likelihood or low impact, and 5 equals high likelihood or high impact. Then, multiply the likelihood by the impact to generate a total risk score. Sorting from high to low provides an easy way to initially prioritize the risks. Next, review the specific risks to determine the final order in which to address them. At this point, external factors, such as cost or available resources, might affect the priorities.
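
Here is a minimal Python sketch of that scoring approach, assuming the 1-to-5 scales described above; the risks and values are made up for illustration.

```python
# Each entry: (risk description, likelihood 1-5, impact 1-5) -- example values only.
risks = [
    ("Tornado destroys primary data center", 2, 5),
    ("Virus on a user workstation",          4, 2),
    ("Virus on the financial system",        2, 5),
    ("Lost or stolen laptop",                3, 3),
]

# Total risk score = likelihood x impact, sorted from highest to lowest.
scored = sorted(
    ((desc, likelihood * impact) for desc, likelihood, impact in risks),
    key=lambda item: item[1],
    reverse=True,
)

for description, score in scored:
    print(f"{score:>2}  {description}")
```

The sorted output gives only the initial ordering; as noted above, cost, available resources, and business priorities still shape the final sequence in which risks are addressed.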

After prioritizing all risks, there are four generally accepted responses to these risks. These responses include the following:

  • Avoid
  • Accept
  • Mitigate
  • Transfer

Risk avoidance is the process of eliminating a risk by choosing not to engage in an action or activity. An example of risk avoidance would be a person who identifies a risk that the value of a stock might drop and avoids that risk by not purchasing the stock. A problem with risk avoidance is that there is frequently a reward associated with a risk: avoid the risk and you avoid the reward. If the stock in the example were to triple in price, the risk-averse investor would lose out on the reward because he or she wanted to avoid the risk.

Risk acceptance is the act of identifying and then making an informed decision to accept the likelihood and impact of a specific risk. In the stock example, risk acceptance would be the process where a buyer would thoroughly research a company whose stock they are interested in, and after ensuring they are informed, make the decision to accept the risk that the price might drop.

Risk mitigation consists of taking steps to reduce the likelihood or impact of a risk. A common example of risk mitigation is the use of redundant hard drives in a server. There is a risk of hard drive failure in any system; by using a redundant drive architecture, you mitigate that risk because the data survives the failure of a single drive. The risk still exists, but it has been reduced by your actions.

Risk transfer is the act of taking steps to move responsibility for a risk to a third party through insurance or outsourcing. For example, there is a risk that a person may have an accident while driving a car. Purchasing insurance transfers this risk, so that in the event of an accident, the insurance company is responsible to pay the majority of the associated costs.

One other concept in risk management that needs to be covered is residual risk. Residual risk is the risk of an event that remains after measures have been taken to reduce the likelihood or minimize the effect of the event. To continue with the car insurance example, the residual risk of an accident would be the deductible the driver has to pay.

There are many different ways to identify, assess, and prioritize risks. There is no one right way. Use the techniques that best fit the environment and requirements.

While we are discussing risks, we need to look at two final concepts that will help you understand the foundations of security principles and risk management.

 

Understanding the Principle of Least Privilege

The Principle of Least Privilege is a security discipline that requires that a user, system, or application be given no more privilege than necessary to perform its function or job. On its face, this sounds like a very commonsense approach to assigning permissions, and when seen on paper, it is. However, when you try to apply this principle in a complex production environment, it becomes significantly more challenging.

The Principle of Least Privilege has been a staple in the security arena for a number of years, but many organizations struggle to implement it successfully. However, with an increased focus on security from both a business and a regulatory perspective, organizations are working harder to build their models around this principle. The regulatory requirements of Sarbanes-Oxley, HIPAA, HITECH, and the large number of state data/privacy breach regulations, coupled with an increased focus by businesses on the security practices of their business partners, vendors, and consultants, are driving organizations to invest in tools, processes, and other resources to ensure this principle is followed.

But why is a principle that sounds so simple on paper so difficult to implement in reality? The challenge is largely related to the complexity of a typical environment. It is very easy to visualize how to handle this for a single employee. On a physical basis, they would need access to the building they work in, common areas, and their office.

Logically, the employee needs to be able to log on to their computer, have user access to some centralized applications, access to a file server, a printer, and an internal website. Now, imagine that user multiplied by a thousand. The thousand employees work in six different office locations. Some employees need access to all the locations, while others only need access to their own location. Still others need access to subsets of the six locations; they might need access to the two offices in their region, for example. Some will need access to the data center so they can provide IT support.

Logically, instead of a single set of access requirements, there are multiple departments with varying application requirements. The different user types vary from a user to a power user to an administrator, and you need to determine not only which employee is which type of user, but also manage their access across all the internal applications. Add to this mix new hires, employees being transferred or promoted, and employees who leave the company, and you can start to see how making sure that each employee has the minimum amount of access required to do their job can be a time-intensive activity.

But wait, we're not done. In addition to the physical and user permissions, in many IT environments applications also need to access data and other applications. In order to follow the Principle of Least Privilege, it is important to ensure that applications have the minimum access they need to function properly. This can be extremely difficult when working in a Microsoft Active Directory environment, due to the highly granular permissions included in Active Directory; determining which permissions an application requires to function properly can be challenging in the extreme.

To further complicate matters, in industries where there is heavy regulation, like Finance or Medical, or when regulations like Sarbanes-Oxley are in effect, there are additional requirements that are audited regularly to ensure the successful implementation and validation of privileges across the enterprise.

Getting into a detailed discussion of how to implement and maintain the Principle of Least Privilege is beyond the scope of this blog, but there are some high-level tools and strategies to be aware of:

Groups   Groups can be used to logically group users and applications so that permissions are not applied on a user-by-user or application-by-application basis (see the sketch after this list).

Multiple User Accounts for Administrators   One of the largest challenges when implementing the Principle of Least Privilege relates to administrators. Administrators are typically also users, and it is seldom a good idea for administrators to perform their daily user tasks as an administrator. To address this issue, many companies will issue their administrators two accounts—one for their role as a user of the company’s applications and systems, and the other for their role as an administrator.

Account Standardization   The best way to simplify a complex environment is to standardize on a limited number of account types. Each additional account type permitted in an environment adds an order of magnitude of complexity to the permissions management strategy, so standardizing on a limited set of account types makes managing the environment much easier.

Third-Party Applications   There are a variety of third-party tools designed to make managing permissions easier. These can range from account lifecycle management applications to auditing applications and application firewalls.

Processes and Procedures   One of the easiest ways to manage permissions in an environment is to have a solid framework of processes and procedures for managing accounts. With this framework to rely on, the support organization doesn't have to address each account as a unique circumstance. They can rely on the defined process to determine how an account is created, classified, permissioned, and maintained.
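
To illustrate the group-based approach mentioned above, here is a minimal Python sketch in which permissions are attached to groups and users inherit rights only through membership; the group names and permission strings are assumptions for illustration.

```python
# Permissions are granted to groups, never to individual users (example data only).
group_permissions = {
    "hr_staff":        {"read:hr_share"},
    "finance_staff":   {"read:finance_share", "write:finance_share"},
    "helpdesk_admins": {"reset:user_passwords"},
}

# Users are made members of groups; their effective rights are the union
# of the permissions of every group they belong to.
user_groups = {
    "alice": {"hr_staff"},
    "bob":   {"finance_staff", "helpdesk_admins"},
}

def effective_permissions(user: str) -> set[str]:
    """Collect a user's permissions from group membership alone."""
    perms: set[str] = set()
    for group in user_groups.get(user, set()):
        perms |= group_permissions.get(group, set())
    return perms

def is_allowed(user: str, permission: str) -> bool:
    return permission in effective_permissions(user)

print(is_allowed("alice", "write:finance_share"))  # False: least privilege holds
print(is_allowed("bob", "read:finance_share"))     # True
```

Because rights flow only through groups, reviewing least privilege becomes a matter of reviewing a short list of group definitions rather than thousands of individual accounts.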

A perfect implementation of the Principle of Least Privilege is very rare. A best effort is typically what is expected and is achievable.

 

Understanding Separation of Duties

Separation of duties is a principle that prevents any single person or entity from being able to have full access or complete all the functions of a critical or sensitive process. It is designed to prevent fraud, theft, and errors.

When dealing with orders and payments, it is common to divide those processes into two or more sub-processes. For example, in accounting, the Accounts Receivable employees review and validate bills, and the Accounts Payable employees pay the bills. In addition, the users involved in these critical processes should not have access to the audit logs; a third set of employees reviews the logs to confirm that no suspicious activity has occurred.

When working with IT, while there may be administrators with full access to an application or service, such as a database, those administrators should not be given access to the security logs. Instead, security administrators regularly review the logs, but those security administrators do not have access to the data within the databases. To maintain separation of duties, review user rights and permissions on a regular basis to ensure that the separation is preserved.
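
As a sketch of how a regular permissions review might flag separation-of-duties violations, the following Python example checks each user's rights against a list of permission pairs that should never be combined; the role names and pairs are assumptions for illustration.

```python
# Pairs of permissions that should never be held by the same person (example policy).
CONFLICTING_PAIRS = [
    ("database:full_access", "security_logs:read"),
    ("accounts_payable:approve", "accounts_payable:pay"),
]

# Hypothetical snapshot of who currently holds which rights.
assignments = {
    "dba_carol":     {"database:full_access"},
    "sec_admin_dan": {"security_logs:read"},
    "clerk_erin":    {"accounts_payable:approve", "accounts_payable:pay"},  # conflict
}

def find_conflicts(rights: dict[str, set[str]]) -> list[tuple[str, tuple[str, str]]]:
    """Return (user, conflicting pair) for every separation-of-duties violation."""
    violations = []
    for user, perms in rights.items():
        for a, b in CONFLICTING_PAIRS:
            if a in perms and b in perms:
                violations.append((user, (a, b)))
    return violations

for user, pair in find_conflicts(assignments):
    print(f"Separation-of-duties conflict for {user}: {pair[0]} + {pair[1]}")
```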

 

Understanding an Attack Surface

One final concept to tackle when discussing core security principles is the idea of evaluating an environment's attack surface. The concept of an attack surface with respect to systems, networks, or applications has been around for some time. An attack surface consists of the set of methods and avenues an attacker can use to enter a system and potentially cause damage. The larger the attack surface of an environment, the greater the risk of a successful attack.

In order to determine the attack surface of an environment, it’s frequently easiest to divide the evaluation into three components:

  • Application
  • Network
  • Employee

When evaluating the application attack surface, look at things like the following (a small sketch follows the list):

  • Amount of code in an application
  • Number of data inputs to an application
  • Number of running services
  • Ports on which the application is listening
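
To make the last two items more concrete, the following sketch inventories the TCP ports on which local services are listening. It assumes the third-party psutil package is available; a real assessment would normally use purpose-built tooling, but the idea is the same.

```python
# A quick inventory of locally listening TCP services, assuming the third-party
# psutil package is installed (pip install psutil). On some platforms, mapping
# sockets to process names requires elevated privileges.
import psutil

listening = set()
for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_LISTEN:
        continue
    try:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
    except psutil.Error:
        name = "unknown"
    listening.add((conn.laddr.port, conn.laddr.ip, name))

for port, ip, name in sorted(listening):
    print(f"{ip}:{port:<6} {name}")
```

Every listening port that is not strictly required is attack surface that can be removed by disabling the corresponding service.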

When evaluating the network attack surface, consider the following:

  • Overall network design
  • Placement of critical systems
  • Placement and rule sets on firewalls
  • Other security-related network devices like IDS, VPN, and so on

When evaluating the employee attack surface, consider the following:

  • Risk of social engineering
  • Potential for human errors
  • Risk of malicious behavior

After evaluating these three types of attack surface, you will have a solid understanding of the total attack surface presented by the environment and a good idea of how an attacker might try to compromise the environment.

 

Performing an Attack Surface Analysis

An attack surface analysis helps identify the attack surface that an organization is exposed to. Because the network infrastructure and the necessary services and applications are usually complicated, particularly for medium and large organizations, performing an attack surface analysis can be just as complicated. When completed, the attack surface analysis can be used to determine how to reduce the attack surface.

When analyzing a network, the first priority is to determine the security boundaries within the organization. At a minimum, an organization should have an internal network, a DMZ, and the Internet. However, when an organization has multiple sites or multiple data centers, it will have security boundaries at each individual site, along with multiple DMZs and multiple Internet connections. A good place to start when determining security boundaries is the organization's network documents. Ensure that the organization has proper documentation, including network diagrams.

After determining the security boundaries, the next step is to determine everything that connects at those boundaries. Typically, this includes routers and firewalls, but it might also include some layer-3 switches. Next, look at the security mechanisms used on the routers, firewalls, and switches, along with any security rules associated with those mechanisms.

With an understanding of the network infrastructure, the next step is to analyze the logs to see which traffic is allowed and which traffic is blocked. Ingress traffic is traffic that originates from outside the network’s routers and proceeds toward a destination inside the network. Egress traffic is network traffic that begins inside a network and proceeds through its routers to its destination somewhere outside of the network.

While network ingress filtering makes Internet traffic traceable to its source, egress filtering helps ensure that unauthorized or malicious traffic never leaves the internal network. Egress traffic might reveal incidents in which an attacker has already gained access to the internal network, or has gained access to internal users who might be releasing confidential information to the attacker with or without their knowledge. Inter-workload communications should remain internal; they should not traverse the perimeter.
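
As a simple illustration of classifying observed flows, here is a Python sketch using the standard ipaddress module; the internal address range and sample addresses are assumptions for illustration.

```python
import ipaddress

# Hypothetical internal address space for this organization.
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def classify_flow(src: str, dst: str) -> str:
    """Label a flow as internal, egress, ingress, or external based on its endpoints."""
    src_internal = ipaddress.ip_address(src) in INTERNAL_NET
    dst_internal = ipaddress.ip_address(dst) in INTERNAL_NET
    if src_internal and dst_internal:
        return "internal"   # inter-workload traffic: should not cross the perimeter
    if src_internal:
        return "egress"     # leaves the network: candidate for egress filtering
    if dst_internal:
        return "ingress"    # enters the network: candidate for ingress filtering
    return "external"       # neither endpoint is ours

print(classify_flow("10.1.2.3", "203.0.113.9"))   # egress
print(classify_flow("198.51.100.4", "10.1.2.3"))  # ingress
```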

It is important to review egress and ingress traffic on a regular basis. When examining egress and ingress traffic, look at the source and target addresses as well as the ports used. The ports help identify the applications and services to which the traffic packets are related. When creating rules that allow traffic in and out, use descriptive names and consider using templates that can help standardize the setup of multiple firewalls and routers.
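
As one way to keep rule names descriptive and consistent across devices, here is a small Python sketch; the naming convention and sample rules are assumptions for illustration, not any vendor's format.

```python
def rule_name(direction: str, zone: str, service: str, action: str) -> str:
    """Build a descriptive, consistent firewall rule name, e.g. 'IN-DMZ-HTTPS-ALLOW'."""
    return "-".join(part.upper() for part in (direction, zone, service, action))

# A template of baseline rules that could be applied to every perimeter firewall.
BASELINE_RULES = [
    {"name": rule_name("in",  "dmz", "https", "allow"), "port": 443, "action": "allow"},
    {"name": rule_name("out", "lan", "smtp",  "deny"),  "port": 25,  "action": "deny"},
]

for rule in BASELINE_RULES:
    print(rule)
```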

In addition to examining egress and ingress traffic, analyze traffic to and from critical systems or systems that contain confidential information. This might help identify problems internally and externally. There might be things that aren’t noticed when analyzing egress and ingress traffic.

Testing can also be performed to identify open ports, services, and/or applications that are running on a system and what can be accessed from the outside. There are applications that can test all ports and check for known vulnerabilities. It is also important to configure intrusion detection/prevention systems, including setting alerts that indicate potential threats as they happen.
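
As a minimal sketch of this kind of testing, the following Python example performs a simple TCP connect scan of the well-known port range on a host you are authorized to test; the target name is hypothetical, and a dedicated scanner or vulnerability assessment tool would normally be used instead.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

TARGET = "test.example.internal"   # hypothetical host you are authorized to scan
PORT_RANGE = range(1, 1025)        # well-known ports

def probe(port: int) -> int | None:
    """Return the port number if a TCP connection succeeds, otherwise None."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((TARGET, port)) == 0:
                return port
    except OSError:
        pass
    return None

with ThreadPoolExecutor(max_workers=50) as pool:
    open_ports = sorted(p for p in pool.map(probe, PORT_RANGE) if p is not None)

print("Open TCP ports:", open_ports)
```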

While analyzing traffic patterns, look at which traffic is encrypted and which traffic is not encrypted. This can help determine which traffic is essential and which traffic could be easily captured. In addition, this helps determine whether encryption should be established for unencrypted data and whether encryption policies need to be established.

To identify the application attack surface, assess all running network services and applications that communicate with other computers. Then, consult best practice or hardening guides to learn how to disable any unnecessary programs and services so that they cannot be used against you.

When a decision has been made to deploy or adopt a software solution or to build a software solution, it is important to build security into the solution from the beginning. If the organization developed the software solution, make sure the developers and designers are following best practices; their work should be audited from time to time to minimize the risk posed by security vulnerabilities. For third-party applications, choose companies that follow best practices. Ensure they have an update mechanism and process in place for security updates.

Because users often provide the biggest attack surface, remember to review current security policies to make sure that they are being followed. Also, determine whether any policies need to be created or modified. Ensure that all administrators and users are aware of the appropriate policies; if they aren’t, ensure that they receive any necessary training.

When evaluating servers, review administrative accounts from time to time to ensure that proper access is provided to the right people. Also, review open sessions. Create and deploy a password policy to make sure that passwords are being changed periodically and that those passwords are strong enough.
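
Here is a minimal sketch of the kinds of checks a password policy might enforce, written in Python; the length and character-class requirements shown are assumptions for illustration and should follow your organization's actual policy.

```python
import string

def password_problems(password: str, min_length: int = 12) -> list[str]:
    """Return a list of reasons a candidate password fails the (example) policy."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not any(c.islower() for c in password):
        problems.append("no lowercase letter")
    if not any(c.isupper() for c in password):
        problems.append("no uppercase letter")
    if not any(c.isdigit() for c in password):
        problems.append("no digit")
    if not any(c in string.punctuation for c in password):
        problems.append("no punctuation character")
    return problems

print(password_problems("Spring2024"))                      # fails length and punctuation checks
print(password_problems("c0rrect-Horse-battery-staple!"))   # passes: empty list
```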

Reviewing and reducing attack surfaces should be done periodically to ensure systems are as secure as possible. Also, update the list of attack surfaces as new vulnerabilities are discovered, as new systems are added, and as systems change.

 

Understanding Social Engineering

One of the key factors to consider when evaluating the employee attack surface is the risk of a social engineering attack. Social engineering is a method used to gain access to data, systems, or networks, primarily through misrepresentation. This technique typically relies on the trusting nature of the person being attacked.

In a typical social engineering attack, the attacker tries to appear as harmless or respectful as possible. These attacks can be perpetrated in person, through email, or over the phone. Attackers will try techniques including pretending to be from a help desk or support department, claiming to be a new employee, or in some cases even offering credentials that identify them as an employee of the company.

Generally, the attacker asks a number of questions in an attempt to identify possible avenues to exploit during an attack. If they do not receive sufficient information from one employee, they may reach out to several others until they have enough for the next phase of the attack.

Some techniques for avoiding social engineering attacks include the following:

Be Suspicious   Phone calls, emails, or visitors who ask questions about the company, its employees, or other internal information, should be treated with extreme suspicion, and if appropriate, reported to the security organization.

Verify Identity   When receiving inquiries that you are unsure of, verify the identity of the requestor. If a caller is asking questions that seem odd, try to get their number so you can call them back. Then, check to ensure that the number is from a legitimate source. If someone approaches with a business card as identification, ask to see a picture ID. Business cards are easy to print, and even easier to take from the "Win a Free Lunch" bowl at a local restaurant.

Be Cautious   Do not provide sensitive information unless certain not only of the person’s identity but also of the person’s right to have the information.

Don't Use Email   Email is inherently insecure and prone to a variety of address spoofing techniques. Don't reveal personal or financial information in email. Never respond to email requests for sensitive information, and be especially cautious of providing this information after following web links embedded in an email. A common trick is to embed a survey link in an email, possibly offering a prize or prize drawing, and then asking questions about the computing environment like "How many firewalls do you have deployed?" or "What firewall vendor do you use?" Employees are so accustomed to seeing these types of survey requests in their inbox that they seldom think twice about responding to them.

The key to thwarting a social engineering attack is employee awareness: if employees know what to look out for, an attacker will find little success.

 

Linking Cost with Security

There are a few points to keep in mind when developing a security plan. First, security costs money. Typically, the more money you spend, the more secure the information or resources will be, up to a point. So, when examining risks and threats, look at how much the confidential data or resource is worth to the organization if it is compromised or lost, and how much money the organization is willing to spend to protect it.

In addition to cost, strive to make security as seamless as possible for the users who access the confidential information or resources. If security becomes a heavy burden, users will often look for ways to circumvent the measures that have been established. Of course, training goes a long way toward protecting confidential information and resources, because it shows users what to look for regarding security issues.
