Tuesday, September 24, 2013

A crash course on the CPU, operating system, and process control

As we move into discussing security architecture and design in computers, it's important to have a basic understanding of how the CPU and operating system handle process control. This function forms the basis for how all commands and instructions are carried out. To say what follows is a simplistic version would be an understatement. Entire books are written on the subject and dive much deeper than I ever could.

The CPU (central processing unit), if you didn't already know, is essentially what makes a computer, a computer. It is the brain of the entire machine. It carries out every instruction necessary for applications and programs to run. Within the CPU are several components that work in harmony to control the flow of data and carry out the instructions passed to it.
The core of the CPU is the ALU (arithmetic logic unit). This is where the actual instructions are carried out. Because a processor can only execute one instruction at a time, a control unit is put in place to synchronize the requests from applications with the ALU. As the ALU performs the instructions, it is sometimes necessary to load a temporary value into a register for later retrieval. When the CPU is ready to store a value for a longer period of time, it transfers the data along the bus to memory.
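
As a rough mental model only (nothing like real hardware), the fetch-and-execute cycle the control unit drives can be sketched in a few lines of Python; the instruction names and registers are made up for illustration:

    # Toy model of a CPU's fetch-decode-execute cycle.
    # Instruction names, registers, and memory layout are illustrative only.
    registers = {"A": 0, "B": 0}
    memory = {}

    program = [
        ("LOAD", "A", 5),        # put 5 in register A
        ("LOAD", "B", 7),        # put 7 in register B
        ("ADD", "A", "B"),       # the ALU adds B into A
        ("STORE", "A", 0x10),    # move the result over the bus into memory
    ]

    pc = 0                       # which instruction to run next
    while pc < len(program):
        op, x, y = program[pc]   # fetch and decode
        if op == "LOAD":
            registers[x] = y
        elif op == "ADD":
            registers[x] += registers[y]   # ALU does the arithmetic
        elif op == "STORE":
            memory[y] = registers[x]       # write back to memory
        pc += 1                  # control unit moves to the next instruction

    print(memory[0x10])          # -> 12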

Whenever a new program is launched from within the operating system, a process is created to manage the code associated with the program. A process consists of the instructions that need to be sent to the CPU and any resources that the operating system dedicates to the program. Before a process runs on the CPU, the control unit checks the setting of the program status word (PSW). The PSW declares whether the process is trusted or not. Most processes outside of the operating system are run as untrusted, which restricts their access to critical system resources.

The operating system is in charge of controlling how processes access the CPU. Every process is in one of three states: running, ready, or blocked. Running means the process is currently being executed by the CPU; ready means the process is waiting its turn to be executed; and blocked means the process is waiting on input from somewhere else before it can proceed. In the early days of computing, poor process management was a costly error that wasted CPU time because a blocked process would often remain running on the CPU. Because the CPU is what runs the entire machine, it is important to allocate its work as efficiently as possible. Today, operating systems are designed to maximize CPU efficiency by using process tables. A process table holds an entry for every current process that describes the process's state, stack pointer, memory allocation, program status, and the status of any open files. The stack pointer acts as a placeholder so the CPU knows exactly where to pick up the next time the process runs.
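
A minimal sketch of what a process table might hold; the field names and values here are my own invention, not any particular operating system's:

    from enum import Enum

    class State(Enum):
        RUNNING = "running"
        READY = "ready"
        BLOCKED = "blocked"

    # One entry per process, keyed by process ID (fields are illustrative).
    process_table = {
        101: {"state": State.BLOCKED, "stack_pointer": 0x7FFE2A10,
              "memory_pages": [12], "open_files": []},          # waiting on I/O
        102: {"state": State.READY, "stack_pointer": 0x7FFF0040,
              "memory_pages": [4, 5, 9], "open_files": ["report.txt"]},
    }

    def schedule():
        """Pick a READY process so a BLOCKED one never wastes time on the CPU."""
        for pid, entry in process_table.items():
            if entry["state"] == State.READY:
                entry["state"] = State.RUNNING
                return pid
        return None   # nothing ready; the CPU can idle or run OS housekeeping

    print(schedule())   # -> 102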

Making it work in harmony

In the past, a computer would have to perform an entire process at one time and wait for it to release the resources it was using before it could move on to the next task. The creation of preemptive multitasking eliminated this problem. Operating systems now have the ability to recognize when a process is blocked and force it to release any resources it is holding. As described above, operating systems have also greatly improved at scheduling processes. They have become much more sophisticated about preventing processes from accessing memory outside of their initially declared area and can now prevent a process from consuming so many resources that it effectively creates a denial of service.

Another great improvement has been in thread management and multiprocessing. When a process wants to perform an action, such as printing a file, a thread is generated. The thread contains the instructions for carrying out the requested action. In computers with more than one processor, these threads can be passed to the next available processor, maximizing efficiency. When an operating system is able to distribute threads and processes evenly across the available processors, this is known as symmetric mode. There is also asymmetric mode, where one processor may be dedicated to a single process while all other threads are passed to the remaining processors.
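
As a loose illustration, here is a Python worker pool that hands each piece of work to whichever worker is free next, which is roughly the idea behind symmetric mode; the print-a-file task is made up:

    from concurrent.futures import ProcessPoolExecutor

    def render_page(page):
        # Pretend this is the work behind "print a file": one unit per page.
        return f"page {page} rendered"

    if __name__ == "__main__":
        # Each submitted task goes to whichever worker process is available next,
        # roughly how threads are spread across processors in symmetric mode.
        with ProcessPoolExecutor(max_workers=4) as pool:
            for result in pool.map(render_page, range(1, 11)):
                print(result)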

Thursday, September 19, 2013

Access Control Models

Access control models provide a framework for how users access content. There are three primary models in use today: discretionary access control, mandatory access control, and role based access control. Each has its own merits and limitations and is built into the core, or kernel, of the operating system to control how users obtain access to objects.

Discretionary Access Control

Discretionary Access Control (DAC) is perhaps the most familiar access control model. Many current operating systems are based on this model. Under DAC, whoever creates a file is considered the owner and is able to control who can access it. The owner may allow certain users only read privileges while giving others read and write privileges, or none at all.

Access under DAC is user focused. Each user may have different access settings pertaining to each file. This is commonly enforced through an access control list (ACL). An example ACL is sketched below.
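
A minimal sketch of such an ACL, with the file name and permissions invented for illustration:

    # A simple ACL for one object, mapping each user to their granted permissions.
    acl = {
        "payroll.xlsx": {
            "Jeanne": {"read", "write"},   # full access granted by the owner
            "Joe":    {"read"},            # read-only
            "Jim":    set(),               # no access
        }
    }

    def can_access(user, obj, permission):
        return permission in acl.get(obj, {}).get(user, set())

    print(can_access("Joe", "payroll.xlsx", "write"))   # -> False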


In the example, Jeanne, Joe, and Jim are each granted different permissions on the object. These permissions can be changed at any time by the object's owner.

While the DAC model allows for a lot of flexibility, it can create inconsistencies throughout an organization. Since each user is left to determine who should be allowed what type of access to their objects, it is easy for too much access to be given away.

Mandatory Access Control

Mandatory access control (MAC) is the model most people associate with governments. Access is predicated on the subject having a certain level of clearance for the object it wants to access. Terms such as 'confidential', 'privileged', and 'top secret' are common in this control model. Many of the rights and abilities users normally have are stripped away. Users will rarely be able to install software, change file permissions, add new users, and so on.

MAC is much more rigid and strict. Users are allowed to use systems in only very specific ways. This type of environment is appropriate for organizations that greatly value system security, such as governments. Access is enforced through the use of sensitivity labels. These labels detail what level of clearance a subject and object have. Typically, clearance works in a hierarchical manner, meaning that if you have been cleared to access objects labeled 'top secret', you can also access any objects ranked below that in clearance.

Sensitivity labels do not only deal with clearance, however. They also contain what categories a subject or object belongs to. So while Jim may have top secret clearance, if he is listed as being in category Research and Development, he will not be able to access any objects that are in a different category, such as Finance, even if the object is ranked below top secret clearance. This is commonly referred to as a 'need to know'. In this case, Jim does not have a need to know about the objects within Finance.
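
A minimal sketch of how a label check like this could work; the level names and categories are illustrative only:

    # Clearance levels in increasing order of sensitivity (illustrative).
    LEVELS = ["unclassified", "confidential", "secret", "top secret"]

    def mac_allows(subject, obj):
        """Subject needs a clearance at or above the object's level AND
        membership in the object's category (the 'need to know')."""
        level_ok = LEVELS.index(subject["clearance"]) >= LEVELS.index(obj["level"])
        category_ok = obj["category"] in subject["categories"]
        return level_ok and category_ok

    jim = {"clearance": "top secret", "categories": {"Research and Development"}}
    finance_report = {"level": "secret", "category": "Finance"}

    print(mac_allows(jim, finance_report))   # -> False: no need to know for Finance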

Role Based Access Control

Role based access control (RBAC) is becoming one of the predominant models used in businesses today. RBAC is based on assigning users to roles that describe their job function, such as marketing. Administrators then map what access is allowed or denied for each role. This technique is useful for large organizations with high turnover. If Jim from marketing decides to leave, it is a simple task to remove his account from the marketing role. Then, when Jan starts, she can quickly be mapped to the marketing role and will have all the rights granted to that position.

RBAC is a centrally administered model, which helps control what level of access each user has. While a company may have thousands of employees, there may only be tens of roles. It is a much simpler task to define and outline what privileges each role should have rather than dealing with it on an employee-by-employee basis.

Some RBAC models include a hierarchical component. In these models, roles that are higher in the hierarchy inherit the privileges of lower roles. For instance, a nurse may be allowed only to read patient information, while a doctor would inherit this right and additionally be able to edit the information. If an organization implements a hierarchical model, it should take precautions to prevent fraud. A manager working in accounts payable should not be given the privilege to also create invoices, even if this right is granted to a lower role.
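
A minimal sketch of role assignments with simple hierarchy-based inheritance; the role and permission names are invented:

    # Each role lists its own permissions plus any lower role it inherits from.
    roles = {
        "nurse":  {"permissions": {"read_patient_record"}, "inherits": None},
        "doctor": {"permissions": {"edit_patient_record"}, "inherits": "nurse"},
    }

    user_roles = {"Jim": "nurse", "Jan": "doctor"}

    def permissions_for(user):
        perms, role = set(), user_roles.get(user)
        while role:
            perms |= roles[role]["permissions"]
            role = roles[role]["inherits"]   # walk down the hierarchy
        return perms

    print(permissions_for("Jan"))   # doctor also inherits the nurse's read permission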

Tuesday, September 17, 2013

The many ways of proving who you are

There are three broad categories for how a user can prove their identity: knowledge, possession, and inherence. Another popular way of saying this is something you know, something you have, and something you are. Something you know would be a password or passphrase. Something you have could be a token or a smart card. And something you are includes fingerprints or retinal scans. Within each category there is a myriad of technologies and options. Below is a listing of some of the more popular ones for each category.

Knowledge

  • Password
  • Passphrase
  • PIN
  • Pattern

Possession

  • Token
  • Smart card
  • Cryptographic key
  • Memory card

Inherence

  • Fingerprint
  • Hand geometry/topography
  • Retina scan
  • Iris scan
  • Signature dynamics
  • Keystroke dynamics

Markup languages and protocols

I've previously discussed different technologies that help facilitate access management, such as directories, web access management software, and single sign-on systems. These tools are great and can make an administrator's life much easier, but how do they communicate with other systems? All of these systems deal with data such as user names, passwords, and permissions, and need the ability to share that information with other systems.

As is true whenever communication occurs between systems, there are protocols and markup languages that control how this information is transported and viewed. The most well-known example is the HTTP protocol and the HTML language (yes, I know 'HTML language' sounds redundant) used for web traffic. Another well-known markup language, XML, forms the basis of two security-specific languages that we will discuss shortly.

Service Provisioning Markup Language (SPML) and Security Assertion Markup Language (SAML) are two commonly used languages for transferring user security information. SPML is primarily used to manage account creation, modification, and deletion. The process is as simple as an SPML client sending a request, written in SPML of course, to an SPML server, which then reviews the request before forwarding it on to the provisioning target. This is the entity that actually carries out the request.

SAML is most often used in single sign-on web systems to streamline the process of users moving from system to system. After a user has signed in to one service, they often need to move on to another in order to complete whatever they're doing. When the user wants to transition from one system to the next, SAML is used to transport information about their session to the new system.

Transmission of SAML data can take place over different protocols, but the most common one is Simple Object Access Protocol (SOAP). SOAP provides a standardized way to transport data such as SAML across systems. When the receiving system gets a SOAP message, the envelope describes what kind of information it holds so that the system knows what to do with it. Since this communication takes place over the web, the SOAP message is itself carried inside an HTTP request. A rough sketch of this nesting is shown below.
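
A minimal sketch of that nesting, assuming the third-party requests library and a placeholder endpoint; the envelope is heavily trimmed and the assertion contents are omitted:

    import requests

    # HTTP request  >  SOAP envelope  >  SAML assertion (placeholder)
    soap_envelope = """<?xml version="1.0"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
          <!-- statements about the user's authenticated session would go here -->
        </saml:Assertion>
      </soapenv:Body>
    </soapenv:Envelope>"""

    response = requests.post(
        "https://sso.example.com/acs",     # hypothetical receiving endpoint
        data=soap_envelope,
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    print(response.status_code)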

A note on moving forward

Ugh. I hate to admit it, but I've fallen behind. Farther behind than I care to admit. Not so much with the reading, more just in making posts. It seems my previous plan to make a post about every topic covered in the book is just not feasible given my current workload and the depth of information covered in the book. I can say, though, that I have a plan for moving forward.

So here's the plan. I'm going to have to do more quick summary posts that point to outside resources for further clarification. Major topics that are central to a chapter's theme will still receive a deeper exploration. Things such as current technologies used to implement various controls will probably have to take a backseat. My focus is going to be on the principles behind security controls and not so much the controls themselves. Technology changes, but the principles (for the most part) stay the same.

So, with that being said, onward and upward!

Sunday, September 15, 2013

Account Management

Account management deals with creating, maintaining, upgrading or downgrading, and deleting user accounts. It is a process that requires great attention and that is often poorly handled in many corporations. A common practice is to have IT departments create accounts manually while giving the user more rights and permissions than are truly needed. This problem is further compounded when employees leave or are fired and their accounts are not properly decommissioned. Account management products attempt to eliminate these issues.

Best case scenario

Ideally, when a new user account needs to be created, a request is created, perhaps by HR, and approved by the employee's manager. This approval may trigger the system to automatically set up the account with the requested permissions, or it may generate a ticket for IT. When a user needs elevated permissions, or an account needs to be removed, a similar workflow process is followed.
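
A minimal sketch of that approve-then-provision workflow; the function names and roles are invented for illustration:

    audit_log = []

    def manager_approves(manager, request):
        # Stand-in for the real approval step (e-mail, ticketing system, etc.).
        return manager == "approving_manager"

    def provision(request):
        print(f"created account for {request['employee']} with roles {request['roles']}")

    def request_account(employee, requested_roles, manager):
        """HR raises the request; nothing is created until the manager approves."""
        request = {"employee": employee, "roles": requested_roles, "approved": False}
        if manager_approves(manager, request):
            request["approved"] = True
            provision(request)                 # automatic setup of the account...
        else:
            print("opening a ticket for IT")   # ...or fall back to a manual ticket
        audit_log.append(request)              # every request leaves an audit trail
        return request

    request_account("Jan", ["marketing"], "approving_manager")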

Automating this process eases the workload of network administrators, and prevents lapses in account management. Implementing account management software also allows for an audit trail to be created. And everyone loves making auditors happy!

Password Management

Passwords are ubiquitous in every corporate environment. How a company manages these passwords plays a vital role in its overall security program. There are trade-offs to consider when developing a password management system. A user can be required to have a separate password for every resource, thus minimizing the damage if one of the passwords is compromised. The downside is that no one, quite reasonably, wants to memorize fifteen passwords. As a result, users write the passwords down on sticky notes and cleverly hide them under their keyboards. On the other hand, single sign-on technology can be used to allow a user to enter one password and access all the resources they need. It's easy to remember one password, but if an attacker should acquire it there is nothing to stop them from compromising all of the systems. To manage these trade-offs there are three common approaches, summarized below.

Password Synchronization

Password synchronization works much like single sign-on technology. The software eliminates the need to maintain multiple passwords by synchronizing a single password across all of the systems it operates with. This technique can be very easy on users and on the help-desk, as the need to reset forgotten passwords is drastically reduced.

Self-Service Password Reset

Self-service products give users the ability to reset their own password. By answering security questions such as mother's maiden name, high school attended, or first pet's name, a user can verify their identity and reset a forgotten password on their own. This can reduce the workload for the help-desk, but it does require more time and effort on the user's end should a password need to be reset.

Assisted Password Reset

Some products assist the help-desk in resetting passwords for users. When a user forgets a password, they can call the help-desk and request a password reset. The help-desk will have the user answer a series of security questions to verify their identity. It is important that the help-desk not be able to directly view a user's password, as that would be a security risk. After verifying the user's identity, the help-desk can set up a one-time password to allow the user to log in. The user should then immediately change their password.
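
A minimal sketch of the one-time password step using Python's standard library; the expiry window and in-memory storage are simplifying assumptions:

    import secrets
    import time

    # In a real product this record would live in a secured identity store.
    accounts = {"jim": {"otp": None, "otp_expires": 0.0, "must_change": False}}

    def issue_one_time_password(user):
        """Called by the help-desk only after the user's identity is verified."""
        otp = secrets.token_urlsafe(12)                        # random, single-use value
        accounts[user]["otp"] = otp
        accounts[user]["otp_expires"] = time.time() + 15 * 60  # 15-minute window (assumed)
        accounts[user]["must_change"] = True                   # force a change at next login
        return otp

    def log_in_with_otp(user, supplied):
        record = accounts[user]
        valid = supplied == record["otp"] and time.time() < record["otp_expires"]
        if valid:
            record["otp"] = None                               # one use only
        return valid

    temp = issue_one_time_password("jim")
    print(log_in_with_otp("jim", temp))   # -> True; the user now changes the password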

Wednesday, September 11, 2013

Web Access Management

A major concern for companies is how to handle users that originate outside of the network. The internet is growing and becoming more and more vital to daily business. To help manage these users, companies can install web access management (WAM) software. This software serves as a gate between the outside world and the internal network, allowing only authorized users to access resources.

When a user from outside the network requests access to an object, there are several steps that occur. Below is an example of a simple web access management solution.

  1. User requests access to an object
  2. The web server requests credentials
  3. The user supplies their credentials
  4. The WAM module verifies the user's credentials with a validation service (Kerberos in this case)
  5. The WAM loads the attributes of the identity
  6. The web server provides the requested resource

The WAM software is typically a plug-in for a web server, and it functions as the gateway from the web into the corporate web-based resources. A useful feature is that WAMs usually allow for single sign-on. That way, once a user is authenticated, they are able to use several different resources without having to log in multiple times. The WAM does this by maintaining a constant session with the user so that it can check the user's permissions whenever a new object is requested. This is achieved by issuing a cookie, which the user's browser supplies with each request. Once the session is over, the cookie is erased and the browser no longer has access until the user re-authenticates.
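
A minimal sketch of the session-cookie check, with the actual credential validation (Kerberos or otherwise) stubbed out and all names invented:

    import secrets

    sessions = {}   # session token -> authenticated user

    def log_in(user, password):
        # Assume the WAM already validated these credentials against its
        # validation service (Kerberos, a directory, and so on).
        token = secrets.token_hex(16)
        sessions[token] = user
        return token                       # handed back to the browser as a cookie

    def handle_request(cookie_token, requested_object, permissions):
        user = sessions.get(cookie_token)
        if user is None:
            return "401: please log in"    # no valid session cookie
        if requested_object not in permissions.get(user, set()):
            return "403: not authorized"
        return f"200: here is {requested_object}"

    permissions = {"jeanne": {"/reports/q3.pdf"}}
    cookie = log_in("jeanne", "her-password")
    print(handle_request(cookie, "/reports/q3.pdf", permissions))   # -> 200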

Tuesday, September 10, 2013

Directories

What is a directory?

A directory is a product that contains information pertaining to user identities and network resources. Within the directory is information on what each user's identity is, how to properly authenticate their identity, and what resources they are authorized to use.

Most directories implement the LDAP/X.500 protocols for interfacing. These protocols are standard and widely implemented in today's technology. They allow other services to request information from the directory about any of the objects it holds. The objects within a directory can be managed using a directory service. A directory service gives administrators the ability to control how identification, authentication, and authorization take place across network systems and resources.

To illustrate this, let's look at a Windows environment. When a user logs in to their local machine, they are really logging in to a domain controller. Within this domain controller is a directory containing information on who the user is and what they can do. The directory service provides access to the directory for any applications or resources the user may want to use, so those systems can check the directory any time the user tries to access their objects.
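
A minimal sketch of an application asking a directory about a user over LDAP, assuming the third-party ldap3 package; the server, base DN, and attribute names are placeholders:

    from ldap3 import Server, Connection, SUBTREE

    # Hypothetical directory and service account.
    server = Server("ldap://dc01.example.com")
    conn = Connection(server, user="CN=svc-app,DC=example,DC=com",
                      password="service-account-password", auto_bind=True)

    # Look up Jim's entry to see who he is and which groups he belongs to.
    conn.search(search_base="DC=example,DC=com",
                search_filter="(sAMAccountName=jim)",
                search_scope=SUBTREE,
                attributes=["displayName", "memberOf"])

    for entry in conn.entries:
        print(entry.displayName, entry.memberOf)
    conn.unbind()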

Their role in IdM

Directories provide a central place for applications and resources to check if a user is allowed to use something. Rather than having to identify and authenticate yourself every time you'd like to print or view a file, and rather than applications having to look in a hundred different places or store the information themselves, directories provide a simple solution.

So are there downsides? Of course, there always are. Because directories are stand-alone systems, other systems have to be able to communicate with them. This isn't a problem for newer systems as we now have the standard protocols X.500 and LDAP. It can be an issue for any legacy systems, however. If a company has a financial system that was designed in the eighties, chances are it will not be able to communicate with a directory. The solution? Buy new software, or be prepared to configure the system manually.

Identity management overview

Consider the typical company. As an employee, there are likely several different systems you interact with on a regular basis: time management software, file sharing services, HR, email, ERP, the web, and any number of other services. Each of these systems requires you to provide an identity so that it can authenticate you and authorize you to perform certain actions.

The question of how to manage all of these identities is a very real concern. It would be nearly impossible for system administrators to manually create, maintain, and delete identities and their associated authorizations. Doing so manually would also greatly increase the chance of an oversight or error. Identity management is therefore a broad term that describes how, and with what products, a company addresses this problem. The main goal of identity management (IdM) technology is to streamline the processes involved in creating, maintaining, deleting, and auditing user accounts and permissions across multiple systems.

There are several products associated with IdM, and I will take a more in-depth look at each in its own post. They are as follows:


  • Directories
  • Web access management
  • Password management
  • Legacy single sign-on
  • Account management
  • Profile update

Thursday, September 5, 2013

Access Controls

Access controls are a vital part of any system's front line of defense. They control how users and systems interact with one another. As such, access can be defined as the flow of information between two entities: the subject and the object. Quite simply, the subject is the user or service that requests access to the object or the data it holds.

The Core of Access Control

There are four things core to access control: identification, authentication, authorization, and accountability.

Identification describes the process by which a subject claims to be a particular user or service. When a subject makes a request to use an object or its data, unique credentials such as a user name or MAC address can be provided as a way of identification. This process works closely with authentication. Along with the identifying information, the subject supplies a password, passphrase, key, or some other piece of information that proves the subject is who it claims to be. Authentication can be achieved through something a subject knows (a password), something a subject has (a key or badge), or something a subject is (biometrics). While only one is required, many systems use two-factor, or strong, authentication. By requiring a subject to supply two different forms of authentication, security is greatly strengthened.

Once a subject has been identified and authenticated, the system must then decide if the subject has the proper authorization to access the object or its data. For example, while a user may be able to log on to a shared network using their ID and password (identification and authentication), they may not be allowed to access certain shared folders on the network drives.
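
A minimal sketch tying identification, authentication, and authorization together; the user, password, and folder names are invented, and a real system would use a salted, slow password hash rather than plain SHA-256:

    import hashlib

    # Identification data plus a hashed credential for each known subject.
    users = {
        "jim": {"password_sha256": hashlib.sha256(b"correct-password").hexdigest(),
                "authorized_folders": {"public"}},
    }

    def access(user_id, password, folder):
        record = users.get(user_id)                      # identification
        if record is None:
            return "unknown user"
        supplied = hashlib.sha256(password.encode()).hexdigest()
        if supplied != record["password_sha256"]:        # authentication
            return "authentication failed"
        if folder not in record["authorized_folders"]:   # authorization
            return "access denied"
        return "access granted"                          # a real system logs this too

    print(access("jim", "correct-password", "finance"))  # -> access denied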

Finally, it is important for access controls to maintain accountability. This process ensures that the subject is held responsible for its actions. If a user accesses a shared folder and totally destroys it, the access controls need to be able to identify which user was responsible. It is for this reason that it is extremely important that subjects be uniquely identified.

These four concepts are at the core of access controls and play a critical role in any system's security.

Wednesday, September 4, 2013

Risky business

Risk management is essential for every business. It's also a process that many companies struggle with. A misunderstanding of what risk management is, valuation errors, and poor implementation of risk management strategies plague many businesses today. In this post I'm going to explore the steps involved in creating a risk management program.


The Team

I won't go into much detail here, but first a risk management team must be assembled. The team should be made up of people from all departments who have strong domain knowledge. The reasoning is that a person cannot accurately value an asset or risk they are not strongly familiar with. The overall goal of the team is to cost-effectively protect the business from risk.
The team will oversee the initial risk assessment, the risk analysis, and help create and maintain the policies and guidelines that will make up the risk management program.

The Assessment

There are two major goals of the risk assessment. First, the company has to decide how much its assets are worth. Assets in this case include not only tangibles such as buildings and computer hardware, but also intangibles such as information and reputation. Knowing the value of these things will help the company later determine how much should be spent to protect them. Things to consider when valuing assets include:
  • Price to purchase, repair, and maintain
  • Loss of revenue if a system goes down
  • Value of asset to adversaries
  • Liability issues if a system is compromised
Second, the assessment aims to compile a comprehensive list of risks associated with each asset that is within the scope of the project. Various threats should be considered for each asset, and the possible impact if each threat were realized should be recorded.

Risk Analysis

With the results of the assessment in hand, an analysis can now be completed to decide what level of security is appropriate for each asset. There are two major approaches to risk analysis: quantitative and qualitative. One is not necessarily better than the other, as each has its time and place.

Quantitative

Quantitative analysis is number based. It is best applied to assets where a clear value can be assigned and there are measurable losses associated with their risk. Two commonly used equations within this analysis are the single loss expectancy (SLE) and the annual loss expectancy (ALE).
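
The standard formulas are SLE = asset value × exposure factor, and ALE = SLE × annualized rate of occurrence (ARO). A quick sketch with made-up numbers that happen to land on the figure used below:

    asset_value = 100_000     # replacement value of the server (illustrative)
    exposure_factor = 0.50    # fraction of the asset lost in a single incident
    aro = 0.74                # expected number of such incidents per year

    sle = asset_value * exposure_factor   # single loss expectancy: $50,000
    ale = sle * aro                       # annual loss expectancy:  $37,000
    print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")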

These formulas can be a great indicator of what is appropriate for a company to spend on protecting an asset. For instance, if the annual loss expectancy for a server is $37,000, it does not make sense to spend $100,000 annually to protect it. One note of caution, however: these formulas are based on predictions of how often damage will occur and how widespread it will be. While the estimates are usually based on historical data, they cannot predict the future.

Qualitative

Qualitative analysis is much more subjective. This analysis technique is best applied when hard and fast numbers are not associated with an asset. One common technique for performing qualitative analysis is to use a risk matrix. In the example below, the threat is listed on the left, with its likelihood of occurrence and impact listed to the right; this example also lists a countermeasure. These matrices may also be distributed among a group of professionals from various departments and the ratings averaged.
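
A couple of illustrative rows, with made-up ratings:

  Threat                    Likelihood   Impact   Countermeasure
  Data center power loss    Medium       High     Backup generator and UPS
  Laptop theft              High         Medium   Full-disk encryption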


Once all analysis has been completed, the company can then move on to selecting what controls it would like to implement. The major consideration here is balancing cost against benefit. This can be simple with quantitative data, as there is already a dollar amount associated with it. Qualitative data can be much more difficult to weigh, however, and it will be up to the team to decide what is appropriate and justified.

Implementation

The final step is implementing the controls and creating policies, standards, guidelines, and procedures to support them. Policies sit at the top of this hierarchy, with standards, guidelines, and procedures becoming progressively more specific beneath them.

As you move down the hierarchy, the statements and language become more specific. So while a policy may state that company computers are for employee use only, the associated standard may state that a user ID and password are required, and the procedure may detail that a password must consist of at least 8 characters and include one uppercase letter.

These policies, standards, guidelines, and procedures are designed to support the controls that have been chosen for securing the assets. Without these, there is no way to ensure that the controls will be properly implemented throughout the organization.