Tuesday, September 24, 2013

A crash course on the CPU, operating system, and process control

As we move into discussing security architecture and design in computers, it's important to have a basic understanding of how the CPU and operating system handle process control. This function forms the basis for how all commands and instructions are carried out. To call what follows a simplified version would be an understatement; entire books have been written on the subject and dive much deeper than I ever could.

The CPU (central processing unit), if you didn't already know, is essentially what makes a computer a computer. It is the brain of the entire machine, carrying out every instruction necessary for applications and programs to run. Within the CPU are several components that work in harmony to control the flow of data and carry out the instructions passed to it.
At the core of the CPU is the ALU (arithmetic logic unit), where the actual instructions are carried out. Because the CPU can only execute one instruction at a time, a control unit synchronizes the requests coming from applications with the ALU. As the ALU performs instructions, it sometimes needs to hold a temporary value in a register for later retrieval. When the CPU is ready to store a value for a longer period of time, it transfers the data along the bus to memory.
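To make that cycle concrete, here is a minimal sketch in C of a toy machine. The opcodes, register count, and memory layout are invented purely for illustration, but the loop mirrors the description above: the control unit fetches each instruction in turn, the ALU does the arithmetic, and results travel back across the "bus" to memory.

#include <stdio.h>
#include <stdint.h>

/* A toy machine: a few registers, a small "memory", and an ALU that the
 * control unit drives one instruction at a time. The opcodes and layout
 * here are invented for illustration only. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

typedef struct { uint8_t op, reg, addr; } Instr;

int main(void) {
    int memory[16] = { [0] = 7, [1] = 35 };   /* data the program will use */
    int reg[4] = {0};                         /* general-purpose registers */

    /* "Program": load two values, add them, store the result back to memory. */
    Instr program[] = {
        { OP_LOAD,  0, 0 },   /* reg0 <- memory[0]          */
        { OP_LOAD,  1, 1 },   /* reg1 <- memory[1]          */
        { OP_ADD,   0, 1 },   /* reg0 <- reg0 + reg1 (ALU)  */
        { OP_STORE, 0, 2 },   /* memory[2] <- reg0          */
        { OP_HALT,  0, 0 },
    };

    for (int pc = 0; ; pc++) {                /* control unit: fetch the next instruction */
        Instr in = program[pc];
        if (in.op == OP_HALT) break;
        switch (in.op) {                      /* decode and execute */
        case OP_LOAD:  reg[in.reg] = memory[in.addr]; break;
        case OP_ADD:   reg[in.reg] += reg[in.addr];   break;  /* the ALU does the math   */
        case OP_STORE: memory[in.addr] = reg[in.reg]; break;  /* result back to memory   */
        }
    }
    printf("memory[2] = %d\n", memory[2]);    /* prints 42 */
    return 0;
}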

Whenever a new program is launched from within the operating system, a process is created to manage the code associated with that program. A process consists of the instructions that need to be sent to the CPU and any resources the operating system dedicates to the program. Before a process runs on the CPU, the control unit checks the program status word (PSW). The PSW indicates whether the process is to be treated as trusted or untrusted, that is, whether the CPU executes it in privileged mode or in user mode. Most processes outside of the operating system run as untrusted, which restricts their access to critical system resources.
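As a concrete illustration, taking a Unix-like system as the example, the sketch below creates a new process with fork() and replaces its code with another program via execlp(). Both calls are requests to the kernel; the child that results runs as an ordinary untrusted user-mode process. The choice of the ls program here is arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* A minimal sketch of process creation on a POSIX system. The new process
 * runs in user (untrusted) mode; it only reaches privileged kernel code
 * indirectly, through system calls such as fork() and execlp() themselves. */
int main(void) {
    pid_t pid = fork();                 /* ask the kernel to create a new process */

    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    } else if (pid == 0) {
        /* Child: replace this program's code with another program's. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    } else {
        /* Parent: wait until the child finishes (a "blocked" state). */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}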

The operating system is in charge of controlling how processes access the CPU. Every process is in one of three states: running, ready, or blocked. Running means the process is currently being executed by the CPU; ready means the process is waiting its turn to be executed; and blocked means the process is waiting on input from somewhere else before it can proceed. In the early days of computing, poor process management was a costly mistake that resulted in lost CPU time, because a blocked process would often remain on the CPU doing nothing. Since the CPU is what runs the entire machine, it is important to allocate its work as efficiently as possible.

Today, operating systems are designed to maximize CPU efficiency by using process tables. A process table holds an entry for every current process that describes the process's state, stack pointer, memory allocation, program status, and the status of any open files. The stack pointer marks the top of that process's stack, while the saved program counter acts as a placeholder that tells the CPU which instruction to perform next when the process resumes.
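The C sketch below shows what a stripped-down process-table entry and a trivial "pick the next ready process" step might look like. The field names and the round-robin search are illustrative only; real operating systems keep far more per-process state than this.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative process-table entry, loosely modeled on the fields listed
 * above; a real kernel's entry holds far more. */
typedef enum { READY, RUNNING, BLOCKED } ProcState;

typedef struct {
    int        pid;
    ProcState  state;
    uintptr_t  stack_pointer;    /* saved top of the process's stack  */
    uintptr_t  program_counter;  /* saved next instruction to execute */
    size_t     memory_bytes;     /* memory allocated to the process   */
    int        open_files;       /* count of open files               */
} ProcEntry;

/* Toy scheduler step: pick the next READY process after `current`. */
int next_ready(ProcEntry table[], int n, int current) {
    for (int i = 1; i <= n; i++) {
        int idx = (current + i) % n;
        if (table[idx].state == READY)
            return idx;
    }
    return -1;  /* nothing runnable: the CPU would sit idle */
}

int main(void) {
    ProcEntry table[] = {
        { 101, RUNNING, 0, 0, 4096, 3 },
        { 102, BLOCKED, 0, 0, 8192, 1 },   /* waiting on I/O */
        { 103, READY,   0, 0, 2048, 2 },
    };
    int next = next_ready(table, 3, 0);
    if (next >= 0)
        printf("next process to run: pid %d\n", table[next].pid);  /* pid 103 */
    return 0;
}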

Making it work in harmony

In the past, a computer had to perform an entire process at one time and wait for it to release the resources it was using before moving on to the next task. The introduction of preemptive multitasking eliminated this problem: the operating system can now recognize when a process is blocked and force it to release any resources it is holding. As described above, operating systems have also become much better at scheduling processes. They have grown more sophisticated about preventing processes from accessing memory outside of their allocated area, and can keep a single process from consuming so many resources that it effectively creates a denial of service.
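One concrete example of that resource limiting on POSIX systems is the setrlimit() call, which caps how much CPU time and memory a single process may consume. The sketch below shows only one mechanism among many, and the specific limits chosen are arbitrary, but it illustrates the idea of fencing in a runaway process.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

/* A sketch of the resource-limiting idea mentioned above, using the POSIX
 * setrlimit() interface: cap how much CPU time and address space this
 * process may consume, so a runaway process cannot starve the machine. */
int main(void) {
    struct rlimit cpu_cap = { .rlim_cur = 5, .rlim_max = 5 };    /* 5 seconds of CPU time */
    struct rlimit mem_cap = { .rlim_cur = 256UL * 1024 * 1024,
                              .rlim_max = 256UL * 1024 * 1024 }; /* 256 MiB of memory     */

    if (setrlimit(RLIMIT_CPU, &cpu_cap) != 0 ||
        setrlimit(RLIMIT_AS,  &mem_cap) != 0) {
        perror("setrlimit");
        return EXIT_FAILURE;
    }

    /* From here on, exceeding the CPU cap delivers SIGXCPU, and oversized
     * allocations simply fail instead of dragging the whole system down. */
    printf("limits installed; running with a CPU and memory cap\n");
    return 0;
}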

Another great improvement has been in thread management and multiprocessing. When a process wants to perform an action, such as printing a file, a thread is generated. The thread contains the instructions for carrying out the requested action. In computers with more than one processor, these threads can be passed to the next available processor, maximizing efficiency. When the operating system is able to distribute threads and processes evenly across all available processors, this is known as symmetric mode (symmetric multiprocessing). There is also asymmetric mode, where one processor may be dedicated to a single process or task while all other threads are passed to the remaining processors.
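The sketch below uses POSIX threads to show a single process handing work to several threads; on a symmetric multiprocessing system the operating system is free to run each of them on whichever processor becomes available next. The number of workers is arbitrary, and the program needs to be built with the -pthread flag.

#include <pthread.h>
#include <stdio.h>

/* A minimal sketch of a process spinning off worker threads. On a
 * symmetric multiprocessing system the operating system may schedule
 * each thread on any available processor. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d doing its share of the work\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    int ids[4];

    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        if (pthread_create(&threads[i], NULL, worker, &ids[i]) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
    }
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);   /* wait for all workers to finish */

    return 0;
}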
