
Monday, March 14, 2022

Operating Systems Theory & Design

Modern operating systems share a set of major functions and services.  While their specific implementations vary across operating systems and computing environments, they all provide User and System services that deliver the main functions of any OS.  These include Program Execution, I/O Operations, Resource Allocation, File Systems, Protection & Security, and Error Detection, among others.


Figure 1- Major Functions and Services of Operating Systems

 

Process Control & Management

Operating systems enable processes to share, exchange, and manage information and resources.  This function is implemented via the Process Control Block (PCB), a data structure that contains all process-related information, including the current state of the process.  From the process ID and registers to the memory and files in use, the PCB is the quarterback of every process.
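As a rough illustration, a PCB can be sketched as a C structure that gathers this per-process bookkeeping in one place.  The fields below are hypothetical and simplified; a real kernel's equivalent (such as Linux's task_struct) tracks far more:

/* A simplified, hypothetical Process Control Block */
struct pcb {
    int           pid;              /* unique process ID */
    int           state;            /* e.g., NEW, READY, RUNNING, WAITING, TERMINATED */
    unsigned long program_counter;  /* address of the next instruction to execute */
    unsigned long registers[16];    /* CPU registers saved during a context switch */
    void         *page_table;       /* memory-management information */
    int           open_files[32];   /* descriptors for the files the process has open */
    int           priority;         /* scheduling information */
};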

Figure 2- Process, Threads and Process Synchronization


Modern processors and computing needs demanded a solution that allows multiple processes to be executed in parallel as opposed to sequentially.  To accomplish this, multi-core and multi-threaded processors were developed, allowing processes with multiple threads to be executed concurrently.  A thread is considered a lightweight process, but it can only exist inside a process.  The benefit is that every thread has its own registers, program counter, and stack, but shares data, files, and code with the other threads of the same process.  This has vastly improved process execution.
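A minimal sketch of this idea, assuming POSIX threads, is shown below.  The two threads share the process's data (the shared_message variable, which is purely illustrative), while each runs on its own stack with its own registers:

#include <pthread.h>
#include <stdio.h>

/* data in the process's address space is visible to every thread */
static const char *shared_message = "hello from the shared data segment";

static void *worker(void *arg)
{
    int id = *(int *)arg;  /* each thread gets its own arguments and stack */
    printf("thread %d sees: %s\n", id, shared_message);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);  /* both threads run the same code */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}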

Critical Section Problem

A critical section is a code segment that accesses shared variables and must be executed as an atomic action.  The critical-section problem refers to the problem of ensuring that at most one process is executing its critical section at a given time.

For example, if two processes are running and both need to access the same file, the software must be written so that the two processes do not execute their critical sections at the same time.  Otherwise, one process may be editing a file that the other process is trying to access, which can corrupt the file or cause the program to throw an exception.

Any solution to the critical-section problem must satisfy three requirements: mutual exclusion, progress, and bounded waiting.  The code below is an example using Peterson's solution:

do {
    flag[i] = true;              /* announce that process i wants to enter */
    turn = j;                    /* give the other process priority */
    while (flag[j] && turn == j)
        ;                        /* busy-wait while process j wants in and has the turn */

    /* critical section */

    flag[i] = false;             /* exit: let the other process proceed */

    /* remainder section */

} while (true);

In this solution, when one process is executing in its critical section, the other process can only execute its remainder section, and vice versa.  This ensures that only a single process runs in the critical section at any given time.
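In practice, most programs do not implement Peterson's algorithm by hand; they rely on synchronization primitives supplied by the operating system.  Below is a minimal sketch, assuming POSIX threads, that uses a mutex to protect a shared counter (the counter and worker function are illustrative, not part of Peterson's solution):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;  /* shared resource both threads update */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical section */
        shared_counter++;             /* only one thread executes this at a time */
        pthread_mutex_unlock(&lock);  /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final count: %ld\n", shared_counter);  /* always 200000 with the lock held */
    return 0;
}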

Memory Management

Memory management is a main function of the operating system.  Its primary responsibility is to manage memory and move processes and their related data back and forth between physical memory and the disk storage that backs virtual memory.  It decides which process gets which memory at what time, based on its knowledge of what memory is free and available.

Memory in a computer is dynamically allocated depending on the needs of the applications and processes being executed, and it is freed when a process no longer requires it, making that memory available to another process if needed.
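This request-and-release cycle is visible at the application level through C's allocation routines, which obtain memory from the heap and hand it back when the program is done with it.  A minimal sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* request a block of memory large enough for 100 integers */
    int *data = malloc(100 * sizeof *data);
    if (data == NULL) {
        fprintf(stderr, "allocation failed: no free memory available\n");
        return 1;
    }

    data[0] = 42;  /* use the allocated block */
    printf("first element: %d\n", data[0]);

    free(data);    /* release the block so it can be reused */
    return 0;
}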

Ultimately, how well memory is managed depends on how effectively the hardware, the operating system, and the programs or applications are designed and configured.

 

Figure 3- Memory Management

File Systems Management

File management is a key function of the operating system.  It is also one that most of us interact with frequently, even if we don't specifically interact with the OS in a terminal.  All applications that open and save files are calling on the OS to locate, open, edit, save or delete files.
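For instance, even a simple C program that saves text to disk is asking the operating system to create, write, and close the file on its behalf (the file name below is just an example):

#include <stdio.h>

int main(void)
{
    /* fopen asks the OS to locate or create the file */
    FILE *fp = fopen("notes.txt", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    /* fputs writes through the OS's file-management service */
    fputs("saved through an OS system call\n", fp);

    fclose(fp);  /* flush buffers and release the file handle */
    return 0;
}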

A file is a collection of information saved on secondary storage and loaded into memory when used.  File management can be thought of as the process of manipulating files on a computer.  There are several logical structures of a directory, as shown in Figure 4 below:

Figure 4- File Systems Management / Directory Structures

Each of these directory structures provides advantages and disadvantages.  For example, the single-level directory structure is easy to implement, and manipulating files in it is easier and faster.  A disadvantage of this structure is that name collisions can occur, and searching can become time-consuming as the directory grows large.

The two-level directory is a bit more robust in that it supports multiple users, and different users can have directories and files with the same names.  Searching in this structure also becomes easier; however, a user cannot share files with other users.

Tree-structured directories are the most common.  This is what we experience in the Windows OS environment.  It is scalable, easy to search, and supports both absolute and relative paths.  Despite its popularity, a pure tree structure does not allow files to be shared between directories, and searching for a file outside the current subtree can be inefficient.

The acyclic-graph directory structure is a graph with no cycles that allows users to share subdirectories and files.  The same file or subdirectory may appear in two different directories.  I think of it as a souped-up tree-structured directory.
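On Unix-like systems, this kind of sharing is what hard links provide: a second directory entry that refers to the same underlying file.  A minimal sketch using the POSIX link() call (the two paths are illustrative):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* make "shared/report.txt" a second directory entry for the same file */
    if (link("docs/report.txt", "shared/report.txt") != 0) {
        perror("link");
        return 1;
    }
    /* both paths now name one file; edits through either are seen by both */
    return 0;
}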

In the general graph directory structure, cycles are allowed, so a directory can be reached from more than one parent directory.  The main problem with this structure is calculating the total size or space taken up by the files and directories.

OS Security & Protection

The goal of security is to protect the computer system from various threats, including malicious software such as worms, trojans, and viruses, as well as unwanted remote intrusions.  Security typically involves control techniques that protect a computer from unauthorized changes to files and configurations and from unauthorized access to specific applications and attached resources.

The most common methods employed in protecting computer systems include the use of antivirus and antimalware software, along with administrative policies requiring regular OS updates, user access control, and even physical access control.
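User access control shows up at the file level in the Unix permission model, where an owner can restrict who may read or modify a file.  A minimal sketch using chmod (the file name is illustrative):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* restrict "secrets.txt" so only the owner can read and write it */
    if (chmod("secrets.txt", S_IRUSR | S_IWUSR) != 0) {
        perror("chmod");
        return 1;
    }
    return 0;
}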

Figure 5- Security and Protection

Conclusion

Having a more in-depth understanding of these important operating system concepts will play a key role in my future as an IT professional. From deploying security systems to programming for specific environments, my understanding of resource management will help me deliver a more optimized product for my internal stakeholders.

 
