An operating system is a set of computer programs that manage the hardware and software resources of a computer.
An operating system processes raw input and responds by allocating and
managing tasks and internal system resources. An operating system
performs basic tasks such as controlling and allocating memory, prioritising system requests, controlling input and output devices, facilitating networking and managing files.
Unix, Windows and Mac OS are some of the most popular operating systems.
Some operating systems, such as Windows, include boot code that the computer's firmware can load directly from the code area in the boot sector of the primary data storage device. Other operating systems, such as Unix, cannot be loaded directly from the boot sector's code area and require a prior program, known as a bootloader.
The kernel is the central component of most operating systems. Its responsibilities include managing the system's resources (the communication between hardware and software components).
A typical view of a computer architecture is a series of abstraction layers, for example: hardware, firmware, assembler, kernel, operating system and applications.
Most operating systems rely on the kernel concept. The existence of a kernel is a natural consequence of designing a computer system as a series of abstraction layers, each relying on the functions of layers beneath itself. The kernel, from this viewpoint, is simply the name given to the lowest level of abstraction that is implemented in software. In order to avoid having a kernel, one would have to design all the software on the system not to use abstraction layers, which would increase the complexity of design.
At startup the bootloader usually executes the kernel in supervisor mode. The kernel then initialises itself and starts the first process. After this, the kernel does not typically execute directly, but rather in response to external events (e.g., system calls by programs to request services, or interrupts by the hardware to notify of events). Additionally, the kernel typically provides a loop, called the idle process, that is executed whenever no processes are available to run.
A kernel will usually provide features for process management, memory management, device management and system calls.
The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use these resources. Typically, the resources consist of:
The kernel controls which processes are allocated to the CPU or CPUs (each of which can often run only one process at a time).
Primary storage is used to store both program instructions and data. Normally both must be present for a program to execute. Often multiple processes (executing programs) require access to primary storage, frequently demanding more storage than is available. The kernel controls how much primary storage each process can use, and determines what to do when insufficient storage is available.
input/output (I/O) devices: keyboard, mouse, disk drives, printers, displays, etc.
The kernel allocates requests from processes to perform I/O to devices and provides convenient methods for using them (typically abstracted to the point where the processes do not need to be aware of implementation details of the devices).
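As an illustration of this abstraction, the following Python sketch treats an in-memory buffer, a file on disk and the terminal as interchangeable targets behind one uniform write interface. The names `log`, `buf` and `path` are illustrative; Python's file objects stand in here for the kernel's device abstraction.

```python
import io
import os
import sys
import tempfile

def log(stream, message):
    # The caller uses one uniform interface; the underlying layer
    # hides whether the target is a terminal, a file or a buffer.
    stream.write(message + "\n")

buf = io.StringIO()
log(buf, "hello")                # "device" 1: an in-memory buffer
with tempfile.NamedTemporaryFile("w+", delete=False) as f:
    log(f, "hello")              # "device" 2: a file on disk
    path = f.name
log(sys.stdout, "hello")         # "device" 3: the terminal

with open(path) as f:
    assert f.read() == "hello\n"
os.unlink(path)
```

The same `log` call works unchanged for every target; real kernels provide exactly this kind of uniformity for hardware devices.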
process
A process is an instance of a program that is being executed by a computer system. A program is just a passive collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, running several instances of the same program often results in more than one process being executed. A process owns resources allocated by the operating system, typically including an image of the program's executable code, working memory, descriptors of resources such as open files, security attributes and processor state.
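The distinction between a passive program and its executing processes can be demonstrated by launching several instances of the same program text. The one-line program below is illustrative; each launch yields a distinct process with its own process ID.

```python
import subprocess
import sys

# One passive program text...
program = "import os; print(os.getpid())"

# ...executed three times, giving three distinct processes.
procs = [
    subprocess.Popen([sys.executable, "-c", program],
                     stdout=subprocess.PIPE, text=True)
    for _ in range(3)
]
pids = {int(p.communicate()[0]) for p in procs}
print(len(pids))  # 3: same program, three separate processes
```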
kernel thread
A thread of execution results from a fork of a computer program into two or more concurrently running tasks. A thread is normally contained inside a process. Multiple threads can exist within the same process and share resources such as primary storage, while different processes do not share these resources. The term thread, without a kernel or user qualifier, normally refers to a kernel thread.
user thread
A thread can be implemented in a userspace library, in which case it is called a user thread. The kernel is not aware of it: it is managed and scheduled entirely in userspace. Some implementations run their user threads on top of several kernel threads to benefit from multi-processor machines.
fiber
A fiber is a very lightweight unit of scheduling. Fibers are cooperatively scheduled: a running fiber must explicitly yield to allow another fiber to run, which makes their implementation much simpler than that of kernel or user threads.
multithreading
The concurrent execution of multiple threads is known as multithreading.
A process that has only one thread is referred to as a single-threaded process, while a process with multiple threads is referred to as a multi-threaded process.
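The claim that threads within a process share primary storage can be seen directly with Python's standard `threading` module; the `shared` list and `work` function are illustrative names.

```python
import threading

shared = []  # one list object, visible to every thread in the process

def work(name):
    # Threads share the process's memory, so all of them can
    # append to the very same list object.
    shared.append(name)

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3]: every thread saw the same list
```

Separate processes would each have received their own copy of `shared`, which is exactly the distinction drawn above.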
multitasking
A multitasking kernel is able to make it appear that more processes are running simultaneously than the computer is physically able to execute at once. Typically, the number of processes a system can truly run simultaneously is equal to the number of CPUs installed.
pre-emptive multitasking
In a pre-emptive multitasking operating system, the kernel gives each process a slice of time and switches from process to process so quickly that the processes appear to be executed simultaneously. The kernel uses scheduling algorithms to determine which process is running next and how much time it will be given. The algorithm chosen may allow for some processes to have higher priority than others. The kernel generally also provides a way for these processes to communicate; this is known as inter-process communication (IPC).
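Inter-process communication can be sketched with a kernel-provided pipe. This example uses Python's `multiprocessing` module and assumes a POSIX system (the `fork` start method used here is not available on Windows).

```python
import multiprocessing as mp

def child(conn):
    # The child cannot read the parent's memory directly; the kernel
    # carries messages between the two processes over the pipe.
    conn.send(conn.recv().upper())
    conn.close()

# Assumption: a POSIX system, where the "fork" start method exists.
ctx = mp.get_context("fork")
parent_end, child_end = ctx.Pipe()
p = ctx.Process(target=child, args=(child_end,))
p.start()
parent_end.send("hello")
reply = parent_end.recv()
p.join()
print(reply)  # HELLO: the reply travelled through kernel-managed IPC
```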
co-operative multitasking
Some operating systems provide co-operative multitasking, where each process is allowed to run uninterrupted until it makes a special request that tells the kernel it may switch to another process. Such a request is known as yielding, and it typically occurs while waiting for interprocess communication or for an event to occur.
multiprocessing
The operating system might also support multiprocessing. In this case, different processes and threads may run on different CPUs. A kernel for such a system must be designed to be re-entrant, meaning that two different parts of its code may safely be executed simultaneously. This typically involves provision of synchronisation mechanisms (such as spinlocks) to ensure that multiple CPUs do not attempt to modify the same data at the same time.
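The need for synchronisation can be illustrated with a mutex from Python's `threading` module standing in for a spinlock; the mechanism differs (a mutex sleeps rather than spins), but the principle of serialising access to shared data is the same.

```python
import threading

counter = 0
lock = threading.Lock()

def add():
    global counter
    for _ in range(100_000):
        # Without the lock, the read-modify-write below could
        # interleave with another thread's and lose updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=add) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: no updates were lost
```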
The kernel has full access to the system's primary storage and must allow CPUs to safely access this storage as they require it. This can be done by virtual addressing, which is usually achieved by paging and/or segmentation. Using virtual addressing, it is possible to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the storage that one process accesses at a particular (virtual) address may be different storage from what another process accesses at the same address. This allows every process to behave as if it is the only one running and thus prevents processes from crashing each other.
On many systems, a process's virtual address may refer to data which is not currently in primary storage. The layer of indirection provided by virtual addressing allows the operating system to use secondary storage to store what would otherwise have to remain in primary storage. As a result, operating systems can allow processes to use more storage than the system has physically available as primary storage. When a process needs data that is not currently in primary storage, the kernel writes the contents of an inactive memory block to secondary storage and replaces it with the data requested by the process. This scheme is known as demand paging.
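Demand paging can be modelled with a toy simulation. Every name here (`ram`, `disk`, `read_page`) and the eviction policy are hypothetical and greatly simplified; the point is only the mechanism of faulting a page in and evicting another to make room.

```python
PAGES_IN_RAM = 2  # pretend primary storage holds only two pages

ram = {}                          # resident pages (primary storage)
disk = {0: "A", 1: "B", 2: "C"}   # backing store (secondary storage)
faults = 0

def read_page(page):
    global faults
    if page not in ram:
        faults += 1                   # page fault: data not resident
        if len(ram) >= PAGES_IN_RAM:
            victim = next(iter(ram))  # evict the oldest resident page
            disk[victim] = ram.pop(victim)
        ram[page] = disk[page]        # bring the requested page in
    return ram[page]

values = [read_page(p) for p in (0, 1, 2, 0)]
print(values, faults)  # ['A', 'B', 'C', 'A'] 4
```

Accessing page 0, then 1, then 2 forces an eviction, so the final access to page 0 faults again even though it was read first; real kernels use smarter victim-selection policies than this oldest-first one.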
Virtual addressing also facilitates the creation of virtual partitions of primary storage in two disjoint areas, one reserved for the kernel (kernel space) and the other for the processes (user space). The CPU does not permit processes to address kernel storage, thus preventing a process from damaging the running kernel.
To perform useful functions, processes need access to peripherals connected to the computer, which are controlled by the kernel through device drivers. For example, to display something on the screen, a process makes a request to the kernel, which forwards the request to its display driver, which in turn is responsible for plotting the characters or pixels.
A kernel must maintain a list of available devices. This list may be:
known in advance
An example is an embedded system, where the kernel will be rewritten if the available hardware changes.
configured by the user
This is typical on older PCs and on systems that are not designed for personal use.
detected by the operating system at run time
This is normally called plug and play. In a plug and play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers.
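The matching step can be sketched as a lookup from detected device IDs to drivers. The IDs and driver names below are illustrative, not a real driver database; an actual device manager reads such IDs from the PCI or USB bus.

```python
# Hypothetical driver database: device ID -> driver name.
drivers = {
    "vendor=8086,device=100e": "e1000",  # an example network card
    "vendor=1b36,device=0100": "qxl",    # an example display adapter
}

def scan(bus):
    # For each device ID found on the bus, look up a matching driver.
    return {dev: drivers.get(dev, "<no driver found>") for dev in bus}

bus = ["vendor=8086,device=100e", "vendor=ffff,device=ffff"]
print(scan(bus))
```

Devices without a matching entry are reported as undriven, which is when a real system prompts the user or searches elsewhere for a driver.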
In order to function, a process must be able to access services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.
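Python's standard `os` module is an example of such a library: on POSIX systems, `os.write` is a thin wrapper that invokes the kernel's write system call on a file descriptor.

```python
import os
import sys

# os.write hands the buffer to the kernel via the write system call;
# the kernel, not the library, performs the actual I/O.
data = b"hello from a system call\n"
written = os.write(sys.stdout.fileno(), data)
print(written)  # number of bytes the kernel accepted
```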
If memory isolation is in use, it is impossible for a process to call the kernel directly, because that would be a violation of the CPU's access control rules. The following methods are commonly used:
software-simulated interrupt
This method is available on most hardware, and is therefore very common.
call gate
A call gate is a special address which the kernel has added to a list stored in kernel primary storage and which the CPU knows the location of. When the CPU detects a call to that location, it redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common.
special system call instruction
This technique requires special hardware support, which some common architectures lack.
memory-based queue
A program that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of primary storage that the kernel periodically scans to find requests.
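A toy sketch of this pattern, loosely in the spirit of real memory-based submission queues such as Linux's io_uring: the program enqueues requests without waiting, and the "kernel" later drains everything it finds in one scan. The `submit` and `kernel_scan` functions are illustrative.

```python
from collections import deque

requests = deque()  # the shared area of primary storage
results = []

def submit(op):
    # The program only appends to memory; there is no trap into
    # the kernel for each individual request.
    requests.append(op)

def kernel_scan():
    # Periodically, the kernel scans the shared area and services
    # every request it finds in one batch.
    while requests:
        results.append(requests.popleft().upper())

for op in ("read", "write", "sync"):
    submit(op)
kernel_scan()
print(results)  # ['READ', 'WRITE', 'SYNC']
```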
A shell is a program that provides an interface for users. An operating system shell provides access to the services of an operating system kernel, and is employed to issue commands to the kernel. Operating systems themselves have no user interfaces; the user of an operating system is a program, not a person. The operating system forms a platform for other system and application software.
Operating system shells generally fall into one of two categories: command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). In either category the primary purpose of the shell is to invoke or launch another program; however, shells frequently have additional capabilities, such as viewing the contents of directories.
When a user logs into the operating system the shell program is usually executed. The program is called a shell because it hides the details of the underlying operating system. The shell manages the technical details of the operating system kernel interface.
A Unix shell is normally a command-line interface and script host that provides a text-based user interface for Unix and Unix-like operating systems. Operation of the computer is directed by entering command input as text or by creating text scripts containing one or more such commands. The traditional Unix shell is both an interactive command language and a scripting programming language.
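The shell's core job of launching other programs can be imitated from any language. In this sketch Python plays the part of a minimal shell, running another program and collecting its output; the one-line launched program is illustrative.

```python
import subprocess
import sys

# Launch another program (a second Python interpreter here) and
# capture what it writes, just as a shell would.
result = subprocess.run(
    [sys.executable, "-c", "print('launched')"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # launched
```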