Wednesday, October 27, 2010

System Calls and System Programs




System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system which the process itself is not allowed to perform. In handling the trap, the operating system enters kernel mode, where it has access to privileged instructions, and can perform the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating system itself performs them every time they are needed. For example, for I/O a process invokes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
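As a concrete illustration (a minimal sketch, assuming a POSIX-like system and a hypothetical file named input.txt), the following C program requests I/O through the open, read, write and close system calls; each call traps into the kernel, which performs the privileged work on the process's behalf.

/* Minimal sketch: user-level I/O done through system calls (POSIX assumed). */
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    char buf[128];
    int fd = open("input.txt", O_RDONLY);       /* system call: open a file            */
    if (fd < 0)
        return 1;
    ssize_t n = read(fd, buf, sizeof buf);      /* system call: kernel performs the read */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);   /* system call: write to standard output */
    close(fd);                                  /* system call: release the descriptor   */
    return 0;
}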
System programs provide basic functionality to users so that they do not need to write their own environment for program development (editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.






Interprocess Communication




Since processes frequently need to communicate with other processes, there is a need for well-structured communication among processes, without using interrupts.

Race Conditions
In operating systems, processes that are working together share some common storage (main memory, a file, etc.) that each process can read and write. Situations where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when are called race conditions. Concurrently executing threads that share data need to synchronize their operations and processing in order to avoid race conditions on shared data. Only one thread at a time should be allowed to examine and update the shared variable.
Race conditions are also possible inside the operating system itself. If the ready queue is implemented as a linked list and the ready queue is being manipulated during the handling of an interrupt, then interrupts must be disabled to prevent another interrupt from occurring before the first one completes. If interrupts are not disabled, the linked list could become corrupt.
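A small illustrative sketch (not taken from the text above) of a race condition in C with POSIX threads: two threads perform an unsynchronized read-update-write on a shared counter, so updates are frequently lost and the final value is usually less than expected.

/* Race condition sketch: unsynchronized read-update-write on shared data. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                    /* shared data */

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter = counter + 1;       /* read-update-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000, but interleaving of the two threads usually
       loses updates and a smaller number is printed. */
    printf("counter = %ld\n", counter);
    return 0;
}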


Critical Section
How to avoid race conditions?


The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. That part of the program where the shared memory is accessed is called the critical section. To avoid race conditions and flawed results, one must identify the code in the critical section of each thread. The characteristic properties of the code that forms a critical section are:
• Code that references one or more variables in a "read-update-write" fashion while any of those variables is possibly being altered by another thread.
• Code that alters one or more variables that are possibly being referenced in a "read-update-write" fashion by another thread.
• Code that uses a data structure while any part of it is possibly being altered by another thread.
• Code that alters any part of a data structure while it is possibly in use by another thread.

Here, the important point is that when one process is executing code that accesses shared modifiable data in its critical section, no other process is to be allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time.


Mutual Exclusion


A way of making sure that if one process is using shared modifiable data, the other processes are excluded from doing the same thing.
Formally, while one process executes the shared variable, all other processes desiring to do so at the same moment should be kept waiting; when that process has finished executing the shared variable, one of the waiting processes should be allowed to proceed. In this fashion, each process executing the shared data (variables) excludes all others from doing so simultaneously. This is called Mutual Exclusion.
Note that mutual exclusion needs to be enforced only when processes access shared modifiable data - when processes are performing operations that do not conflict with one another they should be allowed to proceed concurrently.
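A hedged sketch of mutual exclusion using a POSIX mutex: the same shared counter as in the earlier race-condition sketch, but now only one thread at a time may execute the critical section, so no updates are lost.

/* Mutual exclusion sketch: a pthread mutex protects the critical section. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                                 /* shared modifiable data */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);      /* entry protocol           */
        counter = counter + 1;          /* critical section         */
        pthread_mutex_unlock(&lock);    /* exit protocol            */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* always 2000000 */
    return 0;
}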


Mutual Exclusion Conditions
If we could arrange matters such that no two processes were ever in their critical sections simultaneously, we could avoid race conditions. We need four conditions to hold to have a good solution for the critical section problem (mutual exclusion):
• No two processes may be inside their critical sections at the same moment.
• No assumptions may be made about relative speeds of processes or the number of CPUs.
• No process running outside its critical section should block other processes.
• No process should have to wait arbitrarily long to enter its critical section.


Proposals for Achieving Mutual Exclusion

The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and a post-protocol (or exit protocol) to keep two or more threads from being in their critical sections at the same time. Tanenbaum examined several proposals for the critical-section (mutual exclusion) problem.
Problem
When one process is updating shared modifiable data in its critical section, no other process should be allowed to enter its critical section.


Proposal 1 -Disabling Interrupts (Hardware Solution)
Each process disables all interrupts just after entering its critical section and re-enables all interrupts just before leaving the critical section. With interrupts turned off the CPU cannot be switched to another process. Hence, no other process will enter its critical section, and mutual exclusion is achieved.
Conclusion
Disabling interrupts is sometimes a useful technique within the kernel of an operating system, but it is not appropriate as a general mutual exclusion mechanism for user processes. The reason is that it is unwise to give user processes the power to turn off interrupts.


Proposal 2 - Lock Variable (Software Solution)
In this solution, we consider a single, shared (lock) variable, initially 0. When a process wants to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the lock is already 1, the process just waits until the lock variable becomes 0. Thus, a 0 means that no process is in its critical section, and a 1 means hold your horses - some process is in its critical section.
Conclusion
The flaw in this proposal is best explained by example. Suppose process A sees that the lock is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the lock to 1. When process A runs again, it will also set the lock to 1, and two processes will be in their critical sections simultaneously.
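The flaw can be made explicit in a short C-style sketch (illustrative only): the test of the lock and the setting of the lock are two separate steps, and a context switch between them lets two processes enter their critical sections at once.

/* Sketch of the flawed lock-variable proposal. */
int lock = 0;                      /* 0 = free, 1 = some process is inside */

void enter_critical_section(void)
{
    while (lock != 0)              /* step 1: test the lock              */
        ;                          /* busy wait until it becomes 0       */
    /* a context switch at this point breaks the scheme */
    lock = 1;                      /* step 2: set the lock               */
}

void leave_critical_section(void)
{
    lock = 0;                      /* release the lock                   */
}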


Proposal 3 - Strict Alternation
In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section. Process B also finds it to be 0 and sits in a loop continually testing 'turn' to see when it becomes 1. Continuously testing a variable while waiting for some value to appear is called busy waiting. A sketch of this scheme is shown below.
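The following C-style sketch, in the spirit of Tanenbaum's presentation, shows strict alternation for two processes; critical_section() and noncritical_section() are placeholder stubs, not real library functions.

/* Strict alternation sketch for two processes (0 and 1). */
int turn = 0;                          /* whose turn it is to enter: 0 or 1 */

void critical_section(void)    { /* placeholder */ }
void noncritical_section(void) { /* placeholder */ }

void process_0(void)
{
    while (1) {
        while (turn != 0)              /* busy waiting                     */
            ;
        critical_section();
        turn = 1;                      /* hand the turn to process 1       */
        noncritical_section();
    }
}

void process_1(void)
{
    while (1) {
        while (turn != 1)              /* busy waiting                     */
            ;
        critical_section();
        turn = 0;                      /* hand the turn back to process 0  */
        noncritical_section();
    }
}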
Conclusion
Taking turns is not a good idea when one of the processes is much slower than the other. Suppose process 0 finishes its critical section quickly, so both processes are now in their noncritical sections. If process 1 is slow, process 0 may be prevented from re-entering its critical section even though process 1 is not in its own critical section. This situation violates condition 3 mentioned above.
Using System Calls 'sleep' and 'wakeup'
Basically, what the above-mentioned solutions do is this: when a process wants to enter its critical section, it checks to see if entry is allowed. If it is not, the process goes into a tight loop and waits (i.e., starts busy waiting) until it is allowed to enter. This approach wastes CPU time.
Now let us look at an interprocess communication primitive that avoids busy waiting: the pair of sleep and wakeup.
• Sleep
o It is a system call that causes the caller to block, that is, be suspended until some other process wakes it up.
• Wakeup
o It is a system call that wakes up the process.

Both 'sleep' and 'wakeup' system calls have one parameter that represents a memory address used to match up 'sleeps' and 'wakeups'.


The Bounded Buffer Producers and Consumers
The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size, i.e., a finite number of slots is available.
Statement
Suspend the producers when the buffer is full, suspend the consumers when the buffer is empty, and make sure that only one process at a time manipulates the buffer, so there are no race conditions or lost updates.
As an example of how the sleep and wakeup system calls are used, consider the producer-consumer problem, also known as the bounded-buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information into the buffer and the consumer takes information out.
Trouble arises when
1. The producer wants to put new data in the buffer, but the buffer is already full.
Solution: The producer goes to sleep, to be awakened when the consumer has removed some data.
2. The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: The consumer goes to sleep until the producer puts some data in the buffer and wakes the consumer up.
Conclusion
This approach also leads to the same kind of race condition we have seen in earlier approaches. The race condition can occur because access to the shared counter 'count' (the number of items in the buffer) is unconstrained. The essence of the problem is that a wakeup call sent to a process that is not yet sleeping is lost. A sketch of this solution, and of where it breaks, is given below.
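A C-style sketch of the sleep/wakeup producer-consumer solution, following the usual textbook presentation; produce_item, insert_item, remove_item, consume_item, sleep_proc and wakeup_proc are assumed placeholders standing in for the sleep and wakeup calls described above.

/* Producer-consumer with sleep/wakeup and a shared count (sketch only). */
#define N 100                              /* number of slots in the buffer */

int count = 0;                             /* number of items in the buffer */

/* Placeholders, assumed for illustration (not real library calls). */
int  produce_item(void);
void insert_item(int item);
int  remove_item(void);
void consume_item(int item);
void sleep_proc(void);                     /* block the calling process      */
void wakeup_proc(int which);               /* unblock the producer/consumer  */
#define PRODUCER 0
#define CONSUMER 1

void producer(void)
{
    while (1) {
        int item = produce_item();
        if (count == N) sleep_proc();             /* buffer full: go to sleep        */
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup_proc(CONSUMER);    /* buffer was empty: wake consumer */
    }
}

void consumer(void)
{
    while (1) {
        if (count == 0) sleep_proc();             /* buffer empty: go to sleep       */
        int item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup_proc(PRODUCER);/* buffer was full: wake producer  */
        consume_item(item);
    }
}

/* Lost wakeup: if the consumer reads count == 0 and is preempted just before
   calling sleep_proc(), the producer's wakeup_proc(CONSUMER) arrives while the
   consumer is still awake and is lost; eventually both processes sleep forever. */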

Semaphores


E.W. Dijkstra (1965) abstracted the key notion of mutual exclusion in his concepts of semaphores.
Definition
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and by an initialization operation, sometimes called 'semaphore initialize'.
Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also called general semaphores) can assume any nonnegative value.

The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:
P(S): IF S > 0
THEN S := S - 1
ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal(S), operates as follows:
V(S): IF (one or more processes are waiting on S)
THEN (let one of these processes proceed)
ELSE S := S +1

Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore, S, is enforced within P(S) and V(S).
If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.
Semaphores solve the lost-wakeup problem.
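One possible way (a hedged sketch, not the only implementation) to realize a counting semaphore with the P and V semantics described above is with a POSIX mutex and condition variable; the mutex makes each P and V behave as an indivisible operation.

/* Counting semaphore sketch built from a pthread mutex and condition variable. */
#include <pthread.h>

typedef struct {
    int value;                        /* nonnegative semaphore value         */
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} semaphore;

void sem_initialize(semaphore *s, int initial)   /* the initialization operation */
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void P(semaphore *s)                  /* wait / down / sleep */
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)             /* S = 0: wait on S              */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value = s->value - 1;          /* S > 0: S := S - 1             */
    pthread_mutex_unlock(&s->lock);
}

void V(semaphore *s)                  /* signal / up / wakeup */
{
    pthread_mutex_lock(&s->lock);
    s->value = s->value + 1;          /* S := S + 1                    */
    pthread_cond_signal(&s->nonzero); /* let one waiting process proceed */
    pthread_mutex_unlock(&s->lock);
}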
Producer-Consumer Problem Using Semaphores
The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex.
The semaphore 'full' is used for counting the number of slots in the buffer that are full, the semaphore 'empty' for counting the number of slots that are empty, and the semaphore 'mutex' to make sure that the producer and consumer do not access the modifiable shared sections of the buffer simultaneously.
Initialization
• Set full buffer slots to 0.
i.e., semaphore Full = 0.
• Set empty buffer slots to N.
i.e., semaphore empty = N.
• To control access to the critical section, set mutex to 1.
i.e., semaphore mutex = 1.
Producer ( )
WHILE (true)
    produce-Item ( );
    P (empty);
    P (mutex);
    enter-Item ( );
    V (mutex);
    V (full);

Consumer ( )
WHILE (true)
    P (full);
    P (mutex);
    remove-Item ( );
    V (mutex);
    V (empty);
    consume-Item ( );
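The same solution written as a runnable C sketch using POSIX semaphores and threads (sem_wait corresponds to P and sem_post to V); the buffer size, item type, and item count are illustrative choices.

/* Producer-consumer with POSIX semaphores (illustrative sketch). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 10                          /* number of buffer slots              */

int buffer[N];
int in = 0, out = 0;                  /* next slot to fill / next slot to empty */

sem_t empty;                          /* counts empty slots, initialized to N */
sem_t full;                           /* counts full slots, initialized to 0  */
sem_t mutex;                          /* protects the buffer, initialized to 1 */

void *producer(void *arg)
{
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);             /* P(empty) */
        sem_wait(&mutex);             /* P(mutex) */
        buffer[in] = item;            /* enter item */
        in = (in + 1) % N;
        sem_post(&mutex);             /* V(mutex) */
        sem_post(&full);              /* V(full)  */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);              /* P(full)  */
        sem_wait(&mutex);             /* P(mutex) */
        int item = buffer[out];       /* remove item */
        out = (out + 1) % N;
        sem_post(&mutex);             /* V(mutex) */
        sem_post(&empty);             /* V(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}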









Definition of Process




The notion of process is central to the understanding of operating systems. There are quite a few definitions presented in the literature, but no "perfect" definition has yet appeared.

Definition
The term "process" was first used by the designers of the MULTICS in 1960's. Since then, the term process, used somewhat interchangeably with 'task' or 'job'. The process has been given many definitions for instance
• A program in Execution.
• An asynchronous activity.
• The 'animated spirit' of a procedure in execution.
• The entity to which processors are assigned.
• The 'dispatchable' unit.
and many more definitions have been given. As we can see from the above, there is no universally agreed-upon definition, but the definition "a program in execution" seems to be the one most frequently used, and this is the concept we will use in the present study of operating systems.
Now that we have agreed upon the definition of a process, the question is: what is the relation between a process and a program? Is it the same beast with a different name, so that when the beast is sleeping (not executing) it is called a program and when it is executing it becomes a process? To be very precise, a process is not the same as a program. In the following discussion we point out some of the differences between a process and a program.
A process is more than the program code. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. As we all know, a program is an algorithm expressed in some suitable notation (e.g., a programming language). Being passive, a program is only a part of a process. A process, on the other hand, includes:
• The current value of the Program Counter (PC).
• The contents of the processor's registers.
• The values of the variables.
• The process stack (SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.
• A data section that contains global variables.
A process is the unit of work in a system.
In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes. (This rapid switching back and forth is called multiprogramming.)




Process State


The process state consists of everything necessary to resume the process execution if it is somehow put aside temporarily. The process state consists of at least the following:
• Code for the program.
• Program's static data.
• Program's dynamic data.
• Program's procedure call stack.
• Contents of general purpose registers.
• Contents of program counter (PC)
• Contents of program status word (PSW).
• Operating Systems resource in use.

A process goes through a series of discrete process states:
• New state: The process is being created.
• Running state: A process is said to be running if it has the CPU, that is, it is actually using the CPU at that particular instant.
• Blocked (or waiting) state: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.
• Ready state: A process is said to be ready if it could use a CPU if one were available. A ready-state process is runnable but temporarily stopped to let another process run.
• Terminated state: The process has finished execution.
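As an illustrative sketch (the names are assumptions, not from any particular operating system), these discrete states map naturally onto a C enumeration:

/* Sketch: the discrete process states as a C enumeration. */
enum process_state {
    NEW,          /* being created                     */
    READY,        /* runnable, waiting for a CPU       */
    RUNNING,      /* currently using the CPU           */
    BLOCKED,      /* waiting for an event such as I/O  */
    TERMINATED    /* finished execution                */
};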













Process Control Block


A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process, including:
• The current state of the process i.e., whether it is ready, running, waiting, or whatever.
• Unique identification of the process in order to track "which is which" information.
• A pointer to parent process.
• Similarly, a pointer to child process (if it exists).
• The priority of process (a part of CPU scheduling information).
• Pointers to locate memory of processes.
• A register save area.
• The processor it is running on.
The PCB is a central store of information that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system. A sketch of such a structure is given below.
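A hedged sketch of a PCB as a C structure; the field names and types are illustrative assumptions, not taken from any particular operating system.

/* Sketch: a process control block as a C structure. */
struct pcb {
    int            pid;             /* unique process identification       */
    int            state;           /* ready, running, blocked, ...        */
    struct pcb    *parent;          /* pointer to the parent process       */
    struct pcb    *child;           /* pointer to a child process, if any  */
    int            priority;        /* CPU scheduling information          */
    void          *memory_map;      /* pointer to locate process memory    */
    unsigned long  registers[32];   /* register save area                  */
    int            cpu;             /* the processor it is running on      */
};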




