
System Calls

  • System calls are the interface between the operating system and user programs. Access to operating system services is done through system calls.
  • Each system call has a library procedure associated with it, so that a call to the operating system can be made in much the same way as a normal procedure call.
  • When we call one of these procedures, it places the parameters into registers and informs the operating system that there is work to be done. When a system call is made, a TRAP instruction is executed. This instruction switches the CPU from user mode to kernel (or supervisor) mode.
  • Eventually, the operating system will have completed the work and will return a result.
  • Making a system call is like making a special kind of procedure call, except that system calls enter the kernel whereas ordinary procedure calls do not.
  • Example: count = read(file, buffer, nbytes);
To make this concept clearer, let us examine the read call written above. The calling program first pushes the parameters onto the stack (steps 1-3) before calling the read library procedure (step 4), which actually makes the read system call.
  • The library procedure, possibly written in assembly language, typically puts the system call number in a specified register (step 5). Then it executes a TRAP instruction to switch from user mode to kernel mode and start execution at a fixed address within the kernel (step 6).
  • The kernel code examines the system call number and then dispatches to the correct system call handler (step 7). At that point the system call handler runs (step 8). 




  • Once the system call handler has completed its work, control may be returned to the user-space library procedure at the instruction following the TRAP instruction (step 9). 
  • This procedure then returns to the user program in the usual way procedure calls return (step 10). 
  • To finish the job, the user program has to clean up the stack, as it does after any procedure call (step 11).  
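
As a hedged illustration (not part of the original notes), the short C program below calls the read library procedure, which issues the read system call exactly as described in the steps above. The file name example.txt and the buffer size are assumptions made only for this sketch.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buffer[128];                        /* user-space buffer for the data  */
    int fd = open("example.txt", O_RDONLY);  /* open() is itself a system call  */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* The read library procedure traps into the kernel (steps 4-8) and
       returns the number of bytes actually read (steps 9-11). */
    ssize_t count = read(fd, buffer, sizeof(buffer));
    if (count < 0)
        perror("read");
    else
        printf("read %zd bytes\n", count);
    close(fd);
    return 0;
}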

Process 

A process is a program that is ready for execution on the CPU. When a program is loaded into memory, it becomes ready for execution and competes with other processes for access to the CPU; thus, a program becomes a process once it is loaded into memory. Computers nowadays can do many things at the same time: they can be writing to a printer, reading from a disk and scanning an image. The operating system is responsible for running many processes, usually on the same CPU. In fact, only one process can run at a time on a single CPU, so the operating system has to share the CPU among the processes that are ready to run, while giving the illusion that the computer is doing many things at the same time. This approach can be directly contrasted with the first computers, which could only run one program at a time.



Now, the main point of this part is to consider how an operating system deals with processes when we allow many of them to run. To achieve true parallelism we must have a multiprocessor system, in which n processors can execute n programs simultaneously. True parallelism cannot be achieved with a single CPU. On a single processor, the CPU switches from one process to another rapidly; in one second it can serve hundreds of processes, giving the user the illusion of parallelism, which is called pseudo-parallelism.
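
The notes above describe how a program becomes a process once it is loaded into memory. As a hedged sketch (assuming a Unix-like system with the POSIX fork and waitpid calls), the program below creates a second process; parent and child each run in their own address space.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();        /* fork() creates a new process (a system call) */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: a separate process with its own copy of the address space. */
        printf("child  pid=%d\n", getpid());
    } else {
        /* Parent: waits for the child to finish, then continues. */
        waitpid(pid, NULL, 0);
        printf("parent pid=%d\n", getpid());
    }
    return 0;
}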



Threads 

Threads are like mini-processes that operate within a single process. Each thread has its own program counter and stack so that it knows where it is. Apart from this, they can be considered the same as processes, with the exception that they share the same address space. This means that all threads from the same process have access to the same global variables and the same files. Threads are also called lightweight processes. The comparison given below shows the items that a process has versus the items that each thread has.
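
As a minimal sketch (assuming POSIX threads; compile with -pthread), the program below creates two threads inside one process. Both threads update the same global variable, which illustrates the shared address space described above; the counter and worker names are made up for this example.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                    /* global variable shared by all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);  /* protect the shared counter            */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* two threads in one process    */
    pthread_create(&t2, NULL, worker, NULL);   /* sharing the same address space */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);         /* both threads updated it       */
    return 0;
}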

Uses of Thread
  • Some applications need to perform multiple activities at once that share common data and files; this is the main reason for using multiple threads.
  • Threads give parallel entities within a process the ability to share common variables and data.
  • Since threads do not have their own address space, it is easier to create and destroy threads than processes. In fact, thread creation is around 100 times faster than process creation.
  • Thread switching has much less overhead than process switching, because threads have less state attached to them.
  • There is a performance gain when there is a lot of computing and a lot of I/O, since the tasks can overlap. A purely CPU-bound application, however, gains no advantage from threads on a single CPU.
  • Threads are very effective on multi-CPU systems for concurrent processing, because with multiple CPUs true parallelism can be achieved.


Difference between Process and Threads
Thread                                                 Process
Threads are lightweight.                               Processes are heavyweight.
A thread runs in the address space of its process.     Each process has its own address space.
Threads share files and other data.                    Processes do not share files and other data.
Thread switching is faster.                            Process switching is slower.
Threads are easy to create and destroy.                Processes are difficult to create and destroy.

Deadlock

In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a waiting state. Waiting processes may never again change state, because the resources they have requested are held by other waiting processes. This situation is called a deadlock. A system is deadlocked if there is a set of processes such that every process in the set is waiting for a resource held by another process in the set.

Example 1
The system has two disk drives.
P1 and P2 each hold one disk drive, and each needs another one.

Example 2
- Process A makes a request to use the scanner
- Process A is given the scanner
- Process B requests the CD writer
- Process B is given the CD writer
- Now A requests the CD writer
- A is denied permission until B releases the CD writer
- Now B also asks for the scanner
- Result is DEADLOCK !!
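
The same situation can be reproduced in code. The hedged sketch below (not from the original notes) uses two POSIX mutexes to stand in for the scanner and the CD writer; each thread grabs one device and then blocks forever waiting for the other, so the program never terminates.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t scanner   = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the scanner   */
pthread_mutex_t cd_writer = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the CD writer */

void *process_a(void *arg) {
    pthread_mutex_lock(&scanner);      /* A is given the scanner                   */
    sleep(1);                          /* give B time to grab the CD writer        */
    pthread_mutex_lock(&cd_writer);    /* A now requests the CD writer: blocks     */
    pthread_mutex_unlock(&cd_writer);
    pthread_mutex_unlock(&scanner);
    return NULL;
}

void *process_b(void *arg) {
    pthread_mutex_lock(&cd_writer);    /* B is given the CD writer                 */
    sleep(1);
    pthread_mutex_lock(&scanner);      /* B now asks for the scanner: blocks       */
    pthread_mutex_unlock(&scanner);
    pthread_mutex_unlock(&cd_writer);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);             /* never returns: both threads are deadlocked */
    pthread_join(b, NULL);
    return 0;
}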


Necessary conditions for deadlock/Deadlock Characterization

A deadlock can arise if the following four conditions hold simultaneously (Necessary Condition).
1. Mutual exclusion:
Only one process at a time can use a resource.
2.  Hold and wait: 
A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.

3. No preemption: 
A resource can be released only voluntarily by the process holding it, after that process has completed its task.

4. Circular wait:
A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.



Deadlock Handling

There are mainly four methods of handling deadlocks:
1. Deadlock avoidance 
To avoid deadlocks, the system dynamically considers every request and decides whether it is safe to grant it at that point. The system requires additional prior information regarding the overall potential use of each resource by each process.

2. Deadlock Prevention 
It means that we design a system in which there is no chance of a deadlock occurring. The goal is to ensure that at least one of Coffman's necessary conditions for deadlock can never hold. Deadlocks can be prevented by constraining how requests for resources can be made in the system and how they are handled.

3. Deadlock Detection and Recovery 
We can allow the system to enter a deadlock state, detect it, and recover. If a system does not employ either a deadlock prevention or a deadlock avoidance algorithm, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred and an algorithm to recover from the deadlock.

4. Ignore the Problem (Ostrich Algorithm):
We can ignore the deadlock problem altogether. If a deadlock occurs, the system will stop functioning and will need to be restarted manually. This approach is used in systems where deadlocks occur infrequently and where a deadlock is not life-critical. This method is cheaper than the other methods and is used by most operating systems, including UNIX.



Deadlock Prevention 

If we picture deadlock as a table standing on its four legs, then the four legs correspond to the four conditions which, when they occur simultaneously, cause the deadlock.

However, if we break one of the legs, the table will certainly fall. The same applies to deadlock: if we can violate one of the four necessary conditions so that they never hold together, we can prevent deadlock.

Deadlock prevention methods are given below:
1. Attack Mutual Exclusion
Mutual exclusion must hold only for non-sharable resources. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock.

2. Attack Hold and Wait
Ensure that whenever a process requests a resource, it does not hold any other resources. Require each process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it holds none. A drawback of this method is low resource utilization.
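
As a hedged sketch of this idea (reusing the scanner and CD writer from the earlier example, with POSIX mutexes standing in for the devices), the thread below never blocks on a second resource while holding the first: it either gets both at once or releases what it has and retries, approximating the all-or-nothing acquisition rule.

#include <pthread.h>
#include <sched.h>

pthread_mutex_t scanner   = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t cd_writer = PTHREAD_MUTEX_INITIALIZER;

void acquire_both(void) {
    for (;;) {
        pthread_mutex_lock(&scanner);
        if (pthread_mutex_trylock(&cd_writer) == 0)
            return;                        /* got both without waiting while holding */
        pthread_mutex_unlock(&scanner);    /* could not get the CD writer: back off  */
        sched_yield();                     /* let other threads make progress        */
    }
}

void release_both(void) {
    pthread_mutex_unlock(&cd_writer);
    pthread_mutex_unlock(&scanner);
}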

3. Attack No Preemption
If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are preempted (or released). If a process requests some resources and if they are not available, we check whether they are allocated to some other process that is waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process.

4. Attack Circular Wait
One way to ensure that circular wait never holds is to impose a total ordering of all resource types and require that each process requests resources in an increasing (or decreasing) order of enumeration.
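
A minimal sketch of resource ordering (again using mutexes for the scanner and CD writer as an assumed example): every thread that needs both resources acquires them in the same global order, so a cycle of waiting processes can never form.

#include <pthread.h>

pthread_mutex_t scanner   = PTHREAD_MUTEX_INITIALIZER;  /* resource number 1 */
pthread_mutex_t cd_writer = PTHREAD_MUTEX_INITIALIZER;  /* resource number 2 */

/* Acquire resources strictly in increasing order of their numbers:
   the scanner (1) always before the CD writer (2). */
void use_both(void) {
    pthread_mutex_lock(&scanner);
    pthread_mutex_lock(&cd_writer);
    /* ... use the scanner and the CD writer ... */
    pthread_mutex_unlock(&cd_writer);
    pthread_mutex_unlock(&scanner);
}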



Deadlock Avoidance 

Deadlock can be avoided if certain information about processes is available to the operating system before the allocation of resources, such as which resources a process will consume in its lifetime. For every resource request, the system checks whether granting the request would cause the system to enter an unsafe state; it grants only those requests that lead to safe states. In order for the system to determine whether the next state will be safe or unsafe, it must know at any time:
  • Resources currently available
  • Resources currently allocated to each process 
  • Resources that will be required and released by each process in the future
It is possible for the system to be in an unsafe state without this resulting in a deadlock. The notion of a safe/unsafe state only refers to whether the system can possibly enter a deadlock state. For example, if a process requests A, which would result in an unsafe state, but releases B, which would prevent circular wait, then the state is unsafe but the system is not in deadlock. One well-known algorithm used for deadlock avoidance is the Banker's algorithm, which requires the resource usage limit of each process to be known in advance. However, for many systems it is impossible to know in advance what every process will request, which makes deadlock avoidance impractical in general.
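
The heart of the Banker's algorithm is the safety check. The hedged sketch below (array sizes and layout are assumptions for illustration) applies that check to Example 1 above: two disk drives, with P1 and P2 each holding one and claiming a maximum of two, which the check reports as unsafe.

#include <stdbool.h>
#include <stdio.h>

#define P 2   /* number of processes: P1 and P2             */
#define R 1   /* number of resource types: disk drives only */

/* Returns true if the system is in a safe state, i.e. there is some order in
   which every process can obtain its maximum claim and finish. */
bool is_safe(int available[R], int max[P][R], int allocation[P][R]) {
    int work[R];
    bool finished[P] = { false };
    for (int r = 0; r < R; r++)
        work[r] = available[r];

    int done = 0;
    while (done < P) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p])
                continue;
            bool can_finish = true;              /* Need = Max - Allocation <= Work? */
            for (int r = 0; r < R; r++)
                if (max[p][r] - allocation[p][r] > work[r]) { can_finish = false; break; }
            if (can_finish) {
                for (int r = 0; r < R; r++)
                    work[r] += allocation[p][r]; /* the process releases everything  */
                finished[p] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;                        /* no process can finish: unsafe    */
    }
    return true;
}

int main(void) {
    int available[R]     = { 0 };                /* both drives are allocated       */
    int max[P][R]        = { { 2 }, { 2 } };     /* each process may need two drives */
    int allocation[P][R] = { { 1 }, { 1 } };     /* each currently holds one drive   */
    printf("state is %s\n", is_safe(available, max, allocation) ? "safe" : "unsafe");
    return 0;
}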

Associative Memory

Traditional memory stores data at a specific address and "recalls" that data later if the address is specified. Instead of an address, associative memory can recall data if a small portion of the data itself is specified.


Associative memory is often referred to as content-addressable memory (CAM). In associative memory, any stored item can be accessed by using the contents of the item itself. Items stored in an associative memory have a two-field format: key and data.
Associative searching is based on simultaneously matching the key to be searched against the stored key associated with each line of data.



The following diagram shows the block representation of an associative memory.
From the block diagram we can say that an associative memory consists of a memory array and logic for 'm' words with 'n' bits per word. 


The functional registers, the argument register A and the key register K, each have n bits, one for each bit of a word. The match register M consists of m bits, one for each memory word.
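
As a hedged software sketch of this behaviour (a hardware CAM compares all words in parallel; the word count, field widths and values below are assumptions), the program simulates the argument register A, the key register K used as a bit mask, and the match register described above.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define M 4   /* number of words in the associative memory (assumed size) */

typedef struct {
    uint8_t key;   /* key field  */
    uint8_t data;  /* data field */
} cam_word;

/* A word matches when its key, masked by the key register K, equals the
   argument register A masked the same way; the result sets the match bits. */
int cam_search(const cam_word mem[M], uint8_t A, uint8_t K, bool match[M]) {
    int matches = 0;
    for (int i = 0; i < M; i++) {
        match[i] = ((mem[i].key & K) == (A & K));
        if (match[i]) matches++;
    }
    return matches;
}

int main(void) {
    cam_word mem[M] = { {0x1A, 10}, {0x2B, 20}, {0x1C, 30}, {0x2D, 40} };
    bool match[M];
    /* Look for all items whose key has high nibble 0x2, ignoring the low nibble. */
    cam_search(mem, 0x20, 0xF0, match);
    for (int i = 0; i < M; i++)
        if (match[i]) printf("word %d matches, data = %d\n", i, mem[i].data);
    return 0;
}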

Pipeline Hazards

Pipeline hazards are situations that prevent the next instruction in the instruction stream from executing during its designated clock cycle. In other words, any condition that causes a stall in pipeline operation can be called a hazard. There are mainly three types of hazards:
  1. Data Hazards 
  2. Structural Hazards
  3. Control Hazards 


Data Hazards
A data hazard arises when an instruction depends on the result of a previous instruction, but that result is not yet available.
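
As a hedged source-level illustration (the notes discuss machine instructions; a compiler would translate these statements into dependent instructions), the second statement below has a read-after-write (RAW) dependency on the first, which on a pipelined CPU can force a stall or require forwarding.

#include <stdio.h>

int main(void) {
    int a = 5;
    int b = a + 1;   /* instruction 1: produces b                             */
    int c = b * 2;   /* instruction 2: reads b immediately (RAW dependency)   */
    /* On a pipelined CPU, instruction 2 may have to wait until instruction 1
       has written b, unless the hardware forwards the result. */
    printf("%d\n", c);
    return 0;
}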

Structural Hazards
They arise when resource conflicts prevent the hardware from executing instructions simultaneously. For example, suppose the hardware has a register file that allows only one read or write per cycle. If one instruction needs to read from this register file while another instruction needs to write to it in the same cycle, only one of them can proceed because of the conflict.

Control Hazards
These hazards arise as a result of branch instructions. Until the branch is completely executed, the stream of instructions that follows it is not completely known.


RISC Pipeline

The main characteristic of a reduced instruction set computer (RISC) pipeline is the use of an efficient instruction pipeline. In a RISC pipeline, the instruction pipeline can be implemented with only two or three segments, where segment 1 fetches the instruction from memory.

Segment 2 executes the instruction in the ALU, and segment 3 may be used to store the results of the ALU operation in a particular register.


