
  • Paged virtual memory is one-dimensional: virtual addresses go from 0 to some maximum address.
  • A segment is a logical entity. It might contain a procedure, an array, a stack, or a collection of scalar variables, but usually it does not contain a mixture of different types.
  • Segmentation is the mechanism that provides the machine with multiple completely independent address spaces; thus segmentation provides a two-dimensional address space. Each segment consists of a linear sequence of addresses, from 0 to some maximum.



  • To specify an address in this segmented, two-dimensional memory, the program must supply a two-part virtual address: 1) a segment number s and 2) an address within that segment, called the offset.
  • To translate a virtual address into a physical address, the segment number is used to index the segment table and the offset is added to that segment's base address.
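A minimal C sketch of this translation is given below, assuming a simple segment table whose entries hold a base address and a limit; the structure and field names are illustrative only, not a real MMU interface.

```c
#include <stdio.h>

/* Illustrative segment-table entry (assumed layout, not a real MMU format). */
struct segment {
    unsigned base;   /* physical address where the segment starts */
    unsigned limit;  /* length of the segment in bytes */
};

/* Translate a two-part virtual address (s, offset) into a physical address.
 * Returns -1 on an invalid segment number or an offset past the segment end. */
long translate(const struct segment *table, int nsegs, int s, unsigned offset)
{
    if (s < 0 || s >= nsegs)
        return -1;                       /* no such segment */
    if (offset >= table[s].limit)
        return -1;                       /* offset outside the segment: fault */
    return (long)table[s].base + offset; /* base of segment s plus offset */
}

int main(void)
{
    struct segment table[] = { { 0, 1024 }, { 4096, 512 } };
    printf("%ld\n", translate(table, 2, 1, 100));  /* prints 4196 (4096 + 100) */
    return 0;
}
```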


  • If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system must provide:
  • An algorithm that examines the state of the system to determine whether a deadlock has occurred.
  • An algorithm to recover from the deadlock.

Algorithm for detecting deadlock:
  1. For each node N in the graph, perform the following five steps with N as the starting node.
  2. Initialize L to the empty list and designate all arcs as unmarked.
  3. Add the current node to the end of L and check whether it now appears in L twice. If it does, the graph contains a cycle (listed in L) and the algorithm terminates.
  4. From the given node, see whether there are any unmarked outgoing arcs. If so, go to step 5; if not, go to step 6.
  5. Pick an unmarked outgoing arc at random and mark it. Then follow it to the new current node and go to step 3.
  6. If this node is the initial node, the graph does not contain any cycles and the algorithm terminates. Otherwise, we have reached a dead end. Remove the node, go back to the previous node, make that one the current node, and go to step 3.
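The C sketch below implements the same idea with a recursive depth-first search over a small adjacency matrix; the graph representation, node count, and example arcs are assumptions made purely for illustration.

```c
#include <stdio.h>

#define N 4                  /* number of nodes in the example graph */

int adj[N][N];               /* adjacency matrix: adj[u][v] = 1 means an arc u -> v */
int on_path[N];              /* 1 if the node is currently on the search path (the list L) */

/* Depth-first search from node u; returns 1 if a cycle is reachable from u. */
int dfs(int u)
{
    if (on_path[u])
        return 1;            /* node appears on the path twice: cycle found */
    on_path[u] = 1;          /* add u to the path */
    for (int v = 0; v < N; v++)
        if (adj[u][v] && dfs(v))
            return 1;
    on_path[u] = 0;          /* dead end: remove u and back up */
    return 0;
}

int main(void)
{
    /* Example graph: 0 -> 1 -> 2 -> 0 forms a cycle; node 3 is isolated. */
    adj[0][1] = adj[1][2] = adj[2][0] = 1;

    for (int n = 0; n < N; n++)             /* step 1: try every node as the start */
        if (dfs(n)) {
            printf("cycle found starting from node %d: deadlock\n", n);
            return 0;
        }
    printf("no cycle found: no deadlock\n");
    return 0;
}
```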

Recovery From Deadlock
When a detection algorithm determines that a deadlock exists, there are two options for breaking a deadlock. One solution is simply to abort one or more processes  to break the circular wait. The second option is to preempt some resources from one or more of the deadlocked processes.

Process Termination: To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system reclaims all resources allocated to the terminated processes.


  • Abort all deadlocked processes.
  • Abort one process at a time until the deadlock cycle is eliminated. This incurs considerable overhead, since the deadlock-detection algorithm must be re-run after each process is terminated.


Resource Preemption: To eliminate deadlocks using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken. Some issues that need to be addressed are:

  • Selecting a victim: Some process will have to be rolled back to break the deadlock. Select as the victim the process whose rollback will incur the minimum cost.
  • Rollback: Determine how far to roll the process back. Total rollback aborts the process and then restarts it; a more effective approach is to roll the process back only as far as necessary to break the deadlock.

Starvation: Starvation occurs if the same process is always chosen as the victim. To avoid this, we include the number of rollbacks in the cost factor.


  • One way to avoid race conditions is not to allow two processes to be in their critical sections at the same time.
  • The critical section is the part of a process that accesses a shared variable.
  • That is, we need a mechanism for mutual exclusion: a way of ensuring that while one process is using the shared variable, no other process can access it.
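As a minimal illustration of mutual exclusion, the C sketch below protects a shared counter with a POSIX mutex; the variable and function names are arbitrary choices for the example.

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;                       /* the shared variable */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical section */
        shared++;                     /* at most one thread executes this at a time */
        pthread_mutex_unlock(&lock);  /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d\n", shared);  /* always 200000, because updates never interleave */
    return 0;
}
```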


  • Race conditions are situations where two or more processes read or write shared data at the same time and the final result is incorrect.
  • The final result may differ depending on the order in which the competing processes complete.
  • Let us consider a simple but common example: a print spooler. When a process wants to print a file, it enters the file name in a special spooler directory.
  • Another process periodically checks to see if there are any files to be printed, and if there are, it prints them and then removes their names from the directory.
  • Consider the following situation, where in and out are two shared variables: in points to the next free slot in the spooler directory and out points to the next file to be printed.
  • Process A reads the value of in (i.e. 7), but before it can insert the name of the document to be printed in slot 7, the CPU decides to schedule process B.
  • Now process B also reads in and also gets 7. Process B continues to run, stores the name of its file in slot 7, and updates in to 8.



  • After some time, process A runs again. It looks at the value of in stored in its local variable, finds a 7 there, and writes its file name in slot 7, overwriting the name that process B just stored there. It then increments its local value and sets in to 8. The printer daemon will not notice anything wrong, but process B will never receive any output. Situations like this are called race conditions.
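The C sketch below reproduces this kind of race with two POSIX threads that share a slot index much like the spooler's in variable; the read-then-write sequence is deliberately left unprotected, so updates can be lost (guarding it with the mutual-exclusion mechanism described above removes the race). The names and sizes are illustrative only.

```c
#include <pthread.h>
#include <stdio.h>

#define SLOTS 200000

int in = 0;                     /* shared: index of the next free slot, like the spooler's in */
int slot[SLOTS];                /* which thread claimed each slot */

void *spool(void *arg)
{
    int me = (int)(long)arg;
    for (int i = 0; i < SLOTS / 2; i++) {
        int local = in;         /* read in into a local variable ...                 */
        slot[local] = me;       /* ... store our "file name" in that slot ...        */
        in = local + 1;         /* ... then update in; the other thread may have     */
    }                           /* done the same in between, so an update is lost    */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, spool, (void *)1L);
    pthread_create(&b, NULL, spool, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Without locking, in usually ends up smaller than SLOTS because of lost updates. */
    printf("in = %d (expected %d)\n", in, SLOTS);
    return 0;
}
```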



  • System calls are the interface between the operating system and the user programs. Access to the operating system is done through system calls.
  • Each system call has a library procedure associated with it, so that calls to the operating system can be made in a similar way to normal procedure calls.
  • When we call one of these procedures, it places the parameters into registers and informs the operating system that there is work to be done. When a system call is made, a TRAP instruction is executed. This instruction switches the CPU from user mode to kernel (or supervisor) mode.
  • Eventually, the operating system will have completed the work and will return a result.
  • Making a system call is like making a special kind of procedure call, except that system calls enter the kernel and ordinary procedure calls do not.
  • Example: count = read(file, buffer, nbytes);
To make this concept clearer, let us examine the read call written above. The calling program first pushes the parameters onto the stack (steps 1-3) and then calls the read library procedure (step 4), which actually makes the read system call.
  • The library procedure, possibly written in assembly language, typically puts the system call number in a specified register (step 5). Then it executes a TRAP instruction to switch from user mode to kernel mode and start execution at a fixed address within the kernel (step 6).
  • The kernel code examines the system call number and then dispatches to the correct system call handler (step 7). At that point the system call handler runs (step 8). 




  • Once the system call handler has completed its work, control may be returned to the user-space library procedure at the instruction following the TRAP instruction (step 9). 
  • This procedure then returns to the user program in the usual way procedure calls return (step 10). 
  • To finish the job, the user program has to clean up the stack, as it does after any procedure call (step 11).  
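A minimal, self-contained C version of this read call is sketched below; the file name example.txt and the buffer size are assumptions made for illustration.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define NBYTES 128

int main(void)
{
    char buffer[NBYTES];
    int file = open("example.txt", O_RDONLY);   /* example.txt is an assumed file name */
    if (file < 0) {
        perror("open");
        return 1;
    }

    /* The read library procedure traps into the kernel (steps 4-8) and
       returns the number of bytes actually read (steps 9-11). */
    ssize_t count = read(file, buffer, NBYTES);
    if (count < 0)
        perror("read");
    else
        printf("read %zd bytes\n", count);

    close(file);
    return 0;
}
```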

Process 

A process is a program that is ready for execution on the CPU. When a program is loaded into memory, it becomes ready for execution and competes with other processes for access to the CPU; in other words, a program becomes a process once it is loaded into memory. Computers nowadays can do many things at the same time: they can be writing to a printer, reading from a disk and scanning an image. The operating system is responsible for running many processes, usually on the same CPU. In fact, only one process can run at a time, so the operating system has to share the CPU between the processes that are available to be run, while giving the illusion that the computer is doing many things at the same time. This approach can be directly contrasted with the first computers, which could only run one program at a time.



Now, the main point of this part is to consider how an operating system deals with processes when we allow many to run. To achieve true parallelism we must have a multiprocessor system, in which n processors can execute n programs simultaneously; true parallelism cannot be achieved with a single CPU. On a single processor, the CPU switches from one process to another rapidly. In one second the CPU can serve hundreds of processes, giving the user the illusion of parallelism, which is called pseudo-parallelism.



Threads 

Threads are like mini-processes that operate within a single process. Each thread has its own program counter and stack so that it knows where it is. Apart from this, threads can be considered the same as processes, with the exception that they share the same address space. This means that all threads of the same process have access to the same global variables and the same files. Threads are also called lightweight processes. The table given below shows the various items that a process has compared to the items that each thread has.

Uses of Thread
  • Some applications need to perform multiple activities at once that share common data and files; this is the reason for needing multiple threads.
  • Threads give parallel entities within a process the ability to share common variables and data.
  • Since threads do not have their own address space, it is easier to create and destroy threads than processes. In fact, thread creation is around 100 times faster than process creation.
  • Thread switching has much less overhead than process switching because threads have less information attached to them.
  • There is a performance gain when there is a lot of computing and a lot of I/O, since the tasks can overlap. A purely CPU-bound application, however, gains no advantage from threads.
  • Threads are very effective on multiple-CPU systems for concurrent processing, because with multiple CPUs true parallelism can be achieved.
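A minimal POSIX-threads sketch of these ideas is given below; the function and variable names are illustrative. Each thread gets its own stack, while the global array lives in the single address space that both threads share.

```c
#include <pthread.h>
#include <stdio.h>

/* Global data lives in the process's single address space,
   so every thread of the process can read and write it. */
int shared_data[3];

void *task(void *arg)
{
    int id = (int)(long)arg;          /* id lives on this thread's own stack */
    shared_data[id] = id * 10;        /* both threads touch the same global array */
    printf("thread %d finished\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    /* Creating a thread is much cheaper than creating a whole new process. */
    pthread_create(&t1, NULL, task, (void *)1L);
    pthread_create(&t2, NULL, task, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_data: %d %d\n", shared_data[1], shared_data[2]);  /* 10 20 */
    return 0;
}
```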


Difference between Process and Threads
Thread                                              | Process
Threads are lightweight.                            | Processes are heavyweight.
Threads run in the address space of their process.  | Processes have their own address space.
Threads share files and other data.                 | Processes do not share files and other data.
Thread switching is faster.                         | Process switching is slower.
Threads are easy to create and destroy.             | Processes are difficult to create and destroy.
