
In this section we take a brief look at the history of operating systems, which is almost the same as looking at the history of computers.


First Generation (1945-1955)
  • During the Second World War, many people were developing automatic calculating machines. These first-generation computers filled entire rooms with thousands of vacuum tubes. 
  • They had no operating system; they did not even have programming languages, and programmers had to physically wire the computer to carry out their intended instructions. Programmers also had to book time on the computer, since each needed dedicated use of the machine. 
Second Generation (1955-1965)
  • Vacuum tubes proved very unreliable, and a programmer wishing to run a program could easily spend all his/her time searching for and replacing tubes that had blown. 
  • Development of the transistor: Now, instead of programmers booking time on the machine, jobs were submitted on punched cards that were copied onto a magnetic tape. This tape was given to the operators, who ran the jobs through the computer and delivered the output to the expectant programmer. 
  • Concept of batch processing (jobs): Instead of submitting one job at a time, many jobs were placed onto a single tape and processed one after another by the computer. The ability to do this can be seen as the first real operating system. 


Third Generation (1965-1980)
  • IC (integrated circuit) as a replacement for transistors: The third generation saw the start of multiprogramming. That is, the computer could give the illusion of running more than one task at a time. 
  • When one job had to wait for an I/O request, another program could use the CPU. The concept of multiprogramming led to a need for a more complex operating system. 
  • Another feature of third-generation machines was spooling. Punched cards could be read onto disc as soon as they were brought into the computer room, and programs that produced output could run at the speed of the disc, not the printer. 
  • Up until this point, programmers were used to giving their job to an operator and watching it run. 
  • This led to the concept of time sharing, which allowed programmers to access the computer from a terminal and work in an interactive manner. 
  • Obviously, with the advent of multiprogramming, spooling and time sharing, operating systems had to become a lot more complex in order to deal with all these issues. 

Fourth Generation (1980-present): 
  • The late seventies saw the development of Large Scale Integration (LSI). This led directly to the development of the personal computer (PC). 
  • These computers were (originally) designed to be single-user, highly interactive, and to provide graphics capability. One of the requirements for the original PC produced by IBM was an operating system, and Bill Gates supplied MS-DOS, on which he made his fortune. 
  • In addition, mainly on non-Intel processors, the UNIX operating system was being used.
  • In short, the Graphical User Interface (GUI) became popular in fourth-generation computers.


Fifth Generation (Sometime in the future): 
  • We can notice that each generation has been influenced by new hardware. The fifth generation of computers may be the first that breaks with this tradition: advances in software will be as important as advances in hardware. 
  • These computers may be able to interact with humans in a way that is natural to us. No longer will we use mice and keyboards; we will be able to talk to computers in the same way that we communicate with each other. 
  • In addition, we will be able to talk in any language and the computer will have the ability to convert to any other language. Computers will also be able to reason in a way that imitates humans. Advances need to be made in AI (Artificial Intelligence). 
  • It is also likely that computers will need to be more powerful. Maybe parallel processing will be required. Maybe a computer based on a non-silicon substance may be needed to fulfill that requirement (as silicon has a theoretical limit as to how fast it can go). 

Segmentation
  • Paged virtual memory is one-dimensional: virtual addresses go from 0 to some maximum address.
  • A segment is a logical entity. It might contain a procedure, an array, a stack, or a collection of scalar variables, but usually it does not contain a mixture of different types.
  • Segmentation is a mechanism that provides the machine with many completely independent address spaces; thus segmentation provides a two-dimensional address space. Each segment consists of a linear sequence of addresses, from 0 to some maximum.



  • To specify an address in this segmented, two-dimensional memory, the program must supply a two-part virtual address: 1) a segment number s and 2) an address within that segment, called the offset.
  • To translate such a virtual address to a physical address, the segment number indexes a segment table holding each segment's base and limit; the offset is checked against the limit and then added to the base.
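This translation can be sketched in a few lines. The segment-table contents below are made-up example values, not taken from any real machine:

```python
# Hypothetical segment table: segment number -> (base, limit).
# The numbers are illustrative only.
SEGMENT_TABLE = {0: (1000, 400), 1: (6300, 1200), 2: (4300, 500)}

def translate(segment, offset):
    """Map a two-part virtual address (segment, offset) to a physical address."""
    if segment not in SEGMENT_TABLE:
        raise ValueError("invalid segment number")
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        # Offset falls outside the segment: a segmentation fault.
        raise ValueError("segmentation fault: offset exceeds segment limit")
    return base + offset
```

For example, translate(2, 53) yields 4300 + 53 = 4353, while an offset at or past the segment's limit is rejected.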


  • If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system must provide:
  • An algorithm that examines the state of the system to determine whether a deadlock has occurred.
  • An algorithm to recover from the deadlock.

Algorithm for detecting deadlock:
  1. For each node N in the graph, perform the following five steps with N as the starting node.
  2. Initialize L to the empty list and designate all arcs as unmarked.
  3. Add the current node to the end of L and check whether the node now appears in L twice. If it does, the graph contains a cycle (listed in L) and the algorithm terminates.
  4. From the given node, see if there are any unmarked outgoing arcs. If so, go to step 5; if not, go to step 6.
  5. Pick an unmarked outgoing arc at random and mark it. Then follow it to the new current node and go to step 3.
  6. If this is the initial node, the graph does not contain any cycles and the algorithm terminates. Otherwise, we have reached a dead end. Remove the node, go back to the previous node, make that the current node, and go to step 3.
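The six steps above amount to a depth-first search of the resource-allocation graph that stops as soon as a node repeats on the current path. A minimal sketch, assuming the graph is given as an adjacency dict whose keys are nodes (processes and resources alike) and whose values are lists of successors:

```python
def find_cycle(graph):
    """Return a cycle as a list of nodes if one exists, else None.

    graph: adjacency dict, node -> list of successor nodes
    (the arcs of a resource-allocation graph).
    """
    def dfs(node, path, marked):
        path = path + [node]                  # step 3: add current node to L
        if path.count(node) > 1:              # node appears twice: cycle found
            return path[path.index(node):]
        for succ in graph.get(node, []):      # step 4: any unmarked outgoing arcs?
            arc = (node, succ)
            if arc not in marked:
                marked.add(arc)               # step 5: mark the arc and follow it
                cycle = dfs(succ, path, marked)
                if cycle:
                    return cycle
        return None                           # step 6: dead end, back up
    for start in graph:                       # step 1: try every starting node
        cycle = dfs(start, [], set())         # step 2: fresh list L, all arcs unmarked
        if cycle:
            return cycle
    return None
```

With processes A, B holding resources R, S and waiting on each other, find_cycle({"A": ["R"], "R": ["B"], "B": ["S"], "S": ["A"]}) reports the cycle, and the algorithm returns None for an acyclic graph.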

Recovery From Deadlock
When a detection algorithm determines that a deadlock exists, there are two options for breaking it. One solution is simply to abort one or more processes to break the circular wait. The second option is to preempt some resources from one or more of the deadlocked processes.

Process Termination: To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system reclaims all resources allocated to the terminated processes.


  • Abort all deadlocked processes
  • Abort one process at a time until the deadlock cycle is eliminated. This incurs considerable overhead, since the deadlock-detection algorithm must be run again after each process is terminated.


Resource Preemption: To eliminate deadlocks using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken. Some issues that need to be addressed are:

  • Selecting a victim: Some process will have to be rolled back to break the deadlock. Select as victim the process that will incur the minimum cost.
  • Rollback: Determine how far to roll the process back. Total rollback means aborting the process and then restarting it; more effective is to roll the process back only as far as necessary to break the deadlock.

Starvation: Starvation happens if the same process is always chosen as the victim. To avoid this, we include the number of rollbacks in the cost factor.


Race Conditions and Mutual Exclusion
  • The critical section is the part of a process that accesses a shared variable.
  • One way to avoid race conditions is to never allow two processes to be in their critical sections at the same time.
  • That is, we need a mechanism for mutual exclusion: a way of ensuring that while one process is using the shared variable, no other process can access it.
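Mutual exclusion is exactly what a lock provides. In this small sketch (the names are illustrative), four threads increment a shared counter, and the lock guarantees that only one of them is inside the critical section at a time:

```python
import threading

counter = 0              # the shared variable
lock = threading.Lock()  # enforces mutual exclusion on the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:       # critical section: at most one thread in here at once
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40000: no increments were lost.
```

Without the lock, two threads could read the same old value of counter and both write back the same new value, losing an increment, which is precisely the race condition described below.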


  • Race conditions are situations where two or more processes read or write some shared data at the same time and the final result is incorrect. 
  • The final result may differ according to the order in which the competing processes complete. 
  • let us consider a simple but common example: a print spooler. When a process wants to print a file, it enters the file name in a special spooler directory. 
  • Another process periodically checks to see if there are any files to be printed, and if there are, it prints them and then removes their names from the directory.
  • Suppose the spooler directory has numbered slots, and two shared variables are maintained: in, the next free slot, and out, the next file to be printed.
  • Process A reads the value of in (i.e. 7), but before it can insert the name of its document in slot 7, the CPU decides to schedule process B. 
  • Now process B also reads in, and also gets 7. Process B continues to run, stores the name of its file in slot 7, and updates in to be 8. 



  • After some time, process A runs again. It looks at the value of in stored in its local variable, finds a 7 there, and writes its file name in slot 7, erasing the name process B just put there. Then it increments in, setting it to 8. The printer daemon will not notice anything wrong, but process B will never receive any output. Situations like this are called race conditions. 
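The lost update can be replayed deterministically. No real threads are used here; the unlucky interleaving is hard-coded, and the file names are invented for the demonstration:

```python
# Deterministic replay of the spooler race described above.
spooler = {}           # slot number -> file name (the spooler directory)
in_var = 7             # shared variable `in`: next free slot

a_local = in_var       # process A reads in and gets 7 ...
                       # ... then the CPU switches to process B

b_local = in_var       # process B also reads 7
spooler[b_local] = "B.txt"
in_var = b_local + 1   # B updates in to 8

spooler[a_local] = "A.txt"   # A resumes with its stale copy: overwrites slot 7
in_var = a_local + 1         # in becomes 8 again

# Slot 7 now holds A.txt; B.txt has silently vanished from the directory.
```

The final state looks perfectly consistent (in is 8, slot 7 is filled), which is why the printer daemon notices nothing wrong.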



System Calls
  • System calls are the interface between the operating system and the user programs. Access to the operating system is done through system calls.
  • Each system call has a procedure associated with it so that calls to the operating system can be done in a similar way as that of normal procedure call. 
  • When we call one of these procedures, it places the parameters into registers and informs the operating system that there is some work to be done. When a system call is made, a TRAP instruction is executed. This instruction switches the CPU from user mode to kernel (or supervisor) mode. 
  • Eventually, the operating system will have completed the work and will return a result.
  • Making a system call is like making a special kind of procedure call, only system calls enter the kernel and procedure calls do not.
  • Example: count = read(file, buffer, nbytes);
To make this concept clearer, let us examine the read call written above. The calling program first pushes the parameters onto the stack (steps 1-3) before calling the read library procedure (step 4), which actually makes the read system call. 
  • The library procedure, possibly written in assembly language, typically puts the system call number in a specified register (step 5). Then it executes a TRAP instruction to switch from user mode to kernel mode and start execution at a fixed address within the kernel (step 6). 
  • The kernel code examines the system call number and then dispatches to the correct system call handler (step 7). At that point the system call handler runs (step 8). 
  • Once the system call handler has completed its work, control may be returned to the user-space library procedure at the instruction following the TRAP instruction (step 9). 
  • This procedure then returns to the user program in the usual way procedure calls return (step 10). 
  • To finish the job, the user program has to clean up the stack, as it does after any procedure call (step 11).  
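The read call can be replayed from user space: the functions in Python's os module are thin wrappers over the corresponding system calls, so each line below crosses into the kernel once. The file contents here are invented for the demonstration:

```python
import os
import tempfile

# Create a scratch file to read from.
fd0, path = tempfile.mkstemp()
os.close(fd0)
with open(path, "w") as f:
    f.write("hello kernel")

fd = os.open(path, os.O_RDONLY)  # open system call: kernel returns a descriptor
buffer = os.read(fd, 5)          # read system call with nbytes = 5
count = len(buffer)              # bytes the kernel actually delivered
os.close(fd)                     # close system call
os.unlink(path)                  # remove the scratch file
```

Each os call traps into the kernel and returns a result, just as in the step-by-step description above; count holds the number of bytes actually read, which may be fewer than requested.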
