
A thread is a lightweight process: a single program can be divided into small threads that execute concurrently for faster completion of a task. We can say that a thread is a sub-process of a process.

A thread is an independent path of execution within a program. Many threads can run concurrently within a program. Every thread in Java is created and controlled by the java.lang.Thread class. A Java program can have many threads, and these threads can run concurrently, either synchronously or asynchronously.

 - Threads are lightweight compared to processes.

Difference between thread and process

Process | Thread
- A process is an instance of a program. It can contain threads. | - Threads are parts of a process. A thread cannot contain a process.
- A process runs in its own separate memory space. | - Threads run in a shared memory space.
- Processes are controlled by the operating system. | - Threads are controlled by the programmer in a program.
- Processes are independent. | - Threads are dependent.
- Creating a new process requires duplicating the parent process. | - New threads are easily created.
- Processes require more time for context switching as they are heavier. | - Threads require less time for context switching as they are lighter than processes.
- Processes require more resources than threads. | - Threads generally need fewer resources than processes.
- Processes require more time for termination. | - Threads require less time for termination.

 Life cycle of threads

 

New
A thread is in the new state when an instance of the Thread class has been created but the start() method has not yet been invoked.

Runnable
A thread in the runnable state is executing in the Java virtual machine, but it may be waiting for other resources from the operating system, such as the processor.

Running
A thread is in the running state if the thread scheduler has selected it.

Blocked (Non Runnable)
It is the state when the thread is still alive, but is currently not eligible to run.

Terminated
A thread is in the terminated or dead state when its run() method exits.
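The life cycle above can be observed directly in Java via Thread.getState(). A minimal sketch showing the thread's state before start() is invoked and after run() exits:

```java
public class LifeCycleDemo {
    // Observe a thread's state before start() and after run() exits.
    public static Thread.State[] observe() {
        Thread t = new Thread(() -> {
            long sum = 0;                     // trivial work done while RUNNABLE/RUNNING
            for (int i = 0; i < 1000; i++) sum += i;
        });
        Thread.State before = t.getState();   // NEW: instance created, start() not yet invoked
        t.start();                            // thread becomes RUNNABLE; the scheduler may run it
        try { t.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        Thread.State after = t.getState();    // TERMINATED: run() has exited
        return new Thread.State[]{before, after};
    }

    public static void main(String[] args) {
        Thread.State[] s = observe();
        System.out.println(s[0] + " -> " + s[1]);   // NEW -> TERMINATED
    }
}
```

Note that getState() also reports BLOCKED and WAITING for the non-runnable states described above.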


  • A critical section is the part of a process that accesses a shared variable. 
  • One way to avoid race conditions is not to allow two processes to be in their critical sections at the same time. 
  • That is, we need a mechanism of mutual exclusion. Mutual exclusion is a way of ensuring that while one process is using the shared variable, no other process is allowed to access that variable.
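In Java, mutual exclusion over a shared variable can be sketched with the synchronized keyword; the increment method below is the critical section:

```java
public class Counter {
    private int value = 0;

    // The critical section: synchronized ensures only one thread runs it at a time.
    public synchronized void increment() { value++; }
    public synchronized int get() { return value; }

    // Two threads each add 10,000 to the shared counter.
    public static int runDemo() {
        Counter c = new Counter();
        Runnable work = () -> { for (int i = 0; i < 10000; i++) c.increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        return c.get();
    }

    public static void main(String[] args) {
        System.out.println(Counter.runDemo()); // always 20000
    }
}
```

Without synchronized, the two unprotected read-modify-write sequences could interleave and lose updates, which is exactly the race condition described above.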


Process 

A process is a program that is ready for execution on the CPU. When a program is loaded into memory, it becomes ready for execution and competes with other processes for access to the CPU. Thus, when a program is loaded into memory it becomes a process. Computers nowadays can do many things at the same time: writing to a printer, reading from a disk, and scanning an image. The operating system is responsible for running many processes, usually on the same CPU. In fact, only one process can run at a time, so the operating system has to share the CPU between the processes that are available to run, while giving the illusion that the computer is doing many things at the same time. This approach can be directly contrasted with the first computers, which could only run one program at a time. 



Now, the main point of this part is to consider how an operating system deals with processes when we allow many to run. To achieve true parallelism we must have a multiprocessor system, where n processors can execute n programs simultaneously. True parallelism cannot be achieved with a single CPU. On a single processor, the CPU switches from one process to another rapidly. In one second the CPU can serve hundreds of processes, giving the user the illusion of parallelism; this is called pseudo-parallelism. 



Threads 

Threads are like mini-processes that operate within a single process. Each thread has its own program counter and stack so that it knows where it is. Apart from this, threads can be considered the same as processes, with the exception that they share the same address space. This means that all threads of the same process have access to the same global variables and the same files. Threads are also called lightweight processes. The table given below shows the various items that a process has compared to the items that each thread has. 

Uses of Thread
  • Some applications need to perform multiple activities at once that share common data and files. This is the main reason for using multiple threads. 
  • Parallel entities within a process are able to share common variables and data. 
  • Since threads do not have their own address space, it is easier to create and destroy threads than processes. In fact, thread creation is around 100 times faster than process creation. 
  • Thread switching has much less overhead than process switching because threads have little information attached to them. 
  • There is a performance gain when there is a lot of computing and a lot of I/O, as the tasks can overlap. A purely CPU-bound application, however, gains no advantage from threads on a single CPU.
  • Threads are very effective on multiple-CPU systems for concurrent processing, because with multiple CPUs true parallelism can be achieved.


Difference between Process and Threads
Thread | Process
- Threads are lightweight. | - Processes are heavyweight.
- Threads run in the address space of a process. | - Processes have their own address space.
- Threads share files and other data. | - Processes do not share files and other data.
- Thread switching is faster. | - Process switching is slower.
- Threads are easy to create and destroy. | - Processes are difficult to create and destroy.

In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a wait state. Waiting processes may never again change state, because the resources they have requested are held by other waiting processes. This situation is called a deadlock. A system is deadlocked if there is a set of processes such that every process in the set is waiting for another process in the set.

Example 1
The system has 2 disk drives.
P1 and P2 each hold one disk drive and each needs the other one.

Example 2
- Process A makes a request to use the scanner
- Process A is given the scanner
- Process B requests the CD writer
- Process B is given the CD writer
- Now A requests the CD writer
- A is denied permission until B releases the CD writer
- Now B also asks for the scanner
- The result is DEADLOCK!
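The scanner/CD-writer scenario can be sketched in Java with two ReentrantLocks standing in for the devices (hypothetical names; tryLock is used so the sketch reports the denied request instead of actually hanging):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockScenario {
    // Hypothetical locks standing in for the scanner and the CD writer.
    static final ReentrantLock scanner = new ReentrantLock();
    static final ReentrantLock cdWriter = new ReentrantLock();

    // Returns whether process A is granted the CD writer while B holds it.
    public static boolean aGetsCdWriter() {
        CountDownLatch bHolds = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(1);
        Thread b = new Thread(() -> {
            cdWriter.lock();                      // Process B is given the CD writer
            bHolds.countDown();
            try { done.await(); } catch (InterruptedException ignored) { }
            cdWriter.unlock();
        });
        b.start();
        try {
            bHolds.await();                       // wait until B really holds the CD writer
            scanner.lock();                       // Process A is given the scanner
            boolean granted = cdWriter.tryLock(); // A now requests the CD writer
            if (granted) cdWriter.unlock();
            scanner.unlock();
            done.countDown();
            b.join();
            return granted;                       // false: A must wait
        } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        System.out.println("A granted CD writer: " + aGetsCdWriter()); // false
    }
}
```

If A blocked on cdWriter.lock() instead, and B then asked for the scanner, both threads would wait forever: a deadlock.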


Necessary conditions for deadlock/Deadlock Characterization

A deadlock can arise if the following four conditions hold simultaneously (Necessary Condition).
1. Mutual exclusion: 
Only one process at a time can use the resource.
2.  Hold and wait: 
A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.

3. No preemption: 
A resource can be released only voluntarily by the process holding it, after that process has completed its task.

4. Circular wait:
A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.



Deadlocks Handling

Mainly there are four methods of handling deadlock
1. Deadlock avoidance 
To avoid deadlocks, the system dynamically considers every request and decides whether it is safe to grant it at this point. The system requires additional prior information regarding the overall potential use of each resource by each process. 

2. Deadlock Prevention 
It means that we design a system in which there is no chance of a deadlock occurring. The goal is to ensure that at least one of the Coffman necessary conditions for deadlock can never hold. Deadlocks can be prevented by constraining how requests for resources can be made in the system and how they are handled.

3. Deadlock Detection and Recovery 
We can allow the system to enter a deadlock state, detect it, and recover. If a system employs neither a deadlock prevention nor a deadlock avoidance algorithm, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred and an algorithm to recover from the deadlock.

4.  Ignore the Problem (Ostrich Algorithm) : 
We can ignore the deadlock problem altogether. If a deadlock occurs, the system will stop functioning and will need to be restarted manually. This approach is used in systems in which deadlocks occur infrequently and which are not life-critical. This method is cheaper than the other methods. It is the solution used by most operating systems, including UNIX.



Deadlock Prevention 

If we picture deadlock as a table standing on four legs, then the four legs are the four conditions which, when they occur simultaneously, cause the deadlock.

If we break one of the legs, the table will definitely fall. The same happens with deadlock: if we can violate one of the four necessary conditions and prevent them from occurring together, then we can prevent the deadlock.

Deadlock prevention methods are given below
1. Attack mutual Exclusion
Mutual exclusion must hold only for non-sharable resources. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock.

2. Attack Hold and Wait
Ensure that whenever a process requests a resource, it does not hold any other resources. Either require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it holds none. The drawback of this method is low resource utilization.

3. Attack No Preemption
If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are preempted (or released). If a process requests some resources and if they are not available, we check whether they are allocated to some other process that is waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process.

4. Attack Circular Wait
One way to ensure that circular wait never holds is to impose a total ordering of all resource types and require that each process requests resources in an increasing (or decreasing) order of enumeration.
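A minimal Java sketch of this idea: number the resources and always acquire them in increasing order, so a cycle of waiting processes cannot form (acquirePair/releasePair are hypothetical helper names):

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    // Resources numbered by a total order; every process must acquire in increasing order.
    static final ReentrantLock[] resources = { new ReentrantLock(), new ReentrantLock(), new ReentrantLock() };

    // Acquire two resources in enumeration order, regardless of the order requested.
    public static void acquirePair(int i, int j) {
        int lo = Math.min(i, j), hi = Math.max(i, j);
        resources[lo].lock();
        resources[hi].lock();
    }

    public static void releasePair(int i, int j) {
        resources[Math.max(i, j)].unlock();
        resources[Math.min(i, j)].unlock();
    }

    public static void main(String[] args) throws InterruptedException {
        // Opposite request orders that would deadlock with naive locking run safely here.
        Thread t1 = new Thread(() -> { acquirePair(0, 2); releasePair(0, 2); });
        Thread t2 = new Thread(() -> { acquirePair(2, 0); releasePair(2, 0); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("finished without circular wait");
    }
}
```

Because both threads lock resource 0 before resource 2, neither can hold the higher-numbered resource while waiting for the lower one, so circular wait is impossible.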



Deadlock Avoidance 

Deadlock can be avoided if certain information about processes is available to the operating system before resources are allocated, such as which resources a process will consume in its lifetime. For every resource request, the system checks whether granting the request will cause the system to enter an unsafe state. The system then grants only those requests that lead to safe states. In order to determine whether the next state will be safe or unsafe, the system must know in advance, at any time: 
  • Resources currently available
  • Resources currently allocated to each process 
  • Resources that will be required and released by these processes in the future
It is possible for a process to be in an unsafe state without this resulting in a deadlock. The notion of a safe/unsafe state refers only to the ability of the system to enter a deadlocked state. For example, if a process requests A, which would result in an unsafe state, but releases B, which would prevent circular wait, then the state is unsafe but the system is not in deadlock. One well-known algorithm used for deadlock avoidance is the Banker's algorithm, which requires resource usage limits to be known in advance. However, for many systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is often impractical. 
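The safety check at the heart of the Banker's algorithm can be sketched as follows (a simplified version; the data in main is the classic five-process, three-resource textbook state):

```java
public class BankersSafety {
    // Returns true if the state (available, allocation, maxNeed) is safe,
    // i.e. some order exists in which every process can finish.
    public static boolean isSafe(int[] available, int[][] allocation, int[][] maxNeed) {
        int n = allocation.length, m = available.length;
        int[] work = available.clone();
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < n; p++) {
                if (finished[p]) continue;
                boolean canRun = true;          // can p's remaining need be met from work?
                for (int r = 0; r < m; r++)
                    if (maxNeed[p][r] - allocation[p][r] > work[r]) { canRun = false; break; }
                if (canRun) {                   // p can finish; it releases everything it holds
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r];
                    finished[p] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] available = {3, 3, 2};
        int[][] alloc = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        int[][] max   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
        System.out.println(isSafe(available, alloc, max)); // true: e.g. P1, P3, P4, P0, P2 can finish in order
    }
}
```

The avoidance scheme grants a request only if the resulting state still passes this check.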

Traditional memory stores data at a specific address and recalls that data later when that address is specified. Instead of an address, associative memory can recall data if a small portion of the data itself is specified.


Associative memory is often referred to as content-addressable memory (CAM). In associative memory, any stored item can be accessed by using the contents of the item. Items stored in an associative memory have a two-field format: key and data.
Associative searching is based on simultaneously matching the key to be searched against the stored key associated with each line of data.



The following diagram shows the block representation of an associative memory.
From the block diagram we can say that an associative memory consists of a memory array and logic for m words with n bits per word. 


The functional registers, the argument register A and the key register K, each have n bits, one for each bit of a word. The match register M consists of m bits, one for each memory word.
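The associative search described above can be sketched in software: every word is compared against argument register A on the bit positions selected by key register K, and the match register records which words matched (real CAM hardware does all comparisons simultaneously; the loop here is only an illustration):

```java
public class AssociativeMemory {
    // m words of n bits each, modelled as integers.
    private final int[] words;

    public AssociativeMemory(int[] words) { this.words = words; }

    // Compare every word against argument A, but only on the bit positions
    // selected by key register K; returns the match register (bit i set = word i matched).
    public int search(int a, int k) {
        int match = 0;
        for (int i = 0; i < words.length; i++)
            if ((words[i] & k) == (a & k))
                match |= 1 << i;
        return match;
    }

    public static void main(String[] args) {
        AssociativeMemory cam = new AssociativeMemory(new int[]{0b1011, 0b0011, 0b1111});
        // Match on the low two bits only: all three stored words end in 11.
        System.out.println(Integer.toBinaryString(cam.search(0b0011, 0b0011))); // 111
    }
}
```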

Pipeline hazards are situations that prevent the next instruction in the instruction stream from executing during its designated clock cycle. In other words, any condition that causes a stall in pipeline operation can be called a hazard. There are mainly three types of hazards:
  1. Data Hazards 
  2. Structural Hazards
  3. Control Hazards 


Data Hazards
Data hazards arise when an instruction depends on the result of a previous instruction, but that result is not yet available.

Structural Hazards:
Structural hazards arise when resource conflicts prevent the hardware from executing instructions simultaneously. For example, say the hardware has a register file limited to one read or write per cycle. If one instruction needs to read from this register file while another needs to write to it, only one can execute because of the conflict.

Control Hazards
These hazards arise as a result of any type of branch instruction. Until the branch is completely executed, the stream of following instructions is not completely known.


Reduced instruction set computer (RISC)
The main characteristic of a RISC pipeline is the use of an efficient instruction pipeline. In a RISC pipeline, the instruction pipeline can be implemented with only two or three segments: segment 1 fetches the instruction from memory, segment 2 executes the instruction in the ALU, and segment 3 may be used to store the result of the ALU operation in a particular register.



Parallel processing systems are designed to speed up the execution of programs by dividing a program into multiple fragments and processing these fragments simultaneously. Such systems are known as tightly coupled systems.


Parallel computing is an evolution of serial computing in which jobs are broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions. Instructions from each part execute simultaneously on different CPUs.


Parallel systems are more difficult to program than computers with a single processor because the architecture of parallel computers varies, and the processes on multiple CPUs must be coordinated and synchronized.

Pipelining is the process of accumulating instructions from the processor through a pipeline. It allows storing and executing instructions in an orderly process. It is also known as pipeline processing.

Pipelining is a technique in which multiple instructions are overlapped during execution. The pipeline is divided into stages, and these stages are connected with one another to form a pipe-like structure. Instructions enter from one end and exit from the other end.

Pipelining increases the overall instruction throughput. 


  1. Arithmetic Pipeline 
  2. Instruction Pipeline 
Arithmetic Pipeline 
Arithmetic pipeline units are usually found in most high-speed computers. They are used for floating-point operations, multiplication of fixed-point numbers, and similar computations in scientific problems. 
For example, the inputs to a floating-point adder pipeline are:
X = A*2^a
Y = B*2^b

Here, A and B are the significant digits (mantissas) of the floating-point numbers, while a and b are the exponents.
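A floating-point adder pipeline is usually drawn with four segments: compare exponents, align mantissas, add mantissas, normalize. The sketch below runs those segments sequentially on one operand pair (base-10 fractions for readability, not IEEE 754):

```java
public class FloatAdderPipeline {
    // Add X = aMant * 10^aExp and Y = bMant * 10^bExp, stage by stage.
    public static double[] add(double aMant, int aExp, double bMant, int bExp) {
        // Segment 1: compare exponents.
        int diff = aExp - bExp;
        // Segment 2: align mantissas so both operands share the larger exponent.
        if (diff > 0) bMant /= Math.pow(10, diff); else aMant /= Math.pow(10, -diff);
        int exp = Math.max(aExp, bExp);
        // Segment 3: add the (now aligned) mantissas.
        double mant = aMant + bMant;
        // Segment 4: normalize so the mantissa is a fraction less than 1.
        if (Math.abs(mant) >= 1.0) { mant /= 10; exp += 1; }
        return new double[]{mant, exp};
    }

    public static void main(String[] args) {
        // X = 0.9504 * 10^3, Y = 0.8200 * 10^2
        double[] r = add(0.9504, 3, 0.8200, 2);
        System.out.println(r[0] + " * 10^" + (int) r[1]); // ≈ 0.10324 * 10^4
    }
}
```

In a real pipeline each segment works on a different operand pair in the same clock cycle, so a new sum can be completed every cycle once the pipe is full.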


Instruction Pipeline
Here a stream of instructions can be executed by overlapping the fetch, decode and execute phases of an instruction cycle. This technique is used to increase the throughput of the computer system.

An instruction pipeline reads instructions from memory while previous instructions are being executed in other segments of the pipeline. Thus we can execute multiple instructions simultaneously. The pipeline is more efficient if the instruction cycle is divided into segments of equal duration.
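The benefit of this overlap can be quantified: a k-segment pipeline completes n instructions in k + (n − 1) cycles, versus n·k cycles without overlap. A small sketch:

```java
public class PipelineThroughput {
    // Cycles for n instructions on a k-segment pipeline: the first instruction
    // takes k cycles, then one instruction completes every following cycle.
    public static int pipelined(int k, int n) { return k + (n - 1); }

    // Without pipelining, every instruction takes all k cycles by itself.
    public static int nonPipelined(int k, int n) { return k * n; }

    public static void main(String[] args) {
        int k = 4, n = 100;
        System.out.println(pipelined(k, n));     // 103 cycles
        System.out.println(nonPipelined(k, n));  // 400 cycles
        System.out.println((double) nonPipelined(k, n) / pipelined(k, n)); // speedup ≈ 3.88
    }
}
```

As n grows, the speedup approaches k, which is why equal-duration segments matter: the slowest segment sets the cycle time for the whole pipe.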


The interconnection structure must support the following types of transfer:

  • Memory to processor: the CPU reads an instruction or data from memory.
  • Processor to memory: the CPU writes data to memory. 
  • I/O to processor: the CPU reads data from an I/O device via the I/O module. 
  • Processor to I/O: the CPU sends data to an I/O device.
  • I/O to or from memory: an I/O module is allowed to exchange data directly with memory, without going through the processor, using DMA (Direct Memory Access). 



Bus Interconnection 
  • A bus is a communication pathway, consisting of lines, that connects two or more devices. 
  • A bus is a shared transmission medium allowing multiple devices to connect to it.
  • However, only one device at a time can successfully transmit. 
  • Several lines of the bus can be used to transmit binary digits simultaneously. 
  • For example : 
    • An 8-bit unit of data can be transmitted over 8 bus lines. 
  • A bus that connects the major computer components (CPU, memory, I/O) is called the system bus. 
  • A system bus may consist of 50 to hundreds of separate lines. Each line has a particular function. 
  • The interconnection structures are based on the use of one or more system buses. 

Bus lines can be classified into 3 functional groups. 
  1. Data Lines
  2. Address Lines 
  3. Control Lines
Data Lines
  • They provide a pathway for moving data between system modules. 
  • These lines are collectively called the data bus.
  • The number of lines (typically 32 to hundreds) is referred to as the width of the bus.
  • The width helps determine overall system performance. 
e.g. If the data bus is 8 bits wide and each instruction is 16 bits long, then the processor must access the memory module twice during each instruction cycle. 

Address Line

  • They are used to designate the source or destination of the data on the data bus. 
  • For example : 
    • The CPU puts the address of the desired word to be read from or written to memory on the address lines. 
  • The width of the address bus determines the maximum addressable memory.
  • The address lines are also used to address I/O ports. 

Control Lines

  • Control lines carry control signals that control the access to and use of the data and address lines, since these lines are shared by all components.
  • Control signals transmit command and timing information between system components. 
    • Timing signals indicate the validity of data and address information. 
    • Command signals specify the type of operation to be performed.

Main operation of Bus 
If a module wishes to send data to another module it must do two things.

  • Obtain the use of the bus
  • Transfer data via the bus
If a module wishes to request data from another module it must do the following: 
  • Obtain the use of the bus
  • Transfer a request to the other module over appropriate control and address lines
  • Wait for the other module to send the data 

Define arithmetic pipelining. Explain pipelining hazards with examples. 




Pipelining is a technique where multiple instructions are overlapped during execution. Pipeline is divided into stages and these stages are connected with one another to form a pipe like structure. Instructions enter from one end and exit from another end. There are two types of pipeline. 
  1. Arithmetic Pipeline
  2. Instruction Pipeline 
Arithmetic pipelines are usually found in most high-speed computers. They are used for floating-point operations, multiplication of fixed-point numbers, etc.



Pipelining Hazards 
Pipeline hazards are situations that prevent the next instruction in the instruction stream from executing during its designated clock cycle. In other words, any condition that causes a stall in pipeline operation can be called a hazard. There are mainly three types of hazards: 

  1. Data Hazards
  2. Control Hazards or instruction Hazards
  3. Structural Hazards.

Example:  
A=3+A
B=A*4

For the above sequence, the second instruction needs the value of A computed by the first instruction. Thus the second instruction is said to depend on the first; in this situation a data hazard arises. A data hazard is any condition in which either the source or the destination operands of an instruction are not available at the time expected in the pipeline.

Define memory hierarchy. Explain cache memory mapping functions with example. 



Memory hierarchy is a concept that is necessary for the CPU to be able to manipulate data. In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. The following figure shows the hierarchy of memory in computer. 
Cache is used by the CPU for memory locations that are accessed over and over again. Instead of pulling the data every time from main memory, it is kept in cache for fast access. Cache is also a small memory, though larger than the internal registers.
Cache memory is used to reduce the average time to access data from main memory. The cache is a smaller and faster memory which stores copies of the data from frequently used main memory locations.
There are different levels of cache memory: Level 1, Level 2, Level 3, etc. The number of cache levels depends on the architecture of the computer. 

Cache Mapping
There are three different types of mapping used for the purpose of cache memory which are as follows:

  1. Direct mapping
  2. Associative mapping and 
  3. Set-Associative mapping. 

Direct Mapping
  • The simplest way to determine the cache location in which to store a memory block is the direct-mapping technique.
  • Here, block j of main memory maps onto block j modulo 128 of the cache. Thus main memory blocks 0, 128, 256, … are stored at cache block 0; blocks 1, 129, 257, … are stored at cache block 1; and so on.
  • The placement of a block in the cache is determined from the memory address. The memory address is divided into 3 fields; the lower 4 bits select one of the 16 words in a block.
  • When a new block enters the cache, the 7-bit cache block field determines the cache position in which this block must be stored.
  • The higher-order 5 bits of the memory address of the block are stored in 5 tag bits associated with its location in the cache. They identify which of the 32 blocks that map onto this cache position is currently resident in the cache.
  • It is easy to implement, but not flexible.
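The field arithmetic above (4-bit word, 7-bit block, 5-bit tag in a 16-bit address) can be sketched directly:

```java
public class DirectMapping {
    // 16-bit memory address split as: 5-bit tag | 7-bit cache block | 4-bit word.
    public static int word(int addr)  { return addr & 0xF; }          // low 4 bits
    public static int block(int addr) { return (addr >> 4) & 0x7F; }  // next 7 bits
    public static int tag(int addr)   { return (addr >> 11) & 0x1F; } // high 5 bits

    // Which cache block does main-memory block j map onto? (j modulo 128)
    public static int cacheBlock(int j) { return j % 128; }

    public static void main(String[] args) {
        System.out.println(cacheBlock(0) + " " + cacheBlock(128) + " " + cacheBlock(256)); // 0 0 0
        System.out.println(cacheBlock(129));                                               // 1
        int addr = 0b10110_0000001_0011; // tag 22, block 1, word 3
        System.out.println(tag(addr) + " " + block(addr) + " " + word(addr));              // 22 1 3
    }
}
```

On an access, the hardware indexes the cache with the block field and compares the stored tag against the tag field; a mismatch is a miss.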



Explain the Organization of micro-programmed control design. 


Microprogramming is the process of writing microcode for a microprocessor. Microcode is low-level code that defines how a microprocessor should function when it executes machine-language instructions. Typically, one machine-language instruction translates into several microcode instructions. On some computers, the microcode is stored in ROM and cannot be modified.

Micro programmed Control Unit:

  • A control unit whose binary control values are stored as words in memory is called a microprogrammed control unit. Each word in the control memory contains a microinstruction that specifies one or more micro-operations for the system. A sequence of microinstructions constitutes a microprogram.
  • Microprogrammed implementation is a software approach, in contrast to the hardwired approach.
  • It deals with various units of software at the micro level, i.e. micro-operations, microinstructions, microprograms, etc.
  • The key elements used to implement a control unit using the microprogrammed approach are shown in the figure below:
Control Address Register (CAR)

It contains the address of the next microinstruction to be read. It is similar to the program counter (PC), which stores the address of the next instruction.



Control Memory
The set of microinstructions is stored in the control memory (CM), also called the control store.

Control Buffer Register(CBR)
When a microinstruction is read from the control memory, it is transferred to the control buffer register (CBR), which is similar to the instruction register (IR) that stores the opcode of the instruction read from memory.

Sequencing
The sequencing logic loads the control address register with the address of the next microinstruction to be read and issues a read command to the control memory.
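The interplay of CAR, control memory, CBR and sequencing can be sketched as a simple fetch loop (the micro-operations listed are hypothetical placeholders, and real sequencing logic can also branch rather than just increment):

```java
public class MicroSequencer {
    // A tiny sketch of the fetch loop: CAR addresses the control memory, the word
    // read lands in CBR, and the sequencing logic computes the next address.
    public static String trace(String[] controlMemory) {
        StringBuilder log = new StringBuilder();
        int car = 0;                         // Control Address Register
        while (car < controlMemory.length) {
            String cbr = controlMemory[car]; // Control Buffer Register holds the microinstruction
            log.append(cbr).append(';');     // "execute" the microinstruction by logging it
            car = car + 1;                   // sequencing: here simply the next address, like a PC
        }
        return log.toString();
    }

    public static void main(String[] args) {
        // Hypothetical micro-operations implementing one machine instruction's fetch phase.
        String[] cm = {"MAR<-PC", "MBR<-M[MAR]", "IR<-MBR", "PC<-PC+1"};
        System.out.println(trace(cm)); // MAR<-PC;MBR<-M[MAR];IR<-MBR;PC<-PC+1;
    }
}
```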


