
Parallel processing systems are designed to speed up the execution of programs by dividing a program into multiple fragments and processing these fragments simultaneously. Such systems are also known as tightly coupled systems.


Parallel computing is an evolution of serial computing in which a job is broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions. Instructions from each part execute simultaneously on different CPUs.


Parallel systems are more difficult to program than computers with a single processor, because the architecture of parallel computers varies and the processes running on multiple CPUs must be coordinated and synchronized.

Pipelining is the process of accumulating instructions from the processor through a pipeline. It allows storing and executing instructions in an orderly process.

Pipelining is a technique where multiple instructions are overlapped during execution. The pipeline is divided into stages, and these stages are connected with one another to form a pipe-like structure. Instructions enter from one end and exit from the other end.

Pipelining increases the overall instruction throughput. There are two types of pipeline: 


  1. Arithmetic Pipeline 
  2. Instruction Pipeline 
Arithmetic Pipeline 
Arithmetic pipeline units are usually found in most high-speed computers. They are used for floating-point operations, multiplication of fixed-point numbers, and similar computations in scientific problems. 
For example, the inputs to a floating-point adder pipeline are: 
X = A*2^a
Y = B*2^b

Here, A and B are the mantissas (significant digits) of the floating-point numbers, while a and b are their exponents.
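As a rough illustration (not from the original notes), the following Java sketch walks two example values through the classic four segments of a floating-point adder pipeline: compare exponents, align mantissas, add mantissas, and normalize the result. The class name and the numbers are made up.

```java
// Hypothetical sketch of the four segments of a floating-point adder pipeline:
// X = A * 2^a and Y = B * 2^b are added segment by segment.
public class FloatAdderPipeline {
    public static void main(String[] args) {
        double A = 0.9504, B = 0.8200;   // mantissas
        int a = 3, b = 2;                // exponents

        // Segment 1: compare exponents (keep the larger one)
        int diff = a - b;
        int exp = Math.max(a, b);

        // Segment 2: align mantissas by shifting the number with the smaller exponent
        double alignedA = (diff >= 0) ? A : A / Math.pow(2, -diff);
        double alignedB = (diff >= 0) ? B / Math.pow(2, diff) : B;

        // Segment 3: add the aligned mantissas
        double sum = alignedA + alignedB;

        // Segment 4: normalize the result so the mantissa is less than 1
        while (sum >= 1.0) { sum /= 2; exp++; }

        System.out.println("Z = " + sum + " * 2^" + exp);
        // 0.9504*2^3 + 0.8200*2^2 = (0.9504 + 0.4100)*2^3 = 1.3604*2^3 = 0.6802*2^4
    }
}
```

In a real pipeline each segment is a separate hardware stage, so while one pair of operands is being normalized the next pair is already being added.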


Instruction Pipeline
In an instruction pipeline, a stream of instructions is executed by overlapping the fetch, decode, and execute phases of the instruction cycle. This technique is used to increase the throughput of the computer system.

An instruction pipeline reads instructions from memory while previous instructions are being executed in other segments of the pipeline. Thus we can execute multiple instructions simultaneously. The pipeline is more efficient if the instruction cycle is divided into segments of equal duration.
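To make the overlap concrete, here is a small illustrative Java sketch (assuming a simple three-stage fetch/decode/execute pipeline with no stalls; the class and stage names are made up) that prints which instruction occupies each stage in every clock cycle.

```java
// Illustrative 3-stage (Fetch/Decode/Execute) pipeline timing chart.
// Instruction i is fetched in cycle i, decoded in cycle i + 1 and executed
// in cycle i + 2, so three instructions overlap in the steady state.
public class InstructionPipeline {
    public static void main(String[] args) {
        int instructions = 5;                 // I1 .. I5
        String[] stages = {"Fetch", "Decode", "Execute"};
        int cycles = instructions + stages.length - 1;

        for (int t = 0; t < cycles; t++) {
            StringBuilder row = new StringBuilder("Cycle " + (t + 1) + ": ");
            for (int s = 0; s < stages.length; s++) {
                int i = t - s;                // instruction index in stage s at time t
                if (i >= 0 && i < instructions) {
                    row.append(stages[s]).append("=I").append(i + 1).append("  ");
                }
            }
            System.out.println(row);
        }
    }
}
```

Five instructions finish in 7 cycles instead of the 15 cycles a non-pipelined three-phase machine would need.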


The interconnection structure must support the following types of transfer between modules:

  • Memory to processor : the CPU reads an instruction or a unit of data from memory. 
  • Processor to memory : the CPU writes a unit of data to memory. 
  • I/O to processor : the CPU reads data from an I/O device via the I/O module. 
  • Processor to I/O : the CPU sends data to an I/O device via the I/O module. 
  • I/O to or from memory : an I/O module is allowed to exchange data directly with memory, without going through the processor, using DMA (Direct Memory Access). 



Bus Interconnection 
  • A bus is a communication pathway, consisting of multiple lines, that connects two or more devices. 
  • A bus is a shared transmission medium, allowing multiple devices to connect to it. 
  • However, only one device at a time can successfully transmit. 
  • Several lines of the bus can be used to transmit binary digits simultaneously. 
  • For example : 
    • An 8 bit unit of data can be transmitted over 8-bus lines. 
  • A bus that connects the major computer components (CPU, memory, I/O) is called the System Bus. 
  • A system bus may consist of fifty to hundreds of separate lines. Each line has a particular function. 
  • The interconnection structures are based on the use of one or more system buses. 

Bus lines can be classified into 3 functional groups. 
  1. Data Lines
  2. Address Lines 
  3. Control Lines
Data Lines
  • Data lines provide a pathway for moving data between system modules. 
  • These lines are collectively called the Data Bus. 
  • The number of lines (typically 32 to hundreds) is referred to as the width of the bus. 
  • The width is a key factor in determining overall system performance. 
e.g. If the data bus is 8 bits wide and each instruction is 16 bits long, then the processor must access the memory module twice during each instruction cycle (see the sketch below). 
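As a quick check of that arithmetic, the following illustrative sketch (the class and method names are made up) computes how many memory accesses are needed for a given instruction length and data-bus width.

```java
// Memory accesses per instruction = ceiling(instruction bits / data-bus width).
public class BusWidth {
    static int accessesPerInstruction(int instructionBits, int busWidthBits) {
        return (instructionBits + busWidthBits - 1) / busWidthBits; // integer ceiling
    }

    public static void main(String[] args) {
        System.out.println(accessesPerInstruction(16, 8));   // 2 accesses, as in the example above
        System.out.println(accessesPerInstruction(32, 16));  // 2 accesses
        System.out.println(accessesPerInstruction(32, 32));  // 1 access
    }
}
```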

Address Lines

  • Address lines are used to designate the source or destination of the data on the data bus. 
  • For example : 
    • The CPU puts the address of the desired word to be read from, or written to, memory on the address lines. 
  • The width of the address bus determines the maximum addressable memory (see the sketch after this list). 
  • The address lines are also used to address I/O ports. 
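A minimal sketch of that relationship, assuming byte-addressable memory (the class name and the chosen widths are just examples): with n address lines, 2^n distinct locations can be addressed.

```java
// Maximum addressable memory with n address lines (byte-addressable): 2^n bytes.
public class AddressBus {
    public static void main(String[] args) {
        int[] widths = {16, 20, 32};
        for (int n : widths) {
            long locations = 1L << n;                       // 2^n addressable locations
            System.out.println(n + " address lines -> " + locations
                    + " bytes (" + (locations >> 10) + " KiB)");
        }
        // 16 lines -> 64 KiB, 20 lines -> 1 MiB, 32 lines -> 4 GiB
    }
}
```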

Control Lines

  • Control lines carry control signals that govern access to and use of the data and address lines, since these lines are shared by all components. 
  • Control signals transmit command and timing information between system components. 
    • Timing signals indicate the validity of data and address information. 
    • Command signals specify the type of operation to be performed. 

Main operation of Bus 
If a module wishes to send data to another module, it must do two things: 

  • Obtain the use of the bus
  • Transfer data via the bus
If a module wishes to request data from another module, it must do the following: 
  • Obtain the use of the bus
  • Transfer a request to the other module over appropriate control and address lines
  • Wait for the other module to send the data 

In a shared memory multiprocessor with a separate cache memory for each processor, it is possible to have many copies of any one instruction operand: one copy in main memory and one in each cache memory. When one copy of an operand is changed, the other copies of the operand must also be changed.
For example : 

Consider that both processors hold a cached copy of a particular memory block from a previous read. Suppose processor 1 updates or changes its cached block and then updates the memory block using a write policy (write-through or write-back). Processor 2, however, does not get any notification or signal of the update, so its cached copy becomes stale. This data inconsistency is called the cache coherence problem, and it occurs in multiprocessor systems. 
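A minimal sketch of that scenario (illustrative only; a plain array stands in for main memory and local variables stand in for the two caches):

```java
// Two processors each cache the same memory word; P1's write-through update
// leaves P2 holding a stale value until some coherence action invalidates it.
public class CoherenceDemo {
    public static void main(String[] args) {
        int[] mainMemory = {7};        // shared memory block holding the value 7
        int cacheP1 = mainMemory[0];   // P1 reads and caches the value
        int cacheP2 = mainMemory[0];   // P2 reads and caches the same value

        // P1 writes a new value and updates memory (write-through)
        cacheP1 = 42;
        mainMemory[0] = cacheP1;

        // P2 still sees its old cached copy: data inconsistency
        System.out.println("Memory = " + mainMemory[0] + ", P1 cache = " + cacheP1
                + ", P2 cache = " + cacheP2);   // Memory = 42, P1 cache = 42, P2 cache = 7
    }
}
```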



The cache coherence problem arises under the following conditions: 
  • Inconsistency due to sharing of writable data 
  • Inconsistency due to process migration 
  • Inconsistency due to I/O activity 

There are three distinct levels of cache coherence:
  1. Every write operation appears to occur instantaneously.
  2. All processes see exactly the same sequence of changes of values for each separate operand.
  3. Different processes may see an operand assume different sequences of values. (This is considered non-coherent behavior.)

Types of cache coherence solutions
To solve the cache coherence problem, there are two types of solutions:
  1. Software Solution
  2. Hardware Solution
Software Solution
- The problem is managed entirely by the compiler and the OS. 
- No additional hardware circuitry is required. 
- In this approach, the compiler marks the data items that are likely to be changed, and the OS prevents those items from being cached.  

Hardware Solution
- Hardware solutions provide dynamic recognition, at run time, of potential inconsistency conditions. Because the problem is dealt with only when it actually arises, caches are used more effectively, leading to better performance than software approaches. 
- Hardware schemes can be divided into two categories 
  1. Directory Protocol 
  2. Snoopy protocols

Snooping
- Used with low-end MPs
- Few processors 
- Centralized memory
- Bus-based
- Distributed implementation : responsibility for maintaining coherence lies with each cache controller (a minimal write-invalidate sketch follows the Directory list below)

Directory
- Used with higher-end MPs
- More processors 
- Distributed memory
- Multi-path interconnect
- Centralized for each address : responsibility for maintaining coherence lies with the directory entry for each address   
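To make the snooping idea above concrete, here is a minimal write-invalidate sketch (illustrative only; the class, the maps used as caches, and the method names are all made up, and real protocols track block states rather than whole values):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal write-invalidate snooping sketch: every cache watches ("snoops")
// the shared bus; a write by one cache broadcasts an invalidation that makes
// the other caches drop their stale copies of the same address.
public class SnoopyBus {
    static int[] memory = new int[16];
    static List<Map<Integer, Integer>> caches = new ArrayList<>();

    static int read(int cpu, int addr) {
        return caches.get(cpu).computeIfAbsent(addr, a -> memory[a]); // fill on miss
    }

    static void write(int cpu, int addr, int value) {
        for (int other = 0; other < caches.size(); other++) {
            if (other != cpu) caches.get(other).remove(addr);         // snooped invalidate
        }
        caches.get(cpu).put(addr, value);                             // update own copy
        memory[addr] = value;                                         // write-through
    }

    public static void main(String[] args) {
        caches.add(new HashMap<>());
        caches.add(new HashMap<>());
        memory[3] = 7;
        System.out.println("P0 reads 3 -> " + read(0, 3));  // 7
        System.out.println("P1 reads 3 -> " + read(1, 3));  // 7
        write(0, 3, 42);                                     // P0 writes, P1's copy is invalidated
        System.out.println("P1 reads 3 -> " + read(1, 3));  // 42 (re-fetched from memory)
    }
}
```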







Define arithmetic pipelining. Explain pipelining hazards with examples. 




Pipelining is a technique where multiple instructions are overlapped during execution. The pipeline is divided into stages, and these stages are connected with one another to form a pipe-like structure. Instructions enter from one end and exit from the other end. There are two types of pipeline. 
  1. Arithmetic Pipeline
  2. Instruction Pipeline 
Arithmetic pipelines are usually found in most high-speed computers. They are used for floating-point operations, multiplication of fixed-point numbers, etc.



Pipelining Hazards 
Pipeline hazards are situations that prevent the next instruction in the instruction stream from executing during its designated clock cycle. In other words, any condition that causes a stall in the pipeline operation can be called a hazard. There are mainly three types of hazards. They are : 

  1. Data Hazards
  2. Control Hazards or Instruction Hazards
  3. Structural Hazards.

Example:  
A=3+A
B=A*4

For the above sequence, the second instruction needs the value of 'A' computed by the first instruction. Thus the second instruction is said to depend on the first. In this situation a data hazard arises. A data hazard is any condition in which either the source or the destination operands of an instruction are not available at the time expected in the pipeline.
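As a rough illustration (not from the original notes), the sketch below shows why the ordering matters: if the second statement reads A before the first statement's result has been written back, it computes with a stale value, which is exactly a read-after-write (RAW) data hazard. The class name and variable values are made up.

```java
// Read-after-write (RAW) data hazard illustrated with the example above:
//   A = 3 + A;  B = A * 4;
// If the second instruction reads A before the first has written it back,
// it uses the stale value and produces the wrong result.
public class DataHazard {
    public static void main(String[] args) {
        int A = 5;

        // Correct, in-order result: the second instruction sees the updated A.
        int newA = 3 + A;          // A = 3 + A  -> 8
        int correctB = newA * 4;   // B = A * 4  -> 32

        // Hazard: B reads the old A because the pipeline has not written newA back yet.
        int hazardB = A * 4;       // stale A    -> 20

        System.out.println("Expected B = " + correctB + ", B with RAW hazard = " + hazardB);
        // A real pipeline avoids this by stalling the second instruction or by
        // forwarding (bypassing) the adder's result directly to the multiplier.
    }
}
```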

Define memory hierarchy. Explain cache memory mapping functions with example. 



Memory hierarchy is a concept that is necessary for the CPU to be able to manipulate data. In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. The hierarchy typically runs from CPU registers and cache memory at the top, through main memory, down to secondary storage at the bottom. 
Cache is used by the CPU for memory locations that are accessed over and over again. Instead of pulling the data from main memory every time, it is placed in the cache for fast access. The cache is smaller than main memory, but larger than the internal registers.
Cache memory is used to reduce the average time to access data from the Main memory. The cache is a smaller and faster memory which stores copies of the data from frequently used main memory locations.
There are different levels of cache memory: Level 1, Level 2, Level 3, etc. The number of cache levels depends on the architecture of the computer. 

Cache Mapping
There are three different types of mapping used for the purpose of cache memory which are as follows:

  1. Direct mapping
  2. Associative mapping and 
  3. Set-Associative mapping. 

Direct Mapping
Direct cache mapping

  • The simplest way to determine the cache location in which to store a memory block is the direct mapping technique.
  • In this technique, block J of main memory maps onto block J modulo 128 of the cache. Thus main memory blocks 0, 128, 256, … are stored at cache block 0; blocks 1, 129, 257, … are stored at cache block 1; and so on.
  • Placement of a block in the cache is determined from the memory address. The memory address is divided into 3 fields; the lower 4 bits select one of the 16 words in a block.
  • When a new block enters the cache, the 7-bit cache block field determines the cache position in which this block must be stored.
  • The higher-order 5 bits of the memory address are stored in the 5 tag bits associated with the block's location in the cache. They identify which of the 32 blocks that map onto this cache position is currently resident in the cache.
  • Direct mapping is easy to implement, but not flexible (see the address-splitting sketch after this list).
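To make the field splitting concrete, here is a small illustrative sketch (assuming the 16-bit address with 5-bit tag, 7-bit block, and 4-bit word fields described above; the class name and the example address are made up):

```java
// Splits a 16-bit main-memory address into the 5-bit tag, 7-bit cache-block
// and 4-bit word fields used by the direct-mapping scheme described above.
public class DirectMapping {
    public static void main(String[] args) {
        int address = 0b10110_0000011_1010;        // tag = 22, block = 3, word = 10

        int word  =  address        & 0xF;         // lower 4 bits : word within the block
        int block = (address >> 4)  & 0x7F;        // next 7 bits  : cache block (J mod 128)
        int tag   = (address >> 11) & 0x1F;        // upper 5 bits : tag stored with the block

        System.out.println("tag = " + tag + ", block = " + block + ", word = " + word);
        // Main-memory block number = tag * 128 + block = 22 * 128 + 3 = 2819,
        // and 2819 mod 128 = 3, so the block indeed maps to cache block 3.
    }
}
```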


