Cache Coherence
In a shared memory multiprocessor with a separate cache memory for each processor, it is possible to have many copies of any one instruction operand: one copy in the main memory and one in each cache memory. When one copy of the operand is changed, the other copies of the operand must be changed as well.
For example, consider two processors that each hold a cached copy of a particular memory block from a previous read. Suppose processor 1 updates its cached copy and then propagates the change to the memory block using some write policy (such as write-through or write-back). Processor 2, however, receives no notification or signal of the update, so its cached copy is now inconsistent with memory, as sketched below. This data inconsistency is the cache coherence problem, and it arises in multiprocessor systems.
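A minimal Python sketch of this scenario follows; the class and block names (`PrivateCache`, block `"X"`) are made up for illustration, and the write policy is simplified to write-through.

```python
# Sketch (hypothetical names) of the stale-copy scenario described above:
# both processors cache block X, processor 1 writes through to memory,
# but processor 2's private cache is never notified and keeps the old value.

class PrivateCache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}                      # address -> cached value

    def read(self, addr):
        if addr not in self.lines:           # miss: fetch from main memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]              # hit: served from the (possibly stale) copy

    def write_through(self, addr, value):
        self.lines[addr] = value             # update own copy
        self.memory[addr] = value            # and main memory, but NOT the other caches


memory = {"X": 0}
cache_p1 = PrivateCache(memory)
cache_p2 = PrivateCache(memory)

cache_p1.read("X")                           # both processors cache block X
cache_p2.read("X")

cache_p1.write_through("X", 42)              # processor 1 updates X

print(memory["X"])                           # 42 -> main memory is up to date
print(cache_p2.read("X"))                    # 0  -> processor 2 still sees the stale copy
```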
The cache coherence problem arises under the following conditions:
* Inconsistency due to sharing of writable data
* Inconsistency due to process migration
* Inconsistency due to I/O activity
There are three distinct levels of cache coherence (illustrated after this list):
- Every write operation appears to occur instantaneously.
- All processes see exactly the same sequence of changes of values for each separate operand.
- Different processes may see an operand assume different sequences of values. (This is considered non-coherent behavior.)
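As a rough illustration of the second and third levels, the small Python check below compares the sequences of values that different processors observe for one operand; the helper name and data are made up, and it simplifies by assuming every processor records every value it sees. Coherent behavior means the sequences agree.

```python
# Illustrative check (hypothetical helper and data) for the levels above:
# coherence requires that every processor observes the same sequence of
# values for each memory location.

def sequences_agree(observations):
    """observations: {processor: [values seen for one operand, in order]}"""
    sequences = list(observations.values())
    return all(seq == sequences[0] for seq in sequences)

coherent     = {"P1": [0, 1, 2], "P2": [0, 1, 2]}   # same history everywhere
non_coherent = {"P1": [0, 1, 2], "P2": [0, 2, 1]}   # P2 sees the writes in a different order

print(sequences_agree(coherent))      # True
print(sequences_agree(non_coherent))  # False -> the non-coherent behavior in the last bullet
```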
Types of cache coherence solutions
To avoid the cache coherence problem, there are two types of solutions:
- Software Solution
- Hardware Solution
Software Solution
- The problem is handled entirely by the compiler and the operating system.
- No additional circuitry is required.
- In this approach, the compiler marks the data items that are likely to be shared and modified, and the OS prevents those items from being cached (see the sketch after this list).
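The sketch below is a rough illustration of this software approach, with made-up names: data items flagged as shared and writable are never cached, so every access to them goes to main memory and no stale copy can exist.

```python
# Rough sketch of the software approach (all names are made up): the
# "compiler" flags shared writable items as non-cacheable, and the cache
# simply bypasses them, always going to main memory instead.

NON_CACHEABLE = {"shared_counter"}           # items flagged by the compiler/OS

class SoftwareManagedCache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}

    def read(self, name):
        if name in NON_CACHEABLE:
            return self.memory[name]         # bypass the cache: no stale copy possible
        if name not in self.lines:
            self.lines[name] = self.memory[name]
        return self.lines[name]

memory = {"shared_counter": 0, "private_data": 7}
cache = SoftwareManagedCache(memory)

cache.read("private_data")                   # cached normally
memory["shared_counter"] = 5                 # updated elsewhere in the system
print(cache.read("shared_counter"))          # 5 -> always read from memory, never stale
```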
Hardware Solution
- Hardware solutions provide dynamic recognition, at run time, of potential inconsistency conditions. Because the problem is dealt with only when it actually arises, caches are used more effectively, leading to better performance than software approaches.
- Hardware schemes can be divided into two categories:
  - Directory protocols
  - Snoopy protocols
Snooping
- Used with low-end multiprocessors (MPs)
- Few processors
- Centralized memory
- Bus-based
- Distributed responsibility: maintaining coherence lies with each individual cache (see the sketch below)
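The following is a simplified write-invalidate sketch of snooping, with illustrative names rather than a full MSI/MESI implementation: all caches attach to one shared bus, and on a write the writing cache broadcasts an invalidation that every other cache snoops and applies.

```python
# Simplified write-invalidate snooping sketch (illustrative names only):
# every cache is attached to one shared bus, and on a write each cache
# broadcasts an invalidation that the other caches snoop.

class Bus:
    def __init__(self):
        self.caches = []

    def broadcast_invalidate(self, addr, sender):
        for cache in self.caches:
            if cache is not sender:
                cache.lines.pop(addr, None)  # other caches drop their stale copies

class SnoopyCache:
    def __init__(self, bus, memory):
        self.bus, self.memory, self.lines = bus, memory, {}
        bus.caches.append(self)

    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.bus.broadcast_invalidate(addr, self)   # snooped by the other caches
        self.lines[addr] = value
        self.memory[addr] = value                   # write-through for simplicity

memory = {"X": 0}
bus = Bus()
c1, c2 = SnoopyCache(bus, memory), SnoopyCache(bus, memory)

c1.read("X"); c2.read("X")
c1.write("X", 42)
print(c2.read("X"))                                 # 42 -> miss after invalidation, fresh value
```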
Directory
- Used with higher-end MPs
- More processors
- Distributed memory
- Multi-path interconnect
- Centralized per address: responsibility for maintaining coherence lies with the directory entry for each address (see the sketch below)
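Below is a minimal directory-based sketch, again with illustrative names: a directory entry per address records which caches hold a copy, so on a write the directory sends invalidations only to those sharers instead of broadcasting on a bus.

```python
# Minimal directory-based sketch (illustrative names only): the directory
# tracks, per address, which caches hold a copy, and on a write it sends
# targeted invalidations to exactly those sharers -- no bus broadcast.

class Directory:
    def __init__(self, memory):
        self.memory = memory
        self.sharers = {}                    # address -> set of caches with a copy

    def read(self, addr, cache):
        self.sharers.setdefault(addr, set()).add(cache)
        return self.memory[addr]

    def write(self, addr, value, writer):
        for cache in self.sharers.get(addr, set()) - {writer}:
            cache.lines.pop(addr, None)      # targeted invalidation of each sharer
        self.sharers[addr] = {writer}
        self.memory[addr] = value

class DirectoryCache:
    def __init__(self, directory):
        self.directory, self.lines = directory, {}

    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.directory.read(addr, self)
        return self.lines[addr]

    def write(self, addr, value):
        self.directory.write(addr, value, self)
        self.lines[addr] = value

memory = {"X": 0}
directory = Directory(memory)
c1, c2 = DirectoryCache(directory), DirectoryCache(directory)

c1.read("X"); c2.read("X")
c1.write("X", 42)
print(c2.read("X"))                          # 42 -> c2's copy was invalidated by the directory
```

Because invalidations go only to the recorded sharers, this approach scales to more processors and distributed memory than bus broadcasting, which matches the contrast drawn in the two lists above.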