Monday, June 5, 2023

CSA: UNIT 4: Input/Output Organization

#CSA

UNIT 4

UNIT-IV: Input/Output Organization: Peripheral Devices, I/O interfaces, I/O-mapped I/O and memory-mapped I/O, interrupts and interrupt handling mechanisms, vectored interrupts, synchronous vs. asynchronous data transfer, Direct Memory Access.

Text Books:

1. Computer System Architecture, M. Morris Mano, PHI
2. Computer Organization, V. C. Hamacher, Z. G. Vranesic and S. G. Zaky, McGraw Hill

NOTES:

Methods of Data transfer:

1. Programmed I/O: In programmed I/O, the processor keeps scanning whether any device is ready for data transfer. If an I/O device is ready, the processor dedicates itself fully to transferring the data between the I/O device and memory.

2. Interrupt-driven I/O: In interrupt-driven I/O, whenever the device is ready for data transfer, it raises an interrupt to the processor.

The above two modes of data transfer are not efficient for transferring a large block of data. A DMA controller completes this task at a faster rate and is effective for transferring large data blocks. So,

3. Direct memory access (DMA): It is another mode of data transfer between the memory and I/O devices.

RISC and CISC Processors

RISC stands for Reduced Instruction Set Computer and

CISC stands for Complex Instruction Set Computer.

S.No. | RISC | CISC
1. | Simple instruction set | Complex instruction set
2. | Large number of registers | Smaller number of registers
3. | Larger program | Smaller program
4. | Simple processor circuitry (small number of transistors) | Complex processor circuitry (more transistors)
5. | More RAM usage | Little RAM usage
6. | Simple addressing modes | Variety of addressing modes
7. | Fixed-length instructions | Variable-length instructions
8. | Fixed number of clock cycles for executing one instruction | Variable number of clock cycles per instruction



QUESTIONS : COMPILED FROM UNIVERSITY Q P: 

1. What is the Direct Memory Access technique? Explain the working of the DMA controller with a diagram.

Ans:

Direct Memory Access (DMA) transfers the block of data between the memory and peripheral devices of the system, without the participation of the processor. The unit that controls the activity of accessing memory directly is called a DMA controller.

  • DMA is an abbreviation of direct memory access.
  • DMA is a method of data transfer between main memory and peripheral devices.
  • The hardware unit that controls the DMA transfer is a DMA controller.
  • DMA controller transfers the data to and from memory without the participation of the processor.
  • The processor provides the DMA controller with the start address and the word count of the data block to be transferred to or from memory, and frees the bus for the DMA controller to transfer the block of data.
  • The DMA controller transfers the data block at a faster rate, as the data is accessed directly by the I/O devices and is not required to pass through the processor, which saves clock cycles.
  • The DMA controller transfers the block of data to and from memory in three modes: burst mode, cycle stealing mode, and transparent mode.
  • DMA can be configured in various ways: it can be a part of individual I/O devices, or all the peripherals attached to the system may share the same DMA controller.

Thus, the DMA controller is a convenient mode of data transfer. It is preferred over the programmed I/O and interrupt-driven I/O modes of data transfer.

  1. Whenever an I/O device wants to transfer data to or from memory, it sends a DMA request (DRQ) to the DMA controller. The DMA controller accepts this DRQ and asks the CPU to hold for a few clock cycles by sending it a Hold request (HLD).
  2. The CPU receives the Hold request (HLD) from the DMA controller, relinquishes the bus, and sends a Hold acknowledgement (HLDA) to the DMA controller.
  3. After receiving the Hold acknowledgement (HLDA), the DMA controller acknowledges to the I/O device (DACK) that the data transfer can be performed, takes charge of the system bus, and transfers the data to or from memory.
  4. When the data transfer is completed, the DMA controller raises an interrupt to notify the processor that the task of data transfer is finished; the processor can then take control over the bus again and resume processing where it left off.
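The four handshake steps above can be sketched as a toy simulation; all function and signal names here are illustrative, not a real controller API:

```python
# Toy simulation of the DMA handshake: DRQ -> HLD -> HLDA -> DACK -> transfer -> interrupt.

def dma_transfer(memory, data_block, start_address, log):
    log.append("I/O device asserts DRQ to the DMA controller")
    log.append("DMA controller asserts HLD to the CPU")
    log.append("CPU relinquishes the bus and returns HLDA")
    log.append("DMA controller asserts DACK to the I/O device")
    # The DMA controller now owns the bus and copies the block word by word,
    # with no CPU involvement.
    for i, word in enumerate(data_block):
        memory[start_address + i] = word
    log.append("DMA controller raises an interrupt; CPU resumes")
    return memory

memory = [0] * 16
log = []
dma_transfer(memory, [10, 20, 30], start_address=4, log=log)
print(memory[4:7])  # [10, 20, 30]
```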

2. Explain in detail about the structure of a Magnetic Disk system. Also mention how we can find its capacity.

A magnetic disk primarily consists of a rotating magnetic surface (called platter) and a mechanical arm that moves over it. Together, they form a “comb”. 

The mechanical arm is used to read from and write to the disk. The data on a magnetic disk is read and written using a magnetization process.

A magnetic disk is a storage device that can be assumed as the shape of a Gramophone record. This disk is coated on both sides with a thin film of Magnetic material. This magnetic material has the property that it can store either ‘1’ or ‘0' permanently. The magnetic material has square loop hysteresis (curve) which can remain in one out of two possible directions which correspond to binary ‘1’ or ‘0’.

Bits are stored on the magnetized surface in spots along concentric circles known as tracks. The tracks are commonly divided into sections known as sectors.

In this system, the smallest unit of data that can be transferred is a sector.


A magnetic disk contains several platters. Each platter is divided into circular shaped tracks. The length of the tracks near the centre is less than the length of the tracks farther from the centre. Each track is further divided into sectors, as shown in the figure.

Tracks of the same distance from centre form a cylinder. A read-write head is used to read data from a sector of the magnetic disk.

The speed of the disk is measured as two parts:

  • Transfer rate: This is the rate at which the data moves from disk to the computer.
  • Random access time: It is the sum of the seek time and rotational latency.

Seek time is the time taken by the arm to move to the required track. 

Rotational latency is defined as the time taken for the required sector of the track to rotate under the read-write head.







Magnetic Disk in Computer Architecture-

 

In computer architecture,

  • Magnetic disk is a storage device that is used to write, rewrite and access data.
  • It uses a magnetization process.

Architecture-

 

  • The disk consists of one or more platters.
  • Each platter consists of concentric circles called tracks.
  • These tracks are further divided into sectors, which are the smallest divisions in the disk.
  • A cylinder is formed by combining the tracks at a given radius of a disk pack.
  • There exists a mechanical arm carrying the read/write head.
  • It is used to read from and write to the disk.
  • The head has to reach a particular track and then wait for the rotation of the platter.
  • The rotation causes the required sector of the track to come under the head.
  • Each platter has 2 surfaces, top and bottom, and both surfaces are used to store data.
  • Each surface has its own read/write head.

    Storage Density-

     

    • All the tracks of a disk have the same storage capacity.
    • This is because each track has a different storage density.
    • Storage density decreases as we move from one track to another away from the center.

     

    Thus,

    • Innermost track has maximum storage density.
    • Outermost track has minimum storage density.

     

    Capacity Of Disk Pack-

     

    Capacity of a disk pack is calculated as-

     

    Capacity of a disk pack

    = Total number of surfaces x Number of tracks per surface x Number of sectors per track x Storage capacity of one sector
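Plugging illustrative numbers into the formula above (the figures below are examples, not taken from any question):

```python
# Capacity of a disk pack
# = surfaces x tracks per surface x sectors per track x bytes per sector
surfaces = 16             # e.g. 8 platters, 2 working surfaces each
tracks_per_surface = 512
sectors_per_track = 256
bytes_per_sector = 512

capacity_bytes = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(capacity_bytes)                        # 1073741824
print(capacity_bytes // (1024 ** 3), "GB")   # 1 GB
```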



3. What do you understand by computer peripherals? Explain with proper explanation any two computer peripherals.

Ans:

A peripheral device is a device that is connected to a computer system but is not part of the core computer system architecture. Generally, people use the term peripheral loosely to refer to any device external to the computer case.

Peripheral devices: It is generally classified into 3 basic categories which are given below:

   1. Input Devices: An input device converts incoming data and instructions into a pattern of electrical signals in binary code that is comprehensible to a digital computer. Example:

Keyboard, mouse, scanner, microphone etc.

   2. Output Devices: An output device generally reverses the input process, translating the digitized signals into a form intelligible to the user. Output devices are also used for sending data from one computer system to another. For some time punched-card and paper-tape readers were extensively used for input, but these have now been supplanted by more efficient devices. Example:

Monitors, headphones, printers etc. 
3. Storage Devices: Storage devices are used to store data in the system, which is required for performing any operation in the system. Storage is one of the most essential requirements of a computer system. Example: 
Hard disk, magnetic tape, Flash memory etc. 

4. What is address space? Explain isolated v/s memory-mapped I/O.

Ans:

In a microprocessor system, there are two methods of interfacing input/output (I/O) devices: 

memory-mapped I/O and I/O-mapped I/O (isolated I/O).

The key factor of differentiation between memory-mapped I/O and Isolated I/O is that in memory-mapped I/O, the same address space is used for both memory and I/O device. While in I/O-mapped I/O, separate address spaces are used for memory and I/O device.

Memory-mapped I/O:

Processor does not differentiate between memory and I/O. Treats I/O devices also like memory devices.

I/O addresses are as big as memory addresses. E.g., in 8085, I/O addresses will be 16-bit, as memory addresses are also 16-bit.

This allows us to increase the number of I/O devices. E.g., in 8085, we can access up to 2^16 = 65536 I/O device addresses.

We need only two control signals in the system: Read and Write. 

I/O mapped I/O:

Processor differentiates between I/O devices and memory. It isolates I/O devices.

I/O addresses are smaller than memory addresses. E.g., in 8085, I/O addresses will be 8-bit, though memory addresses are 16-bit.

This allows us to access a limited number of I/O devices. E.g., in 8085, we can access only up to 2^8 = 256 I/O ports.

We need four control signals: Memory Read, Memory Write, I/O Read, and I/O Write.
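The difference in the number of addressable devices follows directly from the address widths quoted above; as a quick check:

```python
# 8085 address widths, as stated in the notes
memory_mapped_bits = 16   # I/O shares the 16-bit memory address space
io_mapped_bits = 8        # separate 8-bit I/O address space

print(2 ** memory_mapped_bits)  # 65536 addressable I/O locations (memory-mapped)
print(2 ** io_mapped_bits)      # 256 addressable I/O ports (I/O-mapped)
```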

Comparison chart:

Basis for Comparison | Memory-mapped I/O | I/O-mapped I/O
Basic | I/O devices are treated as memory. | I/O devices are treated as I/O devices.
Allotted address size | 16-bit (A0 – A15) | 8-bit (A0 – A7)
Data transfer instructions | Same for memory and I/O devices. | Different for memory and I/O devices.
Cycles involved | Memory read and memory write | I/O read and I/O write
Interfacing of I/O ports | Large (around 64K) | Comparatively small (around 256)
Control signal | No separate control signal is needed for I/O devices. | Special control signals are used for I/O devices.
Efficiency | Less | Comparatively more
Decoder hardware | More decoder hardware required. | Less decoder hardware required.
IO/M’ | During memory read or memory write operations, IO/M’ is kept low. | During I/O read and I/O write operations, IO/M’ is kept high.
Data movement | Between registers and ports. | Between accumulator and ports.
Logical approach | Simple | Complex
Usability | In small systems where the memory requirement is less. | In systems that need large memory space.
Speed of operation | Slow | Comparatively fast
Example of instruction | LDA ****H, STA ****H, MOV A, M | IN ****H, OUT ****H

5. Define priority interrupt. Explain daisy chaining priority interrupt with a block diagram 

OR 

How Daisy Chaining priority interrupt works?

What is priority interrupt in computer architecture? 

Ans:

It is a system responsible for selecting the priority at which devices generating interrupt signals simultaneously should be serviced by the CPU. High-speed transfer devices are generally given high priority, and slow devices have low priority.

When I/O devices are ready for I/O transfer, they generate an interrupt request signal to the computer. 

The CPU receives this signal, suspends the current instructions it is executing, and then moves forward to service that transfer request. 

But what if multiple devices generate interrupts simultaneously? In that case, we need a way to decide which interrupt is to be serviced first. 

In other words, we have to set a priority among all the devices for systemic interrupt servicing. The concept of defining the priority among devices so as to know which one is to be serviced first in case of simultaneous requests is called a priority interrupt system. This could be done with either software or hardware methods.

SOFTWARE METHOD – POLLING

In polling, the processor checks each device in a fixed sequence to find which one raised the interrupt; the device checked first effectively has the highest priority. The major disadvantage of this method is that it is quite slow. To overcome this, we can use a hardware solution, one of which involves connecting the devices in series. This is called the Daisy-chaining method.

HARDWARE METHOD – DAISY CHAINING: 

Daisy-Chaining Priority:

The Daisy-Chaining method of establishing priority among interrupt sources uses hardware, i.e., it is the hardware means of establishing priority.


In this method, all the devices, whether they are interrupt sources or not, are connected in a serial manner: the device with the highest priority is placed in the first position, followed by lower-priority devices. All devices share a common interrupt-request line, and the interrupt-acknowledge line is daisy-chained through the modules.

The figure below shows this method of connection with three devices and the CPU.

It works as follows:

When any device raises an interrupt, the interrupt-request line becomes active. When the processor senses it, it sends out an interrupt acknowledge, which is first received by device 1. The processor checks whether device 1 has a pending interrupt. If device 1 does not need service, the acknowledge signal is passed to device 2 by placing 1 on the PO (Priority Out) output of device 1. If device 1 does need service, a 0 is placed on its PO output, which indicates to the next-lower-priority device that the acknowledge signal has been blocked, and the device responds by placing its own interrupt vector address (VAD) on the data bus for the CPU to use during the interrupt cycle.

In this way, interrupt sources are serviced according to their priority. Thus, we can say that it is the order of the devices in the chain that determines the priority of the interrupt sources.
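The ripple of the acknowledge signal down the chain can be sketched as follows; the device names and vector addresses are made up for illustration:

```python
# Toy daisy-chain priority: the acknowledge signal ripples down the chain;
# the first device with a pending interrupt blocks it (PO = 0) and supplies
# its vector address, while idle devices pass it on (PO = 1).

def daisy_chain_ack(devices):
    """devices: list of (name, has_pending, vector_address), highest priority first."""
    for name, has_pending, vad in devices:
        if has_pending:
            return name, vad   # PO = 0: block acknowledge, put VAD on the data bus
        # PO = 1: pass the acknowledge to the next-lower-priority device
    return None, None

devices = [
    ("disk", False, 0x40),     # highest priority, no request
    ("printer", True, 0x44),   # pending request -> gets serviced
    ("keyboard", True, 0x48),  # pending, but lower priority -> must wait
]
print(daisy_chain_ack(devices))  # ('printer', 68)
```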

Q. Explain Interrupts Handling mechanism in Computer system Architecture.

Ans: 

Interrupts in Computer Architecture

An interrupt in computer architecture is a signal that requests the processor to suspend its current execution and service the occurred interrupt. 

To service the interrupt the processor executes the corresponding interrupt service routine (ISR). 

After the execution of the interrupt service routine, the processor resumes the execution of the suspended program. 

Interrupts can be of two types: hardware interrupts and software interrupts.



Types of Interrupts in Computer Architecture

Interrupts can be of various types, but they are basically classified into hardware interrupts and software interrupts.

1. Hardware Interrupts

If a processor receives an interrupt request from an external I/O device, it is termed a hardware interrupt. Hardware interrupts are further divided into maskable and non-maskable interrupts.

  • Maskable Interrupt: A hardware interrupt that can be ignored or delayed for some time if the processor is executing a program with higher priority is termed a maskable interrupt.
  • Non-Maskable Interrupt: A hardware interrupt that can neither be ignored nor delayed and must immediately be serviced by the processor is termed a non-maskable interrupt.

2. Software Interrupts

The software interrupts are the interrupts that occur when a condition is met or a system call occurs.

Interrupt Cycle

A normal instruction cycle starts with the instruction fetch and execute. But, to accommodate the occurrence of the interrupts while normal processing of the instructions, the interrupt cycle is added to the normal instruction cycle as shown in the figure below.



After the execution of the current instruction, the processor verifies the interrupt signal to check whether any interrupt is pending. If no interrupt is pending then the processor proceeds to fetch the next instruction in the sequence.
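The modified instruction cycle can be sketched as a loop; this is a schematic of the idea, not any real CPU's microcode:

```python
# Fetch-execute loop with an interrupt check after each instruction.

def run(program, interrupt_pending, isr, state):
    for instruction in program:
        instruction(state)            # fetch and execute the current instruction
        if interrupt_pending():       # verify the interrupt signal
            state.append("save PC/status")
            isr(state)                # execute the interrupt service routine
            state.append("restore PC/status")
    return state

pending = iter([False, True, False])  # an interrupt arrives during instruction 2
state = run(
    program=[lambda s: s.append("I1"), lambda s: s.append("I2"), lambda s: s.append("I3")],
    interrupt_pending=lambda: next(pending),
    isr=lambda s: s.append("ISR"),
    state=[],
)
print(state)  # ['I1', 'I2', 'save PC/status', 'ISR', 'restore PC/status', 'I3']
```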


******************************************************************************



For Any Query Contact: 

~pradeep





---------------------------------------------------------------------------------------------------------------------
Disclaimer: The content on this site is for educational purposes only and is not professional advice.

Educational Purpose Only: The information provided on this blog is for general informational and educational purposes only. All content, including text, graphics, images, and other material contained on this blog, is intended to be a resource for learning and should not be considered as professional advice.

No Professional Advice: The content on this blog does not constitute professional advice, and you should not rely on it as a substitute for professional consultation, diagnosis, or treatment. Always seek the advice of a qualified professional with any questions you may have regarding a specific issue.

Accuracy of Information: While I strive to provide accurate and up-to-date information, I make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability with respect to the blog or the information, products, services, or related graphics contained on the blog for any purpose. Any reliance you place on such information is therefore strictly at your own risk.

External Links: This blog may contain links to external websites that are not provided or maintained by or in any way affiliated with me. Please note that I do not guarantee the accuracy, relevance, timeliness, or completeness of any information on these external websites.

Personal Responsibility: Readers of this blog are encouraged to do their own research and consult with a professional before making any decisions based on the information provided. I am not responsible for any loss, injury, or damage that may result from the use of the information contained on this blog.

Contact: If you have any questions or concerns regarding this disclaimer, please feel free to contact me at my email: pradeep14335@gmail.com

Monday, May 22, 2023

CSA UNIT 3 MEMORY SYSTEMS



MEMORY SYSTEMS: 
Prerequisite: Go through the notes below before answering the questions

The memory hierarchy system consists of all storage devices contained in a computer system, from the slow auxiliary memory to the fast main memory and the smaller cache memory.

Auxiliary memory access time is generally 1000 times that of the main memory, hence it is at the bottom of the hierarchy.



The Goal: the illusion of large, fast, cheap memory

 • Fact: Large memories are slow, fast memories are small

 • How do we create a memory that is large, cheap and fast (most of the time)?

 – Hierarchy

 – Parallelism 

so, 

 – Present the user with as much memory as is available in the cheapest technology.

 – Provide access at the speed offered by the fastest technology. 


How is the hierarchy managed?

 • Registers <-------> Memory – by compiler (programmer?)

 • cache <-------> memory – by the hardware

 • memory <--------> disks – by the hardware and operating system (virtual memory) – by the programmer (files)

 • Virtual memory – Virtual layer between the application address space and physical memory – Not part of the physical memory hierarchy 


Memory Technology

 *Random Access: – "Random" is good: access time is the same for all locations

 – DRAM: Dynamic Random Access Memory

 • High density, low power, cheap, slow

 • Dynamic: need to be refreshed regularly

  • Main Memory is DRAM 




 – SRAM: Static Random Access Memory

 • Cache uses SRAM : Static Random Access Memory – No refresh

• Low density, high power, expensive, fast

 • Static: content will last forever (until lose power)

Example: 

 

 * "Not-so-random" Access Technology: – Access time varies from location to location and from time to time

 – Examples: Disk, CDROM

  * Sequential Access Technology: access time linear in location (e.g., Tape) 

So,

•  DRAM is slow but cheap and dense: – Good choice for presenting the user with a BIG memory system

 • SRAM is fast but expensive and not very dense: – Good choice for providing the user FAST access time.  

Increasing Bandwidth - Interleaving  : Refer Question#

Disk Storage

• Nonvolatile, rotating magnetic storage:


Magnetic Disks

  • Traditional magnetic disks have the following basic structure:
    • One or more platters in the form of disks covered with magnetic media. Hard disk platters are made of rigid metal, while "floppy" disks are made of more flexible plastic.
    • Each platter has two working surfaces. Older hard disk drives would sometimes not use the very top or bottom surface of a stack of platters, as these surfaces were more susceptible to potential damage.
    • Each working surface is divided into a number of concentric rings called tracks. The collection of all tracks that are the same distance from the edge of the platter, ( i.e. all tracks immediately above one another in the following diagram ) is called a cylinder.
    • Each track is further divided into sectors, traditionally containing 512 bytes of data each, although some modern disks occasionally use larger sector sizes. ( Sectors also include a header and a trailer, including checksum information among other things. Larger sector sizes reduce the fraction of the disk consumed by headers and trailers, but increase internal fragmentation and the amount of disk that must be marked bad in the case of errors. )
    • The data on a hard drive is read by read-write heads. The standard configuration ( shown below ) uses one head per surface, each on a separate arm, and controlled by a common arm assembly which moves all heads simultaneously from one cylinder to another. ( Other configurations, including independent read-write heads, may speed up disk access, but involve serious technical difficulties. )
    • The storage capacity of a traditional disk drive is equal to the number of heads ( i.e. the number of working surfaces ), times the number of tracks per surface, times the number of sectors per track, times the number of bytes per sector. A particular physical block of data is specified by providing the head-sector-cylinder number at which it is located.

****************************************************

Questions and Answers ( As per previous year CSVTU QP): 

1. Explain the memory hierarchy; give details of each level in brief.

Ans: 






Levels of memory: 

• Level 0 or Registers – Registers are small memories inside the CPU in which data is stored for immediate access. The most commonly used registers are the accumulator, program counter, address register, etc. 

• Level 1 or Cache memory – It is the fastest memory, with the shortest access time, where data is temporarily stored for faster access. 

• Level 2 or Main Memory – It is the memory on which the computer works currently. It is small in size, and once power is off the data no longer stays in this memory. 

• Level 3 or Secondary Memory – It is external memory which is not as fast as main memory, but data stays permanently in this memory.


2. What is virtual memory in a computer system? Give details of the paging mechanism. How is it different from main memory?

Ans: See my Class notes: Paging Mechanism

OR

Virtual memory is a memory management technique where secondary memory can be used as if it were a part of the main memory.

Virtual memory is a valuable concept in computer architecture that allows you to run large, sophisticated programs on a computer even if it has a relatively small amount of RAM. A computer with virtual memory artfully juggles the conflicting demands of multiple programs within a fixed amount of physical memory.

Virtual memory is a common technique used in a computer's operating system (OS). 

How does virtual storage work?

This memory system uses both the computer's software and hardware to work. It transfers processes between the computer's RAM and hard disk by copying any files from RAM that aren't currently in use and moving them to the hard disk. 

Virtual storage management: 

Computers can handle virtual storage through segmenting and paging. Below is an explanation of the Paging Mechanism.


Paging Mechanism:

Paging is a Memory management method.

The process of retrieving processes in the form of pages from the secondary storage into the main memory is known as paging.

In paging, the physical memory is divided into fixed-size blocks called page frames, which are the same size as the pages used by the process. The process’s logical address space is also divided into fixed-size blocks called pages, which are the same size as the page frames. 

We usually divide the main memory into frames of equal size, and each frame consists of an equal number of words.

An important point to note is that Page Size = Frame Size.


Let us assume 2 processes, P1 and P2, containing 4 pages each. Each page is 1 KB in size. The main memory contains 8 frames of 1 KB each. The OS resides in the first two frames. In the third frame, the 1st page of P1 is stored, and the other frames are shown as filled with the different pages of the processes in the main memory.

The page tables of both processes are 1 KB in size each, and therefore they can fit in one frame each. The page tables of both processes contain various information that is also shown in the image.

The CPU contains a register which contains the base address of page table that is 5 in the case of P1 and 7 in the case of P2. This page table base address will be added to the page number of the Logical address when it comes to accessing the actual corresponding entry.
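The translation described above can be sketched as follows; the page-table contents are illustrative, loosely matching the example:

```python
# Logical -> physical address translation with a single-level page table.
# Page size 1 KB as in the example above.

PAGE_SIZE = 1024

page_table_p1 = {0: 2, 1: 4, 2: 5, 3: 7}   # page number -> frame number (illustrative)

def translate(logical_address, page_table):
    page_number = logical_address // PAGE_SIZE   # upper bits select the page
    offset = logical_address % PAGE_SIZE         # lower bits are the offset
    frame = page_table[page_number]              # page-table lookup
    return frame * PAGE_SIZE + offset

# Word 100 of page 1 of P1 lands at word 100 of frame 4:
print(translate(1 * PAGE_SIZE + 100, page_table_p1))  # 4196
```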


A Translation look aside buffer can be defined as a memory cache which can be used to reduce the time taken to access the page table again and again.

It is a memory cache which is closer to the CPU, and the time taken by the CPU to access the TLB is less than that taken to access main memory.



3. A two-way set-associative cache memory uses a block size of 128 bytes with total cache size 1 MB. The byte-addressable main memory size is given as 256 MB.

Give the Physical Address Splits in bits.

Ans: See my Class notes : 

         click: Types of Cache mapping Theory with Solved Numerical

or


First, let's see the theory behind it: 

In OS we have two kind of addresses: 

Virtual/Logical Address, and Physical Address.

Physical Address:
As explained above, the CPU generates addresses of all the processes in the main memory relative to zero, i.e., the CPU produces addresses starting from 0. The addresses generated by the CPU are in no way real, because if those addresses were valid then all processes would begin from zero and overlap in main memory. Thus the addresses generated by the CPU must be mapped to the real (physical) addresses of the main memory. This is done by the MMU (Memory Management Unit).

The smallest addressable unit in memory is called a word. It is the smallest unit that can be specifically identified using address bits.

The address space of a virtual address is divided into 2 parts: page number and page offset.

Similarly, the address space of a physical address is divided into 2 parts: frame number and frame offset.

Let Virtual Address has n bits and Physical Address has m bits.

Page size = Frame size = 2^k words (so k bits are needed for the offset)

(as page size = frame size, as mentioned above).

Number of Pages = 2^(n-k) pages
Number of Frames = 2^(m-k) frames
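Applying this style of reasoning to the numbers in the question gives the physical address split; this worked sketch assumes the standard set-associative division into tag, set, and offset fields:

```python
import math

# Figures from the question: 2-way set-associative, 128 B blocks,
# 1 MB cache, 256 MB byte-addressable main memory.
block_size = 128
cache_size = 1 * 1024 ** 2
memory_size = 256 * 1024 ** 2
ways = 2

address_bits = int(math.log2(memory_size))        # 28-bit physical address
offset_bits = int(math.log2(block_size))          # 7 bits select the byte in a block
lines = cache_size // block_size                  # 8192 cache lines
sets = lines // ways                              # 4096 sets
set_bits = int(math.log2(sets))                   # 12 bits select the set
tag_bits = address_bits - set_bits - offset_bits  # remaining 9 bits are the tag

print(address_bits, tag_bits, set_bits, offset_bits)  # 28 9 12 7
```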





5. Give all Information required to construct Cache Memory. 

Ans: 

    Cache Memory

    Cache Memory is a special very high-speed memory. It is used to speed up memory access and synchronize with the high-speed CPU.

    Cache Performance: 

    When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache. 

    • If the processor finds that the memory location is in the cache, a cache hit has occurred and data is read from cache 

    • If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a new entry and copies in data from main memory, then the request is fulfilled from the contents of the cache. 

    The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.

    Hit ratio = hits / (hits + misses) = no. of hits / total accesses
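A quick numeric check of the hit-ratio formula (the figures are illustrative):

```python
# Hit ratio = hits / (hits + misses)
hits, misses = 950, 50

hit_ratio = hits / (hits + misses)
print(hit_ratio)  # 0.95
```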

    Cache Mapping:

      There are three different types of mapping used for the purpose of cache memory which is as follows: Direct mapping, Associative mapping, and Set-Associative mapping.
    1. Direct Mapping –
      Maps each block of main memory into only one possible cache line. If a line is already occupied by a memory block and a new block needs to be loaded, the old block is trashed. The address is split into two parts: an index field, which selects the cache line, and a tag field, which is stored in the cache to identify which memory block is resident.
    2. Associative Mapping –
      A block of main memory can map to any line of the cache that is freely available at that moment. The word offset bits are used to identify which word in the block is needed, all of the remaining bits become Tag.
    3. Set-Associative Mapping –
      Cache lines are grouped into sets where each set contains k number of lines and a particular block of main memory can map to only one particular set of the cache. However, within that set, the memory block can map to any freely available cache line.
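Direct mapping, the simplest of the three, can be sketched as follows; this is a toy model that tracks only valid bits and tags, not the cached data itself:

```python
# Minimal direct-mapped cache lookup: each memory block maps to exactly one line
# (line = block_number % num_lines); the stored tag identifies which block is resident.

NUM_LINES = 4
cache = [{"valid": False, "tag": None} for _ in range(NUM_LINES)]

def access(block_number):
    line = block_number % NUM_LINES      # index field selects the line
    tag = block_number // NUM_LINES      # tag field disambiguates blocks
    entry = cache[line]
    if entry["valid"] and entry["tag"] == tag:
        return "hit"
    entry["valid"], entry["tag"] = True, tag   # miss: load the block, trash the old one
    return "miss"

# Blocks 0 and 4 collide on line 0, so they keep evicting each other:
results = [access(b) for b in [0, 4, 0, 0]]
print(results)  # ['miss', 'miss', 'miss', 'hit']
```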

Note: Translation Lookaside Buffer (i.e. TLB) is required only if Virtual Memory is used by a processor. In short, TLB speeds up the translation of virtual address to a physical address by storing page-table in faster memory. In fact, TLB also sits between the CPU and Main memory.



What is Direct Mapping? 

(source:https://byjus.com/gate/direct-mapping-notes/)

In the case of direct mapping, a certain block of main memory would be able to map to only a particular line of cache.

Physical Address Division

The physical address, in the case of direct mapping, is divided as follows:



What is Fully Associative Mapping?

“Every memory block can be mapped to any cache line.”

The fully associative mapping helps us resolve the issue related to conflict misses. It means that any block of the main memory can easily come in a line of cache memory. Here, for instance, B0 can easily come in L1, L2, L3, and L4. Also, the case would be similar for all the other blocks. This way, the chances of a cache hit increase a lot.

Now, let us assume that there is a RAM (main memory) of size 128 words along with a cache of size 16 words. Here, the main memory is divided into blocks and the cache into lines. Every block/line is of size 4 words. It is shown in the diagram given as follows:



Physical Address

Since the main memory has a size of 128 words, a total of 7 bits would be used for addressing the main memory. Thus, the physical address would be 7 bits in size. Given below is an example of the fully associative mapping of word W24 of block B6.



What is K-way Set Associative Mapping?

The K-way set associative mapping is like a mix of fully associative and direct mapping. Here,

  • The total number of sets = the total number of lines / K
  • K refers to the K-way set associative
  • K = 2 for a 2-way set associative
  • The total number of sets = 4/2 = 2 sets (S1, S0)


Physical Address

In case a cache has 4 lines, and we consider a 2-way set associative, then:

6. What is Memory Interleaving?

Ans:

 See my Class notes: click Memory Interleaving

or

Prerequisite – see Q#2.

Abstraction is one of the most important aspects of computing and is a widely implemented practice in the computational field. 

Memory interleaving is more or less an abstraction technique, though it is a bit different from abstraction. 

It is a technique that divides memory into a number of modules such that successive words in the address space are placed in different modules. 

***************************************************************************


Thank you all


~pradeep




