Virtual Memory

A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is typically backed by a section of a hard disk that is set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend physical memory by using disk. Second, it provides memory protection, because each virtual address must be translated to a physical address before memory is accessed.

In modern microprocessors intended for general-purpose use, a memory management unit (MMU) is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses.
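
As a rough sketch of what this translation involves, assume 4 KiB pages and a hypothetical single-level page table indexed by virtual page number (real MMUs use multi-level tables and a TLB, but the arithmetic is the same):

```c
#include <stdint.h>

#define PAGE_SIZE  4096u   /* assumed page size: 4 KiB */
#define PAGE_SHIFT 12      /* log2(PAGE_SIZE) */
#define PTE_VALID  0x1u    /* hypothetical "page is in memory" bit */

/* Hypothetical single-level page table: indexed by virtual page number,
   each entry holds the frame's base address in its upper bits plus flags. */
extern uint32_t page_table[];

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number    */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within the page */
    uint32_t pte    = page_table[vpn];

    if (!(pte & PTE_VALID)) {
        /* Page not resident: the hardware raises a page fault and the OS
           brings the page in (see demand paging below). */
        return 0;  /* placeholder for the fault path in this sketch */
    }
    uint32_t frame = pte & ~(PAGE_SIZE - 1);     /* physical frame address */
    return frame | offset;
}
```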

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system. Demand segmentation can also be used to provide virtual memory.

Demand Paging

  • The basic idea behind demand paging is that when a process is swapped in, its pages are not swapped in all at once. Rather, they are swapped in only when the process needs them (i.e., on demand). This is termed a lazy swapper, although a pager is a more accurate term.
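
On Linux and most other Unix-like systems, this lazy behaviour can be observed with an anonymous mmap: the kernel hands out virtual address space immediately but backs each page with a physical frame only when it is first touched. A small illustrative demo:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 64UL * 1024 * 1024;   /* 64 MiB of virtual address space */

    /* The kernel only reserves address space here; no physical frames are
       allocated yet (demand-zero paging). */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching one byte per 4 KiB page causes a page fault per page; only
       then does the OS back that page with a frame. The process's resident
       set grows as this loop runs. */
    for (size_t off = 0; off < len; off += 4096)
        buf[off] = 1;

    munmap(buf, len);
    return 0;
}
```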

Copy-on-Write

  • The idea behind a copy-on-write fork is that the pages of a parent process do not have to be copied for the child until one of the two processes modifies a page. They can simply be shared between the two processes in the meantime, with a bit set to indicate that a page must be copied if it is ever written to. This is a reasonable approach, since the child process usually issues an exec() system call immediately after the fork.
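
The usual fork-then-exec pattern that makes copy-on-write pay off looks roughly like this (error handling trimmed for brevity):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* A large buffer in the parent's address space. */
    size_t len = 16UL * 1024 * 1024;
    char *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 'x', len);

    pid_t pid = fork();      /* pages are shared copy-on-write, not duplicated */
    if (pid == 0) {
        /* The child execs almost immediately, so hardly any of the parent's
           pages are ever actually copied. */
        execlp("echo", "echo", "child replaced its image with a new program",
               (char *)NULL);
        _exit(127);          /* reached only if exec fails */
    }
    /* If either process wrote to buf before the exec, only the written
       pages would be copied; the rest would remain shared. */
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}
```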

Page Replacement

  • In order to make the most use of virtual memory, we load several processes into memory at the same time. Since we only load the pages that are actually needed by each process at any given time, there is room to load many more processes than if we had to load in the entire process.
  • However, memory is also needed for other purposes (such as I/O buffering), and what happens if some process suddenly decides it needs more pages and there aren't any free frames available? There are several possible solutions to consider:
    1. Adjust the memory used by I/O buffering, etc., to free up some frames for user processes. The decision of how to allocate memory for I/O versus user processes is a complex one, yielding different policies on different systems. (Some allocate a fixed amount for I/O, and others let the I/O system contend for memory along with everything else.)
    2. Put the process requesting more pages into a wait queue until some free frames become available.
    3. Swap some process out of memory completely, freeing up its page frames.
    4. Find some page in memory that isn't being used right now, and swap only that page out to disk, freeing up a frame that can be allocated to the process requesting it. This is known as page replacement and is the most common solution. There are many different page-replacement algorithms; a few of them are described below:
  • FIFO Page Replacement

    • A simple and obvious page replacement strategy is FIFO, i.e. first-in-first-out.
    • As new pages are brought in, they are added to the tail of a queue, and the page at the head of the queue is the next victim. In the classic example simulated below, 20 page requests result in 15 page faults with 3 frames.
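
A small user-space simulation of FIFO replacement on the classic reference string of 20 requests with 3 frames:

```c
#include <stdio.h>

/* Count page faults for FIFO replacement on a reference string.
   frames[] holds the resident pages and is reused as a circular queue:
   the slot at `head` always holds the oldest page. */
static int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16];                     /* sketch assumes nframes <= 16 */
    int head = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;                   /* page already resident */

        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];   /* still a free frame */
        } else {
            frames[head] = refs[i];     /* evict the oldest page */
            head = (head + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    /* The classic 20-reference example string. */
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = (int)(sizeof refs / sizeof refs[0]);
    printf("FIFO page faults with 3 frames: %d\n", fifo_faults(refs, n, 3));
    return 0;
}
```

With 3 frames this prints 15 page faults for the 20 references, matching the example above.
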
  • Optimal Page algorithm

    • An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms; such an algorithm exists and has been called OPT or MIN.
    • Replace the page that will not be used for the longest period of time; this requires knowing, for each resident page, when it will next be used. Since such future knowledge is not available in practice, OPT is used mainly as a benchmark against which other algorithms are compared.
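
OPT cannot be implemented in a real system, but it is easy to evaluate in a simulation. The sketch below shows only the victim-selection step and is meant to slot into a loop like the FIFO simulation above:

```c
/* Among the resident pages frames[0..nframes-1], pick the one whose next
   use in refs[pos..n-1] lies farthest in the future (or never occurs).
   Returns an index into frames[]. Simulation-only: it looks ahead in the
   reference string, which a real OS cannot do. */
static int opt_victim(const int *frames, int nframes,
                      const int *refs, int n, int pos) {
    int victim = 0, farthest = -1;
    for (int j = 0; j < nframes; j++) {
        int next = n;                       /* n means "never used again" */
        for (int k = pos; k < n; k++)
            if (refs[k] == frames[j]) { next = k; break; }
        if (next > farthest) { farthest = next; victim = j; }
    }
    return victim;
}
```
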
  • Least Recently Used (LRU) algorithm

    • The page that has not been used for the longest time in main memory is the one selected for replacement.
    • It is easy to implement: keep a list (or a timestamp per page) of references, and replace the page whose last use lies furthest in the past.
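
One simple way to model LRU in a simulation is to record, for each resident frame, the position of its last reference and evict the oldest; the same effect can be had with a list that is reordered on every reference. A sketch of the selection step:

```c
/* last_used[j] holds the position in the reference string at which the
   page in frames[j] was most recently referenced; the simulation loop
   updates it on every hit. LRU evicts the frame with the oldest value. */
static int lru_victim(const int *last_used, int nframes) {
    int victim = 0;
    for (int j = 1; j < nframes; j++)
        if (last_used[j] < last_used[victim])
            victim = j;
    return victim;
}
```
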
  • Page Buffering algorithm

    • To get a process started quickly, keep a pool of free frames.
    • On a page fault, select a page to be replaced.
    • Write the new page into a frame from the free pool, update the page table, and restart the process.
    • Then write the dirty victim page out to disk and place the frame that held it back into the free pool.
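
A rough sketch of the idea, with hypothetical helper functions (read_in, map_page, pick_victim, write_out) standing in for real kernel code: the faulting process gets a frame from the pool immediately, and the dirty victim is written out afterwards, off the critical path.

```c
/* Hypothetical helpers -- placeholders for real kernel routines. */
extern void read_in(int page, int frame);      /* load page from disk       */
extern void map_page(int page, int frame);     /* update page table         */
extern int  pick_victim(void);                 /* FIFO/LRU/... (see above)  */
extern void write_out(int frame);              /* flush a dirty frame       */

#define POOL_MAX 8
static int pool[POOL_MAX];      /* frame numbers currently free */
static int pool_count;

void fault_with_buffering(int faulting_page) {
    int frame = pool[--pool_count];     /* take a free frame immediately   */
    read_in(faulting_page, frame);      /* bring the new page in           */
    map_page(faulting_page, frame);     /* restart the process right away  */

    int victim = pick_victim();         /* now deal with the replaced page */
    write_out(victim);                  /* write the dirty page to disk    */
    pool[pool_count++] = victim;        /* its frame refills the free pool */
}
```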

  • Least Frequently Used (LFU) algorithm

    • The page with the smallest count is the one which will be selected for replacement.
    • This algorithm suffers from the situation in which a page is used heavily during the initial phase of a process, but then is never used again.

  • Most Frequently Used (MFU) algorithm

    • This algorithm replaces the page with the largest count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
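
LFU and MFU both keep a reference count per resident page and differ only in whether the smallest or the largest count is evicted. A sketch of that selection step (use_count would be maintained by the surrounding simulation loop, as in the earlier sketches):

```c
#include <stdbool.h>

/* use_count[j] = number of references so far to the page in frames[j].
   LFU evicts the smallest count; MFU evicts the largest. */
static int count_victim(const int *use_count, int nframes, bool mfu) {
    int victim = 0;
    for (int j = 1; j < nframes; j++) {
        bool better = mfu ? (use_count[j] > use_count[victim])
                          : (use_count[j] < use_count[victim]);
        if (better)
            victim = j;
    }
    return victim;
}
```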