Chapter 16.1: Purpose of an operating system


Operating system resource optimization

How an operating system can maximize the use of resources:

  • Primary memory

    • Moving frequently accessed instructions to cache
    • … for faster recall
    • … as SRAM is used rather than DRAM for cache
    • Making use of virtual memory
    • … with paging or segmentation
    • … to swap memory to and from disk
    • Partitioning memory
    • … dividing main memory into static partitions
    • … to allow more than one task to be available
    • Removing unused items/tasks from RAM
    • … by marking a partition as available
    • … as soon as the process using it has terminated
  • Disk

    • Disk cache
    • … a disk cache holds data that is frequently transferred to/from the disk
    • … the cache can be held on disk or in RAM
    • Compression utility
    • … decreasing the size of files stored on the disk
    • … in order to fit larger files on the disk
    • Defragmentation utility
    • … files are rearranged to occupy contiguous disk space
    • … this reduces the time taken to access files/decrease latency
  • CPU

    • Scheduling
    • … Better utilization of CPU time and resources
  • Input/Output system

    • IO operation initiated by the computer user

    • IO operation which occurs while software is being run and resources such as printers and disk drives are requested

    • Direct memory access (DMA) controller is needed to allow hardware to access the main memory independently of the CPU.

      • DMA initiates the data transfer
      • CPU carries out other tasks while this data transfer operation takes place
      • Once the data transfer is complete, an interrupt signal is sent from the DMA controller to the CPU.
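The overlap between CPU work and a DMA transfer can be mimicked with a thread (purely illustrative: real DMA is done in hardware, and the "interrupt" here is just an event flag):

```python
import threading

done = threading.Event()

def dma_transfer(buffer, data):
    """Simulated DMA: copies data while the 'CPU' loop below does other work."""
    buffer.extend(data)
    done.set()              # 'interrupt': signal the CPU that the transfer finished

buffer = []
t = threading.Thread(target=dma_transfer, args=(buffer, [1, 2, 3]))
t.start()

other_work = 0
while not done.is_set():    # CPU carries out other tasks meanwhile
    other_work += 1

t.join()
print(buffer)  # [1, 2, 3]
```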

Hide the complexity of the hardware

  • Using graphical interface rather than command line interface
  • Using device drivers (simplifies the complexity of the hardware interface)
  • Simplifying the saving and retrieving of data from memory and storage device
  • Carrying out background utilities, such as a virus scanner, which the user can leave to run on its own

Process management

Program: the written code (static)

Process: a program that has started to be executed (dynamic)

  • A program which has been placed in the memory
  • with a process control block created
  • or has at least once entered a running state

Process control block (PCB): a data structure which contains all of the data needed for a process to run; it is created in memory when the process is created. The PCB will store

  • current process state (ready, running, or blocked)
  • process privileges (such as which resources it is allowed to access)
  • register values (PC, MAR, MDR, and ACC)
  • process priority and any scheduling information
  • the amount of CPU time the process will need to complete
  • a process ID which allows it to be uniquely identified
  • I/O status information - I/O devices allocated to process, list of open files
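As a rough sketch, the PCB fields listed above could be modelled like this (field names and types are illustrative, not how a real kernel lays out its PCB):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                      # unique process ID
    state: str = "new"            # current process state (ready/running/blocked)
    priority: int = 0             # scheduling priority
    privileges: set = field(default_factory=set)   # resources it may access
    registers: dict = field(default_factory=dict)  # saved PC, MAR, MDR, ACC values
    open_files: list = field(default_factory=list) # I/O status information
    cpu_time_needed: float = 0.0  # estimated CPU time to complete

pcb = PCB(pid=1, priority=5)
pcb.registers["PC"] = 0x0040      # register values saved on a context switch
print(pcb.pid, pcb.state)
```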

Process context - the machine environment while the process is running

  • Program counter, location of instruction to next execute
  • CPU registers, contents of all process-centric registers

Multitasking: allows computers to carry out more than one task (process) at a time. Each of these processes shares common hardware resources. To ensure multitasking operates correctly, scheduling is used to decide which process is carried out first.

Process states

Description

  • New: process is being created

  • Ready: program waiting for CPU time

    • The process is not being executed
    • The process is in the queue
    • … waiting for the processor’s attention/time slice/CPU time
  • Waiting/Blocked: program waiting for an event or Input/Output operation

    • The process is waiting for an event
    • So it cannot be executed at the moment
    • e.g. Input/Output
  • Running: program running on a CPU

    • Process is being executed by the CPU
    • Process is currently using its allocated time slice
  • Terminated (Exit): process has finished executing

Transition between states: conditions

  • Ready -> Running
    • Current process no longer running // processor is available
    • Process was at the head of the ready queue // process has the highest priority
  • Blocked -> Ready
    • The only required resource becomes available // resource/event/IO operation is complete
  • Running -> Ready
    • When a process is executing it is allocated a time slice
    • When the time slice is completed, the process can no longer use the processor even if it is capable of further processing
  • Running -> Blocked
    • A running process needs to perform an IO operation
    • Placed in the blocked state until the IO operation is complete

The transition between the states shown in Figure 20.05 can be described as follows

  • A new process arrives in memory and a PCB is created; it changes to the ready state.
  • A process in the ready state is given access to the CPU by the dispatcher; it changes to the running state
  • A process in the running state is halted by an interrupt; it returns to the ready state
  • A process in the running state cannot progress until some event has occurred (perhaps IO); it changes to the waiting/blocked state.
  • A process in the waiting state is notified that an event is complete; it returns to the ready state.
  • A process in the running state completes execution; it changes to the terminated state
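The legal transitions described above can be captured in a small lookup table; this sketch permits only the legal moves and rejects impossible ones such as blocked -> running:

```python
# Allowed process state transitions (per the five-state model above)
TRANSITIONS = {
    ("new", "ready"): "admitted: PCB created, process loaded into memory",
    ("ready", "running"): "dispatched: given a time slice by the dispatcher",
    ("running", "ready"): "interrupted: time slice used up",
    ("running", "blocked"): "waiting: process initiated an IO operation",
    ("blocked", "ready"): "event complete: IO finished",
    ("running", "terminated"): "exit: execution complete",
}

def move(state, new_state):
    """Return the new state if the transition is legal, else raise."""
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "new"
for nxt in ["ready", "running", "blocked", "ready", "running", "terminated"]:
    s = move(s, nxt)          # a typical lifetime of one process
print(s)                      # terminated
```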

Explain how the processes are affected when the following events take place

  • The running process needs to read a file from a disk
    • Running process is halted
    • Process moved to blocked state
  • The running process uses up its time slice
    • Running process is halted // another process has the use of the processor
    • Process moves to ready state
    • … until next time slice is allocated

Explain why a process cannot be moved from the blocked state to the running state

  • When the IO operation completes for the process in the blocked state
  • The process is put into the ready queue
  • The OS determines which process gets the next use of the processor from the ready queue

Explain why a process cannot be moved from the ready state to the blocked state

  • To be in the blocked state, a process must initiate some IO operation
  • To initiate the operation, the process must be executing
  • If a process is in the ready state, it cannot be executing; hence it must be in the running state first

Process scheduling

High-level scheduler controls the selection of a program stored on disk to be moved into main memory.

  • Decides which processes are loaded from backing store
  • Into memory/ready queue

Low-level scheduler controls when a program loaded in memory has access to the CPU

  • Decides which process should next get the use of CPU time
  • Based on position/priority
  • Invoked after interrupt/OS call
  • Purpose: ensure response time is acceptable and the system remains stable at all times.

Although the high-level and low-level scheduler will have decisions to make when choosing which program should be loaded into memory, we concentrate here on the options for the low-level scheduler.

Round Robin: each process in the ready queue is given the CPU for a fixed time slice in turn; a process that does not finish in its slice returns to the back of the queue

Shortest job first: the process with the smallest estimated total execution time is selected next (non-preemptive)

First come first served: processes are allocated the CPU in the order in which they join the ready queue (non-preemptive)

Shortest remaining time: the preemptive version of shortest job first; the process with the least execution time remaining runs next, and a newly arrived shorter job can preempt the running one
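To make the idea concrete, here is a minimal round-robin sketch (burst times and process names are hypothetical; a real low-level scheduler also handles priorities and blocked processes):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin; bursts maps process name -> CPU time needed.
    Returns the order in which processes finish."""
    queue = deque(bursts.items())        # the ready queue
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum             # process uses up to one time slice
        if remaining > 0:
            queue.append((name, remaining))  # back of the ready queue
        else:
            finished.append(name)
    return finished

print(round_robin({"P1": 5, "P2": 2, "P3": 4}, quantum=2))  # ['P2', 'P3', 'P1']
```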


Explain why the operating system needs to use a scheduling algorithm

  • To allow multitasking to take place
  • To ensure fair usage of the processor, peripherals, and memory
  • To ensure high priority tasks are executed sooner
  • To ensure all processes have the opportunity to finish
  • To keep CPU busy all the time
  • To service the largest possible number of jobs in a given amount of time
  • To minimize the time users must wait for their results

Kernel

Central component responsible for communication between hardware, software, and memory

Responsible for:

  • Process management
  • Device management
  • Memory management
  • Interrupt handling
  • Input/output file communication

Interrupt handling & OS kernel

Interrupt: a signal from hardware or software seeking the attention of the processor

  1. The CPU will check for interrupt signals. The system will enter kernel mode if any of the following types of interrupt signals are sent
    • Device interrupts (printer out of paper, device not present)
    • Exceptions (instruction faults: division by zero, unidentified opcode, stack faults)
    • Traps/software interrupts (process requesting a resource such as a disk drive)
  2. The kernel will consult the interrupt dispatch table (IDT) - links a device description with the proper interrupt routine
  3. The IDT will supply the address of the low-level routine to handle the interrupt event that occurred
  4. The kernel will save the state of the interrupted process on the kernel stack
  5. The process state will be restored once the interrupting task has been serviced.

Describe the sequence of steps which would be carried out by the interrupt handler software when an interrupt is received and serviced

  • Identify the source of interrupt
  • Disable all interrupts of lower priority
  • Save the content of the PC
  • Save the content of other registers on the kernel stack
  • Load and run the appropriate interrupt service routine(ISR)
  • Restore registers from the stack
  • Enable all interrupts
  • Continue execution of the interrupted process

Memory management

Paging:

  • Memory is split up into blocks of fixed size

  • Physical memory blocks are known as frames

  • Fixed-sized logical memory blocks are known as pages

  • A program is allocated a number of pages that is usually just larger than what it actually needs

  • The logical address stores the page number in the first part of address and the page offset in the remaining part of the address.

    • Page number: the number of bits required to represent the pages in the logical address space.
    • Page offset: the number of bits required to represent a particular word in a page.

      If there are 16 pages in total and each page holds 1024 words, then the first four bits of the address give the page number and the remaining 10 bits give the page offset. An address can therefore pinpoint a specific word within a specific page. e.g. 01010000000001 represents the 2nd word (offset 1) of page 5.

  • Page table: the page map table shows the mapping of pages to page frames

  • Page: virtual memory divided into blocks of fixed size

  • Page frame: the main memory is divided into page frames of the same size as the pages
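The bit-splitting in the worked example above (16 pages of 1024 words each) can be checked with a short sketch:

```python
PAGE_SIZE = 1024          # words per page -> 10 offset bits
OFFSET_BITS = 10

def split(logical_address):
    """Split a logical address into (page number, page offset)."""
    page = logical_address >> OFFSET_BITS       # top bits: page number
    offset = logical_address & (PAGE_SIZE - 1)  # bottom 10 bits: offset
    return page, offset

# 01010000000001 in binary: page 0101 = 5, offset 0000000001 = 1
print(split(0b01010000000001))  # (5, 1)
```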

How paging is used to handle virtual memory:

  • Divide main memory/RAM into frames
  • Divide virtual memory into blocks of same size called pages
  • Frame/pages are a fixed size
  • Set up a page (map) table that translates logical addresses to physical addresses
  • Keep track of all free frames
  • Swap pages in memory with new pages from disk when needed
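The steps above can be sketched with a toy page-table lookup (the dict-based table and frame numbers are hypothetical; a missing mapping stands in for a page that must be swapped in from disk):

```python
PAGE_SIZE = 1024

# page map table: page number -> frame number (None = not currently in RAM)
page_table = {0: 3, 1: 7, 2: None, 3: 1}

def translate(logical_address):
    """Translate a logical address to a physical one, or signal a page fault."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise LookupError(f"page fault: page {page} must be swapped in from disk")
    return frame * PAGE_SIZE + offset

print(translate(1 * PAGE_SIZE + 20))   # page 1 maps to frame 7
```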

The address entry for a page frame is 245; state what the value 245 could represent

  • The 245th page frame from the start of memory

Process X executes until the next instruction is the first instruction in page 4. Page 4 is not currently in main memory

State a hardware device that could be storing this page

  • Flash memory // magnetic disk // hard drive

Segmentation

  • Logical address space is broken up into variable-size memory blocks called segments
  • Segment can be of varying size
    • Each segment has a name and a size
  • During execution
    • Segments from the logical memory are loaded into physical memory
      • Address = segment number + offset value; the segment map table maps each segment number to its base address in physical memory
      • The offset value must be less than the segment size (limit); it is added to the base address to give the physical address
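The corresponding lookup for segmentation, with a hypothetical segment map table holding (base, limit) pairs:

```python
# segment map table: segment number -> (base address, size/limit)
segment_table = {0: (1000, 400), 1: (6300, 1100), 2: (4300, 400)}

def translate(segment, offset):
    """Map a (segment number, offset) logical address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("offset outside segment: addressing error")
    return base + offset

print(translate(2, 53))  # 4300 + 53 = 4353
```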

Differences between paging and segmentation

| Paging | Segmentation |
| --- | --- |
| A page is a fixed-size block of memory | A segment is a variable-size block of memory |
| Since the block size is fixed, some blocks may not be fully used - this can lead to internal fragmentation | Because memory blocks are a variable size, this reduces the risk of internal fragmentation but increases the risk of external fragmentation |
| The user supplies a single address value - the hardware splits it into page number and offset | The user supplies the address in two parts (segment number and offset) |
| A page table maps logical addresses to physical addresses (it contains the base address of each page stored in a frame in physical memory) | A segment map table containing segment number + offset (segment size) maps logical addresses to physical addresses |
| The process of paging is essentially invisible to the user/programmer | Segmentation is essentially visible to the user/programmer |
| Procedures and any associated data cannot be separated when using paging | Procedures and any associated data can be separated when using segmentation |
| Paging uses static linking and dynamic loading | Segmentation uses dynamic linking and dynamic loading |
| Pages are usually smaller than segments | Segments are usually larger than pages |

Virtual memory

  • If the available RAM is exceeded due to multiple processes running, it is possible to corrupt the data used in some of the programs being run
  • This can be solved by separately mapping each program’s virtual memory space to RAM and utilizing the hard disk drive/solid state drive if we need more memory. This is the basis of virtual memory.
  • RAM is the physical memory; virtual memory is RAM + swap space on the hard disk
  • Virtual memory is usually implemented using ‘demand paging’

State what is meant by virtual memory

  • Disk/Secondary storage is used to extend the RAM/memory available
  • … so CPU can access more memory space than available RAM
  • Only part of the program/data in use needs to be in RAM
  • Data is swapped between RAM and disk

Advantages of using virtual memory

  • Not all pages need to be in memory at one time
  • More than one process can be in the ready state
  • … because memory addresses can be mapped
  • A process can have an address space larger than that of physical memory

Disk thrashing

The main drawback of using an HDD for virtual memory is that as main memory fills, more and more data/pages need to be swapped in and out of virtual memory. This leads to a high rate of hard disk read/write head movement. If more time is spent moving pages in and out of memory than actually doing any processing, then the processing speed of the computer will be reduced. A point can be reached when the execution of a process comes to a halt because the system is so busy paging in and out of memory; this is known as the thrash point.

  • Pages are requested back into RAM as soon as they are moved to the disk
  • There is continuous swapping (of the same page)
  • No useful processing happens (deadlock)
    • Pages that are in RAM and disk are interdependent
    • All processing times are used for swapping pages

Page replacement

  • Page replacement occurs when a requested page is not in memory. When paging in/out from memory, it is necessary to consider how the computer can decide which page to replace to allow the requested page to be loaded.
  • When a new page is requested but is not in memory, a page fault occurs and the OS replaces one of the existing pages with the new page. How to decide which page to replace is done in a number of different ways, but all methods have the same goal of minimizing the number of page faults.
  • A page fault is a type of interrupt raised by hardware. When a running program accesses a page that is mapped into virtual memory address space, but not yet loaded into physical memory, then the hardware needs to raise this page fault interrupt.

Page replacement algorithms:

  • First in first out (FIFO)

    • The OS keeps track of all the pages in memory using a queue structure
    • The oldest page is at the front of the queue and is the first to be removed when a new page needs to be added
    • FIFO algorithms don’t consider page usage when replacing pages; a page may be replaced simply because it arrived earlier than another page
    • It suffers from what is known as Belady’s Anomaly, where it is possible to have more page faults after increasing the number of page frames
    • Additional data required: time of entry
    • Drawbacks:
      • A page that has been in memory for a lengthy period of time may be frequently accessed
      • … so it is not a good candidate to be removed
  • Optimal page replacement(OPR)

    • Optimal page replacement looks forward in time to see which frame it can replace in the event of a page fault.
      • The algorithm is actually impossible to implement; at the time of a page fault, the OS has no way of knowing when each of the pages will next be referenced.
      • It tends to get used for comparison studies but has the advantage that it is free of Belady’s Anomaly and also has the fewest page faults.
  • Least recently used page replacement (LRU)

    • With least recently used page replacement (LRU), the page which has not been used for the longest time is replaced.
    • To implement this method, it is necessary to maintain a linked list of all pages in memory with the most recently used page at the front and the least recently used page at the rear.
  • Least used page replacement

    • Additional data required: number of times the page has been accessed
    • Drawbacks:
      • A page that has just entered memory has a low usage count
      • … so it is likely to be a candidate for immediately being swapped out
  • Clock page replacement/second-chance page replacement
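The FIFO scheme described above can be sketched in a few lines; the reference string used here is the classic demonstration of Belady’s Anomaly (more frames, yet more faults):

```python
from collections import deque

def fifo_faults(reference_string, n_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()                      # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1                   # page fault: page not in memory
            if len(frames) == n_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3), fifo_faults(ref, 4))  # 9 10 - Belady's Anomaly
```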
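LRU, described above in terms of a linked list, can likewise be sketched with an ordered map standing in for that list (`OrderedDict` keeps the least recently used page at one end):

```python
from collections import OrderedDict

def lru_faults(reference_string, n_frames):
    """Count page faults under least-recently-used page replacement."""
    frames = OrderedDict()                # most recently used page at the end
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)      # page used again: now most recent
        else:
            faults += 1                   # page fault
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```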