CPSC 431, Fall 2016
Information About the Final Exam


The final exam for this course will take place on Thursday, December 15, at 8:30 AM, in our regular classroom. The exam will be about six pages long and should take most people about an hour and a half to complete.

The exam will concentrate heavily on material that we have covered since the first test. However, there will probably be some general questions about major ideas from the first part of the course.

There will be no actual coding on the test, although you might be asked to give pseudocode outlines of some operating system algorithms. There might be some quantitative problems (such as using a page table to convert a virtual address to a physical address, or asking how many blocks will be used to store a file of a given length). However, the majority of the test will be essay-type questions, including definitions, short-answer questions, and longer essays. You can expect at least one very general essay question at the end.


Here are some of the things that we have covered since Test 1...

deadlock
dining philosophers problem
conditions for deadlock:
     1. bounded resources
     2. no preemption
     3. wait while holding
     4. cycle of waiting
ordering resources to avoid deadlock (see the sketch after this list)
two-phase locking:  acquire locks as needed, release them all at the end
breaking deadlock by killing a process or rolling back a transaction
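
For example, here is a small C/pthreads sketch of avoiding deadlock by ordering resources (an illustration made up for this handout; lock_a and lock_b are just placeholder names). Because every thread acquires the locks in the same fixed order, a cycle of waiting can never form. The same idea is the standard fix for the dining philosophers.

    #include <pthread.h>

    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    /* Every thread that needs both resources acquires them in this order. */
    void use_both(void) {
        pthread_mutex_lock(&lock_a);      /* always lock A first...        */
        pthread_mutex_lock(&lock_b);      /* ...then B, never the reverse  */
        /* ... work with both resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }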

process scheduling
compute-bound tasks versus I/O-bound tasks
scheduling algorithms
    FIFO (First-in, first-out), low overhead; can be bad for average response time
    SJF (Shortest Job First), best average response time; possibility of starvation (worked example after this list)
    Round-robin, optimal fairness for compute-bound tasks, bad for I/O-bound tasks
    MLFQS (multilevel feedback queue scheduler), attempts fairness for all processes
MLFQS heuristic: increase priority of threads that don't use an entire time slice
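
A quick worked example of FIFO versus SJF (numbers made up for illustration): suppose three compute-bound jobs arrive at time 0 needing 10, 1, and 2 time units. FIFO in that arrival order finishes them at times 10, 11, and 13, so the average completion time is (10 + 11 + 13) / 3, about 11.3. SJF runs them in the order 1, 2, 10 and finishes them at times 1, 3, and 13, for an average of (1 + 3 + 13) / 3, about 5.7. The long job finishes at time 13 either way, but under SJF it could starve if short jobs kept arriving.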

address translation: convert virtual address to physical address
a process has its own virtual address space, inaccessible to other processes
segmented memory:  
    segment table maps each segment number to a base, a bound, and access info
    a virtual address consists of a segment number and an offset
    a program can use a code segment, global data segment, stack segment, etc.
paged memory:  
    virtual and physical address spaces divided into equal-sized pages
    a page table maps virtual pages to physical pages (plus access info)
    a virtual address consists of a page number and an offset (worked example after this list)
multilevel page tables, can deal with very large virtual address spaces
combining segmentation with paging
TLB (translation lookaside buffer), a cache for page table lookups
TLB records contain virtual page number, physical page number, access info
TLB "shootdowns" for maintaining TLB consistency across multiple processors
multi-level caching:  virtually-addressed (level 1) cache; TLBs; 
   physically-addressed per-processor (level 2) cache; level 3 cache
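
A worked example of paged address translation (the numbers are made up for illustration): with 4096-byte pages, the low 12 bits of an address are the offset. The virtual address 0x3204 therefore has page number 0x3 and offset 0x204; if the page table maps virtual page 0x3 to physical page 0x7, the physical address is 0x7 * 0x1000 + 0x204 = 0x7204. In C, a single-level translation looks roughly like this (a sketch only; a real page table entry also holds valid and access bits):

    #define PAGE_SIZE 4096UL

    /* page_table[vpn] holds the physical page number for virtual page vpn. */
    unsigned long translate(unsigned long vaddr, const unsigned long *page_table) {
        unsigned long vpn    = vaddr / PAGE_SIZE;       /* virtual page number */
        unsigned long offset = vaddr % PAGE_SIZE;       /* offset within page  */
        return page_table[vpn] * PAGE_SIZE + offset;    /* physical address    */
    }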

caching in general
cache hits and cache misses
temporal locality
spatial locality (and why memory caches load blocks of memory)
write-through cache vs. write-back cache
working-set model, hit rate levels off as cache size increases
Zipf model ("long" or "heavy" tail): rarely-accessed items collectively account for many accesses
cache replacement policies:
    Random, FIFO, LRU (least recently used), LFU (least frequently used)
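
Here is a minimal C sketch of LRU replacement using timestamps (an illustration only; the cached data itself is omitted, and real caches usually use lists or approximate LRU with clock bits). On a hit the entry's timestamp is refreshed; on a miss the entry with the oldest timestamp is evicted.

    #define CACHE_SLOTS 4

    struct entry {
        int      key;        /* e.g., a block or page number        */
        int      valid;      /* 1 if this slot holds data           */
        unsigned last_used;  /* logical time of most recent access  */
    };

    static struct entry cache[CACHE_SLOTS];
    static unsigned now;     /* logical clock, bumped on each access */

    /* Record an access to key.  Returns 1 on a hit.  On a miss, installs
       key in the least recently used (or free) slot and returns 0. */
    int access_key(int key) {
        int victim = 0;
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (cache[i].valid && cache[i].key == key) {
                cache[i].last_used = ++now;          /* hit: refresh recency  */
                return 1;
            }
            if (!cache[i].valid ||
                cache[i].last_used < cache[victim].last_used)
                victim = i;                          /* oldest (or free) slot */
        }
        cache[victim].key = key;                     /* miss: evict and reuse */
        cache[victim].valid = 1;
        cache[victim].last_used = ++now;
        return 0;
    }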
    
files and file systems
directories (map file names to file numbers)
directory structures (lists, trees, hash tables)
file metadata (such as owner, permissions, creation time)
hard links (multiple directory entries referring to the same actual file)
block devices
memory-mapped I/O  (map memory addresses to registers on I/O devices)
DMA (Direct Memory Access)
magnetic disks (physical structure: surfaces, tracks and sectors)
    accessing a sector:  seek time, settle time, rotation delay
    why sequential disk accesses are much faster than random accesses
    disk scheduling algorithms: FIFO, SSTF (shortest seek time first), elevator
SSDs, "flash" memory (physical structure: erase blocks and pages)
    must erase before writing; erasing is done in large blocks
    problem of "wear out" from too many writes to the same page
    mapping of logical page numbers to physical pages
    moving data around to mitigate wear-out and erasure time
file system structure
    FAT:  linked list of blocks
    FFS, ext2, ext3:  inode tree, fixed-size data blocks
    NTFS:  list of variable-sized "extents"
    ZFS: a COW (Copy-on-Write) file system
inodes: fixed array of available inodes; an inode contains metadata, direct 
    pointers, and singly-, doubly-, and triply-indirect pointers (worked example after this list)
COW file systems: blocks never modified in place
how COW file systems support file versions, snapshots, cloning
ZFS structure:
    uberblock array (256 slots for the uberblock)
    expandable dnode array, stored in a file
    a file can use from zero to six levels of indirect pointers
    batched updates for efficiency
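
A worked example with made-up but typical numbers: with 4 KB blocks, a 10,000-byte file needs ceil(10000 / 4096) = 3 data blocks (plus its inode and any indirect blocks). For an FFS/ext2-style inode with 12 direct pointers, 4-byte block pointers, and 4 KB blocks, an indirect block holds 1024 pointers, so the direct pointers cover 12 * 4 KB = 48 KB, the singly-indirect pointer adds 1024 * 4 KB = 4 MB, and the doubly-indirect pointer adds 1024 * 1024 * 4 KB = 4 GB.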

redo log containing update records, commit records (and maybe rollback records)
crash recovery by replaying a redo log (and why update operations are idempotent)
transactions in a file system; the basic atomic operation is writing a single sector
how COW file systems like ZFS are built to support transactions
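
Here is a minimal C sketch of replaying a redo log (the record format is made up for this illustration). Only updates that belong to a committed transaction are re-applied, and re-applying is safe because each update record holds the complete new contents of one block, so doing the write twice gives the same result (idempotent).

    #include <string.h>

    #define BLOCK_SIZE 512
    #define NUM_BLOCKS 128

    enum rec_type { UPDATE, COMMIT };

    struct log_record {
        enum rec_type type;
        int           txn;                 /* transaction id              */
        int           block;               /* block to overwrite (UPDATE) */
        char          data[BLOCK_SIZE];    /* new contents (UPDATE)       */
    };

    static char disk[NUM_BLOCKS][BLOCK_SIZE];

    void recover(const struct log_record *log, int nrecords) {
        for (int i = 0; i < nrecords; i++) {         /* replay in log order  */
            if (log[i].type != UPDATE)
                continue;
            int committed = 0;                       /* did this txn commit? */
            for (int j = i + 1; j < nrecords; j++)
                if (log[j].type == COMMIT && log[j].txn == log[i].txn)
                    committed = 1;
            if (committed)                           /* redo the write       */
                memcpy(disk[log[i].block], log[i].data, BLOCK_SIZE);
        }
    }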

Here are some major topics from the first part of the course...

functions of operating systems
the kernel
user mode versus kernel mode
interrupts
the timer interrupt and preemptive multitasking
threads and processes and how they differ
synchronization issues (race conditions, locks, and condition variables)