CPSC 327 | Data Structures and Algorithms | Spring 2024
There is no redo opportunity for exam 1; however, there will be an optional section on the final exam covering similar material. A better grade on that section will replace the exam 1 grade.
Order all of the functions together, rather than having separate groups for each of the three columns.
Common issues were not fully simplifying (e.g. O($\log n^2$) instead of O($\log n$) — because $\log n^2 = 2 \log n$) or errors in assessing big-Oh (especially with the sums or recurrence relations). Use the tables provided/discussed in class for the sums or recurrence relations; you don't need to know how to solve them for a closed-form solution.
When it comes to ordering functions by big-Oh, remember the most common simple functions and their order: 1, $\log n$, $n$, $n^d$ for $d > 1$, $2^n$, from slowest-growing to fastest-growing. Other things can be slotted in by cancelling common factors. For example, $n \log n$ (or $n \cdot \log n$) fits between $n$ and $n^2$ because $n = n\cdot 1$ and $n^2 = n\cdot n$ and we know from the basic ordering that $\log n$ is between 1 and $n$.
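The slotting argument can be written out in symbols (using $\prec$ informally for "grows strictly slower than"):

```latex
% basic ordering, slowest-growing to fastest-growing:
1 \;\prec\; \log n \;\prec\; n \;\prec\; n^d \;(d > 1) \;\prec\; 2^n
% slotting in n log n by factoring out the common n:
n \cdot 1 \;\prec\; n \cdot \log n \;\prec\; n \cdot n
\quad\Longrightarrow\quad
n \;\prec\; n \log n \;\prec\; n^2
```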
When going from loops to sums, remember that the sum variable is incremented by 1 for each term of the sum --- you can only go directly from a loop
for ( int j = 1 ; j < n ; j = j+2 ) { ... }
to a sum $\sum_{j=1}^{n-1}$ if j is incremented by 1...which it isn't in this loop. Instead, introduce a new summation variable that does increase by 1 for each term and express j in terms of that variable.
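For the loop above, one substitution that works is $j = 2k+1$, so that $k$ increases by 1 per iteration:

```latex
j = 1, 3, 5, \ldots \quad\text{with } j = 2k+1 \text{ and } j < n
\;\Longleftrightarrow\;
k = 0, 1, \ldots, \left\lceil \tfrac{n-1}{2} \right\rceil - 1
% so the loop body executes
\sum_{k=0}^{\lceil (n-1)/2 \rceil - 1} 1
\;=\; \left\lceil \frac{n-1}{2} \right\rceil
\;=\; \Theta(n) \text{ times.}
```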
Be careful not to overcount too much: in (a), the inner loop is O($j$) and $j$ does max out at $n$, but counting the inner loop simply as $\Theta(n)$ risks overcounting, because $j$ is repeatedly doubled: it starts small, and there are relatively few iterations in which $j$ is close to $n$. It is safest to handle nested loops by writing out the sum.
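For instance, if $j$ takes the values $1, 2, 4, \ldots$ up to $n$ (the exact bounds in (a) may differ slightly; this is an illustration), summing the actual inner-loop costs gives:

```latex
\sum_{i=0}^{\lfloor \log_2 n \rfloor} 2^i
\;=\; 2^{\lfloor \log_2 n \rfloor + 1} - 1
\;\le\; 2n - 1
\;=\; \Theta(n)
```

So the total is linear, not the $\Theta(n \log n)$ that treating every pass of the inner loop as $\Theta(n)$ would suggest.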
if statements are a red flag that there may be different best and worst cases. Don't forget to consider both in (b)! Also be careful with nested loops — just because one has different best or worst case behavior doesn't mean the other does too.
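A minimal hypothetical sketch (not the exam's code) of how an if statement can split best and worst cases:

```java
public class BestWorst {
    // Returns the number of basic operations performed.
    static int count(int[] a) {
        int ops = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] % 2 == 0) {
                // this branch costs Theta(n)
                for (int k = 0; k < a.length; k++) {
                    ops++;
                }
            } else {
                // this branch costs Theta(1)
                ops++;
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        int n = 8;
        int[] allOdd = new int[n];
        int[] allEven = new int[n];
        for (int i = 0; i < n; i++) {
            allOdd[i] = 1;
            allEven[i] = 2;
        }
        // best case: condition always false, Theta(n) total
        System.out.println(count(allOdd));   // 8
        // worst case: condition always true, Theta(n^2) total
        System.out.println(count(allEven));  // 64
    }
}
```

Whether the condition is true depends on the data, so the best and worst cases differ; note that the outer loop itself runs exactly $n$ times in both cases.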
In (c), compute is called within a loop — the running time you figure out from the recurrence relation for compute is only the time for that top-level call, which means it is only the time for one iteration of the loop.
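In symbols, with hypothetical placeholders (the actual loop count and recurrence solution come from the exam code): if the loop runs $\Theta(g(n))$ times and each call to compute costs $\Theta(f(n))$, then

```latex
T_{\text{total}}(n)
\;=\; \Theta(g(n)) \cdot \Theta(f(n))
\;=\; \Theta\big(g(n)\, f(n)\big)
```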
Recognizing (d) as insertion sort and being familiar with insertion sort can certainly help you get the right answer, but your explanation of the runtime needs to be based on the code on the page. The question was about the code given, not insertion sort, so you need to explain why that code has the running time it does rather than reciting known behavior of insertion sort.
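For reference, a standard insertion sort (the exam's code may differ in details); the comments show the kind of code-based argument expected:

```java
public class InsertionSortAnalysis {
    static void sort(int[] a) {
        // outer loop runs exactly n-1 times for an array of length n
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // inner loop: 0 iterations when a[j] <= key immediately
            // (already-sorted input, best case: Theta(n) total),
            // up to i iterations when key is smaller than everything
            // before it (reverse-sorted input, worst case:
            // sum of i for i = 1..n-1, which is Theta(n^2) total)
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 1, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a));  // [1, 2, 3, 4, 5]
    }
}
```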
$\Omega$ is a lower bound on the growth rate, not specifically how one expresses the running time for a best case, though the concepts are certainly related: the running time of the best case is, for example, a lower bound on the running time of the algorithm.