CPSC 327 | Data Structures and Algorithms | Spring 2024
Running time for bubble sort.
The explanation of where the running times come from should make reference to the code given.
Explain why the repeat until swapped is false loop repeats at most $n$ times; this requires understanding a bit more about what is really happening in the for loop. Identifying the specific best and worst cases (sorted and reverse sorted, respectively) is helpful because it gives a specific input to reason about.
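For reference, a minimal sketch of a bubble sort with this repeat-until-swapped structure (the Python phrasing and variable names here are illustrative assumptions, not the code from the handout):

    def bubble_sort(a):
        # Repeat full passes until a pass makes no swaps ("repeat until swapped is false").
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):          # one pass of adjacent comparisons
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
        return a

On an already-sorted input the outer loop exits after a single pass with no swaps (best case $\Theta(n)$); on a reverse-sorted input every pass swaps, so about $n$ passes are needed (worst case $\Theta(n^2)$).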
Running time for comb sort.
The same considerations as for bubble sort apply.
For comb sort, the repeat loop has two components: a certain number of repetitions are needed to get gap down to 1 (from $n$), and then there may be additional repetitions beyond that until swapped stays false. The two components have different patterns. Getting gap down to 1 is a definite loop with a different amount of work in each repetition, while the subsequent repetitions (with gap equal to 1) do the same work each time but their number depends on the particular input instance. Figuring out each piece separately may help.
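A comb sort sketch in the same style may help keep the two components separate (the shrink factor $s = 1.3$ and the Python phrasing are illustrative assumptions, not the code from the handout):

    def comb_sort(a, s=1.3):
        # gap starts at n and is divided by s each repetition; once gap reaches 1,
        # this behaves like bubble sort and repeats until no swaps occur.
        n = len(a)
        gap = n
        swapped = True
        while gap > 1 or swapped:
            gap = max(1, int(gap / s))
            swapped = False
            for i in range(n - gap):        # one pass comparing elements gap apart
                if a[i] > a[i + gap]:
                    a[i], a[i + gap] = a[i + gap], a[i]
                    swapped = True
        return a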
gap starts at $n$ and decreases to 1. It is not a sum variable directly, since it doesn't change in equal increments (it is divided by $s$ each iteration), but you can figure out a sum variable by starting to write out the successive values of gap and looking for a pattern. This is also a key step in figuring out how many repetitions it takes to get gap down to 1.
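Writing out the successive values of gap as suggested shows the pattern (with $s$ the constant shrink factor):
$$n,\ \frac{n}{s},\ \frac{n}{s^2},\ \frac{n}{s^3},\ \dots$$
so after $k$ repetitions gap is roughly $n/s^k$, which reaches 1 once $s^k \ge n$, i.e. after about $\log_s n = \log(n)/\log(s) = \Theta(\log n)$ repetitions.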
Running time for matrix multiplication.
Write the recurrence relation for the running time in each part.
Explain where the $a$, $b$, and $f(n)$ values come from in terms of the code.
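As an illustration of the expected form (this assumes the two parts are the straightforward block-recursive algorithm and Strassen's algorithm; check the values against the code actually given), the recurrences look like
$$T(n) = a\,T(n/b) + f(n),$$
where $a$ is the number of recursive multiplications the code makes, $b$ is the factor by which the matrix dimension shrinks in each recursive call, and $f(n)$ is the non-recursive work (splitting into blocks and adding/subtracting blocks). That gives $T(n) = 8\,T(n/2) + \Theta(n^2)$ for the straightforward version and $T(n) = 7\,T(n/2) + \Theta(n^2)$ for Strassen's.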
Simplify numbers — $\log(7)/\log(2)$ is not a familiar-looking value, so it is hard to understand how $O(n^{\log(7)/\log(2)})$ compares to other polynomial running times.
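For example,
$$\frac{\log 7}{\log 2} = \log_2 7 \approx 2.81,$$
so $O(n^{\log(7)/\log(2)}) = O(n^{\log_2 7}) \approx O(n^{2.81})$, which falls strictly between $O(n^2)$ and $O(n^3)$.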