Computer Science homework help
Recurrence Equations
1. (5 pts) Stooge sort has the following algorithm. Assume you are given an input array of n integers.
Recursively sort the lower two-thirds of the array, then recursively sort the upper two-thirds, then recursively sort the lower two-thirds again. The recursion stops when the array consists of two or fewer elements.
If the array size is two, the elements are swapped if necessary. Which of the following recurrence equations describes Stooge sort?
a) T(n) = 3T(n/2) + Θ(n)
b) T(n) = 3T(n/2) + Θ(log n)
c) T(n) = 3T(2n/3) + Θ(1)
d) T(n) = 2T(3n/2) + Θ(1)
e) T(n) = 2T(n/3) + Θ(n)
Your Answer: C
2. Solve the recurrence you chose in the previous question. How would you compare Stooge sort to other standard comparison sorting algorithms?
Your Answer:
The solution to the recurrence T(n) = 3T(2n/3) + Θ(1) is, by the master theorem, T(n) = Θ(n^(log_{3/2} 3)) ≈ Θ(n^2.71). Stooge sort is thus notable for its exceptionally bad time complexity: it is much slower than the standard O(n log n) comparison sorts such as merge sort, and slower even than Bubble sort (Θ(n^2)), the canonical example of a fairly inefficient sort. It is, however, considerably more efficient than Slowsort.
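The three-recursive-calls structure described above can be sketched directly in Java (class and method names are my own; the "two-thirds" segment size rounds up, as n − ⌊n/3⌋, which is what makes the algorithm correct):

```java
import java.util.Arrays;

public class StoogeSort {
    // Sort a[lo..hi] (inclusive) with three overlapping recursive calls.
    static void stoogeSort(int[] a, int lo, int hi) {
        if (hi - lo + 1 <= 2) {            // base case: two or fewer elements
            if (hi > lo && a[lo] > a[hi]) {
                int t = a[lo]; a[lo] = a[hi]; a[hi] = t;
            }
            return;
        }
        int third = (hi - lo + 1) / 3;
        stoogeSort(a, lo, hi - third);     // lower two-thirds
        stoogeSort(a, lo + third, hi);     // upper two-thirds
        stoogeSort(a, lo, hi - third);     // lower two-thirds again
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 8, 1, 9, 3};
        stoogeSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // array in ascending order
    }
}
```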
3. (8 pts) Assume that the recurrence for merge sort is described by:
T(2^0) = 1
T(2^n) = 2T(2^(n−1)) + 2^n
Show how to solve this recurrence for T(2^n).
Your Answer:
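A sketch of one way to unroll the recurrence (here n counts the halvings, so the input size is N = 2^n):

```latex
\begin{align*}
T(2^n) &= 2T(2^{n-1}) + 2^n \\
       &= 2\bigl(2T(2^{n-2}) + 2^{n-1}\bigr) + 2^n
        = 2^2 T(2^{n-2}) + 2\cdot 2^n \\
       &\;\;\vdots \\
       &= 2^n T(2^0) + n\cdot 2^n = (n+1)\,2^n .
\end{align*}
```

Writing N = 2^n, this is T(N) = N(lg N + 1) = Θ(N lg N), the familiar merge sort bound.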
For Loops
1. (8 pts) Here is an implementation of a sorting algorithm.
To analyze the time complexity of the algorithm, express the for
loops as sums and evaluate these sums.
Listing 1: (a sorting algorithm)
public static void Sort(int[] a) {
    int n = a.length;
    for (int i = 0; i < n; i++) {
        for (int j = 1; j < n - i; j++) {
            // swap adjacent out-of-order elements
            if (a[j - 1] > a[j]) {
                swap(a, j - 1, j);
            }
        }
    }
}
Your Answer:
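As a numeric check of the sum in question, one can instrument the loops of Listing 1 with a counter (a sketch; n stands for a.length): the inner loop body runs n − i − 1 times, and summing over i gives n(n − 1)/2 comparisons.

```java
public class BubbleCount {
    // Count the comparisons made by the nested loops in Listing 1.
    static long countComparisons(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < n - i; j++) {
                count++;                 // one comparison a[j-1] > a[j]
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        // Sum_{i=0}^{n-1} (n - 1 - i) = n(n-1)/2
        System.out.println(countComparisons(n) + " == " + (long) n * (n - 1) / 2);
    }
}
```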
Inner Products & Matrix Products
The inner product is a fundamental operation in the study of geometry. The inner product of two vectors a = (a0, . . . , an−1) and
b = (b0, . . . , bn−1) is
(a|b) = a0b0 + · · · + an−1bn−1
The Euclidean length of a vector a is
‖a‖2 = √(a|a)
The cosine of the angle between two vectors a and b is defined to be
(a|b) / (‖a‖2 · ‖b‖2)
1. (2 pts) What is the time complexity of computing an inner product?
Your Answer:
2. (2 pts) What is the time complexity of computing the cosine between two vectors?
Your Answer:
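Both operations can be sketched in Java (class and method names are my own): the inner product does Θ(n) work, and the cosine simply combines three inner products with O(1) extra arithmetic.

```java
public class VectorOps {
    // Inner product (a|b): n multiplications and n-1 additions, so Θ(n).
    static double inner(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // Cosine of the angle: three Θ(n) inner products plus O(1) work.
    static double cosine(double[] a, double[] b) {
        return inner(a, b) / (Math.sqrt(inner(a, a)) * Math.sqrt(inner(b, b)));
    }

    public static void main(String[] args) {
        double[] a = {1, 0};
        double[] b = {0, 1};
        System.out.println(cosine(a, b)); // orthogonal vectors: prints 0.0
    }
}
```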
3. (5 pts) Let A be an n × m matrix and let B be an m × k matrix. The matrix product A × B can be computed as a sequence of inner products: each row of A with each column of B.
What is the time complexity of this matrix product?
Your Answer:
4. (8 pts) Strassen’s matrix multiplication: Suppose that n is even and
n × n matrices A and B are partitioned into n/2 × n/2 matrices

    A = | A11  A12 |    and    B = | B11  B12 |
        | A21  A22 |               | B21  B22 |

Suppose their product is

    C = A × B = | C11  C12 |
                | C21  C22 |

5. Show that
C11 = A11 × B11 + A12 × B21
C12 = A11 × B12 + A12 × B22
C21 = A21 × B11 + A22 × B21
C22 = A21 × B12 + A22 × B22
And explain why
T(n) = 8T(n/2) + cn^2 if n > 1 and T(2) = 8
models the time complexity of this straightforward way to compute the product A × B, and solve this recurrence.
Your Answer:
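A small numeric sanity check of the first block formula, C11 = A11 × B11 + A12 × B21, using ordinary cubic-time multiplication and addition (helper names are my own; the other three blocks follow the same pattern):

```java
public class BlockMultiply {
    // Multiply a p x q matrix by a q x r matrix directly (Θ(pqr) work).
    static double[][] mul(double[][] A, double[][] B) {
        int p = A.length, q = B.length, r = B[0].length;
        double[][] C = new double[p][r];
        for (int i = 0; i < p; i++)
            for (int k = 0; k < q; k++)
                for (int j = 0; j < r; j++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    // Entrywise sum of two matrices of equal shape.
    static double[][] add(double[][] A, double[][] B) {
        int n = A.length, m = A[0].length;
        double[][] C = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                C[i][j] = A[i][j] + B[i][j];
        return C;
    }

    public static void main(String[] args) {
        // 2 x 2 blocks of a 4 x 4 product.
        double[][] A11 = {{1, 2}, {3, 4}}, A12 = {{5, 6}, {7, 8}};
        double[][] B11 = {{1, 0}, {0, 1}}, B21 = {{1, 1}, {1, 1}};

        // C11 = A11 B11 + A12 B21
        double[][] C11 = add(mul(A11, B11), mul(A12, B21));
        System.out.println(C11[0][0]); // (1*1 + 2*0) + (5*1 + 6*1) = 12.0
    }
}
```

Each of the four C blocks needs two half-size multiplications, giving the 8 sub-problems in the recurrence; the cn^2 term covers the matrix additions.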
6. Strassen’s method reduces the matrix multiply sub-problems from 8 to 7 by defining
P = (A11 + A22) × (B11 + B22)
Q = (A21 + A22) × B11
R = A11 × (B12 − B22)
S = A22 × (B21 − B11)
T = (A11 + A12) × B22
U = (A21 − A11) × (B11 + B12)
V = (A12 − A22) × (B21 + B22)
Then, we have
C11 = P + S − T + V;  C12 = R + T;
C21 = Q + S;          C22 = P + R − Q + U;
The recurrence becomes
T(n) = 7T(n/2) + dn^2 if n > 1 and T(2) = 1
What is the big-O time complexity of the solution to this recurrence?
Your Answer:
7. (4 pts) How does this time complexity compare to the standard O(n^3) time cost of matrix multiplication?
Your Answer:
Fast Exponentiation
(10 pts) Consider computing a^b for natural numbers a and b. The binary representation of b and repeated squaring can be used to create a fast exponentiation algorithm. For example, if b = 205 = (11001101)_2, then
a^b = (a^(2^7)) · (a^(2^6)) · (a^(2^3)) · (a^(2^2)) · a
Use this to develop an algorithm that computes a^b. Test and analyze your code and show that its time complexity is O(lg b). Here’s pseudo-code from The University of Illinois Chicago. It uses odd and floor helper functions.
Your Answer:
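The UIC pseudo-code is not reproduced above; as a stand-in, here is a Java sketch built on the same two helpers, where odd(b) becomes the bit test (b & 1) == 1 and floor(b/2) becomes the shift b >> 1. The loop runs once per bit of b, so it performs O(lg b) multiplications.

```java
import java.math.BigInteger;

public class FastPow {
    // Compute a^b by repeated squaring on the binary digits of b.
    static BigInteger power(long a, long b) {
        BigInteger result = BigInteger.ONE;
        BigInteger base = BigInteger.valueOf(a);
        while (b > 0) {
            if ((b & 1) == 1)            // odd(b): this bit contributes a^(2^i)
                result = result.multiply(base);
            base = base.multiply(base);  // square for the next bit position
            b >>= 1;                     // floor(b / 2)
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(2, 10)); // prints 1024
    }
}
```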
Summations & Induction
1. (4 pts) Give an inductive proof that shows the sum of an arithmetic sequence (a, a + d, a + 2d, . . . , a + (n − 1)d) is the number of terms times the average of the first and last term.
Your Answer:
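A quick numeric check of the claimed closed form, n(first + last)/2 (this is not a substitute for the induction, just a sanity test; names are my own):

```java
public class ArithmeticSum {
    // Sum of a, a+d, ..., a+(n-1)d computed term by term.
    static long sumDirect(long a, long d, long n) {
        long s = 0;
        for (long k = 0; k < n; k++) s += a + k * d;
        return s;
    }

    // Closed form: number of terms times the average of first and last term.
    static long sumClosed(long a, long d, long n) {
        long first = a, last = a + (n - 1) * d;
        return n * (first + last) / 2;   // n(first + last) is always even
    }

    public static void main(String[] args) {
        System.out.println(sumDirect(3, 4, 10) + " == " + sumClosed(3, 4, 10));
    }
}
```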
2. (4 pts) Arithmetic-geometric sums blend arithmetic and geometric sums:

∑_{k=0}^{n−1} (a + kd) b r^k

Use mathematical induction to prove

∑_{k=0}^{n} k(1/2)^k = 2^{−n}(2^{n+1} − n − 2)

Your Answer:
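The identity ∑_{k=0}^{n} k(1/2)^k = 2^{−n}(2^{n+1} − n − 2) can be checked numerically before attempting the induction (a sketch, not the requested proof):

```java
public class ArithGeomSum {
    // Left-hand side: sum k * (1/2)^k for k = 0..n, computed directly.
    static double direct(int n) {
        double s = 0.0;
        for (int k = 0; k <= n; k++) s += k * Math.pow(0.5, k);
        return s;
    }

    // Right-hand side: 2^{-n} (2^{n+1} - n - 2).
    static double closed(int n) {
        return (Math.pow(2, n + 1) - n - 2) / Math.pow(2, n);
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(direct(n) + " == " + closed(n));
    }
}
```

For n = 1 both sides are 1/2, which is a natural base case for the induction.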
Asymptotic Order Notation and Short Answers
(12 pts) Decide whether these statements are True or False. You must briefly justify all your answers.
1. If f (n) = Θ(g(n)) and g(n) = Θ(h(n)), then h(n) = Θ( f (n)).
Your Answer:
2. If f (n) = O(g(n)) and g(n) = O(h(n)), then h(n) = Ω( f (n)).
Your Answer:
3. ∑_{k=1}^{n} log(k) = Θ(n log(n)).
Your Answer:
4. P ≠ NP
Your Answer:
5. There is a function f : {0, 1}* → {0, 1} that cannot be computed
by any Turing machine. (The Kleene star {0, 1}* is the set of all strings over 0 and 1.)
Your Answer:
6. Given a Turing machine M and input x, there is a Turing machine M′ that, given (M, x), outputs 1 if M halts on x within 2^|x| steps, and outputs 0 otherwise.
Your Answer:
7. Given a solution to a problem in NP, it can be verified in polynomial time.
Your Answer:
8. Given a solution to a problem in co-NP, it can be verified in polynomial time.
Your Answer:
9. The question “is natural number n prime?” is in co-NP.
Your Answer:
10. NP ⊂ PSPACE
Your Answer:
11. If a problem is in P, then that problem is NP-complete.
Your Answer:
12. The space complexity of an algorithm is not more than its time complexity.
Your Answer:
Greedy Algorithms
1. Multiprocessor scheduling
An instance of the problem: A set T of n tasks, each with a positive integer length l(t), which is the time it takes to complete task t, and m identical processors.
A feasible solution: A schedule for all tasks such that each processor executes one task at a time without interruption and each task is scheduled on only one processor.
Objective function: The finishing time when all tasks are finished.
Optimal solution: A feasible solution with minimal finishing time.
(a) (5 pts) Multiprocessor scheduling is NP-complete. Suppose you are given a deadline by which all tasks must be completed and a schedule of tasks. Give an informal description of why this deadline scheduling problem is NP-complete.
Your Answer:
(b) (5 pts) Analyze this greedy pseudo-code algorithm for multi- processor scheduling. l(t) is the length of time to complete task t.
Listing 2: (greedy multiprocessor scheduling)
{
    Sort the tasks so that l(t1) ≥ l(t2) ≥ · · · ≥ l(tn)
    for (i = 1; i <= m; i++) { T(i) = 0; }   // initial time when processor i is free
    for (j = 1; j <= n; j++) {
        mini = 1;
        for (i = 2; i <= m; i++) {
            if (T(i) < T(mini)) mini = i;    // next free processor
        }
        assign task j to processor mini at time T(mini);
        T(mini) = T(mini) + l(j);
    }
    return FT = max {T(1), . . . , T(m)};
}
Your Answer:
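A runnable Java sketch of the same greedy (longest-task-first) strategy; names are my own. On tasks {5, 4, 3, 3, 3} with 2 processors it returns 10, while the optimal finishing time is 9 ({5, 4} and {3, 3, 3}), showing the greedy schedule need not be optimal.

```java
import java.util.Arrays;

public class GreedySchedule {
    // Greedy multiprocessor scheduling, longest task first.
    // lengths: task lengths l(t); m: number of identical processors.
    // Returns FT = the maximum finishing time over all processors.
    static int schedule(int[] lengths, int m) {
        int[] sorted = lengths.clone();
        Arrays.sort(sorted);                 // ascending; walk it backwards
        int[] free = new int[m];             // time when processor i is next free
        for (int j = sorted.length - 1; j >= 0; j--) {
            int mini = 0;                    // index of the next free processor
            for (int i = 1; i < m; i++)
                if (free[i] < free[mini]) mini = i;
            free[mini] += sorted[j];         // assign task j to processor mini
        }
        int ft = 0;
        for (int i = 0; i < m; i++) ft = Math.max(ft, free[i]);
        return ft;
    }

    public static void main(String[] args) {
        System.out.println(schedule(new int[]{5, 4, 3, 3, 3}, 2)); // prints 10
    }
}
```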
Dynamic Programming
1. (10 pts) Matrix Chain Products. The number of ways to legally parenthesize a product of n + 1 factors a0 · a1 · · · an is given by the Catalan number

C_n = (1/(n + 1)) · C(2n, n)

see Catalan number @ Wikipedia.
Prove this by mathematical induction. (Note this C_n is asymptotically Θ(4^n / n^(3/2)), so considering all ways to parenthesize an expression is impractical.)
Your Answer:
2. (8 pts) Consider the problem of computing a sequence of matrix products
M0 × M1 × · · · × Mn−1
where the number of rows in one matrix equals the number of columns in the next so that all products are well defined.
A feasible solution is any parenthesization. The objective is to find a parenthesization that minimizes the number of multiplications required to compute the product.
Notice the problem can be decomposed into two sub-problems: compute (M0 × · · · × Mi) and (Mi+1 × · · · × Mn−1) for some
i ∈ {0, . . . , n − 2}. Let s1 and s2 be the costs of computing the optimal solutions of these sub-problems, so that the total cost is s1 + s2 + r0·ci·cn−1, where r0 is the number of rows in M0, ci is the number of columns in Mi, Mi+1 has ci rows, and Mn−1 has cn−1 columns.
Describe a dynamic programming algorithm that computes an optimal way to parenthesize a matrix product.
Your Answer:
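One possible sketch of the standard interval dynamic program in Java (names and the dims convention are my own: dims has length n + 1 and matrix M_i is dims[i] × dims[i+1]). Each interval [i, j] is solved by trying every split point k, exactly the decomposition described above; the table has Θ(n^2) entries and each costs O(n), giving O(n^3) total.

```java
public class MatrixChain {
    // Minimum number of scalar multiplications to compute M_0 x ... x M_{n-1}.
    static long minMultiplications(int[] dims) {
        int n = dims.length - 1;
        long[][] cost = new long[n][n];       // cost[i][j]: best for M_i..M_j
        for (int len = 2; len <= n; len++) {  // chain length
            for (int i = 0; i + len - 1 < n; i++) {
                int j = i + len - 1;
                cost[i][j] = Long.MAX_VALUE;
                for (int k = i; k < j; k++) { // split: (M_i..M_k)(M_{k+1}..M_j)
                    long c = cost[i][k] + cost[k + 1][j]
                           + (long) dims[i] * dims[k + 1] * dims[j + 1];
                    if (c < cost[i][j]) cost[i][j] = c;
                }
            }
        }
        return cost[0][n - 1];
    }

    public static void main(String[] args) {
        // M0: 10x30, M1: 30x5, M2: 5x60; best is (M0 M1) M2 = 1500 + 3000.
        System.out.println(minMultiplications(new int[]{10, 30, 5, 60})); // 4500
    }
}
```

Recording the best k for each interval in a second table recovers the optimal parenthesization itself.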