Fractional Knapsack Problem

Introduction

In the fractional knapsack problem, the setup is the same as in the 0-1 knapsack problem, but the thief can take fractions of items rather than having to make a binary (0, 1) choice for each item. You can think of an item in the 0-1 knapsack problem as being like a gold ingot and an item in the fractional knapsack problem as more like gold dust. [1]

We can solve the fractional knapsack problem by a greedy strategy. To solve the fractional problem, we first compute the value per pound, $v_i/w_i$ for each item. Obeying a greedy strategy, the thief begins by taking as much as possible of the item with the greatest value per pound. If the supply of that item is exhausted, and he can still carry more, he takes as much as possible of the item with the next greatest value per pound, and so forth. [1]

Now let’s turn our attention to an exercise in the CLRS book [1].

Exercise 16.2-6

Show how to solve the fractional knapsack problem in $O(n)$ time. [1]

By sorting the items by value per pound, the greedy algorithm runs in $O(n \log_2 n)$ time. In fact, any comparison sort takes $\Omega(n \log_2 n)$ time in the worst case. Thus, it’s a bit tricky to come up with an approach that solves the fractional knapsack problem in $O(n)$ time.
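For contrast, here is a minimal Java sketch of the sort-based greedy described above; the class name and parameter names are illustrative choices of mine, not from the book. It orders the items by value per pound, densest first, and then fills the sack greedily.

```java
import java.util.Arrays;
import java.util.Comparator;

public class GreedyFractionalKnapsack {

    /** O(n log n) greedy: sort by value per pound, then fill the sack. */
    public static double solve(double[] v, double[] w, double c) {
        int n = v.length;
        Integer[] idx = new Integer[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        // Sort item indices by value density, highest first.
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> -v[i] / w[i]));
        double worth = 0, remaining = c;
        for (int i : idx) {
            if (remaining <= 0) break;
            double take = Math.min(w[i], remaining); // take as much as fits
            worth += take / w[i] * v[i];
            remaining -= take;
        }
        return worth;
    }
}
```

The sort dominates the running time, which is why the exercise asks for something better.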

Fractional Knapsack: Approach

We partition the array at some point and compute the sum of the weights and values of all items that have a lower value density than the partition point. Given that we know the sum of all weights and values, we can derive the corresponding sums for the items with higher value density. Let $q$ be the partition point. If the total weight of the highest-density items plus $w[q]$ is smaller than the knapsack capacity, we take all of them, plus the item at $q$, and move to the left half. Otherwise, if the weight of the highest-density items alone is larger than the capacity, we move towards the right side of the array. Note that when we move to the left or right, we need to carefully update our current state. If the sum of the weights of the highest-density items is smaller than the knapsack capacity but that weight plus $w[q]$ is greater than or equal to the capacity, then we are done: we take a fraction of the item at $q$. If $r \lt p$, where $p$ and $r$ represent the starting and ending positions respectively, we have also found the answer. These two are the base cases of our recursion. Here’s the algorithm to solve the fractional knapsack problem.

**Input:** value and weight arrays $v$ and $w$, and the knapsack capacity $c$  
**Output:** maximum attainable worth  
$FRACTIONAL-KNAPSACK(w, v, c)$    
$\phantom{{}++{}}$ $n \gets w.length$   
$\phantom{{}++{}}$ let $d$ be an array of length $n$  
$\phantom{{}++{}}$ `for` $i = 1$ to $n$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$  $d[i] \gets \frac{v[i]}{w[i]}$  
$\phantom{{}++{}}$ `return` $FRACTIONAL-KNAPSACK(w, v, c, 1, n, d)$



**Input:** value and weight arrays $v$ and $w$, the knapsack capacity $c$, the starting and ending positions $p$ and $r$, and the value density array $d$.  
**Output:** maximum attainable worth  
$FRACTIONAL-KNAPSACK(w, v, c, p, r, d)$   
$\phantom{{}++{}}$ `if` $r \lt p$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ `return` $0$  
$\phantom{{}++{}}$ $k \gets RANDOMIZED-PARTITION(w, v, d, p, r)$  
$\phantom{{}++{}}$ $q \gets k[1]$  
$\phantom{{}++{}}$ `if` $c \lt k[2]$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ `return` $FRACTIONAL-KNAPSACK(w, v, c, q + 1, r, d)$  
$\phantom{{}++{}}$  `else if` $k[2] + w[q] \lt c$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ `return` $k[3] + v[q] + FRACTIONAL-KNAPSACK(w, v, c - k[2] - w[q], p, q - 1, d)$  
$\phantom{{}++{}}$  `else`  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ `return` $k[3] + \frac{(c - k[2])} { w[q]} \times v[q]$



**Input:** weight and value arrays $w$ and $v$, the starting and ending positions $p$ and $r$, and the value density array $d$.  
**Output:** array containing the pivot position, and the sums of the weights and values of the items that lie to the right of the pivot position.  
$PARTITION(w, v, d, p, r)$   
$\phantom{{}++{}}$ $x \gets d[r]$  
$\phantom{{}++{}}$ $i \gets p - 1$  
$\phantom{{}++{}}$ $lw \gets 0$  
$\phantom{{}++{}}$ $lv \gets 0$  
$\phantom{{}++{}}$ $wSum \gets 0$  
$\phantom{{}++{}}$ $vSum \gets 0$  
$\phantom{{}++{}}$ `for` $j = p$ to $r - 1$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $wSum \gets wSum + w[j]$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $vSum \gets vSum + v[j]$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ `if` $d[j] \le x$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $\phantom{{}++{}}$ $i \gets i + 1$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $\phantom{{}++{}}$ $lw \gets lw + w[j]$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $\phantom{{}++{}}$ $lv \gets lv + v[j]$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $\phantom{{}++{}}$ exchange $d[i]$ with $d[j]$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $\phantom{{}++{}}$ exchange $w[i]$ with $w[j]$  
$\phantom{{}++{}}$ $\phantom{{}++{}}$ $\phantom{{}++{}}$ exchange $v[i]$ with $v[j]$  
$\phantom{{}++{}}$ exchange $d[i + 1]$ with $d[r]$  
$\phantom{{}++{}}$ exchange $w[i + 1]$ with $w[r]$  
$\phantom{{}++{}}$ exchange $v[i + 1]$ with $v[r]$  
$\phantom{{}++{}}$ `return` $\{i + 1, wSum - lw, vSum - lv\}$  



**Input:** weight and value arrays $w$ and $v$, the starting and ending positions $p$ and $r$, and the value density array $d$.  
**Output:** array containing the pivot position, and the sums of the weights and values of the items that lie to the right of the pivot position.  
$RANDOMIZED-PARTITION(w, v, d, p, r)$   
$\phantom{{}++{}}$ $i \gets RANDOM(p, r)$  
$\phantom{{}++{}}$ exchange $d[r]$ with $d[i]$  
$\phantom{{}++{}}$ exchange $w[r]$ with $w[i]$  
$\phantom{{}++{}}$ exchange $v[r]$ with $v[i]$  
$\phantom{{}++{}}$ `return` $PARTITION(w, v, d, p, r)$


Also note that our algorithm always keeps the rightmost items, i.e. the items with the highest value density. When we move to the left side of the array, the pivot and everything to its right already fit in the knapsack, so we take their weight into account and deduct it from the remaining capacity. Moreover, we take the worth of these rightmost items and add it to our final result. This is the greedy-choice property we use here: we always take the locally optimal solution, assuming that it yields the globally optimal solution. Fortunately, in this case our assumption is correct. When we move to the right end of the array, the answer lies entirely among the highest-density items, so we simply discard the rest and recurse.

The difficult part is that, for greedy algorithms, you have to work much harder to establish correctness. Even when an algorithm is correct, it is hard to prove why; proving that a greedy algorithm is correct is more of an art than a science, and it involves a lot of creativity. Note that most greedy algorithms are not correct. Exercise 16.2-1 in the CLRS book [1] asks you to prove that the fractional knapsack problem has the greedy-choice property; I’ll leave it for you to answer that question.

Sample Implementation

Here’s a sample implementation of the algorithm that solves the fractional knapsack problem in expected linear time.
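What follows is a minimal Java sketch of the pseudocode above, assuming double-valued weights and values; the class and helper names are my own choices. Note that the arrays are zero-indexed, so the positions (and the tuple returned by the partition routine) shift by one relative to the CLRS-style pseudocode.

```java
import java.util.Random;

public class FractionalKnapsack {

    private static final Random RANDOM = new Random();

    /** Returns the maximum attainable worth for knapsack capacity c. */
    public static double solve(double[] w, double[] v, double c) {
        int n = w.length;
        double[] d = new double[n];
        for (int i = 0; i < n; i++) {
            d[i] = v[i] / w[i]; // value density of item i
        }
        return solve(w, v, c, 0, n - 1, d);
    }

    private static double solve(double[] w, double[] v, double c,
                                int p, int r, double[] d) {
        if (r < p) { // base case: no items left
            return 0;
        }
        double[] k = randomizedPartition(w, v, d, p, r);
        int q = (int) k[0];        // pivot position
        double rightWeight = k[1]; // total weight of items denser than the pivot
        double rightValue = k[2];  // total value of items denser than the pivot
        if (c < rightWeight) {
            // Even the items denser than the pivot overflow the sack:
            // the answer lies entirely in the right half.
            return solve(w, v, c, q + 1, r, d);
        } else if (rightWeight + w[q] < c) {
            // The denser items and the whole pivot fit: take them and
            // fill the remaining capacity from the left half.
            return rightValue + v[q]
                    + solve(w, v, c - rightWeight - w[q], p, q - 1, d);
        } else {
            // The pivot item straddles the capacity: take a fraction of it.
            return rightValue + (c - rightWeight) / w[q] * v[q];
        }
    }

    private static double[] randomizedPartition(double[] w, double[] v,
                                                double[] d, int p, int r) {
        int i = p + RANDOM.nextInt(r - p + 1); // pick a random pivot
        swap(d, r, i); swap(w, r, i); swap(v, r, i);
        return partition(w, v, d, p, r);
    }

    private static double[] partition(double[] w, double[] v, double[] d,
                                      int p, int r) {
        double x = d[r]; // pivot density
        int i = p - 1;
        double lw = 0, lv = 0, wSum = 0, vSum = 0;
        for (int j = p; j < r; j++) {
            wSum += w[j];
            vSum += v[j];
            if (d[j] <= x) { // lower density: move to the left side
                i++;
                lw += w[j];
                lv += v[j];
                swap(d, i, j); swap(w, i, j); swap(v, i, j);
            }
        }
        swap(d, i + 1, r); swap(w, i + 1, r); swap(v, i + 1, r);
        // pivot index, then weight and value of the items right of the pivot
        return new double[] { i + 1, wSum - lw, vSum - lv };
    }

    private static void swap(double[] a, int i, int j) {
        double t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        double[] w = { 10, 20, 30 };
        double[] v = { 60, 100, 120 };
        // Classic instance: capacity 50 yields 60 + 100 + (20/30) * 120 = 240.
        System.out.println(solve(w, v, 50)); // prints 240.0
    }
}
```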

Analysis

Note that the algorithm has a linear expected running time; because it is randomized, no particular input elicits the worst-case behavior. Each call does work linear in the size of its subarray and, in expectation, recurses on half of the items, and we can solve the problem for a single item in constant time. The recurrence for the expected runtime is therefore $T(n) = T(n/2) + cn$ with $T(1) = d$ for some constants $c$ and $d$.
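To see where the runtime bound comes from, we can unroll the recursion (a quick sketch, with $c$ and $d$ as above):

$$T(n) = c \cdot n + T\left(\frac{n}{2}\right) = c \cdot n + c \cdot \frac{n}{2} + T\left(\frac{n}{4}\right) = \ldots = c \cdot \left(n + \frac{n}{2} + \frac{n}{4} + \ldots\right) + d$$

Up to the constant factor $c$, the runtime is therefore governed by the geometric series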

$n + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \ldots$

Rewriting the equation, we then obtain:
$$n + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \ldots = n \cdot (1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots)$$

And, we know that, for $|x| \lt 1$,

$$1 + x + x^{2} + x^{3} + \ldots = \sum_{k=0}^{\infty} x^{k} = \frac{1}{1-x}$$

In our geometric series, $x = \frac{1}{2}$, for which the series converges to $\frac{1}{1 - \frac{1}{2}} = 2$, and thus,

$$n + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \ldots = 2 \cdot n$$

Coming back to the recurrence, we then obtain:
$$T(n) = c \cdot \left(n + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \ldots\right) + d = 2 \cdot c \cdot n + d = O(n)$$

Since our O-notation subsumes constant factors, this solves to $O(n)$, and that concludes our proof.
A more detailed and involved proof can be found in the CLRS book [1], and the interested reader is directed there. In fact, the book uses a slightly different approach from the one I have given here.

References

[1] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. *Introduction to Algorithms*, 3rd edition. MIT Press, 2009. https://www.amazon.com/Introduction-Algorithms-3rd-MIT-Press/dp/0262033844
