# Algorithm for product of n numbers

I have a question about the running time of an algorithm that computes the product of n numbers. I think the best solution is divide and conquer, which recursively halves the n elements and multiplies pairs. The confusing part is the number of simple operations. With divide and conquer the complexity should be O(log n), so if we have 8 numbers to multiply we should end up with 3 basic steps. E.g. with 8 numbers (a1 a2 a3 a4 a5 a6 a7 a8), we can halve until we reach pairs and multiply them: (a1\*a2=b1) (a3\*a4=b2) (a5\*a6=b3) (a7\*a8=b4), then (b1\*b2=c1) (b3\*b4=c2), then (c1\*c2 = final result). However, this takes 7 simple multiplications, not 3. Can someone clarify this for me?
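A small sketch (Python; the helper name `product` is my own) makes the multiplication tree described above explicit and counts the operations it performs:

```python
def product(nums):
    """Multiply nums by recursive halving; also count the multiplications."""
    if len(nums) == 1:
        return nums[0], 0  # a single number needs no multiplication
    mid = len(nums) // 2
    left, left_count = product(nums[:mid])
    right, right_count = product(nums[mid:])
    # one extra multiplication to combine the two halves
    return left * right, left_count + right_count + 1

result, count = product([1, 2, 3, 4, 5, 6, 7, 8])
print(result, count)  # 40320 7 -- still n - 1 = 7 multiplications
```

The recursion tree has depth log n, but the *total* number of multiplications across all levels is still n - 1.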

Divide and conquer is for cases where you can divide the original set into multiple subsets which, once identified and created, no longer interact (or interact only in a way that is negligibly cheap compared to the operation on each subset). In your case, you're violating the "subsets do not interact after identification" rule.

Also, O(f(n)) does not mean the number of operations is less than f(n). It means there is a finite constant d such that, for all sufficiently large data sets of size n, the number of operations needed is smaller than d\*f(n). (My native language is German; I hope I didn't change the meaning when translating.) So the fact that you need 7 operations on 8 data items does not, per se, mean the complexity is larger than O(log n).

If I understand your objective correctly, the complexity should be O(n): you can just multiply the n values in sequence. For n values, you need n-1 multiplications. No need for divide and conquer.
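For illustration, a minimal sequential version (Python; the name `product_seq` is mine):

```python
def product_seq(nums):
    # One pass over the list: n - 1 multiplications, O(n) time overall.
    result = nums[0]
    for x in nums[1:]:
        result *= x
    return result

print(product_seq([1, 2, 3, 4, 5, 6, 7, 8]))  # 40320
```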

Whichever way you choose to perform the multiplication of `n` numbers, you will need to multiply all of them, which takes `n-1` multiplications. Thus, the time complexity of this multiplication is always `O(n)`.

Here is how I can explain that the number of steps required to multiply `n` numbers with your approach is close to `n`, not `log(n)`. To multiply `n` numbers you first make `n/2` multiplications: the first number with the second, the third with the fourth, and so on, up to the `(n-1)`-th with the `n`-th number. After that you have `n/2` numbers, and multiplying them all requires `n/4` multiplications: the first result of the previous stage with the second, the third with the fourth, and so on. After that you have `n/4` numbers and make `n/8` multiplications. The process ends when only two numbers are left; their product is the result.

Now let's count the total number of multiplications. On the first stage you made `n/2` multiplications, on the second `n/4`, and so on, down to a single multiplication. That gives the sum `n/2 + n/4 + n/8 + ... + 1`. For `n` a power of two, this geometric sum equals `n - 1`, so you still perform `n - 1` multiplications in total.
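You can check the stage counts with a short sketch (Python; assumes `n` is a power of two, as in the example above; the function name is mine):

```python
def multiplications_per_stage(n):
    # Number of multiplications at each halving stage: n/2, n/4, ..., 1.
    # Assumes n is a power of two.
    stages = []
    while n > 1:
        n //= 2
        stages.append(n)
    return stages

stages = multiplications_per_stage(8)
print(stages, sum(stages))  # [4, 2, 1] 7 -- i.e. n - 1 in total
```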