
Sum of (maximum element - minimum element) for all the subsets of an array.

Last Updated : 11 Aug, 2021

Given an array arr[], the task is to compute the sum of (max{A} - min{A}) for every non-empty subset A of the array arr[].
Examples: 
 

Input: arr[] = { 4, 7 } 
Output: 3
There are three non-empty subsets: { 4 }, { 7 } and { 4, 7 }. 
max({4}) - min({4}) = 0 
max({7}) - min({7}) = 0 
max({4, 7}) - min({4, 7}) = 7 - 4 = 3.
Sum = 0 + 0 + 3 = 3
Input: arr[] = { 4, 3, 1 } 
Output: 9
There are seven non-empty subsets: { 4 }, { 3 }, { 1 }, { 4, 3 }, { 4, 1 }, { 3, 1 } and { 4, 3, 1 }. 
Their (max - min) values are 0, 0, 0, 1, 3, 2 and 3 respectively. 
Sum = 0 + 0 + 0 + 1 + 3 + 2 + 3 = 9
A naive solution is to generate all subsets, traverse each subset to find its maximum and minimum elements, and add their difference to a running sum. The time complexity of this solution is O(n * 2^n). A short sketch of this idea is given below.
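For concreteness, here is a minimal C++ sketch of this brute-force idea (the function name subsetDiffSumNaive and the driver array { 4, 3, 1 } are chosen purely for illustration): every non-empty subset is encoded as a bitmask, scanned for its maximum and minimum, and the difference is accumulated.

// Naive O(n * 2^n) approach: enumerate every non-empty subset as a bitmask,
// scan it for its maximum and minimum, and accumulate the difference.
#include <bits/stdc++.h>
using namespace std;

long long subsetDiffSumNaive(const vector<long long>& arr)
{
    int n = arr.size();
    long long sum = 0;
    for (int mask = 1; mask < (1 << n); mask++) {      // every non-empty subset
        long long mx = LLONG_MIN, mn = LLONG_MAX;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) {
                mx = max(mx, arr[i]);
                mn = min(mn, arr[i]);
            }
        }
        sum += mx - mn;
    }
    return sum;
}

int main()
{
    vector<long long> arr = { 4, 3, 1 };
    cout << subsetDiffSumNaive(arr) << endl;           // prints 9
    return 0;
}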
An efficient solution is based on a simple observation, illustrated by the example below.
For example, A = { 4, 3, 1 } 
Let the value contributed to the sum by each subset be V.
Subsets with max, min and V values: 
{ 4 }, max = 4, min = 4 (V = 4 - 4) 
{ 3 }, max = 3, min = 3 (V = 3 - 3) 
{ 1 }, max = 1, min = 1 (V = 1 - 1) 
{ 4, 3 }, max = 4, min = 3 (V = 4 - 3) 
{ 4, 1 }, max = 4, min = 1 (V = 4 - 1) 
{ 3, 1 }, max = 3, min = 1 (V = 3 - 1) 
{ 4, 3, 1 }, max = 4, min = 1 (V = 4 - 1)
Sum of all V values 
= (4 - 4) + (3 - 3) + (1 - 1) + (4 - 3) + (4 - 1) + (3 - 1) + (4 - 1) 
= 0 + 0 + 0 + (4 - 3) + (4 - 1) + (3 - 1) + (4 - 1) 
= (4 - 3) + (4 - 1) + (3 - 1) + (4 - 1)
The first three V values can be ignored since they evaluate to 0 
(they come from the single-element subsets).
Rearranging the sum, we get:
= (4 - 3) + (4 - 1) + (3 - 1) + (4 - 1) 
= (1 * 0 - 1 * 3) + (3 * 1 - 3 * 1) + (4 * 3 - 4 * 0) 
= (1 * A - 1 * B) + (3 * C - 3 * D) + (4 * E - 4 * F)
where A = 0, B = 3, C = 1, D = 1, E = 3 and F = 0
Looking closely at the expression, instead of analyzing every subset we analyze every element: in how many subsets does it occur as the maximum element, and in how many as the minimum element?
A = 0 implies that 1 doesn't occur as a maximum element in any of the subsets. 
B = 3 implies that 1 occurs as a minimum element in 3 subsets. 
C = 1 implies that 3 occurs as a maximum element in 1 subset. 
D = 1 implies that 3 occurs as a minimum element in 1 subset. 
E = 3 implies that 4 occurs as a maximum element in 3 subsets. 
F = 0 implies that 4 doesn't occur as a minimum element in any of the subsets.


If, for every element, we know the number of subsets in which it occurs as the maximum element and the number in which it occurs as the minimum element, then we can solve the problem in linear time, since the computation above is linear in the number of elements.
Let A = { 6, 3, 89, 21, 4, 2, 7, 9 } 
sorted(A) = { 2, 3, 4, 6, 7, 9, 21, 89 }
For example, consider the element 6. In the sorted array, 3 elements are smaller than 6 and 4 elements are larger than 6. Every subset formed by taking 6 together with any combination of the 3 smaller elements has 6 as its maximum element; there are 2^3 such subsets (including the single-element subset { 6 }). A similar argument holds for 6 being the minimum element when it occurs with any combination of the 4 elements greater than 6, giving 2^4 subsets. 
 

Hence, 
Number of occurrences of an element as the maximum over all subsets = 2^pos - 1 
Number of occurrences of an element as the minimum over all subsets = 2^(n - 1 - pos) - 1
where pos is the 0-based index of the element in the sorted array of size n; the "- 1" simply drops the single-element subset, whose contribution is zero.
The answer is therefore the sum, over all positions of the sorted array, of arr[pos] * (2^pos - 2^(n - 1 - pos)).
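As a quick sanity check (this snippet is only illustrative and is not part of the final solution), a brute-force count over the subsets of the example array confirms the formulas for the element 6, which sits at pos = 3 in the sorted array of size n = 8:

// Brute-force check of the counting formulas for one element:
// in how many subsets with at least two elements is `val` the maximum,
// and in how many is it the minimum?
#include <bits/stdc++.h>
using namespace std;

int main()
{
    vector<int> arr = { 6, 3, 89, 21, 4, 2, 7, 9 };
    int n = arr.size(), val = 6;
    long long asMax = 0, asMin = 0;

    for (int mask = 1; mask < (1 << n); mask++) {
        if (__builtin_popcount(mask) < 2)
            continue;                        // single-element subsets contribute 0
        int mx = INT_MIN, mn = INT_MAX;
        bool contains = false;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) {
                mx = max(mx, arr[i]);
                mn = min(mn, arr[i]);
                contains |= (arr[i] == val);
            }
        }
        if (!contains)
            continue;
        if (mx == val) asMax++;
        if (mn == val) asMin++;
    }

    // val = 6 is at index pos = 3 of the sorted array and n = 8, so the
    // formulas predict 2^3 - 1 = 7 and 2^(8 - 1 - 3) - 1 = 15.
    cout << asMax << " " << asMin << endl;   // prints 7 15
    return 0;
}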


Below is the implementation of the above approach.
 

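The following is a minimal C++ sketch of this approach: sort the array, then for every position pos add arr[pos] * (2^pos - 2^(n - 1 - pos)) to the answer. The driver array { 4, 3, 1 } from the second example and 64-bit arithmetic for the sum are assumptions of this sketch.

C++

// C++ program for the sum of (maximum element - minimum element)
// over all non-empty subsets of an array.
#include <bits/stdc++.h>
using namespace std;

// Returns the sum of (max(A) - min(A)) over all non-empty subsets A of arr[]
long long sumMaxMinDiff(vector<long long> arr)
{
    int n = arr.size();
    sort(arr.begin(), arr.end());

    long long sum = 0;
    for (int pos = 0; pos < n; pos++) {
        // arr[pos] is the maximum of 2^pos - 1 multi-element subsets
        // and the minimum of 2^(n - 1 - pos) - 1 multi-element subsets.
        long long asMax = (1LL << pos) - 1;
        long long asMin = (1LL << (n - 1 - pos)) - 1;
        sum += arr[pos] * (asMax - asMin);
    }
    return sum;
}

int main()
{
    vector<long long> arr = { 4, 3, 1 };
    cout << sumMaxMinDiff(arr) << endl;   // prints 9
    return 0;
}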

Output: 
9

 

Time Complexity: O(N * log(N)) 
Auxiliary Space: O(1)

