DSA Project
### Plan:
We will implement three different methods and compare their time complexities:
1. Brute force approach
2. Hash map approach
3. Sorting-based approach
### 1. Brute Force Approach
Let's start with the brute force approach. This method uses a nested loop to count the frequency of each element. For example, in the sample list {7, 4, 5, 9, 5, 8, 3, 3, 5}, the value 5 appears three times and 3 appears twice, so the most frequent duplicate is 5.
#### Algorithm:
1. For each element in the list, count how many times it appears in the list.
2. Track the element with the highest frequency.
3. Return the most frequent duplicate element.
#### Code (in C++):
```cpp
#include <iostream>
#include <vector>
using namespace std;
// Find the most frequent duplicate by counting each element with nested loops
int bruteForceMostFrequentDuplicate(const vector<int>& nums) {
    int mostFrequent = -1;  // -1 means no duplicate was found
    int maxFrequency = 1;   // a count must exceed 1 to be a duplicate
    for (size_t i = 0; i < nums.size(); i++) {
        int count = 0;
        for (size_t j = 0; j < nums.size(); j++)
            if (nums[j] == nums[i]) count++;
        if (count > maxFrequency) {
            maxFrequency = count;
            mostFrequent = nums[i];
        }
    }
    return mostFrequent;
}
int main() {
    vector<int> nums = {7, 4, 5, 9, 5, 8, 3, 3, 5};
    cout << "Brute Force Result: " << bruteForceMostFrequentDuplicate(nums) << endl;
    return 0;
}
```
---
### 2. Hash Map Approach
This approach uses a hash map (or dictionary) to store the frequency of each element. It is more efficient than the brute force approach because each element is visited only a constant number of times.
#### Algorithm:
1. Use a hash map to store the frequencies of each element in the list.
2. Iterate through the hash map to find the element with the highest frequency.
3. Return the most frequent duplicate element.
#### Code (in C++):
```cpp
#include <iostream>
#include <unordered_map>
#include <vector>
using namespace std;
// Find the most frequent duplicate by counting occurrences in a hash map
int hashMapMostFrequentDuplicate(const vector<int>& nums) {
    unordered_map<int, int> freq;
    for (int x : nums) freq[x]++;   // one pass to count every element
    int mostFrequent = -1;          // -1 means no duplicate was found
    int maxFrequency = 1;           // a count must exceed 1 to be a duplicate
    for (const auto& [value, count] : freq) {
        if (count > maxFrequency) {
            maxFrequency = count;
            mostFrequent = value;
        }
    }
    return mostFrequent;
}
int main() {
    vector<int> nums = {7, 4, 5, 9, 5, 8, 3, 3, 5};
    cout << "Hash Map Result: " << hashMapMostFrequentDuplicate(nums) << endl;
    return 0;
}
```
---
### 3. Sorting-Based Approach
This method sorts the list first, so that equal elements become adjacent, and then counts the frequency of each element in a single pass over the sorted list.
#### Algorithm:
1. Sort the list.
2. Traverse the sorted list and count the frequency of each element.
3. Keep track of the element with the highest frequency and return it.
#### Code (in C++):
```cpp
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
// Function to find the most frequent duplicate using sorting
int sortingMostFrequentDuplicate(vector<int>& nums) {
    sort(nums.begin(), nums.end());  // equal values are now adjacent
    int mostFrequent = -1;           // -1 means no duplicate was found
    int maxFrequency = 1;            // a count must exceed 1 to be a duplicate
    int count = 1;                   // length of the current run of equal values
    for (size_t i = 1; i < nums.size(); i++) {
        count = (nums[i] == nums[i - 1]) ? count + 1 : 1;
        if (count > maxFrequency) {
            maxFrequency = count;
            mostFrequent = nums[i];
        }
    }
    return mostFrequent;
}
int main() {
    vector<int> nums = {7, 4, 5, 9, 5, 8, 3, 3, 5};
    cout << "Sorting Result: " << sortingMostFrequentDuplicate(nums) << endl;
    return 0;
}
```
---
### Comparison:
1. **Brute Force Approach**:
   - Time Complexity: \(O(n^2)\)
   - Simple to implement, but the nested loops make it slow on large lists.
2. **Hash Map Approach**:
   - Time Complexity: \(O(n)\) on average
   - The most efficient of the three methods.
3. **Sorting-Based Approach**:
   - Time Complexity: \(O(n \log n)\)
   - Slightly less efficient than the hash map approach, but still better than brute force.
You can design test cases with different input sizes and distributions of duplicates to evaluate the performance of each method (a timing sketch follows the list below). For example, you might want to test:
- Lists with no duplicates.
- Lists with many duplicates.
- Lists with varying sizes (e.g., 10 elements, 100 elements, 100,000 elements).
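To run such a comparison, a minimal timing harness like the one below can be used. It is a sketch, not part of the required project code: it assumes the three functions above are collected into a single file (with their individual `main` functions removed), and the input sizes, value range, and fixed seed are illustrative choices.
```cpp
#include <chrono>
#include <iostream>
#include <random>
#include <vector>
using namespace std;
// Assumes bruteForceMostFrequentDuplicate, hashMapMostFrequentDuplicate, and
// sortingMostFrequentDuplicate (defined earlier) are visible in this file.
template <typename F>
double timeMs(F f, vector<int> nums) {  // take a copy so sorting cannot affect later runs
    auto start = chrono::steady_clock::now();
    f(nums);
    auto end = chrono::steady_clock::now();
    return chrono::duration<double, milli>(end - start).count();
}
int main() {
    mt19937 rng(42);                    // fixed seed so runs are reproducible
    for (int n : {10, 100, 100000}) {   // the sizes suggested above
        uniform_int_distribution<int> dist(0, n / 2);  // small range -> plenty of duplicates
        vector<int> nums(n);
        for (int& x : nums) x = dist(rng);
        // Note: the brute force run at n = 100000 is O(n^2) and takes a long time.
        cout << "n = " << n
             << "  brute force: " << timeMs(bruteForceMostFrequentDuplicate, nums) << " ms"
             << "  hash map: "    << timeMs(hashMapMostFrequentDuplicate, nums)    << " ms"
             << "  sorting: "     << timeMs(sortingMostFrequentDuplicate, nums)    << " ms\n";
    }
    return 0;
}
```
For lists with no duplicates, all three functions return -1, so that case checks correctness as well as speed.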
Don't forget to submit the required materials (code, screenshots, and final report) according to the project instructions.
```cpp
/*****************************************
 * (This comment block is added by the Judge System)
 * Submission ID: 301768
 * Submitted at: 2024-11-13 23:08:29
 *
 * User ID: 2556
 * Username: 58499294
 * Problem ID: 897
 * Problem Name: Find The Most Frequent Duplicate
 */
#include <iostream>
#include <vector>
#include <sstream>
using namespace std;
int main() {
    string line;
    // First attempt: only the input variable is declared here;
    // the rest of the solution was still missing at this point.
    return 0;
}
```
```cpp
/*****************************************
 * (This comment block is added by the Judge System)
 * Submission ID: 301771
 * Submitted at: 2024-11-13 23:18:50
 *
 * User ID: 2556
 * Username: 58499294
 * Problem ID: 897
 * Problem Name: Find The Most Frequent Duplicate
 */
#include <iostream>
#include <vector>
#include <sstream>
#include <unordered_map>
using namespace std;
int main() {
    string line;
    getline(cin, line);            // read all numbers from a single input line
    istringstream iss(line);
    vector<int> arr;               // arr was used but never declared in the submission
    int x;
    while (iss >> x) arr.push_back(x);
    unordered_map<int, int> num;
    int mostfreq = -1;             // -1 is printed when no duplicate exists
    int max_frequency = 0;
    for (int i = 0; i < (int)arr.size(); i++) {
        num[arr[i]]++;             // count the occurrences of each value
    }
    for (const auto& p : num) {    // pick the duplicate with the highest count
        if (p.second > 1 && p.second > max_frequency) {
            max_frequency = p.second;
            mostfreq = p.first;
        }
    }
    cout << mostfreq << endl;
    return 0;
}
```