
I'm looking for a lower-time-complexity approach to finding the indices of duplicate elements in an array in C. E.g. data array = {1, 2, 3, 4, 5, 3}, result: 2, 5. Assume at most two elements in the array can share a value, never more.

I have thought about the brute-force method, which has a high time complexity of O(n²) and a space complexity equal to the size of the array itself (create another array, say x, filled with 0s and of the same size as the data array). I would compare each element of the data array with every other element; when a duplicate is found, I would mark the indices of that duplicate pair with the value 1 in array x (the next pair found would get the value 2, and so on). In the end, elements of x holding the same value would tell me which indices share a value. Can someone guide me to a simpler approach that finds which indices hold duplicates, and a way to store them?
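For reference, here is a rough sketch of the brute-force comparison I had in mind. It assumes (per the constraint above) that at most one value is duplicated and it appears exactly twice, so it stops at the first match; the marker array x is not even needed in that case:

```c
#include <stdio.h>

/* Brute-force sketch: compare every pair of elements.
 * Assumes at most one duplicated value, appearing exactly twice,
 * so we can stop at the first match. O(n^2) time, O(1) extra space. */
static void find_duplicate_indices(const int *data, int n)
{
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (data[i] == data[j]) {
                printf("%d,%d\n", i, j);   /* indices of the duplicate pair */
                return;
            }
        }
    }
    printf("no duplicates\n");
}

int main(void)
{
    int data[] = {1, 2, 3, 4, 5, 3};
    find_duplicate_indices(data, sizeof data / sizeof data[0]);  /* prints 2,5 */
    return 0;
}
```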

Thanks.

  • Don't think too hard about it. If the complexity of your algorithm is your only concern, you can always sort first in O(n log n) – Wyck Jun 04 '22 at 01:10
  • Well, for a large **N**, `qsort` can work; otherwise the *brute force* approach will work just fine. – alex01011 Jun 04 '22 at 01:42
  • _If_ the _values_ are in a limited range, you can use a frequency table/bit vector to detect duplicates. For character strings, this is easy to do (because the value range is 0-255 and the table size is 256). It is an O(n) time solution. See my answer: https://stackoverflow.com/a/70946516/5382650 – Craig Estey Jun 04 '22 at 02:09
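Below is a minimal sketch of the frequency/first-index table idea from the last comment. It assumes the values fall in a small known range (here 0..255, analogous to the character-string case mentioned above); instead of only counting occurrences, it records the index where each value was first seen, so the duplicate pair's indices can be reported in a single O(n) pass:

```c
#include <stdio.h>

#define MAX_VALUE 256   /* assumed value range: 0 .. MAX_VALUE-1 */

/* first_index[v] holds the index where value v was first seen, or -1.
 * Assumes at most one duplicated value. O(n) time, O(MAX_VALUE) space. */
static void find_duplicate_indices(const int *data, int n)
{
    int first_index[MAX_VALUE];
    for (int v = 0; v < MAX_VALUE; v++)
        first_index[v] = -1;

    for (int i = 0; i < n; i++) {
        int v = data[i];                       /* must be in 0..MAX_VALUE-1 */
        if (first_index[v] >= 0) {
            printf("%d,%d\n", first_index[v], i);  /* indices of the pair */
            return;
        }
        first_index[v] = i;
    }
    printf("no duplicates\n");
}

int main(void)
{
    int data[] = {1, 2, 3, 4, 5, 3};
    find_duplicate_indices(data, sizeof data / sizeof data[0]);  /* prints 2,5 */
    return 0;
}
```

If the values are not in a limited range, the sort-based suggestion would still need the original positions preserved, e.g. by sorting an array of (value, index) pairs with `qsort` and then scanning adjacent entries for equal values.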

0 Answers