You have probably been faced with a very simple and common problem: comparing unique elements in a list, two by two, when the order doesn't matter.
Comparing n elements with n elements is going to have a complexity of n². But when the order doesn't matter, comparing element A to B is the same thing as comparing B to A, so the complexity is going to be less than n².
For this algorithm, all we have to do is implement a for loop that increments the starting position each time we finish comparing an element to the remaining items in the list, as sketched below.
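Here is a minimal sketch of that idea (the function name and the use of plain Python lists are my own choices for illustration, not from the original post):

```python
def compare_pairs(items):
    """Compare every unordered pair of elements exactly once."""
    pairs = []
    for i in range(len(items)):
        # The inner loop starts at i + 1, so each pair (A, B) is visited
        # once and never again as (B, A).
        for j in range(i + 1, len(items)):
            pairs.append((items[i], items[j]))
    return pairs

print(compare_pairs(["A", "B", "C"]))  # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```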
But the question remains: how much faster is it going to be in theory?
I didn't know the complexity of this second approach, so I googled it, and it turns out that it is n(n-1)/2, where n is the number of elements. Now I wanted to know how much faster it was going to be relative to n².
So all I had to do was divide one complexity by the other to see how they grow relative to each other. See the graph below:
It turns out that when n goes to infinity, this ratio goes to 2. So that means the second approach is about two times faster.
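For completeness, here is the algebra behind that limit (my own working, but it follows directly from the two complexities above):

```latex
\frac{n^2}{\tfrac{n(n-1)}{2}} = \frac{2n^2}{n(n-1)} = \frac{2n}{n-1} \longrightarrow 2 \quad \text{as } n \to \infty
```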
Now it is obvious to me why it is 2 times faster: in the naive approach, each comparison appears twice. For example, with the elements A, B, and C we would have AB, AC, BA, BC, CA, CB, and half of those are the same comparison when the order doesn't matter.
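A quick way to see this doubling (using Python's itertools, purely as my own illustration) is to count ordered versus unordered pairs:

```python
from itertools import combinations, permutations

items = ["A", "B", "C"]

ordered = list(permutations(items, 2))    # AB, AC, BA, BC, CA, CB
unordered = list(combinations(items, 2))  # AB, AC, BC

print(len(ordered), len(unordered))  # 6 3 -> exactly half when order doesn't matter
```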
In this case, if I had been smarter I wouldn't have needed to do any calculations. However, this same exercise of comparing different algorithmic complexities can be useful when the theoretical gains are not so obvious, and it might help you decide whether the added complexity of the implementation outweighs the gains in processing time.