This PEP is withdrawn by the author. He writes:
Removing duplicate elements from a list is a common task, but there are only two reasons I can see for making it a built-in. The first is if it could be done much faster, which isn't the case. The second is if it makes it significantly easier to write code. The introduction of sets.py eliminates this situation since creating a sequence without duplicates is just a matter of choosing a different data structure: a set instead of a list.
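A minimal sketch of the withdrawal rationale, assuming only that the elements are hashable: with sets in the standard library, removing duplicates is just a change of data structure rather than a new list method.

```python
# Deduplicate by converting to a set and back.  Note that a set
# does not preserve the original list order.
items = ['a', 'b', 'a', 'c', 'b']
unique = list(set(items))
assert sorted(unique) == ['a', 'b', 'c']
```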
As described in PEP 218, sets are being added to the standard library for Python 2.3.
This PEP proposes adding a method for removing duplicate elements to the list object.
Removing duplicates from a list is a common task. I think it is useful and general enough to belong as a method in list objects. It also has potential for faster execution when implemented in C, especially if optimization using hashing or sorting cannot be used.
On comp.lang.python there are many, many, posts [1] asking about the best way to do this task. It's a little tricky to implement optimally and it would be nice to save people the trouble of figuring it out themselves.
Tim Peters suggests trying to use a hash table, then trying to sort, and finally falling back on brute force [2]. Should uniq maintain list order at the expense of speed?
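The three-tier strategy above can be sketched in pure Python. This is an illustration, not the PEP's actual C implementation; the function name `uniq` simply follows the PEP's proposed spelling. Note how only the hash-table path and the brute-force path preserve the original order:

```python
def uniq(items):
    # Fast path: all elements hashable -> O(n) with a set,
    # preserving the original order.
    try:
        seen = set()
        result = []
        for x in items:
            if x not in seen:
                seen.add(x)
                result.append(x)
        return result
    except TypeError:
        pass  # some element is unhashable

    # Second attempt: elements comparable -> O(n log n) by sorting,
    # at the cost of losing the original order.
    try:
        ordered = sorted(items)
    except TypeError:
        pass  # elements are not comparable either
    else:
        result = []
        for x in ordered:
            if not result or x != result[-1]:
                result.append(x)
        return result

    # Last resort: O(n^2) brute force, preserving order.
    result = []
    for x in items:
        if x not in result:
            result.append(x)
    return result
```

For example, `uniq([1, 2, 1, 3])` takes the hash-table path and returns `[1, 2, 3]`, while `uniq([[2], [1], [2]])` (unhashable lists) falls back to sorting and returns `[[1], [2]]`.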
Is it spelled 'uniq' or 'unique'?
I've written the brute force version. It's about 20 lines of code in listobject.c. Adding support for hash table and sorted duplicate removal would only take another hour or so.
This document has been placed in the public domain.