pyspark.pandas.Series.drop_duplicates

Series.drop_duplicates(keep: Union[bool, str] = 'first', inplace: bool = False) → Optional[pyspark.pandas.series.Series]

Return Series with duplicate values removed.
Parameters
    keep : {'first', 'last', False}, default 'first'
        Method to handle dropping duplicates:

        - 'first' : Drop duplicates except for the first occurrence.
        - 'last' : Drop duplicates except for the last occurrence.
        - False : Drop all duplicates.

    inplace : bool, default False
        If True, performs operation inplace and returns None.
Returns
    Series
        Series with duplicates dropped.
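The pandas-on-Spark API mirrors plain pandas here, so the same `keep` semantics can be sketched with an ordinary `pandas.Series` (used below only as a stand-in so the example runs without a Spark session):

```python
import pandas as pd

s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'], name='animal')

first = s.drop_duplicates()            # default keep='first': indices 0, 1, 3, 5
last = s.drop_duplicates(keep='last')  # indices 1, 3, 4, 5
none = s.drop_duplicates(keep=False)   # only values that never repeat: 1, 3, 5

print(list(first.index), list(last.index), list(none.index))
```

With pyspark.pandas the row order of the result is not guaranteed, which is why the examples below always call `sort_index()` before displaying.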
Examples
Generate a Series with duplicated entries.
>>> s = ps.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'],
...               name='animal')
>>> s.sort_index()
0      lama
1       cow
2      lama
3    beetle
4      lama
5     hippo
Name: animal, dtype: object
With the 'keep' parameter, the selection behavior for duplicated values can be changed. The value 'first' (the default) keeps the first occurrence of each set of duplicated entries.
>>> s.drop_duplicates().sort_index()
0      lama
1       cow
3    beetle
5     hippo
Name: animal, dtype: object
The value 'last' for parameter 'keep' keeps the last occurrence of each set of duplicated entries.
>>> s.drop_duplicates(keep='last').sort_index()
1       cow
3    beetle
4      lama
5     hippo
Name: animal, dtype: object
The value False for parameter 'keep' discards all sets of duplicated entries. Setting the value of 'inplace' to True performs the operation inplace and returns None.

>>> s.drop_duplicates(keep=False, inplace=True)
>>> s.sort_index()
1       cow
3    beetle
5     hippo
Name: animal, dtype: object
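The inplace contract above can be checked directly: the call mutates the Series and returns None rather than a new object. Again sketched with a plain `pandas.Series` as a stand-in for the Spark-backed one:

```python
import pandas as pd

s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'], name='animal')

# inplace=True: s itself is modified and the call returns None
result = s.drop_duplicates(keep=False, inplace=True)
print(result)        # None
print(list(s))       # only the values that appeared exactly once
```

Because the return value is None, inplace calls cannot be chained; chain `drop_duplicates(...).sort_index()` only when `inplace` is left at its default of False.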