Removing Duplicates

It is sometimes useful to remove consecutive duplicate records from a data stream. We showed in Section 4.1.2 that sort -u would do that job, but we also saw that the elimination is based on matching keys rather than matching records. The uniq command provides another way to filter data: it is frequently used in a pipeline to eliminate duplicate records downstream from a sort operation:

sort ... | uniq | ...
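
Note that uniq eliminates only *consecutive* duplicate records, which is why it usually follows a sort. A small demonstration with throwaway data makes the point:

$ printf 'b\na\na\nb\n' | uniq            Unsorted input: only adjacent duplicates collapse
b
a
b

$ printf 'b\na\na\nb\n' | sort | uniq     Sorted input: all duplicates collapse
a
b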

uniq has three useful options that find frequent application. The -c option prefixes each output line with a count of the number of times that it occurred, and we will use it in the word-frequency filter in Example 5-5 in Chapter 5. The -d option shows only lines that are duplicated, and the -u option shows just the nonduplicate lines. Here are some examples:

$ cat latin-numbers                       Show the test file
tres
unus
duo
tres
duo
tres

$ sort latin-numbers | uniq               Show unique sorted records
duo
tres
unus

$ sort latin-numbers | uniq -c            Count unique sorted records
      2 duo
      3 tres
      1 unus

$ sort latin-numbers | uniq -d            Show only duplicate records
duo
tres

$ sort latin-numbers | uniq -u            Show only nonduplicate records
unus
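
The counts produced by -c also make a quick frequency ranking possible: feed the counted output to a second sort, this time numeric and in reverse. Here is a minimal sketch of the idea (myfile.txt is just a stand-in name; the full word-frequency filter appears as Example 5-5 in Chapter 5):

$ tr -cs 'A-Za-z' '\n' < myfile.txt | sort | uniq -c | sort -rn | head   Ten most common words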

uniq is sometimes a useful complement to the diff utility for figuring out the differences between two similar data streams: dictionary word lists, pathnames in mirrored directory trees, telephone books, and so on. Most implementations have other options that you can find described in the manual pages for uniq(1), but their use is rare. Like sort, uniq is standardized by POSIX, so you can use it everywhere.
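
For example, given two word lists that are each free of internal duplicates (old-words and new-words are stand-in names), uniq can separate the shared entries from those unique to either file:

$ sort old-words new-words | uniq -d      Words common to both files
$ sort old-words new-words | uniq -u      Words in only one of the files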
