linux - list the unique lines based on ":" delimiter
This question already has an answer here:
- Is there a way to 'uniq' by column? (8 answers)
I am trying to write a script that finds the unique lines (first occurrence) based on a column/delimiter. In this case the delimiter is ":".
For example:
may 14 00:00:01 server1 ntp[1006]: ntpd[info]: 1430748797.780852: ndtpq.c(20544): log
may 14 00:00:01 server1 ntp[1006]: ntpd[info]: 1430748797.780853: ndtpq.c(20544): log
may 14 00:00:02 server1 ntp[1006]: ntpd[info]: 1430748798.780852: ndtpq.c(20544): log
may 14 00:00:03 server1 ntp[1006]: ntpd[info]: 1430748799.780852: ndtpq.c(20544): log
may 14 00:00:04 server1 ntp[1006]: ntpd[info]: 1430748800.780852: ndtpq.c(20544): log
may 14 00:00:04 server1 ntp[1006]: ntpd[info]: 1430748800.790852: ndtpq.c(20544): log
may 14 00:00:05 server1 ntp[1006]: ntpd[info]: 1430748801.790852: ndtpq.c(20544): thisis different log
Desired output:
may 14 00:00:01 server1 ntp[1006]: ntpd[info]: 1430748797.780852: ndtpq.c(20544): log
may 14 00:00:01 server1 ntp[1006]: ntpd[info]: 1430748797.780853: ndtpq.c(20544): log
may 14 00:00:05 server1 ntp[1006]: ntpd[info]: 1430748801.790852: ndtpq.c(20544): thisis different log
I am able to find the unique log messages using the following command, but I am losing the timestamp this way.
cat filename | awk -F: '{print $7}'
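For example, running the same command on just the first sample line shows that only the message field survives; the timestamp "00:00:01" is itself split across $1-$3 by the ":" separator, so it is gone from the output:
echo 'may 14 00:00:01 server1 ntp[1006]: ntpd[info]: 1430748797.780852: ndtpq.c(20544): log' | awk -F: '{print $7}'
 log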
This may do it:
awk -F: '!seen[$NF]++' file
may 14 00:00:01 server1 ntp[1006]: ntpd[info]: 1430748797.780852: ndtpq.c(20544): log
may 14 00:00:01 server1 ntp[1006]: ntpd[info]: 1430748797.780853: ndtpq.c(20544): log
may 14 00:00:05 server1 ntp[1006]: ntpd[info]: 1430748801.790852: ndtpq.c(20544): thisis different log
It splits each line on ":", looks at the last field, and prints a line only the first time that last field is seen, so the whole line (timestamp included) is kept for the first occurrence.
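If it helps to see the one-liner spelled out, here is the same logic written long-hand (a sketch only; "filename" stands for the input file from the question):
awk -F: '{
    msg = $NF                 # last ":"-separated field, i.e. the log message
    if (!(msg in seen)) {     # first time this message appears ...
        seen[msg] = 1
        print                 # ... print the whole line, timestamp included
    }
}' filename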