Issue
I have two files:
main1.txt:
111
222
333

infoFile.txt:
111
111
333
444
I need to compare both files and display how many times each line of main1.txt is repeated in infoFile.txt. For example:
111: Total 2
222: Total 0
333: Total 1
I've used:
grep -f main1.txt infoFile.txt | sort | uniq -c
but it drops the lines that don't appear in infoFile.txt at all, while I need those to be reported with a count of 0.
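With the sample files above, that pipeline only reports the lines that do match, so 222 disappears from the output entirely; the result looks roughly like:
      2 111
      1 333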
Solution
Using awk you can do:
awk 'FNR==NR{a[$1]++; next} {print $1 ": Total", ($1 in a)?a[$1]:0}' infoFile.txt main1.txt
111: Total 2
222: Total 0
333: Total 1
How it works:

FNR==NR - Execute the following block for the first input file (infoFile.txt) only.
{a[$1]++; next} - Build an associative array a keyed by $1, incrementing its count for each occurrence, then skip to the next record.
{...} - Execute the second block for the second input file (main1.txt) only.
{print $1 ": Total", ($1 in a)?a[$1]:0} - Print the first field followed by the text ": Total", then print 0 if that field doesn't exist as a key in array a; otherwise print its count from array a.
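If you prefer to stay with grep, a rough equivalent of the same logic (a sketch, not part of the original answer) is to loop over main1.txt and count each line explicitly with grep -c, which prints 0 when nothing matches:

while IFS= read -r line; do
    # -F: treat the pattern as a fixed string, -x: match whole lines only,
    # -c: print the number of matching lines (0 if there are none)
    printf '%s: Total %d\n' "$line" "$(grep -Fxc "$line" infoFile.txt)"
done < main1.txt

This invokes grep once per line of main1.txt, so the awk one-liner is more efficient on large files, but the output matches the example above. The -x flag counts whole-line matches only, mirroring awk's comparison of the entire first field.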
Answered By - anubhava
Answer Checked By - Senaida (WPSolving Volunteer)