Issue
I use tail -f to show the contents of a logfile.
What I want is that when the logfile changes, only the newly added lines are shown on my screen, instead of being appended to the previous output. In other words, as if the screen were cleared every time before the new lines are printed.
I tried to find a solution by web search but couldn't find anything useful.
Edit: In my case it happens that several lines are added at once (it is a PHP error logfile), so I am looking for a solution where more than just the single last line can be shown on screen.
Solution
Try
$ watch 'tac FILE | grep -m1 -C2 PATTERN | tac'
where
- PATTERN is any keyword (or regexp) identifying the errors you are looking for in the log;
- tac prints the lines of the file in reverse order;
- -m1 limits grep to at most one matching line, i.e. the most recent match in the reversed stream;
- -C2 shows two lines of context before and after the match (optional).
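As a sketch of how the pipeline isolates only the most recent match (the log lines and the ERROR keyword here are invented for the demo):

```shell
# Demo with a throwaway log: extract only the newest ERROR plus one
# line of context on each side.
log=$(mktemp)
printf '%s\n' 'ctx A' 'ERROR first' 'ctx B' 'ctx C' 'ERROR second' 'ctx D' > "$log"

# tac reverses the file, grep -m1 stops at the first match it sees
# (i.e. the newest ERROR), -C1 keeps one context line around it,
# and the second tac restores the original order.
out=$(tac "$log" | grep -m1 -C1 ERROR | tac)
printf '%s\n' "$out"

rm -f "$log"
```

The newest ERROR line and its surrounding context are printed, while the older ERROR first never reaches the output, because grep stops reading after its first match.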
That would be similar to
$ tail -f FILE | grep -C2 PATTERN
if you didn't mind just appending occurrences to the output in real-time.
But if you don't know any generic PATTERN to look for at all,
you'd have to just follow all the updates as the logfile grows:
$ tail -n0 -f FILE
Or even, create a copy of the logfile and then diff the two:
- Copy:
cp file.log{,.old}
- Refresh the webpage with your .php code (or whatever, to trigger the error)
- Run:
diff file.log{,.old}
(here the newly added lines will be prefixed with <, since the current file is the first argument; or, if you prefer sort to diff: $ sort file.log{,.old} | uniq -u)
The curly braces are shorthand for both filenames (see Brace Expansion in $ man bash).
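Sketched end to end with a scratch file (the file contents here are invented for the demo):

```shell
# Simulate the copy-then-diff workflow with a temporary "logfile".
log=$(mktemp)
printf 'old line 1\nold line 2\n' > "$log"

cp "$log"{,.old}                      # snapshot before triggering the error

printf 'new error line\n' >> "$log"   # stands in for the PHP error being logged

# Old snapshot first, current file second, so additions appear as "> line";
# sed keeps just the added lines themselves.
added=$(diff "$log".old "$log" | sed -n 's/^> //p')
printf '%s\n' "$added"

rm -f "$log" "$log".old
```

Note the argument order: passing the snapshot first (the opposite of the brace-expansion shorthand above) makes the new lines come out prefixed with > rather than <.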
If you must avoid any temp copies, store the line count in memory:
- Run: z=$(grep -c ^ file.log)
- Refresh the webpage to trigger an error
- Run: tail -n +$((z + 1)) file.log (the old content occupies lines 1 to z, so printing starts at line z + 1)
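A minimal sketch of the line-count trick, with the appended lines simulated by printf:

```shell
log=$(mktemp)
printf 'line 1\nline 2\n' > "$log"

z=$(grep -c ^ "$log")                 # remember the current line count (2)

printf 'line 3\nline 4\n' >> "$log"   # simulates new errors being appended

# The old content occupied lines 1..z, so the new lines start at z + 1:
new=$(tail -n +$((z + 1)) "$log")
printf '%s\n' "$new"

rm -f "$log"
```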
The latter approach can be built upon, to create a custom scripting solution more suitable for your needs (check timestamps, clear screen, filter specific errors, etc). For example, to only show the lines that belong to the last error message in the log file updated in real-time:
$ clear; z=$(grep -c ^ FILE); while true; do d=$(date -r FILE); sleep 1; b=$(date -r FILE); if [ "$d" != "$b" ]; then clear; tail -n +$((z + 1)) FILE; z=$(grep -c ^ FILE); fi; done
where
- FILE is, obviously, your log file name;
- grep -c ^ FILE counts all lines in the file (almost, but not quite, the same as cat FILE | wc -l, which counts only newline characters and would miss a final line without a trailing newline);
- sleep 1 sets the pause between timestamp checks to 1 second; you can change it, even to a floating point number (the shorter the interval, the higher the CPU usage).
To simplify repetitive invocations in the future, you could save this compound command in a Bash script that takes the target logfile name as an argument, define a shell function, create a shell alias, or just reverse-search your bash history with CTRL+R. Hope it helps!
Answered By - Alex C. Answer Checked By - Marie Seifert (WPSolving Admin)