Issue
I'm seeing some confusing behavior with a Ruby script while running under nohup. Basically I'm doing this:
require 'logger'

logger_file = open('/mnt/dbsdata/output.log', File::WRONLY | File::APPEND | File::CREAT)
LOGGER = Logger.new(logger_file)
LOGGER.level = Logger::INFO

def run_command(cmd, display = true)
  LOGGER.info "Executing: #{cmd}" if display
  output = `#{cmd} 2>&1`
  success = $?.success?
  unless success
    LOGGER.error "FAILED to execute #{cmd}"
    LOGGER.error output
    return false
  end
  true
end

begin
  run_command("some_longrunning_command", true)
  run_command("some_other_longrunning_command", true)
  # etc...
end
What's weird is that both with Logger as above and with plain puts to STDOUT (captured in nohup.out), the output timing is way off. I'd expect to be able to tail the log file and see messages in real time (log message, execute command, repeat). Instead, a bulk of stale messages gets flushed to the log all at once, long after their commands have already executed and completed.
When executing the script like so from the shell:
`nohup ruby myscript.rb &`
The script behaves as expected when not run under nohup.
Anyone experienced this and know a good workaround?
Solution
I can't say for sure, but it sounds like an IO sync issue. I/O is generally buffered for speed: RAM is much faster than disk, so reads and writes are held in memory until the data is requested by the code or the disk is ready to receive it. Ruby's IO class has a sync= method that tells Ruby to flush the buffer immediately on every write.
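A minimal sketch of that buffering behavior (using a Tempfile here as a hypothetical stand-in for the original /mnt/dbsdata/output.log):

```ruby
require 'tempfile'

# Hypothetical demo file; the original question used /mnt/dbsdata/output.log.
tmp  = Tempfile.new('sync_demo')
path = tmp.path

io = File.open(path, 'a')
io.write("buffered line\n")   # likely still sitting in Ruby's write buffer
io.sync = true                # from now on, every write flushes immediately
io.write("synced line\n")     # this write also pushes the pending buffer out

contents = File.read(path)    # both lines are now visible on disk
io.close
```

Tailing `path` from another terminal after the synced write would show both lines without waiting for the script to exit.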
This should work:
logger_file = open('/mnt/dbsdata/output.log', File::WRONLY | File::APPEND | File::CREAT)
logger_file.sync = true
LOGGER = Logger.new(logger_file)
LOGGER.level = Logger::INFO
For simplicity, you can get the same mode ('a' is equivalent to File::WRONLY | File::APPEND | File::CREAT) with:
logger_file = File.open('/mnt/dbsdata/output.log', 'a')
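Putting it together, a self-contained sketch of the fix (again substituting a Tempfile for the original /mnt/dbsdata/output.log, and a made-up command name for the questioner's long-running commands):

```ruby
require 'logger'
require 'tempfile'

# Stand-in path for the original /mnt/dbsdata/output.log.
tmp      = Tempfile.new('applog')
log_path = tmp.path

logger_file = File.open(log_path, 'a')  # 'a' == WRONLY | APPEND | CREAT
logger_file.sync = true                 # flush each log line straight to disk
logger = Logger.new(logger_file)
logger.level = Logger::INFO

logger.info 'Executing: some_longrunning_command'
# With sync on, the message is readable from another process immediately,
# so `tail -f` shows entries in real time even under nohup.
```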
Answered By - the Tin Man
Answer Checked By - Senaida (WPSolving Volunteer)