The Python way to detach a process from the controlling terminal and run it in the background as a daemon.
"""Disk And Execution MONitor (Daemon)

Configurable daemon behaviors:

    1.) The current working directory set to the "/" directory.
    2.) The current file creation mode mask set to 0.
    3.) Close all open files (1024).
    4.) Redirect standard I/O streams to "/dev/null".

A failed call to fork() now raises an exception.

References:
    1) Advanced Programming in the Unix Environment: W. Richard Stevens
    2) Unix Programming Frequently Asked Questions:
          http://www.erlenstar.demon.co.uk/unix/faq_toc.html
"""

__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
__revision__ = "$Id$"
__version__ = "0.2"

# Standard Python modules.
import os    # Miscellaneous OS interfaces.
import sys   # System-specific parameters and functions.

# Default daemon parameters.
# File mode creation mask of the daemon.
UMASK = 0

# Default working directory for the daemon.
WORKDIR = "/"

# Default maximum for the number of available file descriptors.
MAXFD = 1024

# The standard I/O file descriptors are redirected to /dev/null by default.
if hasattr(os, "devnull"):
    REDIRECT_TO = os.devnull
else:
    REDIRECT_TO = "/dev/null"

def createDaemon():
    """Detach a process from the controlling terminal and run it in the
    background as a daemon.
    """

    try:
        # Fork a child process so the parent can exit.  This returns control
        # to the command-line or shell.  It also guarantees that the child
        # will not be a process group leader, since the child receives a new
        # process ID and inherits the parent's process group ID.  This step
        # is required to ensure that the next call to os.setsid is
        # successful.
        pid = os.fork()
    except OSError as e:
        raise Exception("%s [%d]" % (e.strerror, e.errno))

    if pid == 0:  # The first child.
        # To become the session leader of this new session and the process
        # group leader of the new process group, we call os.setsid().  The
        # process is also guaranteed not to have a controlling terminal.
        os.setsid()

        # Is ignoring SIGHUP necessary?
        #
        # It's often suggested that the SIGHUP signal should be ignored
        # before the second fork to avoid premature termination of the
        # process.  The reason is that when the first child terminates, all
        # processes, e.g. the second child, in the orphaned group will be
        # sent a SIGHUP.
        #
        # "However, as part of the session management system, there are
        # exactly two cases where SIGHUP is sent on the death of a process:
        #
        #   1) When the process that dies is the session leader of a session
        #      that is attached to a terminal device, SIGHUP is sent to all
        #      processes in the foreground process group of that terminal
        #      device.
        #   2) When the death of a process causes a process group to become
        #      orphaned, and one or more processes in the orphaned group are
        #      stopped, then SIGHUP and SIGCONT are sent to all members of
        #      the orphaned group." [2]
        #
        # The first case can be ignored since the child is guaranteed not to
        # have a controlling terminal.  The second case isn't so easy to
        # dismiss.  The process group is orphaned when the first child
        # terminates and POSIX.1 requires that every STOPPED process in an
        # orphaned process group be sent a SIGHUP signal followed by a
        # SIGCONT signal.  Since the second child is not STOPPED though, we
        # can safely forgo ignoring the SIGHUP signal.  In any case, there
        # are no ill effects if it is ignored.
        #
        # import signal           # Set handlers for asynchronous events.
        # signal.signal(signal.SIGHUP, signal.SIG_IGN)

        try:
            # Fork a second child and exit immediately to prevent zombies.
            # This causes the second child process to be orphaned, making the
            # init process responsible for its cleanup.  And, since the first
            # child is a session leader without a controlling terminal, it's
            # possible for it to acquire one by opening a terminal in the
            # future (System V-based systems).  This second fork guarantees
            # that the child is no longer a session leader, preventing the
            # daemon from ever acquiring a controlling terminal.
            pid = os.fork()  # Fork a second child.
        except OSError as e:
            raise Exception("%s [%d]" % (e.strerror, e.errno))

        if pid == 0:  # The second child.
            # Since the current working directory may be a mounted
            # filesystem, we avoid the issue of not being able to unmount
            # the filesystem at shutdown time by changing it to the root
            # directory.
            os.chdir(WORKDIR)
            # We probably don't want the file mode creation mask inherited
            # from the parent, so we give the child complete control over
            # permissions.
            os.umask(UMASK)
        else:
            # exit() or _exit()?  See below.
            os._exit(0)  # Exit parent (the first child) of the second child.
    else:
        # exit() or _exit()?
        # _exit is like exit(), but it doesn't call any functions registered
        # with atexit (and on_exit) or any registered signal handlers.  It
        # also closes any open file descriptors.  Using exit() may cause all
        # stdio streams to be flushed twice and any temporary files may be
        # unexpectedly removed.  It's therefore recommended that child
        # branches of a fork() and the parent branch(es) of a daemon use
        # _exit().
        os._exit(0)  # Exit parent of the first child.

    # Close all open file descriptors.  This prevents the child from keeping
    # open any file descriptors inherited from the parent.  There are several
    # ways to accomplish this task; three are listed below.
    #
    # Try the system configuration variable, SC_OPEN_MAX, to obtain the
    # maximum number of open file descriptors to close.  If it doesn't
    # exist, use the default value (configurable).
    #
    #    try:
    #        maxfd = os.sysconf("SC_OPEN_MAX")
    #    except (AttributeError, ValueError):
    #        maxfd = MAXFD
    #
    # OR
    #
    #    if "SC_OPEN_MAX" in os.sysconf_names:
    #        maxfd = os.sysconf("SC_OPEN_MAX")
    #    else:
    #        maxfd = MAXFD
    #
    # OR
    #
    # Use the getrlimit method to retrieve the maximum file descriptor
    # number that can be opened by this process.  If there is no limit on
    # the resource, use the default value.
    #
    import resource  # Resource usage information.
    maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
    if maxfd == resource.RLIM_INFINITY:
        maxfd = MAXFD

    # Iterate through and close all file descriptors.
    for fd in range(0, maxfd):
        try:
            os.close(fd)
        except OSError:  # ERROR, fd wasn't open to begin with (ignored)
            pass

    # Redirect the standard I/O file descriptors to the specified file.
    # Since the daemon has no controlling terminal, most daemons redirect
    # stdin, stdout, and stderr to /dev/null.  This is done to prevent
    # side effects from reads and writes to the standard I/O file
    # descriptors.

    # This call to open is guaranteed to return the lowest file descriptor,
    # which will be 0 (stdin), since it was closed above.
    os.open(REDIRECT_TO, os.O_RDWR)  # standard input (0)

    # Duplicate standard input to standard output and standard error.
    os.dup2(0, 1)  # standard output (1)
    os.dup2(0, 2)  # standard error (2)

    return 0

if __name__ == "__main__":

    retCode = createDaemon()

    # The code, as is, will create a new file in the root directory when
    # executed with superuser privileges.  The file will contain the
    # following daemon-related process parameters: return code, process ID,
    # parent process ID, process group ID, session ID, user ID, effective
    # user ID, real group ID, and the effective group ID.  Notice the
    # relationship between the daemon's process ID, process group ID, and
    # its parent's process ID.

    procParams = """
    return code = %s
    process ID = %s
    parent process ID = %s
    process group ID = %s
    session ID = %s
    user ID = %s
    effective user ID = %s
    real group ID = %s
    effective group ID = %s
    """ % (retCode, os.getpid(), os.getppid(), os.getpgrp(), os.getsid(0),
           os.getuid(), os.geteuid(), os.getgid(), os.getegid())

    open("createDaemon.log", "w").write(procParams + "\n")

    sys.exit(retCode)
Updated and improved.
This recipe details how to create a daemon in Python. Just call the createDaemon() function and it will daemonize your process. It's well documented and hopefully useful. Any ideas or suggestions are welcome. Enjoy.
References: 1) Advanced Programming in the Unix Environment: W. Richard Stevens 2) Unix Programming Frequently Asked Questions: http://www.erlenstar.demon.co.uk/unix/faq_toc.html
Problem with closing file descriptors. Nicely documented recipe. But I can't see how opening file descriptors like this would correctly handle stdin, stdout and stderr:
Obviously, you don't really want to close the existing ones, but I once saw a good trick in Python Standard Library (Lundh) for doing a similar thing:
I've used that quite a bit, with some success.
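The trick being referred to (a sketch here, since the original comment's code was lost in the page capture) is the NullDevice class from Lundh's "Python Standard Library": instead of closing the descriptors, point the Python-level sys.std* objects at harmless sinks:

```python
import os
import sys

class NullDevice:
    """File-like sink whose writes are discarded (modeled on the trick
    from Lundh's "Python Standard Library")."""
    def write(self, s):
        pass

    def flush(self):
        pass

def redirect_standard_streams():
    """Point the Python-level streams at harmless objects instead of
    closing the underlying file descriptors."""
    sys.stdin = open(os.devnull, "r")
    sys.stdout = NullDevice()
    sys.stderr = NullDevice()
```

Note this only silences Python-level I/O; C extensions and child processes still see the original descriptors, which is why the recipe itself works at the fd level with os.close()/os.dup2().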
How it works.
How is this better than "&"? Hi, I'm newish to Python and I'm wondering how this approach to writing a daemon is better than running a Python script in the background (from a Linux or UNIX command line) as follows:

$ python myPython.py &
Why O_RDWR for stdout and stderr. Good read.
One question. Why do you reopen fd's 1 and 2 using O_RDWR instead of O_WRONLY?
RE: Why O_RDWR for stdout and stderr.
The Daemon Process Is Using Most of the CPU. First off, this is some great advice for writing daemon applications.
Problem:
I am running a daemon application using your code to convert the process to a daemon. It works great, but shortly after starting, it begins to occupy almost all of the CPU, as if the process were spinning. However, the process should be blocking rather than spinning, since it is waiting to read from a named pipe. Any idea why the process is taking up most of the CPU and how to stop it from doing so? Here's the code after I call the "createDaemon()" function:
RE: The Daemon Process Is Using Most of the CPU.
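One common cause (an assumption here, since the reply text itself did not survive on this page): once every writer has closed a FIFO, reads on it return EOF immediately, so a naive read loop spins at full speed. Re-opening the FIFO after EOF makes the process block in open() until the next writer connects:

```python
import os

def drain_fifo_once(path, handle):
    """Read one writer 'session' from a named pipe.  open() blocks until
    a writer connects; the loop ends when every writer has closed the
    FIFO (reads then return EOF)."""
    with open(path, "r") as fifo:
        for line in fifo:
            handle(line)

# A daemon main loop would call this repeatedly.  Re-opening the FIFO
# after EOF keeps the process blocked in open() instead of spinning on
# zero-byte reads from an already-closed pipe:
#
#     while True:
#         drain_fifo_once("/path/to/fifo", handle)
```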
error to syslog. I'm using this code, and it functions very well. But sometimes (once a week) my application crashes. I tried to try/except everything, but somehow something still goes wrong occasionally.
I've created a function log() that I can use for logging. At the moment it sends me an e-mail and writes an entry to syslog. I want to redirect stderr to this function, but the problem is that stderr is a stream, and I don't really know how to convert that into a log() call.
Does anyone have an idea, or am I thinking about this wrong?
sorry, the answer is in the code I see now...
I feel ashamed. I didn't implement the full code. _and_ I pushed the wrong 'add comment' button.
Sorry for wasting time!
The answer is quite simple:
I've put it here; maybe it can be useful for somebody.
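The commenter's actual code was lost in the page capture, but the standard approach (sketched here as an assumption) is a file-like object whose write() forwards complete lines to the log() callable, assigned to sys.stderr so uncaught tracebacks land in the log:

```python
import sys
import syslog

class LogWriter:
    """File-like object that forwards complete lines to a log() callable.
    Assign an instance to sys.stderr and uncaught tracebacks end up in
    the log instead of being silently discarded."""
    def __init__(self, log):
        self.log = log
        self._buf = ""

    def write(self, s):
        # Buffer partial writes; emit one log() call per complete line.
        self._buf += s
        while "\n" in self._buf:
            line, self._buf = self._buf.split("\n", 1)
            if line:
                self.log(line)

    def flush(self):
        if self._buf:
            self.log(self._buf)
            self._buf = ""

# For example, routed into syslog (the LOG_ERR priority is an
# arbitrary choice for this sketch):
#     sys.stderr = LogWriter(lambda msg: syslog.syslog(syslog.LOG_ERR, msg))
```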
Simplify. I don't see why 2 forks are needed. I think all that's needed is:

    if os.fork() == 0:
        os.setsid()
        maxfd = os.sysconf("SC_OPEN_MAX")
        for fd in range(0, maxfd):
            try:
                os.close(fd)
            except OSError:  # ERROR (ignore)
                pass
It always takes two forks to make a daemon. This is tradition. Some UNIXes don't require it. It doesn't hurt to do it on all UNIXes. The reason some UNIXes require it is to make sure that the daemon process is NOT a session leader. A session leader process may attempt to acquire a controlling terminal. By definition a daemon does not have a controlling terminal. This is one of the steps that might not be strictly necessary, but it will eliminate one possible source of faults.
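Stripped of the recipe's error handling and fd cleanup, the double fork reduces to this minimal sketch:

```python
import os

def double_fork():
    """Minimal double-fork sketch.  After the second fork the surviving
    process is in its own session but is NOT the session leader, so on
    System V style systems it can never re-acquire a controlling
    terminal by opening one."""
    if os.fork() > 0:
        os._exit(0)      # original caller exits
    os.setsid()          # first child: new session, no controlling tty
    if os.fork() > 0:
        os._exit(0)      # first child exits; its child lives on
    # only the grandchild (the daemon) returns from this function
```

After double_fork() returns, os.getpid() differs from os.getsid(0): the daemon lives in the session the first child created but is not its leader.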
Depends On Use.
For long-running processes that are not tied to a terminal, for example.
Depends On Use.
To have a separate, stand-alone process running that is abandoned by its parent to live its own life, without ever knowing whether its parent is alive or dead, for example.
Benefit To The Parent. And don't forget that a parent process that double-fork()s its daemon child never has to worry about the child turning into a zombie when it dies.
Example of usage. I have tried to implement this but I am having trouble. Here is what I have tried.
Error in your program. Because stderr is being redirected to /dev/null, you won't be informed of any errors in your program.
In this case, you have 'self' as a parameter to logit, but it's not a class member. Just remove 'self' and it should work fine.
Also, you might want to use mode "a" rather than "w"; this way you'll see "Hello!" being added every 5 minutes, rather than just one "Hello!"
Random file descriptor. When using the random module after daemonizing, the file descriptor is no longer accessible, and it produces this error:
Traceback (most recent call last):
File "/usr/share/mille-xterm/lbserver/main.py", line 332, in ?
File "/usr/share/mille-xterm/lbserver/main.py", line 287, in main
File "/usr/lib/python2.4/random.py", line 110, in seed
File "/usr/lib/python2.4/os.py", line 728, in urandom
OSError: [Errno 9] Bad file descriptor
Should I preserve file descriptors? I don't see another way to do it.
Thanks for any hint,
Francis
This is python bug 1177468, it's apparently fixed in python cvs since 4th July 2005 but may not have made it to your distribution yet. Without the fixed os.py, your only option is to leave the file descriptors open.
Actually you do need to close 0,1,2 otherwise you won't get any stdout/stderr redirects from the os.open calls.
Can't quite get this to work.
Check /. The daemon code sets the current working directory of the process to /. Your file is either there, or you don't have permission to write there. If you're doing serious logging, check out syslog; there are some recipes on this site.
Handling SIGTERM. How can I make my daemon using this code handle SIGTERM?
See I have it created child processes when it starts up, and when the daemon is shut down, I want it to clean up those child processes. I figured I could put the clean up code in a function.
Here's what I tried so far:
But when running $ sudo kill -15 [PID] against it, that function never gets called.
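A working handler registration looks like this sketch (the child_pids list and the handler name are illustrative, not from the original post); note that handlers registered with signal.signal() must accept two arguments, the signal number and the current stack frame:

```python
import os
import signal
import sys

# pids of worker children the daemon should clean up on shutdown
# (illustrative; a real daemon would fill this in as it spawns workers)
child_pids = []

def on_sigterm(signum, frame):
    """Called with (signum, frame) when SIGTERM is delivered."""
    for pid in child_pids:
        try:
            os.kill(pid, signal.SIGTERM)
        except OSError:
            pass  # worker already gone
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)
```

With this registered, `kill -15 [PID]` runs on_sigterm in the daemon instead of terminating it with the default action.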
Redirecting both stdout and stderr to the same file. Here's what I wanted:
Here's how I solved.
First of all, I needed to add:
To createDaemon() code. Closing underlying C buffers without closing (and thus notifying) sys.std* seems to be bad. There's also another recipe here about this. According to Python docs, if you close sys streams, the associated fd is not really closed. So we still need to close 0,1 and 2 fds.
This was the only change I made to createDaemon(). I think it's not a problem, because I was going to reassign sys.std* afterwards.
Now my code: bla.py
Ps2: this is for Unix!
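Since the commenter's bla.py was lost in the page capture, here is a sketch of the same idea under the approach they describe: flush the Python-level streams, then dup2() the C-level descriptors 1 and 2 onto one log file:

```python
import os
import sys

def redirect_std_to(path):
    """Send both stdout and stderr (file descriptors 1 and 2) to the
    same file.  Flushing first empties Python's buffers; dup2() then
    repoints the C-level descriptors at the log file."""
    sys.stdout.flush()
    sys.stderr.flush()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    os.dup2(fd, 1)   # stdout
    os.dup2(fd, 2)   # stderr
    os.close(fd)     # 1 and 2 keep the file open; fd itself is spare
```

O_APPEND matters here: with two descriptors sharing one open file description, appends from stdout and stderr interleave instead of overwriting each other.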
Thank you for this code snippet. Really useful!
Your signal handler is not written properly, you'll need to accept some arguments. The docs spell it out pretty well: http://docs.python.org/library/signal.html
Python likes to keep file descriptors open to the source of your program and any modules you may import.
Verify this yourself with strace (on Linux) and ktrace/kdump (BSD).
If you attempt to close them all, you'll see some spurious and confusing error messages.
At least I did, in my dfd_keeper project:
http://www.subspacefield.org/security/dfd_keeper/
I have created a package for dropping privileges in python, which may be of interest to those writing daemons:
http://www.subspacefield.org/~travis/python/privilege/
Although the posted code is useful for explaining how Python works, it's worth pointing out that when deploying a long-running program in a *nix production environment, it is better to have the program (regardless of implementation language) talk to the world via STDIN, STDOUT, and STDERR, and to control it via a utility like "supervise" from daemontools (which can handle automatic restarts on exit, logging, and other admin issues), than to have the program put itself in the background:
Note that the daemontools package also provides ways to set the effective UID/GID of a process and impose softlimits:
Just wanted to say thanks for this great share. ActiveState is a fantastic resource. Keep up the great work!:D
Great code. Thanks for all the commenting. I am however having issues running this on a beaglebone running angstrom linux and python 2.7.2.
When running on angstrom, the daemon is created and I am dropped back to the terminal. The daemon process remains running but is killed once I log off my terminal. I have tested the exact same code on ubuntu and it runs correctly even after logging off. What is the issue?
I have also created a stripped-down version of the "double fork magic" and the same thing happens: the daemon dies after logging out on angstrom but works flawlessly on ubuntu.
Any ideas?
My stripped code:
I think the main process has to wait for the termination of the first child before terminating.
os.wait() should be added in line 126 (before os._exit())
because: Let "pyDaemon.py" be the file from the top; then:
user@machine:~$ python pyDaemon.py & exit
will not (not always; a race condition?) create "createDaemon.log". I think this is because the controlling tty closes before os.setsid() returns in the child, and therefore SIGHUP will be sent.
With the addition of os.wait() it should work fine.
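The suggested change, sketched against a stripped-down version of the recipe (the race-condition diagnosis is the commenter's, not verified on every platform):

```python
import os

def create_daemon_with_wait():
    """Double fork where the original parent wait()s for the first
    child before exiting, so a shell running `python script.py & exit`
    cannot tear down the session before os.setsid() has run."""
    pid = os.fork()
    if pid == 0:                # first child
        os.setsid()
        if os.fork() > 0:
            os._exit(0)         # first child exits after the second fork
        return 0                # grandchild: the daemon continues here
    os.waitpid(pid, 0)          # reap the first child *before* exiting
    os._exit(0)                 # original parent
```

The waitpid() call guarantees that by the time the original parent (and thus the shell's job) is gone, setsid() and the second fork have already completed in the child.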
Thanks. Really useful