
The Python way to detach a process from the controlling terminal and run it in the background as a daemon.

Python, 207 lines
"""Disk And Execution MONitor (Daemon)

Configurable daemon behaviors:

   1.) The current working directory set to the "/" directory.
   2.) The current file creation mode mask set to 0.
   3.) Close all open file descriptors (up to a default maximum of 1024).
   4.) Redirect standard I/O streams to "/dev/null".

A failed call to fork() now raises an exception.

References:
   1) Advanced Programming in the Unix Environment: W. Richard Stevens
   2) Unix Programming Frequently Asked Questions:
         http://www.erlenstar.demon.co.uk/unix/faq_toc.html
"""

__author__ = "Chad J. Schroeder"
__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"

__revision__ = "$Id$"
__version__ = "0.2"

# Standard Python modules.
import os               # Miscellaneous OS interfaces.
import sys              # System-specific parameters and functions.

# Default daemon parameters.
# File mode creation mask of the daemon.
UMASK = 0

# Default working directory for the daemon.
WORKDIR = "/"

# Default maximum for the number of available file descriptors.
MAXFD = 1024

# The standard I/O file descriptors are redirected to /dev/null by default.
if (hasattr(os, "devnull")):
   REDIRECT_TO = os.devnull
else:
   REDIRECT_TO = "/dev/null"

def createDaemon():
   """Detach a process from the controlling terminal and run it in the
   background as a daemon.
   """

   try:
      # Fork a child process so the parent can exit.  This returns control to
      # the command-line or shell.  It also guarantees that the child will not
      # be a process group leader, since the child receives a new process ID
      # and inherits the parent's process group ID.  This step is required
      # to ensure that the next call to os.setsid is successful.
      pid = os.fork()
   except OSError, e:
      raise Exception, "%s [%d]" % (e.strerror, e.errno)

   if (pid == 0):	# The first child.
      # To become the session leader of this new session and the process group
      # leader of the new process group, we call os.setsid().  The process is
      # also guaranteed not to have a controlling terminal.
      os.setsid()

      # Is ignoring SIGHUP necessary?
      #
      # It's often suggested that the SIGHUP signal should be ignored before
      # the second fork to avoid premature termination of the process.  The
      # reason is that when the first child terminates, all processes, e.g.
      # the second child, in the orphaned group will be sent a SIGHUP.
      #
      # "However, as part of the session management system, there are exactly
      # two cases where SIGHUP is sent on the death of a process:
      #
      #   1) When the process that dies is the session leader of a session that
      #      is attached to a terminal device, SIGHUP is sent to all processes
      #      in the foreground process group of that terminal device.
      #   2) When the death of a process causes a process group to become
      #      orphaned, and one or more processes in the orphaned group are
      #      stopped, then SIGHUP and SIGCONT are sent to all members of the
      #      orphaned group." [2]
      #
      # The first case can be ignored since the child is guaranteed not to have
      # a controlling terminal.  The second case isn't so easy to dismiss.
      # The process group is orphaned when the first child terminates and
      # POSIX.1 requires that every STOPPED process in an orphaned process
      # group be sent a SIGHUP signal followed by a SIGCONT signal.  Since the
      # second child is not STOPPED though, we can safely forego ignoring the
      # SIGHUP signal.  In any case, there are no ill-effects if it is ignored.
      #
      # import signal           # Set handlers for asynchronous events.
      # signal.signal(signal.SIGHUP, signal.SIG_IGN)

      try:
         # Fork a second child and exit immediately to prevent zombies.  This
         # causes the second child process to be orphaned, making the init
         # process responsible for its cleanup.  And, since the first child is
         # a session leader without a controlling terminal, it's possible for
         # it to acquire one by opening a terminal in the future (System V-
         # based systems).  This second fork guarantees that the child is no
         # longer a session leader, preventing the daemon from ever acquiring
         # a controlling terminal.
         pid = os.fork()	# Fork a second child.
      except OSError, e:
         raise Exception, "%s [%d]" % (e.strerror, e.errno)

      if (pid == 0):	# The second child.
         # Since the current working directory may be a mounted filesystem, we
         # avoid the issue of not being able to unmount the filesystem at
         # shutdown time by changing it to the root directory.
         os.chdir(WORKDIR)
         # We probably don't want the file mode creation mask inherited from
         # the parent, so we give the child complete control over permissions.
         os.umask(UMASK)
      else:
         # exit() or _exit()?  See below.
         os._exit(0)	# Exit parent (the first child) of the second child.
   else:
      # exit() or _exit()?
      # _exit is like exit(), but it doesn't call any functions registered
      # with atexit (and on_exit) or any registered signal handlers.  It also
      # closes any open file descriptors.  Using exit() may cause all stdio
      # streams to be flushed twice and any temporary files may be unexpectedly
      # removed.  It's therefore recommended that child branches of a fork()
      # and the parent branch(es) of a daemon use _exit().
      os._exit(0)	# Exit parent of the first child.

   # Close all open file descriptors.  This prevents the child from keeping
   # open any file descriptors inherited from the parent.  There are a variety
   # of methods to accomplish this task.  Three are listed below.
   #
   # Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
   # number of open file descriptors to close.  If it doesn't exist, use
   # the default value (configurable).
   #
   # try:
   #    maxfd = os.sysconf("SC_OPEN_MAX")
   # except (AttributeError, ValueError):
   #    maxfd = MAXFD
   #
   # OR
   #
   # if (os.sysconf_names.has_key("SC_OPEN_MAX")):
   #    maxfd = os.sysconf("SC_OPEN_MAX")
   # else:
   #    maxfd = MAXFD
   #
   # OR
   #
   # Use the getrlimit method to retrieve the maximum file descriptor number
   # that can be opened by this process.  If there is no limit on the
   # resource, use the default value.
   #
   import resource		# Resource usage information.
   maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
   if (maxfd == resource.RLIM_INFINITY):
      maxfd = MAXFD
  
   # Iterate through and close all file descriptors.
   for fd in range(0, maxfd):
      try:
         os.close(fd)
      except OSError:	# ERROR, fd wasn't open to begin with (ignored)
         pass

   # Redirect the standard I/O file descriptors to the specified file.  Since
   # the daemon has no controlling terminal, most daemons redirect stdin,
   # stdout, and stderr to /dev/null.  This is done to prevent side-effects
   # from reads and writes to the standard I/O file descriptors.

   # This call to open is guaranteed to return the lowest file descriptor,
   # which will be 0 (stdin), since it was closed above.
   os.open(REDIRECT_TO, os.O_RDWR)	# standard input (0)

   # Duplicate standard input to standard output and standard error.
   os.dup2(0, 1)			# standard output (1)
   os.dup2(0, 2)			# standard error (2)

   return(0)

if __name__ == "__main__":

   retCode = createDaemon()

   # The code, as is, will create a new file in the root directory, when
   # executed with superuser privileges.  The file will contain the following
   # daemon related process parameters: return code, process ID, parent
   # process ID, process group ID, session ID, user ID, effective user ID,
   # real group ID, and effective group ID.  Notice the relationship between
   # the daemon's process ID, process group ID, and its parent's process ID.

   procParams = """
   return code = %s
   process ID = %s
   parent process ID = %s
   process group ID = %s
   session ID = %s
   user ID = %s
   effective user ID = %s
   real group ID = %s
   effective group ID = %s
   """ % (retCode, os.getpid(), os.getppid(), os.getpgrp(), os.getsid(0),
   os.getuid(), os.geteuid(), os.getgid(), os.getegid())

   open("createDaemon.log", "w").write(procParams + "\n")

   sys.exit(retCode)

Updated and improved.

This recipe details how to implement/create a daemon in Python. Just call the createDaemon() function and it will daemonize your process. It's well documented and hopefully useful. Any ideas or suggestions are welcome. Enjoy.
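
A minimal usage sketch (assuming the recipe is saved as daemon.py next to your script; the module name and log path are illustrative, not part of the recipe):

#!/usr/bin/env python
# Daemonize, then emit a heartbeat; createDaemon() is the recipe above.
import time

import daemon

daemon.createDaemon()

# The working directory is now "/" and stdio points at /dev/null,
# so use absolute paths for any output.
while 1:
   fp = open("/tmp/heartbeat.log", "a")
   fp.write("alive at %s\n" % time.ctime())
   fp.close()
   time.sleep(60)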

References: 1) Advanced Programming in the Unix Environment: W. Richard Stevens 2) Unix Programming Frequently Asked Questions: http://www.erlenstar.demon.co.uk/unix/faq_toc.html

35 comments

Graham Ashton 19 years, 12 months ago

Problem with closing file descriptors. Nicely documented recipe. But I can't see how opening file descriptors like this would correctly handle stdin, stdout and stderr:

# Redirect the standard file descriptors to /dev/null.
os.open("/dev/null", os.O_RDONLY)    # standard input (0)
os.open("/dev/null", os.O_RDWR)      # standard output (1)
os.open("/dev/null", os.O_RDWR)      # standard error (2)

Obviously, you don't really want to close the existing ones, but I once saw a good trick in Python Standard Library (Lundh) for doing a similar thing:

class NullDevice:
    def write(self, s):
        pass
sys.stdin.close()
sys.stdout = NullDevice()
sys.stderr = NullDevice()

I've used that quite a bit, with some success.

Chad J. Schroeder (author) 19 years, 11 months ago

How it works.

In general, when creating a daemon, you  want to close all file
descriptors inherited  from the parent process.  createDaemon()
closes all file descriptors from 0 to maxfd.  This isn't always
necessary, but it's good practice.



Next, three calls to os.open() are made.  This function returns,
when  successful, the lowest file descriptor not currently open
for  the process.   Since  the  standard  fd's  (0, 1, 2)  were
previously closed,  they're now recreated in the daemon process
with an association to /dev/null rather than the actual standard
I/O streams.

Now, anytime a reference is made to the standard I/O streams in
the daemon process, it's redirected to /dev/null.



As visual proof, try the following.  Modify the os.open() calls
to:

os.open("/testlog", os.O_CREAT|os.O_APPEND|os.O_RDONLY) # stdin
os.open("/testlog", os.O_CREAT|os.O_APPEND|os.O_RDWR)   # stdout
os.open("/testlog", os.O_CREAT|os.O_APPEND|os.O_RDWR)   # stderr

And add the following to the end of the file:

...

# won't be created and no errors will be reported.
open("createDaemon.log", "w").write("rc: %s; pid: %d; ppid: %d; pgrp: %d\n"%\
   (retCode, os.getpid(), os.getppid(), os.getpgrp()))

sys.stdout.write("test stdout\n")
sys.stdout.flush()
sys.stderr.write("test stderr\n")
sys.stderr.flush()

...

The output contained in /testlog verifies the standard I/O streams,
stdout and stderr, are redirected.



Hope this helps.

Doug DeCoudras 19 years, 10 months ago

How is this better than "&"? Hi, I'm newish to Python and I'm wondering how this approach to writing a daemon is better than running a Python script in the background (run from a Linux or UNIX command line) as follows?

$ python myPython.py &

Blair Zajac 19 years, 10 months ago

Why O_RDWR for stdout and stderr. Good read.

One question. Why do you reopen fd's 1 and 2 using O_RDWR instead of O_WRONLY?

Chad J. Schroeder (author) 19 years, 10 months ago

RE: Why O_RDWR for stdout and stderr.

I guess it's a matter of habit and how I've seen it done in the past.



When programming a daemon in C, I tend to open/create a file descriptor to "/dev/null" (or any file)
[fd = open(DEVNULL, O_RDWR)] with the RDWR flag and then duplicate the standard descriptors:



dup2(fd, STDIN_FILENO);
dup2(fd, STDOUT_FILENO);
dup2(fd, STDERR_FILENO);



I just carried the idea/style to the pythonized daemon code.
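
The Python equivalent of that C idiom, for comparison (a sketch; it assumes fds 0-2 are still open and simply repoints them):

import os

# Open /dev/null once, then point the three standard descriptors at it.
fd = os.open(os.devnull, os.O_RDWR)
os.dup2(fd, 0)    # stdin
os.dup2(fd, 1)    # stdout
os.dup2(fd, 2)    # stderr
if fd > 2:
    os.close(fd)  # the duplicated descriptors keep /dev/null open
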
Brad Touesnard 19 years, 9 months ago

The Daemon Process Is Using Most of the CPU. First off, this is some great advice for writing daemon applications.

Problem:

I am running a daemon application using your code to convert the process to a daemon. It works great, but shortly after it starts running, it begins to occupy almost all of the CPU as if the process were in a spinlock. However, the process should be blocking (rather than spinning) as it is waiting to read from a named pipe. Any idea why the process is taking up most of the CPU and how to stop it from doing so? Here's the code after I call the createDaemon() function:

# Write process id to a file
fp = open(pid_file, 'w')
fp.write(str(os.getpid()))
fp.close()

fpipe = open(pipe_path, 'r')

while 1:
  log_line = fpipe.readline()

  input_list = log_line.split('\t')

  if len(input_list) < 4:
    continue

  file_path = input_list[4]

  slash_pos1 = file_path.find('/', 1)
  slash_pos2 = file_path.find('/', slash_pos1 + 1)

  homedir_path = file_path[:slash_pos2]
  logs_path = homedir_path + '/logs'

  if debug_on:
    print 'Logging to ' + logs_path

  if os.path.exists(logs_path):
    fp = open(logs_path + '/xferlog', 'a')
    fcntl.lockf(fp, fcntl.LOCK_EX)
    fp.write(log_line)
    fcntl.lockf(fp, fcntl.LOCK_UN)
    fp.close()

fpipe.close()
Chad J. Schroeder (author) 19 years, 9 months ago

RE: The Daemon Process Is Using Most of the CPU.

My two cents -



It looks like you're creating a busy while loop.  fpipe.readline() doesn't
block when there is nothing left to read; it returns an empty string
immediately, so the loop spins until there is something to read again.
This is why your CPU usage rises.



You may want to look at using something like select and/or the low level
open, read, and write (and other tools) in the os module.
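
A sketch of that suggestion (the pipe path and buffer size are made up for illustration): os.open() the FIFO non-blocking and let select() put the process to sleep until data arrives.

import os
import select
import time

PIPE_PATH = "/tmp/xferpipe"   # hypothetical named pipe

# O_NONBLOCK keeps os.open() from hanging until a writer appears.
fd = os.open(PIPE_PATH, os.O_RDONLY | os.O_NONBLOCK)

while 1:
    # Sleep in the kernel until the pipe is readable.
    select.select([fd], [], [])
    data = os.read(fd, 4096)
    if not data:
        # All writers closed the FIFO; pause briefly so the persistent
        # EOF condition doesn't turn this into a busy loop too.
        time.sleep(1)
        continue
    # ... process the log line(s) in data ...
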
Gijs Molenaar 18 years, 10 months ago

error to syslog. I'm using this code, and it's functioning very well. But sometimes (once a week) my application crashes. I tried to try/except everything, but somehow sometimes something goes wrong.

I've created a function log() that I can use for logging. At the moment it sends me an e-mail and writes an entry to syslog. I want to redirect stderr to this function, but the problem is that stderr is a stream, and I don't really know how to convert a stream write into a log() call.

Does anyone have an idea, or am I thinking about this wrong?

Gijs Molenaar 18 years, 10 months ago

sorry, the answer is in the code I see now...

I feel ashamed. I didn't implement the full code. _and_ I pushed the wrong 'add comment' button.

Sorry for wasting time!

Gijs Molenaar 18 years, 10 months ago

The answer is quite simple:

import sys

class LogErr:
    def write(self, data):
        print "log: " + data

t = LogErr()
sys.stderr = t
sys.stderr.write("test stderr\n")

I've put it here, maybe it can be useful for somebody.

Neal Becker 18 years, 7 months ago

Simplify. I don't see why 2 forks are needed. I think all that's needed is:

if os.fork() == 0:
    os.setsid()
    maxfd = os.sysconf("SC_OPEN_MAX")
    for fd in range(0, maxfd):
        try:
            os.close(fd)
        except OSError:  # ERROR (ignore)
            pass

    # Redirect the standard file descriptors to /dev/null.
    os.open("/dev/null", os.O_RDONLY)    # standard input (0)
    os.open("/dev/null", os.O_RDWR)      # standard output (1)
    os.open("/dev/null", os.O_RDWR)      # standard error (2)
else:
    sys.exit(0)
Noah Spurrier 18 years, 7 months ago

It always takes two forks to make a daemon. This is tradition. Some UNIXes don't require it. It doesn't hurt to do it on all UNIXes. The reason some UNIXes require it is to make sure that the daemon process is NOT a session leader. A session leader process may attempt to acquire a controlling terminal. By definition a daemon does not have a controlling terminal. This is one of the steps that might not be strictly necessary, but it will eliminate one possible source of faults.

tuco Leone 18 years, 6 months ago

Depends On Use.

"How is this better than [background task]?"

For long-running processes that are not tied to a terminal, for example.

tuco Leone 18 years, 6 months ago

Depends On Use.

"I don't see why 2 forks are needed."

To have a separate, stand-alone process running that is abandoned by its parent to live its own life, without ever knowing if its parent is alive or dead, for example.

tuco Leone 18 years, 6 months ago

Benefit To The Parent. And don't forget that a parent process that double-fork()s its daemon child never has to worry about the child turning into a zombie when it dies, either.

jd holt 18 years, 5 months ago

Example of usage. I have tried to implement this but I am having trouble. Here is what I have tried.

#!/usr/bin/env python
import pyDaemon
import time
import sys
import os
pyDaemon.createDaemon()
def logit(self):
        fp = open('test.log','w')
        fp.write('Hello\n')
        fp.close()
while true:
        time.sleep(300)
        logit()

I left the Daemon Code untouched and nothing happens.

Thanks,
Josh holt

Chris Cogdon 17 years, 10 months ago

Error in your program. Because stderr is being redirected to /dev/null, you won't be informed of any errors in your program.

In this case, you have 'self' as a parameter to logit, but it's not a class member. Just remove 'self' and it should work fine.

Also, you might want to use mode "a" rather than "w"; this way you'll see "Hello!" being added every 5 minutes, rather than just one "Hello!"
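
Putting both fixes together (a sketch; note the loop condition must also be True, capitalized; the lowercase 'true' in the original raises a NameError that also disappears into /dev/null):

import time

def logit():
    # Mode 'a' appends a line each iteration instead of overwriting.
    # The path is relative to the daemon's working directory, which the
    # recipe sets to "/" (see the later comment about checking /).
    fp = open('test.log', 'a')
    fp.write('Hello\n')
    fp.close()

while True:
    time.sleep(300)
    logit()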

francis giraldeau 17 years, 10 months ago

Random file descriptor. When using the random module after daemonizing, the file descriptor it relies on is not accessible anymore, and it produces this error:

Traceback (most recent call last):
  File "/usr/share/mille-xterm/lbserver/main.py", line 332, in ?
    main()
  File "/usr/share/mille-xterm/lbserver/main.py", line 287, in main
    random.seed()
  File "/usr/lib/python2.4/random.py", line 110, in seed
    a = long(_hexlify(_urandom(16)), 16)
  File "/usr/lib/python2.4/os.py", line 728, in urandom
    bytes += read(_urandomfd, n - len(bytes))
OSError: [Errno 9] Bad file descriptor

Should I preserve file descriptors? I don't see another way to do it.

Thanks for any hint,

Francis

sasa sasa 17 years, 7 months ago

This is Python bug 1177468; it's apparently fixed in Python CVS since 4th July 2005, but may not have made it to your distribution yet. Without the fixed os.py, your only option is to leave the file descriptors open.

sasa sasa 17 years, 6 months ago

Actually, you do need to close 0, 1, and 2, otherwise you won't get the stdin/stdout/stderr redirects from the os.open() calls.
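
A sketch reconciling those two constraints (illustrative, not from the thread): always close the standard descriptors so the /dev/null redirect lands on fd 0, but leave any higher descriptors (such as the one os.urandom caches in the buggy os.py) untouched.

import os

def close_standard_fds():
    # Close only 0, 1, 2; higher descriptors survive daemonization.
    for fd in (0, 1, 2):
        try:
            os.close(fd)
        except OSError:
            pass

    # os.open() returns the lowest free descriptor, i.e. 0.
    os.open(os.devnull, os.O_RDWR)   # stdin  -> /dev/null
    os.dup2(0, 1)                    # stdout -> /dev/null
    os.dup2(0, 2)                    # stderr -> /dev/null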

Brandon Pierce 17 years, 5 months ago

Can't quite get this to work.

Hello,

I've been testing this with the following code:

#!/usr/bin/python
import pyDaemon
import time

def logit():
    fp = open('test.log','a')
    fp.write('Hello\n')
    fp.close()

pyDaemon.createDaemon()
while 1:
        time.sleep(5)
        logit()

This is basically what one of the other folks earlier was doing.
I have it saved in a file called 'test.py'. I'm new to Python, and have
never done anything with daemons, so I wanted to see how it works.


If I start this using 'python test.py', it seems to start, and I can
see it running as a process, but nothing gets written to the log
file. The file exists and I gave it permissions of 777.


If I comment out the line that executes the createDaemon() function,
it works fine, aside from not being daemonized (naturally). Any ideas?

Thanks!

Brandon
Lloyd Carothers 17 years, 5 months ago

Check /. The daemon code sets the current working dir of the process to /. Your file is either there, or you don't have permission to write there. If you're doing serious logging, check out syslog. There are some recipes on this site.

greg p 16 years, 6 months ago

Handling SIGTERM. How can I make my daemon using this code handle SIGTERM?

See, I have it create child processes when it starts up, and when the daemon is shut down, I want it to clean up those child processes. I figured I could put the cleanup code in a function.

Here's what I tried so far:

def handle_sigterm():
    """Kill all child processes to clean up."""
    logging.info('handle_sigterm called.')

signal.signal(signal.SIGTERM,handle_sigterm)

But when running $ sudo kill -15 [PID] against it, that function never gets called.

Felipe Pereira 16 years, 3 months ago

Redirecting both stdout and stderr to the same file. Here's what I wanted:

* print statements output should go to a logfile (even print >> sys.stderr)
* children (e.g. os.system calls) should output to the same logfile
* without touching daemonize.py code

Here's how I solved it.

First of all, I needed to add:

sys.stdin.close()
sys.stdout.close()
sys.stderr.close()

To the createDaemon() code. Closing the underlying C buffers without closing (and thus notifying) the sys.std* objects seems to be bad. There's also another recipe here about this. According to the Python docs, if you close the sys streams, the associated fd is not really closed, so we still need to close fds 0, 1, and 2.

This was the only change I made to createDaemon(). I think it's not a problem, because I was going to reassign sys.std* afterwards.

Now my code: bla.py

#!/usr/bin/python

import daemon
import os,sys,time

daemon.createDaemon()

sys.stdout.close() #we close /dev/null
sys.stderr.close()

os.close(2) # and associated fd's
os.close(1)

# now we open a new stdout
# * notice that underlying fd is 1
# * bufsize is 1 because we want stdout line buffered (it's my log file)
sys.stdout = open('/tmp/bla','w',1) # redirect stdout
os.dup2(1,2) # fd 2 is now a duplicate of fd 1
sys.stderr = os.fdopen(2,'a',0) # redirect stderr
# from now on sys.stderr appends to fd 2
# * bufsize is 0, I saw this somewhere, I guess no bufferization at all is better for stderr

# now some tests... we want to know if it's bufferized or not
print "stdout"
print >> sys.stderr, "stderr"
os.system("echo stdout-echo") # this is unix only...
os.system("echo stderr-echo > /dev/stderr")
# cat /tmp/bla and check that it's ok; to kill use: pkill -f bla.py
while 1:
        time.sleep(1)

sys.exit(0)

Ps: I had to replace "tumbler" with "aspn" in the URL to comment on this!

Ps2: this is for Unix!

Florian Mayer 15 years, 9 months ago

Thank you for this code snippet. Really useful!

david birdsong 15 years, 2 months ago

Your signal handler is not written properly; you'll need to accept some arguments. The docs spell it out pretty well: http://docs.python.org/library/signal.html
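
For reference, a handler with the two-argument signature the signal module expects (a sketch; the logging call is from greg p's original snippet):

import logging
import signal

def handle_sigterm(signum, frame):
    """Kill all child processes to clean up."""
    # The interpreter passes the signal number and the current stack
    # frame to every handler; both parameters must be accepted.
    logging.info('handle_sigterm called.')

signal.signal(signal.SIGTERM, handle_sigterm)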

Travis H. 14 years, 9 months ago

Python likes to keep file descriptors open to the source of your program and any modules you may import.

Verify this yourself with strace (on Linux) and ktrace/kdump (BSD).

If you attempt to close them all, you'll see some spurious and confusing error messages.

At least I did, in my dfd_keeper project:

http://www.subspacefield.org/security/dfd_keeper/

Travis H. 13 years, 10 months ago

I have created a package for dropping privileges in python, which may be of interest to those writing daemons:

http://www.subspacefield.org/~travis/python/privilege/

Peter Wolfenden 13 years, 10 months ago

Although the posted code is useful for explaining how Python works, I think it's worth pointing out that when deploying a long-running program in a *nix production environment, it is better to have the program (regardless of implementation language) talk to the world via STDIN, STDOUT, and STDERR, and to control it via some utility like "supervise" from daemontools (which can handle automatic restarts on exit, logging, and other admin issues), than to have the program put itself in the background:

http://cr.yp.to/daemontools/faq/create.html#fghack

Note that the daemontools package also provides ways to set the effective UID/GID of a process and impose softlimits:

http://cr.yp.to/daemontools.html
Ryan 13 years, 6 months ago

Just wanted to say thanks for this great share. ActiveState is a fantastic resource. Keep up the great work!:D

Sean Siegel 12 years ago

Great code. Thanks for all the commenting. I am, however, having issues running this on a BeagleBone running Angstrom Linux and Python 2.7.2.

When running from Angstrom, the daemon is created and I am dropped back to the terminal. The daemon process remains running but is killed once I log off my terminal. I have tested the exact same code on Ubuntu and it runs correctly even after logging off. What is the issue?

I have also created a stripped-down version of the "double fork magic" and the same thing happens: the daemon dies after logging out on Angstrom but works flawlessly on Ubuntu.

Any ideas?

My stripped code:

import time
import os
import sys
import signal

pid = os.fork()
if(pid == 0):
    os.setsid()

    pid = os.fork()

    if(pid ==0):

        os.chdir("/")
        os.umask(0)

        while(True):
            time.sleep(1)
    else:
        os._exit(0)
else:
    os._exit(0)
flobbie 10 years, 1 month ago

I think the main process has to wait for the termination of the first child before terminating.

os.wait() should be added in line 126 (before os._exit())

because: Let "pyDaemon.py" be the file from the top, then:

user@machine:~$ python pyDaemon.py & exit

will not always create "createDaemon.log" (a race condition?). I think this is because the controlling tty closes before os.setsid() returns in the child, and therefore SIGHUP will be sent.

With the addition of os.wait() it should work fine.
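
The suggested change, sketched against the recipe's parent branch (only the os.wait() line is new; the rest is from the recipe):

   else:
      # exit() or _exit()?  See the recipe's comment above.
      os.wait()    # Reap the first child; it only exits after os.setsid()
                   # and the second fork have completed.
      os._exit(0)  # Exit parent of the first child.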

Vigneshwaran P 9 years, 5 months ago

Thanks. Really useful