Because I didn't see a good implementation elsewhere, here is my own implementation of a lockfile context manager. It's POSIX-only because I don't have a Windows machine to test cross-platform atomicity on. Sorry about that.
import contextlib, errno, os, time

@contextlib.contextmanager
def flock(path, wait_delay=.1):
    while True:
        try:
            # O_EXCL makes the create atomic: it fails if the file exists
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR)
        except OSError, e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(wait_delay)
            continue
        else:
            break
    try:
        yield fd
    finally:
        os.unlink(path)
Usage is:
with flock('.lockfile'):
    # do whatever.
If you want to actually use the file descriptor, 'as' and 'fdopen' are all you need:
with flock('.lockfile') as fd:
    lockfile = os.fdopen(fd, 'r+')
I don't get why you unlink (remove) the file at the end. Could you explain?
Please delete the previous comment; I shouldn't post when it's 1 AM. I misread lockfile as filelock.
Repeated usage, as in the example above, causes a file descriptor leak!
See also recipe 576891
To fix the file descriptor leak, the finally clause should also close (recycle) the descriptor:
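Something along these lines, sketched here in Python 3 syntax (the os.close call in the finally clause is the fix; the rest is the recipe as posted):

```python
import contextlib, errno, os, time

@contextlib.contextmanager
def flock(path, wait_delay=.1):
    while True:
        try:
            # O_EXCL makes the create atomic: it fails if the file exists
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(wait_delay)
            continue
        else:
            break
    try:
        yield fd
    finally:
        os.close(fd)     # close the descriptor so repeated use doesn't leak
        os.unlink(path)  # then remove the lockfile so others can acquire it
```

Closing before unlinking means each pass through the context manager releases both the descriptor and the file, so looping over flock() no longer accumulates open descriptors.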
I researched the topic of having a single instance of a program running for about an hour, and the simplest and most robust solution is using Unix domain sockets (Linux only; see http://stackoverflow.com/a/1662504/1011025).
The code to acquire the lock is only 6 lines long:
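The linked answer's approach can be sketched roughly as follows; the function and lock names here are illustrative, not the answer's exact code. Binding a name prefixed with a NUL byte uses Linux's abstract socket namespace, which is what makes this Linux-only:

```python
import socket

def acquire_single_instance_lock(name='my_program_lock'):
    # An abstract-namespace Unix socket (leading NUL byte) creates no
    # filesystem entry, so there is no stale lockfile to clean up; the
    # kernel releases the name automatically when the process exits.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.bind('\0' + name)
    except OSError:
        raise RuntimeError('another instance is already running')
    return sock  # keep a reference alive; closing it releases the lock
```

Because only one process can bind a given name at a time, a second call with the same name fails immediately, and there is no unlock step to forget.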