I recently wrote a couple of Celery tasks that are purely I/O-bound. So instead of using the default multiprocessing execution pool, I used the Eventlet execution pool. With just a small change in Celery settings, I was off to the races.
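By purely I/O-bound I mean tasks along these lines (a minimal sketch; the task name and URL-fetching body are just an illustration, not my actual code):

# tasks.py -- illustrative only: a stand-in for a purely I/O-bound task.
import urllib2

from celery.task import task

@task
def fetch_url(url):
    # Blocks on network I/O, so the Eventlet pool can run hundreds of
    # these concurrently inside a single worker process.
    return urllib2.urlopen(url).read()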
Wrong! After some amount of time, the worker just sits at 100% CPU and no longer processes tasks. Unfortunately, when coupled with Django, Celery calls monkey_patch a little too late. Django does some magic of its own to various components, so monkey_patch needs to be called before Django does any initialization. After a little digging, I found I could set an environment variable to prevent Celery from doing the monkey patching, and at the same time use that variable to signal manage.py to call monkey_patch before my Django app initializes.
Just add this to the top of your manage.py:
import os

# Celery skips its own (too-late) monkey patching when EVENTLET_NOPATCH is
# set, so the same variable doubles as our signal to patch early, before
# Django initializes anything.
if os.environ.get('EVENTLET_NOPATCH'):
    import eventlet
    import eventlet.debug

    eventlet.monkey_patch()
    # Let more than one greenlet wait on the same file descriptor.
    eventlet.debug.hub_prevent_multiple_readers(False)

    # Optionally turn on blocking detection (see below).
    eventlet_timeout = os.environ.get('EVENTLET_TIMEOUT')
    if eventlet_timeout:
        eventlet.debug.hub_blocking_detection(True, float(eventlet_timeout))
Now, when starting celeryd, just add EVENTLET_NOPATCH='yes' to your manage.py command:
EVENTLET_NOPATCH='yes' python ./manage.py celeryd -c 1000 -Q eventlet_tasks -P eventlet
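For the -Q eventlet_tasks part to do anything, tasks have to be routed to that queue. A minimal routing sketch, assuming a Celery 2.x-era settings.py and the illustrative fetch_url task from above:

# settings.py -- routes the illustrative task to the queue consumed above;
# the task path 'myapp.tasks.fetch_url' is an assumption.
CELERY_ROUTES = {
    'myapp.tasks.fetch_url': {'queue': 'eventlet_tasks'},
}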
Also, if your worker seems to be running a little slow, you can now add EVENTLET_TIMEOUT='1.0' to cause Eventlet to print a stack trace of the blocking thread to stderr. hub_blocking_detection takes a float, the number of seconds to set the alarm for:
EVENTLET_TIMEOUT='0.1' EVENTLET_NOPATCH='yes' python ./manage.py celeryd -c 10 -Q eventlet_tasks -P eventlet
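If you want to convince yourself the detection actually fires, a task that busy-waits without ever yielding to the hub will trip it (illustrative only, not from my actual project):

# tasks.py -- illustrative only: deliberately blocks the Eventlet hub.
import time

from celery.task import task

@task
def spin(seconds):
    # time.sleep() is monkey patched and would yield, so busy-wait instead;
    # with EVENTLET_TIMEOUT='0.1' the blocking detector should flag this
    # task and dump its stack trace after about 0.1 seconds.
    end = time.time() + seconds
    while time.time() < end:
        pass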