
python – Celeryd multi with supervisord

Published: 2020-12-20 12:33:13 · Category: Python · Source: web
Trying to run celery multi under supervisord (3.2.2).

It seems supervisord cannot handle it; a single celery worker works fine.

Here is my supervisord setup. The output when celery multi starts:

celery multi v3.1.20 (Cipater)
> Starting nodes...
    > celery1@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.
    > celery2@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.

celeryd.conf

; ==================================
;  celery worker supervisor example
; ==================================

[program:celery]
; Set full path to celery program if using virtualenv
command=/usr/local/src/imbue/application/imbue/supervisorctl/celeryd/celeryd.sh
process_name = %(program_name)s%(process_num)d@%(host_node_name)s
directory=/usr/local/src/imbue/application/imbue/conf/
numprocs=2
stderr_logfile=/usr/local/src/imbue/application/imbue/log/celeryd.err
logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log
stdout_logfile_backups = 10
stderr_logfile_backups = 10
stdout_logfile_maxbytes = 50MB
stderr_logfile_maxbytes = 50MB
autostart=true
autorestart=false
startsecs=10

I used the following supervisord variables to mimic the way I start celery:

> %(program_name)s
> %(process_num)d
> @
> %(host_node_name)s
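These placeholders are expanded with Python %-style mapping formatting; a minimal sketch (the values below are made up for illustration) of what the process_name template produces:

```python
# Illustration only: supervisord expands process_name with Python
# %-style mapping formatting; the values here are hypothetical.
template = "%(program_name)s%(process_num)d@%(host_node_name)s"
values = {
    "program_name": "celery",
    "process_num": 1,
    "host_node_name": "parzee-dev-app-sfo1",
}
print(template % values)  # prints celery1@parzee-dev-app-sfo1
```

This is why supervisorctl lists the processes as celery:celery1@parzee-dev-app-sfo1 and so on.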

Supervisorctl

supervisorctl 
celery:celery1@parzee-dev-app-sfo1   FATAL     Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1   FATAL     Exited too quickly (process log may have details)

I tried changing this value in /usr/local/lib/python2.7/dist-packages/supervisor/options.py from 0 to 1:

numprocs_start = integer(get(section,'numprocs_start',1))
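Note that numprocs_start is also an ordinary per-program option, so the same effect can be had in the config file without patching supervisor's source — a sketch against the [program:celery] section above:

```ini
[program:celery]
numprocs = 2
numprocs_start = 1   ; number processes 1..2 instead of the default 0..1
```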

I still get:

celery:celery1@parzee-dev-app-sfo1   FATAL     Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1   EXITED    May 14 12:47 AM

Celery is starting, but supervisord is not tracking it.

root@parzee-dev-app-sfo1:/etc/supervisor#

ps -ef | grep celery
root      2728     1  1 00:46 ?        00:00:02 [celeryd: celery1@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery1@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/1.pid)
root      2973     1  1 00:46 ?        00:00:02 [celeryd: celery2@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery2@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/2.pid)

celery.sh

source ~/.profile
CELERY_LOGFILE=/usr/local/src/imbue/application/imbue/log/celeryd.log
CELERYD_OPTS=" --loglevel=DEBUG"
CELERY_WORKERS=2
CELERY_PROCESSES=16
cd /usr/local/src/imbue/application/imbue/conf
exec celery multi start $CELERY_WORKERS -P processes -c $CELERY_PROCESSES -n celeryd@$HOSTNAME -f $CELERY_LOGFILE $CELERYD_OPTS

Related:
Running celeryd_multi with supervisor
How to use Supervisor + Django + Celery with multiple Queues and Workers?

Solution

Since supervisor monitors (starts/stops/restarts) its processes, each process must run in the foreground — it must not daemonize.

celery multi daemonizes itself, so it cannot be run under supervisor. (That is why the workers in the ps output above have a parent PID of 1: they detached and were re-parented to init, leaving supervisord nothing to track.)

Instead, you can create a separate program for each worker and put them in a group:

[program:worker1]
command=celery worker -l info -n worker1

[program:worker2]
command=celery worker -l info -n worker2

[group:workers]
programs=worker1,worker2

Alternatively, you can write a shell script that keeps a daemonizing process effectively in the foreground, like this:

#! /usr/bin/env bash
set -eu

pidfile="/var/run/your-daemon.pid"
command=/usr/sbin/your-daemon

# Proxy signals
function kill_app(){
    kill $(cat $pidfile)
    exit 0 # exit okay
}
trap "kill_app" SIGINT SIGTERM

# Launch daemon
celery multi start 2 -l INFO

sleep 2

# Loop while the pidfile and the process exist
while [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" ; do
    sleep 0.5
done
exit 1 # exit unexpected (exit codes must be 0-255)
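The wait loop at the heart of that wrapper can be exercised on its own; a minimal sketch using sleep as a stand-in for the daemonized process (the pidfile is a temporary file here, not the real path):

```shell
#!/usr/bin/env bash
# Demonstrate the pidfile wait-loop with 'sleep' standing in for the daemon.
set -eu

pidfile=$(mktemp)
sleep 1 &                 # stand-in "daemon"
echo $! > "$pidfile"

# Same loop as the wrapper: stay in the foreground while the
# pidfile exists and the process it names is still alive
while [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; do
    sleep 0.2
done
rm -f "$pidfile"
echo "daemon exited; wrapper would now exit too"
```

As long as this script is what supervisord launches, supervisord sees a live foreground process for the lifetime of the real daemon.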
