linux – Logstash shutdown stalls when started from a bash script
I wrote a bash script that finds the CSV files in a given folder and pipes them into logstash with the matching config file. However, when I run this script I get the following error saying the shutdown process has stalled, which causes an endless loop until I stop it manually with Ctrl+C:
[2018-03-22T08:59:53,833][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-03-22T08:59:54,211][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-03-22T08:59:57,970][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-03-22T08:59:58,116][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0xf6851b3 run>"}
[2018-03-22T08:59:58,246][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-03-22T08:59:58,976][INFO ][logstash.outputs.file    ] Opening file {:path=>"/home/kevin/otrs_customer_user"}
[2018-03-22T09:00:06,471][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{["LogStash::Filters::CSV", {"separator"=>";", "columns"=>["IOT","OID","SUM","XID","change_by","change_time","city","company","company2","create_by","create_time","customer_id","email","fax","first_name","id","inst_city","inst_first_name","inst_last_name","inst_street","inst_zip","last_name","login","mobile","phone","phone2","street","title","valid_id","varioCustomerId","zip"], "id"=>"f1c74146d6672ca71f489aac1b4c2a332ae515996657981e1ef44b441a7420c8"}]=>[{"thread_id"=>23, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}]}}
[2018-03-22T09:00:06,484][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2018-03-22T09:00:11,438][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}]}}

When I run the same file with the same config manually, e.g. bash logstash -f xyz.config < myfile.csv, it works as desired and the process terminates properly. In the bash script I use essentially the exact same command, yet I hit the error above. I have also noticed that the problem appears to be random rather than tied to the same file and config every time.

My config consists of an stdin input, a csv filter and, for testing, an output that writes JSON to a file (stdout {} has also been removed).

Does anyone have an idea why my process stalls during the script run? Or, failing that, is there a way to tell Logstash to shut down anyway when it stalls?

Sample config:

input {
    stdin {
        id => "${LS_FILE}"
    }
}
filter {
    mutate {
        add_field => { "foo_type" => "${FOO_TYPE}" }
        add_field => { "[@metadata][LS_FILE]" => "${LS_FILE}" }
    }
    if [@metadata][LS_FILE] == "contacts.csv" {
        csv {
            separator => ";"
            columns => [ "IOT","kundenid" ]
        }
        if [kundenid] {
            mutate {
                update => { "kundenid" => "n-%{kundenid}" }
            }
        }
    }
}
output {
    if [@metadata][LS_FILE] == "contacts.csv" {
        file {
            path => "~/contacts_file"
            codec => json_lines
        }
    }
}

Sample script:

LOGSTASH="/customer/app/logstash-6.2.3/bin/logstash"

for file in $(find $TARGETPATH -name *.csv)    # Loop over each CSV file in the given path
do
    if [[ $file = *"foo"* ]]; then
        echo "Importing $file"
        export LS_FILE=$(basename $file)
        bash $LOGSTASH -f $CFG_FILE < $file    # Start logstash
        echo "file $file imported."
    fi
done

I export environment variables in the bash script and set them as metadata in the logstash config so I can apply conditionals for the different input files. The JSON output to a file is only for testing purposes.

Solution
When you ask it to shut down, Logstash tries to work through several steps:
> It stops all input, filter and output plugins
> It processes all in-flight events
> It terminates the Logstash process

There are also various factors that make the shutdown process hard to predict, for example an input plugin that is receiving data at a slow pace.

From the Logstash documentation:
You can use the --pipeline.unsafe_shutdown flag when starting Logstash to force the process to terminate when the shutdown stalls. If --pipeline.unsafe_shutdown is not enabled, Logstash keeps running and keeps producing these stall reports periodically, which is why the problem looks random in your case. Keep in mind that unsafe shutdowns, force-kills of the Logstash process, or crashes of the Logstash process risk losing in-flight events.
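As a rough sketch of how that could look in the script from the question (reusing the question's $TARGETPATH, $CFG_FILE and LS_FILE variables; only the added flag is new, and the find loop is rewritten as a while-read pipe purely so that paths with spaces survive word splitting), the loop simply passes --pipeline.unsafe_shutdown on every invocation:

#!/bin/bash
# Sketch only: the loop from the question, with --pipeline.unsafe_shutdown added
# so a stalled shutdown is force-terminated instead of hanging until Ctrl+C.
# $TARGETPATH and $CFG_FILE are assumed to be set elsewhere, as in the question.
LOGSTASH="/customer/app/logstash-6.2.3/bin/logstash"

find "$TARGETPATH" -name '*.csv' | while read -r file
do
    if [[ $file = *"foo"* ]]; then
        echo "Importing $file"
        export LS_FILE=$(basename "$file")
        # --pipeline.unsafe_shutdown forces Logstash to exit even when the
        # shutdown watcher reports stalled (busy or blocked) plugins.
        bash "$LOGSTASH" -f "$CFG_FILE" --pipeline.unsafe_shutdown < "$file"
        echo "file $file imported."
    fi
done

Because the rest of the loop is untouched, the per-file LS_FILE metadata conditionals in the config keep working as before; the flag only changes what happens once a shutdown has already stalled.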