
ELK Visualization Reports: Solving Multi-Table Aggregation for Report Generation


Most of us have used ELK for log search and display. Kibana's Visualize module can generate all kinds of charts, but it can only run statistics over a single table (index).

Now we face a requirement like this:

1. I have two raw tables: a search table and an order table.

2. I need to compute the daily search-to-order conversion rate and display it as a report on the front end.

Analysis:

This problem involves aggregating each table by time, post-processing the results into one data series, and passing it to front-end JS to render the report.

In principle this process is not hard, but it requires writing the corresponding SQL, back-end code, and front-end JS. Every time a report like this is needed, we would have to write similar code all over again, which is tedious. What we want is to reduce this class of requirement to a few query statements, from which the report then appears automatically.


ELK official documentation: https://www.elastic.co/guide/index.html


I tried to solve the problem from the ELK side. As mentioned above, the Visualize module generates single-table charts; inspecting the request parameters it sends shows that it relies on Elasticsearch aggregations. The first step is to verify whether ES itself can meet the requirement: if ES can aggregate across multiple indices and then compute on the aggregated results, all that remains is to check whether the front end can draw a chart from such a query. So I reformulated the problem as: aggregate several indices separately, then compute over the aggregated results. With that question in mind, I went looking for a solution in the official documentation.


First, let's look at a basic aggregation query (all queries were tested in the Kibana Dev Tools console):

{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "analyze_wildcard": true,
            "query": "*"
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1508401847040,
              "lte": 1508402747040,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "must_not": []
    }
  },
  "aggs": {
    "out_field": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "60s",
        "time_zone": "Asia/Shanghai",
        "min_doc_count": 1
      },
      "aggs": {
        "sum_time": {
          "sum": {
            "field": "time"
          }
        }
      }
    }
  }
}
This query first filters the documents down to the requested time range; the first-level aggregation is a date_histogram on the @timestamp field with a 60s interval, and within each resulting bucket a second aggregation sums the time field.
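For reference, in the Dev Tools console a body like this is sent as a search request against an index pattern; logstash-* below is only a placeholder of mine for whatever your index is actually called:

GET /logstash-*/_search
# ...followed by the JSON body above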


First, consider how to get the data of both index sets at once. Elasticsearch can match both with an index-name wildcard (prefix), so in principle a single request can return a result set spanning the two indices. The next question is how, after the first (time-based) aggregation, to process the two indices separately, that is, how to tell them apart inside each time bucket. Searching the documentation, I found the filter (bucket) aggregation for exactly this (a combined sketch follows the example below):
https://www.elastic.co/guide/en/elasticsearch/reference/5.0/search-aggregations-bucket-filter-aggregation.html
Example:

{
    "aggs" : {
        "red_products" : {
            "filter" : { "term": { "color": "red" } },
            "aggs" : {
                "avg_price" : { "avg" : { "field" : "price" } }
            }
        }
    }
}
This means aggregation can run over documents that pass a filter first. With this filter aggregation, all we need is a common field that identifies which index each document came from, and we can tell the indices apart and aggregate each table's data separately.
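As a minimal sketch of this step (the index names search_log and order_log, and the use of the built-in _index metadata field as the identifying field, are my own assumptions for illustration): inside each time bucket, two filter sub-aggregations split the documents by source index, and each sub-bucket's doc_count gives that index's volume per interval:

GET /search_log,order_log/_search
{
  "size": 0,
  "aggs": {
    "per_day": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1d",
        "time_zone": "Asia/Shanghai"
      },
      "aggs": {
        "searches": {
          "filter": { "term": { "_index": "search_log" } }
        },
        "orders": {
          "filter": { "term": { "_index": "order_log" } }
        }
      }
    }
  }
}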

Next, once each index has been aggregated on its own, the aggregated results have to be combined. For the conversion rate described earlier, we have now reached the point of having each table's volume for a given time span (one day), and we still need to divide one by the other. The question becomes: how do we get hold of both aggregation results and compute on them? Back to the official documentation, which has a section on the bucket script aggregation:
https://www.elastic.co/guide/en/elasticsearch/reference/5.0/search-aggregations-pipeline-bucket-script-aggregation.html

{
    "bucket_script": {
        "buckets_path": {
            "my_var1": "the_sum",
            "my_var2": "the_value_count"
        },
        "script": "params.my_var1 / params.my_var2"
    }
}

Here we can see that an aggregation's result can be referenced by its bucket path, and a script can then compute over those results. At this point the theory holds up and the approach is feasible. Putting the pieces together gives the sketch below; after it comes an experimental query I actually ran, with its result.
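A rough sketch of a full conversion-rate query combining the three pieces (daily histogram, per-index filter buckets, bucket_script division). Again, the index names and the _index-based filters are my assumptions, not from the original; buckets_path addresses a sub-bucket's document count as searches>_count and orders>_count:

GET /search_log,order_log/_search
{
  "size": 0,
  "aggs": {
    "per_day": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1d",
        "time_zone": "Asia/Shanghai"
      },
      "aggs": {
        "searches": {
          "filter": { "term": { "_index": "search_log" } }
        },
        "orders": {
          "filter": { "term": { "_index": "order_log" } }
        },
        "conversion_rate": {
          "bucket_script": {
            "buckets_path": {
              "orders": "orders>_count",
              "searches": "searches>_count"
            },
            "script": "params.orders / params.searches"
          }
        }
      }
    }
  }
}

The bucket_script runs once per date_histogram bucket, so each day ends up with its own conversion_rate value. Note that a day with zero searches would divide by zero, so in practice a guard such as params.searches > 0 ? params.orders / params.searches : 0 may be needed.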

The experimental query:
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "analyze_wildcard": true,
            "query": "*"
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1508401847040,
              "lte": 1508402747040,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "must_not": []
    }
  },
  "aggs": {
    "out_field": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "60s",
        "time_zone": "Asia/Shanghai",
        "min_doc_count": 1,
        "order": {
          "sum_atime": "desc"
        }
      },
      "aggs": {
        "sum_atime": {
          "sum": {
            "script": {
              "inline": "doc['atime'].value",
              "lang": "expression"
            }
          }
        },
        "sum_time": {
          "sum": {
            "field": "time"
          }
        },
        "opr_doc_two": {
          "bucket_script": {
            "buckets_path": {
              "tShirtSales": "sum_atime",
              "totalSales": "sum_time"
            },
            "script": "params.tShirtSales / params.totalSales"
          }
        }
      }
    }
  }
}
The result:
{
  "took": 216,
  "timed_out": false,
  "_shards": {
    "total": 700,
    "successful": 265,
    "failed": 435,
    "failures": [
      {
        "shard": 0,
        "index": ".kibana",
        "node": "53Qqb82cRXSQ2upasHdfbA",
        "reason": {
          "type": "script_exception",
          "reason": "link error",
          "caused_by": {
            "type": "parse_exception",
            "reason": "parse_exception: Field [atime] does not exist in mappings"
          },
          "script_stack": [
            "doc['atime'].value",
            "     ^---- HERE"
          ],
          "script": "doc['atime'].value",
          "lang": "expression"
        }
      }
    ]
  },
  "hits": {
    "total": 313546,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "out_field": {
      "buckets": [
        { "key_as_string": "2017-10-19T16:41:00.000+08:00", "key": 1508402460000, "doc_count": 25583, "sum_time": { "value": 969439 }, "sum_atime": { "value": 651787 }, "opr_doc_two": { "value": 0.6723342056591493 } },
        { "key_as_string": "2017-10-19T16:40:00.000+08:00", "key": 1508402400000, "doc_count": 22246, "sum_time": { "value": 845529 }, "sum_atime": { "value": 554232 }, "opr_doc_two": { "value": 0.6554855007929947 } },
        { "key_as_string": "2017-10-19T16:42:00.000+08:00", "key": 1508402520000, "doc_count": 20575, "sum_time": { "value": 778218 }, "sum_atime": { "value": 521955 }, "opr_doc_two": { "value": 0.6707053807544929 } },
        { "key_as_string": "2017-10-19T16:34:00.000+08:00", "key": 1508402040000, "doc_count": 21250, "sum_time": { "value": 767982 }, "sum_atime": { "value": 513100 }, "opr_doc_two": { "value": 0.6681146172696756 } },
        { "key_as_string": "2017-10-19T16:31:00.000+08:00", "key": 1508401860000, "doc_count": 19940, "sum_time": { "value": 775634 }, "sum_atime": { "value": 509484 }, "opr_doc_two": { "value": 0.6568613547111137 } },
        { "key_as_string": "2017-10-19T16:35:00.000+08:00", "key": 1508402100000, "doc_count": 21473, "sum_time": { "value": 793853 }, "sum_atime": { "value": 502308 }, "opr_doc_two": { "value": 0.6327468687527792 } },
        { "key_as_string": "2017-10-19T16:43:00.000+08:00", "key": 1508402580000, "doc_count": 20519, "sum_time": { "value": 786313 }, "sum_atime": { "value": 501412 }, "opr_doc_two": { "value": 0.6376748190606031 } },
        { "key_as_string": "2017-10-19T16:38:00.000+08:00", "key": 1508402280000, "doc_count": 20088, "sum_time": { "value": 730432 }, "sum_atime": { "value": 482921 }, "opr_doc_two": { "value": 0.661144363883291 } },
        { "key_as_string": "2017-10-19T16:39:00.000+08:00", "key": 1508402340000, "doc_count": 19921, "sum_time": { "value": 719913 }, "sum_atime": { "value": 479382 }, "opr_doc_two": { "value": 0.6658887948960499 } },
        { "key_as_string": "2017-10-19T16:33:00.000+08:00", "key": 1508401980000, "doc_count": 19884, "sum_time": { "value": 735780 }, "sum_atime": { "value": 479306 }, "opr_doc_two": { "value": 0.6514256978988284 } },
        { "key_as_string": "2017-10-19T16:36:00.000+08:00", "key": 1508402160000, "doc_count": 20203, "sum_time": { "value": 739149 }, "sum_atime": { "value": 473916 }, "opr_doc_two": { "value": 0.6411643660479822 } },
        { "key_as_string": "2017-10-19T16:44:00.000+08:00", "key": 1508402640000, "doc_count": 19370, "sum_time": { "value": 712834 }, "sum_atime": { "value": 469712 }, "opr_doc_two": { "value": 0.6589360215702393 } },
        { "key_as_string": "2017-10-19T16:45:00.000+08:00", "key": 1508402700000, "doc_count": 17464, "sum_time": { "value": 670617 }, "sum_atime": { "value": 466811 }, "opr_doc_two": { "value": 0.696091807991745 } },
        { "key_as_string": "2017-10-19T16:37:00.000+08:00", "key": 1508402220000, "doc_count": 19610, "sum_time": { "value": 742356 }, "sum_atime": { "value": 465973 }, "opr_doc_two": { "value": 0.627694798721907 } },
        { "key_as_string": "2017-10-19T16:32:00.000+08:00", "key": 1508401920000, "doc_count": 21206, "sum_time": { "value": 734534 }, "sum_atime": { "value": 456836 }, "opr_doc_two": { "value": 0.6219398965874963 } },
        { "key_as_string": "2017-10-19T16:30:00.000+08:00", "key": 1508401800000, "doc_count": 4214, "sum_time": { "value": 158243 }, "sum_atime": { "value": 103033 }, "opr_doc_two": { "value": 0.6511062100693238 } }
      ]
    }
  }
}

Screenshot: (image omitted)



Now that feasibility is settled in theory, the next step is to create some data and work out the actual implementation. For this I set up a test ELK stack of my own, following this tutorial: http://www.cnblogs.com/yuhuLin/p/7018858.html (the ES Head plugin install in it needs the ES address changed in its JS).
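For the test data itself, the bulk API is one way to load documents; the indices, types, and fields below are placeholders of my own, not from the tutorial:

POST /_bulk
{ "index": { "_index": "search_log", "_type": "log" } }
{ "@timestamp": "2017-10-19T16:30:00+08:00", "keyword": "shoes" }
{ "index": { "_index": "order_log", "_type": "log" } }
{ "@timestamp": "2017-10-19T16:31:00+08:00", "order_id": "A001" }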

With the environment and data ready, the next question is how to get Visualize to produce this kind of chart, which again means reading the official Kibana documentation:

https://www.elastic.co/guide/en/kibana/current/visualize.html

Going through the Visualize section turned up nothing about drawing charts from a hand-written query. Looking further through the rest of the Kibana documentation, I found Timelion, whose official description alone suggests it can do the job:
https://www.elastic.co/guide/en/kibana/current/timelion.html

Timelion

Timelion is a time series data visualizer that enables you to combine totally independent data sources within a single visualization. It's driven by a simple expression language you use to retrieve time series data, perform calculations to tease out the answers to complex questions, and visualize the results.

Have a look at the official documentation's usage examples for this plugin and the function reference in its docs; a sketch of the kind of expression involved follows. Below it is the final chart. I only generated two days of data.
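As a rough sketch of the kind of Timelion expression this involves (the index names are my assumptions): .es() fetches a count series per index, .divide() divides one series by another, and .label() names the line:

.es(index=order_log, timefield=@timestamp).divide(.es(index=search_log, timefield=@timestamp)).label('search-to-order conversion rate')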

Check against the raw data and work the numbers out yourself: the computed ratio is correct.

Screenshot: (images omitted)

This wraps up our exploration of producing charts from multi-table queries. The step that verified ES can compute on top of aggregation results ended up unused here, but it may well come in handy later when the processing is done with ES queries alone.
