
ELK Logstash KV filter plugin

20 Dec
By: admin | Category: Big Data

Filter plugins: common configuration fields


  • add_field: add a field to the event if the filter succeeds
  • add_tag: add any number of tags to the event if the filter succeeds
  • remove_field: remove arbitrary fields from the event if the filter succeeds
  • remove_tag: remove arbitrary tags from the event if the filter succeeds
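
A minimal sketch of these four options inside a filter block (the kv filter is used only as a carrier here; the field, tag, and value names are illustrative, not from the original):

    filter {
      kv {
        add_field => { "parsed_by" => "kv" }   # illustrative field name/value
        add_tag => ["kv_parsed"]               # illustrative tag
        remove_field => ["temp_field"]         # illustrative field to drop
        remove_tag => ["raw"]                  # illustrative tag to drop
      }
    }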

 

Description


This filter helps automatically parse messages (or specific event fields) which are of the foo=bar variety.

For example, if you have a log message which contains ip=1.2.3.4 error=REFUSED, you can parse those automatically by configuring:

    filter {
      kv { }
    }

The above will result in a message of ip=1.2.3.4 error=REFUSED having the fields:

  • ip: 1.2.3.4
  • error: REFUSED
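
Conceptually, the parsed event then carries the original message plus the extracted keys as top-level fields (a sketch only; real events also include metadata such as @timestamp and @version):

    {
      "message": "ip=1.2.3.4 error=REFUSED",
      "ip": "1.2.3.4",
      "error": "REFUSED"
    }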

This is great for postfix, iptables, and other types of logs that tend towards key=value syntax.

You can configure any arbitrary strings to split your data on, in case your data is not structured using = signs and whitespace. For example, this filter can also be used to parse query parameters like foo=bar&baz=fizz by setting the field_split parameter to &.
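
As a minimal sketch, that query-parameter case is just:

    filter {
      kv {
        field_split => "&"
      }
    }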

 

 

Filter plugin: KV


The KV plugin takes key-value data, parses it on a specified delimiter into the Logstash event's data structure, and places the resulting fields at the top level of the event.

Common options:

• field_split: specifies the delimiter between key-value pairs; defaults to a space

field_split

  • Value type is string
  • Default value is " "

A string of characters to use as single-character field delimiters for parsing out key-value pairs.

These characters form a regex character class and thus you must escape special regex characters like [ or ] using \.
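
For instance, a sketch of splitting on literal square brackets, assuming Logstash's default string handling where backslashes are passed through to the plugin:

    filter {
      kv {
        # [ and ] are regex specials, so they must be escaped
        field_split => "\[\]"
      }
    }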

Example with URL Query Strings

For example, to split out the args from a url query string such as ?pin=12345~0&d=123&e=foo@bar.com&oq=bobo&ss=12345:

    filter {
      kv {
        field_split => "&?"
      }
    }

The above splits on both & and ? characters, giving you the following fields:

  • pin: 12345~0
  • d: 123
  • e: foo@bar.com
  • oq: bobo
  • ss: 12345

An example follows.

If a log is stored as key=value pairs, this plugin makes parsing convenient; just specify the delimiter:

[root@localhost ~]# cat /usr/local/logstash/conf.d/test.conf
input {
  file {
    path => "/var/log/nginx/*.log"
    exclude => "error.log"
    start_position => "beginning"
    tags => ["web", "nginx"]    # multiple tags belong in a single array
    type => "access"
    add_field => {
      "project" => "microservice"
      "app" => "product"
    }
  }
}

filter {
  kv {
    field_split => "&?"    # split key-value pairs on & and ?
  }
}

output {
  elasticsearch {
    hosts => ["192.168.179.102:9200","192.168.179.103:9200","192.168.179.104:9200"]
    index => "test-%{+YYYY.MM.dd}"
  }
}

After Logstash is configured, have it reload the configuration and check the output for errors.
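
A hedged sketch of how to validate and reload the config from the shell, assuming the install path shown above (both flags are standard Logstash CLI options):

# Check the pipeline config for syntax errors, then exit
/usr/local/logstash/bin/logstash -f /usr/local/logstash/conf.d/test.conf --config.test_and_exit

# Or run with automatic reload so config edits are picked up without a restart
/usr/local/logstash/bin/logstash -f /usr/local/logstash/conf.d/test.conf --config.reload.automatic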

If the fields are not split out, you can only search inside the raw message field.

That is rigid: multi-dimensional queries become impossible, and visualizations are built on specific structured fields. Fields matter, and key-value parsing is how you extract them.
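
As an illustration (the URL below is hypothetical, not from the original logs), parsing a logged query string with field_split => "&?" yields top-level fields that Kibana can filter and aggregate on directly:

# Hypothetical query string inside a logged URL:
#   /item?id=123&color=red
# With field_split => "&?", the kv filter adds the top-level fields:
#   id:    "123"
#   color: "red"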
