ELK Logstash Output Plugin: Elasticsearch
Output Plugins (Output)
Elasticsearch
If you plan to use the Kibana web interface to analyze data transformed by Logstash, use the Elasticsearch output plugin to get your data into Elasticsearch.
Writing to different indices: best practices
You cannot use dynamic variable substitution when `ilm_enabled` is true and when using `ilm_rollover_alias`.
If you’re sending events to the same Elasticsearch cluster, but you’re targeting different indices you can:
- use different Elasticsearch outputs, each one with a different value for the `index` parameter
- use one Elasticsearch output and use dynamic variable substitution for the `index` parameter
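As a sketch, the first option (separate outputs) might look like the following, assuming a `log_type` field and hypothetical index name prefixes:

```
output {
  if [log_type] == "app" {
    # hypothetical index prefix "app"
    elasticsearch { index => "app-%{+YYYY.MM.dd}" }
  } else {
    # hypothetical index prefix "infra"
    elasticsearch { index => "infra-%{+YYYY.MM.dd}" }
  }
}
```

Note that each `elasticsearch { }` block here is a separate client with its own connection pool, which is why the single-output form with dynamic substitution is usually preferred.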
Each Elasticsearch output is a new client connected to the cluster:
- it has to initialize the client and connect to Elasticsearch (restart time is longer if you have more clients)
- it has an associated connection pool
In order to minimize the number of open connections to Elasticsearch, maximize the bulk size and reduce the number of "small" bulk requests (which could easily fill up the queue), it is usually more efficient to have a single Elasticsearch output.
Example:

```
output {
  elasticsearch {
    index => "%{[some_field][sub_field]}-%{+YYYY.MM.dd}"
  }
}
```
What should you do if there is no field in the event containing the destination index prefix? You can use the `mutate` filter and conditionals to add a `[@metadata]` field (see https://www.elastic.co/guide/en/logstash/7.9/event-dependent-configuration.html#metadata) to set the destination index for each event. The `[@metadata]` fields will not be sent to Elasticsearch.
Example:

```
filter {
  if [log_type] in ["test", "staging"] {
    mutate { add_field => { "[@metadata][target_index]" => "test-%{+YYYY.MM}" } }
  } else if [log_type] == "production" {
    mutate { add_field => { "[@metadata][target_index]" => "prod-%{+YYYY.MM.dd}" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "unknown-%{+YYYY}" } }
  }
}

output {
  elasticsearch {
    index => "%{[@metadata][target_index]}"
  }
}
```
hosts
- Value type is uri
- Default value is `[//127.0.0.1]`

Sets the host(s) of the remote instance. If given an array, it will load balance requests across the hosts specified in the `hosts` parameter. Remember the `http` protocol uses the http address (e.g. 9200, not 9300).
Examples:
`"127.0.0.1"`
`["127.0.0.1:9200","127.0.0.2:9200"]`
`["http://127.0.0.1"]`
`["https://127.0.0.1:9200"]`
`["https://127.0.0.1:9200/mypath"]` (If using a proxy on a subpath)
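In a pipeline, the `hosts` option sits alongside `index` in the same output block. A minimal sketch (the host addresses below are placeholders):

```
output {
  elasticsearch {
    # placeholder addresses; requests are load balanced across the array
    hosts => ["https://es1.example.internal:9200", "https://es2.example.internal:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```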
index
- Value type is string
- Default value depends on whether `ecs_compatibility` is enabled:
  - ECS Compatibility disabled: `"logstash-%{+yyyy.MM.dd}"`
  - ECS Compatibility enabled: `"ecs-logstash-%{+yyyy.MM.dd}"`

The index to write events to. This can be dynamic using the `%{foo}` syntax. The default value will partition your indices by day so you can more easily delete old data or only search specific date ranges.
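To illustrate the benefit of daily partitioning, consider an output equivalent to the ECS-disabled default (the date values below are hypothetical):

```
output {
  elasticsearch {
    # one index per day: logstash-2023.01.01, logstash-2023.01.02, ...
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
```

Old data can then be removed by deleting whole daily indices (e.g. everything matching `logstash-2023.01.*`) instead of deleting individual documents, and date-range searches can target only the relevant indices.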