Hi, I'm using this plugin to read logs from the journal and save them to files, split by container name.
The configuration I'm using looks like this:
```
# Logs from docker-systemd
<source>
  @type systemd
  @id in_systemd_docker
  matches [{ "_SYSTEMD_UNIT": "docker.service" }]
  <storage>
    @type local
    persistent true
    path /var/log/fluentd/journald-docker-cursor.json
  </storage>
  read_from_head false
  tag docker.systemd
</source>

<match docker.systemd>
  @type copy
  <store>
    @type file
    @id out_file_docker
    path /file-logs/${$.CONTAINER_TAG}/%Y/%m/%d/std${$.PRIORITY}
    append true
    <format>
      @type single_value
      message_key MESSAGE
    </format>
    <buffer $.CONTAINER_TAG,$.PRIORITY,time>
      @type file
      path /var/log/fluentd/file-buffers/
      timekey 1d
      flush_thread_interval 10
      flush_mode interval
      flush_interval 10s
      flush_at_shutdown true
    </buffer>
  </store>
  <store>
    @type prometheus
    <metric>
      name fluentd_output_status_num_records_total
      type counter
      desc The total number of outgoing records
    </metric>
  </store>
</match>
```
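One detail that may be relevant: the buffer section above flushes with a single thread, since Fluentd's `flush_thread_count` defaults to 1. A hypothetical variant of the same buffer section with parallel flushing would look like this (the value 4 is purely illustrative, not something the report tested):

```
<buffer $.CONTAINER_TAG,$.PRIORITY,time>
  @type file
  path /var/log/fluentd/file-buffers/
  timekey 1d
  flush_mode interval
  flush_interval 10s
  flush_thread_count 4   # parallel flush threads; default is 1
  flush_at_shutdown true
</buffer>
```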
With this setup I'm only getting a throughput of ~1,000 lines per second, while according to https://docs.fluentd.org/deployment/performance-tuning-single-process Fluentd should be able to handle 5,000 lines per second.
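In case it helps anyone reproduce the measurement: the counter defined in the match section is only exposed over HTTP if fluent-plugin-prometheus's input plugin is also configured. A sketch of that source (24231 and /metrics are the plugin's defaults):

```
<source>
  @type prometheus
  bind 0.0.0.0
  port 24231
  metrics_path /metrics
</source>
```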
A few additional details:

- I'm running Fluentd inside a Docker container with 4 GB of memory and 4096 CPU shares.
- I tried local storage as well as shared storage.
- I tried removing the file output and using only the Prometheus metrics output.