Fluent Bit is a fast Log Processor and Forwarder for Linux, Windows, Embedded Linux, macOS and BSD family operating systems. It's part of the Graduated Fluentd Ecosystem and a CNCF sub-project.
Fluent Bit lets you collect log events or metrics from different sources, process them, and deliver them to different backends such as Fluentd, Elasticsearch, Splunk, Datadog, Kafka, New Relic, Azure services, AWS services, Google services, NATS, InfluxDB or any custom HTTP endpoint.
Fluent Bit comes with full SQL Stream Processing capabilities: data manipulation and analytics using SQL queries.
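As a sketch of what stream processing looks like, the query below aggregates CPU samples over a tumbling time window. This assumes the stream processor is enabled and the file is referenced from the service configuration via the `Streams_File` option; the file name, stream name and tag here are illustrative, not prescribed:

```sql
-- Example streams file (e.g. stream_processor.conf), referenced from the
-- [SERVICE] section with the Streams_File option. It creates a new stream
-- of averaged CPU usage over 5-second tumbling windows and re-emits the
-- results into the pipeline under the tag 'cpu.avg'.
CREATE STREAM avg_cpu
  WITH (tag='cpu.avg')
  AS SELECT AVG(cpu_p) FROM TAG:'cpu.*' WINDOW TUMBLING (5 SECOND);
```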
Fluent Bit runs on x86_64, x86, arm32v7, and arm64v8 architectures.
Fluent Bit is used widely in production environments. As of 2022, Fluent Bit has surpassed 3 billion downloads and continues to be deployed over 10 million times a day. The following is a preview of who uses Fluent Bit heavily in production:
If your company uses Fluent Bit and is not listed, feel free to open a GitHub issue and we will add the logo.
Our official project documentation for installation, configuration, deployment and development topics is located here:
If you want to build Fluent Bit from sources, you can start with the following commands:

```shell
cd build
cmake ..
make
bin/fluent-bit -i cpu -o stdout -f 1
```
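The same pipeline can also be expressed in a configuration file instead of CLI flags. A minimal sketch (the file name is just an example):

```
# fluent-bit.conf — equivalent of: bin/fluent-bit -i cpu -o stdout -f 1
[SERVICE]
    Flush        1

[INPUT]
    Name         cpu

[OUTPUT]
    Name         stdout
    Match        *
```

Run it with `bin/fluent-bit -c fluent-bit.conf`.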
If you are interested in more details, please refer to the Build & Install section.
We provide packages for most common Linux distributions:
Our Linux container images are the most common deployment model; thousands of new installations happen every day. Learn more about the available images and tags here.
Fluent Bit is fully supported on Windows environments; get started with these instructions.
Fluent Bit runs on Linux on IBM Z (s390x), but the WASM filter plugin is not supported there. The Lua filter plugin works when libluajit is installed on the system and Fluent Bit is built with the FLB_LUAJIT and FLB_PREFER_SYSTEM_LIB_LUAJIT options enabled.
Fluent Bit is based on a pluggable architecture where different plugins play a major role in the data pipeline:
Input plugins:

| name | title | description |
| --- | --- | --- |
| collectd | Collectd | Listen for UDP packets from Collectd. |
| cpu | CPU Usage | Measure total CPU usage of the system. |
| disk | Disk Usage | Measure disk I/Os. |
| dummy | Dummy | Generate dummy events. |
| exec | Exec | Execute external programs and collect event logs. |
| forward | Forward | Fluentd forward protocol. |
| head | Head | Read the first part of files. |
| health | Health | Check the health of TCP services. |
| kmsg | Kernel Log Buffer | Read Linux kernel log buffer messages. |
| mem | Memory Usage | Measure the total amount of memory used on the system. |
| mqtt | MQTT | Start an MQTT server and receive publish messages. |
| netif | Network Traffic | Measure network traffic. |
| proc | Process | Check the health of a process. |
| random | Random | Generate random samples. |
| serial | Serial Interface | Read data from the serial interface. |
| stdin | Standard Input | Read data from the standard input. |
| syslog | Syslog | Read syslog messages from a Unix socket. |
| systemd | Systemd | Read logs from Systemd/Journald. |
| tail | Tail | Tail log files. |
| tcp | TCP | Listen for JSON messages over TCP. |
| thermal | Thermal | Measure system temperature(s). |
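As an illustration, an input from the table above is enabled with an `[INPUT]` section in the configuration file. A sketch using the `tail` plugin (the path and tag below are example values):

```
# Follow a log file and tag its records for later routing
[INPUT]
    Name    tail
    Path    /var/log/syslog
    Tag     host.syslog
```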
Filter plugins:

| name | title | description |
| --- | --- | --- |
| aws | AWS Metadata | Enrich logs with AWS metadata. |
| expect | Expect | Validate that records match certain criteria in structure. |
| grep | Grep | Match or exclude specific records by patterns. |
| kubernetes | Kubernetes | Enrich logs with Kubernetes metadata. |
| lua | Lua | Filter records using Lua scripts. |
| parser | Parser | Parse records. |
| record_modifier | Record Modifier | Modify records. |
| rewrite_tag | Rewrite Tag | Re-emit records under a new tag. |
| stdout | Stdout | Print records to the standard output interface. |
| throttle | Throttle | Apply a rate limit to the event flow. |
| nest | Nest | Nest records under a specified key. |
| modify | Modify | Apply modifications to records. |
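Filters sit between inputs and outputs and are declared with `[FILTER]` sections. A sketch using the `grep` filter (the key name and pattern below are example values):

```
# Keep only records whose 'log' field matches the pattern 'error'
[FILTER]
    Name    grep
    Match   *
    Regex   log error
```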
Output plugins:

| name | title | description |
| --- | --- | --- |
| azure | Azure Log Analytics | Ingest records into Azure Log Analytics. |
| bigquery | BigQuery | Ingest records into Google BigQuery. |
| counter | Count Records | Simple records counter. |
| datadog | Datadog | Ingest logs into Datadog. |
| es | Elasticsearch | Flush records to an Elasticsearch server. |
| file | File | Flush records to a file. |
| flowcounter | FlowCounter | Count records. |
| forward | Forward | Fluentd forward protocol. |
| gelf | GELF | Flush records to Graylog. |
| http | HTTP | Flush records to an HTTP endpoint. |
| influxdb | InfluxDB | Flush records to the InfluxDB time series database. |
| kafka | Apache Kafka | Flush records to Apache Kafka. |
| kafka-rest | Kafka REST Proxy | Flush records to a Kafka REST Proxy server. |
| loki | Loki | Flush records to a Loki server. |
| nats | NATS | Flush records to a NATS server. |
| null | NULL | Throw away events. |
| s3 | S3 | Flush records to Amazon S3. |
| stackdriver | Google Stackdriver Logging | Flush records to the Google Stackdriver Logging service. |
| stdout | Standard Output | Flush records to the standard output. |
| splunk | Splunk | Flush records to a Splunk Enterprise service. |
| tcp | TCP & TLS | Flush records to a TCP server. |
| td | Treasure Data | Flush records to the Treasure Data cloud service for analytics. |
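Putting the three plugin types together, a complete pipeline chains an input, a filter and an output in one configuration file. A minimal sketch (the tag, hostname value and plugin choices are illustrative):

```
[SERVICE]
    Flush     1

# Collect CPU metrics and tag them
[INPUT]
    Name      cpu
    Tag       metrics.cpu

# Enrich every matching record with a static field
[FILTER]
    Name      record_modifier
    Match     metrics.*
    Record    hostname my-host

# Print the enriched records to stdout
[OUTPUT]
    Name      stdout
    Match     *
```

In a real deployment the `stdout` output would typically be swapped for one of the backends in the table above, such as `es`, `kafka` or `s3`.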
Fluent Bit is an open project; several individuals and companies contribute in different ways, such as coding, documenting, testing and spreading the word at events, among others. If you want to learn more about contributing opportunities, please reach out to us through our Community Channels.
If you are interested in contributing to Fluent Bit with bug fixes, new features or coding in general, please refer to the code CONTRIBUTING guidelines. You can also refer to the Beginners Guide to contributing to Fluent Bit here.
Feel free to join us on our Slack channel, Mailing List or IRC:
This program is licensed under the terms of the Apache License v2.0.
Fluent Bit is sponsored and maintained by several companies in the Cloud Native community, including all the major cloud providers.
You can see a list of contributors here.