Stream any log file using OTel Collector
Applies to: Elastic Stack and Elastic Cloud Serverless
This guide shows you how to manually configure the Elastic Distribution of OpenTelemetry (EDOT) Collector to send your log data to Elasticsearch by editing the otel.yml file. For an Elastic Agent equivalent, refer to Stream any log file using Elastic Agent.
For more OpenTelemetry quickstarts, refer to EDOT quickstarts.
To follow the steps in this guide, you need an Elastic Stack deployment that includes:
- Elasticsearch for storing and searching data
- Kibana for visualizing and managing data
- A Kibana user with All privileges on Fleet and Integrations. Because many Integrations assets are shared across spaces, users need these Kibana privileges in all spaces.
- Integrations Server (included by default in every Elastic Cloud Hosted deployment)
To get started quickly, create an Elastic Cloud Hosted deployment and host it on AWS, GCP, or Azure. Try it out for free.
The Admin role or higher is required to onboard log data. To learn more, refer to Assign user roles and privileges.
Complete these steps to install and configure the EDOT Collector and send your log data to Elastic Observability.
- Download and install the EDOT Collector
On your host, download and extract the EDOT Collector installation package that corresponds with your system:

Linux

```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.1.3-linux-x86_64.tar.gz
tar xzvf elastic-agent-9.1.3-linux-x86_64.tar.gz
```

macOS

```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.1.3-darwin-x86_64.tar.gz
tar xzvf elastic-agent-9.1.3-darwin-x86_64.tar.gz
```

Windows

```powershell
# PowerShell 5.0+
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.1.3-windows-x86_64.zip -OutFile elastic-agent-9.1.3-windows-x86_64.zip
Expand-Archive .\elastic-agent-9.1.3-windows-x86_64.zip
```
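Optionally, you can verify the archive before extracting it. Elastic publishes a `.sha512` checksum file alongside each artifact on artifacts.elastic.co; the verification mechanic looks like this (sketched here with a dummy file standing in for the real archive):

```shell
# Simulated checksum verification; a dummy file stands in for the real archive.
# For the real download, fetch <artifact>.sha512 from the same URL as the artifact.
printf 'archive-bytes' > /tmp/elastic-agent-demo.tar.gz
sha512sum /tmp/elastic-agent-demo.tar.gz > /tmp/elastic-agent-demo.tar.gz.sha512
# Prints "<file>: OK" when the file is intact, and fails otherwise
sha512sum -c /tmp/elastic-agent-demo.tar.gz.sha512
```

On macOS, `shasum -a 512` can be used in place of `sha512sum`.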
- Configure the EDOT Collector
Follow these steps to retrieve the managed OTLP endpoint URL for your Serverless project:
- In Elastic Cloud Serverless, open your Observability project.
- Go to Add data → Application → OpenTelemetry.
- Select Managed OTLP Endpoint in the second step.
- Copy the OTLP endpoint configuration value.
- Select Create API Key to generate an API key.
Replace <ELASTIC_OTLP_ENDPOINT> and <ELASTIC_API_KEY> before applying the following commands:

Linux

```shell
ELASTIC_OTLP_ENDPOINT=<ELASTIC_OTLP_ENDPOINT> && \
ELASTIC_API_KEY=<ELASTIC_API_KEY> && \
cp ./otel_samples/managed_otlp/logs_metrics_traces.yml ./otel.yml && \
mkdir -p ./data/otelcol && \
sed -i "s#\${env:STORAGE_DIR}#${PWD}/data/otelcol#g" ./otel.yml && \
sed -i "s#\${env:ELASTIC_OTLP_ENDPOINT}#${ELASTIC_OTLP_ENDPOINT}#g" ./otel.yml && \
sed -i "s#\${env:ELASTIC_API_KEY}#${ELASTIC_API_KEY}#g" ./otel.yml
```

macOS

```shell
ELASTIC_OTLP_ENDPOINT=<ELASTIC_OTLP_ENDPOINT> && \
ELASTIC_API_KEY=<ELASTIC_API_KEY> && \
cp ./otel_samples/managed_otlp/logs_metrics_traces.yml ./otel.yml && \
mkdir -p ./data/otelcol && \
sed -i '' "s#\${env:STORAGE_DIR}#${PWD}/data/otelcol#g" ./otel.yml && \
sed -i '' "s#\${env:ELASTIC_OTLP_ENDPOINT}#${ELASTIC_OTLP_ENDPOINT}#g" ./otel.yml && \
sed -i '' "s#\${env:ELASTIC_API_KEY}#${ELASTIC_API_KEY}#g" ./otel.yml
```

Windows (PowerShell)

```powershell
Remove-Item -Path .\otel.yml -ErrorAction SilentlyContinue
Copy-Item .\otel_samples\managed_otlp\logs_metrics_traces.yml .\otel.yml
New-Item -ItemType Directory -Force -Path .\data\otelcol | Out-Null
$content = Get-Content .\otel.yml
$content = $content -replace '\${env:STORAGE_DIR}', "$PWD\data\otelcol"
$content = $content -replace '\${env:ELASTIC_OTLP_ENDPOINT}', "<ELASTIC_OTLP_ENDPOINT>"
$content = $content -replace '\${env:ELASTIC_API_KEY}', "<ELASTIC_API_KEY>"
$content | Set-Content .\otel.yml
```
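The substitution commands above simply replace each `${env:...}` placeholder in the sample config with a literal value. A minimal sketch of the same mechanic on a throwaway file (the endpoint and key values here are made up):

```shell
# Throwaway stand-in for the sample config; values below are illustrative only
cat > /tmp/otel-demo.yml <<'EOF'
endpoint: ${env:ELASTIC_OTLP_ENDPOINT}
api_key: ${env:ELASTIC_API_KEY}
EOF
ELASTIC_OTLP_ENDPOINT="https://example.ingest.elastic.cloud:443"
ELASTIC_API_KEY="demo-key"
# '#' is used as the sed delimiter so the '/' characters in the URL don't clash
sed -i "s#\${env:ELASTIC_OTLP_ENDPOINT}#${ELASTIC_OTLP_ENDPOINT}#g" /tmp/otel-demo.yml
sed -i "s#\${env:ELASTIC_API_KEY}#${ELASTIC_API_KEY}#g" /tmp/otel-demo.yml
cat /tmp/otel-demo.yml   # no ${env:...} placeholders should remain
```

On macOS, use `sed -i ''` as shown in the macOS tab above; this sketch uses the GNU (Linux) form.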
- Configure log file collection
To collect logs from specific log files, modify the otel.yml configuration file. The configuration includes receivers, processors, and exporters that handle log data. Here's an example configuration for collecting log files with Elastic Stack:
otel.yml for logs collection (Elastic Stack)
```yaml
receivers:
  # Receiver for platform specific log files
  filelog/platformlogs:
    include: [ /var/log/*.log ]
    retry_on_failure:
      enabled: true
    start_at: end
    storage: file_storage
    # start_at: beginning

extensions:
  file_storage:
    directory: ${env:STORAGE_DIR}

processors:
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
      resource_attributes:
        host.name:
          enabled: true
        host.id:
          enabled: false
        host.arch:
          enabled: true
        host.ip:
          enabled: true
        host.mac:
          enabled: true
        host.cpu.vendor.id:
          enabled: true
        host.cpu.family:
          enabled: true
        host.cpu.model.id:
          enabled: true
        host.cpu.model.name:
          enabled: true
        host.cpu.stepping:
          enabled: true
        host.cpu.cache.l2.size:
          enabled: true
        os.description:
          enabled: true
        os.type:
          enabled: true

exporters:
  # Exporter to print the first 5 logs/metrics and then every 1000th
  debug:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 1000
  # Exporter to send logs and metrics to Elasticsearch
  elasticsearch/otel:
    endpoints: ["${env:ELASTIC_ENDPOINT}"]
    api_key: ${env:ELASTIC_API_KEY}
    mapping:
      mode: otel

service:
  extensions: [file_storage]
  pipelines:
    logs/platformlogs:
      receivers: [filelog/platformlogs]
      processors: [resourcedetection]
      exporters: [debug, elasticsearch/otel]
```
Here's an example configuration for collecting log files with Elastic Cloud Serverless:
otel.yml for logs collection (Serverless)
```yaml
receivers:
  # Receiver for platform specific log files
  filelog/platformlogs:
    include: [/var/log/*.log]
    retry_on_failure:
      enabled: true
    start_at: end
    storage: file_storage
    # start_at: beginning

extensions:
  file_storage:
    directory: ${env:STORAGE_DIR}

processors:
  resourcedetection:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
      resource_attributes:
        host.name:
          enabled: true
        host.id:
          enabled: false
        host.arch:
          enabled: true
        host.ip:
          enabled: true
        host.mac:
          enabled: true
        host.cpu.vendor.id:
          enabled: true
        host.cpu.family:
          enabled: true
        host.cpu.model.id:
          enabled: true
        host.cpu.model.name:
          enabled: true
        host.cpu.stepping:
          enabled: true
        host.cpu.cache.l2.size:
          enabled: true
        os.description:
          enabled: true
        os.type:
          enabled: true

exporters:
  # Exporter to print the first 5 logs/metrics and then every 1000th
  debug:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 1000
  # Exporter to send logs and metrics to the Elasticsearch Managed OTLP Input
  otlp/ingest:
    endpoint: ${env:ELASTIC_OTLP_ENDPOINT}
    headers:
      Authorization: ApiKey ${env:ELASTIC_API_KEY}

service:
  extensions: [file_storage]
  pipelines:
    logs/platformlogs:
      receivers: [filelog/platformlogs]
      processors: [resourcedetection]
      exporters: [debug, otlp/ingest]
```
Key configuration elements:

- `receivers.filelog/platformlogs.include`: Specifies the path to your log files. You can use patterns like `/var/log/*.log`.
- `processors.resourcedetection`: Automatically detects and adds host system information to your logs.
- `extensions.file_storage`: Provides persistent storage for the Collector's state.
- `exporters`: Configures how data is sent to Elasticsearch (Elastic Stack) or the managed OTLP endpoint (Serverless).
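As a quick illustration of how an `include` glob such as `*.log` selects files (the directory and file names below are made up for the demo):

```shell
# Illustration only: which file names a pattern like *.log selects
mkdir -p /tmp/logdemo && cd /tmp/logdemo
touch app.log system.log archive.log.1 notes.txt
# Matches app.log and system.log; rotated archive.log.1 and notes.txt are skipped
ls *.log
```

To pick up rotated files as well, you would widen the pattern (for example `*.log*`), at the cost of possibly re-reading archived data.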
- Run the EDOT Collector
Run the following command to start the EDOT Collector:

Linux and macOS

```shell
sudo ./otelcol --config otel.yml
```

Windows

```powershell
.\elastic-agent.exe otel --config otel.yml
```
Note: The Collector opens ports 4317 and 4318 to receive application data from locally running OTel SDKs without authentication. This allows the SDKs to send data without any further configuration, as they use this endpoint by default.
If you're not seeing your log files in the UI, verify the following:
- The path to your log files under include is correct.
- Your API key is properly set in the environment variables.
- The OTLP endpoint URL is correct and accessible.
- The Collector is running without errors (check the console output).
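One quick way to triage is to scan the Collector's console output for error lines; a 401 from the exporter, for example, usually points at a bad or missing API key. A sketch with simulated output (the log lines below are fabricated for illustration):

```shell
# Simulated Collector console output; the real Collector writes this to stdout/stderr
cat > /tmp/otelcol-console.log <<'EOF'
2025-01-01T00:00:00Z info  service  Everything is ready. Begin running and processing data.
2025-01-01T00:00:01Z error exporterhelper  Exporting failed: 401 Unauthorized
EOF
# Surface only the error lines; a 401 here suggests re-checking the API key
grep -i 'error' /tmp/otelcol-console.log
```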
If you're still running into issues, see EDOT Collector troubleshooting and Configure EDOT Collector.
After you have your EDOT Collector configured and are streaming log data to Elasticsearch:
- Refer to the Explore log data documentation for information on exploring your log data in the UI, including searching and filtering your log data, getting information about the structure of log fields, and displaying your findings in a visualization.
- Refer to the Parse and organize logs documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data.
- Refer to the Filter and aggregate logs documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently.
- To collect telemetry from applications and use the EDOT Collector as a gateway, instrument your target applications following the setup instructions.