Datadog Installation Guide
Trademark Notice
Datadog® and the Datadog logo are trademarks of Datadog, Inc. All trademarks and logos are used for representation purposes only. No prior copyright or trademark authorization has been obtained. This content is for educational purposes only.
Datadog is a cloud-based monitoring and analytics platform for large-scale applications, unifying metrics, traces, and logs to provide full-stack observability.
Prerequisites
- Datadog account (sign up at https://www.datadoghq.com/)
- API key from your Datadog account
- Admin access to your systems
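Before installing anything, you can confirm the API key works by calling the key-validation endpoint. This is a quick sanity check; adjust the domain if your account is on an EU or other non-default Datadog site.
# Should return {"valid":true} for a working key
curl -s -X GET "https://api.datadoghq.com/api/v1/validate" \
  -H "DD-API-KEY: your_api_key_here"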
Installation Methods
Linux Installation (Ubuntu/Debian)
One-Line Installation
# Set your API key
DD_API_KEY=your_api_key_here
# Run installation script
DD_AGENT_MAJOR_VERSION=7 DD_API_KEY=$DD_API_KEY DD_SITE="datadoghq.com" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"
Manual Installation (Ubuntu/Debian)
# Set up Datadog repository
sudo apt-get update
sudo apt-get install -y apt-transport-https curl gnupg
sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable 7' > /etc/apt/sources.list.d/datadog.list"
# Add the Datadog GPG keys to a dedicated keyring
sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
# Install Datadog Agent
sudo apt-get update
sudo apt-get install -y datadog-agent
# Configure API key
sudo sh -c "sed 's/api_key:.*/api_key: your_api_key_here/' /etc/datadog-agent/datadog.yaml.example > /etc/datadog-agent/datadog.yaml"
# Start the agent
sudo systemctl start datadog-agent
sudo systemctl enable datadog-agent
RHEL/CentOS/Fedora Installation
# One-line installation
DD_API_KEY=your_api_key_here DD_AGENT_MAJOR_VERSION=7 DD_SITE="datadoghq.com" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"
# Or manual installation
cat <<EOF | sudo tee /etc/yum.repos.d/datadog.repo
[datadog]
name=Datadog, Inc.
baseurl=https://yum.datadoghq.com/stable/7/x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
       https://keys.datadoghq.com/DATADOG_RPM_KEY_FD4BF915.public
EOF
# Install agent
sudo yum makecache
sudo yum install -y datadog-agent
# Configure and start
sudo sh -c "sed 's/api_key:.*/api_key: your_api_key_here/' /etc/datadog-agent/datadog.yaml.example > /etc/datadog-agent/datadog.yaml"
sudo systemctl start datadog-agent
sudo systemctl enable datadog-agent
Windows Installation
# Download the installer
Invoke-WebRequest -Uri "https://s3.amazonaws.com/ddagent-windows-stable/datadog-agent-7-latest.amd64.msi" -OutFile "$env:TEMP\datadog-agent.msi"
# Install with API key
Start-Process msiexec.exe -ArgumentList "/qn /i $env:TEMP\datadog-agent.msi APIKEY=""your_api_key_here""" -Wait
# Or install with additional parameters
$arguments = @(
"/qn",
"/i", "$env:TEMP\datadog-agent.msi",
"APIKEY=your_api_key_here",
"SITE=datadoghq.com",
"HOSTNAME=my-windows-server"
)
Start-Process msiexec.exe -ArgumentList $arguments -Wait
# Start the agent
Start-Service -Name "DatadogAgent"
macOS Installation
# Download and install via DMG
curl -O https://s3.amazonaws.com/dd-agent/datadog-agent-7-latest.dmg
# Mount the image, then run the installer from the mounted volume
hdiutil attach datadog-agent-7-latest.dmg
sudo installer -pkg /Volumes/datadog_agent/datadog-agent.pkg -target /
# Or use Homebrew
brew install datadog/agent/datadog-agent
# Configure API key
sudo sh -c "sed 's/api_key:.*/api_key: your_api_key_here/' /opt/datadog-agent/etc/datadog.yaml.example > /opt/datadog-agent/etc/datadog.yaml"
# Start the agent
sudo launchctl load -w /Library/LaunchDaemons/com.datadoghq.agent.plist
Docker Installation
# Run Datadog Agent as container
docker run -d --name dd-agent \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /proc/:/host/proc/:ro \
-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
-e DD_API_KEY=your_api_key_here \
-e DD_SITE="datadoghq.com" \
-e DD_LOGS_ENABLED=true \
-e DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true \
-e DD_PROCESS_AGENT_ENABLED=true \
-e DD_DOCKER_LABELS_AS_TAGS='{"*":"%%label%%"}' \
-e DD_TAGS='env:production role:webserver' \
gcr.io/datadoghq/agent:7
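Once the container is up, you can check that it is reporting. A quick sanity check, assuming the container name dd-agent used above:
# Show the agent status from inside the container
docker exec -it dd-agent agent status
# Tail the container logs for startup errors
docker logs -f dd-agent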
Kubernetes Installation (Helm)
# Add Datadog Helm repository
helm repo add datadog https://helm.datadoghq.com
helm repo update
# Create values file
cat <<EOF > datadog-values.yaml
datadog:
  apiKey: your_api_key_here
  site: datadoghq.com
  logs:
    enabled: true
    containerCollectAll: true
  apm:
    portEnabled: true
  processAgent:
    enabled: true
  systemProbe:
    enabled: true
  kubeStateMetricsEnabled: true
agents:
  containers:
    agent:
      resources:
        requests:
          cpu: 200m
          memory: 256Mi
        limits:
          cpu: 200m
          memory: 256Mi
EOF
# Install Datadog Agent
helm install datadog-agent datadog/datadog \
-f datadog-values.yaml \
--namespace datadog \
--create-namespace
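After the Helm release is installed, a quick way to confirm the agent pods are healthy (standard kubectl/helm commands; the pod name below is a placeholder):
# List the Datadog pods
kubectl get pods -n datadog
# Check the Helm release
helm status datadog-agent -n datadog
# Inspect a specific agent pod
kubectl exec -it <agent-pod-name> -n datadog -- agent status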
Kubernetes DaemonSet (without Helm)
# datadog-agent.yaml
apiVersion: v1
kind: Secret
metadata:
  name: datadog-secret
  namespace: datadog
type: Opaque
data:
  api-key: <base64-encoded-api-key>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: datadog-agent
  namespace: datadog
spec:
  selector:
    matchLabels:
      app: datadog-agent
  template:
    metadata:
      labels:
        app: datadog-agent
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: agent
          image: gcr.io/datadoghq/agent:7
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: datadog-secret
                  key: api-key
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_LOGS_ENABLED
              value: "true"
            - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
              value: "true"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          volumeMounts:
            - name: dockersocketdir
              mountPath: /host/var/run
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              readOnly: true
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              readOnly: true
      volumes:
        - name: dockersocketdir
          hostPath:
            path: /var/run
        - name: procdir
          hostPath:
            path: /proc
        - name: cgroups
          hostPath:
            path: /sys/fs/cgroup
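This manifest assumes the datadog namespace and a datadog-agent ServiceAccount (with the RBAC permissions described in Datadog's documentation) already exist; creating those is not shown here. A minimal sketch of applying it:
# Base64-encode the API key for the Secret
echo -n "your_api_key_here" | base64
# Create the namespace and apply the manifest
kubectl create namespace datadog
kubectl apply -f datadog-agent.yaml
# Verify the DaemonSet is scheduled on every node
kubectl get daemonset datadog-agent -n datadog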
Configuration
Main Configuration File
Edit /etc/datadog-agent/datadog.yaml:
# Core configuration
api_key: your_api_key_here
site: datadoghq.com
hostname: my-server
# Logging
logs_enabled: true
log_level: info
# Tags (for filtering and grouping)
tags:
  - env:production
  - role:webserver
  - team:platform
# APM (Application Performance Monitoring)
apm_config:
  enabled: true
  apm_non_local_traffic: true
# Process monitoring (live processes)
process_config:
  process_collection:
    enabled: true
# Network Performance Monitoring
network_config:
  enabled: true
Enable Integrations
Datadog supports 500+ integrations. Configuration files are in /etc/datadog-agent/conf.d/.
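The workflow is the same for every integration: copy the bundled example config, edit it, restart the Agent, then run the check. A sketch using NGINX (paths assume a Linux package install):
sudo cp /etc/datadog-agent/conf.d/nginx.d/conf.yaml.example /etc/datadog-agent/conf.d/nginx.d/conf.yaml
sudo nano /etc/datadog-agent/conf.d/nginx.d/conf.yaml
sudo systemctl restart datadog-agent
sudo datadog-agent check nginx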
Monitor NGINX
# /etc/datadog-agent/conf.d/nginx.d/conf.yaml
init_config:
instances:
  - nginx_status_url: http://localhost/nginx_status/
    tags:
      - instance:main
Monitor PostgreSQL
# /etc/datadog-agent/conf.d/postgres.d/conf.yaml
init_config:
instances:
  - host: localhost
    port: 5432
    username: datadog
    password: your_password
    dbname: postgres
    tags:
      - env:production
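The check needs a database role it can connect as. A minimal sketch for PostgreSQL 10+, where the built-in pg_monitor role covers most metrics (see Datadog's postgres integration docs for additional grants):
sudo -u postgres psql -c "CREATE USER datadog WITH PASSWORD 'your_password';"
sudo -u postgres psql -c "GRANT pg_monitor TO datadog;"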
Monitor Redis
# /etc/datadog-agent/conf.d/redisdb.d/conf.yaml
init_config:
instances:
  - host: localhost
    port: 6379
    password: your_redis_password
APM (Application Performance Monitoring)
Node.js Application
# Install tracer
npm install dd-trace --save
// At the very start of your application
const tracer = require('dd-trace').init({
logInjection: true,
runtimeMetrics: true,
env: 'production',
service: 'my-api',
version: '1.0.0'
});
// Your application code
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello World!');
});
app.listen(3000);
Python Application
# Install tracer
pip install ddtrace
# Using middleware (Django/Flask)
from ddtrace import patch_all
patch_all()
# Or run with ddtrace-run command
# ddtrace-run python app.py
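For example, assuming the entrypoint is app.py, the service can be tagged through the standard DD_* environment variables when launched under ddtrace-run:
DD_SERVICE=my-api DD_ENV=production DD_VERSION=1.0.0 ddtrace-run python app.py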
Java Application
# Download Java tracer
wget -O dd-java-agent.jar https://dtdg.co/latest-java-tracer
# Run application with tracer
java -javaagent:/path/to/dd-java-agent.jar \
-Ddd.service=my-service \
-Ddd.env=production \
-Ddd.version=1.0 \
-jar your-application.jar
Log Management
Configure Log Collection
# /etc/datadog-agent/datadog.yaml
logs_enabled: true
# Application log configuration
# /etc/datadog-agent/conf.d/app.d/conf.yaml
logs:
  - type: file
    path: "/var/log/myapp/*.log"
    service: "myapp"
    source: "nodejs"
    tags:
      - env:production
Docker Log Collection
# Enable log collection from all containers
docker run -d --name dd-agent \
-e DD_LOGS_ENABLED=true \
-e DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /opt/datadog-agent/run:/opt/datadog-agent/run:rw \
-v /var/lib/docker/containers:/var/lib/docker/containers:ro \
gcr.io/datadoghq/agent:7
Custom Metrics
Send Custom Metrics via DogStatsD
# Python example
from datadog import initialize, statsd
options = {
'statsd_host': '127.0.0.1',
'statsd_port': 8125
}
initialize(**options)
# Send metrics
statsd.increment('web.page_views')
statsd.gauge('database.connections', 10)
statsd.histogram('api.response_time', 250)
// Node.js example
const StatsD = require('node-dogstatsd').StatsD;
const dogstatsd = new StatsD();
// Send metrics
dogstatsd.increment('web.page_views');
dogstatsd.gauge('database.connections', 10);
dogstatsd.histogram('api.response_time', 250);
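Both clients wrap the same plain-text DogStatsD datagrams sent over UDP to port 8125, so you can also emit a test metric from the shell (assuming netcat is installed):
# Format: metric.name:value|type|#tag1:value1,tag2:value2  (c = count, g = gauge, h = histogram)
echo -n "web.page_views:1|c|#env:production,service:my-api" | nc -u -w1 127.0.0.1 8125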
Verification
# Check agent status
sudo datadog-agent status
# Run connectivity diagnostics
sudo datadog-agent diagnose
# Run specific integration check
sudo datadog-agent check nginx
# View agent logs
sudo tail -f /var/log/datadog/agent.log
# Restart agent
sudo systemctl restart datadog-agent
Dashboard & Alerts
Create Dashboard via API
curl -X POST "https://api.datadoghq.com/api/v1/dashboard" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
-H "Content-Type: application/json" \
-d '{
"title": "My Dashboard",
"widgets": [{
"definition": {
"type": "timeseries",
"requests": [{
"q": "avg:system.cpu.user{*}"
}],
"title": "CPU Usage"
}
}],
"layout_type": "ordered"
}'
Create Monitor (Alert)
curl -X POST "https://api.datadoghq.com/api/v1/monitor" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
-H "Content-Type: application/json" \
-d '{
"name": "High CPU Alert",
"type": "metric alert",
"query": "avg(last_5m):avg:system.cpu.user{*} > 90",
"message": "CPU usage is above 90% @pagerduty",
"tags": ["env:production"]
}'
Best Practices
- Use Tags Consistently: Tag all resources with env, service, and version (see the tagging sketch after this list)
- Monitor What Matters: Focus on RED metrics (Rate, Errors, Duration)
- Set Up Alerts: Configure meaningful alerts with proper thresholds
- Use APM: Enable distributed tracing for microservices
- Log Aggregation: Centralize logs from all services
- Custom Metrics: Track business-specific KPIs
- Dashboards: Create role-specific dashboards (dev, ops, business)
- Cost Management: Use sampling for high-volume traces
- Security: Use secret management for API keys, restrict access
- Documentation: Document metric meanings and alert runbooks
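As referenced in the first bullet, one sketch of consistent tagging: keep host-level tags in datadog.yaml and export the matching unified service tags for instrumented services (values here are illustrative):
# Host-level tags live in datadog.yaml (tags: [env:production, service:my-api, version:1.2.3]);
# for traced services, export the same three tags as environment variables:
export DD_ENV=production
export DD_SERVICE=my-api
export DD_VERSION=1.2.3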
Troubleshooting
Agent Not Reporting
# Check agent status
sudo datadog-agent status
# Run connectivity diagnostics
sudo datadog-agent diagnose
# Verify API key
grep api_key /etc/datadog-agent/datadog.yaml
# Check firewall
curl -v https://api.datadoghq.com/api/v1/validate
High CPU Usage
# Reduce the number of concurrent check runners
# /etc/datadog-agent/datadog.yaml
check_runners: 1
# Disable unused integrations
# Move unwanted .yaml files out of conf.d/
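A sketch of disabling an integration without deleting its config: the Agent only loads conf.yaml / *.yaml files, so a renamed file is ignored (<integration_name> is a placeholder):
sudo mv /etc/datadog-agent/conf.d/<integration_name>.d/conf.yaml \
        /etc/datadog-agent/conf.d/<integration_name>.d/conf.yaml.disabled
sudo systemctl restart datadog-agent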
Missing Metrics
# Verify integration is running
sudo datadog-agent check <integration_name>
# Check integration logs
sudo grep <integration_name> /var/log/datadog/agent.log
Resources
- Documentation: https://docs.datadoghq.com/
- API Reference: https://docs.datadoghq.com/api/
- Integrations: https://docs.datadoghq.com/integrations/
- Community: https://datadoghq.slack.com/
- Status Page: https://status.datadoghq.com/
Note: Datadog is a commercial SaaS platform with different pricing tiers. Ensure you understand the cost implications of log ingestion, custom metrics, and APM before deploying at scale.