Explain This: Error Monitoring Can Be a Data Exfiltration Channel
Here is the uncomfortable truth: some data breaches ship as “observability.”
Error monitoring is supposed to help you fix bugs.
In practice, it can export stack traces, request metadata, and user context to a third party, at scale, through code paths that are rarely audited.
What it is
Error monitoring tools (Sentry, Bugsnag, Rollbar, LogRocket, and friends) capture failures in your app and send an event payload to a hosted service.
That payload often includes:
- exception text
- stack traces (file paths, function names, line numbers)
- request metadata (URLs, headers)
- breadcrumbs (what the user clicked before it broke)
- extra context your team attaches during debugging
The key point is simple.
If your application can serialize it into an error event, a monitoring SDK can ship it out of your environment.
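To make that concrete, here is a sketch of what an error event can carry. Field names are illustrative, loosely modeled on Sentry's event shape; the exact schema varies by SDK and version.

```python
# Illustrative sketch of an error-event payload (field names loosely modeled
# on Sentry's event schema; exact shape varies by SDK and version).
event = {
    "exception": {"type": "KeyError", "value": "'session_token'"},
    "stacktrace": [
        # file paths, function names, line numbers
        {"filename": "app/billing/charge.py", "function": "charge_card", "lineno": 88},
    ],
    "request": {
        # full URL and query string -- these can embed user identifiers
        "url": "https://api.example.com/users/42/invoices?email=alice%40example.com",
        "headers": {"Authorization": "Bearer abc123"},  # easy to leak
    },
    "breadcrumbs": [
        {"category": "ui.click", "message": "Clicked 'Pay now'"},
    ],
    "extra": {"plan": "enterprise", "internal_endpoint": "http://10.0.3.7/charge"},
}

# Everything above leaves your environment the moment the SDK sends the event.
```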
Why it matters
Because those payloads can contain the exact data you work hard not to leak:
- PII
- auth tokens
- session IDs
- API keys that were accidentally logged
- internal endpoints
- business logic that makes exploitation easier
Two details make this risk operational, not theoretical.
First, some SDKs always send fields teams forget to treat as sensitive.
For example, Sentry’s JavaScript SDK documentation notes that the full request URL and the full query string are always sent (depending on configuration, those can contain user identifiers).
Second, session replay and “helpful debugging context” expand your blast radius fast.
Sentry’s Session Replay can capture network request metadata, and in some configurations can include request and response bodies.
If you are in a regulated environment, you may have now exported regulated data into a vendor relationship that might not exist on paper.
Where teams screw up
1) They treat monitoring as plumbing. It gets added during a sprint, then no one re-evaluates it.
2) They never inventory what is being sent. The DSN exists, so the data flows.
3) They log first, think later. Developers attach “helpful context,” which is often sensitive.
4) They forget stack traces can include variable values. Some SDKs capture stack locals. That is fantastic for debugging and terrible for data minimization.
Sentry’s Python guidance explicitly warns that sensitive data may appear in stack locals, breadcrumbs, and query strings, and recommends scrubbing data before it leaves the local environment.
5) They assume the default configuration is safe enough. Defaults vary by language, integration, and what your team enables.
Sentry notes that its SDKs purposefully do not send PII by default, and that flipping the “send default PII” option changes what is collected.
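For Sentry's Python SDK, a conservative starting configuration looks roughly like this. The DSN is a placeholder, and while these option names come from Sentry's documentation, verify them against the SDK version you actually run:

```python
import sentry_sdk

# Conservative initialization sketch (option names per Sentry's Python SDK
# docs; verify against your installed SDK version before relying on them).
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    send_default_pii=False,         # keep the default: no user IPs, cookies, etc.
    include_local_variables=False,  # do not capture stack locals
    max_request_body_size="never",  # never attach request bodies to events
)
```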
What “reasonable” looks like
“Reasonable” is not “we have Sentry.”
“Reasonable” is:
- you can list every monitoring destination in production
- you have an approval process for adding telemetry vendors
- you can prove you have data minimization controls on the payload
- you have role-based access and retention limits
- you can export and review what has been captured when an incident happens
If you cannot answer “What data did we send to that vendor last week?”, you do not have observability.
You have uncontrolled egress.
What to do this week (a practical checklist)
Step 1: Inventory and confirm ownership (30 minutes)
- List monitoring SDKs and agents in each application.
- Identify where they send data (DSNs, endpoints, project IDs).
- Confirm who owns the vendor account.
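One way to jump-start the inventory is to grep your repositories for DSNs. A minimal sketch, assuming Sentry's common "https://key@org.ingest.sentry.io/project" shape; adjust the pattern for self-hosted instances or other vendors:

```python
import re

# Hypothetical helper: scan source text for Sentry-style DSNs so you can
# inventory where telemetry goes. Matches the common hosted-Sentry shape;
# self-hosted Sentry and other vendors need their own patterns.
DSN_PATTERN = re.compile(r"https://[0-9a-f]+@[\w.-]+\.ingest\.sentry\.io/\d+")

def find_dsns(text: str) -> list[str]:
    """Return every Sentry-style DSN found in the given text."""
    return DSN_PATTERN.findall(text)
```

Run it over config files, environment templates, and CI secrets exports; every hit is a destination that belongs on your inventory.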
Step 2: Reduce payload risk immediately (same day)
- Disable automatic request body capture.
- Remove or hash user identifiers unless truly needed.
- Strip tokens, cookies, Authorization headers, and secrets by default.
- Turn off session replay unless you have a strong reason to keep it.
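For the "remove or hash user identifiers" item, one option is a keyed hash: events stay correlatable per user without exporting the raw identifier. This is an application-side technique, not a Sentry feature; SALT is a hypothetical application secret.

```python
import hashlib

# One way to keep events correlatable per user without shipping raw
# identifiers: replace the identifier with a salted hash. SALT is a
# hypothetical application secret -- rotate it per environment.
SALT = b"rotate-me-per-environment"

def pseudonymize(user_id: str) -> str:
    """Return a short, stable token derived from the real identifier."""
    digest = hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()
    return digest[:16]  # stable token, no raw PII in the event
```

The same input always maps to the same token, so you can still count "how many events hit user X" without the vendor ever holding the real identifier.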
Step 3: Scrub at the source, not only in the dashboard (this week)
Dashboards can hide data from view. That is not the same as preventing egress.
Use SDK hooks to remove sensitive fields before the event is transmitted.
Sentry’s SDKs support a before_send hook (or equivalents) for exactly this reason.
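A sketch of such a hook for Sentry's Python SDK: it runs on every event before transmission and can filter fields or drop the event entirely. The header blocklist below is illustrative; extend it for your stack.

```python
# Sketch of a before_send hook for Sentry's Python SDK. It runs before each
# event is transmitted; returning None drops the event entirely.
# The header names below are illustrative -- extend for your stack.
SENSITIVE_HEADERS = {"authorization", "cookie", "x-api-key", "x-auth-token"}

def before_send(event, hint):
    request = event.get("request") or {}
    headers = request.get("headers") or {}
    for name in list(headers):
        if name.lower() in SENSITIVE_HEADERS:
            headers[name] = "[Filtered]"   # redact, keep the key for debugging
    request.pop("cookies", None)           # never ship raw cookies
    request.pop("data", None)              # never ship request bodies
    return event

# Registered at startup, e.g.:
#   sentry_sdk.init(dsn=..., before_send=before_send)
```

Because the hook runs in your process, the sensitive values never leave your environment, which is the property dashboard-side scrubbing cannot give you.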
Step 4: Treat it like any other third-party risk (this week)
- Verify contract terms: retention, deletion, subprocessors, location.
- Lock down access: least privilege, SSO, MFA, audit logs.
- Decide your “do not ship” list (tools or settings you will not allow).
Step 5: Put a guardrail in CI so this does not reappear (this week)
- Add a lightweight check that flags new monitoring packages.
- Require security approval before merging.
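The guardrail can be as small as a script that scans dependency files for telemetry SDKs and fails the build on an unapproved addition. A minimal sketch; the package list is a starting point, not exhaustive:

```python
# Minimal CI guardrail sketch: flag dependency lines that add a telemetry
# SDK so security can approve before merge. The package list is a starting
# point, not exhaustive -- add the vendors your organization cares about.
MONITORING_PACKAGES = {
    "sentry-sdk", "bugsnag", "rollbar", "logrocket",
    "@sentry/browser", "@sentry/node", "@bugsnag/js",
}

def flag_monitoring_deps(dependency_lines: list[str]) -> list[str]:
    """Return dependency lines that pull in a known monitoring SDK."""
    flagged = []
    for line in dependency_lines:
        name = line.strip().lower()
        if any(name.startswith(pkg) for pkg in MONITORING_PACKAGES):
            flagged.append(name)
    return flagged
```

Wire it into CI against requirements.txt or package.json diffs, and have a nonempty result block the merge until someone signs off.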
The takeaway
The question is not whether error monitoring is useful.
It is whether you are controlling the data you are exporting.
Subscribe if you want more operator-grade explainers that translate security into “reasonable,” and “reasonable” into action.
One question: does your team know where your stack traces go in production, without guessing?
Sources
- Sentry: data categories collected by the JavaScript SDK (includes URL and query string details)
- Sentry: scrubbing sensitive data (Python, includes send_default_pii and before_send guidance)
- Sentry: data scrubbing overview and options
- Sentry: advanced data scrubbing
- Sentry developer docs: PII and scrubbing configuration details