You do not need OpenTelemetry SDKs to get distributed tracing in Spring Boot. Generate your own trace IDs at the boundary, propagate them through HTTP and async calls, and correlate every log line with MDC.

Distributed Tracing in Spring Boot Without OpenTelemetry

Distributed tracing usually sounds complicated. Most Spring Boot developers assume they must install OpenTelemetry SDKs, collectors, agents, and a full backend like Jaeger or Tempo before they can trace requests across microservices. But you can build a lightweight, explicit tracing layer with plain Spring Boot and core Java that is good enough for many real-world systems.

This step-by-step guide shows how to implement your own trace IDs, async-safe context propagation, MDC-based correlated logs, and simple custom spans in Spring Boot, all without adding any tracing SDKs. By the end, you will have a minimal observability framework you fully control, ready for production and future OpenTelemetry adoption.


What You’ll Build​

You will build a simple tracing layer for a microservice system such as order-service → inventory-service → payment-service, all speaking over HTTP. Every request carries a shared X-Trace-ID header, and every log line includes that same trace ID for instant correlation.​

The framework will provide:​

  • Custom trace IDs (ULID/UUID)
  • Automatic HTTP propagation (incoming filters + outgoing interceptors)
  • Async context propagation for @Async, thread pools, and Reactor
  • MDC correlation so logs are grouped per trace
  • Optional JSON span logs ready for Kibana/Grafana dashboards

Prerequisites (What You Need / Don’t Need)​

You should already be comfortable with:​

  • Spring Boot basics: controllers, filters, interceptors, configuration
  • REST communication with RestTemplate or WebClient
  • Core Java concepts: ThreadLocal and logging MDC

You do not need:​​

  • OpenTelemetry, Jaeger, Zipkin, or any tracing SDK
  • Agents, sidecars, or collectors
  • Prior experience with distributed tracing

Step 1: Design Your Trace ID​

A trace ID is a single identifier that follows a request across all services. Common formats are:​

  • UUID – long, random, universally unique
  • ULID – shorter, time‑sortable, log‑friendly
  • Snowflake IDs – compact 64‑bit IDs for very high scale

For logs and searchability, ULID is an excellent choice because it is time-ordered and easier to read than a UUID. Here is a simple ULID-based trace ID generator:

    // Requires the ulid-creator library (com.github.f4b6a3:ulid-creator)
    import com.github.f4b6a3.ulid.UlidCreator;

    public class TraceIdGenerator {

        public static String generate() {
            return UlidCreator.getUlid().toString();
        }
    }

You will call this generator at the gateway boundary (API gateway or first Spring Boot service) whenever a request does not already carry a trace ID.​
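The reuse-or-generate decision can be isolated in a small helper. A minimal sketch (the class and method names are illustrative; UUID is used here only so the sketch is dependency-free — a real service would call the ULID generator):

```java
import java.util.UUID;

public class TraceIds {

    // Hypothetical helper: reuse the incoming header value if present,
    // otherwise mint a new ID at the boundary.
    public static String resolveOrGenerate(String incomingHeader) {
        if (incomingHeader != null && !incomingHeader.isEmpty()) {
            return incomingHeader;
        }
        return UUID.randomUUID().toString();
    }
}
```

Every service in the chain applies the same rule, so the first service mints the ID and everyone downstream reuses it.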


Step 2: Generate Trace IDs at the Boundary​

Every incoming HTTP request should either reuse an existing trace ID header or get a fresh one. A Spring Filter is a good place to centralize this logic.

    import java.io.IOException;
    import jakarta.servlet.*;
    import jakarta.servlet.http.HttpServletRequest;
    import org.slf4j.MDC;
    import org.springframework.stereotype.Component;

    // Uses jakarta.servlet (Spring Boot 3); on Spring Boot 2, use javax.servlet
    @Component
    public class TraceFilter implements Filter {

        @Override
        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            HttpServletRequest http = (HttpServletRequest) request;

            // Reuse the caller's trace ID, or mint a fresh one at the boundary
            String traceId = http.getHeader("X-Trace-ID");
            if (traceId == null || traceId.isEmpty()) {
                traceId = TraceIdGenerator.generate();
            }

            // Store in ThreadLocal
            TraceContext.setTraceId(traceId);
            // Put into MDC so every log line has it
            MDC.put("traceId", traceId);
            try {
                chain.doFilter(request, response);
            } finally {
                MDC.clear();
                TraceContext.clear();
            }
        }
    }

The TraceContext is a simple ThreadLocal holder:

    public class TraceContext {

        private static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

        public static void setTraceId(String traceId) {
            TRACE_ID.set(traceId);
        }

        public static String getTraceId() {
            return TRACE_ID.get();
        }

        public static void clear() {
            TRACE_ID.remove();
        }
    }

Now every incoming request gets a trace ID, and every log line in that request can reference it.​


Step 3: Propagate Trace IDs on Outgoing HTTP Calls​

If your service calls another service, you must forward the trace ID in the request headers. Otherwise, each service will generate its own unrelated ID and you lose end-to-end visibility.

RestTemplate Interceptor

    @Bean
    public RestTemplate restTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        restTemplate.getInterceptors().add((request, body, execution) -> {
            // Forward the current trace ID on every outgoing call
            String traceId = TraceContext.getTraceId();
            if (traceId != null && !traceId.isEmpty()) {
                request.getHeaders().add("X-Trace-ID", traceId);
            }
            return execution.execute(request, body);
        });
        return restTemplate;
    }

Every outgoing call now carries X-Trace-ID, so the next service in the chain can reuse it in its own TraceFilter.​

WebClient Filter + Reactor Context

WebClient uses Project Reactor, which does not automatically see ThreadLocal values. You need to put the trace ID into the Reactor Context and read it back in an exchange filter.

    @Bean
    public WebClient webClient() {
        return WebClient.builder()
            .filter((request, next) ->
                // Read the trace ID from the Reactor Context, not from ThreadLocal
                Mono.deferContextual(ctx -> {
                    String traceId = ctx.getOrDefault("traceId", "unknown");
                    ClientRequest mutated = ClientRequest.from(request)
                        .header("X-Trace-ID", traceId)
                        .build();
                    return next.exchange(mutated);
                }))
            .build();
    }

When you build a reactive pipeline, inject the current trace ID into the context:

    // contextWrite makes the value visible to operators upstream of it.
    // getTraceId() must be non-null here: Reactor's Context.put rejects nulls.
    Mono.just("data")
        .contextWrite(ctx -> ctx.put("traceId", TraceContext.getTraceId()));

This is where many implementations break: Reactor discards ThreadLocal unless you explicitly bridge it.


Step 4: Correlate Logs Automatically with MDC​

Mapped Diagnostic Context (MDC) lets you attach key–value pairs (like traceId) to the current thread so your logging framework includes them automatically.​

A typical Logback pattern might look like:

%date [%thread] %-5level %logger - traceId=%X{traceId} - %msg%n

Now, every log line contains traceId=..., so in Kibana, Grafana, CloudWatch, or Loki, you can filter by a single trace ID and see the entire cross‑service flow.​
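Wiring that pattern into Logback is a small configuration change. A minimal logback-spring.xml sketch (the appender name is illustrative):

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%date [%thread] %-5level %logger - traceId=%X{traceId} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```

The %X{traceId} conversion reads whatever the filter placed into MDC; if nothing was set, it prints an empty string.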


Step 5: Handle Async Context Propagation (Very Important)​

ThreadLocal‑based context is lost when work hops to another thread:​

  • @Async methods
  • Thread pools and executors
  • CompletableFuture
  • Scheduled tasks and timers
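The loss is easy to demonstrate in plain Java: a value set in the caller's ThreadLocal is invisible on another thread (class and method names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class ContextLossDemo {

    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    // Sets a trace ID on the calling thread, then asks another thread what it sees.
    public static String workerView() {
        TRACE_ID.set("trace-123");
        try {
            // supplyAsync runs on a different thread, which has its own
            // ThreadLocal map -- so the trace ID set above is not visible there
            return CompletableFuture.supplyAsync(TRACE_ID::get).join();
        } finally {
            TRACE_ID.remove();
        }
    }
}
```

workerView() returns null even though the caller just set a value, which is exactly what happens to your trace ID inside @Async methods.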

To fix this, wrap your executors so they re-establish the trace ID and MDC inside worker threads.

Traceable Executor Wrapper

    public class TraceableExecutor implements Executor {

        private final Executor delegate;

        public TraceableExecutor(Executor delegate) {
            this.delegate = delegate;
        }

        @Override
        public void execute(Runnable command) {
            // Capture the trace ID on the submitting thread, at submit time.
            // Capturing it once at construction would freeze a stale (or null) value.
            String traceId = TraceContext.getTraceId();
            delegate.execute(() -> {
                try {
                    TraceContext.setTraceId(traceId);
                    MDC.put("traceId", traceId);
                    command.run();
                } finally {
                    MDC.clear();
                    TraceContext.clear();
                }
            });
        }
    }

AsyncConfigurer Integration

    @Configuration
    @EnableAsync
    public class AsyncConfig implements AsyncConfigurer {

        @Override
        public Executor getAsyncExecutor() {
            // No trace ID is captured here: getAsyncExecutor() runs once at
            // startup, long before any request exists. The wrapper captures
            // the correct ID per submitted task.
            return new TraceableExecutor(Executors.newCachedThreadPool());
        }
    }

Now even asynchronous tasks retain the correct trace ID and log correlation.


Step 6: Extend Trace IDs to AWS Services (Optional)​

Cloud services do not automatically propagate your custom trace IDs. You must attach them explicitly as message attributes or headers.

SQS

    SendMessageRequest req = new SendMessageRequest()
        .withQueueUrl(queueUrl)
        .withMessageBody(body)
        .addMessageAttributesEntry("traceId",
            new MessageAttributeValue()
                .withDataType("String")
                .withStringValue(TraceContext.getTraceId()));

SNS

    PublishRequest req = new PublishRequest()
        .withTopicArn(topicArn)
        .withMessage(message)
        .addMessageAttributesEntry("traceId",
            new MessageAttributeValue()
                .withDataType("String")
                .withStringValue(TraceContext.getTraceId()));

API Gateway / Lambda​

Use custom headers such as X-Trace-ID or X-Correlation-ID, and ensure the first Lambda or microservice in the chain generates or reuses the trace ID. This keeps your tracing model consistent across HTTP, messaging, and serverless paths.​


Step 7: Implement Simple Custom Spans​

Trace IDs tell you which request you are seeing. Spans tell you what happened inside that request and for how long.

A minimal CustomSpan helper might look like:

    public class CustomSpan {

        private final String spanId = UUID.randomUUID().toString();
        private final String parentSpanId;
        private final long startTime = System.currentTimeMillis();

        public CustomSpan(String parentSpanId) {
            this.parentSpanId = parentSpanId;
        }

        public Map<String, Object> end(String name) {
            long duration = System.currentTimeMillis() - startTime;
            Map<String, Object> span = new HashMap<>();
            span.put("spanId", spanId);
            span.put("parentSpanId", parentSpanId);
            span.put("traceId", TraceContext.getTraceId());
            span.put("name", name);
            span.put("durationMs", duration);
            return span;
        }
    }

Use it around key operations:

    CustomSpan span = new CustomSpan(null); // null parent marks a root span
    try {
        // perform operation: DB query, external call, business logic...
    } finally {
        log.info("span: {}", span.end("dbQuery"));
    }

This logs lightweight span records you can later aggregate or visualize.


Step 8: Produce JSON Logs for Visualization​

A typical JSON span log might look like:

    {
      "traceId": "01HF8M3N9X9Q",
      "spanId": "db-42",
      "parentSpanId": "controller-1",
      "service": "payment-service",
      "operation": "fetchPayment",
      "durationMs": 38,
      "timestamp": "2025-12-03T12:10:00Z"
    }

Key fields:​

  • traceId – unique ID for the whole request
  • spanId – ID for this specific operation
  • parentSpanId – which span triggered this one
  • service – name of the producing microservice
  • operation – what the span represents
  • durationMs – latency of the operation
  • timestamp – when the span completed

Tools like ELK, CloudWatch Logs, and Loki can use these fields to build simple trace visualizations and latency dashboards.​​
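If you do not want to pull in a JSON logging encoder, the span fields can be serialized by hand. A minimal sketch (class name is illustrative; a real service would typically use a JSON library such as Jackson, and this version does not escape special characters in field values):

```java
public class SpanJson {

    // Hand-rolled serializer for the span fields listed above.
    // Assumes values contain no characters that need JSON escaping.
    public static String toJson(String traceId, String spanId, String parentSpanId,
                                String service, String operation,
                                long durationMs, String timestamp) {
        return String.format(
            "{\"traceId\":\"%s\",\"spanId\":\"%s\",\"parentSpanId\":\"%s\","
                + "\"service\":\"%s\",\"operation\":\"%s\","
                + "\"durationMs\":%d,\"timestamp\":\"%s\"}",
            traceId, spanId, parentSpanId, service, operation,
            durationMs, timestamp);
    }
}
```

One compact JSON object per log line keeps the spans trivially parseable by log shippers.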


Step 9: Visualize Traces Without a Tracing UI​

Even without OpenTelemetry or Jaeger, your logs can act as a tracing UI.​

  • Search by traceId in Kibana/Grafana/CloudWatch to see all related logs and spans.
  • Group by operation or service to find slow components.
  • Plot durationMs over time to catch regressions and performance hotspots.

This gives you practical, low‑overhead observability using tools you probably already run in production.​
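As a concrete example, a CloudWatch Logs Insights query over the JSON span logs might look like this (field names assume the span format from Step 8; Insights discovers fields from JSON log lines automatically):

```
fields @timestamp, service, operation, durationMs
| filter traceId = "01HF8M3N9X9Q"
| sort @timestamp asc
```

The result is a time-ordered view of every span in one request, which is most of what a tracing UI gives you.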


Conclusion: Lightweight Today, OTEL‑Ready Tomorrow​​

Distributed tracing does not require heavy SDKs, agents, or a dedicated tracing backend. With custom trace IDs, MDC correlation, async propagation, and JSON spans, you can give your Spring Boot microservices clear, end‑to‑end visibility using only code you understand and control.​

If your team later adopts OpenTelemetry, this design still pays off: your X-Trace-ID maps cleanly to OTEL trace fields, your MDC patterns keep log correlation intact, and your services already understand how to propagate context. You can then replace these manual spans with OpenTelemetry instrumentation gradually, one service at a time, without losing the observability you built today.​​

