Trait ureq::Middleware

pub trait Middleware:
    Send
    + Sync
    + 'static {
    // Required method
    fn handle(
        &self,
        request: Request,
        next: MiddlewareNext<'_>,
    ) -> Result<Response, Error>;
}

Chained processing of request (and response).
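
A middleware sees both sides of the exchange: anything placed after the call to next.handle() runs once the response is available. A minimal sketch (the log_status name is illustrative) that logs the status code before handing the response back:

use ureq::{Error, MiddlewareNext, Request, Response};

fn log_status(req: Request, next: MiddlewareNext) -> Result<Response, Error> {
    // run the rest of the chain first ...
    let response = next.handle(req)?;

    // ... then inspect the response before returning it to the caller
    println!("status: {}", response.status());

    Ok(response)
}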

§Middleware as fn

The middleware trait is implemented for all functions that have the signature

Fn(Request, MiddlewareNext) -> Result<Response, Error>

That means the easiest way to implement middleware is by providing a fn, like so:

fn my_middleware(req: Request, next: MiddlewareNext) -> Result<Response, Error> {
    // do middleware things

    // continue the middleware chain
    next.handle(req)
}

§Adding headers

A common use case is to add headers to the outgoing request. Here is an example of how to do that.

fn my_middleware(req: Request, next: MiddlewareNext) -> Result<Response, Error> {
    // set my bespoke header and continue the chain
    next.handle(req.set("X-My-Header", "value_42"))
}

let agent = ureq::builder()
    .middleware(my_middleware)
    .build();

let result: serde_json::Value =
    agent.get("http://httpbin.org/headers").call()?.into_json()?;

assert_eq!(&result["headers"]["X-My-Header"], "value_42");

§State

To maintain state between middleware invocations, we need something more elaborate than a simple fn: we must implement the Middleware trait directly.

§Example with mutex lock

In the examples directory there is an additional example, count-bytes.rs, which uses a mutex lock as shown below.

use ureq::{Request, Response, Middleware, MiddlewareNext, Error};
use std::sync::{Arc, Mutex};

struct MyState {
    // whatever is needed
}

struct MyMiddleware(Arc<Mutex<MyState>>);

impl Middleware for MyMiddleware {
    fn handle(&self, request: Request, next: MiddlewareNext) -> Result<Response, Error> {
        // These extra braces ensure we release the Mutex lock before continuing the
        // chain. There could also be scenarios where we want to hold the lock through
        // the invocation, which would block other requests from proceeding concurrently
        // through the middleware.
        {
            let mut state = self.0.lock().unwrap();
            // do stuff with state
        }

        // continue middleware chain
        next.handle(request)
    }
}
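
Registering this middleware looks the same as for a fn: build the shared state, hand a clone to the middleware, and keep the other handle around to inspect later. A minimal sketch using the types above (and assuming MyState needs no fields to construct):

let state = Arc::new(Mutex::new(MyState {}));

let agent = ureq::builder()
    .middleware(MyMiddleware(state.clone()))
    .build();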

§Example with atomic

This example shows how we can increment a counter for each request going through the agent.

use ureq::{Request, Response, Middleware, MiddlewareNext, Error};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Middleware that stores a counter state. This example uses an AtomicU64
// since the middleware is potentially shared by multiple threads running
// requests at the same time.
struct MyCounter(Arc<AtomicU64>);

impl Middleware for MyCounter {
    fn handle(&self, req: Request, next: MiddlewareNext) -> Result<Response, Error> {
        // increase the counter for each invocation
        self.0.fetch_add(1, Ordering::SeqCst);

        // continue the middleware chain
        next.handle(req)
    }
}

let shared_counter = Arc::new(AtomicU64::new(0));

let agent = ureq::builder()
    // Add our middleware
    .middleware(MyCounter(shared_counter.clone()))
    .build();

agent.get("http://httpbin.org/get").call()?;
agent.get("http://httpbin.org/get").call()?;

// Check we did indeed increase the counter twice.
assert_eq!(shared_counter.load(Ordering::SeqCst), 2);

Required Methods


fn handle(
    &self,
    request: Request,
    next: MiddlewareNext<'_>,
) -> Result<Response, Error>

Handles the middleware logic. Call next.handle(request) to continue the chain.

Implementors


impl<F> Middleware for F
where
    F: Fn(Request, MiddlewareNext<'_>) -> Result<Response, Error> + Send + Sync + 'static,
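
This blanket impl is what allows a plain fn with the matching signature to be passed straight to the agent builder, as in the examples above. A minimal sketch (the noop name is illustrative):

use ureq::{Error, MiddlewareNext, Request, Response};

// Does nothing except continue the chain; usable as middleware thanks to the blanket impl.
fn noop(req: Request, next: MiddlewareNext) -> Result<Response, Error> {
    next.handle(req)
}

let agent = ureq::builder().middleware(noop).build();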