
OCaml 5 Concurrency: Migrating Legacy Lwt Code to Eio and Effect Handlers

For over a decade, OCaml developers have relied on Lwt (Lightweight Threads) to handle concurrency. While Lwt successfully brought non-blocking I/O to a language without native support, it introduced "function coloring." Every function touching I/O had to return 'a Lwt.t, forcing codebases into chains of monadic binds (>>=) and intricate exception handling that obscured the business logic.

With OCaml 5, we have Multicore and, more importantly for I/O, Effect Handlers. These allow for direct-style concurrency. We no longer need the Promise monad to represent a suspended computation; the runtime stack itself can be suspended and resumed.

This post dissects the migration of a TCP echo server from Lwt to Eio (Effects-based I/O), demonstrating how direct style simplifies control flow and how Structured Concurrency (Eio.Switch) prevents resource leaks.

The Architecture Gap: Monads vs. Delimited Continuations

To migrate effectively, you must understand the mechanical shift under the hood.

The Lwt Model (Cooperative Threading via Heap Objects)

Lwt simulates threads by reifying continuations as heap objects (promises).

  1. The Bind: let* x = f () in ... constructs a closure for the rest of the function.
  2. The Event Loop: The engine (libev/Unix.select) polls for I/O. When ready, it invokes the stored closure.
  3. The Cost: This generates significant allocation pressure (closures) and breaks stack traces. Logic flow is dictated by the monad, not the language's native control structures.
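
To make that allocation cost concrete, here is the same kind of read/write step written against the real Lwt_io API with the bind operator spelled out (a minimal sketch): each >>= allocates a heap closure capturing the rest of the function.

(* What let*/>>= actually builds: one closure per bind *)
open Lwt.Infix

let echo_line ic oc =
  Lwt_io.read_line ic >>= fun line ->       (* closure 1: captures oc *)
  Lwt_io.write_line oc line >>= fun () ->   (* closure 2 *)
  Lwt.return_unit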

The Eio Model (Direct Style via Effect Handlers)

Eio uses OCaml 5's Effect Handlers to implement non-blocking I/O that looks blocking.

  1. The Suspension: When an I/O operation (like Eio.Flow.read) blocks, it performs an Effect.
  2. The Handler: The Eio scheduler catches this effect, captures the fiber's stack segment as a continuation efficiently, and schedules other fibers.
  3. The Resume: Once the OS (via io_uring or kqueue) signals data availability, the scheduler resumes the stack exactly where it left off.
  4. Result: You use standard while loops, try/with, and match without wrapping return types in IO containers.
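
To see the mechanism in isolation, here is a toy round-robin scheduler built directly on OCaml 5's Effect module. This is not Eio's scheduler, just a sketch of the idea: performing an effect suspends the fiber, the handler captures its continuation, and a queue decides when to resume it.

(* Toy scheduler: Yield suspends the current fiber; the handler captures
   its continuation and queues it for later resumption. OCaml >= 5.0. *)
open Effect
open Effect.Deep

type _ Effect.t += Yield : unit Effect.t

let run fibers =
  let q = Queue.create () in
  let spawn main =
    match_with main ()
      { retc = (fun () -> ());
        exnc = raise;
        effc = (fun (type a) (eff : a Effect.t) ->
          match eff with
          | Yield -> Some (fun (k : (a, _) continuation) ->
              Queue.push (fun () -> continue k ()) q)   (* suspend *)
          | _ -> None) }
  in
  List.iter (fun f -> Queue.push (fun () -> spawn f) q) fibers;
  while not (Queue.is_empty q) do (Queue.pop q) () done  (* resume *)

let () =
  run
    [ (fun () -> for i = 1 to 3 do Printf.printf "A%d " i; perform Yield done);
      (fun () -> for i = 1 to 3 do Printf.printf "B%d " i; perform Yield done) ];
  print_newline ()  (* prints: A1 B1 A2 B2 A3 B3 *)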

The Migration: TCP Echo Server

We will look at a robust TCP server that handles multiple clients concurrently.

1. Legacy Lwt Implementation

This code is likely familiar. Note the heavy use of >>= (or let*), the recursive functions required for loops, and the explicit Lwt.async needed to handle each client concurrently.

(* dune: (libraries lwt lwt.unix) *)
open Lwt.Syntax

let handle_client (input_channel, output_channel) =
  let rec loop () =
    (* 1. Monadic Bind: Suspends execution here *)
    let* result =
      (* Plain try/with cannot catch a rejection inside the promise,
         so Lwt needs its own combinator *)
      Lwt.catch
        (fun () ->
          let* line = Lwt_io.read_line input_channel in
          Lwt.return (Some line))
        (function
          | End_of_file -> Lwt.return None
          | exn -> Lwt.fail exn)
    in
    match result with
    | None -> Lwt.return_unit
    | Some line ->
        (* 2. More Monadic chaining for writes *)
        let* () = Lwt_io.write_line output_channel ("Echo: " ^ line) in
        loop ()
  in
  (* 3. Explicit finalization is often messy in Lwt *)
  Lwt.finalize loop (fun () ->
    let* () = Lwt_io.close input_channel in
    Lwt_io.close output_channel
  )

let start_server port =
  let addr = Unix.(ADDR_INET (inet_addr_loopback, port)) in
  let* _server =
    Lwt_io.establish_server_with_client_address
      ~no_close:true (* handle_client closes the channels itself *)
      addr
      (fun _addr (ic, oc) ->
        (* 4. Fire and forget - risky if exceptions aren't handled *)
        Lwt.async (fun () -> handle_client (ic, oc));
        Lwt.return_unit)
  in
  let (forever, _) = Lwt.task () in
  forever

let () =
  Lwt_main.run (start_server 8080)

2. Modern Eio Implementation

The Eio version removes the monad wrapper. We introduce Structured Concurrency via Eio.Switch. This ensures that if the parent task is cancelled, all child fibers (client handlers) are automatically cancelled, and their resources (sockets) are closed deterministically.

Prerequisites:

opam install eio_main

The Code:

(* dune: (libraries eio_main fmt) *)
open Eio.Std

let handle_client flow addr =
  traceln "Accepted connection from %a" Eio.Net.Sockaddr.pp addr;
  
  (* 1. Buffered Reading: Eio separates flows from buffering logic *)
  let reader = Eio.Buf_read.of_flow flow ~max_size:1024 in
  
  try
    (* 2. Standard Imperative Loop: No recursion needed for control flow *)
    while true do
      let line = Eio.Buf_read.line reader in
      Eio.Flow.copy_string ("Echo: " ^ line ^ "\n") flow
    done
  with End_of_file ->
    traceln "Connection closed by %a" Eio.Net.Sockaddr.pp addr
    (* Flow closes automatically when the switch scope ends *)

let start_server env port =
  let net = Eio.Stdenv.net env in
  let addr = `Tcp (Eio.Net.Ipaddr.V4.loopback, port) in
  
  (* 3. The Switch: Defines the lifetime of the server socket *)
  Eio.Switch.run @@ fun sw ->
    let socket = Eio.Net.listen net ~sw ~backlog:128 ~reuse_addr:true addr in
    traceln "Server listening on port %d" port;
    
    (* 4. The Accept Loop *)
    while true do
      Eio.Net.accept_fork ~sw socket
        ~on_error:(traceln "Error handling client: %a" Fmt.exn)
        handle_client
    done

let () =
  (* 5. The Entry Point: Selects the backend (io_uring/kqueue) *)
  Eio_main.run @@ fun env ->
    start_server env 8080
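
To run it, a minimal dune setup is enough (file and executable names here are illustrative, assuming the code above is saved as echo_server.ml):

; dune (alongside echo_server.ml); dune-project needs e.g. (lang dune 3.0)
(executable
 (name echo_server)
 (libraries eio_main fmt))

Build and start it with dune exec ./echo_server.exe, then connect with nc localhost 8080 from another terminal: every line you type comes back prefixed with Echo:.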

Technical Breakdown of the Solution

1. Structured Concurrency (Eio.Switch)

In Lwt, we used Lwt.async. This creates a detached promise. If the main server crashes or needs to shut down, those detached promises might keep running or hold file descriptors open.

In Eio, Eio.Switch.run creates a delimiting scope.

  • Eio.Net.accept_fork attaches the new fiber to the current switch sw.
  • If an exception bubbles up to Switch.run (or if we explicitly turn the switch off), every fiber attached to that switch is cancelled.
  • Resources created with ~sw (like the listening socket) are closed automatically when the switch exits.
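
As a self-contained illustration of that cancellation behaviour (the sleeping tasks are stand-ins for real work), a failure in one fiber cancels its sibling and Switch.run re-raises the original exception:

open Eio.Std

let () =
  Eio_main.run @@ fun env ->
  let clock = Eio.Stdenv.clock env in
  try
    Switch.run @@ fun sw ->
    (* a long-running worker attached to the switch *)
    Fiber.fork ~sw (fun () ->
      Eio.Time.sleep clock 10.0;
      traceln "worker finished" (* never printed: cancelled below *));
    (* a sibling that fails shortly after starting *)
    Fiber.fork ~sw (fun () ->
      Eio.Time.sleep clock 0.1;
      failwith "boom")
  with Failure msg ->
    traceln "switch finished; caught: %s" msg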

2. Eio.Buf_read vs Lwt_io

Lwt conflates the I/O channel with buffering (Lwt_io.input_channel). Eio separates the raw Flow (the OS file descriptor wrapper) from the Buffer.

  • Eio.Buf_read.of_flow: Wraps a raw flow with an internal buffer.
  • Eio.Buf_read.line: Blocks the fiber if the buffer is empty, triggering an effect. The scheduler switches to another client until the OS provides data. To the developer, it looks like a synchronous read.
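
Because Buf_read works over any flow, the same two calls read a line from standard input just as they do from a socket (a minimal sketch, using only the calls already shown plus Eio.Stdenv.stdin):

open Eio.Std

let () =
  Eio_main.run @@ fun env ->
  (* wrap the stdin flow in a buffer, then block this fiber until a line arrives *)
  let buf = Eio.Buf_read.of_flow (Eio.Stdenv.stdin env) ~max_size:4096 in
  let line = Eio.Buf_read.line buf in
  traceln "read: %s" line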

3. Resource Management

Notice handle_client in the Eio example does not explicitly close the socket. In Eio.Net.accept_fork, the flow passed to the callback is owned by that specific fiber. When the function returns (or raises an exception), Eio automatically closes the flow associated with that client context. This eliminates the Lwt.finalize boilerplate.

4. Backends

Eio_main.run is not just an event loop; it selects the optimal backend for the OS:

  • Linux: Uses io_uring (completion-based, so readiness does not have to be polled with per-operation syscalls).
  • macOS: Uses kqueue.
  • Windows: Uses IOCP (experimental/in-progress) or the libuv-based backend.

Migration Strategy Checklist

When porting a large Lwt codebase:

  1. Replace Lwt.t with Direct Types:
    • string Lwt.t becomes string.
    • unit Lwt.t becomes unit.
  2. Replace Lwt.catch with try/with: Standard OCaml exception handling works across async boundaries in Eio.
  3. Replace Lwt.join/pick with Fiber.all/first (see the sketch after this checklist):
    • Use Fiber.both a b to run two tasks concurrently.
    • Use Fiber.first a b to return the result of whichever finishes first (cancelling the other).
  4. Inject Capabilities: Eio strictly controls access to OS resources (Network, FS, Clock) via the env variable passed to Eio_main.run. Pass this explicit object down, rather than relying on global hidden state like Lwt_unix.
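
For step 3, here is a small sketch of the replacements in action (the sleeps stand in for real work):

open Eio.Std

let () =
  Eio_main.run @@ fun env ->
  let clock = Eio.Stdenv.clock env in
  (* Lwt.join [a; b]  ->  Fiber.both: run both to completion *)
  Fiber.both
    (fun () -> Eio.Time.sleep clock 0.1; traceln "task A done")
    (fun () -> Eio.Time.sleep clock 0.2; traceln "task B done");
  (* Lwt.pick [a; b]  ->  Fiber.first: first result wins, the loser is cancelled *)
  let winner =
    Fiber.first
      (fun () -> Eio.Time.sleep clock 0.1; "fast")
      (fun () -> Eio.Time.sleep clock 5.0; "slow")
  in
  traceln "winner: %s" winner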

Conclusion

Moving to Eio is not merely a syntactic upgrade; it is a shift from heap-allocated callback chains to stack-based, structured concurrency. The resulting code is easier to debug (readable stack traces), safer (automatic resource cleanup via Switches), and generally more performant due to the reduction in closure allocations and the utilization of modern OS async APIs like io_uring.