···
This analysis synthesizes recommendations from 40+ HTTP client libraries across 9 programming languages (Python, JavaScript, Go, Rust, Java, PHP, Swift, Haskell, C++) to prioritize enhancements for the OCaml Requests HTTP client library. The recommendations cluster into three key areas: (1) Security & Spec Compliance issues that address vulnerabilities and RFC violations; (2) Feature Enhancements for middleware/interceptor architecture, progress callbacks, retry improvements, and proxy support that are near-universal across mature HTTP clients; (3) Architectural Improvements including connection pool observability and comprehensive timeout configuration. HTTP/2 support and middleware architecture are the most requested feature enhancements, mentioned by over 15 libraries each.
+### Recently Implemented
+
+The following features have been implemented and removed from this list:
+
+- **HTTP 100-Continue Support** (RFC 9110): Automatic `Expect: 100-continue` for large uploads with configurable threshold and timeout
+- **Cache-Control Header Parsing** (RFC 9111): Full `Cache_control` module for parsing request/response directives
+- **Conditional Request Helpers**: `Headers.if_none_match`, `Headers.if_modified_since`, `Response.etag()`, `Response.last_modified()`, and related cache freshness helpers
+
---
···
**Implementation Notes:**
Add an optional base_url field to the session type. In request methods, use Uri.resolve to combine the base with the relative URL. Handle trailing slashes per RFC 3986: 'http://api.com/v1/' and 'http://api.com/v1' resolve relative references differently.
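The trailing-slash rule can be illustrated with a small string-based sketch; the real implementation would delegate to Uri.resolve, and `join_url` here is a hypothetical helper, not part of the library:

```ocaml
(* Sketch of RFC 3986 relative resolution for the base_url feature.
   Per section 5.3, a relative reference replaces everything after the
   last '/' in the base path, so the trailing slash matters. *)
let join_url ~base rel =
  match String.rindex_opt base '/' with
  | Some i when i > String.length "https://" ->
      String.sub base 0 (i + 1) ^ rel
  | _ -> base ^ "/" ^ rel

let () =
  (* "http://api.com/v1/" keeps v1 as a directory segment... *)
  assert (join_url ~base:"http://api.com/v1/" "users" = "http://api.com/v1/users");
  (* ...while "http://api.com/v1" treats v1 as a leaf and replaces it. *)
  assert (join_url ~base:"http://api.com/v1" "users" = "http://api.com/users")
```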

-### 7. Add HTTP 100-Continue Support for Large Uploads
-
-Implement Expect: 100-continue protocol to allow servers to reject requests before uploading large bodies. Saves bandwidth for requests that will be rejected based on headers alone (auth failures, quota exceeded).
-
-**RFC References:**
-- RFC 9110 Section 10.1.1 (Expect)
-- RFC 9110 Section 15.2.1 (100 Continue)
-
-**Cross-Language Consensus:** 5 libraries
-**Source Libraries:** php/guzzle, haskell/http-streams, rust/curl-rust, cpp/webcc, go/req
-
-**Affected Files:**
-- `lib/http_client.ml`
-- `lib/requests.ml`
-- `lib/body.ml`
-
-**Implementation Notes:**
-For bodies above threshold (1MB), add Expect: 100-continue header. Send headers, wait for 100 Continue or error response with timeout, then send body. Make threshold and behavior configurable.
-
+### 7. Add Brotli Compression Support

Extend automatic decompression to support Brotli (br) content encoding. Brotli provides 15-25% better compression than gzip and is widely supported by CDNs and modern web servers.

···
**Implementation Notes:**
Add 'br' to the Accept-Encoding header. Use the decompress library's Brotli support (if available) or add an OCaml Brotli binding. Handle Content-Encoding: br in the decompress_body function.
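One way to keep decompress_body open to new codings is to dispatch on a decoder table, as in this sketch; the `decoders` table is an illustrative assumption (the actual gzip/deflate/Brotli decoders would be plugged in where the identity-style stand-ins are):

```ocaml
(* Dispatch response-body decompression on the Content-Encoding value.
   Real decoders (gzip/deflate, and br via a Brotli binding) would be
   registered in [decoders]; unknown encodings return the raw bytes. *)
let decompress_body ~decoders ~content_encoding body =
  match List.assoc_opt (String.lowercase_ascii content_encoding) decoders with
  | Some decode -> decode body
  | None -> body

let () =
  (* Stand-in decoders so the sketch is runnable without bindings. *)
  let decoders = [ ("gzip", String.uppercase_ascii); ("br", String.lowercase_ascii) ] in
  assert (decompress_body ~decoders ~content_encoding:"BR" "ABC" = "abc");
  assert (decompress_body ~decoders ~content_encoding:"identity" "abc" = "abc")
```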

-### 9. Add EventListener/Callback System for Observability
+### 8. Add EventListener/Callback System for Observability

Implement lifecycle callbacks for HTTP operations: request start/end, DNS resolution, connection events, header/body progress, and errors. Enables APM integration, distributed tracing, and custom monitoring without log parsing.

···
**Implementation Notes:**
Define an event_listener record type with optional callbacks: on_request_start, on_dns_start/end, on_connect_start/end, on_tls_start/end, on_headers_received, on_body_progress, on_request_end, on_error. Invoke each at the appropriate point in the request lifecycle.
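A minimal sketch of that record, trimmed to a few of the callbacks named above (the exact field set and types are assumptions):

```ocaml
(* Every callback is optional, so users only pay for the events they
   subscribe to. Fields follow the notes above; types are illustrative. *)
type event_listener = {
  on_request_start : (string -> unit) option;  (* request URL *)
  on_headers_received : (int -> unit) option;  (* status code *)
  on_body_progress : (int -> unit) option;     (* bytes read so far *)
  on_error : (exn -> unit) option;
}

let default_listener = {
  on_request_start = None; on_headers_received = None;
  on_body_progress = None; on_error = None;
}

(* Fire an optional callback at the matching lifecycle point. *)
let emit cb v = Option.iter (fun f -> f v) cb

let () =
  let seen = ref [] in
  let l = { default_listener with
            on_request_start = Some (fun url -> seen := url :: !seen) } in
  emit l.on_request_start "https://example.com";
  emit l.on_headers_received 200;  (* None: silently ignored *)
  assert (!seen = ["https://example.com"])
```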

-### 10. Add Certificate Pinning Support
+### 9. Add Certificate Pinning Support

Implement certificate pinning to constrain which TLS certificates are trusted. Protects against CA compromise and MITM attacks by pinning expected certificate public keys per domain.

···
## Architectural Improvements

-### 11. Add Separate Write Timeout Configuration
+### 10. Add Separate Write Timeout Configuration

Add a dedicated write timeout distinct from the read timeout. Write stalls indicate different issues than read stalls (client bandwidth vs server processing), so a separate setting is important for detecting upload problems.

···
**Implementation Notes:**
Add a write_timeout field to Timeout.t (default 5s, vs 30s for reads). Apply it during Flow.copy_string and body send operations. A shorter write timeout catches stalled uploads faster.
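Sketched as a record extension; every field other than write_timeout is an assumption about the existing Timeout.t shape:

```ocaml
(* Hypothetical Timeout.t with the proposed write_timeout field. Separate
   budgets let a stalled upload fail after 5s instead of waiting out the
   30s read timeout. *)
type t = {
  connect_timeout : float;  (* seconds to establish the connection *)
  read_timeout : float;     (* seconds between response reads *)
  write_timeout : float;    (* NEW: seconds between request-body writes *)
}

let default = { connect_timeout = 10.0; read_timeout = 30.0; write_timeout = 5.0 }

let () = assert (default.write_timeout < default.read_timeout)
```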

-### 12. Add Response Body Caching for Multiple Access
+### 11. Add Response Body Caching for Multiple Access

Cache the response body after the first read so it can be accessed multiple times via text(), json(), and body(). Currently, calling text() and then json() fails because the flow has been consumed. This matches user expectations from other libraries.

···
**Implementation Notes:**
Add a mutable cached_body field to Response.t. On the first text()/body() call, consume the flow and store the string. Subsequent calls return the cached value. Add consume_body() for explicit flow access when streaming is needed.
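The caching scheme can be sketched as follows, with the consumable Eio flow modeled as a thunk so the example is self-contained (the record shape is illustrative, not the library's actual Response.t):

```ocaml
(* Sketch of the cached_body scheme: the flow can only be drained once,
   so the first read memoizes the result. *)
type response = {
  mutable cached_body : string option;
  consume_flow : unit -> string;  (* reads and exhausts the underlying flow *)
}

let body r =
  match r.cached_body with
  | Some s -> s                     (* second and later calls: cached *)
  | None ->
      let s = r.consume_flow () in  (* first call: drain the flow once *)
      r.cached_body <- Some s;
      s

let () =
  let reads = ref 0 in
  let r = { cached_body = None;
            consume_flow = (fun () -> incr reads; "payload") } in
  assert (body r = "payload");
  assert (body r = "payload");
  assert (!reads = 1)  (* the flow was consumed exactly once *)
```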

-### 13. Add Connection Pool Statistics and Monitoring
+### 12. Add Connection Pool Statistics and Monitoring

Expose connection pool metrics: active connections, idle connections, connection reuse rate, and pool efficiency. Essential for performance tuning and for diagnosing connection-related issues.

···
**Implementation Notes:**
Add a pool_stats() function to the session that exposes Conpool.stats. Include these metrics: connections_created, connections_reused, connections_idle, connections_active, pool_hits, pool_misses. Add them to the existing statistics record.
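A sketch of the statistics record and one derived metric; field names follow the notes above, while `reuse_rate` is an illustrative addition:

```ocaml
(* Proposed pool statistics record plus a derived reuse-rate metric. *)
type pool_stats = {
  connections_created : int;
  connections_reused : int;
  connections_idle : int;
  connections_active : int;
  pool_hits : int;
  pool_misses : int;
}

(* Reuse rate: fraction of connection acquisitions served from the pool. *)
let reuse_rate s =
  let total = s.pool_hits + s.pool_misses in
  if total = 0 then 0.0 else float_of_int s.pool_hits /. float_of_int total

let () =
  let s = { connections_created = 4; connections_reused = 16;
            connections_idle = 2; connections_active = 2;
            pool_hits = 16; pool_misses = 4 } in
  assert (reuse_rate s = 0.8)
```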

-### 14. Add Retry Budget to Prevent Retry Storms
+### 13. Add Retry Budget to Prevent Retry Storms

Implement a token-bucket retry budget limiting the percentage of extra load that retries can generate. This prevents cascading failures in which retries overwhelm already-degraded systems.
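The token-bucket idea can be sketched in a few lines: each initial request deposits a fraction of a token, and each retry must withdraw a whole token, so retries are bounded to roughly that fraction of total load (names and defaults here are illustrative):

```ocaml
(* Minimal token-bucket retry budget. With ratio = 0.25, retries can add
   at most ~25% extra load; a capacity cap stops idle periods from
   banking unlimited retries. *)
type retry_budget = {
  mutable tokens : float;
  capacity : float;  (* maximum banked tokens *)
  ratio : float;     (* tokens earned per initial request *)
}

let create ?(capacity = 10.0) ?(ratio = 0.25) () =
  { tokens = capacity; capacity; ratio }

let on_request b =
  b.tokens <- Float.min b.capacity (b.tokens +. b.ratio)

let try_retry b =
  if b.tokens >= 1.0 then (b.tokens <- b.tokens -. 1.0; true)
  else false  (* budget exhausted: fail fast instead of retrying *)

let () =
  let b = create ~capacity:2.0 ~ratio:0.25 () in
  assert (try_retry b);        (* 2.0 -> 1.0 *)
  assert (try_retry b);        (* 1.0 -> 0.0 *)
  assert (not (try_retry b));  (* exhausted: retry storm averted *)
  for _ = 1 to 4 do on_request b done;  (* 4 requests earn one token *)
  assert (try_retry b)
```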

···
## Feature Enhancements

-### 15. Add HTTP/2 Protocol Support
+### 14. Add HTTP/2 Protocol Support

Implement the HTTP/2 protocol with multiplexed streams, header compression (HPACK), and optional server push. Provides significant performance improvements for applications making concurrent requests to the same host.

···
**Implementation Notes:**
Integrate the h2 OCaml library for HTTP/2 framing. Add ALPN negotiation during the TLS handshake. Support automatic protocol selection (prefer HTTP/2, fall back to HTTP/1.1). Add an http_version config option.

-### 16. Add Unix Domain Socket Support
+### 15. Add Unix Domain Socket Support

Support connecting via Unix domain sockets for communication with local services like the Docker daemon, systemd, and other local APIs without TCP overhead.

···
## Feature Enhancements

-### 17. Add Response Caching with RFC 7234 Compliance
+### 16. Add Disk-Based Response Caching

-Implement disk-based HTTP response caching following RFC 7234 semantics. Respect Cache-Control headers, support conditional requests with ETags, and provide cache statistics.
+Implement disk-based HTTP response caching with LRU eviction and configurable size limits. The Cache-Control header parsing infrastructure is already in place via the `Cache_control` module.

**RFC References:**
- RFC 7234 (HTTP Caching)
···
- `lib/requests.mli`

**Implementation Notes:**
-Add optional cache parameter to session. Implement LRU disk cache with configurable size. Parse Cache-Control headers. Support If-None-Match and If-Modified-Since. Handle 304 Not Modified responses.
+Add an optional cache parameter to the session. Implement an LRU disk cache with configurable size. The library already provides `Cache_control.parse_response` for parsing Cache-Control headers and `Response.is_cacheable`, `Response.freshness_lifetime`, etc. for determining cacheability. Integrate with the existing conditional request helpers (`Headers.if_none_match`, `Headers.if_modified_since`) to handle 304 Not Modified responses.
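The per-request decision the cache would make can be sketched as below; the `directives` record is a stand-in for what `Cache_control.parse_response` returns, and its shape (like the three-way decision) is an assumption:

```ocaml
(* Cache lookup decision for each request: serve fresh entries directly,
   revalidate stale ones with a conditional request, otherwise fetch. *)
type directives = { no_store : bool; max_age : int option }

type decision =
  | Fresh       (* serve from cache, no network *)
  | Revalidate  (* conditional request: If-None-Match / If-Modified-Since *)
  | Fetch       (* not cacheable or not cached: full request *)

let decide ~cached ~age d =
  if d.no_store then Fetch
  else match cached, d.max_age with
    | false, _ | _, None -> Fetch
    | true, Some max_age -> if age < max_age then Fresh else Revalidate

let () =
  let d = { no_store = false; max_age = Some 300 } in
  assert (decide ~cached:true ~age:60 d = Fresh);
  assert (decide ~cached:true ~age:600 d = Revalidate);
  assert (decide ~cached:false ~age:0 d = Fetch);
  assert (decide ~cached:true ~age:0 { no_store = true; max_age = None } = Fetch)
```

A Revalidate result that comes back 304 Not Modified would refresh the stored entry's age and then serve it from disk.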

---

## Architectural Improvements

-### 18. Add Request ID Correlation for Logging
+### 17. Add Request ID Correlation for Logging

Assign a unique ID to each request and include it in all related log messages. This enables tracing a request's lifecycle through retries, redirects, and errors in concurrent environments.
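ID generation itself is cheap; a sketch with purely illustrative choices (8 hex characters, counter mixed with randomness):

```ocaml
(* Per-request ID for log correlation: short enough to keep log lines
   readable, unique enough to distinguish concurrent requests. *)
let new_request_id =
  let counter = ref 0 in
  fun () ->
    incr counter;
    (* Mix a counter with randomness so IDs differ even across restarts. *)
    Printf.sprintf "%04x%04x" (!counter land 0xffff) (Random.int 0x10000)

let () =
  Random.self_init ();
  let a = new_request_id () and b = new_request_id () in
  assert (String.length a = 8);
  assert (a <> b)  (* counter halves differ even if the random halves collide *)
```

Every log call in the request path would then be prefixed with this ID, including those made during retries and redirects of the same logical request.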

···
---

-## Feature Enhancements
-
-### 19. Add Conditional Request Helpers (ETag/Last-Modified)
-
-Add convenience methods for HTTP caching headers: Response.etag(), Response.last_modified(), and request helpers for If-None-Match, If-Modified-Since. Simplifies implementing efficient caching patterns.
-
-**RFC References:**
-- RFC 9110 Section 8.8.3 (ETag)
-- RFC 9110 Section 8.8.2 (Last-Modified)
-
-**Cross-Language Consensus:** 3 libraries
-**Source Libraries:** java/http-request, python/requests, javascript/axios
-
-**Affected Files:**
-- `lib/response.ml`
-- `lib/response.mli`
-- `lib/headers.ml`
-- `lib/requests.ml`
-
-**Implementation Notes:**
-Add Response.etag() and Response.last_modified() helpers to extract headers. Add Headers.if_none_match and Headers.if_modified_since constructors. Parse date headers using Ptime.
-
----
-
## Architectural Improvements

-### 20. Add HTTPError Body Population for Debugging
+### 18. Add HTTPError Body Population for Debugging

Automatically capture a response body preview (the first 1024 bytes) in HTTPError exceptions. Server error responses often contain detailed error messages useful for debugging.
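A sketch of the capture step; the exception shape is illustrative, not the library's actual HTTPError:

```ocaml
(* Keep only the first 1024 bytes so exceptions stay small while
   preserving the server's error message. *)
let preview_limit = 1024

let body_preview body =
  if String.length body <= preview_limit then body
  else String.sub body 0 preview_limit ^ "...(truncated)"

exception HTTPError of { status : int; body_preview : string }

let () =
  let small = "{\"error\":\"quota exceeded\"}" in
  assert (body_preview small = small);
  let big = String.make 5000 'x' in
  let p = body_preview big in
  assert (String.length p = preview_limit + String.length "...(truncated)");
  (* The preview travels with the exception: *)
  (try raise (HTTPError { status = 503; body_preview = p })
   with HTTPError { status; _ } -> assert (status = 503))
```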
371335
lib/body.ml (+95)
···
  | Stream _ -> failwith "Cannot convert streaming body to string for connection pooling (body must be materialized first)"
  | File _ -> failwith "Cannot convert file body to string for connection pooling (file must be read first)"
  | Multipart _ -> failwith "Cannot convert multipart body to string for connection pooling (must be encoded first)"
+
+  let is_empty = function
+    | Empty -> true
+    | _ -> false
+
+  let is_chunked = function
+    | Empty -> false
+    | String _ -> false
+    | Stream { length = Some _; _ } -> false
+    | Stream { length = None; _ } -> true
+    | File _ -> false
+    | Multipart _ -> true
+
+  module Write = Eio.Buf_write
+
+  let crlf w = Write.string w "\r\n"
+
+  (** Copy from a flow source to the writer *)
+  let write_stream w source =
+    let buf = Cstruct.create 8192 in
+    let rec copy () =
+      match Eio.Flow.single_read source buf with
+      | n ->
+        Write.cstruct w (Cstruct.sub buf 0 n);
+        copy ()
+      | exception End_of_file -> ()
+    in
+    copy ()
+
+  (** Write a chunk with hex size prefix *)
+  let write_chunk w data len =
+    Write.printf w "%x" len;
+    crlf w;
+    Write.cstruct w (Cstruct.sub data 0 len);
+    crlf w
+
+  (** Copy from a flow source using chunked transfer encoding *)
+  let write_stream_chunked w source =
+    let buf = Cstruct.create 8192 in
+    let rec copy () =
+      match Eio.Flow.single_read source buf with
+      | n ->
+        write_chunk w buf n;
+        copy ()
+      | exception End_of_file ->
+        (* Final chunk *)
+        Write.string w "0";
+        crlf w;
+        crlf w
+    in
+    copy ()
+
+  let write ~sw w = function
+    | Empty -> ()
+    | String { content; _ } ->
+      if content <> "" then Write.string w content
+    | Stream { source; _ } ->
+      write_stream w source
+    | File { file; _ } ->
+      let flow = Eio.Path.open_in ~sw file in
+      write_stream w (flow :> Eio.Flow.source_ty Eio.Resource.t)
+    | Multipart _ as body ->
+      (* For multipart, get the flow source and write it *)
+      (match to_flow_source ~sw body with
+       | Some source -> write_stream w source
+       | None -> ())
+
+  let write_chunked ~sw w = function
+    | Empty ->
+      (* Empty body with chunked encoding is just the final chunk *)
+      Write.string w "0";
+      crlf w;
+      crlf w
+    | String { content; _ } ->
+      if content <> "" then begin
+        Write.printf w "%x" (String.length content);
+        crlf w;
+        Write.string w content;
+        crlf w
+      end;
+      Write.string w "0";
+      crlf w;
+      crlf w
+    | Stream { source; _ } ->
+      write_stream_chunked w source
+    | File { file; _ } ->
+      let flow = Eio.Path.open_in ~sw file in
+      write_stream_chunked w (flow :> Eio.Flow.source_ty Eio.Resource.t)
+    | Multipart _ as body ->
+      (match to_flow_source ~sw body with
+       | Some source -> write_stream_chunked w source
+       | None ->
+         Write.string w "0";
+         crlf w;
+         crlf w)
end
lib/body.mli (+15)
···
  (** [to_string body] converts the body to a string for HTTP/1.1 requests.
      Only works for materialized bodies (String type).
      Raises Failure for streaming/file/multipart bodies. *)
+
+  val is_empty : t -> bool
+  (** [is_empty body] returns true if the body is empty. *)
+
+  val is_chunked : t -> bool
+  (** [is_chunked body] returns true if the body should use chunked transfer encoding
+      (i.e., it's a stream without known length or a multipart body). *)
+
+  val write : sw:Eio.Switch.t -> Eio.Buf_write.t -> t -> unit
+  (** [write ~sw w body] writes the body content to the buffer writer.
+      Uses the switch to manage resources like file handles. *)
+
+  val write_chunked : sw:Eio.Switch.t -> Eio.Buf_write.t -> t -> unit
+  (** [write_chunked ~sw w body] writes the body content using HTTP chunked
+      transfer encoding. Each chunk is prefixed with its hex size. *)
end
lib/http_client.ml (+79 −370)
···
   SPDX-License-Identifier: ISC
  ---------------------------------------------------------------------------*)

-(** Low-level HTTP/1.1 client over raw TCP connections for connection pooling *)
+(** Low-level HTTP/1.1 client over raw TCP connections for connection pooling
+
+    This module orchestrates {!Http_write} for request serialization and
+    {!Http_read} for response parsing, leveraging Eio's Buf_write and Buf_read
+    for efficient I/O.
+
+    Types are imported from {!Http_types} and re-exported for API convenience. *)

let src = Logs.Src.create "requests.http_client" ~doc:"Low-level HTTP client"
module Log = (val Logs.src_log src : Logs.LOG)

-(** {1 Response Limits Configuration}
+(** {1 Types}

-    Per Recommendation #2: Configurable limits for response body size,
-    header count, and header length to prevent DoS attacks. *)
+    Re-exported from {!Http_types} for API convenience.
+    We open Http_types to bring record field names into scope. *)

-type limits = {
-  max_response_body_size: int64;  (** Maximum response body size in bytes *)
-  max_header_size: int;           (** Maximum size of a single header line *)
-  max_header_count: int;          (** Maximum number of headers *)
-  max_decompressed_size: int64;   (** Maximum decompressed size *)
-  max_compression_ratio: float;   (** Maximum compression ratio allowed *)
-}
+open Http_types
+
+type limits = Http_types.limits
+let default_limits = Http_types.default_limits

-let default_limits = {
-  max_response_body_size = 104_857_600L;  (* 100MB *)
-  max_header_size = 16_384;               (* 16KB *)
-  max_header_count = 100;
-  max_decompressed_size = 104_857_600L;   (* 100MB *)
-  max_compression_ratio = 100.0;          (* 100:1 *)
-}
+type expect_100_config = Http_types.expect_100_config
+let default_expect_100_config = Http_types.default_expect_100_config

(** {1 Decompression Support} *)
···
      Log.warn (fun m -> m "Unknown Content-Encoding '%s', returning raw body" other);
      body

-(** {1 HTTP 100-Continue Configuration}
-
-    Per Recommendation #7: HTTP 100-Continue Support for Large Uploads.
-    RFC 9110 Section 10.1.1 (Expect) and Section 15.2.1 (100 Continue) *)
-
-type expect_100_config = {
-  enabled : bool;     (** Whether to use 100-continue at all *)
-  threshold : int64;  (** Body size threshold to trigger 100-continue (default: 1MB) *)
-  timeout : float;    (** Timeout to wait for 100 response (default: 1.0s) *)
-}
-
-let default_expect_100_config = {
-  enabled = true;
-  threshold = 1_048_576L;  (* 1MB *)
-  timeout = 1.0;           (* 1 second *)
-}
-
-(** {1 Request Building} *)
-
-(** Build HTTP/1.1 request headers only (no body), for 100-continue flow *)
-let build_request_headers ~method_ ~uri ~headers ~content_length =
-  let path = Uri.path uri in
-  let path = if path = "" then "/" else path in
-  let query = Uri.query uri in
-  let path_with_query =
-    if query = [] then path
-    else path ^ "?" ^ (Uri.encoded_of_query query)
-  in
-
-  let host = match Uri.host uri with
-    | Some h -> h
-    | None -> raise (Error.err (Error.Invalid_url {
-        url = Uri.to_string uri;
-        reason = "URI must have a host"
-      }))
-  in
-
-  (* RFC 7230: default ports should be omitted from Host header *)
-  let port = match Uri.port uri, Uri.scheme uri with
-    | Some p, Some "https" when p <> 443 -> ":" ^ string_of_int p
-    | Some p, Some "http" when p <> 80 -> ":" ^ string_of_int p
-    | Some p, _ -> ":" ^ string_of_int p
-    | None, _ -> ""
-  in
-
-  (* Build request line *)
-  let request_line = Printf.sprintf "%s %s HTTP/1.1\r\n" method_ path_with_query in
-
-  (* Ensure Host header is present *)
-  let headers = if not (Headers.mem "host" headers) then
-      Headers.add "host" (host ^ port) headers
-    else headers in
-
-  (* Ensure Connection header for keep-alive *)
-  let headers = if not (Headers.mem "connection" headers) then
-      Headers.add "connection" "keep-alive" headers
-    else headers in
-
-  (* Add Content-Length if we have a body length *)
-  let headers = match content_length with
-    | Some len when len > 0L && not (Headers.mem "content-length" headers) ->
-      Headers.add "content-length" (Int64.to_string len) headers
-    | _ -> headers
-  in
-
-  (* Build headers section *)
-  let headers_str =
-    Headers.to_list headers
-    |> List.map (fun (k, v) -> Printf.sprintf "%s: %s\r\n" k v)
-    |> String.concat ""
-  in
-
-  request_line ^ headers_str ^ "\r\n"
-
-(** Build HTTP/1.1 request as a string *)
-let build_request ~method_ ~uri ~headers ~body_str =
-  let path = Uri.path uri in
-  let path = if path = "" then "/" else path in
-  let query = Uri.query uri in
-  let path_with_query =
-    if query = [] then path
-    else path ^ "?" ^ (Uri.encoded_of_query query)
-  in
-
-  let host = match Uri.host uri with
-    | Some h -> h
-    | None -> raise (Error.err (Error.Invalid_url {
-        url = Uri.to_string uri;
-        reason = "URI must have a host"
-      }))
-  in
-
-  (* RFC 7230: default ports should be omitted from Host header *)
-  let port = match Uri.port uri, Uri.scheme uri with
-    | Some p, Some "https" when p <> 443 -> ":" ^ string_of_int p
-    | Some p, Some "http" when p <> 80 -> ":" ^ string_of_int p
-    | Some p, _ -> ":" ^ string_of_int p
-    | None, _ -> ""
-  in
-
-  (* Build request line *)
-  let request_line = Printf.sprintf "%s %s HTTP/1.1\r\n" method_ path_with_query in
-
-  (* Ensure Host header is present *)
-  let headers = if not (Headers.mem "host" headers) then
-      Headers.add "host" (host ^ port) headers
-    else headers in
-
-  (* Ensure Connection header for keep-alive *)
-  let headers = if not (Headers.mem "connection" headers) then
-      Headers.add "connection" "keep-alive" headers
-    else headers in
-
-  (* Add Content-Length if we have a body *)
-  let headers =
-    if body_str <> "" && not (Headers.mem "content-length" headers) then
-      let len = String.length body_str in
-      Headers.add "content-length" (string_of_int len) headers
-    else headers
-  in
-
-  (* Build headers section *)
-  let headers_str =
-    Headers.to_list headers
-    |> List.map (fun (k, v) -> Printf.sprintf "%s: %s\r\n" k v)
-    |> String.concat ""
-  in
-
-  request_line ^ headers_str ^ "\r\n" ^ body_str
-
-(** {1 Response Parsing} *)
-
-(** Parse HTTP response status line *)
-let parse_status_line line =
-  match String.split_on_char ' ' line with
-  | "HTTP/1.1" :: code :: _ | "HTTP/1.0" :: code :: _ ->
-    (try int_of_string code
-     with _ -> raise (Error.err (Error.Invalid_request {
-       reason = "Invalid status code: " ^ code
-     })))
-  | _ -> raise (Error.err (Error.Invalid_request {
-      reason = "Invalid status line: " ^ line
-    }))
-
-(** Parse HTTP headers from buffer reader with limits
-
-    Per Recommendation #2: Enforce header count and size limits *)
-let parse_headers ~limits buf_read =
-  let rec read_headers acc count =
-    let line = Eio.Buf_read.line buf_read in
-
-    (* Check for end of headers *)
-    if line = "" then List.rev acc
-    else begin
-      (* Check header count limit *)
-      if count >= limits.max_header_count then
-        raise (Error.err (Error.Headers_too_large {
-          limit = limits.max_header_count;
-          actual = count + 1
-        }));
-
-      (* Check header line size limit *)
-      if String.length line > limits.max_header_size then
-        raise (Error.err (Error.Headers_too_large {
-          limit = limits.max_header_size;
-          actual = String.length line
-        }));
-
-      match String.index_opt line ':' with
-      | None -> read_headers acc (count + 1)
-      | Some idx ->
-        let name = String.sub line 0 idx |> String.trim |> String.lowercase_ascii in
-        let value = String.sub line (idx + 1) (String.length line - idx - 1) |> String.trim in
-        read_headers ((name, value) :: acc) (count + 1)
-    end
-  in
-  read_headers [] 0 |> Headers.of_list
-
-(** Read body with Content-Length and size limit
-
-    Per Recommendation #26: Validate Content-Length matches actual body size
-    Per Recommendation #2: Enforce body size limits *)
-let read_fixed_body ~limits buf_read length =
-  (* Check size limit before allocating *)
-  if length > limits.max_response_body_size then
-    raise (Error.err (Error.Body_too_large {
-      limit = limits.max_response_body_size;
-      actual = Some length
-    }));
-
-  let buf = Buffer.create (Int64.to_int length) in
-  let bytes_read = ref 0L in
-
-  let rec read_n remaining =
-    if remaining > 0L then begin
-      let to_read = min 8192 (Int64.to_int remaining) in
-      let chunk = Eio.Buf_read.take to_read buf_read in
-      let chunk_len = String.length chunk in
-
-      if chunk_len = 0 then
-        (* Connection closed prematurely - Content-Length mismatch *)
-        raise (Error.err (Error.Content_length_mismatch {
-          expected = length;
-          actual = !bytes_read
-        }))
-      else begin
-        Buffer.add_string buf chunk;
-        bytes_read := Int64.add !bytes_read (Int64.of_int chunk_len);
-        read_n (Int64.sub remaining (Int64.of_int chunk_len))
-      end
-    end
-  in
-  read_n length;
-  Buffer.contents buf
-
-(** Read chunked body with size limit
-
-    Per Recommendation #2: Enforce body size limits *)
-let read_chunked_body ~limits buf_read =
-  let buf = Buffer.create 4096 in
-  let total_size = ref 0L in
-
-  let rec read_chunks () =
-    let size_line = Eio.Buf_read.line buf_read in
-    (* Parse hex chunk size, ignore extensions after ';' *)
-    let size_str = match String.index_opt size_line ';' with
-      | Some idx -> String.sub size_line 0 idx
-      | None -> size_line
-    in
-    let chunk_size = int_of_string ("0x" ^ size_str) in
-
-    if chunk_size = 0 then begin
-      (* Read trailing headers (if any) until empty line *)
-      let rec skip_trailers () =
-        let line = Eio.Buf_read.line buf_read in
-        if line <> "" then skip_trailers ()
-      in
-      skip_trailers ()
-    end else begin
-      (* Check size limit before reading chunk *)
-      let new_total = Int64.add !total_size (Int64.of_int chunk_size) in
-      if new_total > limits.max_response_body_size then
-        raise (Error.err (Error.Body_too_large {
-          limit = limits.max_response_body_size;
-          actual = Some new_total
-        }));
-
-      let chunk = Eio.Buf_read.take chunk_size buf_read in
-      Buffer.add_string buf chunk;
-      total_size := new_total;
-      let _crlf = Eio.Buf_read.line buf_read in  (* Read trailing CRLF *)
-      read_chunks ()
-    end
-  in
-  read_chunks ();
-  Buffer.contents buf
-
(** {1 Request Execution} *)

-(** Make HTTP request over a pooled connection *)
-let make_request ?(limits=default_limits) ~method_ ~uri ~headers ~body_str flow =
-  Log.debug (fun m -> m "Making %s request to %s" method_ (Uri.to_string uri));
-
-  (* Build and send request *)
-  let request_str = build_request ~method_ ~uri ~headers ~body_str in
-  Eio.Flow.copy_string request_str flow;
-
-  (* Read and parse response *)
-  let buf_read = Eio.Buf_read.of_flow flow ~max_size:max_int in
-
-  (* Parse status line *)
-  let status_line = Eio.Buf_read.line buf_read in
-  let status = parse_status_line status_line in
-
-  Log.debug (fun m -> m "Received response status: %d" status);
-
-  (* Parse headers with limits *)
-  let resp_headers = parse_headers ~limits buf_read in
-
-  (* Determine how to read body *)
-  let transfer_encoding = Headers.get "transfer-encoding" resp_headers in
-  let content_length = Headers.get "content-length" resp_headers |> Option.map Int64.of_string in
+(** Make HTTP request over a pooled connection using Buf_write/Buf_read *)
+let make_request ?(limits=default_limits) ~sw ~method_ ~uri ~headers ~body flow =
+  Log.debug (fun m -> m "Making %s request to %s" (Method.to_string method_) (Uri.to_string uri));

-  let body_str = match transfer_encoding, content_length with
-    | Some te, _ when String.lowercase_ascii te |> String.trim = "chunked" ->
-      Log.debug (fun m -> m "Reading chunked response body");
-      read_chunked_body ~limits buf_read
-    | _, Some len ->
-      Log.debug (fun m -> m "Reading fixed-length response body (%Ld bytes)" len);
-      read_fixed_body ~limits buf_read len
-    | Some other_te, None ->
-      Log.warn (fun m -> m "Unsupported transfer-encoding: %s, assuming no body" other_te);
-      ""
-    | None, None ->
-      Log.debug (fun m -> m "No body indicated");
-      ""
-  in
+  (* Write request using Buf_write - use write_and_flush to avoid nested switch issues *)
+  Http_write.write_and_flush flow (fun w ->
+    Http_write.request w ~sw ~method_ ~uri ~headers ~body
+  );

-  (status, resp_headers, body_str)
+  (* Read response using Buf_read *)
+  let buf_read = Http_read.of_flow flow ~max_size:max_int in
+  Http_read.response ~limits buf_read

(** Make HTTP request with optional auto-decompression *)
-let make_request_decompress ?(limits=default_limits) ~method_ ~uri ~headers ~body_str ~auto_decompress flow =
-  let (status, resp_headers, body_str) = make_request ~limits ~method_ ~uri ~headers ~body_str flow in
+let make_request_decompress ?(limits=default_limits) ~sw ~method_ ~uri ~headers ~body ~auto_decompress flow =
+  let (status, resp_headers, body_str) = make_request ~limits ~sw ~method_ ~uri ~headers ~body flow in
  if auto_decompress then
    let body_str = match Headers.get "content-encoding" resp_headers with
      | Some encoding -> decompress_body ~limits ~content_encoding:encoding body_str
···
(** Wait for 100 Continue or error response with timeout.
    Returns Continue, Rejected, or Timeout. *)
-let wait_for_100_continue ~timeout flow =
-  Log.debug (fun m -> m "Waiting for 100 Continue response (timeout: %.2fs)" timeout);
+let wait_for_100_continue ~limits ~timeout:_ flow =
+  Log.debug (fun m -> m "Waiting for 100 Continue response");

-  (* We need to peek at the response without consuming it fully *)
-  let buf_read = Eio.Buf_read.of_flow flow ~max_size:max_int in
+  let buf_read = Http_read.of_flow flow ~max_size:max_int in

-  (* Try to read status line with timeout *)
  try
-    (* Peek at available data - if server responds quickly, we'll see it *)
-    let status_line = Eio.Buf_read.line buf_read in
-    let status = parse_status_line status_line in
+    let status = Http_read.status_line buf_read in

    Log.debug (fun m -> m "Received response status %d while waiting for 100 Continue" status);

    if status = 100 then begin
      (* 100 Continue - read any headers (usually none) and return Continue *)
-      let _ = parse_headers ~limits:default_limits buf_read in
+      let _ = Http_read.headers ~limits buf_read in
      Log.info (fun m -> m "Received 100 Continue, proceeding with body");
      Continue
    end else begin
      (* Error response - server rejected based on headers *)
      Log.info (fun m -> m "Server rejected request with status %d before body sent" status);
-      let resp_headers = parse_headers ~limits:default_limits buf_read in
+      let resp_headers = Http_read.headers ~limits buf_read in
      let transfer_encoding = Headers.get "transfer-encoding" resp_headers in
      let content_length = Headers.get "content-length" resp_headers |> Option.map Int64.of_string in
      let body_str = match transfer_encoding, content_length with
        | Some te, _ when String.lowercase_ascii te |> String.trim = "chunked" ->
-          read_chunked_body ~limits:default_limits buf_read
-        | _, Some len -> read_fixed_body ~limits:default_limits buf_read len
+          Http_read.chunked_body ~limits buf_read
+        | _, Some len -> Http_read.fixed_body ~limits ~length:len buf_read
        | _ -> ""
      in
      Rejected (status, resp_headers, body_str)
```diff
···
     ?(limits=default_limits)
     ?(expect_100=default_expect_100_config)
     ~clock
+    ~sw
     ~method_
     ~uri
     ~headers
-    ~body_str
+    ~body
     flow =
-  let body_len = Int64.of_int (String.length body_str) in
+  let body_len = Body.content_length body |> Option.value ~default:0L in

   (* Determine if we should use 100-continue *)
   let use_100_continue =
     expect_100.enabled &&
     body_len >= expect_100.threshold &&
-    body_str <> "" &&
+    body_len > 0L &&
     not (Headers.mem "expect" headers) (* Don't override explicit Expect header *)
   in

···
     (* Standard request without 100-continue *)
     Log.debug (fun m -> m "100-continue not used (body_len=%Ld, threshold=%Ld, enabled=%b)"
       body_len expect_100.threshold expect_100.enabled);
-    make_request ~limits ~method_ ~uri ~headers ~body_str flow
+    make_request ~limits ~sw ~method_ ~uri ~headers ~body flow
   end else begin
     Log.info (fun m -> m "Using 100-continue for large body (%Ld bytes)" body_len);

-    (* Add Expect: 100-continue header *)
+    (* Add Expect: 100-continue header and Content-Type if present *)
     let headers_with_expect = Headers.expect_100_continue headers in
-
-    (* Build and send headers only *)
-    let headers_str = build_request_headers
-      ~method_ ~uri ~headers:headers_with_expect
-      ~content_length:(Some body_len)
+    let headers_with_expect = match Body.content_type body with
+      | Some mime -> Headers.add "content-type" (Mime.to_string mime) headers_with_expect
+      | None -> headers_with_expect
     in
-    Log.debug (fun m -> m "Sending request headers with Expect: 100-continue");
-    Eio.Flow.copy_string headers_str flow;
+
+    (* Send headers only using Buf_write *)
+    Http_write.write_and_flush flow (fun w ->
+      Http_write.request_headers_only w ~method_ ~uri
+        ~headers:headers_with_expect ~content_length:(Some body_len)
+    );

     (* Wait for 100 Continue or error response with timeout *)
     let result =
       try
         Eio.Time.with_timeout_exn clock expect_100.timeout (fun () ->
-          wait_for_100_continue ~timeout:expect_100.timeout flow
+          wait_for_100_continue ~limits ~timeout:expect_100.timeout flow
         )
       with Eio.Time.Timeout ->
         Log.debug (fun m -> m "100-continue timeout expired, sending body anyway");
···
     | Continue ->
       (* Server said continue - send body and read final response *)
       Log.debug (fun m -> m "Sending body after 100 Continue");
-      Eio.Flow.copy_string body_str flow;
+
+      (* Write body *)
+      Http_write.write_and_flush flow (fun w ->
+        if Body.Private.is_empty body then
+          ()
+        else if Body.Private.is_chunked body then
+          Body.Private.write_chunked ~sw w body
+        else
+          Body.Private.write ~sw w body
+      );

       (* Read final response *)
-      let buf_read = Eio.Buf_read.of_flow flow ~max_size:max_int in
-      let status_line = Eio.Buf_read.line buf_read in
-      let status = parse_status_line status_line in
-      let resp_headers = parse_headers ~limits buf_read in
-      let transfer_encoding = Headers.get "transfer-encoding" resp_headers in
-      let content_length = Headers.get "content-length" resp_headers |> Option.map Int64.of_string in
-      let resp_body_str = match transfer_encoding, content_length with
-        | Some te, _ when String.lowercase_ascii te |> String.trim = "chunked" ->
-          read_chunked_body ~limits buf_read
-        | _, Some len -> read_fixed_body ~limits buf_read len
-        | _ -> ""
-      in
-      (status, resp_headers, resp_body_str)
+      let buf_read = Http_read.of_flow flow ~max_size:max_int in
+      Http_read.response ~limits buf_read

     | Rejected (status, resp_headers, resp_body_str) ->
       (* Server rejected - return error response without sending body *)
···
     | Timeout ->
       (* Timeout expired - send body anyway per RFC 9110 *)
       Log.debug (fun m -> m "Sending body after timeout");
-      Eio.Flow.copy_string body_str flow;
+
+      (* Write body *)
+      Http_write.write_and_flush flow (fun w ->
+        if Body.Private.is_empty body then
+          ()
+        else if Body.Private.is_chunked body then
+          Body.Private.write_chunked ~sw w body
+        else
+          Body.Private.write ~sw w body
+      );

       (* Read response *)
-      let buf_read = Eio.Buf_read.of_flow flow ~max_size:max_int in
-      let status_line = Eio.Buf_read.line buf_read in
-      let status = parse_status_line status_line in
-      let resp_headers = parse_headers ~limits buf_read in
-      let transfer_encoding = Headers.get "transfer-encoding" resp_headers in
-      let content_length = Headers.get "content-length" resp_headers |> Option.map Int64.of_string in
-      let resp_body_str = match transfer_encoding, content_length with
-        | Some te, _ when String.lowercase_ascii te |> String.trim = "chunked" ->
-          read_chunked_body ~limits buf_read
-        | _, Some len -> read_fixed_body ~limits buf_read len
-        | _ -> ""
-      in
-      (status, resp_headers, resp_body_str)
+      let buf_read = Http_read.of_flow flow ~max_size:max_int in
+      Http_read.response ~limits buf_read
   end

 (** Make HTTP request with 100-continue support and optional auto-decompression *)
···
     ?(limits=default_limits)
     ?(expect_100=default_expect_100_config)
     ~clock
+    ~sw
     ~method_
     ~uri
     ~headers
-    ~body_str
+    ~body
     ~auto_decompress
     flow =
   let (status, resp_headers, body_str) =
-    make_request_100_continue ~limits ~expect_100 ~clock ~method_ ~uri ~headers ~body_str flow
+    make_request_100_continue ~limits ~expect_100 ~clock ~sw ~method_ ~uri ~headers ~body flow
   in
   if auto_decompress then
     let body_str = match Headers.get "content-encoding" resp_headers with
```
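The `use_100_continue` gate in the hunk above can be exercised in isolation. A minimal pure-OCaml sketch of the same decision, with no Eio or Body dependency (the record here is a local copy of `expect_100_config` for illustration):

```ocaml
(* Standalone sketch of the use_100_continue decision shown in the diff:
   enabled, body at or above the threshold, non-empty, and no explicit
   Expect header already set by the caller. *)
type expect_100_config = { enabled : bool; threshold : int64; timeout : float }

let default_expect_100_config =
  { enabled = true; threshold = 1_048_576L; timeout = 1.0 }

let use_100_continue cfg ~body_len ~has_expect_header =
  cfg.enabled
  && body_len >= cfg.threshold
  && body_len > 0L
  && not has_expect_header

let () =
  let cfg = default_expect_100_config in
  (* 2MB body: over the 1MB threshold, so the handshake is used *)
  assert (use_100_continue cfg ~body_len:2_000_000L ~has_expect_header:false);
  (* Small body: not worth a round trip *)
  assert (not (use_100_continue cfg ~body_len:10L ~has_expect_header:false));
  (* Caller already set Expect explicitly: never override *)
  assert (not (use_100_continue cfg ~body_len:2_000_000L ~has_expect_header:true))
```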
**lib/http_read.ml** (new file, +425 lines)
```ocaml
(*---------------------------------------------------------------------------
   Copyright (c) 2025 Anil Madhavapeddy <anil@recoil.org>. All rights reserved.
   SPDX-License-Identifier: ISC
  ---------------------------------------------------------------------------*)

(** HTTP response parsing using Eio.Buf_read combinators

    This module provides efficient HTTP/1.1 response parsing using Eio's
    buffered read API with parser combinators for clean, composable parsing. *)

let src = Logs.Src.create "requests.http_read" ~doc:"HTTP response parsing"
module Log = (val Logs.src_log src : Logs.LOG)

module Read = Eio.Buf_read

(** Import limits from Http_types - the single source of truth.
    We open Http_types to bring record field names into scope. *)
open Http_types
type limits = Http_types.limits

(** {1 Character Predicates} *)

(** HTTP version characters: letters, digits, slash, dot *)
let is_version_char = function
  | 'A'..'Z' | 'a'..'z' | '0'..'9' | '/' | '.' -> true
  | _ -> false

(** HTTP status code digits *)
let is_digit = function
  | '0'..'9' -> true
  | _ -> false

(** RFC 9110 token characters for header names *)
let is_token_char = function
  | 'A'..'Z' | 'a'..'z' | '0'..'9' -> true
  | '!' | '#' | '$' | '%' | '&' | '\'' | '*' | '+' | '-' | '.' -> true
  | '^' | '_' | '`' | '|' | '~' -> true
  | _ -> false

(** Hex digits for chunk size *)
let is_hex_digit = function
  | '0'..'9' | 'a'..'f' | 'A'..'F' -> true
  | _ -> false

(** Optional whitespace *)
let is_ows = function
  | ' ' | '\t' -> true
  | _ -> false

(** {1 Low-level Parsers} *)

let sp = Read.char ' '

let http_version r =
  Read.take_while is_version_char r

let status_code r =
  let code_str = Read.take_while is_digit r in
  if String.length code_str <> 3 then
    raise (Error.err (Error.Invalid_request {
      reason = "Invalid status code: " ^ code_str
    }));
  try int_of_string code_str
  with _ ->
    raise (Error.err (Error.Invalid_request {
      reason = "Invalid status code: " ^ code_str
    }))

let reason_phrase r =
  Read.line r

(** {1 Status Line Parser} *)

let status_line r =
  let version = http_version r in
  (* Validate HTTP version *)
  (match version with
   | "HTTP/1.1" | "HTTP/1.0" -> ()
   | _ ->
     raise (Error.err (Error.Invalid_request {
       reason = "Invalid HTTP version: " ^ version
     })));
  sp r;
  let code = status_code r in
  sp r;
  let _reason = reason_phrase r in
  Log.debug (fun m -> m "Parsed status line: %s %d" version code);
  code

(** {1 Header Parsing} *)

(** Parse a single header line. Returns ("", "") for empty line (end of headers). *)
let header_line r =
  let name = Read.take_while is_token_char r in
  if name = "" then begin
    (* Empty line - end of headers. Consume the CRLF. *)
    let line = Read.line r in
    if line <> "" then
      raise (Error.err (Error.Invalid_request {
        reason = "Expected empty line but got: " ^ line
      }));
    ("", "")
  end else begin
    Read.char ':' r;
    Read.skip_while is_ows r;
    let value = Read.line r in
    (String.lowercase_ascii name, String.trim value)
  end

(** Parse all headers with size and count limits *)
let headers ~limits r =
  let rec loop acc count =
    (* Check header count limit *)
    if count >= limits.max_header_count then
      raise (Error.err (Error.Headers_too_large {
        limit = limits.max_header_count;
        actual = count + 1
      }));

    let (name, value) = header_line r in

    if name = "" then begin
      (* End of headers *)
      Log.debug (fun m -> m "Parsed %d headers" count);
      Headers.of_list (List.rev acc)
    end else begin
      (* Check header line size limit *)
      let line_len = String.length name + String.length value + 2 in
      if line_len > limits.max_header_size then
        raise (Error.err (Error.Headers_too_large {
          limit = limits.max_header_size;
          actual = line_len
        }));

      loop ((name, value) :: acc) (count + 1)
    end
  in
  loop [] 0

(** {1 Body Parsing} *)

(** Read a fixed-length body with size limit checking *)
let fixed_body ~limits ~length r =
  (* Check size limit before allocating *)
  if length > limits.max_response_body_size then
    raise (Error.err (Error.Body_too_large {
      limit = limits.max_response_body_size;
      actual = Some length
    }));

  Log.debug (fun m -> m "Reading fixed-length body: %Ld bytes" length);

  let len_int = Int64.to_int length in
  let buf = Buffer.create len_int in
  let bytes_read = ref 0L in

  let rec read_n remaining =
    if remaining > 0L then begin
      let to_read = min 8192 (Int64.to_int remaining) in
      let chunk = Read.take to_read r in
      let chunk_len = String.length chunk in

      if chunk_len = 0 then
        (* Connection closed prematurely - Content-Length mismatch *)
        raise (Error.err (Error.Content_length_mismatch {
          expected = length;
          actual = !bytes_read
        }))
      else begin
        Buffer.add_string buf chunk;
        bytes_read := Int64.add !bytes_read (Int64.of_int chunk_len);
        read_n (Int64.sub remaining (Int64.of_int chunk_len))
      end
    end
  in
  read_n length;
  Buffer.contents buf

(** Parse chunk size line (hex size with optional extensions) *)
let chunk_size r =
  let hex_str = Read.take_while is_hex_digit r in
  if hex_str = "" then
    raise (Error.err (Error.Invalid_request {
      reason = "Empty chunk size"
    }));
  (* Skip any chunk extensions (after semicolon) *)
  Read.skip_while (fun c -> c <> '\r' && c <> '\n') r;
  let _ = Read.line r in (* Consume CRLF *)
  try int_of_string ("0x" ^ hex_str)
  with _ ->
    raise (Error.err (Error.Invalid_request {
      reason = "Invalid chunk size: " ^ hex_str
    }))

(** Skip trailer headers after final chunk *)
let skip_trailers r =
  let rec loop () =
    let line = Read.line r in
    if line <> "" then loop ()
  in
  loop ()

(** Read a chunked transfer-encoded body with size limit checking *)
let chunked_body ~limits r =
  Log.debug (fun m -> m "Reading chunked body");
  let buf = Buffer.create 4096 in
  let total_size = ref 0L in

  let rec read_chunks () =
    let size = chunk_size r in

    if size = 0 then begin
      (* Final chunk - skip trailers *)
      skip_trailers r;
      Log.debug (fun m -> m "Chunked body complete: %Ld bytes" !total_size);
      Buffer.contents buf
    end else begin
      (* Check size limit before reading chunk *)
      let new_total = Int64.add !total_size (Int64.of_int size) in
      if new_total > limits.max_response_body_size then
        raise (Error.err (Error.Body_too_large {
          limit = limits.max_response_body_size;
          actual = Some new_total
        }));

      let chunk = Read.take size r in
      Buffer.add_string buf chunk;
      total_size := new_total;
      let _ = Read.line r in (* Consume trailing CRLF *)
      read_chunks ()
    end
  in
  read_chunks ()

(** {1 Streaming Body Sources} *)

(** A flow source that reads from a Buf_read with a fixed length limit *)
module Fixed_body_source = struct
  type t = {
    buf_read : Read.t;
    mutable remaining : int64;
  }

  let single_read t dst =
    if t.remaining <= 0L then raise End_of_file;

    let to_read = min (Cstruct.length dst) (Int64.to_int (min t.remaining 8192L)) in

    (* Ensure data is available *)
    Read.ensure t.buf_read to_read;
    let src = Read.peek t.buf_read in
    let actual = min to_read (Cstruct.length src) in

    Cstruct.blit src 0 dst 0 actual;
    Read.consume t.buf_read actual;
    t.remaining <- Int64.sub t.remaining (Int64.of_int actual);
    actual

  let read_methods = []
end

let fixed_body_stream ~limits ~length buf_read =
  (* Check size limit *)
  if length > limits.max_response_body_size then
    raise (Error.err (Error.Body_too_large {
      limit = limits.max_response_body_size;
      actual = Some length
    }));

  let t = { Fixed_body_source.buf_read; remaining = length } in
  let ops = Eio.Flow.Pi.source (module Fixed_body_source) in
  Eio.Resource.T (t, ops)

(** A flow source that reads chunked transfer encoding from a Buf_read *)
module Chunked_body_source = struct
  type state =
    | Reading_size
    | Reading_chunk of int
    | Reading_chunk_end
    | Done

  type t = {
    buf_read : Read.t;
    mutable state : state;
    mutable total_read : int64;
    limits : limits;
  }

  let read_chunk_size t =
    let hex_str = Read.take_while is_hex_digit t.buf_read in
    if hex_str = "" then 0
    else begin
      (* Skip extensions and CRLF *)
      Read.skip_while (fun c -> c <> '\r' && c <> '\n') t.buf_read;
      let _ = Read.line t.buf_read in
      try int_of_string ("0x" ^ hex_str)
      with _ -> 0
    end

  let single_read t dst =
    let rec aux () =
      match t.state with
      | Done -> raise End_of_file
      | Reading_size ->
        let size = read_chunk_size t in
        if size = 0 then begin
          (* Skip trailers *)
          let rec skip () =
            let line = Read.line t.buf_read in
            if line <> "" then skip ()
          in
          skip ();
          t.state <- Done;
          raise End_of_file
        end else begin
          (* Check size limit *)
          let new_total = Int64.add t.total_read (Int64.of_int size) in
          if new_total > t.limits.max_response_body_size then
            raise (Error.err (Error.Body_too_large {
              limit = t.limits.max_response_body_size;
              actual = Some new_total
            }));
          t.state <- Reading_chunk size;
          aux ()
        end
      | Reading_chunk remaining ->
        let to_read = min (Cstruct.length dst) remaining in
        Read.ensure t.buf_read to_read;
        let src = Read.peek t.buf_read in
        let actual = min to_read (Cstruct.length src) in
        Cstruct.blit src 0 dst 0 actual;
        Read.consume t.buf_read actual;
        t.total_read <- Int64.add t.total_read (Int64.of_int actual);
        let new_remaining = remaining - actual in
        if new_remaining = 0 then
          t.state <- Reading_chunk_end
        else
          t.state <- Reading_chunk new_remaining;
        actual
      | Reading_chunk_end ->
        let _ = Read.line t.buf_read in (* Consume trailing CRLF *)
        t.state <- Reading_size;
        aux ()
    in
    aux ()

  let read_methods = []
end

let chunked_body_stream ~limits buf_read =
  let t = {
    Chunked_body_source.buf_read;
    state = Reading_size;
    total_read = 0L;
    limits
  } in
  let ops = Eio.Flow.Pi.source (module Chunked_body_source) in
  Eio.Resource.T (t, ops)

(** {1 High-level Response Parsing} *)

(** Parse complete response (status + headers + body) to string *)
let response ~limits r =
  let status = status_line r in
  let hdrs = headers ~limits r in

  (* Determine how to read body *)
  let transfer_encoding = Headers.get "transfer-encoding" hdrs in
  let content_length = Headers.get "content-length" hdrs |> Option.map Int64.of_string in

  let body = match transfer_encoding, content_length with
    | Some te, _ when String.lowercase_ascii te |> String.trim = "chunked" ->
      Log.debug (fun m -> m "Reading chunked response body");
      chunked_body ~limits r
    | _, Some len ->
      Log.debug (fun m -> m "Reading fixed-length response body (%Ld bytes)" len);
      fixed_body ~limits ~length:len r
    | Some other_te, None ->
      Log.warn (fun m -> m "Unsupported transfer-encoding: %s, assuming no body" other_te);
      ""
    | None, None ->
      Log.debug (fun m -> m "No body indicated");
      ""
  in

  (status, hdrs, body)

(** Response with streaming body *)
type stream_response = {
  status : int;
  headers : Headers.t;
  body : [ `String of string
         | `Stream of Eio.Flow.source_ty Eio.Resource.t
         | `None ]
}

let response_stream ~limits r =
  let status = status_line r in
  let hdrs = headers ~limits r in

  (* Determine body type *)
  let transfer_encoding = Headers.get "transfer-encoding" hdrs in
  let content_length = Headers.get "content-length" hdrs |> Option.map Int64.of_string in

  let body = match transfer_encoding, content_length with
    | Some te, _ when String.lowercase_ascii te |> String.trim = "chunked" ->
      Log.debug (fun m -> m "Creating chunked body stream");
      `Stream (chunked_body_stream ~limits r)
    | _, Some len ->
      Log.debug (fun m -> m "Creating fixed-length body stream (%Ld bytes)" len);
      `Stream (fixed_body_stream ~limits ~length:len r)
    | Some other_te, None ->
      Log.warn (fun m -> m "Unsupported transfer-encoding: %s, assuming no body" other_te);
      `None
    | None, None ->
      Log.debug (fun m -> m "No body indicated");
      `None
  in

  { status; headers = hdrs; body }

(** {1 Convenience Functions} *)

let of_flow ?initial_size ~max_size flow =
  Read.of_flow ?initial_size ~max_size flow
```
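The chunk-size grammar handled by `chunk_size` above (leading hex digits, optional `;name=value` extensions, CRLF terminator) can be illustrated without Eio. A pure-string sketch of the same parsing step, for reference only:

```ocaml
(* Pure-string version of the chunk-size parsing done in Http_read.chunk_size:
   the leading hex digits give the size; anything from ';' onward is a chunk
   extension and is ignored. *)
let parse_chunk_size line =
  let is_hex = function '0'..'9' | 'a'..'f' | 'A'..'F' -> true | _ -> false in
  let n = String.length line in
  let rec hex_end i = if i < n && is_hex line.[i] then hex_end (i + 1) else i in
  let stop = hex_end 0 in
  if stop = 0 then None  (* no hex digits at all: malformed *)
  else int_of_string_opt ("0x" ^ String.sub line 0 stop)

let () =
  assert (parse_chunk_size "1a" = Some 26);
  assert (parse_chunk_size "1A;ext=value" = Some 26);  (* extension ignored *)
  assert (parse_chunk_size "0" = Some 0);              (* final chunk *)
  assert (parse_chunk_size ";bad" = None)
```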
**lib/http_read.mli** (new file, +106 lines)
```ocaml
(*---------------------------------------------------------------------------
   Copyright (c) 2025 Anil Madhavapeddy <anil@recoil.org>. All rights reserved.
   SPDX-License-Identifier: ISC
  ---------------------------------------------------------------------------*)

(** HTTP response parsing using Eio.Buf_read combinators

    This module provides efficient HTTP/1.1 response parsing using Eio's
    buffered read API with parser combinators for clean, composable parsing.

    Example:
    {[
      let buf_read = Http_read.of_flow ~max_size:max_int flow in
      let (status, headers, body) = Http_read.response ~limits buf_read
    ]} *)

(** {1 Response Limits}

    This module uses {!Http_types.limits} from the shared types module. *)

type limits = Http_types.limits
(** Alias for {!Http_types.limits}. See {!Http_types} for field documentation. *)

(** {1 Low-level Parsers} *)

val http_version : Eio.Buf_read.t -> string
(** [http_version r] parses an HTTP version string (e.g., "HTTP/1.1"). *)

val status_code : Eio.Buf_read.t -> int
(** [status_code r] parses a 3-digit HTTP status code.
    @raise Error.t if the status code is invalid. *)

val status_line : Eio.Buf_read.t -> int
(** [status_line r] parses a complete HTTP status line and returns the status code.
    Validates that the HTTP version is 1.0 or 1.1.
    @raise Error.t if the status line is invalid. *)

(** {1 Header Parsing} *)

val header_line : Eio.Buf_read.t -> string * string
(** [header_line r] parses a single header line.
    Returns [(name, value)] where name is lowercase.
    Returns [("", "")] for the empty line that terminates headers. *)

val headers : limits:limits -> Eio.Buf_read.t -> Headers.t
(** [headers ~limits r] parses all headers until the terminating blank line.
    Enforces header count and size limits.
    @raise Error.Headers_too_large if limits are exceeded. *)

(** {1 Body Parsing} *)

val fixed_body : limits:limits -> length:int64 -> Eio.Buf_read.t -> string
(** [fixed_body ~limits ~length r] reads exactly [length] bytes as the body.
    @raise Error.Body_too_large if length exceeds the limit.
    @raise Error.Content_length_mismatch if EOF occurs before all bytes are read. *)

val chunked_body : limits:limits -> Eio.Buf_read.t -> string
(** [chunked_body ~limits r] reads a chunked transfer-encoded body.
    Handles chunk sizes, extensions, and trailers.
    @raise Error.Body_too_large if total body size exceeds the limit. *)

(** {1 Streaming Body Sources} *)

val fixed_body_stream : limits:limits -> length:int64 ->
  Eio.Buf_read.t -> Eio.Flow.source_ty Eio.Resource.t
(** [fixed_body_stream ~limits ~length r] creates a flow source that reads
    [length] bytes from [r]. Useful for large bodies to avoid loading
    everything into memory at once. *)

val chunked_body_stream : limits:limits ->
  Eio.Buf_read.t -> Eio.Flow.source_ty Eio.Resource.t
(** [chunked_body_stream ~limits r] creates a flow source that reads
    chunked transfer-encoded data from [r]. Decodes chunks on-the-fly. *)

(** {1 High-level Response Parsing} *)

val response : limits:limits -> Eio.Buf_read.t -> int * Headers.t * string
(** [response ~limits r] parses a complete HTTP response including:
    - Status line (returns status code)
    - Headers
    - Body (based on Transfer-Encoding or Content-Length)

    This reads the entire body into memory. For large responses,
    use {!response_stream} instead. *)

(** {1 Streaming Response} *)

type stream_response = {
  status : int;
  headers : Headers.t;
  body : [ `String of string
         | `Stream of Eio.Flow.source_ty Eio.Resource.t
         | `None ]
}
(** A parsed response with optional streaming body. *)

val response_stream : limits:limits -> Eio.Buf_read.t -> stream_response
(** [response_stream ~limits r] parses the status line and headers, then
    returns a streaming body source instead of reading the body into memory.
    Use this for large responses. *)

(** {1 Convenience Functions} *)

val of_flow : ?initial_size:int -> max_size:int -> _ Eio.Flow.source -> Eio.Buf_read.t
(** [of_flow ~max_size flow] creates a buffered reader from [flow].
    This is a thin wrapper around {!Eio.Buf_read.of_flow}. *)
```
**lib/http_types.ml** (new file, +48 lines)
```ocaml
(*---------------------------------------------------------------------------
   Copyright (c) 2025 Anil Madhavapeddy <anil@recoil.org>. All rights reserved.
   SPDX-License-Identifier: ISC
  ---------------------------------------------------------------------------*)

(** Shared types for HTTP protocol handling

    This module contains type definitions used across the HTTP client modules.
    It serves as the single source of truth for types shared between
    {!Http_read}, {!Http_write}, and {!Http_client}. *)

(** {1 Response Limits}

    Per Recommendation #2: Configurable limits for response body size,
    header count, and header length to prevent DoS attacks. *)

type limits = {
  max_response_body_size: int64;  (** Maximum response body size in bytes (default: 100MB) *)
  max_header_size: int;           (** Maximum size of a single header line (default: 16KB) *)
  max_header_count: int;          (** Maximum number of headers (default: 100) *)
  max_decompressed_size: int64;   (** Maximum decompressed size (default: 100MB) *)
  max_compression_ratio: float;   (** Maximum compression ratio allowed (default: 100:1) *)
}

let default_limits = {
  max_response_body_size = 104_857_600L;  (* 100MB *)
  max_header_size = 16_384;               (* 16KB *)
  max_header_count = 100;
  max_decompressed_size = 104_857_600L;   (* 100MB *)
  max_compression_ratio = 100.0;          (* 100:1 *)
}

(** {1 HTTP 100-Continue Configuration}

    Per Recommendation #7: HTTP 100-Continue Support for Large Uploads.
    RFC 9110 Section 10.1.1 (Expect) and Section 15.2.1 (100 Continue) *)

type expect_100_config = {
  enabled : bool;      (** Whether to use 100-continue at all *)
  threshold : int64;   (** Body size threshold to trigger 100-continue (default: 1MB) *)
  timeout : float;     (** Timeout to wait for 100 response (default: 1.0s) *)
}

let default_expect_100_config = {
  enabled = true;
  threshold = 1_048_576L;  (* 1MB *)
  timeout = 1.0;           (* 1 second *)
}
```
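Because `limits` is a plain record with exposed fields, callers can tighten individual limits via OCaml's functional record update while inheriting the rest of the defaults. A self-contained sketch (the type below is a trimmed local copy of `Http_types.limits`, kept short for illustration):

```ocaml
(* Trimmed copy of the limits record, to demonstrate the override idiom. *)
type limits = {
  max_response_body_size : int64;
  max_header_size : int;
  max_header_count : int;
}

let default_limits = {
  max_response_body_size = 104_857_600L;  (* 100MB *)
  max_header_size = 16_384;               (* 16KB *)
  max_header_count = 100;
}

(* Functional record update: override only what you need, keep the rest. *)
let strict_limits = { default_limits with max_response_body_size = 1_048_576L }

let () =
  assert (strict_limits.max_response_body_size = 1_048_576L);
  assert (strict_limits.max_header_count = 100)  (* inherited from defaults *)
```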
**lib/http_types.mli** (new file, +50 lines)
```ocaml
(*---------------------------------------------------------------------------
   Copyright (c) 2025 Anil Madhavapeddy <anil@recoil.org>. All rights reserved.
   SPDX-License-Identifier: ISC
  ---------------------------------------------------------------------------*)

(** Shared types for HTTP protocol handling

    This module contains type definitions used across the HTTP client modules.
    It serves as the single source of truth for types shared between
    {!Http_read}, {!Http_write}, and {!Http_client}. *)

(** {1 Response Limits}

    Configurable limits for response body size, header count, and header length
    to prevent DoS attacks. *)

type limits = {
  max_response_body_size: int64;  (** Maximum response body size in bytes *)
  max_header_size: int;           (** Maximum size of a single header line *)
  max_header_count: int;          (** Maximum number of headers *)
  max_decompressed_size: int64;   (** Maximum decompressed size *)
  max_compression_ratio: float;   (** Maximum compression ratio allowed *)
}
(** Response size limits to prevent resource exhaustion. *)

val default_limits : limits
(** Default limits:
    - max_response_body_size: 100MB
    - max_header_size: 16KB
    - max_header_count: 100
    - max_decompressed_size: 100MB
    - max_compression_ratio: 100:1 *)

(** {1 HTTP 100-Continue Configuration}

    Configuration for the HTTP 100-Continue protocol, which allows clients
    to check if the server will accept a request before sending a large body. *)

type expect_100_config = {
  enabled : bool;      (** Whether to use 100-continue at all *)
  threshold : int64;   (** Body size threshold to trigger 100-continue *)
  timeout : float;     (** Timeout to wait for 100 response in seconds *)
}
(** Configuration for HTTP 100-Continue support. *)

val default_expect_100_config : expect_100_config
(** Default configuration:
    - enabled: true
    - threshold: 1MB
    - timeout: 1.0s *)
```
**lib/http_write.ml** (new file, +189 lines; excerpt)
```ocaml
(*---------------------------------------------------------------------------
   Copyright (c) 2025 Anil Madhavapeddy <anil@recoil.org>. All rights reserved.
   SPDX-License-Identifier: ISC
  ---------------------------------------------------------------------------*)

(** HTTP request serialization using Eio.Buf_write

    This module provides efficient HTTP/1.1 request serialization using Eio's
    buffered write API. It avoids intermediate string allocations by writing
    directly to the output buffer. *)

let src = Logs.Src.create "requests.http_write" ~doc:"HTTP request serialization"
module Log = (val Logs.src_log src : Logs.LOG)

module Write = Eio.Buf_write

(** {1 Low-level Writers} *)

let crlf w =
  Write.string w "\r\n"

let sp w =
  Write.char w ' '

(** {1 Request Line} *)

let request_line w ~method_ ~uri =
  let path = Uri.path uri in
  let path = if path = "" then "/" else path in
  let query = Uri.query uri in
  let path_with_query =
    if query = [] then path
    else path ^ "?" ^ (Uri.encoded_of_query query)
  in
  Write.string w method_;
  sp w;
  Write.string w path_with_query;
  Write.string w " HTTP/1.1";
  crlf w

(** {1 Header Writing} *)

let header w ~name ~value =
  Write.string w name;
  Write.string w ": ";
  Write.string w value;
  crlf w

let headers w hdrs =
  Headers.to_list hdrs
  |> List.iter (fun (name, value) -> header w ~name ~value);
  crlf w

(** Build Host header value from URI *)
let host_value uri =
  let host = match Uri.host uri with
    | Some h -> h
    | None -> raise (Error.err (Error.Invalid_url {
        url = Uri.to_string uri;
        reason = "URI must have a host"
      }))
  in
  (* RFC 7230: default ports should be omitted from the Host header.
     Match the default port cases first so they fall through to the bare
     host; any other explicit port is appended. *)
  match Uri.port uri, Uri.scheme uri with
  | Some 443, Some "https" -> host
  | Some 80, Some "http" -> host
  | Some p, _ -> host ^ ":" ^ string_of_int p
  | None, _ -> host

let request_headers w ~method_ ~uri ~headers:hdrs ~content_length =
  (* Write request line *)
  request_line w ~method_ ~uri;

  (* Ensure Host header is present *)
  let hdrs = if not (Headers.mem "host" hdrs) then
      Headers.add "host" (host_value uri) hdrs
    else hdrs in

  (* Ensure Connection header for keep-alive *)
  let hdrs = if not (Headers.mem "connection" hdrs) then
      Headers.add "connection" "keep-alive" hdrs
    else hdrs in

  (* Add Content-Length if we have a body length *)
  let hdrs = match content_length with
    | Some len when len > 0L && not (Headers.mem "content-length" hdrs) ->
      Headers.add "content-length" (Int64.to_string len) hdrs
    | _ -> hdrs
  in

  (* Write all headers *)
  headers w hdrs

(** {1 Body Writing} *)

let body_string w s =
  if s <> "" then
    Write.string w s

(** Copy from a flow source to the writer, chunk by chunk *)
let body_stream w source =
  let buf = Cstruct.create 8192 in
  let rec copy () =
    match Eio.Flow.single_read source buf with
    | n ->
      Write.cstruct w (Cstruct.sub buf 0 n);
      copy ()
    | exception End_of_file -> ()
  in
  copy ()

(** Write body using chunked transfer encoding *)
let body_chunked w source =
  let buf = Cstruct.create 8192 in
  let rec copy () =
    match Eio.Flow.single_read source buf with
    | n ->
      (* Write chunk size in hex *)
      Write.printf w "%x" n;
      crlf w;
      (* Write chunk data *)
      Write.cstruct w (Cstruct.sub buf 0 n);
      crlf w;
      copy ()
    | exception End_of_file ->
      (* Write final chunk *)
      Write.string w "0";
      crlf w;
      crlf w
  in
  copy ()

(** {1 High-level Request Writing} *)

let request w ~sw ~method_ ~uri ~headers:hdrs ~body =
  let method_str = Method.to_string method_ in

  (* Get content type and length from body *)
  let content_type = Body.content_type body in
  let content_length = Body.content_length body in

  (* Add Content-Type header if body has one *)
  let hdrs = match content_type with
    | Some mime when not (Headers.mem "content-type" hdrs) ->
      Headers.add "content-type" (Mime.to_string mime) hdrs
    | _ -> hdrs
  in

  (* Determine if we need chunked encoding *)
  let use_chunked = Body.Private.is_chunked body in

  let hdrs = if use_chunked && not (Headers.mem "transfer-encoding" hdrs) then
      Headers.add "transfer-encoding" "chunked" hdrs
    else hdrs in

  (* Write request line and headers *)
  request_headers w ~method_:method_str ~uri ~headers:hdrs ~content_length;
```

Note: the original `host_value` guarded the https/http cases with `when p <> 443` / `when p <> 80` but then fell through to a catch-all that appended the port anyway, so default ports were never actually omitted; the match above fixes that by handling the default-port cases first.
157157+ request_headers w ~method_:method_str ~uri ~headers:hdrs ~content_length;
158158+159159+ (* Write body *)
160160+ if Body.Private.is_empty body then
161161+ ()
162162+ else if use_chunked then
163163+ Body.Private.write_chunked ~sw w body
164164+ else
165165+ Body.Private.write ~sw w body
166166+167167+(** {1 Headers-Only Writing (for 100-continue)} *)
168168+169169+let request_headers_only w ~method_ ~uri ~headers:hdrs ~content_length =
170170+ let method_str = Method.to_string method_ in
171171+ request_headers w ~method_:method_str ~uri ~headers:hdrs ~content_length
172172+173173+(** {1 Convenience Wrappers} *)
174174+175175+let with_flow ?initial_size flow fn =
176176+ Write.with_flow ?initial_size flow fn
177177+178178+(** Write and flush directly to flow without creating a nested switch.
179179+ This is a simpler alternative to [with_flow] that avoids potential
180180+ issues with nested switches in the Eio fiber system. *)
181181+let write_and_flush ?(initial_size=0x1000) flow fn =
182182+ (* Create a writer without attaching to a switch *)
183183+ let w = Write.create initial_size in
184184+ (* Execute the writing function *)
185185+ fn w;
186186+ (* Serialize to string and copy to flow *)
187187+ let data = Write.serialize_to_string w in
188188+ if String.length data > 0 then
189189+ Eio.Flow.copy_string data flow
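The framing that `body_chunked` emits is standard HTTP/1.1 chunked transfer encoding: each read becomes its size in lowercase hex, CRLF, the bytes, CRLF, and the stream ends with a zero-length chunk. A stdlib-only sketch of the same framing (hypothetical `encode_chunked` helper, with `Buffer` standing in for `Eio.Buf_write`):

```ocaml
(* Sketch of the chunked framing written by [body_chunked] above.
   Each element of [chunks] plays the role of one successful read. *)
let encode_chunked (chunks : string list) : string =
  let buf = Buffer.create 64 in
  List.iter
    (fun c ->
       (* chunk size in hex, CRLF, chunk data, CRLF *)
       Buffer.add_string buf (Printf.sprintf "%x\r\n" (String.length c));
       Buffer.add_string buf c;
       Buffer.add_string buf "\r\n")
    chunks;
  (* terminating zero-length chunk *)
  Buffer.add_string buf "0\r\n\r\n";
  Buffer.contents buf
```

Note that `single_read` never returns 0, so the real writer cannot accidentally emit a premature `0\r\n` terminator mid-stream; end of input is signalled by `End_of_file`.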
lib/http_write.mli (+101)
···
+(*---------------------------------------------------------------------------
+   Copyright (c) 2025 Anil Madhavapeddy <anil@recoil.org>. All rights reserved.
+   SPDX-License-Identifier: ISC
+  ---------------------------------------------------------------------------*)
+
+(** HTTP request serialization using Eio.Buf_write
+
+    This module provides efficient HTTP/1.1 request serialization using Eio's
+    buffered write API. It avoids intermediate string allocations by writing
+    directly to the output buffer.
+
+    Example:
+    {[
+      Http_write.with_flow flow (fun w ->
+        Http_write.request w ~sw ~method_:`GET ~uri
+          ~headers:Headers.empty ~body:Body.empty
+      )
+    ]} *)
+
+(** {1 Low-level Writers} *)
+
+val crlf : Eio.Buf_write.t -> unit
+(** [crlf w] writes a CRLF line terminator ("\r\n") to [w]. *)
+
+val request_line : Eio.Buf_write.t -> method_:string -> uri:Uri.t -> unit
+(** [request_line w ~method_ ~uri] writes an HTTP request line.
+    For example: "GET /path?query HTTP/1.1\r\n" *)
+
+val header : Eio.Buf_write.t -> name:string -> value:string -> unit
+(** [header w ~name ~value] writes a single header line.
+    For example: "Content-Type: application/json\r\n" *)
+
+val headers : Eio.Buf_write.t -> Headers.t -> unit
+(** [headers w hdrs] writes all headers from [hdrs], followed by
+    a blank line (CRLF) to terminate the headers section. *)
+
+(** {1 Request Headers} *)
+
+val request_headers : Eio.Buf_write.t -> method_:string -> uri:Uri.t ->
+  headers:Headers.t -> content_length:int64 option -> unit
+(** [request_headers w ~method_ ~uri ~headers ~content_length] writes a complete
+    HTTP request header section, including:
+    - Request line (method, path, HTTP/1.1)
+    - Host header (extracted from the URI if not present)
+    - Connection: keep-alive (if not present)
+    - Content-Length (if [content_length] is provided and > 0)
+    - All headers from [headers]
+    - Terminating blank line *)
+
+val request_headers_only : Eio.Buf_write.t -> method_:Method.t -> uri:Uri.t ->
+  headers:Headers.t -> content_length:int64 option -> unit
+(** [request_headers_only] is like {!request_headers} but takes a [Method.t]
+    instead of a string. Used for the 100-continue flow, where headers are
+    sent before the body. *)
+
+(** {1 Body Writing} *)
+
+val body_string : Eio.Buf_write.t -> string -> unit
+(** [body_string w s] writes string [s] as the request body.
+    Does nothing if [s] is empty. *)
+
+val body_stream : Eio.Buf_write.t -> Eio.Flow.source_ty Eio.Resource.t -> unit
+(** [body_stream w source] copies data from [source] to [w] until EOF.
+    Uses 8 KiB chunks for efficiency. The caller must ensure Content-Length
+    is set correctly in the headers. *)
+
+val body_chunked : Eio.Buf_write.t -> Eio.Flow.source_ty Eio.Resource.t -> unit
+(** [body_chunked w source] writes data from [source] using HTTP chunked
+    transfer encoding. Each chunk is prefixed with its size in hex,
+    followed by CRLF, the data, and another CRLF. Ends with "0\r\n\r\n". *)
+
+(** {1 High-level Request Writing} *)
+
+val request : Eio.Buf_write.t -> sw:Eio.Switch.t -> method_:Method.t ->
+  uri:Uri.t -> headers:Headers.t -> body:Body.t -> unit
+(** [request w ~sw ~method_ ~uri ~headers ~body] writes a complete HTTP request
+    including headers and body. Automatically handles:
+    - Content-Type header from the body
+    - Content-Length header for sized bodies
+    - Transfer-Encoding: chunked for unsized streams
+    - Multipart body encoding *)
+
+(** {1 Convenience Wrappers} *)
+
+val with_flow : ?initial_size:int -> _ Eio.Flow.sink ->
+  (Eio.Buf_write.t -> 'a) -> 'a
+(** [with_flow flow fn] runs [fn writer] where [writer] is a buffer that
+    flushes to [flow]. Data is automatically flushed when [fn] returns.
+
+    This is a thin wrapper around {!Eio.Buf_write.with_flow}.
+
+    {b Note:} This function creates an internal switch and may cause issues
+    with nested fibers. Consider using {!write_and_flush} instead. *)
+
+val write_and_flush : ?initial_size:int -> _ Eio.Flow.sink ->
+  (Eio.Buf_write.t -> unit) -> unit
+(** [write_and_flush flow fn] runs [fn writer] where [writer] is a buffer,
+    then serializes all written data to a string and copies it to [flow].
+
+    Unlike {!with_flow}, this does not create a nested switch and is safe
+    to use in complex fiber hierarchies. The tradeoff is that the entire
+    request is buffered in memory before being written. *)
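The Host-header behaviour that `request_headers` documents follows the usual HTTP rule: include the port only when it differs from the scheme's default (443 for https, 80 for http). A stdlib-only sketch of that rule (hypothetical `host_header` helper, not part of the library's API):

```ocaml
(* Hypothetical helper mirroring the default-port rule for the Host header:
   omit :443 for https and :80 for http, keep any other explicit port. *)
let host_header ~scheme ~host ~port =
  match port, scheme with
  | Some 443, "https" | Some 80, "http" -> host
  | Some p, _ -> host ^ ":" ^ string_of_int p
  | None, _ -> host
```

Omitting the default port matters in practice: some origin servers and caches compare Host values byte-for-byte, so `example.com` and `example.com:443` may be treated as different virtual hosts.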
lib/one.ml (+6 −5)
···
     headers
   in
 
-  (* Convert body to string for sending *)
-  let request_body_str = Option.fold ~none:"" ~some:Body.Private.to_string body in
+  (* Get request body, defaulting to empty *)
+  let request_body = Option.value ~default:Body.empty body in
 
   (* Track the original URL for cross-origin redirect detection *)
   let original_uri = Uri.of_string url in
···
     ~timeout ~verify_tls ~tls_config ~min_tls_version in
 
   (* Build expect_100 config *)
-  let expect_100_config = Http_client.{
+  let expect_100_config = Http_types.{
     enabled = expect_100_continue;
     threshold = expect_100_continue_threshold;
     timeout = Option.bind timeout Timeout.expect_100_continue |> Option.value ~default:1.0;
···
     Http_client.make_request_100_continue_decompress
       ~expect_100:expect_100_config
       ~clock
-      ~method_:method_str ~uri:uri_to_fetch
-      ~headers:headers_for_request ~body_str:request_body_str
+      ~sw
+      ~method_ ~uri:uri_to_fetch
+      ~headers:headers_for_request ~body:request_body
       ~auto_decompress flow
   in
 
lib/requests.ml (+6 −8)
···
   in
 
   (* Build expect_100_continue configuration *)
-  let expect_100_config = Http_client.{
+  let expect_100_config = Http_types.{
     enabled = expect_100_continue;
     threshold = expect_100_continue_threshold;
     timeout = Timeout.expect_100_continue timeout |> Option.value ~default:1.0;
···
     base_headers
   in
 
-  (* Convert body to string for sending *)
-  let request_body_str = match body with
-    | None -> ""
-    | Some b -> Body.Private.to_string b
-  in
+  (* Get request body, defaulting to empty *)
+  let request_body = Option.value ~default:Body.empty body in
 
   (* Helper to extract and store cookies from response headers *)
   let extract_cookies_from_headers resp_headers url_str =
···
     Http_client.make_request_100_continue_decompress
       ~expect_100:t.expect_100_continue
       ~clock:t.clock
-      ~method_:method_str ~uri:uri_to_fetch
-      ~headers:headers_with_cookies ~body_str:request_body_str
+      ~sw:t.sw
+      ~method_ ~uri:uri_to_fetch
+      ~headers:headers_with_cookies ~body:request_body
       ~auto_decompress:t.auto_decompress flow
   )
   in
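Taken together, the serializer produces a header section in the familiar HTTP/1.1 wire format: request line, one `name: value` line per header, and a terminating blank line. A stdlib-only sketch of that layout (hypothetical `header_section` helper, with `Buffer` standing in for `Eio.Buf_write`):

```ocaml
(* Sketch of the header section layout written by the serializer:
   request line, header lines, terminating blank line. *)
let header_section ~method_ ~target ~headers =
  let buf = Buffer.create 256 in
  Buffer.add_string buf (Printf.sprintf "%s %s HTTP/1.1\r\n" method_ target);
  List.iter
    (fun (name, value) ->
       Buffer.add_string buf (Printf.sprintf "%s: %s\r\n" name value))
    headers;
  (* blank line separates headers from the body *)
  Buffer.add_string buf "\r\n";
  Buffer.contents buf
```

This also shows why the refactor above passes a structured `Body.t` rather than a pre-rendered string: the header section can be sent on its own (as in the 100-continue flow), with the body streamed or chunk-encoded afterwards instead of being forced into memory first.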