summary#
implement a comprehensive, modular benchmarking suite to evaluate snarl's performance against industry standards
shortcoming#
without a standardized, reproducible set of benchmarks, it is difficult to track performance regressions or validate optimization efforts. comparing the framework to established players like express, fastify, hono, and oak requires a structured approach that tests real-world scenarios (routing, json parsing, parameter extraction) under consistent conditions.
proposed solution#
- organize benchmarks by scenario (e.g., routing, parsing, serialization, params). each module should export a standard setup function to ensure consistency across frameworks
- implement harnesses for the major competitors to provide context: express, fastify, hono, oak, and elysia (this set should be enough for now). this ensures our benchmarks aren't run in a vacuum
- standardized scenarios could include plain text, json parsing, dynamic routing, query parameters, and large response payloads
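one possible shape for the per-scenario setup function — a minimal sketch, and all names here (`Scenario`, `setupPlainText`, the field names) are assumptions, not an existing snarl API. the idea is that every scenario module exports the same descriptor so each framework harness can register routes and verify responses identically before timing:

```typescript
// hypothetical descriptor a scenario module would export; field names are
// assumptions for illustration, not an existing snarl interface
interface Scenario {
  name: string;            // identifier used in reports, e.g. "plain-text"
  path: string;            // route each harness registers, e.g. "/users/:id"
  method: "GET" | "POST";  // request method the load generator uses
  body?: string;           // request payload, if the scenario needs one
  expected: string;        // response body checked for correctness pre-run
}

// each scenario module exports a setup() returning its descriptor,
// so every framework harness consumes scenarios the same way
function setupPlainText(): Scenario {
  return {
    name: "plain-text",
    path: "/",
    method: "GET",
    expected: "Hello, World!",
  };
}

const scenario = setupPlainText();
console.log(scenario.name, scenario.path);
```

keeping the descriptor data-only (no framework imports) is what lets the same scenario drive express, fastify, hono, oak, and elysia harnesses without duplication.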
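for the harness side, a real suite would drive each framework over HTTP with a load generator (e.g. autocannon or bombardier); as a rough sketch of the measurement loop only, here is an in-process timer — `bench` and `Handler` are made-up names for illustration:

```typescript
// assumption: a production harness would measure over real HTTP under
// consistent conditions; this in-process loop only illustrates the
// throughput calculation (ops/sec) the suite would report
type Handler = (path: string) => string;

function bench(name: string, handler: Handler, path: string, iterations: number): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    handler(path); // invoke the framework's request handler repeatedly
  }
  const elapsedMs = performance.now() - start;
  return (iterations / elapsedMs) * 1000; // requests per second
}

// stand-in for the plain-text scenario's handler
const plainText: Handler = () => "Hello, World!";
const opsPerSec = bench("plain-text", plainText, "/", 10_000);
console.log(`plain-text: ${opsPerSec.toFixed(0)} ops/sec`);
```

running the same loop against each framework's handler (or, better, its HTTP server) under identical iteration counts is what makes regressions comparable run-to-run.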