Strands vs LangGraph: Benchmark results didn’t match my expectations — sanity check? #1391
T-Rishi444 started this conversation in General
I wanted to sanity-check some results I got while comparing Strands with LangGraph. I ran a small apples-to-apples benchmark: the same set of questions (ranging from simple to complex) and the same evaluation criteria for both frameworks.
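For concreteness, the harness followed the pattern sketched below. This is a simplified, framework-agnostic sketch: `run_strands` and `run_langgraph` are hypothetical placeholder callables standing in for the real agent invocations, not actual Strands or LangGraph APIs.

```python
import time

def bench(name, run, questions):
    """Run each question through an agent callable and record wall-clock latency."""
    results = []
    for q in questions:
        start = time.perf_counter()
        answer = run(q)
        elapsed = time.perf_counter() - start
        results.append({
            "framework": name,
            "question": q,
            "latency_s": elapsed,
            "answer": answer,
        })
    return results

# Placeholder runners -- in the real benchmark these would invoke a
# Strands agent and a LangGraph graph, respectively (names are hypothetical).
def run_strands(question):
    return f"strands answer to: {question}"

def run_langgraph(question):
    return f"langgraph answer to: {question}"

questions = ["What is 2 + 2?", "Summarize the plot of Hamlet."]
all_results = bench("strands", run_strands, questions) + \
              bench("langgraph", run_langgraph, questions)

for r in all_results:
    print(f'{r["framework"]:10s} {r["latency_s"] * 1000:8.3f} ms  {r["question"]}')
```

Token counts were collected the same way per question, so both frameworks were scored on identical inputs and identical criteria.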
What I observed:
This made me wonder whether Strands is generally considered "better" mainly because of validation, auditability, and structured execution, rather than raw latency or token efficiency.
I’d love to understand:
I’m genuinely trying to learn here and make sure my assumptions and setup are sound. Any insight would be appreciated!