Make WordPress Core

Opened 5 years ago

Last modified 5 years ago

#49634 new feature request

Performance Benchmarks for REST API

Reported by: mnelson4 Owned by:
Milestone: Awaiting Review Priority: normal
Severity: normal Version: 5.4
Component: REST API Keywords: dev-feedback
Focuses: performance Cc:

Description

We'd like to keep track of how fast, or slow, REST API requests are (and how they've changed as the code has changed). A bit like https://make.wordpress.org/core/2020/03/11/whats-new-in-gutenberg-11-march/ under "Performance Benchmark".

Are there any tools that would help with this? It would be great if the benchmarks were part of a test suite or the release process. This idea was mentioned in passing on https://wordpress.slack.com/archives/C02RQC26G/p1584037020137500

Change History (9)

#1 @mnelson4
5 years ago

I asked @youknowriad in Slack about how they're computing Gutenberg's performance, and he pointed me to https://github.com/WordPress/gutenberg/blob/master/packages/e2e-tests/specs/performance/performance.test.js.
The gist of it is this: they use Puppeteer and just track how long things take themselves. No magic build tool or anything.

I also asked a wider web developer community on Reddit and have gotten crickets so far: https://www.reddit.com/r/webdev/comments/fhs392/how_to_add_performance_benchmarks_like_automated/

I had hoped to find a tool that would do something like:

  • set a universal unit that indicates performance regardless of the current system; e.g. "1 foobar = the time it takes for($i=0;$i<1000;$i++){} to execute on the current system" (see the sketch below)
  • let you define tests to benchmark, whose times would be reported in "foobars"
  • report results not so much as "pass" or "fail" but as a range between "crazy fast" and "very poor"
  • could maybe be combined with a CI service like Travis for even more standardized results

But so far, no luck.
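
For illustration, a minimal sketch of the "foobar" idea in plain PHP (foobar_unit() is a made-up name, and a real version would need a much heavier reference workload, repeated and averaged, to be stable):

  <?php
  // "1 foobar" = how long the reference loop takes on this machine.
  function foobar_unit() {
      $start = microtime( true );
      for ( $i = 0; $i < 1000; $i++ ) {
          // Intentionally empty: this loop *is* the reference workload.
      }
      return microtime( true ) - $start;
  }

  // Time an arbitrary piece of code and report it in foobars.
  $unit  = foobar_unit();
  $start = microtime( true );
  usleep( 5000 ); // stand-in for the code being benchmarked
  $elapsed = microtime( true ) - $start;

  printf( "Result: %.1f foobars\n", $elapsed / $unit );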

Right now, I'm leaning towards creating some PHPUnit tests and timing how long they take to execute. If anyone else thinks the idea of setting a universal unit ("foobar") has merit, I'd like to try it.
We'd probably print the results to the console, something like the sketch below.
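
A rough sketch of what I mean, assuming the core PHPUnit test framework is loaded (the class name and output format are just illustrative):

  <?php
  // Rough benchmark sketch: create content, time an internal REST
  // dispatch, and print the result to the console.
  class REST_Posts_Benchmark extends WP_UnitTestCase {

      public function test_list_100_posts() {
          self::factory()->post->create_many( 100 );

          $request = new WP_REST_Request( 'GET', '/wp/v2/posts' );
          $request->set_param( 'per_page', 100 );

          $start    = microtime( true );
          $response = rest_do_request( $request );
          $elapsed  = microtime( true ) - $start;

          // Could divide by foobar_unit() from the earlier sketch instead.
          fwrite( STDERR, sprintf( "GET /wp/v2/posts (100 posts): %.4fs\n", $elapsed ) );

          $this->assertSame( 200, $response->get_status() );
      }
  }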

I'd appreciate feedback on these thoughts.

#2 @mnelson4
5 years ago

Crickets on Stack Overflow too (except one blessed soul who downvoted my question; I hope that's because they think I phrased it badly, or because they just downvote everything mentioning WordPress, not because it's not a worthwhile endeavor). See https://stackoverflow.com/questions/60676950/how-to-add-performance-benchmarks-like-automated-tests-to-a-project

#3 @mnelson4
5 years ago

Here’s a Node benchmarking tool for checking the performance of an endpoint: https://github.com/alexfernandez/loadtest
That seems useful for testing a single site and how it handles load; I’m not sure if that’s what we’d want for the project as a whole.
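
For reference, a typical invocation looks something like this (the -n and -c flags are from the loadtest README; the URL is a placeholder for a local test site):

  npx loadtest -n 1000 -c 10 http://localhost:8889/wp-json/wp/v2/posts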

#4 @mnelson4
5 years ago

So @kadamwhite, from what I gather, there's no industry standard for integrating performance benchmarks into a project's testing process. They're usually just built in an ad-hoc way. Do you have any thoughts on what you're envisioning, or how to get there?

This ticket was mentioned in Slack in #core-restapi by mnelson4. View the logs.
5 years ago

This ticket was mentioned in Slack in #core-restapi by mnelson4. View the logs.
5 years ago

#7 @mnelson4
5 years ago

In Slack, @kadamwhite said:

The two options I was considering were: either something that we could run manually, perhaps against a VM or a designated testing target site each cycle; or, more interesting although certainly more overhead, something that would actually run in CI for more consistency
I think the landscape of tools that you turned up matches what I was looking at
I’d say maybe let’s start with the first option, and build a list of requests that we can run in a somewhat manual fashion against a local VM target; a series of those runs can be averaged to get a metric for a given release cycle. Building something with Puppeteer or Node that would run within CI can then be built on top of that, once we have a base set of cases we think are valid and representative.
I am also hoping to run some flame graph tests in the next few days using some new features in our Altis virtual environments to get additional metrics

But either way, the next step is to build a set of representative requests, like “100 posts,” “10 posts with embedded data,” etc., and figure out how to get rough coverage across the API.

I would think, anyway. Happy to be challenged; as you noted, it’s annoying that there’s so weirdly little prior art :)

Then I asked if we’d want to set up a server and send actual HTTP requests to it, or just run tests in PHP with PHPUnit. He said:

since we'd need a database to be able to fulfill most requests, I think standing up a container or VM and sending real requests had been my instinct. It'd conflate our results with the WP initialization cycle, but that's more realistic

So I’m thinking of actually trying to piggyback off the Gutenberg team’s performance tests, but pointing them at the REST API endpoints instead... we’ll see how that goes...
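
In the meantime, a harness for the "representative requests" idea could start as small as this plain-PHP sketch (the base URL and the request list are placeholders for whatever set we settle on):

  <?php
  // Send real HTTP requests against a local test site and average the
  // timings over several runs, as discussed above.
  $base     = 'http://localhost:8889/wp-json';
  $requests = array(
      '100 posts'                   => '/wp/v2/posts?per_page=100',
      '10 posts with embedded data' => '/wp/v2/posts?per_page=10&_embed=1',
  );
  $runs = 10;

  foreach ( $requests as $label => $path ) {
      $total = 0.0;
      for ( $i = 0; $i < $runs; $i++ ) {
          $ch = curl_init( $base . $path );
          curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
          $start = microtime( true );
          curl_exec( $ch );
          $total += microtime( true ) - $start;
          curl_close( $ch );
      }
      printf( "%s: %.4fs average over %d runs\n", $label, $total / $runs, $runs );
  }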

#8 @mnelson4
5 years ago

@kadamwhite suggested https://jmeter.apache.org/ for benchmarking.
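
For what it's worth, JMeter test plans can also run headless from the command line, which would suit a CI step later (the .jmx filename here is hypothetical):

  jmeter -n -t rest-api-benchmarks.jmx -l results.jtl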

This ticket was mentioned in Slack in #core-restapi by mnelson4. View the logs.
5 years ago
