Opened 3 years ago
Last modified 2 years ago
#53737 assigned enhancement
Create a way to autogenerate end-to-end test code from manual testing steps for WordPress core
| Reported by: | psykro | Owned by: | lucatume |
|---|---|---|---|
| Milestone: | Future Release | Priority: | normal |
| Severity: | normal | Version: | |
| Component: | Build/Test Tools | Keywords: | needs-patch |
| Focuses: | | Cc: | |
Description
As part of the proposal to bring end-to-end (e2e) testing to WordPress core, we would like to find a way to autogenerate end-to-end tests from manual testing steps.
WordPress uses the Jest test framework for its e2e tests, and Jest tests are written in JavaScript. Like most testing frameworks, it's built for developers, who write tests that run against the front end, typically via a decoupled browser instance.
However, this means that only those familiar with writing JavaScript code would be able to create new or edit existing tests. This creates a barrier to entry for those who would like to contribute to testing efforts but are not coders.
Traditionally, front-end testing has been performed by contributors following a manual process (i.e., physically filling in fields, clicking on buttons, etc.). It would therefore make sense to find a way for these testers to keep contributing to the e2e test suite.
We would like to find a way to autogenerate e2e tests, to somehow record and save the process of manual testing and convert that into JavaScript code that is compatible with the Jest framework.
This initiative aims to empower any contributor to contribute to the e2e testing efforts without knowing how to code an automated test.
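As one illustrative sketch of the idea, a recorder could serialize manual testing steps as plain data, and a converter could translate that data into Jest-compatible code. The step format and the `stepToCode`/`generateTest` functions below are hypothetical, not an existing WordPress tool:

```javascript
// Hypothetical recorder output: each step captured during manual testing.
const recordedSteps = [
  { action: 'navigate', url: '/wp-admin/post-new.php' },
  { action: 'type', selector: '#title', text: 'Hello e2e' },
  { action: 'click', selector: '#publish' },
];

// Translate one recorded step into a line of browser-automation code.
function stepToCode(step) {
  switch (step.action) {
    case 'navigate':
      return `await page.goto(baseUrl + '${step.url}');`;
    case 'type':
      return `await page.type('${step.selector}', '${step.text}');`;
    case 'click':
      return `await page.click('${step.selector}');`;
    default:
      throw new Error(`Unknown action: ${step.action}`);
  }
}

// Wrap the translated steps in a Jest test() block.
function generateTest(name, steps) {
  const body = steps.map((s) => '  ' + stepToCode(s)).join('\n');
  return `test('${name}', async () => {\n${body}\n});`;
}

console.log(generateTest('publish a post', recordedSteps));
```

The generated code would still need human review before being committed, which matches the concerns raised in the comments below about flakiness and careful test crafting.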
Change History (9)
This ticket was mentioned in Slack in #core-test by hellofromtonya.
3 years ago
#3 in reply to: ↑ 2
3 years ago
@youknowriad Is there not a space for both to co-exist? What do I mean?
Imagine:
- a community of manual testers who create manual step-by-step instructions and workflows
- AND tooling that codifies these instructions/workflows (when they are ready to be codified)
- AND a community of qualified e2e automation developers who know core, know how to maintain the tests, and perform the code reviews and potential tweaks/refinement for the tests
- AND tool builders who find patterns and figure out how to improve the codifying tooling, raising the quality of the tests being generated
In your experience with building and nurturing e2e test suites, could this multi-tiered approach yield a pipeline of e2e testing? Could it alleviate your concerns about nurturing flaky tests?
#4
3 years ago
@hellofromTonya That list seems decent; the most important thing for me is:
AND a community of qualified e2e automation developers who know core, know how to maintain the tests, and perform the code reviews and potential tweaks/refinement for the tests
For me, though, the second point has very little value:
AND tooling that codifies these instructions/workflows (when they are ready to be codified)
Of course, I won't stop contributors from exploring creative solutions that prove me wrong, but I think it's important not to put the cart before the horse: that is, adding hundreds of autogenerated e2e tests with a high potential for flakiness before we have a solid set of qualified e2e test developers and habits built around these tests by core contributors. First, let's get core contributors familiar with these tests: how to write them (which is the simplest part), how to debug them, and how to build instincts around solving flakiness.
#5
3 years ago
- Milestone changed from Awaiting Review to 5.9
- Owner set to lucatume
- Status changed from new to assigned
Notes from last week's team chat:
Process:
- Human tester does the manual testing steps
- Tooling records those steps
- Tooling converts the steps into e2e test code
- Test code is attached to the ticket
- Code is reviewed by a human who is skilled in e2e tests and the thing under test
- Once approved, a core committer commits the test code into the project
- Skilled e2e test humans maintain the tests (including tuning and refinement)
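The recording step in the process above could, as one hypothetical approach, hook into browser events and append them to a step log that a converter later turns into test code. A minimal sketch (the `StepRecorder` class and its method names are assumptions for illustration; a real tool would listen to actual DOM events rather than receive plain objects):

```javascript
// Minimal recorder sketch: collects user actions as serializable steps.
// In a real tool these methods would be wired to DOM event listeners;
// here they are called directly so the logic can run anywhere.
class StepRecorder {
  constructor() {
    this.steps = [];
  }
  // Record a click on an element identified by a CSS selector.
  onClick(selector) {
    this.steps.push({ action: 'click', selector });
  }
  // Record text typed into a field.
  onInput(selector, text) {
    this.steps.push({ action: 'type', selector, text });
  }
  // Serialize the session so it can be attached to a Trac ticket.
  toJSON() {
    return JSON.stringify(this.steps, null, 2);
  }
}

const recorder = new StepRecorder();
recorder.onInput('#title', 'Hello e2e');
recorder.onClick('#publish');
console.log(recorder.toJSON());
```

Serializing to JSON rather than emitting code directly keeps the recorded session reviewable and re-generatable as the conversion tooling improves.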
Pilot initiative:
- Build a prototype -> @lucatume is working on this
- Start small with a handful of impactful e2e tests
- Get those tests stable
- Learn
- Iterate
Changing ownership to Luca for the prototype. And moving it into the 5.9 milestone.
This ticket was mentioned in Slack in #core-test by hellofromtonya.
3 years ago
This ticket was mentioned in Slack in #core-test by netweb.
3 years ago
#8
3 years ago
- Keywords needs-patch added
- Milestone changed from 5.9 to Future Release
Given the time remaining in the 5.9 cycle, punting this to Future Release, as the next release cycle is not yet available. Punting does not mean contribution needs to wait or stop; it just means there's not enough time or momentum for it to land in this cycle.
While this sounds great in principle, I'd like to share some concerns about this: e2e tests require constant monitoring and attention, meaning any subtle change to something potentially unrelated can result in flaky or failing tests. Debugging these flaky tests takes time, and the reason for one failure can be very different from the reason for another. A lot of e2e tests will most likely depend on testing plugins and themes, which need to be coded and can't be generated.
This means that there's no reliable way to generate tests; e2e tests need to be crafted carefully and follow guidelines to reduce the risk of flakiness.
I guess what I'm trying to say is that this is jumping steps. Initially, we should focus on crafting a good set of e2e tests manually, ensure these tests are stable enough, and learn from them.
I think having a way to generate tests might be useful for developers to "bootstrap" their tests, but I don't think we'll get something working without any manual changes. I think the value of autogenerated e2e tests is largely overrated.