
PYTHON-5055 - Convert test_client.py to unittest #2074


Closed · wants to merge 30 commits

Conversation

@NoahStapp (Contributor) commented Jan 23, 2025

Main points of note:

  1. All stateful unittest methods have been converted to pytest fixtures.
  2. Every async test and fixture runs on a single event loop that lives for the entire test session. This event-loop scope (controlled by loop_scope) is separate from the pytest caching scope for fixtures (controlled by scope); see the sketch after this list.
  3. ClientContext has been kept as a metadata store for our tests, served by a session-scoped fixture.
  4. unit and integration pytest marks have been added. Going forward, all test classes should be marked with the appropriate one.
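
A minimal sketch of the loop_scope/scope distinction in point 2, assuming pytest-asyncio >= 0.24; the fixture and helper names are illustrative, not taken from this PR:

import pytest
import pytest_asyncio

@pytest_asyncio.fixture(loop_scope="session", scope="function")
async def async_client():
    # loop_scope="session": the fixture body runs on the one event loop
    # shared by the whole test session.
    # scope="function": the fixture value is still created fresh per test.
    client = await make_client()  # hypothetical helper
    yield client
    await client.close()

@pytest.mark.asyncio(loop_scope="session")
async def test_ping(async_client):
    # Runs on the same session-wide loop as the fixture above.
    await async_client.ping()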

@NoahStapp closed this Jan 24, 2025
@NoahStapp reopened this Jan 24, 2025
@blink1073 (Member) commented:

evergreen retry

@sleepyStick (Contributor) left a comment:

I think the Python changes look good to me. (I found things that I needed to change on my PR / test conversions as a result of reviewing this, so that was helpful haha)

@NoahStapp (Contributor, Author) commented Jan 27, 2025

The failing test.test_write_concern.TestWriteConcern.test_invalid is due to the async teardown of our existing unittest suite. I believe this is because the teardown occurs at the very end of the entire test session rather than at the end of the asynchronous tests. I've opened pytest-dev/pytest-asyncio#1052 about the issues with scoping our teardown fixture to the test/asynchronous subpackage.

uv run ${UV_ARGS[*]} pytest "${ASYNC_PYTEST_ARGS[@]}" "--collect-only"
collected=$?
set -o errexit
# If we collected at least one async test, run all collected tests
Member:

What's the benefit of using the "collect then run" approach? Wouldn't it be simpler to run pytest even if all tests are skipped?

Contributor (Author):

Good point. My original intention was to avoid the error handling that would require, but this approach doesn't avoid that either.
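
A sketch of that simpler route, assuming the only non-failure status to tolerate is pytest's documented exit code 5 ("no tests were collected"); variable names follow the snippet above:

set +o errexit
uv run ${UV_ARGS[*]} pytest "${ASYNC_PYTEST_ARGS[@]}"
status=$?
set -o errexit
# Exit code 5 means pytest collected no tests; treat that as success.
if [[ $status -ne 0 && $status -ne 5 ]]; then
    exit $status
fi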

test/utils.py (outdated)

@@ -826,7 +827,7 @@ def lazy_client_trial(reset, target, test, get_client):
     with frequent_thread_switches():
         for _i in range(NTRIALS):
             reset(collection)
-            lazy_client = get_client()
+            lazy_client = client
Member:

I don't think this test is the same anymore. It's supposed to create a new client each iteration, but this new code looks like it reuses the same client.

Contributor (Author):

Good catch, fixed.
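
The fix presumably restores a fresh client per trial, i.e. the pre-change loop body:

    for _i in range(NTRIALS):
        reset(collection)
        lazy_client = get_client()  # a new client on every iteration again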

@@ -166,7 +166,8 @@ async def get_async_mock_client(
         standalones, members, mongoses, hello_hosts, arbiters, down_hosts, *args, **kwargs
     )

-    await c.aconnect()
+    if "connect" not in kwargs or "connect" in kwargs and kwargs["connect"]:
+        await c.aconnect()
Member:

Suggest if kwargs.get("connect", True):
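
The two forms are equivalent because "and" binds tighter than "or", so the original condition already defaults to connecting when "connect" is absent; a short restatement:

# ("connect" not in kwargs) or (("connect" in kwargs) and kwargs["connect"])
# is True when the key is missing, and otherwise tests kwargs["connect"], i.e.:
if kwargs.get("connect", True):
    await c.aconnect()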



@pytest.mark.usefixtures("require_integration")
@pytest.mark.integration
Member:

Is there a simple way to avoid needing to mark the test as "integration" twice?

Contributor (Author):

@pytest.mark.usefixtures("require_integration") isn't actually applying a mark: it declares the require_integration fixture as a requirement of the decorated test.

pytest -m require_integration won't select any tests, but pytest -m integration will.
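
A minimal conftest.py sketch of how such a requirement fixture might look; the skip condition and the client_context attribute are illustrative, not the PR's actual implementation:

import pytest

@pytest.fixture(scope="session")
def require_integration(client_context):  # hypothetical session-scoped fixture
    # Skip (rather than fail) any requesting test when no live server
    # connection is available.
    if not client_context.connected:
        pytest.skip("integration test requires a running server")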

Member:

I see this comment from the PR description is related:

    unit and integration pytest marks have been added. Going forward, all test classes should be marked with the appropriate one.

If integration tests need to be tagged with require_integration, what's the benefit of using the unit vs integration marks?

Contributor (Author):

Using the integration and unit marks allows us to run only that set of tests. If we want to add additional subsets of tests that cross suites in the future, having those marks will be a good first step.
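
Illustrative invocations of that selection, using standard pytest -m semantics (not commands from the PR):

pytest -m unit               # only tests marked @pytest.mark.unit
pytest -m integration        # only tests marked @pytest.mark.integration
pytest -m "not integration"  # everything except integration tests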

PYTEST_ARGS="-m $TEST_SUITES $PYTEST_ARGS"
# Workaround until unittest -> pytest conversion is complete
if [[ "$TEST_SUITES" == *"default_async"* ]]; then
    ASYNC_PYTEST_ARGS=("-m asyncio" "--junitxml=xunit-results/TEST-asyncresults.xml" "${PYTEST_ARGS[@]}")
Member:

What about coverage? Does that pick up both test runs automatically?

Contributor (Author):

[comment body not captured in this export]

Member:

This shows it's not working as expected. The second coverage run will overwrite the .coverage output file by default: https://coverage.readthedocs.io/en/7.6.10/cmd.html#data-file

@NoahStapp (Contributor, Author) commented Jan 28, 2025:

Good catch. Does it make sense to use --append on the second run to keep the single file?

Contributor (Author):

The alternative would be to use a separate file with COVERAGE_FILE and then coverage combine the two.
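
A sketch of that alternative; the file suffixes are illustrative. coverage.py honors the COVERAGE_FILE environment variable, and coverage combine merges .coverage.* data files found in the current directory:

COVERAGE_FILE=.coverage.sync coverage run -m pytest "${PYTEST_ARGS[@]}"
COVERAGE_FILE=.coverage.async coverage run -m pytest "${ASYNC_PYTEST_ARGS[@]}"
coverage combine   # merges the .coverage.* files into a single .coverage
coverage report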

@NoahStapp closed this Jan 30, 2025