chore(performance-issues): Create experimental N+1 API Call detector #90070
Conversation
Copy of the current test file with small edits for the experimental group type
Copy of the current detector file with small edits for the experimental group type
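For anyone skimming the diff, the shape of that copy is roughly the sketch below. It's a self-contained illustration of the pattern rather than the actual code in this PR: the class names, ids, and slugs are stand-ins, and the point is that the detection logic stays identical while only the group type the copy reports under changes.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class GroupType:
    # Toy stand-in for an issue group type; ids and slugs below are made up.
    type_id: int
    slug: str
    released: bool


# Group type the production detector reports under (illustrative values).
N_PLUS_ONE_API_CALLS = GroupType(
    type_id=1010, slug="performance_n_plus_one_api_calls", released=True
)

# Dummy group type for the experiment, kept unreleased so it stays out of
# customers' issue streams while the trial runs.
N_PLUS_ONE_API_CALLS_EXPERIMENTAL = GroupType(
    type_id=1910, slug="performance_n_plus_one_api_calls_experimental", released=False
)


class NPlusOneAPICallsDetector:
    """Stand-in for the existing detector; the real logic is unchanged."""

    group_type = N_PLUS_ONE_API_CALLS

    def on_complete(self) -> None:
        ...  # unchanged detection and problem-reporting logic


class NPlusOneAPICallsExperimentalDetector(NPlusOneAPICallsDetector):
    """The /experiments copy: identical logic, experimental group type."""

    group_type = N_PLUS_ONE_API_CALLS_EXPERIMENTAL
```

Per the PR description, the experimental type stays a dummy, so its output can be compared against the current detector without surfacing to users.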
fyi: Format on save did this
Codecov Report
✅ All tests successful. No failed tests found.

Additional details and impacted files:

@@            Coverage Diff             @@
##           master   #90070      +/-   ##
==========================================
  Coverage    87.72%   87.72%
==========================================
  Files        10247    10248       +1
  Lines       578341   578601     +260
  Branches     22693    22691       -2
==========================================
+ Hits        507347   507606     +259
- Misses       70551    70552       +1
  Partials       443      443
Force-pushed 8f94178 to ecd5944
Force-pushed ecd5944 to bec8253
Force-pushed bec8253 to 30bf665
""" | ||
|
||
__slots__ = ["stored_problems"] | ||
type = DetectorType.N_PLUS_ONE_API_CALLS |
Do we want to create a new DetectorType for each of the experiments? Or is that just adding bloat that we'll need to delete later once we replace the original?
Good catch! We don't need to for this one because the DetectorType is only used for settings (the ones in project > performance > [Detector Name]). It'd make sense to make a new one if we wanted to trial any new values or defaults, but we don't have to in this case.
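To make that concrete, here's a rough, self-contained illustration of the point above (simplified stand-ins, not Sentry's actual internals; the settings keys and values are placeholders): the DetectorType only acts as the key into the per-project detection settings, so the stable and experimental detectors read the same thresholds.

```python
from enum import Enum


class DetectorType(Enum):
    # Simplified stand-in for the real enum.
    N_PLUS_ONE_API_CALLS = "n_plus_one_api_calls"


def get_detection_settings(project_id: int) -> dict[DetectorType, dict]:
    # Placeholder for the real per-project settings lookup
    # (project > performance > N+1 API Calls); values are made up.
    return {
        DetectorType.N_PLUS_ONE_API_CALLS: {
            "detection_enabled": True,
            "total_duration_threshold": 300,  # ms
        }
    }


class NPlusOneAPICallsDetector:
    settings_key = DetectorType.N_PLUS_ONE_API_CALLS

    def __init__(self, project_id: int) -> None:
        # The DetectorType only matters here, as the key into the settings.
        self.settings = get_detection_settings(project_id)[self.settings_key]


class NPlusOneAPICallsExperimentalDetector(NPlusOneAPICallsDetector):
    # Same settings_key, so the experiment inherits the exact same
    # user-facing settings; a new DetectorType would only be needed to
    # trial different values or defaults.
    pass
```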
Thanks for putting the effort into detailing how we're going to trial these experiments!
Reverting as this is breaking CI.
PR reverted: 657d08b
For context, I'm auditing the current performance issue detectors and will likely be making some changes to them. Risky changes like touching fingerprinting should probably run silently beforehand to ensure we catch false positives or fingerprint explosion (see guidance).
In this PR, I'm creating a structure to keep these experiments colocated and easier to review, since I'll be working on different detectors in parallel.
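As a sketch of what that silent phase could look like, here's a self-contained example. The flag, the URL-based grouping key, and the threshold are all assumptions for illustration, not code from this PR: the experimental detector records the problems and fingerprints it would have created, and only creates issues once a rollout flag is flipped.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class SilentRunReport:
    problems: list[str] = field(default_factory=list)        # always recorded
    issues_created: list[str] = field(default_factory=list)  # only when enabled
    fingerprints: set[str] = field(default_factory=set)      # watch for explosion


def detect_n_plus_one_api_calls(
    spans: list[dict],
    create_issues: bool,        # stays False during the silent run
    count_threshold: int = 10,  # illustrative default
) -> SilentRunReport:
    report = SilentRunReport()

    # Group HTTP client spans by a normalized URL so repeated calls collapse.
    calls_by_url: dict[str, int] = defaultdict(int)
    for span in spans:
        if span.get("op") == "http.client":
            url = span.get("description", "").split("?")[0]
            calls_by_url[url] += 1

    for url, count in calls_by_url.items():
        if count >= count_threshold:
            fingerprint = f"1-experimental-n-plus-one-api-calls-{url}"
            report.problems.append(url)
            report.fingerprints.add(fingerprint)
            if create_issues:
                # In the real pipeline this is where an occurrence would be
                # sent off; during the silent run we only record it.
                report.issues_created.append(fingerprint)

    return report
```

During the silent run, the problem counts and fingerprint set can be compared against the stable detector's output to spot false positives or fingerprint explosion before anything is surfaced.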
Going forward, my workflow will probably be the following:

- Copy the current detector into /experiments and set up a dummy group type. This copy will be identical to the current detector. (1st PR, this one)
- Enable the sentry createissue flag (docs) to run in prod (3rd PR, in options-automator)

The reason I'm being so formal about all of this is that I'll be parallelizing these experiments, so I gotta keep this straight for myself as well 👍