Related to #386, I would like to open a discussion on evaluating the performance of basic functions.
Here are some facts/ideas.
Existing tools
PkgBenchmark.jl: A nice tool for creating a suite of benchmarks, with functions for generating nice reports on performance variations. You can even create a Markdown report that could be posted on the PR (see the sketch after this list).
NanoSoldier.jl: The tool used by Julia itself to evaluate the performance of the language. I don't believe it can be adapted easily to our setup, but I have not checked the details.
Github action benchmark: Given a benchmark output, it will also create a report and can post comments directly.
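To make the first option concrete, here is a minimal sketch of what a suite could look like, assuming the usual `benchmark/benchmarks.jl` convention that PkgBenchmark.jl expects (the kernel choice and input sizes are just placeholders):

```julia
# benchmark/benchmarks.jl -- PkgBenchmark.jl looks for a BenchmarkGroup named SUITE here.
using BenchmarkTools
using KernelFunctions

const SUITE = BenchmarkGroup()

k = SqExponentialKernel()
x = [rand(10) for _ in 1:100]  # placeholder: 100 points in 10 dimensions

# Adding a new benchmark is one line per entry, which keeps the barrier low.
SUITE["kernelmatrix"] = @benchmarkable kernelmatrix($k, $x)
```

Comparing a branch against master and producing the Markdown report mentioned above would then be something like:

```julia
using PkgBenchmark, KernelFunctions

results = judge(KernelFunctions, "master")  # current state vs. master
export_markdown("report.md", results)
```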
Potential issues
Benchmarks are highly dependent on the machine used; if we use GitHub's runners, we might get a large variance in the results depending on the time of day, etc.
We cannot benchmark everything, which means we need to restrict ourselves to, say, the most heavily used functions/kernels.
Adding benchmarks can be a lot of work; can we find a framework where adding new tests is smooth?
Other ideas
Not all PRs are performance-critical, so we should be able to call whatever tool we use only when needed. Maybe every time for master, and on demand for some PRs.
What do we want to benchmark? Only `pairwise` and `kernelmatrix`, or also the performance of the gradients on them (see the sketch below)?
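On that last point, gradient benchmarks could be added to the same `SUITE` along these lines (a sketch assuming Zygote as the AD backend and a placeholder scalar objective):

```julia
using BenchmarkTools, Zygote, KernelFunctions

k = SqExponentialKernel()
X = rand(10, 100)  # placeholder: 100 points in 10 dimensions, stored as columns

# Time the reverse-mode gradient of a scalar function of the kernel matrix
# with respect to the inputs. (SUITE is the BenchmarkGroup defined above.)
SUITE["kernelmatrix_grad"] = @benchmarkable Zygote.gradient(
    X -> sum(kernelmatrix($k, ColVecs(X))), $X
)
```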