Fast and Easy Infinite Neural Networks in Python
Updated Mar 1, 2024 · Jupyter Notebook
CVPR 2024 - Improved Implicit Neural Representation with Fourier Reparameterized Training
ICML 2025 - Inductive Gradient Adjustment for Spectral Bias in Implicit Neural Representations
A collection of existing literature on training-data analysis.
Official repository for "FOCUS: First Order Concentrated Updating Scheme"
Code for "Effect of equivariance on training dynamics"
Official repository for the EMNLP 2024 paper "How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics"
Source code for "Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies"
Code for "Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics"
Supplementary code for the paper 'Dynamic Rescaling for Training GNNs' to be published at NeurIPS 2024
A unified framework for attributing model components, data, and training dynamics to model behavior.
Supplementary code for the paper 'Are GATs Out of Balance?' to be published at NeurIPS 2023