About the HCL result in Table 3 #3
Thank you for your attention to our work! You can find an explanation of this discrepancy by looking at the historical versions of the HCL paper uploaded to arXiv by its authors. So far, there are six versions of HCL. The paper submission deadline of CVPR 2022 was 2021.11.16, and NeurIPS 2021 took place on 2021.12.7-2021.12.10. So, when we submitted our paper, HCL had not yet been officially published, and its authors had only uploaded v1, v2, and v3 to arXiv, so we compared against the results reported in the v3 version. I'm sorry for the late reply; I haven't checked the repository recently. I hope the above answer is helpful to you.
As far as I know, Foggy-Cityscapes provides three fog levels (0.005, 0.01, and 0.02). Of these, 0.02, ALL (all three fog levels combined), and 0.01 are the ones mainly used by researchers. The fog level used in SFOD (33.5% on Cityscapes → Foggy-Cityscapes) is ALL, whereas the authors of HCL did not state which fog level they used. In our paper, we used the training and validation images with a fog level of 0.02 for training and testing (see the sketch below). I suspect that the large differences in experimental results across papers are caused by the use of different fog levels. In the future, we will continue to delve into source-free object detection. We welcome your continued interest in our future work, thank you!
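To make the setting concrete, here is a minimal Python sketch of restricting a Foggy-Cityscapes split to a single fog level. It assumes the usual Foggy-Cityscapes filename convention, where the attenuation coefficient is encoded as a `beta_<value>` suffix; the directory path and helper name below are illustrative, not part of our released code.

```python
# Minimal sketch (assumed filename convention): keep only the beta = 0.02
# fog level from a Foggy-Cityscapes split. Each Cityscapes image is rendered
# at three attenuation coefficients (beta = 0.005, 0.01, 0.02), which appear
# in the filename, e.g. "..._leftImg8bit_foggy_beta_0.02.png".
from pathlib import Path


def collect_foggy_images(root: str, beta: str = "0.02") -> list[Path]:
    """Return all Foggy-Cityscapes images rendered at the given fog level."""
    pattern = f"*_leftImg8bit_foggy_beta_{beta}.png"
    return sorted(Path(root).rglob(pattern))


# Example: build the image list for the 0.02-only setting (paths are hypothetical).
train_images = collect_foggy_images("foggy_cityscapes/leftImg8bit_foggy/train", beta="0.02")
print(f"{len(train_images)} training images at fog level 0.02")
```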
Thank you for your excellent reply!
I guess so.
We find that in Table 3 (Detection results on Cityscapes → Foggy-Cityscapes), the mAP of HCL (34.4) differs from the result reported in the original paper [1] (39.7, Table 4). We would like to know why the results in these two papers are different. Thank you very much.
[1] Huang J, Guan D, Xiao A, et al. Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data. Advances in Neural Information Processing Systems, 2021, 34: 3635-3649.