wrong results during training and testing Overlook module #10
Whoops! I got it.
I guess that's some kind of deprecated method. So I looked into the first error:
However, due to the large differences between Python 2 and Python 3, the two vgg16 state dicts differ not only in their layer names but also in the structure of the ordered dictionary, so the modification above didn't actually load the pre-trained weights into our model correctly. Referring to utils.py, I noticed a try-except in load_dict:
I finally realized that when load_state_dict fails without strict=False, execution falls into the except branch, and the code there also failed because of the second error above.
It finally worked:
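For anyone hitting the same thing, here is a minimal sketch of the kind of loader that handles both problems (the Python 2 pickle encoding and the mismatched layer names). The checkpoint filename and the "module." remapping are illustrative assumptions, not the repo's actual code:

```python
import torch

def load_pretrained_vgg16(model, path="vgg16_caffe.pth"):
    # encoding="latin1" lets torch.load unpickle a Python 2 checkpoint
    # under Python 3 instead of raising a UnicodeDecodeError.
    checkpoint = torch.load(path, map_location="cpu", encoding="latin1")
    state_dict = checkpoint.get("state_dict", checkpoint)

    # Remap layer names if the checkpoint's keys differ from the
    # model's, e.g. a leftover "module." prefix from DataParallel.
    state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

    # strict=False skips mismatched keys instead of raising, so the
    # layers that do line up still receive the pre-trained weights.
    incompatible = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", incompatible.missing_keys)
    print("unexpected keys:", incompatible.unexpected_keys)
```

Printing the missing and unexpected keys makes it obvious whether the remapping actually matched the backbone layers, rather than silently loading nothing.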
Thanks for your excellent work; let me know if you'd like me to close this issue!
However, the results still differ from the paper, as #7 pointed out, since I used Python 3.
I'm only reaching 24.24 mAP, far short of the 35.8 mAP reported in the paper. It's strange that running Python 3 instead of 2 makes such a noticeable difference. The gap seems more pronounced on the Cityscapes dataset than on others, and I'm unsure how to explain or correct it. When evaluating Pascal to Clipart, the results are reasonably close to the reported mAP.
I've given up working on this repo for now. I'm considering renting GPUs to build the same environment the author described, if it turns out to be necessary for my future work. By the way, I can reproduce normal mAPs in my environment using this repo, on which this LODS repo is based.
Thank you for your reply; I appreciate it. I'll definitely give it a try, although I must admit that my primary interest is source-free domain adaptation for object detection. By the way, if you know of any other methods with promising results and working code, I'd greatly appreciate it if you could share them.
Hi, I was trying to reproduce your work on the city->foggy setting. After training the source model, everything seemed to be OK:
But when I trained and tested the Overlook module, I got very low losses like:
After all iterations, the results were weird:
Have you had a similar experience, or do you know how to fix this?
I only have RTX 3090 resources, so I changed the PyTorch and CUDA versions together, with CUDA >= 11:
My Python version is 3.9.15.
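For what it's worth, the RTX 3090 has compute capability sm_86, which official PyTorch builds only support from CUDA 11.1 onward, so it's worth confirming the installed wheel actually targets it:

```python
import torch

# Verify the installed PyTorch build matches the GPU: an RTX 3090
# (sm_86) needs a CUDA >= 11.1 build of PyTorch.
print(torch.__version__)              # e.g. 1.10.1+cu113
print(torch.version.cuda)             # CUDA toolkit the wheel was built with
print(torch.cuda.is_available())      # should be True
print(torch.cuda.get_device_name(0))  # should show the RTX 3090
```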
Due to the error below:
I modified line 20 of Overlook/utils.py like this:
Thanks in advance for your reply :)