Try to fix georeferencing accuracy #1851
Conversation
…y points directly. 2. Update the odm_georeferencing stage to convert and generate the ply, laz, and textured model in the UTM coordinate system. Some minor changes to use the PDAL Python API instead of the command line tool.
Hey, thanks for the PR @originlake! This is some good work on addressing the accuracy issue. I'm concerned about introducing a topocentric version of all assets; would it not be possible to just do the point-by-point transformation in export_geocoords (and perhaps run bundle adjustment with cameras, GCPs and points fixed after the transformation to address small changes in camera rotations/angles, given that we can only transform coordinates and not rotations)?
I think the 3D coordinate system with a UTM zone is not an accurate 3D Cartesian coordinate system. The horizontal part is a transverse Mercator projection whose warping differs from point to point, so 1 meter on the map may not be exactly 1 meter in the real world; and the vertical part is the ellipsoidal height with respect to the ellipsoidal earth surface, which is also not a simple offset from the topocentric coordinate system. I think doing bundle adjustment in such a coordinate system could lead to worse camera parameter accuracy and hence affect the dense point cloud accuracy. After all, the accuracy of the 3D model itself is good; it's just that the coordinate system conversion is not done accurately.
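To make the map-scale point concrete, here is a minimal sketch (assuming pyproj; the test location and UTM zone are arbitrary) showing that a true 1000 m ground distance does not come out as exactly 1000 m of planar distance in a UTM zone:

```python
# Minimal sketch: transverse Mercator scale distortion in a UTM zone.
# Assumes pyproj; the test location and zone (EPSG:32633) are arbitrary.
import numpy as np
from pyproj import Geod, Transformer

geod = Geod(ellps="WGS84")
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)

# Two points exactly 1000 m apart on the ground, roughly 170 km east of
# the zone's central meridian (15E)
lon0, lat0 = 18.0, 60.0
lon1, lat1, _ = geod.fwd(lon0, lat0, az=90.0, dist=1000.0)

x0, y0 = to_utm.transform(lon0, lat0)
x1, y1 = to_utm.transform(lon1, lat1)
print(f"UTM planar distance: {np.hypot(x1 - x0, y1 - y0):.3f} m")  # != 1000.000
```

The residual (on the order of centimeters per kilometer at this location) grows with distance from the central meridian, which is why a single rigid transform cannot absorb it everywhere.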
I'm not sure to what degree this would affect accuracy, but I have a feeling that it might possibly be better: currently the transformation is done post-SFM/MVS, so while the absolute accuracy is higher, your relative accuracy is probably a bit lower (the code warps the models). Bundle adjustment might be able to find an optimal middle ground between the two. It's something that I would need to test.
IMHO, the problem is not the 3D model being warped and hence inaccurate. This doesn't mean a coordinate in the web-mercator CRS is wrong; it's just a reference that we can use to find the actual location on earth, and when accurate 3D space calculation is needed, we can always convert it back to the topocentric coordinate system. Usually, in order to get good measurement accuracy, a surveyor will choose a coordinate system whose center is defined close to the survey region; some will even create a custom CRS with the origin defined at their site center for best accuracy. They don't use topocentric, as far as I can tell, because topocentric is inconvenient for mapping on the earth's surface.

Same for the geo coordinate exporting: at this stage, it's not about making the model accurate, it's about doing the right conversion to the target CRS so we can still use it to accurately find the exact spot on the earth. If we need to export UTM, we do the correct conversion to UTM; if we need to export to US state plane, we do the correct conversion to US state plane. We don't need to do bundle adjustment each time we export to a different coordinate system. The point location is always unique on earth, but the coordinate representation differs between coordinate systems.

Back to the problem: the 3D model itself is not warped after the conversion; its exact location on earth remains the same, it's just being represented in a warped coordinate system. Doing measurements in such a CRS is inaccurate, but it's the surveyor's responsibility to choose a more accurate CRS if needed. Anyway, this is just theory; at the scale of a normal dataset the errors would probably not be large. We can compare the reprojection errors to see the difference.
Yeah, a bit of a classic problem. Actual reprojection errors are (probably) sufficiently small to be meaningless on most projects of most sizes. But folks who care will want to know that the math is done correctly so they don't have to worry about unanticipated edge cases.
From the Community, I mostly only see this issue raised by folks who have/use RTK GCPs, or have known survey markers they are comparing against and expect survey-grade accuracy with RTK-enabled input data. So, a cross-section of industry, commercial, and academic folks (mostly), and the occasional very savvy home-gamer. I think there is potential for a lot of benefit here.
Indeed; does anyone have a small dataset with GCPs that clearly triggers this problem? |
We have seen the data, so let's hope some of it is readily shareable.
@originlake I've looked more into this. I've implemented Kabsch's algorithm here (pierotofy/OpenSfM@7366eec), which instead computes a rigid transformation by sampling points from the area covering the model (either via GCP locations, or a bbox computed from the camera poses). I tested this on existing datasets with and without GCPs and it seems to work on smaller areas, but I don't have a large dataset to test with, so perhaps you can try replacing the current transformation with it. If this works, then there's no need to keep separate models in topocentric coordinates.
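For reference, the underlying fit is the classic Kabsch construction; here is a generic numpy sketch (not the exact OpenSfM code, and the function name is ours):

```python
# Generic Kabsch rigid fit: find R, t minimizing ||(A @ R.T + t) - B||
# over corresponding 3D point sets A and B. Not the exact OpenSfM code.
import numpy as np

def rigid_fit(A, B):
    """A, B: (N, 3) arrays of corresponding points. Returns R (3, 3), t (3,)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```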
OK, I'll test your updates and report back. Unfortunately I can't share my dataset.
Also, do note I might be wrong here, but it's difficult to try things without a dataset to test with.
I created a colab notebook that can simulate the issue. The idea is to create a plane with a grid of coordinates in the topocentric coordinate system; easting and northing range from -2000 m to 2000 m, and the elevation is set to 1000 m. Every point in the grid is converted to UTM, representing the most accurate conversion, as a baseline. Then different methods (the Kabsch algorithm is used here) can be applied and the errors against the baseline computed. At the bottom, I plotted the errors between the two conversions, with horizontal error and elevation error separated for better visualization. You can see that both the horizontal errors and the vertical errors are non-uniform, and the error vectors' directions differ. You can check it here, feel free to modify it: https://colab.research.google.com/drive/1J3VBcmLbZ4YqVTrNXSisbjDr41E75ek3#scrollTo=lEMSfsirnm8T
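A condensed sketch of that experiment, reusing the `rigid_fit` function above (the topocentric origin and UTM zone are arbitrary assumptions, and `+proj=topocentric` requires PROJ >= 8):

```python
# Sketch of the grid experiment: exact point-by-point topocentric->UTM
# conversion vs. a single rigid transform fitted on the grid corners.
# The origin (45N, 9E) and zone 32N are arbitrary; requires PROJ >= 8.
import numpy as np
from pyproj import Transformer

exact = Transformer.from_pipeline(
    "+proj=pipeline "
    "+step +inv +proj=topocentric +lat_0=45 +lon_0=9 +h_0=0 +ellps=WGS84 "
    "+step +inv +proj=cart +ellps=WGS84 "
    "+step +proj=utm +zone=32 +ellps=WGS84"
)

e, n = np.meshgrid(np.linspace(-2000, 2000, 21), np.linspace(-2000, 2000, 21))
topo = np.column_stack([e.ravel(), n.ravel(), np.full(e.size, 1000.0)])

# Baseline: exact conversion of every grid point
baseline = np.column_stack(exact.transform(topo[:, 0], topo[:, 1], topo[:, 2]))

# Approximation: one rigid transform fitted on the four grid corners
corners = [0, 20, -21, -1]
R, t = rigid_fit(topo[corners], baseline[corners])
err = (topo @ R.T + t) - baseline

print("max horizontal error (m):", np.hypot(err[:, 0], err[:, 1]).max())
print("max vertical error (m):  ", np.abs(err[:, 2]).max())
```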
Thanks for testing and the colab. It makes sense. Keeping everything in topocentric seems like the way to go. I do have some concerns around performance (in particular for the PDAL pipeline stuff) and the extra 3D models, especially for multispectral datasets (which are currently missing) and the 2.5D-only workflows where a 3D model is not built. Also disk usage concerns for downstream applications like NodeODM, which would currently include both textured models (topo and UTM), which is no good. Plus careful review and tons of testing, which is needed for changes like this. As a heads up, it might take me a while to take an active role in doing all of the above, due to other work priorities, but I do think we need to include this fix because it's critical for accurate results. 🙏
Understandable. I was unsure about whether to keep a standalone topocentric file or simply overwrite it, but in order to make rerun work properly this is the only way I could think of; maybe you can come up with a better idea. For texture model conversion, only the obj file needs to be converted. The implementation I followed from the alignment feature simply reads the vertices and converts them line by line, which should be a lot less efficient than using numpy to convert all vertices at once; we should probably introduce an efficient parser for better performance. For point cloud conversion, if we can have a newer version of the PROJ lib (version 8.0 at least) with its new topocentric conversion feature, combined with pdal filters.projpipeline, that should allow us to get rid of the Python part of the conversion and improve the speed. Let me know what I can help with.
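Something along these lines, perhaps (a sketch only: the `coord_op` option name for filters.projpipeline, the origin, the zone, and the file names are all assumptions here):

```python
# Sketch: apply the whole topocentric->UTM conversion inside PDAL via
# filters.projpipeline, skipping the numpy round trip. Requires PROJ >= 8;
# option names, origin, zone, and file names are assumptions.
import json
import pdal

pipeline = json.dumps([
    "odm_georeferenced_model_topocentric.laz",
    {
        "type": "filters.projpipeline",
        "coord_op": (
            "+proj=pipeline "
            "+step +inv +proj=topocentric +lat_0=45 +lon_0=9 +h_0=0 +ellps=WGS84 "
            "+step +inv +proj=cart +ellps=WGS84 "
            "+step +proj=utm +zone=32 +ellps=WGS84"
        ),
    },
    {
        "type": "writers.las",
        "filename": "odm_georeferenced_model.laz",
        "compression": "laszip",
    },
])
pdal.Pipeline(pipeline).execute()
```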
…xture files name; remove tmp change
Updated the code to convert all geomodel objs; the alignment part provided a good reference for me to use. Still need to do the GLTF conversion, but I think we should probably move the GLTF exporting to somewhere after the georeferencing stage; I noticed that the alignment section doesn't align the GLTF file either.
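On the obj side, the efficient parser mentioned earlier could be a single numpy pass over the vertex lines; a rough sketch (assuming plain `v x y z` lines with no per-vertex colors, and a pyproj transformer like the pipelines above):

```python
# Sketch: bulk-convert all OBJ "v x y z" lines in one numpy pass instead of
# line by line. Assumes plain xyz vertices and a pyproj Transformer such as
# the topocentric->UTM pipeline shown earlier.
import numpy as np

def convert_obj_vertices(src_path, dst_path, transformer):
    with open(src_path) as f:
        lines = f.readlines()
    idx = [i for i, line in enumerate(lines) if line.startswith("v ")]
    v = np.array([lines[i].split()[1:4] for i in idx], dtype=float)
    x, y, z = transformer.transform(v[:, 0], v[:, 1], v[:, 2])
    for i, a, b, c in zip(idx, x, y, z):
        lines[i] = f"v {a:.6f} {b:.6f} {c:.6f}\n"
    with open(dst_path, "w") as f:
        f.writelines(lines)
```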
I would expect a lot of things to need re-arrangement (and that's not a trivial thing to do, because there are subtle things that require assets to be generated in a certain order, depending on flags). That's why I was hoping that transforming the coordinate system + running bundle adjustment would work; it would make the fix a lot easier. When I have time (and a dataset) I think I will try this approach. I know it's not ideal in theory, but we're talking about differences of centimeters (over a large area) and I'm just plain curious to see what would happen in practice.
Note to self: all calls to …. Also possible memory/disk issues with 2.5D mesh generation, as the DEM generation assumes georeferenced units in order not to explode. Edit: same for the spacing estimation in odm_filterpoints.
This issue was discussed earlier here https://community.opendronemap.org/t/model-accuracy-issues-related-to-geocoord-exports/13054/1
The horizontal error can get up to meters when the GCPs are a few kilometers away from the dataset center. The change would definitely slow down the processing a bit, but I think such large errors shouldn't be acceptable.

My proposed fix is to keep the intermediate assets in the topocentric coordinate system, as separate files with `_topocentric` in the name. openmvs and odm_texturing require that the camera parameters remain unchanged to guarantee accuracy, so all these stages have to be kept in topocentric coordinates.

The current implementation uses pdal-python to read and write the points, and numpy to do the bulk conversions (a sketch of that approach follows below). For the obj file, I simply followed the implementation in the alignment method. For efficiency, if we can use a newer version of the PROJ lib, it supports a topocentric conversion method (https://proj.org/en/stable/operations/conversions/topocentric.html#geocentric-to-topocentric-conversion), so we could define a proj pipeline and use pdal alone to do the conversion. This should provide faster speed.
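A rough illustration of that pdal-python + numpy approach (a sketch, not the PR's actual code; file names, origin, and zone are placeholders):

```python
# Sketch of the pdal-python + numpy bulk conversion described above.
# File names, origin, and zone are placeholders, not the PR's actual code.
import pdal
from pyproj import Transformer

reader = pdal.Pipeline('["point_cloud_topocentric.ply"]')
reader.execute()
pts = reader.arrays[0]  # structured numpy array with X, Y, Z fields

topo_to_utm = Transformer.from_pipeline(
    "+proj=pipeline "
    "+step +inv +proj=topocentric +lat_0=45 +lon_0=9 +h_0=0 +ellps=WGS84 "
    "+step +inv +proj=cart +ellps=WGS84 "
    "+step +proj=utm +zone=32 +ellps=WGS84"
)
pts["X"], pts["Y"], pts["Z"] = topo_to_utm.transform(pts["X"], pts["Y"], pts["Z"])

writer = pdal.Pipeline(
    '[{"type": "writers.ply", "filename": "point_cloud.ply"}]', arrays=[pts]
)
writer.execute()
```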
Currently I implemented converting `odm_georeferenced_model.laz`, `point_cloud.ply` and `odm_textured_model_geo.obj`; I still need to check the 25dmeshing-related files as I don't normally use them. Let me know if any other files should be included.

You can find some numeric error calculations in the post above. Below is a comparison between orthophotos processed the two ways; the GCP is around 1 km away from the dataset center.
Rigid transform (current way)


Point-to-point transform (proposed fix)
