I'm having an issue with CausalMUST3R. After using it with a pre-trained model to generate a point map from an image, I tried to align it with another input by transforming the coordinates to match the camera view of the first image. However, the transformed point map was scattered across the coordinate system. Given how CausalMUST3R is trained and run at inference, I'm wondering: are the dense point maps generated from each viewpoint expressed in the corresponding camera's coordinate system, or in a shared world coordinate system? Your help would be greatly appreciated.
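For reference, here is a minimal sketch of how I'm applying the alignment transform. This uses NumPy and a 4x4 homogeneous rigid transform; the function name and the assumption that the point map is an (H, W, 3) array in some camera's frame are my own, not taken from CausalMUST3R's API:

```python
import numpy as np

def transform_pointmap(pointmap, T):
    """Apply a 4x4 homogeneous rigid transform T to an (H, W, 3) point map."""
    H, W, _ = pointmap.shape
    pts = pointmap.reshape(-1, 3)  # flatten to (N, 3)
    # Append a homogeneous coordinate of 1 to each point.
    pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
    # Transform and drop the homogeneous component.
    out = (T @ pts_h.T).T[:, :3]
    return out.reshape(H, W, 3)

# Sanity check: a pure translation should shift every point by the same offset.
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
pm = np.zeros((2, 2, 3))
shifted = transform_pointmap(pm, T)  # every point becomes [1, 2, 3]
```

My understanding is that if each point map is in its own camera's frame, aligning view 2 into view 1's frame would require the relative pose between the two cameras, whereas if the maps are already in a world frame, a single camera-to-world inverse would suffice; that's why I'd like to know which convention the outputs follow.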