Scene is damaged when training on a big dataset #778

Description

@gaojing418

I used COLMAP to align 4,000 2K pictures; the sparse model has 2.7 million points. Training with Postshot works fine, but when I train with gsplat 1.5.2 for 200,000 steps, the scene looks fine in the early stage and is completely damaged after reaching 20,000 steps. The command I used:

CUDA_VISIBLE_DEVICES=0 python gsplat/examples/simple_trainer.py mcmc --data_dir /gz-data/ --data_factor 1 --eval-steps -1 --strategy.cap-max 3000000 --max-steps 200000 --save-steps 200000 --ply-steps 200000 --save-ply
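For readability, here is the same invocation with one flag per line. The per-flag notes are my reading of the flag names in gsplat's examples/simple_trainer.py, not confirmed semantics; verify them against the script's --help output.

```bash
# Annotated version of the failing command. Flag meanings below are
# inferred from their names and are assumptions, not confirmed behavior.
CUDA_VISIBLE_DEVICES=0 python gsplat/examples/simple_trainer.py mcmc \
    --data_dir /gz-data/ \
    --data_factor 1 \
    --eval-steps -1 \
    --strategy.cap-max 3000000 \
    --max-steps 200000 \
    --save-steps 200000 \
    --ply-steps 200000 \
    --save-ply
# --data_dir:          COLMAP dataset root (images + sparse model)
# --data_factor 1:     train at full 2K resolution, no downscaling
# --eval-steps -1:     assumption: skip intermediate evaluation
# --strategy.cap-max:  MCMC strategy, cap the number of Gaussians at 3M
# --max-steps:         200k total optimization steps
# --save-steps / --ply-steps: checkpoint and .ply export at the final step
# --save-ply:          enable .ply export
```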

https://github.com/user-attachments/assets/f767c663-8f06-4302-a473-d4c98b0d3e2d
