Motion Artifacts Detection in Short-scan Dental CBCT Reconstructions
Abdul Salam Rasmi Asraf Ali (first author; Methodology); Andrea Fusiello (second author; Supervision)
2023-01-01
Abstract
Cone Beam Computed Tomography (CBCT) is widely used in dentistry for diagnostics and treatment planning. CBCT imaging has a long acquisition time, and consequently the patient is likely to move. This motion causes significant artifacts in the reconstructed data, which may lead to misdiagnosis. Existing motion correction algorithms address this issue only partially, struggling with inconsistencies caused by truncation, as well as with accuracy and execution speed. On the other hand, a short-scan reconstruction using a subset of motion-free projections with appropriate weighting methods can provide sufficient image quality for most diagnostic purposes. Therefore, this study uses a framework to extract the motion-free part of the scanned projections, from which a clean short-scan volume can be reconstructed without correction algorithms. Motion artifacts are detected using deep learning with a slice-based prediction scheme followed by volume averaging to obtain the final result. A realistic motion simulation strategy and data augmentation have been implemented to address data scarcity. The framework has been validated by testing it on real motion-affected data, while the model was trained only on simulated motion data. This demonstrates the feasibility of applying the proposed framework to a broad variety of motion cases in further research.
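The abstract describes the detection scheme only at a high level (per-slice predictions followed by volume averaging). The sketch below illustrates that idea under stated assumptions: a trained 2D classifier (`model`) returning one motion logit per axial slice, intensity-normalized input, and a 0.5 decision threshold are illustrative choices, not the authors' implementation.

```python
import numpy as np
import torch


def predict_volume_motion_score(model: torch.nn.Module,
                                volume: np.ndarray,
                                device: str = "cpu") -> float:
    """Average per-slice motion probabilities over a reconstructed volume.

    volume: array of shape (num_slices, H, W); intensities assumed to be
            normalized to [0, 1] (an assumption, not from the paper).
    Returns the mean slice-level probability of motion artifacts.
    """
    model.eval()
    slice_scores = []
    with torch.no_grad():
        for axial_slice in volume:
            # Add batch and channel dimensions: (1, 1, H, W)
            x = torch.from_numpy(np.ascontiguousarray(axial_slice)).float()
            x = x[None, None].to(device)
            logit = model(x)                      # assumed: one scalar logit per slice
            slice_scores.append(torch.sigmoid(logit).item())
    # Volume averaging: aggregate the slice-level predictions into one score
    return float(np.mean(slice_scores))


# Usage sketch (names are hypothetical): `cnn` is a trained 2D classifier,
# `recon` is a (num_slices, H, W) short-scan reconstruction.
# score = predict_volume_motion_score(cnn, recon)
# motion_affected = score > 0.5   # threshold chosen for illustration
```

Averaging over slices trades per-slice localization for a more stable volume-level decision, which matches the abstract's description of aggregating slice predictions into a final result.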
| File | Type | Access | License | Size | Format |
|---|---|---|---|---|---|
| MILLanD_2023.pdf | Abstract | Not available | Not public | 419.27 kB | Adobe PDF |
| MILLanD_2023_Supplementary_Material.pdf | | Not available | Not public | 547.05 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.