Title:
Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.
Authors:
Ye S; Department of Radiation Oncology, Stanford University, Stanford, California, USA., Chen Y; Department of Radiation Oncology, Stanford University, Stanford, California, USA., Wang S; Department of Radiation Oncology, Stanford University, Stanford, California, USA., Xing L; Department of Radiation Oncology, Stanford University, Stanford, California, USA., Gao Y; Department of Radiation Oncology, Stanford University, Stanford, California, USA.
Source:
Medical physics [Med Phys] 2025 Jul; Vol. 52 (7), pp. e17885. Date of Electronic Publication: 2025 May 19.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: John Wiley and Sons, Inc Country of Publication: United States NLM ID: 0425746 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 2473-4209 (Electronic) Linking ISSN: 00942405 NLM ISO Abbreviation: Med Phys Subsets: MEDLINE
Imprint Name(s):
Publication: 2017- : Hoboken, NJ : John Wiley and Sons, Inc.
Original Publication: Lancaster, Pa., Published for the American Assn. of Physicists in Medicine by the American Institute of Physics.
References:
Hu D, Zhang Y, Liu J, Luo S, Chen Y. DIOR: deep iterative optimization‐based residual‐learning for limited‐angle CT reconstruction. IEEE Trans Med Imaging. 2022;41:1778‐1790.
Zhou B, Zhou SK, Duncan JS, Liu C. Limited view tomographic reconstruction using a cascaded residual dense spatial‐channel attention network with projection data fidelity layer. IEEE Trans Med Imaging. 2021;40:1792‐1804.
Wang J, Zeng L, Wang C, Guo Y. ADMM‐based deep reconstruction for limited‐angle CT. Phys Med Biol. 2019;64:115011.
Cheng W, Wang Y, Li H, Duan Y. Learned full‐sampling reconstruction from incomplete data. IEEE Trans Comput Imaging. 2020;6:945‐957.
Barutcu S, Aslan S, Katsaggelos AK, Gürsoy D. Limited‐angle computed tomography with deep image and physics priors. Sci Rep. 2021;11:17740.
Hu D, Zhang Y, Li W, Zhang W, et al. SEA‐Net: structure‐enhanced attention network for limited‐angle CBCT reconstruction of clinical projection data. IEEE Trans Instrum Meas. 2023;72:1‐13.
Zang S, Zhang Y, Hu D, et al. Coarse‐to‐Fine Learning for Planning CT Enhanced Limited‐Angle CBCT Reconstruction. IEEE Trans Instrum Meas. 2024;73:1‐15.
Liu J, Anirudh R, Thiagarajan JJ, et al. DOLCE: a model‐based probabilistic diffusion framework for limited‐angle CT reconstruction. In: Proceedings of the IEEE International Conference on Computer Vision. 2023:10498‐10508.
Xie J, Shao H‐C, Li Y, Zhang Y. Prior frequency guided diffusion model for limited angle (LA)‐CBCT reconstruction. Phys Med Biol. 2024;69:135008.
Nagarajan V, Andreassen A, Neyshabur B. Understanding the failure modes of out‐of‐distribution generalization. arXiv preprint arXiv:2010.15775 (2020).
Henriksson J, Berger C, Borg M, Tornberg L, Sathyamoorthy SR, Englund C. Performance analysis of out‐of‐distribution detection on trained neural networks. Inf Softw Technol. 2021;130:106409.
Wei R, Song Z, Pan Z, Cao Y, Song Y, Dai J. Non‐coplanar CBCT image reconstruction using a generative adversarial network for non‐coplanar radiotherapy. J Appl Clin Med Phys. 2024;25:e14487.
Weese J, Penney GP, Desmedt P, Buzug TM, Hill DL, Hawkes DJ. Voxel‐based 2‐D/3‐D registration of fluoroscopy images and CT scans for image‐guided surgery. IEEE Trans Inf Technol Biomed. 1997;1:284‐293.
Bansal R, Staib LH, Chen Z, et al. A novel approach for the registration of 2D portal and 3D CT images for treatment setup verification in radiotherapy. In: Medical Image Computing and Computer‐Assisted Intervention–MICCAI'98: First International Conference Cambridge, MA, USA, October 11–13, 1998 Proceedings 1. Springer; 1998:1075‐1086.
Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image‐guided interventions. Med Image Anal. 2012;16:642‐661.
Reyneke CJF, Lüthi M, Burdin V, Douglas TS, Vetter T, Mutsvangwa TE. Review of 2‐D/3‐D reconstruction using statistical shape and intensity models and X‐ray image synthesis: toward a unified framework. IEEE Rev Biomed Eng. 2018;12:269‐286.
Tang TS, MacIntyre N, Gill H, et al. Hardware‐assisted 2D/3D intensity‐based registration for assessing patellar tracking. In: Medical Image Computing and Computer‐Assisted Intervention–MICCAI 2004: 7th International Conference, Saint‐Malo, France, September 26‐29, 2004. Proceedings, Part II 7. Springer; 2004:1095‐1096.
Spoerk J, Bergmann H, Wanschitz F, Dong S, Birkfellner W. Fast DRR splat rendering using common consumer graphics hardware. Med Phys. 2007;34:4302‐4308.
Malzbender T. Fourier volume rendering. ACM Trans Graphics (TOG). 1993;12:233‐250.
Lacroute P, Levoy M. Fast volume rendering using a shear‐warp factorization of the viewing transformation. In: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM; 1994:451‐458.
Wang F, Davis TE, Vemuri BC. Real‐time DRR generation using cylindrical harmonics. In: Medical Image Computing and Computer‐Assisted Intervention–MICCAI 2002: 5th International Conference Tokyo, Japan, September 25–28, 2002 Proceedings, Part II 5. Springer; 2002:671‐678.
Jia X, Wei W, Jia K. A GPU‐based DRR generation method using cubic window. In: 8th International Conference on Intelligent Information Hiding and Multimedia Signal Processing. IEEE; 2012:403‐406.
Birkfellner W, Seemann R, Figl M, et al. Wobbled splatting–a fast perspective volume rendering method for simulation of X‐ray images from CT. Phys Med Biol. 2005;50:N73‐N84.
Miao S, Wang JZ, Liao R. Convolutional neural networks for robust and real‐time 2‐D/3‐D registration. In: Deep Learning for Medical Image Analysis. Elsevier; 2017:271‐296.
Gouveia AR, Metz C, Freire L, Almeida P, Klein S. Registration‐by‐regression of coronary CTA and X‐ray angiography. Comput Methods Biomech Biomed Eng: Imaging Vis. 2017;5:208‐220.
Chou CR, Frederick B, Mageras G, Chang S, Pizer S. 2D/3D image registration using regression learning. Comput Vis Image Underst. 2013;117:1095‐1106.
Penney GP, Weese J, Little JA, Desmedt P, Hill DLG, Hawkes DJ. A comparison of similarity measures for use in 2‐D‐3‐D medical image registration. IEEE Trans Med Imaging. 1998;17:586‐595.
Weese J, Buzug TM, Lorenz C, Fassnacht C. An approach to 2D/3D registration of a vertebra in 2D X‐ray fluoroscopies with 3D CT images. In: International Conference on Computer Vision, Virtual Reality, and Robotics in Medicine. Springer; 1997:119‐128.
Van Der Bom I, Klein S, Staring M, Homan R, Bartels L, Pluim J. Evaluation of optimization methods for intensity‐based 2D‐3D registration in x‐ray guided interventions. In: Medical Imaging 2011: Image Processing. Vol 7962. SPIE; 2011:657‐671.
Aubert B, Cresson T, De Guise J, Vazquez C. X‐ray to DRR images translation for efficient multiple objects similarity measures in deformable model 3D/2D registration. IEEE Trans Med Imaging. 2022;42:897‐909.
Zhang Y. An unsupervised 2D–3D deformable registration network (2D3D‐RegNet) for cone‐beam CT estimation. Phys Med Biol. 2021;66:074001.
Mildenhall B, Srinivasan PP, Tancik M, Barron JT, Ramamoorthi R, Ng R. NeRF: representing scenes as neural radiance fields for view synthesis. Commun ACM. 2021;65:99‐106.
Shen L, Pauly J, Xing L. NeRP: implicit neural representation learning with prior embedding for sparsely sampled image reconstruction. IEEE Trans Neural Netw Learn Syst. 2022;35:770‐782.
Wolterink JM, Zwienenberg JC, Brune C. Implicit neural representations for deformable image registration. In: International Conference on Medical Imaging with Deep Learning. PMLR; 2022:1349‐1359.
Zhang Y, Shao H‐C, Pan T, Mengke T. Dynamic cone‐beam CT reconstruction using spatial and temporal implicit neural representation learning (STINR). Phys Med Biol. 2023;68:045005.
Shao H‐C, Mengke T, Pan T, Zhang Y. Dynamic CBCT imaging using prior model‐free spatiotemporal implicit neural representation (PMF‐STINR). Phys Med Biol. 2024;69:115030.
Kanopoulos N, Vasanthavada N, Baker RL. Design of an image edge detection filter using the Sobel operator. IEEE J Solid‐State Circuits. 1988;23:358‐367.
Gendrin C, Furtado H, Weber C, et al. Monitoring tumor motion by real time 2D/3D registration during radiotherapy. Radiother Oncol. 2012;102:274‐280.
Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
Zhang B, Faghihroohi S, Azampour MF, et al. A patient‐specific self‐supervised model for automatic X‐Ray/CT registration. In: Medical Image Computing and Computer Assisted Intervention–MICCAI. Springer; 2023:515‐524.
Bry V, Saenz D, Pappas E, Kalaitzakis G, Papanikolaou N, Rasmussen K. End‐to‐end comparison of surface‐guided imaging versus stereoscopic X‐rays for the SRS treatment of multiple metastases with a single isocenter using 3D anthropomorphic gel phantoms. J Appl Clin Med Phys. 2022;23:e13576.
Grant Information:
1R01CA256890 United States NCI NIH HHS; R01CA275772 United States NCI NIH HHS
Contributed Indexing:
Keywords: 2D–3D registration; deep learning; image‐guided radiation therapy; limited‐angle CBCT; non‐coplanar radiation therapy
Entry Date(s):
Date Created: 20250519 Date Completed: 20250714 Latest Revision: 20250714
Update Code:
20260130
DOI:
10.1002/mp.17885
PMID:
40387508
Database:
MEDLINE

*Further Information*

*Background: Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). For non-coplanar beams, however, potential collisions between the treatment couch and the on-board imaging system limit the range through which the gantry can rotate. The resulting limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, limiting the use of CBCT for position verification. An alternative to image-domain registration is to register a few 2D projections acquired by the on-board kV imager with the 3D planning CT for patient position verification, referred to as 2D-3D registration.
Purpose: 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) intended to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and from artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans.
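The DRR concept above can be illustrated with a toy sketch: a DRR is a set of line integrals of attenuation through the CT volume along the beam direction. The sketch below uses an idealized parallel-beam geometry (the paper's actual setup is cone-beam with the machine's imaging specifications), and the function name and angle convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate


def drr_parallel_beam(volume, gantry_deg):
    """Toy parallel-beam DRR (illustrative only, not the paper's method).

    Rotates the attenuation volume by the gantry angle in the axial
    plane, then integrates along the source-detector axis, mimicking
    the line-integral nature of an X-ray projection. A real on-board
    kV imager uses a divergent cone-beam geometry.
    """
    vol = np.asarray(volume, dtype=float)
    # Rotate in the axial plane; reshape=False keeps the grid fixed.
    rot = rotate(vol, gantry_deg, axes=(1, 2), reshape=False, order=1)
    # Sum along the (assumed) beam direction to form the 2D projection.
    return rot.sum(axis=1)
```

At a zero-degree gantry angle the "DRR" of a uniform volume is simply the path length through it, which is a quick sanity check on the line-integral interpretation.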
Method: We designed an accelerated, dataset-free, patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and the imaging specifications at the moving position are then used to generate DRRs with higher accuracy. We designed a composite similarity measure combining pixel-wise intensity differences and gradient differences between the generated DRRs and the acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired on a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method.
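The composite similarity measure described above can be sketched as follows. The paper combines a pixel-wise intensity difference with gradient differences (the reference list cites the Sobel operator for edge detection); the specific functional form, the gradient-magnitude formulation, and the weighting term `w_grad` below are illustrative assumptions rather than the authors' exact loss.

```python
import numpy as np
from scipy.ndimage import sobel


def composite_similarity(drr, proj, w_grad=1.0):
    """Illustrative composite dissimilarity between a DRR and an
    acquired projection: pixel-wise intensity MSE plus MSE between
    Sobel gradient magnitudes. Lower is better; 0 for identical images.

    The gradient term emphasizes anatomical edges, which reduces
    sensitivity to smooth intensity-domain gaps (e.g., scatter or
    calibration differences) between rendered DRRs and real projections.
    """
    drr = np.asarray(drr, dtype=float)
    proj = np.asarray(proj, dtype=float)
    # Pixel-wise intensity term.
    intensity_term = np.mean((drr - proj) ** 2)
    # Sobel gradient magnitude of each image.
    g_drr = np.hypot(sobel(drr, axis=0), sobel(drr, axis=1))
    g_proj = np.hypot(sobel(proj, axis=0), sobel(proj, axis=1))
    gradient_term = np.mean((g_drr - g_proj) ** 2)
    return intensity_term + w_grad * gradient_term
```

In an optimization loop, this scalar would be minimized over the rigid motion parameters produced by the INR network, with the DRRs regenerated at each candidate position.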
Results: In the simulation experiments, two X-ray projections of a head-and-neck image with INLINEMATH discrepancy were used for registration. Registration accuracy was evaluated at four different moving positions with known ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of INLINEMATH and sub-degree accuracy for pitch and roll. Experiments using different numbers of projections with varying angle discrepancies demonstrated the improved accuracy and robustness of the proposed method over both the conventional registration approach and the proposed approach with components of the composite similarity measure ablated.
Conclusion: We proposed a dataset-free, lightweight, INR-based registration method with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations on both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.
(© 2025 American Association of Physicists in Medicine.)*