Medical ultrasound (US) imaging is a popular and convenient medical imaging modality thanks to its portability, lack of ionizing radiation, ease of use, and real-time data acquisition. Conventional US brightness-mode (B-mode) imaging depicts tissue morphology by collecting and displaying the intensity of reflected acoustic waves, and it is frequently integrated with tracking systems and robotic systems in image-guided therapy (IGT) systems. Recently, these systems have also begun to incorporate advanced US imaging modes such as US elasticity imaging, photoacoustic imaging, and thermal imaging. Several software frameworks and toolkits have been developed for US imaging research and for integrating US data acquisition, processing, and display with existing IGT systems. However, no existing framework or toolkit supports advanced US imaging research and advanced US-based IGT systems by providing the low-level US data (channel data or radio-frequency (RF) data) that advanced US imaging requires. In this dissertation, we propose a new medical US imaging and interventional component framework for advanced US image-guided therapy, built on network-distributed modularity, real-time computation and communication, and open-interface design specifications. The framework provides a modular research environment: communication interfaces between heterogeneous systems allow flexible interventional US imaging research, and an entire interventional US imaging system can easily be reconfigured by adding or removing the devices or equipment specific to each therapy. In addition, the proposed framework offers real-time synchronization of data from multiple acquisition devices, enabling advanced interventional US imaging research and integration of the US imaging system with other IGT systems. Because the framework is designed and optimized for advanced ultrasound research, new advanced ultrasound imaging techniques can be implemented and tested within it in real time. The system's flexibility, real-time performance, and open interfaces are demonstrated and evaluated through experimental tests in several applications.
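To make the network-distributed, open-interface idea concrete, the following is a minimal sketch of how a low-level US data source might package an RF frame with a timestamp and stream it to a remote processing or display component. The message layout, magic string, port number, and function names here are hypothetical illustrations only, not the actual protocol or API of the proposed framework.

```python
import socket
import struct
import time

import numpy as np

# Hypothetical fixed-size message header: magic string, timestamp (float64),
# number of scanlines, samples per scanline, and a data-type code.
HEADER_FMT = "!4sdIIB"
MAGIC = b"USRF"


def pack_rf_frame(rf_frame: np.ndarray, timestamp: float) -> bytes:
    """Serialize one radio-frequency (RF) frame with a small header so a
    remote component can reconstruct it and synchronize it by timestamp."""
    rf16 = np.ascontiguousarray(rf_frame, dtype=np.int16)
    header = struct.pack(HEADER_FMT, MAGIC, timestamp,
                         rf16.shape[0], rf16.shape[1], 1)  # 1 = int16
    return header + rf16.tobytes()


def stream_frames(host: str, port: int, frames) -> None:
    """Send timestamped RF frames to a listening processing/display node."""
    with socket.create_connection((host, port)) as sock:
        for frame in frames:
            sock.sendall(pack_rf_frame(frame, time.time()))


if __name__ == "__main__":
    # Example: stream a few synthetic 128-scanline x 2048-sample RF frames
    # to a (hypothetical) receiver on the local machine.
    fake_frames = (np.random.randint(-2048, 2048, (128, 2048), dtype=np.int16)
                   for _ in range(3))
    stream_frames("127.0.0.1", 18944, fake_frames)
```

A receiving component would parse the header, reassemble the frame, and use the timestamp to align RF data with tracking or other acquisition streams. The point of the sketch is only that exposing timestamped low-level data over an open network interface is what allows heterogeneous acquisition, processing, and IGT components to be composed, reconfigured, and synchronized.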
Speaker Biography
Hyun Jae Kang received his B.S.E. degree in Mechanical Engineering from Kyungpook National University (Daegu, Korea), his M.S.E. degree in Mechanical Design & Production Engineering from Seoul National University (Seoul, Korea), and his M.S.E. degree in Computer Science from the Johns Hopkins University. Before coming to JHU, he worked as an R&D team manager and researcher on the image-guided surgical navigation team at CyberMed Inc. (Seoul, Korea), where he was involved in the development of surgical navigation systems for neurosurgery, image-free total knee replacement, and dental implants, and contributed to 16 patents. He is currently a Ph.D. candidate in Computer Science at JHU, advised by Dr. Emad M. Boctor. His Ph.D. research has been dedicated to a real-time medical ultrasound imaging system and an interventional component-based framework for advanced ultrasound-guided surgical systems. His research interests include modular software frameworks, network-distributed systems, and medical ultrasound image reconstruction.