Deep Learn Hub

Biomedical Applications

Enhanced image pre-processing, segmentation and compression for efficient transmission of fetal ultrasound images

Medical ultrasound (US) imaging is commonly used in obstetrics for routine fetal scans because of its safety. However, manual fetal US scanning is a tedious process that is subject to inter-observer variability and fatigue errors, and it can cause repetitive strain injuries in obstetricians. To address these limitations, automated fetal US evaluation has been introduced. We focus on fetal US image segmentation, the most important step in this automated process.

However, the distorted, low-contrast nature of fetal US images poses a challenge for ROI segmentation. Therefore, in this work, image pre-processing, i.e. filtering and intensity enhancement, is performed prior to segmentation. Beyond distortion and low contrast, the presence of structures similar to the ROI, anatomical deformations, the different appearance of the fetus across the three trimesters, motion, and signal attenuation due to maternal mass all result in significantly different images. Taking this wide range of fetal US images into account, we work with learning architectures for fetal US image segmentation, as they perform advanced pattern-recognition-based segmentation.

After segmentation, biometric parameters can be measured, or the images can be sent to an expert for remote monitoring if required. However, medical images are large, so transmitting them in real time is difficult. To address this limitation, we propose compressing fetal US images after accurate segmentation and prior to transmission. As the ROI carries the important clinical information, the data within it must be preserved; therefore, lossless compression is applied to the ROI, while lossy compression is applied in parallel to the non-ROI and background.
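The hybrid ROI/non-ROI scheme above can be sketched as follows. This is a minimal illustration, not the project's actual codec: the ROI pixels are kept exactly (standing in for a lossless codec such as PNG), while the non-ROI/background is coarsely quantized (standing in for a lossy codec). The mask shape and quantization step are assumptions made for the example.

```python
import numpy as np

def hybrid_compress(image: np.ndarray, roi_mask: np.ndarray, step: int = 16):
    """Split an image into a losslessly kept ROI and a coarsely quantized
    non-ROI/background. Quantization here is a stand-in for a lossy codec."""
    roi = image[roi_mask]                         # exact ROI samples (lossless)
    non_roi = (image[~roi_mask] // step) * step   # quantized samples (lossy)
    return roi, non_roi

def hybrid_reconstruct(shape, roi_mask, roi, non_roi):
    """Reassemble the image: ROI pixels are bit-exact, the rest is approximate."""
    out = np.empty(shape, dtype=roi.dtype)
    out[roi_mask] = roi
    out[~roi_mask] = non_roi
    return out
```

In a real pipeline, the segmentation network would supply `roi_mask`, and the two pixel streams would be handed to actual lossless and lossy encoders before transmission.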

Smart Stethoscope: An intelligent respiratory disease prediction system

Diagnosing and treating lung diseases can be challenging, since the signs and symptoms of a wide range of medical conditions can indicate interstitial lung disease. Respiratory diseases impose an immense worldwide health burden, made even deadlier by COVID-19 in present times. Auscultation is the most common primary method of respiratory disease diagnosis: it is inexpensive, non-invasive, safe, and quick. However, diagnostic accuracy with auscultation depends on the experience and knowledge of the physician and requires extensive training.

We propose 'Smart Stethoscope', an intelligent platform powered by state-of-the-art artificial intelligence that assists in respiratory disease diagnosis and in training novice physicians. The system performs three main functions. In real-time prediction mode, it provides respiratory diagnosis predictions for lung sounds collected via auscultation. A training mode helps trainee doctors and medical students improve their respiratory disease differential-diagnosis skills. An expert mode continuously improves the system's prediction performance by collecting validations and evaluations from pulmonologists. These three services are provided via web and mobile applications.

The respiratory disease prediction model is developed by combining a state-of-the-art neural network with an ensembling transformer neural network, a new filter pipeline, and feature-extraction techniques. The system takes a lung sound input from the digital stethoscope; the sound is filtered and passed through an automatic breathing-cycle detection algorithm. In the feature-engineering stage, two Mel-spectrograms are computed for each breathing cycle and fed to the dual CRNN model. Based on the predictions for all breathing cycles of the given lung sound, the model outputs the differential respiratory diagnosis. Our proposed convolutional recurrent neural network (CRNN) model achieved 98% accuracy on six-class classification of breathing cycles on the ICBHI 2017 scientific challenge respiratory sound database.
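As a rough illustration of the feature-engineering step, the sketch below computes a log-Mel-spectrogram for one breathing-cycle segment using plain NumPy. The sampling rate, FFT size, hop size, and number of Mel bands are illustrative assumptions, not the values used in the actual system (which engineers two Mel-spectrograms per cycle before the dual CRNN).

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular Mel filters mapping an FFT magnitude spectrum to Mel bands."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):         # falling slope
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def log_mel_spectrogram(x, sr=4000, n_fft=256, hop=128, n_mels=32):
    """Frame the signal, take the magnitude FFT, apply Mel filters, log-compress."""
    win = np.hanning(n_fft)
    frames = [x[s:s + n_fft] * win
              for s in range(0, len(x) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))
    mel = mag @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mel + 1e-10)                 # (n_frames, n_mels)
```

The resulting time-by-Mel-band matrix is the kind of image-like input that a CRNN's convolutional front end consumes before its recurrent layers model the temporal structure.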

A combination of eye gaze detection and brain sensing to improve the accuracy of human-computer interaction for disabled people.

Communication is vital in the world today. People use mobile phones, laptops, tablets, and many other digital devices to stay connected with people far away from them; any update from others is at their fingertips. However, communication is genuinely challenging for disabled people who cannot operate a digital device with their hands, so they need assistive technology to communicate. Much research has been done in this field to improve the quality of communication for disabled people. Accordingly, pure eye-gaze tracking systems and pure BCIs each solve this issue only partially, and the efficiency of combined eye-gaze and BCI systems is not yet sufficient. By avoiding the drawbacks of current methods, we hope to increase efficiency and develop a user-friendly human-computer interaction system for disabled people. The proposed system will track eye gaze in the same way as pure eye-gaze tracking systems, and at stage 4 it will refine the gaze-point estimate by fusing in the user's brain-activity signal.
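As a sketch of the fusion idea, the snippet below combines a gaze-based point estimate with a BCI-derived estimate using a simple confidence-weighted average. The weighting scheme, the `bci_xy` estimate, and the confidence values are hypothetical illustrations; the proposed system's actual mixing method is not specified here.

```python
import numpy as np

def fuse_gaze_and_bci(gaze_xy, bci_xy, gaze_conf, bci_conf):
    """Confidence-weighted fusion of a gaze-tracker screen-point estimate
    with a BCI-derived estimate (illustrative weighting, not the system's
    actual algorithm)."""
    w = np.array([gaze_conf, bci_conf], dtype=float)
    w /= w.sum()  # normalize so the weights sum to 1
    return w[0] * np.asarray(gaze_xy, dtype=float) + \
           w[1] * np.asarray(bci_xy, dtype=float)
```

For example, a high-confidence gaze estimate dominates the fused point, while a noisy gaze reading lets the brain-signal estimate pull the result toward its own location.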