ESP: PubMed Auto Bibliography (updated 07 Oct 2024 at 01:40)
Cloud Computing
Wikipedia: Cloud Computing
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
Citations: The Papers (from PubMed®)
RevDate: 2024-10-03
Multiscale morphological trajectories to support management of free-flowing rivers: the Vjosa in South-East Europe.
Journal of environmental management, 370:122541 pii:S0301-4797(24)02527-1 [Epub ahead of print].
Free-flowing rivers (FFRs) are fundamental references for river management, providing the opportunity to investigate river functioning under minimal anthropic disturbance. However, large free-flowing rivers are rare in Europe and worldwide, and knowledge of their dynamics is often scarce due to a lack of data and baseline studies. So far, their characterization has been grounded mainly in longitudinal connectivity assessment, with scarce integration of further hydro-morphological aspects, particularly concerning the processes and drivers of changes in their morphology over time scales of management relevance. This work aims to broaden the characterization of FFRs by reconstructing their catchment-scale morphological evolutionary trajectories and understanding their driving causes, to better support their management. This is achieved by integrating freely available global data, including Landsat imagery and climatic reanalysis, with the limited quantitative and qualitative information available locally. The analysis of possible drivers of change at the catchment and reach scale assesses hydrological variability, flow regulation, land use change, sediment mining and bank protection works. We applied this approach to the Vjosa River (Albania), a model ecosystem of European significance and one of the few FFRs in Europe, recently declared a Wild River National Park. We investigated its catchment-scale morphological changes over 50 years, considering four reaches of the Vjosa and four reaches of its main tributaries. Satellite imagery was analyzed on the Google Earth Engine cloud computing platform. The analysis reveals a catchment-scale response to climatic fluctuations, especially in the most natural reaches, with a significant narrowing of the active river corridor following a flood-intense period in the early 1960s. The narrowing rate gradually decreased, from 35% before 1985 to 24% between 1985 and 2000, reaching a new equilibrium from 2000 to 2020. However, the recent trajectories of the lowland reaches have been impacted by human pressures, particularly sediment mining, which intensified after the 1990s, suggesting that these reaches may instead be far from equilibrium and adjusting to such a persistent stressor. Identifying the key drivers of change and building catchment-scale knowledge of geomorphic change can inform the management of riverine protected areas, and the proposed integrated approach is a promising tool to help overcome the data scarcity typical of the few remaining large FFRs.
Additional Links: PMID-39362158
@article {pmid39362158,
year = {2024},
author = {Crivellaro, M and Serrao, L and Bertoldi, W and Bizzi, S and Vitti, A and Hauer, C and Skrame, K and Cekrezi, B and Zolezzi, G},
title = {Multiscale morphological trajectories to support management of free-flowing rivers: the Vjosa in South-East Europe.},
journal = {Journal of environmental management},
volume = {370},
number = {},
pages = {122541},
doi = {10.1016/j.jenvman.2024.122541},
pmid = {39362158},
issn = {1095-8630},
}
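As an illustration of the kind of cloud-based analysis the Vjosa study describes, the following minimal Google Earth Engine (Python API) sketch composites Landsat 5 imagery for two epochs and estimates the water-surface area of a reach from an NDWI threshold. The reach geometry, threshold, and epochs are invented for illustration, and cloud masking is omitted; this is not the authors' actual workflow.

# Sketch: epoch-averaged water extent from Landsat in Google Earth Engine.
# Assumes prior ee.Authenticate(); geometry and threshold are illustrative.
import ee

ee.Initialize()

# Hypothetical reach of interest (west, south, east, north).
reach = ee.Geometry.Rectangle([20.0, 40.0, 20.3, 40.2])

def epoch_water_area(start, end):
    """Median composite for an epoch, thresholded on NDWI to a water mask."""
    composite = (
        ee.ImageCollection('LANDSAT/LT05/C02/T1_L2')
        .filterBounds(reach)
        .filterDate(start, end)
        .median()
    )
    # McFeeters NDWI = (green - NIR) / (green + NIR); Landsat 5 L2 bands.
    ndwi = composite.normalizedDifference(['SR_B2', 'SR_B4'])
    water = ndwi.gt(0.1)  # illustrative threshold
    return water.multiply(ee.Image.pixelArea()).reduceRegion(
        reducer=ee.Reducer.sum(), geometry=reach, scale=30, maxPixels=1e9)

for start, end in [('1985-01-01', '1990-01-01'), ('2000-01-01', '2005-01-01')]:
    print(start, epoch_water_area(start, end).getInfo())

Comparing the per-epoch areas (square meters of water within the reach) is one simple way to quantify the corridor narrowing trend the abstract reports.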
RevDate: 2024-10-03
CmpDate: 2024-09-30
An empirical study for mitigating sustainable cloud computing challenges using ISM-ANN.
PloS one, 19(9):e0308971 pii:PONE-D-24-13688.
The significance of cloud computing methods in everyday life is growing as a result of the exponential advancement and refinement of artificial intelligence technology. As cloud computing makes more progress, it will bring with it new opportunities and threats that affect the long-term health of society and the environment. Many questions remain unanswered regarding sustainability, such as: how will widely available computing systems affect environmental equilibrium? When hundreds of millions of microcomputers are invisible to each other, what will society look like? What does this mean for social sustainability? This paper empirically investigates the ethical challenges and practices of cloud computing with respect to sustainable development. We conducted a systematic literature review followed by a questionnaire survey and identified 11 sustainable cloud computing challenges (SCCCs) and 66 practices for addressing the identified challenges. Interpretive structural modeling (ISM) and Artificial Neural Networks (ANN) were then used to identify and analyze the interrelationships between the SCCCs. Based on the results of the ISM, 11 process areas were determined to develop the proposed sustainable cloud computing challenges mitigation model (SCCCMM). The SCCCMM includes four main categories: Requirements specification; Quality of Service (QoS) and Service Level Agreement (SLA); Complexity and Cyber security; and Trust. The model was subsequently tested with a real-world case study related to the environment. The results demonstrate that the proposed SCCCMM aids a sustainable cloud computing organization in estimating its level of challenge mitigation. The participants in the case study also appreciated the suggested SCCCMM for its practicality, user-friendliness, and overall usefulness. When it comes to the sustainability of their software products, we believe that organizations involved in cloud computing can benefit from the suggested SCCCMM. Additionally, researchers and industry practitioners can expect the proposed model to provide a strong foundation for developing new sustainable methods and tools for cloud computing.
Additional Links: PMID-39348369
@article {pmid39348369,
year = {2024},
author = {Alwageed, HS and Keshta, I and Khan, RA and Alzahrani, A and Tariq, MU and Ghani, A},
title = {An empirical study for mitigating sustainable cloud computing challenges using ISM-ANN.},
journal = {PloS one},
volume = {19},
number = {9},
pages = {e0308971},
doi = {10.1371/journal.pone.0308971},
pmid = {39348369},
issn = {1932-6203},
}
MeSH Terms:
*Cloud Computing
*Computer Security
*Neural Networks, Computer
Humans
Surveys and Questionnaires
Sustainable Development
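To make the ISM step concrete, here is a minimal sketch of the core mechanics of Interpretive Structural Modeling: a binary "challenge i influences challenge j" matrix is closed into a reachability matrix, which is then partitioned into levels. The 4x4 toy adjacency matrix is invented; the paper works with 11 expert-elicited challenges.

# Sketch: ISM reachability via Warshall-style transitive closure.
# The adjacency matrix below is a toy example, not the paper's data.
import numpy as np

A = np.array([
    [0, 1, 0, 0],   # C1 influences C2
    [0, 0, 1, 0],   # C2 influences C3
    [0, 0, 0, 1],   # C3 influences C4
    [0, 0, 0, 0],
])

# Initial reachability: adjacency plus self-reachability (identity).
R = (A + np.eye(len(A), dtype=int)) > 0

# Closure: i reaches j if i reaches k and k reaches j.
for k in range(len(R)):
    R = R | (R[:, [k]] & R[[k], :])

# Level partitioning: an element is top-level when its reachability set
# equals the intersection of its reachability and antecedent sets.
reach = [set(np.flatnonzero(R[i])) for i in range(len(R))]
antec = [set(np.flatnonzero(R[:, i])) for i in range(len(R))]
top = [i for i in range(len(R)) if reach[i] == reach[i] & antec[i]]
print(R.astype(int))
print("top-level elements:", top)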
RevDate: 2024-09-30
Long-term stability of squeezed light in a fiber-based system using automated alignment.
The Review of scientific instruments, 95(9).
Providing a cloud service for optical quantum computing requires stabilizing the optical system for extended periods. It is advantageous to construct a fiber-based system, which does not require spatial alignment. However, fiber-based systems are instead subject to fiber-specific instabilities: phase drifts due to ambient temperature changes and external disturbances, and polarization fluctuations due to the finite polarization extinction ratio of fiber components. Here, we report the successful measurement of squeezed light with a fiber system for 24 h. To do this, we introduce stabilization mechanics to suppress fluctuations in the fiber system and an integrated controller to automatically align the entire system. The squeezed light at a wavelength of 1545.3 nm is measured every 2 min, with automated alignments inserted every 30 min. Squeezing levels averaging -4.42 dB are recorded with an extremely small standard deviation of 0.08 dB over 24 h. With the technologies developed here, we can build complicated optical setups with the fiber-based system and operate them automatically for extended periods, which is promising for a cloud service for quantum computation.
Additional Links: PMID-39345166
@article {pmid39345166,
year = {2024},
author = {Nakamura, T and Nomura, T and Endo, M and Sakaguchi, A and Ruofan, H and Kashiwazaki, T and Umeki, T and Takase, K and Asavanant, W and Yoshikawa, JI and Furusawa, A},
title = {Long-term stability of squeezed light in a fiber-based system using automated alignment.},
journal = {The Review of scientific instruments},
volume = {95},
number = {9},
pages = {},
doi = {10.1063/5.0203988},
pmid = {39345166},
issn = {1089-7623},
}
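The squeezing statistics quoted above follow directly from quadrature variances measured relative to the vacuum (shot-noise) level. A small sketch, with simulated variances standing in for the paper's homodyne data:

# Sketch: squeezing level in dB from quadrature variance, plus 24 h stats.
# Simulated data only; 720 points = one measurement every 2 min for 24 h.
import numpy as np

rng = np.random.default_rng(0)

# Variance relative to the vacuum (shot-noise) variance of 1.
# A -4.42 dB squeezed quadrature has variance 10**(-4.42/10) ~ 0.36.
true_var = 10 ** (-4.42 / 10)
measured_var = true_var * (1 + 0.02 * rng.standard_normal(720))

squeezing_db = 10 * np.log10(measured_var)   # dB relative to shot noise
print(f"mean  = {squeezing_db.mean():+.2f} dB")
print(f"sigma = {squeezing_db.std():.2f} dB")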
RevDate: 2024-09-30
Modeling and Analyzing the Availability of Technical Professional Profiles for the Success of Smart Cities Projects in Europe.
Sensors (Basel, Switzerland), 24(18):.
The success of developing and implementing Smart Cities (SC) projects depends on a varied set of factors, of which the availability of a qualified technical workforce is a critical one. The combination of ICT requirements, like the effectiveness and quality of solutions merging IoT, cloud computing, sensors, and communications, with the work of many varied disciplines (e.g., civil engineering, architecture, etc.), mixed with aspects of environmental and business sustainability, makes the management of these projects really challenging. Reports forecast a scarcity of qualified candidates, given this complexity and the growth of activity in SC projects. The European project SMACITE has addressed the requirements for qualifying an ICT workforce with an analysis of multiple sources of information from the labor market, feedback from involved stakeholders, and the literature. The goal was the development of two occupational ICT profiles as a reference for training and for the availability of candidates for job vacancies. The result is two ICT role profiles for engineers and technicians, mapped to the European skills frameworks ESCO and EN16234. The profiles capture the whole set of requirements, including not only the core technical areas and soft skills but also additional technical areas and sustainability and managerial skills, drawing on the analysis of the different sources of information. Our work has also determined which existing ESCO occupations are similar to the two reference profiles, so they can be better adapted to SC projects. The training activities of SMACITE have also suggested the amount of training expected for a varied sample of candidates who want to be qualified for SC projects.
Additional Links: PMID-39338834
@article {pmid39338834,
year = {2024},
author = {López-Baldominos, I and Pospelova, V and Fernández-Sanz, L and Castillo-Martínez, A},
title = {Modeling and Analyzing the Availability of Technical Professional Profiles for the Success of Smart Cities Projects in Europe.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {18},
pages = {},
pmid = {39338834},
issn = {1424-8220},
support = {101052513//European Commission/ ; },
}
RevDate: 2024-09-28
A Survey on Reduction of Energy Consumption in Fog Networks-Communications and Computations.
Sensors (Basel, Switzerland), 24(18): pii:s24186064.
Fog networking has become an established architecture addressing various applications with strict latency, jitter, and bandwidth constraints. Fog Nodes (FNs) allow for flexible and effective computation offloading and content distribution. However, the transmission of computational tasks, the processing of these tasks, and finally sending the results back still incur energy costs. We survey the literature on fog computing, focusing on energy consumption. We take a holistic approach and look at the energy consumed by devices located in all network tiers, from the things tier through the fog tier to the cloud tier, including the communication links between the tiers. Furthermore, fog network modeling is analyzed with particular emphasis on application scenarios and the energy consumed for communication and computation. We perform a detailed analysis of model parameterization, which is crucial for the results presented in the surveyed works. Finally, we survey energy-saving methods, putting them into different classification systems and considering the results presented in the surveyed works. Based on our analysis, we present a classification and comparison of the fog algorithmic models with respect to where energy is spent on communication and computation and where delay is incurred. We also classify the scenarios examined by the surveyed works with respect to the assumed parameters. Moreover, we systematize the methods used to save energy in a fog network, comparing them with respect to their scenarios, objectives, constraints, and decision variables. Finally, we discuss future trends in fog networking and how related technologies and economics will have to trade further development against energy consumption.
Additional Links: PMID-39338808
@article {pmid39338808,
year = {2024},
author = {Kopras, B and Idzikowski, F and Bogucka, H},
title = {A Survey on Reduction of Energy Consumption in Fog Networks-Communications and Computations.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {18},
pages = {},
doi = {10.3390/s24186064},
pmid = {39338808},
issn = {1424-8220},
support = {CYBERSECIDENT/487845/IV/NCBR/2021//National Centre for Research and Development/ ; 2023/05/Y/ST7/00002//National Science Center/ ; },
}
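A minimal sketch of the kind of offloading energy model that recurs in the surveyed literature: device-side energy for local execution versus transmitting the task to a fog node and idling during remote execution. All parameter values are invented for illustration.

# Sketch: thing-tier energy for local computation vs. fog offloading.
# Parameter values are invented; real models add queuing, retransmission, etc.
def local_energy(cycles, power_w, freq_hz):
    """Energy (J) to execute `cycles` CPU cycles on the device itself."""
    return power_w * cycles / freq_hz

def offload_energy(bits, tx_power_w, rate_bps, idle_power_w,
                   cycles, fog_freq_hz):
    """Device energy (J): transmit the task, then idle while the fog computes."""
    t_tx = bits / rate_bps           # transmission time over the link
    t_fog = cycles / fog_freq_hz     # remote execution time
    return tx_power_w * t_tx + idle_power_w * t_fog

cycles, bits = 2e9, 5e6
e_local = local_energy(cycles, power_w=2.0, freq_hz=1e9)
e_off = offload_energy(bits, tx_power_w=0.5, rate_bps=20e6,
                       idle_power_w=0.1, cycles=cycles, fog_freq_hz=10e9)
print(f"local: {e_local:.2f} J, offload: {e_off:.3f} J")

With these toy numbers offloading wins (0.145 J vs. 4 J), but a slower link or a longer task flips the comparison, which is exactly the trade-off space the surveyed methods optimize over.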
RevDate: 2024-09-28
The Impact of an Automation System Built with Jenkins on the Efficiency of Container-Based System Deployment.
Sensors (Basel, Switzerland), 24(18): pii:s24186002.
This paper evaluated deployment efficiency by comparing manual deployment with automated deployment through a CI/CD pipeline using Jenkins. The study moved a manual deployment process to an automated system using Jenkins and tested both deployment methods in a real-world environment. The results showed that the automated deployment system significantly reduced both the deployment time and the error rate compared to manual deployment. Manual deployment required human intervention at each step, making it time-consuming and prone to mistakes, while automated deployment using Jenkins automated each step to ensure consistency and maximized time efficiency through parallel processing. Automated testing verified the stability of the code before deployment, minimizing errors. This study demonstrates the effectiveness of adopting a CI/CD pipeline and shows that automated systems can provide high efficiency in real-world production environments. It also highlights the importance of security measures to prevent sensitive information leakage during CI/CD, suggesting the use of secret-management tools and environment variables and limiting access rights. This research will contribute to exploring the applicability of CI/CD pipelines in different environments and, in doing so, validate the universality of automated systems.
Additional Links: PMID-39338747
@article {pmid39338747,
year = {2024},
author = {Hyun, G and Oak, J and Kim, D and Kim, K},
title = {The Impact of an Automation System Built with Jenkins on the Efficiency of Container-Based System Deployment.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {18},
pages = {},
doi = {10.3390/s24186002},
pmid = {39338747},
issn = {1424-8220},
support = {NRF-2021R1A2C2013933//Ministry of Science and ICT/ ; S3224694//Ministry of SMEs and Startups/ ; },
}
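Jenkins pipelines themselves are written in Groovy, but the time saving the paper attributes to parallel stage execution can be illustrated language-independently. A toy Python sketch with invented stage names and durations:

# Sketch: sequential ("manual") vs. concurrent (pipelined) stage execution.
# time.sleep stands in for real build/test/deploy work.
import time
from concurrent.futures import ThreadPoolExecutor

def stage(name, seconds):
    time.sleep(seconds)
    return name

independent = [("unit-tests", 1.0), ("lint", 0.5), ("image-build", 1.5)]

t0 = time.perf_counter()
for name, s in independent:            # one step at a time
    stage(name, s)
sequential = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor() as pool:     # independent stages run concurrently
    list(pool.map(lambda a: stage(*a), independent))
parallel = time.perf_counter() - t0

print(f"sequential: {sequential:.1f}s, parallel: {parallel:.1f}s")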
RevDate: 2024-09-28
CmpDate: 2024-09-28
Image Processing for Smart Agriculture Applications Using Cloud-Fog Computing.
Sensors (Basel, Switzerland), 24(18): pii:s24185965.
The widespread use of IoT devices has led to the generation of a huge amount of data and driven the need for analytical solutions in many areas of human activity, such as the field of smart agriculture. Continuous monitoring of crop growth stages enables timely interventions, such as control of weeds and plant diseases, as well as pest control, ensuring optimal development. Decision-making systems in smart agriculture involve image analysis with the potential to increase productivity, efficiency and sustainability. By applying Convolutional Neural Networks (CNNs), state recognition and classification can be performed based on images from specific locations. Thus, we have developed a solution for early problem detection and resource management optimization. The main concept of the proposed solution relies on a direct connection between Cloud and Edge devices, which is achieved through Fog computing. The goal of our work is the creation of a deep learning model for image classification that can be optimized and adapted for implementation on devices with limited hardware resources at the level of Fog computing. This could increase the importance of image processing in the reduction of agricultural operating costs and manual labor. By offloading data processing to Edge and Fog devices, system responsiveness can be improved, the costs associated with data transmission and storage can be reduced, and overall system reliability and security can be increased. The proposed solution can choose among classification algorithms to find a trade-off between the size and accuracy of the model optimized for devices with limited hardware resources. After testing our model for tomato disease classification compiled for execution on an FPGA, we found that the decrease in test accuracy is as small as 0.83% (from 96.29% to 95.46%).
Additional Links: PMID-39338710
@article {pmid39338710,
year = {2024},
author = {Marković, D and Stamenković, Z and Đorđević, B and Ranđić, S},
title = {Image Processing for Smart Agriculture Applications Using Cloud-Fog Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {18},
pages = {},
doi = {10.3390/s24185965},
pmid = {39338710},
issn = {1424-8220},
support = {16DHBKIO20//Brandenburg/Bayern Initiative for Integration of Artificial Intelligence - Hardware Subjects in University Curriculum (BB-KI-Chips)/ ; },
}
MeSH Terms:
*Agriculture/methods
*Image Processing, Computer-Assisted/methods
*Cloud Computing
*Neural Networks, Computer
Crops, Agricultural
Algorithms
Humans
Deep Learning
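The size/accuracy trade-off for resource-constrained Fog devices is commonly explored with model compression. Below is a minimal PyTorch sketch applying post-training dynamic quantization to a small CNN classifier; the architecture and 10-class output are invented, and the paper targets FPGA execution rather than this CPU-side quantization.

# Sketch: shrink a toy CNN's Linear weights to int8 with dynamic quantization.
# Architecture and class count are illustrative, not the paper's model.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)

    def forward(self, x):                      # x: (N, 3, 224, 224)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN().eval()
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)    # int8 weights for Linear layers

x = torch.randn(1, 3, 224, 224)
print(model(x).shape, quantized(x).shape)     # same interface, smaller model

Comparing the accuracy of `model` and `quantized` on a held-out set is the same kind of size-versus-accuracy measurement the paper reports for its FPGA-compiled classifier.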
RevDate: 2024-09-28
Design and Evaluation of Real-Time Data Storage and Signal Processing in a Long-Range Distributed Acoustic Sensing (DAS) Using Cloud-Based Services.
Sensors (Basel, Switzerland), 24(18): pii:s24185948.
In cloud-based Distributed Acoustic Sensing (DAS) sensor data management, we are confronted with two primary challenges. The first is the development of efficient storage mechanisms capable of handling the enormous volume of data generated by these sensors. To address it, we design and implement a pipeline system that efficiently sends the big data to DynamoDB, fully exploiting the low latency of the DynamoDB data storage system, for a benchmark DAS scheme performing continuous monitoring over a 100 km range at meter-scale spatial resolution. We employ the DynamoDB functionality of Amazon Web Services (AWS), which allows highly expandable storage capacity with access latency of a few tens of milliseconds. The different stages of DAS data handling are performed in a pipeline, and the scheme is optimized for high overall throughput with reduced latency, suitable for concurrent, real-time event extraction as well as minimal storage of raw and intermediate data. In addition, the scalability of the DynamoDB-based data storage scheme is evaluated for linear and nonlinear variations of the number of batches of access and a wide range of data sample sizes corresponding to sensing ranges of 1-110 km. The results show latencies of 40 ms per batch of access with low standard deviations of a few milliseconds, and the latency per sample decreases as the sample size increases, paving the way toward scalable, cloud-based data storage services integrating additional post-processing for more precise feature extraction. The technique greatly simplifies DAS data handling in key application areas requiring continuous, large-scale measurement schemes. The second challenge is that the processing of raw traces in a long-distance DAS for real-time monitoring requires the careful design of computational resources to guarantee the requisite dynamic performance. We therefore design a system for evaluating the performance of cloud computing systems for diverse computations on DAS data, aimed at unveiling valuable insights into the performance metrics and operational efficiencies of such computations, providing a deeper understanding of the system's performance, identifying potential bottlenecks, and suggesting areas for improvement. To achieve this, we employ the CloudSim framework. The analysis reveals that more capable virtual machines (VMs) significantly decrease the processing time, with performance influenced by the number of Processing Elements (PEs) and the Million Instructions Per Second (MIPS) rating. The results also reflect that, although a larger number of computations is required as the fiber length increases, with a subsequent increase in processing time, the overall speed of computation remains suitable for continuous real-time monitoring. We also see that VMs with lower processing speed and fewer CPUs have more inconsistent processing times than those with higher performance, while not incurring significantly higher prices. Additionally, the impact of VM parameters on computation time is explored, highlighting the importance of resource optimization in DAS system design for efficient performance. The study also observes a notable trend in processing time, showing a significant decrease for every additional 50,000 columns processed as the length of the fiber increases. This finding underscores the efficiency gains achieved with larger computational loads, indicating improved system performance and capacity utilization as the DAS system processes more extensive datasets.
Additional Links: PMID-39338693
@article {pmid39338693,
year = {2024},
author = {Nur, A and Muanenda, Y},
title = {Design and Evaluation of Real-Time Data Storage and Signal Processing in a Long-Range Distributed Acoustic Sensing (DAS) Using Cloud-Based Services.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {18},
pages = {},
doi = {10.3390/s24185948},
pmid = {39338693},
issn = {1424-8220},
}
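A minimal boto3 sketch of the ingestion pattern described above: batching DAS samples into DynamoDB and timing each batch. The table name, key schema, and record layout are assumptions, not the authors' design; running it requires AWS credentials and an existing table.

# Sketch: batched DAS ingestion into DynamoDB with boto3's batch_writer,
# which transparently chunks writes into 25-item BatchWriteItem requests.
import time
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("das-traces")   # assumed table: partition key "channel",
                                       # sort key "timestamp_ms"

def put_batch(channel, samples, t0_ms, dt_ms):
    """Write one batch of samples and return the batch latency in ms."""
    start = time.perf_counter()
    with table.batch_writer() as batch:
        for i, value in enumerate(samples):
            batch.put_item(Item={
                "channel": channel,
                "timestamp_ms": t0_ms + i * dt_ms,
                "amplitude": Decimal(str(value)),  # boto3 needs Decimal, not float
            })
    return (time.perf_counter() - start) * 1000

latency = put_batch("ch-0042", [0.12, 0.09, 0.15],
                    t0_ms=1700000000000, dt_ms=1)
print(f"batch latency: {latency:.1f} ms")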
RevDate: 2024-09-25
Energy and time-aware scheduling in diverse virtualized cloud computing environments using optimized self-attention progressive generative adversarial network.
Network (Bristol, England) [Epub ahead of print].
The rapid growth of cloud computing has led to the widespread adoption of heterogeneous virtualized environments, offering scalable and flexible resources to meet diverse user demands. However, the increasing complexity and variability in workload characteristics pose significant challenges in optimizing energy consumption. Many scheduling algorithms have been suggested to address this. This paper therefore proposes a self-attention-based progressive generative adversarial network optimized with the Dwarf Mongoose algorithm for Energy- and Deadline-Aware Scheduling in heterogeneous virtualized cloud computing (SAPGAN-DMA-DAS-HVCC). Here, a self-attention-based progressive generative adversarial network (SAPGAN) is proposed to schedule activities in a cloud environment with an objective function of makespan and energy consumption. The Dwarf Mongoose algorithm is then used to optimize the weight parameters of SAPGAN. The proposed SAPGAN-DMA-DAS-HVCC approach attains 32.77%, 34.83% and 35.76% higher right-skewed makespan and 31.52%, 33.28% and 29.14% lower cost when analysed against existing models, namely task scheduling in a heterogeneous cloud environment using the mean grey wolf optimization approach, the Energy and Performance Efficient Task Scheduling Algorithm for heterogeneous virtualized clouds, and energy- and makespan-aware scheduling of deadline-sensitive tasks in the cloud environment, respectively.
Additional Links: PMID-39320977
@article {pmid39320977,
year = {2024},
author = {Senthilkumar, G and Anandamurugan, S},
title = {Energy and time-aware scheduling in diverse virtualized cloud computing environments using optimized self-attention progressive generative adversarial network.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-20},
doi = {10.1080/0954898X.2024.2391401},
pmid = {39320977},
issn = {1361-6536},
}
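Whatever the search method, an energy- and deadline-aware scheduler must score candidate task-to-VM assignments against a combined makespan-and-energy objective. A toy sketch of such a weighted fitness function, with invented task lengths, VM speeds, and power draws (not the paper's SAPGAN model):

# Sketch: weighted makespan + energy objective for candidate schedules.
# All numbers are illustrative; lower fitness is better.
task_mi = [400, 250, 900, 150]          # task lengths (million instructions)
vm_mips = [500, 1000]                   # VM speeds (MIPS)
vm_watts = [20.0, 45.0]                 # VM active power draw (W)

def fitness(assignment, w=0.5):
    """assignment[i] = VM index for task i."""
    busy = [0.0] * len(vm_mips)
    energy = 0.0
    for task, vm in zip(task_mi, assignment):
        t = task / vm_mips[vm]          # execution time on that VM
        busy[vm] += t
        energy += vm_watts[vm] * t
    makespan = max(busy)                # finish time of the busiest VM
    return w * makespan + (1 - w) * energy

print(fitness([0, 0, 1, 1]))   # score one candidate schedule
print(fitness([1, 0, 1, 0]))   # ... and another, for comparison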
RevDate: 2024-09-24
Enhancing EEG Data Quality and Precision for Cloud-Based Clinical Applications: An Evaluation of the SLOG Framework.
Biomedical physics & engineering express [Epub ahead of print].
Automation is revamping our preprocessing pipelines and accelerating the delivery of personalized digital medicine. It improves efficiency, reduces costs, and allows clinicians to treat patients without significant delays. However, the influx of multimodal data highlights the need to protect sensitive information, such as clinical data, and to safeguard data fidelity. One of the neuroimaging modalities that produces large amounts of time-series data is electroencephalography (EEG). It captures neural dynamics in a task or resting brain state with high temporal resolution. EEG electrodes placed on the scalp acquire electrical activity from the brain. These electrical potentials attenuate as they cross multiple layers of brain tissue and fluid, yielding signals that are relatively weaker than noise, i.e., a low signal-to-noise ratio. EEG signals are further distorted by internal physiological artifacts, such as eye movements (EOG) or heartbeat (ECG), and by external noise, such as 50 Hz line noise. EOG artifacts, due to their proximity to the frontal brain regions, are particularly challenging to eliminate. Therefore, a widely used EOG rejection method, independent component analysis (ICA), demands manual inspection of the marked EOG components before they are rejected from the EEG data. We underscore the inaccuracy of automated ICA rejection and provide an auxiliary algorithm, Second Layer Inspection for EOG (SLOG), for the clinical environment. SLOG, based on spatial and temporal patterns of eye movements, re-examines the already marked EOG artifacts and confirms that no EEG-related activity is mistakenly eliminated in this artifact rejection step. SLOG achieved a 99% precision rate on the simulated dataset and 85% precision on the real EEG dataset. One of the primary considerations for cloud-based applications is operational cost, including computing power. Algorithms like SLOG allow us to maintain data fidelity and precision without overloading cloud platforms and maxing out budgets.
Additional Links: PMID-39315479
@article {pmid39315479,
year = {2024},
author = {Ghani, A and Heinrich, H and Brown, T and Schellhorn, K},
title = {Enhancing EEG Data Quality and Precision for Cloud-Based Clinical Applications: An Evaluation of the SLOG Framework.},
journal = {Biomedical physics & engineering express},
volume = {},
number = {},
pages = {},
doi = {10.1088/2057-1976/ad7e2d},
pmid = {39315479},
issn = {2057-1976},
}
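The workflow SLOG augments is the standard ICA-based EOG rejection available in, for example, MNE-Python. The sketch below runs automatic EOG marking and then adds a simple second-layer confirmation, correlating each marked component with the recorded EOG channel before exclusion. This mimics the idea of a second inspection layer but is not the authors' algorithm; the sample dataset (downloaded on first use) and the 0.7 threshold are assumptions.

# Sketch: ICA-based EOG rejection with a second confirmation layer.
import numpy as np
import mne

raw = mne.io.read_raw_fif(
    mne.datasets.sample.data_path() / "MEG" / "sample"
    / "sample_audvis_raw.fif", preload=True)
raw.filter(1.0, 40.0)

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)

# First layer: automatic marking of EOG-like components.
eog_indices, eog_scores = ica.find_bads_eog(raw)

# Second layer: confirm each marked component against the EOG channel
# before excluding it, so no EEG-related component is dropped by mistake.
sources = ica.get_sources(raw).get_data()
eog = raw.copy().pick("eog").get_data()[0]
confirmed = [i for i in eog_indices
             if abs(np.corrcoef(sources[i], eog)[0, 1]) > 0.7]

ica.exclude = confirmed
clean = ica.apply(raw.copy())   # reconstruct the data without EOG components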
RevDate: 2024-09-24
Hybrid computing framework security in dynamic offloading for IoT-enabled smart home system.
PeerJ. Computer science, 10:e2211.
In the distributed computing era, cloud computing has completely changed organizational operations by facilitating simple access to resources. However, the rapid development of the IoT has led to collaborative computing, which raises scalability and security challenges. To fully realize the potential of the Internet of Things (IoT) in smart home technologies, there is still a need for strong data security solutions, which are essential for dynamic offloading in conjunction with edge, fog, and cloud computing. This research on smart home challenges covers in-depth examinations of data security, privacy, processing speed, storage capacity restrictions, and analytics inside networked IoT devices. We introduce the Trusted IoT Big Data Analytics (TIBDA) framework as a comprehensive solution to reshape smart living. Our primary focus is mitigating pervasive data security and privacy issues. TIBDA incorporates robust trust mechanisms, prioritizing data privacy and reliability for secure processing and user information confidentiality within the smart home environment. We achieve this by employing a hybrid cryptosystem that combines Elliptic Curve Cryptography (ECC), Post Quantum Cryptography (PQC), and Blockchain technology (BCT) to protect user privacy and confidentiality. Additionally, we comprehensively compared four prominent Artificial Intelligence anomaly detection algorithms (Isolation Forest, Local Outlier Factor, One-Class SVM, and Elliptic Envelope). We utilized machine learning classification algorithms (random forest, k-nearest neighbors, support vector machines, linear discriminant analysis, and quadratic discriminant analysis) for detecting malicious and non-malicious activities in smart home systems. Furthermore, as the main part of the research, the TIBDA framework uses an artificial neural network (ANN) dynamic algorithm to design a hybrid computing system that integrates edge, fog, and cloud architecture and efficiently supports numerous users while processing data from IoT devices in real time. The analysis shows that TIBDA significantly outperforms comparable systems across various metrics. In terms of response time, TIBDA demonstrated a reduction of 10-20% compared to the other systems under varying user loads, device counts, and transaction volumes. Regarding security, TIBDA's AUC values were consistently higher by 5-15%, indicating superior protection against threats. Additionally, TIBDA exhibited the highest trustworthiness, with an uptime percentage 10-12% greater than its competitors. TIBDA's Isolation Forest algorithm achieved an accuracy of 99.30%, and the random forest algorithm achieved an accuracy of 94.70%, outperforming other methods by 8-11%. Furthermore, our ANN-based offloading decision-making model achieved a validation accuracy of 99% and reduced loss to 0.11, demonstrating significant improvements in resource utilization and system performance.
Additional Links: PMID-39314732
@article {pmid39314732,
year = {2024},
author = {Khan, S and Jiangbin, Z and Ullah, F and Pervez Akhter, M and Khan, S and Awwad, FA and Ismail, EAA},
title = {Hybrid computing framework security in dynamic offloading for IoT-enabled smart home system.},
journal = {PeerJ. Computer science},
volume = {10},
number = {},
pages = {e2211},
pmid = {39314732},
issn = {2376-5992},
}
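The four anomaly detectors the paper compares are all available in scikit-learn with a common fit_predict interface. A minimal sketch on synthetic data standing in for smart-home telemetry; the contamination setting and the data itself are assumptions:

# Sketch: comparing four anomaly detectors on the same feature matrix.
# Synthetic data only; +1 = inlier, -1 = outlier (sklearn convention).
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 4))     # benign device readings
attacks = rng.normal(6, 1, size=(25, 4))     # injected anomalies
X = np.vstack([normal, attacks])
y = np.array([1] * 500 + [-1] * 25)

detectors = {
    "IsolationForest": IsolationForest(contamination=0.05, random_state=0),
    "LocalOutlierFactor": LocalOutlierFactor(contamination=0.05),
    "OneClassSVM": OneClassSVM(nu=0.05),
    "EllipticEnvelope": EllipticEnvelope(contamination=0.05),
}

for name, det in detectors.items():
    pred = det.fit_predict(X)
    print(f"{name:20s} accuracy {(pred == y).mean():.3f}")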
RevDate: 2024-09-23
Long-term spatiotemporal mapping in lacustrine environment by remote sensing: Review with case study, challenges, and future directions.
Water research, 267:122457 pii:S0043-1354(24)01356-3 [Epub ahead of print].
Satellite remote sensing, unlike traditional ship-based sampling, possesses the advantage of revisit capability and provides over 40 years of data support for observing lake environments at local, regional, and global scales. In recent years, global freshwater and coastal waters have faced adverse environmental issues, including harmful phytoplankton blooms, eutrophication, and extreme temperatures. To comprehensively address the goal of 'reviewing the past, assessing the present, and predicting the future', research increasingly focuses on developing and producing algorithms and products for long-term and large-scale mapping. This paper provides a comprehensive review of related research, evaluating the current status, shortcomings, and future trends of remote sensing datasets, monitoring targets, technical methods, and data processing platforms. The analysis demonstrates that the transition to long-term spatiotemporal dynamic lake monitoring is thriving: (i) evolving from single data sources to satellite collaborative observations to keep a trade-off between temporal and spatial resolutions, (ii) shifting from single research targets to diversified and multidimensional objectives, (iii) progressing from empirical/mechanism models to machine/deep/transfer learning algorithms, (iv) moving from local processing to cloud-based platforms and parallel computing. Future directions include, but are not limited to: (i) establishing a global sampling data-sharing platform, (ii) developing precise atmospheric correction algorithms, (iii) building next-generation ocean color sensors and virtual constellation networks, (iv) introducing Interpretable Machine Learning (IML) and Explainable Artificial Intelligence (XAI) models, (v) integrating cloud computing, big data/model/computer, and Internet of Things (IoT) technologies, (vi) crossing disciplines with earth sciences, hydrology, computer science, and human geography, etc. In summary, this work offers valuable references and insights for academic research and government decision-making, which are crucial for enhancing the long-term tracking of the aquatic ecological environment and achieving the Sustainable Development Goals (SDGs).
Additional Links: PMID-39312829
@article {pmid39312829,
year = {2024},
author = {Lai, L and Liu, Y and Zhang, Y and Cao, Z and Yin, Y and Chen, X and Jin, J and Wu, S},
title = {Long-term spatiotemporal mapping in lacustrine environment by remote sensing: Review with case study, challenges, and future directions.},
journal = {Water research},
volume = {267},
number = {},
pages = {122457},
doi = {10.1016/j.watres.2024.122457},
pmid = {39312829},
issn = {1879-2448},
abstract = {Satellite remote sensing, unlike traditional ship-based sampling, possesses the advantage of revisit capabilities and provides over 40 years of data support for observing lake environments at local, regional, and global scales. In recent years, global freshwater and coastal waters have faced adverse environmental issues, including harmful phytoplankton blooms, eutrophication, and extreme temperatures. To comprehensively address the goal of 'reviewing the past, assessing the present, and predicting the future', research increasingly focuses on developing and producing algorithms and products for long-term and large-scale mapping. This paper provides a comprehensive review of related research, evaluating the current status, shortcomings, and future trends of remote sensing datasets, monitoring targets, technical methods, and data processing platforms. The analysis demonstrated that the long-term spatiotemporal dynamic lake monitoring transition is thriving: (i) evolving from single data sources to satellite collaborative observations to maintain a trade-off between temporal and spatial resolutions, (ii) shifting from single research targets to diversified and multidimensional objectives, (iii) progressing from empirical/mechanism models to machine/deep/transfer learning algorithms, (iv) moving from local processing to cloud-based platforms and parallel computing. Future directions include, but are not limited to: (i) establishing a global sampling data-sharing platform, (ii) developing precise atmospheric correction algorithms, (iii) building next-generation ocean color sensors and virtual constellation networks, (iv) introducing Interpretable Machine Learning (IML) and Explainable Artificial Intelligence (XAI) models, (v) integrating cloud computing, big data/model/computer, and Internet of Things (IoT) technologies, (vi) crossing disciplines with earth sciences, hydrology, computer science, human geography, etc. In summary, this work offers valuable references and insights for academic research and government decision-making, which are crucial for enhancing the long-term tracking of aquatic ecological environments and achieving the Sustainable Development Goals (SDGs).},
}
RevDate: 2024-09-23
A robust algorithm for authenticated health data access via blockchain and cloud computing.
PloS one, 19(9):e0307039 pii:PONE-D-24-09310.
In modern healthcare, providers increasingly use cloud services to store and share electronic medical records. However, traditional cloud hosting, which depends on intermediaries, poses risks to privacy and security, including inadequate control over access, data auditing, and tracking data origins. Additionally, current schemes face significant limitations such as scalability concerns, high computational overhead, practical implementation challenges, and issues with interoperability and data standardization. Unauthorized data access by cloud providers further exacerbates these concerns. Blockchain technology, known for its secure and decentralized nature, offers a solution by enabling secure data auditing in sharing systems. This research integrates blockchain into healthcare for efficient record management. We propose a blockchain-based method for secure EHR management, integrating Ciphertext-Policy Attribute-Based Encryption (CP-ABE) for fine-grained access control. The proposed algorithm combines blockchain and smart contracts with a cloud-based healthcare Service Management System (SMS) to ensure secure and accessible EHRs. Smart contracts automate key management, encryption, and decryption processes, enhancing data security and integrity. The blockchain ledger authenticates data transactions, while the cloud provides scalability. The SMS manages access requests, enhancing resource allocation and response times. A dual authentication system confirms patient keys before granting data access, with failed attempts leading to access revocation and incident logging. Our analyses show that this algorithm significantly improves the security and efficiency of health data exchanges. By combining blockchain's decentralized structure with the cloud's scalability, this approach strengthens EHR security protocols in modern healthcare settings.
Additional Links: PMID-39312513
@article {pmid39312513,
year = {2024},
author = {Shahzad, A and Chen, W and Shaheen, M and Zhang, Y and Ahmad, F},
title = {A robust algorithm for authenticated health data access via blockchain and cloud computing.},
journal = {PloS one},
volume = {19},
number = {9},
pages = {e0307039},
doi = {10.1371/journal.pone.0307039},
pmid = {39312513},
issn = {1932-6203},
abstract = {In modern healthcare, providers increasingly use cloud services to store and share electronic medical records. However, traditional cloud hosting, which depends on intermediaries, poses risks to privacy and security, including inadequate control over access, data auditing, and tracking data origins. Additionally, current schemes face significant limitations such as scalability concerns, high computational overhead, practical implementation challenges, and issues with interoperability and data standardization. Unauthorized data access by cloud providers further exacerbates these concerns. Blockchain technology, known for its secure and decentralized nature, offers a solution by enabling secure data auditing in sharing systems. This research integrates blockchain into healthcare for efficient record management. We propose a blockchain-based method for secure EHR management, integrating Ciphertext-Policy Attribute-Based Encryption (CP-ABE) for fine-grained access control. The proposed algorithm combines blockchain and smart contracts with a cloud-based healthcare Service Management System (SMS) to ensure secure and accessible EHRs. Smart contracts automate key management, encryption, and decryption processes, enhancing data security and integrity. The blockchain ledger authenticates data transactions, while the cloud provides scalability. The SMS manages access requests, enhancing resource allocation and response times. A dual authentication system confirms patient keys before granting data access, with failed attempts leading to access revocation and incident logging. Our analyses show that this algorithm significantly improves the security and efficiency of health data exchanges. By combining blockchain's decentralized structure with the cloud's scalability, this approach strengthens EHR security protocols in modern healthcare settings.},
}
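The dual-authentication step described in this entry can be sketched in a few lines of plain Python; the following is a hypothetical stand-in (no blockchain or CP-ABE) showing key confirmation before access, with revocation and incident logging on failure. The class and method names are invented for illustration.

# Minimal sketch (not the paper's implementation) of dual authentication:
# a request must present a valid patient key twice before record access.
import hashlib, hmac, secrets

class AccessGate:
    def __init__(self):
        self.keys = {}          # patient_id -> hashed key
        self.revoked = set()
        self.log = []

    def register(self, patient_id):
        key = secrets.token_hex(16)
        self.keys[patient_id] = hashlib.sha256(key.encode()).hexdigest()
        return key              # handed to the patient out of band

    def request_access(self, patient_id, key1, key2):
        if patient_id in self.revoked:
            return False
        stored = self.keys.get(patient_id)
        ok = stored is not None and all(
            hmac.compare_digest(stored, hashlib.sha256(k.encode()).hexdigest())
            for k in (key1, key2)
        )
        if not ok:
            self.revoked.add(patient_id)          # revoke on failed attempt
            self.log.append(("failed_attempt", patient_id))
        return ok

gate = AccessGate()
k = gate.register("patient-1")
print(gate.request_access("patient-1", k, k))         # True
print(gate.request_access("patient-1", k, "wrong"))   # False; revoked and logged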
RevDate: 2024-09-23
The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies.
Sociology of health & illness [Epub ahead of print].
Emerging technologies of artificial intelligence (AI) and automated decision-making (ADM) promise to advance many industries. Healthcare is a key locus for new developments, where operational improvements are magnified by the bigger-picture promise of improved care and outcomes for patients. Forming the zeitgeist of contemporary sociotechnical innovation in healthcare, media portrayals of these technologies can shape how they are implemented, experienced and understood across healthcare systems. This article identifies current applications of AI and ADM within Australian healthcare contexts and analyses how these technologies are being portrayed within news and industry media. It offers a categorisation of leading applications of AI and ADM: monitoring and tracking, data management and analysis, cloud computing, and robotics. Discussing how AI and ADM are depicted in relation to health and care practices, it examines the sense of promise that is enlivened in these representations. The article concludes by considering the implications of promissory discourses for how technologies are understood and integrated into practices and sites of healthcare.
Additional Links: PMID-39311476
@article {pmid39311476,
year = {2024},
author = {Watson, A and Wozniak-O'Connor, V},
title = {The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies.},
journal = {Sociology of health & illness},
volume = {},
number = {},
pages = {},
doi = {10.1111/1467-9566.13840},
pmid = {39311476},
issn = {1467-9566},
support = {CE200100005//Australian Research Council/ ; },
abstract = {Emerging technologies of artificial intelligence (AI) and automated decision-making (ADM) promise to advance many industries. Healthcare is a key locus for new developments, where operational improvements are magnified by the bigger-picture promise of improved care and outcomes for patients. Forming the zeitgeist of contemporary sociotechnical innovation in healthcare, media portrayals of these technologies can shape how they are implemented, experienced and understood across healthcare systems. This article identifies current applications of AI and ADM within Australian healthcare contexts and analyses how these technologies are being portrayed within news and industry media. It offers a categorisation of leading applications of AI and ADM: monitoring and tracking, data management and analysis, cloud computing, and robotics. Discussing how AI and ADM are depicted in relation to health and care practices, it examines the sense of promise that is enlivened in these representations. The article concludes by considering the implications of promissory discourses for how technologies are understood and integrated into practices and sites of healthcare.},
}
RevDate: 2024-09-23
Transforming Clinical Research: The Power of High-Throughput Omics Integration.
Proteomes, 12(3): pii:proteomes12030025.
High-throughput omics technologies have dramatically changed biological research, providing unprecedented insights into the complexity of living systems. This review presents a comprehensive examination of the current landscape of high-throughput omics pipelines, covering key technologies, data integration techniques and their diverse applications. It looks at advances in next-generation sequencing, mass spectrometry and microarray platforms and highlights their contribution to data volume and precision. In addition, this review looks at the critical role of bioinformatics tools and statistical methods in managing the large datasets generated by these technologies. By integrating multi-omics data, researchers can gain a holistic understanding of biological systems, leading to the identification of new biomarkers and therapeutic targets, particularly in complex diseases such as cancer. The review also looks at the integration of omics data into electronic health records (EHRs) and the potential for cloud computing and big data analytics to improve data storage, analysis and sharing. Despite significant advances, there are still challenges such as data complexity, technical limitations and ethical issues. Future directions include the development of more sophisticated computational tools and the application of advanced machine learning techniques, which are critical for addressing the complexity and heterogeneity of omics datasets. This review aims to serve as a valuable resource for researchers and practitioners, highlighting the transformative potential of high-throughput omics technologies in advancing personalized medicine and improving clinical outcomes.
Additional Links: PMID-39311198
@article {pmid39311198,
year = {2024},
author = {Vitorino, R},
title = {Transforming Clinical Research: The Power of High-Throughput Omics Integration.},
journal = {Proteomes},
volume = {12},
number = {3},
pages = {},
doi = {10.3390/proteomes12030025},
pmid = {39311198},
issn = {2227-7382},
abstract = {High-throughput omics technologies have dramatically changed biological research, providing unprecedented insights into the complexity of living systems. This review presents a comprehensive examination of the current landscape of high-throughput omics pipelines, covering key technologies, data integration techniques and their diverse applications. It looks at advances in next-generation sequencing, mass spectrometry and microarray platforms and highlights their contribution to data volume and precision. In addition, this review looks at the critical role of bioinformatics tools and statistical methods in managing the large datasets generated by these technologies. By integrating multi-omics data, researchers can gain a holistic understanding of biological systems, leading to the identification of new biomarkers and therapeutic targets, particularly in complex diseases such as cancer. The review also looks at the integration of omics data into electronic health records (EHRs) and the potential for cloud computing and big data analytics to improve data storage, analysis and sharing. Despite significant advances, there are still challenges such as data complexity, technical limitations and ethical issues. Future directions include the development of more sophisticated computational tools and the application of advanced machine learning techniques, which are critical for addressing the complexity and heterogeneity of omics datasets. This review aims to serve as a valuable resource for researchers and practitioners, highlighting the transformative potential of high-throughput omics technologies in advancing personalized medicine and improving clinical outcomes.},
}
RevDate: 2024-09-23
Securing the IoT-enabled smart healthcare system: A PUF-based resource-efficient authentication mechanism.
Heliyon, 10(18):e37577.
As the Internet of Things (IoT) continues its rapid expansion, cloud computing has become integral to various smart healthcare applications. However, the proliferation of digital health services raises significant concerns regarding security and data privacy, making the protection of sensitive medical information paramount. To effectively tackle these challenges, it is crucial to establish resilient network infrastructure and data storage systems capable of defending against malicious entities and permitting access exclusively to authorized users. This requires the deployment of a robust authentication mechanism, wherein medical IoT devices, users (such as doctors or nurses), and servers undergo registration with a trusted authority. The process entails users retrieving data from the cloud server, while IoT devices collect patient data. Before granting access to data retrieval or storage, the cloud server verifies the authenticity of both the user and the IoT device, ensuring secure and authorized interactions within the system. With millions of interconnected smart medical IoT devices autonomously gathering and analyzing vital patient data, the importance of robust security measures becomes increasingly evident. Standard security protocols are fundamental in fortifying smart healthcare applications against potential threats. To confront these issues, this paper introduces a secure and resource-efficient cloud-enabled authentication mechanism. Through empirical analysis, it is demonstrated that our authentication mechanism effectively reduces computational and communication overheads, thereby improving overall system efficiency. Furthermore, both informal and formal analyses affirm the mechanism's resilience against potential cyberattacks, highlighting its effectiveness in safeguarding smart healthcare applications.
Additional Links: PMID-39309907
@article {pmid39309907,
year = {2024},
author = {Alruwaili, O and Tanveer, M and Alotaibi, FM and Abdelfattah, W and Armghan, A and Alserhani, FM},
title = {Securing the IoT-enabled smart healthcare system: A PUF-based resource-efficient authentication mechanism.},
journal = {Heliyon},
volume = {10},
number = {18},
pages = {e37577},
pmid = {39309907},
issn = {2405-8440},
abstract = {As the Internet of Things (IoT) continues its rapid expansion, cloud computing has become integral to various smart healthcare applications. However, the proliferation of digital health services raises significant concerns regarding security and data privacy, making the protection of sensitive medical information paramount. To effectively tackle these challenges, it is crucial to establish resilient network infrastructure and data storage systems capable of defending against malicious entities and permitting access exclusively to authorized users. This requires the deployment of a robust authentication mechanism, wherein medical IoT devices, users (such as doctors or nurses), and servers undergo registration with a trusted authority. The process entails users retrieving data from the cloud server, while IoT devices collect patient data. Before granting access to data retrieval or storage, the cloud server verifies the authenticity of both the user and the IoT device, ensuring secure and authorized interactions within the system. With millions of interconnected smart medical IoT devices autonomously gathering and analyzing vital patient data, the importance of robust security measures becomes increasingly evident. Standard security protocols are fundamental in fortifying smart healthcare applications against potential threats. To confront these issues, this paper introduces a secure and resource-efficient cloud-enabled authentication mechanism. Through empirical analysis, it is demonstrated that our authentication mechanism effectively reduces computational and communication overheads, thereby improving overall system efficiency. Furthermore, both informal and formal analyses affirm the mechanism's resilience against potential cyberattacks, highlighting its effectiveness in safeguarding smart healthcare applications.},
}
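As a rough illustration of the challenge-response pattern behind PUF-based authentication, the sketch below simulates the PUF with an HMAC keyed by a device-unique secret. A real PUF derives its response from hardware physics, so this is an assumption-laden stand-in rather than the paper's mechanism.

# Hedged sketch of a PUF-style challenge-response authentication round.
import hmac, hashlib, secrets

DEVICE_SECRET = secrets.token_bytes(32)   # stands in for the physical PUF

def puf_response(challenge: bytes) -> bytes:
    # A keyed MAC substitutes for the device's physical response function.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Enrollment: a trusted authority records challenge/response pairs (CRPs).
challenge = secrets.token_bytes(16)
stored_response = puf_response(challenge)

# Authentication: the server replays the challenge; the device must
# reproduce the enrolled response to be accepted.
claimed = puf_response(challenge)
print("authenticated:", hmac.compare_digest(claimed, stored_response))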
RevDate: 2024-09-22
CmpDate: 2024-09-22
[Progress in application of machine learning in epidemiology].
Zhonghua liu xing bing xue za zhi = Zhonghua liuxingbingxue zazhi, 45(9):1321-1326.
Population-based health data collection and analysis are important in epidemiological research. In recent years, with the rapid development of big data, the Internet, and cloud computing, artificial intelligence has gradually attracted the attention of epidemiological researchers. More and more researchers are trying to use artificial intelligence algorithms for genome sequencing and medical image data mining, and for disease diagnosis, risk prediction, and other tasks. In recent years, machine learning, a branch of artificial intelligence, has been widely used in epidemiological research. This paper summarizes the key fields and progress in the application of machine learning in epidemiology, reviews the development history of machine learning, analyzes the classic cases and current challenges in its application in epidemiological research, and introduces the current application scenarios and future development trends of machine learning and artificial intelligence algorithms, for the better exploration of the epidemiological research value of massive medical health data in China.
Additional Links: PMID-39307708
@article {pmid39307708,
year = {2024},
author = {Mai, KT and Liu, XT and Lin, XY and Liu, SY and Zhao, CK and Du, JB},
title = {[Progress in application of machine learning in epidemiology].},
journal = {Zhonghua liu xing bing xue za zhi = Zhonghua liuxingbingxue zazhi},
volume = {45},
number = {9},
pages = {1321-1326},
doi = {10.3760/cma.j.cn112338-20240322-00148},
pmid = {39307708},
issn = {0254-6450},
support = {2021YFC2700705//National Key Research and Development Program of China/ ; 202310312014Z//Undergraduate Innovation and Entrepreneurship Training Program/ ; },
mesh = {*Machine Learning ; Humans ; China/epidemiology ; Artificial Intelligence ; Data Mining/methods ; Algorithms ; Big Data ; Epidemiology ; },
abstract = {Population-based health data collection and analysis are important in epidemiological research. In recent years, with the rapid development of big data, the Internet, and cloud computing, artificial intelligence has gradually attracted the attention of epidemiological researchers. More and more researchers are trying to use artificial intelligence algorithms for genome sequencing and medical image data mining, and for disease diagnosis, risk prediction, and other tasks. In recent years, machine learning, a branch of artificial intelligence, has been widely used in epidemiological research. This paper summarizes the key fields and progress in the application of machine learning in epidemiology, reviews the development history of machine learning, analyzes the classic cases and current challenges in its application in epidemiological research, and introduces the current application scenarios and future development trends of machine learning and artificial intelligence algorithms, for the better exploration of the epidemiological research value of massive medical health data in China.},
}
RevDate: 2024-09-22
Efficient deep reinforcement learning based task scheduler in multi cloud environment.
Scientific reports, 14(1):21850.
The task scheduling problem (TSP) is a major challenge in the cloud computing paradigm: the number of tasks arriving at a cloud application platform varies over time, and the tasks have variable lengths and runtime requirements. These tasks may be generated by various heterogeneous resources, and their arrival at the cloud console directly affects the performance of the cloud platform by increasing makespan, energy consumption, and resource costs. Traditional task scheduling algorithms cannot handle such complex workloads. Many authors have developed task scheduling algorithms using metaheuristic and hybrid approaches, but these yield only near-optimal solutions; the TSP remains a highly challenging and dynamic scenario because it is NP-hard. Therefore, to tackle the TSP and schedule tasks effectively in the cloud, we formulated an Adaptive Task Scheduler that segments all tasks arriving at the cloud console into subtasks and feeds them to a scheduler modeled with an Improved Asynchronous Advantage Actor-Critic algorithm (IA3C) to generate schedules. The scheduling process is carried out in two stages. In the first stage, all incoming tasks are segmented into subtasks; these subtasks are then grouped by size, execution time, and communication time and fed to the ATSIA3C scheduler. In the second stage, the scheduler checks these constraints and dispatches the subtasks onto virtual machines of suitable processing capacity residing in datacenters. The proposed ATSIA3C is simulated in CloudSim. Extensive simulations are conducted using both fabricated worklogs and real-time supercomputing worklogs. The proposed mechanism is evaluated against the baseline algorithms RATS-HM, AINN-BPSO, and MOABCQ. The results show that ATSIA3C outperforms existing task schedulers in a multi-cloud environment, improving makespan by 70.49%, resource cost by 77.42%, and energy consumption by 74.24%.
Additional Links: PMID-39300104
@article {pmid39300104,
year = {2024},
author = {Mangalampalli, S and Karri, GR and Ratnamani, MV and Mohanty, SN and Jabr, BA and Ali, YA and Ali, S and Abdullaeva, BS},
title = {Efficient deep reinforcement learning based task scheduler in multi cloud environment.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {21850},
pmid = {39300104},
issn = {2045-2322},
abstract = {The task scheduling problem (TSP) is a major challenge in the cloud computing paradigm: the number of tasks arriving at a cloud application platform varies over time, and the tasks have variable lengths and runtime requirements. These tasks may be generated by various heterogeneous resources, and their arrival at the cloud console directly affects the performance of the cloud platform by increasing makespan, energy consumption, and resource costs. Traditional task scheduling algorithms cannot handle such complex workloads. Many authors have developed task scheduling algorithms using metaheuristic and hybrid approaches, but these yield only near-optimal solutions; the TSP remains a highly challenging and dynamic scenario because it is NP-hard. Therefore, to tackle the TSP and schedule tasks effectively in the cloud, we formulated an Adaptive Task Scheduler that segments all tasks arriving at the cloud console into subtasks and feeds them to a scheduler modeled with an Improved Asynchronous Advantage Actor-Critic algorithm (IA3C) to generate schedules. The scheduling process is carried out in two stages. In the first stage, all incoming tasks are segmented into subtasks; these subtasks are then grouped by size, execution time, and communication time and fed to the ATSIA3C scheduler. In the second stage, the scheduler checks these constraints and dispatches the subtasks onto virtual machines of suitable processing capacity residing in datacenters. The proposed ATSIA3C is simulated in CloudSim. Extensive simulations are conducted using both fabricated worklogs and real-time supercomputing worklogs. The proposed mechanism is evaluated against the baseline algorithms RATS-HM, AINN-BPSO, and MOABCQ. The results show that ATSIA3C outperforms existing task schedulers in a multi-cloud environment, improving makespan by 70.49%, resource cost by 77.42%, and energy consumption by 74.24%.},
}
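For context on the makespan metric this paper optimizes, here is a minimal greedy baseline (not the IA3C scheduler): each task goes to the VM with the earliest projected finish time, longest tasks first. Task lengths and VM speeds are invented for illustration.

# Greedy longest-task-first assignment and the resulting makespan,
# a simple baseline for the scheduling objective described above.
task_lengths = [400, 120, 900, 300, 250, 700]   # million instructions (hypothetical)
vm_speeds = [100, 250, 500]                      # MIPS per VM (hypothetical)

finish = [0.0] * len(vm_speeds)                  # running finish time per VM
for length in sorted(task_lengths, reverse=True):
    # Projected finish time of this task on each VM.
    projected = [finish[i] + length / vm_speeds[i] for i in range(len(vm_speeds))]
    best = min(range(len(vm_speeds)), key=projected.__getitem__)
    finish[best] = projected[best]

print("per-VM finish times:", [round(f, 2) for f in finish])
print("makespan:", round(max(finish), 2))        # the metric the paper improves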
RevDate: 2024-09-19
Cloud-fog architecture-based control of smart island microgrid in master-slave organization using disturbance observer-based hybrid backstepping sliding mode controller.
Heliyon, 10(17):e37453.
Distributed control is an effective method for coordinating a microgrid with various components. In a smart microgrid, communication graph layouts are essential, since unexpectedly changing the topology could disrupt the operation of the distributed controllers and create an imbalance between production and load. Hence, reducing the data exchanged between units and the system operator is essential in order to reduce the transmitted data volume and the computational burden. For this purpose, an islanded microgrid with multiple agents using cloud-fog computing is proposed here, in order to reduce the computing burden on the central control unit as well as the data exchange among units. To balance production power and loads in a smart island with stable voltage/frequency, a hybrid backstepping sliding mode controller (BSMC) with a disturbance observer (DO) is suggested to control voltage/frequency and current in the microgrid (MG)-based master-slave organization. Therefore, this paper proposes a DO-driven BSMC for controlling the voltage/frequency and power of energy sources within a master-slave organization; in addition, the study proposes cloud-fog computing for enhancing performance, reducing transferred data volume, and processing information on time. In extensive simulations, the suggested controller shows a reduction in steady-state error, a fast response, and a total harmonic distortion (THD) below 0.33% for nonlinear and linear loads. The fog layer serves as a local processing level, so it reduces the data exchanged between cloud and fog nodes.
Additional Links: PMID-39296026
@article {pmid39296026,
year = {2024},
author = {Azizi, MA and Niknam, T and Dehghani, M and Jokar, H},
title = {Cloud-fog architecture-based control of smart island microgrid in master-slave organization using disturbance observer-based hybrid backstepping sliding mode controller.},
journal = {Heliyon},
volume = {10},
number = {17},
pages = {e37453},
pmid = {39296026},
issn = {2405-8440},
abstract = {Distributed control is an effective method for coordinating a microgrid with various components. In a smart microgrid, communication graph layouts are essential, since unexpectedly changing the topology could disrupt the operation of the distributed controllers and create an imbalance between production and load. Hence, reducing the data exchanged between units and the system operator is essential in order to reduce the transmitted data volume and the computational burden. For this purpose, an islanded microgrid with multiple agents using cloud-fog computing is proposed here, in order to reduce the computing burden on the central control unit as well as the data exchange among units. To balance production power and loads in a smart island with stable voltage/frequency, a hybrid backstepping sliding mode controller (BSMC) with a disturbance observer (DO) is suggested to control voltage/frequency and current in the microgrid (MG)-based master-slave organization. Therefore, this paper proposes a DO-driven BSMC for controlling the voltage/frequency and power of energy sources within a master-slave organization; in addition, the study proposes cloud-fog computing for enhancing performance, reducing transferred data volume, and processing information on time. In extensive simulations, the suggested controller shows a reduction in steady-state error, a fast response, and a total harmonic distortion (THD) below 0.33% for nonlinear and linear loads. The fog layer serves as a local processing level, so it reduces the data exchanged between cloud and fog nodes.},
}
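The sliding-mode element of such a controller can be illustrated with a toy first-order tracking loop: drive the voltage error onto a sliding surface with a switching term. The plant model, gains, and disturbance below are hypothetical and far simpler than the paper's hybrid backstepping design with a disturbance observer.

# Toy sliding-mode sketch: track a voltage reference under a disturbance.
import math

v, v_ref = 0.0, 1.0          # per-unit voltage and its reference
dt, k_s = 1e-3, 50.0         # step size and switching gain (assumed values)
for step in range(2000):
    s = v - v_ref                        # sliding surface: s = tracking error
    u = -k_s * math.tanh(s / 0.01)       # smoothed sign(s) to limit chattering
    disturbance = 0.2 * math.sin(0.01 * step)
    v += dt * (u + disturbance)          # simple integrator stands in for the plant
print("final tracking error:", abs(v - v_ref))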
RevDate: 2024-09-19
Primary care practitioner and patient perspectives on care following bariatric surgery: A meta-synthesis of qualitative research.
Obesity reviews : an official journal of the International Association for the Study of Obesity [Epub ahead of print].
Primary care is central to ongoing health care following bariatric surgery, and patients indicate a preference for receiving follow-up support from their primary care practitioner (PCP). This meta-synthesis investigates the perspectives of both PCPs and patients on post-bariatric surgery care provided by PCPs. The aim was to synthesize themes from qualitative research to recommend improvements in post-bariatric surgery clinical care in primary care settings. Systematic searches of Scopus, Medline, EMBASE, PsycINFO, the Cochrane Library, and Google Scholar resulted in the inclusion of eight papers in the meta-synthesis. Papers were critiqued using the Critical Appraisal Skills Program (CASP) and thematically coded in Quirkos Cloud. Seven themes were reached by author consensus: stigma and judgment; clinician barriers and facilitators; patient-related support needs; communication considerations; patient context or determinants; health care setting; and adapting to life after surgery. PCPs reported barriers including poor communication and guidance from bariatric surgery centers, limited knowledge and training in bariatric patient care, and patients who may have unrealistic outcome expectations and poor health literacy. Patients seek comprehensive care from their PCP; however, barriers hindering the provision of this care include adverse surgical outcomes, a poor relationship with their PCP, and limited and short-term follow-up care from the PCP. Insights from this meta-synthesis offer actionable recommendations for PCPs and bariatric surgery centers to enhance patient care immediately.
Additional Links: PMID-39295428
@article {pmid39295428,
year = {2024},
author = {Badorrek, S and Franklin, J and McBride, KA and Conway, L and Williams, K},
title = {Primary care practitioner and patient perspectives on care following bariatric surgery: A meta-synthesis of qualitative research.},
journal = {Obesity reviews : an official journal of the International Association for the Study of Obesity},
volume = {},
number = {},
pages = {e13829},
doi = {10.1111/obr.13829},
pmid = {39295428},
issn = {1467-789X},
support = {//University of Sydney/ ; },
abstract = {Primary care is central to ongoing health care following bariatric surgery, and patients indicate a preference for receiving follow-up support from their primary care practitioner (PCP). This meta-synthesis investigates the perspectives of both PCPs and patients on post-bariatric surgery care provided by PCPs. The aim was to synthesize themes from qualitative research to recommend improvements in post-bariatric surgery clinical care in primary care settings. Systematic searches of Scopus, Medline, EMBASE, PsycINFO, the Cochrane Library, and Google Scholar resulted in the inclusion of eight papers in the meta-synthesis. Papers were critiqued using the Critical Appraisal Skills Program (CASP) and thematically coded in Quirkos Cloud. Seven themes were reached by author consensus: stigma and judgment; clinician barriers and facilitators; patient-related support needs; communication considerations; patient context or determinants; health care setting; and adapting to life after surgery. PCPs reported barriers including poor communication and guidance from bariatric surgery centers, limited knowledge and training in bariatric patient care, and patients who may have unrealistic outcome expectations and poor health literacy. Patients seek comprehensive care from their PCP; however, barriers hindering the provision of this care include adverse surgical outcomes, a poor relationship with their PCP, and limited and short-term follow-up care from the PCP. Insights from this meta-synthesis offer actionable recommendations for PCPs and bariatric surgery centers to enhance patient care immediately.},
}
RevDate: 2024-09-18
Clinical and biobehavioral phenotypic assessments and data harmonization for the RE-JOIN research consortium: Recommendations for common data element selection.
Neurobiology of pain (Cambridge, Mass.), 16:100163.
BACKGROUND: The Restoring Joint Health and Function to Reduce Pain (RE-JOIN) Consortium is part of the Helping to End Addiction Long-term® (HEAL) Initiative. HEAL is an ambitious, NIH-wide initiative to speed scientific solutions to stem the national opioid public health crisis. The RE-JOIN consortium's over-arching goal is to define how chronic joint pain-mediating neurons innervate different articular and peri-articular tissues, with a focus on the knee and temporomandibular joints (TMJ) across species employing the latest neuroscience approaches. The aim of this manuscript is to elucidate the human data gathered by the RE-JOIN consortium, as well as to expound upon its underlying rationale and the methodologies and protocols for harmonization and standardization that have been instituted by the RE-JOIN Consortium.
METHODS: The consortium-wide human models working subgroup established the RE-JOIN minimal harmonized data elements that will be collected across all human studies and set the stage to develop parallel pre-clinical data collection standards. Data harmonization considerations included requirements from the HEAL program and recommendations from the consortium's researchers and experts on informatics, knowledge management, and data curation.
RESULTS: Multidisciplinary experts, including preclinical and clinical researchers as well as clinician-scientists, developed RE-JOIN's Minimal Human Data Standard with required domains and outcome measures to be collected across projects and institutions. The RE-JOIN minimal data standard will include HEAL Common Data Elements (CDEs) (e.g., standardized demographics and general pain, psychosocial, and functional measures) and RE-JOIN common data elements (R-CDE) (i.e., both general and joint-specific standardized and clinically important self-reported pain and function measures, as well as pressure pain thresholds as part of quantitative sensory testing). In addition, discretionary, site-specific measures will be collected by individual institutions (e.g., expanded quantitative sensory testing and gait biomechanical assessments), specific to the knee or TMJ. Research teams will submit datasets of standardized metadata to the RE-JOIN Data Coordinating Center (DCG) via a secure cloud-based central data repository and computing infrastructure for researchers to share and conduct analyses on data collected by or acquired for RE-JOIN. RE-JOIN datasets will have protected health information (PHI) removed and will be publicly available on the SPARC portal and accessible through the HEAL Data Ecosystem.
CONCLUSION: Data Harmonization efforts provide the multidisciplinary consortium with an opportunity to effectively collaborate across decentralized research teams, and data standardization sets the framework for efficient future analyses of RE-JOIN data collected by the consortium. The harmonized phenotypic information obtained will significantly enhance our understanding of the neurobiology of the pain-pathology relationships in humans, providing valuable insights for comparison with pre-clinical models.
Additional Links: PMID-39281853
@article {pmid39281853,
year = {2024},
author = {Cruz-Almeida, Y and Mehta, B and Haelterman, NA and Johnson, AJ and Heiting, C and Ernberg, M and Orange, D and Lotz, M and Boccanfuso, J and Smith, SB and Pela, M and Boline, J and Otero, M and Allen, K and Perez, D and Donnelly, C and Almarza, A and Olmer, M and Balkhi, H and Wagenaar, J and Martone, M and , },
title = {Clinical and biobehavioral phenotypic assessments and data harmonization for the RE-JOIN research consortium: Recommendations for common data element selection.},
journal = {Neurobiology of pain (Cambridge, Mass.)},
volume = {16},
number = {},
pages = {100163},
pmid = {39281853},
issn = {2452-073X},
abstract = {BACKGROUND: The Restoring Joint Health and Function to Reduce Pain (RE-JOIN) Consortium is part of the Helping to End Addiction Long-term® (HEAL) Initiative. HEAL is an ambitious, NIH-wide initiative to speed scientific solutions to stem the national opioid public health crisis. The RE-JOIN consortium's over-arching goal is to define how chronic joint pain-mediating neurons innervate different articular and peri-articular tissues, with a focus on the knee and temporomandibular joints (TMJ) across species employing the latest neuroscience approaches. The aim of this manuscript is to elucidate the human data gathered by the RE-JOIN consortium, as well as to expound upon its underlying rationale and the methodologies and protocols for harmonization and standardization that have been instituted by the RE-JOIN Consortium.
METHODS: The consortium-wide human models working subgroup established the RE-JOIN minimal harmonized data elements that will be collected across all human studies and set the stage to develop parallel pre-clinical data collection standards. Data harmonization considerations included requirements from the HEAL program and recommendations from the consortium's researchers and experts on informatics, knowledge management, and data curation.
RESULTS: Multidisciplinary experts, including preclinical and clinical researchers as well as clinician-scientists, developed RE-JOIN's Minimal Human Data Standard with required domains and outcome measures to be collected across projects and institutions. The RE-JOIN minimal data standard will include HEAL Common Data Elements (CDEs) (e.g., standardized demographics and general pain, psychosocial, and functional measures) and RE-JOIN common data elements (R-CDE) (i.e., both general and joint-specific standardized and clinically important self-reported pain and function measures, as well as pressure pain thresholds as part of quantitative sensory testing). In addition, discretionary, site-specific measures will be collected by individual institutions (e.g., expanded quantitative sensory testing and gait biomechanical assessments), specific to the knee or TMJ. Research teams will submit datasets of standardized metadata to the RE-JOIN Data Coordinating Center (DCG) via a secure cloud-based central data repository and computing infrastructure for researchers to share and conduct analyses on data collected by or acquired for RE-JOIN. RE-JOIN datasets will have protected health information (PHI) removed and will be publicly available on the SPARC portal and accessible through the HEAL Data Ecosystem.
CONCLUSION: Data Harmonization efforts provide the multidisciplinary consortium with an opportunity to effectively collaborate across decentralized research teams, and data standardization sets the framework for efficient future analyses of RE-JOIN data collected by the consortium. The harmonized phenotypic information obtained will significantly enhance our understanding of the neurobiology of the pain-pathology relationships in humans, providing valuable insights for comparison with pre-clinical models.},
}
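To illustrate what minimal harmonized data elements can look like in practice, here is a hedged pandas sketch that maps two hypothetical site exports onto a shared element list and applies a simple range check; the element names, codes, and values are invented and do not reflect the actual HEAL or RE-JOIN CDEs.

# Hypothetical harmonization of two site datasets onto shared elements.
import pandas as pd

MINIMAL_CDES = ["participant_id", "age", "pain_intensity_0_10", "joint"]

site_a = pd.DataFrame({"participant_id": ["A1"], "age": [54],
                       "pain_intensity_0_10": [6], "joint": ["knee"]})
site_b = pd.DataFrame({"subject": ["B7"], "age_years": [61],
                       "nrs_pain": [4], "joint": ["TMJ"]})

# Map each site's local column names onto the shared elements, then pool.
site_b = site_b.rename(columns={"subject": "participant_id",
                                "age_years": "age",
                                "nrs_pain": "pain_intensity_0_10"})
pooled = pd.concat([site_a, site_b], ignore_index=True)[MINIMAL_CDES]
assert pooled["pain_intensity_0_10"].between(0, 10).all()  # simple QC check
print(pooled)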
RevDate: 2024-09-17
CmpDate: 2024-09-15
Integrating meta-heuristic with named data networking for secure edge computing in IoT enabled healthcare monitoring system.
Scientific reports, 14(1):21532.
Advances in technology with the Internet of Things (IoT) are continuing the crucial task of accomplishing remote medical care observation, where effective and secure healthcare information retrieval is complex. However, IoT systems have restricted resources, so attaining effective and secure healthcare information acquisition is difficult. The idea of smart healthcare has developed in diverse regions, where small-scale implementations of medical facilities are evaluated. In IoT-aided medical devices, the security of the IoT systems and related information is highly essential; on the other hand, edge computing is a significant framework that rectifies their processing and computational issues. Edge computing is inexpensive, and it is a powerful framework for offering low-latency information assistance by enhancing the computation and transmission speed of IoT systems in the medical sector. The main intention of this work is to design a secure framework for edge computing in IoT-enabled healthcare systems using heuristic-based authentication and Named Data Networking (NDN). There are three layers in the proposed model. In the first layer, many IoT devices are connected together, and using cluster head formation, the patients transmit their data to the edge cloud layer. The edge cloud layer is responsible for storage and computing resources for rapidly caching and providing medical data. In the patient layer, a new heuristic-based sanitization algorithm called Revised Position of Cat Swarm Optimization (RPCSO) with NDN hides the sensitive data that should not be leaked to unauthorized users. This authentication procedure is adopted as a multi-objective key generation procedure considering constraints like hiding failure rate, information preservation rate, and degree of modification. Further, the data from the edge cloud layer is transferred to the user layer, where optimal key generation with NDN-based restoration is adopted, thus achieving efficient and secure medical data retrieval. The framework is evaluated quantitatively on diverse healthcare datasets from the University of California (UCI) and Kaggle repositories, and experimental analysis shows the superior performance of the proposed model in terms of latency and cost when compared to existing solutions. The proposed model is compared against existing algorithms such as Cat Swarm Optimization (CSO), the Osprey Optimization Algorithm (OOA), Mexican Axolotl Optimization (MAO), and the Single Candidate Optimizer (SCO). Similarly, cryptography methods such as Rivest-Shamir-Adleman (RSA), the Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), and Data Sanitization and Restoration (DSR) are applied and compared with RPCSO. The results are compared on the basis of the best, worst, mean, median, and standard deviation values. The proposed RPCSO outperforms all other models, with values of 0.018069361, 0.50564046, 0.112643119, 0.018069361, and 0.156968355 for dataset 1 and 0.283597992, 0.467442652, 0.32920734, 0.328581887, and 0.063687386 for dataset 2, respectively.
Additional Links: PMID-39278954
@article {pmid39278954,
year = {2024},
author = {Manogaran, N and Nandagopal, M and Abi, NE and Seerangan, K and Balusamy, B and Selvarajan, S},
title = {Integrating meta-heuristic with named data networking for secure edge computing in IoT enabled healthcare monitoring system.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {21532},
pmid = {39278954},
issn = {2045-2322},
mesh = {*Internet of Things ; *Computer Security ; Humans ; *Cloud Computing ; Heuristics ; Algorithms ; Delivery of Health Care ; Computer Communication Networks ; },
abstract = {Advances in technology with the Internet of Things (IoT) are continuing the crucial task of accomplishing remote medical care observation, where effective and secure healthcare information retrieval is complex. However, IoT systems have restricted resources, so attaining effective and secure healthcare information acquisition is difficult. The idea of smart healthcare has developed in diverse regions, where small-scale implementations of medical facilities are evaluated. In IoT-aided medical devices, the security of the IoT systems and related information is highly essential; on the other hand, edge computing is a significant framework that rectifies their processing and computational issues. Edge computing is inexpensive, and it is a powerful framework for offering low-latency information assistance by enhancing the computation and transmission speed of IoT systems in the medical sector. The main intention of this work is to design a secure framework for edge computing in IoT-enabled healthcare systems using heuristic-based authentication and Named Data Networking (NDN). There are three layers in the proposed model. In the first layer, many IoT devices are connected together, and using cluster head formation, the patients transmit their data to the edge cloud layer. The edge cloud layer is responsible for storage and computing resources for rapidly caching and providing medical data. In the patient layer, a new heuristic-based sanitization algorithm called Revised Position of Cat Swarm Optimization (RPCSO) with NDN hides the sensitive data that should not be leaked to unauthorized users. This authentication procedure is adopted as a multi-objective key generation procedure considering constraints like hiding failure rate, information preservation rate, and degree of modification. Further, the data from the edge cloud layer is transferred to the user layer, where optimal key generation with NDN-based restoration is adopted, thus achieving efficient and secure medical data retrieval. The framework is evaluated quantitatively on diverse healthcare datasets from the University of California (UCI) and Kaggle repositories, and experimental analysis shows the superior performance of the proposed model in terms of latency and cost when compared to existing solutions. The proposed model is compared against existing algorithms such as Cat Swarm Optimization (CSO), the Osprey Optimization Algorithm (OOA), Mexican Axolotl Optimization (MAO), and the Single Candidate Optimizer (SCO). Similarly, cryptography methods such as Rivest-Shamir-Adleman (RSA), the Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), and Data Sanitization and Restoration (DSR) are applied and compared with RPCSO. The results are compared on the basis of the best, worst, mean, median, and standard deviation values. The proposed RPCSO outperforms all other models, with values of 0.018069361, 0.50564046, 0.112643119, 0.018069361, and 0.156968355 for dataset 1 and 0.283597992, 0.467442652, 0.32920734, 0.328581887, and 0.063687386 for dataset 2, respectively.},
}
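The sanitize-and-restore idea in this entry (hiding sensitive fields so only an authorized key holder can recover them) can be sketched with a simple keyed XOR stream; this is an illustrative stand-in, not the paper's RPCSO-based key generation.

# Keyed masking sketch: sanitize() hides data, the same call restores it.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Expand the key into n pseudorandom bytes via counter-mode hashing.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def sanitize(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

restore = sanitize                      # XOR masking is its own inverse

record = b"patient:0042;glucose:5.8"
key = b"optimally-generated-key"        # stands in for the optimizer's output
masked = sanitize(record, key)
print(restore(masked, key) == record)   # True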
RevDate: 2024-09-14
Auto-Scaling Techniques in Cloud Computing: Issues and Research Directions.
Sensors (Basel, Switzerland), 24(17):.
In the dynamic world of cloud computing, auto-scaling stands as a beacon of efficiency, dynamically aligning resources with fluctuating demands. This paper presents a comprehensive review of auto-scaling techniques, highlighting significant advancements and persisting challenges in the field. First, we overview the fundamental principles and mechanisms of auto-scaling, including its role in improving cost efficiency, performance, and energy consumption in cloud services. We then discuss various strategies employed in auto-scaling, ranging from threshold-based rules and queuing theory to sophisticated machine learning and time series analysis approaches. After that, we explore the critical issues in auto-scaling practices and review several studies that demonstrate how these challenges can be addressed. We then conclude by offering insights into several promising research directions, emphasizing the development of predictive scaling mechanisms and the integration of advanced machine learning techniques to achieve more effective and efficient auto-scaling solutions.
Additional Links: PMID-39275461
@article {pmid39275461,
year = {2024},
author = {Alharthi, S and Alshamsi, A and Alseiari, A and Alwarafy, A},
title = {Auto-Scaling Techniques in Cloud Computing: Issues and Research Directions.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {17},
pages = {},
pmid = {39275461},
issn = {1424-8220},
support = {12T047//United Arab Emirates University/ ; },
abstract = {In the dynamic world of cloud computing, auto-scaling stands as a beacon of efficiency, dynamically aligning resources with fluctuating demands. This paper presents a comprehensive review of auto-scaling techniques, highlighting significant advancements and persisting challenges in the field. First, we overview the fundamental principles and mechanisms of auto-scaling, including its role in improving cost efficiency, performance, and energy consumption in cloud services. We then discuss various strategies employed in auto-scaling, ranging from threshold-based rules and queuing theory to sophisticated machine learning and time series analysis approaches. After that, we explore the critical issues in auto-scaling practices and review several studies that demonstrate how these challenges can be addressed. We then conclude by offering insights into several promising research directions, emphasizing the development of predictive scaling mechanisms and the integration of advanced machine learning techniques to achieve more effective and efficient auto-scaling solutions.},
}
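As a concrete example of the threshold-based rule family this review covers, the sketch below scales a pool out when per-instance utilization crosses an upper bound and in when it falls below a lower bound; the thresholds and demand trace are hypothetical.

# Minimal threshold-based auto-scaler over a utilization trace.
def autoscale(demand_trace, instances=2, lo=0.30, hi=0.70, min_n=1, max_n=10):
    history = []
    for demand in demand_trace:              # aggregate demand, in "cores"
        per_instance = demand / instances    # utilization per instance
        if per_instance > hi and instances < max_n:
            instances += 1                   # scale out
        elif per_instance < lo and instances > min_n:
            instances -= 1                   # scale in
        history.append(instances)
    return history

trace = [0.8, 1.4, 2.3, 3.1, 2.0, 0.9, 0.4]
print(autoscale(trace))                      # prints [2, 2, 3, 4, 4, 3, 2]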
RevDate: 2024-09-13
Development and evaluation of a training curriculum to engage researchers on accessing and analyzing the All of Us data.
Journal of the American Medical Informatics Association : JAMIA pii:7756403 [Epub ahead of print].
OBJECTIVE: The All of Us Evenings with Genetics (EwG) Research Program at Baylor College of Medicine (BCM), funded to engage research scholars to work with the All of Us data, developed a training curriculum for the Researcher Workbench, the platform to access and analyze All of Us data. All of Us EwG developed the curriculum so that it could teach scholars regardless of their skills and background in programming languages and cloud computing. All of Us EwG delivered this curriculum at the first annual All of Us EwG Faculty Summit in May 2022. The curriculum was evaluated both during and after the Faculty Summit so that it could be improved for future training.
MATERIALS AND METHODS: Surveys were administered to assess scholars' familiarity with the programming languages and computational tools required to use the Researcher Workbench. The curriculum was developed using backward design and was informed by the survey results, a review of available resources for training users on the Researcher Workbench, and All of Us EwG members' collective experience training students. The curriculum was evaluated using feedback surveys during the Faculty Summit as well as virtual meetings and emails following the Faculty Summit.
RESULTS: The evaluation results demonstrated the success of the curriculum and identified areas for improvement.
DISCUSSION AND CONCLUSION: The curriculum has been adapted and improved in response to evaluations and in response to changes to the All of Us data and infrastructure to train more researchers through this program and other scholarly programs.
Additional Links: PMID-39269931
@article {pmid39269931,
year = {2024},
author = {Coleman, JR and Baker, JN and Ketkar, S and Butler, AM and Williams, L and Hammonds-Odie, L and Atkinson, EG and Murray, DD and Lee, B and Worley, KC},
title = {Development and evaluation of a training curriculum to engage researchers on accessing and analyzing the All of Us data.},
journal = {Journal of the American Medical Informatics Association : JAMIA},
volume = {},
number = {},
pages = {},
doi = {10.1093/jamia/ocae240},
pmid = {39269931},
issn = {1527-974X},
support = {OT2 OD031932/NH/NIH HHS/United States ; 1 OT2 OD026549//Office of the Director: Regional Medical Centers/ ; HHSN 263201600085U//Federally Qualified Health Centers/ ; 5 U2C OD023196//Data and Research Center/ ; 1 U24 OD023163//Participant Technology Systems Center/ ; 3 OT2 OD023205//Communications and Engagement/ ; },
abstract = {OBJECTIVE: The All of Us Evenings with Genetics (EwG) Research Program at Baylor College of Medicine (BCM), funded to engage research scholars to work with the All of Us data, developed a training curriculum for the Researcher Workbench, the platform to access and analyze All of Us data. All of Us EwG developed the curriculum so that it could teach scholars regardless of their skills and background in programming languages and cloud computing. All of Us EwG delivered this curriculum at the first annual All of Us EwG Faculty Summit in May 2022. The curriculum was evaluated both during and after the Faculty Summit so that it could be improved for future training.
MATERIALS AND METHODS: Surveys were administered to assess scholars' familiarity with the programming languages and computational tools required to use the Researcher Workbench. The curriculum was developed using backward design and was informed by the survey results, a review of available resources for training users on the Researcher Workbench, and All of Us EwG members' collective experience training students. The curriculum was evaluated using feedback surveys during the Faculty Summit as well as virtual meetings and emails following the Faculty Summit.
RESULTS: The evaluation results demonstrated the success of the curriculum and identified areas for improvement.
DISCUSSION AND CONCLUSION: The curriculum has been adapted and improved in response to evaluations and in response to changes to the All of Us data and infrastructure to train more researchers through this program and other scholarly programs.},
}
RevDate: 2024-09-13
niimath and fslmaths: replication as a method to enhance popular neuroimaging tools.
Aperture neuro, 4:.
Neuroimaging involves the acquisition of extensive 3D images and 4D time series data to gain insights into brain structure and function. The analysis of such data necessitates both spatial and temporal processing. In this context, "fslmaths" has established itself as a foundational software tool within our field, facilitating domain-specific image processing. Here, we introduce "niimath," a clone of fslmaths. While the term "clone" often carries negative connotations, we illustrate the merits of replicating widely-used tools, touching on aspects of licensing, performance optimization, and portability. For instance, our work enables the popular functions of fslmaths to be disseminated in various forms, such as a high-performance compiled R package known as "imbibe", a Windows executable, and a WebAssembly plugin compatible with JavaScript. This versatility is demonstrated through our NiiVue live demo web page. This application allows 'edge computing' where image processing can be done with a zero-footprint tool that runs on any web device without requiring private data to be shared to the cloud. Furthermore, our efforts have contributed back to FSL, which has integrated the optimizations that we've developed. This synergy has enhanced the overall transparency, utility and efficiency of tools widely relied upon in the neuroimaging community.
Additional Links: PMID-39268148
@article {pmid39268148,
year = {2024},
author = {Rorden, C and Webster, M and Drake, C and Jenkinson, M and Clayden, JD and Li, N and Hanayik, T},
title = {niimath and fslmaths: replication as a method to enhance popular neuroimaging tools.},
journal = {Aperture neuro},
volume = {4},
number = {},
pages = {},
pmid = {39268148},
issn = {2957-3963},
abstract = {Neuroimaging involves the acquisition of extensive 3D images and 4D time series data to gain insights into brain structure and function. The analysis of such data necessitates both spatial and temporal processing. In this context, "fslmaths" has established itself as a foundational software tool within our field, facilitating domain-specific image processing. Here, we introduce "niimath," a clone of fslmaths. While the term "clone" often carries negative connotations, we illustrate the merits of replicating widely used tools, touching on aspects of licensing, performance optimization, and portability. For instance, our work enables the popular functions of fslmaths to be disseminated in various forms, such as a high-performance compiled R package known as "imbibe", a Windows executable, and a WebAssembly plugin compatible with JavaScript. This versatility is demonstrated through our NiiVue live demo web page. This application allows 'edge computing', where image processing can be done with a zero-footprint tool that runs on any web device without requiring private data to be shared with the cloud. Furthermore, our efforts have contributed back to FSL, which has integrated the optimizations that we've developed. This synergy has enhanced the overall transparency, utility, and efficiency of tools widely relied upon in the neuroimaging community.},
}
RevDate: 2024-09-12
CmpDate: 2024-09-12
Biofilm marker discovery with cloud-based dockerized metagenomics analysis of microbial communities.
Briefings in bioinformatics, 25(Supplement_1):.
In an environment, microbes often work in communities to achieve most of their essential functions, including the production of essential nutrients. Microbial biofilms are communities of microbes that attach to a nonliving or living surface by embedding themselves into a self-secreted matrix of extracellular polymeric substances. These communities work together to enhance their colonization of surfaces, produce essential nutrients, and achieve their essential functions for growth and survival. They often consist of diverse microbes including bacteria, viruses, and fungi. Biofilms play a critical role in influencing plant phenotypes and human microbial infections. Understanding how these biofilms impact plant health, human health, and the environment is important for analyzing genotype-phenotype-driven rule-of-life functions. Such fundamental knowledge can be used to precisely control the growth of biofilms on a given surface. Metagenomics is a powerful tool for analyzing biofilm genomes through function-based gene and protein sequence identification (functional metagenomics) and sequence-based function identification (sequence metagenomics). Metagenomic sequencing enables a comprehensive sampling of all genes in all organisms present within a biofilm sample. However, the complexity of biofilm metagenomic studies heightens the need to follow the Findable, Accessible, Interoperable, and Reusable (FAIR) Guiding Principles for scientific data management. This will ensure that scientific findings can be more easily validated by the research community. This study proposes a dockerized, self-learning bioinformatics workflow to increase the community adoption of metagenomics toolkits in a metagenomics and meta-transcriptomics investigation. Our biofilm metagenomics workflow self-learning module includes integrated learning resources with an interactive dockerized workflow. This module will allow learners to analyze resources that are beneficial for aggregating knowledge about biofilm marker genes, proteins, and metabolic pathways as they define the composition of specific microbial communities. Cloud and dockerized technology can allow novice learners, even those with minimal knowledge of computer science, to use complicated bioinformatics tools. Our cloud-based, dockerized workflow splits biofilm microbiome metagenomics analyses into four easy-to-follow submodules. A variety of tools are built into each submodule. As students navigate these submodules, they learn about each tool used to accomplish the task. The downstream analysis is conducted using processed data obtained from online resources or raw data processed via Nextflow pipelines. This analysis takes place within Vertex AI's Jupyter notebook instance with R and Python kernels. Subsequently, results are stored and visualized in Google Cloud Storage buckets, alleviating the computational burden on local resources. The result is a comprehensive tutorial that guides bioinformaticians of any skill level through the entire workflow. It enables them to comprehend and implement the necessary processes involved in this integrated workflow from start to finish. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement.
This module delivers learning materials on cloud-based, dockerized metagenomics analysis of microbial communities in an interactive format that uses appropriate cloud resources for data access and analyses.
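To make the storage step concrete, the following minimal sketch shows the pattern the abstract describes, writing a notebook-produced result to a Google Cloud Storage bucket with the google-cloud-storage Python client. The bucket, object, and file names are placeholders, and credentials are assumed to be configured in the environment.

    # Minimal sketch: persist an analysis result from a notebook session to a
    # Google Cloud Storage bucket. Bucket, object, and file names are
    # placeholders; default application credentials are assumed.
    from google.cloud import storage

    client = storage.Client()                      # uses default credentials
    bucket = client.bucket("my-metagenomics-bucket")
    blob = bucket.blob("results/marker_genes.tsv")
    blob.upload_from_filename("marker_genes.tsv")  # file produced upstream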
Additional Links: PMID-39266450
@article {pmid39266450,
year = {2024},
author = {Gnimpieba, EZ and Hartman, TW and Do, T and Zylla, J and Aryal, S and Haas, SJ and Agany, DDM and Gurung, BDS and Doe, V and Yosufzai, Z and Pan, D and Campbell, R and Huber, VC and Sani, R and Gadhamshetty, V and Lushbough, C},
title = {Biofilm marker discovery with cloud-based dockerized metagenomics analysis of microbial communities.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae429},
pmid = {39266450},
issn = {1477-4054},
support = {#1849206//National Science Foundation/ ; //Institutional Development Award/ ; /GM/NIGMS NIH HHS/United States ; P20GM103443/NH/NIH HHS/United States ; },
mesh = {*Biofilms/growth & development ; *Metagenomics/methods ; Microbiota/genetics ; Cloud Computing ; Humans ; Computational Biology/methods ; },
abstract = {In an environment, microbes often work in communities to achieve most of their essential functions, including the production of essential nutrients. Microbial biofilms are communities of microbes that attach to a nonliving or living surface by embedding themselves into a self-secreted matrix of extracellular polymeric substances. These communities work together to enhance their colonization of surfaces, produce essential nutrients, and achieve their essential functions for growth and survival. They often consist of diverse microbes including bacteria, viruses, and fungi. Biofilms play a critical role in influencing plant phenotypes and human microbial infections. Understanding how these biofilms impact plant health, human health, and the environment is important for analyzing genotype-phenotype-driven rule-of-life functions. Such fundamental knowledge can be used to precisely control the growth of biofilms on a given surface. Metagenomics is a powerful tool for analyzing biofilm genomes through function-based gene and protein sequence identification (functional metagenomics) and sequence-based function identification (sequence metagenomics). Metagenomic sequencing enables a comprehensive sampling of all genes in all organisms present within a biofilm sample. However, the complexity of biofilm metagenomic studies heightens the need to follow the Findable, Accessible, Interoperable, and Reusable (FAIR) Guiding Principles for scientific data management. This will ensure that scientific findings can be more easily validated by the research community. This study proposes a dockerized, self-learning bioinformatics workflow to increase the community adoption of metagenomics toolkits in a metagenomics and meta-transcriptomics investigation. Our biofilm metagenomics workflow self-learning module includes integrated learning resources with an interactive dockerized workflow. This module will allow learners to analyze resources that are beneficial for aggregating knowledge about biofilm marker genes, proteins, and metabolic pathways as they define the composition of specific microbial communities. Cloud and dockerized technology can allow novice learners, even those with minimal knowledge of computer science, to use complicated bioinformatics tools. Our cloud-based, dockerized workflow splits biofilm microbiome metagenomics analyses into four easy-to-follow submodules. A variety of tools are built into each submodule. As students navigate these submodules, they learn about each tool used to accomplish the task. The downstream analysis is conducted using processed data obtained from online resources or raw data processed via Nextflow pipelines. This analysis takes place within Vertex AI's Jupyter notebook instance with R and Python kernels. Subsequently, results are stored and visualized in Google Cloud Storage buckets, alleviating the computational burden on local resources. The result is a comprehensive tutorial that guides bioinformaticians of any skill level through the entire workflow. It enables them to comprehend and implement the necessary processes involved in this integrated workflow from start to finish. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement.
This module delivers learning materials on cloud-based, dockerized metagenomics analysis of microbial communities in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
RevDate: 2024-09-11
Linear symmetric self-selecting 14-bit kinetic molecular memristors.
Nature [Epub ahead of print].
Artificial Intelligence (AI) is the domain of large resource-intensive data centres that limit access to a small community of developers[1,2]. Neuromorphic hardware promises greatly improved space and energy efficiency for AI but is presently only capable of low-accuracy operations, such as inferencing in neural networks[3-5]. Core computing tasks of signal processing, neural network training and natural language processing demand far higher computing resolution, beyond that of individual neuromorphic circuit elements[6-8]. Here we introduce an analog molecular memristor based on a Ru-complex of an azo-aromatic ligand with 14-bit resolution. Precise kinetic control over a transition between two thermodynamically stable molecular electronic states facilitates 16,520 distinct analog conductance levels, which can be linearly and symmetrically updated or written individually in one time step, substantially simplifying the weight update procedure over existing neuromorphic platforms[3]. The circuit elements are unidirectional, facilitating a selector-less 64 × 64 crossbar-based dot-product engine that enables vector-matrix multiplication, including Fourier transform, in a single time step. We achieved a signal-to-noise ratio of more than 73 dB, a four-orders-of-magnitude improvement over state-of-the-art methods[9-11], while consuming 460× less energy than digital computers[12,13]. Accelerators leveraging these molecular crossbars could transform neuromorphic computing, extending it beyond niche applications and augmenting the core of digital electronics from the cloud to the edge[12,13].
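The crossbar dot-product idea can be pictured with a toy numerical model: weights are stored as conductances, input voltages drive the rows, and Kirchhoff's current law sums each column's currents, so the entire vector-matrix product emerges in a single step. The numpy sketch below is purely illustrative and does not model the authors' hardware or data.

    # Toy model of a crossbar dot-product engine: the column currents are the
    # vector-matrix product of the input voltages v and the conductance matrix
    # G (the stored weights), I = G^T v, computed in one step in hardware by
    # Ohm's and Kirchhoff's laws. Numbers are purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.uniform(0.0, 1.0, size=(64, 64))  # conductances on a 64 x 64 crossbar
    v = rng.uniform(-0.2, 0.2, size=64)       # input voltages on the 64 rows

    I = G.T @ v                 # one column current per output, all at once
    print(I.shape)              # (64,)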
Additional Links: PMID-39261726
@article {pmid39261726,
year = {2024},
author = {Sharma, D and Rath, SP and Kundu, B and Korkmaz, A and S, H and Thompson, D and Bhat, N and Goswami, S and Williams, RS and Goswami, S},
title = {Linear symmetric self-selecting 14-bit kinetic molecular memristors.},
journal = {Nature},
volume = {},
number = {},
pages = {},
pmid = {39261726},
issn = {1476-4687},
abstract = {Artificial Intelligence (AI) is the domain of large resource-intensive data centres that limit access to a small community of developers[1,2]. Neuromorphic hardware promises greatly improved space and energy efficiency for AI but is presently only capable of low-accuracy operations, such as inferencing in neural networks[3-5]. Core computing tasks of signal processing, neural network training and natural language processing demand far higher computing resolution, beyond that of individual neuromorphic circuit elements[6-8]. Here we introduce an analog molecular memristor based on a Ru-complex of an azo-aromatic ligand with 14-bit resolution. Precise kinetic control over a transition between two thermodynamically stable molecular electronic states facilitates 16,520 distinct analog conductance levels, which can be linearly and symmetrically updated or written individually in one time step, substantially simplifying the weight update procedure over existing neuromorphic platforms[3]. The circuit elements are unidirectional, facilitating a selector-less 64 × 64 crossbar-based dot-product engine that enables vector-matrix multiplication, including Fourier transform, in a single time step. We achieved a signal-to-noise ratio of more than 73 dB, a four-orders-of-magnitude improvement over state-of-the-art methods[9-11], while consuming 460× less energy than digital computers[12,13]. Accelerators leveraging these molecular crossbars could transform neuromorphic computing, extending it beyond niche applications and augmenting the core of digital electronics from the cloud to the edge[12,13].},
}
RevDate: 2024-09-11
Scalable spatiotemporal prediction with Bayesian neural fields.
Nature communications, 15(1):7942.
Spatiotemporal datasets, which consist of spatially-referenced time series, are ubiquitous in diverse applications, such as air pollution monitoring, disease tracking, and cloud-demand forecasting. As the scale of modern datasets increases, there is a growing need for statistical methods that are flexible enough to capture complex spatiotemporal dynamics and scalable enough to handle many observations. This article introduces the Bayesian Neural Field (BAYESNF), a domain-general statistical model that infers rich spatiotemporal probability distributions for data-analysis tasks including forecasting, interpolation, and variography. BAYESNF integrates a deep neural network architecture for high-capacity function estimation with hierarchical Bayesian inference for robust predictive uncertainty quantification. Evaluations against prominent baselines show that BAYESNF delivers improvements on prediction problems from climate and public health data containing tens to hundreds of thousands of measurements. Accompanying the paper is an open-source software package (https://github.com/google/bayesnf) that runs on GPU and TPU accelerators through the JAX machine learning platform.
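To convey the modeling idea without depending on the package's exact API, the toy sketch below maps (latitude, longitude, time) coordinates through a small ensemble of untrained MLPs and uses the ensemble spread as a crude stand-in for the calibrated uncertainty that BAYESNF obtains via hierarchical Bayesian inference. It is illustrative only; see the linked repository for the real implementation.

    # Toy "neural field": spatiotemporal coordinates -> prediction, with an
    # MLP ensemble whose spread serves as a crude uncertainty proxy.
    # Illustrative only; this is not the bayesnf package.
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(params, x):
        """Two-layer MLP: x -> tanh(x W1 + b1) W2 + b2."""
        (W1, b1), (W2, b2) = params
        return np.tanh(x @ W1 + b1) @ W2 + b2

    def init(d_in=3, d_hidden=32):
        return [(rng.normal(0, 1, (d_in, d_hidden)), np.zeros(d_hidden)),
                (rng.normal(0, 1, (d_hidden, 1)), np.zeros(1))]

    coords = np.array([[47.2, 9.5, 0.00],   # (lat, lon, scaled time) queries
                       [47.2, 9.5, 0.25]])
    ensemble = [init() for _ in range(16)]  # 16 random (untrained) fields
    preds = np.stack([mlp(p, coords) for p in ensemble])

    print("mean:", preds.mean(axis=0).ravel())
    print("std :", preds.std(axis=0).ravel())  # spread = uncertainty proxy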
Additional Links: PMID-39261468
@article {pmid39261468,
year = {2024},
author = {Saad, F and Burnim, J and Carroll, C and Patton, B and Köster, U and A Saurous, R and Hoffman, M},
title = {Scalable spatiotemporal prediction with Bayesian neural fields.},
journal = {Nature communications},
volume = {15},
number = {1},
pages = {7942},
pmid = {39261468},
issn = {2041-1723},
abstract = {Spatiotemporal datasets, which consist of spatially-referenced time series, are ubiquitous in diverse applications, such as air pollution monitoring, disease tracking, and cloud-demand forecasting. As the scale of modern datasets increases, there is a growing need for statistical methods that are flexible enough to capture complex spatiotemporal dynamics and scalable enough to handle many observations. This article introduces the Bayesian Neural Field (BAYESNF), a domain-general statistical model that infers rich spatiotemporal probability distributions for data-analysis tasks including forecasting, interpolation, and variography. BAYESNF integrates a deep neural network architecture for high-capacity function estimation with hierarchical Bayesian inference for robust predictive uncertainty quantification. Evaluations against prominent baselines show that BAYESNF delivers improvements on prediction problems from climate and public health data containing tens to hundreds of thousands of measurements. Accompanying the paper is an open-source software package (https://github.com/google/bayesnf) that runs on GPU and TPU accelerators through the JAX machine learning platform.},
}
RevDate: 2024-09-11
Sanctions and opportunities: Factors affecting China's high-tech SMEs adoption of artificial intelligence computing leasing business.
Heliyon, 10(16):e36620 pii:S2405-8440(24)12651-5.
Due to sanctions, more Chinese high-tech SMEs are turning to renting AI computing power from cloud service providers, so guidance is needed to help China's high-tech SMEs better develop AI applications through computing power leasing. Because traditional theories struggle to explain this new technology adoption behavior, this research combines and extends the TTF and UTAUT2 theories in an empirical study. A total of 387 questionnaires were received; after incomplete and invalid questionnaires were excluded, 281 valid questionnaires remained. The results indicate that SME innovativeness, perceived risk, performance expectancy, price value, and task-technology fit are all significantly related to usage, and that task-technology fit significantly moderates the other relationships. The results yield a variety of suggestions for how China's high-tech SMEs can better develop AI applications through computing power leasing in the context of sanctions. This study not only suggests ways to increase the competitiveness of SMEs by optimizing leasing services but also gives direction for investors' investment decisions. The findings are also applicable to the future large-scale application of China's domestic AI chips in computing power leasing scenarios.
Additional Links: PMID-39258203
@article {pmid39258203,
year = {2024},
author = {Sun, W and Tohirovich Dedahanov, A and Li, WP and Young Shin, H},
title = {Sanctions and opportunities: Factors affecting China's high-tech SMEs adoption of artificial intelligence computing leasing business.},
journal = {Heliyon},
volume = {10},
number = {16},
pages = {e36620},
doi = {10.1016/j.heliyon.2024.e36620},
pmid = {39258203},
issn = {2405-8440},
abstract = {Due to sanctions, more Chinese high-tech SMEs are turning to renting AI computing power from cloud service providers, so guidance is needed to help China's high-tech SMEs better develop AI applications through computing power leasing. Because traditional theories struggle to explain this new technology adoption behavior, this research combines and extends the TTF and UTAUT2 theories in an empirical study. A total of 387 questionnaires were received; after incomplete and invalid questionnaires were excluded, 281 valid questionnaires remained. The results indicate that SME innovativeness, perceived risk, performance expectancy, price value, and task-technology fit are all significantly related to usage, and that task-technology fit significantly moderates the other relationships. The results yield a variety of suggestions for how China's high-tech SMEs can better develop AI applications through computing power leasing in the context of sanctions. This study not only suggests ways to increase the competitiveness of SMEs by optimizing leasing services but also gives direction for investors' investment decisions. The findings are also applicable to the future large-scale application of China's domestic AI chips in computing power leasing scenarios.},
}
RevDate: 2024-09-10
An improved identity-based public audit protocol for cloud storage.
Heliyon, 10(16):e36273 pii:S2405-8440(24)12304-3.
With the rapid development of informatization, vast amounts of data are continuously generated and accumulated, leading to the emergence of cloud storage services. However, data stored in the cloud is beyond the control of users, posing various security risks. Cloud data auditing technology enables the integrity of data in the cloud to be checked without downloading it. Among such technologies, public auditing schemes have developed rapidly because they spare users additional auditing expenses. However, malicious third-party auditors can compromise data privacy. This paper proposes an improved identity-based cloud auditing scheme that can resist malicious auditors. Our construction builds on an existing identity-based public auditing scheme that uses blockchain to prevent malicious auditing; we found that scheme to be insecure because a malicious cloud server can forge authentication tags for outsourced data blocks, whereas our scheme is free of these security flaws. Through security proofs and performance analysis, we further demonstrate that our scheme is secure and efficient. Additionally, our scheme suits typical application scenarios.
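For intuition only, the sketch below shows the simplest form of tag-based integrity checking: the data owner MACs each block under a secret key before outsourcing, then spot-checks sampled blocks later. This is not the paper's identity-based, blockchain-assisted construction; real public-auditing schemes use homomorphic authentication tags precisely so that a third party can verify integrity without downloading the blocks.

    # Toy intuition for tag-based cloud auditing (NOT the paper's scheme):
    # the owner tags each block with an HMAC keyed by a secret; an audit
    # samples a block index and rechecks the tag the server must match.
    import hmac, hashlib, secrets

    key = secrets.token_bytes(32)
    blocks = [b"block-0 data", b"block-1 data", b"block-2 data"]
    tags = [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]      # computed before outsourcing

    # Audit: sample an index, fetch the block from the "server", recheck.
    i = secrets.randbelow(len(blocks))
    served = blocks[i]                          # what the server returns
    expected = hmac.new(key, str(i).encode() + served, hashlib.sha256).digest()
    print(f"block {i}:", "PASS" if hmac.compare_digest(tags[i], expected) else "FAIL")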
Additional Links: PMID-39253244
@article {pmid39253244,
year = {2024},
author = {Wang, H and Zhang, Y and Wang, XA and Yang, X},
title = {An improved identity-based public audit protocol for cloud storage.},
journal = {Heliyon},
volume = {10},
number = {16},
pages = {e36273},
doi = {10.1016/j.heliyon.2024.e36273},
pmid = {39253244},
issn = {2405-8440},
abstract = {With the rapid development of informatization, vast amounts of data are continuously generated and accumulated, leading to the emergence of cloud storage services. However, data stored in the cloud is beyond the control of users, posing various security risks. Cloud data auditing technology enables the integrity of data in the cloud to be checked without downloading it. Among such technologies, public auditing schemes have developed rapidly because they spare users additional auditing expenses. However, malicious third-party auditors can compromise data privacy. This paper proposes an improved identity-based cloud auditing scheme that can resist malicious auditors. Our construction builds on an existing identity-based public auditing scheme that uses blockchain to prevent malicious auditing; we found that scheme to be insecure because a malicious cloud server can forge authentication tags for outsourced data blocks, whereas our scheme is free of these security flaws. Through security proofs and performance analysis, we further demonstrate that our scheme is secure and efficient. Additionally, our scheme suits typical application scenarios.},
}
RevDate: 2024-09-10
Study of the patterns of variations in ice lakes and the factors influencing these changes on the southeastern Tibetan plateau.
Heliyon, 10(16):e36406 pii:S2405-8440(24)12437-1.
The ice lakes in the southeastern Qinghai-Tibet Plateau have exhibited a pronounced expansion against the backdrop of global warming, consequently amplifying the local risk of ice lake outburst disasters. However, surveys of ice lake changes in the entire region have consistently been incomplete due to the prevalent high cloud density. On the basis of Landsat remote sensing images and the Google Earth Engine (GEE) cloud computing platform, in this study a fully convolutional segmentation algorithm is utilized to accurately and comprehensively map the regional distribution of ice lakes in southeastern Tibet at consistent time intervals in 1993, 2008, and 2023. Furthermore, the formation, distribution, and dynamic changes in these ice lakes are investigated. The numbers of ice lakes discovered in 1993, 2008, and 2023 were 2520, 3198, and 3877, respectively. These lakes covered areas of approximately 337.64 ± 36.86 km², 363.92 ± 40.90 km², and 395.74 ± 22.72 km², respectively. These ice lakes are located primarily between altitudes of 4442 m and 4909 m. The total area experienced an annual growth rate of approximately 0.57% from 1993 to 2023. In the present study, the long-term variations in ice lakes in each district and county are examined. These findings indicate that between 1993 and 2023, the expansion of ice lakes was more pronounced in regions with a large number of marine glaciers. Notably, Basu County presented the highest annual growth rate in the number of ice lakes, at 6.23%, followed by Bomi County, at 4.28%, and finally, Zayul County, at 2.94%. The accelerated shrinkage of marine glaciers induced by global warming is the primary driver behind the expansion of ice lakes. The results obtained from this research will enhance our overall understanding of the complex dynamics and mechanisms that govern the formation of ice lakes while also offering valuable perspectives on the potential risks linked to their expansion in this particular area.
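The Earth Engine side of such a workflow can be sketched with the GEE Python API: build a cloud-filtered Landsat composite over the study region and period, then map water. The snippet below uses an NDWI threshold as a simple stand-in for the paper's fully convolutional segmentation; the region, dates, and threshold are placeholders, and an authenticated Earth Engine account is assumed.

    # Hedged sketch of the GEE pattern: filter Landsat 8 Collection 2 surface
    # reflectance over a placeholder region/period, reduce to a median
    # composite, and threshold NDWI as a crude water mask (a stand-in for the
    # study's fully convolutional segmentation).
    import ee

    ee.Initialize()
    region = ee.Geometry.Rectangle([94.0, 29.0, 97.0, 31.0])  # placeholder bbox

    composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                 .filterBounds(region)
                 .filterDate("2023-06-01", "2023-10-01")
                 .filter(ee.Filter.lt("CLOUD_COVER", 30))
                 .median())

    # NDWI = (green - NIR) / (green + NIR); open water scores high.
    ndwi = composite.normalizedDifference(["SR_B3", "SR_B5"]).rename("ndwi")
    lakes = ndwi.gt(0.1).selfMask()  # crude lake mask at a fixed threshold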
Additional Links: PMID-39253170
@article {pmid39253170,
year = {2024},
author = {Mingwei, YU and Feng, LI and Yonggang, GUO and Libin, SU and Deshun, QIN},
title = {Study of the patterns of variations in ice lakes and the factors influencing these changes on the southeastern Tibetan plateau.},
journal = {Heliyon},
volume = {10},
number = {16},
pages = {e36406},
doi = {10.1016/j.heliyon.2024.e36406},
pmid = {39253170},
issn = {2405-8440},
abstract = {The ice lakes in the southeastern Qinghai-Tibet Plateau have exhibited a pronounced expansion against the backdrop of global warming, consequently amplifying the local risk of ice lake outburst disasters. However, surveys of ice lake changes in the entire region have consistently been incomplete due to the prevalent high cloud density. On the basis of Landsat remote sensing images and the Google Earth Engine (GEE) cloud computing platform, in this study a fully convolutional segmentation algorithm is utilized to accurately and comprehensively map the regional distribution of ice lakes in southeastern Tibet at consistent time intervals in 1993, 2008, and 2023. Furthermore, the formation, distribution, and dynamic changes in these ice lakes are investigated. The numbers of ice lakes discovered in 1993, 2008, and 2023 were 2520, 3198, and 3877, respectively. These lakes covered areas of approximately 337.64 ± 36.86 km², 363.92 ± 40.90 km², and 395.74 ± 22.72 km², respectively. These ice lakes are located primarily between altitudes of 4442 m and 4909 m. The total area experienced an annual growth rate of approximately 0.57% from 1993 to 2023. In the present study, the long-term variations in ice lakes in each district and county are examined. These findings indicate that between 1993 and 2023, the expansion of ice lakes was more pronounced in regions with a large number of marine glaciers. Notably, Basu County presented the highest annual growth rate in the number of ice lakes, at 6.23%, followed by Bomi County, at 4.28%, and finally, Zayul County, at 2.94%. The accelerated shrinkage of marine glaciers induced by global warming is the primary driver behind the expansion of ice lakes. The results obtained from this research will enhance our overall understanding of the complex dynamics and mechanisms that govern the formation of ice lakes while also offering valuable perspectives on the potential risks linked to their expansion in this particular area.},
}
RevDate: 2024-09-10
CmpDate: 2024-09-10
Enhancing rural healthcare through internet-based remote collaborative outpatient services: A comprehensive evaluation in Changzhi, Shanxi Province.
Medicine, 103(36):e39614.
BACKGROUND: The advancement of digital technology, particularly telemedicine, has become crucial in improving healthcare access in rural areas. By integrating cloud computing and mHealth technologies, Internet-based Collaborative Outpatient Clinics offer a promising solution to overcome the limitations of traditional healthcare delivery in underserved communities.
METHODS: A trial was conducted in 4 counties of Changzhi City in Shanxi Province, China. The system extended to 495 rural communities and served over 5000 rural residents. Deep learning algorithms were employed to analyze medical data patterns to increase the accuracy of diagnoses and the quality of personalized treatment recommendations.
RESULTS: After the implementation of the system, there was a significant improvement in rural residents' satisfaction with medical services; the accuracy of medical consultations increased by 30%, and the convenience of medical access improved by 50%. There was also a notable enhancement in overall health management. Satisfaction rates among healthcare professionals and rural inhabitants were over 90% and 85%, respectively, indicating that the system has had a significant positive impact on the quality of healthcare services.
CONCLUSION: The study confirms the feasibility of implementing telemedicine services in rural areas and offers evidence and an operational framework for promoting innovative healthcare models on a large scale.
Additional Links: PMID-39252255
@article {pmid39252255,
year = {2024},
author = {Zhao, H and Zhang, Z and Tang, J},
title = {Enhancing rural healthcare through internet-based remote collaborative outpatient services: A comprehensive evaluation in Changzhi, Shanxi Province.},
journal = {Medicine},
volume = {103},
number = {36},
pages = {e39614},
doi = {10.1097/MD.0000000000039614},
pmid = {39252255},
issn = {1536-5964},
support = {HPYJ202202//Heping Hospital Affiliated to Changzhi Medical College Faculty Research Fund/ ; },
mesh = {Humans ; China ; *Rural Health Services/organization & administration ; *Telemedicine ; *Internet ; Male ; Female ; *Patient Satisfaction ; Adult ; Middle Aged ; Health Services Accessibility ; Ambulatory Care/methods/organization & administration ; Rural Population ; Aged ; Young Adult ; Adolescent ; },
abstract = {BACKGROUND: The advancement of digital technology, particularly telemedicine, has become crucial in improving healthcare access in rural areas. By integrating cloud computing and mHealth technologies, Internet-based Collaborative Outpatient Clinics offer a promising solution to overcome the limitations of traditional healthcare delivery in underserved communities.
METHODS: A trial was conducted in 4 counties of Changzhi City in Shanxi Province, China. The system extended to 495 rural communities and served over 5000 rural residents. Deep learning algorithms were employed to analyze medical data patterns to increase the accuracy of diagnoses and the quality of personalized treatment recommendations.
RESULTS: After the implementation of the system, there was a significant improvement in rural residents' satisfaction with medical services; the accuracy of medical consultations increased by 30%, and the convenience of medical access improved by 50%. There was also a notable enhancement in overall health management. Satisfaction rates among healthcare professionals and rural inhabitants were over 90% and 85%, respectively, indicating that the system has had a significant positive impact on the quality of healthcare services.
CONCLUSION: The study confirms the feasibility of implementing telemedicine services in rural areas and offers evidence and an operational framework for promoting innovative healthcare models on a large scale.},
}
RevDate: 2024-09-10
Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service.
Computing and software for big science, 8(1):17.
Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables both high coprocessor usage and the portability to run workflows on different types of coprocessors.
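At its simplest, the as-a-service pattern reduces to a network round trip: the CPU-side workflow serializes its inputs, posts them to a remote GPU server, and resumes with the returned predictions. The sketch below uses a hypothetical HTTP endpoint and payload format, not the actual SONIC/Triton interface.

    # Generic sketch of ML inference as a service (hypothetical endpoint and
    # payload, not the actual SONIC/Triton interface): the CPU workflow ships
    # inputs to a remote coprocessor server and waits only for the round trip.
    import json
    import urllib.request

    import numpy as np

    features = np.random.rand(1, 128).astype(np.float32)  # placeholder inputs
    payload = json.dumps({"model": "jet_tagger",          # hypothetical model
                          "inputs": features.tolist()})

    req = urllib.request.Request(
        "http://gpu-server.example:8000/v2/infer",        # hypothetical URL
        data=payload.encode(), headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        outputs = json.loads(resp.read())["outputs"]      # predictions on CPU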
Additional Links: PMID-39248308
@article {pmid39248308,
year = {2024},
author = {, and Hayrapetyan, A and Tumasyan, A and Adam, W and Andrejkovic, JW and Bergauer, T and Chatterjee, S and Damanakis, K and Dragicevic, M and Hussain, PS and Jeitler, M and Krammer, N and Li, A and Liko, D and Mikulec, I and Schieck, J and Schöfbeck, R and Schwarz, D and Sonawane, M and Templ, S and Waltenberger, W and Wulz, CE and Darwish, MR and Janssen, T and Mechelen, PV and Bols, ES and D'Hondt, J and Dansana, S and De Moor, A and Delcourt, M and Faham, HE and Lowette, S and Makarenko, I and Müller, D and Sahasransu, AR and Tavernier, S and Tytgat, M and Onsem, GPV and Putte, SV and Vannerom, D and Clerbaux, B and Das, AK and De Lentdecker, G and Favart, L and Gianneios, P and Hohov, D and Jaramillo, J and Khalilzadeh, A and Khan, FA and Lee, K and Mahdavikhorrami, M and Malara, A and Paredes, S and Thomas, L and Bemden, MV and Velde, CV and Vanlaer, P and De Coen, M and Dobur, D and Hong, Y and Knolle, J and Lambrecht, L and Mestdach, G and Amarilo, KM and Rendón, C and Samalan, A and Skovpen, K and Bossche, NVD and Linden, JV and Wezenbeek, L and Benecke, A and Bethani, A and Bruno, G and Caputo, C and Delaere, C and Donertas, IS and Giammanco, A and Jaffel, K and Jain, S and Lemaitre, V and Lidrych, J and Mastrapasqua, P and Mondal, K and Tran, TT and Wertz, S and Alves, GA and Coelho, E and Hensel, C and De Oliveira, TM and Moraes, A and Teles, PR and Soeiro, M and Júnior, WLA and Pereira, MAG and Filho, MBF and Malbouisson, HB and Carvalho, W and Chinellato, J and Da Costa, EM and Da Silveira, GG and De Jesus Damiao, D and De Souza, SF and De Souza, RG and Martins, J and Herrera, CM and Mundim, L and Nogima, H and Pinheiro, JP and Santoro, A and Sznajder, A and Thiel, M and Pereira, AV and Bernardes, CA and Calligaris, L and Tomei, TRFP and Gregores, EM and Mercadante, PG and Novaes, SF and Orzari, B and Padula, SS and Aleksandrov, A and Antchev, G and Hadjiiska, R and Iaydjiev, P and Misheva, M and Shopova, M and Sultanov, G and Dimitrov, A and Litov, L and Pavlov, B and Petkov, P and Petrov, A and Shumka, E and Keshri, S and Thakur, S and Cheng, T and Javaid, T and Yuan, L and Hu, Z and Liu, J and Yi, K and Chen, GM and Chen, HS and Chen, M and Iemmi, F and Jiang, CH and Kapoor, A and Liao, H and Liu, ZA and Sharma, R and Song, JN and Tao, J and Wang, C and Wang, J and Wang, Z and Zhang, H and Agapitos, A and Ban, Y and Levin, A and Li, C and Li, Q and Mao, Y and Qian, SJ and Sun, X and Wang, D and Yang, H and Zhang, L and Zhou, C and You, Z and Lu, N and Bauer, G and Gao, X and Leggat, D and Okawa, H and Lin, Z and Lu, C and Xiao, M and Avila, C and Trujillo, DAB and Cabrera, A and Florez, C and Fraga, J and Vega, JAR and Guisao, JM and Ramirez, F and Rodriguez, M and Alvarez, JDR and Giljanovic, D and Godinovic, N and Lelas, D and Sculac, A and Kovac, M and Sculac, T and Bargassa, P and Brigljevic, V and Chitroda, BK and Ferencek, D and Mishra, S and Starodumov, A and Susa, T and Attikis, A and Christoforou, K and Konstantinou, S and Mousa, J and Nicolaou, C and Ptochos, F and Razis, PA and Rykaczewski, H and Saka, H and Stepennov, A and Finger, M and Finger, M and Kveton, A and Ayala, E and Jarrin, EC and Abdelalim, AA and Salama, E and Mahmoud, MA and Mohammed, Y and Ehataht, K and Kadastik, M and Lange, T and Nandan, S and Nielsen, C and Pata, J and Raidal, M and Tani, L and Veelken, C and Kirschenmann, H and Osterberg, K and Voutilainen, M and Bharthuar, S and Brücken, E and Garcia, F and Kallonen, KTS and Kinnunen, R and Lampén, T and Lassila-Perini, K and 
Lehti, S and Lindén, T and Martikainen, L and Myllymäki, M and Rantanen, MM and Siikonen, H and Tuominen, E and Tuominiemi, J and Luukka, P and Petrow, H and Besancon, M and Couderc, F and Dejardin, M and Denegri, D and Faure, JL and Ferri, F and Ganjour, S and Gras, P and de Monchenault, GH and Lohezic, V and Malcles, J and Rander, J and Rosowsky, A and Sahin, MÖ and Savoy-Navarro, A and Simkina, P and Titov, M and Tornago, M and Barrera, CB and Beaudette, F and Perraguin, AB and Busson, P and Cappati, A and Charlot, C and Chiusi, M and Damas, F and Davignon, O and De Wit, A and Alves, BAFS and Ghosh, S and Gilbert, A and de Cassagnac, RG and Hakimi, A and Harikrishnan, B and Kalipoliti, L and Liu, G and Motta, J and Nguyen, M and Ochando, C and Portales, L and Salerno, R and Sauvan, JB and Sirois, Y and Tarabini, A and Vernazza, E and Zabi, A and Zghiche, A and Agram, JL and Andrea, J and Apparu, D and Bloch, D and Brom, JM and Chabert, EC and Collard, C and Falke, S and Goerlach, U and Grimault, C and Haeberle, R and Bihan, AL and Meena, M and Saha, G and Sessini, MA and Hove, PV and Beauceron, S and Blancon, B and Boudoul, G and Chanon, N and Choi, J and Contardo, D and Depasse, P and Dozen, C and Mamouni, HE and Fay, J and Gascon, S and Gouzevitch, M and Greenberg, C and Grenier, G and Ille, B and Laktineh, IB and Lethuillier, M and Mirabito, L and Perries, S and Purohit, A and Donckt, MV and Verdier, P and Xiao, J and Bagaturia, I and Lomidze, I and Tsamalaidze, Z and Botta, V and Feld, L and Klein, K and Lipinski, M and Meuser, D and Pauls, A and Röwert, N and Teroerde, M and Diekmann, S and Dodonova, A and Eich, N and Eliseev, D and Engelke, F and Erdmann, J and Erdmann, M and Fackeldey, P and Fischer, B and Hebbeker, T and Hoepfner, K and Ivone, F and Jung, A and Lee, MY and Mausolf, F and Merschmeyer, M and Meyer, A and Mukherjee, S and Noll, D and Nowotny, F and Pozdnyakov, A and Rath, Y and Redjeb, W and Rehm, F and Reithler, H and Sarkar, U and Sarkisovi, V and Schmidt, A and Sharma, A and Spah, JL and Stein, A and Da Silva De Araujo, FT and Vigilante, L and Wiedenbeck, S and Zaleski, S and Dziwok, C and Flügge, G and Ahmad, WH and Kress, T and Nowack, A and Pooth, O and Stahl, A and Ziemons, T and Zotz, A and Petersen, HA and Martin, MA and Alimena, J and Amoroso, S and An, Y and Baxter, S and Bayatmakou, M and Gonzalez, HB and Behnke, O and Belvedere, A and Bhattacharya, S and Blekman, F and Borras, K and Campbell, A and Cardini, A and Cheng, C and Colombina, F and Rodríguez, SC and Silva, GC and De Silva, M and Eckerlin, G and Eckstein, D and Banos, LIE and Filatov, O and Gallo, E and Geiser, A and Giraldi, A and Guglielmi, V and Guthoff, M and Hinzmann, A and Jafari, A and Jeppe, L and Jomhari, NZ and Kaech, B and Kasemann, M and Kleinwort, C and Kogler, R and Komm, M and Krücker, D and Lange, W and Pernia, DL and Lipka, K and Lohmann, W and Mankel, R and Melzer-Pellmann, IA and Morentin, MM and Meyer, AB and Milella, G and Mussgiller, A and Nair, LP and Nürnberg, A and Otarid, Y and Park, J and Adán, DP and Ranken, E and Raspereza, A and Lopes, BR and Rübenach, J and Saggio, A and Scham, M and Schnake, S and Schütze, P and Schwanenberger, C and Selivanova, D and Sharko, K and Shchedrolosiev, M and Ricardo, RES and Stafford, D and Vazzoler, F and Barroso, AV and Walsh, R and Wang, Q and Wen, Y and Wichmann, K and Wiens, L and Wissing, C and Yang, Y and Santos, AZC and Albrecht, A and Albrecht, S and Antonello, M and Bein, S and Benato, L and Bollweg, S and Bonanomi, M and 
Connor, P and Eich, M and Morabit, KE and Fischer, Y and Garbers, C and Garutti, E and Grohsjean, A and Haller, J and Jabusch, HR and Kasieczka, G and Keicher, P and Klanner, R and Korcari, W and Kramer, T and Kutzner, V and Labe, F and Lange, J and Lobanov, A and Matthies, C and Mehta, A and Moureaux, L and Mrowietz, M and Nigamova, A and Nissan, Y and Paasch, A and Rodriguez, KJP and Quadfasel, T and Raciti, B and Rieger, M and Savoiu, D and Schindler, J and Schleper, P and Schröder, M and Schwandt, J and Sommerhalder, M and Stadie, H and Steinbrück, G and Tews, A and Wolf, M and Brommer, S and Burkart, M and Butz, E and Chwalek, T and Dierlamm, A and Droll, A and Faltermann, N and Giffels, M and Gottmann, A and Hartmann, F and Hofsaess, R and Horzela, M and Husemann, U and Kieseler, J and Klute, M and Koppenhöfer, R and Lawhorn, JM and Link, M and Lintuluoto, A and Maier, S and Mitra, S and Mormile, M and Müller, T and Neukum, M and Oh, M and Presilla, M and Quast, G and Rabbertz, K and Regnery, B and Shadskiy, N and Shvetsov, I and Simonis, HJ and Toms, M and Trevisani, N and Cube, RFV and Wassmer, M and Wieland, S and Wittig, F and Wolf, R and Zuo, X and Anagnostou, G and Daskalakis, G and Kyriakis, A and Papadopoulos, A and Stakia, A and Kontaxakis, P and Melachroinos, G and Panagiotou, A and Papavergou, I and Paraskevas, I and Saoulidou, N and Theofilatos, K and Tziaferi, E and Vellidis, K and Zisopoulos, I and Bakas, G and Chatzistavrou, T and Karapostoli, G and Kousouris, K and Papakrivopoulos, I and Siamarkou, E and Tsipolitis, G and Zacharopoulou, A and Adamidis, K and Bestintzanos, I and Evangelou, I and Foudas, C and Kamtsikis, C and Katsoulis, P and Kokkas, P and Kioseoglou, PGK and Manthos, N and Papadopoulos, I and Strologas, J and Bartók, M and Hajdu, C and Horvath, D and Márton, K and Sikler, F and Veszpremi, V and Csanád, M and Farkas, K and Gadallah, MMA and Kadlecsik, Á and Major, P and Mandal, K and Pásztor, G and Rádl, AJ and Veres, GI and Raics, P and Ujvari, B and Zilizi, G and Bencze, G and Czellar, S and Molnar, J and Szillasi, Z and Csorgo, T and Nemes, F and Novak, T and Babbar, J and Bansal, S and Beri, SB and Bhatnagar, V and Chaudhary, G and Chauhan, S and Dhingra, N and Kaur, A and Kaur, A and Kaur, H and Kaur, M and Kumar, S and Sandeep, K and Sheokand, T and Singh, JB and Singla, A and Ahmed, A and Bhardwaj, A and Chhetri, A and Choudhary, BC and Kumar, A and Kumar, A and Naimuddin, M and Ranjan, K and Saumya, S and Baradia, S and Barman, S and Bhattacharya, S and Dutta, S and Dutta, S and Sarkar, S and Ameen, MM and Behera, PK and Behera, SC and Chatterjee, S and Jana, P and Kalbhor, P and Komaragiri, JR and Kumar, D and Pujahari, PR and Saha, NR and Sharma, A and Sikdar, AK and Verma, S and Dugad, S and Kumar, M and Mohanty, GB and Suryadevara, P and Bala, A and Banerjee, S and Chatterjee, RM and Dewanjee, RK and Guchait, M and Jain, S and Jaiswal, A and Karmakar, S and Kumar, S and Majumder, G and Mazumdar, K and Parolia, S and Thachayath, A and Bahinipati, S and Kar, C and Maity, D and Mal, P and Mishra, T and Bindhu, VKMN and Naskar, K and Nayak, A and Sadangi, P and Saha, P and Swain, SK and Varghese, S and Vats, D and Acharya, S and Alpana, A and Dube, S and Gomber, B and Kansal, B and Laha, A and Sahu, B and Sharma, S and Vaish, KY and Bakhshiansohi, H and Khazaie, E and Zeinali, M and Chenarani, S and Etesami, SM and Khakzad, M and Najafabadi, MM and Grunewald, M and Abbrescia, M and Aly, R and Colaleo, A and Creanza, D and D'Anzi, B and De 
Filippis, N and De Palma, M and Florio, AD and Elmetenawee, W and Fiore, L and Iaselli, G and Louka, M and Maggi, G and Maggi, M and Margjeka, I and Mastrapasqua, V and My, S and Nuzzo, S and Pellecchia, A and Pompili, A and Pugliese, G and Radogna, R and Ramirez-Sanchez, G and Ramos, D and Ranieri, A and Silvestris, L and Simone, FM and Sözbilir, Ü and Stamerra, A and Venditti, R and Verwilligen, P and Zaza, A and Abbiendi, G and Battilana, C and Bonacorsi, D and Borgonovi, L and Campanini, R and Capiluppi, P and Castro, A and Cavallo, FR and Cuffiani, M and Dallavalle, GM and Diotalevi, T and Fanfani, A and Fasanella, D and Giacomelli, P and Giommi, L and Grandi, C and Guiducci, L and Meo, SL and Lunerti, L and Marcellini, S and Masetti, G and Navarria, FL and Perrotta, A and Primavera, F and Rossi, AM and Rovelli, T and Siroli, GP and Costa, S and Mattia, AD and Potenza, R and Tricomi, A and Tuve, C and Assiouras, P and Barbagli, G and Bardelli, G and Camaiani, B and Cassese, A and Ceccarelli, R and Ciulli, V and Civinini, C and D'Alessandro, R and Focardi, E and Kello, T and Latino, G and Lenzi, P and Lizzo, M and Meschini, M and Paoletti, S and Papanastassiou, A and Sguazzoni, G and Viliani, L and Benussi, L and Bianco, S and Meola, S and Piccolo, D and Chatagnon, P and Ferro, F and Robutti, E and Tosi, S and Benaglia, A and Boldrini, G and Brivio, F and Cetorelli, F and De Guio, F and Dinardo, ME and Dini, P and Gennai, S and Gerosa, R and Ghezzi, A and Govoni, P and Guzzi, L and Lucchini, MT and Malberti, M and Malvezzi, S and Massironi, A and Menasce, D and Moroni, L and Paganoni, M and Pedrini, D and Pinolini, BS and Ragazzi, S and de Fatis, TT and Zuolo, D and Buontempo, S and Cagnotta, A and Carnevali, F and Cavallo, N and Fabozzi, F and Iorio, AOM and Lista, L and Paolucci, P and Rossi, B and Sciacca, C and Ardino, R and Azzi, P and Bacchetta, N and Bisello, D and Bortignon, P and Bragagnolo, A and Checchia, P and Dorigo, T and Gasparini, U and Lusiani, E and Margoni, M and Marini, F and Meneguzzo, AT and Migliorini, M and Passaseo, M and Pazzini, J and Ronchese, P and Rossin, R and Sgaravatto, M and Simonetto, F and Strong, G and Tosi, M and Triossi, A and Ventura, S and Yarar, H and Zanetti, M and Zotto, P and Zucchetta, A and Zumerle, G and Zeid, SA and Aimè, C and Braghieri, A and Calzaferri, S and Fiorina, D and Montagna, P and Re, V and Riccardi, C and Salvini, P and Vai, I and Vitulo, P and Ajmal, S and Bilei, GM and Ciangottini, D and Fanò, L and Magherini, M and Mantovani, G and Mariani, V and Menichelli, M and Moscatelli, F and Rossi, A and Santocchia, A and Spiga, D and Tedeschi, T and Asenov, P and Azzurri, P and Bagliesi, G and Bhattacharya, R and Bianchini, L and Boccali, T and Bossini, E and Bruschini, D and Castaldi, R and Ciocci, MA and Cipriani, M and D'Amante, V and Dell'Orso, R and Donato, S and Giassi, A and Ligabue, F and Figueiredo, DM and Messineo, A and Musich, M and Palla, F and Rizzi, A and Rolandi, G and Chowdhury, SR and Sarkar, T and Scribano, A and Spagnolo, P and Tenchini, R and Tonelli, G and Turini, N and Venturi, A and Verdini, PG and Barria, P and Basile, C and Campana, M and Cavallari, F and Mendez, LC and Re, DD and Marco, ED and Diemoz, M and Errico, F and Longo, E and Meridiani, P and Mijuskovic, J and Organtini, G and Pandolfi, F and Paramatti, R and Quaranta, C and Rahatlou, S and Rovelli, C and Santanastasio, F and Soffi, L and Amapane, N and Arcidiacono, R and Argiro, S and Arneodo, M and Bartosik, N and Bellan, R and Bellora, A and 
Biino, C and Borca, C and Cartiglia, N and Costa, M and Covarelli, R and Demaria, N and Finco, L and Grippo, M and Kiani, B and Legger, F and Luongo, F and Mariotti, C and Markovic, L and Maselli, S and Mecca, A and Migliore, E and Monteno, M and Mulargia, R and Obertino, MM and Ortona, G and Pacher, L and Pastrone, N and Pelliccioni, M and Ruspa, M and Siviero, F and Sola, V and Solano, A and Staiano, A and Tarricone, C and Trocino, D and Umoret, G and Vlasov, E and Belforte, S and Candelise, V and Casarsa, M and Cossutti, F and De Leo, K and Ricca, GD and Dogra, S and Hong, J and Huh, C and Kim, B and Kim, DH and Kim, J and Lee, H and Lee, SW and Moon, CS and Oh, YD and Ryu, MS and Sekmen, S and Yang, YC and Kim, MS and Bak, G and Gwak, P and Kim, H and Moon, DH and Asilar, E and Kim, D and Kim, TJ and Merlin, JA and Choi, S and Han, S and Hong, B and Lee, K and Lee, KS and Lee, S and Park, J and Park, SK and Yoo, J and Goh, J and Yang, S and Kim, HS and Kim, Y and Lee, S and Almond, J and Bhyun, JH and Choi, J and Jun, W and Kim, J and Ko, S and Kwon, H and Lee, H and Lee, J and Lee, J and Oh, BH and Oh, SB and Seo, H and Yang, UK and Yoon, I and Jang, W and Kang, DY and Kang, Y and Kim, S and Ko, B and Lee, JSH and Lee, Y and Park, IC and Roh, Y and Watson, IJ and Ha, S and Yoo, HD and Choi, M and Kim, MR and Lee, H and Lee, Y and Yu, I and Beyrouthy, T and Maghrbi, Y and Dreimanis, K and Gaile, A and Pikurs, G and Potrebko, A and Seidel, M and Veckalns, V and Strautnieks, NR and Ambrozas, M and Juodagalvis, A and Rinkevicius, A and Tamulaitis, G and Norjoharuddeen, NB and Yusuff, I and Zolkapli, Z and Benitez, JF and Hernandez, AC and Acosta, HAE and Maríñez, LGG and Coello, ML and Quijada, JAM and Sehrawat, A and Palomo, LV and Ayala, G and Castilla-Valdez, H and Ledesma, HC and De La Cruz-Burelo, E and La Cruz, IH and Lopez-Fernandez, R and Herrera, CAM and Hernández, AS and Barrera, CO and García, MR and Bautista, I and Pedraza, I and Ibarguen, HAS and Estrada, CU and Bubanja, I and Raicevic, N and Butler, PH and Ahmad, A and Asghar, MI and Awais, A and Awan, MIM and Hoorani, HR and Khan, WA and Avati, V and Grzanka, L and Malawski, M and Bialkowska, H and Bluj, M and Boimska, B and Górski, M and Kazana, M and Szleper, M and Zalewski, P and Bunkowski, K and Doroba, K and Kalinowski, A and Konecki, M and Krolikowski, J and Muhammad, A and Pozniak, K and Zabolotny, W and Araujo, M and Bastos, D and Da Cruz E Silva, CB and Boletti, A and Bozzo, M and Camporesi, T and Da Molin, G and Faccioli, P and Gallinaro, M and Hollar, J and Leonardo, N and Niknejad, T and Petrilli, A and Pisano, M and Seixas, J and Varela, J and Wulff, JW and Adzic, P and Milenovic, P and Dordevic, M and Milosevic, J and Rekovic, V and Aguilar-Benitez, M and Maestre, JA and Bedoya, CF and Cepeda, M and Cerrada, M and Colino, N and De La Cruz, B and Peris, AD and Valle, AED and Val, DFD and Ramos, JPF and Flix, J and Fouz, MC and Lopez, OG and Lopez, SG and Hernandez, JM and Josa, MI and Moran, D and Perez, CMM and Tobar, ÁN and Dengra, CP and Yzquierdo, AP and Pelayo, JP and Redondo, I and Ferrero, DDR and Romero, L and Navas, SS and Gómez, LU and Escobar, JV and Willmott, C and de Trocóniz, JF and Gonzalez, BA and Cuevas, J and Menendez, JF and Folgueras, S and Caballero, IG and Fernández, JRG and Cortezon, EP and Álvarez, CR and Bouza, VR and Rodríguez, AS and Trapote, A and Villalba, CV and Vischia, P and Bhowmik, S and Fernández, SB and Cifuentes, JAB and Cabrillo, IJ and Calderon, A and Campderros, JD and 
Fernandez, M and Gomez, G and García, CL and Rivero, CM and Arbol, PMRD and Matorras, F and Cuevas, PM and Ramos, EN and Gomez, JP and Scodellaro, L and Vila, I and Garcia, JMV and Jayananda, MK and Kailasapathy, B and Sonnadara, DUJ and Wickramarathna, DDC and Dharmaratna, WGD and Liyanage, K and Perera, N and Wickramage, N and Abbaneo, D and Amendola, C and Auffray, E and Auzinger, G and Baechler, J and Barney, D and Martínez, AB and Bianco, M and Bilin, B and Anuar, AAB and Bocci, A and Botta, C and Brondolin, E and Caillol, C and Cerminara, G and Chernyavskaya, N and d'Enterria, D and Dabrowski, A and David, A and De Roeck, A and Defranchis, MM and Deile, M and Dobson, M and Forthomme, L and Franzoni, G and Funk, W and Giani, S and Gigi, D and Gill, K and Glege, F and Gouskos, L and Haranko, M and Hegeman, J and Huber, B and Innocente, V and James, T and Janot, P and Laurila, S and Lecoq, P and Leutgeb, E and Lourenço, C and Maier, B and Malgeri, L and Mannelli, M and Marini, AC and Matthewman, M and Meijers, F and Mersi, S and Meschi, E and Milosevic, V and Monti, F and Moortgat, F and Mulders, M and Neutelings, I and Orfanelli, S and Pantaleo, F and Petrucciani, G and Pfeiffer, A and Pierini, M and Piparo, D and Qu, H and Rabady, D and Gutiérrez, GR and Rovere, M and Sakulin, H and Scarfi, S and Schwick, C and Selvaggi, M and Sharma, A and Shchelina, K and Silva, P and Sphicas, P and Leiton, AGS and Steen, A and Summers, S and Treille, D and Tropea, P and Tsirou, A and Walter, D and Wanczyk, J and Wang, J and Wuchterl, S and Zehetner, P and Zejdl, P and Zeuner, WD and Bevilacqua, T and Caminada, L and Ebrahimi, A and Erdmann, W and Horisberger, R and Ingram, Q and Kaestli, HC and Kotlinski, D and Lange, C and Missiroli, M and Noehte, L and Rohe, T and Aarrestad, TK and Androsov, K and Backhaus, M and Calandri, A and Cazzaniga, C and Datta, K and De Cosa, A and Dissertori, G and Dittmar, M and Donegà, M and Eble, F and Galli, M and Gedia, K and Glessgen, F and Grab, C and Hits, D and Lustermann, W and Lyon, AM and Manzoni, RA and Marchegiani, M and Marchese, L and Perez, CM and Mascellani, A and Nessi-Tedaldi, F and Pauss, F and Perovic, V and Pigazzini, S and Reissel, C and Reitenspiess, T and Ristic, B and Riti, F and Seidita, R and Steggemann, J and Valsecchi, D and Wallny, R and Amsler, C and Bärtschi, P and Brzhechko, D and Canelli, MF and Cormier, K and Heikkilä, JK and Huwiler, M and Jin, W and Jofrehei, A and Kilminster, B and Leontsinis, S and Liechti, SP and Macchiolo, A and Meiring, P and Molinatti, U and Reimers, A and Robmann, P and Cruz, SS and Senger, M and Stäger, F and Takahashi, Y and Tramontano, R and Adloff, C and Bhowmik, D and Kuo, CM and Lin, W and Rout, PK and Tiwari, PC and Yu, SS and Ceard, L and Chao, Y and Chen, KF and Chen, PS and Chen, ZG and De Iorio, A and Hou, WS and Hsu, TH and Kao, YW and Khurana, R and Kole, G and Li, YY and Lu, RS and Paganis, E and Su, XF and Thomas-Wilsker, J and Tsai, LS and Wu, HY and Yazgan, E and Asawatangtrakuldee, C and Srimanobhas, N and Wachirapusitanand, V and Agyel, D and Boran, F and Demiroglu, ZS and Dolek, F and Dumanoglu, I and Eskut, E and Guler, Y and Guler, EG and Isik, C and Kara, O and Topaksu, AK and Kiminsu, U and Onengut, G and Ozdemir, K and Polatoz, A and Tali, B and Tok, UG and Turkcapar, S and Uslan, E and Zorbakir, IS and Yalvac, M and Akgun, B and Atakisi, IO and Gülmez, E and Kaya, M and Kaya, O and Tekten, S and Cakir, A and Cankocak, K and Komurcu, Y and Sen, S and Aydilek, O and Cerci, S and 
Epshteyn, V and Hacisahinoglu, B and Hos, I and Kaynak, B and Ozkorucuklu, S and Potok, O and Sert, H and Simsek, C and Zorbilmez, C and Isildak, B and Cerci, DS and Boyaryntsev, A and Grynyov, B and Levchuk, L and Anthony, D and Brooke, JJ and Bundock, A and Bury, F and Clement, E and Cussans, D and Flacher, H and Glowacki, M and Goldstein, J and Heath, HF and Kreczko, L and Paramesvaran, S and Robertshaw, L and Nasr-Storey, SSE and Smith, VJ and Stylianou, N and Pass, KW and White, R and Ball, AH and Bell, KW and Belyaev, A and Brew, C and Brown, RM and Cockerill, DJA and Cooke, C and Ellis, KV and Harder, K and Harper, S and Holmberg, ML and Linacre, J and Manolopoulos, K and Newbold, DM and Olaiya, E and Petyt, D and Reis, T and Salvi, G and Schuh, T and Shepherd-Themistocleous, CH and Tomalin, IR and Williams, T and Bainbridge, R and Bloch, P and Brown, CE and Buchmuller, O and Cacchio, V and Montoya, CAC and Chahal, GS and Colling, D and Dancu, JS and Das, I and Dauncey, P and Davies, G and Davies, J and Negra, MD and Fayer, S and Fedi, G and Hall, G and Hassanshahi, MH and Howard, A and Iles, G and Knight, M and Langford, J and Holgado, JL and Lyons, L and Magnan, AM and Malik, S and Mieskolainen, M and Nash, J and Pesaresi, M and Radburn-Smith, BC and Richards, A and Rose, A and Savva, K and Seez, C and Shukla, R and Tapper, A and Uchida, K and Uttley, GP and Vage, LH and Virdee, T and Vojinovic, M and Wardle, N and Winterbottom, D and Coldham, K and Cole, JE and Khan, A and Kyberd, P and Reid, ID and Abdullin, S and Brinkerhoff, A and Caraway, B and Dittmann, J and Hatakeyama, K and Hiltbrand, J and McMaster, B and Saunders, M and Sawant, S and Sutantawibul, C and Wilson, J and Bartek, R and Dominguez, A and Escamilla, CH and Simsek, AE and Uniyal, R and Hernandez, AMV and Bam, B and Chudasama, R and Cooper, SI and Gleyzer, SV and Perez, CU and Rumerio, P and Usai, E and Yi, R and Akpinar, A and Arcaro, D and Cosby, C and Demiragli, Z and Erice, C and Fangmeier, C and Madrazo, CF and Fontanesi, E and Gastler, D and Golf, F and Jeon, S and Reed, I and Rohlf, J and Salyer, K and Sperka, D and Spitzbart, D and Suarez, I and Tsatsos, A and Yuan, S and Zecchinelli, AG and Benelli, G and Coubez, X and Cutts, D and Hadley, M and Heintz, U and Hogan, JM and Kwon, T and Landsberg, G and Lau, KT and Li, D and Luo, J and Mondal, S and Narain, M and Pervan, N and Sagir, S and Simpson, F and Stamenkovic, M and Yan, X and Zhang, W and Abbott, S and Bonilla, J and Brainerd, C and Breedon, R and De La Barca Sanchez, MC and Chertok, M and Citron, M and Conway, J and Cox, PT and Erbacher, R and Jensen, F and Kukral, O and Mocellin, G and Mulhearn, M and Pellett, D and Wei, W and Yao, Y and Zhang, F and Bachtis, M and Cousins, R and Datta, A and Avila, GF and Hauser, J and Ignatenko, M and Iqbal, MA and Lam, T and Manca, E and Prado, AND and Saltzberg, D and Valuev, V and Clare, R and Gary, JW and Gordon, M and Hanson, G and Si, W and Wimpenny, S and Branson, JG and Cittolin, S and Cooperstein, S and Diaz, D and Duarte, J and Giannini, L and Guiang, J and Kansal, R and Krutelyov, V and Lee, R and Letts, J and Masciovecchio, M and Mokhtar, F and Mukherjee, S and Pieri, M and Quinnan, M and Narayanan, BVS and Sharma, V and Tadel, M and Vourliotis, E and Würthwein, F and Xiang, Y and Yagil, A and Barzdukas, A and Brennan, L and Campagnari, C and Incandela, J and Kim, J and Li, AJ and Masterson, P and Mei, H and Richman, J and Sarica, U and Schmitz, R and Setti, F and Sheplock, J and Stuart, D and Vámi, 
TÁ and Wang, S and Bornheim, A and Cerri, O and Latorre, A and Mao, J and Newman, HB and Spiropulu, M and Vlimant, JR and Wang, C and Xie, S and Zhu, RY and Alison, J and An, S and Andrews, MB and Bryant, P and Cremonesi, M and Dutta, V and Ferguson, T and Harilal, A and Liu, C and Mudholkar, T and Murthy, S and Palit, P and Paulini, M and Roberts, A and Sanchez, A and Terrill, W and Cumalat, JP and Ford, WT and Hart, A and Hassani, A and Karathanasis, G and MacDonald, E and Manganelli, N and Perloff, A and Savard, C and Schonbeck, N and Stenson, K and Ulmer, KA and Wagner, SR and Zipper, N and Alexander, J and Bright-Thonney, S and Chen, X and Cranshaw, DJ and Fan, J and Fan, X and Gadkari, D and Hogan, S and Kotamnives, P and Monroy, J and Oshiro, M and Patterson, JR and Reichert, J and Reid, M and Ryd, A and Thom, J and Wittich, P and Zou, R and Albrow, M and Alyari, M and Amram, O and Apollinari, G and Apresyan, A and Bauerdick, LAT and Berry, D and Berryhill, J and Bhat, PC and Burkett, K and Butler, JN and Canepa, A and Cerati, GB and Cheung, HWK and Chlebana, F and Cummings, G and Dickinson, J and Dutta, I and Elvira, VD and Feng, Y and Freeman, J and Gandrakota, A and Gecse, Z and Gray, L and Green, D and Grummer, A and Grünendahl, S and Guerrero, D and Gutsche, O and Harris, RM and Heller, R and Herwig, TC and Hirschauer, J and Horyn, L and Jayatilaka, B and Jindariani, S and Johnson, M and Joshi, U and Klijnsma, T and Klima, B and Kwok, KHM and Lammel, S and Lincoln, D and Lipton, R and Liu, T and Madrid, C and Maeshima, K and Mantilla, C and Mason, D and McBride, P and Merkel, P and Mrenna, S and Nahn, S and Ngadiuba, J and Noonan, D and Papadimitriou, V and Pastika, N and Pedro, K and Pena, C and Ravera, F and Hall, AR and Ristori, L and Sexton-Kennedy, E and Smith, N and Soha, A and Spiegel, L and Stoynev, S and Strait, J and Taylor, L and Tkaczyk, S and Tran, NV and Uplegger, L and Vaandering, EW and Zoi, I and Aruta, C and Avery, P and Bourilkov, D and Cadamuro, L and Chang, P and Cherepanov, V and Field, RD and Koenig, E and Kolosova, M and Konigsberg, J and Korytov, A and Matchev, K and Menendez, N and Mitselmakher, G and Mohrman, K and Madhu, AM and Rawal, N and Rosenzweig, D and Rosenzweig, S and Wang, J and Adams, T and Kadhim, AA and Askew, A and Bower, S and Habibullah, R and Hagopian, V and Hashmi, R and Kim, RS and Kim, S and Kolberg, T and Martinez, G and Prosper, H and Prova, PR and Wulansatiti, M and Yohay, R and Zhang, J and Alsufyani, B and Baarmand, MM and Butalla, S and Elkafrawy, T and Hohlmann, M and Verma, RK and Rahmani, M and Yanes, E and Adams, MR and Baty, A and Bennett, C and Cavanaugh, R and Franco, RE and Evdokimov, O and Gerber, CE and Hofman, DJ and Lee, JH and Lemos, DS and Merrit, AH and Mills, C and Nanda, S and Oh, G and Ozek, B and Pilipovic, D and Pradhan, R and Roy, T and Rudrabhatla, S and Tonjes, MB and Varelas, N and Ye, Z and Yoo, J and Alhusseini, M and Blend, D and Dilsiz, K and Emediato, L and Karaman, G and Köseyan, OK and Merlo, JP and Mestvirishvili, A and Nachtman, J and Neogi, O and Ogul, H and Onel, Y and Penzo, A and Snyder, C and Tiras, E and Blumenfeld, B and Corcodilos, L and Davis, J and Gritsan, AV and Kang, L and Kyriacou, S and Maksimovic, P and Roguljic, M and Roskes, J and Sekhar, S and Swartz, M and Abreu, A and Alcerro, LFA and Anguiano, J and Baringer, P and Bean, A and Flowers, Z and Grove, D and King, J and Krintiras, G and Lazarovits, M and Mahieu, CL and Marquez, J and Minafra, N and Murray, M and Nickel, M and 
Pitt, M and Popescu, S and Rogan, C and Royon, C and Salvatico, R and Sanders, S and Smith, C and Wang, Q and Wilson, G and Allmond, B and Ivanov, A and Kaadze, K and Kalogeropoulos, A and Kim, D and Maravin, Y and Natoli, J and Roy, D and Sorrentino, G and Rebassoo, F and Wright, D and Baden, A and Belloni, A and Chen, YM and Eno, SC and Hadley, NJ and Jabeen, S and Kellogg, RG and Koeth, T and Lai, Y and Lascio, S and Mignerey, AC and Nabili, S and Palmer, C and Papageorgakis, C and Paranjpe, MM and Wang, L and Bendavid, J and Cali, IA and D'Alfonso, M and Eysermans, J and Freer, C and Gomez-Ceballos, G and Goncharov, M and Grosso, G and Harris, P and Hoang, D and Kovalskyi, D and Krupa, J and Lavezzo, L and Lee, YJ and Long, K and Novak, A and Paus, C and Rankin, D and Roland, C and Roland, G and Rothman, S and Stephans, GSF and Wang, Z and Wyslouch, B and Yang, TJ and Crossman, B and Joshi, BM and Kapsiak, C and Krohn, M and Mahon, D and Mans, J and Marzocchi, B and Pandey, S and Revering, M and Rusack, R and Saradhy, R and Schroeder, N and Strobbe, N and Wadud, MA and Cremaldi, LM and Bloom, K and Claes, DR and Haza, G and Hossain, J and Joo, C and Kravchenko, I and Siado, JE and Tabb, W and Vagnerini, A and Wightman, A and Yan, F and Yu, D and Bandyopadhyay, H and Hay, L and Iashvili, I and Kharchilava, A and Morris, M and Nguyen, D and Rappoccio, S and Sfar, HR and Williams, A and Alverson, G and Barberis, E and Dervan, J and Haddad, Y and Han, Y and Krishna, A and Li, J and Lu, M and Madigan, G and Mccarthy, R and Morse, DM and Nguyen, V and Orimoto, T and Parker, A and Skinnari, L and Wang, B and Wood, D and Bhattacharya, S and Bueghly, J and Chen, Z and Dittmer, S and Hahn, KA and Liu, Y and Miao, Y and Monk, DG and Schmitt, MH and Taliercio, A and Velasco, M and Agarwal, G and Band, R and Bucci, R and Castells, S and Das, A and Goldouzian, R and Hildreth, M and Ho, KW and Anampa, KH and Ivanov, T and Jessop, C and Lannon, K and Lawrence, J and Loukas, N and Lutton, L and Mariano, J and Marinelli, N and Mcalister, I and McCauley, T and Mcgrady, C and Moore, C and Musienko, Y and Nelson, H and Osherson, M and Piccinelli, A and Ruchti, R and Townsend, A and Wan, Y and Wayne, M and Yockey, H and Zarucki, M and Zygala, L and Basnet, A and Bylsma, B and Carrigan, M and Durkin, LS and Hill, C and Joyce, M and Ornelas, MN and Wei, K and Winer, BL and Yates, BR and Addesa, FM and Bouchamaoui, H and Das, P and Dezoort, G and Elmer, P and Frankenthal, A and Greenberg, B and Haubrich, N and Kopp, G and Kwan, S and Lange, D and Loeliger, A and Marlow, D and Ojalvo, I and Olsen, J and Shevelev, A and Stickland, D and Tully, C and Malik, S and Bakshi, AS and Barnes, VE and Chandra, S and Chawla, R and Das, S and Gu, A and Gutay, L and Jones, M and Jung, AW and Kondratyev, D and Koshy, AM and Liu, M and Negro, G and Neumeister, N and Paspalaki, G and Piperov, S and Scheurer, V and Schulte, JF and Stojanovic, M and Thieman, J and Virdi, AK and Wang, F and Xie, W and Dolen, J and Parashar, N and Pathak, A and Acosta, D and Carnahan, T and Ecklund, KM and Manteca, PJF and Freed, S and Gardner, P and Geurts, FJM and Li, W and Colin, OM and Padley, BP and Redjimi, R and Rotter, J and Yigitbasi, E and Zhang, Y and Bodek, A and de Barbaro, P and Demina, R and Dulemba, JL and Garcia-Bellido, A and Hindrichs, O and Khukhunaishvili, A and Parmar, N and Parygin, P and Popova, E and Taus, R and Goulianos, K and Chiarito, B and Chou, JP and Gershtein, Y and Halkiadakis, E and Heindl, M and Houghton, C and 
Jaroslawski, D and Karacheban, O and Laflotte, I and Lath, A and Montalvo, R and Nash, K and Routray, H and Salur, S and Schnetzer, S and Somalwar, S and Stone, R and Thayil, SA and Thomas, S and Vora, J and Wang, H and Acharya, H and Ally, D and Delannoy, AG and Fiorendi, S and Higginbotham, S and Holmes, T and Kanuganti, AR and Karunarathna, N and Lee, L and Nibigira, E and Spanier, S and Aebi, D and Ahmad, M and Bouhali, O and Eusebi, R and Gilmore, J and Huang, T and Kamon, T and Kim, H and Luo, S and Mueller, R and Overton, D and Rathjens, D and Safonov, A and Akchurin, N and Damgov, J and Hegde, V and Hussain, A and Kazhykarim, Y and Lamichhane, K and Lee, SW and Mankel, A and Peltola, T and Volobouev, I and Whitbeck, A and Appelt, E and Chen, Y and Greene, S and Gurrola, A and Johns, W and Elayavalli, RK and Melo, A and Romeo, F and Sheldon, P and Tuo, S and Velkovska, J and Viinikainen, J and Cardwell, B and Cox, B and Hakala, J and Hirosky, R and Ledovskoy, A and Neu, C and Lara, CEP and Karchin, PE and Aravind, A and Banerjee, S and Black, K and Bose, T and Dasu, S and De Bruyn, I and Everaerts, P and Galloni, C and He, H and Herndon, M and Herve, A and Koraka, CK and Lanaro, A and Loveless, R and Sreekala, JM and Mallampalli, A and Mohammadi, A and Mondal, S and Parida, G and Pétré, L and Pinna, D and Savin, A and Shang, V and Sharma, V and Smith, WH and Teague, D and Tsoi, HF and Vetens, W and Warden, A and Afanasiev, S and Andreev, V and Andreev, Y and Aushev, T and Azarkin, M and Babaev, A and Belyaev, A and Blinov, V and Boos, E and Borshch, V and Budkouski, D and Chadeeva, M and Chekhovsky, V and Chistov, R and Demiyanov, A and Dermenev, A and Dimova, T and Druzhkin, D and Dubinin, M and Dudko, L and Ershov, A and Gavrilov, G and Gavrilov, V and Gninenko, S and Golovtcov, V and Golubev, N and Golutvin, I and Gorbunov, I and Gribushin, A and Ivanov, Y and Kachanov, V and Karjavine, V and Karneyeu, A and Kim, V and Kirakosyan, M and Kirpichnikov, D and Kirsanov, M and Klyukhin, V and Kodolova, O and Korenkov, V and Kozyrev, A and Krasnikov, N and Lanev, A and Levchenko, P and Lychkovskaya, N and Makarenko, V and Malakhov, A and Matveev, V and Murzin, V and Nikitenko, A and Obraztsov, S and Oreshkin, V and Palichik, V and Perelygin, V and Petrushanko, S and Polikarpov, S and Popov, V and Radchenko, O and Savina, M and Savrin, V and Shalaev, V and Shmatov, S and Shulha, S and Skovpen, Y and Slabospitskii, S and Smirnov, V and Snigirev, A and Sosnov, D and Sulimov, V and Tcherniaev, E and Terkulov, A and Teryaev, O and Tlisova, I and Toropin, A and Uvarov, L and Uzunian, A and Vorobyev, A and Voytishin, N and Yuldashev, BS and Zarubin, A and Zhizhin, I and Zhokin, A},
title = {Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service.},
journal = {Computing and software for big science},
volume = {8},
number = {1},
pages = {17},
pmid = {39248308},
issn = {2510-2044},
abstract = {Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. The approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing throughput. We emphasize that the SONIC approach enables high coprocessor utilization and the portability to run workflows on different types of coprocessors.},
}
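The as-a-service pattern described above decouples the CPU-bound workflow from ML inference by sending inputs to a coprocessor-backed server; in CMS, SONIC is commonly deployed with the NVIDIA Triton Inference Server. A minimal client-side sketch using the tritonclient package is shown below; the server URL, model name, and tensor names are placeholders that depend on the deployed model, not values from the paper.

import numpy as np
import tritonclient.http as httpclient

# Connect to a (hypothetical) Triton server fronting GPUs or other coprocessors.
client = httpclient.InferenceServerClient(url="triton.example.org:8000")

batch = np.random.rand(16, 128).astype(np.float32)  # stand-in input batch
inp = httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")  # names are model-specific
inp.set_data_from_numpy(batch)
out = httpclient.InferRequestedOutput("OUTPUT__0")

# The CPU-side workflow blocks only for this remote call; the server batches
# requests from many clients to keep the coprocessor busy.
result = client.infer(model_name="my_ml_model", inputs=[inp], outputs=[out])
scores = result.as_numpy("OUTPUT__0")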
RevDate: 2024-09-09
Establishing the longitudinal hemodynamic mapping framework for wearable-driven coronary digital twins.
NPJ digital medicine, 7(1):236.
Understanding the evolving nature of coronary hemodynamics is crucial for early disease detection and monitoring progression. We require digital twins that mimic a patient's circulatory system by integrating continuous physiological data and computing hemodynamic patterns over months. Current models match clinical flow measurements but are limited to single heartbeats. To this end, we introduced the longitudinal hemodynamic mapping framework (LHMF), designed to tackle critical challenges: (1) computational intractability of explicit methods; (2) boundary conditions reflecting varying activity states; and (3) accessible computing resources for clinical translation. We show negligible error (0.0002-0.004%) between LHMF and explicit data of 750 heartbeats. We deployed LHMF across traditional and cloud-based platforms, demonstrating high-throughput simulations on heterogeneous systems. Additionally, we established LHMFC, where hemodynamically similar heartbeats are clustered to avoid redundant simulations, accurately reconstructing longitudinal hemodynamic maps (LHMs). This study captured 3D hemodynamics over 4.5 million heartbeats, paving the way for cardiovascular digital twins.
Additional Links: PMID-39242829
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid39242829,
year = {2024},
author = {Tanade, C and Khan, NS and Rakestraw, E and Ladd, WD and Draeger, EW and Randles, A},
title = {Establishing the longitudinal hemodynamic mapping framework for wearable-driven coronary digital twins.},
journal = {NPJ digital medicine},
volume = {7},
number = {1},
pages = {236},
pmid = {39242829},
issn = {2398-6352},
support = {DP1AG082343//U.S. Department of Health & Human Services | National Institutes of Health (NIH)/ ; 164486//National Science Foundation (NSF)/ ; DP1AG082343//U.S. Department of Health & Human Services | National Institutes of Health (NIH)/ ; DP1AG082343//U.S. Department of Health & Human Services | National Institutes of Health (NIH)/ ; },
abstract = {Understanding the evolving nature of coronary hemodynamics is crucial for early disease detection and monitoring progression. We require digital twins that mimic a patient's circulatory system by integrating continuous physiological data and computing hemodynamic patterns over months. Current models match clinical flow measurements but are limited to single heartbeats. To this end, we introduced the longitudinal hemodynamic mapping framework (LHMF), designed to tackle critical challenges: (1) computational intractability of explicit methods; (2) boundary conditions reflecting varying activity states; and (3) accessible computing resources for clinical translation. We show negligible error (0.0002-0.004%) between LHMF and explicit data of 750 heartbeats. We deployed LHMF across traditional and cloud-based platforms, demonstrating high-throughput simulations on heterogeneous systems. Additionally, we established LHMFC, where hemodynamically similar heartbeats are clustered to avoid redundant simulations, accurately reconstructing longitudinal hemodynamic maps (LHMs). This study captured 3D hemodynamics over 4.5 million heartbeats, paving the way for cardiovascular digital twins.},
}
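The LHMFC idea above — cluster hemodynamically similar heartbeats, run the expensive 3D simulation once per cluster representative, and reuse that result for all members — can be sketched generically. The per-beat features and cluster count below are illustrative assumptions, not the authors' published configuration.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-beat features from a wearable (heart rate, pressure
# surrogates, activity level, ...); the study itself spans millions of beats.
beats = np.random.rand(10_000, 4)

km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(beats)

# Simulate hemodynamics only for the beat closest to each centroid,
# then map its result back to every member of the cluster.
reps = {}
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(beats[members] - km.cluster_centers_[c], axis=1)
    reps[c] = members[np.argmin(dists)]  # run the 3D solver for this beat only

beat_to_simulated_rep = {i: reps[km.labels_[i]] for i in range(len(beats))}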
RevDate: 2024-09-05
CmpDate: 2024-09-06
Development of an online authentic radiology viewing and reporting platform to test the skills of radiology trainees in Low- and Middle-Income Countries.
BMC medical education, 24(1):969.
BACKGROUND: Diagnostic radiology residents in low- and middle-income countries (LMICs) may have to provide significant contributions to the clinical workload before the completion of their residency training. Because of time constraints inherent to the delivery of acute care, some of the most clinically impactful diagnostic radiology errors arise from the use of Computed Tomography (CT) in the management of acutely ill patients. As a result, it is paramount to ensure that radiology trainees reach adequate skill levels prior to assuming independent on-call responsibilities. We partnered with the radiology residency program at the Aga Khan University Hospital, Nairobi (AKUHN), Kenya, to evaluate a novel cloud-based testing method that provides an authentic radiology viewing and interpretation environment. It is based on Lifetrack, a Google Chrome-based Picture Archiving and Communication System that enables a complete viewing environment for any scan and provides a novel report generation tool based on Active Templates, a patented structured reporting method. We applied it to evaluate the skills of AKUHN trainees on entire CT scans representing the spectrum of acute non-trauma abdominal pathology encountered in a typical on-call setting. We aimed to demonstrate the feasibility of remotely testing the authentic practice of radiology and to show that important observations can be made from such a Lifetrack-based testing approach regarding the radiology skills of an individual practitioner or of a cohort of trainees.
METHODS: A total of 13 anonymized trainees with experience ranging from 12 months to over 4 years took part in the study. Individually accessing the Lifetrack tool, they were tested on 37 abdominal CT scans (including one normal scan) over six 2-hour sessions on consecutive days. All cases carried the same clinical history of acute abdominal pain. During each session the trainees accessed the corresponding Lifetrack test set using clinical workstations, reviewed the CT scans, and formulated an opinion for the acute diagnosis, any secondary pathology, and incidental findings on the scan. Their scan interpretations were composed using the Lifetrack report generation system based on Active Templates, in which segments of text can be selected to assemble a detailed report. All reports generated by the trainees were scored on four different interpretive components: (a) acute diagnosis, (b) unrelated secondary diagnosis, (c) number of missed incidental findings, and (d) number of overcalls. A 3-score aggregate was defined from the first three interpretive elements. A cumulative score modified the 3-score aggregate for the negative effect of interpretive overcalls.
RESULTS: A total of 436 scan interpretations and scores were available from 13 trainees tested on 37 cases. The acute diagnosis score ranged from 0 to 1 with a mean of 0.68 ± 0.36 and a median of 0.78 (IQR: 0.5-1). An unrelated secondary diagnosis was present in 11 cases, resulting in 130 secondary diagnosis scores. The unrelated secondary diagnosis score ranged from 0 to 1, with a mean of 0.48 ± 0.46 and a median of 0.5 (IQR: 0-1). There were 32 cases with incidental findings, yielding 390 scores for incidental findings. The number of missed incidental findings ranged from 0 to 5 with a median of 1 (IQR: 1-2). The incidental findings score ranged from 0 to 1 with a mean of 0.4 ± 0.38 and a median of 0.33 (IQR: 0-0.66). The number of overcalls ranged from 0 to 3 with a median of 0 (IQR: 0-1) and a mean of 0.36 ± 0.63. The 3-score aggregate ranged from 0 to 100 with a mean of 65.5 ± 32.5 and a median of 77.3 (IQR: 45.0-92.5). The cumulative score ranged from -30 to 100 with a mean of 61.9 ± 35.5 and a median of 71.4 (IQR: 37.4-92.0). The mean acute diagnosis scores (± SD) by training period were 0.62 ± 0.03, 0.80 ± 0.05, 0.71 ± 0.05, 0.58 ± 0.07, and 0.66 ± 0.05 for trainees with ≤ 12 months, 12-24 months, 24-36 months, 36-48 months, and > 48 months of training, respectively. The mean acute diagnosis score of the 12-24 months group was the only one significantly greater than that of the ≤ 12 months group (ANOVA with Tukey testing, p = 0.0002). We found a similar trend in the distributions of 3-score aggregates and cumulative scores. There were no significant associations when the training period was categorized as less than versus more than 2 years. The 3-score aggregate was inversely related to the number of overcalls per trainee. Heatmaps and raincloud plots provided an illustrative means to visualize the relative performance of trainees across cases.
CONCLUSION: We demonstrated the feasibility of remotely testing the authentic practice of radiology and showed that important observations can be made from our Lifetrack-based testing approach regarding the radiology skills of an individual or a cohort. From observed weaknesses, targeted teaching can be implemented, and retesting could reveal its impact. This methodology can be customized to different LMIC environments and expanded to board certification examinations.
Additional Links: PMID-39237930
@article {pmid39237930,
year = {2024},
author = {Vesselle, H and Chiramal, JA and Hawes, SE and Schulze, E and Nguyen, T and Ndumia, R and Vinayak, S},
title = {Development of an online authentic radiology viewing and reporting platform to test the skills of radiology trainees in Low- and Middle-Income Countries.},
journal = {BMC medical education},
volume = {24},
number = {1},
pages = {969},
pmid = {39237930},
issn = {1472-6920},
mesh = {Humans ; *Radiology/education ; *Developing Countries ; Kenya ; *Internship and Residency ; *Clinical Competence ; *Radiology Information Systems ; Tomography, X-Ray Computed ; },
abstract = {BACKGROUND: Diagnostic radiology residents in low- and middle-income countries (LMICs) may have to provide significant contributions to the clinical workload before the completion of their residency training. Because of time constraints inherent to the delivery of acute care, some of the most clinically impactful diagnostic radiology errors arise from the use of Computed Tomography (CT) in the management of acutely ill patients. As a result, it is paramount to ensure that radiology trainees reach adequate skill levels prior to assuming independent on-call responsibilities. We partnered with the radiology residency program at the Aga Khan University Hospital, Nairobi (AKUHN), Kenya, to evaluate a novel cloud-based testing method that provides an authentic radiology viewing and interpretation environment. It is based on Lifetrack, a Google Chrome-based Picture Archiving and Communication System that enables a complete viewing environment for any scan and provides a novel report generation tool based on Active Templates, a patented structured reporting method. We applied it to evaluate the skills of AKUHN trainees on entire CT scans representing the spectrum of acute non-trauma abdominal pathology encountered in a typical on-call setting. We aimed to demonstrate the feasibility of remotely testing the authentic practice of radiology and to show that important observations can be made from such a Lifetrack-based testing approach regarding the radiology skills of an individual practitioner or of a cohort of trainees.
METHODS: A total of 13 anonymized trainees with experience ranging from 12 months to over 4 years took part in the study. Individually accessing the Lifetrack tool, they were tested on 37 abdominal CT scans (including one normal scan) over six 2-hour sessions on consecutive days. All cases carried the same clinical history of acute abdominal pain. During each session the trainees accessed the corresponding Lifetrack test set using clinical workstations, reviewed the CT scans, and formulated an opinion for the acute diagnosis, any secondary pathology, and incidental findings on the scan. Their scan interpretations were composed using the Lifetrack report generation system based on Active Templates, in which segments of text can be selected to assemble a detailed report. All reports generated by the trainees were scored on four different interpretive components: (a) acute diagnosis, (b) unrelated secondary diagnosis, (c) number of missed incidental findings, and (d) number of overcalls. A 3-score aggregate was defined from the first three interpretive elements. A cumulative score modified the 3-score aggregate for the negative effect of interpretive overcalls.
RESULTS: A total of 436 scan interpretations and scores were available from 13 trainees tested on 37 cases. The acute diagnosis score ranged from 0 to 1 with a mean of 0.68 ± 0.36 and a median of 0.78 (IQR: 0.5-1). An unrelated secondary diagnosis was present in 11 cases, resulting in 130 secondary diagnosis scores. The unrelated secondary diagnosis score ranged from 0 to 1, with a mean of 0.48 ± 0.46 and a median of 0.5 (IQR: 0-1). There were 32 cases with incidental findings, yielding 390 scores for incidental findings. The number of missed incidental findings ranged from 0 to 5 with a median of 1 (IQR: 1-2). The incidental findings score ranged from 0 to 1 with a mean of 0.4 ± 0.38 and a median of 0.33 (IQR: 0-0.66). The number of overcalls ranged from 0 to 3 with a median of 0 (IQR: 0-1) and a mean of 0.36 ± 0.63. The 3-score aggregate ranged from 0 to 100 with a mean of 65.5 ± 32.5 and a median of 77.3 (IQR: 45.0-92.5). The cumulative score ranged from -30 to 100 with a mean of 61.9 ± 35.5 and a median of 71.4 (IQR: 37.4-92.0). The mean acute diagnosis scores (± SD) by training period were 0.62 ± 0.03, 0.80 ± 0.05, 0.71 ± 0.05, 0.58 ± 0.07, and 0.66 ± 0.05 for trainees with ≤ 12 months, 12-24 months, 24-36 months, 36-48 months, and > 48 months of training, respectively. The mean acute diagnosis score of the 12-24 months group was the only one significantly greater than that of the ≤ 12 months group (ANOVA with Tukey testing, p = 0.0002). We found a similar trend in the distributions of 3-score aggregates and cumulative scores. There were no significant associations when the training period was categorized as less than versus more than 2 years. The 3-score aggregate was inversely related to the number of overcalls per trainee. Heatmaps and raincloud plots provided an illustrative means to visualize the relative performance of trainees across cases.
CONCLUSION: We demonstrated the feasibility of remotely testing the authentic practice of radiology and showed that important observations can be made from our Lifetrack-based testing approach regarding radiology skills of an individual or a cohort. From observed weaknesses, targeted teaching can be implemented, and retesting could reveal its impact. This methodology can be customized to different LMIC environments and expanded to board certification examinations.},
}
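The scoring scheme above is described only qualitatively; the exact weighting is not given in the abstract. The sketch below is one plausible reading — equal weighting of the three interpretive components scaled to 0-100, with a fixed per-overcall penalty — and both choices are explicitly assumptions, not the authors' formula.

def three_score_aggregate(acute, secondary, incidentals):
    # Equal-weight mean of the available components, scaled to 0-100
    # (assumed weighting; cases without a secondary diagnosis pass None).
    parts = [s for s in (acute, secondary, incidentals) if s is not None]
    return 100.0 * sum(parts) / len(parts)

def cumulative_score(acute, secondary, incidentals, n_overcalls, penalty=10.0):
    # The per-overcall penalty is hypothetical; the paper only states that
    # overcalls reduce the aggregate (its observed range extends to -30).
    return three_score_aggregate(acute, secondary, incidentals) - penalty * n_overcalls

print(cumulative_score(acute=0.78, secondary=0.5, incidentals=0.33, n_overcalls=1))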
RevDate: 2024-09-05
CmpDate: 2024-09-05
Cloud Readiness of German Hospitals: Development and Application of an Evaluation Scale.
Studies in health technology and informatics, 317:11-19.
BACKGROUND: In the context of the telematics infrastructure, new data usage regulations, and the growing potential of artificial intelligence, cloud computing plays a key role in driving digitalization in the German hospital sector.
METHODS: Against this background, the study aims to develop and validate a scale for assessing the cloud readiness of German hospitals. It uses the TPOM (Technology, People, Organization, Macro-Environment) framework to create a scoring system. A survey involving 110 Chief Information Officers (CIOs) from German hospitals was conducted, followed by an exploratory factor analysis and reliability testing to refine the items, resulting in a final set of 30 items.
RESULTS: The analysis confirmed the scale's statistical robustness and identified key factors contributing to cloud readiness. These include IT security in the "technology" dimension; collaborative research and acceptance of the need to make high-quality data available in the "people" dimension; scalability of IT resources in the "organization" dimension; and legal aspects in the "macro-environment" dimension. The macro-environment dimension emerged as particularly stable, highlighting the critical role of regulatory compliance in the healthcare sector.
CONCLUSION: The findings suggest a certain degree of cloud readiness among German hospitals, with potential for improvement in all four dimensions. Systemically, legal requirements and a challenging political environment are top concerns for CIOs, impacting their cloud readiness.
Additional Links: PMID-39234702
@article {pmid39234702,
year = {2024},
author = {Holtz, A and Liebe, JD},
title = {Cloud Readiness of German Hospitals: Development and Application of an Evaluation Scale.},
journal = {Studies in health technology and informatics},
volume = {317},
number = {},
pages = {11-19},
doi = {10.3233/SHTI240832},
pmid = {39234702},
issn = {1879-8365},
mesh = {Germany ; *Cloud Computing ; Hospitals ; Computer Security ; Humans ; Surveys and Questionnaires ; },
abstract = {BACKGROUND: In the context of the telematics infrastructure, new data usage regulations, and the growing potential of artificial intelligence, cloud computing plays a key role in driving digitalization in the German hospital sector.
METHODS: Against this background, the study aims to develop and validate a scale for assessing the cloud readiness of German hospitals. It uses the TPOM (Technology, People, Organization, Macro-Environment) framework to create a scoring system. A survey involving 110 Chief Information Officers (CIOs) from German hospitals was conducted, followed by an exploratory factor analysis and reliability testing to refine the items, resulting in a final set of 30 items.
RESULTS: The analysis confirmed the scale's statistical robustness and identified key factors contributing to cloud readiness. These include IT security in the "technology" dimension; collaborative research and acceptance of the need to make high-quality data available in the "people" dimension; scalability of IT resources in the "organization" dimension; and legal aspects in the "macro-environment" dimension. The macro-environment dimension emerged as particularly stable, highlighting the critical role of regulatory compliance in the healthcare sector.
CONCLUSION: The findings suggest a certain degree of cloud readiness among German hospitals, with potential for improvement in all four dimensions. Systemically, legal requirements and a challenging political environment are top concerns for CIOs, impacting their cloud readiness.},
}
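Scale refinement of the kind described — exploratory factor analysis followed by reliability testing — conventionally reports Cronbach's alpha per dimension. A minimal sketch, assuming the survey items are coded numerically and grouped by TPOM dimension (file and column names are hypothetical):

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

survey = pd.read_csv("cio_survey.csv")  # hypothetical export of the 110 CIO responses
tech_items = survey[["it_security_1", "it_security_2", "virtualization_1"]]
print("technology dimension alpha:", cronbach_alpha(tech_items))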
RevDate: 2024-09-04
Fog-assisted de-duplicated data exchange in distributed edge computing networks.
Scientific reports, 14(1):20595.
The Internet of Things (IoT) generates substantial data through sensors for diverse applications, such as healthcare services. This article addresses the challenge of efficiently utilizing resources in resource-scarce IoT-enabled sensors to enhance data collection, transmission, and storage. Redundant data transmission from sensors covering overlapping areas incurs additional communication and storage costs. Existing schemes, namely Asymmetric Extremum (AE) and Rapid Asymmetric Maximum (RAM), employ fixed and variable-sized windows during chunking. However, these schemes face issues when selecting the index value that decides the variable window size, which may remain zero or very low, resulting in poor deduplication. This article resolves this issue with the proposed Controlled Cut-point Identification Algorithm (CCIA), designed to restrict the variable-sized window to a certain threshold. The index value that decides the threshold is always larger than half the size of the fixed window. This helps to find more duplicates, but an upper-limit offset is also applied to avoid unnecessarily large windows, which may cause extensive computation costs. Extensive simulations were performed by deploying Windows Communication Foundation services in the Azure cloud. The results demonstrate the superiority of CCIA in various metrics, including chunk number, average chunk size, minimum and maximum chunk number, variable chunking size, and probability of failure for cut-point identification. In comparison to its competitors, RAM and AE, CCIA exhibits better performance across key parameters, outperforming them in total number of chunks (by 6.81% and 14.17%), average number of chunks (4.39% and 18.45%), and minimum chunk size (153% and 190%). These results highlight the effectiveness of CCIA in optimizing data transmission and storage within IoT systems, showcasing its potential for improved resource utilization and reduced operational costs.
Additional Links: PMID-39232132
@article {pmid39232132,
year = {2024},
author = {Said, G and Ghani, A and Ullah, A and Alzahrani, A and Azeem, M and Ahmad, R and Kim, DH},
title = {Fog-assisted de-duplicated data exchange in distributed edge computing networks.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {20595},
pmid = {39232132},
issn = {2045-2322},
abstract = {The Internet of Things (IoT) generates substantial data through sensors for diverse applications, such as healthcare services. This article addresses the challenge of efficiently utilizing resources in resource-scarce IoT-enabled sensors to enhance data collection, transmission, and storage. Redundant data transmission from sensors covering overlapping areas incurs additional communication and storage costs. Existing schemes, namely Asymmetric Extremum (AE) and Rapid Asymmetric Maximum (RAM), employ fixed and variable-sized windows during chunking. However, these schemes face issues when selecting the index value that decides the variable window size, which may remain zero or very low, resulting in poor deduplication. This article resolves this issue with the proposed Controlled Cut-point Identification Algorithm (CCIA), designed to restrict the variable-sized window to a certain threshold. The index value that decides the threshold is always larger than half the size of the fixed window. This helps to find more duplicates, but an upper-limit offset is also applied to avoid unnecessarily large windows, which may cause extensive computation costs. Extensive simulations were performed by deploying Windows Communication Foundation services in the Azure cloud. The results demonstrate the superiority of CCIA in various metrics, including chunk number, average chunk size, minimum and maximum chunk number, variable chunking size, and probability of failure for cut-point identification. In comparison to its competitors, RAM and AE, CCIA exhibits better performance across key parameters, outperforming them in total number of chunks (by 6.81% and 14.17%), average number of chunks (4.39% and 18.45%), and minimum chunk size (153% and 190%). These results highlight the effectiveness of CCIA in optimizing data transmission and storage within IoT systems, showcasing its potential for improved resource utilization and reduced operational costs.},
}
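The two CCIA constraints stated above — the cut-point index must exceed half the fixed window, and an upper-limit offset caps the window size — can be illustrated with a toy content-defined chunker. This is a simplified stand-in for the AE/RAM family under those two bounds, not the authors' exact algorithm; the window sizes are arbitrary.

import hashlib

def ccia_style_chunks(data: bytes, fixed_w: int = 64, max_chunk: int = 512):
    # Lower bound: never cut before half the fixed window (plus one),
    # mirroring the CCIA index threshold. Upper bound: force a cut at
    # max_chunk so a window cannot grow without limit.
    lower = fixed_w // 2 + 1
    chunks, start = [], 0
    i = start + lower
    while i < len(data):
        length = i - start
        # Cut when the byte is a strict maximum of the preceding window,
        # or when the upper-limit offset is reached.
        if data[i] > max(data[start:i]) or length >= max_chunk:
            chunks.append(data[start:i + 1])
            start = i + 1
            i = start + lower
        else:
            i += 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

chunks = ccia_style_chunks(b"sensor-reading-0042 " * 200)
unique = {hashlib.sha256(c).hexdigest() for c in chunks}  # deduplicated store
print(len(chunks), "chunks,", len(unique), "unique")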
RevDate: 2024-09-04
CmpDate: 2024-09-04
A unified web cloud computing platform MiMedSurv for microbiome causal mediation analysis with survival responses.
Scientific reports, 14(1):20650.
In human microbiome studies, mediation analysis has recently been spotlighted as a practical and powerful analytic tool to survey the causal roles of the microbiome as a mediator explaining the observed relationships between a medical treatment/environmental exposure and a human disease. We also note that, in clinical research, investigators often trace disease progression sequentially in time; as such, time-to-event (e.g., time-to-disease, time-to-cure) responses, known as survival responses, are prevalent as a surrogate variable for human health or disease. In this paper, we introduce a web cloud computing platform, named microbiome mediation analysis with survival responses (MiMedSurv), for comprehensive microbiome mediation analysis with survival responses in a user-friendly web environment. MiMedSurv is an extension of our prior web cloud computing platform, microbiome mediation analysis (MiMed), to survival responses. Its two main distinguishing features are as follows. First, MiMedSurv conducts baseline exploratory non-mediational survival analysis, not involving the microbiome, to survey the disparity in survival response between medical treatments/environmental exposures. Then, MiMedSurv identifies the mediating roles of the microbiome in various aspects: (i) as a microbial ecosystem using ecological indices (e.g., alpha and beta diversity indices) and (ii) as individual microbial taxa in various hierarchies (e.g., phyla, classes, orders, families, genera, species). To illustrate its use, we survey the mediating roles of the gut microbiome between antibiotic treatment and time-to-type 1 diabetes. MiMedSurv is freely available on our web server (http://mimedsurv.micloud.kr).
Additional Links: PMID-39232070
@article {pmid39232070,
year = {2024},
author = {Jang, H and Koh, H},
title = {A unified web cloud computing platform MiMedSurv for microbiome causal mediation analysis with survival responses.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {20650},
pmid = {39232070},
issn = {2045-2322},
support = {2021R1C1C1013861//National Research Foundation of Korea/ ; },
mesh = {Humans ; *Microbiota ; *Cloud Computing ; *Internet ; Software ; Survival Analysis ; },
abstract = {In human microbiome studies, mediation analysis has recently been spotlighted as a practical and powerful analytic tool to survey the causal roles of the microbiome as a mediator explaining the observed relationships between a medical treatment/environmental exposure and a human disease. We also note that, in clinical research, investigators often trace disease progression sequentially in time; as such, time-to-event (e.g., time-to-disease, time-to-cure) responses, known as survival responses, are prevalent as a surrogate variable for human health or disease. In this paper, we introduce a web cloud computing platform, named microbiome mediation analysis with survival responses (MiMedSurv), for comprehensive microbiome mediation analysis with survival responses in a user-friendly web environment. MiMedSurv is an extension of our prior web cloud computing platform, microbiome mediation analysis (MiMed), to survival responses. Its two main distinguishing features are as follows. First, MiMedSurv conducts baseline exploratory non-mediational survival analysis, not involving the microbiome, to survey the disparity in survival response between medical treatments/environmental exposures. Then, MiMedSurv identifies the mediating roles of the microbiome in various aspects: (i) as a microbial ecosystem using ecological indices (e.g., alpha and beta diversity indices) and (ii) as individual microbial taxa in various hierarchies (e.g., phyla, classes, orders, families, genera, species). To illustrate its use, we survey the mediating roles of the gut microbiome between antibiotic treatment and time-to-type 1 diabetes. MiMedSurv is freely available on our web server (http://mimedsurv.micloud.kr).},
}
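The baseline non-mediational step the platform performs first — comparing survival between exposure groups before any microbiome variables enter the model — corresponds to a standard Kaplan-Meier/log-rank comparison. A minimal sketch with the lifelines package; the file and column names are hypothetical:

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")  # assumed columns: time, event, group
a = df[df.group == "antibiotic"]
b = df[df.group == "control"]

# Exploratory disparity in survival response between exposures.
res = logrank_test(a.time, b.time, event_observed_A=a.event, event_observed_B=b.event)
print("log-rank p-value:", res.p_value)

km = KaplanMeierFitter().fit(a.time, a.event, label="antibiotic")
# km.plot_survival_function() would draw the curve for visual comparison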
RevDate: 2024-09-03
Water-glycan interactions drive the SARS-CoV-2 spike dynamics: insights into glycan-gate control and camouflage mechanisms.
Chemical science [Epub ahead of print].
To develop therapeutic strategies against COVID-19, we introduce a high-resolution all-atom polarizable model capturing many-body effects of protein, glycan, solvent, and membrane components in SARS-CoV-2 spike protein open and closed states. Employing μs-long molecular dynamics simulations powered by high-performance cloud-computing and unsupervised density-driven adaptive sampling, we investigated the differences in bulk-solvent-glycan and protein-solvent-glycan interfaces between these states. We unraveled a sophisticated solvent-glycan polarization interaction network involving the N165/N343 glycan-gate patterns that provide structural support for the open state and identified key water molecules that could potentially be targeted to destabilize this configuration. In the closed state, the reduced solvent polarization diminishes the overall N165/N343 dipoles, yet internal interactions and a reorganized sugar coat stabilize this state. Despite variations, our glycan-solvent accessibility analysis reveals the glycan shield capability to conserve constant interactions with the solvent, effectively camouflaging the virus from immune detection in both states. The presented insights advance our comprehension of viral pathogenesis at an atomic level, offering potential to combat COVID-19.
Additional Links: PMID-39220162
@article {pmid39220162,
year = {2024},
author = {Blazhynska, M and Lagardère, L and Liu, C and Adjoua, O and Ren, P and Piquemal, JP},
title = {Water-glycan interactions drive the SARS-CoV-2 spike dynamics: insights into glycan-gate control and camouflage mechanisms.},
journal = {Chemical science},
volume = {},
number = {},
pages = {},
pmid = {39220162},
issn = {2041-6520},
abstract = {To develop therapeutic strategies against COVID-19, we introduce a high-resolution all-atom polarizable model capturing many-body effects of protein, glycan, solvent, and membrane components in SARS-CoV-2 spike protein open and closed states. Employing μs-long molecular dynamics simulations powered by high-performance cloud-computing and unsupervised density-driven adaptive sampling, we investigated the differences in bulk-solvent-glycan and protein-solvent-glycan interfaces between these states. We unraveled a sophisticated solvent-glycan polarization interaction network involving the N165/N343 glycan-gate patterns that provide structural support for the open state and identified key water molecules that could potentially be targeted to destabilize this configuration. In the closed state, the reduced solvent polarization diminishes the overall N165/N343 dipoles, yet internal interactions and a reorganized sugar coat stabilize this state. Despite variations, our glycan-solvent accessibility analysis reveals the glycan shield capability to conserve constant interactions with the solvent, effectively camouflaging the virus from immune detection in both states. The presented insights advance our comprehension of viral pathogenesis at an atomic level, offering potential to combat COVID-19.},
}
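Water-glycan interaction analyses of this kind are commonly quantified by counting solvent contacts around the gating glycans frame by frame. A sketch with MDAnalysis; the topology/trajectory file names, the water residue name, and the selection standing in for the N165/N343 glycans are assumptions that depend on the force field and structure preparation:

import MDAnalysis as mda

u = mda.Universe("spike_glycosylated.psf", "spike_open.dcd")  # hypothetical files

# Stand-in for the glycans attached at N165/N343; real glycan residues
# carry their own numbering in the prepared system.
gate = u.select_atoms("resid 165 343")

# Waters within 3.5 A of the gate selection, re-evaluated every frame.
shell = u.select_atoms("resname TIP3 and around 3.5 group gate",
                       gate=gate, updating=True)

counts = [shell.n_atoms for ts in u.trajectory]
print("mean waters in first shell:", sum(counts) / len(counts))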
RevDate: 2024-09-01
Improving rapid flood impact assessment: An enhanced multi-sensor approach including a new flood mapping method based on Sentinel-2 data.
Journal of environmental management, 369:122326 pii:S0301-4797(24)02312-0 [Epub ahead of print].
Rapid flood impact assessment methods need complete and accurate flood maps to provide reliable information for disaster risk management, in particular for emergency response and for recovery and reconstruction plans. With the aim of improving the rapid assessment of flood impacts, this work presents a new impact assessment method characterized by an enhanced satellite multi-sensor approach for flood mapping, which improves the characterization of the hazard. This includes a novel flood mapping method based on the new multi-temporal Modified Normalized Difference Water Index (MNDWI), which uses multi-temporal statistics computed on time-series of Sentinel-2 multi-spectral satellite images. The multi-temporal aspect of the MNDWI improves the characterization of land cover over time and enhances the temporarily flooded areas, which can be extracted through a thresholding technique, allowing the delineation of more precise and complete flood maps. The methodology, if implemented in cloud-based environments such as Google Earth Engine (GEE), is computationally light and robust, allowing the derivation of flood maps in a matter of minutes, even for large areas. The flood mapping and impact assessment method has been applied to the seasonal flood that occurred in South Sudan in 2020, using Sentinel-1, Sentinel-2, and PlanetScope satellite imagery. Flood impacts were assessed considering damages to buildings, roads, and cropland. The multi-sensor approach estimated an impact of 57.4 million USD (considering a middle-bound scenario), higher than the estimates obtained using Sentinel-1 data only and Sentinel-2 data only (24% and 78% of the multi-sensor estimate, respectively). This work highlights the effectiveness and importance of considering multi-source satellite data for flood mapping in the context of disaster risk management, to better inform disaster response, recovery, and reconstruction plans.
Additional Links: PMID-39217900
@article {pmid39217900,
year = {2024},
author = {Cian, F and Delgado Blasco, JM and Ivanescu, C},
title = {Improving rapid flood impact assessment: An enhanced multi-sensor approach including a new flood mapping method based on Sentinel-2 data.},
journal = {Journal of environmental management},
volume = {369},
number = {},
pages = {122326},
doi = {10.1016/j.jenvman.2024.122326},
pmid = {39217900},
issn = {1095-8630},
abstract = {Rapid flood impact assessment methods need complete and accurate flood maps to provide reliable information for disaster risk management, in particular for emergency response and for recovery and reconstruction plans. With the aim of improving the rapid assessment of flood impacts, this work presents a new impact assessment method characterized by an enhanced satellite multi-sensor approach for flood mapping, which improves the characterization of the hazard. This includes a novel flood mapping method based on the new multi-temporal Modified Normalized Difference Water Index (MNDWI), which uses multi-temporal statistics computed on time-series of Sentinel-2 multi-spectral satellite images. The multi-temporal aspect of the MNDWI improves the characterization of land cover over time and enhances the temporarily flooded areas, which can be extracted through a thresholding technique, allowing the delineation of more precise and complete flood maps. The methodology, if implemented in cloud-based environments such as Google Earth Engine (GEE), is computationally light and robust, allowing the derivation of flood maps in a matter of minutes, even for large areas. The flood mapping and impact assessment method has been applied to the seasonal flood that occurred in South Sudan in 2020, using Sentinel-1, Sentinel-2, and PlanetScope satellite imagery. Flood impacts were assessed considering damages to buildings, roads, and cropland. The multi-sensor approach estimated an impact of 57.4 million USD (considering a middle-bound scenario), higher than the estimates obtained using Sentinel-1 data only and Sentinel-2 data only (24% and 78% of the multi-sensor estimate, respectively). This work highlights the effectiveness and importance of considering multi-source satellite data for flood mapping in the context of disaster risk management, to better inform disaster response, recovery, and reconstruction plans.},
}
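The MNDWI is a normalized difference of Sentinel-2's green (B3) and SWIR (B11) bands, and the multi-temporal variant aggregates it over a time series. A sketch of the general pattern in the Earth Engine Python API; the AOI, date ranges, aggregation statistic, and threshold below are illustrative placeholders, not the paper's calibrated values:

import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([29.0, 6.0, 32.0, 9.0])  # rough South Sudan box

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterBounds(aoi)
        .filterDate("2017-01-01", "2020-12-31"))

# Per-image MNDWI = (Green - SWIR) / (Green + SWIR).
mndwi = s2.map(lambda img: img.normalizedDifference(["B3", "B11"]).rename("MNDWI"))

baseline = mndwi.median()  # long-term per-pixel behaviour
flood_peak = mndwi.filterDate("2020-08-01", "2020-10-31").max()

# Pixels whose flood-season MNDWI rises well above their long-term median
# are flagged as temporarily flooded; 0.3 is an illustrative threshold.
flooded = flood_peak.subtract(baseline).gt(0.3).selfMask().clip(aoi)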
RevDate: 2024-08-29
Beehive Smart Detector Device for the Detection of Critical Conditions That Utilize Edge Device Computations and Deep Learning Inferences.
Sensors (Basel, Switzerland), 24(16):.
This paper presents a new edge-based detection process implemented in an embedded IoT device called the Bee Smart Detection node to detect catastrophic apiary events. Such events include swarming, queen loss, and the detection of Colony Collapse Disorder (CCD) conditions. Two deep learning sub-processes are used for this purpose. The first uses a fuzzy multi-layered neural network of variable depths called fuzzy-stranded-NN to detect CCD conditions based on temperature and humidity measurements inside the beehive. The second utilizes a deep learning CNN model to detect swarming and queen loss cases based on sound recordings. The proposed processes have been implemented into autonomous Bee Smart Detection (BeeSD) IoT devices that transmit their measurements and the detection results to the cloud over Wi-Fi. The BeeSD devices have been tested for ease of use, autonomous operation, deep learning model inference accuracy, and inference execution speed. The author presents the experimental results of the fuzzy-stranded-NN model for detecting critical conditions and of the deep learning CNN models for detecting swarming and queen loss. In the presented experiments, the stranded-NN achieved accuracy of up to 95%, while the ResNet-50 model achieved up to 99% accuracy for detecting swarming or queen loss events. The ResNet-18 model is also the fastest-inference replacement for the ResNet-50 model, achieving up to 93% accuracy. Finally, cross-comparison of the deep learning models with machine learning ones shows that the deep learning models provide at least 3-5% better accuracy.
Additional Links: PMID-39205138
@article {pmid39205138,
year = {2024},
author = {Kontogiannis, S},
title = {Beehive Smart Detector Device for the Detection of Critical Conditions That Utilize Edge Device Computations and Deep Learning Inferences.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
pmid = {39205138},
issn = {1424-8220},
abstract = {This paper presents a new edge-based detection process implemented in an embedded IoT device called the Bee Smart Detection node to detect catastrophic apiary events. Such events include swarming, queen loss, and the detection of Colony Collapse Disorder (CCD) conditions. Two deep learning sub-processes are used for this purpose. The first uses a fuzzy multi-layered neural network of variable depths called fuzzy-stranded-NN to detect CCD conditions based on temperature and humidity measurements inside the beehive. The second utilizes a deep learning CNN model to detect swarming and queen loss cases based on sound recordings. The proposed processes have been implemented into autonomous Bee Smart Detection (BeeSD) IoT devices that transmit their measurements and the detection results to the cloud over Wi-Fi. The BeeSD devices have been tested for ease of use, autonomous operation, deep learning model inference accuracy, and inference execution speed. The author presents the experimental results of the fuzzy-stranded-NN model for detecting critical conditions and of the deep learning CNN models for detecting swarming and queen loss. In the presented experiments, the stranded-NN achieved accuracy of up to 95%, while the ResNet-50 model achieved up to 99% accuracy for detecting swarming or queen loss events. The ResNet-18 model is also the fastest-inference replacement for the ResNet-50 model, achieving up to 93% accuracy. Finally, cross-comparison of the deep learning models with machine learning ones shows that the deep learning models provide at least 3-5% better accuracy.},
}
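Sound-based swarming/queen-loss detection of this kind is typically framed as image classification over spectrograms. The sketch below feeds a log-mel spectrogram to a ResNet-18 and shows only the shape-plumbing: the network here is untrained, the file name is hypothetical, and the paper's actual preprocessing and class set are not specified in the abstract.

import torch
import torchaudio
from torchvision.models import resnet18

wav, sr = torchaudio.load("hive_clip.wav")  # hypothetical hive recording
wav = wav.mean(dim=0, keepdim=True)         # mix down to mono: (1, samples)

spec = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=64)(wav)
x = spec.log1p().expand(3, -1, -1).unsqueeze(0)  # replicate to 3 channels for ResNet

model = resnet18(num_classes=3)  # e.g., normal / swarming / queen loss (assumed classes)
model.eval()
with torch.no_grad():
    probs = model(x).softmax(dim=-1)
print(probs)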
RevDate: 2024-08-29
Decentralized System Synchronization among Collaborative Robots via 5G Technology.
Sensors (Basel, Switzerland), 24(16): pii:s24165382.
In this article, we propose a distributed synchronization solution to achieve decentralized coordination in a system of collaborative robots. This is done by leveraging cloud-based computing and 5G technology to exchange causal ordering messages between the robots, eliminating the need for centralized control entities or programmable logic controllers in the system. The proposed solution is described, mathematically formulated, implemented in software, and validated over realistic network conditions. Further, the performance of the decentralized solution via 5G technology is compared to that achieved with traditional coordinated/uncoordinated cabled control systems. The results indicate that the proposed decentralized solution leveraging cloud-based 5G wireless is scalable to systems of up to 10 collaborative robots with comparable efficiency to that from standard cabled systems. The proposed solution has direct application in the control of producer-consumer and automated assembly line robotic applications.
Additional Links: PMID-39205076
@article {pmid39205076,
year = {2024},
author = {Celik, AE and Rodriguez, I and Ayestaran, RG and Yavuz, SC},
title = {Decentralized System Synchronization among Collaborative Robots via 5G Technology.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165382},
pmid = {39205076},
issn = {1424-8220},
support = {RYC-2020-030676-I//Ministerio de Ciencia, Innovación y Universidades/ ; },
abstract = {In this article, we propose a distributed synchronization solution to achieve decentralized coordination in a system of collaborative robots. This is done by leveraging cloud-based computing and 5G technology to exchange causal ordering messages between the robots, eliminating the need for centralized control entities or programmable logic controllers in the system. The proposed solution is described, mathematically formulated, implemented in software, and validated over realistic network conditions. Further, the performance of the decentralized solution via 5G technology is compared to that achieved with traditional coordinated/uncoordinated cabled control systems. The results indicate that the proposed decentralized solution leveraging cloud-based 5G wireless is scalable to systems of up to 10 collaborative robots with comparable efficiency to that from standard cabled systems. The proposed solution has direct application in the control of producer-consumer and automated assembly line robotic applications.},
}
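Causal ordering of messages without a central controller is classically achieved with vector clocks, one counter per robot. The abstract does not spell out the authors' protocol, so the sketch below shows the standard mechanism rather than their specific implementation.

class VectorClock:
    """Per-robot logical clock for causally ordering broadcast messages."""

    def __init__(self, robot_id: int, n_robots: int):
        self.id = robot_id
        self.v = [0] * n_robots

    def stamp(self) -> list[int]:
        """Tick before sending; the returned vector travels with the message."""
        self.v[self.id] += 1
        return list(self.v)

    def deliverable(self, sender: int, msg_v: list[int]) -> bool:
        """Causal delivery: next message from sender, nothing else missing."""
        return msg_v[sender] == self.v[sender] + 1 and all(
            msg_v[k] <= self.v[k] for k in range(len(self.v)) if k != sender)

    def deliver(self, sender: int, msg_v: list[int]) -> None:
        """Merge the sender's knowledge into the local clock on delivery."""
        self.v = [max(a, b) for a, b in zip(self.v, msg_v)]

A message that is not yet deliverable is buffered until the missing causally prior messages arrive, which is what removes the need for a central coordinator or PLC.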
RevDate: 2024-08-29
A Survey on IoT Application Architectures.
Sensors (Basel, Switzerland), 24(16): pii:s24165320.
The proliferation of the IoT has led to the development of diverse application architectures to optimize IoT systems' deployment, operation, and maintenance. This survey provides a comprehensive overview of the existing IoT application architectures, highlighting their key features, strengths, and limitations. The architectures are categorized based on their deployment models, such as cloud, edge, and fog computing approaches, each offering distinct advantages regarding scalability, latency, and resource efficiency. Cloud architectures leverage centralized data processing and storage capabilities to support large-scale IoT applications but often suffer from high latency and bandwidth constraints. Edge architectures mitigate these issues by bringing computation closer to the data source, enhancing real-time processing, and reducing network congestion. Fog architectures combine the strengths of both cloud and edge paradigms, offering a balanced solution for complex IoT environments. This survey also examines emerging trends and technologies in IoT application management, such as the solutions provided by the major IoT service providers like Intel, AWS, Microsoft Azure, and GCP. Through this study, the survey identifies latency, privacy, and deployment difficulties as key areas for future research. It highlights the need to advance IoT Edge architectures to reduce network traffic, improve data privacy, and enhance interoperability by developing multi-application and multi-protocol edge gateways for efficient IoT application management.
Additional Links: PMID-39205014
@article {pmid39205014,
year = {2024},
author = {Dauda, A and Flauzac, O and Nolot, F},
title = {A Survey on IoT Application Architectures.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165320},
pmid = {39205014},
issn = {1424-8220},
support = {1711/20//Petroleum Technology Development Fund (PTDF) Nigeria/ ; },
abstract = {The proliferation of the IoT has led to the development of diverse application architectures to optimize IoT systems' deployment, operation, and maintenance. This survey provides a comprehensive overview of the existing IoT application architectures, highlighting their key features, strengths, and limitations. The architectures are categorized based on their deployment models, such as cloud, edge, and fog computing approaches, each offering distinct advantages regarding scalability, latency, and resource efficiency. Cloud architectures leverage centralized data processing and storage capabilities to support large-scale IoT applications but often suffer from high latency and bandwidth constraints. Edge architectures mitigate these issues by bringing computation closer to the data source, enhancing real-time processing, and reducing network congestion. Fog architectures combine the strengths of both cloud and edge paradigms, offering a balanced solution for complex IoT environments. This survey also examines emerging trends and technologies in IoT application management, such as the solutions provided by the major IoT service providers like Intel, AWS, Microsoft Azure, and GCP. Through this study, the survey identifies latency, privacy, and deployment difficulties as key areas for future research. It highlights the need to advance IoT Edge architectures to reduce network traffic, improve data privacy, and enhance interoperability by developing multi-application and multi-protocol edge gateways for efficient IoT application management.},
}
RevDate: 2024-08-29
An End-to-End Deep Learning Framework for Fault Detection in Marine Machinery.
Sensors (Basel, Switzerland), 24(16): pii:s24165310.
The Industrial Internet of Things has enabled the integration and analysis of vast volumes of data across various industries, with the maritime sector being no exception. Advances in cloud computing and deep learning (DL) are continuously reshaping the industry, particularly in optimizing maritime operations such as Predictive Maintenance (PdM). In this study, we propose a novel DL-based framework focusing on the fault detection task of PdM in marine operations, leveraging time-series data from sensors installed on shipboard machinery. The framework is designed as a scalable and cost-efficient software solution, encompassing all stages from data collection and pre-processing at the edge to the deployment and lifecycle management of DL models. The proposed DL architecture utilizes Graph Attention Networks (GATs) to extract spatio-temporal information from the time-series data and provides explainable predictions through a feature-wise scoring mechanism. Additionally, a custom evaluation metric with real-world applicability is employed, prioritizing both prediction accuracy and the timeliness of fault identification. To demonstrate the effectiveness of our framework, we conduct experiments on three types of open-source datasets relevant to PdM: electrical data, bearing datasets, and data from water circulation experiments.
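For readers unfamiliar with the attention mechanism behind GATs, the sketch below implements a generic single-head graph attention layer (Velickovic-style) in NumPy; it is a minimal illustration of the building block, not the authors' architecture.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head graph attention: H (N,F) node features, A (N,N) adjacency
    with self-loops, W (F,Fp) projection, a (2*Fp,) attention vector."""
    Z = H @ W
    Fp = Z.shape[1]
    e = (Z @ a[:Fp])[:, None] + (Z @ a[Fp:])[None, :]  # logits e_ij
    e = np.where(e > 0, e, slope * e)                  # LeakyReLU
    e = np.where(A > 0, e, -1e9)                       # mask non-edges
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)              # row-wise softmax
    return att @ Z                                     # aggregated features

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                # e.g., 5 sensors, 8 features each
A = np.ones((5, 5))                        # fully connected sensor graph
out = gat_layer(H, A, rng.normal(size=(8, 4)), rng.normal(size=8))
print(out.shape)                           # (5, 4)
```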
Additional Links: PMID-39205003
@article {pmid39205003,
year = {2024},
author = {Rigas, S and Tzouveli, P and Kollias, S},
title = {An End-to-End Deep Learning Framework for Fault Detection in Marine Machinery.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165310},
pmid = {39205003},
issn = {1424-8220},
support = {ATHINAIKI RIVIERA - ATTP4-0325990//Greece and European Union: Attica 2014-2020/ ; },
abstract = {The Industrial Internet of Things has enabled the integration and analysis of vast volumes of data across various industries, with the maritime sector being no exception. Advances in cloud computing and deep learning (DL) are continuously reshaping the industry, particularly in optimizing maritime operations such as Predictive Maintenance (PdM). In this study, we propose a novel DL-based framework focusing on the fault detection task of PdM in marine operations, leveraging time-series data from sensors installed on shipboard machinery. The framework is designed as a scalable and cost-efficient software solution, encompassing all stages from data collection and pre-processing at the edge to the deployment and lifecycle management of DL models. The proposed DL architecture utilizes Graph Attention Networks (GATs) to extract spatio-temporal information from the time-series data and provides explainable predictions through a feature-wise scoring mechanism. Additionally, a custom evaluation metric with real-world applicability is employed, prioritizing both prediction accuracy and the timeliness of fault identification. To demonstrate the effectiveness of our framework, we conduct experiments on three types of open-source datasets relevant to PdM: electrical data, bearing datasets, and data from water circulation experiments.},
}
RevDate: 2024-08-29
Presenting the COGNIFOG Framework: Architecture, Building Blocks and Road toward Cognitive Connectivity.
Sensors (Basel, Switzerland), 24(16): pii:s24165283.
In the era of ubiquitous computing, the challenges imposed by the increasing demand for real-time data processing, security, and energy efficiency call for innovative solutions. The emergence of fog computing has provided a promising paradigm to address these challenges by bringing computational resources closer to data sources. Despite its advantages, fog computing poses challenges in heterogeneous environments in terms of resource allocation and management, provisioning, security, and connectivity, among others. This paper introduces COGNIFOG, a novel cognitive fog framework currently under development, designed to leverage intelligent, decentralized decision-making processes, machine learning algorithms, and distributed computing principles to enable autonomous operation, adaptability, and scalability across the IoT-edge-cloud continuum. By integrating cognitive capabilities, COGNIFOG is expected to increase the efficiency and reliability of next-generation computing environments, potentially providing a seamless bridge between the physical and digital worlds. Preliminary experimental results with a limited set of connectivity-related COGNIFOG building blocks show promising improvements in network resource utilization in a real-world-based IoT scenario. Overall, this work paves the way for further developments on the framework, aimed at making it more intelligent, resilient, and aligned with the ever-evolving demands of next-generation computing environments.
Additional Links: PMID-39204979
@article {pmid39204979,
year = {2024},
author = {Adame, T and Amri, E and Antonopoulos, G and Azaiez, S and Berne, A and Camargo, JS and Kakoulidis, H and Kleisarchaki, S and Llamedo, A and Prasinos, M and Psara, K and Shumaiev, K},
title = {Presenting the COGNIFOG Framework: Architecture, Building Blocks and Road toward Cognitive Connectivity.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165283},
pmid = {39204979},
issn = {1424-8220},
support = {101092968//European Union/ ; },
abstract = {In the era of ubiquitous computing, the challenges imposed by the increasing demand for real-time data processing, security, and energy efficiency call for innovative solutions. The emergence of fog computing has provided a promising paradigm to address these challenges by bringing computational resources closer to data sources. Despite its advantages, fog computing poses challenges in heterogeneous environments in terms of resource allocation and management, provisioning, security, and connectivity, among others. This paper introduces COGNIFOG, a novel cognitive fog framework currently under development, designed to leverage intelligent, decentralized decision-making processes, machine learning algorithms, and distributed computing principles to enable autonomous operation, adaptability, and scalability across the IoT-edge-cloud continuum. By integrating cognitive capabilities, COGNIFOG is expected to increase the efficiency and reliability of next-generation computing environments, potentially providing a seamless bridge between the physical and digital worlds. Preliminary experimental results with a limited set of connectivity-related COGNIFOG building blocks show promising improvements in network resource utilization in a real-world-based IoT scenario. Overall, this work paves the way for further developments on the framework, aimed at making it more intelligent, resilient, and aligned with the ever-evolving demands of next-generation computing environments.},
}
RevDate: 2024-08-29
Integral-Valued Pythagorean Fuzzy-Set-Based Dyna Q+ Framework for Task Scheduling in Cloud Computing.
Sensors (Basel, Switzerland), 24(16): pii:s24165272.
Task scheduling is a critical challenge in cloud computing systems, greatly impacting their performance. Task scheduling is a nondeterministic polynomial-time-hard (NP-hard) problem, which complicates the search for near-optimal solutions. Five major uncertainty parameters, i.e., security, traffic, workload, availability, and price, influence task scheduling decisions. The primary rationale for selecting these uncertainty parameters lies in the challenge of accurately measuring their values, as empirical estimations often diverge from the actual values. The integral-valued Pythagorean fuzzy set (IVPFS) is a promising mathematical framework for dealing with parametric uncertainties. The Dyna Q+ algorithm extends the Dyna Q agent for dynamic computing environments by granting bonus rewards to long-unvisited states. In this paper, the Dyna Q+ agent is enriched with the IVPFS mathematical framework to make intelligent task scheduling decisions. The performance of the proposed IVPFS Dyna Q+ task scheduler is tested using the CloudSim 3.3 simulator. The execution time is reduced by 90%, the makespan time is also reduced by 90%, the operation cost is below 50%, and the resource utilization rate is improved by 95%, with all of these metrics meeting the desired standards. The results are further validated using an expected value analysis methodology that confirms the good performance of the task scheduler. A better balance between exploration and exploitation through rigorous action-based learning is achieved by the Dyna Q+ agent.
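The Dyna Q+ bonus mechanism the abstract refers to can be sketched in a few lines of tabular reinforcement learning; the toy chain environment and hyperparameters below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def dyna_q_plus(env_step, n_states, n_actions, episodes=100, max_steps=200,
                alpha=0.1, gamma=0.95, eps=0.1, kappa=1e-3, planning=10):
    """Minimal tabular Dyna-Q+: direct Q-learning plus model-based planning,
    with an exploration bonus kappa*sqrt(tau) for transitions untried for
    tau time steps."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    model = {}                              # (s, a) -> (r, s_next)
    tau = np.zeros((n_states, n_actions))   # steps since (s, a) last tried
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            if rng.random() < eps:          # epsilon-greedy, random ties
                a = int(rng.integers(n_actions))
            else:
                a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
            r, s2, done = env_step(s, a)
            tau += 1
            tau[s, a] = 0
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            model[(s, a)] = (r, s2)
            keys = list(model)
            for _ in range(planning):       # planning sweeps with bonus
                ps, pa = keys[rng.integers(len(keys))]
                pr, ps2 = model[(ps, pa)]
                bonus = kappa * np.sqrt(tau[ps, pa])
                Q[ps, pa] += alpha * (pr + bonus + gamma * Q[ps2].max() - Q[ps, pa])
            s = s2
            if done:
                break
    return Q

def toy_env(s, a):
    """Hypothetical 5-state chain: action 1 moves forward, action 0 back."""
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == 4 else 0.0), s2, s2 == 4

print(dyna_q_plus(toy_env, n_states=5, n_actions=2).round(2))
```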
Additional Links: PMID-39204967
@article {pmid39204967,
year = {2024},
author = {Krishnamurthy, B and Shiva, SG},
title = {Integral-Valued Pythagorean Fuzzy-Set-Based Dyna Q+ Framework for Task Scheduling in Cloud Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165272},
pmid = {39204967},
issn = {1424-8220},
abstract = {Task scheduling is a critical challenge in cloud computing systems, greatly impacting their performance. Task scheduling is a nondeterministic polynomial-time-hard (NP-hard) problem, which complicates the search for near-optimal solutions. Five major uncertainty parameters, i.e., security, traffic, workload, availability, and price, influence task scheduling decisions. The primary rationale for selecting these uncertainty parameters lies in the challenge of accurately measuring their values, as empirical estimations often diverge from the actual values. The integral-valued Pythagorean fuzzy set (IVPFS) is a promising mathematical framework for dealing with parametric uncertainties. The Dyna Q+ algorithm extends the Dyna Q agent for dynamic computing environments by granting bonus rewards to long-unvisited states. In this paper, the Dyna Q+ agent is enriched with the IVPFS mathematical framework to make intelligent task scheduling decisions. The performance of the proposed IVPFS Dyna Q+ task scheduler is tested using the CloudSim 3.3 simulator. The execution time is reduced by 90%, the makespan time is also reduced by 90%, the operation cost is below 50%, and the resource utilization rate is improved by 95%, with all of these metrics meeting the desired standards. The results are further validated using an expected value analysis methodology that confirms the good performance of the task scheduler. A better balance between exploration and exploitation through rigorous action-based learning is achieved by the Dyna Q+ agent.},
}
RevDate: 2024-08-27
Learning Implicit Fields for Point Cloud Filtering.
IEEE transactions on visualization and computer graphics, PP: [Epub ahead of print].
Since point clouds acquired by scanners inevitably contain noise, recovering a clean version from a noisy point cloud is essential for further 3D geometry processing applications. Several data-driven approaches have been recently introduced to overcome the drawbacks of traditional filtering algorithms, such as less robust preservation of sharp features and tedious tuning of multiple parameters. Most of these methods achieve filtering by directly regressing the position/displacement of each point, which may blur detailed features and is prone to uneven distribution. In this paper, we propose a novel data-driven method that explores implicit fields. Our assumption is that the given noisy points implicitly define a surface, and we attempt to obtain a point's movement direction and distance separately based on the predicted signed distance fields (SDFs). Taking a noisy point cloud as input, we first obtain a consistent alignment by incorporating the global points into local patches. We then feed them into an encoder-decoder structure and predict a 7D vector consisting of SDFs. Subsequently, the distance can be obtained directly from the first element in the vector, and the movement direction can be obtained by computing the gradient from the last six elements (i.e., six surrounding SDFs). We finally obtain the filtered results by moving each point with its predicted distance along its movement direction. Our method can produce feature-preserving results without requiring explicit normals. Experiments demonstrate that our method visually outperforms state-of-the-art methods and generally produces better quantitative results than position-based methods (both learning and non-learning).
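The distance-plus-gradient step is easy to illustrate with central differences; in the sketch below, the offset ordering of the six surrounding SDFs and the step size h are assumptions made for illustration, not the paper's exact convention.

```python
import numpy as np

def filter_point(p, sdf7, h=0.01):
    """Move point p along the negative SDF gradient: sdf7[0] is the predicted
    distance at p; sdf7[1:7] are SDF values at p +/- h along each axis
    (order assumed here: +x, -x, +y, -y, +z, -z)."""
    d = sdf7[0]
    grad = np.array([
        (sdf7[1] - sdf7[2]) / (2 * h),   # central difference in x
        (sdf7[3] - sdf7[4]) / (2 * h),   # y
        (sdf7[5] - sdf7[6]) / (2 * h),   # z
    ])
    n = grad / (np.linalg.norm(grad) + 1e-12)  # unit movement direction
    return p - d * n                           # step onto the implicit surface

p = np.array([0.0, 0.0, 0.3])                 # noisy point above the plane z = 0
# For the plane z = 0, SDF(x, y, z) = z, so the seven predicted values are:
sdf7 = np.array([0.3, 0.3, 0.3, 0.3, 0.3, 0.31, 0.29])
print(filter_point(p, sdf7))                  # ~[0, 0, 0], i.e., on the surface
```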
Additional Links: PMID-39190508
@article {pmid39190508,
year = {2024},
author = {Wang, J and Lu, X and Wang, M and Hou, F and He, Y},
title = {Learning Implicit Fields for Point Cloud Filtering.},
journal = {IEEE transactions on visualization and computer graphics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TVCG.2024.3450699},
pmid = {39190508},
issn = {1941-0506},
abstract = {Since point clouds acquired by scanners inevitably contain noise, recovering a clean version from a noisy point cloud is essential for further 3D geometry processing applications. Several data-driven approaches have been recently introduced to overcome the drawbacks of traditional filtering algorithms, such as less robust preservation of sharp features and tedious tuning of multiple parameters. Most of these methods achieve filtering by directly regressing the position/displacement of each point, which may blur detailed features and is prone to uneven distribution. In this paper, we propose a novel data-driven method that explores implicit fields. Our assumption is that the given noisy points implicitly define a surface, and we attempt to obtain a point's movement direction and distance separately based on the predicted signed distance fields (SDFs). Taking a noisy point cloud as input, we first obtain a consistent alignment by incorporating the global points into local patches. We then feed them into an encoder-decoder structure and predict a 7D vector consisting of SDFs. Subsequently, the distance can be obtained directly from the first element in the vector, and the movement direction can be obtained by computing the gradient from the last six elements (i.e., six surrounding SDFs). We finally obtain the filtered results by moving each point with its predicted distance along its movement direction. Our method can produce feature-preserving results without requiring explicit normals. Experiments demonstrate that our method visually outperforms state-of-the-art methods and generally produces better quantitative results than position-based methods (both learning and non-learning).},
}
RevDate: 2024-08-27
Exploring Factors Influencing Pregnant Women's Perceptions and Attitudes Towards Midwifery Care in Romania: Implications for Maternal Health Education Strategies.
Nursing reports (Pavia, Italy), 14(3):1807-1818 pii:nursrep14030134.
BACKGROUND: Midwives are strong advocates for vaginal births. However, their visibility and accessibility are poorly perceived by women in Romania. Consequently, the women's options are limited to a single direction when pregnancy occurs, involving the family doctor, the obstetrician, and often an interventional technical approach at the time of birth. The aim of this research is to identify specific variables that affect the perceptions and attitudes of pregnant women towards the care provided by midwives. This knowledge could contribute to the development of more effective education and information strategies within maternal health services.
METHODS: A cross-sectional observational analytical survey was conducted in Romania among pregnant women from the general population. Data were collected through a self-administered questionnaire, with informed consent obtained from each participating pregnant woman. The questionnaire was administered online using the cloud-based Google Forms platform and was available on the internet for seven months, from January to July 2023. The questionnaire was distributed through various media channels, both individually and in communication groups, in the form of a link. All questions were mandatory, and the questionnaire could only be submitted after answering all questions.
RESULTS: A total of 1301 individual responses were collected. The analysis of the socio-demographic and obstetrical profile of the pregnant women revealed that approximately half, 689 (52.95%), of the participants were aged between 18-29 years, and 1060 (81.47%) of the participants were married. Among our group of 1301 pregnant women, 973 (74.78%) had higher education, and 987 (75.86%) had a regular job. A majority of the survey participants, 936 (71.94%), lived in an urban geographic area, while 476 (36.58%) had attended childbirth education courses, and 791 (60.79%) were in the third trimester of pregnancy. A total of 298 (22.9%) respondents did not want to give birth in a hospital, and roughly one-quarter, 347 (26.67%), did not place significant importance on control over the childbirth process.
CONCLUSIONS: The main factors influencing women's decisions regarding perinatal care and the importance of midwives as a component of the maternal-infant care team are modifiable, and thorough educational and psychological preparation would reduce the increasing predominance of preference for cesarean section, thereby promoting healthier and more woman- and child-centered perinatal care.
Additional Links: PMID-39189264
@article {pmid39189264,
year = {2024},
author = {Radu, MC and Armean, MS and Pop-Tudose, M and Medar, C and Manolescu, LSC},
title = {Exploring Factors Influencing Pregnant Women's Perceptions and Attitudes Towards Midwifery Care in Romania: Implications for Maternal Health Education Strategies.},
journal = {Nursing reports (Pavia, Italy)},
volume = {14},
number = {3},
pages = {1807-1818},
doi = {10.3390/nursrep14030134},
pmid = {39189264},
issn = {2039-4403},
abstract = {BACKGROUND: Midwives are strong advocates for vaginal births. However, their visibility and accessibility are poorly perceived by women in Romania. Consequently, the women's options are limited to a single direction when pregnancy occurs, involving the family doctor, the obstetrician, and often an interventional technical approach at the time of birth. The aim of this research is to identify specific variables that affect the perceptions and attitudes of pregnant women towards the care provided by midwives. This knowledge could contribute to the development of more effective education and information strategies within maternal health services.
METHODS: A cross-sectional observational analytical survey was conducted in Romania among pregnant women from the general population. Data were collected through a self-administered questionnaire, with informed consent obtained from each participating pregnant woman. The questionnaire was administered online using the cloud-based Google Forms platform and was available on the internet for seven months, from January to July 2023. The questionnaire was distributed through various media channels, both individually and in communication groups, in the form of a link. All questions were mandatory, and the questionnaire could only be submitted after answering all questions.
RESULTS: A total of 1301 individual responses were collected. The analysis of the socio-demographic and obstetrical profile of the pregnant women revealed that approximately half, 689 (52.95%), of the participants were aged between 18-29 years, and 1060 (81.47%) of the participants were married. Among our group of 1301 pregnant women, 973 (74.78%) had higher education, and 987 (75.86%) had a regular job. A majority of the survey participants, 936 (71.94%), lived in an urban geographic area, while 476 (36.58%) had attended childbirth education courses, and 791 (60.79%) were in the third trimester of pregnancy. A total of 298 (22.9%) respondents did not want to give birth in a hospital, and roughly one-quarter, 347 (26.67%), did not place significant importance on control over the childbirth process.
CONCLUSIONS: The main factors influencing women's decisions regarding perinatal care and the importance of midwives as a component of the maternal-infant care team are modifiable, and thorough educational and psychological preparation would reduce the increasing predominance of preference for cesarean section, thereby promoting healthier and more woman- and child-centered perinatal care.},
}
RevDate: 2024-08-26
An enhanced approach for predicting air pollution using quantum support vector machine.
Scientific reports, 14(1):19521.
The essence of quantum machine learning is to optimize problem-solving by executing machine learning algorithms on quantum computers and exploiting potent laws such as superposition and entanglement. The support vector machine (SVM) is widely recognized as one of the most effective classification machine learning techniques currently available. In conventional systems, however, the SVM kernel technique tends to slow down and even fail as datasets become increasingly complex or jumbled. To compare the execution time and accuracy of conventional SVM classification with those of quantum SVM classification, the appropriate quantum features for mapping need to be selected. As datasets grow more complex, it becomes increasingly important to select a feature map that matches or outperforms the classical classification. This paper utilizes conventional SVM to select an optimal feature map and benchmark dataset for predicting air quality. Experimental evidence demonstrates that the precision of quantum SVM surpasses that of classical SVM for air quality assessment. Conventional and quantum computing were compared using quantum labs on IBM's quantum computing cloud. When applied to the same dataset, the conventional SVM achieved accuracies of 91% and 87%, respectively, whereas the quantum SVM demonstrated accuracies of 97% and 94%, respectively, for air quality prediction. The study introduces the use of quantum support vector machines (SVMs) for predicting air quality and emphasizes the novel method of choosing the best quantum feature maps. Through quantum-enhanced feature mapping, our objective is to exceed the constraints of classical SVM and achieve unparalleled levels of precision and effectiveness. We conduct precise experiments utilizing IBM's state-of-the-art quantum computer cloud to compare the performance of conventional and quantum SVM algorithms on a shared dataset.
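The classical feature-map selection step can be approximated with scikit-learn, treating different kernels as stand-ins for candidate feature maps; the dataset and kernel list below are illustrative, not the paper's.

```python
# Compare several kernels with a conventional SVM before committing to a
# quantum feature map. Synthetic data stands in for the air quality dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
for kernel in ("linear", "poly", "rbf"):
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel:>6}: {score:.3f}")
# The best-performing map would then be re-expressed as a quantum feature
# map (e.g., a parameterized circuit) and evaluated with a quantum kernel.
```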
Additional Links: PMID-39187555
@article {pmid39187555,
year = {2024},
author = {Farooq, O and Shahid, M and Arshad, S and Altaf, A and Iqbal, F and Vera, YAM and Flores, MAL and Ashraf, I},
title = {An enhanced approach for predicting air pollution using quantum support vector machine.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {19521},
pmid = {39187555},
issn = {2045-2322},
abstract = {The essence of quantum machine learning is to optimize problem-solving by executing machine learning algorithms on quantum computers and exploiting potent laws such as superposition and entanglement. The support vector machine (SVM) is widely recognized as one of the most effective classification machine learning techniques currently available. In conventional systems, however, the SVM kernel technique tends to slow down and even fail as datasets become increasingly complex or jumbled. To compare the execution time and accuracy of conventional SVM classification with those of quantum SVM classification, the appropriate quantum features for mapping need to be selected. As datasets grow more complex, it becomes increasingly important to select a feature map that matches or outperforms the classical classification. This paper utilizes conventional SVM to select an optimal feature map and benchmark dataset for predicting air quality. Experimental evidence demonstrates that the precision of quantum SVM surpasses that of classical SVM for air quality assessment. Conventional and quantum computing were compared using quantum labs on IBM's quantum computing cloud. When applied to the same dataset, the conventional SVM achieved accuracies of 91% and 87%, respectively, whereas the quantum SVM demonstrated accuracies of 97% and 94%, respectively, for air quality prediction. The study introduces the use of quantum support vector machines (SVMs) for predicting air quality and emphasizes the novel method of choosing the best quantum feature maps. Through quantum-enhanced feature mapping, our objective is to exceed the constraints of classical SVM and achieve unparalleled levels of precision and effectiveness. We conduct precise experiments utilizing IBM's state-of-the-art quantum computer cloud to compare the performance of conventional and quantum SVM algorithms on a shared dataset.},
}
RevDate: 2024-08-27
AnoPrimer: Primer Design in malaria vectors informed by range-wide genomic variation.
Wellcome open research, 9:255.
The major malaria mosquitoes, Anopheles gambiae s.l. and Anopheles funestus, are some of the most studied organisms in medical research and also some of the most genetically diverse. When designing polymerase chain reaction (PCR) or hybridisation-based molecular assays, reliable primer and probe design is crucial. However, single nucleotide polymorphisms (SNPs) in primer binding sites can prevent primer binding, leading to null alleles, or bind suboptimally, leading to preferential amplification of specific alleles. Given the extreme genetic diversity of Anopheles mosquitoes, researchers need to consider this genetic variation when designing primers and probes to avoid amplification problems. In this note, we present a Python package, AnoPrimer, which exploits the Ag1000G and Af1000 datasets and allows users to rapidly design primers in An. gambiae or An. funestus, whilst summarising genetic variation in the primer binding sites and visualising the position of primer pairs. AnoPrimer allows the design of both genomic DNA and cDNA primers and hybridisation probes. By coupling this Python package with Google Colaboratory, AnoPrimer is an open and accessible platform for primer and probe design, hosted in the cloud for free. AnoPrimer is available at https://github.com/sanjaynagi/AnoPrimer, and we hope it will be a useful resource for the community to design probe and primer sets that can be reliably deployed across the An. gambiae and funestus species ranges.
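The underlying primer-versus-SNP check is conceptually simple; the sketch below is a generic illustration with hypothetical coordinates and a hypothetical SNP table, and deliberately does not use AnoPrimer's actual API (see the GitHub repository for that).

```python
# Flag a candidate primer whose binding site overlaps known SNPs above a
# frequency threshold. Positions and allele frequencies are made up.
snps = {(2358254, 0.41), (2358260, 0.02)}   # (position, alt allele frequency)

def primer_ok(start, end, snps, max_freq=0.05):
    """Reject primers whose binding site contains a common SNP."""
    hits = [(pos, f) for pos, f in snps if start <= pos <= end and f > max_freq]
    return (len(hits) == 0), hits

ok, hits = primer_ok(2358240, 2358262, snps)
print(ok, hits)   # False [(2358254, 0.41)] -> redesign this primer
```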
Additional Links: PMID-39184128
@article {pmid39184128,
year = {2024},
author = {Nagi, SC and Ashraf, F and Miles, A and Donnelly, MJ},
title = {AnoPrimer: Primer Design in malaria vectors informed by range-wide genomic variation.},
journal = {Wellcome open research},
volume = {9},
number = {},
pages = {255},
pmid = {39184128},
issn = {2398-502X},
abstract = {The major malaria mosquitoes, Anopheles gambiae s.l. and Anopheles funestus, are some of the most studied organisms in medical research and also some of the most genetically diverse. When designing polymerase chain reaction (PCR) or hybridisation-based molecular assays, reliable primer and probe design is crucial. However, single nucleotide polymorphisms (SNPs) in primer binding sites can prevent primer binding, leading to null alleles, or bind suboptimally, leading to preferential amplification of specific alleles. Given the extreme genetic diversity of Anopheles mosquitoes, researchers need to consider this genetic variation when designing primers and probes to avoid amplification problems. In this note, we present a Python package, AnoPrimer, which exploits the Ag1000G and Af1000 datasets and allows users to rapidly design primers in An. gambiae or An. funestus, whilst summarising genetic variation in the primer binding sites and visualising the position of primer pairs. AnoPrimer allows the design of both genomic DNA and cDNA primers and hybridisation probes. By coupling this Python package with Google Colaboratory, AnoPrimer is an open and accessible platform for primer and probe design, hosted in the cloud for free. AnoPrimer is available at https://github.com/sanjaynagi/AnoPrimer, and we hope it will be a useful resource for the community to design probe and primer sets that can be reliably deployed across the An. gambiae and funestus species ranges.},
}
RevDate: 2024-08-26
Genetic algorithm with skew mutation for heterogeneous resource-aware task offloading in edge-cloud computing.
Heliyon, 10(12):e32399 pii:S2405-8440(24)08430-5.
In recent years, edge-cloud computing has attracted increasing attention due to the benefits of combining edge and cloud computing. Task scheduling is still one of the major challenges for improving the service quality and resource efficiency of edge-clouds. Though several studies have addressed the scheduling problem, issues remain for practical application, e.g., ignoring resource heterogeneity or focusing on only one kind of request. Therefore, in this paper, we aim at providing a heterogeneity-aware task scheduling algorithm to improve task completion rate and resource utilization for edge-clouds with deadline constraints. Due to the NP-hardness of the scheduling problem, we exploit the genetic algorithm (GA), one of the most representative and widely used meta-heuristic algorithms, to solve the problem, considering task completion rate and resource utilization as the major and minor optimization objectives, respectively. In our GA-based scheduling algorithm, a gene indicates which resource its corresponding task is processed by. To improve the performance of GA, we propose a skew mutation operator in which genes are associated with resource heterogeneity during the population evolution. We conduct extensive experiments to evaluate the performance of our algorithm, and the results verify the superiority of our algorithm in task completion rate compared with thirteen other classical and state-of-the-art scheduling algorithms.
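The skew-mutation idea, biasing mutated genes toward more capable resources instead of mutating uniformly, can be sketched as follows; the toy task set, capacities, and fitness proxy are assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 20
caps = np.array([1.0, 2.0, 4.0, 8.0])       # heterogeneous resource capacities
lengths = rng.uniform(1, 10, n_tasks)        # task lengths
deadline = 6.0

def fitness(chrom):
    """Crude proxy: task completion rate first, load balance second."""
    load = np.zeros(len(caps))
    for t, r in enumerate(chrom):
        load[r] += lengths[t] / caps[r]      # processing time on resource r
    on_time = (load[chrom] <= deadline).mean()
    balance = load.mean() / (load.max() + 1e-9)
    return on_time + 0.1 * balance

def skew_mutate(chrom, p=0.1):
    """Skew mutation: mutated genes pick resources with probability
    proportional to capacity, instead of uniformly at random."""
    out = chrom.copy()
    mask = rng.random(len(out)) < p
    out[mask] = rng.choice(len(caps), size=int(mask.sum()), p=caps / caps.sum())
    return out

pop = [rng.integers(len(caps), size=n_tasks) for _ in range(30)]
for _ in range(50):                          # select, crossover, skew-mutate
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    for _ in range(20):
        i, j = rng.choice(10, size=2, replace=False)
        cut = int(rng.integers(1, n_tasks))
        children.append(skew_mutate(np.concatenate([parents[i][:cut],
                                                    parents[j][cut:]])))
    pop = parents + children
print(f"best fitness: {fitness(max(pop, key=fitness)):.3f}")
```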
Additional Links: PMID-39183823
@article {pmid39183823,
year = {2024},
author = {Chen, M and Qi, P and Chu, Y and Wang, B and Wang, F and Cao, J},
title = {Genetic algorithm with skew mutation for heterogeneous resource-aware task offloading in edge-cloud computing.},
journal = {Heliyon},
volume = {10},
number = {12},
pages = {e32399},
doi = {10.1016/j.heliyon.2024.e32399},
pmid = {39183823},
issn = {2405-8440},
abstract = {In recent years, edge-cloud computing has attracted increasing attention due to the benefits of combining edge and cloud computing. Task scheduling is still one of the major challenges for improving the service quality and resource efficiency of edge-clouds. Though several studies have addressed the scheduling problem, issues remain for practical application, e.g., ignoring resource heterogeneity or focusing on only one kind of request. Therefore, in this paper, we aim at providing a heterogeneity-aware task scheduling algorithm to improve task completion rate and resource utilization for edge-clouds with deadline constraints. Due to the NP-hardness of the scheduling problem, we exploit the genetic algorithm (GA), one of the most representative and widely used meta-heuristic algorithms, to solve the problem, considering task completion rate and resource utilization as the major and minor optimization objectives, respectively. In our GA-based scheduling algorithm, a gene indicates which resource its corresponding task is processed by. To improve the performance of GA, we propose a skew mutation operator in which genes are associated with resource heterogeneity during the population evolution. We conduct extensive experiments to evaluate the performance of our algorithm, and the results verify the superiority of our algorithm in task completion rate compared with thirteen other classical and state-of-the-art scheduling algorithms.},
}
RevDate: 2024-08-26
Giant Kerr nonlinearity of terahertz waves mediated by stimulated phonon polaritons in a microcavity chip.
Light, science & applications, 13(1):212.
The optical Kerr effect, in which input light intensity linearly alters the refractive index, has enabled the generation of optical solitons, supercontinuum spectra, and frequency combs, playing vital roles in on-chip devices, fiber communications, and quantum manipulations. In particular, the terahertz Kerr effect, which holds fascinating prospects for future high-rate computing, artificial intelligence, and cloud-based technologies, encounters a great challenge due to the rather low power density and feeble Kerr response. Here, we demonstrate a giant terahertz-frequency Kerr nonlinearity mediated by stimulated phonon polaritons. Under the influence of the giant Kerr nonlinearity, the power-dependent refractive index change results in a frequency shift in the microcavity, which was experimentally demonstrated via measurement of the resonant mode of a chip-scale lithium niobate Fabry-Pérot microcavity. Attributed to the existence of stimulated phonon polaritons, the nonlinear coefficient extracted from the frequency shifts is orders of magnitude larger than that of visible and infrared light, which is also theoretically demonstrated by nonlinear Huang equations. This work opens an avenue for many rich and fruitful terahertz Kerr effect based physical, chemical, and biological systems that have terahertz fingerprints.
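The measurement principle, an intensity-dependent index shifting a Fabry-Pérot resonance, can be estimated with a back-of-the-envelope calculation; all numbers below are illustrative placeholders, not the paper's values.

```python
# An intensity-dependent index n = n0 + n2*I shifts a Fabry-Perot resonance
# v_m = m*c / (2*n*L) by approximately dv = -v * (n2*I) / n0.
c = 3e8                      # speed of light, m/s
n0, L = 5.1, 0.5e-3          # approximate LiNbO3 THz index; cavity length (m)
m = 2                        # longitudinal mode number
v0 = m * c / (2 * n0 * L)    # unperturbed resonance (~0.12 THz here)
for I in (0.0, 0.5, 1.0):    # normalized pump intensity
    n2I = 1e-3 * I           # hypothetical Kerr index change n2*I
    shift = -v0 * n2I / n0
    print(f"I={I:.1f}: resonance shift {shift / 1e9:+.3f} GHz")
```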
Additional Links: PMID-39179595
@article {pmid39179595,
year = {2024},
author = {Huang, Y and Lu, Y and Li, W and Xu, X and Jiang, X and Ma, R and Chen, L and Ruan, N and Wu, Q and Xu, J},
title = {Giant Kerr nonlinearity of terahertz waves mediated by stimulated phonon polaritons in a microcavity chip.},
journal = {Light, science & applications},
volume = {13},
number = {1},
pages = {212},
pmid = {39179595},
issn = {2047-7538},
support = {11974192//National Natural Science Foundation of China (National Science Foundation of China)/ ; 62205158//National Natural Science Foundation of China (National Science Foundation of China)/ ; },
abstract = {The optical Kerr effect, in which input light intensity linearly alters the refractive index, has enabled the generation of optical solitons, supercontinuum spectra, and frequency combs, playing vital roles in on-chip devices, fiber communications, and quantum manipulations. In particular, the terahertz Kerr effect, which holds fascinating prospects for future high-rate computing, artificial intelligence, and cloud-based technologies, encounters a great challenge due to the rather low power density and feeble Kerr response. Here, we demonstrate a giant terahertz-frequency Kerr nonlinearity mediated by stimulated phonon polaritons. Under the influence of the giant Kerr nonlinearity, the power-dependent refractive index change results in a frequency shift in the microcavity, which was experimentally demonstrated via measurement of the resonant mode of a chip-scale lithium niobate Fabry-Pérot microcavity. Attributed to the existence of stimulated phonon polaritons, the nonlinear coefficient extracted from the frequency shifts is orders of magnitude larger than that of visible and infrared light, which is also theoretically demonstrated by nonlinear Huang equations. This work opens an avenue for many rich and fruitful terahertz Kerr effect based physical, chemical, and biological systems that have terahertz fingerprints.},
}
RevDate: 2024-08-23
CmpDate: 2024-08-23
Remote Monitoring, AI, Machine Learning and Mobile Ultrasound Integration upon 5G Internet in the Prehospital Care to Support the Golden Hour Principle and Optimize Outcomes in Severe Trauma and Emergency Surgery.
Studies in health technology and informatics, 316:1807-1811.
AIM: Feasibility and reliability evaluation of 5G internet networks (5G IN) upon Artificial Intelligence (AI)/Machine Learning (ML), of telemonitoring and mobile ultrasound (m u/s) in an ambulance car (AC), integrated in the pre-hospital setting (PS), to support the Golden Hour Principle (GHP) and optimize outcomes in severe trauma (TRS).
MATERIAL AND METHODS: (PS) organization and care upon (5G IN) high bandwidths (10 GB/s) mobile tele-communication (mTC) experimentation by using the experimental Cobot PROMETHEUS III, pn:100016 by simulation upon six severe trauma clinical cases by ten (N1=10) experts: Four professional rescuers (n1=4), three trauma surgeons (n2=3), a radiologist (n3=1) and two information technology specialists (n4=2) to evaluate feasibility, reliability and clinical usability for instant risk, prognosis and triage computation, decision support and treatment planning by (AI)/(ML) computations in (PS) of (TRS) as well as by performing (PS) (m u/s).
RESULTS: A. Instant computation of trauma severity scales by the Cobot PROMETHEUS III (pn 100016), based on complex AI and ML algorithms, cloud computing, and telemonitoring, showed very high feasibility and reliability upon (5G IN) under specific technological, training, and ergonomic prerequisites. B. Measured bi-directional (m u/s) image data sharing between (AC) and (ED/TC) showed very high feasibility and reliability upon (5G IN) under specific technological and ergonomic conditions in (TRS).
CONCLUSION: Integration of (PS) tele-monitoring with (AI)/(ML) and (PS) (m u/s) upon (5G IN) via the Cobot PROMETHEUS III (pn 100016) in severe (TRS/ES) seems feasible and, under specific prerequisites, reliable to support the (GHP) and optimize outcomes in adult and pediatric (TRS/ES).
Additional Links: PMID-39176842
@article {pmid39176842,
year = {2024},
author = {Mammas, CS and Mamma, AS},
title = {Remote Monitoring, AI, Machine Learning and Mobile Ultrasound Integration upon 5G Internet in the Prehospital Care to Support the Golden Hour Principle and Optimize Outcomes in Severe Trauma and Emergency Surgery.},
journal = {Studies in health technology and informatics},
volume = {316},
number = {},
pages = {1807-1811},
doi = {10.3233/SHTI240782},
pmid = {39176842},
issn = {1879-8365},
mesh = {Humans ; *Machine Learning ; *Ultrasonography ; *Emergency Medical Services ; *Wounds and Injuries/diagnostic imaging/therapy ; Telemedicine ; Artificial Intelligence ; Internet ; Feasibility Studies ; Reproducibility of Results ; },
abstract = {AIM: Feasibility and reliability evaluation of 5G internet networks (5G IN) upon Artificial Intelligence (AI)/Machine Learning (ML), of telemonitoring and mobile ultrasound (m u/s) in an ambulance car (AC), integrated in the pre-hospital setting (PS), to support the Golden Hour Principle (GHP) and optimize outcomes in severe trauma (TRS).
MATERIAL AND METHODS: (PS) organization and care upon (5G IN) high bandwidths (10 GB/s) mobile tele-communication (mTC) experimentation by using the experimental Cobot PROMETHEUS III, pn:100016 by simulation upon six severe trauma clinical cases by ten (N1=10) experts: Four professional rescuers (n1=4), three trauma surgeons (n2=3), a radiologist (n3=1) and two information technology specialists (n4=2) to evaluate feasibility, reliability and clinical usability for instant risk, prognosis and triage computation, decision support and treatment planning by (AI)/(ML) computations in (PS) of (TRS) as well as by performing (PS) (m u/s).
RESULTS: A. Instant computation of trauma severity scales by the Cobot PROMETHEUS III (pn 100016), based on complex AI and ML algorithms, cloud computing, and telemonitoring, showed very high feasibility and reliability upon (5G IN) under specific technological, training, and ergonomic prerequisites. B. Measured bi-directional (m u/s) image data sharing between (AC) and (ED/TC) showed very high feasibility and reliability upon (5G IN) under specific technological and ergonomic conditions in (TRS).
CONCLUSION: Integration of (PS) tele-monitoring with (AI)/(ML) and (PS) (m u/s) upon (5G IN) via the Cobot PROMETHEUS III (pn 100016) in severe (TRS/ES) seems feasible and, under specific prerequisites, reliable to support the (GHP) and optimize outcomes in adult and pediatric (TRS/ES).},
}
RevDate: 2024-08-20
Deep learning and optimization enabled multi-objective for task scheduling in cloud computing.
Network (Bristol, England) [Epub ahead of print].
In cloud computing (CC), task scheduling allocates each task to the most suitable resource for execution. This article proposes a model for task scheduling utilizing multi-objective optimization and a deep learning (DL) model. Initially, multi-objective scheduling of incoming user tasks is carried out using the proposed hybrid fractional flamingo beetle optimization (FFBO), which is formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecasted by a deep residual network (DRN). Thereafter, task scheduling is accomplished based on DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), which is the combination of DFNN and LSTM. Moreover, when scheduling the workflow, the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed DFNN-LSTM+FFBO model achieves superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
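A weighted-sum fitness of the kind described, combining makespan, cost, predicted energy, and reliability, might look like the following; the weights and the minimization convention are assumptions, not the paper's exact formulation.

```python
def fitness(makespan, cost, energy, reliability, w=(0.25, 0.25, 0.25, 0.25)):
    """Lower is better: penalize makespan/cost/energy, reward reliability.
    All inputs are assumed normalized to [0, 1]."""
    return w[0] * makespan + w[1] * cost + w[2] * energy + w[3] * (1 - reliability)

# Compare two hypothetical candidate schedules:
print(fitness(0.188, 0.40, 0.30, 0.95))   # candidate A
print(fitness(0.250, 0.35, 0.28, 0.90))   # candidate B
```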
Additional Links: PMID-39163538
@article {pmid39163538,
year = {2024},
author = {Komarasamy, D and Ramaganthan, SM and Kandaswamy, DM and Mony, G},
title = {Deep learning and optimization enabled multi-objective for task scheduling in cloud computing.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-30},
doi = {10.1080/0954898X.2024.2391395},
pmid = {39163538},
issn = {1361-6536},
abstract = {In cloud computing (CC), task scheduling allocates each task to the most suitable resource for execution. This article proposes a model for task scheduling utilizing multi-objective optimization and a deep learning (DL) model. Initially, multi-objective scheduling of incoming user tasks is carried out using the proposed hybrid fractional flamingo beetle optimization (FFBO), which is formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecasted by a deep residual network (DRN). Thereafter, task scheduling is accomplished based on DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), which is the combination of DFNN and LSTM. Moreover, when scheduling the workflow, the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed DFNN-LSTM+FFBO model achieves superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.},
}
RevDate: 2024-08-20
Evolving Software Architecture Design in Telemedicine: A PRISMA-based Systematic Review.
Healthcare informatics research, 30(3):184-193.
OBJECTIVES: This article presents a systematic review of recent advancements in telemedicine architectures for continuous monitoring, providing a comprehensive overview of the evolving software engineering practices underpinning these systems. The review aims to illuminate the critical role of telemedicine in delivering healthcare services, especially during global health crises, and to emphasize the importance of effectiveness, security, interoperability, and scalability in these systems.
METHODS: A systematic review methodology was employed, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework. As the primary research method, the PubMed, IEEE Xplore, and Scopus databases were searched to identify articles relevant to telemedicine architectures for continuous monitoring. Seventeen articles were selected for analysis, and a methodical approach was employed to investigate and synthesize the findings.
RESULTS: The review identified a notable trend towards the integration of emerging technologies into telemedicine architectures. Key areas of focus include interoperability, security, and scalability. Innovations such as cognitive radio technology, behavior-based control architectures, Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standards, cloud computing, decentralized systems, and blockchain technology are addressing challenges in remote healthcare delivery and continuous monitoring.
CONCLUSIONS: This review highlights major advancements in telemedicine architectures, emphasizing the integration of advanced technologies to improve interoperability, security, and scalability. The findings underscore the successful application of cognitive radio technology, behavior-based control, HL7 FHIR standards, cloud computing, decentralized systems, and blockchain in advancing remote healthcare delivery.
Additional Links: PMID-39160778
@article {pmid39160778,
year = {2024},
author = {Jat, AS and Grønli, TM and Ghinea, G and Assres, G},
title = {Evolving Software Architecture Design in Telemedicine: A PRISMA-based Systematic Review.},
journal = {Healthcare informatics research},
volume = {30},
number = {3},
pages = {184-193},
doi = {10.4258/hir.2024.30.3.184},
pmid = {39160778},
issn = {2093-3681},
support = {//Kristiania University College/ ; },
abstract = {OBJECTIVES: This article presents a systematic review of recent advancements in telemedicine architectures for continuous monitoring, providing a comprehensive overview of the evolving software engineering practices underpinning these systems. The review aims to illuminate the critical role of telemedicine in delivering healthcare services, especially during global health crises, and to emphasize the importance of effectiveness, security, interoperability, and scalability in these systems.
METHODS: A systematic review methodology was employed, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework. As the primary research method, the PubMed, IEEE Xplore, and Scopus databases were searched to identify articles relevant to telemedicine architectures for continuous monitoring. Seventeen articles were selected for analysis, and a methodical approach was employed to investigate and synthesize the findings.
RESULTS: The review identified a notable trend towards the integration of emerging technologies into telemedicine architectures. Key areas of focus include interoperability, security, and scalability. Innovations such as cognitive radio technology, behavior-based control architectures, Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standards, cloud computing, decentralized systems, and blockchain technology are addressing challenges in remote healthcare delivery and continuous monitoring.
CONCLUSIONS: This review highlights major advancements in telemedicine architectures, emphasizing the integration of advanced technologies to improve interoperability, security, and scalability. The findings underscore the successful application of cognitive radio technology, behavior-based control, HL7 FHIR standards, cloud computing, decentralized systems, and blockchain in advancing remote healthcare delivery.},
}
RevDate: 2024-08-20
War, emotions, mental health, and artificial intelligence.
Frontiers in psychology, 15:1394045.
During wartime, dysregulation of negative emotions such as fear, anger, hatred, frustration, sadness, humiliation, and hopelessness can overrule normal societal values and culture, and endanger global peace and security and mental health in affected societies. Therefore, it is understandable that the range and power of negative emotions may play important roles in the consideration of human behavior in any armed conflict. The estimation and assessment of dominant negative emotions during wartime are crucial but are challenged by the complexity of emotions' neuro-psycho-physiology. Currently available natural language processing (NLP) tools have comprehensive computational methods to analyze and understand the emotional content of related textual data in war-inflicted societies. Innovative AI-driven technologies incorporating machine learning, neuro-linguistic programming, cloud infrastructure, and novel digital therapeutic tools and applications present an immense potential to enhance mental health care worldwide. This advancement could make mental health services more cost-effective and readily accessible. Due to the inadequate number of psychiatrists and limited psychiatric resources for coping with the mental health consequences of war and traumas, new digital therapeutic wearable devices supported by AI tools might be a promising approach in the psychiatry of the future. Transformation of dominant negative emotional maps might be undertaken by combining online cognitive behavioral therapy (CBT) on the individual level with emotionally based strategic communications (EBSC) on the public level. The proposed positive emotional transformation by means of CBT and EBSC may provide important leverage in efforts to protect the mental health of the civilian population in war-inflicted societies. AI-based tools that can be applied in the design of EBSC stimuli, such as OpenAI ChatGPT or Google Gemini, may have great potential to significantly enhance emotionally based strategic communications through a more comprehensive semantic and linguistic analysis of available text datasets from war-traumatized societies. A human in the loop, enhanced by ChatGPT and Gemini, can aid in the design and development of emotionally annotated messages that resonate with the targeted population, amplifying the impact of strategic communications in shaping dominant human emotional maps into more positive ones through CBT and EBSC.
Additional Links: PMID-39156807
@article {pmid39156807,
year = {2024},
author = {Cosic, K and Kopilas, V and Jovanovic, T},
title = {War, emotions, mental health, and artificial intelligence.},
journal = {Frontiers in psychology},
volume = {15},
number = {},
pages = {1394045},
pmid = {39156807},
issn = {1664-1078},
abstract = {During wartime, dysregulation of negative emotions such as fear, anger, hatred, frustration, sadness, humiliation, and hopelessness can overrule normal societal values and culture, and endanger global peace and security and mental health in affected societies. Therefore, it is understandable that the range and power of negative emotions may play important roles in the consideration of human behavior in any armed conflict. The estimation and assessment of dominant negative emotions during wartime are crucial but are challenged by the complexity of emotions' neuro-psycho-physiology. Currently available natural language processing (NLP) tools have comprehensive computational methods to analyze and understand the emotional content of related textual data in war-inflicted societies. Innovative AI-driven technologies incorporating machine learning, neuro-linguistic programming, cloud infrastructure, and novel digital therapeutic tools and applications present an immense potential to enhance mental health care worldwide. This advancement could make mental health services more cost-effective and readily accessible. Due to the inadequate number of psychiatrists and limited psychiatric resources for coping with the mental health consequences of war and traumas, new digital therapeutic wearable devices supported by AI tools might be a promising approach in the psychiatry of the future. Transformation of dominant negative emotional maps might be undertaken by combining online cognitive behavioral therapy (CBT) on the individual level with emotionally based strategic communications (EBSC) on the public level. The proposed positive emotional transformation by means of CBT and EBSC may provide important leverage in efforts to protect the mental health of the civilian population in war-inflicted societies. AI-based tools that can be applied in the design of EBSC stimuli, such as OpenAI ChatGPT or Google Gemini, may have great potential to significantly enhance emotionally based strategic communications through a more comprehensive semantic and linguistic analysis of available text datasets from war-traumatized societies. A human in the loop, enhanced by ChatGPT and Gemini, can aid in the design and development of emotionally annotated messages that resonate with the targeted population, amplifying the impact of strategic communications in shaping dominant human emotional maps into more positive ones through CBT and EBSC.},
}
RevDate: 2024-08-16
CmpDate: 2024-08-16
Research on privacy protection in the context of healthcare data based on knowledge map.
Medicine, 103(33):e39370.
With the rapid development of emerging information technologies such as artificial intelligence, cloud computing, and the Internet of Things, the world has entered the era of big data. In the face of growing medical big data, research on the privacy protection of personal information has attracted more and more attention, but few studies have analyzed and forecasted research hotspots and future development trends in privacy protection. To systematically and comprehensively summarize the relevant privacy protection literature in the context of big healthcare data, a bibliometric analysis was conducted to clarify the spatial and temporal distribution and research hotspots of privacy protection using the information visualization software CiteSpace. Papers related to privacy protection were collected from the Web of Science for 2012 to 2023. Through analysis of the temporal, author, and country distributions of the relevant publications, we found that privacy protection research has received increasing attention since 2013 and that universities are its core institutions, although cooperation between countries remains weak. Additionally, keywords like privacy, big data, internet, challenge, care, and information have high centrality and frequency, indicating the research hotspots and research trends in the field of privacy protection. These findings provide a comprehensive knowledge structure of privacy protection research in the context of health big data, helping scholars quickly grasp research hotspots and choose future research projects.
Additional Links: PMID-39151500
@article {pmid39151500,
year = {2024},
author = {Ouyang, T and Yang, J and Gu, Z and Zhang, L and Wang, D and Wang, Y and Yang, Y},
title = {Research on privacy protection in the context of healthcare data based on knowledge map.},
journal = {Medicine},
volume = {103},
number = {33},
pages = {e39370},
pmid = {39151500},
issn = {1536-5964},
support = {Grant No.2023Ah040102//Major Scientific Research Project of Anhui Provincial Department of Education/ ; Grant No.2022Ah010038 and No.2023sdxx027//Anhui Province quality projects/ ; Grant no.2021rwzd12//Key humanities projects of Anhui University of Traditional Chinese Medicine/ ; Grant No.JNFX2023020//Middle-aged Young Teacher Training Action Project of Anhui Provincial Department of Education/ ; Grant No.2023jyxm0370//General Project of Teaching Research in Anhui Province/ ; },
mesh = {Humans ; *Big Data ; *Computer Security ; *Privacy ; *Confidentiality ; Bibliometrics ; },
abstract = {With the rapid development of emerging information technologies such as artificial intelligence, cloud computing, and the Internet of Things, the world has entered the era of big data. In the face of growing medical big data, research on the privacy protection of personal information has attracted more and more attention, but few studies have analyzed and forecast the research hotspots and future development trends of privacy protection. To systematically and comprehensively summarize the privacy protection literature in the context of big healthcare data, a bibliometric analysis was conducted to clarify the spatial and temporal distribution and research hotspots of privacy protection using the information visualization software CiteSpace. Papers related to privacy protection were collected from the Web of Science for 2012 to 2023. Through analysis of the temporal, author, and country distributions of the relevant publications, we found that research on privacy protection has received increasing attention since 2013; universities are the core institutions of privacy protection research, but cooperation among countries remains weak. Additionally, keywords such as privacy, big data, internet, challenge, care, and information have high centrality and frequency, indicating the research hotspots and trends in the field of privacy protection. These findings provide a comprehensive knowledge structure of privacy protection research in the context of health big data, which can help scholars quickly grasp the research hotspots and choose future research projects.},
}
MeSH Terms:
Humans
*Big Data
*Computer Security
*Privacy
*Confidentiality
Bibliometrics
RevDate: 2024-08-16
CmpDate: 2024-08-16
FitScore: a fast machine learning-based score for 3D virtual screening enrichment.
Journal of computer-aided molecular design, 38(1):29.
Enhancing virtual screening enrichment has become an urgent problem in computational chemistry, driven by increasingly large databases of commercially available compounds, without a commensurate drop in in vitro screening costs. Docking these large databases is possible with cloud-scale computing. However, rapid docking necessitates compromises in scoring, often leading to poor enrichment and an abundance of false positives in docking results. This work describes a new scoring function composed of two parts - a knowledge-based component that predicts the probability of a particular atom type being in a particular receptor environment, and a tunable weight matrix that converts the probability predictions into a dimensionless score suitable for virtual screening enrichment. This score, the FitScore, represents the compatibility between the ligand and the binding site and is capable of a high degree of enrichment across standardized docking test sets.
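The two-part structure described (a probability predictor plus a tunable weight matrix) can be illustrated with a minimal sketch. This is not the authors' code; atom types, environment classes, and the weight matrix below are placeholders:

import numpy as np

# Placeholder sizes: a real model would use many more atom types and
# receptor-environment classes.
n_atom_types, n_envs = 4, 3
rng = np.random.default_rng(0)
W = rng.normal(size=(n_atom_types, n_envs))  # tunable weights, fit to maximize enrichment

def score_pose(atom_types, env_probs):
    # atom_types: (n_atoms,) integer type per ligand atom
    # env_probs:  (n_atoms, n_envs) predicted environment probabilities per atom
    return float(np.sum(W[atom_types] * env_probs))  # sum of per-atom weighted probabilities

atom_types = np.array([0, 2, 1, 3, 2])
env_probs = rng.dirichlet(np.ones(n_envs), size=5)  # stand-in for the knowledge-based predictor
print(score_pose(atom_types, env_probs))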
Additional Links: PMID-39150579
@article {pmid39150579,
year = {2024},
author = {Gehlhaar, DK and Mermelstein, DJ},
title = {FitScore: a fast machine learning-based score for 3D virtual screening enrichment.},
journal = {Journal of computer-aided molecular design},
volume = {38},
number = {1},
pages = {29},
pmid = {39150579},
issn = {1573-4951},
mesh = {*Machine Learning ; Ligands ; *Molecular Docking Simulation ; Binding Sites ; Humans ; Protein Binding ; Proteins/chemistry/metabolism ; Software ; Drug Evaluation, Preclinical/methods ; Drug Discovery/methods ; },
abstract = {Enhancing virtual screening enrichment has become an urgent problem in computational chemistry, driven by increasingly large databases of commercially available compounds, without a commensurate drop in in vitro screening costs. Docking these large databases is possible with cloud-scale computing. However, rapid docking necessitates compromises in scoring, often leading to poor enrichment and an abundance of false positives in docking results. This work describes a new scoring function composed of two parts - a knowledge-based component that predicts the probability of a particular atom type being in a particular receptor environment, and a tunable weight matrix that converts the probability predictions into a dimensionless score suitable for virtual screening enrichment. This score, the FitScore, represents the compatibility between the ligand and the binding site and is capable of a high degree of enrichment across standardized docking test sets.},
}
MeSH Terms:
*Machine Learning
Ligands
*Molecular Docking Simulation
Binding Sites
Humans
Protein Binding
Proteins/chemistry/metabolism
Software
Drug Evaluation, Preclinical/methods
Drug Discovery/methods
RevDate: 2024-08-16
Discovering patterns and trends in customer service technologies patents using large language model.
Heliyon, 10(14):e34701 pii:S2405-8440(24)10732-3.
The definition of service has evolved from a focus on material value in manufacturing before the 2000s to a customer-centric value, reflecting the significant growth of the service industry. Digital transformation has become essential for companies in the service industry due to the incorporation of digital technology through the Fourth Industrial Revolution and COVID-19. This study utilised Bidirectional Encoder Representations from Transformers (BERT) to analyse 3029 international patents related to the customer service industry and digital transformation registered between 2000 and 2022. Through topic modelling, this study identified 10 major topics in the customer service industry and analysed their yearly trends. Our findings show that, as of 2022, the most frequent trend is user-centric network service design, while cloud computing has experienced the steepest increase over the last five years. User-centric network services have been developing steadily since the inception of the Internet. Cloud computing is one of the key technologies being developed intensively in 2023 for the digital transformation of customer service. This study identifies time-series trends in customer service industry patents and suggests the effectiveness of using BERTopic to predict future technology trends.
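A hedged sketch of this kind of BERTopic workflow follows, with synthetic patent abstracts and filing years standing in for the study's 3029-patent corpus and its exact pipeline settings:

from bertopic import BERTopic

themes = ["cloud computing", "chatbot dialogue", "network service design"]
docs = [f"patent abstract about {t}, variant {i}" for t in themes for i in range(50)]
timestamps = [2000 + i % 23 for i in range(len(docs))]   # one filing year per document

topic_model = BERTopic(min_topic_size=10)
topics, probs = topic_model.fit_transform(docs)          # embeds, clusters, extracts topics
topics_over_time = topic_model.topics_over_time(docs, timestamps)
print(topic_model.get_topic_info())

Here topics_over_time aggregates topic frequencies by year, which is the basis for trend statements like those in the abstract.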
Additional Links: PMID-39149018
@article {pmid39149018,
year = {2024},
author = {Kim, C and Lee, J},
title = {Discovering patterns and trends in customer service technologies patents using large language model.},
journal = {Heliyon},
volume = {10},
number = {14},
pages = {e34701},
doi = {10.1016/j.heliyon.2024.e34701},
pmid = {39149018},
issn = {2405-8440},
abstract = {The definition of service has evolved from a focus on material value in manufacturing before the 2000s to a customer-centric value, reflecting the significant growth of the service industry. Digital transformation has become essential for companies in the service industry due to the incorporation of digital technology through the Fourth Industrial Revolution and COVID-19. This study utilised Bidirectional Encoder Representations from Transformers (BERT) to analyse 3029 international patents related to the customer service industry and digital transformation registered between 2000 and 2022. Through topic modelling, this study identified 10 major topics in the customer service industry and analysed their yearly trends. Our findings show that, as of 2022, the most frequent trend is user-centric network service design, while cloud computing has experienced the steepest increase over the last five years. User-centric network services have been developing steadily since the inception of the Internet. Cloud computing is one of the key technologies being developed intensively in 2023 for the digital transformation of customer service. This study identifies time-series trends in customer service industry patents and suggests the effectiveness of using BERTopic to predict future technology trends.},
}
RevDate: 2024-08-15
Forest disturbance regimes and trends in continental Spain (1985-2023) using dense Landsat time series.
Environmental research pii:S0013-9351(24)01707-9 [Epub ahead of print].
Forest disturbance regimes across biomes are being altered by interactive effects of global change. Establishing baselines for assessing change requires detailed quantitative data on past disturbance events, but such data are scarce and difficult to obtain over large spatial and temporal scales. The integration of remote sensing with dense time series analysis and cloud computing platforms is enhancing the ability to monitor historical disturbances, and especially non-stand replacing events along climatic gradients. Since the integration of such tools is still scarce in Mediterranean regions, here, we combine dense Landsat time series and the Continuous Change Detection and Classification - Spectral Mixture Analysis (CCDC-SMA) method to monitor forest disturbance in continental Spain from 1985 to 2023. We adapted the CCDC-SMA method for improved disturbance detection creating new spectral libraries representative of the study region, and quantified the year, month, severity, return interval, and type of disturbance (stand replacing, non-stand replacing) at a 30 m resolution. In addition, we characterised forest disturbance regimes and trends (patch size and severity, and frequency of events) of events larger than 0.5 ha at the national scale by biome (Mediterranean and temperate) and forest type (broadleaf, needleleaf and mixed). We quantified more than 2.9 million patches of disturbed forest, covering 4.6 Mha over the region and period studied. Forest disturbances were on average larger but less severe in the Mediterranean than in the temperate biome, and significantly larger and more severe in needleleaf than in mixed and broadleaf forests. Since the late 1980s, forest disturbances have decreased in size and severity while increasing in frequency across all biomes and forest types. These results have important implications as they confirm that disturbance regimes in continental Spain are changing and should therefore be considered in forest strategic planning for policy development and implementation.
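Assembling such a dense Landsat time series in Google Earth Engine's Python API might look like the following sketch; the area of interest and cloud-masking choices are illustrative, not the authors' exact CCDC-SMA configuration:

import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([-4.0, 40.0, -3.5, 40.5])  # placeholder area in Spain

def mask_clouds(img):
    # Landsat Collection 2 QA_PIXEL: bit 3 = cloud, bit 4 = cloud shadow
    qa = img.select("QA_PIXEL")
    clear = qa.bitwiseAnd(1 << 3).eq(0).And(qa.bitwiseAnd(1 << 4).eq(0))
    return img.updateMask(clear)

landsat = (
    ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
    .merge(ee.ImageCollection("LANDSAT/LE07/C02/T1_L2"))
    .merge(ee.ImageCollection("LANDSAT/LC08/C02/T1_L2"))
    .filterBounds(aoi)
    .filterDate("1985-01-01", "2023-12-31")
    .map(mask_clouds)
)
print(landsat.size().getInfo())  # number of scenes in the dense time series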
Additional Links: PMID-39147188
@article {pmid39147188,
year = {2024},
author = {Miguel, S and Ruiz-Benito, P and Rebollo, P and Viana-Soto, A and Mihai, MC and García-Martín, A and Tanase, M},
title = {Forest disturbance regimes and trends in continental Spain (1985-2023) using dense Landsat time series.},
journal = {Environmental research},
volume = {},
number = {},
pages = {119802},
doi = {10.1016/j.envres.2024.119802},
pmid = {39147188},
issn = {1096-0953},
abstract = {Forest disturbance regimes across biomes are being altered by interactive effects of global change. Establishing baselines for assessing change requires detailed quantitative data on past disturbance events, but such data are scarce and difficult to obtain over large spatial and temporal scales. The integration of remote sensing with dense time series analysis and cloud computing platforms is enhancing the ability to monitor historical disturbances, and especially non-stand replacing events along climatic gradients. Since the integration of such tools is still scarce in Mediterranean regions, here, we combine dense Landsat time series and the Continuous Change Detection and Classification - Spectral Mixture Analysis (CCDC-SMA) method to monitor forest disturbance in continental Spain from 1985 to 2023. We adapted the CCDC-SMA method for improved disturbance detection creating new spectral libraries representative of the study region, and quantified the year, month, severity, return interval, and type of disturbance (stand replacing, non-stand replacing) at a 30 m resolution. In addition, we characterised forest disturbance regimes and trends (patch size and severity, and frequency of events) of events larger than 0.5 ha at the national scale by biome (Mediterranean and temperate) and forest type (broadleaf, needleleaf and mixed). We quantified more than 2.9 million patches of disturbed forest, covering 4.6 Mha over the region and period studied. Forest disturbances were on average larger but less severe in the Mediterranean than in the temperate biome, and significantly larger and more severe in needleleaf than in mixed and broadleaf forests. Since the late 1980s, forest disturbances have decreased in size and severity while increasing in frequency across all biomes and forest types. These results have important implications as they confirm that disturbance regimes in continental Spain are changing and should therefore be considered in forest strategic planning for policy development and implementation.},
}
RevDate: 2024-08-15
CmpDate: 2024-08-15
An enhanced round robin using dynamic time quantum for real-time asymmetric burst length processes in cloud computing environment.
PloS one, 19(8):e0304517 pii:PONE-D-24-07054.
Cloud computing is a popular, flexible, scalable, and cost-effective technology in the modern world that provides on-demand services dynamically. The dynamic execution of user requests and resource-sharing facilities require proper task scheduling among the available virtual machines, which is a significant issue and plays a crucial role in developing an optimal cloud computing environment. Round Robin is a prevalent scheduling algorithm for fair distribution of resources, with a balanced contribution to minimized response time and turnaround time. This paper introduces a new enhanced round-robin approach for task scheduling in cloud computing systems. The proposed algorithm generates, and keeps updating, a dynamic time quantum for process execution, considering the number of processes in the system and their burst lengths. Since our method runs processes dynamically, it is appropriate for a real-time environment like cloud computing. A notable feature of this approach is its capability to schedule tasks with an asymmetric distribution of burst times while avoiding the convoy effect. The experimental results indicate that the proposed algorithm outperforms existing improved round-robin task scheduling approaches in terms of minimized average waiting time, average turnaround time, and number of context switches. Compared against five other enhanced round-robin approaches, it reduced average waiting time by 15.77% and context switching by 20.68% on average. Based on the experiments and comparative study, we conclude that the proposed enhanced round-robin scheduling algorithm is effective, acceptable, and better suited to cloud computing environments.
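The dynamic-quantum mechanism can be illustrated with a short simulation. The paper's exact quantum formula is not reproduced here; the mean of the remaining burst lengths, recomputed each cycle, serves as a stand-in:

from collections import deque
from statistics import mean

def dynamic_rr(bursts):
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)
    waiting = dict.fromkeys(remaining, 0.0)
    switches = 0
    while queue:
        # Dynamic quantum for this cycle: mean of the remaining burst lengths
        quantum = mean(remaining[p] for p in queue)
        for _ in range(len(queue)):
            p = queue.popleft()
            run = min(quantum, remaining[p])
            for q in queue:              # everyone still queued waits while p runs
                waiting[q] += run
            remaining[p] -= run
            switches += 1
            if remaining[p] > 1e-9:
                queue.append(p)          # not finished; rejoin for the next cycle
    return waiting, switches

waiting, switches = dynamic_rr([24, 3, 3, 17, 5])   # asymmetric burst lengths
print(mean(waiting.values()), switches)

Because the quantum tracks the remaining bursts, short processes finish early while long ones are not starved, which is how the convoy effect is avoided.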
Additional Links: PMID-39146286
@article {pmid39146286,
year = {2024},
author = {Zohora, MF and Farhin, F and Kaiser, MS},
title = {An enhanced round robin using dynamic time quantum for real-time asymmetric burst length processes in cloud computing environment.},
journal = {PloS one},
volume = {19},
number = {8},
pages = {e0304517},
doi = {10.1371/journal.pone.0304517},
pmid = {39146286},
issn = {1932-6203},
mesh = {*Cloud Computing ; *Algorithms ; Time Factors ; },
abstract = {Cloud computing is a popular, flexible, scalable, and cost-effective technology in the modern world that provides on-demand services dynamically. The dynamic execution of user requests and resource-sharing facilities require proper task scheduling among the available virtual machines, which is a significant issue and plays a crucial role in developing an optimal cloud computing environment. Round Robin is a prevalent scheduling algorithm for fair distribution of resources, with a balanced contribution to minimized response time and turnaround time. This paper introduces a new enhanced round-robin approach for task scheduling in cloud computing systems. The proposed algorithm generates, and keeps updating, a dynamic time quantum for process execution, considering the number of processes in the system and their burst lengths. Since our method runs processes dynamically, it is appropriate for a real-time environment like cloud computing. A notable feature of this approach is its capability to schedule tasks with an asymmetric distribution of burst times while avoiding the convoy effect. The experimental results indicate that the proposed algorithm outperforms existing improved round-robin task scheduling approaches in terms of minimized average waiting time, average turnaround time, and number of context switches. Compared against five other enhanced round-robin approaches, it reduced average waiting time by 15.77% and context switching by 20.68% on average. Based on the experiments and comparative study, we conclude that the proposed enhanced round-robin scheduling algorithm is effective, acceptable, and better suited to cloud computing environments.},
}
MeSH Terms:
*Cloud Computing
*Algorithms
Time Factors
RevDate: 2024-08-14
Physical Reservoir Computing Using van der Waals Ferroelectrics for Acoustic Keyword Spotting.
ACS nano [Epub ahead of print].
Acoustic keyword spotting (KWS) plays a pivotal role in voice-activated artificial intelligence (AI) systems, allowing hands-free interaction between humans and smart devices through retrieval of voice commands. Cloud computing integrated with artificial neural networks has been employed to execute KWS tasks, but it suffers from propagation delay and the risk of privacy breaches. Here, we report a single-node reservoir computing (RC) system based on a CuInP2S6 (CIPS)/graphene heterostructure planar device for implementing the KWS task at low computation cost. By deliberately tuning the Schottky barrier height at the ferroelectric CIPS interfaces for thermionic injection and transport of electrons, the device achieves the typical nonlinear current response and fading memory characteristics. Additionally, the device exhibits diverse synaptic plasticity with an excellent capability to separate temporal information. We construct an RC system employing the ferroelectric device as the physical node to spot acoustic keywords, i.e., the natural numbers from 1 to 9, in simulation; the system demonstrates outstanding performance with a high accuracy rate (>94.6%) and recall rate (>92.0%). Our work establishes single-node physical RC as a prospective computing platform for processing acoustic keywords, promoting its application in artificial auditory systems at the edge.
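The single-node RC scheme can be mimicked in software with a time-multiplexed nonlinear node and a trained linear readout. In the sketch below, the toy signals, node dynamics, and parameters are placeholders for the ferroelectric device physics and spoken-digit data, shown only to make the architecture concrete:

import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
N_VIRTUAL = 50                                   # virtual nodes from time multiplexing
MASK = rng.choice([-0.5, 0.5], size=N_VIRTUAL)   # fixed random input mask

def single_node_reservoir(u, decay=0.5, gain=0.9):
    # One nonlinear fading-memory node; x[i - 1] at i=0 wraps to the last tap
    # of the previous step, giving the delay-line coupling.
    x = np.zeros(N_VIRTUAL)
    for s in u:
        for i in range(N_VIRTUAL):
            x[i] = decay * x[i] + np.tanh(gain * x[i - 1] + MASK[i] * s)
    return x                                     # final state summarizes the sequence

def toy_keyword(freq, n=40):
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.normal(size=n)

X, y = [], []
for _ in range(40):
    for label, freq in enumerate((3.0, 7.0)):    # two toy "keywords"
        X.append(single_node_reservoir(toy_keyword(freq)))
        y.append(label)

readout = RidgeClassifier().fit(np.array(X), np.array(y))  # only the readout is trained
print("training accuracy:", readout.score(np.array(X), np.array(y)))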
Additional Links: PMID-39140427
@article {pmid39140427,
year = {2024},
author = {Cao, Y and Zhang, Z and Qin, BW and Sang, W and Li, H and Wang, T and Tan, F and Gan, Y and Zhang, X and Liu, T and Xiang, D and Lin, W and Liu, Q},
title = {Physical Reservoir Computing Using van der Waals Ferroelectrics for Acoustic Keyword Spotting.},
journal = {ACS nano},
volume = {},
number = {},
pages = {},
doi = {10.1021/acsnano.4c06144},
pmid = {39140427},
issn = {1936-086X},
abstract = {Acoustic keyword spotting (KWS) plays a pivotal role in voice-activated artificial intelligence (AI) systems, allowing hands-free interaction between humans and smart devices through retrieval of voice commands. Cloud computing integrated with artificial neural networks has been employed to execute KWS tasks, but it suffers from propagation delay and the risk of privacy breaches. Here, we report a single-node reservoir computing (RC) system based on a CuInP2S6 (CIPS)/graphene heterostructure planar device for implementing the KWS task at low computation cost. By deliberately tuning the Schottky barrier height at the ferroelectric CIPS interfaces for thermionic injection and transport of electrons, the device achieves the typical nonlinear current response and fading memory characteristics. Additionally, the device exhibits diverse synaptic plasticity with an excellent capability to separate temporal information. We construct an RC system employing the ferroelectric device as the physical node to spot acoustic keywords, i.e., the natural numbers from 1 to 9, in simulation; the system demonstrates outstanding performance with a high accuracy rate (>94.6%) and recall rate (>92.0%). Our work establishes single-node physical RC as a prospective computing platform for processing acoustic keywords, promoting its application in artificial auditory systems at the edge.},
}
RevDate: 2024-08-14
Balancing efficacy and computational burden: weighted mean, multiple imputation, and inverse probability weighting methods for item non-response in reliable scales.
Journal of the American Medical Informatics Association : JAMIA pii:7733273 [Epub ahead of print].
IMPORTANCE: Scales often arise from multi-item questionnaires, yet commonly face item non-response. Traditional solutions use weighted mean (WMean) from available responses, but potentially overlook missing data intricacies. Advanced methods like multiple imputation (MI) address broader missing data, but demand increased computational resources. Researchers frequently use survey data in the All of Us Research Program (All of Us), and it is imperative to determine if the increased computational burden of employing MI to handle non-response is justifiable.
OBJECTIVES: Using the 5-item Physical Activity Neighborhood Environment Scale (PANES) in All of Us, this study assessed the tradeoff between efficacy and computational demands of WMean, MI, and inverse probability weighting (IPW) when dealing with item non-response.
MATERIALS AND METHODS: Synthetic missingness, allowing 1 or more item non-response, was introduced into PANES across 3 missing mechanisms and various missing percentages (10%-50%). Each scenario compared WMean of complete questions, MI, and IPW on bias, variability, coverage probability, and computation time.
RESULTS: All methods showed minimal bias (all <5.5%) when internal consistency was good, with WMean suffering most under poor consistency. IPW showed considerable variability as the missing percentage increased. MI required significantly more computational resources, taking >8000 and >100 times longer than WMean and IPW, respectively, in the full data analysis.
DISCUSSION AND CONCLUSION: The marginal performance advantages of MI for item non-response in highly reliable scales do not warrant its escalated cloud computational burden in All of Us, particularly when coupled with computationally demanding post-imputation analyses. Researchers using survey scales with low missingness could utilize WMean to reduce computing burden.
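The WMean baseline is simple enough to state in a few lines; the responses below are synthetic stand-ins for PANES items in All of Us:

import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(1, 5, size=(6, 5)).astype(float)  # 6 respondents x 5 items (synthetic)
X[rng.random(X.shape) < 0.2] = np.nan              # ~20% synthetic item non-response

wmean_scores = np.nanmean(X, axis=1)               # score from answered items only
print(wmean_scores)

MI, by contrast, fits an imputation model, repeats the downstream analysis across multiple imputed datasets, and pools the results, which is where the reported >8000-fold cost difference arises.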
Additional Links: PMID-39138951
@article {pmid39138951,
year = {2024},
author = {Guide, A and Garbett, S and Feng, X and Mapes, BM and Cook, J and Sulieman, L and Cronin, RM and Chen, Q},
title = {Balancing efficacy and computational burden: weighted mean, multiple imputation, and inverse probability weighting methods for item non-response in reliable scales.},
journal = {Journal of the American Medical Informatics Association : JAMIA},
volume = {},
number = {},
pages = {},
doi = {10.1093/jamia/ocae217},
pmid = {39138951},
issn = {1527-974X},
support = {3OT2OD035404/NH/NIH HHS/United States ; },
abstract = {IMPORTANCE: Scales often arise from multi-item questionnaires, yet commonly face item non-response. Traditional solutions use weighted mean (WMean) from available responses, but potentially overlook missing data intricacies. Advanced methods like multiple imputation (MI) address broader missing data, but demand increased computational resources. Researchers frequently use survey data in the All of Us Research Program (All of Us), and it is imperative to determine if the increased computational burden of employing MI to handle non-response is justifiable.
OBJECTIVES: Using the 5-item Physical Activity Neighborhood Environment Scale (PANES) in All of Us, this study assessed the tradeoff between efficacy and computational demands of WMean, MI, and inverse probability weighting (IPW) when dealing with item non-response.
MATERIALS AND METHODS: Synthetic missingness, allowing 1 or more item non-response, was introduced into PANES across 3 missing mechanisms and various missing percentages (10%-50%). Each scenario compared WMean of complete questions, MI, and IPW on bias, variability, coverage probability, and computation time.
RESULTS: All methods showed minimal bias (all <5.5%) when internal consistency was good, with WMean suffering most under poor consistency. IPW showed considerable variability as the missing percentage increased. MI required significantly more computational resources, taking >8000 and >100 times longer than WMean and IPW, respectively, in the full data analysis.
DISCUSSION AND CONCLUSION: The marginal performance advantages of MI for item non-response in highly reliable scales do not warrant its escalated cloud computational burden in All of Us, particularly when coupled with computationally demanding post-imputation analyses. Researchers using survey scales with low missingness could utilize WMean to reduce computing burden.},
}
RevDate: 2024-08-13
CmpDate: 2024-08-13
End-to-end reproducible AI pipelines in radiology using the cloud.
Nature communications, 15(1):6931.
Artificial intelligence (AI) algorithms hold the potential to revolutionize radiology. However, a significant portion of the published literature lacks transparency and reproducibility, which hampers sustained progress toward clinical translation. Although several reporting guidelines have been proposed, identifying practical means to address these issues remains challenging. Here, we show the potential of cloud-based infrastructure for implementing and sharing transparent and reproducible AI-based radiology pipelines. We demonstrate end-to-end reproducibility from retrieving cloud-hosted data, through data pre-processing, deep learning inference, and post-processing, to the analysis and reporting of the final results. We successfully implement two distinct use cases, starting from recent literature on AI-based biomarkers for cancer imaging. Using cloud-hosted data and computing, we confirm the findings of these studies and extend the validation to previously unseen data for one of the use cases. Furthermore, we provide the community with transparent and easy-to-extend examples of pipelines impactful for the broader oncology field. Our approach demonstrates the potential of cloud resources for implementing, sharing, and using reproducible and transparent AI pipelines, which can accelerate the translation into clinical solutions.
Additional Links: PMID-39138215
@article {pmid39138215,
year = {2024},
author = {Bontempi, D and Nuernberg, L and Pai, S and Krishnaswamy, D and Thiriveedhi, V and Hosny, A and Mak, RH and Farahani, K and Kikinis, R and Fedorov, A and Aerts, HJWL},
title = {End-to-end reproducible AI pipelines in radiology using the cloud.},
journal = {Nature communications},
volume = {15},
number = {1},
pages = {6931},
pmid = {39138215},
issn = {2041-1723},
support = {866504//EC | EU Framework Programme for Research and Innovation H2020 | H2020 Priority Excellent Science | H2020 European Research Council (H2020 Excellent Science - European Research Council)/ ; HHSN261201500003l//Foundation for the National Institutes of Health (Foundation for the National Institutes of Health, Inc.)/ ; },
mesh = {*Cloud Computing ; Humans ; *Artificial Intelligence ; Reproducibility of Results ; Deep Learning ; Radiology/methods/standards ; Algorithms ; Neoplasms/diagnostic imaging ; Image Processing, Computer-Assisted/methods ; },
abstract = {Artificial intelligence (AI) algorithms hold the potential to revolutionize radiology. However, a significant portion of the published literature lacks transparency and reproducibility, which hampers sustained progress toward clinical translation. Although several reporting guidelines have been proposed, identifying practical means to address these issues remains challenging. Here, we show the potential of cloud-based infrastructure for implementing and sharing transparent and reproducible AI-based radiology pipelines. We demonstrate end-to-end reproducibility from retrieving cloud-hosted data, through data pre-processing, deep learning inference, and post-processing, to the analysis and reporting of the final results. We successfully implement two distinct use cases, starting from recent literature on AI-based biomarkers for cancer imaging. Using cloud-hosted data and computing, we confirm the findings of these studies and extend the validation to previously unseen data for one of the use cases. Furthermore, we provide the community with transparent and easy-to-extend examples of pipelines impactful for the broader oncology field. Our approach demonstrates the potential of cloud resources for implementing, sharing, and using reproducible and transparent AI pipelines, which can accelerate the translation into clinical solutions.},
}
MeSH Terms:
*Cloud Computing
Humans
*Artificial Intelligence
Reproducibility of Results
Deep Learning
Radiology/methods/standards
Algorithms
Neoplasms/diagnostic imaging
Image Processing, Computer-Assisted/methods
RevDate: 2024-08-13
Volatile tin oxide memristor for neuromorphic computing.
iScience, 27(8):110479.
The rise of neuromorphic systems has addressed the shortcomings of current computing architectures, especially regarding energy efficiency and scalability. These systems use cutting-edge technologies such as Pt/SnOx/TiN memristors, which efficiently mimic synaptic behavior and provide potential solutions to modern computing challenges. Moreover, their unipolar resistive switching ability enables precise modulation of the synaptic weights, facilitating energy-efficient parallel processing that is similar to biological synapses. Additionally, memristors' spike-rate-dependent plasticity enhances the adaptability of neural circuits, offering promising applications in intelligent computing. Integrating memristors into edge computing architectures further highlights their importance in tackling the security and efficiency issues associated with conventional cloud computing models.
Additional Links: PMID-39129832
@article {pmid39129832,
year = {2024},
author = {Ju, D and Kim, S},
title = {Volatile tin oxide memristor for neuromorphic computing.},
journal = {iScience},
volume = {27},
number = {8},
pages = {110479},
pmid = {39129832},
issn = {2589-0042},
abstract = {The rise of neuromorphic systems has addressed the shortcomings of current computing architectures, especially regarding energy efficiency and scalability. These systems use cutting-edge technologies such as Pt/SnOx/TiN memristors, which efficiently mimic synaptic behavior and provide potential solutions to modern computing challenges. Moreover, their unipolar resistive switching ability enables precise modulation of the synaptic weights, facilitating energy-efficient parallel processing that is similar to biological synapses. Additionally, memristors' spike-rate-dependent plasticity enhances the adaptability of neural circuits, offering promising applications in intelligent computing. Integrating memristors into edge computing architectures further highlights their importance in tackling the security and efficiency issues associated with conventional cloud computing models.},
}
RevDate: 2024-08-12
Design and Enhancement of a Fog-Enabled Air Quality Monitoring and Prediction System: An Optimized Lightweight Deep Learning Model for a Smart Fog Environmental Gateway.
Sensors (Basel, Switzerland), 24(15):.
Effective air quality monitoring and forecasting are essential for safeguarding public health, protecting the environment, and promoting sustainable development in smart cities. Conventional systems are cloud-based, incur high costs, lack accurate Deep Learning (DL) models for multi-step forecasting, and fail to optimize DL models for fog nodes. To address these challenges, this paper proposes a Fog-enabled Air Quality Monitoring and Prediction (FAQMP) system that integrates the Internet of Things (IoT), Fog Computing (FC), Low-Power Wide-Area Networks (LPWANs), and Deep Learning (DL) for improved accuracy and efficiency in monitoring and forecasting air quality levels. The three-layered FAQMP system includes a low-cost Air Quality Monitoring (AQM) node transmitting data via LoRa to the Fog Computing layer and then to the cloud layer for complex processing. The Smart Fog Environmental Gateway (SFEG) in the FC layer introduces efficient Fog Intelligence by employing an optimized lightweight DL-based Sequence-to-Sequence (Seq2Seq) Gated Recurrent Unit (GRU) attention model, enabling real-time processing, accurate forecasting, and timely warnings of dangerous AQI levels while optimizing fog resource usage. Initially, the Seq2Seq GRU attention model, validated for multi-step forecasting, outperformed the state-of-the-art DL methods with an average RMSE of 5.5576, MAE of 3.4975, MAPE of 19.1991%, R² of 0.6926, and Theil's U1 of 0.1325. This model is then made lightweight and optimized using post-training quantization (PTQ), specifically dynamic range quantization, which reduced the model size to less than a quarter of the original and improved execution time by 81.53% while maintaining forecast accuracy. This optimization enables efficient deployment on resource-constrained fog nodes like the SFEG by balancing performance and computational efficiency, thereby enhancing the effectiveness of the FAQMP system through efficient Fog Intelligence. The FAQMP system, supported by the EnviroWeb application, provides real-time AQI updates, forecasts, and alerts, aiding the government in proactively addressing pollution concerns, maintaining air quality standards, and fostering a healthier and more sustainable environment.
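Post-training dynamic range quantization of the kind described is available in TensorFlow Lite. Assuming a TensorFlow implementation (the paper's framework is not stated here), a sketch with a placeholder forecasting model looks like this:

import tensorflow as tf

# Placeholder forecaster: flattened 24 time steps x 7 features in,
# 6-step-ahead forecast out. Not the paper's Seq2Seq GRU attention model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24 * 7,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(6),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enables dynamic range quantization
tflite_bytes = converter.convert()

with open("forecaster_dynamic_range.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"quantized model size: {len(tflite_bytes)} bytes")

Dynamic range quantization stores weights as 8-bit integers while keeping activations in floating point, which is what shrinks the model and speeds up inference on resource-constrained fog nodes.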
Additional Links: PMID-39124116
@article {pmid39124116,
year = {2024},
author = {Pazhanivel, DB and Velu, AN and Palaniappan, BS},
title = {Design and Enhancement of a Fog-Enabled Air Quality Monitoring and Prediction System: An Optimized Lightweight Deep Learning Model for a Smart Fog Environmental Gateway.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {15},
pages = {},
pmid = {39124116},
issn = {1424-8220},
abstract = {Effective air quality monitoring and forecasting are essential for safeguarding public health, protecting the environment, and promoting sustainable development in smart cities. Conventional systems are cloud-based, incur high costs, lack accurate Deep Learning (DL) models for multi-step forecasting, and fail to optimize DL models for fog nodes. To address these challenges, this paper proposes a Fog-enabled Air Quality Monitoring and Prediction (FAQMP) system that integrates the Internet of Things (IoT), Fog Computing (FC), Low-Power Wide-Area Networks (LPWANs), and Deep Learning (DL) for improved accuracy and efficiency in monitoring and forecasting air quality levels. The three-layered FAQMP system includes a low-cost Air Quality Monitoring (AQM) node transmitting data via LoRa to the Fog Computing layer and then to the cloud layer for complex processing. The Smart Fog Environmental Gateway (SFEG) in the FC layer introduces efficient Fog Intelligence by employing an optimized lightweight DL-based Sequence-to-Sequence (Seq2Seq) Gated Recurrent Unit (GRU) attention model, enabling real-time processing, accurate forecasting, and timely warnings of dangerous AQI levels while optimizing fog resource usage. Initially, the Seq2Seq GRU attention model, validated for multi-step forecasting, outperformed the state-of-the-art DL methods with an average RMSE of 5.5576, MAE of 3.4975, MAPE of 19.1991%, R² of 0.6926, and Theil's U1 of 0.1325. This model is then made lightweight and optimized using post-training quantization (PTQ), specifically dynamic range quantization, which reduced the model size to less than a quarter of the original and improved execution time by 81.53% while maintaining forecast accuracy. This optimization enables efficient deployment on resource-constrained fog nodes like the SFEG by balancing performance and computational efficiency, thereby enhancing the effectiveness of the FAQMP system through efficient Fog Intelligence. The FAQMP system, supported by the EnviroWeb application, provides real-time AQI updates, forecasts, and alerts, aiding the government in proactively addressing pollution concerns, maintaining air quality standards, and fostering a healthier and more sustainable environment.},
}
RevDate: 2024-08-10
Architectures for Industrial AIoT Applications.
Sensors (Basel, Switzerland), 24(15): pii:s24154929.
Industry 4.0 introduced new concepts, technologies, and paradigms, such as Cyber Physical Systems (CPSs), the Industrial Internet of Things (IIoT) and, more recently, the Artificial Intelligence of Things (AIoT). These paradigms ease the creation of complex systems by integrating heterogeneous devices. As a result, the structure of production systems is changing completely. In this scenario, the adoption of reference architectures based on standards may guide designers and developers in creating complex AIoT applications. This article surveys the main reference architectures available for industrial AIoT applications, analyzing their key characteristics, objectives, and benefits; it also presents some use cases that may help designers create new applications. The main goal of this review is to help engineers identify the alternative that best suits each application. The authors conclude that existing reference architectures are a necessary tool for standardizing AIoT applications, since they may guide developers in the process of developing new applications. However, the use of reference architectures in real industrial AIoT applications is still incipient, so more development effort is needed for them to be widely adopted.
Additional Links: PMID-39123976
@article {pmid39123976,
year = {2024},
author = {Villar, E and Martín Toral, I and Calvo, I and Barambones, O and Fernández-Bustamante, P},
title = {Architectures for Industrial AIoT Applications.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {15},
pages = {},
doi = {10.3390/s24154929},
pmid = {39123976},
issn = {1424-8220},
abstract = {Industry 4.0 introduced new concepts, technologies, and paradigms, such as Cyber Physical Systems (CPSs), the Industrial Internet of Things (IIoT) and, more recently, the Artificial Intelligence of Things (AIoT). These paradigms ease the creation of complex systems by integrating heterogeneous devices. As a result, the structure of production systems is changing completely. In this scenario, the adoption of reference architectures based on standards may guide designers and developers in creating complex AIoT applications. This article surveys the main reference architectures available for industrial AIoT applications, analyzing their key characteristics, objectives, and benefits; it also presents some use cases that may help designers create new applications. The main goal of this review is to help engineers identify the alternative that best suits each application. The authors conclude that existing reference architectures are a necessary tool for standardizing AIoT applications, since they may guide developers in the process of developing new applications. However, the use of reference architectures in real industrial AIoT applications is still incipient, so more development effort is needed for them to be widely adopted.},
}
RevDate: 2024-08-08
Industry 4.0 Technologies in Maternal Health Care: Bibliometric Analysis and Research Agenda.
JMIR pediatrics and parenting, 7:e47848 pii:v7i1e47848.
BACKGROUND: Industry 4.0 (I4.0) technologies have improved operations in health care facilities by optimizing processes, leading to efficient systems and tools to assist health care personnel and patients.
OBJECTIVE: This study investigates the current implementation and impact of I4.0 technologies within maternal health care, explicitly focusing on transforming care processes, treatment methods, and automated pregnancy monitoring. Additionally, it conducts a thematic landscape mapping, offering a nuanced understanding of this emerging field. Building on this analysis, a future research agenda is proposed, highlighting critical areas for future investigations.
METHODS: A bibliometric analysis of publications retrieved from the Scopus database was conducted to examine how the research into I4.0 technologies in maternal health care evolved from 1985 to 2022. A search strategy was used to screen the eligible publications using the abstract and full-text reading. The most productive and influential journals; authors', institutions', and countries' influence on maternal health care; and current trends and thematic evolution were computed using the Bibliometrix R package (R Core Team).
RESULTS: A total of 1003 unique papers in English were retrieved using the search string, and 136 papers were retained after the inclusion and exclusion criteria were implemented, covering 37 years from 1985 to 2022. The annual growth rate of publications was 9.53%, with 88.9% (n=121) of the publications observed in 2016-2022. In the thematic analysis, 4 clusters were identified-artificial neural networks, data mining, machine learning, and the Internet of Things. Artificial intelligence, deep learning, risk prediction, digital health, telemedicine, wearable devices, mobile health care, and cloud computing remained the dominant research themes in 2016-2022.
CONCLUSIONS: This bibliometric analysis reviews the state of the art in the evolution and structure of I4.0 technologies in maternal health care and how they may be used to optimize operational processes. A conceptual framework with 4 performance factors (risk prediction, hospital care, health record management, and self-care) is suggested for process improvement. A research agenda is also proposed covering governance, adoption, infrastructure, privacy, and security.
Additional Links: PMID-39116433
@article {pmid39116433,
year = {2024},
author = {Sibanda, K and Ndayizigamiye, P and Twinomurinzi, H},
title = {Industry 4.0 Technologies in Maternal Health Care: Bibliometric Analysis and Research Agenda.},
journal = {JMIR pediatrics and parenting},
volume = {7},
number = {},
pages = {e47848},
doi = {10.2196/47848},
pmid = {39116433},
issn = {2561-6722},
abstract = {BACKGROUND: Industry 4.0 (I4.0) technologies have improved operations in health care facilities by optimizing processes, leading to efficient systems and tools to assist health care personnel and patients.
OBJECTIVE: This study investigates the current implementation and impact of I4.0 technologies within maternal health care, explicitly focusing on transforming care processes, treatment methods, and automated pregnancy monitoring. Additionally, it conducts a thematic landscape mapping, offering a nuanced understanding of this emerging field. Building on this analysis, a future research agenda is proposed, highlighting critical areas for future investigations.
METHODS: A bibliometric analysis of publications retrieved from the Scopus database was conducted to examine how the research into I4.0 technologies in maternal health care evolved from 1985 to 2022. A search strategy was used to screen the eligible publications using the abstract and full-text reading. The most productive and influential journals; authors', institutions', and countries' influence on maternal health care; and current trends and thematic evolution were computed using the Bibliometrix R package (R Core Team).
RESULTS: A total of 1003 unique papers in English were retrieved using the search string, and 136 papers were retained after the inclusion and exclusion criteria were implemented, covering 37 years from 1985 to 2022. The annual growth rate of publications was 9.53%, with 88.9% (n=121) of the publications observed in 2016-2022. In the thematic analysis, 4 clusters were identified-artificial neural networks, data mining, machine learning, and the Internet of Things. Artificial intelligence, deep learning, risk prediction, digital health, telemedicine, wearable devices, mobile health care, and cloud computing remained the dominant research themes in 2016-2022.
CONCLUSIONS: This bibliometric analysis reviews the state of the art in the evolution and structure of I4.0 technologies in maternal health care and how they may be used to optimize operational processes. A conceptual framework with 4 performance factors (risk prediction, hospital care, health record management, and self-care) is suggested for process improvement. A research agenda is also proposed covering governance, adoption, infrastructure, privacy, and security.},
}
RevDate: 2024-08-07
Mapping agricultural tile drainage in the US Midwest using explainable random forest machine learning and satellite imagery.
The Science of the total environment pii:S0048-9697(24)05433-0 [Epub ahead of print].
There has been an increase in tile drained area across the US Midwest and other regions worldwide due to agricultural expansion, intensification, and climate variability. Despite this growth, spatially explicit tile drainage maps remain scarce, which limits the accuracy of hydrologic modeling and implementation of nutrient reduction strategies. Here, we developed a machine-learning model to provide a Spatially Explicit Estimate of Tile Drainage (SEETileDrain) across the US Midwest in 2017 at a 30-m resolution. This model used 31 satellite-derived and environmental features after removing less important and highly correlated features. It was trained with 60,938 tile and non-tile ground truth points within the Google Earth Engine cloud-computing platform. We also used multiple feature importance metrics and Accumulated Local Effects to interpret the machine learning model. The results show that our model achieved good accuracy, with 96% of points classified correctly and an F1 score of 0.90. When tile drainage area is aggregated to the county scale, it agreed well (r² = 0.69) with the reported area from the Ag Census. We found that Land Surface Temperature (LST) along with climate- and soil-related features were the most important factors for classification. The top-ranked feature is the median summer nighttime LST, followed by median summer soil moisture percent. This study demonstrates the potential of applying satellite remote sensing to map spatially explicit agricultural tile drainage across large regions. The results should be useful for land use change monitoring and hydrologic and nutrient models, including those designed to achieve cost-effective agricultural water and nutrient management strategies. The algorithms developed here should also be applicable for other remote sensing mapping applications.
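In Google Earth Engine's Python API, training a random forest of this kind follows the pattern below; the asset paths, feature names, and parameters are hypothetical placeholders, not the SEETileDrain inputs:

import ee
ee.Initialize()

# Hypothetical assets: a 30 m predictor stack and labeled ground-truth points
features = ["summer_night_lst_median", "summer_soil_moisture_median"]
stack = ee.Image("users/example/predictor_stack")
points = ee.FeatureCollection("users/example/tile_drain_truth")  # property "tile" in {0, 1}

samples = stack.select(features).sampleRegions(
    collection=points, properties=["tile"], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=samples, classProperty="tile", inputProperties=features)

tile_map = stack.select(features).classify(classifier)      # 30 m drainage map
print(classifier.explain().getInfo())                       # includes feature importance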
Additional Links: PMID-39111449
@article {pmid39111449,
year = {2024},
author = {Wan, L and Kendall, AD and Rapp, J and Hyndman, DW},
title = {Mapping agricultural tile drainage in the US Midwest using explainable random forest machine learning and satellite imagery.},
journal = {The Science of the total environment},
volume = {},
number = {},
pages = {175283},
doi = {10.1016/j.scitotenv.2024.175283},
pmid = {39111449},
issn = {1879-1026},
abstract = {There has been an increase in tile drained area across the US Midwest and other regions worldwide due to agricultural expansion, intensification, and climate variability. Despite this growth, spatially explicit tile drainage maps remain scarce, which limits the accuracy of hydrologic modeling and implementation of nutrient reduction strategies. Here, we developed a machine-learning model to provide a Spatially Explicit Estimate of Tile Drainage (SEETileDrain) across the US Midwest in 2017 at a 30-m resolution. This model used 31 satellite-derived and environmental features after removing less important and highly correlated features. It was trained with 60,938 tile and non-tile ground truth points within the Google Earth Engine cloud-computing platform. We also used multiple feature importance metrics and Accumulated Local Effects to interpret the machine learning model. The results show that our model achieved good accuracy, with 96% of points classified correctly and an F1 score of 0.90. When tile drainage area is aggregated to the county scale, it agreed well (r² = 0.69) with the reported area from the Ag Census. We found that Land Surface Temperature (LST) along with climate- and soil-related features were the most important factors for classification. The top-ranked feature is the median summer nighttime LST, followed by median summer soil moisture percent. This study demonstrates the potential of applying satellite remote sensing to map spatially explicit agricultural tile drainage across large regions. The results should be useful for land use change monitoring and hydrologic and nutrient models, including those designed to achieve cost-effective agricultural water and nutrient management strategies. The algorithms developed here should also be applicable for other remote sensing mapping applications.},
}
RevDate: 2024-08-06
CmpDate: 2024-08-06
Towards understanding climate change impacts: monitoring the vegetation dynamics of terrestrial national parks in Indonesia.
Scientific reports, 14(1):18257.
Monitoring vegetation dynamics in terrestrial national parks (TNPs) is crucial for ensuring sustainable environmental management, mitigating the potential negative impacts of short- and long-term disturbances, and understanding the effects of climate change within natural and protected areas. This study aims to monitor the vegetation dynamics of TNPs in Indonesia by first categorizing them into the regions of Sumatra, Jawa, Kalimantan, Sulawesi, and Eastern Indonesia and then applying ready-to-use MODIS EVI time-series imagery (MOD13Q1) taken from 2000 to 2022 on the GEE cloud-computing platform. Specifically, this research investigates the greening and browning fraction trends using Sen's slope, considers seasonality by analyzing the maximum and minimum EVI values, and assesses anomalous years by comparing the annual time series with the long-term median EVI value. The findings reveal significantly increasing greening trends in most TNPs, except Danau Sentarum, from 2000 to 2022. The seasonality analysis shows that most TNPs exhibit peak and trough greenness at the end of the rainy and dry seasons, respectively, as vegetation responds to increases and decreases in precipitation. Anomalies in seasonality affected by climate change were detected in all of the regions. To increase the resilience of TNPs, suggested measures include active reforestation, implementation of Assisted Natural Regeneration, stronger enforcement of fundamental managerial tasks, and forest fire management.
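Sen's slope on an annual EVI series can be computed directly with SciPy; the series below is synthetic, not MOD13Q1 data from GEE:

import numpy as np
from scipy.stats import kendalltau, theilslopes

years = np.arange(2000, 2023)
evi = 0.42 + 0.002 * (years - 2000) + np.random.default_rng(2).normal(0, 0.01, years.size)

slope, intercept, lo, hi = theilslopes(evi, years)   # Sen's (Theil-Sen) slope with 95% CI
tau, p = kendalltau(years, evi)                      # Mann-Kendall-style trend significance
print(f"Sen's slope = {slope:.4f} EVI/yr (95% CI {lo:.4f} to {hi:.4f}), p = {p:.3g}")

The Theil-Sen estimator is the median of all pairwise slopes, which makes the greening/browning trend robust to outlier years such as single flood- or drought-driven anomalies.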
Additional Links: PMID-39107423
@article {pmid39107423,
year = {2024},
author = {Ramdani, F and Setiani, P and Sianturi, R},
title = {Towards understanding climate change impacts: monitoring the vegetation dynamics of terrestrial national parks in Indonesia.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {18257},
pmid = {39107423},
issn = {2045-2322},
mesh = {*Climate Change ; Indonesia ; *Parks, Recreational ; *Conservation of Natural Resources ; Seasons ; Environmental Monitoring/methods ; Ecosystem ; Plants ; },
abstract = {Monitoring vegetation dynamics in terrestrial national parks (TNPs) is crucial for ensuring sustainable environmental management, mitigating the potential negative impacts of short- and long-term disturbances, and understanding the effects of climate change within natural and protected areas. This study aims to monitor the vegetation dynamics of TNPs in Indonesia by first categorizing them into the regions of Sumatra, Jawa, Kalimantan, Sulawesi, and Eastern Indonesia and then applying ready-to-use MODIS EVI time-series imagery (MOD13Q1) taken from 2000 to 2022 on the GEE cloud-computing platform. Specifically, this research investigates the greening and browning fraction trends using Sen's slope, considers seasonality by analyzing the maximum and minimum EVI values, and assesses anomalous years by comparing the annual time series with the long-term median EVI value. The findings reveal significantly increasing greening trends in most TNPs, except Danau Sentarum, from 2000 to 2022. The seasonality analysis shows that most TNPs exhibit peak and trough greenness at the end of the rainy and dry seasons, respectively, as vegetation responds to increases and decreases in precipitation. Anomalies in seasonality affected by climate change were detected in all of the regions. To increase the resilience of TNPs, suggested measures include active reforestation, implementation of Assisted Natural Regeneration, stronger enforcement of fundamental managerial tasks, and forest fire management.},
}
MeSH Terms:
*Climate Change
Indonesia
*Parks, Recreational
*Conservation of Natural Resources
Seasons
Environmental Monitoring/methods
Ecosystem
Plants
RevDate: 2024-08-05
CmpDate: 2024-08-05
Transcriptomics and epigenetic data integration learning module on Google Cloud.
Briefings in bioinformatics, 25(Supplement_1):.
Multi-omics (genomics, transcriptomics, epigenomics, proteomics, metabolomics, etc.) research approaches are vital for understanding the hierarchical complexity of human biology and have proven to be extremely valuable in cancer research and precision medicine. Emerging scientific advances in recent years have made high-throughput genome-wide sequencing a central focus in molecular research by allowing for the collective analysis of various kinds of molecular biological data from different types of specimens in a single tissue or even at the level of a single cell. Additionally, with the help of improved computational resources and data mining, researchers are able to integrate data from different multi-omics regimes to identify new prognostic, diagnostic, or predictive biomarkers, uncover novel therapeutic targets, and develop more personalized treatment protocols for patients. For the research community to parse the scientifically and clinically meaningful information out of all the biological data being generated each day more efficiently and with less wasted resources, being familiar with and comfortable using advanced analytical tools, such as the Google Cloud Platform, becomes imperative. This project is an interdisciplinary, cross-organizational effort to provide a guided learning module for integrating transcriptomics and epigenetics data analysis protocols into a comprehensive analysis pipeline that users can implement in their own work, utilizing the cloud computing infrastructure on Google Cloud. The learning module consists of three submodules that guide the user through tutorial examples illustrating the analysis of RNA-sequencing and Reduced-Representation Bisulfite Sequencing data. The examples are in the form of breast cancer case studies, and the data sets were procured from the public repository Gene Expression Omnibus. The first submodule is devoted to transcriptomics analysis with the RNA-sequencing data, the second submodule focuses on epigenetics analysis using the DNA methylation data, and the third submodule integrates the two methods for a deeper biological understanding. The modules begin with data collection and preprocessing, with further downstream analysis performed in a Vertex AI Jupyter notebook instance with an R kernel. Analysis results are returned to Google Cloud buckets for storage and visualization, removing the computational strain from local resources. The final product is a start-to-finish tutorial for researchers with limited experience in multi-omics, showing how to integrate transcriptomics and epigenetics data analysis into a comprehensive pipeline for their own biological research. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [16] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
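The "results back to Google Cloud buckets" step the module describes corresponds to a short google-cloud-storage call. In this sketch, the bucket and object names are placeholders and authentication is assumed to be already configured:

from google.cloud import storage

client = storage.Client()                                # uses ambient credentials
bucket = client.bucket("example-omics-results")          # hypothetical bucket name
blob = bucket.blob("brca_case_study/deg_table.csv")      # hypothetical object path
blob.upload_from_filename("deg_table.csv")               # local analysis output file
print(f"uploaded to gs://{bucket.name}/{blob.name}")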
Additional Links: PMID-39101486
Citation:
@article {pmid39101486,
year = {2024},
author = {Ruprecht, NA and Kennedy, JD and Bansal, B and Singhal, S and Sens, D and Maggio, A and Doe, V and Hawkins, D and Campbel, R and O'Connell, K and Gill, JS and Schaefer, K and Singhal, SK},
title = {Transcriptomics and epigenetic data integration learning module on Google Cloud.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
pmid = {39101486},
issn = {1477-4054},
support = {P20GM103442//National Institute of General Medical Sciences of the National Institutes of Health/ ; },
mesh = {Humans ; *Cloud Computing ; *Epigenomics/methods ; Epigenesis, Genetic ; Transcriptome ; Computational Biology/methods ; Gene Expression Profiling/methods ; Software ; Data Mining/methods ; },
abstract = {Multi-omics (genomics, transcriptomics, epigenomics, proteomics, metabolomics, etc.) research approaches are vital for understanding the hierarchical complexity of human biology and have proven to be extremely valuable in cancer research and precision medicine. Emerging scientific advances in recent years have made high-throughput genome-wide sequencing a central focus in molecular research by allowing for the collective analysis of various kinds of molecular biological data from different types of specimens in a single tissue or even at the level of a single cell. Additionally, with the help of improved computational resources and data mining, researchers are able to integrate data from different multi-omics regimes to identify new prognostic, diagnostic, or predictive biomarkers, uncover novel therapeutic targets, and develop more personalized treatment protocols for patients. For the research community to parse the scientifically and clinically meaningful information out of all the biological data being generated each day more efficiently with less wasted resources, being familiar with and comfortable using advanced analytical tools, such as Google Cloud Platform becomes imperative. This project is an interdisciplinary, cross-organizational effort to provide a guided learning module for integrating transcriptomics and epigenetics data analysis protocols into a comprehensive analysis pipeline for users to implement in their own work, utilizing the cloud computing infrastructure on Google Cloud. The learning module consists of three submodules that guide the user through tutorial examples that illustrate the analysis of RNA-sequence and Reduced-Representation Bisulfite Sequencing data. The examples are in the form of breast cancer case studies, and the data sets were procured from the public repository Gene Expression Omnibus. The first submodule is devoted to transcriptomics analysis with the RNA sequencing data, the second submodule focuses on epigenetics analysis using the DNA methylation data, and the third submodule integrates the two methods for a deeper biological understanding. The modules begin with data collection and preprocessing, with further downstream analysis performed in a Vertex AI Jupyter notebook instance with an R kernel. Analysis results are returned to Google Cloud buckets for storage and visualization, removing the computational strain from local resources. The final product is a start-to-finish tutorial for the researchers with limited experience in multi-omics to integrate transcriptomics and epigenetics data analysis into a comprehensive pipeline to perform their own biological research.This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [16] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
MeSH Terms:
Humans
*Cloud Computing
*Epigenomics/methods
Epigenesis, Genetic
Transcriptome
Computational Biology/methods
Gene Expression Profiling/methods
Software
Data Mining/methods
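The round trip between a Vertex AI notebook and Cloud Storage that this module relies on takes little code. A sketch in Python with the official google-cloud-storage client (the module itself works in an R kernel, and the bucket and file names here are placeholders):

```python
# Sketch: push an analysis artifact to a GCS bucket, then pull it back.
from google.cloud import storage

client = storage.Client()                      # picks up the notebook VM's credentials
bucket = client.bucket("my-analysis-results")  # hypothetical bucket name

blob = bucket.blob("rnaseq/deg_table.csv")     # destination object path
blob.upload_from_filename("deg_table.csv")     # local file produced by the analysis

blob.download_to_filename("deg_table_copy.csv")  # retrieve later for visualization
```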
RevDate: 2024-08-04
Trust value evaluation of cloud service providers using fuzzy inference based analytical process.
Scientific reports, 14(1):18028.
Cloud computing is a model of computing in which users purchase virtualized computer resources on demand. It offers numerous advantages over traditional methods for the IT and healthcare industries. However, a lack of trust between cloud service users (CSUs) and cloud service providers (CSPs) is hindering the widespread adoption of cloud computing across industries. Since cloud computing offers a wide range of trust models and strategies, it is essential to analyze a service using a detailed methodology in order to choose the appropriate cloud service for each user type. Achieving that requires identifying a comprehensive set of elements that are both necessary and sufficient for evaluating any cloud service. This study therefore proposes an accurate, fuzzy logic-based model for evaluating the trustworthiness of a cloud service provider, and examines how fuzzy logic improves the efficiency of trust evaluation. Trust is assessed using Quality of Service (QoS) characteristics such as security, privacy, dynamicity, data integrity, and performance. The outcomes of a MATLAB simulation demonstrate the viability of the proposed strategy in a cloud setting.
Additional Links: PMID-39098886
Citation:
@article {pmid39098886,
year = {2024},
author = {John, J and John Singh, K},
title = {Trust value evaluation of cloud service providers using fuzzy inference based analytical process.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {18028},
pmid = {39098886},
issn = {2045-2322},
abstract = {Users can purchase virtualized computer resources using the cloud computing concept, which is a novel and innovative way of computing. It offers numerous advantages for IT and healthcare industries over traditional methods. However, a lack of trust between CSUs and CSPs is hindering the widespread adoption of cloud computing across industries. Since cloud computing offers a wide range of trust models and strategies, it is essential to analyze the service using a detailed methodology in order to choose the appropriate cloud service for various user types. Finding a wide variety of comprehensive elements that are both required and sufficient for evaluating any cloud service is vital in order to achieve that. As a result, this study suggests an accurate, fuzzy logic-based trust evaluation model for evaluating the trustworthiness of a cloud service provider. Here, we examine how fuzzy logic raises the efficiency of trust evaluation. Trust is assessed using Quality of Service (QoS) characteristics like security, privacy, dynamicity, data integrity, and performance. The outcomes of a MATLAB simulation demonstrate the viability of the suggested strategy in a cloud setting.},
}
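The abstract does not disclose the membership functions or rule base behind the MATLAB simulation, so the following pure-Python sketch only illustrates the general Mamdani-style pattern of fuzzy trust scoring from QoS attributes; the breakpoints, rules, and restriction to two of the five attributes are all invented:

```python
# Sketch: fuzzy trust score from two QoS attributes on [0, 1].
def tri(x, a, b, c):
    """Triangular membership function rising over [a, b] and falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(security, performance):
    # Membership degrees for 'low' and 'high' linguistic terms.
    sec_low, sec_high = tri(security, -0.5, 0.0, 0.6), tri(security, 0.4, 1.0, 1.5)
    perf_low, perf_high = tri(performance, -0.5, 0.0, 0.6), tri(performance, 0.4, 1.0, 1.5)

    # Mamdani-style rules: AND = min; each rule fires toward a trust level.
    rules = [
        (min(sec_high, perf_high), 0.9),  # both strong  -> high trust
        (min(sec_high, perf_low), 0.5),   # secure, slow -> medium trust
        (min(sec_low, perf_high), 0.4),   # fast, weak   -> medium-low trust
        (min(sec_low, perf_low), 0.1),    # both weak    -> low trust
    ]

    # Weighted-average defuzzification of the fired rules.
    total = sum(w for w, _ in rules)
    return sum(w * level for w, level in rules) / total if total else 0.0

print(trust_score(security=0.8, performance=0.6))  # -> 0.9 for this toy rule base
```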
RevDate: 2024-08-03
Cloud computing load prediction method based on CNN-BiLSTM model under low-carbon background.
Scientific reports, 14(1):18004.
With the establishment of the "double carbon" goal, various industries are actively exploring ways to reduce carbon emissions. Cloud data centers often suffer from a mismatch between load requests and resource supply, resulting in excessive carbon emissions. This paper therefore proposes a complete method for predicting cloud computing carbon emissions. First, a combined convolutional neural network and bidirectional long short-term memory (CNN-BiLSTM) model is used to predict the cloud computing load. Real-time power is estimated from the predicted load, and carbon emissions are then derived from the power estimate. A dynamic server carbon emission model is developed so that predicted server emissions vary with CPU utilization, supporting the goal of emission reduction. Google cluster data are used to predict the load. The experimental results show that the combined CNN-BiLSTM model predicts well: compared with a multi-layer feed-forward neural network (BP), a long short-term memory network (LSTM), a bidirectional long short-term memory network (BiLSTM), and a modal-decomposition convolutional long-time-series network (CEEMDAN-ConvLSTM), its MSE decreased by 52%, 50%, 34%, and 45%, respectively.
Additional Links: PMID-39097607
Citation:
@article {pmid39097607,
year = {2024},
author = {Zhang, H and Li, J and Yang, H},
title = {Cloud computing load prediction method based on CNN-BiLSTM model under low-carbon background.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {18004},
pmid = {39097607},
issn = {2045-2322},
support = {XJ2023004301//Basic scientific research business fee of central colleges and universities/ ; },
abstract = {With the establishment of the "double carbon" goal, various industries are actively exploring ways to reduce carbon emissions. Cloud data centers, represented by cloud computing, often have the problem of mismatch between load requests and resource supply, resulting in excessive carbon emissions. Based on this, this paper proposes a complete method for cloud computing carbon emission prediction. Firstly, the convolutional neural network and bidirectional long-term and short-term memory neural network (CNN-BiLSTM) combined model are used to predict the cloud computing load. The real-time prediction power is obtained by real-time prediction load of cloud computing, and then the carbon emission prediction is obtained by power calculation. Develop a dynamic server carbon emission prediction model, so that the server carbon emission can change with the change of CPU utilization, so as to achieve the purpose of low carbon emission reduction. In this paper, Google cluster data is used to predict the load. The experimental results show that the CNN-BiLSTM combined model has good prediction effect. Compared with the multi-layer feed forward neural network model (BP), long short-term memory network model (LSTM), bidirectional long short-term memory network model (BiLSTM), modal decomposition and convolution long time series neural network model (CEEMDAN-ConvLSTM), the MSE index decreased by 52 % , 50 % , 34 % and 45 % respectively.},
}
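The paper does not publish window lengths or layer sizes, so the Keras sketch below shows only the generic CNN-BiLSTM shape for one-step load forecasting; every hyperparameter is an assumption:

```python
# Sketch: convolution captures local load patterns, BiLSTM the temporal context.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 48  # hypothetical: 48 past load samples per training example

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.Conv1D(64, kernel_size=3, activation='relu'),
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1),  # predicted load for the next interval
])
model.compile(optimizer='adam', loss='mse')
```

The predicted load would then feed a power model (e.g. one dependent on CPU utilization) from which emissions are estimated.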
RevDate: 2024-08-02
Leonhard Med, a trusted research environment for processing sensitive research data.
Journal of integrative bioinformatics [Epub ahead of print].
This paper provides an overview of the development and operation of the Leonhard Med Trusted Research Environment (TRE) at ETH Zurich, which gives scientific researchers the ability to work securely on sensitive research data. We cover the user perspective, the legal framework for processing sensitive data, design history, current status, and operations. Leonhard Med is an efficient, highly secure Trusted Research Environment for data processing, hosted at ETH Zurich and operated by the Scientific IT Services (SIS) of ETH. It provides a full stack of security controls that allow researchers to store, access, manage, and process sensitive data according to Swiss legislation and ETH Zurich Data Protection policies. In addition, Leonhard Med fulfills the BioMedIT Information Security Policies and is compatible with international data protection laws, and it can therefore be utilized within the scope of national and international collaborative research projects. Initially designed as a "bare-metal" High-Performance Computing (HPC) platform to achieve maximum performance, Leonhard Med was later re-designed as a virtualized, private cloud platform to offer more flexibility to its customers. Sensitive data can be analyzed in secure, segregated spaces called tenants. Technical and Organizational Measures (TOMs) are in place to ensure the confidentiality, integrity, and availability of sensitive data. At the same time, Leonhard Med ensures broad access to cutting-edge research software, especially for the analysis of human omics data and other personalized health applications.
Additional Links: PMID-39092509
Citation:
@article {pmid39092509,
year = {2024},
author = {Okoniewski, MJ and Wiegand, A and Schmid, DC and Bolliger, C and Bovino, C and Belluco, M and Wüst, T and Byrde, O and Maffioletti, S and Rinn, B},
title = {Leonhard Med, a trusted research environment for processing sensitive research data.},
journal = {Journal of integrative bioinformatics},
volume = {},
number = {},
pages = {},
pmid = {39092509},
issn = {1613-4516},
abstract = {This paper provides an overview of the development and operation of the Leonhard Med Trusted Research Environment (TRE) at ETH Zurich. Leonhard Med gives scientific researchers the ability to securely work on sensitive research data. We give an overview of the user perspective, the legal framework for processing sensitive data, design history, current status, and operations. Leonhard Med is an efficient, highly secure Trusted Research Environment for data processing, hosted at ETH Zurich and operated by the Scientific IT Services (SIS) of ETH. It provides a full stack of security controls that allow researchers to store, access, manage, and process sensitive data according to Swiss legislation and ETH Zurich Data Protection policies. In addition, Leonhard Med fulfills the BioMedIT Information Security Policies and is compatible with international data protection laws and therefore can be utilized within the scope of national and international collaboration research projects. Initially designed as a "bare-metal" High-Performance Computing (HPC) platform to achieve maximum performance, Leonhard Med was later re-designed as a virtualized, private cloud platform to offer more flexibility to its customers. Sensitive data can be analyzed in secure, segregated spaces called tenants. Technical and Organizational Measures (TOMs) are in place to assure the confidentiality, integrity, and availability of sensitive data. At the same time, Leonhard Med ensures broad access to cutting-edge research software, especially for the analysis of human -omics data and other personalized health applications.},
}
RevDate: 2024-08-01
CmpDate: 2024-08-01
Optimized intrusion detection in IoT and fog computing using ensemble learning and advanced feature selection.
PloS one, 19(8):e0304082.
The proliferation of Internet of Things (IoT) devices and fog computing architectures has introduced major security and cyber threats. Intrusion detection systems have become effective in monitoring network traffic and activities to identify anomalies that are indicative of attacks. However, constraints such as limited computing resources at fog nodes render conventional intrusion detection techniques impractical. This paper proposes a novel framework that integrates stacked autoencoders, CatBoost, and an optimised transformer-CNN-LSTM ensemble tailored for intrusion detection in fog and IoT networks. The autoencoders extract robust features from high-dimensional traffic data while reducing dimensionality for efficiency at fog nodes. CatBoost refines features through predictive selection. The ensemble model combines self-attention, convolutions, and recurrence for comprehensive traffic analysis in the cloud. Evaluations on the NSL-KDD, UNSW-NB15, and AWID benchmarks demonstrate an accuracy of over 99% in detecting threats across traditional, hybrid enterprise, and wireless environments. Integrated edge preprocessing and cloud-based ensemble learning pipelines enable efficient and accurate anomaly detection. The results highlight the viability of securing real-world fog and IoT infrastructure against continuously evolving cyber-attacks.
Additional Links: PMID-39088558
Citation:
@article {pmid39088558,
year = {2024},
author = {Tawfik, M},
title = {Optimized intrusion detection in IoT and fog computing using ensemble learning and advanced feature selection.},
journal = {PloS one},
volume = {19},
number = {8},
pages = {e0304082},
pmid = {39088558},
issn = {1932-6203},
mesh = {*Cloud Computing ; *Internet of Things ; Computer Security ; Neural Networks, Computer ; Algorithms ; Machine Learning ; },
abstract = {The proliferation of Internet of Things (IoT) devices and fog computing architectures has introduced major security and cyber threats. Intrusion detection systems have become effective in monitoring network traffic and activities to identify anomalies that are indicative of attacks. However, constraints such as limited computing resources at fog nodes render conventional intrusion detection techniques impractical. This paper proposes a novel framework that integrates stacked autoencoders, CatBoost, and an optimised transformer-CNN-LSTM ensemble tailored for intrusion detection in fog and IoT networks. Autoencoders extract robust features from high-dimensional traffic data while reducing the dimensionality of the efficiency at fog nodes. CatBoost refines features through predictive selection. The ensemble model combines self-attention, convolutions, and recurrence for comprehensive traffic analysis in the cloud. Evaluations of the NSL-KDD, UNSW-NB15, and AWID benchmarks demonstrate an accuracy of over 99% in detecting threats across traditional, hybrid enterprises and wireless environments. Integrated edge preprocessing and cloud-based ensemble learning pipelines enable efficient and accurate anomaly detection. The results highlight the viability of securing real-world fog and the IoT infrastructure against continuously evolving cyber-attacks.},
}
MeSH Terms:
*Cloud Computing
*Internet of Things
Computer Security
Neural Networks, Computer
Algorithms
Machine Learning
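The CatBoost refinement stage amounts to ranking features by learned importance and keeping the strongest ones before the ensemble sees them. A sketch on synthetic data (the real pipeline feeds in autoencoder outputs, and the cutoff of 10 features is arbitrary):

```python
# Sketch: CatBoost-based feature selection ahead of the ensemble stage.
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))           # stand-in for autoencoder features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # synthetic attack/benign labels

model = CatBoostClassifier(iterations=200, depth=6, verbose=False)
model.fit(X, y)

importances = model.get_feature_importance()
top_k = np.argsort(importances)[::-1][:10]  # indices of the 10 strongest features
X_selected = X[:, top_k]                    # reduced input for the ensemble model
```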
RevDate: 2024-07-31
Electron-driven molecular processes for cyanopolyacetylenes HC2n+1N (n = 3, 4, and 5).
Physical chemistry chemical physics : PCCP [Epub ahead of print].
Linear carbon series cyanopolyacetylenes (HC2n+1N) (n = 3, 4, and 5) are astromolecules found in the atmosphere of Titan and interstellar media such as TMC-1 (Taurus molecular cloud-1). All these compounds are also detected in IRC + 10 216. In the present work, we comprehensively investigate electron interaction with important cyanopolyacetylene compounds, viz. HC7N (cyano-tri-acetylene), HC9N (cyano-tetra-acetylene), and HC11N (cyano-penta-acetylene). The study covers incident electron energies ranging from the ionization threshold to 5 keV. Various electron-driven molecular processes are quantified in terms of total cross-sections. The quantum spherical complex optical potential (SCOP) is used to determine elastic (Qel) and inelastic (Qinel) cross-sections. Ionization is the most important inelastic effect that opens various chemical pathways for the generation of different molecular species; we computed the ionization cross-section (Qion) and discrete electronic excitation cross-section (ΣQexc) using the complex scattering potential-ionization contribution (CSP-ic) method. The cyanopolyacetylene compounds are difficult to handle experimentally owing to the health risks involved. Therefore, there are no prior experimental data available for these molecules; only Qion have been reported theoretically. Thus, the present work is the maiden report on computing Qel, Qinel, ΣQexc, and QT. In order to provide an alternative approach and further validation of the present work, we employed our recently developed two-parameter semi-empirical method (2p-SEM) to compute Qel and QT. Additionally, we predict the polarizability of the HC11N molecule, which has not been reported in the existing literature. This prediction is based on a correlation study of polarizabilities of molecules with Qion values from the same series of molecules.
Additional Links: PMID-39081193
Citation:
@article {pmid39081193,
year = {2024},
author = {Mer, P and Limbachiya, C},
title = {Electron-driven molecular processes for cyanopolyacetylenes HC2n+1N (n = 3, 4, and 5).},
journal = {Physical chemistry chemical physics : PCCP},
volume = {},
number = {},
pages = {},
doi = {10.1039/d4cp02665a},
pmid = {39081193},
issn = {1463-9084},
abstract = {Linear carbon series cyanopolyacetylenes (HC2n+1N) (n = 3, 4, and 5) are astromolecules found in the atmosphere of Titan and interstellar media such as TMC-1 (Taurus molecular cloud-1). All these compounds are also detected in IRC + 10 216. In the present work, we comprehensively investigate electron interaction with important cyanopolyacetylene compounds, viz. HC7N (cyano-tri-acetylene), HC9N (cyano-tetra-acetylene), and HC11N (cyano-penta-acetylene). The study covers incident electron energies ranging from the ionization threshold to 5 keV. Various electron-driven molecular processes are quantified in terms of total cross-sections. The quantum spherical complex optical potential (SCOP) is used to determine elastic (Qel) and inelastic (Qinel) cross-sections. Ionization is the most important inelastic effect that opens various chemical pathways for the generation of different molecular species; we computed the ionization cross-section (Qion) and discrete electronic excitation cross-section (ΣQexc) using the complex scattering potential-ionization contribution (CSP-ic) method. The cyanopolyacetylene compounds are difficult to handle experimentally owing to the health risks involved. Therefore, there are no prior experimental data available for these molecules; only Qion have been reported theoretically. Thus, the present work is the maiden report on computing Qel, Qinel, ΣQexc, and QT. In order to provide an alternative approach and further validation of the present work, we employed our recently developed two-parameter semi-empirical method (2p-SEM) to compute Qel and QT. Additionally, we predict the polarizability of the HC11N molecule, which has not been reported in the existing literature. This prediction is based on a correlation study of polarizabilities of molecules with Qion values from the same series of molecules.},
}
RevDate: 2024-07-30
AI Accelerator with Ultralightweight Time-Period CNN-Based Model for Arrhythmia Classification.
IEEE transactions on biomedical circuits and systems, PP: [Epub ahead of print].
This work proposes a classification system for arrhythmias, aiming to enhance the efficiency of the diagnostic process for cardiologists. The proposed algorithm includes a naive preprocessing procedure for electrocardiography (ECG) data applicable to various ECG databases. Additionally, this work proposes an ultralightweight model for arrhythmia classification based on a convolutional neural network and incorporating R-peak interval features to represent long-term rhythm information, thereby improving the model's classification performance. The proposed model is trained and tested by using the MIT-BIH and NCKU-CBIC databases in accordance with the classification standards of the Association for the Advancement of Medical Instrumentation (AAMI), achieving high accuracies of 98.32% and 97.1%. This work applies the arrhythmia classification algorithm to a web-based system, thus providing a graphical interface. The cloud-based execution of automated artificial intelligence (AI) classification allows cardiologists and patients to view ECG wave conditions instantly, thereby remarkably enhancing the quality of medical examination. This work also designs a customized integrated circuit for the hardware implementation of an AI accelerator. The accelerator utilizes a parallelized processing element array architecture to perform convolution and fully connected layer operations. It introduces proposed hybrid stationary techniques, combining input and weight stationary modes to increase data reuse drastically and reduce hardware execution cycles and power consumption, ultimately achieving high-performance computing. This accelerator is implemented in the form of a chip by using the TSMC 180 nm CMOS process. It exhibits a power consumption of 122 μW, a classification latency of 6.8 ms, and an energy efficiency of 0.83 μJ/classification.
Additional Links: PMID-39078761
Citation:
@article {pmid39078761,
year = {2024},
author = {Lee, SY and Ku, MY and Tseng, WC and Chen, JY},
title = {AI Accelerator with Ultralightweight Time-Period CNN-Based Model for Arrhythmia Classification.},
journal = {IEEE transactions on biomedical circuits and systems},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TBCAS.2024.3435718},
pmid = {39078761},
issn = {1940-9990},
abstract = {This work proposes a classification system for arrhythmias, aiming to enhance the efficiency of the diagnostic process for cardiologists. The proposed algorithm includes a naive preprocessing procedure for electrocardiography (ECG) data applicable to various ECG databases. Additionally, this work proposes an ultralightweight model for arrhythmia classification based on a convolutional neural network and incorporating R-peak interval features to represent long-term rhythm information, thereby improving the model's classification performance. The proposed model is trained and tested by using the MIT-BIH and NCKU-CBIC databases in accordance with the classification standards of the Association for the Advancement of Medical Instrumentation (AAMI), achieving high accuracies of 98.32% and 97.1%. This work applies the arrhythmia classification algorithm to a web-based system, thus providing a graphical interface. The cloud-based execution of automated artificial intelligence (AI) classification allows cardiologists and patients to view ECG wave conditions instantly, thereby remarkably enhancing the quality of medical examination. This work also designs a customized integrated circuit for the hardware implementation of an AI accelerator. The accelerator utilizes a parallelized processing element array architecture to perform convolution and fully connected layer operations. It introduces proposed hybrid stationary techniques, combining input and weight stationary modes to increase data reuse drastically and reduce hardware execution cycles and power consumption, ultimately achieving high-performance computing. This accelerator is implemented in the form of a chip by using the TSMC 180 nm CMOS process. It exhibits a power consumption of 122 μW, a classification latency of 6.8 ms, and an energy efficiency of 0.83 μJ/classification.},
}
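The published architecture is not reproduced in the abstract; as a generic illustration of fusing beat morphology with R-peak interval features, a small two-input Keras model might look like this (all shapes and sizes are invented):

```python
# Sketch: 1D-CNN beat branch plus R-R interval features, merged before the classifier.
import tensorflow as tf
from tensorflow.keras import layers, Model

beat = layers.Input(shape=(180, 1), name='beat')    # one windowed ECG beat
rr = layers.Input(shape=(2,), name='rr_intervals')  # pre/post R-R interval features

x = layers.Conv1D(8, 5, activation='relu')(beat)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(16, 3, activation='relu')(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Concatenate()([x, rr])                   # inject long-term rhythm context
out = layers.Dense(5, activation='softmax')(x)      # AAMI heartbeat classes

model = Model([beat, rr], out)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```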
RevDate: 2024-07-30
IoT-based emergency cardiac death risk rescue alert system.
MethodsX, 13:102834.
The use of technology in healthcare is one of the most critical application areas today. With the development of medical applications, people's quality of life has improved. However, it is impractical and unnecessary for medium-risk people to receive specialized daily hospital monitoring; yet without monitoring, their health status exposes them to a high risk of severe health damage or even life-threatening conditions. Therefore, remote, real-time, low-cost, wearable, and effective monitoring is ideal for this problem. Many researchers have noted that their studies could use electrocardiogram (ECG) detection to discover emergencies. However, how to respond to discovered emergencies in household life is still a research gap in this field.
• This paper proposes real-time monitoring of ECG signals, which are sent to the cloud for Sudden Cardiac Death (SCD) prediction.
• Unlike previous studies, the proposed system includes an additional emergency response mechanism that alerts nearby community healthcare workers when SCD is predicted to occur.
Additional Links: PMID-39071997
Citation:
@article {pmid39071997,
year = {2024},
author = {Rehman, SU and Sadek, I and Huang, B and Manickam, S and Mahmoud, LN},
title = {IoT-based emergency cardiac death risk rescue alert system.},
journal = {MethodsX},
volume = {13},
number = {},
pages = {102834},
pmid = {39071997},
issn = {2215-0161},
abstract = {The use of technology in healthcare is one of the most critical application areas today. With the development of medical applications, people's quality of life has improved. However, it is impractical and unnecessary for medium-risk people to receive specialized daily hospital monitoring. Due to their health status, they will be exposed to a high risk of severe health damage or even life-threatening conditions without monitoring. Therefore, remote, real-time, low-cost, wearable, and effective monitoring is ideal for this problem. Many researchers mentioned that their studies could use electrocardiogram (ECG) detection to discover emergencies. However, how to respond to discovered emergencies in household life is still a research gap in this field.•This paper proposes a real-time monitoring of ECG signals and sending them to the cloud for Sudden Cardiac Death (SCD) prediction.•Unlike previous studies, the proposed system has an additional emergency response mechanism to alert nearby community healthcare workers when SCD is predicted to occur.},
}
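The abstract does not name the transport between the wearable and the cloud; one plausible sketch publishes ECG windows to a cloud broker over MQTT with paho-mqtt, where the broker address, topic, and payload layout are all hypothetical:

```python
# Sketch: edge device streams an ECG window to the cloud for SCD prediction.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.connect("broker.example.org", 1883)              # hypothetical cloud broker

ecg_window = [0.12, 0.15, 0.91, 0.22]                   # stand-in ECG samples
payload = json.dumps({"patient_id": "demo-001", "ecg": ecg_window})
client.publish("ecg/demo-001", payload)  # cloud side runs the SCD model and,
client.disconnect()                      # on a positive prediction, raises the alert
```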
RevDate: 2024-07-26
Wigner kernels: Body-ordered equivariant machine learning without a basis.
The Journal of chemical physics, 161(4):.
Machine-learning models based on a point-cloud representation of a physical object are ubiquitous in scientific applications and particularly well-suited to the atomic-scale description of molecules and materials. Among the many different approaches that have been pursued, the description of local atomic environments in terms of their discretized neighbor densities has been used widely and very successfully. We propose a novel density-based method, which involves computing "Wigner kernels." These are fully equivariant and body-ordered kernels that can be computed iteratively at a cost that is independent of the basis used to discretize the density and grows only linearly with the maximum body-order considered. Wigner kernels represent the infinite-width limit of feature-space models, whose dimensionality and computational cost instead scale exponentially with the increasing order of correlations. We present several examples of the accuracy of models based on Wigner kernels in chemical applications, for both scalar and tensorial targets, reaching an accuracy that is competitive with state-of-the-art deep-learning architectures. We discuss the broader relevance of these findings to equivariant geometric machine-learning.
Additional Links: PMID-39056390
Citation:
@article {pmid39056390,
year = {2024},
author = {Bigi, F and Pozdnyakov, SN and Ceriotti, M},
title = {Wigner kernels: Body-ordered equivariant machine learning without a basis.},
journal = {The Journal of chemical physics},
volume = {161},
number = {4},
pages = {},
doi = {10.1063/5.0208746},
pmid = {39056390},
issn = {1089-7690},
abstract = {Machine-learning models based on a point-cloud representation of a physical object are ubiquitous in scientific applications and particularly well-suited to the atomic-scale description of molecules and materials. Among the many different approaches that have been pursued, the description of local atomic environments in terms of their discretized neighbor densities has been used widely and very successfully. We propose a novel density-based method, which involves computing "Wigner kernels." These are fully equivariant and body-ordered kernels that can be computed iteratively at a cost that is independent of the basis used to discretize the density and grows only linearly with the maximum body-order considered. Wigner kernels represent the infinite-width limit of feature-space models, whose dimensionality and computational cost instead scale exponentially with the increasing order of correlations. We present several examples of the accuracy of models based on Wigner kernels in chemical applications, for both scalar and tensorial targets, reaching an accuracy that is competitive with state-of-the-art deep-learning architectures. We discuss the broader relevance of these findings to equivariant geometric machine-learning.},
}
RevDate: 2024-07-26
A fourfold-objective-based cloud privacy preservation model with proposed association rule hiding and deep learning assisted optimal key generation.
Network (Bristol, England) [Epub ahead of print].
Numerous studies have been conducted in an attempt to preserve cloud privacy, yet the majority of cutting-edge solutions fall short when it comes to handling sensitive data. This research proposes a privacy preservation model for the cloud environment. The recommended methodology has four stages: identification of sensitive data, generation of an optimally tuned key, data sanitization, and data restoration. Initially, the owner's data enters the sensitive data identification process, where sensitive information in the input is identified via an Augmented Dynamic Itemset Counting (ADIC)-based associative rule mining model. Subsequently, the identified sensitive data are sanitized via the newly created tuned key. The tuned key is generated by a deep learning approach driven by a new fourfold-objective hybrid optimization algorithm: specifically, an LSTM generates the optimally tuned key on the basis of the fourfold objectives and the new hybrid MUAOA algorithm. The created keys, as well as the generated sensitive rules, are fed into the deep learning model. The MUAOA technique is a conceptual blend of the standard AOA and CMBO algorithms. As a result, unauthorized people are unable to access the information. Finally, in a comparative evaluation, the proposed LSTM+MUAOA achieves a higher privacy value of about 5.21 compared with other existing models.
Additional Links: PMID-39054942
Citation:
@article {pmid39054942,
year = {2024},
author = {Sharma, S and Tyagi, S},
title = {A fourfold-objective-based cloud privacy preservation model with proposed association rule hiding and deep learning assisted optimal key generation.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-36},
doi = {10.1080/0954898X.2024.2378836},
pmid = {39054942},
issn = {1361-6536},
abstract = {Numerous studies have been conducted in an attempt to preserve cloud privacy, yet the majority of cutting-edge solutions fall short when it comes to handling sensitive data. This research proposes a "privacy preservation model in the cloud environment". The four stages of recommended security preservation methodology are "identification of sensitive data, generation of an optimal tuned key, suggested data sanitization, and data restoration". Initially, owner's data enters the Sensitive data identification process. The sensitive information in the input (owner's data) is identified via Augmented Dynamic Itemset Counting (ADIC) based Associative Rule Mining Model. Subsequently, the identified sensitive data are sanitized via the newly created tuned key. The generated tuned key is formulated with new fourfold objective-hybrid optimization approach-based deep learning approach. The optimally tuned key is generated with LSTM on the basis of fourfold objectives and the new hybrid MUAOA. The created keys, as well as generated sensitive rules, are fed into the deep learning model. The MUAOA technique is a conceptual blend of standard AOA and CMBO, respectively. As a result, unauthorized people will be unable to access information. Finally, comparative evaluation is undergone and proposed LSTM+MUAOA has achieved higher values on privacy about 5.21 compared to other existing models.},
}
RevDate: 2024-07-25
CmpDate: 2024-07-25
Use Mobile Apps to Link to Google Forms to Conduct Online Surveys.
Studies in health technology and informatics, 315:567-568.
The study aimed to evaluate changes in anxiety levels in patients with coronary artery disease before and after cardiac catheterization. The LINE and Google mobile applications were used to collect data online. A total of 188 patients participated in the study, conducted at a regional teaching hospital in eastern Taiwan, and 51 of them completed the questionnaire twice, for a response rate of 27.1%. Although the second round of data collection revealed the problems of incomplete data and a low response rate, this study shows that online research methodology can still be improved: using electronic questionnaires for data collection and statistical analysis reduces the risk of errors in online research and saves documentation time. It is recommended to provide clear and detailed instructions when conducting online surveys and to review responses carefully upon completion to ensure the completeness of the data collected.
Additional Links: PMID-39049325
Citation:
@article {pmid39049325,
year = {2024},
author = {Chen, SY and Tu, MH},
title = {Use Mobile Apps to Link to Google Forms to Conduct Online Surveys.},
journal = {Studies in health technology and informatics},
volume = {315},
number = {},
pages = {567-568},
doi = {10.3233/SHTI240219},
pmid = {39049325},
issn = {1879-8365},
mesh = {Taiwan ; Humans ; *Mobile Applications ; Surveys and Questionnaires ; Coronary Artery Disease ; Anxiety ; Male ; Female ; Middle Aged ; Internet ; },
abstract = {The study aimed to evaluate changes in anxiety levels in patients with coronary artery disease before and after cardiac catheterization. The mobile applications LINE and GOOGLE were used to collect online data. A total of 188 patients participated in the study conducted at a regional teaching hospital in eastern Taiwan, and 51 of them completed the questionnaire twice, with a response rate of 27.1%. Although the second study noted the problem of incomplete data and low response rates, this study shows that online research methodology can still be improved and that using electronic questionnaires for data collection and statistical analysis reduces the risk of errors in online research and saves time in documentation. It is recommended to provide clear and detailed instructions when conducting online surveys and to review them carefully upon completion to ensure the completeness of the data collected.},
}
MeSH Terms:
Taiwan
Humans
*Mobile Applications
Surveys and Questionnaires
Coronary Artery Disease
Anxiety
Male
Female
Middle Aged
Internet
RevDate: 2024-07-23
CmpDate: 2024-07-23
CCPA: cloud-based, self-learning modules for consensus pathway analysis using GO, KEGG and Reactome.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The module delivers learning materials on Cloud-based Consensus Pathway Analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Pathway analysis is important because it allows us to gain insights into biological mechanisms underlying conditions. However, the availability of many pathway analysis methods, the requirement of coding skills, and the focus of current tools on only a few species all make it very difficult for biomedical researchers to self-learn and perform pathway analysis efficiently. Furthermore, there is a lack of tools that allow researchers to compare analysis results obtained from different experiments and different analysis methods to find consensus results. To address these challenges, we have designed a cloud-based, self-learning module that provides consensus results among established, state-of-the-art pathway analysis techniques to provide students and researchers with necessary training and example materials. The training module consists of five Jupyter Notebooks that provide complete tutorials for the following tasks: (i) process expression data, (ii) perform differential analysis, visualize and compare the results obtained from four differential analysis methods (limma, t-test, edgeR, DESeq2), (iii) process three pathway databases (GO, KEGG and Reactome), (iv) perform pathway analysis using eight methods (ORA, CAMERA, KS test, Wilcoxon test, FGSEA, GSA, SAFE and PADOG) and (v) combine results of multiple analyses. We also provide examples, source code, explanations and instructional videos for trainees to complete each Jupyter Notebook. The module supports analysis of many model (e.g., human, mouse, fruit fly, zebrafish) and non-model species. The module is publicly available at https://github.com/NIGMS/Consensus-Pathway-Analysis-in-the-Cloud. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
Additional Links: PMID-39041916
Citation:
@article {pmid39041916,
year = {2024},
author = {Nguyen, H and Pham, VD and Nguyen, H and Tran, B and Petereit, J and Nguyen, T},
title = {CCPA: cloud-based, self-learning modules for consensus pathway analysis using GO, KEGG and Reactome.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae222},
pmid = {39041916},
issn = {1477-4054},
support = {2343019 and 2203236//National Science Foundation/ ; 80NSSC22M0255/NASA/NASA/United States ; GM103440 and 1R44GM152152-01/GM/NIGMS NIH HHS/United States ; 1U01CA274573-01A1/CA/NCI NIH HHS/United States ; },
mesh = {*Cloud Computing ; *Software ; Humans ; Computational Biology/methods/education ; Animals ; Gene Ontology ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The module delivers learning materials on Cloud-based Consensus Pathway Analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Pathway analysis is important because it allows us to gain insights into biological mechanisms underlying conditions. But the availability of many pathway analysis methods, the requirement of coding skills, and the focus of current tools on only a few species all make it very difficult for biomedical researchers to self-learn and perform pathway analysis efficiently. Furthermore, there is a lack of tools that allow researchers to compare analysis results obtained from different experiments and different analysis methods to find consensus results. To address these challenges, we have designed a cloud-based, self-learning module that provides consensus results among established, state-of-the-art pathway analysis techniques to provide students and researchers with necessary training and example materials. The training module consists of five Jupyter Notebooks that provide complete tutorials for the following tasks: (i) process expression data, (ii) perform differential analysis, visualize and compare the results obtained from four differential analysis methods (limma, t-test, edgeR, DESeq2), (iii) process three pathway databases (GO, KEGG and Reactome), (iv) perform pathway analysis using eight methods (ORA, CAMERA, KS test, Wilcoxon test, FGSEA, GSA, SAFE and PADOG) and (v) combine results of multiple analyses. We also provide examples, source code, explanations and instructional videos for trainees to complete each Jupyter Notebook. The module supports the analysis for many model (e.g. human, mouse, fruit fly, zebra fish) and non-model species. The module is publicly available at https://github.com/NIGMS/Consensus-Pathway-Analysis-in-the-Cloud. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
MeSH Terms:
*Cloud Computing
*Software
Humans
Computational Biology/methods/education
Animals
Gene Ontology
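The module's notebooks are R-based and the abstract does not spell out the consensus procedure; as a Python illustration of the underlying idea, per-method p-values for one pathway can be combined, for instance with Fisher's method:

```python
# Sketch: consensus over several pathway-analysis methods via combined p-values.
from scipy.stats import combine_pvalues

# Hypothetical p-values for one pathway from four of the module's eight methods.
pvals = {"ORA": 0.03, "FGSEA": 0.01, "CAMERA": 0.20, "PADOG": 0.04}

stat, p_consensus = combine_pvalues(list(pvals.values()), method='fisher')
print(f"consensus p = {p_consensus:.4g}")
```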
RevDate: 2024-07-23
CmpDate: 2024-07-23
Identifying and training deep learning neural networks on biomedical-related datasets.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical-related datasets are widely used in both research and clinical settings, but the ability of professionally trained clinicians and researchers to interpret datasets becomes difficult as the size and breadth of these datasets increase. Artificial intelligence, and specifically deep learning neural networks, have recently become important tools in novel biomedical research. However, their use is limited due to their computational requirements and confusion regarding different neural network architectures. The goal of this learning module is to introduce types of deep learning neural networks and cover practices that are commonly used in biomedical research. This module is subdivided into four submodules that cover classification, augmentation, segmentation and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network with their data while highlighting the ease of use of cloud computing for implementing neural networks. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
Additional Links: PMID-39041915
Citation:
@article {pmid39041915,
year = {2024},
author = {Woessner, AE and Anjum, U and Salman, H and Lear, J and Turner, JT and Campbell, R and Beaudry, L and Zhan, J and Cornett, LE and Gauch, S and Quinn, KP},
title = {Identifying and training deep learning neural networks on biomedical-related datasets.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae232},
pmid = {39041915},
issn = {1477-4054},
support = {R01EB031032/NH/NIH HHS/United States ; NIH P20GM139768//Arkansas Integrative Metabolic Research Center/ ; 3P20GM103429-21S2//National Institutes of General Medical Sciences (NIGMS)/ ; },
mesh = {*Deep Learning ; *Neural Networks, Computer ; Humans ; Biomedical Research ; Algorithms ; Cloud Computing ; },
abstract = {This manuscript describes the development of a resources module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical-related datasets are widely used in both research and clinical settings, but the ability for professionally trained clinicians and researchers to interpret datasets becomes difficult as the size and breadth of these datasets increases. Artificial intelligence, and specifically deep learning neural networks, have recently become an important tool in novel biomedical research. However, use is limited due to their computational requirements and confusion regarding different neural network architectures. The goal of this learning module is to introduce types of deep learning neural networks and cover practices that are commonly used in biomedical research. This module is subdivided into four submodules that cover classification, augmentation, segmentation and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network with their data while highlighting the ease-of-use of cloud computing for implementing neural networks. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
MeSH Terms:
*Deep Learning
*Neural Networks, Computer
Humans
Biomedical Research
Algorithms
Cloud Computing
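As one Python analogue of what the augmentation submodule covers, Keras preprocessing layers apply random flips, rotations, and zooms to image batches on the fly; the parameter values below are illustrative defaults, not the module's:

```python
# Sketch: on-the-fly augmentation for biomedical image batches.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),  # up to +/-10% of a full turn
    layers.RandomZoom(0.1),
])

images = tf.random.uniform((8, 128, 128, 3))  # synthetic image batch
augmented = augment(images, training=True)    # training=True enables the randomness
```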
RevDate: 2024-07-23
CmpDate: 2024-07-23
Understanding proteome quantification in an interactive learning module on Google Cloud Platform.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on protein quantification in an interactive format that uses appropriate cloud resources for data access and analyses. Quantitative proteomics is a rapidly growing discipline due to the cutting-edge technologies of high-resolution mass spectrometry. There are many data types to consider for proteome quantification, including data-dependent acquisition, data-independent acquisition, multiplexing with Tandem Mass Tag reporter ions, spectral counts, and more. As part of the NIH NIGMS Sandbox effort, we developed a learning module to introduce students to mass spectrometry terminology, normalization methods, statistical designs, and the basics of R programming. By utilizing the Google Cloud environment, the learning module is easily accessible without the need for complex installation procedures. The proteome quantification module demonstrates the analysis of a provided TMT10plex data set using MS3 reporter ion intensity quantitative values in a Jupyter notebook with an R kernel. The learning module begins with the raw intensities, performs normalization and differential abundance analysis using limma models, and is designed for researchers with a basic understanding of mass spectrometry and the R programming language. Learners walk away with a better understanding of how to navigate Google Cloud Platform for proteomic research and with the basics of mass spectrometry data analysis at the command line. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
Additional Links: PMID-39041914
Citation:
@article {pmid39041914,
year = {2024},
author = {O'Connell, KA and Kopchick, B and Carlson, T and Belardo, D and Byrum, SD},
title = {Understanding proteome quantification in an interactive learning module on Google Cloud Platform.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae235},
pmid = {39041914},
issn = {1477-4054},
support = {//UAMS Winthrop P. Rockefeller Cancer Institute/ ; OIA-1946391//National Science Foundation Award/ ; R24GM137786//National Institutes of Health National Institute of General Medical Sciences (NIH/NIGMS)/ ; },
mesh = {*Cloud Computing ; *Proteome/metabolism ; *Proteomics/methods ; *Software ; Mass Spectrometry ; Humans ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on protein quantification in an interactive format that uses appropriate cloud resources for data access and analyses. Quantitative proteomics is a rapidly growing discipline due to the cutting-edge technologies of high resolution mass spectrometry. There are many data types to consider for proteome quantification including data dependent acquisition, data independent acquisition, multiplexing with Tandem Mass Tag reporter ions, spectral counts, and more. As part of the NIH NIGMS Sandbox effort, we developed a learning module to introduce students to mass spectrometry terminology, normalization methods, statistical designs, and basics of R programming. By utilizing the Google Cloud environment, the learning module is easily accessible without the need for complex installation procedures. The proteome quantification module demonstrates the analysis using a provided TMT10plex data set using MS3 reporter ion intensity quantitative values in a Jupyter notebook with an R kernel. The learning module begins with the raw intensities, performs normalization, and differential abundance analysis using limma models, and is designed for researchers with a basic understanding of mass spectrometry and R programming language. Learners walk away with a better understanding of how to navigate Google Cloud Platform for proteomic research, and with the basics of mass spectrometry data analysis at the command line. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
MeSH Terms:
*Cloud Computing
*Proteome/metabolism
*Proteomics/methods
*Software
Mass Spectrometry
Humans
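The module itself performs normalization and limma-based differential abundance in R; a conceptual Python analogue of the normalization step equalizes the per-channel medians of the TMT reporter intensities before log transformation (the column names and data here are invented):

```python
# Sketch: median-scaling of TMT10plex reporter channels, then log2 transform.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
intensities = pd.DataFrame(           # rows = proteins, columns = reporter channels
    rng.lognormal(mean=10, sigma=1, size=(100, 10)),
    columns=[f"TMT_{i}" for i in range(1, 11)],
)

# Scale each channel so that all channel medians match the global median.
factors = intensities.median() / intensities.median().median()
normalized = intensities / factors
log2_norm = np.log2(normalized)       # input for downstream differential testing
```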
RevDate: 2024-07-23
CmpDate: 2024-07-23
Whole-genome bisulfite sequencing data analysis learning module on Google Cloud Platform.
Briefings in bioinformatics, 25(Supplement_1):.
This study describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module is designed to facilitate interactive learning of whole-genome bisulfite sequencing (WGBS) data analysis utilizing cloud-based tools in Google Cloud Platform, such as Cloud Storage, Vertex AI notebooks and Google Batch. WGBS is a powerful technique that can provide comprehensive insights into DNA methylation patterns at single-cytosine resolution, essential for understanding epigenetic regulation across the genome. The learning module first provides step-by-step tutorials that guide learners through the two main stages of WGBS data analysis: preprocessing and the identification of differentially methylated regions. It then provides a streamlined workflow and demonstrates how to use it effectively for large datasets, given the power of cloud infrastructure. The integration of these interconnected submodules progressively deepens the user's understanding of the WGBS analysis process along with the use of cloud resources. Through this module, we aim to enhance the accessibility and adoption of cloud computing in epigenomic research, accelerating advancements in the field and beyond. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
Additional Links: PMID-39041913
Citation:
@article {pmid39041913,
year = {2024},
author = {Qin, Y and Maggio, A and Hawkins, D and Beaudry, L and Kim, A and Pan, D and Gong, T and Fu, Y and Yang, H and Deng, Y},
title = {Whole-genome bisulfite sequencing data analysis learning module on Google Cloud Platform.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae236},
pmid = {39041913},
issn = {1477-4054},
support = {P20GM103466/NH/NIH HHS/United States ; },
mesh = {*Cloud Computing ; *DNA Methylation ; *Whole Genome Sequencing/methods ; *Software ; Sulfites/chemistry ; Humans ; Epigenesis, Genetic ; Computational Biology/methods ; },
abstract = {This study describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module is designed to facilitate interactive learning of whole-genome bisulfite sequencing (WGBS) data analysis utilizing cloud-based tools in Google Cloud Platform, such as Cloud Storage, Vertex AI notebooks and Google Batch. WGBS is a powerful technique that can provide comprehensive insights into DNA methylation patterns at single-cytosine resolution, essential for understanding epigenetic regulation across the genome. The learning module first provides step-by-step tutorials that guide learners through the two main stages of WGBS data analysis: preprocessing and the identification of differentially methylated regions. It then provides a streamlined workflow and demonstrates how to use it effectively for large datasets, given the power of cloud infrastructure. The integration of these interconnected submodules progressively deepens the user's understanding of the WGBS analysis process along with the use of cloud resources. Through this module, we can enhance the accessibility and adoption of cloud computing in epigenomic research, accelerating advances in the field and beyond.},
}
MeSH Terms:
*Cloud Computing
*DNA Methylation
*Whole Genome Sequencing/methods
*Software
Sulfites/chemistry
Humans
Epigenesis, Genetic
Computational Biology/methods
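The core quantity in WGBS analysis is the per-CpG methylation level, the fraction of reads reporting a methylated cytosine. A toy Python illustration with invented counts, using a simple Fisher's exact test as a stand-in for the module's differentially-methylated-region pipeline:

    from scipy.stats import fisher_exact

    # (methylated, unmethylated) read counts at one CpG in two samples (invented).
    sample_a = (18, 2)
    sample_b = (7, 13)
    level_a = sample_a[0] / sum(sample_a)   # 0.90
    level_b = sample_b[0] / sum(sample_b)   # 0.35
    odds_ratio, p = fisher_exact([list(sample_a), list(sample_b)])
    print(f"methylation level A={level_a:.2f}, B={level_b:.2f}, p={p:.4f}")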
RevDate: 2024-07-23
CmpDate: 2024-07-23
A cloud-based learning module for biomarker discovery.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on basic principles in biomarker discovery in an interactive format that uses appropriate cloud resources for data access and analyses. In collaboration with Google Cloud, Deloitte Consulting and NIGMS, the Rhode Island INBRE Molecular Informatics Core developed a cloud-based training module for biomarker discovery. The module consists of nine submodules covering various topics on biomarker discovery and assessment and is deployed on the Google Cloud Platform and available for public use through the NIGMS Sandbox. The submodules are written as a series of Jupyter Notebooks utilizing R and Bioconductor for biomarker and omics data analysis. The submodules cover the following topics: 1) introduction to biomarkers; 2) introduction to R data structures; 3) introduction to linear models; 4) introduction to exploratory analysis; 5) rat renal ischemia-reperfusion injury (IRI) case study; 6) linear and logistic regression for comparison of quantitative biomarkers; 7) exploratory analysis of proteomics IRI data; 8) identification of IRI biomarkers from proteomic data; and 9) machine learning methods for biomarker discovery. Each notebook includes an in-line quiz for self-assessment on the submodule topic, and an overview video is available on YouTube (https://www.youtube.com/watch?v=2-Q9Ax8EW84).
Additional Links: PMID-39041912
Citation:
@article {pmid39041912,
year = {2024},
author = {Hemme, CL and Beaudry, L and Yosufzai, Z and Kim, A and Pan, D and Campbell, R and Price, M and Cho, BP},
title = {A cloud-based learning module for biomarker discovery.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae126},
pmid = {39041912},
issn = {1477-4054},
support = {P20GM103430/NH/NIH HHS/United States ; },
mesh = {*Cloud Computing ; *Biomarkers/metabolism ; Animals ; Software ; Humans ; Rats ; Machine Learning ; Computational Biology/methods ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on basic principles in biomarker discovery in an interactive format that uses appropriate cloud resources for data access and analyses. In collaboration with Google Cloud, Deloitte Consulting and NIGMS, the Rhode Island INBRE Molecular Informatics Core developed a cloud-based training module for biomarker discovery. The module consists of nine submodules covering various topics on biomarker discovery and assessment and is deployed on the Google Cloud Platform and available for public use through the NIGMS Sandbox. The submodules are written as a series of Jupyter Notebooks utilizing R and Bioconductor for biomarker and omics data analysis. The submodules cover the following topics: 1) introduction to biomarkers; 2) introduction to R data structures; 3) introduction to linear models; 4) introduction to exploratory analysis; 5) rat renal ischemia-reperfusion injury (IRI) case study; 6) linear and logistic regression for comparison of quantitative biomarkers; 7) exploratory analysis of proteomics IRI data; 8) identification of IRI biomarkers from proteomic data; and 9) machine learning methods for biomarker discovery. Each notebook includes an in-line quiz for self-assessment on the submodule topic, and an overview video is available on YouTube (https://www.youtube.com/watch?v=2-Q9Ax8EW84).},
}
MeSH Terms:
*Cloud Computing
*Biomarkers/metabolism
Animals
Software
Humans
Rats
Machine Learning
Computational Biology/methods
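Submodule 6's core idea, logistic regression relating a quantitative biomarker to outcome, fits in a few lines. The module itself is written in R/Bioconductor, so this scikit-learn version on synthetic data is an analog only:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    injury = rng.integers(0, 2, 200)                     # 0 = control, 1 = IRI
    biomarker = rng.normal(loc=1.5 * injury, scale=1.0)  # elevated in cases
    X = biomarker.reshape(-1, 1)
    model = LogisticRegression().fit(X, injury)
    auc = roc_auc_score(injury, model.predict_proba(X)[:, 1])
    print(f"in-sample AUC = {auc:.2f}")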
RevDate: 2024-07-23
CmpDate: 2024-07-23
Cloud-based introduction to BASH programming for biologists.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning', https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial authored by the National Institute of General Medical Sciences, NIGMS Sandbox: A Learning Platform toward Democratizing Cloud Computing for Biomedical Research, at the beginning of this supplement. This module delivers learning materials introducing the utility of the BASH (Bourne Again Shell) programming language for genomic data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. The next-generation sequencing revolution has generated massive amounts of novel biological data from a multitude of platforms that survey an ever-growing list of genomic modalities. These data require significant downstream computational and statistical analyses to glean meaningful biological insights. However, the skill sets required to generate these data are vastly different from the skills required to analyze them. Bench scientists who generate next-generation data often lack the training required to analyze these datasets and require support from bioinformatics specialists. Dedicated computational training is required to empower biologists in the area of genomic data analysis; however, learning to efficiently use a command-line interface is a significant barrier to learning how to leverage common analytical tools. Cloud platforms have the potential to democratize access to the technical tools and computational resources necessary to work with modern sequencing data, providing an effective framework for bioinformatics education. This module aims to provide an interactive platform that gradually builds the technical skills and knowledge needed to interact with genomics data on the command line in the cloud. The sandbox format of this module enables users to move through the material at their own pace and test their grasp of the material with knowledge self-checks before building on that material in the next sub-module.
Additional Links: PMID-39041911
Citation:
@article {pmid39041911,
year = {2024},
author = {Wilkins, OM and Campbell, R and Yosufzai, Z and Doe, V and Soucy, SM},
title = {Cloud-based introduction to BASH programming for biologists.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae244},
pmid = {39041911},
issn = {1477-4054},
support = {P20GM130454//National Institutes of General Medical Science/ ; },
mesh = {*Cloud Computing ; *Software ; *Computational Biology/methods ; Programming Languages ; High-Throughput Nucleotide Sequencing/methods ; Genomics/methods ; Humans ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning', https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial authored by the National Institute of General Medical Sciences, NIGMS Sandbox: A Learning Platform toward Democratizing Cloud Computing for Biomedical Research, at the beginning of this supplement. This module delivers learning materials introducing the utility of the BASH (Bourne Again Shell) programming language for genomic data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. The next-generation sequencing revolution has generated massive amounts of novel biological data from a multitude of platforms that survey an ever-growing list of genomic modalities. These data require significant downstream computational and statistical analyses to glean meaningful biological insights. However, the skill sets required to generate these data are vastly different from the skills required to analyze them. Bench scientists who generate next-generation data often lack the training required to analyze these datasets and require support from bioinformatics specialists. Dedicated computational training is required to empower biologists in the area of genomic data analysis; however, learning to efficiently use a command-line interface is a significant barrier to learning how to leverage common analytical tools. Cloud platforms have the potential to democratize access to the technical tools and computational resources necessary to work with modern sequencing data, providing an effective framework for bioinformatics education. This module aims to provide an interactive platform that gradually builds the technical skills and knowledge needed to interact with genomics data on the command line in the cloud. The sandbox format of this module enables users to move through the material at their own pace and test their grasp of the material with knowledge self-checks before building on that material in the next sub-module.},
}
MeSH Terms:
*Cloud Computing
*Software
*Computational Biology/methods
Programming Languages
High-Throughput Nucleotide Sequencing/methods
Genomics/methods
Humans
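The module's exercises center on shell one-liners for genomic text formats. As one rough Python analog of a classic exercise, counting records in a FASTQ file (the shell equivalent the module would teach is along the lines of echo $(( $(wc -l < reads.fastq) / 4 )) ), with the file name a placeholder:

    def count_fastq_records(path: str) -> int:
        """Count records in an uncompressed FASTQ file (4 lines per record)."""
        with open(path) as fh:
            return sum(1 for _ in fh) // 4

    # print(count_fastq_records("reads.fastq"))  # hypothetical input file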
RevDate: 2024-07-23
CmpDate: 2024-07-23
CloudATAC: a cloud-based framework for ATAC-Seq data analysis.
Briefings in bioinformatics, 25(Supplement_1):.
Assay for transposase-accessible chromatin with high-throughput sequencing (ATAC-seq) generates genome-wide chromatin accessibility profiles, providing valuable insights into epigenetic gene regulation at both pooled-cell and single-cell population levels. Comprehensive analysis of ATAC-seq data involves the use of various interdependent programs. Learning the correct sequence of steps needed to process the data can represent a major hurdle. Selecting appropriate parameters at each stage, including pre-analysis, core analysis, and advanced downstream analysis, is important to ensure accurate analysis and interpretation of ATAC-seq data. Additionally, obtaining and working within a limited computational environment presents a significant challenge to non-bioinformatics researchers. Therefore, we present CloudATAC, an open-source, cloud-based interactive framework offering a scalable, flexible, and streamlined analysis workflow based on best practices for pooled-cell and single-cell ATAC-seq data. The framework uses the on-demand computational power and memory, scalability, and secure, compliant environment provided by Google Cloud. Additionally, we leverage Jupyter Notebook's interactive computing platform, which combines live code, tutorials, narrative text, flashcards, quizzes, and custom visualizations to enhance learning and analysis. Further, leveraging GPU instances has significantly improved the run-time of the single-cell framework. The source code and data are publicly available through NIH Cloud Lab https://github.com/NIGMS/ATAC-Seq-and-Single-Cell-ATAC-Seq-Analysis. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
Additional Links: PMID-39041910
Citation:
@article {pmid39041910,
year = {2024},
author = {Veerappa, AM and Rowley, MJ and Maggio, A and Beaudry, L and Hawkins, D and Kim, A and Sethi, S and Sorgen, PL and Guda, C},
title = {CloudATAC: a cloud-based framework for ATAC-Seq data analysis.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae090},
pmid = {39041910},
issn = {1477-4054},
support = {NIH/NIGMS P20 GM103427//NOSI supplement to the parent IDeA Networks of Biomedical Research Excellence (INBRE) Program/ ; },
mesh = {*Cloud Computing ; *Software ; *High-Throughput Nucleotide Sequencing/methods ; Humans ; Computational Biology/methods ; Chromatin Immunoprecipitation Sequencing/methods ; Single-Cell Analysis/methods ; Chromatin/genetics/metabolism ; },
abstract = {Assay for transposase-accessible chromatin with high-throughput sequencing (ATAC-seq) generates genome-wide chromatin accessibility profiles, providing valuable insights into epigenetic gene regulation at both pooled-cell and single-cell population levels. Comprehensive analysis of ATAC-seq data involves the use of various interdependent programs. Learning the correct sequence of steps needed to process the data can represent a major hurdle. Selecting appropriate parameters at each stage, including pre-analysis, core analysis, and advanced downstream analysis, is important to ensure accurate analysis and interpretation of ATAC-seq data. Additionally, obtaining and working within a limited computational environment presents a significant challenge to non-bioinformatics researchers. Therefore, we present CloudATAC, an open-source, cloud-based interactive framework offering a scalable, flexible, and streamlined analysis workflow based on best practices for pooled-cell and single-cell ATAC-seq data. The framework uses the on-demand computational power and memory, scalability, and secure, compliant environment provided by Google Cloud. Additionally, we leverage Jupyter Notebook's interactive computing platform, which combines live code, tutorials, narrative text, flashcards, quizzes, and custom visualizations to enhance learning and analysis. Further, leveraging GPU instances has significantly improved the run-time of the single-cell framework. The source code and data are publicly available through NIH Cloud Lab https://github.com/NIGMS/ATAC-Seq-and-Single-Cell-ATAC-Seq-Analysis. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
MeSH Terms:
*Cloud Computing
*Software
*High-Throughput Nucleotide Sequencing/methods
Humans
Computational Biology/methods
Chromatin Immunoprecipitation Sequencing/methods
Single-Cell Analysis/methods
Chromatin/genetics/metabolism
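One QC step of the kind CloudATAC's notebooks walk through is summarizing the fragment-length distribution. A self-contained sketch, assuming a plain three-column fragments file (chrom, start, end); the file name and format are illustrative choices, not the paper's exact inputs:

    from collections import Counter

    def fragment_length_counts(path: str) -> Counter:
        """Histogram of ATAC-seq fragment lengths from a BED-like file."""
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                chrom, start, end = line.split()[:3]
                counts[int(end) - int(start)] += 1
        return counts

    # hist = fragment_length_counts("fragments.bed")  # hypothetical input
    # Nucleosome-free fragments cluster below ~100 bp; mono-nucleosome near 200 bp.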
RevDate: 2024-07-23
Enhancing security in smart healthcare systems: Using intelligent edge computing with a novel Salp Swarm Optimization and radial basis neural network algorithm.
Heliyon, 10(13):e33792.
A smart healthcare system (SHS) is a health service system that employs advanced technologies such as wearable devices, the Internet of Things (IoT), and mobile internet to dynamically access information and connect people and institutions related to healthcare, thereby actively managing and responding to medical ecosystem needs. Edge computing (EC) plays a significant role in SHS as it enables real-time data processing and analysis at the data source, which reduces latency and improves medical intervention speed. However, the integration of patient information, including electronic health records (EHRs), into the SHS framework induces security and privacy concerns. To address these issues, an intelligent EC framework was proposed in this study. The objective of this study is to accurately identify security threats and ensure secure data transmission in the SHS environment. The proposed EC framework leverages the effectiveness of Salp Swarm Optimization and Radial Basis Functional Neural Network (SS-RBFN) for enhancing security and data privacy. The proposed methodology commences with the collection of healthcare information, which is then pre-processed to ensure the consistency and quality of the database for further analysis. Subsequently, the SS-RBFN algorithm was trained using the pre-processed database to distinguish between normal and malicious data streams accurately, offering continuous monitoring in the SHS environment. Additionally, a Rivest-Shamir-Adleman (RSA) approach was applied to safeguard data against security threats during transmission to cloud storage. The proposed model was trained and validated using the IoT-based healthcare database available at Kaggle, and the experimental results demonstrated that it achieved 99.87 % accuracy, 99.76 % precision, 99.49 % F-measure, 98.99 % recall, 97.37 % throughput, and 1.2 s latency. Furthermore, the results achieved by the proposed model were compared with those of existing models to validate its effectiveness in enhancing security.
Additional Links: PMID-39040324
Citation:
@article {pmid39040324,
year = {2024},
author = {Almalawi, A and Zafar, A and Unhelkar, B and Hassan, S and Alqurashi, F and Khan, AI and Fahad, A and Alam, MM},
title = {Enhancing security in smart healthcare systems: Using intelligent edge computing with a novel Salp Swarm Optimization and radial basis neural network algorithm.},
journal = {Heliyon},
volume = {10},
number = {13},
pages = {e33792},
pmid = {39040324},
issn = {2405-8440},
abstract = {A smart healthcare system (SHS) is a health service system that employs advanced technologies such as wearable devices, the Internet of Things (IoT), and mobile internet to dynamically access information and connect people and institutions related to healthcare, thereby actively managing and responding to medical ecosystem needs. Edge computing (EC) plays a significant role in SHS as it enables real-time data processing and analysis at the data source, which reduces latency and improves medical intervention speed. However, the integration of patient information, including electronic health records (EHRs), into the SHS framework induces security and privacy concerns. To address these issues, an intelligent EC framework was proposed in this study. The objective of this study is to accurately identify security threats and ensure secure data transmission in the SHS environment. The proposed EC framework leverages the effectiveness of Salp Swarm Optimization and Radial Basis Functional Neural Network (SS-RBFN) for enhancing security and data privacy. The proposed methodology commences with the collection of healthcare information, which is then pre-processed to ensure the consistency and quality of the database for further analysis. Subsequently, the SS-RBFN algorithm was trained using the pre-processed database to distinguish between normal and malicious data streams accurately, offering continuous monitoring in the SHS environment. Additionally, a Rivest-Shamir-Adleman (RSA) approach was applied to safeguard data against security threats during transmission to cloud storage. The proposed model was trained and validated using the IoT-based healthcare database available at Kaggle, and the experimental results demonstrated that it achieved 99.87 % accuracy, 99.76 % precision, 99.49 % F-measure, 98.99 % recall, 97.37 % throughput, and 1.2 s latency. Furthermore, the results achieved by the proposed model were compared with those of existing models to validate its effectiveness in enhancing security.},
}
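The RBFN half of the proposed SS-RBFN can be sketched as a single radial-basis layer feeding linear output weights. This NumPy toy, with random centers and weights in place of the salp-swarm-optimized parameters, only illustrates the network shape, not the paper's trained detector:

    import numpy as np

    def rbf_layer(x, centers, gamma):
        # phi_j(x) = exp(-gamma * ||x - c_j||^2)
        sq_dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq_dist)

    rng = np.random.default_rng(2)
    x = rng.normal(size=(5, 8))          # 5 traffic records, 8 features
    centers = rng.normal(size=(4, 8))    # 4 RBF centers (would be optimized)
    weights = rng.normal(size=4)         # output weights (would be trained)
    scores = rbf_layer(x, centers, gamma=0.5) @ weights
    print((scores > 0).astype(int))      # 1 = flagged as malicious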
RevDate: 2024-07-22
CmpDate: 2024-07-22
Self-learning activation functions to increase accuracy of privacy-preserving Convolutional Neural Networks with homomorphic encryption.
PloS one, 19(7):e0306420 pii:PONE-D-23-25899.
The widespread adoption of cloud computing necessitates privacy-preserving techniques that allow information to be processed without disclosure. This paper proposes a method to increase the accuracy and performance of privacy-preserving Convolutional Neural Networks with Homomorphic Encryption (CNN-HE) by Self-Learning Activation Functions (SLAF). SLAFs are polynomials with trainable coefficients updated during training, together with synaptic weights, for each polynomial independently, to learn task-specific and CNN-specific features. We theoretically prove its feasibility to approximate any continuous activation function to the desired error as a function of the SLAF degree. Two CNN-HE models are proposed: CNN-HE-SLAF and CNN-HE-SLAF-R. In the first model, all activation functions are replaced by SLAFs, and the CNN is trained to find weights and coefficients. In the second, the CNN is trained with the original activation, then the weights are fixed, the activation is substituted by SLAF, and the CNN is briefly re-trained to adapt the SLAF coefficients. We show that such self-learning can achieve the same accuracy (99.38%) as a non-polynomial ReLU over non-homomorphic CNNs and lead to an increase in accuracy (99.21%) and higher performance (6.26 times faster) than the state-of-the-art CNN-HE CryptoNets on the MNIST optical character recognition benchmark dataset.
Additional Links: PMID-39038028
Citation:
@article {pmid39038028,
year = {2024},
author = {Pulido-Gaytan, B and Tchernykh, A},
title = {Self-learning activation functions to increase accuracy of privacy-preserving Convolutional Neural Networks with homomorphic encryption.},
journal = {PloS one},
volume = {19},
number = {7},
pages = {e0306420},
doi = {10.1371/journal.pone.0306420},
pmid = {39038028},
issn = {1932-6203},
mesh = {*Neural Networks, Computer ; *Computer Security ; *Privacy ; Humans ; Algorithms ; Cloud Computing ; },
abstract = {The widespread adoption of cloud computing necessitates privacy-preserving techniques that allow information to be processed without disclosure. This paper proposes a method to increase the accuracy and performance of privacy-preserving Convolutional Neural Networks with Homomorphic Encryption (CNN-HE) by Self-Learning Activation Functions (SLAF). SLAFs are polynomials with trainable coefficients updated during training, together with synaptic weights, for each polynomial independently, to learn task-specific and CNN-specific features. We theoretically prove its feasibility to approximate any continuous activation function to the desired error as a function of the SLAF degree. Two CNN-HE models are proposed: CNN-HE-SLAF and CNN-HE-SLAF-R. In the first model, all activation functions are replaced by SLAFs, and the CNN is trained to find weights and coefficients. In the second, the CNN is trained with the original activation, then the weights are fixed, the activation is substituted by SLAF, and the CNN is briefly re-trained to adapt the SLAF coefficients. We show that such self-learning can achieve the same accuracy (99.38%) as a non-polynomial ReLU over non-homomorphic CNNs and lead to an increase in accuracy (99.21%) and higher performance (6.26 times faster) than the state-of-the-art CNN-HE CryptoNets on the MNIST optical character recognition benchmark dataset.},
}
MeSH Terms:
*Neural Networks, Computer
*Computer Security
*Privacy
Humans
Algorithms
Cloud Computing
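The paper's self-learning activation function is a polynomial whose coefficients are trained alongside the network weights, which keeps the model compatible with homomorphic encryption (unlike ReLU). A minimal PyTorch sketch under assumed degree and initialization, not the authors' implementation:

    import torch
    import torch.nn as nn

    class SLAF(nn.Module):
        """Trainable polynomial activation: a_0 + a_1*x + ... + a_d*x^d."""
        def __init__(self, degree: int = 3):
            super().__init__()
            self.coeffs = nn.Parameter(torch.zeros(degree + 1))
            with torch.no_grad():
                self.coeffs[1] = 1.0  # start near the identity function

        def forward(self, x):
            powers = torch.stack([x ** i for i in range(len(self.coeffs))], dim=-1)
            return powers @ self.coeffs

    layer = nn.Sequential(nn.Linear(16, 16), SLAF(degree=3))
    print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 16])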
RevDate: 2024-07-19
Process Manufacturing Intelligence Empowered by Industrial Metaverse: A Survey.
IEEE transactions on cybernetics, PP: [Epub ahead of print].
The goal of process manufacturing intelligence is to achieve high efficiency and greening of the entire production process. However, the information systems it uses are functionally independent, resulting in knowledge gaps between levels, and decision-making still relies heavily on manual work by knowledge workers. The industrial metaverse is a necessary means to bridge these knowledge gaps through sharing and collaborative decision-making. Considering the safety and stability requirements of process manufacturing, this article conducts a thorough survey of process manufacturing intelligence empowered by the industrial metaverse. First, it analyzes the current status and challenges of process manufacturing intelligence, and then summarizes the latest developments in the key enabling technologies of the industrial metaverse, such as interconnection technologies, artificial intelligence, cloud-edge computing, digital twins (DTs), immersive interaction, and blockchain technology. On this basis, taking into account the characteristics of process manufacturing, a construction approach and architecture for the process industrial metaverse are proposed: a virtual-real fused industrial metaverse construction method that combines DTs with physical avatars, which can effectively ensure the safety of the metaverse's application in industrial scenarios. Finally, we conducted preliminary exploration and research to demonstrate the feasibility of the proposed method.
Additional Links: PMID-39028603
Citation:
@article {pmid39028603,
year = {2024},
author = {Luo, W and Huang, K and Liang, X and Ren, H and Zhou, N and Zhang, C and Yang, C and Gui, W},
title = {Process Manufacturing Intelligence Empowered by Industrial Metaverse: A Survey.},
journal = {IEEE transactions on cybernetics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TCYB.2024.3420958},
pmid = {39028603},
issn = {2168-2275},
abstract = {The goal of process manufacturing intelligence is to achieve high efficiency and greening of the entire production process. However, the information systems it uses are functionally independent, resulting in knowledge gaps between levels, and decision-making still relies heavily on manual work by knowledge workers. The industrial metaverse is a necessary means to bridge these knowledge gaps through sharing and collaborative decision-making. Considering the safety and stability requirements of process manufacturing, this article conducts a thorough survey of process manufacturing intelligence empowered by the industrial metaverse. First, it analyzes the current status and challenges of process manufacturing intelligence, and then summarizes the latest developments in the key enabling technologies of the industrial metaverse, such as interconnection technologies, artificial intelligence, cloud-edge computing, digital twins (DTs), immersive interaction, and blockchain technology. On this basis, taking into account the characteristics of process manufacturing, a construction approach and architecture for the process industrial metaverse are proposed: a virtual-real fused industrial metaverse construction method that combines DTs with physical avatars, which can effectively ensure the safety of the metaverse's application in industrial scenarios. Finally, we conducted preliminary exploration and research to demonstrate the feasibility of the proposed method.},
}
RevDate: 2024-07-18
CmpDate: 2024-07-18
Development of PainFace software to simplify, standardize, and scale up mouse grimace analyses.
Pain, 165(8):1793-1805.
Facial grimacing is used to quantify spontaneous pain in mice and other mammals, but scoring relies on humans with different levels of proficiency. Here, we developed a cloud-based software platform called PainFace (http://painface.net) that uses machine learning to detect 4 facial action units of the mouse grimace scale (orbitals, nose, ears, whiskers) and score facial grimaces of black-coated C57BL/6 male and female mice on a 0 to 8 scale. Platform accuracy was validated in 2 different laboratories, with 3 conditions that evoke grimacing: laparotomy surgery, bilateral hindpaw injection of carrageenan, and intraplantar injection of formalin. PainFace can generate up to 1 grimace score per second from a standard 30 frames/s video, making it possible to quantify facial grimacing over time, and operates at a speed that scales with computing power. By analyzing the frequency distribution of grimace scores, we found that mice spent 7x more time in a "high grimace" state following laparotomy surgery relative to sham surgery controls. Our study shows that PainFace reproducibly quantifies facial grimaces indicative of nonevoked spontaneous pain and enables laboratories to standardize and scale up facial grimace analyses.
Additional Links: PMID-39024163
Citation:
@article {pmid39024163,
year = {2024},
author = {McCoy, ES and Park, SK and Patel, RP and Ryan, DF and Mullen, ZJ and Nesbitt, JJ and Lopez, JE and Taylor-Blake, B and Vanden, KA and Krantz, JL and Hu, W and Garris, RL and Snyder, MG and Lima, LV and Sotocinal, SG and Austin, JS and Kashlan, AD and Shah, S and Trocinski, AK and Pudipeddi, SS and Major, RM and Bazick, HO and Klein, MR and Mogil, JS and Wu, G and Zylka, MJ},
title = {Development of PainFace software to simplify, standardize, and scale up mouse grimace analyses.},
journal = {Pain},
volume = {165},
number = {8},
pages = {1793-1805},
doi = {10.1097/j.pain.0000000000003187},
pmid = {39024163},
issn = {1872-6623},
support = {R01NS114259//National Institute of Neurological Disorders and Stroke, National Science Foundation/ ; },
mesh = {Animals ; Mice ; *Facial Expression ; Female ; *Software/standards ; *Mice, Inbred C57BL ; *Pain Measurement/methods/standards ; Male ; Pain/diagnosis ; },
abstract = {Facial grimacing is used to quantify spontaneous pain in mice and other mammals, but scoring relies on humans with different levels of proficiency. Here, we developed a cloud-based software platform called PainFace (http://painface.net) that uses machine learning to detect 4 facial action units of the mouse grimace scale (orbitals, nose, ears, whiskers) and score facial grimaces of black-coated C57BL/6 male and female mice on a 0 to 8 scale. Platform accuracy was validated in 2 different laboratories, with 3 conditions that evoke grimacing: laparotomy surgery, bilateral hindpaw injection of carrageenan, and intraplantar injection of formalin. PainFace can generate up to 1 grimace score per second from a standard 30 frames/s video, making it possible to quantify facial grimacing over time, and operates at a speed that scales with computing power. By analyzing the frequency distribution of grimace scores, we found that mice spent 7x more time in a "high grimace" state following laparotomy surgery relative to sham surgery controls. Our study shows that PainFace reproducibly quantifies facial grimaces indicative of nonevoked spontaneous pain and enables laboratories to standardize and scale up facial grimace analyses.},
}
MeSH Terms:
Animals
Mice
*Facial Expression
Female
*Software/standards
*Mice, Inbred C57BL
*Pain Measurement/methods/standards
Male
Pain/diagnosis
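The paper's "time in a high-grimace state" analysis reduces to thresholding the per-second score stream. A trivial NumPy illustration, with both the scores and the cutoff invented for the example:

    import numpy as np

    scores = np.array([2, 3, 7, 8, 6, 1, 0, 7, 5, 6])  # made-up per-second scores (0-8)
    high = scores >= 6                                  # assumed "high grimace" cutoff
    print(f"time in high-grimace state: {high.mean():.0%}")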
RevDate: 2024-07-18
Innovative Hybrid Cloud Solutions for Physical Medicine and Telerehabilitation Research.
International journal of telerehabilitation, 16(1):e6635.
PURPOSE: The primary objective of this study was to develop and implement a Hybrid Cloud Environment for Telerehabilitation (HCET) to enhance patient care and research in the Physical Medicine and Rehabilitation (PM&R) domain. This environment aims to integrate advanced information and communication technologies to support both traditional in-person therapy and digital health solutions.
BACKGROUND: Telerehabilitation is emerging as a core component of modern healthcare, especially within the PM&R field. By applying digital health technologies, telerehabilitation provides continuous, comprehensive support for patient rehabilitation, bridging the gap between traditional therapy and remote healthcare delivery. This study focuses on the design and implementation of a hybrid HCET system tailored for the PM&R domain.
METHODS: The study involved the development of a comprehensive architectural and structural organization for the HCET, including a three-layer model (infrastructure, platform, service layers). Core components of the HCET were designed and implemented, such as the Hospital Information System (HIS) for PM&R, the MedRehabBot system, and the MedLocalGPT project. These components were integrated using advanced technologies like large language models (LLMs), word embeddings, and ontology-related approaches, along with APIs for enhanced functionality and interaction.
FINDINGS: The HCET system was successfully implemented and is operational, providing a robust platform for telerehabilitation. Key features include the MVP of the HIS for PM&R, supporting patient profile management and rehabilitation goal tracking; the MedRehabBot and WhiteBookBot systems; and the MedLocalGPT project, which offers sophisticated querying capabilities and access to extensive domain-specific knowledge. The system supports both Ukrainian and English languages, ensuring broad accessibility and usability.
INTERPRETATION: The practical implementation and operation of the HCET system demonstrate its potential to transform telerehabilitation within the PM&R domain. By integrating advanced technologies and providing comprehensive digital health solutions, the HCET enhances patient care, supports ongoing rehabilitation, and facilitates advanced research. Future work will focus on optimizing services and expanding language support to further improve the system's functionality and impact.
Additional Links: PMID-39022436
Citation:
@article {pmid39022436,
year = {2024},
author = {Malakhov, KS},
title = {Innovative Hybrid Cloud Solutions for Physical Medicine and Telerehabilitation Research.},
journal = {International journal of telerehabilitation},
volume = {16},
number = {1},
pages = {e6635},
pmid = {39022436},
issn = {1945-2020},
abstract = {PURPOSE: The primary objective of this study was to develop and implement a Hybrid Cloud Environment for Telerehabilitation (HCET) to enhance patient care and research in the Physical Medicine and Rehabilitation (PM&R) domain. This environment aims to integrate advanced information and communication technologies to support both traditional in-person therapy and digital health solutions.
BACKGROUND: Telerehabilitation is emerging as a core component of modern healthcare, especially within the PM&R field. By applying digital health technologies, telerehabilitation provides continuous, comprehensive support for patient rehabilitation, bridging the gap between traditional therapy and remote healthcare delivery. This study focuses on the design and implementation of a hybrid HCET system tailored for the PM&R domain.
METHODS: The study involved the development of a comprehensive architectural and structural organization for the HCET, including a three-layer model (infrastructure, platform, service layers). Core components of the HCET were designed and implemented, such as the Hospital Information System (HIS) for PM&R, the MedRehabBot system, and the MedLocalGPT project. These components were integrated using advanced technologies like large language models (LLMs), word embeddings, and ontology-related approaches, along with APIs for enhanced functionality and interaction.
FINDINGS: The HCET system was successfully implemented and is operational, providing a robust platform for telerehabilitation. Key features include the MVP of the HIS for PM&R, supporting patient profile management and rehabilitation goal tracking; the MedRehabBot and WhiteBookBot systems; and the MedLocalGPT project, which offers sophisticated querying capabilities and access to extensive domain-specific knowledge. The system supports both Ukrainian and English languages, ensuring broad accessibility and usability.
INTERPRETATION: The practical implementation and operation of the HCET system demonstrate its potential to transform telerehabilitation within the PM&R domain. By integrating advanced technologies and providing comprehensive digital health solutions, the HCET enhances patient care, supports ongoing rehabilitation, and facilitates advanced research. Future work will focus on optimizing services and expanding language support to further improve the system's functionality and impact.},
}
RevDate: 2024-07-17
CmpDate: 2024-07-17
Variability in wet and dry snow radar zones in the North of the Antarctic Peninsula using a cloud computing environment.
Anais da Academia Brasileira de Ciencias, 96(suppl 2):e20230704 pii:S0001-37652024000401101.
This work investigated the annual variations in dry snow (DSRZ) and wet snow radar zones (WSRZ) in the north of the Antarctic Peninsula between 2015 and 2023. A specific code for snow zone detection on Sentinel-1 images was created on Google Earth Engine by combining the CryoSat-2 digital elevation model and air temperature data from ERA5. Regions with backscatter coefficient (σ⁰) values exceeding -6.5 dB were considered the extent of surface melt occurrence, and the dry snow line was considered to coincide with the -11 °C isotherm of the average annual air temperature. The annual variation in WSRZ exhibited moderate correlations with annual average air temperature, total precipitation, and the sum of annual degree-days. However, statistical tests indicated low coefficients of determination and no significant trends in DSRZ behavior with respect to atmospheric variables. The reduction in DSRZ area in 2019/2020 and 2020/2021 compared with 2018/2019 indicates an upward shift of the dry snow line in this region of the Antarctic Peninsula. The methodology demonstrated its efficacy for both quantitative and qualitative analyses of data obtained in digital processing environments, allowing large-scale monitoring of spatial and temporal variations and a better understanding of changes in glacier mass loss.
Additional Links: PMID-39016361
Citation:
@article {pmid39016361,
year = {2024},
author = {Idalino, FD and Rosa, KKD and Hillebrand, FL and Arigony-Neto, J and Mendes, CW and Simões, JC},
title = {Variability in wet and dry snow radar zones in the North of the Antarctic Peninsula using a cloud computing environment.},
journal = {Anais da Academia Brasileira de Ciencias},
volume = {96},
number = {suppl 2},
pages = {e20230704},
doi = {10.1590/0001-3765202420230704},
pmid = {39016361},
issn = {1678-2690},
mesh = {Antarctic Regions ; *Snow ; *Radar ; *Cloud Computing ; Seasons ; Environmental Monitoring/methods ; Temperature ; },
abstract = {This work investigated the annual variations in dry snow (DSRZ) and wet snow radar zones (WSRZ) in the north of the Antarctic Peninsula between 2015 and 2023. A specific code for snow zone detection on Sentinel-1 images was created on Google Earth Engine by combining the CryoSat-2 digital elevation model and air temperature data from ERA5. Regions with backscatter coefficient (σ⁰) values exceeding -6.5 dB were considered the extent of surface melt occurrence, and the dry snow line was considered to coincide with the -11 °C isotherm of the average annual air temperature. The annual variation in WSRZ exhibited moderate correlations with annual average air temperature, total precipitation, and the sum of annual degree-days. However, statistical tests indicated low coefficients of determination and no significant trends in DSRZ behavior with respect to atmospheric variables. The reduction in DSRZ area in 2019/2020 and 2020/2021 compared with 2018/2019 indicates an upward shift of the dry snow line in this region of the Antarctic Peninsula. The methodology demonstrated its efficacy for both quantitative and qualitative analyses of data obtained in digital processing environments, allowing large-scale monitoring of spatial and temporal variations and a better understanding of changes in glacier mass loss.},
}
MeSH Terms:
Antarctic Regions
*Snow
*Radar
*Cloud Computing
Seasons
Environmental Monitoring/methods
Temperature
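The wet-snow test reduces to thresholding averaged Sentinel-1 backscatter at -6.5 dB. A condensed Earth Engine Python API sketch with placeholder region, dates, and band; the authors' actual script and the ERA5/CryoSat-2 integration are omitted:

    import ee

    ee.Initialize()
    region = ee.Geometry.Rectangle([-64.0, -65.5, -56.0, -63.0])  # rough AP box
    s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
          .filterBounds(region)
          .filterDate("2020-01-01", "2020-02-28")
          .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "HH"))
          .select("HH"))
    wet_snow = s1.mean().gt(-6.5)  # 1 where sigma0 > -6.5 dB, i.e. surface melt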
RevDate: 2024-07-15
"Alexa, Cycle The Blood Pressure": A Voice Control Interface Method for Anesthesia Monitoring.
Anesthesia and analgesia pii:00000539-990000000-00865 [Epub ahead of print].
BACKGROUND: Anesthesia monitors and devices are usually controlled with some combination of dials, keypads, a keyboard, or a touch screen. Thus, anesthesiologists can operate their monitors only when they are physically close to them, and not otherwise task-loaded with sterile procedures such as line or block placement. Voice recognition technology has become commonplace and may offer advantages in anesthesia practice such as reducing surface contamination rates and allowing anesthesiologists to effect changes in monitoring and therapy when they would otherwise presently be unable to do so. We hypothesized that this technology is practicable and that anesthesiologists would consider it useful.
METHODS: A novel voice-driven prototype controller was designed for the GE Solar 8000M anesthesia patient monitor. The apparatus was implemented using a Raspberry Pi 4 single-board computer, an external conference audio device, a Google Cloud Speech-to-Text platform, and a modified Solar controller to effect commands. Fifty anesthesia providers tested the prototype. Evaluations and surveys were completed in a nonclinical environment to avoid any ethical or safety concerns regarding the use of the device in direct patient care. All anesthesiologists sampled were fluent English speakers; many with inflections from their first language or national origin, reflecting diversity in the population of practicing anesthesiologists.
RESULTS: The prototype was uniformly well-received by anesthesiologists. Ease-of-use, usefulness, and effectiveness were assessed on a Likert scale with means of 9.96, 7.22, and 8.48 of 10, respectively. No population cofactors were associated with these results. Advancing level of training (eg, nonattending versus attending) was not correlated with any preference. Accent of country or region was not correlated with any preference. Vocal pitch register did not correlate with any preference. Statistical analyses were performed with analysis of variance and the unpaired t-test.
CONCLUSIONS: The use of voice recognition to control operating room monitors was well-received by anesthesia providers. Additional commands are easily implemented on the prototype controller. No adverse relationship was found between acceptability and level of anesthesia experience, pitch of voice, or presence of accent. Voice recognition is a promising method of controlling anesthesia monitors and devices that could potentially increase usability and situational awareness in circumstances where the anesthesiologist is otherwise out-of-position or task-loaded.
Additional Links: PMID-39008420
Citation:
@article {pmid39008420,
year = {2024},
author = {Lee, G and Connor, CW},
title = {"Alexa, Cycle The Blood Pressure": A Voice Control Interface Method for Anesthesia Monitoring.},
journal = {Anesthesia and analgesia},
volume = {},
number = {},
pages = {},
doi = {10.1213/ANE.0000000000007003},
pmid = {39008420},
issn = {1526-7598},
abstract = {BACKGROUND: Anesthesia monitors and devices are usually controlled with some combination of dials, keypads, a keyboard, or a touch screen. Thus, anesthesiologists can operate their monitors only when they are physically close to them, and not otherwise task-loaded with sterile procedures such as line or block placement. Voice recognition technology has become commonplace and may offer advantages in anesthesia practice such as reducing surface contamination rates and allowing anesthesiologists to effect changes in monitoring and therapy when they would otherwise presently be unable to do so. We hypothesized that this technology is practicable and that anesthesiologists would consider it useful.
METHODS: A novel voice-driven prototype controller was designed for the GE Solar 8000M anesthesia patient monitor. The apparatus was implemented using a Raspberry Pi 4 single-board computer, an external conference audio device, a Google Cloud Speech-to-Text platform, and a modified Solar controller to effect commands. Fifty anesthesia providers tested the prototype. Evaluations and surveys were completed in a nonclinical environment to avoid any ethical or safety concerns regarding the use of the device in direct patient care. All anesthesiologists sampled were fluent English speakers; many with inflections from their first language or national origin, reflecting diversity in the population of practicing anesthesiologists.
RESULTS: The prototype was uniformly well-received by anesthesiologists. Ease-of-use, usefulness, and effectiveness were assessed on a Likert scale with means of 9.96, 7.22, and 8.48 of 10, respectively. No population cofactors were associated with these results. Advancing level of training (eg, nonattending versus attending) was not correlated with any preference. Accent of country or region was not correlated with any preference. Vocal pitch register did not correlate with any preference. Statistical analyses were performed with analysis of variance and the unpaired t-test.
CONCLUSIONS: The use of voice recognition to control operating room monitors was well-received by anesthesia providers. Additional commands are easily implemented on the prototype controller. No adverse relationship was found between acceptability and level of anesthesia experience, pitch of voice, or presence of accent. Voice recognition is a promising method of controlling anesthesia monitors and devices that could potentially increase usability and situational awareness in circumstances where the anesthesiologist is otherwise out-of-position or task-loaded.},
}
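The speech-recognition leg of such a controller is straightforward with the Google Cloud Speech-to-Text client library. In this sketch the command phrase, audio file, and downstream monitor action are hypothetical stand-ins, not the authors' implementation:

    from google.cloud import speech

    client = speech.SpeechClient()
    with open("command.wav", "rb") as fh:  # hypothetical captured audio
        audio = speech.RecognitionAudio(content=fh.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        text = result.alternatives[0].transcript.lower()
        if "cycle the blood pressure" in text:
            print("-> trigger NIBP measurement")  # stand-in for the monitor interface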
RevDate: 2024-07-15
Replica Exchange of Expanded Ensembles: A Generalized Ensemble Approach with Enhanced Flexibility and Parallelizability.
Journal of chemical theory and computation [Epub ahead of print].
Generalized ensemble methods such as Hamiltonian replica exchange (HREX) and expanded ensemble (EE) have been shown effective in free energy calculations for various contexts, given their ability to circumvent free energy barriers via nonphysical pathways defined by states with different modified Hamiltonians. However, both HREX and EE methods come with drawbacks, such as limited flexibility in parameter specification or the lack of parallelizability for more complicated applications. To address this challenge, we present the method of replica exchange of expanded ensembles (REXEE), which integrates the principles of HREX and EE methods by periodically exchanging coordinates of EE replicas sampling different yet overlapping sets of alchemical states. With the solvation free energy calculation of anthracene and binding free energy calculation of the CB7-10 binding complex, we show that the REXEE method achieves the same level of accuracy in free energy calculations as the HREX and EE methods, while offering enhanced flexibility and parallelizability. Additionally, we examined REXEE simulations with various setups to understand how different exchange frequencies and replica configurations influence the sampling efficiency in the fixed-weight phase and the weight convergence in the weight-updating phase. The REXEE approach can be further extended to support asynchronous parallelization schemes, allowing looser communication between larger numbers of loosely coupled processors, such as those available through cloud computing, and therefore promising much more scalable and adaptive execution of alchemical free energy calculations. All algorithms for the REXEE method are available in the Python package ensemble_md, which offers an interface for REXEE simulation management without modifying the source code in GROMACS.
Additional Links: PMID-39007702
Citation:
@article {pmid39007702,
year = {2024},
author = {Hsu, WT and Shirts, MR},
title = {Replica Exchange of Expanded Ensembles: A Generalized Ensemble Approach with Enhanced Flexibility and Parallelizability.},
journal = {Journal of chemical theory and computation},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.jctc.4c00484},
pmid = {39007702},
issn = {1549-9626},
abstract = {Generalized ensemble methods such as Hamiltonian replica exchange (HREX) and expanded ensemble (EE) have been shown effective in free energy calculations for various contexts, given their ability to circumvent free energy barriers via nonphysical pathways defined by states with different modified Hamiltonians. However, both HREX and EE methods come with drawbacks, such as limited flexibility in parameter specification or the lack of parallelizability for more complicated applications. To address this challenge, we present the method of replica exchange of expanded ensembles (REXEE), which integrates the principles of HREX and EE methods by periodically exchanging coordinates of EE replicas sampling different yet overlapping sets of alchemical states. With the solvation free energy calculation of anthracene and binding free energy calculation of the CB7-10 binding complex, we show that the REXEE method achieves the same level of accuracy in free energy calculations as the HREX and EE methods, while offering enhanced flexibility and parallelizability. Additionally, we examined REXEE simulations with various setups to understand how different exchange frequencies and replica configurations influence the sampling efficiency in the fixed-weight phase and the weight convergence in the weight-updating phase. The REXEE approach can be further extended to support asynchronous parallelization schemes, allowing looser communication between larger numbers of loosely coupled processors, such as those available through cloud computing, and therefore promising much more scalable and adaptive execution of alchemical free energy calculations. All algorithms for the REXEE method are available in the Python package ensemble_md, which offers an interface for REXEE simulation management without modifying the source code in GROMACS.},
}
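At the heart of any replica-exchange scheme, REXEE included, is a Metropolis acceptance test on swapping configurations between thermodynamic states. A toy sketch with invented reduced potentials; see the ensemble_md package for the actual REXEE implementation:

    import math, random

    def accept_swap(u_11, u_22, u_12, u_21) -> bool:
        """u_ij: reduced potential of configuration i evaluated at state j."""
        delta = (u_11 + u_22) - (u_12 + u_21)
        return random.random() < min(1.0, math.exp(delta))

    print(accept_swap(u_11=3.0, u_22=2.5, u_12=3.4, u_21=2.6))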
RevDate: 2024-07-13
Smart city energy efficient data privacy preservation protocol based on biometrics and fuzzy commitment scheme.
Scientific reports, 14(1):16223.
Advancements in cloud computing, flying ad-hoc networks, wireless sensor networks, artificial intelligence, big data, 5th generation mobile networks and the Internet of Things have led to the development of smart cities. Owing to their massive interconnectedness, high volumes of data are collected and exchanged over the public internet. Therefore, the exchanged messages are susceptible to numerous security and privacy threats across these open public channels. Although many security techniques have been designed to address this issue, most of them are still vulnerable to attacks, while some deploy computationally intensive cryptographic operations such as bilinear pairings and blockchain. In this paper, we leverage biometrics, error correction codes and fuzzy commitment schemes to develop a secure and energy efficient authentication scheme for smart cities. This is informed by the fact that biometric data is cumbersome to reproduce, and hence attacks such as side-channeling are thwarted. We formally analyze the security of our protocol using Burrows-Abadi-Needham (BAN) logic, which shows that our scheme achieves strong mutual authentication among the communicating entities. The semantic analysis of our protocol shows that it mitigates attacks such as de-synchronization, eavesdropping, session hijacking, forgery and side-channeling. In addition, its formal security analysis demonstrates that it is secure under the Canetti and Krawczyk attack model. In terms of performance, our scheme is shown to reduce computation overheads by 20.7% and is hence the most efficient among the state-of-the-art protocols.
Additional Links: PMID-39003319
@article {pmid39003319,
year = {2024},
author = {Nyangaresi, VO and Abduljabbar, ZA and Mutlaq, KA and Bulbul, SS and Ma, J and Aldarwish, AJY and Honi, DG and Al Sibahee, MA and Neamah, HA},
title = {Smart city energy efficient data privacy preservation protocol based on biometrics and fuzzy commitment scheme.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {16223},
pmid = {39003319},
issn = {2045-2322},
support = {GDRC202132//Natural Science Foundation of Top Talent of SZTU/ ; },
abstract = {Advancements in cloud computing, flying ad-hoc networks, wireless sensor networks, artificial intelligence, big data, 5th generation mobile networks and the internet of things have led to the development of smart cities. Owing to their massive interconnectedness, high volumes of data are collected and exchanged over the public internet. The exchanged messages are therefore susceptible to numerous security and privacy threats across these open public channels. Although many security techniques have been designed to address this issue, most remain vulnerable to attacks, while some deploy computationally expensive cryptographic operations such as bilinear pairings and blockchain. In this paper, we leverage biometrics, error correction codes and fuzzy commitment schemes to develop a secure and energy-efficient authentication scheme for smart cities. This is informed by the fact that biometric data is difficult to reproduce, so attacks such as side-channeling are thwarted. We formally analyze the security of our protocol using Burrows-Abadi-Needham (BAN) logic, which shows that our scheme achieves strong mutual authentication among the communicating entities. The semantic analysis of our protocol shows that it mitigates attacks such as de-synchronization, eavesdropping, session hijacking, forgery and side-channeling. In addition, its formal security analysis demonstrates that it is secure under the Canetti-Krawczyk attack model. In terms of performance, our scheme reduces computation overheads by 20.7% and is hence the most efficient among the state-of-the-art protocols.},
}
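The fuzzy commitment construction referenced in this abstract binds a random secret to a noisy biometric template via an error-correcting code. The toy sketch below uses a simple repetition code and SHA-256; the code choice, bit lengths, and helper-data layout are illustrative assumptions, not the authors' construction.

import hashlib
import secrets

R = 5  # repetition factor of the toy error-correcting code (assumed)

def ecc_encode(bits):
    # Repeat each key bit R times to form the codeword.
    return [b for b in bits for _ in range(R)]

def ecc_decode(bits):
    # Majority vote per block recovers each key bit despite up to R//2 flips.
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def commit(biometric_bits):
    """Fuzzy commitment: hide a random key under the biometric template.

    biometric_bits: 0/1 list whose length is a multiple of R (assumed).
    """
    key = [secrets.randbelow(2) for _ in range(len(biometric_bits) // R)]
    codeword = ecc_encode(key)
    helper = [w ^ c for w, c in zip(biometric_bits, codeword)]
    tag = hashlib.sha256(bytes(key)).hexdigest()
    return tag, helper

def open_commitment(tag, helper, biometric_bits):
    """Succeeds iff the fresh reading lies within the code's error tolerance."""
    noisy_codeword = [w ^ h for w, h in zip(biometric_bits, helper)]
    key = ecc_decode(noisy_codeword)
    return hashlib.sha256(bytes(key)).hexdigest() == tag

At enrollment only (tag, helper) is stored; a fresh biometric reading with fewer than R//2 bit flips per block recovers the same key, so the raw template never has to be kept, which is what makes the committed secret hard for an attacker to reproduce.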
RevDate: 2024-07-13
Trust Management and Resource Optimization in Edge and Fog Computing Using the CyberGuard Framework.
Sensors (Basel, Switzerland), 24(13): pii:s24134308.
The growing importance of edge and fog computing in modern IT infrastructure is driven by the rise of decentralized applications. However, resource allocation within these frameworks is challenging due to varying device capabilities and dynamic network conditions. Conventional approaches often result in poor resource utilization and slow progress. This study presents a novel strategy for enhancing resource allocation in edge and fog computing by integrating machine learning with blockchain for reliable trust management. Our proposed framework, called CyberGuard, leverages the blockchain's inherent immutability and decentralization to establish a trustworthy and transparent network for monitoring and verifying edge and fog computing transactions. CyberGuard combines the Trust2Vec model with conventional machine-learning models such as SVM, KNN, and random forests, creating a robust mechanism for assessing trust and security risks. Through detailed optimization and case studies, CyberGuard demonstrates significant improvements in resource allocation efficiency and overall system performance in real-world scenarios. Our results highlight CyberGuard's effectiveness, evidenced by a remarkable accuracy, precision, recall, and F1-score of 98.18%, showcasing the transformative potential of our comprehensive approach in edge and fog computing environments.
Additional Links: PMID-39001087
@article {pmid39001087,
year = {2024},
author = {Alwakeel, AM and Alnaim, AK},
title = {Trust Management and Resource Optimization in Edge and Fog Computing Using the CyberGuard Framework.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {13},
pages = {},
doi = {10.3390/s24134308},
pmid = {39001087},
issn = {1424-8220},
support = {XXXXXX//King Faisal University/ ; },
abstract = {The growing importance of edge and fog computing in modern IT infrastructure is driven by the rise of decentralized applications. However, resource allocation within these frameworks is challenging due to varying device capabilities and dynamic network conditions. Conventional approaches often result in poor resource utilization and slow progress. This study presents a novel strategy for enhancing resource allocation in edge and fog computing by integrating machine learning with blockchain for reliable trust management. Our proposed framework, called CyberGuard, leverages the blockchain's inherent immutability and decentralization to establish a trustworthy and transparent network for monitoring and verifying edge and fog computing transactions. CyberGuard combines the Trust2Vec model with conventional machine-learning models such as SVM, KNN, and random forests, creating a robust mechanism for assessing trust and security risks. Through detailed optimization and case studies, CyberGuard demonstrates significant improvements in resource allocation efficiency and overall system performance in real-world scenarios. Our results highlight CyberGuard's effectiveness, evidenced by a remarkable accuracy, precision, recall, and F1-score of 98.18%, showcasing the transformative potential of our comprehensive approach in edge and fog computing environments.},
}
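The abstract's pairing of an embedding-based trust model with conventional classifiers can be illustrated by a soft-voting ensemble over SVM, KNN, and random forest, the three models named above. The features and labels below are synthetic placeholders; in CyberGuard they would presumably come from Trust2Vec embeddings and transaction metadata, which is an assumption on our part.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Illustrative features: each row could concatenate a node's trust embedding
# (e.g. from a Trust2Vec-style model) with behavioral statistics.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 8-dim synthetic feature vectors
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic trusted/untrusted labels

# Soft-voting ensemble over the three conventional models named in the abstract.
clf = VotingClassifier(estimators=[
    ("svm", SVC(probability=True)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
], voting="soft")
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

Soft voting averages the three models' class probabilities, so a node is flagged untrusted only when the ensemble as a whole leans that way rather than on any single model's verdict.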
RevDate: 2024-07-13
Network Slicing in 6G: A Strategic Framework for IoT in Smart Cities.
Sensors (Basel, Switzerland), 24(13): pii:s24134254.
The emergence of 6G communication technologies brings both opportunities and challenges for the Internet of Things (IoT) in smart cities. In this paper, we introduce an advanced network slicing framework designed to meet the complex demands of 6G smart cities' IoT deployments. The framework development follows a detailed methodology that encompasses requirement analysis, metric formulation, constraint specification, objective setting, mathematical modeling, configuration optimization, performance evaluation, parameter tuning, and validation of the final design. Our evaluations demonstrate the framework's high efficiency, evidenced by low round-trip time (RTT), minimal packet loss, increased availability, and enhanced throughput. Notably, the framework scales effectively, managing multiple connections simultaneously without compromising resource efficiency. Enhanced security is achieved through robust features such as 256-bit encryption and a high rate of authentication success. The discussion elaborates on these findings, underscoring the framework's impressive performance, scalability, and security capabilities.
Additional Links: PMID-39001032
@article {pmid39001032,
year = {2024},
author = {Alwakeel, AM and Alnaim, AK},
title = {Network Slicing in 6G: A Strategic Framework for IoT in Smart Cities.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {13},
pages = {},
doi = {10.3390/s24134254},
pmid = {39001032},
issn = {1424-8220},
support = {000000//King Faisal University/ ; },
abstract = {The emergence of 6G communication technologies brings both opportunities and challenges for the Internet of Things (IoT) in smart cities. In this paper, we introduce an advanced network slicing framework designed to meet the complex demands of 6G smart cities' IoT deployments. The framework development follows a detailed methodology that encompasses requirement analysis, metric formulation, constraint specification, objective setting, mathematical modeling, configuration optimization, performance evaluation, parameter tuning, and validation of the final design. Our evaluations demonstrate the framework's high efficiency, evidenced by low round-trip time (RTT), minimal packet loss, increased availability, and enhanced throughput. Notably, the framework scales effectively, managing multiple connections simultaneously without compromising resource efficiency. Enhanced security is achieved through robust features such as 256-bit encryption and a high rate of authentication success. The discussion elaborates on these findings, underscoring the framework's impressive performance, scalability, and security capabilities.},
}
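The evaluation criteria listed in the abstract (RTT, packet loss, availability, throughput) amount to per-slice QoS targets. The sketch below shows one way such a check could be expressed; the dataclass fields and threshold values are illustrative assumptions, not the paper's model.

from dataclasses import dataclass

@dataclass
class SliceRequirements:
    """Per-slice QoS targets of the kind the framework evaluates (assumed)."""
    max_rtt_ms: float
    max_packet_loss: float      # fraction, e.g. 0.001 == 0.1%
    min_throughput_mbps: float

@dataclass
class SliceMeasurement:
    rtt_ms: float
    packet_loss: float
    throughput_mbps: float

def slice_satisfies(req: SliceRequirements, m: SliceMeasurement) -> bool:
    """True iff a slice's measured KPIs meet all of its targets."""
    return (m.rtt_ms <= req.max_rtt_ms
            and m.packet_loss <= req.max_packet_loss
            and m.throughput_mbps >= req.min_throughput_mbps)

# Example: a hypothetical IoT telemetry slice with modest throughput needs
# but tight loss and latency bounds.
iot = SliceRequirements(max_rtt_ms=20.0, max_packet_loss=0.001,
                        min_throughput_mbps=5.0)
print(slice_satisfies(iot, SliceMeasurement(12.3, 0.0004, 8.1)))  # True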
RevDate: 2024-07-13
Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing.
Sensors (Basel, Switzerland), 24(13): pii:s24134195.
Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration into the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; developers can focus on creating and deploying code efficiently. Since FaaS aligns well with the IoT, it integrates easily with IoT devices, making it possible to perform event-based actions and real-time computations. In our research, we offer a likelihood-based adaptive machine learning model for identifying the right placement for a function. We employ the XGBoost regressor to estimate the execution time of each function and the decision tree regressor to predict network latency. By encompassing factors such as network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection of a placement. For replication, we use Docker containers, focusing on serverless node type, serverless node variety, function location, deadlines, and edge-cloud topology. The primary objectives are thus to meet deadlines and improve resource usage; effective utilization of resources leads to enhanced deadline compliance.
Additional Links: PMID-39000973
@article {pmid39000973,
year = {2024},
author = {Shahid, U and Ahmed, G and Siddiqui, S and Shuja, J and Balogun, AO},
title = {Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {13},
pages = {},
doi = {10.3390/s24134195},
pmid = {39000973},
issn = {1424-8220},
support = {015LA0-049//Universiti Teknologi Petronas/ ; },
abstract = {Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration into the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; developers can focus on creating and deploying code efficiently. Since FaaS aligns well with the IoT, it integrates easily with IoT devices, making it possible to perform event-based actions and real-time computations. In our research, we offer a likelihood-based adaptive machine learning model for identifying the right placement for a function. We employ the XGBoost regressor to estimate the execution time of each function and the decision tree regressor to predict network latency. By encompassing factors such as network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection of a placement. For replication, we use Docker containers, focusing on serverless node type, serverless node variety, function location, deadlines, and edge-cloud topology. The primary objectives are thus to meet deadlines and improve resource usage; effective utilization of resources leads to enhanced deadline compliance.},
}
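The two-model placement scheme described above (an XGBoost regressor for execution time, a decision tree regressor for network latency) can be sketched as follows. Only the choice of regressors comes from the abstract; the feature columns, synthetic training data, and pick_node scoring rule are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(1)

# Synthetic training data: node features -> observed exec time / latency.
# Feature columns (assumed): [cpu_load, mem_free_gb, hops_to_client, is_edge]
X = rng.random((300, 4))
exec_time = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.05, 300)
latency = 3.0 * X[:, 2] - 1.0 * X[:, 3] + rng.normal(0, 0.05, 300)

exec_model = XGBRegressor(n_estimators=50).fit(X, exec_time)   # execution time
lat_model = DecisionTreeRegressor(max_depth=5).fit(X, latency)  # network latency

def pick_node(candidates, deadline):
    """Choose the node whose predicted exec time + latency best fits the deadline.

    Returns the index of the cheapest feasible candidate, or None if no
    candidate is predicted to meet the deadline.
    """
    totals = exec_model.predict(candidates) + lat_model.predict(candidates)
    feasible = np.where(totals <= deadline)[0]
    return int(feasible[np.argmin(totals[feasible])]) if feasible.size else None

nodes = rng.random((5, 4))  # five candidate edge/cloud nodes
print("chosen node index:", pick_node(nodes, deadline=2.5))

Summing the two predictions and filtering by the deadline reflects the abstract's stated goals: meet deadlines first, then prefer the placement that uses the least predicted time overall.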
RJR Experience and Expertise
Researcher
Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.
Educator
Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.
Administrator
Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the Human Genome Project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.
Technologist
Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.
Publisher
While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.
Speaker
Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.
Facilitator
Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.
Designer
Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.
RJR Picks from Around the Web (updated 11 MAY 2018)
Old Science
Weird Science
Treating Disease with Fecal Transplantation
Fossils of miniature humans (hobbits) discovered in Indonesia
Paleontology
Dinosaur tail, complete with feathers, found preserved in amber.
Astronomy
Mysterious fast radio burst (FRB) detected in the distant universe.
Big Data & Informatics
Big Data: Buzzword or Big Deal?
Hacking the genome: Identifying anonymized human subjects using publicly available data.