ESP: PubMed Auto Bibliography (created 08 Sep 2024 at 01:40)

Cloud Computing
Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud pricing models. The possibility of unexpected operating expenses is especially problematic for a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
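The overrun risk described above is easy to make concrete. The sketch below estimates a monthly pay-as-you-go bill from usage and unit rates; all rates and usage figures are hypothetical placeholders, not any provider's actual pricing.

```python
# Illustrative only: hypothetical unit rates, not any provider's pricing.
def monthly_cost(vm_hours, rate_per_hour, storage_gb, rate_per_gb_month,
                 egress_gb, rate_per_gb_egress):
    """Estimate a monthly pay-as-you-go bill from usage and unit rates."""
    return (vm_hours * rate_per_hour
            + storage_gb * rate_per_gb_month
            + egress_gb * rate_per_gb_egress)

# A quiet month versus a burst month (e.g., one large analysis run).
baseline = monthly_cost(vm_hours=200, rate_per_hour=0.10,
                        storage_gb=500, rate_per_gb_month=0.02,
                        egress_gb=50, rate_per_gb_egress=0.09)
burst = monthly_cost(vm_hours=5000, rate_per_hour=0.10,
                     storage_gb=500, rate_per_gb_month=0.02,
                     egress_gb=800, rate_per_gb_egress=0.09)
print(f"baseline ~= ${baseline:.2f}/month, burst ~= ${burst:.2f}/month")
```

Even at these made-up rates, one burst month (about $582 here) costs more than a year of baseline usage (about $34.50 per month), which is exactly the kind of overrun a grant budget may not absorb.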
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
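For readers who want to reproduce or extend this bibliography, the sketch below runs the same query against NCBI's public E-utilities search endpoint and returns matching PMIDs. It assumes only the third-party requests package; the endpoint and parameters follow the documented esearch interface.

```python
# Sketch: run the bibliography query against NCBI E-utilities (esearch).
import requests

QUERY = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
         'NOT pmcbook NOT ispreviousversion')

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": QUERY, "retmode": "json", "retmax": 20},
    timeout=30,
)
resp.raise_for_status()
pmids = resp.json()["esearchresult"]["idlist"]
print(pmids)  # the most recent PMIDs matching the query
```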
Citations: The Papers (from PubMed®)
RevDate: 2024-09-05
CmpDate: 2024-09-06
Development of an online authentic radiology viewing and reporting platform to test the skills of radiology trainees in Low- and Middle-Income Countries.
BMC medical education, 24(1):969.
BACKGROUND: Diagnostic radiology residents in low- and middle-income countries (LMICs) may have to provide significant contributions to the clinical workload before completing their residency training. Because of the time constraints inherent in the delivery of acute care, some of the most clinically impactful diagnostic radiology errors arise from the use of Computed Tomography (CT) in the management of acutely ill patients. As a result, it is paramount to ensure that radiology trainees reach adequate skill levels before assuming independent on-call responsibilities. We partnered with the radiology residency program at the Aga Khan University Hospital, Nairobi (AKUHN), Kenya, to evaluate a novel cloud-based testing method that provides an authentic radiology viewing and interpretation environment. It is based on Lifetrack, a unique Google Chrome-based Picture Archiving and Communication System that enables a complete viewing environment for any scan and provides a novel report generation tool based on Active Templates, a patented structured reporting method. We applied it to evaluate the skills of AKUHN trainees on entire CT scans representing the spectrum of acute non-trauma abdominal pathology encountered in a typical on-call setting. We aimed to demonstrate the feasibility of remotely testing the authentic practice of radiology and to show that a Lifetrack-based testing approach can yield important observations about the radiology skills of an individual practitioner or a cohort of trainees.
METHODS: A total of 13 anonymized trainees with experience ranging from 12 months to over 4 years took part in the study. Individually accessing the Lifetrack tool, they were tested on 37 abdominal CT scans (including one normal scan) over six 2-hour sessions on consecutive days. All cases carried the same clinical history of acute abdominal pain. During each session the trainees accessed the corresponding Lifetrack test set using clinical workstations, reviewed the CT scans, and formulated an opinion on the acute diagnosis, any secondary pathology, and incidental findings on the scan. Their scan interpretations were composed using the Lifetrack report generation system based on Active Templates, in which segments of text can be selected to assemble a detailed report. All reports generated by the trainees were scored on four interpretive components: (a) acute diagnosis, (b) unrelated secondary diagnosis, (c) number of missed incidental findings, and (d) number of overcalls. A 3-score aggregate was defined from the first three interpretive elements. A cumulative score modified the 3-score aggregate for the negative effect of interpretive overcalls.
RESULTS: A total of 436 scan interpretations and scores were available from 13 trainees tested on 37 cases. The acute diagnosis score (436 scores) ranged from 0 to 1, with a mean of 0.68 ± 0.36 and a median of 0.78 (IQR: 0.5-1). An unrelated secondary diagnosis was present in 11 cases, yielding 130 secondary diagnosis scores; these ranged from 0 to 1, with a mean of 0.48 ± 0.46 and a median of 0.5 (IQR: 0-1). There were 32 cases with incidental findings, yielding 390 incidental findings scores. The number of missed incidental findings ranged from 0 to 5, with a median of 1 (IQR: 1-2). The incidental findings score ranged from 0 to 1, with a mean of 0.4 ± 0.38 and a median of 0.33 (IQR: 0-0.66). The number of overcalls ranged from 0 to 3, with a median of 0 (IQR: 0-1) and a mean of 0.36 ± 0.63. The 3-score aggregate ranged from 0 to 100, with a mean of 65.5 ± 32.5 and a median of 77.3 (IQR: 45.0-92.5). The cumulative score ranged from -30 to 100, with a mean of 61.9 ± 35.5 and a median of 71.4 (IQR: 37.4-92.0). The mean acute diagnosis scores (± SD) by training period were 0.62 ± 0.03, 0.80 ± 0.05, 0.71 ± 0.05, 0.58 ± 0.07, and 0.66 ± 0.05 for trainees with ≤ 12 months, 12-24 months, 24-36 months, 36-48 months, and > 48 months of training, respectively. The mean acute diagnosis score of the 12-24 month group was the only one significantly greater than that of the ≤ 12 month group (ANOVA with Tukey testing, p = 0.0002). We found a similar trend in the distributions of the 3-score aggregates and cumulative scores. There were no significant associations when the training period was dichotomized at 2 years. Examining the distribution of the 3-score aggregate against the number of overcalls by trainee, we found that the 3-score aggregate was inversely related to the number of overcalls. Heatmaps and raincloud plots provided an illustrative means of visualizing the relative performance of trainees across cases.
CONCLUSION: We demonstrated the feasibility of remotely testing the authentic practice of radiology and showed that important observations can be made from our Lifetrack-based testing approach regarding the radiology skills of an individual or a cohort. Targeted teaching can be implemented for the observed weaknesses, and retesting could reveal its impact. This methodology can be customized to different LMIC environments and expanded to board certification examinations.
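The paper does not publish its exact scoring formulas, but the description above (a 3-score aggregate built from the first three interpretive components, penalized for overcalls into a cumulative score that can reach -30) suggests something like the sketch below. The equal weighting, 0-100 scaling, and 10-point-per-overcall penalty are illustrative assumptions only, not the authors' method.

```python
# Hypothetical reconstruction of the scoring; weights and penalty assumed.
def three_score_aggregate(acute, secondary, incidental):
    """Average the applicable interpretive scores (each 0-1), scaled to 0-100.
    Components that do not apply to a case are passed as None and skipped."""
    parts = [s for s in (acute, secondary, incidental) if s is not None]
    return 100 * sum(parts) / len(parts)

def cumulative_score(aggregate, overcalls, penalty_per_overcall=10):
    """Penalize the aggregate for each overcall; the result may go negative,
    consistent with the reported minimum cumulative score of -30."""
    return aggregate - penalty_per_overcall * overcalls

agg = three_score_aggregate(acute=0.78, secondary=0.5, incidental=0.33)
print(round(agg, 1), cumulative_score(agg, overcalls=2))
```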
Additional Links: PMID-39237930
@article {pmid39237930,
year = {2024},
author = {Vesselle, H and Chiramal, JA and Hawes, SE and Schulze, E and Nguyen, T and Ndumia, R and Vinayak, S},
title = {Development of an online authentic radiology viewing and reporting platform to test the skills of radiology trainees in Low- and Middle-Income Countries.},
journal = {BMC medical education},
volume = {24},
number = {1},
pages = {969},
pmid = {39237930},
issn = {1472-6920},
mesh = {Humans ; *Radiology/education ; *Developing Countries ; Kenya ; *Internship and Residency ; *Clinical Competence ; *Radiology Information Systems ; Tomography, X-Ray Computed ; },
}
MeSH Terms:
Humans
*Radiology/education
*Developing Countries
Kenya
*Internship and Residency
*Clinical Competence
*Radiology Information Systems
Tomography, X-Ray Computed
RevDate: 2024-09-05
CmpDate: 2024-09-05
Cloud Readiness of German Hospitals: Development and Application of an Evaluation Scale.
Studies in health technology and informatics, 317:11-19.
BACKGROUND: In the context of the telematics infrastructure, new data usage regulations, and the growing potential of artificial intelligence, cloud computing plays a key role in driving digitalization in the German hospital sector.
METHODS: Against this background, the study aims to develop and validate a scale for assessing the cloud readiness of German hospitals. It uses the TPOM (Technology, People, Organization, Macro-Environment) framework to create a scoring system. A survey involving 110 Chief Information Officers (CIOs) from German hospitals was conducted, followed by an exploratory factor analysis and reliability testing to refine the items, resulting in a final set of 30 items.
RESULTS: The analysis confirmed the statistical robustness of the scale and identified key factors contributing to cloud readiness. These include IT security in the "technology" dimension, collaborative research and acceptance of the need to make high-quality data available in the "people" dimension, scalability of IT resources in the "organization" dimension, and legal aspects in the "macro-environment" dimension. The macro-environment dimension emerged as particularly stable, highlighting the critical role of regulatory compliance in the healthcare sector.
CONCLUSION: The findings suggest a certain degree of cloud readiness among German hospitals, with potential for improvement in all four dimensions. Systemically, legal requirements and a challenging political environment are top concerns for CIOs, impacting their cloud readiness.
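As an illustration of how such a scale might be operationalized, the sketch below maps survey items onto the four TPOM dimensions and averages them into normalized readiness scores. The item names, the item-to-dimension mapping, and the 5-point Likert scale are invented for illustration; the actual 30-item instrument is defined in the paper.

```python
# Sketch: aggregate Likert items (1-5) into normalized TPOM dimension scores.
# Item names and mapping are invented; the real instrument has 30 items.
import statistics

TPOM_ITEMS = {
    "technology": ["it_security", "interoperability", "infrastructure"],
    "people": ["collaborative_research", "data_sharing_acceptance"],
    "organization": ["it_resource_scalability", "process_maturity"],
    "macro_environment": ["legal_compliance", "political_environment"],
}

def readiness_scores(responses):
    """responses: item name -> Likert rating (1-5). Returns each TPOM
    dimension's mean rating rescaled to 0 (not ready) .. 1 (fully ready)."""
    return {dim: (statistics.mean(responses[item] for item in items) - 1) / 4
            for dim, items in TPOM_ITEMS.items()}

answers = {"it_security": 4, "interoperability": 3, "infrastructure": 4,
           "collaborative_research": 2, "data_sharing_acceptance": 3,
           "it_resource_scalability": 3, "process_maturity": 2,
           "legal_compliance": 5, "political_environment": 2}
print(readiness_scores(answers))
```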
Additional Links: PMID-39234702
@article {pmid39234702,
year = {2024},
author = {Holtz, A and Liebe, JD},
title = {Cloud Readiness of German Hospitals: Development and Application of an Evaluation Scale.},
journal = {Studies in health technology and informatics},
volume = {317},
number = {},
pages = {11-19},
doi = {10.3233/SHTI240832},
pmid = {39234702},
issn = {1879-8365},
mesh = {Germany ; *Cloud Computing ; Hospitals ; Computer Security ; Humans ; Surveys and Questionnaires ; },
}
MeSH Terms:
Germany
*Cloud Computing
Hospitals
Computer Security
Humans
Surveys and Questionnaires
RevDate: 2024-09-04
Fog-assisted de-duplicated data exchange in distributed edge computing networks.
Scientific reports, 14(1):20595.
The Internet of Things (IoT) generates substantial data through sensors for diverse applications, such as healthcare services. This article addresses the challenge of efficiently utilizing resources in resource-scarce IoT-enabled sensors to enhance data collection, transmission, and storage. Redundant data transmission from sensors covering overlapping areas incurs additional communication and storage costs. Existing schemes, namely Asymmetric Extremum (AE) and Rapid Asymmetric Maximum (RAM), employ fixed and variable-sized windows during chunking. However, these schemes face issues in selecting the index value that decides the variable window size, which may remain zero or very low, resulting in poor deduplication. This article resolves this issue with the proposed Controlled Cut-point Identification Algorithm (CCIA), designed to restrict the variable-sized window to a certain threshold. The index value deciding the threshold is always larger than half the size of the fixed window. This helps find more duplicates, while an upper-limit offset is also applied to avoid unnecessarily large windows, which would incur extensive computation costs. Extensive simulations were performed by deploying Windows Communication Foundation services in the Azure cloud. The results demonstrate the superiority of CCIA on various metrics, including chunk number, average chunk size, minimum and maximum chunk number, variable chunking size, and probability of failure for cut-point identification. In comparison to its competitors, RAM and AE, CCIA exhibits better performance across key parameters. Specifically, CCIA outperforms them in total number of chunks (6.81%, 14.17%), average number of chunks (4.39%, 18.45%), and minimum chunk size (153%, 190%). These results highlight the effectiveness of CCIA in optimizing data transmission and storage within IoT systems, showcasing its potential for improved resource utilization and reduced operational costs.
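The abstract's description of CCIA translates into a simple bounded cut-point search: the candidate region starts just past half the fixed window and is capped by an upper-limit offset. The sketch below is a deliberately simplified illustration of that idea (using a local-maximum byte as a stand-in for the AE/RAM extremum rules), not the authors' implementation.

```python
# Simplified illustration of a CCIA-style bounded cut-point search.
import os

def ccia_chunks(data: bytes, fixed_window: int = 64, upper_offset: int = 256):
    lower = fixed_window // 2 + 1  # keep the index above half the fixed window
    chunks, start = [], 0
    while start < len(data):
        end = min(start + upper_offset, len(data))  # upper-limit offset cap
        region = data[start + lower:end]
        if not region:  # tail shorter than the minimum chunk size
            chunks.append(data[start:])
            break
        # Cut at the local maximum byte: a stand-in for AE/RAM extremum rules.
        cut = start + lower + max(range(len(region)), key=region.__getitem__)
        chunks.append(data[start:cut + 1])
        start = cut + 1
    return chunks

blob = os.urandom(10_000)
sizes = [len(c) for c in ccia_chunks(blob)]
print(min(sizes), max(sizes))  # chunks stay within the two bounds (bar the tail)
```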
Additional Links: PMID-39232132
@article {pmid39232132,
year = {2024},
author = {Said, G and Ghani, A and Ullah, A and Alzahrani, A and Azeem, M and Ahmad, R and Kim, DH},
title = {Fog-assisted de-duplicated data exchange in distributed edge computing networks.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {20595},
pmid = {39232132},
issn = {2045-2322},
}
RevDate: 2024-09-04
CmpDate: 2024-09-04
A unified web cloud computing platform MiMedSurv for microbiome causal mediation analysis with survival responses.
Scientific reports, 14(1):20650.
In human microbiome studies, mediation analysis has recently been spotlighted as a practical and powerful analytic tool for surveying the causal roles of the microbiome as a mediator that explains observed relationships between a medical treatment/environmental exposure and a human disease. We also note that, in clinical research, investigators often trace disease progression sequentially in time; as such, time-to-event (e.g., time-to-disease, time-to-cure) responses, known as survival responses, are prevalent as surrogate variables for human health or disease. In this paper, we introduce a web cloud computing platform, named microbiome mediation analysis with survival responses (MiMedSurv), for comprehensive microbiome mediation analysis with survival responses in a user-friendly web environment. MiMedSurv is an extension of our prior web cloud computing platform, named microbiome mediation analysis (MiMed), to survival responses. Its two distinguishing features are as follows. First, MiMedSurv conducts baseline exploratory non-mediational survival analysis, not involving the microbiome, to survey the disparity in survival response between medical treatments/environmental exposures. Then, MiMedSurv identifies the mediating roles of the microbiome in various respects: (i) as a microbial ecosystem, using ecological indices (e.g., alpha and beta diversity indices), and (ii) as individual microbial taxa at various hierarchies (e.g., phyla, classes, orders, families, genera, species). To illustrate its use, we survey the mediating roles of the gut microbiome between antibiotic treatment and time-to-type 1 diabetes. MiMedSurv is freely available on our web server (http://mimedsurv.micloud.kr).
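To give a concrete sense of the kind of analysis MiMedSurv automates, the sketch below fits Cox proportional-hazards models relating an exposure and a candidate microbiome mediator to a survival response, using the lifelines library. The toy data, column names, and the informal attenuation check are illustrative assumptions; they are not MiMedSurv's actual methods, which are documented by the authors.

```python
# Sketch: Cox models for a mediation-style check with lifelines.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_to_event": [12, 30, 45, 9, 60, 22, 18, 50],  # e.g., months to T1D
    "event": [1, 0, 1, 0, 0, 1, 1, 0],                 # 1 = event, 0 = censored
    "antibiotic": [1, 0, 1, 1, 0, 0, 1, 0],            # exposure
    "shannon_div": [2.1, 3.4, 1.8, 2.0, 3.6, 2.9, 2.2, 3.1],  # candidate mediator
})

# Exposure-only model, then exposure plus mediator: attenuation of the
# exposure coefficient is the classic informal signal of mediation.
total = CoxPHFitter().fit(df[["time_to_event", "event", "antibiotic"]],
                          duration_col="time_to_event", event_col="event")
adjusted = CoxPHFitter().fit(df, duration_col="time_to_event", event_col="event")
print(total.params_["antibiotic"], adjusted.params_["antibiotic"])
```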
Additional Links: PMID-39232070
@article {pmid39232070,
year = {2024},
author = {Jang, H and Koh, H},
title = {A unified web cloud computing platform MiMedSurv for microbiome causal mediation analysis with survival responses.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {20650},
pmid = {39232070},
issn = {2045-2322},
support = {2021R1C1C1013861//National Research Foundation of Korea/ ; },
mesh = {Humans ; *Microbiota ; *Cloud Computing ; *Internet ; Software ; Survival Analysis ; },
}
MeSH Terms:
Humans
*Microbiota
*Cloud Computing
*Internet
Software
Survival Analysis
RevDate: 2024-09-03
Water-glycan interactions drive the SARS-CoV-2 spike dynamics: insights into glycan-gate control and camouflage mechanisms.
Chemical science [Epub ahead of print].
To develop therapeutic strategies against COVID-19, we introduce a high-resolution all-atom polarizable model capturing many-body effects of the protein, glycan, solvent, and membrane components in the SARS-CoV-2 spike protein's open and closed states. Employing μs-long molecular dynamics simulations powered by high-performance cloud computing and unsupervised density-driven adaptive sampling, we investigated the differences in the bulk-solvent-glycan and protein-solvent-glycan interfaces between these states. We unraveled a sophisticated solvent-glycan polarization interaction network involving the N165/N343 glycan-gate patterns that provide structural support for the open state, and identified key water molecules that could potentially be targeted to destabilize this configuration. In the closed state, the reduced solvent polarization diminishes the overall N165/N343 dipoles, yet internal interactions and a reorganized sugar coat stabilize this state. Despite these variations, our glycan-solvent accessibility analysis reveals the glycan shield's capability to conserve constant interactions with the solvent, effectively camouflaging the virus from immune detection in both states. The insights presented advance our comprehension of viral pathogenesis at the atomic level, offering potential avenues to combat COVID-19.
Additional Links: PMID-39220162
@article {pmid39220162,
year = {2024},
author = {Blazhynska, M and Lagardère, L and Liu, C and Adjoua, O and Ren, P and Piquemal, JP},
title = {Water-glycan interactions drive the SARS-CoV-2 spike dynamics: insights into glycan-gate control and camouflage mechanisms.},
journal = {Chemical science},
volume = {},
number = {},
pages = {},
pmid = {39220162},
issn = {2041-6520},
}
RevDate: 2024-09-01
Improving rapid flood impact assessment: An enhanced multi-sensor approach including a new flood mapping method based on Sentinel-2 data.
Journal of environmental management, 369:122326 pii:S0301-4797(24)02312-0 [Epub ahead of print].
Rapid flood impact assessment methods need complete and accurate flood maps to provide reliable information for disaster risk management, in particular for emergency response and for recovery and reconstruction plans. With the aim of improving the rapid assessment of flood impacts, this work presents a new impact assessment method characterized by an enhanced satellite multi-sensor approach for flood mapping, which improves the characterization of the hazard. This includes a novel flood mapping method based on the new multi-temporal Modified Normalized Difference Water Index (MNDWI), which uses multi-temporal statistics computed on time series of Sentinel-2 multi-spectral satellite images. The multi-temporal aspect of the MNDWI improves the characterization of land cover over time and enhances temporarily flooded areas, which can be extracted through a thresholding technique, allowing the delineation of more precise and complete flood maps. The methodology, if implemented in cloud-based environments such as Google Earth Engine (GEE), is computationally light and robust, allowing the derivation of flood maps in a matter of minutes, also for large areas. The flood mapping and impact assessment method was applied to the seasonal flood that occurred in South Sudan in 2020, using Sentinel-1, Sentinel-2, and PlanetScope satellite imagery. Flood impacts were assessed considering damages to buildings, roads, and cropland. The multi-sensor approach estimated an impact of 57.4 million USD (under a middle-bound scenario), higher than the estimates obtained using Sentinel-1 data only or Sentinel-2 data only (respectively 24% and 78% of the multi-sensor estimate). This work highlights the effectiveness and importance of considering multi-source satellite data for flood mapping in the context of disaster risk management, to better inform disaster response, recovery, and reconstruction plans.
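A multi-temporal MNDWI workflow of this general shape is straightforward to express in the GEE Python API. The sketch below contrasts an event-period MNDWI composite with a multi-year baseline median and thresholds the difference; the area of interest, date ranges, baseline statistic, and 0.2 threshold are illustrative choices, not the paper's exact procedure.

```python
# Sketch: multi-temporal MNDWI flood mapping in the GEE Python API.
import ee

ee.Initialize()  # assumes an authenticated Earth Engine account

aoi = ee.Geometry.Rectangle([30.0, 6.0, 32.0, 8.0])  # placeholder AOI

def mndwi(img):
    # MNDWI = (Green - SWIR1) / (Green + SWIR1): Sentinel-2 bands B3 and B11.
    return img.normalizedDifference(["B3", "B11"]).rename("MNDWI")

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(aoi)
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))

baseline = s2.filterDate("2017-01-01", "2020-01-01").map(mndwi).median()
event = s2.filterDate("2020-08-01", "2020-10-31").map(mndwi).median()

# Pixels much wetter than their long-term baseline are flagged as flooded.
flood = event.subtract(baseline).gt(0.2).selfMask().rename("flood")
```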
Additional Links: PMID-39217900
@article {pmid39217900,
year = {2024},
author = {Cian, F and Delgado Blasco, JM and Ivanescu, C},
title = {Improving rapid flood impact assessment: An enhanced multi-sensor approach including a new flood mapping method based on Sentinel-2 data.},
journal = {Journal of environmental management},
volume = {369},
number = {},
pages = {122326},
doi = {10.1016/j.jenvman.2024.122326},
pmid = {39217900},
issn = {1095-8630},
}
RevDate: 2024-08-29
Beehive Smart Detector Device for the Detection of Critical Conditions That Utilize Edge Device Computations and Deep Learning Inferences.
Sensors (Basel, Switzerland), 24(16):.
This paper presents a new edge detection process implemented in an embedded IoT device called the Bee Smart Detection node to detect catastrophic apiary events. Such events include swarming, queen loss, and the detection of Colony Collapse Disorder (CCD) conditions. Two deep learning sub-processes are used for this purpose. The first uses a fuzzy multi-layered neural network of variable depth called fuzzy-stranded-NN to detect CCD conditions based on temperature and humidity measurements inside the beehive. The second utilizes a deep learning CNN model to detect swarming and queen loss cases based on sound recordings. The proposed processes have been implemented into autonomous Bee Smart Detection (BeeSD) IoT devices that transmit their measurements and detection results to the cloud over Wi-Fi. The BeeSD devices have been tested for ease of use, autonomous operation, deep learning model inference accuracy, and inference execution speed. The author presents the experimental results of the fuzzy-stranded-NN model for detecting critical conditions and of the deep learning CNN models for detecting swarming and queen loss. From the presented experimental results, the stranded-NN achieved accuracy of up to 95%, while the ResNet-50 model achieved accuracy of up to 99% for detecting swarming or queen loss events. The ResNet-18 model is also the fastest-inference replacement for the ResNet-50 model, achieving accuracy of up to 93%. Finally, cross-comparison of the deep learning models with machine learning ones shows that the deep learning models provide at least 3-5% better accuracy.
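The sound-classification half of such a pipeline can be sketched with standard PyTorch components: mel-spectrograms of hive audio fed to a ResNet-18 with a single-channel stem. The class labels, sample rate, and spectrogram settings below are assumptions for illustration, not the author's trained models.

```python
# Sketch: mel-spectrogram + ResNet-18 classifier for hive-audio events.
import torch
import torchaudio
from torchvision.models import resnet18

CLASSES = ["normal", "swarming", "queen_loss"]  # hypothetical labels

model = resnet18(num_classes=len(CLASSES))
# Spectrograms are single-channel, so adapt the ImageNet three-channel stem.
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3,
                              bias=False)

to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

waveform = torch.randn(1, 16000 * 5)  # stand-in for 5 s of hive audio
spec = to_mel(waveform).log1p().unsqueeze(0)  # (batch, channel, mels, frames)
logits = model(spec)
print(CLASSES[logits.argmax(dim=1).item()])
```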
Additional Links: PMID-39205138
@article {pmid39205138,
year = {2024},
author = {Kontogiannis, S},
title = {Beehive Smart Detector Device for the Detection of Critical Conditions That Utilize Edge Device Computations and Deep Learning Inferences.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
pmid = {39205138},
issn = {1424-8220},
}
RevDate: 2024-08-29
Decentralized System Synchronization among Collaborative Robots via 5G Technology.
Sensors (Basel, Switzerland), 24(16): pii:s24165382.
In this article, we propose a distributed synchronization solution to achieve decentralized coordination in a system of collaborative robots. This is done by leveraging cloud-based computing and 5G technology to exchange causal ordering messages between the robots, eliminating the need for centralized control entities or programmable logic controllers in the system. The proposed solution is described, mathematically formulated, implemented in software, and validated over realistic network conditions. Further, the performance of the decentralized solution via 5G technology is compared to that achieved with traditional coordinated/uncoordinated cabled control systems. The results indicate that the proposed decentralized solution leveraging cloud-based 5G wireless is scalable to systems of up to 10 collaborative robots with comparable efficiency to that from standard cabled systems. The proposed solution has direct application in the control of producer-consumer and automated assembly line robotic applications.
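Causal ordering of messages is classically implemented with vector clocks, as in the minimal sketch below; the paper's actual protocol and message format are not specified in the abstract, so this shows only the textbook mechanism the description points to.

```python
# Sketch: causal message ordering with vector clocks.
class Robot:
    def __init__(self, rid: int, n_robots: int):
        self.rid = rid
        self.clock = [0] * n_robots  # one counter per robot in the cell

    def send(self):
        self.clock[self.rid] += 1
        return (self.rid, list(self.clock))  # message carries the clock

    def deliverable(self, msg):
        """Causal delivery rule: the message is the sender's next expected
        event, and the receiver has seen everything the sender had seen."""
        sender, mclock = msg
        return (mclock[sender] == self.clock[sender] + 1
                and all(mclock[k] <= self.clock[k]
                        for k in range(len(mclock)) if k != sender))

    def receive(self, msg):
        _, mclock = msg  # in practice, buffered until deliverable() holds
        self.clock = [max(a, b) for a, b in zip(self.clock, mclock)]

a, b = Robot(0, 2), Robot(1, 2)
msg = a.send()
assert b.deliverable(msg)
b.receive(msg)
print(b.clock)  # [1, 0]
```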
Additional Links: PMID-39205076
@article {pmid39205076,
year = {2024},
author = {Celik, AE and Rodriguez, I and Ayestaran, RG and Yavuz, SC},
title = {Decentralized System Synchronization among Collaborative Robots via 5G Technology.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165382},
pmid = {39205076},
issn = {1424-8220},
support = {RYC-2020-030676-I//Ministerio de Ciencia, Innovación y Universidades/ ; },
}
RevDate: 2024-08-29
A Survey on IoT Application Architectures.
Sensors (Basel, Switzerland), 24(16): pii:s24165320.
The proliferation of the IoT has led to the development of diverse application architectures to optimize IoT systems' deployment, operation, and maintenance. This survey provides a comprehensive overview of the existing IoT application architectures, highlighting their key features, strengths, and limitations. The architectures are categorized based on their deployment models, such as cloud, edge, and fog computing approaches, each offering distinct advantages regarding scalability, latency, and resource efficiency. Cloud architectures leverage centralized data processing and storage capabilities to support large-scale IoT applications but often suffer from high latency and bandwidth constraints. Edge architectures mitigate these issues by bringing computation closer to the data source, enhancing real-time processing, and reducing network congestion. Fog architectures combine the strengths of both cloud and edge paradigms, offering a balanced solution for complex IoT environments. This survey also examines emerging trends and technologies in IoT application management, such as the solutions provided by the major IoT service providers like Intel, AWS, Microsoft Azure, and GCP. Through this study, the survey identifies latency, privacy, and deployment difficulties as key areas for future research. It highlights the need to advance IoT Edge architectures to reduce network traffic, improve data privacy, and enhance interoperability by developing multi-application and multi-protocol edge gateways for efficient IoT application management.
Additional Links: PMID-39205014
@article {pmid39205014,
year = {2024},
author = {Dauda, A and Flauzac, O and Nolot, F},
title = {A Survey on IoT Application Architectures.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165320},
pmid = {39205014},
issn = {1424-8220},
support = {1711/20//Petroleum Technology Development Fund (PTDF) Nigeria/ ; },
}
RevDate: 2024-08-29
An End-to-End Deep Learning Framework for Fault Detection in Marine Machinery.
Sensors (Basel, Switzerland), 24(16): pii:s24165310.
The Industrial Internet of Things has enabled the integration and analysis of vast volumes of data across various industries, with the maritime sector being no exception. Advances in cloud computing and deep learning (DL) are continuously reshaping the industry, particularly in optimizing maritime operations such as Predictive Maintenance (PdM). In this study, we propose a novel DL-based framework focusing on the fault detection task of PdM in marine operations, leveraging time-series data from sensors installed on shipboard machinery. The framework is designed as a scalable and cost-efficient software solution, encompassing all stages from data collection and pre-processing at the edge to the deployment and lifecycle management of DL models. The proposed DL architecture utilizes Graph Attention Networks (GATs) to extract spatio-temporal information from the time-series data and provides explainable predictions through a feature-wise scoring mechanism. Additionally, a custom evaluation metric with real-world applicability is employed, prioritizing both prediction accuracy and the timeliness of fault identification. To demonstrate the effectiveness of our framework, we conduct experiments on three types of open-source datasets relevant to PdM: electrical data, bearing datasets, and data from water circulation experiments.
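A GAT-based encoder over a sensor graph, of the general kind described above, can be sketched with PyTorch Geometric. The graph construction, feature sizes, and per-sensor scoring head below are illustrative assumptions rather than the authors' architecture.

```python
# Sketch: GAT encoder over a sensor graph with per-sensor anomaly scores.
import torch
from torch_geometric.nn import GATConv

class SensorGAT(torch.nn.Module):
    def __init__(self, in_dim=32, hidden=64, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)   # concatenates heads
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        self.score = torch.nn.Linear(hidden, 1)            # per-node score

    def forward(self, x, edge_index):
        h = self.gat1(x, edge_index).relu()
        h = self.gat2(h, edge_index).relu()
        return self.score(h).squeeze(-1)

x = torch.randn(10, 32)  # 10 sensors, 32 features (e.g., a time window)
edge_index = torch.randint(0, 10, (2, 40))  # made-up sensor-sensor edges
scores = SensorGAT()(x, edge_index)
print(scores.shape)  # torch.Size([10])
```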
Additional Links: PMID-39205003
@article {pmid39205003,
year = {2024},
author = {Rigas, S and Tzouveli, P and Kollias, S},
title = {An End-to-End Deep Learning Framework for Fault Detection in Marine Machinery.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165310},
pmid = {39205003},
issn = {1424-8220},
support = {ATHINAIKI RIVIERA - ATTP4-0325990//Greece and European Union: Attica 2014-2020/ ; },
}
RevDate: 2024-08-29
Presenting the COGNIFOG Framework: Architecture, Building Blocks and Road toward Cognitive Connectivity.
Sensors (Basel, Switzerland), 24(16): pii:s24165283.
In the era of ubiquitous computing, the challenges imposed by the increasing demand for real-time data processing, security, and energy efficiency call for innovative solutions. The emergence of fog computing has provided a promising paradigm to address these challenges by bringing computational resources closer to data sources. Despite its advantages, the characteristics of fog computing pose challenges in heterogeneous environments in terms of resource allocation and management, provisioning, security, and connectivity, among others. This paper introduces COGNIFOG, a novel cognitive fog framework currently under development, designed to leverage intelligent, decentralized decision-making processes, machine learning algorithms, and distributed computing principles to enable autonomous operation, adaptability, and scalability across the IoT-edge-cloud continuum. By integrating cognitive capabilities, COGNIFOG is expected to increase the efficiency and reliability of next-generation computing environments, potentially providing a seamless bridge between the physical and digital worlds. Preliminary experimental results with a limited set of connectivity-related COGNIFOG building blocks show promising improvements in network resource utilization in a real-world-based IoT scenario. Overall, this work paves the way for further development of the framework, aimed at making it more intelligent, resilient, and aligned with the ever-evolving demands of next-generation computing environments.
Additional Links: PMID-39204979
@article {pmid39204979,
year = {2024},
author = {Adame, T and Amri, E and Antonopoulos, G and Azaiez, S and Berne, A and Camargo, JS and Kakoulidis, H and Kleisarchaki, S and Llamedo, A and Prasinos, M and Psara, K and Shumaiev, K},
title = {Presenting the COGNIFOG Framework: Architecture, Building Blocks and Road toward Cognitive Connectivity.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165283},
pmid = {39204979},
issn = {1424-8220},
support = {101092968//European Union/ ; },
}
RevDate: 2024-08-29
Integral-Valued Pythagorean Fuzzy-Set-Based Dyna Q+ Framework for Task Scheduling in Cloud Computing.
Sensors (Basel, Switzerland), 24(16): pii:s24165272.
Task scheduling is a critical challenge in cloud computing systems, greatly impacting their performance. Task scheduling is a nondeterministic polynomial time hard (NP-hard) problem that complicates the search for nearly optimal solutions. Five major uncertainty parameters, i.e., security, traffic, workload, availability, and price, influence task scheduling decisions. The primary rationale for selecting these uncertainty parameters lies in the challenge of accurately measuring their values, as empirical estimations often diverge from the actual values. The integral-valued Pythagorean fuzzy set (IVPFS) is a promising mathematical framework for dealing with parametric uncertainties. The Dyna Q+ algorithm is the updated form of the Dyna Q agent, designed specifically for dynamic computing environments by providing bonus rewards to non-exploited states. In this paper, the Dyna Q+ agent is enriched with the IVPFS mathematical framework to make intelligent task scheduling decisions. The performance of the proposed IVPFS Dyna Q+ task scheduler is tested using the CloudSim 3.3 simulator. The execution time is reduced by 90%, the makespan time is also reduced by 90%, the operation cost is below 50%, and the resource utilization rate is improved by 95%, with all of these parameters meeting the desired standards or expectations. The results are further validated using an expected value analysis methodology that confirms the good performance of the task scheduler. A better balance between exploration and exploitation through rigorous action-based learning is achieved by the Dyna Q+ agent.
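For reference, the Dyna Q+ mechanism itself (Sutton and Barto's tabular algorithm, with an exploration bonus for long-untried state-action pairs) can be sketched compactly. The IVPFS uncertainty modeling that the paper layers on top is not reproduced here, and the planning loop below samples only previously observed transitions, a common simplification.

```python
# Sketch: tabular Dyna-Q+ with an exploration bonus kappa * sqrt(tau).
import math
import random
from collections import defaultdict

ALPHA, GAMMA, KAPPA, EPS = 0.1, 0.95, 1e-3, 0.1
ACTIONS = range(4)
Q = defaultdict(float)          # (state, action) -> value
model = {}                      # (state, action) -> (reward, next_state)
last_tried = defaultdict(int)   # (state, action) -> last timestep tried

def dyna_q_plus_step(env_step, state, t, planning_steps=10):
    a = (random.choice(list(ACTIONS)) if random.random() < EPS
         else max(ACTIONS, key=lambda act: Q[(state, act)]))
    r, s2 = env_step(state, a)  # one real interaction with the environment
    Q[(state, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(state, a)])
    model[(state, a)] = (r, s2)
    last_tried[(state, a)] = t
    for _ in range(planning_steps):  # simulated planning updates
        (s, b), (r_m, s3) = random.choice(list(model.items()))
        bonus = KAPPA * math.sqrt(t - last_tried[(s, b)])  # stale pairs pay out
        Q[(s, b)] += ALPHA * (r_m + bonus
                              + GAMMA * max(Q[(s3, c)] for c in ACTIONS)
                              - Q[(s, b)])
    return s2
```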
Additional Links: PMID-39204967
@article {pmid39204967,
year = {2024},
author = {Krishnamurthy, B and Shiva, SG},
title = {Integral-Valued Pythagorean Fuzzy-Set-Based Dyna Q+ Framework for Task Scheduling in Cloud Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {16},
pages = {},
doi = {10.3390/s24165272},
pmid = {39204967},
issn = {1424-8220},
}
RevDate: 2024-08-27
Learning Implicit Fields for Point Cloud Filtering.
IEEE transactions on visualization and computer graphics, PP: [Epub ahead of print].
Since point clouds acquired by scanners inevitably contain noise, recovering a clean version from a noisy point cloud is essential for further 3D geometry processing applications. Several data-driven approaches have been recently introduced to overcome the drawbacks of traditional filtering algorithms, such as less robust preservation of sharp features and tedious tuning for multiple parameters. Most of these methods achieve filtering by directly regressing the position/displacement of each point, which may blur detailed features and is prone to uneven distribution. In this paper, we propose a novel data-driven method that explores the implicit fields. Our assumption is that the given noisy points implicitly define a surface, and we attempt to obtain a point's movement direction and distance separately based on the predicted signed distance fields (SDFs). Taking a noisy point cloud as input, we first obtain a consistent alignment by incorporating the global points into local patches. We then feed them into an encoder-decoder structure and predict a 7D vector consisting of SDFs. Subsequently, the distance can be obtained directly from the first element in the vector, and the movement direction can be obtained by computing the gradient descent from the last six elements (i.e., six surrounding SDFs). We finally obtain the filtered results by moving each point with its predicted distance along its movement direction. Our method can produce feature-preserving results without requiring explicit normals. Experiments demonstrate that our method visually outperforms state-of-the-art methods and generally produces better quantitative results than position-based methods (both learning and non-learning).
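The direction-from-SDFs step described above amounts to a central-difference gradient over the six surrounding samples. The sketch below assumes the 7D vector is ordered [distance, ±x, ±y, ±z samples] with sample spacing h; both orderings and h are illustrative assumptions, not the paper's specification.

```python
# Sketch: move a point by its predicted distance along the negative gradient
# estimated from six surrounding SDF samples via central differences.
import numpy as np

def filter_point(p, pred, h=1e-2):
    """p: (3,) noisy point. pred: assumed 7-vector [distance, sdf+x, sdf-x,
    sdf+y, sdf-y, sdf+z, sdf-z] as predicted by the network; h is the
    assumed sample spacing."""
    distance = pred[0]
    grad = np.array([pred[1] - pred[2],
                     pred[3] - pred[4],
                     pred[5] - pred[6]]) / (2 * h)
    direction = -grad / (np.linalg.norm(grad) + 1e-12)  # descend to the surface
    return p + distance * direction

p = np.array([0.10, 0.20, 0.30])
pred = np.array([0.05, 0.36, 0.24, 0.31, 0.29, 0.30, 0.30])
print(filter_point(p, pred))  # the point after one filtering move
```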
Additional Links: PMID-39190508
@article {pmid39190508,
year = {2024},
author = {Wang, J and Lu, X and Wang, M and Hou, F and He, Y},
title = {Learning Implicit Fields for Point Cloud Filtering.},
journal = {IEEE transactions on visualization and computer graphics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TVCG.2024.3450699},
pmid = {39190508},
issn = {1941-0506},
abstract = {Since point clouds acquired by scanners inevitably contain noise, recovering a clean version from a noisy point cloud is essential for further 3D geometry processing applications. Several data-driven approaches have recently been introduced to overcome the drawbacks of traditional filtering algorithms, such as less robust preservation of sharp features and tedious tuning of multiple parameters. Most of these methods achieve filtering by directly regressing the position/displacement of each point, which may blur detailed features and is prone to uneven distribution. In this paper, we propose a novel data-driven method that explores implicit fields. Our assumption is that the given noisy points implicitly define a surface, and we attempt to obtain a point's movement direction and distance separately based on the predicted signed distance fields (SDFs). Taking a noisy point cloud as input, we first obtain a consistent alignment by incorporating the global points into local patches. We then feed them into an encoder-decoder structure and predict a 7D vector consisting of SDFs. Subsequently, the distance can be obtained directly from the first element in the vector, and the movement direction can be obtained by computing the gradient from the last six elements (i.e., six surrounding SDFs). We finally obtain the filtered results by moving each point with its predicted distance along its movement direction. Our method can produce feature-preserving results without requiring explicit normals. Experiments demonstrate that our method visually outperforms state-of-the-art methods and generally produces better quantitative results than position-based methods (both learning and non-learning).},
}
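As a rough illustration of the movement step the abstract describes, the sketch below assumes a trained network has already produced, for each point, a 7D vector: a move distance plus SDF values at six axis-aligned probes around the point. The probe spacing h and the sign conventions are assumptions; the authors' network and training procedure are not reproduced here.

import numpy as np

# Approximate the SDF gradient by central differences over the six surrounding
# SDF values, then move each point its predicted distance toward the surface.
def filter_points(points, pred, h=0.01):
    """points: (N, 3) noisy cloud; pred: (N, 7) hypothetical network output
    laid out as [distance, sdf_x+, sdf_x-, sdf_y+, sdf_y-, sdf_z+, sdf_z-]."""
    d = pred[:, 0]
    grads = np.stack([
        (pred[:, 1] - pred[:, 2]) / (2 * h),   # dSDF/dx
        (pred[:, 3] - pred[:, 4]) / (2 * h),   # dSDF/dy
        (pred[:, 5] - pred[:, 6]) / (2 * h),   # dSDF/dz
    ], axis=1)
    dirs = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    return points - dirs * d[:, None]          # step against the gradient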
RevDate: 2024-08-27
Exploring Factors Influencing Pregnant Women's Perceptions and Attitudes Towards Midwifery Care in Romania: Implications for Maternal Health Education Strategies.
Nursing reports (Pavia, Italy), 14(3):1807-1818 pii:nursrep14030134.
BACKGROUND: Midwives are strong advocates for vaginal births. However, their visibility and accessibility are poorly perceived by women in Romania. Consequently, the women's options are limited to a single direction when pregnancy occurs, involving the family doctor, the obstetrician, and often an interventional technical approach at the time of birth. The aim of this research is to identify specific variables that affect the perceptions and attitudes of pregnant women towards the care provided by midwives. This knowledge could contribute to the development of more effective education and information strategies within maternal health services.
METHODS: A cross-sectional observational analytical survey was conducted in Romania among pregnant women from the general population. Data were collected through a self-administered questionnaire, with informed consent obtained from each participating pregnant woman. The questionnaire was administered online using the cloud-based Google Forms platform and was available on the internet for seven months, from January to July 2023. The questionnaire was distributed through various media channels, both individually and in communication groups, in the form of a link. All questions were mandatory, and the questionnaire could only be submitted after answering all questions.
RESULTS: A total of 1301 individual responses were collected. Analysis of the socio-demographic and obstetrical profile of the pregnant women revealed that approximately half, 689 (52.95%), of the participants were aged between 18 and 29 years, and 1060 (81.47%) were married. Among our group of 1301 pregnant women, 973 (74.78%) had higher education, and 987 (75.86%) had a regular job. A majority of the survey participants, 936 (71.94%), lived in an urban geographic area, while 476 (36.58%) had attended childbirth education courses, and 791 (60.79%) were in the third trimester of pregnancy. A total of 298 (22.9%) respondents did not want to give birth in a hospital, and about a quarter, 347 (26.67%), did not place significant importance on control over the childbirth process.
CONCLUSIONS: The main factors influencing women's decisions regarding perinatal care and the importance of midwives as a component of the maternal-infant care team are modifiable, and thorough educational and psychological preparation would reduce the increasing predominance of preference for cesarean section, thereby promoting healthier and more woman- and child-centered perinatal care.
Additional Links: PMID-39189264
@article {pmid39189264,
year = {2024},
author = {Radu, MC and Armean, MS and Pop-Tudose, M and Medar, C and Manolescu, LSC},
title = {Exploring Factors Influencing Pregnant Women's Perceptions and Attitudes Towards Midwifery Care in Romania: Implications for Maternal Health Education Strategies.},
journal = {Nursing reports (Pavia, Italy)},
volume = {14},
number = {3},
pages = {1807-1818},
doi = {10.3390/nursrep14030134},
pmid = {39189264},
issn = {2039-4403},
abstract = {BACKGROUND: Midwives are strong advocates for vaginal births. However, their visibility and accessibility are poorly perceived by women in Romania. Consequently, the women's options are limited to a single direction when pregnancy occurs, involving the family doctor, the obstetrician, and often an interventional technical approach at the time of birth. The aim of this research is to identify specific variables that affect the perceptions and attitudes of pregnant women towards the care provided by midwives. This knowledge could contribute to the development of more effective education and information strategies within maternal health services.
METHODS: A cross-sectional observational analytical survey was conducted in Romania among pregnant women from the general population. Data were collected through a self-administered questionnaire, with informed consent obtained from each participating pregnant woman. The questionnaire was administered online using the cloud-based Google Forms platform and was available on the internet for seven months, from January to July 2023. The questionnaire was distributed through various media channels, both individually and in communication groups, in the form of a link. All questions were mandatory, and the questionnaire could only be submitted after answering all questions.
RESULTS: A total of 1301 individual responses were collected. Analysis of the socio-demographic and obstetrical profile of the pregnant women revealed that approximately half, 689 (52.95%), of the participants were aged between 18 and 29 years, and 1060 (81.47%) were married. Among our group of 1301 pregnant women, 973 (74.78%) had higher education, and 987 (75.86%) had a regular job. A majority of the survey participants, 936 (71.94%), lived in an urban geographic area, while 476 (36.58%) had attended childbirth education courses, and 791 (60.79%) were in the third trimester of pregnancy. A total of 298 (22.9%) respondents did not want to give birth in a hospital, and about a quarter, 347 (26.67%), did not place significant importance on control over the childbirth process.
CONCLUSIONS: The main factors influencing women's decisions regarding perinatal care and the importance of midwives as a component of the maternal-infant care team are modifiable, and thorough educational and psychological preparation would reduce the increasing predominance of preference for cesarean section, thereby promoting healthier and more woman- and child-centered perinatal care.},
}
RevDate: 2024-08-26
An enhanced approach for predicting air pollution using quantum support vector machine.
Scientific reports, 14(1):19521.
The essence of quantum machine learning is to optimize problem-solving by executing machine learning algorithms on quantum computers and exploiting potent laws such as superposition and entanglement. The support vector machine (SVM) is widely recognized as one of the most effective classification techniques in machine learning. In conventional systems, however, the SVM kernel technique tends to slow down and even fail as datasets become increasingly complex or jumbled. To compare the execution time and accuracy of conventional SVM classification with those of quantum SVM classification, appropriate quantum feature maps need to be selected. As datasets grow more complex, selecting a feature map that performs as well as or better than the classical approach becomes increasingly important. This paper utilizes conventional SVM to select an optimal feature map and benchmark dataset for predicting air quality. Experimental evidence demonstrates that the precision of quantum SVM surpasses that of classical SVM for air quality assessment. Conventional and quantum computing were compared using quantum labs from IBM's quantum computing cloud. When applied to the same datasets, the conventional SVM achieved accuracies of 91% and 87%, whereas the quantum SVM achieved accuracies of 97% and 94%, respectively, for air quality prediction. The study introduces the use of quantum support vector machines (SVMs) for predicting air quality and emphasizes a novel method for choosing the best quantum feature maps. Through quantum-enhanced feature mapping, our objective is to exceed the constraints of classical SVM and achieve higher levels of precision and effectiveness. We conduct experiments utilizing IBM's state-of-the-art quantum computer cloud to compare the performance of conventional and quantum SVM algorithms on a shared dataset.
Additional Links: PMID-39187555
@article {pmid39187555,
year = {2024},
author = {Farooq, O and Shahid, M and Arshad, S and Altaf, A and Iqbal, F and Vera, YAM and Flores, MAL and Ashraf, I},
title = {An enhanced approach for predicting air pollution using quantum support vector machine.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {19521},
pmid = {39187555},
issn = {2045-2322},
abstract = {The essence of quantum machine learning is to optimize problem-solving by executing machine learning algorithms on quantum computers and exploiting potent laws such as superposition and entanglement. The support vector machine (SVM) is widely recognized as one of the most effective classification techniques in machine learning. In conventional systems, however, the SVM kernel technique tends to slow down and even fail as datasets become increasingly complex or jumbled. To compare the execution time and accuracy of conventional SVM classification with those of quantum SVM classification, appropriate quantum feature maps need to be selected. As datasets grow more complex, selecting a feature map that performs as well as or better than the classical approach becomes increasingly important. This paper utilizes conventional SVM to select an optimal feature map and benchmark dataset for predicting air quality. Experimental evidence demonstrates that the precision of quantum SVM surpasses that of classical SVM for air quality assessment. Conventional and quantum computing were compared using quantum labs from IBM's quantum computing cloud. When applied to the same datasets, the conventional SVM achieved accuracies of 91% and 87%, whereas the quantum SVM achieved accuracies of 97% and 94%, respectively, for air quality prediction. The study introduces the use of quantum support vector machines (SVMs) for predicting air quality and emphasizes a novel method for choosing the best quantum feature maps. Through quantum-enhanced feature mapping, our objective is to exceed the constraints of classical SVM and achieve higher levels of precision and effectiveness. We conduct experiments utilizing IBM's state-of-the-art quantum computer cloud to compare the performance of conventional and quantum SVM algorithms on a shared dataset.},
}
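The practical coupling point between classical and quantum SVMs is the kernel matrix: scikit-learn's SVC accepts a precomputed Gram matrix, so a quantum fidelity kernel evaluated on a cloud backend can be dropped in without changing the classifier. In this sketch a classical RBF kernel stands in for the quantum feature-map kernel, and the data are synthetic.

import numpy as np
from sklearn.svm import SVC

def rbf_kernel(A, B, gamma=1.0):
    # Classical stand-in for a quantum kernel: K_ij = exp(-gamma * ||a_i - b_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(80, 4)), rng.integers(0, 2, 80)
X_test = rng.normal(size=(20, 4))

K_train = rbf_kernel(X_train, X_train)   # a quantum backend would fill these
K_test = rbf_kernel(X_test, X_train)     # matrices from state fidelities

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test))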
RevDate: 2024-08-27
AnoPrimer: Primer Design in malaria vectors informed by range-wide genomic variation.
Wellcome open research, 9:255.
The major malaria mosquitoes, Anopheles gambiae s.l. and Anopheles funestus, are some of the most studied organisms in medical research and also some of the most genetically diverse. When designing polymerase chain reaction (PCR) or hybridisation-based molecular assays, reliable primer and probe design is crucial. However, single nucleotide polymorphisms (SNPs) in primer binding sites can prevent primer binding, leading to null alleles, or bind suboptimally, leading to preferential amplification of specific alleles. Given the extreme genetic diversity of Anopheles mosquitoes, researchers need to consider this genetic variation when designing primers and probes to avoid amplification problems. In this note, we present a Python package, AnoPrimer, which exploits the Ag1000G and Af1000 datasets and allows users to rapidly design primers in An. gambiae or An. funestus, whilst summarising genetic variation in the primer binding sites and visualising the position of primer pairs. AnoPrimer allows the design of both genomic DNA and cDNA primers and hybridisation probes. By coupling this Python package with Google Colaboratory, AnoPrimer is an open and accessible platform for primer and probe design, hosted in the cloud for free. AnoPrimer is available at https://github.com/sanjaynagi/AnoPrimer, and we hope it will be a useful resource for the community to design probe and primer sets that can be reliably deployed across the An. gambiae and An. funestus species ranges.
Additional Links: PMID-39184128
@article {pmid39184128,
year = {2024},
author = {Nagi, SC and Ashraf, F and Miles, A and Donnelly, MJ},
title = {AnoPrimer: Primer Design in malaria vectors informed by range-wide genomic variation.},
journal = {Wellcome open research},
volume = {9},
number = {},
pages = {255},
pmid = {39184128},
issn = {2398-502X},
abstract = {The major malaria mosquitoes, Anopheles gambiae s.l. and Anopheles funestus, are some of the most studied organisms in medical research and also some of the most genetically diverse. When designing polymerase chain reaction (PCR) or hybridisation-based molecular assays, reliable primer and probe design is crucial. However, single nucleotide polymorphisms (SNPs) in primer binding sites can prevent primer binding, leading to null alleles, or bind suboptimally, leading to preferential amplification of specific alleles. Given the extreme genetic diversity of Anopheles mosquitoes, researchers need to consider this genetic variation when designing primers and probes to avoid amplification problems. In this note, we present a Python package, AnoPrimer, which exploits the Ag1000G and Af1000 datasets and allows users to rapidly design primers in An. gambiae or An. funestus, whilst summarising genetic variation in the primer binding sites and visualising the position of primer pairs. AnoPrimer allows the design of both genomic DNA and cDNA primers and hybridisation probes. By coupling this Python package with Google Colaboratory, AnoPrimer is an open and accessible platform for primer and probe design, hosted in the cloud for free. AnoPrimer is available at https://github.com/sanjaynagi/AnoPrimer, and we hope it will be a useful resource for the community to design probe and primer sets that can be reliably deployed across the An. gambiae and An. funestus species ranges.},
}
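The core check AnoPrimer automates, screening a candidate primer's binding site for segregating variants, can be illustrated generically as follows. This is not AnoPrimer's API: the function, its inputs, and the 3'-end window are hypothetical, and real use should go through the package linked above.

# Hypothetical illustration (not AnoPrimer's API): flag positions in a primer
# binding site that overlap common SNPs, noting 3'-end hits where mismatches
# are most likely to block amplification.
def primer_snp_report(primer_start, primer_len, snps, freq_cutoff=0.05):
    """snps: assumed mapping of genome position -> alternate allele frequency."""
    report = []
    for pos in range(primer_start, primer_start + primer_len):
        freq = snps.get(pos, 0.0)
        if freq >= freq_cutoff:
            offset_from_3prime = primer_start + primer_len - 1 - pos
            report.append({"position": pos,
                           "alt_freq": freq,
                           "near_3prime": offset_from_3prime < 5})
    return report

# A common SNP at the 3' end of a 20 bp primer starting at position 1000:
print(primer_snp_report(1000, 20, {1019: 0.32, 1003: 0.01}))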
RevDate: 2024-08-26
Genetic algorithm with skew mutation for heterogeneous resource-aware task offloading in edge-cloud computing.
Heliyon, 10(12):e32399 pii:S2405-8440(24)08430-5.
In recent years, edge-cloud computing has attracted increasing attention owing to the benefits of combining edge and cloud computing. Task scheduling is still one of the major challenges for improving service quality and resource efficiency in edge-clouds. Although several studies have addressed the scheduling problem, issues remain to be resolved before practical application, e.g., ignoring resource heterogeneity or focusing on only one kind of request. Therefore, in this paper, we provide a heterogeneity-aware task scheduling algorithm to improve the task completion rate and resource utilization of edge-clouds with deadline constraints. Owing to the NP-hardness of the scheduling problem, we exploit the genetic algorithm (GA), one of the most representative and widely used meta-heuristic algorithms, to solve the problem, considering the task completion rate and resource utilization as the major and minor optimization objectives, respectively. In our GA-based scheduling algorithm, a gene indicates which resource its corresponding task is processed by. To improve the performance of the GA, we propose a skew mutation operator in which genes are associated with resource heterogeneity during population evolution. We conduct extensive experiments to evaluate the performance of our algorithm, and the results verify its superiority in task completion rate compared with thirteen other classical and up-to-date scheduling algorithms.
Additional Links: PMID-39183823
@article {pmid39183823,
year = {2024},
author = {Chen, M and Qi, P and Chu, Y and Wang, B and Wang, F and Cao, J},
title = {Genetic algorithm with skew mutation for heterogeneous resource-aware task offloading in edge-cloud computing.},
journal = {Heliyon},
volume = {10},
number = {12},
pages = {e32399},
doi = {10.1016/j.heliyon.2024.e32399},
pmid = {39183823},
issn = {2405-8440},
abstract = {In recent years, edge-cloud computing has attracted increasing attention owing to the benefits of combining edge and cloud computing. Task scheduling is still one of the major challenges for improving service quality and resource efficiency in edge-clouds. Although several studies have addressed the scheduling problem, issues remain to be resolved before practical application, e.g., ignoring resource heterogeneity or focusing on only one kind of request. Therefore, in this paper, we provide a heterogeneity-aware task scheduling algorithm to improve the task completion rate and resource utilization of edge-clouds with deadline constraints. Owing to the NP-hardness of the scheduling problem, we exploit the genetic algorithm (GA), one of the most representative and widely used meta-heuristic algorithms, to solve the problem, considering the task completion rate and resource utilization as the major and minor optimization objectives, respectively. In our GA-based scheduling algorithm, a gene indicates which resource its corresponding task is processed by. To improve the performance of the GA, we propose a skew mutation operator in which genes are associated with resource heterogeneity during population evolution. We conduct extensive experiments to evaluate the performance of our algorithm, and the results verify its superiority in task completion rate compared with thirteen other classical and up-to-date scheduling algorithms.},
}
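A minimal sketch of the encoding and skew-mutation idea, under illustrative assumptions: each gene stores the resource index assigned to its task, and mutated genes are redrawn with probabilities skewed toward higher-capacity resources rather than uniformly. The capacity weighting is a plausible reading of the general mechanism, not the paper's exact operator.

import random

# Skew mutation over a resource-assignment chromosome: mutated genes are drawn
# in proportion to resource capacity, biasing tasks toward stronger resources.
def skew_mutate(chromosome, capacities, rate=0.05):
    total = sum(capacities)
    weights = [c / total for c in capacities]   # heterogeneity-aware bias
    return [random.choices(range(len(capacities)), weights=weights)[0]
            if random.random() < rate else gene
            for gene in chromosome]

# 10 tasks over 3 heterogeneous resources; resource 2 is drawn most often.
print(skew_mutate([0] * 10, capacities=[1.0, 2.0, 4.0], rate=0.5))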
RevDate: 2024-08-26
Giant Kerr nonlinearity of terahertz waves mediated by stimulated phonon polaritons in a microcavity chip.
Light, science & applications, 13(1):212.
The optical Kerr effect, in which the input light intensity linearly alters the refractive index, has enabled the generation of optical solitons, supercontinuum spectra, and frequency combs, playing vital roles in on-chip devices, fiber communications, and quantum manipulation. The terahertz Kerr effect in particular, despite fascinating prospects for future high-rate computing, artificial intelligence, and cloud-based technologies, faces a great challenge due to the rather low power density and feeble Kerr response of terahertz sources. Here, we demonstrate a giant terahertz-frequency Kerr nonlinearity mediated by stimulated phonon polaritons. Under the influence of the giant Kerr nonlinearity, the power-dependent refractive index change results in a frequency shift in the microcavity, which was experimentally demonstrated via measurement of the resonant mode of a chip-scale lithium niobate Fabry-Pérot microcavity. Owing to the existence of stimulated phonon polaritons, the nonlinear coefficient extracted from the frequency shifts is orders of magnitude larger than that of visible and infrared light, which is also demonstrated theoretically by nonlinear Huang equations. This work opens an avenue for many rich and fruitful terahertz Kerr effect-based physical, chemical, and biological systems that have terahertz fingerprints.
Additional Links: PMID-39179595
@article {pmid39179595,
year = {2024},
author = {Huang, Y and Lu, Y and Li, W and Xu, X and Jiang, X and Ma, R and Chen, L and Ruan, N and Wu, Q and Xu, J},
title = {Giant Kerr nonlinearity of terahertz waves mediated by stimulated phonon polaritons in a microcavity chip.},
journal = {Light, science & applications},
volume = {13},
number = {1},
pages = {212},
pmid = {39179595},
issn = {2047-7538},
support = {11974192//National Natural Science Foundation of China (National Science Foundation of China)/ ; 62205158//National Natural Science Foundation of China (National Science Foundation of China)/ ; },
abstract = {The optical Kerr effect, in which the input light intensity linearly alters the refractive index, has enabled the generation of optical solitons, supercontinuum spectra, and frequency combs, playing vital roles in on-chip devices, fiber communications, and quantum manipulation. The terahertz Kerr effect in particular, despite fascinating prospects for future high-rate computing, artificial intelligence, and cloud-based technologies, faces a great challenge due to the rather low power density and feeble Kerr response of terahertz sources. Here, we demonstrate a giant terahertz-frequency Kerr nonlinearity mediated by stimulated phonon polaritons. Under the influence of the giant Kerr nonlinearity, the power-dependent refractive index change results in a frequency shift in the microcavity, which was experimentally demonstrated via measurement of the resonant mode of a chip-scale lithium niobate Fabry-Pérot microcavity. Owing to the existence of stimulated phonon polaritons, the nonlinear coefficient extracted from the frequency shifts is orders of magnitude larger than that of visible and infrared light, which is also demonstrated theoretically by nonlinear Huang equations. This work opens an avenue for many rich and fruitful terahertz Kerr effect-based physical, chemical, and biological systems that have terahertz fingerprints.},
}
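For orientation, the textbook relations behind the frequency-shift measurement can be stated compactly (a sketch only; the paper's quantitative treatment uses nonlinear Huang equations). An intensity-dependent index shifts each Fabry-Pérot resonance:

\[
  n(I) = n_0 + n_2 I,
  \qquad
  \nu_m = \frac{m c}{2 n L}
  \quad\Rightarrow\quad
  \frac{\Delta\nu_m}{\nu_m} \approx -\frac{\Delta n}{n_0} = -\frac{n_2 I}{n_0},
\]

so the measured shift of a cavity mode versus input power gives the index change, from which an effective n_2 can be extracted.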
RevDate: 2024-08-23
CmpDate: 2024-08-23
Remote Monitoring, AI, Machine Learning and Mobile Ultrasound Integration upon 5G Internet in the Prehospital Care to Support the Golden Hour Principle and Optimize Outcomes in Severe Trauma and Emergency Surgery.
Studies in health technology and informatics, 316:1807-1811.
AIM: To evaluate the feasibility and reliability of 5G internet networks (5G IN) with Artificial Intelligence (AI)/Machine Learning (ML), telemonitoring, and mobile ultrasound (m u/s) in an ambulance car (AC), integrated into the pre-hospital setting (PS), to support the Golden Hour Principle (GHP) and optimize outcomes in severe trauma (TRS).
MATERIAL AND METHODS: (PS) organization and care over high-bandwidth (10 GB/s) (5G IN) mobile tele-communication (mTC) were tested with the experimental Cobot PROMETHEUS III (pn:100016) by simulation of six severe trauma clinical cases by ten (N1=10) experts: four professional rescuers (n1=4), three trauma surgeons (n2=3), a radiologist (n3=1), and two information technology specialists (n4=2), to evaluate feasibility, reliability, and clinical usability for instant risk, prognosis, and triage computation, decision support, and treatment planning by (AI)/(ML) computations in the (PS) of (TRS), as well as by performing (PS) (m u/s).
RESULTS: A. Instant computation of trauma severity scales by the Cobot PROMETHEUS III (pn:100016), based on complex AI and ML algorithms and cloud computing, together with telemonitoring, showed very high feasibility and reliability over (5G IN) under specific technological, training, and ergonomic prerequisites. B. Measured bi-directional (m u/s) image data sharing between the (AC) and the (ED/TC) showed very high feasibility and reliability over (5G IN) under specific technological and ergonomic conditions in (TRS).
CONCLUSION: Integration of (PS) tele-monitoring with (AI)/(ML) and (PS) (m u/s) over (5G IN) via the Cobot PROMETHEUS III (pn:100016) in severe (TRS/ES) seems feasible and, under specific prerequisites, reliable for supporting the (GHP) and optimizing outcomes in adult and pediatric (TRS/ES).
Additional Links: PMID-39176842
@article {pmid39176842,
year = {2024},
author = {Mammas, CS and Mamma, AS},
title = {Remote Monitoring, AI, Machine Learning and Mobile Ultrasound Integration upon 5G Internet in the Prehospital Care to Support the Golden Hour Principle and Optimize Outcomes in Severe Trauma and Emergency Surgery.},
journal = {Studies in health technology and informatics},
volume = {316},
number = {},
pages = {1807-1811},
doi = {10.3233/SHTI240782},
pmid = {39176842},
issn = {1879-8365},
mesh = {Humans ; *Machine Learning ; *Ultrasonography ; *Emergency Medical Services ; *Wounds and Injuries/diagnostic imaging/therapy ; Telemedicine ; Artificial Intelligence ; Internet ; Feasibility Studies ; Reproducibility of Results ; },
abstract = {AIM: To evaluate the feasibility and reliability of 5G internet networks (5G IN) with Artificial Intelligence (AI)/Machine Learning (ML), telemonitoring, and mobile ultrasound (m u/s) in an ambulance car (AC), integrated into the pre-hospital setting (PS), to support the Golden Hour Principle (GHP) and optimize outcomes in severe trauma (TRS).
MATERIAL AND METHODS: (PS) organization and care over high-bandwidth (10 GB/s) (5G IN) mobile tele-communication (mTC) were tested with the experimental Cobot PROMETHEUS III (pn:100016) by simulation of six severe trauma clinical cases by ten (N1=10) experts: four professional rescuers (n1=4), three trauma surgeons (n2=3), a radiologist (n3=1), and two information technology specialists (n4=2), to evaluate feasibility, reliability, and clinical usability for instant risk, prognosis, and triage computation, decision support, and treatment planning by (AI)/(ML) computations in the (PS) of (TRS), as well as by performing (PS) (m u/s).
RESULTS: A. Instant computation of trauma severity scales by the Cobot PROMETHEUS III (pn:100016), based on complex AI and ML algorithms and cloud computing, together with telemonitoring, showed very high feasibility and reliability over (5G IN) under specific technological, training, and ergonomic prerequisites. B. Measured bi-directional (m u/s) image data sharing between the (AC) and the (ED/TC) showed very high feasibility and reliability over (5G IN) under specific technological and ergonomic conditions in (TRS).
CONCLUSION: Integration of (PS) tele-monitoring with (AI)/(ML) and (PS) (m u/s) over (5G IN) via the Cobot PROMETHEUS III (pn:100016) in severe (TRS/ES) seems feasible and, under specific prerequisites, reliable for supporting the (GHP) and optimizing outcomes in adult and pediatric (TRS/ES).},
}
RevDate: 2024-08-20
Deep learning and optimization enabled multi-objective for task scheduling in cloud computing.
Network (Bristol, England) [Epub ahead of print].
In cloud computing (CC), task scheduling allocates each task to the most suitable resource for execution. This article proposes a model for task scheduling that combines multi-objective optimization with a deep learning (DL) model. Initially, multi-objective task scheduling of incoming user requests is carried out by the proposed hybrid fractional flamingo beetle optimization (FFBO), which is formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecast by a deep residual network (DRN). Thereafter, task scheduling is accomplished with DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), which is the combination of DFNN and LSTM. Moreover, when scheduling the workflow, both the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU) utilization. The proposed DFNN-LSTM+FFBO model achieved superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
Additional Links: PMID-39163538
@article {pmid39163538,
year = {2024},
author = {Komarasamy, D and Ramaganthan, SM and Kandaswamy, DM and Mony, G},
title = {Deep learning and optimization enabled multi-objective for task scheduling in cloud computing.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-30},
doi = {10.1080/0954898X.2024.2391395},
pmid = {39163538},
issn = {1361-6536},
abstract = {In cloud computing (CC), task scheduling allocates each task to the most suitable resource for execution. This article proposes a model for task scheduling that combines multi-objective optimization with a deep learning (DL) model. Initially, multi-objective task scheduling of incoming user requests is carried out by the proposed hybrid fractional flamingo beetle optimization (FFBO), which is formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecast by a deep residual network (DRN). Thereafter, task scheduling is accomplished with DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), which is the combination of DFNN and LSTM. Moreover, when scheduling the workflow, both the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU) utilization. The proposed DFNN-LSTM+FFBO model achieved superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.},
}
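To make the multi-objective fitness concrete, here is a hedged Python sketch of the kind of weighted objective such a hybrid optimizer would minimize. The weights, the VM and task fields, and the aggregation are assumptions; predicted_energy stands in for the DRN forecast, and the paper's actual formulation may differ.

# Illustrative multi-objective fitness over a task-to-VM assignment; lower is
# better. Reliability enters inverted so that higher reliability lowers cost.
def fitness(schedule, tasks, vms, predicted_energy, w=(0.25, 0.25, 0.25, 0.25)):
    """schedule[i] = index of the VM assigned to task i (hypothetical encoding)."""
    finish = [0.0] * len(vms)
    cost = 0.0
    for i, vm in enumerate(schedule):
        runtime = tasks[i]["length"] / vms[vm]["mips"]   # seconds on that VM
        finish[vm] += runtime
        cost += runtime * vms[vm]["price"]
    makespan = max(finish)
    reliability = min(vms[vm]["reliability"] for vm in schedule)
    return (w[0] * makespan + w[1] * cost +
            w[2] * predicted_energy + w[3] * (1.0 - reliability))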
RevDate: 2024-08-20
Evolving Software Architecture Design in Telemedicine: A PRISMA-based Systematic Review.
Healthcare informatics research, 30(3):184-193.
OBJECTIVES: This article presents a systematic review of recent advancements in telemedicine architectures for continuous monitoring, providing a comprehensive overview of the evolving software engineering practices underpinning these systems. The review aims to illuminate the critical role of telemedicine in delivering healthcare services, especially during global health crises, and to emphasize the importance of effectiveness, security, interoperability, and scalability in these systems.
METHODS: A systematic review methodology was employed, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework. As the primary research method, the PubMed, IEEE Xplore, and Scopus databases were searched to identify articles relevant to telemedicine architectures for continuous monitoring. Seventeen articles were selected for analysis, and a methodical approach was employed to investigate and synthesize the findings.
RESULTS: The review identified a notable trend towards the integration of emerging technologies into telemedicine architectures. Key areas of focus include interoperability, security, and scalability. Innovations such as cognitive radio technology, behavior-based control architectures, Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standards, cloud computing, decentralized systems, and blockchain technology are addressing challenges in remote healthcare delivery and continuous monitoring.
CONCLUSIONS: This review highlights major advancements in telemedicine architectures, emphasizing the integration of advanced technologies to improve interoperability, security, and scalability. The findings underscore the successful application of cognitive radio technology, behavior-based control, HL7 FHIR standards, cloud computing, decentralized systems, and blockchain in advancing remote healthcare delivery.
Additional Links: PMID-39160778
@article {pmid39160778,
year = {2024},
author = {Jat, AS and Grønli, TM and Ghinea, G and Assres, G},
title = {Evolving Software Architecture Design in Telemedicine: A PRISMA-based Systematic Review.},
journal = {Healthcare informatics research},
volume = {30},
number = {3},
pages = {184-193},
doi = {10.4258/hir.2024.30.3.184},
pmid = {39160778},
issn = {2093-3681},
support = {//Kristiania University College/ ; },
abstract = {OBJECTIVES: This article presents a systematic review of recent advancements in telemedicine architectures for continuous monitoring, providing a comprehensive overview of the evolving software engineering practices underpinning these systems. The review aims to illuminate the critical role of telemedicine in delivering healthcare services, especially during global health crises, and to emphasize the importance of effectiveness, security, interoperability, and scalability in these systems.
METHODS: A systematic review methodology was employed, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework. As the primary research method, the PubMed, IEEE Xplore, and Scopus databases were searched to identify articles relevant to telemedicine architectures for continuous monitoring. Seventeen articles were selected for analysis, and a methodical approach was employed to investigate and synthesize the findings.
RESULTS: The review identified a notable trend towards the integration of emerging technologies into telemedicine architectures. Key areas of focus include interoperability, security, and scalability. Innovations such as cognitive radio technology, behavior-based control architectures, Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standards, cloud computing, decentralized systems, and blockchain technology are addressing challenges in remote healthcare delivery and continuous monitoring.
CONCLUSIONS: This review highlights major advancements in telemedicine architectures, emphasizing the integration of advanced technologies to improve interoperability, security, and scalability. The findings underscore the successful application of cognitive radio technology, behavior-based control, HL7 FHIR standards, cloud computing, decentralized systems, and blockchain in advancing remote healthcare delivery.},
}
RevDate: 2024-08-20
War, emotions, mental health, and artificial intelligence.
Frontiers in psychology, 15:1394045.
During wartime, dysregulation of negative emotions such as fear, anger, hatred, frustration, sadness, humiliation, and hopelessness can overrule normal societal values and culture and endanger global peace and security as well as mental health in affected societies. It is therefore understandable that the range and power of negative emotions may play important roles in understanding human behavior in any armed conflict. Estimating and assessing the dominant negative emotions during wartime is crucial but is challenged by the complexity of the neuro-psycho-physiology of emotions. Currently available natural language processing (NLP) tools offer comprehensive computational methods to analyze and understand the emotional content of related textual data in war-inflicted societies. Innovative AI-driven technologies incorporating machine learning, neuro-linguistic programming, cloud infrastructure, and novel digital therapeutic tools and applications present immense potential to enhance mental health care worldwide. This advancement could make mental health services more cost-effective and readily accessible. Given the inadequate number of psychiatrists and limited psychiatric resources for coping with the mental health consequences of war and trauma, new digital therapeutic wearable devices supported by AI tools might be a promising approach in the psychiatry of the future. Transformation of dominant negative emotional maps might be undertaken by combining online cognitive behavioral therapy (CBT) at the individual level with emotionally based strategic communications (EBSC) at the public level. The proposed positive emotional transformation by means of CBT and EBSC may provide important leverage in efforts to protect the mental health of civilian populations in war-inflicted societies. AI-based tools that can be applied to the design of EBSC stimuli, such as OpenAI ChatGPT or Google Gemini, may have great potential to significantly enhance emotionally based strategic communications through more comprehensive semantic and linguistic analysis of available text datasets from war-traumatized societies. A human in the loop, enhanced by ChatGPT and Gemini, can aid in the design and development of emotionally annotated messages that resonate with the targeted population, amplifying the impact of strategic communications in shaping dominant human emotional maps toward the more positive by CBT and EBSC.
Additional Links: PMID-39156807
@article {pmid39156807,
year = {2024},
author = {Cosic, K and Kopilas, V and Jovanovic, T},
title = {War, emotions, mental health, and artificial intelligence.},
journal = {Frontiers in psychology},
volume = {15},
number = {},
pages = {1394045},
pmid = {39156807},
issn = {1664-1078},
abstract = {During wartime, dysregulation of negative emotions such as fear, anger, hatred, frustration, sadness, humiliation, and hopelessness can overrule normal societal values and culture and endanger global peace and security as well as mental health in affected societies. It is therefore understandable that the range and power of negative emotions may play important roles in understanding human behavior in any armed conflict. Estimating and assessing the dominant negative emotions during wartime is crucial but is challenged by the complexity of the neuro-psycho-physiology of emotions. Currently available natural language processing (NLP) tools offer comprehensive computational methods to analyze and understand the emotional content of related textual data in war-inflicted societies. Innovative AI-driven technologies incorporating machine learning, neuro-linguistic programming, cloud infrastructure, and novel digital therapeutic tools and applications present immense potential to enhance mental health care worldwide. This advancement could make mental health services more cost-effective and readily accessible. Given the inadequate number of psychiatrists and limited psychiatric resources for coping with the mental health consequences of war and trauma, new digital therapeutic wearable devices supported by AI tools might be a promising approach in the psychiatry of the future. Transformation of dominant negative emotional maps might be undertaken by combining online cognitive behavioral therapy (CBT) at the individual level with emotionally based strategic communications (EBSC) at the public level. The proposed positive emotional transformation by means of CBT and EBSC may provide important leverage in efforts to protect the mental health of civilian populations in war-inflicted societies. AI-based tools that can be applied to the design of EBSC stimuli, such as OpenAI ChatGPT or Google Gemini, may have great potential to significantly enhance emotionally based strategic communications through more comprehensive semantic and linguistic analysis of available text datasets from war-traumatized societies. A human in the loop, enhanced by ChatGPT and Gemini, can aid in the design and development of emotionally annotated messages that resonate with the targeted population, amplifying the impact of strategic communications in shaping dominant human emotional maps toward the more positive by CBT and EBSC.},
}
RevDate: 2024-08-16
CmpDate: 2024-08-16
Research on privacy protection in the context of healthcare data based on knowledge map.
Medicine, 103(33):e39370.
With the rapid development of emerging information technologies such as artificial intelligence, cloud computing, and the Internet of Things, the world has entered the era of big data. In the face of growing medical big data, research on the privacy protection of personal information has attracted more and more attention, but few studies have analyzed and forecast the research hotspots and future development trends in privacy protection. To systematically and comprehensively summarize the relevant privacy protection literature in the context of big healthcare data, a bibliometric analysis was conducted to clarify the spatial and temporal distribution and research hotspots of privacy protection using the information visualization software CiteSpace. Papers related to privacy protection were collected from the Web of Science for 2012 to 2023. Through analysis of the temporal, author, and country distributions of the relevant publications, we found that privacy protection research has received increasing attention since 2013 and that universities are the core institutions in the field, but cooperation between countries remains weak. Additionally, keywords like privacy, big data, internet, challenge, care, and information have high centrality and frequency, indicating the research hotspots and trends in the field of privacy protection. These findings provide a comprehensive knowledge structure of privacy protection research in the context of healthcare big data, helping scholars quickly grasp the research hotspots and choose future research projects.
Additional Links: PMID-39151500
@article {pmid39151500,
year = {2024},
author = {Ouyang, T and Yang, J and Gu, Z and Zhang, L and Wang, D and Wang, Y and Yang, Y},
title = {Research on privacy protection in the context of healthcare data based on knowledge map.},
journal = {Medicine},
volume = {103},
number = {33},
pages = {e39370},
pmid = {39151500},
issn = {1536-5964},
support = {Grant No.2023Ah040102//Major Scientific Research Project of Anhui Provincial Department of Education/ ; Grant No.2022Ah010038 and No.2023sdxx027//Anhui Province quality projects/ ; Grant no.2021rwzd12//Key humanities projects of Anhui University of Traditional Chinese Medicine/ ; Grant No.JNFX2023020//Middle-aged Young Teacher Training Action Project of Anhui Provincial Department of Education/ ; Grant No.2023jyxm0370//General Project of Teaching Research in Anhui Province/ ; },
mesh = {Humans ; *Big Data ; *Computer Security ; *Privacy ; *Confidentiality ; Bibliometrics ; },
abstract = {With the rapid development of emerging information technologies such as artificial intelligence, cloud computing, and the Internet of Things, the world has entered the era of big data. In the face of growing medical big data, research on the privacy protection of personal information has attracted more and more attention, but few studies have analyzed and forecast the research hotspots and future development trends in privacy protection. To systematically and comprehensively summarize the relevant privacy protection literature in the context of big healthcare data, a bibliometric analysis was conducted to clarify the spatial and temporal distribution and research hotspots of privacy protection using the information visualization software CiteSpace. Papers related to privacy protection were collected from the Web of Science for 2012 to 2023. Through analysis of the temporal, author, and country distributions of the relevant publications, we found that privacy protection research has received increasing attention since 2013 and that universities are the core institutions in the field, but cooperation between countries remains weak. Additionally, keywords like privacy, big data, internet, challenge, care, and information have high centrality and frequency, indicating the research hotspots and trends in the field of privacy protection. These findings provide a comprehensive knowledge structure of privacy protection research in the context of healthcare big data, helping scholars quickly grasp the research hotspots and choose future research projects.},
}
RevDate: 2024-08-16
CmpDate: 2024-08-16
FitScore: a fast machine learning-based score for 3D virtual screening enrichment.
Journal of computer-aided molecular design, 38(1):29.
Enhancing virtual screening enrichment has become an urgent problem in computational chemistry, driven by increasingly large databases of commercially available compounds, without a commensurate drop in in vitro screening costs. Docking these large databases is possible with cloud-scale computing. However, rapid docking necessitates compromises in scoring, often leading to poor enrichment and an abundance of false positives in docking results. This work describes a new scoring function composed of two parts - a knowledge-based component that predicts the probability of a particular atom type being in a particular receptor environment, and a tunable weight matrix that converts the probability predictions into a dimensionless score suitable for virtual screening enrichment. This score, the FitScore, represents the compatibility between the ligand and the binding site and is capable of a high degree of enrichment across standardized docking test sets.
Additional Links: PMID-39150579
@article {pmid39150579,
year = {2024},
author = {Gehlhaar, DK and Mermelstein, DJ},
title = {FitScore: a fast machine learning-based score for 3D virtual screening enrichment.},
journal = {Journal of computer-aided molecular design},
volume = {38},
number = {1},
pages = {29},
pmid = {39150579},
issn = {1573-4951},
mesh = {*Machine Learning ; Ligands ; *Molecular Docking Simulation ; Binding Sites ; Humans ; Protein Binding ; Proteins/chemistry/metabolism ; Software ; Drug Evaluation, Preclinical/methods ; Drug Discovery/methods ; },
abstract = {Enhancing virtual screening enrichment has become an urgent problem in computational chemistry, driven by increasingly large databases of commercially available compounds, without a commensurate drop in in vitro screening costs. Docking these large databases is possible with cloud-scale computing. However, rapid docking necessitates compromises in scoring, often leading to poor enrichment and an abundance of false positives in docking results. This work describes a new scoring function composed of two parts - a knowledge-based component that predicts the probability of a particular atom type being in a particular receptor environment, and a tunable weight matrix that converts the probability predictions into a dimensionless score suitable for virtual screening enrichment. This score, the FitScore, represents the compatibility between the ligand and the binding site and is capable of a high degree of enrichment across standardized docking test sets.},
}
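The two-part structure of the score can be sketched schematically: a model supplies, per ligand atom, a probability distribution over receptor environments, and a tunable weight matrix W[atom_type, environment] converts those probabilities into one dimensionless number. Everything below except that structure (the matrix values, dimensions, and probabilities) is a placeholder.

import numpy as np

# Schematic FitScore-style aggregation: weight each atom's environment
# probabilities by a tuned matrix and sum over the ligand's atoms.
def fit_score(atom_types, env_probs, W):
    """atom_types: (N,) ints; env_probs: (N, E) rows summing to 1; W: (T, E)."""
    return float(sum(W[t] @ p for t, p in zip(atom_types, env_probs)))

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(5), size=3)   # 3 atoms, 5 environment classes
W = rng.normal(size=(2, 5))                 # 2 atom types
print(fit_score(np.array([0, 1, 1]), probs, W))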
RevDate: 2024-08-16
Discovering patterns and trends in customer service technologies patents using large language model.
Heliyon, 10(14):e34701 pii:S2405-8440(24)10732-3.
The definition of service has evolved from a focus on material value in manufacturing before the 2000s to a customer-centric value based on the significant growth of the service industry. Digital transformation has become essential for companies in the service industry owing to the incorporation of digital technology through the Fourth Industrial Revolution and COVID-19. This study utilised Bidirectional Encoder Representations from Transformers (BERT) to analyse 3029 international patents related to the customer service industry and digital transformation registered between 2000 and 2022. Through topic modelling, this study identified 10 major topics in the customer service industry and analysed their yearly trends. Our findings show that as of 2022, the trend with the highest frequency is user-centric network service design, while cloud computing has experienced the steepest increase in the last five years. User-centric network services have been steadily developing since the inception of the Internet. Cloud computing is one of the key technologies being developed intensively in 2023 for the digital transformation of customer service. This study identifies time-series trends in customer service industry patents and suggests the effectiveness of using BERTopic to predict future technology trends.
Additional Links: PMID-39149018
@article {pmid39149018,
year = {2024},
author = {Kim, C and Lee, J},
title = {Discovering patterns and trends in customer service technologies patents using large language model.},
journal = {Heliyon},
volume = {10},
number = {14},
pages = {e34701},
doi = {10.1016/j.heliyon.2024.e34701},
pmid = {39149018},
issn = {2405-8440},
abstract = {The definition of service has evolved from a focus on material value in manufacturing before the 2000s to a customer-centric value based on the significant growth of the service industry. Digital transformation has become essential for companies in the service industry owing to the incorporation of digital technology through the Fourth Industrial Revolution and COVID-19. This study utilised Bidirectional Encoder Representations from Transformers (BERT) to analyse 3029 international patents related to the customer service industry and digital transformation registered between 2000 and 2022. Through topic modelling, this study identified 10 major topics in the customer service industry and analysed their yearly trends. Our findings show that as of 2022, the trend with the highest frequency is user-centric network service design, while cloud computing has experienced the steepest increase in the last five years. User-centric network services have been steadily developing since the inception of the Internet. Cloud computing is one of the key technologies being developed intensively in 2023 for the digital transformation of customer service. This study identifies time-series trends in customer service industry patents and suggests the effectiveness of using BERTopic to predict future technology trends.},
}
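For reference, a minimal sketch of the BERTopic workflow the study describes, assuming a recent version of the bertopic package; the two placeholder documents stand in for the 3029 patent abstracts, and a realistically sized corpus is needed for the clustering to produce meaningful topics.

from bertopic import BERTopic

# Placeholder corpus; in practice, thousands of patent abstracts with filing years.
docs = ["cloud computing platform for customer service automation ...",
        "user-centric network service design for personalized support ..."]
years = [2015, 2022]

topic_model = BERTopic()                          # BERT embeddings + clustering
topics, probs = topic_model.fit_transform(docs)   # assign a topic per document

# Yearly topic frequencies, the basis for trend ranking (e.g., cloud computing).
trends = topic_model.topics_over_time(docs, years)
print(topic_model.get_topic_info())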
RevDate: 2024-08-15
Forest disturbance regimes and trends in continental Spain (1985-2023) using dense Landsat time series.
Environmental research pii:S0013-9351(24)01707-9 [Epub ahead of print].
Forest disturbance regimes across biomes are being altered by interactive effects of global change. Establishing baselines for assessing change requires detailed quantitative data on past disturbance events, but such data are scarce and difficult to obtain over large spatial and temporal scales. The integration of remote sensing with dense time series analysis and cloud computing platforms is enhancing the ability to monitor historical disturbances, and especially non-stand replacing events along climatic gradients. Since the integration of such tools is still scarce in Mediterranean regions, here, we combine dense Landsat time series and the Continuous Change Detection and Classification - Spectral Mixture Analysis (CCDC-SMA) method to monitor forest disturbance in continental Spain from 1985 to 2023. We adapted the CCDC-SMA method for improved disturbance detection creating new spectral libraries representative of the study region, and quantified the year, month, severity, return interval, and type of disturbance (stand replacing, non-stand replacing) at a 30 m resolution. In addition, we characterised forest disturbance regimes and trends (patch size and severity, and frequency of events) of events larger than 0.5 ha at the national scale by biome (Mediterranean and temperate) and forest type (broadleaf, needleleaf and mixed). We quantified more than 2.9 million patches of disturbed forest, covering 4.6 Mha over the region and period studied. Forest disturbances were on average larger but less severe in the Mediterranean than in the temperate biome, and significantly larger and more severe in needleleaf than in mixed and broadleaf forests. Since the late 1980s, forest disturbances have decreased in size and severity while increasing in frequency across all biomes and forest types. These results have important implications as they confirm that disturbance regimes in continental Spain are changing and should therefore be considered in forest strategic planning for policy development and implementation.
Additional Links: PMID-39147188
@article {pmid39147188,
year = {2024},
author = {Miguel, S and Ruiz-Benito, P and Rebollo, P and Viana-Soto, A and Mihai, MC and García-Martín, A and Tanase, M},
title = {Forest disturbance regimes and trends in continental Spain (1985-2023) using dense Landsat time series.},
journal = {Environmental research},
volume = {},
number = {},
pages = {119802},
doi = {10.1016/j.envres.2024.119802},
pmid = {39147188},
issn = {1096-0953},
abstract = {Forest disturbance regimes across biomes are being altered by interactive effects of global change. Establishing baselines for assessing change requires detailed quantitative data on past disturbance events, but such data are scarce and difficult to obtain over large spatial and temporal scales. The integration of remote sensing with dense time series analysis and cloud computing platforms is enhancing the ability to monitor historical disturbances, and especially non-stand replacing events along climatic gradients. Since the integration of such tools is still scarce in Mediterranean regions, here, we combine dense Landsat time series and the Continuous Change Detection and Classification - Spectral Mixture Analysis (CCDC-SMA) method to monitor forest disturbance in continental Spain from 1985 to 2023. We adapted the CCDC-SMA method for improved disturbance detection creating new spectral libraries representative of the study region, and quantified the year, month, severity, return interval, and type of disturbance (stand replacing, non-stand replacing) at a 30 m resolution. In addition, we characterised forest disturbance regimes and trends (patch size and severity, and frequency of events) of events larger than 0.5 ha at the national scale by biome (Mediterranean and temperate) and forest type (broadleaf, needleleaf and mixed). We quantified more than 2.9 million patches of disturbed forest, covering 4.6 Mha over the region and period studied. Forest disturbances were on average larger but less severe in the Mediterranean than in the temperate biome, and significantly larger and more severe in needleleaf than in mixed and broadleaf forests. Since the late 1980s, forest disturbances have decreased in size and severity while increasing in frequency across all biomes and forest types. These results have important implications as they confirm that disturbance regimes in continental Spain are changing and should therefore be considered in forest strategic planning for policy development and implementation.},
}
RevDate: 2024-08-15
CmpDate: 2024-08-15
An enhanced round robin using dynamic time quantum for real-time asymmetric burst length processes in cloud computing environment.
PloS one, 19(8):e0304517 pii:PONE-D-24-07054.
Cloud computing is a popular, flexible, scalable, and cost-effective technology in the modern world that provides on-demand services dynamically. The dynamic execution of user requests and resource-sharing facilities require proper task scheduling among the available virtual machines, which is a significant issue and plays a crucial role in developing an optimal cloud computing environment. Round Robin is a prevalent scheduling algorithm for fair distribution of resources with a balanced contribution in minimized response time and turnaround time. This paper introduces a new enhanced round-robin approach for task scheduling in cloud computing systems. The proposed algorithm generates and keeps updating a dynamic time quantum for process execution, considering the number of processes in the system and their burst lengths. Since our method schedules processes dynamically, it is appropriate for a real-time environment like cloud computing. A notable feature of this approach is its ability to schedule tasks with an asymmetric distribution of burst times while avoiding the convoy effect. The experimental results indicate that the proposed algorithm outperforms existing improved round-robin task scheduling approaches in terms of minimized average waiting time, average turnaround time, and number of context switches. Compared against five other enhanced round-robin approaches, it reduced average waiting times by 15.77% and context switching by 20.68% on average. Based on the experiments and comparative study, we conclude that the proposed enhanced round-robin scheduling algorithm is optimal, acceptable, and relatively better suited for cloud computing environments.
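As an illustration of the kind of algorithm the abstract describes, here is a toy round-robin scheduler whose time quantum is recomputed each cycle from the remaining burst lengths. The update rule (mean of remaining bursts) and the assumption that all processes arrive at time zero are ours; the paper's actual quantum formula may differ.

```python
# Round-robin scheduling with a dynamic time quantum (illustrative sketch).
# Assumes all processes arrive at t=0; the paper's update rule may differ.
from collections import deque

def dynamic_rr(bursts):
    """Return (avg_waiting, avg_turnaround, context_switches) for burst times."""
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)                 # process ids in arrival order
    finish = {}
    t = dispatches = 0
    while queue:
        # Quantum recomputed each dispatch as the mean remaining burst.
        quantum = max(1, round(sum(remaining[p] for p in queue) / len(queue)))
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        t += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queue.append(pid)                # not finished: requeue
        else:
            finish[pid] = t                  # record completion time
        dispatches += 1
    n = len(bursts)
    turnaround = [finish[p] for p in range(n)]
    waiting = [finish[p] - bursts[p] for p in range(n)]
    # Dispatch count minus one is a rough proxy for context switches.
    return sum(waiting) / n, sum(turnaround) / n, dispatches - 1

print(dynamic_rr([24, 3, 3, 80]))            # asymmetric burst lengths
```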
Additional Links: PMID-39146286
@article {pmid39146286,
year = {2024},
author = {Zohora, MF and Farhin, F and Kaiser, MS},
title = {An enhanced round robin using dynamic time quantum for real-time asymmetric burst length processes in cloud computing environment.},
journal = {PloS one},
volume = {19},
number = {8},
pages = {e0304517},
doi = {10.1371/journal.pone.0304517},
pmid = {39146286},
issn = {1932-6203},
mesh = {*Cloud Computing ; *Algorithms ; Time Factors ; },
}
RevDate: 2024-08-14
Physical Reservoir Computing Using van der Waals Ferroelectrics for Acoustic Keyword Spotting.
ACS nano [Epub ahead of print].
Acoustic keyword spotting (KWS) plays a pivotal role in the voice-activated systems of artificial intelligence (AI), allowing for hands-free interactions between humans and smart devices through information retrieval of the voice commands. Cloud computing integrated with artificial neural networks has been employed to execute KWS tasks, but it suffers from propagation delays and the risk of privacy breaches. Here, we report a single-node reservoir computing (RC) system based on the CuInP2S6 (CIPS)/graphene heterostructure planar device for implementing the KWS task with low computation cost. By deliberately tuning the Schottky barrier height at the ferroelectric CIPS interfaces for thermionic injection and transport of electrons, the device achieves the typical nonlinear current response and fading-memory characteristics. Additionally, the device exhibits diverse synaptic plasticity with an excellent separation capability for temporal information. We construct an RC system employing the ferroelectric device as the physical node to spot acoustic keywords (the natural numbers 1 to 9, based on simulation), and the system demonstrates outstanding performance with a high accuracy rate (>94.6%) and recall rate (>92.0%). Our work establishes single-node physical RC as a prospective computing platform for processing acoustic keywords, promoting its applications in artificial auditory systems at the edge.
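To make the single-node reservoir idea concrete, below is a software toy: one nonlinear node is time-multiplexed over virtual nodes, its fading-memory states are collected, and a ridge-regression readout classifies two synthetic "keywords" (sine tones at different frequencies). The tanh nonlinearity, random mask, and toy task are our stand-ins for the ferroelectric device physics, not the authors' hardware.

```python
# Single-node (time-multiplexed) reservoir with a ridge readout: a toy sketch.
import numpy as np

rng = np.random.default_rng(0)

def single_node_reservoir(u, n_virtual=20, leak=0.3):
    """Drive one nonlinear node through virtual nodes; leak gives fading memory."""
    mask = rng.uniform(-1, 1, n_virtual)         # fixed random input mask
    states = np.zeros((len(u), n_virtual))
    x = np.zeros(n_virtual)
    for t, val in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(val * mask + 0.5 * np.roll(x, 1))
        states[t] = x
    return states

# Toy task: classify which of two "keywords" (tone frequencies) a signal contains.
t = np.linspace(0, 1, 100)
X, y = [], []
for label, f in [(0, 3), (1, 7)]:
    for _ in range(50):
        sig = np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(len(t))
        X.append(single_node_reservoir(sig)[-1])  # final reservoir state
        y.append(label)
X, y = np.array(X), np.array(y)

# Closed-form ridge-regression readout trained on the reservoir states.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
acc = ((X @ W > 0.5).astype(int) == y).mean()
print(f"toy accuracy: {acc:.2f}")
```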
Additional Links: PMID-39140427
@article {pmid39140427,
year = {2024},
author = {Cao, Y and Zhang, Z and Qin, BW and Sang, W and Li, H and Wang, T and Tan, F and Gan, Y and Zhang, X and Liu, T and Xiang, D and Lin, W and Liu, Q},
title = {Physical Reservoir Computing Using van der Waals Ferroelectrics for Acoustic Keyword Spotting.},
journal = {ACS nano},
volume = {},
number = {},
pages = {},
doi = {10.1021/acsnano.4c06144},
pmid = {39140427},
issn = {1936-086X},
}
RevDate: 2024-08-14
Balancing efficacy and computational burden: weighted mean, multiple imputation, and inverse probability weighting methods for item non-response in reliable scales.
Journal of the American Medical Informatics Association : JAMIA pii:7733273 [Epub ahead of print].
IMPORTANCE: Scales often arise from multi-item questionnaires, yet commonly face item non-response. Traditional solutions use weighted mean (WMean) from available responses, but potentially overlook missing data intricacies. Advanced methods like multiple imputation (MI) address broader missing data, but demand increased computational resources. Researchers frequently use survey data in the All of Us Research Program (All of Us), and it is imperative to determine if the increased computational burden of employing MI to handle non-response is justifiable.
OBJECTIVES: Using the 5-item Physical Activity Neighborhood Environment Scale (PANES) in All of Us, this study assessed the tradeoff between efficacy and computational demands of WMean, MI, and inverse probability weighting (IPW) when dealing with item non-response.
MATERIALS AND METHODS: Synthetic missingness, allowing 1 or more item non-response, was introduced into PANES across 3 missing mechanisms and various missing percentages (10%-50%). Each scenario compared WMean of complete questions, MI, and IPW on bias, variability, coverage probability, and computation time.
RESULTS: All methods showed minimal bias (all <5.5%) when internal consistency was good, with WMean suffering the most when consistency was poor. IPW showed considerable variability with increasing missing percentage. MI required significantly more computational resources, taking >8000 and >100 times longer than WMean and IPW, respectively, in full data analysis.
DISCUSSION AND CONCLUSION: The marginal performance advantages of MI for item non-response in highly reliable scales do not warrant its escalated cloud computational burden in All of Us, particularly when coupled with computationally demanding post-imputation analyses. Researchers using survey scales with low missingness could utilize WMean to reduce computing burden.
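The weighted-mean approach favored in this study is simple enough to state in a few lines. The sketch below scores a 5-item scale from the answered items only, treating NaN as non-response; equal item weights are assumed here, whereas a real scale may specify its own weights.

```python
# Weighted-mean (WMean) scale scoring under item non-response: a minimal sketch.
import numpy as np

def wmean_score(responses, weights=None):
    """Scale score as the weighted mean of answered items (NaN = non-response)."""
    r = np.asarray(responses, dtype=float)
    w = np.ones_like(r) if weights is None else np.asarray(weights, dtype=float)
    answered = ~np.isnan(r)
    if not answered.any():
        return np.nan                          # no usable items at all
    return np.sum(w[answered] * r[answered]) / np.sum(w[answered])

# A 5-item scale (PANES-like) with two items missing; equal weights assumed.
print(wmean_score([4, np.nan, 3, 4, np.nan]))
```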
Additional Links: PMID-39138951
@article {pmid39138951,
year = {2024},
author = {Guide, A and Garbett, S and Feng, X and Mapes, BM and Cook, J and Sulieman, L and Cronin, RM and Chen, Q},
title = {Balancing efficacy and computational burden: weighted mean, multiple imputation, and inverse probability weighting methods for item non-response in reliable scales.},
journal = {Journal of the American Medical Informatics Association : JAMIA},
volume = {},
number = {},
pages = {},
doi = {10.1093/jamia/ocae217},
pmid = {39138951},
issn = {1527-974X},
support = {3OT2OD035404/NH/NIH HHS/United States ; },
}
RevDate: 2024-08-13
CmpDate: 2024-08-13
End-to-end reproducible AI pipelines in radiology using the cloud.
Nature communications, 15(1):6931.
Artificial intelligence (AI) algorithms hold the potential to revolutionize radiology. However, a significant portion of the published literature lacks transparency and reproducibility, which hampers sustained progress toward clinical translation. Although several reporting guidelines have been proposed, identifying practical means to address these issues remains challenging. Here, we show the potential of cloud-based infrastructure for implementing and sharing transparent and reproducible AI-based radiology pipelines. We demonstrate end-to-end reproducibility from retrieving cloud-hosted data, through data pre-processing, deep learning inference, and post-processing, to the analysis and reporting of the final results. We successfully implement two distinct use cases, starting from recent literature on AI-based biomarkers for cancer imaging. Using cloud-hosted data and computing, we confirm the findings of these studies and extend the validation to previously unseen data for one of the use cases. Furthermore, we provide the community with transparent and easy-to-extend examples of pipelines impactful for the broader oncology field. Our approach demonstrates the potential of cloud resources for implementing, sharing, and using reproducible and transparent AI pipelines, which can accelerate the translation into clinical solutions.
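The abstract does not include code, but two ingredients of reproducible cloud pipelines that it emphasizes, verifiable inputs and deterministic steps, can be sketched generically: hash every input artifact and pin random seeds while logging each step's provenance. The function names and the JSON log format below are illustrative choices, not the authors' implementation.

```python
# Generic reproducibility helpers: content hashing plus seeded, logged steps.
import hashlib
import json
import random

import numpy as np

def sha256(path):
    """Content-hash an input file so a rerun can verify it used the same data."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def run_step(name, fn, seed=0, **params):
    """Run one pipeline step with pinned seeds and emit a provenance record."""
    random.seed(seed)
    np.random.seed(seed)
    result = fn(**params)
    print(json.dumps({"step": name, "seed": seed, "params": params}))
    return result

# Example: a deterministic toy 'inference' step that is trivially re-runnable.
scores = run_step("toy_inference", lambda scale: np.random.rand(3) * scale,
                  seed=42, scale=2.0)
print(scores)
```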
Additional Links: PMID-39138215
@article {pmid39138215,
year = {2024},
author = {Bontempi, D and Nuernberg, L and Pai, S and Krishnaswamy, D and Thiriveedhi, V and Hosny, A and Mak, RH and Farahani, K and Kikinis, R and Fedorov, A and Aerts, HJWL},
title = {End-to-end reproducible AI pipelines in radiology using the cloud.},
journal = {Nature communications},
volume = {15},
number = {1},
pages = {6931},
pmid = {39138215},
issn = {2041-1723},
support = {866504//EC | EU Framework Programme for Research and Innovation H2020 | H2020 Priority Excellent Science | H2020 European Research Council (H2020 Excellent Science - European Research Council)/ ; HHSN261201500003l//Foundation for the National Institutes of Health (Foundation for the National Institutes of Health, Inc.)/ ; },
mesh = {*Cloud Computing ; Humans ; *Artificial Intelligence ; Reproducibility of Results ; Deep Learning ; Radiology/methods/standards ; Algorithms ; Neoplasms/diagnostic imaging ; Image Processing, Computer-Assisted/methods ; },
}
RevDate: 2024-08-13
Volatile tin oxide memristor for neuromorphic computing.
iScience, 27(8):110479.
Neuromorphic systems have emerged to address the shortcomings of current computing architectures, especially regarding energy efficiency and scalability. These systems use cutting-edge technologies such as Pt/SnOx/TiN memristors, which efficiently mimic synaptic behavior and provide potential solutions to modern computing challenges. Moreover, their unipolar resistive switching ability enables precise modulation of the synaptic weights, facilitating energy-efficient parallel processing that is similar to biological synapses. Additionally, memristors' spike-rate-dependent plasticity enhances the adaptability of neural circuits, offering promising applications in intelligent computing. Integrating memristors into edge computing architectures further highlights their importance in tackling the security and efficiency issues associated with conventional cloud computing models.
Additional Links: PMID-39129832
@article {pmid39129832,
year = {2024},
author = {Ju, D and Kim, S},
title = {Volatile tin oxide memristor for neuromorphic computing.},
journal = {iScience},
volume = {27},
number = {8},
pages = {110479},
pmid = {39129832},
issn = {2589-0042},
}
RevDate: 2024-08-12
Design and Enhancement of a Fog-Enabled Air Quality Monitoring and Prediction System: An Optimized Lightweight Deep Learning Model for a Smart Fog Environmental Gateway.
Sensors (Basel, Switzerland), 24(15):.
Effective air quality monitoring and forecasting are essential for safeguarding public health, protecting the environment, and promoting sustainable development in smart cities. Conventional systems are cloud-based, incur high costs, lack accurate Deep Learning (DL) models for multi-step forecasting, and fail to optimize DL models for fog nodes. To address these challenges, this paper proposes a Fog-enabled Air Quality Monitoring and Prediction (FAQMP) system by integrating the Internet of Things (IoT), Fog Computing (FC), Low-Power Wide-Area Networks (LPWANs), and Deep Learning (DL) for improved accuracy and efficiency in monitoring and forecasting air quality levels. The three-layered FAQMP system includes a low-cost Air Quality Monitoring (AQM) node transmitting data via LoRa to the Fog Computing layer and then the cloud layer for complex processing. The Smart Fog Environmental Gateway (SFEG) in the FC layer introduces efficient Fog Intelligence by employing an optimized lightweight DL-based Sequence-to-Sequence (Seq2Seq) Gated Recurrent Unit (GRU) attention model, enabling real-time processing, accurate forecasting, and timely warnings of dangerous AQI levels while optimizing fog resource usage. Initially, the Seq2Seq GRU Attention model, validated for multi-step forecasting, outperformed the state-of-the-art DL methods with an average RMSE of 5.5576, MAE of 3.4975, MAPE of 19.1991%, R[2] of 0.6926, and Theil's U1 of 0.1325. The model was then made lightweight and optimized using post-training quantization (PTQ), specifically dynamic range quantization, which reduced the model size to less than a quarter of the original and improved execution time by 81.53% while maintaining forecast accuracy. This optimization enables efficient deployment on resource-constrained fog nodes like SFEG by balancing performance and computational efficiency, thereby enhancing the effectiveness of the FAQMP system through efficient Fog Intelligence. The FAQMP system, supported by the EnviroWeb application, provides real-time AQI updates, forecasts, and alerts, aiding the government in proactively addressing pollution concerns, maintaining air quality standards, and fostering a healthier and more sustainable environment.
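The dynamic range quantization step the abstract describes is a standard TensorFlow Lite workflow. The sketch below applies it to a placeholder GRU forecaster; the layer sizes, input shape, and file name are illustrative, and the paper's actual Seq2Seq attention architecture is not reproduced.

```python
# Post-training dynamic range quantization with TensorFlow Lite (sketch).
import tensorflow as tf

# Placeholder forecaster, NOT the paper's Seq2Seq GRU attention model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 6)),      # 24 time steps, 6 pollutant features
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(12),                 # 12-step-ahead forecast
])

# Dynamic range quantization: weights stored as int8, activations kept float,
# shrinking the model for deployment on a resource-constrained fog node.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("faqmp_forecaster_dr.tflite", "wb") as f:   # illustrative file name
    f.write(tflite_model)
print(f"quantized size: {len(tflite_model) / 1024:.1f} KiB")
```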
Additional Links: PMID-39124116
@article {pmid39124116,
year = {2024},
author = {Pazhanivel, DB and Velu, AN and Palaniappan, BS},
title = {Design and Enhancement of a Fog-Enabled Air Quality Monitoring and Prediction System: An Optimized Lightweight Deep Learning Model for a Smart Fog Environmental Gateway.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {15},
pages = {},
pmid = {39124116},
issn = {1424-8220},
}
RevDate: 2024-08-10
Architectures for Industrial AIoT Applications.
Sensors (Basel, Switzerland), 24(15): pii:s24154929.
Industry 4.0 introduced new concepts, technologies, and paradigms, such as Cyber Physical Systems (CPSs), Industrial Internet of Things (IIoT) and, more recently, Artificial Intelligence of Things (AIoT). These paradigms ease the creation of complex systems by integrating heterogeneous devices. As a result, the structure of production systems is changing completely. In this scenario, the adoption of reference architectures based on standards may guide designers and developers to create complex AIoT applications. This article surveys the main reference architectures available for industrial AIoT applications, analyzing their key characteristics, objectives, and benefits; it also presents some use cases that may help designers create new applications. The main goal of this review is to help engineers identify the alternative that best suits every application. The authors conclude that existing reference architectures are a necessary tool for standardizing AIoT applications, since they may guide developers in the process of developing new applications. However, the use of reference architectures in real industrial AIoT applications is still incipient, so more development effort is needed for them to be widely adopted.
Additional Links: PMID-39123976
@article {pmid39123976,
year = {2024},
author = {Villar, E and Martín Toral, I and Calvo, I and Barambones, O and Fernández-Bustamante, P},
title = {Architectures for Industrial AIoT Applications.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {15},
pages = {},
doi = {10.3390/s24154929},
pmid = {39123976},
issn = {1424-8220},
}
RevDate: 2024-08-08
Industry 4.0 Technologies in Maternal Health Care: Bibliometric Analysis and Research Agenda.
JMIR pediatrics and parenting, 7:e47848 pii:v7i1e47848.
BACKGROUND: Industry 4.0 (I4.0) technologies have improved operations in health care facilities by optimizing processes, leading to efficient systems and tools to assist health care personnel and patients.
OBJECTIVE: This study investigates the current implementation and impact of I4.0 technologies within maternal health care, explicitly focusing on transforming care processes, treatment methods, and automated pregnancy monitoring. Additionally, it conducts a thematic landscape mapping, offering a nuanced understanding of this emerging field. Building on this analysis, a future research agenda is proposed, highlighting critical areas for future investigations.
METHODS: A bibliometric analysis of publications retrieved from the Scopus database was conducted to examine how the research into I4.0 technologies in maternal health care evolved from 1985 to 2022. A search strategy was used to screen the eligible publications using the abstract and full-text reading. The most productive and influential journals; authors', institutions', and countries' influence on maternal health care; and current trends and thematic evolution were computed using the Bibliometrix R package (R Core Team).
RESULTS: A total of 1003 unique papers in English were retrieved using the search string, and 136 papers were retained after the inclusion and exclusion criteria were implemented, covering 37 years from 1985 to 2022. The annual growth rate of publications was 9.53%, with 88.9% (n=121) of the publications observed in 2016-2022. In the thematic analysis, 4 clusters were identified: artificial neural networks, data mining, machine learning, and the Internet of Things. Artificial intelligence, deep learning, risk prediction, digital health, telemedicine, wearable devices, mobile health care, and cloud computing remained the dominant research themes in 2016-2022.
CONCLUSIONS: This bibliometric analysis reviews the state of the art in the evolution and structure of I4.0 technologies in maternal health care and how they may be used to optimize operational processes. A conceptual framework with 4 performance factors (risk prediction, hospital care, health record management, and self-care) is suggested for process improvement. A research agenda is also proposed for governance, adoption, infrastructure, privacy, and security.
Additional Links: PMID-39116433
@article {pmid39116433,
year = {2024},
author = {Sibanda, K and Ndayizigamiye, P and Twinomurinzi, H},
title = {Industry 4.0 Technologies in Maternal Health Care: Bibliometric Analysis and Research Agenda.},
journal = {JMIR pediatrics and parenting},
volume = {7},
number = {},
pages = {e47848},
doi = {10.2196/47848},
pmid = {39116433},
issn = {2561-6722},
}
RevDate: 2024-08-07
Mapping agricultural tile drainage in the US Midwest using explainable random forest machine learning and satellite imagery.
The Science of the total environment pii:S0048-9697(24)05433-0 [Epub ahead of print].
There has been an increase in tile-drained area across the US Midwest and other regions worldwide due to agricultural expansion, intensification, and climate variability. Despite this growth, spatially explicit tile drainage maps remain scarce, which limits the accuracy of hydrologic modeling and implementation of nutrient reduction strategies. Here, we developed a machine-learning model to provide a Spatially Explicit Estimate of Tile Drainage (SEETileDrain) across the US Midwest in 2017 at a 30-m resolution. This model used 31 satellite-derived and environmental features after removing less important and highly correlated features. It was trained with 60,938 tile and non-tile ground truth points within the Google Earth Engine cloud-computing platform. We also used multiple feature importance metrics and Accumulated Local Effects to interpret the machine learning model. The results show that our model achieved good accuracy, with 96% of points classified correctly and an F1 score of 0.90. When tile drainage area is aggregated to the county scale, it agreed well (r[2] = 0.69) with the reported area from the Ag Census. We found that Land Surface Temperature (LST) along with climate- and soil-related features were the most important factors for classification. The top-ranked feature is the median summer nighttime LST, followed by median summer soil moisture percent. This study demonstrates the potential of applying satellite remote sensing to map spatially explicit agricultural tile drainage across large regions. The results should be useful for land use change monitoring and hydrologic and nutrient models, including those designed to achieve cost-effective agricultural water and nutrient management strategies. The algorithms developed here should also be applicable for other remote sensing mapping applications.
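The supervised-classification workflow described here maps onto a few Earth Engine API calls. The snippet below shows the general pattern with the Python API; the asset IDs, band stack, label property, and tree count are hypothetical placeholders, not the study's actual assets or tuned parameters.

```python
# Random forest classification in Google Earth Engine (illustrative sketch).
import ee

ee.Initialize()

# Hypothetical assets: a predictor-band stack and labeled tile/non-tile points.
features = ee.Image("users/example/seetiledrain_feature_stack")
points = ee.FeatureCollection("users/example/tile_truth_points")  # label: "tile"

# Sample the predictor bands at the ground-truth points (30 m, as in the paper).
training = features.sampleRegions(collection=points, properties=["tile"], scale=30)

# Train a random forest and classify the full feature stack.
rf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training,
    classProperty="tile",
    inputProperties=features.bandNames(),
)
tile_map = features.classify(rf)   # 30-m tile-drainage probability/class map
```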
Additional Links: PMID-39111449
@article {pmid39111449,
year = {2024},
author = {Wan, L and Kendall, AD and Rapp, J and Hyndman, DW},
title = {Mapping agricultural tile drainage in the US Midwest using explainable random forest machine learning and satellite imagery.},
journal = {The Science of the total environment},
volume = {},
number = {},
pages = {175283},
doi = {10.1016/j.scitotenv.2024.175283},
pmid = {39111449},
issn = {1879-1026},
}
RevDate: 2024-08-06
CmpDate: 2024-08-06
Towards understanding climate change impacts: monitoring the vegetation dynamics of terrestrial national parks in Indonesia.
Scientific reports, 14(1):18257.
Monitoring vegetation dynamics in terrestrial national parks (TNPs) is crucial for ensuring sustainable environmental management, mitigating the potential negative impacts of short- and long-term disturbances, and understanding the effect of climate change within natural and protected areas. This study aims to monitor the vegetation dynamics of TNPs in Indonesia by first categorizing them into the regions of Sumatra, Jawa, Kalimantan, Sulawesi, and Eastern Indonesia and then applying ready-to-use MODIS EVI time-series imagery (MOD13Q1) taken from 2000 to 2022 on the GEE cloud-computing platform. Specifically, this research investigates the greening and browning fraction trends using Sen's slope, considers seasonality by analyzing the maximum and minimum EVI values, and assesses anomalous years by comparing the annual time series and long-term median EVI value. The findings reveal significantly increasing greening trends in most TNPs, except Danau Sentarum, from 2000 to 2022. The seasonality analysis shows that most TNPs exhibit peak and trough greenness at the end of the rainy and dry seasons, respectively, as the vegetation response to precipitation increases and decreases. Anomalies in seasonality affected by climate change were detected in all of the regions. To increase the resilience of TNPs, suggested measures include active reforestation, implementation of Assisted Natural Regeneration, strengthened enforcement of fundamental managerial tasks, and forest fire management.
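Sen's slope, the trend estimator used above, is the median of the slopes over all pairs of observations, which makes it robust to outliers in noisy EVI series. A minimal sketch with a toy annual EVI series (the values are invented, not the study's data):

```python
# Theil-Sen (Sen's slope) trend estimator: median of all pairwise slopes.
import numpy as np

def sens_slope(y):
    """Return the median slope over all pairs of time points in series y."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    i, j = np.triu_indices(len(y), k=1)      # all pairs with j > i
    return float(np.median((y[j] - y[i]) / (t[j] - t[i])))

evi = np.array([0.41, 0.43, 0.40, 0.45, 0.47, 0.46, 0.49])  # toy annual EVI
print(f"Sen's slope: {sens_slope(evi):+.4f} EVI/yr")        # > 0 means greening
```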
Additional Links: PMID-39107423
@article {pmid39107423,
year = {2024},
author = {Ramdani, F and Setiani, P and Sianturi, R},
title = {Towards understanding climate change impacts: monitoring the vegetation dynamics of terrestrial national parks in Indonesia.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {18257},
pmid = {39107423},
issn = {2045-2322},
mesh = {*Climate Change ; Indonesia ; *Parks, Recreational ; *Conservation of Natural Resources ; Seasons ; Environmental Monitoring/methods ; Ecosystem ; Plants ; },
}
RevDate: 2024-08-05
CmpDate: 2024-08-05
Transcriptomics and epigenetic data integration learning module on Google Cloud.
Briefings in bioinformatics, 25(Supplement_1):.
Multi-omics (genomics, transcriptomics, epigenomics, proteomics, metabolomics, etc.) research approaches are vital for understanding the hierarchical complexity of human biology and have proven to be extremely valuable in cancer research and precision medicine. Emerging scientific advances in recent years have made high-throughput genome-wide sequencing a central focus in molecular research by allowing for the collective analysis of various kinds of molecular biological data from different types of specimens in a single tissue or even at the level of a single cell. Additionally, with the help of improved computational resources and data mining, researchers are able to integrate data from different multi-omics regimes to identify new prognostic, diagnostic, or predictive biomarkers, uncover novel therapeutic targets, and develop more personalized treatment protocols for patients. For the research community to parse the scientifically and clinically meaningful information out of all the biological data being generated each day more efficiently with fewer wasted resources, being familiar with and comfortable using advanced analytical tools, such as the Google Cloud Platform, becomes imperative. This project is an interdisciplinary, cross-organizational effort to provide a guided learning module for integrating transcriptomics and epigenetics data analysis protocols into a comprehensive analysis pipeline for users to implement in their own work, utilizing the cloud computing infrastructure on Google Cloud. The learning module consists of three submodules that guide the user through tutorial examples that illustrate the analysis of RNA-sequence and Reduced-Representation Bisulfite Sequencing data. The examples are in the form of breast cancer case studies, and the data sets were procured from the public repository Gene Expression Omnibus. The first submodule is devoted to transcriptomics analysis with the RNA sequencing data, the second submodule focuses on epigenetics analysis using the DNA methylation data, and the third submodule integrates the two methods for a deeper biological understanding. The modules begin with data collection and preprocessing, with further downstream analysis performed in a Vertex AI Jupyter notebook instance with an R kernel. Analysis results are returned to Google Cloud buckets for storage and visualization, removing the computational strain from local resources. The final product is a start-to-finish tutorial for researchers with limited experience in multi-omics to integrate transcriptomics and epigenetics data analysis into a comprehensive pipeline to perform their own biological research. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [16] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
Additional Links: PMID-39101486
@article {pmid39101486,
year = {2024},
author = {Ruprecht, NA and Kennedy, JD and Bansal, B and Singhal, S and Sens, D and Maggio, A and Doe, V and Hawkins, D and Campbel, R and O'Connell, K and Gill, JS and Schaefer, K and Singhal, SK},
title = {Transcriptomics and epigenetic data integration learning module on Google Cloud.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
pmid = {39101486},
issn = {1477-4054},
support = {P20GM103442//National Institute of General Medical Sciences of the National Institutes of Health/ ; },
mesh = {Humans ; *Cloud Computing ; *Epigenomics/methods ; Epigenesis, Genetic ; Transcriptome ; Computational Biology/methods ; Gene Expression Profiling/methods ; Software ; Data Mining/methods ; },
}
RevDate: 2024-08-04
Trust value evaluation of cloud service providers using fuzzy inference based analytical process.
Scientific reports, 14(1):18028.
Cloud computing is a novel and innovative way of computing in which users purchase virtualized computer resources on demand. It offers numerous advantages for the IT and healthcare industries over traditional methods. However, a lack of trust between cloud service users (CSUs) and cloud service providers (CSPs) is hindering the widespread adoption of cloud computing across industries. Since cloud computing offers a wide range of trust models and strategies, it is essential to analyze each service using a detailed methodology in order to choose the appropriate cloud service for various user types. Achieving this requires identifying a comprehensive set of elements that are both necessary and sufficient for evaluating any cloud service. As a result, this study proposes an accurate, fuzzy logic-based trust evaluation model for evaluating the trustworthiness of a cloud service provider, and examines how fuzzy logic improves the efficiency of trust evaluation. Trust is assessed using Quality of Service (QoS) characteristics like security, privacy, dynamicity, data integrity, and performance. The outcomes of a MATLAB simulation demonstrate the viability of the suggested strategy in a cloud setting.
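The abstract reports a MATLAB simulation; as a language-neutral illustration of the idea, the sketch below fuzzifies the five QoS attributes with triangular membership functions, defuzzifies each by a centroid average, and aggregates them into a single trust value. The membership breakpoints and attribute weights are invented for illustration and are not the paper's calibration.

```python
# Fuzzy-logic trust evaluation over QoS attributes: an illustrative sketch.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

def trust_score(qos):
    """Map QoS measurements in [0, 1] to a crisp trust value in [0, 1]."""
    levels = {}
    for name, x in qos.items():
        low = tri(x, -0.5, 0.0, 0.5)
        med = tri(x, 0.0, 0.5, 1.0)
        high = tri(x, 0.5, 1.0, 1.5)
        # Defuzzify each attribute as the weighted average of level centroids.
        levels[name] = (0.0 * low + 0.5 * med + 1.0 * high) / (low + med + high)
    weights = {"security": 0.3, "privacy": 0.2, "dynamicity": 0.1,
               "integrity": 0.2, "performance": 0.2}   # illustrative weights
    return sum(weights[k] * levels[k] for k in weights)

qos = {"security": 0.8, "privacy": 0.7, "dynamicity": 0.6,
       "integrity": 0.9, "performance": 0.75}
print(f"trust value: {trust_score(qos):.3f}")
```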
Additional Links: PMID-39098886
@article {pmid39098886,
year = {2024},
author = {John, J and John Singh, K},
title = {Trust value evaluation of cloud service providers using fuzzy inference based analytical process.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {18028},
pmid = {39098886},
issn = {2045-2322},
}
RevDate: 2024-08-03
Cloud computing load prediction method based on CNN-BiLSTM model under low-carbon background.
Scientific reports, 14(1):18004.
With the establishment of the "double carbon" goal, various industries are actively exploring ways to reduce carbon emissions. Cloud data centers, represented by cloud computing, often suffer from a mismatch between load requests and resource supply, resulting in excessive carbon emissions. Based on this, this paper proposes a complete method for cloud computing carbon emission prediction. First, a combined convolutional neural network and bidirectional long short-term memory network (CNN-BiLSTM) model is used to predict the cloud computing load. Real-time power predictions are derived from the real-time load predictions, and carbon emission predictions are then obtained through a power calculation. A dynamic server carbon emission prediction model is developed so that server carbon emissions change with CPU utilization, serving the goal of low-carbon emission reduction. Google cluster data are used for load prediction. The experimental results show that the combined CNN-BiLSTM model has good prediction performance. Compared with the multi-layer feed-forward neural network model (BP), the long short-term memory network model (LSTM), the bidirectional long short-term memory network model (BiLSTM), and the modal decomposition and convolutional long time-series neural network model (CEEMDAN-ConvLSTM), the MSE decreased by 52%, 50%, 34%, and 45%, respectively.
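A CNN-BiLSTM forecaster of the kind described combines a convolutional front end for local load patterns with a bidirectional LSTM for temporal context. A minimal Keras sketch follows; the window length and the filter and unit counts are placeholder choices rather than the paper's tuned hyperparameters, and training on the Google cluster traces is omitted.

```python
# CNN-BiLSTM load forecaster skeleton (placeholder hyperparameters).
import tensorflow as tf

WINDOW, FEATURES, HORIZON = 48, 1, 1   # 48 past load samples -> next value

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    # Convolution extracts local load patterns from the window.
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    # Bidirectional LSTM models temporal context in both directions.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(HORIZON),    # predicted load, convertible to power/CO2
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```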
Additional Links: PMID-39097607
@article {pmid39097607,
year = {2024},
author = {Zhang, H and Li, J and Yang, H},
title = {Cloud computing load prediction method based on CNN-BiLSTM model under low-carbon background.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {18004},
pmid = {39097607},
issn = {2045-2322},
support = {XJ2023004301//Basic scientific research business fee of central colleges and universities/ ; },
}
RevDate: 2024-08-02
Leonhard Med, a trusted research environment for processing sensitive research data.
Journal of integrative bioinformatics [Epub ahead of print].
This paper provides an overview of the development and operation of the Leonhard Med Trusted Research Environment (TRE) at ETH Zurich. Leonhard Med gives scientific researchers the ability to securely work on sensitive research data. We give an overview of the user perspective, the legal framework for processing sensitive data, design history, current status, and operations. Leonhard Med is an efficient, highly secure Trusted Research Environment for data processing, hosted at ETH Zurich and operated by the Scientific IT Services (SIS) of ETH. It provides a full stack of security controls that allow researchers to store, access, manage, and process sensitive data according to Swiss legislation and ETH Zurich Data Protection policies. In addition, Leonhard Med fulfills the BioMedIT Information Security Policies and is compatible with international data protection laws and therefore can be utilized within the scope of national and international collaboration research projects. Initially designed as a "bare-metal" High-Performance Computing (HPC) platform to achieve maximum performance, Leonhard Med was later re-designed as a virtualized, private cloud platform to offer more flexibility to its customers. Sensitive data can be analyzed in secure, segregated spaces called tenants. Technical and Organizational Measures (TOMs) are in place to assure the confidentiality, integrity, and availability of sensitive data. At the same time, Leonhard Med ensures broad access to cutting-edge research software, especially for the analysis of human -omics data and other personalized health applications.
Additional Links: PMID-39092509
@article {pmid39092509,
year = {2024},
author = {Okoniewski, MJ and Wiegand, A and Schmid, DC and Bolliger, C and Bovino, C and Belluco, M and Wüst, T and Byrde, O and Maffioletti, S and Rinn, B},
title = {Leonhard Med, a trusted research environment for processing sensitive research data.},
journal = {Journal of integrative bioinformatics},
volume = {},
number = {},
pages = {},
pmid = {39092509},
issn = {1613-4516},
abstract = {This paper provides an overview of the development and operation of the Leonhard Med Trusted Research Environment (TRE) at ETH Zurich. Leonhard Med gives scientific researchers the ability to securely work on sensitive research data. We give an overview of the user perspective, the legal framework for processing sensitive data, design history, current status, and operations. Leonhard Med is an efficient, highly secure Trusted Research Environment for data processing, hosted at ETH Zurich and operated by the Scientific IT Services (SIS) of ETH. It provides a full stack of security controls that allow researchers to store, access, manage, and process sensitive data according to Swiss legislation and ETH Zurich Data Protection policies. In addition, Leonhard Med fulfills the BioMedIT Information Security Policies and is compatible with international data protection laws and therefore can be utilized within the scope of national and international collaboration research projects. Initially designed as a "bare-metal" High-Performance Computing (HPC) platform to achieve maximum performance, Leonhard Med was later re-designed as a virtualized, private cloud platform to offer more flexibility to its customers. Sensitive data can be analyzed in secure, segregated spaces called tenants. Technical and Organizational Measures (TOMs) are in place to assure the confidentiality, integrity, and availability of sensitive data. At the same time, Leonhard Med ensures broad access to cutting-edge research software, especially for the analysis of human -omics data and other personalized health applications.},
}
RevDate: 2024-08-01
CmpDate: 2024-08-01
Optimized intrusion detection in IoT and fog computing using ensemble learning and advanced feature selection.
PloS one, 19(8):e0304082.
The proliferation of Internet of Things (IoT) devices and fog computing architectures has introduced major security and cyber threats. Intrusion detection systems have become effective at monitoring network traffic and activities to identify anomalies indicative of attacks. However, constraints such as limited computing resources at fog nodes render conventional intrusion detection techniques impractical. This paper proposes a novel framework that integrates stacked autoencoders, CatBoost, and an optimised transformer-CNN-LSTM ensemble tailored for intrusion detection in fog and IoT networks. Autoencoders extract robust features from high-dimensional traffic data while reducing dimensionality for efficiency at fog nodes. CatBoost refines features through predictive selection. The ensemble model combines self-attention, convolutions, and recurrence for comprehensive traffic analysis in the cloud. Evaluations on the NSL-KDD, UNSW-NB15, and AWID benchmarks demonstrate an accuracy of over 99% in detecting threats across traditional, hybrid enterprise, and wireless environments. Integrated edge preprocessing and cloud-based ensemble learning pipelines enable efficient and accurate anomaly detection. The results highlight the viability of securing real-world fog and IoT infrastructure against continuously evolving cyber-attacks.
Additional Links: PMID-39088558
@article {pmid39088558,
year = {2024},
author = {Tawfik, M},
title = {Optimized intrusion detection in IoT and fog computing using ensemble learning and advanced feature selection.},
journal = {PloS one},
volume = {19},
number = {8},
pages = {e0304082},
pmid = {39088558},
issn = {1932-6203},
mesh = {*Cloud Computing ; *Internet of Things ; Computer Security ; Neural Networks, Computer ; Algorithms ; Machine Learning ; },
abstract = {The proliferation of Internet of Things (IoT) devices and fog computing architectures has introduced major security and cyber threats. Intrusion detection systems have become effective in monitoring network traffic and activities to identify anomalies that are indicative of attacks. However, constraints such as limited computing resources at fog nodes render conventional intrusion detection techniques impractical. This paper proposes a novel framework that integrates stacked autoencoders, CatBoost, and an optimised transformer-CNN-LSTM ensemble tailored for intrusion detection in fog and IoT networks. Autoencoders extract robust features from high-dimensional traffic data while reducing the dimensionality of the efficiency at fog nodes. CatBoost refines features through predictive selection. The ensemble model combines self-attention, convolutions, and recurrence for comprehensive traffic analysis in the cloud. Evaluations of the NSL-KDD, UNSW-NB15, and AWID benchmarks demonstrate an accuracy of over 99% in detecting threats across traditional, hybrid enterprises and wireless environments. Integrated edge preprocessing and cloud-based ensemble learning pipelines enable efficient and accurate anomaly detection. The results highlight the viability of securing real-world fog and the IoT infrastructure against continuously evolving cyber-attacks.},
}
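A rough sketch of the edge-side portion of such a pipeline: an undercomplete autoencoder compresses traffic features and CatBoost scores them for selection. Dataset, layer sizes, and settings below are placeholders, not the paper's configuration.

```python
# Sketch: autoencoder feature compression at the fog side, followed by
# CatBoost-based feature scoring, loosely following the abstract's pipeline.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40)).astype("float32")  # stand-in traffic features
y = rng.integers(0, 2, size=1000)                  # 0 = benign, 1 = attack

# Undercomplete autoencoder: the bottleneck is the compressed representation.
inp = layers.Input(shape=(40,))
code = layers.Dense(10, activation="relu")(layers.Dense(24, activation="relu")(inp))
out = layers.Dense(40)(layers.Dense(24, activation="relu")(code))
auto = models.Model(inp, out)
auto.compile(optimizer="adam", loss="mse")
auto.fit(X, X, epochs=3, batch_size=32, verbose=0)

encoder = models.Model(inp, code)
Z = encoder.predict(X, verbose=0)

# CatBoost ranks the compressed features; low-importance ones could be
# dropped before the heavier cloud-side transformer-CNN-LSTM ensemble.
clf = CatBoostClassifier(iterations=100, verbose=False).fit(Z, y)
print(clf.get_feature_importance())
```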
RevDate: 2024-07-31
Electron-driven molecular processes for cyanopolyacetylenes HC2n+1N (n = 3, 4, and 5).
Physical chemistry chemical physics : PCCP [Epub ahead of print].
Linear carbon series cyanopolyacetylenes (HC2n+1N) (n = 3, 4, and 5) are astromolecules found in the atmosphere of Titan and interstellar media such as TMC-1 (Taurus molecular cloud-1). All these compounds are also detected in IRC +10216. In the present work, we comprehensively investigate electron interaction with important cyanopolyacetylene compounds, viz. HC7N (cyano-tri-acetylene), HC9N (cyano-tetra-acetylene), and HC11N (cyano-penta-acetylene). The study covers incident electron energies ranging from the ionization threshold to 5 keV. Various electron-driven molecular processes are quantified in terms of total cross-sections. The quantum spherical complex optical potential (SCOP) is used to determine elastic (Qel) and inelastic (Qinel) cross-sections. Ionization is the most important inelastic effect that opens various chemical pathways for the generation of different molecular species; we computed the ionization cross-section (Qion) and discrete electronic excitation cross-section (ΣQexc) using the complex scattering potential-ionization contribution (CSP-ic) method. The cyanopolyacetylene compounds are difficult to handle experimentally owing to the health risks involved. Therefore, there are no prior experimental data available for these molecules; only Qion have been reported theoretically. Thus, the present work is the maiden report on computing Qel, Qinel, ΣQexc, and QT. In order to provide an alternative approach and further validation of the present work, we employed our recently developed two-parameter semi-empirical method (2p-SEM) to compute Qel and QT. Additionally, we predict the polarizability of the HC11N molecule, which has not been reported in the existing literature. This prediction is based on a correlation study of polarizabilities of molecules with Qion values from the same series of molecules.
Additional Links: PMID-39081193
@article {pmid39081193,
year = {2024},
author = {Mer, P and Limbachiya, C},
title = {Electron-driven molecular processes for cyanopolyacetylenes HC2n+1N (n = 3, 4, and 5).},
journal = {Physical chemistry chemical physics : PCCP},
volume = {},
number = {},
pages = {},
doi = {10.1039/d4cp02665a},
pmid = {39081193},
issn = {1463-9084},
abstract = {Linear carbon series cyanopolyacetylenes (HC2n+1N) (n = 3, 4, and 5) are astromolecules found in the atmosphere of Titan and interstellar media such as TMC-1 (Taurus molecular cloud-1). All these compounds are also detected in IRC + 10 216. In the present work, we comprehensively investigate electron interaction with important cyanopolyacetylene compounds, viz. HC7N (cyano-tri-acetylene), HC9N (cyano-tetra-acetylene), and HC11N (cyano-penta-acetylene). The study covers incident electron energies ranging from the ionization threshold to 5 keV. Various electron-driven molecular processes are quantified in terms of total cross-sections. The quantum spherical complex optical potential (SCOP) is used to determine elastic (Qel) and inelastic (Qinel) cross-sections. Ionization is the most important inelastic effect that opens various chemical pathways for the generation of different molecular species; we computed the ionization cross-section (Qion) and discrete electronic excitation cross-section (ΣQexc) using the complex scattering potential-ionization contribution (CSP-ic) method. The cyanopolyacetylene compounds are difficult to handle experimentally owing to the health risks involved. Therefore, there are no prior experimental data available for these molecules; only Qion have been reported theoretically. Thus, the present work is the maiden report on computing Qel, Qinel, ΣQexc, and QT. In order to provide an alternative approach and further validation of the present work, we employed our recently developed two-parameter semi-empirical method (2p-SEM) to compute Qel and QT. Additionally, we predict the polarizability of the HC11N molecule, which has not been reported in the existing literature. This prediction is based on a correlation study of polarizabilities of molecules with Qion values from the same series of molecules.},
}
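For orientation, the cross-sections named in this abstract are commonly related as follows in the SCOP/CSP-ic framework (a standard decomposition stated from the general literature, not taken from this paper's own equations):

```latex
Q_T(E_i) = Q_{\mathrm{el}}(E_i) + Q_{\mathrm{inel}}(E_i), \qquad
Q_{\mathrm{inel}}(E_i) \approx Q_{\mathrm{ion}}(E_i) + \Sigma Q_{\mathrm{exc}}(E_i)
```

where CSP-ic apportions the inelastic cross-section between ionization and the summed discrete electronic excitations, with ionization dominating at energies well above threshold.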
RevDate: 2024-07-30
AI Accelerator with Ultralightweight Time-Period CNN-Based Model for Arrhythmia Classification.
IEEE transactions on biomedical circuits and systems, PP: [Epub ahead of print].
This work proposes a classification system for arrhythmias, aiming to enhance the efficiency of the diagnostic process for cardiologists. The proposed algorithm includes a naive preprocessing procedure for electrocardiography (ECG) data that is applicable to various ECG databases. Additionally, this work proposes an ultralightweight model for arrhythmia classification based on a convolutional neural network that incorporates R-peak interval features to represent long-term rhythm information, thereby improving the model's classification performance. The proposed model is trained and tested using the MIT-BIH and NCKU-CBIC databases in accordance with the classification standards of the Association for the Advancement of Medical Instrumentation (AAMI), achieving high accuracies of 98.32% and 97.1%. This work applies the arrhythmia classification algorithm to a web-based system, thus providing a graphical interface. The cloud-based execution of automated artificial intelligence (AI) classification allows cardiologists and patients to view ECG wave conditions instantly, thereby remarkably enhancing the quality of medical examination. This work also designs a customized integrated circuit for the hardware implementation of an AI accelerator. The accelerator utilizes a parallelized processing element array architecture to perform convolution and fully connected layer operations. It introduces hybrid stationary techniques, combining input- and weight-stationary modes to drastically increase data reuse and reduce hardware execution cycles and power consumption, ultimately achieving high-performance computing. The accelerator is implemented as a chip in the TSMC 180 nm CMOS process. It exhibits a power consumption of 122 μW, a classification latency of 6.8 ms, and an energy efficiency of 0.83 μJ per classification.
Additional Links: PMID-39078761
@article {pmid39078761,
year = {2024},
author = {Lee, SY and Ku, MY and Tseng, WC and Chen, JY},
title = {AI Accelerator with Ultralightweight Time-Period CNN-Based Model for Arrhythmia Classification.},
journal = {IEEE transactions on biomedical circuits and systems},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TBCAS.2024.3435718},
pmid = {39078761},
issn = {1940-9990},
abstract = {This work proposes a classification system for arrhythmias, aiming to enhance the efficiency of the diagnostic process for cardiologists. The proposed algorithm includes a naive preprocessing procedure for electrocardiography (ECG) data applicable to various ECG databases. Additionally, this work proposes an ultralightweight model for arrhythmia classification based on a convolutional neural network and incorporating R-peak interval features to represent long-term rhythm information, thereby improving the model's classification performance. The proposed model is trained and tested by using the MIT-BIH and NCKU-CBIC databases in accordance with the classification standards of the Association for the Advancement of Medical Instrumentation (AAMI), achieving high accuracies of 98.32% and 97.1%. This work applies the arrhythmia classification algorithm to a web-based system, thus providing a graphical interface. The cloud-based execution of automated artificial intelligence (AI) classification allows cardiologists and patients to view ECG wave conditions instantly, thereby remarkably enhancing the quality of medical examination. This work also designs a customized integrated circuit for the hardware implementation of an AI accelerator. The accelerator utilizes a parallelized processing element array architecture to perform convolution and fully connected layer operations. It introduces proposed hybrid stationary techniques, combining input and weight stationary modes to increase data reuse drastically and reduce hardware execution cycles and power consumption, ultimately achieving high-performance computing. This accelerator is implemented in the form of a chip by using the TSMC 180 nm CMOS process. It exhibits a power consumption of 122 μW, a classification latency of 6.8 ms, and an energy efficiency of 0.83 μJ/classification.},
}
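A minimal sketch of an ultralight beat-level CNN that concatenates R-peak (RR) interval features before the classifier head, in the spirit of the time-period model described above. Input shapes, filter counts, and the five AAMI classes are assumptions, not the paper's exact architecture.

```python
# Sketch: small CNN over a single-beat ECG window plus RR-interval features.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

beat = layers.Input(shape=(180, 1), name="beat")   # resampled beat (assumed)
rr = layers.Input(shape=(4,), name="rr_features")  # e.g. pre/post RR and ratios

x = layers.Conv1D(8, 7, activation="relu")(beat)   # ultralight: few filters
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(16, 5, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Concatenate()([x, rr])                  # inject long-term rhythm context
out = layers.Dense(5, activation="softmax")(x)     # AAMI classes N, S, V, F, Q

model = models.Model([beat, rr], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```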
RevDate: 2024-07-30
IoT-based emergency cardiac death risk rescue alert system.
MethodsX, 13:102834.
The use of technology in healthcare is one of today's most important application areas, and the development of medical applications has improved people's quality of life. However, it is impractical and unnecessary for medium-risk people to receive specialized daily monitoring in hospital; at the same time, without any monitoring they remain exposed to the risk of severe or even life-threatening events. Remote, real-time, low-cost, wearable, and effective monitoring is therefore ideal for this group. Many studies have noted that electrocardiogram (ECG) monitoring can detect emergencies, but how to respond to detected emergencies in household settings remains a research gap in this field.•This paper proposes real-time monitoring of ECG signals, which are sent to the cloud for Sudden Cardiac Death (SCD) prediction.•Unlike previous studies, the proposed system includes an additional emergency response mechanism that alerts nearby community healthcare workers when SCD is predicted to occur.
Additional Links: PMID-39071997
@article {pmid39071997,
year = {2024},
author = {Rehman, SU and Sadek, I and Huang, B and Manickam, S and Mahmoud, LN},
title = {IoT-based emergency cardiac death risk rescue alert system.},
journal = {MethodsX},
volume = {13},
number = {},
pages = {102834},
pmid = {39071997},
issn = {2215-0161},
abstract = {The use of technology in healthcare is one of the most critical application areas today. With the development of medical applications, people's quality of life has improved. However, it is impractical and unnecessary for medium-risk people to receive specialized daily hospital monitoring. Due to their health status, they will be exposed to a high risk of severe health damage or even life-threatening conditions without monitoring. Therefore, remote, real-time, low-cost, wearable, and effective monitoring is ideal for this problem. Many researchers mentioned that their studies could use electrocardiogram (ECG) detection to discover emergencies. However, how to respond to discovered emergencies in household life is still a research gap in this field.•This paper proposes a real-time monitoring of ECG signals and sending them to the cloud for Sudden Cardiac Death (SCD) prediction.•Unlike previous studies, the proposed system has an additional emergency response mechanism to alert nearby community healthcare workers when SCD is predicted to occur.},
}
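A minimal sketch of the edge-to-cloud loop such a system implies. The endpoints, payload layout, and risk threshold are hypothetical placeholders; the paper does not specify its protocol here.

```python
# Sketch: wearable edge device posts ECG windows to a (hypothetical) cloud
# endpoint running the SCD predictor, then triggers an alert to community
# healthcare workers if the returned risk crosses a threshold.
import time
import requests

CLOUD_URL = "https://cloud.example.org/scd/predict"  # placeholder endpoint
ALERT_URL = "https://cloud.example.org/scd/alert"    # placeholder endpoint

def read_ecg_window(seconds=10, fs=250):
    """Stand-in for the wearable ADC driver; returns fs*seconds samples."""
    import random
    return [random.gauss(0.0, 0.1) for _ in range(fs * seconds)]

for _ in range(3):  # a deployed device would loop indefinitely
    window = read_ecg_window()
    resp = requests.post(CLOUD_URL, json={"patient": "p001", "ecg": window})
    risk = resp.json().get("scd_risk", 0.0)
    if risk > 0.9:  # threshold is an assumption
        requests.post(ALERT_URL, json={"patient": "p001", "risk": risk})
    time.sleep(10)
```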
RevDate: 2024-07-26
Wigner kernels: Body-ordered equivariant machine learning without a basis.
The Journal of chemical physics, 161(4):.
Machine-learning models based on a point-cloud representation of a physical object are ubiquitous in scientific applications and particularly well-suited to the atomic-scale description of molecules and materials. Among the many different approaches that have been pursued, the description of local atomic environments in terms of their discretized neighbor densities has been used widely and very successfully. We propose a novel density-based method, which involves computing "Wigner kernels." These are fully equivariant and body-ordered kernels that can be computed iteratively at a cost that is independent of the basis used to discretize the density and grows only linearly with the maximum body-order considered. Wigner kernels represent the infinite-width limit of feature-space models, whose dimensionality and computational cost instead scale exponentially with the increasing order of correlations. We present several examples of the accuracy of models based on Wigner kernels in chemical applications, for both scalar and tensorial targets, reaching an accuracy that is competitive with state-of-the-art deep-learning architectures. We discuss the broader relevance of these findings to equivariant geometric machine-learning.
Additional Links: PMID-39056390
@article {pmid39056390,
year = {2024},
author = {Bigi, F and Pozdnyakov, SN and Ceriotti, M},
title = {Wigner kernels: Body-ordered equivariant machine learning without a basis.},
journal = {The Journal of chemical physics},
volume = {161},
number = {4},
pages = {},
doi = {10.1063/5.0208746},
pmid = {39056390},
issn = {1089-7690},
abstract = {Machine-learning models based on a point-cloud representation of a physical object are ubiquitous in scientific applications and particularly well-suited to the atomic-scale description of molecules and materials. Among the many different approaches that have been pursued, the description of local atomic environments in terms of their discretized neighbor densities has been used widely and very successfully. We propose a novel density-based method, which involves computing "Wigner kernels." These are fully equivariant and body-ordered kernels that can be computed iteratively at a cost that is independent of the basis used to discretize the density and grows only linearly with the maximum body-order considered. Wigner kernels represent the infinite-width limit of feature-space models, whose dimensionality and computational cost instead scale exponentially with the increasing order of correlations. We present several examples of the accuracy of models based on Wigner kernels in chemical applications, for both scalar and tensorial targets, reaching an accuracy that is competitive with state-of-the-art deep-learning architectures. We discuss the broader relevance of these findings to equivariant geometric machine-learning.},
}
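The Wigner-kernel recursion itself (iterated equivariant contractions) is beyond a short sketch, but any precomputed body-ordered kernel enters a model through standard kernel regression. A toy illustration with a stand-in kernel, not the authors' construction:

```python
# Sketch: how a precomputed kernel matrix plugs into kernel ridge regression.
# K below is a placeholder; elementwise powers of a PSD kernel are PSD
# (Schur product theorem), loosely mimicking increasing body order.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))  # stand-in structure descriptors
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=50)

K1 = X @ X.T            # low-body-order base kernel
K = K1 + 0.5 * K1**2    # add a higher-order term (illustrative only)

model = KernelRidge(alpha=1e-3, kernel="precomputed").fit(K, y)
print(model.predict(K)[:5])  # in-sample predictions from the train kernel
```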
RevDate: 2024-07-26
A fourfold-objective-based cloud privacy preservation model with proposed association rule hiding and deep learning assisted optimal key generation.
Network (Bristol, England) [Epub ahead of print].
Numerous studies have attempted to preserve cloud privacy, yet most cutting-edge solutions fall short when handling sensitive data. This research proposes a "privacy preservation model in the cloud environment". The four stages of the proposed methodology are "identification of sensitive data, generation of an optimal tuned key, data sanitization, and data restoration". Initially, the owner's data enters the sensitive-data identification process, in which sensitive information is identified via an Augmented Dynamic Itemset Counting (ADIC) based associative rule mining model. The identified sensitive data are then sanitized with a newly created tuned key. The key is generated by an LSTM-based deep learning approach driven by four objectives and a new hybrid optimization algorithm, MUAOA, which is a conceptual blend of the standard AOA and CMBO algorithms. The generated keys, together with the identified sensitive rules, are fed into the deep learning model, so that unauthorized parties cannot access the information. Finally, in comparative evaluation, the proposed LSTM+MUAOA achieved a higher privacy value, about 5.21, than other existing models.
Additional Links: PMID-39054942
@article {pmid39054942,
year = {2024},
author = {Sharma, S and Tyagi, S},
title = {A fourfold-objective-based cloud privacy preservation model with proposed association rule hiding and deep learning assisted optimal key generation.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-36},
doi = {10.1080/0954898X.2024.2378836},
pmid = {39054942},
issn = {1361-6536},
abstract = {Numerous studies have been conducted in an attempt to preserve cloud privacy, yet the majority of cutting-edge solutions fall short when it comes to handling sensitive data. This research proposes a "privacy preservation model in the cloud environment". The four stages of recommended security preservation methodology are "identification of sensitive data, generation of an optimal tuned key, suggested data sanitization, and data restoration". Initially, owner's data enters the Sensitive data identification process. The sensitive information in the input (owner's data) is identified via Augmented Dynamic Itemset Counting (ADIC) based Associative Rule Mining Model. Subsequently, the identified sensitive data are sanitized via the newly created tuned key. The generated tuned key is formulated with new fourfold objective-hybrid optimization approach-based deep learning approach. The optimally tuned key is generated with LSTM on the basis of fourfold objectives and the new hybrid MUAOA. The created keys, as well as generated sensitive rules, are fed into the deep learning model. The MUAOA technique is a conceptual blend of standard AOA and CMBO, respectively. As a result, unauthorized people will be unable to access information. Finally, comparative evaluation is undergone and proposed LSTM+MUAOA has achieved higher values on privacy about 5.21 compared to other existing models.},
}
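As a flavor of the rule-mining stage, here is a sketch of frequent-itemset discovery with Apriori, the family of techniques that ADIC extends. mlxtend is a stand-in; the paper's ADIC variant, key generation, and sanitization steps are not reproduced.

```python
# Sketch: flagging itemsets that touch a sensitive attribute before
# outsourcing data to the cloud. Data and support threshold are toys.
import pandas as pd
from mlxtend.frequent_patterns import apriori

# One-hot transaction table (stand-in for the owner's data).
df = pd.DataFrame({
    "diagnosis_x": [1, 1, 0, 1, 1],
    "zip_941":     [1, 1, 0, 1, 0],
    "med_y":       [0, 1, 1, 1, 1],
}).astype(bool)

itemsets = apriori(df, min_support=0.4, use_colnames=True)

# Itemsets involving a quasi-identifier would be treated as "sensitive"
# and their supporting records sanitized before upload.
sensitive = itemsets[itemsets["itemsets"].apply(lambda s: "zip_941" in s)]
print(sensitive)
```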
RevDate: 2024-07-25
CmpDate: 2024-07-25
Use Mobile Apps to Link to Google Forms to Conduct Online Surveys.
Studies in health technology and informatics, 315:567-568.
The study aimed to evaluate changes in anxiety levels in patients with coronary artery disease before and after cardiac catheterization. The mobile applications LINE and Google Forms were used to collect online data. A total of 188 patients participated in the study, conducted at a regional teaching hospital in eastern Taiwan; 51 of them completed the questionnaire twice, a response rate of 27.1%. Although the second round of the survey suffered from incomplete data and a low response rate, this study shows that online research methodology can still be improved, and that using electronic questionnaires for data collection and statistical analysis reduces the risk of errors in online research and saves documentation time. It is recommended to provide clear and detailed instructions when conducting online surveys and to review responses carefully upon completion to ensure the completeness of the data collected.
Additional Links: PMID-39049325
@article {pmid39049325,
year = {2024},
author = {Chen, SY and Tu, MH},
title = {Use Mobile Apps to Link to Google Forms to Conduct Online Surveys.},
journal = {Studies in health technology and informatics},
volume = {315},
number = {},
pages = {567-568},
doi = {10.3233/SHTI240219},
pmid = {39049325},
issn = {1879-8365},
mesh = {Taiwan ; Humans ; *Mobile Applications ; Surveys and Questionnaires ; Coronary Artery Disease ; Anxiety ; Male ; Female ; Middle Aged ; Internet ; },
abstract = {The study aimed to evaluate changes in anxiety levels in patients with coronary artery disease before and after cardiac catheterization. The mobile applications LINE and GOOGLE were used to collect online data. A total of 188 patients participated in the study conducted at a regional teaching hospital in eastern Taiwan, and 51 of them completed the questionnaire twice, with a response rate of 27.1%. Although the second study noted the problem of incomplete data and low response rates, this study shows that online research methodology can still be improved and that using electronic questionnaires for data collection and statistical analysis reduces the risk of errors in online research and saves time in documentation. It is recommended to provide clear and detailed instructions when conducting online surveys and to review them carefully upon completion to ensure the completeness of the data collected.},
}
RevDate: 2024-07-23
CmpDate: 2024-07-23
CCPA: cloud-based, self-learning modules for consensus pathway analysis using GO, KEGG and Reactome.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. The module delivers learning materials on cloud-based consensus pathway analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Pathway analysis is important because it allows us to gain insights into the biological mechanisms underlying conditions. However, the availability of many pathway analysis methods, the requirement for coding skills, and the focus of current tools on only a few species all make it very difficult for biomedical researchers to self-learn and perform pathway analysis efficiently. Furthermore, there is a lack of tools that allow researchers to compare analysis results obtained from different experiments and different analysis methods to find consensus results. To address these challenges, we have designed a cloud-based, self-learning module that provides consensus results among established, state-of-the-art pathway analysis techniques, together with the training and example materials students and researchers need. The training module consists of five Jupyter Notebooks that provide complete tutorials for the following tasks: (i) process expression data; (ii) perform differential analysis, then visualize and compare the results obtained from four differential analysis methods (limma, t-test, edgeR, DESeq2); (iii) process three pathway databases (GO, KEGG and Reactome); (iv) perform pathway analysis using eight methods (ORA, CAMERA, KS test, Wilcoxon test, FGSEA, GSA, SAFE and PADOG); and (v) combine the results of multiple analyses. We also provide examples, source code, explanations and instructional videos for trainees to complete each Jupyter Notebook. The module supports analysis for many model species (e.g. human, mouse, fruit fly, zebrafish) as well as non-model species, and is publicly available at https://github.com/NIGMS/Consensus-Pathway-Analysis-in-the-Cloud.
Additional Links: PMID-39041916
@article {pmid39041916,
year = {2024},
author = {Nguyen, H and Pham, VD and Nguyen, H and Tran, B and Petereit, J and Nguyen, T},
title = {CCPA: cloud-based, self-learning modules for consensus pathway analysis using GO, KEGG and Reactome.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae222},
pmid = {39041916},
issn = {1477-4054},
support = {2343019 and 2203236//National Science Foundation/ ; 80NSSC22M0255/NASA/NASA/United States ; GM103440 and 1R44GM152152-01/GM/NIGMS NIH HHS/United States ; 1U01CA274573-01A1/CA/NCI NIH HHS/United States ; },
mesh = {*Cloud Computing ; *Software ; Humans ; Computational Biology/methods/education ; Animals ; Gene Ontology ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The module delivers learning materials on Cloud-based Consensus Pathway Analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Pathway analysis is important because it allows us to gain insights into biological mechanisms underlying conditions. But the availability of many pathway analysis methods, the requirement of coding skills, and the focus of current tools on only a few species all make it very difficult for biomedical researchers to self-learn and perform pathway analysis efficiently. Furthermore, there is a lack of tools that allow researchers to compare analysis results obtained from different experiments and different analysis methods to find consensus results. To address these challenges, we have designed a cloud-based, self-learning module that provides consensus results among established, state-of-the-art pathway analysis techniques to provide students and researchers with necessary training and example materials. The training module consists of five Jupyter Notebooks that provide complete tutorials for the following tasks: (i) process expression data, (ii) perform differential analysis, visualize and compare the results obtained from four differential analysis methods (limma, t-test, edgeR, DESeq2), (iii) process three pathway databases (GO, KEGG and Reactome), (iv) perform pathway analysis using eight methods (ORA, CAMERA, KS test, Wilcoxon test, FGSEA, GSA, SAFE and PADOG) and (v) combine results of multiple analyses. We also provide examples, source code, explanations and instructional videos for trainees to complete each Jupyter Notebook. The module supports the analysis for many model (e.g. human, mouse, fruit fly, zebra fish) and non-model species. The module is publicly available at https://github.com/NIGMS/Consensus-Pathway-Analysis-in-the-Cloud. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
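The consensus idea is easy to illustrate: the p-values that several pathway-analysis methods assign to one pathway can be combined into a single score. A sketch with SciPy's Fisher and Stouffer combiners (toy p-values; the module's own notebooks use R):

```python
# Sketch: combining per-pathway p-values from multiple analysis methods.
from scipy.stats import combine_pvalues

# p-values for one pathway from, e.g., ORA, FGSEA, GSA, PADOG (toy values).
pvals = [0.04, 0.01, 0.20, 0.03]

stat, p_consensus = combine_pvalues(pvals, method="fisher")
print(f"Fisher chi2={stat:.2f}, consensus p={p_consensus:.4g}")

# Stouffer's method allows weighting methods differently if desired.
stat2, p_weighted = combine_pvalues(pvals, method="stouffer", weights=[1, 2, 1, 1])
print(f"Stouffer (weighted) p={p_weighted:.4g}")
```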
RevDate: 2024-07-23
CmpDate: 2024-07-23
Identifying and training deep learning neural networks on biomedical-related datasets.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical datasets are widely used in both research and clinical settings, but interpreting them becomes more difficult for professionally trained clinicians and researchers as their size and breadth increase. Artificial intelligence, and specifically deep learning neural networks, have recently become an important tool in biomedical research. However, their use is limited by their computational requirements and by confusion regarding the different neural network architectures. The goal of this learning module is to introduce the types of deep learning neural networks and to cover practices commonly used in biomedical research. The module is subdivided into four submodules covering classification, augmentation, segmentation and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network for their data while highlighting the ease of use of cloud computing for implementing neural networks.
Additional Links: PMID-39041915
@article {pmid39041915,
year = {2024},
author = {Woessner, AE and Anjum, U and Salman, H and Lear, J and Turner, JT and Campbell, R and Beaudry, L and Zhan, J and Cornett, LE and Gauch, S and Quinn, KP},
title = {Identifying and training deep learning neural networks on biomedical-related datasets.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae232},
pmid = {39041915},
issn = {1477-4054},
support = {R01EB031032/NH/NIH HHS/United States ; NIH P20GM139768//Arkansas Integrative Metabolic Research Center/ ; 3P20GM103429-21S2//National Institutes of General Medical Sciences (NIGMS)/ ; },
mesh = {*Deep Learning ; *Neural Networks, Computer ; Humans ; Biomedical Research ; Algorithms ; Cloud Computing ; },
abstract = {This manuscript describes the development of a resources module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical-related datasets are widely used in both research and clinical settings, but the ability for professionally trained clinicians and researchers to interpret datasets becomes difficult as the size and breadth of these datasets increases. Artificial intelligence, and specifically deep learning neural networks, have recently become an important tool in novel biomedical research. However, use is limited due to their computational requirements and confusion regarding different neural network architectures. The goal of this learning module is to introduce types of deep learning neural networks and cover practices that are commonly used in biomedical research. This module is subdivided into four submodules that cover classification, augmentation, segmentation and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network with their data while highlighting the ease-of-use of cloud computing for implementing neural networks. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
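As a taste of the augmentation submodule's subject matter, here is a sketch of a standard image-augmentation stage built from Keras preprocessing layers. Settings and data are illustrative, not the module's own.

```python
# Sketch: an augmentation pipeline for biomedical image batches.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),  # many biomedical images tolerate flips
    layers.RandomRotation(0.1),       # +/- 10% of a full turn
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.2),
])

images = tf.random.uniform((8, 128, 128, 3))  # stand-in image batch
augmented = augment(images, training=True)    # augmentation active in training
print(augmented.shape)
```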
MeSH Terms:
show MeSH Terms
hide MeSH Terms
*Deep Learning
*Neural Networks, Computer
Humans
Biomedical Research
Algorithms
Cloud Computing
RevDate: 2024-07-23
CmpDate: 2024-07-23
Understanding proteome quantification in an interactive learning module on Google Cloud Platform.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on protein quantification in an interactive format that uses appropriate cloud resources for data access and analyses. Quantitative proteomics is a rapidly growing discipline thanks to cutting-edge high-resolution mass spectrometry technologies. There are many data types to consider for proteome quantification, including data-dependent acquisition, data-independent acquisition, multiplexing with Tandem Mass Tag reporter ions, spectral counts, and more. As part of the NIH NIGMS Sandbox effort, we developed a learning module to introduce students to mass spectrometry terminology, normalization methods, statistical designs, and the basics of R programming. By utilizing the Google Cloud environment, the learning module is easily accessible without the need for complex installation procedures. The proteome quantification module demonstrates the analysis of a provided TMT10plex data set using MS3 reporter-ion intensity quantitative values in a Jupyter notebook with an R kernel. The module begins with the raw intensities, performs normalization and differential abundance analysis using limma models, and is designed for researchers with a basic understanding of mass spectrometry and the R programming language. Learners walk away with a better understanding of how to navigate the Google Cloud Platform for proteomic research and with the basics of mass spectrometry data analysis at the command line.
Additional Links: PMID-39041914
@article {pmid39041914,
year = {2024},
author = {O'Connell, KA and Kopchick, B and Carlson, T and Belardo, D and Byrum, SD},
title = {Understanding proteome quantification in an interactive learning module on Google Cloud Platform.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae235},
pmid = {39041914},
issn = {1477-4054},
support = {//UAMS Winthrop P. Rockefeller Cancer Institute/ ; OIA-1946391//National Science Foundation Award/ ; R24GM137786//National Institutes of Health National Institute of General Medical Sciences (NIH/NIGMS)/ ; },
mesh = {*Cloud Computing ; *Proteome/metabolism ; *Proteomics/methods ; *Software ; Mass Spectrometry ; Humans ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on protein quantification in an interactive format that uses appropriate cloud resources for data access and analyses. Quantitative proteomics is a rapidly growing discipline due to the cutting-edge technologies of high resolution mass spectrometry. There are many data types to consider for proteome quantification including data dependent acquisition, data independent acquisition, multiplexing with Tandem Mass Tag reporter ions, spectral counts, and more. As part of the NIH NIGMS Sandbox effort, we developed a learning module to introduce students to mass spectrometry terminology, normalization methods, statistical designs, and basics of R programming. By utilizing the Google Cloud environment, the learning module is easily accessible without the need for complex installation procedures. The proteome quantification module demonstrates the analysis using a provided TMT10plex data set using MS3 reporter ion intensity quantitative values in a Jupyter notebook with an R kernel. The learning module begins with the raw intensities, performs normalization, and differential abundance analysis using limma models, and is designed for researchers with a basic understanding of mass spectrometry and R programming language. Learners walk away with a better understanding of how to navigate Google Cloud Platform for proteomic research, and with the basics of mass spectrometry data analysis at the command line. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
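The normalization step the module teaches can be sketched compactly: equalize channel medians on the log2 scale so that loading differences between TMT channels do not masquerade as differential abundance. A pandas analogue of that step (the module itself uses R/limma; the data here are synthetic):

```python
# Sketch: median normalization of TMT10 reporter-ion intensities.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
channels = [f"TMT{i}" for i in range(1, 11)]
raw = pd.DataFrame(rng.lognormal(10, 1, size=(100, 10)), columns=channels)

log2 = np.log2(raw)
# Shift each channel so all medians match the grand median.
normalized = log2 - log2.median(axis=0) + log2.median(axis=0).mean()
print(normalized.median(axis=0).round(3))  # channel medians now (near-)identical
```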
RevDate: 2024-07-23
CmpDate: 2024-07-23
Whole-genome bisulfite sequencing data analysis learning module on Google Cloud Platform.
Briefings in bioinformatics, 25(Supplement_1):.
This study describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module is designed to facilitate interactive learning of whole-genome bisulfite sequencing (WGBS) data analysis utilizing cloud-based tools in Google Cloud Platform, such as Cloud Storage, Vertex AI notebooks and Google Batch. WGBS is a powerful technique that can provide comprehensive insights into DNA methylation patterns at single-cytosine resolution, essential for understanding epigenetic regulation across the genome. The learning module first provides step-by-step tutorials that guide learners through the two main stages of WGBS data analysis: preprocessing and the identification of differentially methylated regions. It then provides a streamlined workflow and demonstrates how to use it effectively for large datasets, given the power of cloud infrastructure. The integration of these interconnected submodules progressively deepens the user's understanding of the WGBS analysis process along with the use of cloud resources. Through this module, we aim to enhance the accessibility and adoption of cloud computing in epigenomic research, speeding up advances in the field and beyond.
Additional Links: PMID-39041913
@article {pmid39041913,
year = {2024},
author = {Qin, Y and Maggio, A and Hawkins, D and Beaudry, L and Kim, A and Pan, D and Gong, T and Fu, Y and Yang, H and Deng, Y},
title = {Whole-genome bisulfite sequencing data analysis learning module on Google Cloud Platform.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae236},
pmid = {39041913},
issn = {1477-4054},
support = {P20GM103466/NH/NIH HHS/United States ; },
mesh = {*Cloud Computing ; *DNA Methylation ; *Whole Genome Sequencing/methods ; *Software ; Sulfites/chemistry ; Humans ; Epigenesis, Genetic ; Computational Biology/methods ; },
abstract = {This study describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module is designed to facilitate interactive learning of whole-genome bisulfite sequencing (WGBS) data analysis utilizing cloud-based tools in Google Cloud Platform, such as Cloud Storage, Vertex AI notebooks and Google Batch. WGBS is a powerful technique that can provide comprehensive insights into DNA methylation patterns at single cytosine resolution, essential for understanding epigenetic regulation across the genome. The designed learning module first provides step-by-step tutorials that guide learners through two main stages of WGBS data analysis, preprocessing and the identification of differentially methylated regions. And then, it provides a streamlined workflow and demonstrates how to effectively use it for large datasets given the power of cloud infrastructure. The integration of these interconnected submodules progressively deepens the user's understanding of the WGBS analysis process along with the use of cloud resources. Through this module, we can enhance the accessibility and adoption of cloud computing in epigenomic research, speeding up the advancements in the related field and beyond. This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
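The quantity underlying DMR detection is the per-CpG methylation level: the fraction of methylated read calls at each cytosine. A sketch of that computation on a bedGraph-like table (toy counts; the module's actual tools and cutoffs differ):

```python
# Sketch: per-CpG methylation levels from aligned bisulfite read counts.
import pandas as pd

cpgs = pd.DataFrame({
    "chrom":        ["chr1", "chr1", "chr1"],
    "pos":          [10468, 10470, 10483],
    "meth_reads":   [14, 3, 9],
    "unmeth_reads": [2, 12, 1],
})

cpgs["coverage"] = cpgs["meth_reads"] + cpgs["unmeth_reads"]
cpgs["beta"] = cpgs["meth_reads"] / cpgs["coverage"]  # methylation level in [0, 1]

# Low-coverage sites are usually masked before any group comparison.
usable = cpgs[cpgs["coverage"] >= 10]
print(usable[["chrom", "pos", "beta"]])
```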
RevDate: 2024-07-23
CmpDate: 2024-07-23
A cloud-based learning module for biomarker discovery.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on basic principles of biomarker discovery in an interactive format that uses appropriate cloud resources for data access and analyses. In collaboration with Google Cloud, Deloitte Consulting and NIGMS, the Rhode Island INBRE Molecular Informatics Core developed a cloud-based training module for biomarker discovery. The module consists of nine submodules covering various topics on biomarker discovery and assessment; it is deployed on the Google Cloud Platform and available for public use through the NIGMS Sandbox. The submodules are written as a series of Jupyter Notebooks utilizing R and Bioconductor for biomarker and omics data analysis. The submodules cover the following topics: 1) introduction to biomarkers; 2) introduction to R data structures; 3) introduction to linear models; 4) introduction to exploratory analysis; 5) rat renal ischemia-reperfusion injury (IRI) case study; 6) linear and logistic regression for comparison of quantitative biomarkers; 7) exploratory analysis of proteomics IRI data; 8) identification of IRI biomarkers from proteomic data; and 9) machine learning methods for biomarker discovery. Each notebook includes an in-line quiz for self-assessment on the submodule topic, and an overview video is available on YouTube (https://www.youtube.com/watch?v=2-Q9Ax8EW84).
Additional Links: PMID-39041912
@article {pmid39041912,
year = {2024},
author = {Hemme, CL and Beaudry, L and Yosufzai, Z and Kim, A and Pan, D and Campbell, R and Price, M and Cho, BP},
title = {A cloud-based learning module for biomarker discovery.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae126},
pmid = {39041912},
issn = {1477-4054},
support = {P20GM103430/NH/NIH HHS/United States ; },
mesh = {*Cloud Computing ; *Biomarkers/metabolism ; Animals ; Software ; Humans ; Rats ; Machine Learning ; Computational Biology/methods ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on basic principles in biomarker discovery in an interactive format that uses appropriate cloud resources for data access and analyses. In collaboration with Google Cloud, Deloitte Consulting and NIGMS, the Rhode Island INBRE Molecular Informatics Core developed a cloud-based training module for biomarker discovery. The module consists of nine submodules covering various topics on biomarker discovery and assessment and is deployed on the Google Cloud Platform and available for public use through the NIGMS Sandbox. The submodules are written as a series of Jupyter Notebooks utilizing R and Bioconductor for biomarker and omics data analysis. The submodules cover the following topics: 1) introduction to biomarkers; 2) introduction to R data structures; 3) introduction to linear models; 4) introduction to exploratory analysis; 5) rat renal ischemia-reperfusion injury case study; (6) linear and logistic regression for comparison of quantitative biomarkers; 7) exploratory analysis of proteomics IRI data; 8) identification of IRI biomarkers from proteomic data; and 9) machine learning methods for biomarker discovery. Each notebook includes an in-line quiz for self-assessment on the submodule topic and an overview video is available on YouTube (https://www.youtube.com/watch?v=2-Q9Ax8EW84). This manuscript describes the development of a resource module that is part of a learning platform named ``NIGMS Sandbox for Cloud-based Learning'' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
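The core idea of submodule 6, scoring a quantitative biomarker with logistic regression and ROC AUC, can be sketched briefly. This is a Python analogue with synthetic data; the module's own notebooks use R/Bioconductor.

```python
# Sketch: evaluating a single candidate biomarker's discriminative power.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
injury = rng.integers(0, 2, size=n)         # 1 = IRI case (toy labels)
marker = injury * 1.5 + rng.normal(size=n)  # abundance tracks injury status

X = marker.reshape(-1, 1)
clf = LogisticRegression().fit(X, injury)
auc = roc_auc_score(injury, clf.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.3f}")  # cross-validation would be used in practice
```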
RevDate: 2024-07-23
CmpDate: 2024-07-23
Cloud-based introduction to BASH programming for biologists.
Briefings in bioinformatics, 25(Supplement_1):.
This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial authored by the National Institute of General Medical Sciences, NIGMS Sandbox: A Learning Platform toward Democratizing Cloud Computing for Biomedical Research, at the beginning of this supplement. This module delivers learning materials introducing the utility of the BASH (Bourne Again Shell) programming language for genomic data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. The next-generation sequencing revolution has generated massive amounts of novel biological data from a multitude of platforms that survey an ever-growing list of genomic modalities. These data require significant downstream computational and statistical analyses to glean meaningful biological insights. However, the skill sets required to generate these data are vastly different from those required to analyze them. Bench scientists who generate next-generation data often lack the training required to analyze these datasets and need support from bioinformatics specialists. Dedicated computational training is required to empower biologists in genomic data analysis; however, learning to use a command-line interface efficiently is a significant barrier to learning common analytical tools. Cloud platforms have the potential to democratize access to the technical tools and computational resources necessary to work with modern sequencing data, providing an effective framework for bioinformatics education. This module aims to provide an interactive platform that gradually builds the technical skills and knowledge needed to interact with genomics data on the command line in the cloud. The sandbox format of the module enables users to move through the material at their own pace and to test their grasp of the material with knowledge self-checks before building on it in the next submodule.
Additional Links: PMID-39041911
Citation:
@article {pmid39041911,
year = {2024},
author = {Wilkins, OM and Campbell, R and Yosufzai, Z and Doe, V and Soucy, SM},
title = {Cloud-based introduction to BASH programming for biologists.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae244},
pmid = {39041911},
issn = {1477-4054},
support = {P20GM130454//National Institutes of General Medical Science/ ; },
mesh = {*Cloud Computing ; *Software ; *Computational Biology/methods ; Programming Languages ; High-Throughput Nucleotide Sequencing/methods ; Genomics/methods ; Humans ; },
abstract = {This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning', https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial authored by the National Institute of General Medical Sciences: NIGMS Sandbox: A Learning Platform toward Democratizing Cloud Computing for Biomedical Research at the beginning of this supplement. This module delivers learning materials introducing the utility of the BASH (Bourne Again Shell) programming language for genomic data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. The next-generation sequencing revolution has generated massive amounts of novel biological data from a multitude of platforms that survey an ever-growing list of genomic modalities. These data require significant downstream computational and statistical analyses to glean meaningful biological insights. However, the skill sets required to generate these data are vastly different from the skills required to analyze them. Bench scientists who generate next-generation data often lack the training required to analyze these datasets and require support from bioinformatics specialists. Dedicated computational training is required to empower biologists in the area of genomic data analysis; however, learning to efficiently use a command-line interface is a significant barrier to learning how to leverage common analytical tools. Cloud platforms have the potential to democratize access to the technical tools and computational resources necessary to work with modern sequencing data, providing an effective framework for bioinformatics education. This module aims to provide an interactive platform that gradually builds the technical skills and knowledge needed to interact with genomics data on the command line in the cloud. The sandbox format of this module enables users to move through the material at their own pace and test their grasp of the material with knowledge self-checks before building on that material in the next submodule.},
}
MeSH Terms:
*Cloud Computing
*Software
*Computational Biology/methods
Programming Languages
High-Throughput Nucleotide Sequencing/methods
Genomics/methods
Humans
RevDate: 2024-07-23
CmpDate: 2024-07-23
CloudATAC: a cloud-based framework for ATAC-Seq data analysis.
Briefings in bioinformatics, 25(Supplement_1):.
Assay for transposase-accessible chromatin with high-throughput sequencing (ATAC-seq) generates genome-wide chromatin accessibility profiles, providing valuable insights into epigenetic gene regulation at both pooled-cell and single-cell population levels. Comprehensive analysis of ATAC-seq data involves the use of various interdependent programs. Learning the correct sequence of steps needed to process the data can represent a major hurdle. Selecting appropriate parameters at each stage, including pre-analysis, core analysis, and advanced downstream analysis, is important to ensure accurate analysis and interpretation of ATAC-seq data. Additionally, obtaining and working within a limited computational environment presents a significant challenge to non-bioinformatics researchers. Therefore, we present CloudATAC, an open-source, cloud-based interactive framework with a scalable, flexible, and streamlined analysis pipeline based on best practices for pooled-cell and single-cell ATAC-seq data. The framework uses the on-demand computational power and memory, scalability, and secure, compliant environment provided by Google Cloud. Additionally, we leverage Jupyter Notebook's interactive computing platform, which combines live code, tutorials, narrative text, flashcards, quizzes, and custom visualizations to enhance learning and analysis. Further, leveraging GPU instances has significantly improved the run time of the single-cell framework. The source code and data are publicly available through NIH Cloud Lab: https://github.com/NIGMS/ATAC-Seq-and-Single-Cell-ATAC-Seq-Analysis. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
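The core data structure a single-cell ATAC-seq pipeline produces, a cell-by-peak accessibility count matrix, can be illustrated with a toy sketch in plain Python. The fragments and peaks below are invented; this is not CloudATAC's actual pipeline code.

from collections import defaultdict

# Toy fragments: (cell_barcode, chrom, start, end); toy peaks: (chrom, start, end).
fragments = [("AAAC", "chr1", 100, 150), ("AAAC", "chr1", 480, 530),
             ("GGTT", "chr1", 120, 170), ("GGTT", "chr2", 50, 90)]
peaks = [("chr1", 90, 200), ("chr1", 450, 600), ("chr2", 40, 120)]

counts = defaultdict(int)  # (cell, peak_index) -> fragment count
for cell, chrom, start, end in fragments:
    for i, (pc, ps, pe) in enumerate(peaks):
        if chrom == pc and start < pe and end > ps:  # half-open interval overlap
            counts[(cell, i)] += 1

for (cell, i), c in sorted(counts.items()):
    print(cell, "peak", i, "->", c)

Real pipelines build this matrix from millions of fragments with indexed interval structures; the overlap test, however, is exactly this.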
Additional Links: PMID-39041910
Citation:
@article {pmid39041910,
year = {2024},
author = {Veerappa, AM and Rowley, MJ and Maggio, A and Beaudry, L and Hawkins, D and Kim, A and Sethi, S and Sorgen, PL and Guda, C},
title = {CloudATAC: a cloud-based framework for ATAC-Seq data analysis.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {Supplement_1},
pages = {},
doi = {10.1093/bib/bbae090},
pmid = {39041910},
issn = {1477-4054},
support = {NIH/NIGMS P20 GM103427//NOSI supplement to the parent IDeA Networks of Biomedical Research Excellence (INBRE) Program/ ; },
mesh = {*Cloud Computing ; *Software ; *High-Throughput Nucleotide Sequencing/methods ; Humans ; Computational Biology/methods ; Chromatin Immunoprecipitation Sequencing/methods ; Single-Cell Analysis/methods ; Chromatin/genetics/metabolism ; },
abstract = {Assay for transposase-accessible chromatin with high-throughput sequencing (ATAC-seq) generates genome-wide chromatin accessibility profiles, providing valuable insights into epigenetic gene regulation at both pooled-cell and single-cell population levels. Comprehensive analysis of ATAC-seq data involves the use of various interdependent programs. Learning the correct sequence of steps needed to process the data can represent a major hurdle. Selecting appropriate parameters at each stage, including pre-analysis, core analysis, and advanced downstream analysis, is important to ensure accurate analysis and interpretation of ATAC-seq data. Additionally, obtaining and working within a limited computational environment presents a significant challenge to non-bioinformatics researchers. Therefore, we present CloudATAC, an open-source, cloud-based interactive framework with a scalable, flexible, and streamlined analysis pipeline based on best practices for pooled-cell and single-cell ATAC-seq data. The framework uses the on-demand computational power and memory, scalability, and secure, compliant environment provided by Google Cloud. Additionally, we leverage Jupyter Notebook's interactive computing platform, which combines live code, tutorials, narrative text, flashcards, quizzes, and custom visualizations to enhance learning and analysis. Further, leveraging GPU instances has significantly improved the run time of the single-cell framework. The source code and data are publicly available through NIH Cloud Lab: https://github.com/NIGMS/ATAC-Seq-and-Single-Cell-ATAC-Seq-Analysis. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.},
}
MeSH Terms:
*Cloud Computing
*Software
*High-Throughput Nucleotide Sequencing/methods
Humans
Computational Biology/methods
Chromatin Immunoprecipitation Sequencing/methods
Single-Cell Analysis/methods
Chromatin/genetics/metabolism
RevDate: 2024-07-23
Enhancing security in smart healthcare systems: Using intelligent edge computing with a novel Salp Swarm Optimization and radial basis neural network algorithm.
Heliyon, 10(13):e33792.
A smart healthcare system (SHS) is a health service system that employs advanced technologies such as wearable devices, the Internet of Things (IoT), and mobile internet to dynamically access information and connect people and institutions related to healthcare, thereby actively managing and responding to medical ecosystem needs. Edge computing (EC) plays a significant role in SHS as it enables real-time data processing and analysis at the data source, which reduces latency and improves the speed of medical intervention. However, the integration of patient information, including electronic health records (EHRs), into the SHS framework raises security and privacy concerns. To address these issues, an intelligent EC framework is proposed in this study. The objective is to accurately identify security threats and ensure secure data transmission in the SHS environment. The proposed EC framework leverages Salp Swarm Optimization and a Radial Basis Function Neural Network (SS-RBFN) to enhance security and data privacy. The methodology commences with the collection of healthcare information, which is then pre-processed to ensure the consistency and quality of the database for further analysis. Subsequently, the SS-RBFN algorithm is trained on the pre-processed database to accurately distinguish between normal and malicious data streams, offering continuous monitoring in the SHS environment. Additionally, a Rivest-Shamir-Adleman (RSA) approach is applied to safeguard data against security threats during transmission to cloud storage. The proposed model was trained and validated using an IoT-based healthcare database available on Kaggle, and the experimental results demonstrate that it achieved 99.87% accuracy, 99.76% precision, 99.49% F-measure, 98.99% recall, 97.37% throughput, and 1.2 s latency. Furthermore, the results were compared with those of existing models to validate its effectiveness in enhancing security.
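The transmission-protection step names RSA. As a hedged illustration of that general pattern (not the authors' implementation), the sketch below encrypts a toy record with RSA-OAEP using the Python cryptography package; production systems would normally wrap a symmetric session key rather than encrypt raw records directly.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

record = b'{"patient_id": 42, "spo2": 97}'  # toy health record
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(record, oaep)  # what would travel to cloud storage
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == record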
Additional Links: PMID-39040324
Citation:
@article {pmid39040324,
year = {2024},
author = {Almalawi, A and Zafar, A and Unhelkar, B and Hassan, S and Alqurashi, F and Khan, AI and Fahad, A and Alam, MM},
title = {Enhancing security in smart healthcare systems: Using intelligent edge computing with a novel Salp Swarm Optimization and radial basis neural network algorithm.},
journal = {Heliyon},
volume = {10},
number = {13},
pages = {e33792},
pmid = {39040324},
issn = {2405-8440},
abstract = {A smart healthcare system (SHS) is a health service system that employs advanced technologies such as wearable devices, the Internet of Things (IoT), and mobile internet to dynamically access information and connect people and institutions related to healthcare, thereby actively managing and responding to medical ecosystem needs. Edge computing (EC) plays a significant role in SHS as it enables real-time data processing and analysis at the data source, which reduces latency and improves the speed of medical intervention. However, the integration of patient information, including electronic health records (EHRs), into the SHS framework raises security and privacy concerns. To address these issues, an intelligent EC framework is proposed in this study. The objective is to accurately identify security threats and ensure secure data transmission in the SHS environment. The proposed EC framework leverages Salp Swarm Optimization and a Radial Basis Function Neural Network (SS-RBFN) to enhance security and data privacy. The methodology commences with the collection of healthcare information, which is then pre-processed to ensure the consistency and quality of the database for further analysis. Subsequently, the SS-RBFN algorithm is trained on the pre-processed database to accurately distinguish between normal and malicious data streams, offering continuous monitoring in the SHS environment. Additionally, a Rivest-Shamir-Adleman (RSA) approach is applied to safeguard data against security threats during transmission to cloud storage. The proposed model was trained and validated using an IoT-based healthcare database available on Kaggle, and the experimental results demonstrate that it achieved 99.87% accuracy, 99.76% precision, 99.49% F-measure, 98.99% recall, 97.37% throughput, and 1.2 s latency. Furthermore, the results were compared with those of existing models to validate its effectiveness in enhancing security.},
}
RevDate: 2024-07-22
CmpDate: 2024-07-22
Self-learning activation functions to increase accuracy of privacy-preserving Convolutional Neural Networks with homomorphic encryption.
PloS one, 19(7):e0306420 pii:PONE-D-23-25899.
The widespread adoption of cloud computing necessitates privacy-preserving techniques that allow information to be processed without disclosure. This paper proposes a method to increase the accuracy and performance of privacy-preserving Convolutional Neural Networks with Homomorphic Encryption (CNN-HE) using Self-Learning Activation Functions (SLAFs). SLAFs are polynomials with trainable coefficients that are updated during training, together with the synaptic weights, independently for each polynomial, to learn task-specific and CNN-specific features. We theoretically prove that they can approximate any continuous activation function to a desired error that is a function of the SLAF degree. Two CNN-HE models are proposed: CNN-HE-SLAF and CNN-HE-SLAF-R. In the first model, all activation functions are replaced by SLAFs, and the CNN is trained to find the weights and coefficients. In the second, the CNN is trained with the original activation, the weights are then fixed, the activation is substituted by a SLAF, and the CNN is briefly re-trained to adapt the SLAF coefficients. We show that such self-learning can achieve the same accuracy (99.38%) as the non-polynomial ReLU in non-homomorphic CNNs, and leads to higher accuracy (99.21%) and performance (6.26 times faster) than the state-of-the-art CNN-HE CryptoNets on the MNIST optical character recognition benchmark.
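The homomorphic-encryption constraint driving this work is that encrypted inference can only evaluate polynomials. A minimal sketch of the underlying idea, a least-squares polynomial stand-in for ReLU (not the paper's jointly trained SLAF procedure), follows:

import numpy as np

x = np.linspace(-3, 3, 601)
relu = np.maximum(x, 0.0)

coeffs = np.polyfit(x, relu, deg=4)   # least-squares polynomial fit to ReLU
poly = np.polyval(coeffs, x)

print("coefficients:", np.round(coeffs, 4))
print("max |ReLU - poly| on [-3, 3]:", round(np.abs(relu - poly).max(), 4))

In the SLAF setting these coefficients are not fixed by a pre-fit; they become trainable parameters optimized with the network weights, which is what recovers the lost accuracy.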
Additional Links: PMID-39038028
Citation:
@article {pmid39038028,
year = {2024},
author = {Pulido-Gaytan, B and Tchernykh, A},
title = {Self-learning activation functions to increase accuracy of privacy-preserving Convolutional Neural Networks with homomorphic encryption.},
journal = {PloS one},
volume = {19},
number = {7},
pages = {e0306420},
doi = {10.1371/journal.pone.0306420},
pmid = {39038028},
issn = {1932-6203},
mesh = {*Neural Networks, Computer ; *Computer Security ; *Privacy ; Humans ; Algorithms ; Cloud Computing ; },
abstract = {The widespread adoption of cloud computing necessitates privacy-preserving techniques that allow information to be processed without disclosure. This paper proposes a method to increase the accuracy and performance of privacy-preserving Convolutional Neural Networks with Homomorphic Encryption (CNN-HE) using Self-Learning Activation Functions (SLAFs). SLAFs are polynomials with trainable coefficients that are updated during training, together with the synaptic weights, independently for each polynomial, to learn task-specific and CNN-specific features. We theoretically prove that they can approximate any continuous activation function to a desired error that is a function of the SLAF degree. Two CNN-HE models are proposed: CNN-HE-SLAF and CNN-HE-SLAF-R. In the first model, all activation functions are replaced by SLAFs, and the CNN is trained to find the weights and coefficients. In the second, the CNN is trained with the original activation, the weights are then fixed, the activation is substituted by a SLAF, and the CNN is briefly re-trained to adapt the SLAF coefficients. We show that such self-learning can achieve the same accuracy (99.38%) as the non-polynomial ReLU in non-homomorphic CNNs, and leads to higher accuracy (99.21%) and performance (6.26 times faster) than the state-of-the-art CNN-HE CryptoNets on the MNIST optical character recognition benchmark.},
}
MeSH Terms:
*Neural Networks, Computer
*Computer Security
*Privacy
Humans
Algorithms
Cloud Computing
RevDate: 2024-07-19
Process Manufacturing Intelligence Empowered by Industrial Metaverse: A Survey.
IEEE transactions on cybernetics, PP: [Epub ahead of print].
The goal of intelligent process manufacturing is to achieve high efficiency and greening across the entire production process. However, the information systems it uses are functionally independent, creating knowledge gaps between levels, and decision-making still requires substantial manual effort from knowledge workers. The industrial metaverse is a necessary means of bridging these knowledge gaps through sharing and collaborative decision-making. Considering the safety and stability requirements of process manufacturing, this article presents a thorough survey of process manufacturing intelligence empowered by the industrial metaverse. It first analyzes the current status and challenges of process manufacturing intelligence, and then summarizes the latest developments in the key enabling technologies of the industrial metaverse, such as interconnection technologies, artificial intelligence, cloud-edge computing, digital twins (DTs), immersive interaction, and blockchain. On this basis, taking into account the characteristics of process manufacturing, a construction approach and architecture for the process industrial metaverse is proposed: a virtual-real fused construction method that combines DTs with physical avatars, which can effectively ensure the safety of metaverse applications in industrial scenarios. Finally, we conducted a preliminary exploration to demonstrate the feasibility of the proposed method.
Additional Links: PMID-39028603
Citation:
@article {pmid39028603,
year = {2024},
author = {Luo, W and Huang, K and Liang, X and Ren, H and Zhou, N and Zhang, C and Yang, C and Gui, W},
title = {Process Manufacturing Intelligence Empowered by Industrial Metaverse: A Survey.},
journal = {IEEE transactions on cybernetics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TCYB.2024.3420958},
pmid = {39028603},
issn = {2168-2275},
abstract = {The goal of intelligent process manufacturing is to achieve high efficiency and greening across the entire production process. However, the information systems it uses are functionally independent, creating knowledge gaps between levels, and decision-making still requires substantial manual effort from knowledge workers. The industrial metaverse is a necessary means of bridging these knowledge gaps through sharing and collaborative decision-making. Considering the safety and stability requirements of process manufacturing, this article presents a thorough survey of process manufacturing intelligence empowered by the industrial metaverse. It first analyzes the current status and challenges of process manufacturing intelligence, and then summarizes the latest developments in the key enabling technologies of the industrial metaverse, such as interconnection technologies, artificial intelligence, cloud-edge computing, digital twins (DTs), immersive interaction, and blockchain. On this basis, taking into account the characteristics of process manufacturing, a construction approach and architecture for the process industrial metaverse is proposed: a virtual-real fused construction method that combines DTs with physical avatars, which can effectively ensure the safety of metaverse applications in industrial scenarios. Finally, we conducted a preliminary exploration to demonstrate the feasibility of the proposed method.},
}
RevDate: 2024-07-18
CmpDate: 2024-07-18
Development of PainFace software to simplify, standardize, and scale up mouse grimace analyses.
Pain, 165(8):1793-1805.
Facial grimacing is used to quantify spontaneous pain in mice and other mammals, but scoring relies on humans with different levels of proficiency. Here, we developed a cloud-based software platform called PainFace (http://painface.net) that uses machine learning to detect 4 facial action units of the mouse grimace scale (orbitals, nose, ears, whiskers) and score facial grimaces of black-coated C57BL/6 male and female mice on a 0 to 8 scale. Platform accuracy was validated in 2 different laboratories, with 3 conditions that evoke grimacing: laparotomy surgery, bilateral hindpaw injection of carrageenan, and intraplantar injection of formalin. PainFace can generate up to 1 grimace score per second from a standard 30 frames/s video, making it possible to quantify facial grimacing over time, and operates at a speed that scales with computing power. By analyzing the frequency distribution of grimace scores, we found that mice spent 7x more time in a "high grimace" state following laparotomy surgery relative to sham surgery controls. Our study shows that PainFace reproducibly quantifies facial grimaces indicative of nonevoked spontaneous pain and enables laboratories to standardize and scale up facial grimace analyses.
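Since the platform emits up to one 0-8 score per second, the "time in a high grimace state" analysis reduces to thresholding a score time series. A toy sketch with invented scores and a hypothetical cutoff:

import numpy as np

rng = np.random.default_rng(1)
surgery = rng.integers(0, 9, size=3600)  # one toy score per second over an hour
sham = rng.integers(0, 7, size=3600)     # toy control scores skewed lower

HIGH = 6  # hypothetical "high grimace" cutoff on the 0-8 scale
frac_surgery = (surgery >= HIGH).mean()
frac_sham = (sham >= HIGH).mean()
print("fraction of time in high grimace:", round(frac_surgery, 3),
      "(surgery) vs", round(frac_sham, 3), "(sham)")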
Additional Links: PMID-39024163
Citation:
@article {pmid39024163,
year = {2024},
author = {McCoy, ES and Park, SK and Patel, RP and Ryan, DF and Mullen, ZJ and Nesbitt, JJ and Lopez, JE and Taylor-Blake, B and Vanden, KA and Krantz, JL and Hu, W and Garris, RL and Snyder, MG and Lima, LV and Sotocinal, SG and Austin, JS and Kashlan, AD and Shah, S and Trocinski, AK and Pudipeddi, SS and Major, RM and Bazick, HO and Klein, MR and Mogil, JS and Wu, G and Zylka, MJ},
title = {Development of PainFace software to simplify, standardize, and scale up mouse grimace analyses.},
journal = {Pain},
volume = {165},
number = {8},
pages = {1793-1805},
doi = {10.1097/j.pain.0000000000003187},
pmid = {39024163},
issn = {1872-6623},
support = {R01NS114259//National Institute of Neurological Disorders and Stroke, National Science Foundation/ ; },
mesh = {Animals ; Mice ; *Facial Expression ; Female ; *Software/standards ; *Mice, Inbred C57BL ; *Pain Measurement/methods/standards ; Male ; Pain/diagnosis ; },
abstract = {Facial grimacing is used to quantify spontaneous pain in mice and other mammals, but scoring relies on humans with different levels of proficiency. Here, we developed a cloud-based software platform called PainFace (http://painface.net) that uses machine learning to detect 4 facial action units of the mouse grimace scale (orbitals, nose, ears, whiskers) and score facial grimaces of black-coated C57BL/6 male and female mice on a 0 to 8 scale. Platform accuracy was validated in 2 different laboratories, with 3 conditions that evoke grimacing: laparotomy surgery, bilateral hindpaw injection of carrageenan, and intraplantar injection of formalin. PainFace can generate up to 1 grimace score per second from a standard 30 frames/s video, making it possible to quantify facial grimacing over time, and operates at a speed that scales with computing power. By analyzing the frequency distribution of grimace scores, we found that mice spent 7x more time in a "high grimace" state following laparotomy surgery relative to sham surgery controls. Our study shows that PainFace reproducibly quantifies facial grimaces indicative of nonevoked spontaneous pain and enables laboratories to standardize and scale up facial grimace analyses.},
}
MeSH Terms:
Animals
Mice
*Facial Expression
Female
*Software/standards
*Mice, Inbred C57BL
*Pain Measurement/methods/standards
Male
Pain/diagnosis
RevDate: 2024-07-18
Innovative Hybrid Cloud Solutions for Physical Medicine and Telerehabilitation Research.
International journal of telerehabilitation, 16(1):e6635.
PURPOSE: The primary objective of this study was to develop and implement a Hybrid Cloud Environment for Telerehabilitation (HCET) to enhance patient care and research in the Physical Medicine and Rehabilitation (PM&R) domain. This environment aims to integrate advanced information and communication technologies to support both traditional in-person therapy and digital health solutions.
BACKGROUND: Telerehabilitation is emerging as a core component of modern healthcare, especially within the PM&R field. By applying digital health technologies, telerehabilitation provides continuous, comprehensive support for patient rehabilitation, bridging the gap between traditional therapy and remote healthcare delivery. This study focuses on the design and implementation of a hybrid HCET system tailored for the PM&R domain.
METHODS: The study involved the development of a comprehensive architectural and structural organization for the HCET, including a three-layer model (infrastructure, platform, service layers). Core components of the HCET were designed and implemented, such as the Hospital Information System (HIS) for PM&R, the MedRehabBot system, and the MedLocalGPT project. These components were integrated using advanced technologies like large language models (LLMs), word embeddings, and ontology-related approaches, along with APIs for enhanced functionality and interaction.
FINDINGS: The HCET system was successfully implemented and is operational, providing a robust platform for telerehabilitation. Key features include the MVP of the HIS for PM&R, supporting patient profile management and rehabilitation goal tracking; the MedRehabBot and WhiteBookBot systems; and the MedLocalGPT project, which offers sophisticated querying capabilities and access to extensive domain-specific knowledge. The system supports both Ukrainian and English languages, ensuring broad accessibility and usability.
INTERPRETATION: The practical implementation and operation of the HCET system demonstrate its potential to transform telerehabilitation within the PM&R domain. By integrating advanced technologies and providing comprehensive digital health solutions, the HCET enhances patient care, supports ongoing rehabilitation, and facilitates advanced research. Future work will focus on optimizing services and expanding language support to further improve the system's functionality and impact.
Additional Links: PMID-39022436
Citation:
@article {pmid39022436,
year = {2024},
author = {Malakhov, KS},
title = {Innovative Hybrid Cloud Solutions for Physical Medicine and Telerehabilitation Research.},
journal = {International journal of telerehabilitation},
volume = {16},
number = {1},
pages = {e6635},
pmid = {39022436},
issn = {1945-2020},
abstract = {PURPOSE: The primary objective of this study was to develop and implement a Hybrid Cloud Environment for Telerehabilitation (HCET) to enhance patient care and research in the Physical Medicine and Rehabilitation (PM&R) domain. This environment aims to integrate advanced information and communication technologies to support both traditional in-person therapy and digital health solutions.
BACKGROUND: Telerehabilitation is emerging as a core component of modern healthcare, especially within the PM&R field. By applying digital health technologies, telerehabilitation provides continuous, comprehensive support for patient rehabilitation, bridging the gap between traditional therapy and remote healthcare delivery. This study focuses on the design and implementation of a hybrid HCET system tailored for the PM&R domain.
METHODS: The study involved the development of a comprehensive architectural and structural organization for the HCET, including a three-layer model (infrastructure, platform, service layers). Core components of the HCET were designed and implemented, such as the Hospital Information System (HIS) for PM&R, the MedRehabBot system, and the MedLocalGPT project. These components were integrated using advanced technologies like large language models (LLMs), word embeddings, and ontology-related approaches, along with APIs for enhanced functionality and interaction.
FINDINGS: The HCET system was successfully implemented and is operational, providing a robust platform for telerehabilitation. Key features include the MVP of the HIS for PM&R, supporting patient profile management and rehabilitation goal tracking; the MedRehabBot and WhiteBookBot systems; and the MedLocalGPT project, which offers sophisticated querying capabilities and access to extensive domain-specific knowledge. The system supports both Ukrainian and English languages, ensuring broad accessibility and usability.
INTERPRETATION: The practical implementation and operation of the HCET system demonstrate its potential to transform telerehabilitation within the PM&R domain. By integrating advanced technologies and providing comprehensive digital health solutions, the HCET enhances patient care, supports ongoing rehabilitation, and facilitates advanced research. Future work will focus on optimizing services and expanding language support to further improve the system's functionality and impact.},
}
RevDate: 2024-07-17
CmpDate: 2024-07-17
Variability in wet and dry snow radar zones in the North of the Antarctic Peninsula using a cloud computing environment.
Anais da Academia Brasileira de Ciencias, 96(suppl 2):e20230704 pii:S0001-37652024000401101.
This work investigated the annual variations in the dry snow (DSRZ) and wet snow (WSRZ) radar zones in the north of the Antarctic Peninsula between 2015 and 2023. A specific code for snow zone detection on Sentinel-1 images was created in Google Earth Engine by combining the CryoSat-2 digital elevation model and air temperature data from ERA5. Regions with backscatter coefficients (σ⁰) exceeding -6.5 dB were considered the extent of surface melt occurrence, and the dry snow line was taken to coincide with the -11 °C isotherm of the average annual air temperature. The annual variation in WSRZ exhibited moderate correlations with annual average air temperature, total precipitation, and the sum of annual degree-days. However, statistical tests indicated low coefficients of determination and no significant trends in DSRZ behavior with respect to atmospheric variables. The reduction in DSRZ area in 2019/2020 and 2020/2021 compared to 2018/2019 indicated an upward shift of the dry snow line in this region of the Antarctic Peninsula. The methodology demonstrated its efficacy for both quantitative and qualitative analyses of data obtained in digital processing environments, allowing large-scale monitoring of spatial and temporal variations and improving understanding of changes in glacier mass loss.
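The melt-extent step maps naturally onto a few Earth Engine calls. A hedged sketch with the earthengine-api Python package, assuming an authenticated session; the area of interest is approximate and the HH band is an illustrative choice for Sentinel-1 over Antarctica, and the study's exact filters are not reproduced.

import ee
ee.Initialize()  # assumes Earth Engine credentials are configured

# Rough bounding box over the northern Antarctic Peninsula (illustrative only)
aoi = ee.Geometry.Rectangle([-64.0, -66.0, -56.0, -63.0])

s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterBounds(aoi)
      .filterDate("2020-01-01", "2020-02-28")
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "HH"))
      .select("HH"))

sigma0 = s1.mean()          # mean backscatter in dB over the melt season window
wet_snow = sigma0.gt(-6.5)  # paper's threshold: sigma0 > -6.5 dB marks surface melt
print(wet_snow.getInfo()["bands"])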
Additional Links: PMID-39016361
Citation:
@article {pmid39016361,
year = {2024},
author = {Idalino, FD and Rosa, KKD and Hillebrand, FL and Arigony-Neto, J and Mendes, CW and Simões, JC},
title = {Variability in wet and dry snow radar zones in the North of the Antarctic Peninsula using a cloud computing environment.},
journal = {Anais da Academia Brasileira de Ciencias},
volume = {96},
number = {suppl 2},
pages = {e20230704},
doi = {10.1590/0001-3765202420230704},
pmid = {39016361},
issn = {1678-2690},
mesh = {Antarctic Regions ; *Snow ; *Radar ; *Cloud Computing ; Seasons ; Environmental Monitoring/methods ; Temperature ; },
abstract = {This work investigated the annual variations in the dry snow (DSRZ) and wet snow (WSRZ) radar zones in the north of the Antarctic Peninsula between 2015 and 2023. A specific code for snow zone detection on Sentinel-1 images was created in Google Earth Engine by combining the CryoSat-2 digital elevation model and air temperature data from ERA5. Regions with backscatter coefficients (σ⁰) exceeding -6.5 dB were considered the extent of surface melt occurrence, and the dry snow line was taken to coincide with the -11 °C isotherm of the average annual air temperature. The annual variation in WSRZ exhibited moderate correlations with annual average air temperature, total precipitation, and the sum of annual degree-days. However, statistical tests indicated low coefficients of determination and no significant trends in DSRZ behavior with respect to atmospheric variables. The reduction in DSRZ area in 2019/2020 and 2020/2021 compared to 2018/2019 indicated an upward shift of the dry snow line in this region of the Antarctic Peninsula. The methodology demonstrated its efficacy for both quantitative and qualitative analyses of data obtained in digital processing environments, allowing large-scale monitoring of spatial and temporal variations and improving understanding of changes in glacier mass loss.},
}
MeSH Terms:
Antarctic Regions
*Snow
*Radar
*Cloud Computing
Seasons
Environmental Monitoring/methods
Temperature
RevDate: 2024-07-15
"Alexa, Cycle The Blood Pressure": A Voice Control Interface Method for Anesthesia Monitoring.
Anesthesia and analgesia pii:00000539-990000000-00865 [Epub ahead of print].
BACKGROUND: Anesthesia monitors and devices are usually controlled with some combination of dials, keypads, a keyboard, or a touch screen. Thus, anesthesiologists can operate their monitors only when they are physically close to them, and not otherwise task-loaded with sterile procedures such as line or block placement. Voice recognition technology has become commonplace and may offer advantages in anesthesia practice such as reducing surface contamination rates and allowing anesthesiologists to effect changes in monitoring and therapy when they would otherwise presently be unable to do so. We hypothesized that this technology is practicable and that anesthesiologists would consider it useful.
METHODS: A novel voice-driven prototype controller was designed for the GE Solar 8000M anesthesia patient monitor. The apparatus was implemented using a Raspberry Pi 4 single-board computer, an external conference audio device, a Google Cloud Speech-to-Text platform, and a modified Solar controller to effect commands. Fifty anesthesia providers tested the prototype. Evaluations and surveys were completed in a nonclinical environment to avoid any ethical or safety concerns regarding the use of the device in direct patient care. All anesthesiologists sampled were fluent English speakers; many with inflections from their first language or national origin, reflecting diversity in the population of practicing anesthesiologists.
RESULTS: The prototype was uniformly well-received by anesthesiologists. Ease-of-use, usefulness, and effectiveness were assessed on a Likert scale with means of 9.96, 7.22, and 8.48 of 10, respectively. No population cofactors were associated with these results. Advancing level of training (eg, nonattending versus attending) was not correlated with any preference. Accent of country or region was not correlated with any preference. Vocal pitch register did not correlate with any preference. Statistical analyses were performed with analysis of variance and the unpaired t-test.
CONCLUSIONS: The use of voice recognition to control operating room monitors was well-received by anesthesia providers. Additional commands are easily implemented on the prototype controller. No adverse relationship was found between acceptability and level of anesthesia experience, pitch of voice, or presence of accent. Voice recognition is a promising method of controlling anesthesia monitors and devices that could potentially increase usability and situational awareness in circumstances where the anesthesiologist is otherwise out-of-position or task-loaded.
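The recognize-then-dispatch pattern behind such a controller can be sketched with the google-cloud-speech client. The phrase-to-action table below is invented, and none of this is the authors' Solar 8000M controller code; it only illustrates the general structure under those assumptions.

from google.cloud import speech

def transcribe(wav_bytes):
    """Send 16 kHz LINEAR16 audio to Cloud Speech-to-Text; needs GCP credentials."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

COMMANDS = {"cycle the blood pressure": "NIBP_START"}  # hypothetical mapping

def dispatch(transcript):
    for phrase, action in COMMANDS.items():
        if phrase in transcript.lower():
            return action  # would be forwarded to the modified monitor controller
    return None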
Additional Links: PMID-39008420
Citation:
@article {pmid39008420,
year = {2024},
author = {Lee, G and Connor, CW},
title = {"Alexa, Cycle The Blood Pressure": A Voice Control Interface Method for Anesthesia Monitoring.},
journal = {Anesthesia and analgesia},
volume = {},
number = {},
pages = {},
doi = {10.1213/ANE.0000000000007003},
pmid = {39008420},
issn = {1526-7598},
abstract = {BACKGROUND: Anesthesia monitors and devices are usually controlled with some combination of dials, keypads, a keyboard, or a touch screen. Thus, anesthesiologists can operate their monitors only when they are physically close to them, and not otherwise task-loaded with sterile procedures such as line or block placement. Voice recognition technology has become commonplace and may offer advantages in anesthesia practice such as reducing surface contamination rates and allowing anesthesiologists to effect changes in monitoring and therapy when they would otherwise presently be unable to do so. We hypothesized that this technology is practicable and that anesthesiologists would consider it useful.
METHODS: A novel voice-driven prototype controller was designed for the GE Solar 8000M anesthesia patient monitor. The apparatus was implemented using a Raspberry Pi 4 single-board computer, an external conference audio device, a Google Cloud Speech-to-Text platform, and a modified Solar controller to effect commands. Fifty anesthesia providers tested the prototype. Evaluations and surveys were completed in a nonclinical environment to avoid any ethical or safety concerns regarding the use of the device in direct patient care. All anesthesiologists sampled were fluent English speakers; many with inflections from their first language or national origin, reflecting diversity in the population of practicing anesthesiologists.
RESULTS: The prototype was uniformly well-received by anesthesiologists. Ease-of-use, usefulness, and effectiveness were assessed on a Likert scale with means of 9.96, 7.22, and 8.48 of 10, respectively. No population cofactors were associated with these results. Advancing level of training (eg, nonattending versus attending) was not correlated with any preference. Accent of country or region was not correlated with any preference. Vocal pitch register did not correlate with any preference. Statistical analyses were performed with analysis of variance and the unpaired t-test.
CONCLUSIONS: The use of voice recognition to control operating room monitors was well-received by anesthesia providers. Additional commands are easily implemented on the prototype controller. No adverse relationship was found between acceptability and level of anesthesia experience, pitch of voice, or presence of accent. Voice recognition is a promising method of controlling anesthesia monitors and devices that could potentially increase usability and situational awareness in circumstances where the anesthesiologist is otherwise out-of-position or task-loaded.},
}
RevDate: 2024-07-15
Replica Exchange of Expanded Ensembles: A Generalized Ensemble Approach with Enhanced Flexibility and Parallelizability.
Journal of chemical theory and computation [Epub ahead of print].
Generalized ensemble methods such as Hamiltonian replica exchange (HREX) and expanded ensemble (EE) have been shown to be effective in free energy calculations in various contexts, given their ability to circumvent free energy barriers via nonphysical pathways defined by states with different modified Hamiltonians. However, both HREX and EE come with drawbacks, such as limited flexibility in parameter specification or a lack of parallelizability for more complicated applications. To address this challenge, we present the method of replica exchange of expanded ensembles (REXEE), which integrates the principles of the HREX and EE methods by periodically exchanging coordinates of EE replicas sampling different yet overlapping sets of alchemical states. With the solvation free energy calculation of anthracene and the binding free energy calculation of the CB7-10 binding complex, we show that the REXEE method achieves the same level of accuracy in free energy calculations as the HREX and EE methods, while offering enhanced flexibility and parallelizability. Additionally, we examined REXEE simulations with various setups to understand how different exchange frequencies and replica configurations influence the sampling efficiency in the fixed-weight phase and the weight convergence in the weight-updating phase. The REXEE approach can be further extended to support asynchronous parallelization schemes, allowing looser communication among larger numbers of loosely coupled processors, such as those in cloud computing environments, and therefore promising much more scalable and adaptive execution of alchemical free energy calculations. All algorithms for the REXEE method are available in the Python package ensemble_md, which offers an interface for REXEE simulation management without modifying the source code of GROMACS.
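Replica swaps in generalized-ensemble methods are accepted with a Metropolis criterion. The schematic sketch below shows that generic acceptance step in reduced (kT = 1) units; it is not the specific REXEE acceptance ratio implemented in ensemble_md.

import math, random

def accept_swap(delta_reduced_potential):
    """Metropolis criterion: accept with probability min(1, exp(-Delta))."""
    if delta_reduced_potential <= 0:
        return True
    return random.random() < math.exp(-delta_reduced_potential)

random.seed(0)
accepts = sum(accept_swap(1.5) for _ in range(10000))
print("acceptance rate at Delta = 1.5:", accepts / 10000)  # ~ exp(-1.5) ~= 0.223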
Additional Links: PMID-39007702
Citation:
@article {pmid39007702,
year = {2024},
author = {Hsu, WT and Shirts, MR},
title = {Replica Exchange of Expanded Ensembles: A Generalized Ensemble Approach with Enhanced Flexibility and Parallelizability.},
journal = {Journal of chemical theory and computation},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.jctc.4c00484},
pmid = {39007702},
issn = {1549-9626},
abstract = {Generalized ensemble methods such as Hamiltonian replica exchange (HREX) and expanded ensemble (EE) have been shown to be effective in free energy calculations in various contexts, given their ability to circumvent free energy barriers via nonphysical pathways defined by states with different modified Hamiltonians. However, both HREX and EE come with drawbacks, such as limited flexibility in parameter specification or a lack of parallelizability for more complicated applications. To address this challenge, we present the method of replica exchange of expanded ensembles (REXEE), which integrates the principles of the HREX and EE methods by periodically exchanging coordinates of EE replicas sampling different yet overlapping sets of alchemical states. With the solvation free energy calculation of anthracene and the binding free energy calculation of the CB7-10 binding complex, we show that the REXEE method achieves the same level of accuracy in free energy calculations as the HREX and EE methods, while offering enhanced flexibility and parallelizability. Additionally, we examined REXEE simulations with various setups to understand how different exchange frequencies and replica configurations influence the sampling efficiency in the fixed-weight phase and the weight convergence in the weight-updating phase. The REXEE approach can be further extended to support asynchronous parallelization schemes, allowing looser communication among larger numbers of loosely coupled processors, such as those in cloud computing environments, and therefore promising much more scalable and adaptive execution of alchemical free energy calculations. All algorithms for the REXEE method are available in the Python package ensemble_md, which offers an interface for REXEE simulation management without modifying the source code of GROMACS.},
}
RevDate: 2024-07-13
Smart city energy efficient data privacy preservation protocol based on biometrics and fuzzy commitment scheme.
Scientific reports, 14(1):16223.
Advancements in cloud computing, flying ad-hoc networks, wireless sensor networks, artificial intelligence, big data, 5th generation mobile networks and the internet of things have led to the development of smart cities. Owing to their massive interconnectedness, high volumes of data are collected and exchanged over the public internet. Therefore, the exchanged messages are susceptible to numerous security and privacy threats across these open public channels. Although many security techniques have been designed to address this issue, most of them are still vulnerable to attacks, while some deploy computationally expensive cryptographic operations such as bilinear pairings and blockchain. In this paper, we leverage biometrics, error correction codes and fuzzy commitment schemes to develop a secure and energy-efficient authentication scheme for smart cities. This is informed by the fact that biometric data is cumbersome to reproduce, and hence attacks such as side-channeling are thwarted. We formally analyze the security of our protocol using Burrows-Abadi-Needham (BAN) logic, which shows that our scheme achieves strong mutual authentication among the communicating entities. The semantic analysis of our protocol shows that it mitigates attacks such as de-synchronization, eavesdropping, session hijacking, forgery and side-channeling. In addition, its formal security analysis demonstrates that it is secure under the Canetti and Krawczyk attack model. In terms of performance, our scheme is shown to reduce computation overheads by 20.7% and hence is the most efficient among the state-of-the-art protocols.
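The fuzzy commitment idea binds a secret codeword to a noisy biometric: publish helper = codeword XOR biometric together with a hash of the codeword, then recover the codeword from a close-enough re-capture via error correction. A toy sketch with a 3x repetition code (illustrative only; deployed schemes use stronger codes such as BCH):

import hashlib

def repeat3(bits):   # trivial ECC encoder: repeat each bit three times
    return [b for bit in bits for b in (bit, bit, bit)]

def majority3(bits): # decoder: majority vote over each triple
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key = [1, 0, 1, 1]
codeword = repeat3(key)
enroll = [1,0,0, 0,0,0, 1,1,1, 1,1,0]       # biometric bits at enrollment (toy)
helper = xor(codeword, enroll)              # public helper data
commitment = hashlib.sha256(bytes(codeword)).hexdigest()

probe = [1,0,0, 0,1,0, 1,1,1, 1,1,0]        # noisy re-capture, one bit flipped
recovered = repeat3(majority3(xor(helper, probe)))
print("match:", hashlib.sha256(bytes(recovered)).hexdigest() == commitment)

Because only the hash and the helper are stored, a stolen database reveals neither the key nor the biometric, which is the property the paper builds its authentication scheme on.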
Additional Links: PMID-39003319
Citation:
@article {pmid39003319,
year = {2024},
author = {Nyangaresi, VO and Abduljabbar, ZA and Mutlaq, KA and Bulbul, SS and Ma, J and Aldarwish, AJY and Honi, DG and Al Sibahee, MA and Neamah, HA},
title = {Smart city energy efficient data privacy preservation protocol based on biometrics and fuzzy commitment scheme.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {16223},
pmid = {39003319},
issn = {2045-2322},
support = {GDRC202132//Natural Science Foundation of Top Talent of SZTU/ ; },
abstract = {Advancements in cloud computing, flying ad-hoc networks, wireless sensor networks, artificial intelligence, big data, 5th generation mobile networks and the internet of things have led to the development of smart cities. Owing to their massive interconnectedness, high volumes of data are collected and exchanged over the public internet. Therefore, the exchanged messages are susceptible to numerous security and privacy threats across these open public channels. Although many security techniques have been designed to address this issue, most of them are still vulnerable to attacks, while some deploy computationally expensive cryptographic operations such as bilinear pairings and blockchain. In this paper, we leverage biometrics, error correction codes and fuzzy commitment schemes to develop a secure and energy-efficient authentication scheme for smart cities. This is informed by the fact that biometric data is cumbersome to reproduce, and hence attacks such as side-channeling are thwarted. We formally analyze the security of our protocol using Burrows-Abadi-Needham (BAN) logic, which shows that our scheme achieves strong mutual authentication among the communicating entities. The semantic analysis of our protocol shows that it mitigates attacks such as de-synchronization, eavesdropping, session hijacking, forgery and side-channeling. In addition, its formal security analysis demonstrates that it is secure under the Canetti and Krawczyk attack model. In terms of performance, our scheme is shown to reduce computation overheads by 20.7% and hence is the most efficient among the state-of-the-art protocols.},
}
RevDate: 2024-07-13
Trust Management and Resource Optimization in Edge and Fog Computing Using the CyberGuard Framework.
Sensors (Basel, Switzerland), 24(13): pii:s24134308.
The growing importance of edge and fog computing in the modern IT infrastructure is driven by the rise of decentralized applications. However, resource allocation within these frameworks is challenging due to varying device capabilities and dynamic network conditions. Conventional approaches often result in poor resource use and slowed advancements. This study presents a novel strategy for enhancing resource allocation in edge and fog computing by integrating machine learning with the blockchain for reliable trust management. Our proposed framework, called CyberGuard, leverages the blockchain's inherent immutability and decentralization to establish a trustworthy and transparent network for monitoring and verifying edge and fog computing transactions. CyberGuard combines the Trust2Vec model with conventional machine-learning models like SVM, KNN, and random forests, creating a robust mechanism for assessing trust and security risks. Through detailed optimization and case studies, CyberGuard demonstrates significant improvements in resource allocation efficiency and overall system performance in real-world scenarios. Our results highlight CyberGuard's effectiveness, evidenced by a remarkable accuracy, precision, recall, and F1-score of 98.18%, showcasing the transformative potential of our comprehensive approach in edge and fog computing environments.
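The SVM/KNN/random-forest combination can be sketched as a soft-voting ensemble over synthetic "transaction" features. This is a generic scikit-learn illustration, omitting the Trust2Vec embeddings and the blockchain layer the paper integrates.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy stand-in for trust/security features of edge-fog transactions
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

trust_model = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",  # average predicted probabilities across the three models
)
trust_model.fit(Xtr, ytr)
print("held-out accuracy:", round(trust_model.score(Xte, yte), 3))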
Additional Links: PMID-39001087
Citation:
@article {pmid39001087,
year = {2024},
author = {Alwakeel, AM and Alnaim, AK},
title = {Trust Management and Resource Optimization in Edge and Fog Computing Using the CyberGuard Framework.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {13},
pages = {},
doi = {10.3390/s24134308},
pmid = {39001087},
issn = {1424-8220},
support = {XXXXXX//King Faisal University/ ; },
abstract = {The growing importance of edge and fog computing in the modern IT infrastructure is driven by the rise of decentralized applications. However, resource allocation within these frameworks is challenging due to varying device capabilities and dynamic network conditions. Conventional approaches often result in poor resource use and slowed advancements. This study presents a novel strategy for enhancing resource allocation in edge and fog computing by integrating machine learning with the blockchain for reliable trust management. Our proposed framework, called CyberGuard, leverages the blockchain's inherent immutability and decentralization to establish a trustworthy and transparent network for monitoring and verifying edge and fog computing transactions. CyberGuard combines the Trust2Vec model with conventional machine-learning models like SVM, KNN, and random forests, creating a robust mechanism for assessing trust and security risks. Through detailed optimization and case studies, CyberGuard demonstrates significant improvements in resource allocation efficiency and overall system performance in real-world scenarios. Our results highlight CyberGuard's effectiveness, evidenced by a remarkable accuracy, precision, recall, and F1-score of 98.18%, showcasing the transformative potential of our comprehensive approach in edge and fog computing environments.},
}
RevDate: 2024-07-13
Network Slicing in 6G: A Strategic Framework for IoT in Smart Cities.
Sensors (Basel, Switzerland), 24(13): pii:s24134254.
The emergence of 6G communication technologies brings both opportunities and challenges for the Internet of Things (IoT) in smart cities. In this paper, we introduce an advanced network slicing framework designed to meet the complex demands of 6G smart cities' IoT deployments. The framework development follows a detailed methodology that encompasses requirement analysis, metric formulation, constraint specification, objective setting, mathematical modeling, configuration optimization, performance evaluation, parameter tuning, and validation of the final design. Our evaluations demonstrate the framework's high efficiency, evidenced by low round-trip time (RTT), minimal packet loss, increased availability, and enhanced throughput. Notably, the framework scales effectively, managing multiple connections simultaneously without compromising resource efficiency. Enhanced security is achieved through robust features such as 256-bit encryption and a high rate of authentication success. The discussion elaborates on these findings, underscoring the framework's impressive performance, scalability, and security capabilities.
Additional Links: PMID-39001032
Citation:
@article {pmid39001032,
year = {2024},
author = {Alwakeel, AM and Alnaim, AK},
title = {Network Slicing in 6G: A Strategic Framework for IoT in Smart Cities.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {13},
pages = {},
doi = {10.3390/s24134254},
pmid = {39001032},
issn = {1424-8220},
support = {000000//King Faisal University/ ; },
abstract = {The emergence of 6G communication technologies brings both opportunities and challenges for the Internet of Things (IoT) in smart cities. In this paper, we introduce an advanced network slicing framework designed to meet the complex demands of 6G smart cities' IoT deployments. The framework development follows a detailed methodology that encompasses requirement analysis, metric formulation, constraint specification, objective setting, mathematical modeling, configuration optimization, performance evaluation, parameter tuning, and validation of the final design. Our evaluations demonstrate the framework's high efficiency, evidenced by low round-trip time (RTT), minimal packet loss, increased availability, and enhanced throughput. Notably, the framework scales effectively, managing multiple connections simultaneously without compromising resource efficiency. Enhanced security is achieved through robust features such as 256-bit encryption and a high rate of authentication success. The discussion elaborates on these findings, underscoring the framework's impressive performance, scalability, and security capabilities.},
}
RevDate: 2024-07-13
Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing.
Sensors (Basel, Switzerland), 24(13): pii:s24134195.
Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration into the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; the developers can focus on how to deploy and create code efficiently. Since FaaS aligns well with the IoT, it easily integrates with IoT devices, thereby making it possible to perform event-based actions and real-time computations. In our research, we offer an exclusive likelihood-based model of adaptive machine learning for identifying the right placement for a function. We employ the XGBoost regressor to estimate the execution time of each function and utilize the decision tree regressor to predict network latency. By encompassing factors like network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection of a placement. In our replication experiments, we use Docker containers, focusing on serverless node type, serverless node variety, function location, deadlines, and edge-cloud topology. The primary objectives are to meet deadlines and enhance the use of available resources, and we find that effective utilization of resources leads to enhanced deadline compliance.
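The placement logic pairs an XGBoost execution-time regressor with a decision-tree latency regressor and favors the node minimizing their sum. A hedged sketch with synthetic features; the paper's actual feature set and deadline handling are richer.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))  # toy features: cpu load, free memory, payload size, hops
exec_time = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 500)
latency = 3.0 * X[:, 3] + rng.normal(0, 0.05, 500)

exec_model = XGBRegressor(n_estimators=100).fit(X, exec_time)
lat_model = DecisionTreeRegressor(max_depth=5).fit(X, latency)

candidates = rng.random((3, 4))  # the same function profiled on 3 candidate nodes
total = exec_model.predict(candidates) + lat_model.predict(candidates)
print("chosen node:", int(np.argmin(total)), "predicted cost:", round(float(total.min()), 3))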
Additional Links: PMID-39000973
Citation:
@article {pmid39000973,
year = {2024},
author = {Shahid, U and Ahmed, G and Siddiqui, S and Shuja, J and Balogun, AO},
title = {Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {13},
pages = {},
doi = {10.3390/s24134195},
pmid = {39000973},
issn = {1424-8220},
support = {015LA0-049//Universiti Teknologi Petronas/ ; },
abstract = {Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration into the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; the developers can focus on how to deploy and create code efficiently. Since FaaS aligns well with the IoT, it easily integrates with IoT devices, thereby making it possible to perform event-based actions and real-time computations. In our research, we offer an exclusive likelihood-based model of adaptive machine learning for identifying the right placement for a function. We employ the XGBoost regressor to estimate the execution time of each function and utilize the decision tree regressor to predict network latency. By encompassing factors like network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection of a placement. In our replication experiments, we use Docker containers, focusing on serverless node type, serverless node variety, function location, deadlines, and edge-cloud topology. The primary objectives are to meet deadlines and enhance the use of available resources, and we find that effective utilization of resources leads to enhanced deadline compliance.},
}
RevDate: 2024-07-13
Federated Learning-Oriented Edge Computing Framework for the IIoT.
Sensors (Basel, Switzerland), 24(13): pii:s24134182.
With the maturity of artificial intelligence (AI) technology, applications of AI in edge computing will greatly promote the development of industrial technology. However, the existing studies on the edge computing framework for the Industrial Internet of Things (IIoT) still face several challenges, such as deep hardware and software coupling, diverse protocols, difficult deployment of AI models, insufficient computing capabilities of edge devices, and sensitivity to delay and energy consumption. To solve the above problems, this paper proposes a software-defined AI-oriented three-layer IIoT edge computing framework and presents the design and implementation of an AI-oriented edge computing system, aiming to support device access, enable the acceptance and deployment of AI models from the cloud, and allow the whole process from data acquisition to model training to be completed at the edge. In addition, this paper proposes a time series-based method for device selection and computation offloading in the federated learning process, which selectively offloads the tasks of inefficient nodes to the edge computing center to reduce the training delay and energy consumption. Finally, experiments carried out to verify the feasibility and effectiveness of the proposed method are reported. The model training time with the proposed method is generally 30% to 50% less than that with the random device selection method, and the training energy consumption under the proposed method is generally 35% to 55% less.
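The time series-based device selection described above can be illustrated with a small sketch: devices whose recent round times (and their trend) exceed a budget are offloaded to the edge computing center for the next round. The history values and the trend heuristic are assumptions for illustration only, not the paper's method.

```python
# Partition federated-learning devices by a trend-aware round-time estimate.
from statistics import mean

history = {                       # seconds per FL round, illustrative
    "dev-1": [4.1, 4.3, 4.2],
    "dev-2": [9.8, 11.2, 12.5],   # an inefficient node
    "dev-3": [5.0, 4.8, 5.1],
}

def partition(history, budget):
    local, offload = [], []
    for dev, times in history.items():
        # simple estimate: mean plus recent drift across the series
        est = mean(times) + (times[-1] - times[0]) / len(times)
        (local if est <= budget else offload).append(dev)
    return local, offload

local, offload = partition(history, budget=6.0)
print("train locally:", local, "| offload to edge:", offload)
```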
Additional Links: PMID-39000960
@article {pmid39000960,
year = {2024},
author = {Liu, X and Dong, X and Jia, N and Zhao, W},
title = {Federated Learning-Oriented Edge Computing Framework for the IIoT.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {13},
pages = {},
doi = {10.3390/s24134182},
pmid = {39000960},
issn = {1424-8220},
support = {2022YFB3305700//The National Key Research and Development Program of China/ ; },
}
RevDate: 2024-07-13
Unveiling the Evolution of Virtual Reality in Medicine: A Bibliometric Analysis of Research Hotspots and Trends over the Past 12 Years.
Healthcare (Basel, Switzerland), 12(13): pii:healthcare12131266.
BACKGROUND: Virtual reality (VR), widely used in the medical field, may affect future medical training and treatment. Therefore, this study examined VR's potential uses and research directions in medicine.
METHODS: Citation data were downloaded from the Web of Science Core Collection database (WoSCC) to evaluate VR in medicine in articles published between 1 January 2012 and 31 December 2023. These data were analyzed using CiteSpace 6.2.R2 software. Present limitations and future opportunities were summarized based on the data.
RESULTS: A total of 2143 related publications from 86 countries and regions were analyzed. The country with the highest number of publications is the USA, with 461 articles. The University of London has the most publications among institutions, with 43 articles. The burst keywords represent the research frontier from 2020 to 2023, such as "task analysis", "deep learning", and "machine learning".
CONCLUSION: The number of publications on VR applications in the medical field has been increasing steadily. The USA is the leading country in this area, while the University of London stands out as the most published and most influential institution. Currently, there is a strong focus on integrating VR and AI to address complex issues such as medical education and training, rehabilitation, and surgical navigation. Looking ahead, the future trend involves integrating VR, augmented reality (AR), and mixed reality (MR) with the Internet of Things (IoT), wireless sensor networks (WSNs), big data analysis (BDA), and cloud computing (CC) technologies to develop intelligent healthcare systems within hospitals or medical centers.
Additional Links: PMID-38998801
@article {pmid38998801,
year = {2024},
author = {Zuo, G and Wang, R and Wan, C and Zhang, Z and Zhang, S and Yang, W},
title = {Unveiling the Evolution of Virtual Reality in Medicine: A Bibliometric Analysis of Research Hotspots and Trends over the Past 12 Years.},
journal = {Healthcare (Basel, Switzerland)},
volume = {12},
number = {13},
pages = {},
doi = {10.3390/healthcare12131266},
pmid = {38998801},
issn = {2227-9032},
support = {SZSM202311012//Sanming Project of Medicine in Shenzen Municipality/ ; },
}
RevDate: 2024-07-12
CmpDate: 2024-07-12
Reusable tutorials for using cloud-based computing environments for the analysis of bacterial gene expression data from bulk RNA sequencing.
Briefings in bioinformatics, 25(4):.
This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on RNA sequencing (RNAseq) data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical research is increasingly data-driven, and dependent upon data management and analysis methods that facilitate rigorous, robust, and reproducible research. Cloud-based computing resources provide opportunities to broaden the application of bioinformatics and data science in research. Two obstacles for researchers, particularly those at small institutions, are: (i) access to bioinformatics analysis environments tailored to their research; and (ii) training in how to use cloud-based computing resources. We developed five reusable tutorials for bulk RNAseq data analysis to address these obstacles. Using Jupyter notebooks run on the Google Cloud Platform, the tutorials guide the user through a workflow featuring an RNAseq dataset from a study of prophage-altered drug resistance in Mycobacterium chelonae. The first tutorial uses a subset of the data so users can learn the analysis steps rapidly, and the second uses the entire dataset. Next, a tutorial demonstrates how to analyze the read count data to generate lists of differentially expressed genes using R/DESeq2. Additional tutorials generate read counts using the Snakemake workflow manager and Nextflow with Google Batch. All tutorials are open source and can be used as templates for other analyses.
Additional Links: PMID-38997128
@article {pmid38997128,
year = {2024},
author = {Allers, S and O'Connell, KA and Carlson, T and Belardo, D and King, BL},
title = {Reusable tutorials for using cloud-based computing environments for the analysis of bacterial gene expression data from bulk RNA sequencing.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {4},
pages = {},
doi = {10.1093/bib/bbae301},
pmid = {38997128},
issn = {1477-4054},
support = {P20GM103423//National Institute of General Medical Sciences of the National Institutes of Health to the Maine INBRE Program/ ; },
mesh = {*Cloud Computing ; *Computational Biology/methods ; *Sequence Analysis, RNA/methods ; *Software ; Gene Expression Regulation, Bacterial ; },
}
RevDate: 2024-07-11
Hybrid YSGOA and neural networks based software failure prediction in cloud systems.
Scientific reports, 14(1):16035.
In the realm of cloud computing, ensuring the dependability and robustness of software systems is paramount. The intricate and evolving nature of cloud infrastructures, however, presents substantial obstacles in the pre-emptive identification and rectification of software anomalies. This study introduces an innovative methodology that amalgamates hybrid optimization algorithms with Neural Networks (NN) to refine the prediction of software malfunctions. The core objective is to augment the purity metric of our method across diverse operational conditions. This is accomplished through the utilization of two distinct optimization algorithms: the Yellow Saddle Goat Fish Algorithm (YSGA), which is instrumental in the discernment of pivotal features linked to software failures, and the Grasshopper Optimization Algorithm (GOA), which further polishes the feature compilation. These features are then processed by Neural Networks (NN), capitalizing on their proficiency in deciphering intricate data patterns and interconnections. The NNs are integral to the classification of instances predicated on the ascertained features. Our evaluation, conducted using the Failure-Dataset-OpenStack database and MATLAB Software, demonstrates that the hybrid optimization strategy employed for feature selection significantly curtails complexity and expedites processing.
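Since the YSGA and GOA metaheuristics are not available in standard libraries, the sketch below substitutes a random-search feature selector to show only the pipeline shape (selected features feeding a neural-network classifier), on synthetic stand-in data rather than the Failure-Dataset-OpenStack.

```python
# Pipeline shape only: metaheuristic feature selection -> NN classification.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((300, 20))                      # stand-in for failure features
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)      # synthetic failure label

def select_features(X, y, n_trials=30, k=5):
    """Random search stands in here for the YSGA/GOA feature selection."""
    best_mask, best_score = None, -1.0
    for _ in range(n_trials):
        mask = rng.choice(X.shape[1], size=k, replace=False)
        score = cross_val_score(MLPClassifier(max_iter=500), X[:, mask], y, cv=3).mean()
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score

mask, score = select_features(X, y)
print("selected features:", sorted(mask), "| cv accuracy:", round(score, 3))
```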
Additional Links: PMID-38992079
@article {pmid38992079,
year = {2024},
author = {Kaur, R and Vaithiyanathan, R},
title = {Hybrid YSGOA and neural networks based software failure prediction in cloud systems.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {16035},
pmid = {38992079},
issn = {2045-2322},
}
RevDate: 2024-07-11
MEDINA Catalogue of Cloud Security controls and metrics: Towards Continuous Cloud Security compliance.
Open research Europe, 4:90.
In order to address current challenges in the security certification of European ICT products, processes, and services, the European Commission, through ENISA (European Union Agency for Cybersecurity), has developed the European Cybersecurity Certification Scheme for Cloud Services (EUCS). This paper presents an overview of the H2020 MEDINA project approach and tools to support the adoption of EUCS and offers a detailed description of one of the core components of the framework, the MEDINA Catalogue of Controls and Metrics. The main objective of the MEDINA Catalogue is to provide automated functionalities for CSPs' compliance managers and auditors to ease the certification process towards EUCS, through the provision of all information and guidance related to the scheme, namely categories, controls, security requirements, assurance levels, etc. The tool has been enhanced with all the research and implementation work performed in MEDINA, such as the definition of compliance metrics, suggestions of related implementation guidelines, alignment of similar controls in other schemes, and a set of self-assessment questionnaires, which are presented and discussed in this paper.
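As a rough illustration of what such a catalogue entry might look like in machine-readable form (an assumed data shape, not the MEDINA schema), a control can bundle its requirements with compliance metrics that are evaluated automatically:

```python
# Hypothetical shape of a control-plus-metrics catalogue entry.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    target: float
    operator: str = ">="          # how a measured value is compared to target

@dataclass
class Control:
    control_id: str
    category: str
    assurance_level: str
    requirements: list = field(default_factory=list)
    metrics: list = field(default_factory=list)

    def assess(self, measurements: dict) -> bool:
        """True if every metric meets its target, given measured values."""
        return all(
            (measurements.get(m.name, 0) >= m.target) if m.operator == ">="
            else (measurements.get(m.name, 0) <= m.target)
            for m in self.metrics
        )

ctrl = Control("OPS-05.3", "Operational Security", "high",
               ["Logs shall be retained"], [Metric("log_retention_days", 90)])
print(ctrl.assess({"log_retention_days": 180}))   # True -> compliant
```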
Additional Links: PMID-38988330
@article {pmid38988330,
year = {2024},
author = {Martinez, C and Etxaniz, I and Molinuevo, A and Alonso, J},
title = {MEDINA Catalogue of Cloud Security controls and metrics: Towards Continuous Cloud Security compliance.},
journal = {Open research Europe},
volume = {4},
number = {},
pages = {90},
pmid = {38988330},
issn = {2732-5121},
}
RevDate: 2024-07-10
Advancements in heuristic task scheduling for IoT applications in fog-cloud computing: challenges and prospects.
PeerJ. Computer science, 10:e2128.
Fog computing has emerged as a prospective paradigm to address the computational requirements of IoT applications, extending the capabilities of cloud computing to the network edge. Task scheduling is pivotal in enhancing energy efficiency, optimizing resource utilization and ensuring the timely execution of tasks within fog computing environments. This article presents a comprehensive review of the advancements in task scheduling methodologies for fog computing systems, covering priority-based, greedy heuristics, metaheuristics, learning-based, hybrid heuristics, and nature-inspired heuristic approaches. Through a systematic analysis of relevant literature, we highlight the strengths and limitations of each approach and identify key challenges facing fog computing task scheduling, including dynamic environments, heterogeneity, scalability, resource constraints, security concerns, and algorithm transparency. Furthermore, we propose future research directions to address these challenges, including the integration of machine learning techniques for real-time adaptation, leveraging federated learning for collaborative scheduling, developing resource-aware and energy-efficient algorithms, incorporating security-aware techniques, and advancing explainable AI methodologies. By addressing these challenges and pursuing these research directions, we aim to facilitate the development of more robust, adaptable, and efficient task-scheduling solutions for fog computing environments, ultimately fostering trust, security, and sustainability in fog computing systems and facilitating their widespread adoption across diverse applications and domains.
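As a concrete taste of the greedy heuristics the review covers, the toy Python sketch below schedules tasks earliest-deadline-first onto whichever fog or cloud node finishes them soonest; the node speeds, RTTs, and deadlines are invented for illustration and do not come from the article.

```python
# EDF ordering + greedy earliest-finish placement across fog and cloud nodes.
tasks = [("t1", 4e9, 2.5), ("t2", 1e9, 0.8), ("t3", 8e9, 3.0)]  # id, cycles, deadline(s)
nodes = {
    "fog-1": {"speed": 2e9, "rtt": 0.01, "busy_until": 0.0},   # slower, nearby
    "cloud": {"speed": 10e9, "rtt": 0.10, "busy_until": 0.0},  # faster, farther
}

for tid, cycles, deadline in sorted(tasks, key=lambda t: t[2]):  # EDF order
    def finish_time(name):
        n = nodes[name]
        return n["busy_until"] + n["rtt"] + cycles / n["speed"]
    best = min(nodes, key=finish_time)
    nodes[best]["busy_until"] = finish_time(best)
    status = "meets" if nodes[best]["busy_until"] <= deadline else "misses"
    print(f"{tid} -> {best}: finishes at {nodes[best]['busy_until']:.2f}s ({status} deadline)")
```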
Additional Links: PMID-38983206
@article {pmid38983206,
year = {2024},
author = {Alsadie, D},
title = {Advancements in heuristic task scheduling for IoT applications in fog-cloud computing: challenges and prospects.},
journal = {PeerJ. Computer science},
volume = {10},
number = {},
pages = {e2128},
pmid = {38983206},
issn = {2376-5992},
}
RevDate: 2024-07-09
Accelerating Computational Materials Discovery with Machine Learning and Cloud High-Performance Computing: from Large-Scale Screening to Experimental Validation.
Journal of the American Chemical Society [Epub ahead of print].
High-throughput computational materials discovery has promised significant acceleration of the design and discovery of new materials for many years. Despite a surge in interest and activity, the constraints imposed by large-scale computational resources present a significant bottleneck. Furthermore, examples of very large-scale computational discovery carried through to experimental validation remain scarce, especially for materials with product applicability. Here, we demonstrate how this vision became reality by combining state-of-the-art machine learning (ML) models and traditional physics-based models on cloud high-performance computing (HPC) resources to quickly navigate through more than 32 million candidates and predict around half a million potentially stable materials. By focusing on solid-state electrolytes for battery applications, our discovery pipeline further identified 18 promising candidates with new compositions and rediscovered a decade's worth of collective knowledge in the field as a byproduct. We then synthesized and experimentally characterized the structures and conductivities of our top candidates, the NaxLi3-xYCl6 (0 ≤ x ≤ 3) series, demonstrating the potential of these compounds to serve as solid electrolytes. Additional candidate materials that are currently under experimental investigation could offer more examples of the computational discovery of new phases of Li- and Na-conducting solid electrolytes. The showcased screening of millions of materials candidates highlights the transformative potential of advanced ML and HPC methodologies, propelling materials discovery into a new era of efficiency and innovation.
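The screening funnel described here, where a cheap ML surrogate filters millions of candidates before expensive physics-based validation, can be sketched as follows; the surrogate model, features, candidate count, and stability threshold are placeholders, not the paper's actual models or data.

```python
# ML-first screening funnel: score all candidates, pass near-stable ones on.
import numpy as np

rng = np.random.default_rng(2)
n_candidates = 100_000                         # stand-in for the 32M pool
features = rng.random((n_candidates, 8))       # placeholder descriptors

def ml_energy_above_hull(feats):
    """Surrogate model placeholder; returns synthetic eV/atom values."""
    return feats @ rng.random(8) * 0.1

e_hull = ml_energy_above_hull(features)
stable = np.flatnonzero(e_hull < 0.05)         # loose stability cutoff (assumed)
print(f"{stable.size} candidates ({stable.size / n_candidates:.1%}) pass the "
      f"ML screen and would advance to physics-based HPC validation")
```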
Additional Links: PMID-38980280
@article {pmid38980280,
year = {2024},
author = {Chen, C and Nguyen, DT and Lee, SJ and Baker, NA and Karakoti, AS and Lauw, L and Owen, C and Mueller, KT and Bilodeau, BA and Murugesan, V and Troyer, M},
title = {Accelerating Computational Materials Discovery with Machine Learning and Cloud High-Performance Computing: from Large-Scale Screening to Experimental Validation.},
journal = {Journal of the American Chemical Society},
volume = {},
number = {},
pages = {},
doi = {10.1021/jacs.4c03849},
pmid = {38980280},
issn = {1520-5126},
}
RevDate: 2024-07-08
Multi-level authentication for security in cloud using improved quantum key distribution.
Network (Bristol, England) [Epub ahead of print].
Cloud computing is an on-demand, virtualization-based technology for developing, configuring, and modifying applications online through the internet. It enables users to handle operations such as storage, back-up, and recovery of data, data analysis, delivery of software applications, implementation of new services and applications, hosting of websites and blogs, and streaming of audio and video files. While it provides many benefits, it also suffers from cloud-security problems such as data leakage, data loss, and cyber attacks. To address these security concerns, researchers have developed a variety of authentication mechanisms; the authentication procedure used in the suggested method is multi-level. Accordingly, an improved quantum key distribution (QKD) method is offered to strengthen cloud security against different types of security risks. Key generation for the improved QKD is based on the attribute-based encryption (ABE) public-key cryptography approach; specifically, the ciphertext-policy variant (CPABE) is used. The improved QKD achieved a reduced KCA attack rating of 0.3193, superior to CMMLA (0.7915), CPABE (0.8916), AES (0.5277), Blowfish (0.6144), and ECC (0.4287). Finally, this multi-level authentication using an improved QKD approach is analysed under various measures and validates the enhancement over state-of-the-art models.
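The multi-level idea, where each authentication level must pass before a session key is released, can be sketched as below; the password and OTP levels and the placeholder key issuance are our assumptions, and the scheme's actual CPABE-based QKD key generation is abstracted away entirely.

```python
# Each level must pass before a (placeholder) session key is released.
import hashlib, hmac, secrets

def level1_password(stored_hash, password):
    return hmac.compare_digest(stored_hash, hashlib.sha256(password).digest())

def level2_otp(shared_secret, otp, counter):
    expected = hmac.new(shared_secret, counter.to_bytes(8, "big"),
                        hashlib.sha256).hexdigest()[:6]
    return hmac.compare_digest(expected, otp)

def authenticate(user, password, otp, counter):
    if not level1_password(user["pw_hash"], password):
        return None
    if not level2_otp(user["otp_secret"], otp, counter):
        return None
    return secrets.token_bytes(32)   # stand-in for the QKD-negotiated key

user = {"pw_hash": hashlib.sha256(b"hunter2").digest(),
        "otp_secret": b"shared-secret"}
good_otp = hmac.new(user["otp_secret"], (7).to_bytes(8, "big"),
                    hashlib.sha256).hexdigest()[:6]
print("session key issued:", authenticate(user, b"hunter2", good_otp, 7) is not None)
```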
Additional Links: PMID-38975754
@article {pmid38975754,
year = {2024},
author = {Kumar, A and Verma, G},
title = {Multi-level authentication for security in cloud using improved quantum key distribution.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-21},
doi = {10.1080/0954898X.2024.2367480},
pmid = {38975754},
issn = {1361-6536},
}
RevDate: 2024-07-08
Efficient and accountable anti-leakage attribute-based encryption scheme for cloud storage.
Heliyon, 10(12):e32404 pii:S2405-8440(24)08435-4.
To ensure secure and flexible data sharing in cloud storage, attribute-based encryption (ABE) is introduced to meet the requirements of fine-grained access control and secure one-to-many data sharing. However, the computational burden imposed by attribute encryption renders it unsuitable for resource-constrained environments such as the Internet of Things (IoT) and edge computing. Furthermore, the issue of accountability for illegal keys is crucial, as authorized users may actively disclose or sell authorization keys for personal gain, and keys may also passively leak due to management negligence or hacking incidents. Additionally, since all authorization keys are generated by the attribute authorization center, there is a potential risk of unauthorized key forgery. In response to these challenges, this paper proposes an efficient and accountable leakage-resistant scheme based on attribute encryption. The scheme adopts more secure online/offline encryption mechanisms and cloud server-assisted decryption to alleviate the computational burden on resource-constrained devices. For illegal keys, the scheme supports accountability for both users and the authorization center, allowing the revocation of decryption privileges for malicious users. In the case of passively leaked keys, timely key updates and revocation of decryption capabilities for leaked keys are implemented. Finally, the paper provides selective security and accountability proofs for the scheme under standard models. Efficiency analysis and experimental results demonstrate that the proposed scheme enhances encryption/decryption efficiency, and the storage overhead for accountability is also extremely low.
Additional Links: PMID-38975165
@article {pmid38975165,
year = {2024},
author = {Yan, L and Wang, G and Feng, H and Liu, P and Gao, H and Zhang, W and Hu, H and Pan, F},
title = {Efficient and accountable anti-leakage attribute-based encryption scheme for cloud storage.},
journal = {Heliyon},
volume = {10},
number = {12},
pages = {e32404},
doi = {10.1016/j.heliyon.2024.e32404},
pmid = {38975165},
issn = {2405-8440},
}
RevDate: 2024-07-04
CmpDate: 2024-07-04
A data science roadmap for open science organizations engaged in early-stage drug discovery.
Nature communications, 15(1):5640.
The Structural Genomics Consortium is an international open science research organization with a focus on accelerating early-stage drug discovery, namely hit discovery and optimization. We, as many others, believe that artificial intelligence (AI) is poised to be a main accelerator in the field. The question is then how to best benefit from recent advances in AI and how to generate, format and disseminate data to enable future breakthroughs in AI-guided drug discovery. We present here the recommendations of a working group composed of experts from both the public and private sectors. Robust data management requires precise ontologies and standardized vocabulary while a centralized database architecture across laboratories facilitates data integration into high-value datasets. Lab automation and opening electronic lab notebooks to data mining push the boundaries of data sharing and data modeling. Important considerations for building robust machine-learning models include transparent and reproducible data processing, choosing the most relevant data representation, defining the right training and test sets, and estimating prediction uncertainty. Beyond data-sharing, cloud-based computing can be harnessed to build and disseminate machine-learning models. Important vectors of acceleration for hit and chemical probe discovery will be (1) the real-time integration of experimental data generation and modeling workflows within design-make-test-analyze (DMTA) cycles openly, and at scale and (2) the adoption of a mindset where data scientists and experimentalists work as a unified team, and where data science is incorporated into the experimental design.
Additional Links: PMID-38965235
@article {pmid38965235,
year = {2024},
author = {Edfeldt, K and Edwards, AM and Engkvist, O and Günther, J and Hartley, M and Hulcoop, DG and Leach, AR and Marsden, BD and Menge, A and Misquitta, L and Müller, S and Owen, DR and Schütt, KT and Skelton, N and Steffen, A and Tropsha, A and Vernet, E and Wang, Y and Wellnitz, J and Willson, TM and Clevert, DA and Haibe-Kains, B and Schiavone, LH and Schapira, M},
title = {A data science roadmap for open science organizations engaged in early-stage drug discovery.},
journal = {Nature communications},
volume = {15},
number = {1},
pages = {5640},
pmid = {38965235},
issn = {2041-1723},
support = {RGPIN-2019-04416//Canadian Network for Research and Innovation in Machining Technology, Natural Sciences and Engineering Research Council of Canada (NSERC Canadian Network for Research and Innovation in Machining Technology)/ ; },
mesh = {*Drug Discovery/methods ; *Machine Learning ; *Data Science/methods ; Humans ; Artificial Intelligence ; Information Dissemination/methods ; Data Mining/methods ; Cloud Computing ; Databases, Factual ; },
}
RevDate: 2024-07-04
Universal terminal for cloud quantum computing.
Scientific reports, 14(1):15412.
To bring quantum computing capacities to personal edge devices, the optimum approach is to have simple non-error-corrected personal devices that offload the computational tasks to scalable quantum computers via edge servers with cryogenic components and fault-tolerant schemes. Hence, the network elements deploy different encoding protocols. This article proposes quantum terminals that are compatible with different encoding protocols, paving the way for realizing mobile edge-quantum computing. By accommodating the atomic lattice processor inside a cavity, the entangling mechanism is provided by the Rydberg cavity-QED technology. The auxiliary atom, responsible for photon emission, senses the logical qubit state via the long-range Rydberg interaction. In other words, the state of the logical qubit determines the interaction-induced level shift at the central atom and hence drives the system over distinguished eigenstates, featuring photon emission at early or late times controlled by quantum interference. Applying an entanglement-swapping gate on two emitted photons would make the far-separated logical qubits entangled regardless of their encoding protocols. The proposed scheme provides a universal photonic interface for clustering the processors and connecting them with quantum memories and the quantum cloud, compatible with different encoding formats.
Additional Links: PMID-38965311
@article {pmid38965311,
year = {2024},
author = {Khazali, M},
title = {Universal terminal for cloud quantum computing.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {15412},
pmid = {38965311},
issn = {2045-2322},
}
RevDate: 2024-07-04
Accurately Computing the Interacted Volume of Molecules over Their 3D Mesh Models.
Journal of chemical information and modeling [Epub ahead of print].
For quickly predicting the rational arrangement of catalysts and substrates, we previously proposed a method to calculate the interacted volumes of molecules over their 3D point cloud models. However, the nonuniform density in molecular point clouds may lead to incomplete contours in some slices, reducing the accuracy of the previous method. In this paper, we propose a two-step method for more accurately computing molecular interacted volumes. First, by employing a prematched mesh slicing method, we layer the 3D triangular mesh models of the electrostatic potential isosurfaces of two molecules globally, transforming the volume calculation into finding the intersecting areas in each layer. Next, by subdividing polygonal edges, we accurately identify intersecting parts within each layer, ensuring precise calculation of interacted volumes. In addition, we present a concise overview for computing intersecting areas in cases of multiple contour intersections and for improving computational efficiency by incorporating bounding boxes at three stages. Experimental results demonstrate that our method maintains high accuracy in different experimental data sets, with an average relative error of 0.16%. On the same experimental setup, our average relative error is 0.07%, which is lower than the previous algorithm's 1.73%, improving the accuracy and stability in calculating interacted volumes.
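The layer-wise computation follows a simple pattern: slice both isosurfaces along z, intersect the 2D cross-sections in each layer, and sum area times slab thickness. The sketch below uses shapely, with circular contours standing in for the real prematched mesh slices; it is a simplification of the paper's method, not its implementation.

```python
# Volume of intersection ~ sum over z-layers of (intersecting area x thickness).
from shapely.geometry import Point

def cross_section(center, radius):
    """Placeholder for a real mesh-slice contour; buffer(0) is an empty polygon."""
    return Point(center).buffer(radius)

dz = 0.1                                     # slab thickness
volume = 0.0
for i in range(20):                          # 20 z-layers
    z = i * dz
    a = cross_section((0.0, 0.0), max(1.0 - z, 0.0))   # molecule A contour
    b = cross_section((0.5, 0.0), max(0.9 - z, 0.0))   # molecule B contour
    volume += a.intersection(b).area * dz
print(f"interacted volume ~ {volume:.4f}")
```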
Additional Links: PMID-38962905
@article {pmid38962905,
year = {2024},
author = {Li, F and Lv, K and Liu, X and Zhou, Y and Liu, K},
title = {Accurately Computing the Interacted Volume of Molecules over Their 3D Mesh Models.},
journal = {Journal of chemical information and modeling},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.jcim.4c00641},
pmid = {38962905},
issn = {1549-960X},
}
RevDate: 2024-06-28
CmpDate: 2024-06-28
A cloud-based training module for efficient de novo transcriptome assembly using Nextflow and Google cloud.
Briefings in bioinformatics, 25(4):.
This study describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on de novo transcriptome assembly using Nextflow in an interactive format that uses appropriate cloud resources for data access and analysis. Cloud computing is a powerful new means by which biomedical researchers can access resources and capacity that were previously either unattainable or prohibitively expensive. To take advantage of these resources, however, the biomedical research community needs new skills and knowledge. We present here a cloud-based training module, developed in conjunction with Google Cloud, Deloitte Consulting, and the NIH STRIDES Program, that uses the biological problem of de novo transcriptome assembly to demonstrate and teach the concepts of computational workflows (using Nextflow) and cost- and resource-efficient use of Cloud services (using Google Cloud Platform). Our work highlights the reduced necessity of on-site computing resources and the accessibility of cloud-based infrastructure for bioinformatics applications.
Additional Links: PMID-38941113
@article {pmid38941113,
year = {2024},
author = {Seaman, RP and Campbell, R and Doe, V and Yosufzai, Z and Graber, JH},
title = {A cloud-based training module for efficient de novo transcriptome assembly using Nextflow and Google cloud.},
journal = {Briefings in bioinformatics},
volume = {25},
number = {4},
pages = {},
doi = {10.1093/bib/bbae313},
pmid = {38941113},
issn = {1477-4054},
support = {//Administrative Supplement to the Maine INBRE/ ; //Institutional Development Award/ ; P20GM103423//National Institute of General Medical Sciences of the National Institutes of Health/ ; },
mesh = {*Cloud Computing ; *Transcriptome ; Computational Biology/methods/education ; Software ; Humans ; Gene Expression Profiling/methods ; Internet ; },
}
RevDate: 2024-06-28
Cloud Computing to Enable Wearable-Driven Longitudinal Hemodynamic Maps.
International Conference for High Performance Computing, Networking, Storage and Analysis : [proceedings]. SC (Conference : Supercomputing), 2023:.
Tracking hemodynamic responses to treatment and stimuli over long periods remains a grand challenge. Moving from established single-heartbeat technology to longitudinal profiles would require continuous data describing how the patient's state evolves, new methods to extend the temporal domain over which flow is sampled, and high-throughput computing resources. While personalized digital twins can accurately measure 3D hemodynamics over several heartbeats, state-of-the-art methods would require hundreds of years of wallclock time on leadership scale systems to simulate one day of activity. To address these challenges, we propose a cloud-based, parallel-in-time framework leveraging continuous data from wearable devices to capture the first 3D patient-specific, longitudinal hemodynamic maps. We demonstrate the validity of our method by establishing ground truth data for 750 beats and comparing the results. Our cloud-based framework is based on an initial fixed set of simulations to enable the wearable-informed creation of personalized longitudinal hemodynamic maps.
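The core idea, querying a fixed library of precomputed simulations with wearable readings rather than running new full simulations per beat, can be pictured with a toy lookup; the heart-rate index, file names, and nearest-neighbor rule are illustrative assumptions, not the authors' framework.

```python
# Map each wearable beat reading to the nearest precomputed flow map.
precomputed = {60: "flow_map_60.h5", 80: "flow_map_80.h5", 100: "flow_map_100.h5"}

def map_for_beat(heart_rate):
    nearest = min(precomputed, key=lambda hr: abs(hr - heart_rate))
    return precomputed[nearest]

wearable_hr = [62, 64, 81, 97, 99]           # one reading per beat
longitudinal = [map_for_beat(hr) for hr in wearable_hr]
print(longitudinal)
```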
Additional Links: PMID-38939612
@article {pmid38939612,
year = {2023},
author = {Tanade, C and Rakestraw, E and Ladd, W and Draeger, E and Randles, A},
title = {Cloud Computing to Enable Wearable-Driven Longitudinal Hemodynamic Maps.},
journal = {International Conference for High Performance Computing, Networking, Storage and Analysis : [proceedings]. SC (Conference : Supercomputing)},
volume = {2023},
number = {},
pages = {},
doi = {10.1145/3581784.3607101},
pmid = {38939612},
issn = {2167-4337},
}
RevDate: 2024-06-27
Hybrid deep learning and optimized clustering mechanism for load balancing and fault tolerance in cloud computing.
Network (Bristol, England) [Epub ahead of print].
Cloud services are among the most rapidly developing technologies, and load balancing is recognized as a fundamental challenge for achieving energy efficiency within them. The primary function of load balancing is to deliver optimal services by spreading the load over multiple resources, while fault tolerance improves the reliability and accessibility of the network. In this paper, a hybrid deep learning-based load balancing algorithm is developed. Initially, tasks are allocated to all VMs in a round-robin manner. A Deep Embedding Cluster (DEC) then uses the Central Processing Unit (CPU), bandwidth, memory, processing elements, and frequency scaling factors to determine whether a VM is overloaded or underloaded; tasks on overloaded VMs are evaluated and reassigned to underloaded VMs for cloud load balancing. In addition, a Deep Q Recurrent Neural Network (DQRNN) is proposed to balance the load based on factors such as supply, demand, capacity, load, resource utilization, and fault tolerance. The effectiveness of this model is assessed by load, capacity, resource consumption, and success rate, achieving values of 0.147, 0.726, 0.527, and 0.895, respectively.
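A toy version of the overload/underload logic reads as follows; the weighted utilization score, thresholds, and post-migration estimate are assumptions, and the paper's DEC clustering and DQRNN decision-making are abstracted into simple rules here.

```python
# Detect overloaded/underloaded VMs and migrate tasks between them.
vms = {
    "vm1": {"cpu": 0.92, "mem": 0.85, "tasks": ["t1", "t2", "t3", "t4"]},
    "vm2": {"cpu": 0.20, "mem": 0.30, "tasks": ["t5"]},
}

def load(vm):
    """Weighted utilization score (weights are assumed, not from the paper)."""
    return 0.6 * vm["cpu"] + 0.4 * vm["mem"]

HIGH, LOW = 0.8, 0.4
over = [n for n, v in vms.items() if load(v) > HIGH]
under = [n for n, v in vms.items() if load(v) < LOW]
for src in over:
    while under and load(vms[src]) > HIGH and len(vms[src]["tasks"]) > 1:
        task = vms[src]["tasks"].pop()
        vms[under[0]]["tasks"].append(task)
        vms[src]["cpu"] -= 0.15       # crude post-migration estimate
print({n: v["tasks"] for n, v in vms.items()})
```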
Additional Links: PMID-38934441
@article {pmid38934441,
year = {2024},
author = {Siruvoru, V and Aparna, S},
title = {Hybrid deep learning and optimized clustering mechanism for load balancing and fault tolerance in cloud computing.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-22},
doi = {10.1080/0954898X.2024.2369137},
pmid = {38934441},
issn = {1361-6536},
}
RevDate: 2024-06-27
Per-Pixel Forest Attribute Mapping and Error Estimation: The Google Earth Engine and R dataDriven Tool.
Sensors (Basel, Switzerland), 24(12): pii:s24123947.
Remote sensing products are typically assessed using a single accuracy estimate for the entire map, despite significant variations in accuracy across different map areas or classes. Estimating per-pixel uncertainty is a major challenge for enhancing the usability and potential of remote sensing products. This paper introduces the dataDriven open access tool, a novel statistical design-based approach that specifically addresses this issue by estimating per-pixel uncertainty through a bootstrap resampling procedure. Leveraging Sentinel-2 remote sensing data as auxiliary information, the capabilities of the Google Earth Engine cloud computing platform, and the R programming language, dataDriven can be applied in any world region and to any variable of interest. In this study, the dataDriven tool was tested in the Rincine forest estate study area (eastern Tuscany, Italy), focusing on volume density as the variable of interest. The average volume density was 0.042, corresponding to 420 m³ per hectare. The estimated pixel errors ranged between 93 m³ and 979 m³ per hectare and were 285 m³ per hectare on average. The ability to produce error estimates for each pixel in the map is a novel aspect in the context of the current advances in remote sensing and forest monitoring and assessment. It constitutes significant support in forest management applications and is also a powerful communication tool, since it informs users about areas where map estimates are unreliable while highlighting the areas where the information provided via the map is more trustworthy. In light of this, the dataDriven tool aims to support researchers and practitioners in the spatially exhaustive use of remote sensing-derived products and map validation.
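The bootstrap principle behind the tool can be shown in a few lines of numpy (a generic analogue, not the dataDriven implementation): refit a simple estimator on resampled field plots and take the per-pixel spread of map predictions as the error estimate.

```python
# Per-pixel bootstrap: spread of predictions across resampled fits = error map.
import numpy as np

rng = np.random.default_rng(3)
plots_x = rng.random((200, 4))                 # Sentinel-2-like predictors (synthetic)
plots_y = plots_x @ np.array([3.0, 1.0, 0.5, 2.0]) + rng.normal(0, 0.3, 200)
pixels = rng.random((1000, 4))                 # map pixels to predict

preds = []
for _ in range(100):                           # bootstrap replicates
    idx = rng.integers(0, len(plots_y), len(plots_y))
    beta, *_ = np.linalg.lstsq(plots_x[idx], plots_y[idx], rcond=None)
    preds.append(pixels @ beta)
preds = np.array(preds)
pixel_estimate = preds.mean(axis=0)            # per-pixel volume density
pixel_error = preds.std(axis=0)                # per-pixel uncertainty
print("mean per-pixel error:", pixel_error.mean().round(4))
```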
@article {pmid38931731,
year = {2024},
author = {Francini, S and Marcelli, A and Chirici, G and Di Biase, RM and Fattorini, L and Corona, P},
title = {Per-Pixel Forest Attribute Mapping and Error Estimation: The Google Earth Engine and R dataDriven Tool.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {12},
pages = {},
doi = {10.3390/s24123947},
pmid = {38931731},
issn = {1424-8220},
abstract = {Remote sensing products are typically assessed using a single accuracy estimate for the entire map, despite significant variations in accuracy across different map areas or classes. Estimating per-pixel uncertainty is a major challenge for enhancing the usability and potential of remote sensing products. This paper introduces the dataDriven open access tool, a novel statistical design-based approach that specifically addresses this issue by estimating per-pixel uncertainty through a bootstrap resampling procedure. Leveraging Sentinel-2 remote sensing data as auxiliary information, the capabilities of the Google Earth Engine cloud computing platform, and the R programming language, dataDriven can be applied in any world region and to any variable of interest. In this study, the dataDriven tool was tested in the Rincine forest estate study area (eastern Tuscany, Italy), focusing on volume density as the variable of interest. The average volume density was 0.042 m³ per m², corresponding to 420 m³ per hectare. The estimated pixel errors ranged between 93 m³ and 979 m³ per hectare, with an average of 285 m³ per hectare. The ability to produce error estimates for each pixel in the map is a novel aspect in the context of current advances in remote sensing and forest monitoring and assessment. It constitutes significant support for forest management applications and is also a powerful communication tool, since it informs users about areas where map estimates are unreliable while highlighting areas where the information provided by the map is more trustworthy. In light of this, the dataDriven tool aims to support researchers and practitioners in the spatially exhaustive use of remote sensing-derived products and in map validation.},
}
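To make the bootstrap idea concrete, the following toy sketch refits a plain regressor on resampled field plots and reports the spread of per-pixel predictions as the uncertainty estimate. It is not the dataDriven tool itself, which runs on Google Earth Engine with Sentinel-2 predictors; all data and model choices here are hypothetical.

```python
# Toy design-based bootstrap: resample the plots, refit, and use the spread
# of the refit models' per-pixel predictions as a per-pixel error estimate.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_plots = rng.normal(size=(200, 4))            # auxiliary bands at field plots
y_plots = X_plots @ [3.0, 1.0, 0.5, 2.0] + rng.normal(size=200)
X_pixels = rng.normal(size=(1000, 4))          # auxiliary bands at map pixels

preds = []
for _ in range(100):                           # bootstrap replicates
    idx = rng.integers(0, len(y_plots), len(y_plots))
    model = LinearRegression().fit(X_plots[idx], y_plots[idx])
    preds.append(model.predict(X_pixels))

per_pixel_error = np.std(preds, axis=0)        # per-pixel uncertainty estimate
print(per_pixel_error.mean())
```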
RevDate: 2024-06-27
On the Analysis of Inter-Relationship between Auto-Scaling Policy and QoS of FaaS Workloads.
Sensors (Basel, Switzerland), 24(12): pii:s24123774.
A recent development in cloud computing has introduced serverless technology, enabling the convenient and flexible management of cloud-native applications. Typically, Function-as-a-Service (FaaS) solutions rely on serverless backends such as Kubernetes (K8s) and Knative to leverage the resource-management capabilities of the underlying containerized contexts, including auto-scaling and pod scheduling. To capture these advantages, cloud service providers increasingly deploy self-hosted serverless services on their own on-premises FaaS platforms rather than relying on commercial public cloud offerings. However, the lack of standardized guidelines on K8s auto-scaling configuration options for fairly scheduling and allocating resources in such on-premises hosting environments poses challenges in meeting the service-level objectives (SLOs) of diverse workloads. This study fills this gap by exploring the relationship between auto-scaling behavior and the performance of FaaS workloads under different scaling-related K8s configurations. Based on comprehensive measurement studies, we derive guidance on which scaling configurations, such as the base metric and its threshold, best suit a given workload with respect to latency SLO attainment and the number of completed responses. Additionally, we propose a methodology to assess the scaling efficiency of the related K8s configurations with regard to the quality of service (QoS) of FaaS workloads.
@article {pmid38931559,
year = {2024},
author = {Hong, S and Kim, Y and Nam, J and Kim, S},
title = {On the Analysis of Inter-Relationship between Auto-Scaling Policy and QoS of FaaS Workloads.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {12},
pages = {},
doi = {10.3390/s24123774},
pmid = {38931559},
issn = {1424-8220},
support = {2021R1G1A1006326//National Research Foundation of Korea/ ; },
abstract = {A recent development in cloud computing has introduced serverless technology, enabling the convenient and flexible management of cloud-native applications. Typically, Function-as-a-Service (FaaS) solutions rely on serverless backends such as Kubernetes (K8s) and Knative to leverage the resource-management capabilities of the underlying containerized contexts, including auto-scaling and pod scheduling. To capture these advantages, cloud service providers increasingly deploy self-hosted serverless services on their own on-premises FaaS platforms rather than relying on commercial public cloud offerings. However, the lack of standardized guidelines on K8s auto-scaling configuration options for fairly scheduling and allocating resources in such on-premises hosting environments poses challenges in meeting the service-level objectives (SLOs) of diverse workloads. This study fills this gap by exploring the relationship between auto-scaling behavior and the performance of FaaS workloads under different scaling-related K8s configurations. Based on comprehensive measurement studies, we derive guidance on which scaling configurations, such as the base metric and its threshold, best suit a given workload with respect to latency SLO attainment and the number of completed responses. Additionally, we propose a methodology to assess the scaling efficiency of the related K8s configurations with regard to the quality of service (QoS) of FaaS workloads.},
}
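The auto-scaling behavior examined above rests on the standard Kubernetes Horizontal Pod Autoscaler rule, which the K8s documentation gives as desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). A minimal sketch of that rule follows, with the default 10% tolerance band assumed.

```python
# Kubernetes HPA scaling rule: scale replicas in proportion to the ratio of
# the observed metric to its target, skipping action inside a tolerance band.
import math

def desired_replicas(current_replicas, current_metric, target_metric, tol=0.1):
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tol:          # within tolerance: no scaling action
        return current_replicas
    return math.ceil(current_replicas * ratio)

print(desired_replicas(4, current_metric=0.9, target_metric=0.5))  # -> 8
```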
RevDate: 2024-06-25
Navigating latency hurdles: an in-depth examination of a cloud-powered GNSS real-time positioning application on mobile devices.
Scientific reports, 14(1):14668.
A growing dependence on real-time positioning apps for navigation, safety, and location-based services necessitates a deep understanding of latency challenges within cloud-based Global Navigation Satellite System (GNSS) solutions. This study analyses a GNSS real-time positioning app on smartphones that utilizes cloud computing for positioning data delivery. The study investigates and quantifies diverse latency contributors throughout the system architecture, including GNSS signal acquisition, data transmission, cloud processing, and result dissemination. Controlled experiments and real-world scenarios are employed to assess the influence of network conditions, device capabilities, and cloud server load on overall positioning latency. Findings highlight system bottlenecks and their relative contributions to latency. Additionally, practical recommendations are presented for developers and cloud service providers to mitigate these challenges and guarantee an optimal user experience for real-time positioning applications. This study not only elucidates the complex interplay of factors affecting GNSS app latency, but also paves the way for future advancements in cloud-based positioning solutions, ensuring the accuracy and timeliness critical for safety-critical and emerging applications.
@article {pmid38918484,
year = {2024},
author = {Hernández Olcina, J and Anquela Julián, AB and Martín Furones, ÁE},
title = {Navigating latency hurdles: an in-depth examination of a cloud-powered GNSS real-time positioning application on mobile devices.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {14668},
pmid = {38918484},
issn = {2045-2322},
abstract = {A growing dependence on real-time positioning apps for navigation, safety, and location-based services necessitates a deep understanding of latency challenges within cloud-based Global Navigation Satellite System (GNSS) solutions. This study analyses a GNSS real-time positioning app on smartphones that utilizes cloud computing for positioning data delivery. The study investigates and quantifies diverse latency contributors throughout the system architecture, including GNSS signal acquisition, data transmission, cloud processing, and result dissemination. Controlled experiments and real-world scenarios are employed to assess the influence of network conditions, device capabilities, and cloud server load on overall positioning latency. Findings highlight system bottlenecks and their relative contributions to latency. Additionally, practical recommendations are presented for developers and cloud service providers to mitigate these challenges and guarantee an optimal user experience for real-time positioning applications. This study not only elucidates the complex interplay of factors affecting GNSS app latency, but also paves the way for future advancements in cloud-based positioning solutions, ensuring the accuracy and timeliness critical for safety-critical and emerging applications.},
}
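A simple way to attribute end-to-end latency to stages, in the spirit of the study above, is to timestamp each step and aggregate per-stage budgets. The stage names and the synthetic sleep-based timings below are assumptions for illustration, not the paper's instrumentation.

```python
# Wrap each pipeline stage in a timer and report its share of total latency.
import time, random

def timed(stage, fn, budget):
    t0 = time.perf_counter()
    fn()
    budget[stage] = budget.get(stage, 0.0) + time.perf_counter() - t0

budget = {}
timed("gnss_acquisition", lambda: time.sleep(random.uniform(0.01, 0.03)), budget)
timed("uplink_transfer",  lambda: time.sleep(random.uniform(0.02, 0.08)), budget)
timed("cloud_processing", lambda: time.sleep(random.uniform(0.01, 0.05)), budget)
timed("downlink_result",  lambda: time.sleep(random.uniform(0.01, 0.04)), budget)

total = sum(budget.values())
for stage, t in sorted(budget.items(), key=lambda kv: -kv[1]):
    print(f"{stage:18s} {t*1000:6.1f} ms ({100*t/total:4.1f}%)")
```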
RevDate: 2024-06-25
Enhancing Aviation Safety through AI-Driven Mental Health Management for Pilots and Air Traffic Controllers.
Cyberpsychology, behavior and social networking [Epub ahead of print].
This article provides an overview of the mental health challenges faced by pilots and air traffic controllers (ATCs), whose stressful professional lives may negatively impact global flight safety and security. The adverse effects of mental health disorders on their flight performance pose a particular safety risk, especially in sudden unexpected startle situations. Therefore, the early detection, prediction and prevention of mental health deterioration in pilots and ATCs, particularly among those at high risk, are crucial to minimize potential air crash incidents caused by human factors. Recent research in artificial intelligence (AI) demonstrates the potential of machine and deep learning, edge and cloud computing, virtual reality and wearable multimodal physiological sensors for monitoring and predicting mental health disorders. Longitudinal monitoring and analysis of pilots' and ATCs' physiological, cognitive and behavioral states could help predict individuals at risk of undisclosed or emerging mental health disorders. Utilizing AI tools and methodologies to identify and select these individuals for preventive mental health training and interventions could be a promising and effective approach to preventing potential air crash accidents attributed to human factors and related mental health problems. Based on these insights, the article advocates for the design of a multidisciplinary mental healthcare ecosystem in modern aviation using AI tools and technologies, to foster more efficient and effective mental health management, thereby enhancing flight safety and security standards. This proposed ecosystem requires the collaboration of multidisciplinary experts, including psychologists, neuroscientists, physiologists, and psychiatrists, to address these challenges in modern aviation.
@article {pmid38916063,
year = {2024},
author = {Ćosić, K and Popović, S and Wiederhold, BK},
title = {Enhancing Aviation Safety through AI-Driven Mental Health Management for Pilots and Air Traffic Controllers.},
journal = {Cyberpsychology, behavior and social networking},
volume = {},
number = {},
pages = {},
doi = {10.1089/cyber.2023.0737},
pmid = {38916063},
issn = {2152-2723},
abstract = {This article provides an overview of the mental health challenges faced by pilots and air traffic controllers (ATCs), whose stressful professional lives may negatively impact global flight safety and security. The adverse effects of mental health disorders on their flight performance pose a particular safety risk, especially in sudden unexpected startle situations. Therefore, the early detection, prediction and prevention of mental health deterioration in pilots and ATCs, particularly among those at high risk, are crucial to minimize potential air crash incidents caused by human factors. Recent research in artificial intelligence (AI) demonstrates the potential of machine and deep learning, edge and cloud computing, virtual reality and wearable multimodal physiological sensors for monitoring and predicting mental health disorders. Longitudinal monitoring and analysis of pilots' and ATCs' physiological, cognitive and behavioral states could help predict individuals at risk of undisclosed or emerging mental health disorders. Utilizing AI tools and methodologies to identify and select these individuals for preventive mental health training and interventions could be a promising and effective approach to preventing potential air crash accidents attributed to human factors and related mental health problems. Based on these insights, the article advocates for the design of a multidisciplinary mental healthcare ecosystem in modern aviation using AI tools and technologies, to foster more efficient and effective mental health management, thereby enhancing flight safety and security standards. This proposed ecosystem requires the collaboration of multidisciplinary experts, including psychologists, neuroscientists, physiologists, and psychiatrists, to address these challenges in modern aviation.},
}
RevDate: 2024-06-25
Analysis-ready VCF at Biobank scale using Zarr.
bioRxiv : the preprint server for biology pii:2024.06.11.598241.
BACKGROUND: Variant Call Format (VCF) is the standard file format for interchanging genetic variation data and associated quality control metrics. The usual row-wise encoding of the VCF data model (either as text or packed binary) emphasises efficient retrieval of all data for a given variant, but accessing data on a field or sample basis is inefficient. Biobank scale datasets currently available consist of hundreds of thousands of whole genomes and hundreds of terabytes of compressed VCF. Row-wise data storage is fundamentally unsuitable and a more scalable approach is needed.
RESULTS: We present the VCF Zarr specification, an encoding of the VCF data model using Zarr which makes retrieving subsets of the data much more efficient. Zarr is a cloud-native format for storing multi-dimensional data, widely used in scientific computing. We show how this format is far more efficient than standard VCF based approaches, and competitive with specialised methods for storing genotype data in terms of compression ratios and calculation performance. We demonstrate the VCF Zarr format (and the vcf2zarr conversion utility) on a subset of the Genomics England aggV2 dataset comprising 78,195 samples and 59,880,903 variants, with a 5X reduction in storage and greater than 300X reduction in CPU usage in some representative benchmarks.
CONCLUSIONS: Large row-encoded VCF files are a major bottleneck for current research, and storing and processing these files incurs a substantial cost. The VCF Zarr specification, building on widely used, open-source technologies, has the potential to greatly reduce these costs, and may enable a diverse ecosystem of next-generation tools for analysing genetic variation data directly from cloud-based object stores.
@article {pmid38915693,
year = {2024},
author = {Czech, E and Millar, TR and White, T and Jeffery, B and Miles, A and Tallman, S and Wojdyla, R and Zabad, S and Hammerbacher, J and Kelleher, J},
title = {Analysis-ready VCF at Biobank scale using Zarr.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
doi = {10.1101/2024.06.11.598241},
pmid = {38915693},
abstract = {BACKGROUND: Variant Call Format (VCF) is the standard file format for interchanging genetic variation data and associated quality control metrics. The usual row-wise encoding of the VCF data model (either as text or packed binary) emphasises efficient retrieval of all data for a given variant, but accessing data on a field or sample basis is inefficient. Biobank scale datasets currently available consist of hundreds of thousands of whole genomes and hundreds of terabytes of compressed VCF. Row-wise data storage is fundamentally unsuitable and a more scalable approach is needed.
RESULTS: We present the VCF Zarr specification, an encoding of the VCF data model using Zarr which makes retrieving subsets of the data much more efficient. Zarr is a cloud-native format for storing multi-dimensional data, widely used in scientific computing. We show how this format is far more efficient than standard VCF based approaches, and competitive with specialised methods for storing genotype data in terms of compression ratios and calculation performance. We demonstrate the VCF Zarr format (and the vcf2zarr conversion utility) on a subset of the Genomics England aggV2 dataset comprising 78,195 samples and 59,880,903 variants, with a 5X reduction in storage and greater than 300X reduction in CPU usage in some representative benchmarks.
CONCLUSIONS: Large row-encoded VCF files are a major bottleneck for current research, and storing and processing these files incurs a substantial cost. The VCF Zarr specification, building on widely used, open-source technologies, has the potential to greatly reduce these costs, and may enable a diverse ecosystem of next-generation tools for analysing genetic variation data directly from cloud-based object stores.},
}
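The column-oriented access pattern that motivates VCF Zarr can be sketched with the zarr-python library: genotype calls live in a chunked array, so a slice by sample touches only the relevant chunks. The array name call_genotype mirrors common sgkit/VCF-Zarr conventions, and the sizes are illustrative assumptions rather than the specification itself.

```python
# Chunked (variants, samples, ploidy) genotype array: slicing by sample range
# decodes only the chunks that overlap the slice, unlike row-wise text VCF.
import numpy as np
import zarr

gt = zarr.open("call_genotype.zarr", mode="w",
               shape=(1000, 500, 2), chunks=(100, 100, 2), dtype="i1")
gt[:] = np.random.default_rng(0).integers(0, 2, (1000, 500, 2), dtype=np.int8)

# All variants for the first 10 samples touches only the first sample-chunk
# column, roughly 1/5 of the stored chunks in this layout.
subset = gt[:, :10, :]
print(subset.shape)   # (1000, 10, 2)
```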
RevDate: 2024-06-24
Enhancing Earth data analysis in 5G satellite networks: A novel lightweight approach integrating improved deep learning.
Heliyon, 10(11):e32071.
Efficiently handling huge amounts of data and enabling processing-intensive applications to run simultaneously in remote areas is the ultimate objective of 5G networks. Currently, in order to distribute computing tasks, ongoing studies are exploring the incorporation of fog-cloud servers onto satellites, presenting a promising solution to enhance connectivity in remote areas. Nevertheless, analyzing the copious amounts of data produced by scattered sensors remains a challenging endeavor. The conventional strategy of transmitting this data to a central server for analysis can be costly. In contrast to centralized learning methods, distributed machine learning (ML) provides an alternative approach, albeit with notable drawbacks. This paper addresses the comparative learning expenses of centralized and distributed learning systems to tackle these challenges directly. It proposes the creation of an integrated system that merges cloud servers with satellite network structures, leveraging the strengths of each. This integration could represent a major breakthrough in satellite-based networking technology by streamlining data processing from remote nodes and cutting down on expenses. The core of this approach lies in the adaptive tailoring of learning techniques for individual entities based on their specific contextual nuances. The experimental findings underscore the capability of the innovative lightweight strategy, LMAED2L (Enhanced Deep Learning for Earth Data Analysis), across a spectrum of machine learning assignments, showing consistent performance under diverse operational conditions. Through a strategic fusion of centralized and distributed learning frameworks, the LMAED2L method emerges as a dynamic and effective remedy for the intricate data analysis challenges encountered within satellite networks interfaced with cloud servers. The empirical findings reveal a significant performance boost of our novel approach over traditional methods, with an average increase in reward (4.1%), task completion rate (3.9%), and delivered packets (3.4%). These advancements should catalyze the integration of cutting-edge machine learning algorithms within future networks, improving responsiveness, efficiency, and resource utilization.
@article {pmid38912450,
year = {2024},
author = {Yang, Y and Ren, K and Song, J},
title = {Enhancing Earth data analysis in 5G satellite networks: A novel lightweight approach integrating improved deep learning.},
journal = {Heliyon},
volume = {10},
number = {11},
pages = {e32071},
pmid = {38912450},
issn = {2405-8440},
abstract = {Efficiently handling huge amounts of data and enabling processing-intensive applications to run simultaneously in remote areas is the ultimate objective of 5G networks. Currently, in order to distribute computing tasks, ongoing studies are exploring the incorporation of fog-cloud servers onto satellites, presenting a promising solution to enhance connectivity in remote areas. Nevertheless, analyzing the copious amounts of data produced by scattered sensors remains a challenging endeavor. The conventional strategy of transmitting this data to a central server for analysis can be costly. In contrast to centralized learning methods, distributed machine learning (ML) provides an alternative approach, albeit with notable drawbacks. This paper addresses the comparative learning expenses of centralized and distributed learning systems to tackle these challenges directly. It proposes the creation of an integrated system that merges cloud servers with satellite network structures, leveraging the strengths of each. This integration could represent a major breakthrough in satellite-based networking technology by streamlining data processing from remote nodes and cutting down on expenses. The core of this approach lies in the adaptive tailoring of learning techniques for individual entities based on their specific contextual nuances. The experimental findings underscore the capability of the innovative lightweight strategy, LMAED2L (Enhanced Deep Learning for Earth Data Analysis), across a spectrum of machine learning assignments, showing consistent performance under diverse operational conditions. Through a strategic fusion of centralized and distributed learning frameworks, the LMAED2L method emerges as a dynamic and effective remedy for the intricate data analysis challenges encountered within satellite networks interfaced with cloud servers. The empirical findings reveal a significant performance boost of our novel approach over traditional methods, with an average increase in reward (4.1%), task completion rate (3.9%), and delivered packets (3.4%). These advancements should catalyze the integration of cutting-edge machine learning algorithms within future networks, improving responsiveness, efficiency, and resource utilization.},
}
RevDate: 2024-06-22
Cloud inversion analysis of surrounding rock parameters for underground powerhouse based on PSO-BP optimized neural network and web technology.
Scientific reports, 14(1):14399.
Aiming at the shortcomings of the BP neural network in practical applications, such as its tendency to fall into local extrema and its slow convergence, we optimized the initial weights and thresholds of the BP neural network using particle swarm optimization (PSO). Additionally, cloud computing services, web technology, a cloud database, and numerical simulation were integrated to construct an intelligent feedback analysis cloud program for underground engineering safety monitoring based on the PSO-BP algorithm. The program can conveniently, quickly, and intelligently carry out numerical analysis of underground engineering and dynamic feedback analysis of surrounding rock parameters. The program was applied to the cloud inversion analysis of the surrounding rock parameters for the underground powerhouse of the Shuangjiangkou Hydropower Station. The displacement simulated with the back-analyzed parameters matches the measured displacement very well. The posterior variance evaluation shows that the posterior error ratio is 0.045 and the small-error probability is 0.999. The evaluation results indicate that the intelligent feedback analysis cloud program has high accuracy and can be applied in engineering practice.
@article {pmid38909109,
year = {2024},
author = {Qu, L and Xie, HQ and Pei, JL and Li, YG and Wu, JM and Feng, G and Xiao, ML},
title = {Cloud inversion analysis of surrounding rock parameters for underground powerhouse based on PSO-BP optimized neural network and web technology.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {14399},
pmid = {38909109},
issn = {2045-2322},
support = {No. 52109135//National Natural Science Foundation of China/ ; No. 2022-03//Science and Technology Innovation Program from Water Resources of Guangdong Province/ ; },
abstract = {Aiming at the shortcomings of the BP neural network in practical applications, such as its tendency to fall into local extrema and its slow convergence, we optimized the initial weights and thresholds of the BP neural network using particle swarm optimization (PSO). Additionally, cloud computing services, web technology, a cloud database, and numerical simulation were integrated to construct an intelligent feedback analysis cloud program for underground engineering safety monitoring based on the PSO-BP algorithm. The program can conveniently, quickly, and intelligently carry out numerical analysis of underground engineering and dynamic feedback analysis of surrounding rock parameters. The program was applied to the cloud inversion analysis of the surrounding rock parameters for the underground powerhouse of the Shuangjiangkou Hydropower Station. The displacement simulated with the back-analyzed parameters matches the measured displacement very well. The posterior variance evaluation shows that the posterior error ratio is 0.045 and the small-error probability is 0.999. The evaluation results indicate that the intelligent feedback analysis cloud program has high accuracy and can be applied in engineering practice.},
}
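For readers unfamiliar with the PSO-BP idea, the sketch below uses a standard particle swarm update to search for good initial weights of a tiny feed-forward network, which gradient (BP) training would then refine. Network size, swarm hyperparameters, and data are illustrative assumptions, not the authors' configuration.

```python
# PSO over the flattened weights of a 3-4-1 network: each particle is a weight
# vector, scored by the network's mean squared error on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3)); y = (X.sum(axis=1) > 0).astype(float)

def loss(w):
    W1, b1 = w[:12].reshape(3, 4), w[12:16]
    W2, b2 = w[16:20].reshape(4, 1), w[20]
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2).ravel() - b2))
    return np.mean((p - y) ** 2)

n, dim = 20, 21
pos = rng.normal(size=(n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(100):                          # standard PSO velocity update
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("initial-weight loss after PSO:", pbest_f.min())  # hand off to BP training
```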
RevDate: 2024-06-22
Smartphone-Based Passive Sensing for Behavioral and Physical Monitoring in Free-Life Conditions: Technical Usability Study.
JMIR biomedical engineering, 6(2):e15417 pii:v6i2e15417.
BACKGROUND: Smartphone use is spreading widely in society. Smartphones' embedded functions and sensors may play an important role in therapy monitoring and planning. However, the use of smartphones for intrapersonal behavioral and physical monitoring is not yet fully supported by adequate studies addressing technical reliability and acceptance.
OBJECTIVE: The objective of this paper is to identify and discuss technical issues that may impact on the wide use of smartphones as clinical monitoring tools. The focus is on the quality of the data and transparency of the acquisition process.
METHODS: QuantifyMyPerson is a platform for continuous monitoring of smartphone use and embedded sensor data. The platform consists of an app for data acquisition, a backend cloud server for data storage and processing, and a web-based dashboard for data management and visualization. The data processing aims to extract meaningful features for the description of daily life, such as phone status, calls, app use, GPS, and accelerometer data. A group of healthy subjects installed the app on their smartphones and ran it for 7 months. The acquired data were analyzed to assess the impact on smartphone performance (ie, battery consumption and anomalies in functioning) and data integrity. The relevance of the selected features in describing changes in daily life was assessed through the computation of a k-nearest neighbors global anomaly score to detect days that differ from others.
RESULTS: The effectiveness of smartphone-based monitoring depends on the acceptability and interoperability of the system as user retention and data integrity are key aspects. Acceptability was confirmed by the full transparency of the app and the absence of any conflicts with daily smartphone use. The only perceived issue was the battery consumption even though the trend of battery drain with and without the app running was comparable. Regarding interoperability, the app was successfully installed and run on several Android brands. The study shows that some smartphone manufacturers implement power-saving policies not allowing continuous sensor data acquisition and impacting integrity. Data integrity was 96% on smartphones whose power-saving policies do not impact the embedded sensor management and 84% overall.
CONCLUSIONS: The main technological barriers to continuous behavioral and physical monitoring (ie, battery consumption and power-saving policies of manufacturers) may be overcome. Battery consumption increase is mainly due to GPS triangulation and may be limited, while data missing because of power-saving policies are related only to periods of nonuse of the phone since the embedded sensors are reactivated by any smartphone event. Overall, smartphone-based passive sensing is fully feasible and scalable despite the Android market fragmentation.
@article {pmid38907377,
year = {2021},
author = {Tonti, S and Marzolini, B and Bulgheroni, M},
title = {Smartphone-Based Passive Sensing for Behavioral and Physical Monitoring in Free-Life Conditions: Technical Usability Study.},
journal = {JMIR biomedical engineering},
volume = {6},
number = {2},
pages = {e15417},
doi = {10.2196/15417},
pmid = {38907377},
issn = {2561-3278},
abstract = {BACKGROUND: Smartphone use is spreading widely in society. Smartphones' embedded functions and sensors may play an important role in therapy monitoring and planning. However, the use of smartphones for intrapersonal behavioral and physical monitoring is not yet fully supported by adequate studies addressing technical reliability and acceptance.
OBJECTIVE: The objective of this paper is to identify and discuss technical issues that may impact on the wide use of smartphones as clinical monitoring tools. The focus is on the quality of the data and transparency of the acquisition process.
METHODS: QuantifyMyPerson is a platform for continuous monitoring of smartphone use and embedded sensor data. The platform consists of an app for data acquisition, a backend cloud server for data storage and processing, and a web-based dashboard for data management and visualization. The data processing aims to extract meaningful features for the description of daily life, such as phone status, calls, app use, GPS, and accelerometer data. A group of healthy subjects installed the app on their smartphones and ran it for 7 months. The acquired data were analyzed to assess the impact on smartphone performance (ie, battery consumption and anomalies in functioning) and data integrity. The relevance of the selected features in describing changes in daily life was assessed through the computation of a k-nearest neighbors global anomaly score to detect days that differ from others.
RESULTS: The effectiveness of smartphone-based monitoring depends on the acceptability and interoperability of the system as user retention and data integrity are key aspects. Acceptability was confirmed by the full transparency of the app and the absence of any conflicts with daily smartphone use. The only perceived issue was the battery consumption even though the trend of battery drain with and without the app running was comparable. Regarding interoperability, the app was successfully installed and run on several Android brands. The study shows that some smartphone manufacturers implement power-saving policies not allowing continuous sensor data acquisition and impacting integrity. Data integrity was 96% on smartphones whose power-saving policies do not impact the embedded sensor management and 84% overall.
CONCLUSIONS: The main technological barriers to continuous behavioral and physical monitoring (ie, battery consumption and power-saving policies of manufacturers) may be overcome. Battery consumption increase is mainly due to GPS triangulation and may be limited, while data missing because of power-saving policies are related only to periods of nonuse of the phone since the embedded sensors are reactivated by any smartphone event. Overall, smartphone-based passive sensing is fully feasible and scalable despite the Android market fragmentation.},
}
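The k-nearest neighbors global anomaly score mentioned in the methods can be sketched in a few lines: each day is a feature vector, and its score is the mean distance to its k nearest neighbor days. The feature layout and k below are assumptions for illustration.

```python
# Score each day by its mean distance to the k nearest other days; a high
# score flags a day that differs from the subject's usual behavior.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
days = rng.normal(size=(210, 5))              # ~7 months of daily feature vectors
days[100] += 6                                # one synthetic unusual day

nn = NearestNeighbors(n_neighbors=6).fit(days)   # 6 = self + 5 neighbors
dist, _ = nn.kneighbors(days)
score = dist[:, 1:].mean(axis=1)              # drop the self-distance column
print("most anomalous day:", int(score.argmax()))
```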
RevDate: 2024-06-21
EfficientNet-deep quantum neural network-based economic denial of sustainability attack detection to enhance network security in cloud.
Network (Bristol, England) [Epub ahead of print].
Cloud computing (CC) is poised to revolutionize the information technology (IT) and communication fields. Security and internet connectivity are the major factors slowing the proliferation of CC. Recently, a new kind of denial-of-service (DDoS) attack, known as the Economic Denial of Sustainability (EDoS) attack, has been emerging. Though EDoS attacks are currently small in scale, they can be expected to grow in tandem with the progression of cloud usage. Here, EfficientNet-B3-Attn-2 fused with a Deep Quantum Neural Network (EfficientNet-DQNN) is presented for EDoS detection. Initially, a cloud is simulated, and thereafter the input log file under consideration is pre-processed using Z-Score Normalization (ZSN). Afterwards, feature fusion (FF) is accomplished based on a Deep Neural Network (DNN) with Kulczynski similarity. Then, data augmentation (DA) is executed by oversampling based upon the Synthetic Minority Over-sampling Technique (SMOTE). At last, attack detection is conducted utilizing EfficientNet-DQNN, which is formed by incorporating EfficientNet-B3-Attn-2 with a DQNN. EfficientNet-DQNN attained an F1-score of 89.8%, accuracy of 90.4%, precision of 91.1%, and recall of 91.2% on the BoT-IoT dataset with 9-fold cross-validation.
@article {pmid38904211,
year = {2024},
author = {Navaneethakrishnan, M and Robinson Joel, M and Kalavai Palani, S and Gnanaprakasam, GJ},
title = {EfficientNet-deep quantum neural network-based economic denial of sustainability attack detection to enhance network security in cloud.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-25},
doi = {10.1080/0954898X.2024.2361093},
pmid = {38904211},
issn = {1361-6536},
abstract = {Cloud computing (CC) is poised to revolutionize the information technology (IT) and communication fields. Security and internet connectivity are the major factors slowing the proliferation of CC. Recently, a new kind of denial-of-service (DDoS) attack, known as the Economic Denial of Sustainability (EDoS) attack, has been emerging. Though EDoS attacks are currently small in scale, they can be expected to grow in tandem with the progression of cloud usage. Here, EfficientNet-B3-Attn-2 fused with a Deep Quantum Neural Network (EfficientNet-DQNN) is presented for EDoS detection. Initially, a cloud is simulated, and thereafter the input log file under consideration is pre-processed using Z-Score Normalization (ZSN). Afterwards, feature fusion (FF) is accomplished based on a Deep Neural Network (DNN) with Kulczynski similarity. Then, data augmentation (DA) is executed by oversampling based upon the Synthetic Minority Over-sampling Technique (SMOTE). At last, attack detection is conducted utilizing EfficientNet-DQNN, which is formed by incorporating EfficientNet-B3-Attn-2 with a DQNN. EfficientNet-DQNN attained an F1-score of 89.8%, accuracy of 90.4%, precision of 91.1%, and recall of 91.2% on the BoT-IoT dataset with 9-fold cross-validation.},
}
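Two of the preprocessing steps named above, Z-score normalization and SMOTE oversampling, can be sketched with standard libraries; a plain classifier stands in for the EfficientNet-DQNN detector purely to complete the pipeline. All data here are synthetic.

```python
# Z-score normalize network-flow features, rebalance the rare attack class
# with SMOTE, then fit a stand-in classifier on the balanced data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE      # pip install imbalanced-learn

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 6)), rng.normal(2, 1, (50, 6))])
y = np.array([0] * 950 + [1] * 50)            # 1 = rare attack class

X = StandardScaler().fit_transform(X)         # z-score normalization
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
clf = LogisticRegression().fit(X_bal, y_bal)
print("balanced class counts:", np.bincount(y_bal))
```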
RevDate: 2024-06-20
On the quantum circuit implementation of modus ponens.
Scientific reports, 14(1):14245.
The process of inference reflects the structure of propositions with assigned truth values, either true or false. Modus ponens is a fundamental form of inference that affirms the antecedent in order to affirm the consequent. Inspired by quantum computing, the superposition of true and false is used for parallel processing. In this work, we propose a quantum version of modus ponens. Additionally, we introduce two generalizations of quantum modus ponens: the quantum modus ponens inference chain and multidimensional quantum modus ponens. Finally, a simple implementation of quantum modus ponens on the OriginQ quantum computing cloud platform is demonstrated.
@article {pmid38902499,
year = {2024},
author = {Dai, S},
title = {On the quantum circuit implementation of modus ponens.},
journal = {Scientific reports},
volume = {14},
number = {1},
pages = {14245},
pmid = {38902499},
issn = {2045-2322},
support = {62006168//National Natural Science Foundation of China/ ; LQ21A010001//Natural Science Foundation of Zhejiang Province/ ; },
abstract = {The process of inference reflects the structure of propositions with assigned truth values, either true or false. Modus ponens is a fundamental form of inference that affirms the antecedent in order to affirm the consequent. Inspired by quantum computing, the superposition of true and false is used for parallel processing. In this work, we propose a quantum version of modus ponens. Additionally, we introduce two generalizations of quantum modus ponens: the quantum modus ponens inference chain and multidimensional quantum modus ponens. Finally, a simple implementation of quantum modus ponens on the OriginQ quantum computing cloud platform is demonstrated.},
}
RevDate: 2024-06-19
Streamline Intelligent Crowd Monitoring with IoT Cloud Computing Middleware.
Sensors (Basel, Switzerland), 24(11):.
This article introduces a novel middleware that utilizes cost-effective, low-power computing devices like Raspberry Pi to analyze data from wireless sensor networks (WSNs). It is designed for indoor settings like historical buildings and museums, tracking visitors and identifying points of interest. It serves as an evacuation aid by monitoring occupancy and gauging the popularity of specific areas, subjects, or art exhibitions. The middleware employs a basic form of the MapReduce algorithm to gather WSN data and distribute it across available computer nodes. Data collected by RFID sensors on visitor badges is stored on mini-computers placed in exhibition rooms and then transmitted to a remote database after a preset time frame. Utilizing MapReduce for data analysis and a leader election algorithm for fault tolerance, this middleware showcases its viability through metrics, demonstrating applications like swift prototyping and accurate validation of findings. Despite using simpler hardware, its performance matches resource-intensive methods involving audiovisual and AI techniques. This design's innovation lies in its fault-tolerant, distributed setup using budget-friendly, low-power devices rather than resource-heavy hardware or methods. Successfully tested at a historical building in Greece (M. Hatzidakis' residence), it is tailored for indoor spaces. This paper compares its algorithmic application layer with other implementations, highlighting its technical strengths and advantages. Particularly relevant in the wake of the COVID-19 pandemic and general monitoring middleware for indoor locations, this middleware holds promise in tracking visitor counts and overall building occupancy.
@article {pmid38894434,
year = {2024},
author = {Gazis, A and Katsiri, E},
title = {Streamline Intelligent Crowd Monitoring with IoT Cloud Computing Middleware.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {11},
pages = {},
pmid = {38894434},
issn = {1424-8220},
abstract = {This article introduces a novel middleware that utilizes cost-effective, low-power computing devices like Raspberry Pi to analyze data from wireless sensor networks (WSNs). It is designed for indoor settings like historical buildings and museums, tracking visitors and identifying points of interest. It serves as an evacuation aid by monitoring occupancy and gauging the popularity of specific areas, subjects, or art exhibitions. The middleware employs a basic form of the MapReduce algorithm to gather WSN data and distribute it across available computer nodes. Data collected by RFID sensors on visitor badges is stored on mini-computers placed in exhibition rooms and then transmitted to a remote database after a preset time frame. Utilizing MapReduce for data analysis and a leader election algorithm for fault tolerance, this middleware showcases its viability through metrics, demonstrating applications like swift prototyping and accurate validation of findings. Despite using simpler hardware, its performance matches resource-intensive methods involving audiovisual and AI techniques. This design's innovation lies in its fault-tolerant, distributed setup using budget-friendly, low-power devices rather than resource-heavy hardware or methods. Successfully tested at a historical building in Greece (M. Hatzidakis' residence), it is tailored for indoor spaces. This paper compares its algorithmic application layer with other implementations, highlighting its technical strengths and advantages. Particularly relevant in the wake of the COVID-19 pandemic and general monitoring middleware for indoor locations, this middleware holds promise in tracking visitor counts and overall building occupancy.},
}
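The aggregation step of the middleware above can be illustrated with a toy map/reduce pass over RFID badge readings: map each reading to a (room, 1) pair, then reduce by summing per key. The record layout is an assumed simplification.

```python
# Minimal MapReduce-style aggregation of RFID readings into room occupancy.
from collections import defaultdict

readings = [
    {"badge": "b1", "room": "hall"},
    {"badge": "b2", "room": "hall"},
    {"badge": "b3", "room": "exhibit-2"},
]

mapped = [(r["room"], 1) for r in readings]           # map phase
counts = defaultdict(int)
for room, n in mapped:                                # reduce phase (sum by key)
    counts[room] += n
print(dict(counts))                                   # {'hall': 2, 'exhibit-2': 1}
```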
RevDate: 2024-06-19
Energy-Efficient Edge and Cloud Image Classification with Multi-Reservoir Echo State Network and Data Processing Units.
Sensors (Basel, Switzerland), 24(11):.
In an era dominated by Internet of Things (IoT) devices, software-as-a-service (SaaS) platforms, and rapid advances in cloud and edge computing, the demand for efficient and lightweight models suitable for resource-constrained devices such as data processing units (DPUs) has surged. Traditional deep learning models, such as convolutional neural networks (CNNs), pose significant computational and memory challenges, limiting their use in resource-constrained environments. Echo State Networks (ESNs), based on reservoir computing principles, offer a promising alternative with reduced computational complexity and shorter training times. This study explores the applicability of ESN-based architectures in image classification and weather forecasting tasks, using benchmarks such as the MNIST, FashionMnist, and CloudCast datasets. Through comprehensive evaluations, the Multi-Reservoir ESN (MRESN) architecture emerges as a standout performer, demonstrating its potential for deployment on DPUs or home stations. By exploiting the dynamic adaptability of MRESN to changing input signals, such as weather forecasts, continuous on-device training becomes feasible, eliminating the need for static pre-trained models. Our results highlight the importance of lightweight models such as MRESN in cloud and edge computing applications where efficiency and sustainability are paramount. This study contributes to the advancement of efficient computing practices by providing novel insights into the performance and versatility of MRESN architectures. By facilitating the adoption of lightweight models in resource-constrained environments, our research provides a viable alternative for improved efficiency and scalability in modern computing paradigms.
@article {pmid38894431,
year = {2024},
author = {López-Ortiz, EJ and Perea-Trigo, M and Soria-Morillo, LM and Álvarez-García, JA and Vegas-Olmos, JJ},
title = {Energy-Efficient Edge and Cloud Image Classification with Multi-Reservoir Echo State Network and Data Processing Units.},
journal = {Sensors (Basel, Switzerland)},
volume = {24},
number = {11},
pages = {},
pmid = {38894431},
issn = {1424-8220},
abstract = {In an era dominated by Internet of Things (IoT) devices, software-as-a-service (SaaS) platforms, and rapid advances in cloud and edge computing, the demand for efficient and lightweight models suitable for resource-constrained devices such as data processing units (DPUs) has surged. Traditional deep learning models, such as convolutional neural networks (CNNs), pose significant computational and memory challenges, limiting their use in resource-constrained environments. Echo State Networks (ESNs), based on reservoir computing principles, offer a promising alternative with reduced computational complexity and shorter training times. This study explores the applicability of ESN-based architectures in image classification and weather forecasting tasks, using benchmarks such as the MNIST, FashionMnist, and CloudCast datasets. Through comprehensive evaluations, the Multi-Reservoir ESN (MRESN) architecture emerges as a standout performer, demonstrating its potential for deployment on DPUs or home stations. By exploiting the dynamic adaptability of MRESN to changing input signals, such as weather forecasts, continuous on-device training becomes feasible, eliminating the need for static pre-trained models. Our results highlight the importance of lightweight models such as MRESN in cloud and edge computing applications where efficiency and sustainability are paramount. This study contributes to the advancement of efficient computing practices by providing novel insights into the performance and versatility of MRESN architectures. By facilitating the adoption of lightweight models in resource-constrained environments, our research provides a viable alternative for improved efficiency and scalability in modern computing paradigms.},
}
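The building block behind the multi-reservoir architecture above is the basic echo state network update: a fixed random reservoir driven by the input, with only a linear readout trained on the collected states. Sizes and the spectral-radius scaling below are illustrative assumptions.

```python
# Basic ESN: x(t+1) = tanh(W_in u(t) + W x(t)) with a fixed random reservoir
# whose spectral radius is scaled below 1 for the echo state property.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 4, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius to 0.9

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x, states = np.zeros(n_res), []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)         # reservoir state update
        states.append(x)
    return np.array(states)

states = run_reservoir(rng.normal(size=(50, n_in)))
print(states.shape)   # (50, 200); a linear readout is then fit on these states
```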
RevDate: 2024-06-14
Cloud-based serverless computing enables accelerated monte carlo simulations for nuclear medicine imaging.
Biomedical physics & engineering express [Epub ahead of print].
This study investigates the potential of cloud-based serverless computing to accelerate Monte Carlo (MC) simulations for nuclear medicine imaging tasks. MC simulations can pose a high computational burden, even when executed on modern multi-core computing servers. Cloud computing allows simulation tasks to be highly parallelized and considerably accelerated. We investigate the computational performance of a cloud-based serverless MC simulation of radioactive decays for positron emission tomography imaging using the Amazon Web Services (AWS) Lambda serverless computing platform for the first time in the scientific literature. We compare the computational performance of AWS to a modern on-premises multi-thread reconstruction server by measuring the execution times of the processes using between 10^5 and 2∙10^10 simulated decays. We deployed two popular MC simulation frameworks, SimSET and GATE, within the AWS computing environment. Containerized application images were used as the basis for an AWS Lambda function, and local (non-cloud) scripts were used to orchestrate the deployment of simulations. The task was broken down into smaller parallel runs launched on concurrently running AWS Lambda instances, and the results were postprocessed and downloaded via the Simple Storage Service. Our implementation of cloud-based MC simulations with SimSET outperforms local server-based computations by more than an order of magnitude. However, the GATE implementation creates more and larger output files and reveals that the internet connection speed can become the primary bottleneck for data transfers. Simulating 10^9 decays using SimSET is possible within 5 min and accrues computation costs of about $10 on AWS, whereas GATE would have to run in batches for more than 100 min at considerably higher costs. Adopting a cloud-based serverless computing architecture in medical imaging research facilities can considerably improve processing times and overall workflow efficiency, with future research exploring additional enhancements through optimized configurations and computational methods.
@article {pmid38876087,
year = {2024},
author = {Bayerlein, R and Swarnakar, V and Selfridge, A and Spencer, BA and Nardo, L and Badawi, RD},
title = {Cloud-based serverless computing enables accelerated monte carlo simulations for nuclear medicine imaging.},
journal = {Biomedical physics & engineering express},
volume = {},
number = {},
pages = {},
doi = {10.1088/2057-1976/ad5847},
pmid = {38876087},
issn = {2057-1976},
abstract = {This study investigates the potential of cloud-based serverless computing to accelerate Monte Carlo (MC) simulations for nuclear medicine imaging tasks. MC simulations can pose a high computational burden, even when executed on modern multi-core computing servers. Cloud computing allows simulation tasks to be highly parallelized and considerably accelerated. We investigate the computational performance of a cloud-based serverless MC simulation of radioactive decays for positron emission tomography imaging using the Amazon Web Services (AWS) Lambda serverless computing platform for the first time in the scientific literature. We compare the computational performance of AWS to a modern on-premises multi-thread reconstruction server by measuring the execution times of the processes using between 10^5 and 2∙10^10 simulated decays. We deployed two popular MC simulation frameworks, SimSET and GATE, within the AWS computing environment. Containerized application images were used as the basis for an AWS Lambda function, and local (non-cloud) scripts were used to orchestrate the deployment of simulations. The task was broken down into smaller parallel runs launched on concurrently running AWS Lambda instances, and the results were postprocessed and downloaded via the Simple Storage Service. Our implementation of cloud-based MC simulations with SimSET outperforms local server-based computations by more than an order of magnitude. However, the GATE implementation creates more and larger output files and reveals that the internet connection speed can become the primary bottleneck for data transfers. Simulating 10^9 decays using SimSET is possible within 5 min and accrues computation costs of about $10 on AWS, whereas GATE would have to run in batches for more than 100 min at considerably higher costs. Adopting a cloud-based serverless computing architecture in medical imaging research facilities can considerably improve processing times and overall workflow efficiency, with future research exploring additional enhancements through optimized configurations and computational methods.},
}
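The fan-out pattern described above can be sketched with boto3: split the decay budget into chunks and invoke one Lambda per chunk concurrently. The function name and payload schema are hypothetical, and running this requires AWS credentials and a deployed worker function; only the boto3 calls themselves are real API.

```python
# Split a decay budget into chunks and invoke one Lambda per chunk in
# parallel; each worker returns its partial result for postprocessing.
import json
from concurrent.futures import ThreadPoolExecutor
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

def run_chunk(seed, n_decays):
    resp = lam.invoke(
        FunctionName="mc-simulation-worker",          # hypothetical function name
        Payload=json.dumps({"seed": seed, "n_decays": n_decays}),
    )
    return json.loads(resp["Payload"].read())

total, chunk = 10**9, 10**6                           # 1000 parallel runs
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(lambda s: run_chunk(s, chunk), range(total // chunk)))
```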
RevDate: 2024-06-14
Enhancing Energy Efficiency in Telehealth Internet of Things Systems Through Fog and Cloud Computing Integration: Simulation Study.
JMIR biomedical engineering, 9:e50175 pii:v9i1e50175.
BACKGROUND: The increasing adoption of telehealth Internet of Things (IoT) devices in health care informatics has led to concerns about energy use and data processing efficiency.
OBJECTIVE: This paper introduces an innovative model that integrates telehealth IoT devices with a fog and cloud computing-based platform, aiming to enhance energy efficiency in telehealth IoT systems.
METHODS: The proposed model incorporates adaptive energy-saving strategies, localized fog nodes, and a hybrid cloud infrastructure. Simulation analyses were conducted to assess the model's effectiveness in reducing energy consumption and enhancing data processing efficiency.
RESULTS: Simulation results demonstrated significant energy savings, with a 2% reduction in energy consumption achieved through adaptive energy-saving strategies. The sample size for the simulation was 10-40, providing statistical robustness to the findings.
CONCLUSIONS: The proposed model successfully addresses energy and data processing challenges in telehealth IoT scenarios. By integrating fog computing for local processing and a hybrid cloud infrastructure, substantial energy savings are achieved. Ongoing research will focus on refining the energy conservation model and exploring additional functional enhancements for broader applicability in health care and industrial contexts.
@article {pmid38875671,
year = {2024},
author = {Guo, Y and Ganti, S and Wu, Y},
title = {Enhancing Energy Efficiency in Telehealth Internet of Things Systems Through Fog and Cloud Computing Integration: Simulation Study.},
journal = {JMIR biomedical engineering},
volume = {9},
number = {},
pages = {e50175},
doi = {10.2196/50175},
pmid = {38875671},
issn = {2561-3278},
abstract = {BACKGROUND: The increasing adoption of telehealth Internet of Things (IoT) devices in health care informatics has led to concerns about energy use and data processing efficiency.
OBJECTIVE: This paper introduces an innovative model that integrates telehealth IoT devices with a fog and cloud computing-based platform, aiming to enhance energy efficiency in telehealth IoT systems.
METHODS: The proposed model incorporates adaptive energy-saving strategies, localized fog nodes, and a hybrid cloud infrastructure. Simulation analyses were conducted to assess the model's effectiveness in reducing energy consumption and enhancing data processing efficiency.
RESULTS: Simulation results demonstrated significant energy savings, with a 2% reduction in energy consumption achieved through adaptive energy-saving strategies. The sample size for the simulation was 10-40, providing statistical robustness to the findings.
CONCLUSIONS: The proposed model successfully addresses energy and data processing challenges in telehealth IoT scenarios. By integrating fog computing for local processing and a hybrid cloud infrastructure, substantial energy savings are achieved. Ongoing research will focus on refining the energy conservation model and exploring additional functional enhancements for broader applicability in health care and industrial contexts.},
}
RevDate: 2024-06-14
Machine Learning-Based Time in Patterns for Blood Glucose Fluctuation Pattern Recognition in Type 1 Diabetes Management: Development and Validation Study.
JMIR AI, 2:e45450 pii:v2i1e45450.
BACKGROUND: Continuous glucose monitoring (CGM) for diabetes combines noninvasive glucose biosensors, continuous monitoring, cloud computing, and analytics to connect and simulate a hospital setting in a person's home. CGM systems inspired analytics methods to measure glycemic variability (GV), but existing GV analytics methods disregard glucose trends and patterns; hence, they fail to capture entire temporal patterns and do not provide granular insights about glucose fluctuations.
OBJECTIVE: This study aimed to propose a machine learning-based framework for blood glucose fluctuation pattern recognition, which enables a more comprehensive representation of GV profiles that could present detailed fluctuation information, be easily understood by clinicians, and provide insights about patient groups based on time in blood fluctuation patterns.
METHODS: Overall, 1.5 million measurements from 126 patients in the United Kingdom with type 1 diabetes mellitus (T1DM) were collected, and prevalent blood fluctuation patterns were extracted using dynamic time warping. The patterns were further validated in 225 patients in the United States with T1DM. Hierarchical clustering was then applied on time in patterns to form 4 clusters of patients. Patient groups were compared using statistical analysis.
RESULTS: In total, 6 patterns depicting distinctive glucose levels and trends were identified and validated, based on which 4 GV profiles of patients with T1DM were found. They were significantly different in terms of glycemic statuses such as diabetes duration (P=.04), glycated hemoglobin level (P<.001), and time in range (P<.001) and thus had different management needs.
CONCLUSIONS: The proposed method can analytically extract existing blood fluctuation patterns from CGM data. Thus, time in patterns can capture a rich view of patients' GV profile. Its conceptual resemblance with time in range, along with rich blood fluctuation details, makes it more scalable, accessible, and informative to clinicians.
Additional Links: PMID-38875568
@article {pmid38875568,
year = {2023},
author = {Chan, NB and Li, W and Aung, T and Bazuaye, E and Montero, RM},
title = {Machine Learning-Based Time in Patterns for Blood Glucose Fluctuation Pattern Recognition in Type 1 Diabetes Management: Development and Validation Study.},
journal = {JMIR AI},
volume = {2},
number = {},
pages = {e45450},
doi = {10.2196/45450},
pmid = {38875568},
issn = {2817-1705},
abstract = {BACKGROUND: Continuous glucose monitoring (CGM) for diabetes combines noninvasive glucose biosensors, continuous monitoring, cloud computing, and analytics to recreate hospital-style glucose surveillance in a person's home. CGM systems have inspired analytics methods for measuring glycemic variability (GV), but existing GV metrics disregard glucose trends and patterns; hence, they fail to capture whole temporal patterns and do not provide granular insights into glucose fluctuations.
OBJECTIVE: This study aimed to propose a machine learning-based framework for blood glucose fluctuation pattern recognition, which enables a more comprehensive representation of GV profiles that could present detailed fluctuation information, be easily understood by clinicians, and provide insights about patient groups based on time in blood fluctuation patterns.
METHODS: Overall, 1.5 million measurements from 126 patients in the United Kingdom with type 1 diabetes mellitus (T1DM) were collected, and prevalent blood fluctuation patterns were extracted using dynamic time warping. The patterns were further validated in 225 patients in the United States with T1DM. Hierarchical clustering was then applied on time in patterns to form 4 clusters of patients. Patient groups were compared using statistical analysis.
RESULTS: In total, 6 patterns depicting distinctive glucose levels and trends were identified and validated, based on which 4 GV profiles of patients with T1DM were found. They were significantly different in terms of glycemic statuses such as diabetes duration (P=.04), glycated hemoglobin level (P<.001), and time in range (P<.001) and thus had different management needs.
CONCLUSIONS: The proposed method can analytically extract existing blood fluctuation patterns from CGM data. Thus, time in patterns can capture a rich view of patients' GV profile. Its conceptual resemblance with time in range, along with rich blood fluctuation details, makes it more scalable, accessible, and informative to clinicians.},
}
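The pipeline described here, dynamic time warping (DTW) to match glucose windows against prototype patterns and then hierarchical clustering on the resulting time-in-pattern vectors, can be sketched compactly. The prototype shapes, window length, and cluster count below are illustrative stand-ins, not the study's parameters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def dtw(a, b):
    """Textbook dynamic-time-warping distance between two 1-D series."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            D[i, j] = abs(x - y) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def time_in_patterns(cgm, prototypes, window=12):
    """Fraction of CGM windows whose nearest prototype (by DTW) is each pattern."""
    counts = np.zeros(len(prototypes))
    chunks = [cgm[i:i + window] for i in range(0, len(cgm) - window + 1, window)]
    for c in chunks:
        counts[np.argmin([dtw(c, p) for p in prototypes])] += 1
    return counts / max(len(chunks), 1)

# Toy prototypes (rising, falling, flat) standing in for the 6 validated patterns.
protos = [np.linspace(5, 10, 12), np.linspace(10, 5, 12), np.full(12, 7.0)]
rng = np.random.default_rng(1)
profiles = np.array([time_in_patterns(rng.normal(7, 2, 288), protos)
                     for _ in range(20)])          # 20 simulated patients, 1 day each
groups = fcluster(linkage(profiles, method="ward"), t=4, criterion="maxclust")
print(groups)                                      # 4 GV profile clusters
```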
RevDate: 2024-06-13
Establishment and Verification of a Skin Cancer Diagnosis Model Based on Image Convolutional Neural Network Analysis and Artificial Intelligence Algorithms.
Alternative therapies in health and medicine pii:AT10026 [Epub ahead of print].
Skin cancer is a serious public health problem that causes many deaths each year. Early detection and aggressive, effective treatment of the primary lesion offer the best outcomes, improving patients' prognosis and reducing mortality. However, judging skin tumors by the naked eye alone is highly subjective, and diagnoses can vary greatly even among professionally trained physicians. Clinically, dermoscopy is a commonly used method for early diagnosis, but manual examination is time-consuming, laborious, and highly dependent on the dermatologist's clinical experience. With the rapid development of information technology, the volume of data is growing geometrically, and technologies such as cloud computing, distributed computing, data mining, and metaheuristics are emerging. In this paper, we design and build a computer-aided diagnosis system for dermoscopic images and apply metaheuristic algorithms to image enhancement and image segmentation to improve image quality, thereby speeding diagnosis and enabling earlier detection and treatment.
Additional Links: PMID-38870489
@article {pmid38870489,
year = {2024},
author = {Danning, Z and Jia, Q and Yinni, M and Linjia, L},
title = {Establishment and Verification of a Skin Cancer Diagnosis Model Based on Image Convolutional Neural Network Analysis and Artificial Intelligence Algorithms.},
journal = {Alternative therapies in health and medicine},
volume = {},
number = {},
pages = {},
pmid = {38870489},
issn = {1078-6791},
abstract = {Skin cancer is a serious public health problem that causes many deaths each year. Early detection and aggressive, effective treatment of the primary lesion offer the best outcomes, improving patients' prognosis and reducing mortality. However, judging skin tumors by the naked eye alone is highly subjective, and diagnoses can vary greatly even among professionally trained physicians. Clinically, dermoscopy is a commonly used method for early diagnosis, but manual examination is time-consuming, laborious, and highly dependent on the dermatologist's clinical experience. With the rapid development of information technology, the volume of data is growing geometrically, and technologies such as cloud computing, distributed computing, data mining, and metaheuristics are emerging. In this paper, we design and build a computer-aided diagnosis system for dermoscopic images and apply metaheuristic algorithms to image enhancement and image segmentation to improve image quality, thereby speeding diagnosis and enabling earlier detection and treatment.},
}
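The abstract names metaheuristic image enhancement without specifying an algorithm. As one plausible illustration (not necessarily the authors' method), the sketch below uses simulated annealing to choose a gamma-correction exponent that maximizes the entropy of a grayscale dermoscopy image; the search range and cooling schedule are arbitrary choices:

```python
import numpy as np

def entropy(img):
    """Shannon entropy of a grayscale image with intensities in [0, 1]."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

def anneal_gamma(img, iters=300, t0=0.5, seed=0):
    """Simulated annealing over the gamma exponent to maximize entropy."""
    rng = np.random.default_rng(seed)
    gamma, best = 1.0, entropy(img)
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9                     # linear cooling
        cand = float(np.clip(gamma + rng.normal(0, 0.2), 0.2, 3.0))
        score = entropy(img ** cand)
        if score > best or rng.random() < np.exp((score - best) / t):
            gamma, best = cand, score
    return gamma

lesion = np.random.default_rng(2).beta(2, 5, (128, 128))      # stand-in image
g = anneal_gamma(lesion)
print(f"gamma={g:.2f}, entropy {entropy(lesion):.2f} -> {entropy(lesion ** g):.2f}")
```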
RevDate: 2024-06-13
MS-PyCloud: A Cloud Computing-Based Pipeline for Proteomic and Glycoproteomic Data Analyses.
Analytical chemistry [Epub ahead of print].
Rapid development and wide adoption of mass spectrometry-based glycoproteomic technologies have empowered scientists to study proteins and protein glycosylation in complex samples on a large scale. This progress has also created unprecedented challenges for individual laboratories to store, manage, and analyze proteomic and glycoproteomic data, both in the cost for proprietary software and high-performance computing and in the long processing time that discourages on-the-fly changes of data processing settings required in explorative and discovery analysis. We developed an open-source, cloud computing-based pipeline, MS-PyCloud, with graphical user interface (GUI), for proteomic and glycoproteomic data analysis. The major components of this pipeline include data file integrity validation, MS/MS database search for spectral assignments to peptide sequences, false discovery rate estimation, protein inference, quantitation of global protein levels, and specific glycan-modified glycopeptides as well as other modification-specific peptides such as phosphorylation, acetylation, and ubiquitination. To ensure the transparency and reproducibility of data analysis, MS-PyCloud includes open-source software tools with comprehensive testing and versioning for spectrum assignments. Leveraging public cloud computing infrastructure via Amazon Web Services (AWS), MS-PyCloud scales seamlessly based on analysis demand to achieve fast and efficient performance. Application of the pipeline to the analysis of large-scale LC-MS/MS data sets demonstrated the effectiveness and high performance of MS-PyCloud. The software can be downloaded at https://github.com/huizhanglab-jhu/ms-pycloud.
Additional Links: PMID-38869158
@article {pmid38869158,
year = {2024},
author = {Hu, Y and Schnaubelt, M and Chen, L and Zhang, B and Hoang, T and Lih, TM and Zhang, Z and Zhang, H},
title = {MS-PyCloud: A Cloud Computing-Based Pipeline for Proteomic and Glycoproteomic Data Analyses.},
journal = {Analytical chemistry},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.analchem.3c01497},
pmid = {38869158},
issn = {1520-6882},
abstract = {Rapid development and wide adoption of mass spectrometry-based glycoproteomic technologies have empowered scientists to study proteins and protein glycosylation in complex samples on a large scale. This progress has also created unprecedented challenges for individual laboratories to store, manage, and analyze proteomic and glycoproteomic data, both in the cost for proprietary software and high-performance computing and in the long processing time that discourages on-the-fly changes of data processing settings required in explorative and discovery analysis. We developed an open-source, cloud computing-based pipeline, MS-PyCloud, with graphical user interface (GUI), for proteomic and glycoproteomic data analysis. The major components of this pipeline include data file integrity validation, MS/MS database search for spectral assignments to peptide sequences, false discovery rate estimation, protein inference, quantitation of global protein levels, and specific glycan-modified glycopeptides as well as other modification-specific peptides such as phosphorylation, acetylation, and ubiquitination. To ensure the transparency and reproducibility of data analysis, MS-PyCloud includes open-source software tools with comprehensive testing and versioning for spectrum assignments. Leveraging public cloud computing infrastructure via Amazon Web Services (AWS), MS-PyCloud scales seamlessly based on analysis demand to achieve fast and efficient performance. Application of the pipeline to the analysis of large-scale LC-MS/MS data sets demonstrated the effectiveness and high performance of MS-PyCloud. The software can be downloaded at https://github.com/huizhanglab-jhu/ms-pycloud.},
}
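Of the pipeline stages listed, false discovery rate estimation is the most self-contained to illustrate. MS-PyCloud's exact implementation is not described in the abstract, so the sketch below shows the standard target-decoy approach: rank peptide-spectrum matches (PSMs) by score, estimate FDR at each cutoff as decoys over targets, and convert to monotone q-values:

```python
import numpy as np

def qvalues(scores, is_decoy):
    """q-values from target-decoy competition: at each score cutoff, FDR is
    approximated by (# decoy PSMs) / (# target PSMs) above the cutoff."""
    scores = np.asarray(scores, float)
    decoy = np.asarray(is_decoy, bool)
    order = np.argsort(-scores)                 # best-scoring PSMs first
    n_decoy = np.cumsum(decoy[order])
    n_target = np.cumsum(~decoy[order])
    fdr = n_decoy / np.maximum(n_target, 1)
    q = np.minimum.accumulate(fdr[::-1])[::-1]  # enforce monotonicity
    out = np.empty(len(scores))
    out[order] = q
    return out

# Toy PSM scores: targets drawn slightly higher than decoys.
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(3, 1, 900), rng.normal(0, 1, 900)])
decoys = np.concatenate([np.zeros(900, bool), np.ones(900, bool)])
print(f"{(qvalues(scores, decoys) <= 0.01).sum()} PSMs accepted at 1% FDR")
```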
RevDate: 2024-06-13
CmpDate: 2024-06-13
The Flux Operator.
F1000Research, 13:203.
Converged computing is an emerging area of computing that brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC the orchestrator counterpart is Flux Framework, a fully hierarchical resource management and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside of Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator - an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.
Additional Links: PMID-38868668
@article {pmid38868668,
year = {2024},
author = {Sochat, V and Culquicondor, A and Ojea, A and Milroy, D},
title = {The Flux Operator.},
journal = {F1000Research},
volume = {13},
number = {},
pages = {203},
pmid = {38868668},
issn = {2046-1402},
mesh = {*Cloud Computing ; Workload ; Workflow ; },
abstract = {Converged computing is an emerging area of computing that brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC the orchestrator counterpart is Flux Framework, a fully hierarchical resource management and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside of Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator - an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.},
}
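The Flux Operator is driven by a Kubernetes custom resource. Going by the project's public documentation, that resource is a MiniCluster under the flux-framework.org API group; the API version, field names, namespace, and container image in the sketch below are recollections and placeholders that should be checked against the flux-operator release you run:

```python
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() when run inside a pod
api = client.CustomObjectsApi()

# Minimal MiniCluster spec; field names follow the flux-operator docs as
# recalled here and may differ in your release. The "flux-operator"
# namespace is assumed to exist.
minicluster = {
    "apiVersion": "flux-framework.org/v1alpha2",
    "kind": "MiniCluster",
    "metadata": {"name": "demo", "namespace": "flux-operator"},
    "spec": {
        "size": 4,   # number of Flux broker pods in the hierarchy
        "containers": [{
            "image": "ghcr.io/converged-computing/lammps:latest",  # hypothetical image
            "command": "lmp -in in.lammps",
        }],
    },
}

api.create_namespaced_custom_object(
    group="flux-framework.org", version="v1alpha2",
    namespace="flux-operator", plural="miniclusters", body=minicluster,
)
```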
RevDate: 2024-06-11
Crowd-sourced benchmarking of single-sample tumor subclonal reconstruction.
Nature biotechnology [Epub ahead of print].
Subclonal reconstruction algorithms use bulk DNA sequencing data to quantify parameters of tumor evolution, allowing an assessment of how cancers initiate, progress and respond to selective pressures. We launched the ICGC-TCGA (International Cancer Genome Consortium-The Cancer Genome Atlas) DREAM Somatic Mutation Calling Tumor Heterogeneity and Evolution Challenge to benchmark existing subclonal reconstruction algorithms. This 7-year community effort used cloud computing to benchmark 31 subclonal reconstruction algorithms on 51 simulated tumors. Algorithms were scored on seven independent tasks, leading to 12,061 total runs. Algorithm choice influenced performance substantially more than tumor features but purity-adjusted read depth, copy-number state and read mappability were associated with the performance of most algorithms on most tasks. No single algorithm was a top performer for all seven tasks and existing ensemble strategies were unable to outperform the best individual methods, highlighting a key research need. All containerized methods, evaluation code and datasets are available to support further assessment of the determinants of subclonal reconstruction accuracy and development of improved methods to understand tumor evolution.
Additional Links: PMID-38862616
@article {pmid38862616,
year = {2024},
author = {Salcedo, A and Tarabichi, M and Buchanan, A and Espiritu, SMG and Zhang, H and Zhu, K and Ou Yang, TH and Leshchiner, I and Anastassiou, D and Guan, Y and Jang, GH and Mootor, MFE and Haase, K and Deshwar, AG and Zou, W and Umar, I and Dentro, S and Wintersinger, JA and Chiotti, K and Demeulemeester, J and Jolly, C and Sycza, L and Ko, M and , and , and Wedge, DC and Morris, QD and Ellrott, K and Van Loo, P and Boutros, PC},
title = {Crowd-sourced benchmarking of single-sample tumor subclonal reconstruction.},
journal = {Nature biotechnology},
volume = {},
number = {},
pages = {},
pmid = {38862616},
issn = {1546-1696},
abstract = {Subclonal reconstruction algorithms use bulk DNA sequencing data to quantify parameters of tumor evolution, allowing an assessment of how cancers initiate, progress and respond to selective pressures. We launched the ICGC-TCGA (International Cancer Genome Consortium-The Cancer Genome Atlas) DREAM Somatic Mutation Calling Tumor Heterogeneity and Evolution Challenge to benchmark existing subclonal reconstruction algorithms. This 7-year community effort used cloud computing to benchmark 31 subclonal reconstruction algorithms on 51 simulated tumors. Algorithms were scored on seven independent tasks, leading to 12,061 total runs. Algorithm choice influenced performance substantially more than tumor features but purity-adjusted read depth, copy-number state and read mappability were associated with the performance of most algorithms on most tasks. No single algorithm was a top performer for all seven tasks and existing ensemble strategies were unable to outperform the best individual methods, highlighting a key research need. All containerized methods, evaluation code and datasets are available to support further assessment of the determinants of subclonal reconstruction accuracy and development of improved methods to understand tumor evolution.},
}
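The challenge's seven scoring tasks are not reproduced in the abstract, but one of them, judging how well an algorithm co-clusters mutations into subclones, can be illustrated with a standard cluster-agreement metric. Adjusted Rand index is used below purely as a stand-in; the challenge's actual metrics were more elaborate:

```python
from sklearn.metrics import adjusted_rand_score

# Simulated truth: the subclone assignment of each of 10 mutations,
# plus two competing reconstructions of that assignment.
truth  = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
algo_a = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]   # perfect reconstruction
algo_b = [0, 0, 1, 1, 1, 1, 1, 2, 2, 2]   # merges and shifts some clusters

for name, pred in [("A", algo_a), ("B", algo_b)]:
    print(name, round(adjusted_rand_score(truth, pred), 3))
```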