ESP: PubMed Auto Bibliography. Created: 30 Mar 2023 at 01:41
Cloud Computing
Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
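The cost concern above is easy to make concrete with a small back-of-envelope calculation. The sketch below uses entirely hypothetical rates (they are not any provider's published prices) to show how a short burst of many on-demand instances can dominate a monthly bill.

# Back-of-envelope "pay-as-you-go" estimate for a burst-compute experiment.
# All prices and sizes below are hypothetical placeholders, not quotes from
# any provider's actual price list.

hourly_rate_per_vm = 2.50       # USD per VM-hour (hypothetical on-demand rate)
vm_count = 64                   # burst to 64 VMs during peak analysis
burst_hours = 12                # duration of the burst
storage_tb = 5                  # data kept in object storage
storage_rate_per_tb_month = 23  # USD per TB-month (hypothetical)

compute_cost = hourly_rate_per_vm * vm_count * burst_hours
storage_cost = storage_tb * storage_rate_per_tb_month

print(f"Burst compute: ${compute_cost:,.2f}")    # 2.50 * 64 * 12 = $1,920.00
print(f"Monthly storage: ${storage_cost:,.2f}")  # 5 * 23 = $115.00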
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
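A bibliography like this one can be reproduced programmatically. The sketch below runs the same query through NCBI's public E-utilities esearch endpoint using the requests library; the endpoint and parameters are standard E-utilities usage, while the retmax value is an arbitrary choice for this example.

import requests

# NCBI E-utilities esearch: returns PubMed IDs matching a query.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
         'NOT pmcbook NOT ispreviousversion')

resp = requests.get(ESEARCH, params={
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 100,  # arbitrary page size for this example
})
resp.raise_for_status()
pmids = resp.json()["esearchresult"]["idlist"]
print(len(pmids), "PMIDs, e.g.", pmids[:5])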
Citations The Papers (from PubMed®)
RevDate: 2023-03-29
Public attitudes toward cloud computing and willingness to share personal health records (PHRs) and genome data for health care research in Japan.
Human genome variation, 10(1):11.
Japan's government aims to promote the linkage of medical records, including medical genomic testing data and personal health records (PHRs), via cloud computing (the cloud). However, linking national medical records and using them for health care research can be controversial. Additionally, many ethical issues with using cloud networks with health care and genome data have been noted. However, no research has yet explored the Japanese public's opinions about their PHRs, including genome data, being shared for health care research or the use of the cloud for storing and analyzing such data. Therefore, we conducted a survey in March 2021 to clarify the public's attitudes toward sharing their PHRs, including genome data and using the cloud for health care research. We analyzed data to experimentally create digital health basic literacy scores (BLSs). Our results showed that the Japanese public had concerns about data sharing that overlapped with structural cloud computing issues. The effect of incentives on changes in participants' willingness to share data (WTSD) was limited. Instead, there could be a correlation between WTSD and BLSs. Finally, we argue that it is vital to consider not only researchers but also research participants as value cocreators in health care research conducted through the cloud to overcome both parties' vulnerability.
Additional Links: PMID-36990988
@article {pmid36990988,
year = {2023},
author = {Kusunose, M and Muto, K},
title = {Public attitudes toward cloud computing and willingness to share personal health records (PHRs) and genome data for health care research in Japan.},
journal = {Human genome variation},
volume = {10},
number = {1},
pages = {11},
pmid = {36990988},
issn = {2054-345X},
abstract = {Japan's government aims to promote the linkage of medical records, including medical genomic testing data and personal health records (PHRs), via cloud computing (the cloud). However, linking national medical records and using them for health care research can be controversial. Additionally, many ethical issues with using cloud networks with health care and genome data have been noted. However, no research has yet explored the Japanese public's opinions about their PHRs, including genome data, being shared for health care research or the use of the cloud for storing and analyzing such data. Therefore, we conducted a survey in March 2021 to clarify the public's attitudes toward sharing their PHRs, including genome data and using the cloud for health care research. We analyzed data to experimentally create digital health basic literacy scores (BLSs). Our results showed that the Japanese public had concerns about data sharing that overlapped with structural cloud computing issues. The effect of incentives on changes in participants' willingness to share data (WTSD) was limited. Instead, there could be a correlation between WTSD and BLSs. Finally, we argue that it is vital to consider not only researchers but also research participants as value cocreators in health care research conducted through the cloud to overcome both parties' vulnerability.},
}
RevDate: 2023-03-28
SARS-CoV2 billion-compound docking.
Scientific data, 10(1):173.
This dataset contains ligand conformations and docking scores for 1.4 billion molecules docked against 6 structural targets from SARS-CoV2, representing 5 unique proteins: MPro, NSP15, PLPro, RDRP, and the Spike protein. Docking was carried out using the AutoDock-GPU platform on the Summit supercomputer and Google Cloud. The docking procedure employed the Solis Wets search method to generate 20 independent ligand binding poses per compound. Each compound geometry was scored using the AutoDock free energy estimate, and rescored using RFScore v3 and DUD-E machine-learned rescoring models. Input protein structures are included, suitable for use by AutoDock-GPU and other docking programs. As the result of an exceptionally large docking campaign, this dataset represents a valuable resource for discovering trends across small molecule and protein binding sites, training AI models, and comparing to inhibitor compounds targeting SARS-CoV-2. The work also gives an example of how to organize and process data from ultra-large docking screens.
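As a hedged illustration of how such a dataset might be consumed, the sketch below assumes a hypothetical CSV with one row per docked pose (columns compound_id, target, autodock_score, rfscore_v3); neither the file name nor the column names come from the published dataset.

import pandas as pd

# Hypothetical table: one row per docked pose (20 poses per compound).
poses = pd.read_csv("docking_poses.csv")  # columns are assumptions, see above

# Keep the best (most negative) AutoDock free-energy estimate per compound
# and target, then rank compounds for one target by that score.
best = (poses.groupby(["target", "compound_id"], as_index=False)
             .agg(best_autodock=("autodock_score", "min"),
                  best_rfscore=("rfscore_v3", "max")))

top_mpro = (best[best["target"] == "MPro"]
            .sort_values("best_autodock")
            .head(100))
print(top_mpro.head())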
Additional Links: PMID-36977690
@article {pmid36977690,
year = {2023},
author = {Rogers, DM and Agarwal, R and Vermaas, JV and Smith, MD and Rajeshwar, RT and Cooper, C and Sedova, A and Boehm, S and Baker, M and Glaser, J and Smith, JC},
title = {SARS-CoV2 billion-compound docking.},
journal = {Scientific data},
volume = {10},
number = {1},
pages = {173},
pmid = {36977690},
issn = {2052-4463},
abstract = {This dataset contains ligand conformations and docking scores for 1.4 billion molecules docked against 6 structural targets from SARS-CoV2, representing 5 unique proteins: MPro, NSP15, PLPro, RDRP, and the Spike protein. Docking was carried out using the AutoDock-GPU platform on the Summit supercomputer and Google Cloud. The docking procedure employed the Solis Wets search method to generate 20 independent ligand binding poses per compound. Each compound geometry was scored using the AutoDock free energy estimate, and rescored using RFScore v3 and DUD-E machine-learned rescoring models. Input protein structures are included, suitable for use by AutoDock-GPU and other docking programs. As the result of an exceptionally large docking campaign, this dataset represents a valuable resource for discovering trends across small molecule and protein binding sites, training AI models, and comparing to inhibitor compounds targeting SARS-CoV-2. The work also gives an example of how to organize and process data from ultra-large docking screens.},
}
RevDate: 2023-03-28
Headwater streams and inland wetlands: Status and advancements of geospatial datasets and maps across the United States.
Earth-science reviews, 235:1-24.
Headwater streams and inland wetlands provide essential functions that support healthy watersheds and downstream waters. However, scientists and aquatic resource managers lack a comprehensive synthesis of national and state stream and wetland geospatial datasets and emerging technologies that can further improve these data. We conducted a review of existing United States (US) federal and state stream and wetland geospatial datasets, focusing on their spatial extent, permanence classifications, and current limitations. We also examined recent peer-reviewed literature for emerging methods that can potentially improve the estimation, representation, and integration of stream and wetland datasets. We found that federal and state datasets rely heavily on the US Geological Survey's National Hydrography Dataset for stream extent and duration information. Only eleven states (22%) had additional stream extent information and seven states (14%) provided additional duration information. Likewise, federal and state wetland datasets primarily use the US Fish and Wildlife Service's National Wetlands Inventory (NWI) Geospatial Dataset, with only two states using non-NWI datasets. Our synthesis revealed that LiDAR-based technologies hold promise for advancing stream and wetland mapping at limited spatial extents. While machine learning techniques may help to scale-up these LiDAR-derived estimates, challenges related to preprocessing and data workflows remain. High-resolution commercial imagery, supported by public imagery and cloud computing, may further aid characterization of the spatial and temporal dynamics of streams and wetlands, especially using multi-platform and multi-temporal machine learning approaches. Models integrating both stream and wetland dynamics are limited, and field-based efforts must remain a key component in developing improved headwater stream and wetland datasets. Continued financial and partnership support of existing databases is also needed to enhance mapping and inform water resources research and policy decisions.
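The review above does not prescribe a specific cloud platform. As one hedged example of "public imagery and cloud computing", the sketch below uses the Google Earth Engine Python API to compute a simple NDWI water mask from Sentinel-2 imagery; the area of interest, date range, and threshold are illustrative placeholders, not values from the study.

import ee

ee.Initialize()  # requires an authenticated Earth Engine account

# Illustrative area of interest (placeholder coordinates).
aoi = ee.Geometry.Rectangle([-77.2, 38.8, -77.0, 39.0])

# Median Sentinel-2 surface-reflectance composite for one summer.
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterBounds(aoi)
        .filterDate("2022-06-01", "2022-09-01")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
        .median())

# NDWI (green vs. NIR); values above 0 loosely indicate open water.
ndwi = s2.normalizedDifference(["B3", "B8"]).rename("NDWI")
water_mask = ndwi.gt(0)

print(water_mask.getInfo()["bands"][0]["id"])  # quick sanity check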
Additional Links: PMID-36970305
@article {pmid36970305,
year = {2022},
author = {Christensen, JR and Golden, HE and Alexander, LC and Pickard, BR and Fritz, KM and Lane, CR and Weber, MH and Kwok, RM and Keefer, MN},
title = {Headwater streams and inland wetlands: Status and advancements of geospatial datasets and maps across the United States.},
journal = {Earth-science reviews},
volume = {235},
number = {},
pages = {1-24},
pmid = {36970305},
issn = {0012-8252},
support = {EPA999999/ImEPA/Intramural EPA/United States ; },
abstract = {Headwater streams and inland wetlands provide essential functions that support healthy watersheds and downstream waters. However, scientists and aquatic resource managers lack a comprehensive synthesis of national and state stream and wetland geospatial datasets and emerging technologies that can further improve these data. We conducted a review of existing United States (US) federal and state stream and wetland geospatial datasets, focusing on their spatial extent, permanence classifications, and current limitations. We also examined recent peer-reviewed literature for emerging methods that can potentially improve the estimation, representation, and integration of stream and wetland datasets. We found that federal and state datasets rely heavily on the US Geological Survey's National Hydrography Dataset for stream extent and duration information. Only eleven states (22%) had additional stream extent information and seven states (14%) provided additional duration information. Likewise, federal and state wetland datasets primarily use the US Fish and Wildlife Service's National Wetlands Inventory (NWI) Geospatial Dataset, with only two states using non-NWI datasets. Our synthesis revealed that LiDAR-based technologies hold promise for advancing stream and wetland mapping at limited spatial extents. While machine learning techniques may help to scale-up these LiDAR-derived estimates, challenges related to preprocessing and data workflows remain. High-resolution commercial imagery, supported by public imagery and cloud computing, may further aid characterization of the spatial and temporal dynamics of streams and wetlands, especially using multi-platform and multi-temporal machine learning approaches. Models integrating both stream and wetland dynamics are limited, and field-based efforts must remain a key component in developing improved headwater stream and wetland datasets. Continued financial and partnership support of existing databases is also needed to enhance mapping and inform water resources research and policy decisions.},
}
RevDate: 2023-03-27
Actual rating calculation of the zoom cloud meetings app using user reviews on google play store with sentiment annotation of BERT and hybridization of RNN and LSTM.
Expert systems with applications, 223:119919.
The recent COVID-19 outbreaks forced people to work from home, and educational institutes moved their academic activities online. The "Zoom Cloud Meetings" app provided much of the support for this shift. Meeting the functionality demands of this situation required developers to release new versions of the application frequently, which increased the chance of introducing bugs with each release. To fix those bugs, developers need user feedback on each new release, but ratings and reviews often contradict each other because users are careless when assigning ratings, which makes it difficult for developers to prioritize fixes from ratings alone. For this reason, we calculate an adjusted average rating based on the sentiment of user reviews to help software developers. We use BERT-based sentiment annotation to create unbiased datasets and a hybrid RNN-LSTM model to compute ratings from those unbiased reviews. Of four models trained on four different datasets, the two datasets containing a sufficiently large number of unbiased reviews showed promising performance. The results show that the reviews carry more positive sentiment than the actual ratings: the calculated average is 3.60 stars, whereas the actual average rating in the dataset is 3.08 stars. We used reviews of more than 250 apps from the Google Play store. The results could be even more promising with a large dataset containing only reviews of the Zoom Cloud Meetings app.
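The paper's own pipeline (BERT annotation feeding a hybrid RNN-LSTM) is not reproduced here. As a hedged sketch of the first step only, the snippet below labels review sentiment with an off-the-shelf Hugging Face transformers pipeline and converts the positive fraction into a rough star estimate; the review strings and the 1-5 star mapping are illustrative assumptions.

from transformers import pipeline

# Off-the-shelf BERT-family sentiment classifier (not the authors' model).
classifier = pipeline("sentiment-analysis")

reviews = [  # illustrative placeholder reviews
    "Screen sharing keeps freezing after the last update.",
    "Works great for my online classes, very stable.",
    "Audio drops constantly, please fix this bug.",
]

labels = classifier(reviews)  # [{'label': 'POSITIVE'/'NEGATIVE', 'score': ...}, ...]
positive_share = sum(r["label"] == "POSITIVE" for r in labels) / len(labels)

# Crude mapping of positive share onto a 1-5 star scale (assumption).
estimated_stars = 1 + 4 * positive_share
print(f"{positive_share:.0%} positive -> ~{estimated_stars:.2f} stars")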
Additional Links: PMID-36969371
@article {pmid36969371,
year = {2023},
author = {Islam, MJ and Datta, R and Iqbal, A},
title = {Actual rating calculation of the zoom cloud meetings app using user reviews on google play store with sentiment annotation of BERT and hybridization of RNN and LSTM.},
journal = {Expert systems with applications},
volume = {223},
number = {},
pages = {119919},
pmid = {36969371},
issn = {0957-4174},
abstract = {The recent outbreaks of the COVID-19 forced people to work from home. All the educational institutes run their academic activities online. The online meeting app the "Zoom Cloud Meeting" provides the most entire supports for this purpose. For providing proper functionalities require in this situation of online supports the developers need the frequent release of new versions of the application. Which makes the chances to have lots of bugs during the release of new versions. To fix those bugs introduce developer needs users' feedback based on the new release of the application. But most of the time the ratings and reviews are created contraposition between them because of the users' inadvertent in giving ratings and reviews. And it has been the main problem to fix those bugs using user ratings for software developers. For this reason, we conduct this average rating calculation process based on the sentiment of user reviews to help software developers. We use BERT-based sentiment annotation to create unbiased datasets and hybridize RNN with LSTM to find calculated ratings based on the unbiased reviews dataset. Out of four models trained on four different datasets, we found promising performance in two datasets containing a necessarily large amount of unbiased reviews. The results show that the reviews have more positive sentiments than the actual ratings. Our results found an average of 3.60 stars rating, where the actual average rating found in dataset is 3.08 stars. We use reviews of more than 250 apps from the Google Play app store. The results of our can provide more promising if we can use a large dataset only containing the reviews of the Zoom Cloud Meeting app.},
}
RevDate: 2023-03-26
ElasticBLAST: accelerating sequence search via cloud computing.
BMC bioinformatics, 24(1):117.
BACKGROUND: Biomedical researchers use alignments produced by BLAST (Basic Local Alignment Search Tool) to categorize their query sequences. Producing such alignments is an essential bioinformatics task that is well suited for the cloud. The cloud can perform many calculations quickly as well as store and access large volumes of data. Bioinformaticians can also use it to collaborate with other researchers, sharing their results, datasets and even their pipelines on a common platform.
RESULTS: We present ElasticBLAST, a cloud native application to perform BLAST alignments in the cloud. ElasticBLAST can handle anywhere from a few to many thousands of queries and run the searches on thousands of virtual CPUs (if desired), deleting resources when it is done. It uses cloud native tools for orchestration and can request discounted instances, lowering cloud costs for users. It is supported on Amazon Web Services and Google Cloud Platform. It can search BLAST databases that are user provided or from the National Center for Biotechnology Information.
CONCLUSION: We show that ElasticBLAST is a useful application that can efficiently perform BLAST searches for the user in the cloud, and we demonstrate this with two examples. At the same time, it hides much of the complexity of working in the cloud, lowering the threshold to move work to the cloud.
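ElasticBLAST itself is driven by its own command-line tool and configuration, which are documented by NCBI and not reproduced here. For comparison, the hedged sketch below shows what a single programmatic BLAST search looks like with Biopython's qblast helper against NCBI's public service; ElasticBLAST's contribution is scaling this kind of search to thousands of queries on cloud resources.

from Bio.Blast import NCBIWWW, NCBIXML

# One nucleotide query against the public nt database (slow and rate-limited;
# this is the kind of search ElasticBLAST scales out across cloud nodes).
query_seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # illustrative sequence

handle = NCBIWWW.qblast("blastn", "nt", query_seq)
record = NCBIXML.read(handle)

for alignment in record.alignments[:3]:
    best_hsp = alignment.hsps[0]
    print(alignment.title[:60], "E =", best_hsp.expect)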
Additional Links: PMID-36967390
@article {pmid36967390,
year = {2023},
author = {Camacho, C and Boratyn, GM and Joukov, V and Vera Alvarez, R and Madden, TL},
title = {ElasticBLAST: accelerating sequence search via cloud computing.},
journal = {BMC bioinformatics},
volume = {24},
number = {1},
pages = {117},
pmid = {36967390},
issn = {1471-2105},
abstract = {BACKGROUND: Biomedical researchers use alignments produced by BLAST (Basic Local Alignment Search Tool) to categorize their query sequences. Producing such alignments is an essential bioinformatics task that is well suited for the cloud. The cloud can perform many calculations quickly as well as store and access large volumes of data. Bioinformaticians can also use it to collaborate with other researchers, sharing their results, datasets and even their pipelines on a common platform.
RESULTS: We present ElasticBLAST, a cloud native application to perform BLAST alignments in the cloud. ElasticBLAST can handle anywhere from a few to many thousands of queries and run the searches on thousands of virtual CPUs (if desired), deleting resources when it is done. It uses cloud native tools for orchestration and can request discounted instances, lowering cloud costs for users. It is supported on Amazon Web Services and Google Cloud Platform. It can search BLAST databases that are user provided or from the National Center for Biotechnology Information.
CONCLUSION: We show that ElasticBLAST is a useful application that can efficiently perform BLAST searches for the user in the cloud, demonstrating that with two examples. At the same time, it hides much of the complexity of working in the cloud, lowering the threshold to move work to the cloud.},
}
RevDate: 2023-03-24
Using the Pan American Health Organization digital conversational agent to educate the public on alcohol use and health: a preliminary analysis.
JMIR formative research [Epub ahead of print].
BACKGROUND: There is widespread misinformation about the effects of alcohol consumption on health, and it was amplified during the COVID-19 pandemic through social media and internet channels. Chatbots and conversational agents became an important part of the WHO response during the pandemic, quickly disseminating evidence-based information to the public about COVID-19 and tobacco. PAHO seized the opportunity to develop a conversational agent that talks about alcohol-related topics and thereby complements traditional forms of health education promoted in the past.
OBJECTIVE: To develop and deploy a digital conversational agent that interacts with an unlimited number of users, 24 hours a day, anonymously and at no cost, about alcohol-related topics, including ways to reduce risks from drinking, in several languages and accessible through various devices.
METHODS: The content was developed from the latest scientific evidence on the health impacts of alcohol, social norms about drinking, and data from the World Health Organization and PAHO. The agent itself was developed through a non-exclusive license agreement with a private company and used Google Digital Flow ES as the natural language processing software and AWS for cloud services. Another company was contracted to program all the conversations, following the technical advice of PAHO staff.
RESULTS: The conversational agent, named Pahola, was deployed on November 19, 2021, through the PAHO website after a highly publicized launch event. No identifiable data were used and all interactions were anonymous, so this was not considered human subjects research. Pahola speaks English, Spanish, and Portuguese and interacts anonymously with a potentially unlimited number of users through various digital devices. Users were required to accept terms and conditions to enable access to their camera and microphone to interact with Pahola. Pahola attracted good media attention and reached 1.6 million people, leading to 236,000 clicks on its landing page, mostly from mobile devices. Only 1,532 users had a conversation after clicking to talk to Pahola, and the average conversation lasted five minutes. Major dropouts were observed at different steps of the conversation flow, and some user questions were not anticipated during programming and could not be answered.
CONCLUSIONS: Our findings showed several limitations of using a conversational agent for alcohol education aimed at the general public. Improvements are needed to expand the content and make it more meaningful and engaging. The potential of chatbots to educate the public on alcohol-related topics seems enormous but requires a long-term investment of resources and research to be useful and to reach many more people.
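The methods above name "Google Digital Flow ES", presumably Google's Dialogflow ES. As a hedged sketch unrelated to PAHO's actual deployment, the snippet below shows the standard Dialogflow ES detect-intent call from the google-cloud-dialogflow Python client; the project and session identifiers are placeholders.

from google.cloud import dialogflow  # pip install google-cloud-dialogflow

def ask_agent(project_id: str, session_id: str, text: str, language: str = "en") -> str:
    """Send one user utterance to a Dialogflow ES agent and return its reply."""
    sessions = dialogflow.SessionsClient()
    session = sessions.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language)
    )
    response = sessions.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# Placeholder identifiers; a real call needs an existing agent and credentials.
print(ask_agent("my-gcp-project", "demo-session-001", "Is one drink a day safe?"))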
Additional Links: PMID-36961920
@article {pmid36961920,
year = {2023},
author = {Goldnadel Monteiro, M and Pantani, D and Pinsky, I and Hernandes Rocha, TA},
title = {Using the Pan American Health Organization digital conversational agent to educate the public on alcohol use and health: a preliminary analysis.},
journal = {JMIR formative research},
volume = {},
number = {},
pages = {},
doi = {10.2196/43165},
pmid = {36961920},
issn = {2561-326X},
abstract = {BACKGROUND: Background: There is widespread misinformation about the effects of alcohol consumption on health, which were amplified during the COVID-19 pandemic through social media and internet channels. Chatbots and conversational agents became an important piece of the WHO response during the COVID-19 pandemic to quickly disseminate evidence-based information to the public, related to COVID-19 and tobacco. PAHO seized the opportunity to develop a conversational agent to talk about alcohol related topics and therefore complement traditional forms of health education which have been promoted in the past.
OBJECTIVE: Objective: To develop and deploy a digital conversational agent to interact to an unlimited number of users, 24 hours a day, anonymously, about alcohol topics, in several languages, including on ways to reduce risks from drinking, at no cost and accessible through various devices.
METHODS: Methods: The content development was based on the latest scientific evidence on alcohol impacts on health, social norms about drinking and data from the World Health Organization and PAHO. The agent itself was developed through a non-exclusive license agreement with a private company and included Google Digital Flow ES as the natural language processing software, and AWS for cloud services. Another company was contracted to program all the conversations, following the technical advice of PAHO staff.
RESULTS: Results: The conversational agent was named Pahola and it was deployed on November 19, 2021, through the PAHO website after a launch event with high publicity. No identifiable data were used and all interactions were anonymous, and therefore this was considered not research with human subjects. Pahola speaks in English, Spanish and Portuguese, interacts anonymously to a potential infinite number of users through various digital devices. Users were required to accept terms and conditions to enable access to their camera and microphone to interact with Pahola. Pahola attracted good attention from the media, reached 1.6 million people, leading to 236,000 clicks on its landing page, mostly through mobile devices. Only 1,532 users had a conversation after clicking to talk to Pahola. The average time users spent talking to Pahola was five minutes. Major dropouts were observed in different steps of the conversation flow. Some questions asked by users were not anticipated during programming and could not be answered.
CONCLUSIONS: Our findings showed several limitations to using a conversational agent for alcohol education to the general public. Improvements are needed to expand the content to make it more meaningful and engaging to the public. The potential of chatbots to educate the public on alcohol related topics seems enormous but requires a long-term investment of resources and research to be useful and reach many more people.},
}
RevDate: 2023-03-23
A sensor-enabled cloud-based computing platform for computational brain biomechanics.
Computer methods and programs in biomedicine, 233:107470 pii:S0169-2607(23)00136-0 [Epub ahead of print].
BACKGROUND AND OBJECTIVES: Driven by the risk of repetitive head trauma, sensors have been integrated into mouthguards to measure head impacts in contact sports and military activities. These wearable devices, referred to as "instrumented" or "smart" mouthguards are being actively developed by various research groups and organizations. These instrumented mouthguards provide an opportunity to further study and understand the brain biomechanics due to impact. In this study, we present a brain modeling service that can use information from these sensors to predict brain injury metrics in an automated fashion.
METHODS: We have built a brain modeling platform using several of Amazon's Web Services (AWS) to enable cloud computing and scalability. We use a custom-built cloud-based finite element modeling code to compute the physics-based nonlinear response of the intracranial brain tissue and provide a frontend web application and an application programming interface for groups working on head impact sensor technology to include simulated injury predictions into their research pipeline.
RESULTS: The platform results have been validated against experimental data available in literature for brain-skull relative displacements, brain strains and intracranial pressure. The parallel processing capability of the platform has also been tested and verified. We also studied the accuracy of the custom head surfaces generated by Avatar 3D.
CONCLUSION: We present a validated cloud-based computational brain modeling platform that uses sensor data as input for numerical brain models and outputs a quantitative description of brain tissue strains and injury metrics. The platform is expected to generate transparent, reproducible, and traceable brain computing results.
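The platform exposes a web frontend and an application programming interface, but its actual endpoints are not published in this abstract. The sketch below is therefore purely hypothetical: it shows the general shape of posting mouthguard kinematics to a cloud brain-model service and reading back an injury metric. The URL, payload fields, and response keys are all invented for illustration.

import requests

# Hypothetical endpoint and schema; the real service's API will differ.
API_URL = "https://example-brain-sim.org/api/v1/simulate"

impact = {
    "sensor_id": "mouthguard-042",
    "linear_acceleration_g": [38.2, 5.1, -12.7],   # peak components, in g
    "angular_velocity_rad_s": [21.0, 4.3, 9.8],
    "duration_ms": 12.5,
}

resp = requests.post(API_URL, json=impact, timeout=60)
resp.raise_for_status()
result = resp.json()
print("Peak max principal strain:", result.get("max_principal_strain"))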
Additional Links: PMID-36958108
@article {pmid36958108,
year = {2023},
author = {Menghani, RR and Das, A and Kraft, RH},
title = {A sensor-enabled cloud-based computing platform for computational brain biomechanics.},
journal = {Computer methods and programs in biomedicine},
volume = {233},
number = {},
pages = {107470},
doi = {10.1016/j.cmpb.2023.107470},
pmid = {36958108},
issn = {1872-7565},
abstract = {BACKGROUND AND OBJECTIVES: Driven by the risk of repetitive head trauma, sensors have been integrated into mouthguards to measure head impacts in contact sports and military activities. These wearable devices, referred to as "instrumented" or "smart" mouthguards are being actively developed by various research groups and organizations. These instrumented mouthguards provide an opportunity to further study and understand the brain biomechanics due to impact. In this study, we present a brain modeling service that can use information from these sensors to predict brain injury metrics in an automated fashion.
METHODS: We have built a brain modeling platform using several of Amazon's Web Services (AWS) to enable cloud computing and scalability. We use a custom-built cloud-based finite element modeling code to compute the physics-based nonlinear response of the intracranial brain tissue and provide a frontend web application and an application programming interface for groups working on head impact sensor technology to include simulated injury predictions into their research pipeline.
RESULTS: The platform results have been validated against experimental data available in literature for brain-skull relative displacements, brain strains and intracranial pressure. The parallel processing capability of the platform has also been tested and verified. We also studied the accuracy of the custom head surfaces generated by Avatar 3D.
CONCLUSION: We present a validated cloud-based computational brain modeling platform that uses sensor data as input for numerical brain models and outputs a quantitative description of brain tissue strains and injury metrics. The platform is expected to generate transparent, reproducible, and traceable brain computing results.},
}
RevDate: 2023-03-23
PhytoOracle: Scalable, modular phenomics data processing pipelines.
Frontiers in plant science, 14:1112973.
As phenomics data volume and dimensionality increase due to advancements in sensor technology, there is an urgent need to develop and implement scalable data processing pipelines. Current phenomics data processing pipelines lack modularity, extensibility, and processing distribution across sensor modalities and phenotyping platforms. To address these challenges, we developed PhytoOracle (PO), a suite of modular, scalable pipelines for processing large volumes of field phenomics RGB, thermal, PSII chlorophyll fluorescence 2D images, and 3D point clouds. PhytoOracle aims to (i) improve data processing efficiency; (ii) provide an extensible, reproducible computing framework; and (iii) enable data fusion of multi-modal phenomics data. PhytoOracle integrates open-source distributed computing frameworks for parallel processing on high-performance computing, cloud, and local computing environments. Each pipeline component is available as a standalone container, providing transferability, extensibility, and reproducibility. The PO pipeline extracts and associates individual plant traits across sensor modalities and collection time points, representing a unique multi-system approach to addressing the genotype-phenotype gap. To date, PO supports lettuce and sorghum phenotypic trait extraction, with a goal of widening the range of supported species in the future. At the maximum number of cores tested in this study (1,024 cores), PO processing times were: 235 minutes for 9,270 RGB images (140.7 GB), 235 minutes for 9,270 thermal images (5.4 GB), and 13 minutes for 39,678 PSII images (86.2 GB). These processing times represent end-to-end processing, from raw data to fully processed numerical phenotypic trait data. Repeatability values of 0.39-0.95 (bounding area), 0.81-0.95 (axis-aligned bounding volume), 0.79-0.94 (oriented bounding volume), 0.83-0.95 (plant height), and 0.81-0.95 (number of points) were observed in Field Scanalyzer data. We also show the ability of PO to process drone data with a repeatability of 0.55-0.95 (bounding area).
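The throughput figures above can be turned into rough per-minute rates with a little arithmetic; the sketch below simply recomputes them from the numbers reported in the abstract (1,024 cores; 235 minutes for 9,270 RGB images; 13 minutes for 39,678 PSII images).

# Rough throughput from the figures reported above (1,024 cores).
datasets = {
    "RGB":     {"images": 9_270,  "minutes": 235, "gigabytes": 140.7},
    "thermal": {"images": 9_270,  "minutes": 235, "gigabytes": 5.4},
    "PSII":    {"images": 39_678, "minutes": 13,  "gigabytes": 86.2},
}

for name, d in datasets.items():
    per_min = d["images"] / d["minutes"]
    gb_per_min = d["gigabytes"] / d["minutes"]
    print(f"{name}: {per_min:,.0f} images/min, {gb_per_min:.2f} GB/min")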
Additional Links: PMID-36950362
@article {pmid36950362,
year = {2023},
author = {Gonzalez, EM and Zarei, A and Hendler, N and Simmons, T and Zarei, A and Demieville, J and Strand, R and Rozzi, B and Calleja, S and Ellingson, H and Cosi, M and Davey, S and Lavelle, DO and Truco, MJ and Swetnam, TL and Merchant, N and Michelmore, RW and Lyons, E and Pauli, D},
title = {PhytoOracle: Scalable, modular phenomics data processing pipelines.},
journal = {Frontiers in plant science},
volume = {14},
number = {},
pages = {1112973},
pmid = {36950362},
issn = {1664-462X},
abstract = {As phenomics data volume and dimensionality increase due to advancements in sensor technology, there is an urgent need to develop and implement scalable data processing pipelines. Current phenomics data processing pipelines lack modularity, extensibility, and processing distribution across sensor modalities and phenotyping platforms. To address these challenges, we developed PhytoOracle (PO), a suite of modular, scalable pipelines for processing large volumes of field phenomics RGB, thermal, PSII chlorophyll fluorescence 2D images, and 3D point clouds. PhytoOracle aims to (i) improve data processing efficiency; (ii) provide an extensible, reproducible computing framework; and (iii) enable data fusion of multi-modal phenomics data. PhytoOracle integrates open-source distributed computing frameworks for parallel processing on high-performance computing, cloud, and local computing environments. Each pipeline component is available as a standalone container, providing transferability, extensibility, and reproducibility. The PO pipeline extracts and associates individual plant traits across sensor modalities and collection time points, representing a unique multi-system approach to addressing the genotype-phenotype gap. To date, PO supports lettuce and sorghum phenotypic trait extraction, with a goal of widening the range of supported species in the future. At the maximum number of cores tested in this study (1,024 cores), PO processing times were: 235 minutes for 9,270 RGB images (140.7 GB), 235 minutes for 9,270 thermal images (5.4 GB), and 13 minutes for 39,678 PSII images (86.2 GB). These processing times represent end-to-end processing, from raw data to fully processed numerical phenotypic trait data. Repeatability values of 0.39-0.95 (bounding area), 0.81-0.95 (axis-aligned bounding volume), 0.79-0.94 (oriented bounding volume), 0.83-0.95 (plant height), and 0.81-0.95 (number of points) were observed in Field Scanalyzer data. We also show the ability of PO to process drone data with a repeatability of 0.55-0.95 (bounding area).},
}
RevDate: 2023-03-23
VAI-B: a multicenter platform for the external validation of artificial intelligence algorithms in breast imaging.
Journal of medical imaging (Bellingham, Wash.), 10(6):061404.
PURPOSE: Multiple vendors are currently offering artificial intelligence (AI) computer-aided systems for triage detection, diagnosis, and risk prediction of breast cancer based on screening mammography. There is an imminent need to establish validation platforms that enable fair and transparent testing of these systems against external data.
APPROACH: We developed validation of artificial intelligence for breast imaging (VAI-B), a platform for independent validation of AI algorithms in breast imaging. The platform is a hybrid solution, with one part implemented in the cloud and another in an on-premises environment at Karolinska Institute. Cloud services provide the flexibility of scaling the computing power during inference time, while secure on-premises clinical data storage preserves their privacy. A MongoDB database and a python package were developed to store and manage the data on-premises. VAI-B requires four data components: radiological images, AI inferences, radiologist assessments, and cancer outcomes.
RESULTS: To pilot test VAI-B, we defined a case-control population based on 8080 patients diagnosed with breast cancer and 36,339 healthy women based on the Swedish national quality registry for breast cancer. Images and radiological assessments from more than 100,000 mammography examinations were extracted from hospitals in three regions of Sweden. The images were processed by AI systems from three vendors in a virtual private cloud to produce abnormality scores related to signs of cancer in the images. A total of 105,706 examinations have been processed and stored in the database.
CONCLUSIONS: We have created a platform that will allow downstream evaluation of AI systems for breast cancer detection, which enables faster development cycles for participating vendors and safer AI adoption for participating hospitals. The platform was designed to be scalable and ready to be expanded should a new vendor want to evaluate their system or should a new hospital wish to obtain an evaluation of different AI systems on their images.
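VAI-B stores its four data components in an on-premises MongoDB database. The hedged sketch below shows what inserting one examination-level document could look like with pymongo; the database name, collection name, and document fields are assumptions for illustration, not the platform's actual schema.

from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # on-premises instance
db = client["vai_b_demo"]                          # illustrative database name

# One examination tying together the four data components described above.
exam = {
    "exam_id": "exam-0001",
    "images": ["exam-0001/lcc.dcm", "exam-0001/rcc.dcm"],
    "ai_inferences": [{"vendor": "vendor_a", "abnormality_score": 0.73}],
    "radiologist_assessment": {"birads": 2, "recalled": False},
    "cancer_outcome": {"diagnosed": False, "followup_months": 24},
    "ingested_at": datetime.now(timezone.utc),
}

db.examinations.insert_one(exam)
print(db.examinations.count_documents({}))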
Additional Links: PMID-36949901
@article {pmid36949901,
year = {2023},
author = {Cossío, F and Schurz, H and Engström, M and Barck-Holst, C and Tsirikoglou, A and Lundström, C and Gustafsson, H and Smith, K and Zackrisson, S and Strand, F},
title = {VAI-B: a multicenter platform for the external validation of artificial intelligence algorithms in breast imaging.},
journal = {Journal of medical imaging (Bellingham, Wash.)},
volume = {10},
number = {6},
pages = {061404},
pmid = {36949901},
issn = {2329-4302},
abstract = {PURPOSE: Multiple vendors are currently offering artificial intelligence (AI) computer-aided systems for triage detection, diagnosis, and risk prediction of breast cancer based on screening mammography. There is an imminent need to establish validation platforms that enable fair and transparent testing of these systems against external data.
APPROACH: We developed validation of artificial intelligence for breast imaging (VAI-B), a platform for independent validation of AI algorithms in breast imaging. The platform is a hybrid solution, with one part implemented in the cloud and another in an on-premises environment at Karolinska Institute. Cloud services provide the flexibility of scaling the computing power during inference time, while secure on-premises clinical data storage preserves their privacy. A MongoDB database and a python package were developed to store and manage the data on-premises. VAI-B requires four data components: radiological images, AI inferences, radiologist assessments, and cancer outcomes.
RESULTS: To pilot test VAI-B, we defined a case-control population based on 8080 patients diagnosed with breast cancer and 36,339 healthy women based on the Swedish national quality registry for breast cancer. Images and radiological assessments from more than 100,000 mammography examinations were extracted from hospitals in three regions of Sweden. The images were processed by AI systems from three vendors in a virtual private cloud to produce abnormality scores related to signs of cancer in the images. A total of 105,706 examinations have been processed and stored in the database.
CONCLUSIONS: We have created a platform that will allow downstream evaluation of AI systems for breast cancer detection, which enables faster development cycles for participating vendors and safer AI adoption for participating hospitals. The platform was designed to be scalable and ready to be expanded should a new vendor want to evaluate their system or should a new hospital wish to obtain an evaluation of different AI systems on their images.},
}
RevDate: 2023-03-23
QuantImage v2: a comprehensive and integrated physician-centered cloud platform for radiomics and machine learning research.
European radiology experimental, 7(1):16.
BACKGROUND: Radiomics, the field of image-based computational medical biomarker research, has experienced rapid growth over the past decade due to its potential to revolutionize the development of personalized decision support models. However, despite its research momentum and important advances toward methodological standardization, the translation of radiomics prediction models into clinical practice only progresses slowly. The lack of physicians leading the development of radiomics models and insufficient integration of radiomics tools in the clinical workflow contributes to this slow uptake.
METHODS: We propose a physician-centered vision of radiomics research and derive minimal functional requirements for radiomics research software to support this vision. Free-to-access radiomics tools and frameworks were reviewed to identify best practices and reveal the shortcomings of existing software solutions to optimally support physician-driven radiomics research in a clinical environment.
RESULTS: Support for user-friendly development and evaluation of radiomics prediction models via machine learning was found to be missing in most tools. QuantImage v2 (QI2) was designed and implemented to address these shortcomings. QI2 relies on well-established existing tools and open-source libraries to realize and concretely demonstrate the potential of a one-stop tool for physician-driven radiomics research. It provides web-based access to cohort management, feature extraction, and visualization and supports "no-code" development and evaluation of machine learning models against patient-specific outcome data.
CONCLUSIONS: QI2 fills a gap in the radiomics software landscape by enabling "no-code" radiomics research, including model validation, in a clinical environment. Further information about QI2, a public instance of the system, and its source code is available at https://medgift.github.io/quantimage-v2-info/ . Key points As domain experts, physicians play a key role in the development of radiomics models. Existing software solutions do not support physician-driven research optimally. QuantImage v2 implements a physician-centered vision for radiomics research. QuantImage v2 is a web-based, "no-code" radiomics research platform.
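QI2 builds on well-established open-source components, but which feature-extraction library it wraps is not stated in this abstract. As a point of reference only, the sketch below shows a standalone radiomics feature extraction with the widely used pyradiomics package; the file paths are placeholders.

from radiomics import featureextractor  # pip install pyradiomics

# Default extractor settings; both paths are placeholders.
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute("patient001_ct.nii.gz", "patient001_gtv_mask.nii.gz")

# Keep only the numeric feature values (drop diagnostic metadata).
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics_")}
print(len(numeric), "features, e.g. original_firstorder_Mean =",
      numeric.get("original_firstorder_Mean"))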
Additional Links: PMID-36947346
@article {pmid36947346,
year = {2023},
author = {Abler, D and Schaer, R and Oreiller, V and Verma, H and Reichenbach, J and Aidonopoulos, O and Evéquoz, F and Jreige, M and Prior, JO and Depeursinge, A},
title = {QuantImage v2: a comprehensive and integrated physician-centered cloud platform for radiomics and machine learning research.},
journal = {European radiology experimental},
volume = {7},
number = {1},
pages = {16},
pmid = {36947346},
issn = {2509-9280},
abstract = {BACKGROUND: Radiomics, the field of image-based computational medical biomarker research, has experienced rapid growth over the past decade due to its potential to revolutionize the development of personalized decision support models. However, despite its research momentum and important advances toward methodological standardization, the translation of radiomics prediction models into clinical practice only progresses slowly. The lack of physicians leading the development of radiomics models and insufficient integration of radiomics tools in the clinical workflow contributes to this slow uptake.
METHODS: We propose a physician-centered vision of radiomics research and derive minimal functional requirements for radiomics research software to support this vision. Free-to-access radiomics tools and frameworks were reviewed to identify best practices and reveal the shortcomings of existing software solutions to optimally support physician-driven radiomics research in a clinical environment.
RESULTS: Support for user-friendly development and evaluation of radiomics prediction models via machine learning was found to be missing in most tools. QuantImage v2 (QI2) was designed and implemented to address these shortcomings. QI2 relies on well-established existing tools and open-source libraries to realize and concretely demonstrate the potential of a one-stop tool for physician-driven radiomics research. It provides web-based access to cohort management, feature extraction, and visualization and supports "no-code" development and evaluation of machine learning models against patient-specific outcome data.
CONCLUSIONS: QI2 fills a gap in the radiomics software landscape by enabling "no-code" radiomics research, including model validation, in a clinical environment. Further information about QI2, a public instance of the system, and its source code is available at https://medgift.github.io/quantimage-v2-info/ . Key points As domain experts, physicians play a key role in the development of radiomics models. Existing software solutions do not support physician-driven research optimally. QuantImage v2 implements a physician-centered vision for radiomics research. QuantImage v2 is a web-based, "no-code" radiomics research platform.},
}
RevDate: 2023-03-22
GLUT1-DS Italian registry: past, present, and future: a useful tool for rare disorders.
Orphanet journal of rare diseases, 18(1):63.
BACKGROUND: GLUT1 deficiency syndrome is a rare, genetically determined neurological disorder for which Ketogenic Dietary Treatment represents the gold standard and lifelong treatment. Patient registries are powerful tools providing insights and real-world data on rare diseases.
OBJECTIVE: To describe the implementation of a national web-based registry for GLUT1-DS.
METHODS: This is a retrospective and prospective, multicenter, observational registry developed in collaboration with the Italian GLUT1-DS association and based on an innovative, flexible and configurable cloud computing technology platform, structured according to the most rigorous requirements for the management of patient's sensitive data. The Glut1 Registry collects baseline and follow-up data on the patient's demographics, history, symptoms, genotype, clinical, and instrumental evaluations and therapies.
RESULTS: Five Centers in Italy joined the registry, and two more Centers are currently joining. In the first two years of running, data from 67 patients (40 females and 27 males) have been collected. Age at symptom onset was within the first year of life in most (40, 60%) patients. The diagnosis was formulated in infancy in almost half of the cases (34, 51%). Symptoms at onset were mainly paroxysmal (mostly epileptic seizure and paroxysmal ocular movement disorder) or mixed paroxysmal and fixed symptoms (mostly psychomotor delay). Most patients (53, 79%) are currently under Ketogenic dietary treatments.
CONCLUSIONS: We describe the principles behind the design, development, and deployment of the web-based nationwide GLUT1-DS registry. It represents a stepping stone towards a more comprehensive understanding of the disease from onset to adulthood. It also represents a virtuous model from a technical, legal, and organizational point of view, thus representing a possible paradigmatic example for other rare disease registry implementation.
Additional Links: PMID-36944981
@article {pmid36944981,
year = {2023},
author = {Varesio, C and De Giorgis, V and Veggiotti, P and Nardocci, N and Granata, T and Ragona, F and Pasca, L and Mensi, MM and Borgatti, R and Olivotto, S and Previtali, R and Riva, A and Mancardi, MM and Striano, P and Cavallin, M and Guerrini, R and Operto, FF and Pizzolato, A and Di Maulo, R and Martino, F and Lodi, A and Marini, C},
title = {GLUT1-DS Italian registry: past, present, and future: a useful tool for rare disorders.},
journal = {Orphanet journal of rare diseases},
volume = {18},
number = {1},
pages = {63},
pmid = {36944981},
issn = {1750-1172},
abstract = {BACKGROUND: GLUT1 deficiency syndrome is a rare, genetically determined neurological disorder for which Ketogenic Dietary Treatment represents the gold standard and lifelong treatment. Patient registries are powerful tools providing insights and real-world data on rare diseases.
OBJECTIVE: To describe the implementation of a national web-based registry for GLUT1-DS.
METHODS: This is a retrospective and prospective, multicenter, observational registry developed in collaboration with the Italian GLUT1-DS association and based on an innovative, flexible and configurable cloud computing technology platform, structured according to the most rigorous requirements for the management of patient's sensitive data. The Glut1 Registry collects baseline and follow-up data on the patient's demographics, history, symptoms, genotype, clinical, and instrumental evaluations and therapies.
RESULTS: Five Centers in Italy joined the registry, and two more Centers are currently joining. In the first two years of running, data from 67 patients (40 females and 27 males) have been collected. Age at symptom onset was within the first year of life in most (40, 60%) patients. The diagnosis was formulated in infancy in almost half of the cases (34, 51%). Symptoms at onset were mainly paroxysmal (mostly epileptic seizure and paroxysmal ocular movement disorder) or mixed paroxysmal and fixed symptoms (mostly psychomotor delay). Most patients (53, 79%) are currently under Ketogenic dietary treatments.
CONCLUSIONS: We describe the principles behind the design, development, and deployment of the web-based nationwide GLUT1-DS registry. It represents a stepping stone towards a more comprehensive understanding of the disease from onset to adulthood. It also represents a virtuous model from a technical, legal, and organizational point of view, thus representing a possible paradigmatic example for other rare disease registry implementation.},
}
RevDate: 2023-03-21
The Ultrafast and Accurate Mapping Algorithm FANSe3: Mapping a Human Whole-Genome Sequencing Dataset Within 30 Minutes.
Phenomics (Cham, Switzerland), 1(1):22-30.
Aligning billions of reads generated by next-generation sequencing (NGS) to reference sequences, termed "mapping", is a time-consuming and computationally intensive step in most NGS applications, so a fast, accurate, and robust mapping algorithm is highly needed. We therefore developed the FANSe3 mapping algorithm, which can map a 30x human whole-genome sequencing (WGS) dataset within 30 min, a 50x human whole-exome sequencing (WES) dataset within 30 s, and a typical mRNA-seq dataset within seconds on a single server node without any hardware acceleration. Like its predecessor FANSe2, the error rate of FANSe3 can be kept as low as 10^-9 in most cases, which is more robust than Burrows-Wheeler-transform-based algorithms. Error allowance hardly affected the identification of a driver somatic mutation in clinically relevant WGS data and provided robust gene expression profiles regardless of the parameter settings and sequencer used. The algorithm, designed for high-performance cloud-computing infrastructures, will break the bottleneck of speed and accuracy in NGS data analysis and promote NGS applications in various fields. FANSe3 can be downloaded from http://www.chi-biotech.com/fanse3/.
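To put the 30-minute WGS claim in perspective, the sketch below converts it into an approximate read throughput under stated assumptions (3.1 Gb genome, 150 bp reads); both assumptions are ours, not the paper's.

# Back-of-envelope throughput for "30x human WGS in 30 minutes".
genome_size_bp = 3.1e9   # assumed human genome size
coverage = 30
read_length_bp = 150     # assumed short-read length
minutes = 30

total_bases = genome_size_bp * coverage
total_reads = total_bases / read_length_bp
reads_per_second = total_reads / (minutes * 60)

print(f"~{total_reads/1e6:,.0f} million reads "
      f"=> ~{reads_per_second/1e3:,.0f} thousand reads/s")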
Additional Links: PMID-36939746
@article {pmid36939746,
year = {2021},
author = {Zhang, G and Zhang, Y and Jin, J},
title = {The Ultrafast and Accurate Mapping Algorithm FANSe3: Mapping a Human Whole-Genome Sequencing Dataset Within 30 Minutes.},
journal = {Phenomics (Cham, Switzerland)},
volume = {1},
number = {1},
pages = {22-30},
pmid = {36939746},
issn = {2730-5848},
abstract = {Aligning billions of reads generated by the next-generation sequencing (NGS) to reference sequences, termed "mapping", is the time-consuming and computationally-intensive process in most NGS applications. A Fast, accurate and robust mapping algorithm is highly needed. Therefore, we developed the FANSe3 mapping algorithm, which can map a 30 × human whole-genome sequencing (WGS) dataset within 30 min, a 50 × human whole exome sequencing (WES) dataset within 30 s, and a typical mRNA-seq dataset within seconds in a single-server node without the need for any hardware acceleration feature. Like its predecessor FANSe2, the error rate of FANSe3 can be kept as low as 10[-9] in most cases, this is more robust than the Burrows-Wheeler transform-based algorithms. Error allowance hardly affected the identification of a driver somatic mutation in clinically relevant WGS data and provided robust gene expression profiles regardless of the parameter settings and sequencer used. The novel algorithm, designed for high-performance cloud-computing after infrastructures, will break the bottleneck of speed and accuracy in NGS data analysis and promote NGS applications in various fields. The FANSe3 algorithm can be downloaded from the website: http://www.chi-biotech.com/fanse3/.},
}
RevDate: 2023-03-20
An artificial intelligence lightweight blockchain security model for security and privacy in IIoT systems.
Journal of cloud computing (Heidelberg, Germany), 12(1):38.
The Industrial Internet of Things (IIoT) promises to deliver innovative business models across multiple domains by providing ubiquitous connectivity, intelligent data, predictive analytics, and decision-making systems for improved market performance. However, traditional IIoT architectures are highly susceptible to many security vulnerabilities and network intrusions, which bring challenges such as lack of privacy, integrity, trust, and centralization. This research aims to implement an Artificial Intelligence-based Lightweight Blockchain Security Model (AILBSM) to ensure privacy and security of IIoT systems. This novel model is meant to address issues that can occur with security and privacy when dealing with Cloud-based IIoT systems that handle data in the Cloud or on the Edge of Networks (on-device). The novel contribution of this paper is that it combines the advantages of both lightweight blockchain and Convivial Optimized Sprinter Neural Network (COSNN) based AI mechanisms with simplified and improved security operations. Here, the significant impact of attacks is reduced by transforming features into encoded data using an Authentic Intrinsic Analysis (AIA) model. Extensive experiments are conducted to validate this system using various attack datasets. In addition, the results of privacy protection and AI mechanisms are evaluated separately and compared using various indicators. By using the proposed AILBSM framework, the execution time is minimized to 0.6 seconds, the overall classification accuracy is improved to 99.8%, and detection performance is increased to 99.7%. Due to the inclusion of auto-encoder based transformation and blockchain authentication, the anomaly detection performance of the proposed model is highly improved, when compared to other techniques.
Additional Links: PMID-36937654
@article {pmid36937654,
year = {2023},
author = {Selvarajan, S and Srivastava, G and Khadidos, AO and Khadidos, AO and Baza, M and Alshehri, A and Lin, JC},
title = {An artificial intelligence lightweight blockchain security model for security and privacy in IIoT systems.},
journal = {Journal of cloud computing (Heidelberg, Germany)},
volume = {12},
number = {1},
pages = {38},
pmid = {36937654},
issn = {2192-113X},
abstract = {The Industrial Internet of Things (IIoT) promises to deliver innovative business models across multiple domains by providing ubiquitous connectivity, intelligent data, predictive analytics, and decision-making systems for improved market performance. However, traditional IIoT architectures are highly susceptible to many security vulnerabilities and network intrusions, which bring challenges such as lack of privacy, integrity, trust, and centralization. This research aims to implement an Artificial Intelligence-based Lightweight Blockchain Security Model (AILBSM) to ensure privacy and security of IIoT systems. This novel model is meant to address issues that can occur with security and privacy when dealing with Cloud-based IIoT systems that handle data in the Cloud or on the Edge of Networks (on-device). The novel contribution of this paper is that it combines the advantages of both lightweight blockchain and Convivial Optimized Sprinter Neural Network (COSNN) based AI mechanisms with simplified and improved security operations. Here, the significant impact of attacks is reduced by transforming features into encoded data using an Authentic Intrinsic Analysis (AIA) model. Extensive experiments are conducted to validate this system using various attack datasets. In addition, the results of privacy protection and AI mechanisms are evaluated separately and compared using various indicators. By using the proposed AILBSM framework, the execution time is minimized to 0.6 seconds, the overall classification accuracy is improved to 99.8%, and detection performance is increased to 99.7%. Due to the inclusion of auto-encoder based transformation and blockchain authentication, the anomaly detection performance of the proposed model is highly improved, when compared to other techniques.},
}
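The core of the approach above is encoding IIoT traffic features with an auto-encoder before anomaly classification. A minimal sketch of that general idea in PyTorch, assuming a generic tabular feature vector; the layer sizes, training loop, and downstream use are illustrative choices, not the paper's AIA or COSNN design.

    # Minimal auto-encoder feature transformation for tabular intrusion-detection
    # features. Layer sizes and the training loop are illustrative assumptions,
    # not the paper's AIA/COSNN design.
    import torch
    import torch.nn as nn

    class FeatureAutoEncoder(nn.Module):
        def __init__(self, n_features: int, code_dim: int = 16):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, code_dim))
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 64), nn.ReLU(),
                nn.Linear(64, n_features))

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    def train_autoencoder(x: torch.Tensor, epochs: int = 50) -> FeatureAutoEncoder:
        model = FeatureAutoEncoder(x.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            recon, _ = model(x)           # reconstruct the input features
            loss = loss_fn(recon, x)
            loss.backward()
            opt.step()
        return model

    # Usage: the encoded features ("code") would then feed a downstream classifier.
    # x = torch.randn(1024, 40)           # 40 synthetic traffic features
    # model = train_autoencoder(x)
    # _, encoded = model(x)               # encoded.shape == (1024, 16)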
RevDate: 2023-03-20
Accelerating Minimap2 for Accurate Long Read Alignment on GPUs.
Journal of biotechnology and biomedicine, 6(1):13-23.
Long read sequencing technology is becoming increasingly popular for Precision Medicine applications like Whole Genome Sequencing (WGS) and microbial abundance estimation. Minimap2 is the state-of-the-art aligner and mapper used by the leading long read sequencing technologies, today. However, Minimap2 on CPUs is very slow for long noisy reads. ~60-70% of the run-time on a CPU comes from the highly sequential chaining step in Minimap2. On the other hand, most Point-of-Care computational workflows in long read sequencing use Graphics Processing Units (GPUs). We present minimap2-accelerated (mm2-ax), a heterogeneous design for sequence mapping and alignment where minimap2's compute intensive chaining step is sped up on the GPU and demonstrate its time and cost benefits. We extract better intra-read parallelism from chaining without losing mapping accuracy by forward transforming Minimap2's chaining algorithm. Moreover, we better utilize the high memory available on modern cloud instances apart from better workload balancing, data locality and minimal branch divergence on the GPU. We show mm2-ax on an NVIDIA A100 GPU improves the chaining step with 5.41 - 2.57X speedup and 4.07 - 1.93X speedup : costup over the fastest version of Minimap2, mm2-fast, benchmarked on a Google Cloud Platform instance of 30 SIMD cores.
Additional Links: PMID-36937168
PubMed:
Citation:
@article {pmid36937168,
year = {2023},
author = {Sadasivan, H and Maric, M and Dawson, E and Iyer, V and Israeli, J and Narayanasamy, S},
title = {Accelerating Minimap2 for Accurate Long Read Alignment on GPUs.},
journal = {Journal of biotechnology and biomedicine},
volume = {6},
number = {1},
pages = {13-23},
pmid = {36937168},
issn = {2642-9128},
abstract = {Long read sequencing technology is becoming increasingly popular for Precision Medicine applications like Whole Genome Sequencing (WGS) and microbial abundance estimation. Minimap2 is the state-of-the-art aligner and mapper used by the leading long read sequencing technologies, today. However, Minimap2 on CPUs is very slow for long noisy reads. ~60-70% of the run-time on a CPU comes from the highly sequential chaining step in Minimap2. On the other hand, most Point-of-Care computational workflows in long read sequencing use Graphics Processing Units (GPUs). We present minimap2-accelerated (mm2-ax), a heterogeneous design for sequence mapping and alignment where minimap2's compute intensive chaining step is sped up on the GPU and demonstrate its time and cost benefits. We extract better intra-read parallelism from chaining without losing mapping accuracy by forward transforming Minimap2's chaining algorithm. Moreover, we better utilize the high memory available on modern cloud instances apart from better workload balancing, data locality and minimal branch divergence on the GPU. We show mm2-ax on an NVIDIA A100 GPU improves the chaining step with 5.41 - 2.57X speedup and 4.07 - 1.93X speedup : costup over the fastest version of Minimap2, mm2-fast, benchmarked on a Google Cloud Platform instance of 30 SIMD cores.},
}
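Minimap2 scores chains of collinear anchors with a dynamic program, and that chaining step is the sequential bottleneck the work above moves to the GPU. A simplified O(n^2) chaining sketch over (reference, query) anchor pairs; the gap penalty and window size below are toy stand-ins, not minimap2's exact scoring.

    # Simplified anchor chaining in the spirit of minimap2's chaining DP:
    # f[i] = max(w, max_{j<i} f[j] + match - gap_cost(i, j)) over compatible anchors.
    # The gap cost here is a toy stand-in, not minimap2's concave penalty.
    def chain_anchors(anchors, w=15, max_dist=5000):
        # anchors: non-empty list of (ref_pos, query_pos), sorted by ref_pos
        if not anchors:
            return [], []
        n = len(anchors)
        f = [w] * n          # best chain score ending at anchor i
        prev = [-1] * n      # backpointer for traceback
        for i in range(n):
            ri, qi = anchors[i]
            for j in range(i - 1, -1, -1):
                rj, qj = anchors[j]
                dr, dq = ri - rj, qi - qj
                if dr <= 0 or dq <= 0:      # chain must move forward on both axes
                    continue
                if dr > max_dist:           # bounded search window
                    break
                gap = abs(dr - dq)          # toy gap penalty
                score = f[j] + min(w, dq, dr) - gap
                if score > f[i]:
                    f[i], prev[i] = score, j
        # Trace back the highest-scoring chain.
        best = max(range(n), key=lambda i: f[i])
        chain = []
        while best != -1:
            chain.append(anchors[best])
            best = prev[best]
        return f, chain[::-1]

    # Example: chain_anchors([(100, 10), (160, 70), (230, 140), (9000, 20)])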
RevDate: 2023-03-20
A Systematic Literature Review on Service Composition for People with Disabilities: Taxonomies, Solutions, and Open Research Challenges.
Computational intelligence and neuroscience, 2023:5934548.
Integrating smart heterogeneous objects, IoT devices, data sources, and software services to produce new business processes and functionalities continues to attract considerable attention from the research community due to its unraveled advantages, including reusability, adaptation, distribution, and pervasiveness. However, the exploitation of service-oriented computing technologies (e.g., SOC, SOA, and microservice architectures) by people with special needs is underexplored and often overlooked. Furthermore, the existing challenges in this area are yet to be identified clearly. This research study presents a rigorous literature survey of the recent advances in service-oriented composition approaches and solutions for disabled people, their domains of application, and the major challenges, covering studies published between January 2010 and October 2022. To this end, we applied the systematic literature review (SLR) methodology to retrieve and collate only the articles presenting and discussing service composition solutions tailored to produce digitally accessible services for consumption by people who suffer from an impairment or loss of some physical or mental functions. We searched six renowned bibliographic databases, particularly IEEE Xplore, Web of Science, Springer Link, ACM Library, ScienceDirect, and Google Scholar, to synthesize a final pool of 38 related articles. Our survey contributes a comprehensive taxonomy of service composition solutions, techniques, and practices that are utilized to create assistive technologies and services. The seven-facet taxonomy helps researchers and practitioners to quickly understand and analyze the fundamental conceptualizations and characteristics of accessible service composition for people with disabilities. Key findings showed that services are fused to assist disabled persons to carry out their daily activities, mainly in smart homes and ambient intelligent environments. Despite the emergence of immersive technologies (e.g., wearable computing), user-service interactions are enabled primarily through tactile and speech modalities. Service descriptions mainly incorporate functional features (e.g., performance, latency, and cost) of service quality, largely ignoring accessibility features. Moreover, the outstanding research problems revolve around (1) the unavailability of assistive services datasets, (2) the underspecification of accessibility aspects of disabilities, (3) the weak adoption of accessible and universal design practices, (4) the abstraction of service composition approaches, and (5) the rare experimental testing of composition approaches with disabled users. We conclude our survey with a set of guidelines to realize effective assistive service composition in IoT and cloud environments. Researchers and practitioners are advised to create assistive services that support the social relationships of disabled users and model their accessibility needs as part of the quality of service (QoS). Moreover, they should exploit AI/ML models to address the evolving requirements of disabled users in their unique environments. Furthermore, weaknesses of service composition solutions and research challenges are exposed as notable opportunities for future research.
Additional Links: PMID-36936667
PubMed:
Citation:
@article {pmid36936667,
year = {2023},
author = {Namoun, A and Tufail, A and Nawas, W and BenRhouma, O and Alshanqiti, A},
title = {A Systematic Literature Review on Service Composition for People with Disabilities: Taxonomies, Solutions, and Open Research Challenges.},
journal = {Computational intelligence and neuroscience},
volume = {2023},
number = {},
pages = {5934548},
pmid = {36936667},
issn = {1687-5273},
abstract = {Integrating smart heterogeneous objects, IoT devices, data sources, and software services to produce new business processes and functionalities continues to attract considerable attention from the research community due to its unraveled advantages, including reusability, adaptation, distribution, and pervasiveness. However, the exploitation of service-oriented computing technologies (e.g., SOC, SOA, and microservice architectures) by people with special needs is underexplored and often overlooked. Furthermore, the existing challenges in this area are yet to be identified clearly. This research study presents a rigorous literature survey of the recent advances in service-oriented composition approaches and solutions for disabled people, their domains of application, and the major challenges, covering studies published between January 2010 and October 2022. To this end, we applied the systematic literature review (SLR) methodology to retrieve and collate only the articles presenting and discussing service composition solutions tailored to produce digitally accessible services for consumption by people who suffer from an impairment or loss of some physical or mental functions. We searched six renowned bibliographic databases, particularly IEEE Xplore, Web of Science, Springer Link, ACM Library, ScienceDirect, and Google Scholar, to synthesize a final pool of 38 related articles. Our survey contributes a comprehensive taxonomy of service composition solutions, techniques, and practices that are utilized to create assistive technologies and services. The seven-facet taxonomy helps researchers and practitioners to quickly understand and analyze the fundamental conceptualizations and characteristics of accessible service composition for people with disabilities. Key findings showed that services are fused to assist disabled persons to carry out their daily activities, mainly in smart homes and ambient intelligent environments. Despite the emergence of immersive technologies (e.g., wearable computing), user-service interactions are enabled primarily through tactile and speech modalities. Service descriptions mainly incorporate functional features (e.g., performance, latency, and cost) of service quality, largely ignoring accessibility features. Moreover, the outstanding research problems revolve around (1) the unavailability of assistive services datasets, (2) the underspecification of accessibility aspects of disabilities, (3) the weak adoption of accessible and universal design practices, (4) the abstraction of service composition approaches, and (5) the rare experimental testing of composition approaches with disabled users. We conclude our survey with a set of guidelines to realize effective assistive service composition in IoT and cloud environments. Researchers and practitioners are advised to create assistive services that support the social relationships of disabled users and model their accessibility needs as part of the quality of service (QoS). Moreover, they should exploit AI/ML models to address the evolving requirements of disabled users in their unique environments. Furthermore, weaknesses of service composition solutions and research challenges are exposed as notable opportunities for future research.},
}
RevDate: 2023-03-16
Sharing and Cooperation of Improved Cross-Entropy Optimization Algorithm in Telemedicine Multimedia Information Processing.
International journal of telemedicine and applications, 2023:7353489.
In order to improve the efficiency of medical multimedia information sharing, this paper combines cloud computing technology and SOA (service-oriented architecture) technology to build a medical multimedia information sharing system. Building a medical information sharing platform requires integrating information resources stored in information systems of medical institutions and nonmedical information systems related to medical information and forming a huge resource pool. It is important to mine and analyze the information resources in the resource pool to realize the sharing and interaction of medical information. To this end, this paper proposes a gain-adaptive control algorithm with online adjustable parameters and investigates the extension of the mutual entropy optimization algorithm in the control domain and its integrated processing capability in the process of medical multimedia information processing. In addition, this paper constructs a medical multimedia information sharing and collaboration platform with medical multimedia information sharing and telemedicine as the core and verifies the effectiveness of the platform through experiments. The simulation results and comparison results with other systems prove that the system in this paper can realize fast data processing, retrieve and analyze massive data, and meet the demand of remote intelligent diagnosis under the premise of safety and stability. Meanwhile, the system in this paper can help hospitals achieve fast and accurate diagnosis, which has strong theoretical and practical values.
Additional Links: PMID-36923109
PubMed:
Citation:
@article {pmid36923109,
year = {2023},
author = {Wu, H},
title = {Sharing and Cooperation of Improved Cross-Entropy Optimization Algorithm in Telemedicine Multimedia Information Processing.},
journal = {International journal of telemedicine and applications},
volume = {2023},
number = {},
pages = {7353489},
pmid = {36923109},
issn = {1687-6415},
abstract = {In order to improve the efficiency of medical multimedia information sharing, this paper combines cloud computing technology and SOA (service-oriented architecture) technology to build a medical multimedia information sharing system. Building a medical information sharing platform requires integrating information resources stored in information systems of medical institutions and nonmedical information systems related to medical information and forming a huge resource pool. It is important to mine and analyze the information resources in the resource pool to realize the sharing and interaction of medical information. To this end, this paper proposes a gain-adaptive control algorithm with online adjustable parameters and investigates the extension of the mutual entropy optimization algorithm in the control domain and its integrated processing capability in the process of medical multimedia information processing. In addition, this paper constructs a medical multimedia information sharing and collaboration platform with medical multimedia information sharing and telemedicine as the core and verifies the effectiveness of the platform through experiments. The simulation results and comparison results with other systems prove that the system in this paper can realize fast data processing, retrieve and analyze massive data, and meet the demand of remote intelligent diagnosis under the premise of safety and stability. Meanwhile, the system in this paper can help hospitals achieve fast and accurate diagnosis, which has strong theoretical and practical values.},
}
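The paper above builds on cross-entropy optimization. For reference, the textbook cross-entropy method samples candidates from a Gaussian, keeps an elite fraction, and refits the sampling distribution; the sketch below shows that generic loop only, not the author's gain-adaptive extension.

    # Generic cross-entropy method (CEM) for minimizing a black-box objective.
    # This is the standard algorithm, not the paper's gain-adaptive variant.
    import numpy as np

    def cross_entropy_minimize(objective, dim, n_samples=100, elite_frac=0.2,
                               n_iters=50, seed=0):
        rng = np.random.default_rng(seed)
        mean, std = np.zeros(dim), np.ones(dim)
        n_elite = max(1, int(elite_frac * n_samples))
        for _ in range(n_iters):
            samples = rng.normal(mean, std, size=(n_samples, dim))
            scores = np.array([objective(s) for s in samples])
            elite = samples[np.argsort(scores)[:n_elite]]   # lowest objective values
            mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        return mean

    # Example: recover the minimum of a shifted quadratic.
    # best = cross_entropy_minimize(lambda x: np.sum((x - 3.0) ** 2), dim=4)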
RevDate: 2023-03-20
The Microbiome and Big Data.
Current opinion in systems biology, 4:92-96.
Microbiome datasets have expanded rapidly in recent years. Advances in DNA sequencing, as well as the rise of shotgun metagenomics and metabolomics, are producing datasets that exceed the ability of researchers to analyze them on their personal computers. Here we describe what Big Data is in the context of microbiome research, how this data can be transformed into knowledge about microbes and their functions in their environments, and how the knowledge can be applied to move microbiome research forward. In particular, the development of new high-resolution tools to assess strain-level variability (moving away from OTUs), the advent of cloud computing and centralized analysis resources such as Qiita (for sequences) and GNPS (for mass spectrometry), and better methods for curating and describing "metadata" (contextual information about the sequence or chemical information) are rapidly assisting the use of microbiome data in fields ranging from human health to environmental studies.
Additional Links: PMID-36937228
PubMed:
Citation:
@article {pmid36937228,
year = {2017},
author = {Navas-Molina, JA and Hyde, ER and Sanders, J and Knight, R},
title = {The Microbiome and Big Data.},
journal = {Current opinion in systems biology},
volume = {4},
number = {},
pages = {92-96},
pmid = {36937228},
issn = {2452-3100},
abstract = {Microbiome datasets have expanded rapidly in recent years. Advances in DNA sequencing, as well as the rise of shotgun metagenomics and metabolomics, are producing datasets that exceed the ability of researchers to analyze them on their personal computers. Here we describe what Big Data is in the context of microbiome research, how this data can be transformed into knowledge about microbes and their functions in their environments, and how the knowledge can be applied to move microbiome research forward. In particular, the development of new high-resolution tools to assess strain-level variability (moving away from OTUs), the advent of cloud computing and centralized analysis resources such as Qiita (for sequences) and GNPS (for mass spectrometry), and better methods for curating and describing "metadata" (contextual information about the sequence or chemical information) are rapidly assisting the use of microbiome data in fields ranging from human health to environmental studies.},
}
RevDate: 2023-03-13
Long-term river extent dynamics and transition detection using remote sensing: Case studies of Mekong and Ganga River.
The Science of the total environment pii:S0048-9697(23)01390-6 [Epub ahead of print].
Currently, understanding river dynamics is limited to either bankline or reach-wise scale studies. Monitoring large-scale and long-term river extent dynamics provides fundamental insights relevant to the impact of climatic factors and anthropogenic activities on fluvial geomorphology. This study analyzed the two most populous rivers, Ganga and Mekong, to understand the river extent dynamics using 32 years of Landsat satellite data (1990-2022) in a cloud computing platform. This study categorizes river dynamics and transitions using the combination of pixel-wise water frequency and temporal trends. This approach can demarcate the river channel stability, areas affected by erosion and sedimentation, and the seasonal transitions in the river. The results illustrate that the Ganga river channel is found to be relatively unstable and very prone to meandering and migration as almost 40 % of the river channel has been altered in the past 32 years. The seasonal transitions, such as lost seasonal and seasonal to permanent changes are more prominent in the Ganga river, and the dominance of meandering and sedimentation in the lower course is also illustrated. In contrast, the Mekong river has a more stable course with erosion and sedimentation observed at sparse locations in the lower course. However, the lost seasonal and seasonal to permanent changes are also dominant in the Mekong river. Since 1990, Ganga and Mekong rivers have lost approximately 13.3 % and 4.7 % of their seasonal water respectively, as compared to the other transitions and categories. Factors such as climate change, floods, and man-made reservoirs could all be critical in triggering these morphological changes.
Additional Links: PMID-36914133
Publisher:
PubMed:
Citation:
@article {pmid36914133,
year = {2023},
author = {Aman, MA and Chu, HJ},
title = {Long-term river extent dynamics and transition detection using remote sensing: Case studies of Mekong and Ganga River.},
journal = {The Science of the total environment},
volume = {},
number = {},
pages = {162774},
doi = {10.1016/j.scitotenv.2023.162774},
pmid = {36914133},
issn = {1879-1026},
abstract = {Currently, understanding river dynamics is limited to either bankline or reach-wise scale studies. Monitoring large-scale and long-term river extent dynamics provides fundamental insights relevant to the impact of climatic factors and anthropogenic activities on fluvial geomorphology. This study analyzed the two most populous rivers, Ganga and Mekong, to understand the river extent dynamics using 32 years of Landsat satellite data (1990-2022) in a cloud computing platform. This study categorizes river dynamics and transitions using the combination of pixel-wise water frequency and temporal trends. This approach can demarcate the river channel stability, areas affected by erosion and sedimentation, and the seasonal transitions in the river. The results illustrate that the Ganga river channel is found to be relatively unstable and very prone to meandering and migration as almost 40 % of the river channel has been altered in the past 32 years. The seasonal transitions, such as lost seasonal and seasonal to permanent changes are more prominent in the Ganga river, and the dominance of meandering and sedimentation in the lower course is also illustrated. In contrast, the Mekong river has a more stable course with erosion and sedimentation observed at sparse locations in the lower course. However, the lost seasonal and seasonal to permanent changes are also dominant in the Mekong river. Since 1990, Ganga and Mekong rivers have lost approximately 13.3 % and 4.7 % of their seasonal water respectively, as compared to the other transitions and categories. Factors such as climate change, floods, and man-made reservoirs could all be critical in triggering these morphological changes.},
}
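The analysis above categorizes river pixels by combining per-pixel water frequency with a temporal trend over a 32-year Landsat stack. A NumPy sketch of those two per-pixel statistics on a stack of yearly water masks; the thresholds suggested in the comments are illustrative, not the study's.

    # Per-pixel water frequency and linear trend from a stack of yearly water masks.
    import numpy as np

    def water_frequency_and_trend(water_masks):
        """water_masks: bool array (n_years, height, width) of per-year water maps."""
        n_years = water_masks.shape[0]
        freq = water_masks.mean(axis=0)               # fraction of years a pixel is wet
        x = np.arange(n_years) - (n_years - 1) / 2.0  # centred time axis
        # Least-squares slope of the wet/dry (0/1) series against time, per pixel.
        slope = np.tensordot(x, water_masks - freq, axes=(0, 0)) / np.sum(x ** 2)
        return freq, slope                            # slope: wet-fraction change per year

    # Illustrative (not the paper's) thresholds:
    # freq >= 0.8  -> roughly permanent water; 0.25 <= freq < 0.8 -> seasonal water;
    # slope < -0.01 -> losing water over time; slope > 0.01 -> gaining water.
    # masks = np.random.rand(32, 200, 200) > 0.6      # stand-in for 32 years of masks
    # freq, slope = water_frequency_and_trend(masks)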
RevDate: 2023-03-13
An Efficient Hybrid Job Scheduling Optimization (EHJSO) approach to enhance resource search using Cuckoo and Grey Wolf Job Optimization for cloud environment.
PloS one, 18(3):e0282600 pii:PONE-D-22-30865.
Cloud computing has now evolved into an unavoidable technology in the fields of finance, education, internet business, and nearly all organisations. Cloud resources are practically accessible to cloud users over the internet to accomplish their desired tasks. The effectiveness and efficacy of cloud computing services depend on the tasks that cloud users submit and the time taken to complete them. By optimising resource allocation and utilisation, task scheduling is crucial to enhancing the effectiveness and performance of a cloud system. In this context, cloud computing offers a wide range of advantages, such as cost savings, security, flexibility, mobility, quality control, disaster recovery, automatic software upgrades, and sustainability. According to a recent research survey, more and more tech-savvy companies and industry executives recognize and utilize the advantages of cloud computing. Hence, as the number of cloud users increases, so does the need to regulate resource allocation. However, the scheduling of jobs in the cloud necessitates a smart and fast algorithm that can discover the resources that are accessible and schedule the jobs that are requested by different users. Consequently, for better resource allocation and job scheduling, a fast, efficient, tolerable job scheduling algorithm is required. Efficient Hybrid Job Scheduling Optimization (EHJSO) utilises Cuckoo Search Optimization and Grey Wolf Job Optimization (GWO). The Cuckoo Search Optimization approach was inspired by the obligate brood parasitism of some cuckoo species (laying eggs in other species' nests). Grey wolf optimization (GWO) is a population-oriented AI system inspired by grey wolf social structure and hunting strategies. Makespan, computation time, fitness, iteration-based performance, and success rate were utilised for comparison with previous studies. Experiments show that the recommended method is superior.
Additional Links: PMID-36913423
Publisher:
PubMed:
Citation:
@article {pmid36913423,
year = {2023},
author = {Paulraj, D and Sethukarasi, T and Neelakandan, S and Prakash, M and Baburaj, E},
title = {An Efficient Hybrid Job Scheduling Optimization (EHJSO) approach to enhance resource search using Cuckoo and Grey Wolf Job Optimization for cloud environment.},
journal = {PloS one},
volume = {18},
number = {3},
pages = {e0282600},
doi = {10.1371/journal.pone.0282600},
pmid = {36913423},
issn = {1932-6203},
abstract = {Cloud computing has now evolved into an unavoidable technology in the fields of finance, education, internet business, and nearly all organisations. Cloud resources are practically accessible to cloud users over the internet to accomplish their desired tasks. The effectiveness and efficacy of cloud computing services depend on the tasks that cloud users submit and the time taken to complete them. By optimising resource allocation and utilisation, task scheduling is crucial to enhancing the effectiveness and performance of a cloud system. In this context, cloud computing offers a wide range of advantages, such as cost savings, security, flexibility, mobility, quality control, disaster recovery, automatic software upgrades, and sustainability. According to a recent research survey, more and more tech-savvy companies and industry executives recognize and utilize the advantages of cloud computing. Hence, as the number of cloud users increases, so does the need to regulate resource allocation. However, the scheduling of jobs in the cloud necessitates a smart and fast algorithm that can discover the resources that are accessible and schedule the jobs that are requested by different users. Consequently, for better resource allocation and job scheduling, a fast, efficient, tolerable job scheduling algorithm is required. Efficient Hybrid Job Scheduling Optimization (EHJSO) utilises Cuckoo Search Optimization and Grey Wolf Job Optimization (GWO). The Cuckoo Search Optimization approach was inspired by the obligate brood parasitism of some cuckoo species (laying eggs in other species' nests). Grey wolf optimization (GWO) is a population-oriented AI system inspired by grey wolf social structure and hunting strategies. Makespan, computation time, fitness, iteration-based performance, and success rate were utilised for comparison with previous studies. Experiments show that the recommended method is superior.},
}
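Metaheuristics such as the Cuckoo/Grey-Wolf hybrid above ultimately rank candidate schedules by a fitness value. A minimal makespan-plus-cost fitness for one task-to-VM assignment; the weights and the simple per-VM serial execution model are assumptions, not the paper's exact formulation.

    # Fitness of one candidate schedule: which VM each task runs on.
    # task_len: millions of instructions per task; vm_mips: VM speed;
    # vm_cost: price per second. Weights are illustrative assumptions.
    import numpy as np

    def schedule_fitness(assignment, task_len, vm_mips, vm_cost,
                         w_makespan=0.7, w_cost=0.3):
        exec_time = task_len / vm_mips[assignment]       # per-task runtime on its VM
        vm_finish = np.zeros(len(vm_mips))
        np.add.at(vm_finish, assignment, exec_time)      # tasks run serially per VM
        makespan = vm_finish.max()
        cost = float(np.sum(exec_time * vm_cost[assignment]))
        return w_makespan * makespan + w_cost * cost     # lower is better

    # Example candidate: 6 tasks on 3 VMs.
    # fit = schedule_fitness(np.array([0, 1, 2, 0, 1, 2]),
    #                        task_len=np.array([400., 250., 300., 120., 500., 80.]),
    #                        vm_mips=np.array([100., 150., 200.]),
    #                        vm_cost=np.array([0.02, 0.04, 0.08]))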
RevDate: 2023-03-13
FSPLO: a fast sensor placement location optimization method for cloud-aided inspection of smart buildings.
Journal of cloud computing (Heidelberg, Germany), 12(1):31.
With the awakening of health awareness, people are raising a series of health-related requirements for the buildings they live in, with a view to improving their living conditions. In this context, BIM (Building Information Modeling) makes full use of cutting-edge theories and technologies in many domains such as health, environment, and information technology to provide a new way for engineers to design and build various healthy and green buildings. Specifically, sensors are playing an important role in achieving smart building goals by monitoring the surroundings of buildings, objects and people with the help of cloud computing technology. In addition, it is necessary to quickly determine the optimal sensor placement to save energy and minimize the number of sensors for a building, which is a non-trivial task for the cloud platform due to the limited number of sensors available and the massive number of candidate locations for each sensor. In this paper, we propose a Fast Sensor Placement Location Optimization approach (FSPLO) to solve the BIM problem in cloud-aided smart buildings. In particular, we quickly filter out the repeated candidate locations of sensors in FSPLO using Locality Sensitive Hashing (LSH) techniques to maintain only a small number of optimized locations for deploying sensors around buildings. In this way, we can significantly reduce the number of sensors used for health and green buildings. Finally, a set of simulation experiments demonstrates the excellent performance of our proposed FSPLO method.
Additional Links: PMID-36910722
PubMed:
Citation:
@article {pmid36910722,
year = {2023},
author = {Yang, M and Ge, C and Zhao, X and Kou, H},
title = {FSPLO: a fast sensor placement location optimization method for cloud-aided inspection of smart buildings.},
journal = {Journal of cloud computing (Heidelberg, Germany)},
volume = {12},
number = {1},
pages = {31},
pmid = {36910722},
issn = {2192-113X},
abstract = {With the awakening of health awareness, people are raising a series of health-related requirements for the buildings they live in, with a view to improving their living conditions. In this context, BIM (Building Information Modeling) makes full use of cutting-edge theories and technologies in many domains such as health, environment, and information technology to provide a new way for engineers to design and build various healthy and green buildings. Specifically, sensors are playing an important role in achieving smart building goals by monitoring the surroundings of buildings, objects and people with the help of cloud computing technology. In addition, it is necessary to quickly determine the optimal sensor placement to save energy and minimize the number of sensors for a building, which is a non-trivial task for the cloud platform due to the limited number of sensors available and the massive number of candidate locations for each sensor. In this paper, we propose a Fast Sensor Placement Location Optimization approach (FSPLO) to solve the BIM problem in cloud-aided smart buildings. In particular, we quickly filter out the repeated candidate locations of sensors in FSPLO using Locality Sensitive Hashing (LSH) techniques to maintain only a small number of optimized locations for deploying sensors around buildings. In this way, we can significantly reduce the number of sensors used for health and green buildings. Finally, a set of simulation experiments demonstrates the excellent performance of our proposed FSPLO method.},
}
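The key step in FSPLO above is using locality-sensitive hashing to discard near-duplicate candidate sensor locations before optimization. A compact sketch using p-stable (Euclidean) LSH over 3-D coordinates; the number of hash lines, bucket width, and keep-one-per-bucket policy are illustrative assumptions.

    # LSH-style filtering of near-duplicate candidate sensor locations (3-D points).
    # Uses a p-stable (Euclidean) LSH: h(x) = floor((a . x + b) / w) per hash line.
    import numpy as np

    def lsh_filter_locations(points, n_hashes=4, w=2.0, seed=0):
        rng = np.random.default_rng(seed)
        a = rng.normal(size=(n_hashes, points.shape[1]))     # random directions
        b = rng.uniform(0, w, size=n_hashes)                 # random offsets
        keys = np.floor((points @ a.T + b) / w).astype(int)  # bucket per hash line
        seen, kept = set(), []
        for idx, key in enumerate(map(tuple, keys)):
            if key not in seen:                              # keep one point per bucket
                seen.add(key)
                kept.append(idx)
        return points[kept]

    # candidates = np.random.uniform(0, 50, size=(10_000, 3))  # x, y, z in metres
    # reduced = lsh_filter_locations(candidates)               # far fewer candidates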
RevDate: 2023-03-13
CmpDate: 2023-03-13
DNS Tunnelling, Exfiltration and Detection over Cloud Environments.
Sensors (Basel, Switzerland), 23(5):.
The domain name system (DNS) protocol is fundamental to the operation of the internet, however, in recent years various methodologies have been developed that enable DNS attacks on organisations. In the last few years, the increased use of cloud services by organisations has created further security challenges as cyber criminals use numerous methodologies to exploit cloud services, configurations and the DNS protocol. In this paper, two different DNS tunnelling methods, Iodine and DNScat, have been conducted in the cloud environment (Google and AWS) and positive results of exfiltration have been achieved under different firewall configurations. Detection of malicious use of DNS protocol can be a challenge for organisations with limited cybersecurity support and expertise. In this study, various DNS tunnelling detection techniques were utilised in a cloud environment to create an effective monitoring system with a reliable detection rate, low implementation cost, and ease of use for organisations with limited detection capabilities. The Elastic stack (an open-source framework) was used to configure a DNS monitoring system and to analyse the collected DNS logs. Furthermore, payload and traffic analysis techniques were implemented to identify different tunnelling methods. This cloud-based monitoring system offers various detection techniques that can be used for monitoring DNS activities of any network especially accessible to small organisations. Moreover, the Elastic stack is open-source and it has no limitation with regards to the data that can be uploaded daily.
Additional Links: PMID-36904959
PubMed:
Citation:
@article {pmid36904959,
year = {2023},
author = {Salat, L and Davis, M and Khan, N},
title = {DNS Tunnelling, Exfiltration and Detection over Cloud Environments.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
pmid = {36904959},
issn = {1424-8220},
abstract = {The domain name system (DNS) protocol is fundamental to the operation of the internet, however, in recent years various methodologies have been developed that enable DNS attacks on organisations. In the last few years, the increased use of cloud services by organisations has created further security challenges as cyber criminals use numerous methodologies to exploit cloud services, configurations and the DNS protocol. In this paper, two different DNS tunnelling methods, Iodine and DNScat, have been conducted in the cloud environment (Google and AWS) and positive results of exfiltration have been achieved under different firewall configurations. Detection of malicious use of DNS protocol can be a challenge for organisations with limited cybersecurity support and expertise. In this study, various DNS tunnelling detection techniques were utilised in a cloud environment to create an effective monitoring system with a reliable detection rate, low implementation cost, and ease of use for organisations with limited detection capabilities. The Elastic stack (an open-source framework) was used to configure a DNS monitoring system and to analyse the collected DNS logs. Furthermore, payload and traffic analysis techniques were implemented to identify different tunnelling methods. This cloud-based monitoring system offers various detection techniques that can be used for monitoring DNS activities of any network especially accessible to small organisations. Moreover, the Elastic stack is open-source and it has no limitation with regards to the data that can be uploaded daily.},
}
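Payload analysis for DNS tunnelling commonly flags unusually long or high-entropy query labels. A small standard-library heuristic in that spirit; the thresholds are illustrative and not the tuned values of the study's Elastic-stack pipeline.

    # Heuristic payload check for DNS tunnelling: long and/or high-entropy labels
    # are suspicious. Thresholds are illustrative, not the study's tuned values.
    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        counts = Counter(s)
        total = len(s)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_like_tunnelling(qname: str, max_label_len=40, max_entropy=4.0) -> bool:
        labels = [l for l in qname.rstrip(".").split(".") if l]
        payload = "".join(labels[:-2]) if len(labels) > 2 else ""  # strip the zone
        if not payload:
            return False
        return len(max(labels, key=len)) > max_label_len or \
               shannon_entropy(payload) > max_entropy

    # looks_like_tunnelling("www.example.com")                              -> False
    # looks_like_tunnelling("aGVsbG8xMjM0NTY3ODkw" * 3 + ".t1.example.com") -> True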
RevDate: 2023-03-11
A Smart Agricultural System Based on PLC and a Cloud Computing Web Application Using LoRa and LoRaWan.
Sensors (Basel, Switzerland), 23(5): pii:s23052725.
The increasing challenges of agricultural processes and the growing demand for food globally are driving the industrial agriculture sector to adopt the concept of 'smart farming'. Smart farming systems, with their real-time management and high level of automation, can greatly improve productivity, food safety, and efficiency in the agri-food supply chain. This paper presents a customized smart farming system that uses a low-cost, low-power, and wide-range wireless sensor network based on Internet of Things (IoT) and Long Range (LoRa) technologies. In this system, LoRa connectivity is integrated with existing Programmable Logic Controllers (PLCs), which are commonly used in industry and farming to control multiple processes, devices, and machinery through the Simatic IOT2040. The system also includes a newly developed web-based monitoring application hosted on a cloud server, which processes data collected from the farm environment and allows for remote visualization and control of all connected devices. A Telegram bot is included for automated communication with users through this mobile messaging app. The proposed network structure has been tested, and the path loss in the wireless LoRa is evaluated.
Additional Links: PMID-36904927
Publisher:
PubMed:
Citation:
@article {pmid36904927,
year = {2023},
author = {Saban, M and Bekkour, M and Amdaouch, I and El Gueri, J and Ait Ahmed, B and Chaari, MZ and Ruiz-Alzola, J and Rosado-Muñoz, A and Aghzout, O},
title = {A Smart Agricultural System Based on PLC and a Cloud Computing Web Application Using LoRa and LoRaWan.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
doi = {10.3390/s23052725},
pmid = {36904927},
issn = {1424-8220},
abstract = {The increasing challenges of agricultural processes and the growing demand for food globally are driving the industrial agriculture sector to adopt the concept of 'smart farming'. Smart farming systems, with their real-time management and high level of automation, can greatly improve productivity, food safety, and efficiency in the agri-food supply chain. This paper presents a customized smart farming system that uses a low-cost, low-power, and wide-range wireless sensor network based on Internet of Things (IoT) and Long Range (LoRa) technologies. In this system, LoRa connectivity is integrated with existing Programmable Logic Controllers (PLCs), which are commonly used in industry and farming to control multiple processes, devices, and machinery through the Simatic IOT2040. The system also includes a newly developed web-based monitoring application hosted on a cloud server, which processes data collected from the farm environment and allows for remote visualization and control of all connected devices. A Telegram bot is included for automated communication with users through this mobile messaging app. The proposed network structure has been tested, and the path loss in the wireless LoRa is evaluated.},
}
RevDate: 2023-03-11
Identity-Based Proxy Re-Encryption Scheme Using Fog Computing and Anonymous Key Generation.
Sensors (Basel, Switzerland), 23(5): pii:s23052706.
In the fog computing architecture, a fog is a node closer to clients and responsible for responding to users' requests as well as forwarding messages to clouds. In some medical applications such as the remote healthcare, a sensor of patients will first send encrypted data of sensed information to a nearby fog such that the fog acting as a re-encryption proxy could generate a re-encrypted ciphertext designated for requested data users in the cloud. Specifically, a data user can request access to cloud ciphertexts by sending a query to the fog node that will forward this query to the corresponding data owner who preserves the right to grant or deny the permission to access his/her data. When the access request is granted, the fog node will obtain a unique re-encryption key for carrying out the re-encryption process. Although some previous concepts have been proposed to fulfill these application requirements, they either have known security flaws or incur higher computational complexity. In this work, we present an identity-based proxy re-encryption scheme on the basis of the fog computing architecture. Our identity-based mechanism uses public channels for key distribution and avoids the troublesome problem of key escrow. We also formally prove that the proposed protocol is secure in the IND-PrID-CPA notion. Furthermore, we show that our work exhibits better performance in terms of computational complexity.
Additional Links: PMID-36904909
Publisher:
PubMed:
Citation:
@article {pmid36904909,
year = {2023},
author = {Lin, HY and Tsai, TT and Ting, PY and Fan, YR},
title = {Identity-Based Proxy Re-Encryption Scheme Using Fog Computing and Anonymous Key Generation.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
doi = {10.3390/s23052706},
pmid = {36904909},
issn = {1424-8220},
abstract = {In the fog computing architecture, a fog is a node closer to clients and responsible for responding to users' requests as well as forwarding messages to clouds. In some medical applications such as the remote healthcare, a sensor of patients will first send encrypted data of sensed information to a nearby fog such that the fog acting as a re-encryption proxy could generate a re-encrypted ciphertext designated for requested data users in the cloud. Specifically, a data user can request access to cloud ciphertexts by sending a query to the fog node that will forward this query to the corresponding data owner who preserves the right to grant or deny the permission to access his/her data. When the access request is granted, the fog node will obtain a unique re-encryption key for carrying out the re-encryption process. Although some previous concepts have been proposed to fulfill these application requirements, they either have known security flaws or incur higher computational complexity. In this work, we present an identity-based proxy re-encryption scheme on the basis of the fog computing architecture. Our identity-based mechanism uses public channels for key distribution and avoids the troublesome problem of key escrow. We also formally prove that the proposed protocol is secure in the IND-PrID-CPA notion. Furthermore, we show that our work exhibits better performance in terms of computational complexity.},
}
RevDate: 2023-03-11
Secure Data Transfer Based on a Multi-Level Blockchain for Internet of Vehicles.
Sensors (Basel, Switzerland), 23(5): pii:s23052664.
Because of the decentralized trait of the blockchain and the Internet of vehicles, both are very suitable for the architecture of the other. This study proposes a multi-level blockchain framework to ensure information security on the Internet of vehicles. The main motivation of this study is to propose a new transaction block and ensure the identity of traders and the non-repudiation of transactions through the elliptic curve digital signature algorithm ECDSA. The designed multi-level blockchain architecture distributes the operations within the intra_cluster blockchain and the inter_cluster blockchain to improve the efficiency of the entire block. On the cloud computing platform, we exploit the threshold key management protocol, and the system can recover the system key as long as the threshold partial key is collected. This avoids the occurrence of PKI single-point failure. Thus, the proposed architecture ensures the security of OBU-RSU-BS-VM. The proposed multi-level blockchain framework consists of a block, intra-cluster blockchain and inter-cluster blockchain. The roadside unit RSU is responsible for the communication of vehicles in the vicinity, similar to a cluster head on the Internet of vehicles. This study exploits RSU to manage the block, and the base station is responsible for managing the intra-cluster blockchain named intra_clusterBC, and the cloud server at the back end is responsible for the entire system blockchain named inter_clusterBC. Finally, RSU, base stations and cloud servers cooperatively construct the multi-level blockchain framework and improve the security and the efficiency of the operation of the blockchain. Overall, in order to protect the security of the transaction data of the blockchain, we propose a new transaction block structure and adopt the elliptic curve cryptographic signature ECDSA to ensure that the Merkle tree root value is not changed and also ensure transaction identity and the non-repudiation of transaction data. Finally, this study considers information security in a cloud environment, and therefore we propose a secret-sharing and secure-map-reducing architecture based on the identity confirmation scheme. The proposed scheme with decentralization is very suitable for distributed connected vehicles and can also improve the execution efficiency of the blockchain.
Additional Links: PMID-36904869
Publisher:
PubMed:
Citation:
@article {pmid36904869,
year = {2023},
author = {Lin, HY},
title = {Secure Data Transfer Based on a Multi-Level Blockchain for Internet of Vehicles.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
doi = {10.3390/s23052664},
pmid = {36904869},
issn = {1424-8220},
abstract = {Because of the decentralized trait of the blockchain and the Internet of vehicles, both are very suitable for the architecture of the other. This study proposes a multi-level blockchain framework to ensure information security on the Internet of vehicles. The main motivation of this study is to propose a new transaction block and ensure the identity of traders and the non-repudiation of transactions through the elliptic curve digital signature algorithm ECDSA. The designed multi-level blockchain architecture distributes the operations within the intra_cluster blockchain and the inter_cluster blockchain to improve the efficiency of the entire block. On the cloud computing platform, we exploit the threshold key management protocol, and the system can recover the system key as long as the threshold partial key is collected. This avoids the occurrence of PKI single-point failure. Thus, the proposed architecture ensures the security of OBU-RSU-BS-VM. The proposed multi-level blockchain framework consists of a block, intra-cluster blockchain and inter-cluster blockchain. The roadside unit RSU is responsible for the communication of vehicles in the vicinity, similar to a cluster head on the Internet of vehicles. This study exploits RSU to manage the block, and the base station is responsible for managing the intra-cluster blockchain named intra_clusterBC, and the cloud server at the back end is responsible for the entire system blockchain named inter_clusterBC. Finally, RSU, base stations and cloud servers cooperatively construct the multi-level blockchain framework and improve the security and the efficiency of the operation of the blockchain. Overall, in order to protect the security of the transaction data of the blockchain, we propose a new transaction block structure and adopt the elliptic curve cryptographic signature ECDSA to ensure that the Merkle tree root value is not changed and also ensure transaction identity and the non-repudiation of transaction data. Finally, this study considers information security in a cloud environment, and therefore we propose a secret-sharing and secure-map-reducing architecture based on the identity confirmation scheme. The proposed scheme with decentralization is very suitable for distributed connected vehicles and can also improve the execution efficiency of the blockchain.},
}
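The framework above signs transaction blocks and verifies that the Merkle tree root value is unchanged. A standard-library sketch of computing a Merkle root over serialized transactions, duplicating the last node on odd levels (a common Bitcoin-style convention assumed here); ECDSA signing of that root would be done with a cryptography library and is not shown.

    # Merkle root over a list of serialized transactions, using SHA-256.
    # Odd levels duplicate the last node; this convention is assumed here
    # rather than taken from the paper.
    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(transactions: list) -> bytes:
        if not transactions:
            return sha256(b"")
        level = [sha256(tx) for tx in transactions]
        while len(level) > 1:
            if len(level) % 2 == 1:
                level.append(level[-1])          # duplicate the last node
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    # txs = [b"A->B:5", b"B->C:2", b"C->A:1"]
    # root = merkle_root(txs).hex()   # any change to a transaction changes this root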
RevDate: 2023-03-11
A Scalable Device for Undisturbed Measurement of Water and CO2 Fluxes through Natural Surfaces.
Sensors (Basel, Switzerland), 23(5): pii:s23052647.
In a climate change scenario and under a growing interest in Precision Agriculture, it is more and more important to map and record seasonal trends of the respiration of cropland and natural surfaces. Ground-level sensors to be placed in the field or integrated into autonomous vehicles are of growing interest. In this scope, a low-power IoT-compliant device for measurement of multiple surface CO2 and WV concentrations has been designed and developed. The device is described and tested under controlled and field conditions, showing ready and easy access to collected values typical of a cloud-computing-based approach. The device proved to be usable in indoor and open-air environments for a long time, and the sensors were arranged in multiple configurations to evaluate simultaneous concentrations and flows, while the low-cost, low-power (LP IoT-compliant) design is achieved by a specific design of the printed circuit board and a firmware code fitting the characteristics of the controller.
Additional Links: PMID-36904852
Publisher:
PubMed:
Citation:
@article {pmid36904852,
year = {2023},
author = {Vitali, G and Arru, M and Magnanini, E},
title = {A Scalable Device for Undisturbed Measurement of Water and CO2 Fluxes through Natural Surfaces.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
doi = {10.3390/s23052647},
pmid = {36904852},
issn = {1424-8220},
abstract = {In a climate change scenario and under a growing interest in Precision Agriculture, it is more and more important to map and record seasonal trends of the respiration of cropland and natural surfaces. Ground-level sensors to be placed in the field or integrated into autonomous vehicles are of growing interest. In this scope, a low-power IoT-compliant device for measurement of multiple surface CO2 and WV concentrations has been designed and developed. The device is described and tested under controlled and field conditions, showing ready and easy access to collected values typical of a cloud-computing-based approach. The device proved to be usable in indoor and open-air environments for a long time, and the sensors were arranged in multiple configurations to evaluate simultaneous concentrations and flows, while the low-cost, low-power (LP IoT-compliant) design is achieved by a specific design of the printed circuit board and a firmware code fitting the characteristics of the controller.},
}
RevDate: 2023-03-11
Neural Network Models for Driving Control of Indoor Autonomous Vehicles in Mobile Edge Computing.
Sensors (Basel, Switzerland), 23(5): pii:s23052575.
Mobile edge computing has been proposed as a solution for solving the latency problem of traditional cloud computing. In particular, mobile edge computing is needed in areas such as autonomous driving, which requires large amounts of data to be processed without latency for safety. Indoor autonomous driving is attracting attention as one of the mobile edge computing services. Furthermore, it relies on its sensors for location recognition because indoor autonomous driving cannot use a GPS device, as is the case with outdoor driving. However, while the autonomous vehicle is being driven, the real-time processing of external events and the correction of errors are required for safety. Furthermore, an efficient autonomous driving system is required because it is a mobile environment with resource constraints. This study proposes neural network models as a machine-learning method for autonomous driving in an indoor environment. The neural network model predicts the most appropriate driving command for the current location based on the range data measured with the LiDAR sensor. We designed six neural network models to be evaluated according to the number of input data points. In addition, we made an autonomous vehicle based on the Raspberry Pi for driving and learning and an indoor circular driving track for collecting data and performance evaluation. Finally, we evaluated six neural network models in terms of confusion matrix, response time, battery consumption, and driving command accuracy. In addition, when neural network learning was applied, the effect of the number of inputs was confirmed in the usage of resources. The result will influence the choice of an appropriate neural network model for an indoor autonomous vehicle.
Additional Links: PMID-36904779
Publisher:
PubMed:
Citation:
@article {pmid36904779,
year = {2023},
author = {Kwon, Y and Kim, W and Jung, I},
title = {Neural Network Models for Driving Control of Indoor Autonomous Vehicles in Mobile Edge Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
doi = {10.3390/s23052575},
pmid = {36904779},
issn = {1424-8220},
abstract = {Mobile edge computing has been proposed as a solution for solving the latency problem of traditional cloud computing. In particular, mobile edge computing is needed in areas such as autonomous driving, which requires large amounts of data to be processed without latency for safety. Indoor autonomous driving is attracting attention as one of the mobile edge computing services. Furthermore, it relies on its sensors for location recognition because indoor autonomous driving cannot use a GPS device, as is the case with outdoor driving. However, while the autonomous vehicle is being driven, the real-time processing of external events and the correction of errors are required for safety. Furthermore, an efficient autonomous driving system is required because it is a mobile environment with resource constraints. This study proposes neural network models as a machine-learning method for autonomous driving in an indoor environment. The neural network model predicts the most appropriate driving command for the current location based on the range data measured with the LiDAR sensor. We designed six neural network models to be evaluated according to the number of input data points. In addition, we made an autonomous vehicle based on the Raspberry Pi for driving and learning and an indoor circular driving track for collecting data and performance evaluation. Finally, we evaluated six neural network models in terms of confusion matrix, response time, battery consumption, and driving command accuracy. In addition, when neural network learning was applied, the effect of the number of inputs was confirmed in the usage of resources. The result will influence the choice of an appropriate neural network model for an indoor autonomous vehicle.},
}
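The models above map LiDAR range readings to a discrete driving command. A minimal PyTorch classifier of that shape; the number of range inputs, commands, and layer sizes are illustrative assumptions (the study itself compares six input configurations).

    # Tiny classifier mapping LiDAR range readings to a driving command
    # (e.g. left / straight / right). Input width and layer sizes are
    # illustrative assumptions, not the study's six evaluated models.
    import torch
    import torch.nn as nn

    N_RANGES, N_COMMANDS = 36, 3          # assumed 10-degree LiDAR bins, 3 commands

    model = nn.Sequential(
        nn.Linear(N_RANGES, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, N_COMMANDS))        # logits over driving commands

    def train_step(batch_ranges, batch_commands, optimizer):
        optimizer.zero_grad()
        logits = model(batch_ranges)
        loss = nn.functional.cross_entropy(logits, batch_commands)
        loss.backward()
        optimizer.step()
        return loss.item()

    # opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # x = torch.rand(128, N_RANGES) * 4.0            # synthetic ranges in metres
    # y = torch.randint(0, N_COMMANDS, (128,))       # synthetic command labels
    # loss = train_step(x, y, opt)
    # command = model(x[:1]).argmax(dim=1)           # predicted command index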
RevDate: 2023-03-11
EEOA: Cost and Energy Efficient Task Scheduling in a Cloud-Fog Framework.
Sensors (Basel, Switzerland), 23(5): pii:s23052445.
Cloud-fog computing is a wide range of service environments created to provide quick, flexible services to customers, and the phenomenal growth of the Internet of Things (IoT) has produced an immense amount of data on a daily basis. To complete tasks and meet service-level agreement (SLA) commitments, the provider assigns appropriate resources and employs scheduling techniques to efficiently manage the execution of received IoT tasks in fog or cloud systems. The effectiveness of cloud services is directly impacted by some other important criteria, such as energy usage and cost, which are not taken into account by many of the existing methodologies. To resolve the aforementioned problems, an effective scheduling algorithm is required to schedule the heterogeneous workload and enhance the quality of service (QoS). Therefore, a nature-inspired multi-objective task scheduling algorithm called the electric earthworm optimization algorithm (EEOA) is proposed in this paper for IoT requests in a cloud-fog framework. This method was created using the combination of the earthworm optimization algorithm (EOA) and the electric fish optimization algorithm (EFO) to improve EFO's potential to be exploited while looking for the best solution to the problem at hand. Concerning execution time, cost, makespan, and energy consumption, the suggested scheduling technique's performance was assessed using significant instances of real-world workloads such as CEA-CURIE and HPC2N. Based on simulation results, our proposed approach improves efficiency by 89%, energy consumption by 94%, and total cost by 87% over existing algorithms for the scenarios considered using different benchmarks. Detailed simulations demonstrate that the suggested approach provides a superior scheduling scheme with better results than the existing scheduling techniques.
Additional Links: PMID-36904650
Publisher:
PubMed:
Citation:
@article {pmid36904650,
year = {2023},
author = {Kumar, MS and Karri, GR},
title = {EEOA: Cost and Energy Efficient Task Scheduling in a Cloud-Fog Framework.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
doi = {10.3390/s23052445},
pmid = {36904650},
issn = {1424-8220},
abstract = {Cloud-fog computing is a wide range of service environments created to provide quick, flexible services to customers, and the phenomenal growth of the Internet of Things (IoT) has produced an immense amount of data on a daily basis. To complete tasks and meet service-level agreement (SLA) commitments, the provider assigns appropriate resources and employs scheduling techniques to efficiently manage the execution of received IoT tasks in fog or cloud systems. The effectiveness of cloud services is directly impacted by some other important criteria, such as energy usage and cost, which are not taken into account by many of the existing methodologies. To resolve the aforementioned problems, an effective scheduling algorithm is required to schedule the heterogeneous workload and enhance the quality of service (QoS). Therefore, a nature-inspired multi-objective task scheduling algorithm called the electric earthworm optimization algorithm (EEOA) is proposed in this paper for IoT requests in a cloud-fog framework. This method was created using the combination of the earthworm optimization algorithm (EOA) and the electric fish optimization algorithm (EFO) to improve EFO's potential to be exploited while looking for the best solution to the problem at hand. Concerning execution time, cost, makespan, and energy consumption, the suggested scheduling technique's performance was assessed using significant instances of real-world workloads such as CEA-CURIE and HPC2N. Based on simulation results, our proposed approach improves efficiency by 89%, energy consumption by 94%, and total cost by 87% over existing algorithms for the scenarios considered using different benchmarks. Detailed simulations demonstrate that the suggested approach provides a superior scheduling scheme with better results than the existing scheduling techniques.},
}
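EEOA above searches task-to-node assignments in a cloud-fog setting against time, cost, and energy objectives. The sketch below is a generic stand-in, a random-perturbation search over a weighted-sum objective, not the EOA/EFO hybrid; the weights and the linear cost/energy model are assumptions.

    # Generic stand-in for a metaheuristic task scheduler: perturb a random
    # task-to-node assignment and keep improvements under a weighted sum of
    # makespan, cost and energy. This is NOT the paper's EOA/EFO hybrid.
    import numpy as np

    def weighted_objective(assign, t_len, n_speed, n_price, n_power,
                           w=(0.4, 0.3, 0.3)):
        exec_t = t_len / n_speed[assign]
        finish = np.zeros(len(n_speed))
        np.add.at(finish, assign, exec_t)               # serial load per node
        makespan = finish.max()
        cost = np.sum(exec_t * n_price[assign])
        energy = np.sum(exec_t * n_power[assign])
        return w[0] * makespan + w[1] * cost + w[2] * energy

    def random_perturbation_search(t_len, n_speed, n_price, n_power,
                                   iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        n_tasks, n_nodes = len(t_len), len(n_speed)
        best = rng.integers(0, n_nodes, n_tasks)
        best_val = weighted_objective(best, t_len, n_speed, n_price, n_power)
        for _ in range(iters):
            cand = best.copy()
            cand[rng.integers(n_tasks)] = rng.integers(n_nodes)   # move one task
            val = weighted_objective(cand, t_len, n_speed, n_price, n_power)
            if val < best_val:
                best, best_val = cand, val
        return best, best_val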
RevDate: 2023-03-11
Prioritization Based Task Offloading in UAV-Assisted Edge Networks.
Sensors (Basel, Switzerland), 23(5): pii:s23052375.
Under demanding operational conditions such as traffic surges, coverage issues, and low latency requirements, terrestrial networks may become inadequate to provide the expected service levels to users and applications. Moreover, when natural disasters or physical calamities occur, the existing network infrastructure may collapse, leading to formidable challenges for emergency communications in the area served. In order to provide wireless connectivity as well as facilitate a capacity boost under transient high service load situations, a substitute or auxiliary fast-deployable network is needed. Unmanned Aerial Vehicle (UAV) networks are well suited for such needs thanks to their high mobility and flexibility. In this work, we consider an edge network consisting of UAVs equipped with wireless access points. These software-defined network nodes serve a latency-sensitive workload of mobile users in an edge-to-cloud continuum setting. We investigate prioritization-based task offloading to support prioritized services in this on-demand aerial network. To serve this end, we construct an offloading management optimization model to minimize the overall penalty due to priority-weighted delay against task deadlines. Since the defined assignment problem is NP-hard, we also propose three heuristic algorithms as well as a branch and bound style quasi-optimal task offloading algorithm and investigate how the system performs under different operating conditions by conducting simulation-based experiments. Moreover, we made an open-source contribution to Mininet-WiFi to have independent Wi-Fi mediums, which were compulsory for simultaneous packet transfers on different Wi-Fi mediums.
Additional Links: PMID-36904580
@article {pmid36904580,
year = {2023},
author = {Kalinagac, O and Gür, G and Alagöz, F},
title = {Prioritization Based Task Offloading in UAV-Assisted Edge Networks.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {5},
pages = {},
doi = {10.3390/s23052375},
pmid = {36904580},
issn = {1424-8220},
abstract = {Under demanding operational conditions such as traffic surges, coverage issues, and low latency requirements, terrestrial networks may become inadequate to provide the expected service levels to users and applications. Moreover, when natural disasters or physical calamities occur, the existing network infrastructure may collapse, leading to formidable challenges for emergency communications in the area served. In order to provide wireless connectivity as well as facilitate a capacity boost under transient high service load situations, a substitute or auxiliary fast-deployable network is needed. Unmanned Aerial Vehicle (UAV) networks are well suited for such needs thanks to their high mobility and flexibility. In this work, we consider an edge network consisting of UAVs equipped with wireless access points. These software-defined network nodes serve a latency-sensitive workload of mobile users in an edge-to-cloud continuum setting. We investigate prioritization-based task offloading to support prioritized services in this on-demand aerial network. To serve this end, we construct an offloading management optimization model to minimize the overall penalty due to priority-weighted delay against task deadlines. Since the defined assignment problem is NP-hard, we also propose three heuristic algorithms as well as a branch and bound style quasi-optimal task offloading algorithm and investigate how the system performs under different operating conditions by conducting simulation-based experiments. Moreover, we made an open-source contribution to Mininet-WiFi to have independent Wi-Fi mediums, which were compulsory for simultaneous packet transfers on different Wi-Fi mediums.},
}
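The objective described in the article above is a penalty built from priority-weighted delay measured against task deadlines. The short Python sketch below illustrates that idea with a naive greedy rule (highest priority first, each task placed on the UAV node that minimizes its weighted lateness); the data structures and numbers are assumptions, and this is neither one of the paper's three heuristics nor its branch-and-bound algorithm.

    # Illustrative greedy offloading for priority-weighted deadline penalties.
    # Hypothetical tasks and UAV nodes; not the paper's algorithms.
    tasks = [            # (workload in Mcycles, deadline in s, priority weight)
        (300, 0.5, 3.0),
        (120, 0.2, 5.0),
        (500, 1.0, 1.0),
    ]
    uav_nodes = [        # available CPU in Mcycles/s per UAV-mounted access point
        {"cpu": 800, "busy_until": 0.0},
        {"cpu": 400, "busy_until": 0.0},
    ]

    def penalty(lateness, priority):
        return priority * max(0.0, lateness)

    total = 0.0
    for workload, deadline, priority in sorted(tasks, key=lambda t: -t[2]):
        best = None
        for node in uav_nodes:
            finish = node["busy_until"] + workload / node["cpu"]
            p = penalty(finish - deadline, priority)
            if best is None or p < best[0]:
                best = (p, finish, node)
        p, finish, node = best
        node["busy_until"] = finish
        total += p

    print("total priority-weighted deadline penalty:", round(total, 3))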
RevDate: 2023-03-11
Prediction of the Topography of the Corticospinal Tract on T1-Weighted MR Images Using Deep-Learning-Based Segmentation.
Diagnostics (Basel, Switzerland), 13(5): pii:diagnostics13050911.
INTRODUCTION: Tractography is an invaluable tool in the planning of tumor surgery in the vicinity of functionally eloquent areas of the brain as well as in the research of normal development or of various diseases. The aim of our study was to compare the performance of a deep-learning-based image segmentation for the prediction of the topography of white matter tracts on T1-weighted MR images to the performance of a manual segmentation.
METHODS: T1-weighted MR images of 190 healthy subjects from 6 different datasets were utilized in this study. Using deterministic diffusion tensor imaging, we first reconstructed the corticospinal tract on both sides. After training a segmentation model on 90 subjects of the PIOP2 dataset using the nnU-Net in a cloud-based environment with graphical processing unit (Google Colab), we evaluated its performance using 100 subjects from 6 different datasets.
RESULTS: Our algorithm created a segmentation model that predicted the topography of the corticospinal pathway on T1-weighted images in healthy subjects. The average Dice score was 0.5479 (0.3513-0.7184) on the validation dataset.
CONCLUSIONS: Deep-learning-based segmentation could be applicable in the future to predict the location of white matter pathways in T1-weighted scans.
Additional Links: PMID-36900055
@article {pmid36900055,
year = {2023},
author = {Barany, L and Hore, N and Stadlbauer, A and Buchfelder, M and Brandner, S},
title = {Prediction of the Topography of the Corticospinal Tract on T1-Weighted MR Images Using Deep-Learning-Based Segmentation.},
journal = {Diagnostics (Basel, Switzerland)},
volume = {13},
number = {5},
pages = {},
doi = {10.3390/diagnostics13050911},
pmid = {36900055},
issn = {2075-4418},
abstract = {INTRODUCTION: Tractography is an invaluable tool in the planning of tumor surgery in the vicinity of functionally eloquent areas of the brain as well as in the research of normal development or of various diseases. The aim of our study was to compare the performance of a deep-learning-based image segmentation for the prediction of the topography of white matter tracts on T1-weighted MR images to the performance of a manual segmentation.
METHODS: T1-weighted MR images of 190 healthy subjects from 6 different datasets were utilized in this study. Using deterministic diffusion tensor imaging, we first reconstructed the corticospinal tract on both sides. After training a segmentation model on 90 subjects of the PIOP2 dataset using the nnU-Net in a cloud-based environment with graphical processing unit (Google Colab), we evaluated its performance using 100 subjects from 6 different datasets.
RESULTS: Our algorithm created a segmentation model that predicted the topography of the corticospinal pathway on T1-weighted images in healthy subjects. The average dice score was 0.5479 (0.3513-0.7184) on the validation dataset.
CONCLUSIONS: Deep-learning-based segmentation could be applicable in the future to predict the location of white matter pathways in T1-weighted scans.},
}
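The Dice score reported in the study above has a simple closed form, 2|A∩B| / (|A| + |B|), and is the standard overlap measure for comparing a predicted mask with a reference segmentation. The NumPy sketch below shows the computation on toy binary volumes; the random masks are placeholders, not tractography data.

    import numpy as np

    def dice_score(pred, truth, eps=1e-8):
        """Dice similarity coefficient for two binary masks: 2|A∩B| / (|A| + |B|)."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

    # Toy 3D volumes standing in for a predicted and a reference tract mask.
    rng = np.random.default_rng(0)
    pred = rng.random((64, 64, 64)) > 0.5
    truth = rng.random((64, 64, 64)) > 0.5
    print(round(dice_score(pred, truth), 4))   # roughly 0.5 for independent random masks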
RevDate: 2023-03-11
An adaptive offloading framework for license plate detection in collaborative edge and cloud computing.
Mathematical biosciences and engineering : MBE, 20(2):2793-2814.
With the explosive growth of edge computing, huge amounts of data are being generated by billions of edge devices. It is difficult to balance detection efficiency and detection accuracy at the same time for object detection on multiple edge devices, and few studies investigate and improve the collaboration between cloud computing and edge computing under realistic challenges such as limited computation capacity, network congestion and long latency. To tackle these challenges, we propose a new multi-model license plate detection hybrid methodology that trades off efficiency and accuracy to process license plate detection tasks at the edge nodes and the cloud server. We also design a new probability-based offloading initialization algorithm that not only obtains reasonable initial solutions but also improves the accuracy of license plate detection. In addition, we introduce an adaptive offloading framework based on a gravitational genetic searching algorithm (GGSA), which comprehensively considers influential factors such as license plate detection time, queuing time, energy consumption, image quality, and accuracy. GGSA is helpful for Quality-of-Service (QoS) enhancement. Extensive experiments show that the proposed GGSA offloading framework performs well in collaborative edge and cloud computing for license plate detection compared with other methods. They also demonstrate that, compared with the traditional approach in which all tasks are executed on the cloud server (AC), the offloading effect of GGSA is improved by 50.31%. In addition, the offloading framework shows strong portability when making real-time offloading decisions.
Additional Links: PMID-36899558
@article {pmid36899558,
year = {2023},
author = {Zhang, H and Wang, P and Zhang, S and Wu, Z},
title = {An adaptive offloading framework for license plate detection in collaborative edge and cloud computing.},
journal = {Mathematical biosciences and engineering : MBE},
volume = {20},
number = {2},
pages = {2793-2814},
doi = {10.3934/mbe.2023131},
pmid = {36899558},
issn = {1551-0018},
abstract = {With the explosive growth of edge computing, huge amounts of data are being generated in billions of edge devices. It is really difficult to balance detection efficiency and detection accuracy at the same time for object detection on multiple edge devices. However, there are few studies to investigate and improve the collaboration between cloud computing and edge computing considering realistic challenges, such as limited computation capacities, network congestion and long latency. To tackle these challenges, we propose a new multi-model license plate detection hybrid methodology with the tradeoff between efficiency and accuracy to process the tasks of license plate detection at the edge nodes and the cloud server. We also design a new probability-based offloading initialization algorithm that not only obtains reasonable initial solutions but also facilitates the accuracy of license plate detection. In addition, we introduce an adaptive offloading framework by gravitational genetic searching algorithm (GGSA), which can comprehensively consider influential factors such as license plate detection time, queuing time, energy consumption, image quality, and accuracy. GGSA is helpful for Quality-of-Service (QoS) enhancement. Extensive experiments show that our proposed GGSA offloading framework exhibits good performance in collaborative edge and cloud computing of license plate detection compared with other methods. It demonstrate that when compared with traditional all tasks are executed on the cloud server (AC), the offloading effect of GGSA can be improved by 50.31%. Besides, the offloading framework has strong portability when making real-time offloading decisions.},
}
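The adaptive offloading entry above mentions a probability-based initialization of offloading decisions. The sketch below is a loose illustration of that general idea only (the concrete GGSA initialization is not given in this listing): each task is offloaded to the cloud with a probability that grows with its estimated saving over local edge execution, which yields a reasonable, diverse starting population for a search algorithm.

    import random

    # Illustrative probability-based initialization of offloading decisions
    # (1 = run on the cloud server, 0 = run on the edge node). Hypothetical numbers.
    tasks = [    # (edge_time_s, cloud_time_s, transfer_time_s)
        (0.80, 0.15, 0.30),
        (0.20, 0.05, 0.40),
        (1.50, 0.30, 0.25),
    ]

    def offload_probability(edge_t, cloud_t, transfer_t):
        benefit = edge_t - (cloud_t + transfer_t)      # expected saving if offloaded
        return min(0.95, max(0.05, 0.5 + benefit))     # squash into [0.05, 0.95]

    def random_solution():
        return [1 if random.random() < offload_probability(*t) else 0 for t in tasks]

    population = [random_solution() for _ in range(10)]
    print(population[0])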
RevDate: 2023-03-09
CmpDate: 2023-03-08
Ten lessons for data sharing with a data commons.
Scientific data, 10(1):120.
A data commons is a cloud-based data platform with a governance structure that allows a community to manage, analyze and share its data. Data commons provide a research community with the ability to manage and analyze large datasets using the elastic scalability provided by cloud computing and to share data securely and compliantly, and, in this way, accelerate the pace of research. Over the past decade, a number of data commons have been developed and we discuss some of the lessons learned from this effort.
Additional Links: PMID-36878917
@article {pmid36878917,
year = {2023},
author = {Grossman, RL},
title = {Ten lessons for data sharing with a data commons.},
journal = {Scientific data},
volume = {10},
number = {1},
pages = {120},
pmid = {36878917},
issn = {2052-4463},
abstract = {A data commons is a cloud-based data platform with a governance structure that allows a community to manage, analyze and share its data. Data commons provide a research community with the ability to manage and analyze large datasets using the elastic scalability provided by cloud computing and to share data securely and compliantly, and, in this way, accelerate the pace of research. Over the past decade, a number of data commons have been developed and we discuss some of the lessons learned from this effort.},
}
RevDate: 2023-03-03
Cloud-Based Advanced Shuffled Frog Leaping Algorithm for Tasks Scheduling.
Big data [Epub ahead of print].
In recent years, the world has seen steady growth in online activities, and the volume of data held in cloud servers has been increasing exponentially as a result. Various cloud-based systems have been developed to enhance the user experience, but the increased online activity around the globe has also increased the data load on these systems. To maintain the efficiency and performance of applications hosted in cloud servers, task scheduling has become very important: it reduces makespan time and average cost by assigning incoming tasks to virtual machines (VMs) according to a scheduling algorithm, and many researchers have proposed different scheduling algorithms for the cloud computing environment. In this article, an advanced form of the shuffled frog leaping algorithm, which is modeled on the behavior of frogs searching for food, is proposed. The authors introduce a new algorithm to shuffle the position of frogs in the memeplex to obtain the best result. Using this optimization technique, the cost function of the central processing unit, the makespan, and the fitness function were calculated, where the fitness function is the sum of the budget cost function and the makespan time. The proposed method helps in reducing the makespan time as well as the average cost by scheduling the tasks to VMs effectively. Finally, the performance of the proposed advanced shuffled frog leaping method is compared with existing task scheduling methods such as the whale optimization-based scheduler (W-Scheduler), sliced particle swarm optimization (SPSO-SA), the inverted ant colony optimization algorithm, and static learning particle swarm optimization (SLPSO-SA) in terms of average cost and makespan. Experimentally, it was concluded that the proposed advanced frog leaping algorithm can schedule tasks to the VMs more effectively than the other scheduling methods, with a makespan of 6, an average cost of 4, and a fitness of 10.
Additional Links: PMID-36867158
@article {pmid36867158,
year = {2023},
author = {Kumar, D and Mandal, N and Kumar, Y},
title = {Cloud-Based Advanced Shuffled Frog Leaping Algorithm for Tasks Scheduling.},
journal = {Big data},
volume = {},
number = {},
pages = {},
doi = {10.1089/big.2022.0095},
pmid = {36867158},
issn = {2167-647X},
abstract = {In recent years, the world has seen incremental growth in online activities owing to which the volume of data in cloud servers has also been increasing exponentially. With rapidly increasing data, load on cloud servers has increased in the cloud computing environment. With rapidly evolving technology, various cloud-based systems were developed to enhance the user experience. But, the increased online activities around the globe have also increased data load on the cloud-based systems. To maintain the efficiency and performance of the applications hosted in cloud servers, task scheduling has become very important. The task scheduling process helps in reducing the makespan time and average cost by scheduling the tasks to virtual machines (VMs). The task scheduling depends on assigning tasks to VMs to process the incoming tasks. The task scheduling should follow some algorithm for assigning tasks to VMs. Many researchers have proposed different scheduling algorithms for task scheduling in the cloud computing environment. In this article, an advanced form of the shuffled frog optimization algorithm, which works on the nature and behavior of frogs searching for food, has been proposed. The authors have introduced a new algorithm to shuffle the position of frogs in memeplex to obtain the best result. By using this optimization technique, the cost function of the central processing unit, makespan, and fitness function were calculated. The fitness function is the sum of the budget cost function and the makespan time. The proposed method helps in reducing the makespan time as well as the average cost by scheduling the tasks to VMs effectively. Finally, the performance of the proposed advanced shuffled frog optimization method is compared with existing task scheduling methods such as whale optimization-based scheduler (W-Scheduler), sliced particle swarm optimization (SPSO-SA), inverted ant colony optimization algorithm, and static learning particle swarm optimization (SLPSO-SA) in terms of average cost and metric makespan. Experimentally, it was concluded that the proposed advanced frog optimization algorithm can schedule tasks to the VMs more effectively as compared with other scheduling methods with a makespan of 6, average cost of 4, and fitness of 10.},
}
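Shuffled frog leaping algorithms in general sort the population by fitness, deal the frogs into memeplexes, and move the worst frog of each memeplex toward the memeplex best, falling back to the global best or a random restart when that fails. The Python sketch below shows that generic skeleton on a continuous encoding of task-to-VM assignments; the parameters and the simple makespan fitness are assumptions, and this is not the authors' advanced variant.

    import random

    NUM_TASKS, NUM_VMS = 8, 3
    POP, MEMEPLEXES, ITERS = 12, 3, 50

    def fitness(pos):
        # Decode a continuous position into VM indices; use makespan as fitness.
        loads = [0.0] * NUM_VMS
        for x in pos:
            loads[int(x) % NUM_VMS] += 1.0
        return max(loads)

    def new_frog():
        return [random.uniform(0, NUM_VMS) for _ in range(NUM_TASKS)]

    frogs = [new_frog() for _ in range(POP)]
    for _ in range(ITERS):
        frogs.sort(key=fitness)                         # best frogs first
        global_best = frogs[0]
        for m in range(MEMEPLEXES):                     # frog i joins memeplex i % MEMEPLEXES
            plex = frogs[m::MEMEPLEXES]
            best, worst = plex[0], plex[-1]
            step = [random.random() * (b - w) for b, w in zip(best, worst)]
            candidate = [w + s for w, s in zip(worst, step)]
            if fitness(candidate) >= fitness(worst):    # no gain: jump toward global best
                step = [random.random() * (g - w) for g, w in zip(global_best, worst)]
                candidate = [w + s for w, s in zip(worst, step)]
            if fitness(candidate) >= fitness(worst):    # still no gain: random restart
                candidate = new_frog()
            worst[:] = candidate                        # shuffle the updated frog back in

    print("best makespan:", fitness(min(frogs, key=fitness)))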
RevDate: 2023-03-03
CmpDate: 2023-03-03
Load-Balancing Strategy: Employing a Capsule Algorithm for Cutting Down Energy Consumption in Cloud Data Centers for Next Generation Wireless Systems.
Computational intelligence and neuroscience, 2023:6090282.
Per-user pricing is possible with cloud computing, a relatively new technology. It provides remote testing and commissioning services through the web, and it utilizes virtualization to make computing resources available. To host and store firm data, cloud computing relies on data centers, which are made up of networked computers, cables, power supplies, and other components. Cloud data centers have always had to prioritise high performance over energy efficiency. The biggest obstacle is finding a balance between system performance and energy consumption, namely, lowering energy use without compromising system performance or service quality. To implement the strategy we recommend, it is crucial to get a complete picture of how energy is being consumed in the cloud. Using proper optimization criteria and guided by energy consumption models, this article offers the Capsule Significance Level of Energy Consumption (CSLEC) pattern, which demonstrates how to conserve more energy in cloud data centers. The results were obtained using the PlanetLab dataset: the prediction phase of the capsule optimization achieves an F1-score of 96.7 percent and a data accuracy of 97 percent, allowing more precise projections of future values.
Additional Links: PMID-36860419
@article {pmid36860419,
year = {2023},
author = {Singh, J and Chen, J and Singh, SP and Singh, MP and Hassan, MM and Hassan, MM and Awal, H},
title = {Load-Balancing Strategy: Employing a Capsule Algorithm for Cutting Down Energy Consumption in Cloud Data Centers for Next Generation Wireless Systems.},
journal = {Computational intelligence and neuroscience},
volume = {2023},
number = {},
pages = {6090282},
pmid = {36860419},
issn = {1687-5273},
mesh = {*Algorithms ; *Cloud Computing ; Data Accuracy ; Electric Power Supplies ; Happiness ; },
abstract = {Per-user pricing is possible with cloud computing, a relatively new technology. It provides remote testing and commissioning services through the web, and it utilizes virtualization to make available computing resources. In order to host and store firm data, cloud computing relies on data centers. Data centers are made up of networked computers, cables, power supplies, and other components. Cloud data centers have always had to prioritise high performance over energy efficiency. The biggest obstacle is finding a happy medium between system performance and energy consumption, namely, lowering energy use without compromising system performance or service quality. These results were obtained using the PlanetLab dataset. In order to implement the strategy we recommend, it is crucial to get a complete picture of how energy is being consumed in the cloud. Using proper optimization criteria and guided by energy consumption models, this article offers the Capsule Significance Level of Energy Consumption (CSLEC) pattern, which demonstrates how to conserve more energy in cloud data centers. Capsule optimization's prediction phase F1-score of 96.7 percent and 97 percent data accuracy allow for more precise projections of future value.},
}
RevDate: 2023-03-01
Policy-Based Holistic Application Management with BPMN and TOSCA.
SN computer science, 4(3):232.
With the wide adoption of cloud computing across technology industries and research institutions, an ever-growing interest in cloud orchestration frameworks has emerged over the past few years. These orchestration frameworks enable the automated provisioning and decommissioning of cloud applications in a timely and efficient manner, but they offer limited or no support for application management. While management functionalities, such as configuring, monitoring and scaling single components, can be directly covered by cloud providers and configuration management tools, holistic management features, such as backing up, testing and updating multiple components, cannot be automated using these approaches. In this paper, we propose a concept to automatically generate executable holistic management workflows based on the TOSCA standard. The practical feasibility of the approach is validated through a prototype implementation and a case study.
Additional Links: PMID-36855338
@article {pmid36855338,
year = {2023},
author = {Calcaterra, D and Tomarchio, O},
title = {Policy-Based Holistic Application Management with BPMN and TOSCA.},
journal = {SN computer science},
volume = {4},
number = {3},
pages = {232},
pmid = {36855338},
issn = {2661-8907},
abstract = {With the wide adoption of cloud computing across technology industries and research institutions, an ever-growing interest in cloud orchestration frameworks has emerged over the past few years. These orchestration frameworks enable the automated provisioning and decommissioning of cloud applications in a timely and efficient manner, but they offer limited or no support for application management. While management functionalities, such as configuring, monitoring and scaling single components, can be directly covered by cloud providers and configuration management tools, holistic management features, such as backing up, testing and updating multiple components, cannot be automated using these approaches. In this paper, we propose a concept to automatically generate executable holistic management workflows based on the TOSCA standard. The practical feasibility of the approach is validated through a prototype implementation and a case study.},
}
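Generating a holistic management workflow largely comes down to ordering per-component operations so that the application topology's dependencies are respected. As a rough illustration (the component graph and operation name are invented, and this is not the paper's TOSCA/BPMN generator), a topological sort already yields a valid execution order for a holistic operation such as a backup.

    from graphlib import TopologicalSorter   # standard library, Python 3.9+

    # Hypothetical application topology: component -> components it depends on.
    topology = {
        "database": [],
        "backend": ["database"],
        "frontend": ["backend"],
        "monitoring": ["backend", "database"],
    }

    def management_workflow(operation, topology):
        """Order a holistic operation (e.g. 'backup') so that dependencies run first."""
        order = TopologicalSorter(topology).static_order()
        return [f"{operation}:{component}" for component in order]

    print(management_workflow("backup", topology))
    # 'backup:database' comes first, then 'backup:backend', then the remaining components.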
RevDate: 2023-03-01
Framing Apache Spark in life sciences.
Heliyon, 9(2):e13368.
Advances in high-throughput and digital technologies have required the adoption of big data for handling complex tasks in life sciences. However, the shift to big data has confronted researchers with technical and infrastructural challenges in storing, sharing, and analysing these data. Such tasks require distributed computing systems and algorithms able to ensure efficient processing. Cutting-edge distributed programming frameworks make it possible to implement flexible algorithms that adapt the computation to the data on on-premise HPC clusters or cloud architectures. In this context, Apache Spark is a very powerful HPC engine for large-scale data processing on clusters. Thanks also to specialised libraries for working with structured and relational data, it supports machine learning, graph-based computation, and stream processing. This review article aims to help life sciences researchers ascertain the features of Apache Spark and assess whether it can be successfully used in their research activities.
Additional Links: PMID-36852030
@article {pmid36852030,
year = {2023},
author = {Manconi, A and Gnocchi, M and Milanesi, L and Marullo, O and Armano, G},
title = {Framing Apache Spark in life sciences.},
journal = {Heliyon},
volume = {9},
number = {2},
pages = {e13368},
pmid = {36852030},
issn = {2405-8440},
abstract = {Advances in high-throughput and digital technologies have required the adoption of big data for handling complex tasks in life sciences. However, the drift to big data led researchers to face technical and infrastructural challenges for storing, sharing, and analysing them. In fact, this kind of tasks requires distributed computing systems and algorithms able to ensure efficient processing. Cutting edge distributed programming frameworks allow to implement flexible algorithms able to adapt the computation to the data over on-premise HPC clusters or cloud architectures. In this context, Apache Spark is a very powerful HPC engine for large-scale data processing on clusters. Also thanks to specialised libraries for working with structured and relational data, it allows to support machine learning, graph-based computation, and stream processing. This review article is aimed at helping life sciences researchers to ascertain the features of Apache Spark and to assess whether it can be successfully used in their research activities.},
}
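For readers who want a first feel for the framework reviewed above, the minimal PySpark example below loads a tabular dataset and computes a grouped aggregate; the file name and column names are assumptions made for illustration. The same code runs unchanged on a laptop, an on-premise HPC cluster, or a cloud deployment, which is precisely the portability the review highlights.

    from pyspark.sql import SparkSession, functions as F

    # Start (or reuse) a Spark session; locally this uses all available cores.
    spark = SparkSession.builder.appName("life-sciences-demo").getOrCreate()

    # Hypothetical gene-expression table with columns: sample_id, gene, expression.
    df = spark.read.csv("expression.csv", header=True, inferSchema=True)

    # Mean expression per gene, computed in parallel across the cluster.
    summary = (df.groupBy("gene")
                 .agg(F.avg("expression").alias("mean_expression"))
                 .orderBy(F.desc("mean_expression")))

    summary.show(10)
    spark.stop()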
RevDate: 2023-02-28
CmpDate: 2023-02-28
An Adaptable and Unsupervised TinyML Anomaly Detection System for Extreme Industrial Environments.
Sensors (Basel, Switzerland), 23(4):.
Industrial assets often feature multiple sensing devices to keep track of their status by monitoring certain physical parameters. These readings can be analyzed with machine learning (ML) tools to identify potential failures through anomaly detection, allowing operators to take appropriate corrective actions. Typically, these analyses are conducted on servers located in data centers or the cloud. However, this approach increases system complexity and is susceptible to failure in cases where connectivity is unavailable. Furthermore, this communication restriction limits the approach's applicability in extreme industrial environments where operating conditions affect communication and access to the system. This paper proposes and evaluates an end-to-end adaptable and configurable anomaly detection system that uses the Internet of Things (IoT), edge computing, and Tiny-MLOps methodologies in an extreme industrial environment such as submersible pumps. The system runs on an IoT sensing Kit, based on an ESP32 microcontroller and MicroPython firmware, located near the data source. The processing pipeline on the sensing device collects data, trains an anomaly detection model, and alerts an external gateway in the event of an anomaly. The anomaly detection model uses the isolation forest algorithm, which can be trained on the microcontroller in just 1.2 to 6.4 s and detect an anomaly in less than 16 milliseconds with an ensemble of 50 trees and 80 KB of RAM. Additionally, the system employs blockchain technology to provide a transparent and irrefutable repository of anomalies.
Additional Links: PMID-36850940
@article {pmid36850940,
year = {2023},
author = {Antonini, M and Pincheira, M and Vecchio, M and Antonelli, F},
title = {An Adaptable and Unsupervised TinyML Anomaly Detection System for Extreme Industrial Environments.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850940},
issn = {1424-8220},
abstract = {Industrial assets often feature multiple sensing devices to keep track of their status by monitoring certain physical parameters. These readings can be analyzed with machine learning (ML) tools to identify potential failures through anomaly detection, allowing operators to take appropriate corrective actions. Typically, these analyses are conducted on servers located in data centers or the cloud. However, this approach increases system complexity and is susceptible to failure in cases where connectivity is unavailable. Furthermore, this communication restriction limits the approach's applicability in extreme industrial environments where operating conditions affect communication and access to the system. This paper proposes and evaluates an end-to-end adaptable and configurable anomaly detection system that uses the Internet of Things (IoT), edge computing, and Tiny-MLOps methodologies in an extreme industrial environment such as submersible pumps. The system runs on an IoT sensing Kit, based on an ESP32 microcontroller and MicroPython firmware, located near the data source. The processing pipeline on the sensing device collects data, trains an anomaly detection model, and alerts an external gateway in the event of an anomaly. The anomaly detection model uses the isolation forest algorithm, which can be trained on the microcontroller in just 1.2 to 6.4 s and detect an anomaly in less than 16 milliseconds with an ensemble of 50 trees and 80 KB of RAM. Additionally, the system employs blockchain technology to provide a transparent and irrefutable repository of anomalies.},
}
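The pipeline described above trains a 50-tree isolation forest directly on the microcontroller. The scikit-learn sketch below reproduces the core idea on a workstation with synthetic sensor windows; treat it as a conceptual stand-in only, since the actual system uses an on-device MicroPython implementation on the ESP32.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic 'normal' sensor windows (e.g. mean and RMS of a vibration signal).
    normal = rng.normal(loc=[0.0, 1.0], scale=0.1, size=(500, 2))

    # Train a 50-tree isolation forest, mirroring the ensemble size cited above.
    model = IsolationForest(n_estimators=50, contamination=0.01, random_state=0)
    model.fit(normal)

    # Score new windows: predict() returns -1 for anomalies, +1 for normal samples.
    new_windows = np.vstack([rng.normal([0.0, 1.0], 0.1, (3, 2)),   # normal readings
                             np.array([[1.5, 3.0]])])               # clear outlier
    print(model.predict(new_windows))   # e.g. [ 1  1  1 -1]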
RevDate: 2023-02-28
CmpDate: 2023-02-28
An Optimized Convolutional Neural Network for the 3D Point-Cloud Compression.
Sensors (Basel, Switzerland), 23(4):.
Due to the tremendous volume taken by the 3D point-cloud models, knowing how to achieve the balance between a high compression ratio, a low distortion rate, and computing cost in point-cloud compression is a significant issue in the field of virtual reality (VR). Convolutional neural networks have been used in numerous point-cloud compression research approaches during the past few years in an effort to progress the research state. In this work, we have evaluated the effects of different network parameters, including neural network depth, stride, and activation function on point-cloud compression, resulting in an optimized convolutional neural network for compression. We first have analyzed earlier research on point-cloud compression based on convolutional neural networks before designing our own convolutional neural network. Then, we have modified our model parameters using the experimental data to further enhance the effect of point-cloud compression. Based on the experimental results, we have found that the neural network with the 4 layers and 2 strides parameter configuration using the Sigmoid activation function outperforms the default configuration by 208% in terms of the compression-distortion rate. The experimental results show that our findings are effective and universal and make a great contribution to the research of point-cloud compression using convolutional neural networks.
Additional Links: PMID-36850847
@article {pmid36850847,
year = {2023},
author = {Luo, G and He, B and Xiong, Y and Wang, L and Wang, H and Zhu, Z and Shi, X},
title = {An Optimized Convolutional Neural Network for the 3D Point-Cloud Compression.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850847},
issn = {1424-8220},
abstract = {Due to the tremendous volume taken by the 3D point-cloud models, knowing how to achieve the balance between a high compression ratio, a low distortion rate, and computing cost in point-cloud compression is a significant issue in the field of virtual reality (VR). Convolutional neural networks have been used in numerous point-cloud compression research approaches during the past few years in an effort to progress the research state. In this work, we have evaluated the effects of different network parameters, including neural network depth, stride, and activation function on point-cloud compression, resulting in an optimized convolutional neural network for compression. We first have analyzed earlier research on point-cloud compression based on convolutional neural networks before designing our own convolutional neural network. Then, we have modified our model parameters using the experimental data to further enhance the effect of point-cloud compression. Based on the experimental results, we have found that the neural network with the 4 layers and 2 strides parameter configuration using the Sigmoid activation function outperforms the default configuration by 208% in terms of the compression-distortion rate. The experimental results show that our findings are effective and universal and make a great contribution to the research of point-cloud compression using convolutional neural networks.},
}
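To make the reported configuration concrete (four convolutional layers with stride 2 and sigmoid activations feeding a symmetric decoder), the Keras sketch below assembles a voxel-grid autoencoder of that shape. The input resolution, channel widths, and loss are assumptions for illustration; the authors' exact architecture and training setup are not reproduced here.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    VOXELS = 32   # assumed voxelization of the point cloud

    encoder = models.Sequential([
        layers.Input(shape=(VOXELS, VOXELS, VOXELS, 1)),
        layers.Conv3D(8, 3, strides=2, padding="same", activation="sigmoid"),
        layers.Conv3D(16, 3, strides=2, padding="same", activation="sigmoid"),
        layers.Conv3D(32, 3, strides=2, padding="same", activation="sigmoid"),
        layers.Conv3D(64, 3, strides=2, padding="same", activation="sigmoid"),  # compressed code
    ])

    decoder = models.Sequential([
        layers.Input(shape=(VOXELS // 16, VOXELS // 16, VOXELS // 16, 64)),
        layers.Conv3DTranspose(32, 3, strides=2, padding="same", activation="sigmoid"),
        layers.Conv3DTranspose(16, 3, strides=2, padding="same", activation="sigmoid"),
        layers.Conv3DTranspose(8, 3, strides=2, padding="same", activation="sigmoid"),
        layers.Conv3DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
    ])

    autoencoder = models.Sequential([encoder, decoder])
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
    autoencoder.summary()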
RevDate: 2023-02-28
CmpDate: 2023-02-28
A Federated Learning and Deep Reinforcement Learning-Based Method with Two Types of Agents for Computation Offload.
Sensors (Basel, Switzerland), 23(4):.
With the rise of latency-sensitive and computationally intensive applications in mobile edge computing (MEC) environments, the computation offloading strategy has been widely studied to meet the low-latency demands of these applications. However, the uncertainty of various tasks and the time-varying conditions of wireless networks make it difficult for mobile devices to make efficient decisions. The existing methods also face the problems of long-delay decisions and user data privacy disclosures. In this paper, we present FDRT, a federated learning and deep reinforcement learning-based method with two types of agents for computation offload, to minimize the system latency. FDRT uses a multi-agent collaborative computation offloading strategy, namely, DRT. DRT divides the offloading decision into whether to compute tasks locally and whether to offload tasks to MEC servers. The designed DDQN agent considers the task information, its own resources, and the network status conditions of mobile devices, and the designed D3QN agent considers these conditions of all MEC servers in the collaborative cloud-side end MEC system; both jointly learn the optimal decision. FDRT also applies federated learning to reduce communication overhead and optimize the model training of DRT by designing a new parameter aggregation method, while protecting user data privacy. The simulation results showed that DRT effectively reduced the average task execution delay by up to 50% compared with several baselines and state-of-the-art offloading strategies. FDRT also accelerates the convergence rate of multi-agent training and reduces the training time of DRT by 61.7%.
Additional Links: PMID-36850846
@article {pmid36850846,
year = {2023},
author = {Liu, S and Yang, S and Zhang, H and Wu, W},
title = {A Federated Learning and Deep Reinforcement Learning-Based Method with Two Types of Agents for Computation Offload.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850846},
issn = {1424-8220},
abstract = {With the rise of latency-sensitive and computationally intensive applications in mobile edge computing (MEC) environments, the computation offloading strategy has been widely studied to meet the low-latency demands of these applications. However, the uncertainty of various tasks and the time-varying conditions of wireless networks make it difficult for mobile devices to make efficient decisions. The existing methods also face the problems of long-delay decisions and user data privacy disclosures. In this paper, we present the FDRT, a federated learning and deep reinforcement learning-based method with two types of agents for computation offload, to minimize the system latency. FDRT uses a multi-agent collaborative computation offloading strategy, namely, DRT. DRT divides the offloading decision into whether to compute tasks locally and whether to offload tasks to MEC servers. The designed DDQN agent considers the task information, its own resources, and the network status conditions of mobile devices, and the designed D3QN agent considers these conditions of all MEC servers in the collaborative cloud-side end MEC system; both jointly learn the optimal decision. FDRT also applies federated learning to reduce communication overhead and optimize the model training of DRT by designing a new parameter aggregation method, while protecting user data privacy. The simulation results showed that DRT effectively reduced the average task execution delay by up to 50% compared with several baselines and state-of-the-art offloading strategies. FRDT also accelerates the convergence rate of multi-agent training and reduces the training time of DRT by 61.7%.},
}
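A core building block behind FDRT's federated training is server-side aggregation of locally trained parameters. The NumPy sketch below shows plain federated averaging (FedAvg) weighted by client sample counts; the paper designs its own aggregation rule, so this is only the baseline idea, with made-up layer shapes and client sizes.

    import numpy as np

    def fed_avg(client_weights, client_sizes):
        """Weighted average of per-client parameter lists (one ndarray per layer)."""
        total = sum(client_sizes)
        aggregated = []
        for layer_idx in range(len(client_weights[0])):
            layer = sum(w[layer_idx] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
            aggregated.append(layer)
        return aggregated

    # Three hypothetical clients, each holding a two-layer model.
    rng = np.random.default_rng(1)
    clients = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(3)]
    sizes = [100, 300, 50]    # local dataset sizes

    global_model = fed_avg(clients, sizes)
    print(global_model[0].shape, global_model[1].shape)   # (4, 4) (4,)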
RevDate: 2023-02-28
CmpDate: 2023-02-28
Cloud-Native Workload Orchestration at the Edge: A Deployment Review and Future Directions.
Sensors (Basel, Switzerland), 23(4):.
Cloud-native computing principles such as virtualization and orchestration are key to transferring to the promising paradigm of edge computing. Challenges of containerization, operative models and scarce availability of established tools make a thorough review indispensable. Therefore, the authors have described the practical methods and tools found in the literature as well as in current community-led development projects, and have thoroughly exposed the future directions of the field. Container virtualization and its orchestration through Kubernetes have dominated the cloud computing domain, while major efforts have been recently recorded focused on the adaptation of these technologies to the edge. Such initiatives have addressed either the reduction of container engines and the development of specific tailored operating systems or the development of smaller K8s distributions and edge-focused adaptations (such as KubeEdge). Finally, new workload virtualization approaches, such as WebAssembly modules together with the joint orchestration of these heterogeneous workloads, seem to be the topics to pay attention to in the short to medium term.
Additional Links: PMID-36850813
@article {pmid36850813,
year = {2023},
author = {Vaño, R and Lacalle, I and Sowiński, P and S-Julián, R and Palau, CE},
title = {Cloud-Native Workload Orchestration at the Edge: A Deployment Review and Future Directions.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850813},
issn = {1424-8220},
abstract = {Cloud-native computing principles such as virtualization and orchestration are key to transferring to the promising paradigm of edge computing. Challenges of containerization, operative models and scarce availability of established tools make a thorough review indispensable. Therefore, the authors have described the practical methods and tools found in the literature as well as in current community-led development projects, and have thoroughly exposed the future directions of the field. Container virtualization and its orchestration through Kubernetes have dominated the cloud computing domain, while major efforts have been recently recorded focused on the adaptation of these technologies to the edge. Such initiatives have addressed either the reduction of container engines and the development of specific tailored operating systems or the development of smaller K8s distributions and edge-focused adaptations (such as KubeEdge). Finally, new workload virtualization approaches, such as WebAssembly modules together with the joint orchestration of these heterogeneous workloads, seem to be the topics to pay attention to in the short to medium term.},
}
RevDate: 2023-02-28
CmpDate: 2023-02-28
Distributed Detection of Malicious Android Apps While Preserving Privacy Using Federated Learning.
Sensors (Basel, Switzerland), 23(4):.
Recently, deep learning has been widely used to solve existing computing problems through large-scale data mining. Conventional training of the deep learning model is performed on a central (cloud) server that is equipped with high computing power, by integrating data via high computational intensity. However, integrating raw data from multiple clients raises privacy concerns that are increasingly being focused on. In federated learning (FL), clients train deep learning models in a distributed fashion using their local data; instead of sending raw data to a central server, they send parameter values of the trained local model to a central server for integration. Because FL does not transmit raw data to the outside, it is free from privacy issues. In this paper, we perform an experimental study that explores the dynamics of the FL-based Android malicious app detection method under three data distributions across clients, i.e., (i) independent and identically distributed (IID), (ii) non-IID, (iii) non-IID and unbalanced. Our experiments demonstrate that the application of FL is feasible and efficient in detecting malicious Android apps in a distributed manner on cellular networks.
Additional Links: PMID-36850794
@article {pmid36850794,
year = {2023},
author = {Lee, S},
title = {Distributed Detection of Malicious Android Apps While Preserving Privacy Using Federated Learning.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850794},
issn = {1424-8220},
abstract = {Recently, deep learning has been widely used to solve existing computing problems through large-scale data mining. Conventional training of the deep learning model is performed on a central (cloud) server that is equipped with high computing power, by integrating data via high computational intensity. However, integrating raw data from multiple clients raises privacy concerns that are increasingly being focused on. In federated learning (FL), clients train deep learning models in a distributed fashion using their local data; instead of sending raw data to a central server, they send parameter values of the trained local model to a central server for integration. Because FL does not transmit raw data to the outside, it is free from privacy issues. In this paper, we perform an experimental study that explores the dynamics of the FL-based Android malicious app detection method under three data distributions across clients, i.e., (i) independent and identically distributed (IID), (ii) non-IID, (iii) non-IID and unbalanced. Our experiments demonstrate that the application of FL is feasible and efficient in detecting malicious Android apps in a distributed manner on cellular networks.},
}
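The experiment above compares IID, non-IID, and unbalanced client data distributions. One common way such splits are simulated is label-sorted sharding, sketched below with NumPy; the shard counts, client count, and label space are illustrative assumptions, not the paper's exact protocol.

    import numpy as np

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 10, size=1000)    # toy labels for 1000 app samples
    indices = np.arange(len(labels))
    num_clients = 10

    # IID: shuffle, then split evenly across clients.
    iid_split = np.array_split(rng.permutation(indices), num_clients)

    # Non-IID: sort by label, cut into shards, and give each client two shards,
    # so every client sees only a handful of classes.
    shards = np.array_split(indices[np.argsort(labels)], num_clients * 2)
    shard_order = rng.permutation(num_clients * 2)
    non_iid_split = [np.concatenate([shards[shard_order[2 * c]],
                                     shards[shard_order[2 * c + 1]]])
                     for c in range(num_clients)]

    print(len(iid_split[0]), len(non_iid_split[0]))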
RevDate: 2023-02-28
Design of Low-Complexity Convolutional Neural Network Accelerator for Finger Vein Identification System.
Sensors (Basel, Switzerland), 23(4):.
In the biometric field, vein identification is a vital process that is constrained by the invisibility of veins as well as other unique features. Moreover, users generally do not wish to have their personal information uploaded to the cloud, so edge computing has become popular for the sake of protecting user privacy. In this paper, we propose a low-complexity and lightweight convolutional neural network (CNN) and we design intellectual property (IP) for shortening the inference time in finger vein recognition. This neural network system can operate independently in client mode. After fetching the user's finger vein image via a near-infrared (NIR) camera mounted on an embedded system, vein features can be efficiently extracted by vein curving algorithms and user identification can be completed quickly. Better image quality and higher recognition accuracy can be obtained by combining several preprocessing techniques and the modified CNN. Experimental data were collected by the finger vein image capture equipment developed in our laboratory based on the specifications of similar products currently on the market. Extensive experiments demonstrated the practicality and robustness of the proposed finger vein identification system.
Additional Links: PMID-36850785
@article {pmid36850785,
year = {2023},
author = {Chang, RC and Wang, CY and Li, YH and Chiu, CD},
title = {Design of Low-Complexity Convolutional Neural Network Accelerator for Finger Vein Identification System.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850785},
issn = {1424-8220},
abstract = {In the biometric field, vein identification is a vital process that is constrained by the invisibility of veins as well as other unique features. Moreover, users generally do not wish to have their personal information uploaded to the cloud, so edge computing has become popular for the sake of protecting user privacy. In this paper, we propose a low-complexity and lightweight convolutional neural network (CNN) and we design intellectual property (IP) for shortening the inference time in finger vein recognition. This neural network system can operate independently in client mode. After fetching the user's finger vein image via a near-infrared (NIR) camera mounted on an embedded system, vein features can be efficiently extracted by vein curving algorithms and user identification can be completed quickly. Better image quality and higher recognition accuracy can be obtained by combining several preprocessing techniques and the modified CNN. Experimental data were collected by the finger vein image capture equipment developed in our laboratory based on the specifications of similar products currently on the market. Extensive experiments demonstrated the practicality and robustness of the proposed finger vein identification system.},
}
RevDate: 2023-02-28
CmpDate: 2023-02-28
AOEHO: A New Hybrid Data Replication Method in Fog Computing for IoT Application.
Sensors (Basel, Switzerland), 23(4):.
Recently, the concept of the internet of things and its services has emerged with cloud computing. Cloud computing is a modern technology for dealing with big data to perform specified operations. The cloud addresses the problem of selecting and placing iterations across nodes in fog computing. Previous studies focused on original swarm intelligence and mathematical models; thus, we propose a novel hybrid method based on two modern metaheuristic algorithms. This paper combines the Aquila Optimizer (AO) algorithm with elephant herding optimization (EHO) to solve dynamic data replication problems in the fog computing environment. In the proposed method, we present a set of objectives that determine data transmission paths, choose the least-cost path, reduce network bottlenecks and bandwidth usage, balance load, and speed up data transfer rates between nodes in cloud computing. The hybrid method, AOEHO, addresses the optimal and least expensive path, determines the best replication via cloud computing, and selects optimal nodes at which to place data replicas near users. Moreover, we developed a multi-objective optimization based on the proposed AOEHO to decrease the bandwidth and enhance load balancing and cloud throughput. The proposed method is evaluated based on data replication using seven criteria: data replication access, distance, costs, availability, SBER, popularity, and the Floyd algorithm. The experimental results show the superiority of the proposed AOEHO strategy over other algorithms in terms of bandwidth, distance, load balancing, data transmission, and least-cost path.
Additional Links: PMID-36850784
@article {pmid36850784,
year = {2023},
author = {Mohamed, AA and Abualigah, L and Alburaikan, A and Khalifa, HAE},
title = {AOEHO: A New Hybrid Data Replication Method in Fog Computing for IoT Application.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850784},
issn = {1424-8220},
abstract = {Recently, the concept of the internet of things and its services has emerged with cloud computing. Cloud computing is a modern technology for dealing with big data to perform specified operations. The cloud addresses the problem of selecting and placing iterations across nodes in fog computing. Previous studies focused on original swarm intelligent and mathematical models; thus, we proposed a novel hybrid method based on two modern metaheuristic algorithms. This paper combined the Aquila Optimizer (AO) algorithm with the elephant herding optimization (EHO) for solving dynamic data replication problems in the fog computing environment. In the proposed method, we present a set of objectives that determine data transmission paths, choose the least cost path, reduce network bottlenecks, bandwidth, balance, and speed data transfer rates between nodes in cloud computing. A hybrid method, AOEHO, addresses the optimal and least expensive path, determines the best replication via cloud computing, and determines optimal nodes to select and place data replication near users. Moreover, we developed a multi-objective optimization based on the proposed AOEHO to decrease the bandwidth and enhance load balancing and cloud throughput. The proposed method is evaluated based on data replication using seven criteria. These criteria are data replication access, distance, costs, availability, SBER, popularity, and the Floyd algorithm. The experimental results show the superiority of the proposed AOEHO strategy performance over other algorithms, such as bandwidth, distance, load balancing, data transmission, and least cost path.},
}
RevDate: 2023-02-28
Using Mobile Edge AI to Detect and Map Diseases in Citrus Orchards.
Sensors (Basel, Switzerland), 23(4):.
Deep Learning models have presented promising results when applied to Agriculture 4.0. Among other applications, these models can be used in disease detection and fruit counting. Deep Learning models usually have many layers in the architecture and millions of parameters. This aspect hinders the use of Deep Learning on mobile devices as they require a large amount of processing power for inference. In addition, the lack of high-quality Internet connectivity in the field impedes the usage of cloud computing, pushing the processing towards edge devices. This work proposes an edge AI application to detect and map diseases in citrus orchards. The proposed system has low computational demand, enabling the use of low-footprint models for both detection and classification tasks. We initially compared AI algorithms to detect fruits on trees. Specifically, we analyzed and compared YOLO and Faster R-CNN. Then, we studied lean AI models to perform the classification task. In this context, we tested and compared the performance of MobileNetV2, EfficientNetV2-B0, and NASNet-Mobile. In the detection task, YOLO and Faster R-CNN had similar AI performance metrics, but YOLO was significantly faster. In the image classification task, MobileNetV2 and EfficientNetV2-B0 obtained an accuracy of 100%, while NASNet-Mobile had a 98% performance. As for the timing performance, MobileNetV2 and EfficientNetV2-B0 were the best candidates, while NASNet-Mobile was significantly worse. Furthermore, MobileNetV2 had a 10% better performance than EfficientNetV2-B0. Finally, we provide a method to evaluate the results from these algorithms towards describing the disease spread using statistical parametric models and a genetic algorithm to perform the parameters' regression. With these results, we validated the proposed pipeline, enabling the usage of adequate AI models to develop a mobile edge AI solution.
Additional Links: PMID-36850763
@article {pmid36850763,
year = {2023},
author = {da Silva, JCF and Silva, MC and Luz, EJS and Delabrida, S and Oliveira, RAR},
title = {Using Mobile Edge AI to Detect and Map Diseases in Citrus Orchards.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850763},
issn = {1424-8220},
abstract = {Deep Learning models have presented promising results when applied to Agriculture 4.0. Among other applications, these models can be used in disease detection and fruit counting. Deep Learning models usually have many layers in the architecture and millions of parameters. This aspect hinders the use of Deep Learning on mobile devices as they require a large amount of processing power for inference. In addition, the lack of high-quality Internet connectivity in the field impedes the usage of cloud computing, pushing the processing towards edge devices. This work describes the proposal of an edge AI application to detect and map diseases in citrus orchards. The proposed system has low computational demand, enabling the use of low-footprint models for both detection and classification tasks. We initially compared AI algorithms to detect fruits on trees. Specifically, we analyzed and compared YOLO and Faster R-CNN. Then, we studied lean AI models to perform the classification task. In this context, we tested and compared the performance of MobileNetV2, EfficientNetV2-B0, and NASNet-Mobile. In the detection task, YOLO and Faster R-CNN had similar AI performance metrics, but YOLO was significantly faster. In the image classification task, MobileNetMobileV2 and EfficientNetV2-B0 obtained an accuracy of 100%, while NASNet-Mobile had a 98% performance. As for the timing performance, MobileNetV2 and EfficientNetV2-B0 were the best candidates, while NASNet-Mobile was significantly worse. Furthermore, MobileNetV2 had a 10% better performance than EfficientNetV2-B0. Finally, we provide a method to evaluate the results from these algorithms towards describing the disease spread using statistical parametric models and a genetic algorithm to perform the parameters' regression. With these results, we validated the proposed pipeline, enabling the usage of adequate AI models to develop a mobile edge AI solution.},
}
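Of the classifiers compared above, MobileNetV2 is the kind of low-footprint backbone that can be fine-tuned for an on-device citrus-disease classifier. The Keras sketch below attaches a small classification head to an ImageNet-pretrained MobileNetV2; the number of disease classes and the input size are assumptions, not the authors' training setup.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4            # e.g. healthy plus three citrus diseases (assumed)
    IMG_SIZE = (224, 224, 3)

    base = tf.keras.applications.MobileNetV2(input_shape=IMG_SIZE,
                                             include_top=False,
                                             weights="imagenet")
    base.trainable = False     # first train only the new head

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    # model.fit(train_dataset, validation_data=val_dataset, epochs=10)  # with real data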
RevDate: 2023-02-28
CmpDate: 2023-02-28
Control and Optimisation of Power Grids Using Smart Meter Data: A Review.
Sensors (Basel, Switzerland), 23(4):.
This paper provides a comprehensive review of the applications of smart meters in the control and optimisation of power grids to support a smooth energy transition towards the renewable energy future. The smart grids become more complicated due to the presence of small-scale low inertia generators and the implementation of electric vehicles (EVs), which are mainly based on intermittent and variable renewable energy resources. Optimal and reliable operation of this environment using conventional model-based approaches is very difficult. Advancements in measurement and communication technologies have brought the opportunity of collecting temporal or real-time data from prosumers through Advanced Metering Infrastructure (AMI). Smart metering brings the potential of applying data-driven algorithms for different power system operations and planning services, such as infrastructure sizing and upgrade and generation forecasting. It can also be used for demand-side management, especially in the presence of new technologies such as EVs, 5G/6G networks and cloud computing. These algorithms face privacy-preserving and cybersecurity challenges that need to be well addressed. This article surveys the state-of-the-art of each of these topics, reviewing applications, challenges and opportunities of using smart meters to address them. It also stipulates the challenges that smart grids present to smart meters and the benefits that smart meters can bring to smart grids. Furthermore, the paper is concluded with some expected future directions and potential research questions for smart meters, smart grids and their interplay.
Additional Links: PMID-36850711
@article {pmid36850711,
year = {2023},
author = {Chen, Z and Amani, AM and Yu, X and Jalili, M},
title = {Control and Optimisation of Power Grids Using Smart Meter Data: A Review.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850711},
issn = {1424-8220},
abstract = {This paper provides a comprehensive review of the applications of smart meters in the control and optimisation of power grids to support a smooth energy transition towards the renewable energy future. The smart grids become more complicated due to the presence of small-scale low inertia generators and the implementation of electric vehicles (EVs), which are mainly based on intermittent and variable renewable energy resources. Optimal and reliable operation of this environment using conventional model-based approaches is very difficult. Advancements in measurement and communication technologies have brought the opportunity of collecting temporal or real-time data from prosumers through Advanced Metering Infrastructure (AMI). Smart metering brings the potential of applying data-driven algorithms for different power system operations and planning services, such as infrastructure sizing and upgrade and generation forecasting. It can also be used for demand-side management, especially in the presence of new technologies such as EVs, 5G/6G networks and cloud computing. These algorithms face privacy-preserving and cybersecurity challenges that need to be well addressed. This article surveys the state-of-the-art of each of these topics, reviewing applications, challenges and opportunities of using smart meters to address them. It also stipulates the challenges that smart grids present to smart meters and the benefits that smart meters can bring to smart grids. Furthermore, the paper is concluded with some expected future directions and potential research questions for smart meters, smart grids and their interplay.},
}
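To make the data-driven flavour of these applications concrete, here is a minimal, purely illustrative sketch (not taken from the paper) of one-step-ahead load forecasting from hourly smart meter readings, using lagged features and ordinary least squares; the series, lag set and train/test split are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                      # 60 days of synthetic hourly readings
load = 1.0 + 0.4 * np.sin(2 * np.pi * hours / 24) + 0.05 * rng.standard_normal(hours.size)

def lag_matrix(series, lags=(1, 2, 24, 168)):
    """Build a design matrix of lagged values (plus intercept) for one-step-ahead forecasting."""
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - l:-l] for l in lags])
    X = np.column_stack([np.ones(len(X)), X])
    y = series[max_lag:]
    return X, y

X, y = lag_matrix(load)
split = int(0.8 * len(y))
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)   # fit on the first 80% of hours
pred = X[split:] @ coef
print("MAE on held-out hours:", np.mean(np.abs(pred - y[split:])))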
RevDate: 2023-02-28
CmpDate: 2023-02-28
A Secure IoT-Based Irrigation System for Precision Agriculture Using the Expeditious Cipher.
Sensors (Basel, Switzerland), 23(4):.
Due to the recent advances in the domain of smart agriculture as a result of integrating traditional agriculture and the latest information technologies including the Internet of Things (IoT), cloud computing, and artificial intelligence (AI), there is an urgent need to address the information security-related issues and challenges in this field. In this article, we propose the integration of lightweight cryptography techniques into the IoT ecosystem for smart agriculture to meet the requirements of resource-constrained IoT devices. Moreover, we investigate the adoption of a lightweight encryption protocol, namely, the Expeditious Cipher (X-cipher), to create a secure channel between the sensing layer and the broker in the Message Queue Telemetry Transport (MQTT) protocol as well as a secure channel between the broker and its subscribers. Our case study focuses on smart irrigation systems, and the MQTT protocol is deployed as the application messaging protocol in these systems. Smart irrigation strives to decrease the misuse of natural resources by enhancing the efficiency of agricultural irrigation. This secure channel is utilized to eliminate the main security threat in precision agriculture by protecting sensors' published data from eavesdropping and theft, as well as from unauthorized changes to sensitive data that can negatively impact crops' development. In addition, the secure channel protects the irrigation decisions made by the data analytics (DA) entity regarding the irrigation time and the quantity of water that is returned to actuators from any alteration. Performance evaluation of our chosen lightweight encryption protocol revealed an improvement in terms of power consumption, execution time, and required memory usage when compared with the Advanced Encryption Standard (AES). Moreover, the selected lightweight encryption protocol outperforms the PRESENT lightweight encryption protocol in terms of throughput and memory usage.
Additional Links: PMID-36850688
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36850688,
year = {2023},
author = {Fathy, C and Ali, HM},
title = {A Secure IoT-Based Irrigation System for Precision Agriculture Using the Expeditious Cipher.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850688},
issn = {1424-8220},
abstract = {Due to the recent advances in the domain of smart agriculture as a result of integrating traditional agriculture and the latest information technologies including the Internet of Things (IoT), cloud computing, and artificial intelligence (AI), there is an urgent need to address the information security-related issues and challenges in this field. In this article, we propose the integration of lightweight cryptography techniques into the IoT ecosystem for smart agriculture to meet the requirements of resource-constrained IoT devices. Moreover, we investigate the adoption of a lightweight encryption protocol, namely, the Expeditious Cipher (X-cipher), to create a secure channel between the sensing layer and the broker in the Message Queue Telemetry Transport (MQTT) protocol as well as a secure channel between the broker and its subscribers. Our case study focuses on smart irrigation systems, and the MQTT protocol is deployed as the application messaging protocol in these systems. Smart irrigation strives to decrease the misuse of natural resources by enhancing the efficiency of agricultural irrigation. This secure channel is utilized to eliminate the main security threat in precision agriculture by protecting sensors' published data from eavesdropping and theft, as well as from unauthorized changes to sensitive data that can negatively impact crops' development. In addition, the secure channel protects the irrigation decisions made by the data analytics (DA) entity regarding the irrigation time and the quantity of water that is returned to actuators from any alteration. Performance evaluation of our chosen lightweight encryption protocol revealed an improvement in terms of power consumption, execution time, and required memory usage when compared with the Advanced Encryption Standard (AES). Moreover, the selected lightweight encryption protocol outperforms the PRESENT lightweight encryption protocol in terms of throughput and memory usage.},
}
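As a rough illustration of the secure-channel pattern described above, the sketch below encrypts a sensor reading before it would be handed to an MQTT publish call, so that the broker and subscribers only ever see ciphertext. The X-cipher itself is not available in common Python libraries, so AES-GCM from the cryptography package stands in purely to show the pattern; the topic and field names are invented for the example.
import json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)     # pre-shared between sensor node and subscriber
aead = AESGCM(key)

def seal(reading: dict, topic: str) -> bytes:
    """Serialize and encrypt a reading; the topic is bound as associated data."""
    nonce = os.urandom(12)
    ciphertext = aead.encrypt(nonce, json.dumps(reading).encode(), topic.encode())
    return nonce + ciphertext                 # this payload would go to client.publish(topic, payload)

def unseal(payload: bytes, topic: str) -> dict:
    nonce, ciphertext = payload[:12], payload[12:]
    return json.loads(aead.decrypt(nonce, ciphertext, topic.encode()))

topic = "farm/plot1/soil_moisture"            # hypothetical topic name
payload = seal({"moisture": 0.31, "ts": 1677000000}, topic)
assert unseal(payload, topic)["moisture"] == 0.31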
RevDate: 2023-02-28
CmpDate: 2023-02-28
Achieving Reliability in Cloud Computing by a Novel Hybrid Approach.
Sensors (Basel, Switzerland), 23(4):.
Cloud computing (CC) is among the fastest growing technologies in the computer industry, offering substantial benefits and opportunities. Its challenges include resource allocation, security, quality of service, availability, privacy, data management, performance compatibility, and fault tolerance. Fault tolerance (FT) refers to a system's ability to continue performing its intended task in the presence of defects. Fault-tolerance challenges include heterogeneity and a lack of standards, the need for automation, cloud downtime reliability, and consideration of recovery point objectives, recovery time objectives, and cloud workload. The proposed research combines machine learning (ML) algorithms, namely naïve Bayes (NB), library support vector machine (LibSVM), multinomial logistic regression (MLR), sequential minimal optimization (SMO), K-nearest neighbor (KNN), and random forest (RF), with a fault-tolerance method known as delta-checkpointing to achieve higher accuracy, lower fault prediction error, and greater reliability. The secondary data were collected from an experimental high-performance computing (HPC) system at the Swiss Federal Institute of Technology (ETH), Zurich, and the primary data were generated using virtual machines (VMs) to select the best machine learning classifier. In this article, the secondary and primary data were divided into split ratios of 80/20 and 70/30, respectively, and 5-fold cross-validation was used to assess accuracy and fault prediction error in terms of true, false, repair, and failure states of virtual machines. The secondary data results show that naïve Bayes performed exceptionally well on CPU-Mem mono and multi blocks, and sequential minimal optimization performed very well on HDD mono and multi blocks, in terms of accuracy and fault prediction. On the primary data, random forest achieved high accuracy and low fault prediction error but poor time complexity, whereas sequential minimal optimization offered good time complexity with only minor differences from random forest in accuracy and fault prediction; we therefore chose to modify sequential minimal optimization. Finally, the modified sequential minimal optimization (MSMO) algorithm combined with the fault-tolerance delta-checkpointing (D-CP) method is proposed to improve accuracy, fault prediction error, and reliability in cloud computing.
Additional Links: PMID-36850563
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36850563,
year = {2023},
author = {Shahid, MA and Alam, MM and Su'ud, MM},
title = {Achieving Reliability in Cloud Computing by a Novel Hybrid Approach.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850563},
issn = {1424-8220},
abstract = {Cloud computing (CC) is among the fastest growing technologies in the computer industry, offering substantial benefits and opportunities. Its challenges include resource allocation, security, quality of service, availability, privacy, data management, performance compatibility, and fault tolerance. Fault tolerance (FT) refers to a system's ability to continue performing its intended task in the presence of defects. Fault-tolerance challenges include heterogeneity and a lack of standards, the need for automation, cloud downtime reliability, and consideration of recovery point objectives, recovery time objectives, and cloud workload. The proposed research combines machine learning (ML) algorithms, namely naïve Bayes (NB), library support vector machine (LibSVM), multinomial logistic regression (MLR), sequential minimal optimization (SMO), K-nearest neighbor (KNN), and random forest (RF), with a fault-tolerance method known as delta-checkpointing to achieve higher accuracy, lower fault prediction error, and greater reliability. The secondary data were collected from an experimental high-performance computing (HPC) system at the Swiss Federal Institute of Technology (ETH), Zurich, and the primary data were generated using virtual machines (VMs) to select the best machine learning classifier. In this article, the secondary and primary data were divided into split ratios of 80/20 and 70/30, respectively, and 5-fold cross-validation was used to assess accuracy and fault prediction error in terms of true, false, repair, and failure states of virtual machines. The secondary data results show that naïve Bayes performed exceptionally well on CPU-Mem mono and multi blocks, and sequential minimal optimization performed very well on HDD mono and multi blocks, in terms of accuracy and fault prediction. On the primary data, random forest achieved high accuracy and low fault prediction error but poor time complexity, whereas sequential minimal optimization offered good time complexity with only minor differences from random forest in accuracy and fault prediction; we therefore chose to modify sequential minimal optimization. Finally, the modified sequential minimal optimization (MSMO) algorithm combined with the fault-tolerance delta-checkpointing (D-CP) method is proposed to improve accuracy, fault prediction error, and reliability in cloud computing.},
}
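A small sketch of the classifier-comparison step described in the abstract, using scikit-learn with 5-fold cross-validation; the data are synthetic stand-ins for the CPU/memory/HDD block metrics used in the paper, and SVC plays the role of the SMO-trained SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for VM health records (features invented for illustration).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)

models = {
    "naive_bayes": GaussianNB(),
    "smo_like_svm": SVC(kernel="rbf"),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")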
RevDate: 2023-02-28
CmpDate: 2023-02-28
An Innovative Cloud-Fog-Based Smart Grid Scheme for Efficient Resource Utilization.
Sensors (Basel, Switzerland), 23(4):.
Smart grids (SGs) enhance the effectiveness, reliability, resilience, and energy-efficient operation of electrical networks. Nonetheless, SGs suffer from big data transactions which limit their capabilities and can cause delays in the optimal operation and management tasks. Therefore, it is clear that a fast and reliable architecture is needed to make big data management in SGs more efficient. This paper assesses the optimal operation of the SGs using cloud computing (CC), fog computing, and resource allocation to enhance the management problem. Technically, big data management makes SG more efficient if cloud and fog computing (CFC) are integrated. The integration of fog computing (FC) with CC minimizes cloud burden and maximizes resource allocation. There are three key features for the proposed fog layer: awareness of position, short latency, and mobility. Moreover, a CFC-driven framework is proposed to manage data among different agents. In order to make the system more efficient, FC allocates virtual machines (VMs) according to load-balancing techniques. In addition, the present study proposes a hybrid gray wolf differential evolution optimization algorithm (HGWDE) that brings gray wolf optimization (GWO) and improved differential evolution (IDE) together. Simulation results conducted in MATLAB verify the efficiency of the suggested algorithm according to the high data transaction and computational time. According to the results, the response time of HGWDE is 54 ms, 82.1 ms, and 81.6 ms faster than particle swarm optimization (PSO), differential evolution (DE), and GWO. HGWDE's processing time is 53 ms, 81.2 ms, and 80.6 ms faster than PSO, DE, and GWO. Although GWO is a bit more efficient than HGWDE, the difference is not very significant.
Additional Links: PMID-36850350
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36850350,
year = {2023},
author = {Alsokhiry, F and Annuk, A and Mohamed, MA and Marinho, M},
title = {An Innovative Cloud-Fog-Based Smart Grid Scheme for Efficient Resource Utilization.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {4},
pages = {},
pmid = {36850350},
issn = {1424-8220},
abstract = {Smart grids (SGs) enhance the effectiveness, reliability, resilience, and energy-efficient operation of electrical networks. Nonetheless, SGs suffer from big data transactions which limit their capabilities and can cause delays in the optimal operation and management tasks. Therefore, it is clear that a fast and reliable architecture is needed to make big data management in SGs more efficient. This paper assesses the optimal operation of the SGs using cloud computing (CC), fog computing, and resource allocation to enhance the management problem. Technically, big data management makes SG more efficient if cloud and fog computing (CFC) are integrated. The integration of fog computing (FC) with CC minimizes cloud burden and maximizes resource allocation. There are three key features for the proposed fog layer: awareness of position, short latency, and mobility. Moreover, a CFC-driven framework is proposed to manage data among different agents. In order to make the system more efficient, FC allocates virtual machines (VMs) according to load-balancing techniques. In addition, the present study proposes a hybrid gray wolf differential evolution optimization algorithm (HGWDE) that brings gray wolf optimization (GWO) and improved differential evolution (IDE) together. Simulation results conducted in MATLAB verify the efficiency of the suggested algorithm according to the high data transaction and computational time. According to the results, the response time of HGWDE is 54 ms, 82.1 ms, and 81.6 ms faster than particle swarm optimization (PSO), differential evolution (DE), and GWO. HGWDE's processing time is 53 ms, 81.2 ms, and 80.6 ms faster than PSO, DE, and GWO. Although GWO is a bit more efficient than HGWDE, the difference is not very significant.},
}
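For readers unfamiliar with the metaheuristic side, the following hedged sketch shows plain grey wolf optimisation (without the differential-evolution hybridisation of HGWDE) balancing task load across fog virtual machines; the objective, task loads and problem sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_vms = 40, 5
task_load = rng.uniform(1.0, 10.0, n_tasks)

def imbalance(position):
    """Decode a continuous position into a task->VM assignment and score the load imbalance."""
    assignment = np.clip(position, 0, n_vms - 1e-9).astype(int)
    vm_load = np.bincount(assignment, weights=task_load, minlength=n_vms)
    return vm_load.std()

wolves = rng.uniform(0, n_vms, size=(20, n_tasks))     # population of candidate assignments
for it in range(200):
    fitness = np.array([imbalance(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
    a = 2 * (1 - it / 200)                             # exploration weight shrinks over time
    for i in range(len(wolves)):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(n_tasks), rng.random(n_tasks)
            A, C = 2 * a * r1 - a, 2 * r2
            candidates.append(leader - A * np.abs(C * leader - wolves[i]))
        wolves[i] = np.clip(np.mean(candidates, axis=0), 0, n_vms)
print("best imbalance:", min(imbalance(w) for w in wolves))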
RevDate: 2023-02-27
A Monte Carlo approach to study the effect of ions on the nucleation of sulfuric acid-water clusters.
Journal of computational chemistry [Epub ahead of print].
The nucleation of sulfuric acid-water clusters is a significant contribution to the formation of aerosols as precursors of cloud condensation nuclei (CCN). Depending on the temperature, there is an interplay between the clustering of particles and their evaporation controlling the efficiency of cluster growth. For typical temperatures in the atmosphere, the evaporation of H2SO4-H2O clusters is more efficient than the clustering of the first, small clusters, and thus their growth is dampened at its early stages. Since the evaporation rates of small clusters containing an HSO4- ion are much smaller than for purely neutral sulfuric acid clusters, they can serve as a central body for the further attachment of H2SO4-H2O molecules. We here present an innovative Monte Carlo model to study the growth of aqueous sulfuric acid clusters around central ions. Unlike classical thermodynamic nucleation theory or kinetic models, this model allows us to trace individual particles and thus to determine properties for each individual particle. As a benchmarking case, we have performed simulations at T = 300 K and a relative humidity of 50%, with dipole and ion concentrations of c_dipole = 5 × 10^8-10^9 cm^-3 and c_ion = 0-10^7 cm^-3. We discuss the runtime of our simulations and present the velocity distribution of ionic clusters, the size distribution of the clusters, as well as the formation rate of clusters with radii R ≥ 0.85 nm. Simulations give reasonable velocity and size distributions, and there is good agreement of the formation rates with previous results, including the relevance of ions for the initial growth of sulfuric acid-water clusters. Conclusively, we present a computational method which allows studying detailed particle properties during the growth of aerosols as a precursor of CCN.
Additional Links: PMID-36847779
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36847779,
year = {2023},
author = {Krog, D and Enghoff, MB and Köhn, C},
title = {A Monte Carlo approach to study the effect of ions on the nucleation of sulfuric acid-water clusters.},
journal = {Journal of computational chemistry},
volume = {},
number = {},
pages = {},
doi = {10.1002/jcc.27076},
pmid = {36847779},
issn = {1096-987X},
abstract = {The nucleation of sulfuric acid-water clusters is a significant contribution to the formation of aerosols as precursors of cloud condensation nuclei (CCN). Depending on the temperature, there is an interplay between the clustering of particles and their evaporation controlling the efficiency of cluster growth. For typical temperatures in the atmosphere, the evaporation of H2SO4-H2O clusters is more efficient than the clustering of the first, small clusters, and thus their growth is dampened at its early stages. Since the evaporation rates of small clusters containing an HSO4- ion are much smaller than for purely neutral sulfuric acid clusters, they can serve as a central body for the further attachment of H2SO4-H2O molecules. We here present an innovative Monte Carlo model to study the growth of aqueous sulfuric acid clusters around central ions. Unlike classical thermodynamic nucleation theory or kinetic models, this model allows us to trace individual particles and thus to determine properties for each individual particle. As a benchmarking case, we have performed simulations at T = 300 K and a relative humidity of 50%, with dipole and ion concentrations of c_dipole = 5 × 10^8-10^9 cm^-3 and c_ion = 0-10^7 cm^-3. We discuss the runtime of our simulations and present the velocity distribution of ionic clusters, the size distribution of the clusters, as well as the formation rate of clusters with radii R ≥ 0.85 nm. Simulations give reasonable velocity and size distributions, and there is good agreement of the formation rates with previous results, including the relevance of ions for the initial growth of sulfuric acid-water clusters. Conclusively, we present a computational method which allows studying detailed particle properties during the growth of aerosols as a precursor of CCN.},
}
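As a toy companion to the abstract above, this birth-death Monte Carlo sketch illustrates the attachment/evaporation competition and why a reduced evaporation rate (an ion-seeded cluster) lets clusters grow further; all rates are invented and dimensionless, and this is not the authors' model.
import random

def grow(ion_seeded: bool, attach_rate=1.0, evap_rate=1.5, steps=200_000, seed=2):
    """Track a single cluster that gains monomers by attachment and loses them by evaporation."""
    random.seed(seed)
    evap = evap_rate * (0.3 if ion_seeded else 1.0)   # ions stabilise small clusters (toy factor)
    size, largest = 1, 1
    for _ in range(steps):
        total = attach_rate + (evap if size > 1 else 0.0)
        if random.random() < attach_rate / total:
            size += 1
        else:
            size -= 1
        largest = max(largest, size)
    return largest

print("largest neutral cluster:", grow(ion_seeded=False))
print("largest ion-seeded cluster:", grow(ion_seeded=True))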
RevDate: 2023-02-27
Data workflows and visualization in support of surveillance practice.
Frontiers in veterinary science, 10:1129863.
The Swedish National Veterinary Institute (SVA) is working on implementing reusable and adaptable workflows for epidemiological analysis and dynamic report generation to improve disease surveillance. Important components of this work include: data access, development environment, computational resources and cloud-based management. The development environment relies on Git for code collaboration and version control and the R language for statistical computing and data visualization. The computational resources include both local and cloud-based systems, with automatic workflows managed in the cloud. The workflows are designed to be flexible and adaptable to changing data sources and stakeholder demands, with the ultimate goal to create a robust infrastructure for the delivery of actionable epidemiological information.
Additional Links: PMID-36846250
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36846250,
year = {2023},
author = {Gustafsson, W and Dórea, FC and Widgren, S and Frössling, J and Vidal, G and Kim, H and Cha, W and Comin, A and Rodriguez Ewerlöf, I and Rosendal, T},
title = {Data workflows and visualization in support of surveillance practice.},
journal = {Frontiers in veterinary science},
volume = {10},
number = {},
pages = {1129863},
pmid = {36846250},
issn = {2297-1769},
abstract = {The Swedish National Veterinary Institute (SVA) is working on implementing reusable and adaptable workflows for epidemiological analysis and dynamic report generation to improve disease surveillance. Important components of this work include: data access, development environment, computational resources and cloud-based management. The development environment relies on Git for code collaboration and version control and the R language for statistical computing and data visualization. The computational resources include both local and cloud-based systems, with automatic workflows managed in the cloud. The workflows are designed to be flexible and adaptable to changing data sources and stakeholder demands, with the ultimate goal to create a robust infrastructure for the delivery of actionable epidemiological information.},
}
RevDate: 2023-02-26
Applications and advances in acoustic monitoring for infectious disease epidemiology.
Trends in parasitology pii:S1471-4922(23)00012-0 [Epub ahead of print].
Emerging infectious diseases continue to pose a significant burden on global public health, and there is a critical need to better understand transmission dynamics arising at the interface of human activity and wildlife habitats. Passive acoustic monitoring (PAM), more typically applied to questions of biodiversity and conservation, provides an opportunity to collect and analyse audio data in relative real time and at low cost. Acoustic methods are increasingly accessible, with the expansion of cloud-based computing, low-cost hardware, and machine learning approaches. Paired with purposeful experimental design, acoustic data can complement existing surveillance methods and provide a novel toolkit to investigate the key biological parameters and ecological interactions that underpin infectious disease epidemiology.
Additional Links: PMID-36842917
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36842917,
year = {2023},
author = {Johnson, E and Campos-Cerqueira, M and Jumail, A and Yusni, ASA and Salgado-Lynn, M and Fornace, K},
title = {Applications and advances in acoustic monitoring for infectious disease epidemiology.},
journal = {Trends in parasitology},
volume = {},
number = {},
pages = {},
doi = {10.1016/j.pt.2023.01.008},
pmid = {36842917},
issn = {1471-5007},
abstract = {Emerging infectious diseases continue to pose a significant burden on global public health, and there is a critical need to better understand transmission dynamics arising at the interface of human activity and wildlife habitats. Passive acoustic monitoring (PAM), more typically applied to questions of biodiversity and conservation, provides an opportunity to collect and analyse audio data in relative real time and at low cost. Acoustic methods are increasingly accessible, with the expansion of cloud-based computing, low-cost hardware, and machine learning approaches. Paired with purposeful experimental design, acoustic data can complement existing surveillance methods and provide a novel toolkit to investigate the key biological parameters and ecological interactions that underpin infectious disease epidemiology.},
}
RevDate: 2023-02-26
Spatio-temporal analysis of climate and irrigated vegetation cover changes and their role in lake water level depletion using a pixel-based approach and canonical correlation analysis.
The Science of the total environment pii:S0048-9697(23)00942-7 [Epub ahead of print].
Lake Urmia, located in northwest Iran, was among the world's largest hypersaline lakes but has now experienced a 7 m decrease in water level, from 1278 m to 1271 m, over 1996 to 2019. There is doubt as to whether the lake's drying is a natural process or a result of human intervention, a question examined in this study with a pixel-based analysis (PBA) approach. Here, a non-parametric Mann-Kendall trend test was applied to a 21-year record (2000-2020) of satellite data products, i.e., temperature, precipitation, snow cover, and irrigated vegetation cover (IVC). The Google Earth Engine (GEE) cloud-computing platform was utilized over 10 sub-basins in three provinces surrounding Lake Urmia to obtain and calculate the products at pixel-based monthly and seasonal scales. Canonical correlation analysis was employed in order to understand the correlation between the variables and lake water level (LWL). The trend analysis results show significant increases in temperature (from 1 to 2 °C during 2000-2020) over May-September, i.e., in 87%-25% of the basin. However, precipitation has seen an insignificant decrease (from 3 to 9 mm during 2000-2019) in the rainy months (April and May). Snow cover has also decreased and, when compared with precipitation, shows a change in precipitation patterns from snow to rain. IVC has increased significantly in all sub-basins, especially in the southern parts of the lake, with the West province making the largest contribution to the development of IVC. The PBA underpins, in more detail, the very high contribution of IVC to the drying of the lake, although the contribution of climate change is also apparent. The development of IVC leads to increased water consumption through evapotranspiration and excess evaporation caused by the storage of water for irrigation. Due to the decreased runoff caused by consumption exceeding the basin's capacity, the lake cannot be fed sufficiently.
Additional Links: PMID-36842572
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36842572,
year = {2023},
author = {Andaryani, S and Nourani, V and Abbasnejad, H and Koch, J and Stisen, S and Klöve, B and Haghighi, AT},
title = {Spatio-temporal analysis of climate and irrigated vegetation cover changes and their role in lake water level depletion using a pixel-based approach and canonical correlation analysis.},
journal = {The Science of the total environment},
volume = {},
number = {},
pages = {162326},
doi = {10.1016/j.scitotenv.2023.162326},
pmid = {36842572},
issn = {1879-1026},
abstract = {Lake Urmia, located in northwest Iran, was among the world's largest hypersaline lakes but has now experienced a 7 m decrease in water level, from 1278 m to 1271 m, over 1996 to 2019. There is doubt as to whether the lake's drying is a natural process or a result of human intervention, a question examined in this study with a pixel-based analysis (PBA) approach. Here, a non-parametric Mann-Kendall trend test was applied to a 21-year record (2000-2020) of satellite data products, i.e., temperature, precipitation, snow cover, and irrigated vegetation cover (IVC). The Google Earth Engine (GEE) cloud-computing platform was utilized over 10 sub-basins in three provinces surrounding Lake Urmia to obtain and calculate the products at pixel-based monthly and seasonal scales. Canonical correlation analysis was employed in order to understand the correlation between the variables and lake water level (LWL). The trend analysis results show significant increases in temperature (from 1 to 2 °C during 2000-2020) over May-September, i.e., in 87%-25% of the basin. However, precipitation has seen an insignificant decrease (from 3 to 9 mm during 2000-2019) in the rainy months (April and May). Snow cover has also decreased and, when compared with precipitation, shows a change in precipitation patterns from snow to rain. IVC has increased significantly in all sub-basins, especially in the southern parts of the lake, with the West province making the largest contribution to the development of IVC. The PBA underpins, in more detail, the very high contribution of IVC to the drying of the lake, although the contribution of climate change is also apparent. The development of IVC leads to increased water consumption through evapotranspiration and excess evaporation caused by the storage of water for irrigation. Due to the decreased runoff caused by consumption exceeding the basin's capacity, the lake cannot be fed sufficiently.},
}
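The Mann-Kendall test used in the study is easy to state in code; the sketch below implements the no-ties version on a synthetic annual series, so it is illustrative rather than production-grade (the tie correction is omitted and the data are invented).
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                    # two-sided p-value
    return s, z, p

years = np.arange(2000, 2021)
temps = 14 + 0.06 * (years - 2000) + np.random.default_rng(3).normal(0, 0.3, years.size)
s, z, p = mann_kendall(temps)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.4f} -> {'significant' if p < 0.05 else 'not significant'} trend")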
RevDate: 2023-02-25
An Efficient Virtual Machine Consolidation Algorithm for Cloud Computing.
Entropy (Basel, Switzerland), 25(2): pii:e25020351.
With the rapid development of integration in blockchain and IoT, virtual machine consolidation (VMC) has become a heated topic because it can effectively improve the energy efficiency and service quality of cloud computing in the blockchain. The current VMC algorithm is not effective enough because it does not regard the load of the virtual machine (VM) as an analyzed time series. Therefore, we proposed a VMC algorithm based on load forecast to improve efficiency. First, we proposed a migration VM selection strategy based on load increment prediction called LIP. Combined with the current load and load increment, this strategy can effectively improve the accuracy of selecting VM from the overloaded physical machines (PMs). Then, we proposed a VM migration point selection strategy based on the load sequence prediction called SIR. We merged VMs with complementary load series into the same PM, effectively improving the stability of the PM load, thereby reducing the service level agreement violation (SLAV) and the number of VM migrations due to the resource competition of the PM. Finally, we proposed a better virtual machine consolidation (VMC) algorithm based on the load prediction of LIP and SIR. The experimental results show that our VMC algorithm can effectively improve energy efficiency.
Additional Links: PMID-36832716
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36832716,
year = {2023},
author = {Yuan, L and Wang, Z and Sun, P and Wei, Y},
title = {An Efficient Virtual Machine Consolidation Algorithm for Cloud Computing.},
journal = {Entropy (Basel, Switzerland)},
volume = {25},
number = {2},
pages = {},
doi = {10.3390/e25020351},
pmid = {36832716},
issn = {1099-4300},
abstract = {With the rapid development of integration in blockchain and IoT, virtual machine consolidation (VMC) has become a heated topic because it can effectively improve the energy efficiency and service quality of cloud computing in the blockchain. The current VMC algorithm is not effective enough because it does not regard the load of the virtual machine (VM) as an analyzed time series. Therefore, we proposed a VMC algorithm based on load forecast to improve efficiency. First, we proposed a migration VM selection strategy based on load increment prediction called LIP. Combined with the current load and load increment, this strategy can effectively improve the accuracy of selecting VM from the overloaded physical machines (PMs). Then, we proposed a VM migration point selection strategy based on the load sequence prediction called SIR. We merged VMs with complementary load series into the same PM, effectively improving the stability of the PM load, thereby reducing the service level agreement violation (SLAV) and the number of VM migrations due to the resource competition of the PM. Finally, we proposed a better virtual machine consolidation (VMC) algorithm based on the load prediction of LIP and SIR. The experimental results show that our VMC algorithm can effectively improve energy efficiency.},
}
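In the spirit of the LIP idea (current load plus a predicted load increment), though not the authors' exact formulation, the sketch below picks the VM to migrate from an overloaded physical machine using a least-squares slope of each VM's recent load history; the VM names and traces are invented.
import numpy as np

def predicted_increment(history):
    """Slope of a least-squares line through the recent load history (one-step increment)."""
    t = np.arange(len(history))
    return np.polyfit(t, history, 1)[0]

def select_vm_to_migrate(vm_histories):
    """Score each VM by current load plus predicted increment and pick the largest."""
    scores = {vm: hist[-1] + predicted_increment(hist) for vm, hist in vm_histories.items()}
    return max(scores, key=scores.get), scores

vm_histories = {                      # recent CPU utilisation traces (fractions), invented
    "vm-a": [0.55, 0.58, 0.61, 0.66],
    "vm-b": [0.70, 0.69, 0.70, 0.71],
    "vm-c": [0.40, 0.48, 0.59, 0.72],
}
vm, scores = select_vm_to_migrate(vm_histories)
print("migrate:", vm, {k: round(v, 3) for k, v in scores.items()})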
RevDate: 2023-02-25
Kullback-Leibler Divergence of an Open-Queuing Network of a Cell-Signal-Transduction Cascade.
Entropy (Basel, Switzerland), 25(2): pii:e25020326.
Queuing networks (QNs) are essential models in operations research, with applications in cloud computing and healthcare systems. However, few studies have analyzed the cell's biological signal transduction using QN theory. This study entailed the modeling of signal transduction as an open Jackson's QN (JQN) to theoretically determine cell signal transduction, under the assumption that the signal mediator queues in the cytoplasm, and the mediator is exchanged from one signaling molecule to another through interactions between the signaling molecules. Each signaling molecule was regarded as a network node in the JQN. The JQN Kullback-Leibler divergence (KLD) was defined using the ratio of the queuing time (λ) to the exchange time (μ), λ/μ. The mitogen-activated protein kinase (MAPK) signal-cascade model was applied, and the KLD rate per signal-transduction-period was shown to be conserved when the KLD was maximized. Our experimental study on MAPK cascade supported this conclusion. This result is similar to the entropy-rate conservation of chemical kinetics and entropy coding reported in our previous studies. Thus, JQN can be used as a novel framework to analyze signal transduction.
Additional Links: PMID-36832692
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36832692,
year = {2023},
author = {Tsuruyama, T},
title = {Kullback-Leibler Divergence of an Open-Queuing Network of a Cell-Signal-Transduction Cascade.},
journal = {Entropy (Basel, Switzerland)},
volume = {25},
number = {2},
pages = {},
doi = {10.3390/e25020326},
pmid = {36832692},
issn = {1099-4300},
abstract = {Queuing networks (QNs) are essential models in operations research, with applications in cloud computing and healthcare systems. However, few studies have analyzed the cell's biological signal transduction using QN theory. This study entailed the modeling of signal transduction as an open Jackson's QN (JQN) to theoretically determine cell signal transduction, under the assumption that the signal mediator queues in the cytoplasm, and the mediator is exchanged from one signaling molecule to another through interactions between the signaling molecules. Each signaling molecule was regarded as a network node in the JQN. The JQN Kullback-Leibler divergence (KLD) was defined using the ratio of the queuing time (λ) to the exchange time (μ), λ/μ. The mitogen-activated protein kinase (MAPK) signal-cascade model was applied, and the KLD rate per signal-transduction-period was shown to be conserved when the KLD was maximized. Our experimental study on MAPK cascade supported this conclusion. This result is similar to the entropy-rate conservation of chemical kinetics and entropy coding reported in our previous studies. Thus, JQN can be used as a novel framework to analyze signal transduction.},
}
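As a generic queueing illustration only (not the paper's JQN-specific definition), the sketch below computes the Kullback-Leibler divergence between the stationary queue-length distributions of two M/M/1 nodes whose utilisations rho = lambda/mu differ; the rho values are invented.
import numpy as np

def mm1_queue_length_pmf(rho, n_max=200):
    """Stationary P(N = n) = (1 - rho) * rho**n for an M/M/1 queue, truncated and renormalised."""
    n = np.arange(n_max + 1)
    pmf = (1 - rho) * rho ** n
    return pmf / pmf.sum()

def kl_divergence(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = mm1_queue_length_pmf(rho=0.5)     # e.g. a lightly loaded signalling step
q = mm1_queue_length_pmf(rho=0.8)     # a more congested step
print("KL(P || Q) =", round(kl_divergence(p, q), 4))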
RevDate: 2023-02-25
Diversity-Aware Marine Predators Algorithm for Task Scheduling in Cloud Computing.
Entropy (Basel, Switzerland), 25(2): pii:e25020285.
With the increase in cloud users and internet of things (IoT) applications, advanced task scheduling (TS) methods are required to reasonably schedule tasks in cloud computing. This study proposes a diversity-aware marine predators algorithm (DAMPA) for solving TS in cloud computing. In DAMPA, to enhance the premature convergence avoidance ability, the predator crowding degree ranking and comprehensive learning strategies were adopted in the second stage to maintain the population diversity and thereby inhibit premature convergence. Additionally, a stage-independent control of the stepsize-scaling strategy that uses different control parameters in three stages was designed to balance the exploration and exploitation abilities. Two case experiments were conducted to evaluate the proposed algorithm. Compared with the latest algorithm, in the first case, DAMPA reduced the makespan and energy consumption by 21.06% and 23.47% at most, respectively. In the second case, the makespan and energy consumption are reduced by 34.35% and 38.60% on average, respectively. Meanwhile, the algorithm achieved greater throughput in both cases.
Additional Links: PMID-36832652
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36832652,
year = {2023},
author = {Chen, D and Zhang, Y},
title = {Diversity-Aware Marine Predators Algorithm for Task Scheduling in Cloud Computing.},
journal = {Entropy (Basel, Switzerland)},
volume = {25},
number = {2},
pages = {},
doi = {10.3390/e25020285},
pmid = {36832652},
issn = {1099-4300},
abstract = {With the increase in cloud users and internet of things (IoT) applications, advanced task scheduling (TS) methods are required to reasonably schedule tasks in cloud computing. This study proposes a diversity-aware marine predators algorithm (DAMPA) for solving TS in cloud computing. In DAMPA, to enhance the premature convergence avoidance ability, the predator crowding degree ranking and comprehensive learning strategies were adopted in the second stage to maintain the population diversity and thereby inhibit premature convergence. Additionally, a stage-independent control of the stepsize-scaling strategy that uses different control parameters in three stages was designed to balance the exploration and exploitation abilities. Two case experiments were conducted to evaluate the proposed algorithm. Compared with the latest algorithm, in the first case, DAMPA reduced the makespan and energy consumption by 21.06% and 23.47% at most, respectively. In the second case, the makespan and energy consumption are reduced by 34.35% and 38.60% on average, respectively. Meanwhile, the algorithm achieved greater throughput in both cases.},
}
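To show the objective such schedulers optimise, the hedged sketch below computes the makespan of a task-to-VM assignment on heterogeneous VMs and compares a random assignment with a simple greedy heuristic; DAMPA itself is not reproduced, and the task lengths and VM speeds are invented.
import numpy as np

rng = np.random.default_rng(4)
task_len = rng.uniform(50, 500, size=30)        # task lengths (million instructions), invented
vm_speed = np.array([250.0, 500.0, 1000.0])     # VM speeds (MIPS), invented

def makespan(assignment):
    """Finish time of the busiest VM under a given task->VM assignment."""
    finish = np.zeros(len(vm_speed))
    for t, vm in enumerate(assignment):
        finish[vm] += task_len[t] / vm_speed[vm]
    return finish.max()

random_assignment = rng.integers(0, len(vm_speed), size=len(task_len))

# Greedy earliest-finish baseline: longest tasks first, each to the VM that finishes it soonest.
greedy_assignment = np.empty(len(task_len), dtype=int)
finish = np.zeros(len(vm_speed))
for t in np.argsort(-task_len):
    vm = int(np.argmin(finish + task_len[t] / vm_speed))
    greedy_assignment[t] = vm
    finish[vm] += task_len[t] / vm_speed[vm]

print("random makespan:", round(makespan(random_assignment), 2))
print("greedy makespan:", round(makespan(greedy_assignment), 2))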
RevDate: 2023-02-25
ShrewdAttack: Low Cost High Accuracy Model Extraction.
Entropy (Basel, Switzerland), 25(2): pii:e25020282.
Machine learning as a service (MLaaS) plays an essential role in the current ecosystem. Enterprises do not need to train models by themselves separately. Instead, they can use well-trained models provided by MLaaS to support business activities. However, such an ecosystem could be threatened by model extraction attacks-an attacker steals the functionality of a trained model provided by MLaaS and builds a substitute model locally. In this paper, we proposed a model extraction method with low query costs and high accuracy. In particular, we use pre-trained models and task-relevant data to decrease the size of query data. We use instance selection to reduce query samples. In addition, we divided query data into two categories, namely low-confidence data and high-confidence data, to reduce the budget and improve accuracy. We then conducted attacks on two models provided by Microsoft Azure as our experiments. The results show that our scheme achieves high accuracy at low cost, with the substitution models achieving 96.10% and 95.24% substitution while querying only 7.32% and 5.30% of their training data on the two models, respectively. This new attack approach creates additional security challenges for models deployed on cloud platforms. It raises the need for novel mitigation strategies to secure the models. In future work, generative adversarial networks and model inversion attacks can be used to generate more diverse data to be applied to the attacks.
Additional Links: PMID-36832648
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36832648,
year = {2023},
author = {Liu, Y and Luo, J and Yang, Y and Wang, X and Gheisari, M and Luo, F},
title = {ShrewdAttack: Low Cost High Accuracy Model Extraction.},
journal = {Entropy (Basel, Switzerland)},
volume = {25},
number = {2},
pages = {},
doi = {10.3390/e25020282},
pmid = {36832648},
issn = {1099-4300},
abstract = {Machine learning as a service (MLaaS) plays an essential role in the current ecosystem. Enterprises do not need to train models by themselves separately. Instead, they can use well-trained models provided by MLaaS to support business activities. However, such an ecosystem could be threatened by model extraction attacks-an attacker steals the functionality of a trained model provided by MLaaS and builds a substitute model locally. In this paper, we proposed a model extraction method with low query costs and high accuracy. In particular, we use pre-trained models and task-relevant data to decrease the size of query data. We use instance selection to reduce query samples. In addition, we divided query data into two categories, namely low-confidence data and high-confidence data, to reduce the budget and improve accuracy. We then conducted attacks on two models provided by Microsoft Azure as our experiments. The results show that our scheme achieves high accuracy at low cost, with the substitution models achieving 96.10% and 95.24% substitution while querying only 7.32% and 5.30% of their training data on the two models, respectively. This new attack approach creates additional security challenges for models deployed on cloud platforms. It raises the need for novel mitigation strategies to secure the models. In future work, generative adversarial networks and model inversion attacks can be used to generate more diverse data to be applied to the attacks.},
}
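The sketch below shows the generic model-extraction pattern only: query a victim model, keep its predicted labels, split queries by confidence, and fit a local substitute. It does not reproduce the paper's instance-selection or budget strategy, and no cloud service is queried; the victim is a locally trained stand-in and all data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
victim = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:2000], y[:2000])

X_query = X[2000:]                                 # attacker's task-relevant, unlabeled data
proba = victim.predict_proba(X_query)              # what a prediction API would return
labels = proba.argmax(axis=1)
confident = proba.max(axis=1) >= 0.8               # split into high-/low-confidence pools

# Train the substitute on the high-confidence queries only (a simple budget-saving choice).
substitute = LogisticRegression(max_iter=1000).fit(X_query[confident], labels[confident])
agreement = (substitute.predict(X_query) == labels).mean()
print(f"substitute agrees with victim on {agreement:.1%} of queries")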
RevDate: 2023-02-25
Straggler- and Adversary-Tolerant Secure Distributed Matrix Multiplication Using Polynomial Codes.
Entropy (Basel, Switzerland), 25(2): pii:e25020266.
Large matrix multiplications commonly take place in large-scale machine-learning applications. Often, the sheer size of these matrices prevent carrying out the multiplication at a single server. Therefore, these operations are typically offloaded to a distributed computing platform with a master server and a large amount of workers in the cloud, operating in parallel. For such distributed platforms, it has been recently shown that coding over the input data matrices can reduce the computational delay by introducing a tolerance against straggling workers, i.e., workers for which execution time significantly lags with respect to the average. In addition to exact recovery, we impose a security constraint on both matrices to be multiplied. Specifically, we assume that workers can collude and eavesdrop on the content of these matrices. For this problem, we introduce a new class of polynomial codes with fewer non-zero coefficients than the degree +1. We provide closed-form expressions for the recovery threshold and show that our construction improves the recovery threshold of existing schemes in the literature, in particular for larger matrix dimensions and a moderate to large number of colluding workers. In the absence of any security constraints, we show that our construction is optimal in terms of recovery threshold.
Additional Links: PMID-36832632
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36832632,
year = {2023},
author = {Byrne, E and Gnilke, OW and Kliewer, J},
title = {Straggler- and Adversary-Tolerant Secure Distributed Matrix Multiplication Using Polynomial Codes.},
journal = {Entropy (Basel, Switzerland)},
volume = {25},
number = {2},
pages = {},
doi = {10.3390/e25020266},
pmid = {36832632},
issn = {1099-4300},
abstract = {Large matrix multiplications commonly take place in large-scale machine-learning applications. Often, the sheer size of these matrices prevent carrying out the multiplication at a single server. Therefore, these operations are typically offloaded to a distributed computing platform with a master server and a large amount of workers in the cloud, operating in parallel. For such distributed platforms, it has been recently shown that coding over the input data matrices can reduce the computational delay by introducing a tolerance against straggling workers, i.e., workers for which execution time significantly lags with respect to the average. In addition to exact recovery, we impose a security constraint on both matrices to be multiplied. Specifically, we assume that workers can collude and eavesdrop on the content of these matrices. For this problem, we introduce a new class of polynomial codes with fewer non-zero coefficients than the degree +1. We provide closed-form expressions for the recovery threshold and show that our construction improves the recovery threshold of existing schemes in the literature, in particular for larger matrix dimensions and a moderate to large number of colluding workers. In the absence of any security constraints, we show that our construction is optimal in terms of recovery threshold.},
}
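For intuition, the sketch below demonstrates the classic polynomial code from the literature, without the security layer and without the paper's sparser construction: workers receive evaluations of encoded blocks of A and B, and the master interpolates the product C from any four returned evaluations; matrix sizes and evaluation points are invented.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 4))
A0, A1 = A[:2], A[2:]                     # split A into two row blocks
B0, B1 = B[:, :2], B[:, 2:]               # split B into two column blocks

def encode(x):
    """Worker x's share: evaluations of A(x) = A0 + A1*x and B(x) = B0 + B1*x^2."""
    return A0 + A1 * x, B0 + B1 * x**2

points = np.array([1.0, 2.0, 3.0, 5.0])   # four workers respond; stragglers are ignored
results = np.stack([ea @ eb for ea, eb in (encode(x) for x in points)])

V = np.vander(points, 4, increasing=True)             # A(x)B(x) has degree 3 in x
coeffs = np.linalg.solve(V, results.reshape(4, -1)).reshape(4, 2, 2)
C = np.block([[coeffs[0], coeffs[2]],                  # coefficient k holds one block of C
              [coeffs[1], coeffs[3]]])
assert np.allclose(C, A @ B)
print("recovered C from 4 worker evaluations")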
RevDate: 2023-02-25
Current Status and Future Forecast of Short-lived Climate-Forced Ozone in Tehran, Iran, derived from Ground-Based and Satellite Observations.
Water, air, and soil pollution, 234(2):134.
In this study, the distribution and alterations of ozone concentrations in Tehran, Iran, in 2021 were investigated. The impacts of precursors (i.e., CO, NO2, and NO) on ozone were examined using the data collected over 12 months (i.e., January 2021 to December 2021) from 21 stations of the Air Quality Control Company (AQCC). The results of monthly heat mapping of tropospheric ozone concentrations indicated the lowest value in December and the highest value in July. The lowest and highest seasonal concentrations were in winter and summer, respectively. Moreover, there was a negative correlation between ozone and its precursors. The Inverse Distance Weighting (IDW) method was then implemented to obtain air pollution zoning maps. Then, ozone concentration modeled by the IDW method was compared with the average monthly change of total column density of ozone derived from Sentinel-5 satellite data in the Google Earth Engine (GEE) cloud platform. A good agreement was discovered despite the harsh circumstances that both ground-based and satellite measurements were subjected to. The results obtained from both datasets showed that the west of the city of Tehran had the highest averaged O3 concentration. In this study, the status of the concentration of ozone precursors and tropospheric ozone in 2022 was also predicted. For this purpose, the Box-Jenkins Seasonal Autoregressive Integrated Moving Average (SARIMA) approach was implemented to predict the monthly air quality parameters. Overall, it was observed that the SARIMA approach was an efficient tool for forecasting air quality. Finally, the results showed that the trends of ozone obtained from terrestrial and satellite observations throughout 2021 were slightly different due to the contribution of the tropospheric ozone precursor concentration and meteorology conditions.
Additional Links: PMID-36819757
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36819757,
year = {2023},
author = {Borhani, F and Shafiepour Motlagh, M and Ehsani, AH and Rashidi, Y and Ghahremanloo, M and Amani, M and Moghimi, A},
title = {Current Status and Future Forecast of Short-lived Climate-Forced Ozone in Tehran, Iran, derived from Ground-Based and Satellite Observations.},
journal = {Water, air, and soil pollution},
volume = {234},
number = {2},
pages = {134},
pmid = {36819757},
issn = {0049-6979},
abstract = {In this study, the distribution and alterations of ozone concentrations in Tehran, Iran, in 2021 were investigated. The impacts of precursors (i.e., CO, NO2, and NO) on ozone were examined using the data collected over 12 months (i.e., January 2021 to December 2021) from 21 stations of the Air Quality Control Company (AQCC). The results of monthly heat mapping of tropospheric ozone concentrations indicated the lowest value in December and the highest value in July. The lowest and highest seasonal concentrations were in winter and summer, respectively. Moreover, there was a negative correlation between ozone and its precursors. The Inverse Distance Weighting (IDW) method was then implemented to obtain air pollution zoning maps. Then, ozone concentration modeled by the IDW method was compared with the average monthly change of total column density of ozone derived from Sentinel-5 satellite data in the Google Earth Engine (GEE) cloud platform. A good agreement was discovered despite the harsh circumstances that both ground-based and satellite measurements were subjected to. The results obtained from both datasets showed that the west of the city of Tehran had the highest averaged O3 concentration. In this study, the status of the concentration of ozone precursors and tropospheric ozone in 2022 was also predicted. For this purpose, the Box-Jenkins Seasonal Autoregressive Integrated Moving Average (SARIMA) approach was implemented to predict the monthly air quality parameters. Overall, it was observed that the SARIMA approach was an efficient tool for forecasting air quality. Finally, the results showed that the trends of ozone obtained from terrestrial and satellite observations throughout 2021 were slightly different due to the contribution of the tropospheric ozone precursor concentration and meteorology conditions.},
}
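The zoning maps in the study rest on Inverse Distance Weighting; the sketch below is a minimal IDW interpolator, included only to make the method concrete, with invented station coordinates and ozone values.
import numpy as np

def idw(stations_xy, values, query_xy, power=2.0, eps=1e-12):
    """Estimate values at query points as distance-weighted averages of station readings."""
    d = np.linalg.norm(query_xy[:, None, :] - stations_xy[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power                  # closer stations weigh more
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])   # km, invented
ozone = np.array([35.0, 48.0, 41.0, 55.0])                                  # ppb, invented
grid = np.array([[2.0, 3.0], [5.0, 5.0], [9.0, 8.0]])
print(np.round(idw(stations, ozone, grid), 1))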
RevDate: 2023-02-23
Use of accounting concepts to study research: return on investment in XSEDE, a US cyberinfrastructure service.
Scientometrics [Epub ahead of print].
This paper uses accounting concepts-particularly the concept of Return on Investment (ROI)-to reveal the quantitative value of scientific research pertaining to a major US cyberinfrastructure project (XSEDE-the eXtreme Science and Engineering Discovery Environment). XSEDE provides operational and support services for advanced information technology systems, cloud systems, and supercomputers supporting non-classified US research, with an average budget for XSEDE of US$20M+ per year over the period studied (2014-2021). To assess the financial effectiveness of these services, we calculated a proxy for ROI, and converted quantitative measures of XSEDE service delivery into financial values using costs for service from the US marketplace. We calculated two estimates of ROI: a Conservative Estimate, functioning as a lower bound and using publicly available data for a lower valuation of XSEDE services; and a Best Available Estimate, functioning as a more accurate estimate, but using some unpublished valuation data. Using the largest dataset assembled for analysis of ROI for a cyberinfrastructure project, we found a Conservative Estimate of ROI of 1.87, and a Best Available Estimate of ROI of 3.24. Through accounting methods, we show that XSEDE services offer excellent value to the US government, that the services offered uniquely by XSEDE (that is, not otherwise available for purchase) were the most valuable to the facilitation of US research activities, and that accounting-based concepts hold great value for understanding the mechanisms of scientific research generally.
Additional Links: PMID-36818051
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36818051,
year = {2023},
author = {Stewart, CA and Costa, CM and Wernert, JA and Snapp-Childs, W and Bland, M and Blood, P and Campbell, T and Couvares, P and Fischer, J and Hancock, DY and Hart, DL and Jankowski, H and Knepper, R and McMullen, DF and Mehringer, S and Pierce, M and Rogers, G and Sinkovits, RS and Towns, J},
title = {Use of accounting concepts to study research: return on investment in XSEDE, a US cyberinfrastructure service.},
journal = {Scientometrics},
volume = {},
number = {},
pages = {1-31},
pmid = {36818051},
issn = {0138-9130},
abstract = {This paper uses accounting concepts-particularly the concept of Return on Investment (ROI)-to reveal the quantitative value of scientific research pertaining to a major US cyberinfrastructure project (XSEDE-the eXtreme Science and Engineering Discovery Environment). XSEDE provides operational and support services for advanced information technology systems, cloud systems, and supercomputers supporting non-classified US research, with an average budget for XSEDE of US$20M+ per year over the period studied (2014-2021). To assess the financial effectiveness of these services, we calculated a proxy for ROI, and converted quantitative measures of XSEDE service delivery into financial values using costs for service from the US marketplace. We calculated two estimates of ROI: a Conservative Estimate, functioning as a lower bound and using publicly available data for a lower valuation of XSEDE services; and a Best Available Estimate, functioning as a more accurate estimate, but using some unpublished valuation data. Using the largest dataset assembled for analysis of ROI for a cyberinfrastructure project, we found a Conservative Estimate of ROI of 1.87, and a Best Available Estimate of ROI of 3.24. Through accounting methods, we show that XSEDE services offer excellent value to the US government, that the services offered uniquely by XSEDE (that is, not otherwise available for purchase) were the most valuable to the facilitation of US research activities, and that accounting-based concepts hold great value for understanding the mechanisms of scientific research generally.},
}
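As a purely arithmetic illustration of the ROI proxy described above (market-priced value of delivered services divided by programme cost), with all figures invented rather than taken from the paper:
services = {                       # market-priced value of delivered services, $M/year (invented)
    "compute_allocation_support": 18.0,
    "training_and_consulting": 9.5,
    "operations_and_security": 12.0,
}
annual_cost = 20.5                 # programme cost, $M/year (invented)

roi = sum(services.values()) / annual_cost
print(f"ROI proxy: {roi:.2f} (value returned per dollar invested)")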
RevDate: 2023-02-22
Longitudinally tracking personal physiomes for precision management of childhood epilepsy.
PLOS digital health, 1(12):e0000161.
Our current understanding of human physiology and activities is largely derived from sparse and discrete individual clinical measurements. To achieve precise, proactive, and effective health management of an individual, longitudinal, and dense tracking of personal physiomes and activities is required, which is only feasible by utilizing wearable biosensors. As a pilot study, we implemented a cloud computing infrastructure to integrate wearable sensors, mobile computing, digital signal processing, and machine learning to improve early detection of seizure onsets in children. We recruited 99 children diagnosed with epilepsy and longitudinally tracked them at single-second resolution using a wearable wristband, and prospectively acquired more than one billion data points. This unique dataset offered us an opportunity to quantify physiological dynamics (e.g., heart rate, stress response) across age groups and to identify physiological irregularities upon epilepsy onset. The high-dimensional personal physiome and activity profiles displayed a clustering pattern anchored by patient age groups. These signatory patterns included strong age and sex-specific effects on varying circadian rhythms and stress responses across major childhood developmental stages. For each patient, we further compared the physiological and activity profiles associated with seizure onsets with the personal baseline and developed a machine learning framework to accurately capture these onset moments. The performance of this framework was further replicated in another independent patient cohort. We next referenced our predictions with the electroencephalogram (EEG) signals on selected patients and demonstrated that our approach could detect subtle seizures not recognized by humans and could detect seizures prior to clinical onset. Our work demonstrated the feasibility of a real-time mobile infrastructure in a clinical setting, which has the potential to be valuable in caring for epileptic patients. Extension of such a system has the potential to be leveraged as a health management device or longitudinal phenotyping tool in clinical cohort studies.
Additional Links: PMID-36812648
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid36812648,
year = {2022},
author = {Jiang, P and Gao, F and Liu, S and Zhang, S and Zhang, X and Xia, Z and Zhang, W and Jiang, T and Zhu, JL and Zhang, Z and Shu, Q and Snyder, M and Li, J},
title = {Longitudinally tracking personal physiomes for precision management of childhood epilepsy.},
journal = {PLOS digital health},
volume = {1},
number = {12},
pages = {e0000161},
pmid = {36812648},
issn = {2767-3170},
abstract = {Our current understanding of human physiology and activities is largely derived from sparse and discrete individual clinical measurements. To achieve precise, proactive, and effective health management of an individual, longitudinal, and dense tracking of personal physiomes and activities is required, which is only feasible by utilizing wearable biosensors. As a pilot study, we implemented a cloud computing infrastructure to integrate wearable sensors, mobile computing, digital signal processing, and machine learning to improve early detection of seizure onsets in children. We recruited 99 children diagnosed with epilepsy and longitudinally tracked them at single-second resolution using a wearable wristband, and prospectively acquired more than one billion data points. This unique dataset offered us an opportunity to quantify physiological dynamics (e.g., heart rate, stress response) across age groups and to identify physiological irregularities upon epilepsy onset. The high-dimensional personal physiome and activity profiles displayed a clustering pattern anchored by patient age groups. These signatory patterns included strong age and sex-specific effects on varying circadian rhythms and stress responses across major childhood developmental stages. For each patient, we further compared the physiological and activity profiles associated with seizure onsets with the personal baseline and developed a machine learning framework to accurately capture these onset moments. The performance of this framework was further replicated in another independent patient cohort. We next referenced our predictions with the electroencephalogram (EEG) signals on selected patients and demonstrated that our approach could detect subtle seizures not recognized by humans and could detect seizures prior to clinical onset. Our work demonstrated the feasibility of a real-time mobile infrastructure in a clinical setting, which has the potential to be valuable in caring for epileptic patients. Extension of such a system has the potential to be leveraged as a health management device or longitudinal phenotyping tool in clinical cohort studies.},
}
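As an editorial illustration of the baseline-deviation idea described in the abstract above, the following minimal Python sketch flags windows of wearable heart-rate data that stray far from a patient's personal baseline. The window length, z-score rule, and synthetic signal are assumptions for illustration only; they are not the authors' machine learning framework.

```python
# Hypothetical sketch: flag physiological windows that deviate from a
# patient's personal baseline. Feature choice, window size, and the simple
# z-score rule are illustrative assumptions, not the published model.
import numpy as np

def personal_baseline(heart_rate: np.ndarray):
    """Summarize a patient's baseline heart-rate distribution."""
    return heart_rate.mean(), heart_rate.std()

def flag_irregular_windows(heart_rate: np.ndarray, window_s: int = 60, z_thresh: float = 3.0):
    """Return indices of windows whose mean deviates strongly from baseline."""
    mu, sigma = personal_baseline(heart_rate)
    n_windows = len(heart_rate) // window_s
    flagged = []
    for w in range(n_windows):
        segment = heart_rate[w * window_s:(w + 1) * window_s]
        z = abs(segment.mean() - mu) / (sigma + 1e-9)
        if z > z_thresh:
            flagged.append(w)
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.normal(85, 5, 3600)          # one hour of 1 Hz heart-rate samples
    hr[1800:1900] += 40                    # injected irregularity
    print(flag_irregular_windows(hr))      # windows overlapping samples 1800-1900
```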
RevDate: 2023-02-22
Artificial intelligence model for analyzing colonic endoscopy images to detect changes associated with irritable bowel syndrome.
PLOS digital health, 2(2):e0000058 pii:PDIG-D-22-00137.
IBS is not considered to be an organic disease and usually shows no abnormality on lower gastrointestinal endoscopy, although biofilm formation, dysbiosis, and histological microinflammation have recently been reported in patients with IBS. In this study, we investigated whether an artificial intelligence (AI) colorectal image model can identify minute endoscopic changes, which cannot typically be detected by human investigators, that are associated with IBS. Study subjects were identified based on electronic medical records and categorized as IBS (Group I; n = 11), IBS with predominant constipation (IBS-C; Group C; n = 12), and IBS with predominant diarrhea (IBS-D; Group D; n = 12). The study subjects had no other diseases. Colonoscopy images from IBS patients and from asymptomatic healthy subjects (Group N; n = 88) were obtained. Google Cloud Platform AutoML Vision (single-label classification) was used to construct AI image models to calculate sensitivity, specificity, predictive value, and AUC. A total of 2479, 382, 538, and 484 images were randomly selected for Groups N, I, C and D, respectively. The AUC of the model discriminating between Group N and I was 0.95. Sensitivity, specificity, positive predictive value, and negative predictive value of Group I detection were 30.8%, 97.6%, 66.7%, and 90.2%, respectively. The overall AUC of the model discriminating between Groups N, C, and D was 0.83; sensitivity, specificity, and positive predictive value of Group N were 87.5%, 46.2%, and 79.9%, respectively. Using the image AI model, colonoscopy images of IBS could be discriminated from healthy subjects at AUC 0.95. Prospective studies are needed to further validate whether this externally validated model has similar diagnostic capabilities at other facilities and whether it can be used to determine treatment efficacy.
Additional Links: PMID-36812592
@article {pmid36812592,
year = {2023},
author = {Tabata, K and Mihara, H and Nanjo, S and Motoo, I and Ando, T and Teramoto, A and Fujinami, H and Yasuda, I},
title = {Artificial intelligence model for analyzing colonic endoscopy images to detect changes associated with irritable bowel syndrome.},
journal = {PLOS digital health},
volume = {2},
number = {2},
pages = {e0000058},
doi = {10.1371/journal.pdig.0000058},
pmid = {36812592},
issn = {2767-3170},
abstract = {IBS is not considered to be an organic disease and usually shows no abnormality on lower gastrointestinal endoscopy, although biofilm formation, dysbiosis, and histological microinflammation have recently been reported in patients with IBS. In this study, we investigated whether an artificial intelligence (AI) colorectal image model can identify minute endoscopic changes, which cannot typically be detected by human investigators, that are associated with IBS. Study subjects were identified based on electronic medical records and categorized as IBS (Group I; n = 11), IBS with predominant constipation (IBS-C; Group C; n = 12), and IBS with predominant diarrhea (IBS-D; Group D; n = 12). The study subjects had no other diseases. Colonoscopy images from IBS patients and from asymptomatic healthy subjects (Group N; n = 88) were obtained. Google Cloud Platform AutoML Vision (single-label classification) was used to construct AI image models to calculate sensitivity, specificity, predictive value, and AUC. A total of 2479, 382, 538, and 484 images were randomly selected for Groups N, I, C and D, respectively. The AUC of the model discriminating between Group N and I was 0.95. Sensitivity, specificity, positive predictive value, and negative predictive value of Group I detection were 30.8%, 97.6%, 66.7%, and 90.2%, respectively. The overall AUC of the model discriminating between Groups N, C, and D was 0.83; sensitivity, specificity, and positive predictive value of Group N were 87.5%, 46.2%, and 79.9%, respectively. Using the image AI model, colonoscopy images of IBS could be discriminated from healthy subjects at AUC 0.95. Prospective studies are needed to further validate whether this externally validated model has similar diagnostic capabilities at other facilities and whether it can be used to determine treatment efficacy.},
}
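The metrics reported in the abstract above (sensitivity, specificity, PPV, NPV, AUC) can be reproduced from any model's labels and predicted probabilities; the short sketch below shows the standard calculation with scikit-learn on synthetic data. The AutoML Vision model itself is trained and served on Google Cloud and is not reproduced here.

```python
# Minimal sketch: computing sensitivity, specificity, PPV, NPV, and AUC from
# ground-truth labels and predicted probabilities. Data are synthetic.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])        # 1 = IBS, 0 = healthy
y_prob = np.array([0.9, 0.4, 0.8, 0.2, 0.1, 0.3, 0.05, 0.7, 0.6, 0.2])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
auc = roc_auc_score(y_true, y_prob)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} npv={npv:.2f} auc={auc:.2f}")
```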
RevDate: 2023-02-22
Open data and algorithms for open science in AI-driven molecular informatics.
Current opinion in structural biology, 79:102542 pii:S0959-440X(23)00016-7 [Epub ahead of print].
Recent years have seen a sharp increase in the development of deep learning and artificial intelligence-based molecular informatics. There has been a growing interest in applying deep learning to several subfields, including the digital transformation of synthetic chemistry, extraction of chemical information from the scientific literature, and AI in natural product-based drug discovery. The application of AI to molecular informatics is still constrained by the fact that most of the data used for training and testing deep learning models are not available as FAIR and open data. As open science practices continue to grow in popularity, initiatives which support FAIR and open data as well as open-source software have emerged. It is becoming increasingly important for researchers in the field of molecular informatics to embrace open science and to submit data and software in open repositories. With the advent of open-source deep learning frameworks and cloud computing platforms, academic researchers are now able to deploy and test their own deep learning models with ease. With the development of new and faster hardware for deep learning and the increasing number of initiatives towards digital research data management infrastructures, as well as a culture promoting open data, open source, and open science, AI-driven molecular informatics will continue to grow. This review examines the current state of open data and open algorithms in molecular informatics, as well as ways in which they could be improved in future.
Additional Links: PMID-36805192
@article {pmid36805192,
year = {2023},
author = {Brinkhaus, HO and Rajan, K and Schaub, J and Zielesny, A and Steinbeck, C},
title = {Open data and algorithms for open science in AI-driven molecular informatics.},
journal = {Current opinion in structural biology},
volume = {79},
number = {},
pages = {102542},
doi = {10.1016/j.sbi.2023.102542},
pmid = {36805192},
issn = {1879-033X},
abstract = {Recent years have seen a sharp increase in the development of deep learning and artificial intelligence-based molecular informatics. There has been a growing interest in applying deep learning to several subfields, including the digital transformation of synthetic chemistry, extraction of chemical information from the scientific literature, and AI in natural product-based drug discovery. The application of AI to molecular informatics is still constrained by the fact that most of the data used for training and testing deep learning models are not available as FAIR and open data. As open science practices continue to grow in popularity, initiatives which support FAIR and open data as well as open-source software have emerged. It is becoming increasingly important for researchers in the field of molecular informatics to embrace open science and to submit data and software in open repositories. With the advent of open-source deep learning frameworks and cloud computing platforms, academic researchers are now able to deploy and test their own deep learning models with ease. With the development of new and faster hardware for deep learning and the increasing number of initiatives towards digital research data management infrastructures, as well as a culture promoting open data, open source, and open science, AI-driven molecular informatics will continue to grow. This review examines the current state of open data and open algorithms in molecular informatics, as well as ways in which they could be improved in future.},
}
RevDate: 2023-02-22
CmpDate: 2023-02-22
Deep reinforcement learning-based pairwise DNA sequence alignment method compatible with embedded edge devices.
Scientific reports, 13(1):2773.
Sequence alignment is an essential component of bioinformatics, for identifying regions of similarity that may indicate functional, structural, or evolutionary relationships between the sequences. Genome-based diagnostics relying on DNA sequencing have benefited hugely from the boom in computing power in recent decades, particularly due to cloud-computing and the rise of graphics processing units (GPUs) and other advanced computing platforms for running advanced algorithms. Translating the success of such breakthroughs in diagnostics to affordable solutions for low-cost healthcare requires development of algorithms that can operate on the edge instead of in the cloud, using low-cost and low-power electronic systems such as microcontrollers and field programmable gate arrays (FPGAs). In this work, we present EdgeAlign, a deep reinforcement learning based method for performing pairwise DNA sequence alignment on stand-alone edge devices. EdgeAlign uses deep reinforcement learning to train a deep Q-network (DQN) agent for performing sequence alignment on fixed length sub-sequences, using a sliding window that is scanned over the length of the entire sequence. The hardware resource-consumption for implementing this scheme is thus independent of the lengths of the sequences to be aligned, and is further optimized using a novel AutoML based method for neural network model size reduction. Unlike other algorithms for sequence alignment reported in literature, the model demonstrated in this work is highly compact and deployed on two edge devices (NVIDIA Jetson Nano Developer Kit and Digilent Arty A7-100T, containing Xilinx XC7A35T Artix-7 FPGA) for demonstration of alignment for sequences from the publicly available Influenza sequences at the National Center for Biotechnology Information (NCBI) Virus Data Hub.
Additional Links: PMID-36797269
@article {pmid36797269,
year = {2023},
author = {Lall, A and Tallur, S},
title = {Deep reinforcement learning-based pairwise DNA sequence alignment method compatible with embedded edge devices.},
journal = {Scientific reports},
volume = {13},
number = {1},
pages = {2773},
pmid = {36797269},
issn = {2045-2322},
mesh = {Sequence Alignment ; *Algorithms ; *Neural Networks, Computer ; Computers ; DNA ; },
abstract = {Sequence alignment is an essential component of bioinformatics, for identifying regions of similarity that may indicate functional, structural, or evolutionary relationships between the sequences. Genome-based diagnostics relying on DNA sequencing have benefited hugely from the boom in computing power in recent decades, particularly due to cloud-computing and the rise of graphics processing units (GPUs) and other advanced computing platforms for running advanced algorithms. Translating the success of such breakthroughs in diagnostics to affordable solutions for low-cost healthcare requires development of algorithms that can operate on the edge instead of in the cloud, using low-cost and low-power electronic systems such as microcontrollers and field programmable gate arrays (FPGAs). In this work, we present EdgeAlign, a deep reinforcement learning based method for performing pairwise DNA sequence alignment on stand-alone edge devices. EdgeAlign uses deep reinforcement learning to train a deep Q-network (DQN) agent for performing sequence alignment on fixed length sub-sequences, using a sliding window that is scanned over the length of the entire sequence. The hardware resource-consumption for implementing this scheme is thus independent of the lengths of the sequences to be aligned, and is further optimized using a novel AutoML based method for neural network model size reduction. Unlike other algorithms for sequence alignment reported in literature, the model demonstrated in this work is highly compact and deployed on two edge devices (NVIDIA Jetson Nano Developer Kit and Digilent Arty A7-100T, containing Xilinx XC7A35T Artix-7 FPGA) for demonstration of alignment for sequences from the publicly available Influenza sequences at the National Center for Biotechnology Information (NCBI) Virus Data Hub.},
}
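The key design point in the abstract above is that alignment is decomposed into fixed-length sub-sequence windows slid over the sequences, so hardware cost does not grow with sequence length. The sketch below shows only that decomposition; the trivial match-count "agent" is a stand-in for the trained deep Q-network and the window length is an assumed value.

```python
# Sketch of the sliding-window decomposition behind EdgeAlign: alignment is
# performed on fixed-length sub-sequences so resource use is independent of
# total sequence length. The match-count "agent" below is a placeholder for
# the paper's DQN agent.
WINDOW = 8  # fixed sub-sequence length handled by the agent (assumed value)

def agent_score(sub_a: str, sub_b: str) -> int:
    """Placeholder for the DQN agent: score one fixed-length window pair."""
    return sum(1 for x, y in zip(sub_a, sub_b) if x == y)

def windowed_alignment_score(seq_a: str, seq_b: str) -> int:
    """Slide a fixed window over both sequences and accumulate agent scores."""
    total = 0
    for start in range(0, min(len(seq_a), len(seq_b)), WINDOW):
        total += agent_score(seq_a[start:start + WINDOW],
                             seq_b[start:start + WINDOW])
    return total

print(windowed_alignment_score("ACGTACGTACGTACGT", "ACGTTCGTACGAACGT"))
```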
MeSH Terms:
Sequence Alignment
*Algorithms
*Neural Networks, Computer
Computers
DNA
RevDate: 2023-02-16
A smart IoMT based architecture for E-healthcare patient monitoring system using artificial intelligence algorithms.
Frontiers in physiology, 14:1125952.
Generally, cloud computing is integrated with wireless sensor networks to enable monitoring systems, and it improves the quality of service. The sensed patient data are monitored with biosensors without considering the patient data type, and this minimizes the work of hospitals and physicians. Wearable sensor devices and the Internet of Medical Things (IoMT) have changed the health service, resulting in faster monitoring, prediction, diagnosis, and treatment. Nevertheless, there have been difficulties that need to be resolved by the use of AI methods. The primary goal of this study is to introduce an AI-powered IoMT telemedicine infrastructure for E-healthcare. In this paper, data are initially collected from the patient's body using the sensing devices, transmitted through the gateway/Wi-Fi, and stored in an IoMT cloud repository. The stored information is then acquired and preprocessed to refine the collected data. Features are extracted from the preprocessed data by means of high-dimensional linear discriminant analysis (LDA), and the best features are selected using a reconfigured multi-objective cuckoo search algorithm (CSA). The prediction of abnormal/normal data is made using a Hybrid ResNet 18 and GoogleNet classifier (HRGC). A decision is then made on whether or not to send an alert to hospitals/healthcare personnel. If the expected results are satisfactory, the participant information is saved on the internet for later use. Finally, a performance analysis is carried out to validate the efficiency of the proposed mechanism.
Additional Links: PMID-36793418
@article {pmid36793418,
year = {2023},
author = {A, A and Dahan, F and Alroobaea, R and Alghamdi, WY and Mustafa Khaja Mohammed, and Hajjej, F and Deema Mohammed Alsekait, and Raahemifar, K},
title = {A smart IoMT based architecture for E-healthcare patient monitoring system using artificial intelligence algorithms.},
journal = {Frontiers in physiology},
volume = {14},
number = {},
pages = {1125952},
pmid = {36793418},
issn = {1664-042X},
abstract = {Generally, cloud computing is integrated with wireless sensor networks to enable monitoring systems, and it improves the quality of service. The sensed patient data are monitored with biosensors without considering the patient data type, and this minimizes the work of hospitals and physicians. Wearable sensor devices and the Internet of Medical Things (IoMT) have changed the health service, resulting in faster monitoring, prediction, diagnosis, and treatment. Nevertheless, there have been difficulties that need to be resolved by the use of AI methods. The primary goal of this study is to introduce an AI-powered IoMT telemedicine infrastructure for E-healthcare. In this paper, data are initially collected from the patient's body using the sensing devices, transmitted through the gateway/Wi-Fi, and stored in an IoMT cloud repository. The stored information is then acquired and preprocessed to refine the collected data. Features are extracted from the preprocessed data by means of high-dimensional linear discriminant analysis (LDA), and the best features are selected using a reconfigured multi-objective cuckoo search algorithm (CSA). The prediction of abnormal/normal data is made using a Hybrid ResNet 18 and GoogleNet classifier (HRGC). A decision is then made on whether or not to send an alert to hospitals/healthcare personnel. If the expected results are satisfactory, the participant information is saved on the internet for later use. Finally, a performance analysis is carried out to validate the efficiency of the proposed mechanism.},
}
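To make the pipeline shape described in the abstract above concrete, the sketch below chains feature extraction and classification with scikit-learn on synthetic data. Its LinearDiscriminantAnalysis and gradient-boosting classifier stand in for the paper's LDA variant, cuckoo-search feature selection, and hybrid ResNet 18/GoogleNet classifier, which are not reproduced.

```python
# Illustrative sketch of the stage layout only: LDA-based feature extraction
# followed by a normal/abnormal classifier. Models and data are stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))                 # synthetic sensor features
y = (X[:, 0] + X[:, 3] > 0.5).astype(int)      # synthetic normal/abnormal label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                      GradientBoostingClassifier(random_state=1))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```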
RevDate: 2023-02-15
ElasticBLAST: Accelerating Sequence Search via Cloud Computing.
bioRxiv : the preprint server for biology pii:2023.01.04.522777.
BACKGROUND: Biomedical researchers use alignments produced by BLAST (Basic Local Alignment Search Tool) to categorize their query sequences. Producing such alignments is an essential bioinformatics task that is well suited for the cloud. The cloud can perform many calculations quickly as well as store and access large volumes of data. Bioinformaticians can also use it to collaborate with other researchers, sharing their results, datasets and even their pipelines on a common platform.
RESULTS: We present ElasticBLAST, a cloud native application to perform BLAST alignments in the cloud. ElasticBLAST can handle anywhere from a few to many thousands of queries and run the searches on thousands of virtual CPUs (if desired), deleting resources when it is done. It uses cloud native tools for orchestration and can request discounted instances, lowering cloud costs for users. It is supported on Amazon Web Services and Google Cloud Platform. It can search BLAST databases that are user provided or from the National Center for Biotechnology Information.
CONCLUSION: We show that ElasticBLAST is a useful application that can efficiently perform BLAST searches for the user in the cloud, demonstrating that with two examples. At the same time, it hides much of the complexity of working in the cloud, lowering the threshold to move work to the cloud.
Additional Links: PMID-36789435
@article {pmid36789435,
year = {2023},
author = {Camacho, C and Boratyn, GM and Joukov, V and Alvarez, RV and Madden, TL},
title = {ElasticBLAST: Accelerating Sequence Search via Cloud Computing.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
doi = {10.1101/2023.01.04.522777},
pmid = {36789435},
abstract = {BACKGROUND: Biomedical researchers use alignments produced by BLAST (Basic Local Alignment Search Tool) to categorize their query sequences. Producing such alignments is an essential bioinformatics task that is well suited for the cloud. The cloud can perform many calculations quickly as well as store and access large volumes of data. Bioinformaticians can also use it to collaborate with other researchers, sharing their results, datasets and even their pipelines on a common platform.
RESULTS: We present ElasticBLAST, a cloud native application to perform BLAST alignments in the cloud. ElasticBLAST can handle anywhere from a few to many thousands of queries and run the searches on thousands of virtual CPUs (if desired), deleting resources when it is done. It uses cloud native tools for orchestration and can request discounted instances, lowering cloud costs for users. It is supported on Amazon Web Services and Google Cloud Platform. It can search BLAST databases that are user provided or from the National Center for Biotechnology Information.
CONCLUSION: We show that ElasticBLAST is a useful application that can efficiently perform BLAST searches for the user in the cloud, demonstrating that with two examples. At the same time, it hides much of the complexity of working in the cloud, lowering the threshold to move work to the cloud.},
}
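The scaling idea in the entry above is to spread many queries across many cloud workers. The sketch below shows only the generic batching step, splitting a multi-FASTA query file into worker-sized chunks; it is not the ElasticBLAST interface, and the file names and batch size are arbitrary assumptions.

```python
# Generic sketch of query batching for cloud-scaled BLAST-style searches:
# split a multi-FASTA file into batches that independent workers could
# process in parallel. Not the ElasticBLAST CLI or its configuration.
from pathlib import Path

def split_fasta(path: str, queries_per_batch: int = 100):
    """Yield lists of FASTA records, each list sized for one worker."""
    batch, record = [], []
    for line in Path(path).read_text().splitlines():
        if line.startswith(">") and record:
            batch.append("\n".join(record))
            record = []
            if len(batch) == queries_per_batch:
                yield batch
                batch = []
        record.append(line)
    if record:
        batch.append("\n".join(record))
    if batch:
        yield batch

# Example usage (assumes a local queries.fa): write one file per batch,
# ready to hand to a job queue or cloud worker.
# for i, batch in enumerate(split_fasta("queries.fa")):
#     Path(f"batch_{i:04d}.fa").write_text("\n".join(batch) + "\n")
```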
RevDate: 2023-02-15
Efficiency and optimization of government service resource allocation in a cloud computing environment.
Journal of cloud computing (Heidelberg, Germany), 12(1):18.
According to the connotation and structure of government service resources, data on government service resources in L city from 2019 to 2021 are used to calculate the efficiency of government service resource allocation in each county and region over different periods, in particular by adding government cloud platform and cloud computing resources to the government service resource data and applying the data envelopment analysis (DEA) method, which has practical significance for the development and innovation of government services. On this basis, patterns and evolutionary trends of government service resource allocation efficiency in each region during the study period are analyzed and discussed. The results are as follows. i) The overall efficiency of government service resource allocation in L city is not high, showing a fluctuating but generally increasing annual trend. ii) Relative differences in the allocation efficiency of government service resources are a common feature of regional development, and their existence and evolution directly or indirectly reflect factors such as economic strength and reform effort. iii) Analysis of specific points indicates that increased input does not necessarily lead to increased efficiency; some indicators show insufficient input or redundant output. Therefore, optimizing the allocation of physical, human, and financial resources, together with the intelligent online processing of government services enabled by the adoption of a government cloud platform and cloud computing resources, is the current objective choice for maximizing the efficiency of government service resource allocation.
Additional Links: PMID-36789367
@article {pmid36789367,
year = {2023},
author = {Guo, YG and Yin, Q and Wang, Y and Xu, J and Zhu, L},
title = {Efficiency and optimization of government service resource allocation in a cloud computing environment.},
journal = {Journal of cloud computing (Heidelberg, Germany)},
volume = {12},
number = {1},
pages = {18},
pmid = {36789367},
issn = {2192-113X},
abstract = {According to the connotation and structure of government service resources, data on government service resources in L city from 2019 to 2021 are used to calculate the efficiency of government service resource allocation in each county and region over different periods, in particular by adding government cloud platform and cloud computing resources to the government service resource data and applying the data envelopment analysis (DEA) method, which has practical significance for the development and innovation of government services. On this basis, patterns and evolutionary trends of government service resource allocation efficiency in each region during the study period are analyzed and discussed. The results are as follows. i) The overall efficiency of government service resource allocation in L city is not high, showing a fluctuating but generally increasing annual trend. ii) Relative differences in the allocation efficiency of government service resources are a common feature of regional development, and their existence and evolution directly or indirectly reflect factors such as economic strength and reform effort. iii) Analysis of specific points indicates that increased input does not necessarily lead to increased efficiency; some indicators show insufficient input or redundant output. Therefore, optimizing the allocation of physical, human, and financial resources, together with the intelligent online processing of government services enabled by the adoption of a government cloud platform and cloud computing resources, is the current objective choice for maximizing the efficiency of government service resource allocation.},
}
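For readers unfamiliar with DEA, the efficiency score of each decision-making unit (here, a county or region) in the input-oriented CCR model is the optimum of a small linear program. The sketch below solves it with scipy on synthetic inputs and outputs; the paper's actual government-service indicators are not reproduced.

```python
# Minimal sketch of an input-oriented CCR DEA model solved as a linear
# program: minimize theta s.t. sum_j lam_j*x_ij <= theta*x_io (inputs) and
# sum_j lam_j*y_rj >= y_ro (outputs), lam_j >= 0. Data below are synthetic.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU `o` given inputs X (m x n) and outputs Y (s x n)."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                           # input constraints
        A_ub.append(np.concatenate(([-X[i, o]], X[i, :])))
        b_ub.append(0.0)
    for r in range(s):                           # output constraints
        A_ub.append(np.concatenate(([0.0], -Y[r, :])))
        b_ub.append(-Y[r, o])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun                               # theta in (0, 1]; 1 = efficient

X = np.array([[20.0, 30.0, 25.0, 40.0],          # e.g. staff per unit
              [5.0,  8.0,  6.0,  9.0]])          # e.g. budget per unit
Y = np.array([[100.0, 110.0, 120.0, 90.0]])      # e.g. services processed
for dmu in range(X.shape[1]):
    print(f"DMU {dmu}: efficiency = {ccr_efficiency(X, Y, dmu):.3f}")
```

A score of 1 marks a unit on the efficient frontier; scores below 1 indicate how much inputs could be scaled down while keeping outputs, which is the sense in which "increased input does not necessarily lead to increased efficiency".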
RevDate: 2023-02-15
Towards Device Agnostic Detection of Stress and Craving in Patients with Substance Use Disorder.
Proceedings of the ... Annual Hawaii International Conference on System Sciences. Annual Hawaii International Conference on System Sciences, 2023:3156-3163.
Novel technologies have great potential to improve the treatment of individuals with substance use disorder (SUD) and to reduce the current high rate of relapse (i.e. return to drug use). Wearable sensor-based systems that continuously measure physiology can provide information about behavior and opportunities for real-time interventions. We have previously developed an mHealth system which includes a wearable sensor, a mobile phone app, and a cloud-based server with embedded machine learning algorithms which detect stress and craving. The system functions as a just-in-time intervention tool to help patients de-escalate and as a tool for clinicians to tailor treatment based on stress and craving patterns observed. However, in our pilot work we found that to deploy the system to diverse socioeconomic populations and to increase usability, the system must be able to work efficiently with cost-effective and popular commercial wearable devices. To make the system device agnostic, methods to transform the data from a commercially available wearable for use in algorithms developed from research grade wearable sensor are proposed. The accuracy of these transformations in detecting stress and craving in individuals with SUD is further explored.
Additional Links: PMID-36788990
@article {pmid36788990,
year = {2023},
author = {Shrestha, S and Stapp, J and Taylor, M and Leach, R and Carreiro, S and Indic, P},
title = {Towards Device Agnostic Detection of Stress and Craving in Patients with Substance Use Disorder.},
journal = {Proceedings of the ... Annual Hawaii International Conference on System Sciences. Annual Hawaii International Conference on System Sciences},
volume = {2023},
number = {},
pages = {3156-3163},
pmid = {36788990},
issn = {1530-1605},
abstract = {Novel technologies have great potential to improve the treatment of individuals with substance use disorder (SUD) and to reduce the current high rate of relapse (i.e. return to drug use). Wearable sensor-based systems that continuously measure physiology can provide information about behavior and opportunities for real-time interventions. We have previously developed an mHealth system which includes a wearable sensor, a mobile phone app, and a cloud-based server with embedded machine learning algorithms which detect stress and craving. The system functions as a just-in-time intervention tool to help patients de-escalate and as a tool for clinicians to tailor treatment based on stress and craving patterns observed. However, in our pilot work we found that to deploy the system to diverse socioeconomic populations and to increase usability, the system must be able to work efficiently with cost-effective and popular commercial wearable devices. To make the system device agnostic, methods to transform the data from a commercially available wearable for use in algorithms developed from research grade wearable sensor are proposed. The accuracy of these transformations in detecting stress and craving in individuals with SUD is further explored.},
}
RevDate: 2023-02-14
CmpDate: 2023-02-14
Implementation of a full-color holographic system using RGB-D salient object detection and divided point cloud gridding.
Optics express, 31(2):1641-1655.
At present, a real objects-based full-color holographic system usually uses a digital single-lens reflex (DSLR) camera array or depth camera to collect data. It then relies on a spatial light modulator to modulate the input light source for the reconstruction of the 3-D scene of the real objects. However, the main challenges faced by high-quality holographic 3-D displays have been limited generation speed and the low accuracy of computer-generated holograms. This research generates more effective and accurate point cloud data by developing an RGB-D salient object detection model in the acquisition unit. In addition, a divided point cloud gridding method is proposed to enhance the computing speed of hologram generation. In the RGB channels, we categorized each object point into depth grids with identical depth values. The depth grids are divided into M × N parts, and only the effective parts are calculated. Compared with traditional methods, the calculation time is dramatically reduced. The feasibility of our proposed approach is established through experiments.
Additional Links: PMID-36785195
@article {pmid36785195,
year = {2023},
author = {Zhao, Y and Bu, JW and Liu, W and Ji, JH and Yang, QH and Lin, SF},
title = {Implementation of a full-color holographic system using RGB-D salient object detection and divided point cloud gridding.},
journal = {Optics express},
volume = {31},
number = {2},
pages = {1641-1655},
doi = {10.1364/OE.477666},
pmid = {36785195},
issn = {1094-4087},
abstract = {At present, a real objects-based full-color holographic system usually uses a digital single-lens reflex (DSLR) camera array or depth camera to collect data. It then relies on a spatial light modulator to modulate the input light source for the reconstruction of the 3-D scene of the real objects. However, the main challenges faced by high-quality holographic 3-D displays have been limited generation speed and the low accuracy of computer-generated holograms. This research generates more effective and accurate point cloud data by developing an RGB-D salient object detection model in the acquisition unit. In addition, a divided point cloud gridding method is proposed to enhance the computing speed of hologram generation. In the RGB channels, we categorized each object point into depth grids with identical depth values. The depth grids are divided into M × N parts, and only the effective parts are calculated. Compared with traditional methods, the calculation time is dramatically reduced. The feasibility of our proposed approach is established through experiments.},
}
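The gridding step described above can be pictured as grouping points by depth and splitting each depth grid into M × N blocks, keeping only the non-empty "effective" blocks. The sketch below is an assumed, simplified version of that bookkeeping on random points, not the authors' implementation.

```python
# Sketch of divided point cloud gridding: group points into depth grids,
# split each grid into M x N blocks, and keep only non-empty blocks.
# Depth levels, block counts, and the random points are assumptions.
import numpy as np

def effective_blocks(points, depth_levels=16, M=4, N=4):
    """points: (K, 3) array with x, y, z in [0, 1).
    Returns {(depth_index, block_row, block_col): [point indices]}."""
    d = np.minimum((points[:, 2] * depth_levels).astype(int), depth_levels - 1)
    br = np.minimum((points[:, 1] * M).astype(int), M - 1)
    bc = np.minimum((points[:, 0] * N).astype(int), N - 1)
    blocks = {}
    for idx, key in enumerate(zip(d, br, bc)):
        blocks.setdefault(tuple(int(v) for v in key), []).append(idx)
    return blocks

pts = np.random.default_rng(2).random((5000, 3))
blk = effective_blocks(pts)
print(f"effective blocks: {len(blk)} of {16 * 4 * 4} possible")
```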
RevDate: 2023-02-14
Collaborative Business Process Fault Resolution in the Services Cloud.
IEEE transactions on services computing, 16(1):162-176.
The emergence of cloud and edge computing has enabled rapid development and deployment of Internet-centric distributed applications. There are many platforms and tools that can facilitate users to develop distributed business process (BP) applications by composing relevant service components in a plug and play manner. However, there is no guarantee that a BP application developed in this way is fault-free. In this paper, we formalize the problem of collaborative BP fault resolution which aims to utilize information from existing fault-free BPs that use similar services to resolve faults in a user developed BP. We present an approach based on association analysis of pairwise transformations between a faulty BP and existing BPs to identify the smallest possible set of transformations to resolve the fault(s) in the user developed BP. An extensive experimental evaluation over both synthetically generated faulty BPs and real BPs developed by users shows the effectiveness of our approach.
Additional Links: PMID-36776787
@article {pmid36776787,
year = {2023},
author = {Zahid, MA and Shafiq, B and Vaidya, J and Afzal, A and Shamail, S},
title = {Collaborative Business Process Fault Resolution in the Services Cloud.},
journal = {IEEE transactions on services computing},
volume = {16},
number = {1},
pages = {162-176},
doi = {10.1109/tsc.2021.3112525},
pmid = {36776787},
issn = {1939-1374},
support = {R01 GM118574/GM/NIGMS NIH HHS/United States ; R35 GM134927/GM/NIGMS NIH HHS/United States ; },
abstract = {The emergence of cloud and edge computing has enabled rapid development and deployment of Internet-centric distributed applications. There are many platforms and tools that can facilitate users to develop distributed business process (BP) applications by composing relevant service components in a plug and play manner. However, there is no guarantee that a BP application developed in this way is fault-free. In this paper, we formalize the problem of collaborative BP fault resolution which aims to utilize information from existing fault-free BPs that use similar services to resolve faults in a user developed BP. We present an approach based on association analysis of pairwise transformations between a faulty BP and existing BPs to identify the smallest possible set of transformations to resolve the fault(s) in the user developed BP. An extensive experimental evaluation over both synthetically generated faulty BPs and real BPs developed by users shows the effectiveness of our approach.},
}
RevDate: 2023-02-13
CmpDate: 2023-02-13
Detection and Mitigation of IoT-Based Attacks Using SNMP and Moving Target Defense Techniques.
Sensors (Basel, Switzerland), 23(3):.
This paper proposes a solution for ensuring the security of IoT devices in the cloud environment by protecting against distributed denial-of-service (DDoS) and false data injection attacks. The proposed solution is based on the integration of simple network management protocol (SNMP), Kullback-Leibler distance (KLD), access control rules (ACL), and moving target defense (MTD) techniques. The SNMP and KLD techniques are used to detect DDoS and false data sharing attacks, while the ACL and MTD techniques are applied to mitigate these attacks by hardening the target and reducing the attack surface. The effectiveness of the proposed framework is validated through experimental simulations on the Amazon Web Service (AWS) platform, which shows a significant reduction in attack probabilities and delays. The integration of IoT and cloud technologies is a powerful combination that can deliver customized and critical solutions to major business vendors. However, ensuring the confidentiality and security of data among IoT devices, storage, and access to the cloud is crucial to maintaining trust among internet users. This paper demonstrates the importance of implementing robust security measures to protect IoT devices in the cloud environment and highlights the potential of the proposed solution in protecting against DDoS and false data injection attacks.
Additional Links: PMID-36772751
@article {pmid36772751,
year = {2023},
author = {Gayathri, R and Usharani, S and Mahdal, M and Vezhavendhan, R and Vincent, R and Rajesh, M and Elangovan, M},
title = {Detection and Mitigation of IoT-Based Attacks Using SNMP and Moving Target Defense Techniques.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
pmid = {36772751},
issn = {1424-8220},
abstract = {This paper proposes a solution for ensuring the security of IoT devices in the cloud environment by protecting against distributed denial-of-service (DDoS) and false data injection attacks. The proposed solution is based on the integration of simple network management protocol (SNMP), Kullback-Leibler distance (KLD), access control rules (ACL), and moving target defense (MTD) techniques. The SNMP and KLD techniques are used to detect DDoS and false data sharing attacks, while the ACL and MTD techniques are applied to mitigate these attacks by hardening the target and reducing the attack surface. The effectiveness of the proposed framework is validated through experimental simulations on the Amazon Web Service (AWS) platform, which shows a significant reduction in attack probabilities and delays. The integration of IoT and cloud technologies is a powerful combination that can deliver customized and critical solutions to major business vendors. However, ensuring the confidentiality and security of data among IoT devices, storage, and access to the cloud is crucial to maintaining trust among internet users. This paper demonstrates the importance of implementing robust security measures to protect IoT devices in the cloud environment and highlights the potential of the proposed solution in protecting against DDoS and false data injection attacks.},
}
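The detection step in the entry above rests on the Kullback-Leibler distance between a baseline traffic distribution and the current window. The sketch below shows that comparison on synthetic packet counters; the bin count, threshold, and counters are illustrative assumptions rather than the paper's SNMP configuration.

```python
# Sketch of KLD-based traffic anomaly detection: compare the histogram of a
# traffic counter in the current window against a learned baseline and alert
# when the divergence exceeds a threshold. All parameters are assumptions.
import numpy as np
from scipy.stats import entropy

def kld_alert(baseline, current, bins=20, threshold=0.5):
    lo, hi = min(baseline.min(), current.min()), max(baseline.max(), current.max())
    p, _ = np.histogram(baseline, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(current, bins=bins, range=(lo, hi), density=True)
    p, q = p + 1e-9, q + 1e-9                  # avoid zero-probability bins
    kld = entropy(p, q)                        # KL(P || Q)
    return kld, kld > threshold

rng = np.random.default_rng(3)
normal_traffic = rng.poisson(100, 1000).astype(float)   # packets per interval
ddos_traffic = rng.poisson(400, 1000).astype(float)     # flooded intervals
print(kld_alert(normal_traffic, normal_traffic))         # low divergence, no alert
print(kld_alert(normal_traffic, ddos_traffic))           # large divergence, alert
```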
RevDate: 2023-02-11
At the Confluence of Artificial Intelligence and Edge Computing in IoT-Based Applications: A Review and New Perspectives.
Sensors (Basel, Switzerland), 23(3): pii:s23031639.
Given its advantages in low latency, fast response, context-aware services, mobility, and privacy preservation, edge computing has emerged as the key support for intelligent applications and 5G/6G Internet of things (IoT) networks. This technology extends the cloud by providing intermediate services at the edge of the network and improving the quality of service for latency-sensitive applications. Many AI-based solutions with machine learning, deep learning, and swarm intelligence have exhibited the high potential to perform intelligent cognitive sensing, intelligent network management, big data analytics, and security enhancement for edge-based smart applications. Despite its many benefits, there are still concerns about the required capabilities of intelligent edge computing to deal with the computational complexity of machine learning techniques for big IoT data analytics. Resource constraints of edge computing, distributed computing, efficient orchestration, and synchronization of resources are all factors that require attention for quality of service improvement and cost-effective development of edge-based smart applications. In this context, this paper aims to explore the confluence of AI and edge in many application domains in order to leverage the potential of the existing research around these factors and identify new perspectives. The confluence of edge computing and AI improves the quality of user experience in emergency situations, such as in the Internet of vehicles, where critical inaccuracies or delays can lead to damage and accidents. These are the same factors that most studies have used to evaluate the success of an edge-based application. In this review, we first provide an in-depth analysis of the state of the art of AI in edge-based applications with a focus on eight application areas: smart agriculture, smart environment, smart grid, smart healthcare, smart industry, smart education, smart transportation, and security and privacy. Then, we present a qualitative comparison that emphasizes the main objective of the confluence, the roles and the use of artificial intelligence at the network edge, and the key enabling technologies for edge analytics. Then, open challenges, future research directions, and perspectives are identified and discussed. Finally, some conclusions are drawn.
Additional Links: PMID-36772680
@article {pmid36772680,
year = {2023},
author = {Bourechak, A and Zedadra, O and Kouahla, MN and Guerrieri, A and Seridi, H and Fortino, G},
title = {At the Confluence of Artificial Intelligence and Edge Computing in IoT-Based Applications: A Review and New Perspectives.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031639},
pmid = {36772680},
issn = {1424-8220},
abstract = {Given its advantages in low latency, fast response, context-aware services, mobility, and privacy preservation, edge computing has emerged as the key support for intelligent applications and 5G/6G Internet of things (IoT) networks. This technology extends the cloud by providing intermediate services at the edge of the network and improving the quality of service for latency-sensitive applications. Many AI-based solutions with machine learning, deep learning, and swarm intelligence have exhibited the high potential to perform intelligent cognitive sensing, intelligent network management, big data analytics, and security enhancement for edge-based smart applications. Despite its many benefits, there are still concerns about the required capabilities of intelligent edge computing to deal with the computational complexity of machine learning techniques for big IoT data analytics. Resource constraints of edge computing, distributed computing, efficient orchestration, and synchronization of resources are all factors that require attention for quality of service improvement and cost-effective development of edge-based smart applications. In this context, this paper aims to explore the confluence of AI and edge in many application domains in order to leverage the potential of the existing research around these factors and identify new perspectives. The confluence of edge computing and AI improves the quality of user experience in emergency situations, such as in the Internet of vehicles, where critical inaccuracies or delays can lead to damage and accidents. These are the same factors that most studies have used to evaluate the success of an edge-based application. In this review, we first provide an in-depth analysis of the state of the art of AI in edge-based applications with a focus on eight application areas: smart agriculture, smart environment, smart grid, smart healthcare, smart industry, smart education, smart transportation, and security and privacy. Then, we present a qualitative comparison that emphasizes the main objective of the confluence, the roles and the use of artificial intelligence at the network edge, and the key enabling technologies for edge analytics. Then, open challenges, future research directions, and perspectives are identified and discussed. Finally, some conclusions are drawn.},
}
RevDate: 2023-02-11
Distributed Data Integrity Verification Scheme in Multi-Cloud Environment.
Sensors (Basel, Switzerland), 23(3): pii:s23031623.
Most existing data integrity auditing protocols in cloud storage rely on proof of probabilistic data possession. Consequently, the sampling rate of data integrity verification is low to prevent expensive costs to the auditor. However, in the case of a multi-cloud environment, the amount of stored data will be huge. As a result, a higher sampling rate is needed. It will also have an increased cost for the auditor as a consequence. Therefore, this paper proposes a blockchain-based distributed data integrity verification protocol in multi-cloud environments that enables data verification using multi-verifiers. The proposed scheme aims to increase the sampling rate of data verification without increasing the costs significantly. The performance analysis shows that this protocol achieved a lower time consumption required for verification tasks using multi-verifiers than a single verifier. Furthermore, utilizing multi-verifiers also decreases each verifier's computation and communication costs.
Additional Links: PMID-36772662
@article {pmid36772662,
year = {2023},
author = {Witanto, EN and Stanley, B and Lee, SG},
title = {Distributed Data Integrity Verification Scheme in Multi-Cloud Environment.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031623},
pmid = {36772662},
issn = {1424-8220},
abstract = {Most existing data integrity auditing protocols in cloud storage rely on proof of probabilistic data possession. Consequently, the sampling rate of data integrity verification is low to prevent expensive costs to the auditor. However, in the case of a multi-cloud environment, the amount of stored data will be huge. As a result, a higher sampling rate is needed. It will also have an increased cost for the auditor as a consequence. Therefore, this paper proposes a blockchain-based distributed data integrity verification protocol in multi-cloud environments that enables data verification using multi-verifiers. The proposed scheme aims to increase the sampling rate of data verification without increasing the costs significantly. The performance analysis shows that this protocol achieved a lower time consumption required for verification tasks using multi-verifiers than a single verifier. Furthermore, utilizing multi-verifiers also decreases each verifier's computation and communication costs.},
}
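The sampling argument above can be illustrated without any blockchain machinery: each verifier re-hashes a random subset of stored blocks and compares the digests against reference values (which the paper records via blockchain), and several verifiers together cover far more blocks at the same per-verifier cost. The in-memory "cloud" below is a stand-in for that setup.

```python
# Sketch of sampling-based integrity verification with multiple verifiers.
# The dictionaries stand in for cloud storage and on-chain reference hashes.
import hashlib
import random

blocks = {i: f"block-{i}-payload".encode() for i in range(1000)}     # stored data
reference = {i: hashlib.sha256(b).hexdigest() for i, b in blocks.items()}

blocks[42] = b"tampered"                              # simulate corruption

def verify_sample(sample_rate=0.1, seed=None):
    rng = random.Random(seed)
    sample = rng.sample(sorted(blocks), int(len(blocks) * sample_rate))
    bad = [i for i in sample
           if hashlib.sha256(blocks[i]).hexdigest() != reference[i]]
    return sample, bad

covered = set()
for verifier in range(5):                             # five independent verifiers
    sample, bad = verify_sample(seed=verifier)
    covered.update(sample)
    if bad:
        print(f"verifier {verifier} detected corrupted blocks: {bad}")
print(f"combined coverage: {len(covered)} / {len(blocks)} blocks")
```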
RevDate: 2023-02-11
Parallel Processing of Sensor Data in a Distributed Rules Engine Environment through Clustering and Data Flow Reconfiguration.
Sensors (Basel, Switzerland), 23(3): pii:s23031543.
An emerging reality is the development of smart buildings and cities, which improve residents' comfort. These environments employ multiple sensor networks, whose data must be acquired and processed in real time by multiple rule engines, which trigger events that enable specific actuators. The problem is how to handle those data in a scalable manner by using multiple processing instances to maximize the system throughput. This paper considers the types of sensors that are used in these scenarios and proposes a model for abstracting the information flow as a weighted dependency graph. Two parallel computing methods are then proposed for obtaining an efficient data flow: a variation of the parallel k-means clustering algorithm and a custom genetic algorithm. Simulation results show that the two proposed flow reconfiguration algorithms reduce the rule processing times and provide an efficient solution for increasing the scalability of the considered environment. Another aspect being discussed is using an open-source cloud solution to manage the system and how to use the two algorithms to increase efficiency. These methods allow for a seamless increase in the number of sensors in the environment by making smart use of the available resources.
Additional Links: PMID-36772584
@article {pmid36772584,
year = {2023},
author = {Alexandrescu, A},
title = {Parallel Processing of Sensor Data in a Distributed Rules Engine Environment through Clustering and Data Flow Reconfiguration.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031543},
pmid = {36772584},
issn = {1424-8220},
abstract = {An emerging reality is the development of smart buildings and cities, which improve residents' comfort. These environments employ multiple sensor networks, whose data must be acquired and processed in real time by multiple rule engines, which trigger events that enable specific actuators. The problem is how to handle those data in a scalable manner by using multiple processing instances to maximize the system throughput. This paper considers the types of sensors that are used in these scenarios and proposes a model for abstracting the information flow as a weighted dependency graph. Two parallel computing methods are then proposed for obtaining an efficient data flow: a variation of the parallel k-means clustering algorithm and a custom genetic algorithm. Simulation results show that the two proposed flow reconfiguration algorithms reduce the rule processing times and provide an efficient solution for increasing the scalability of the considered environment. Another aspect being discussed is using an open-source cloud solution to manage the system and how to use the two algorithms to increase efficiency. These methods allow for a seamless increase in the number of sensors in the environment by making smart use of the available resources.},
}
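As a rough illustration of the clustering step above, the sketch below describes each sensor data flow by a small feature vector and groups the flows with k-means so that each cluster can be routed to one rule-engine instance. scikit-learn's KMeans stands in for the paper's parallel k-means variant and genetic-algorithm alternative, and the features are assumptions.

```python
# Sketch: cluster sensor data flows so each cluster maps to one rule engine.
# Feature columns and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# columns: messages/sec, average payload size (bytes), number of rules touched
flows = np.column_stack([
    rng.uniform(1, 50, 200),
    rng.uniform(20, 500, 200),
    rng.integers(1, 10, 200),
])

n_engines = 4
labels = KMeans(n_clusters=n_engines, n_init=10, random_state=4).fit_predict(flows)
for engine in range(n_engines):
    print(f"rule engine {engine}: {np.sum(labels == engine)} flows")
```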
RevDate: 2023-02-11
Local Scheduling in KubeEdge-Based Edge Computing Environment.
Sensors (Basel, Switzerland), 23(3): pii:s23031522.
KubeEdge is an open-source platform that orchestrates containerized Internet of Things (IoT) application services in IoT edge computing environments. Based on Kubernetes, it supports heterogeneous IoT device protocols on edge nodes and provides various functions necessary to build edge computing infrastructure, such as network management between cloud and edge nodes. However, the resulting cloud-based systems are subject to several limitations. In this study, we evaluated the performance of KubeEdge in terms of the computational resource distribution and delay between edge nodes. We found that forwarding traffic between edge nodes degrades the throughput of clusters and causes service delay in edge computing environments. Based on these results, we proposed a local scheduling scheme that handles user traffic locally at each edge node. The performance evaluation results revealed that local scheduling outperforms the existing load-balancing algorithm in the edge computing environment.
Additional Links: PMID-36772562
@article {pmid36772562,
year = {2023},
author = {Kim, SH and Kim, T},
title = {Local Scheduling in KubeEdge-Based Edge Computing Environment.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031522},
pmid = {36772562},
issn = {1424-8220},
abstract = {KubeEdge is an open-source platform that orchestrates containerized Internet of Things (IoT) application services in IoT edge computing environments. Based on Kubernetes, it supports heterogeneous IoT device protocols on edge nodes and provides various functions necessary to build edge computing infrastructure, such as network management between cloud and edge nodes. However, the resulting cloud-based systems are subject to several limitations. In this study, we evaluated the performance of KubeEdge in terms of the computational resource distribution and delay between edge nodes. We found that forwarding traffic between edge nodes degrades the throughput of clusters and causes service delay in edge computing environments. Based on these results, we proposed a local scheduling scheme that handles user traffic locally at each edge node. The performance evaluation results revealed that local scheduling outperforms the existing load-balancing algorithm in the edge computing environment.},
}
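The local scheduling idea above, handling user traffic on the receiving edge node whenever possible, can be reduced to a simple decision rule. The sketch below is a deliberately simplified abstraction, not KubeEdge or Kubernetes code; node names and capacities are assumptions.

```python
# Simplified sketch of a local-first scheduling rule: serve the request on
# the edge node that received it if it has spare capacity, otherwise forward
# to the least-loaded peer (which incurs inter-node delay).
nodes = {"edge-a": {"capacity": 10, "load": 0},
         "edge-b": {"capacity": 10, "load": 0},
         "edge-c": {"capacity": 10, "load": 0}}

def schedule(request_node: str) -> str:
    local = nodes[request_node]
    if local["load"] < local["capacity"]:          # keep traffic local
        local["load"] += 1
        return request_node
    target = min((n for n in nodes if n != request_node),
                 key=lambda n: nodes[n]["load"] / nodes[n]["capacity"])
    nodes[target]["load"] += 1
    return target

for _ in range(25):                                # bursty traffic hitting edge-a
    schedule("edge-a")
print({n: v["load"] for n, v in nodes.items()})
```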
RevDate: 2023-02-11
Research on Comprehensive Evaluation and Early Warning of Transmission Lines' Operation Status Based on Dynamic Cloud Computing.
Sensors (Basel, Switzerland), 23(3): pii:s23031469.
The current methods for evaluating the operating condition of electricity transmission lines (ETLs) and providing early warning have several problems, such as the low correlation of data, ignoring the influence of seasonal factors, and strong subjectivity. This paper analyses the sensitive factors that influence dynamic key evaluation indices such as grounding resistance, sag, and wire corrosion, establishes the evaluation criteria of the ETL operation state, and proposes five ETL status levels and seven principles for selecting evaluation indices. Nine grade I evaluation indices and twenty-nine grade II evaluation indices, including passageway and meteorological environments, are determined. The cloud model theory is embedded and used to propose a warning technology for the operation state of ETLs based on inspection defect parameters and the cloud model. Combined with the inspection defect parameters of a line in the Baicheng district of Jilin Province and the critical evaluation index data such as grounding resistance, sag, and wire corrosion, which are used to calculate the timeliness of the data, the solid line is evaluated. The research shows that the dynamic evaluation model is correct and that the ETL status evaluation and early warning method have reasonable practicability.
Additional Links: PMID-36772506
@article {pmid36772506,
year = {2023},
author = {Wang, M and Li, C and Wang, X and Piao, Z and Yang, Y and Dai, W and Zhang, Q},
title = {Research on Comprehensive Evaluation and Early Warning of Transmission Lines' Operation Status Based on Dynamic Cloud Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031469},
pmid = {36772506},
issn = {1424-8220},
abstract = {The current methods for evaluating the operating condition of electricity transmission lines (ETLs) and providing early warning have several problems, such as the low correlation of data, ignoring the influence of seasonal factors, and strong subjectivity. This paper analyses the sensitive factors that influence dynamic key evaluation indices such as grounding resistance, sag, and wire corrosion, establishes the evaluation criteria of the ETL operation state, and proposes five ETL status levels and seven principles for selecting evaluation indices. Nine grade I evaluation indices and twenty-nine grade II evaluation indices, including passageway and meteorological environments, are determined. The cloud model theory is embedded and used to propose a warning technology for the operation state of ETLs based on inspection defect parameters and the cloud model. Combined with the inspection defect parameters of a line in the Baicheng district of Jilin Province and the critical evaluation index data such as grounding resistance, sag, and wire corrosion, which are used to calculate the timeliness of the data, the solid line is evaluated. The research shows that the dynamic evaluation model is correct and that the ETL status evaluation and early warning method have reasonable practicability.},
}
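The "cloud model" referred to above is the normal cloud model from cloud model theory, in which a qualitative grade is described by an expectation Ex, entropy En, and hyper-entropy He. The sketch below implements the standard forward normal cloud generator with synthetic parameters; it does not use the paper's calibrated evaluation indices.

```python
# Sketch of the forward normal cloud generator: a grade (Ex, En, He) is
# realized as "cloud drops" with membership degrees. Parameters are synthetic.
import numpy as np

def forward_cloud(Ex: float, En: float, He: float, n_drops: int = 1000, seed: int = 0):
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n_drops)             # per-drop entropy
    x = rng.normal(Ex, np.abs(En_prime))               # cloud drops
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))  # membership degrees
    return x, mu

# Example: a hypothetical "normal operation" grade centered at a score of 85.
drops, membership = forward_cloud(Ex=85.0, En=3.0, He=0.5)
print(drops[:5].round(2), membership[:5].round(3))
```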
RevDate: 2023-02-11
An Efficient Trust-Aware Task Scheduling Algorithm in Cloud Computing Using Firefly Optimization.
Sensors (Basel, Switzerland), 23(3): pii:s23031384.
Task scheduling in the cloud computing paradigm poses a challenge for researchers as the workloads that come onto cloud platforms are dynamic and heterogeneous. Therefore, scheduling these heterogeneous tasks to the appropriate virtual resources is a huge challenge. The inappropriate assignment of tasks to virtual resources leads to the degradation of the quality of services and thereby leads to a violation of the SLA metrics, ultimately leading to the degradation of trust in the cloud provider by the cloud user. Therefore, to preserve trust in the cloud provider and to improve the scheduling process in the cloud paradigm, we propose an efficient task scheduling algorithm that considers the priorities of tasks as well as virtual machines, thereby scheduling tasks accurately to appropriate VMs. This scheduling algorithm is modeled using firefly optimization. The workload for this approach is considered by using fabricated datasets with different distributions and the real-time worklogs of HPC2N and NASA were considered. This algorithm was implemented by using a Cloudsim simulation environment and, finally, our proposed approach is compared over the baseline approaches of ACO, PSO, and the GA. The simulation results revealed that our proposed approach has shown a significant impact over the baseline approaches by minimizing the makespan, availability, success rate, and turnaround efficiency.
Additional Links: PMID-36772424
@article {pmid36772424,
year = {2023},
author = {Mangalampalli, S and Karri, GR and Elngar, AA},
title = {An Efficient Trust-Aware Task Scheduling Algorithm in Cloud Computing Using Firefly Optimization.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031384},
pmid = {36772424},
issn = {1424-8220},
abstract = {Task scheduling in the cloud computing paradigm poses a challenge for researchers as the workloads that come onto cloud platforms are dynamic and heterogeneous. Therefore, scheduling these heterogeneous tasks to the appropriate virtual resources is a huge challenge. The inappropriate assignment of tasks to virtual resources leads to the degradation of the quality of services and thereby leads to a violation of the SLA metrics, ultimately leading to the degradation of trust in the cloud provider by the cloud user. Therefore, to preserve trust in the cloud provider and to improve the scheduling process in the cloud paradigm, we propose an efficient task scheduling algorithm that considers the priorities of tasks as well as virtual machines, thereby scheduling tasks accurately to appropriate VMs. This scheduling algorithm is modeled using firefly optimization. The workload for this approach is considered by using fabricated datasets with different distributions and the real-time worklogs of HPC2N and NASA were considered. This algorithm was implemented by using a Cloudsim simulation environment and, finally, our proposed approach is compared over the baseline approaches of ACO, PSO, and the GA. The simulation results revealed that our proposed approach has shown a significant impact over the baseline approaches by minimizing the makespan, availability, success rate, and turnaround efficiency.},
}
RevDate: 2023-02-11
Simulating IoT Workflows in DISSECT-CF-Fog.
Sensors (Basel, Switzerland), 23(3): pii:s23031294.
Modelling IoT applications that utilise cloud and fog computing resources is not straightforward, because such applications must support a variety of trigger-based events. Sequences of tasks, such as performing a service call, receiving a data packet in the form of a message sent by an IoT device, managing actuators, or executing a computational task on a virtual machine, are often composed into IoT workflows. Developing and deploying such IoT workflows and their management systems in real life, including communication and network operations, can be complicated due to high operation costs and access limitations. Therefore, simulation solutions are often applied for such purposes. In this paper, we introduce a novel extension of the DISSECT-CF-Fog simulator that leverages workflow scheduling and execution capabilities to model real-life IoT use cases. We also show that state-of-the-art simulators typically omit the IoT factor when evaluating scientific workflows. Therefore, we present a scalability study focusing on scientific workflows and on the interoperability of scientific and IoT workflows in DISSECT-CF-Fog.
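For readers unfamiliar with what such simulators model, the short Python sketch below runs a generic discrete-event simulation of a trigger-based IoT workflow: sensor messages arrive periodically, each compute task is placed on whichever node (fog or cloud) would finish it first, and an actuation event follows. It is a toy with invented node speeds and latencies, not the DISSECT-CF-Fog API.

import heapq

# Hypothetical nodes: (name, processing speed in MI/s, network latency in s).
NODES = [("fog-node", 500.0, 0.01), ("cloud-vm", 5000.0, 0.12)]

def earliest_finish(node_free_at, now, task_mi):
    """Pick the node that would finish the task first, given current availability."""
    best = None
    for idx, (name, speed, latency) in enumerate(NODES):
        start = max(now + latency, node_free_at[idx])
        finish = start + task_mi / speed
        if best is None or finish < best[0]:
            best = (finish, idx)
    return best

def simulate(n_messages=5, period=1.0, task_mi=800.0):
    events = [(i * period, "sensor", i) for i in range(n_messages)]   # message arrival events
    heapq.heapify(events)
    node_free_at = [0.0 for _ in NODES]
    while events:
        now, kind, msg_id = heapq.heappop(events)
        if kind == "sensor":
            finish, idx = earliest_finish(node_free_at, now, task_mi)
            node_free_at[idx] = finish
            heapq.heappush(events, (finish, "actuate", msg_id))
            print(f"t={now:5.2f}s msg {msg_id} -> {NODES[idx][0]} (done at {finish:.2f}s)")
        else:
            print(f"t={now:5.2f}s msg {msg_id} actuator triggered")

simulate()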
Additional Links: PMID-36772335
@article {pmid36772335,
year = {2023},
author = {Markus, A and Al-Haboobi, A and Kecskemeti, G and Kertesz, A},
title = {Simulating IoT Workflows in DISSECT-CF-Fog.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031294},
pmid = {36772335},
issn = {1424-8220},
abstract = {The modelling of IoT applications utilising the resources of cloud and fog computing is not straightforward because they have to support various trigger-based events that make human life easier. The sequence of tasks, such as performing a service call, receiving a data packet in the form of a message sent by an IoT device, and managing actuators or executing a computational task on a virtual machine, are often associated with and composed of IoT workflows. The development and deployment of such IoT workflows and their management systems in real life, including communication and network operations, can be complicated due to high operation costs and access limitations. Therefore, simulation solutions are often applied for such purposes. In this paper, we introduce a novel simulator extension of the DISSECT-CF-Fog simulator that leverages the workflow scheduling and its execution capabilities to model real-life IoT use cases. We also show that state-of-the-art simulators typically omit the IoT factor in the case of the scientific workflow evaluation. Therefore, we present a scalability study focusing on scientific workflows and on the interoperability of scientific and IoT workflows in DISSECT-CF-Fog.},
}
RevDate: 2023-02-11
A Blockchain-Based Authentication and Authorization Scheme for Distributed Mobile Cloud Computing Services.
Sensors (Basel, Switzerland), 23(3): pii:s23031264.
Authentication and authorization constitute access control, the essential security component for preventing unauthorized access to cloud services in mobile cloud computing (MCC) environments. Traditional centralized access control models that rely on third-party trust face a critical challenge due to high trust costs and a single point of failure. Blockchain can provide distributed trust for access control designs in mutually untrustworthy scenarios, but it also incurs expensive storage overhead. Considering these issues, this work constructs a blockchain-based authentication and authorization scheme that supports dynamic updates of access permissions via smart contracts. Compared with a conventional authentication scheme, the proposed scheme integrates an extra authorization function without additional computation and communication costs in the authentication phase. To improve storage efficiency and system scalability, only one transaction needs to be stored on the blockchain to record a user's access privileges on different service providers (SPs). In addition, mobile users in the proposed scheme are able to register with an arbitrary SP once and then use the same credential to access different SPs with different access levels. The security analysis indicates that the proposed scheme is secure under the random oracle model. The performance analysis shows that the proposed scheme achieves superior computation and communication efficiency and requires only a small amount of blockchain storage to accomplish user registration and updates.
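The storage idea of keeping a single, updatable record of a user's privileges across service providers can be illustrated with a toy append-only, hash-chained ledger. The Python sketch below is purely illustrative and omits the paper's cryptographic protocol, smart-contract logic, and security proofs; all names and fields are invented.

import hashlib, json, time

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

class TinyLedger:
    """Append-only, hash-chained log standing in for a blockchain."""
    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> str:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"prev": prev, "time": time.time(), "payload": payload}
        block = {**body, "hash": sha256(json.dumps(body, sort_keys=True))}
        self.blocks.append(block)
        return block["hash"]

ledger = TinyLedger()

# One transaction records a user's access levels on several service providers (SPs).
def register_user(user_pub: str, privileges: dict) -> str:
    return ledger.append({"op": "register", "cred": sha256(user_pub), "privileges": privileges})

def update_privileges(user_pub: str, new_privileges: dict) -> str:
    return ledger.append({"op": "update", "cred": sha256(user_pub), "privileges": new_privileges})

tx1 = register_user("alice-public-key", {"SP1": "read", "SP2": "read-write"})
tx2 = update_privileges("alice-public-key", {"SP1": "read-write", "SP2": "read-write", "SP3": "read"})
print(len(ledger.blocks), "blocks; latest tx hash:", tx2[:16], "...")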
Additional Links: PMID-36772304
@article {pmid36772304,
year = {2023},
author = {Yu, L and He, M and Liang, H and Xiong, L and Liu, Y},
title = {A Blockchain-Based Authentication and Authorization Scheme for Distributed Mobile Cloud Computing Services.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031264},
pmid = {36772304},
issn = {1424-8220},
abstract = {Authentication and authorization constitute the essential security component, access control, for preventing unauthorized access to cloud services in mobile cloud computing (MCC) environments. Traditional centralized access control models relying on third party trust face a critical challenge due to a high trust cost and single point of failure. Blockchain can achieve the distributed trust for access control designs in a mutual untrustworthy scenario, but it also leads to expensive storage overhead. Considering the above issues, this work constructed an authentication and authorization scheme based on blockchain that can provide a dynamic update of access permissions by utilizing the smart contract. Compared with the conventional authentication scheme, the proposed scheme integrates an extra authorization function without additional computation and communication costs in the authentication phase. To improve the storage efficiency and system scalability, only one transaction is required to be stored in blockchain to record a user's access privileges on different service providers (SPs). In addition, mobile users in the proposed scheme are able to register with an arbitrary SP once and then utilize the same credential to access different SPs with different access levels. The security analysis indicates that the proposed scheme is secure under the random oracle model. The performance analysis clearly shows that the proposed scheme possesses superior computation and communication efficiencies and requires a low blockchain storage capacity for accomplishing user registration and updates.},
}
RevDate: 2023-02-11
Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning.
Sensors (Basel, Switzerland), 23(3): pii:s23031052.
Federated learning uses a distributed, collaborative training mode and is widely used in IoT scenarios for edge-computing intelligent services. However, federated learning is vulnerable to malicious attacks, chiefly backdoor attacks. Once an edge node mounts a backdoor attack, the embedded backdoor pattern rapidly propagates to all relevant edge nodes, which poses a considerable challenge to security-sensitive edge-computing intelligent services. In traditional edge collaborative backdoor defense methods, only the cloud server is trusted by default. However, edge-computing intelligent services have limited bandwidth and unstable network connections, which make it impossible for edge devices to retrain their models or update the global model. Therefore, it is crucial to detect in time whether the data of edge nodes are polluted. This paper proposes a layered defense framework for edge-computing intelligent services. At the edge, we combine a gradient ascent strategy and an attention self-distillation mechanism to maximize the correlation between edge device data and edge object categories and to train as clean a model as possible. On the server side, we first implement a two-layer backdoor detection mechanism to eliminate backdoor updates and then use the attention self-distillation mechanism to restore model performance. Our results show that this two-stage defense is well suited to protecting edge-computing intelligent services: it not only weakens the effectiveness of the backdoor at the edge but also applies a defense at the server, making the model more secure. The precision of our model on the main task is almost the same as that of the clean model.
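Server-side elimination of suspicious client updates is often demonstrated with simple robust-aggregation heuristics. The Python sketch below drops updates whose norm is an outlier or whose direction disagrees with the surviving majority before averaging; it is a generic stand-in with invented thresholds, not the paper's two-layer detection and attention self-distillation pipeline.

import numpy as np

def filter_and_aggregate(updates, norm_z=2.0, cos_floor=0.0):
    """Drop norm outliers, then drop updates misaligned with the survivors' mean
    direction, and average the rest (a crude stand-in for backdoor-update elimination)."""
    U = np.stack(updates)                               # shape: (n_clients, n_params)
    norms = np.linalg.norm(U, axis=1)
    z = (norms - norms.mean()) / (norms.std() + 1e-12)
    keep = np.abs(z) <= norm_z                          # stage 1: norm-based filtering
    mean_dir = U[keep].mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir) + 1e-12
    cos = U @ mean_dir / (norms + 1e-12)
    keep &= cos >= cos_floor                            # stage 2: direction-based filtering
    return U[keep].mean(axis=0), keep

# Toy example: 9 benign updates and 1 scaled "backdoor" update.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.1, 50) for _ in range(9)]
malicious = [10.0 * rng.normal(0, 0.1, 50)]
agg, kept = filter_and_aggregate(benign + malicious)
print("kept clients:", kept.astype(int))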
Additional Links: PMID-36772101
@article {pmid36772101,
year = {2023},
author = {Yang, J and Zheng, J and Wang, H and Li, J and Sun, H and Han, W and Jiang, N and Tan, YA},
title = {Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning.},
journal = {Sensors (Basel, Switzerland)},
volume = {23},
number = {3},
pages = {},
doi = {10.3390/s23031052},
pmid = {36772101},
issn = {1424-8220},
abstract = {Federated learning has a distributed collaborative training mode, widely used in IoT scenarios of edge computing intelligent services. However, federated learning is vulnerable to malicious attacks, mainly backdoor attacks. Once an edge node implements a backdoor attack, the embedded backdoor mode will rapidly expand to all relevant edge nodes, which poses a considerable challenge to security-sensitive edge computing intelligent services. In the traditional edge collaborative backdoor defense method, only the cloud server is trusted by default. However, edge computing intelligent services have limited bandwidth and unstable network connections, which make it impossible for edge devices to retrain their models or update the global model. Therefore, it is crucial to detect whether the data of edge nodes are polluted in time. This paper proposes a layered defense framework for edge-computing intelligent services. At the edge, we combine the gradient rising strategy and attention self-distillation mechanism to maximize the correlation between edge device data and edge object categories and train a clean model as much as possible. On the server side, we first implement a two-layer backdoor detection mechanism to eliminate backdoor updates and use the attention self-distillation mechanism to restore the model performance. Our results show that the two-stage defense mode is more suitable for the security protection of edge computing intelligent services. It can not only weaken the effectiveness of the backdoor at the edge end but also conduct this defense at the server end, making the model more secure. The precision of our model on the main task is almost the same as that of the clean model.},
}
RevDate: 2023-02-11
GPU-Enhanced DFTB Metadynamics for Efficiently Predicting Free Energies of Biochemical Systems.
Molecules (Basel, Switzerland), 28(3): pii:molecules28031277.
Metadynamics calculations of large chemical systems with ab initio methods are computationally prohibitive due to the extensive sampling required to simulate the large degrees of freedom in these systems. To address this computational bottleneck, we utilized a GPU-enhanced density functional tight binding (DFTB) approach on a massively parallelized cloud computing platform to efficiently calculate the thermodynamics and metadynamics of biochemical systems. To first validate our approach, we calculated the free-energy surfaces of alanine dipeptide and showed that our GPU-enhanced DFTB calculations qualitatively agree with computationally-intensive hybrid DFT benchmarks, whereas classical force fields give significant errors. Most importantly, we show that our GPU-accelerated DFTB calculations are significantly faster than previous approaches by up to two orders of magnitude. To further extend our GPU-enhanced DFTB approach, we also carried out a 10 ns metadynamics simulation of remdesivir, which is prohibitively out of reach for routine DFT-based metadynamics calculations. We find that the free-energy surfaces of remdesivir obtained from DFTB and classical force fields differ significantly, where the latter overestimates the internal energy contribution of high free-energy states. Taken together, our benchmark tests, analyses, and extensions to large biochemical systems highlight the use of GPU-enhanced DFTB simulations for efficiently predicting the free-energy surfaces/thermodynamics of large biochemical systems.
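Independent of the electronic-structure engine, the metadynamics bookkeeping itself is compact: Gaussian hills of height w and width sigma are deposited every few steps along a collective variable s, the accumulated bias is V(s) = sum_k w * exp(-(s - s_k)^2 / (2*sigma^2)), and the negative of the converged bias estimates the free-energy surface. The Python sketch below runs this procedure on a toy 1-D double well with overdamped Langevin dynamics; all parameters are invented and it does not represent the authors' GPU-accelerated DFTB setup.

import numpy as np

rng = np.random.default_rng(0)
beta, dt = 1.0, 1e-3                 # inverse temperature and time step (reduced units)
w, sigma, stride = 0.1, 0.15, 100    # hill height, hill width, deposition stride (toy values)

centers = []                         # deposited hill centers along the collective variable s

def bias_and_grad(s):
    """Accumulated metadynamics bias V(s) and its derivative dV/ds."""
    if not centers:
        return 0.0, 0.0
    c = np.asarray(centers)
    g = w * np.exp(-(s - c) ** 2 / (2 * sigma ** 2))
    return float(g.sum()), float(np.sum(g * (-(s - c) / sigma ** 2)))

s = -1.0                             # start in the left well of the double-well potential (s^2 - 1)^2
for step in range(100_000):
    _, dV = bias_and_grad(s)
    force = -(4 * s * (s ** 2 - 1.0) + dV)             # -d/ds of (potential + bias)
    s += force * dt + np.sqrt(2 * dt / beta) * rng.normal()
    if step % stride == 0:
        centers.append(s)

grid = np.linspace(-1.5, 1.5, 61)
fes = -np.array([bias_and_grad(x)[0] for x in grid])   # free-energy estimate: F(s) ~ -V(s)
fes -= fes.min()
print("estimated barrier at s=0 (toy units):", round(float(fes[np.abs(grid).argmin()]), 2))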
Additional Links: PMID-36770943
@article {pmid36770943,
year = {2023},
author = {Kumar, A and Arantes, PR and Saha, A and Palermo, G and Wong, BM},
title = {GPU-Enhanced DFTB Metadynamics for Efficiently Predicting Free Energies of Biochemical Systems.},
journal = {Molecules (Basel, Switzerland)},
volume = {28},
number = {3},
pages = {},
doi = {10.3390/molecules28031277},
pmid = {36770943},
issn = {1420-3049},
support = {R01GM141329/NH/NIH HHS/United States ; },
abstract = {Metadynamics calculations of large chemical systems with ab initio methods are computationally prohibitive due to the extensive sampling required to simulate the large degrees of freedom in these systems. To address this computational bottleneck, we utilized a GPU-enhanced density functional tight binding (DFTB) approach on a massively parallelized cloud computing platform to efficiently calculate the thermodynamics and metadynamics of biochemical systems. To first validate our approach, we calculated the free-energy surfaces of alanine dipeptide and showed that our GPU-enhanced DFTB calculations qualitatively agree with computationally-intensive hybrid DFT benchmarks, whereas classical force fields give significant errors. Most importantly, we show that our GPU-accelerated DFTB calculations are significantly faster than previous approaches by up to two orders of magnitude. To further extend our GPU-enhanced DFTB approach, we also carried out a 10 ns metadynamics simulation of remdesivir, which is prohibitively out of reach for routine DFT-based metadynamics calculations. We find that the free-energy surfaces of remdesivir obtained from DFTB and classical force fields differ significantly, where the latter overestimates the internal energy contribution of high free-energy states. Taken together, our benchmark tests, analyses, and extensions to large biochemical systems highlight the use of GPU-enhanced DFTB simulations for efficiently predicting the free-energy surfaces/thermodynamics of large biochemical systems.},
}
RevDate: 2023-02-11
Artificial Intelligence and Machine Learning Technology Driven Modern Drug Discovery and Development.
International journal of molecular sciences, 24(3): pii:ijms24032026.
The discovery and advances of medicines may be considered as the ultimate relevant translational science effort that adds to human invulnerability and happiness. But advancing a fresh medication is a quite convoluted, costly, and protracted operation, normally costing USD ~2.6 billion and consuming a mean time span of 12 years. Methods to cut back expenditure and hasten new drug discovery have prompted an arduous and compelling brainstorming exercise in the pharmaceutical industry. The engagement of Artificial Intelligence (AI), including the deep-learning (DL) component in particular, has been facilitated by the employment of classified big data, in concert with strikingly reinforced computing prowess and cloud storage, across all fields. AI has energized computer-facilitated drug discovery. An unrestricted espousing of machine learning (ML), especially DL, in many scientific specialties, and the technological refinements in computing hardware and software, in concert with various aspects of the problem, sustain this progress. ML algorithms have been extensively engaged for computer-facilitated drug discovery. DL methods, such as artificial neural networks (ANNs) comprising multiple buried processing layers, have of late seen a resurgence due to their capability to power automatic attribute elicitations from the input data, coupled with their ability to obtain nonlinear input-output pertinencies. Such features of DL methods augment classical ML techniques which bank on human-contrived molecular descriptors. A major part of the early reluctance concerning utility of AI in pharmaceutical discovery has begun to melt, thereby advancing medicinal chemistry. AI, along with modern experimental technical knowledge, is anticipated to invigorate the quest for new and improved pharmaceuticals in an expeditious, economical, and increasingly compelling manner. DL-facilitated methods have just initiated kickstarting for some integral issues in drug discovery. Many technological advances, such as "message-passing paradigms", "spatial-symmetry-preserving networks", "hybrid de novo design", and other ingenious ML exemplars, will definitely come to be pervasively widespread and help dissect many of the biggest, and most intriguing inquiries. Open data allocation and model augmentation will exert a decisive hold during the progress of drug discovery employing AI. This review will address the impending utilizations of AI to refine and bolster the drug discovery operation.
Additional Links: PMID-36768346
@article {pmid36768346,
year = {2023},
author = {Sarkar, C and Das, B and Rawat, VS and Wahlang, JB and Nongpiur, A and Tiewsoh, I and Lyngdoh, NM and Das, D and Bidarolli, M and Sony, HT},
title = {Artificial Intelligence and Machine Learning Technology Driven Modern Drug Discovery and Development.},
journal = {International journal of molecular sciences},
volume = {24},
number = {3},
pages = {},
doi = {10.3390/ijms24032026},
pmid = {36768346},
issn = {1422-0067},
abstract = {The discovery and advances of medicines may be considered as the ultimate relevant translational science effort that adds to human invulnerability and happiness. But advancing a fresh medication is a quite convoluted, costly, and protracted operation, normally costing USD ~2.6 billion and consuming a mean time span of 12 years. Methods to cut back expenditure and hasten new drug discovery have prompted an arduous and compelling brainstorming exercise in the pharmaceutical industry. The engagement of Artificial Intelligence (AI), including the deep-learning (DL) component in particular, has been facilitated by the employment of classified big data, in concert with strikingly reinforced computing prowess and cloud storage, across all fields. AI has energized computer-facilitated drug discovery. An unrestricted espousing of machine learning (ML), especially DL, in many scientific specialties, and the technological refinements in computing hardware and software, in concert with various aspects of the problem, sustain this progress. ML algorithms have been extensively engaged for computer-facilitated drug discovery. DL methods, such as artificial neural networks (ANNs) comprising multiple buried processing layers, have of late seen a resurgence due to their capability to power automatic attribute elicitations from the input data, coupled with their ability to obtain nonlinear input-output pertinencies. Such features of DL methods augment classical ML techniques which bank on human-contrived molecular descriptors. A major part of the early reluctance concerning utility of AI in pharmaceutical discovery has begun to melt, thereby advancing medicinal chemistry. AI, along with modern experimental technical knowledge, is anticipated to invigorate the quest for new and improved pharmaceuticals in an expeditious, economical, and increasingly compelling manner. DL-facilitated methods have just initiated kickstarting for some integral issues in drug discovery. Many technological advances, such as "message-passing paradigms", "spatial-symmetry-preserving networks", "hybrid de novo design", and other ingenious ML exemplars, will definitely come to be pervasively widespread and help dissect many of the biggest, and most intriguing inquiries. Open data allocation and model augmentation will exert a decisive hold during the progress of drug discovery employing AI. This review will address the impending utilizations of AI to refine and bolster the drug discovery operation.},
}
RevDate: 2023-02-10
Cannabis and male sexual health: contemporary qualitative review and insight into perspectives of young men on the internet.
Sexual medicine reviews pii:6991251 [Epub ahead of print].
INTRODUCTION: Cannabis use is increasing across the United States, yet its short- and long-term effects on sexual function remain controversial. Currently, there is a paucity of studies exploring the relationship between cannabis and men's health.
OBJECTIVES: To summarize the available literature on cannabis and men's health and provide insight into lay perceptions of this topic.
METHODS: We performed a qualitative PubMed review of the existing literature on cannabis and men's health according to the PRISMA guidelines. Separately, we analyzed relevant themes in online men's health forums. We utilized a Google cloud-based platform (BigQuery) to extract relevant posts from 5 men's health Reddit forums from August 2018 to August 2019. We conducted a qualitative thematic analysis of the posts and quantitatively analyzed them using natural language processing and a meaning extraction method with principal component analysis.
RESULTS: Our literature review revealed a mix of animal and human studies demonstrating the negative effects of cannabis on semen parameters and varying effects on erectile function and hormone levels. In our analysis of 372 686 Reddit posts, 1190 (0.3%) included relevant discussion on cannabis and men's health. An overall 272 posts were manually analyzed, showing that online discussions revolve around seeking answers and sharing the effects of cannabis on various aspects of sexual health and quality of life, often with conflicting experiences. Quantitative analysis revealed 1 thematic cluster related to cannabis, insecurity, and mental/physical health.
CONCLUSIONS: There is a limited number of quality human studies investigating the effects of cannabis on men's health. Men online are uncertain about how cannabis affects their sexual health and seek more information. As the prevalence of cannabis use increases, so does the need for research in this area.
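For readers curious how such Reddit extractions are typically scripted, the sketch below issues a query through the google-cloud-bigquery Python client. The project, dataset, and table names, the subreddit list, and the keyword filter are placeholders for illustration; the abstract does not specify the authors' actual query or forum selection.

# pip install google-cloud-bigquery   (requires Google Cloud credentials)
from google.cloud import bigquery

client = bigquery.Client()   # uses application-default credentials

# Hypothetical public table and subreddit list; adjust to the dataset actually available.
query = """
    SELECT subreddit, title, selftext, created_utc
    FROM `example-project.reddit.posts`          -- placeholder table name
    WHERE subreddit IN ('AskMen', 'sex', 'TestosteroneBasics')
      AND created_utc BETWEEN UNIX_SECONDS(TIMESTAMP '2018-08-01')
                          AND UNIX_SECONDS(TIMESTAMP '2019-08-01')
      AND REGEXP_CONTAINS(LOWER(CONCAT(title, ' ', selftext)), r'cannabis|marijuana|weed')
"""
rows = client.query(query).result()   # runs the query and waits for completion
for row in rows:
    print(row.subreddit, row.title[:60])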
Additional Links: PMID-36763944
@article {pmid36763944,
year = {2023},
author = {Shahinyan, GK and Hu, MY and Jiang, T and Osadchiy, V and Sigalos, JT and Mills, JN and Kachroo, N and Eleswarapu, SV},
title = {Cannabis and male sexual health: contemporary qualitative review and insight into perspectives of young men on the internet.},
journal = {Sexual medicine reviews},
volume = {},
number = {},
pages = {},
doi = {10.1093/sxmrev/qeac010},
pmid = {36763944},
issn = {2050-0521},
abstract = {INTRODUCTION: Cannabis use is increasing across the United States, yet its short- and long-term effects on sexual function remain controversial. Currently, there is a paucity of studies exploring the relationship between cannabis and men's health.
OBJECTIVES: To summarize the available literature on cannabis and men's health and provide insight into lay perceptions of this topic.
METHODS: We performed a qualitative PubMed review of the existing literature on cannabis and men's health according to the PRISMA guidelines. Separately, we analyzed relevant themes in online men's health forums. We utilized a Google cloud-based platform (BigQuery) to extract relevant posts from 5 men's health Reddit forums from August 2018 to August 2019. We conducted a qualitative thematic analysis of the posts and quantitatively analyzed them using natural language processing and a meaning extraction method with principal component analysis.
RESULTS: Our literature review revealed a mix of animal and human studies demonstrating the negative effects of cannabis on semen parameters and varying effects on erectile function and hormone levels. In our analysis of 372 686 Reddit posts, 1190 (0.3%) included relevant discussion on cannabis and men's health. An overall 272 posts were manually analyzed, showing that online discussions revolve around seeking answers and sharing the effects of cannabis on various aspects of sexual health and quality of life, often with conflicting experiences. Quantitative analysis revealed 1 thematic cluster related to cannabis, insecurity, and mental/physical health.
CONCLUSIONS: There is a limited number of quality human studies investigating the effects of cannabis on men's health. Men online are uncertain about how cannabis affects their sexual health and seek more information. As the prevalence of cannabis use increases, so does the need for research in this area.},
}
RevDate: 2023-02-10
SL-Cloud: A Cloud-based resource to support synthetic lethal interaction discovery.
F1000Research, 11:493.
Synthetic lethal interactions (SLIs), genetic interactions in which the simultaneous inactivation of two genes leads to a lethal phenotype, are promising targets for therapeutic intervention in cancer, as exemplified by the recent success of PARP inhibitors in treating BRCA1/2-deficient tumors. We present SL-Cloud, a new component of the Institute for Systems Biology Cancer Gateway in the Cloud (ISB-CGC), that provides an integrated framework of cloud-hosted data resources and curated workflows to enable facile prediction of SLIs. This resource addresses two main challenges related to SLI inference: the need to wrangle and preprocess large multi-omic datasets and the availability of multiple comparable prediction approaches. SL-Cloud enables customizable computational inference of SLIs and testing of prediction approaches across multiple datasets. We anticipate that cancer researchers will find utility in this tool for discovery of SLIs to support further investigation into potential drug targets for anticancer therapies.
Additional Links: PMID-36761837
@article {pmid36761837,
year = {2022},
author = {Tercan, B and Qin, G and Kim, TK and Aguilar, B and Phan, J and Longabaugh, W and Pot, D and Kemp, CJ and Chambwe, N and Shmulevich, I},
title = {SL-Cloud: A Cloud-based resource to support synthetic lethal interaction discovery.},
journal = {F1000Research},
volume = {11},
number = {},
pages = {493},
doi = {10.12688/f1000research.110903.1},
pmid = {36761837},
issn = {2046-1402},
abstract = {Synthetic lethal interactions (SLIs), genetic interactions in which the simultaneous inactivation of two genes leads to a lethal phenotype, are promising targets for therapeutic intervention in cancer, as exemplified by the recent success of PARP inhibitors in treating BRCA1/2-deficient tumors. We present SL-Cloud, a new component of the Institute for Systems Biology Cancer Gateway in the Cloud (ISB-CGC), that provides an integrated framework of cloud-hosted data resources and curated workflows to enable facile prediction of SLIs. This resource addresses two main challenges related to SLI inference: the need to wrangle and preprocess large multi-omic datasets and the availability of multiple comparable prediction approaches. SL-Cloud enables customizable computational inference of SLIs and testing of prediction approaches across multiple datasets. We anticipate that cancer researchers will find utility in this tool for discovery of SLIs to support further investigation into potential drug targets for anticancer therapies.},
}
RevDate: 2023-02-10
First steps into the cloud: Using Amazon data storage and computing with Python notebooks.
PloS one, 18(2):e0278316.
With the oncoming age of big data, biologists are encountering more use cases for cloud-based computing to streamline data processing and storage. Unfortunately, cloud platforms are difficult to learn, and there are few resources for biologists to demystify them. We have developed a guide for experimental biologists to set up cloud processing on Amazon Web Services to cheaply outsource data processing and storage. Here we provide a guide for setting up a computing environment in the cloud and showcase examples of using Python and Julia programming languages. We present example calcium imaging data in the zebrafish brain and corresponding analysis using suite2p software. Tools for budget and user management are further discussed in the attached protocol. Using this guide, researchers with limited coding experience can get started with cloud-based computing or move existing coding infrastructure into the cloud environment.
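In the spirit of this guide, the snippet below shows the kind of minimal boto3 calls used to push data into Amazon S3 from a notebook and pull it back for analysis. The bucket and object names are placeholders; the paper's attached protocol covers the full environment, budgeting, and user-management setup.

# pip install boto3   (credentials are typically configured via `aws configure`)
import boto3

s3 = boto3.client("s3")
bucket = "my-lab-imaging-data"        # placeholder bucket name

# Upload a local recording, then list what is stored under the same prefix.
s3.upload_file("session01_calcium.tif", bucket, "raw/session01_calcium.tif")
response = s3.list_objects_v2(Bucket=bucket, Prefix="raw/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], "bytes")

# Later, an analysis notebook (e.g., one running suite2p) can fetch the file back.
s3.download_file(bucket, "raw/session01_calcium.tif", "/tmp/session01_calcium.tif")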
Additional Links: PMID-36757918
@article {pmid36757918,
year = {2023},
author = {Pollak, DJ and Chawla, G and Andreev, A and Prober, DA},
title = {First steps into the cloud: Using Amazon data storage and computing with Python notebooks.},
journal = {PloS one},
volume = {18},
number = {2},
pages = {e0278316},
pmid = {36757918},
issn = {1932-6203},
abstract = {With the oncoming age of big data, biologists are encountering more use cases for cloud-based computing to streamline data processing and storage. Unfortunately, cloud platforms are difficult to learn, and there are few resources for biologists to demystify them. We have developed a guide for experimental biologists to set up cloud processing on Amazon Web Services to cheaply outsource data processing and storage. Here we provide a guide for setting up a computing environment in the cloud and showcase examples of using Python and Julia programming languages. We present example calcium imaging data in the zebrafish brain and corresponding analysis using suite2p software. Tools for budget and user management are further discussed in the attached protocol. Using this guide, researchers with limited coding experience can get started with cloud-based computing or move existing coding infrastructure into the cloud environment.},
}
RevDate: 2023-02-08
CmpDate: 2023-02-09
Ultra-fast semi-empirical quantum chemistry for high-throughput computational campaigns with Sparrow.
The Journal of chemical physics, 158(5):054118.
Semi-empirical quantum chemical approaches are known to compromise accuracy for the feasibility of calculations on huge molecules. However, the need for ultrafast calculations in interactive quantum mechanical studies, high-throughput virtual screening, and data-driven machine learning has shifted the emphasis toward calculation runtimes recently. This comes with new constraints for the software implementation as many fast calculations would suffer from a large overhead of the manual setup and other procedures that are comparatively fast when studying a single molecular structure, but which become prohibitively slow for high-throughput demands. In this work, we discuss the effect of various well-established semi-empirical approximations on calculation speed and relate this to data transfer rates from the raw-data source computer to the results of the visualization front end. For the former, we consider desktop computers, local high performance computing, and remote cloud services in order to elucidate the effect on interactive calculations, for web and cloud interfaces in local applications, and in world-wide interactive virtual sessions. The models discussed in this work have been implemented into our open-source software SCINE Sparrow.
Additional Links: PMID-36754821
@article {pmid36754821,
year = {2023},
author = {Bosia, F and Zheng, P and Vaucher, A and Weymuth, T and Dral, PO and Reiher, M},
title = {Ultra-fast semi-empirical quantum chemistry for high-throughput computational campaigns with Sparrow.},
journal = {The Journal of chemical physics},
volume = {158},
number = {5},
pages = {054118},
doi = {10.1063/5.0136404},
pmid = {36754821},
issn = {1089-7690},
abstract = {Semi-empirical quantum chemical approaches are known to compromise accuracy for the feasibility of calculations on huge molecules. However, the need for ultrafast calculations in interactive quantum mechanical studies, high-throughput virtual screening, and data-driven machine learning has shifted the emphasis toward calculation runtimes recently. This comes with new constraints for the software implementation as many fast calculations would suffer from a large overhead of the manual setup and other procedures that are comparatively fast when studying a single molecular structure, but which become prohibitively slow for high-throughput demands. In this work, we discuss the effect of various well-established semi-empirical approximations on calculation speed and relate this to data transfer rates from the raw-data source computer to the results of the visualization front end. For the former, we consider desktop computers, local high performance computing, and remote cloud services in order to elucidate the effect on interactive calculations, for web and cloud interfaces in local applications, and in world-wide interactive virtual sessions. The models discussed in this work have been implemented into our open-source software SCINE Sparrow.},
}
RevDate: 2023-02-08
Breaking the barriers to designing online experiments: A novel open-source platform for supporting procedural skill learning experiments.
Computers in biology and medicine, 154:106627 pii:S0010-4825(23)00092-6 [Epub ahead of print].
BACKGROUND: Motor learning experiments are typically performed in laboratory environments, which can be time-consuming and require dedicated equipment/personnel, thus limiting the ability to gather data from large samples. To address this problem, some researchers have transitioned to unsupervised online experiments, showing advantages in participant recruitment without losing validity. However, most online platforms require coding experience or time-consuming setups to create and run experiments, limiting their usage across the field.
METHOD: To tackle this issue, an open-source web-based platform was developed (https://experiments.neurro-lab.engin.umich.edu/) to create, run, and manage procedural skill learning experiments without coding or setup requirements. The feasibility of the platform and the comparability of results between supervised (n = 17) and unsupervised (n = 24) settings were tested in 41 naive right-handed participants using an established sequential finger tapping task. The study also tested whether a previously reported rapid form of offline consolidation (i.e., microscale learning) in procedural skill learning could be replicated with the developed platform and evaluated the extent of interlimb transfer associated with the finger tapping task.
RESULTS: The results indicated that the performance metrics were comparable between the supervised and unsupervised groups (all p's > 0.05). The learning curves, mean tapping speeds, and micro-scale learning were similar to previous studies. Training led to significant improvements in mean tapping speed (2.22 ± 1.48 keypresses/s, p < 0.001) and a significant interlimb transfer of learning (1.22 ± 1.43 keypresses/s, p < 0.05).
CONCLUSIONS: The results show that the presented platform may serve as a valuable tool for conducting online procedural skill-learning experiments.
Additional Links: PMID-36753980
@article {pmid36753980,
year = {2023},
author = {Cubillos, LH and Augenstein, TE and Ranganathan, R and Krishnan, C},
title = {Breaking the barriers to designing online experiments: A novel open-source platform for supporting procedural skill learning experiments.},
journal = {Computers in biology and medicine},
volume = {154},
number = {},
pages = {106627},
doi = {10.1016/j.compbiomed.2023.106627},
pmid = {36753980},
issn = {1879-0534},
abstract = {BACKGROUND: Motor learning experiments are typically performed in laboratory environments, which can be time-consuming and require dedicated equipment/personnel, thus limiting the ability to gather data from large samples. To address this problem, some researchers have transitioned to unsupervised online experiments, showing advantages in participant recruitment without losing validity. However, most online platforms require coding experience or time-consuming setups to create and run experiments, limiting their usage across the field.
METHOD: To tackle this issue, an open-source web-based platform was developed (https://experiments.neurro-lab.engin.umich.edu/) to create, run, and manage procedural skill learning experiments without coding or setup requirements. The feasibility of the platform and the comparability of the results between supervised (n = 17) and unsupervised (n = 24) were tested in 41 naive right-handed participants using an established sequential finger tapping task. The study also tested if a previously reported rapid form of offline consolidation (i.e., microscale learning) in procedural skill learning could be replicated with the developed platform and evaluated the extent of interlimb transfer associated with the finger tapping task.
RESULTS: The results indicated that the performance metrics were comparable between the supervised and unsupervised groups (all p's > 0.05). The learning curves, mean tapping speeds, and micro-scale learning were similar to previous studies. Training led to significant improvements in mean tapping speed (2.22 ± 1.48 keypresses/s, p < 0.001) and a significant interlimb transfer of learning (1.22 ± 1.43 keypresses/s, p < 0.05).
CONCLUSIONS: The results show that the presented platform may serve as a valuable tool for conducting online procedural skill-learning experiments.},
}
RevDate: 2023-02-07
Interactive Quantum Chemistry Enabled by Machine Learning, Graphical Processing Units, and Cloud Computing.
Annual review of physical chemistry [Epub ahead of print].
Modern quantum chemistry algorithms are increasingly able to accurately predict molecular properties that are useful for chemists in research and education. Despite this progress, performing such calculations is currently unattainable to the wider chemistry community, as they often require domain expertise, computer programming skills, and powerful computer hardware. In this review, we outline methods to eliminate these barriers using cutting-edge technologies. We discuss the ingredients needed to create accessible platforms that can compute quantum chemistry properties in real time, including graphical processing units-accelerated quantum chemistry in the cloud, artificial intelligence-driven natural molecule input methods, and extended reality visualization. We end by highlighting a series of exciting applications that assemble these components to create uniquely interactive platforms for computing and visualizing spectra, 3D structures, molecular orbitals, and many other chemical properties. Expected final online publication date for the Annual Review of Physical Chemistry, Volume 74 is April 2023. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Additional Links: PMID-36750410
@article {pmid36750410,
year = {2022},
author = {Raucci, U and Weir, H and Sakshuwong, S and Seritan, S and Hicks, CB and Vannucci, F and Rea, F and Martínez, TJ},
title = {Interactive Quantum Chemistry Enabled by Machine Learning, Graphical Processing Units, and Cloud Computing.},
journal = {Annual review of physical chemistry},
volume = {},
number = {},
pages = {},
doi = {10.1146/annurev-physchem-061020-053438},
pmid = {36750410},
issn = {1545-1593},
abstract = {Modern quantum chemistry algorithms are increasingly able to accurately predict molecular properties that are useful for chemists in research and education. Despite this progress, performing such calculations is currently unattainable to the wider chemistry community, as they often require domain expertise, computer programming skills, and powerful computer hardware. In this review, we outline methods to eliminate these barriers using cutting-edge technologies. We discuss the ingredients needed to create accessible platforms that can compute quantum chemistry properties in real time, including graphical processing units-accelerated quantum chemistry in the cloud, artificial intelligence-driven natural molecule input methods, and extended reality visualization. We end by highlighting a series of exciting applications that assemble these components to create uniquely interactive platforms for computing and visualizing spectra, 3D structures, molecular orbitals, and many other chemical properties. Expected final online publication date for the Annual Review of Physical Chemistry, Volume 74 is April 2023. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.},
}
RevDate: 2023-02-07
A harmonized public resource of deeply sequenced diverse human genomes.
bioRxiv : the preprint server for biology pii:2023.01.23.525248.
Underrepresented populations are often excluded from genomic studies due in part to a lack of resources supporting their analysis. The 1000 Genomes Project (1kGP) and Human Genome Diversity Project (HGDP), which have recently been sequenced to high coverage, are valuable genomic resources because of the global diversity they capture and their open data sharing policies. Here, we harmonized a high quality set of 4,096 whole genomes from HGDP and 1kGP with data from gnomAD and identified over 155 million high-quality SNVs, indels, and SVs. We performed a detailed ancestry analysis of this cohort, characterizing population structure and patterns of admixture across populations, analyzing site frequency spectra, and measuring variant counts at global and subcontinental levels. We also demonstrate substantial added value from this dataset compared to the prior versions of the component resources, typically combined via liftover and variant intersection; for example, we catalog millions of new genetic variants, mostly rare, compared to previous releases. In addition to unrestricted individual-level public release, we provide detailed tutorials for conducting many of the most common quality control steps and analyses with these data in a scalable cloud-computing environment and publicly release this new phased joint callset for use as a haplotype resource in phasing and imputation pipelines. This jointly called reference panel will serve as a key resource to support research of diverse ancestry populations.
Additional Links: PMID-36747613
@article {pmid36747613,
year = {2023},
author = {Koenig, Z and Yohannes, MT and Nkambule, LL and Goodrich, JK and Kim, HA and Zhao, X and Wilson, MW and Tiao, G and Hao, SP and Sahakian, N and Chao, KR and , and Talkowski, ME and Daly, MJ and Brand, H and Karczewski, KJ and Atkinson, EG and Martin, AR},
title = {A harmonized public resource of deeply sequenced diverse human genomes.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
doi = {10.1101/2023.01.23.525248},
pmid = {36747613},
abstract = {Underrepresented populations are often excluded from genomic studies due in part to a lack of resources supporting their analysis. The 1000 Genomes Project (1kGP) and Human Genome Diversity Project (HGDP), which have recently been sequenced to high coverage, are valuable genomic resources because of the global diversity they capture and their open data sharing policies. Here, we harmonized a high quality set of 4,096 whole genomes from HGDP and 1kGP with data from gnomAD and identified over 155 million high-quality SNVs, indels, and SVs. We performed a detailed ancestry analysis of this cohort, characterizing population structure and patterns of admixture across populations, analyzing site frequency spectra, and measuring variant counts at global and subcontinental levels. We also demonstrate substantial added value from this dataset compared to the prior versions of the component resources, typically combined via liftover and variant intersection; for example, we catalog millions of new genetic variants, mostly rare, compared to previous releases. In addition to unrestricted individual-level public release, we provide detailed tutorials for conducting many of the most common quality control steps and analyses with these data in a scalable cloud-computing environment and publicly release this new phased joint callset for use as a haplotype resource in phasing and imputation pipelines. This jointly called reference panel will serve as a key resource to support research of diverse ancestry populations.},
}
RevDate: 2023-02-04
Retracted: Discussion on Health Service System of Mobile Medical Institutions Based on Internet of Things and Cloud Computing.
Journal of healthcare engineering, 2023:9892481.
[This retracts the article DOI: 10.1155/2022/5235349.].
Additional Links: PMID-36733938
@article {pmid36733938,
year = {2023},
author = {Healthcare Engineering, JO},
title = {Retracted: Discussion on Health Service System of Mobile Medical Institutions Based on Internet of Things and Cloud Computing.},
journal = {Journal of healthcare engineering},
volume = {2023},
number = {},
pages = {9892481},
pmid = {36733938},
issn = {2040-2309},
abstract = {[This retracts the article DOI: 10.1155/2022/5235349.].},
}
RevDate: 2023-02-01
NMRtist: an online platform for automated biomolecular NMR spectra analysis.
Bioinformatics (Oxford, England) pii:7019933 [Epub ahead of print].
UNLABELLED: We present NMRtist, an online platform that combines deep learning, large-scale optimization, and cloud computing to automate protein NMR spectra analysis. Our website provides virtual storage for NMR spectra deposition together with a set of applications designed for automated peak picking, chemical shift assignment, and protein structure determination. The system can be used by non-experts and allows protein assignments and structures to be determined within hours after the measurements, strictly without any human intervention.
AVAILABILITY: NMRtist is freely available to non-commercial users at https://nmrtist.org.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
Additional Links: PMID-36723167
@article {pmid36723167,
year = {2023},
author = {Klukowski, P and Riek, R and Güntert, P},
title = {NMRtist: an online platform for automated biomolecular NMR spectra analysis.},
journal = {Bioinformatics (Oxford, England)},
volume = {},
number = {},
pages = {},
doi = {10.1093/bioinformatics/btad066},
pmid = {36723167},
issn = {1367-4811},
abstract = {UNLABELLED: We present NMRtist, an online platform that combines deep learning, large-scale optimization, and cloud computing to automate protein NMR spectra analysis. Our website provides virtual storage for NMR spectra deposition together with a set of applications designed for automated peak picking, chemical shift assignment, and protein structure determination. The system can be used by non-experts and allows protein assignments and structures to be determined within hours after the measurements, strictly without any human intervention.
AVAILABILITY: NMRtist is freely available to non-commercial users at https://nmrtist.org.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.},
}
RevDate: 2023-02-01
The BACPAC Research Program Data Harmonization: Rationale for Data Elements and Standards.
Pain medicine (Malden, Mass.) pii:7017526 [Epub ahead of print].
OBJECTIVE: One aim of the Back Pain Consortium (BACPAC) Research Program is to develop an integrated model of chronic low back pain that is informed by combined data from translational research and clinical trials. We describe efforts to maximize data harmonization and accessibility to facilitate Consortium-wide analyses.
METHODS: Consortium-wide working groups established harmonized data elements to be collected in all studies and developed standards for tabular and non-tabular data (e.g., imaging and omics). The BACPAC Data Portal was developed to facilitate research collaboration across the Consortium.
RESULTS: Clinical experts developed the BACPAC Minimum Dataset with required domains and outcome measures to be collected using questionnaires across projects. Other non-required domain-specific measures are collected by multiple studies. To optimize cross-study analyses, a modified data standard was developed based on the Clinical Data Interchange Standards Consortium Study Data Tabulation Model to harmonize data structures and facilitate integration of baseline characteristics, participant-reported outcomes, chronic low back pain treatments, clinical exam, functional performance, psychosocial characteristics, quantitative sensory testing, imaging and biomechanical data. Standards to accommodate the unique features of chronic low back pain data were adopted. Research units submit standardized study data to the BACPAC Data Portal, developed as a secure cloud-based central data repository and computing infrastructure for researchers to access and conduct analyses on data collected by or acquired for BACPAC.
CONCLUSIONS: BACPAC harmonization efforts and data standards serve as an innovative model for data integration that could be used as a framework for other consortia with multiple, decentralized research programs.
Additional Links: PMID-36721327
@article {pmid36721327,
year = {2023},
author = {Batorsky, A and Bowden, AE and Darwin, J and Fields, AJ and Greco, CM and Harris, RE and Hue, TF and Kakyomya, J and Mehling, W and O'Neill, C and Patterson, CG and Piva, SR and Sollmann, N and Toups, V and Wasan, AD and Wasserman, R and Williams, DA and Vo, NV and Psioda, MA and McCumber, M},
title = {The BACPAC Research Program Data Harmonization: Rationale for Data Elements and Standards.},
journal = {Pain medicine (Malden, Mass.)},
volume = {},
number = {},
pages = {},
doi = {10.1093/pm/pnad008},
pmid = {36721327},
issn = {1526-4637},
abstract = {OBJECTIVE: One aim of the Back Pain Consortium (BACPAC) Research Program is to develop an integrated model of chronic low back pain that is informed by combined data from translational research and clinical trials. We describe efforts to maximize data harmonization and accessibility to facilitate Consortium-wide analyses.
METHODS: Consortium-wide working groups established harmonized data elements to be collected in all studies and developed standards for tabular and non-tabular data (e.g., imaging and omics). The BACPAC Data Portal was developed to facilitate research collaboration across the Consortium.
RESULTS: Clinical experts developed the BACPAC Minimum Dataset with required domains and outcome measures to be collected using questionnaires across projects. Other non-required domain-specific measures are collected by multiple studies. To optimize cross-study analyses, a modified data standard was developed based on the Clinical Data Interchange Standards Consortium Study Data Tabulation Model to harmonize data structures and facilitate integration of baseline characteristics, participant-reported outcomes, chronic low back pain treatments, clinical exam, functional performance, psychosocial characteristics, quantitative sensory testing, imaging and biomechanical data. Standards to accommodate the unique features of chronic low back pain data were adopted. Research units submit standardized study data to the BACPAC Data Portal, developed as a secure cloud-based central data repository and computing infrastructure for researchers to access and conduct analyses on data collected by or acquired for BACPAC.
CONCLUSIONS: BACPAC harmonization efforts and data standards serve as an innovative model for data integration that could be used as a framework for other consortia with multiple, decentralized research programs.},
}
RevDate: 2023-01-31
Genome-wide screening and identification of potential kinases involved in endoplasmic reticulum stress responses.
Life sciences pii:S0024-3205(23)00086-3 [Epub ahead of print].
AIM: This study aims to identify endoplasmic reticulum stress response elements (ERSE) in the human genome to explore potentially regulated genes, including kinases and transcription factors, involved in the endoplasmic reticulum (ER) stress and its related diseases.
MATERIALS AND METHODS: Python-based whole genome screening of ERSE was performed using the Amazon Web Services elastic computing system. The Kinome database was used to filter out the kinases from the extracted list of ERSE-related genes. Additionally, network analysis and genome enrichment were achieved using NDEx, the Network and Data Exchange software, and web-based computational tools. To validate the gene expression, quantitative RT-PCR was performed for selected kinases from the list by exposing the HeLa cells to tunicamycin, an ER stress inducer, for various time points.
KEY FINDINGS: The overall number of ERSE-associated genes follows a similar pattern in humans, mice, and rats, demonstrating the ERSE's conservation in mammals. A total of 2705 ERSE sequences were discovered in the human genome (GRCh38.p14), from which we identified 36 kinases encoding genes. Gene expression analysis has shown a significant change in the expression of selected genes under ER stress conditions in HeLa cells, supporting our finding.
SIGNIFICANCE: In this study, we have introduced a rapid method using Amazon cloud-based services for genome-wide screening of ERSE sequences from both positive and negative strands, which covers the entire genome reference sequences. Approximately 10 % of human protein-protein interactomes were found to be associated with ERSE-related genes. Our study also provides a rich resource of human ER stress-response-based protein networks and transcription factor interactions and a reference point for future research aiming at targeted therapeutics.
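To make the screening step concrete, the sketch below scans a DNA sequence and its reverse complement for the commonly cited ERSE consensus CCAAT-N9-CCACG with a regular expression. It is a single-sequence toy with a made-up promoter fragment; the study's genome-wide pipeline, exact consensus definitions, and AWS setup are not reproduced here.

import re

ERSE = re.compile(r"CCAAT[ACGT]{9}CCACG")   # canonical ERSE consensus: CCAAT-N9-CCACG

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_erse(seq: str):
    """Return (strand, start, matched motif) for ERSE hits on both strands."""
    seq = seq.upper()
    hits = [("+", m.start(), m.group()) for m in ERSE.finditer(seq)]
    rc = revcomp(seq)
    hits += [("-", len(seq) - m.end(), m.group()) for m in ERSE.finditer(rc)]
    return hits

# Toy promoter fragment with one embedded ERSE-like motif (made up for illustration).
fragment = "GGTACCAATCAGGTCGTCCCACGTTAGC"
print(find_erse(fragment))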
Additional Links: PMID-36720454
@article {pmid36720454,
year = {2023},
author = {Firoz, A and Ravanan, P and Saha, P and Prashar, T and Talwar, P},
title = {Genome-wide screening and identification of potential kinases involved in endoplasmic reticulum stress responses.},
journal = {Life sciences},
volume = {},
number = {},
pages = {121452},
doi = {10.1016/j.lfs.2023.121452},
pmid = {36720454},
issn = {1879-0631},
abstract = {AIM: This study aims to identify endoplasmic reticulum stress response elements (ERSE) in the human genome to explore potentially regulated genes, including kinases and transcription factors, involved in the endoplasmic reticulum (ER) stress and its related diseases.
MATERIALS AND METHODS: Python-based whole genome screening of ERSE was performed using the Amazon Web Services elastic computing system. The Kinome database was used to filter out the kinases from the extracted list of ERSE-related genes. Additionally, network analysis and genome enrichment were achieved using NDEx, the Network and Data Exchange software, and web-based computational tools. To validate the gene expression, quantitative RT-PCR was performed for selected kinases from the list by exposing the HeLa cells to tunicamycin, an ER stress inducer, for various time points.
KEY FINDINGS: The overall number of ERSE-associated genes follows a similar pattern in humans, mice, and rats, demonstrating the ERSE's conservation in mammals. A total of 2705 ERSE sequences were discovered in the human genome (GRCh38.p14), from which we identified 36 kinases encoding genes. Gene expression analysis has shown a significant change in the expression of selected genes under ER stress conditions in HeLa cells, supporting our finding.
SIGNIFICANCE: In this study, we have introduced a rapid method using Amazon cloud-based services for genome-wide screening of ERSE sequences from both positive and negative strands, which covers the entire genome reference sequences. Approximately 10 % of human protein-protein interactomes were found to be associated with ERSE-related genes. Our study also provides a rich resource of human ER stress-response-based protein networks and transcription factor interactions and a reference point for future research aiming at targeted therapeutics.},
}
RevDate: 2023-01-30
Monitoring invasive pines using remote sensing: a case study from Sri Lanka.
Environmental monitoring and assessment, 195(2):347.
Production plantation forestry has many economic benefits but can also have negative environmental impacts, such as the spread of invasive pines into native forest habitats. Monitoring forests for the presence of invasive pines helps with the management of this issue. However, detecting vegetation change over a long time period is difficult because of changes in image quality and sensor types, the spectral similarity of evergreen species, and frequent cloud cover in the study area. The costs of high-resolution images are also prohibitive for routine monitoring in resource-constrained countries. This research investigated the use of remote sensing to identify the spread of Pinus caribaea over a 21-year period (2000 to 2021) in Belihuloya, Sri Lanka, using Landsat images. It applied a range of techniques to produce cloud-free images, extract vegetation features, and improve vegetation classification accuracy, followed by the use of a Geographical Information System to spatially analyze the spread of invasive pines. The results showed that most invading pines were found within 100 m of the pine plantations' borders, where broadleaved forests and grasslands are vulnerable to invasion. However, the extent of invasive pines declined overall by 4 ha over the 21 years. The study confirmed that remote sensing combined with spatial analysis is an effective tool for monitoring invasive pines in countries with limited resources. This study also provides information to help conservationists and forest managers conduct strategic planning for sustainable forest management and conservation in Sri Lanka.
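The workflow above hinges on producing cloud-free imagery and extracting vegetation features from Landsat bands. As a minimal illustration of one such feature (not the paper's actual pipeline), the sketch below computes NDVI from hypothetical red and near-infrared band files, assuming numpy and rasterio are available.

import numpy as np
import rasterio  # assumption: rasterio is available for reading GeoTIFF bands

# Hypothetical single-band GeoTIFFs for the Landsat red and near-infrared bands.
with rasterio.open("landsat_red.tif") as red_src, rasterio.open("landsat_nir.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red); the denominator is floored to avoid division by zero.
ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
vegetation_mask = ndvi > 0.4  # illustrative threshold, not a value from the study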
Additional Links: PMID-36717471
@article {pmid36717471,
year = {2023},
author = {Nandasena, WDKV and Brabyn, L and Serrao-Neumann, S},
title = {Monitoring invasive pines using remote sensing: a case study from Sri Lanka.},
journal = {Environmental monitoring and assessment},
volume = {195},
number = {2},
pages = {347},
pmid = {36717471},
issn = {1573-2959},
abstract = {Production plantation forestry has many economic benefits but can also have negative environmental impacts, such as the spread of invasive pines into native forest habitats. Monitoring forests for the presence of invasive pines helps with the management of this issue. However, detecting vegetation change over a long time period is difficult because of changes in image quality and sensor types, the spectral similarity of evergreen species, and frequent cloud cover in the study area. The costs of high-resolution images are also prohibitive for routine monitoring in resource-constrained countries. This research investigated the use of remote sensing to identify the spread of Pinus caribaea over a 21-year period (2000 to 2021) in Belihuloya, Sri Lanka, using Landsat images. It applied a range of techniques to produce cloud-free images, extract vegetation features, and improve vegetation classification accuracy, followed by the use of a Geographical Information System to spatially analyze the spread of invasive pines. The results showed that most invading pines were found within 100 m of the pine plantations' borders, where broadleaved forests and grasslands are vulnerable to invasion. However, the extent of invasive pines declined overall by 4 ha over the 21 years. The study confirmed that remote sensing combined with spatial analysis is an effective tool for monitoring invasive pines in countries with limited resources. This study also provides information to help conservationists and forest managers conduct strategic planning for sustainable forest management and conservation in Sri Lanka.},
}
RevDate: 2023-01-31
CoCoNet: an efficient deep learning tool for viral metagenome binning.
Bioinformatics (Oxford, England), 37(18):2803-2810.
MOTIVATION: Metagenomic approaches hold the potential to characterize microbial communities and unravel the intricate link between the microbiome and biological processes. Assembly is one of the most critical steps in metagenomics experiments. It consists of transforming overlapping DNA sequencing reads into sufficiently accurate representations of the community's genomes. This process is computationally difficult and commonly results in genomes fragmented across many contigs. Computational binning methods are used to mitigate fragmentation by partitioning contigs based on their sequence composition, abundance or chromosome organization into bins representing the community's genomes. Existing binning methods have been principally tuned for bacterial genomes and do not perform favorably on viral metagenomes.
RESULTS: We propose Composition and Coverage Network (CoCoNet), a new binning method for viral metagenomes that leverages the flexibility and the effectiveness of deep learning to model the co-occurrence of contigs belonging to the same viral genome and provide a rigorous framework for binning viral contigs. Our results show that CoCoNet substantially outperforms existing binning methods on viral datasets.
CoCoNet was implemented in Python and is available for download on PyPi (https://pypi.org/). The source code is hosted on GitHub at https://github.com/Puumanamana/CoCoNet and the documentation is available at https://coconet.readthedocs.io/en/latest/index.html. CoCoNet does not require extensive resources to run. For example, binning 100k contigs took about 4 h on 10 Intel CPU Cores (2.4 GHz), with a memory peak at 27 GB (see Supplementary Fig. S9). To process a large dataset, CoCoNet may need to be run on a high RAM capacity server. Such servers are typically available in high-performance or cloud computing settings.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
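CoCoNet bins contigs by combining sequence composition with coverage across samples. The snippet below is an illustrative sketch of a composition feature only, not CoCoNet's implementation: a normalized tetranucleotide frequency vector for one contig fragment, of the kind such models typically consume.

from collections import Counter
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]  # the 256 tetranucleotides

def composition_vector(seq, k=4):
    """Normalized k-mer frequency vector for one contig fragment."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts[kmer] for kmer in KMERS) or 1
    return [counts[kmer] / total for kmer in KMERS]

# Example: composition_vector("ATGCGATACGCTTGACA" * 20) returns a 256-element vector.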
Additional Links: PMID-33822891
@article {pmid33822891,
year = {2021},
author = {Arisdakessian, CG and Nigro, OD and Steward, GF and Poisson, G and Belcaid, M},
title = {CoCoNet: an efficient deep learning tool for viral metagenome binning.},
journal = {Bioinformatics (Oxford, England)},
volume = {37},
number = {18},
pages = {2803-2810},
doi = {10.1093/bioinformatics/btab213},
pmid = {33822891},
issn = {1367-4811},
abstract = {MOTIVATION: Metagenomic approaches hold the potential to characterize microbial communities and unravel the intricate link between the microbiome and biological processes. Assembly is one of the most critical steps in metagenomics experiments. It consists of transforming overlapping DNA sequencing reads into sufficiently accurate representations of the community's genomes. This process is computationally difficult and commonly results in genomes fragmented across many contigs. Computational binning methods are used to mitigate fragmentation by partitioning contigs based on their sequence composition, abundance or chromosome organization into bins representing the community's genomes. Existing binning methods have been principally tuned for bacterial genomes and do not perform favorably on viral metagenomes.
RESULTS: We propose Composition and Coverage Network (CoCoNet), a new binning method for viral metagenomes that leverages the flexibility and the effectiveness of deep learning to model the co-occurrence of contigs belonging to the same viral genome and provide a rigorous framework for binning viral contigs. Our results show that CoCoNet substantially outperforms existing binning methods on viral datasets.
CoCoNet was implemented in Python and is available for download on PyPi (https://pypi.org/). The source code is hosted on GitHub at https://github.com/Puumanamana/CoCoNet and the documentation is available at https://coconet.readthedocs.io/en/latest/index.html. CoCoNet does not require extensive resources to run. For example, binning 100k contigs took about 4 h on 10 Intel CPU Cores (2.4 GHz), with a memory peak at 27 GB (see Supplementary Fig. S9). To process a large dataset, CoCoNet may need to be run on a high RAM capacity server. Such servers are typically available in high-performance or cloud computing settings.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.},
}
RevDate: 2023-01-30
MAG-D: A multivariate attention network based approach for cloud workload forecasting.
Future generations computer systems : FGCS, 142:376-392.
The Coronavirus pandemic and the shift to work-from-home have drastically changed working styles and forced a rapid move toward cloud-based platforms and services for seamless functioning. The pandemic has accelerated a permanent shift in cloud migration; it is estimated that over 95% of digital workloads will reside in cloud-native platforms. Real-time workload forecasting and efficient resource management are two critical challenges for cloud service providers. Because cloud workloads are highly volatile and chaotic owing to their time-varying nature, classical machine-learning-based prediction models fail to produce accurate forecasts. Recent advances in deep learning have gained massive popularity for forecasting highly nonlinear cloud workloads, yet existing approaches still fall short of excellent forecasting outcomes. Consequently, there is a demand for more accurate forecasting algorithms. Therefore, in this work, we propose 'MAG-D', a Multivariate Attention and Gated recurrent unit based Deep learning approach for cloud workload forecasting in data centers. We performed an extensive set of experiments on the Google cluster traces and confirm that MAG-D exploits the long-range nonlinear dependencies of cloud workloads and improves prediction accuracy on average compared with recent techniques that apply hybrid methods using Long Short-Term Memory networks (LSTM), Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), and Bidirectional Long Short-Term Memory networks (BiLSTM).
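The abstract describes an attention- plus GRU-based forecaster but gives no implementation details. The sketch below, assuming PyTorch, shows one simple way to combine a GRU encoder with attention over time steps for multivariate workload forecasting; the architecture and sizes are illustrative and are not MAG-D's.

import torch
import torch.nn as nn

class GRUAttentionForecaster(nn.Module):
    def __init__(self, n_features, hidden=64, horizon=1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)       # attention score for each time step
        self.head = nn.Linear(hidden, horizon)  # maps the context vector to the forecast

    def forward(self, x):                       # x: (batch, time, features)
        out, _ = self.gru(x)                    # (batch, time, hidden)
        weights = torch.softmax(self.score(out), dim=1)  # attention weights over time
        context = (weights * out).sum(dim=1)    # weighted sum of hidden states
        return self.head(context)               # (batch, horizon)

# Example: GRUAttentionForecaster(n_features=5)(torch.randn(8, 48, 5)) has shape (8, 1).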
Additional Links: PMID-36714386
@article {pmid36714386,
year = {2023},
author = {Patel, YS and Bedi, J},
title = {MAG-D: A multivariate attention network based approach for cloud workload forecasting.},
journal = {Future generations computer systems : FGCS},
volume = {142},
number = {},
pages = {376-392},
pmid = {36714386},
issn = {0167-739X},
abstract = {The Coronavirus pandemic and the shift to work-from-home have drastically changed working styles and forced a rapid move toward cloud-based platforms and services for seamless functioning. The pandemic has accelerated a permanent shift in cloud migration; it is estimated that over 95% of digital workloads will reside in cloud-native platforms. Real-time workload forecasting and efficient resource management are two critical challenges for cloud service providers. Because cloud workloads are highly volatile and chaotic owing to their time-varying nature, classical machine-learning-based prediction models fail to produce accurate forecasts. Recent advances in deep learning have gained massive popularity for forecasting highly nonlinear cloud workloads, yet existing approaches still fall short of excellent forecasting outcomes. Consequently, there is a demand for more accurate forecasting algorithms. Therefore, in this work, we propose 'MAG-D', a Multivariate Attention and Gated recurrent unit based Deep learning approach for cloud workload forecasting in data centers. We performed an extensive set of experiments on the Google cluster traces and confirm that MAG-D exploits the long-range nonlinear dependencies of cloud workloads and improves prediction accuracy on average compared with recent techniques that apply hybrid methods using Long Short-Term Memory networks (LSTM), Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), and Bidirectional Long Short-Term Memory networks (BiLSTM).},
}
RevDate: 2023-01-30
Towards interactional management for power batteries of electric vehicles.
RSC advances, 13(3):2036-2056.
With the ever-growing digitalization and mobility of electric transportation, lithium-ion batteries face performance and safety issues as new materials appear and manufacturing techniques advance. This paper presents a systematic review of burgeoning multi-scale modelling and design for battery efficiency and safety management. The rise of cloud computing offers a practical route to the interactional management and control of power batteries based on battery-system and traffic big data. The potential of selecting adaptive strategies in emerging digital management is covered systematically, from principles and modelling to machine learning. Specifically, multi-scale optimization is expounded in terms of materials, structures, manufacturing and grouping. Progress on modelling, state estimation and management methods is summarized and discussed in detail. Moreover, this review demonstrates the innovative progress of machine-learning-based data analysis in battery research so far, laying the foundation for future cloud and digital battery management to develop reliable onboard applications.
Additional Links: PMID-36712619
@article {pmid36712619,
year = {2023},
author = {He, R and Xie, W and Wu, B and Brandon, NP and Liu, X and Li, X and Yang, S},
title = {Towards interactional management for power batteries of electric vehicles.},
journal = {RSC advances},
volume = {13},
number = {3},
pages = {2036-2056},
pmid = {36712619},
issn = {2046-2069},
abstract = {With the ever-growing digitalization and mobility of electric transportation, lithium-ion batteries face performance and safety issues as new materials appear and manufacturing techniques advance. This paper presents a systematic review of burgeoning multi-scale modelling and design for battery efficiency and safety management. The rise of cloud computing offers a practical route to the interactional management and control of power batteries based on battery-system and traffic big data. The potential of selecting adaptive strategies in emerging digital management is covered systematically, from principles and modelling to machine learning. Specifically, multi-scale optimization is expounded in terms of materials, structures, manufacturing and grouping. Progress on modelling, state estimation and management methods is summarized and discussed in detail. Moreover, this review demonstrates the innovative progress of machine-learning-based data analysis in battery research so far, laying the foundation for future cloud and digital battery management to develop reliable onboard applications.},
}
RevDate: 2023-01-30
Localization of lung abnormalities on chest X-rays using self-supervised equivariant attention.
Biomedical engineering letters, 13(1):21-30.
UNLABELLED: Chest X-Ray (CXR) images provide most anatomical details and abnormalities on a 2D plane. Therefore, a 2D view of the 3D anatomy is sometimes sufficient for the initial diagnosis. However, close to fourteen commonly occurring diseases are sometimes difficult to identify by visually inspecting the images. Therefore, there is a drift toward developing computer-aided assistive systems to help radiologists. This paper proposes a deep learning model for the classification and localization of chest diseases using image-level annotations. The model consists of a modified ResNet50 backbone for extracting a feature corpus from the images, a classifier, and a pixel correlation module (PCM). During PCM training, the network is a weight-shared siamese architecture in which the first branch applies an affine transform to the image before feeding it to the network, while the second applies the same transform to the network output. The method was evaluated on CXR images from the clinical center, split 70:20 between training and testing. The model was developed and tested on the cloud computing platform Google Colaboratory (NVIDIA Tesla P100 GPU, 16 GB of RAM). A radiologist subjectively validated the results. Our model, trained with the configurations described in this paper, outperformed benchmark results.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13534-022-00249-5.
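The siamese training step described above enforces equivariance: the network's output for a transformed image should match the transformed output for the original image. The sketch below, assuming PyTorch, illustrates that consistency loss with a horizontal flip standing in for the paper's affine transform; it is not the authors' implementation.

import torch
import torch.nn.functional as F

def equivariance_loss(model, images):
    """Penalize disagreement between f(T(x)) and T(f(x)) for a model with spatial output."""
    flipped_images = torch.flip(images, dims=[-1])        # T(x): horizontal flip
    out_of_flipped = model(flipped_images)                # f(T(x))
    flipped_out = torch.flip(model(images), dims=[-1])    # T(f(x))
    return F.mse_loss(out_of_flipped, flipped_out)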
Additional Links: PMID-36711159
@article {pmid36711159,
year = {2023},
author = {D'Souza, G and Reddy, NVS and Manjunath, KN},
title = {Localization of lung abnormalities on chest X-rays using self-supervised equivariant attention.},
journal = {Biomedical engineering letters},
volume = {13},
number = {1},
pages = {21-30},
pmid = {36711159},
issn = {2093-985X},
abstract = {UNLABELLED: Chest X-Ray (CXR) images provide most anatomical details and abnormalities on a 2D plane. Therefore, a 2D view of the 3D anatomy is sometimes sufficient for the initial diagnosis. However, close to fourteen commonly occurring diseases are sometimes difficult to identify by visually inspecting the images. Therefore, there is a drift toward developing computer-aided assistive systems to help radiologists. This paper proposes a deep learning model for the classification and localization of chest diseases using image-level annotations. The model consists of a modified ResNet50 backbone for extracting a feature corpus from the images, a classifier, and a pixel correlation module (PCM). During PCM training, the network is a weight-shared siamese architecture in which the first branch applies an affine transform to the image before feeding it to the network, while the second applies the same transform to the network output. The method was evaluated on CXR images from the clinical center, split 70:20 between training and testing. The model was developed and tested on the cloud computing platform Google Colaboratory (NVIDIA Tesla P100 GPU, 16 GB of RAM). A radiologist subjectively validated the results. Our model, trained with the configurations described in this paper, outperformed benchmark results.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13534-022-00249-5.},
}
RevDate: 2023-01-30
Digital transformation of major scientific meetings induced by the COVID-19 pandemic: insights from the ESC 2020 annual congress.
European heart journal. Digital health, 2(4):704-712 pii:ztab076.
As a consequence of the COVID-19 pandemic, the European Society of Cardiology (ESC) was forced to pivot the scientific programme of the ESC Congress 2021 into a totally new format for online consumption, The Digital Experience. A variety of new suppliers were involved, including experts in TV studio production, cloud infrastructure, online platforms, video management, and online analytics. An information technology platform able to support hundreds of thousands of simultaneous connections was built, and cloud computing technologies were put in place to help scale the resources up and down to accommodate the high number of users at peak times. The video management system was characterized by multiple layers of security and redundancy and offered the same fluidity, albeit at different resolutions, to all users independently of the performance of their internet connection. The event, free for all users, was an undisputed success, both from a scientific/educational and from a digital technology perspective. The number of registrations increased almost four-fold compared with the 2019 record-breaking edition in Paris, with a greater proportion of younger and female participants as well as of participants from low- and middle-income countries. No major technical failures were encountered. For the first time in history, attendees from all around the globe had the same real-time access to the world's most popular cardiovascular conference.
Additional Links: PMID-36713097
@article {pmid36713097,
year = {2021},
author = {Roffi, M and Casadei, B and Gouillard, C and Nambatingué, N and Daval, G and Bardinet, I and Priori, SG},
title = {Digital transformation of major scientific meetings induced by the COVID-19 pandemic: insights from the ESC 2020 annual congress.},
journal = {European heart journal. Digital health},
volume = {2},
number = {4},
pages = {704-712},
doi = {10.1093/ehjdh/ztab076},
pmid = {36713097},
issn = {2634-3916},
abstract = {As a consequence of the COVID-19 pandemic, the European Society of Cardiology (ESC) was forced to pivot the scientific programme of the ESC Congress 2021 into a totally new format for online consumption, The Digital Experience. A variety of new suppliers were involved, including experts in TV studio production, cloud infrastructure, online platforms, video management, and online analytics. An information technology platform able to support hundreds of thousands of simultaneous connections was built, and cloud computing technologies were put in place to help scale the resources up and down to accommodate the high number of users at peak times. The video management system was characterized by multiple layers of security and redundancy and offered the same fluidity, albeit at different resolutions, to all users independently of the performance of their internet connection. The event, free for all users, was an undisputed success, both from a scientific/educational and from a digital technology perspective. The number of registrations increased almost four-fold compared with the 2019 record-breaking edition in Paris, with a greater proportion of younger and female participants as well as of participants from low- and middle-income countries. No major technical failures were encountered. For the first time in history, attendees from all around the globe had the same real-time access to the world's most popular cardiovascular conference.},
}
RevDate: 2023-01-27
Democratizing clinical-genomic data: How federated platforms can promote benefits sharing in genomics.
Frontiers in genetics, 13:1045450.
Since the first sequencing of the human genome, sequencing costs have dropped dramatically, leading to an explosion of genomic data. This valuable data should in theory be of huge benefit to the global community, yet unfortunately the benefits of these advances have not been widely distributed. Much of today's clinical-genomic data is siloed and inaccessible under strict governance and privacy policies, with more than 97% of hospital data going unused, according to one estimate. Despite these challenges, there are promising efforts to make clinical-genomic data accessible and useful without compromising security. Specifically, federated data platforms are emerging as key resources that facilitate secure data sharing without physically moving the data outside its organizational or jurisdictional boundaries. In this perspective, we summarize the overarching progress in establishing federated data platforms and highlight critical considerations on how they should be managed to ensure patient and public trust. These platforms are enabling global collaboration and improving the representation of underrepresented groups, since sequencing efforts have not prioritized diverse population representation until recently. Federated data platforms, when combined with advances in no-code technology, can be made accessible to the diverse end-users that make up the genomics workforce, and we discuss potential strategies to develop sustainable business models so that the platforms can continue to enable research long term. Although these platforms must be carefully managed to ensure appropriate and ethical use, they are democratizing access and insights to clinical-genomic data that will advance research and enable impactful therapeutic findings.
Additional Links: PMID-36704354
@article {pmid36704354,
year = {2022},
author = {Alvarellos, M and Sheppard, HE and Knarston, I and Davison, C and Raine, N and Seeger, T and Prieto Barja, P and Chatzou Dunford, M},
title = {Democratizing clinical-genomic data: How federated platforms can promote benefits sharing in genomics.},
journal = {Frontiers in genetics},
volume = {13},
number = {},
pages = {1045450},
pmid = {36704354},
issn = {1664-8021},
abstract = {Since the first sequencing of the human genome, sequencing costs have dropped dramatically, leading to an explosion of genomic data. This valuable data should in theory be of huge benefit to the global community, yet unfortunately the benefits of these advances have not been widely distributed. Much of today's clinical-genomic data is siloed and inaccessible under strict governance and privacy policies, with more than 97% of hospital data going unused, according to one estimate. Despite these challenges, there are promising efforts to make clinical-genomic data accessible and useful without compromising security. Specifically, federated data platforms are emerging as key resources that facilitate secure data sharing without physically moving the data outside its organizational or jurisdictional boundaries. In this perspective, we summarize the overarching progress in establishing federated data platforms and highlight critical considerations on how they should be managed to ensure patient and public trust. These platforms are enabling global collaboration and improving the representation of underrepresented groups, since sequencing efforts have not prioritized diverse population representation until recently. Federated data platforms, when combined with advances in no-code technology, can be made accessible to the diverse end-users that make up the genomics workforce, and we discuss potential strategies to develop sustainable business models so that the platforms can continue to enable research long term. Although these platforms must be carefully managed to ensure appropriate and ethical use, they are democratizing access and insights to clinical-genomic data that will advance research and enable impactful therapeutic findings.},
}
RevDate: 2023-01-26
Deep-learning optimized DEOCSU suite provides an iterable pipeline for accurate ChIP-exo peak calling.
Briefings in bioinformatics pii:7005164 [Epub ahead of print].
Recognizing the binding sites of DNA-binding proteins is key to elucidating transcriptional regulation in organisms. ChIP-exo enables researchers to delineate genome-wide binding landscapes of DNA-binding proteins with near single base-pair resolution. However, the peak calling step hinders ChIP-exo application, since the published algorithms tend to generate false-positive and false-negative predictions. Here, we report the development of DEOCSU (DEep-learning Optimized ChIP-exo peak calling SUite), a novel machine learning-based ChIP-exo peak calling suite. DEOCSU comprises a deep convolutional neural network model trained on curated ChIP-exo peak data to distinguish the visualized data of bona fide peaks from false ones. Performance validation of the trained deep-learning model indicated accuracy, precision and recall of over 95%. Applying the new suite to both in-house and publicly available ChIP-exo datasets obtained from bacteria, eukaryotes and archaea revealed accurate prediction of peaks containing canonical motifs, highlighting the versatility and efficiency of DEOCSU. Furthermore, DEOCSU can be executed on a cloud computing platform or in a local environment. With the visualization software included in the suite, adjustable options such as the peak-probability threshold, and iterable updating of the pre-trained model, DEOCSU can be optimized for users' specific needs.
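DEOCSU's core idea, as described above, is a convolutional classifier that labels a visualized peak window as genuine or spurious. The sketch below, assuming PyTorch, shows the general shape of such a classifier; the layer sizes are illustrative and are not DEOCSU's actual architecture.

import torch
import torch.nn as nn

# A small CNN mapping a 1-channel "visualized" peak window to two classes
# (bona fide peak vs. false call). Adaptive pooling makes the input size flexible.
peak_classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),
)

# Example: peak_classifier(torch.randn(4, 1, 64, 64)) has shape (4, 2).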
Additional Links: PMID-36702751
@article {pmid36702751,
year = {2023},
author = {Bang, I and Lee, SM and Park, S and Park, JY and Nong, LK and Gao, Y and Palsson, BO and Kim, D},
title = {Deep-learning optimized DEOCSU suite provides an iterable pipeline for accurate ChIP-exo peak calling.},
journal = {Briefings in bioinformatics},
volume = {},
number = {},
pages = {},
doi = {10.1093/bib/bbad024},
pmid = {36702751},
issn = {1477-4054},
abstract = {Recognizing the binding sites of DNA-binding proteins is key to elucidating transcriptional regulation in organisms. ChIP-exo enables researchers to delineate genome-wide binding landscapes of DNA-binding proteins with near single base-pair resolution. However, the peak calling step hinders ChIP-exo application, since the published algorithms tend to generate false-positive and false-negative predictions. Here, we report the development of DEOCSU (DEep-learning Optimized ChIP-exo peak calling SUite), a novel machine learning-based ChIP-exo peak calling suite. DEOCSU comprises a deep convolutional neural network model trained on curated ChIP-exo peak data to distinguish the visualized data of bona fide peaks from false ones. Performance validation of the trained deep-learning model indicated accuracy, precision and recall of over 95%. Applying the new suite to both in-house and publicly available ChIP-exo datasets obtained from bacteria, eukaryotes and archaea revealed accurate prediction of peaks containing canonical motifs, highlighting the versatility and efficiency of DEOCSU. Furthermore, DEOCSU can be executed on a cloud computing platform or in a local environment. With the visualization software included in the suite, adjustable options such as the peak-probability threshold, and iterable updating of the pre-trained model, DEOCSU can be optimized for users' specific needs.},
}
RevDate: 2023-01-26
Omics Notebook: robust, reproducible and flexible automated multiomics exploratory analysis and reporting.
Bioinformatics advances, 1(1):vbab024.
SUMMARY: Mass spectrometry is an increasingly important tool for the global interrogation of diverse biomolecules. Unfortunately, the complexity of downstream data analysis is a major challenge for the routine use of these data by investigators from broader training backgrounds. Omics Notebook is an open-source framework for automated, reproducible and customizable exploratory analysis, reporting and integration of multiomic data. Built-in functions allow the processing of proteomic data from MaxQuant and metabolomic data from XCMS, along with other omics data in standardized input formats as specified in the documentation. In addition, containerization manages R package installation requirements and tailors the framework to shared high-performance computing or cloud environments.
Omics Notebook is implemented in Python and R and is available for download from https://github.com/cnsb-boston/Omics_Notebook with additional documentation under a GNU GPLv3 license.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics Advances online.
Additional Links: PMID-36700091
@article {pmid36700091,
year = {2021},
author = {Blum, BC and Emili, A},
title = {Omics Notebook: robust, reproducible and flexible automated multiomics exploratory analysis and reporting.},
journal = {Bioinformatics advances},
volume = {1},
number = {1},
pages = {vbab024},
pmid = {36700091},
issn = {2635-0041},
abstract = {SUMMARY: Mass spectrometry is an increasingly important tool for the global interrogation of diverse biomolecules. Unfortunately, the complexity of downstream data analysis is a major challenge for the routine use of these data by investigators from broader training backgrounds. Omics Notebook is an open-source framework for automated, reproducible and customizable exploratory analysis, reporting and integration of multiomic data. Built-in functions allow the processing of proteomic data from MaxQuant and metabolomic data from XCMS, along with other omics data in standardized input formats as specified in the documentation. In addition, containerization manages R package installation requirements and tailors the framework to shared high-performance computing or cloud environments.
Omics Notebook is implemented in Python and R and is available for download from https://github.com/cnsb-boston/Omics_Notebook with additional documentation under a GNU GPLv3 license.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics Advances online.},
}