QUERY RUN: 26 Jan 2022 at 01:32
HITS: 2230

Bibliography on: Cloud Computing


ESP: PubMed Auto Bibliography, created 26 Jan 2022 at 01:32

Cloud Computing

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion
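The query above can be reproduced programmatically against NCBI's E-utilities. A minimal sketch using only the standard library (the esearch endpoint and its db/term/retmax/retmode parameters are NCBI's documented interface; the query string itself is copied verbatim from this page):

```python
from urllib.parse import urlencode

# The ESP bibliography query, copied verbatim from the page header.
QUERY = ('cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] '
         'or google[TIAB] or "microsoft azure"[TIAB]) '
         'NOT pmcbook NOT ispreviousversion')

def esearch_url(term, retmax=100):
    """Build an NCBI E-utilities esearch URL for a PubMed query."""
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    return base + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"})

url = esearch_url(QUERY)
```

Fetching `url` returns a JSON document whose `esearchresult.idlist` holds the matching PMIDs, retmax at a time, which mirrors how this page pages through its 2230 hits in blocks.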

Citations The Papers (from PubMed®)


RevDate: 2022-01-20

Kasinathan G, S Jayakumar (2022)

Cloud-Based Lung Tumor Detection and Stage Classification Using Deep Learning Techniques.

BioMed research international, 2022:4185835.

Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing have recently become widely used in the healthcare sector, aiding radiologists in better decision-making. Positron emission tomography (PET) imaging is one of the most reliable approaches for diagnosing many cancers, including lung tumors. In this work, we propose stage classification of lung tumors, a challenging task in computer-aided diagnosis. A modified computer-aided diagnosis system is considered as a way to reduce radiologists' heavy workloads and to provide a second opinion. In this paper, we present a strategy for classifying and validating different stages of lung tumor progression, as well as a deep neural model and cloud-based data collection for categorizing phases of pulmonary illness. The proposed system is a Cloud-based Lung Tumor Detector and Stage Classifier (Cloud-LTDSC), a hybrid technique for PET/CT images. Cloud-LTDSC uses an active contour model for lung tumor segmentation, and a multilayer convolutional neural network (M-CNN) for classifying different stages of lung cancer was modelled and validated with standard benchmark images. The performance of the presented technique is evaluated using 50 low-dose images from the benchmark LIDC-IDRI dataset as well as lung CT DICOM images. Compared with existing techniques in the literature, our proposed method achieved good results for the accuracy, recall, and precision metrics evaluated. Under numerous aspects, our proposed approach produces superior outcomes on all of the applied dataset images. Furthermore, the experiments achieve a lung tumor stage classification accuracy of 97%-99.1% (average 98.6%), which is significantly higher than that of other existing techniques.

RevDate: 2022-01-20

Syed SA, Sheela Sobana Rani K, Mohammad GB, et al (2022)

Design of Resources Allocation in 6G Cybertwin Technology Using the Fuzzy Neuro Model in Healthcare Systems.

Journal of healthcare engineering, 2022:5691203.

In 6G edge communication networks, machine learning models play a major role in enabling intelligent decision-making for optimal resource allocation in healthcare systems. However, a bottleneck arises, in the form of sophisticated memory calculations between the hidden layers and the cost of communication between edge devices/edge nodes and cloud centres, when transmitting data from the healthcare management system to the cloud centre via edge nodes. To reduce these hurdles, it is important to share workloads and thereby eliminate the problems of complicated memory calculations and transmission costs. The effort aims mainly to reduce the storage and cloud-computing costs associated with neural networks, as the complexity of the computations increases with the number of hidden layers. This study modifies federated learning to function in distributed resource-assignment settings as a distributed deep learning model. It improves the capacity to learn from data and assigns an ideal workload given limited available resources, slow network connections, and many edge devices. Edge devices and edge nodes can autonomously send current network status to the cloud centre using cybertwin, meaning that local data are regularly updated to compute global data. The simulation shows that the proposed resource management and allocation outperforms standard approaches; the results show that the proposed method achieves higher resource utilization and success rates than existing methods. Index terms: fuzzy, healthcare, bioinformatics, 6G wireless communication, cybertwin, machine learning, neural network, edge.

RevDate: 2022-01-20

Raju KB, Dara S, Vidyarthi A, et al (2022)

Smart Heart Disease Prediction System with IoT and Fog Computing Sectors Enabled by Cascaded Deep Learning Model.

Computational intelligence and neuroscience, 2022:1070697.

Chronic illnesses like chronic respiratory disease, cancer, heart disease, and diabetes are threats to humans around the world. Among them, heart disease, with its disparate features and symptoms, complicates diagnosis. With the emergence of smart wearable gadgets, fog computing and Internet of Things (IoT) solutions have become necessary for diagnosis. The proposed model integrates Edge-Fog-Cloud computing for the accurate and fast delivery of outcomes. The hardware components collect data from different patients. Significant heart features are extracted from the signals, and features of other attributes are also gathered. All these features are fed to the diagnostic system, which uses an Optimized Cascaded Convolution Neural Network (CCNN) whose hyperparameters are optimized by Galactic Swarm Optimization (GSO). In the performance analysis, the precision of the suggested GSO-CCNN is 3.7%, 3.7%, 3.6%, 7.6%, 67.9%, 48.4%, 33%, 10.9%, and 7.6% higher than that of PSO-CCNN, GWO-CCNN, WOA-CCNN, DHOA-CCNN, DNN, RNN, LSTM, CNN, and CCNN, respectively. Thus, the comparative analysis confirms the efficiency of the suggested system over the conventional models.

RevDate: 2022-01-19

Xie M, Yang L, Chen G, et al (2022)

RiboChat: a chat-style web interface for analysis and annotation of ribosome profiling data.

Briefings in bioinformatics pii:6511203 [Epub ahead of print].

The increasing volume of ribosome profiling (Ribo-seq) data, the computational complexity of its processing, and the operational difficulty of related analytical procedures present a daunting set of informatics challenges. These impose a substantial barrier to researchers, particularly those with no or limited bioinformatics expertise, in analyzing and decoding translation information from Ribo-seq data, driving the need for a new research paradigm for data computation and information extraction. We herein present a novel interactive web platform, RiboChat (https://db.cngb.org/ribobench/chat.html), for directly analyzing and annotating Ribo-seq data in the form of a chat conversation. It consists of a user-friendly web interface and a backend cloud-computing service. When a data-analysis question is typed into the chat window, an object-text detection module recognizes relevant keywords from the input text. Based on the features identified in the input, individual analytics modules are then scored to find the best-matching candidate. The corresponding analytics module is executed after checking that the datasets have been uploaded and the parameters configured. Overall, RiboChat represents an important step forward in the emerging direction of next-generation data analytics and will enable the broad research community to conveniently decipher translation information embedded within Ribo-seq data.
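The dispatch logic the abstract describes (recognize keywords, score each analytics module, run the best match) can be sketched roughly as follows. The module names and trigger keywords here are hypothetical illustrations, not RiboChat's actual modules:

```python
# Hypothetical analytics modules and their trigger keywords.
MODULES = {
    "qc": {"quality", "qc", "filter"},
    "periodicity": {"periodicity", "frame", "3-nt"},
    "orf_calling": {"orf", "translated", "coding"},
}

def best_module(question):
    """Score each module by keyword overlap with the question; return the best match."""
    words = set(question.lower().replace("?", "").split())
    scores = {name: len(words & keys) for name, keys in MODULES.items()}
    name = max(scores, key=scores.get)
    return name if scores[name] > 0 else None  # None: no module matched

best_module("which ORF regions are translated?")  # 'orf_calling'
```

A real system would add the completion checks the abstract mentions (datasets uploaded, parameters configured) before executing the selected module.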

RevDate: 2022-01-17

Wang L, Lu Z, Van Buren P, et al (2022)

SciApps: An Automated Platform for Processing and Distribution of Plant Genomics Data.

Methods in molecular biology (Clifton, N.J.), 2443:197-209.

SciApps is an open-source, web-based platform for processing, storing, visualizing, and distributing genomic data and analysis results. Built upon the Tapis (formerly Agave) platform, SciApps brings users TB-scale data storage via the CyVerse Data Store and over one million CPUs via the Extreme Science and Engineering Discovery Environment (XSEDE) resources at the Texas Advanced Computing Center (TACC). SciApps lets users chain individual jobs into automated and reproducible workflows in a distributed cloud and provides a management system for data, associated metadata, individual analysis jobs, and multi-step workflows. This chapter provides examples of how to (1) submit and manage jobs and construct workflows, (2) use public workflows for Bulked Segregant Analysis (BSA), and (3) construct a Data Analysis Center (DAC) and a Data Coordination Center (DCC) for the plant ENCODE project.

RevDate: 2022-01-17

Williams J (2022)

CyVerse for Reproducible Research: RNA-Seq Analysis.

Methods in molecular biology (Clifton, N.J.), 2443:57-79.

Complex research questions pose complex reproducibility challenges. Datasets may need to be managed over long periods of time. Reliable and secure repositories are needed for data storage. Sharing big data requires advance planning and becomes complex when collaborators are spread across institutions and countries. Many complex analyses require the larger compute resources only provided by cloud and high-performance computing infrastructure. Finally, at publication, funder and publisher requirements for data availability, accessibility, and computational reproducibility must be met. For all of these reasons, cloud-based cyberinfrastructures are an important component for satisfying the needs of data-intensive research. Learning how to incorporate these technologies into your research skill set will allow you to tackle data-analysis challenges that are often beyond the resources of individual research institutions. One of the advantages of CyVerse is that it offers many solutions for high-powered analyses that do not require knowledge of command-line (i.e., Linux) computing. In this chapter we highlight CyVerse capabilities by analyzing RNA-Seq data. The lessons learned will translate to doing RNA-Seq in other computing environments and will focus on how CyVerse infrastructure supports reproducibility goals (e.g., metadata management, containers), team science (e.g., data-sharing features), and flexible computing environments (e.g., interactive computing, scaling).

RevDate: 2022-01-17

Ogwel B, Odhiambo-Otieno G, Otieno G, et al (2022)

Leveraging cloud computing for improved health service delivery: Findings from public health facilities in Kisumu County, Western Kenya-2019.

Learning health systems, 6(1):e10276 pii:LRH210276.

Introduction: Healthcare delivery systems across the world have been shown to fall short of the ideals of being cost-effective and meeting pre-established standards of quality but the problem is more pronounced in Africa. Cloud computing emerges as a platform healthcare institutions could leverage to address these shortfalls. The aim of this study was to establish the extent of cloud computing adoption and its influence on health service delivery by public health facilities in Kisumu County.

Methods: The study employed a cross-sectional study design in one-time data collection among facility in-charges and health records officers from 57 public health facilities. The target population was 114 healthcare personnel and the sample size (n = 88) was computed using Yamane formula and drawn using stratified random sampling. Poisson regression was used to determine the influence of cloud computing adoption on the number of realized benefits to health service delivery.
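The Yamane sample-size formula cited in the Methods is simple enough to verify directly. A minimal sketch, assuming the conventional 5% margin of error (the abstract does not state which e was used):

```python
def yamane(N, e=0.05):
    """Yamane sample-size formula: n = N / (1 + N * e**2)."""
    return N / (1 + N * e * e)

n = yamane(114)   # ~88.7, consistent with the reported sample size of 88
```

With a target population of N = 114 and e = 0.05, the formula gives about 88.7, matching the study's n = 88 after truncation.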

Results: Among 80 respondents, cloud computing had been adopted by 42 (53%), while Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service implementations stood at 100%, 0%, and 5% among adopters, respectively. Overall, those who had adopted cloud computing realized a significantly higher number of benefits to health service delivery than those who had not (incidence-rate ratio (IRR) = 1.93, 95% confidence interval (95% CI) [1.36-2.72]). A significantly higher number of benefits was realized by those who had implemented Infrastructure-as-a-Service alongside Software-as-a-Service (IRR = 2.22, 95% CI [1.15-4.29]) and those who had implemented Software-as-a-Service only (IRR = 1.89, 95% CI [1.33-2.70]) compared to non-adopters. We observed similar results in the stratified analysis of economic, operational, and functional benefits to health service delivery.

Conclusion: Cloud computing resulted in improved health service delivery with these benefits still being realized irrespective of the service implementation model deployed. The findings buttress the need for healthcare institutions to adopt cloud computing and integrate it in their operations in order to improve health service delivery.

RevDate: 2022-01-17

Li Y, Li T, Shen P, et al (2021)

Sim-DRS: a similarity-based dynamic resource scheduling algorithm for microservice-based web systems.

PeerJ. Computer science, 7:e824 pii:cs-824.

Microservice-based Web Systems (MWS), which provide a fundamental infrastructure for constructing large-scale cloud-based Web applications, are designed as a set of independent, small, and modular microservices implementing individual tasks and communicating via messages. This microservice-based architecture offers great application scalability but incurs complex and reactive autoscaling actions that are performed dynamically and periodically based on current workloads. However, this scheduling problem has thus far remained largely unexplored. In this paper, we formulate the problem of Dynamic Resource Scheduling for Microservice-based Web Systems (DRS-MWS) and propose a similarity-based heuristic scheduling algorithm that aims to quickly find viable scheduling schemes by utilizing solutions to similar problems. The superiority of the proposed scheduling solution over three state-of-the-art algorithms is illustrated by experimental results generated with a well-known microservice benchmark on disparate computing nodes in public clouds.

RevDate: 2022-01-17

Hussain SA, Bassam NA, Zayegh A, et al (2022)

Prediction and Evaluation of healthy and unhealthy status of COVID-19 patients using wearable device prototype data.

MethodsX pii:S2215-0161(22)00003-6 [Epub ahead of print].

The seriousness of the COVID-19 pandemic is making the whole world suffer due to inefficient medication and vaccines. The prediction analysis in this article is carried out with a dataset downloaded from an application programming interface (API) designed explicitly for COVID-19 quarantined patients. The data are collected from a wearable device used by quarantined healthy and unhealthy patients. The wearable device provides timely data on temperature, heart rate, SpO2 (blood oxygen saturation), and blood pressure for alerting medical authorities and supporting better diagnosis and treatment. The dataset contains 1085 patients with eight features, representing 490 COVID-19-infected and 595 normal cases. The work considers parameters including heart rate, temperature, SpO2, bpm, and health status. Furthermore, the real-time data collected can predict the health status of patients as infected or non-infected from the measured parameters. The collected dataset is used with a random forest classifier, alongside linear and polynomial regression, to train and validate the COVID-19 patient data. Google Colab, an integrated development environment with Python, Jupyter notebooks, and scikit-learn version 0.22.1 built in, was used as the cloud coding tool. The dataset is split 80%/20% for training and testing to evaluate accuracy and avoid overfitting in the model. This analysis could help medical authorities and governmental agencies of every country respond in time and reduce the spread of the disease.
• The measured data provide a comprehensive mapping of disease symptoms to predict health status, helping restrict virus transmission and take the steps needed to control, mitigate, and manage the disease.
• The approach benefits scientific research with artificial intelligence (AI) in tackling the hurdles of analyzing disease diagnosis.
• The diagnosis of disease symptoms can identify patient severity, helping monitor and manage difficulties caused by the outbreak.
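The 80%/20% train/test split described above can be sketched with the standard library alone (the study used scikit-learn 0.22.1; this is a minimal stand-in using patient indices, not the actual wearable-device records):

```python
import random

random.seed(0)                       # fixed seed for reproducibility
patients = list(range(1085))         # stand-ins for the 490 infected + 595 normal records
random.shuffle(patients)             # randomize before splitting

split = int(0.8 * len(patients))     # 80% train / 20% test, as in the abstract
train, test = patients[:split], patients[split:]
```

For 1085 records this yields 868 training and 217 test samples; keeping the two sets disjoint is what guards against the overfitting the abstract mentions.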

RevDate: 2022-01-18

He P, Zhang B, S Shen (2022)

Effects of Out-of-Hospital Continuous Nursing on Postoperative Breast Cancer Patients by Medical Big Data.

Journal of healthcare engineering, 2022:9506915.

This study aimed to explore the application value of an intelligent medical communication system, based on the Apriori algorithm and a cloud follow-up platform, in out-of-hospital continuous nursing of breast cancer patients. The Apriori algorithm is optimized with Amazon Web Services (AWS) and graphics processing units (GPUs) to improve its data-mining speed. At the same time, a cloud follow-up platform-based intelligent mobile medical communication system is established, comprising log-in, my workstation, patient records, follow-up center, satisfaction management, propaganda and education center, SMS platform, and appointment management modules. The subjects were divided into a control group (routine telephone follow-up, n = 163) and an intervention group (continuous nursing intervention, n = 216) according to nursing method. The system was used to analyze patients' compliance, quality of life before and after nursing, functional limitation of the affected limb, and nursing satisfaction under the different nursing methods. The running time of the Apriori algorithm was proportional to the data volume and inversely proportional to the number of nodes in the cluster. Compared with the control group, the proportions of complete compliance, poor compliance, and total compliance in the intervention group differed significantly (P < 0.05). After the intervention, quality-of-life scores in both groups differed significantly from those before treatment (P < 0.05), and scores in the intervention group were higher than those in the control group (P < 0.05). The proportion of patients with limited or severely limited functional activity of the affected limb in the intervention group was significantly lower than that in the control group (P < 0.05). The satisfaction rate of postoperative nursing in the intervention group was significantly higher than that in the control group (P < 0.001), and the proportion of basically satisfied and dissatisfied patients in the control group was higher than that in the intervention group (P < 0.05).

RevDate: 2022-01-18

Tang J (2022)

Discussion on Health Service System of Mobile Medical Institutions Based on Internet of Things and Cloud Computing.

Journal of healthcare engineering, 2022:5235349.

Because people pay increasing attention to physical health and the traditional medical service system has many problems, there are growing calls for a new medical model. Much current research applies modern science and technology to propose solutions for medical development, but it generally focuses on details while ignoring the construction of the medical service system as a whole. To address the low efficiency of the traditional medical model, difficult communication between doctors and patients, unreasonable allocation of medical resources, and related problems, this article proposes establishing a complete medical and health service system. First, correlation functions, such as cosine similarity, are used to calculate the correlation of various medical products; then, the correlation measurement methods of cloud computing and the Internet of Things are used to connect smart medical equipment to the network; to efficiently store, compute, and analyze health data; and to provide online outpatient services, health file management, data analysis, and other functions. Next, the energy consumption formula of the wireless transceiver is used to reduce resource loss during system operation. A questionnaire is then used to understand the current state of mobile medical care and to propose improvements. This article also scores the performance of the system. The experimental results show that the performance rating of traditional medical institutions is B, while that of mobile medical institutions is A, and efficiency is improved by 4.42%.

RevDate: 2022-01-18

Li W, Zhang Y, Wang J, et al (2022)

MicroRNA-489 Promotes the Apoptosis of Cardiac Muscle Cells in Myocardial Ischemia-Reperfusion Based on Smart Healthcare.

Journal of healthcare engineering, 2022:2538769.

With the development of information technology, the concept of smart healthcare has gradually come to the fore. Smart healthcare uses a new generation of information technologies, such as the Internet of Things (IoT), big data, cloud computing, and artificial intelligence, to transform the traditional medical system in an all-around way, making healthcare more efficient, more convenient, and more personalized. miRNAs can regulate the proliferation, differentiation, and apoptosis of human cells. Relevant studies have also shown that miRNAs may play a key role in the occurrence and development of myocardial ischemia-reperfusion injury (MIRI). This study aims to explore the effects of miR-489 in MIRI. miR-489 expression in a myocardial ischemia-reperfusion animal model and in H/R-induced H9C2 cells was detected by qRT-PCR. The release of lactate dehydrogenase (LDH) and the activity of creatine kinase (CK) were measured after miR-489 knockdown in H/R-induced H9C2 cells. Apoptosis in H9C2 cells and the animal model was determined by ELISA. The relationship between miR-489 and SPIN1 was verified by a dual-fluorescence reporter assay. The expression of PI3K/AKT pathway-related proteins was detected by Western blot. Experimental results showed that miR-489 was highly expressed in cardiac muscle cells of the animal model and in H/R-induced H9C2 cells of the myocardial infarction group, and this expression was positively associated with the apoptosis of cardiac muscle cells after ischemia-reperfusion. miR-489 knockdown reduced the apoptosis of cardiac muscle cells caused by ischemia-reperfusion. Downstream targeting studies found that miR-489 promotes the apoptosis of cardiac muscle cells after ischemia-reperfusion by targeting and inhibiting the SPIN1-mediated PI3K/AKT pathway. In conclusion, high expression of miR-489 is associated with increased apoptosis of cardiac muscle cells after ischemia-reperfusion, which it promotes through targeted inhibition of the SPIN1-mediated PI3K/AKT pathway. Therefore, miR-489 is a potential therapeutic target for reducing the apoptosis of cardiac muscle cells after ischemia-reperfusion.

RevDate: 2022-01-16

Jadhao S, Davison CL, Roulis EV, et al (2022)

RBCeq: A robust and scalable algorithm for accurate genetic blood typing.

EBioMedicine, 76:103759 pii:S2352-3964(21)00553-3 [Epub ahead of print].

BACKGROUND: While blood transfusion is an essential cornerstone of hematological care, patients requiring repetitive transfusion remain at persistent risk of alloimmunization due to the diversity of human blood group polymorphisms. Despite their promise, user-friendly methods to accurately identify blood types from next-generation sequencing data are currently lacking. To address this unmet need, we have developed RBCeq, a novel genetic blood-typing algorithm that accurately identifies 36 blood group systems.

METHODS: RBCeq can predict complex blood groups such as RH, and ABO that require identification of small indels and copy number variants. RBCeq also reports clinically significant, rare, and novel variants with potential clinical relevance that may lead to the identification of novel blood group alleles.

FINDINGS: The RBCeq algorithm demonstrated 99·07% concordance when validated on 402 samples, which included 29 antigens with serology validation and 9 antigens with SNP-array validation across 14 blood group systems, plus 59 antigens validated against manually predicted phenotypes from variant call files. We have also developed a user-friendly web server that generates detailed blood-typing reports with advanced visualization (https://www.rbceq.org/).

INTERPRETATION: RBCeq will assist blood banks and immunohematology laboratories by overcoming existing methodological limitations like scalability, reproducibility, and accuracy when genotyping and phenotyping in multi-ethnic populations. This Amazon Web Services (AWS) cloud based platform has the potential to reduce pre-transfusion testing time and to increase sample processing throughput, ultimately improving quality of patient care.

FUNDING: This work was supported in part by Advance Queensland Research Fellowship, MRFF Genomics Health Futures Mission (76,757), and the Australian Red Cross LifeBlood. The Australian governments fund the Australian Red Cross Lifeblood for the provision of blood, blood products and services to the Australian community.

RevDate: 2022-01-15

Li J, Wang J, Yang L, et al (2022)

Spatiotemporal change analysis of long time series inland water in Sri Lanka based on remote sensing cloud computing.

Scientific reports, 12(1):766.

Sri Lanka is an important hub connecting Asia-Africa-Europe maritime routes. It receives abundant but spatiotemporally uneven rainfall and has evident seasonal water shortages. Monitoring water-area changes in inland lakes and reservoirs plays an important role in guiding the development and utilisation of water resources. In this study, a rapid surface-water extraction model was constructed on the Google Earth Engine remote sensing cloud computing platform. By evaluating candidate spectral water index methods, the spatiotemporal variations of reservoirs and inland lakes in Sri Lanka were analysed. The results showed that the Automated Water Extraction Index (AWEIsh) could accurately identify water boundaries, with an overall accuracy of 99.14%, and was suitable for surface-water extraction in Sri Lanka. The area of the Maduru Oya Reservoir showed an overall increasing trend with small fluctuations from 1988 to 2018, and the monthly area of the reservoir fluctuated significantly in 2017. Water resource management in the dry zone should therefore focus more on seasonal regulation and control. From 1995 to 2015, the number and area of lakes and reservoirs in Sri Lanka increased to different degrees, mainly concentrated in arid provinces including the Northern, North Central, and Western Provinces. Overall, the amount of surface water resources has increased.
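The AWEIsh index the study selected is a fixed linear combination of reflectance bands (the standard formulation from Feyisa et al. 2014, computed here in plain Python rather than on Earth Engine; the sample reflectance values are hypothetical):

```python
def awei_sh(blue, green, nir, swir1, swir2):
    """AWEIsh (Feyisa et al. 2014); positive values typically indicate water."""
    return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2

# Illustrative surface reflectances: water is dark in the NIR/SWIR bands,
# vegetated land is bright, so the index separates the two by sign.
water = awei_sh(0.08, 0.07, 0.02, 0.01, 0.01)   # > 0
land  = awei_sh(0.05, 0.08, 0.30, 0.25, 0.20)   # < 0
```

On Earth Engine the same expression would be applied per pixel over a Landsat image collection, with a simple `> 0` threshold producing the water mask.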

RevDate: 2022-01-14

Li Q, Jiang L, Qiao K, et al (2021)

INCloud: integrated neuroimaging cloud for data collection, management, analysis and clinical translations.

General psychiatry, 34(6):e100651 pii:gpsych-2021-100651.

Background: Neuroimaging techniques provide rich and accurate measures of brain structure and function, and have become one of the most popular methods in mental health and neuroscience research. Rapidly growing neuroimaging research generates massive amounts of data, bringing new challenges in data collection, large-scale data management, efficient computing requirements and data mining and analyses.

Aims: To tackle the challenges and promote the application of neuroimaging technology in clinical practice, we developed an integrated neuroimaging cloud (INCloud). INCloud provides a full-stack solution for the entire process of large-scale neuroimaging data collection, management, analysis and clinical applications.

Methods: INCloud consists of data acquisition systems, a data warehouse, automatic multimodal image quality check and processing systems, a brain feature library, a high-performance computing cluster and computer-aided diagnosis systems (CADS) for mental disorders. A unique design of INCloud is the brain feature library that converts the unit of data management from image to image features such as hippocampal volume. Connecting the CADS to the scientific database, INCloud allows the accumulation of scientific data to continuously improve the accuracy of objective diagnosis of mental disorders.

Results: Users can manage and analyze neuroimaging data on INCloud, without the need to download them to the local device. INCloud users can query, manage, analyze and share image features based on customized criteria. Several examples of 'mega-analyses' based on the brain feature library are shown.

Conclusions: Compared with traditional neuroimaging acquisition and analysis workflow, INCloud features safe and convenient data management and sharing, reduced technical requirements for researchers, high-efficiency computing and data mining, and straightforward translations to clinical service. The design and implementation of the system are also applicable to imaging research platforms in other fields.

RevDate: 2022-01-14

Han H, X Gu (2021)

Linkage Between Inclusive Digital Finance and High-Tech Enterprise Innovation Performance: Role of Debt and Equity Financing.

Frontiers in psychology, 12:814408.

This study investigates the relationship between digital financial inclusion, external financing, and the innovation performance of high-tech enterprises in China. The choice of corporate financing methods is an important part of organizational behavioral psychology, and different financing models will have a certain effect on organizational performance, especially in the digital economy environment. Therefore, based on resource dependence theory and financing constraint theory, the present study utilizes panel data on 112 companies in the Yangtze River Delta region, collected from the China Stock Market & Accounting Research (CSMAR) database from 2011 to 2020, together with "The Peking University Digital Financial Inclusion Index of China (PKU-DFIIC)" released by the Peking University Digital Finance Research Center and Ant Financial Group. The results show that the Digital Financial Inclusion Index (DFIIC) has a significant positive correlation with the innovation performance of high-tech enterprises. The higher the level of debt financing, the stronger the role of digital financial inclusion in promoting innovation performance. Investigating the DFIIC in terms of coverage breadth and usage depth, we find that usage depth does not significantly encourage innovation performance. The effect of the interaction between coverage breadth and external financing is consistent with the results for the DFIIC. The study suggests that equity financing promotes the usage depth of the DFIIC in state-owned enterprises. In contrast, debt financing promotes the coverage breadth of non-state-owned enterprises. Finally, we propose relevant policy recommendations based on the research results. These include in-depth popularization of inclusive finance in the daily operations of enterprises at the technical level, refinement of external financing policy incentives for enterprises based on the characteristics of ownership, and strengthening research into technologies such as big data, artificial intelligence (AI), and cloud computing. The paper presents a range of theoretical and practical implications for practitioners and academics relevant to high-tech enterprises.

RevDate: 2022-01-13

Decap D, de Schaetzen van Brienen L, Larmuseau M, et al (2022)

Halvade somatic: Somatic variant calling with Apache Spark.

GigaScience, 11(1):.

BACKGROUND: The accurate detection of somatic variants from sequencing data is of key importance for cancer treatment and research. Somatic variant calling requires a high sequencing depth of the tumor sample, especially when the detection of low-frequency variants is also desired. In turn, this leads to large volumes of raw sequencing data to process and hence, large computational requirements. For example, calling the somatic variants according to the GATK best practices guidelines requires days of computing time for a typical whole-genome sequencing sample.

FINDINGS: We introduce Halvade Somatic, a framework for somatic variant calling from DNA sequencing data that takes advantage of multi-node and/or multi-core compute platforms to reduce runtime. It relies on Apache Spark to provide scalable I/O and to create and manage data streams that are processed on different CPU cores in parallel. Halvade Somatic contains all required steps to process the tumor and matched normal sample according to the GATK best practices recommendations: read alignment (BWA), sorting of reads, preprocessing steps such as marking duplicate reads and base quality score recalibration (GATK), and, finally, calling the somatic variants (Mutect2). Our approach reduces the runtime on a single 36-core node to 19.5 h compared to a runtime of 84.5 h for the original pipeline, a speedup of 4.3 times. Runtime can be further decreased by scaling to multiple nodes, e.g., we observe a runtime of 1.36 h using 16 nodes, an additional speedup of 14.4 times. Halvade Somatic supports variant calling from both whole-genome sequencing and whole-exome sequencing data and also supports Strelka2 as an alternative or complementary variant calling tool. We provide a Docker image to facilitate single-node deployment. Halvade Somatic can be executed on a variety of compute platforms, including Amazon EC2 and Google Cloud.
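The reported runtimes imply the quoted speedups directly; a quick sanity check in Python, using only the runtime figures quoted above (the small difference from the reported 14.4x presumably reflects rounding of the published runtimes):

```python
# Runtimes reported in the abstract, in hours.
original_single_node_h = 84.5   # GATK best-practices pipeline on one 36-core node
halvade_single_node_h = 19.5    # Halvade Somatic on the same node
halvade_16_nodes_h = 1.36       # Halvade Somatic scaled to 16 nodes

def speedup(baseline_h, optimized_h):
    """Ratio of baseline runtime to optimized runtime."""
    return baseline_h / optimized_h

# Single-node speedup: 84.5 / 19.5 ~ 4.3x, matching the abstract.
print(f"{speedup(original_single_node_h, halvade_single_node_h):.1f}x")
# Multi-node speedup over single-node Halvade: 19.5 / 1.36 ~ 14.3x.
print(f"{speedup(halvade_single_node_h, halvade_16_nodes_h):.1f}x")
```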

CONCLUSIONS: To our knowledge, Halvade Somatic is the first somatic variant calling pipeline that leverages Big Data processing platforms and provides reliable, scalable performance. Source code is freely available.

RevDate: 2022-01-13

Feldman D, Funk L, Le A, et al (2022)

Pooled genetic perturbation screens with image-based phenotypes.

Nature protocols [Epub ahead of print].

Discovery of the genetic components underpinning fundamental and disease-related processes is being rapidly accelerated by combining efficient, programmable genetic engineering with phenotypic readouts of high spatial, temporal and/or molecular resolution. Microscopy is a fundamental tool for studying cell biology, but its lack of high-throughput sequence readouts hinders integration in large-scale genetic screens. Optical pooled screens using in situ sequencing provide massively scalable integration of barcoded lentiviral libraries (e.g., CRISPR perturbation libraries) with high-content imaging assays, including dynamic processes in live cells. The protocol uses standard lentiviral vectors and molecular biology, providing single-cell resolution of phenotype and engineered genotype, scalability to millions of cells and accurate sequence reads sufficient to distinguish >10^6 perturbations. In situ amplification takes ~2 d, while sequencing can be performed in ~1.5 h per cycle. The image analysis pipeline provided enables fully parallel automated sequencing analysis using a cloud or cluster computing environment.

RevDate: 2022-01-12

Maniyar CB, Kumar A, DR Mishra (2022)

Continuous and Synoptic Assessment of Indian Inland Waters for Harmful Algae Blooms.

Harmful algae, 111:102160.

Cyanobacterial Harmful Algal Blooms (CyanoHABs) are progressively becoming a major water quality, socioeconomic, and health hazard worldwide. In India, there are frequent episodes of severe CyanoHABs, which are left untreated due to a lack of awareness and monitoring infrastructure, gravely affecting the country's economy. In this study, for the first time, we present a country-wide analysis of CyanoHABs in India by developing a novel interactive cloud-based dashboard called "CyanoKhoj" in Google Earth Engine (GEE), which uses Sentinel-3 Ocean and Land Colour Instrument (OLCI) remotely sensed datasets. The main goal of this study was to showcase the utility of CyanoKhoj for rapid monitoring and discuss the widespread CyanoHABs problem across India. We demonstrate the utility of CyanoKhoj with select case studies of lakes and reservoirs geographically spread across five states: Bargi and Gandhisagar Dams in Madhya Pradesh, Hirakud Reservoir in Odisha, Ukai Dam in Gujarat, Linganamakki Reservoir in Karnataka, and Pulicat Lake in Tamil Nadu. These sites were studied from September to November 2018 using CyanoKhoj, which is capable of near-real-time monitoring and country-wide assessment of CyanoHABs. We used CyanoKhoj to prepare spatiotemporal maps of Chlorophyll-a (Chl-a) content and Cyanobacterial Cell Density (CCD) to study the local spread of the CyanoHABs and their phenology in these waterbodies. A first-ever all-India CCD map is also presented for the year 2018, which highlights the spatial spread of CyanoHABs throughout the country (32 large waterbodies across India with severe bloom: CCD > 2,500,000). Results indicate that CyanoHABs are most prevalent in nutrient-rich waterbodies prone to industrial and other nutrient-rich discharges. A clear temporal evolution of the blooms showed that they are dominant during the post-monsoon season (September-October), when nutrient concentrations in the waterbodies are at their peak, and begin to decline towards winter (November-December). CyanoKhoj is an open-source tool that can have a significant broader impact in mapping CyanoHABs not only throughout cyanobacteria data-scarce India, but on a global level using archived and future Sentinel-3A/B OLCI data.
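The abstract does not give CyanoKhoj's exact Chl-a/CCD algorithms; as a generic illustration of the kind of per-pixel band math such a GEE dashboard performs, the sketch below computes the Normalized Difference Chlorophyll Index (NDCI) from the OLCI red (~665 nm) and red-edge (~708 nm) bands on toy reflectance arrays. The band choice, reflectance values, and bloom threshold are all assumptions for illustration, not the paper's published method:

```python
import numpy as np

# Toy reflectance rasters for two Sentinel-3 OLCI bands (values are made up).
r665 = np.array([[0.02, 0.03], [0.04, 0.02]])   # red band (~665 nm)
r708 = np.array([[0.05, 0.03], [0.02, 0.06]])   # red-edge band (~708 nm)

# NDCI: normalized difference of red-edge and red reflectance, per pixel.
ndci = (r708 - r665) / (r708 + r665)

# Flag pixels above an illustrative (assumed) bloom threshold.
bloom_mask = ndci > 0.1
print(ndci.round(3))
print(bloom_mask)
```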

RevDate: 2022-01-11

Saxena D, AK Singh (2022)

OFP-TM: an online VM failure prediction and tolerance model towards high availability of cloud computing environments.

The Journal of supercomputing pii:4235 [Epub ahead of print].

The indispensable collaboration of cloud computing in every digital service has raised its resource usage exponentially. The ever-growing demand for cloud resources undermines service availability, leading to critical challenges such as cloud outages, SLA violations, and excessive power consumption. Previous approaches have addressed this problem by utilizing multiple cloud platforms or running multiple replicas of a Virtual Machine (VM), resulting in high operational cost. This paper addresses this alarming problem from a different perspective by proposing a novel Online virtual machine Failure Prediction and Tolerance Model (OFP-TM) with high availability awareness embedded in physical machines as well as virtual machines. The failure-prone VMs are estimated in real-time based on their future resource usage by developing an ensemble approach-based resource predictor. These VMs are assigned to a failure tolerance unit comprising a resource provision matrix and a Selection Box (S-Box) mechanism, which triggers the migration of failure-prone VMs and handles any outage beforehand while maintaining the desired level of availability for cloud users. The proposed model is evaluated and compared against existing related approaches by simulating a cloud environment and executing several experiments using a real-world Google Cluster workload dataset. Consequently, it has been concluded that OFP-TM improves availability and scales down the number of live VM migrations by up to 33.5% and 83.3%, respectively, compared with the same environment without OFP-TM.
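The abstract does not detail the ensemble predictor; the sketch below is a minimal illustration of the general idea of flagging failure-prone VMs from predicted resource usage, using a naive moving-average forecast and an assumed failure threshold (all names and numbers are illustrative, not from the paper):

```python
from statistics import mean

FAILURE_THRESHOLD = 0.9   # assumed fraction of capacity
WINDOW = 3                # samples in the forecasting window

def predict_next(usage_history, window=WINDOW):
    """Naive moving-average forecast of the next usage sample."""
    return mean(usage_history[-window:])

def failure_prone(vms):
    """Names of VMs whose predicted usage crosses the failure threshold."""
    return [name for name, hist in vms.items()
            if predict_next(hist) > FAILURE_THRESHOLD]

vms = {
    "vm-a": [0.55, 0.60, 0.58],   # stable utilization
    "vm-b": [0.85, 0.93, 0.97],   # trending toward failure -> migrate proactively
}
print(failure_prone(vms))   # → ['vm-b']
```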

RevDate: 2022-01-11

Syed SA, Rashid M, Hussain S, et al (2022)

QoS Aware and Fault Tolerance Based Software-Defined Vehicular Networks Using Cloud-Fog Computing.

Sensors (Basel, Switzerland), 22(1): pii:s22010401.

Software-defined networks (SDN) and vehicular ad-hoc networks (VANET) combine to form a software-defined vehicular network (SDVN). To increase the quality of service (QoS) of vehicle communication and to make the overall process efficient, researchers are working on VANET communication systems. Current research has made many strides, but it needs further investigation owing to the following limitations: cloud computing is used for message/task execution instead of fog computing, which increases response time, and a fault tolerance mechanism is needed to reduce the task/message failure ratio. We propose QoS-aware and fault-tolerance-based software-defined vehicular networks using cloud-fog computing (QAFT-SDVN) to address the above issues, with heuristic algorithms to overcome these limitations. The proposed model receives vehicle messages through SDN nodes placed on fog nodes. SDN controllers receive messages from nearby SDN units and prioritize them in two different ways: one by the nature of the message, the other by message deadline and size. The SDN controller categorizes messages into safety and non-safety messages and forwards them to the destination. After sending messages to their destination, we check their acknowledgment; if the destination receives the messages, no action is taken; otherwise, a fault tolerance mechanism resends the messages. The proposed model is implemented in CloudSim and iFogSim and compared with the latest models. The results show that our proposed model decreased the response time of safety and non-safety messages by 50% by using fog nodes for the SDN controller. Furthermore, we reduced the execution time of safety and non-safety messages by up to 4%. Similarly, compared with the latest models, we reduced the task failure ratio by 20%, 15%, 23.3%, and 22.5%.
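The two-way prioritization described above (message nature first, then deadline and size) can be sketched as a priority queue; the field names and values below are assumptions for illustration, not the paper's implementation:

```python
import heapq

def priority(msg):
    # Lower tuples sort first: safety messages (0) before non-safety (1),
    # then earlier deadline, then smaller size as a tie-breaker.
    return (0 if msg["safety"] else 1, msg["deadline_ms"], msg["size_kb"])

messages = [
    {"id": "m1", "safety": False, "deadline_ms": 50,  "size_kb": 4},
    {"id": "m2", "safety": True,  "deadline_ms": 120, "size_kb": 2},
    {"id": "m3", "safety": True,  "deadline_ms": 40,  "size_kb": 8},
]

# Build a heap keyed by priority and dispatch in order.
queue = [(priority(m), m["id"]) for m in messages]
heapq.heapify(queue)
order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)   # → ['m3', 'm2', 'm1']
```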

RevDate: 2022-01-11

Loke CH, Adam MS, Nordin R, et al (2021)

Physical Distancing Device with Edge Computing for COVID-19 (PADDIE-C19).

Sensors (Basel, Switzerland), 22(1): pii:s22010279.

The most effective methods of preventing COVID-19 infection include maintaining physical distancing and wearing a face mask while in close contact with people in public places. However, densely populated areas have a greater incidence of COVID-19 dissemination, caused by people who do not comply with standard operating procedures (SOPs). This paper presents a prototype called PADDIE-C19 (Physical Distancing Device with Edge Computing for COVID-19) that implements physical distancing monitoring on a low-cost edge computing device. The PADDIE-C19 provides real-time results and responses, as well as notifications and warnings, to anyone who violates the 1-m physical distance rule. In addition, PADDIE-C19 includes temperature screening using an MLX90614 thermometer and ultrasonic sensors to restrict the number of people on specified premises. The Neural Network Processor (KPU) in the Grove Artificial Intelligence Hardware Attached on Top (AI HAT), an edge computing unit, is used to accelerate the neural network model for person detection and achieves up to 18 frames per second (FPS). The results show that the accuracy of person detection with the Grove AI HAT reaches 74.65% and the average absolute error between measured and actual physical distance is 8.95 cm. Furthermore, the MLX90614 thermometer agrees to within 0.5 °C with the more common Fluke 59 thermometer. Experimental results also showed that, compared with cloud computing, edge computing on the Grove AI HAT achieves an average of 18 FPS for the person detector (kmodel) with an average execution time of 56 ms across different networks, regardless of the network connection type or speed.

RevDate: 2022-01-11

Ojo MO, Viola I, Baratta M, et al (2021)

Practical Experiences of a Smart Livestock Location Monitoring System Leveraging GNSS, LoRaWAN and Cloud Services.

Sensors (Basel, Switzerland), 22(1): pii:s22010273.

Livestock farming is, in most cases in Europe, unsupervised, making it difficult to ensure adequate control of the animals' position for the improvement of animal welfare. In addition, the geographical areas involved in livestock grazing usually have difficult access, harsh orography, and a lack of communications infrastructure; providing a low-power livestock localization and monitoring system is therefore of paramount importance, crucial not only for sustainable agriculture but also for the protection of native breeds and meats through their controlled supervision. In this context, this work presents an Internet of Things (IoT)-based system integrating low-power wide area (LPWA) technology, cloud, and virtualization services to provide real-time livestock location monitoring. Taking into account the constraints imposed by the environment in terms of energy supply and network connectivity, our proposed system is based on a wearable device equipped with inertial sensors, a Global Positioning System (GPS) receiver, and a LoRaWAN transceiver, which provides a satisfactory compromise between performance, cost, and energy consumption. First, this article reviews state-of-the-art localization techniques and technologies applied to smart livestock. We then describe the hardware and firmware co-design used to achieve very low energy consumption, with a significant positive impact on battery life. The proposed platform was evaluated in a pilot test in the northern part of Italy, with different configurations in terms of sampling period, experiment duration, and number of devices. The results are analyzed and discussed with respect to packet delivery ratio, energy consumption, localization accuracy, battery discharge, and delay.

RevDate: 2022-01-11

Forcén-Muñoz M, Pavón-Pulido N, López-Riquelme JA, et al (2021)

Irriman Platform: Enhancing Farming Sustainability through Cloud Computing Techniques for Irrigation Management.

Sensors (Basel, Switzerland), 22(1): pii:s22010228.

Crop sustainability is essential for balancing economic development and environmental care, particularly in strong and very competitive regions in the agri-food sector, such as the Region of Murcia in Spain, considered the orchard of Europe despite being a semi-arid area with a significant scarcity of fresh water. In this region, farmers apply efficient techniques to minimize supplies and maximize quality and productivity; however, the effects of climate change and the degradation of significant natural environments, such as the "Mar Menor", the largest saltwater lagoon in Europe, threatened by resource overexploitation, motivate the search for even better irrigation management techniques to avoid effects that could damage the quaternary aquifer connected to the lagoon. This paper describes the Irriman Platform, a system based on cloud computing techniques, which includes low-cost wireless data loggers capable of acquiring data from a wide range of agronomic sensors, and a novel software architecture for safely storing and processing such information, making crop monitoring and irrigation management easier. The proposed platform helps agronomists optimize irrigation procedures through a usable web-based tool that allows them to elaborate irrigation plans and evaluate their effectiveness over crops. The system has been deployed across a large number of representative crops covering nearly 50,000 ha, during several phenological cycles. Results demonstrate that the system enables crop monitoring and irrigation optimization and makes interaction between farmers and agronomists easier.

RevDate: 2022-01-11

Angel NA, Ravindran D, Vincent PMDR, et al (2021)

Sensors (Basel, Switzerland), 22(1): pii:s22010196.

Cloud computing has become integral lately due to the ever-expanding Internet-of-Things (IoT) network. It remains the best practice for implementing complex computational applications that emphasize the massive processing of data. However, the cloud falls short due to the critical constraints of novel IoT applications, which generate vast data and demand swift response times with improved privacy. The newest trend is moving computational and storage resources to the edge of the network, involving a decentralized distributed architecture. Data processing and analytics are performed in proximity to end-users, overcoming the bottleneck of cloud computing. The trend of deploying machine learning (ML) at the network edge to enhance computing applications and services has gained momentum lately, specifically to reduce latency and energy consumption while optimizing the security and management of resources. There is a need for rigorous research efforts oriented towards developing and implementing machine learning algorithms that deliver the best results in terms of speed, accuracy, storage, and security, with low power consumption. This extensive survey of the prominent computing paradigms in practice highlights the latest innovations resulting from the fusion between ML and the evolving computing paradigms and discusses the underlying open research challenges and future prospects.

RevDate: 2022-01-11

Quezada-Gaibor D, Torres-Sospedra J, Nurmi J, et al (2021)

Cloud Platforms for Context-Adaptive Positioning and Localisation in GNSS-Denied Scenarios-A Systematic Review.

Sensors (Basel, Switzerland), 22(1): pii:s22010110.

Cloud Computing and Cloud Platforms have become an essential resource for businesses, due to their advanced capabilities, performance, and functionalities. Data redundancy, scalability, and security, are among the key features offered by cloud platforms. Location-Based Services (LBS) often exploit cloud platforms to host positioning and localisation systems. This paper introduces a systematic review of current positioning platforms for GNSS-denied scenarios. We have undertaken a comprehensive analysis of each component of the positioning and localisation systems, including techniques, protocols, standards, and cloud services used in the state-of-the-art deployments. Furthermore, this paper identifies the limitations of existing solutions, outlining shortcomings in areas that are rarely subjected to scrutiny in existing reviews of indoor positioning, such as computing paradigms, privacy, and fault tolerance. We then examine contributions in the areas of efficient computation, interoperability, positioning, and localisation. Finally, we provide a brief discussion concerning the challenges for cloud platforms based on GNSS-denied scenarios.

RevDate: 2022-01-11

Ali A, Iqbal MM, Jamil H, et al (2021)

Multilevel Central Trust Management Approach for Task Scheduling on IoT-Based Mobile Cloud Computing.

Sensors (Basel, Switzerland), 22(1): pii:s22010108.

With the increasing number of mobile and IoT devices across a wide range of real-life applications, mobile cloud computing will soon be unable to cope with this growing load, which demands a shift to fog computing. Task scheduling is one of the most demanding problems, after trust computation among trustable nodes. Mobile and IoT devices offload resource-intensive tasks to mobile cloud computing, yet some tasks are resource-intensive and cannot be trusted for allocation to mobile cloud computing resources. This consequently gives rise to trust evaluation and data sync-up for devices joining and leaving the network. Resource demands are higher for cloud and mobile cloud computing, and time, energy, and resources are wasted on nontrustable nodes. This research article proposes a multilevel trust enhancement approach for efficient task scheduling in mobile cloud environments. We first determine the trustable tasks to be offloaded to mobile cloud computing. Then, an efficient, dynamic scheduler is added to enhance task scheduling after trust computation using social and environmental trust computation techniques. To assess the improvement in time and energy efficiency of IoT and mobile devices under the proposed technique, the energy and time-request computations are compared with existing methods from the literature, showing improved results. Our proposed approach is centralized to handle constant sync-ups of incoming devices' trust values with mobile cloud computing. With the benefits of mobile cloud computing, the centralized data distribution method is a positive approach.

RevDate: 2022-01-11

Rocha-Jácome C, Carvajal RG, Chavero FM, et al (2021)

Industry 4.0: A Proposal of Paradigm Organization Schemes from a Systematic Literature Review.

Sensors (Basel, Switzerland), 22(1): pii:s22010066.

Currently, the concept of Industry 4.0 is well known; however, it is extremely complex, as it is constantly evolving and innovating. It includes the participation of many disciplines and areas of knowledge as well as the integration of many technologies, both mature and emerging, but working in collaboration and relying on their study and implementation under the novel criteria of Cyber-Physical Systems. This study starts with an exhaustive search for updated scientific information of which a bibliometric analysis is carried out with results presented in different tables and graphs. Subsequently, based on the qualitative analysis of the references, we present two proposals for the schematic analysis of Industry 4.0 that will help academia and companies to support digital transformation studies. The results will allow us to perform a simple alternative analysis of Industry 4.0 to understand the functions and scope of the integrating technologies to achieve a better collaboration of each area of knowledge and each professional, considering the potential and limitations of each one, supporting the planning of an appropriate strategy, especially in the management of human resources, for the successful execution of the digital transformation of the industry.

RevDate: 2022-01-10

Goudarzi A, G Moya-Galé (2021)

Automatic Speech Recognition in Noise for Parkinson's Disease: A Pilot Study.

Frontiers in artificial intelligence, 4:809321.

The sophistication of artificial intelligence (AI) technologies has significantly advanced in the past decade. However, the observed unpredictability and variability of AI behavior in noisy signals is still underexplored and represents a challenge when trying to generalize AI behavior to real-life environments, especially for people with a speech disorder, who already experience reduced speech intelligibility. In the context of developing assistive technology for people with Parkinson's disease using automatic speech recognition (ASR), this pilot study reports on the performance of Google Cloud speech-to-text technology with dysarthric and healthy speech in the presence of multi-talker babble noise at different intensity levels. Despite sensitivities and shortcomings, it is possible to control the performance of these systems with current tools in order to measure speech intelligibility in real-life conditions.

RevDate: 2022-01-10

Almusallam N, Alabdulatif A, F Alarfaj (2021)

Analysis of Privacy-Preserving Edge Computing and Internet of Things Models in Healthcare Domain.

Computational and mathematical methods in medicine, 2021:6834800.

The healthcare sector is rapidly being transformed to one that operates in new computing environments. With researchers increasingly committed to finding and expanding healthcare solutions to include the Internet of Things (IoT) and edge computing, there is a need to monitor more closely than ever the data being collected, shared, processed, and stored. The advent of cloud, IoT, and edge computing paradigms poses huge risks towards the privacy of data, especially, in the healthcare environment. However, there is a lack of comprehensive research focused on seeking efficient and effective solutions that ensure data privacy in the healthcare domain. The data being collected and processed by healthcare applications is sensitive, and its manipulation by malicious actors can have catastrophic repercussions. This paper discusses the current landscape of privacy-preservation solutions in IoT and edge healthcare applications. It describes the common techniques adopted by researchers to integrate privacy in their healthcare solutions. Furthermore, the paper discusses the limitations of these solutions in terms of their technical complexity, effectiveness, and sustainability. The paper closes with a summary and discussion of the challenges of safeguarding privacy in IoT and edge healthcare solutions which need to be resolved for future applications.

RevDate: 2022-01-10

Wang S, Hou Y, Li X, et al (2021)

Practical Implementation of Artificial Intelligence-Based Deep Learning and Cloud Computing on the Application of Traditional Medicine and Western Medicine in the Diagnosis and Treatment of Rheumatoid Arthritis.

Frontiers in pharmacology, 12:765435 pii:765435.

Rheumatoid arthritis (RA), an autoimmune disease of unknown etiology, is a serious threat to the health of middle-aged and elderly people. Although western medicine and traditional medicine, such as traditional Chinese medicine, Tibetan medicine, and other ethnic medicine, have shown certain advantages in the diagnosis and treatment of RA, there are still some practical shortcomings, such as delayed diagnosis, improper treatment schemes, and unclear drug mechanisms. At present, the application of artificial intelligence (AI)-based deep learning and cloud computing has aroused wide attention in the medical and health field, especially for screening potential active ingredients, targets, and action pathways of single drugs or prescriptions in traditional medicine and for optimizing disease diagnosis and treatment models. Integrated information and analysis of RA patients based on AI and medical big data will unquestionably benefit more RA patients worldwide. In this review, we elaborate on the application status and prospects of AI-assisted deep learning and cloud computing in western and traditional medicine for the diagnosis and treatment of RA at different stages. It can be predicted that, with the help of AI, more pharmacological mechanisms of effective ethnic drugs against RA will be elucidated and more accurate solutions will be provided for the treatment and diagnosis of RA in the future.

RevDate: 2022-01-10

Bai Y, Liu Q, Wu W, et al (2021)

cuSCNN: A Secure and Batch-Processing Framework for Privacy-Preserving Convolutional Neural Network Prediction on GPU.

Frontiers in computational neuroscience, 15:799977.

The emerging topic of privacy-preserving deep learning as a service has attracted increasing attention in recent years, focusing on building an efficient and practical neural network prediction framework that secures client and model-holder data privately on the cloud. In such a task, the time cost of performing the secure linear layers is expensive, where matrix multiplication is the atomic operation. Most existing mix-based solutions heavily emphasize employing BGV-based homomorphic encryption schemes to secure the linear layer on the CPU platform. However, they suffer efficiency and energy losses when dealing with larger-scale datasets, due to the complicated encoding methods and intractable ciphertext operations. To address this, we propose cuSCNN, a secure and efficient framework to perform the privacy-preserving prediction task of a convolutional neural network (CNN), which can flexibly run on the GPU platform. Its main idea is 2-fold: (1) To avoid the trivial and complicated homomorphic matrix computations brought by BGV-based solutions, it adopts GSW-based homomorphic matrix encryption to efficiently enable the linear layers of the CNN, a naive method to secure matrix computation operations. (2) To improve computation efficiency on the GPU, a hybrid optimization approach based on CUDA (Compute Unified Device Architecture) is proposed to improve the parallelism level and memory access speed of matrix multiplication on the GPU. Extensive experiments conducted on industrial datasets show the superior performance of the proposed cuSCNN framework in terms of runtime and power consumption compared to other frameworks.

RevDate: 2022-01-10

Zhu L, Wang C, He Z, et al (2021)

A lightweight automatic sleep staging method for children using single-channel EEG based on edge artificial intelligence.

World wide web pii:983 [Epub ahead of print].

With the development of telemedicine and edge computing, edge artificial intelligence (AI) will become a new development trend for smart medicine. On the other hand, nearly one-third of children suffer from sleep disorders, yet all existing sleep staging methods are designed for adults. Therefore, we adapted edge AI to develop a lightweight automatic sleep staging method for children using single-channel EEG. The trained sleep staging model is deployed to edge smart devices so that sleep staging can be implemented on edge devices, which greatly saves network resources and improves the performance and privacy of the sleep staging application. The results and hypnogram are then uploaded to the cloud server for further analysis by physicians to produce sleep disease diagnosis reports and treatment opinions. We utilized 1D convolutional neural networks (1D-CNN) and long short-term memory (LSTM) to build our sleep staging model, named CSleepNet. We tested the model on our children's sleep (CS) dataset and the Sleep-EDFX dataset. For the CS dataset, we experimented with F4-M1 channel EEG using four different loss functions, and log-cosh performed best, with an overall accuracy of 83.06% and an F1-score of 76.50%. We used Fpz-Cz and Pz-Oz channel EEG to train our model on the Sleep-EDFX dataset and achieved an accuracy of 86.41% without manual feature extraction. The experimental results show that our method has great potential: it not only plays an important role in sleep-related research, but can also be widely used in the classification of other time-series physiological signals.

RevDate: 2022-01-10

Peng Y, Liu E, Peng S, et al (2022)

Using artificial intelligence technology to fight COVID-19: a review.

Artificial intelligence review pii:10106 [Epub ahead of print].

In late December 2019, a new type of coronavirus was discovered, later named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Since its discovery, the virus has spread globally, with 2,975,875 deaths as of 15 April 2021, and has had a huge impact on our health systems and economy. Suppressing the continued spread of COVID-19 is the main task of many scientists and researchers. The introduction of artificial intelligence technology has made a huge contribution to the suppression of the new coronavirus. Through extensive literature research, this article discusses the main applications of artificial intelligence technology in the suppression of the coronavirus from the three major aspects of identification, prediction, and development, and puts forward the current main challenges and possible development directions. The results show that combining artificial intelligence technology with a variety of new technologies to predict and identify COVID-19 patients is an effective measure.

RevDate: 2022-01-09

Elnashar A, Zeng H, Wu B, et al (2022)

Assessment of environmentally sensitive areas to desertification in the Blue Nile Basin driven by the MEDALUS-GEE framework.

The Science of the total environment pii:S0048-9697(22)00014-6 [Epub ahead of print].

Assessing environmentally sensitive areas (ESA) to desertification and understanding their primary drivers are necessary for applying targeted management practices to combat land degradation at the basin scale. We have developed the MEditerranean Desertification And Land Use framework in the Google Earth Engine cloud platform (MEDALUS-GEE) to map and assess the ESA index at 300 m grids in the Blue Nile Basin (BNB). The ESA index was derived from elaborating 19 key indicators representing soil, climate, vegetation, and management through the geometric mean of their sensitivity scores. The results showed that 43.4%, 28.8%, and 70.4% of the entire BNB, Upper BNB, and Lower BNB, respectively, are highly susceptible to desertification, indicating appropriate land and water management measures should be urgently implemented. Our findings also showed that the main land degradation drivers are moderate to intensive cultivation across the BNB, high slope gradient and water erosion in the Upper BNB, and low soil organic matter and vegetation cover in the Lower BNB. The study presented an integrated monitoring and assessment framework for understanding desertification processes to help achieve land-related sustainable development goals.
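The geometric-mean aggregation used to derive the ESA index can be sketched directly; the four indicator scores below are hypothetical stand-ins for the paper's 19 indicators, each scored on the usual MEDALUS-style 1-2 sensitivity scale:

```python
import math

def esa_index(scores):
    """Geometric mean of indicator sensitivity scores (higher = more sensitive)."""
    if any(s <= 0 for s in scores):
        raise ValueError("sensitivity scores must be positive")
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Hypothetical sensitivity scores for a single 300 m grid cell
# (e.g. soil, climate, vegetation, and management quality indices).
cell_scores = [1.6, 1.4, 1.7, 1.3]
print(round(esa_index(cell_scores), 3))
```

The geometric mean (rather than an arithmetic mean) keeps a single very low score from being masked by high scores elsewhere, which suits a sensitivity index.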

RevDate: 2022-01-08

Alrebdi N, Alabdulatif A, Iwendi C, et al (2022)

SVBE: searchable and verifiable blockchain-based electronic medical records system.

Scientific reports, 12(1):266.

Central management of electronic medical systems faces a major challenge because it requires trust in a single entity that cannot effectively protect files from unauthorized access or attacks. This challenge makes it difficult to provide some needed services in central electronic medical systems, such as file search and verification. This gap motivated us to develop a blockchain-based system with several characteristics: decentralization, security, anonymity, immutability, and tamper resistance. The proposed system provides several services: storage, verification, and search. The system consists of a smart contract that connects to a decentralized user application through which users can transact with the system. In addition, the system uses the InterPlanetary File System (IPFS) and cloud computing to store patients' data and files. Experimental results and a security analysis show that the system performs search and verification tasks securely and quickly over the network.
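The verification service rests on content addressing: the ledger stores an immutable fingerprint while the file itself lives in IPFS or cloud storage. A minimal sketch of that idea, using a plain SHA-256 digest in place of an IPFS content identifier (the record contents below are invented, not from the paper):

```python
import hashlib

def content_hash(data: bytes) -> str:
    # In IPFS, files are addressed by a hash of their content (a CID);
    # a plain SHA-256 digest stands in for that here.
    return hashlib.sha256(data).hexdigest()

# "On-chain" record: only the immutable fingerprint is stored on the ledger.
record = content_hash(b"patient-123: blood panel 2021-11-02")

def verify(fetched: bytes, on_chain_digest: str) -> bool:
    # Anyone can re-hash the fetched file and compare it with the ledger entry.
    return content_hash(fetched) == on_chain_digest

print(verify(b"patient-123: blood panel 2021-11-02", record))  # True
print(verify(b"patient-123: blood panel TAMPERED", record))    # False
```

Because the digest is fixed once written, any later modification of the stored file is detectable without trusting the storage provider.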

RevDate: 2022-01-08

Li L, Zhang Y, Q Geng (2021)

Mean-square bounded consensus of nonlinear multi-agent systems under deception attack.

ISA transactions pii:S0019-0578(21)00637-6 [Epub ahead of print].

This paper studies mean-square bounded consensus for a nonlinear multi-agent system subjected to randomly occurring deception attacks and to process and measurement noises. Considering that measurements may be tampered with by the attacker, an estimator is presented to obtain relatively accurate state estimates, with its gain computed by a recursive algorithm. On this basis, a centralized controller is designed in combination with a cloud computing system. Moreover, from the perspective of the defender, a detector is proposed at the agent side to detect whether the current actuator input is under attack. Using linear matrix inequalities, sufficient conditions are given for achieving mean-square bounded consensus and an upper bound is derived. Finally, the validity of the proposed method is illustrated via two simulation examples.

RevDate: 2022-01-06

Cresswell K, Domínguez Hernández A, Williams R, et al (2022)

Key Challenges and Opportunities for Cloud Technology in Health Care: Semistructured Interview Study.

JMIR human factors, 9(1):e31246 pii:v9i1e31246.

BACKGROUND: The use of cloud computing (involving storage and processing of data on the internet) in health care has increasingly been highlighted as having great potential in facilitating data-driven innovations. Although some provider organizations are reaping the benefits of using cloud providers to store and process their data, others are lagging behind.

OBJECTIVE: We aim to explore the existing challenges and barriers to the use of cloud computing in health care settings and investigate how perceived risks can be addressed.

METHODS: We conducted a qualitative case study of cloud computing in health care settings, interviewing a range of individuals with perspectives on supply, implementation, adoption, and integration of cloud technology. Data were collected through a series of in-depth semistructured interviews exploring current applications, implementation approaches, challenges encountered, and visions for the future. The interviews were transcribed and thematically analyzed using NVivo 12 (QSR International). We coded the data based on a sociotechnical coding framework developed in related work.

RESULTS: We interviewed 23 individuals between September 2020 and November 2020, including professionals working across major cloud providers, health care provider organizations, innovators, small and medium-sized software vendors, and academic institutions. The participants were united by a common vision of a cloud-enabled ecosystem of applications and by drivers surrounding data-driven innovation. The identified barriers to progress included the cost of data migration and skill gaps to implement cloud technologies within provider organizations, the cultural shift required to move to externally hosted services, a lack of user pull as many benefits were not visible to those providing frontline care, and a lack of interoperability standards and central regulations.

CONCLUSIONS: Implementations need to be viewed as a digitally enabled transformation of services, driven by skill development, organizational change management, and user engagement, to facilitate the implementation and exploitation of cloud-based infrastructures and to maximize returns on investment.

RevDate: 2022-01-06

Fang Q, S Yan (2022)

MCX Cloud-a modern, scalable, high-performance and in-browser Monte Carlo simulation platform with cloud computing.

Journal of biomedical optics, 27(8):.

SIGNIFICANCE: Despite the ample progress made toward faster and more accurate Monte Carlo (MC) simulation tools over the past decade, the limited usability and accessibility of these advanced modeling tools remain key barriers to widespread use among the broad user community.

AIM: An open-source, high-performance, web-based MC simulator that builds upon modern cloud computing architectures is highly desirable to deliver state-of-the-art MC simulations and hardware acceleration to general users without the need for special hardware installation and optimization.

APPROACH: We have developed a configuration-free, in-browser 3D MC simulation platform-Monte Carlo eXtreme (MCX) Cloud-built upon an array of robust and modern technologies, including a Docker Swarm-based cloud-computing backend and a web-based graphical user interface (GUI) that supports in-browser 3D visualization, asynchronous data communication, and automatic data validation via JavaScript Object Notation (JSON) schemas.

RESULTS: The front-end of the MCX Cloud platform offers an intuitive simulation design, fast 3D data rendering, and convenient simulation sharing. The Docker Swarm container orchestration backend is highly scalable and can support high-demand GPU MC simulations using MCX over a dynamically expandable virtual cluster.

CONCLUSION: MCX Cloud makes fast, scalable, and feature-rich MC simulations readily available to all biophotonics researchers without overhead. It is fully open-source and can be freely accessed at http://mcx.space/cloud.

RevDate: 2022-01-06

Alsuhibany SA, Abdel-Khalek S, Algarni A, et al (2021)

Ensemble of Deep Learning Based Clinical Decision Support System for Chronic Kidney Disease Diagnosis in Medical Internet of Things Environment.

Computational intelligence and neuroscience, 2021:4931450.

Recently, Internet of Things (IoT) and cloud computing (CC) environments have become commonly employed in several healthcare applications through the integration of monitoring devices such as sensors and medical gadgets for observing remote patients. To deliver improved healthcare services, the huge volume of data generated by IoT gadgets in the medical field can be analyzed in the CC environment rather than relying on limited processing and storage resources. At the same time, early identification of chronic kidney disease (CKD) is essential to significantly reduce the mortality rate. This study develops an ensemble of deep learning based clinical decision support systems (EDL-CDSS) for CKD diagnosis in the IoT environment. The goal of the EDL-CDSS technique is to detect and classify different stages of CKD using medical data collected by IoT devices and benchmark repositories. In addition, the EDL-CDSS technique involves the Adaptive Synthetic (ADASYN) technique for the outlier detection process. Moreover, an ensemble of three models, namely deep belief network (DBN), kernel extreme learning machine (KELM), and convolutional neural network with gated recurrent unit (CNN-GRU), is employed. Finally, the quasi-oppositional butterfly optimization algorithm (QOBOA) is used for hyperparameter tuning of the DBN and CNN-GRU models. A wide range of simulations was carried out and the outcomes were studied in terms of distinct measures; a brief analysis highlighted the superiority of the EDL-CDSS technique over existing approaches.

RevDate: 2022-01-05

Perkel JM (2022)

Terra takes the pain out of 'omics' computing in the cloud.

Nature, 601(7891):154-155.

RevDate: 2022-01-03

Ha LT (2022)

Are digital business and digital public services a driver for better energy security? Evidence from a European sample.

Environmental science and pollution research international [Epub ahead of print].

This paper empirically analyses the impacts of the digital transformation process in the business and public sectors on energy security (ES). We employ 8 indicators to represent four aspects of energy security: availability, acceptability, develop-ability, and sustainability. Digital business development is captured by e-Commerce (including e-Commerce sales, e-Commerce turnover, and e-Commerce web sales) and e-Business (including customer relation management (CRM) usage and cloud usage). Digital public services development is reflected by business mobility and key enablers. Different econometric techniques are applied to a database of 24 European Union countries from 2011 to 2019. Our estimation results demonstrate that digital businesses play a critical role in improving the acceptability and develop-ability of energy security, while digitalization in public services supports achieving energy sustainability goals. The use of modern digital technology such as big data and cloud computing is extremely important for ensuring the security of the energy system, especially the availability of energy. In a further discussion of the role of digital public services, we reveal a nonlinear association between digitalization in the public sector and energy intensity and energy consumption, suggesting that the acceptability and develop-ability of energy security can be enhanced once the digital transformation process reaches a certain level.

RevDate: 2022-01-04

Hussain AA, Bouachir O, Al-Turjman F, et al (2020)

AI Techniques for COVID-19.

IEEE access : practical innovations, open solutions, 8:128776-128795.

Artificial Intelligence (AI) aims to extend human capabilities. It is gaining a foothold in healthcare, fueled by the growing availability of clinical data and the rapid progress of intelligent techniques. Motivated by the need to employ AI in battling the COVID-19 crisis, this survey summarizes the current state of AI applications in clinical services during COVID-19. Furthermore, we highlight the application of big data in understanding the virus. We also review the various intelligent techniques and methods that can be applied to different types of medical information in a pandemic. We classify the existing AI techniques for clinical data analysis, including neural networks, classical SVMs, and edge-based deep learning. Emphasis is also placed on work that applies AI-oriented cloud computing to combating viruses similar to COVID-19. This survey is an attempt to help medical practitioners and researchers overcome the difficulties they face in handling COVID-19 big data. The investigated techniques advance medical data analysis with an accuracy of up to 90%. We end with a detailed discussion of how AI implementation can be a major advantage in combating similar viruses.

RevDate: 2022-01-04

Wang B, L Xu (2021)

Construction of the "Internet Plus" Community Smart Elderly Care Service Platform.

Journal of healthcare engineering, 2021:4310648.

With the rapid development of China's market economy and the increasing trend of population aging, the traditional community elderly care service model has exposed more and more problems, such as the imbalance between supply and demand, single service, and lack of flexibility. In response to these issues, this research attempts to explore the possible paths and practical challenges of applying the Internet, Internet of Things, mobile networks, big data, and cloud computing to community elderly care services. This research believes that the construction of the "Internet Plus" community smart elderly care services platform is a general trend. Innovating the traditional community elderly care service model is conducive to fully integrating elderly care resources and improving the quality of elderly care services.

RevDate: 2022-01-04
CmpDate: 2022-01-04

Abd Elaziz M, Abualigah L, Ibrahim RA, et al (2021)

IoT Workflow Scheduling Using Intelligent Arithmetic Optimization Algorithm in Fog Computing.

Computational intelligence and neuroscience, 2021:9114113.

Instead of the cloud, Internet of Things (IoT) activities are offloaded to fog computing to boost the quality of service (QoS) needed by many applications. However, the availability of continuous computing resources on fog servers is one of the restrictions for IoT applications, since transmitting the large amount of data generated by IoT devices would create network traffic and increase computational overhead. Task scheduling is therefore the main problem that needs to be solved efficiently. This study proposes an energy-aware model using an enhanced arithmetic optimization algorithm (AOA), called AOAM, which addresses fog computing's job scheduling problem to maximize users' QoS by minimizing the makespan measure. In the proposed AOAM, we enhance the conventional AOA's search ability with the marine predators algorithm (MPA) search operators to address solution diversity and local-optimum problems. The proposed AOAM is validated using several parameters, including various clients, data centers, hosts, virtual machines, and tasks, and standard evaluation measures, including energy and makespan. The obtained results, compared with other state-of-the-art methods, show that AOAM is promising and solves task scheduling effectively.
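The makespan measure used to score schedules here is simply the finish time of the busiest machine. A minimal sketch with a greedy longest-processing-time baseline, the kind of simple heuristic a metaheuristic scheduler would try to improve on (task runtimes and node count below are assumed, not from the paper):

```python
def greedy_schedule(task_times, n_machines):
    """Longest-processing-time-first scheduling: assign each task, longest
    first, to the currently least-loaded machine."""
    loads = [0] * n_machines
    assignment = {}
    for task in sorted(range(len(task_times)), key=lambda i: -task_times[i]):
        m = min(range(n_machines), key=loads.__getitem__)  # least-loaded machine
        loads[m] += task_times[task]
        assignment[task] = m
    return assignment, max(loads)  # max load = makespan

# Hypothetical task runtimes (seconds) offloaded to two fog nodes.
tasks = [7, 4, 6, 3, 5, 2]
plan, span = greedy_schedule(tasks, 2)
print(span)  # makespan of the greedy plan: 14
```

An optimizer such as AOAM searches over assignments like `plan` to drive `span` down further than the greedy baseline manages.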

RevDate: 2021-12-28

Lee S, Yoon D, Yeo S, et al (2021)

Mitigating Cold Start Problem in Serverless Computing with Function Fusion.

Sensors (Basel, Switzerland), 21(24): pii:s21248416.

As Artificial Intelligence (AI) is becoming ubiquitous in many applications, serverless computing is also emerging as a building block for developing cloud-based AI services. Serverless computing has received much interest because of its simplicity, scalability, and resource efficiency. However, due to the trade-off with resource efficiency, serverless computing suffers from the cold start problem, that is, the latency between request arrival and function execution. The cold start problem significantly influences the overall response time of a workflow composed of functions, because a cold start may occur in every function within the workflow. Function fusion is one solution for mitigating the cold start latency of a workflow. If two functions are fused into a single function, the cold start of the second function is removed; however, if parallel functions are fused, the workflow response time can increase because the parallel functions then run sequentially, even though cold start latency is reduced. This study presents an approach to mitigate the cold start latency of a workflow using function fusion while considering parallel runs. First, we identify three latencies that affect response time, present a workflow response time model considering these latencies, and efficiently find a fusion solution that optimizes the response time under cold starts. Across five workflows, our method achieves a response time of 28-86% of that of the original workflow.
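The trade-off described above can be sketched with a toy response-time model; the cold-start and execution latencies below are assumed values for illustration, not measurements or the paper's actual model:

```python
COLD = 0.8  # assumed cold-start latency per function (seconds)
EXEC = 0.5  # assumed execution time per function (seconds)

def seq_unfused(n):
    # A sequential chain of n functions: each may pay its own cold start.
    return n * (COLD + EXEC)

def seq_fused(n):
    # Fusing the chain into one function leaves a single cold start.
    return COLD + n * EXEC

def par_unfused(n):
    # n parallel functions: cold starts overlap, response = slowest branch.
    return COLD + EXEC

def par_fused(n):
    # Fusing parallel branches serialises them inside one function.
    return COLD + n * EXEC

print(seq_unfused(3), seq_fused(3))  # fusion helps a sequential chain
print(par_unfused(3), par_fused(3))  # fusion can hurt parallel branches
```

The asymmetry is the point: fusion always removes cold starts, but for parallel branches it also removes the overlap, so a fusion optimizer must weigh both effects.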

RevDate: 2021-12-28

Salih S, Hamdan M, Abdelmaboud A, et al (2021)

Prioritising Organisational Factors Impacting Cloud ERP Adoption and the Critical Issues Related to Security, Usability, and Vendors: A Systematic Literature Review.

Sensors (Basel, Switzerland), 21(24): pii:s21248391.

Cloud ERP is a type of enterprise resource planning (ERP) system that runs on the vendor's cloud platform instead of an on-premises network, enabling companies to connect through the Internet. The goal of this study was to rank and prioritise the factors driving cloud ERP adoption by organisations and to identify the critical issues in terms of security, usability, and vendors that impact adoption of cloud ERP systems. The assessment of critical success factors (CSFs) in on-premises ERP adoption and implementation has been well documented; however, no previous research has been carried out on CSFs in cloud ERP adoption. Therefore, the contribution of this research is to provide research and practice with the identification and analysis of 16 CSFs through a systematic literature review, in which 73 publications on cloud ERP adoption from a range of conferences and journals were assessed using inclusion and exclusion criteria. Drawing from the literature, we found security, usability, and vendors to be the three most widely cited critical issues for the adoption of cloud-based ERP; hence, the second contribution of this study is an integrative model constructed with 12 drivers based on the security, usability, and vendor characteristics that may have the greatest influence on these critical issues in the adoption of cloud ERP systems. We also identified critical gaps in current research, such as the inconclusiveness of findings related to security, usability, and vendor critical issues, by highlighting the most important drivers influencing those issues in cloud ERP adoption and the lack of discussion on the nature of the criticality of those CSFs. This research will aid in the development of new strategies or the revision of existing strategies and policies aimed at effectively integrating cloud ERP into cloud computing infrastructure. It will also allow cloud ERP suppliers to determine organisations' and business owners' expectations and implement appropriate tactics. A better understanding of the CSFs will narrow the field of failure and assist practitioners and managers in increasing their chances of success.

RevDate: 2021-12-28

Bucur V, LC Miclea (2021)

Multi-Cloud Resource Management Techniques for Cyber-Physical Systems.

Sensors (Basel, Switzerland), 21(24): pii:s21248364.

Information technology is based on data management between various sources. Software projects, as varied as simple applications or as complex as self-driving cars, are heavily reliant on the amounts, and types, of data ingested by one or more interconnected systems. Data is not only consumed but transformed or mutated, which requires copious amounts of computing resources. One of the most exciting areas of cyber-physical systems, autonomous vehicles, makes heavy use of deep learning and AI to mimic the highly complex actions of a human driver. Attempting to map human behavior (a large and abstract concept) requires large amounts of data, used by AIs to increase their knowledge and better attempt to solve complex problems. This paper outlines a full-fledged solution for managing resources in a multi-cloud environment. The purpose of this API is to accommodate ever-increasing resource requirements by leveraging the multi-cloud and using commercially available tools to scale resources and make systems more resilient while remaining as cloud agnostic as possible. To that effect, the work herein consists of an architectural breakdown of the resource management API, a low-level description of the implementation, and an experiment aimed at proving the feasibility and applicability of the systems described.

RevDate: 2021-12-28

Hameed SS, Selamat A, Abdul Latiff L, et al (2021)

A Hybrid Lightweight System for Early Attack Detection in the IoMT Fog.

Sensors (Basel, Switzerland), 21(24): pii:s21248289.

Cyber-attack detection via on-gadget embedded models and cloud systems is widely used for the Internet of Medical Things (IoMT). The former has limited computation ability, whereas the latter has a long detection time. Fog-based attack detection is used as an alternative to overcome these problems. However, current fog-based systems cannot handle the ever-increasing IoMT big data. Moreover, they are not lightweight and are designed for network attack detection only. In this work, a hybrid (host and network) lightweight system is proposed for early attack detection in the IoMT fog. In an adaptive online setting, six different incremental classifiers were implemented, namely a novel Weighted Hoeffding Tree Ensemble (WHTE), Incremental K-Nearest Neighbors (IKNN), Incremental Naïve Bayes (INB), Hoeffding Tree Majority Class (HTMC), Hoeffding Tree Naïve Bayes (HTNB), and Hoeffding Tree Naïve Bayes Adaptive (HTNBA). The system was benchmarked with seven heterogeneous sensors and NetFlow data infected with nine types of recent attacks. The results showed that the proposed system worked well on lightweight fog devices with ~100% accuracy, a low detection time, and a memory usage of less than 6 MiB. The single-criteria comparative analysis showed that the WHTE ensemble was more accurate and less sensitive to concept drift.
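The Hoeffding-tree variants listed above all rest on the Hoeffding bound, which lets a streaming tree decide a split from a finite sample with a probabilistic guarantee. A minimal sketch of the bound itself (the parameter values are illustrative, not the paper's settings):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Error bound ε: with probability 1 - delta, the observed mean of n
    samples lies within ε of the true mean (the split test in Hoeffding trees)."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# A Hoeffding tree splits once the gain difference between the two best
# attributes exceeds ε; ε shrinks as more stream samples arrive.
for n in (100, 1000, 10000):
    print(n, round(hoeffding_bound(1.0, 1e-7, n), 4))
```

Because ε decays like 1/sqrt(n), the tree needs no fixed training set: it simply waits until enough traffic has streamed past to make each split decision safely.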

RevDate: 2021-12-28

Alwakeel AM (2021)

An Overview of Fog Computing and Edge Computing Security and Privacy Issues.

Sensors (Basel, Switzerland), 21(24): pii:s21248226.

With the advancement of technologies such as 5G networks and the IoT, the use of cloud computing technologies has become essential. Cloud computing enables intensive data processing and warehousing solutions. Fog computing and edge computing are two newer technologies that inherit parts of the traditional cloud computing paradigm; they aim to simplify some of the complexity of cloud computing and leverage the computing capabilities within the local network to perform computation tasks rather than carrying them to the cloud. This makes these technologies a good fit for the properties of IoT systems. However, using such technologies introduces several new security and privacy challenges that could be a huge obstacle to their implementation. In this paper, we survey the main security and privacy challenges facing fog and edge computing, illustrating how these security issues could affect the operation and implementation of edge and fog computing. Moreover, we present several countermeasures to mitigate the effect of these security issues.

RevDate: 2021-12-27

Xie P, Ma E, Z Xu (2021)

Cloud Computing Image Recognition System Assists the Construction of the Internet of Things Model of Administrative Management Event Parameters.

Computational intelligence and neuroscience, 2021:8630256.

RevDate: 2021-12-27

Yang JS, Cuomo RE, Purushothaman V, et al (2021)

Campus Smoking Policies and Smoking-Related Twitter Posts Originating From California Public Universities: Retrospective Study.

JMIR formative research, 5(12):e33331 pii:v5i12e33331.

BACKGROUND: The number of colleges and universities with smoke- or tobacco-free campus policies has been increasing. The effects of campus smoking policies on overall sentiment, particularly among young adult populations, are more difficult to assess owing to the changing tobacco and e-cigarette product landscape and differential attitudes toward policy implementation and enforcement.

OBJECTIVE: The goal of the study was to retrospectively assess the campus climate toward tobacco use by comparing tweets from California universities with and those without smoke- or tobacco-free campus policies.

METHODS: Geolocated Twitter posts from 2015 were collected using the Twitter public application programming interface in combination with cloud computing services on Amazon Web Services. Posts were filtered for tobacco products and behavior-related keywords. A total of 42,877,339 posts were collected from 2015, with 2837 originating from a University of California or California State University system campus, and 758 of these manually verified as being about smoking. Chi-square tests were conducted to determine whether there were significant differences in tweet user sentiment between campuses that were smoke- or tobacco-free (all University of California campuses and California State University, Fullerton) and those that were not. A separate content analysis of the tweets included in the chi-square tests was conducted to identify major themes by campus smoking policy status.
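The chi-square comparison of sentiment proportions reduces to a 2x2 contingency table. A minimal sketch of the statistic itself; the counts below are hypothetical, not the study's data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: positive vs non-positive smoking tweets on campuses
# without vs with a smoke-free policy (the study's actual counts differ).
stat = chi2_2x2(77, 23, 66, 34)
print(round(stat, 2))
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the P value; in practice a library routine such as SciPy's contingency-table test would handle both steps.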

RESULTS: The percentage of positive sentiment tweets toward tobacco use was higher on campuses without a smoke- or tobacco-free campus policy than on campuses with a smoke- or tobacco-free campus policy (76.7% vs 66.4%, P=.03). Higher positive sentiment on campuses without a smoke- or tobacco-free campus policy may have been driven by general comments about one's own smoking behavior and comments about smoking as a general behavior. Positive sentiment tweets originating from campuses without a smoke- or tobacco-free policy had greater variation in tweet type, which may have also contributed to differences in sentiment among universities.

CONCLUSIONS: Our study introduces preliminary data suggesting that campus smoke- and tobacco-free policies are associated with a reduction in positive sentiment toward smoking. However, continued expressions and intentions to smoke and reports of one's own smoking among Twitter users suggest a need for more research to better understand the dynamics between implementation of smoke- and tobacco-free policies and resulting tobacco behavioral sentiment.

RevDate: 2021-12-24

Garcés-Jiménez A, Calderón-Gómez H, Gómez-Pulido JM, et al (2021)

Medical Prognosis of Infectious Diseases in Nursing Homes by Applying Machine Learning on Clinical Data Collected in Cloud Microservices.

International journal of environmental research and public health, 18(24): pii:ijerph182413278.

BACKGROUND: treating infectious diseases in elderly individuals is difficult; patient referral to emergency services often occurs, since the elderly tend to arrive at consultations with advanced, serious symptoms.

AIM: it was hypothesized that anticipating an infectious disease diagnosis by a few days could significantly improve a patient's well-being and reduce the burden on emergency health system services.

METHODS: vital signs from residents were taken daily and transferred to a database in the cloud. Classifiers were used to recognize patterns in the spatial domain process of the collected data. Doctors reported their diagnoses when any disease presented. A flexible microservice architecture provided access and functionality to the system.

RESULTS: combining two different domains, health and technology, is not easy, but the results are encouraging. The classifiers reported good results; the system has been well accepted by medical personnel and is proving to be cost-effective and a good solution to service disadvantaged areas. In this context, this research found the importance of certain clinical variables in the identification of infectious diseases.

CONCLUSIONS: this work explores how to apply mobile communications, cloud services, and machine learning technology, in order to provide efficient tools for medical staff in nursing homes. The scalable architecture can be extended to big data applications that may extract valuable knowledge patterns for medical research.

RevDate: 2021-12-23

Qiu J, Yan X, Wang W, et al (2021)

Skeleton-Based Abnormal Behavior Detection Using Secure Partitioned Convolutional Neural Network Model.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

Abnormal behavior detection is vital for evaluating the daily-life health status of patients with cognitive impairment. Previous studies indicate that convolutional neural network (CNN)-based computer vision offers high robustness and accuracy for such detection. However, executing a CNN model on the cloud can incur privacy disclosure during data transmission, and the high computational overhead makes it difficult to execute the model on edge-end IoT devices with good real-time performance. In this paper, we realize skeleton-based abnormal behavior detection and propose a secure partitioned CNN model (SP-CNN) that extracts human skeleton keypoints and achieves safe collaborative computing by deploying different CNN layers on the cloud and on the IoT device. Because the data leaving the IoT device have already been processed by several CNN layers, rather than being raw sensitive video, the risk of privacy disclosure is objectively reduced. Moreover, we design an encryption method based on channel state information (CSI) to guarantee the security of the sensitive data. Finally, we apply SP-CNN to abnormal behavior detection to evaluate its effectiveness. The experimental results illustrate that the efficiency of abnormal behavior detection based on SP-CNN is at least 33.2% higher than that of state-of-the-art methods, and its detection accuracy reaches 97.54%.
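The partitioning idea, running the first layers on the device so that only intermediate features cross the network, can be sketched with toy stand-in "layers"; these functions and thresholds are illustrative only, not the paper's SP-CNN:

```python
def edge_layers(frame):
    # Stand-in for the first convolutional layers on the IoT device:
    # raw pixel values -> a small feature vector.
    return [sum(frame) / len(frame), max(frame) - min(frame)]

def cloud_layers(features):
    # Stand-in for the remaining layers on the cloud:
    # feature vector -> behavior label (threshold is arbitrary).
    return "abnormal" if features[1] > 0.5 else "normal"

raw_frame = [0.1, 0.9, 0.2, 0.8]   # hypothetical pixel values
features = edge_layers(raw_frame)  # only this crosses the network
print(cloud_layers(features))
```

The privacy argument maps onto the split directly: `raw_frame` never leaves the device, and recovering it from `features` is the hard inverse problem an eavesdropper would face.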

RevDate: 2021-12-23

Lu ZX, Qian P, Bi D, et al (2021)

Application of AI and IoT in Clinical Medicine: Summary and Challenges.

Current medical science [Epub ahead of print].

The application of artificial intelligence (AI) technology in the medical field has experienced a long history of development. In turn, some long-standing points and challenges in the medical field have also prompted diverse research teams to continue to explore AI in depth. With the development of advanced technologies such as the Internet of Things (IoT), cloud computing, big data, and 5G mobile networks, AI technology has been more widely adopted in the medical field. In addition, the in-depth integration of AI and IoT technology enables the gradual improvement of medical diagnosis and treatment capabilities so as to provide services to the public in a more effective way. In this work, we examine the technical basis of IoT, cloud computing, big data analysis and machine learning involved in clinical medicine, combined with concepts of specific algorithms such as activity recognition, behavior recognition, anomaly detection, assistant decision-making system, to describe the scenario-based applications of remote diagnosis and treatment collaboration, neonatal intensive care unit, cardiology intensive care unit, emergency first aid, venous thromboembolism, monitoring nursing, image-assisted diagnosis, etc. We also systematically summarize the application of AI and IoT in clinical medicine, analyze the main challenges thereof, and comment on the trends and future developments in this field.

RevDate: 2021-12-23

Siam AI, Almaiah MA, Al-Zahrani A, et al (2021)

Secure Health Monitoring Communication Systems Based on IoT and Cloud Computing for Medical Emergency Applications.

Computational intelligence and neuroscience, 2021:8016525.

Smart health surveillance technology has attracted wide attention among patients and health professionals, as it provides early detection of critical abnormal situations without the need for direct contact with the patient. This paper presents a secure, smart, portable multi-vital-sign monitoring system based on Internet-of-Things (IoT) technology. The implemented system is designed to measure the key health parameters heart rate (HR), blood oxygen saturation (SpO2), and body temperature simultaneously. The captured physiological signals are processed and encrypted using the Advanced Encryption Standard (AES) algorithm before being sent to the cloud. An ESP8266 integrated unit is used for processing, encryption, and Wi-Fi connectivity to the cloud. On the receiving side, trusted medical organization servers receive and decrypt the measurements and display the values on a monitoring dashboard for authorized specialists. The proposed system's measurements are compared with those of a number of commercial medical devices. Results demonstrate that the measurements of the proposed system are within the 95% confidence interval. Moreover, the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Relative Error (MRE) for the proposed system are 1.44, 1.12, and 0.012 for HR; 1.13, 0.92, and 0.009 for SpO2; and 0.13, 0.11, and 0.003 for body temperature, respectively. These results demonstrate the high accuracy and reliability of the proposed system.
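The three error metrics quoted in this abstract follow standard formulas; a minimal sketch with made-up heart-rate readings (not the paper's data):

```python
import math

def rmse(ref, meas):
    # root mean squared error
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(ref, meas)) / len(ref))

def mae(ref, meas):
    # mean absolute error
    return sum(abs(r - m) for r, m in zip(ref, meas)) / len(ref)

def mre(ref, meas):
    # mean relative error, relative to the reference value
    return sum(abs(r - m) / r for r, m in zip(ref, meas)) / len(ref)

# Illustrative heart-rate readings (bpm): reference device vs. proposed system
ref  = [72.0, 80.0, 65.0, 90.0]
meas = [73.0, 78.0, 66.0, 91.0]

print(round(rmse(ref, meas), 3), mae(ref, meas), round(mre(ref, meas), 4))
```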

RevDate: 2021-12-22

Abdul Hadi M, Schmid J, Trabesinger S, et al (2021)

High-frequency machine datasets captured via Edge Device from Spinner U5-630 milling machine.

Data in brief, 39:107670 pii:S2352-3409(21)00945-8.

The high-frequency (HF) machine data were retrieved from the Spinner U5-630 milling machine via an Edge Device. Unlike cloud computing, an Edge Device performs distributed data processing close to the devices that generate the data, which can thereby be used for analysis [1,2]. The data are sampled every 2 ms, i.e., at a frequency of 500 Hz. The HF machine data come from a series of experiments performed in two parts: part 1 has 12 .json data files and part 2 has 11 .json files, giving 23 files of HF machine data from 23 experimental runs in total. The HF machine data have vast potential for analysis, as they contain all the information from the machine during the machining process. In our case, part of this information was used to calculate the energy consumption of the machine. Similarly, the data can be used to retrieve information on torque, commanded and actual speed, NC code, current, etc.
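As a sketch of the kind of analysis the dataset supports, energy consumption can be integrated from sampled power at the stated 2 ms interval. The power values and the idea of a flat list of samples are assumptions for illustration; the actual .json schema of the Edge Device files is not described here.

```python
# Sketch: integrate machine energy from power samples taken every 2 ms
# (500 Hz). The sample values are illustrative; the real .json schema
# from the Edge Device may differ.

SAMPLE_INTERVAL_S = 0.002                       # 2 ms sampling interval
assert abs(1 / SAMPLE_INTERVAL_S - 500.0) < 1e-9  # i.e., 500 Hz

def energy_joules(power_samples_w):
    # Rectangle-rule integration: E = sum(P_i * dt)
    return sum(power_samples_w) * SAMPLE_INTERVAL_S

samples = [1500.0, 1520.0, 1480.0, 1500.0]      # watts, illustrative
print(energy_joules(samples))                   # energy over 8 ms, in joules
```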

RevDate: 2021-12-17

Kahn MG, Mui JY, Ames MJ, et al (2021)

Migrating a research data warehouse to a public cloud: challenges and opportunities.

Journal of the American Medical Informatics Association : JAMIA pii:6468865 [Epub ahead of print].

OBJECTIVE: Clinical research data warehouses (RDWs) linked to genomic pipelines and open data archives are being created to support innovative, complex data-driven discoveries. The computing and storage needs of these research environments may quickly exceed the capacity of on-premises systems. New RDWs are migrating to cloud platforms for the scalability and flexibility needed to meet these challenges. We describe our experience in migrating a multi-institutional RDW to a public cloud.

MATERIALS AND METHODS: This study is descriptive. Primary materials included internal and public presentations before and after the transition, analysis documents, and actual billing records. Findings were aggregated into topical categories.

RESULTS: Eight categories of migration issues were identified. Unanticipated challenges included legacy system limitations; network, computing, and storage architectures that realize performance and cost benefits in the face of hyper-innovation; complex security reviews and approvals; and limited cloud consulting expertise.

DISCUSSION: Cloud architectures enable previously unavailable capabilities, but numerous pitfalls can impede realizing the full benefits of a cloud environment. Rapid changes in cloud capabilities can quickly obsolete existing architectures and associated institutional policies. Touchpoints with on-premises networks and systems can add unforeseen complexity. Governance, resource management, and cost oversight are critical to allow rapid innovation while minimizing wasted resources and unnecessary costs.

CONCLUSIONS: Migrating our RDW to the cloud has enabled capabilities and innovations that would not have been possible with an on-premises environment. Notwithstanding the challenges of managing cloud resources, the resulting RDW capabilities have been highly positive to our institution, research community, and partners.

RevDate: 2021-12-18

Pandya S, Sur A, N Solke (2021)

COVIDSAVIOR: A Novel Sensor-Fusion and Deep Learning Based Framework for Virus Outbreaks.

Frontiers in public health, 9:797808.

The presented deep-learning and sensor-fusion based assistive technology (smart facemask and thermal scanning kiosk) protects individuals through automatic face-mask detection and automatic thermal scanning of current body temperature. The system also issues a variety of notifications, such as an alarm, if an individual is not wearing a mask or has a body temperature beyond the standard threshold of 98.6°F (37°C). Design/methodology/approach: the presented deep-learning and sensor-fusion based approach can detect whether an individual is wearing a mask and notify security personnel by raising an alarm. Moreover, the smart tunnel is equipped with a thermal sensing unit embedded with a camera, which can measure an individual's real-time body temperature against the limits prescribed in WHO reports. Findings: the investigation results validate the performance of the presented smart face-mask and thermal scanning mechanism. The system can detect an outsider entering the building without a mask and alert the security control room by raising appropriate alarms. Furthermore, the smart epidemic tunnel is embedded with an intelligent algorithm that performs real-time thermal scanning of an individual and stores essential information in a cloud platform such as Google Firebase. Thus, the proposed system benefits society by saving time and helping to lower the spread of coronavirus.

RevDate: 2021-12-16

Iregbu K, Dramowski A, Milton R, et al (2021)

Global health systems' data science approach for precision diagnosis of sepsis in early life.

The Lancet. Infectious diseases pii:S1473-3099(21)00645-9 [Epub ahead of print].

Neonates and children in low-income and middle-income countries (LMICs) contribute to the highest number of sepsis-associated deaths globally. Interventions to prevent sepsis mortality are hampered by a lack of comprehensive epidemiological data and pathophysiological understanding of biological pathways. In this review, we discuss the challenges faced by LMICs in diagnosing sepsis in these age groups. We highlight a role for multi-omics and health care data to improve diagnostic accuracy of clinical algorithms, arguing that health-care systems urgently need precision medicine to avoid the pitfalls of missed diagnoses, misdiagnoses, and overdiagnoses, and associated antimicrobial resistance. We discuss ethical, regulatory, and systemic barriers related to the collection and use of big data in LMICs. Technologies such as cloud computing, artificial intelligence, and medical tricorders might help, but they require collaboration with local communities. Co-partnering (joint equal development of technology between producer and end-users) could facilitate integration of these technologies as part of future care-delivery systems, offering a chance to transform the global management and prevention of sepsis for neonates and children.

RevDate: 2021-12-16

Keddy KH, Saha S, Kariuki S, et al (2021)

Using big data and mobile health to manage diarrhoeal disease in children in low-income and middle-income countries: societal barriers and ethical implications.

The Lancet. Infectious diseases pii:S1473-3099(21)00585-5 [Epub ahead of print].

Diarrhoea is an important cause of morbidity and mortality in children from low-income and middle-income countries (LMICs), despite advances in the management of this condition. Understanding of the causes of diarrhoea in children in LMICs has advanced owing to large multinational studies and big data analytics computing the disease burden, identifying the important variables that have contributed to reducing this burden. The advent of the mobile phone has further enabled the management of childhood diarrhoea by providing both clinical support to health-care workers (such as diagnosis and management) and communicating preventive measures to carers (such as breastfeeding and vaccination reminders) in some settings. There are still challenges in addressing the burden of diarrhoeal diseases, such as incomplete patient information, underrepresented geographical areas, concerns about patient confidentiality, unequal partnerships between study investigators, and the reactive approach to outbreaks. A transparent approach to promote the inclusion of researchers in LMICs could address partnership imbalances. A big data umbrella encompassing cloud-based centralised databases to analyse interlinked human, animal, agricultural, social, and climate data would provide an informative solution to the development of appropriate management protocols in LMICs.

RevDate: 2021-12-16

Lin B, W Huang (2021)

A Study of Cloud-Based Remote Clinical Care Technology.

Journal of healthcare engineering, 2021:8024091.

This paper uses cloud computing to build and design remote clinical care technology. The study refines the evaluation approach for the system's elements, builds an evaluation prototype for the strategy, uses service design theory to improve the design of the service component of the assistive system, summarizes a list of requirements based on system design and service design, and produces a service design prototype. Through design practice, the detailed design of the software interaction interface and the auxiliary product of the care assistance system are investigated based on this prototype. From the user perspective, a strategy is proposed for meeting user expectations and improving user information literacy; from the social network perspective, a strategy for establishing a long-term mechanism for smart medical operation and improving the information interaction network environment; and from the system service perspective, a strategy for optimizing the system function design and innovating the service model. Compared with traditional written patient handover, the application of MNIS under cloud computing can significantly shorten the handover time of surgical patients and improve the standardized execution rate of surgical safety verification and the qualified rate of nursing documents, while the rate of standardized application of prophylactic antibiotics is also significantly higher than that of the control group. A questionnaire survey of operating-room nursing staff showed that clinical nursing staff were generally satisfied with the clinical application of MNIS under cloud computing, with an average satisfaction score of 64.5 ± 11.3 and an average score of 3.58 ± 0.54 for each item.
Pre-application training in MNIS, departmental support for MNIS, and its ease of verification for surgical patients were the three main factors favoring the clinical application of MNIS in the operating room with cloud computing, while barriers to wireless network connectivity, inconvenient PDA input, and small screen size were the three main drawbacks affecting its application. The resulting clinical evaluation index system for MNIS in the operating room is innovative and comprehensive: it includes not only clinical care indicators but also general hardware and software indicators, and can effectively reflect the practical capability of the mobile clinical terminal and the user experience.

RevDate: 2021-12-15

Byrne M, O'Malley L, Glenny AM, et al (2021)

Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England.

PloS one, 16(12):e0259797 pii:PONE-D-21-22440.

BACKGROUND: Online reviews may act as a rich source of data to assess the quality of dental practices. Assessing the content and sentiment of reviews on a large scale is time consuming and expensive. Automation of the process of assigning sentiment to big data samples of reviews may allow for reviews to be used as Patient Reported Experience Measures for primary care dentistry.

AIM: To assess the reliability of three different online sentiment analysis tools (Amazon Comprehend DetectSentiment API (ACDAPI), Google and Monkeylearn) at assessing the sentiment of reviews of dental practices working on National Health Service contracts in the United Kingdom.

METHODS: A Python 3 script was used to mine 15800 reviews from 4803 unique dental practices on the NHS.uk website between April 2018 and March 2019. A random sample of 270 reviews was rated by the three sentiment analysis tools. These reviews were also rated by 3 blinded, independent human reviewers, and a pooled sentiment score was assigned. Kappa statistics and polychoric evaluation were used to assess the level of agreement. Disagreements between the automated and human reviewers were qualitatively assessed.

RESULTS: There was good agreement between the sentiment assigned to reviews by the human reviewers and by ACDAPI (k = 0.660). The Google (k = 0.706) and Monkeylearn (k = 0.728) tools showed slightly better agreement at the expense of usability on a massive dataset. There were 33 disagreements in rating between ACDAPI and the human reviewers, of which n = 16 were due to syntax errors, n = 10 to misappropriation of the strength of conflicting emotions, and n = 7 to a lack of overtly emotive language in the text.

CONCLUSIONS: There is good agreement between the sentiment of an online review assigned by a group of humans and by cloud-based sentiment analysis. This may allow the use of automated sentiment analysis for quality assessment of dental service provision in the NHS.
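The kappa statistics reported in this abstract measure inter-rater agreement beyond chance. A minimal sketch of Cohen's kappa for two raters over categorical sentiment labels (toy labels, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Illustrative sentiment ratings from a human panel and an automated tool
human = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos"]
tool  = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos"]

print(round(cohens_kappa(human, tool), 3))
```

Note that kappa is lower than raw percent agreement whenever the raters could agree often by chance alone, which is why it is the preferred statistic here.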

RevDate: 2021-12-15

Halder A, Verma A, Biswas D, et al (2021)

Recent advances in mass-spectrometry based proteomics software, tools and databases.

Drug discovery today. Technologies, 39:69-79.

The field of proteomics depends heavily on data generation and data analysis, which are thoroughly supported by software and databases. There has been massive advancement in mass spectrometry-based proteomics over the last 10 years, which has compelled the scientific community to upgrade or develop algorithms, tools, and repository databases in the field. Several standalone software tools and comprehensive databases have aided the establishment of integrated omics pipelines and meta-analysis workflows, contributing to understanding disease pathobiology, biomarker discovery, and the prediction of new therapeutic modalities. For shotgun proteomics, where Data-Dependent Acquisition is performed, several user-friendly software tools have been developed that can analyse the pre-processed data to provide mechanistic insights into disease. Likewise, in Data-Independent Acquisition, pipelines have emerged that can accomplish tasks ranging from building the spectral library to identifying therapeutic targets. Furthermore, in the age of big data analysis, machine learning and cloud computing are adding robustness, speed, and depth to proteomics data analysis. The current review discusses recent advancements and the development of software, tools, and databases in the field of mass spectrometry-based proteomics.

RevDate: 2021-12-15

Frye L, Bhat S, Akinsanya K, et al (2021)

From computer-aided drug discovery to computer-driven drug discovery.

Drug discovery today. Technologies, 39:111-117.

RevDate: 2021-12-13

Rowe SP, MG Pomper (2021)

Molecular imaging in oncology: Current impact and future directions.

CA: a cancer journal for clinicians [Epub ahead of print].

The authors define molecular imaging, according to the Society of Nuclear Medicine and Molecular Imaging, as the visualization, characterization, and measurement of biological processes at the molecular and cellular levels in humans and other living systems. Although practiced for many years clinically in nuclear medicine, expansion to other imaging modalities began roughly 25 years ago and has accelerated since. That acceleration derives from the continual appearance of new and highly relevant animal models of human disease, increasingly sensitive imaging devices, high-throughput methods to discover and optimize affinity agents to key cellular targets, new ways to manipulate genetic material, and expanded use of cloud computing. Greater interest by scientists in allied fields, such as chemistry, biomedical engineering, and immunology, as well as increased attention by the pharmaceutical industry, have likewise contributed to the boom in activity in recent years. Whereas researchers and clinicians have applied molecular imaging to a variety of physiologic processes and disease states, here, the authors focus on oncology, arguably where it has made its greatest impact. The main purpose of imaging in oncology is early detection to enable interception if not prevention of full-blown disease, such as the appearance of metastases. Because biochemical changes occur before changes in anatomy, molecular imaging-particularly when combined with liquid biopsy for screening purposes-promises especially early localization of disease for optimum management. Here, the authors introduce the ways and indications in which molecular imaging can be undertaken, the tools used and under development, and near-term challenges and opportunities in oncology.

RevDate: 2021-12-13

Calabrese B (2022)

Web and Cloud Computing to Analyze Microarray Data.

Methods in molecular biology (Clifton, N.J.), 2401:29-38.

Microarray technology is a high-throughput technique that can simultaneously measure hundreds of thousands of genes' expression levels. Web and cloud computing tools and databases for storage and analysis of microarray data are necessary for biologists to interpret massive data from experiments. This chapter presents the main databases and web and cloud computing tools for microarray data storage and analysis.

RevDate: 2021-12-13

Marozzo F, L Belcastro (2022)

High-Performance Framework to Analyze Microarray Data.

Methods in molecular biology (Clifton, N.J.), 2401:13-27.

Pharmacogenomics is an important research field that studies the impact of patients' genetic variation on drug responses, looking for correlations between single nucleotide polymorphisms (SNPs) of the patient genome and drug toxicity or efficacy. The large number of available samples and the high resolution of the instruments allow microarray platforms to produce huge amounts of SNP data. To analyze such data and find correlations in a reasonable time, high-performance computing solutions must be used. Cloud4SNP is a bioinformatics tool, based on the Data Mining Cloud Framework (DMCF), for parallel preprocessing and statistical analysis of SNP pharmacogenomics microarray data. This work describes how Cloud4SNP has been extended to execute applications on Apache Spark, which provides faster execution times for iterative and batch processing. The experimental evaluation shows that Cloud4SNP is able to exploit the high-performance features of Apache Spark, obtaining faster execution times and a high level of scalability, with a global speedup that is very close to linear.

RevDate: 2021-12-13

Tiwari A, Dhiman V, Iesa MAM, et al (2021)

Patient Behavioral Analysis with Smart Healthcare and IoT.

Behavioural neurology, 2021:4028761.

Patient behavioral analysis is a key factor in providing treatment to patients who may suffer from various conditions, including neurological disease, head trauma, and mental illness. Analyzing a patient's behavior helps in determining the root cause of the disease. In traditional healthcare, patient behavioral analysis faced many challenges and was much more difficult; with the development of smart healthcare, patient behavior can be analyzed far more easily. Information technology plays a key role in realizing the concept of smart healthcare: a new generation of information technologies, including IoT and cloud computing, is changing the traditional healthcare system in every way. Using the Internet of Things in a healthcare institution enhances effectiveness and makes care more personalized and convenient for patients. This article first discusses the technologies used to support smart healthcare, then the existing problems with the smart healthcare system and how these problems can be solved. The study provides essential information about the role of smart healthcare and IoT in monitoring patient behavior; various biomarkers are maintained properly with the help of these technologies. The smart healthcare system is built on a suitable, energy-efficient architecture. Artificial intelligence is used increasingly in healthcare to support diagnosis and other important tasks, and its application to patient engagement is also covered in this study. Major hardware components of this technology, such as CO and CO2 sensors, are also included.

RevDate: 2021-12-13

ElAraby ME, Elzeki OM, Shams MY, et al (2022)

A novel Gray-Scale spatial exploitation learning Net for COVID-19 by crawling Internet resources.

Biomedical signal processing and control, 73:103441.

Today, the planet suffers from the ongoing COVID-19 pandemic, which motivates scientists and researchers to detect and diagnose infected people. The chest X-ray (CXR) image is a common tool for detection. Although CXR images contain limited informative detail about COVID-19 patches, computer vision helps to overcome this through grayscale spatial exploitation analysis. In turn, it is highly recommended to acquire more CXR images to increase the capacity and ability to learn when mining the grayscale spatial exploitation. In this paper, an efficient Gray-scale Spatial Exploitation Net (GSEN) is designed, employing web-page crawling across cloud computing environments. The motivations of this work are: (i) a framework methodology for constructing a consistent dataset by web crawling, updating the dataset continuously per crawling iteration; (ii) a lightweight, fast-learning gray-scale spatial exploitation deep neural net with comparable accuracy and fine-tuned parameters; and (iii) a comprehensive evaluation of the designed net on different datasets collected by COVID-19 web crawling, versus transfer learning with pre-trained nets. Different experiments have been performed to benchmark both the proposed web crawling framework methodology and the designed gray-scale spatial exploitation net. In terms of accuracy, the proposed net achieves 95.60% for two-class labels and 92.67% for three-class labels, compared with the most recent transfer learning approaches GoogLeNet, VGG-19, ResNet-50, and AlexNet. Furthermore, accuracy improves in a positive relationship with the cardinality of the crawled CXR dataset.

RevDate: 2021-12-13

Subramanian M, Shanmuga Vadivel K, Hatamleh WA, et al (2021)

The role of contemporary digital tools and technologies in Covid-19 crisis: An exploratory analysis.

Expert systems pii:EXSY12834 [Epub ahead of print].

Following the Covid-19 pandemic, there has been an increase in interest in using digital resources to contain pandemics. To prevent, detect, monitor, regulate, track, and manage diseases, predict outbreaks, and conduct data analysis and decision-making processes, a variety of digital technologies are used, ranging from artificial intelligence (AI)-powered machine learning (ML) or deep learning (DL) focused applications to blockchain technology and big data analytics enabled by cloud computing and the internet of things (IoT). In this paper, we look at how emerging technologies such as IoT and sensors, AI, ML, DL, blockchain, augmented reality, virtual reality, cloud computing, big data, robots and drones, intelligent mobile apps, and 5G are advancing health care and paving the way to combat the Covid-19 pandemic. The aim of this research is to examine possible technologies, processes, and tools for addressing Covid-19 issues such as pre-screening, early detection, monitoring infected/quarantined individuals, forecasting future infection rates, and more. We also look at the research opportunities that have arisen as a result of the use of emerging technology to handle the Covid-19 crisis.

RevDate: 2021-12-13

Verdu E, Nieto YV, N Saleem (2021)

Call for Special Issue Papers: Cloud Computing and Big Data for Cognitive IoT.

Big data, 9(6):413-414.

RevDate: 2021-12-13

Waitman LR, Song X, Walpitage DL, et al (2021)

Enhancing PCORnet Clinical Research Network data completeness by integrating multistate insurance claims with electronic health records in a cloud environment aligned with CMS security and privacy requirements.

Journal of the American Medical Informatics Association : JAMIA pii:6460151 [Epub ahead of print].

OBJECTIVE: The Greater Plains Collaborative (GPC) and other PCORnet Clinical Data Research Networks capture healthcare utilization within their health systems. Here, we describe a reusable environment (GPC Reusable Observable Unified Study Environment [GROUSE]) that integrates hospital and electronic health records (EHRs) data with state-wide Medicare and Medicaid claims and assess how claims and clinical data complement each other to identify obesity and related comorbidities in a patient sample.

MATERIALS AND METHODS: EHR, billing, and tumor registry data from 7 healthcare systems were integrated with Centers for Medicare & Medicaid Services insurance claims (Medicare, 2011-2016; Medicaid, 2011-2012) to create deidentified databases in Informatics for Integrating Biology & the Bedside and PCORnet Common Data Model formats. We describe the technical details of how this federally compliant, cloud-based data environment was built. As a use case, trends in obesity rates for different age groups are reported, along with the relative contribution of claims and EHR data to data completeness and the detection of common comorbidities.

RESULTS: GROUSE contained 73 billion observations from 24 million unique patients (12.9 million Medicare; 13.9 million Medicaid; 6.6 million GPC patients) with 1 674 134 patients crosswalked and 983 450 patients with body mass index (BMI) linked to claims. Diagnosis codes from EHR and claims sources underreport obesity by 2.56 times compared with body mass index measures. However, common comorbidities such as diabetes and sleep apnea diagnoses were more often available from claims diagnosis codes (1.6 and 1.4 times, respectively).

CONCLUSION: GROUSE provides a unified EHR-claims environment to address health system and federal privacy concerns, which enables investigators to generalize analyses across health systems integrated with multistate insurance claims.

RevDate: 2021-12-13

Li Y, MA Cianfrocco (2021)

Cloud computing platforms to support cryo-EM structure determination.

Trends in biochemical sciences pii:S0968-0004(21)00244-9 [Epub ahead of print].

Leveraging the power of single-particle cryo-electron microscopy (cryo-EM) requires robust and accessible computational infrastructure. Here, we summarize the cloud computing landscape and picture the outlook of a hybrid cryo-EM computing workflow, and make suggestions to the community to facilitate a future for cryo-EM that integrates into cloud computing infrastructure.

RevDate: 2021-12-13

Zhou Y, Qian C, Guo Y, et al (2021)

XCloud-pFISTA: A Medical Intelligence Cloud for Accelerated MRI.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2021:3289-3292.

Machine learning and artificial intelligence have shown remarkable performance in accelerated magnetic resonance imaging (MRI). Cloud computing technologies have great advantages in building an easily accessible platform to deploy advanced algorithms. In this work, we develop an open-access, easy-to-use and high-performance medical intelligence cloud computing platform (XCloud-pFISTA) to reconstruct MRI images from undersampled k-space data. Two state-of-the-art approaches of the Projected Fast Iterative Soft-Thresholding Algorithm (pFISTA) family have been successfully implemented on the cloud. This work can be considered a good example of cloud-based medical image reconstruction and may benefit the future development of integrated reconstruction and online diagnosis systems.
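pFISTA belongs to the iterative soft-thresholding family of sparse-reconstruction algorithms. A generic textbook sketch of its two core ingredients follows: the soft-thresholding (shrinkage) operator and a FISTA-style momentum step, here using the common simplified schedule beta_k = (k-1)/(k+2). This is an illustration of the algorithm family, not XCloud-pFISTA's actual implementation.

```python
import math

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t,
    # zeroing entries with magnitude below t (promotes sparsity).
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def fista_momentum(x_new, x_old, k):
    # FISTA extrapolation: y = x_k + beta_k * (x_k - x_{k-1}),
    # with the simplified schedule beta_k = (k - 1) / (k + 2).
    beta = (k - 1) / (k + 2)
    return [xn + beta * (xn - xo) for xn, xo in zip(x_new, x_old)]

shrunk = soft_threshold([3.0, -0.5, 1.2], 1.0)  # small entries zeroed out
y = fista_momentum([1.0, 2.0], [0.0, 0.0], k=1)  # k=1: no momentum yet
```

In a full reconstruction loop, each iteration applies a gradient step on the data-fidelity term, soft-thresholds the result in a sparsifying transform domain, and then extrapolates with the momentum step to accelerate convergence.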

RevDate: 2021-12-10

Kua J, Loke SW, Arora C, et al (2021)

Internet of Things in Space: A Review of Opportunities and Challenges from Satellite-Aided Computing to Digitally-Enhanced Space Living.

Sensors (Basel, Switzerland), 21(23): pii:s21238117.

Recent scientific and technological advancements driven by the Internet of Things (IoT), Machine Learning (ML) and Artificial Intelligence (AI), distributed computing and data communication technologies have opened up a vast range of opportunities in many scientific fields-spanning from fast, reliable and efficient data communication to large-scale cloud/edge computing and intelligent big data analytics. Technological innovations and developments in these areas have also enabled many opportunities in the space industry. The successful Mars landing of NASA's Perseverance rover on 18 February 2021 represents another giant leap for humankind in space exploration. Emerging research and developments of connectivity and computing technologies in IoT for space/non-terrestrial environments is expected to yield significant benefits in the near future. This survey paper presents a broad overview of the area and provides a look-ahead of the opportunities made possible by IoT and space-based technologies. We first survey the current developments of IoT and space industry, and identify key challenges and opportunities in these areas. We then review the state-of-the-art and discuss future opportunities for IoT developments, deployment and integration to support future endeavors in space exploration.

RevDate: 2021-12-10

Sodhro AH, N Zahid (2021)

AI-Enabled Framework for Fog Computing Driven E-Healthcare Applications.

Sensors (Basel, Switzerland), 21(23): pii:s21238039.

Artificial Intelligence (AI) is the revolutionary paradigm empowering sixth generation (6G) edge-computing-based e-healthcare for everyone. This research therefore aims to promote an AI-based, cost-effective and efficient healthcare application. The cyber physical system (CPS) is a key player in the internet world, where humans and their personal devices such as cell phones, laptops, and wearables facilitate the healthcare environment. Strategies for extracting, examining, and monitoring data from sensors and actuators across the medical landscape are facilitated by cloud-enabled technologies. Efficient and accurate examination of the voluminous data from sensor devices is constrained by bandwidth, delay, and energy. Due to the heterogeneous nature of the Internet of Medical Things (IoMT), the healthcare system it drives must be smart, interoperable, convergent, and reliable to provide pervasive and cost-effective healthcare platforms. Unfortunately, because of higher power consumption and lower packet delivery rates, achieving interoperable, convergent, and reliable transmission is challenging in connected healthcare. In this scenario, this paper makes four major contributions. The first is the development of a single-chip wearable electrocardiogram (ECG) with the support of an analog front end (AFE) chip (ADS1292R) for gathering ECG data to examine the health status of elderly or chronic patients within an IoT-based cyber physical system (CPS). The second is a fuzzy-based sustainable, interoperable, and reliable algorithm (FSIRA), an intelligent and self-adaptive decision-making approach to prioritize emergency and critical patients in association with the selected parameters for improving healthcare quality at reasonable cost. The third is a specific cloud-based architecture for mobile and connected healthcare.
The fourth is the identification of the right balance between reliability, packet loss ratio, convergence, latency, interoperability, and throughput to support adaptive, IoMT-driven connected healthcare. Our proposed approaches are shown to outperform conventional techniques by providing high reliability, high convergence, interoperability, and a better foundation for analyzing and interpreting accuracy in systems from a medical health perspective. For the IoMT, the enabled healthcare cloud is the key ingredient to focus on, as it also faces the major hurdles of limited bandwidth, higher delay, and energy drain. Thus, we propose mathematical trade-offs between bandwidth, interoperability, reliability, delay, and energy dissipation for IoMT-oriented smart healthcare over a 6G platform.

RevDate: 2021-12-10

Lazazzera R, Laguna P, Gil E, et al (2021)

Proposal for a Home Sleep Monitoring Platform Employing a Smart Glove.

Sensors (Basel, Switzerland), 21(23): pii:s21237976.

The present paper proposes the design of a sleep monitoring platform. It consists of an entire sleep monitoring system based on a smart glove sensor called UpNEA, worn during the night for signal acquisition, a mobile application, and a remote server called AeneA for cloud computing. UpNEA acquires a 3-axis accelerometer signal, a photoplethysmography (PPG) signal, and a peripheral oxygen saturation (SpO2) signal from the index finger. Overnight recordings are sent from the hardware to a mobile application and then transferred to AeneA. After cloud computing, the results are shown in a web application accessible to the user and the clinician. The AeneA sleep monitoring activity performs different tasks: sleep stage classification and oxygen desaturation assessment; heart rate and respiration rate estimation; tachycardia, bradycardia, atrial fibrillation, and premature ventricular contraction detection; and apnea and hypopnea identification and classification. The PPG breathing rate estimation algorithm showed an absolute median error of 0.5 breaths per minute for the 32 s window and 0.2 for the 64 s window. The apnea and hypopnea detection algorithm showed an accuracy (Acc) of 75.1% when windowing the PPG in one-minute segments. The classification task revealed 92.6% Acc in separating central from obstructive apnea, 83.7% in separating central apnea from central hypopnea, and 82.7% in separating obstructive apnea from obstructive hypopnea. The novelty of the integrated algorithms and the state-of-the-art cloud computing products deployed encourage the production of the proposed solution for home sleep monitoring.

RevDate: 2021-12-10

Guo K, Liu C, Zhao S, et al (2021)

Design of a Millimeter-Wave Radar Remote Monitoring System for the Elderly Living Alone Using WIFI Communication.

Sensors (Basel, Switzerland), 21(23): pii:s21237893.

In response to the current demand for the remote monitoring of older people living alone, non-contact human vital-sign monitoring systems based on millimeter-wave radar have gradually become an object of research. This paper investigates a detection method for obtaining human breathing and heartbeat signals using a frequency-modulated continuous-wave system. We built a portable millimeter-wave radar module for wireless communication; the module is small and has a WIFI communication interface, so only a power cord needs to be provided. The breathing and heartbeat signals were detected and separated with an FIR digital filter and the wavelet transform. By building a cloud computing framework, we realized remote, unobtrusive monitoring of vital signs for older people living alone. Experiments were also carried out to compare the performance of the system with a common contact-based detection system. The experimental results showed that the vital-sign detection system based on the millimeter-wave sensor offers strong real-time performance and accuracy.
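
The abstract names an FIR digital filter and the wavelet transform for separating breathing and heartbeat signals, but gives no implementation details. Below is a minimal, hypothetical sketch of the FIR stage only, assuming illustrative values (20 Hz sampling, ~0.25 Hz breathing, ~1.2 Hz heartbeat) and a windowed-sinc low-pass design; the wavelet stage and actual radar processing are omitted:

```python
import math

def lowpass_fir(cutoff_hz, fs, num_taps=161):
    """Windowed-sinc low-pass FIR kernel (Hamming window), unity DC gain."""
    fc = cutoff_hz / fs                       # normalized cutoff (cycles/sample)
    mid = (num_taps - 1) / 2
    taps = []
    for k in range(num_taps):
        x = k - mid
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * k / (num_taps - 1))  # Hamming
        taps.append(h * w)
    s = sum(taps)
    return [t / s for t in taps]              # normalize for unity gain at DC

def convolve(signal, taps):
    """Centered FIR filtering with zero padding at the edges."""
    half = len(taps) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, t in enumerate(taps):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += t * signal[j]
        out.append(acc)
    return out

# Synthetic chest signal: 0.25 Hz breathing + 1.2 Hz heartbeat, sampled at 20 Hz.
fs, n = 20.0, 400
sig = [math.sin(2 * math.pi * 0.25 * i / fs) + 0.3 * math.sin(2 * math.pi * 1.2 * i / fs)
       for i in range(n)]

taps = lowpass_fir(0.5, fs)                   # pass breathing, stop heartbeat
breathing = convolve(sig, taps)
heartbeat = [s - b for s, b in zip(sig, breathing)]  # residual carries the heartbeat
```

A real system would replace the subtraction step with a band-pass filter and the wavelet decomposition the paper mentions, and would work on the radar phase signal rather than a synthetic sine mixture.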

RevDate: 2021-12-10

Akram J, Tahir A, Munawar HS, et al (2021)

Cloud- and Fog-Integrated Smart Grid Model for Efficient Resource Utilisation.

Sensors (Basel, Switzerland), 21(23): pii:s21237846.

The smart grid (SG) is a contemporary electrical network that enhances the network's performance, reliability, stability, and energy efficiency. Integrating cloud and fog computing with the SG can increase its efficiency: combining the SG with cloud computing enhances resource allocation, and fog computing is integrated with cloud computing to minimise the burden on the cloud and to optimise resource allocation further. Fog has three essential functionalities: location awareness, low latency, and mobility. In this study, we offer a cloud- and fog-based architecture for information management. By allocating virtual machines (VMs) using a load-balancing mechanism, fog computing makes the system more efficient. We propose a novel approach, named BPSOSA, based on binary particle swarm optimisation with the inertia weight adjusted using simulated annealing. The inertia weight is an important factor in BPSOSA, as it adjusts the size of the search space for finding the optimal solution. BPSOSA is compared against the round robin, odds algorithm, and ant colony optimisation techniques. In terms of response time, BPSOSA outperforms round robin, the odds algorithm, and ant colony optimisation by 53.99 ms, 82.08 ms, and 81.58 ms, respectively. In terms of processing time, it outperforms them by 52.94 ms, 81.20 ms, and 80.56 ms, respectively. Ant colony optimisation has slightly better cost efficiency than BPSOSA; however, the difference is insignificant.
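
The abstract does not give the exact BPSOSA update rule. The following is a hedged sketch of the general idea, a binary PSO whose inertia weight decays under a simulated-annealing-style geometric cooling schedule, applied to a toy VM load-balancing objective; all task loads, coefficients, and the cooling rate are invented for illustration:

```python
import math
import random

random.seed(42)

# Toy objective: assign 12 tasks to two fog VMs (bit 0 or 1), minimising load imbalance.
LOADS = [5, 9, 3, 7, 8, 2, 6, 4, 1, 9, 5, 3]

def imbalance(bits):
    vm1 = sum(l for l, b in zip(LOADS, bits) if b == 1)
    return abs(sum(LOADS) - 2 * vm1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

def bpsosa(n_particles=20, iters=80, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    dim = len(LOADS)
    pos = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=imbalance)[:]
    temp = 1.0
    for _ in range(iters):
        # Inertia weight annealed by geometric cooling: large w early (wide
        # search space), small w late (local refinement).
        w = w_min + (w_max - w_min) * temp
        temp *= 0.95
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Binary PSO: velocity sets the probability that the bit is 1.
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            if imbalance(pos[i]) < imbalance(pbest[i]):
                pbest[i] = pos[i][:]
                if imbalance(pos[i]) < imbalance(gbest):
                    gbest = pos[i][:]
    return gbest

best = bpsosa()
```

The annealed inertia weight is one plausible reading of "inertia weight adjusted using simulated annealing"; the published algorithm may couple the temperature to an acceptance criterion instead.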

RevDate: 2021-12-10

Bravo-Arrabal J, Toscano-Moreno M, Fernandez-Lozano JJ, et al (2021)

The Internet of Cooperative Agents Architecture (X-IoCA) for Robots, Hybrid Sensor Networks, and MEC Centers in Complex Environments: A Search and Rescue Case Study.

Sensors (Basel, Switzerland), 21(23): pii:s21237843.

Cloud robotics and advanced communications can foster a step-change in cooperative robots and hybrid wireless sensor networks (H-WSN) for demanding environments (e.g., disaster response, mining, demolition, and nuclear sites) by enabling the timely sharing of data and computational resources between robot and human teams. However, the operational complexity of such multi-agent systems requires defining effective architectures, coping with implementation details, and testing in realistic deployments. This article proposes X-IoCA, an Internet of robotic things (IoRT) and communication architecture consisting of a hybrid and heterogeneous network of wireless transceivers (H2WTN), based on LoRa and BLE technologies, and a robot operating system (ROS) network. The IoRT is connected to a feedback information system (FIS) distributed among multi-access edge computing (MEC) centers. Furthermore, we present SAR-IoCA, an implementation of the architecture for search and rescue (SAR) integrated into a 5G network. The FIS for this application consists of an SAR-FIS (including a path planner for UGVs considering risks detected by a LoRa H-WSN) and an ROS-FIS (for real-time monitoring and processing of information published throughout the ROS network). Moreover, we discuss lessons learned from using SAR-IoCA in a realistic exercise where three UGVs, a UAV, and responders collaborated to rescue victims from a tunnel accessible through rough terrain.

RevDate: 2021-12-10

Huang CE, Li YH, Aslam MS, et al (2021)

Super-Resolution Generative Adversarial Network Based on the Dual Dimension Attention Mechanism for Biometric Image Super-Resolution.

Sensors (Basel, Switzerland), 21(23): pii:s21237817.

Many types of intelligent security sensors exist in Internet of Things (IoT) and cloud computing environments; among them, biometric sensors are one of the most important. Biometric sensors capture the physiological or behavioural features of a person, which can be further processed with cloud computing to verify or identify the user. However, a low-resolution (LR) biometric image loses feature details and substantially reduces the recognition rate, and the lack of resolution negatively affects the performance of image-based biometric technology. From a practical perspective, most IoT devices suffer from hardware constraints, and low-cost equipment may not meet various requirements, particularly for image resolution, because high-resolution (HR) images demand additional storage and high transmission bandwidth. Therefore, how to achieve high accuracy in a biometric system without expensive, high-cost image sensors is an interesting and valuable issue in the field of intelligent security sensors. In this paper, we propose DDA-SRGAN, a generative adversarial network (GAN)-based super-resolution (SR) framework using a dual-dimension attention mechanism. The proposed model can be trained to discover the regions of interest (ROI) in LR images automatically, without any given prior knowledge. Experiments were performed on the CASIA-Thousand-v4 and CelebA datasets. The experimental results show that the proposed method is able to learn the details of features in crucial regions and achieves better performance in most cases.

RevDate: 2021-12-10

Erhan L, Di Mauro M, Anjum A, et al (2021)

Embedded Data Imputation for Environmental Intelligent Sensing: A Case Study.

Sensors (Basel, Switzerland), 21(23): pii:s21237774.

Recent developments in cloud computing and the Internet of Things have enabled smart environments, in terms of both monitoring and actuation. Unfortunately, this often results in unsustainable cloud-based solutions, whereby, in the interest of simplicity, a wealth of raw (unprocessed) data are pushed from sensor nodes to the cloud. Herein, we advocate the use of machine learning at sensor nodes to perform essential data-cleaning operations, to avoid the transmission of corrupted (often unusable) data to the cloud. Starting from a public pollution dataset, we investigate how two machine learning techniques (kNN and missForest) may be embedded on Raspberry Pi to perform data imputation, without impacting the data collection process. Our experimental results demonstrate the accuracy and computational efficiency of edge-learning methods for filling in missing data values in corrupted data series. We find that kNN and missForest correctly impute up to 40% of randomly distributed missing values, with a density distribution of values that is indistinguishable from the benchmark. We also show a trade-off analysis for the case of bursty missing values, with recoverable blocks of up to 100 samples. Computation times are shorter than sampling periods, allowing for data imputation at the edge in a timely manner.
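
As an illustration of the kNN imputation idea described above (not the authors' exact pipeline, which also evaluates missForest on a public pollution dataset), here is a self-contained sketch that fills a missing sensor reading from the k most similar complete readings; the column layout and values are invented:

```python
def knn_impute(rows, k=2):
    """Fill None cells with the mean of the k nearest complete rows,
    measuring Euclidean distance over the columns observed in the target row."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(r[:])
            continue
        observed = [j for j, v in enumerate(r) if v is not None]
        nearest = sorted(complete,
                         key=lambda c: sum((r[j] - c[j]) ** 2 for j in observed))[:k]
        filled = [v if v is not None else sum(c[j] for c in nearest) / len(nearest)
                  for j, v in enumerate(r)]
        out.append(filled)
    return out

# Edge-node readings [PM2.5, humidity, temperature]; one corrupted PM2.5 value.
readings = [
    [12.0, 40.0, 21.0],
    [13.0, 42.0, 21.5],
    [None, 41.0, 21.2],
    [30.0, 80.0, 15.0],
]
clean = knn_impute(readings, k=2)   # the None is replaced by (12.0 + 13.0) / 2 = 12.5
```

On a constrained device such as a Raspberry Pi, the key property demonstrated in the paper is that this kind of computation finishes well within a sampling period, so imputation can happen at the edge before upload.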

RevDate: 2021-12-08

O'Leary MA, S Kaufman (2011)

MorphoBank: phylophenomics in the "cloud".

Cladistics : the international journal of the Willi Hennig Society, 27(5):529-537.

A highly interoperable informatics infrastructure rapidly emerged to handle genomic data used for phylogenetics and was instrumental in the growth of molecular systematics. Parallel growth in software and databases to address needs peculiar to phylophenomics has been relatively slow and fragmented. Systematists currently face the challenge that Earth may hold tens of millions of species (living and fossil) to be described and classified. Grappling with research on this scale has increasingly resulted in work by teams, many constructing large phenomic supermatrices. Until now, phylogeneticists have managed data in single-user, file-based desktop software wholly unsuitable for real-time, team-based collaborative work. Furthermore, phenomic data often differ from genomic data in readily lending themselves to media representation (e.g. 2D and 3D images, video, sound). Phenomic data are a growing component of phylogenetics, and thus teams require the ability to record homology hypotheses using media and to share and archive these data. Here we describe MorphoBank, a web application and database leveraging a software-as-a-service methodology compatible with "cloud" computing technology for the construction of matrices of phenomic data. In its tenth year, and fully available to the scientific community at large since inception, MorphoBank enables interactive collaboration not possible with desktop software, permitting self-assembling teams to develop matrices, in real time, with linked media in a secure web environment. MorphoBank also provides any user with tools to build character and media ontologies (rule sets) within matrices, and to display these as directed acyclic graphs. These rule sets record the phylogenetic interrelatedness of characters (e.g. if X is absent, Y is inapplicable, or X-Z characters share a media view).
MorphoBank has enabled an order of magnitude increase in phylophenomic data collection: a recent collaboration by more than 25 researchers has produced a database of > 4500 phenomic characters supported by > 10 000 media. © The Willi Hennig Society 2011.

RevDate: 2021-12-08

Liu S, Jiang L, X Wang (2021)

Intelligent Internet of Things Medical Technology in Implantable Intravenous Infusion Port in Children with Malignant Tumors.

Journal of healthcare engineering, 2021:8936820.

Owing to the recent technological revolution centred on information technology, the Internet of Medical Things (IoMT) has become an important research domain. IoMT combines the Internet of Things (IoT), big data, cloud computing, ubiquitous networking, and three-dimensional holographic technology to build a smart medical diagnosis and treatment system. Such a system should also automate activities such as maintaining the patient's health record and health monitoring, an important issue in the development of modern, smart healthcare systems. In this paper, we thoroughly examine the role of a smart healthcare system architecture and other key supporting technologies in improving the health status of both indoor and outdoor patients. The proposed system can investigate and, where feasible, predict the clinical application and nursing effects of a totally implantable intravenous port (TIVAP) in pediatric hematological tumors. For this purpose, seventy children with hematologic tumors were treated with TIVAP and provided with IoMT-enabled care, and the occurrence of adverse events, specifically after treatment, was observed. The experimental results for the 70 children treated and cared for with TIVAP show five cases of adverse events, an incidence rate of 7.14%. TIVAP has significant efficacy in the treatment of hematologic tumors in children and reduces the vascular injury caused by chemotherapy in younger patients; likewise, targeted care reduces the incidence of adverse events in these children.

RevDate: 2021-12-08

Hardy NP, RA Cahill (2021)

Digital surgery for gastroenterological diseases.

World journal of gastroenterology, 27(42):7240-7246.

Advances in machine learning, computer vision and artificial intelligence methods, in combination with those in processing and cloud computing capability, portend the advent of true decision support during interventions in real time, and soon perhaps of automated surgical steps. Such capability, deployed alongside technology intraoperatively, is termed digital surgery and can be delivered without the need for high-end capital robotic investment. An area close to clinical usefulness right now harnesses advances in near infrared endolaparoscopy and fluorescence guidance for tissue characterisation through the use of biophysics-inspired algorithms. This represents a potential synergistic methodology for the deep learning methods currently advancing in ophthalmology, radiology, and recently gastroenterology via colonoscopy. As databanks of more general surgical videos are created, greater analytic insights can be derived across the operative spectrum of gastroenterological disease and operations (including instrumentation and operative step sequencing and recognition, followed over time by surgeon and instrument performance assessment) and linked to value-based outcomes. However, issues of legality, ethics and even morality need consideration, as do the limiting effects of monopolies, cartels and isolated data silos. Furthermore, the role of the surgeon, surgical societies and healthcare institutions in this evolving field needs active deliberation, as the default risks relegation to the status of bystander or passive recipient. This editorial provides insight into this accelerating field by illuminating the near-future and next-decade evolutionary steps towards widespread clinical integration for patient and societal benefit.

RevDate: 2021-12-01

Karim HMR, Singha SK, Neema PK, et al (2021)

Information technology-based joint preoperative assessment, risk stratification and its impact on patient management, perioperative outcome, and cost.

Discoveries (Craiova, Romania), 9(2):e130 pii:240.

BACKGROUND: Despite negative recommendations, routine preoperative testing is nearly universal practice. Our aim is to bring healthcare providers onto one platform using information-technology-based preanaesthetic assessment and to evaluate the impact of routine preoperative testing on patient outcome and cost.

METHODS: A prospective, non-randomised study was conducted in a teaching hospital from January 2019 to August 2020. Locally developed software and cloud computing were used as tools to modify the preanaesthesia evaluation. The number of investigations ordered, the time taken, and the cost incurred were compared with routine practice. Data were further matched by surgical invasiveness and the patient's physical status. Appropriate tests compared intergroup differences, and p < 0.05 was considered significant.

RESULTS: Data from 114 patients (58 receiving routine and 56 receiving patient- and surgery-specific evaluation) were analysed. Patient- and surgery-specific investigation reduced the number of investigations by 80-90%, hospital visits by 50%, and the total cost by 80%, without increasing day-of-surgery cancellations or complications.

CONCLUSION: Information technology-based joint preoperative assessment and risk stratification are feasible through locally developed software at minimal cost. They help in applying patient- and surgery-specific investigation, reducing the number of tests, hospital visits, and cost, without adversely affecting the perioperative outcome. The application of the modified method will support cost-effective yet high-quality and safe perioperative healthcare delivery, benefiting the public from both service and economic perspectives.

RevDate: 2021-12-01

Yan X, J Wang (2021)

Dynamic monitoring of urban built-up object expansion trajectories in Karachi, Pakistan with time series images and the LandTrendr algorithm.

Scientific reports, 11(1):23118.

In the complex process of urbanization, retrieving dynamic expansion trajectories with an efficient method is challenging, especially for urban regions in arid areas that are not clearly distinguished from their surroundings. In this study, we propose a framework for extracting spatiotemporal change information on urban disturbances. First, the urban built-up object areas in 2000 and 2020 were obtained using an object-oriented segmentation method. Second, we applied the LandTrendr (LT) algorithm with multiple bands/indices to extract annual spatiotemporal information. This process was implemented effectively with the support of a cloud computing platform for Earth-observation big data. The overall accuracy of the time-information extraction, the kappa coefficient, and the average detection error were 83.76%, 0.79, and 0.57 a, respectively. These results show that Karachi expanded continuously during 2000-2020, with an average annual growth rate of 4.7%, but this expansion was not spatiotemporally balanced: the coastal area developed quickly within a shorter duration, whereas the main newly added urban regions are located in the northern and eastern inland areas. This study demonstrates an effective framework for extracting dynamic spatiotemporal change information on urban built-up objects while substantially eliminating the salt-and-pepper effect of pixel-based detection. The methods used in our study can be generalized to the monitoring of other disturbances caused by natural or human activities.

RevDate: 2021-11-30

Ni Z, Chen H, Li Z, et al (2021)

MSCET: A Multi-Scenario Offloading Schedule for Biomedical Data Processing and Analysis in Cloud-Edge-Terminal Collaborative Vehicular Networks.

IEEE/ACM transactions on computational biology and bioinformatics, PP: [Epub ahead of print].

RevDate: 2021-11-29

Improved prediction error expansion and mirroring embedded samples for enhancing reversible audio data hiding.

Heliyon, 7(11):e08381 pii:S2405-8440(21)02484-1.

Many applications process small or big data, including sensitive and confidential data, over computer networks such as cloud computing. However, many systems are public and may not provide sufficient security mechanisms, and once the data are compromised, the security and privacy of users suffer seriously. Security protection is therefore much required, and one way to provide it is by embedding the data (payload) in another form of data (cover), such as audio. However, existing methods do not provide enough space to accommodate the payload, so larger payloads cannot be carried, and the quality of the generated stego data is relatively low, making it noticeably different from the corresponding cover. This research addresses these problems by improving a prediction-error-expansion-based algorithm and designing a mirroring embedded sample scheme, in which a processed audio sample is forced to be as close as possible to the original one. The experimental results show that the proposed method produces higher-quality stego data for a given payload size, achieving more than 100 dB, which is higher than the compared algorithms. Additionally, the proposed method is reversible: both the original payload and the audio cover can be fully reconstructed.
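
The classic prediction-error expansion (PEE) baseline that this paper improves upon can be sketched as follows. This is the textbook scheme (previous-sample predictor, every error expanded, overflow handling omitted), not the authors' improved algorithm or their mirroring scheme:

```python
def pee_embed(samples, bits):
    """Baseline prediction-error expansion: predictor = previous original sample.
    Capacity is exactly len(samples) - 1 bits; overflow control is omitted."""
    assert len(bits) == len(samples) - 1
    stego = [samples[0]]
    prev = samples[0]
    for x, b in zip(samples[1:], bits):
        e = x - prev                     # prediction error
        stego.append(prev + 2 * e + b)   # expand the error, hide one bit in its LSB
        prev = x
    return stego

def pee_extract(stego):
    """Recover both the payload bits and the original samples (fully reversible)."""
    samples = [stego[0]]
    bits = []
    for y in stego[1:]:
        e2 = y - samples[-1]             # expanded error w.r.t. the recovered sample
        b = e2 & 1
        bits.append(b)
        samples.append(samples[-1] + (e2 - b) // 2)
    return samples, bits

audio = [100, 103, 101, 98, 104]         # toy integer audio samples
payload = [1, 0, 1, 1]
stego = pee_embed(audio, payload)
recovered, extracted = pee_extract(stego)
```

Because every error is doubled, large prediction errors distort the cover badly; the paper's contribution is precisely to keep each stego sample as close as possible to the original while remaining reversible.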

RevDate: 2021-11-27

Shah SC (2021)

Design of a Machine Learning-Based Intelligent Middleware Platform for a Heterogeneous Private Edge Cloud System.

Sensors (Basel, Switzerland), 21(22): pii:s21227701.

Recent advances in mobile technologies have facilitated the development of a new class of smart city and fifth-generation (5G) network applications. These applications have diverse requirements, such as low latencies, high data rates, significant amounts of computing and storage resources, and access to sensors and actuators. A heterogeneous private edge cloud system was proposed to address the requirements of these applications. The proposed heterogeneous private edge cloud system is characterized by a complex and dynamic multilayer network and computing infrastructure. Efficient management and utilization of this infrastructure may increase data rates and reduce data latency, data privacy risks, and traffic to the core Internet network. A novel intelligent middleware platform is proposed in the current study to manage and utilize heterogeneous private edge cloud infrastructure efficiently. The proposed platform aims to provide computing, data collection, and data storage services to support emerging resource-intensive and non-resource-intensive smart city and 5G network applications. It aims to leverage regression analysis and reinforcement learning methods to solve the problem of efficiently allocating heterogeneous resources to application tasks. This platform adopts parallel transmission techniques, dynamic interface allocation techniques, and machine learning-based algorithms in a dynamic multilayer network infrastructure to improve network and application performance. Moreover, it uses container and device virtualization technologies to address problems related to heterogeneous hardware and execution environments.

RevDate: 2021-11-27

Fatima M, Nisar MW, Rashid J, et al (2021)

A Novel Fingerprinting Technique for Data Storing and Sharing through Clouds.

Sensors (Basel, Switzerland), 21(22): pii:s21227647.

With the growth of digital data in information systems, technology faces the challenges of knowledge prevention, protection of ownership rights, and the security and privacy of valuable and sensitive data. With the advent of cloud computing, the on-demand availability of various data as services in a shared, automated environment has become a reality. Digital fingerprinting has been adopted as an effective solution to protect the copyright and privacy of digital property from illegal distribution, to identify malicious traitors over the cloud, and to trace the unauthorized distribution and users of multimedia content distributed through the cloud. In this paper, we propose a novel fingerprinting technique for the cloud environment that protects numeric attributes in relational databases for digital privacy management. The proposed scheme is robust and efficient and addresses challenges such as securely embedding data over the cloud, which is essential for securing relational databases. The technique achieves decoding accuracies of 100%, 90%, and 40% when 10-30%, 40%, and 50% of records are deleted, respectively.
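
The abstract does not describe the embedding algorithm itself. The following is a generic sketch of how a fingerprint can be hidden in the least-significant bits of a numeric attribute, in the spirit of classical relational-database fingerprinting; the keyed-hash tuple selection, the gamma parameter, and the majority-vote detection are standard ingredients of such schemes, not the authors' method:

```python
import hashlib

def _select(key, pk, fp_len):
    """Keyed hash decides whether a tuple is marked and which fingerprint bit it carries."""
    h = hashlib.sha256(f"{key}:{pk}".encode()).digest()
    return h[0], h[1] % fp_len

def embed_fingerprint(rows, key, fingerprint, gamma=2):
    """Mark roughly 1/gamma of the tuples: set the LSB of the numeric
    attribute to one pseudo-randomly chosen fingerprint bit."""
    marked = []
    for pk, value in rows:
        sel, idx = _select(key, pk, len(fingerprint))
        if sel % gamma == 0:
            value = (value & ~1) | fingerprint[idx]
        marked.append((pk, value))
    return marked

def detect_fingerprint(rows, key, fp_len, gamma=2):
    """Majority-vote each fingerprint bit from whatever marked tuples survive."""
    votes = [[0, 0] for _ in range(fp_len)]
    for pk, value in rows:
        sel, idx = _select(key, pk, fp_len)
        if sel % gamma == 0:
            votes[idx][value & 1] += 1
    return [0 if v0 >= v1 else 1 for v0, v1 in votes]

rows = [(pk, 1000 + pk) for pk in range(200)]   # (primary key, numeric attribute)
fingerprint = [1, 0, 1, 1]
marked = embed_fingerprint(rows, "secret-key", fingerprint)
recovered_fp = detect_fingerprint(marked, "secret-key", len(fingerprint))
```

Because each fingerprint bit is carried by many tuples, detection degrades gracefully as records are deleted, which matches the accuracy-versus-deletion behaviour the abstract reports.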

RevDate: 2021-11-27

Huang C, Yang Q, W Huang (2021)

Analysis of the Spatial and Temporal Changes of NDVI and Its Driving Factors in the Wei and Jing River Basins.

International journal of environmental research and public health, 18(22): pii:ijerph182211863.

This study aimed to explore long-term vegetation cover change and its driving factors in a typical watershed of the Yellow River Basin. The research was based on Google Earth Engine (GEE), a remote sensing cloud platform, and used Landsat surface reflectance datasets and the Pearson correlation method to analyze vegetation conditions in the areas above Xianyang on the Wei River and above Zhangjiashan on the Jing River. Random forest and decision tree models were used to analyze the effects of various climatic factors (precipitation, temperature, soil moisture, evapotranspiration, and drought index) on the NDVI (normalized difference vegetation index), and the effects of human activities on the NDVI were then explored with the residual analysis method. The results showed that: (1) From 1987 to 2018, the NDVI of the two watersheds showed an increasing trend; in particular, after 2008, the average growth rate of the growing-season (April to September) NDVI increased from 0.0032/a and 0.003/a in the baseline period (1987-2008) to 0.0172/a and 0.01/a in the measurement period (2008-2018) for the Wei and Jing basins, respectively, and the proportion of area with significantly increased NDVI rose from 21.78% and 31.32% to 83.76% and 92.40%, respectively. (2) The random forest and the classification and regression tree (CART) models can assess the contribution and sensitivity of each climate factor to the NDVI. Precipitation, soil moisture, and temperature were found to be the three main factors affecting the NDVI of the study area, with contributions of 37.05%, 26.42%, and 15.72%, respectively. Changes in precipitation and soil moisture caused significant NDVI changes in the entire Jing River Basin and in the upper and middle reaches of the Wei River above Xianyang, whereas changes in precipitation and temperature led to significant NDVI changes in the lower reaches of the Wei River.
(3) The impact of human activities on the NDVI in the Wei and Jing basins has gradually changed from negative to positive, mainly because of the implementation of soil and water conservation measures. The proportions of area with positive effects of human activities were 80.88% and 81.95%, of which 11.63% and 7.76% showed significantly positive effects, respectively. These areas are mainly distributed in the upper reaches of the Wei River and the western and eastern regions of the Jing River, the key areas where soil and water conservation measures have been implemented in recent years and where land use has been transformed from cultivated land to forest and grassland. Negative effects accounted for 1.66% and 0.10% of the area, respectively, and were mainly caused by urban expansion and coal mining.
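
The residual-analysis step used above can be illustrated with a toy example: fit NDVI against a climate driver, then look for a temporal trend in the residuals and attribute it to human activity. All numbers below are synthetic; a real analysis would use multiple climate factors, per-pixel fits, and significance testing:

```python
def ols_fit(xs, ys):
    """Ordinary least squares y = a*x + b for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Synthetic example: NDVI driven by precipitation plus a steady human-induced
# gain of 0.01/a (all values invented for illustration).
years = list(range(2009, 2019))
precip = [300, 520, 410, 610, 350, 480, 560, 440, 390, 530]
ndvi = [0.0005 * p + 0.01 * (y - 2009) + 0.2 for p, y in zip(precip, years)]

a, b = ols_fit(precip, ndvi)                      # climate-only model
resid = [v - (a * p + b) for v, p in zip(ndvi, precip)]
human_trend, _ = ols_fit(years, resid)            # residual trend ~ human effect
```

Because climate drivers and time are usually somewhat correlated, the climate regression absorbs part of the temporal signal, so the recovered residual trend slightly underestimates the injected human-induced trend.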

RevDate: 2021-11-27

Bhatia S, J Malhotra (2021)

Morton Filter-Based Security Mechanism for Healthcare System in Cloud Computing.

Healthcare (Basel, Switzerland), 9(11): pii:healthcare9111551.

Electronic health records contain patients' sensitive information; if these data are acquired by a malicious user, this will not only cause the theft of the patient's personal data but also affect diagnosis and treatment. One of the most challenging tasks in cloud-based healthcare systems is therefore to provide security and privacy for electronic health records. Various probabilistic data structures and watermarking techniques have been used in cloud-based healthcare systems to secure patients' data, but most existing studies focus on cuckoo and bloom filters without considering their throughputs. In this research, a novel cloud security mechanism is introduced that supersedes the shortcomings of existing approaches. The proposed solution enhances security with methods such as a fragile watermark, least-significant-bit replacement watermarking, a class reliability factor, and Morton filters. A Morton filter is an approximate set membership data structure (ASMDS) that provides many improvements over other data structures, such as cuckoo, bloom, semi-sorting cuckoo, and rank-and-select quotient filters. The Morton filter improves security; it supports insertion, deletion, and lookup operations and improves their respective throughputs by 0.9× to 15.5×, 1.3× to 1.6×, and 1.3× to 2.5× compared to cuckoo filters. We used Hadoop version 0.20.3 on Red Hat Enterprise Linux 6, executed five experiments, and averaged the results. The simulation results show that the proposed security mechanism, presented and implemented as an algorithm, provides an effective solution for secure data storage in cloud-based healthcare systems, with a load factor of 0.9.
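
For readers unfamiliar with approximate set membership data structures, a minimal Bloom filter conveys the core idea; note that a Morton filter is a more sophisticated ASMDS (a compressed cuckoo-style structure supporting deletions), which this sketch does not implement, and the record identifiers are invented:

```python
import hashlib

class BloomFilter:
    """Minimal approximate set-membership structure: no false negatives,
    a tunable false-positive rate, and (unlike a Morton filter) no deletions."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # k positions derived from a cryptographic hash of (index, item)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
for record_id in ("ehr:1001", "ehr:2002", "ehr:3003"):
    bf.add(record_id)
```

In a cloud healthcare store, such a filter lets a node cheaply reject queries for records it definitely does not hold before touching storage; the throughput figures in the abstract compare exactly these insert/delete/lookup operations across filter variants.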

RevDate: 2021-11-26

Wilson PH, Rogers JM, Vogel K, et al (2021)

Home-based (virtual) rehabilitation improves motor and cognitive function for stroke patients: a randomized controlled trial of the Elements (EDNA-22) system.

Journal of neuroengineering and rehabilitation, 18(1):165.

BACKGROUND: Home-based rehabilitation of arm function is a significant gap in service provision for adult stroke. The EDNA-22 tablet is a portable virtual rehabilitation-based system that provides a viable option for home-based rehabilitation using a suite of tailored movement tasks, and performance monitoring via cloud computing data storage. The study reported here aimed to compare use of the EDNA system with an active control (Graded Repetitive Arm Supplementary Program-GRASP training) group using a parallel RCT design.

METHODS: Of 19 patients originally randomized, 17 acute-care patients with upper-extremity dysfunction following unilateral stroke completed training in either the treatment (n = 10) or active control (n = 7) group, each receiving 8 weeks of in-home training involving 30-min sessions scheduled 3-4 times weekly. Performance was assessed across motor function, cognition, and functional behaviour in the home. Primary motor measures, collected by a blinded assessor, were the Box and Blocks Task (BBT) and 9-Hole Pegboard Test (9HPT); for cognition, the Montreal Cognitive Assessment (MoCA) was used. Functional behaviour was assessed using the Stroke Impact Scale (SIS) and Neurobehavioural Functioning Inventory (NFI).

RESULTS: One participant from each group withdrew for personal reasons. No adverse events were reported. Results showed a significant and large improvement in performance on the BBT for the more-affected hand in the EDNA training group only (g = 0.90). There was a mild-to-moderate effect of training on the 9HPT for both the EDNA (g = 0.55) and control (g = 0.42) groups, again for the more-affected hand. In relation to cognition, performance on the MoCA improved for the EDNA group (g = 0.70). Finally, the EDNA group showed moderate (but non-significant) improvement in functional behaviour on the SIS (g = 0.57) and NFI (g = 0.49).

CONCLUSION: A short course of home-based training using the EDNA-22 system can yield significant gains in motor and cognitive performance, over and above active control training that also targets upper-limb function. Intriguingly, these changes in performance were corroborated only tentatively in the reports of caregivers. We suggest that future research consider how the implementation of home-based rehabilitation technology can be optimized. We contend that self-administered, digitally enhanced training needs to become part of the health literacy of all stakeholders who are affected by stroke and other acquired brain injuries. Trial registration: Australian New Zealand Clinical Trials Registry (ANZCTR) number ACTRN12619001557123, registered 12 November 2019, http://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=378298&isReview=true.
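The g values quoted in the RESULTS are Hedges' g effect sizes. As a worked illustration of the statistic (not code from the study), the sketch below computes Cohen's d from the pooled standard deviation and applies the usual small-sample correction, J ≈ 1 − 3/(4(n₁ + n₂) − 9); the function name and example data are illustrative.

```python
import math

def hedges_g(group_a, group_b):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = sum(group_a) / n1, sum(group_b) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction J
    return d * correction
```

By the conventional benchmarks (roughly 0.2 small, 0.5 moderate, 0.8 large), the BBT effect of g = 0.90 reported above is a large effect.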

RevDate: 2021-11-24

Suvakov M, Panda A, Diesh C, et al (2021)

CNVpytor: a tool for copy number variation detection and analysis from read depth and allele imbalance in whole-genome sequencing.

GigaScience, 10(11):.

BACKGROUND: Detecting copy number variations (CNVs) and copy number alterations (CNAs) based on whole-genome sequencing data is important for personalized genomics and treatment. CNVnator is one of the most popular tools for CNV/CNA discovery and analysis based on read depth.

FINDINGS: Herein, we present CNVpytor, an extension of CNVnator developed in Python. CNVpytor inherits the reimplemented core engine of its predecessor and extends its visualization, modularization, performance, and functionality. Additionally, CNVpytor uses B-allele frequency likelihood information from single-nucleotide polymorphism and small indel data as additional evidence for CNVs/CNAs and as primary information for copy-number-neutral losses of heterozygosity.

CONCLUSIONS: CNVpytor is significantly faster than CNVnator, particularly for parsing alignment files (2-20 times faster), and produces 20-50 times smaller intermediate files. CNV calls can be filtered using several criteria, annotated, and merged over multiple samples. Its modular architecture allows it to be used in shared and cloud environments such as Google Colab and Jupyter notebooks. Data can be exported into JBrowse, and a lightweight plugin version of CNVpytor for JBrowse enables nearly instant, GUI-assisted analysis of CNVs by any user. CNVpytor releases and the source code are available on GitHub at https://github.com/abyzovlab/CNVpytor under the MIT license.
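CNVpytor's actual pipeline (mean-shift partitioning of the read-depth signal, statistical genotyping, B-allele frequency evidence) is far more sophisticated; purely to illustrate the read-depth principle that underlies it, the toy Python sketch below flags genome bins whose normalized depth departs from the genome-wide mean. The function name and thresholds are invented for illustration.

```python
def call_cnv_segments(depths, del_cut=0.75, dup_cut=1.25):
    """Toy read-depth CNV caller (illustration only).

    Normalizes per-bin read depth by the genome-wide mean, then merges
    consecutive bins whose normalized depth suggests a deletion
    (< del_cut) or a duplication (> dup_cut) into segments.
    """
    mean_depth = sum(depths) / len(depths)
    states = []
    for d in depths:
        ratio = d / mean_depth
        if ratio < del_cut:
            states.append("deletion")
        elif ratio > dup_cut:
            states.append("duplication")
        else:
            states.append("normal")
    # Merge runs of identical states into (state, start_bin, end_bin) calls.
    segments = []
    start = 0
    for i in range(1, len(states) + 1):
        if i == len(states) or states[i] != states[start]:
            if states[start] != "normal":
                segments.append((states[start], start, i - 1))
            start = i
    return segments
```

A real caller must additionally correct for GC bias and mappability and assign statistical confidence to each segment, which is exactly what tools like CNVpytor automate.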

RevDate: 2021-11-24

Shamshirband S, Joloudari JH, Shirkharkolaie SK, et al (2021)

Game theory and evolutionary optimization approaches applied to resource allocation problems in computing environments: A survey.

Mathematical biosciences and engineering : MBE, 18(6):9190-9232.

Today's intelligent computing environments, including the Internet of Things (IoT), Cloud Computing (CC), Fog Computing (FC), and Edge Computing (EC), allow many organizations worldwide to optimize their resource allocation with respect to quality of service and energy consumption. Because user demand for resources is intense and the data are real-time in nature, no comprehensive, integrated computing environment has yet provided a robust and reliable capability for proper resource allocation. Traditional resource allocation approaches are efficient for small-scale resource providers with low-capacity hardware, but in a complex system with dynamic computing resources and fierce competition for them, such approaches cannot adapt or manage conditions optimally. To optimize resource allocation with minimal delay, low energy consumption, minimal computational complexity, high scalability, and better resource utilization efficiency, CC/FC/EC/IoT-based computing architectures should be designed intelligently. The objective of this research is therefore a comprehensive survey of resource allocation problems addressed using computational intelligence-based evolutionary optimization and mathematical game theory approaches in different computing environments, according to the latest scientific research achievements.

RevDate: 2021-11-24

Liu Y, Huang W, Wang L, et al (2021)

Dynamic computation offloading algorithm based on particle swarm optimization with a mutation operator in multi-access edge computing.

Mathematical biosciences and engineering : MBE, 18(6):9163-9189.

RevDate: 2021-11-24

Al-Zumia FA, Tian Y, M Al-Rodhaan (2021)

A novel fault-tolerant privacy-preserving cloud-based data aggregation scheme for lightweight health data.

Mathematical biosciences and engineering : MBE, 18(6):7539-7560.

Mobile health networks (MHNWs) have facilitated instant medical care and remote health monitoring for patients. Currently, a vast amount of health data needs to be quickly collected, processed and analyzed. The main barrier to doing so is the limited computational and storage resources available to MHNWs; therefore, health data must be outsourced to the cloud. Although the cloud offers powerful computation capabilities and intensive storage resources, security and privacy concerns exist. Our study therefore examines how to collect and aggregate these health data securely and efficiently, with a focus on the theoretical importance and application potential of the aggregated data. In this work, we propose a novel design for a private, fault-tolerant, cloud-based data aggregation scheme. Our design is based on a future ciphertext mechanism for improving the fault tolerance capabilities of MHNWs. Our scheme is privatized via differential privacy, which is achieved by encrypting noisy health data and enabling the cloud to obtain only the noisy sum. Our scheme is efficient, reliable and secure and combines different approaches and algorithms to improve the security and efficiency of the system. Our proposed scheme is evaluated with an extensive simulation study, and the simulation results show that it is efficient and reliable. The computational cost of our scheme is significantly less than that of the related scheme, and the aggregation error is reduced from O(√(w + 1)) in the related scheme to O(1) in ours.
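The paper's own construction couples encryption with differential privacy; just the noisy-sum idea, where each reading is perturbed with Laplace noise before aggregation so the aggregator learns only a differentially private total, can be sketched as follows. Function names and parameters are illustrative, not taken from the scheme.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a zero-mean Laplace(scale) variate.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_sum(readings, epsilon, sensitivity, seed=None):
    """Differentially private aggregation sketch: each health reading is
    perturbed before summation, so the aggregator sees only the noisy
    total, never the raw values."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon  # Laplace mechanism noise scale
    return sum(r + laplace_noise(scale, rng) for r in readings)
```

Smaller epsilon means stronger privacy but larger aggregation error; the scheme above additionally encrypts the perturbed values so the cloud cannot see even the individual noisy readings.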

RevDate: 2021-11-23

Huang L, Tian S, Zhao W, et al (2021)

5G-Enabled intelligent construction of a chest pain center with up-conversion lateral flow immunoassay.

The Analyst [Epub ahead of print].

Acute myocardial infarction (AMI) has become a worldwide health problem because of its rapid onset and high mortality. Cardiac troponin I (cTnI) is the gold standard for diagnosis of AMI, and its rapid and accurate detection is critical for early diagnosis and management of AMI. Using a lateral flow immunoassay with up-converting nanoparticles as fluorescent probes, we developed an up-conversion fluorescence reader capable of rapidly quantifying the cTnI concentration in serum based upon the fluorescence intensity of the test and control lines on the test strip. Reliable detection of cTnI in the range 0.1-50 ng mL⁻¹ could be achieved in 15 min, with a lower detection limit of 0.1 ng mL⁻¹. The reader was also adapted for use in a fifth-generation (5G) mobile-network-enabled intelligent chest pain center. Through Bluetooth wireless communication, results obtained using the reader on an ambulance heading to a central hospital could be transmitted to a 5G smartphone and uploaded for real-time edge computing and cloud storage. An application on the 5G smartphone allows users to upload their medical information to establish dedicated electronic health records and allows doctors to monitor patients' health status and provide remote medical services. Combined with the mobile internet and big data, the 5G-enabled intelligent chest pain center with up-conversion lateral flow immunoassay may predict the onset of AMI and save valuable time for patients suffering an AMI.
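Quantitative lateral flow readers typically map the measured line intensity (or a test/control intensity ratio) to concentration through a calibration curve. A minimal least-squares sketch of that step is shown below, assuming a linear response over the working range; this is an illustrative assumption, and the names are invented (real strips are often fit with four-parameter logistic curves instead).

```python
def fit_calibration(concentrations, ratios):
    """Least-squares fit of the calibration line: ratio = a * conc + b."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(ratios) / n
    sxx = sum((x - mx) ** 2 for x in concentrations)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, ratios))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

def concentration_from_ratio(ratio, a, b):
    """Invert the calibration line to estimate concentration (ng/mL)."""
    return (ratio - b) / a
```

A production reader would calibrate against certified cTnI standards spanning the 0.1-50 ng mL⁻¹ working range and reject readings outside it.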
