QUERY RUN: 14 Nov 2022 at 01:59
HITS: 2758

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 14 Nov 2022 at 01:59.

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at certain periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion
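
For readers who want to rerun or extend this query programmatically, a minimal sketch using Biopython's Entrez module follows (any E-utilities client would work; the email address and retmax value are placeholders you must set yourself):

```python
# Illustrative sketch: reproducing this bibliography's PubMed query with
# Biopython's Entrez module. NCBI requires a contact email for API use.
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder, required by NCBI

query = ('cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) '
         'NOT pmcbook NOT ispreviousversion')

# esearch returns matching PMIDs; retmax caps how many IDs come back.
handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(record["Count"])       # total hits, e.g. 2758 at this query run
print(record["IdList"][:5])  # first few PMIDs
```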

Citations: The Papers (from PubMed®)


RevDate: 2022-11-09

Zhang X, Han L, Sobeih T, et al (2022)

CXR-Net: A Multitask Deep Learning Network for Explainable and Accurate Diagnosis of COVID-19 Pneumonia from Chest X-ray Images.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

Accurate and rapid detection of COVID-19 pneumonia is crucial for optimal patient treatment. Chest X-ray (CXR) is the first-line imaging technique for COVID-19 pneumonia diagnosis, as it is fast, cheap, and easily accessible. Many deep learning (DL) models have been proposed to detect COVID-19 pneumonia from CXR images. Unfortunately, these deep classifiers lack transparency in interpreting their findings, which may limit their application in clinical practice. Existing explanation methods produce results that are either too noisy or too imprecise for diagnostic purposes. In this work, we propose a novel explainable CXR deep neural network (CXR-Net) for accurate COVID-19 pneumonia detection with an enhanced pixel-level visual explanation from CXR images. An Encoder-Decoder-Encoder architecture is proposed, in which an extra encoder is added after the encoder-decoder structure to ensure the model can be trained on category samples. The method has been evaluated on real-world CXR datasets from both public and private sources, including healthy, bacterial pneumonia, viral pneumonia, and COVID-19 pneumonia cases. The results demonstrate that the proposed method achieves satisfactory accuracy and provides fine-resolution activation maps for visual explanation in lung disease detection. The Average Accuracy, Sensitivity, Specificity, PPV and F1-score of the models in COVID-19 pneumonia detection reach 0.992, 0.998, 0.985 and 0.989, respectively. Compared to current state-of-the-art visual explanation methods, the proposed method provides more detailed, high-resolution visual explanations for the classification results. It can be deployed in various computing environments, including cloud, CPU and GPU environments, and has great potential to be used in clinical practice for COVID-19 pneumonia diagnosis.

RevDate: 2022-11-06

Tomassini S, Sbrollini A, Covella G, et al (2022)

Brain-on-Cloud for automatic diagnosis of Alzheimer's disease from 3D structural magnetic resonance whole-brain scans.

Computer methods and programs in biomedicine, 227:107191 pii:S0169-2607(22)00572-7 [Epub ahead of print].

BACKGROUND AND OBJECTIVE: Alzheimer's disease accounts for approximately 70% of all dementia cases. Cortical and hippocampal atrophy caused by Alzheimer's disease can be appreciated easily from a T1-weighted structural magnetic resonance scan. Since a timely therapeutic intervention during the initial stages of the syndrome has a positive impact on both disease progression and quality of life of affected subjects, Alzheimer's disease diagnosis is crucial. Thus, this study relies on the development of a robust yet lightweight 3D framework, Brain-on-Cloud, dedicated to efficient learning of Alzheimer's disease-related features from 3D structural magnetic resonance whole-brain scans by improving our recent convolutional long short-term memory-based framework with the integration of a set of data handling techniques in addition to the tuning of the model hyper-parameters and the evaluation of its diagnostic performance on independent test data.

METHODS: For this objective, four serial experiments were conducted on a scalable GPU cloud service. They were compared and the hyper-parameters of the best experiment were tuned until reaching the best-performing configuration. In parallel, two branches were designed. In the first branch of Brain-on-Cloud, training, validation and testing were performed on OASIS-3. In the second branch, unenhanced data from ADNI-2 were employed as independent test set, and the diagnostic performance of Brain-on-Cloud was evaluated to prove its robustness and generalization capability. The prediction scores were computed for each subject and stratified according to age, sex and mini mental state examination.

RESULTS: In its best guise, Brain-on-Cloud is able to discriminate Alzheimer's disease with an accuracy of 92% and 76%, sensitivity of 94% and 82%, and area under the curve of 96% and 92% on OASIS-3 and independent ADNI-2 test data, respectively.

CONCLUSIONS: Brain-on-Cloud proves to be a reliable, lightweight, and easily reproducible framework for automatic diagnosis of Alzheimer's disease from 3D structural magnetic resonance whole-brain scans, performing well without segmenting the brain into its portions. Since it preserves the brain anatomy, its application and diagnostic ability can be extended to other cognitive disorders. Due to its cloud nature, computational lightness, and fast execution, it can also be applied in real-time diagnostic scenarios, providing prompt clinical decision support.
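
As a rough illustration of the convolutional long short-term memory family of models this entry builds on (a toy sketch, not Brain-on-Cloud itself), a Keras classifier can scan an MRI volume slice by slice; every shape and the random input below are invented:

```python
# Toy ConvLSTM classifier: a 3D volume is fed slice-by-slice to a
# ConvLSTM2D layer, then pooled into a single diagnosis probability.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_SLICES, H, W = 32, 64, 64        # toy volume: 32 slices of 64x64

model = models.Sequential([
    layers.Input(shape=(N_SLICES, H, W, 1)),
    layers.ConvLSTM2D(16, kernel_size=3, padding="same"),  # scans slices
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),   # P(Alzheimer's disease)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

vol = np.random.rand(2, N_SLICES, H, W, 1).astype("float32")  # fake scans
print(model.predict(vol, verbose=0))
```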

RevDate: 2022-11-02

Golkar A, Malekhosseini R, RahimiZadeh K, et al (2022)

A priority queue-based telemonitoring system for automatic diagnosis of heart diseases in integrated fog computing environments.

Health informatics journal, 28(4):14604582221137453.

Various studies have shown the benefits of using distributed fog computing for healthcare systems. The new pattern of fog and edge computing reduces latency for data processing compared to cloud computing. Nevertheless, the proposed fog models still have many limitations in improving system performance and patients' response time. This paper proposes a new performance model by integrating fog computing, priority queues, and certainty theory into edge computing devices and validating it by analyzing heart disease patients' conditions in clinical decision support systems (CDSS). In this model, a Certainty Factor (CF) value is assigned to each symptom of heart disease. When one or more symptoms show an abnormal value, the patient's condition is evaluated using CF values in the fog layer. In the fog layer, requests are categorized into different priority queues before arriving in the system. The results demonstrate that network usage, latency, and response time of patients' requests are improved by 25.55%, 42.92%, and 34.28%, respectively, compared to the cloud model. Prioritizing patient requests with respect to CF values in the CDSS yields higher system Quality of Service (QoS) and shorter patients' response time.
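
A minimal sketch of the queueing idea described above, assuming a MYCIN-style certainty-factor combination rule and invented CF values (the paper's actual fog-layer implementation is not reproduced here):

```python
# Requests tagged with a fused certainty-factor (CF) score are served from
# a priority queue in the fog layer, most-critical first.
import heapq
import itertools

def combine_cf(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

_counter = itertools.count()   # tie-breaker so heapq never compares payloads

class FogTriageQueue:
    def __init__(self):
        self._heap = []

    def submit(self, patient_id, symptom_cfs):
        cf = 0.0
        for c in symptom_cfs:            # fuse evidence from all symptoms
            cf = combine_cf(cf, c)
        # heapq is a min-heap, so push negative CF to pop highest CF first
        heapq.heappush(self._heap, (-cf, next(_counter), patient_id))

    def next_request(self):
        neg_cf, _, patient_id = heapq.heappop(self._heap)
        return patient_id, -neg_cf

q = FogTriageQueue()
q.submit("patient-A", [0.2, 0.3])        # mild symptoms
q.submit("patient-B", [0.7, 0.6, 0.4])   # likely cardiac event
print(q.next_request())                  # patient-B is served first
```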

RevDate: 2022-11-01

Ament SA, Adkins RS, Carter R, et al (2022)

The Neuroscience Multi-Omic Archive: a BRAIN Initiative resource for single-cell transcriptomic and epigenomic data from the mammalian brain.

Nucleic acids research pii:6786191 [Epub ahead of print].

Scalable technologies to sequence the transcriptomes and epigenomes of single cells are transforming our understanding of cell types and cell states. The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative Cell Census Network (BICCN) is applying these technologies at unprecedented scale to map the cell types in the mammalian brain. In an effort to increase data FAIRness (Findable, Accessible, Interoperable, Reusable), the NIH has established repositories to make data generated by the BICCN and related BRAIN Initiative projects accessible to the broader research community. Here, we describe the Neuroscience Multi-Omic Archive (NeMO Archive; nemoarchive.org), which serves as the primary repository for genomics data from the BRAIN Initiative. Working closely with other BRAIN Initiative researchers, we have organized these data into a continually expanding, curated repository, which contains transcriptomic and epigenomic data from over 50 million brain cells, including single-cell genomic data from all of the major regions of the adult and prenatal human and mouse brains, as well as substantial single-cell genomic data from non-human primates. We make available several tools for accessing these data, including a searchable web portal, a cloud-computing interface for large-scale data processing (implemented on Terra, terra.bio), and a visualization and analysis platform, NeMO Analytics (nemoanalytics.org).

RevDate: 2022-11-01

Prakash AJ, Kumar S, Behera MD, et al (2022)

Impact of extreme weather events on cropland inundation over Indian subcontinent.

Environmental monitoring and assessment, 195(1):50.

Cyclonic storms and extreme precipitation lead to loss of lives and significant damage to land, property, crop productivity, etc. The "Gulab" cyclonic storm formed on 24 September 2021 in the Bay of Bengal (BoB), hit the eastern Indian coast on 26 September, and caused massive damage and water inundation. This study used Integrated Multi-satellite Retrievals for GPM (IMERG) satellite precipitation data for daily- to monthly-scale assessments focusing on the "Gulab" cyclonic event. Otsu's thresholding approach was applied to Sentinel-1 data to map water inundation. The Standardized Precipitation Index (SPI) was employed to analyze the precipitation deviation compared to the 20-year mean climatology across India from June to November 2021 on a monthly scale. The water-inundated areas were overlaid on a recent publicly available high-resolution land use land cover (LULC) map to demarcate crop area damage in four eastern Indian states: Andhra Pradesh, Chhattisgarh, Odisha, and Telangana. The maximum water inundation and crop area damage were observed in Andhra Pradesh (~2700 km2), followed by Telangana (~2040 km2) and Odisha (~1132 km2), with the least in Chhattisgarh (~93.75 km2). This study has potential implications for emergency response to extreme weather events such as cyclones, extreme precipitation, and floods. The spatio-temporal data layers and rapid assessment methodology can be helpful to various users such as disaster management authorities, mitigation and response teams, and crop insurance scheme developers. The relevant satellite data, products, and cloud-computing facilities could operationalize systematic disaster monitoring under the rising threat of extreme weather events in the coming years.
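
The inundation-mapping step lends itself to a short illustration: Otsu's method picks the backscatter threshold that best separates dark water pixels from land in Sentinel-1 imagery. The sketch below uses scikit-image on synthetic backscatter values, not real SAR data:

```python
# Hedged sketch of Otsu-based water mapping on fake Sentinel-1-like data.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
# Fake backscatter (dB): water pixels ~ -20 dB, land pixels ~ -8 dB
water = rng.normal(-20, 1.5, size=5000)
land = rng.normal(-8, 2.0, size=15000)
backscatter = np.concatenate([water, land])

t = threshold_otsu(backscatter)
inundated = backscatter < t          # darker than threshold -> water

print(f"Otsu threshold: {t:.1f} dB")
print(f"Inundated fraction: {inundated.mean():.2%}")
```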

RevDate: 2022-10-31

Khosla A, Sonu , Awan HTA, et al (2022)

Emergence of MXene and MXene-Polymer Hybrid Membranes as Future Environmental Remediation Strategies.

Advanced science (Weinheim, Baden-Wurttemberg, Germany) [Epub ahead of print].

The continuous deterioration of the environment due to extensive industrialization and urbanization has raised the requirement to devise high-performance environmental remediation technologies. Membrane technologies, primarily based on conventional polymers, are the most commercialized air, water, solid, and radiation-based environmental remediation strategies. Low stability at high temperatures, swelling in organic contaminants, and poor selectivity are the fundamental issues associated with polymeric membranes, restricting their scalable viability. Polymer-metal-carbide and -nitride (MXene) hybrid membranes possess remarkable physicochemical attributes, including strong mechanical endurance, high mechanical flexibility, superior adsorptive behavior, and selective permeability, due to multi-interactions between polymers and MXene's surface functionalities. This review articulates the state of the art in MXene-polymer hybrid membranes, emphasizing their fabrication routes, enhanced physicochemical properties, and improved adsorptive behavior. It comprehensively summarizes the utilization of MXene-polymer hybrid membranes for environmental remediation applications, including water purification, desalination, ion separation, gas separation and detection, contaminant adsorption, and electromagnetic and nuclear radiation shielding. Furthermore, the review highlights the remaining bottlenecks of MXene-polymer hybrid membranes and possible alternative solutions to meet industrial requirements. Finally, opportunities and prospects for MXene-polymer membranes to enable intelligent, next-generation environmental remediation strategies, integrating modern technologies such as the Internet of Things, artificial intelligence, machine learning, 5G communication, and cloud computing, are elucidated.

RevDate: 2022-10-29

Raveendran K, Freese NH, Kintali C, et al (2022)

BioViz Connect: Web Application Linking CyVerse Cloud Resources to Genomic Visualization in the Integrated Genome Browser.

Frontiers in bioinformatics, 2:764619.

Genomics researchers do better work when they can interactively explore and visualize data. Due to the vast size of experimental datasets, researchers are increasingly using powerful, cloud-based systems to process and analyze data. These remote systems, called science gateways, offer user-friendly, Web-based access to high performance computing and storage resources, but typically lack interactive visualization capability. In this paper, we present BioViz Connect, a middleware Web application that links CyVerse science gateway resources to the Integrated Genome Browser (IGB), a highly interactive native application implemented in Java that runs on the user's personal computer. Using BioViz Connect, users can 1) stream data from the CyVerse data store into IGB for visualization, 2) improve the IGB user experience for themselves and others by adding IGB specific metadata to CyVerse data files, including genome version and track appearance, and 3) run compute-intensive visual analytics functions on CyVerse infrastructure to create new datasets for visualization in IGB or other applications. To demonstrate how BioViz Connect facilitates interactive data visualization, we describe an example RNA-Seq data analysis investigating how heat and desiccation stresses affect gene expression in the model plant Arabidopsis thaliana. The RNA-Seq use case illustrates how interactive visualization with IGB can help a user identify problematic experimental samples, sanity-check results using a positive control, and create new data files for interactive visualization in IGB (or other tools) using a Docker image deployed to CyVerse via the Terrain API. Lastly, we discuss limitations of the technologies used and suggest opportunities for future work. BioViz Connect is available from https://bioviz.org.

RevDate: 2022-10-29

Guérinot C, Marcon V, Godard C, et al (2021)

New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing.

Frontiers in bioinformatics, 1:777101.

Three-dimensional imaging is at the core of medical imaging and is becoming a standard in biological research. As a result, there is an increasing need to visualize, analyze, and interact with data in a natural three-dimensional context. By combining stereoscopy and motion tracking, commercial virtual reality (VR) headsets provide a solution to this critical visualization challenge by allowing users to view volumetric image stacks in a highly intuitive fashion. While optimizing the visualization and interaction process in VR remains an active topic, one of the most pressing issues is how to utilize VR for the annotation and analysis of data. Annotating data is often a required step for training machine learning algorithms, and the ability to annotate complex three-dimensional data is particularly valuable in biological research, where newly acquired data may come in limited quantities. Similarly, medical data annotation is often time-consuming and requires expert knowledge to identify structures of interest correctly. Moreover, simultaneous data analysis and visualization in VR is computationally demanding. Here, we introduce a new procedure to visualize, interact with, annotate, and analyze data by combining VR with cloud computing. VR is leveraged to provide natural interactions with volumetric representations of experimental imaging data. In parallel, cloud computing performs costly computations to accelerate the data annotation with minimal input required from the user. We demonstrate multiple proof-of-concept applications of our approach on volumetric fluorescent microscopy images of mouse neurons and tumor or organ annotations in medical images.

RevDate: 2022-10-27

Reani Y, O Bobrowski (2022)

Cycle Registration in Persistent Homology with Applications in Topological Bootstrap.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

We propose a novel approach for comparing the persistent homology representations of two spaces (or filtrations). Commonly used methods are based on numerical summaries such as persistence diagrams and persistence landscapes, along with suitable metrics (e.g. Wasserstein). These summaries are useful for computational purposes, but they are merely a marginal of the actual topological information that persistent homology can provide. Instead, our approach compares between two topological representations directly in the data space. We do so by defining a correspondence relation between individual persistent cycles of two different spaces, and devising a method for computing this correspondence. Our matching of cycles is based on both the persistence intervals and the spatial placement of each feature. We demonstrate our new framework in the context of topological inference, where we use statistical bootstrap methods in order to differentiate between real features and noise in point cloud data.
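
A hedged sketch of the raw ingredients behind this kind of topological bootstrap: computing persistence diagrams for a point cloud and for one bootstrap resample of it using the gudhi library. The paper's actual contribution, matching individual cycles between diagrams, is not implemented here:

```python
# Persistence diagrams for a noisy circle and a bootstrap resample of it.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 60)
cloud = np.column_stack([np.cos(theta), np.sin(theta)])  # noisy circle
cloud += rng.normal(scale=0.05, size=cloud.shape)

def diagram(points):
    rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
    st = rips.create_simplex_tree(max_dimension=2)
    return st.persistence()          # list of (dim, (birth, death))

boot = cloud[rng.integers(0, len(cloud), len(cloud))]    # bootstrap resample
# The prominent 1-dimensional feature (the circle) should persist in both.
print([p for p in diagram(cloud) if p[0] == 1][:3])
print([p for p in diagram(boot) if p[0] == 1][:3])
```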

RevDate: 2022-10-27

Li X, K You (2022)

Real-time tracking and detection of patient conditions in the intelligent m-Health monitoring system.

Frontiers in public health, 10:922718.

In order to help patients monitor their personal health in real time, this paper proposes an intelligent mobile health monitoring system (Im-HMS) and establishes a corresponding health network to track and process patients' physical activity and other health-related factors in real time, and its performance was analyzed. The experimental results, comparing the accuracy, delay time, error range, efficiency, and energy utilization of Im-HMS and the existing UCD system, show that the accuracy of Im-HMS is mostly between 98 and 100%, while the accuracy of the UCD system is mostly between 91 and 97%. In terms of delay, the Im-HMS system ranges between 18 and 39 ms, far below the lowest value of the UCD system (84 ms), so Im-HMS is significantly better than the existing UCD system. The error range of Im-HMS is mainly between 0.2 and 1.4, while that of the UCD system is mainly between -2 and 14. In terms of efficiency and energy utilization, Im-HMS values are higher than those of the UCD system. In general, the Im-HMS system proposed in this study is more accurate than the UCD system, with lower delay, smaller error, higher efficiency, and more efficient energy utilization, which is of great significance for mobile health monitoring in practical applications.

RevDate: 2022-10-27

Yu L, Yu PS, Duan Y, et al (2022)

A resource scheduling method for reliable and trusted distributed composite services in cloud environment based on deep reinforcement learning.

Frontiers in genetics, 13:964784 pii:964784.

With the vigorous development of Internet technology, applications are increasingly migrating to the cloud. The cloud, a distributed network environment, has been widely extended to many fields such as digital finance, supply chain management, and biomedicine. In order to meet the needs of the rapidly developing modern biomedical industry, a biological cloud platform is an inevitable choice for the integration and analysis of medical information. It improves the work efficiency of biological information systems and also realizes reliable and credible intelligent processing of biological resources. Cloud services in bioinformatics mainly target the processing of biological data, such as the analysis and processing of genes, the testing and detection of human tissues and organs, and the storage and transportation of vaccines. Biomedical companies form a data chain on the cloud, providing services and transferring data to each other to create composite services. Therefore, our motivation is to improve the process efficiency of biological cloud services. Users' business requirements have become complicated and diversified, which places higher demands on service scheduling strategies in cloud computing platforms. In addition, deep reinforcement learning shows strong perception and continuous decision-making capabilities in automatic control problems, which provides a new idea and method for solving service scheduling and resource allocation problems in the cloud computing field. Therefore, this paper designs a composite service scheduling model under a container instance mode that combines reservation and on-demand. The containers in the cluster are divided into two instance modes: reservation and on-demand. A composite service is described as a three-level structure: a composite service consists of multiple services, and a service consists of multiple service instances, where the service instance is the minimum scheduling unit. In addition, an improved Deep Q-Network (DQN) algorithm is proposed and applied to the scheduling of composite services. The experimental results show that applying our improved DQN algorithm to the composite service scheduling problem in the container cloud environment can effectively reduce the completion time of composite services. Meanwhile, the method improves Quality of Service (QoS) and resource utilization in the container cloud environment.
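
A compact, generic DQN sketch in PyTorch, illustrating the kind of learner the paper adapts for composite-service scheduling. The state encoding, action space, and reward are invented stand-ins; the paper's improved DQN and its container-cloud simulator are not reproduced:

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.99  # toy cluster state, candidate hosts

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

policy, target = QNet(), QNet()          # target net is fixed in this sketch
target.load_state_dict(policy.state_dict())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = (torch.stack(x) for x in zip(*batch))
    q = policy(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():                # bootstrap from the target network
        q_next = target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, r + GAMMA * q_next)
    opt.zero_grad(); loss.backward(); opt.step()

# Fake transitions: (state, action, reward, next_state)
for _ in range(64):
    replay.append((torch.randn(STATE_DIM), torch.tensor(2),
                   torch.tensor(1.0), torch.randn(STATE_DIM)))
train_step()
```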

RevDate: 2022-10-27

Jensen TL, Hooper WF, Cherikh SR, et al (2021)

RP-REP Ribosomal Profiling Reports: an open-source cloud-enabled framework for reproducible ribosomal profiling data processing, analysis, and result reporting.

F1000Research, 10:143.

Ribosomal profiling is an emerging experimental technology to measure protein synthesis by sequencing short mRNA fragments undergoing translation in ribosomes. Applied on the genome-wide scale, this is a powerful tool to profile global protein synthesis within cell populations of interest. Such information can be utilized for biomarker discovery and detection of treatment-responsive genes. However, analysis of ribosomal profiling data requires careful preprocessing to reduce the impact of artifacts and dedicated statistical methods for visualizing and modeling the high-dimensional discrete read count data. Here we present Ribosomal Profiling Reports (RP-REP), a new open-source cloud-enabled software that allows users to execute start-to-end gene-level ribosomal profiling and RNA-Seq analysis on a pre-configured Amazon Machine Image (AMI) hosted on AWS or on the user's own Ubuntu Linux server. The software works with FASTQ files stored locally, on AWS S3, or at the Sequence Read Archive (SRA). RP-REP automatically executes a series of customizable steps including filtering of contaminant RNA, enrichment of true ribosomal footprints, reference alignment and gene translation quantification, gene body coverage, CRAM compression, reference alignment QC, data normalization, multivariate data visualization, identification of differentially translated genes, and generation of heatmaps, co-translated gene clusters, enriched pathways, and other custom visualizations. RP-REP provides functionality to contrast RNA-Seq and ribosomal profiling results, and calculates translational efficiency per gene. The software outputs a PDF report and publication-ready table and figure files. As a use case, we provide RP-REP results for a dengue virus study that tested cytosol and endoplasmic reticulum cellular fractions of human Huh7 cells pre-infection and at 6 h, 12 h, 24 h, and 40 h post-infection. Case study results, Ubuntu installation scripts, and the most recent RP-REP source code are accessible at GitHub. The cloud-ready AMI is available at AWS (AMI ID: RPREP RSEQREP (Ribosome Profiling and RNA-Seq Reports) v2.1 (ami-00b92f52d763145d3)).
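
A hedged sketch of launching the cited AMI with boto3. The AMI ID comes from the abstract; the region, instance type, and key-pair name are placeholders and assumptions, so check the RP-REP documentation for the recommended instance size and the region actually hosting the image:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-00b92f52d763145d3",   # RP-REP v2.1 AMI from the abstract
    InstanceType="t3.xlarge",          # placeholder size
    KeyName="my-keypair",              # your existing EC2 key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```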

RevDate: 2022-10-27

Zhang Y, Wu Z, Lin P, et al (2022)

Hand gestures recognition in videos taken with a lensless camera.

Optics express, 30(22):39520-39533.

A lensless camera is an imaging system that uses a mask in place of a lens, making it thinner, lighter, and less expensive than a lensed camera. However, additional complex computation and time are required for image reconstruction. This work proposes a deep learning model named Raw3dNet that recognizes hand gestures directly on raw videos captured by a lensless camera without the need for image restoration. In addition to conserving computational resources, the reconstruction-free method provides privacy protection. Raw3dNet is a novel end-to-end deep neural network model for the recognition of hand gestures in lensless imaging systems. It is created specifically for raw video captured by a lensless camera and has the ability to properly extract and combine temporal and spatial features. The network is composed of two stages: 1. spatial feature extractor (SFE), which enhances the spatial features of each frame prior to temporal convolution; 2. 3D-ResNet, which implements spatial and temporal convolution of video streams. The proposed model achieves 98.59% accuracy on the Cambridge Hand Gesture dataset in the lensless optical experiment, which is comparable to the lensed-camera result. Additionally, the feasibility of physical object recognition is assessed. Further, we show that the recognition can be achieved with respectable accuracy using only a tiny portion of the original raw data, indicating the potential for reducing data traffic in cloud computing scenarios.

RevDate: 2022-10-27

Amin F, Abbasi R, Mateen A, et al (2022)

A Step toward Next-Generation Advancements in the Internet of Things Technologies.

Sensors (Basel, Switzerland), 22(20): pii:s22208072.

Internet of Things (IoT) devices generate a large amount of data over networks; therefore, the efficiency, complexity, interfaces, dynamics, robustness, and interaction need to be re-examined on a large scale. This phenomenon will lead to seamless network connectivity and the capability to provide support for the IoT, which the traditional IoT is not sufficient to provide. Therefore, we designed this study to provide a systematic analysis of next-generation advancements in the IoT. We propose a systematic catalog that covers the most recent advances beyond the traditional IoT. An overview of the IoT from the perspectives of the big data, data science, and network science disciplines, along with connecting technologies, is given. We highlight the conceptual view of the IoT, key concepts, growth, and the most recent trends. We discuss and highlight the importance and integration of big data, data science, and network science along with key applications such as artificial intelligence, machine learning, blockchain, federated learning, etc. Finally, we discuss various challenges and issues of the IoT, such as architecture, integration, and data provenance, and important applications such as cloud and edge computing. This article will aid readers and other researchers in understanding the IoT's next-generation developments and how they apply to the real world.

RevDate: 2022-10-27

Farag MM (2022)

Matched Filter Interpretation of CNN Classifiers with Application to HAR.

Sensors (Basel, Switzerland), 22(20): pii:s22208060.

Time series classification is an active research topic due to its wide range of applications and the proliferation of sensory data. Convolutional neural networks (CNNs) are ubiquitous in modern machine learning (ML) models. In this work, we present a matched filter (MF) interpretation of CNN classifiers accompanied by an experimental proof of concept using a carefully developed synthetic dataset. We exploit this interpretation to develop an MF CNN model for time series classification comprising a stack of a Conv1D layer followed by a GlobalMaxPooling layer acting as a typical MF for automated feature extraction and a fully connected layer with softmax activation for computing class probabilities. The presented interpretation enables developing superlight highly accurate classifier models that meet the tight requirements of edge inference. Edge inference is emerging research that addresses the latency, availability, privacy, and connectivity concerns of the commonly deployed cloud inference. The MF-based CNN model has been applied to the sensor-based human activity recognition (HAR) problem due to its significant importance in a broad range of applications. The UCI-HAR, WISDM-AR, and MotionSense datasets are used for model training and testing. The proposed classifier is tested and benchmarked on an android smartphone with average accuracy and F1 scores of 98% and 97%, respectively, which outperforms state-of-the-art HAR methods in terms of classification accuracy and run-time performance. The proposed model size is less than 150 KB, and the average inference time is less than 1 ms. The presented interpretation helps develop a better understanding of CNN operation and decision mechanisms. The proposed model is distinguished from related work by jointly featuring interpretability, high accuracy, and low computational cost, enabling its ready deployment on a wide set of mobile devices for a broad range of applications.
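
The abstract spells out the model stack: a Conv1D layer acting as a bank of matched filters, global max pooling picking each filter's peak response, and a softmax head. A minimal Keras rendering of that stack follows; the window length, filter count, and class count are illustrative, not the paper's:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_CHANNELS, N_CLASSES = 128, 3, 6   # e.g. 3-axis accelerometer HAR

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_CHANNELS)),
    # Each filter behaves like a matched filter for one activity signature
    layers.Conv1D(filters=32, kernel_size=16, activation="relu"),
    # Peak correlation over time = matched-filter detection statistic
    layers.GlobalMaxPooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```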

RevDate: 2022-10-27

Munir T, Akbar MS, Ahmed S, et al (2022)

A Systematic Review of Internet of Things in Clinical Laboratories: Opportunities, Advantages, and Challenges.

Sensors (Basel, Switzerland), 22(20): pii:s22208051.

The Internet of Things (IoT) is the network of physical objects embedded with sensors, software, electronics, and online connectivity systems. This study explores the role of the IoT in clinical laboratory processes; this systematic review was conducted adhering to the PRISMA Statement 2020 guidelines. We included IoT models and applications across preanalytical, analytical, and postanalytical laboratory processes. PubMed, Cochrane Central, CINAHL Plus, Scopus, IEEE, and the ACM Digital Library were searched between August 2015 and August 2022, and the data were tabulated. Cohen's coefficient of agreement was calculated to quantify inter-reviewer agreement; a total of 18 studies were included, with Cohen's coefficient computed to be 0.91. The included studies were divided into three classifications based on availability: preanalytical, analytical, and postanalytical. The majority (77.8%) of the studies were real-tested. Communication-based approaches were the most common (83.3%), followed by application-based approaches (44.4%) and sensor-based approaches (33.3%) among the included studies. Open issues and challenges across the included studies included scalability, costs and energy consumption, interoperability, privacy and security, and performance issues. In this study, we identified, classified, and evaluated IoT applicability in clinical laboratory systems. This study presents pertinent findings for IoT development across clinical laboratory systems, for which it is essential that more rigorous and efficient testing and studies be conducted in the future.
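
The review reports inter-reviewer agreement as a Cohen's coefficient of 0.91. A quick worked example of that statistic with scikit-learn; the two reviewers' include/exclude decisions below are invented:

```python
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["include", "exclude", "include", "include", "exclude", "include"]
reviewer_2 = ["include", "exclude", "include", "exclude", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```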

RevDate: 2022-10-27

Velichko A, Huyut MT, Belyaev M, et al (2022)

Machine Learning Sensors for Diagnosis of COVID-19 Disease Using Routine Blood Values for Internet of Things Application.

Sensors (Basel, Switzerland), 22(20): pii:s22207886.

Healthcare digitalization requires effective applications of human sensors, when various parameters of the human body are instantly monitored in everyday life due to the Internet of Things (IoT). In particular, machine learning (ML) sensors for the prompt diagnosis of COVID-19 are an important option for IoT application in healthcare and ambient assisted living (AAL). Determining a COVID-19 infected status with various diagnostic tests and imaging results is costly and time-consuming. This study provides a fast, reliable and cost-effective alternative tool for the diagnosis of COVID-19 based on the routine blood values (RBVs) measured at admission. The dataset of the study consists of a total of 5296 patients with the same number of negative and positive COVID-19 test results and 51 routine blood values. In this study, 13 popular classifier machine learning models and the LogNNet neural network model were examined. The most successful classifier model in terms of time and accuracy in the detection of the disease was the histogram-based gradient boosting (HGB) classifier (accuracy: 100%, time: 6.39 s). The HGB classifier identified the 11 most important features (LDL, cholesterol, HDL-C, MCHC, triglyceride, amylase, UA, LDH, CK-MB, ALP and MCH) to detect the disease with 100% accuracy. In addition, the importance of single, double and triple combinations of these features in the diagnosis of the disease was discussed. We propose to use these 11 features and their binary combinations as important biomarkers for ML sensors in the diagnosis of the disease, supporting edge computing on Arduino and cloud IoT service.
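
A sketch of the winning model family named above, scikit-learn's histogram-based gradient boosting. The feature matrix here is random noise standing in for the 51 routine blood values; the study's dataset is not reproduced:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5296, 51))      # 5296 patients x 51 blood values (fake)
y = rng.integers(0, 2, size=5296)    # COVID-19 test result (fake labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = HistGradientBoostingClassifier(max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.3f}")  # ~0.5 on pure noise
```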

RevDate: 2022-10-27

Merone M, Graziosi A, Lapadula V, et al (2022)

A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems.

Sensors (Basel, Switzerland), 22(20): pii:s22207807.

The exponential increase in internet data poses several challenges to cloud systems and data centers, such as scalability, power overheads, network load, and data security. To overcome these limitations, research is focusing on the development of edge computing systems, i.e., based on a distributed computing model in which data processing occurs as close as possible to where the data are collected. Edge computing, indeed, mitigates the limitations of cloud computing, implementing artificial intelligence algorithms directly on the embedded devices, enabling low-latency responses without network overhead or high costs, and improving solution scalability. Today, the hardware improvements of edge devices make them capable of performing, even if with some constraints, complex computations, such as those required by Deep Neural Networks. Nevertheless, to efficiently implement deep learning algorithms on devices with limited computing power, it is necessary to minimize the production time and to quickly identify, deploy, and, if necessary, optimize the best Neural Network solution. This study focuses on developing a universal method to identify and port the best Neural Network on an edge system, valid regardless of the device, Neural Network, and task typology. The method is based on three steps: a trade-off step to obtain the best Neural Network within the different solutions under investigation; an optimization step to find the best configurations of parameters under different acceleration techniques; and finally an explainability step using local interpretable model-agnostic explanations (LIME), which provides a global approach to quantify the goodness of the classifier's decision criteria. We evaluated several MobileNets on the Fudan-ShanghaiTech dataset to test the proposed approach.
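
A sketch of the explainability step named above: LIME's tabular explainer quantifying which features drive one prediction of a trained classifier. The data and model are toy stand-ins (the paper applies LIME to MobileNets on images; the lime package also ships an image explainer):

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)     # depends on features 0 and 2
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=list("abcd"),
                                 class_names=["neg", "pos"])
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())        # per-feature contributions for this instance
```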

RevDate: 2022-10-27

Torrisi F, Amato E, Corradino C, et al (2022)

Characterization of Volcanic Cloud Components Using Machine Learning Techniques and SEVIRI Infrared Images.

Sensors (Basel, Switzerland), 22(20): pii:s22207712.

Volcanic explosive eruptions inject several different types of particles and gasses into the atmosphere, giving rise to the formation and propagation of volcanic clouds. These can pose a serious threat to the health of people living near an active volcano and cause damage to air traffic. Many efforts have been devoted to monitoring and characterizing volcanic clouds. Satellite infrared (IR) sensors have been shown to be well suited to volcanic cloud monitoring tasks. Here, a machine learning (ML) approach was developed in Google Earth Engine (GEE) to detect a volcanic cloud and to classify its main components using satellite infrared images. We implemented a supervised support vector machine (SVM) algorithm to segment a combination of thermal infrared (TIR) bands acquired by the geostationary MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). This ML algorithm was applied to some of the paroxysmal explosive events that occurred at Mt. Etna between 2020 and 2022. We found that the ML approach using a combination of TIR bands from the geostationary satellite is very efficient, achieving an accuracy of 0.86 and being able to automatically detect, track, and map volcanic ash clouds in near real time.
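
A conceptual sketch of the supervised step: an SVM classifying pixels into volcanic-cloud components from thermal-infrared features. The real work runs in Google Earth Engine on SEVIRI bands; here scikit-learn and synthetic brightness-temperature features stand in (the negative split-window difference for ash is a common heuristic, used as an assumption):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Features per pixel: brightness temperature at 10.8 um and the
# 10.8 - 12.0 um difference, typically negative for ash clouds.
ash   = np.column_stack([rng.normal(260, 5, 300), rng.normal(-1.5, 0.5, 300)])
meteo = np.column_stack([rng.normal(230, 5, 300), rng.normal(+1.0, 0.5, 300)])
X = np.vstack([ash, meteo])
y = np.array([1] * 300 + [0] * 300)    # 1 = volcanic ash, 0 = weather cloud

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[258.0, -1.2]]))    # likely classified as ash
```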

RevDate: 2022-10-27

Li Z (2022)

Forecasting Weekly Dengue Cases by Integrating Google Earth Engine-Based Risk Predictor Generation and Google Colab-Based Deep Learning Modeling in Fortaleza and the Federal District, Brazil.

International journal of environmental research and public health, 19(20): pii:ijerph192013555.

Efficient and accurate dengue risk prediction is an important basis for dengue prevention and control, but it faces challenges such as downloading and processing multi-source data to generate risk predictors and consuming significant time and computational resources to train and validate models locally. In this context, this study proposed a framework for dengue risk prediction by integrating big geospatial data cloud computing based on the Google Earth Engine (GEE) platform and artificial intelligence modeling on the Google Colab platform. It enables defining the epidemiological calendar, delineating the predominant area of dengue transmission in cities, generating the data of risk predictors, and defining multi-week-ahead prediction scenarios. We implemented the experiments based on weekly dengue cases during 2013-2020 in the Federal District and Fortaleza, Brazil to evaluate the performance of the proposed framework. Four predictors were considered, including total rainfall (Rsum), mean temperature (Tmean), mean relative humidity (RHmean), and mean normalized difference vegetation index (NDVImean). Three models (random forest (RF), long short-term memory (LSTM), and LSTM with attention mechanism (LSTM-ATT)) and two modeling scenarios (modeling with or without dengue cases) were set to implement 1- to 4-week-ahead predictions. A total of 24 models were built, and the results showed, in general, that LSTM and LSTM-ATT models outperformed RF models; that modeling could benefit from using historical dengue cases as one of the predictors, which makes the predicted curve's fluctuation more stable compared with using climate and environmental factors only; and that the attention mechanism could further improve the performance of LSTM models. This study provides implications for future dengue risk prediction in terms of the effectiveness of GEE-based big geospatial data processing for risk predictor generation and Google Colab-based risk modeling, and presents the benefits of using historical dengue data as one of the input features and the attention mechanism for LSTM modeling.
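
A minimal Keras LSTM of the kind compared in the study: predicting next-week dengue cases from the past weeks of the four predictors (Rsum, Tmean, RHmean, NDVImean) plus, optionally, lagged cases. Shapes and data below are illustrative only:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

LOOKBACK, N_FEATURES = 12, 5       # 12 weeks; 4 predictors + lagged cases

model = models.Sequential([
    layers.Input(shape=(LOOKBACK, N_FEATURES)),
    layers.LSTM(32),
    layers.Dense(1),               # predicted weekly case count
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(400, LOOKBACK, N_FEATURES).astype("float32")  # fake history
y = np.random.rand(400, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```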

RevDate: 2022-10-27

Alenoghena CO, Onumanyi AJ, Ohize HO, et al (2022)

eHealth: A Survey of Architectures, Developments in mHealth, Security Concerns and Solutions.

International journal of environmental research and public health, 19(20): pii:ijerph192013071.

The ramifications of the COVID-19 pandemic have contributed in part to a recent upsurge in the study and development of eHealth systems. Although it is almost impossible to cover all aspects of eHealth in a single discussion, three critical areas have gained traction. These include the need for acceptable eHealth architectures, the development of mobile health (mHealth) technologies, and the need to address eHealth system security concerns. Existing survey articles lack a synthesis of the most recent advancements in the development of architectures, mHealth solutions, and innovative security measures, which are essential components of effective eHealth systems. Consequently, the present article aims at providing an encompassing survey of these three aspects towards the development of successful and efficient eHealth systems. Firstly, we discuss the most recent innovations in eHealth architectures, such as blockchain-, Internet of Things (IoT)-, and cloud-based architectures, focusing on their respective benefits and drawbacks while also providing an overview of how they might be implemented and used. Concerning mHealth and security, we focus on key developments in both areas while discussing other critical topics of importance for eHealth systems. We close with a discussion of the important research challenges and potential future directions as they pertain to architecture, mHealth, and security concerns. This survey gives a comprehensive overview, including the merits and limitations of several possible technologies for the development of eHealth systems. This endeavor offers researchers and developers a quick snapshot of the information necessary during the design and decision-making phases of the eHealth system development lifecycle. Furthermore, we conclude that building a unified architecture for eHealth systems would require combining several existing designs. It also points out that there are still a number of problems to be solved, so more research and investment are needed to develop and deploy functional eHealth systems.

RevDate: 2022-10-25

Schubert PJ, Dorkenwald S, Januszewski M, et al (2022)

SyConn2: dense synaptic connectivity inference for volume electron microscopy.

Nature methods [Epub ahead of print].

The ability to acquire ever larger datasets of brain tissue using volume electron microscopy leads to an increasing demand for the automated extraction of connectomic information. We introduce SyConn2, an open-source connectome analysis toolkit, which works with both on-site high-performance compute environments and rentable cloud computing clusters. SyConn2 was tested on connectomic datasets with more than 10 million synapses, provides a web-based visualization interface and makes these data amenable to complex anatomical and neuronal connectivity queries.

RevDate: 2022-10-24

Zhang Y, P Geng (2022)

Multi-Task Assignment Method of the Cloud Computing Platform Based on Artificial Intelligence.

Computational intelligence and neuroscience, 2022:1789490.

To realize load balancing of cloud computing platforms in big data processing, the current approach is to find the physical host with the optimal load balance in each algorithm cycle. This optimal load balancing strategy, which focuses excessively on the current deployment problem, has certain limitations: it makes the system less efficient and unnecessarily prolongs the user's waiting time. This paper proposes a task assignment method for long-term resource load balancing of cloud platforms based on artificial intelligence and big data (TABAI). The maximum posterior probability for each physical host is calculated using Bayesian theory. Euler's formula is used to calculate the similarity between the host with the largest posterior probability and other hosts, which serves as a threshold. The hosts are classified according to the threshold to determine the optimal cluster and then form the final set of candidate physical hosts. The method improves the resource utilization and external service capability of the cloud platform by combining cluster analysis with Bayes' theorem to achieve global load balancing in the time dimension. The experimental results show that TABAI has a shorter processing time than the traditional load-balancing multi-task assignment method; when the time is >600 s, the standard deviation of TABAI decreases to a greater extent, and it has stronger external service capabilities.
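
A toy reconstruction of the selection pipeline described above: score each physical host by a posterior computed from its load, take the host with the largest posterior, measure every host's distance to it, and keep the hosts within a threshold as the candidate cluster. All numbers are invented, and the paper's exact likelihoods and its "Euler's formula" similarity are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)
loads = rng.uniform(0.1, 0.9, size=(10, 3))   # 10 hosts x (CPU, RAM, net)

# Posterior P(balanced | load) via Bayes with a uniform prior and a simple
# likelihood favoring lightly, evenly loaded hosts (an assumption).
likelihood = np.exp(-loads.sum(axis=1)) * np.exp(-loads.std(axis=1))
posterior = likelihood / likelihood.sum()

best = int(np.argmax(posterior))
dist = np.linalg.norm(loads - loads[best], axis=1)  # similarity to best host
threshold = np.median(dist)
candidates = np.flatnonzero(dist <= threshold)      # optimal cluster

print(f"best host: {best}, candidate set: {candidates.tolist()}")
```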

RevDate: 2022-10-24

Yentes JM, Liu WY, Zhang K, et al (2022)

Updated Perspectives on the Role of Biomechanics in COPD: Considerations for the Clinician.

International journal of chronic obstructive pulmonary disease, 17:2653-2675 pii:339195.

Patients with chronic obstructive pulmonary disease (COPD) demonstrate extra-pulmonary functional decline such as an increased prevalence of falls. Biomechanics offers insight into functional decline by examining mechanics of abnormal movement patterns. This review discusses biomechanics of functional outcomes, muscle mechanics, and breathing mechanics in patients with COPD as well as future directions and clinical perspectives. Patients with COPD demonstrate changes in their postural sway during quiet standing compared to controls, and these deficits are exacerbated when sensory information (eg, eyes closed) is manipulated. If standing balance is disrupted with a perturbation, patients with COPD are slower to return to baseline and their muscle activity is differential from controls. When walking, patients with COPD appear to adopt a gait pattern that may increase stability (eg, shorter and wider steps, decreased gait speed) in addition to altered gait variability. Biomechanical muscle mechanics (ie, tension, extensibility, elasticity, and irritability) alterations with COPD are not well documented, with relatively few articles investigating these properties. On the other hand, dyssynchronous motion of the abdomen and rib cage while breathing is well documented in patients with COPD. Newer biomechanical technologies have allowed for estimation of regional, compartmental, lung volumes during activity such as exercise, as well as respiratory muscle activation during breathing. Future directions of biomechanical analyses in COPD are trending toward wearable sensors, big data, and cloud computing. Each of these offers unique opportunities as well as challenges. Advanced analytics of sensor data can offer insight into the health of a system by quantifying complexity or fluctuations in patterns of movement, as healthy systems demonstrate flexibility and are thus adaptable to changing conditions. Biomechanics may offer clinical utility in prediction of 30-day readmissions, identifying disease severity, and patient monitoring. Biomechanics is complementary to other assessments, capturing what patients do, as well as their capability.

RevDate: 2022-10-24

Bonino da Silva Santos LO, Ferreira Pires L, Graciano Martinez V, et al (2023)

Personal Health Train Architecture with Dynamic Cloud Staging.

SN computer science, 4(1):14.

Scientific advances, especially in the healthcare domain, can be accelerated by making data available for analysis. However, in traditional data analysis systems, data need to be moved to a central processing unit that performs analyses, which may be undesirable, e.g. due to privacy regulations in case these data contain personal information. This paper discusses the Personal Health Train (PHT) approach, in which data processing is brought to the (personal health) data rather than the other way around, allowing access to (private) data to be controlled and ethical and legal concerns to be observed. This paper introduces the PHT architecture and discusses the data staging solution that allows processing to be delegated to components spawned in a private cloud environment in case the (health) organisation hosting the data has limited resources to execute the required processing. This paper shows the feasibility and suitability of the solution with a relatively simple, yet representative, case study of data analysis of Covid-19 infections, which is performed by components that are created on demand and run in the Amazon Web Services platform. This paper also shows that the performance of our solution is acceptable and that our solution is scalable. This paper demonstrates that the PHT approach enables data analysis with controlled access, preserving privacy and complying with regulations such as the GDPR, while the solution is deployed in a private cloud environment.

RevDate: 2022-10-21

Proctor T, Seritan S, Rudinger K, et al (2022)

Scalable Randomized Benchmarking of Quantum Computers Using Mirror Circuits.

Physical review letters, 129(15):150502.

The performance of quantum gates is often assessed using some form of randomized benchmarking. However, the existing methods become infeasible for more than approximately five qubits. Here we show how to use a simple and customizable class of circuits, randomized mirror circuits, to perform scalable, robust, and flexible randomized benchmarking of Clifford gates. We show that this technique approximately estimates the infidelity of an average many-qubit logic layer, and we use simulations of up to 225 qubits with physically realistic error rates in the range 0.1%-1% to demonstrate its scalability. We then use up to 16 physical qubits of a cloud quantum computing platform to demonstrate that our technique can reveal and quantify crosstalk errors in many-qubit circuits.
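
A simplified sketch of a mirror circuit with Qiskit: a few random Clifford layers followed by their inverse, so an ideal device returns the input state. The full protocol also interleaves random Pauli layers and a randomized state preparation, which this toy omits:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import random_clifford

N_QUBITS, DEPTH = 3, 4

half = QuantumCircuit(N_QUBITS)
for _ in range(DEPTH):
    layer = random_clifford(N_QUBITS).to_circuit()  # one random Clifford layer
    half.compose(layer, inplace=True)

mirror = half.compose(half.inverse())   # circuit followed by its mirror
mirror.measure_all()

# On a perfect machine every shot returns '000'; the observed deviation
# estimates the error of an average logic layer.
print(mirror.depth())
```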

RevDate: 2022-10-21

Matar A, Hansson M, Slokenberga S, et al (2022)

A proposal for an international Code of Conduct for data sharing in genomics.

Developing world bioethics [Epub ahead of print].

As genomic research becomes commonplace across the world, there is an increased need to coordinate practices among researchers, especially with regard to data sharing. One such way is an international code of conduct. In September 2020, an expert panel consisting of representatives from various fields convened to discuss a draft proposal formed via a synthesis of existing professional codes and other recommendations. This article presents an overview and analysis of the main issues related to international genomic research that were discussed by the expert panel, and the results of the discussion and follow up responses by the experts. As a result, the article presents as an annex a proposal for an international code of conduct for data sharing in genomics that is meant to establish best practices.

RevDate: 2022-10-21

Asif RN, Abbas S, Khan MA, et al (2022)

Development and Validation of Embedded Device for Electrocardiogram Arrhythmia Empowered with Transfer Learning.

Computational intelligence and neuroscience, 2022:5054641.

With the emergence of the Internet of Things (IoT), the investigation of different diseases in healthcare has improved, and cloud computing has helped to centralize data and make patient records accessible throughout the world. In this way, the electrocardiogram (ECG) is used to diagnose heart diseases or abnormalities. Machine learning techniques have been used previously but are feature-based and not as accurate as transfer learning; this work proposes the development and validation of an embedded device for ECG arrhythmia empowered with transfer learning (DVEEA-TL) model. This model is a combination of hardware, software, and two datasets that are augmented and fused, and it achieves higher accuracy than previous work and research. In the proposed model, a new dataset is made by combining the Kaggle dataset with another built from real-time healthy and unhealthy recordings; the AlexNet transfer learning approach is then applied to obtain more accurate readings of ECG signals. In this research, the DVEEA-TL model diagnoses heart abnormality with training and validation accuracy of 99.9% and 99.8%, respectively, which is a better and more reliable approach compared to previous research in this field.
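
A sketch of the transfer-learning step named in the abstract: start from an ImageNet-pretrained AlexNet (via torchvision, version 0.13 or later for the weights enum) and retrain only a new final layer for the ECG classes. The class count and input batch are placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 2  # e.g. normal vs. arrhythmic beat images; adjust as needed

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained features
    p.requires_grad = False

# Replace the last classifier layer (in_features=4096 for AlexNet)
model.classifier[6] = nn.Linear(4096, N_CLASSES)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)       # a fake batch of ECG images
loss = criterion(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward(); optimizer.step()
print(float(loss))
```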

RevDate: 2022-10-21

Han Z, Li F, G Wang (2022)

Financial Data Mining Model Based on K-Truss Community Query Model and Artificial Intelligence.

Computational intelligence and neuroscience, 2022:9467623.

With the continuous development of Internet technology and related industries, emerging technologies such as big data and cloud computing have gradually integrated into and influenced social life. Emerging technologies have, to a large extent, revolutionized people's ways of production and life and provided a great deal of convenience. With the popularity of these technologies, information and data have begun to explode. When an image storage system is used to process this information, an image contains countless pixels, and these pixels are interconnected to form the entire image. In real life, communities are like these pixels: on the Internet, communities are composed of interconnected parts. In various fields such as image modeling, problems such as recognition rate remain, and the study of community structure raises many more. The area attracts more and more researchers, but research on community query problems started late and is still developing relatively slowly, so designing an excellent community query algorithm is an urgent problem. With this goal, and building on previous research results, we conduct an in-depth discussion of community query algorithms and hope that our results can be applied in real life.

RevDate: 2022-10-21

Jia Z (2022)

Garden Landscape Design Method in Public Health Urban Planning Based on Big Data Analysis Technology.

Journal of environmental and public health, 2022:2721247.

Aiming at the goal of high-quality development of the landscape architecture industry, we should actively promote the development and integration of digital, networked, and intelligent technologies and promote the intelligent and diversified development of the industry. Due to the limitations of drawing design technology and construction methods, traditional landscape architecture construction cannot truly capture public demands, and construction schemes rely on the experience and subjective aesthetics of professionals, resulting in an improper connection between design and construction. At present, under the guidance of national strategy and against the background of the rapid development of digital technologies such as 5G, big data, cloud computing, the Internet of Things, and digital twins, the deep integration of landscape architecture construction and digital technology has transformed the production mode of landscape architecture construction. Abundant professional data and convenient information processing platforms enable landscape planners, designers, and builders to evaluate the whole life cycle of a project more scientifically and objectively and to digitalize the whole process of investigation, analysis, design, construction, operation, and maintenance. For the landscape architecture industry, the significance of digital technology is not only to change production tools but also to update environmental awareness, design response, and construction methods, which allows landscape architecture planning and design to combine the qualitative and the quantitative organically and makes the discipline more scientific and rational. In this paper, a new method combining grey relational degree with machine learning uses big data in landscape design to provide new guidance for traditional landscape planning, with very good results. The article analyzes the guidance of landscape architecture design under big data in China and provides a valuable reference for promoting the construction of landscape architecture in China.
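
A worked sketch of grey relational analysis, the "grey relational degree" technique the abstract pairs with machine learning. The data are invented: rows are candidate design schemes, columns are normalized indicators, and the reference row is the ideal scheme:

```python
import numpy as np

X = np.array([[0.9, 0.6, 0.8],     # scheme A (already normalized to [0,1])
              [0.7, 0.9, 0.5],     # scheme B
              [0.6, 0.7, 0.9]])    # scheme C
ref = X.max(axis=0)                # ideal reference sequence
rho = 0.5                          # distinguishing coefficient, customary value

delta = np.abs(X - ref)
d_min, d_max = delta.min(), delta.max()
coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
degree = coeff.mean(axis=1)        # grey relational degree per scheme

print(dict(zip("ABC", degree.round(3))))  # larger = closer to the ideal
```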

RevDate: 2022-10-20

Su J, Su K, S Wang (2022)

Evaluation of digital economy development level based on multi-attribute decision theory.

PloS one, 17(10):e0270859 pii:PONE-D-22-01127.

The maturity and commercialization of emerging digital technologies represented by artificial intelligence, cloud computing, blockchain, and virtual reality are giving birth to a new and higher economic form, the digital economy. The digital economy differs from the traditional industrial economy: it is clean, efficient, green, and recyclable, and it represents and promotes the future direction of global economic development, especially against the background of the sudden and continuing COVID-19 pandemic. It is therefore essential to establish a scientific and reasonable comprehensive evaluation model of digital economy development. In this paper, relevant indicators of digital economy development are first collected manually on the basis of literature analysis and then screened using grey dynamic clustering and rough set reduction theory. The evaluation index system of digital economy development is constructed from four dimensions: digital innovation impetus support, digital infrastructure construction support, national economic environment and digital policy guarantee, and digital integration and application. Next, the subjective and objective weights are calculated by the group FAHP method, the entropy method, and an improved CRITIC method, and the combined weight is integrated under the maximum-variance principle. Grey correlation analysis and an improved VIKOR model are combined to systematically evaluate the digital economy development level of 31 provinces and cities in China from 2013 to 2019. The empirical results show that the overall development of China's digital economy exhibits a rising trend, while development across the four major economic zones is unbalanced. Finally, we put forward targeted suggestions for the construction of China's provincial digital economy.
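
The entropy method used for the objective weights is easy to state concretely. A minimal sketch, assuming a decision matrix of positive, benefit-type indicators (alternatives in rows, indicators in columns); the variable names are illustrative, not the paper's:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indicators whose values vary more across
    alternatives carry more information and receive larger weights."""
    P = X / X.sum(axis=0)                          # proportion per alternative
    n = X.shape[0]
    # entropy of each indicator; epsilon guards log(0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)
    d = 1 - E                                      # degree of diversification
    return d / d.sum()                             # normalised objective weights

X = np.array([[0.7, 120, 3.2],
              [0.9, 180, 2.8],
              [0.4,  90, 3.9]])                    # 3 alternatives x 3 indicators
print(entropy_weights(X))
```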

RevDate: 2022-10-20

Moya-Galé G, Walsh SJ, A Goudarzi (2022)

Automatic Assessment of Intelligibility in Noise in Parkinson Disease: Validation Study.

Journal of medical Internet research, 24(10):e40567 pii:v24i10e40567.

BACKGROUND: Most individuals with Parkinson disease (PD) experience a degradation in their speech intelligibility. Research on the use of automatic speech recognition (ASR) to assess intelligibility is still sparse, especially when trying to replicate communication challenges in real-life conditions (ie, noisy backgrounds). Developing technologies to automatically measure intelligibility in noise can ultimately assist patients in self-managing their voice changes due to the disease.

OBJECTIVE: The goal of this study was to pilot-test and validate the use of a customized web-based app to assess speech intelligibility in noise in individuals with dysarthria associated with PD.

METHODS: In total, 20 individuals with dysarthria associated with PD and 20 healthy controls (HCs) recorded a set of sentences using their phones. The Google Cloud ASR API was used to automatically transcribe the speakers' sentences. An algorithm was created to embed speakers' sentences in +6-dB signal-to-noise ratio multitalker babble. Results from ASR performance were compared to those from 30 listeners who orthographically transcribed the same set of sentences. Data were reduced into a single event, defined as a success if the artificial intelligence (AI) system transcribed a random speaker or sentence as well as, or better than, the average of 3 randomly chosen human listeners. These data were further analyzed by logistic regression to assess whether AI success differed by speaker group (HCs or speakers with dysarthria) or was affected by sentence length. A discriminant analysis was conducted on the human listener data and AI transcriber data independently to compare the ability of each data set to discriminate between HCs and speakers with dysarthria.
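
As a rough illustration of the noise-embedding step, below is how speech is typically mixed with babble at a fixed signal-to-noise ratio; the function and parameter names are ours, and the study's actual implementation may differ:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db=6.0):
    """Embed a speech signal in babble noise at a target SNR (dB).

    Both inputs are 1-D float arrays at the same sample rate; the noise is
    tiled or truncated to the speech length, then rescaled so that
    10*log10(P_speech / P_noise) equals snr_db.
    """
    noise = np.resize(noise, speech.shape)           # match lengths by tiling
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```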

RESULTS: The data analysis indicated a 0.8 probability (95% CI 0.65-0.91) that AI performance would be as good or better than the average human listener. AI transcriber success probability was not found to be dependent on speaker group. AI transcriber success was found to decrease with sentence length, losing an estimated 0.03 probability of transcribing as well as the average human listener for each word increase in sentence length. The AI transcriber data were found to offer the same discrimination of speakers into categories (HCs and speakers with dysarthria) as the human listener data.

CONCLUSIONS: ASR has the potential to assess intelligibility in noise in speakers with dysarthria associated with PD. Our results hold promise for the use of AI with this clinical population, although a full range of speech severity needs to be evaluated in future work, as well as the effect of different speaking tasks on ASR.

RevDate: 2022-10-19

Anonymous (2022)

Understanding enterprise data warehouses to support clinical and translational research: enterprise information technology relationships, data governance, workforce, and cloud computing.

RevDate: 2022-10-19

Gendia A (2022)

Cloud Based AI-Driven Video Analytics (CAVs) in Laparoscopic Surgery: A Step Closer to a Virtual Portfolio.

Cureus, 14(9):e29087.

AIMS: To outline the use of cloud-based artificial intelligence (AI)-driven video analytics (CAVs) in minimally invasive surgery and to propose their potential as a virtual portfolio for trainee and established surgeons.

METHODS: An independent online demonstration was requested from three platforms, namely Theator (Palo Alto, California, USA), Touch Surgery™ (Medtronic, London, England, UK), and C-SATS® (Seattle, Washington, USA). The assessed domains were online and app-based accessibility, the ability for timely trainee feedback, and AI integration for operation-specific steps and critical views.

RESULTS: The CAVs enable users to record surgeries with the advantage of limitless cloud video storage and smart integration into theatre settings. Recorded surgeries can be viewed, and trainee videos reviewed, through a medium of communication and sharing that supports feedback. Theator and C-SATS® provide their users with customizable surgical skills scoring systems that can deliver structured feedback to trainees. Additionally, AI plays an important role in all three platforms by providing time-based analysis of steps and highlighting critical milestones.

CONCLUSION: Cloud-based AI-driven video analytics is an emerging technology that enables users to store, analyze, and review videos. This technology has the potential to improve training, governance, and standardization procedures. Moreover, with future adoption of the technology, CAVs could be integrated into trainees' portfolios as part of a virtual curriculum, enabling structured assessment of a surgeon's progression and degree of experience throughout their surgical career.

RevDate: 2022-10-19

Yamamoto Y, Shimobaba T, T Ito (2022)

HORN-9: Special-purpose computer for electroholography with the Hilbert transform.

Optics express, 30(21):38115-38127.

Holography is a technology that uses light interference and diffraction to record and reproduce three-dimensional (3D) information. Using computers, holographic 3D scenes (electroholography) have been widely studied. Nevertheless, its practical application requires enormous computing power, and current computers have limitations in real-time processing. In this study, we show that holographic reconstruction (HORN)-9, a special-purpose computer for electroholography with the Hilbert transform, can compute a 1,920 × 1,080-pixel computer-generated hologram from a point cloud of 65,000 points in 0.030 s (33 fps) on a single card. This performance is 8, 7, and 170 times more efficient than a previously developed HORN-8, a graphics processing unit, and a central processing unit (CPU), respectively. We also demonstrated the real-time processing and display of 400,000 points on multiple HORN-9s, achieving an acceleration of 600 times with four HORN-9 units compared with a single CPU.
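
For context on the workload that HORN-class machines accelerate, below is a naive point-cloud computer-generated hologram loop under the Fresnel approximation. This is a textbook formulation, not the HORN-9 pipeline (which additionally uses the Hilbert transform); the resolution, wavelength, and pixel pitch are illustrative:

```python
import numpy as np

def point_cloud_hologram(points, H=270, W=480, wavelength=520e-9, pitch=8e-6):
    """Naive computer-generated hologram from a 3-D point cloud.

    Accumulates the Fresnel phase contribution of every point at every
    hologram pixel -- the O(N * H * W) computation that special-purpose
    hardware accelerates. points is an iterable of (x, y, z) in metres.
    """
    ys, xs = np.meshgrid(np.arange(H) * pitch, np.arange(W) * pitch,
                         indexing="ij")
    field = np.zeros((H, W), dtype=complex)
    for (px, py, pz) in points:                     # one point source at a time
        r2 = (xs - px) ** 2 + (ys - py) ** 2
        phase = np.pi * r2 / (wavelength * pz)      # Fresnel approximation
        field += np.exp(1j * phase)
    return np.angle(field)                          # phase-only hologram

holo = point_cloud_hologram([(1e-3, 1e-3, 0.2), (2e-3, 1.5e-3, 0.25)])
print(holo.shape)                                   # (270, 480)
```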

RevDate: 2022-10-18

Houskeeper HF, Hooker SB, KC Cavanaugh (2022)

Spectrally simplified approach for leveraging legacy geostationary oceanic observations.

Applied optics, 61(27):7966-7977.

The use of multispectral geostationary satellites to study aquatic ecosystems improves the temporal frequency of observations and mitigates cloud obstruction, but no operational capability presently exists for the coastal and inland waters of the United States. The Advanced Baseline Imager (ABI) on the current iteration of the Geostationary Operational Environmental Satellites, termed the R Series (GOES-R), however, provides sub-hourly imagery and the opportunity to overcome this deficit and to leverage a large repository of existing GOES-R aquatic observations. The fulfillment of this opportunity is assessed herein using a spectrally simplified, two-channel aquatic algorithm consistent with ABI wave bands to estimate the diffuse attenuation coefficient for photosynthetically available radiation, Kd(PAR). First, an in situ ABI dataset was synthesized using a globally representative dataset of above- and in-water radiometric data products. Values of Kd(PAR) were estimated by fitting the ratio of the shortest and longest visible wave bands from the in situ ABI dataset to coincident, in situ Kd(PAR) data products. The algorithm was evaluated based on an iterative cross-validation analysis in which 80% of the dataset was randomly partitioned for fitting and the remaining 20% was used for validation. The iteration producing the median coefficient of determination (R²) value (0.88) resulted in a root mean square difference of 0.319 m⁻¹, or 8.5% of the range in the validation dataset. Second, coincident mid-day images of central and southern California from ABI and from the Moderate Resolution Imaging Spectroradiometer (MODIS) were compared using Google Earth Engine (GEE). GEE default ABI reflectance values were adjusted based on a near infrared signal. Matchups between the ABI and MODIS imagery indicated similar spatial variability (R² = 0.60) between ABI adjusted blue-to-red reflectance ratio values and MODIS default diffuse attenuation coefficient for spectral downward irradiance at 490 nm, Kd(490), values. This work demonstrates that if an operational capability to provide ABI aquatic data products were realized, the spectral configuration of ABI would potentially support a sub-hourly, visible aquatic data product applicable to water-mass tracing and physical oceanography research.
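
A minimal sketch of the kind of band-ratio fit and repeated 80/20 cross-validation described, assuming the log-transformed band ratio regresses linearly against log Kd(PAR); the arrays, fit form, and function name are our assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_band_ratio(blue, red, kd, n_iter=100, train_frac=0.8):
    """Repeated random 80/20 split of a log-log band-ratio regression.

    blue, red: reflectances in the shortest and longest visible bands;
    kd: coincident Kd(PAR) values. Returns the median validation R².
    """
    x, y = np.log10(blue / red), np.log10(kd)
    n, r2s = len(x), []
    for _ in range(n_iter):
        idx = rng.permutation(n)
        cut = int(train_frac * n)
        tr, te = idx[:cut], idx[cut:]
        slope, intercept = np.polyfit(x[tr], y[tr], 1)   # linear fit in log space
        pred = slope * x[te] + intercept
        ss_res = np.sum((y[te] - pred) ** 2)
        ss_tot = np.sum((y[te] - y[te].mean()) ** 2)
        r2s.append(1 - ss_res / ss_tot)
    return np.median(r2s)
```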

RevDate: 2022-10-18

Song L, Wang H, Z Shi (2022)

A Literature Review Research on Monitoring Conditions of Mechanical Equipment Based on Edge Computing.

Applied bionics and biomechanics, 2022:9489306.

The motivation of this research is to review the methods used for compressing the data collected when monitoring the condition of equipment within an edge computing framework. A large amount of signal data is collected when monitoring the condition of mechanical equipment: signals from running machines are transmitted continuously, so the data must be compressed and handled effectively. This process consumes resources, since transmitting uncompressed data requires large capacity. To address this problem, this article examines equipment condition monitoring based on edge computing. First, the signal is pre-processed at the edge so that fault characteristics can be identified quickly. Second, signals whose fault characteristics are difficult to identify are compressed to save transmission resources. Then, the different types of signal data collected from mechanical equipment are compressed by various methods and uploaded to the cloud. Finally, the cloud platform, with its powerful processing capability, processes the compressed data. By examining and analyzing condition-monitoring and signal-compression methods for mechanical equipment, future development trends are elaborated to provide references and ideas for contemporary research on data monitoring and data compression algorithms. The manuscript presents the different compression methods in detail and clarifies which are suited to signal compression for equipment monitoring based on edge computing.

RevDate: 2022-10-17

Kobayashi K, Yoshida H, Tanjo T, et al (2022)

Cloud service checklist for academic communities and customization for genome medical research.

Human genome variation, 9(1):36.

In this paper, we present a cloud service checklist designed to help IT administrators or researchers in academic organizations select the most suitable cloud services. This checklist, which comprises items that we believe IT administrators or researchers in academic organizations should consider when they adopt cloud services, comprehensively covers the issues related to a variety of cloud services, including security, functionality, performance, and law. In response to the increasing demands for storage and computing resources in genome medical science communities, various guidelines for using resources operated by external organizations, such as cloud services, have been published by different academic funding agencies and the Japanese government. However, it is sometimes difficult to identify the checklist items that satisfy the genome medical science community's guidelines, and some of these requirements are not included in the existing checklists. This issue provided our motivation for creating a cloud service checklist customized for genome medical research communities. The resulting customized checklist is designed to help researchers easily find information about the cloud services that satisfy the guidelines in genome medical science communities. Additionally, we explore whether many cloud service providers satisfy the requirements or checklist items in the cloud service checklist for genome medical research by evaluating their survey responses.

RevDate: 2022-10-17

Bu H, Xia J, Wu Q, et al (2022)

Relationship Discovery and Hierarchical Embedding for Web Service Quality Prediction.

Computational intelligence and neuroscience, 2022:9240843.

Web service quality prediction has become a popular research theme in cloud computing and the Internet of Things. Graph Convolutional Network (GCN)-based methods are efficient because they aggregate feature information from the local graph neighborhood. Although these prior works have demonstrated good prediction performance, they still face two challenges: (1) the user-service bipartite graph is essentially a heterogeneous graph containing four kinds of relationships, and previous GCN-based models have used only some of them, so fully mining and using these relationships is critical to improving prediction accuracy; (2) after embeddings are obtained from the GCNs, the similarity calculations commonly used for downstream prediction must traverse the data one by one, which is time-consuming. To address these challenges, this work proposes a novel relationship discovery and hierarchical embedding method based on GCNs (named RDHE), which designs a dual mechanism to represent services and users respectively, a new community discovery method, and a fast similarity calculation process, so that the relationships in the graph can be fully mined and utilized. Experimental results on a real data set show that this method greatly improves the accuracy of web service quality prediction.

RevDate: 2022-10-17

Mondal P, Dutta T, Qadir A, et al (2022)

Radar and optical remote sensing for near real-time assessments of cyclone impacts on coastal ecosystems.

Remote sensing in ecology and conservation, 8(4):506-520.

Rapid impact assessment of cyclones on coastal ecosystems is critical for timely rescue and rehabilitation operations in highly human-dominated landscapes. Such assessments should also include damage assessments of vegetation for restoration planning in impacted natural landscapes. Our objective is to develop a remote sensing-based approach combining satellite data derived from optical (Sentinel-2), radar (Sentinel-1), and LiDAR (Global Ecosystem Dynamics Investigation) platforms for rapid assessment of post-cyclone inundation in non-forested areas and vegetation damage in a primarily forested ecosystem. We apply this multi-scalar approach to assess the damage caused by cyclone Amphan, which hit coastal India and Bangladesh in May 2020, severely flooding several districts in the two countries and causing destruction to the Sundarban mangrove forests. Our analysis shows that at least 6821 sq. km of land across the 39 study districts remained inundated even 10 days after the cyclone. We further calculated the change in forest greenness as the difference in normalized difference vegetation index (NDVI) pre- and post-cyclone. Our findings indicate a decline of less than 0.2 NDVI units across 3.45 sq. km of the forest. Rapid assessment of post-cyclone damage in mangroves is challenging due to the limited navigability of waterways but critical for planning mitigation and recovery measures. We demonstrate the utility of the Otsu method, an automated statistical approach available on the Google Earth Engine platform, to identify inundated areas within days of a cyclone. Our radar-based inundation analysis advances current practice because it requires minimal user input and is effective in the presence of high cloud cover. Such rapid assessment, when complemented with detailed information on species and vegetation composition, can inform appropriate restoration efforts in severely impacted regions and help decision makers efficiently manage resources for recovery and aid relief. We provide the datasets from this study on an open platform to aid future research and planning endeavors.
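
As an illustration of the two image-derived products described (NDVI decline and Otsu-thresholded inundation), here is a minimal sketch assuming numpy arrays of surface reflectance and Sentinel-1 VV backscatter; the band names, function names, and the damage cutoff are our assumptions, not the study's parameters:

```python
import numpy as np
from skimage.filters import threshold_otsu

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-9)

def cyclone_damage_masks(nir_pre, red_pre, nir_post, red_post, vv_post):
    """Vegetation-damage and inundation masks, in the spirit of the study.

    dNDVI flags greenness loss; Otsu's automatic threshold on post-storm
    SAR backscatter (vv_post) separates water from land without a
    hand-picked cutoff. All inputs are co-registered 2-D arrays.
    """
    d_ndvi = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
    damage = d_ndvi < -0.1                      # greenness-loss cutoff is ours
    water = vv_post < threshold_otsu(vv_post)   # open water backscatters weakly
    return damage, water
```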

RevDate: 2022-10-17

Saba Raoof S, MAS Durai (2022)

A Comprehensive Review on Smart Health Care: Applications, Paradigms, and Challenges with Case Studies.

Contrast media & molecular imaging, 2022:4822235.

Growth and advancement of Deep Learning (DL) and the Internet of Things (IoT) are figuring their way through the modern world by integrating various technologies in distinct fields, viz., agriculture, manufacturing, energy, transportation, supply chains, cities, and healthcare. Researchers have identified the feasibility of integrating deep learning, cloud, and IoT to enhance overall automation: IoT can extend its application area by utilizing cloud services, and the cloud can in turn extend its applications through data acquired by IoT devices such as sensors, with deep learning used for disease detection and diagnosis. This study summarizes the techniques utilized in smart healthcare, i.e., deep learning, cloud-based IoT applications in smart healthcare, and fog computing in smart healthcare, together with the challenges and issues faced by smart healthcare. It takes a wide scope, rather than targeting a particular application such as patient monitoring, disease detection, or diagnosis, and outlines the technologies used for developing such smart systems. Smart health improves quality of life: convenient and comfortable living is made possible by the services provided by smart healthcare systems (SHSs). Since healthcare is a massive area with enormous data and a broad spectrum of diseases associated with different organs, immense research can be done to overcome the drawbacks of traditional healthcare methods. Deep learning with IoT can effectively be applied in the healthcare sector to automate diagnosis and treatment, even remotely in rural areas. Applications may include disease prevention and diagnosis, fitness and patient monitoring, food monitoring, mobile health, telemedicine, emergency systems, assisted living, self-management of chronic diseases, and so on.

RevDate: 2022-10-17

Coelho R, Braga R, David JMN, et al (2022)

A Blockchain-Based Architecture for Trust in Collaborative Scientific Experimentation.

Journal of grid computing, 20(4):35.

In scientific collaboration, data sharing and the exchange of ideas and results are essential to knowledge construction and the development of science. Hence, we must guarantee interoperability, privacy, traceability (reinforcing transparency), and trust. Provenance has been widely recognized for providing a history of the steps taken in scientific experiments, so we must support traceability, which assists in the reproducibility of scientific results. One technology that can enhance trust in collaborative scientific experimentation is blockchain. This work proposes an architecture, named BlockFlow, based on blockchain, provenance, and cloud infrastructure to bring trust and traceability to the execution of collaborative scientific experiments. The proposed architecture is implemented on Hyperledger, and a scenario on the genomic sequencing of the SARS-CoV-2 coronavirus is used to evaluate it, discussing the benefits of providing traceability and trust in collaborative scientific experimentation. Furthermore, the architecture addresses the heterogeneity of shared data, facilitating their interpretation and analysis by geographically distributed researchers. Through a blockchain-based architecture that supports provenance capture, we can enhance data sharing, traceability, and trust in collaborative scientific experiments.

RevDate: 2022-10-14

Kang G, YG Kim (2022)

Secure Collaborative Platform for Health Care Research in an Open Environment: Perspective on Accountability in Access Control.

Journal of medical Internet research, 24(10):e37978 pii:v24i10e37978.

BACKGROUND: With the recent use of IT in health care, a variety of eHealth data are increasingly being collected and stored by national health agencies. As these eHealth data can advance the modern health care system and make it smarter, many researchers want to use these data in their studies. However, using eHealth data brings about privacy and security concerns. The analytical environment that supports health care research must also consider many requirements. For these reasons, countries generally provide research platforms for health care, but some data providers (eg, patients) are still concerned about the security and privacy of their eHealth data. Thus, a more secure platform for health care research that guarantees the utility of eHealth data while focusing on its security and privacy is needed.

OBJECTIVE: This study aims to implement a research platform for health care called the health care big data platform (HBDP), which is more secure than previous health care research platforms. The HBDP uses attribute-based encryption to achieve fine-grained access control and encryption of stored eHealth data in an open environment. Moreover, in the HBDP, platform administrators can perform the appropriate follow-up (eg, block illegal users) and monitoring through a private blockchain. In other words, the HBDP supports accountability in access control.

METHODS: We first identified potential security threats in the health care domain. We then defined the security requirements to minimize the identified threats. In particular, the requirements were defined based on the security solutions used in existing health care research platforms. We then proposed the HBDP, which meets defined security requirements (ie, access control, encryption of stored eHealth data, and accountability). Finally, we implemented the HBDP to prove its feasibility.

RESULTS: This study carried out case studies for illegal user detection via the implemented HBDP based on specific scenarios related to the threats. As a result, the platform detected illegal users appropriately via the security agent. Furthermore, in the empirical evaluation of massive data encryption (eg, 100,000 rows with 3 sensitive columns within 46 columns), column-level encryption, full encryption after column-level encryption, and full decryption including column-level decryption took approximately 3 minutes, 1 minute, and 9 minutes, respectively. In the blockchain, average latencies and throughputs in 1Org with 2Peers reached approximately 18 seconds and 49 transactions per second (TPS) in read mode and approximately 4 seconds and 120 TPS in write mode at a send rate of 300 TPS.

CONCLUSIONS: The HBDP enables fine-grained access control and secure storage of eHealth data via attribute-based encryption cryptography. It also provides nonrepudiation and accountability through the blockchain. Therefore, we consider that our proposal provides a sufficiently secure environment for the use of eHealth data in health care research.
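
For readers unfamiliar with column-level encryption, a minimal sketch follows. Symmetric Fernet from the `cryptography` package stands in for the HBDP's attribute-based encryption, which requires a dedicated ABE library; the column names and sensitive-column set are invented:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

SENSITIVE = {"name", "diagnosis", "genome_id"}   # hypothetical sensitive columns

def encrypt_row(row: dict) -> dict:
    """Encrypt only the sensitive columns, leaving the rest queryable."""
    return {k: f.encrypt(v.encode()).decode() if k in SENSITIVE else v
            for k, v in row.items()}

row = {"name": "Jane Doe", "age": 52, "diagnosis": "I10", "genome_id": "G-001"}
enc = encrypt_row(row)
print(enc["age"], enc["name"][:16], "...")        # age stays plain, name is ciphertext
```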

RevDate: 2022-10-14

Konstantinou C, Xanthopoulos A, Tsaras K, et al (2022)

Vaccination Coverage Against Human Papillomavirus in Female Students in Cyprus.

Cureus, 14(9):e28936.

BACKGROUND: Human papillomavirus (HPV) has been associated with the development of several cancers and cardiovascular diseases in females. Nevertheless, data on vaccination coverage against HPV are still scarce in several countries, including Cyprus. The main target of the present research was to assess the vaccination status of female students in Cyprus.

METHODOLOGY: An online survey was conducted via a cloud-based short questionnaire on Google Forms. Students with a known email address were initially invited via email to complete the survey. The questionnaire was distributed to 340 students, aged 18-49 years, who lived in Cyprus (60% response rate).

RESULTS: The total vaccination coverage was 38.1%. The mean age of participants was 23.5 (±6.5) years. The major reason for non-vaccination was the belief that participants were not at risk of serious illness from HPV infection (22%), followed by the reported lack of time to get vaccinated (16%) and inertia (13%). Students informed about the safety of HPV vaccines by electronic sources (television, websites, and blogs) had lower vaccination coverage than those informed by alternative sources (primary health centers, family doctors, or obstetricians) (relative risk (RR) = 1.923, 95% confidence interval (CI) = 0.9669-3.825; p = 0.033). No significant difference in vaccination rates was observed between participants from schools of health sciences and those from financial schools (RR = 1.082, 95% CI = 0.7574-1.544; p = 0.3348).

CONCLUSIONS: Public health policy interventions and education on HPV vaccines are effective ways to improve awareness and acceptance of HPV vaccination among female students and to improve HPV vaccination coverage in Cyprus.

RevDate: 2022-10-14

Shumba AT, Montanaro T, Sergi I, et al (2022)

Leveraging IoT-Aware Technologies and AI Techniques for Real-Time Critical Healthcare Applications.

Sensors (Basel, Switzerland), 22(19): pii:s22197675.

Personalised healthcare has seen significant improvements due to the introduction of health monitoring technologies that allow wearable devices to unobtrusively monitor physiological parameters such as heart health, blood pressure, sleep patterns, and blood glucose levels, among others. Additionally, utilising advanced sensing technologies based on flexible and innovative biocompatible materials in wearable devices allows high accuracy and precision measurement of biological signals. Furthermore, applying real-time Machine Learning algorithms to highly accurate physiological parameters allows precise identification of unusual patterns in the data to provide health event predictions and warnings for timely intervention. However, in the predominantly adopted architectures, health event predictions based on Machine Learning are typically obtained by leveraging Cloud infrastructures characterised by shortcomings such as delayed response times and privacy issues. Fortunately, recent works highlight that a new paradigm based on Edge Computing technologies and on-device Artificial Intelligence significantly mitigates the latency and privacy issues. Applying this new paradigm to personalised healthcare architectures can significantly improve their efficiency and efficacy. Therefore, this paper reviews existing IoT healthcare architectures that utilise wearable devices and subsequently presents a scalable and modular system architecture that leverages emerging technologies to solve identified shortcomings. The defined architecture includes ultrathin, skin-compatible, flexible, high precision piezoelectric sensors, low-cost communication technologies, on-device intelligence, Edge Intelligence, and Edge Computing technologies. To provide development guidelines and define a consistent reference architecture for improved scalable wearable IoT-based critical healthcare architectures, this manuscript outlines the essential functional and non-functional requirements based on deductions from existing architectures and emerging technology trends. The presented system architecture can be applied to many scenarios, including ambient assisted living, where continuous surveillance and the issuance of timely warnings can afford independence to the elderly and chronically ill. We conclude that the distribution and modularity of architecture layers, local AI-based elaboration, and data packaging consistency are the most essential functional requirements for critical healthcare application use cases. We also identify fast response time, utility, comfort, and low cost as the essential non-functional requirements for the defined system architecture.

RevDate: 2022-10-14

Shahzad K, Zia T, EU Qazi (2022)

A Review of Functional Encryption in IoT Applications.

Sensors (Basel, Switzerland), 22(19): pii:s22197567.

The Internet of Things (IoT) represents a growing aspect of how entities, including humans and organizations, are likely to connect with others in their public and private interactions. The exponential rise in the number of IoT devices, resulting from ever-growing IoT applications, also gives rise to new opportunities for exploiting potential security vulnerabilities. In contrast to conventional cryptosystems, frameworks that incorporate fine-grained access control offer better opportunities for protecting valuable assets, especially when the connectivity level is dense. Functional encryption is an exciting new paradigm of public-key encryption that supports fine-grained access control, generalizing a range of existing fine-grained access control mechanisms. This survey reviews the recent applications of functional encryption and the major cryptographic primitives that it covers, identifying areas where the adoption of these primitives has had the greatest impact. We first provide an overview of different application areas where these access control schemes have been applied. Then, an in-depth survey of how the schemes are used in a multitude of applications related to IoT is given, rendering a potential vision of security and integrity that this growing field promises. Towards the end, we identify some research trends and state the open challenges that current developments face for a secure IoT realization.

RevDate: 2022-10-14

Qin M, Liu T, Hou B, et al (2022)

A Low-Latency RDP-CORDIC Algorithm for Real-Time Signal Processing of Edge Computing Devices in Smart Grid Cyber-Physical Systems.

Sensors (Basel, Switzerland), 22(19): pii:s22197489.

Smart grids are expanding in scale as their equipment grows more complex. Edge computing is gradually replacing conventional cloud computing due to its low latency, low power consumption, and high reliability. The CORDIC algorithm supports high-speed real-time processing and is well suited to hardware accelerators in edge computing devices. However, the iterative calculation method of the conventional CORDIC algorithm leads to problems such as complex structure and high hardware resource consumption. In this paper, we propose an RDP-CORDIC algorithm which pre-computes all micro-rotation directions and transforms the conventional single-stage iterative structure into a combined three-stage and multi-stage iterative structure, thereby solving the problems of the conventional CORDIC algorithm, namely its many iterations and high consumption. An accuracy compensation algorithm for the direction prediction constant is also proposed to solve the problem of high ROM consumption in high-precision implementations of the RDP-CORDIC algorithm. The experimental results showed that the RDP-CORDIC algorithm achieved faster computation speed and lower resource consumption with higher guaranteed accuracy than other CORDIC algorithms. Therefore, the RDP-CORDIC algorithm proposed in this paper may effectively increase computation performance while reducing the power and resource consumption of edge computing devices in smart grid systems.
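
For contrast with the paper's RDP variant, a conventional rotation-mode CORDIC iteration looks like the following sketch (pure Python, floating point for clarity; a hardware version would use fixed-point shifts and adds):

```python
import math

def cordic_sin_cos(theta, n_iter=24):
    """Conventional rotation-mode CORDIC for sin/cos.

    Each iteration rotates by +/- atan(2**-i), decided by the residual
    angle; K compensates the accumulated rotation gain. The RDP-CORDIC
    in the paper instead predicts all rotation directions up front.
    """
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0                # micro-rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K                            # (sin, cos)

print(cordic_sin_cos(0.5))   # approx (0.4794, 0.8776)
```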

RevDate: 2022-10-14

Busaeed S, Katib I, Albeshri A, et al (2022)

LidSonic V2.0: A LiDAR and Deep-Learning-Based Green Assistive Edge Device to Enhance Mobility for the Visually Impaired.

Sensors (Basel, Switzerland), 22(19): pii:s22197435.

Over a billion people around the world are disabled, among whom 253 million are visually impaired or blind, and this number is greatly increasing due to ageing, chronic diseases, and poor environments and health. Despite many proposals, the current devices and systems lack maturity and do not completely fulfill user requirements and satisfaction. Increased research activity in this field is required in order to encourage the development, commercialization, and widespread acceptance of low-cost and affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach using a LiDAR with a servo motor and an ultrasonic sensor to collect data and predict objects using deep learning for environment perception and navigation. We adopted this approach using a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles for the visually impaired. The LidSonic system consists of an Arduino Uno edge computing device integrated into the smart glasses and a smartphone app that transmits data via Bluetooth. Arduino gathers data, operates the sensors on the smart glasses, detects obstacles using simple data processing, and provides buzzer feedback to visually impaired users. The smartphone application collects data from Arduino, detects and classifies items in the spatial environment, and gives spoken feedback to the user on the detected objects. In comparison to image-processing-based glasses, LidSonic uses far less processing time and energy to classify obstacles using simple LiDAR data, according to several integer measurements. We comprehensively describe the proposed system's hardware and software design, having constructed their prototype implementations and tested them in real-world environments. Using the open platforms, WEKA and TensorFlow, the entire LidSonic system is built with affordable off-the-shelf sensors and a microcontroller board costing less than USD 80. Essentially, we provide designs of an inexpensive, miniature green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. Our approach enables faster inference and decision-making using relatively low energy with smaller data sizes, as well as faster communications for edge, fog, and cloud computing.

RevDate: 2022-10-14

Lei L, Kou L, Zhan X, et al (2022)

An Anomaly Detection Algorithm Based on Ensemble Learning for 5G Environment.

Sensors (Basel, Switzerland), 22(19): pii:s22197436.

With the advent of the digital information age, new data services such as virtual reality, the industrial Internet, and cloud computing have proliferated in recent years. As a result, operator demand on 5G bearer networks has increased, requiring features such as high transmission capacity, ultra-long transmission distance, network slicing, and intelligent management and control. Software-defined networking, as a new network architecture, intends to increase network flexibility and agility and can better satisfy the demands of 5G networks for network slicing. Nevertheless, software-defined networking still faces the challenge of network intrusion. We propose an abnormal traffic detection method based on stacking and a self-attention mechanism, which makes up for ensemble learning's inability to track long-term dependencies between data samples. Our method utilizes a self-attention mechanism and a convolutional network to automatically learn long-term associations between traffic samples and provide them to downstream tasks in a sample embedding. In addition, we design a novel stacking ensemble method, which passes the sample embedding and the predicted values of the heterogeneous base learners through a fusion module to obtain the final outlier results. This paper conducts experiments on abnormal traffic datasets in a software-defined network environment, calculates precision, recall, and F1-score, and compares and analyzes them against other algorithms. The experimental results show that the method designed in this paper achieves 0.9972, 0.9996, and 0.9984 in precision, recall, and F1-score, respectively, outperforming the comparison methods.
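
The fusion-of-heterogeneous-base-learners idea can be illustrated with scikit-learn's stock stacking classifier. This toy sketch omits the paper's self-attention embedding and uses synthetic data in place of SDN traffic; all names and sizes are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# synthetic stand-in for labelled traffic features
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", LinearSVC(random_state=0))],  # heterogeneous base learners
    final_estimator=LogisticRegression(),             # fusion-module stand-in
    cv=5)
stack.fit(X_tr, y_tr)
print(f"accuracy: {stack.score(X_te, y_te):.4f}")
```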

RevDate: 2022-10-14

Yi F, Zhang L, Xu L, et al (2022)

WSNEAP: An Efficient Authentication Protocol for IIoT-Oriented Wireless Sensor Networks.

Sensors (Basel, Switzerland), 22(19): pii:s22197413.

With the development of the Industrial Internet of Things (IIoT), industrial wireless sensors need to upload the collected private data to the cloud servers, resulting in a large amount of private data being exposed on the Internet. Private data are vulnerable to hacking. Many complex wireless-sensor-authentication protocols have been proposed. In this paper, we proposed an efficient authentication protocol for IIoT-oriented wireless sensor networks. The protocol introduces the PUF chip, and uses the Bloom filter to save and query the challenge-response pairs generated by the PUF chip. It ensures the security of the physical layer of the device and reduces the computing cost and communication cost of the wireless sensor side. The protocol introduces a pre-authentication mechanism to achieve continuous authentication between the gateway and the cloud server. The overall computational cost of the protocol is reduced. Formal security analysis and informal security analysis proved that our proposed protocol has more security features. We implemented various security primitives using the MIRACL cryptographic library and GMP large number library. Our proposed protocol was compared in-depth with related work. Detailed experiments show that our proposed protocol significantly reduces the computational cost and communication cost on the wireless sensor side and the overall computational cost of the protocol.
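
The Bloom filter used to save and query PUF challenge-response pairs can be sketched in a few lines; this is a generic implementation with invented sizes, not the protocol's tuned parameters:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for PUF challenge-response pairs.

    Membership tests never yield false negatives; the false-positive
    rate is tuned via m (bits) and k (hash functions)."""
    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):                    # k independent hash positions
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add(b"challenge-42" + b"response-af39")         # store one CRP
print(b"challenge-42" + b"response-af39" in bf)    # True
print(b"challenge-43" + b"response-0000" in bf)    # almost surely False
```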

RevDate: 2022-10-14

Thirumalaisamy M, Basheer S, Selvarajan S, et al (2022)

Interaction of Secure Cloud Network and Crowd Computing for Smart City Data Obfuscation.

Sensors (Basel, Switzerland), 22(19): pii:s22197169.

There can be many inherent issues in the process of managing cloud infrastructure and the cloud platform. The cloud platform manages cloud software and legality issues in making contracts, and it handles the process of managing cloud software services and legal contract-based segmentation. In this paper, we tackle these issues directly with feasible solutions. For these constraints, the Averaged One-Dependence Estimators (AODE) classifier and the SELECT Applicable Only to Parallel Server (SELECT-APSL ASA) method are proposed to separate location-related data; ASA combines AODE with SELECT Applicable Only to Parallel Server. The AODE classifier separates the smart city data based on a hybrid data obfuscation technique, in which 50% of the raw data is managed and 50% of hospital data is masked using the proposed transmission. The analysis of energy consumption before the cryptosystem shows about 71.66% of total packets delivered compared with existing algorithms; after the cryptosystem, consumption is about 47.34% compared to existing state-of-the-art algorithms. The average energy consumption before data obfuscation decreased by 2.47%, and after data obfuscation it was reduced by 9.90%. The makespan time before data obfuscation decreased by 33.71%; after data obfuscation, it decreased by 1.3% compared to existing state-of-the-art algorithms. These results show the strength of our methodology.

RevDate: 2022-10-13

Yang DM, Chang TJ, Hung KF, et al (2022)

Smart healthcare: A prospective future medical approach for COVID-19.

Journal of the Chinese Medical Association : JCMA pii:02118582-990000000-00100 [Epub ahead of print].

COVID-19 has greatly affected human life for over 3 years. In this review, we focus on smart healthcare solutions that address major requirements for coping with the COVID-19 pandemic, including (1) the continuous monitoring of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), (2) patient stratification with distinct short-term outcomes (e.g. mild or severe diseases) and long-term outcomes (e.g. long COVID), and (3) adherence to medication and treatments for patients with COVID-19. Smart healthcare often utilizes medical artificial intelligence (AI) and cloud computing and integrates cutting-edge biological and optoelectronic techniques. These are valuable technologies for addressing the unmet needs in the management of COVID. By leveraging deep/machine learning (DL/ML) capabilities and big data, medical AI can perform precise prognosis predictions and provide reliable suggestions for physicians' decision-making. Through the assistance of the Internet of Medical Things (IoMT), which encompasses wearable devices, smartphone apps, Internet-based drug delivery systems, and telemedicine technologies, the status of mild cases can be continuously monitored and medications provided at home without the need for hospital care. In cases that develop into severe cases, emergency feedback can be provided through the hospital for rapid treatment. Smart healthcare can possibly prevent the development of severe COVID-19 cases and therefore lower the burden on intensive care units.

RevDate: 2022-10-13

Li H (2022)

Cloud Computing Image Processing Application in Athlete Training High-Resolution Image Detection.

Computational intelligence and neuroscience, 2022:7423411.

The rapid development of Internet of Things mobile application technology and artificial intelligence has given birth to many services that meet the needs of modern life, such as augmented reality, face recognition, and language recognition and translation, which are applied across various fields alongside other information communication and processing services and are used on mobile phone, computer, and tablet clients. Terminal equipment is subject to the ultralow latency and low energy consumption requirements of these applications. Therefore, the gap between resource-demanding application services and resource-limited mobile devices poses a major problem for the current and future development of IoT mobile applications. Based on the local image features of depth images, this paper designs an image detection method for athletes' motion posture. First, according to the local image characteristics, the depth image of the athlete obtained through Kinect is converted into skeleton point data. Next, a three-step search algorithm is used to perform block-matching calculations on the athlete's skeleton point image to predict the athlete's movement posture. At the same time, movement behavior is recognized using the Euclidean distances between points in the skeleton image. According to the experimental results, the image detection method designed in this paper can effectively avoid interference from external environmental factors such as sunlight, and it shows excellent accuracy and robustness in predicting athletes' movement postures and recognizing actions. The method can simplify a series of calibration tasks in the initial stage of 3D video surveillance and infer and recognize the posture of the observation target in real time, which has good application value and offers a useful reference for similar tasks.
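
As a sketch of the Euclidean-distance feature that the recognition step relies on, the pairwise joint-distance descriptor below is a common formulation; the joint count and function name are our assumptions, not the paper's exact design:

```python
import numpy as np

def joint_distance_features(skeleton):
    """Pairwise Euclidean distances between skeleton joints.

    skeleton: (J, 3) array of joint coordinates (e.g. from Kinect).
    The flattened upper triangle of the distance matrix is a pose
    descriptor that simple classifiers can use for action recognition.
    """
    diff = skeleton[:, None, :] - skeleton[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # (J, J) distance matrix
    iu = np.triu_indices(len(skeleton), k=1)
    return dist[iu]                                 # unique joint pairs only

pose = np.random.rand(20, 3)                        # 20 joints, Kinect-style
print(joint_distance_features(pose).shape)          # (190,)
```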

RevDate: 2022-10-10

B D, M L, R A, et al (2022)

A Novel Feature Selection with Hybrid Deep Learning Based Heart Disease Detection and Classification in the e-Healthcare Environment.

Computational intelligence and neuroscience, 2022:1167494.

With the advancements in data mining, wearables, and cloud computing, online disease diagnosis services have been widely employed in the e-healthcare environment and have improved service quality. E-healthcare services help reduce the death rate through earlier identification of disease. Meanwhile, heart disease (HD) is a deadly disorder, and patient survival depends on its early diagnosis; early HD diagnosis and categorization play a key role in the analysis of clinical data. In the context of e-healthcare, we provide a novel feature selection with hybrid deep learning-based heart disease detection and classification (FSHDL-HDDC) model. The two primary preprocessing processes of the FSHDL-HDDC approach are data normalisation and the replacement of missing values. The FSHDL-HDDC method also uses a feature selection method based on the elite opposition-based squirrel search algorithm (EO-SSA) to determine the optimal subset of features. Moreover, an attention-based convolutional neural network (ACNN) with long short-term memory (LSTM), called the ACNN-LSTM model, is utilized for the detection of HD from medical data. An extensive experimental study was performed to verify the improved classification performance of the FSHDL-HDDC technique. A detailed comparison study showed that the FSHDL-HDDC method outperforms existing techniques in terms of different performance measures, reaching a maximum accuracy of 0.9772.
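
A compact stand-in for an attention-based CNN + LSTM classifier over tabular clinical features, sketched in PyTorch; the layer sizes, attention form, and feature count (13, as in common heart-disease datasets) are illustrative choices, not the FSHDL-HDDC configuration:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Toy attention-based CNN + LSTM over tabular features treated
    as a 1-D sequence."""
    def __init__(self, n_features, n_classes=2, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.attn = nn.Linear(16, 1)                 # simple additive attention
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                            # x: (batch, n_features)
        h = self.conv(x.unsqueeze(1)).transpose(1, 2)    # (batch, seq, 16)
        w = torch.softmax(self.attn(h), dim=1)           # weights over positions
        out, _ = self.lstm(h * w)                        # re-weighted sequence
        return self.fc(out[:, -1])                       # classify from last state

model = CNNLSTM(n_features=13)
print(model(torch.randn(4, 13)).shape)               # torch.Size([4, 2])
```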

RevDate: 2022-10-10

Chen X, X Huang (2022)

Application of Price Competition Model Based on Computational Neural Network in Risk Prediction of Transnational Investment.

Computational intelligence and neuroscience, 2022:8906385.

Aiming at the scenario where edge devices rely on cloud servers for collaborative computing, this paper proposes an efficient edge-cloud collaborative reasoning method, with an optimal division point selection algorithm to meet an application's specific requirements for delay or accuracy. A multichannel supply chain price game model is constructed, and nonlinear dynamics theory is introduced into the study of the multichannel supply chain market. According to the actual competition situation, retailers' different business strategies are considered in the modeling, which brings the model closer to actual competition. Taking retailer profit as an indicator, the influence of chaos on market performance is analyzed. Compared with previous studies, this thesis uses nonlinear theory to better reveal the operating laws of the economic system. As a case study, this paper examines the acquisition of Swedish company B by company A in the financial industry. It concludes that company B currently faces financial difficulties, but its brand and technical advantages are far superior to company A's. The indirect financial risk index of company B, namely the investment environment, is analyzed; the final investment environment score of the country where company B is located is 90 points, an excellent grade. Combining the investment environment score with the alarm prediction score, it is concluded that company A's post-merger financial risk warning level is at serious alarm.

RevDate: 2022-10-07

Zhao Y, Rokhani FZ, Sazlina SG, et al (2022)

Defining the concepts of a smart nursing home and its potential technology utilities that integrate medical services and are acceptable to stakeholders: a scoping review.

BMC geriatrics, 22(1):787.

BACKGROUND AND OBJECTIVES: Smart technology in nursing home settings has the potential to elevate an operation that manages more significant number of older residents. However, the concepts, definitions, and types of smart technology, integrated medical services, and stakeholders' acceptability of smart nursing homes are less clear. This scoping review aims to define a smart nursing home and examine the qualitative evidence on technological feasibility, integration of medical services, and acceptability of the stakeholders.

METHODS: Comprehensive searches were conducted on stakeholders' websites (Phase 1) and 11 electronic databases (Phase 2), for existing concepts of smart nursing home, on what and how technologies and medical services were implemented in nursing home settings, and acceptability assessment by the stakeholders. The publication year was inclusive from January 1999 to September 2021. The language was limited to English and Chinese. Included articles must report nursing home settings related to older adults ≥ 60 years old with or without medical demands but not bed-bound. Technology Readiness Levels were used to measure the readiness of new technologies and system designs. The analysis was guided by the Framework Method and the smart technology adoption behaviours of elder consumers theoretical model. The results were reported according to the PRISMA-ScR.

RESULTS: A total of 177 publications (13 website documents and 164 journal articles) were selected. Smart nursing homes are technology-assisted nursing homes that allow the life enjoyment of their residents. They use IoT, computing technologies, cloud computing, big data and AI, information management systems, and digital health to integrate medical services in monitoring abnormal events, assisting daily living, conducting teleconsultation, managing health information, and improving the interaction between providers and residents. Fifty-five percent of the new technologies were ready for use in nursing homes (levels 6-7), and the remainder had demonstrated technical feasibility (levels 1-5). Healthcare professionals with higher education, better tech-savviness, and fewer years at work, and older adults with more severe illnesses, were more accepting of smart technologies.

CONCLUSIONS: Smart nursing homes with integrated medical services have great potential to improve the quality of care and ensure older residents' quality of life.

RevDate: 2022-10-07

Chen L, Yu L, Liu Y, et al (2022)

Space-time-regulated imaging analyzer for smart coagulation diagnosis.

Cell reports. Medicine pii:S2666-3791(22)00320-2 [Epub ahead of print].

Intelligent blood coagulation diagnosis is needed to meet today's large, time-sensitive clinical caseloads, because it offers efficient and automated diagnoses. Herein, a method is reported and validated to realize it through artificial intelligence (AI)-assisted identification of optical clotting biophysics (OCB) properties. Image differential calculation is used for precise acquisition of OCB properties with elimination of initial differences, and a space-time regulation strategy allows on-demand space-time OCB property identification and enables diverse blood function diagnoses. The integrated use of smartphones and cloud computing offers user-friendly automated analysis for accurate and convenient diagnoses. Prospective assays of clinical cases (n = 41) show that the system achieves 97.6%, 95.1%, and 100% accuracy for coagulation factors, fibrinogen function, and comprehensive blood coagulation diagnoses, respectively. This method should enable lower-cost and more convenient diagnoses and provide a path toward finding potential diagnostic markers.

RevDate: 2022-10-07

Fu Z (2022)

Computer cyberspace security mechanism supported by cloud computing.

PloS one, 17(10):e0271546 pii:PONE-D-22-07534.

To improve the cybersecurity of Cloud Computing (CC) systems, this paper proposes a Network Anomaly Detection (NAD) model based on the Fuzzy C-Means (FCM) clustering algorithm. Secondly, a Cybersecurity Assessment Model (CAM) based on Grey Relational Grade (GRG) is constructed. Finally, combined with the Rivest-Shamir-Adleman (RSA) algorithm, this work proposes a CC network-oriented data encryption technology, selects different data sets for the different models, and tests each model in designed experiments. The results show that the average Correct Detection Rate (CDR) of the NAD model for different types of abnormal data is 93.33%, while the average False Positive Rate (FPR) and average Unreported Rate (UR) are 6.65% and 16.27%, respectively. Thus, the NAD model can ensure high detection accuracy given sufficient data. Meanwhile, the cybersecurity situation predicted by the CAM agrees well with the actual situation: the error between the average predicted and actual cybersecurity situation values is only 0.82%, so the prediction accuracy is high. The RSA algorithm keeps the average encryption time for very large texts to about 12 s; decryption takes slightly longer but remains within a reasonable range. For different-sized texts, the encryption time is maintained within 0.5 s. This work aims to provide important technical support for anomaly detection, overall security situation analysis, and data transmission security protection of CC systems to improve their cybersecurity.
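
Since FCM is the core of the NAD model, a minimal Fuzzy C-Means sketch follows; the random initialization, fuzzifier m = 2, and fixed iteration count are standard defaults rather than the paper's settings:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means: alternate membership and centroid updates.

    Returns (centers, U), where U[i, j] is sample i's membership in
    cluster j and each row of U sums to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))              # closer centers -> higher membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(5, 1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)                                      # near (0, 0) and (5, 5)
```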

RevDate: 2022-10-07

Zhang C, Cheng T, Li D, et al (2022)

Low-host double MDA workflow for uncultured ASFV positive blood and serum sample sequencing.

Frontiers in veterinary science, 9:936781.

African swine fever (ASF) is a highly lethal and contagious disease caused by African swine fever virus (ASFV). Whole-genome sequencing of ASFV is necessary to study its mutation and recombination and to trace its transmission. Uncultured samples contain a considerable amount of background DNA, which wastes sequencing throughput, storage space, and computing resources, and the sequencing methods previously attempted for uncultured samples have various drawbacks. In this study, we improved a C18 spacer MDA (Multiple Displacement Amplification)-combined host DNA exhaustion strategy to remove background DNA and suit both NGS and TGS sequencing. Using this workflow, we successfully sequenced two uncultured ASFV-positive samples. The results show that this method can significantly reduce the percentage of background DNA. We also developed software that performs real-time base calling and analysis at set intervals on ASFV TGS sequencing reads on a cloud server.

RevDate: 2022-10-05

Guo MH, Liu ZN, Mu TJ, et al (2022)

Beyond Self-Attention: External Attention Using Two Linear Layers for Visual Tasks.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

Attention mechanisms, especially self-attention, have played an increasingly important role in deep feature representation for visual tasks. Self-attention updates the feature at each position by computing a weighted sum of features using pair-wise affinities across all positions to capture the long-range dependency within a single sample. However, self-attention has quadratic complexity and ignores potential correlation between different samples. This paper proposes a novel attention mechanism which we call external attention, based on two external, small, learnable, shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers; it conveniently replaces self-attention in existing popular architectures. External attention has linear complexity and implicitly considers the correlations between all data samples. We further incorporate the multi-head mechanism into external attention to provide an all-MLP architecture, external attention MLP (EAMLP), for image classification. Extensive experiments on image classification, object detection, semantic segmentation, instance segmentation, image generation, and point cloud analysis reveal that our method provides results comparable or superior to the self-attention mechanism and some of its variants, with much lower computational and memory costs.
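
The mechanism is simple enough to state in code. A sketch following the abstract's description (two cascaded linear layers acting as shared memories, with double normalization), in PyTorch; the memory size s = 64 is an arbitrary choice here:

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention: two small, learnable, shared memories realised
    as cascaded linear layers, giving linear complexity in token count."""
    def __init__(self, d_model, s=64):
        super().__init__()
        self.mk = nn.Linear(d_model, s, bias=False)   # memory "key" unit
        self.mv = nn.Linear(s, d_model, bias=False)   # memory "value" unit

    def forward(self, x):                 # x: (batch, n_tokens, d_model)
        attn = self.mk(x)                 # (batch, n, s)
        attn = attn.softmax(dim=1)        # normalise over tokens...
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # ...then memory slots
        return self.mv(attn)              # (batch, n, d_model)

x = torch.randn(2, 196, 512)
print(ExternalAttention(512)(x).shape)    # torch.Size([2, 196, 512])
```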

RevDate: 2022-10-04

Zhou Y, Hu Z, Geng Q, et al (2022)

Monitoring and analysis of desertification surrounding Qinghai Lake (China) using remote sensing big data.

Environmental science and pollution research international [Epub ahead of print].

Desertification is one of the most serious ecological and environmental problems in the world, and monitoring its spatiotemporal dynamics is crucial for its control. The region around Qinghai Lake, in the northeastern part of the Qinghai-Tibet Plateau in China, is a special ecological function area and a climate-change-sensitive area, making its environmental conditions a great concern. Using cloud computing via Google Earth Engine (GEE), we collected Landsat 5 TM, Landsat 8 OLI/TIRS, and MODIS albedo images of the region around Qinghai Lake from 2000 to 2020, derived land surface albedo (Albedo) and the normalized difference vegetation index (NDVI), and built a remote sensing monitoring model of desertification. Our results showed that a desertification difference index based on the Albedo-NDVI feature space can reflect the degree of desertification in the region around Qinghai Lake. GEE offers significant advantages, such as massive data processing and long-term dynamic monitoring. The area of desertified land in the study area fluctuated downward from 2000 to 2020, and the overall desertification status improved. Natural factors, such as the climate shifting from warm-dry to warm-wet and decreased wind speed, together with human factors, improved the desertification situation. The findings indicate that desertification in the region around Qinghai Lake has been effectively controlled, and the overall desertification trend is improving.
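
As a hedged sketch of what such a GEE workflow can look like in the Python API: NDVI from the red/NIR bands, albedo from one common shortwave approximation (Liang, 2001), and a desertification difference index of the form DDI = a*NDVI - Albedo, which is customary in this literature; the band scaling, coefficients, region, and slope value below are assumptions, not the paper's exact settings.

    import ee
    ee.Initialize()

    roi = ee.Geometry.Rectangle([99.5, 36.2, 101.2, 37.5])  # rough Qinghai Lake box
    img = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
           .filterBounds(roi)
           .filterDate('2020-06-01', '2020-09-30')
           .median()
           .multiply(0.0000275).add(-0.2))      # scale Collection-2 SR to reflectance

    ndvi = img.normalizedDifference(['SR_B5', 'SR_B4']).rename('NDVI')
    albedo = img.expression(
        '0.356*B + 0.130*R + 0.373*N + 0.085*S1 + 0.072*S2 - 0.0018',
        {'B': img.select('SR_B2'), 'R': img.select('SR_B4'),
         'N': img.select('SR_B5'), 'S1': img.select('SR_B6'),
         'S2': img.select('SR_B7')}).rename('ALBEDO')

    a = 8.0   # slope of the Albedo-NDVI regression (placeholder value)
    ddi = ndvi.multiply(a).subtract(albedo).rename('DDI')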

RevDate: 2022-10-03

Zhou Y, MG Varzaneh (2022)

Efficient and scalable patients clustering based on medical big data in cloud platform.

Journal of cloud computing (Heidelberg, Germany), 11(1):49.

With the outbreak of the COVID-19 pandemic worldwide, the number of patients is increasing rapidly all over the world, which poses a major risk and challenge to the maintenance of public healthcare. In this situation, quick integration and analysis of patients' medical records on a cloud platform are valuable for accurately recognizing and scientifically diagnosing the health conditions of potential patients. However, because the large volume of patient medical data is distributed across different platforms (e.g., multiple hospitals), integrating these data for patient clustering and analysis in a time-efficient, scalable, and privacy-preserving manner on a cloud platform is still a challenging task. Motivated by this fact, a time-efficient, scalable, and privacy-guaranteed patient clustering method for cloud platforms is proposed in this work. Finally, we demonstrate the competitive advantages of our method via a set of simulated experiments; comparisons with competitive methods from the current literature confirm the feasibility of our proposal.

RevDate: 2022-10-03

Moser N, Yu LS, Rodriguez Manzano J, et al (2022)

Quantitative detection of dengue serotypes using a smartphone-connected handheld lab-on-chip platform.

Frontiers in bioengineering and biotechnology, 10:892853 pii:892853.

Dengue is one of the most prevalent infectious diseases in the world. Rapid, accurate, and scalable diagnostics are key to patient management and epidemiological surveillance of the dengue virus (DENV); however, current technologies either do not achieve the required clinical sensitivity and specificity or rely on large laboratory equipment. In this work, we report the translation of our smartphone-connected handheld Lab-on-Chip (LoC) platform for the quantitative detection of two dengue serotypes. At its core, the approach combines Complementary Metal-Oxide-Semiconductor (CMOS) microchip technology, integrating an array of 78 × 56 potentiometric sensors, with a label-free reverse-transcriptase loop-mediated isothermal amplification (RT-LAMP) assay. The platform communicates with a smartphone app that synchronises results in real time with a secure cloud server hosted by Amazon Web Services (AWS) for epidemiological surveillance. The assay on our LoC platform (RT-eLAMP) was shown to match the performance of a gold-standard fluorescence-based real-time instrument (RT-qLAMP) with synthetic DENV-1 and DENV-2 RNA and extracted RNA from 9 DENV-2 clinical isolates, achieving quantitative detection in under 15 min. To validate the portability of the platform and its geo-tagging capabilities, we conducted the study in laboratories at Imperial College London, UK, and Kaohsiung Medical Hospital, Taiwan. This approach carries high potential for application in low-resource settings at the point of care (PoC).

RevDate: 2022-09-30

Sun J, Endo S, Lin H, et al (2022)

Perturbative Quantum Simulation.

Physical review letters, 129(12):120505.

Approximation based on perturbation theory is the foundation for most of the quantitative predictions of quantum mechanics, whether in quantum many-body physics, chemistry, quantum field theory, or other domains. Quantum computing provides an alternative to the perturbation paradigm, yet state-of-the-art quantum processors with tens of noisy qubits are of limited practical utility. Here, we introduce perturbative quantum simulation, which combines the complementary strengths of the two approaches, enabling the solution of large practical quantum problems using limited noisy intermediate-scale quantum hardware. The use of a quantum processor eliminates the need to identify a solvable unperturbed Hamiltonian, while the introduction of perturbative coupling permits the quantum processor to simulate systems larger than the available number of physical qubits. We present an explicit perturbative expansion that mimics the Dyson series expansion and involves only local unitary operations, and show its optimality over other expansions under certain conditions. We numerically benchmark the method for interacting bosons, fermions, and quantum spins in different topologies, and study different physical phenomena, such as information propagation, charge-spin separation, and magnetism, on systems of up to 48 qubits using only 8 + 1 qubits of quantum hardware. We demonstrate our scheme on the IBM quantum cloud, verifying its noise robustness and illustrating its potential for benchmarking large quantum processors with smaller ones.
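
For context, the textbook Dyson series that the paper's expansion mimics can be written, in the interaction picture with perturbation V(t) and time-ordering operator T (standard reference notation, not the paper's own):

    U(t) = \mathcal{T}\exp\!\Big(-i\int_{0}^{t} V(t')\,\mathrm{d}t'\Big)
         = \sum_{n=0}^{\infty} \frac{(-i)^{n}}{n!}
           \int_{0}^{t}\!\mathrm{d}t_{1}\cdots\int_{0}^{t}\!\mathrm{d}t_{n}\,
           \mathcal{T}\big[V(t_{1})\cdots V(t_{n})\big]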

RevDate: 2022-09-29

Jiang Y, Y Lei (2022)

Implementation of Trusted Traceability Query Using Blockchain and Deep Reinforcement Learning in Resource Management.

Computational intelligence and neuroscience, 2022:6559517.

To better track the source of goods and maintain their quality, the present work uses blockchain technology to establish a system for trusted traceability queries and information management. First, the shortcomings of current traceability systems in the field of agricultural products are analyzed, the application of blockchain technology to traceability systems is studied, and a new blockchain-based model of an agricultural product traceability system is established. Then, the task scheduling problem of resource clusters in cloud computing resource management is studied: the present work extends the task model and uses the deep Q-network algorithm from deep reinforcement learning to solve the various optimization objectives preset in the task scheduling problem, and a resource management algorithm based on a deep Q-network is proposed. Finally, the performance of the algorithm is analyzed with respect to parameters, structure, and task load. Experiments show that the algorithm outperforms Shortest Job First (SJF), Tetris*, Packer, and other classic task scheduling algorithms on different optimization objectives. In the traceability system test, the traceability accuracy of the constructed system is 99% on the first group of samples and reaches 98% on the second group; overall, the traceability accuracy is above 98% across all 8 groups of experimental samples and is close for each experimental group. The resource management approach of the traceability system constructed here provides some ideas for applying reinforcement learning to the construction of traceability systems.
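
To make the scheduling idea concrete, here is a minimal deep-Q-network-style sketch in PyTorch (not the paper's model): the state encodes per-machine load plus the incoming task, the action picks a machine, and the reward discourages a growing makespan; all of these design choices are illustrative assumptions.

    import random
    import torch
    import torch.nn as nn

    N_MACHINES = 4
    qnet = nn.Sequential(nn.Linear(N_MACHINES + 1, 64), nn.ReLU(),
                         nn.Linear(64, N_MACHINES))     # Q(s, a) per machine
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    gamma, eps = 0.99, 0.1

    def schedule(load, task):
        """Assign one task to a machine and do a one-sample TD update."""
        state = torch.tensor(load + [task], dtype=torch.float32)
        if random.random() < eps:                  # epsilon-greedy exploration
            action = random.randrange(N_MACHINES)
        else:
            action = int(qnet(state).argmax())
        load[action] += task
        reward = -max(load)                        # prefer a low makespan
        next_state = torch.tensor(load + [0.0], dtype=torch.float32)
        target = reward + gamma * qnet(next_state).max().detach()
        loss = (qnet(state)[action] - target) ** 2  # a real DQN adds replay + target net
        opt.zero_grad(); loss.backward(); opt.step()
        return action

    load = [0.0] * N_MACHINES
    for _ in range(100):
        schedule(load, random.uniform(1.0, 5.0))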

RevDate: 2022-09-28

Wolf K, Dawson RJ, Mills JP, et al (2022)

Towards a digital twin for supporting multi-agency incident management in a smart city.

Scientific reports, 12(1):16221.

Cost-effective on-demand computing resources can help to process the increasing number of large, diverse datasets generated by smart internet-enabled technology, such as sensors, CCTV cameras, and mobile devices, with high temporal resolution. Category 1 emergency services (Ambulance, Fire and Rescue, and Police) can benefit from access to (near) real-time traffic and weather data to coordinate multiple services, such as reassessing a route on a transport network affected by flooding or road incidents. However, there is a tendency not to utilise available smart city data sources, due to the heterogeneous data landscape, lack of real-time information, and communication inefficiencies. Using a systems engineering approach, we identify the current challenges faced by stakeholders involved in incident response and formulate future requirements for an improved system. Based on these initial findings, we develop a use case using Microsoft Azure cloud computing technology for analytical functionalities that can better support stakeholders in their response to an incident. Our prototype allows stakeholders to view available resources, send automatic updates, and integrate location-based real-time weather and traffic data. We anticipate our study will provide a foundation for the future design of a data ontology for multi-agency incident response in the smart cities of the future.

RevDate: 2022-09-28

Roy B, E Bari (2022)

Examining the relationship between land surface temperature and landscape features using spectral indices with Google Earth Engine.

Heliyon, 8(9):e10668.

Land surface temperature (LST) is strongly influenced by landscape features, as they greatly change the thermal characteristics of the surface. The Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), Normalized Difference Built-up Index (NDBI), and Normalized Difference Bareness Index (NDBAI) correspond to vegetation cover, water bodies, impervious build-up, and bare land, respectively. These indices, derived from images of Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI) of Sylhet Sadar Upazila (2000-2018), were used to demonstrate the relationship between multiple landscape features and LST. The Google Earth Engine (GEE) cloud computing platform was used to filter, process, and analyze trends with logistic regression, and LST and the other spectral indices were calculated. Changes in LST (2000-2018) range from -6 °C to +4 °C in the study area. Because of higher vegetation cover and reserve forest, the north-eastern part of the study region had the greatest variations in LST. The spectral indices corresponding to landscape features have considerable explanatory capacity for describing LST scenarios. The correlation of these indices with LST ranges from -0.52 (NDBI) to +0.57 (NDVI).
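
For reference, the customary definitions of three of these indices, in terms of surface reflectance in the named bands, are given below (NDBAI has several variants in the literature, so these are the common textbook forms rather than necessarily the paper's exact formulas):

    \mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}}, \qquad
    \mathrm{NDWI} = \frac{\rho_{\mathrm{Green}} - \rho_{\mathrm{NIR}}}{\rho_{\mathrm{Green}} + \rho_{\mathrm{NIR}}}, \qquad
    \mathrm{NDBI} = \frac{\rho_{\mathrm{SWIR1}} - \rho_{\mathrm{NIR}}}{\rho_{\mathrm{SWIR1}} + \rho_{\mathrm{NIR}}}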

RevDate: 2022-09-28
CmpDate: 2022-09-28

Huemer J, Kronschläger M, Ruiss M, et al (2022)

Diagnostic accuracy of code-free deep learning for detection and evaluation of posterior capsule opacification.

BMJ open ophthalmology, 7(1):.

OBJECTIVE: To train and validate a code-free deep learning system (CFDLS) on classifying high-resolution digital retroillumination images of posterior capsule opacification (PCO) and to discriminate between clinically significant and non-significant PCOs.

METHODS AND ANALYSIS: For this retrospective registry study, three expert observers graded two independent datasets totalling 279 images three separate times, from no PCO to severe PCO, providing binary labels for clinical significance. The CFDLS was trained and internally validated using 179 images of a training dataset and externally validated with 100 images. Model development was through Google Cloud AutoML Vision. Intraobserver and interobserver variabilities were assessed using Fleiss kappa (κ) coefficients, and model performance through sensitivity, specificity, and area under the curve (AUC).

RESULTS: Intraobserver variability κ values for observers 1, 2 and 3 were 0.90 (95% CI 0.86 to 0.95), 0.94 (95% CI 0.90 to 0.97) and 0.88 (95% CI 0.82 to 0.93). Interobserver agreement was high, ranging from 0.85 (95% CI 0.79 to 0.90) between observers 1 and 2 to 0.90 (95% CI 0.85 to 0.94) for observers 1 and 3. On internal validation, the AUC of the CFDLS was 0.99 (95% CI 0.92 to 1.0); sensitivity was 0.89 at a specificity of 1. On external validation, the AUC was 0.97 (95% CI 0.93 to 0.99); sensitivity was 0.84 and specificity was 0.92.

CONCLUSION: This CFDLS provides highly accurate discrimination between clinically significant and non-significant PCO equivalent to human expert graders. The clinical value as a potential decision support tool in different models of care warrants further research.

RevDate: 2022-09-28

Sulis E, Amantea IA, Aldinucci M, et al (2022)

An ambient assisted living architecture for hospital at home coupled with a process-oriented perspective.

Journal of ambient intelligence and humanized computing [Epub ahead of print].

The growing number of next-generation applications offers a relevant opportunity for healthcare services, generating an urgent need for architectures for systems integration. Moreover, the huge amount of stored event-related information can be explored by adopting a process-oriented perspective. This paper discusses an Ambient Assisted Living (AAL) healthcare architecture to manage hospital home-care services. The proposed solution relies on an event manager to integrate sources ranging from personal devices to web-based applications. Data are processed on a federated cloud platform offering computing infrastructure and storage resources to improve scientific research. In a second step, a business process analysis of telehealth and telemedicine applications is considered. An initial study explored the business process flow to capture the main sequences of tasks, activities, and events. This step paves the way for integrating process mining techniques for compliance monitoring in an AAL architecture framework.

RevDate: 2022-09-28

Ahmad I, Abdullah S, A Ahmed (2022)

IoT-fog-based healthcare 4.0 system using blockchain technology.

The Journal of supercomputing [Epub ahead of print].

Real-time tracking and surveillance of patients' health has become ubiquitous in the healthcare sector as a result of the development of fog computing, cloud computing, and Internet of Things (IoT) technologies. Medical IoT (MIoT) equipment often transfers health data to a pharmaceutical data center, where it is saved, evaluated, and made available to relevant stakeholders or users. Fog layers have been utilized to increase the scalability and flexibility of IoT-based healthcare services by providing quick response times and low latency. Our proposed solution focuses on an electronic healthcare system that manages both critical and non-critical patients simultaneously. The fog layer is divided into two parts: a critical fog cluster and a non-critical fog cluster. Critical patients are handled at the critical fog cluster for quick response, while non-critical patients are handled at the non-critical fog cluster using blockchain technology, which protects the privacy of patient health records. The suggested solution requires little modification to the current IoT ecosystem while decreasing the response time for critical messages and offloading the cloud infrastructure. Reduced storage requirements for cloud data centers benefit users, in addition to saving money on construction and operating expenses. We also examined the proposed work for recall, accuracy, precision, and F-score; the results show that the suggested approach protects privacy while retaining standard network settings. Moreover, the suggested system and a benchmark are evaluated in terms of system response time, drop rate, throughput, and fog and cloud utilization. The results clearly indicate that the performance of the proposed system is better than that of the benchmark.

RevDate: 2022-09-28
CmpDate: 2022-09-28

Yue Q (2022)

Dynamic Database Design of Sports Quality Based on Genetic Data Algorithm and Artificial Intelligence.

Computational intelligence and neuroscience, 2022:7473109.

Traditional data mining methods are no longer adequate for obtaining knowledge from databases, and knowledge mined in the past must be constantly updated. In the last few years, Internet technology and cloud computing technology have emerged and brought sweeping changes to certain industries. Big data technology was proposed in order to efficiently retrieve and count large amounts of data at lower cost, and it plays an important role for data of varied types, huge volume, and extremely fast change. However, big data technology still has limitations: researchers still cannot extract the value of data in a short period of time with low cost and high efficiency. The sports database constructed in this paper can effectively compute statistics on and analyze data about sports learning. In the prototype system, log files can be mined, classified, and preprocessed. For the incremental data obtained by preprocessing, incremental data mining can be performed, a classification model can be established, and the database can be updated to provide users with personalized services. Through a data survey, the author studied students' exercise status; the feedback data show that college students lack awareness of physical exercise and have no fitness habits. It is necessary to accelerate the reform of college sports and cultivate students' good sports awareness.

RevDate: 2022-09-28
CmpDate: 2022-09-28

Zhu J (2022)

The Usage of Designing the Urban Sculpture Scene Based on Edge Computing.

Computational intelligence and neuroscience, 2022:9346771.

To achieve the goal of urban cultural construction while saving the cost of urban sculpture space design, edge computing (EC) is first combined with urban sculpture space design and planning. The paper then briefly discusses the service categories, system architecture, advantages, and characteristics of urban sculpture, as well as the key points and difficulties of its construction, and proposes a layered EC architecture for urban sculpture spaces. Secondly, cloud-edge combination technology is adopted, with the urban sculpture treated as a specific function of an edge system node, to conduct an in-depth analysis and build an architecture platform for an urban sculpture safety supervision system. Finally, the actual energy required for implementation is predicted and evaluated, the coverage of the monitoring system is set up, and equations are formulated for calculating the energy consumption of the monitored machines according to the number of devices and the route planning required by the safety supervision system. An energy-consumption optimization algorithm based on reinforcement learning is proposed and compared with three control groups. The results show that when seven monitoring devices cover fewer than 800 detection points, the required energy consumption increases linearly; above 800 detection points, it stabilizes between 10,000 and 12,000. That is, with 7 monitoring devices, the optimal number of monitoring points is about 800. When the number of detection points is fixed, increasing the number of monitoring devices within a small range can reduce total energy consumption. The proposed reinforcement learning-based optimization algorithm obtains an approximately optimal solution. The research results show that combining edge computing with urban sculpture can expand the function of urban sculpture and make it serve people better.

RevDate: 2022-09-28
CmpDate: 2022-09-28

Zheng M, Liu B, L Sun (2022)

LawRec: Automatic Recommendation of Legal Provisions Based on Legal Text Analysis.

Computational intelligence and neuroscience, 2022:6313161.

Smart court technologies make full use of modern science, such as artificial intelligence, the Internet of Things, and cloud computing, to promote the modernization of the trial system and trial capabilities. They can improve the efficiency of case handling and make legal services more convenient for the people. Article recommendation is an important part of an intelligent trial. For ordinary people without a legal background, traditional information retrieval systems that search laws and regulations by keyword are not applicable, because such users cannot extract professional legal vocabulary from complex case descriptions. This paper proposes a law recommendation framework, called LawRec, based on the Bidirectional Encoder Representations from Transformers (BERT) and Skip-Recurrent Neural Network (Skip-RNN) models. It integrates knowledge of legal provisions with the case description, using the BERT model to learn the case description text and the legal knowledge, respectively, and finally recommends laws and regulations for the case. Experimental results show that LawRec achieves better performance than state-of-the-art methods.

RevDate: 2022-09-26

Park JY, Lee K, DR Chung (2022)

Public interest in the digital transformation accelerated by the COVID-19 pandemic and perception of its future impact.

The Korean journal of internal medicine pii:kjim.2022.129 [Epub ahead of print].

Background/Aims: The coronavirus disease 2019 (COVID-19) pandemic has accelerated digital transformation (DT). We investigated the trend of public interest in DT-related technologies, as well as Koreans' experiences with these technologies and their perceptions of the technologies' future impact.

Methods: Using Google Trends, the relative search volumes (RSVs) for the topics "coronavirus," "artificial intelligence," "cloud," "big data," and "metaverse" were retrieved for the period from January 2020 to January 2022. A survey was conducted to assess the population's knowledge, experience, and perceptions regarding the DT.

Results: The RSV for "metaverse" showed an increasing trend, in contrast to those for "cloud," "big data," and "coronavirus." The RSVs for DT-related keywords were negatively correlated with the number of new weekly COVID-19 cases. In our survey, 78.1% of respondents said that the positive impact of the DT on future lives would outweigh the negative impact. The predictors of this positive perception included experience with the metaverse (4.0-fold) and with virtual reality (VR)/augmented reality (AR) education (3.8-fold). Respondents predicted that the biggest change would occur in the healthcare sector, after transportation/communication.

Conclusions: Koreans' search interest in "metaverse" showed an increasing trend during the COVID-19 pandemic. Koreans believe that the DT will bring about big changes in the healthcare sector. Most of the survey respondents have a positive outlook on the impact of the DT on future life, and the predictors of this positive perception include experience with the metaverse or VR/AR education. Healthcare professionals need to accelerate the adoption of the DT in clinical practice, education, and training.

RevDate: 2022-09-24

Zhao XG, H Cao (2022)

Linking research of biomedical datasets.

Briefings in bioinformatics pii:6712704 [Epub ahead of print].

Biomedical data preprocessing and efficient computing can be as important as the statistical methods used to fit the data; data processing needs to consider application scenarios, data acquisition, and individual rights and interests. We review common principles, knowledge, and methods of integrated research according to a whole-pipeline processing mechanism that is diverse, coherent, shared, auditable, and ecological. First, neuromorphic and native algorithms integrate diverse datasets, providing linear scalability and high-quality visualization. Second, the mechanisms for choosing among different preprocessing, analysis, and transaction methods, from raw data to neuromorphic representations, are summarized for the node and coordinator platforms. Third, the combination of node, network, cloud, edge, swarm, and graph builds an ecosystem of integrated cohort research and clinical diagnosis and treatment. Looking forward, it is vital to combine deep computing, mass data storage, and massively parallel communication.

RevDate: 2022-09-23

Jeong Y, T Kim (2022)

A Cluster-Driven Adaptive Training Approach for Federated Learning.

Sensors (Basel, Switzerland), 22(18): pii:s22187061.

Federated learning (FL) is a promising collaborative learning approach in edge computing that reduces communication costs and addresses the data privacy concerns of traditional cloud-based training. Accordingly, diverse studies have been conducted to bring FL into industry. However, practical issues of FL (e.g., handling non-IID data and stragglers) remain to be solved before it can actually be deployed. To address these issues, in this paper we propose a cluster-driven adaptive training approach (CATA-Fed) to enhance the performance of FL training in a practical environment. CATA-Fed employs adaptive training during the local model updates to enhance training efficiency, reducing the time and resources wasted on stragglers, and also provides a straggler-mitigating scheme that can reduce the workload of straggling clients. In addition, CATA-Fed clusters the clients by data size and selects the training participants within a cluster, reducing the magnitude differences of the local gradients collected in the global model update under statistically heterogeneous conditions (e.g., non-IID data). During this client selection process, proportional fair scheduling is employed to secure data diversity as well as balance the client load. We conduct extensive experiments using three benchmark datasets (MNIST, Fashion-MNIST, and CIFAR-10), and the results show that CATA-Fed outperforms previous FL schemes (FedAVG, FedProx, and TiFL) in training speed and test accuracy under diverse FL conditions.
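
A hedged sketch of the cluster-then-select idea described above (not the CATA-Fed code): clients are bucketed by local data size so that gradients aggregated in one round come from similarly sized datasets, and a few clients are drawn per bucket; the bucketing rule and quota are illustrative, and plain random choice stands in for the paper's proportional fair scheduler.

    import random
    from collections import defaultdict

    def cluster_by_data_size(client_sizes, n_clusters=3):
        """Bucket client ids into n_clusters groups of similar data size."""
        ranked = sorted(client_sizes, key=client_sizes.get)
        clusters = defaultdict(list)
        for i, cid in enumerate(ranked):
            clusters[i * n_clusters // len(ranked)].append(cid)
        return clusters

    def select_participants(clusters, quota=2):
        """Pick up to `quota` clients per cluster for this training round."""
        return [cid for members in clusters.values()
                for cid in random.sample(members, min(quota, len(members)))]

    sizes = {f"client{i}": random.randint(100, 10000) for i in range(12)}
    print(select_participants(cluster_by_data_size(sizes)))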

RevDate: 2022-09-23

Caro-Via S, Vidaña-Vila E, Ginovart-Panisello GJ, et al (2022)

Edge-Computing Meshed Wireless Acoustic Sensor Network for Indoor Sound Monitoring.

Sensors (Basel, Switzerland), 22(18): pii:s22187032.

This work presents the design of a wireless acoustic sensor network (WASN) that monitors indoor spaces. The proposed network would enable the acquisition of valuable information on the behavior of the inhabitants of the space. This WASN has been conceived to work in any type of indoor environment, including houses, hospitals, universities or even libraries, where the tracking of people can give relevant insight, with a focus on ambient assisted living environments. The proposed WASN has several priorities and differences compared to the literature: (i) presenting a low-cost flexible sensor able to monitor wide indoor areas; (ii) balance between acoustic quality and microphone cost; and (iii) good communication between nodes to increase the connectivity coverage. A potential application of the proposed network could be the generation of a sound map of a certain location (house, university, offices, etc.) or, in the future, the acoustic detection of events, giving information about the behavior of the inhabitants of the place under study. Each node of the network comprises an omnidirectional microphone and a computation unit, which processes acoustic information locally following the edge-computing paradigm to avoid sending raw data to a cloud server, mainly for privacy and connectivity purposes. Moreover, this work explores the placement of acoustic sensors in a real scenario, following acoustic coverage criteria. The proposed network aims to encourage the use of real-time non-invasive devices to obtain behavioral and environmental information, in order to take decisions in real-time with the minimum intrusiveness in the location under study.

RevDate: 2022-09-23

Barron A, Sanchez-Gallegos DD, Carrizales-Espinoza D, et al (2022)

On the Efficient Delivery and Storage of IoT Data in Edge-Fog-Cloud Environments.

Sensors (Basel, Switzerland), 22(18): pii:s22187016.

Cloud storage has become a keystone for organizations to manage large volumes of data produced by sensors at the edge, as well as information produced by deep and machine learning applications. Nevertheless, the latency produced by geographically distributed systems deployed on the edge, the fog, or the cloud leads to delays that are observed by end-users in the form of high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Things (IoT) data in edge-fog-cloud environments. In our proposal, entities called data containers are logically coupled with nano/microservices deployed on the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system, with storage levels including in-memory, file system, and cloud services, to transparently manage the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge, or machine learning applications processing data at the edge). Data containers are interconnected through a secure and efficient content delivery network, which transparently and automatically performs continuous delivery of data through the edge-fog-cloud. A prototype of our proposed scheme was implemented and evaluated in a case study based on the management of electrocardiogram sensor data. The results reveal the suitability and efficiency of the proposed scheme.
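
A minimal sketch of a hierarchical cache in the spirit of the data containers described above: reads fall through memory, then the local file system, then the cloud, and every hit is promoted to the faster levels. The cloud tier is mocked by a dict, and the names and promotion policy are assumptions, not the authors' design.

    import os
    import pickle

    class HierarchicalCache:
        def __init__(self, cache_dir, cloud):
            self.mem, self.dir, self.cloud = {}, cache_dir, cloud
            os.makedirs(cache_dir, exist_ok=True)

        def _path(self, key):
            return os.path.join(self.dir, f"{key}.pkl")

        def get(self, key):
            if key in self.mem:                      # level 1: in-memory
                return self.mem[key]
            if os.path.exists(self._path(key)):      # level 2: file system
                with open(self._path(key), "rb") as f:
                    value = pickle.load(f)
            elif key in self.cloud:                  # level 3: cloud service
                value = self.cloud[key]
                with open(self._path(key), "wb") as f:
                    pickle.dump(value, f)            # promote to the file level
            else:
                raise KeyError(key)
            self.mem[key] = value                    # promote to memory
            return value

    cache = HierarchicalCache("/tmp/datacache", cloud={"ecg42": [0.1, 0.4, 0.2]})
    print(cache.get("ecg42"))   # first read falls through to the mocked cloud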

RevDate: 2022-09-23

Alvear-Puertas VE, Burbano-Prado YA, Rosero-Montalvo PD, et al (2022)

Smart and Portable Air-Quality Monitoring IoT Low-Cost Devices in Ibarra City, Ecuador.

Sensors (Basel, Switzerland), 22(18): pii:s22187015.

Nowadays, increasing air-pollution levels are a public health concern that affects all living beings, with the most polluting gases present in urban environments. For this reason, this research presents portable Internet of Things (IoT) environmental monitoring devices that can be installed in vehicles and that send Message Queuing Telemetry Transport (MQTT) messages to a server, with a time-series database allocated at the edge. The visualization stage is performed in the cloud to determine the city's air-pollution concentration using three labels: low, normal, and high. To determine the environmental conditions in Ibarra, Ecuador, a data analysis scheme with outlier detection and supervised classification stages is used. In terms of relevant results, the performance percentage of the IoT nodes used to infer air quality was greater than 90%. In addition, memory consumption was 14 KB of flash and 3 KB of RAM, reducing the power consumption and bandwidth needed in traditional air-pollution measuring stations.
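
A hedged sketch of the node-to-server path described above, assuming the paho-mqtt package (1.x-style constructor): an IoT node publishes one air-quality reading over MQTT; the broker address, topic, and payload fields are illustrative placeholders.

    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("broker.example.org", 1883)   # placeholder edge broker

    reading = {"node": "bus-17", "co_ppm": 4.2, "pm25": 18.5, "label": "normal"}
    client.publish("ibarra/air-quality", json.dumps(reading), qos=1)
    client.disconnect()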

RevDate: 2022-09-23

Maruta K, Nishiuchi H, Nakazato J, et al (2022)

5G/B5G mmWave Cellular Networks with MEC Prefetching Based on User Context Information.

Sensors (Basel, Switzerland), 22(18): pii:s22186983.

To deal with the recent increase in mobile traffic, ultra-broadband communication with millimeter-wave (mmWave) has been regarded as a key technology for 5G cellular networks. In previous studies, a mmWave heterogeneous network is composed of several mmWave small cells overlaid on the coverage of a macro cell. However, as worldwide optical fiber penetration rates show, it is difficult to say that backhaul on the order of Gbps is available everywhere. When mmWave access is used under limited backhaul capacity, the backhaul becomes a bottleneck, and mmWave access cannot fully demonstrate its potential. On the other hand, the concept of multi-access edge computing (MEC) has been proposed to decrease response latency compared to cloud computing by deploying storage and computation resources at the user side of mobile networks. This paper introduces MEC into mmWave heterogeneous networks and proposes a content prefetching algorithm to resolve such backhaul issues. Context information, such as destination, mobility, and traffic tendency, is shared through the macro cell to prefetch the applications and data that users request. Prefetched data are stored in the MEC and then transmitted via mmWave without a backhaul bottleneck. The effectiveness is verified through computer simulations in which we implement realistic user mobility as well as traffic and backhauling models. The results show that the proposed framework achieves 95% system capacity even under the constraint of a 1 Gbps backhaul link.

RevDate: 2022-09-23

Alghamdi A, Zhu J, Yin G, et al (2022)

Blockchain Empowered Federated Learning Ecosystem for Securing Consumer IoT Features Analysis.

Sensors (Basel, Switzerland), 22(18): pii:s22186786.

Resource-constrained Consumer Internet of Things (CIoT) devices are controlled through gateway devices (e.g., smartphones, computers, etc.) that are connected to Mobile Edge Computing (MEC) servers or a cloud regulated by a third party. Recently, Machine Learning (ML) has been widely used in automation, consumer behavior analysis, device quality upgrades, etc. Typical ML makes predictions by analyzing customers' raw data in a centralized system, which raises security and privacy issues such as data leakage, privacy violations, and a single point of failure. To overcome these problems, Federated Learning (FL) was developed as an initial solution to provide such services without sharing personal data. In FL, a centralized aggregator collaborates with clients and averages their updates into a global model used for the next round of training. However, the centralized aggregator raises the same issues: a single point of control can leak the updated model and interrupt the entire process. Additionally, research shows that data can be retrieved from model parameters. Beyond that, since the gateway (GW) device has full access to the raw data, it can also threaten the entire ecosystem. This research contributes a blockchain-controlled, edge-intelligence federated learning framework for a distributed learning platform for CIoT. The federated learning platform allows collaborative learning on users' shared data, and the blockchain network replaces the centralized aggregator and ensures secure participation of gateway devices in the ecosystem. Furthermore, blockchain is trustless, immutable, and anonymous, encouraging CIoT end users to participate. We evaluated the framework and the federated learning outcomes using the well-known Stanford Cars dataset. Experimental results prove the effectiveness of the proposed framework.

RevDate: 2022-09-23

Liu X, Zhao X, Liu G, et al (2022)

Collaborative Task Offloading and Service Caching Strategy for Mobile Edge Computing.

Sensors (Basel, Switzerland), 22(18): pii:s22186760.

Mobile edge computing (MEC), which sinks the functions of cloud servers to the network edge, has become an emerging paradigm to resolve the contradiction between delay-sensitive tasks and resource-constrained terminals. Task offloading assisted by service caching in a collaborative manner can reduce delay and balance the edge load in MEC. Because the storage resources of edge servers are limited, developing a dynamic service caching strategy that follows actual, variable user demands during task offloading is a significant issue. Therefore, this paper investigates the collaborative task offloading problem assisted by a dynamic caching strategy in MEC. Furthermore, a two-level computing strategy called joint task offloading and service caching (JTOSC) is proposed to solve the optimization problem. The outer layer of JTOSC iteratively updates the service caching decisions based on Gibbs sampling. The inner layer adopts a fairness-aware allocation algorithm and an offloading-revenue-preference-based bilateral matching algorithm to obtain a good computing resource allocation and task offloading scheme. The simulation results indicate that the proposed strategy outperforms the four comparison strategies in terms of maximum offloading delay, service cache hit rate, and edge load balance.
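
As a rough illustration of the outer layer, the sketch below performs Gibbs-style updates of binary service-caching decisions, revisiting one edge server at a time and sampling each new decision from a Boltzmann distribution over the resulting system delay; the toy delay model and temperature are assumptions, not the paper's formulation.

    import math
    import random

    def delay(cache):
        """Toy cost: uncached services pay a cloud round trip; cached ones add storage cost."""
        return sum(1.0 if c else 5.0 for c in cache) + 0.3 * sum(cache)

    def gibbs_caching(n_services=6, iters=200, temperature=0.5):
        cache = [random.random() < 0.5 for _ in range(n_services)]
        for _ in range(iters):
            k = random.randrange(n_services)
            costs = []
            for v in (False, True):              # evaluate both decisions for service k
                cache[k] = v
                costs.append(delay(cache))
            w = [math.exp(-c / temperature) for c in costs]
            cache[k] = random.random() < w[1] / (w[0] + w[1])  # sample, not argmin
        return cache

    print(gibbs_caching())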

RevDate: 2022-09-23

Li D, Mao Y, Chen X, et al (2022)

Deployment and Allocation Strategy for MEC Nodes in Complex Multi-Terminal Scenarios.

Sensors (Basel, Switzerland), 22(18): pii:s22186719.

Mobile edge computing (MEC) has become an effective solution for insufficient computing and communication problems for the Internet of Things (IoT) applications due to its rich computing resources on the edge side. In multi-terminal scenarios, the deployment scheme of edge nodes has an important impact on system performance and has become an essential issue in end-edge-cloud architecture. In this article, we consider specific factors, such as spatial location, power supply, and urgency requirements of terminals, with respect to building an evaluation model to solve the allocation problem. An evaluation model based on reward, energy consumption, and cost factors is proposed. The genetic algorithm is applied to determine the optimal edge node deployment and allocation strategies. Moreover, we compare the proposed method with the k-means and ant colony algorithms. The results show that the obtained strategies achieve good evaluation results under problem constraints. Furthermore, we conduct comparison tests with different attributes to further test the performance of the proposed method.
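
A hedged sketch of the genetic-algorithm side of such a deployment problem: a chromosome assigns each terminal to a candidate MEC site, and the fitness mixes distance (a stand-in for the reward/urgency factors) with a per-site cost; the weights, cost model, and GA parameters are all illustrative assumptions.

    import random

    TERMINALS = [(random.random(), random.random()) for _ in range(30)]
    SITES = [(0.2, 0.2), (0.8, 0.3), (0.5, 0.8)]   # candidate MEC locations

    def fitness(chrom):
        dist = sum(abs(tx - SITES[s][0]) + abs(ty - SITES[s][1])
                   for (tx, ty), s in zip(TERMINALS, chrom))
        return -(dist + 2.0 * len(set(chrom)))      # each opened site adds cost

    def evolve(pop_size=40, gens=100, mut=0.05):
        pop = [[random.randrange(len(SITES)) for _ in TERMINALS]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(len(TERMINALS))   # one-point crossover
                child = [random.randrange(len(SITES)) if random.random() < mut else g
                         for g in a[:cut] + b[cut:]]     # plus per-gene mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve()[:10])   # site assignment for the first ten terminals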

RevDate: 2022-09-23

Tang X, Xu L, G Chen (2022)

Research on the Rapid Diagnostic Method of Rolling Bearing Fault Based on Cloud-Edge Collaboration.

Entropy (Basel, Switzerland), 24(9): pii:e24091277.

Recent deep-learning methods for the fault diagnosis of rolling bearings need a significant amount of computing time and resources, and most of them cannot meet the requirements of real-time fault diagnosis under a cloud computing framework. This paper proposes a quick cloud-edge collaborative bearing fault diagnostic method based on trading off the advantages and disadvantages of cloud and edge computing. First, a collaborative cloud-based framework and an improved DSCNN-GAP algorithm are proposed to build a general model using a public bearing fault dataset. Second, the general model is distributed to each edge node, and a limited number of unique fault samples acquired by each edge node are used to quickly adjust the model parameters before running diagnostic tests. Finally, the diagnostic results of the edge nodes are fused using DS evidence theory. Experimental results show that the proposed method not only improves diagnostic accuracy through DSCNN-GAP and multi-sensor fusion, but also decreases diagnosis time through transfer learning within the cloud-edge collaborative framework. Additionally, the method can effectively enhance data security and privacy protection.

RevDate: 2022-09-20

Lin HY, Tsai TT, Wu HR, et al (2022)

Secure access control using updateable attribute keys.

Mathematical biosciences and engineering : MBE, 19(11):11367-11379.

In the era of cloud computing, access control techniques are vital to protect the confidentiality and integrity of cloud data. From the perspective of servers, only authenticated clients should be allowed to access data. Specifically, the server shares a communication channel with the client by generating a common session key, which is then used as a symmetric key for encrypting data on the current channel. An access control mechanism using attribute-based encryption is the most flexible, since the decryption privilege can be granted to those who have sufficient attributes. In this paper, the authors propose a secure access control scheme consisting of attribute-based mutual authentication and attribute-based encryption. The most appealing property of our system is that the attribute keys associated with each user are periodically updatable. Moreover, we show that our system fulfills fuzzy selective-ID security assuming the hardness of the Decisional Modified Bilinear Diffie-Hellman (DMBDH) problem.

RevDate: 2022-09-20

Liu D, Li Z, Wang C, et al (2022)

Enabling secure mutual authentication and storage checking in cloud-assisted IoT.

Mathematical biosciences and engineering : MBE, 19(11):11034-11046.

The Internet of Things (IoT) is a technology that collects data sensed by devices for further real-time services. Using cloud computing to assist IoT devices with data storage can overcome their constrained local storage and computing capability. However, the complex network environment makes cloud servers vulnerable to attacks, and adversaries pretend to be legal IoT clients trying to access the cloud server. Hence, it is necessary to provide a mutual authentication mechanism for the cloud system to enhance storage security. In this paper, a secure mutual authentication scheme is proposed for cloud-assisted IoT; the chameleon hash signature technique is used to construct the authentication. Moreover, the proposed scheme provides storage checking with the assistance of a fully trusted entity, which greatly improves the fairness and efficiency of checking. Security analysis proves that the proposed scheme is correct, and performance analysis demonstrates that it can be performed with high efficiency.

RevDate: 2022-09-20

Wu Y, Zheng C, Xie L, et al (2022)

Cloud-Based English Multimedia for Universities Test Questions Modeling and Applications.

Computational intelligence and neuroscience, 2022:4563491.

This study constructs a cloud computing-based model and application of college English multimedia test questions through an in-depth study of cloud computing and of college English multimedia test questions. The emergence of cloud computing technology undoubtedly provides a new and practical method for solving test data and paper management problems. This study analyzes the advantages of the Hadoop computing platform and the MapReduce computing model, and builds a distributed computing platform based on Hadoop using universities' existing hardware and software resources. The UML model of the system is given; the system is implemented and functionally tested, and the analysis results are presented. Multimedia is the critical link in optimizing English test questions. The proper use of multimedia test questions will undoubtedly become an inevitable trend in the future development of English test questions, which requires educators to continuously analyze the problems arising from multimedia teaching, summarize its experience, and explore new methods, so that multimedia teaching can better promote the optimization of English test questions in colleges and universities and better serve education and teaching.
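
To make the MapReduce pattern concrete, here is a minimal Hadoop Streaming-style pair of scripts (a generic sketch, not the system described in the paper): the mapper emits (question_id, score) pairs and the reducer averages scores per question; the "student,question_id,score" input format is an illustrative assumption.

    # mapper.py
    import sys
    for line in sys.stdin:
        _, qid, score = line.strip().split(",")
        print(f"{qid}\t{score}")

    # reducer.py -- Hadoop Streaming delivers keys sorted, so groups are contiguous.
    import sys
    current, total, count = None, 0.0, 0
    for line in sys.stdin:
        qid, score = line.strip().split("\t")
        if qid != current:
            if current is not None:
                print(f"{current}\t{total / count:.2f}")
            current, total, count = qid, 0.0, 0
        total += float(score)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count:.2f}")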

RevDate: 2022-09-20
CmpDate: 2022-09-20

Zhang F, Zhang Z, H Xiao (2022)

Research on Medical Big Data Analysis and Disease Prediction Method Based on Artificial Intelligence.

Computational and mathematical methods in medicine, 2022:4224287.

In recent years, the continuous development of big data, cloud services, Internet+, artificial intelligence, and other technologies has accelerated the improvement of data communication services in the traditional pharmaceutical industry. These technologies play a leading role in developing China's pharmaceutical industry, deepening the reform of the health system, improving the efficiency and quality of medical services, and fostering new technologies. In this context, we conducted the following research and drew the following conclusions: (1) The scale of China's medical big data market is constantly increasing, as is the global medical big data market; China's market has grown at the faster rate, its share rising from 10.33% in 2015 to 38.7% seven years later, an increase of 28.37 percentage points. (2) Generally speaking, urine is mainly slightly acidic, with a pH around 6.0 and a normal range of 5.0 to 7.0, although neutral or slightly alkaline values also occur; values of about 7.5 to 8 are generally found in people with some physical problems. As an important national strategic resource, medical big data is of great significance to the development of China's pharmaceutical industry and the deepening of the reform of the national medical system: it can improve the efficiency and level of medical services, establish new forms of service, and accelerate economic growth. In this sense, we set out to explore medical big data analysis and disease prediction methods based on artificial intelligence.

RevDate: 2022-09-20

Shoeibi A, Moridian P, Khodatars M, et al (2022)

An overview of deep learning techniques for epileptic seizures detection and prediction based on neuroimaging modalities: Methods, challenges, and future works.

Computers in biology and medicine, 149:106053.

Epilepsy is a brain disorder characterized by frequent seizures. The symptoms of seizures include confusion, abnormal staring, and rapid, sudden, and uncontrollable hand movements. Epileptic seizure detection methods involve neurological exams, blood tests, neuropsychological tests, and neuroimaging modalities. Among these, neuroimaging modalities have received considerable attention from specialist physicians. One method to facilitate the accurate and fast diagnosis of epileptic seizures is to employ computer-aided diagnosis systems (CADS) based on deep learning (DL) and neuroimaging modalities. This paper presents a comprehensive overview of DL methods employed for epileptic seizure detection and prediction using neuroimaging modalities. First, DL-based CADS for epileptic seizure detection and prediction using neuroimaging modalities are discussed, and descriptions of the various datasets, preprocessing algorithms, and DL models that have been used are included. Then, research on rehabilitation tools is presented, covering brain-computer interfaces (BCI), cloud computing, the internet of things (IoT), hardware implementation of DL techniques on field-programmable gate arrays (FPGA), etc. In the discussion section, a comparison is carried out between research on epileptic seizure detection and on prediction. The challenges of epileptic seizure detection and prediction using neuroimaging modalities and DL models are described. In addition, possible directions for future work in this field, specifically for solving challenges in datasets, DL, rehabilitation, and hardware models, are proposed. The final section is dedicated to the conclusion, which summarizes the significant findings of the paper.

RevDate: 2022-09-18

Kim YK, Kim HJ, Lee H, et al (2022)

Correction: Privacy-preserving parallel kNN classification algorithm using index-based filtering in cloud computing.

PloS one, 17(9):e0274981.

[This corrects the article DOI: 10.1371/journal.pone.0267908.].

RevDate: 2022-09-19
CmpDate: 2022-09-19

Zhuang Y, N Jiang (2022)

Progressive privacy-preserving batch retrieval of lung CT image sequences based on edge-cloud collaborative computation.

PloS one, 17(9):e0274507.

BACKGROUND: A computed tomography image (CI) sequence can be regarded as time-series data composed of a great number of nearby, similar CIs. The computational and I/O costs of the similarity measures and of the encryption and decryption calculations during a similarity retrieval over large CI sequences (CISs) are extremely high, so deploying all retrieval tasks in the cloud leads to an excessive computing load on the cloud, which greatly degrades retrieval performance.

METHODOLOGIES: To tackle the above challenges, this paper proposes a progressive privacy-preserving Batch Retrieval scheme for lung CISs based on edge-cloud collaborative computation, called the BRS method. Four supporting techniques enable the BRS method: 1) a batch similarity measure for CISs; 2) a CIB-based privacy-preserving scheme; 3) a uniform edge-cloud index framework; and 4) edge buffering.

RESULTS: The experimental results reveal that our method outperforms the state-of-the-art approaches in terms of efficiency and scalability, drastically reducing response time by lowering network communication costs while enhancing retrieval safety and accuracy.

RevDate: 2022-09-17
CmpDate: 2022-09-16

Veeraiah D, Mohanty R, Kundu S, et al (2022)

Detection of Malicious Cloud Bandwidth Consumption in Cloud Computing Using Machine Learning Techniques.

Computational intelligence and neuroscience, 2022:4003403.

The Internet of Things (IoT) is a relatively new kind of Internet connectivity that connects physical objects to the Internet in a way that was not possible in the past. The hyperconnectivity and heterogeneity of the IoT give it a larger attack surface, and since IoT devices are deployed in both managed and uncontrolled contexts, malicious actors can build new attacks that target these devices. As a result, the IoT requires self-protecting security systems that can autonomously interpret attacks in IoT traffic and handle attack scenarios by triggering appropriate reactions faster than is currently possible. Fog computing can fulfill this requirement by integrating an intelligent self-protection mechanism into the distributed fog nodes, allowing an IoT application to be protected with minimal human intervention while attack scenarios are managed more quickly. The primary objective of this research is to implement such a self-protection mechanism at the fog nodes: one that detects and predicts known attacks based on predefined attack patterns, predicts novel attacks that have no predefined patterns, and then chooses the most appropriate response to neutralize the identified attack. In the IoT environment, distributed Gaussian process regression is used at the fog nodes to anticipate attack patterns that have not been seen before, enabling the prediction of new cyberattacks. It predicts attacks in an uncertain IoT setting at a faster rate and with greater precision than prior techniques, and it can effectively anticipate both low-rate and high-rate attacks in a timely manner within the distributed fog nodes, enabling a more accurate defence. Finally, a fog computing-based self-protection system is developed that uses fuzzy logic to choose the most appropriate reaction to the attacks detected or anticipated by the proposed mechanisms. The findings of the experimental investigation indicate that the proposed system identifies threats, lowers bandwidth usage, and thwarts attacks at a rate that is twenty-five percent faster than the cloud-based system implementation.
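
As a hedged sketch of the prediction step named above, the snippet below fits a Gaussian process regressor (scikit-learn) on recent traffic-rate observations and flags a new measurement that falls outside the model's uncertainty band; the features, kernel, and 3-sigma threshold are illustrative assumptions, not the paper's distributed formulation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    t = np.linspace(0, 10, 80).reshape(-1, 1)            # time (minutes)
    rate = 50 + 5 * np.sin(t).ravel() + np.random.normal(0, 0.4, 80)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(t, rate)

    t_new = np.array([[10.5]])
    mean, std = gpr.predict(t_new, return_std=True)
    observed = 70.0                                      # incoming measurement
    if abs(observed - mean[0]) > 3 * std[0]:             # outside the 3-sigma band
        print("possible attack: traffic deviates from the learned pattern")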

RevDate: 2022-09-15

Huang H, Aschettino S, Lari N, et al (2022)

A Versatile and Scalable Platform That Streamlines Data Collection for Patient-Centered Studies: Usability and Feasibility Study.

JMIR formative research, 6(9):e38579 pii:v6i9e38579.

BACKGROUND: The Food and Drug Administration Center for Biologics Evaluation and Research (CBER) established the Biologics Effectiveness and Safety (BEST) Initiative with several objectives, including the expansion and enhancement of CBER's access to fit-for-purpose data sources, analytics, tools, and infrastructures to improve the understanding of patient experiences with conditions related to CBER-regulated products. Owing to existing challenges in data collection, especially for rare disease research, CBER recognized the need for a comprehensive platform where study coordinators can engage with study participants and design and deploy studies while patients or caregivers could enroll, consent, and securely participate as well.

OBJECTIVE: This study aimed to increase awareness and describe the design, development, and novelty of the Survey of Health and Patient Experience (SHAPE) platform, its functionality and application, quality improvement efforts, open-source availability, and plans for enhancement.

METHODS: SHAPE is hosted in a Google Cloud environment and comprises 3 parts: the administrator application, participant app, and application programming interface. The administrator can build a study comprising a set of questionnaires and self-report entries through the app. Once the study is deployed, the participant can access the app, consent to the study, and complete its components. To make SHAPE scalable and flexible, we leveraged the open-source software development kit Ionic Framework. This enabled building and deploying apps across platforms, including iOS, Android, and progressive web applications, from a single codebase using standardized web technologies. SHAPE has been integrated with a leading Health Level 7 (HL7®) Fast Healthcare Interoperability Resources (FHIR®) application programming interface platform, 1upHealth, which allows participants to consent to a one-time data pull of their electronic health records. We used an agile-based process that engaged multiple stakeholders in SHAPE's design and development.

RESULTS: SHAPE allows study coordinators to plan, develop, and deploy questionnaires to obtain important end points directly from patients or caregivers. Electronic health record integration enables access to patient health records, which can validate and enhance the accuracy of data-capture methods. The administrator can then download the study data into HL7® FHIR®-formatted JSON files. In this paper, we illustrate how study coordinators can use SHAPE to design patient-centered studies. We demonstrate its broad applicability through a hypothetical type 1 diabetes cohort study and an ongoing pilot study on metachromatic leukodystrophy to implement best practices for designing a regulatory-grade natural history study for rare diseases.

CONCLUSIONS: SHAPE is an intuitive and comprehensive data-collection tool for a variety of clinical studies. Further customization of this versatile and scalable platform allows for multiple use cases. SHAPE can capture patient perspectives and clinical data, thereby providing regulators, clinicians, researchers, and patient advocacy organizations with data to inform drug development and improve patient outcomes.

RevDate: 2022-09-17

Wang C, Kon WY, Ng HJ, et al (2022)

Experimental symmetric private information retrieval with measurement-device-independent quantum network.

Light, science & applications, 11(1):268.

Secure information retrieval is an essential task in today's highly digitised society. In some applications, it may be necessary that user query's privacy and database content's security are enforced. For these settings, symmetric private information retrieval (SPIR) could be employed, but its implementation is known to be demanding, requiring a private key-exchange network as the base layer. Here, we report for the first time a realisation of provably-secure SPIR supported by a quantum-secure key-exchange network. The SPIR scheme looks at biometric security, offering secure retrieval of 582-byte fingerprint files from a database with 800 entries. Our experimental results clearly demonstrate the feasibility of SPIR with quantum secure communications, thereby opening up new possibilities in secure distributed data storage and cloud computing over the future Quantum Internet.

RevDate: 2022-09-13
CmpDate: 2022-09-13

Ahamed Ahanger T, Aldaej A, Atiquzzaman M, et al (2022)

Distributed Blockchain-Based Platform for Unmanned Aerial Vehicles.

Computational intelligence and neuroscience, 2022:4723124.

The Internet of Things (IoT)-inspired drone environment is having a growing influence on daily life through drone-based smart electricity monitoring, traffic routing, and personal healthcare. However, communication between drones and ground control systems must be protected to avoid potential vulnerabilities and to improve coordination among scattered unmanned aerial vehicles (UAVs) in the IoT context. This paper proposes a distributed UAV scheme that uses blockchain technology and a network topology similar to that of IoT and cloud servers to secure communications during data collection and transmission and to reduce the likelihood of attack by maliciously manipulated UAVs. Rather than relying on a traditional blockchain approach, a unique, safe, and lightweight blockchain architecture is proposed that reduces computing and storage requirements while keeping the privacy and security advantages. In addition, a reputation-based consensus protocol is built to assure the dependability of the decentralized network, and numerous transaction types are established to characterize diverse forms of data access. To validate the presented blockchain-based distributed system, performance evaluations estimate its statistical effectiveness in terms of temporal delay, packet-flow efficacy, precision, specificity, sensitivity, and security efficiency.
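
Two of the ingredients named in the abstract, hash-chained lightweight blocks and reputation-weighted selection of the next proposer, can be sketched in a few lines. The structures below are generic illustrations under assumed data layouts, not the paper's actual consensus protocol.

    # Generic illustration of a lightweight hash-chained block and a
    # reputation-weighted choice of the next block proposer. Not the
    # paper's actual consensus protocol; all values are invented.
    import hashlib
    import json
    import random

    def make_block(prev_hash: str, payload: dict) -> dict:
        body = {"prev": prev_hash, "payload": payload}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {**body, "hash": digest}

    def pick_proposer(reputation: dict) -> str:
        """Select the next proposer with probability proportional to reputation."""
        nodes, weights = zip(*reputation.items())
        return random.choices(nodes, weights=weights, k=1)[0]

    genesis = make_block("0" * 64, {"msg": "genesis"})
    block1 = make_block(genesis["hash"], {"uav": "UAV-7", "telemetry": [51.5, -0.1]})
    proposer = pick_proposer({"UAV-1": 0.9, "UAV-2": 0.4, "UAV-3": 0.7})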

RevDate: 2022-09-13
CmpDate: 2022-09-13

Zhu G, Li X, Zheng C, et al (2022)

Multimedia Fusion Privacy Protection Algorithm Based on IoT Data Security under Network Regulations.

Computational intelligence and neuroscience, 2022:3574812.

This study provides an in-depth analysis of multimedia-fusion privacy-protection algorithms based on IoT data security in a network-regulation environment. To address collusion that deceives users during outsourced computing and outsourced verification, it studies a safe, reliable, and collusion-resistant blockchain-based scheme for outsourced IoT data computing and public verification built on distributed storage: smart devices encrypt the collected data and upload them to a distributed hash table (DHT), together with the computation results for these data returned by the cloud server. In testing, the constructed model attains a privacy-preserving budget of 0.6 and the smallest information-leakage ratio for the fused multimedia data when the decision-tree depth is 6; under these conditions, the maximum information-leakage ratio is reduced from 0.0865 to 0.003, significantly improving data security. In the consensus-verification process, to reduce consensus time and preserve system efficiency, a consensus-node selection algorithm is proposed that lowers the time complexity of consensus. The security and performance of the proposed model are analyzed in a smart-grid application scenario: the correctness of the scheme is proved using BAN logic, and its security is proved under the random oracle model. Finally, comparisons of security and performance with existing similar schemes show that the scheme is feasible in IoT settings.
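
A privacy budget of 0.6 is the epsilon of differential privacy. Although the abstract does not spell out the mechanism used, a textbook Laplace-mechanism count query at epsilon = 0.6 looks like the sketch below; the counting query and its sensitivity are assumptions for illustration, not details from the paper.

    # Textbook Laplace mechanism at the privacy budget epsilon = 0.6 quoted
    # in the abstract. The counting query and its L1 sensitivity of 1 are
    # illustrative assumptions, not details from the paper.
    import numpy as np

    EPSILON = 0.6      # privacy budget from the abstract
    SENSITIVITY = 1.0  # a count changes by at most 1 per individual

    def dp_count(values, threshold):
        """Differentially private count of readings above a threshold."""
        true_count = sum(v > threshold for v in values)
        noise = np.random.laplace(loc=0.0, scale=SENSITIVITY / EPSILON)
        return true_count + noise

    print(dp_count([3.2, 7.9, 8.4, 1.1, 9.0], threshold=5.0))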

RevDate: 2022-09-13
CmpDate: 2022-09-13

Alyami J, Sadad T, Rehman A, et al (2022)

Cloud Computing-Based Framework for Breast Tumor Image Classification Using Fusion of AlexNet and GLCM Texture Features with Ensemble Multi-Kernel Support Vector Machine (MK-SVM).

Computational intelligence and neuroscience, 2022:7403302.

Breast cancer is common among women all over the world, and early identification of breast cancer lowers death rates. However, it is difficult to determine whether lesions are cancerous or noncancerous because of inconsistencies in their image appearance. Machine learning techniques are widely employed in imaging analysis as a diagnostic aid for breast cancer classification, but patients in remote areas cannot take advantage of such systems when they are not available on the cloud. Breast cancer detection for remote patients is therefore indispensable and is only possible through cloud computing: the user feeds images into the cloud system, where they are investigated by a computer-aided diagnosis (CAD) system. Such systems could also be used to monitor patients, especially older adults with disabilities, in remote areas of developing countries that lack medical facilities and paramedic staff. In the proposed CAD system, a fusion of AlexNet features and GLCM (gray-level co-occurrence matrix) texture features is used to extract distinguishable texture characteristics from breast tissue. Finally, to attain higher precision, an ensemble of multi-kernel support vector machines (MK-SVM) is used. For testing, the proposed model is applied to the MIAS dataset, a commonly used breast-image database, and achieves 96.26% accuracy.
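
The fusion-plus-MK-SVM pipeline can be sketched schematically. In the sketch below, the GLCM features use scikit-image's graycomatrix/graycoprops, the "deep" features are a two-number placeholder standing in for AlexNet activations, and the multi-kernel SVM is a simple weighted sum of an RBF and a linear kernel; none of the weights or parameters come from the paper.

    # Schematic of the fusion idea: GLCM texture features concatenated with
    # deep features, classified by a two-kernel SVM (a simple stand-in for
    # the paper's ensemble MK-SVM). Swap in real AlexNet activations in
    # practice; the random "images" are for a runnable demonstration only.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19
    from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
    from sklearn.svm import SVC

    def glcm_features(img_u8: np.ndarray) -> np.ndarray:
        glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.array([graycoprops(glcm, p)[0, 0] for p in props])

    def deep_features(img_u8: np.ndarray) -> np.ndarray:
        # Placeholder for AlexNet activations.
        return np.array([img_u8.mean(), img_u8.std()])

    def fused(imgs):
        return np.stack([np.concatenate([glcm_features(i), deep_features(i)])
                         for i in imgs])

    def multi_kernel(Xa, Xb, w_rbf=0.5, w_lin=0.5):
        return w_rbf * rbf_kernel(Xa, Xb) + w_lin * linear_kernel(Xa, Xb)

    imgs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
    labels = np.random.randint(0, 2, 20)
    X = fused(imgs)
    clf = SVC(kernel="precomputed").fit(multi_kernel(X, X), labels)
    preds = clf.predict(multi_kernel(X, X))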

RevDate: 2022-09-13

Xie Y, Zhang K, Kou H, et al (2022)

Private anomaly detection of student health conditions based on wearable sensors in mobile cloud computing.

Journal of cloud computing (Heidelberg, Germany), 11(1):38.

With the continuing spread of the COVID-19 virus, guaranteeing healthy living, especially for students, who are of relatively weak physique, has become a key research issue of significant value. Specifically, precise recognition of anomalies in student health conditions helps to quickly discover potential patients. However, each school has so many students that education managers cannot follow students' health conditions in real time or quickly and accurately recognize possible anomalies among them. Fortunately, the rapid development of mobile cloud computing technologies and wearable sensors provides a promising way to monitor students' health conditions in real time and detect anomalies in a timely manner. Two challenges remain in this anomaly detection setting. First, the health data monitored by wearable sensors are massive and updated frequently, which can lead to a high sensor-to-cloud transmission cost for anomaly detection. Second, students' health data are highly sensitive, which can impede their integration in a cloud environment or even render health-data-based anomaly detection infeasible. In view of these challenges, we propose a time-efficient and privacy-aware anomaly detection solution for students with wearable sensors in a mobile cloud computing environment, and we validate its effectiveness and efficiency through a set of simulated experiments.
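
One common way to cut the sensor-to-cloud transmission cost described above is to screen readings on the device and upload only windows that look anomalous. The minimal z-score filter below illustrates that idea; it is a generic sketch, not the paper's (unspecified) algorithm, and the sample heart-rate numbers are invented.

    # Minimal on-device filter: flag a window whose mean deviates strongly
    # from historical readings, and upload only flagged windows to the cloud.
    # Generic sketch, not the paper's algorithm; sample values are invented.
    import statistics

    def anomalous(window, history, z_thresh=3.0):
        """Return True when the window mean is a z-score outlier vs. history."""
        mu = statistics.mean(history)
        sigma = statistics.stdev(history) or 1e-9  # guard against zero spread
        z = abs(statistics.mean(window) - mu) / sigma
        return z > z_thresh

    history = [72, 75, 71, 74, 73, 76, 72, 74]  # resting heart-rate samples
    window = [118, 121, 117]                    # newly collected readings
    if anomalous(window, history):
        pass  # upload only this window for cloud-side analysis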

RevDate: 2022-09-13

Vadde U, VS Kompalli (2022)

Energy efficient service placement in fog computing.

PeerJ. Computer science, 8:e1035.

The Internet of Things (IoT) concept has evolved into a slew of applications. Satisfying the requests of these applications from the cloud alone is troublesome because of the high latency caused by the distance between IoT devices and cloud resources. Fog computing has become a promising alternative: its geographically distributed infrastructure provides resources on fog nodes near IoT devices, thereby reducing bandwidth use and latency. The geographical distribution, heterogeneity, and resource constraints of fog nodes introduce the key challenge of placing application modules/services in such a large-scale infrastructure. In this work, we propose an improved version of the JAYA approach for optimal placement of modules that minimizes the energy consumption of a fog landscape. We analyzed performance in terms of energy consumption, network usage, delays, and execution time. Using iFogSim simulations, we observed that our approach reduces energy consumption by an average of 31% compared with recent methods.
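
The baseline JAYA update that the paper improves on is compact: each candidate x moves toward the population's best solution and away from its worst, x' = x + r1*(best - |x|) - r2*(worst - |x|), with r1 and r2 uniform in [0, 1]. The sketch below runs that textbook rule on a toy sum-of-squares objective standing in for an energy model; it is the plain baseline, not the paper's improved variant.

    # Plain JAYA update rule on a toy objective (sum of squares standing in
    # for an energy model). Textbook baseline, not the paper's improved variant.
    import numpy as np

    def jaya(objective, dim=4, pop=20, iters=200, lo=-10.0, hi=10.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (pop, dim))
        for _ in range(iters):
            f = np.apply_along_axis(objective, 1, X)
            best, worst = X[f.argmin()], X[f.argmax()]
            r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
            Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                         lo, hi)
            fn = np.apply_along_axis(objective, 1, Xn)
            improved = fn < f           # greedy acceptance of better candidates
            X[improved] = Xn[improved]
        return X[np.apply_along_axis(objective, 1, X).argmin()]

    best = jaya(lambda x: float(np.sum(x ** 2)))  # converges toward the origin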

RevDate: 2022-09-13

Singh A, K Chatterjee (2022)

Edge computing based secure health monitoring framework for electronic healthcare system.

Cluster computing [Epub ahead of print].

Nowadays, Smart Healthcare Systems (SHS) are frequently used for personal healthcare observation through various smart devices. An SHS uses IoT technology and cloud infrastructure for data capture, transmission through smart devices, storage, processing, and healthcare advice. Processing such a huge amount of data from numerous IoT devices in a short time is quite challenging, so technological frameworks such as edge computing or fog computing can be used as a middle layer between the cloud and the user in an SHS, reducing response time by processing data at the lower (edge) level. But the Edge of Things (EoT) also suffers from security and privacy issues. A robust health-monitoring framework with secure data storage and access is therefore needed, one that responds quickly when abnormal data are produced and that stores and accesses sensitive data securely. This paper proposes a Secure Framework based on the Edge of Things (SEoT) for smart healthcare systems, designed mainly for real-time health monitoring while keeping healthcare data secure and confidential in a controlled manner. The framework applies clustering approaches to bio-signal data for abnormality detection and Attribute-Based Encryption (ABE) for bio-signal data security and secure access. Experimental results show improved performance while maintaining accuracy of up to 98.5% and data security.
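
The clustering idea can be sketched with k-means on bio-signal feature vectors: fit clusters on normal data, then flag a new window that lies far from every learned centroid. The sketch below is a generic illustration with invented features and thresholds; the paper's exact clustering method and its ABE layer are not detailed in the abstract and are omitted here.

    # Sketch of clustering-based abnormality detection for bio-signal windows:
    # fit k-means on normal data, then flag feature vectors far from every
    # centroid. Generic illustration; features and thresholds are invented.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    normal = rng.normal(loc=[70, 0.98], scale=[4, 0.01], size=(300, 2))  # HR, SpO2
    km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(normal)

    def is_abnormal(window_feats, threshold=3.0):
        """Flag a feature vector far from every learned 'normal' centroid."""
        d = np.linalg.norm(km.cluster_centers_ - window_feats, axis=1)
        return d.min() > threshold

    print(is_abnormal(np.array([72, 0.97])))   # False: near a normal cluster
    print(is_abnormal(np.array([130, 0.85])))  # True: far from all centroids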
