
Bibliography on: Cloud Computing



ESP: PubMed Auto Bibliography, created 31 Jul 2021 at 01:36 (1958 citations)

Cloud Computing

Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic for a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion
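
The same query can be reproduced programmatically against the NCBI E-utilities esearch endpoint. The sketch below is illustrative only: the query string is copied from this bibliography, while the retmax value and the result handling are arbitrary choices and not part of the ESP pipeline.

    # Minimal sketch: run the bibliography's PubMed query via NCBI E-utilities (esearch).
    # The query string is taken verbatim from this page; retmax is an arbitrary example value.
    import requests

    QUERY = ('cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
             'OR google[TIAB] OR "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion')

    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": QUERY, "retmode": "json", "retmax": 100},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["esearchresult"]
    print("Total hits:", result["count"])
    print("First PMIDs:", result["idlist"][:10])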

Citations: The Papers (from PubMed®)


RevDate: 2021-07-30

Wang Y, Murlidaran S, DA Pearlman (2021)

Quantum simulations of SARS-CoV-2 main protease Mpro enable high-quality scoring of diverse ligands.

Journal of computer-aided molecular design [Epub ahead of print].

The COVID-19 pandemic has led to unprecedented efforts to identify drugs that can reduce its associated morbidity/mortality rate. Computational chemistry approaches hold the potential for triaging potential candidates far more quickly than their experimental counterparts. These methods have been widely used to search for small molecules that can inhibit critical proteins involved in the SARS-CoV-2 replication cycle. An important target is the SARS-CoV-2 main protease Mpro, an enzyme that cleaves the viral polyproteins into individual proteins required for viral replication and transcription. Unfortunately, standard computational screening methods face difficulties in ranking diverse ligands to a receptor due to disparate ligand scaffolds and varying charge states. Here, we describe full density functional quantum mechanical (DFT) simulations of Mpro in complex with various ligands to obtain absolute ligand binding energies. Our calculations are enabled by a new cloud-native parallel DFT implementation running on computational resources from Amazon Web Services (AWS). The results we obtain are promising: the approach is quite capable of scoring a very diverse set of existing drug compounds for their affinities to Mpro and suggests the DFT approach is potentially more broadly applicable to repurposing screens against this target. In addition, each DFT simulation required only ~1 h (wall clock time) per ligand. The fast turnaround time raises the practical possibility of a broad application of large-scale quantum mechanics in the drug discovery pipeline at stages where ligand diversity is essential.
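
The paper's cloud-native DFT code is not described here beyond running on AWS. Purely as a generic illustration of fanning independent per-ligand jobs out to EC2 with boto3, the sketch below uses placeholder identifiers (AMI, instance type, ligand names, job script) that are assumptions and not the authors' setup.

    # Generic sketch: provision one EC2 instance per ligand job with boto3.
    # AMI_ID, INSTANCE_TYPE, the ligand list and run_dft_job are hypothetical placeholders.
    import boto3

    AMI_ID = "ami-0123456789abcdef0"    # placeholder image with the DFT stack preinstalled
    INSTANCE_TYPE = "c5.9xlarge"        # placeholder compute-optimised instance type
    ligands = ["ligand_001", "ligand_002", "ligand_003"]

    ec2 = boto3.client("ec2", region_name="us-east-1")
    for ligand in ligands:
        # UserData runs at boot; here it would launch the per-ligand simulation script.
        ec2.run_instances(
            ImageId=AMI_ID,
            InstanceType=INSTANCE_TYPE,
            MinCount=1,
            MaxCount=1,
            UserData=f"#!/bin/bash\nrun_dft_job --ligand {ligand}\n",
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "ligand", "Value": ligand}],
            }],
        )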

RevDate: 2021-07-30

Li X, Ren S, F Gu (2021)

Medical Internet of Things to Realize Elderly Stroke Prevention and Nursing Management.

Journal of healthcare engineering, 2021:9989602.

Stroke is a major disease that seriously endangers the lives and health of middle-aged and elderly people in our country, yet the implementation of secondary prevention urgently needs improvement. The application of IoT technology in home health monitoring and telemedicine, together with the popularization of cloud computing, contributes to the early identification of ischemic stroke and provides intelligent, humanized, and preventive medical and health services for patients at high risk of stroke. This article describes the networking structure and networking objects of the rehabilitation Internet of Things, defines the functions of each part, and establishes an overall system architecture based on smart medical care. The mechanical part of the stroke rehabilitation robot is designed and optimized, and kinematic and dynamic analyses are carried out. Usage strategies for lower limb rehabilitation robots are given according to the functions of the different types of stroke rehabilitation robots; standardized codes are used to identify system objects, and RFID technology is used to automatically identify users and devices. Combined with the Internet and the GSM mobile communication network, a network database of the system's networking objects is constructed and, on this basis, information management software for a smart medical rehabilitation system serving both doctors and patients is built to realize the system's Internet of Things architecture. In addition, the article presents recovery strategy generation in the system together with the design of the resource scheduling method; the theoretical algorithm for rehabilitation strategy generation is given and verified. This research summarizes the application background, advantages, and past practice of the Internet of Things in stroke medical care, develops and applies a medical collaborative cloud computing system for the systematic intervention of stroke, and realizes module functions such as information sharing, regional monitoring, and collaborative consultation within the base.

RevDate: 2021-07-30

Mrozek D, Stępień K, Grzesik P, et al (2021)

A Large-Scale and Serverless Computational Approach for Improving Quality of NGS Data Supporting Big Multi-Omics Data Analyses.

Frontiers in genetics, 12:699280.

Various types of analyses performed over multi-omics data are driven today by next-generation sequencing (NGS) techniques that produce large volumes of DNA/RNA sequences. Although many tools allow for parallel processing of NGS data in a Big Data distributed environment, they do not facilitate improving the quality of NGS data at large scale in a simple declarative manner. Meanwhile, large sequencing projects and routine DNA/RNA sequencing associated with molecular profiling of diseases for personalized treatment require both good quality data and appropriate infrastructure for efficient storing and processing of the data. To solve these problems, we adapt the concept of Data Lake for storing and processing big NGS data. We also propose a dedicated library that allows cleaning the DNA/RNA sequences obtained with single-read and paired-end sequencing techniques. To accommodate the growth of NGS data, our solution is highly scalable on the Cloud and may rapidly and flexibly adjust to the amount of data that should be processed. Moreover, to simplify the utilization of the data cleaning methods and the implementation of other phases of data analysis workflows, our library extends the declarative U-SQL query language, providing a set of capabilities for data extraction, processing, and storing. The results of our experiments prove that the whole solution supports requirements for ample storage and highly parallel, scalable processing that accompanies NGS-based multi-omics data analyses.
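
The library described above is not reproduced here, but the core cleaning operation it automates, quality trimming of reads, can be illustrated with a minimal pure-Python FASTQ sketch. The Phred+33 offset, quality cutoff and minimum length are common defaults assumed for illustration, not the paper's settings.

    # Minimal sketch of quality-based 3' trimming of FASTQ reads (illustrative only).
    # Assumes Phred+33 quality encoding and an arbitrary quality cutoff of 20.
    QUAL_CUTOFF = 20
    PHRED_OFFSET = 33

    def trim_read(seq, qual):
        """Trim the read from the 3' end while base quality is below the cutoff."""
        end = len(seq)
        while end > 0 and (ord(qual[end - 1]) - PHRED_OFFSET) < QUAL_CUTOFF:
            end -= 1
        return seq[:end], qual[:end]

    def clean_fastq(in_path, out_path, min_len=30):
        with open(in_path) as fin, open(out_path, "w") as fout:
            while True:
                header = fin.readline().rstrip()
                if not header:
                    break
                seq = fin.readline().rstrip()
                plus = fin.readline().rstrip()
                qual = fin.readline().rstrip()
                seq, qual = trim_read(seq, qual)
                if len(seq) >= min_len:          # drop reads that become too short
                    fout.write(f"{header}\n{seq}\n{plus}\n{qual}\n")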

RevDate: 2021-07-28

Ashammakhi N, Unluturk BD, Kaarela O, et al (2021)

The Cells and the Implant Interact With the Biological System Via the Internet and Cloud Computing as the New Mediator.

The Journal of craniofacial surgery, 32(5):1655-1657.

RevDate: 2021-07-27

Niemann M, Lachmann N, Geneugelijk K, et al (2021)

Computational Eurotransplant kidney allocation simulations demonstrate the feasibility and benefit of T-cell epitope matching.

PLoS computational biology, 17(7):e1009248 pii:PCOMPBIOL-D-20-02268 [Epub ahead of print].

The Eurotransplant Kidney Allocation System (ETKAS) aims at allocating organs to patients on the waiting list fairly whilst optimizing HLA match grades. ETKAS currently considers the number of HLA-A, -B, -DR mismatches. Evidently, epitope matching is biologically and clinically more relevant. Here, we executed ETKAS-based computer simulations to evaluate the impact of epitope matching on allocation and compared the strategies. A virtual population of 400,000 individuals was generated using the National Marrow Donor Program (NMDP) haplotype frequency dataset of 2011. Using this population, a waiting list of 10,400 patients was constructed and maintained during the simulation, matching the characteristics of the 2015 Eurotransplant Annual Report. Unacceptable antigens were assigned randomly relative to their frequency using HLAMatchmaker. Over 22,600 kidneys were allocated in 10 years in triplicate using Markov Chain Monte Carlo simulations on 32-CPU-core cloud-computing instances. T-cell epitopes were calculated using the www.pirche.com portal. Waiting list effects were evaluated against ETKAS for five epitope matching scenarios. Baseline simulations of ETKAS slightly overestimated the reported average HLA match grades. The best balanced scenario maintained prioritisation of HLA A-B-DR fully matched donors while replacing the HLA match grade by the PIRCHE-II score and exchanging the HLA mismatch probability (MMP) for the epitope MMP. This setup showed no considerable impact on kidney exchange rates and waiting time. PIRCHE-II scores improved, whereas the average HLA match grade diminished slightly, yet estimated graft survival improved. We conclude that epitope-based matching in deceased donor kidney allocation is feasible while maintaining equal balances on the waiting list.

RevDate: 2021-07-28

Aslam B, Javed AR, Chakraborty C, et al (2021)

Blockchain and ANFIS empowered IoMT application for privacy preserved contact tracing in COVID-19 pandemic.

Personal and ubiquitous computing [Epub ahead of print].

The life-threatening novel severe acute respiratory syndrome coronavirus (SARS-CoV-2), which causes COVID-19, has engulfed the world and created health and economic challenges. To control the spread of COVID-19, a mechanism is required to enforce physical distancing between people. This paper proposes a Blockchain-based framework that preserves patients' anonymity while tracing their contacts with the help of Bluetooth-enabled smartphones. We use a smartphone application to interact with the proposed blockchain framework for contact tracing of the general public using Bluetooth and to store the obtained data in the cloud, which is accessible to health departments and government agencies so that they can take necessary and timely actions (e.g., quarantining infected people who are moving around). Thus, the proposed framework helps people perform their regular business and day-to-day activities with a controlled mechanism that keeps them safe from infected and exposed people. The smartphone application can quickly check a user's COVID status by analyzing their symptoms and determines, based on the given symptoms, whether the person is infected. The proposed Adaptive Neuro-Fuzzy Inference System (ANFIS) predicts the COVID status, and K-Nearest Neighbor (KNN) enhances the accuracy rate to 95.9% compared to state-of-the-art results.
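
The ANFIS component is beyond a short sketch, but the KNN classification step credited above with the accuracy improvement can be illustrated generically with scikit-learn. The symptom features, synthetic labels and k value below are assumptions for illustration, not the paper's dataset or tuning.

    # Illustrative KNN classifier over a hypothetical symptom feature matrix.
    # The features, synthetic data and k value are assumptions, not the paper's dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    # Columns: fever, cough, fatigue, loss_of_smell (binary indicators, synthetic data)
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 4))
    y = (X.sum(axis=1) >= 3).astype(int)        # toy labeling rule standing in for real labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print("Accuracy:", accuracy_score(y_te, knn.predict(X_te)))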

RevDate: 2021-07-27

Silva Junior D, Pacitti E, Paes A, et al (2021)

Provenance-and machine learning-based recommendation of parameter values in scientific workflows.

PeerJ. Computer science, 7:e606.

Scientific Workflows (SWfs) have revolutionized how scientists in various domains of science conduct their experiments. The management of SWfs is performed by complex tools that provide support for workflow composition, monitoring, execution, capturing, and storage of the data generated during execution. In some cases, they also provide components to ease the visualization and analysis of the generated data. During the workflow's composition phase, programs must be selected to perform the activities defined in the workflow specification. These programs often require additional parameters that serve to adjust the program's behavior according to the experiment's goals. Consequently, workflows commonly have many parameters to be manually configured, in many cases more than one hundred. Choosing wrong parameter values can lead to crashed workflow executions or undesired results. As the execution of data- and compute-intensive workflows is commonly performed in a high-performance computing environment (e.g., a cluster, a supercomputer, or a public cloud), an unsuccessful execution represents a waste of time and resources. In this article, we present FReeP (Feature Recommender from Preferences), a parameter value recommendation method that is designed to suggest values for workflow parameters, taking into account past user preferences. FReeP is based on Machine Learning techniques, particularly Preference Learning. FReeP is composed of three algorithms, two of which aim at recommending the value for one parameter at a time, while the third makes recommendations for n parameters at once. The experimental results obtained with provenance data from two broadly used workflows showed FReeP's usefulness in the recommendation of values for one parameter. Furthermore, the results indicate the potential of FReeP to recommend values for n parameters in scientific workflows.

RevDate: 2021-07-27

Skarlat O, S Schulte (2021)

FogFrame: a framework for IoT application execution in the fog.

PeerJ. Computer science, 7:e588.

Recently, a multitude of conceptual architectures and theoretical foundations for fog computing have been proposed. Despite this, there is still a lack of concrete frameworks to set up real-world fog landscapes. In this work, we design and implement the fog computing framework FogFrame, a system able to manage and monitor edge and cloud resources in fog landscapes and to execute Internet of Things (IoT) applications. FogFrame provides communication and interaction as well as application management within a fog landscape, namely decentralized service placement, deployment and execution. For service placement, we formalize a system model, define an objective function and constraints, and solve the problem implementing a greedy algorithm and a genetic algorithm. The framework is evaluated with regard to Quality of Service parameters of IoT applications and the utilization of fog resources using a real-world operational testbed. The evaluation shows that the service placement is adapted according to the demand and the available resources in the fog landscape. The greedy placement leads to the maximum utilization of edge devices, keeping as many services as possible at the edge, while the placement based on the genetic algorithm prevents device overloads by balancing between the cloud and edge. When comparing edge and cloud deployment, the service deployment time at the edge takes 14% of the deployment time in the cloud. If fog resources are utilized at maximum capacity and a new application request arrives with the need for certain sensor equipment, service deployment becomes impossible, and the application needs to be delegated to other fog resources. The genetic algorithm makes it possible to better accommodate new applications and keeps the utilization of edge devices at about 50% CPU. During the experiments, the framework successfully reacts to runtime events: (i) services are recovered when devices disappear from the fog landscape; (ii) cloud resources and highly utilized devices are released by migrating services to new devices; and (iii) in case of overloads, services are migrated in order to release resources.
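
FogFrame's actual placement formalisation is defined in the paper; the fragment below is only a generic greedy-placement sketch of the idea of keeping as many services as possible at the edge and spilling the rest to the cloud. The device capacities and service demands are invented example numbers.

    # Generic greedy service placement: fill edge devices first, overflow to the cloud.
    # Device capacities and service demands are invented example numbers.
    def greedy_placement(services, devices):
        """services: {name: cpu_demand}; devices: {name: cpu_capacity}."""
        free = dict(devices)
        placement = {}
        # Place the largest services first so big ones are not pushed to the cloud needlessly.
        for svc, demand in sorted(services.items(), key=lambda kv: -kv[1]):
            target = next((d for d, cap in free.items() if cap >= demand), None)
            if target is not None:
                free[target] -= demand
                placement[svc] = target
            else:
                placement[svc] = "cloud"   # no edge device has enough capacity left
        return placement

    print(greedy_placement(
        {"camera-analytics": 2.0, "alerting": 0.5, "dashboard": 1.0, "ml-inference": 3.0},
        {"edge-1": 2.5, "edge-2": 3.0},
    ))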

RevDate: 2021-07-27
CmpDate: 2021-07-27

Sauber AM, Awad A, Shawish AF, et al (2021)

A Novel Hadoop Security Model for Addressing Malicious Collusive Workers.

Computational intelligence and neuroscience, 2021:5753948.

With the daily increase in data production and collection, Hadoop is a platform for processing big data on distributed systems. A master node globally manages running jobs, whereas worker nodes process partitions of the data locally. Hadoop uses MapReduce as an effective computing model. However, Hadoop experiences a high level of security vulnerability over hybrid and public clouds. Specifically, several workers can fake results without actually processing their portions of the data. Several redundancy-based approaches have been proposed to counteract this risk: a replication mechanism is used to duplicate all or some of the tasks over multiple workers (nodes). A drawback of such approaches is that they generate a high overhead over the cluster. Additionally, malicious workers can behave well for a long period of time and attack later. This paper presents a novel model to enhance the security of the cloud environment against untrusted workers. A new component called malicious workers' trap (MWT) is developed to run on the master node to detect malicious (noncollusive and collusive) workers as they turn malicious and attack the system. An implementation to test the proposed model and to analyze the performance of the system shows that the proposed model can accurately detect malicious workers with minor processing overhead compared to vanilla MapReduce and the Verifiable MapReduce (V-MR) model [1]. In addition, MWT maintains a balance between the security and usability of the Hadoop cluster.
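
The MWT component itself is not spelled out in the abstract. As a rough illustration of the underlying redundancy idea only, the sketch below re-executes a random sample of tasks and flags workers whose original results disagree with the verification run; the sampling rate and the direct result comparison are simplifications, not the paper's mechanism.

    # Simplified redundancy check: re-run a random sample of tasks and flag workers
    # whose claimed results disagree with the verification run. Illustrative only.
    import random

    def verify_workers(task_results, rerun_task, sample_rate=0.1):
        """task_results: {task_id: (worker_id, result)}; rerun_task(task_id) -> result."""
        suspects = set()
        sample_size = max(1, int(len(task_results) * sample_rate))
        for task_id in random.sample(list(task_results), sample_size):
            worker_id, claimed = task_results[task_id]
            if rerun_task(task_id) != claimed:   # mismatch: the worker may have faked the result
                suspects.add(worker_id)
        return suspects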

RevDate: 2021-07-27

Tariq MU, Poulin M, AA Abonamah (2021)

Achieving Operational Excellence Through Artificial Intelligence: Driving Forces and Barriers.

Frontiers in psychology, 12:686624.

This paper presents an in-depth literature review on the driving forces and barriers for achieving operational excellence through artificial intelligence (AI). Artificial intelligence is a technological concept spanning operational management, philosophy, humanities, statistics, mathematics, computer sciences, and social sciences. AI refers to machines mimicking human behavior in terms of cognitive functions. The evolution of new technological procedures and advancements in producing intelligence for machines creates a positive impact on decisions, operations, strategies, and management incorporated in the production process of goods and services. Businesses develop various methods and solutions to extract meaningful information, such as big data, automatic production capabilities, and systematization for business improvement. The progress in organizational competitiveness is apparent through improvements in firms' decisions, resulting in increased operational efficiencies. Innovation with AI has enabled small businesses to reduce operating expenses and increase revenues. The focused literature review reveals that the driving forces for achieving operational excellence through AI are improvements in the computing abilities of machines, the development of data-based AI, and advancements in deep learning, cloud computing, data management, and the integration of AI in operations. The barriers are mainly cultural constraints, fear of the unknown, lack of employee skills, and strategic planning for adopting AI. The current paper presents an analysis of articles focused on AI adoption in production and operations. We selected articles published between 2015 and 2020. Our study contributes to the literature reviews on operational excellence, artificial intelligence, driving forces for AI, and AI barriers in achieving operational excellence.

RevDate: 2021-07-27

Sharma SK, SS Ahmed (2021)

IoT-based analysis for controlling & spreading prediction of COVID-19 in Saudi Arabia.

Soft computing [Epub ahead of print].

Presently, the novel coronavirus outbreak of 2019 (COVID-19) is a major threat to public health. Mathematical epidemic models can be utilized to forecast the course of an epidemic and to develop approaches for controlling it. This paper utilizes real data on the spread of COVID-19 in Saudi Arabia for mathematical modeling and complex analyses, and introduces the Susceptible, Exposed, Infectious, Recovered, Undetectable, and Deceased (SEIRUD) model and a machine learning algorithm to predict and control COVID-19 in Saudi Arabia. The COVID-19 crisis has spurred many technologies, such as cloud computing, edge computing, IoT, and artificial intelligence, and the use of sensor devices has increased enormously. Similarly, several developments in solving the COVID-19 crisis have been used by IoT applications. The new technology relies on IoT variables and the role of symptoms, using wearable sensors to forecast cases of COVID-19. The working model involves wearable devices, occupational therapy, condition control, testing of suspicious cases, and IoT elements. Mathematical modeling is useful for understanding the fundamental principles of COVID-19 transmission and providing guidance for possible predictions. The suggested method predicts whether COVID-19 will spread or die out in the population in the long term. The mathematical results and related simulations are described here as a way of forecasting the progress and the possible end of the epidemic under three scenarios: 'No Action', 'Lockdown', and 'New Medicine'. The lockdown scenario slows and lowers the epidemic peak by reducing infections. This study identifies an ideal protocol that can help the Saudi population break the spread of COVID-19 in an accurate and timely way. The simulation findings show that the suggested model achieves an accuracy ratio of 89.3%, a prediction ratio of 88.7%, a precision ratio of 87.7%, a recall ratio of 86.4%, and an F1 score of 90.9% compared to other existing methods.
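
The SEIRUD model itself is specified in the paper; the fragment below only shows the general shape of such compartment models using a plain SEIR forward-Euler integration. All rates, the population size and the initial conditions are arbitrary illustration values, not fitted Saudi data.

    # Forward-Euler integration of a basic SEIR model (a simplified stand-in for SEIRUD).
    # beta, sigma, gamma and the initial conditions are arbitrary illustrative values.
    def seir(beta=0.3, sigma=1 / 5.2, gamma=1 / 10, days=180, dt=0.1, N=1_000_000, I0=10):
        S, E, I, R = N - I0, 0.0, float(I0), 0.0
        trajectory = []
        for step in range(int(days / dt)):
            new_exposed = beta * S * I / N
            dS = -new_exposed
            dE = new_exposed - sigma * E
            dI = sigma * E - gamma * I
            dR = gamma * I
            S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
            if step % int(1 / dt) == 0:            # record once per simulated day
                trajectory.append((S, E, I, R))
        return trajectory

    peak_infectious = max(point[2] for point in seir())
    print(f"Peak infectious (toy parameters): {peak_infectious:,.0f}")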

RevDate: 2021-07-27
CmpDate: 2021-07-27

Huč A, Šalej J, M Trebar (2021)

Analysis of Machine Learning Algorithms for Anomaly Detection on Edge Devices.

Sensors (Basel, Switzerland), 21(14): pii:s21144946.

The Internet of Things (IoT) consists of small devices or a network of sensors, which permanently generate huge amounts of data. Usually, they have limited resources, either computing power or memory, which means that raw data are transferred to central systems or the cloud for analysis. Lately, the idea of moving intelligence to the IoT is becoming feasible, with machine learning (ML) moved to edge devices. The aim of this study is to provide an experimental analysis of processing a large imbalanced dataset (DS2OS), split into a training dataset (80%) and a test dataset (20%). The training dataset was reduced by randomly selecting a smaller number of samples to create new datasets Di (i = 1, 2, 5, 10, 15, 20, 40, 60, 80%). Afterwards, they were used with several machine learning algorithms to identify the size at which the performance metrics show saturation and classification results stop improving with an F1 score equal to 0.95 or higher, which happened at 20% of the training dataset. Further on, two solutions for the reduction of the number of samples to provide a balanced dataset are given. In the first, datasets DRi consist of all anomalous samples in seven classes and a reduced majority class ('NL') with i = 0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20 percent of randomly selected samples. In the second, datasets DCi are generated from the representative samples determined with clustering from the training dataset. All three dataset reduction methods showed comparable performance results. Further evaluation of training times and memory usage on Raspberry Pi 4 shows a possibility to run ML algorithms with limited sized datasets on edge devices.
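
The saturation experiment described above can be mimicked generically with scikit-learn: train on increasing fractions of the training split and watch the F1 score level off. The classifier, the synthetic dataset and the fractions below are stand-ins, not the DS2OS setup or the paper's algorithms.

    # Generic sketch of the "how much training data is enough" experiment:
    # train on growing fractions and track the macro F1 score. Dataset is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=20_000, n_features=20, n_classes=3,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    for frac in (0.01, 0.02, 0.05, 0.10, 0.20, 0.40, 0.80):
        n = int(len(X_tr) * frac)
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr[:n], y_tr[:n])
        score = f1_score(y_te, clf.predict(X_te), average="macro")
        print(f"{frac:>5.0%} of training data -> F1 = {score:.3f}")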

RevDate: 2021-07-27
CmpDate: 2021-07-27

Yar H, Imran AS, Khan ZA, et al (2021)

Towards Smart Home Automation Using IoT-Enabled Edge-Computing Paradigm.

Sensors (Basel, Switzerland), 21(14): pii:s21144932.

Smart home applications are ubiquitous and have gained popularity due to the overwhelming use of Internet of Things (IoT)-based technology. The revolution in technologies has made homes more convenient, efficient, and even more secure. The need for advancement in smart home technology is necessary due to the scarcity of intelligent home applications that cater to several aspects of the home simultaneously, i.e., automation, security, safety, and reducing energy consumption using less bandwidth, computation, and cost. Our research work provides a solution to these problems by deploying a smart home automation system with the applications mentioned above over a resource-constrained Raspberry Pi (RPI) device. The RPI is used as a central controlling unit, which provides a cost-effective platform for interconnecting a variety of devices and various sensors in a home via the Internet. We propose a cost-effective integrated system for the smart home based on the IoT and Edge-Computing paradigm. The proposed system provides remote and automatic control of home appliances, ensuring security and safety. Additionally, the proposed solution uses the edge-computing paradigm to store sensitive data in a local cloud to preserve the customer's privacy. Moreover, visual and scalar sensor-generated data are processed and held on the edge device (RPI) to reduce bandwidth, computation, and storage cost. Compared with state-of-the-art solutions, the proposed system is 5% faster in detecting motion and 5 ms and 4 ms faster in switching the relay on and off, respectively. It is also 6% more efficient than the existing solutions with respect to energy consumption.

RevDate: 2021-07-27
CmpDate: 2021-07-27

Kosasih DI, Lee BG, Lim H, et al (2021)

An Unsupervised Learning-Based Spatial Co-Location Detection System from Low-Power Consumption Sensor.

Sensors (Basel, Switzerland), 21(14): pii:s21144773.

Spatial co-location detection is the task of inferring the co-location of two or more objects in the geographic space. Mobile devices, especially smartphones, are commonly employed to accomplish this task with the human object. Previous work focused on analyzing mobile GPS data to accomplish this task. While this approach may guarantee high accuracy from the perspective of the data, it is considered inefficient since knowing the object's absolute geographic location is not required to accomplish this task. This work proposed the implementation of an unsupervised learning-based algorithm, namely a convolutional autoencoder, to infer the co-location of people from low-power sensor data, namely magnetometer readings. The idea is that if the trained model can also reconstruct the other person's data with a structural similarity (SSIM) index above 0.5, we can then conclude that the observed individuals were co-located. The evaluation of our system has indicated that the proposed approach could recognize the spatial co-location of people from magnetometer readings.
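
The decision rule described above (reconstruction SSIM above 0.5 implies co-location) can be sketched with scikit-image. The autoencoder itself is omitted and assumed here to be any trained object exposing a reconstruct(image) method; that interface and the function name are assumptions, only the 0.5 threshold comes from the abstract.

    # Sketch of the SSIM-based co-location decision. `model` is assumed to be a trained
    # autoencoder exposing reconstruct(image) -> image; the 0.5 threshold is the one
    # quoted in the abstract.
    from skimage.metrics import structural_similarity

    def co_located(model, other_image, threshold=0.5):
        """Return True if a model trained on one person's data also reconstructs
        another person's magnetometer image with SSIM above the threshold."""
        reconstruction = model.reconstruct(other_image)
        score = structural_similarity(
            other_image, reconstruction,
            data_range=other_image.max() - other_image.min(),
        )
        return score > threshold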

RevDate: 2021-07-27
CmpDate: 2021-07-27

Alhasnawi BN, Jasim BH, Rahman ZSA, et al (2021)

A Novel Robust Smart Energy Management and Demand Reduction for Smart Homes Based on Internet of Energy.

Sensors (Basel, Switzerland), 21(14): pii:s21144756.

In residential energy management (REM), scheduling the Time of Use (ToU) of devices based on user-defined preferences is an essential task performed by the home energy management controller. This paper devises a robust REM technique capable of monitoring and controlling residential loads within a smart home. A new distributed multi-agent framework based on a cloud-layer computing architecture is developed for real-time microgrid economic dispatch and monitoring, and a Time of Use (ToU) pricing model based on the grey wolf optimizer (GWO) and the artificial bee colony (ABC) optimization algorithm is proposed to define the rates for shoulder-peak and on-peak hours. The results illustrate the effectiveness of the proposed GWO- and ABC-based ToU pricing scheme. A Raspberry Pi 3 based model of a well-known test grid topology is modified to support real-time communication with the open-source IoE platform Node-RED used for cloud computing. A two-level communication system connects the microgrid system, implemented on the Raspberry Pi 3, to the cloud server: the local communication level utilizes TCP/IP, while MQTT is used as the protocol for the global communication level. The results demonstrate and validate the effectiveness of the proposed technique, as well as its capability to track load changes with real-time interactions and a fast convergence rate.
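
The global communication level above uses MQTT. A minimal publisher along those lines, using the paho-mqtt client library, might look like the following; the broker host, topic and payload are placeholder assumptions and not the paper's Node-RED configuration.

    # Minimal MQTT publisher sketch for a "global communication level".
    # Broker host, topic and payload are placeholder assumptions.
    import json
    import paho.mqtt.publish as publish

    payload = json.dumps({"node": "microgrid-rpi3", "load_kw": 1.8, "tariff": "on-peak"})
    publish.single(
        topic="microgrid/loads/house-1",
        payload=payload,
        qos=1,
        hostname="broker.example.org",
        port=1883,
    )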

RevDate: 2021-07-27
CmpDate: 2021-07-27

Stan OP, Enyedi S, Corches C, et al (2021)

Method to Increase Dependability in a Cloud-Fog-Edge Environment.

Sensors (Basel, Switzerland), 21(14): pii:s21144714.

Robots can be very different, from humanoids to intelligent self-driving cars or just IoT systems that collect and process local sensors' information. This paper presents a way to increase dependability for information exchange and processing in systems with Cloud-Fog-Edge architectures. In an ideal interconnected world, the recognized and registered robots must be able to communicate with each other if they are close enough, or through the Fog access points without overloading the Cloud. In essence, the presented work addresses the Edge area and how the devices can communicate in a safe and secure environment using cryptographic methods for structured systems. The presented work emphasizes the importance of security in a system's dependability and offers a communication mechanism for several robots without overburdening the Cloud. This solution is ideal for use where various monitoring and control aspects demand extra degrees of safety. The extra private keys employed by this procedure further enhance algorithm complexity, limiting the probability that the method may be broken by brute force or systemic attacks.

RevDate: 2021-07-27
CmpDate: 2021-07-27

Brescia E, Costantino D, Marzo F, et al (2021)

Automated Multistep Parameter Identification of SPMSMs in Large-Scale Applications Using Cloud Computing Resources.

Sensors (Basel, Switzerland), 21(14): pii:s21144699.

Parameter identification of permanent magnet synchronous machines (PMSMs) represents a well-established research area. However, parameter estimation of multiple running machines in large-scale applications has not yet been investigated. In this context, a flexible and automated approach is required to minimize complexity, costs, and human interventions without requiring machine information. This paper proposes a novel identification strategy for surface PMSMs (SPMSMs), highly suitable for large-scale systems. A novel multistep approach using measurement data at different operating conditions of the SPMSM is proposed to perform the parameter identification without requiring signal injection, extra sensors, machine information, and human interventions. Thus, the proposed method overcomes numerous issues of the existing parameter identification schemes. An IoT/cloud architecture is designed to implement the proposed multistep procedure and massively perform SPMSM parameter identifications. Finally, hardware-in-the-loop results show the effectiveness of the proposed approach.

RevDate: 2021-07-20

Hanussek M, Bartusch F, J Krüger (2021)

Performance and scaling behavior of bioinformatic applications in virtualization environments to create awareness for the efficient use of compute resources.

PLoS computational biology, 17(7):e1009244 pii:PCOMPBIOL-D-20-01988 [Epub ahead of print].

The large amount of biological data available today makes it necessary to use tools and applications based on sophisticated and efficient algorithms developed in the area of bioinformatics. Furthermore, access to high-performance computing resources is necessary to achieve results in reasonable time. To speed up applications and utilize available compute resources as efficiently as possible, software developers make use of parallelization mechanisms such as multithreading. Many of the available tools in bioinformatics offer multithreading capabilities, but more compute power is not always helpful. In this study we investigated the behavior of well-known bioinformatics applications regarding their performance in terms of scaling, different virtual environments and different datasets with our benchmarking tool suite BOOTABLE. The tool suite includes the tools BBMap, Bowtie2, BWA, Velvet, IDBA, SPAdes, Clustal Omega, MAFFT, SINA and GROMACS. In addition, we added an application using the machine learning framework TensorFlow. Machine learning is not directly part of bioinformatics but is applied to many biological problems, especially in the context of medical images (X-ray photographs). The mentioned tools have been analyzed in two different virtual environments: a virtual machine environment based on the OpenStack cloud software and a Docker environment. The obtained performance values were compared to a bare-metal setup and to each other. The study reveals that the virtual environments used produce an overhead in the range of seven to twenty-five percent compared to the bare-metal environment. The scaling measurements showed that some of the analyzed tools do not benefit from using larger amounts of computing resources, whereas others showed an almost linear scaling behavior. The findings of this study have been generalized as far as possible and should help users find the best amount of resources for their analyses. Furthermore, the results provide valuable information for resource providers to handle their resources as efficiently as possible and raise the user community's awareness of the efficient usage of computing resources.

RevDate: 2021-07-20

Zeng X, Zhang X, Yang S, et al (2021)

Gait-Based Implicit Authentication Using Edge Computing and Deep Learning for Mobile Devices.

Sensors (Basel, Switzerland), 21(13): pii:s21134592.

Implicit authentication mechanisms are expected to prevent security and privacy threats for mobile devices using behavior modeling. However, researchers have recently demonstrated that the performance of behavioral biometrics is insufficiently accurate. Furthermore, the unique characteristics of mobile devices, such as limited storage and energy, constrain their capacity for data collection and processing. In this paper, we propose an implicit authentication architecture based on edge computing, coined Edge computing-based mobile Device Implicit Authentication (EDIA), which exploits edge-based gait biometric identification using a deep learning model to authenticate users. The gait data captured by a device's accelerometer and gyroscope sensors are utilized as the input of our optimized model, which consists of a CNN and an LSTM in tandem. In particular, we extract the features of the gait signal in a two-dimensional domain by converting the original signal into an image, which is then input into the network. In addition, to reduce the computation overhead of mobile devices, the model for implicit authentication is generated on the cloud server, while the user authentication process takes place on the edge devices. We evaluate the performance of EDIA under different scenarios; the results show that (i) we achieve a true positive rate of 97.77% and a false positive rate of 2%, and (ii) EDIA still reaches high accuracy with a limited dataset size.
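
The two-dimensional representation mentioned above, turning an inertial-signal window into an image, can be approximated generically as follows. The window length, axis stacking and min-max scaling are arbitrary illustrative choices, not the paper's exact transform.

    # Generic sketch: turn a multi-axis inertial signal window into an 8-bit image.
    # Window length, stacking and min-max scaling are illustrative choices only.
    import numpy as np

    def window_to_image(window):
        """window: array of shape (n_samples, n_axes), e.g. 128 x 6 for acc+gyro."""
        lo, hi = window.min(), window.max()
        scaled = (window - lo) / (hi - lo + 1e-9)          # min-max normalise to [0, 1]
        return (scaled * 255).astype(np.uint8).T           # axes become image rows

    signal = np.random.default_rng(0).normal(size=(128, 6))   # fake 6-axis window
    image = window_to_image(signal)
    print(image.shape, image.dtype)                            # (6, 128) uint8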

RevDate: 2021-07-20

Alwateer M, Almars AM, Areed KN, et al (2021)

Ambient Healthcare Approach with Hybrid Whale Optimization Algorithm and Naïve Bayes Classifier.

Sensors (Basel, Switzerland), 21(13): pii:s21134579.

There is a crucial need to process patients' data immediately to make sound decisions rapidly; these data are very large and have excessive numbers of features. Recently, many cloud-based IoT healthcare systems have been proposed in the literature. However, several challenges remain regarding processing time and overall system efficiency for big healthcare data. This paper introduces a novel approach for processing healthcare data and predicting useful information at minimal computational cost. The main objective is to accept several types of data while improving accuracy and reducing processing time. The proposed approach uses a hybrid algorithm consisting of two phases. The first phase minimizes the number of features in big data by using the Whale Optimization Algorithm as a feature selection technique. The second phase then performs real-time data classification using a Naïve Bayes classifier. The proposed approach is based on fog computing for better business agility, better security, deeper insights with privacy, and reduced operating cost. The experimental results demonstrate that the proposed approach can reduce the number of dataset features, improve accuracy, and reduce processing time. Accuracy is enhanced by an average of 3.6% (3.34 for Diabetes, 2.94 for Heart disease, 3.77 for Heart attack prediction, and 4.15 for Sonar). In addition, processing speed is enhanced by reducing the processing time by an average of 8.7% (28.96 for Diabetes, 1.07 for Heart disease, 3.31 for Heart attack prediction, and 1.4 for Sonar).

RevDate: 2021-07-20

Agapiou A, V Lysandrou (2021)

Observing Thermal Conditions of Historic Buildings through Earth Observation Data and Big Data Engine.

Sensors (Basel, Switzerland), 21(13): pii:s21134557.

This study combines satellite observation, cloud platforms, and geographical information systems (GIS) to investigate, at a macro-scale level of observation, the thermal conditions of two historic clusters in Cyprus, in the Limassol and Strovolos municipalities. The two case studies are subject to different environmental and climatic conditions: the former site is coastal, the latter inland, and both contain historic buildings with similar building materials and techniques. For the needs of the study, more than 140 Landsat 7 ETM+ and 8 LDCM images were processed on the Google Earth Engine big data cloud platform to investigate the thermal conditions of the two historic clusters over the period 2013-2020. The multi-temporal thermal analysis included the calibration of all images to provide land surface temperature (LST) products at a 100 m spatial resolution. Moreover, to investigate anomalies related to possible land cover changes of the area, two indices were extracted from the satellite images: the normalised difference vegetation index (NDVI) and the normalised difference built-up index (NDBI). Anticipated results include the macro-scale identification of multi-temporal and diachronic changes and the establishment of change patterns, based on seasonality and location, occurring in large clusters of historic buildings.
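
The two indices named above are simple band ratios. Computed from reflectance arrays they look like the sketch below; which Landsat band maps to which array depends on the sensor and product, so the red/nir/swir1 names are assumptions rather than a prescription from the study.

    # Band-ratio sketches for the two indices used in the study.
    # `red`, `nir`, `swir1` are reflectance arrays; the band-to-array mapping depends
    # on the Landsat sensor/product and is assumed here for illustration.
    import numpy as np

    def ndvi(red, nir):
        """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        return (nir - red) / (nir + red + 1e-9)

    def ndbi(swir1, nir):
        """Normalised Difference Built-up Index: (SWIR1 - NIR) / (SWIR1 + NIR)."""
        return (swir1 - nir) / (swir1 + nir + 1e-9)

    red = np.array([[0.08, 0.10], [0.12, 0.09]])
    nir = np.array([[0.35, 0.30], [0.20, 0.40]])
    swir1 = np.array([[0.25, 0.28], [0.22, 0.18]])
    print(ndvi(red, nir))
    print(ndbi(swir1, nir))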

RevDate: 2021-07-20

Moon J, Yang M, J Jeong (2021)

A Novel Approach to the Job Shop Scheduling Problem Based on the Deep Q-Network in a Cooperative Multi-Access Edge Computing Ecosystem.

Sensors (Basel, Switzerland), 21(13): pii:s21134553.

In this study, based on multi-access edge computing (MEC), we explored the possibility of cooperative manufacturing processes. We addressed the job shop scheduling problem by applying a DQN (deep Q-network), a reinforcement learning model, to this setting. To alleviate the overload of computing resources, an efficient DQN trained with transfer learning data was used for the experiments. Additionally, we conducted scheduling studies in the edge computing ecosystem of our manufacturing processes without the help of cloud centers. Cloud computing, the environment in which scheduling processing is commonly performed, has issues that are sensitive for manufacturing processes in general, such as security and communication delay, and research in various fields is investigating edge computing systems that can replace it. We propose a method of independently performing scheduling at the edge of the network through cooperative scheduling between edge devices within a multi-access edge computing structure. The proposed framework was evaluated, analyzed, and compared with existing frameworks in terms of providing solutions and services.

RevDate: 2021-07-20

Chen L, Grimstead I, Bell D, et al (2021)

Estimating Vehicle and Pedestrian Activity from Town and City Traffic Cameras.

Sensors (Basel, Switzerland), 21(13): pii:s21134564.

Traffic cameras are a widely available source of open data that offer tremendous value to public authorities by providing real-time statistics to understand and monitor the activity levels of local populations and their responses to policy interventions such as those seen during the COrona VIrus Disease 2019 (COVID-19) pandemic. This paper presents an end-to-end solution based on the Google Cloud Platform with scalable processing capability to deal with large volumes of traffic camera data across the UK in a cost-efficient manner. It describes a deep learning pipeline to detect pedestrians and vehicles and to generate mobility statistics from these. It includes novel methods for data cleaning and post-processing using a Structural Similarity Measure (SSIM)-based static mask that improves reliability and accuracy in classifying people and vehicles from traffic camera images. The solution resulted in statistics describing trends in the 'busyness' of various towns and cities in the UK. We validated time series against Automatic Number Plate Recognition (ANPR) cameras across North East England, showing a close correlation between our statistical output and the ANPR source. Trends were also favorably compared against traffic flow statistics from the UK's Department of Transport. The results of this work have been adopted as an experimental faster indicator of the impact of COVID-19 on the UK economy and society by the Office for National Statistics (ONS).

RevDate: 2021-07-19

Risco S, Moltó G, Naranjo DM, et al (2021)

Serverless Workflows for Containerised Applications in the Cloud Continuum.

Journal of grid computing, 19(3):30.

This paper introduces an open-source platform to support serverless computing for scientific data-processing workflow-based applications across the Cloud continuum (i.e. simultaneously involving both on-premises and public Cloud platforms to process data captured at the edge). This is achieved via dynamic resource provisioning for FaaS platforms compatible with scale-to-zero approaches that minimise resource usage and cost for dynamic workloads with different elasticity requirements. The platform combines the usage of dynamically deployed auto-scaled Kubernetes clusters on on-premises Clouds and automated Cloud bursting into AWS Lambda to achieve higher levels of elasticity. A use case in public health for smart cities is used to assess the platform, in charge of detecting people not wearing face masks from captured videos. Faces are blurred for enhanced anonymity in the on-premises Cloud and detection via Deep Learning models is performed in AWS Lambda for this data-driven containerised workflow. The results indicate that hybrid workflows across the Cloud continuum can efficiently perform local data processing for enhanced regulations compliance and perform Cloud bursting for increased levels of elasticity.
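
The Cloud-bursting step, handing a unit of work to AWS Lambda, can be sketched with boto3 as below. The function name, region and payload structure are placeholder assumptions; the platform described in the paper adds dynamic provisioning and workflow orchestration on top of such an invocation.

    # Sketch of asynchronously bursting one work item into AWS Lambda with boto3.
    # Function name, region and payload structure are placeholder assumptions.
    import json
    import boto3

    lam = boto3.client("lambda", region_name="eu-west-1")
    event = {"video_key": "s3://example-bucket/frames/chunk-0042.tar", "task": "mask-detection"}

    response = lam.invoke(
        FunctionName="mask-detection-inference",     # hypothetical function name
        InvocationType="Event",                      # asynchronous, fire-and-forget
        Payload=json.dumps(event).encode("utf-8"),
    )
    print("Accepted:", response["StatusCode"])       # 202 for async invocations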

RevDate: 2021-07-14

Worrell GA (2021)

Electrical Brain Stimulation for Epilepsy and Emerging Applications.

Journal of clinical neurophysiology : official publication of the American Electroencephalographic Society pii:00004691-900000000-99281 [Epub ahead of print].

SUMMARY: Electrical brain stimulation is an established therapy for movement disorders, epilepsy, obsessive compulsive disorder, and a potential therapy for many other neurologic and psychiatric disorders. Despite significant progress and FDA approvals, there remain significant clinical gaps that can be addressed with next generation systems. Integrating wearable sensors and implantable brain devices with off-the-body computing resources (smart phones and cloud resources) opens a new vista for dense behavioral and physiological signal tracking coupled with adaptive stimulation therapy that should have applications for a range of brain and mind disorders. Here, we briefly review some history and current electrical brain stimulation applications for epilepsy, deep brain stimulation and responsive neurostimulation, and emerging applications for next generation devices and systems.

RevDate: 2021-07-15

Guo B, Ma Y, Yang J, et al (2021)

Smart Healthcare System Based on Cloud-Internet of Things and Deep Learning.

Journal of healthcare engineering, 2021:4109102.

Introduction: Health monitoring and remote diagnosis can be realized through Smart Healthcare. In view of existing problems such as the simple measurement parameters of wearable devices, the huge computing pressure on cloud servers, and the lack of individualization in diagnosis, a novel Cloud-Internet of Things (C-IOT) framework for medical monitoring is put forward.

Methods: Smartphones are adopted as gateway devices to standardize and preprocess the data, generating a health gray-scale map that is uploaded to the cloud server. The cloud server realizes the business logic processing and uses a deep learning model to compute the health parameters from the gray-scale map. A deep learning model based on a convolutional neural network (CNN) is constructed; six volunteers were selected to participate in the experiment, and their health data were labeled by private doctors to generate the initial data set.

Results: Experimental results show the feasibility of the proposed framework. The test data set is used to test the CNN model after training; the forecast accuracy is over 77.6%.

Conclusion: The CNN model performs well in the recognition of health status. Collectively, this Smart Healthcare System is expected to assist doctors by improving the diagnosis of health status in clinical practice.

RevDate: 2021-07-14

Morales-Botello ML, Gachet D, de Buenaga M, et al (2021)

Chronic patient remote monitoring through the application of big data and internet of things.

Health informatics journal, 27(3):14604582211030956.

Chronic patients could benefit from technological advances, but clinical approaches for this kind of patient are still limited. This paper describes a system for monitoring chronic patients both in home and in external environments. For this purpose, we used novel technologies such as big data, cloud computing and the internet of things (IoT). Additionally, the system has been validated for three use cases: cardiovascular disease (CVD), hypertension (HPN) and chronic obstructive pulmonary disease (COPD), which were selected for their incidence in the population. This system is innovative within e-health, mainly due to the use of a big data architecture based on open-source components; it also provides a scalable and distributed environment for the storage and processing of biomedical sensor data. The proposed system enables the incorporation of non-medical data sources in order to improve the self-management of chronic diseases and to develop better strategies for health interventions for chronic and dependent patients.

RevDate: 2021-07-12

Miras Del Río H, Ortiz Lora A, Bertolet Reina A, et al (2021)

A Monte Carlo dose calculation system for ophthalmic brachytherapy based on a realistic eye model.

Medical physics [Epub ahead of print].

PURPOSE: There is a growing trend towards the adoption of model-based calculation algorithms (MBDCAs) for brachytherapy dose calculations, which can properly handle media and source/applicator heterogeneities. However, most dose calculations in ocular plaque therapy are based on homogeneous water media and standard in-silico ocular phantoms, ignoring the non-water equivalency of the anatomic tissues and heterogeneities in applicators and patient anatomy. In this work, we introduce EyeMC, a Monte Carlo (MC) model-based calculation algorithm for ophthalmic plaque brachytherapy using realistic and adaptable patient-specific eye geometries and materials.

METHODS: We used the MC code PENELOPE in EyeMC to model Bebig IsoSeed I25.S16 seeds in COMS plaques and 106Ru/106Rh applicators that are coupled onto a customizable eye model with realistic geometry and composition. To significantly reduce calculation times, we integrated EyeMC with CloudMC, a cloud computing platform for radiation therapy calculations. EyeMC is equipped with an evaluation module that allows the generation of isodose distributions, dose-volume histograms, and comparisons with the Plaque Simulator three-dimensional dose distribution. We selected a sample of patients treated with 125I and 106Ru isotopes in our institution, covering a variety of different types of plaques, tumor sizes, and locations. Results from EyeMC were compared to the original plans calculated by the TPS Plaque Simulator, studying the influence of heterogeneous media composition as well.

RESULTS: EyeMC calculations for Ru plaques agreed well with the manufacturer's reference data and with MC simulation data from Hermida et al. (2013). Significant deviations, up to 20%, were only found in lateral profiles for notched plaques. As expected, media composition significantly affected estimated doses to different eye structures, especially in the 125I cases evaluated. Doses to the sclera and lens were found to be about 12% lower when considering real media, while the average dose to the tumor was 9% higher. 106Ru cases presented a 1%-3% dose reduction in all structures using real media for calculation, except for the lens, which showed an average dose 7.6% lower than water-based calculations. Comparisons with Plaque Simulator calculations showed large differences in dose to critical structures for 106Ru notched plaques. 125I cases presented significant and systematic dose deviations when using the default calculation parameters from Plaque Simulator version 5.3.8, which were corrected when using calculation parameters from a custom physics model for carrier-attenuation and air-interface correction functions.

CONCLUSIONS: EyeMC is an MC calculation system for ophthalmic brachytherapy based on a realistic and customizable eye-tumor model which includes the main eye structures with their real composition. Integrating this tool into a cloud computing environment makes it possible to perform high-precision MC calculations of ocular plaque treatments in short times. The observed variability in eye anatomy among the selected cases justifies the use of patient-specific models.

RevDate: 2021-07-13

Zhou C, Hu J, N Chen (2021)

Remote Care Assistance in Emergency Department Based on Smart Medical.

Journal of healthcare engineering, 2021:9971960.

Smart medical care is user-centric and takes medical information as its main thread, using big data, the Internet of Things, cloud computing, artificial intelligence, and other technologies to establish scientific and accurate information as well as an efficient and reasonable medical service system. Smart medical care plays an important role in alleviating doctor-patient conflicts caused by information asymmetry, reducing regional health differences caused by the irrational allocation of medical resources, and improving medical service levels. This article introduces a remote care assistance system for the emergency department based on smart medical care and intends to provide ideas and directions for technical research on remote care for emergency department patients. It proposes a research approach for remote care assistance in emergency departments based on smart medical care, including an overview of smart-medical-based remote care, real-time monitoring algorithms for remote care sensors, signal detection algorithms, and signal clustering algorithms, and reports experiments on remote care assistance in the emergency department. The experimental results show that 86.0% of patients liked the smart-medical-based remote care system studied in this paper.

RevDate: 2021-07-12

Zhao X, Liu J, Ji B, et al (2021)

Service Migration Policy Optimization considering User Mobility for E-Healthcare Applications.

Journal of healthcare engineering, 2021:9922876.

Mobile edge computing (MEC) is an emerging technology that provides cloud services at the edge of the network to enable latency-critical and resource-intensive E-healthcare applications. User mobility is common in MEC and can result in the interruption of ongoing edge services and a dramatic drop in quality of service. Service migration has great potential to address these issues but brings inevitable cost to the system. In this paper, we propose a service migration solution based on migration zones and formulate the service migration cost with a comprehensive model that captures the key challenges. Then, we formulate the service migration problem as a Markov decision process to obtain optimal service migration policies that decide where to migrate within a limited area. We propose three algorithms to solve the optimization problem given by the formulated model. Finally, we demonstrate the performance of our proposed algorithms by carrying out extensive experiments. We show that the proposed service migration approach reduces the total cost by up to 3 times compared to no migration and outperforms the general solution in terms of the total expected reward.
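
The migration decision above is framed as a Markov decision process. Purely as a generic illustration, and not the paper's cost model or algorithms, a tiny value iteration over "distance from the serving edge node" states might look like this; all costs, the drift assumption and the discount factor are invented.

    # Toy value iteration for a migrate/stay decision. States are the user's distance
    # (in zones) from the serving edge node; the latency costs, migration cost, user
    # drift and discount factor are invented and do not reproduce the paper's model.
    MAX_DIST = 4
    GAMMA = 0.9
    LATENCY_COST = {d: d * 2.0 for d in range(MAX_DIST + 1)}   # farther means higher latency cost
    MIGRATION_COST = 5.0

    def value_iteration(iters=200):
        V = {d: 0.0 for d in range(MAX_DIST + 1)}
        policy = {}
        for _ in range(iters):
            for d in range(MAX_DIST + 1):
                # Assume the user drifts one zone farther per step until the area boundary.
                next_d = min(d + 1, MAX_DIST)
                stay = LATENCY_COST[d] + GAMMA * V[next_d]
                # Migrating resets the distance to 0, then the user drifts to zone 1.
                migrate = MIGRATION_COST + LATENCY_COST[0] + GAMMA * V[min(1, MAX_DIST)]
                V[d] = min(stay, migrate)
                policy[d] = "stay" if stay <= migrate else "migrate"
        return policy

    print(value_iteration())   # e.g. migrate once the user is far enough away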

RevDate: 2021-07-07

Jayawardene M, Nandasena MS, Silva U, et al (2021)

Creation of a multiaccess database for hepatopancreaticobiliary surgery using open-source technology in a country that lacks electronic clinical database management systems.

Annals of hepato-biliary-pancreatic surgery, 25(Suppl 1):S403.

Introduction: State sector hospitals in Sri Lanka lack electronic database management systems. The database at the HPB unit at Colombo South Teaching Hospital was based on a rudimentary Google-Sheet that wasn't maintenance-friendly, prone to inconsistencies, and lacked data retrievability for analysis purposes.

Methods: Using cloud-based Google services and AppSheet, a multiaccess mobile app was developed to store HPB data. The author spent 2 months studying the web platform to create a password-protected app. Consent was obtained from patients to maintain clinical data through the app using mobile devices of the HPB team members. After 25 months of use of the app, this abstract analyses the overall data.

Results: The app can record 254 data variables per patient, of which 222 are analyzable. The database has so far 1,561 patients referred to and managed at the HPB unit since November 2018, in which 566 liver (M:F 2:1), 578 pancreatic (M:F 1.5:1), and 417 biliary pathologies (M:F 1:1.2) have been diagnosed. 857 have malignant pathologies and 523 have benign pathologies. 455 had a conclusive surgical management decision and 477 had a nonsurgical management decision. 74% (n = 420) of liver, 37% (n = 214) of pancreatic, and 53% (n = 222) of biliary patients had cancers. Data on 239 HCC, 64 CRLM, 137 pancreatic adenocarcinomas, 16 pancreatic cystic neoplasms, 141 cholangiocarcinoma, and 43 gallbladder cancers are available in the database. Among patients in whom a management decision was reached, 31% of liver, 39% of pancreatic, and 49% of biliary patients were operated on.

Conclusions: This cost-free user-friendly solution can revolutionize database management for a low-income country.

RevDate: 2021-07-06

Qu N, W You (2021)

Design and fault diagnosis of DCS sintering furnace's temperature control system for edge computing.

PloS one, 16(7):e0253246 pii:PONE-D-21-10646.

Against the background of modern industrial processing and production, the sintering furnace's temperature control system is studied to achieve intelligent smelting and reduce energy consumption. First, the specific application and implementation of edge computing in industrial processing and production are analyzed. Intelligent equipment for industrial processing and production based on edge computing comprises the equipment layer, the edge layer, and the cloud platform layer; this architecture improves the operating efficiency of the intelligent control system. Then, the sintering furnace in the metallurgical industry is taken as an example. The sintering furnace bonds powder material particles at high temperatures; thus, its core temperature control system is investigated. For the actual sintering furnace engineering design, the Distributed Control System (DCS) is used as the basis of sintering furnace temperature control, and a Programmable Logic Controller (PLC) is adopted to reduce the electrical wiring and switch contacts. The hardware circuit of the DCS is designed, and on this basis an embedded operating system with excellent performance is ported according to the functional requirements. The final DCS-based temperature control system is applied to actual monitoring. The real-time temperatures of the upper, middle, and lower currents of the 1# sintering furnace at a particular point are measured to be 56.95°C, 56.58°C, and 57.2°C, respectively, and those of the 2# sintering furnace are measured to be 144.7°C, 143.8°C, and 144.0°C, respectively. Overall, the temperature control deviation of the three currents of the two sintering furnaces stays within the controllable range. An expert system based on fuzzy logic in the fault diagnosis system can comprehensively predict the condition of the sintering furnaces, and its fault predictions are closer to the actual situation than those of a fault diagnosis method based on a Backpropagation (BP) neural network. The designed system makes up for the shortcomings of traditional sintering furnace temperature control systems and can control the temperature of the sintering furnace intelligently and scientifically. In addition, it can diagnose equipment faults in a timely and efficient manner, thereby improving sintering efficiency.

RevDate: 2021-07-06

Qin J, Mei G, Ma Z, et al (2021)

General Paradigm of Edge-Based Internet of Things Data Mining for Geohazard Prevention.

Big data [Epub ahead of print].

Geological hazards (geohazards) are geological processes or phenomena formed under externally induced factors that cause losses of human life and property. Geohazards occur suddenly, cause great harm, and have broad ranges of influence, which brings considerable challenges to geohazard prevention. Monitoring and early warning are the most common strategies to prevent geohazards. With the development of the internet of things (IoT), IoT-based monitoring devices provide rich, fine-grained data, making geohazard monitoring and early warning more accurate and effective. IoT-based monitoring data can be transmitted to a cloud center for processing to provide credible data references for geohazard early warning. However, the massive number of IoT devices occupies most of the cloud center's resources, which increases the data processing delay. Moreover, limited bandwidth restricts the transmission of large amounts of geohazard monitoring data. Thus, in some cases, cloud computing is not able to meet the real-time requirements of geohazard early warning. Edge computing technology processes data closer to the data source than the cloud center does, which provides the opportunity for rapid processing of monitoring data. This article presents a general paradigm of edge-based IoT data mining for geohazard prevention, especially monitoring and early warning. The paradigm mainly includes data acquisition, data mining and analysis, and data interpretation. Moreover, a real case is used to illustrate the details of the presented general paradigm. Finally, this article discusses several key problems for the general paradigm of edge-based IoT data mining for geohazard prevention.

RevDate: 2021-07-07

Shin H, Lee K, HY Kwon (2021)

A comparative experimental study of distributed storage engines for big spatial data processing using GeoSpark.

The Journal of supercomputing [Epub ahead of print].

With increasing numbers of GPS-equipped mobile devices, we are witnessing a deluge of spatial information that needs to be effectively and efficiently managed. Even though there are several distributed spatial data processing systems such as GeoSpark (Apache Sedona), the effects of the underlying storage engines have not been well studied for spatial data processing. In this paper, we evaluate the performance of various distributed storage engines for processing large-scale spatial data using GeoSpark, a state-of-the-art distributed spatial data processing system running on top of Apache Spark. For our performance evaluation, we choose three distributed storage engines with different characteristics: (1) HDFS, (2) MongoDB, and (3) Amazon S3. To conduct our experimental study in a real cloud computing environment, we utilize Amazon EMR instances (up to 6 instances) for distributed spatial data processing. For the evaluation of big spatial data processing, we generate data sets with four different data distributions and data sizes of up to one billion point records (38.5 GB raw size). Through extensive experiments, we measure the processing time of the storage engines under the following variations: (1) sharding strategies in MongoDB, (2) caching effects, (3) data distributions, (4) data set sizes, (5) the number of running executors and storage nodes, and (6) the selectivity of queries. The major points observed from the experiments are summarized as follows. (1) The overall performance of MongoDB-based GeoSpark is degraded compared to HDFS- and S3-based GeoSpark in our experimental settings. (2) The performance of MongoDB-based GeoSpark improves relative to the others on large-scale data sets. (3) HDFS- and S3-based GeoSpark are more scalable to running executors and storage nodes compared to MongoDB-based GeoSpark. (4) A sharding strategy based on spatial proximity significantly improves the performance of MongoDB-based GeoSpark. (5) S3- and HDFS-based GeoSpark show similar performance in all the environmental settings. (6) Caching in distributed environments improves the overall performance of spatial data processing. These results can inform the choice of the most suitable storage engine for big spatial data processing in a target distributed environment.
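
A minimal PySpark sketch of how swapping the storage back-end changes little more than the input URI for a simple range query; the paths, column names, and bounding box below are illustrative assumptions, not the paper's EMR setup, and a real GeoSpark/Sedona deployment would use its spatial SQL functions (and the MongoDB Spark connector for the MongoDB case) rather than this plain filter:

    # Illustrative comparison of storage back-ends for a simple spatial range query.
    # The hdfs:// and s3a:// URIs, schema, and bounding box are placeholder assumptions.
    from pyspark.sql import SparkSession
    import time

    spark = SparkSession.builder.appName("storage-engine-benchmark").getOrCreate()

    SOURCES = {
        "hdfs": "hdfs:///data/points.csv",      # HDFS-backed input (hypothetical path)
        "s3":   "s3a://my-bucket/points.csv",   # Amazon S3-backed input (hypothetical bucket)
    }

    def range_query(path):
        """Load a point data set and count points inside a bounding box."""
        df = (spark.read.option("header", "true").csv(path)
                   .selectExpr("cast(lon as double) as lon", "cast(lat as double) as lat"))
        return df.filter("lon BETWEEN 126.8 AND 127.2 AND lat BETWEEN 37.4 AND 37.7").count()

    for name, path in SOURCES.items():
        start = time.time()
        n = range_query(path)
        print(f"{name}: {n} points, {time.time() - start:.1f} s")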

RevDate: 2021-07-06

Singh VK, MH Kolekar (2021)

Deep learning empowered COVID-19 diagnosis using chest CT scan images for collaborative edge-cloud computing platform.

Multimedia tools and applications [Epub ahead of print].

The novel coronavirus outbreak has spread worldwide, causing respiratory infections in humans and leading to the huge global COVID-19 pandemic. According to the World Health Organization, the only way to curb this spread is by increasing testing and isolating the infected. Meanwhile, the clinical testing currently being followed is not easily accessible and requires much time to give results. In this scenario, remote diagnostic systems could become a handy solution. Some existing studies leverage the deep learning approach to provide an effective alternative to clinical diagnostic techniques. However, it is difficult to use such complex networks in resource-constrained environments. To address this problem, we developed a fine-tuned deep learning model inspired by the architecture of the MobileNet V2 model. Moreover, the developed model is further optimized in terms of its size and complexity to make it compatible with mobile and edge devices. The results of extensive experimentation performed on a real-world dataset consisting of 2,482 chest computerized tomography scan images strongly suggest the superiority of the developed fine-tuned deep learning model in terms of high accuracy and faster diagnosis time. The proposed model achieved a classification accuracy of 96.40%, with an approximately ten times shorter response time than prevailing deep learning models. Further, McNemar's statistical test results also prove the efficacy of the proposed model.
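
A minimal Keras sketch of the kind of MobileNetV2-based fine-tuning the abstract describes; the input size, classification head, and training settings below are assumptions for illustration, not the authors' actual architecture or edge optimization:

    # Hedged sketch: fine-tuning MobileNetV2 for binary COVID/non-COVID CT classification.
    # Input size, head layers, and hyperparameters are illustrative assumptions.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the backbone for the first fine-tuning stage

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID vs. non-COVID
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # dataset objects assumed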

RevDate: 2021-07-06

Mandal S, Khan DA, S Jain (2021)

Cloud-Based Zero Trust Access Control Policy: An Approach to Support Work-From-Home Driven by COVID-19 Pandemic.

New generation computing [Epub ahead of print].

The ubiquitous cloud computing services provide a new paradigm for the work-from-home environment adopted by enterprises during the unprecedented crisis of the COVID-19 outbreak. However, this change in work culture also increases the chances of cybersecurity attacks, MAC spoofing attacks, and DDoS/DoS attacks due to the divergent incoming traffic from untrusted networks accessing the enterprise's resources. Networks are usually unable to detect spoofing if the intruder has already forged the host's MAC address, and the techniques used in existing research mistakenly classify malicious hosts as legitimate ones. This paper proposes a novel access control policy based on a zero-trust network that explicitly restricts incoming network traffic to detect MAC spoofing attacks in the software-defined network (SDN) paradigm of cloud computing. A multiplicative increase and additive decrease algorithm helps to detect advanced MAC spoofing attacks before they penetrate the SDN-based cloud resources. Based on the proposed approach, a dynamic threshold is assigned to each incoming port number, and the self-learning nature of the threshold stamping helps to recognize a legitimate user's traffic before classifying it as an attack. Finally, the mathematical and experimental results exhibit higher accuracy and detection rates than existing methodologies. The novelty of this approach strengthens the security of the SDN paradigm of cloud resources by redefining the conventional access control policy.
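
One plausible reading of a multiplicative-increase/additive-decrease per-port threshold is sketched below; the constants, state layout, and classification rule are assumptions for illustration, not the authors' published parameters or detection logic:

    # Hedged sketch of a per-port dynamic threshold with multiplicative increase
    # and additive decrease; all constants and the anomaly rule are illustrative.
    from collections import defaultdict

    INCREASE_FACTOR = 2.0     # relax the limit multiplicatively when traffic looks legitimate
    DECREASE_STEP = 5.0       # tighten the limit additively when traffic looks suspicious
    INITIAL_THRESHOLD = 100.0

    thresholds = defaultdict(lambda: INITIAL_THRESHOLD)  # port -> allowed packets/s

    def update_threshold(port, observed_rate):
        """Adapt the per-port threshold and flag traffic above it."""
        if observed_rate <= thresholds[port]:
            # Behaviour within bounds: self-learning relaxation of the threshold.
            thresholds[port] *= INCREASE_FACTOR
            return "legitimate"
        # Behaviour above the limit: tighten the threshold and flag the traffic.
        thresholds[port] = max(INITIAL_THRESHOLD, thresholds[port] - DECREASE_STEP)
        return "possible MAC spoofing / DoS"

    print(update_threshold(8080, 80.0))   # legitimate, threshold relaxed
    print(update_threshold(8080, 900.0))  # flagged, threshold tightened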

RevDate: 2021-07-03

Ni L, Sun X, Li X, et al (2021)

GCWOAS2: Multiobjective Task Scheduling Strategy Based on Gaussian Cloud-Whale Optimization in Cloud Computing.

Computational intelligence and neuroscience, 2021:5546758.

An important challenge facing cloud computing is how to correctly and effectively handle and serve millions of users' requests. Efficient task scheduling in cloud computing can directly affect the resource configuration and operating cost of the entire system. However, task and resource scheduling in a cloud computing environment is an NP-hard problem. In this paper, we propose a three-layer scheduling model based on a whale-Gaussian cloud. In the second layer of the model, a whale optimization strategy based on the Gaussian cloud model (GCWOAS2) is used for multiobjective task scheduling in cloud computing, with the aims of minimizing task completion time by effectively utilizing virtual machine resources and of keeping the load of each virtual machine balanced, thereby reducing the operating cost of the system. In the GCWOAS2 strategy, an opposition-based learning mechanism is first used to initialize the scheduling strategy and generate the optimal scheduling scheme. Then, an adaptive mobility factor is proposed to dynamically expand the search range, and the whale optimization algorithm based on the Gaussian cloud model is proposed to enhance the randomness of the search. Finally, a multiobjective task scheduling algorithm based on Gaussian whale-cloud optimization (GCWOA) is presented, so that the entire scheduling strategy can not only expand the search range but also escape local optima and obtain a globally optimal scheduling strategy. Experimental results show that, compared with other existing metaheuristic algorithms, our strategy can not only shorten the task completion time but also balance the load of virtual machine resources, while also achieving better resource utilization.
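
The opposition-based learning (OBL) initialization mentioned above has a standard textbook form that can be sketched briefly; the real-coded task-to-VM encoding, toy makespan fitness, and bounds below are assumptions for illustration only, not the GCWOAS2 formulation:

    # Hedged sketch of opposition-based learning (OBL) initialization for a
    # population of candidate task-to-VM schedules; encoding and fitness are assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    n_tasks, n_vms, pop_size = 20, 5, 10
    lb, ub = 0, n_vms - 1                        # each gene encodes a VM index
    task_len = rng.uniform(1.0, 10.0, n_tasks)   # assumed task lengths

    def makespan(schedule):
        """Toy fitness: load of the busiest VM for a real-coded schedule."""
        loads = np.zeros(n_vms)
        for task, vm in enumerate(np.rint(schedule).astype(int)):
            loads[vm] += task_len[task]
        return loads.max()

    population = rng.uniform(lb, ub, size=(pop_size, n_tasks))
    opposite = lb + ub - population              # classic OBL: x_opp = lb + ub - x

    fit = np.array([makespan(p) for p in population])
    fit_opp = np.array([makespan(p) for p in opposite])
    # Keep, per candidate, whichever of the pair has the smaller makespan.
    init_pop = np.where((fit_opp < fit)[:, None], opposite, population)
    print(init_pop.shape, min(fit.min(), fit_opp.min()))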

RevDate: 2021-07-13
CmpDate: 2021-07-06

Pinheiro A, Canedo ED, Albuquerque RO, et al (2021)

Validation of Architecture Effectiveness for the Continuous Monitoring of File Integrity Stored in the Cloud Using Blockchain and Smart Contracts.

Sensors (Basel, Switzerland), 21(13):.

The management practicality and economy offered by the various technological solutions based on cloud computing have attracted many organizations, which have chosen to migrate services to the cloud, despite the numerous challenges arising from this migration. Cloud storage services are emerging as a relevant solution to meet the legal requirements of maintaining custody of electronic documents for long periods. However, the possibility of losses and the consequent financial damage require the permanent monitoring of this information. In a previous work named "Monitoring File Integrity Using Blockchain and Smart Contracts", the authors proposed an architecture based on blockchain, smart contract, and computational trust technologies that allows the periodic monitoring of the integrity of files stored in the cloud. However, the experiments carried out in the initial studies that validated the architecture included only small- and medium-sized files. As such, this paper presents a validation of the architecture to determine its effectiveness and efficiency when storing large files for long periods. The article provides an improved and detailed description of the proposed processes, followed by a security analysis of the architecture. The results of both the validation experiments and the implemented defense mechanism analysis confirm the security and the efficiency of the architecture in identifying corrupted files, regardless of file size and storage time.
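
The integrity check at the core of such an architecture reduces to comparing a freshly computed digest of the stored file against a previously recorded reference value (kept on-chain via a smart contract in the authors' design). A minimal sketch, with the reference digest treated as an assumed input rather than an actual smart-contract call:

    # Hedged sketch: recompute a file's SHA-256 in chunks and compare it with a
    # previously stored reference digest (held on the blockchain in the paper's design).
    import hashlib

    def file_digest(path, chunk_size=1 << 20):
        """Stream the file so arbitrarily large objects can be verified."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(path, reference_digest):
        """Return True if the stored file still matches its recorded digest."""
        return file_digest(path) == reference_digest

    # reference_digest would come from the smart-contract record in the architecture;
    # here it is only a placeholder string.
    # print(verify("archive/document.pdf", "e3b0c44298fc1c149afbf4c8996fb924..."))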

RevDate: 2021-07-12
CmpDate: 2021-07-06

Zhou H, Zhang W, Wang C, et al (2021)

BBNet: A Novel Convolutional Neural Network Structure in Edge-Cloud Collaborative Inference.

Sensors (Basel, Switzerland), 21(13):.

Edge-cloud collaborative inference can significantly reduce the delay of a deep neural network (DNN) by dividing the network between mobile edge and cloud. However, the in-layer data size of a DNN is usually larger than the original data, so the communication time needed to send intermediate data to the cloud can also increase end-to-end latency. To cope with these challenges, this paper proposes a novel convolutional neural network structure, BBNet, that accelerates collaborative inference at two levels: (1) through channel pruning, reducing the number of calculations and parameters of the original network; and (2) by compressing the feature map at the split point to further reduce the size of the data transmitted. In addition, this paper implements the BBNet structure on an NVIDIA Nano device and a server. Compared with the original network, BBNet achieves compression rates of up to 5.67× in FLOPs and 11.57× in parameters. In the best case, the feature compression layer can reach a bit-compression rate of 512×. BBNet's advantage in inference delay is more pronounced when network conditions are poor than when bandwidth is plentiful: for example, when the upload bandwidth is only 20 kb/s, the end-to-end latency of BBNet improves by 38.89× compared with the cloud-only approach.

RevDate: 2021-07-06
CmpDate: 2021-07-06

Mendez J, Molina M, Rodriguez N, et al (2021)

Camera-LiDAR Multi-Level Sensor Fusion for Target Detection at the Network Edge.

Sensors (Basel, Switzerland), 21(12):.

There have been significant advances regarding target detection in the autonomous vehicle context. To develop more robust systems that can overcome weather hazards as well as sensor problems, the sensor fusion approach is taking the lead in this context. Laser Imaging Detection and Ranging (LiDAR) and camera sensors are two of the most used sensors for this task since they can accurately provide important features such as the target's depth and shape. However, most of the current state-of-the-art target detection algorithms for autonomous cars do not take into consideration the hardware limitations of the vehicle, such as its reduced computing power in comparison with cloud servers, or the need for reduced latency. In this work, we propose Edge Computing Tensor Processing Unit (TPU) devices as hardware support due to their computing capabilities for machine learning algorithms as well as their reduced power consumption. We developed an accurate and small target detection model for these devices. Our proposed Multi-Level Sensor Fusion model has been optimized for the network edge, specifically for the Google Coral TPU. As a result, high accuracy is obtained on the challenging KITTI dataset while reducing the memory consumption and latency of the system.

RevDate: 2021-07-06
CmpDate: 2021-07-06

Caminero AC, R Muñoz-Mansilla (2021)

Quality of Service Provision in Fog Computing: Network-Aware Scheduling of Containers.

Sensors (Basel, Switzerland), 21(12):.

State-of-the-art scenarios, such as Internet of Things (IoT) and Smart Cities, have recently arisen. They involve the processing of huge data sets under strict time requirements, rendering the use of cloud resources unfeasible. For this reason, Fog computing has been proposed as a solution; however, there remains a need for intelligent allocation decisions, in order to make it a fully usable solution in such contexts. In this paper, a network-aware scheduling algorithm is presented, which aims to select the fog node most suitable for the execution of an application within a given deadline. This decision is made taking the status of the network into account. This scheduling algorithm was implemented as an extension to the Kubernetes default scheduler, and compared with existing proposals in the literature. The comparison shows that our proposal is the only one that can execute all the submitted jobs within their deadlines (i.e., no job is rejected or executed exceeding its deadline) with certain configurations in some of the scenarios tested, thus obtaining an optimal solution in such scenarios.

RevDate: 2021-07-06
CmpDate: 2021-07-06

Pauca O, Maxim A, CF Caruntu (2021)

Multivariable Optimisation for Waiting-Time Minimisation at Roundabout Intersections in a Cyber-Physical Framework.

Sensors (Basel, Switzerland), 21(12):.

The evolution of communication networks offers new possibilities for development in the automotive industry. Smart vehicles will benefit from the possibility of connecting with the infrastructure and from an extensive exchange of data between them. Furthermore, new control strategies can be developed that exploit the advantages of these communication networks. In this endeavour, the main purposes considered by the automotive industry and researchers from academia are: (i) ensuring people's safety; (ii) reducing overall costs; and (iii) improving traffic by maximising fluidity. In this paper, a two-level cyber-physical framework (CPF) to control the access of vehicles to roundabout intersections is proposed. Both levels correspond to the cyber part of the CPF, while the physical part is composed of the vehicles crossing the roundabout. The first level, i.e., the edge-computing layer, is based on an analytical solution that uses multivariable optimisation to minimise the waiting times of the vehicles entering a roundabout intersection and to ensure a safe crossing. The second level, i.e., the cloud-computing layer, stores information about the waiting times and trajectories of all the vehicles that cross the roundabout and uses it for long-term analysis and prediction. The simulation results show the efficacy of the proposed method, which can be easily implemented on an embedded device for real-time operation.

RevDate: 2021-07-05

Bao Y, Lin P, Li Y, et al (2021)

Parallel Structure from Motion for Sparse Point Cloud Generation in Large-Scale Scenes.

Sensors (Basel, Switzerland), 21(11):.

Scene reconstruction uses images or videos as input to reconstruct a 3D model of a real scene and has important applications in smart cities, surveying and mapping, the military, and other fields. Structure from motion (SFM) is a key step in scene reconstruction, which recovers sparse point clouds from image sequences. However, large-scale scenes cannot be reconstructed using a single compute node, and image matching and geometric filtering account for much of the runtime in traditional SFM. In this paper, we propose a novel divide-and-conquer framework to solve the distributed SFM problem. First, we use the global navigation satellite system (GNSS) information from images to calculate the GNSS neighborhood. The number of images matched is greatly reduced by matching each image only to its valid GNSS neighbors; in this way, a robust matching relationship can still be obtained. Second, the calculated matching relationship is used as the initial camera graph, which is divided into multiple subgraphs by a clustering algorithm. Local SFM is executed on several computing nodes to register the local cameras. Finally, all of the local camera poses are integrated and optimized to complete the global camera registration. Experiments show that our system can accurately and efficiently solve the structure from motion problem in large-scale scenes.
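
The GNSS-neighborhood step described above amounts to matching each image only against images taken within some radius of it; a small sketch using the haversine distance, where the 200 m radius and the (image_id, lat, lon) layout are assumptions rather than the paper's settings:

    # Hedged sketch: restrict image matching to GNSS neighbours within a radius.
    # The 200 m radius and the (image_id, lat, lon) tuples are illustrative assumptions.
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS-84 points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def gnss_neighbours(images, radius_m=200.0):
        """Return candidate match pairs (i, j) whose GNSS positions are close."""
        pairs = []
        for i, (id_i, lat_i, lon_i) in enumerate(images):
            for id_j, lat_j, lon_j in images[i + 1:]:
                if haversine_m(lat_i, lon_i, lat_j, lon_j) <= radius_m:
                    pairs.append((id_i, id_j))
        return pairs

    images = [("IMG_001", 39.906, 116.391), ("IMG_002", 39.907, 116.392), ("IMG_003", 39.950, 116.400)]
    print(gnss_neighbours(images))  # only the nearby pair survives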

RevDate: 2021-07-07
CmpDate: 2021-07-07

Kuaban GS, Atmaca T, Kamli A, et al (2021)

Performance Analysis of Packet Aggregation Mechanisms and Their Applications in Access (e.g., IoT, 4G/5G), Core, and Data Centre Networks.

Sensors (Basel, Switzerland), 21(11):.

The transmission of massive amounts of small packets generated by access networks through high-speed Internet core networks to other access networks or cloud computing data centres has introduced several challenges, such as poor throughput, underutilisation of network resources, and higher energy consumption. It is therefore essential to develop strategies to deal with these challenges. One of them is to aggregate smaller packets into a larger payload packet; these groups of aggregated packets share the same header, hence increasing throughput, improving resource utilisation, and reducing energy consumption. This paper presents a review of packet aggregation applications in access networks (e.g., IoT and 4G/5G mobile networks), optical core networks, and cloud computing data centre networks. We then propose new analytical models based on diffusion approximation for the evaluation of the performance of packet aggregation mechanisms, and we demonstrate the use of measured traffic from real networks to evaluate the performance of packet aggregation mechanisms analytically. The use of diffusion approximation allows us to consider time-dependent queueing models with general interarrival and service time distributions; these models are therefore more general than those presented to date.

RevDate: 2021-07-07
CmpDate: 2021-07-07

Nouh R, Singh M, D Singh (2021)

SafeDrive: Hybrid Recommendation System Architecture for Early Safety Predication Using Internet of Vehicles.

Sensors (Basel, Switzerland), 21(11):.

The Internet of Vehicles (IoV) is a rapidly emerging technological evolution of the Intelligent Transportation System (ITS). This paper proposes SafeDrive, a dynamic driver profile (DDP) using a hybrid recommendation system. DDP is a set of functional modules that analyze individual drivers' behaviors, using prior violation and accident records, to identify driving risk patterns. In this paper, we have considered three synthetic data-sets for 1,500 drivers based on their profile information, risk parameter information, and risk likelihood. In addition, we have also considered the drivers' historical violation/accident data-set records based on four risk-score levels, namely high-risk, medium-risk, low-risk, and no-risk, to predict current and future driver risk scores. Several error calculation methods have been applied in this study to analyze the performance of our proposed hybrid recommendation system in classifying driver data with higher accuracy based on various criteria. The evaluation results help to improve driving behavior and to broadcast early-warning alarms to other vehicles in the IoV environment for overall road safety. Moreover, the proposed model helps to provide a safe and predictable environment for vehicles, pedestrians, and road objects, with the help of regular monitoring of vehicle motion, driver behavior, and road conditions. It also enables accurate prediction of accidents beforehand and minimizes the complexity of on-road vehicles and the latency due to fog/cloud computing servers.

RevDate: 2021-07-07
CmpDate: 2021-07-07

Wang Q, Su M, Zhang M, et al (2021)

Integrating Digital Technologies and Public Health to Fight Covid-19 Pandemic: Key Technologies, Applications, Challenges and Outlook of Digital Healthcare.

International journal of environmental research and public health, 18(11):.

The integration of digital technologies and public health (or digital healthcare) helps us to fight the Coronavirus Disease 2019 (COVID-19) pandemic, which is the biggest public health crisis humanity has faced since the 1918 Influenza Pandemic. In order to better understand digital healthcare, this work conducted a systematic and comprehensive review of digital healthcare, with the purpose of helping us combat the COVID-19 pandemic. This paper covers the background information and research overview of digital healthcare, summarizes its applications and challenges in the COVID-19 pandemic, and finally puts forward the prospects of digital healthcare. First, the main concepts, key development processes, and common application scenarios of integrating digital technologies and healthcare were offered as background information. Second, bibliometric techniques were used to analyze the research output, geographic distribution, discipline distribution, collaboration networks, and hot topics of digital healthcare before and after the COVID-19 pandemic. We found that the COVID-19 pandemic has greatly accelerated research on the integration of digital technologies and healthcare. Third, application cases from China, the EU, and the U.S. using digital technologies to fight the COVID-19 pandemic were collected and analyzed. Among these digital technologies, big data, artificial intelligence, cloud computing, and 5G are the most effective weapons for combating the COVID-19 pandemic, and the application cases show that these technologies play an irreplaceable role in controlling the spread of COVID-19. By comparing the application cases in these three regions, we contend that the key to China's success in avoiding a second wave of the COVID-19 pandemic was integrating digital technologies and public health on a large scale without hesitation. Fourth, the application challenges of digital technologies in the public health field are summarized; these challenges mainly come from four aspects: data delays, data fragmentation, privacy security, and data security vulnerabilities. Finally, this study provides the future application prospects of digital healthcare and offers policy recommendations for other countries that use digital technology to combat COVID-19.

RevDate: 2021-07-05

Kim J, Lee J, T Kim (2021)

AdaMM: Adaptive Object Movement and Motion Tracking in Hierarchical Edge Computing System.

Sensors (Basel, Switzerland), 21(12): pii:s21124089.

This paper presents a novel adaptive object movement and motion tracking (AdaMM) framework in a hierarchical edge computing system for achieving GPU memory footprint reduction of deep learning (DL)-based video surveillance services. DL-based object movement and motion tracking requires a significant amount of resources, such as (1) GPU processing power for the inference phase and (2) GPU memory for model loading. Even when no object is present in the video, if the DL model is loaded, the GPU memory must be kept allocated for the loaded model. Moreover, in several cases, video surveillance tries to capture events that rarely occur (e.g., abnormal object behaviors); therefore, such standby GPU memory is easily wasted. To alleviate this problem, the proposed AdaMM framework categorizes the tasks used for the object movement and motion tracking procedure in increasing order of required processing and memory resources as task (1) frame difference calculation, task (2) object detection, and task (3) object motion and movement tracking. The framework aims to adaptively release the unnecessary standby object motion and movement tracking model to save GPU memory by utilizing light tasks, such as frame difference calculation and object detection, in a hierarchical manner. Consequently, object movement and motion tracking are adaptively triggered if an object is detected within the specified threshold time; otherwise, the GPU memory for the model of task (3) can be released. Moreover, object detection is also adaptively performed if the frame difference over time is greater than the specified threshold. We implemented the proposed AdaMM framework using commercial edge devices in a three-tier system, in which the first edge node handles tasks (1) and (2), the second edge node handles task (3), and the cloud sends a push alarm. A measurement-based experiment reveals that the proposed framework achieves a maximum GPU memory reduction of 76.8% compared to the baseline system, while requiring a 2,680 ms delay for loading the model for object movement and motion tracking.
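
The light "task (1)" that gates the heavier models can be sketched with a plain frame-difference check; the thresholds and camera source below are assumptions, and the actual AdaMM triggering logic and timers are more elaborate than this:

    # Hedged sketch of a frame-difference trigger: only wake the heavier object
    # detection / tracking stages when enough pixels changed. Thresholds are assumed.
    import cv2

    PIXEL_DELTA = 25        # per-pixel intensity change counted as "motion"
    CHANGED_RATIO = 0.01    # fraction of changed pixels needed to trigger task (2)

    def frame_changed(prev_gray, curr_gray):
        diff = cv2.absdiff(prev_gray, curr_gray)
        _, mask = cv2.threshold(diff, PIXEL_DELTA, 255, cv2.THRESH_BINARY)
        return cv2.countNonZero(mask) / mask.size > CHANGED_RATIO

    cap = cv2.VideoCapture(0)                    # camera on the first edge node (assumed)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if frame_changed(prev_gray, gray):
            pass  # hand the frame to the object-detection stage (task 2) here
        prev_gray = gray
    cap.release()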

RevDate: 2021-07-02

Miseikis J, Caroni P, Duchamp P, et al (2020)

Lio-A Personal Robot Assistant for Human-Robot Interaction and Care Applications.

IEEE robotics and automation letters, 5(4):5339-5346.

Lio is a mobile robot platform with a multi-functional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis. Lio is intrinsically safe by having full coverage in soft artificial-leather material as well as collision detection, limited speed and forces. Furthermore, the robot has a compliant motion controller. A combination of visual, audio, laser, ultrasound and mechanical sensors are used for safe navigation and environment understanding. The ROS-enabled setup allows researchers to access raw sensor data as well as have direct control of the robot. The friendly appearance of Lio has resulted in the robot being well accepted by health care staff and patients. Fully autonomous operation is made possible by a flexible decision engine, autonomous navigation and automatic recharging. Combined with time-scheduled task triggers, this allows Lio to operate throughout the day, with a battery life of up to 8 hours and recharging during idle times. A combination of powerful computing units provides enough processing power to deploy artificial intelligence and deep learning-based solutions on-board the robot without the need to send any sensitive data to cloud services, guaranteeing compliance with privacy requirements. During the COVID-19 pandemic, Lio was rapidly adjusted to perform additional functionality like disinfection and remote elevated body temperature detection. It complies with ISO13482 - Safety requirements for personal care robots, meaning it can be directly tested and deployed in care facilities.

RevDate: 2021-07-05

Fedorov A, Longabaugh WJR, Pot D, et al (2021)

NCI Imaging Data Commons.

Cancer research pii:0008-5472.CAN-21-0950 [Epub ahead of print].

The National Cancer Institute (NCI) Cancer Research Data Commons (CRDC) aims to establish a national cloud-based data science infrastructure. Imaging Data Commons (IDC) is a new component of CRDC supported by the Cancer Moonshot. The goal of IDC is to enable a broad spectrum of cancer researchers, with and without imaging expertise, to easily access and explore the value of de-identified imaging data and to support integrated analyses with non-imaging data. We achieve this goal by co-locating versatile imaging collections with cloud-based computing resources and data exploration, visualization, and analysis tools. The IDC pilot was released in October 2020 and is being continuously populated with radiology and histopathology collections. IDC provides access to curated imaging collections, accompanied by documentation, a user forum, and a growing number of analysis use cases that aim to demonstrate the value of a data commons framework applied to cancer imaging research.

RevDate: 2021-06-29

Park S, Lee D, Kim Y, et al (2021)

BioVLAB-Cancer-Pharmacogenomics: Tumor Heterogeneity and Pharmacogenomics Analysis of Multi-omics Data from Tumor on the Cloud.

Bioinformatics (Oxford, England) pii:6311261 [Epub ahead of print].

MOTIVATION: Multi-omics data in molecular biology has accumulated rapidly over the years. Such data contains valuable information for research in medicine and drug discovery. Unfortunately, data-driven research in medicine and drug discovery is challenging for a majority of small research labs due to the large volume of data and the complexity of the analysis pipelines.

RESULTS: We present BioVLAB-Cancer-Pharmacogenomics, a bioinformatics system that facilitates analysis of multi-omics data from breast cancer to analyze and investigate intratumor heterogeneity and pharmacogenomics on Amazon Web Services. Our system takes multi-omics data as input to perform tumor heterogeneity analysis in terms of TCGA data and deconvolve-and-match the tumor gene expression to cell line data in CCLE using DNA methylation profiles. We believe that our system can help small research labs perform analysis of tumor multi-omics without worrying about computational infrastructure and maintenance of databases and tools.

AVAILABILITY: http://biohealth.snu.ac.kr/software/biovlab_cancer_pharmacogenomics.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

RevDate: 2021-06-29

Leu MG, Weinberg ST, Monsen C, et al (2021)

Web Services and Cloud Computing in Pediatric Care.

Pediatrics pii:peds.2021-052048 [Epub ahead of print].

Electronic health record (EHR) systems do not uniformly implement pediatric-supportive functionalities. One method of adding these capabilities across EHR platforms is to integrate Web services and Web applications that may perform decision support and store data in the cloud when the EHR platform is able to integrate Web services. Specific examples of these services are described, such as immunization clinical decision support services, consumer health resources, and bilirubin nomograms. Health care providers, EHR vendors, and developers share responsibilities in the appropriate development, integration, and use of Web services and Web applications as they relate to best practices in the areas of data security and confidentiality, technical availability, audit trails, terminology and messaging standards, compliance with the Health Insurance Portability and Accountability Act, testing, usability, and other considerations. It is desirable for health care providers to have knowledge of Web services and Web applications that can improve pediatric capabilities in their own EHRs because this will naturally inform discussions concerning EHR features and facilitate implementation and subsequent use of these capabilities by clinicians caring for children.

RevDate: 2021-06-29

Ahanger TA, Tariq U, Nusir M, et al (2021)

A novel IoT-fog-cloud-based healthcare system for monitoring and predicting COVID-19 outspread.

The Journal of supercomputing [Epub ahead of print].

The rapid spread of viral illnesses is an emerging public health issue across the globe, and among these, COVID-19 is currently viewed as the most critical and novel infection. The current investigation gives an effective framework for the monitoring and prediction of COVID-19 virus infection (C-19VI). To the best of our knowledge, no research work has focused on incorporating IoT technology for tracking C-19 outspread over spatial-temporal patterns. Moreover, limited work has been done on predicting C-19 in humans for controlling the spread of COVID-19. The proposed framework includes a four-level architecture for the prediction and prevention of COVID-19 contamination. The presented model comprises a COVID-19 Data Collection (C-19DC) level, a COVID-19 Information Classification (C-19IC) level, a COVID-19 Mining and Extraction (C-19ME) level, and a COVID-19 Prediction and Decision Modeling (C-19PDM) level. Specifically, the presented model is used to empower a person/community to intermittently screen the COVID-19 Fever Measure (C-19FM) and forecast it so that proactive measures can be taken in advance. Additionally, for predictive purposes, the probabilistic assessment of C-19VI is quantified as a degree of membership, which is cumulatively characterized as the COVID-19 Fever Measure (C-19FM). The prediction is realized using a temporal recurrent neural network, and, based on the self-organizing map technique, the presence of C-19VI is determined over a geographical area. Simulations were performed on four challenging datasets. In contrast to other strategies, considerably improved outcomes in terms of classification efficiency, prediction viability, and reliability were recorded for the introduced model.

RevDate: 2021-06-29

Singh A, Jindal V, Sandhu R, et al (2021)

A scalable framework for smart COVID surveillance in the workplace using Deep Neural Networks and cloud computing.

Expert systems [Epub ahead of print].

A smart and scalable system is required to schedule various machine learning applications for controlling pandemics like COVID-19 using the computing infrastructure provided by cloud and fog computing. This paper proposes a framework that considers the use case of smart office surveillance to monitor workplaces for effectively detecting possible violations of COVID-19 safety guidelines. The proposed framework uses deep neural networks, fog computing, and cloud computing to develop a scalable and time-sensitive infrastructure that can detect two major violations: failure to wear a mask and failure to maintain a minimum distance of 6 feet between employees in the office environment. The framework is developed with the vision of integrating multiple machine learning applications and handling the computing infrastructures for pandemic applications. It can be used by application developers for the rapid development of new applications based on their requirements, without worrying about scheduling. The proposed framework is tested for two independent applications and performed better than the traditional cloud environment in terms of latency and response time. The work done in this paper tries to bridge the gap between machine learning applications and their computing infrastructure for COVID-19.
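
The distance-violation check can be illustrated in a few lines once person detections and a pixel-to-feet calibration are available; the calibration constant and bounding-box format below are assumptions for illustration, not values from the paper:

    # Hedged sketch: flag pairs of detected people estimated to be closer than 6 feet.
    # Boxes are (x1, y1, x2, y2) in pixels; FEET_PER_PIXEL is an assumed camera calibration.
    from itertools import combinations
    import math

    FEET_PER_PIXEL = 0.05   # assumed calibration for a fixed surveillance camera
    MIN_DISTANCE_FT = 6.0

    def centroid(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    def distance_violations(person_boxes):
        """Return index pairs of people estimated to be closer than 6 feet."""
        violations = []
        for (i, a), (j, b) in combinations(enumerate(person_boxes), 2):
            (ax, ay), (bx, by) = centroid(a), centroid(b)
            feet = math.hypot(ax - bx, ay - by) * FEET_PER_PIXEL
            if feet < MIN_DISTANCE_FT:
                violations.append((i, j))
        return violations

    print(distance_violations([(100, 200, 160, 380), (150, 210, 210, 390), (600, 200, 660, 380)]))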

RevDate: 2021-06-30

Elnashar A, Zeng H, Wu B, et al (2021)

Soil erosion assessment in the Blue Nile Basin driven by a novel RUSLE-GEE framework.

The Science of the total environment, 793:148466 pii:S0048-9697(21)03538-5 [Epub ahead of print].

Assessment of soil loss and understanding its major drivers are essential to implement targeted management interventions. We have proposed and developed a Revised Universal Soil Loss Equation framework fully implemented in the Google Earth Engine cloud platform (RUSLE-GEE) for high spatial resolution (90 m) soil erosion assessment. Using RUSLE-GEE, we analyzed the soil loss rate for different erosion levels, land cover types, and slopes in the Blue Nile Basin. The results showed that the mean soil loss rate is 39.73, 57.98, and 6.40 t ha^-1 yr^-1 for the entire Blue Nile, Upper Blue Nile, and Lower Blue Nile Basins, respectively. Our results also indicated that soil protection measures should be implemented in approximately 27% of the Blue Nile Basin, as these areas face a moderate to high risk of erosion (>10 t ha^-1 yr^-1). In addition, downscaling the Tropical Rainfall Measuring Mission (TRMM) precipitation data from 25 km to 1 km spatial resolution significantly impacts rainfall erosivity and soil loss rate. In terms of soil erosion assessment, the study showed the rapid characterization of soil loss rates that could be used to prioritize erosion mitigation plans to support sustainable land resources and tackle land degradation in the Blue Nile Basin.
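
At its core, RUSLE multiplies five factor layers (A = R × K × LS × C × P), which maps directly onto per-pixel image algebra in the Earth Engine Python API. A minimal sketch, with all asset IDs, the basin geometry, and the reduction scale as placeholders, since the paper's actual inputs and pre-processing are not reproduced here:

    # Hedged sketch of the RUSLE product A = R * K * LS * C * P in Google Earth Engine.
    # The ee.Image asset IDs and geometry below are placeholders, not the paper's data.
    import ee

    ee.Initialize()

    r_factor = ee.Image("users/example/rusle/r_factor")    # rainfall erosivity (placeholder)
    k_factor = ee.Image("users/example/rusle/k_factor")    # soil erodibility (placeholder)
    ls_factor = ee.Image("users/example/rusle/ls_factor")  # slope length/steepness (placeholder)
    c_factor = ee.Image("users/example/rusle/c_factor")    # cover management (placeholder)
    p_factor = ee.Image("users/example/rusle/p_factor")    # support practice (placeholder)

    # Per-pixel soil loss in t ha^-1 yr^-1, assuming the factor layers carry RUSLE units.
    soil_loss = (r_factor.multiply(k_factor)
                         .multiply(ls_factor)
                         .multiply(c_factor)
                         .multiply(p_factor)
                         .rename("soil_loss"))

    # Mean soil loss over a basin geometry (geometry and 90 m scale are assumptions).
    basin = ee.Geometry.Rectangle([32.0, 7.0, 40.0, 16.0])
    stats = soil_loss.reduceRegion(reducer=ee.Reducer.mean(), geometry=basin,
                                   scale=90, maxPixels=1e13)
    print(stats.getInfo())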

RevDate: 2021-06-29
CmpDate: 2021-06-29

Karhade DS, Roach J, Shrestha P, et al (2021)

An Automated Machine Learning Classifier for Early Childhood Caries.

Pediatric dentistry, 43(3):191-197.

Purpose: The purpose of the study was to develop and evaluate an automated machine learning algorithm (AutoML) for children's classification according to early childhood caries (ECC) status.

Methods: Clinical, demographic, behavioral, and parent-reported oral health status information for a sample of 6,404 three- to five-year-old children (mean age equals 54 months) participating in an epidemiologic study of early childhood oral health in North Carolina was used. ECC prevalence (decayed, missing, and filled primary teeth surfaces [dmfs] score greater than zero, using an International Caries Detection and Assessment System score greater than or equal to three caries lesion detection threshold) was 54 percent. Ten sets of ECC predictors were evaluated for ECC classification accuracy (i.e., area under the ROC curve [AUC], sensitivity [Se], and positive predictive value [PPV]) using an AutoML deployment on Google Cloud, followed by internal validation and external replication.

Results: A parsimonious model including two terms (i.e., children's age and parent-reported child oral health status: excellent/very good/good/fair/poor) had the highest AUC (0.74), Se (0.67), and PPV (0.64) scores and similar performance using an external National Health and Nutrition Examination Survey (NHANES) dataset (AUC equals 0.80, Se equals 0.73, PPV equals 0.49). Contrarily, a comprehensive model with 12 variables covering demographics (e.g., race/ethnicity, parental education), oral health behaviors, fluoride exposure, and dental home had worse performance (AUC equals 0.66, Se equals 0.54, PPV equals 0.61).

Conclusions: Parsimonious automated machine learning early childhood caries classifiers, including single-item self-reports, can be valuable for ECC screening. The classifier can accommodate biological information that can help improve its performance in the future.

RevDate: 2021-06-23

El Motaki S, Yahyaouy A, Gualous H, et al (2021)

A new weighted fuzzy C-means clustering for workload monitoring in cloud datacenter platforms.

Cluster computing [Epub ahead of print].

The rapid growth in virtualization solutions has driven the widespread adoption of cloud computing paradigms among various industries and applications, which has led to a growing need for XaaS solutions and equipment to enable teleworking. To meet this need, cloud operators and datacenters have to overcome several challenges related to continuity, the quality of the services provided, data security, and anomaly detection. In particular, anomaly detection methods play a critical role in detecting virtual machines' abnormal behaviours, which can potentially violate the service level agreements established with users. Unsupervised machine learning techniques are among the most commonly used technologies for implementing anomaly detection systems. This paper introduces a novel clustering approach for analyzing virtual machine behaviour while running workloads in a system, based on resource usage details (such as CPU utilization and downtime events). The proposed algorithm is inspired by the intuitive mechanism by which flocking birds in nature form reasonable clusters: the direction of each starling's movement depends on its own information and on information provided by nearby starlings during flight. Analogously, after associating a weight with each data sample to guide the formation of meaningful groups, each data element determines its next position in the feature space based on its current position and its surroundings. Based on a realistic dataset and clustering validity indices, the experimental evaluation shows that the new weighted fuzzy c-means algorithm provides interesting results and outperforms the corresponding standard weighted fuzzy c-means algorithm.
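
For reference, weighted fuzzy c-means can be summarised by two alternating updates, with per-sample weights entering the centroid step. A compact numpy sketch under standard FCM assumptions (fuzzifier m = 2, random initialization), which is not the authors' starling-inspired weighting scheme itself:

    # Hedged numpy sketch of weighted fuzzy c-means: standard FCM updates with
    # per-sample weights w_k scaling the centroid step. Fuzzifier and init are assumed.
    import numpy as np

    def weighted_fcm(X, weights, n_clusters=3, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((n, n_clusters))
        U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = (U ** m) * weights[:, None]          # weights scale each sample's influence
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2.0 / (m - 1.0)))        # inverse-distance memberships
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Toy usage: two obvious groups of "VM resource usage" points with equal weights.
    X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.95, 0.85]])
    centers, U = weighted_fcm(X, weights=np.ones(len(X)), n_clusters=2)
    print(np.round(centers, 2))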

RevDate: 2021-06-23

Donato L, Scimone C, Rinaldi C, et al (2021)

New evaluation methods of read mapping by 17 aligners on simulated and empirical NGS data: an updated comparison of DNA- and RNA-Seq data from Illumina and Ion Torrent technologies.

Neural computing & applications [Epub ahead of print].

During the last 15 years, improved omics sequencing technologies have expanded the scale and resolution of various biological applications, generating high-throughput datasets that require carefully chosen software tools to be processed. Therefore, following these sequencing developments, bioinformatics researchers have been challenged to implement alignment algorithms for next-generation sequencing reads. However, the selection of aligners based on genome characteristics is still poorly studied, so our benchmarking study extended the state of the art by comparing 17 different aligners. The chosen tools were assessed on empirical human DNA- and RNA-Seq data, as well as on simulated datasets in human and mouse, evaluating a set of parameters not previously considered in such benchmarks. As expected, we found that each tool performed best under specific conditions. For Ion Torrent single-end RNA-Seq samples, the most suitable aligners were CLC and BWA-MEM, which reached the best results in terms of efficiency, accuracy, duplication rate, saturation profile, and running time. For Illumina paired-end osteomyelitis transcriptomics data, the best-performing algorithm, together with the already cited CLC, was Novoalign, which excelled in the accuracy and saturation analyses. Segemehl and DNASTAR performed best on both DNA-Seq datasets, with Segemehl particularly suitable for exome data. In conclusion, our study could guide users in the selection of a suitable aligner based on genome and transcriptome characteristics. However, several other aspects that emerged from our work should be considered as the alignment research area evolves, such as the involvement of artificial intelligence to support cloud computing and mapping to multiple genomes.

Supplementary Information: The online version contains supplementary material available at 10.1007/s00521-021-06188-z.

RevDate: 2021-07-02

Bichmann L, Gupta S, Rosenberger G, et al (2021)

DIAproteomics: A Multifunctional Data Analysis Pipeline for Data-Independent Acquisition Proteomics and Peptidomics.

Journal of proteome research, 20(7):3758-3766.

Data-independent acquisition (DIA) is becoming a leading analysis method in biomedical mass spectrometry. The main advantages include greater reproducibility and sensitivity and a greater dynamic range compared with data-dependent acquisition (DDA). However, the data analysis is complex and often requires expert knowledge when dealing with large-scale data sets. Here we present DIAproteomics, a multifunctional, automated, high-throughput pipeline implemented in the Nextflow workflow management system that allows one to easily process proteomics and peptidomics DIA data sets on diverse compute infrastructures. The central components are well-established tools such as the OpenSwathWorkflow for the DIA spectral library search and PyProphet for the false discovery rate assessment. In addition, it provides options to generate spectral libraries from existing DDA data and to carry out the retention time and chromatogram alignment. The output includes annotated tables and diagnostic visualizations from the statistical postprocessing and computation of fold-changes across pairwise conditions, predefined in an experimental design. DIAproteomics is well documented open-source software and is available under a permissive license to the scientific community at https://www.openms.de/diaproteomics/.

RevDate: 2021-06-30

Li J, Peng B, Wei Y, et al (2021)

Accurate extraction of surface water in complex environment based on Google Earth Engine and Sentinel-2.

PloS one, 16(6):e0253209.

To realize the accurate extraction of surface water in complex environments, this study takes Sri Lanka as the study area owing to its complex geography and various types of water bodies. Based on Google Earth Engine and Sentinel-2 images, an automatic water extraction model for complex environments (AWECE) was developed. The accuracy of water extraction by the AWECE, NDWI, MNDWI, and revised multi-spectral water index (MuWI-R) models was evaluated through visual interpretation and quantitative analysis. The results show that the AWECE model could significantly improve the accuracy of water extraction in complex environments, with an overall accuracy of 97.16% and extremely low omission (0.74%) and commission (2.35%) errors. The AWECE model could effectively avoid the influence of cloud shadow, mountain shadow, and paddy soil on water extraction accuracy. The model can be widely applied in cloudy, mountainous, and other areas with complex environments, which has important practical significance for water resources investigation, monitoring, and protection.
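
For context, the index baselines mentioned above (NDWI = (Green − NIR)/(Green + NIR), MNDWI = (Green − SWIR)/(Green + SWIR)) map onto one-liners in the Earth Engine Python API. The collection ID, date range, cloud filter, and zero threshold below are ordinary defaults used for illustration, not the paper's AWECE model:

    # Hedged sketch: NDWI and MNDWI water masks from a Sentinel-2 composite in
    # Google Earth Engine. Dates, cloud filter, and thresholds are assumed defaults.
    import ee

    ee.Initialize()

    composite = (ee.ImageCollection("COPERNICUS/S2_SR")
                 .filterDate("2020-01-01", "2020-12-31")
                 .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
                 .median())

    # Sentinel-2 bands: B3 = green, B8 = NIR, B11 = SWIR-1.
    ndwi = composite.normalizedDifference(["B3", "B8"]).rename("NDWI")
    mndwi = composite.normalizedDifference(["B3", "B11"]).rename("MNDWI")

    water_ndwi = ndwi.gt(0)     # simple thresholding; AWECE combines additional rules
    water_mndwi = mndwi.gt(0)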

RevDate: 2021-06-19

Azhir E, Jafari Navimipour N, Hosseinzadeh M, et al (2021)

A technique for parallel query optimization using MapReduce framework and a semantic-based clustering method.

PeerJ. Computer science, 7:e580.

Query optimization is the process of identifying the best Query Execution Plan (QEP). The query optimizer produces a close-to-optimal QEP for the given queries based on the minimum resource usage. The problem is that, for a given query, there are plenty of different equivalent execution plans, each with a corresponding execution cost; producing an effective query plan thus requires examining a large number of alternative plans. Access plan recommendation is an alternative technique to database query optimization, which reuses previously generated QEPs to execute new queries. In this technique, the query optimizer uses clustering methods to identify groups of similar queries. However, clustering such large datasets is challenging for traditional clustering algorithms due to the huge processing time involved. Numerous cloud-based platforms, such as Hadoop, Hive, and Pig, have been introduced that offer low-cost solutions for processing distributed queries. This paper applies and tests a model for clustering large query datasets of varying sizes in parallel using MapReduce. The results demonstrate the effectiveness of the parallel implementation of query workload clustering in achieving good scalability.
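
The map/reduce split for one iteration of such a clustering job can be sketched in a few lines: the map phase assigns each query's feature vector to its nearest centroid, and the reduce phase averages the assigned vectors. The feature encoding of queries is an assumption here, and a real deployment would run this on a Hadoop or Spark cluster rather than a local process pool:

    # Hedged sketch of one MapReduce-style k-means iteration over query feature
    # vectors, using a local process pool in place of a Hadoop cluster.
    from multiprocessing import Pool
    from collections import defaultdict
    import numpy as np

    centroids = np.array([[0.0, 0.0], [10.0, 10.0]])   # current centroids (assumed k = 2)

    def map_phase(vec):
        """Emit (nearest-centroid-id, vector) for one query's feature vector."""
        cid = int(np.argmin(np.linalg.norm(centroids - vec, axis=1)))
        return cid, vec

    def reduce_phase(pairs):
        """Average the vectors assigned to each centroid."""
        groups = defaultdict(list)
        for cid, vec in pairs:
            groups[cid].append(vec)
        return {cid: np.mean(vecs, axis=0) for cid, vecs in groups.items()}

    if __name__ == "__main__":
        queries = np.array([[0.5, 1.0], [1.0, 0.2], [9.0, 9.5], [11.0, 10.5]])
        with Pool(4) as pool:
            mapped = pool.map(map_phase, queries)
        print(reduce_phase(mapped))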

RevDate: 2021-06-19

Inamura T, Y Mizuchi (2021)

SIGVerse: A Cloud-Based VR Platform for Research on Multimodal Human-Robot Interaction.

Frontiers in robotics and AI, 8:549360.

Research on Human-Robot Interaction (HRI) requires substantial consideration of experimental design, as well as a significant amount of time to conduct subject experiments. Recent technology in virtual reality (VR) can potentially address these time and effort challenges. The significant advantages of VR systems for HRI are: 1) cost reduction, as experimental facilities are not required in a real environment; 2) provision of the same environmental and embodied interaction conditions to test subjects; 3) visualization of arbitrary information and situations that cannot occur in reality, such as playback of past experiences; and 4) ease of access to an immersive and natural interface for robot/avatar teleoperation. Although VR tools with such features have been applied and developed in previous HRI research, all-encompassing tools or frameworks remain unavailable. In particular, the benefits of integration with cloud computing have not been comprehensively considered. Hence, the purpose of this study is to propose a research platform that can comprehensively provide the elements required for HRI research by integrating VR and cloud technologies. To realize a flexible and reusable system, we developed a real-time bridging mechanism between the Robot Operating System (ROS) and Unity. To confirm the feasibility of the system in a practical HRI scenario, we applied the proposed system to three case studies, including a robot competition named RoboCup@Home. Through these case studies, we validated the system's usefulness and its potential for the development and evaluation of social intelligence via multimodal HRI.

RevDate: 2021-06-19

Paul-Gilloteaux P, Tosi S, Hériché JK, et al (2021)

Bioimage analysis workflows: community resources to navigate through a complex ecosystem.

F1000Research, 10:320.

Workflows are the keystone of bioimage analysis, and the NEUBIAS (Network of European BioImage AnalystS) community is trying to gather the actors of this field and organize the information around them. One of its most recent outputs is the opening of the F1000Research NEUBIAS gateway, whose main objective is to offer a channel of publication for bioimage analysis workflows and associated resources. In this paper we want to express some personal opinions and recommendations related to finding, handling and developing bioimage analysis workflows. The emergence of "big data" in bioimaging and resource-intensive analysis algorithms make local data storage and computing solutions a limiting factor. At the same time, the need for data sharing with collaborators and a general shift towards remote work, have created new challenges and avenues for the execution and sharing of bioimage analysis workflows. These challenges are to reproducibly run workflows in remote environments, in particular when their components come from different software packages, but also to document them and link their parameters and results by following the FAIR principles (Findable, Accessible, Interoperable, Reusable) to foster open and reproducible science. In this opinion paper, we focus on giving some directions to the reader to tackle these challenges and navigate through this complex ecosystem, in order to find and use workflows, and to compare workflows addressing the same problem. We also discuss tools to run workflows in the cloud and on High Performance Computing resources, and suggest ways to make these workflows FAIR.

RevDate: 2021-06-16

Tan C, J Lin (2021)

A new QoE-based prediction model for evaluating virtual education systems with COVID-19 side effects using data mining.

Soft computing [Epub ahead of print].

Today, emerging technologies such as the 5G Internet of Things (IoT), virtual reality, and cloud-edge computing have enhanced and upgraded higher education environments in universities, colleges, and research centers. Computer-assisted learning systems that aggregate IoT applications and smart devices have improved e-learning systems by enabling remote monitoring and screening of the behavioral aspects of teaching and of students' education scores. Likewise, educational data mining has improved higher education systems by predicting and analyzing the behavioral aspects of teaching and students' education scores. Due to an unexpected and huge increase in the number of patients during the coronavirus (COVID-19) pandemic, universities, campuses, schools, research centers, and many scientific collaborations and meetings have closed and been forced to switch to online teaching, e-learning, and virtual meetings. Because of the importance of the behavioral aspects of teaching and education between lecturers and students, predicting quality of experience (QoE) in virtual education systems is a critical issue. This paper presents a new prediction model to detect technical aspects of teaching and e-learning in virtual education systems using data mining. Association rule mining and supervised techniques are applied to detect effective QoE factors in virtual education systems. The experimental results show that the suggested prediction model achieves suitable accuracy, precision, and recall for predicting the behavioral aspects of teaching and e-learning for students in virtual education systems.

RevDate: 2021-06-15

Abbasi WA, Abbas SA, S Andleeb (2021)

PANDA: Predicting the change in proteins binding affinity upon mutations by finding a signal in primary structures.

Journal of bioinformatics and computational biology [Epub ahead of print].

Accurately determining the change in protein binding affinity upon mutation is important for finding novel therapeutics and assisting mutagenesis studies. Determining the change in binding affinity upon mutation requires sophisticated, expensive, and time-consuming wet-lab experiments that can be supported with computational methods. Most of the available computational prediction techniques depend upon protein structures, which limits their applicability to protein complexes with known 3D structures. In this work, we explore the sequence-based prediction of change in protein binding affinity upon mutation and question the effectiveness of k-fold cross-validation (CV) across mutations, adopted in previous studies, for assessing the generalization ability of such predictors to mutations unseen during training. We have used protein sequence information instead of protein structures, along with machine learning techniques, to accurately predict the change in protein binding affinity upon mutation. Our proposed sequence-based predictor of the change in protein binding affinity, called PANDA, performs comparably to existing methods when gauged through an appropriate CV scheme and an external independent test dataset. On an external test dataset, our proposed method gives a maximum Pearson correlation coefficient of 0.52, in comparison to the state-of-the-art protein structure-based method called MutaBind, which gives a maximum Pearson correlation coefficient of 0.59. Our proposed protein sequence-based method for predicting the change in binding affinity upon mutation has wide applicability and performance comparable to existing protein structure-based methods. We have made PANDA easily accessible through a cloud-based web server and Python code, available at https://sites.google.com/view/wajidarshad/software and https://github.com/wajidarshad/panda, respectively.

RevDate: 2021-06-18

Laske TG, Garshelis DL, Iles TL, et al (2021)

An engineering perspective on the development and evolution of implantable cardiac monitors in free-living animals.

Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 376(1830):20200217.

The latest technologies associated with implantable physiological monitoring devices can record multiple channels of data (including: heart rates and rhythms, activity, temperature, impedance and posture), and coupled with powerful software applications, have provided novel insights into the physiology of animals in the wild. This perspective details past challenges and lessons learned from the uses and developments of implanted biologgers designed for human clinical application in our research on free-ranging American black bears (Ursus americanus). In addition, we reference other research by colleagues and collaborators who have leveraged these devices in their work, including: brown bears (Ursus arctos), grey wolves (Canis lupus), moose (Alces alces), maned wolves (Chrysocyon brachyurus) and southern elephant seals (Mirounga leonina). We also discuss the potentials for applications of such devices across a range of other species. To date, the devices described have been used in fifteen different wild species, with publications pending in many instances. We have focused our physiological research on the analyses of heart rates and rhythms and thus special attention will be paid to this topic. We then discuss some major expected step changes such as improvements in sensing algorithms, data storage, and the incorporation of next-generation short-range wireless telemetry. The latter provides new avenues for data transfer, and when combined with cloud-based computing, it not only provides means for big data storage but also the ability to readily leverage high-performance computing platforms using artificial intelligence and machine learning algorithms. These advances will dramatically increase both data quantity and quality and will facilitate the development of automated recognition of extreme physiological events or key behaviours of interest in a broad array of environments, thus further aiding wildlife monitoring and management. This article is part of the theme issue 'Measuring physiology in free-living animals (Part I)'.

RevDate: 2021-06-12

Macdonald JC, Isom DC, Evans DD, et al (2021)

Digital Innovation in Medicinal Product Regulatory Submission, Review, and Approvals to Create a Dynamic Regulatory Ecosystem-Are We Ready for a Revolution?.

Frontiers in medicine, 8:660808.

The pace of scientific progress over the past several decades within the biological, drug development, and digital realms has been remarkable. The 'omics revolution has enabled a better understanding of the biological basis of disease, unlocking the possibility of new products such as gene and cell therapies which offer novel patient-centric solutions. Innovative approaches to clinical trial designs promise greater efficiency, and in recent years scientific collaborations and consortia have been developing novel approaches to leverage new sources of evidence such as real-world data, patient experience data, and biomarker data. Alongside this there have been great strides in digital innovation. Cloud computing has become mainstream, and the internet of things and blockchain technology have become a reality. These examples of transformation stand in sharp contrast to the current inefficient approach to regulatory submission, review, and approval of medicinal products. This process has not fundamentally changed since the beginning of medicine regulation in the late 1960s. Fortunately, progressive initiatives are emerging that will enrich and streamline regulatory decision making and deliver patient-centric therapies, if they are successful in transforming the current transactional construct and harnessing scientific and technological advances. Such a radical transformation will not be simple for either regulatory authorities or company sponsors, nor will progress be linear. We examine the shortcomings of the current system with its entrenched and variable business processes, offer examples of progress as catalysts for change, and make the case for a new cloud-based model. To optimize navigation toward this reality, we identify implications and regulatory design questions which must be addressed. We conclude that a new model is possible and is slowly emerging through cumulative change initiatives that question, challenge, and redesign best practices, roles, and responsibilities, and that this must be combined with adaptation of behaviors and acquisition of new skills.

RevDate: 2021-06-17

Abdel-Kader RF, El-Sayad NE, RY Rizk (2021)

Efficient energy and completion time for dependent task computation offloading algorithm in industry 4.0.

PloS one, 16(6):e0252756.

Rapid technological development has revolutionized the industrial sector. The Internet of Things (IoT) started to appear in many fields, such as health care and smart cities. A few years later, IoT was adopted by industry, leading to what is called Industry 4.0. In this paper, a cloud-assisted fog-networking architecture is implemented in an IoT environment with a three-layer network. An energy- and completion-time-efficient dependent-task computation offloading (ET-DTCO) algorithm is proposed; it considers two quality-of-service (QoS) parameters, energy efficiency and completion time, when offloading dependent tasks in Industry 4.0. The proposed solution employs the Firefly algorithm to optimize the selection of the offloading computing mode and determine the optimal choice for performing tasks locally or offloading them to the fog or cloud, taking task dependency into account. Moreover, the proposed algorithm is compared with existing techniques. Simulation results showed that the proposed ET-DTCO algorithm outperforms other offloading algorithms in minimizing energy consumption and completion time while enhancing the overall efficiency of the system.
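To make the optimization target concrete, the sketch below shows a toy weighted cost over energy and completion time for each execution mode (local, fog, cloud). The weights and numbers are hypothetical, and the real ET-DTCO algorithm additionally uses the Firefly algorithm and task-dependency constraints that are not reproduced here.

    # Illustrative cost model for choosing where to execute a task:
    # locally, on a fog node, or in the cloud. Values are hypothetical.
    def offload_cost(energy_j, completion_s, w_energy=0.5, w_time=0.5):
        """Weighted QoS cost combining energy consumption and completion time."""
        return w_energy * energy_j + w_time * completion_s

    task = {"local": (4.0, 2.5), "fog": (1.2, 1.0), "cloud": (0.8, 1.8)}  # (J, s)
    best = min(task, key=lambda mode: offload_cost(*task[mode]))
    print("execute on:", best)  # the mode with the lowest weighted cost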

RevDate: 2021-06-07

Kapitonov A, Lonshakov S, Bulatov V, et al (2021)

Robot-as-a-Service: From Cloud to Peering Technologies.

Frontiers in robotics and AI, 8:560829.

This article is devoted to the historical overview of the Robot-as-a-Service concept. Several major scientific publications on the development of Robot-as-a-Service systems based on a service-oriented paradigm are considered. Much attention is paid to the analysis of a centralized approach in the development using cloud computing services and the search for the limitations of this approach. As a result, general conclusions on the reviewed publications are given, as well as the authors' own vision of Robot-as-a-Service systems based on the concept of robot economics.

RevDate: 2021-06-07

Li F, Shankar A, B Santhosh Kumar (2021)

Fog-Internet of things-assisted multi-sensor intelligent monitoring model to analyse the physical health condition.

Technology and health care : official journal of the European Society for Engineering and Medicine pii:THC213009 [Epub ahead of print].

BACKGROUND: Internet of Things (IoT) technology provides a tremendous and structured solution to tackle the service delivery aspects of healthcare in terms of mobile health and remote patient tracking. In medical monitoring applications, IoT and cloud computing serve as assistants in the health sector and play an incredibly significant role. Health professionals and technicians have built an excellent platform for people with various illnesses, leveraging principles of wearable technology, wireless channels, and other remote devices for low-cost healthcare monitoring.

OBJECTIVE: This paper proposed the Fog-IoT-assisted multisensor intelligent monitoring model (FIoT-MIMM) for analyzing the patient's physical health condition.

METHOD: The proposed system uses a multisensor device for collecting biometric and medical monitoring data. The main point is to continually generate emergency alerts from the fog system to users' mobile phones. Temporal information from the fog layer is used for precautionary steps and suggestions concerning patients' health.

RESULTS: Experimental findings show that the proposed FIoT-MIMM model has a shorter response time and higher accuracy in determining a patient's condition than other existing methods. Furthermore, decision making based on real-time healthcare information further improves the utility of the suggested model.

RevDate: 2021-06-07

Cui M, Baek SS, Crespo RG, et al (2021)

Internet of things-based cloud computing platform for analyzing the physical health condition.

Technology and health care : official journal of the European Society for Engineering and Medicine pii:THC213003 [Epub ahead of print].

BACKGROUND: Health monitoring is important for early disease diagnosis and reduces discomfort and treatment expenses, which is very relevant in terms of prevention. The early diagnosis and treatment of multiple conditions will radically improve solutions for patients' healthcare. The primary goal of this work is a concept model for a real-time patient tracking system. The Internet of Things (IoT) has made health systems accessible for programs based on the value of patient health.

OBJECTIVE: In this paper, an IoT-based cloud computing framework for patient health monitoring (IoT-CCPHM) is proposed for effective monitoring of patients.

METHOD: The emerging connected sensors and IoT devices monitor and test the cardiac rate, oxygen saturation percentage, body temperature, and the patient's eye movement. The collected data are used in the cloud database to evaluate the patient's health, and the results of all measurements are stored. The IoT-CCPHM ensures that the medical record is processed on the cloud servers.

RESULTS: The experimental results show that patient health monitoring is a reliable way to improve health effectively.

RevDate: 2021-06-28

Lin Z, Zou J, Liu S, et al (2021)

A Cloud Computing Platform for Scalable Relative and Absolute Binding Free Energy Predictions: New Opportunities and Challenges for Drug Discovery.

Journal of chemical information and modeling, 61(6):2720-2732.

Free energy perturbation (FEP) has become widely used in drug discovery programs for binding affinity prediction between candidate compounds and their biological targets. However, limitations of FEP applications also exist, including, but not limited to, high cost, long waiting times, limited scalability, and limited breadth of application scenarios. To overcome these problems, we have developed XFEP, a scalable cloud computing platform for both relative and absolute free energy predictions using optimized simulation protocols. XFEP enables large-scale FEP calculations in a more efficient, scalable, and affordable way; for example, the evaluation of 5000 compounds can be performed in 1 week using 50-100 GPUs with a computing cost roughly equivalent to the cost of synthesizing only one new compound. By combining these capabilities with artificial intelligence techniques for goal-directed molecule generation and evaluation, new opportunities can be explored for FEP applications in the drug discovery stages of hit identification, hit-to-lead, and lead optimization, based not only on structure exploitation within a given chemical series but also on the evaluation and comparison of completely unrelated molecules during structure exploration in a larger chemical space. XFEP provides the basis for scalable FEP applications to become more widely used in drug discovery projects and to speed up the drug discovery process from hit identification to preclinical candidate compound nomination.
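As a back-of-the-envelope check on the throughput quoted above (5000 compounds in one week on 50-100 GPUs), the per-GPU daily throughput works out as below; the calculation simply restates the abstract's figures.

    # Back-of-the-envelope throughput from the figures quoted above:
    # 5000 compounds in 1 week on 50-100 GPUs.
    compounds, days = 5000, 7
    for gpus in (50, 100):
        per_gpu_per_day = compounds / (gpus * days)
        print(f"{gpus} GPUs -> ~{per_gpu_per_day:.1f} compounds per GPU per day")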

RevDate: 2021-06-05

Heidari A, N Jafari Navimipour (2021)

A new SLA-aware method for discovering the cloud services using an improved nature-inspired optimization algorithm.

PeerJ. Computer science, 7:e539.

Cloud computing is one of the most important computing patterns, using a pay-as-you-go manner to process data and execute applications. Therefore, numerous enterprises are migrating their applications to cloud environments. Not only do intensive applications deal with enormous quantities of data, but they also frequently demonstrate compute-intensive properties. The dynamicity, coupled with the ambiguity between marketed resources and resource requirement queries from users, remains an important issue that hampers efficient discovery in a cloud environment. Cloud service discovery becomes a complex problem because of the increase in network size and complexity. Complexity and network size keep increasing dynamically, making it a complex NP-hard problem that requires effective service discovery approaches. One of the most famous cloud service discovery methods is the Ant Colony Optimization (ACO) algorithm; however, it suffers from a load-balancing problem among the discovered nodes. If the workload balance is inefficient, it limits the use of resources. This paper addresses this problem by applying an Inverted Ant Colony Optimization (IACO) algorithm for load-aware service discovery in cloud computing. The IACO considers repulsion by the pheromones instead of attraction. We design a model for service discovery in the cloud environment to overcome the traditional shortcomings. Numerical results demonstrate that the proposed mechanism yields an efficient service discovery method. The algorithm is simulated using the CloudSim simulator, and the results show better performance. Reduced energy consumption, lower response times, and fewer Service Level Agreement (SLA) violations in cloud environments are the advantages of the proposed method.
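The inversion at the heart of IACO, treating pheromone as repulsion so that recently loaded nodes become less attractive, can be pictured with the toy selection rule below. The weighting function and pheromone values are illustrative assumptions, not the authors' implementation.

    # Toy sketch of "inverted" pheromone: nodes with more pheromone (recent load)
    # become LESS likely to be chosen, spreading discovery across nodes.
    import random

    pheromone = {"node_a": 0.2, "node_b": 1.5, "node_c": 0.7}  # hypothetical loads

    def pick_node(pheromone, beta=1.0):
        # Repulsion: selection probability decreases with pheromone level.
        weights = {n: 1.0 / (1.0 + p) ** beta for n, p in pheromone.items()}
        total = sum(weights.values())
        r, acc = random.random() * total, 0.0
        for node, w in weights.items():
            acc += w
            if r <= acc:
                return node

    chosen = pick_node(pheromone)
    pheromone[chosen] += 0.5  # deposit pheromone on the chosen (now busier) node
    print("discovered service on:", chosen)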

RevDate: 2021-06-04

Ali A, Ahmed M, Khan A, et al (2021)

VisTAS: blockchain-based visible and trusted remote authentication system.

PeerJ. Computer science, 7:e516 pii:cs-516.

The information security domain focuses on security needs at all levels in a computing environment, whether in the Internet of Things, Cloud Computing, Cloud of Things, or any other implementation. Data, devices, services, or applications and communication are required to be protected by information security shields at all levels and in all working states. Remote authentication is required to perform different administrative operations in an information system, and administrators have full access to the system and may pose insider threats. Superusers and administrators are the most trusted persons in an organisation. "Trust but verify" is an approach to keeping an eye on the superusers and administrators. Distributed ledger technology (blockchain-based data storage) is an immutable data storage scheme and provides a built-in facility to share statistics among peers. Distributed ledgers are proposed to provide visible security and non-repudiation by securely recording administrators' authentication requests. The presence of security, privacy, and accountability measures establishes trust among its stakeholders. Securing information in an electronic data processing system is challenging, i.e., providing services and access control for resources to only legitimate users. Authentication plays a vital role in a system's security; therefore, authentication and identity management are the key subjects for providing information security services. The leading causes of information security breaches are failures of identity management/authentication systems and insider threats. In this regard, visible security measures have more deterrence than other schemes. In this paper, an authentication scheme, "VisTAS", has been introduced, which provides visible security and trusted authentication services to the tenants and keeps the records in the blockchain.

RevDate: 2021-06-05

Cambronero ME, Bernal A, Valero V, et al (2021)

Profiling SLAs for cloud system infrastructures and user interactions.

PeerJ. Computer science, 7:e513.

Cloud computing has emerged as a cutting-edge technology which is widely used by both private and public institutions, since it eliminates the capital expense of buying, maintaining, and setting up both hardware and software. Clients pay for the services they use, under the so-called Service Level Agreements (SLAs), which are the contracts that establish the terms and costs of the services. In this paper, we propose the CloudCost UML profile, which allows the modeling of cloud architectures and the users' behavior when they interact with the cloud to request resources. We then investigate how to increase the profits of cloud infrastructures by using price schemes. For this purpose, we distinguish between two types of users in the SLAs: regular and high-priority users. Regular users do not require a continuous service, so they can wait to be attended to. In contrast, high-priority users require a constant and immediate service, so they pay a greater price for their services. In addition, a computer-aided design tool, called MSCC (Modeling SLAs Cost Cloud), has been implemented to support the CloudCost profile, which enables the creation of specific cloud scenarios, as well as their edition and validation. Finally, we present a complete case study to illustrate the applicability of the CloudCost profile, thus making it possible to draw conclusions about how to increase the profits of the cloud infrastructures studied by adjusting the different cloud parameters and the resource configuration.
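The two SLA classes can be pictured as a simple price schedule in which high-priority users pay a premium for constant, immediate service; the rates in the sketch below are purely illustrative and are not taken from the CloudCost profile.

    # Illustrative SLA price schedule; all rates are hypothetical.
    RATES = {"regular": 0.05, "high_priority": 0.12}  # $ per resource-hour

    def service_cost(user_class: str, resource_hours: float) -> float:
        return RATES[user_class] * resource_hours

    print(service_cost("regular", 100))        # 5.0  - may wait to be attended to
    print(service_cost("high_priority", 100))  # 12.0 - constant, immediate service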

RevDate: 2021-06-14

Tropea M, De Rango F, Nevigato N, et al (2021)

SCARE: A Novel Switching and Collision Avoidance pRocEss for Connected Vehicles Using Virtualization and Edge Computing Paradigm.

Sensors (Basel, Switzerland), 21(11):.

In this paper, collision avoidance systems based on MEC in a VANET environment are proposed and investigated. Microservices at the edge are considered to support service continuity in vehicle communication and advertising. The proposed system makes use of cloud and edge computing, allowing communication to switch from the edge to the cloud server and vice versa when possible, in order to guarantee the required constraints and balance the communication load among the servers. Simulation results were used to evaluate the performance of three mechanisms: the first considering only the edge with load balancing, the second using edge/cloud switching, and the third using the edge with load balancing and collision avoidance advertising.

RevDate: 2021-06-15

Li DC, Huang CT, Tseng CW, et al (2021)

Fuzzy-Based Microservice Resource Management Platform for Edge Computing in the Internet of Things.

Sensors (Basel, Switzerland), 21(11):.

Edge computing exhibits the advantages of real-time operation, low latency, and low network cost. It has become a key technology for realizing smart Internet of Things applications. Microservices are being used by an increasing number of edge computing networks because of their sufficiently small code, reduced program complexity, and flexible deployment. However, edge computing has more limited resources than cloud computing, and thus edge computing networks have higher requirements for the overall resource scheduling of running microservices. Accordingly, the resource management of microservice applications in edge computing networks is a crucial issue. In this study, we developed and implemented a microservice resource management platform for edge computing networks. We designed a fuzzy-based microservice computing resource scaling (FMCRS) algorithm that can dynamically control the resource expansion scale of microservices. We proposed and implemented two microservice resource expansion methods based on the resource usage of edge network computing nodes. We conducted experimental analysis in six scenarios, and the results showed that the designed microservice resource management platform can reduce the response time for microservice resource adjustments and dynamically expand microservices horizontally and vertically. Compared with other state-of-the-art microservice resource management methods, FMCRS can reduce sudden surges in overall network resource allocation, and thus it is more suitable for the edge computing microservice management environment.
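A fuzzy-rule-based scaling decision of the general kind FMCRS describes might look like the sketch below, where overlapping membership functions for "low" and "high" CPU utilisation are combined into a scale-in/scale-out signal. The membership functions, thresholds, and rule set are assumptions for illustration, not the published algorithm.

    # Rough sketch of a fuzzy scaling decision for a microservice based on CPU
    # utilisation. Membership functions and rules are hypothetical.
    def mu_low(u):   return max(0.0, min(1.0, (0.6 - u) / 0.6))
    def mu_high(u):  return max(0.0, min(1.0, (u - 0.4) / 0.6))

    def scale_signal(cpu_util):
        """Return a value in [-1, +1]: negative = scale in, positive = scale out."""
        low, high = mu_low(cpu_util), mu_high(cpu_util)
        # Weighted average of the rule outputs "scale in" (-1) and "scale out" (+1).
        return (high * 1.0 + low * -1.0) / (high + low)

    for util in (0.2, 0.5, 0.9):
        print(util, "->", round(scale_signal(util), 2))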

RevDate: 2021-06-15
CmpDate: 2021-06-07

Botez R, Costa-Requena J, Ivanciu IA, et al (2021)

SDN-Based Network Slicing Mechanism for a Scalable 4G/5G Core Network: A Kubernetes Approach.

Sensors (Basel, Switzerland), 21(11): pii:s21113773.

Managing the large volumes of IoT and M2M traffic requires the evaluation of the scalability and reliability for all the components in the end-to-end system. This includes connectivity, mobile network functions, and application or services receiving and processing the data from end devices. Firstly, this paper discusses the design of a containerized IoT and M2M application and the mechanisms for delivering automated scalability and high availability when deploying it in: (1) the edge using balenaCloud; (2) the Amazon Web Services cloud with EC2 instances; and (3) the dedicated Amazon Web Services IoT service. The experiments showed that there are no significant differences between edge and cloud deployments regarding resource consumption. Secondly, the solutions for scaling the 4G/5G network functions and mobile backhaul that provide the connectivity between devices and IoT/M2M applications are analyzed. In this case, the scalability and high availability of the 4G/5G components are provided by Kubernetes. The experiments showed that our proposed scaling algorithm for network slicing managed with SDN guarantees the necessary radio and network resources for end-to-end high availability.

RevDate: 2021-07-01
CmpDate: 2021-07-01

Shoeibi A, Khodatars M, Ghassemi N, et al (2021)

Epileptic Seizures Detection Using Deep Learning Techniques: A Review.

International journal of environmental research and public health, 18(11):.

A variety of screening approaches have been proposed to diagnose epileptic seizures, using electroencephalography (EEG) and magnetic resonance imaging (MRI) modalities. Artificial intelligence encompasses a variety of areas, and one of its branches is deep learning (DL). Before the rise of DL, conventional machine learning algorithms involving feature extraction were used. This limited their performance to the ability of those handcrafting the features. In DL, however, the extraction of features and classification are entirely automated. The advent of these techniques in many areas of medicine, such as the diagnosis of epileptic seizures, has led to significant advances. In this study, a comprehensive overview of works focused on automated epileptic seizure detection using DL techniques and neuroimaging modalities is presented. Various methods proposed to diagnose epileptic seizures automatically using EEG and MRI modalities are described. In addition, rehabilitation systems developed for epileptic seizures using DL are analyzed, and a summary is provided. The rehabilitation tools include cloud computing techniques and the hardware required for implementation of DL algorithms. The important challenges in the accurate automated detection of epileptic seizures using DL with EEG and MRI modalities are discussed. The advantages and limitations of employing DL-based techniques for epileptic seizure diagnosis are presented. Finally, the most promising DL models proposed and possible future work on automated epileptic seizure detection are delineated.

RevDate: 2021-06-15
CmpDate: 2021-06-07

Rashed EA, A Hirata (2021)

One-Year Lesson: Machine Learning Prediction of COVID-19 Positive Cases with Meteorological Data and Mobility Estimate in Japan.

International journal of environmental research and public health, 18(11):.

With the wide spread of COVID-19 and the corresponding negative impact on different aspects of life, it becomes important to understand ways to deal with the pandemic as a part of daily routine. After a year of the COVID-19 pandemic, it has become obvious that different factors, including meteorological factors, influence the speed at which the disease spreads and the potential fatalities. However, the impact of each factor on the speed at which COVID-19 is spreading remains controversial. Accurate forecasting of potential positive cases may lead to better management of healthcare resources and provide guidelines for government policies in terms of the action required within an effective timeframe. Recently, Google Cloud has provided online COVID-19 forecasting data for the United States and Japan, which help in predicting future situations on a state/prefecture scale and are updated daily. In this study, we propose a deep learning architecture to predict the spread of COVID-19 considering various factors, such as meteorological data and public mobility estimates, and apply it to data collected in Japan to demonstrate its effectiveness. The proposed model was constructed using a neural network architecture based on a long short-term memory (LSTM) network. The model consists of multi-path LSTM layers that are trained using time-series meteorological data and public mobility data obtained from open-source data. The model was tested using different time frames, and the results were compared to Google Cloud forecasts. Public mobility is a dominant factor in estimating new positive cases, whereas meteorological data improve the accuracy of the estimates. The average relative error of the proposed model ranged from 16.1% to 22.6% in major regions, which is a significant improvement compared with Google Cloud forecasting. This model can be used to provide public awareness regarding the morbidity risk of the COVID-19 pandemic in a feasible manner.
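A multi-input LSTM of the kind described, with one path for meteorological time series and one for mobility, might be assembled as in the sketch below. The input shapes, layer sizes, and loss are assumptions for illustration, not the authors' published architecture.

    # Minimal sketch of a two-path LSTM for case-count forecasting.
    # Input shapes, units, and training setup are assumptions for illustration.
    import tensorflow as tf

    timesteps = 14
    met_in = tf.keras.Input(shape=(timesteps, 4), name="meteorological")
    mob_in = tf.keras.Input(shape=(timesteps, 1), name="mobility")

    met_path = tf.keras.layers.LSTM(32)(met_in)   # path for weather features
    mob_path = tf.keras.layers.LSTM(32)(mob_in)   # path for mobility estimates

    merged = tf.keras.layers.concatenate([met_path, mob_path])
    out = tf.keras.layers.Dense(1, name="new_positive_cases")(merged)

    model = tf.keras.Model(inputs=[met_in, mob_in], outputs=out)
    model.compile(optimizer="adam", loss="mae")
    model.summary()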

RevDate: 2021-06-21
CmpDate: 2021-06-21

Gorgulla C, Çınaroğlu SS, Fischer PD, et al (2021)

VirtualFlow Ants-Ultra-Large Virtual Screenings with Artificial Intelligence Driven Docking Algorithm Based on Ant Colony Optimization.

International journal of molecular sciences, 22(11):.

The docking program PLANTS, which is based on the ant colony optimization (ACO) algorithm, has many advanced features for molecular docking. Among them are multiple scoring functions, the possibility to model explicit displaceable water molecules, and the inclusion of experimental constraints. Here, we add support for PLANTS to VirtualFlow (VirtualFlow Ants), which adds a valuable method for primary virtual screenings and rescoring procedures. Furthermore, we have added support for ligand libraries in the MOL2 format, as well as on-the-fly conversion of ligand libraries in the PDBQT format to the MOL2 format, to endow VirtualFlow Ants with increased flexibility regarding the ligand libraries. The on-the-fly conversion is carried out with Open Babel and the program SPORES. We applied VirtualFlow Ants to a test system involving KEAP1 on the Google Cloud using up to 128,000 CPUs, and the observed scaling behavior is approximately linear. Furthermore, we adjusted several central docking parameters of PLANTS (such as the speed parameter or the number of ants) and screened 10 million compounds for each of the 10 resulting docking scenarios. We analyzed their docking scores and average docking times, which are key factors in virtual screenings. The possibility of carrying out ultra-large virtual screenings with PLANTS via VirtualFlow Ants opens new avenues in computational drug discovery.

RevDate: 2021-06-15
CmpDate: 2021-06-04

Ismail L, Materwala H, A Hennebelle (2021)

A Scoping Review of Integrated Blockchain-Cloud (BcC) Architecture for Healthcare: Applications, Challenges and Solutions.

Sensors (Basel, Switzerland), 21(11):.

Blockchain is a disruptive technology for shaping the next era of a healthcare system striving for efficient and effective patient care, thanks to its peer-to-peer, secure, and transparent characteristics. On the other hand, cloud computing made its way into the healthcare system thanks to its elasticity and cost efficiency. However, cloud-based systems fail to provide a secured and private patient-centric cohesive view to multiple healthcare stakeholders. In this situation, blockchain provides solutions to address the security and privacy concerns of the cloud because of its decentralization feature combined with data security and privacy, while the cloud provides solutions to the blockchain scalability and efficiency challenges. Therefore, a novel paradigm of blockchain-cloud integration (BcC) emerges for the domain of healthcare. In this paper, we provide an in-depth analysis of BcC integration for the healthcare system, to give readers the motivations behind the emergence of this new paradigm and to introduce a classification of existing architectures and their applications for better healthcare. We then review the development platforms and services and highlight the research challenges for the integrated BcC architecture, possible solutions, and future research directions. The results of this paper will be useful for the healthcare industry to design and develop a data management system for better patient care.

RevDate: 2021-06-15
CmpDate: 2021-06-02

Krzysztoń M, E Niewiadomska-Szynkiewicz (2021)

Intelligent Mobile Wireless Network for Toxic Gas Cloud Monitoring and Tracking.

Sensors (Basel, Switzerland), 21(11):.

Intelligent wireless networks that comprise self-organizing autonomous vehicles equipped with punctual sensors and radio modules support many hostile and harsh environment monitoring systems. This work's contribution shows the benefits of applying such networks to estimate the boundaries of clouds created by hazardous toxic substances heavier than air when accidentally released into the atmosphere. The paper addresses issues concerning the design of sensing networks, focusing on a computing scheme for online motion trajectory calculation and data exchange. A three-stage approach that incorporates three algorithms for calculating the displacement of sensing devices in a collaborative network according to the current task, namely exploration and gas cloud detection, boundary detection and estimation, and tracking the evolving cloud, is presented. A network connectivity-maintaining virtual force mobility model is used to calculate subsequent sensor positions, and multi-hop communication is used for data exchange. The main focus is on the efficient tracking of the cloud boundary. The proposed sensing scheme is sensitive to crucial mobility model parameters. The paper presents five procedures for calculating the optimal values of these parameters. In contrast to widely used techniques, the presented approach to gas cloud monitoring does not calculate sensors' displacements based on exact values of gas concentration and concentration gradients. The sensor readings are reduced to two values: the gas concentration is either below or greater than the safe value. The utility and efficiency of the presented method were justified through extensive simulations, giving encouraging results. The test cases were carried out on several scenarios with regular and irregular shapes of clouds generated using a widely used box model that describes heavy gas dispersion in the atmospheric air. The simulation results demonstrate that using only a rough measurement indicating that the threshold concentration value was exceeded can detect and efficiently track a gas cloud boundary. This makes the sensing system less sensitive to the quality of the gas concentration measurement. Thus, it can be easily used to detect real phenomena. Significant results are recommendations on selecting procedures for computing mobility model parameters while tracking clouds with different shapes and determining optimal values of these parameters for convex and nonconvex cloud boundaries.
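The connectivity-maintaining virtual force idea, where neighbours farther than a desired distance attract a node and closer ones repel it, can be sketched as below; the force constants and two-dimensional set-up are illustrative assumptions, and the boundary-seeking term driven by the binary gas reading is omitted for brevity.

    # Toy virtual-force displacement for one node: neighbours farther than the
    # desired distance attract, closer ones repel, preserving connectivity.
    # Constants are illustrative assumptions, not the paper's parameter values.
    import numpy as np

    def virtual_force(pos, neighbors, d_desired=10.0, k=0.2):
        force = np.zeros(2)
        for n in neighbors:
            diff = np.asarray(n) - pos
            dist = np.linalg.norm(diff)
            if dist > 0:
                # Positive (attractive) when dist > d_desired, negative (repulsive) otherwise.
                force += k * (dist - d_desired) * diff / dist
        return force

    pos = np.array([0.0, 0.0])
    neighbors = [[12.0, 0.0], [0.0, 6.0]]
    print("next position:", pos + virtual_force(pos, neighbors))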

RevDate: 2021-06-15

Tufail A, Namoun A, Sen AAA, et al (2021)

Moisture Computing-Based Internet of Vehicles (IoV) Architecture for Smart Cities.

Sensors (Basel, Switzerland), 21(11):.

Recently, the concept of combining 'things' on the Internet to provide various services has gained tremendous momentum. Such a concept has also impacted the automotive industry, giving rise to the Internet of Vehicles (IoV). IoV enables Internet connectivity and communication between smart vehicles and other devices on the network. Shifting the computing towards the edge of the network reduces communication delays and provides various services instantly. However, both distributed (i.e., edge computing) and central computing (i.e., cloud computing) architectures suffer from several inherent issues, such as high latency, high infrastructure cost, and performance degradation. We propose a novel concept of computation, which we call moisture computing (MC), to be deployed slightly away from the edge of the network but below the cloud infrastructure. The MC-based IoV architecture can be used to assist smart vehicles in collaborating to solve traffic monitoring, road safety, and management issues. Moreover, the MC can be used to dispatch emergency and roadside assistance in case of incidents and accidents. In contrast to the cloud, which covers a broader area, the MC provides smart vehicles with critical information with lower delay. We argue that the MC can help reduce infrastructure costs efficiently since it requires a medium-scale data center with moderate resources to cover a wider area compared to small-scale data centers in edge computing and large-scale data centers in cloud computing. We performed mathematical analyses to demonstrate that the MC reduces network delays and enhances the response time in contrast to the edge and cloud infrastructure. Moreover, we present a simulation-based implementation to evaluate the computational performance of the MC. Our simulation results show that the total processing time (computation delay and communication delay) is optimized, and delays are minimized in the MC as opposed to the traditional approaches.

RevDate: 2021-06-05

Sim SH, YS Jeong (2021)

Multi-Blockchain-Based IoT Data Processing Techniques to Ensure the Integrity of IoT Data in AIoT Edge Computing Environments.

Sensors (Basel, Switzerland), 21(10): pii:s21103515.

As the development of IoT technologies has progressed rapidly in recent years, most IoT data have been focused on monitoring and control, but the cost of collecting and linking various IoT data keeps increasing, requiring the ability to proactively integrate and analyze the collected IoT data so that cloud servers (data centers) can process them smartly. In this paper, we propose a blockchain-based IoT big data integrity verification technique to ensure the safety of the Third Party Auditor (TPA), which has a role in auditing the integrity of AIoT data. The proposed technique aims to minimize IoT information loss by grouping information and signature keys from IoT devices into multiple blockchains. The proposed technique effectively guarantees the integrity of AIoT data by linking hash values, designated as arbitrary, constant-size blocks, with previous blocks in hierarchical chains. The proposed technique performs synchronization using location information between the central server and IoT devices to keep the cost of maintaining the integrity of IoT information low. In order to easily control a large number of IoT device locations, we perform cross-distributed and blockchain linkage processing under constant rules to manage the load and improve the throughput generated by IoT devices.
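The hash-chaining idea underlying such integrity verification, where each constant-size block stores the hash of its predecessor so that any tampering breaks every later link, can be shown in miniature as follows. The block structure is a simplification for illustration and not the authors' exact multi-blockchain scheme.

    # Miniature hash chain over constant-size IoT data blocks.
    # Any modification of an earlier block changes every later hash.
    import hashlib, json

    def block_hash(prev_hash: str, payload: dict) -> str:
        data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
        return hashlib.sha256(data.encode()).hexdigest()

    chain = []
    prev = "0" * 64  # genesis
    for reading in [{"device": "sensor-1", "t": 1, "value": 21.5},
                    {"device": "sensor-1", "t": 2, "value": 21.7}]:
        h = block_hash(prev, reading)
        chain.append({"prev": prev, "payload": reading, "hash": h})
        prev = h

    # Integrity check: recompute every hash and compare with the stored one.
    ok = all(block_hash(b["prev"], b["payload"]) == b["hash"] for b in chain)
    print("chain intact:", ok)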

RevDate: 2021-06-05
CmpDate: 2021-06-02

Melo GCG, Torres IC, Araújo ÍBQ, et al (2021)

A Low-Cost IoT System for Real-Time Monitoring of Climatic Variables and Photovoltaic Generation for Smart Grid Application.

Sensors (Basel, Switzerland), 21(9):.

Monitoring and data acquisition are essential to recognize the renewable resources available on-site, evaluate electrical conversion efficiency, detect failures, and optimize electrical production. Commercial monitoring systems for photovoltaic systems are generally expensive and closed to modifications. This work proposes a low-cost real-time Internet of Things system for micro and mini photovoltaic generation systems that can monitor DC voltage, DC current, AC power, and seven meteorological variables. The proposed system measures all relevant meteorological variables and directly acquires photovoltaic generation data from the plant (not from the inverter). The system is implemented using open software, connects to the internet without cables, stores data locally and in the cloud, and uses the network time protocol to synchronize the devices' clocks. To the best of our knowledge, no work reported in the literature presents all of these features together. Furthermore, experiments carried out with the proposed system showed good effectiveness and reliability. This system enables fog and cloud computing in a photovoltaic system, creating a time-series measurement data set and enabling the future use of machine learning to create smart photovoltaic systems.

RevDate: 2021-06-05
CmpDate: 2021-06-02

Amoakoh AO, Aplin P, Awuah KT, et al (2021)

Testing the Contribution of Multi-Source Remote Sensing Features for Random Forest Classification of the Greater Amanzule Tropical Peatland.

Sensors (Basel, Switzerland), 21(10):.

Tropical peatlands such as Ghana's Greater Amanzule peatland are highly valuable ecosystems and under great pressure from anthropogenic land use activities. Accurate measurement of their occurrence and extent is required to facilitate sustainable management. A key challenge, however, is the high cloud cover in the tropics that limits optical remote sensing data acquisition. In this work we combine optical imagery with radar and elevation data to optimise land cover classification for the Greater Amanzule tropical peatland. Sentinel-2, Sentinel-1 and Shuttle Radar Topography Mission (SRTM) imagery were acquired and integrated to drive a machine learning land cover classification using a random forest classifier. Recursive feature elimination was used to optimize high-dimensional and correlated feature space and determine the optimal features for the classification. Six datasets were compared, comprising different combinations of optical, radar and elevation features. Results showed that the best overall accuracy (OA) was found for the integrated Sentinel-2, Sentinel-1 and SRTM dataset (S2+S1+DEM), significantly outperforming all the other classifications with an OA of 94%. Assessment of the sensitivity of land cover classes to image features indicated that elevation and the original Sentinel-1 bands contributed the most to separating tropical peatlands from other land cover types. The integration of more features and the removal of redundant features systematically increased classification accuracy. We estimate Ghana's Greater Amanzule peatland covers 60,187 ha. Our proposed methodological framework contributes a robust workflow for accurate and detailed landscape-scale monitoring of tropical peatlands, while our findings provide timely information critical for the sustainable management of the Greater Amanzule peatland.
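The recursive-feature-elimination-plus-random-forest workflow described here can be reproduced in outline with scikit-learn, as sketched below; the feature matrix and labels are random placeholders standing in for the stacked Sentinel-1, Sentinel-2 and SRTM features.

    # Sketch of recursive feature elimination wrapped around a random forest,
    # as used for land-cover classification. Data and labels are dummies.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFECV
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 20))    # stacked optical/radar/elevation features
    y = rng.integers(0, 5, size=600)  # land-cover class labels

    rf = RandomForestClassifier(n_estimators=300, random_state=1)
    selector = RFECV(rf, step=1, cv=StratifiedKFold(5), scoring="accuracy")
    selector.fit(X, y)

    print("optimal number of features:", selector.n_features_)
    print("selected feature mask:", selector.support_)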

RevDate: 2021-06-05
CmpDate: 2021-06-02

Puliafito A, Tricomi G, Zafeiropoulos A, et al (2021)

Smart Cities of the Future as Cyber Physical Systems: Challenges and Enabling Technologies.

Sensors (Basel, Switzerland), 21(10):.

A smart city represents an improvement of today's cities, both functionally and structurally, that strategically utilizes several smart factors, capitalizing on Information and Communications Technology (ICT) to increase the city's sustainable growth and strengthen the city's functions, while ensuring the citizens' enhanced quality of life and health. Cities can be viewed as a microcosm of interconnected "objects" with which citizens interact daily, which represents an extremely interesting example of a cyber physical system (CPS), where the continuous monitoring of a city's status occurs through sensors and processors applied within the real-world infrastructure. Each object in a city can be both the collector and distributor of information regarding mobility, energy consumption, air pollution as well as potentially offering cultural and tourist information. As a consequence, the cyber and real worlds are strongly linked and interdependent in a smart city. New services can be deployed when needed, and evaluation mechanisms can be set up to assess the health and success of a smart city. In particular, the objectives of creating ICT-enabled smart city environments target (but are not limited to) improved city services; optimized decision-making; the creation of smart urban infrastructures; the orchestration of cyber and physical resources; addressing challenging urban issues, such as environmental pollution, transportation management, energy usage and public health; the optimization of the use and benefits of next generation (5G and beyond) communication; the capitalization of social networks and their analysis; support for tactile internet applications; and the inspiration of urban citizens to improve their quality of life. However, the large scale deployment of cyber-physical-social systems faces a series of challenges and issues (e.g., energy efficiency requirements, architecture, protocol stack design, implementation, and security), which requires more smart sensing and computing methods as well as advanced networking and communications technologies to provide more pervasive cyber-physical-social services. In this paper, we discuss the challenges, the state-of-the-art, and the solutions to a set of currently unresolved key questions related to CPSs and smart cities.

RevDate: 2021-06-05
CmpDate: 2021-06-02

Albowarab MH, Zakaria NA, Z Zainal Abidin (2021)

Directionally-Enhanced Binary Multi-Objective Particle Swarm Optimisation for Load Balancing in Software Defined Networks.

Sensors (Basel, Switzerland), 21(10):.

Various aspects of task execution load balancing in Internet of Things (IoT) networks can be optimised using intelligent algorithms provided by software-defined networking (SDN). These load balancing aspects include makespan, energy consumption, and execution cost. While past studies have evaluated load balancing from one or two aspects, none has explored the possibility of simultaneously optimising all aspects, namely reliability, energy, cost, and execution time. For the purposes of load balancing, implementing multi-objective optimisation (MOO) based on meta-heuristic searching algorithms requires assurances that the solution space will be thoroughly explored. Optimising load balancing provides decision makers not only with optimised solutions but also with a rich set of candidate solutions to choose from. Therefore, the purposes of this study were (1) to propose a joint mathematical formulation to solve load balancing challenges in cloud computing and (2) to propose two multi-objective particle swarm optimisation (MP) models: distance angle multi-objective particle swarm optimisation (DAMP) and angle multi-objective particle swarm optimisation (AMP). Unlike existing models that only use crowding distance as a criterion for solution selection, our MP models probabilistically combine both crowding distance and crowding angle. More specifically, we only selected solutions that had more than a 0.5 probability of higher crowding distance and higher angular distribution. In addition, binary variants of the approaches were generated based on a transfer function, denoted binary DAMP (BDAMP) and binary AMP (BAMP). After using MOO mathematical functions to compare our models, BDAMP and BAMP, with the state-of-the-art standard models, BMP, BDMP and BPSO, they were tested using the proposed load balancing model. Both tests proved that our DAMP and AMP models were far superior to the state-of-the-art standard models, MP, crowding distance multi-objective particle swarm optimisation (DMP), and PSO. Therefore, this study enables the incorporation of meta-heuristics in the management layer of cloud networks.

RevDate: 2021-06-05
CmpDate: 2021-06-04

Choi Y, Kim N, Hong S, et al (2021)

Critical Image Identification via Incident-Type Definition Using Smartphone Data during an Emergency: A Case Study of the 2020 Heavy Rainfall Event in Korea.

Sensors (Basel, Switzerland), 21(10):.

In unpredictable disaster scenarios, it is important to recognize the situation promptly and take appropriate response actions. This study proposes a cloud computing-based data collection, processing, and analysis process that employs a crowd-sensing application. Clustering algorithms are used to define the major damage types, and hotspot analysis is applied to effectively filter critical data from crowdsourced data. To verify the utility of the proposed process, it is applied to Icheon-si and Anseong-si, both in Gyeonggi-do, which were affected by heavy rainfall in 2020. The results show that the types of incident at the damaged site were effectively detected, and images reflecting the damage situation could be classified using the application of the geospatial analysis technique. For 5 August 2020, which was close to the date of the event, the images were classified with a precision of 100% at a threshold of 0.4. For 24-25 August 2020, the image classification precision exceeded 95% at a threshold of 0.5, except for the mudslide mudflow in the Yul area. The location distribution of the classified images showed a distribution similar to that of damaged regions in unmanned aerial vehicle images.

RevDate: 2021-06-05
CmpDate: 2021-06-02

Martínez-Gutiérrez A, Díez-González J, Ferrero-Guillén R, et al (2021)

Digital Twin for Automatic Transportation in Industry 4.0.

Sensors (Basel, Switzerland), 21(10):.

Industry 4.0 is the fourth industrial revolution, consisting of the digitalization of processes to facilitate an incremental value chain. Smart Manufacturing (SM) is one of the branches of Industry 4.0, concerning logistics, visual inspection of pieces, optimal organization of processes, machine sensorization, real-time data acquisition and treatment, and virtualization of industrial activities. Among these techniques, Digital Twin (DT) has attracted the research interest of the scientific community in the last few years due to the cost reduction achieved through simulation of the dynamic behaviour of the industrial plant, predicting potential problems in the SM paradigm. In this paper, we propose a new DT design concept based on an external service for the transportation of Automatic Guided Vehicles (AGVs), which have recently been introduced to satisfy Material Requirement Planning in the collaborative industrial plant. We performed real experimentation in two different scenarios through the definition of an Industrial Ethernet platform for the real validation of the DT results obtained. Results show the correlation between the virtual and real experiments carried out in the two scenarios defined in this paper, with an accuracy of 97.95% and 98.82% in the total time of the missions analysed in the DT. Therefore, these results validate the model created for AGV navigation, thus fulfilling the objectives of this paper.

RevDate: 2021-06-05

Sobczak Ł, Filus K, Domański A, et al (2021)

LiDAR Point Cloud Generation for SLAM Algorithm Evaluation.

Sensors (Basel, Switzerland), 21(10):.

With the emerging interest in autonomous driving at levels 4 and 5 comes a necessity to provide accurate and versatile frameworks to evaluate the algorithms used in autonomous vehicles. There is a clear gap in the field of autonomous driving simulators: it covers testing and parameter tuning of SLAM, a key component of autonomous driving systems, frameworks targeting off-road and safety-critical environments, and consideration of the non-ideal nature of real-life sensors, associated phenomena and measurement errors. We created a LiDAR simulator that delivers accurate 3D point clouds in real time. The point clouds are generated based on the sensor placement and the LiDAR type, which can be set using configurable parameters. We evaluate our solution based on a comparison of the results obtained using an actual device, a Velodyne VLP-16, on real-life tracks and the corresponding simulations. We measure the error values obtained using the Google Cartographer SLAM algorithm and the distance between the simulated and real point clouds to verify their accuracy. The results show that our simulation (which incorporates measurement errors and the rolling shutter effect) produces data that can successfully imitate real-life point clouds. Due to dedicated mechanisms, it is compatible with the Robot Operating System (ROS) and can be used interchangeably with data from actual sensors, which enables easy testing, SLAM algorithm parameter tuning, and deployment.

RevDate: 2021-06-25
CmpDate: 2021-06-25

Khamisy-Farah R, Furstenau LB, Kong JD, et al (2021)

Gynecology Meets Big Data in the Disruptive Innovation Medical Era: State-of-Art and Future Prospects.

International journal of environmental research and public health, 18(10):.

Tremendous scientific and technological achievements have been revolutionizing the current medical era, changing the way in which physicians practice their profession and deliver healthcare provisions. This is due to the convergence of various advancements related to digitalization and the use of information and communication technologies (ICTs), ranging from the internet of things (IoT) and the internet of medical things (IoMT) to the fields of robotics, virtual and augmented reality, and massively parallel and cloud computing. Further progress has been made in the fields of additive manufacturing and three-dimensional (3D) printing, sophisticated statistical tools such as big data visualization and analytics (BDVA) and artificial intelligence (AI), the use of mobile and smartphone applications (apps), remote monitoring and wearable sensors, and e-learning, among others. Within this new conceptual framework, big data represents a massive set of data characterized by different properties and features. These can be categorized both from a quantitative and qualitative standpoint, and include data generated from wet-lab and microarrays (molecular big data), databases and registries (clinical/computational big data), imaging techniques (such as radiomics, imaging big data) and web searches (the so-called infodemiology, digital big data). The present review aims to show how big and smart data can revolutionize gynecology by shedding light on female reproductive health, both in terms of physiology and pathophysiology. More specifically, they appear to have potential uses in the field of gynecology to increase its accuracy and precision, stratify patients, provide opportunities for personalized treatment options rather than delivering a package of "one-size-fits-all" healthcare management provisions, and enhance its effectiveness at each stage (health promotion, prevention, diagnosis, prognosis, and therapeutics).

RevDate: 2021-06-05

Jalowiczor J, Rozhon J, M Voznak (2021)

Study of the Efficiency of Fog Computing in an Optimized LoRaWAN Cloud Architecture.

Sensors (Basel, Switzerland), 21(9):.

The technologies of the Internet of Things (IoT) have an increasing influence on our daily lives. The expansion of the IoT is associated with the growing number of IoT devices that are connected to the Internet. As the number of connected devices grows, the demand for speed and data volume also grows. While most IoT network technologies use cloud computing, this solution becomes inefficient for some use-cases. For example, suppose that a company uses an IoT network with several sensors to collect data within a production hall. The company may require sharing only selected data to the public cloud and responding faster to specific events. In the case of a large amount of data, offloading techniques can be utilized to reach higher efficiency. Meeting these requirements is difficult or impossible for solutions adopting cloud computing. The fog computing paradigm addresses these cases by providing data processing closer to end devices. This paper proposes three possible network architectures that adopt fog computing for LoRaWAN, because LoRaWAN is already deployed in many locations and offers long-distance communication with low power consumption. The architecture proposals are further compared in simulations to select the optimal form in terms of total service time. The resulting optimal communication architecture could be deployed to existing LoRaWAN networks with minimal cost and effort for the network operator.

RevDate: 2021-05-31

Spjuth O, Frid J, A Hellander (2021)

The machine learning life cycle and the cloud: implications for drug discovery.

Expert opinion on drug discovery [Epub ahead of print].

Introduction: Artificial intelligence (AI) and machine learning (ML) are increasingly used in many aspects of drug discovery. Larger data sizes and methods such as Deep Neural Networks contribute to challenges in data management, the required software stack, and computational infrastructure. There is an increasing need in drug discovery to continuously re-train models and make them available in production environments. Areas covered: This article describes how cloud computing can aid the ML life cycle in drug discovery. The authors discuss opportunities with containerization and scientific workflows, introduce the concept of MLOps, and describe how it can facilitate reproducible and robust ML modeling in drug discovery organizations. They also discuss ML on private, sensitive and regulated data. Expert opinion: Cloud computing offers a compelling suite of building blocks to sustain the ML life cycle integrated in iterative drug discovery. Containerization and platforms such as Kubernetes, together with scientific workflows, can enable reproducible and resilient analysis pipelines, and the elasticity and flexibility of cloud infrastructures enable scalable and efficient access to compute resources. Drug discovery commonly involves working with sensitive or private data, and cloud computing and federated learning can contribute toward enabling collaborative drug discovery within and between organizations. Abbreviations: AI = Artificial Intelligence; DL = Deep Learning; GPU = Graphics Processing Unit; IaaS = Infrastructure as a Service; K8S = Kubernetes; ML = Machine Learning; MLOps = Machine Learning and Operations; PaaS = Platform as a Service; QC = Quality Control; SaaS = Software as a Service.

RevDate: 2021-06-21

Marchand JR, Pirard B, Ertl P, et al (2021)

CAVIAR: a method for automatic cavity detection, description and decomposition into subcavities.

Journal of computer-aided molecular design, 35(6):737-750.

The accurate description of protein binding sites is essential to the determination of similarity and the application of machine learning methods to relate the binding sites to observed functions. This work describes CAVIAR, a new open source tool for generating descriptors for binding sites, using protein structures in PDB and mmCIF format as well as trajectory frames from molecular dynamics simulations as input. The applicability of CAVIAR descriptors is showcased by computing machine learning predictions of binding site ligandability. The method can also automatically assign subcavities, even in the absence of a bound ligand. The defined subpockets mimic the empirical definitions used in medicinal chemistry projects. It is shown that the experimental binding affinity scales relatively well with the number of subcavities filled by the ligand, with compounds binding to more than three subcavities having nanomolar or better affinities to the target. The CAVIAR descriptors and methods can be used in any machine learning-based investigations of problems involving binding sites, from protein engineering to hit identification. The full software code is available on GitHub and a conda package is hosted on Anaconda cloud.

RevDate: 2021-06-05

Chandawarkar R, P Nadkarni (2021)

Safe clinical photography: best practice guidelines for risk management and mitigation.

Archives of plastic surgery, 48(3):295-304.

Clinical photography is an essential component of patient care in plastic surgery. The use of unsecured smartphone cameras, digital cameras, social media, instant messaging, and commercially available cloud-based storage devices threatens patients' data safety. This paper identifies potential risks of clinical photography and heightens awareness of safe clinical photography. Specifically, we evaluated existing risk-mitigation strategies globally, comparing them to industry standards in similar settings, and formulated a framework for developing a risk-mitigation plan for avoiding data breaches by identifying the safest methods of picture taking, transfer to storage, retrieval, and use, both within and outside the organization. Since threats evolve constantly, the framework must evolve too. Based on a literature search of both PubMed and the web (via Google) with key phrases and child terms (for PubMed), the risks and consequences of data breaches in individual processes in clinical photography are identified. Current clinical-photography practices are described. Lastly, we evaluate current risk-mitigation strategies for clinical photography by examining guidelines from professional organizations, governmental agencies, and non-healthcare industries. Combining lessons learned from the steps above into a comprehensive framework that could contribute to national/international guidelines on safe clinical photography, we provide recommendations for best practice guidelines. It is imperative that best practice guidelines for the simple, safe, and secure capture, transfer, storage, and retrieval of clinical photographs be co-developed through cooperative efforts between providers, hospital administrators, clinical informaticians, IT governance structures, and national professional organizations. This would significantly safeguard patient data security and provide the privacy that patients deserve and expect.

RevDate: 2021-06-15
CmpDate: 2021-06-15

Bowler AL, NJ Watson (2021)

Transfer learning for process monitoring using reflection-mode ultrasonic sensing.

Ultrasonics, 115:106468.

The fourth industrial revolution is set to integrate entire manufacturing processes using industrial digital technologies such as the Internet of Things, Cloud Computing, and machine learning to improve process productivity, efficiency, and sustainability. Sensors collect the real-time data required to optimise manufacturing processes and are therefore a key technology in this transformation. Ultrasonic sensors have the benefits of being low-cost, in-line, non-invasive, and able to operate in opaque systems. Supervised machine learning models can correlate ultrasonic sensor data to useful information about the manufacturing materials and processes. However, this requires a reference measurement of the process material to label each data point for model training. Labelled data is often difficult to obtain in factory environments, and so a method of training models without it is desirable. This work compares two domain adaptation methods to transfer models across processes, so that no labelled data are required to accurately monitor a target process. The two methods compared are a Single Feature transfer learning approach and Transfer Component Analysis using three features. Ultrasonic waveforms are unique to the sensor used, the attachment procedure, and the contact pressure. Therefore, only a small number of transferable features are investigated. Two industrially relevant processes were used as case studies: mixing and cleaning of fouling in pipes. A reflection-mode ultrasonic sensing technique was used, which monitors the sound wave reflected from the interface between the vessel wall and the process material. Overall, the Single Feature method produced the highest prediction accuracies: up to 96.0% and 98.4% for classifying the completion of mixing and cleaning, respectively, and R2 values of up to 0.947 and 0.999 for predicting the time remaining until completion. These results highlight the potential of combining ultrasonic measurements with transfer learning techniques to monitor industrial processes. However, further work is required to study various effects, such as changing sensor location between the source and target domains.
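The Single Feature transfer idea, fitting a model on one comparable feature extracted from labelled source-process waveforms and applying it unchanged to the unlabelled target process, can be sketched as below. The reflected-energy feature, the classifier, and the random stand-in waveforms are illustrative assumptions, not the study's exact procedure.

    # Sketch of single-feature transfer: fit on a labelled source process, apply
    # to an unlabelled target process using one comparable feature.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def reflected_energy(waveforms):
        """One scalar feature per ultrasonic waveform: sum of squared amplitudes."""
        return (waveforms ** 2).sum(axis=1, keepdims=True)

    rng = np.random.default_rng(2)
    source_waves = rng.normal(size=(200, 1024))
    source_labels = rng.integers(0, 2, size=200)   # e.g. mixing complete / not
    target_waves = rng.normal(size=(50, 1024))     # no labels available

    clf = LogisticRegression().fit(reflected_energy(source_waves), source_labels)
    target_pred = clf.predict(reflected_energy(target_waves))
    print(target_pred[:10])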

RevDate: 2021-06-16
CmpDate: 2021-06-16

Miller M, N Zaccheddu (2021)

Light for a Potentially Cloudy Situation: Approach to Validating Cloud Computing Tools.

Biomedical instrumentation & technology, 55(2):63-68.

RevDate: 2021-07-13

Sahu ML, Atulkar M, Ahirwal MK, et al (2021)

IoT-enabled cloud-based real-time remote ECG monitoring system.

Journal of medical engineering & technology, 45(6):473-485.

Statistical reports from around the world have deemed cardiovascular diseases (CVDs) the largest contributor to the death count. The electrocardiogram (ECG) is a widely accepted technology for investigating CVDs. The proposed solution is an efficient Internet of Things (IoT)-enabled real-time ECG monitoring system built on cloud computing technologies. The article presents a cloud-centric solution for remote monitoring of CVD. Sensed ECG data are transmitted to an S3 bucket provided by Amazon Web Services (AWS) through a mobile gateway. The AWS cloud uses HTTP and MQTT servers to provide data visualisation, quick responses, and long-lived connections to devices and users. Bluetooth Low Energy (BLE 4.0) is used as the communication protocol for low-power data transmission between the device and the mobile gateway. The proposed system implements filtering algorithms to suppress interference, environmental noise, and motion artefacts. It offers analysis of ECG signals to detect parameters such as heart rate, the PQRST wave, and QRS complex intervals, along with respiration rate. The proposed system prototype has been tested and validated for reliable real-time remote ECG monitoring.
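As a rough illustration of the cloud side of such a pipeline (a hypothetical sketch, not the system described in the article), the snippet below batches ECG samples on a gateway and uploads them to an S3 bucket with boto3. The bucket name, patient identifier, and sample source are placeholders, and AWS credentials are assumed to be configured on the gateway.

import json
import time
import boto3

s3 = boto3.client("s3")          # uses credentials configured on the gateway
BUCKET = "example-ecg-bucket"    # hypothetical bucket name
PATIENT_ID = "patient-001"       # hypothetical identifier

def upload_ecg_batch(samples_mv, sample_rate_hz=250):
    """Package one batch of ECG samples (millivolts) as JSON and upload it to S3."""
    payload = {
        "patient_id": PATIENT_ID,
        "timestamp": time.time(),
        "sample_rate_hz": sample_rate_hz,
        "samples_mv": samples_mv,
    }
    key = f"ecg/{PATIENT_ID}/{int(payload['timestamp'])}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload))
    return key

if __name__ == "__main__":
    # Requires valid AWS credentials and an existing bucket to actually run.
    uploaded_key = upload_ecg_batch([0.0] * 2500)   # ten seconds of dummy samples at 250 Hz
    print("Uploaded:", uploaded_key)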

RevDate: 2021-05-22

Usman Sana M, Z Li (2021)

Efficiency aware scheduling techniques in cloud computing: a descriptive literature review.

PeerJ. Computer science, 7:e509.

In the last decade, cloud computing has become the most in-demand platform for resolving issues and managing requests across the Internet. Cloud computing brings tremendous opportunities to run cost-effective scientific workflows without customers needing to own any infrastructure. It makes available virtually unlimited resources that can be obtained, organised, and used as required. Resource scheduling plays a fundamental role in the well-organised allocation of resources to every task in the cloud environment. However, along with these gains, many challenges must be considered when proposing an efficient scheduling algorithm. An efficient scheduling algorithm must address goals such as scheduling cost, load balancing, makespan, security awareness, energy consumption, reliability, and service-level-agreement maintenance. To achieve these goals, many state-of-the-art scheduling techniques based on hybrid, heuristic, and meta-heuristic approaches have been proposed. This work reviews existing algorithms from the perspective of their scheduling objectives and strategies. We conduct a comparative analysis of existing strategies and the outcomes they provide, and highlight their drawbacks to give insight into further research and open challenges. The findings aid researchers by providing a roadmap for proposing efficient scheduling algorithms.
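For readers unfamiliar with the heuristics this review surveys, the following is a small, self-contained sketch (illustrative only, not drawn from the paper) of one classic makespan-oriented rule, minimum completion time (MCT): each incoming task is assigned to the virtual machine that would finish it earliest. The task lengths and VM speeds are made-up numbers.

def mct_schedule(task_lengths, vm_speeds):
    """Greedy minimum-completion-time scheduling; returns assignments and makespan."""
    finish_times = [0.0] * len(vm_speeds)   # current finish time of each VM
    assignments = []
    for length in task_lengths:
        # Completion time if the task were placed on each VM.
        completions = [finish_times[i] + length / vm_speeds[i] for i in range(len(vm_speeds))]
        best_vm = min(range(len(vm_speeds)), key=lambda i: completions[i])
        finish_times[best_vm] = completions[best_vm]
        assignments.append(best_vm)
    return assignments, max(finish_times)

tasks = [40, 25, 60, 10, 35]   # task lengths (hypothetical work units)
vms = [2.0, 1.0, 1.5]          # VM processing speeds (hypothetical units of work per second)
print(mct_schedule(tasks, vms))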


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the Human Genome Project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July 2012 he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.

Support this website:
Order from Amazon
We will earn a commission.

This is a must read book for anyone with an interest in invasion biology. The full title of the book lays out the author's premise — The New Wild: Why Invasive Species Will Be Nature's Salvation. Not only is species movement not bad for ecosystems, it is the way that ecosystems respond to perturbation — it is the way ecosystems heal. Even if you are one of those who is absolutely convinced that invasive species are actually "a blight, pollution, an epidemic, or a cancer on nature", you should read this book to clarify your own thinking. True scientific understanding never comes from just interacting with those with whom you already agree. R. Robbins

963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version

RJR Picks from Around the Web (updated 11 MAY 2018)