


Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography, created 19 Apr 2021 at 01:32

Cloud Computing

From Wikipedia's entry on Cloud Computing: Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion
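The query above can be reproduced programmatically against NCBI's E-utilities `esearch` endpoint. A minimal sketch (the endpoint and the `db`/`term`/`retmax` parameters are standard E-utilities; the `retmax` value and the helper name are illustrative):

```python
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term: str, retmax: int = 100) -> str:
    """Build an E-utilities esearch URL for a PubMed query term."""
    params = urlencode({"db": "pubmed", "term": term, "retmax": retmax})
    return f"{EUTILS_ESEARCH}?{params}"

query = ('cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) '
         'NOT pmcbook NOT ispreviousversion')
url = build_esearch_url(query)  # fetch this URL to get matching PMIDs as XML
```

Fetching the resulting URL returns the matching PMIDs, which can then be passed to `efetch` for the citation records.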

Citations: The Papers (from PubMed®)


RevDate: 2021-04-16

Li Y, Wei J, Wu B, et al (2021)

Obfuscating encrypted threshold signature algorithm and its applications in cloud computing.

PloS one, 16(4):e0250259 pii:PONE-D-20-35997.

Current cloud computing poses serious obstacles to safeguarding users' data privacy. Because users' sensitive data are submitted in unencrypted form to remote machines owned and operated by untrusted service providers, those data may be leaked by the providers. Program obfuscation offers unique advantages for cloud computing. In this paper, we construct an encrypted threshold signature functionality that securely outsources users' threshold signing rights to a cloud server by applying obfuscation, while revealing no additional sensitive information. The obfuscator is proven to satisfy the average-case virtual black-box property and to be existentially unforgeable under the decisional linear (DLIN) and computational Diffie-Hellman (CDH) assumptions in the standard model. Moreover, we implement our scheme on a laptop using the Java pairing-based cryptography library.

RevDate: 2021-04-16

von Chamier L, Laine RF, Jukkala J, et al (2021)

Democratising deep learning for microscopy with ZeroCostDL4Mic.

Nature communications, 12(1):2276.

Deep Learning (DL) methods are powerful analytical tools for microscopy and can outperform conventional image processing pipelines. Despite the enthusiasm and innovations fuelled by DL technology, the need to access powerful and compatible resources to train DL networks leads to an accessibility barrier that novice users often find difficult to overcome. Here, we present ZeroCostDL4Mic, an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab. ZeroCostDL4Mic allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation (using U-Net and StarDist), object detection (using YOLOv2), denoising (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM), and image-to-image translation (using Label-free prediction - fnet, pix2pix and CycleGAN). Importantly, we provide suitable quantitative tools for each network to evaluate model performance, allowing model optimisation. We demonstrate the application of the platform to study multiple biological processes.

RevDate: 2021-04-12

Meena V, Gorripatti M, T Suriya Praba (2021)

Trust Enforced Computational Offloading for Health Care Applications in Fog Computing.

Wireless personal communications pii:8285 [Epub ahead of print].

The Internet of Things (IoT) is a network of internet-connected devices that generates huge amounts of data every day. The use of IoT devices such as smart wearables, smartphones, and smart cities is increasing linearly. Health care is one of the primary applications that uses IoT devices today. Data generated in this application may require computation, storage, and data analytics operations, which in turn require a resource-rich environment for remote patient health monitoring. Health-care data are inherently private yet must be readily available to their users, and enforcing both constraints in a cloud environment is a hard task. Fog computing is an emerging architecture that provides computation, storage, control, and network services within the user's proximity. To handle private data, the processing elements in a fog environment must be trustable entities. In this paper we propose a novel Trust Enforced computation ofFLoading technique for trustworthy applications using fOg computiNg (TEFLON). The proposed system comprises two algorithms, an optimal service offloader and trust assessment, for addressing security and trust issues with reduced response time. Simulation results show that the proposed TEFLON framework improves the success rate of fog collaboration with reduced average latency for delay-sensitive applications and ensures trust for trustworthy applications.
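The two components described above, service offloading and trust assessment, can be illustrated with a toy selection rule: among fog nodes whose estimated latency meets the application's deadline, pick the one with the highest trust score, and fall back to the cloud otherwise. This is a hypothetical sketch, not TEFLON's actual algorithms; the node fields and thresholds are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FogNode:
    name: str
    trust: float       # 0.0 (untrusted) .. 1.0 (fully trusted)
    latency_ms: float  # estimated response time

def offload_target(nodes, deadline_ms, min_trust=0.5) -> Optional[FogNode]:
    """Pick the most trusted fog node meeting the latency deadline.

    Returning None means no node qualifies, i.e. fall back to the cloud.
    """
    candidates = [n for n in nodes
                  if n.latency_ms <= deadline_ms and n.trust >= min_trust]
    return max(candidates, key=lambda n: n.trust, default=None)

nodes = [FogNode("fog-a", 0.90, 40.0),
         FogNode("fog-b", 0.95, 120.0),   # too slow for a 50 ms deadline
         FogNode("fog-c", 0.30, 10.0)]    # fast but untrusted
choice = offload_target(nodes, deadline_ms=50.0)  # fog-a
```

A real scheme would update the trust scores from observed node behavior rather than treating them as static.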

RevDate: 2021-04-09

Trenerry B, Chng S, Wang Y, et al (2021)

Preparing Workplaces for Digital Transformation: An Integrative Review and Framework of Multi-Level Factors.

Frontiers in psychology, 12:620766.

The rapid advancement of new digital technologies, such as smart technology, artificial intelligence (AI) and automation, robotics, cloud computing, and the Internet of Things (IoT), is fundamentally changing the nature of work and increasing concerns about the future of jobs and organizations. To keep pace with rapid disruption, companies need to update and transform business models to remain competitive. Meanwhile, the growth of advanced technologies is changing the types of skills and competencies needed in the workplace and demanding a shift in mindset among individuals, teams, and organizations. The recent COVID-19 pandemic has accelerated digitalization trends, while heightening the importance of employee resilience and well-being in adapting to widespread job and technological disruption. Although digital transformation is a new and urgent imperative, there is a long trajectory of rigorous research that can readily be applied to grasp these emerging trends. Recent studies and reviews of digital transformation have primarily focused on the business and strategic levels, with only modest integration of employee-related factors. Our review article seeks to fill these critical gaps by identifying and consolidating key factors important for an organization's overarching digital transformation. We reviewed studies across multiple disciplines and integrated the findings into a multi-level framework. At the individual level, we propose five overarching factors related to effective digital transformation among employees: technology adoption; perceptions and attitudes toward technological change; skills and training; workplace resilience and adaptability; and work-related wellbeing. At the group level, we identify three factors necessary for digital transformation: team communication and collaboration; workplace relationships and team identification; and team adaptability and resilience. Finally, at the organizational level, we propose three factors for digital transformation: leadership; human resources; and organizational culture/climate. Our review of the literature confirms that multi-level factors are important when planning for and embarking on digital transformation, thereby providing a framework for future research and practice.

RevDate: 2021-04-09

Armgarth A, Pantzare S, Arven P, et al (2021)

A digital nervous system aiming toward personalized IoT healthcare.

Scientific reports, 11(1):7757.

Body area networks (BANs), cloud computing, and machine learning are platforms that can potentially enable advanced healthcare outside the hospital. By applying distributed sensors and drug delivery devices on/in our body and connecting them to such communication and decision-making technology, a system for remote diagnostics and therapy is achieved, with additional autoregulation capabilities. Challenges with such autarchic on-body healthcare schemes relate to integrity and safety, and to the interfacing and transduction of electronic signals into biochemical signals, and vice versa. Here, we report a BAN comprising flexible on-body organic bioelectronic sensors and actuators that utilizes two parallel pathways for communication and decision-making. Data recorded from strain sensors detecting body motion are both securely transferred to the cloud for machine learning and improved decision-making, and sent through the body using a secure body-coupled communication protocol to auto-actuate delivery of neurotransmitters, all within seconds. We conclude that highly stable and accurate sensing, from multiple sensors, is needed to enable robust decision-making and limit the frequency of retraining. The holistic platform resembles the self-regulatory properties of the nervous system, i.e., the ability to sense, communicate, decide, and react accordingly, thus operating as a digital nervous system.

RevDate: 2021-04-09

Li Y, Ye H, Ye F, et al (2021)

The Current Situation and Future Prospects of Simulators in Dental Education.

Journal of medical Internet research, 23(4):e23635 pii:v23i4e23635.

The application of virtual reality has become increasingly extensive as the technology has developed. In dental education, virtual reality is mainly used to assist or replace traditional methods of teaching clinical skills in preclinical training for several subjects, such as endodontics, prosthodontics, periodontics, implantology, and dental surgery. The application of dental simulators in teaching can compensate for the deficiencies of traditional teaching methods and reduce the teaching burden, improving convenience for both teachers and students. However, because of the technological limitations of virtual reality and force feedback, dental simulators still have many hardware and software disadvantages that have prevented them from serving as a primary skill-training method in place of traditional dental simulators. In the future, when combined with big data, cloud computing, 5G, and deep learning technology, dental simulators will be able to give students individualized learning assistance, and their functions will be more diverse and suitable for preclinical training. The purpose of this review is to provide an overview of current dental simulators, covering related technologies, advantages and disadvantages, methods of evaluating effectiveness, and future directions for development.

RevDate: 2021-04-08

Li F, Qu Z, R Li (2021)

Medical Cloud Computing Data Processing to Optimize the Effect of Drugs.

Journal of healthcare engineering, 2021:5560691.

In recent years, cloud computing technology has steadily matured. Hadoop, which originated from Apache Nutch, is an open-source cloud computing platform characterized by large scale, virtualization, strong stability, versatility, and scalability. Given the characteristics of unstructured medical images, it is necessary and far-reaching to combine content-based medical image retrieval with the Hadoop cloud platform. This study combines the impact mechanism of senile dementia vascular endothelial cells with cloud computing to construct a corresponding cloud-based image-set retrieval platform. It uses Hadoop's core distributed file system, HDFS, to upload and store the images, stores image feature vectors in HBase, and uses the MapReduce programming model for parallel retrieval, with the nodes cooperating with one another. The results show that the proposed method is effective and can be applied to medical research.
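The retrieval pattern described above — scoring a query feature vector against every stored vector in parallel, then reducing to the best matches — can be sketched in plain Python. The image IDs and feature vectors below are made up; in the paper's setting the map and reduce steps would run as Hadoop tasks over HBase rows rather than over an in-memory dict:

```python
import heapq
from math import dist  # Euclidean distance, Python 3.8+

# toy stand-in for image feature vectors stored in HBase, keyed by image ID
features = {
    "img-001": (0.1, 0.9, 0.3),
    "img-002": (0.8, 0.1, 0.5),
    "img-003": (0.1, 0.8, 0.2),
}

def map_phase(query, table):
    """Map: emit a (distance, image_id) pair for each stored vector."""
    return [(dist(query, vec), img_id) for img_id, vec in table.items()]

def reduce_phase(scored, k=2):
    """Reduce: keep the IDs of the k nearest images."""
    return [img_id for _, img_id in heapq.nsmallest(k, scored)]

top = reduce_phase(map_phase((0.1, 0.82, 0.22), features))  # nearest first
```

Because each map emission depends only on one row, the map phase parallelizes trivially across data nodes, which is the point of pushing retrieval onto the cluster.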

RevDate: 2021-04-06

Iyer TJ, Joseph Raj AN, Ghildiyal S, et al (2021)

Performance analysis of lightweight CNN models to segment infectious lung tissues of COVID-19 cases from tomographic images.

PeerJ. Computer science, 7:e368.

The pandemic of Coronavirus Disease-19 (COVID-19) has spread around the world, causing an existential health crisis. Automated detection of COVID-19 infections in the lungs from Computed Tomography (CT) images offers huge potential in tackling the problem of slow detection and augments conventional diagnostic procedures. However, segmenting COVID-19 from CT scans is problematic due to high variation in the types of infection and low contrast between healthy and infected tissues. Segmenting lung CT scans for COVID-19 demands fast and accurate results; furthermore, due to the pandemic, most of the research community has opted for cloud-based servers such as Google Colab to develop their algorithms. High accuracy can be achieved using deep networks, but prediction time varies as resources are shared among many users, creating the need to compare different lightweight segmentation models. To address this issue, we analyze the segmentation of COVID-19 using four Convolutional Neural Networks (CNNs). The images in our dataset are preprocessed to remove motion artifacts. The four networks are UNet, Segmentation Network (Seg Net), High-Resolution Network (HR Net), and VGG UNet. Trained on our dataset of more than 3,000 images, HR Net was found to be the best-performing network, achieving an accuracy of 96.24% and a Dice score of 0.9127. The analysis shows that lightweight CNN models perform better than other neural network models when segmenting infectious tissue due to COVID-19 from CT slices.
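The Dice score reported above measures the overlap between a predicted mask and the ground truth: Dice = 2|A∩B| / (|A| + |B|). A minimal sketch over binary masks represented as sets of pixel coordinates (the masks here are illustrative):

```python
def dice_score(pred: set, truth: set) -> float:
    """Dice coefficient between two binary masks given as sets of pixel coords."""
    if not pred and not truth:
        return 1.0  # convention: two empty masks match perfectly
    return 2 * len(pred & truth) / (len(pred) + len(truth))

pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}   # predicted infected pixels
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}   # ground-truth infected pixels
score = dice_score(pred, truth)            # 2*3 / (4+4) = 0.75
```

Unlike plain accuracy, Dice ignores the (usually dominant) true-negative background pixels, which is why it is the standard reporting metric for lesion segmentation.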

RevDate: 2021-04-05

Rizwan Ali M, Ahmad F, Hasanain Chaudary M, et al (2021)

Petri Net based modeling and analysis for improved resource utilization in cloud computing.

PeerJ. Computer science, 7:e351 pii:cs-351.

The cloud is a shared pool of systems that provides multiple resources through the Internet; users can access substantial computing power from their own computers. However, with the strong migration of applications to the cloud, more disks and servers are required to store huge volumes of data. Most cloud storage providers replicate full copies of data across multiple data centers to ensure availability, but replication is both a costly process and a waste of energy resources. Erasure codes reduce storage cost by splitting data into n chunks and storing these chunks in n + k different data centers to tolerate k failures, though they incur extra computation cost to regenerate a data object. Cache-A Replica On Modification (CAROM) is a hybrid file system that combines the benefits of replication and erasure codes to reduce access latency and bandwidth consumption. However, no formal analysis of CAROM is available in the literature to validate its performance. To address this issue, this research first presents a colored Petri net based formal model of CAROM, and then presents a formal analysis and simulation to validate the performance of the proposed system. This paper contributes to resource utilization in clouds by presenting a comprehensive formal analysis of CAROM.
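The erasure-coding trade-off described above — n data chunks plus k redundant chunks instead of full replicas — can be illustrated for the simplest case k = 1 with XOR parity: the parity chunk is the XOR of all data chunks, and any single lost chunk is recovered by XOR-ing the parity with the survivors. This is a toy stand-in for the erasure codes the abstract refers to, not CAROM itself:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(chunks):
    """Parity chunk = XOR of all equal-length data chunks (tolerates 1 failure)."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = xor_bytes(parity, c)
    return parity

chunks = [b"clou", b"d st", b"orag"]   # n = 3 data chunks
parity = make_parity(chunks)           # stored in a 4th data center (k = 1)

# the data center holding chunks[1] fails; rebuild it from survivors + parity
recovered = xor_bytes(xor_bytes(chunks[0], chunks[2]), parity)
```

Storage overhead here is (n + k)/n = 4/3 of the original data, versus 3x for triple replication — the cost saving the abstract points at, paid for with the recomputation step on failure.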

RevDate: 2021-04-06

Capuccini M, Larsson A, Carone M, et al (2019)

On-demand virtual research environments using microservices.

PeerJ. Computer science, 5:e232.

The computational demands for scientific applications are continuously increasing. The emergence of cloud computing has enabled on-demand resource allocation. However, relying solely on infrastructure as a service does not achieve the degree of flexibility required by the scientific community. Here we present a microservice-oriented methodology, where scientific applications run in a distributed orchestration platform as software containers, referred to as on-demand, virtual research environments. The methodology is vendor agnostic and we provide an open source implementation that supports the major cloud providers, offering scalable management of scientific pipelines. We demonstrate applicability and scalability of our methodology in life science applications, but the methodology is general and can be applied to other scientific domains.

RevDate: 2021-04-05

Khani H, H Khanmirza (2019)

Randomized routing of virtual machines in IaaS data centers.

PeerJ. Computer science, 5:e211 pii:cs-211.

Cloud computing technology has been a game changer in recent years. Cloud providers promise cost-effective, on-demand computing resources for their users, running users' workloads as virtual machines (VMs) in large-scale data centers consisting of a few thousand physical servers. Cloud data centers face highly dynamic workloads that vary over time, and many short tasks demand quick resource-management decisions. These data centers are large in scale, and workload behavior is unpredictable. Each incoming VM must be assigned to a proper physical machine (PM) in order to balance power consumption against quality of service. Because the scale and agility of cloud data centers are unprecedented, previous approaches fall short. We propose an analytical model for cloud data centers in which the number of PMs is large. In particular, we focus on the assignment of VMs to PMs regardless of their current load. For exponential VM arrivals with generally distributed sojourn times, we calculate the mean power consumption, and then show that the minimum power consumption under a quality-of-service constraint is achieved by randomized assignment of incoming VMs to PMs. Extensive simulation supports the validity of our analytical model.
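The policy analyzed in the paper — place each incoming VM on a PM chosen uniformly at random, without inspecting current load — is easy to sketch; the PM count and VM stream below are arbitrary illustration, not the paper's experimental setup:

```python
import random

def assign_vms(num_vms: int, num_pms: int, rng: random.Random):
    """Uniformly random VM-to-PM placement, ignoring current load."""
    load = [0] * num_pms
    for _ in range(num_vms):
        pm = rng.randrange(num_pms)  # O(1) decision: no load inspection needed
        load[pm] += 1
    return load

rng = random.Random(42)              # seeded for reproducibility
load = assign_vms(10_000, 100, rng)  # loads cluster around 100 VMs per PM
```

The appeal of the randomized policy is exactly this O(1) decision cost, which matters when many short-lived VMs arrive faster than a load-aware scheduler could track.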

RevDate: 2021-04-06

Wan KW, Wong CH, Ip HF, et al (2021)

Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: a comparative study.

Quantitative imaging in medicine and surgery, 11(4):1381-1393.

Background: In recent years, artificial intelligence has become increasingly popular in the medical field, from computer-aided diagnosis (CAD) to patient prognosis prediction. Given that not all healthcare professionals have the expertise required to develop a CAD system, the aim of this study was to investigate the feasibility of using AutoML Vision, a highly automated machine learning model, for future clinical applications by comparing it with commonly used CAD algorithms in the differentiation of benign and malignant breast lesions on ultrasound.

Methods: A total of 895 breast ultrasound images were obtained from two online open-access breast ultrasound image datasets. Traditional machine learning models (comprising seven commonly used CAD algorithms) with three extracted content-based radiomic features (Hu Moments, Color Histogram, Haralick Texture), and a convolutional neural network (CNN) model, were built in Python. AutoML Vision was trained on Google Cloud Platform. Sensitivity, specificity, F1 score, and average precision (AUCPR) were used to evaluate the diagnostic performance of the models. Cochran's Q test was used to evaluate statistical significance across all studied models, and McNemar's test was used as the post-hoc test for pairwise comparisons. The proposed AutoML model was also compared with related studies involving similar medical imaging modalities for characterizing benign or malignant breast lesions.
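The evaluation metrics named above follow directly from confusion-matrix counts; a minimal sketch (the counts below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity (recall), specificity, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of malignant lesions caught
    specificity = tn / (tn + fp)   # fraction of benign lesions cleared
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# e.g. 8 true positives, 2 false positives, 8 true negatives, 2 false negatives
sens, spec, f1 = diagnostic_metrics(tp=8, fp=2, tn=8, fn=2)  # all 0.8
```

AUCPR, the fourth metric in the study, additionally requires ranking predictions by score and averaging precision over recall levels, so it cannot be computed from a single confusion matrix.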

Results: There was a significant difference in diagnostic performance among the studied traditional machine learning classifiers (P<0.05). Random Forest achieved the best performance in the differentiation of benign and malignant breast lesions (accuracy: 90%; sensitivity: 71%; specificity: 100%; F1 score: 0.83; AUCPR: 0.90), which was statistically comparable to the performance of the CNN (accuracy: 91%; sensitivity: 82%; specificity: 96%; F1 score: 0.87; AUCPR: 0.88) and AutoML Vision (accuracy: 86%; sensitivity: 84%; specificity: 88%; F1 score: 0.83; AUCPR: 0.95) based on Cochran's Q test (P>0.05).

Conclusions: In this study, the performance of AutoML Vision was not significantly different from that of Random Forest (the best classifier among the traditional machine learning models) or the CNN. AutoML Vision showed relatively high accuracy, comparable to commonly used classifiers, which may prompt its future application in clinical practice.

RevDate: 2021-04-06

Andreas A, Mavromoustakis CX, Mastorakis G, et al (2021)

Towards an optimized security approach to IoT devices with confidential healthcare data exchange.

Multimedia tools and applications [Epub ahead of print].

Reliable data exchange and efficient image transfer are currently significant research challenges in health care systems. To incentivize data exchange within the Internet of Things (IoT) framework, we need to ensure data sovereignty by facilitating secure data exchange between trusted parties. The security and reliability of data-sharing infrastructure require a community of trust. Therefore, this paper introduces an encryption framework based on data fragmentation, and presents a novel, deterministic grey-scale optical encryption scheme based on fundamental mathematics. The objective is to use encryption as the underlying measure to make the data unintelligible, while exploiting fragmentation to break down sensitive relationships between attributes. Sensitive data are thus distributed across separate data repositories, and decryption and reconstruction use interpolation, given the polynomial coefficients and personal values from the database management system (DBMS). The scheme also aims to ensure the secure acquisition of diagnostic images, micrographs, and all types of medical imagery based on probabilistic approaches. Visual sharing of confidential medical images implements a novel method in which any k - 1 or fewer of the n transparencies cannot reveal the original image.
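The fragmentation-plus-interpolation idea above — data recoverable only when enough fragments and the polynomial structure are known, while k - 1 fragments reveal nothing — is the classic (k, n) threshold construction due to Shamir. A minimal sketch over a small prime field (the prime and parameters are illustrative, and this is the textbook construction, not the paper's optical scheme):

```python
import random

P = 2**13 - 1  # small Mersenne prime; real deployments use a much larger field

def make_shares(secret: int, k: int, n: int, rng: random.Random):
    """Split `secret` into n shares; any k reconstruct it, k-1 reveal nothing."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers f(0) = secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * xm % P
                den = den * (xm - xj) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = make_shares(1234, k=3, n=5, rng=random.Random(7))
recovered = reconstruct(shares[:3])  # == 1234: any 3 of the 5 shares suffice
```

A degree-(k-1) polynomial is determined by k points, so any k shares pin down f and hence f(0); with only k - 1 shares, every candidate secret remains equally consistent.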

RevDate: 2021-04-05

Mijuskovic A, Chiumento A, Bemthuis R, et al (2021)

Resource Management Techniques for Cloud/Fog and Edge Computing: An Evaluation Framework and Classification.

Sensors (Basel, Switzerland), 21(5): pii:s21051832.

Processing IoT applications directly in the cloud may not be the most efficient solution for every IoT scenario, especially for time-sensitive applications. A promising alternative is to use fog and edge computing, which address the issue of managing the large data bandwidth needed by end devices. These paradigms require processing the large amounts of generated data close to the data sources rather than in the cloud. One of the considerations of cloud-based IoT environments is resource management, which typically revolves around resource allocation, workload balance, resource provisioning, task scheduling, and QoS to achieve performance improvements. In this paper, we review resource management techniques that can be applied to cloud, fog, and edge computing. The goal of this review is to provide an evaluation framework of metrics for resource management algorithms aimed at cloud/fog and edge environments. To this end, we first address research challenges in resource management techniques in that domain. We then classify current research contributions to support the construction of such an evaluation framework. One of the main contributions is an overview and analysis of research papers addressing resource management techniques. In conclusion, this review highlights opportunities for using resource management techniques within the cloud/fog/edge paradigm; this practice is still in early development, and barriers remain to be overcome.

RevDate: 2021-04-05

H Hasan M, Abbasalipour A, Nikfarjam H, et al (2021)

Exploiting Pull-In/Pull-Out Hysteresis in Electrostatic MEMS Sensor Networks to Realize a Novel Sensing Continuous-Time Recurrent Neural Network.

Micromachines, 12(3): pii:mi12030268.

The goal of this paper is to provide a novel computing approach that can be used to reduce the power consumption, size, and cost of wearable electronics. To achieve this goal, the use of microelectromechanical systems (MEMS) sensors for simultaneous sensing and computing is introduced. Specifically, by enabling sensing and computing locally at the MEMS sensor node and utilizing the usually unwanted pull-in/pull-out hysteresis, we may eliminate the need for cloud computing and reduce the use of analog-to-digital converters, sampling circuits, and digital processors. As a proof of concept, we show that a simulation model of a network of three commercially available MEMS accelerometers can classify a train of square and triangular acceleration signals inherently using pull-in and release hysteresis. Furthermore, we develop and fabricate a network with finger arrays of parallel plate actuators to facilitate coupling between MEMS devices in the network using actuating assemblies and biasing assemblies, thus bypassing the previously reported coupling challenge in MEMS neural networks.

RevDate: 2021-04-05

Pintavirooj C, Keatsamarn T, T Treebupachatsakul (2021)

Multi-Parameter Vital Sign Telemedicine System Using Web Socket for COVID-19 Pandemics.

Healthcare (Basel, Switzerland), 9(3): pii:healthcare9030285.

Telemedicine has become an increasingly important part of the modern healthcare infrastructure, especially in the present situation with the COVID-19 pandemic. Many cloud platforms have been used intensively for telemedicine; the most popular include PubNub, Amazon Web Services, Google Cloud Platform, and Microsoft Azure. One of the crucial challenges of telemedicine is real-time monitoring of vital signs, for which commercial platforms are far from suitable. The alternative is to design a web-based application exploiting WebSocket. This paper concerns real-time six-parameter vital-sign monitoring using a web-based application. The six vital-sign parameters are electrocardiogram, temperature, plethysmogram, percent oxygen saturation, blood pressure, and heart rate. The parameters are encoded at the web-server site and sent to a client site upon login, where they are decoded into six vital-sign signals. Our proposed multi-parameter vital-sign telemedicine system using WebSocket has successfully monitored the six vital signs remotely over a 4G mobile network with a latency of less than 5 milliseconds.
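The encode/decode step described above can be sketched as a JSON frame carrying the six parameters plus a send timestamp for measuring latency. This is a hypothetical message format invented for illustration; in the paper's system such frames travel over a WebSocket connection from server to client:

```python
import json
import time

def encode_vitals(ecg, temperature, pleth, spo2, bp, heart_rate) -> str:
    """Pack the six vital-sign parameters, plus a send timestamp, into one frame."""
    return json.dumps({
        "ts": time.time(),  # lets the client estimate end-to-end latency
        "ecg": ecg, "temperature": temperature, "pleth": pleth,
        "spo2": spo2, "bp": bp, "heart_rate": heart_rate,
    })

def decode_vitals(frame: str) -> dict:
    return json.loads(frame)

frame = encode_vitals(ecg=[0.1, 0.4, 1.2], temperature=36.8, pleth=[0.5, 0.6],
                      spo2=98, bp="120/80", heart_rate=72)
vitals = decode_vitals(frame)
latency_ms = (time.time() - vitals["ts"]) * 1000  # client-side latency estimate
```

Measuring latency from an embedded timestamp, as sketched here, assumes server and client clocks are synchronized; a production system would account for clock skew.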

RevDate: 2021-04-05
CmpDate: 2021-04-05

Kang S, David DSK, Yang M, et al (2021)

Energy-Efficient Ultrasonic Water Level Detection System with Dual-Target Monitoring.

Sensors (Basel, Switzerland), 21(6): pii:s21062241.

This study presents an ultrasonic water level detection (UWLD) system with an energy-efficient design and dual-target monitoring. Water level monitoring with a non-contact sensor is a suitable method since the sensor is not directly exposed to water. In addition, web-based monitoring using a cloud computing platform is a well-known technique for real-time water level monitoring. However, long-term stable operation of remotely communicating units is an issue for real-time monitoring. Therefore, this paper proposes a UWLD unit with a low-power-consumption design for renewable energy harvesting (e.g., solar), controlling the unit with dual microcontrollers (MCUs) to improve the energy efficiency of the system. In addition, dual targeting of the pavement and streamside is uniquely designed to monitor both urban inundation and stream overflow. The real-time water level data obtained from the proposed UWLD system are analyzed using a water level changing rate (WLCR) and a water level index. The quantified WLCR and water level index at various sampling rates show differing sensitivity to heavy rain.
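A water level changing rate is, at its simplest, the first difference of the level series per sampling interval. The sketch below uses that assumption — the paper may define WLCR differently — and the sample series is invented:

```python
def wlcr(levels, dt_s: float):
    """Water level changing rate: first difference of level readings per second."""
    return [(b - a) / dt_s for a, b in zip(levels, levels[1:])]

# levels in cm, sampled every 60 s during a heavy-rain event
levels = [10.0, 10.5, 12.0, 15.0, 19.5]
rates = wlcr(levels, dt_s=60.0)  # cm/s; the rising rate accelerates
```

A coarser sampling rate smooths these differences out, which is one way the sensitivity-versus-sampling-rate trade-off noted in the abstract arises.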

RevDate: 2021-04-05
CmpDate: 2021-04-05

Sergi I, Montanaro T, Benvenuto FL, et al (2021)

A Smart and Secure Logistics System Based on IoT and Cloud Technologies.

Sensors (Basel, Switzerland), 21(6): pii:s21062231.

Recently, one of the hottest topics in the logistics sector has been the traceability of goods and the monitoring of their condition during transportation. Perishable goods, such as fresh food, have attracted particular attention from researchers, who have already proposed different solutions to guarantee the quality and freshness of food through the whole cold chain. In this regard, the use of Internet of Things (IoT) enabling technologies, and the specific branch called edge computing, is bringing enhancements that allow easy remote, real-time monitoring of transported goods. Because requirements change quickly and researchers face difficulties in proposing new solutions, a fast-prototyping approach can rapidly advance both research and the commercial sector. To ease fast prototyping, different platforms and tools have been proposed in recent years; however, it is difficult to guarantee end-to-end security at all levels through such platforms. For this reason, based on experiments reported in the literature and aiming to support fast prototyping with end-to-end security in the logistics sector, this work presents a solution demonstrating how the Azure Sphere platform, a dedicated hardware device (the MT3620 microcontroller unit), and the Azure Sphere Security Service can be used to build a fast prototype that traces the condition of fresh food during transportation. The proposed solution guarantees end-to-end security and can be exploited by future work in other sectors as well.

RevDate: 2021-04-05

El-Rashidy N, El-Sappagh S, Islam SMR, et al (2021)

Mobile Health in Remote Patient Monitoring for Chronic Diseases: Principles, Trends, and Challenges.

Diagnostics (Basel, Switzerland), 11(4): pii:diagnostics11040607.

Chronic diseases are becoming more widespread. Treatment and monitoring of these diseases require frequent hospital visits, which increases the burden on hospitals and patients. Advancements in wearable sensors and communication protocols are now enriching the healthcare system in ways that will soon reshape healthcare services. Remote patient monitoring (RPM) is the foremost of these advancements. RPM systems are based on the collection of patient vital signs, extracted using invasive and noninvasive techniques and sent in real time to physicians, who can use these data to make the right decision at the right time. The main objectives of this paper are to outline research directions in remote patient monitoring, explain the role of AI in building RPM systems, and provide an overview of the state of the art of RPM, its advantages, its challenges, and its probable future directions. For the literature study, five databases were chosen (ScienceDirect, IEEE Xplore, Springer, PubMed, and science.gov). We followed PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), a standard methodology for systematic reviews and meta-analyses. A total of 56 articles were reviewed based on a combination of selected search terms including RPM, data mining, clinical decision support system, electronic health record, cloud computing, internet of things, and wireless body area network. The results of this study confirm the effectiveness of RPM in improving healthcare delivery, increasing diagnosis speed, and reducing costs. We also present a chronic disease monitoring system as a case study to provide enhanced solutions for RPM.

RevDate: 2021-04-05
CmpDate: 2021-04-05

Lovén L, Lähderanta T, Ruha L, et al (2021)

EDISON: An Edge-Native Method and Architecture for Distributed Interpolation.

Sensors (Basel, Switzerland), 21(7): pii:s21072279.

Spatio-temporal interpolation provides estimates of observations in unobserved locations and time slots. In smart cities, interpolation helps to provide a fine-grained contextual and situational understanding of the urban environment, in terms of both short-term (e.g., weather, air quality, traffic) and long-term (e.g., crime, demographics) spatio-temporal phenomena. Various initiatives improve spatio-temporal interpolation results by including additional data sources such as vehicle-fitted sensors, mobile phones, or the micro weather stations of, for example, smart homes. However, the underlying computing paradigm in such initiatives is predominantly centralized, with all data collected and analyzed in the cloud. This solution does not scale: as the spatial and temporal density of sensor data grows, the required transmission bandwidth and computational capacity become infeasible. To address the scaling problem, we propose EDISON: algorithms for distributed learning and inference, and an edge-native architecture for distributing spatio-temporal interpolation models, their computations, and the observed data vertically and horizontally between the device, edge, and cloud layers. We demonstrate EDISON functionality in a controlled, simulated spatio-temporal setup with 1 M artificial data points. While the main motivation of EDISON is the distribution of the heavy computations, the results show that EDISON also provides an improvement over alternative approaches, reaching at best a 10% smaller RMSE than global interpolation and a 6% smaller RMSE than a baseline distributed approach.
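In its simplest centralized form, the spatial interpolation that EDISON distributes can be illustrated with inverse-distance weighting. The sketch below is a generic illustration only, not the paper's actual algorithm; the function name and sample values are hypothetical.

```python
import math

def idw_interpolate(points, query, power=2):
    """Inverse-distance-weighted estimate at `query` from (x, y, value) samples."""
    num = den = 0.0
    for x, y, value in points:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0:
            return value  # query coincides with an observation
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Three hypothetical sensor readings and one unobserved location
samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
estimate = idw_interpolate(samples, (0.5, 0.5))  # all samples equidistant, so the mean: 20.0
```

An edge-native variant in the spirit of the abstract would partition `samples` across edge nodes and aggregate the partial sums (`num`, `den`) rather than raw observations, reducing the bandwidth sent to the cloud.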

RevDate: 2021-04-05

Zhang J, Lu C, Cheng G, et al (2021)

A Blockchain-Based Trusted Edge Platform in Edge Computing Environment.

Sensors (Basel, Switzerland), 21(6): pii:s21062126.

Edge computing is a product of the evolution of IoT and the development of cloud computing technology, providing computing, storage, network, and other infrastructure close to users. Compared with the centralized deployment model of traditional cloud computing, edge computing solves the problems of long communication times and high convergence traffic, providing better support for low-latency and high-bandwidth services. With the increasing amount of data generated by users and devices in the IoT, security and privacy issues in the edge computing environment have become a concern. Blockchain, a security technology that has developed rapidly in recent years, has been adopted by many industries, such as finance and insurance. With edge computing capability, deploying blockchain platforms/applications on edge computing platforms can provide security services for network edge environments. Although there are already solutions integrating edge computing with blockchain in many IoT application scenarios, they fall short in scalability, portability, and heterogeneous data processing. In this paper, we propose a trusted edge platform that integrates an edge computing framework and a blockchain network to build an edge security environment. The proposed platform aims to preserve the data privacy of edge computing clients. A design based on the microservice architecture makes the platform lighter. To improve the portability of the platform, we introduce the EdgeX Foundry framework and design an edge application module on the platform to improve the business capability of EdgeX. Simultaneously, we design a series of well-defined security authentication microservices. These microservices use the Hyperledger Fabric blockchain network to build a reliable security mechanism in the edge environment. Finally, we build an edge computing network using different hardware devices and deploy the trusted edge platform on multiple network nodes. The usability of the proposed platform is demonstrated by testing the round-trip time (RTT) of several important workflows. The experimental results demonstrate that the platform can meet the availability requirements of real-world usage scenarios.

RevDate: 2021-04-05

Klein I, Oppelt N, C Kuenzer (2021)

Application of Remote Sensing Data for Locust Research and Management-A Review.

Insects, 12(3): pii:insects12030233.

Recently, locust outbreaks around the world have destroyed agricultural and natural vegetation and caused massive damage endangering food security. Unusually heavy rainfalls in habitats of the desert locust (Schistocerca gregaria) and a lack of monitoring, due to political conflicts or the inaccessibility of those habitats, led to massive desert locust outbreaks and swarms migrating over the Arabian Peninsula, East Africa, India and Pakistan. At the same time, swarms of the Moroccan locust (Dociostaurus maroccanus) in some Central Asian countries and swarms of the Italian locust (Calliptamus italicus) in Russia and China destroyed crops despite developed and ongoing monitoring and control measures. These recent events underline that the risk and damage caused by locust pests is as present as ever and affects 100 million human lives, despite technical progress in locust monitoring, prediction and control approaches. Remote sensing has become one of the most important data sources in locust management. Since the 1980s, remote sensing data and applications have accompanied many locust management activities and contributed to improved and more effective control of locust outbreaks and plagues. Recently, open-access remote sensing data archives as well as progress in cloud computing have provided unprecedented opportunities for remote sensing-based locust management and research. Additionally, unmanned aerial vehicle (UAV) systems open up new prospects for more effective and faster locust control. Nevertheless, the full capacity of available remote sensing applications and possibilities has not been exploited yet. This review paper provides a comprehensive and quantitative overview of international research articles focusing on remote sensing applications for locust management and research. We reviewed 110 articles published over the last four decades, and categorized them into different aspects and main research topics to summarize achievements and gaps for further research and application development. The results reveal a strong focus on three species-the desert locust, the migratory locust (Locusta migratoria), and the Australian plague locust (Chortoicetes terminifera)-and corresponding regions of interest. There is still a lack of international studies for other pest species such as the Italian locust, the Moroccan locust, the Central American locust (Schistocerca piceifrons), the South American locust (Schistocerca cancellata), the brown locust (Locustana pardalina) and the red locust (Nomadacris septemfasciata). In terms of applied sensors, most studies utilized Advanced Very-High-Resolution Radiometer (AVHRR), Satellite Pour l'Observation de la Terre VEGETATION (SPOT-VGT), and Moderate-Resolution Imaging Spectroradiometer (MODIS) as well as Landsat data, focusing mainly on vegetation monitoring or land cover mapping. Application of geomorphological metrics as well as radar-based soil moisture data is comparatively rare, despite previous acknowledgement of their importance for locust outbreaks. Despite great advances in and usage of available remote sensing resources, we identify several gaps and potential for future research to further improve the understanding and capacity of the use of remote sensing in supporting locust outbreak research and management.

RevDate: 2021-04-06
CmpDate: 2021-04-06

Poniszewska-Marańda A, E Czechowska (2021)

Kubernetes Cluster for Automating Software Production Environment.

Sensors (Basel, Switzerland), 21(5): pii:s21051910.

Microservices, Continuous Integration and Delivery, Docker, DevOps, Infrastructure as Code: these are the current trends and buzzwords in the technological world of 2020. A popular tool that can facilitate the deployment and maintenance of microservices is Kubernetes, a platform for running containerized applications such as microservices. Two main questions were important to us: how to deploy Kubernetes itself, and how to ensure that the deployment fulfils the needs of a production environment. Our research concentrates on the analysis and evaluation of a Kubernetes cluster as a software production environment. First, however, it is necessary to determine and evaluate the requirements of a production environment. The paper presents the determination and analysis of such requirements and their evaluation in the case of a Kubernetes cluster. Next, the paper compares two methods of deploying a Kubernetes cluster: kops and eksctl. Both methods target the AWS cloud, which was chosen mainly because of its wide popularity and the range of services provided. Besides the two chosen methods of deployment, there are many more, including the DIY method and deploying on-premises.

RevDate: 2021-04-05

Hadzovic S, Mrdovic S, M Radonjic (2021)

Identification of IoT Actors.

Sensors (Basel, Switzerland), 21(6): pii:s21062093.

The Internet of Things (IoT) is a leading trend with numerous opportunities accompanied by advantages as well as disadvantages. In parallel with IoT development, significant privacy and personal data protection challenges are also growing. In this regard, the General Data Protection Regulation (GDPR) is often considered the world's strongest set of data protection rules and has proven to be a catalyst for many countries around the world. The concepts and interaction of the data controller, the joint controllers, and the data processor play a key role in the implementation of the GDPR. Therefore, clarifying the blurred relationships among IoT actors to determine corresponding responsibilities is necessary. Given the IoT transformation reflected in the shifting of computing power from the cloud to the edge, in this research we have considered how these computing paradigms are affecting IoT actors. In this regard, we have introduced an identification of IoT actors according to a new five-computing-layer IoT model based on cloud, fog, edge, mist, and dew computing. Our conclusion is that identifying IoT actors in light of the corresponding IoT data manager roles could be useful in determining the responsibilities of IoT actors for their compliance with data protection and privacy rules.

RevDate: 2021-04-05
CmpDate: 2021-04-05

Sedar R, Vázquez-Gallego F, Casellas R, et al (2021)

Standards-Compliant Multi-Protocol On-Board Unit for the Evaluation of Connected and Automated Mobility Services in Multi-Vendor Environments.

Sensors (Basel, Switzerland), 21(6): pii:s21062090.

Vehicle-to-everything (V2X) communications enable real-time information exchange between vehicles and infrastructure, which extends the perception range of vehicles beyond the limits of on-board sensors, thereby facilitating the realisation of cooperative, connected, and automated mobility (CCAM) services that will improve road safety and traffic efficiency. In the context of CCAM, successful deployments of cooperative intelligent transport system (C-ITS) use cases, with the integration of advanced wireless communication technologies, are making transport safer and more efficient. However, the evaluation of multi-vendor, multi-protocol CCAM service architectures can become challenging and complex. Additionally, conducting on-demand field trials of such architectures with real vehicles involved is prohibitively expensive and time-consuming. In order to overcome these obstacles, in this paper, we present the development of a standards-compliant experimental vehicular on-board unit (OBU) that supports the integration of multiple V2X protocols from different vendors to communicate with heterogeneous cloud-based services offered by several original equipment manufacturers (OEMs). We experimentally demonstrate the functionalities of the OBU in a real-world deployment of a cooperative collision avoidance service infrastructure based on edge and cloud servers. In addition, we measure end-to-end application-level latencies of multi-protocol V2X information flows to show the effectiveness of interoperability in V2X communications between different vehicle OEMs.

RevDate: 2021-04-05
CmpDate: 2021-04-05

Wang Y, Wang L, Zheng R, et al (2021)

Latency-Optimal Computational Offloading Strategy for Sensitive Tasks in Smart Homes.

Sensors (Basel, Switzerland), 21(7): pii:s21072347.

In smart homes, the computational offloading technology of edge cloud computing (ECC) can effectively deal with the large amount of computation generated by smart devices. In this paper, we propose a computational offloading strategy for minimizing delay based on the back-pressure algorithm (BMDCO) to obtain the offloading decision and the number of tasks that can be offloaded. Specifically, we first construct a system with multiple local smart device task queues and multiple edge processor task queues. Then, we formulate an offloading strategy that minimizes the queue length of tasks in each time slot by solving a Lyapunov drift minimization problem, so as to achieve queue stability and improve offloading performance. In addition, we give a theoretical analysis of the stability of the BMDCO algorithm by deriving an upper bound on all queues in the system. The simulation results show the stability of the proposed algorithm, and demonstrate that the BMDCO algorithm is superior to other alternatives. Compared with other algorithms, this algorithm can effectively reduce the computation delay.
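The Lyapunov drift minimization mentioned above typically takes the following standard textbook form (a generic sketch of queue-stability analysis, not necessarily the paper's exact formulation): with queue backlogs \(Q_i(t)\) collected in the state \(\Theta(t)\), define

```latex
L(\Theta(t)) = \tfrac{1}{2}\sum_{i} Q_i(t)^2,
\qquad
\Delta(\Theta(t)) = \mathbb{E}\bigl[\, L(\Theta(t+1)) - L(\Theta(t)) \,\big|\, \Theta(t) \bigr]
```

and in each time slot choose the offloading decision that minimizes (an upper bound on) the drift \(\Delta(\Theta(t))\); a finite bound on the drift implies that all queue backlogs remain stable.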

RevDate: 2021-04-06
CmpDate: 2021-04-06

Agapiou A (2021)

Multi-Temporal Change Detection Analysis of Vertical Sprawl over Limassol City Centre and Amathus Archaeological Site in Cyprus during 2015-2020 Using the Sentinel-1 Sensor and the Google Earth Engine Platform.

Sensors (Basel, Switzerland), 21(5): pii:s21051884.

Urban sprawl can negatively impact the archaeological record of an area. In order to study the urbanisation process and its patterns, satellite images have been used in the past to identify land-use changes and detect individual buildings and constructions. However, this approach involves the acquisition of high-resolution satellite images, the cost of which increases with the size of the area under study as well as the time interval of the analysis. In this paper, we implemented a quick, automatic, and low-cost exploration of large areas, aiming to provide a medium-resolution overview of landscape changes. This study focuses on using radar Sentinel-1 images to monitor and detect multi-temporal changes during the period 2015-2020 in Limassol, Cyprus. In addition, the big-data cloud platform Google Earth Engine was used to process the data. Three different change detection methods were implemented on this platform: (a) vertical transmit, vertical receive (VV) and vertical transmit, horizontal receive (VH) polarisation pseudo-colour composites; (b) the Rapid and Easy Change Detection in Radar Time-Series by Variation Coefficient (REACTIV) Google Earth Engine algorithm; and (c) a multi-temporal Wishart-based change detection algorithm. The overall findings are presented for the wider area of the city of Limassol, with special focus on the archaeological site of "Amathus" and the city centre of Limassol. For validation purposes, satellite images from the multi-temporal archive of the Google Earth platform were used. The methods mentioned above were able to capture the urbanization process of the city that was initiated during this period due to recent large construction projects.

RevDate: 2021-04-06
CmpDate: 2021-04-06

Hsiao CH, Lin FY, Fang ES, et al (2021)

Optimization-Based Resource Management Algorithms with Considerations of Client Satisfaction and High Availability in Elastic 5G Network Slices.

Sensors (Basel, Switzerland), 21(5): pii:s21051882.

A combined edge and core cloud computing environment is a novel solution in 5G network slices. The clients' high-availability requirement is a challenge because it limits the possible admission control in front of the edge cloud. This work proposes an orchestrator with a mathematical programming model, taking a global viewpoint, to solve resource management problems and satisfy the clients' high-availability requirements. The proposed Lagrangian relaxation-based approach solves the problems at a near-optimal level to increase system revenue. A promising and straightforward resource management approach and several experimental cases are used to evaluate efficiency and effectiveness. Preliminary results are presented as performance evaluations to verify the proposed approach's suitability for edge and core cloud computing environments. The proposed orchestrator significantly enables network slicing services and efficiently enhances clients' satisfaction with high availability.

RevDate: 2021-04-02

Bentes PCL, J Nadal (2021)

A telediagnosis assistance system for multiple-lead electrocardiography.

Physical and engineering sciences in medicine [Epub ahead of print].

The diffusion of telemedicine opens up a new perspective for the development of technologies furthered by Biomedical Engineering. In particular, herein we deal with those related to telediagnosis through multiple-lead electrocardiographic signals. This study focuses on the proof-of-concept of an internet-based telemedicine system as a use case that attests to the feasibility of developing, within the university environment, techniques for the remote processing of biomedical signals for the adjustable detection of myocardial ischemia episodes. In each signal lead, QRS complexes are detected and delimited with the J-point marking. The same procedure used to detect the complex is used to identify the respective T wave; then the area over the ST segment is applied to detect ischemia-related elevations. The entire system is designed on web-based telemedicine services using multiuser, remote-access technologies and a database. The measurements for sensitivity and precision had their respective averages calculated at 11.79 and 24.21% for the leads with lower noise. The evaluations regarding user friendliness and the usefulness of the application resulted in 88.57 and 89.28% broad or total acceptance, respectively. The system is robust enough to enable scalability and can be offered via cloud computing, besides enabling the development of new biomedical signal processing techniques within the concept of distance services, using a modular architecture with a collaborative bias.

RevDate: 2021-04-01

Deepika J, Rajan C, T Senthil (2021)

Security and Privacy of Cloud- and IoT-Based Medical Image Diagnosis Using Fuzzy Convolutional Neural Network.

Computational intelligence and neuroscience, 2021:6615411.

In recent times, security in cloud computing has become a significant concern in healthcare services, specifically in medical data storage and disease prediction. A large volume of data is produced in the healthcare environment every day due to developments in medical devices. Thus, cloud computing technology is utilised for storing, processing, and handling these large volumes of data in a manner highly secured against various attacks. This paper focuses on disease classification utilising image processing in a secured cloud computing environment, using an extended zigzag image encryption scheme possessing a high tolerance to different data attacks. Secondly, a fuzzy convolutional neural network (FCNN) algorithm is proposed for effective classification of images. The decrypted images are used for classification of cancer levels with different layers of training. After classification, the results are transferred to the concerned doctors and patients for the further treatment process. Here, the experimental process is carried out utilising a standard dataset. The experimental results show that the proposed algorithm performs better than existing algorithms and can be effectively utilised for medical image diagnosis.
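For orientation, the plain zigzag (diagonal) scan of an image block, the building block that "extended zigzag" encryption schemes rearrange, can be sketched as follows. This is a generic JPEG-style traversal for illustration only, not the paper's extended encryption scheme.

```python
def zigzag_indices(n):
    """Diagonal (zigzag) traversal order of an n x n block."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        # alternate direction on each anti-diagonal
        order.extend(diag if s % 2 else diag[::-1])
    return order

def zigzag_scan(block):
    """Flatten a square block in zigzag order."""
    return [block[i][j] for i, j in zigzag_indices(len(block))]

block = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
flat = zigzag_scan(block)  # [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

An encryption scheme built on this idea would compose the reordering with key-dependent permutations and value transformations; decryption inverts the index mapping to restore pixel positions.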

RevDate: 2021-03-30

Floreano IX, LAF de Moraes (2021)

Land use/land cover (LULC) analysis (2009-2019) with Google Earth Engine and 2030 prediction using Markov-CA in the Rondônia State, Brazil.

Environmental monitoring and assessment, 193(4):239.

The Amazonian biome is important not only for South America but also for the entire planet, providing essential environmental services. The state of Rondônia ranks third in deforestation rates in the Brazilian Legal Amazon (BLA) political division. This study aims to evaluate the land use/land cover (LULC) changes over the past ten years (2009-2019), as well as to predict the LULC in the next 10 years, using TerrSet 18.3 software, in the state of Rondônia, Brazil. The machine learning algorithms within the Google Earth Engine cloud-based platform employed a Random Forest classifier in image classifications. The Markov-CA algorithm predicted future LULC changes by comparing scenarios of one and three transitions. The results showed a reduction in forested areas of about 15.7% between 2009 and 2019 in the Rondônia state. According to the predictive model, by 2030, around 30% of the remaining forests will be logged, most likely converted into occupied areas. The results reinforce the importance of measures and policies integrated with investments in research and satellite monitoring to reduce deforestation in the Brazilian Amazon and ensure the continuity of the Amazonian role in halting climate change.
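The Markov component of a Markov-CA prediction projects land-cover class shares forward with a transition probability matrix; the cellular automata component then allocates the predicted change spatially. The sketch below shows only the Markov projection step, with hypothetical two-class numbers unrelated to the study's actual transition rates.

```python
def project_shares(shares, transition, steps=1):
    """Advance land-cover class shares with a row-stochastic transition matrix."""
    n = len(shares)
    for _ in range(steps):
        shares = [sum(shares[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
    return shares

# Hypothetical example: classes are (forest, anthropic)
T = [[0.9, 0.1],   # 10% of forest converted per period
     [0.0, 1.0]]   # converted land stays converted
shares_start = [0.6, 0.4]
shares_next = project_shares(shares_start, T)  # approximately [0.54, 0.46]
```

In a real Markov-CA run, `T` would be estimated from the observed 2009 and 2019 classification maps before projecting to 2030.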

RevDate: 2021-03-29

Wimberly MC, de Beurs KM, Loboda TV, et al (2021)

Satellite Observations and Malaria: New Opportunities for Research and Applications.

Trends in parasitology pii:S1471-4922(21)00055-6 [Epub ahead of print].

Satellite remote sensing provides a wealth of information about environmental factors that influence malaria transmission cycles and human populations at risk. Long-term observations facilitate analysis of climate-malaria relationships, and high-resolution data can be used to assess the effects of agriculture, urbanization, deforestation, and water management on malaria. New sources of very-high-resolution satellite imagery and synthetic aperture radar data will increase the precision and frequency of observations. Cloud computing platforms for remote sensing data combined with analysis-ready datasets and high-level data products have made satellite remote sensing more accessible to nonspecialists. Further collaboration between the malaria and remote sensing communities is needed to develop and implement useful geospatial data products that will support global efforts toward malaria control, elimination, and eradication.

RevDate: 2021-03-27

Li C, Bao K, Qin S, et al (2021)

Grating-enabled high-speed high-efficiency surface-illuminated silicon photodiodes.

Optics express, 29(3):3458-3464.

High-speed, high-efficiency silicon photodetectors play important roles in the optical communication links that are used increasingly in data centers to handle the increasing volumes of data traffic and higher bandwidths required as use of big data and cloud computing continues to grow exponentially. Monolithic integration of the optical components with signal processing electronics on a single silicon chip is of paramount importance in the drive to reduce costs and improve performance. Here we report grating-enhanced light absorption in a silicon photodiode. The absorption efficiency is determined theoretically to be as high as 77% at 850 nm for the optimal structure, which has a thin intrinsic absorption layer with a thickness of 220 nm. The fabricated devices demonstrate a high bandwidth of 11.3 GHz and improved radio-frequency output power of more than 14 dB, thus making them suitable for use in data center optical communications.

RevDate: 2021-03-25

Schoenbachler JL, JJ Hughey (2021)

pmparser and PMDB: resources for large-scale, open studies of the biomedical literature.

PeerJ, 9:e11071 pii:11071.

PubMed is an invaluable resource for the biomedical community. Although PubMed is freely available, the existing API is not designed for large-scale analyses and the XML structure of the underlying data is inconvenient for complex queries. We developed an R package called pmparser to convert the data in PubMed to a relational database. Our implementation of the database, called PMDB, currently contains data on over 31 million PubMed Identifiers (PMIDs) and is updated regularly. Together, pmparser and PMDB can enable large-scale, reproducible, and transparent analyses of the biomedical literature. pmparser is licensed under GPL-2 and available at https://pmparser.hugheylab.org. PMDB is available in both PostgreSQL (DOI 10.5281/zenodo.4008109) and Google BigQuery (https://console.cloud.google.com/bigquery?project=pmdb-bq&d=pmdb).

RevDate: 2021-03-25

Yao L, Shang D, Zhao H, et al (2021)

Medical Equipment Comprehensive Management System Based on Cloud Computing and Internet of Things.

Journal of healthcare engineering, 2021:6685456.

Continuous progress in modern medicine involves not only the level of medical technology but also various high-tech auxiliary medical equipment. With the rapid development of hospital information construction, medical equipment plays a very important role in the diagnosis, treatment, and prognosis observation of disease. However, the continuous growth in the types and quantity of medical equipment has caused considerable difficulties in the management of hospital equipment. In order to improve the efficiency of medical equipment management in hospitals, this paper develops a comprehensive management system for medical equipment based on cloud computing and the Internet of Things, and uses improved particle swarm optimization and chicken swarm algorithms to help the system achieve reasonable dynamic task scheduling. The purpose of this paper is to develop a comprehensive intelligent management system covering the procurement, maintenance, and use of all medical equipment in the hospital, so as to maximize the scientific management of hospital medical equipment. It is also necessary to develop a preventive maintenance plan for medical equipment. The experimental data show that when the system is accessed by 100 simulated online users simultaneously, the response time for submitting the equipment maintenance application form is 1228 ms, with an accuracy rate of 99.8%. With 1000 simulated online users, the response time for submitting the form is 5123 ms, with an accuracy rate of 99.4%. On the whole, the medical equipment management information system performs excellently under stress testing. It not only meets the initial performance requirements but also provides a large amount of data support for equipment management and maintenance.

RevDate: 2021-03-22

Caufield JH, Sigdel D, Fu J, et al (2021)

Cardiovascular Informatics: building a bridge to data harmony.

Cardiovascular research pii:6159766 [Epub ahead of print].

The search for new strategies for better understanding cardiovascular disease is a constant one, spanning multitudinous types of observations and studies. A comprehensive characterization of each disease state and its biomolecular underpinnings relies upon insights gleaned from extensive information collection of various types of data. Researchers and clinicians in cardiovascular biomedicine repeatedly face questions regarding which types of data may best answer their questions, how to integrate information from multiple datasets of various types, and how to adapt emerging advances in machine learning and/or artificial intelligence to their needs in data processing. Frequently lauded as a field with great practical and translational potential, the interface between biomedical informatics and cardiovascular medicine is challenged with staggeringly massive datasets. Successful application of computational approaches to decode these complex and gigantic amounts of information becomes an essential step toward realizing the desired benefits. In this review, we examine recent efforts to adapt informatics strategies to cardiovascular biomedical research: automated information extraction and unification of multifaceted -omics data. We discuss how and why this interdisciplinary space of Cardiovascular Informatics is particularly relevant to and supportive of current experimental and clinical research. We describe in detail how open data sources and methods can drive discovery while demanding few initial resources, an advantage afforded by widespread availability of cloud computing-driven platforms. Subsequently, we provide examples of how interoperable computational systems facilitate exploration of data from multiple sources, including both consistently-formatted structured data and unstructured data. Taken together, these approaches for achieving data harmony enable molecular phenotyping of cardiovascular (CV) diseases and unification of cardiovascular knowledge.

RevDate: 2021-03-22

Ogle C, Reddick D, McKnight C, et al (2021)

Named Data Networking for Genomics Data Management and Integrated Workflows.

Frontiers in big data, 4:582468 pii:582468.

Advanced imaging and DNA sequencing technologies now enable the diverse biology community to routinely generate and analyze terabytes of high-resolution biological data. The community is rapidly heading toward the petascale in single-investigator laboratory settings. As evidence, the NCBI SRA central DNA sequence repository alone contains over 45 petabytes of biological data. Given the geometric growth of this and other genomics repositories, an exabyte of mineable biological data is imminent. The challenges of effectively utilizing these datasets are enormous, as they are not only large in size but also stored in geographically distributed repositories such as the National Center for Biotechnology Information (NCBI), the DNA Data Bank of Japan (DDBJ), the European Bioinformatics Institute (EBI), and NASA's GeneLab. In this work, we first systematically point out the data-management challenges of the genomics community. We then introduce Named Data Networking (NDN), a novel but well-researched Internet architecture capable of solving these challenges at the network layer. NDN performs all operations, such as forwarding requests to data sources, content discovery, access, and retrieval, using content names (similar to traditional filenames or filepaths) and eliminates the need for a location layer (the IP address) in data management. Utilizing NDN for genomics workflows simplifies data discovery, speeds up data retrieval using in-network caching of popular datasets, and allows the community to create infrastructure that supports operations such as creating federations of content repositories, retrieval from multiple sources, remote data subsetting, and others. Name-based operations also streamline the deployment and integration of workflows with various cloud platforms. Our contributions in this work are as follows: 1) we enumerate the cyberinfrastructure challenges of the genomics community that NDN can alleviate; 2) we describe our efforts in applying NDN to a contemporary genomics workflow (GEMmaker) and quantify the improvements, with a preliminary evaluation showing a sixfold speed-up in data insertion into the workflow; and 3) as a pilot, we have used an NDN naming scheme (agreed upon by the community and discussed in Section 4) to publish data from broadly used repositories, including the NCBI SRA. We have loaded the NDN testbed with these pre-processed genomes, which can be accessed over NDN and used by anyone interested in those datasets. Finally, we discuss our continued effort to integrate NDN with cloud computing platforms, such as the Pacific Research Platform (PRP). The reader should note that the goal of this paper is to introduce NDN to the genomics community and discuss NDN's properties that can benefit it. We do not present an extensive performance evaluation of NDN; we are working on extending and evaluating our pilot deployment and will present systematic results in future work.

RevDate: 2021-03-19

Guo J, Chen S, Tian S, et al (2021)

5G-enabled ultra-sensitive fluorescence sensor for proactive prognosis of COVID-19.

Biosensors & bioelectronics, 181:113160 pii:S0956-5663(21)00197-4 [Epub ahead of print].

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been spreading around the globe since December 2019. There is an urgent need for sensitive, online methods for on-site diagnosis and monitoring of suspected COVID-19 patients. With the rapid development of the Internet of Things (IoT), the Internet of Medical Things (IoMT) provides an impressive solution to this problem. In this paper, we propose a 5G-enabled fluorescence sensor for quantitative detection of the spike protein and nucleocapsid protein of SARS-CoV-2 using a mesoporous-silica-encapsulated up-conversion nanoparticle (UCNPs@mSiO2)-labeled lateral flow immunoassay (LFIA). The sensor can detect spike protein (SP) with a limit of detection (LOD) of 1.6 ng/mL and nucleocapsid protein (NP) with an LOD of 2.2 ng/mL. The feasibility of the sensor for clinical use was further demonstrated using virus culture as real clinical samples. Moreover, the proposed fluorescence sensor is IoMT-enabled and accessible to edge hardware devices (personal computers, 5G smartphones, IPTV, etc.) through Bluetooth. Medical data can be transmitted to the fog layer of the network and to a 5G cloud server with ultra-low latency and high reliability for edge computing and big-data analysis. Furthermore, a COVID-19 monitoring module working with the proposed system was developed as a smartphone application (App), which enables patients and their families to record their medical data and daily conditions remotely, relieving the burden of visits to central hospitals. We believe the proposed system will be highly practical in the future treatment and prevention of COVID-19 and other mass infectious diseases.

RevDate: 2021-03-19

Blamey B, Toor S, Dahlö M, et al (2021)

Rapid development of cloud-native intelligent data pipelines for scientific data streams using the HASTE Toolkit.

GigaScience, 10(3):.

BACKGROUND: Large streamed datasets, characteristic of life science applications, are often resource-intensive to process, transport and store. We propose a pipeline model, a design pattern for scientific pipelines, where an incoming stream of scientific data is organized into a tiered or ordered "data hierarchy". We introduce the HASTE Toolkit, a proof-of-concept cloud-native software toolkit based on this pipeline model, to partition and prioritize data streams to optimize use of limited computing resources.

FINDINGS: In our pipeline model, an "interestingness function" assigns an interestingness score to data objects in the stream, inducing a data hierarchy. From this score, a "policy" guides decisions on how to prioritize computational resource use for a given object. The HASTE Toolkit is a collection of tools to adopt this approach. We evaluate with 2 microscopy imaging case studies. The first is a high content screening experiment, where images are analyzed in an on-premise container cloud to prioritize storage and subsequent computation. The second considers edge processing of images for upload into the public cloud for real-time control of a transmission electron microscope.

CONCLUSIONS: Through our evaluation, we created smart data pipelines capable of effective use of storage, compute, and network resources, enabling more efficient data-intensive experiments. We note a beneficial separation between the scientific concerns of data priority and the implementation of this behaviour for different resources in different deployment contexts. The toolkit allows intelligent prioritization to be "bolted on" to new and existing systems, and is intended for use with a range of technologies in different deployment scenarios.
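The pipeline model above (interestingness function, induced data hierarchy, policy) can be sketched minimally. The function names, score thresholds, and policy actions below are hypothetical illustrations and do not follow the actual HASTE Toolkit API.

```python
# Minimal sketch of the HASTE pipeline model, with invented thresholds.
def interestingness(image):
    """Score in [0, 1]; here, a hypothetical fraction of in-focus pixels."""
    return image["focus_fraction"]

def tier(score):
    """Induce the tiered data hierarchy from the score."""
    return "high" if score >= 0.8 else "medium" if score >= 0.4 else "low"

def policy(level):
    """Map a tier to a resource-use decision."""
    return {"high": "analyze now + keep full resolution",
            "medium": "queue for batch analysis",
            "low": "store thumbnail only"}[level]

stream = [{"id": 1, "focus_fraction": 0.93},
          {"id": 2, "focus_fraction": 0.55},
          {"id": 3, "focus_fraction": 0.10}]
for obj in stream:
    level = tier(interestingness(obj))
    print(obj["id"], level, policy(level))
```

The point of the separation the authors highlight is that `interestingness` and `tier` encode scientific priority, while `policy` can be swapped per deployment (on-premise container cloud versus edge-to-public-cloud) without touching the scoring.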

RevDate: 2021-03-19

Kumar D (2021)

Urban objects detection from C-band synthetic aperture radar (SAR) satellite images through simulating filter properties.

Scientific reports, 11(1):6241.

Satellite-based remote sensing plays a key role in monitoring Earth features, but because traditional optical remote sensing methods lack cloud-penetration capability and are restricted to selective acquisition windows, attention has shifted toward alternatives such as microwave or radar sensing technology. Microwave remote sensing utilizes synthetic aperture radar (SAR) technology and can operate in all weather conditions. Previous researchers have reported on the effects of SAR pre-processing for urban object detection and mapping. Preparing high-accuracy urban maps is critical to disaster planning and response efforts, so the results of this study can guide users on the required pre-processing steps and their effects. Owing to induced errors (such as calibration errors, geometric errors, and speckle noise), radar images suffer several distortions that must be processed before any application, as they cause issues in image interpretation and can destroy valuable information about the shape, size, pattern, and tone of desired objects. The present work aims to utilize Sentinel-1 SAR datasets for urban studies (i.e., urban object detection through simulation of filter properties). The work uses C-band SAR datasets acquired from the Sentinel-1A/B sensor, and Google Earth datasets to validate the recognized objects. It was observed that the Refined Lee filter performed well in providing detailed information about the various urban objects. It was also established that the attempted approach cannot be generalised as one suitable method for sensing or identifying accurate urban objects from C-band SAR images; hence, more datasets in different polarisation combinations need to be attempted.

RevDate: 2021-03-18

Chandak T, CF Wong (2021)

EDock-ML: A Web Server for Using Ensemble Docking with Machine Learning to Aid Drug Discovery.

Protein science : a publication of the Protein Society [Epub ahead of print].

EDock-ML is a web server that facilitates the use of ensemble docking with machine learning to help decide whether a compound is worth considering further in a drug discovery process. Ensemble docking provides an economical way to account for receptor flexibility in molecular docking. Machine learning improves the use of the resulting docking scores to evaluate whether a compound is likely to be useful. EDock-ML takes a bottom-up approach in which machine-learning models are developed one protein at a time to improve predictions for the proteins included in its database. Because the machine-learning models are intended to be used without changing the docking and model parameters with which they were trained, novice users can use the server directly without worrying about what parameters to choose. A user simply submits a compound specified by an ID from the ZINC database (Sterling, T.; Irwin, J. J., J Chem Inf Model 2015, 55(11), 2324-2337) or uploads a file prepared by a chemical drawing program, and receives an output helping the user judge the likelihood of the compound being active or inactive against a drug target. EDock-ML can be accessed freely at edock-ml.umsl.edu. This article is protected by copyright. All rights reserved.

RevDate: 2021-03-18

Ali MA (2021)

Phylotranscriptomic analysis of Dillenia indica L. (Dilleniales, Dilleniaceae) and its systematics implication.

Saudi journal of biological sciences, 28(3):1557-1560.

The recent massive development of next-generation sequencing platforms and bioinformatics tools, including cloud-based computing, has proven extremely useful in understanding the deeper-level phylogenetic relationships of angiosperms. The present phylotranscriptomic analyses address the poorly known evolutionary relationships of the order Dilleniales to the other orders of angiosperms using the minimum evolution method. The analyses revealed the nesting of the representative taxon of Dilleniales in the MPT, distinct from the representatives of the orders Santalales, Caryophyllales, Asterales, Cornales, Ericales, Lamiales, Saxifragales, Fabales, Malvales, Vitales, and Berberidopsidales.

RevDate: 2021-03-17

Bandara E, Liang X, Foytik P, et al (2021)

A blockchain empowered and privacy preserving digital contact tracing platform.

Information processing & management, 58(4):102572.

The spread of the COVID-19 virus continues to increase fatality rates and exhaust the capacity of healthcare providers. Preventing transmission of the virus among humans remains a high priority. Current quarantine efforts involve social distancing and the monitoring and tracking of infected patients. However, the spread of the virus is too rapid to be contained by manual, inefficient human contact tracing alone. To address this challenge, we have developed Connect, a blockchain-empowered digital contact tracing platform that can leverage information on positive cases and notify people in their immediate proximity, thereby reducing the rate at which the infection spreads. This would be particularly effective if sufficient people use the platform and benefit from the targeted recommendations. The recommendations are made in a privacy-preserving fashion and can contain the spread of the virus without the need for an extended lockdown. Connect is an identity wallet platform that keeps user digital identities and user activity trace data on a blockchain platform using Self-Sovereign Identity (SSI) proofs. User activities include the places a user has travelled, the country of origin, travel and dispatch updates from the airport, etc. With these activity trace records, the Connect platform can easily identify suspected patients who may be infected with the COVID-19 virus and take precautions before the infection spreads. By storing digital identities and activity trace records on a blockchain-based SSI platform, Connect addresses common issues in centralized cloud-based storage platforms (e.g., lack of data immutability and lack of traceability).

RevDate: 2021-03-16

Olivella R, Chiva C, Serret M, et al (2021)

QCloud2: An Improved Cloud-based Quality-Control System for Mass-Spectrometry-based Proteomics Laboratories.

Journal of proteome research [Epub ahead of print].

QCloud is a cloud-based system to support proteomics laboratories in daily quality assessment using a user-friendly interface, easy setup, and automated data processing. Since its release, QCloud has facilitated automated quality control for proteomics experiments in many laboratories. QCloud provides a quick and effortless evaluation of instrument performance that helps to overcome many analytical challenges derived from clinical and translational research. Here we present an improved version of the system, QCloud2. This new version includes enhancements in the scalability and reproducibility of the quality-control pipelines, and it features an improved front end for data visualization, user management, and chart annotation. The QCloud2 system also includes programmatic access and a standalone local version.

RevDate: 2021-03-15

Tanwar AS, Evangelatos N, Venne J, et al (2021)

Global Open Health Data Cooperatives Cloud in an Era of COVID-19 and Planetary Health.

Omics : a journal of integrative biology, 25(3):169-175.

Big data in both the public domain and the health care industry are growing rapidly, for example, with the broad availability of next-generation sequencing and large-scale phenomics datasets on patient-reported outcomes. In parallel, we are witnessing new research approaches that demand sharing of data for the benefit of planetary society. Health data cooperatives (HDCs) are one such approach, where health data are owned and governed collectively by the citizens who take part in them. Data stored in HDCs should remain readily available for translation to public health practice but at the same time be governed in a critically informed manner to ensure data integrity, veracity, and privacy, to name a few pressing concerns. As a solution, we suggest that data generated from high-throughput omics research and phenomics can be stored in an open cloud platform so that researchers around the globe can share health data and work collaboratively. We describe here the Global Open Health Data Cooperatives Cloud (GOHDCC) as a proposed cloud-platform-based model for the sharing of health data between different HDCs around the globe. GOHDCC's main objective is to share health data on a global scale for robust and responsible global science, research, and development. GOHDCC is a citizen-oriented model cooperatively governed by citizens. The model essentially represents a global sharing platform that could benefit all stakeholders along the health care value chain.

RevDate: 2021-03-13

Paredes-Pacheco J, López-González FJ, Silva-Rodríguez J, et al (2021)

SimPET - An open online platform for the Monte Carlo simulation of realistic brain PET data. Validation for 18F-FDG scans.

Medical physics [Epub ahead of print].

PURPOSE: SimPET (www.sim-pet.org) is a free cloud-based platform for the generation of realistic brain Positron Emission Tomography (PET) data. In this work, we introduce the key features of the platform. In addition, we validate the platform by performing a comparison between simulated healthy brain FDG-PET images and real healthy subject data for three commercial scanners (GE Advance NXi, GE Discovery ST, and Siemens Biograph mCT).

METHODS: The platform provides a graphical user interface to a set of automatic scripts taking care of the code execution for the phantom generation, simulation (SimSET), and tomographic image reconstruction (STIR). We characterize the performance using activity and attenuation maps derived from PET/CT and MRI data of 25 healthy subjects acquired with a GE Discovery ST. We then use the created maps to generate synthetic data for the GE Discovery ST, the GE Advance NXi, and the Siemens Biograph mCT. The validation was carried out by evaluating Bland-Altman differences between real and simulated images for each scanner. In addition, SPM voxel-wise comparison was performed to highlight regional differences. Examples for amyloid PET and for the generation of ground-truth pathological patients are included.

RESULTS: The platform can be efficiently used for generating realistic simulated FDG-PET images in a reasonable amount of time. The validation showed small differences between SimPET and acquired FDG-PET images, with errors below 10% for 98.09% (GE Discovery ST), 95.09% (GE Advance NXi), and 91.35% (Siemens Biograph mCT) of the voxels. Nevertheless, our SPM analysis showed significant regional differences between the simulated images and real healthy patients, and thus, the use of the platform for converting control subject databases between different scanners requires further investigation.

CONCLUSIONS: The presented platform can potentially allow scientists in clinical and research settings to perform MC simulation experiments without the need for high-end hardware or advanced computing knowledge and in a reasonable amount of time.

RevDate: 2021-03-12

Wang X, Jiang X, J Vaidya (2021)

Efficient Verification for Outsourced Genome-wide Association Studies.

Journal of biomedical informatics pii:S1532-0464(21)00043-5 [Epub ahead of print].

With cloud computing being widely adopted for conducting genome-wide association studies (GWAS), how to verify the integrity of outsourced GWAS computation remains an open problem. Here, we propose two novel algorithms to generate synthetic SNPs that are indistinguishable from real SNPs. The first method creates synthetic SNPs based on the phenotype vector, while the second creates synthetic SNPs based on the real SNPs most similar to the phenotype vector. The time complexity of the first and second approaches is O(m) and O(m log n²), respectively, where m is the number of subjects and n is the number of SNPs. Furthermore, through a game-theoretic analysis, we demonstrate that it is possible to incentivize honest behavior by the server by coupling appropriate payoffs with randomized verification. We conduct extensive experiments with our proposed methods, and the results show that, beyond a formal adversarial model, when only a few synthetic SNPs are generated and mixed into the real data they cannot be distinguished from real SNPs even by a variety of predictive machine learning models. We demonstrate that the proposed approach can ensure that logistic regression for GWAS is outsourced in an efficient and trustworthy way.
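The verification idea above can be sketched as follows. Everything here is a conceptual stand-in: the toy scoring function, the synthetic SNP names, and the exact-match check replace the paper's actual SNP-synthesis algorithms and GWAS statistics, which are not reproduced.

```python
import random

def honest_server(snps):
    # Stand-in for the outsourced GWAS computation (e.g. one association
    # statistic per SNP); a deterministic toy score for illustration.
    return {s: float(len(s)) for s in snps}

def outsource_and_verify(real_snps, server_fn, n_synthetic=3):
    # Client plants synthetic SNPs whose statistics it precomputes
    # locally; to the server they look like ordinary SNPs.
    synthetic = [f"syn_{i:04d}" for i in range(n_synthetic)]
    expected = {s: float(len(s)) for s in synthetic}  # client-side truth
    mixed = real_snps + synthetic
    random.shuffle(mixed)                 # hide which SNPs are planted
    results = server_fn(mixed)            # untrusted outsourced step
    honest = all(abs(results[s] - expected[s]) < 1e-9 for s in synthetic)
    return ({s: results[s] for s in real_snps} if honest else None), honest

res, ok = outsource_and_verify(["rs0001", "rs0002"], honest_server)
print(ok, sorted(res))  # True ['rs0001', 'rs0002']
```

A server that cheats on even the planted SNPs is caught by the spot check, which is what makes the randomized verification (and the payoff coupling the paper analyzes) an incentive for honest computation.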

RevDate: 2021-03-11

Bahmani A, Xing Z, Krishnan V, et al (2021)

Hummingbird: Efficient Performance Prediction for Executing Genomic Applications in the Cloud.

Bioinformatics (Oxford, England) pii:6162881 [Epub ahead of print].

MOTIVATION: A major drawback of executing genomic applications on cloud computing facilities is the lack of tools to predict which instance type is the most appropriate, often resulting in an over- or under-matching of resources. Determining the right configuration before actually running the applications saves money and time. Here, we introduce Hummingbird, a tool for predicting the performance of computing instances with varying memory and CPU on multiple cloud platforms.

RESULTS: Our experiments on three major genomic data pipelines, including GATK HaplotypeCaller, GATK MuTect2, and ENCODE ATAC-seq, showed that Hummingbird was able to handle applications specified on the command line, in JSON format, or in workflow description language (WDL) format, and accurately predicted the fastest, the cheapest, and the most cost-efficient compute instances in an economical manner.

AVAILABILITY: Hummingbird is available as an open source tool at: https://github.com/StanfordBioinformatics/Hummingbird.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
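The fastest/cheapest/most-cost-efficient selection can be illustrated with toy numbers. The instance names, predicted runtimes, prices, and the 1.5x "cost-efficiency" cutoff below are all invented for illustration and do not reflect Hummingbird's actual prediction model.

```python
# Hypothetical predictions: instance -> (predicted hours, $/hour).
predictions = {
    "n1-standard-4":  (10.0, 0.19),
    "n1-standard-16": (3.5,  0.76),
    "n1-highmem-8":   (5.0,  0.47),
}
cost = {i: hours * price for i, (hours, price) in predictions.items()}

fastest  = min(predictions, key=lambda i: predictions[i][0])
cheapest = min(cost, key=cost.get)
# Cost-efficient (our stand-in rule): cheapest among instances whose
# predicted runtime is within 1.5x of the fastest one.
limit = predictions[fastest][0] * 1.5
cost_efficient = min((i for i in predictions if predictions[i][0] <= limit),
                     key=lambda i: cost[i])
print(fastest, cheapest, cost_efficient)
```

With these made-up figures the three answers differ (fastest: n1-standard-16; cheapest overall: n1-standard-4; cost-efficient: n1-highmem-8), which is exactly why a prediction tool reporting all three is useful before committing cloud spend.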

RevDate: 2021-03-11

Wang M, Yang T, Flechas MA, et al (2020)

GPU-Accelerated Machine Learning Inference as a Service for Computing in Neutrino Experiments.

Frontiers in big data, 3:604083 pii:604083.

Machine learning algorithms are becoming increasingly prevalent and performant in the reconstruction of events in accelerator-based neutrino experiments. These sophisticated algorithms can be computationally expensive. At the same time, the data volumes of such experiments are rapidly increasing. The demand to process billions of neutrino events with many machine learning algorithm inferences creates a computing challenge. We explore a computing model in which heterogeneous computing with GPU coprocessors is made available as a web service. The coprocessors can be efficiently and elastically deployed to provide the right amount of computing for a given processing task. With our approach, Services for Optimized Network Inference on Coprocessors (SONIC), we integrate GPU acceleration specifically for the ProtoDUNE-SP reconstruction chain without disrupting the native computing workflow. With our integrated framework, we accelerate the most time-consuming task, track and particle shower hit identification, by a factor of 17. This results in a factor of 2.7 reduction in the total processing time when compared with CPU-only production. For this particular task, only 1 GPU is required for every 68 CPU threads, providing a cost-effective solution.
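The two quoted factors (17x on hit identification, 2.7x overall) are mutually consistent; a quick Amdahl's-law calculation, which is our own arithmetic rather than anything from the paper, recovers the fraction of runtime the accelerated task must have occupied.

```python
# Amdahl's law: overall = 1 / ((1 - f) + f / task_speedup),
# where f is the fraction of original runtime spent in the sped-up task.
task_speedup = 17.0
overall_speedup = 2.7
f = (1 - 1 / overall_speedup) / (1 - 1 / task_speedup)
print(f"hit identification was ~{f:.0%} of the original CPU-only runtime")
```

The result, roughly two-thirds of the original runtime, matches the abstract's claim that track and particle shower hit identification was the most time-consuming task in the chain.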

RevDate: 2021-03-11

Qayyum A, Ijaz A, Usama M, et al (2020)

Securing Machine Learning in the Cloud: A Systematic Review of Cloud Machine Learning Security.

Frontiers in big data, 3:587139 pii:587139.

With the advances in machine learning (ML) and deep learning (DL) techniques, and the potency of cloud computing in offering services efficiently and cost-effectively, Machine Learning as a Service (MLaaS) cloud platforms have become popular. In addition, there is increasing adoption of third-party cloud services for outsourcing the training of DL models, which requires substantial, costly computational resources (e.g., high-performance graphics processing units (GPUs)). Such widespread usage of cloud-hosted ML/DL services opens a wide range of attack surfaces for adversaries to exploit ML/DL systems for malicious goals. In this article, we conduct a systematic evaluation of the literature on cloud-hosted ML/DL models along both of the important dimensions related to their security: attacks and defenses. Our systematic review identified a total of 31 related articles, of which 19 focused on attacks, six on defenses, and six on both. Our evaluation reveals increasing interest from the research community in attacking and defending Machine Learning as a Service platforms. In addition, we identify the limitations and pitfalls of the analyzed articles and highlight open research issues that require further investigation.

RevDate: 2021-03-11

Werner M (2019)

Parallel Processing Strategies for Big Geospatial Data.

Frontiers in big data, 2:44.

This paper provides an abstract analysis of parallel processing strategies for spatial and spatio-temporal data. It isolates aspects such as data locality and computational locality, as well as redundancy and locally sequential access, as central elements of parallel algorithm design for spatial data. Furthermore, the paper gives some examples from simple and advanced GIS and spatial data analysis, highlighting both that big data systems were around long before the current big data hype and that they follow some design principles that are inevitable for spatial data, including distributed data structures and messaging, which are, however, incompatible with the popular MapReduce paradigm. Throughout this discussion, the need for a replacement or extension of the MapReduce paradigm for spatial data is derived. This paradigm should be able to deal with the imperfect data locality inherent to spatial data, which hinders full independence of non-trivial computational tasks. We conclude that more research is needed and that spatial big data systems should pick up more concepts like graphs, shortest paths, raster data, events, and streams at the same time, instead of solving exactly the set of spatially separable problems, such as line simplification or range queries, in many different ways.

RevDate: 2021-03-11

Cai Y, Zeng M, YZ Chen (2021)

The pharmacological mechanism of Huashi Baidu Formula for the treatment of COVID-19 by combined network pharmacology and molecular docking.

Annals of palliative medicine pii:apm-20-1759 [Epub ahead of print].

BACKGROUND: Huashi Baidu Formula (HSBDF) is a traditional Chinese medicine formula consisting of fourteen parts, which has been proven effective for treating coronavirus disease 2019 (COVID-19) clinically. However, the therapeutic mechanism of the effect of HSBDF on COVID-19 remains unclear.

METHODS: The components and action targets of HSBDF were searched in the TCMSP, YaTCM, PubChem, and TargetNet databases. Disease targets related to ACE2 were screened in single-cell sequencing data of colon epithelial cells from other reports. The therapeutic targets of HSBDF for COVID-19 were obtained by integrated analysis, and protein-protein interactions were analyzed using the STRING database. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) processes were analyzed using the OmicsBean and Metascape databases. The networks [component-target (C-T), component-target-pathway (C-T-P), herb-target (H-T), target-pathway (T-P), and meridian-tropism (M-T)] were constructed with Cytoscape software. A cloud-computing molecular docking platform was used to verify the molecular docking.

RESULTS: We obtained 223 active ingredients and 358 targets of HSBDF. A total of 5,555 COVID-19 disease targets related to ACE2 were extracted, and 84 compound-disease common targets were found, of which the principal targets included ACE, ESR1, ADRA1A, and HDAC1. A total of 3,946 items were identified by GO enrichment analysis, mainly related to metabolism, protein binding, cellular response to stimulus, and receptor activity. KEGG enrichment identified 46 signaling pathways, including the renin-angiotensin system, renin secretion, the NF-kappa B pathway, arachidonic acid metabolism, and the AMPK signaling pathway. The molecular docking results showed that the bioactive components of HSBDF have excellent binding ability with the main proteins related to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).

CONCLUSIONS: HSBDF might act on SARS-CoV-2 through multiple components, targets, and pathways. Here we reveal preliminary results of the mechanism of action of HSBDF on SARS-CoV-2, providing a theoretical basis for future clinical applications.

RevDate: 2021-03-09

Qi Q, Tao F, Cheng Y, et al (2021)

New IT driven rapid manufacturing for emergency response.

Journal of manufacturing systems pii:S0278-6125(21)00052-2 [Epub ahead of print].

COVID-19, which is rampant around the world, has seriously disrupted people's normal work and life. To respond to urgent public needs such as COVID-19, emergency supplies are essential. However, due to the special requirements of such supplies, when an emergency occurs the supply reserve often cannot cope with the high demand. Given the importance of emergency supplies in public emergencies, rapid-response manufacturing of emergency supplies is a necessity: the faster emergency supplies and facilities are manufactured, the more likely the pandemic can be controlled and the more human lives are saved. Meanwhile, new-generation information technology, represented by cloud computing, IoT, big data, AI, etc., is rapidly developing and can be widely used to address such situations. Therefore, rapid-response manufacturing enabled by New IT is presented to quickly meet emergency demands, and some policy suggestions are given.

RevDate: 2021-03-08

Jha RR, Verma RK, Kishore A, et al (2020)

Mapping fear among doctors manning screening clinics for COVID19. Results from cloud based survey in Eastern parts of India.

Journal of family medicine and primary care, 9(12):6194-6200 pii:JFMPC-9-6194.

Background: As the number of COVID-19 cases from the novel coronavirus 2019 rises, so does the number of ensuing deaths. Doctors have been at the front line in these calamitous times across the world. India has fewer doctors, so each is overwhelmed with more patients to care for. They also fear that they will be heavily exposed, as they often work in limited-resource settings.

Methods: An online survey was conducted among doctors from eastern states of India to measure the reasons for their fear and to suggest possible solutions based on the results. After IEC clearance, a semi-structured anonymous questionnaire was sent as Google Forms links to doctors known to be working in screening OPDs or flu clinics, especially for COVID-19.

Results: Of the 59 doctors, the majority were provided with sanitizers for practicing hand hygiene. Gloves were provided everywhere, but masks, particularly N95 and triple-layer surgical masks, were not available to all. Training was not given universally. Fear was dependent on age in our sample.

Conclusion: Training and strict adherence to infection control measures, along with adequate resources, can help remove the fear.

RevDate: 2021-03-08

Krishna R, V Elisseev (2020)

User-centric genomics infrastructure: trends and technologies.

Genome [Epub ahead of print].

Genomics is both a data- and compute-intensive discipline. The success of genomics depends on an adequate informatics infrastructure that can address growing data demands and enable a diverse range of resource-intensive computational activities. Designing a suitable infrastructure is a challenging task, and its success largely depends on its adoption by users. In this article, we take a user-centric view of genomics, where users are bioinformaticians, computational biologists, and data scientists. We consider their point of view on how traditional computational activities for genomics are expanding due to data growth, as well as the introduction of big data and cloud technologies. The changing landscape of computational activities and new user requirements will influence the design of future genomics infrastructures.

RevDate: 2021-03-06

Augustyn DR, Wyciślik Ł, D Mrozek (2021)

Perspectives of using Cloud computing in integrative analysis of multi-omics data.

Briefings in functional genomics pii:6155979 [Epub ahead of print].

Integrative analysis of multi-omics data is usually computationally demanding. It frequently requires building complex, multi-step analysis pipelines, applying dedicated techniques for data processing, and combining several data sources. These efforts lead to a better understanding of life processes, current health state, or the effects of therapeutic activities. However, many omics data analysis solutions focus only on a selected problem, disease, type of data, or organism. Moreover, they are implemented for general-purpose scientific computational platforms that most often do not easily scale the calculations natively. These features are not conducive to advances in understanding genotype-phenotype relationships. Fortunately, with new technological paradigms, including cloud computing, virtualization, and containerization, these functionalities can be orchestrated for easy scaling and for building independent analysis pipelines for omics data, so that solutions can be re-used for purposes for which they were not primarily designed. This paper shows the perspectives of using cloud computing advances and the containerization approach for such a purpose. We first review how the cloud computing model is utilized in multi-omics data analysis and show the weak points of the adopted solutions. Then, we introduce containerization concepts, which allow both scaling and linking of functional services designed for various purposes. Finally, using the Bioconductor software package as an example, we disclose a verified concept model of a universal solution that exhibits the potential for performing integrative analysis of multiple omics data sources.

RevDate: 2021-03-06

Hossain MD, Sultana T, Hossain MA, et al (2021)

Fuzzy Decision-Based Efficient Task Offloading Management Scheme in Multi-Tier MEC-Enabled Networks.

Sensors (Basel, Switzerland), 21(4): pii:s21041484.

Multi-access edge computing (MEC) is a new leading technology for meeting the demands of key performance indicators (KPIs) in 5G networks. However, in a rapidly changing dynamic environment, it is hard to find the optimal target server for processing offloaded tasks because we do not know the end users' demands in advance. Therefore, quality of service (QoS) deteriorates because of increasing task failures and long execution latency from congestion. To reduce latency and avoid task failures from resource-constrained edge servers, vertical offloading between mobile devices with local-edge collaboration or with local edge-remote cloud collaboration have been proposed in previous studies. However, they ignored the nearby edge server in the same tier that has excess computing resources. Therefore, this paper introduces a fuzzy decision-based cloud-MEC collaborative task offloading management system called FTOM, which takes advantage of powerful remote cloud-computing capabilities and utilizes neighboring edge servers. The main objective of the FTOM scheme is to select the optimal target node for task offloading based on server capacity, latency sensitivity, and the network's condition. Our proposed scheme can make dynamic decisions where local or nearby MEC servers are preferred for offloading delay-sensitive tasks, and delay-tolerant high resource-demand tasks are offloaded to a remote cloud server. Simulation results affirm that our proposed FTOM scheme significantly improves the rate of successfully executing offloaded tasks by approximately 68.5%, and reduces task completion time by 66.6%, when compared with a local edge offloading (LEO) scheme. The improved and reduced rates are 32.4% and 61.5%, respectively, when compared with a two-tier edge orchestration-based offloading (TTEO) scheme. 
They are 8.9% and 47.9%, respectively, when compared with a fuzzy orchestration-based load balancing (FOLB) scheme, approximately 3.2% and 49.8%, respectively, when compared with a fuzzy workload orchestration-based task offloading (WOTO) scheme, and approximately 38.6% and 55%, respectively, when compared with a fuzzy edge-orchestration based collaborative task offloading (FCTO) scheme.
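The tiered decision logic summarized in this abstract (delay-sensitive tasks kept on the local or a neighboring edge server, delay-tolerant resource-hungry tasks sent to the remote cloud) can be sketched as a simple rule-based selector. The thresholds, field names, and load metric below are illustrative assumptions, not values or fuzzy membership functions from the paper:

```python
# Illustrative sketch of a tiered offloading decision (all thresholds assumed).
from dataclasses import dataclass

@dataclass
class Task:
    cpu_demand: float   # required compute, normalized to 0..1 (assumed metric)
    deadline_ms: float  # latency budget in milliseconds

def select_target(task, local_load, neighbor_load,
                  delay_sensitive_ms=50.0, high_demand=0.7):
    """Pick an execution tier for a task.

    Delay-sensitive tasks stay at the local edge or spill to a lightly
    loaded neighboring edge server; delay-tolerant, resource-hungry
    tasks are offloaded to the remote cloud.
    """
    if task.deadline_ms <= delay_sensitive_ms:
        # Keep latency-critical work at the edge tier.
        return "local-edge" if local_load <= neighbor_load else "neighbor-edge"
    if task.cpu_demand >= high_demand:
        return "remote-cloud"
    # Moderate tasks go to whichever edge server is less loaded.
    return "local-edge" if local_load <= neighbor_load else "neighbor-edge"

# Example: a delay-tolerant, heavy task is sent to the cloud.
print(select_target(Task(cpu_demand=0.9, deadline_ms=500), 0.8, 0.4))  # remote-cloud
```

A fuzzy version of FTOM would replace these crisp thresholds with membership functions and a rule base; the crisp rules here only convey the selection structure.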

RevDate: 2021-03-06

Choi J, S Ahn (2021)

Optimal Service Provisioning for the Scalable Fog/Edge Computing Environment.

Sensors (Basel, Switzerland), 21(4): pii:s21041506.

In recent years, we observed the proliferation of cloud data centers (CDCs) and the Internet of Things (IoT). Cloud computing based on CDCs has the drawback of unpredictable response times due to variant delays between service requestors (IoT devices and end devices) and CDCs. This deficiency of cloud computing is especially problematic in providing IoT services with strict timing requirements and as a result, gives birth to fog/edge computing (FEC) whose responsiveness is achieved by placing service images near service requestors. In FEC, the computing nodes located close to service requestors are called fog/edge nodes (FENs). In addition, for an FEN to execute a specific service, it has to be provisioned with the corresponding service image. Most of the previous work on the service provisioning in the FEC environment deals with determining an appropriate FEN satisfying the requirements like delay, CPU and storage from the perspective of one or more service requests. In this paper, we determined how to optimally place service images in consideration of the pre-obtained service demands which may be collected during the prior time interval. The proposed FEC environment is scalable in the sense that the resources of FENs are effectively utilized thanks to the optimal provisioning of services on FENs. We propose two approaches to provision service images on FENs. In order to validate the performance of the proposed mechanisms, intensive simulations were carried out for various service demand scenarios.

RevDate: 2021-03-06

Adnan M, Iqbal J, Waheed A, et al (2021)

On the Design of Efficient Hierarchic Architecture for Software Defined Vehicular Networks.

Sensors (Basel, Switzerland), 21(4): pii:s21041400.

Modern vehicles are equipped with various sensors, onboard units, and devices such as the Application Unit (AU) that support routing and communication. In VANETs, traffic management and Quality of Service (QoS) are the main research dimensions to be considered while designing VANET architectures. To cope with the QoS issues faced by VANETs, we design an efficient SDN-based architecture focused on the QoS of VANETs. In this paper, QoS is achieved by a priority-based scheduling algorithm in which we prioritize traffic flow messages in a safety queue and a non-safety queue. In the safety queue, messages are prioritized based on deadline and size using the New Deadline and Size of data (NDS) method with constrained location and deadline. In contrast, the non-safety queue is served on a First Come First Serve (FCFS) basis. For the simulation of our proposed scheduling algorithm, we use the well-known cloud computing framework, the CloudSim toolkit. The simulation results show that safety messages achieve better performance than non-safety messages in terms of execution time.
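The two-queue policy described in this abstract (safety messages ordered by deadline and size, non-safety messages served FCFS) can be sketched roughly as follows. The abstract does not give the exact NDS scoring rule, so the lexicographic (deadline, size) ordering used here is an assumption:

```python
# Sketch of a two-queue VANET message scheduler. The (deadline, size)
# ordering is an illustrative stand-in for the paper's NDS method.
import heapq
from collections import deque

class VanetScheduler:
    """Safety messages are always served first, ordered by (deadline, size);
    non-safety messages are served First Come First Serve."""
    def __init__(self):
        self._safety = []           # min-heap keyed on (deadline, size)
        self._non_safety = deque()  # FIFO queue
        self._counter = 0           # tie-breaker so heap entries never compare ids

    def submit(self, msg_id, safety, deadline=0.0, size=0):
        if safety:
            heapq.heappush(self._safety, (deadline, size, self._counter, msg_id))
            self._counter += 1
        else:
            self._non_safety.append(msg_id)

    def next_message(self):
        if self._safety:
            return heapq.heappop(self._safety)[-1]
        if self._non_safety:
            return self._non_safety.popleft()
        return None

s = VanetScheduler()
s.submit("brake-warning", safety=True, deadline=5, size=100)
s.submit("map-update", safety=False)
s.submit("collision-alert", safety=True, deadline=2, size=80)
print(s.next_message())  # collision-alert (earliest deadline wins)
```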

RevDate: 2021-03-06

Fang J, Shi J, Lu S, et al (2021)

An Efficient Computation Offloading Strategy with Mobile Edge Computing for IoT.

Micromachines, 12(2): pii:mi12020204.

With the rapid development of mobile cloud computing (MCC), the Internet of Things (IoT), and artificial intelligence (AI), user equipment (UEs) are facing explosive growth. In order to effectively solve the problem of UEs facing insufficient capacity when dealing with computationally intensive and delay-sensitive applications, we take Mobile Edge Computing (MEC) of the IoT as the starting point and study the computation offloading strategy of UEs. First, we model the application generated by UEs as a directed acyclic graph (DAG) to achieve fine-grained task offloading scheduling, which makes the parallel processing of tasks possible and speeds up execution efficiency. Then, we propose a multi-population cooperative elite algorithm (MCE-GA) based on the standard genetic algorithm, which can solve the offloading problem for tasks with dependency in MEC to minimize the execution delay and energy consumption of applications. Experimental results show that MCE-GA has better performance compared to the baseline algorithms. To be specific, the overhead reduction by MCE-GA can be up to 72.4%, 38.6%, and 19.3%, respectively, which proves the effectiveness and reliability of MCE-GA.
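The DAG formulation above makes the scheduling structure concrete: each task can only start once its predecessors finish, and each task may run locally or be offloaded. The sketch below computes earliest finish times under a naive greedy per-task choice; it is a simple baseline to illustrate the DAG dependency model, not the MCE-GA genetic algorithm from the paper, and the cost numbers are invented:

```python
# Earliest-finish computation on a task DAG where each task greedily picks
# the cheaper of local execution vs offloading. Greedy baseline only; the
# paper optimizes this choice jointly with a genetic algorithm (MCE-GA).

def earliest_finish(tasks, deps):
    """tasks: {name: (local_cost, offload_cost)}
    deps:  {name: [predecessor names]}
    Returns {name: finish_time}, assuming unlimited parallelism."""
    finish = {}
    def visit(t):
        if t in finish:
            return finish[t]
        # A task is ready once all its predecessors have finished.
        ready = max((visit(p) for p in deps.get(t, [])), default=0)
        local, offload = tasks[t]
        finish[t] = ready + min(local, offload)  # greedy per-task choice
        return finish[t]
    for t in tasks:
        visit(t)
    return finish

tasks = {"a": (4, 6), "b": (5, 2), "c": (3, 3)}
deps = {"b": ["a"], "c": ["a", "b"]}
print(earliest_finish(tasks, deps))  # {'a': 4, 'b': 6, 'c': 9}
```

A GA such as MCE-GA would instead encode the local/offload decision per task as a chromosome and evolve it, since the greedy choice is not globally optimal when tasks contend for shared edge resources.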

RevDate: 2021-03-06

Shang M, J Luo (2021)

The Tapio Decoupling Principle and Key Strategies for Changing Factors of Chinese Urban Carbon Footprint Based on Cloud Computing.

International journal of environmental research and public health, 18(4): pii:ijerph18042101.

The expansion of Xi'an City has increased the consumption of energy and land resources, leading to serious environmental pollution problems. To address this, this study was carried out to measure the carbon carrying capacity, net carbon footprint, and net carbon footprint pressure index of Xi'an City, and to characterize the carbon sequestration capacity of the Xi'an ecosystem, thereby laying a foundation for developing comprehensive and reasonable low-carbon development measures. This study aims to provide a reference for China to develop a low-carbon economy through the Tapio decoupling principle. The decoupling relationship between CO2 and its driving factors was explored through the Tapio decoupling model. Time-series data were used to calculate the carbon footprint. The auto-encoder from deep learning was combined with a parallel algorithm in cloud computing. A general multilayer perceptron neural network, realized by a parallel BP learning algorithm based on Map-Reduce on a cloud computing cluster, was proposed. A partial least squares (PLS) regression model was constructed to analyze driving factors. The results show that, in terms of city size, the variable importance in projection (VIP) output of the urbanization rate has a strong inhibitory effect on carbon footprint growth, while the VIP value of the permanent population ranks last; in terms of economic development, the impacts of fixed asset investment and the added value of the secondary industry on the carbon footprint rank third and fourth. As a result, the marginal effect on the carbon footprint is greater than that of economic growth after economic growth reaches a certain stage, revealing driving forces and mechanisms that can promote the growth of urban space.

RevDate: 2021-03-06

Goyal S, Bhushan S, Kumar Y, et al (2021)

An Optimized Framework for Energy-Resource Allocation in A Cloud Environment based on the Whale Optimization Algorithm.

Sensors (Basel, Switzerland), 21(5): pii:s21051583.

Cloud computing offers services to access, manipulate, and configure data online over the web. The term "cloud" refers to an internet-based network that is remotely available and accessible at any time from anywhere. Cloud computing is undoubtedly an innovation, as the investment in real, physical infrastructure is much greater than the investment in cloud technology. The present work addresses the issue of power consumption by cloud infrastructure, as there is a need for algorithms and techniques that can reduce energy consumption and schedule resources effectively across servers. Load balancing is also a significant part of cloud technology that enables the balanced distribution of load among multiple servers to fulfill users' growing demand. The present work used various optimization algorithms, such as particle swarm optimization (PSO), cat swarm optimization (CSO), BAT, the cuckoo search algorithm (CSA), and the whale optimization algorithm (WOA), for load balancing, energy efficiency, and better resource scheduling to make an efficient cloud environment. In the seven-server and eight-server settings, the results revealed that the whale optimization algorithm outperformed the other algorithms in terms of response time, energy consumption, execution time, and throughput.
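For readers unfamiliar with the whale optimization algorithm the abstract favors, a minimal WOA sketch on a toy objective is given below. All parameter values are generic textbook defaults, not settings from the paper, and the toy sphere function stands in for the paper's scheduling cost model:

```python
# Minimal Whale Optimization Algorithm (WOA) sketch for box-constrained
# minimization. Illustrative only; the paper applies WOA to cloud
# load-balancing objectives, not to this toy sphere function.
import math
import random

def woa_minimize(f, dim, iters=200, whales=20, lo=-10.0, hi=10.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(whales)]
    best = min(pop, key=f)[:]
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters            # "a" decreases linearly from 2 to 0
        for w in pop:
            A = 2 * a * rng.random() - a     # step-size coefficient
            C = 2 * rng.random()
            if rng.random() < 0.5:
                # Encircle the best whale (exploit) or a random one (explore).
                ref = best if abs(A) < 1 else rng.choice(pop)
                for i in range(dim):
                    w[i] = ref[i] - A * abs(C * ref[i] - w[i])
            else:
                # Logarithmic spiral update around the current best.
                l = rng.uniform(-1.0, 1.0)
                for i in range(dim):
                    d = abs(best[i] - w[i])
                    w[i] = d * math.exp(l) * math.cos(2 * math.pi * l) + best[i]
            for i in range(dim):             # clamp whales to the search box
                w[i] = min(max(w[i], lo), hi)
            if f(w) < f(best):
                best = w[:]
    return best, f(best)

# Toy objective: sphere function, minimized at the origin.
pos, val = woa_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In a load-balancing setting, each whale position would instead encode a candidate task-to-server assignment, and `f` would score response time or energy.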

RevDate: 2021-03-05

Stevens L, Kao D, Hall J, et al (2020)

ML-MEDIC: A Preliminary Study of an Interactive Visual Analysis Tool Facilitating Clinical Applications of Machine Learning for Precision Medicine.

Applied sciences (Basel, Switzerland), 10(9):.

Accessible interactive tools that integrate machine learning methods with clinical research and reduce the programming experience required are needed to move science forward. Here, we present Machine Learning for Medical Exploration and Data-Inspired Care (ML-MEDIC), a point-and-click, interactive tool with a visual interface for facilitating machine learning and statistical analyses in clinical research. We deployed ML-MEDIC in the American Heart Association (AHA) Precision Medicine Platform to provide secure internet access and facilitate collaboration. ML-MEDIC's efficacy for facilitating the adoption of machine learning was evaluated through two case studies in collaboration with clinical domain experts. A domain expert review was also conducted to obtain an impression of the usability and potential limitations.

RevDate: 2021-03-05

Shiff S, Helman D, IM Lensky (2021)

Worldwide continuous gap-filled MODIS land surface temperature dataset.

Scientific data, 8(1):74.

Satellite land surface temperature (LST) is vital for climatological and environmental studies. However, LST datasets are not continuous in time and space mainly due to cloud cover. Here we combine LST with Climate Forecast System Version 2 (CFSv2) modeled temperatures to derive a continuous gap filled global LST dataset at a spatial resolution of 1 km. Temporal Fourier analysis is used to derive the seasonality (climatology) on a pixel-by-pixel basis, for LST and CFSv2 temperatures. Gaps are filled by adding the CFSv2 temperature anomaly to climatological LST. The accuracy is evaluated in nine regions across the globe using cloud-free LST (mean values: R2 = 0.93, Root Mean Square Error (RMSE) = 2.7 °C, Mean Absolute Error (MAE) = 2.1 °C). The provided dataset contains day, night, and daily mean LST for the Eastern Mediterranean. We provide a Google Earth Engine code and a web app that generates gap filled LST in any part of the world, alongside a pixel-based evaluation of the data in terms of MAE, RMSE and Pearson's r.
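The gap-filling rule this abstract describes, filling a cloud-covered pixel with its climatological LST plus the model-temperature anomaly, can be sketched in a few lines. The numbers below are synthetic placeholders, not MODIS or CFSv2 data:

```python
# Sketch of the gap-filling rule: replace a missing LST value with the
# pixel's climatological LST plus the model-temperature anomaly
# (model value minus model climatology). Synthetic numbers only.

def fill_gaps(lst, lst_clim, model, model_clim):
    """lst: observed LST series, with None marking cloud-covered gaps.
    lst_clim / model_clim: per-timestep climatologies (in the paper these
    come from a temporal Fourier fit); model: modeled temperatures."""
    filled = []
    for obs, lc, m, mc in zip(lst, lst_clim, model, model_clim):
        if obs is None:
            filled.append(lc + (m - mc))  # climatology + model anomaly
        else:
            filled.append(obs)
    return filled

lst        = [20.0, None, 24.0]   # degrees C; the middle value is cloud-masked
lst_clim   = [21.0, 22.0, 23.0]
model      = [19.5, 23.5, 22.8]
model_clim = [20.5, 21.5, 22.5]
print(fill_gaps(lst, lst_clim, model, model_clim))  # [20.0, 24.0, 24.0]
```

The filled value inherits the day-to-day anomaly from the model while keeping the pixel's own seasonal baseline, which is why the method can run per-pixel anywhere in the world.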

RevDate: 2021-03-03

Figueroa CA, Aguilera A, Chakraborty B, et al (2021)

Adaptive learning algorithms to optimize mobile applications for behavioral health: guidelines for design decisions.

Journal of the American Medical Informatics Association : JAMIA pii:6154382 [Epub ahead of print].

OBJECTIVE: Providing behavioral health interventions via smartphones allows these interventions to be adapted to the changing behavior, preferences, and needs of individuals. This can be achieved through reinforcement learning (RL), a sub-area of machine learning. However, many challenges could affect the effectiveness of these algorithms in the real world. We provide guidelines for decision-making.

MATERIALS AND METHODS: Using thematic analysis, we describe challenges, considerations, and solutions for algorithm design decisions in a collaboration between health services researchers, clinicians, and data scientists. We use the design process of an RL algorithm for a mobile health study "DIAMANTE" for increasing physical activity in underserved patients with diabetes and depression. Over the 1.5-year project, we kept track of the research process using collaborative cloud Google Documents, Whatsapp messenger, and video teleconferencing. We discussed, categorized, and coded critical challenges. We grouped challenges to create thematic topic process domains.

RESULTS: Nine challenges emerged, which we divided into 3 major themes: 1. Choosing the model for decision-making, including appropriate contextual and reward variables; 2. Data handling/collection, such as how to deal with missing or incorrect data in real-time; 3. Weighing the algorithm performance vs effectiveness/implementation in real-world settings.

CONCLUSION: The creation of effective behavioral health interventions does not depend only on final algorithm performance. Many decisions in the real world are necessary to formulate the design of problem parameters to which an algorithm is applied. Researchers must document and evaluate these considerations and decisions before and during the intervention period, to increase transparency, accountability, and reproducibility.

TRIAL REGISTRATION: clinicaltrials.gov, NCT03490253.

RevDate: 2021-03-03

Huang Q, Yue W, Yang Y, et al (2021)

P2GT: Fine-Grained Genomic Data Access Control with Privacy-Preserving Testing in Cloud Computing.

IEEE/ACM transactions on computational biology and bioinformatics, PP: [Epub ahead of print].

With the rapid development of bioinformatics and the availability of genetic sequencing technologies, genomic data has been used to facilitate personalized medicine. Cloud computing, featuring low cost, rich storage, and rapid processing, can precisely respond to the challenges brought by the emergence of massive genomic data. Considering the security of the cloud platform and the privacy of genomic data, we first introduce P2GT, which utilizes key-policy attribute-based encryption to realize genomic data access control with unbounded attributes, and employs an equality test algorithm to achieve personalized medicine testing by matching digitized single nucleotide polymorphisms (SNPs) directly on the users' ciphertext without encrypting multiple times. We then propose an enhanced scheme, P2GT+, which adopts identity-based encryption with equality test supporting flexible joint authorization to realize privacy-preserving paternity tests, genetic compatibility tests, and disease susceptibility tests over the encrypted SNPs with P2GT. We prove the security of the proposed schemes and conduct extensive experiments with the 1000 Genomes dataset. The results show that P2GT and P2GT+ are practical and scalable enough to meet the privacy-preserving and authorized genetic testing requirements in cloud computing.

RevDate: 2021-03-03

Elgendy IA, Muthanna A, Hammoudeh M, et al (2021)

Advanced Deep Learning for Resource Allocation and Security Aware Data Offloading in Industrial Mobile Edge Computing.

Big data [Epub ahead of print].

The internet of things (IoT) is permeating our daily lives through continuous environmental monitoring and data collection. The promise of low-latency communication, enhanced security, and efficient bandwidth utilization has led to the shift from mobile cloud computing to mobile edge computing. In this study, we propose an advanced deep reinforcement resource allocation and security-aware data offloading model that considers the constrained computation and radio resources of industrial IoT devices to guarantee efficient sharing of resources between multiple users. This model is formulated as an optimization problem with the goal of decreasing energy consumption and computation delay. This type of problem is non-deterministic polynomial time-hard due to the curse-of-dimensionality challenge; thus, a deep learning optimization approach is presented to find an optimal solution. In addition, a 128-bit Advanced Encryption Standard-based cryptographic approach is proposed to satisfy the data security requirements. Experimental evaluation results show that the proposed model can reduce offloading overhead in terms of energy and time by up to 64.7% in comparison with the local execution approach. It also outperforms the full offloading scenario by up to 13.2%, where it can select some computation tasks to be offloaded while optimally rejecting others. Finally, it is adaptable and scalable for a large number of mobile devices.

RevDate: 2021-03-03

Machi D, Bhattacharya P, Hoops S, et al (2021)

Scalable Epidemiological Workflows to Support COVID-19 Planning and Response.

medRxiv : the preprint server for health sciences.

The COVID-19 global outbreak represents the most significant epidemic event since the 1918 influenza pandemic. Simulations have played a crucial role in supporting COVID-19 planning and response efforts. Developing scalable workflows to provide policymakers quick responses to important questions pertaining to logistics, resource allocation, epidemic forecasts, and intervention analysis remains a challenging computational problem. In this work, we present scalable high performance computing-enabled workflows for COVID-19 pandemic planning and response. The scalability of our methodology allows us to run fine-grained simulations daily, and to generate county-level forecasts and other counter-factual analyses for each of the 50 states (and DC) and 3140 counties across the USA. Our workflows use a hybrid cloud/cluster system utilizing a combination of local and remote cluster computing facilities, with over 20,000 CPU cores running for 6-9 hours every day to meet this objective. Our state (Virginia), the state hospital network, our university, the DOD, and the CDC use our models to guide their COVID-19 planning and response efforts. We began executing these pipelines on March 25, 2020, and have delivered and briefed weekly updates to these stakeholders for over 30 weeks without interruption.

RevDate: 2021-03-01

Abbasi WA, Abbas SA, Andleeb S, et al (2021)

COVIDC: An Expert System to Diagnose COVID-19 and Predict its Severity using Chest CT Scans: Application in Radiology.

Informatics in medicine unlocked pii:S2352-9148(21)00030-7 [Epub ahead of print].

Early diagnosis of Coronavirus disease 2019 (COVID-19) is significantly important, especially in the absence or inadequate provision of a specific vaccine, to stop the surge of this lethal infection by advising quarantine. This diagnosis is challenging, as most patients with COVID-19 infection remain asymptomatic, while those showing symptoms are hard to distinguish from patients with other respiratory infections such as severe flu and pneumonia. Due to costly and time-consuming wet-lab diagnostic tests for COVID-19, there is an urgent need for an alternative, non-invasive, rapid, and inexpensive automatic screening system. A chest CT scan can effectively be used as an alternative modality to detect and diagnose the COVID-19 infection. In this study, we present an automatic COVID-19 diagnostic and severity prediction system called COVIDC (COVID-19 detection using CT scans) that uses deep feature maps from chest CT scans for this purpose. Our newly proposed system not only detects COVID-19 but also predicts its severity by using a two-phase classification approach (COVID vs non-COVID, and COVID-19 severity) with deep feature maps and different shallow supervised classification algorithms such as SVMs and random forests to handle data scarcity. We performed a stringent COVIDC performance evaluation not only through 10-fold cross-validation and an external validation dataset but also in a real setting under the supervision of an experienced radiologist. In all the evaluation settings, COVIDC outperformed all the existing state-of-the-art methods designed to detect COVID-19, with an F1 score of 0.94 on the validation dataset, and justified its use to diagnose COVID-19 effectively in the real setting by correctly classifying 9 out of 10 COVID-19 CT scans. We made COVIDC openly accessible through a cloud-based webserver and python code available at https://sites.google.com/view/wajidarshad/software and https://github.com/wajidarshad/covidc.

RevDate: 2021-03-01

Smidt HJ, O Jokonya (2021)

The challenge of privacy and security when using technology to track people in times of COVID-19 pandemic.

Procedia computer science, 181:1018-1026.

Since the start of the Coronavirus disease 2019 (COVID-19) pandemic, governments and health authorities across the world have found it very difficult to control infections. Digital technologies such as artificial intelligence (AI), big data, cloud computing, blockchain, and 5G have effectively improved the efficiency of efforts in epidemic monitoring, virus tracking, prevention, control, and treatment. Surveillance to halt COVID-19 has raised privacy concerns, as many governments are willing to overlook privacy implications to save lives. The purpose of this paper is to conduct a focused Systematic Literature Review (SLR) to explore the potential benefits and implications of using digital technologies such as AI, big data, and cloud to track COVID-19 among people in different societies. The aim is to highlight the risks to the security and privacy of personal data when using technology to track COVID-19 in societies, and to identify ways to govern these risks. The paper uses the SLR approach to examine 40 articles published during 2020, ultimately down-selecting to the 24 most relevant studies. In this SLR approach, we adopted the following steps: formulating the problem, searching the literature, gathering information from studies, evaluating the quality of studies, analyzing and integrating the outcomes of studies, and concluding by interpreting the evidence and presenting the results. Papers were classified into different categories such as technology use, impact on society, and governance. The study highlighted the challenge for governments of balancing what is good for public health against individual privacy and freedoms. The findings revealed that although the use of technology helps governments and health agencies reduce the spread of the COVID-19 virus, government surveillance to halt it has sparked privacy concerns.
We suggest some requirements for government policy to be ethical and capable of commanding the trust of the public and present some research questions for future research.

RevDate: 2021-02-26

Brivio S, Ly DRB, Vianello E, et al (2021)

Non-linear Memristive Synaptic Dynamics for Efficient Unsupervised Learning in Spiking Neural Networks.

Frontiers in neuroscience, 15:580909.

Spiking neural networks (SNNs) are a computational tool in which the information is coded into spikes, as in some parts of the brain, differently from conventional neural networks (NNs) that compute over real numbers. Therefore, SNNs can implement intelligent information extraction in real-time at the edge of data acquisition and correspond to a complementary solution to conventional NNs working for cloud computing. Both NN classes face hardware constraints due to limited computing parallelism and the separation of logic and memory. Emerging memory devices, like resistive switching memories, phase change memories, or memristive devices in general, are strong candidates to remove these hurdles for NN applications. The well-established training procedures of conventional NNs helped in defining the desiderata for memristive device dynamics implementing synaptic units. The generally agreed requirements are a linear evolution of memristive conductance upon stimulation with a train of identical pulses and a symmetric conductance change for conductance increase and decrease. Conversely, little work has been done to understand the main properties of memristive devices supporting efficient SNN operation. The reason lies in the lack of a background theory for their training. As a consequence, requirements for NNs have been taken as a reference to develop memristive devices for SNNs. In the present work, we show that, for efficient CMOS/memristive SNNs, the requirements for synaptic memristive dynamics are very different from the needs of a conventional NN. System-level simulations of an SNN trained to classify hand-written digit images through a spike-timing-dependent plasticity protocol are performed considering various linear and non-linear plausible synaptic memristive dynamics. We consider memristive dynamics bounded by artificial hard conductance values and limited by the natural dynamics evolution toward asymptotic values (soft boundaries).
We quantitatively analyze the impact of resolution and non-linearity properties of the synapses on the network training and classification performance. Finally, we demonstrate that the non-linear synapses with hard boundary values enable higher classification performance and realize the best trade-off between classification accuracy and required training time. With reference to the obtained results, we discuss how memristive devices with non-linear dynamics constitute a technologically convenient solution for the development of on-line SNN training.
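The contrast the abstract draws between hard-bounded (linear) and soft-bounded (non-linear) synaptic dynamics can be illustrated with two toy conductance-update rules. The step sizes and asymptote below are invented for illustration, not device parameters from the paper:

```python
# Toy contrast of the two synaptic update styles discussed above:
# a linear potentiation step clipped at hard conductance boundaries vs a
# non-linear (soft-boundary) step that shrinks near the asymptote.
# All parameter values are illustrative, not fitted to any device.

def update_hard(g, dg=0.05, g_min=0.0, g_max=1.0):
    """Linear potentiation step, clipped at hard boundaries."""
    return min(max(g + dg, g_min), g_max)

def update_soft(g, alpha=0.1, g_max=1.0):
    """Non-linear potentiation: the step decays as g approaches g_max,
    so repeated identical pulses produce diminishing conductance changes."""
    return g + alpha * (g_max - g)

g_hard = g_soft = 0.0
for _ in range(10):            # apply ten identical potentiation pulses
    g_hard = update_hard(g_hard)
    g_soft = update_soft(g_soft)
print(round(g_hard, 3), round(g_soft, 3))  # 0.5 0.651
```

Under the soft rule the conductance follows 1 - (1 - alpha)^n, which is exactly the saturating, history-dependent behavior the paper finds relevant for SNN training, as opposed to the constant increments assumed for conventional NN synapses.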

RevDate: 2021-02-24

Bai J, Bandla C, Guo J, et al (2021)

BioContainers Registry: Searching Bioinformatics and Proteomics Tools, Packages, and Containers.

Journal of proteome research [Epub ahead of print].

BioContainers is an open-source project that aims to create, store, and distribute bioinformatics software containers and packages. The BioContainers community has developed a set of guidelines to standardize software containers, including the metadata, versions, licenses, and software dependencies. BioContainers supports multiple packaging and container technologies such as Conda, Docker, and Singularity. BioContainers provides over 9000 bioinformatics tools, including more than 200 proteomics and mass spectrometry tools. Here we introduce the BioContainers Registry and RESTful API to make containerized bioinformatics tools more findable, accessible, interoperable, and reusable (FAIR). The BioContainers Registry provides a fast and convenient way to find and retrieve bioinformatics tool packages and containers. By doing so, it will increase the use of bioinformatics packages and containers while promoting replicability and reproducibility in research.

RevDate: 2021-02-23

Katakol S, Elbarashy B, Herranz L, et al (2021)

Distributed Learning and Inference with Compressed Images.

IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, PP: [Epub ahead of print].

Modern computer vision requires processing large amounts of data, both while training the model and/or during inference, once the model is deployed. Scenarios where images are captured and processed in physically separated locations are increasingly common (e.g. autonomous vehicles, cloud computing, smartphones). In addition, many devices suffer from limited resources to store or transmit data (e.g. storage space, channel capacity). In these scenarios, lossy image compression plays a crucial role to effectively increase the number of images collected under such constraints. However, lossy compression entails some undesired degradation of the data that may harm the performance of the downstream analysis task at hand, since important semantic information may be lost in the process. Moreover, we may only have compressed images at training time but are able to use original images at inference time (i.e. test), or vice versa, and in such a case, the downstream model suffers from covariate shift. In this paper, we analyze this phenomenon, with a special focus on vision-based perception for autonomous driving as a paradigmatic scenario. We see that loss of semantic information and covariate shift do indeed exist, resulting in a drop in performance that depends on the compression rate. In order to address the problem, we propose dataset restoration, based on image restoration with generative adversarial networks (GANs). Our method is agnostic to both the particular image compression method and the downstream task; and has the advantage of not adding additional cost to the deployed models, which is particularly important in resource-limited devices. The presented experiments focus on semantic segmentation as a challenging use case, cover a broad range of compression rates and diverse datasets, and show how our method is able to significantly alleviate the negative effects of compression on the downstream visual task.

RevDate: 2021-02-21

Seong Y, You SC, Ostropolets A, et al (2021)

Incorporation of Korean Electronic Data Interchange Vocabulary into Observational Medical Outcomes Partnership Vocabulary.

Healthcare informatics research, 27(1):29-38.

OBJECTIVES: We incorporated the Korean Electronic Data Interchange (EDI) vocabulary into Observational Medical Outcomes Partnership (OMOP) vocabulary using a semi-automated process. The goal of this study was to improve the Korean EDI as a standard medical ontology in Korea.

METHODS: We incorporated the EDI vocabulary into the OMOP vocabulary through four main steps. First, we improved the current classification of EDI domains and separated medical services into procedures and measurements. Second, each EDI concept was assigned a unique identifier and validity dates. Third, we built a vertical hierarchy between EDI concepts, fully describing child concepts through relationships and attributes and linking them to parent terms. Finally, we added an English definition for each EDI concept. We translated the Korean definitions of EDI concepts using the Google.Cloud.Translation.V3 client library, supplemented by manual translation. We evaluated the EDI using 11 auditing criteria for controlled vocabularies.

RESULTS: We incorporated 313,431 concepts from the EDI to the OMOP Standardized Vocabularies. For 10 of the 11 auditing criteria, EDI showed a better quality index within the OMOP vocabulary than in the original EDI vocabulary.

CONCLUSIONS: The incorporation of the EDI vocabulary into the OMOP Standardized Vocabularies allows better standardization to facilitate network research. Our research provides a promising model for mapping Korean medical information into a global standard terminology system, although a comprehensive mapping of official vocabulary remains to be done in the future.

RevDate: 2021-02-19

Rao PMM, Singh SK, Khamparia A, et al (2021)

Multi-class Breast Cancer Classification using Ensemble of Pretrained models and Transfer Learning.

Current medical imaging pii:CMIR-EPUB-114326 [Epub ahead of print].

AIMS: Early detection of breast cancer has reduced many deaths. Earlier, CAD systems used to be the second opinion for radiologists and clinicians. Machine learning and deep learning have brought tremendous changes to medical diagnosis and imaging.

BACKGROUND: Breast cancer is the most commonly occurring cancer in women and the second most common cancer overall. According to 2018 statistics, there were over 2 million cases all over the world. Belgium and Luxembourg have the highest rates of cancer.

OBJECTIVE: We propose a method for breast cancer detection using ensemble learning. Both 2-class and 8-class classification are performed.

METHOD: To deal with imbalanced classification, the authors propose an ensemble of pretrained models.

RESULT: Training accuracy of 98.5% and test accuracy of 89% are achieved on 8-class classification, while 99.1% training and 98% test accuracy are achieved on 2-class classification.

CONCLUSION: It is found that misclassifications are high in class DC compared to the other classes, due to the imbalance in the dataset. In future work, one can increase the size of the datasets or use different methods. To implement this research work, the authors used two Nvidia Tesla V100 GPUs on the Google Cloud Platform.

RevDate: 2021-02-18

R Niakan Kalhori S, Bahaadinibeigy K, Deldar K, et al (2020)

Review Study of Digital Health-related Solutions to Control COVID-19 Pandemic: Analysis for the 10 Highest Prevalent Countries.

Journal of medical Internet research [Epub ahead of print].

BACKGROUND: The novel coronavirus disease (COVID-19), which began as cases of pneumonia, became a global pandemic affecting most countries around the world. Digital health, that is, information technologies that can be applied in three aspects including digital patients, digital devices, and digital clinics, could help against this pandemic.

OBJECTIVE: Recent reviews have examined the role of digital health in controlling COVID-19 to identify its potential for fighting the disease. This study, however, aims to review and analyze the digital technologies applied to control the COVID-19 pandemic in the ten countries with the highest prevalence of the disease.

METHODS: For this review, the Google Scholar, PubMed, Web of Science, and Scopus databases were searched in August 2020 to retrieve publications dated from December 2019 to 15 March 2020. Furthermore, the Google search engine was also used to identify additional applications of digital health for COVID-19 pandemic control.

RESULTS: The 32 papers included in this review reported 37 digital health applications for COVID-19 control. Most of the projects were telemedicine visits (N=11, 30%). Digital learning packages for informing about the disease (N=7, 19%), GIS and QR-code applications for real-time case tracking (N=7, 19%), and cloud/mobile-based systems for self-care and patient tracking (N=7, 19%) were the next most common applications. Projects were deployed through collaboration among European countries, the USA, Australia, and China.

CONCLUSIONS: Considering the potential of the information technologies available across the world in the 21st century, particularly in developed countries, more digital health products with higher levels of intelligence remain to be applied to pandemics and health-related crisis management.

RevDate: 2021-02-16

Jheng YC, Wang YP, Lin HE, et al (2021)

A novel machine learning-based algorithm to identify and classify lesions and anatomical landmarks in colonoscopy images.

Surgical endoscopy [Epub ahead of print].

OBJECTIVES: Computer-aided diagnosis (CAD)-based artificial intelligence (AI) has been shown to be highly accurate for detecting and characterizing colon polyps. However, the application of AI to identify normal colon landmarks and differentiate multiple colon diseases has not yet been established. We aimed to develop a convolutional neural network (CNN)-based algorithm (GUTAID) to recognize different colon lesions and anatomical landmarks.

METHODS: Colonoscopic images were obtained to train and validate the AI classifiers. An independent dataset was collected for verification. The architecture of GUTAID contains two major sub-models: the Normal, Polyp, Diverticulum, Cecum and CAncer (NPDCCA) and Narrow-Band Imaging for Adenomatous/Hyperplastic polyps (NBI-AH) models. The development of GUTAID was based on the 16-layer Visual Geometry Group (VGG16) architecture and implemented on Google Cloud Platform.

RESULTS: In total, 7838 colonoscopy images were used for developing and validating the AI model. An additional 1273 images were independently applied to verify the GUTAID. The accuracy for GUTAID in detecting various colon lesions/landmarks is 93.3% for polyps, 93.9% for diverticula, 91.7% for cecum, 97.5% for cancer, and 83.5% for adenomatous/hyperplastic polyps.

CONCLUSIONS: A CNN-based algorithm (GUTAID) to identify colonic abnormalities and landmarks was successfully established with high accuracy. This GUTAID system can further characterize polyps for optical diagnosis. We demonstrated that AI classification methodology is feasible to identify multiple and different colon diseases.
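
The two-sub-model architecture described above can be sketched as a simple routing function; the mock classifiers below are illustrative stand-ins for the paper's VGG16-based NPDCCA and NBI-AH models, not reproductions of them:

```python
def classify_frame(image, npdcca_model, nbi_model):
    """First-stage model labels the frame; polyp frames go to the NBI sub-model."""
    label = npdcca_model(image)        # normal / polyp / diverticulum / cecum / cancer
    if label == "polyp":
        subtype = nbi_model(image)     # adenomatous vs. hyperplastic
        return f"polyp:{subtype}"
    return label

# Hypothetical mock sub-models for demonstration only
npdcca = lambda img: "polyp" if img["has_protrusion"] else "normal"
nbi = lambda img: "adenomatous" if img["vascular_pattern"] else "hyperplastic"

result = classify_frame({"has_protrusion": True, "vascular_pattern": True}, npdcca, nbi)
```

Splitting detection and characterization into cascaded models, as sketched here, lets each sub-model train on the imaging modality (white light vs. narrow-band) best suited to its task.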

RevDate: 2021-02-15

Hacking S, V Bijol (2021)

Deep learning for the classification of medical kidney disease: a pilot study for electron microscopy.

Ultrastructural pathology [Epub ahead of print].

Artificial intelligence (AI) is a new frontier and often enigmatic for medical professionals. Cloud computing could open up the field of computer vision to a wider medical audience, and deep learning on the cloud allows one to design, develop, train and deploy applications with ease. In the field of histopathology, the implementation of various applications in AI has been successful for whole slide images rich in biological diversity. However, the analysis of other tissue imaging modalities, including electron microscopy, is yet to be explored. The present study aims to evaluate deep learning for the classification of medical kidney disease on electron microscopy images: amyloidosis, diabetic glomerulosclerosis, membranous nephropathy, membranoproliferative glomerulonephritis (MPGN), and thin basement membrane disease (TBMD). We found good overall classification with the MedKidneyEM-v1 Classifier: when looking at normal and diseased kidneys, the average area under the curve for precision and recall was 0.841, and on the disease-only cohort it was 0.909. Digital pathology will shape a new era for medical kidney disease, and the present study demonstrates the feasibility of deep learning for electron microscopy. Future approaches could be used by renal pathologists to improve diagnostic concordance, determine therapeutic strategies, and optimize patient outcomes in a true clinical environment.
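
The precision-recall area metric reported above can be sketched as average precision computed over ranked classifier scores (a standard step-wise computation; the scores and labels below are synthetic, not the study's data):

```python
import numpy as np

def average_precision(y_true, scores):
    """Area under the precision-recall curve via step-wise (AP) integration."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # rank by descending score
    y = np.asarray(y_true)[order]
    tp = np.cumsum(y)                                     # true positives at each rank
    precision = tp / np.arange(1, len(y) + 1)
    # AP = mean of the precision values at the rank of each true positive
    return float(precision[y == 1].sum() / y.sum())

ap = average_precision([1, 1, 0, 1], [0.9, 0.8, 0.7, 0.6])
```

For the four synthetic predictions above, the three positives are retrieved at precisions 1, 1, and 0.75, giving AP = 11/12 ≈ 0.917.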

RevDate: 2021-02-12

Tradacete M, Santos C, Jiménez JA, et al (2021)

Turning Base Transceiver Stations into Scalable and Controllable DC Microgrids Based on a Smart Sensing Strategy.

Sensors (Basel, Switzerland), 21(4): pii:s21041202.

This paper describes a practical approach to the transformation of Base Transceiver Stations (BTSs) into scalable and controllable DC Microgrids in which an energy management system (EMS) is developed to maximize the economic benefit. The EMS strategy focuses on efficiently managing a Battery Energy Storage System (BESS) along with photovoltaic (PV) energy generation, and non-critical load-shedding. The EMS collects data such as real-time energy consumption and generation, and environmental parameters such as temperature, wind speed and irradiance, using a smart sensing strategy whereby measurements can be recorded and computing can be performed both locally and in the cloud. Within the Spanish electricity market and applying a two-tariff pricing, annual savings per installed battery power of 16.8 euros/kW are achieved. The system has the advantage that it can be applied to both new and existing installations, providing a two-way connection to the electricity grid, PV generation, smart measurement systems and the necessary management software. All these functions are integrated in a flexible and low cost HW/SW architecture. Finally, the whole system is validated through real tests carried out on a pilot plant and under different weather conditions.
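
A back-of-envelope sketch of two-tariff battery arbitrage, in the spirit of the 16.8 euros/kW annual figure above; the prices, shifted hours, and efficiency below are assumed for illustration and are not the paper's actual Spanish tariff values:

```python
def annual_savings_per_kw(peak_price, offpeak_price,
                          shifted_hours_per_day, round_trip_eff, days=365):
    """Euros saved per kW of battery power per year under a two-tariff scheme."""
    # Each shifted kWh avoids buying at the peak price, but charging it costs
    # offpeak_price / round_trip_eff once storage losses are included.
    saving_per_kwh = peak_price - offpeak_price / round_trip_eff
    return saving_per_kwh * shifted_hours_per_day * days

# Assumed values: 0.15 EUR/kWh peak, 0.08 EUR/kWh off-peak, 1 h/day, 90% efficiency
s = annual_savings_per_kw(0.15, 0.08, 1.0, 0.9)
```

The EMS's job is essentially to maximize this spread subject to battery limits, PV generation, and load forecasts.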

RevDate: 2021-02-11

St-Onge C, Benmakrelouf S, Kara N, et al (2021)

Generic SDE and GA-based workload modeling for cloud systems.

Journal of cloud computing (Heidelberg, Germany), 10(1):6.

Workload models are typically built based on user and application behavior in a system, limiting them to specific domains. Undoubtedly, such a practice creates a dilemma in a cloud computing (cloud) environment, where a wide range of heterogeneous applications run and many users have access to the resources. The workload model in such an infrastructure must adapt to the evolution of system configuration parameters, such as job load fluctuation. The aim of this work is to propose an approach that generates generic workload models that (1) are independent of user behavior and the applications running in the system, and can fit any workload domain and type; (2) model sharp workload variations that are most likely to appear in cloud environments; and (3) achieve a high degree of fidelity with respect to observed data, within a short execution time. We propose two approaches for workload estimation, the first being a Hull-White and Genetic Algorithm (GA) combination, while the second is a Support Vector Regression (SVR) and Kalman-filter combination. Thorough experiments are conducted on real CPU and throughput datasets from virtualized IP Multimedia Subsystem (IMS), Web, and cloud environments to study the efficiency of both proposals. The results show a higher accuracy for the Hull-White-GA approach, with marginal overhead over the SVR-Kalman-filter combination.
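
As a minimal illustration of the filtering half of the SVR-Kalman combination, here is a scalar Kalman filter under an assumed random-walk workload model; the process/measurement noise parameters q and r are illustrative defaults, not values from the paper:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter assuming a random-walk state model."""
    x, p = measurements[0], 1.0       # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += q                        # predict: variance grows by process noise
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update with the new workload sample z
        p *= (1 - k)                  # posterior variance
        estimates.append(x)
    return estimates

# Synthetic noisy CPU-load samples (%)
smoothed = kalman_1d([52.0, 55.0, 47.0, 70.0, 68.0, 66.0])
```

The filtered sequence tracks the sharp jump at the fourth sample while damping measurement noise, which is the behavior a workload estimator needs for bursty cloud traces.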

RevDate: 2021-02-11

Gangiredla J, Rand H, Benisatto D, et al (2021)

GalaxyTrakr: a distributed analysis tool for public health whole genome sequence data accessible to non-bioinformaticians.

BMC genomics, 22(1):114.

BACKGROUND: Processing and analyzing whole genome sequencing (WGS) is computationally intense: a single Illumina MiSeq WGS run produces ~ 1 million 250-base-pair reads for each of 24 samples. This poses significant obstacles for smaller laboratories, or laboratories not affiliated with larger projects, which may not have dedicated bioinformatics staff or computing power to effectively use genomic data to protect public health. Building on the success of the cloud-based Galaxy bioinformatics platform (http://galaxyproject.org), already known for its user-friendliness and powerful WGS analytical tools, the Center for Food Safety and Applied Nutrition (CFSAN) at the U.S. Food and Drug Administration (FDA) created a customized 'instance' of the Galaxy environment, called GalaxyTrakr (https://www.galaxytrakr.org), for use by laboratory scientists performing food-safety regulatory research. The goal was to enable laboratories outside of the FDA internal network to (1) perform quality assessments of sequence data, (2) identify links between clinical isolates and positive food/environmental samples, including those at the National Center for Biotechnology Information sequence read archive (https://www.ncbi.nlm.nih.gov/sra/), and (3) explore new methodologies such as metagenomics. GalaxyTrakr hosts a variety of free and adaptable tools and provides the data storage and computing power to run the tools. These tools support coordinated analytic methods and consistent interpretation of results across laboratories. Users can create and share tools for their specific needs and use sequence data generated locally and elsewhere.

RESULTS: In its first full year (2018), GalaxyTrakr processed over 85,000 jobs and went from 25 to 250 users, representing 53 different public and state health laboratories, academic institutions, international health laboratories, and federal organizations. By mid-2020, it has grown to 600 registered users and processed over 450,000 analytical jobs. To illustrate how laboratories are making use of this resource, we describe how six institutions use GalaxyTrakr to quickly analyze and review their data. Instructions for participating in GalaxyTrakr are provided.

CONCLUSIONS: GalaxyTrakr advances food safety by providing reliable and harmonized WGS analyses for public health laboratories and promoting collaboration across laboratories with differing resources. Anticipated enhancements to this resource will include workflows for additional foodborne pathogens, viruses, and parasites, as well as new tools and services.

RevDate: 2021-02-09

Kumar R, Al-Turjman F, Anand L, et al (2021)

Genomic sequence analysis of lung infections using artificial intelligence technique.

Interdisciplinary sciences, computational life sciences [Epub ahead of print].

Owing to the modernization of Artificial Intelligence (AI) procedures in healthcare services, various developments have emerged, including Support Vector Machines (SVM) and deep learning. For example, Convolutional Neural Networks (CNN) have played a significant role in various classification investigations of lung cancer and other diseases. In this paper, a Parallel SVM (P-SVM) and IoT are utilized to examine the optimal classification of lung diseases caused by genomic sequences. The proposed method develops a new methodology to find the optimal classification of lung diseases and detect them in their early stages, in order to control their growth and prevent lung disease. Further, in this investigation, the P-SVM algorithm has been developed for classifying high-dimensional datasets of distinct lung diseases. The data used in the assessment were fetched in real time through the cloud and IoT. The obtained outcome demonstrates that the developed P-SVM algorithm achieves a higher accuracy of 83% and a precision of 88% in classification with optimal datasets when compared with other learning methods.
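
As a toy sketch of the SVM component only (a plain linear SVM trained by Pegasos-style sub-gradient descent; the paper's parallelization and cloud/IoT data pipeline are not reproduced here, and the data are synthetic):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100):
    """Hinge-loss sub-gradient descent (Pegasos-style); labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in range(len(X)):
            t += 1
            eta = 1.0 / (lam * t)        # decaying step size
            margin = y[i] * (X[i] @ w)
            w *= (1 - eta * lam)         # regularization shrinkage
            if margin < 1:               # margin violation: hinge sub-gradient step
                w += eta * y[i] * X[i]
    return w

# Two tiny, linearly separable synthetic "feature" classes
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(X, y)
pred = np.sign(X @ w)
```

A parallel variant would shard the inner loop across workers and average the weight vectors, which is one standard way to scale this training scheme.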

RevDate: 2021-02-09

Jensen JN, Hannemose M, Bærentzen JA, et al (2021)

Surface Reconstruction from Structured Light Images Using Differentiable Rendering.

Sensors (Basel, Switzerland), 21(4): pii:s21041068.

When 3D scanning objects, the objective is usually to obtain a continuous surface. However, most surface scanning methods, such as structured light scanning, yield a point cloud. Obtaining a continuous surface from a point cloud requires a subsequent surface reconstruction step, which is directly affected by any error from the computation of the point cloud. In this work, we propose a one-step approach in which we compute the surface directly from structured light images. Our method minimizes the least-squares error between photographs and renderings of a triangle mesh, where the vertex positions of the mesh are the parameters of the minimization problem. To ensure fast iterations during optimization, we use differentiable rendering, which computes images and gradients in a single pass. We present simulation experiments demonstrating that our method for computing a triangle mesh has several advantages over approaches that rely on an intermediate point cloud. Our method can produce accurate reconstructions when initializing the optimization from a sphere. We also show that our method is good at reconstructing sharp edges and that it is robust with respect to image noise. In addition, our method can improve the output from other reconstruction algorithms if we use these for initialization.
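
A toy analogue of this least-squares optimization, assuming a *linear* "renderer" A so the gradient is available in closed form (the paper instead differentiates through a real triangle-mesh renderer, and the vertex parameters here are a hypothetical 3-vector):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))          # toy renderer: 3 vertex params -> 8 pixels
v_true = np.array([1.0, -2.0, 0.5])
img = A @ v_true                     # the observed structured-light "photograph"

v = np.zeros(3)                      # initialization (the paper starts from a sphere)
lr = 0.5 / np.linalg.eigvalsh(A.T @ A).max()   # step size safe for this quadratic
for _ in range(2000):
    grad = 2 * A.T @ (A @ v - img)   # analytic gradient of ||A v - img||^2
    v -= lr * grad                   # gradient-descent update of vertex parameters
```

The point of differentiable rendering is exactly this loop: images and gradients come from a single pass, so the vertex parameters can be fit directly to the photographs without an intermediate point cloud.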

RevDate: 2021-02-09

Lahoura V, Singh H, Aggarwal A, et al (2021)

Cloud Computing-Based Framework for Breast Cancer Diagnosis Using Extreme Learning Machine.

Diagnostics (Basel, Switzerland), 11(2): pii:diagnostics11020241.

Globally, breast cancer is one of the most significant causes of death among women. Early detection accompanied by prompt treatment can reduce the risk of death due to breast cancer. Currently, machine learning in cloud computing plays a pivotal role in disease diagnosis, predominantly for people living in remote areas where medical facilities are scarce. Diagnosis systems based on machine learning act as secondary readers and assist radiologists in the proper diagnosis of diseases, whereas cloud-based systems can support telehealth services and remote diagnostics. Techniques based on artificial neural networks (ANN) have attracted many researchers to explore their capability for disease diagnosis. The extreme learning machine (ELM) is one of the variants of ANN that has huge potential for solving various classification problems. The framework proposed in this paper amalgamates three research domains: firstly, ELM is applied for the diagnosis of breast cancer; secondly, to eliminate insignificant features, the gain ratio feature selection method is employed; lastly, a cloud computing-based system for remote diagnosis of breast cancer using ELM is proposed. The performance of the cloud-based ELM is compared with some state-of-the-art technologies for disease diagnosis. The results achieved on the Wisconsin Diagnostic Breast Cancer (WBCD) dataset indicate that the cloud-based ELM technique outperforms the other approaches, with the best ELM performance results obtained and compared in both the standalone and cloud environments. The key experimental findings indicate that the accuracy achieved is 0.9868, the recall is 0.9130, the precision is 0.9054, and the F1-score is 0.8129.
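
A minimal ELM sketch in NumPy (random hidden-layer weights, output weights solved in closed form via the pseudoinverse), illustrated on a toy XOR problem rather than the WBCD dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=10):
    """ELM: random, untrained hidden layer; least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer feature matrix
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy XOR data (not linearly separable) stands in for the cancer features
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
W, b, beta = elm_train(X, y)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
```

Because only the output layer is solved (no backpropagation), training reduces to one pseudoinverse, which is what makes ELM attractive for lightweight cloud-hosted diagnosis services.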

RevDate: 2021-02-08

Ahmad S, Mehfuz S, Beg J, et al (2021)

Fuzzy Cloud Based COVID-19 Diagnosis Assistant for identifying affected cases globally using MCDM.

Materials today. Proceedings pii:S2214-7853(21)00329-1 [Epub ahead of print].

COVID-19, Coronavirus Disease 2019, emerged as a hazardous disease that has led to many casualties across the world. Early detection of COVID-19 in patients and proper treatment, along with awareness, can help to contain it. The proposed Fuzzy Cloud-Based (FCB) COVID-19 Diagnosis Assistant (DA) aims to identify patients as confirmed, suspected, or suspicious cases of COVID-19, and categorizes patients into four categories: mild, moderate, severe, or critical. As patients register themselves online on the FCB COVID-19 DA in real time, it builds a database of their cases. This database helps to improve diagnostic accuracy, as it contains the latest updates from real-world case data. A team of doctors, experts, and consultants is integrated with the FCB COVID-19 DA for better consultation and prevention. The ultimate aim of the proposed FCB COVID-19 DA is to take control of the COVID-19 pandemic and decelerate its rate of transmission in society.

RevDate: 2021-02-06

Alsharif M, DB Rawat (2021)

Study of Machine Learning for Cloud Assisted IoT Security as a Service.

Sensors (Basel, Switzerland), 21(4): pii:s21041034.

Machine learning (ML) has been emerging as a viable solution for intrusion detection systems (IDS) to secure IoT devices against different types of attacks. ML-based IDS (ML-IDS) normally detect network traffic anomalies caused by known attacks as well as newly introduced attacks. Recent research focuses on the functionality metrics of ML techniques, depicting their prediction effectiveness, but overlooks their operational requirements. ML techniques are resource-demanding and require careful adaptation to fit the limited computing resources of a large sector of their operational platforms, namely embedded systems. In this paper, we propose a cloud-based service architecture for managing the ML models that best fit different IoT device operational configurations for security. An IoT device may benefit from such a service by offloading heavyweight activities, such as feature selection, model building, training, and validation, to the cloud, thus reducing the IDS maintenance workload at the IoT device, and by getting the security model back from the cloud as a service.

RevDate: 2021-02-06

Meyer H, Wei P, X Jiang (2021)

Intelligent Video Highlights Generation with Front-Camera Emotion Sensing.

Sensors (Basel, Switzerland), 21(4): pii:s21041035.

In this paper, we present HOMER, a cloud-based system for video highlight generation which enables the automated, relevant, and flexible segmentation of videos. Our system outperforms state-of-the-art solutions by fusing internal video content-based features with the user's emotion data. While current research mainly focuses on creating video summaries without the use of affective data, our solution achieves the subjective task of detecting highlights by leveraging human emotions. In two separate experiments, including videos filmed with a dual camera setup, and home videos randomly picked from Microsoft's Video Titles in the Wild (VTW) dataset, HOMER demonstrates an improvement of up to 38% in F1-score from baseline, while not requiring any external hardware. We demonstrated both the portability and scalability of HOMER through the implementation of two smartphone applications.

RevDate: 2021-02-05

Alshehri M, Bharadwaj A, Kumar M, et al (2021)

Cloud and IoT based Smart Architecture for Desalination Water Treatment.

Environmental research pii:S0013-9351(21)00106-7 [Epub ahead of print].

Increasing water demand and the deteriorating environment have continuously stressed the need for new technology and methods to attain optimized use of resources and desalination management, converting seawater into pure drinking water. In this age, the use of the Internet of Things allows us to optimize a series of processes that were previously complicated to perform and required enormous resources. One of these is optimizing the management of water treatment. This research presents an implementable water treatment model and suggests a smart environment that can control water treatment plants. The proposed system gathers and analyses data to provide the most efficient approach for water desalination operations. The desalination framework integrates smart enabling technologies, such as a cloud portal, network communication, the Internet of Things, and sensors powered by solar energy, with ancient water purification methods as part of the seawater desalination project. The proposed framework incorporates the new-age technologies that are essential for efficient and effective operation of desalination systems. The implemented dual-membrane desalination framework uses solar energy for purifying saline water using ancient methods to produce clean water for drinking and irrigation. In the prototype implementation, the desalination produced 0.47 m3/l of freshwater from a saline concentration of 10 g/l, consuming 8.31 kWh/m3 of energy for production, which makes the desalination process cost-effective.

RevDate: 2021-02-05

Vahidy F, Jones SL, Tano ME, et al (2021)

Rapid Response to Drive COVID-19 Research in a Learning Healthcare System: The Houston Methodist COVID-19 Surveillance and Outcomes Registry (CURATOR).

JMIR medical informatics [Epub ahead of print].

BACKGROUND: The COVID-19 pandemic has exacerbated the challenge of meaningful healthcare digitization. The need for rapid yet validated decision making requires robust data infrastructure. Organizations with a Learning Healthcare (LHC) systems focus tend to adapt better to rapidly evolving data needs. The literature lacks examples of successful implementation of data digitization principles in an LHC context across healthcare systems during the COVID-19 pandemic.

OBJECTIVE: We share our experience and provide a framework for assembling and organizing multi-disciplinary resources, structuring and regulating research needs, and developing a single source of truth (SSoT) for COVID-19 research by applying fundamental principles of healthcare digitization, in the context of LHC across a complex healthcare organization.

METHODS: Houston Methodist (HM) comprises eight tertiary care hospitals, and an expansive primary care network across Greater Houston - one of the most populous and diverse U.S. regions. Early in the pandemic, institutional leadership envisioned the need to streamline COVID-19 research and establish the retrospective research task force (RRTF). We provide an account of structure, functioning and productivity of RRTF. We further elucidate the technical and structural details of a comprehensive data repository - the HM COVID-19 Surveillance and Outcomes Registry (CURATOR). We particularly highlight how CURATOR conforms to standard healthcare digitization principles in the LHC context.

RESULTS: The HM COVID-19 RRTF comprises expertise in epidemiology, health systems, clinical domains, data sciences, information technology, and research regulation. RRTF initially convened in March 2020 to prioritize and streamline COVID-19 observational research, and to date has reviewed over 60 protocols and made recommendations to the institutional review board (IRB). The RRTF also established the charter for CURATOR which in itself was IRB approved in April 2020. CURATOR is a relational Structured Query Language database that is directly populated with data from electronic health records, via largely automated extract, transform and load procedures. The CURATOR design enables longitudinal tracking of COVID-19 patients and controls before and after COVID-19 testing. CURATOR has been set up following the single source of truth (SSoT) principle and is harmonized across other COVID-19 data sources. CURATOR eliminates data silos by leveraging unique and disparate big data sources for COVID-19 research and provides a platform to capitalize on institutional investment in cloud computing. Currently hosting deeply phenotyped socio-demographic, clinical and outcomes data on approximately 200,000 COVID-19 tested individuals, CURATOR supports more than 30 IRB approved protocols across several clinical domains and has generated a track record of publications from its core and associated data sources.
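
The single-source-of-truth ETL idea described above can be sketched with the standard library's sqlite3; the schema, fields, and rows below are hypothetical illustrations, not CURATOR's actual design:

```python
import sqlite3

# In-memory stand-in for the relational SQL registry
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE registry (
    patient_id TEXT PRIMARY KEY,
    test_date  TEXT,
    result     TEXT)""")

# "Extract" step: rows pulled from multiple EHR feeds, including a duplicate
ehr_extract = [
    ("P001", "2020-04-01", "positive"),
    ("P002", "2020-04-02", "negative"),
    ("P001", "2020-04-01", "positive"),   # same patient from a second source
]

# "Transform/load" step: upsert on the primary key so each patient appears
# exactly once, which is the single-source-of-truth property
conn.executemany("INSERT OR REPLACE INTO registry VALUES (?, ?, ?)", ehr_extract)
n = conn.execute("SELECT COUNT(*) FROM registry").fetchone()[0]
```

Deduplicating at load time, keyed on a stable identifier, is what lets downstream protocols query one harmonized table instead of reconciling silos themselves.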

CONCLUSIONS: A data-driven decision-making mindset is paramount to the success of healthcare organizations. Investment in cross-disciplinary expertise, healthcare technology and leadership commitment are key ingredients to foster an LHC system. Such systems can mitigate the effects of ongoing and future healthcare catastrophes by providing timely and validated decision support.

RevDate: 2021-02-04

Filippucci M, Miccolis S, Castagnozzi A, et al (2021)

Seismicity of the Gargano promontory (Southern Italy) after 7 years of local seismic network operation: Data release of waveforms from 2013 to 2018.

Data in brief, 35:106783 pii:S2352-3409(21)00067-6.

The University of Bari (Italy), in cooperation with the National Institute of Geophysics and Volcanology (INGV) (Italy), has installed the OTRIONS micro-earthquake network to better understand the active tectonics of the Gargano promontory (Southern Italy). The OTRIONS network has operated since 2013 and consists of 12 short-period, 3-component seismic stations located in the Apulian territory (Southern Italy). This data article releases the waveform database collected from 2013 to 2018 and describes the characteristics of the local network in its current configuration. At the end of 2018, we implemented a cloud infrastructure to make the acquisition and storage system of the network more robust, through a collaboration with the RECAS-Bari computing centre of the University of Bari (Italy) and the National Institute of Nuclear Physics (Italy). Thanks to this implementation, waveforms recorded after the beginning of 2019 and the station metadata are accessible through the European Integrated Data Archive (EIDA, https://www.orfeus-eu.org/data/eida/nodes/INGV/).

RevDate: 2021-02-04

Li Z, E Peng (2021)

Software-Defined Optimal Computation Task Scheduling in Vehicular Edge Networking.

Sensors (Basel, Switzerland), 21(3): pii:s21030955.

With the development of smart vehicles and various vehicular applications, the Vehicular Edge Computing (VEC) paradigm has attracted attention from academia and industry. Compared with the cloud computing platform, VEC has several new features, such as higher network bandwidth and lower transmission delay. Recently, offloading of vehicular computation-intensive tasks has become a new research field for vehicular edge computing networks. However, the dynamic network topology and bursty computation task offloading cause computation load imbalance in VEC networks. To solve this issue, this paper proposes an optimal control-based computing task scheduling algorithm. We then introduce a software-defined networking/OpenFlow framework to build a software-defined vehicular edge networking structure. The proposed algorithm can obtain globally optimal results and achieve load balancing by virtue of the global load status information. Besides, the proposed algorithm has strong adaptiveness in dynamic network environments through automatic parameter tuning. Experimental results show that the proposed algorithm can effectively improve the utilization of computation resources and meet the computation and transmission delay requirements of various vehicular tasks.
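
As a minimal illustration of load-balanced task scheduling, here is a greedy least-loaded assignment; this is a deliberately simpler stand-in for the paper's optimal-control algorithm, and the task costs are synthetic:

```python
import heapq

def schedule(task_costs, n_servers):
    """Greedily assign each task to the currently least-loaded edge server."""
    heap = [(0.0, i) for i in range(n_servers)]   # (current load, server id)
    loads = [0.0] * n_servers
    assignment = []
    for cost in task_costs:
        load, i = heapq.heappop(heap)             # least-loaded server first
        assignment.append(i)
        loads[i] = load + cost
        heapq.heappush(heap, (loads[i], i))
    return assignment, loads

assignment, loads = schedule([4.0, 3.0, 2.0, 1.0], 2)
```

An optimal-control formulation improves on this greedy rule by using global load status over time rather than only the instantaneous snapshot at each arrival.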

RevDate: 2021-02-03

Uslu BÇ, Okay E, E Dursun (2020)

Analysis of factors affecting IoT-based smart hospital design.

Journal of cloud computing (Heidelberg, Germany), 9(1):67.

Currently, rapidly developing digital technological innovations affect and change the integrated information management processes of all sectors. The high efficiency of these innovations has inevitably pushed the health sector into a digital transformation process to optimize the technologies and methodologies used in healthcare management systems. In this transformation, Internet of Things (IoT) technology plays an important role, as it enables many devices to connect and work together. IoT allows systems to work together using sensors, connection methods, internet protocols, databases, cloud computing, and analytics as infrastructure. In this respect, it is necessary to establish the required technical infrastructure and a suitable environment for the development of smart hospitals. This study points out the optimization factors, challenges, available technologies, and opportunities, as well as the system architecture, that arise when employing IoT technology in smart hospital environments. To do so, the required technical infrastructure is divided into five layers, and the system infrastructure, constraints, and methods needed in each layer are specified, including the smart hospital's dimensions and the extent of intelligent computing and real-time big data analytics. As a result of the study, the deficiencies that may arise in each layer of the smart hospital design model, and the factors that should be taken into account to eliminate them, are explained. The study is expected to provide a road map to managers, system developers, and researchers interested in optimizing smart hospital system design.

RevDate: 2021-02-03

Nguyen V, Khanh TT, Nguyen TDT, et al (2020)

Flexible computation offloading in a fuzzy-based mobile edge orchestrator for IoT applications.

Journal of cloud computing (Heidelberg, Germany), 9(1):66.

In the Internet of Things (IoT) era, the capacity-limited Internet and uncontrollable service delays for various new applications, such as video streaming analysis and augmented reality, are challenges. Cloud computing systems, a solution that offloads the energy-consuming computation of IoT applications to a cloud server, cannot meet the delay-sensitive and context-aware service requirements. To address this issue, an edge computing system provides timely and context-aware services by bringing the computation and storage closer to the user. Efficiently processing the dynamic flow of requests is a significant challenge for edge and cloud computing systems. To improve the performance of IoT systems, the mobile edge orchestrator (MEO), an application placement controller, was designed by integrating end mobile devices with edge and cloud computing systems. In this paper, we propose a flexible computation offloading method in a fuzzy-based MEO for IoT applications in order to improve the efficiency of computational resource management. Considering the network, computation resources, and task requirements, the fuzzy-based MEO decides whether to offload a mobile user's task to the local edge, a neighboring edge, or cloud servers. Additionally, increasing packet sizes affect the failed-task ratio as the number of mobile devices increases. To reduce tasks that fail because of transmission collisions and to improve service times for time-critical tasks, we define a new crisp input value and a new output decision for the fuzzy-based MEO. Using the EdgeCloudSim simulator, we evaluate our proposal against four benchmark algorithms in augmented reality, healthcare, compute-intensive, and infotainment applications. Simulation results show that our proposal provides better results in terms of WLAN delay, service times, the number of failed tasks, and VM utilization.
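
The fuzzy offloading decision can be sketched with triangular membership functions over two crisp inputs; the breakpoints, inputs, and rule set below are illustrative assumptions, not the paper's actual fuzzy system:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_decision(edge_util, wan_delay_ms):
    """Pick an offload target by max rule activation (illustrative rules)."""
    edge_free = tri(edge_util, -0.01, 0.0, 0.7)     # lightly loaded local edge
    edge_busy = tri(edge_util, 0.3, 1.0, 1.01)      # heavily loaded local edge
    wan_fast = tri(wan_delay_ms, -1.0, 0.0, 120.0)  # cloud reachable quickly
    wan_slow = tri(wan_delay_ms, 60.0, 200.0, 201.0)
    scores = {
        "local_edge": edge_free,
        "neighboring_edge": min(edge_busy, wan_slow),  # edge busy, cloud far
        "cloud": min(edge_busy, wan_fast),             # edge busy, cloud near
    }
    return max(scores, key=scores.get)
```

A real MEO would add more inputs (task length, VM utilization, the paper's new crisp value for packet size) and defuzzify over many rules, but the decision structure is the same.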

RevDate: 2021-02-03

Tan H, Wang Y, Wu M, et al (2021)

Distributed Group Coordination of Multiagent Systems in Cloud Computing Systems Using a Model-Free Adaptive Predictive Control Strategy.

IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].

This article studies the group coordinated control problem for distributed nonlinear multiagent systems (MASs) with unknown dynamics. Cloud computing systems are employed to divide agents into groups and establish networked distributed multigroup-agent systems (ND-MGASs). To achieve the coordination of all agents and actively compensate for communication network delays, a novel networked model-free adaptive predictive control (NMFAPC) strategy combining networked predictive control theory with model-free adaptive control method is proposed. In the NMFAPC strategy, each nonlinear agent is described as a time-varying data model, which only relies on the system measurement data for adaptive learning. To analyze the system performance, a simultaneous analysis method for stability and consensus of ND-MGASs is presented. Finally, the effectiveness and practicability of the proposed NMFAPC strategy are verified by numerical simulations and experimental examples. The achievement also provides a solution for the coordination of large-scale nonlinear MASs.

RevDate: 2021-02-02

Su K, Zhang X, Liu Q, et al (2020)

Strategies of similarity propagation in web service recommender systems.

Mathematical biosciences and engineering : MBE, 18(1):530-550.

Recently, web service recommender systems have attracted much attention owing to the popularity of service-oriented computing and cloud computing. Memory-based collaborative filtering approaches, which rely mainly on similarity calculations, are widely studied as a means of recommendation. In these works, the similarity between two users is computed from the QoS data of their commonly invoked services, and the similarity between two services is computed from the common users who invoked them. However, most approaches ignore the fact that similarity calculations are not always accurate under sparse data conditions. To address this problem, we propose a similarity propagation method to accurately evaluate the similarities between users or services. Similarity propagation means that "if A and B are similar, and B and C are similar, then A and C will be similar to some extent". First, a similarity graph of users or services is constructed according to the QoS data. Then, similarity propagation paths between two nodes on the graph are discovered. Finally, the similarity along each propagation path is measured, and the indirect similarity between two users or services is evaluated by aggregating the similarities of the different paths connecting them. Comprehensive experiments on real-world datasets demonstrate that our similarity propagation method substantially improves the QoS prediction accuracy of memory-based collaborative filtering approaches.
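The three-step procedure in this abstract (build a similarity graph, find propagation paths, aggregate path similarities) can be sketched as follows. The propagation rule (multiply similarities along a path) and the aggregation rule (average over paths) are illustrative choices, not necessarily the paper's exact formulas:

```python
# Minimal sketch of similarity propagation on a user (or service)
# similarity graph; multiply-along-path and average-over-paths are
# illustrative choices, not necessarily the paper's exact formulas.

def propagated_similarity(graph, src, dst, max_hops=3):
    """graph: dict mapping node -> {neighbor: direct similarity in (0, 1]}
    (directed adjacency; store both directions for an undirected graph).
    Returns the average, over all simple paths of <= max_hops edges,
    of the product of edge similarities along each path."""
    path_sims = []

    def dfs(node, sim, visited, hops):
        if hops > max_hops:
            return
        for nbr, s in graph.get(node, {}).items():
            if nbr == dst:
                path_sims.append(sim * s)        # path reached the target
            elif nbr not in visited:
                dfs(nbr, sim * s, visited | {nbr}, hops + 1)

    dfs(src, 1.0, {src}, 1)
    return sum(path_sims) / len(path_sims) if path_sims else 0.0
```

For example, if A and B have similarity 0.9 and B and C have 0.8, A and C get an indirect similarity of 0.72, which can fill in entries that direct QoS overlap cannot supply under sparse data.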

RevDate: 2021-02-02

Jiang W, Ye X, Chen R, et al (2020)

Wearable on-device deep learning system for hand gesture recognition based on FPGA accelerator.

Mathematical biosciences and engineering : MBE, 18(1):132-153.

Gesture recognition is critical in the field of human-computer interaction, especially in healthcare, rehabilitation, and sign language translation. Conventionally, the gesture data collected by inertial measurement unit (IMU) sensors is relayed to the cloud or to a remote device with greater computing power to train models. However, this is inconvenient for remote follow-up treatment in movement rehabilitation training. In this paper, based on a field-programmable gate array (FPGA) accelerator and the Cortex-M0 IP core, we propose a wearable deep learning system capable of processing data locally on the end device. With a pre-stage processing module and a serial-parallel hybrid method, the device achieves low power consumption and low latency at the microcontroller unit (MCU) level, yet it meets or exceeds the performance of single-board computers (SBCs); for example, its performance is more than twice that of the Cortex-A53 (commonly used in the Raspberry Pi). Moreover, a convolutional neural network (CNN) and a multilayer perceptron (MLP) are used in the recognition model to extract features and classify gestures, achieving a high recognition accuracy of 97%. Finally, this paper offers a software-hardware co-design method that is worth referencing for the design of edge devices in other scenarios.
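The model structure described above (a CNN feature extractor feeding an MLP classifier over IMU windows) can be sketched as a plain forward pass. Layer sizes, weights, and the 6-axis/4-gesture shapes below are illustrative placeholders, not the paper's trained network:

```python
# Minimal sketch of a 1-D CNN feature extractor + MLP classifier over an
# IMU window; shapes and weights are illustrative, not the paper's model.
import numpy as np

def conv1d(x, kernels):
    """x: (channels, length); kernels: (filters, channels, width).
    Valid cross-correlation, stride 1."""
    f, c, w = kernels.shape
    out_len = x.shape[1] - w + 1
    out = np.zeros((f, out_len))
    for i in range(out_len):
        window = x[:, i:i + w]  # (channels, width) slice under the kernel
        out[:, i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

def gesture_forward(x, conv_k, w1, b1, w2, b2):
    """Return softmax class probabilities for one IMU window x."""
    h = np.maximum(conv1d(x, conv_k), 0)   # conv + ReLU feature maps
    feat = h.mean(axis=1)                  # global average pooling -> (filters,)
    hidden = np.maximum(feat @ w1 + b1, 0)  # MLP hidden layer
    logits = hidden @ w2 + b2
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()
```

On the paper's hardware, a forward pass like this is what the FPGA accelerator executes; the sketch only illustrates the dataflow (convolution, pooling, dense layers), not the quantization or parallelization that makes it MCU-class.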

RevDate: 2021-01-29

Neely BA (2021)

Cloudy with a Chance of Peptides: Accessibility, Scalability, and Reproducibility with Cloud-Hosted Environments.

Journal of proteome research [Epub ahead of print].

Cloud-hosted environments offer known benefits when computational needs outstrip affordable local workstations, enabling high-performance computation without a physical cluster. What has been less apparent, especially to novice users, is the transformative potential of cloud-hosted environments to bridge the digital divide between poorly funded and well-resourced laboratories, and to empower modern research groups with remote personnel and trainees. Using cloud-based proteomic bioinformatic pipelines is not predicated on analyzing thousands of files; instead, they can improve accessibility during remote work or extreme weather, or when working with under-resourced remote trainees. The general benefits of cloud-hosted environments also allow for scalability and encourage reproducibility. Since one possible hurdle to adoption is awareness, this paper is written with the nonexpert in mind. The benefits and possibilities of using a cloud-hosted environment are illustrated by describing how to set up an example workflow to analyze a previously published label-free data-dependent acquisition mass spectrometry data set of mammalian urine. Cost and time of analysis are compared across different computational tiers, and important practical considerations are described. Overall, cloud-hosted environments offer the potential to solve large computational problems, but more importantly they can enable and accelerate research in smaller groups with inadequate infrastructure and suboptimal local computational resources.

RevDate: 2021-01-29

Alvarez RV, Mariño-Ramírez L, D Landsman (2021)

Transcriptome annotation in the cloud: complexity, best practices, and cost.

GigaScience, 10(2):.

BACKGROUND: The NIH Science and Technology Research Infrastructure for Discovery, Experimentation, and Sustainability (STRIDES) initiative provides NIH-funded researchers cost-effective access to commercial cloud providers, such as Amazon Web Services (AWS) and Google Cloud Platform (GCP). These cloud providers represent an alternative for the execution of large computational biology experiments such as transcriptome annotation, a complex analytical process that requires the interrogation of multiple biological databases with several advanced computational tools. The core components of annotation pipelines published since 2012 are BLAST sequence alignments against annotated databases of both nucleotide and protein sequences, run almost exclusively on networked on-premises compute systems.

FINDINGS: We compare multiple BLAST sequence alignments on AWS and GCP. We prepared several Jupyter Notebooks with all the code required to submit computing jobs to the batch system of each cloud provider. We consider the effect of the number of query transcripts per input file on cost and processing time, and we tested compute instances with 16, 32, and 64 vCPUs on each cloud provider. Four classes of timing results were collected: the total run time, the time to transfer the BLAST databases to the instance's local solid-state drive, the time to execute the CWL script, and the time for the creation, set-up, and release of an instance. This study aims to establish an estimate of the cost and compute time needed for the execution of multiple BLAST runs in a cloud environment.

CONCLUSIONS: We demonstrate that public cloud providers are a practical alternative for the execution of advanced computational biology experiments at low cost. Using our cloud recipes, the BLAST alignments required to annotate a transcriptome with ∼500,000 transcripts can be processed in <2 hours with a compute cost of ∼$200-$250. In our opinion, for BLAST-based workflows the choice of cloud platform depends not on the workflow but on the specific details and requirements of the cloud provider, including accessibility for institutional use, the technical knowledge required to use the platform services effectively, and the availability of open-source frameworks, such as APIs, to deploy the workflow.
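The kind of cost/time estimate this study reports can be reproduced with back-of-the-envelope arithmetic for embarrassingly parallel BLAST jobs. The throughput and price figures in the example are hypothetical placeholders (not the paper's measured values), chosen only to show how instance-hours translate into wall-clock time and dollars:

```python
# Back-of-the-envelope estimator for cloud BLAST runs; throughput and
# price figures are illustrative placeholders, not measured values.

def blast_cost(n_transcripts, transcripts_per_instance_hour,
               n_instances, price_per_instance_hour):
    """Estimate (wall-clock hours, total cost) for BLAST jobs spread
    evenly over identical instances."""
    instance_hours = n_transcripts / transcripts_per_instance_hour
    wall_hours = instance_hours / n_instances
    return wall_hours, instance_hours * price_per_instance_hour

# Hypothetical example: 500,000 transcripts, 5,000 transcripts per
# instance-hour, 64 instances at $2.50 per instance-hour.
wall, cost = blast_cost(500_000, 5_000, 64, 2.50)
```

With these assumed numbers the run needs 100 instance-hours, finishing in about 1.6 wall-clock hours for $250, the same order of magnitude as the abstract's reported figures; real estimates must also account for database-transfer and instance set-up time, which the study measured separately.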

RevDate: 2021-01-29

Liu X, Kar B, Montiel Ishino FA, et al (2020)

Assessing the Reliability of Relevant Tweets and Validation Using Manual and Automatic Approaches for Flood Risk Communication.

ISPRS international journal of geo-information, 9(9):.

While Twitter has been touted as a preeminent source of up-to-date information on hazard events, the reliability of tweets remains a concern. Our previous publication extracted relevant tweets containing information about the 2013 Colorado flood event and its impacts. Building on those tweets, this research further examined their reliability (accuracy and trueness) by examining text and image content and comparing them with other publicly available data sources. Both manual identification of text information and automated extraction of image content (via the Google Cloud Vision application programming interface (API)) were implemented to balance accurate information verification against processing time. The results showed that both the text and the images contained useful information about damaged and flooded roads and streets. This information can support emergency response coordination and informed allocation of resources when enough tweets contain geocoordinates or location/venue names. This research helps identify reliable crowdsourced risk information to facilitate near-real-time emergency response through better use of crowdsourced risk communication platforms.

RevDate: 2021-01-28

Gaw LY, DR Richards (2021)

Development of spontaneous vegetation on reclaimed land in Singapore measured by NDVI.

PloS one, 16(1):e0245220 pii:PONE-D-20-26617.

Population and economic growth in Asia have led to increased urbanisation. Urbanisation has many detrimental impacts on ecosystems, especially when expansion is unplanned. Singapore is a city-state that has grown rapidly since independence, in both population and land area. However, Singapore aims to develop as a 'City in Nature', and urban greenery is integral to the landscape. While clearing some areas of forest for urban sprawl, Singapore has also reclaimed land from the sea to expand its coastline. Reclaimed land is usually designated for future urban development but must first be left for many years to stabilise. During this period of stabilisation, pioneer plant species establish and grow into novel forest communities. The rate of this spontaneous vegetation development has not been quantified. This study tracks temporal trends in the normalized difference vegetation index (NDVI), a proxy for vegetation maturity, on reclaimed land sensed using LANDSAT images. Google Earth Engine was used to mosaic cloud-free annual LANDSAT images of Singapore from 1988 to 2015. Singapore's median NDVI increased by 0.15, from 0.47 to 0.62, over the study period, while its land area grew by 71 km2. Five reclaimed sites with spontaneous vegetation development showed variable vegetation cover, ranging from 6% to 43% in 2015. On average, spontaneous vegetation takes 16.9 years to reach a maturity of 0.7 NDVI, but this development is not linear and follows a quadratic trajectory. Patches of spontaneous vegetation on isolated reclaimed land are unlikely to remain indefinitely, since they occupy areas slated for future development. In the years that these patches exist, they have the potential to increase urban greenery, support biodiversity, and provide a host of ecosystem services. With this knowledge of spontaneous vegetation development trajectories, urban planners can harness the resource when planning future developments.
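The two computations at the core of this study can be sketched directly: the standard NDVI formula applied per pixel to red and near-infrared reflectance, and a quadratic fit to a median-NDVI time series of the kind used to model the development trajectory. The time-series numbers in the test are synthetic, not the study's data:

```python
# Minimal sketch of per-pixel NDVI and a quadratic trend fit to a
# median-NDVI time series (synthetic numbers, not the study's data).
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, in [-1, 1]:
    NDVI = (NIR - Red) / (NIR + Red), elementwise over band arrays."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def quadratic_trend(years, median_ndvi):
    """Fit median NDVI ~ a*t**2 + b*t + c; returns (a, b, c)."""
    return tuple(np.polyfit(years, median_ndvi, 2))
```

In a workflow like the study's, `ndvi` would be applied to each annual cloud-free mosaic, the median taken per site, and `quadratic_trend` fit to the resulting series to estimate the non-linear trajectory toward the 0.7-NDVI maturity threshold.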

