

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 29 Sep 2020 at 01:35

Cloud Computing

Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2020-09-28
CmpDate: 2020-09-28

Grigorescu S, Cocias T, Trasnea B, et al (2020)

Cloud2Edge Elastic AI Framework for Prototyping and Deployment of AI Inference Engines in Autonomous Vehicles.

Sensors (Basel, Switzerland), 20(19): pii:s20195450.

Self-driving cars and autonomous vehicles are revolutionizing the automotive sector, shaping the future of mobility altogether. Although the integration of novel technologies such as Artificial Intelligence (AI) and Cloud/Edge computing provides golden opportunities to improve autonomous driving applications, the whole prototyping and deployment cycle of AI components needs to be modernized accordingly. This paper proposes a novel framework for developing so-called AI Inference Engines for autonomous driving applications based on deep learning modules, where training tasks are deployed elastically over both Cloud and Edge resources, with the purpose of reducing the required network bandwidth as well as mitigating privacy issues. Based on our proposed data-driven V-Model, we introduce a simple yet elegant solution for the AI components' development cycle, where prototyping takes place in the cloud according to the Software-in-the-Loop (SiL) paradigm, while deployment and evaluation on the target ECUs (Electronic Control Units) are performed as Hardware-in-the-Loop (HiL) testing. The effectiveness of the proposed framework is demonstrated using two real-world use cases of AI inference engines for autonomous vehicles, namely environment perception and most-probable-path prediction.

RevDate: 2020-09-25

Cheng CW, Brown CR, Venugopalan J, et al (2020)

Towards an Effective Patient Health Engagement System Using Cloud-Based Text Messaging Technology.

IEEE journal of translational engineering in health and medicine, 8:2700107 pii:2700107.

Patient and health provider interaction via text messaging (TM) has become an accepted form of communication, often favored by adolescents and young adults. While integration of TM in disease management has aided health interventions and behavior modifications, broader adoption is hindered by expense, fixed reporting schedules, and monotonic communication. A low-cost, flexible TM reporting system (REMOTES) was developed using inexpensive cloud-based services with features of two-way communication, personalized reporting scheduling, and scalable and secured data storage. REMOTES is a template-based reporting tool adaptable to a wide range of complexity in response formats. In a pilot study, 27 adolescents with sickle cell disease participated to assess the feasibility of REMOTES in both inpatient and outpatient settings. Subject compliance with at least one daily self-report pain query was 94.9% (112/118) during inpatient stays and 91.1% (327/359) during outpatient periods, with an overall accuracy of 99.2% (970/978). With the use of a more complex 8-item questionnaire, 30% (7/21) of inpatient and 66.6% (36/54) of outpatient responses were reported, with 98.1% (51/52) reporting accuracy. All participants expressed high pre-trial expectation (88%) and post-trial satisfaction (89%). The study suggests that cloud-based text messaging is a feasible, easy-to-use solution for low-cost and personalized patient engagement.

RevDate: 2020-09-24

Wang X, P Qiu (2020)

A freight integer linear programming model under fog computing and its application in the optimization of vehicle networking deployment.

PloS one, 15(9):e0239628 pii:PONE-D-20-15346.

The increase in data volume means the traditional Internet of Vehicles (IoV) fails to meet users' needs, and the IoV is therefore being explored intensively. This study constructs a freight integer linear programming (ILP) model based on fog computing (FG) and analyzes the model's application in optimizing the networking deployment (ND) of the IoV. FG and ILP are combined to build a freight computing ILP model, which is used to analyze ND optimization in the IoV system through simulations. The results show that, when analyzing ND results in different scenarios, the model is more suitable for small-scale scenarios, where it can optimize the objective function; its utilization rate is low in large-scale scenarios. In comparisons of network cost and running time, the FG-based ND solution requires less cost and shorter running time than traditional cloud computing solutions, with apparent effectiveness and efficiency. Therefore, the FG-based model has low cost, short running time, and apparent efficiency, which provides an experimental basis for the later deployment of freight vehicles (FVs) in the Internet of Things (IoT) system for ND optimization. The results will provide important theoretical support for the overall deployment of the IoV.
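As a rough illustration of the kind of optimization this abstract describes, the toy model below brute-forces a 0-1 integer program that assigns services to fog or cloud nodes under a fog capacity constraint. All costs and capacities are invented for illustration; the paper's actual freight ILP is richer and solved with proper ILP machinery.

```python
from itertools import product

# Toy 0-1 ILP: assign 3 services to fog (cost 1, capacity 2) or cloud
# (cost 3, unlimited), minimizing total cost. Brute force over binary
# assignments stands in for a real ILP solver.
FOG_COST, CLOUD_COST, FOG_CAPACITY, N_SERVICES = 1, 3, 2, 3

def solve():
    best = None
    for assign in product((0, 1), repeat=N_SERVICES):  # 1 = fog, 0 = cloud
        if sum(assign) > FOG_CAPACITY:                 # capacity constraint
            continue
        cost = sum(FOG_COST if a else CLOUD_COST for a in assign)
        if best is None or cost < best[0]:
            best = (cost, assign)
    return best

cost, assignment = solve()
print(cost, assignment)  # 2 services on fog, 1 on cloud -> cost 1+1+3 = 5
```

The brute-force search is exponential in the number of services, which mirrors the paper's observation that the approach suits small-scale scenarios better than large ones.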

RevDate: 2020-09-24

Sun G, Jin Y, Li S, et al (2020)

Virtual Coformer Screening by Crystal Structure Predictions: Crucial Role of Crystallinity in Pharmaceutical Cocrystallization.

The journal of physical chemistry letters [Epub ahead of print].

One of the most popular strategies for optimizing drug properties in the pharmaceutical industry is changing the solid form into a cocrystalline one. A number of virtual screening approaches have previously been developed to allow selection of the most promising cocrystal formers (coformers) for experimental follow-up. A significant drawback of those methods is that they fail to account for the crystallinity contribution to cocrystal formation. To address this issue, we propose in this study two virtual coformer screening approaches based on a modern cloud-computing crystal structure prediction (CSP) technology at a dispersion-corrected density functional theory (DFT-D) level. The CSP-based methods were validated for the first time on the challenging cases of indomethacin and paracetamol cocrystallization, for which the previously developed approaches provided poor predictions. The calculations demonstrated a dramatic improvement in virtual coformer screening performance relative to the other methods. The crystallinity contribution to the formation of paracetamol and indomethacin cocrystals is shown to be dominant and therefore should not be ignored in virtual screening calculations. Our results encourage broad utilization of the proposed CSP-based technology in the pharmaceutical industry as the only virtual coformer screening method that directly accounts for the crystallinity contribution.

RevDate: 2020-09-24

Peter BG, Messina JP, Lin Z, et al (2020)

Crop climate suitability mapping on the cloud: a geovisualization application for sustainable agriculture.

Scientific reports, 10(1):15487 pii:10.1038/s41598-020-72384-x.

Climate change, food security, and environmental sustainability are pressing issues faced by today's global population. As production demands increase and climate threatens crop productivity, agricultural research develops innovative technologies to meet these challenges. Strategies include biodiverse cropping arrangements, new crop introductions, and genetic modification of crop varieties that are resilient to climatic and environmental stressors. Geography in particular is equipped to address a critical question in this pursuit: when and where can crop system innovations be introduced? This manuscript presents a case study of geographic scaling potential using common bean, delivers an open-access Google Earth Engine geovisualization application for mapping the fundamental climate niche of any crop, and discusses food security and legume biodiversity in Sub-Saharan Africa. The application is temporally agile, allowing variable growing-season selections and the production of 'living maps' that can be continually reproduced as new data become available. This is an essential communication tool for the future, as practitioners can evaluate the potential geographic range of newly developed, experimental, and underrepresented crop varieties to facilitate sustainable and innovative agroecological solutions.

RevDate: 2020-09-24

Tahir A, Chen F, Khan HU, et al (2020)

A Systematic Review on Cloud Storage Mechanisms Concerning e-Healthcare Systems.

Sensors (Basel, Switzerland), 20(18): pii:s20185392.

As the expenses of medical care services rise and medical professionals become scarce, it falls to healthcare organizations and institutes to consider implementing Health Information Technology (HIT) frameworks. HIT permits health associations to streamline their substantial processes and offer services in a more productive and cost-effective manner. With the rise of Cloud Storage Computing (CSC), an enormous number of associations and enterprises have moved their healthcare data sources to distributed storage. As the data can be requested at any time from anywhere, availability becomes a crucial need. Nonetheless, outages in cloud storage significantly affect the availability level. Like the other critical factors of cloud storage (e.g., reliability, performance, security, and protection), availability directly impacts the data in cloud storage for e-Healthcare systems. In this paper, we systematically review cloud storage mechanisms concerning the healthcare environment, and critically review state-of-the-art cloud storage mechanisms for e-Healthcare systems based on their characteristics. In short, this paper summarizes the existing literature on cloud storage and its impact on healthcare, and it thereby gives researchers, medical specialists, and organizations a solid foundation for future studies in the healthcare environment.

RevDate: 2020-09-23

Cerasoli FT, Sherbert K, Sławińska J, et al (2020)

Quantum computation of silicon electronic band structure.

Physical chemistry chemical physics : PCCP [Epub ahead of print].

Development of quantum architectures during the last decade has inspired hybrid classical-quantum algorithms in physics and quantum chemistry that promise simulations of fermionic systems beyond the capability of modern classical computers, even before the era of quantum computing fully arrives. Strong research efforts have recently been made to obtain minimal-depth quantum circuits that can accurately represent chemical systems. Here, we show that methods used in quantum chemistry, designed to simulate molecules on quantum processors, can be extended to calculate properties of periodic solids. In particular, we present minimal-depth circuits implementing the variational quantum eigensolver algorithm and successfully use it to compute the band structure of silicon on a quantum machine for the first time. We are convinced that the presented quantum experiments performed on cloud-based platforms will stimulate more intense studies towards scalable electronic structure computation of advanced quantum materials.

RevDate: 2020-09-23

Brown AP, SM Randall (2020)

Secure Record Linkage of Large Health Data Sets: Evaluation of a Hybrid Cloud Model.

JMIR medical informatics, 8(9):e18920 pii:v8i9e18920.

BACKGROUND: The linking of administrative data across agencies provides the capability to investigate many health and social issues with the potential to deliver significant public benefit. Despite its advantages, the use of cloud computing resources for linkage purposes is scarce, with the storage of identifiable information on cloud infrastructure assessed as high risk by data custodians.

OBJECTIVE: This study aims to present a model for record linkage that utilizes cloud computing capabilities while assuring custodians that identifiable data sets remain secure and local.

METHODS: A new hybrid cloud model was developed, including privacy-preserving record linkage techniques and container-based batch processing. An evaluation of this model was conducted with a prototype implementation using large synthetic data sets representative of administrative health data.

RESULTS: The cloud model keeps identifiers on premises and uses privacy-preserved identifiers to run all linkage computations on cloud infrastructure. Our prototype used a managed container cluster in Amazon Web Services to distribute the computation using existing linkage software. Although the cost of computation was relatively low, the use of existing software resulted in a processing overhead of 35.7% (149/417 min execution time).

CONCLUSIONS: The result of our experimental evaluation shows the operational feasibility of such a model and the exciting opportunities for advancing the analysis of linkage outputs.
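The abstract does not specify which privacy-preserving record linkage (PPRL) encoding was used; a common choice in the PPRL literature is Bloom-filter encoding of name bigrams compared with the Dice coefficient, so that only bit vectors, never raw identifiers, leave the premises. The sketch below illustrates that idea under those assumptions; it is a toy, not the evaluated system.

```python
import hashlib

# Toy PPRL sketch: encode name bigrams into a Bloom filter and compare
# filters with the Dice coefficient. Identifiers stay on premises; only
# the encoded bit vectors would be sent to the cloud for linkage.
BITS, HASHES = 128, 2

def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(name):
    bf = [0] * BITS
    for g in bigrams(name):
        for k in range(HASHES):
            h = int(hashlib.sha256(f"{k}:{g}".encode()).hexdigest(), 16)
            bf[h % BITS] = 1
    return bf

def dice(a, b):
    # Dice similarity of two bit vectors: 2|A and B| / (|A| + |B|)
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

# Similar spellings share most bigrams, so their filters score higher.
print(dice(bloom("jonathan"), bloom("jonathon")) >
      dice(bloom("jonathan"), bloom("elizabeth")))
```

Because similar strings set overlapping bits, approximate matching still works on the encodings, which is what lets the heavy pairwise comparison run on untrusted cloud infrastructure.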

RevDate: 2020-09-22

Huang W, Zheng P, Cui Z, et al (2020)

MMAP: A Cloud Computing Platform for Mining the Maximum Accuracy of Predicting Phenotypes from Genotypes.

Bioinformatics (Oxford, England) pii:5909989 [Epub ahead of print].

Accurately predicting phenotypes from genotypes holds great promise to improve health management in humans and animals, and breeding efficiency in animals and plants. Although many prediction methods have been developed, the optimal method differs across datasets due to multiple factors, including species, environments, populations, and traits of interest. Studies have demonstrated that the number of genes underlying a trait and its heritability are the two key factors that determine which method fits the trait the best. In many cases, however, these two factors are unknown for the traits of interest. We developed a cloud computing platform for Mining the Maximum Accuracy of Predicting phenotypes from genotypes (MMAP) using unsupervised learning on publicly available real data and simulated data. MMAP provides a user interface to upload input data, manage projects and analyses, and download the output results. The platform is free for the public to conduct computations for predicting phenotypes and genetic merit using the best prediction method optimized from many available ones, including Ridge Regression, gBLUP, compressed BLUP, Bayesian LASSO, Bayes A, B, Cpi, and many more. Users can also use the platform to conduct data analyses with any methods of their choice. It is expected that extensive usage of MMAP would enrich the training data, which in turn results in continual improvement of the identification of the best method for use with particular traits.

AVAILABILITY: The MMAP user manual, tutorials, and example datasets are available at http://zzlab.net/MMAP.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
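Of the prediction methods MMAP searches over, ridge regression has the simplest closed form. The sketch below shows a one-dimensional version in plain Python with made-up data; it illustrates the idea of L2-penalized prediction, not MMAP's implementation.

```python
# One-dimensional ridge regression, the simplest of the prediction
# methods listed in the abstract. Pure-Python illustration only.
def ridge_1d(x, y, lam):
    # Closed form for y ~ w*x with an L2 penalty: w = sum(xy) / (sum(x^2) + lam)
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

genotype_scores = [1.0, 2.0, 3.0, 4.0]   # hypothetical marker scores
phenotypes = [2.0, 4.0, 6.0, 8.0]        # perfectly linear: y = 2x
print(ridge_1d(genotype_scores, phenotypes, lam=0.0))   # 2.0 (no shrinkage)
print(ridge_1d(genotype_scores, phenotypes, lam=30.0))  # shrunk toward 0
```

The penalty `lam` shrinks the coefficient toward zero, which is the knob methods like gBLUP and Bayesian LASSO tune, in their own ways, to match a trait's genetic architecture.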

RevDate: 2020-09-22

Ashraf I, Umer M, Majeed R, et al (2020)

Home automation using general purpose household electric appliances with Raspberry Pi and commercial smartphone.

PloS one, 15(9):e0238480 pii:PONE-D-20-18224.

This study presents the design and implementation of a home automation system that focuses on remote control of ordinary electrical appliances using Raspberry Pi and relay circuits, and does not use expensive IP-based devices. Common lights, Heating, Ventilation, and Air Conditioning (HVAC), fans, and other electronic devices are among the appliances that can be used in this system. A smartphone app is designed that helps the user map the smart-home design onto their actual home via an easy, interactive drag-and-drop option. The system provides control over the appliances via both the local network and remote access. Data logging over the Microsoft Azure cloud database ensures system recovery in case of gateway failure and keeps a data record for later use. Periodic notifications also help the user to optimize the usage of home appliances. Moreover, the user can set preferences, and the appliances are automatically turned off and on to meet user-specific requirements. Raspberry Pi, acting as the server, maintains the database of each appliance. An HTTP web interface and Apache server are used for communication between the Android app and the Raspberry Pi. With a 5 V relay circuit and the Raspberry Pi microprocessor, the proposed system is low-cost, energy-efficient, easy to operate, and affordable for low-income households.

RevDate: 2020-09-21

Huang PJ, Chang JH, Lin HH, et al (2020)

DeepVariant-on-Spark: Small-Scale Genome Analysis Using a Cloud-Based Computing Framework.

Computational and mathematical methods in medicine, 2020:7231205.

Although sequencing a human genome has become affordable, identifying genetic variants from whole-genome sequence data is still a hurdle for researchers without adequate computing equipment or bioinformatics support. GATK is a gold standard method for the identification of genetic variants and has been widely used in genome projects and population genetic studies for many years. That was the case until the Google Brain team developed DeepVariant, a new method that utilizes deep neural networks to construct an image classification model for identifying genetic variants. However, the superior accuracy of DeepVariant comes at the cost of computational intensity, largely constraining its applications. Accordingly, we present DeepVariant-on-Spark to optimize resource allocation, enable multi-GPU support, and accelerate the processing of the DeepVariant pipeline. To make DeepVariant-on-Spark more accessible to everyone, we have deployed it to the Google Cloud Platform (GCP). Users can deploy DeepVariant-on-Spark on the GCP following our instructions within 20 minutes and start to analyze at least ten whole-genome sequencing datasets using the free credits provided by the GCP. DeepVariant-on-Spark is freely available for small-scale genome analysis using a cloud-based computing framework, which is suitable for pilot testing or preliminary studies, while preserving the flexibility and scalability needed for large-scale sequencing projects.

RevDate: 2020-09-19

Silva LAZD, Vidal VF, Honório LM, et al (2020)

A Heterogeneous Edge-Fog Environment Supporting Digital Twins for Remote Inspections.

Sensors (Basel, Switzerland), 20(18): pii:s20185296.

The increase in the development of digital twins brings several advantages to inspection and maintenance, but also new challenges. Digital models capable of representing real equipment for full remote inspection demand the synchronization, integration, and fusion of several sensors and methodologies such as stereo vision, monocular Simultaneous Localization and Mapping (SLAM), laser and RGB-D camera readings, texture analysis, filters, and thermal and multi-spectral images. This multidimensional information makes it possible to have a full understanding of given equipment, enabling remote diagnosis. To solve this problem, the present work uses an edge-fog-cloud architecture running over a publisher-subscriber communication framework to optimize computational costs and throughput. In this approach, each process is embedded in an edge node responsible for preprocessing a given amount of data, optimizing the trade-off between processing capability and throughput delay. All information is integrated across different levels of fog nodes and a cloud server to maximize performance. To demonstrate this proposal, a real-time 3D reconstruction problem using moving cameras is shown. In this scenario, stereo and RGB-D cameras run over edge nodes, filtering and preprocessing the initial data. Furthermore, point cloud and image registration, odometry, and filtering run over fog clusters, while a cloud server is responsible for texturing and processing the final results. This approach optimizes the time lag between data acquisition and operator visualization, and it is easily scalable if new sensors and algorithms must be added. The experimental results demonstrate precision, by comparing the results with ground-truth data; scalability, by adding further readings; and performance.

RevDate: 2020-09-19

Moreno-Martínez Á, Izquierdo-Verdiguier E, Maneta MP, et al (2020)

Multispectral high resolution sensor fusion for smoothing and gap-filling in the cloud.

Remote sensing of environment, 247:111901.

Remote sensing optical sensors onboard operational satellites cannot have high spectral, spatial, and temporal resolutions simultaneously. In addition, clouds and aerosols can adversely affect the signal, contaminating the land surface observations. We present a HIghly Scalable Temporal Adaptive Reflectance Fusion Model (HISTARFM) algorithm that combines multispectral images from different sensors to reduce noise and produce monthly gap-free, high-resolution (30 m) observations over land. Our approach uses images from the Landsat mission (30 m spatial resolution, 16-day revisit cycle) and the MODIS missions on both the Terra and Aqua platforms (500 m spatial resolution, daily revisit cycle). We implement a bias-aware Kalman filter method in the Google Earth Engine (GEE) platform to obtain fused images at the Landsat spatial resolution. The added bias correction in the Kalman filter estimates accounts for the fact that both model and observation errors are temporally auto-correlated and may have a non-zero mean. This approach also enables reliable estimation of the uncertainty associated with the final reflectance estimates, allowing for error propagation analyses in higher-level remote sensing products. Quantitative and qualitative evaluations of the generated products through comparison with other state-of-the-art methods confirm the validity of the approach, and open the door to operational applications at enhanced spatio-temporal resolutions at broad continental scales.
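A minimal sketch of the core idea, assuming a scalar random-walk state model: a Kalman filter blends a prediction with each noisy observation according to their variances, smoothing the series. HISTARFM's bias-aware, multi-sensor formulation additionally estimates a temporally correlated bias term, which this toy omits, and the numbers here are invented.

```python
# Scalar Kalman filter smoothing a noisy reflectance time series.
# q = process noise variance, r = observation noise variance.
def kalman_smooth(observations, q=0.01, r=0.25):
    x, p = observations[0], 1.0     # initial state estimate and variance
    out = [x]
    for z in observations[1:]:
        p = p + q                   # predict: random-walk state model
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with observation z
        p = (1 - k) * p
        out.append(x)
    return out

noisy = [0.30, 0.55, 0.28, 0.52, 0.31]   # synthetic reflectance values
smooth = kalman_smooth(noisy)
# The smoothed series varies less than the raw observations.
print(max(smooth) - min(smooth) < max(noisy) - min(noisy))
```

Because the filter carries a variance `p` alongside the estimate, it also yields the per-pixel uncertainty that the abstract highlights for error propagation.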

RevDate: 2020-09-18

Platt S, Sanabria-Russo L, M Oliver (2020)

CoNTe: A Core Network Temporal Blockchain for 5G.

Sensors (Basel, Switzerland), 20(18): pii:s20185281.

Virtual Network Functions allow the effective separation of hardware and network functionality, a strong paradigm shift from previously tightly integrated, monolithic, vendor- and technology-dependent deployments. In this virtualized paradigm, all aspects of network operations can be made to deploy on demand, scale dynamically, and be shared and interworked in ways that mirror the behavior of general cloud computing. To date, although demand is rising, distributed ledger technology remains largely incompatible with such elastic deployments, owing to its nature as an immutable record store. This work focuses on the structural incompatibility of current blockchain designs and proposes a novel temporal blockchain design, built atop federated byzantine agreement, which can scale dynamically and be packaged as a Virtual Network Function (VNF) for the 5G Core.

RevDate: 2020-09-17

Mayfield CA, Gigler ME, Snapper L, et al (2020)

Using cloud-based, open-source technology to evaluate, improve, and rapidly disseminate community-based intervention data.

Journal of the American Medical Informatics Association : JAMIA pii:5907066 [Epub ahead of print].

Building Uplifted Families (BUF) is a cross-sector community initiative to reduce health and economic disparities in Charlotte, North Carolina. A formative evaluation strategy was used to support iterative process improvement and collaborative engagement of cross-sector partners. To address challenges with electronic data collection through REDCap Cloud, we developed the BUF Rapid Dissemination (BUF-RD) model, a multistage data governance system supplemented by open-source technologies, comprising three stages: Stage 1) data collection; Stage 2) data integration and analysis; and Stage 3) dissemination. In Stage 3, results were disseminated through an interactive dashboard developed in RStudio using RShiny and Shiny Server solutions. The BUF-RD model was successfully deployed in a 6-month beta test, reducing the time lapse between data collection and dissemination from 3 months to 2 weeks. Having up-to-date preliminary results led to improved BUF implementation, enhanced stakeholder engagement, and greater responsiveness and alignment of program resources to specific participant needs.

RevDate: 2020-09-17

Lagzian M, Dadkhah M, A Mehraeen (2020)

Investigating the Capabilities of Information Technologies to support Policymaking in COVID-19 Crisis Management; A Systematic Review and Expert opinions.

European journal of clinical investigation [Epub ahead of print].

BACKGROUND: Today, numerous countries are fighting to protect themselves against the COVID-19 crisis, while policymakers are confounded and empty-handed in dealing with this chaotic circumstance. The infection and its impacts have made it difficult to make optimal and suitable decisions. New information technologies play significant roles in such critical situations and can relieve stress during the coronavirus crisis. This article endeavors to recognize the challenges policymakers have typically experienced during pandemic diseases, including COVID-19, and, accordingly, the new information technology capabilities available to address them.

MATERIAL AND METHODS: The current study utilizes the synthesis of findings of experts' opinions within the systematic review process as the research method to recognize the best available evidence drawn from text and opinion to offer practical guidance for policymakers.

RESULTS: The results illustrate that the challenges fall into two categories: encountering the disease and mitigating its consequences. Furthermore, the Internet of Things, cloud computing, machine learning, and social networking play the most significant roles in addressing these challenges.

RevDate: 2020-09-17

Albrecht B, Bağcı C, DH Huson (2020)

MAIRA: real-time taxonomic and functional analysis of long reads on a laptop.

BMC bioinformatics, 21(Suppl 13):390 pii:10.1186/s12859-020-03684-2.

BACKGROUND: Advances in mobile sequencing devices and laptop performance make metagenomic sequencing and analysis in the field a technologically feasible prospect. However, metagenomic analysis pipelines are usually designed to run on servers and in the cloud.

RESULTS: MAIRA is a new standalone program for interactive taxonomic and functional analysis of long read metagenomic sequencing data on a laptop, without requiring external resources. The program performs fast, online, genus-level analysis, and on-demand, detailed taxonomic and functional analysis. It uses two levels of frame-shift-aware alignment of DNA reads against protein reference sequences, and then performs detailed analysis using a protein synteny graph.

CONCLUSIONS: We envision this software being used by researchers in the field, when access to servers or cloud facilities is difficult, or by individuals that do not routinely access such facilities, such as medical researchers, crop scientists, or teachers.

RevDate: 2020-09-17

Koubaa A, Ammar A, Alahdab M, et al (2020)

DeepBrain: Experimental Evaluation of Cloud-Based Computation Offloading and Edge Computing in the Internet-of-Drones for Deep Learning Applications.

Sensors (Basel, Switzerland), 20(18): pii:s20185240.

Unmanned Aerial Vehicles (UAVs) have been very effective in collecting aerial image data for various Internet-of-Things (IoT)/smart-city applications such as search and rescue, surveillance, vehicle detection and counting, and intelligent transportation systems, to name a few. However, real-time processing of collected data at the edge in the context of the Internet-of-Drones remains an open challenge because UAVs have limited energy capabilities, while computer vision techniques consume excessive energy and require abundant resources. This fact is even more critical when deep learning algorithms, such as convolutional neural networks (CNNs), are used for classification and detection. In this paper, we first propose a system architecture of computation offloading for Internet-connected drones. Then, we conduct a comprehensive experimental study to evaluate the performance in terms of energy, bandwidth, and delay of the cloud computation offloading approach versus the edge computing approach for deep learning applications in the context of UAVs. In particular, we experimentally investigate the tradeoff between the communication cost and the computation of the two candidate approaches. The main results demonstrate that the computation offloading approach provides much higher throughput (i.e., frames per second) than the edge computing approach, despite the larger communication delays.
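The tradeoff the authors measure can be caricatured with a simple throughput model: offloading wins when the uplink can ship frames faster than the on-board processor can run inference. The numbers below are invented for illustration and are not the paper's measurements.

```python
# Back-of-the-envelope throughput model for edge inference vs. cloud
# offloading. All parameters are hypothetical illustrations.
def edge_fps(inference_s):
    # On-board throughput is limited only by inference time per frame.
    return 1.0 / inference_s

def offload_fps(frame_mbit, uplink_mbps, cloud_inference_s):
    # Pipeline throughput is bounded by the slower stage: upload or compute.
    return min(uplink_mbps / frame_mbit, 1.0 / cloud_inference_s)

edge = edge_fps(0.50)                 # slow on-board CNN: 2 fps
cloud = offload_fps(2.0, 20.0, 0.05)  # upload-bound at 10 fps
print(edge, cloud)                    # offloading wins on throughput
```

Note that this model says nothing about per-frame latency, which is exactly where the paper finds offloading pays a price despite its throughput advantage.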

RevDate: 2020-09-16

Wang S, Di Tommaso S, Deines JM, et al (2020)

Mapping twenty years of corn and soybean across the US Midwest using the Landsat archive.

Scientific data, 7(1):307 pii:10.1038/s41597-020-00646-4.

Field-level monitoring of crop types in the United States via the Cropland Data Layer (CDL) has played an important role in improving production forecasts and enabling large-scale study of agricultural inputs and outcomes. Although CDL offers crop type maps across the conterminous US from 2008 onward, such maps are missing in many Midwestern states or are uneven in quality before 2008. To fill these data gaps, we used the now-public Landsat archive and cloud computing services to map corn and soybean at 30 m resolution across the US Midwest from 1999-2018. Our training data were CDL from 2008-2018, and we validated the predictions on CDL 1999-2007 where available, county-level crop acreage statistics, and state-level crop rotation statistics. The corn-soybean maps, which we call the Corn-Soy Data Layer (CSDL), are publicly hosted on Google Earth Engine and also available for download online.

RevDate: 2020-09-15

Utomo D, PA Hsiung (2020)

A Multitiered Solution for Anomaly Detection in Edge Computing for Smart Meters.

Sensors (Basel, Switzerland), 20(18): pii:s20185159.

In systems connected to smart grids, smart meters with fast and efficient responses are very helpful in detecting anomalies in real time. However, sending data at a frequency of a minute or less is impractical with today's technology because of the bottlenecks of the communication network and storage media. Because mitigation cannot be done in real time, we propose prediction techniques using a Deep Neural Network (DNN), Support Vector Regression (SVR), and k-Nearest Neighbors (KNN). In addition, the prediction timestep is chosen per day and wrapped in sliding windows, and clustering using K-means, and the intersection of K-means and HDBSCAN, is also evaluated. The predictive ability applied here is to predict whether anomalies in electricity usage will occur in the next few weeks. The aim is to give users time to check their usage and, from the utility side, to determine whether it is necessary to prepare a sufficient supply. We also propose a latency reduction to counter the higher latency of a traditional centralized system by adding an Edge Meter Data Management System (MDMS) layer and a Cloud-MDMS layer as the inference and training models. Based on the experiments running on a Raspberry Pi, the best solution is the DNN, which has the shortest latency (1.25 ms) and a 159 kB persistent file size at 128 timesteps.
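A minimal sketch of the sliding-window KNN variant described above, assuming daily readings: the next value is predicted from the k historical windows most similar to the most recent one, and a reading far from the prediction is flagged. Window size, k, the threshold, and the data are all illustrative, not the paper's configuration.

```python
# Sliding-window KNN prediction for smart-meter anomaly detection.
def knn_predict(series, window=3, k=2):
    target = series[-window:]
    # Squared distance from each historical window to the latest one,
    # paired with the value that followed that window.
    candidates = []
    for i in range(len(series) - window):
        w = series[i:i + window]
        d = sum((a - b) ** 2 for a, b in zip(w, target))
        candidates.append((d, series[i + window]))
    candidates.sort()
    # Average the followers of the k nearest windows.
    return sum(v for _, v in candidates[:k]) / k

usage = [10, 11, 10, 12, 10, 11, 10, 12, 10, 11]  # stable daily readings
pred = knn_predict(usage)
actual = 35                        # sudden spike in consumption
print(abs(actual - pred) > 5)      # flagged as anomalous
```

The same predict-then-compare loop would run on the edge MDMS layer in the paper's architecture, with the heavier model training pushed up to the Cloud-MDMS.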

RevDate: 2020-09-09

Mrozek D (2020)

A review of Cloud computing technologies for comprehensive microRNA analyses.

Computational biology and chemistry, 88:107365 pii:S1476-9271(20)30769-6 [Epub ahead of print].

Cloud computing revolutionized many fields that require ample computational power. Cloud platforms may also provide strong support for microRNA analysis, mainly by providing scalable resources of different types. In Clouds, these resources are available as services, which simplifies their allocation and release. This feature is especially useful during the analysis of large volumes of data, like those produced by next-generation sequencing experiments, which require not only extended storage space but also a distributed computing environment. In this paper, we show which Cloud properties and service models can be especially beneficial for microRNA analysis. We also explain the most useful Cloud services (including storage space, computational power, web application hosting, machine learning models, and Big Data frameworks) that can be used for microRNA analysis. At the same time, we review several Cloud-based solutions for microRNA analysis and show that utilization of the Cloud in this field is still weak, but may increase in the future as awareness of its applicability grows.

RevDate: 2020-09-09

Long E, Chen J, Wu X, et al (2020)

Artificial intelligence manages congenital cataract with individualized prediction and telehealth computing.

NPJ digital medicine, 3:112.

A challenge of chronic diseases that remains to be solved is how to liberate patients and medical resources from the burdens of long-term monitoring and periodic visits. Precise management based on artificial intelligence (AI) holds great promise; however, a clinical application that fully integrates prediction and telehealth computing has not been achieved, and further efforts are required to validate its real-world benefits. Taking congenital cataract as a representative case, we used Bayesian and deep-learning algorithms to create CC-Guardian, an AI agent that incorporates individualized prediction and scheduling, and intelligent telehealth follow-up computing. Our agent exhibits high sensitivity and specificity in both internal and multi-resource validation. We integrate our agent with a web-based smartphone app and prototype a prediction-telehealth cloud platform to support our intelligent follow-up system. We then conduct a retrospective self-controlled test validating that our system not only accurately detects and addresses complications at earlier stages, but also reduces the socioeconomic burdens compared to conventional methods. This study represents a pioneering step in applying AI to achieve real medical benefits and demonstrates a novel strategy for the effective management of chronic diseases.

RevDate: 2020-09-14
CmpDate: 2020-09-14

Hwang YW, IY Lee (2020)

A Study on CP-ABE-based Medical Data Sharing System with Key Abuse Prevention and Verifiable Outsourcing in the IoMT Environment.

Sensors (Basel, Switzerland), 20(17): pii:s20174934.

Recent developments in cloud computing allow data to be securely shared between users. This can be used to improve the quality of life of patients and medical staff in the Internet of Medical Things (IoMT) environment. However, in the IoMT cloud environment, there are various security threats to patients' medical data. As a result, security features such as encryption of collected data and access control by legitimate users are essential. Many studies have been conducted on access control techniques using ciphertext-policy attribute-based encryption (CP-ABE), a form of attribute-based encryption, and studies are underway to apply them to the medical field. However, several problems persist. First, because the secret key does not identify the user, a user may maliciously distribute the secret key, and such users cannot be traced. Second, Attribute-Based Encryption (ABE) increases the size of the ciphertext with the number of attributes specified; this wastes cloud storage, and decryption is computationally expensive for users, who must therefore employ outsourcing servers. Third, a verification process is needed to prove that the results computed on the outsourcing server are computed properly. This paper presents a study of a CP-ABE-based medical data sharing system with key abuse prevention and verifiable outsourcing for the IoMT cloud environment. The proposed scheme can protect the privacy of user data stored in a cloud environment in the IoMT field, and if there is a problem with a secret key delegated by a user, it can trace the user who first delegated the key, thereby preventing the key abuse problem. In addition, the scheme reduces the user's burden when decrypting ciphertext and calculates accurate results through a server that supports constant-sized ciphertext output and verifiable outsourcing technology. The goal of this paper is to propose a system that enables patients and medical staff to share medical data safely and efficiently in an IoMT environment.

RevDate: 2020-09-18

Nguyen UNT, Pham LTH, TD Dang (2020)

Correction to: an automatic water detection approach using Landsat 8 OLI and Google earth engine cloud computing to map lakes and reservoirs in New Zealand.

Environmental monitoring and assessment, 192(9):616 pii:10.1007/s10661-020-08581-y.

In the published article "An automatic water detection approach using Landsat 8 OLI and Google Earth Engine cloud computing to map lakes and reservoirs in New Zealand", the Acknowledgements section was published incorrectly and the funding statement was missing.

RevDate: 2020-09-08

Mei L, Rozanov V, JP Burrows (2020)

A fast and accurate radiative transfer model for aerosol remote sensing.

Journal of quantitative spectroscopy & radiative transfer, 256:107270.

After several decades' development of retrieval techniques in aerosol remote sensing, no fast and accurate analytical Radiative Transfer Model (RTM) has been developed and applied to create global aerosol products for non-polarimetric instruments such as Ocean and Land Colour Instrument/Sentinel-3 (OLCI/Sentinel-3) and Meteosat Second Generation/Spinning Enhanced Visible and Infrared Imager (MSG/SEVIRI). Global aerosol retrieval algorithms are typically based on a Look-Up-Table (LUT) technique, requiring high-performance computers. The current eXtensible Bremen Aerosol/cloud and surfacE parameters Retrieval (XBAER) algorithm also utilizes the LUT method. In order to achieve near-real-time retrieval and a quick and accurate "FIRST-LOOK" aerosol product without high demand on computing resources, we have developed a Fast and Accurate Semi-analytical Model of Atmosphere-surface Reflectance (FASMAR) for aerosol remote sensing. FASMAR is based on a successive-order-of-scattering technique. In FASMAR, the first three orders of scattering are calculated exactly. The contribution of higher orders of scattering is estimated using an extrapolation technique and an additional correction function. FASMAR was evaluated by comparison with the radiative transfer model SCIATRAN for all typical observation/illumination geometries, surface/aerosol conditions, and the wavelengths 412, 550, 670, 870, 1600 and 2100 nm used for aerosol remote sensing. The selected observation/illumination conditions are based on observations from both geostationary satellites (e.g., MSG/SEVIRI) and polar-orbit satellites (e.g., OLCI/Sentinel-3). The percentage error of the top-of-atmosphere reflectance calculated by FASMAR is within ±3% for typical polar-orbit/geostationary satellite observation/illumination geometries. The accuracy decreases for solar and viewing zenith angles larger than 70°; however, even in such cases, the error is within ±5%. The evaluation of model performance also shows that FASMAR can be used for all typical surfaces with albedo in the interval [0, 1] and aerosols with optical thickness in the range [0.01, 1].

RevDate: 2020-09-02

Wang X, Xiao X, Zou Z, et al (2020)

Tracking annual changes of coastal tidal flats in China during 1986-2016 through analyses of Landsat images with Google Earth Engine.

Remote sensing of environment, 238:.

Tidal flats (non-vegetated area), along with coastal vegetation area, constitute the coastal wetlands (intertidal zone) between high and low water lines, and play an important role in wildlife, biodiversity and biogeochemical cycles. However, accurate annual maps of coastal tidal flats over the last few decades are unavailable and their spatio-temporal changes in China are unknown. In this study, we analyzed all the available Landsat TM/ETM+/OLI imagery (~ 44,528 images) using the Google Earth Engine (GEE) cloud computing platform and a robust decision tree algorithm to generate annual frequency maps of open surface water body and vegetation to produce annual maps of coastal tidal flats in eastern China from 1986 to 2016 at 30-m spatial resolution. The resulting map of coastal tidal flats in 2016 was evaluated using very high-resolution images available in Google Earth. The total area of coastal tidal flats in China in 2016 was about 731,170 ha, mostly distributed in the provinces around Yellow River Delta and Pearl River Delta. The interannual dynamics of coastal tidal flats area in China over the last three decades can be divided into three periods: a stable period during 1986-1992, an increasing period during 1993-2001 and a decreasing period during 2002-2016. The resulting annual coastal tidal flats maps could be used to support sustainable coastal zone management policies that preserve coastal ecosystem services and biodiversity in China.
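The per-pixel frequency logic behind such annual maps, counting water detections over good observations and isolating intermittently inundated, non-vegetated pixels, can be sketched as follows. The toy observation stack and the simple frequency thresholds are hypothetical, a simplification of the paper's decision tree:

```python
import numpy as np

# Hypothetical stack: 6 cloud-free observations of a 3-pixel transect in one year.
# True = open surface water detected for that observation.
water_obs = np.array([
    [True, True, False],   # open sea, tidal flat (high tide), inland
    [True, False, False],  # open sea, tidal flat (low tide), inland
    [True, True, False],
    [True, False, False],
    [True, True, False],
    [True, False, False],
])
vegetated = np.array([False, False, True])  # annual vegetation frequency mask

# Annual water frequency = water detections / good observations per pixel.
water_freq = water_obs.sum(axis=0) / water_obs.shape[0]

# Tidal flats: non-vegetated pixels that are inundated only part of the time.
tidal_flat = (~vegetated) & (water_freq > 0) & (water_freq < 1)
print(tidal_flat)
```

On Google Earth Engine the same reduction runs per pixel across tens of thousands of Landsat scenes, but the frequency arithmetic is the same.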

RevDate: 2020-09-03

Samea F, Azam F, Rashid M, et al (2020)

A model-driven framework for data-driven applications in serverless cloud computing.

PloS one, 15(8):e0237317 pii:PONE-D-19-28565.

In a serverless cloud computing environment, the cloud provider dynamically manages the allocation of resources, whereas the developers focus purely on their applications. Data-driven applications in serverless cloud computing mainly address the web as well as other distributed scenarios, and therefore it is essential to offer a consistent user experience across different connection types. In order to address the issues of data-driven applications in a real-time distributed environment, the use of GraphQL (Graph Query Language) is gaining popularity in state-of-the-art cloud computing approaches. However, existing solutions target the low-level implementation of GraphQL for the development of complex data-driven applications, which may lead to errors and involve significant development effort due to varied user requirements in real time. Therefore, it is critical to simplify the development process of data-driven applications in a serverless cloud computing environment. Consequently, this research introduces UMLPDA (Unified Modeling Language Profile for Data-driven Applications), which adopts the concepts of UML-based Model-driven Architectures to model the frontend and backend requirements of data-driven applications at a higher abstraction level. In particular, a modeling approach is proposed to resolve development complexities such as data communication and synchronization. Subsequently, a complete open-source transformation engine is developed using a Model-to-Text approach to automatically generate the frontend and backend low-level implementations in Angular2 and GraphQL, respectively. The proposed work is validated with three different case studies deployed on the Amazon Web Services platform. The results show that the proposed framework enables data-driven applications to be developed with simplicity.

RevDate: 2020-09-18
CmpDate: 2020-09-01

Fuentes H, D Mauricio (2020)

Smart water consumption measurement system for houses using IoT and cloud computing.

Environmental monitoring and assessment, 192(9):602 pii:10.1007/s10661-020-08535-4.

Presently, in several parts of the world, water consumption is neither measured nor visualized in real time; in addition, water leaks are not detected promptly or precisely, generating unnecessary waste of water. This article therefore presents the implementation of a smart water consumption measurement system under an architecture design with high decoupling and integration of various technologies, which allows consumption to be visualized in real time. In addition, a leak detection algorithm is proposed based on rules, historical context, and user location, covering 10 possible water consumption scenarios ranging from normal to anomalous consumption. The system collects data through a smart meter; the data are preprocessed by a local server (gateway) and periodically sent to the Cloud to be analyzed by the leak detection algorithm and, simultaneously, viewed on a web interface. The results show that the algorithm achieves 100% accuracy, recall, precision, and F1 score in detecting leaks, far better than other procedures, with a margin of error of 4.63% in the recorded amount of water consumed.
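A rule-based leak detector of the kind described, combining consumption history with user location, might be sketched as follows. The two rules and the thresholds shown are illustrative assumptions, not the paper's ten scenarios:

```python
def detect_leak(hourly_liters, user_home, min_flow=1.0):
    """hourly_liters: 24 hourly consumption values; user_home: 24 booleans
    from the user's location. Rule 1 (historical context): uninterrupted
    overnight flow (00:00-05:00) suggests a leak. Rule 2 (user location):
    sustained flow for 3+ consecutive hours while the user is away."""
    night_leak = all(hourly_liters[h] >= min_flow for h in range(0, 6))
    away_run = 0
    away_leak = False
    for used, home in zip(hourly_liters, user_home):
        away_run = away_run + 1 if (used >= min_flow and not home) else 0
        if away_run >= 3:
            away_leak = True
    return night_leak or away_leak

# Example: no overnight use, but steady flow all afternoon with nobody home.
liters = [0] * 12 + [2.0] * 6 + [0] * 6
home = [True] * 12 + [False] * 6 + [True] * 6
print(detect_leak(liters, home))
```

In the described architecture, rules like these would run in the Cloud against readings forwarded by the gateway.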

RevDate: 2020-09-03

Pang R, Wei Z, Liu W, et al (2020)

Influence of the Pandemic Dissemination of COVID-19 on Facial Rejuvenation: A Survey of Twitter.

Journal of cosmetic dermatology [Epub ahead of print].

BACKGROUND: With the pandemic dissemination of COVID-19, attitude and sentiment surrounding facial rejuvenation have evolved rapidly.

AIMS: The purpose of this study was to understand the impact of the pandemic on public attitudes toward facial skin rejuvenation.

METHODS: Twitter data related to facial rejuvenation were collected from January 1, 2020, to April 30, 2020. Sentiment analysis, frequency analysis, and word cloud were performed to analyze the data. Statistical analysis included two-tailed t-tests and chi-square tests.

RESULTS: After the pandemic declaration, the number of tweets about facial rejuvenation increased significantly, while the search volume in Google Trends decreased. Negative public emotions increased, but positive emotions still dominated. The frequency of the words "discounts" and "purchase" decreased. The dominant words in the word cloud were "Botox", "facelift", "hyaluronic" and "skin".

CONCLUSION: The public has a positive attitude toward facial rejuvenation during the pandemic. In particular, minimally invasive procedures such as "Botox", "hyaluronic acid" and "PRP" dominate the mainstream. Practitioners can track such changes in public interest in facial rejuvenation in a timely manner and decide what to focus on.
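The frequency analysis behind word clouds like the one in this study can be sketched with a simple token count; the sample tweets and stopword list below are invented for illustration:

```python
import re
from collections import Counter

# Toy tweet sample (hypothetical); the study collected tweets from Jan-Apr 2020.
tweets = [
    "Thinking about Botox once clinics reopen",
    "Postponed my facelift, but skin care first",
    "Hyaluronic acid and Botox discounts are gone",
]
stopwords = {"about", "my", "but", "and", "are", "once", "the", "first"}

# Lowercase, tokenize on letter runs, drop stopwords, then count.
tokens = [w for t in tweets for w in re.findall(r"[a-z]+", t.lower())
          if w not in stopwords]
freq = Counter(tokens)
print(freq.most_common(3))
```

A word cloud is then just a rendering of `freq`, with font size proportional to count; sentiment scoring adds a polarity lexicon or classifier over the same tokens.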

RevDate: 2020-08-27

Mahmood T, MS Mubarik (2020)

Balancing innovation and exploitation in the fourth industrial revolution: Role of intellectual capital and technology absorptive capacity.

Technological forecasting and social change, 160:120248.

Industry 4.0, which features the Internet of Things (IoT), cloud computing, big data, digitalization, and cyber-physical systems, is transforming the way businesses are run. It is making business processes more autonomous, automated and intelligent, and is transmuting the organizational structures of businesses by digitalizing their end-to-end business processes. In this context, balancing innovation and exploitation (an organization's ambidexterity) while stepping into the fourth industrial revolution can be critical for organizational capability. This study examines the role of intellectual capital (IC), comprising human capital, structural capital and relational capital, in balancing innovation and exploitation activities. It also examines the role of technology absorptive capacity in the relationship between IC and organizational ambidexterity (OA). Data were collected from 217 small and medium enterprises from the manufacturing sector of Pakistan using a closed-ended Likert scale-based questionnaire. The study employs partial least squares structural equation modeling (PLS-SEM) for data analysis. Findings indicate a profound influence of IC, both overall and by dimension, on organizations' ambidexterity. Findings also exhibit a significant partial mediating role of technology absorptive capacity (TAC) in the association between IC and ambidexterity. The findings of the study emphasize the creation of specific policies aimed at developing a firm's IC, which in turn can enable a firm to maintain a balance between innovation and market exploitation activities. The study integrates TAC with the IC-OA relationship, which is the novelty of the study.

RevDate: 2020-09-09

Hsu IC, CC Chang (2020)

Integrating machine learning and open data into social Chatbot for filtering information rumor.

Journal of ambient intelligence and humanized computing [Epub ahead of print].

Social networks have become a major platform for people to disseminate information, which can include negative rumors. In recent years, rumors on social networks have caused grave problems and considerable damage. We attempted to create a method to verify information from numerous social media messages. We propose a general architecture that integrates machine learning and open data with a chatbot and is based on cloud computing (MLODCCC), which can assist users in evaluating information authenticity on social platforms. The proposed MLODCCC architecture consists of six integrated modules: cloud computing, machine learning, data preparation, open data, chatbot, and intelligent social application modules. Food safety has garnered worldwide attention. Consequently, we used the proposed MLODCCC architecture to develop a Food Safety Information Platform (FSIP) that provides a friendly hyperlink and chatbot interface on Facebook to identify credible food safety information. The performance and accuracy of three binary classification algorithms, namely the decision tree, logistic regression, and support vector machine algorithms, operating in different cloud computing environments were compared. The binary classification accuracy was 0.769, indicating that the proposed approach classifies information with reasonable accuracy in the developed FSIP.

RevDate: 2020-08-24

Ghinita G, Nguyen K, Maruseac M, et al (2020)

A secure location-based alert system with tunable privacy-performance trade-off.

GeoInformatica pii:410 [Epub ahead of print].

Monitoring location updates from mobile users has important applications in many areas, ranging from public health (e.g., COVID-19 contact tracing) and national security to social networks and advertising. However, sensitive information can be derived from movement patterns, so protecting the privacy of mobile users is a major concern. Users may only be willing to disclose their locations when some condition is met, for instance in proximity of a disaster area or an event of interest. Currently, such functionality can be achieved using searchable encryption. Such cryptographic primitives provide provable guarantees for privacy, and allow decryption only when the location satisfies some predicate. Nevertheless, they rely on expensive pairing-based cryptography (PBC), whose direct application to the domain of location updates leads to impractical solutions. We propose secure and efficient techniques for private processing of location updates that complement the use of PBC and lead to significant gains in performance by reducing the number of required pairing operations. We implement two optimizations that further improve performance: materialization of the results of expensive mathematical operations, and parallelization. We also propose a heuristic that reduces the computational overhead by enlarging an alert zone by a small factor (given as a system parameter), thereby trading off a small and controlled amount of privacy for significant performance gains. Extensive experimental results show that the proposed techniques significantly improve performance compared to the baseline, and reduce the searchable encryption overhead to a level that is practical in a computing environment with reasonable resources, such as the cloud.

RevDate: 2020-09-09

Ibrahim AU, Al-Turjman F, Sa'id Z, et al (2020)

Futuristic CRISPR-based biosensing in the cloud and internet of things era: an overview.

Multimedia tools and applications [Epub ahead of print].

Biosensor-based devices are transforming the medical diagnosis of diseases and the monitoring of patient signals. The development of smart and automated molecular diagnostic tools equipped with biomedical big data analysis, cloud computing and medical artificial intelligence can be an ideal approach for the detection and monitoring of diseases, precise therapy, and storage of data over the cloud to support decision making. This review focuses on the use of machine learning approaches for the development of futuristic CRISPR biosensors based on microchips and the use of the Internet of Things for wireless transmission of signals over the cloud to support decision making. The review also discusses the discovery of CRISPR, its usage as a gene editing tool, and CRISPR-based biosensors with high sensitivities of attomolar (10⁻¹⁸ M), femtomolar (10⁻¹⁵ M) and picomolar (10⁻¹² M), in comparison to conventional biosensors with sensitivities in the nanomolar (10⁻⁹ M) and micromolar (10⁻⁶ M) range. Additionally, the review outlines limitations and open research issues in the current state of CRISPR-based biosensing applications.

RevDate: 2020-09-09

Al-Zinati M, Alrashdan R, Al-Duwairi B, et al (2020)

A re-organizing biosurveillance framework based on fog and mobile edge computing.

Multimedia tools and applications [Epub ahead of print].

Biological threats are becoming a serious security issue for many countries across the world. Effective biosurveillance systems can primarily support appropriate responses to biological threats and consequently save human lives. Nevertheless, biosurveillance systems are costly to implement and hard to operate. Furthermore, they rely on static infrastructures that might not cope with the evolving dynamics of the monitored environment. In this paper, we present a reorganizing biosurveillance framework for the detection and localization of biological threats with fog and mobile edge computing support. In the proposed framework, a hierarchy of fog nodes is responsible for aggregating monitoring data within their regions and detecting potential threats. Although fog nodes are deployed on a fixed base station infrastructure, the framework provides an innovative technique for reorganizing the monitored environment structure to adapt to evolving environmental conditions and to overcome the limitations of the static base station infrastructure. Evaluation results illustrate the ability of the framework to localize biological threats and detect infected areas. Moreover, the results show the effectiveness of the reorganization mechanisms in adjusting the environment structure to cope with the highly dynamic environment.

RevDate: 2020-08-24

Blair GS (2020)

A Tale of Two Cities: Reflections on Digital Technology and the Natural Environment.

Patterns (New York, N.Y.), 1(5):100068.

Contemporary digital technologies can make a profound impact on our understanding of the natural environment in moving toward sustainable futures. Examples of such technologies include sources of new data (e.g., an environmental Internet of Things), the ability to store and process the large datasets that will result (e.g., through cloud computing), and the potential of data science and AI to make sense of these data alongside human experts. However, these same trends pose a threat to sustainable futures through, for example, the carbon footprint of digital technology and the risk that this footprint escalates through the very trends mentioned above.

RevDate: 2020-08-24

Li H, Lan C, Fu X, et al (2020)

A Secure and Lightweight Fine-Grained Data Sharing Scheme for Mobile Cloud Computing.

Sensors (Basel, Switzerland), 20(17): pii:s20174720.

With the explosion of various mobile devices and tremendous advances in cloud computing technology, mobile devices have been seamlessly integrated with powerful cloud computing in an innovative paradigm named Mobile Cloud Computing (MCC) to help mobile users store, compute and share their data with others. Meanwhile, Attribute-Based Encryption (ABE) has been envisioned as one of the most promising cryptographic primitives for providing secure and flexible fine-grained "one-to-many" access control, particularly in large-scale distributed systems with unknown participants. However, most existing ABE schemes are not suitable for MCC because they involve expensive pairing operations, which pose a formidable challenge for resource-constrained mobile devices and thus greatly delay the widespread adoption of MCC. To this end, in this paper we propose a secure and lightweight fine-grained data sharing scheme (SLFG-DSS) for the mobile cloud computing scenario that outsources the majority of time-consuming operations from resource-constrained mobile devices to resource-rich cloud servers. Different from current schemes, our novel scheme enjoys the following promising merits simultaneously: (1) it supports verifiable outsourced decryption, i.e., the mobile user can ensure the validity of the transformed ciphertext returned by the cloud server; (2) it resists decryption key exposure, i.e., our scheme can outsource the intensive computing tasks of the decryption phase without revealing the user's data or decryption key; (3) it achieves a CCA security level, so it can be applied to scenarios with higher security requirements. The concrete security proof and performance analysis illustrate that our novel scheme is proven secure and suitable for the mobile cloud computing environment.

RevDate: 2020-08-26
CmpDate: 2020-08-26

Sarker VK, Gia TN, Ben Dhaou I, et al (2020)

Smart Parking System with Dynamic Pricing, Edge-Cloud Computing and LoRa.

Sensors (Basel, Switzerland), 20(17): pii:s20174669.

The rapidly growing number of vehicles in recent years causes long traffic jams and makes traffic in cities difficult to manage. One of the most significant reasons for increased traffic jams on the road is random parking in unauthorized and non-permitted places. In addition, management of available parking places cannot achieve the expected reduction in traffic-congestion-related problems due to mismanagement, lack of real-time parking guidance for drivers, and general ignorance. As the number of roads, highways and related resources has not increased significantly, there is a rising need for a smart, dynamic and effective parking solution. Accordingly, with the use of multiple sensors, an appropriate communication network and the advanced processing capabilities of edge and cloud computing, a smart parking system can help manage parking effectively and make it easier for vehicle owners. In this paper, we propose a multi-layer architecture for a smart parking system consisting of multi-parametric parking slot sensor nodes, the latest long-range low-power wireless communication technology and edge-cloud computation. The proposed system enables dynamic management of parking for large areas while providing useful information to drivers about available parking locations and related services through near real-time monitoring of vehicles. Furthermore, we propose a dynamic pricing algorithm to yield the maximum possible revenue for the parking authority and optimum parking slot availability for drivers.
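A demand-responsive pricing rule of the general kind proposed, raising the tariff as occupancy approaches capacity, might be sketched as follows. The quadratic form and the coefficients are illustrative assumptions, not the authors' algorithm:

```python
def dynamic_price(base, occupancy, alpha=1.5):
    """Hypothetical demand-responsive tariff: the hourly price grows with the
    square of occupancy (0.0-1.0), so near-full lots cost more and drivers
    are steered toward emptier areas."""
    return round(base * (1 + alpha * occupancy ** 2), 2)

# Same base rate of 2.0 per hour at low, medium and high occupancy.
for occ in (0.2, 0.5, 0.9):
    print(occ, dynamic_price(2.0, occ))
```

Revenue maximization in the paper would additionally tune such a curve against observed demand; the sketch only shows the occupancy-to-price mapping.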

RevDate: 2020-09-18
CmpDate: 2020-08-26

Nguyen TT, Yeom YJ, Kim T, et al (2020)

Horizontal Pod Autoscaling in Kubernetes for Elastic Container Orchestration.

Sensors (Basel, Switzerland), 20(16):.

Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler and Cluster Autoscaler. Amongst them, HPA helps provide seamless service by dynamically scaling up and down the number of resource units, called pods, without having to restart the whole system. Kubernetes monitors default Resource Metrics including CPU and memory usage of host machines and their pods. On the other hand, Custom Metrics, provided by external software such as Prometheus, are customizable to monitor a wide collection of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge on its operational behaviors. We also discuss the essential difference between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA's performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes in the future.
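HPA's core scaling rule, as documented for Kubernetes, computes the desired replica count from the ratio of the current metric value to its target; a small sketch (the bounds and example values are hypothetical):

```python
from math import ceil

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Kubernetes HPA rule: desired = ceil(current * currentMetric / targetMetric),
    clamped to the [minReplicas, maxReplicas] bounds of the autoscaler spec."""
    desired = ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(desired, max_replicas))

# 4 pods averaging 180m CPU against a 100m target scale out to 8 pods.
print(desired_replicas(4, 180, 100))
```

Whether `current_metric` comes from Kubernetes Resource Metrics or from Prometheus Custom Metrics is exactly the KRM-versus-PCM distinction the paper examines; the formula itself is metric-agnostic.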

RevDate: 2020-09-05

Yang H, Y Kim (2020)

Design and Implementation of Fast Fault Detection in Cloud Infrastructure for Containerized IoT Services.

Sensors (Basel, Switzerland), 20(16):.

The container-based cloud is used in various service infrastructures as it is lighter and more portable than a virtual machine (VM)-based infrastructure and is configurable in both bare-metal and VM environments. The Internet-of-Things (IoT) cloud-computing infrastructure is also evolving from a VM-based to a container-based infrastructure. In IoT clouds, the service availability of the cloud infrastructure is more important for mission-critical IoT services, such as real-time health monitoring, vehicle-to-vehicle (V2V) communication, and industrial IoT, than for general computing services. However, in a container environment that runs on a VM, the current fault detection method considers only the container's infrastructure, limiting the level of availability necessary for mission-critical IoT cloud services. Therefore, in a container environment running on a VM, fault detection and recovery methods that consider both the VM and container levels are necessary. In this study, we analyze the fault-detection architecture in a container environment and design and implement a Fast Fault Detection Manager (FFDM) architecture using OpenStack and Kubernetes to realize fast fault detection. Through performance measurements, we verified that the FFDM can improve the fault detection time by more than three times over the existing method.

RevDate: 2020-09-01

Zhao L, Batta I, Matloff W, et al (2020)

Neuroimaging PheWAS (Phenome-Wide Association Study): A Free Cloud-Computing Platform for Big-Data, Brain-Wide Imaging Association Studies.

Neuroinformatics pii:10.1007/s12021-020-09486-4 [Epub ahead of print].

Large-scale, case-control genome-wide association studies (GWASs) have revealed genetic variations associated with diverse neurological and psychiatric disorders. Recent advances in neuroimaging and genomic databases of large healthy and diseased cohorts have empowered studies to characterize the effects of the discovered genetic factors on brain structure and function, implicating neural pathways and genetic mechanisms in the underlying biology. However, the unprecedented scale and complexity of the imaging and genomic data require new advanced biomedical data science tools to manage, process and analyze the data. In this work, we introduce Neuroimaging PheWAS (phenome-wide association study): a web-based system for searching over a wide variety of brain-wide imaging phenotypes to discover true system-level gene-brain relationships using a unified genotype-to-phenotype strategy. This design features a user-friendly graphical user interface (GUI) for anonymous data uploading, study definition and management, and interactive result visualizations, as well as a cloud-based computational infrastructure and multiple state-of-the-art methods for statistical association analysis and multiple comparison correction. We demonstrated the potential of Neuroimaging PheWAS with a case study analyzing the influences of the apolipoprotein E (APOE) gene on various brain morphological properties across the brain in the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. Benchmark tests were performed to evaluate the system's performance using data from UK Biobank. The Neuroimaging PheWAS system is freely available. It simplifies the execution of PheWAS on neuroimaging data and provides an opportunity for imaging genetics studies to elucidate the routes at play for specific genetic variants in disease, in the context of detailed imaging phenotypic data.
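One widely used multiple-comparison correction in brain-wide association settings of this kind is Benjamini-Hochberg FDR control; a self-contained sketch follows (the p-values are invented, and the abstract does not specify that this exact method is among the ones implemented):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: sort the m p-values, find the largest rank i
    with p_(i) <= (i / m) * alpha, and reject all hypotheses up to that rank."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()
        reject[order[:k + 1]] = True
    return reject

# Hypothetical p-values for five imaging phenotypes tested against one variant.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60]))
```

Unlike a plain Bonferroni cut at alpha/m, the step-up rule adapts the threshold to the observed p-value distribution, which matters when thousands of imaging phenotypes are tested at once.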

RevDate: 2020-09-09

McRoy C, Patel L, Gaddam DS, et al (2020)

Radiology Education in the Time of COVID-19: A Novel Distance Learning Workstation Experience for Residents.

Academic radiology [Epub ahead of print].

RATIONALE AND OBJECTIVES: The coronavirus disease of 2019 (COVID-19) pandemic has challenged the educational missions of academic radiology departments nationwide. We describe a novel cloud-based, HIPAA-compliant, and accessible education platform that simulates a live radiology workstation for the continued education of first-year radiology (R1) residents, with an emphasis on call preparation and peer-to-peer resident learning.

MATERIALS AND METHODS: Three tools were used in our education model: Pacsbin (Orion Medical Technologies, Baltimore, MD, pacsbin.com), Zoom (Zoom Video Communications, San Jose, CA, zoom.us), and Google Classroom (Google, Mountain View, CA, classroom.google.com). A workflow driven by senior radiology residents (R2-R4; n = 7) was established to provide scrollable Digital Imaging and Communications in Medicine (DICOM)-based case collections to the R1 residents (n = 9) via Pacsbin. A centralized classroom was created using Google Classroom for assignments, reports, and discussion, where attending radiologists could review content for accuracy. Daily case collections over an 8-week period from March to May were reviewed via Zoom video conference readout in small groups consisting of an R2-R4 teacher and R1 residents. Surveys were administered to R1 residents, R2-R4 residents, and attending radiologist participants.

RESULTS: One hundred percent of R1 residents felt this model improved their confidence and knowledge for taking independent call. Seventy-eight percent of the R1 residents (n = 7/9) demonstrated strong interest in continuing the project after pandemic-related restrictions are lifted. Based on a Likert "helpfulness" scale of 1-5, with 5 being most helpful, the project earned an overall average rating of 4.9. Two R2-R4 teachers demonstrated increased interest in pursuing academic radiology.

CONCLUSION: In response to unique pandemic circumstances, our institution implemented a novel cloud-based distance learning solution to simulate the radiology workstation. This platform helped continue the program's educational mission, offered first-year residents increased call preparation, and promoted peer-to-peer learning. This approach to case-based learning could be used at other institutions to educate residents.

RevDate: 2020-09-18
CmpDate: 2020-08-12

D'Amico G, L'Abbate P, Liao W, et al (2020)

Understanding Sensor Cities: Insights from Technology Giant Company Driven Smart Urbanism Practices.

Sensors (Basel, Switzerland), 20(16):.

The data-driven approach to sustainable urban development is becoming increasingly popular among cities across the world, owing to cities' attention to supporting smart and sustainable urbanism practices. In the current era of digitalization of urban services and processes, platform urbanism is becoming a fundamental tool to support smart urban governance and is helping in the formation of a new version of cities, i.e., City 4.0. This new version utilizes urban dashboards and platforms in the operation and management of its complex urban metabolism. These intelligent systems help maintain the robustness of our cities, integrating various sensors (e.g., Internet of Things) and big data analysis technologies (e.g., artificial intelligence) with the aim of optimizing urban infrastructures and services (e.g., water, waste, energy) and turning the urban system into a smart one. The study generates insights from sensor city best practices by placing some of the renowned projects implemented by Huawei, Cisco, Google, Ericsson, Microsoft, and Alibaba under the microscope. The investigation findings reveal that the sensor city approach: (a) has the potential to increase the smartness and sustainability level of cities; (b) manages to engage citizens and companies in the process of planning, monitoring, and analyzing urban processes; (c) raises awareness of local environmental, social, and economic issues; and (d) provides a novel city blueprint for urban administrators, managers, and planners. Nonetheless, the use of advanced technologies (e.g., real-time monitoring stations, cloud computing, surveillance cameras) poses a multitude of challenges related to: (a) the quality of the data used; (b) the level of protection of traditional and cybernetic urban security; (c) the necessary integration between the various urban infrastructures; and (d) the ability to transform feedback from stakeholders into innovative urban policies.

RevDate: 2020-08-10

Giménez-Alventosa V, Segrelles JD, Moltó G, et al (2020)

APRICOT: Advanced Platform for Reproducible Infrastructures in the Cloud via Open Tools.

Methods of information in medicine [Epub ahead of print].

BACKGROUND: Scientific publications are meant to exchange knowledge among researchers, but the inability to properly reproduce computational experiments limits the quality of scientific research. Furthermore, the literature shows that more than 50% of preclinical research is irreproducible, producing a huge waste of resources on unprofitable research in the life sciences field. As a consequence, scientific reproducibility is being fostered to promote Open Science through open databases and software tools that are typically deployed on existing computational resources. However, some computational experiments require complex virtual infrastructures, such as elastic clusters of PCs, that can be dynamically provisioned from multiple clouds. Obtaining these infrastructures requires not only an infrastructure provider, but also advanced knowledge in the cloud computing field.

OBJECTIVES: The main aim of this paper is to improve reproducibility in the life sciences to produce better and more cost-effective research. For that purpose, our intention is to simplify infrastructure usage and deployment for researchers.

METHODS: This paper introduces Advanced Platform for Reproducible Infrastructures in the Cloud via Open Tools (APRICOT), an open source extension for Jupyter to deploy deterministic virtual infrastructures across multiclouds for reproducible scientific computational experiments. To exemplify its utilization and how APRICOT can improve the reproduction of experiments with complex computation requirements, two examples in the field of life sciences are provided. All requirements to reproduce both experiments are disclosed within APRICOT and, therefore, can be reproduced by the users.

RESULTS: To show the capabilities of APRICOT, we have processed a real magnetic resonance image to accurately characterize a prostate cancer using a Message Passing Interface cluster deployed automatically with APRICOT. In addition, the second example shows how APRICOT scales the deployed infrastructure, according to the workload, using a batch cluster. This example consists of a multiparametric study of a positron emission tomography image reconstruction.

CONCLUSION: APRICOT's benefit is the integration of infrastructure deployment, management, and usage into Open Science, making experiments that involve specific computational infrastructures reproducible. All the experiment steps and details can be documented in the same Jupyter notebook, which includes infrastructure specifications, data storage, experiment execution, results gathering, and infrastructure termination. Thus, distributing the experiment notebook and the needed data should be enough to reproduce the experiment.

RevDate: 2020-08-11

Apolo-Apolo OE, Pérez-Ruiz M, Martínez-Guanter J, et al (2020)

A Cloud-Based Environment for Generating Yield Estimation Maps From Apple Orchards Using UAV Imagery and a Deep Learning Technique.

Frontiers in plant science, 11:1086.

Farmers require accurate yield estimates, since these are key to predicting the volume of stock needed at supermarkets and to organizing harvesting operations. In many cases, the yield is visually estimated by the crop producer, but this approach is neither accurate nor time efficient. This study presents a rapid sensing and yield estimation scheme using off-the-shelf aerial imagery and deep learning. A Region-based Convolutional Neural Network was trained to detect and count the number of apple fruit on individual trees located on the orthomosaic built from images taken by the unmanned aerial vehicle (UAV). The results obtained with the proposed approach were compared with apple counts made in situ by an agrotechnician, and an R2 value of 0.86 was obtained (MAE: 10.35 and RMSE: 13.56). As only part of each tree's fruit was visible in the top-view images, linear regression was used to estimate the total number of apples on each tree. An R2 value of 0.80 (MAE: 128.56 and RMSE: 130.56) was obtained. From the number of fruits detected and the tree coordinates, two shapefiles were generated using a Python script in Google Colab. From this information, two yield maps were displayed: one with information per tree and another with information per tree row. We are confident that these results will help to maximize crop producers' outputs via optimized orchard management.
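The calibration step the abstract describes, a linear regression mapping fruit visible in top-view imagery to the total per tree, can be sketched in plain Python. This is an illustration only, not the study's code, and the calibration data below are hypothetical:

```python
# Illustrative sketch: fit a closed-form ordinary least-squares line that maps
# apples detected in the UAV orthomosaic to the ground-truth total per tree.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical calibration trees: (apples detected in imagery, true total count)
detected = [40, 55, 62, 75, 90]
total = [120, 160, 185, 220, 265]

a, b = fit_line(detected, total)

def estimate_total(visible_count):
    """Extrapolate a tree's total fruit count from its visible count."""
    return a * visible_count + b
```

In the study, predictions like `estimate_total` would then be joined with tree coordinates to build the per-tree and per-row yield maps.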

RevDate: 2020-08-20

Petit RA, TD Read (2020)

Bactopia: a Flexible Pipeline for Complete Analysis of Bacterial Genomes.

mSystems, 5(4):.

Sequencing of bacterial genomes using Illumina technology has become such a standard procedure that often data are generated faster than can be conveniently analyzed. We created a new series of pipelines called Bactopia, built using Nextflow workflow software, to provide efficient comparative genomic analyses for bacterial species or genera. Bactopia consists of a data set setup step (Bactopia Data Sets [BaDs]), which creates a series of customizable data sets for the species of interest, the Bactopia Analysis Pipeline (BaAP), which performs quality control, genome assembly, and several other functions based on the available data sets and outputs the processed data to a structured directory format, and a series of Bactopia Tools (BaTs) that perform specific postprocessing on some or all of the processed data. BaTs include pan-genome analysis, computing average nucleotide identity between samples, extracting and profiling the 16S genes, and taxonomic classification using highly conserved genes. It is expected that the number of BaTs will increase to fill specific applications in the future. As a demonstration, we performed an analysis of 1,664 public Lactobacillus genomes, focusing on Lactobacillus crispatus, a species that is a common part of the human vaginal microbiome. Bactopia is an open source system that can scale from projects as small as one bacterial genome to ones including thousands of genomes and that allows for great flexibility in choosing comparison data sets and options for downstream analysis. Bactopia code can be accessed at https://www.github.com/bactopia/bactopia. IMPORTANCE It is now relatively easy to obtain a high-quality draft genome sequence of a bacterium, but bioinformatic analysis requires organization and optimization of multiple open source software tools. We present Bactopia, a pipeline for bacterial genome analysis, as an option for processing bacterial genome data.
Bactopia also automates downloading of data from multiple public sources and species-specific customization. Because the pipeline is written in the Nextflow language, analyses can be scaled from individual genomes on a local computer to thousands of genomes using cloud resources. As a usage example, we processed 1,664 Lactobacillus genomes from public sources and used comparative analysis workflows (Bactopia Tools) to identify and analyze members of the L. crispatus species.

RevDate: 2020-08-25

Navarro E, Costa N, A Pereira (2020)

A Systematic Review of IoT Solutions for Smart Farming.

Sensors (Basel, Switzerland), 20(15):.

World population growth is increasing the demand for food production. Furthermore, the reduction of the workforce in rural areas and the increase in production costs are challenges for food production nowadays. Smart farming is a farm management concept that may use the Internet of Things (IoT) to overcome the current challenges of food production. This work uses the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to systematically review the existing literature on smart farming with IoT. The review aims to identify the main devices, platforms, network protocols, and data processing technologies, and the applicability of smart farming with IoT to agriculture. The review shows an evolution in the way data are processed in recent years. Traditional approaches mostly used data in a reactive manner. In more recent approaches, however, new technological developments allowed the use of data to prevent crop problems and to improve the accuracy of crop diagnosis.

RevDate: 2020-08-05

Ranchal R, Bastide P, Wang X, et al (2020)

Disrupting Healthcare Silos: Addressing Data Volume, Velocity and Variety with a Cloud-Native Healthcare Data Ingestion Service.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

Healthcare enterprises are starting to adopt cloud computing due to its numerous advantages over traditional infrastructures. This has become a necessity because of the increased volume, velocity, and variety of healthcare data, and the need to facilitate data correlation and large-scale analysis. Cloud computing infrastructures offer continuous acquisition of data from multiple heterogeneous sources, efficient data integration, and big data analysis. At the same time, security, availability, and disaster recovery are critical factors in the adoption of cloud computing. However, the migration of healthcare workloads to the cloud is not straightforward due to the vagueness of healthcare data standards, the heterogeneity and sensitive nature of healthcare data, and the many regulations that govern its usage. This paper highlights the need for healthcare data acquisition using cloud infrastructures and presents the challenges, requirements, use cases, and best practices for building a state-of-the-art healthcare data ingestion service on the cloud.

RevDate: 2020-08-14

Frake AN, Peter BG, Walker ED, et al (2020)

Leveraging big data for public health: Mapping malaria vector suitability in Malawi with Google Earth Engine.

PloS one, 15(8):e0235697.

In an era of big data, the availability of satellite-derived global climate, terrain, and land cover imagery presents an opportunity for modeling the suitability of malaria disease vectors at fine spatial resolutions, across temporal scales, and over vast geographic extents. Leveraging cloud-based geospatial analytical tools, we present an environmental suitability model that considers water resources, flow accumulation areas, precipitation, temperature, vegetation, and land cover. In contrast to predictive models generated using spatially and temporally discontinuous mosquito presence information, this model provides continuous fine-spatial resolution information on the biophysical drivers of suitability. For the purposes of this study the model is parameterized for Anopheles gambiae s.s. in Malawi for the rainy (December-March) and dry seasons (April-November) in 2017; however, the model may be repurposed to accommodate different mosquito species, temporal periods, or geographical boundaries. Final products elucidate the drivers and potential habitat of Anopheles gambiae s.s. Rainy season results are presented by quartile of precipitation; Quartile four (Q4) identifies areas most likely to become inundated and shows 7.25% of Malawi exhibits suitable water conditions (water only) for Anopheles gambiae s.s., approximately 16% for water plus another factor, and 8.60% is maximally suitable, meeting suitability thresholds for water presence, terrain characteristics, and climatic conditions. Nearly 21% of Malawi is suitable for breeding based on land characteristics alone and 28.24% is suitable according to climate and land characteristics. Only 6.14% of the total land area is suboptimal. Dry season results show 25.07% of the total land area is suboptimal or unsuitable. Approximately 42% of Malawi is suitable based on land characteristics alone during the dry season, and 13.11% is suitable based on land plus another factor. 
Less than 2% meets the suitability criteria for climate, water, and land combined. The findings illustrate the environmental drivers of suitability for malaria vectors, providing an opportunity for a more comprehensive approach to malaria control that includes not only modeled species distributions but also the underlying drivers of suitability, for more effective environmental management.
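The overlay logic behind these percentages, thresholding each environmental layer and classifying cells by how many criteria they meet, can be illustrated in plain Python. The study itself used Google Earth Engine; this sketch and its grid values are ours, for illustration only:

```python
# Illustration of threshold-and-overlay suitability classification on three
# flattened raster-like layers (water, land, climate). Values are made up.

def suitability(water, land, climate, thresholds):
    """Return a per-cell suitability class for three co-registered layers."""
    classes = []
    for w, l, c in zip(water, land, climate):
        ok = (w >= thresholds["water"],
              l >= thresholds["land"],
              c >= thresholds["climate"])
        if all(ok):
            classes.append("maximal")        # all three criteria met
        elif sum(ok) == 2:
            classes.append("partial")        # two criteria met
        elif any(ok):
            classes.append("single-factor")  # exactly one criterion met
        else:
            classes.append("suboptimal")     # no criteria met
    return classes

def percent(classes, label):
    """Share of the study area (in %) assigned a given class."""
    return 100.0 * classes.count(label) / len(classes)
```

Statistics like "8.60% is maximally suitable" in the abstract correspond to `percent(classes, "maximal")` computed over the full national grid.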

RevDate: 2020-08-05

Kilper DC, N Peyghambarian (2020)

Changing evolution of optical communication systems at the network edges.

Applied optics, 59(22):G209-G218.

Metro and data center networks are growing rapidly, while global fixed Internet traffic growth shows evidence of slowing. An analysis of the distribution of network capacity versus distance reveals capacity gaps in networks important to wireless backhaul networks and cloud computing. These networks are built from layers of electronic aggregation switches. Photonic integration and software-defined networking control are identified as key enabling technologies for the use of optical switching in these applications. Advances in optical switching for data center and metro networks in the CIAN engineering research center are reviewed and examined as potential directions for optical communication system evolution.

RevDate: 2020-09-08

Camara Gradim LC, Archanjo Jose M, Marinho Cezar da Cruz D, et al (2020)

IoT Services and Applications in Rehabilitation: An Interdisciplinary and Meta-Analysis Review.

IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society, 28(9):2043-2052.

The Internet of Things (IoT) designates a technological system that enhances connectivity between people and things; it has shown promise for developing and improving smart rehabilitation systems and supports the e-Health area.

OBJECTIVE: To identify works involving IoT that deal with the development, architecture, application, implementation, or use of technological equipment in patient rehabilitation. Technology or Method: A systematic review based on Kitchenham's guidelines combined with the PRISMA protocol. The search was carried out comprehensively in the IEEE Xplore Digital Library, Web of Science, and Scopus databases; data extraction for assessment and analysis considered only primary-study articles related to IoT and patient rehabilitation.

RESULTS: We found 29 studies that addressed the research question, and all were classified based on scientific evidence.

CONCLUSIONS: This systematic review presents the current state of the art on IoT in health rehabilitation and identifies findings from interdisciplinary research in different clinical cases with technological systems including wearable devices and cloud computing. The gaps in IoT for rehabilitation include the need for more clinical randomized controlled trials and longitudinal studies. Clinical Impact: This paper is interdisciplinary, spanning areas such as the Internet of Things and information and communication technology, with application to the medical and rehabilitation domains.

RevDate: 2020-08-25

Jo JH, Jo B, Kim JH, et al (2020)

Implementation of IoT-Based Air Quality Monitoring System for Investigating Particulate Matter (PM10) in Subway Tunnels.

International journal of environmental research and public health, 17(15):.

Air quality monitoring in subway tunnels in South Korea is a topic of great interest because more than 8 million passengers per day use the subway, which has a concentration of particulate matter (PM10) greater than that above ground. In this paper, an Internet of Things (IoT)-based air quality monitoring system, consisting of an air quality measurement device called Smart-Air, an IoT gateway, and a cloud computing web server, is presented to monitor the concentration of PM10 in subway tunnels. The goal of the system is to efficiently monitor air quality at any time and from anywhere by combining IoT and cloud computing technologies. This system was successfully implemented in Incheon's subway tunnels to investigate levels of PM10. The concentration of particulate matter was greatest between the morning and afternoon rush hours. In addition, the residence time of PM10 increased with the depth of the monitoring location. During the experimentation period, the South Korean government implemented an air quality management system, and a follow-up analysis was performed to assess how this change improved conditions. Based on the experiments, the system was efficient and effective at monitoring particulate matter and improving air quality in subway tunnels.

RevDate: 2020-08-22

Watts P, Breedon P, Nduka C, et al (2020)

Cloud Computing Mobile Application for Remote Monitoring of Bell's Palsy.

Journal of medical systems, 44(9):149.

Mobile applications provide the healthcare industry with a means of connecting with patients in their own homes using their own personal mobile devices, such as tablets and phones. This allows therapists to monitor the progress of people under their care from a remote location, with the added benefit that patients are familiar with their own mobile devices, thereby reducing the time required to train patients on the new technology. There is also the added benefit to the health service that no additional cost is required to purchase devices. The Facial Remote Activity Monitoring Eyewear (FRAME) mobile application and web service framework has been designed to work on the iOS and Android platforms, the two most commonly used today. Results: The system utilizes secure, cloud-based data storage to collect, analyse, and store data; this allows near-real-time, secure remote access by therapists to monitor their patients and intervene when required. The underlying framework has been designed to be secure, anonymous, and flexible to ensure compliance with the Data Protection Act and the General Data Protection Regulation (GDPR); this new standard came into effect in May 2018 and replaces the Data Protection Act in the UK and Europe.

RevDate: 2020-07-28

Krissaane I, De Niz C, Gutiérrez-Sacristán A, et al (2020)

Scalability and cost-effectiveness analysis of whole genome-wide association studies on Google Cloud Platform and Amazon Web Services.

Journal of the American Medical Informatics Association : JAMIA pii:5876972 [Epub ahead of print].

OBJECTIVE: Advancements in human genomics have generated a surge of available data, fueling the growth and accessibility of databases for more comprehensive, in-depth genetic studies.

METHODS: We provide a straightforward and innovative methodology to optimize cloud configuration in order to conduct genome-wide association studies. We utilized Spark clusters on both Google Cloud Platform and Amazon Web Services, as well as Hail (http://doi.org/10.5281/zenodo.2646680), for analysis and exploration of genomic variant datasets.

RESULTS: Comparative evaluation of numerous cloud-based cluster configurations demonstrates a successful and unprecedented compromise between speed and cost for performing genome-wide association studies on 4 distinct whole-genome sequencing datasets. Results are consistent across the 2 cloud providers and could be highly useful for accelerating research in genetics.

CONCLUSIONS: We present a timely piece for one of the most frequently asked questions when moving to the cloud: what is the trade-off between speed and cost?
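The speed-versus-cost trade-off the authors quantify can be caricatured with a toy Amdahl-style cost model (the model and all numbers here are our assumptions, not the paper's measurements): adding workers shortens the run but bills more node-hours, so the cheapest configuration meeting a deadline is rarely the largest cluster.

```python
# Toy cloud-cluster cost model: runtime has a serial part plus a parallel part
# that shrinks as 1/workers; cost = workers * hourly price * runtime.

def runtime_hours(base_hours, workers, parallel_fraction=0.95):
    """Amdahl-style runtime estimate for a cluster of `workers` nodes."""
    serial = base_hours * (1 - parallel_fraction)
    return serial + base_hours * parallel_fraction / workers

def cheapest_config(base_hours, price_per_node_hour, deadline_hours,
                    max_workers=64):
    """Smallest-cost (workers, hours, dollars) meeting the deadline, or None."""
    best = None
    for w in range(1, max_workers + 1):
        t = runtime_hours(base_hours, w)
        if t > deadline_hours:
            continue  # too slow: misses the deadline
        cost = w * price_per_node_hour * t
        if best is None or cost < best[2]:
            best = (w, t, cost)
    return best

# Hypothetical job: 10 single-node hours, $0.50/node-hour, 2-hour deadline.
best = cheapest_config(10, 0.5, 2)
```

Because the serial fraction never shrinks, cost grows roughly linearly with cluster size once the deadline is met, so the optimum sits at the smallest feasible cluster.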

RevDate: 2020-09-05

Li B, Gould J, Yang Y, et al (2020)

Cumulus provides cloud-based data analysis for large-scale single-cell and single-nucleus RNA-seq.

Nature methods, 17(8):793-798.

Massively parallel single-cell and single-nucleus RNA sequencing has opened the way to systematic tissue atlases in health and disease, but as the scale of data generation grows, so does the need for computational pipelines for scaled analysis. Here we developed Cumulus, a cloud-based framework for analyzing large-scale single-cell and single-nucleus RNA sequencing datasets. Cumulus combines the power of cloud computing with improvements in algorithms and implementation to achieve high scalability, low cost, user-friendliness, and integrated support for a comprehensive set of features. We benchmark Cumulus on the Human Cell Atlas Census of Immune Cells dataset of bone marrow cells and show that it substantially improves efficiency over conventional frameworks, while maintaining or improving the quality of results, enabling large-scale studies.

RevDate: 2020-08-25

Song Y, Zhu Y, Nan T, et al (2020)

Accelerating Faceting Wide-Field Imaging Algorithm with FPGA for SKA Radio Telescope as a Vast Sensor Array.

Sensors (Basel, Switzerland), 20(15):.

The SKA (Square Kilometer Array) radio telescope will become the most sensitive telescope by correlating a huge number of antenna nodes to form a vast array of sensors over a region of more than one hundred kilometers. Faceting, a wide-field imaging algorithm, is a novel approach to image construction from sensing data when the curvature of the Earth's surface cannot be ignored. However, traditional processors, whether in cloud computing or even the most sophisticated supercomputers, cannot meet the extremely high computational performance requirements. In this paper, we propose the design and implementation of high-efficiency FPGA (Field-Programmable Gate Array)-based hardware acceleration of the key algorithm, faceting, in SKA, focusing on phase rotation and gridding, which are the most time-consuming phases of the faceting algorithm. Through analysis of the algorithm's behavior and bottlenecks, we design and optimize the memory architecture and computing logic of the FPGA-based accelerator. Simulation and tests on the FPGA confirm the acceleration result of our design, showing that the acceleration performance we achieved on phase rotation is 20× that of previous work. We then further designed and optimized an efficient microstructure of loop unrolling and pipelining for the gridding accelerator, and system simulation was done to confirm the performance of our structure. The result shows an acceleration ratio of 5.48× compared with the software implementation of the gridding stage. Hence, our approach enables efficient acceleration of the faceting algorithm on FPGAs with the high performance needed to meet the computational constraints of SKA as a representative vast sensor array.
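For readers unfamiliar with the gridding step being accelerated: it accumulates irregularly sampled visibilities onto a regular (u, v) grid prior to an FFT-based image transform. A deliberately simplified nearest-neighbor software sketch follows (real gridders convolve each sample with a kernel, and this is not the paper's FPGA design):

```python
# Minimal nearest-neighbor gridding: each (u, v, visibility) sample is added
# to the closest cell of a size x size complex-valued grid centered at (0, 0).

def grid_visibilities(samples, size, cell):
    """samples: iterable of (u, v, complex visibility); returns a 2-D grid."""
    grid = [[0j] * size for _ in range(size)]
    half = size // 2  # place the (u, v) origin at the grid center
    for u, v, vis in samples:
        iu = int(round(u / cell)) + half
        iv = int(round(v / cell)) + half
        if 0 <= iu < size and 0 <= iv < size:
            # Accumulate; a production gridder would spread the sample over
            # neighboring cells with a convolution kernel instead.
            grid[iv][iu] += vis
    return grid
```

The irregular memory-access pattern of this accumulation loop is precisely what makes gridding a bottleneck and a target for the custom memory architecture described in the paper.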

RevDate: 2020-08-21

Saarikko J, Niela-Vilen H, Ekholm E, et al (2020)

Continuous 7-Month Internet of Things-Based Monitoring of Health Parameters of Pregnant and Postpartum Women: Prospective Observational Feasibility Study.

JMIR formative research, 4(7):e12417.

BACKGROUND: Monitoring during pregnancy is vital to ensure the mother's and infant's health. Remote continuous monitoring provides health care professionals with significant opportunities to observe health-related parameters in their patients and to detect any pathological signs at an early stage of pregnancy, and may thus partially replace traditional appointments.

OBJECTIVE: This study aimed to evaluate the feasibility of continuously monitoring the health parameters (physical activity, sleep, and heart rate) of nulliparous women throughout pregnancy and until 1 month postpartum, with a smart wristband and an Internet of Things (IoT)-based monitoring system.

METHODS: This prospective observational feasibility study used a convenience sample of 20 nulliparous women from the Hospital District of Southwest Finland. Continuous monitoring of physical activity/step counts, sleep, and heart rate was performed with a smart wristband for 24 hours a day, 7 days a week, over 7 months (6 months during pregnancy and 1 month postpartum). The smart wristband was connected to a cloud server. The total number of possible monitoring days was 203 during pregnancy weeks 13 to 42 and 28 during the postpartum period.

RESULTS: Valid physical activity data were available for a median of 144 (range 13-188) days (75% of possible monitoring days), and valid sleep data were available for a median of 137 (range 0-184) days (72% of possible monitoring days) per participant during pregnancy. During the postpartum period, a median of 15 (range 0-25) days (54% of possible monitoring days) of valid physical activity data and 16 (range 0-27) days (57% of possible monitoring days) of valid sleep data were available. Physical activity decreased from the second trimester to the third trimester by a mean of 1793 (95% CI 1039-2548) steps per day (P<.001). The decrease continued by a mean of 1339 (95% CI 474-2205) steps to the postpartum period (P=.004). Sleep during pregnancy also decreased from the second trimester to the third trimester by a mean of 20 minutes (95% CI -0.7 to 42 minutes; P=.06) and sleep time shortened an additional 1 hour (95% CI 39 minutes to 1.5 hours) after delivery (P<.001). The mean resting heart rate increased toward the third trimester and returned to the early pregnancy level during the postpartum period.

CONCLUSIONS: The smart wristband with IoT technology was a feasible system for collecting representative data on continuous variables of health parameters during pregnancy. Continuous monitoring provides real-time information between scheduled appointments and thus may help target and tailor pregnancy follow-up.

RevDate: 2020-08-27

Jabbar R, Kharbeche M, Al-Khalifa K, et al (2020)

Blockchain for the Internet of Vehicles: A Decentralized IoT Solution for Vehicles Communication Using Ethereum.

Sensors (Basel, Switzerland), 20(14):.

The concept of smart cities has become prominent in modern metropolises due to the emergence of embedded and connected smart devices, systems, and technologies, which have enabled the connection of every "thing" to the Internet. Therefore, in the upcoming era of the Internet of Things, the Internet of Vehicles (IoV) will play a crucial role in newly developed smart cities. The IoV has the potential to solve various traffic and road safety problems effectively in order to prevent fatal crashes. However, a particular challenge in the IoV, especially in Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications, is to ensure fast, secure transmission and accurate recording of the data. To overcome these challenges, this work adapts Blockchain technology for real-time applications (RTA) to solve Vehicle-to-Everything (V2X) communication problems. The main novelty of this paper is therefore the development of a Blockchain-based IoT system that establishes secure communication and creates an entirely decentralized cloud computing platform. Moreover, the authors qualitatively tested the performance and resilience of the proposed system against common security attacks. Computational tests showed that the proposed solution solved the main challenges of V2X communications, such as security, centralization, and lack of privacy. In addition, it guaranteed easy data exchange between the different actors of intelligent transportation systems.

RevDate: 2020-07-21

Onnela JP (2020)

Opportunities and challenges in the collection and analysis of digital phenotyping data.

Neuropsychopharmacology : official publication of the American College of Neuropsychopharmacology pii:10.1038/s41386-020-0771-3 [Epub ahead of print].

The broad adoption and use of smartphones has led to fundamentally new opportunities for capturing social, behavioral, and cognitive phenotypes in free-living settings, outside of research laboratories and clinics. Predicated on the use of existing personal devices rather than the introduction of additional instrumentation, smartphone-based digital phenotyping presents us with several opportunities and challenges in data collection and data analysis. These two aspects are strongly coupled, because decisions about what data to collect and how to collect them constrain what statistical analyses can be carried out, now and years later, and therefore ultimately determine what scientific, clinical, and public health questions may be asked and answered. Digital phenotyping combines the excitement of fast-paced technologies, such as smartphones, cloud computing and machine learning, with deep mathematical and statistical questions, and it does this in the service of a better understanding of our own behavior in ways that are objective, scalable, and reproducible. We will discuss some fundamental aspects of the collection and analysis of digital phenotyping data, which takes us on a brief tour of several important scientific and technological concepts, from the open-source paradigm to computational complexity, with some unexpected insights provided by fields as varied as zoology and quantum mechanics.

RevDate: 2020-07-16

Mubarakali A, Durai AD, Alshehri M, et al (2020)

Fog-Based Delay-Sensitive Data Transmission Algorithm for Data Forwarding and Storage in Cloud Environment for Multimedia Applications.

Big data [Epub ahead of print].

Fog computing plays a vital role in data transmission to distributed devices in the Internet of Things (IoT) and other network paradigms. The fundamental element of fog computing is an additional layer added between an IoT device/node and a cloud server; these fog nodes are used to speed up time-critical applications. Current research efforts and user trends are pushing for fog computing, and the path is far from being paved. Unless fog can reap the benefits of applying software-defined networking and network function virtualization techniques, network monitoring will be an additional burden for it. However, the seamless integration of these techniques into fog computing is not easy and will be a challenging task. To overcome the aforementioned issues, the fog-based delay-sensitive data transmission algorithm develops a robust optimal technique to ensure low and predictable delay in delay-sensitive applications such as traffic monitoring and vehicle tracking. The method reduces latency by storing and processing the data close to the source of information, at an optimal depth in the network. The deployment results show that the proposed algorithm reduces round-trip time by 15.67 ms and average delay by 2 s on 10 KB, 100 KB, and 1 MB datasets across the India, Singapore, and Japan Amazon datacenter regions, compared with conventional methodologies.

RevDate: 2020-08-18
CmpDate: 2020-07-20

Slamnik-Kriještorac N, Silva EBE, Municio E, et al (2020)

Network Service and Resource Orchestration: A Feature and Performance Analysis within the MEC-Enhanced Vehicular Network Context.

Sensors (Basel, Switzerland), 20(14): pii:s20143852.

By providing storage and computational resources at the network edge, which enables hosting applications closer to the mobile users, Multi-Access Edge Computing (MEC) uses the mobile backhaul, and the network core more efficiently, thereby reducing the overall latency. Fostering the synergy between 5G and MEC brings ultra-reliable low-latency in data transmission, and paves the way towards numerous latency-sensitive automotive use cases, with the ultimate goal of enabling autonomous driving. Despite the benefits of significant latency reduction, bringing MEC platforms into 5G-based vehicular networks imposes severe challenges towards poorly scalable network management, as MEC platforms usually represent a highly heterogeneous environment. Therefore, there is a strong need to perform network management and orchestration in an automated way, which, being supported by Software Defined Networking (SDN) and Network Function Virtualization (NFV), will further decrease the latency. With recent advances in SDN, along with NFV, which aim to facilitate management automation for tackling delay issues in vehicular communications, we study the closed-loop life-cycle management of network services, and map such cycle to the Management and Orchestration (MANO) systems, such as ETSI NFV MANO. In this paper, we provide a comprehensive overview of existing MANO solutions, studying their most important features to enable network service and resource orchestration in MEC-enhanced vehicular networks. Finally, using a real testbed setup, we conduct and present an extensive performance analysis of Open Baton and Open Source MANO that are, due to their lightweight resource footprint, and compliance to ETSI standards, suitable solutions for resource and service management and orchestration within the network edge.

RevDate: 2020-07-13

Zeng Y, J Zhang (2020)

A machine learning model for detecting invasive ductal carcinoma with Google Cloud AutoML Vision.

Computers in biology and medicine, 122:103861.

OBJECTIVES: This study is aimed to assess the feasibility of AutoML technology for the identification of invasive ductal carcinoma (IDC) in whole slide images (WSI).

METHODS: The study presents an experimental machine learning (ML) model based on Google Cloud AutoML Vision instead of a handcrafted neural network. A public dataset of 278,124 labeled histopathology images is used as the original dataset for model creation. In order to balance the number of positive and negative IDC samples, this study also augments the original public dataset by rotating a large portion of the positive image samples. As a result, a total of 378,215 labeled images was used.

RESULTS: A score of 91.6% average accuracy is achieved during model evaluation, as measured by the area under the precision-recall curve (AuPRC). A subsequent test on a held-out test dataset (unseen by the model) yields a balanced accuracy of 84.6%. These results outperform those reported in earlier studies. Similar performance is observed in a generalization test with new breast tissue samples we collected from the hospital.

CONCLUSIONS: The results obtained from this study demonstrate the maturity and feasibility of an AutoML approach for IDC identification. The study also shows the advantage of AutoML approach when combined at scale with cloud computing.
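The balanced accuracy reported above weights sensitivity and specificity equally, which matters for imbalanced IDC datasets. As a rough illustration (not the authors' AutoML pipeline; the labels and predictions below are made up), the metric can be computed as:

```python
def balanced_accuracy(y_true, y_pred):
    # Balanced accuracy: mean of the recall on each class
    # (sensitivity on positives, specificity on negatives).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Toy predictions on an imbalanced set (hypothetical, not the study's data)
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
print(round(balanced_accuracy(y_true, y_pred), 3))  # prints 0.762
```

Unlike plain accuracy, this score cannot be inflated by always predicting the majority (negative) class.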

RevDate: 2020-08-21
CmpDate: 2020-08-21

Wang SY, Pershing S, Lee AY, et al (2020)

Big data requirements for artificial intelligence.

Current opinion in ophthalmology, 31(5):318-323.

PURPOSE OF REVIEW: To summarize how big data and artificial intelligence technologies have evolved, their current state, and next steps to enable future generations of artificial intelligence for ophthalmology.

RECENT FINDINGS: Big data in health care is ever increasing in volume and variety, enabled by the widespread adoption of electronic health records (EHRs) and standards for health data information exchange, such as Digital Imaging and Communications in Medicine and Fast Healthcare Interoperability Resources. Simultaneously, the development of powerful cloud-based storage and computing architectures supports a fertile environment for big data and artificial intelligence in health care. The high volume and velocity of imaging and structured data is one of the reasons why ophthalmology is at the forefront of artificial intelligence research. Still needed are consensus labeling conventions for performing supervised learning on big data, promotion of data sharing and reuse, standards for sharing artificial intelligence model architectures, and access to artificial intelligence models through open application program interfaces (APIs).

SUMMARY: Future requirements for big data and artificial intelligence include fostering reproducible science, continuing open innovation, and supporting the clinical use of artificial intelligence by promoting standards for data labels, data sharing, artificial intelligence model architecture sharing, and accessible code and APIs.

RevDate: 2020-07-10

Jiang W, Guo L, Wu H, et al (2020)

Use of a smartphone for imaging, modelling, and evaluation of keloids.

Burns : journal of the International Society for Burn Injuries pii:S0305-4179(20)30392-2 [Epub ahead of print].

OBJECTIVE: We used a smartphone to construct three-dimensional (3D) models of keloids, then quantitatively simulate and evaluate these tissues.

METHODS: We uploaded smartphone photographs of 33 keloids on the chest, shoulder, neck, limbs, or abdomen of 28 patients. We used the parallel computing power of a graphics processing unit to calculate the spatial co-ordinates of each pixel in the cloud, then generated 3D models. We obtained the longest diameter, thickness, and volume of each keloid, then compared these data to findings obtained by traditional methods.

RESULTS: Measurement repeatability was excellent: intraclass correlation coefficients were 0.998 for longest diameter, 0.978 for thickness, and 0.993 for volume. When measuring the longest diameter and volume, the results agreed with Vernier caliper measurements and with measurements obtained after the injection of water into the cavity. When measuring thickness, the findings were similar to those obtained by ultrasound. Bland-Altman analyses showed that the ratios of 95% confidence interval extremes were 3.03% for longest diameter, 3.03% for volume, and 6.06% for thickness.

CONCLUSION: Smartphones were used to acquire data that was then employed to construct 3D models of keloids; these models yielded quantitative data with excellent reliability and validity. The smartphone can serve as an additional tool for keloid diagnosis and research, and will facilitate medical treatment over the internet.
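The Bland-Altman analysis used above compares two measurement methods via the mean difference (bias) and its 95% limits of agreement. A minimal sketch with hypothetical paired values (not the study's measurements):

```python
import statistics

def bland_altman(a, b):
    """Return the bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired longest-diameter measurements (model vs. caliper, mm)
model   = [20.1, 35.4, 12.8, 44.0, 27.5]
caliper = [19.8, 35.9, 12.5, 43.6, 27.9]
bias, lo, hi = bland_altman(model, caliper)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

Points falling outside [lo, hi] correspond to the "95% confidence interval extremes" ratio the authors report.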

RevDate: 2020-07-10

Tilton JC, Wolfe RE, Lin GG, et al (2019)

On-Orbit Measurement of the Effective Focal Length and Band-to-Band Registration of Satellite-Borne Whiskbroom Imaging Sensors.

IEEE journal of selected topics in applied earth observations and remote sensing, 12(11):4622-4633.

We have developed an approach for the measurement of the Effective Focal Length (EFL) and Band-to-Band Registration (BBR) of selected spectral bands of satellite-borne whiskbroom imaging sensors from on-orbit data. Our approach is based on simulating the coarser spatial resolution whiskbroom sensor data with finer spatial resolution Landsat 7 ETM+ or Landsat 8 OLI data using the geolocation (Earth location) information from each sensor, and computing the correlation between the simulated and original data. For each scan of a selected spectral band of the whiskbroom data set, various subsets of the data are examined to find the subset with the highest spatial correlation between the original and simulated data using the nominal geolocation information. Then, for this best subset, the focal length value and the spatial shift are varied to find the values that produce the highest spatial correlation between the original and simulated data. This best focal length value is taken to be the measured instrument EFL and the best spatial shift is taken to be the registration of the whiskbroom data relative to the Landsat data, from which the BBR is inferred. Best results are obtained with cloud-free subsets with contrasting land features. This measurement is repeated over other scans with cloud-free subsets. We demonstrate our approach with on-orbit data from the Aqua and Terra MODIS instruments and SNPP and J1 VIIRS instruments.
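The core of the measurement, varying a spatial shift and keeping the value that maximizes the correlation between original and simulated data, can be illustrated with a toy 1-D analogue (the data and search range below are invented; the actual method operates on geolocated 2-D imagery):

```python
def pearson(a, b):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_shift(ref, cand, max_shift=3):
    # Try integer shifts and keep the one maximizing correlation with ref
    best = None
    for s in range(-max_shift, max_shift + 1):
        seg_ref = ref[max(0, s): len(ref) + min(0, s)]
        seg_cand = cand[max(0, -s): len(cand) + min(0, -s)]
        r = pearson(seg_ref, seg_cand)
        if best is None or r > best[1]:
            best = (s, r)
    return best

# Toy 1-D "scanline": cand is ref delayed by 2 samples (hypothetical data)
ref  = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]
cand = [0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
shift, r = best_shift(ref, cand)
# Here a shift of -2 means cand must be advanced 2 samples to align with ref
print(shift)  # prints -2
```

In the paper the same maximize-the-correlation search is run over both the spatial shift (giving the registration) and the focal length value (giving the EFL).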

RevDate: 2020-07-09

Di Gennaro SF, A Matese (2020)

Evaluation of novel precision viticulture tool for canopy biomass estimation and missing plant detection based on 2.5D and 3D approaches using RGB images acquired by UAV platform.

Plant methods, 16:91.

Background: The knowledge of vine vegetative status within a vineyard plays a key role in canopy management in order to achieve a correct vine balance and reach the final desired yield/quality. Detailed information about canopy architecture and missing plants distribution provides useful support for farmers/winegrowers to optimize canopy management practices and the replanting process, respectively. In the last decade, there has been a progressive diffusion of UAV (Unmanned Aerial Vehicles) technologies for Precision Viticulture purposes, as fast and accurate methodologies for spatial variability of geometric plant parameters. The aim of this study was to implement an unsupervised and integrated procedure of biomass estimation and missing plants detection, using both the 2.5D-surface and 3D-alphashape methods.

Results: Both methods showed good overall accuracy with respect to ground-truth biomass measurements, with high R2 values (0.71 and 0.80 for 2.5D and 3D, respectively). The 2.5D method led to an overestimation, since it treats the vine as a rectangular cuboid. In contrast, the 3D method provided more accurate results thanks to the alphashape algorithm, which is capable of detecting each single shoot and the holes within the canopy. Regarding missing plant detection, the 3D approach again performed better in cases where plants were hidden by the shoots of adjacent plants or where a sparse canopy left empty spaces along the row; in these cases the 2.5D method, which relies on the length of row sections with thickness below the threshold used (0.10 m), tended to return false negatives and false positives, respectively.

Conclusions: This paper describes a rapid and objective tool for the farmer to promptly identify canopy management strategies and drive replanting decisions. The 3D approach provided results closer to the real canopy volume and higher performance in missing plant detection, although the dense-cloud-based analysis required more processing time. Looking ahead, given the continuous technological evolution in computing performance, overcoming the current bottleneck of the pre- and post-processing phases for large image datasets should mainstream this methodology.

RevDate: 2020-08-07

Pastor-Vargas R, Tobarra L, Robles-Gómez A, et al (2020)

A WoT Platform for Supporting Full-Cycle IoT Solutions from Edge to Cloud Infrastructures: A Practical Case.

Sensors (Basel, Switzerland), 20(13):.

Internet of Things (IoT) learning involves the acquisition of transversal skills ranging from development based on IoT devices and sensors (edge computing) to the connection of the devices themselves to management environments that allow the storage and processing (cloud computing) of data generated by sensors. The usual development cycle for IoT applications consists of three stages: stage 1 covers the description of the devices and basic interaction with sensors; in stage 2, data acquired by the devices/sensors are sent via communication models from the edge to the management middleware in the cloud; finally, stage 3 focuses on processing and presentation models, which present the most relevant indicators for IoT devices and sensors. Students must acquire all the necessary skills and abilities to understand and develop these types of applications, so lecturers need an infrastructure that enables students to learn to develop full IoT applications. A Web of Things (WoT) platform named Labs of Things at UNED (LoT@UNED) has been used for this goal. This paper shows the fundamentals and features of this infrastructure, and how the different phases of the full development cycle of solutions in IoT environments are implemented using LoT@UNED. The proposed system has been tested in several computer science subjects. Students can perform remote experimentation with a collaborative WoT learning environment in the cloud, including the possibility of analyzing the data generated by IoT sensors.

RevDate: 2020-08-12

Lavysh D, G Neu-Yilik (2020)

UPF1-Mediated RNA Decay-Danse Macabre in a Cloud.

Biomolecules, 10(7):.

Nonsense-mediated RNA decay (NMD) is the prototype example of a whole family of RNA decay pathways that unfold around a common central effector protein called UPF1. While NMD in yeast appears to be a linear pathway, NMD in higher eukaryotes is a multifaceted phenomenon with high variability with respect to substrate RNAs, degradation efficiency, effector proteins and decay-triggering RNA features. Despite increasing knowledge of the mechanistic details, it seems ever more difficult to define NMD and to clearly distinguish it from a growing list of other UPF1-mediated RNA decay pathways (UMDs). With a focus on mammalian NMD, we critically examine the prevailing NMD models and the gaps and inconsistencies in these models. By exploring the minimal requirements for NMD and other UMDs, we try to elucidate whether they are separate and definable pathways, or rather variations of the same phenomenon. Finally, we suggest that the operating principle of the UPF1-mediated decay family could be considered similar to that of a computing cloud providing a flexible infrastructure with rapid elasticity and dynamic access according to specific user needs.

RevDate: 2020-07-04

Hyder A, AA May (2020)

Translational data analytics in exposure science and environmental health: a citizen science approach with high school students.

Environmental health : a global access science source, 19(1):73.

BACKGROUND: Translational data analytics aims to apply data analytics principles and techniques to bring about broader societal or human impact. Translational data analytics for environmental health is an emerging discipline and the objective of this study is to describe a real-world example of this emerging discipline.

METHODS: We implemented a citizen-science project at a local high school. Multiple cohorts of citizen scientists, who were students, fabricated and deployed low-cost air quality sensors. A cloud-computing solution provided real-time air quality data for risk screening purposes, data analytics and curricular activities.

RESULTS: The citizen-science project engaged with 14 high school students over a four-year period that is continuing to this day. The project led to the development of a website that displayed sensor-based measurements in local neighborhoods and a GitHub-like repository for open source code and instructions. Preliminary results showed a reasonable comparison between sensor-based and EPA land-based federal reference monitor data for CO and NOx.

CONCLUSIONS: Initial sensor-based data collection efforts showed reasonable agreement with land-based federal reference monitors but more work needs to be done to validate these results. Lessons learned were: 1) the need for sustained funding because citizen science-based project timelines are a function of community needs/capacity and building interdisciplinary rapport in academic settings and 2) the need for a dedicated staff to manage academic-community relationships.

RevDate: 2020-06-29

Saleh N, A Abo Agyla (2020)

An integrated assessment system for the accreditation of medical laboratories.

Biomedizinische Technik. Biomedical engineering pii:/j/bmte.ahead-of-print/bmt-2019-0133/bmt-2019-0133.xml [Epub ahead of print].

Medical laboratory accreditation has become essential for laboratories to be trusted in the diagnosis of diseases. It is performed at regular intervals to assure the competence of quality management systems (QMS) against pre-defined standards. However, few attempts have been made to assess the quality level of medical laboratory services, and there is no realistic study that classifies and analyzes laboratory performance based on a computational model. The purpose of this study was to develop an integrated system for medical laboratory accreditation that assesses QMS against ISO 15189, together with a deep analysis of the factors that sustain accreditation. The system starts by establishing a core matrix that maps QMS elements to ISO 15189 clauses. From this map, a questionnaire was developed to measure performance, and score indices were calculated for the QMS. A fuzzy logic model was designed based on the calculated scores to classify medical laboratories according to their readiness for accreditation. Further, in case of failure to achieve accreditation, cause-and-effect root analysis is performed to identify the causes. Finally, cloud computing principles were employed to launch a web application that facilitates user interaction with the proposed system. For verification, the system was tested on a dataset of 12 medical laboratories in Egypt. Results demonstrated system robustness and consistency. Thus, the system can serve as a self-assessment tool that reveals points of weakness and strength.

RevDate: 2020-07-17
CmpDate: 2020-06-29

Ogiela L, Ogiela MR, H Ko (2020)

Intelligent Data Management and Security in Cloud Computing.

Sensors (Basel, Switzerland), 20(12):.

This paper presents the authors' own techniques of secret data management and protection, with particular attention paid to techniques securing data services. Among the solutions discussed are information-sharing protocols dedicated to the tasks of secret (confidential) data sharing. Such solutions are presented in an algorithmic form, aimed at protecting and securing data against unauthorized acquisition. The data-sharing protocols execute the tasks of securing a special type of information, i.e., data services. The area of data protection is defined at various levels, within which the tasks of data management and protection are executed. The authors' solution for securing data uses cryptographic threshold techniques to split a secret among a specified group of secret trustees, enhanced by the application of linguistic methods for describing the shared secret; this forms a new class of protocols, i.e., intelligent linguistic threshold schemes. The solutions presented in this paper for service management and security are dedicated to various levels of data management, which can be differentiated both within the structure of a given entity and in its environment. A special example is cloud management processes, for which the feasibility of applying the discussed protocols is also assessed. The presented solutions are based on an innovative approach in which a special formal graph is used to create a secret representation, which can then be divided and transmitted over a distributed network.
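The threshold splitting of a secret among a group of trustees that underlies such schemes can be illustrated with the textbook Shamir construction; this sketch is not the authors' linguistic threshold scheme, only the classical building block such schemes extend:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def split(secret, n, k):
    """Split `secret` into n shares, any k of which reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Split a secret among 5 trustees; any 3 suffice, fewer reveal nothing
shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```

Any subset smaller than k yields a polynomial that is undetermined at x = 0, which is what makes the threshold property hold information-theoretically.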

RevDate: 2020-08-19
CmpDate: 2020-08-19

Coman Schmid D, Crameri K, Oesterle S, et al (2020)

SPHN - The BioMedIT Network: A Secure IT Platform for Research with Sensitive Human Data.

Studies in health technology and informatics, 270:1170-1174.

The BioMedIT project is funded by the Swiss government as an integral part of the Swiss Personalized Health Network (SPHN), aiming to provide researchers with access to a secure, powerful and versatile IT infrastructure for doing data-driven research on sensitive biomedical data while ensuring data privacy protection. The BioMedIT network gives researchers the ability to securely transfer, store, manage and process sensitive research data. The underlying BioMedIT nodes provide compute and storage capacity that can be used locally or through a federated environment. The network operates under a common Information Security Policy using state-of-the-art security techniques. It utilizes cloud computing, virtualization, compute accelerators (GPUs), big data storage as well as federation technologies to lower computational boundaries for researchers and to guarantee that sensitive data can be processed in a secure and lawful way. Building on existing expertise and research infrastructure at the partnering Swiss institutions, the BioMedIT network establishes a competitive Swiss private-cloud - a secure national infrastructure resource that can be used by researchers of Swiss universities, hospitals and other research institutions.

RevDate: 2020-08-27
CmpDate: 2020-08-27

Niyitegeka D, Bellafqira R, Genin E, et al (2020)

Secure Collapsing Method Based on Fully Homomorphic Encryption.

Studies in health technology and informatics, 270:412-416.

In this paper, we propose a new approach for performing privacy-preserving genome-wide association study (GWAS) in cloud environments. This method allows a Genomic Research Unit (GRU) who possesses genetic variants of diseased individuals (cases) to compare his/her data against genetic variants of healthy individuals (controls) from a Genomic Research Center (GRC). The originality of this work stands on a secure version of the collapsing method based on the logistic regression model considering that all data of GRU are stored into the cloud. To do so, we take advantage of fully homomorphic encryption and of secure multiparty computation. Experiment results carried out on real genetic data using the BGV cryptosystem indicate that the proposed scheme provides the same results as the ones achieved on clear data.

RevDate: 2020-06-25

Lawlor B, RD Sleator (2020)

The democratization of bioinformatics: A software engineering perspective.

GigaScience, 9(6):.

Today, thanks to advances in cloud computing, it is possible for small teams of software developers to produce internet-scale products, a feat that was previously the preserve of large organizations. Herein, we describe how these advances in software engineering can be made more readily available to bioinformaticians. In the same way that cloud computing has democratized access to distributed systems engineering for generalist software engineers, access to scalable and reproducible bioinformatic engineering can be democratized for generalist bioinformaticians and biologists. We present solutions, based on our own efforts, to achieve this goal.

RevDate: 2020-08-07

Ehwerhemuepha L, Gasperino G, Bischoff N, et al (2020)

HealtheDataLab - a cloud computing solution for data science and advanced analytics in healthcare with application to predicting multi-center pediatric readmissions.

BMC medical informatics and decision making, 20(1):115.

BACKGROUND: There is a shortage of medical informatics and data science platforms that use cloud computing on electronic medical record (EMR) data and have the computing capacity for analyzing big data. We implemented, described, and applied a cloud computing solution utilizing the fast health interoperability resources (FHIR) standard and a state-of-the-art parallel distributed computing platform for advanced analytics.

METHODS: We utilized the architecture of the modern predictive analytics platform called Cerner® HealtheDataLab and described the suite of cloud computing services and Apache Projects that it relies on. We validated the platform by replicating and improving on a previous single pediatric institution study/model on readmission and developing a multi-center model of all-cause readmission for pediatric-age patients using the Cerner® Health Facts Deidentified Database (now updated and referred to as the Cerner Real World Data). We retrieved a subset of 1.4 million pediatric encounters consisting of 48 hospitals' data on pediatric encounters in the database based on a priori inclusion criteria. We built and analyzed corresponding random forest and multilayer perceptron (MLP) neural network models using HealtheDataLab.

RESULTS: Using the HealtheDataLab platform, we developed a random forest model and multi-layer perceptron model with AUC of 0.8446 (0.8444, 0.8447) and 0.8451 (0.8449, 0.8453) respectively. We showed the distribution in model performance across hospitals and identified a set of novel variables under previous resource utilization and generic medications that may be used to improve existing readmission models.

CONCLUSION: Our results suggest that high performance, elastic cloud computing infrastructures such as the platform presented here can be used for the development of highly predictive models on EMR data in a secure and robust environment. This in turn can lead to new clinical insights/discoveries.
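The AUC values reported above can be understood through the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch with invented risk scores (not the study's models or data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive outranks a negative (ties count 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical readmission-risk scores from some classifier
pos = [0.9, 0.8, 0.7, 0.55]       # patients who were readmitted
neg = [0.6, 0.4, 0.3, 0.2, 0.1]   # patients who were not
print(auc(pos, neg))  # prints 0.95
```

This O(n*m) form is fine for illustration; production evaluation on 1.4 million encounters would use a sort-based implementation or a library routine instead.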

RevDate: 2020-08-19
CmpDate: 2020-08-19

Deep B, Mathur I, N Joshi (2020)

Coalescing IoT and Wi-Fi technologies for an optimized approach in urban route planning.

Environmental science and pollution research international, 27(27):34434-34441.

The quality of the air we breathe is one of the more serious environmental challenges that governments face around the world. It is a matter of concern for almost all developed and developing countries. The National Air Quality Index (NAQI) in India was first initiated and unveiled by the central government under the Swachh Bharat Abhiyan (Clean India Campaign). It was launched to spread cleanliness and awareness, encouraging all citizens of India to work towards a clean and healthy environment. This index is computed from values obtained by monitoring eight types of pollutants that commonly permeate our immediate environment: particulate matter PM10; particulate matter PM2.5; nitrogen dioxide; sulfur dioxide; carbon monoxide; lead; ammonia; and ozone. Studies have shown that almost 90% of particulate matter is produced from vehicular emissions, dust, debris on roads, industries, and construction sites spanning rural, semi-urban, and urban areas. While the State and Central governments have devised and implemented several schemes to keep air pollution levels under control, these alone have proved inadequate in cases such as the Delhi region of India. The Internet of Things (IoT) offers a range of options that extend into the domain of environmental management. Using an online monitoring system based on IoT technologies, users can stay informed about fluctuating levels of air pollution. In this paper, the design of a low-cost pollution measurement kit built around a dust sensor, capable of transmitting data to a cloud service through a Wi-Fi module, is described. A system overview of urban route planning is also proposed. The proposed model can make users aware of pollutant concentrations at any point in time and can also serve as useful input for the design of a least-polluted-path prediction app. Hence, the proposed model can help travelers plan a less polluted route in urban areas.
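India's NAQI reports the worst pollutant sub-index as the overall index, with each sub-index obtained by linear interpolation between concentration breakpoints. A minimal sketch (the breakpoint values below are illustrative; the authoritative tables are published by the CPCB):

```python
def sub_index(conc, breakpoints):
    """Map a pollutant concentration onto the AQI scale by linear interpolation."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= conc <= c_hi:
            return i_lo + (i_hi - i_lo) * (conc - c_lo) / (c_hi - c_lo)
    raise ValueError("concentration outside breakpoint table")

# Illustrative breakpoint tables: (conc_low, conc_high, index_low, index_high)
PM25 = [(0, 30, 0, 50), (30, 60, 50, 100), (60, 90, 100, 200), (90, 120, 200, 300)]
PM10 = [(0, 50, 0, 50), (50, 100, 50, 100), (100, 250, 100, 200)]

# Hypothetical sensor readings (ug/m3) from a kit like the one described
readings = {"pm2.5": (45, PM25), "pm10": (120, PM10)}

# The overall index is the worst (maximum) pollutant sub-index
aqi = max(sub_index(c, bp) for c, bp in readings.values())
print(round(aqi, 1))  # prints 113.3
```

A least-polluted-path planner would evaluate this index per road segment and feed the values to a shortest-path search weighted by pollution rather than distance.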

RevDate: 2020-07-15

Lee D, Moon H, Oh S, et al (2020)

mIoT: Metamorphic IoT Platform for On-Demand Hardware Replacement in Large-Scaled IoT Applications.

Sensors (Basel, Switzerland), 20(12): pii:s20123337.

As the Internet of Things (IoT) becomes more pervasive in our daily lives, the number of devices that connect to IoT edges and the data generated at the edges are rapidly increasing. On account of the bottlenecks in servers due to the increase in data, as well as security and privacy issues, the IoT paradigm has shifted from cloud computing to edge computing. Pursuant to this trend, embedded devices require complex computation capabilities. However, due to various constraints, edge devices cannot be equipped with enough hardware to process data, so flexibility of operation is reduced because of the limitations of fixed hardware functions relative to cloud computing. Recently, as application fields and collected data types diversify and, in particular, applications requiring complex computation such as artificial intelligence (AI) and signal processing are applied at the edge, flexible processing and computation capabilities based on hardware acceleration are required. In this paper, to meet these needs, we propose a new IoT platform, called a metamorphic IoT (mIoT) platform, which can provide various kinds of hardware acceleration with limited hardware platform resources, through on-demand transmission and reconfiguration of the required hardware at the edge instead of transferring sensing data to a server. The proposed platform reconfigures the edge's hardware with minimal overhead, based on a probabilistic value known as callability. The mIoT consists of reconfigurable edge devices based on the RISC-V architecture and a server that manages the reconfiguration of edge devices based on callability. Through various experiments, we confirmed that the callability-based mIoT platform can provide the hardware required by an edge device in real time. In addition, by performing various functions with small hardware, power consumption, which is a major constraint of IoT, can be reduced.

RevDate: 2020-08-21

Suver C, Thorogood A, Doerr M, et al (2020)

Bringing Code to Data: Do Not Forget Governance.

Journal of medical Internet research, 22(7):e18087.

Developing or independently evaluating algorithms in biomedical research is difficult because of restrictions on access to clinical data. Access is restricted because of privacy concerns, the proprietary treatment of data by institutions (fueled in part by the cost of data hosting, curation, and distribution), concerns over misuse, and the complexities of applicable regulatory frameworks. The use of cloud technology and services can address many of the barriers to data sharing. For example, researchers can access data in high performance, secure, and auditable cloud computing environments without the need for copying or downloading. An alternative path to accessing data sets requiring additional protection is the model-to-data approach. In model-to-data, researchers submit algorithms to run on secure data sets that remain hidden. Model-to-data is designed to enhance security and local control while enabling communities of researchers to generate new knowledge from sequestered data. Model-to-data has not yet been widely implemented, but pilots have demonstrated its utility when technical or legal constraints preclude other methods of sharing. We argue that model-to-data can make a valuable addition to our data sharing arsenal, with 2 caveats. First, model-to-data should only be adopted where necessary to supplement rather than replace existing data-sharing approaches given that it requires significant resource commitments from data stewards and limits scientific freedom, reproducibility, and scalability. Second, although model-to-data reduces concerns over data privacy and loss of local control when sharing clinical data, it is not an ethical panacea. Data stewards will remain hesitant to adopt model-to-data approaches without guidance on how to do so responsibly. To address this gap, we explored how commitments to open science, reproducibility, security, respect for data subjects, and research ethics oversight must be re-evaluated in a model-to-data context.

RevDate: 2020-09-18

Margheri A, Masi M, Miladi A, et al (2020)

Decentralised provenance for healthcare data.

International journal of medical informatics, 141:104197 pii:S1386-5056(19)31203-1 [Epub ahead of print].

OBJECTIVE: The creation and exchange of patients' Electronic Healthcare Records have developed significantly in the last decade. Patients' records are however distributed in data silos across multiple healthcare facilities, posing technical and clinical challenges that may endanger patients' safety. Current healthcare sharing systems ensure interoperability of patients' records across facilities, but they have limits in presenting doctors with the clinical context of the data in the records. We design and implement a platform for managing provenance tracking of Electronic Healthcare Records based on blockchain technology, compliant with the latest healthcare standards and following the patient-informed consent preferences.

METHODS: The platform leverages two pillars: the use of international standards such as Integrating the Healthcare Enterprise (IHE), Health Level Seven International (HL7) and Fast Healthcare Interoperability Resources (FHIR) to achieve interoperability, and the use of a provenance creation process that, by design, avoids personal data storage within the blockchain. The platform consists of: (1) a smart contract implemented within the Hyperledger Fabric blockchain that manages provenance according to W3C PROV for medical documents in standardised formats (e.g. a CDA document, a FHIR resource, a DICOM study, etc.); (2) a Java Proxy that intercepts all the document submissions and retrievals for which provenance shall be evaluated; (3) a service used to retrieve the PROV document.
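The by-design privacy property described above amounts to keeping the document itself off-chain and recording only a digest in the provenance entry. A minimal sketch, with illustrative field names rather than the platform's actual schema:

```python
# Sketch: only a SHA-256 digest of the medical document (never the document
# itself) enters the blockchain provenance record. Field names are
# illustrative, not the platform's schema.
import hashlib
import json
from datetime import datetime, timezone

def make_prov_record(document_bytes, doc_id, actor):
    digest = hashlib.sha256(document_bytes).hexdigest()
    # W3C PROV-style entity/agent/activity triple; the payload stays off-chain.
    return {
        "entity": {"id": doc_id, "sha256": digest},
        "agent": actor,
        "activity": {
            "type": "DocumentSubmission",
            "time": datetime.now(timezone.utc).isoformat(),
        },
    }

record = make_prov_record(b"<ClinicalDocument>...</ClinicalDocument>",
                          "doc-001", "proxy-gateway")
print(json.dumps(record["entity"], indent=2))
```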

RESULTS: We integrated our decentralised platform with the SpiritEHR engine, an enterprise-grade healthcare system, and we stored and retrieved the available documents in Mandel's sample CDA repository, which contained no protected health information. Using a cloud-based blockchain solution, we observed that the overhead added to the typical processing time of reading and writing medical data is on the order of milliseconds. Moreover, the integration of the Proxy at the level of exchanged messages in EHR systems allows transparent usage of provenance data in multiple health computing domains such as decision making, data reconciliation, and patient consent auditing.

CONCLUSIONS: By using international healthcare standards and a cloud-based blockchain deployment, we delivered a solution that can manage provenance of patients' records via transparent integration within the routine operations on healthcare data.

RevDate: 2020-06-15

Ben Hassen H, Ayari N, B Hamdi (2020)

A home hospitalization system based on the Internet of things, Fog computing and cloud computing.

Informatics in medicine unlocked, 20:100368.

In recent years, the world has witnessed a significant increase in the number of elderly people, who often suffer from chronic diseases, and in recent months it has witnessed a major spread of the new coronavirus (COVID-19), which has led to thousands of deaths, especially among the elderly and people with chronic diseases. The coronavirus has also caused many problems in hospitals, which are no longer able to accommodate large numbers of patients. The virus has also begun to spread among medical and paramedical teams, posing a major risk to the health of patients staying in hospitals. To reduce the spread of the virus and maintain the health of patients who would otherwise need a hospital stay, home hospitalization is one of the best possible solutions. This paper proposes a home hospitalization system based on the Internet of Things (IoT), fog computing, and cloud computing, technologies that have contributed significantly to the development of the healthcare sector. Such systems allow patients to recover and receive treatment in their homes and among their families: the patient's health and the environmental state of the hospitalization room are monitored, enabling doctors to follow the hospitalization process and make recommendations to patients and their supervisors through monitoring units and mobile applications developed for this purpose. The results of the evaluation showed great acceptance of this system by patients and doctors alike.

RevDate: 2020-09-04

Kurzawski JW, Mikellidou K, Morrone MC, et al (2020)

The visual white matter connecting human area prostriata and the thalamus is retinotopically organized.

Brain structure & function, 225(6):1839-1853.

The human visual system is capable of processing visual information from fovea to the far peripheral visual field. Recent fMRI studies have shown a full and detailed retinotopic map in area prostriata, located ventro-dorsally and anterior to the calcarine sulcus along the parieto-occipital sulcus with strong preference for peripheral and wide-field stimulation. Here, we report the anatomical pattern of white matter connections between area prostriata and the thalamus encompassing the lateral geniculate nucleus (LGN). To this end, we developed and utilized an automated pipeline comprising a series of Apps that run openly on the cloud computing platform brainlife.io to analyse 139 subjects of the Human Connectome Project (HCP). We observe a continuous and extended bundle of white matter fibers from which two subcomponents can be extracted: one passing ventrally parallel to the optic radiations (OR) and another passing dorsally circumventing the lateral ventricle. Interestingly, the loop travelling dorsally connects the thalamus with the central visual field representation of prostriata located anteriorly, while the other loop travelling more ventrally connects the LGN with the more peripheral visual field representation located posteriorly. We then analyse an additional cohort of 10 HCP subjects using a manual plane extraction method outside brainlife.io to study the relationship between the two extracted white matter subcomponents and eccentricity, myelin and cortical thickness gradients within prostriata. Our results are consistent with a retinotopic segregation recently demonstrated in the OR, connecting the LGN and V1 in humans and reveal for the first time a retinotopic segregation regarding the trajectory of a fiber bundle between the thalamus and an associative visual area.

RevDate: 2020-08-24
CmpDate: 2020-08-24

Alnajrani HM, Norman AA, BH Ahmed (2020)

Privacy and data protection in mobile cloud computing: A systematic mapping study.

PloS one, 15(6):e0234312 pii:PONE-D-19-33649.

As a result of a shift in the world of technology, the combination of ubiquitous mobile networks and cloud computing produced the mobile cloud computing (MCC) domain. Because privacy and data protection are major concerns of cloud users, they are receiving substantial attention in the field. Currently, a considerable number of papers have been published on MCC, with growing interest in privacy and data protection. Alongside this advance in MCC, however, no specific investigation highlights the results of the existing studies on privacy and data protection, and no particular study highlights the trends and open issues in the domain. Accordingly, the objective of this paper is to highlight the results of existing primary studies published on privacy and data protection in MCC and to identify current trends and open issues. In this investigation, a systematic mapping study was conducted with a set of six research questions. A total of 1711 studies published from 2009 to 2019 were obtained. Following a filtering process, a collection of 74 primary studies was selected. As a result, the present data privacy threats, attacks, and solutions were identified, and the ongoing trends in data privacy practice were observed. Moreover, the most utilized measures, research type, and contribution type facets were emphasized. Additionally, the current open research issues in privacy and data protection in MCC were highlighted. The results demonstrate the current state of the art of privacy and data protection in MCC; the conclusions will help researchers identify research trends and open issues in MCC and offer practitioners useful information.

RevDate: 2020-08-18

Wang Y, An G, Becker A, et al (2020)

Rapid community-driven development of a SARS-CoV-2 tissue simulator.

bioRxiv : the preprint server for biology.

The 2019 novel coronavirus, SARS-CoV-2, is an emerging pathogen of critical significance to international public health. Knowledge of the interplay between molecular-scale virus-receptor interactions, single-cell viral replication, intracellular-scale viral transport, and emergent tissue-scale viral propagation is limited. Moreover, little is known about immune system-virus-tissue interactions and how these can result in low-level (asymptomatic) infections in some cases and acute respiratory distress syndrome (ARDS) in others, particularly with respect to presentation in different age groups or pre-existing inflammatory risk factors like diabetes. Given the nonlinear interactions within and among each of these processes, multiscale simulation models can shed light on the emergent dynamics that lead to divergent outcomes, identify actionable "choke points" for pharmacologic interactions, screen potential therapies, and identify potential biomarkers that differentiate patient outcomes. Given the complexity of the problem and the acute need for an actionable model to guide therapy discovery and optimization, we introduce and iteratively refine a prototype of a multiscale model of SARS-CoV-2 dynamics in lung tissue. The first prototype model was built and shared internationally as open source code and an online interactive model in under 12 hours, and community domain expertise is driving rapid refinements with a two-to-four week release cycle. In a sustained community effort, this consortium is integrating data and expertise across virology, immunology, mathematical biology, quantitative systems physiology, cloud and high performance computing, and other domains to accelerate our response to this critical threat to international health.

RevDate: 2020-07-20

Khan F, Khan MA, Abbas S, et al (2020)

Cloud-Based Breast Cancer Prediction Empowered with Soft Computing Approaches.

Journal of healthcare engineering, 2020:8017496.

Developing countries are still striving to improve their health sectors. Breast cancer is a disease commonly found among women, and past research has shown that when the cancer is detected at a very early stage, the chances of overcoming the disease are higher than when it is treated or detected at a later stage. This article proposes a cloud-based intelligent BCP-T1F-SVM system with two variations/models, BCP-T1F and BCP-SVM, employing two main soft computing algorithms. The BCP-T1F-SVM expert system specifically determines the stage and the type of cancer a person is suffering from, elaborating the severity of the stages and the extent to which a patient is affected. The BCP-T1F expert system is employed in the diagnosis of breast cancer at an initial stage, taking the different stages of the cancer into account, while BCP-SVM gives the higher precision of the two breast cancer detection models. The calculations and evaluation in this research revealed that BCP-T1F achieves 96.56% accuracy, whereas BCP-SVM gives an accuracy of 97.06%; the research therefore concludes that BCP-SVM is better than BCP-T1F. The results were reviewed and endorsed by medical experts at Sheikh Zayed Hospital, Lahore, Pakistan, and Cavan General Hospital, Lisdaran, Cavan, Ireland.

RevDate: 2020-06-06

Soriano-Valdez D, Pelaez-Ballestas I, Manrique de Lara A, et al (2020)

The basics of data, big data, and machine learning in clinical practice.

Clinical rheumatology pii:10.1007/s10067-020-05196-z [Epub ahead of print].

Health informatics and biomedical computing have introduced the use of computer methods to analyze clinical information and provide tools to assist clinicians during the diagnosis and treatment of diverse clinical conditions. With the amount of information that can be obtained in the healthcare setting, new methods to acquire, organize, and analyze data are being developed each day, including new applications in the world of big data and machine learning. In this review, we first present the most basic concepts in data science, including the structural hierarchy of information and how it is managed. A section is dedicated to discussing topics relevant to the acquisition of data, notably the availability and use of online resources such as survey software and cloud computing services. Along with digital datasets, these tools make it possible to create more diverse models and facilitate collaboration. Afterward, we describe concepts and techniques in machine learning used to process and analyze health data, especially those most widely applied in rheumatology. Overall, the objective of this review is to aid in the comprehension of how data science is used in health, with a special emphasis on its relevance to the field of rheumatology. It provides clinicians with basic tools for approaching and understanding new trends in health informatics analysis currently used in rheumatology practice. If clinicians understand the potential use and limitations of health informatics, this will facilitate interdisciplinary conversations and continued projects relating to data, big data, and machine learning.

RevDate: 2020-06-24

Zhang C, Liu L, Zhou L, et al (2020)

Self-Powered Sensor for Quantifying Ocean Surface Water Waves Based on Triboelectric Nanogenerator.

ACS nano, 14(6):7092-7100.

Ocean waves carry various kinds of marine information, but it is generally difficult to quantify them with the high precision needed for ocean development and utilization. Here, we report a self-powered and high-performance triboelectric ocean-wave spectrum sensor (TOSS) fabricated using a tubular triboelectric nanogenerator (TENG) and a hollow ball buoy, which not only can adapt to the measurement of ocean surface water waves in any direction but also can eliminate the influence of seawater on the performance of the sensor. Based on the high-sensitivity advantage of the TENG, an ultrahigh sensitivity of 2530 mV/mm (100 times higher than that of previous work) and a minimal monitoring error of 0.1% are achieved in monitoring wave height and wave period, respectively. Importantly, six basic ocean-wave parameters (wave height, wave period, wave frequency, wave velocity, wavelength, and wave steepness), the wave velocity spectrum, and the mechanical energy spectrum have been derived from the electrical signals of the TOSS. Our findings not only provide ocean-wave parameters but also offer significant and accurate data support for cloud computing on ocean big data.
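Given the stated sensitivity of 2530 mV/mm, wave height follows directly from the peak voltage, and the remaining basic parameters can be derived from the measured period. The sketch below uses the deep-water linear dispersion relation for wavelength, which is an assumption of this illustration; the paper does not state which dispersion relation it applies.

```python
# Illustrative derivation of the six basic wave parameters from a TENG-style
# voltage reading. The deep-water approximation (L = g*T^2 / 2*pi) is an
# assumption of this sketch, not a detail taken from the paper.
import math

SENSITIVITY_MV_PER_MM = 2530.0  # sensitivity reported in the abstract
G = 9.81                        # gravitational acceleration, m/s^2

def wave_parameters(peak_voltage_mv, period_s):
    height_m = (peak_voltage_mv / SENSITIVITY_MV_PER_MM) / 1000.0  # mm -> m
    frequency_hz = 1.0 / period_s
    wavelength_m = G * period_s ** 2 / (2.0 * math.pi)  # deep-water approx.
    velocity_ms = wavelength_m / period_s
    steepness = height_m / wavelength_m
    return {"height": height_m, "frequency": frequency_hz,
            "wavelength": wavelength_m, "velocity": velocity_ms,
            "steepness": steepness}

p = wave_parameters(peak_voltage_mv=50600.0, period_s=4.0)
print(round(p["height"], 3), round(p["wavelength"], 1))  # 0.02 25.0
```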

RevDate: 2020-09-10

Wang L, CA Alexander (2020)

Big data analytics in medical engineering and healthcare: methods, advances and challenges.

Journal of medical engineering & technology, 44(6):267-283.

Big data analytics are gaining popularity in medical engineering and healthcare use cases. Stakeholders are finding big data analytics reduce medical costs and personalise medical services for each individual patient. Big data analytics can be used in large-scale genetics studies, public health, personalised and precision medicine, new drug development, etc. The introduction of the types, sources, and features of big data in healthcare as well as the applications and benefits of big data and big data analytics in healthcare is key to understanding healthcare big data and will be discussed in this article. Major methods, platforms and tools of big data analytics in medical engineering and healthcare are also presented. Advances and technology progress of big data analytics in healthcare are introduced, which includes artificial intelligence (AI) with big data, infrastructure and cloud computing, advanced computation and data processing, privacy and cybersecurity, health economic outcomes and technology management, and smart healthcare with sensing, wearable devices and Internet of things (IoT). Current challenges of dealing with big data and big data analytics in medical engineering and healthcare as well as future work are also presented.

RevDate: 2020-06-04

Corbane C, Politis P, Kempeneers P, et al (2020)

A global cloud-free pixel-based image composite from Sentinel-2 data.

Data in brief, 31:105737 pii:105737.

Large-scale land cover classification from satellite imagery is still a challenge due to the large volume of data to be processed, persistent cloud cover in cloud-prone areas, and seasonal artefacts that affect spatial homogeneity. Sentinel-2 time series from the Copernicus Earth Observation programme offer great potential for fine-scale land cover mapping thanks to their high spatial and temporal resolutions, with a decametric resolution and a five-day repeat time. However, the selection and download of the best available scenes, together with the requirements in terms of storage and computing resources, pose restrictions for large-scale land cover mapping. The dataset presented in this paper corresponds to a global cloud-free pixel-based composite created from the Sentinel-2 data archive (Level L1C) available in Google Earth Engine for the period January 2017 to December 2018. The methodology used for generating the image composite is described, and the metadata associated with the 10 m resolution dataset is presented. The data, with a total volume of 15 TB, are stored on the Big Data platform of the Joint Research Centre. They can be downloaded per UTM grid zone, loaded into GIS clients, and displayed easily thanks to pre-computed overviews.
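The core per-pixel compositing step behind such a dataset can be illustrated with a toy example: mask cloudy observations, then take the per-pixel median of the remaining time series. The arrays and cloud flags below are made up; the actual product is computed at scale in Google Earth Engine.

```python
# Toy cloud-free compositing: for each pixel, drop cloudy observations and
# take the median of the clear ones (median is robust to residual haze or
# shadow outliers). Data here are illustrative single-band reflectances.
import statistics

def cloud_free_composite(time_series, cloud_masks):
    """time_series: list of images (2D lists of reflectance values);
    cloud_masks: parallel list of 2D boolean lists (True = cloudy)."""
    rows, cols = len(time_series[0]), len(time_series[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            clear = [img[r][c]
                     for img, mask in zip(time_series, cloud_masks)
                     if not mask[r][c]]
            out[r][c] = statistics.median(clear) if clear else None
    return out

imgs = [[[0.10, 0.90]], [[0.12, 0.30]], [[0.11, 0.32]]]
masks = [[[False, True]], [[False, False]], [[False, False]]]
print(cloud_free_composite(imgs, masks))
```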

RevDate: 2020-06-26

Abd-El-Atty B, Iliyasu AM, Alaskar H, et al (2020)

A Robust Quasi-Quantum Walks-Based Steganography Protocol for Secure Transmission of Images on Cloud-Based E-healthcare Platforms.

Sensors (Basel, Switzerland), 20(11):.

Traditionally, tamper-proof steganography involves using efficient protocols to encrypt the stego cover image and/or hidden message prior to embedding it into the carrier object. However, as the inevitable transition to the quantum computing paradigm beckons, its immense computing power will be exploited to violate even the best non-quantum, i.e., classical, stego protocol. For their part, quantum walks can be tailored to utilise their astounding 'quantumness' to propagate nonlinear chaotic behaviours as well as sufficient sensitivity to alterations in primary key parameters, both important properties for efficient information security. Our study explores using a classical (i.e., quantum-inspired) rendition of the controlled alternate quantum walks (CAQWs) model to fabricate a robust image steganography protocol for cloud-based E-healthcare platforms by locating the content that overlays the secret (or hidden) bits. The design employed in our technique precludes the need for pre- and/or post-encryption of the carrier and secret images. Furthermore, our design simplifies the process of extracting the confidential (hidden) information, since only the stego image and the primary states to run the CAQWs are required. We validate our proposed protocol on a dataset of medical images, on which it exhibited remarkable outcomes in terms of security, good visual quality, high resistance to data loss attacks, and high embedding capacity, making the proposed scheme a veritable strategy for efficient medical image steganography.

RevDate: 2020-06-26
CmpDate: 2020-06-09

Silva FSD, Silva E, Neto EP, et al (2020)

A Taxonomy of DDoS Attack Mitigation Approaches Featured by SDN Technologies in IoT Scenarios.

Sensors (Basel, Switzerland), 20(11):.

The Internet of Things (IoT) has attracted much attention from the Information and Communication Technology (ICT) community in recent years. One of the main reasons for this is the availability of techniques provided by this paradigm, such as environmental monitoring employing user data and everyday objects. The facilities provided by the IoT infrastructure allow the development of a wide range of new business models and applications (e.g., smart homes, smart cities, or e-health). However, there are still concerns over the security measures which need to be addressed to ensure a suitable deployment. Distributed Denial of Service (DDoS) attacks are among the most severe virtual threats at present and occur prominently in this scenario, which can be mainly owed to their ease of execution. In light of this, several research studies have been conducted to find new strategies as well as improve existing techniques and solutions. The use of emerging technologies such as those based on the Software-Defined Networking (SDN) paradigm has proved to be a promising alternative as a means of mitigating DDoS attacks. However, the high granularity that characterizes the IoT scenarios and the wide range of techniques explored during the DDoS attacks make the task of finding and implementing new solutions quite challenging. This problem is exacerbated by the lack of benchmarks that can assist developers when designing new solutions for mitigating DDoS attacks for increasingly complex IoT scenarios. To fill this knowledge gap, in this study we carry out an in-depth investigation of the state-of-the-art and create a taxonomy that describes and characterizes existing solutions and highlights their main limitations. Our taxonomy provides a comprehensive view of the reasons for the deployment of the solutions, and the scenario in which they operate. The results of this study demonstrate the main benefits and drawbacks of each solution set when applied to specific scenarios by examining current trends and future perspectives, for example, the adoption of emerging technologies based on Cloud and Edge (or Fog) Computing.

RevDate: 2020-07-02

Mohanraj S, Díaz-Mejía JJ, Pham MD, et al (2020)

CReSCENT: CanceR Single Cell ExpressioN Toolkit.

Nucleic acids research, 48(W1):W372-W379.

CReSCENT: CanceR Single Cell ExpressioN Toolkit (https://crescent.cloud) is an intuitive and scalable web portal incorporating a containerized pipeline execution engine for standardized analysis of single-cell RNA sequencing (scRNA-seq) data. While scRNA-seq data for tumour specimens are readily generated, subsequent analysis requires high-performance computing infrastructure and user expertise to build analysis pipelines and tailor interpretation for cancer biology. CReSCENT uses public data sets and preconfigured pipelines that are accessible to computational biology non-experts and are user-editable to allow optimization, comparison, and reanalysis for specific experiments. Users can also upload their own scRNA-seq data for analysis, and results can be kept private or shared with other users.

RevDate: 2020-06-20

Ye Q, Zhou J, H Wu (2020)

Using Information Technology to Manage the COVID-19 Pandemic: Development of a Technical Framework Based on Practical Experience in China.

JMIR medical informatics, 8(6):e19515.

BACKGROUND: The coronavirus disease (COVID-19) epidemic poses an enormous challenge to the global health system, and governments have taken active preventive and control measures. The health informatics community in China has actively taken action to leverage health information technologies for epidemic monitoring, detection, early warning, prevention and control, and other tasks.

OBJECTIVE: The aim of this study was to develop a technical framework to respond to the COVID-19 epidemic from a health informatics perspective.

METHODS: In this study, we collected health information technology-related information to understand the actions taken by the health informatics community in China during the COVID-19 outbreak and developed a health information technology framework for epidemic response based on health information technology-related measures and methods.

RESULTS: Based on the framework, we review specific health information technology practices for managing the outbreak in China, describe the highlights of their application in detail, and discuss critical issues to consider when using health information technology. Technologies employed include mobile and web-based services such as Internet hospitals and WeChat, big data analyses (including digital contact tracing through QR codes and epidemic prediction), cloud computing, the Internet of things, artificial intelligence (including the use of drones, robots, and intelligent diagnoses), 5G telemedicine, and clinical information systems that facilitate clinical management of COVID-19.

CONCLUSIONS: Practical experience in China shows that health information technologies play a pivotal role in responding to the COVID-19 epidemic.

RevDate: 2020-06-19

Li NS, Chen YT, Hsu YP, et al (2020)

Mobile healthcare system based on the combination of a lateral flow pad and smartphone for rapid detection of uric acid in whole blood.

Biosensors & bioelectronics, 164:112309.

Excessive production of uric acid (UA) in blood may lead to gout, hyperuricaemia and kidney disorder; thus, a fast, simple and reliable biosensor is needed to routinely determine the UA concentration in blood without pretreatment. The purpose of this study was to develop a mobile healthcare (mHealth) system using a drop of blood, which comprised a lateral flow pad (LFP), mesoporous Prussian blue nanoparticles (MPBs) as artificial nanozymes and auto-calculation software for on-site determination of UA in blood and data management. A standard curve was found to be linear in the range of 1.5-8.5 mg/dL UA, and convenience, cloud computing and personal information management were simultaneously achieved for the proposed mHealth system. Our mHealth system appropriately met the requirements of application in patients' homes, with the potential of real-time monitoring by their primary care physicians (PCPs).
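The linear standard curve over the 1.5-8.5 mg/dL range can be sketched as a simple least-squares calibration followed by inversion of the fitted line. The calibrator readings below are made-up placeholders; in practice the coefficients come from fitting actual colorimetric measurements.

```python
# Minimal linear standard-curve sketch for UA quantification in the
# 1.5-8.5 mg/dL range. The calibrator signals are illustrative toy data,
# not measurements from the paper.
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Calibrators: known UA concentrations (mg/dL) vs. colorimetric signal (a.u.)
conc = [1.5, 3.0, 5.0, 7.0, 8.5]
signal = [0.15, 0.30, 0.50, 0.70, 0.85]  # perfectly linear toy readings
slope, intercept = fit_line(conc, signal)

def ua_from_signal(s):
    # Invert the standard curve: concentration from a measured signal.
    return (s - intercept) / slope

print(round(ua_from_signal(0.42), 2))  # 4.2
```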

RevDate: 2020-06-04

Lee V, Parekh K, Matthew G, et al (2020)

JITA: A Platform for Enabling Real Time Point-of-Care Patient Recruitment.

AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science, 2020:355-359.

Timely accrual continues to be a challenge in clinical trials. The evolution of Electronic Health Record systems and cohort selection tools like i2b2 has improved the identification of potential candidate participants. However, delays in receiving relevant patient information and the lack of real-time patient identification make it difficult to meet recruitment targets. The authors have designed and developed a proof-of-concept platform that informs authorized study team members about potential participant matches while the patient is at a healthcare setting. This Just-In-Time Alert (JITA) application leverages Health Level 7 (HL7) messages and parses them against study eligibility criteria using Amazon Web Services (AWS) cloud technologies. When the required conditions are satisfied, the rules engine triggers an alert to the study team. Our pilot tests using difficult-to-recruit trials currently underway at the UMass Medical School have shown significant potential, generating more than 90 patient alerts in a 90-day testing timeframe.
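The core JITA step, parsing an HL7 v2 message and testing it against eligibility rules, can be illustrated with a small sketch. The message, rule values, and function names are hypothetical; the real platform evaluates full study criteria on AWS.

```python
# Illustrative HL7 v2 parsing and eligibility matching. The message and the
# rules (birth-year floor, ICD-10 prefix) are hypothetical examples, not the
# JITA platform's actual criteria.
def parse_hl7(message):
    """Split an HL7 v2 message into {segment_name: [field lists]}."""
    segments = {}
    for seg in message.strip().split("\r"):
        fields = seg.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

def is_eligible(segments, min_birth_year=1940, dx_prefix="E11"):
    pid = segments.get("PID", [[]])[0]
    dob = pid[7] if len(pid) > 7 else ""          # PID-7: date of birth
    dx_codes = [dg1[3] for dg1 in segments.get("DG1", []) if len(dg1) > 3]
    born_ok = dob[:4].isdigit() and int(dob[:4]) >= min_birth_year
    dx_ok = any(code.startswith(dx_prefix) for code in dx_codes)
    return born_ok and dx_ok   # True would trigger an alert to the study team

msg = ("MSH|^~\\&|LAB|HOSP|||202004061200||ADT^A01|1|P|2.5\r"
       "PID|1||12345||DOE^JANE||19650212|F\r"
       "DG1|1|ICD10|E11.9|Type 2 diabetes")
print(is_eligible(parse_hl7(msg)))  # True
```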

RevDate: 2020-06-22
CmpDate: 2020-06-22

Liu A, Wu Q, X Cheng (2020)

Using the Google Earth Engine to estimate a 10 m resolution monthly inventory of soil fugitive dust emissions in Beijing, China.

The Science of the total environment, 735:139174.

Soil fugitive dust (SFD) is an important contributor to ambient particulate matter (PM), but most current SFD emission inventories are updated slowly or have low resolution. In areas where vegetation coverage and climatic conditions undergo significant seasonal changes, the classic wind erosion equation (WEQ) tends to underestimate SFD emissions, increasing the need for higher spatiotemporal data resolution. Continuous acquisition of precise bare soil maps is the key barrier to compiling monthly high-resolution SFD emission inventories. In this study, we proposed taking advantage of the massive Landsat and Sentinel-2 imagery data sets stored in the Google Earth Engine (GEE) cloud platform to enable the rapid production of bare soil maps with spatial resolutions of up to 10 m. The resulting improved spatiotemporal resolution of wind erosion parameters allowed us to estimate SFD emissions in Beijing as being ~5-7 times the level calculated by the WEQ. Spring and winter accounted for >85% of SFD emissions, while April was the dustiest month with SFD emissions of PM10 exceeding 11,000 t. Our results highlighted the role of SFD in air pollution during winter and spring in northern China, and suggested that GEE should be further used for image acquisition, data processing, and compilation of gridded SFD inventories. These inventories can help identify the location and intensity of SFD sources while providing supporting information for local authorities working to develop targeted mitigation measures.
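The per-pixel bare-soil test behind such monthly maps can be sketched with NDVI thresholding, a common approach for separating bare soil from vegetated ground. The 0.2 cutoff and the toy band values below are illustrative assumptions, not the values used in the study.

```python
# Toy bare-soil classification via NDVI thresholding. The threshold and
# reflectance values are illustrative; the study derives its maps from
# Landsat/Sentinel-2 imagery in Google Earth Engine.
def ndvi(nir, red):
    # Normalized Difference Vegetation Index: high for vegetation,
    # low for bare soil and built surfaces.
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def bare_soil_mask(nir_band, red_band, threshold=0.2):
    # True where vegetation is sparse enough to count as bare soil.
    return [[ndvi(n, r) < threshold for n, r in zip(nr, rr)]
            for nr, rr in zip(nir_band, red_band)]

nir = [[0.30, 0.60], [0.25, 0.55]]
red = [[0.25, 0.10], [0.20, 0.12]]
print(bare_soil_mask(nir, red))  # [[True, False], [True, False]]
```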

RevDate: 2020-06-01

Massaad E, P Cherfan (2020)

Social Media Data Analytics on Telehealth During the COVID-19 Pandemic.

Cureus, 12(4):e7838.

INTRODUCTION: Physical distancing during the coronavirus (Covid-19) pandemic has brought telehealth to the forefront as a way to keep up with patient care amidst an international crisis that is exhausting healthcare resources. Understanding and managing health-related concerns resulting from physical distancing measures are of utmost importance.

OBJECTIVES: To describe and analyze the volume, content, and geospatial distribution of tweets associated with telehealth during the Covid-19 pandemic.

METHODS: We queried Twitter public data to access tweets related to telehealth from March 30, 2020 to April 6, 2020. We analyzed tweets using natural language processing (NLP) and unsupervised learning methods. Clustering analysis was performed to classify tweets. Geographic tweet distribution was correlated with Covid-19 confirmed cases in the United States. All analyses were carried out on the Google Cloud computing service "Google Colab" using Python libraries (Python Software Foundation).

RESULTS: A total of 41,329 tweets containing the term "telehealth" were retrieved. The most common terms appearing alongside 'telehealth' were "covid", "health", "care", "services", "patients", and "pandemic". Mental health was the most common health-related topic that appeared in our search reflecting a high need for mental healthcare during the pandemic. Similarly, Medicare was the most common appearing health plan mirroring the accelerated access to telehealth and change in coverage policies. The geographic distribution of tweets related to telehealth and having a specific location within the United States (n=19,367) was significantly associated with the number of confirmed Covid-19 cases reported in each state (p<0.001).

CONCLUSION: Social media activity is an accurate reflection of disease burden during the Covid-19 pandemic. Widespread adoption of telehealth-favoring policies is necessary, particularly to address mental health problems that may arise in areas with high infection and death rates.
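The first step of an analysis like this one is extracting the most frequent terms that co-occur with "telehealth". A minimal sketch of that term-frequency step (the study's full pipeline used NLP and unsupervised clustering; the tweet texts and stopword list below are invented for illustration):

```python
# Count the most common non-stopword terms across a set of tweets.
import re
from collections import Counter

def top_terms(tweets, stopwords, n=3):
    """Tokenize tweets and return the n most common non-stopword terms."""
    counts = Counter()
    for text in tweets:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in stopwords)
    return counts.most_common(n)

tweets = [
    "Telehealth visits for mental health are up during covid",
    "My care team moved to telehealth services this week",
    "Telehealth and covid: patients adapting to remote care",
]
stops = {"telehealth", "for", "are", "up", "during", "my", "to",
         "this", "week", "and", "the", "a"}
print(top_terms(tweets, stops))
```

On real data this kind of count is what surfaces terms such as "covid", "care", and "patients" alongside the search keyword, before any clustering is applied.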

RevDate: 2020-08-26

Tian L, Li Y, Edmonson MN, et al (2020)

CICERO: a versatile method for detecting complex and diverse driver fusions using cancer RNA sequencing data.

Genome biology, 21(1):126.

To discover driver fusions beyond canonical exon-to-exon chimeric transcripts, we develop CICERO, a local assembly-based algorithm that integrates RNA-seq read support with extensive annotation for candidate ranking. CICERO outperforms commonly used methods, achieving a 95% detection rate for 184 independently validated driver fusions including internal tandem duplications and other non-canonical events in 170 pediatric cancer transcriptomes. Re-analysis of TCGA glioblastoma RNA-seq unveils previously unreported kinase fusions (KLHL7-BRAF) and a 13% prevalence of EGFR C-terminal truncation. Accessible via standard or cloud-based implementation, CICERO enhances driver fusion detection for research and precision oncology. The CICERO source code is available at https://github.com/stjude/Cicero.

RevDate: 2020-08-10

Zarowitz BJ (2020)

Emerging Pharmacotherapy and Health Care Needs of Patients in the Age of Artificial Intelligence and Digitalization.

The Annals of pharmacotherapy, 54(10):1038-1046.

Advances in the application of artificial intelligence, digitization, technology, cloud computing, and wearable devices in health care predict an exciting future for health care professionals and our patients. Projections suggest an older, generally healthier, better-informed but financially less secure patient population of wider cultural and ethnic diversity that live throughout the United States. A pragmatic yet structured approach is recommended to prepare health care professionals and patients for emerging pharmacotherapy needs. Clinician training should include genomics, cloud computing, use of large data sets, implementation science, and cultural competence. Patients will need support for wearable devices and reassurance regarding digital medicine.

RevDate: 2020-07-31
CmpDate: 2020-07-31

Cheng C, Zhou H, Chai X, et al (2020)

Adoption of image surface parameters under moving edge computing in the construction of mountain fire warning method.

PloS one, 15(5):e0232433.

To cope with the high frequency and multiple causes of mountain fires, it is important to adopt appropriate technologies that monitor mountain fires and provide early warning from a few surface parameters. At the same time, existing mobile terminal equipment is limited in image processing and storage capacity, and energy consumption during data transmission is high, which makes computation offloading necessary. To address this, a hierarchical discriminant analysis algorithm based on image feature extraction is first introduced, and image acquisition software for the Android system is designed and installed in a mobile edge computing environment. Land surface parameters of mountain fire are obtained from remote sensing data, and an image recognition optimization algorithm is applied in the mobile edge computing (MEC) environment to solve the transmission delay caused by traditional mobile cloud computing (MCC). Then, according to the forest fire sensitivity index, a forest fire early warning model based on MEC is designed. Finally, the image recognition response time and bandwidth consumption of the algorithm are studied, and the occurrence probability of mountain fire in Muli county, Liangshan prefecture, Sichuan is predicted. The results show that, compared with the MCC architecture, the proposed algorithm has shorter recognition and response times for different images in a WiFi network environment; compared with MCC, the MEC architecture serves nearby users and transmits less data, which effectively reduces bandwidth pressure on the network. In most areas of Muli county, the probability of mountain fire is relatively low; the probability of mountain fire caused by the non-surface environment is about 8 times that of the surface environment, and the influence of the non-surface environment during periods of high fire incidence is lower than during periods of low incidence. In conclusion, surface parameters under MEC can be used to effectively predict mountain fires and support timely preventive measures.
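The MEC-versus-MCC argument rests on a simple latency model: offloading delay is transmission time plus round-trip latency, and an edge node wins on both terms. A toy sketch of that comparison (not from the paper; all bandwidth, RTT, and image-size values below are hypothetical):

```python
# Toy offloading-delay model contrasting edge (MEC) and cloud (MCC) targets.

def offload_delay(data_mb, bandwidth_mbps, rtt_ms):
    """Seconds to ship data_mb of image data and receive a response."""
    return (data_mb * 8) / bandwidth_mbps + rtt_ms / 1000.0

image_mb = 4.0
mec = offload_delay(image_mb, bandwidth_mbps=100.0, rtt_ms=10)   # nearby edge node
mcc = offload_delay(image_mb, bandwidth_mbps=20.0, rtt_ms=120)   # distant cloud
print(f"MEC {mec:.2f}s vs MCC {mcc:.2f}s")  # -> MEC 0.33s vs MCC 1.72s
```

Under these assumed parameters the edge node is several times faster, which is the qualitative effect the study measures for image recognition response time.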

RevDate: 2020-05-30

Hylton A, Henselman-Petrusek G, Sang J, et al (2019)

Tuning the Performance of a Computational Persistent Homology Package.

Software: practice & experience, 49(5):885-905.

In recent years, persistent homology has become an attractive method for data analysis. It captures topological features, such as connected components, holes, and voids from point cloud data and summarizes the way in which these features appear and disappear in a filtration sequence. In this project, we focus on improving the performance of Eirene, a computational package for persistent homology. Eirene is a 5000-line open-source software library implemented in the dynamic programming language Julia. We use the Julia profiling tools to identify performance bottlenecks and develop novel methods to manage them, including the parallelization of some time-consuming functions on multicore/manycore hardware. Empirical results show that performance can be greatly improved.
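The simplest instance of the computation Eirene performs is dimension-0 persistence: tracking when connected components of a point cloud merge as the filtration scale grows, which reduces to union-find over edges sorted by length. A minimal sketch of that special case (Eirene itself is a far more general Julia package; the example points are invented):

```python
# H0 (connected-component) persistence via union-find over a distance filtration.
from itertools import combinations
from math import dist

def h0_persistence(points):
    """Return (birth, death) pairs for connected components as edges are
    added in order of increasing length; all components are born at 0."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((dist(points[a], points[b]), a, b)
                   for a, b in combinations(range(len(points)), 2))
    pairs = []
    for length, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                 # two components merge: one of them dies
            parent[rb] = ra
            pairs.append((0.0, length))
    pairs.append((0.0, float("inf")))  # one component persists forever
    return pairs

print(h0_persistence([(0, 0), (1, 0), (5, 0)]))
# -> [(0.0, 1.0), (0.0, 4.0), (0.0, inf)]
```

Higher-dimensional features (holes, voids) require boundary-matrix reduction rather than union-find, and that is where the performance tuning described in the paper matters.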

RevDate: 2020-06-16

Kim M, Yu S, Lee J, et al (2020)

Design of Secure Protocol for Cloud-Assisted Electronic Health Record System Using Blockchain.

Sensors (Basel, Switzerland), 20(10):.

In the traditional electronic health record (EHR) management system, each medical service center manages its own health records, which are difficult to share across different medical platforms. Recently, blockchain technology has emerged as a popular way to enable medical service centers based on different platforms to share EHRs. However, it is impractical to store complete EHR data on a blockchain because of its size and cost. To resolve this problem, cloud computing is considered a promising solution, offering advantageous properties such as storage availability and scalability. Unfortunately, an EHR system built on cloud computing can be vulnerable to various attacks because sensitive data is sent over a public channel. We propose a secure protocol for a cloud-assisted EHR system using blockchain. In the proposed scheme, blockchain technology provides data integrity and access control using log transactions, and the cloud server stores and manages the patient's EHRs to provide secure storage resources. We use an elliptic curve cryptosystem (ECC) to provide secure health data sharing with cloud computing. We demonstrate that the proposed EHR system can prevent various attacks using informal security analysis and automated validation of internet security protocols and applications (AVISPA) simulation. Furthermore, we prove that the proposed EHR system provides secure mutual authentication using BAN logic analysis. We then compare the computation overhead, communication overhead, and security properties with existing schemes. Consequently, the proposed EHR system is suitable for a practical healthcare system, considering both security and efficiency.
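The integrity property the scheme gets from its blockchain of log transactions can be illustrated with a hash chain: each log entry's hash covers the previous entry's hash, so tampering with any entry invalidates everything after it. The sketch below is a toy of that one mechanism only, not the paper's full protocol (which adds ECC-based mutual authentication and the cloud storage layer); the entry texts are invented:

```python
# Hash-chained log of EHR access transactions for tamper detection.
import hashlib
import json

def append_entry(chain, transaction):
    """Append a log entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
    chain.append({"tx": transaction, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain):
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"tx": entry["tx"], "prev": prev_hash}, sort_keys=True)
        if entry["prev"] != prev_hash or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "dr_kim read record #17")
append_entry(log, "dr_lee updated record #17")
print(verify_chain(log))   # -> True: chain is intact
log[0]["tx"] = "tampered"
print(verify_chain(log))   # -> False: tampering detected
```

A real deployment distributes the chain across nodes with a consensus mechanism, which is what makes the log an access-control audit trail rather than a single mutable file.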

