
Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 22 Jan 2021 at 01:35. Query hits: 1713.

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic for a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2021-01-20

Farid F, Elkhodr M, Sabrina F, et al (2021)

A Smart Biometric Identity Management Framework for Personalised IoT and Cloud Computing-Based Healthcare Services.

Sensors (Basel, Switzerland), 21(2): pii:s21020552.

This paper proposes a novel identity management framework for Internet of Things (IoT) and cloud computing-based personalized healthcare systems. The proposed framework uses multimodal encrypted biometric traits to perform authentication. It employs a combination of centralized and federated identity access techniques along with biometric-based continuous authentication. The framework uses a fusion of electrocardiogram (ECG) and photoplethysmogram (PPG) signals when performing authentication. In addition to relying on the unique identification characteristics of the users' biometric traits, the security of the framework is strengthened by the use of Homomorphic Encryption (HE). The use of HE allows patients' data to stay encrypted when being processed or analyzed in the cloud, providing not only a fast and reliable authentication mechanism but also closing the door to many traditional security attacks. The framework's performance was evaluated and validated using a machine learning (ML) model and a dataset of 25 users in seated positions. Compared to using ECG or PPG signals alone, the proposed fusion-based biometric framework successfully identified and authenticated all 25 users with 100% accuracy, offering significant improvements to the overall security and privacy of personalized healthcare systems.
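
As an aside for readers unfamiliar with homomorphic encryption, the sketch below shows the basic idea of computing on encrypted biometric features, using the partially homomorphic Paillier scheme from the python-paillier (phe) package. The library, the feature values, and the weighted score are illustrative assumptions; the paper does not state which HE scheme or implementation it uses.

```python
# Sketch only: Paillier (partially homomorphic) encryption of a biometric
# feature vector, so a cloud service can compute a weighted score without
# seeing plaintext. The paper does not specify this scheme or library.
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical fused ECG/PPG features extracted on the patient's device.
features = [0.42, -1.37, 2.05, 0.11]
weights = [0.9, 0.3, -0.5, 1.2]   # hypothetical model weights held by the cloud

encrypted = [public_key.encrypt(x) for x in features]

# Paillier supports adding ciphertexts and multiplying them by plaintext
# scalars, so the cloud can form the weighted score on encrypted values only.
encrypted_score = encrypted[0] * weights[0]
for c, w in zip(encrypted[1:], weights[1:]):
    encrypted_score = encrypted_score + c * w

# Only the key holder (the authentication service) can decrypt the result.
print(private_key.decrypt(encrypted_score))
```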

RevDate: 2021-01-20

Raghavan A, Demircioglu MA, A Taeihagh (2021)

Public Health Innovation through Cloud Adoption: A Comparative Analysis of Drivers and Barriers in Japan, South Korea, and Singapore.

International journal of environmental research and public health, 18(1): pii:ijerph18010334.

Governments are increasingly using cloud computing to reduce cost, increase access, improve quality, and create innovations in healthcare. Existing literature is primarily based on successful examples from developed western countries, and there is a lack of similar evidence from Asia. With a population close to 4.5 billion people, Asia faces healthcare challenges that pose an immense burden on economic growth and policymaking. Cloud computing in healthcare can potentially help increase the quality of healthcare delivery and reduce the economic burden, enabling governments to address healthcare challenges effectively and within a short timeframe. Advanced Asian countries such as Japan, South Korea, and Singapore provide successful examples of how cloud computing can be used to develop nationwide databases of electronic health records; real-time health monitoring for the elderly population; genetic database to support advanced research and cancer treatment; telemedicine; and health cities that drive the economy through medical industry, tourism, and research. This article examines these countries and identifies the drivers and barriers of cloud adoption in healthcare and makes policy recommendations to enable successful public health innovations through cloud adoption.

RevDate: 2021-01-19

Anselmo C, Attili M, Horton R, et al (2021)

Hey You, Get On the Cloud: Safe and Compliant Use of Cloud Computing with Medical Devices.

Biomedical instrumentation & technology, 55(1):1-15.

RevDate: 2021-01-18

Patel YS, Malwi Z, Nighojkar A, et al (2021)

Truthful online double auction based dynamic resource provisioning for multi-objective trade-offs in IaaS clouds.

Cluster computing pii:3225 [Epub ahead of print].

Auction designs have recently been adopted for static and dynamic resource provisioning in IaaS clouds, such as Microsoft Azure and Amazon EC2. However, the existing mechanisms are mostly restricted to simple auctions, single objectives, offline settings, one-sided interactions either among cloud users or among cloud service providers (CSPs), and are vulnerable to possible misreports of cloud users' private information. This paper proposes a more realistic scenario of online auctioning for IaaS clouds, with the unique characteristics of elasticity for time-varying arrival of cloud user requests under time-based server maintenance in cloud data centers. We propose an online truthful double auction technique for balancing the multi-objective trade-offs between energy, revenue, and performance in IaaS clouds, consisting of a weighted bipartite matching based winning-bid determination algorithm for resource allocation and a Vickrey-Clarke-Groves (VCG) driven algorithm for payment calculation of winning bids. Through rigorous theoretical analysis and extensive trace-driven simulation studies exploiting Google cluster workload traces, we demonstrate that our mechanism significantly improves performance while promising truthfulness, heterogeneity, economic efficiency, and individual rationality, and has polynomial-time computational complexity.
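
For illustration, the sketch below pairs a maximum-weight bipartite matching (via SciPy's Hungarian-algorithm solver) with a simple VCG-style payment rule, which is the general shape of the winner-determination plus payment steps described above. The bid matrix and the exact payment formula are assumptions for the example, not the paper's mechanism.

```python
# Sketch: winner determination as maximum-weight bipartite matching between
# user bids and VM slots, plus a simple VCG-style payment. The bid values and
# payment rule are illustrative, not the paper's exact mechanism.
import numpy as np
from scipy.optimize import linear_sum_assignment

# bids[i, j]: what user i offers for VM slot j (hypothetical numbers).
bids = np.array([[9.0, 2.0, 1.0],
                 [8.0, 6.0, 2.0],
                 [7.0, 5.0, 4.0]])

# linear_sum_assignment minimizes cost, so negate the bids to maximize welfare.
rows, cols = linear_sum_assignment(-bids)
welfare = bids[rows, cols].sum()

for user, slot in zip(rows, cols):
    # VCG idea: a winner pays the welfare loss it imposes on everyone else.
    others = np.delete(bids, user, axis=0)
    r2, c2 = linear_sum_assignment(-others)
    welfare_without_user = others[r2, c2].sum()
    others_welfare_with_user = welfare - bids[user, slot]
    payment = welfare_without_user - others_welfare_with_user
    print(f"user {user} wins slot {slot} and pays {payment:.1f}")
```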

RevDate: 2021-01-16

Lee YL, Arizky SN, Chen YR, et al (2021)

High-Availability Computing Platform with Sensor Fault Resilience.

Sensors (Basel, Switzerland), 21(2): pii:s21020542.

Modern computing platforms usually use multiple sensors to report system information. In order to achieve high availability (HA) for the platform, the sensors can be used to efficiently detect system faults that make a cloud service no longer live. However, a sensor may fail and disable HA protection. In this case, human intervention is needed, either to change the original fault model or to fix the sensor fault. Therefore, this study proposes an HA mechanism that can continuously provide HA to a cloud system based on dynamic fault model reconstruction. We have implemented the proposed HA mechanism on a four-layer OpenStack cloud system and tested the performance of the proposed mechanism for all possible sets of sensor faults. For each fault model, we inject possible system faults and measure the average fault detection time. The experimental results show that the proposed mechanism can accurately detect and recover from an injected system fault even with disabled sensors. In addition, the system fault detection time increases as the number of sensor faults increases, until the HA mechanism degrades to a one-system-fault model, which is the worst case and is equivalent to relying on system-layer heartbeating alone.

RevDate: 2021-01-16

Lozano Domínguez JM, TJ Mateo Sanguino (2021)

Walking Secure: Safe Routing Planning Algorithm and Pedestrian's Crossing Intention Detector Based on Fuzzy Logic App.

Sensors (Basel, Switzerland), 21(2): pii:s21020529.

Improving road safety through artificial intelligence is now crucial to achieving more secure smart cities. With this objective, a mobile app based on the integration of the smartphone sensors and a fuzzy logic strategy to determine the pedestrian's crossing intention around crosswalks is presented. The app developed also allows the calculation, tracing and guidance of safe routes thanks to an optimization algorithm that includes pedestrian areas on the paths generated over the whole city through a cloud database (i.e., zebra crossings, pedestrian streets and walkways). The experimentation carried out consisted in testing the fuzzy logic strategy with a total of 31 volunteers crossing and walking around a crosswalk. For that, the fuzzy logic approach was subjected to a total of 3120 samples generated by the volunteers. It has been proven that a smartphone can be successfully used as a crossing intention detector system with an accuracy of 98.63%, obtaining a true positive rate of 98.27% and a specificity of 99.39% according to a receiver operating characteristic analysis. Finally, a total of 30 routes were calculated by the proposed algorithm and compared with Google Maps considering the values of time, distance and safety along the routes. As a result, the routes generated by the proposed algorithm were safer than the routes obtained with Google Maps, achieving an increase in the use of safe pedestrian areas of at least 183%.
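
A minimal sketch of the route-planning idea, assuming a toy street graph: segments without safe pedestrian infrastructure are penalized so that the cheapest path under the adjusted weight prefers crossings and walkways. The graph, penalty factor, and NetworkX usage are illustrative; the app's actual algorithm and cloud database are not reproduced here.

```python
# Sketch: penalize street segments without safe crossings so the cheapest path
# favours pedestrian infrastructure. Graph and penalty are made-up examples.
import networkx as nx

segments = [("A", "B", 120, True),   # (from, to, length in metres, has safe crossing)
            ("B", "C", 80, False),
            ("A", "D", 150, True),
            ("D", "C", 90, True)]
PENALTY = 2.0  # cost multiplier for segments lacking zebra crossings/walkways

G = nx.Graph()
for u, v, length, safe in segments:
    G.add_edge(u, v, length=length, cost=length if safe else length * PENALTY)

print("safety-weighted route:", nx.shortest_path(G, "A", "C", weight="cost"))
print("plain shortest route: ", nx.shortest_path(G, "A", "C", weight="length"))
```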

RevDate: 2021-01-12

Singh K, J Malhotra (2021)

Cloud based ensemble machine learning approach for smart detection of epileptic seizures using higher order spectral analysis.

Physical and engineering sciences in medicine [Epub ahead of print].

The present paper proposes a smart framework for detection of epileptic seizures using the concepts of IoT technologies, cloud computing and machine learning. This framework processes the acquired scalp EEG signals by Fast Walsh Hadamard transform. Then, the transformed frequency-domain signals are examined using higher-order spectral analysis to extract amplitude and entropy-based statistical features. The extracted features have been selected by means of correlation-based feature selection algorithm to achieve more real-time classification with reduced complexity and delay. Finally, the samples containing selected features have been fed to ensemble machine learning techniques for classification into several classes of EEG states, viz. normal, interictal and ictal. The employed techniques include Dagging, Bagging, Stacking, MultiBoost AB and AdaBoost M1 algorithms in integration with C4.5 decision tree algorithm as the base classifier. The results of the ensemble techniques are also compared with standalone C4.5 decision tree and SVM algorithms. The performance analysis through simulation results reveals that the ensemble of AdaBoost M1 and C4.5 decision tree algorithms with higher-order spectral features is an adequate technique for automated detection of epileptic seizures in real-time. This technique achieves 100% classification accuracy, sensitivity and specificity values with optimally small classification time.
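
A minimal sketch of the classification stage described above, assuming scikit-learn: AdaBoost (M1-style) boosting over a decision-tree base learner, with CART standing in for C4.5 and random placeholders standing in for the higher-order spectral features.

```python
# Sketch: AdaBoost.M1-style boosting with a decision-tree base learner (CART
# standing in for C4.5), applied to placeholder "spectral feature" vectors.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))        # 12 amplitude/entropy features per EEG epoch
y = rng.integers(0, 3, size=300)      # 0 = normal, 1 = interictal, 2 = ictal

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # `base_estimator` in scikit-learn < 1.2
    n_estimators=100,
    random_state=0,
)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```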

RevDate: 2021-01-12

Li D, Xu S, P Li (2021)

Deep Reinforcement Learning-Empowered Resource Allocation for Mobile Edge Computing in Cellular V2X Networks.

Sensors (Basel, Switzerland), 21(2): pii:s21020372.

With the rapid development of vehicular networks, vehicle-to-everything (V2X) communications have a huge number of tasks to be calculated, which strains the scarce network resources. Cloud servers can alleviate the lack of computing capability of vehicular user equipment (VUE), but the limited resources, the dynamic environment of vehicles, and the long distances between the cloud servers and VUE induce some potential issues, such as extra communication delay and energy consumption. Fortunately, mobile edge computing (MEC), a promising computing paradigm, can ameliorate the above problems by enhancing the computing abilities of VUE through allocating computational resources to VUE. In this paper, we propose a joint optimization algorithm based on a deep reinforcement learning algorithm named the double deep Q network (double DQN) to minimize a cost composed of energy consumption, computation latency, and communication latency under the proper policy. The proposed algorithm is more suitable for dynamic, low-latency vehicular scenarios in the real world. Compared with other reinforcement learning algorithms, the proposed algorithm improves performance in terms of convergence, defined cost, and speed by around 30%, 15%, and 17%, respectively.
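
The core of the double DQN used here is the target computation in which the online network selects the next action and the target network evaluates it. The PyTorch sketch below shows only that step on a fake replay batch; state and action dimensions, network sizes, and the reward definition are assumptions, not the paper's settings.

```python
# Sketch of the double-DQN target: the online network chooses the next action,
# the target network evaluates it. Dimensions, networks and the replay batch
# are illustrative, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.99

def make_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online_net, target_net = make_net(), make_net()
target_net.load_state_dict(online_net.state_dict())

batch = 32                                   # a fake replay-buffer batch
states = torch.randn(batch, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (batch, 1))
rewards = torch.randn(batch, 1)              # e.g. -(energy + latency) cost
next_states = torch.randn(batch, STATE_DIM)
dones = torch.zeros(batch, 1)

with torch.no_grad():
    best_next = online_net(next_states).argmax(dim=1, keepdim=True)
    next_q = target_net(next_states).gather(1, best_next)
    td_target = rewards + GAMMA * (1.0 - dones) * next_q

q_pred = online_net(states).gather(1, actions)
loss = F.smooth_l1_loss(q_pred, td_target)
loss.backward()                              # an optimizer step would follow in training
print("TD loss:", float(loss))
```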

RevDate: 2021-01-11

Santos JA, Inácio PRM, BMC Silva (2021)

Towards the Use of Blockchain in Mobile Health Services and Applications.

Journal of medical systems, 45(2):17.

With the advent of cryptocurrencies and blockchain, the growth and adaptation of cryptographic features and capabilities were quickly extended to new and underexplored areas, such as healthcare. Currently, blockchain is being implemented mainly as a mechanism to secure Electronic Health Records (EHRs). However, new studies have shown that this technology can be a powerful tool in empowering patients to control their own health data, as well as for enabling a fool-proof health data history and establishing medical responsibility. Additionally, with the proliferation of mobile health (m-Health) sustained on service-oriented architectures, the adaptation of blockchain mechanisms into m-Health applications creates the possibility for a more decentralized and available healthcare service. Hence, this paper presents a review of the current security best practices for m-Health and the most used and widely known implementations of the blockchain protocol, including blockchain technologies in m-Health. The main goal of this comprehensive review is to further discuss and elaborate on identified open issues and potential use cases regarding the uses of blockchain in this area. Finally, the paper presents the major findings, challenges, and advantages of future blockchain implementations for m-Health services and applications.

RevDate: 2021-01-10

Mennen AC, Turk-Browne NB, Wallace G, et al (2020)

Cloud-Based Functional Magnetic Resonance Imaging Neurofeedback to Reduce the Negative Attentional Bias in Depression: A Proof-of-Concept Study.

Biological psychiatry. Cognitive neuroscience and neuroimaging pii:S2451-9022(20)30310-4 [Epub ahead of print].

Individuals with depression show an attentional bias toward negatively valenced stimuli and thoughts. In this proof-of-concept study, we present a novel closed-loop neurofeedback procedure intended to remediate this bias. Internal attentional states were detected in real time by applying machine learning techniques to functional magnetic resonance imaging data on a cloud server; these attentional states were externalized using a visual stimulus that the participant could learn to control. We trained 15 participants with major depressive disorder and 12 healthy control participants over 3 functional magnetic resonance imaging sessions. Exploratory analysis showed that participants with major depressive disorder were initially more likely than healthy control participants to get stuck in negative attentional states, but this diminished with neurofeedback training relative to controls. Depression severity also decreased from pre- to posttraining. These results demonstrate that our method is sensitive to the negative attentional bias in major depressive disorder and showcase the potential of this novel technique as a treatment that can be evaluated in future clinical trials.

RevDate: 2021-01-08

Nowakowski K, Carvalho P, Six JB, et al (2021)

Human locomotion with reinforcement learning using bioinspired reward reshaping strategies.

Medical & biological engineering & computing [Epub ahead of print].

Recent learning strategies such as reinforcement learning (RL) have favored the transition from applied artificial intelligence to general artificial intelligence. One of the current challenges of RL in healthcare relates to the development of a controller to teach a musculoskeletal model to perform dynamic movements. Several solutions have been proposed. However, there is still a lack of investigations exploring the muscle control problem from a biomechanical point of view. Moreover, no studies using biological knowledge to develop plausible motor control models for pathophysiological conditions make use of reward reshaping. Consequently, the objective of the present work was to design and evaluate specific bioinspired reward function strategies for human locomotion learning within an RL framework. The deep deterministic policy gradient (DDPG) method for a single-agent RL problem was applied. A 3D musculoskeletal model (8 DoF and 22 muscles) of a healthy adult was used. A virtual interactive environment was developed and simulated using the opensim-rl library. Three reward functions were defined for walking, forward falls, and side falls. The training process was performed with Google Cloud Compute Engine. The obtained outcomes were compared to the NIPS 2017 challenge outcomes, experimental observations, and literature data. Regarding learning to walk, the simulated musculoskeletal models were able to walk from 18 to 20.5 m for the best solutions. A compensation strategy of muscle activations was revealed. The soleus, tibialis anterior, and vastii muscles are the main actors in the simple forward fall. A higher intensity of muscle activations was also noted after the fall. All kinematics and muscle patterns were consistent with experimental observations and literature data. Regarding the side fall, an intensive level of muscle activation on the expected fall side, used to unbalance the body, was noted. The obtained outcomes suggest that computational and human resources as well as biomechanical knowledge are needed together to develop and evaluate an efficient and robust RL solution. As perspectives, current solutions will be extended to a larger parameter space in 3D. Furthermore, a stochastic reinforcement learning model will be investigated in the future to account for the uncertainties of the musculoskeletal model and associated environment, in order to provide a general artificial intelligence solution for human locomotion learning.

RevDate: 2021-01-08

Chen Y, Yan W, Xie Z, et al (2021)

Comparative analysis of target gene exon sequencing by cognitive technology using a next generation sequencing platform in patients with lung cancer.

Molecular and clinical oncology, 14(2):36.

Next generation sequencing (NGS) technology is an increasingly important clinical tool for therapeutic decision-making. However, interpretation of NGS data presents challenges at the point of care, due to limitations in understanding the clinical importance of gene variants and efficiently translating results into actionable information for the clinician. The present study compared two approaches for annotating and reporting actionable genes and gene mutations from tumor samples: The traditional approach of manual curation, annotation and reporting using an experienced molecular tumor bioinformationist; and a cloud-based cognitive technology, with the goal to detect gene mutations of potential significance in Chinese patients with lung cancer. Data from 285 gene-targeted exon sequencing previously conducted on 115 patient tissue samples between 2014 and 2016 and subsequently manually annotated and evaluated by the Guangdong Lung Cancer Institute (GLCI) research team were analyzed by the Watson for Genomics (WfG) cognitive genomics technology. A comparative analysis of the annotation results of the two methods was conducted to identify quantitative and qualitative differences in the mutations generated. The complete congruence rate of annotation results between WfG analysis and the GLCI bioinformatician was 43.48%. In 65 (56.52%) samples, WfG analysis identified and interpreted, on average, 1.54 more mutation sites in each sample than the manual GLCI review. These mutation sites were located on 27 genes, including EP300, ARID1A, STK11 and DNMT3A. Mutations in the EP300 gene were most prevalent, and present in 30.77% samples. The Tumor Mutation Burden (TMB) interpreted by WfG analysis (1.82) was significantly higher than the TMB (0.73) interpreted by GLCI review. Compared with manual curation by a bioinformatician, WfG analysis provided comprehensive insights and additional genetic alterations to inform clinical therapeutic strategies for patients with lung cancer. These findings suggest the valuable role of cognitive computing to increase efficiency in the comprehensive detection and interpretation of genetic alterations which may inform opportunities for targeted cancer therapies.

RevDate: 2021-01-07

Rajendran S, Obeid JS, Binol H, et al (2021)

Cloud-Based Federated Learning Implementation Across Medical Centers.

JCO clinical cancer informatics, 5:1-11.

PURPOSE: Building well-performing machine learning (ML) models in health care has always been exigent because of the data-sharing concerns, yet ML approaches often require larger training samples than is afforded by one institution. This paper explores several federated learning implementations by applying them in both a simulated environment and an actual implementation using electronic health record data from two academic medical centers on a Microsoft Azure Cloud Databricks platform.

MATERIALS AND METHODS: Using two separate cloud tenants, ML models were created, trained, and exchanged from one institution to another via a GitHub repository. Federated learning processes were applied to both artificial neural networks (ANNs) and logistic regression (LR) models on the horizontal data sets that are varying in count and availability. Incremental and cyclic federated learning models have been tested in simulation and real environments.

RESULTS: The cyclically trained ANN showed a 3% increase in performance, a significant improvement across most attempts (P < .05). Single weight neural network models showed improvement in some cases. However, LR models did not show much improvement after federated learning processes. The specific process that improved the performance differed based on the ML model and how federated learning was implemented. Moreover, we have confirmed that the order of the institutions during the training did influence the overall performance increase.

CONCLUSION: Unlike previous studies, our work has shown the implementation and effectiveness of federated learning processes beyond simulation. Additionally, we have identified different federated learning models that have achieved statistically significant performances. More work is needed to achieve effective federated learning processes in biomedicine, while preserving the security and privacy of the data.
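
For readers unfamiliar with the cyclic variant of federated learning mentioned in the methods, the toy sketch below passes a single model from site to site, updating it on each institution's local data so raw records never leave a site. The data, the scikit-learn SGD model, and the number of cycles are stand-ins; the study's Azure Databricks setup and ANN/LR models are not reproduced.

```python
# Toy sketch of cyclic federated training: one model is passed from site to
# site and updated only on local data, so raw records never leave a site.
# Synthetic data and a scikit-learn SGD classifier stand in for the study's
# EHR features and Azure Databricks models.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)

def fake_site_data(n):
    # Stand-in for one institution's locally held feature matrix and labels.
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

sites = [fake_site_data(200), fake_site_data(150)]   # two medical centers
classes = np.array([0, 1])

model = SGDClassifier(loss="log_loss", random_state=0)  # "log" in scikit-learn < 1.1
for _cycle in range(5):                 # the model cycles between institutions
    for X, y in sites:
        model.partial_fit(X, y, classes=classes)

X_test, y_test = fake_site_data(300)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```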

RevDate: 2021-01-07

Jones DE, Alimi TO, Pordell P, et al (2021)

Pursuing Data Modernization in Cancer Surveillance by Developing a Cloud-Based Computing Platform: Real-Time Cancer Case Collection.

JCO clinical cancer informatics, 5:24-29.

Cancer surveillance is a field focused on collection of data to evaluate the burden of cancer and apply public health strategies to prevent and control cancer in the community. A key challenge facing the cancer surveillance community is the number of manual tasks required to collect cancer surveillance data, thereby resulting in possible delays in analysis and use of the information. To modernize and automate cancer data collection and reporting, the Centers for Disease Control and Prevention is planning, developing, and piloting a cancer surveillance cloud-based computing platform (CS-CBCP) with standardized electronic reporting from laboratories and health-care providers. With this system, automation of the cancer case collection process and access to real-time cancer case data can be achieved, which could not be done before. Furthermore, the COVID-19 pandemic has illustrated the importance of continuity of operations plans, and the CS-CBCP has the potential to provide such a platform suitable for remote operations of central cancer registries.

RevDate: 2021-01-07

Chattopadhyay T, Mondal H, Mondal S, et al (2020)

Prescription digitization, online preservation, and retrieval on a smartphone.

Journal of family medicine and primary care, 9(10):5295-5302 pii:JFMPC-9-5295.

Background: Medical records are important documents that should be stored for at least 3 years after the commencement of the treatment of an adult patient in India. In a health care facility, patients' data is saved in an online or offline retrieval system. However, in the case of the primary care physician, the data is not commonly kept in an easily retrievable system.

Aim: To test the feasibility of using a set of free web-based services in digitization, preservation, and retrieval of prescription on a smartphone by primary care physicians.

Methods: This study was conducted with 12 primary care physicians. They were provided hands-on guides on creating an online form for uploading a prescription and using an application for retrieval of the prescription on a smartphone. Their feedback on the training material was collected by a telephonic survey, which had a 10-point Likert-type response option. Then, an in-depth interview was conducted to ascertain their perception on the tutorial and the process of digitization and retrieval system.

Results: All of the participants were able to create an online form on their smartphone. They uploaded their prescription and associated data and were able to retrieve it. The physicians opined positively on the "cost of the system," "portability" on a smartphone and ease of the "tutorial". They opined negatively on the "limited storage," chances of "loss of data," and "time constraints" for entry of the patients' data.

Conclusion: Free web-based and smartphone applications can be used by a primary care physician for personal storage and retrieval of prescriptions. The simple tutorial presented in this article would help many primary care physicians in resource-limited settings.

RevDate: 2021-01-07

Feldmann J, Youngblood N, Karpov M, et al (2021)

Parallel convolutional processing using an integrated photonic tensor core.

Nature, 589(7840):52-58.

With the proliferation of ultrahigh-speed mobile networks and internet-connected devices, along with the rise of artificial intelligence (AI) [1], the world is generating exponentially increasing amounts of data that need to be processed in a fast and efficient way. Highly parallelized, fast and scalable hardware is therefore becoming progressively more important [2]. Here we demonstrate a computationally specific integrated photonic hardware accelerator (tensor core) that is capable of operating at speeds of trillions of multiply-accumulate operations per second (10^12 MAC operations per second or tera-MACs per second). The tensor core can be considered as the optical analogue of an application-specific integrated circuit (ASIC). It achieves parallelized photonic in-memory computing using phase-change-material memory arrays and photonic chip-based optical frequency combs (soliton microcombs [3]). The computation is reduced to measuring the optical transmission of reconfigurable and non-resonant passive components and can operate at a bandwidth exceeding 14 gigahertz, limited only by the speed of the modulators and photodetectors. Given recent advances in hybrid integration of soliton microcombs at microwave line rates [3-5], ultralow-loss silicon nitride waveguides [6,7], and high-speed on-chip detectors and modulators, our approach provides a path towards full complementary metal-oxide-semiconductor (CMOS) wafer-scale integration of the photonic tensor core. Although we focus on convolutional processing, more generally our results indicate the potential of integrated photonics for parallel, fast, and efficient computational hardware in data-heavy AI applications such as autonomous driving, live video processing, and next-generation cloud computing services.

RevDate: 2021-01-07

Bertuccio S, Tardiolo G, Giambò FM, et al (2021)

ReportFlow: an application for EEG visualization and reporting using cloud platform.

BMC medical informatics and decision making, 21(1):7.

BACKGROUND: The cloud is a promising resource for data sharing and computing. It can optimize several legacy processes involving different units of a company or more companies. Recently, cloud technology applications are spreading out in the healthcare setting as well, allowing to cut down costs for physical infrastructures and staff movements. In a public environment the main challenge is to guarantee the patients' data protection. We describe a cloud-based system, named ReportFlow, developed with the aim to improve the process of reporting and delivering electroencephalograms.

METHODS: We illustrate the functioning of this application through a use-case scenario occurring in an Italian hospital, and describe the corresponding key encryption and key management used to guarantee data security. We used the χ2 test or the unpaired Student's t test to perform pre-post comparisons of some indexes, in order to evaluate significant changes after the application of ReportFlow.

RESULTS: The results obtained through the use of ReportFlow show a reduction of the time for exam reporting (t = 19.94; p < 0.001) and for its delivering (t = 14.95; p < 0.001), as well as an increase of the number of neurophysiologic examinations performed (about 20%), guaranteeing data integrity and security. Moreover, 68% of exam reports were delivered completely digitally.

CONCLUSIONS: The application proved to be an effective solution for optimizing the legacy process adopted in this scenario. The comparative pre-post analysis showed promising preliminary performance results. Future work will address the automatic creation and release of certificates.
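
The pre-post comparison reported above boils down to an unpaired Student's t-test on times measured before and after introducing the cloud workflow; a sketch with synthetic numbers is shown below, only to illustrate the statistical procedure.

```python
# Synthetic illustration of the unpaired t-test used for the pre/post
# comparison of reporting times (numbers are made up).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hours_before = rng.normal(loc=48, scale=10, size=60)   # pre-ReportFlow reporting times
hours_after = rng.normal(loc=24, scale=8, size=60)     # post-ReportFlow reporting times

t, p = stats.ttest_ind(hours_before, hours_after)
print(f"t = {t:.2f}, p = {p:.3g}")
```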

RevDate: 2021-01-07

Li J, Qiao Z, Zhang K, et al (2021)

A Lattice-Based Homomorphic Proxy Re-Encryption Scheme with Strong Anti-Collusion for Cloud Computing.

Sensors (Basel, Switzerland), 21(1): pii:s21010288.

The homomorphic proxy re-encryption scheme combines the characteristics of a homomorphic encryption scheme and a proxy re-encryption scheme. The proxy can not only convert a ciphertext of the delegator into a ciphertext of the delegatee, but can also homomorphically calculate on the original ciphertext and re-encrypted ciphertext belonging to the same user, so it is especially suitable for cloud computing. Yin et al. put forward the concept of a strong collusion attack on a proxy re-encryption scheme, and carried out a strong collusion attack on the scheme through an example. The existing homomorphic proxy re-encryption schemes use key switching algorithms to generate re-encryption keys, so they cannot resist strong collusion attacks. In this paper, we construct the first lattice-based homomorphic proxy re-encryption scheme with strong anti-collusion (HPRE-SAC). Firstly, the algorithm TrapGen is used to generate an encryption key and trapdoor; then trapdoor sampling is used to generate a decryption key and a re-encryption key, respectively. Finally, in order to preserve the homomorphism of ciphertexts, a key switching algorithm is used only to generate the evaluation key. Compared with the existing homomorphic proxy re-encryption schemes, our HPRE-SAC scheme not only resists strong collusion attacks but also has smaller parameters.

RevDate: 2021-01-07

Coelho AA (2021)

Ab initio structure solution of proteins at atomic resolution using charge-flipping techniques and cloud computing.

Acta crystallographica. Section D, Structural biology, 77(Pt 1):98-107.

Large protein structures at atomic resolution can be solved in minutes using charge-flipping techniques operating on hundreds of virtual machines (computers) on the Amazon Web Services cloud-computing platform driven by the computer programs TOPAS or TOPAS-Academic at a small financial cost. The speed of operation has allowed charge-flipping techniques to be investigated and modified, leading to two strategies that can solve a large range of difficult protein structures at atomic resolution. Techniques include the use of space-group symmetry restraints on the electron density as well as increasing the intensity of a randomly chosen high-intensity electron-density peak. It is also shown that the use of symmetry restraints increases the chance of finding a solution for low-resolution data. Finally, a flipping strategy that negates `uranium atom solutions' has been developed for structures that exhibit such solutions during charge flipping.
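
As background, charge flipping itself is a simple iteration: keep the measured Fourier amplitudes, start from random phases, and repeatedly negate the density below a small threshold. The one-dimensional NumPy toy below shows only that core loop; real charge flipping in TOPAS operates on 3D data with the symmetry and peak-boosting strategies described in the paper, and convergence of this toy depends on the threshold and the random start.

```python
# Toy 1D charge flipping: keep the "measured" Fourier amplitudes, start from
# random phases, and each cycle flip the sign of density below a threshold.
import numpy as np

rng = np.random.default_rng(0)
true_density = np.zeros(256)
true_density[[40, 90, 170]] = [5.0, 3.0, 4.0]       # a few point-like "atoms"
amplitudes = np.abs(np.fft.fft(true_density))        # the observed |F| data

phases = np.exp(2j * np.pi * rng.random(256))        # random starting phases
delta = 0.2                                          # flipping threshold
for _ in range(500):
    density = np.fft.ifft(amplitudes * phases).real  # impose measured amplitudes
    density[density < delta] *= -1                   # the charge flip
    phases = np.exp(1j * np.angle(np.fft.fft(density)))  # keep only the new phases

print("three strongest recovered peaks at indices:",
      np.sort(np.argsort(density)[-3:]))
```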

RevDate: 2021-01-07
CmpDate: 2021-01-07

Tian X, Zhu J, Xu T, et al (2021)

Mobility-Included DNN Partition Offloading from Mobile Devices to Edge Clouds.

Sensors (Basel, Switzerland), 21(1): pii:s21010229.

The latest results in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and huge energy consumption. The traditional way is to perform DNN computation in the central cloud, but this requires significant amounts of data to be transferred to the cloud over the wireless network and also results in long latency. To solve this problem, offloading partial DNN computation to edge clouds has been proposed, to realize collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) to adapt to the user's mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of our proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces the total latency, improves DNN performance, and adjusts well to different network conditions.
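
A toy sketch of the underlying partition decision for a chain-topology DNN: layers before the split run on the device, the intermediate tensor is uploaded, and the rest runs on the edge. The per-layer times, tensor sizes and uplink rate are invented numbers, and the mobility-aware re-planning that distinguishes MDPO is not modelled.

```python
# Toy sketch of choosing a partition point in a chain-topology DNN. Invented
# per-layer timings and tensor sizes; not the paper's MDPO algorithm.
INPUT_KB = 800      # size of the raw input if everything is offloaded
UPLINK_KBPS = 1000  # assumed wireless uplink throughput

# (device_ms, edge_ms, output_kbytes) for each layer of a 5-layer chain.
layers = [(40, 5, 400), (60, 8, 200), (80, 10, 100), (120, 4, 50), (90, 3, 5)]

def total_latency(split):
    """Latency when layers [0, split) run locally and [split, n) run on the edge."""
    device = sum(d for d, _, _ in layers[:split])
    edge = sum(e for _, e, _ in layers[split:])
    if split == len(layers):
        transfer = 0.0                                   # fully local, nothing sent
    else:
        out_kb = INPUT_KB if split == 0 else layers[split - 1][2]
        transfer = out_kb / UPLINK_KBPS * 1000           # milliseconds
    return device + transfer + edge

best = min(range(len(layers) + 1), key=total_latency)
print("best split:", best, "-> latency", round(total_latency(best), 1), "ms")
```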

RevDate: 2021-01-05

Yun T, Li H, Chang PC, et al (2021)

Accurate, scalable cohort variant calls using DeepVariant and GLnexus.

Bioinformatics (Oxford, England) pii:6064144 [Epub ahead of print].

MOTIVATION: Population-scale sequenced cohorts are foundational resources for genetic analyses, but processing raw reads into analysis-ready cohort-level variants remains challenging.

RESULTS: We introduce an open-source cohort-calling method that uses the highly-accurate caller DeepVariant and scalable merging tool GLnexus. Using callset quality metrics based on variant recall and precision in benchmark samples and Mendelian consistency in father-mother-child trios, we optimized the method across a range of cohort sizes, sequencing methods, and sequencing depths. The resulting callsets show consistent quality improvements over those generated using existing best practices with reduced cost. We further evaluate our pipeline in the deeply sequenced 1000 Genomes Project (1KGP) samples and show superior callset quality metrics and imputation reference panel performance compared to an independently-generated GATK Best Practices pipeline.

We publicly release the 1KGP individual-level variant calls and cohort callset (https://console.cloud.google.com/storage/browser/brain-genomics-public/research/cohort/1KGP) to foster additional development and evaluation of cohort merging methods as well as broad studies of genetic variation. Both DeepVariant (https://github.com/google/deepvariant) and GLnexus (https://github.com/dnanexus-rnd/GLnexus) are open-sourced, and the optimized GLnexus setup discovered in this study is also integrated into GLnexus public releases v1.2.2 and later.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

RevDate: 2021-01-05

Yang L, Culbertson EA, Thomas NK, et al (2021)

A cloud platform for atomic pair distribution function analysis: PDFitc.

Acta crystallographica. Section A, Foundations and advances, 77(Pt 1):2-6.

A cloud web platform for analysis and interpretation of atomic pair distribution function (PDF) data (PDFitc) is described. The platform is able to host applications for PDF analysis to help researchers study the local and nanoscale structure of nanostructured materials. The applications are designed to be powerful and easy to use and can, and will, be extended over time through community adoption and development. The currently available PDF analysis applications, structureMining, spacegroupMining and similarityMapping, are described. In the first and second the user uploads a single PDF and the application returns a list of best-fit candidate structures, and the most likely space group of the underlying structure, respectively. In the third, the user can upload a set of measured or calculated PDFs and the application returns a matrix of Pearson correlations, allowing assessment of the similarity between different data sets. structureMining is presented here as an example to show the easy-to-use workflow on PDFitc. In the future, as well as using the PDFitc applications for data analysis, it is hoped that the community will contribute their own codes and software to the platform.
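
The similarityMapping output described above is essentially a matrix of pairwise Pearson correlation coefficients between PDF curves on a common r-grid; the NumPy sketch below computes such a matrix on synthetic curves.

```python
# Sketch: a Pearson-correlation similarity matrix over a set of PDF curves
# G(r) sampled on a common r-grid (synthetic curves used here).
import numpy as np

rng = np.random.default_rng(3)
r = np.linspace(0.5, 30, 600)
base = np.sin(2.2 * r) * np.exp(-r / 15)            # a fake reference PDF
pdfs = np.stack([base + rng.normal(scale=s, size=r.size) for s in (0.01, 0.05, 0.3)])

similarity = np.corrcoef(pdfs)      # 3 x 3 matrix of pairwise Pearson coefficients
print(np.round(similarity, 3))
```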

RevDate: 2021-01-04

Lima MS (2021)

Information theory inspired optimization algorithm for efficient service orchestration in distributed systems.

PloS one, 16(1):e0242285 pii:PONE-D-20-24346.

Distributed Systems architectures are becoming the standard computational model for processing and transportation of information, especially for Cloud Computing environments. The increase in demand for application processing and data management from enterprise and end-user workloads continues to move from a single-node client-server architecture to a distributed multitier design where data processing and transmission are segregated. Software development must consider the orchestration required to provision its core components in order to deploy the services efficiently in many independent, loosely coupled (physically and virtually interconnected) data centers spread geographically across the globe. This network routing challenge can be modeled as a variation of the Travelling Salesman Problem (TSP). This paper proposes a new optimization algorithm for optimum route selection using Algorithmic Information Theory. The Kelly criterion for a Shannon-Bernoulli process is used to generate a reliable quantitative algorithm to find a near optimal solution tour. The algorithm is then verified by comparing the results with benchmark heuristic solutions in 3 test cases. A statistical analysis is designed to measure the significance of the results between the algorithms, and the entropy function can be derived from the distribution. The test results show an improvement in solution quality by producing routes with smaller length and time requirements. The quality of the results proves the flexibility of the proposed algorithm for problems with different complexities without relying on nature-inspired models such as Genetic Algorithms, Ant Colony, Cross Entropy, Neural Networks, 2opt and Simulated Annealing. The proposed algorithm can be used by applications to deploy services across large clusters of nodes by making better decisions in route design. The findings in this paper unify critical areas in Computer Science, Mathematics and Statistics that many researchers have not explored, and provide a new interpretation that advances the understanding of the role of entropy in decision problems encoded in Turing Machines.

RevDate: 2021-01-04

Khan R, H Gilani (2021)

Global drought monitoring with big geospatial datasets using Google Earth Engine.

Environmental science and pollution research international [Epub ahead of print].

Drought or dryness occurs due to the cumulative effect of certain climatological and hydrological variables over a certain period. Droughts are studied through numerically computed simple or compound indices. The vegetation condition index (VCI) is used for observing the change in vegetation that causes agricultural drought. Since land surface temperature is minimally influenced by cloud contamination and humidity in the air, the temperature condition index (TCI) is used for studying temperature change. Dryness or wetness of soil is a major indicator for agricultural and hydrological drought, and for that purpose the soil moisture condition index (SMCI) is computed. The deviation of precipitation from normal is a major cause of meteorological droughts, and for that purpose the precipitation condition index (PCI) is computed. This research points out the years when the indices escalated the dryness situation to severe and extreme. Furthermore, an interactive dashboard, Agriculture Drought Monitoring, is generated in Google Earth Engine (GEE) for users to compute the said indices using a country boundary, time period, and ecological mask of their choice. Apart from global results, three case studies of droughts (2002 in Australia, 2013 in Brazil, and 2019 in Thailand) computed via the dashboard are discussed in detail in this research.
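
As a reference for the indices mentioned above, the vegetation condition index follows Kogan's formula, VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min), with the extremes taken over the historical record for the same pixel and period; TCI, SMCI and PCI are built analogously from temperature, soil moisture and precipitation. The NumPy sketch below applies the VCI formula to synthetic arrays rather than to Earth Engine imagery.

```python
# Sketch: per-pixel vegetation condition index from an NDVI stack,
# VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min). Synthetic arrays
# stand in for the satellite composites processed in Google Earth Engine.
import numpy as np

rng = np.random.default_rng(7)
ndvi_history = rng.uniform(0.1, 0.8, size=(20, 4, 4))   # 20 years, same month, 4x4 pixels
ndvi_current = rng.uniform(0.1, 0.8, size=(4, 4))

ndvi_min = ndvi_history.min(axis=0)
ndvi_max = ndvi_history.max(axis=0)
vci = 100.0 * (ndvi_current - ndvi_min) / (ndvi_max - ndvi_min)
print(np.round(vci, 1))    # low VCI values flag drought-stressed vegetation
```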

RevDate: 2021-01-03

Yadav S, Luthra S, D Garg (2021)

Modelling Internet of things (IoT)-driven global sustainability in multi-tier agri-food supply chain under natural epidemic outbreaks.

Environmental science and pollution research international [Epub ahead of print].

An epidemic outbreak (COVID-19, SARS-CoV-2) is an exceptional scenario of agri-food supply chain (AFSC) risk at the globalised level, characterised by logistics network breakdown (ripple effects), demand mismatch (uncertainty), and sustainability issues. Thus, the aim of this research is the modelling of a sustainability-based multi-tier system for the AFSC, managed through different emerging applications of Internet of things (IoT) technology. Different IoT technologies, viz. Blockchain, robotics, Big data analysis, and cloud computing, have helped develop a competitive AFSC at the global level. A competitive AFSC needs cautious incorporation of multi-tier suppliers, specifically when dealing with globalised sustainability issues. Firms have been engaging their multiple suppliers to drive social, environmental and economic practices. This paper studies the interrelationship of 14 enablers and their cause-and-effect magnitudes as contributors to an IoT-based food-security model. The methodology used in the paper is interpretative structural modelling (ISM) for establishing interrelationships among the enablers, and the Fuzzy Decision-Making Trial and Evaluation Laboratory (F-DEMATEL) to provide the magnitude of the cause-effect strength of the hierarchical framework. This paper also provides some theoretical contributions supported by information processing theory (IPT) and dynamic capability theory (DCT). It may guide organisations' managers in their strategic planning based on the enablers' classification into cause and effect groups, and may also encourage managers to implement IoT technologies in the AFSC.
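
For readers unfamiliar with ISM, its first computational step is to expand the binary direct-influence matrix among enablers into a reachability matrix by Boolean transitive closure; the sketch below does this with Warshall's algorithm on a made-up 5x5 example rather than the paper's 14 enablers.

```python
# Sketch of the first ISM step: direct-influence matrix -> reachability matrix
# via Boolean transitive closure (Warshall). The 5x5 matrix is a made-up example.
import numpy as np

A = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1]], dtype=bool)   # A[i, j]: enabler i influences enabler j

R = A.copy()
for k in range(len(R)):                        # Warshall's algorithm
    R = R | (R[:, [k]] & R[[k], :])
print(R.astype(int))                           # reachability matrix
```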

RevDate: 2020-12-31

Khomtchouk BB, Nelson CS, Vand KA, et al (2020)

HeartBioPortal2.0: new developments and updates for genetic ancestry and cardiometabolic quantitative traits in diverse human populations.

Database : the journal of biological databases and curation, 2020:.

Cardiovascular disease (CVD) is the leading cause of death worldwide for all genders and across most racial and ethnic groups. However, different races and ethnicities exhibit different rates of CVD and its related cardiorenal and metabolic comorbidities, suggesting differences in genetic predisposition and risk of onset, as well as socioeconomic and lifestyle factors (diet, exercise, etc.) that act upon an individual's unique underlying genetic background. Here, we present HeartBioPortal2.0, a major update to HeartBioPortal, the world's largest CVD genetics data precision medicine platform for harmonized CVD-relevant genetic variants, which now enables search and analysis of human genetic information related to heart disease across ethnically diverse populations and cardiovascular/renal/metabolic quantitative traits pertinent to CVD pathophysiology. HeartBioPortal2.0 is structured as a cloud-based computing platform and knowledge portal that consolidates a multitude of CVD-relevant genomic data modalities into a single powerful query and browsing interface between data and user via a user-friendly web application publicly available to the scientific research community. Since its initial release, HeartBioPortal2.0 has added new cardiovascular/renal/metabolic disease-relevant gene expression data as well as genetic association data from numerous large-scale genome-wide association study consortiums such as CARDIoGRAMplusC4D, TOPMed, FinnGen, AFGen, MESA, MEGASTROKE, UK Biobank, CHARGE, Biobank Japan and MyCode, among other studies. In addition, HeartBioPortal2.0 now includes support for quantitative traits and ethnically diverse populations, allowing users to investigate the shared genetic architecture of any gene or its variants across the continuous cardiometabolic spectrum from health (e.g. blood pressure traits) to disease (e.g. hypertension), facilitating the understanding of CVD trait genetics that inform health-to-disease transitions and endophenotypes. Custom visualizations in the new and improved user interface, including performance enhancements and new security features such as user authentication, collectively re-imagine HeartBioPortal's user experience and provide a data commons that co-locates data, storage and computing infrastructure in the context of studying the genetic basis behind the leading cause of global mortality. Database URL: https://www.heartbioportal.com/.

RevDate: 2020-12-31

Halty A, Sánchez R, Vázquez V, et al (2020)

Scheduling in cloud manufacturing systems: Recent systematic literature review.

Mathematical biosciences and engineering : MBE, 17(6):7378-7397.

Cloud Manufacturing (CMFg) is a novel production paradigm that benefits from Cloud Computing in order to develop manufacturing systems linked by the cloud. These systems, based on virtual platforms, allow direct linkage between customers and suppliers of manufacturing services, regardless of geographical distance. In this way, CMfg can expand both markets for producers, and suppliers for customers. However, these linkages imply a new challenge for production planning and decision-making process, especially in Scheduling. In this paper, a systematic literature review of articles addressing scheduling in Cloud Manufacturing environments is carried out. The review takes as its starting point a seminal study published in 2019, in which all problem features are described in detail. We pay special attention to the optimization methods and problem-solving strategies that have been suggested in CMfg scheduling. From the review carried out, we can assert that CMfg is a topic of growing interest within the scientific community. We also conclude that the methods based on bio-inspired metaheuristics are by far the most widely used (they represent more than 50% of the articles found). On the other hand, we suggest some lines for future research to further consolidate this field. In particular, we want to highlight the multi-objective approach, since due to the nature of the problem and the production paradigm, the optimization objectives involved are generally in conflict. In addition, decentralized approaches such as those based on game theory are promising lines for future research.

RevDate: 2020-12-30

Sahlmann K, Clemens V, Nowak M, et al (2020)

MUP: Simplifying Secure Over-The-Air Update with MQTT for Constrained IoT Devices.

Sensors (Basel, Switzerland), 21(1): pii:s21010010.

Message Queuing Telemetry Transport (MQTT) is one of the dominant protocols for edge- and cloud-based Internet of Things (IoT) solutions. When a security vulnerability of an IoT device is known, it has to be fixed as soon as possible. This requires a firmware update procedure. In this paper, we propose a secure update protocol for MQTT-connected devices which ensures the freshness of the firmware, authenticates the new firmware and considers constrained devices. We show that the update protocol is easy to integrate in an MQTT-based IoT network using a semantic approach. The feasibility of our approach is demonstrated by a detailed performance analysis of our prototype implementation on an IoT device with 32 kB RAM. Thereby, we identify design issues in MQTT 5 which can help to improve the support of constrained devices.
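
A minimal sketch of how an update service might announce new firmware over MQTT with the paho-mqtt client is shown below. The broker address, topic layout and manifest fields are assumptions for illustration; the paper's protocol additionally handles firmware signing, freshness and the constraints of 32 kB-class devices.

```python
# Sketch: announcing a new firmware image to devices over MQTT (paho-mqtt 1.x
# style; paho-mqtt >= 2.0 also needs a CallbackAPIVersion argument to Client()).
# Broker, topic and manifest fields are assumptions; signing/freshness omitted.
import hashlib
import json
import paho.mqtt.client as mqtt

firmware = b"\x7fELF...binary image bytes..."        # placeholder image contents
manifest = {
    "version": "1.2.0",
    "size": len(firmware),
    "sha256": hashlib.sha256(firmware).hexdigest(),
    "url": "https://updates.example.org/fw/1.2.0.bin",
}

client = mqtt.Client()
client.connect("broker.example.org", 1883)
# Retained + QoS 1 so devices that reconnect later still receive the latest manifest.
client.publish("devices/sensor-class-a/firmware", json.dumps(manifest),
               qos=1, retain=True)
client.disconnect()
```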

RevDate: 2020-12-30

Asif R, Ghanem K, J Irvine (2020)

Proof-of-PUF Enabled Blockchain: Concurrent Data and Device Security for Internet-of-Energy.

Sensors (Basel, Switzerland), 21(1): pii:s21010028.

A detailed review of the technological aspects of Blockchain and Physical Unclonable Functions (PUFs) is presented in this article. It outlines an emerging concept of Blockchain that integrates hardware security primitives via PUFs to address the bandwidth, integration, scalability, latency, and energy requirements of Internet-of-Energy (IoE) systems. This hybrid approach, hereinafter termed PUFChain, provides device and data provenance, recording data origins, the history of data generation and processing, and clone-proof device identification and authentication, thus making it possible to track the sources and causes of any cyber attack. In addition, we review the key areas of design, development, and implementation, which give insight into seamless integration with legacy IoE systems, reliability, cyber resilience, and future research challenges.

RevDate: 2020-12-30

Lin HY, YM Hung (2020)

An Improved Proxy Re-Encryption Scheme for IoT-Based Data Outsourcing Services in Clouds.

Sensors (Basel, Switzerland), 21(1): pii:s21010067.

IoT-based data outsourcing services in clouds could be regarded as a new trend in recent years, as they could reduce the hardware and software cost for enterprises and obtain higher flexibility. To securely transfer an encrypted message in the cloud, a so-called proxy re-encryption scheme is a better alternative. In such schemes, a ciphertext designated for a data aggregation is able to be re-encrypted as one designated for another by a semi-trusted proxy without decryption. In this paper, we introduce a secure proxy re-encryption protocol for IoT-based data outsourcing services in clouds. The proposed scheme is provably secure assuming the hardness of the bilinear inverse Diffie-Hellman problem (BIDHP). In particular, our scheme is bidirectional and supports the functionality of multi-hop, which allows an uploaded ciphertext to be transformed into a different one multiple times. The ciphertext length of our method is independent of the number of involved IoT nodes. Specifically, the re-encryption process only takes one exponentiation computation which is around 54 ms when sharing the data with 100 IoT devices. For each IoT node, the decryption process only requires two exponentiation computations. When compared with a related protocol presented by Kim and Lee, the proposed one also exhibits lower computational costs.

RevDate: 2020-12-30

Abbas Q, A Alsheddy (2020)

Driver Fatigue Detection Systems Using Multi-Sensors, Smartphone, and Cloud-Based Computing Platforms: A Comparative Analysis.

Sensors (Basel, Switzerland), 21(1): pii:s21010056.

Internet of things (IoT) cloud-based applications deliver advanced solutions for smart cities to decrease traffic accidents caused by driver fatigue while driving on the road. Environmental conditions or driver behavior can ultimately lead to serious roadside accidents. In recent years, the authors have developed many low-cost, computerized, driver fatigue detection systems (DFDs) to help drivers, by using multi-sensors, and mobile and cloud-based computing architecture. To promote safe driving, these are the most current emerging platforms that were introduced in the past. In this paper, we reviewed state-of-the-art approaches for predicting unsafe driving styles using three common IoT-based architectures. The novelty of this article is to show major differences among multi-sensors, smartphone-based, and cloud-based architectures in multimodal feature processing. We discussed all of the problems that machine learning techniques faced in recent years, particularly the deep learning (DL) model, to predict driver hypovigilance, especially in terms of these three IoT-based architectures. Moreover, we performed state-of-the-art comparisons by using driving simulators to incorporate multimodal features of the driver. We also mention online data sources in this article to test and train network architecture in the field of DFDs on public available multimodal datasets. These comparisons assist other authors to continue future research in this domain. To evaluate the performance, we mention the major problems in these three architectures to help researchers use the best IoT-based architecture for detecting DFDs in a real-time environment. Moreover, the important factors of Multi-Access Edge Computing (MEC) and 5th generation (5G) networks are analyzed in the context of deep learning architecture to improve the response time of DFD systems. Lastly, it is concluded that there is a research gap when it comes to implementing the DFD systems on MEC and 5G technologies by using multimodal features and DL architecture.

RevDate: 2020-12-29

Alankar B, Sharma G, Kaur H, et al (2020)

Experimental Setup for Investigating the Efficient Load Balancing Algorithms on Virtual Cloud.

Sensors (Basel, Switzerland), 20(24): pii:s20247342.

Cloud computing has emerged as the primary choice for developers in developing applications that require high-performance computing. Virtualization technology has helped in the distribution of resources to multiple users. Increased use of cloud infrastructure has led to the challenge of developing a load balancing mechanism to provide optimized use of resources and better performance. Round robin and least connections load balancing algorithms have been developed to allocate user requests across a cluster of servers in the cloud in a time-bound manner. In this paper, we have applied the round robin and least connections approaches of load balancing to HAProxy, virtual machine clusters and web servers. The experimental results are visualized and summarized using Apache JMeter, and a further comparative study of round robin and least connections is also presented. The experimental setup and results show that the round robin algorithm performs better than the least connections algorithm on all load balancer parameters measured in this paper.
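
Reduced to their selection rules, the two policies compared above look like the toy Python below: round robin rotates through the servers regardless of load, while least connections picks the server with the fewest active connections. HAProxy implements these per backend; the snippet is only meant to make the difference concrete.

```python
# Sketch: the two selection rules, stripped of everything else. HAProxy applies
# these per backend; the server names and loads below are made up.
import itertools

servers = ["web1", "web2", "web3"]
rr_cycle = itertools.cycle(servers)

def pick_round_robin():
    return next(rr_cycle)                          # strict rotation, ignores current load

def pick_least_connections(active):
    return min(servers, key=lambda s: active[s])   # least busy server wins

for _ in range(4):
    print("round robin       ->", pick_round_robin())

active_connections = {"web1": 5, "web2": 1, "web3": 3}
print("least connections ->", pick_least_connections(active_connections))
```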

RevDate: 2020-12-28

Das Choudhury S, Maturu S, Samal A, et al (2020)

Leveraging Image Analysis to Compute 3D Plant Phenotypes Based on Voxel-Grid Plant Reconstruction.

Frontiers in plant science, 11:521431.

High throughput image-based plant phenotyping facilitates the extraction of morphological and biophysical traits of a large number of plants non-invasively in a relatively short time. It facilitates the computation of advanced phenotypes by considering the plant as a single object (holistic phenotypes) or its components, i.e., leaves and the stem (component phenotypes). The architectural complexity of plants increases over time due to variations in self-occlusions and phyllotaxy, i.e., arrangements of leaves around the stem. One of the central challenges to computing phenotypes from 2-dimensional (2D) single view images of plants, especially at the advanced vegetative stage in presence of self-occluding leaves, is that the information captured in 2D images is incomplete, and hence, the computed phenotypes are inaccurate. We introduce a novel algorithm to compute 3-dimensional (3D) plant phenotypes from multiview images using voxel-grid reconstruction of the plant (3DPhenoMV). The paper also presents a novel method to reliably detect and separate the individual leaves and the stem from the 3D voxel-grid of the plant using voxel overlapping consistency check and point cloud clustering techniques. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln 3D Plant Phenotyping Dataset (UNL-3DPPD). A generic taxonomy of 3D image-based plant phenotypes are also presented to promote 3D plant phenotyping research. A subset of these phenotypes are computed using computer vision algorithms with discussion of their significance in the context of plant science. The central contributions of the paper are (a) an algorithm for 3D voxel-grid reconstruction of maize plants at the advanced vegetative stages using images from multiple 2D views; (b) a generic taxonomy of 3D image-based plant phenotypes and a public benchmark dataset, i.e., UNL-3DPPD, to promote the development of 3D image-based plant phenotyping research; and (c) novel voxel overlapping consistency check and point cloud clustering techniques to detect and isolate individual leaves and stem of the maize plants to compute the component phenotypes. Detailed experimental analyses demonstrate the efficacy of the proposed method, and also show the potential of 3D phenotypes to explain the morphological characteristics of plants regulated by genetic and environmental interactions.
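
As a generic stand-in for the clustering half of the leaf/stem separation step, the sketch below clusters a synthetic 3D point cloud with DBSCAN from scikit-learn; the paper's own pipeline combines voxel-overlap consistency checks with clustering, which is not reproduced here.

```python
# Generic illustration: clustering a synthetic 3D point cloud with DBSCAN so
# that the "stem" and a "leaf" fall into separate clusters. The paper's actual
# method adds voxel-overlap consistency checks, which are not reproduced.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
stem = np.column_stack([rng.normal(0, 0.3, 300), rng.normal(0, 0.3, 300),
                        rng.uniform(0, 30, 300)])        # vertical stem (cm)
leaf = np.column_stack([rng.uniform(5, 15, 200), rng.normal(0, 0.5, 200),
                        rng.normal(20, 0.5, 200)])        # one horizontal leaf
points = np.vstack([stem, leaf])                          # stand-in voxel centers

labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(points)
print("clusters found:", sorted(set(labels) - {-1}),
      "| noise points:", int((labels == -1).sum()))
```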

RevDate: 2020-12-22

Chen B, Chen H, Yuan D, et al (2020)

3D Fast Object Detection Based on Discriminant Images and Dynamic Distance Threshold Clustering.

Sensors (Basel, Switzerland), 20(24): pii:s20247221.

The object detection algorithm based on vehicle-mounted lidar is a key component of the perception system on autonomous vehicles. It can provide high-precision and highly robust obstacle information for the safe driving of autonomous vehicles. However, most algorithms are often based on a large amount of point cloud data, which makes real-time detection difficult. To solve this problem, this paper proposes a 3D fast object detection method based on three main steps: First, the ground segmentation by discriminant image (GSDI) method is used to convert point cloud data into discriminant images for ground point segmentation, which avoids direct computation on the point cloud data and improves the efficiency of ground point segmentation. Second, the image detector is used to generate the region of interest of the three-dimensional object, which effectively narrows the search range. Finally, the dynamic distance threshold clustering (DDTC) method is designed for different densities of the point cloud data, which improves the detection of long-distance objects and avoids the over-segmentation produced by traditional algorithms. Experiments have shown that this algorithm can meet the real-time requirements of autonomous driving while maintaining high accuracy.
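The dynamic distance threshold idea can be illustrated with a simple region-growing clusterer whose merge radius grows with distance from the sensor; the threshold formula and constants below are assumptions for illustration, not the paper's DDTC definition.

```python
import numpy as np

def dynamic_threshold_clustering(points, base=0.3, gain=0.01):
    """Greedy region growing over a 3D point cloud.

    The merge radius grows with range, so sparse far-away objects are not
    split apart: threshold = base + gain * distance_to_sensor (assumed form).
    """
    n = len(points)
    labels = np.full(n, -1, dtype=int)
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cluster
        while stack:
            i = stack.pop()
            thr = base + gain * np.linalg.norm(points[i])
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < thr) & (labels == -1))[0]:
                labels[j] = cluster
                stack.append(j)
        cluster += 1
    return labels

# Example: one object near the sensor and one far away.
pts = np.array([[1.0, 0, 0], [1.1, 0, 0], [20.0, 0, 0], [20.4, 0, 0]])
print(dynamic_threshold_clustering(pts))   # e.g. [0 0 1 1]
```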

RevDate: 2020-12-21

Bibi N, Sikandar M, Ud Din I, et al (2020)

IoMT-Based Automated Detection and Classification of Leukemia Using Deep Learning.

Journal of healthcare engineering, 2020:6648574.

For the last few years, computer-aided diagnosis (CAD) has been advancing rapidly. Numerous machine learning algorithms have been developed to identify different diseases, e.g., leukemia. Leukemia is a white blood cell- (WBC-) related illness affecting the bone marrow and/or blood. A quick, safe, and accurate early-stage diagnosis of leukemia plays a key role in curing and saving patients' lives. Based on how it develops, leukemia has two primary forms, i.e., acute and chronic leukemia, and each form can be subcategorized as myeloid and lymphoid. There are, therefore, four leukemia subtypes. Various approaches have been developed to identify leukemia with respect to its subtypes. However, in terms of effectiveness, learning process, and performance, these methods require improvements. This study provides an Internet of Medical Things- (IoMT-) based framework to enhance and provide a quick and safe identification of leukemia. In the proposed IoMT system, with the help of cloud computing, clinical gadgets are linked to network resources. The system allows real-time coordination for testing, diagnosis, and treatment of leukemia among patients and healthcare professionals, which may save both the time and effort of patients and clinicians. Moreover, the presented framework is also helpful for resolving the problems of patients in critical condition during pandemics such as COVID-19. The methods used for the identification of leukemia subtypes in the suggested framework are the Dense Convolutional Neural Network (DenseNet-121) and the Residual Convolutional Neural Network (ResNet-34). Two publicly available datasets for leukemia, i.e., ALL-IDB and the ASH image bank, are used in this study. The results demonstrate that the suggested models outperform other well-known machine learning algorithms used for healthy-versus-leukemia-subtypes identification.
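As an illustration of the kind of transfer learning described (not the paper's exact training setup), the following PyTorch sketch replaces the ResNet-34 classifier head with a four-way output for the leukemia subtypes; the data, freezing strategy, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-34 and replace its classifier head with a
# 4-way output (acute/chronic x myeloid/lymphoid), purely as an illustration.
model = models.resnet34(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 4)

# Freeze the backbone and train only the new head (one common strategy; the
# paper's actual fine-tuning schedule is not specified here).
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 blood-smear crops.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```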

RevDate: 2020-12-18

Khorsheed MB, Zainel QM, Hassen OA, et al (2020)

The Application of Fractal Transform and Entropy for Improving Fault Tolerance and Load Balancing in Grid Computing Environments.

Entropy (Basel, Switzerland), 22(12): pii:e22121410.

This paper applies an entropy-based fractal indexing scheme that enables fast indexing and querying in a grid environment. It addresses the issues of fault tolerance and load balancing based on fractal management to make computational grids more effective and reliable. The fractal dimension of a cloud of points gives an estimate of the intrinsic dimensionality of the data in that space. The main drawback of this technique is the long computing time. The main contribution of the suggested work is to investigate the effect of the fractal transform by adding an entropy-based R-tree index structure to existing grid computing models to obtain a balanced infrastructure with minimal faults. In this regard, the presented work extends common scheduling algorithms, which are built on the physical grid structure, to a reduced logical network. The objective of this logical network is to reduce searching over grid paths according to arrival rate and path bandwidth, with respect to load balance and fault tolerance, respectively. Furthermore, an optimization search technique is utilized to enhance grid performance by investigating the optimum number of nodes extracted from the logical grid. The experimental results indicate that the proposed model achieves better execution time, throughput, makespan, latency, load balancing, and success rate.
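The fractal dimension of a cloud of points mentioned above can be estimated with a generic box-counting procedure; the sketch below illustrates that estimate only and is not the paper's entropy/R-tree indexing scheme.

```python
import numpy as np

def box_counting_dimension(points, box_sizes=(1.0, 0.5, 0.25, 0.125)):
    """Estimate the fractal (box-counting) dimension of a point cloud.

    For each box size e, count the non-empty grid cells N(e); the dimension is
    the slope of log N(e) versus log(1/e).
    """
    points = np.asarray(points, dtype=float)
    points = points - points.min(axis=0)          # shift into the positive orthant
    counts = []
    for e in box_sizes:
        cells = np.floor(points / e).astype(int)
        counts.append(len({tuple(c) for c in cells}))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Points sampled along a line should have a dimension close to 1.
line = np.column_stack([np.linspace(0, 10, 2000), np.zeros(2000)])
print(round(box_counting_dimension(line), 2))
```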

RevDate: 2020-12-17

Khan A, Nawaz U, Ulhaq A, et al (2020)

Real-time plant health assessment via implementing cloud-based scalable transfer learning on AWS DeepLens.

PloS one, 15(12):e0243243 pii:PONE-D-20-32970.

The control of plant leaf diseases is crucial as it affects the quality and production of plant species, with an effect on the economy of any country. Automated identification and classification of plant leaf diseases is, therefore, essential for the reduction of economic losses and the conservation of specific species. Various Machine Learning (ML) models have previously been proposed to detect and identify plant leaf disease; however, they lack usability due to hardware sophistication, limited scalability and inefficiency in realistic use. By implementing automatic detection and classification of leaf diseases in fruit trees (apple, grape, peach and strawberry) and vegetable plants (potato and tomato) through scalable transfer learning on Amazon Web Services (AWS) SageMaker and importing it into AWS DeepLens for real-time functional usability, our proposed DeepLens Classification and Detection Model (DCDM) addresses such limitations. Scalability and ubiquitous access to our approach are provided by cloud integration. Our experiments on an extensive image data set of healthy and unhealthy fruit tree and vegetable plant leaves showed 98.78% accuracy with real-time diagnosis of plant leaf diseases. To train the DCDM deep learning model, we used forty thousand images and then evaluated it on ten thousand images. It takes an average of 0.349 s to test an image for disease diagnosis and classification using AWS DeepLens, providing the consumer with disease information in less than a second.

RevDate: 2020-12-17

Molina-Molina A, Ruiz-Malagón EJ, Carrillo-Pérez F, et al (2020)

Validation of mDurance, A Wearable Surface Electromyography System for Muscle Activity Assessment.

Frontiers in physiology, 11:606287.

The mDurance® system is an innovative digital tool that combines wearable surface electromyography (sEMG), mobile computing and cloud analysis to streamline and automatize the assessment of muscle activity. The tool is particularly devised to support clinicians and sport professionals in their daily routines, as an assessment tool in the prevention, monitoring, rehabilitation and training fields. This study aimed at determining the validity of the mDurance system for measuring muscle activity by comparing sEMG output with a reference sEMG system, the Delsys® system. Fifteen participants were tested during isokinetic knee extensions at three different speeds (60, 180, and 300 deg/s), for two muscles (rectus femoris [RF] and vastus lateralis [VL]) and two different electrode locations (proximal and distal placement). The maximum voluntary isometric contraction was carried out for the normalization of the signal, followed by dynamic isokinetic knee extensions for each speed. The sEMG output for both systems was obtained from the raw sEMG signal following mDurance's processing and filtering. The mean, median, first quartile, third quartile and 90th percentile were calculated from the sEMG amplitude signals for each system. The results show an almost perfect ICC relationship for the VL (ICC > 0.81) and substantial to almost perfect for the RF (ICC > 0.762) for all variables and speeds. The Bland-Altman plots revealed heteroscedasticity of error for the mean, third quartile and 90th percentile (60 and 300 deg/s) for RF and for the mean and 90th percentile for VL (300 deg/s). In conclusion, the results indicate that the mDurance® sEMG system is a valid tool for measuring muscle activity during dynamic contractions over a range of speeds. This innovative system frees up time for clinicians (e.g., for interpreting patients' pathologies) and sport trainers (e.g., for advising athletes), thanks to automatic processing and filtering of the raw sEMG signal and the generation of muscle activity reports in real time.
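The agreement statistics reported above (Bland-Altman bias and limits of agreement) can be computed as in the following sketch; the paired amplitude values are illustrative, not data from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Illustrative paired mean sEMG amplitudes (device under test vs reference).
mdurance = [42.1, 55.3, 38.7, 61.0, 47.5]
delsys   = [40.8, 54.9, 39.5, 59.2, 46.3]
print(bland_altman(mdurance, delsys))   # bias, lower limit, upper limit
```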

RevDate: 2020-12-17
CmpDate: 2020-12-17

Filev Maia R, Ballester Lurbe C, Agrahari Baniya A, et al (2020)

IRRISENS: An IoT Platform Based on Microservices Applied in Commercial-Scale Crops Working in a Multi-Cloud Environment.

Sensors (Basel, Switzerland), 20(24): pii:s20247163.

Research has shown the multitude of applications that Internet of Things (IoT), cloud computing, and forecast technologies present in every sector. In agriculture, one application is the monitoring of factors that influence crop development to assist in making crop management decisions. Research on the application of such technologies in agriculture has been mainly conducted at small experimental sites or under controlled conditions. This research has provided relevant insights and guidelines for the use of different types of sensors, application of a multitude of algorithms to forecast relevant parameters as well as architectural approaches of IoT platforms. However, research on the implementation of IoT platforms at the commercial scale is needed to identify platform requirements to properly function under such conditions. This article evaluates an IoT platform (IRRISENS) based on fully replicable microservices used to sense soil, crop, and atmosphere parameters, interact with third-party cloud services for scheduling irrigation and, potentially, control irrigation automatically. The proposed IoT platform was evaluated during one growing season at four commercial-scale farms on two broadacre irrigated crops with very different water management requirements (rice and cotton). Five main requirements for IoT platforms to be used in agriculture at commercial scale were identified from implementing IRRISENS as an irrigation support tool for rice and cotton production: scalability, flexibility, heterogeneity, robustness to failure, and security. The platform addressed all these requirements. The results showed that the microservice-based approach used is robust against both intermittent and critical failures in the field that could occur in any of the monitored sites. Further, processing or storage overload caused by datalogger malfunctioning or other reasons at one farm did not affect the platform's performance. The platform was able to deal with different types of data heterogeneity. Since there are no shared microservices among farms, the IoT platform proposed here also provides data isolation, maintaining data confidentiality for each user, which is relevant in a commercial farm scenario.

RevDate: 2020-12-17

Suryanto N, Kang H, Kim Y, et al (2020)

A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization.

Sensors (Basel, Switzerland), 20(24): pii:s20247158.

Adversarial attack techniques in deep learning have been studied extensively due to their stealthiness to human eyes and their potentially dangerous consequences when applied to real-life applications. However, current attack methods in black-box settings mainly employ a large number of queries for crafting their adversarial examples, making them very likely to be detected and responded to by the target system (e.g., an artificial intelligence (AI) service provider) due to the high traffic volume. A recent proposal able to address the large-query problem utilizes a gradient-free approach based on the Particle Swarm Optimization (PSO) algorithm. Unfortunately, this original approach tends to have a low attack success rate, possibly due to the model's difficulty in escaping local optima. This obstacle can be overcome by employing a multi-group approach to the PSO algorithm, by which the PSO particles can be redistributed, preventing them from being trapped in local optima. In this paper, we present a black-box adversarial attack which can significantly increase the success rate of PSO-based attacks while maintaining a low number of queries by launching the attack in a distributed manner. Attacks are executed from multiple nodes, disseminating queries among the nodes, hence reducing the possibility of being recognized by the target system while also increasing scalability. Furthermore, we utilize Multi-Group PSO with Random Redistribution (MGRR-PSO) for perturbation generation, which performs better than the original approach against local optima, thus achieving a higher success rate. Additionally, we propose to efficiently remove excessive perturbation (i.e., perturbation pruning) by again utilizing MGRR-PSO rather than the standard iterative method used in the original approach. We perform five different experiments: comparing our attack's performance with existing algorithms, testing in a high-dimensional space on the ImageNet dataset, examining our hyperparameters (i.e., particle size, number of clients, search boundary), and testing a real digital attack on Google Cloud Vision. Our attack achieves a 100% success rate on the MNIST and CIFAR-10 datasets and successfully fools Google Cloud Vision as a proof of a real digital attack, while maintaining a low query count and wide applicability.
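A minimal sketch of the multi-group PSO idea, in which groups evolve around their own best particles and are periodically reshuffled to escape local optima, is shown below; the fitness function, coefficients, and redistribution schedule are assumptions for illustration, not the paper's attack objective or MGRR-PSO parameters.

```python
import numpy as np

def mgrr_pso(fitness, dim, n_groups=4, group_size=10, iters=100, redist_every=20):
    """Multi-group PSO: each group follows its own best particle, and particles
    are randomly redistributed across groups every few iterations."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (n_groups, group_size, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.apply_along_axis(fitness, 2, pos)

    for t in range(iters):
        gbest = pbest[np.arange(n_groups), pbest_val.argmin(axis=1)]   # per-group best
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest[:, None, :] - pos)
        pos = pos + vel
        val = np.apply_along_axis(fitness, 2, pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        if (t + 1) % redist_every == 0:            # random redistribution step
            order = rng.permutation(n_groups * group_size)
            reshape = lambda a: a.reshape(n_groups * group_size, -1)[order].reshape(a.shape)
            pos, vel, pbest = reshape(pos), reshape(vel), reshape(pbest)
            pbest_val = pbest_val.reshape(-1)[order].reshape(n_groups, group_size)
    return pbest.reshape(-1, dim)[pbest_val.argmin()]

# Toy objective: minimise the squared distance to a fixed target "perturbation".
best = mgrr_pso(lambda x: float(np.sum((x - 0.5) ** 2)), dim=8)
```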

RevDate: 2020-12-16

Faes L, Wagner SK, Fu DJ, et al (2019)

Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study.

The Lancet. Digital health, 1(5):e232-e242.

BACKGROUND: Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of automated deep learning software to develop medical image diagnostic classifiers by health-care professionals with no coding (and no deep learning) expertise.

METHODS: We used five publicly available open-source datasets: retinal fundus images (MESSIDOR); optical coherence tomography (OCT) images (Guangzhou Medical University and Shiley Eye Institute, version 3); images of skin lesions (Human Against Machine [HAM] 10000), and both paediatric and adult chest x-ray (CXR) images (Guangzhou Medical University and Shiley Eye Institute, version 3 and the National Institute of Health [NIH] dataset, respectively) to separately feed into a neural architecture search framework, hosted through Google Cloud AutoML, that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models. The discriminative performance was assessed using the area under the precision recall curve (AUPRC). In the case of the deep learning model developed on a subset of the HAM10000 dataset, we did external validation using the Edinburgh Dermofit Library dataset.

FINDINGS: Diagnostic properties and discriminative performance from internal validations were high in the binary classification tasks (sensitivity 73·3-97·0%; specificity 67-100%; AUPRC 0·87-1·00). In the multiple classification tasks, the diagnostic properties ranged from 38% to 100% for sensitivity and from 67% to 100% for specificity. The discriminative performance in terms of AUPRC ranged from 0·57 to 1·00 in the five automated deep learning models. In an external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0·47, with a sensitivity of 49% and a positive predictive value of 52%.

INTERPRETATION: All models, except the automated deep learning model trained on the multilabel classification task of the NIH CXR14 dataset, showed comparable discriminative performance and diagnostic properties to state-of-the-art performing deep learning algorithms. The performance in the external validation study was low. The quality of the open-access datasets (including insufficient information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, constituted the major limitations of this study. The availability of automated deep learning platforms provides an opportunity for the medical community to enhance their understanding of model development and evaluation. Although the derivation of classification models without requiring a deep understanding of the mathematical, statistical, and programming principles is attractive, comparable performance to expertly designed models is limited to more elementary classification tasks. Furthermore, care should be taken to adhere to ethical principles when using these automated models to avoid discrimination and causing harm. Future studies should compare several application programming interfaces on thoroughly curated datasets.

FUNDING: National Institute for Health Research and Moorfields Eye Charity.
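The discriminative metric used in this study, the area under the precision-recall curve, can be computed with scikit-learn as in the following sketch; the labels and scores are illustrative only.

```python
from sklearn.metrics import average_precision_score, precision_recall_curve

# Illustrative binary ground truth and model scores for one classification task.
y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.6]

auprc = average_precision_score(y_true, y_score)            # area under the PR curve
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(f"AUPRC = {auprc:.2f}")
```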

RevDate: 2020-12-14

Karim HMR (2020)

Cloud computing-based remote pre-anaesthetic check-up: An adapted approach during corona pandemic.

Indian journal of anaesthesia, 64(Suppl 4):S248-S249.

RevDate: 2020-12-10

Singh NK, Kumar N, AK Singh (2020)

Physiology to Disease Transmission of Respiratory Tract Infection: A Narrative Review.

Infectious disorders drug targets pii:IDDT-EPUB-112241 [Epub ahead of print].

INTRODUCTION: In the present scenario of the COVID-19 pandemic, the protective reflexes of sneezing and coughing have received great attention, not in terms of protection but in terms of the spread of infection. The present review tries to bring out the correlation between the physiology of sneezing and coughing, taking into consideration the various receptors that initiate the two reflexes, and then relates it to the formation of expelled droplets and the significance of various aspects of droplets that lead to the spread of infection.

MATERIAL AND METHODS: For the compilation of the present review, we searched the terms "Physiology of cough", "Physiology of sneeze", "droplets", "aerosols" and "Aerosols in COVID 19". These terms were searched extensively on PubMed, Google Scholar and the Google search engine, and after reviewing the available material, the most significant research was taken into consideration for this review.

CONCLUSION: Through this review we conclude that various factors are responsible for the initiation of sneezing and coughing, but in the case of infection it is mainly the inflammatory reaction that directly stimulates the receptors to produce the reflex outburst of air. As the flow of air during expiration is turbulent, it disrupts the epithelial lining fluid present in the respiratory conduit and also becomes admixed with the saliva in the oropharynx and oral cavity and the mucus in the nose, forming droplets of various sizes. Large droplets settle close by and are responsible for droplet and fomite transmission, while smaller droplets remain suspended in air and travel farther, causing airborne transmission. The droplet cloud from a sneeze may spread to 6 m or more, compared with a cough, so the concept of 1-2 m of social distancing does not hold if the patient is sneezing.

RevDate: 2020-12-10

Sheng J, Liu C, Chen L, et al (2020)

Research on Community Detection in Complex Networks Based on Internode Attraction.

Entropy (Basel, Switzerland), 22(12): pii:e22121383.

With the rapid development of computer technology, research on complex networks has attracted more and more attention. At present, the highly active research directions of cloud computing, big data, the Internet of Vehicles, and distributed systems are all based on complex networks. Community structure detection is a very important and meaningful research hotspot in complex networks. It is a difficult task to divide the community structure quickly and accurately and to run such methods on large-scale networks. In this paper, we put forward a new community detection approach based on internode attraction, named IACD. This algorithm starts from the perspective of the important nodes of the complex network and refers to the gravitational relationship between two objects in physics to represent the forces between nodes in the network dataset, and then performs community detection. Through experiments on a large number of real-world datasets and synthetic networks, it is shown that the IACD algorithm can quickly and accurately divide the community structure, and that it is superior to some classic algorithms and recently proposed algorithms.
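A toy illustration of the gravity analogy, attraction growing with node importance (degree) and shrinking with distance, is sketched below; the formula and assignment rule are simplifications and not the actual IACD algorithm.

```python
import networkx as nx

def attraction(G, u, v):
    """Gravity-style attraction: degree(u) * degree(v) / shortest-path distance^2."""
    d = nx.shortest_path_length(G, u, v)
    return G.degree(u) * G.degree(v) / (d * d)

def attach_to_strongest(G, seeds):
    """Assign every non-seed node to the seed (important node) that attracts it most."""
    return {n: max(seeds, key=lambda s: attraction(G, n, s)) for n in G if n not in seeds}

G = nx.karate_club_graph()
print(attach_to_strongest(G, seeds=[0, 33]))   # two hub nodes as community centres
```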

RevDate: 2020-12-09

Abbasi WA, Yaseen A, Hassan FU, et al (2020)

ISLAND: in-silico proteins binding affinity prediction using sequence information.

BioData mining, 13(1):20 pii:10.1186/s13040-020-00231-w.

BACKGROUND: Determining binding affinity in protein-protein interactions is important in the discovery and design of novel therapeutics and mutagenesis studies. Determination of binding affinity of proteins in the formation of protein complexes requires sophisticated, expensive and time-consuming experimentation which can be replaced with computational methods. Most computational prediction techniques require protein structures that limit their applicability to protein complexes with known structures. In this work, we explore sequence-based protein binding affinity prediction using machine learning.

METHOD: We have used protein sequence information instead of protein structures along with machine learning techniques to accurately predict the protein binding affinity.

RESULTS: We present our findings that the true generalization performance of even the state-of-the-art sequence-only predictor is far from satisfactory and that the development of machine learning methods for binding affinity prediction with improved generalization performance is still an open problem. We have also proposed a sequence-based novel protein binding affinity predictor called ISLAND which gives better accuracy than existing methods over the same validation set as well as on external independent test dataset. A cloud-based webserver implementation of ISLAND and its python code are available at https://sites.google.com/view/wajidarshad/software .

CONCLUSION: This paper highlights the fact that the true generalization performance of even the state-of-the-art sequence-only predictor of binding affinity is far from satisfactory and that the development of effective and practical methods in this domain is still an open problem.

RevDate: 2020-12-09

Balaniuk R, Isupova O, S Reece (2020)

Mining and Tailings Dam Detection in Satellite Imagery Using Deep Learning.

Sensors (Basel, Switzerland), 20(23): pii:s20236936.

This work explores the combination of free cloud computing, free open-source software, and deep learning methods to analyze a real, large-scale problem: the automatic country-wide identification and classification of surface mines and mining tailings dams in Brazil. Locations of officially registered mines and dams were obtained from the Brazilian government open data resource. Multispectral Sentinel-2 satellite imagery, obtained and processed at the Google Earth Engine platform, was used to train and test deep neural networks using the TensorFlow 2 application programming interface (API) and Google Colaboratory (Colab) platform. Fully convolutional neural networks were used in an innovative way to search for unregistered ore mines and tailing dams in large areas of the Brazilian territory. The efficacy of the approach is demonstrated by the discovery of 263 mines that do not have an official mining concession. This exploratory work highlights the potential of a set of new technologies, freely available, for the construction of low cost data science tools that have high social impact. At the same time, it discusses and seeks to suggest practical solutions for the complex and serious problem of illegal mining and the proliferation of tailings dams, which pose high risks to the population and the environment, especially in developing countries.

RevDate: 2020-12-09

Zhang S, Wen Q, Li W, et al (2020)

A Multi-User Public Key Encryption with Multi-Keyword Search out of Bilinear Pairings.

Sensors (Basel, Switzerland), 20(23): pii:s20236962.

The Internet of Things (IoT) and cloud computing are widely adopted in daily life and industrial production. Sensors in IoT equipment gather personal, sensitive and important data, which is stored on a cloud server. The cloud helps users save cost and collaborate; however, data privacy is also at risk. Public-key encryption with keyword search (PEKS) lets users make use of the data without leaking privacy. In this article, we give a multi-user PEKS scheme that realizes multi-keyword search in a single query and extend it to rank results based on keyword matches. The receiver can finish the search by himself or herself. With a private cloud and a server cloud, most of the users' computation can be outsourced. Moreover, the PEKS can be transferred to a multi-user model in which the private cloud is used to manage receivers and outsource computation. The storage cloud and the private cloud both learn nothing about the keyword information, so our IoT devices can easily run these protocols. As we do not use any pairing operations, the scheme rests on more general assumptions, which means the devices do not need to take on the heavy task of computing pairings.

RevDate: 2020-12-08

Chen Y, Yang T, Li C, et al (2020)

A Binarized Segmented ResNet Based on Edge Computing for Re-Identification.

Sensors (Basel, Switzerland), 20(23): pii:s20236902.

With the advent of the Internet of Everything, more and more devices are connected to the Internet every year. In major cities, in order to maintain normal social order, the demand for deployed cameras is also increasing. In terms of public safety, person Re-Identification (ReID) can play a big role. However, current ReID methods transfer the collected pedestrian images to the cloud for processing, which incurs huge communication costs. To solve this problem, we leverage the recently emerging edge computing paradigm, using the edge to bridge the end devices and the cloud, and implement our proposed binarized segmented ResNet. Our method divides a complete ResNet into three parts, corresponding to the end devices, the edge, and the cloud. After joint training, each segmented sub-network is deployed to its corresponding side, and inference is performed to realize ReID. In our experiments, we compared our approach with some traditional ReID methods in terms of accuracy and communication overhead. Our method greatly reduces the communication cost while barely reducing the recognition accuracy of ReID; in general, the communication cost can be reduced by four to eight times.
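A minimal PyTorch sketch of partitioning a ResNet into device, edge, and cloud sub-networks is shown below; the split points are illustrative choices, and the binarization and joint-training steps of the paper are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet18(pretrained=False)
layers = list(resnet.children())   # conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc

# Illustrative split: early layers on the device, middle on the edge, rest in the cloud.
device_part = nn.Sequential(*layers[:5])                            # conv1 .. layer1
edge_part   = nn.Sequential(*layers[5:7])                           # layer2, layer3
cloud_part  = nn.Sequential(*layers[7:9], nn.Flatten(), layers[9])  # layer4, avgpool, fc

x = torch.randn(1, 3, 224, 224)           # a pedestrian crop from a camera
feat_device = device_part(x)              # sent from the end device to the edge
feat_edge   = edge_part(feat_device)      # sent from the edge to the cloud
logits      = cloud_part(feat_edge)       # final embedding/classification in the cloud
```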

RevDate: 2020-12-08

Zhou Y, Li N, Tian Y, et al (2020)

Public Key Encryption with Keyword Search in Cloud: A Survey.

Entropy (Basel, Switzerland), 22(4): pii:e22040421.

With the popularization of cloud computing, many businesses and individuals prefer to outsource their data to the cloud in encrypted form to protect data confidentiality. However, how to search over encrypted data becomes a concern for users. To address this issue, searchable encryption is a cryptographic primitive that enables users to run search queries over encrypted data stored on an untrusted server while guaranteeing the privacy of the data. Public key encryption with keyword search (PEKS) has received a lot of attention as an important branch. In this paper, we focus on the development of PEKS in the cloud by providing a comprehensive research survey. From a technological viewpoint, the existing PEKS schemes can be classified into several variants: PEKS based on public key infrastructure, PEKS based on identity-based encryption, PEKS based on attribute-based encryption, PEKS based on predicate encryption, PEKS based on certificateless encryption, and PEKS supporting proxy re-encryption. Moreover, we propose some potential applications and valuable future research directions in PEKS.

RevDate: 2020-12-04

Whaiduzzaman M, Hossain MR, Shovon AR, et al (2020)

A Privacy-Preserving Mobile and Fog Computing Framework to Trace and Prevent COVID-19 Community Transmission.

IEEE journal of biomedical and health informatics, 24(12):3564-3575.

To slow down the spread of COVID-19, governments worldwide are trying to identify infected people and contain the virus by enforcing isolation and quarantine. However, it is difficult to trace people who came into contact with an infected person, which causes widespread community transmission and mass infection. To address this problem, we develop an e-government Privacy-Preserving Mobile and Fog computing framework entitled PPMF that can trace infected and suspected cases nationwide. We use personal mobile devices with a contact tracing app and two types of stationary fog nodes, named Automatic Risk Checkers (ARCs) and Suspected User Data Uploader Nodes (SUDUNs), to trace community transmission while maintaining user data privacy. Each user's mobile device receives a Unique Encrypted Reference Code (UERC) when registering on the central application. The mobile device and the central application both generate a Rotational Unique Encrypted Reference Code (RUERC), which is broadcast using Bluetooth Low Energy (BLE) technology. The ARCs are placed at the entry points of buildings, which can immediately detect if there are positive or suspected cases nearby. If any confirmed case is found, the ARCs broadcast precautionary messages to nearby people without revealing the identity of the infected person. The SUDUNs are placed at the health centers that report test results to the central cloud application. The reported data is later used to map between infected and suspected cases. Therefore, using our proposed PPMF framework, governments can let organizations continue their economic activities without complete lockdown.
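One plausible way to derive a short-lived, privacy-preserving broadcast code from a per-user secret is an HMAC over the current time window, sketched below; the paper's actual UERC/RUERC construction is not specified here, so the function name, window length, and truncation are assumptions.

```python
import hmac, hashlib, time

def rotational_code(user_secret: bytes, window_seconds: int = 900) -> str:
    """Derive a short-lived code from the user's secret and the current time window.

    Broadcasting only this rotating value over BLE avoids exposing a stable
    identifier; a server that knows the secret can recompute and match it later.
    """
    window = int(time.time() // window_seconds)
    mac = hmac.new(user_secret, str(window).encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

# Illustrative use: the phone broadcasts the code; a health-centre node holding
# the same secret can link a positive test result to past broadcasts.
print(rotational_code(b"per-user-secret-issued-at-registration"))
```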

RevDate: 2020-12-03

Salama AbdELminaam D, Almansori AM, Taha M, et al (2020)

A deep facial recognition system using computational intelligent algorithms.

PloS one, 15(12):e0242269 pii:PONE-D-20-15335.

The development of biometric applications, such as facial recognition (FR), has recently become important in smart cities. Many scientists and engineers around the world have focused on establishing increasingly robust and accurate algorithms and methods for these types of systems and their applications in everyday life. FR is a developing technology with multiple real-time applications. The goal of this paper is to develop a complete FR system using transfer learning in fog computing and cloud computing. The developed system uses deep convolutional neural networks (DCNNs) because of their dominant representational power; some conditions, including occlusions, expressions, illuminations, and pose, can affect deep FR performance. The DCNN is used to extract relevant facial features. These features allow faces to be compared in an efficient way. The system can be trained to recognize a set of people and to learn via an online method, by integrating the new people it processes and improving its predictions on the ones it already has. The proposed recognition method was tested with three different standard machine learning algorithms (Decision Tree (DT), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM)). The proposed system has been evaluated using three datasets of face images (SDUMLA-HMT, 113, and CASIA) via the performance metrics of accuracy, precision, sensitivity, specificity, and time. The experimental results show that the proposed method outperforms the other algorithms on all parameters, with higher accuracy (99.06%), higher precision (99.12%), higher recall (99.07%), and higher specificity (99.10%) than the comparison algorithms.

RevDate: 2020-12-03

Aigouy B, Cortes C, Liu S, et al (2020)

EPySeg: a coding-free solution for automated segmentation of epithelia using deep learning.

Development (Cambridge, England) pii:dev.194589 [Epub ahead of print].

Epithelia are dynamic tissues that self-remodel during their development. During morphogenesis, the tissue-scale organization of epithelia is obtained through a sum of individual contributions of the cells constituting the tissue. Therefore, understanding any morphogenetic event first requires a thorough segmentation of its constituent cells. This task, however, usually implies extensive manual correction, even with semi-automated tools. Here we present EPySeg, an open-source, coding-free software that uses deep learning to segment membrane-stained epithelial tissues automatically and very efficiently. EPySeg, which comes with a straightforward graphical user interface, can be used as a python package on a local computer, or on the cloud via Google Colab for users not equipped with deep-learning compatible hardware. By substantially reducing human input in image segmentation, EPySeg accelerates and improves the characterization of epithelial tissues for all developmental biologists.

RevDate: 2020-12-03

Li J, Liang X, Dai C, et al (2019)

Reversible Data Hiding Algorithm in Fully Homomorphic Encrypted Domain.

Entropy (Basel, Switzerland), 21(7): pii:e21070625.

This paper proposes a reversible data hiding scheme by exploiting the DGHV fully homomorphic encryption, and analyzes the feasibility of the scheme for data hiding from the perspective of information entropy. In the proposed algorithm, additional data can be embedded directly into a DGHV fully homomorphic encrypted image without any preprocessing. On the sending side, by using two encrypted pixels as a group, a data hider can get the difference of two pixels in a group. Additional data can be embedded into the encrypted image by shifting the histogram of the differences with the fully homomorphic property. On the receiver side, a legal user can extract the additional data by getting the difference histogram, and the original image can be restored by using modular arithmetic. Besides, the additional data can be extracted after decryption while the original image can be restored. Compared with the previous two typical algorithms, the proposed scheme can effectively avoid preprocessing operations before encryption and can successfully embed and extract additional data in the encrypted domain. The extensive testing results on the standard images have certified the effectiveness of the proposed scheme.
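The embedding idea, shifting the histogram of pairwise pixel differences and coding a bit at the peak bin, is sketched below on plaintext pixel pairs for readability; in the paper the same shift is carried out homomorphically on DGHV-encrypted pixels, and the peak choice and overflow handling here are simplified assumptions.

```python
def embed(pairs, bits, peak=0):
    """Histogram shifting on pair differences: shift bins above the peak right by one,
    then encode a bit in each pair whose difference equals the peak."""
    out, it = [], iter(bits)
    for p1, p2 in pairs:
        d = int(p1) - int(p2)
        if d > peak:
            d += 1                      # make room next to the peak bin
        elif d == peak:
            d += next(it, 0)            # peak stays for bit 0, moves to peak+1 for bit 1
        out.append((p2 + d, p2))
    return out

def extract(pairs, peak=0):
    """Recover the hidden bits and restore the original pixel pairs."""
    bits, restored = [], []
    for p1, p2 in pairs:
        d = int(p1) - int(p2)
        if d in (peak, peak + 1):
            bits.append(d - peak)
            d = peak
        elif d > peak + 1:
            d -= 1
        restored.append((p2 + d, p2))
    return bits, restored

pairs = [(120, 120), (130, 128), (90, 90), (75, 77)]
marked = embed(pairs, bits=[1, 0])
print(extract(marked))                  # -> ([1, 0], original pairs)
```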

RevDate: 2020-12-03

Tilei G, Tong L, Ming Y, et al (2019)

Research on a Trustworthiness Measurement Method of Cloud Service Construction Processes Based on Information Entropy.

Entropy (Basel, Switzerland), 21(5): pii:e21050462.

The popularity of cloud computing has gradually made cloud services the leading computing model. The trustworthiness of cloud services depends mainly on their construction processes. The trustworthiness measurement of cloud service construction processes (CSCPs) is crucial for cloud service developers. It can help to find the causes of failures and to improve the development process, thereby ensuring the quality of the cloud service. Herein, firstly, a trustworthiness hierarchy model of CSCP was proposed, and the influential factors of the processes were identified following ISO/IEC 12207, the international standard for the software development process. Further, a method was developed that combines the theory of information entropy with the concept of trustworthiness. It aims to calculate the risk uncertainty and the expected risk loss affecting trustworthiness. Also, the trustworthiness of the cloud service and its main construction processes was calculated. Finally, the feasibility of the measurement method was verified through a case study, and a comparison with the AHP and CMM/CMMI methods demonstrated its advantages.

RevDate: 2020-12-03

Cai Y, Tang C, Q Xu (2020)

Two-Party Privacy-Preserving Set Intersection with FHE.

Entropy (Basel, Switzerland), 22(12): pii:e22121339.

A two-party private set intersection allows two parties, the client and the server, to compute an intersection over their private sets, without revealing any information beyond the intersecting elements. We present a novel private set intersection protocol based on Shuhong Gao's fully homomorphic encryption scheme and prove the security of the protocol in the semi-honest model. We also present a variant of the protocol which is a completely novel construction for computing the intersection based on Bloom filter and fully homomorphic encryption, and the protocol's complexity is independent of the set size of the client. The security of the protocols relies on the learning with errors and ring learning with error problems. Furthermore, in the cloud with malicious adversaries, the computation of the private set intersection can be outsourced to the cloud service provider without revealing any private information.
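A plaintext Bloom-filter sketch of the client-size-independent idea is shown below: the server tests its own elements against a filter built from the client's set. In the actual protocol these membership tests are evaluated under fully homomorphic encryption, which is omitted here.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

client_set = {"alice@example.com", "bob@example.com"}
server_set = {"bob@example.com", "carol@example.com"}

bf = BloomFilter()
for x in client_set:
    bf.add(x)

# In the actual protocol the filter is encrypted; here it is queried in the clear.
intersection = {x for x in server_set if bf.might_contain(x)}
print(intersection)    # {'bob@example.com'} (Bloom filters allow rare false positives)
```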

RevDate: 2020-12-03

Froiz-Míguez I, Lopez-Iturri P, Fraga-Lamas P, et al (2020)

Design, Implementation, and Empirical Validation of an IoT Smart Irrigation System for Fog Computing Applications Based on LoRa and LoRaWAN Sensor Nodes.

Sensors (Basel, Switzerland), 20(23): pii:s20236865.

Climate change is driving new solutions to manage water more efficiently. Such solutions involve the development of smart irrigation systems where Internet of Things (IoT) nodes are deployed throughout large areas. In addition, in the mentioned areas, wireless communications can be difficult due to the presence of obstacles and metallic objects that block electromagnetic wave propagation totally or partially. This article details the development of a smart irrigation system able to cover large urban areas thanks to the use of Low-Power Wide-Area Network (LPWAN) sensor nodes based on LoRa and LoRaWAN. IoT nodes collect soil temperature/moisture and air temperature data, and control water supply autonomously, either by making use of fog computing gateways or by relying on remote commands sent from a cloud. Since the selection of IoT node and gateway locations is essential to have good connectivity and to reduce energy consumption, this article uses an in-house 3D-ray launching radio-planning tool to determine the best locations in real scenarios. Specifically, this paper provides details on the modeling of a university campus, which includes elements like buildings, roads, green areas, or vehicles. In such a scenario, simulations and empirical measurements were performed for two different testbeds: a LoRaWAN testbed that operates at 868 MHz and a testbed based on LoRa with 433 MHz transceivers. All the measurements agree with the simulation results, showing the impact of shadowing effects and material features (e.g., permittivity, conductivity) in the electromagnetic propagation of near-ground and underground LoRaWAN communications. Higher RF power levels are observed for 433 MHz due to the higher transmitted power level and the lower radio propagation losses, and even in the worst gateway location, the received power level is higher than the sensitivity threshold (-148 dBm). Regarding water consumption, the provided estimations indicate that the proposed smart irrigation system is able to reduce roughly 23% of the amount of used water just by considering weather forecasts. The obtained results provide useful guidelines for future smart irrigation developers and show the radio planning tool accuracy, which allows for optimizing the sensor network topology and the overall performance of the network in terms of coverage, cost, and energy consumption.

RevDate: 2020-12-03

Santos J, Wauters T, Volckaert B, et al (2017)

Fog Computing: Enabling the Management and Orchestration of Smart City Applications in 5G Networks.

Entropy (Basel, Switzerland), 20(1): pii:e20010004.

Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.

RevDate: 2020-12-02

Xu S, C Guo (2020)

Computation Offloading in a Cognitive Vehicular Networks with Vehicular Cloud Computing and Remote Cloud Computing.

Sensors (Basel, Switzerland), 20(23): pii:s20236820.

To meet the explosive growth of computation-intensive vehicular applications, we investigated the computation offloading problem in a cognitive vehicular network (CVN). Specifically, in our scheme, vehicular cloud computing (VCC)- and remote cloud computing (RCC)-enabled computation offloading were jointly considered. So far, extensive research has been conducted on RCC-based computation offloading, while studies on VCC-based computation offloading are relatively rare. In fact, due to the dynamics and uncertainty of on-board resources, VCC-based computation offloading is more challenging than the RCC one, especially in vehicular scenarios with expensive inter-vehicle communication or poor communication environments. To solve this problem, we propose to leverage the VCC's computation resources for computation offloading in a perception-exploitation manner, which mainly comprises two stages: resource discovery and computation offloading. In the resource discovery stage, based on the action-observation history, a Long Short-Term Memory (LSTM) model is proposed to predict the on-board resource utilization status at the next time slot. Thereafter, based on the obtained computation resource distribution, a decentralized multi-agent Deep Reinforcement Learning (DRL) algorithm is proposed to solve the collaborative computation offloading with VCC and RCC. Last but not least, the proposed algorithms' effectiveness is verified with a host of numerical simulation results from different perspectives.
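The resource-discovery stage can be pictured as a small sequence model; the PyTorch sketch below trains an LSTM to predict next-slot on-board resource utilization from an observation history, with placeholder dimensions and random data, and the DRL offloading stage is not shown.

```python
import torch
import torch.nn as nn

class ResourcePredictor(nn.Module):
    """Predict each vehicle's available compute at the next time slot."""
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history):                 # history: (batch, time, features)
        out, _ = self.lstm(history)
        return self.head(out[:, -1, :])         # use the last hidden state

model = ResourcePredictor()
history = torch.rand(16, 10, 3)    # 16 vehicles, 10 past slots, 3 observed features
target = torch.rand(16, 1)         # observed utilisation at the next slot
loss = nn.functional.mse_loss(model(history), target)
loss.backward()
```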

RevDate: 2020-12-01

Bandyopadhyay A, Kumar Singh V, Mukhopadhyay S, et al (2020)

Matching IoT Devices to the Fog Service Providers: A Mechanism Design Perspective.

Sensors (Basel, Switzerland), 20(23): pii:s20236761.

In the Internet of Things (IoT) + Fog + Cloud architecture, with the unprecedented growth of IoT devices, one of the challenging issues that needs to be tackled is to allocate Fog service providers (FSPs) to IoT devices, especially in a game-theoretic environment. Here, the allocation of FSPs to IoT devices is examined through a game-theoretic lens so that utility-maximizing agents behave benignly. In this scenario, we have multiple IoT devices and multiple FSPs, and the IoT devices give a preference ordering over a subset of the FSPs. Given such a scenario, the goal is to allocate at most one FSP to each of the IoT devices. We propose mechanisms based on the theory of mechanism design without money to allocate FSPs to the IoT devices. The proposed mechanisms have been designed in a flexible manner to address both long- and short-duration access to the FSPs by the IoT devices. For analytical results, we have proved economic robustness, and probabilistic analyses have been carried out for the allocation of IoT devices to the FSPs. In simulation, the efficiency of the mechanisms is presented under different scenarios with an implementation in Python.
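As a simple illustration of mechanism design without money, the sketch below matches IoT devices to FSPs by a fixed priority order (serial dictatorship); it is not the paper's mechanism, and the device names, FSPs, and capacities are hypothetical.

```python
def priority_match(preferences, capacity):
    """Match each IoT device to at most one fog service provider (FSP).

    preferences: dict device -> ordered list of acceptable FSPs
    capacity:    dict FSP -> number of devices it can serve
    Devices are served in a fixed priority order; each takes its best
    still-available FSP, which makes truthful reporting a best response.
    """
    remaining = dict(capacity)
    allocation = {}
    for device, prefs in preferences.items():        # dict order = priority order
        for fsp in prefs:
            if remaining.get(fsp, 0) > 0:
                allocation[device] = fsp
                remaining[fsp] -= 1
                break
    return allocation

prefs = {"cam-1": ["fsp-A", "fsp-B"],
         "cam-2": ["fsp-A"],
         "sensor-3": ["fsp-B", "fsp-A"]}
print(priority_match(prefs, {"fsp-A": 1, "fsp-B": 1}))
# cam-1 -> fsp-A, cam-2 left unmatched, sensor-3 -> fsp-B
```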

RevDate: 2020-12-01

Díaz-de-Arcaya J, Miñón R, Torre-Bastida AI, et al (2020)

PADL: A Modeling and Deployment Language for Advanced Analytical Services.

Sensors (Basel, Switzerland), 20(23): pii:s20236712.

In the smart city context, Big Data analytics plays an important role in processing the data collected through IoT devices. The analysis of the information gathered by sensors favors the generation of specific services and systems that not only improve the quality of life of the citizens, but also optimize city resources. However, the difficulties of implementing this entire process in real scenarios are manifold, including the huge amount and heterogeneity of the devices, their geographical distribution, and the complexity of the necessary IT infrastructures. For this reason, the main contribution of this paper is the PADL description language, which has been specifically tailored to assist in the definition and operationalization phases of the machine learning life cycle. It provides annotations that serve as an abstraction layer from the underlying infrastructure and technologies, hence facilitating the work of data scientists and engineers. Due to its proficiency in the operationalization of distributed pipelines over edge, fog, and cloud layers, it is particularly useful in the complex and heterogeneous environments of smart cities. For this purpose, PADL contains functionalities for the specification of monitoring, notifications, and actuation capabilities. In addition, we provide tools that facilitate its adoption in production environments. Finally, we showcase the usefulness of the language by showing the definition of PADL-compliant analytical pipelines over two use cases in a smart city context (flood control and waste management), demonstrating that its adoption is simple and beneficial for the definition of information and process flows in such environments.

RevDate: 2020-11-25

Gonzalez Villasanti H, Justice LM, Chaparro-Moreno LJ, et al (2020)

Automatized analysis of children's exposure to child-directed speech in preschool settings: Validation and application.

PloS one, 15(11):e0242511 pii:PONE-D-20-15910.

The present study explored whether a tool for automatic detection and recognition of interactions and child-directed speech (CDS) in preschool classrooms could be developed, validated, and applied to non-coded video recordings representing children's classroom experiences. Using first-person video recordings collected by 13 preschool children during a morning in their classrooms, we extracted high-level audiovisual features from recordings using automatic speech recognition and computer vision services from a cloud computing provider. Using manual coding for interactions and transcriptions of CDS as reference, we trained and tested supervised classifiers and linear mappings to measure five variables of interest. We show that the supervised classifiers trained with speech activity, proximity, and high-level facial features achieve adequate accuracy in detecting interactions. Furthermore, in combination with an automatic speech recognition service, the supervised classifier achieved error rates for CDS measures that are in line with other open-source automatic decoding tools in early childhood settings. Finally, we demonstrate our tool's applicability by using it to automatically code and transcribe children's interactions and CDS exposure vertically within a classroom day (morning to afternoon) and horizontally over time (fall to winter). Developing and scaling tools for automatized capture of children's interactions with others in the preschool classroom, as well as exposure to CDS, may revolutionize scientific efforts to identify precise mechanisms that foster young children's language development.

RevDate: 2020-11-24

Haas T (2020)

Developing political-ecological theory: The need for many-task computing.

PloS one, 15(11):e0226861 pii:PONE-D-19-33903.

Models of political-ecological systems can inform policies for managing ecosystems that contain endangered species. To increase the credibility of these models, massive computation is needed to statistically estimate the model's parameters, compute confidence intervals for these parameters, determine the model's prediction error rate, and assess its sensitivity to parameter misspecification. To meet this statistical and computational challenge, this article delivers statistical algorithms and a method for constructing ecosystem management plans that are coded as distributed computing applications. These applications can run on cluster computers, the cloud, or a collection of in-house workstations. This downloadable code is used to address the challenge of conserving the East African cheetah (Acinonyx jubatus). This demonstration means that the new standard of credibility that any political-ecological model needs to meet is the one given herein.
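The many-task pattern called for above, many independent model evaluations farmed out in parallel, can be sketched with Python's process pool as below; the objective function is a placeholder, and a cluster or cloud job queue would stand in for the local executor in practice.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def evaluate_model(params):
    """Placeholder for one expensive political-ecological model run."""
    random.seed(hash(params) & 0xFFFF)
    return sum(p * random.random() for p in params)

if __name__ == "__main__":
    # Many candidate parameter vectors, evaluated in parallel across CPU cores;
    # the same pattern scales out to a cluster or cloud job queue.
    grid = [(a, b) for a in (0.1, 0.5, 1.0) for b in (2.0, 4.0)]
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(evaluate_model, grid))
    best = min(zip(scores, grid))
    print("best parameters:", best[1])
```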

RevDate: 2020-12-01

Hassan SR, Ahmad I, Ahmad S, et al (2020)

Remote Pain Monitoring Using Fog Computing for e-Healthcare: An Efficient Architecture.

Sensors (Basel, Switzerland), 20(22):.

The integration of medical signal processing capabilities and advanced sensors into Internet of Things (IoT) devices plays a key role in providing comfort and convenience to human lives. As the number of patients is increasing gradually, providing healthcare facilities to each patient, particularly to the patients located in remote regions, not only has become challenging but also results in several issues, such as: (i) increase in workload on paramedics, (ii) wastage of time, and (iii) accommodation of patients. Therefore, the design of smart healthcare systems has become an important area of research to overcome these above-mentioned issues. Several healthcare applications have been designed using wireless sensor networks (WSNs), cloud computing, and fog computing. Most of the e-healthcare applications are designed using the cloud computing paradigm. Cloud-based architecture introduces high latency while processing huge amounts of data, thus restricting the large-scale implementation of latency-sensitive e-healthcare applications. Fog computing architecture offers processing and storage resources near to the edge of the network, thus, designing e-healthcare applications using the fog computing paradigm is of interest to meet the low latency requirement of such applications. Patients that are minors or are in intensive care units (ICUs) are unable to self-report their pain conditions. The remote healthcare monitoring applications deploy IoT devices with bio-sensors capable of sensing surface electromyogram (sEMG) and electrocardiogram (ECG) signals to monitor the pain condition of such patients. In this article, fog computing architecture is proposed for deploying a remote pain monitoring system. The key motivation for adopting the fog paradigm in our proposed approach is to reduce latency and network consumption. To validate the effectiveness of the proposed approach in minimizing delay and network utilization, simulations were carried out in iFogSim and the results were compared with the cloud-based systems. The results of the simulations carried out in this research indicate that a reduction in both latency and network consumption can be achieved by adopting the proposed approach for implementing a remote pain monitoring system.

RevDate: 2020-11-19

Van Horn JD (2020)

Bridging the Brain and Data Sciences.

Big data [Epub ahead of print].

Brain scientists are now capable of collecting more data in a single experiment than researchers a generation ago might have collected over an entire career. Indeed, the brain itself seems to thirst for more and more data. Such digital information not only comprises individual studies but is also increasingly shared and made openly available for secondary, confirmatory, and/or combined analyses. Numerous web resources now exist containing data across spatiotemporal scales. Data processing workflow technologies running via cloud-enabled computing infrastructures allow for large-scale processing. Such a move toward greater openness is fundamentally changing how brain science results are communicated and linked to available raw data and processed results. Ethical, professional, and motivational issues challenge the whole-scale commitment to data-driven neuroscience. Nevertheless, fueled by government investments into primary brain data collection coupled with increased sharing and community pressure challenging the dominant publishing model, large-scale brain and data science is here to stay.

RevDate: 2020-11-20

Giardini ME, IAT Livingstone (2020)

Extending the Reach and Task-Shifting Ophthalmology Diagnostics Through Remote Visualisation.

Advances in experimental medicine and biology, 1260:161-174.

Driven by the global increase in the size and median age of the world population, sight loss is becoming a major public health challenge. Furthermore, the increased survival of premature neonates in low- and middle-income countries is causing an increase in developmental paediatric ophthalmic disease. Finally, there is an ongoing change in health-seeking behaviour worldwide, with consequent demand for increased access to healthcare, including ophthalmology. There is therefore the need to maximise the reach of resource-limited ophthalmology expertise in the context of increasing demand. Yet, ophthalmic diagnostics critically relies on visualisation, through optical imaging, of the front and of the back of the eye, and teleophthalmology, the remote visualisation of diagnostic images, shows promise to offer a viable solution. In this chapter, we first explore the strategies at the core of teleophthalmology and, in particular, real-time vs store-and-forward remote visualisation techniques, including considerations on suitability for different tasks and environments. We then introduce the key technologies suitable for teleophthalmology: anterior segment imaging, posterior segment imaging (retinal imaging) and, briefly, radiographic/tomographic techniques. We highlight enabling factors, such as high-resolution handheld imaging, high data rate mobile transmission, cloud storage and computing, 3D printing and other rapid fabrication technologies and patient and healthcare system acceptance of remote consultations. We then briefly discuss four canonical implementation settings, namely, national service provision integration, field and community screening, optometric decision support and virtual clinics, giving representative examples. We conclude with considerations on the outlook of the field, in particular, on artificial intelligence and on robotic actuation of the patient end point as a complement to televisualisation.

RevDate: 2020-11-20

Tsai VF, Zhuang B, Pong YH, et al (2020)

Web- and Artificial Intelligence-Based Image Recognition For Sperm Motility Analysis: Verification Study.

JMIR medical informatics, 8(11):e20031 pii:v8i11e20031.

BACKGROUND: Human sperm quality fluctuates over time. Therefore, it is crucial for couples preparing for natural pregnancy to monitor sperm motility.

OBJECTIVE: This study verified the performance of an artificial intelligence-based image recognition and cloud computing sperm motility testing system (Bemaner, Createcare) composed of microscope and microfluidic modules and designed to adapt to different types of smartphones.

METHODS: Sperm videos were captured and uploaded to the cloud with an app. Analysis of sperm motility was performed by an artificial intelligence-based image recognition algorithm then results were displayed. According to the number of motile sperm in the vision field, 47 (deidentified) videos of sperm were scored using 6 grades (0-5) by a male-fertility expert with 10 years of experience. Pearson product-moment correlation was calculated between the grades and the results (concentration of total sperm, concentration of motile sperm, and motility percentage) computed by the system.

RESULTS: Good correlation was demonstrated between the grades and results computed by the system for concentration of total sperm (r=0.65, P<.001), concentration of motile sperm (r=0.84, P<.001), and motility percentage (r=0.90, P<.001).

CONCLUSIONS: This smartphone-based sperm motility test (Bemaner) accurately measures motility-related parameters and could potentially be applied toward the following fields: male infertility detection, sperm quality test during preparation for pregnancy, and infertility treatment monitoring. With frequent at-home testing, more data can be collected to help make clinical decisions and to conduct epidemiological research.
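The Pearson product-moment correlation reported in the RESULTS is a standard quantity and can be reproduced with scipy on any comparable grade/measurement pairing; the values in the sketch below are illustrative placeholders, not data from the study.

    # Minimal sketch of the grade-vs-measurement correlation described above.
    # The numbers are illustrative placeholders, not data from the study.
    from scipy.stats import pearsonr

    expert_grades    = [0, 1, 1, 2, 3, 3, 4, 4, 5, 5]            # 0-5 motility grades
    motility_percent = [2, 8, 12, 25, 40, 45, 60, 66, 85, 90]    # hypothetical system output (%)

    r, p_value = pearsonr(expert_grades, motility_percent)
    print(f"Pearson r = {r:.2f}, P = {p_value:.3g}")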

RevDate: 2020-11-21

Choi JH, Kim T, Jung J, et al (2020)

Fully automated web-based tool for identifying regulatory hotspots.

BMC genomics, 21(Suppl 10):616.

BACKGROUND: Regulatory hotspots are genetic variations that may regulate the expression levels of many genes. It has been of great interest to find those hotspots utilizing expression quantitative trait locus (eQTL) analysis. However, it has been reported that many of the findings are spurious hotspots induced by various unknown confounding factors. Recently, methods utilizing complicated statistical models have been developed that successfully identify genuine hotspots. Next-generation Intersample Correlation Emended (NICE) is one of the methods that show high sensitivity and low false-discovery rate in finding regulatory hotspots. Even though the methods successfully find genuine hotspots, they have not been widely used due to their non-user-friendly interfaces and complex running processes. Furthermore, most of the methods are impractical due to their prohibitively high computational complexity.

RESULTS: To overcome the limitations of existing methods, we developed a fully automated web-based tool, referred to as NICER (NICE Renew), which is based on the NICE program. First, we dramatically reduced the running and installation burden of NICE. Second, we significantly reduced running time by incorporating multi-processing. Third, besides our web-based NICER, users can use NICER on Google Compute Engine and can readily install and run the NICER web service on their local computers. Finally, we provide different input formats and visualization tools to show results. Utilizing a yeast dataset, we show that NICER can be successfully used in an eQTL analysis to identify many genuine regulatory hotspots, more than half of which were previously reported elsewhere.

CONCLUSIONS: Even though many hotspot analysis tools have been proposed, they have not been widely used for many practical reasons. NICER is a fully automated web-based solution for eQTL mapping and regulatory hotspot analysis. NICER provides a user-friendly interface and has made hotspot analysis more viable by reducing the running time significantly. We believe that NICER will become the method of choice for increasing the power of eQTL hotspot analysis.

RevDate: 2020-12-01

Sadique KM, Rahmani R, P Johannesson (2020)

IMSC-EIoTD: Identity Management and Secure Communication for Edge IoT Devices.

Sensors (Basel, Switzerland), 20(22):.

The Internet of Things (IoT) will connect several billion devices to the Internet to enhance human society as well as to improve the quality of living. A huge number of sensors, actuators, gateways, servers, and related end-user applications will be connected to the Internet. All these entities require identities to communicate with each other. The communicating devices may be mobile, and currently the main identity solution is IP-based identity management, which is not suitable for the authentication and authorization of heterogeneous IoT devices. Devices and applications sometimes need to communicate in real time and make decisions within very short times. Most of the recently proposed solutions for identity management are cloud-based, and such cloud-based identity management solutions are not feasible for heterogeneous IoT devices. In this paper, we have proposed an edge-fog-based decentralized identity management and authentication solution for IoT devices (IoTD) and edge IoT gateways (EIoTG). We have also presented a secure communication protocol for communication between edge IoT devices and edge IoT gateways. The proposed security protocols are verified using the Scyther formal verification tool, a popular tool for automated verification of security protocols. The proposed model is specified using the PROMELA language, and the SPIN model checker is used to confirm the specification of the proposed model. The results show different message flows without any error.

RevDate: 2020-11-29
CmpDate: 2020-11-19

Liu H, Li S, W Sun (2020)

Resource Allocation for Edge Computing without Using Cloud Center in Smart Home Environment: A Pricing Approach.

Sensors (Basel, Switzerland), 20(22):.

Smart homes have recently become an important part of home infrastructure. However, most smart home applications are not interconnected and remain isolated. They use a cloud center as the control platform, which increases the risk of link congestion and data security breaches. Thus, smart homes based on edge computing without a cloud center are becoming an important research area. In this paper, we assume that all applications in a smart home environment are composed of edge nodes and users. In order to maximize the utility of users, we assume that all users and edge nodes are placed in a market and formulate a pricing-based resource allocation model with utility maximization. We apply the Lagrangian method to analyze the model, so that an edge node (the provider in the market) allocates its resources to a user (the customer in the market) based on the prices of resources and a utility related to the user's preferences. To obtain the optimal resource allocation, we propose a pricing-based resource allocation algorithm using a low-pass filtering scheme and confirm, through numerical examples, that the proposed algorithm can achieve an optimum within reasonable convergence times.
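The price-driven allocation idea can be illustrated with a generic dual (gradient) price update for a single edge node and log-utility users; this is only a sketch of the general mechanism, with made-up weights and capacity, and it omits the paper's specific utility model and low-pass filtering details.

    # Generic price-based allocation sketch (not the paper's exact model):
    # users with log utilities w_i*log(x_i) buy resources from one edge node of
    # capacity C; the node adjusts its price until total demand matches supply.
    import numpy as np

    w = np.array([1.0, 2.0, 3.0])   # hypothetical user preference weights
    C = 10.0                        # hypothetical edge-node resource capacity
    price, step = 1.0, 0.05

    for _ in range(2000):
        demand = w / price                           # utility-maximizing demand per user
        excess = demand.sum() - C
        price = max(price + step * excess, 1e-6)     # gradient (dual) price update

    print("price:", round(price, 4), "allocation:", np.round(w / price, 3))
    # At equilibrium the allocation is proportional to the weights and sums to C.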

RevDate: 2020-12-01

Singh P, R Kaur (2020)

An integrated fog and Artificial Intelligence smart health framework to predict and prevent COVID-19.

Global transitions, 2:283-292.

COVID-19 is spreading at a rapid rate on almost all continents of the world, and the people it has already affected continue to spread it day by day. Because of its communicable behaviour, it is essential to alert nearby people so that they can take precautions. As of May 2020, no vaccine was available for the treatment of COVID-19, but existing technologies can be used to minimize its effect. Cloud/fog computing could be used to monitor and control this rapidly spreading infection in a cost-effective and time-saving manner. To strengthen COVID-19 patient prediction, Artificial Intelligence (AI) can be integrated with cloud/fog computing for practical solutions. In this paper, a fog-assisted Internet of Things-based quality-of-service framework is presented to prevent and protect against COVID-19. It provides real-time processing of users' health data to predict COVID-19 infection from observed symptoms and immediately generates an emergency alert, medical reports, and significant precautions for the user, their guardian, and doctors/experts. It collects sensitive information from hospitals/quarantine shelters through patients' IoT devices in order to take the necessary actions/decisions. Further, it generates an alert message to government health agencies for controlling the outbreak of the illness and taking quick and timely action.

RevDate: 2020-12-01

Alanazi SA, Kamruzzaman MM, Alruwaili M, et al (2020)

Measuring and Preventing COVID-19 Using the SIR Model and Machine Learning in Smart Health Care.

Journal of healthcare engineering, 2020:8857346.

COVID-19 presents an urgent global challenge because of its contagious nature, frequently changing characteristics, and the lack of a vaccine or effective medicines. A model for measuring and preventing the continued spread of COVID-19 is urgently required to provide smart health care services. This requires using advanced intelligent computing such as artificial intelligence, machine learning, deep learning, cognitive computing, cloud computing, fog computing, and edge computing. This paper proposes a model for predicting COVID-19 using the SIR model and machine learning for smart health care and the well-being of the citizens of KSA. Knowing the number of susceptible, infected, and recovered cases each day is critical for mathematical modeling to be able to identify the behavioral effects of the pandemic. It forecasts the situation for the upcoming 700 days. The proposed system predicts whether COVID-19 will spread in the population or die out in the long run. Mathematical analysis and simulation results are presented here as a means to forecast the progress of the outbreak and its possible end for three types of scenarios: "no actions," "lockdown," and "new medicines." The effect of interventions like lockdown and new medicines is compared with the "no actions" scenario. The lockdown case delays the peak point by decreasing the infection and affects the area equality rule of the infected curves. On the other hand, new medicines have a significant impact on the infected curve by decreasing the number of infected people over time. Available forecast data on COVID-19 using simulations predict that the highest level of cases might occur between 15 and 30 November 2020. Simulation data suggest that the virus might be fully under control only after June 2021. The reproductive rate shows that measures such as government lockdowns and isolation of individuals are not enough to stop the pandemic. This study recommends that authorities should, as soon as possible, apply a strict long-term containment strategy to reduce the epidemic size successfully.
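The SIR machinery referred to above is a standard compartmental model; the sketch below integrates it with scipy over a 700-day horizon using illustrative parameters only, not the values fitted for KSA in the study.

    # Minimal SIR sketch with illustrative parameters (not the study's fitted values).
    import numpy as np
    from scipy.integrate import solve_ivp

    N, beta, gamma = 1_000_000, 0.25, 0.1    # population, contact rate, recovery rate

    def sir(t, y):
        S, I, R = y
        return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

    sol = solve_ivp(sir, (0, 700), [N - 100, 100, 0], t_eval=np.arange(0, 701))
    peak_day = sol.t[np.argmax(sol.y[1])]
    print(f"Peak infections around day {peak_day:.0f}: {sol.y[1].max():.0f} cases")
    # Lowering beta (e.g., a lockdown) delays and flattens this peak, as noted above.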

RevDate: 2020-12-01

Gorgulla C, PadmanabhaDas K, Leigh KE, et al (2020)

A Multi-Pronged Approach Targeting SARS-CoV-2 Proteins Using Ultra-Large Virtual Screening.

ChemRxiv : the preprint server for chemistry.

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), previously known as 2019 novel coronavirus (2019-nCoV), has spread rapidly across the globe, creating an unparalleled global health burden and spurring a deepening economic crisis. As of July 7th, 2020, almost seven months into the outbreak, there are no approved vaccines and few treatments available. Developing drugs that target multiple points in the viral life cycle could serve as a strategy to tackle the current as well as future coronavirus pandemics. Here we leverage the power of our recently developed in silico screening platform, VirtualFlow, to identify inhibitors that target SARS-CoV-2. VirtualFlow is able to efficiently harness the power of computing clusters and cloud-based computing platforms to carry out ultra-large scale virtual screens. In this unprecedented structure-based multi-target virtual screening campaign, we have used VirtualFlow to screen an average of approximately 1 billion molecules against each of 40 different target sites on 17 different potential viral and host targets in the cloud. In addition to targeting the active sites of viral enzymes, we also target critical auxiliary sites such as functionally important protein-protein interaction interfaces. This multi-target approach not only increases the likelihood of finding a potent inhibitor, but could also help identify a collection of anti-coronavirus drugs that would retain efficacy in the face of viral mutation. Drugs belonging to different regimen classes could be combined to develop possible combination therapies, and top hits that bind at highly conserved sites would be potential candidates for further development as coronavirus drugs. Here, we present the top 200 in silico hits for each target site. While in-house experimental validation of some of these compounds is currently underway, we want to make this array of potential inhibitor candidates available to researchers worldwide in consideration of the pressing need for fast-tracked drug development.

RevDate: 2020-11-17

Hesselmann G (2020)

No conclusive evidence that difficult general knowledge questions cause a "Google Stroop effect". A replication study.

PeerJ, 8:e10325.

Access to the digital "all-knowing cloud" has become an integral part of our daily lives. It has been suggested that the increasing offloading of information and information processing services to the cloud will alter human cognition and metacognition in the short and long term. A much-cited study published in Science in 2011 provided first behavioral evidence for such changes in human cognition. Participants had to answer difficult trivia questions, and subsequently showed longer response times in a variant of the Stroop task with internet-related words ("Google Stroop effect"). The authors of this study concluded that the concept of the Internet is automatically activated in situations where information is missing (e.g., because we might feel the urge to "google" the information). However, the "Google Stroop effect" could not be replicated in two recent replication attempts as part of a large replicability project. After the failed replication was published in 2018, the first author of the original study pointed out some problems with the design of the failed replication. In our study, we therefore aimed to replicate the "Google Stroop effect" with a research design closer to the original experiment. Our results revealed no conclusive evidence in favor of the notion that the concept of the Internet or internet access (via computers or smartphones) is automatically activated when participants are faced with hard trivia questions. We provide recommendations for follow-up research.

RevDate: 2020-11-17

Guerra-Assunção JA, Conde L, Moghul I, et al (2020)

GenomeChronicler: The Personal Genome Project UK Genomic Report Generator Pipeline.

Frontiers in genetics, 11:518644.

In recent years, there has been a significant increase in whole genome sequencing data of individual genomes produced by research projects as well as direct to consumer service providers. While many of these sources provide their users with an interpretation of the data, there is a lack of free, open tools for generating reports exploring the data in an easy to understand manner. GenomeChronicler was developed as part of the Personal Genome Project UK (PGP-UK) to address this need. PGP-UK provides genomic, transcriptomic, epigenomic and self-reported phenotypic data under an open-access model with full ethical approval. As a result, the reports generated by GenomeChronicler are intended for research purposes only and include information relating to potentially beneficial and potentially harmful variants, but without clinical curation. GenomeChronicler can be used with data from whole genome or whole exome sequencing, producing a genome report containing information on variant statistics, ancestry and known associated phenotypic traits. Example reports are available from the PGP-UK data page (personalgenomes.org.uk/data). The objective of this method is to leverage existing resources to find known phenotypes associated with the genotypes detected in each sample. The provided trait data is based primarily upon information available in SNPedia, but also collates data from ClinVar, GETevidence, and gnomAD to provide additional details on potential health implications, presence of genotype in other PGP participants and population frequency of each genotype. The analysis can be run in a self-contained environment without requiring internet access, making it a good choice for cases where privacy is essential or desired: any third party project can embed GenomeChronicler within their off-line safe-haven environments. GenomeChronicler can be run for one sample at a time, or in parallel making use of the Nextflow workflow manager. The source code is available from GitHub (https://github.com/PGP-UK/GenomeChronicler), container recipes are available for Docker and Singularity, as well as a pre-built container from SingularityHub (https://singularity-hub.org/collections/3664) enabling easy deployment in a variety of settings. Users without access to computational resources to run GenomeChronicler can access the software from the Lifebit CloudOS platform (https://lifebit.ai/cloudos) enabling the production of reports and variant calls from raw sequencing data in a scalable fashion.

RevDate: 2020-11-17

Kirkland P, Di Caterina G, Soraghan J, et al (2020)

Perception Understanding Action: Adding Understanding to the Perception Action Cycle With Spiking Segmentation.

Frontiers in neurorobotics, 14:568319.

Traditionally the Perception Action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low latency reactive system within a low Size, Weight and Power (SWaP) package. However, within complex scenarios, this method can lack contextual understanding about the scene, such as object recognition-based tracking or system attention. Object detection, identification and tracking along with semantic segmentation and attention are all modern computer vision tasks in which Convolutional Neural Networks (CNN) have shown significant success, although such networks often have a large computational overhead and power requirements, which are not ideal in smaller robotics tasks. Furthermore, cloud computing and massively parallel processing like in Graphic Processing Units (GPUs) are outside the specification of many tasks due to their respective latency and SWaP constraints. In response to this, Spiking Convolutional Neural Networks (SCNNs) look to provide the feature extraction benefits of CNNs, while maintaining low latency and power overhead thanks to their asynchronous spiking event-based processing. A novel Neuromorphic Perception Understanding Action (PUA) system is presented, that aims to combine the feature extraction benefits of CNNs with low latency processing of SCNNs. The PUA utilizes a Neuromorphic Vision Sensor for Perception that facilitates asynchronous processing within a Spiking fully Convolutional Neural Network (SpikeCNN) to provide semantic segmentation and Understanding of the scene. The output is fed to a spiking control system providing Actions. With this approach, the aim is to bring features of deep learning into the lower levels of autonomous robotics, while maintaining a biologically plausible STDP rule throughout the learned encoding part of the network. The network will be shown to provide a more robust and predictable management of spiking activity with an improved thresholding response. The reported experiments show that this system can deliver robust results of over 96 and 81% for accuracy and Intersection over Union, ensuring such a system can be successfully used within object recognition, classification and tracking problem. This demonstrates that the attention of the system can be tracked accurately, while the asynchronous processing means the controller can give precise track updates with minimal latency.

RevDate: 2020-12-01

Hamdan S, Ayyash M, S Almajali (2020)

Edge-Computing Architectures for Internet of Things Applications: A Survey.

Sensors (Basel, Switzerland), 20(22):.

The rapid growth of the Internet of Things (IoT) applications and their interference with our daily life tasks have led to a large number of IoT devices and enormous sizes of IoT-generated data. The resources of IoT devices are limited; therefore, the processing and storing IoT data in these devices are inefficient. Traditional cloud-computing resources are used to partially handle some of the IoT resource-limitation issues; however, using the resources in cloud centers leads to other issues, such as latency in time-critical IoT applications. Therefore, edge-cloud-computing technology has recently evolved. This technology allows for data processing and storage at the edge of the network. This paper studies, in-depth, edge-computing architectures for IoT (ECAs-IoT), and then classifies them according to different factors such as data placement, orchestration services, security, and big data. Besides, the paper studies each architecture in depth and compares them according to various features. Additionally, ECAs-IoT is mapped according to two existing IoT layered models, which helps in identifying the capabilities, features, and gaps of every architecture. Moreover, the paper presents the most important limitations of existing ECAs-IoT and recommends solutions to them. Furthermore, this survey details the IoT applications in the edge-computing domain. Lastly, the paper recommends four different scenarios for using ECAs-IoT by IoT applications.

RevDate: 2020-11-19

LaRochelle EPM, BW Pogue (2020)

Theoretical lateral and axial sensitivity limits and choices of molecular reporters for Cherenkov-excited luminescence in tissue during x-ray beam scanning.

Journal of biomedical optics, 25(11):.

PURPOSE: Unlike fluorescence imaging utilizing an external excitation source, Cherenkov emissions and Cherenkov-excited luminescence occur within a medium when irradiated with high-energy x-rays. Methods to improve the understanding of the lateral spread and axial depth distribution of these emissions are needed as an initial step to improve the overall system resolution.

METHODS: Monte Carlo simulations were developed to investigate the lateral spread of thin sheets of high-energy sources and compared to experimental measurements of similar sources in water. Additional simulations of a multilayer skin model were used to investigate the limits of detection using both 6- and 18-MV x-ray sources with fluorescence excitation for inclusion depths up to 1 cm.

RESULTS: Simulations comparing the lateral spread of high-energy sources show approximately 100 × higher optical yield from electrons than photons, although electrons showed a larger penumbra in both the simulations and experimental measurements. Cherenkov excitation has a roughly inverse wavelength squared dependence in intensity but is largely redshifted in excitation through any distance of tissue. The calculated emission spectra in tissue were convolved with a database of luminescent compounds to produce a computational ranking of potential Cherenkov-excited luminescence molecular contrast agents.

CONCLUSIONS: Models of thin x-ray and electron sources were compared with experimental measurements, showing similar trends in energy and source type. Surface detection of Cherenkov-excited luminescence appears to be limited by the mean free path of the luminescence emission, where for the given simulation only 2% of the inclusion emissions reached the surface from a depth of 7 mm in a multilayer tissue model.
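The roughly inverse-wavelength-squared intensity dependence mentioned in the results follows from the Frank-Tamm scaling of Cherenkov emission; the short sketch below only tabulates that relative spectral shape and ignores the tissue attenuation and redshift that the paper models.

    # Relative Cherenkov photon yield per unit wavelength (~1/lambda^2 scaling only;
    # tissue absorption and scattering, which redshift the spectrum, are ignored here).
    import numpy as np

    wavelengths_nm = np.arange(400, 801, 100)
    relative_yield = 1.0 / wavelengths_nm.astype(float) ** 2
    relative_yield /= relative_yield[0]          # normalise to the 400 nm bin

    for lam, y in zip(wavelengths_nm, relative_yield):
        print(f"{lam} nm: {y:.2f} of the 400 nm yield")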

RevDate: 2020-11-13

Zasada SJ, Wright DW, PV Coveney (2020)

Large-scale binding affinity calculations on commodity compute clouds.

Interface focus, 10(6):20190133.

In recent years, it has become possible to calculate binding affinities of compounds bound to proteins via rapid, accurate, precise and reproducible free energy calculations. This is imperative in drug discovery as well as personalized medicine. This approach is based on molecular dynamics (MD) simulations and draws on sequence and structural information of the protein and compound concerned. Free energies are determined by ensemble averages of many MD replicas, each of which requires hundreds of cores and/or GPU accelerators, which are now available on commodity cloud computing platforms; there are also requirements for initial model building and subsequent data analysis stages. To automate the process, we have developed a workflow known as the binding affinity calculator. In this paper, we focus on the software infrastructure and interfaces that we have developed to automate the overall workflow and execute it on commodity cloud platforms, in order to reliably predict their binding affinities on time scales relevant to the domains of application, and illustrate its application to two free energy methods.

RevDate: 2020-11-27

Jeon S, Seo J, Kim S, et al (2020)

Proposal and Assessment of a De-Identification Strategy to Enhance Anonymity of the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) in a Public Cloud-Computing Environment: Anonymization of Medical Data Using Privacy Models.

Journal of medical Internet research, 22(11):e19597 pii:v22i11e19597.

BACKGROUND: De-identifying personal information is critical when using personal health data for secondary research. The Observational Medical Outcomes Partnership Common Data Model (CDM), defined by the nonprofit organization Observational Health Data Sciences and Informatics, has been gaining attention for its use in the analysis of patient-level clinical data obtained from various medical institutions. When analyzing such data in a public environment such as a cloud-computing system, an appropriate de-identification strategy is required to protect patient privacy.

OBJECTIVE: This study proposes and evaluates a de-identification strategy that is comprised of several rules along with privacy models such as k-anonymity, l-diversity, and t-closeness. The proposed strategy was evaluated using the actual CDM database.

METHODS: The CDM database used in this study was constructed by the Anam Hospital of Korea University. Analysis and evaluation were performed using the ARX anonymizing framework in combination with the k-anonymity, l-diversity, and t-closeness privacy models.

RESULTS: The CDM database, which was constructed according to the rules established by Observational Health Data Sciences and Informatics, exhibited a low risk of re-identification: The highest re-identifiable record rate (11.3%) in the dataset was exhibited by the DRUG_EXPOSURE table, with a re-identification success rate of 0.03%. However, because all tables include at least one "highest risk" value of 100%, suitable anonymizing techniques are required; moreover, the CDM database preserves the "source values" (raw data), a combination of which could increase the risk of re-identification. Therefore, this study proposes an enhanced strategy to de-identify the source values to significantly reduce not only the highest risk in the k-anonymity, l-diversity, and t-closeness privacy models but also the overall possibility of re-identification.

CONCLUSIONS: Our proposed de-identification strategy effectively enhanced the privacy of the CDM database, thereby encouraging clinical research involving multiple centers.
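The study's evaluation used the ARX framework, a Java toolkit; as a language-neutral illustration of the underlying k-anonymity criterion, the pandas sketch below finds the smallest quasi-identifier equivalence class in a toy table, since a table is k-anonymous when every quasi-identifier combination occurs at least k times. The column names and rows are invented for illustration.

    # Toy k-anonymity check: the table is k-anonymous w.r.t. the chosen
    # quasi-identifiers if its smallest equivalence class contains >= k rows.
    # Illustrative data only; the study itself used the ARX framework.
    import pandas as pd

    df = pd.DataFrame({
        "age_band":  ["30-39", "30-39", "30-39", "40-49", "40-49"],
        "zip3":      ["021",   "021",   "021",   "100",   "100"],
        "diagnosis": ["E11",   "E11",   "I10",   "E11",   "I10"],   # sensitive attribute
    })

    quasi_identifiers = ["age_band", "zip3"]
    class_sizes = df.groupby(quasi_identifiers).size()
    k = class_sizes.min()
    print(f"smallest equivalence class = {k} -> the table is {k}-anonymous")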

RevDate: 2020-11-14

Cecilia JM, Cano JC, Morales-García J, et al (2020)

Evaluation of Clustering Algorithms on GPU-Based Edge Computing Platforms.

Sensors (Basel, Switzerland), 20(21):.

The Internet of Things (IoT) is becoming a new socioeconomic revolution in which data and immediacy are the main ingredients. IoT generates large datasets on a daily basis, but much of it is currently considered "dark data", i.e., data generated but never analyzed. The efficient analysis of this data is mandatory to create the next generation of intelligent IoT applications that benefit society. Artificial Intelligence (AI) techniques are very well suited to identifying hidden patterns and correlations in this data deluge. In particular, clustering algorithms are of the utmost importance for performing exploratory data analysis to identify sets (a.k.a. clusters) of similar objects. Clustering algorithms are computationally heavy workloads and are typically executed on high-performance computing (HPC) clusters, especially when dealing with large datasets. This execution on HPC infrastructures is an energy-hungry procedure with additional issues, such as high-latency communications or privacy. Edge computing, a recently proposed paradigm that enables lightweight computation at the edge of the network, addresses these issues. In this paper, we provide an in-depth analysis of emergent edge computing architectures that include low-power Graphics Processing Units (GPUs) to speed up these workloads. Our analysis includes performance and power consumption figures of Nvidia's latest AGX Xavier to compare the energy-performance ratio of these low-cost platforms with a high-performance cloud-based counterpart version. Three different clustering algorithms (i.e., k-means, Fuzzy Minimals (FM), and Fuzzy C-Means (FCM)) are designed to be optimally executed on edge and cloud platforms, showing a speed-up factor of up to 11× for the GPU code compared to sequential counterpart versions on the edge platforms and energy savings of up to 150% between the edge computing and HPC platforms.
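As a point of reference for the first of the three algorithms evaluated above, the sketch below runs a plain CPU k-means from scikit-learn on synthetic data; the GPU and fuzzy-clustering implementations benchmarked in the paper are not reproduced here.

    # Minimal k-means example on synthetic data (CPU, scikit-learn); the paper's
    # GPU/edge implementations and fuzzy variants are not reproduced here.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=10_000, centers=3, n_features=8, random_state=0)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("inertia:", round(km.inertia_, 1))
    print("cluster sizes:", np.bincount(km.labels_))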

RevDate: 2020-11-14

Ghazal M, Basmaji T, Yaghi M, et al (2020)

Cloud-Based Monitoring of Thermal Anomalies in Industrial Environments Using AI and the Internet of Robotic Things.

Sensors (Basel, Switzerland), 20(21):.

Recent advancements in cloud computing, artificial intelligence, and the internet of things (IoT) create new opportunities for autonomous monitoring of industrial environments. Nevertheless, detecting anomalies in harsh industrial settings remains challenging. This paper proposes an edge-fog-cloud architecture with mobile IoT edge nodes carried on autonomous robots for thermal anomaly detection in aluminum factories. We use companion drones as fog nodes to deliver first-response services and a cloud back-end for thermal anomaly analysis. We also propose a self-driving deep learning architecture and a thermal anomaly detection and visualization algorithm. Our results show that our robot surveyors are low-cost, deliver reduced response times, and detect anomalies more accurately than human surveyors or fixed IoT nodes monitoring the same industrial area. Our self-driving architecture has a root mean square error of 0.19, comparable to VGG-19, with significantly reduced complexity and three times the frame rate at 60 frames per second. Our thermal-to-visual registration algorithm maximizes mutual information in the image-gradient domain while adapting to different resolutions and camera frame rates.

RevDate: 2020-11-14

Kołakowska A, Szwoch W, M Szwoch (2020)

A Review of Emotion Recognition Methods Based on Data Acquired via Smartphone Sensors.

Sensors (Basel, Switzerland), 20(21):.

In recent years, emotion recognition algorithms have achieved high efficiency, allowing the development of various affective and affect-aware applications. This advancement has taken place mainly in the environment of personal computers offering the appropriate hardware and sufficient power to process complex data from video, audio, and other channels. However, the increase in computing and communication capabilities of smartphones, the variety of their built-in sensors, as well as the availability of cloud computing services have made them an environment in which the task of recognising emotions can be performed at least as effectively. This is possible and particularly important due to the fact that smartphones and other mobile devices have become the main computing devices used by most people. This article provides a systematic overview of publications from the last 10 years related to emotion recognition methods using smartphone sensors. The characteristics of the most important sensors in this respect are presented, along with the methods applied to extract informative features from the data read from these input channels. Then, various machine learning approaches implemented to recognise emotional states are described.

RevDate: 2020-11-12

Wibberg D, Batut B, Belmann P, et al (2019)

The de.NBI / ELIXIR-DE training platform - Bioinformatics training in Germany and across Europe within ELIXIR.

F1000Research, 8:.

The German Network for Bioinformatics Infrastructure (de.NBI) is a national and academic infrastructure funded by the German Federal Ministry of Education and Research (BMBF). The de.NBI provides (i) service, (ii) training, and (iii) cloud computing to users in life sciences research and biomedicine in Germany and Europe and (iv) fosters the cooperation of the German bioinformatics community with international network structures. The de.NBI members also run the German node (ELIXIR-DE) within the European ELIXIR infrastructure. The de.NBI / ELIXIR-DE training platform, also known as special interest group 3 (SIG 3) 'Training & Education', coordinates the bioinformatics training of de.NBI and the German ELIXIR node. The network provides a high-quality, coherent, timely, and impactful training program across its eight service centers. Life scientists learn how to handle and analyze biological big data more effectively by applying tools, standards and compute services provided by de.NBI. Since 2015, more than 300 training courses were carried out with about 6,000 participants and these courses received recommendation rates of almost 90% (status as of July 2020). In addition to face-to-face training courses, online training was introduced on the de.NBI website in 2016 and guidelines for the preparation of e-learning material were established in 2018. In 2016, ELIXIR-DE joined the ELIXIR training platform. Here, the de.NBI / ELIXIR-DE training platform collaborates with ELIXIR in training activities, advertising training courses via TeSS and discussions on the exchange of data for training events essential for quality assessment on both the technical and administrative levels. The de.NBI training program trained thousands of scientists from Germany and beyond in many different areas of bioinformatics.

RevDate: 2020-11-12

Bremer E, Saltz J, JS Almeida (2020)

ImageBox 2 - Efficient and Rapid Access of Image Tiles from Whole-Slide Images Using Serverless HTTP Range Requests.

Journal of pathology informatics, 11:29.

Background: Whole-slide images (WSI) are produced by a high-resolution scanning of pathology glass slides. There are a large number of whole-slide imaging scanners, and the resulting images are frequently larger than 100,000 × 100,000 pixels which typically image 100,000 to one million cells, ranging from several hundred megabytes to many gigabytes in size.

Aims and Objectives: Provide HTTP access over the web to Whole Slide Image tiles that do not have localized tiling servers but only basic HTTP access. Move all image decode and tiling functions to calling agent (ImageBox).

Methods: Current software systems require tiling image servers to be installed on systems providing local disk access to these images. ImageBox2 breaks this requirement by accessing tiles from a remote HTTP source via byte-level HTTP range requests. This method does not require changing the client software, as the operation is relegated to the ImageBox2 server, which is local (or remote) to the client and can access tiles from remote images that have no server of their own, such as Amazon S3-hosted images. That is, it provides a data service on a server that does not need to be managed, the definition of the serverless execution model increasingly favored by cloud computing infrastructure.

Conclusions: The specific methodology described and assessed in this report preserves normal client connection semantics by enabling cloud-friendly tiling, promoting a web of http connected whole-slide images from a wide-ranging number of sources, and providing tiling where local tiling servers would have been otherwise unavailable.
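The core mechanism, a byte-level HTTP range request, can be exercised with any HTTP client; the sketch below fetches an arbitrary byte window from a remote file, exactly as a reader without a dedicated tiling server would. The URL and byte offsets are placeholders, not ImageBox2 endpoints.

    # Fetch one byte window of a remote file via an HTTP Range request.
    # URL and byte offsets are placeholders, not ImageBox2 endpoints.
    import requests

    url = "https://example.org/slides/sample.svs"   # hypothetical WSI location
    start, end = 1_048_576, 1_048_576 + 65_535      # a 64 KiB window

    resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=30)
    if resp.status_code == 206:                     # 206 Partial Content
        tile_bytes = resp.content                   # decoding/tiling happens client-side
        print(f"received {len(tile_bytes)} bytes")
    else:
        print("server did not honour the Range header:", resp.status_code)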

RevDate: 2020-11-09

Bergier I, Papa M, Silva R, et al (2020)

Cloud/edge computing for compliance in the Brazilian livestock supply chain.

The Science of the total environment pii:S0048-9697(20)36807-8 [Epub ahead of print].

Brazil is an important player in the global agribusiness markets, in which grain and beef make up the majority of exports. Barriers to access more valuable sustainable markets emerge from the lack of adequate compliance in supply chains. Here is depicted a mobile application based on cloud/edge computing for the livestock supply chain to circumvent that limitation. The application, called BovChain, is a peer-to-peer (P2P) network connecting landowners and slaughterhouses. The objective of the application is twofold. Firstly, it maximizes sustainable business by reducing transaction costs and by strengthening ties between state-authorized stakeholders. Secondly, it creates metadata useful for digital certification by exploiting CMOS and GPS sensor technologies embedded in low-cost smartphones. Successful declarative transactions in the digital space are recorded as metadata, and the corresponding big data might be valuable for the certification of livestock origin and traceability for sustainability compliance in 'glocal' beef markets.

RevDate: 2020-11-17

Hanif M, Lee C, S Helal (2020)

Predictive topology refinements in distributed stream processing system.

PloS one, 15(11):e0240424 pii:PONE-D-20-03970.

Cloud computing has evolved big data technologies into a consolidated paradigm with SPaaS (Streaming-Processing-as-a-Service). With a number of enterprises offering cloud-based solutions to end-users and other small enterprises, there has been a boom in the volume of data, creating interest from both industry and academia in big data analytics, streaming applications, and social networking applications. As companies shift to cloud-based as-a-service solutions, competition in the market grows. Good quality of service (QoS) is a must for these enterprises as they strive to survive in a competitive environment. However, achieving reasonable QoS goals that meet SLA constraints cost-effectively is challenging due to variation in workload over time. This problem can be solved if the system has the ability to predict the workload for the near future. In this paper, we present a novel topology-refining scheme based on a workload prediction mechanism. Predictions are made through a model based on a combination of SVR, autoregressive, and moving-average models with a feedback mechanism. Our streaming system is designed to increase overall performance by making the topology refining robust to the incoming workload on the fly, while still being able to achieve the QoS goals of SLA constraints. The Apache Flink distributed processing engine is used as a testbed in the paper. The results show that the prediction scheme works well for both workload types, i.e., synthetic as well as real data traces.
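The prediction component combines SVR with autoregressive and moving-average terms plus a feedback loop; the sketch below shows only the SVR-on-lagged-observations part on a synthetic workload trace, with made-up parameters, as a rough indication of how such a predictor is trained.

    # Sketch of SVR-based workload prediction from lagged observations
    # (synthetic trace; the paper's full SVR + AR/MA + feedback model is richer).
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    trace = 100 + 20 * np.sin(np.arange(500) / 20) + rng.normal(0, 3, 500)  # msgs/s

    lags = 10
    X = np.array([trace[i - lags:i] for i in range(lags, len(trace))])
    y = trace[lags:]

    model = SVR(C=10.0, epsilon=0.5).fit(X[:-50], y[:-50])   # hold out the last 50 steps
    pred = model.predict(X[-50:])
    print(f"mean absolute error on held-out window: {np.abs(pred - y[-50:]).mean():.2f} msgs/s")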

RevDate: 2020-11-05

Wang G, Wignall J, Kinard D, et al (2020)

An implementation model for managing cloud-based longitudinal care plans for children with medical complexity.

Journal of the American Medical Informatics Association : JAMIA pii:5956341 [Epub ahead of print].

OBJECTIVE: We aimed to iteratively refine an implementation model for managing cloud-based longitudinal care plans (LCPs) for children with medical complexity (CMC).

MATERIALS AND METHODS: We conducted iterative 1-on-1 design sessions with CMC caregivers (ie, parents/legal guardians) and providers between August 2017 and March 2019. During audio-recorded sessions, we asked participants to walk through role-specific scenarios of how they would create, review, and edit an LCP using a cloud-based prototype, which we concurrently developed. Between sessions, we reviewed audio recordings to identify strategies that would mitigate barriers that participants reported relating to 4 processes for managing LCPs: (1) taking ownership, (2) sharing, (3) reviewing, and (4) editing. Analysis informed iterative implementation model revisions.

RESULTS: We conducted 30 design sessions, with 10 caregivers and 20 providers. Participants emphasized that cloud-based LCPs required a team of owners: the caregiver(s), a caregiver-designated clinician, and a care coordinator. Permission settings would need to include universal accessibility for emergency providers, team-level permission options, and some editing restrictions for caregivers. Notifications to review and edit the LCP should be sent to team members before and after clinic visits and after hospital encounters. Mitigating double documentation barriers would require alignment of data fields between the LCP and electronic health record to maximize interoperability.

DISCUSSION: These findings provide a model for how we may leverage emerging Health Insurance Portability and Accountability Act-compliant cloud computing technologies to support families and providers in comanaging health information for CMC.

CONCLUSIONS: Utilizing these management strategies when implementing cloud-based LCPs has the potential to improve team-based care across settings.

RevDate: 2020-11-05

Long A, Glogowski A, Meppiel M, et al (2020)

The technology behind TB DEPOT: a novel public analytics platform integrating tuberculosis clinical, genomic, and radiological data for visual and statistical exploration.

Journal of the American Medical Informatics Association : JAMIA pii:5956336 [Epub ahead of print].

OBJECTIVE: Clinical research informatics tools are necessary to support comprehensive studies of infectious diseases. The National Institute of Allergy and Infectious Diseases (NIAID) developed the publicly accessible Tuberculosis Data Exploration Portal (TB DEPOT) to address the complex etiology of tuberculosis (TB).

MATERIALS AND METHODS: TB DEPOT displays deidentified patient case data and facilitates analyses across a wide range of clinical, socioeconomic, genomic, and radiological factors. The solution is built using Amazon Web Services cloud-based infrastructure, .NET Core, Angular, Highcharts, R, PLINK, and other custom-developed services. Structured patient data, pathogen genomic variants, and medical images are integrated into the solution to allow seamless filtering across data domains.

RESULTS: Researchers can use TB DEPOT to query TB patient cases, create and save patient cohorts, and execute comparative statistical analyses on demand. The tool supports user-driven data exploration and fulfills the National Institutes of Health's Findable, Accessible, Interoperable, and Reusable (FAIR) principles.

DISCUSSION: TB DEPOT is the first tool of its kind in the field of TB research to integrate multidimensional data from TB patient cases. Its scalable and flexible architectural design has accommodated growth in the data, organizations, types of data, feature requests, and usage. Use of client-side technologies over server-side technologies and prioritizing maintenance have been important lessons learned. Future directions are dynamically prioritized and key functionality is shared through an application programming interface.

CONCLUSION: This paper describes the platform development methodology, resulting functionality, benefits, and technical considerations of a clinical research informatics application to support increased understanding of TB.

RevDate: 2020-11-06

Frontoni E, Romeo L, Bernardini M, et al (2020)

A Decision Support System for Diabetes Chronic Care Models Based on General Practitioner Engagement and EHR Data Sharing.

IEEE journal of translational engineering in health and medicine, 8:3000112.

Objective: Decision support systems (DSS) have been developed and promoted for their potential to improve the quality of health care. However, there is a lack of a common clinical strategy, poor management of clinical resources, and erroneous implementation of preventive medicine. Methods: To overcome this problem, this work proposed an integrated system that relies on the creation and sharing of a database extracted from GPs' Electronic Health Records (EHRs) within the Netmedica Italian (NMI) cloud infrastructure. Although the proposed system is a pilot application specifically tailored to improving chronic Type 2 Diabetes (T2D) care, it could easily be targeted to effectively manage other chronic diseases. The proposed DSS is based on the EHR structure used by GPs in their daily activities, following the most updated guidelines on data protection and sharing. The DSS is equipped with a Machine Learning (ML) method for analyzing the shared EHRs and thus tackling their high variability. A novel set of T2D care-quality indicators is used specifically to determine the economic incentives, and the T2D features are presented as predictors of the proposed ML approach. Results: The EHRs from 41237 T2D patients were analyzed. No additional data collection, with respect to standard clinical practice, was required. The DSS exhibited competitive performance (up to an overall accuracy of 98%±2% and macro-recall of 96%±1%) for classifying chronic care quality across the different follow-up phases. The chronic care quality model led to a significant increase (up to 12%) in the number of T2D patients without complications. For GPs who agreed to use the proposed system, there was an economic incentive, with a further bonus assigned when performance targets were achieved. Conclusions: The care quality evaluation in a clinical use-case scenario demonstrated how the empowerment of GPs through the use of the platform (integrating the proposed DSS), along with the economic incentives, may speed up the improvement of care.

RevDate: 2020-11-14
CmpDate: 2020-11-06

Chukhno O, Chukhno N, Araniti G, et al (2020)

Optimal Placement of Social Digital Twins in Edge IoT Networks.

Sensors (Basel, Switzerland), 20(21): pii:s20216181.

In next-generation Internet of Things (IoT) deployments, every object, such as a wearable device, a smartphone, a vehicle, or even a sensor or an actuator, will be provided with a digital counterpart (twin) with the aim of augmenting the physical object's capabilities and acting on its behalf when interacting with third parties. Moreover, such objects are able to interact and autonomously establish social relationships according to the Social Internet of Things (SIoT) paradigm. In such a context, the goal of this work is to provide an optimal solution for the social-aware placement of IoT digital twins (DTs) at the network edge, with the twofold aim of reducing the latency (i) between physical devices and corresponding DTs for efficient data exchange, and (ii) among DTs of friend devices to speed up the service discovery and chaining procedures across the SIoT network. To this aim, we formulate the problem as a mixed-integer linear programming model taking into account limited computing resources in the edge cloud and social relationships among IoT devices.
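A mixed-integer linear program of this general shape can be written down in a few lines; the toy PuLP model below captures only a bare skeleton (assign each digital twin to exactly one edge node, respect a CPU capacity, minimise a device-to-node latency sum) with invented numbers, and omits the social-relationship latency terms that are central to the paper.

    # Toy MILP: place each digital twin on exactly one edge node, respect node
    # capacity, and minimise device-to-node latency. All numbers are invented;
    # the paper's social-latency terms are omitted.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

    twins, nodes = ["t1", "t2", "t3"], ["n1", "n2"]
    latency = {("t1", "n1"): 2, ("t1", "n2"): 5, ("t2", "n1"): 4,
               ("t2", "n2"): 1, ("t3", "n1"): 3, ("t3", "n2"): 3}   # ms, hypothetical
    cpu_req = {"t1": 2, "t2": 1, "t3": 2}
    cpu_cap = {"n1": 3, "n2": 3}

    x = LpVariable.dicts("place", [(t, n) for t in twins for n in nodes], cat=LpBinary)
    prob = LpProblem("dt_placement", LpMinimize)
    prob += lpSum(latency[t, n] * x[t, n] for t in twins for n in nodes)
    for t in twins:                                  # each twin is placed exactly once
        prob += lpSum(x[t, n] for n in nodes) == 1
    for n in nodes:                                  # node CPU capacity
        prob += lpSum(cpu_req[t] * x[t, n] for t in twins) <= cpu_cap[n]

    prob.solve(PULP_CBC_CMD(msg=0))
    print({t: n for t in twins for n in nodes if x[t, n].value() == 1})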

RevDate: 2020-11-03

Wang YL, Wang F, Shi XX, et al (2020)

Cloud 3D-QSAR: a web tool for the development of quantitative structure-activity relationship models in drug discovery.

Briefings in bioinformatics pii:5934782 [Epub ahead of print].

Effective drug discovery contributes to the treatment of numerous diseases but is limited by high costs and long cycles. The Quantitative Structure-Activity Relationship (QSAR) method was introduced to evaluate the activity of a large number of compounds virtually, reducing the time and labor costs required for chemical synthesis and experimental determination. Hence, this method increases the efficiency of drug discovery. To meet the needs of researchers to utilize this technology, numerous QSAR-related web servers, such as Web-4D-QSAR and DPubChem, have been developed in recent years. However, none of the servers mentioned above can perform a complete QSAR modeling and supply activity prediction functions. We introduce Cloud 3D-QSAR by integrating the functions of molecular structure generation, alignment, molecular interaction field (MIF) computing and results analysis to provide a one-stop solution. We rigidly validated this server, and the activity prediction correlation was R2 = 0.934 in 834 test molecules. The sensitivity, specificity and accuracy were 86.9%, 94.5% and 91.5%, respectively, with AUC = 0.981, AUCPR = 0.971. The Cloud 3D-QSAR server may facilitate the development of good QSAR models in drug discovery. Our server is free and now available at http://chemyang.ccnu.edu.cn/ccb/server/cloud3dQSAR/ and http://agroda.gzu.edu.cn:9999/ccb/server/cloud3dQSAR/.
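The reported sensitivity, specificity, accuracy and AUC are standard classification metrics; given any vector of true activity labels and predicted scores they can be recomputed as in the sketch below, whose values are toy placeholders rather than Cloud 3D-QSAR outputs.

    # Recomputing the reported metric types on toy labels/scores
    # (illustrative values only, not Cloud 3D-QSAR outputs).
    from sklearn.metrics import confusion_matrix, roc_auc_score

    y_true  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
    y_score = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.6, 0.1, 0.85, 0.35]
    y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
          f"acc={accuracy:.2f} AUC={roc_auc_score(y_true, y_score):.2f}")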

RevDate: 2020-11-14
CmpDate: 2020-11-05

Zhao L (2020)

Privacy-Preserving Distributed Analytics in Fog-Enabled IoT Systems.

Sensors (Basel, Switzerland), 20(21):.

The Internet of Things (IoT) has evolved significantly with advances in gathering data that can be extracted to provide knowledge and facilitate decision-making processes. Currently, IoT data analytics faces challenges such as the growing volumes of data collected by IoT devices and the fast-response requirements of time-sensitive applications, which traditional cloud-based solutions are unable to meet due to bandwidth and latency limitations. In this paper, we develop a distributed analytics framework for fog-enabled IoT systems aiming to avoid raw data movement and reduce latency. The distributed framework leverages the computational capacities of all participants, such as edge devices and fog nodes, and allows them to obtain the global optimal solution locally. To further enhance the privacy of data holders in the system, a privacy-preserving protocol is proposed using cryptographic schemes. A security analysis was conducted, and it verified that exact private information about any edge device's raw data would not be inferred by an honest-but-curious neighbor in the proposed secure protocol. In addition, the accuracy of the solution in the secure protocol is unaffected compared to the proposed distributed algorithm without encryption. We further conducted experiments on three case studies: seismic imaging, diabetes progression prediction, and Enron email classification. On the seismic imaging problem, the proposed algorithm can be up to one order of magnitude faster than the benchmarks in reaching the optimal solution. The evaluation results validate the effectiveness of the proposed methodology and demonstrate its potential to be a promising solution for data analytics in fog-enabled IoT systems.

RevDate: 2020-12-01

Li J, Tooth S, Zhang K, et al (2020)

Visualisation of flooding along an unvegetated, ephemeral river using Google Earth Engine: Implications for assessment of channel-floodplain dynamics in a time of rapid environmental change.

Journal of environmental management, 278(Pt 2):111559 pii:S0301-4797(20)31484-5 [Epub ahead of print].

Given rapid environmental change, the development of new, data-driven, interdisciplinary approaches is essential for improving assessment and management of river systems, especially with respect to flooding. In the world's extensive drylands, difficulties in obtaining field observations of major hydrological events mean that remote sensing techniques are commonly used to map river floods and assess flood impacts. Such techniques, however, are dependent on available cloud-free imagery during or immediately after peak discharge, and single images may omit important flood-related hydrogeomorphological events. Here, we combine multiple Landsat images from Google Earth Engine (GEE) with precipitation datasets and high-resolution (<0.65 m) satellite imagery to visualise flooding and assess the associated channel-floodplain dynamics along a 25 km reach of the unvegetated, ephemeral Río Colorado, Bolivia. After cloud and shadow removal, Landsat surface reflectance data were used to calculate the Modified Normalized Difference Water Index (MNDWI) and map flood extents and patterns. From 2004 through 2016, annual flooding area along the narrow (<30 m), shallow (<1.7 m), fine-grained (dominantly silt/clay) channels was positively correlated (R2 = 0.83) with 2-day maximum precipitation totals. Rapid meander bend migration, bank erosion, and frequent overbank flooding was associated with formation of crevasse channels, splays, and headward-eroding channels, and with avulsion (shifting of flow from one channel to another). These processes demonstrate ongoing, widespread channel-floodplain dynamics despite low stream powers and cohesive sediments. Application of our study approaches to other dryland rivers will help generate comparative data on the controls, rates, patterns and timescales of channel-floodplain dynamics under scenarios of climate change and direct human impacts, with potential implications for improved river management.
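The MNDWI used above is the normalized difference of the green and shortwave-infrared (SWIR) bands (Xu, 2006); the numpy sketch below applies that formula to placeholder reflectance arrays and thresholds at zero to flag water pixels, independently of the Google Earth Engine workflow the study relies on.

    # MNDWI = (green - SWIR) / (green + SWIR); values > 0 typically indicate water.
    # Placeholder arrays stand in for Landsat surface-reflectance bands.
    import numpy as np

    green = np.array([[0.10, 0.08], [0.05, 0.12]])   # green-band surface reflectance
    swir  = np.array([[0.03, 0.09], [0.11, 0.04]])   # SWIR-band surface reflectance

    mndwi = (green - swir) / (green + swir + 1e-12)  # tiny term avoids division by zero
    water_mask = mndwi > 0
    print(np.round(mndwi, 2))
    print("water pixels:", int(water_mask.sum()), "of", water_mask.size)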

RevDate: 2020-11-14
CmpDate: 2020-11-03

Fang J, Hu J, Wei J, et al (2020)

An Efficient Resource Allocation Strategy for Edge-Computing Based Environmental Monitoring System.

Sensors (Basel, Switzerland), 20(21):.

Cloud computing and microsensor technology have greatly changed environmental monitoring, but it is difficult for cloud-computing-based monitoring systems to meet the computational demands of finer monitoring granularity and a growing number of monitoring applications. As a novel computing paradigm, edge computing deals with this problem by deploying resources on the edge network. However, the particularities of environmental monitoring applications are ignored by most previous studies. In this paper, we propose a resource allocation algorithm and a task scheduling strategy to reduce the average completion latency of environmental monitoring applications, taking into account the characteristics of environmental monitoring systems and the dependencies among tasks. Simulations are conducted, and the results show that, compared with traditional algorithms and when emergency tasks are considered, the proposed methods decrease the average completion latency by 21.6% in the best scenario.

RevDate: 2020-10-30

Singh K, Singh S, J Malhotra (2020)

Spectral features based convolutional neural network for accurate and prompt identification of schizophrenic patients.

Proceedings of the Institution of Mechanical Engineers. Part H, Journal of engineering in medicine [Epub ahead of print].

Schizophrenia is a severe mental disorder that affects millions of people globally through disturbances in their thinking, feeling and behaviour. In the age of the Internet of Things, assisted by cloud computing and machine learning techniques, computer-aided diagnosis of schizophrenia is essential to give patients an opportunity for a better quality of life. In this context, the present paper proposes a spectral-features-based convolutional neural network (CNN) model for accurate identification of schizophrenic patients using spectral analysis of multichannel EEG signals in real time. The model processes acquired EEG signals with filtering, segmentation and conversion into the frequency domain. The frequency-domain segments are then divided into six distinct spectral bands: delta, theta-1, theta-2, alpha, beta and gamma. Spectral features including mean spectral amplitude, spectral power and the Hjorth descriptors (Activity, Mobility and Complexity) are extracted from each band. These features are independently fed to the proposed spectral-features-based CNN and long short-term memory network (LSTM) models for classification. This work also makes use of raw time-domain and frequency-domain EEG segments for classification using temporal CNN and spectral CNN models of the same architectures, respectively. The overall analysis of the simulation results of all models shows that the proposed spectral-features-based CNN model is an efficient technique for accurate and prompt identification of schizophrenic patients among healthy individuals, with average classification accuracies of 94.08% and 98.56% on two different datasets and an optimally small classification time.
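The Hjorth descriptors named above have standard definitions: Activity is the signal variance, Mobility the square root of the variance ratio of the first derivative to the signal, and Complexity the ratio of the Mobility of the derivative to that of the signal. The numpy function below computes them for one synthetic EEG segment, leaving the band filtering and the CNN/LSTM models aside; the sampling rate is an assumption.

    # Hjorth descriptors for one 1-D EEG segment (standard definitions);
    # band filtering and the CNN/LSTM classifiers are not shown here.
    import numpy as np

    def hjorth(x):
        dx, ddx = np.diff(x), np.diff(np.diff(x))
        activity = np.var(x)
        mobility = np.sqrt(np.var(dx) / activity)
        complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
        return activity, mobility, complexity

    fs = 250                                   # assumed sampling rate in Hz
    t = np.arange(0, 2, 1 / fs)
    segment = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    print("activity, mobility, complexity:", [round(v, 4) for v in hjorth(segment)])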

RevDate: 2020-11-17
CmpDate: 2020-11-17

Romansky RP, IS Noninska (2020)

Challenges of the digital age for privacy and personal data protection.

Mathematical biosciences and engineering : MBE, 17(5):5288-5303.

The digital age can be described as a collection of different technological solutions, such as virtual environments, digital services, intelligent applications, machine learning and knowledge-based systems, that determine the specific characteristics of the contemporary world: globalization, e-communications, information sharing, virtualization, etc. However, there is a risk that the technologies of the digital age will violate basic principles of information security and privacy through unregulated access to information and personal data stored in different nodes of the global network. The goal of the article is to determine some special features of information and personal data protection and to summarise the main challenges of the digital age for users' security and privacy. A brief presentation of the fundamental legislation in the fields of privacy and personal data protection is given in the introduction, followed by a review of related work on the topic. Components of information security for counteracting threats and attacks, and basic principles in the organization of personal data protection, are discussed. The basic challenges of the digital age are summarised by systematizing the negative effects on users' privacy of contemporary technologies such as social computing, cloud services, the Internet of Things, Big Data and Big Data Analytics, and separate requirements for securing participants' privacy based on General Data Protection Regulation principles are formulated.

RevDate: 2020-11-17
CmpDate: 2020-11-17

Zhu Y, Jiang ZP, Mo XH, et al (2020)

A study on the design methodology of TAC3 for edge computing.

Mathematical biosciences and engineering : MBE, 17(5):4406-4421.

Complex application requirements, network data on the order of zettabytes, and tens of billions of connected devices pose serious challenges to the capabilities and security of the three pillars of ICT: computing, network, and storage. Edge computing emerged to address these challenges. Following the design methodology of "description-synthesis-simulation-optimization", TAC3 (Tile-Architecture Cluster Computing Core) is proposed as a lightweight, accelerated ECN (Edge Computing Node). The tile-architecture ECN is designed and simulated through executable description specifications and polymorphous-parallelism DSE (Design Space Exploration). By reasonably configuring the edge computing environment and continually optimizing typical application scenarios, such as convolutional neural networks and image and graphics processing, the challenges of network bandwidth, end-cloud delay and privacy security brought by the massive data of the IoE can be met. The philosophy that "Edge and Cloud complement each other, and Edge and AI energize each other" is expected to become a behavioral principle of the next generation of the IoE.

RevDate: 2020-10-30

Wu Z, Sun J, Zhang Y, et al (2020)

Scheduling-Guided Automatic Processing of Massive Hyperspectral Image Classification on Cloud Computing Architectures.

IEEE transactions on cybernetics, PP: [Epub ahead of print].

The large data volume and high algorithm complexity of hyperspectral image (HSI) problems have posed big challenges for efficient classification of massive HSI data repositories. Recently, cloud computing architectures have become more relevant to address the big computational challenges introduced in the HSI field. This article proposes an acceleration method for HSI classification that relies on scheduling metaheuristics to automatically and optimally distribute the workload of HSI applications across multiple computing resources on a cloud platform. By analyzing the procedure of a representative classification method, we first develop its distributed and parallel implementation based on the MapReduce mechanism on Apache Spark. The subtasks of the processing flow that can be processed in a distributed way are identified as divisible tasks. The optimal execution of this application on Spark is further formulated as a divisible scheduling framework that takes into account both task execution precedences and task divisibility when allocating the divisible and indivisible subtasks onto computing nodes. The formulated scheduling framework is an optimization procedure that searches for optimized task assignments and partition counts for divisible tasks. Two metaheuristic algorithms are developed to solve this divisible scheduling problem. The scheduling results provide an optimized solution to the automatic processing of HSI big data on clouds, improving the computational efficiency of HSI classification by exploring the parallelism during the parallel processing flow. Experimental results demonstrate that our scheduling-guided approach achieves remarkable speedups by facilitating the automatic processing of HSI classification on Spark, and is scalable to the increasing HSI data volume.
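
As an illustration of the MapReduce-style distribution the abstract describes, the following PySpark sketch spreads per-pixel classification of a hyperspectral cube across cluster partitions. The cube layout, the broadcast scikit-learn-style classifier and the fixed partition count are assumptions for illustration; the paper's scheduling metaheuristics, which choose task assignments and partition counts automatically, are not reproduced here.

import numpy as np
from pyspark.sql import SparkSession

def classify_cube(cube, clf, n_partitions=64):
    """Classify every pixel spectrum of an (H, W, bands) cube on Spark.

    'clf' is assumed to be any already-trained classifier exposing
    a scikit-learn-style predict() method.
    """
    spark = SparkSession.builder.appName("hsi-classify").getOrCreate()
    sc = spark.sparkContext

    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)          # flatten to (H*W, bands)
    bc_clf = sc.broadcast(clf)                # ship the model once per executor

    def predict_block(rows):
        # Map step: each partition classifies its own block of spectra.
        block = np.asarray(list(rows))
        if block.size == 0:
            return iter([])
        return iter(bc_clf.value.predict(block))

    labels = (sc.parallelize(pixels.tolist(), n_partitions)
                .mapPartitions(predict_block)
                .collect())                   # gather labels back to the driver
    return np.asarray(labels).reshape(h, w)

In the paper's framework, the choice of partition counts for such divisible tasks is itself an optimization variable; this sketch simply fixes it by hand.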

RevDate: 2020-11-14
CmpDate: 2020-10-29

Krishnamurthi R, Kumar A, Gopinathan D, et al (2020)

An Overview of IoT Sensor Data Processing, Fusion, and Analysis Techniques.

Sensors (Basel, Switzerland), 20(21):.

In the current era of the Internet of Things, the dominant role of sensors and the Internet provides solutions to a wide variety of real-life problems, with applications in smart cities, smart healthcare systems, smart buildings, smart transport and smart environments. However, real-time IoT sensor data pose several challenges, such as a deluge of unclean sensor data and high resource-consumption costs. This paper therefore addresses how to process IoT sensor data, fuse them with other data sources, and analyse them to produce useful insight into hidden data patterns for rapid decision-making. It covers data processing techniques such as data denoising, outlier detection, missing-data imputation and data aggregation. Further, it elaborates on the necessity of data fusion and on various data fusion methods, such as direct fusion, associated feature extraction, and identity-declaration data fusion. The paper also addresses the integration of data analysis with emerging technologies, such as cloud computing, fog computing and edge computing, to meet various challenges in IoT sensor networks and sensor data analysis. In summary, this paper is the first of its kind to present a complete overview of IoT sensor data processing, fusion and analysis techniques.
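
The pre-processing steps surveyed in this paper (denoising, outlier detection, missing-data imputation and aggregation) can be illustrated with a short pandas sketch over a single sensor stream; the column name, thresholds and window sizes below are assumptions for illustration, not values from the paper.

import pandas as pd

def preprocess(readings: pd.DataFrame, value_col="temperature"):
    """Clean one sensor stream; 'readings' is assumed to have a DatetimeIndex."""
    df = readings.sort_index()

    # 1. Denoise with a short rolling median.
    df[value_col] = df[value_col].rolling(5, center=True, min_periods=1).median()

    # 2. Flag outliers with a simple z-score rule and blank them out.
    z = (df[value_col] - df[value_col].mean()) / df[value_col].std()
    df.loc[z.abs() > 3, value_col] = None

    # 3. Impute missing values by time-based interpolation.
    df[value_col] = df[value_col].interpolate(method="time")

    # 4. Aggregate to one value per minute for downstream analysis.
    return df[value_col].resample("1min").mean()

A real pipeline would apply such steps per sensor before the fusion and analysis stages the paper goes on to discuss.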

RevDate: 2020-11-14
CmpDate: 2020-10-29

Biswash SK, DNK Jayakody (2020)

A Fog Computing-Based Device-Driven Mobility Management Scheme for 5G Networks.

Sensors (Basel, Switzerland), 20(21):.

The fog computing-based device-driven network is a promising solution for high data rates in modern cellular networks. It is a unique framework that reduces generated data, data-management overheads and network scalability challenges, and helps provide a pervasive computation environment for real-time network applications, in which mobile data is readily available and accessible to nearby fog servers. It explores a new dimension of the next-generation network called fog networks. Fog networks are a complementary part of the cloud network environment. The proposed network architecture is part of a newly emerged paradigm that extends the network computing infrastructure within the device-driven 5G communication system. This work explores a new design of the fog computing framework to support device-driven communication and achieve better Quality of Service (QoS) and Quality of Experience (QoE). In particular, it examines the potential of the fog computing orchestration framework and how it can be customized for next-generation cellular communication systems. A mobility management procedure for fog networks is then proposed, considering both static and dynamic mobile nodes. Compared with legacy cellular networks, the proposed approach shows lower energy consumption, delay, latency and signaling cost than LTE/LTE-A networks.


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.


This is a must-read book for anyone with an interest in invasion biology. The full title of the book lays out the author's premise — The New Wild: Why Invasive Species Will Be Nature's Salvation. Not only is species movement not bad for ecosystems, it is the way that ecosystems respond to perturbation — it is the way ecosystems heal. Even if you are one of those who is absolutely convinced that invasive species are actually "a blight, pollution, an epidemic, or a cancer on nature", you should read this book to clarify your own thinking. True scientific understanding never comes from just interacting with those with whom you already agree. R. Robbins

963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version

RJR Picks from Around the Web (updated 11 MAY 2018)