Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.
Standards in Genomic Sciences, 11(1):69
BACKGROUND: Efforts to harmonize genomic data standards used by the biodiversity and metagenomic research communities have shown that prokaryotic data cannot be understood or represented in a traditional, classical biological context for conceptual reasons, not technical ones.
RESULTS: Biology, like physics, has a fundamental duality: the classical macroscale eukaryotic realm vs. the quantum microscale microbial realm, with the two realms differing profoundly, and counter-intuitively, from one another. Just as classical physics is emergent from and cannot explain the microscale realm of quantum physics, so classical biology is emergent from and cannot explain the microscale realm of prokaryotic life. Classical biology describes the familiar, macroscale realm of multi-cellular eukaryotic organisms, which constitute a highly derived and constrained evolutionary subset of the biosphere, unrepresentative of the vast, mostly unseen, microbial world of prokaryotic life that comprises at least half of the planet's biomass and most of its genetic diversity. The two realms occupy fundamentally different mega-niches: eukaryotes interact primarily mechanically with the environment, prokaryotes primarily physiologically. Further, many foundational tenets of classical biology simply do not apply to prokaryotic biology.
CONCLUSIONS: Classical genetics once held that genes, arranged on chromosomes like beads on a string, were the fundamental units of mutation, recombination, and heredity. Then, molecular analysis showed that there were no fundamental units, no beads, no string. Similarly, classical biology asserts that individual organisms and species are fundamental units of ecology, evolution, and biodiversity, composing an evolutionary history of objectively real, lineage-defined groups in a single-rooted tree of life. Now, metagenomic tools are forcing a recognition that there are no completely objective individuals, no unique lineages, and no one true tree. The newly revealed biosphere of microbial dark matter cannot be understood merely by extending the concepts and methods of eukaryotic macrobiology. The unveiling of biological dark matter is allowing us to see, for the first time, the diversity of the entire biosphere and, to paraphrase Darwin, is providing a new view of life. Advancing and understanding that view will require major revisions to some of the most fundamental concepts and theories in biology.
Standards in Genomic Sciences, 9(3):1236-1250
This report summarizes the proceedings of the 14th workshop of the Genomic Standards Consortium (GSC) held at the University of Oxford in September 2012. The primary goal of the workshop was to work towards the launch of the Genomic Observatories (GOs) Network under the GSC. For the first time, it brought together potential GOs sites, GSC members, and a range of interested partner organizations. It thus represented the first meeting of the GOs Network (GOs1). Key outcomes include the formation of a core group of “champions” ready to take the GOs Network forward, as well as the formation of working groups. The workshop also served as the first meeting of a wide range of participants in the Ocean Sampling Day (OSD) initiative, a first GOs action. Three projects with complementary interests - COST Action ES1103, MG4U and Micro B3 - organized joint sessions at the workshop. A two-day GSC Hackathon followed the main three days of meetings.
Standards in Genomic Sciences, 9(3):599
The Genomic Standards Consortium (GSC) is an open-membership community that was founded in 2005 to work towards the development, implementation and harmonization of standards in the field of genomics. Starting with the defined task of establishing a minimal set of descriptions, the GSC has evolved into an active standards-setting body that currently has 18 ongoing projects, with additional projects regularly proposed from within and outside the GSC. Here we describe our recently enacted policy for proposing new activities that are intended to be taken on by the GSC, along with the template for proposing such new activities.
Standards in Genomic Sciences, 9(3):585
The workshop-hackathon was convened by the Global Biodiversity Information Facility (GBIF) at its secretariat in Copenhagen over 22-24 May 2013 with additional support from several projects (RCN4GSC, EAGER, VertNet, BiSciCol, GGBN, and Micro B3). It assembled a team of experts to address the challenge of adapting the Darwin Core standard for a wide variety of sample data. Topics addressed in the workshop included 1) a review of outstanding issues in the Darwin Core standard, 2) issues relating to publishing of biodiversity data through Darwin Core Archives, 3) use of Darwin Core Archives for publishing sample and monitoring data, 4) the case for modifying the Darwin Core Text Guide specification to support many-to-many relations, and 5) the generalization of the Darwin Core Archive to a “Biodiversity Data Archive”. A wide variety of use cases were assembled and discussed in order to inform further developments.
Standards in Genomic Sciences, 9(1):17
We describe the outcomes of three recent workshops aimed at advancing development of the Biological Collections Ontology (BCO), the Population and Community Ontology (PCO), and tools to annotate data using those and other ontologies. The first workshop gathered use cases to help grow the PCO, agreed upon a format for modeling challenging concepts such as ecological niche, and developed ontology design patterns for defining collections of organisms and population-level phenotypes. The second focused on mapping datasets to ontology terms and converting them to Resource Description Framework (RDF), using the BCO. To follow-up, a BCO hackathon was held concurrently with the 16th Genomics Standards Consortium Meeting, during which we converted additional datasets to RDF, developed a Material Sample Core for the Global Biodiversity Information Facility, created a Web Ontology Language (OWL) file for importing Darwin Core classes and properties into BCO, and developed a workflow for converting biodiversity data among formats.
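The kind of dataset-to-RDF mapping described above can be sketched very simply. The record fields and subject URI below are hypothetical, and only the standard library is used; a real pipeline would rely on an RDF library and the published BCO/Darwin Core term definitions.

```python
# Minimal sketch: mapping one flat biodiversity record to RDF triples in
# N-Triples syntax. Field names reuse Darwin Core terms; the record and
# subject URI are invented for illustration.
DWC = "http://rs.tdwg.org/dwc/terms/"  # Darwin Core terms namespace

record = {
    "scientificName": "Peromyscus maniculatus",
    "basisOfRecord": "PreservedSpecimen",
}

subject = "http://example.org/occurrence/1"  # hypothetical record URI

# One triple per field: <subject> <dwc:term> "value" .
triples = [f'<{subject}> <{DWC}{field}> "{value}" .'
           for field, value in record.items()]

for t in triples:
    print(t)
```

Each output line is a complete N-Triples statement, so a file of such lines can be loaded directly into any RDF store.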
Standards in Genomic Sciences, 8(2):352-9
Standards in Genomic Sciences, 7(1):171-4
Following up on efforts from two earlier workshops, a meeting was convened in San Diego to (a) establish working connections between experts in the use of the Darwin Core and the GSC MIxS standards, (b) conduct mutual briefings to promote knowledge exchange and to increase the understanding of the two communities' approaches, constraints, community goals, subtleties, etc., (c) perform an element-by-element comparison of the two standards, assessing the compatibility and complementarity of the two approaches, (d) propose and consider possible use cases and test beds in which a joint annotation approach might be tried, to useful scientific effect, and (e) propose additional action items necessary to continue the development of this joint effort. Several focused working teams were identified to continue the work after the meeting ended.
Standards in Genomic Sciences, 7(1):159-65
Building on the planning efforts of the RCN4GSC project, a workshop was convened in San Diego to bring together experts from genomics and metagenomics, biodiversity, ecology, and bioinformatics with the charge to identify potential for positive interactions and progress, especially building on successes at establishing data standards by the GSC and by the biodiversity and ecological communities. Until recently, the contribution of microbial life to the biomass and biodiversity of the biosphere was largely overlooked (because it was resistant to systematic study). Now, emerging genomic and metagenomic tools are making investigation possible. Initial research findings suggest that major advances are in the offing. Although different research communities share some overlapping concepts and traditions, they differ significantly in sampling approaches, vocabularies and workflows. Likewise, their definitions of 'fitness for use' for data differ significantly, as this concept stems from the specific research questions of most importance in the different fields. Nevertheless, there is little doubt that there is much to be gained from greater coordination and integration. As a first step toward interoperability of the information systems used by the different communities, participants agreed to conduct a case study on two of the leading data standards from the two formerly disparate fields: (a) GSC's standard checklists for genomics and metagenomics and (b) TDWG's Darwin Core standard, used primarily in taxonomy and systematic biology.
Standards in Genomic Sciences, 7(1):153-8
At the GSC11 meeting (4-6 April 2011, Hinxton, England), the GSC's genomic biodiversity working group (GBWG) developed an initial model for a data management testbed at the interface of biodiversity with genomics and metagenomics. With representatives of the Global Biodiversity Information Facility (GBIF) participating, it was agreed that the most useful course of action would be for GBIF to collaborate with the GSC in its ongoing GBWG workshops to achieve common goals around interoperability/data integration across (meta)-genomic and species level data. It was determined that a quick comparison should be made of the contents of the Darwin Core (DwC) and the GSC data checklists, with a goal of determining their degree of overlap and compatibility. An ad-hoc task group led by Renzo Kottman and Peter Dawyndt undertook an initial comparison between the Darwin Core (DwC) standard used by the Global Biodiversity Information Facility (GBIF) and the MIxS checklists put forward by the Genomic Standards Consortium (GSC). A term-by-term comparison showed that DwC and GSC concepts complement each other far more than they compete with each other. Because the preliminary analysis done at this meeting was based on expertise with GSC standards, but not with DwC standards, the group recommended that a joint meeting of DwC and GSC experts be convened as soon as possible to continue this joint assessment and to propose additional work going forward.
Standards in Genomic Sciences, 6(2):276
This report details the outcome of the 13th Meeting of the Genomic Standards Consortium. The three-day conference was held at the Kingkey Palace Hotel, Shenzhen, China, on March 5-7, 2012, and was hosted by the Beijing Genomics Institute. The meeting, titled From Genomes to Interactions to Communities to Models, highlighted the role of data standards associated with genomic, metagenomic, and amplicon sequence data and the contextual information associated with the sample. To this end the meeting focused on genomic projects for animals, plants, fungi, and viruses; metagenomic studies in host-microbe interactions; and the dynamics of microbial communities. In addition, the meeting hosted a Genomic Observatories Network session, a Genomic Standards Consortium biodiversity working group session, and a Microbiology of the Built Environment session sponsored by the Alfred P. Sloan Foundation.
Standards in Genomic Sciences, 6(1):136-44
Microbial ecology has been enhanced greatly by the ongoing 'omics revolution, bringing half the world's biomass and most of its biodiversity into analytical view for the first time; indeed, it feels almost like the invention of the microscope and the discovery of the new world at the same time. With major microbial ecology research efforts accumulating prodigious quantities of sequence, protein, and metabolite data, we are now poised to address environmental microbial research at macro scales, and to begin to characterize and understand the dimensions of microbial biodiversity on the planet. What is currently impeding progress is the need for a framework within which the research community can develop, exchange and discuss predictive ecosystem models that describe the biodiversity and functional interactions. Such a framework must encompass data and metadata transparency and interoperation; data and results validation, curation, and search; application programming interfaces for modeling and analysis tools; and human and technical processes and services necessary to ensure broad adoption. Here we discuss the need for focused community interaction to augment and deepen established community efforts, beginning with the Genomic Standards Consortium (GSC), to create a science-driven strategic plan for a Genomic Software Institute (GSI).
Position Paper, prepared in conjunction with the NSF thirty-year review of LTER, 59 pages.
An assessment of data-management issues associated with the Long-Term Ecological Research (LTER) program funded by the US National Science Foundation. LTER was created to allow the study of long-term phenomena that could not be studied effectively over the course of a typical three- or five-year funded project. If the work of LTER today is to contribute to insights on phenomena spanning multiple decades, or even centuries, it will more likely be from archived data than from the published literature. Thus, the creation and sharing of long-term data sets is clearly an essential part, a sine qua non, of the LTER program. Such long-term data sets will be valuable only if they are:
available: the data must be collected and then stored in a way that they can be retrieved for future use,
locatable: archived data sets that cannot be found are of the same value as data sets that never existed,
accessible: the data set must be accessible after it is located (a data set stored on obsolete media can be little better than lost data),
understandable: the data must be sufficiently well documented so that they can be used sensibly; for example, to compare average daily temperatures across multiple data sets one must know how the averages were calculated — as weighted averages across minute-by-minute measurements (as can readily be done with today’s instruments) or as the half-way point between the daily maximum and minimum (as was only possible with max-min thermometers), and
usable: to be truly usable, data sets should be automatically parsable, meaning that it should be easy for software to manipulate unambiguously the individual components of the data set.
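The "understandable" criterion above can be made concrete with a small numeric sketch. The readings below are hypothetical stand-ins for a day's minute-by-minute measurements; the point is that two legitimate definitions of "daily average temperature" give different answers for the same day.

```python
# Hypothetical temperature readings for one day (stand-ins for 1440
# per-minute values from a modern logging instrument).
readings = [10.0, 12.0, 15.0, 21.0, 14.0]

# Method 1: mean of all readings, as modern instruments allow.
mean_of_readings = sum(readings) / len(readings)

# Method 2: midpoint of the daily extremes, as a max-min thermometer forces.
max_min_midpoint = (max(readings) + min(readings)) / 2.0

print(mean_of_readings)   # 14.4
print(max_min_midpoint)   # 15.5
```

The two methods disagree by more than a degree here, which is exactly why an archived data set must document which calculation was used before cross-data-set comparisons are meaningful.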
At its inception, LTER was the only major program in ecological research with data-management and data-distribution policies. Throughout its existence, LTER has been a leader in developing both policies and technologies in support of ecological data sharing. Is the LTER model for data sharing perfect? No. Could it be improved? Yes. But, most importantly, an approach for LTER data sharing is in place and it is generally accepted across the LTER network that data sharing must be the norm. However, the process of LTER data sharing needs to be rethought into a model of data publishing, with defined data products and services. So long as access to LTER data is through individual, idiosyncratic, site-specific web sites, so long will LTER data be at risk and accessing LTER data be tedious and frustrating. Shifting to a data publishing model will not, to be sure, magically solve all problems, but it will help to control expectations, to facilitate standardized search and access, and to encourage the development of third-party tools to assist in the use of the published data.
Transl Behav Med, 1(1):155-164
Helping women make choices to reduce cancer risk and to improve breast health behaviors is important, but the best ways to reach more people with intervention assistance are not known. To test the efficacy of a Web-based intervention designed to help women make better breast health choices, we adapted our previously tested, successful breast health intervention package to be delivered on the Internet, and then we tested it in a randomized trial. We recruited women from the general public to be randomized to either an active intervention group or a delayed intervention control group. The intervention consisted of a specialized Web site providing tailored and personalized risk information to all participants, followed by offers of additional support if needed. Follow-up at 1-year post-randomization revealed significant improvements in mammography screening in intervention women compared with control women (improvement of 13 percentage points). The intervention effects were more powerful in women who increased breast health knowledge and decreased cancer worry during intervention. These data indicate that increases in mammography can be accomplished in population-based mostly insured samples by implementing this simple, low resource intensive intervention.
Recent developments in our ability to capture, curate, and analyze data, the field of data-intensive science (DIS), have indeed made these interesting and challenging times for scientific practice as well as policy making in real time. We are confronted with immense datasets that challenge our ability to pool, transfer, analyze, or interpret scientific observations. We have more data available than ever before, yet more questions to be answered as well, and no clear path to answer them. We are excited by the potential for science-based solutions to humankind's problems, yet stymied by the limitations of our current cyberinfrastructure and existing public policies. Importantly, DIS signals a transformation of the hypothesis-driven tradition of science ("first hypothesize, then experiment") to one that is typified by a "first experiment, then hypothesize" mode of discovery. Another hallmark of DIS is that it amasses data that are public goods (i.e., creates a "commons") that can further be creatively mined for various applications in different sectors. As such, this calls for a science policy vision that is long term. We herein reflect on how best to approach policy making at this critical inflection point when DIS applications are being diversified in agriculture, ecology, marine biology, and environmental research internationally. This article outlines the key policy issues and gaps that emerged from the multidisciplinary discussions at the NSF-funded DIS workshop held at the Seattle Children's Research Institute in Seattle, on September 19-20, 2010.
BMC Med Inform Decis Mak, 9:31
BACKGROUND: Data protection is important for all information systems that deal with human-subjects data. Grid-based systems--such as the cancer Biomedical Informatics Grid (caBIG)--seek to develop new mechanisms to facilitate real-time federation of cancer-relevant data sources, including sources protected under a variety of regulatory laws, such as HIPAA and 21CFR11. These systems embody new models for data sharing, and hence pose new challenges to the regulatory community, and to those who would develop or adopt them. These challenges must be understood by both systems developers and system adopters. In this paper, we describe our work collecting policy statements, expectations, and requirements from regulatory decision makers at academic cancer centers in the United States. We use these statements to examine fundamental assumptions regarding data sharing using data federations and grid computing.
METHODS: An interview-based study of key stakeholders from a sample of US cancer centers. Interviews were structured, and used an instrument that was developed for the purpose of this study. The instrument included a set of problem scenarios--difficult policy situations that were derived during a full-day discussion of potentially problematic issues by a set of project participants with diverse expertise. Each problem scenario included a set of open-ended questions that were designed to elucidate stakeholder opinions and concerns. Interviews were transcribed verbatim and used for both qualitative and quantitative analysis. For quantitative analysis, data was aggregated at the individual or institutional unit of analysis, depending on the specific interview question.
RESULTS: Thirty-one (31) individuals at six cancer centers were contacted to participate. Twenty-four out of thirty-one (24/31) individuals responded to our request, yielding a total response rate of 77%. Respondents included IRB directors and policy-makers, privacy and security officers, directors of offices of research, information security officers and university legal counsel. Nineteen total interviews were conducted over a period of 16 weeks. Respondents provided answers for all four scenarios (a total of 87 questions). Results were grouped by broad themes, including among others: governance, legal and financial issues, partnership agreements, de-identification, institutional technical infrastructure for security and privacy protection, training, risk management, auditing, IRB issues, and patient/subject consent.
CONCLUSION: The findings suggest that with additional work, large scale federated sharing of data within a regulated environment is possible. A key challenge is developing suitable models for authentication and authorization practices within a federated environment. Authentication--the recognition and validation of a person's identity--is in fact a global property of such systems, while authorization--the permission to access data or resources--mimics data sharing agreements in being best served at a local level. Nine specific recommendations result from the work and are discussed in detail. These include: (1) the necessity to construct separate legal or corporate entities for governance of federated sharing initiatives on this scale; (2) consensus on the treatment of foreign and commercial partnerships; (3) the development of risk models and risk management processes; (4) development of technical infrastructure to support the credentialing process associated with research including human subjects; (5) exploring the feasibility of developing large-scale, federated honest broker approaches; (6) the development of suitable, federated identity provisioning processes to support federated authentication and authorization; (7) community development of requisite HIPAA and research ethics training modules by federation members; (8) the recognition of the need for central auditing requirements and authority; and (9) use of two-protocol data exchange models where possible in the federation.
Position paper, submitted to the NSF BIO/AC committee, 7 pages.
A discussion of the challenges associated with federal support for cyberinfrastructure and a consideration of the opportunities afforded by the American Recovery and Reinvestment Act (ARRA).
Prev Chronic Dis, 1(4):A15
Much is written about Internet access, Web access, Web site accessibility, and access to online health information. The term access has, however, a variety of meanings to authors in different contexts when applied to the Internet, the Web, and interactive health communication. We have summarized those varied uses and definitions and consolidated them into a framework that defines Internet and Web access issues for health researchers. We group issues into two categories: connectivity and human interface. Our focus is to conceptualize access as a multicomponent issue that can either reduce or enhance the public health utility of electronic communications.
J Health Psychol, 8(1):175-86
The Internet might transform the way in which health information is communicated to patient and general populations. Understanding differences in usage patterns will be critically important to ensuring the successful distribution of health information. The present study presents early data on the use patterns and predictors of use of a Web-based intervention in a population-based subsample of women aged 18-74 in King County, WA. By three months, over half (51%) of users had logged into the website, using multiple components. Predictors of use by three months included employment, perceptions of health and mental health scores. These data have implications for how to conduct Web-based intervention research and for individuals who may not benefit from such interventions.
Health Care Women Int, 24(10):940-51
A random, population-based sample of 431 women aged 18-74 in King County, Washington, USA, completed a survey module on Internet use and access. Level of mental health, level of general health perceptions, older age, and higher income predicted women's health-related Internet use. Participants without access reported various barriers to obtaining access; perceived lack of usefulness of the Internet as an information source and unfamiliarity with the technology appear to be reasons as important as financial cost for not adopting the Internet. Internet use motivators are complex; these findings have relevance to the design of Internet-based interventions.
Journal of Computational Biology, 3(3):465-478
Bioinformatics (the application of computers to biological information management) is part of the information infrastructure that supports biological investigations. However, bioinformatics is not just another infrastructure component, no more deserving of special consideration than, say, biomicroscopy (the application of magnification to biological investigations). Instead, bioinformatics is a special case, requiring coordinated attention by members of the research community, by representatives of professional societies, and by funding agencies. With the spread of global networking, biological information resources, such as community databases, must be capable at some level of working together, of interoperating, so that users may interact with them collectively as a federated information infrastructure. In contrast, enabling infrastructure for other science, such as particle accelerators or orbiting telescopes, may operate usefully as essentially stand-alone facilities. Researchers interact with them, carry out work, and take the results back to their desks (or computers). This requirement of interoperability means that mere excellence as a stand-alone facility is not good enough—bioinformatics projects must also be excellent components in a larger, integrated system. This can be achieved only as a result of coordination among those who develop the systems, among the professional societies and other advisory bodies that help guide the projects, and among the agencies that support the work. The required level of coordination in maintaining these facilities is much greater than that seen in most other sponsored research or research infrastructure activities. This article presents a brief overview of bioinformatics activities, calling attention to those aspects that would most benefit from coordinated international attention.
Issues presented are drawn more from interactions with community researchers and from many recent reports of community workshops on bioinformatics and much less from compiled statistics on supported activities. Several examples are drawn from the genome project, since it is a successful, large-scale international biological project with a major informatics component.
In Collado-Vides J, Magasanik B, and Smith T. [eds.] Integrative Approaches to Molecular Biology. Cambridge: MIT Press. pp. 63-90.
IEEE Engineering in Medicine and Biology Magazine, 14(6):746-759
IEEE Engineering in Medicine and Biology Magazine, 14(6):694-701
Publishing Research Quarterly, 10(1):3-27
Biology is entering a new era in which data are being generated that cannot be published in the traditional literature. Databases are taking the role of scientific literature in distributing this information to the community. The success of some major biological undertakings, such as the Human Genome Project, will depend upon the development of a system for electronic data publishing. Many biological databases began as secondary literature--reviews in which certain kinds of data were collected from the primary literature. Now these databases are becoming a new kind of primary literature, with findings being submitted directly to the database and never being published in print form. Some databases are offering publishing on demand services, where users can identify subsets of the data that are of interest, then subscribe to periodic distributions of the requested data. New systems, such as the Internet Gopher, make building electronic information resources easy and affordable while offering a powerful search tool to the scientific community. Although many questions remain regarding the ultimate interactions between electronic and traditional data publishing and about their respective roles in the scientific process, electronic data publishing is here now, changing the way biology is done. The technical problems associated with mounting cost-effective electronic data publishing are either solved, or solutions seem in reach. What is needed now, to take us all the way into electronic data publishing as a new, formal literature, is the development of more high-quality, professionally operated EDP sites. The key to transforming these into a new scientific literature is the establishment of appropriate editorial and review policies for electronic data publishing sites. Editors have the opportunity and the responsibility to work in the vanguard of a revolution in scientific publishing.
Journal of Computational Biology, 1(3):173-190
The Human Genome Program (HGP) is producing large quantities of complex map and DNA sequence data. Informatics projects in algorithms, software, and databases are crucial in accumulating and interpreting these data in a robust and automated fashion at genome and sequencing centers. Furthermore, the data will need to be captured into robust community databases and accessed with equally robust analysis tools; biologists will need to ask questions of the data accumulated by the genome program and other research. The future success of the genome project will depend on the ease with which accurate and timely answers to interesting questions about genomic data can be obtained. The Department of Energy (DOE), the National Institutes of Health (NIH), and other agencies must exercise leadership and control to ensure that needed informatics systems are developed and operated appropriately. Recognizing the importance of informatics to the success of the genome project, DOE supports a portfolio of independent research projects in genome informatics, as well as core informatics activities at genome centers. To ensure the continuing high quality of these programs, last year OHER/DOE asked an independent panel to review the entire DOE program of informatics projects. The meeting reported here is a continuation of the long-term planning process initiated with that review. This meeting was also designed to feed into DOE development of new 5-year plans for the HGP. In addition, this planning will be useful in other DOE or OHER/DOE research programs that will require an infrastructure to collect, interpret, and integrate diverse biological data.
Some other programs or interests include health effects, mutation research, structural biology, biotechnology research (including applications in environmental biotechnology and the biological production of fuels, biomass, or other materials), and environmental research (including research and modeling that allow a better understanding of the effects of environmental perturbations on organisms, their ecology, and the environment).
In Suhai S [Ed] Computational Methods in Genome Research. New York: Plenum Publishing. pp.85-96.
Although the word “gene” may be the most frequently used word in biology, it has proven remarkably difficult to define. Entire books have been written describing the early history of the gene concept (Carlson, 1966), and many eminent biologists addressed the question during the classical period of genetics (Demerec, 1933; Demerec, 1955; Muller, 1945; Stadler, 1954). In the modern era, major textbooks on molecular and cellular biology all devote significant efforts to defining the gene (e.g., Alberts, et al., 1983; Darnell, et al., 1986), with one recent work (Singer and Berg, 1992) simply claiming that no single definition of the gene exists: “The unexpected features of eukaryotic genes have stimulated discussion about how a gene ... should be defined. Several different possible definitions are plausible, but no single one is entirely satisfactory or appropriate for every gene.” If genes cannot be defined, then how is one to design a data model to represent them? And, without a definition for genes, how possibly could we represent genomic maps? It is a truism in information science that an adequate data model cannot be developed without an understanding of the thing being modeled. Therefore, to build good databases we must turn our attention to the notion of the gene.
Journal of Heredity, 85(1):48-52
The variable white mutation arose spontaneously in 1983 within a laboratory stock of wild-type deer mice (Peromyscus maniculatus). The original mutant animal was born to a wild-type pair that had previously produced several entirely wild-type litters. Other variable white animals were bred from the initial individual. Variable white deer mice exhibit extensive areas of white on the head, sides, and tail. Usually a portion of pigmented pelage occurs dorsally and on the shoulders, but the extent of white varies from nearly all white to patches of white on the muzzle, tip of tail, and sides. The pattern is irregular, but not entirely asymmetrical. Eyes are pigmented, but histologically reveal a decrease in thickness and pigmentation of the choroid layer. Many variable white animals do not respond to auditory stimuli, an effect that is particularly evident in animals in which the head is entirely white. Ataxic behavior is also prevalent. Pigment distribution, together with auditory and retinal deficiencies, suggests a neural crest cell migration defect. Breeding data are consistent with an autosomal semidominant, lethal mode of inheritance. The trait differs from two somewhat similar variants in Peromyscus: from dominant spot (S) in extent and pattern of pigmentation and from whiteside (ws), an autosomal recessive trait, in the mode of inheritance and viability. Evidence for possible homology with the Va (varitint-waddler) locus in house mouse (Mus) is presented. The symbol Vw is tentatively assigned for the variable white locus in Peromyscus.
Nucleic Acids Research, 21(13):3003-3006
Version 5.0 of the Genome Data Base (GDB™) was released in March 1993. This document describes some of the significant changes to the types of data which are stored within the GDB. In addition to handling a wider scope of data, the GDB 5.0 application software now supports the X-Windows protocol. Although the GDB software still remains the most widely utilized method for accessing the data, alternate methods of access are now available, including direct SQL (Structured Query Language) queries, FTP (Internet File Transfer Protocol), WAIS (Wide Area Information Server), and other tools produced by third-party developers.
Nucleic Acids Research, 20(Suppl):2201
IEEE Engineering in Medicine and Biology Magazine, 11(1):25-34
Proceedings of the 17th International Conference on Very Large Data Bases,
Cytogenetics and Cell Genetics, 58:1833-1838
HGM 10.5 marked the introduction of the Genome Data Base (GDB) as the official system for maintaining and accessing mapping data in support of the international Human Genome Project. While the prototype system was successful in compiling and making available the HGM consensus map, critical work remained in four areas prior to HGM 11: (1) enhancing editorial modules based upon expressed needs of the HGM 10.5 attendees; (2) extending the content domain covered by the database to include additional information, especially physical mapping data; (3) increasing the functionality of the General User Interface and related query tools used by the wider scientific community; and (4) improving the system's robustness, as part of the general maturation process in moving from an operational prototype to a full production system.
Applied Entomology and Zoology, 19:254-256
Journal of Heredity, 73(1):69-70
An autosomal recessive mutation affecting hair and eye pigmentation was discovered in the F1 progeny of wild-type deer mice (Peromyscus maniculatus) trapped near East Lansing, Michigan. When homozygous, the mutation (designated as blonde, bt) reduces both black and yellow pigmentation deposited in the fur, reduces or eliminates pigmentation in the non-follicular melanocytes of the outer ear, peri-orbital skin and tail, slightly reduces the amount of pigmentation in the choroidal melanocytes, and completely eliminates pigmentation of the retinal epithelium.
Steinbeck Quarterly, 13:86
Proceedings of the 9th Vertebrate Pest Conference (1980),
Although bait shyness has long been recognized as a problem to be overcome in the control of vertebrate pests, it has recently been suggested that the phenomenon might be turned to an advantage and used as an alternative, non-lethal form of control. Unfortunately, this technique has not proven to be as useful as hoped, as the work which has been done on coyotes is inconclusive at best and some recent work on rodents has cast serious doubts upon the method's potential. However, an extensive literature dealing with the formation of poison-based food aversions now exists, and insights gained from these studies can be used to increase the efficacy of traditional, lethal control techniques. For example, the efficacy of pre-baiting may be greatly increased if the pre-bait is treated with a non-toxic flavor which mimics the flavor of the subsequently used toxin, even if this non-toxic flavor decreases the acceptability of the pre-bait.
Learning & Behavior, 8(4):534-542
An investigation was made of the occurrence of learned and nonlearned aversions in the acquisition of illness-induced taste aversions in mice of the genus Peromyscus. It was determined: (1) that illness following the ingestion of a novel flavor both produced aversions specific to that flavor and also enhanced neophobia directed toward novel flavors in general; (2) that the specific aversion and the enhanced neophobia appeared to be mediated by independent processes, with no indication that the enhanced neophobia was dependent upon the integrity of the specific aversion; and (3) that illness following the ingestion of familiar water produced enhanced neophobia, which did not appear to be mediated by an aversion to water. It was noted that the results were fundamentally in agreement with those previously obtained with laboratory rats, except that a demonstration of the independence between the two types of aversions has not yet been reported in those animals.
Behavioral and Neural Biology, 30(1):80-9
An examination of the effect of sex upon taste-aversion learning in deer mice (Peromyscus maniculatus bairdi) found that (a) sex has no apparent effect upon either the acquisition or the extinction of a LiCl-induced aversion to sucrose solution if the animals are tested while fluid deprived, but that (b) if the animals are tested under nondeprived conditions, males exhibit a greater initial aversion than females but both sexes seem to extinguish their aversions at similar rates. These findings differ from those previously reported for laboratory rats, in which it has been found that sex affects the extinction but not the acquisition of poison-induced taste aversions. It was suggested that either (a) sex interacts with taste-aversion learning via different mechanisms in deer mice and in rats, or (b) the apparent differences in extinction rates reported for rats might conceivably reflect differences in initial aversion strength which were undetected due to the use of high doses of toxin.
Applied Entomology and Zoology, 15(3):352-355
Behavioral and Neural Biology, 25(3):387-397
An investigation was made of the effects of prior familiarization with sucrose on the acquisition and extinction of LiCl-induced aversions to sucrose by mice of the genus Peromyscus. As in previous studies on other species, it was found that flavor familiarization inhibits the formation of learned taste aversions. However, in contrast to some reports on other species, it was demonstrated that for Peromyscus familiarization does not accelerate, but instead retards, the extinction of taste aversions. It was noted that (a) the contrasting extinction results reported for other species may be confounded with masked acquisition effects, (b) the latent inhibition effect is often not obtained with fewer than 20 preexposures, yet the flavor-preexposure effect has been demonstrated with as few as one preexposure, (c) the flavor-preexposure schedule is logically and operationally equivalent to a short partial-reinforcement schedule, and (d) both the acquisition and extinction effects shown by Peromyscus are consistent with a partial-reinforcement interpretation. Therefore, it was suggested that future analysis of the phenomenon might profitably consider the possibility that the flavor-preexposure effect upon taste-aversion learning may be a case of partial reinforcement.
J Comp Physiol Psychol, 92(4):642-50
A series of experiments tested the ability of mice of the native genus Peromyscus to form learned taste aversions. It was found that (a) the mice acquired a strong aversion after a single flavor/toxicosis pairing, (b) naive mice drinking a LiCl solution apparently began to experience toxic effects within 90 sec after the beginning of consumption, (c) the mice acquired a total aversion after a single flavor/delayed illness pairing when high doses of toxin were employed, and (d) the aversion produced by a single flavor/delayed-illness pairing was specific to the flavor paired with illness and was dependent on the contingency between the flavor and illness. Although these responses are qualitatively similar to those reported for domestic rats, the mice formed considerably weaker aversions than those previously reported for laboratory rats tested with the same weight-specific doses of LiCl.
Lab Anim Sci, 27(6):1038-9
Poster presented at: Gordon Research Conference: Microbial Ecology in the Era of OMICS. 2012 Jun 24–29; Lucca, Italy.
Poster presented at: Educause CAMP Med Workshop: Identity and Access Management for Medical Applications. 2005 Feb 9–11; Baltimore, Maryland.
RJR Experience and Expertise
Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.
Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.
Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the Human Genome Project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.
Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.
While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.
Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen.
Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.
Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.
RJR Picks from Around the Web (updated 11 MAY 2018)
Science Policy & Funding
Big Data & Informatics