Showing posts with label science. Show all posts

Apr 21, 2017

March for Science - to march or not to march

Apparently, there is a big controversy with regard to March for Science (M4S) that will take place this Saturday April 22, 2017 in DC and in many other cities around the US.

The main stated goal of the march is to support publicly funded and publicly communicated science as a pillar of human freedom and prosperity. I was set on going because it seems that nowadays science needs support, and because, regardless of whether you believe in such a thing as objective truth-seeking (I have my doubts), scientists can and should be political in defending their institutions and their role in public life. But mostly I was set on going because we need to resist anti-intellectualism and assaults on reason. My own reasons being more or less clear, I didn't pay much attention to the discussion around the march. And I bought a t-shirt, even though merchandising around protest movements seems out of place. Perhaps that's because this march is not a protest or social justice movement.

Many people feel strongly that the march is wrong. That they were excluded from planning and organizing. Most importantly, that the march marginalizes non-white, non-male scientists and disregards diversity. That it is a microcosm of liberal racism and that the march organizers pushed out those who argued for inclusiveness and intersectionality. The controversy is scattered across mass and social media, but to summarize: one side (the organizers) is complicit in making the march a watered-down, non-political "celebration of science". The other side (the #MarginSci-ers) perceives the march as a social justice movement and wants the message of diversity (which applies to any context of American life) to be reinforced through this movement as well. An interesting analysis of the march's diversity discourse shows how the organizers shifted their position with regard to diversity, thereby conforming to existing stereotypes and the dominant discourse:
Unfortunately, through various miscommunications, including from the co-chairs and other key members of the MfS committee, the MfS audience has been primed to reinforce the established discourse about science. It took the better part of two months of constant lobbying and external pressure from minority scientists for the MfS organisers to finally reverse their stance. The fourth diversity statement finally states that science is political. At the same time, more recent media interviews that position diversity as a “distraction” undermine this stance.
In a sense, controversy is good. It highlights gaps in a movement and could potentially help to develop a robust program and action plan. But what is this movement? Upon reading the history of its organization, the march seems more like a top-down attempt to organize and contain rather than a grassroots protest and demand for change. It's being done professionally, with attempts to control the message and the goals. Is "celebration" enough to ensure change? Do I need to celebrate science or to improve the mutual relationship between science and society? Are we mobilizing only because we want public funding and therefore need to "educate" the public and policy-makers?

There is a high probability that, with the goals of celebration, connections, understanding and outreach, M4S will follow the #Occupy and Women's March movements - much enthusiasm and no action due to the lack of a clear vision and strategies for change. A strong movement should have strong demands, which can then translate into specific legislation and policies. For example,

  • Equal pay and opportunities in science and research
  • Strong science education across all states
  • Protections for whistle-blowers and government scientists from political repression
  • No marketization of science and education
  • Exposing and dismantling the military-industrial-scientific complex



Oct 2, 2013

About research objects

Notes from the article by Bechhofer, Buchan, De Roure, Missier, Ainsworth et al. "Why linked data is not enough", Future Generation Computer Systems, 2011, (pdf).

Scientific research is increasingly digital and collaborative, therefore a new framework is needed that would facilitate the reuse and exchange of digital knowledge. Simply publishing data fails to reflect the research methodology and respect the rights and reputation of the researcher.

The concept of Research Objects (ROs) as semantically rich aggregations of resources can serve as a cornerstone of such a new framework. ROs would include research questions, hypotheses, abstracts, organisational context (e.g., ethical and governance approvals, investigators, etc.), study design, methods (workflows, scripts, services, software packages, etc.), data, results, answers (e.g., publications, slides, DOIs), etc. The authors argue that this approach is better than linked data, but later they acknowledge that linked data works fine; it just needs to be revised and extended.

Important assumptions in the paper:

  • ROs work well in the context of e-Laboratories - environments that are mostly based on automated management systems and execution of in silico experiments
  • Reproducible research is ultimately possible in any domain and always desirable.
  • All elements of scientific research can be made explicit and encoded in a machine-readable way, if not now, then in the future.

Terms that refer to different ways of reusability:

  • Reusable - reuse as a whole or single entity.
  • Repurposeable - reuse as parts, e.g., taking an RO and substituting alternative services or data for those used in the study.
  • Repeatable - repeat the study, perhaps years later.
  • Reproducible - reproduce or replicate a result (start with the same inputs and methods and see if a prior result can be confirmed).
  • Replayable - automated studies can be replayed rather than executed again.
  • Referenceable - citations for ROs.
  • Revealable - audit the steps performed in the research in order to be convinced of the validity of results.
  • Respectful - credit and attribution.

The authors describe several environments that try to implement the aggregation of resources into ROs.

  • myExperiment Virtual Research Environment relies on the notion of "packs", collections of items that can be shared as a single entity.
  • The Systems Biology of Microorganisms (SysMO) project has a web platform, SysMO-DB, and a catalog, SysmoSEEK. It relies on a JERM (Just Enough Results Model), which is based on the ISA (Investigation/Study/Assay) format. Another approach to support ROs within the systems biology community is SBRML (Systems Biology Results Markup Language). Most of the experiments in this domain are wet lab experiments, so traceability and referenceability are more relevant than repeatability and replayability.
  • MethodBox is part of the Obesity e-Lab, which allows researchers to "shop for variables" from studies related to obesity in the UK. The paper doesn't describe what method is used to support RO aggregations.

Packs in myExperiment are the most advanced implementation of the idea of ROs and, ironically, they are based on linked data: "Work in myExperiment makes use of the OAI-ORE vocabulary and model in order to deliver ROs in a Linked Data friendly way" (p. 10).

OAI-ORE defines standards for the description and exchange of aggregations of Web resources. It is agnostic to relationship types, so it needs to be extended. The authors propose the following extensions: the Research Objects Upper Model (ROUM) and the Research Object Domain Schemas (RODS). ROUM provides a basic vocabulary for describing general properties of an RO, such as its basic lifecycle states. RODS provide domain-specific vocabulary. Not many details are provided about these two extensions.
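
To make the idea concrete, here is a small sketch in Python of an RO as an OAI-ORE-style aggregation expressed as plain (subject, predicate, object) triples. The ROUM and RODS terms below (roum:lifecycleState, rods:consumes) are hypothetical placeholders, since the paper does not spell out those vocabularies.

```python
# An RO as an OAI-ORE-style aggregation, modeled as plain triples.
# "roum:" and "rods:" terms are assumed, not taken from the paper.
ORE_AGGREGATES = "ore:aggregates"

triples = [
    ("ro:1", "rdf:type", "ore:Aggregation"),
    ("ro:1", "roum:lifecycleState", "roum:Published"),  # assumed ROUM term
    ("ro:1", ORE_AGGREGATES, "hypothesis:1"),
    ("ro:1", ORE_AGGREGATES, "workflow:1"),
    ("ro:1", ORE_AGGREGATES, "dataset:1"),
    # A domain-specific, RODS-style typed relationship between parts:
    ("workflow:1", "rods:consumes", "dataset:1"),       # assumed RODS term
]

def aggregated_resources(ro, triples):
    """Return the resources a given RO aggregates."""
    return [o for s, p, o in triples if s == ro and p == ORE_AGGREGATES]

print(aggregated_resources("ro:1", triples))
# ['hypothesis:1', 'workflow:1', 'dataset:1']
```

The point of the typed relationships (as opposed to a flat bag of files) is that a machine can tell which part is a hypothesis, which is a workflow, and how they depend on each other.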

Rather than arguing that linked data is not enough, the paper seems to argue that current implementations of linked data for packaging scientific results need to be revised to explicitly include the structure of aggregations. The purpose of articulating structure in a machine-readable way is to create an environment where every component of research (including hypotheses, methods, data and results) can be re-enacted. A more obvious and important conclusion from the discussion about ROs is that a) we need to keep encouraging the exchange and sharing of research in ways that are more transparent; and b) there is still a shortage of platforms to do that. MyExperiment is a nice example, but it's still domain- and platform-specific.

The approach described in this paper is quite forward-looking. It is a call for rather radical changes in scientific practices. I wonder how many labs have automated experiment management environments where all datasets, workflows, scripts and results can be connected and reconstructed without much back-channeling. Another question is how much effort it takes to create ROs in a way that would make science fully "re-enactable". We probably won't be able to do that with legacy data.

Apr 25, 2013

Strategy for Civil Earth Observations - Data Management for Societal Benefit

The US National Science and Technology Council recently released a National Strategy for Civil Earth Observations. The goal of this strategy is to provide a framework for developing a more detailed plan that would enable "stable, continuous, and coordinated global Earth observation capabilities for the benefit of society."

The strategy establishes a way to evaluate Earth-observing systems and their information products around 12 societal benefit areas: agriculture and forestry, biodiversity, climate, disasters, ecosystems (terrestrial and freshwater), energy and mineral resources, human health, ocean and coastal resources, space weather, transportation, water resources, weather, and reference measurements. The production and dissemination of information products should be based on the following principles:

  • Full and open access
  • Timeliness
  • Non-discrimination
  • Minimum cost
  • Preservation
  • Information quality
  • Ease of use

Data management in federal agencies that are responsible for Earth science data is described in terms of the three components of the data life cycle: planning and production, data management, and usage. The latter two components are the main focus of the data management strategy. The suggested activities for those are:

  • Data management
    • Data collection and processing - initial steps to store data and create usable data records.
    • Quality control - follow the principles of the “Quality Assurance Framework for Earth Observation” (QA4EO)
    • Documentation - basic information about the sensor systems, location and time available at the moment of data collection, etc.
    • Dissemination - data should be offered in formats that are known to work with a broad range of scientific or decision-support tools. Common vocabularies, semantics, and data models should be employed.
    • Cataloging - establishing formal standards-based catalog services, building thematic or agency-specific portals, enabling commercial search engines to index data holdings, and implementing emerging techniques such as feeds, self-advertising data, and casting.
    • Preservation and stewardship - guarantee the authenticity and quality of digital holdings over time.
    • Usage tracking - measuring whether the data are actually being used; to enable better usage tracking, data should be made available through application programming interfaces (APIs).
    • Final disposition - not all data and derived products must be archived; derived products that most users have access to may adequately replace raw data and processing algorithms.
  • Usage activities
    • Discovery - enabled by dissemination, cataloging and documentation activities.
    • Analysis - includes quick evaluations to assess the usefulness of a data set as well as actual scientific analysis.
    • Product generation - creating new products by averaging, combining, differencing, interpolating, or assimilating data.
    • User feedback - mechanisms to provide feedback to improve usability and resolve data-related issues.
    • Citation - different data products, e.g., classifications, model runs, data subsets, etc., need to be citable.
    • Tagging - identify a data set as relevant to some event, phenomenon, purpose, program, or agency without needing to modify the original metadata.
    • Gap analysis - the determination by users that more data are needed, which influences the requirements-gathering for new data life cycles.
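
The usage-tracking activity above hinges on serving data through APIs so that access can actually be measured. Here is a toy sketch in Python of the simplest version of that idea; the function name and dataset identifiers are made up for illustration.

```python
# A toy sketch of usage tracking: when data are served through an API,
# each request can be logged and aggregated per dataset. The dataset ids
# and serve_dataset() are illustrative, not part of the strategy document.
from collections import Counter

usage = Counter()

def serve_dataset(dataset_id, catalog):
    """Return a dataset and record the access for usage statistics."""
    usage[dataset_id] += 1
    return catalog.get(dataset_id)

catalog = {"modis-lst": "...raw bytes...", "ghcn-daily": "...raw bytes..."}
serve_dataset("modis-lst", catalog)
serve_dataset("modis-lst", catalog)
serve_dataset("ghcn-daily", catalog)

print(usage.most_common(1))   # [('modis-lst', 2)]
```

Even counts this crude would help answer the question the strategy raises implicitly: is anybody actually using the data being shared?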

Each activity raises a lot of questions and challenges. The activities of cataloging, usage tracking, final disposition, tagging and gap analysis are particularly interesting. They raise questions that are rarely addressed in the data management literature. Does anybody use data that are being shared? Do all the data need to be preserved? How can we avoid duplicates and unnecessary modifications of metadata if data are being re-used? To what extent do we need to serve immediate user interests versus the future possibilities for research?

Apr 19, 2013

NIH report: Big data recommendations based on small data?

I've been browsing slides from the last BRDI Symposium, "Finding the Needle in the Haystack: A Symposium on Strategies for Discovering Research Data Online", and found a report for the National Institutes of Health about the management and analysis of large biomedical research data (pdf available here).

It is an interesting report that provides a lot of detail about data and technologies in biomedical research, as well as about existing efforts in data sharing. The recommendations make sense, as they echo most recommendations made with regard to research data - more money, more policy, more training:

  • Promote data sharing by establishing a minimal metadata framework for data sharing, creating catalogs and tools and enhancing data sharing policy for NIH-funded research.
  • Support the development and dissemination of informatics methods and applications by funding software development.
  • Train the workforce in quantitative sciences by funding quantitative training initiatives and enhancing review expertise in quantitative methods of bioinformatics and biostatistics.

Even more interesting is what evidence is provided to support these recommendations. The report is based on a relatively small literature corpus (~25 citations plus footnotes) and on an analysis of comments that were solicited via an NIH request for information on the management, integration, and analysis of large biomedical datasets. Overall, 50 respondents replied and made 244 suggestions. Is that enough data to make recommendations for the NIH? If we begin with the assumption that more support for large datasets and biomedical computation is needed (which seems to be the case with this report), then there is almost no need to analyze the costs and benefits of data sharing, the role of large datasets in providing solutions to biomedical problems, and so on.

Feb 11, 2013

Philosophy and science: A need for Ph in PhD

An interesting piece was published recently in Science: "Shaking Up Science" by Jennifer Couzin-Frankel. It's behind a paywall and pretty long, so here is a quick summary.

The essay is a story about two biologists, Ferric Fang from the University of Washington and Arturo Casadevall from the Albert Einstein College of Medicine in the Bronx, New York. They were brought together by "disenchantment", i.e., they both had worries about what is going on in academia and science:

Discovery for its own sake was being sidelined by a push to publish in high-impact journals. Funding was scarcer than ever. Scientists focused on narrow fields and often couldn't communicate their professional passions at a cocktail party.
They were both editors of an immunology journal, so they started writing opinion pieces about grants, the peer review process and so on. At some point they got interested in research misconduct, more specifically, in how many papers are being retracted, where and why. First, they wanted to see if there is a connection between a journal's impact factor and its retraction rate. They searched the PubMed database and found a robust correlation - the higher the impact factor, the more retractions the journal had. Then they looked closely at retractions between 1977 and 2000 and found that about 67% of all the retractions were attributed to scientific misconduct, including fraud and plagiarism.

The next step (and it's usually the most difficult one) was to figure out why this happens. I like the possible explanation, but it's not clear from the essay whether it was supported by evidence. It makes a lot of sense though.

The scientists believe that the race for grants and funding encourages misconduct.

"It's all about money," Fang says. "How can you be sure that you get money?" The answer comes back to publications—and sometimes skirting the rules to get them.
The story up to this point is more or less obvious. A lot of people talk about the problems of funding science and scientists through grants (soft money) and about peer review. What is interesting is what kind of solutions are proposed. The scientists argue for a more generalized science education instead of extreme specialization. And for more philosophical training, particularly in epistemology and metaphysics, which encourages asking questions like "What is it that you know?" and "How do you know what you know?"

Even though we may never go back to making philosophy a required subject (which I had as part of my graduate studies in Russia), I think it'd be great. Asking broad questions about the nature of knowledge and, more importantly, about its justifications and limits encourages people to step back, look at the larger picture and think critically about what they're doing. By doing that, the sciences can be what they're supposed to be - a self-correcting institution based on the Mertonian norms of communalism, universalism, disinterestedness, originality and skepticism.

Sep 25, 2012

Gender bias in science: It's real

A simple study on gender bias discussed here:

Whenever the subject of women in science comes up, there are people fiercely committed to the idea that sexism does not exist. They will point to everything and anything else to explain differences while becoming angry and condescending if you even suggest that discrimination could be a factor. But these people are wrong. This data shows they are wrong. And if you encounter them, you can now use this study to inform them they’re wrong. You can say that a study found that absolutely all other factors held equal, females are discriminated against in science. Sexism exists. It’s real.

The results are not that surprising, e.g., that both females and males are biased against females in science, or that most of this bias is unconscious, i.e., scientists used seemingly rational reasons to explain why they wouldn't hire a woman.

I like the author's suggestion though: there are definitely people out there who find this situation disturbing, so it's important to disseminate this information, and hopefully something will change.

Aug 23, 2012

Metadata webinar

Notes from the NISO / DCMI webinar "Metadata for managing scientific research data".

General impression: it seems that people who research metadata (and larger information/knowledge organization issues) are so deep into their domains that they think everybody else knows nothing about data and metadata. Perhaps the audience of this webinar consisted largely of people who are unaware of anything related to this topic. And that's why the first half hour was spent on the pretty simple and uninformative issue of "what is data-metadata-science".

I've heard such conversations so many times without any progress that I began to think we should just skip them and move on. No agreed-upon definition can ever be provided for any more or less complex concept. And still, talking about metadata as "data about data" is almost embarrassing. It's better to emphasize that having a shared description of data, e.g., who created them, where they come from, what they are about, etc., helps to produce good and verifiable research and to (re)use the data in the future.
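
As a concrete illustration, a useful metadata record answers exactly those who/where/what questions. The sketch below uses field names loosely following Dublin Core and entirely made-up values:

```python
# A metadata record as a shared description of a dataset, rather than
# "data about data". Field names loosely follow Dublin Core; the values
# and the required-field list are illustrative assumptions.
record = {
    "title": "Sea surface temperature, North Atlantic, 2010-2012",
    "creator": "J. Doe",                                  # who created them
    "date": "2013-05-01",
    "description": "Monthly means derived from satellite observations.",
    "source": "NOAA AVHRR",                               # where they come from
    "subject": "oceanography",                            # what they are about
    "format": "text/csv",
    "rights": "CC-BY",
}

def is_reusable(record, required=("title", "creator", "source", "rights")):
    """Check that the description answers the basic reuse questions."""
    return all(record.get(field) for field in required)

print(is_reusable(record))   # True
```

A check like this is what "good metadata" boils down to in practice: can a stranger, years later, tell what the data are and whether they may use them?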

As for how to create metadata, it seems that it still needs to be figured out and systematized, so researchers and librarians are on their own. The metadata world is messy. Possible criteria for the selection and evaluation of metadata schemes include:


From Public Broadcasting Metadata Dictionary Project

  1. Objectives/principles, such as interoperability, specific needs, expertise required.
  2. Domains (genre focus, format variation)
  3. Architectural layout (flat, hierarchical, granular, etc.)

And below are some common schemes grouped by their level of complexity:

  • Simple (interoperable, easy to generate, multidisciplinary, flat, 15-25 properties): Dublin Core, MARC, DataCite
  • Moderate (requires some expertise, more domain focused, extensible via connecting to other schemes): Darwin Core, Access to Biological Collections Data (ABCD), Ecological Metadata Language
  • Complex (requires domain expertise, hierarchical, many properties): FGDC Content Standard for Digital Geospatial Metadata, Data Documentation Initiative (DDI)

A couple of interesting questions/challenges: how to integrate metadata creation into social settings and workflows, automated generation of metadata, and metadata as linked data.

Jul 30, 2012

Digital science ecosystem

From the GRDI2020 Final roadmap report: Global scientific data infrastructures: The big data challenges (pdf):

Data - any digitally encoded information, including data from instruments and simulations; results from previous research; material produced by publishing, broadcasting and entertainment; digitized representations of diverse collections of objects, e.g., of museums’ curated objects.

Research Data Infrastructures - managed networked environments (services and tools) that support the whole research cycle and the movement of data and information across domains and agencies.

An ecosystem metaphor is used to conceptualize the science universe and its processes. A digital science ecosystem is composed of:

  • Digital Data Libraries that are designed to ensure the long-term stewardship and provision of quality-assessed data and data services.
  • Digital Data Archives that consist of older data that is still important and necessary for future reference, as well as data that must be retained for regulatory compliance.
  • Digital Research Libraries as a collection of electronic documents.
  • Communities of Research as communities organized around disciplines, methodologies, model systems, project types, research topics, technologies, theories, etc.

While I can see how the metaphor of an ecosystem can be beneficial in conceptualizing the science universe, I don’t think it was developed enough here. The whole report is structured around tools and infrastructure, understood rather narrowly. It seems that the biggest roadblocks are in the domain of human interactions: all those issues of social hierarchies and capital built into our social institutions.

Paul Edwards (one of the authors of another reading that seemed more sophisticated to me) wrote about this in his book “A vast machine”, about the infrastructure surrounding weather forecasting and climate change. He talks about how the many efforts of various social actors facilitated the creation and inversion of infrastructure by constantly questioning data, models, and prognoses. Here is a long quote from the concluding chapter of that book to demonstrate the emphasis on people and the making of data-knowledge-infrastructure:

“Beyond the obvious partisan motives for stoking controversy, beyond disinformation and the (very real) “war on science,” these debates regenerate for a more fundamental reason. In climate science you are stuck with the data you already have: numbers collected decades or even centuries ago. The men and women who gathered those numbers are gone forever. Their memories are dust. Yet you want to learn new things from what they left behind, and you want the maximum possible precision. You face not only data friction (the struggle to assemble records scattered across the world) but also metadata friction (the labor of recovering data’s context of creation, restoring the memory of how those numbers were made). The climate knowledge infrastructure never disappears from view, because it functions by infrastructural inversion: continual self-interrogation, examining and reexamining its own past. The black box of climate history is never closed. Scientists are always opening it up again, rummaging around in there to find out more about how old numbers were made. New metadata beget new data models; those data models, in turn, generate new pictures of the past.” (P. N. Edwards, “A vast machine”, p. 432)

Why should we trust climate change science and its infrastructures? Because of a “vast machine” that is built by a large community of researchers who constantly try to invert it. So, in order to understand, develop and advance data-intensive environments, we shouldn’t consider social forces as external. They are part, if not the foundation, of the data universe. So I’d propose to equally emphasize tools (storage, transfer and sharing tools) and social arrangements (individuals, institutions, political contexts, events, and so on) as elements of the ecosystem.

Aug 25, 2010

Stem cells ban debate

Just to clarify what I understood from the ruling itself (Civ. No. 1:09-cv-1575 (RCL)), and not from information in the media, which rely on emotions and obscured values.

Plaintiffs J. Sherley et al. (which include embryos) asked for a preliminary injunction (ban) against the 2009 NIH guidelines for Human Stem Cell Research. These guidelines allowed NIH funding for research using human embryonic stem cells (ESC). The guidelines separate ESC research from the derivation of ESC; they allow federal funding for the former and restrict the latter to embryos from in vitro fertilization that were no longer needed.

The plaintiffs argued that these guidelines violate the 1996 Dickey-Wicker Amendment, which prohibited the use of federal funds for the creation of a human embryo for research purposes and research in which a human embryo or embryos are destroyed. The defendants argued that the language of the Dickey-Wicker Amendment was ambiguous and the guidelines cleared the issues. They also argued that research with ESC doesn't destroy embryos.

The court ruled that the language of the Amendment is not ambiguous: it clearly communicates a broad prohibition on ESC research. To conduct ESC research, ESCs must be derived from an embryo, and deriving ESCs from an embryo results in the destruction of the embryo. Thus, ESC research necessarily depends upon the destruction of a human embryo, and therefore the guidelines violate the amendment.

All this seems logical and reasonable to me. What I don't understand is that the court also accepted the plaintiffs' argument that federal funding for ESC research injures the plaintiffs' competitor standing, because they do research with adult stem cells (ASC). In other words, if ESC research is allowed, they would have to compete not only with ASC researchers but also with ESC ones. So what? Competition is considered to be the main driver of innovation and progress in a capitalist society. Why, then, is removing competitors by a court decision OK?

Aug 24, 2010

Paper > Changes in science

Science 2.0 (change will happen ...) by Jean–Claude Burgelman, David Osimo, and Marc Bogdanowicz. First Monday, Volume 15, Number 7 - 5 July 2010:

As we have tried to show in this paper, science will undergo deep changes in the years leading up to 2030, and the speed of change is likely to accelerate. In particular, we envisage that the proliferation of scientific authorship, fragmentation of research output, and increased availability of data will lead to:
* A more unequal distribution of influence, with increased resources being concentrated on a few world–class and star researchers and research centres;
* A disruption of the value chain of scientific production, with a particular difficulty for publishers to maintain their role as “gate–keepers”;
* A blurring of the boundaries between scientific and cultural production;
* A new model of science, thanks to unprecedented data availability, where correlation supersedes causation;
* An increased importance of reputation, and the adoption of more open reputation management systems for scientific careers; and,
* An increased need for scientists to communicate to diverse audiences.

Jun 8, 2010

Larry's law

John Tierney at The New York Times published a piece, "Daring to Discuss Women in Science", where he recites some old and boring arguments about gender bias and argues that there is no evidence of gender bias, only evidence of sex differences in cognitive abilities. As with any other issue, everybody can have their own opinion. But this piece not only creates a problem out of nothing, it also masks the author's opinion by appealing to evidence and statistics.

The problem, according to the author, is that the House of Representatives passed a law ("Larry's Law") that would require poor science guys to go to some weird workshops where, as the author worries, they won't be allowed to talk about "the new evidence supporting Dr. Summers’s controversial hypothesis about differences in the sexes’ aptitude for math and science". Is that what the law is about? Mr. Tierney says that the official title of this legislation is "Fulfilling the potential of women in academic science and engineering", as if the whole legislation were about those poor guys and workshops.

In fact, the legislation is titled "To invest in innovation through research and development, to improve the competitiveness of the United States, and for other purposes". The short title is the "America COMPETES Reauthorization Act of 2010". The document consists of 248 pages and dozens of sections. It's about policies regarding the national nanotechnology initiative, the NSF, various STEM initiatives, etc. The section "Fulfilling..." that John Tierney is so concerned about is in "Other provisions" and takes up a few pages. It is primarily about overcoming gender bias among researchers who receive federal funding by organizing workshops, doing surveys, etc. So what?

I can see a lot of arguments against such workshops. And I think it's pathetic that gender equity in science and technology is thought to be achievable by such means. I even understand those who challenge such equity altogether (not in terms of abilities, but in terms of whether it is a positive social arrangement). However, Tierney's piece is not about any of this. As I said before, it uses a small part of a larger piece of legislation concerned with a whole lot of other issues to reiterate once again that sex differences in cognitive abilities exist. What's new and worthy of expressing an opinion on in the NYT?

Putting aside considerations about the quality of social science research and statistics (pre-existing biases built into the instrument, indicators that do not measure what they're supposed to measure, correlation is not causation, etc., etc.), the author of this article fails to see that the point is not whether gender differences or biases exist. The point is how we deal with them. Kind of old news too.

May 22, 2010

Synthetic cell

Another achievement in synthetic biology - researchers say they have created a synthetic cell. According to the press release from the J. Craig Venter Institute (JCVI), where the cell was constructed, the DNA of one bacterium (M. mycoides) was assembled from small fragments and grown in yeast cells. An error-correction method was then developed to make sure that this synthetic DNA was viable. The synthetic bacterial DNA of M. mycoides was then transplanted into another bacterium, Mycoplasma capricolum, where it started producing proteins. The initial genome was either destroyed or lost during replication, and after two days there were viable M. mycoides cells rather than M. capricolum.

There are many issues here (including the White House finally getting interested in all this), but I'd like to point out a couple of things:

  • Other scientists quoted in newspapers downplay the achievement by saying that it's not a big deal or that it's not a creation of new life anyway. Why? It can be a journalistic way of presenting "diverse" viewpoints, a clash of scientific paradigms, jealousy, or something else.
  • In previous reports about the synthesis of life forms (e.g., in February 2008), the companies that provided DNA cassettes were usually omitted. Now the provider, Blue Heron, is also in the news. It could be nothing. Or something related to ownership, commercial interests, patenting, etc.
  • The project was funded by Synthetic Genomics, which has a contract with Exxon to generate biofuels from algae. So while Dr. Venter tries to present the achievement of JCVI as an advance that raises philosophical, epistemological, and other questions (and it does), the goal of all this is profit. And that means that if we want the ultimate questions of life, the universe, and everything to be sufficiently addressed, it must be done by somebody else. Who?

Mar 11, 2010

Biosecurity webcast

A meeting at Wilson International Center for Scholars discusses the issues of biosecurity in the context of synthetic biology and DIY biology (http://wilsoncenter.org/index.cfm?topic_id=1414&fuseaction=topics.event_summary&event_id=601732). Here are some notes from their webcast.

Jason Bobe, co-founder of DIYbio:
  • Synthetic biology is on the rise, there are a lot of people who are interested in "doing" biology.
  • DIYbio is a community of people who are involved in genetic experiments.
  • There are different groups of people involved: entrepreneurs, amateurs, hackers, artists, moonlighters, educators, etc.
  • People from DIYbio are working on projects such as doing genetic self-testing, trying to replicate studies done at university labs, generating ideas.
  • Possible futures - biosurveillance, or distributed biosecurity, where everybody has the ability to evaluate the security of water; competitions among non-institutional participants in biology; involvement with synthetic biology.
  • Question about biosecurity: Who gets access to the equipment and techniques? Amateurs want to engage with synthetic biology, but this poses some issues. Possible models to look at: other amateur communities such as scuba diving and its practices of licensing, certification, etc.
Edward You, a special agent with the FBI Weapons of Mass Destruction Directorate, Countermeasures Unit, Bioterrorism Team:
  • Challenges of synbio biosecurity - living organisms are harder to manage and contain; multiple communities and cultures bring misconceptions and misperceptions.
  • Increased restrictions may not work; we need a culture of responsibility - something like a neighborhood watch, where everybody watches everybody else.
  • FBI Synthetic Biology Tripwire Initiative - mechanism to prevent unauthorized purchase of dangerous pathogens or toxins by contacting FBI WMD coordinators, who then report to WMD directorate and all related agencies.
  • FBI engages in activities of mitigating the potential risks by outreach, partnerships, and information sharing.

Oct 27, 2009

A long-term fix for science education

In the Wall Street Journal, three experts share their thoughts on how to improve math and science education in the US (Why we are failing math and science, Oct 26, 2009). In short, they suggest:

  • recruit better teachers
  • spend more money to attract talent and reward excellence
  • use technology
  • scare people that if they don't compete with Chinese and Indians in education, they will fall behind
  • make K-12 education more competitive and bring business to education

Fear and business are certainly not good strategies, but these suggestions also ignore students as an active component of the system. Students need to value knowledge and education in order to succeed. So far, science has been valued for its capacity to drive business, medicine, technology, etc. "You see, science and math are very useful, because they can help you become rich and successful. And they can help our country become richer and more powerful." This is weak motivation to study math and science because a) the connections are not obvious, especially to children at a young age, and b) there are other ways to become rich and successful.

Science and math are an essential part of the overall learning that every person has to do. It's simply part of everybody's life in contemporary society. Growing up includes learning to talk, count, read, think, do math, and gain some knowledge of how the world works. It is certainly not "fun" to learn and study, because it requires effort and perseverance. Yet one should study, not because there are great material and financial rewards for it, but because there is no other way. Knowledge and the striving for knowledge are essential not as a means to something but as an end (a non-ending end). Unless this attitude is adopted, no money, genius teachers, or technology can fix any education system.

Aug 16, 2009

My translation of "Physicists and lyricists"

In 1959 the Soviet poet Boris Slutsky wrote a poem about scientists and poets ("physicists and lyricists" in the original). Few people know this poem (here it is in Russian), but the terms "physicist" and "lyricist" are widely used in Russia to refer to the different worlds of science and culture and to those who see themselves as belonging to one or both of these worlds.

The poem reflects on why science is valued so much, while poetry is often considered useless. It is a very simple yet powerful verse. And here is my English translation:

Scientists and poets

Somehow scientists are in favor,
Somehow poets are in disgrace.
It has not been done on purpose
Everything has its own place

Did the truth come out in verses?
Did we stir somebody's soul?
Our rhymes are weak and hollow
They can't fly, they barely crawl.

Our stallion Pegasus
Has no wings, no briskly pace.
That's why scientists are in favor,
That's why poets're in disgrace.

It is obvious and clear.
Arguing won't bring a change.
And it even doesn't pain me;
It is interesting and strange

Watching how our soapy poems
Rise and settle in frustration,
And the greatness little by little
goes to numbers and calculation.

Aug 11, 2009

Parts for biomachines

The re-definition of biological systems as machines continues in the press and scholarly publications (Scientists Use Curvy DNA to Build Molecular Parts - NYTimes.com):
You can’t build a machine without parts. That’s true for large machines like engines and pumps, and it’s true for the tiniest machines, the kind that scientists want to build on the scale of molecules to do work inside the body.