OpenWetWare:Information management/current publishing pro con

  • Lucks 18:25, 1 August 2006 (EDT): I think it is a good exercise to discuss the pros and cons of the existing literature system so that we can assess what any future system should provide.
  • Austin 09:29, 6 August 2006 (EDT): You may also want to discuss the economic models of the traditional system, of open access, and of what the future would look like. The discussion below mainly concerns the "smallest publishable unit" (SPU) of publishing, but it seems that as the SPU approaches zero, the cost model needs to be adjusted even relative to the open-access model in which the author pays for a complete article (see the sketch after this list).
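
A back-of-the-envelope sketch of that cost-model point, in Python. The article fee and the SPU counts are made-up illustrative numbers, not figures from this discussion; the point is only that a flat per-submission charge behaves badly as the publishable unit shrinks.

    # Hypothetical numbers: compare an author-pays fee charged per full article
    # with the same total spread over many "smallest publishable units" (SPUs).
    ARTICLE_FEE = 2000.0                 # assumed open-access fee per article (USD)
    SPUS_PER_ARTICLE = [1, 5, 20, 100]   # how finely the same work might be sliced

    for n in SPUS_PER_ARTICLE:
        fee_per_spu = ARTICLE_FEE / n
        print(f"{n:>3} SPUs per article -> {fee_per_spu:8.2f} USD per SPU")

    # As n grows, a per-unit charge either shrinks toward zero (and no longer
    # covers fixed per-submission costs such as review and archiving) or, if it
    # stays at the per-article level, becomes prohibitive; either way the cost
    # model needs rethinking as the SPU approaches zero.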

Current Publishing System: Description

Pros

What are the good qualities of this system that we want to preserve in a future system?

  • Scientific results are communicated through publishing and distribution of journals.
  • Scientific results are reviewed by two to three reviewers and an editor before they are published. (I would argue that even more reviewing eyes would be beneficial, reducing the chance of lazy or biased reviews.)
  • The publication process allows works to be cited and archived.

Cons

What are the problems with this system that need to be addressed?

  • Smaller scientific achievements don't count as currency: you don't get credit until you can publish a paper. This biases the system against researchers with short attention spans, and it encourages researchers to force publications out when perhaps they shouldn't be forced out. Since the currency is published work (highly cited work, that is; the currency is really citations), the funding structure can impose deadlines that cause a considerable amount of rushing to publication. This is the most serious problem with the current literature system.
  • The process takes far too long. Even if we stick with the current currency model of the polished paper, the reviewing and layout process can have a lag time of up to a year in some cases, far too long to wait when working on cutting-edge science. Even when those two steps are eliminated (as Paul Ginsparg's revolutionary arxiv.org has done), the researcher still has to hoard smaller but useful mini-investigations until the paper is written. I would argue that it is better to get complete mini-investigations (little mini-hypotheses) out before a full paper can be written; these are often useful to more people than just the author.
  • Too much information is lost. Since there is no venue for publishing mini-investigations, most of those that don't fit into the paper languish in lab notebooks, never to be seen again. This also raises the debate about publishing negative results, which often don't make it into papers but have clear utility to science.
  • The peer review process can break down with lazy, biased, or incompetent reviewers. It would be better to have a system that lets a broad group of people comment on smaller parts of the larger work as well as on the work as a whole; peer review should become community review.
  • Literature searching is a very difficult task. This is partly because search tools are poorly designed for the needs of a scientific literature search, but also because the paper presents a 'story' that spins the content of the research. In my experience, I run a query and then read the abstracts of the articles whose titles look promising; the problem is that titles are often chosen for their spin factor. If the model separated form from content (as CSS does for HTML), you could search on scientific content, find all papers or work containing that content, and then read the work, rather than searching on the author's choice of words (see the sketch below).
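
A minimal sketch of that separation in Python. The records, fields, and tags below are hypothetical and exist only to contrast matching on title wording with matching on structured content annotations; they are not part of any real search tool.

    # Two toy records: one with a spin-heavy title, one plainly titled.
    papers = [
        {
            "title": "A surprising twist in cellular logic",
            "content_tags": {"riboswitch", "gene regulation", "E. coli"},
        },
        {
            "title": "Regulation of gene expression by riboswitches",
            "content_tags": {"riboswitch", "gene regulation", "B. subtilis"},
        },
    ]

    def search_by_title(word):
        """Title search: misses papers whose titles were chosen for spin."""
        return [p["title"] for p in papers if word.lower() in p["title"].lower()]

    def search_by_content(tag):
        """Content search: matches the structured annotation, whatever the title says."""
        return [p["title"] for p in papers if tag in p["content_tags"]]

    print(search_by_title("riboswitch"))    # finds only the plainly titled paper
    print(search_by_content("riboswitch"))  # finds both papers

Searching over explicit content annotations rather than title strings is what would let the same work be found regardless of how the authors chose to phrase it.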