Imagine webmentions being used for referencing journal articles, academic samizdat, or even OER. Suggestions and improvements could accumulate on the original content itself rather than being spread across dozens of social silos on the web.
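To make that concrete: per the W3C Webmention protocol, the citing page discovers the cited page's webmention endpoint and then sends a plain form-encoded POST with the source and target URLs. A minimal sketch in Python using the requests library follows; the helper name and example URLs are hypothetical, and a complete client would also look for rel="webmention" links in the target's HTML.

```python
# Minimal sketch of sending a webmention, assuming the target advertises its
# endpoint in an HTTP Link header (a complete client also parses rel="webmention"
# <link>/<a> elements in the target's HTML, per the W3C spec).
from urllib.parse import urljoin
import requests

def send_webmention(source: str, target: str) -> requests.Response:
    """Tell `target` that `source` links to (cites, reviews, annotates) it."""
    resp = requests.get(target, timeout=10)
    endpoint = resp.links.get("webmention", {}).get("url")  # parsed Link header
    if endpoint is None:
        raise ValueError("no webmention endpoint advertised in the Link header")
    endpoint = urljoin(target, endpoint)  # the endpoint may be a relative URL
    return requests.post(endpoint, data={"source": source, "target": target})

# Hypothetical usage: a review post citing a journal article's canonical URL.
# send_webmention("https://example.com/my-review", "https://journal.example.org/article/123")
```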
Tag: journal articles
👓 How long do floods throughout the millennium remain in the collective memory? | Nature
Is there some kind of historical memory and folk wisdom that ensures that a community remembers about very extreme phenomena, such as catastrophic floods, and learns to establish new settlements in safer locations? We tested a unique set of empirical data on 1293 settlements founded in the course of nine centuries, during which time seven extreme floods occurred. For a period of one generation after each flood, new settlements appeared in safer places. However, respect for floods waned in the second generation and new settlements were established closer to the river. We conclude that flood memory depends on living witnesses, and fades away already within two generations. Historical memory is not sufficient to protect human settlements from the consequences of rare catastrophic floods.
I wonder what the equivalent sorts of experiments would be for C. elegans, Drosophila, etc., for testing this on shorter timescales?
Background
Organisms live and die by the amount of information they acquire about their environment. The systems analysis of complex metabolic networks allows us to ask how such information translates into fitness. A metabolic network transforms nutrients into biomass. The better it uses information on nutrient availability, the faster it will allow a cell to divide.
Results
I here use metabolic flux balance analysis to show that the accuracy I (in bits) with which a yeast cell can sense a limiting nutrient's availability relates logarithmically to fitness as indicated by biomass yield and cell division rate. For microbes like yeast, natural selection can resolve fitness differences of genetic variants smaller than 10⁻⁶, meaning that cells would need to estimate nutrient concentrations to very high accuracy (greater than 22 bits) to ensure optimal growth. I argue that such accuracies are not achievable in practice. Natural selection may thus face fundamental limitations in maximizing the information processing capacity of cells.
Conclusion
The analysis of metabolic networks opens a door to understanding cellular biology from a quantitative, information-theoretic perspective.
https://doi.org/10.1186/1752-0509-1-33
Received: 01 March 2007 Accepted: 30 July 2007 Published: 30 July 2007
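For a sense of where numbers like these come from: resolving relative fitness differences on the order of 10⁻⁶ means discriminating roughly a million distinct nutrient levels, i.e. about log₂(10⁶) ≈ 20 bits, the same ballpark as the > 22 bits the paper derives from its flux balance analysis. The snippet below is only that back-of-envelope arithmetic, not the paper's calculation.

```python
# Back-of-envelope only; not the paper's flux balance analysis.
import math

s = 1e-6                 # smallest fitness difference selection can resolve (from the abstract)
bits = math.log2(1 / s)  # ≈ 19.9 bits needed to discriminate ~1/s nutrient levels
print(f"~{bits:.1f} bits")
```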
Self-replication is a capacity common to every species of living thing, and simple physical intuition dictates that such a process must invariably be fueled by the production of entropy. Here, we undertake to make this intuition rigorous and quantitative by deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.
https://doi.org/10.1063/1.4818538
Hat tip to Paul Davies in The Demon in the Machine
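Paraphrasing the scaling the abstract describes (the exact form of the inequality below is my assumption, not a formula quoted from the paper): the faster a replicator copies itself relative to how quickly it falls apart, the more entropy it must export, and the heat dumped into the bath has to cover whatever its internal entropy change does not. A toy calculation with hypothetical numbers:

```python
# Toy illustration only. The inequality used here (total entropy production per
# replication >= ln(growth rate / decay rate)) is my paraphrase of the kind of
# bound the abstract describes, and all numbers are hypothetical.
import math

g = 1 / (20 * 60)        # hypothetical growth rate: one division per ~20 minutes (s^-1)
delta = 1 / (3 * 86400)  # hypothetical decay rate: spontaneously falls apart in ~3 days (s^-1)
ds_internal = 0.0        # internal entropy change of the replicator, set to 0 for simplicity

heat_entropy_min = math.log(g / delta) - ds_internal  # in units of k_B
print(f"heat dumped to the bath >= {heat_entropy_min:.1f} k_B per replication")
```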
👓 Top cancer expert forgot to mention $3.5M industry ties—he just resigned | Ars Technica
For years, José Baselga didn’t mention industry links in dozens of top medical pubs.
👓 Top Cancer Researcher Fails to Disclose Corporate Financial Ties in Major Research Journals | New York Times
A senior official at Memorial Sloan Kettering Cancer Center has received millions of dollars in payments from companies that are involved in medical research.
I’m kind of shocked that major publishers like Elsevier continually claim to add so much value to the publishing chain, yet somehow, with all the profits they (and others) are making, they don’t do these sorts of disclosure checks as a matter of course.
🔖 ❤️ Protohedgehog tweet
DO NOT add this as the URL for a bookmark:
javascript:location.hostname += '.sci-hub.tw'
Which when you click on a paywalled research article, automatically takes you to the @Sci_Hub version of it.
And DO NOT try this, see that it works wonderfully, and share it with others.
— Jon Tennant (@Protohedgehog) June 10, 2018
🔖 Upcoming Special Issue “Information Theory in Neuroscience” | Entropy (MDPI)
As the ultimate information processing device, the brain naturally lends itself to be studied with information theory. Application of information theory to neuroscience has spurred the development of principled theories of brain function, has led to advances in the study of consciousness, and to the development of analytical techniques to crack the neural code, that is, to unveil the language used by neurons to encode and process information. In particular, advances in experimental techniques enabling precise recording and manipulation of neural activity on a large scale now enable for the first time the precise formulation and the quantitative test of hypotheses about how the brain encodes and transmits across areas the information used for specific functions. This Special Issue emphasizes contributions on novel approaches in neuroscience using information theory, and on the development of new information theoretic results inspired by problems in neuroscience. Research work at the interface of neuroscience, information theory and other disciplines is also welcome.
A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory". Deadline for manuscript submissions: 1 December 2017.
Background: Elucidating gene regulatory networks is crucial for understanding normal cell physiology and complex pathologic phenotypes. Existing computational methods for the genome-wide "reverse engineering" of such networks have been successful only for lower eukaryotes with simple genomes. Here we present ARACNE, a novel algorithm, using microarray expression profiles, specifically designed to scale up to the complexity of regulatory networks in mammalian cells, yet general enough to address a wider range of network deconvolution problems. This method uses an information theoretic approach to eliminate the majority of indirect interactions inferred by co-expression methods.
Results: We prove that ARACNE reconstructs the network exactly (asymptotically) if the effect of loops in the network topology is negligible, and we show that the algorithm works well in practice, even in the presence of numerous loops and complex topologies. We assess ARACNE's ability to reconstruct transcriptional regulatory networks using both a realistic synthetic dataset and a microarray dataset from human B cells. On synthetic datasets ARACNE achieves very low error rates and outperforms established methods, such as Relevance Networks and Bayesian Networks. Application to the deconvolution of genetic networks in human B cells demonstrates ARACNE's ability to infer validated transcriptional targets of the c-MYC proto-oncogene. We also study the effects of misestimation of mutual information on network reconstruction, and show that algorithms based on mutual information ranking are more resilient to estimation errors.
doi:10.1186/1471-2105-7-s1-s7
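The "information theoretic approach to eliminate the majority of indirect interactions" is, as I understand the abstract, a data processing inequality step applied to gene triplets: in every fully connected triplet, the edge with the lowest mutual information is treated as indirect and removed. The sketch below uses a crude histogram MI estimator and made-up function names; the published implementation uses kernel MI estimators, significance thresholds, and a DPI tolerance, so treat this only as an outline of the idea.

```python
# Sketch of the ARACNE-style pruning idea (my simplification; not the
# published implementation).
import itertools
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 10) -> float:
    """Toy histogram-based mutual information estimate, in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of y
    nz = p_xy > 0
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def aracne_like_network(expr: np.ndarray, mi_threshold: float = 0.05) -> set:
    """expr: genes x samples expression matrix; returns undirected edges (i, j)."""
    n = expr.shape[0]
    mi = np.zeros((n, n))
    for i, j in itertools.combinations(range(n), 2):
        mi[i, j] = mi[j, i] = mutual_information(expr[i], expr[j])
    edges = {(i, j) for i, j in itertools.combinations(range(n), 2)
             if mi[i, j] >= mi_threshold}
    # Data processing inequality: in each fully connected triplet, the weakest
    # edge is assumed to be an indirect interaction and is dropped.
    to_drop = set()
    for i, j, k in itertools.combinations(range(n), 3):
        triangle = [(i, j), (i, k), (j, k)]
        if all(e in edges for e in triangle):
            to_drop.add(min(triangle, key=lambda e: mi[e[0], e[1]]))
    return edges - to_drop
```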
MOTIVATION: Traditional sequence distances require an alignment and therefore are not directly applicable to the problem of whole genome phylogeny where events such as rearrangements make full length alignments impossible. We present a sequence distance that works on unaligned sequences using the information theoretical concept of Kolmogorov complexity and a program to estimate this distance.
RESULTS: We establish the mathematical foundations of our distance and illustrate its use by constructing a phylogeny of the Eutherian orders using complete unaligned mitochondrial genomes. This phylogeny is consistent with the commonly accepted one for the Eutherians. A second, larger mammalian dataset is also analyzed, yielding a phylogeny generally consistent with the commonly accepted one for the mammals.
AVAILABILITY: The program to estimate our sequence distance is available at http://www.cs.cityu.edu.hk/~cssamk/gencomp/GenCompress1.htm. The distance matrices used to generate our phylogenies are available at http://www.math.uwaterloo.ca/~mli/distance.html.
PMID: 11238070
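For intuition about how an alignment-free, compression-based distance works at all, here is a hedged sketch of the closely related normalized compression distance, with the general-purpose zlib compressor standing in for the DNA-specific GenCompress estimator the paper uses; it illustrates the idea, not the paper's exact measure.

```python
# Illustration of the compression-as-Kolmogorov-complexity idea using the
# related normalized compression distance (NCD); zlib stands in for the
# paper's GenCompress, so the numbers are only qualitative.
import zlib

def csize(data: bytes) -> int:
    """Compressed size as a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical toy sequences: related sequences compress well together, so
# their NCD is small; unrelated ones approach 1.
a = b"ACGTTGCA" * 150
b = b"ACGTTGCA" * 140 + b"GGATCCTA" * 10
print(ncd(a, b))
```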
MOTIVATION: As an increasing number of protein structures become available, the need for algorithms that can quantify the similarity between protein structures increases as well. Thus, the comparison of proteins' structures, and their clustering according to a given similarity measure, is at the core of today's biomedical research. In this paper, we show how an algorithmic information theory inspired Universal Similarity Metric (USM) can be used to calculate similarities between protein pairs. The method, besides being theoretically supported, is surprisingly simple to implement and computationally efficient.
RESULTS: Structural similarity between proteins in four different datasets was measured using the USM. The sample employed represented alpha, beta, alpha-beta, tim-barrel, globins and serpine protein types. The use of the proposed metric allows for a correct measurement of similarity and classification of the proteins in the four datasets.
AVAILABILITY: All the scripts and programs used for the preparation of this paper are available at http://www.cs.nott.ac.uk/~nxk/USM/protocol.html. In that web-page the reader will find a brief description on how to use the various scripts and programs.
PMID: 14751983 DOI: 10.1093/bioinformatics/bth031
Microscopy, phenotyping and visual screens are frequently applied to model organisms in combination with genetics. Although widely used, these techniques for multicellular organisms have mostly remained manual and low-throughput. Here we report the complete automation of sample handling, high-resolution microscopy, phenotyping and sorting of Caenorhabditis elegans. The engineered microfluidic system, coupled with customized software, has enabled high-throughput, high-resolution microscopy and sorting with no human intervention and may be combined with any microscopy setup. The microchip is capable of robust local temperature control, self-regulated sample-loading and automatic sample-positioning, while the integrated software performs imaging and classification of worms based on morphological and intensity features. We demonstrate the ability to perform sensitive and quantitative screens based on cellular and subcellular phenotypes with over 95% accuracy per round and a rate of several hundred worms per hour. Screening time can be reduced by orders of magnitude; moreover, screening is completely automated.
Related: https://www.news.gatech.edu/2008/06/23/automated-microfluidic-device-reduces-time-screen-small-organisms
Previously reported better fertilization rate after intracytoplasmic single sperm injection (ICSI) than after subzonal insemination of several spermatozoa was confirmed in a controlled comparison of the two procedures in 11 patients. Intracytoplasmic sperm injection was carried out in 150 consecutive treatment cycles of 150 infertile couples, who had failed to have fertilized oocytes after standard in-vitro fertilization (IVF) procedures or who were not accepted for IVF because not enough motile spermatozoa were present in the ejaculate. A single spermatozoon was injected into the ooplasm of 1409 metaphase II oocytes. Only 117 oocytes (8.3%) were damaged by the procedure and 830 oocytes (64.2% of the successfully injected oocytes) had two distinct pronuclei the morning after the injection procedure. The fertilization rate was not influenced by semen characteristics. After 24 h of further in-vitro culture, 71.2% of these oocytes developed into embryos, which were transferred or cryopreserved. Only 15 patients did not have embryos replaced. Three-quarters of the transfers were triple-embryo transfers. High pregnancy rates were noticed since 67 pregnancies were achieved, of which 53 were clinical, i.e. a total and clinical pregnancy rate of 44.7% and 35.3% per started cycle and 49.6% and 39.2% per embryo transfer. A total of 237 supernumerary embryos were cryopreserved in 71 treatment cycles.
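The percentages quoted above follow directly from the counts in the abstract; a quick recomputation is below (the per-transfer denominator of 135 is my inference from 150 started cycles minus the 15 without embryo replacement, and it reproduces the reported rates to within rounding).

```python
# Recomputing the rates from the counts given in the abstract.
injected, damaged, two_pronuclei = 1409, 117, 830
print(f"oocytes damaged: {damaged / injected:.1%}")                                      # 8.3%
print(f"two pronuclei per injected oocyte: {two_pronuclei / (injected - damaged):.1%}")  # 64.2%

started_cycles, no_transfer, pregnancies, clinical = 150, 15, 67, 53
transfers = started_cycles - no_transfer   # inferred: 135 transfer cycles
print(f"pregnancies per started cycle: {pregnancies / started_cycles:.1%}")  # 44.7%
print(f"clinical per started cycle:    {clinical / started_cycles:.1%}")     # 35.3%
print(f"pregnancies per transfer:      {pregnancies / transfers:.1%}")       # 49.6%
print(f"clinical per transfer:         {clinical / transfers:.1%}")          # ~39.2% as reported
```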