🎧 ‘The Daily’: The Hunt for the Golden State Killer | New York Times

Listened to ‘The Daily’: The Hunt for the Golden State Killer by Michael Barbaro from nytimes.com

Paul Holes was on the verge of retirement, having never completed his decades-long mission to catch the Golden State Killer. Then he had an idea: Upload DNA evidence to a genealogy website.

On today’s episode:

• Paul Holes, an investigator in California who helped to crack the case.

Background reading:

• A spate of murders and rapes across California in the 1970s and 1980s went unsolved for decades. Then, last week, law enforcement officials arrested Joseph James DeAngelo, 72, a former police officer.

• Investigators submitted DNA collected at a crime scene to the genealogy website GEDmatch, through which they were able to track down distant relatives of the suspect. The method has raised concerns about privacy and ethics.

A stunning story with some ingenious detective work. I worry about the potential privacy problems down the road, though one of the ideas here is that the method can actually help protect the privacy of individuals who are wrongly and maliciously accused, and thus save a lot of time and money.

The subtleties will arise as this type of DNA evidence is used more frequently for lower-level crimes while, at the same time, the technology becomes increasingly cheaper to carry out.

👓 How Many Genes Do Cells Need? Maybe Almost All of Them | Quanta Magazine

Read How Many Genes Do Cells Need? Maybe Almost All of Them (Quanta Magazine)
An ambitious study in yeast shows that the health of cells depends on the highly intertwined effects of many genes, few of which can be deleted together without consequence.
There could be some interesting data to play with here if available.

I also can’t help but wonder about applying some of Stuart Kauffman’s ideas to something like this. In particular, it sounds very reminiscent of his analogy of what happens when one strings thread randomly among a pile of buttons, and the complexity that results.
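Kauffman’s buttons-and-threads analogy is essentially the random-graph phase transition: string enough random threads among a pile of buttons and a single giant connected cluster abruptly appears once the thread-to-button ratio passes about one half. Here’s a toy simulation of that picture (my own sketch, not anything from the article) using a simple union-find:

```python
import random

def largest_component_fraction(n_buttons, n_threads, seed=0):
    """Randomly 'string threads' between pairs of buttons and return the
    fraction of buttons caught up in the largest connected cluster."""
    rng = random.Random(seed)
    parent = list(range(n_buttons))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _ in range(n_threads):
        a, b = rng.randrange(n_buttons), rng.randrange(n_buttons)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # tie the two clusters together

    sizes = {}
    for i in range(n_buttons):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n_buttons

# Below ~0.5 threads per button the clusters stay tiny; just past that
# ratio a giant cluster suddenly emerges.
for ratio in (0.25, 0.5, 0.75, 1.0):
    n = 10_000
    print(ratio, round(largest_component_fraction(n, int(ratio * n)), 3))
```

One could imagine running something similar over the yeast gene-interaction data, treating genes as buttons and measured epistatic interactions as threads.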

👓 Voting me, voting you: Eurovision | The Economist (Espresso)

Read Voting me, voting you: Eurovision (Economist Espresso)
The competition, whose finals play out tonight, is as famed for its politics as its cheesy…
I often read the Economist’s Espresso daily round-up, but don’t explicitly post that I do. I’m making an exception in this case because I find the voting partnerships mentioned here quite interesting. It might be worth delving into some of the underlying voting statistics for potential application to other real-life examples. I’m also enamored of the nice visualization they provide. I wonder what the overlap of this data with other related world politics looks like?
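For the voting-partnership angle, here’s a minimal sketch of the kind of first-pass analysis I have in mind: ranking country pairs by the points they exchange. The points table below is entirely made up for illustration; real Eurovision jury and televote tallies would be substituted in.

```python
# Toy, hand-made points table: points[giver][receiver].
points = {
    "Greece": {"Cyprus": 12, "Sweden": 4, "Norway": 2},
    "Cyprus": {"Greece": 12, "Norway": 3, "Sweden": 1},
    "Sweden": {"Norway": 12, "Greece": 2, "Cyprus": 0},
    "Norway": {"Sweden": 10, "Cyprus": 1, "Greece": 3},
}

def mutual_score(a, b):
    """Symmetric affinity: total points the pair exchanged."""
    return points[a].get(b, 0) + points[b].get(a, 0)

countries = list(points)
pairs = sorted(
    ((mutual_score(a, b), a, b)
     for i, a in enumerate(countries) for b in countries[i + 1:]),
    reverse=True,
)
for score, a, b in pairs:
    print(f"{a}-{b}: {score}")
```

Ranking pairs by this symmetric score is a crude proxy for the reciprocal bloc voting that the visualization highlights; a next step would be comparing each pair’s exchange against what random point allocation would predict.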

🎧 Season 2 Episode 10 The Basement Tapes | Revisionist History

Listened to Season 2 Episode 10 The Basement Tapes by Malcolm Gladwell from Revisionist History

A cardiologist in Minnesota searches through the basement of his childhood home for a missing box of data from a long-ago experiment. What he discovers changes our understanding of the modern American diet — but also teaches us something profound about what really matters when we honor our parents’ legacy.

There’s a little bit of everything here.

👓 One space between each sentence, they said. Science just proved them wrong. | Washington Post

Read One space between each sentence, they said. Science just proved them wrong. by Avi Selk (Washington Post)

“Professionals and amateurs in a variety of fields have passionately argued for either one or two spaces following this punctuation mark,” they wrote in a paper published last week in the journal Attention, Perception, & Psychophysics.

They cite dozens of theories and previous research, arguing for one space or two.  A 2005 study that found two spaces reduced lateral interference in the eye and helped reading.  A 2015 study that found the opposite.  A 1998 experiment that suggested it didn't matter.

“However,” they wrote, “to date, there has been no direct empirical evidence in support of these claims, nor in favor of the one-space convention.”

I love that the permalink for this article has a trailing 2, which indicates to me that it took the editors a second attempt to add the additional space into the headline for their CMS. And if nothing else, this article is interesting for its layout and typesetting.

I’ll circle back to read the full journal article shortly.1

 

References

1. Johnson RL, Bui B, Schmitt LL. Are two spaces better than one? The effect of spacing following periods and commas during reading. Atten Percept Psychophys. April 2018. doi:10.3758/s13414-018-1527-6

👓 Your behavior in Starbucks may reveal more about you than you think | Science | AAAS

Read Your behavior in Starbucks may reveal more about you than you think (Science | AAAS)
Cultural differences are revealed in coffee shop etiquette, study in China finds

❤️ DrAndrewV2 tweet about reading journal articles

Liked a tweet by Jenny Andrew (Twitter)

🔖 The Theory of Quantum Information by John Watrous

Bookmarked The Theory of Quantum Information by John Watrous (cs.uwaterloo.ca)

To be published by Cambridge University Press in April 2018.

Upon publication this book will be available for purchase through Cambridge University Press and other standard distribution channels. Please see the publisher's web page to pre-order the book or to obtain further details on its publication date.

A draft, pre-publication copy of the book can be found below. This draft copy is made available for personal use only and must not be sold or redistributed.

This largely self-contained book on the theory of quantum information focuses on precise mathematical formulations and proofs of fundamental facts that form the foundation of the subject. It is intended for graduate students and researchers in mathematics, computer science, and theoretical physics seeking to develop a thorough understanding of key results, proof techniques, and methodologies that are relevant to a wide range of research topics within the theory of quantum information and computation. The book is accessible to readers with an understanding of basic mathematics, including linear algebra, mathematical analysis, and probability theory. An introductory chapter summarizes these necessary mathematical prerequisites, and starting from this foundation, the book includes clear and complete proofs of all results it presents. Each subsequent chapter includes challenging exercises intended to help readers to develop their own skills for discovering proofs concerning the theory of quantum information.

h/t to @michael_nielsen via Nuzzel

👓 Large Cache of Texts May Offer Insight Into One of Africa’s Oldest Written Languages | Smithsonian Magazine

Read Large Cache of Texts May Offer Insight Into One of Africa's Oldest Written Languages (Smithsonian)
Archaeologists in Sudan have uncovered the largest assemblage of Meroitic inscriptions to date
This is a cool discovery, in great part because the inscriptions’ contents were interesting enough to suggest further locations to check for more archaeological finds. This might also be something one could apply some linguistic analysis and information theory to in an attempt to better pull apart the language and its grammar.
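As a trivial example of the sort of first-pass information-theoretic probe I mean, here’s a sketch that computes the character-level Shannon entropy of a text sample. The English sample is only a stand-in, since I don’t have a Meroitic transcription handy; comparing a corpus’s entropy against the uniform-distribution maximum gives a rough measure of its redundancy.

```python
from collections import Counter
from math import log2

def char_entropy(text):
    """Shannon entropy, in bits per symbol, of a text's
    character frequency distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
h = char_entropy(sample)
h_max = log2(len(set(sample)))  # uniform distribution over the alphabet seen
print(f"entropy {h:.2f} bits/char vs. maximum {h_max:.2f} bits/char")
```

Real languages sit well below the uniform maximum; how far below, and how the gap behaves at the digram and trigram level, is exactly the kind of structure that might help in pulling a poorly understood script apart.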

h/t to @ArtsJournalNews, bookmarked on April 17, 2018 at 08:16AM

Following Ilyas Khan

Followed Ilyas Khan (LinkedIn)
Ilyas Khan, Co-Founder and CEO at Cambridge Quantum Computing
Dear god, I wish Ilyas had a traditional blog with a true feed, but I’m willing to put up with the inconvenience of manually looking him up from time to time to see what he’s writing about quantum mechanics, quantum computing, category theory, and other areas of math.

Reply to A (very) gentle comment on Algebraic Geometry for the faint-hearted | Ilyas Khan

Replied to A (very) gentle comment on Algebraic Geometry for the faint-hearted by Ilyas Khan (LinkedIn)
This short article is the result of various conversations over the course of the past year or so that arose on the back of two articles/blog pieces that I have previously written about Category Theory (here and here). One of my objectives with such articles, whether they be on aspects of quantum computing or about aspects of maths, is to try and de-mystify as much of the associated jargon as possible, and bring some of the stunning beauty and wonder of the subject to as wide an audience as possible. Whilst it is clearly not possible to become an expert overnight, and it is certainly not my objective to try and provide more than an introduction (hopefully stimulating further research and study), I remain convinced that with a little effort, non-specialists and even self confessed math-phobes can grasp some of the core concepts. In the case of my articles on Category Theory, I felt that even if I could generate one small gasp of excited comprehension where there was previously only confusion, then the articles were worth writing.
I just finished a course on Algebraic Geometry through UCLA Extension, which was geared toward non-traditional math students and professionals, and wish I had known about Smith’s textbook when I’d started. I did spend some time with Cox, Little, and O’Shea’s Ideals, Varieties, and Algorithms, which is a pretty good introduction to the area, but written a bit more with computer scientists and engineers in mind than the pure mathematician, which might also recommend it to your audience here. It’s certainly more accessible than Hartshorne for the faint-of-heart.

I’ve enjoyed your prior articles on category theory which have spurred me to delve deeper into the area. For others who are interested, I thought I’d also mention that physicist and information theorist John Carlos Baez at UCR has recently started an applied category theory online course which I suspect is a bit more accessible than most of the higher graduate level texts and courses currently out. For more details, I’d suggest starting here: https://johncarlosbaez.wordpress.com/2018/03/26/seven-sketches-in-compositionality/

🔖 Special Issue : Information Dynamics in Brain and Physiological Networks | Entropy

Bookmarked Special Issue "Information Dynamics in Brain and Physiological Networks" (mdpi.com)

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: 30 December 2018

It is, nowadays, widely acknowledged that the brain and several other organ systems, including the cardiovascular, respiratory, and muscular systems, among others, exhibit complex dynamic behaviors that result from the combined effects of multiple regulatory mechanisms, coupling effects and feedback interactions, acting in both space and time.

The field of information theory is becoming more and more relevant for the theoretical description and quantitative assessment of the dynamics of the brain and physiological networks, defining concepts, such as those of information generation, storage, transfer, and modification. These concepts are quantified by several information measures (e.g., approximate entropy, conditional entropy, multiscale entropy, transfer entropy, redundancy and synergy, and many others), which are being increasingly used to investigate how physiological dynamics arise from the activity and connectivity of different structural units, and evolve across a variety of physiological states and pathological conditions.

This Special Issue focuses on blending theoretical developments in the new emerging field of information dynamics with innovative applications targeted to the analysis of complex brain and physiological networks in health and disease. To favor this multidisciplinary view, contributions are welcome from different fields, ranging from mathematics and physics to biomedical engineering, neuroscience, and physiology.

Prof. Dr. Luca Faes
Prof. Dr. Alberto Porta
Prof. Dr. Sebastiano Stramaglia
Guest Editors
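Of the measures the call lists, approximate entropy is among the simplest to experiment with. Here’s a rough sketch of Pincus’s algorithm (my own toy implementation; production work on physiological signals would use a vetted library and a tolerance scaled to the series’ standard deviation):

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy (ApEn): a regularity statistic for a time
    series. Lower values indicate more regular, predictable dynamics.
    m is the template length; r is the similarity tolerance."""
    n = len(series)

    def phi(mm):
        templates = [series[i:i + mm] for i in range(n - mm + 1)]
        log_counts = []
        for t1 in templates:
            # Fraction of templates within Chebyshev distance r of t1.
            c = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            log_counts.append(math.log(c / len(templates)))
        return sum(log_counts) / len(templates)

    return phi(m) - phi(m + 1)

# A strictly periodic signal should score near zero.
periodic = [i % 2 for i in range(100)]
print(approx_entropy(periodic))
```

The same template-matching machinery underlies sample entropy and the multiscale variants mentioned in the call, which is part of why this family of measures travels so well between cardiac, respiratory, and neural signals.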

👓 The Scientific Paper Is Obsolete | The Atlantic

Read The Scientific Paper Is Obsolete by James Somers (The Atlantic)
The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.
Not quite the cutting-edge material I would have liked, but a generally interesting overview of relatively new technologies and UI setups like Mathematica and Jupyter.