I’ve just gotten a copy of Remember: The Science of Memory and the Art of Forgetting by Lisa Genova, which came out earlier this week.

[Book cover: the simple white cover of Remember by Lisa Genova, featuring a piece of red string tied into a knotted bow]

I’ve thumbed through it quickly and done some targeted searches of the text. From all appearances, she’s approaching the topic of memory from a neuroscientist’s perspective while also touching on broader psychology and culture.

There are a few references to the method of loci and a tangential reference to the phonetic major system in chapter 5. She touches on these briefly with a mention of Joshua Foer’s Moonwalking with Einstein and his PAO system (without using the term Person-Action-Object), but dismisses them all too quickly:

But you would have to do a lot of memorizing before you can actually use these techniques (and others like them) to remember the stuff you’re interested in remembering. If the thought of doing this kind of mental labor sounds exhausting, I’m right there with you. I don’t have the dedication or time. Unless you’re motivated to become an elite memory athlete or your life’s dream is to memorize 111,700 digits of pi, I suspect you don’t, either. Most of us will never want or need to memorize that kind or that amount of information. But many of us would like to be better at memorizing the ten things on our to-do list, our Wi-Fi password, or the six things we need at the grocery store.

Sadly, she blows right by the much easier-to-use phonetic major system without giving it any real discussion.
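
For those unfamiliar with it, the major system maps each digit to a family of consonant sounds and leaves the vowels free, so that numbers can be turned into pronounceable, memorable words. A minimal sketch in Python, using the common digit-to-consonant convention (the function and mapping here are my own illustration):

```python
# The standard major-system digit-to-consonant-sound mapping.
MAJOR = {
    "0": "s/z", "1": "t/d", "2": "n", "3": "m", "4": "r",
    "5": "l", "6": "j/sh/ch", "7": "k/g", "8": "f/v", "9": "p/b",
}

def consonant_skeleton(number: int) -> list[str]:
    """Map each digit of a number to its major-system consonant sounds."""
    return [MAJOR[d] for d in str(number)]

# 314 -> m, t/d, r; fill in the free vowels to get a vivid word like "meteor".
print(consonant_skeleton(314))  # ['m', 't/d', 'r']
```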

I’ll try to delve into the rest of the text shortly, but I was really hoping for more on the mnemonics front. Mnemonists won’t get much out of it in terms of techniques, but they might find it useful as an overview of the neuroscience and psychology of memory from Hermann Ebbinghaus onwards.

Dr. Lynne Kelly’s research on history, indigenous people, and memory, and a dovetail with Big History

David Christian, Fred Spier, Bill Gates, the Big History Institute, and other Big History researchers and thinkers: if you’re not already aware of her, allow me to introduce you to researcher Dr. Lynne Kelly. Her work dramatically expands our understanding of learning and memory in pre-literate societies, and particularly of collective learning. Further, it makes for a strong and fascinating story that could be integrated into Big History: it provides links between modern and pre-modern humans, ties deeply into ideas of origin stories, mythology, and early religion, and offers actual methods for improving students’ memories, particularly their memory for history.

I think her work could have a profound impact on the arc of Big History, particularly with respect to Threshold 6, well into Threshold 7, and continuing into the Renaissance and the Industrial Revolution. In true Big History fashion, her thesis also touches heavily on a broad array of topics including anthropology, archaeology, psychology, neuroscience, history, and education.

A broad, reasonable introduction to her work can be had in Caltech physicist Sean Carroll’s recent podcast interview.

Another short introduction is her TEDx Melbourne talk.

A solid popular science encapsulation of her work can be found in her book The Memory Code: The Secrets of Stonehenge, Easter Island and Other Ancient Monuments (Pegasus Books, 2017).

A more thorough academic treatment of her work can naturally be found in her monograph Knowledge and Power in Prehistoric Societies: Orality, Memory and the Transmission of Culture (Cambridge University Press, 2015).

With some work, I think her research could become a foundational basis for a stronger bridge from Threshold 6 into Threshold 7, with dramatic impact on how we view origin stories, mythology, and religion. It also has some spectacular implications for improving pedagogy and memory within our educational systems, and for how we view and use collective memory and even innovation in the modern world.

Read Moving Your Eyes Improves Memory, Study Suggests (livescience.com)
If you’re looking for a quick memory fix, move your eyes from side-to-side for 30 seconds, researchers say. Horizontal eye movements are thought to cause the two hemispheres of the brain to interact more with one another, and communication between brain hemispheres is important for retrieving certain types of memories.

Bookmarked How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories by Alex Rosenberg (The MIT Press)

Why we learn the wrong things from narrative history, and how our love for stories is hard-wired.

To understand something, you need to know its history. Right? Wrong, says Alex Rosenberg in How History Gets Things Wrong. Feeling especially well-informed after reading a book of popular history on the best-seller list? Don't. Narrative history is always, always wrong. It's not just incomplete or inaccurate but deeply wrong, as wrong as Ptolemaic astronomy. We no longer believe that the earth is the center of the universe. Why do we still believe in historical narrative? Our attachment to history as a vehicle for understanding has a long Darwinian pedigree and a genetic basis. Our love of stories is hard-wired. Neuroscience reveals that human evolution shaped a tool useful for survival into a defective theory of human nature.

Stories historians tell, Rosenberg continues, are not only wrong but harmful. Israel and Palestine, for example, have dueling narratives of dispossession that prevent one side from compromising with the other. Henry Kissinger applied lessons drawn from the Congress of Vienna to American foreign policy with disastrous results. Human evolution improved primate mind reading―the ability to anticipate the behavior of others, whether predators, prey, or cooperators―to get us to the top of the African food chain. Now, however, this hard-wired capacity makes us think we can understand history―what the Kaiser was thinking in 1914, why Hitler declared war on the United States―by uncovering the narratives of what happened and why. In fact, Rosenberg argues, we will only understand history if we don't make it into a story.

hat tip Jeff Jarvis.

Read Blue Brain solves a century-old neuroscience problem (ScienceDaily)
New research explains how the shapes of neurons can be classified using mathematical methods from the field of algebraic topology. Neuroscientists can now start building a formal catalogue for all the types of cells in the brain. Onto this catalogue of cells, they can systematically map the function and role in disease of each type of neuron in the brain.

👓 Neuroscience Readies for a Showdown Over Consciousness Ideas | Quanta Magazine

Read Neuroscience Readies for a Showdown Over Consciousness Ideas by Philip Ball (Quanta Magazine)
To make headway on the mystery of consciousness, some researchers are trying a rigorous new way to test competing theories.
Many of these ideas about consciousness seem ridiculous to me. I suppose people need to keep thinking about them, iterating, and even doing some philosophy before better ideas and better science can emerge, but it’s still very early days on the topic. I am glad researchers are actively attempting to do some actual science and to test some of these theories in search of a better answer.

If nothing else, this article does a reasonable job of giving an overview of some of the most recent schools of thought. And of course, it’s Philip Ball, so who could resist reading it…

📑 Walter Pitts by Neil Smalheiser | Perspectives in Biology and Medicine

Bookmarked Walter Pitts by Neil Smalheiser (Perspectives in Biology and Medicine, Volume 43, Issue 2, pp. 217–226)
Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.  
This looks like an interesting bio to read.

📑 A logical calculus of the ideas immanent in nervous activity by Warren S. McCulloch, Walter Pitts

Bookmarked A logical calculus of the ideas immanent in nervous activity by Warren S. McCulloch, Walter Pitts (The Bulletin of Mathematical Biophysics, December 1943, Volume 5, Issue 4, pp. 115–133)
Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.
I found a reference to this journal article in a review of Henry Quastler’s book Information Theory in Biology, which said:

A more serious thing, in the reviewer’s opinion, is the complete absence of contributions dealing with information theory and the central nervous system, which may be the field par excellence for the use of such a theory. Although no explicit reference to information theory is made in the well-known paper of W. McCulloch and W. Pitts (1943), the connection is quite obvious. This is made explicit in the systematic elaboration of the McCulloch-Pitts’ approach by J. von Neumann (1952). In his interesting book J. T. Culbertson (1950) discussed possible neural mechanisms for recognition of visual patterns, and particularly investigated the problems of how greatly a pattern may be deformed without ceasing to be recognizable. The connection between this problem and the problem of distortion in the theory of information is obvious. The work of Anatol Rapoport and his associates on random nets, and especially on their applications to rumor spread (see the series of papers which appeared in this Journal during the past four years), is also closely connected with problems of information theory.

Electronic copy available at: http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf
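
The “all-or-none” character that the abstract leans on maps directly onto simple threshold units. As a minimal sketch of the idea (my own illustration in Python, not the paper’s notation), a unit fires exactly when its weighted input sum reaches a threshold, and such units suffice to realize the basic propositional connectives:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (1) iff the weighted sum of its
    binary inputs reaches the threshold -- the "all-or-none" rule."""
    total = sum(w * i for w, i in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Propositional connectives realized as single threshold units:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a: mp_neuron([a], [-1], threshold=0)

# Truth-table check; nets of such units can compute any propositional formula.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NOT(a))
```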

👓 Who says neuroscientists don’t need more brains? Annotation with SciBot | Hypothesis

Read Who says neuroscientists don’t need more brains? Annotation with SciBot by Maryann Martone (web.hypothes.is/blog/)
You might think that neuroscientists already have enough brains, but apparently not. Over 100 neuroscientists attending the recent annual meeting of the Society for Neuroscience (SFN), took part in an annotation challenge: modifying scientific papers to add simple references that automatically generate and attach Hypothesis annotations, filled with key related information. To sweeten the pot, our friends at Gigascience gave researchers who annotated their own papers their very own brain hats.

🔖 CNS*2018 Workshop on Methods of Information Theory in Computational Neuroscience

Read Information Theory in Computational Neuroscience Workshop (CNS*2018) by Joseph Lizier (lizier.me)
Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience. A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited. The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work. The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

🔖 Special Issue : Information Dynamics in Brain and Physiological Networks | Entropy

Bookmarked Special Issue "Information Dynamics in Brain and Physiological Networks" (mdpi.com)

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: 30 December 2018

It is, nowadays, widely acknowledged that the brain and several other organ systems, including the cardiovascular, respiratory, and muscular systems, among others, exhibit complex dynamic behaviors that result from the combined effects of multiple regulatory mechanisms, coupling effects and feedback interactions, acting in both space and time.

The field of information theory is becoming more and more relevant for the theoretical description and quantitative assessment of the dynamics of the brain and physiological networks, defining concepts, such as those of information generation, storage, transfer, and modification. These concepts are quantified by several information measures (e.g., approximate entropy, conditional entropy, multiscale entropy, transfer entropy, redundancy and synergy, and many others), which are being increasingly used to investigate how physiological dynamics arise from the activity and connectivity of different structural units, and evolve across a variety of physiological states and pathological conditions.

This Special Issue focuses on blending theoretical developments in the new emerging field of information dynamics with innovative applications targeted to the analysis of complex brain and physiological networks in health and disease. To favor this multidisciplinary view, contributions are welcome from different fields, ranging from mathematics and physics to biomedical engineering, neuroscience, and physiology.

Prof. Dr. Luca Faes
Prof. Dr. Alberto Porta
Prof. Dr. Sebastiano Stramaglia
Guest Editors
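
Several of the measures named in the call have compact plug-in estimators. As a rough illustration (my own sketch, not code from the special issue), here is sample entropy, the refinement of approximate entropy that underlies the multiscale entropy mentioned above; multiscale entropy applies the same calculation to successively coarse-grained copies of the signal:

```python
import numpy as np

def sample_entropy(signal, m=2, r=None):
    """Sample entropy: the negative log of the conditional probability
    that two subsequences matching for m points (within tolerance r,
    Chebyshev distance) also match for m + 1 points."""
    x = np.asarray(signal, dtype=float)
    if r is None:
        r = 0.2 * x.std()  # a common default tolerance
    N = len(x)

    def match_count(length):
        # Overlapping templates, truncated so both lengths use the
        # same N - m starting positions.
        t = np.lib.stride_tricks.sliding_window_view(x, length)[: N - m]
        count = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)  # Chebyshev distance
            count += np.sum(d <= r)
        return count

    B = match_count(m)      # pairs similar at length m
    A = match_count(m + 1)  # pairs still similar at length m + 1
    return -np.log(A / B)

# A regular signal should score lower than white noise.
rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))  # low
print(sample_entropy(rng.standard_normal(1000)))                 # high
```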

🔖 Upcoming Special Issue “Information Theory in Neuroscience” | Entropy (MDPI)

Bookmarked Special Issue "Information Theory in Neuroscience" (Entropy | MDPI)
As the ultimate information processing device, the brain naturally lends itself to be studied with information theory. Application of information theory to neuroscience has spurred the development of principled theories of brain function, has led to advances in the study of consciousness, and to the development of analytical techniques to crack the neural code, that is to unveil the language used by neurons to encode and process information. In particular, advances in experimental techniques enabling precise recording and manipulation of neural activity on a large scale now enable for the first time the precise formulation and the quantitative test of hypotheses about how the brain encodes and transmits across areas the information used for specific functions. This Special Issue emphasizes contributions on novel approaches in neuroscience using information theory, and on the development of new information theoretic results inspired by problems in neuroscience. Research work at the interface of neuroscience, Information Theory and other disciplines is also welcome.

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: 1 December 2017

👓 Steven Pinker Explains the Neuroscience of Swearing | Open Culture

Read Steven Pinker Explains the Neuroscience of Swearing by Matthias Rascher (Open Culture)
Pinker talking about his then new book, The Stuff of Thought: Language as a Window into Human Nature, and doing what he does best: combining psychology and neuroscience with linguistics. The result is as entertaining as it is insightful.

🔖 An Introduction to Transfer Entropy: Information Flow in Complex Systems

Bookmarked An Introduction to Transfer Entropy: Information Flow in Complex Systems (Springer, 1st edition, 2016)
This book considers a relatively new metric in complex systems, transfer entropy, derived from a series of measurements, usually a time series. After a qualitative introduction and a chapter that explains the key ideas from statistics required to understand the text, the authors then present information theory and transfer entropy in depth. A key feature of the approach is the authors' work to show the relationship between information flow and complexity. The later chapters demonstrate information transfer in canonical systems, and applications, for example in neuroscience and in finance. The book will be of value to advanced undergraduate and graduate students and researchers in the areas of computer science, neuroscience, physics, and engineering. ISBN: 978-3-319-43221-2 (Print), 978-3-319-43222-9 (Online)
Want to read; h/t to Joseph Lizier.
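
As a rough sketch of the book’s central quantity (my own toy estimator in Python, not code from the book): transfer entropy from X to Y measures how much X’s past reduces uncertainty about Y’s next value beyond what Y’s own past already provides. For discrete series with history length 1, a plug-in estimate looks like:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, base=2):
    """Plug-in transfer entropy T(X -> Y) for discrete series,
    history length 1: E[ log p(y1 | y0, x0) / p(y1 | y0) ]."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))        # (y_now, x_now)
    pairs_yy = Counter(zip(y[1:], y[:-1]))         # (y_next, y_now)
    singles = Counter(y[:-1])                      # y_now
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]            # p(y_next | y_now, x_now)
        p_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y_next | y_now)
        te += (c / n) * np.log(p_full / p_self)
    return te / np.log(base)

# Toy check: Y copies X with a one-step lag plus occasional bit flips,
# so information flows X -> Y but not Y -> X.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 20000)
y = np.roll(x, 1) ^ (rng.random(20000) < 0.1).astype(int)
print(transfer_entropy(x, y))  # roughly 1 - H(0.1), about 0.53 bits
print(transfer_entropy(y, x))  # near 0
```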

🔖 Cognition and biology: perspectives from information theory

Bookmarked Cognition and biology: perspectives from information theory (ncbi.nlm.nih.gov)
The intimate relation between biology and cognition can be formally examined through statistical models constrained by the asymptotic limit theorems of communication theory, augmented by methods from statistical mechanics and nonequilibrium thermodynamics. Cognition, often involving submodules that act as information sources, is ubiquitous across the living state. Less metabolic free energy is consumed by permitting crosstalk between biological information sources than by isolating them, leading to evolutionary exaptations that assemble shifting, tunable cognitive arrays at multiple scales, and levels of organization to meet dynamic patterns of threat and opportunity. Cognition is thus necessary for life, but it is not sufficient: An organism represents a highly patterned outcome of path-dependent, blind variation, selection, interaction, and chance extinction in the context of an adequate flow of free energy and an environment fit for development. Complex, interacting cognitive processes within an organism both record and instantiate those evolutionary and developmental trajectories.