👓 Why Facts Don’t Change Our Minds | The New Yorker

Why Facts Don’t Change Our Minds by Elizabeth Kolbert (The New Yorker)
New discoveries about the human mind show the limitations of reason.
The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. Credit: Illustration by Gérard DuBois

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.


This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. 

Source: Why Facts Don’t Change Our Minds | The New Yorker

Syndicated copies to:

In Discarded Women’s March Signs, Professors Saw a Chance to Save History | The Chronicle of Higher Education

In Discarded Women’s March Signs, Professors Saw a Chance to Save History by Fernanda Zamudio-Suárez (The Chronicle of Higher Education)
Posters from the rally in Boston will be cataloged and archived.
Dwayne Desaulniers, AP Images

Signs line the fence surrounding Boston Common after the Boston Women’s March for America on Saturday. Some of those signs could end up in an archive at Northeastern U.

The signs were pink, blue, black, white. Some were hoisted with wooden sticks, and others were held in protesters’ hands. A few sparkled with glitter, and some had original designs, created on computers with the help of a few internet memes.

Still, at the Boston Women’s March for America on Saturday, hundreds of the signs criticizing President Trump’s campaign promises and administrative agenda ended up wrapped around the fence near Boston Common, laid down like a carpet covering the sidewalk.

Reply to Antonio Sánchez-Padial about webmentions for academic research

a tweet by Antonio Sánchez-Padial (Twitter)

Many academics are using academic-related social platforms (silos) like Mendeley, Academia.edu, ResearchGate, and many others to collaborate, share data, and publish their work. (And should they really be trusting that data to those outside corporations?)

A few particular examples: I follow physicist John Carlos Baez and mathematician Terry Tao, who both have one or more academic blogs on various topics from which they POSSE work to several social silos, including Google+ and Twitter. While they get some high-quality responses to posts natively, some of their conversations are forked/fragmented across those other silos. It would be far more useful if they were using webmentions (and Brid.gy) so that all of that conversation was aggregated back to their original posts. If they supported webmentions directly, I suspect that some of their collaborators would post their responses on their own sites and send them after publication as comments. (Posting responses on one’s own site also helps to protect primacy and the integrity of the originals, since a receiving site could otherwise moderate them out of existence, delete them outright, or even modify them!)

It’s pretty common for researchers to self-publish their work (sometimes known as academic samizdat) on their own sites and then cross-publish to a pre-print server (like arXiv.org) prior to publishing in a (preferably) major journal. There’s really no reason they shouldn’t just use their own personal websites, or online research journals like yours, to publish their work and then use that to collect direct comments, responses, and replies to it. Except possibly where research requires hosting uber-massive data sets, which may be bandwidth-limiting (or highly expensive) at the moment, there’s no reason why researchers shouldn’t self-host (and thereby own) all of their work.

Instead of publishing to major journals, which are all generally moving to an online subscription/readership model anyway, they might publish to topic-specific hubs (akin to pre-print servers or major publishers’ websites). This could be done in much the same way many IndieWeb users publish articles/links to IndieWeb News: they publish the piece on their own site and then syndicate it to the hub by webmention using the hub’s endpoint. The hub becomes a central repository of links to the originals, as well as making it easier for hundreds or even thousands of researchers in a given area to subscribe to updates via email, RSS, or other means. Additional functionality could be built into these hubs to support popularity measures that help filter some of the content on a weekly or monthly basis, which is essentially what many publishers are doing now.

In the end, citation metrics could be measured directly on the author’s original page by counting the incoming webmentions it has received, since others referencing the work would be linking to it and therefore sending webmentions. (PLOS ONE does something kind of like this by showing related tweets which mention particular papers now: here’s an example.)
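The mechanics of syndicating by webmention are simple: per the W3C Webmention spec, the sender POSTs two form-encoded parameters, `source` (the page doing the citing) and `target` (the page being referenced), to the receiver’s advertised endpoint. A minimal sketch in Python follows; all URLs here are hypothetical, purely for illustration:

```python
from urllib.parse import urlencode
from urllib.request import Request


def build_webmention_request(endpoint: str, source: str, target: str) -> Request:
    """Build the form-encoded POST a publisher sends to a Webmention
    endpoint: `source` is the page citing (or syndicating to) `target`,
    per the W3C Webmention spec."""
    body = urlencode({"source": source, "target": target}).encode("utf-8")
    return Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )


# Hypothetical URLs for illustration only; a real sender would first
# discover the endpoint from the target page's Link headers/tags.
req = build_webmention_request(
    "https://news.example-hub.org/webmention",        # hub's advertised endpoint
    "https://researcher.example.net/2017/new-paper",  # post on the researcher's own site
    "https://news.example-hub.org/",                  # hub page being notified
)
```

A hub (or an individual author’s site) that receives such requests can then verify that `source` really links to `target` and tally the verified mentions, which is exactly the incoming-webmention citation count described above.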

Naturally there is some fragility in all of this, and protective archival measures should be taken to preserve sites beyond their authors’ lives, but much of this could be done by institutional repositories like university libraries, which do much of this type of work already.

I’ve been meaning to write up a much longer post about how to use some of these types of technologies to completely revamp academic publishing; perhaps I should finish doing that soon. Hopefully the above will give you a little bit of an idea of what could be done.


📺 Chris Aldrich watched “My Research Process!” on YouTube

My Research Process! by Ellie Mackin (YouTube)

From idea to finished manuscript – this is all the ins and outs of how I do my research – it goes quite well with this blog post, which I neglected to mention in the video… http://www.elliemackin.net/blog/tech-tools-and-research

My bookshelf! https://ellie.libib.com
Using the Gantt chart in my research planning: https://www.youtube.com/watch?v=pKD5hDGfVb8
Research planning in a Bullet Journal: https://www.youtube.com/watch?v=DHL9t9e-hjQ
Academic Bullet Journal: https://www.youtube.com/watch?v=IZ3Aacpelic
Academic Otters: https://lizgloyn.wordpress.com/2011/07/21/the-proper-care-and-feeding-of-academic-otters/
CamScanner: https://www.camscanner.com

 

💬 Reply to video
In addition to CamScanner, and because you use OneNote, you might find Office Lens to be a useful phone app for photographing individual pages and transferring them directly into your OneNote application. It usually does a great job of taking poorly positioned photographs or photos from odd angles and cleaning them up to look as if you’d spent far more time positioning the pages and taking the photos.

For those capturing photographs of primary sources, I’ve recently found Google’s PhotoScan mobile app to be incredibly good, particularly at re-positioning the corners of photos and reducing glare.


📺 Chris Aldrich watched “Using the Gantt Chart in my research planning” on YouTube

Using the Gantt Chart in my research planning
How I use my gantt chart in my research planning.
You can download a printable of my gantt chart, the research pipeline, and the monthly spread here: http://www.elliemackin.net/research-planning.html

I’ve used Gantt Charts for other things, but never considered them for academic research.


📺 Chris Aldrich watched “Research Planning in a Bullet Journal” on YouTube

Research Planning in a Bullet Journal by Ellie Mackin (youtube.com)

I’m Ellie, an early career ancient historian working on Greek religion. I make videos about research, being an early career academic, and my work.
Printables!
Calendex


Reply to: little by little, brick by brick

Thanks for the thoughts here, Liz. Somehow I hadn’t heard of ReadCube, but it looks very interesting and incredibly similar to Mendeley‘s setup and functionality. I’ve been using Mendeley for quite a while now and am reasonably happy with it, particularly being able to use their bookmarklet to save things for later and then do reading and annotations within the material. If researchers in your area are using Mendeley’s social features, this is also a potential added benefit, though platforms like Academia and ResearchGate should be explored as well.

Given their disparate functionalities, you may be better off choosing between Evernote and OneNote on one hand, and between Mendeley and ReadCube on the other. Personally I don’t think the four are broadly interchangeable, though they may be easier to work with in pairs for their separate functionalities. While I loved Evernote, I have generally gone “all in” on OneNote because it’s much better integrated with the other MS Office tools like email, calendar, and my customized to-do lists there.

Another interesting option you may find for sorting/organizing thousands of documents is Calibre e-book management. It works like iTunes, but for e-books, PDFs, etc. If you use it primarily for PDFs, you can save your notes/highlights/marginalia in them directly. Calibre also allows for adding your own metadata fields and is very extensible. The one thing I haven’t gotten it to do well (yet) is export for citation management, though it does keep and maintain all the metadata for doing so. One of the ways that Mendeley and ReadCube seem to monetize is by selling a subscription for storage, so if this is an issue for you, you might consider Calibre as a free alternative.

I’m always working on a better research workflow, but I generally prefer platforms on which I own all the data, or from which it’s easily exportable and thereby own-able. I use my own website on WordPress as a commonplace book of sorts to capture all of what I’m reading, writing, and thinking about, though much of it is published privately or saved as drafts/pending on the back end of the platform. This seems to work relatively well and makes everything pretty easily searchable for later reference.

Here are some additional posts I’ve written relatively recently which may help your thinking about how to organize things on/within your website if you use it as a research tool:

I’ve also recently done some significant research and come across what I think is the most interesting and forward-thinking WordPress plugin for academic citations on my blog: Academic Blogger’s Toolkit. It’s easily the best thing currently on the market for its skillset.

Another research tool I can’t seem to live without, though it may be more specific to the highly technical math, physics, and engineering I do, as well as the conferences/workshops I attend, is my Livescribe.com Pulse pen, which I use not only to take copious notes but also to simultaneously record the audio portion of those lectures. The pen and technology link the writing to the audio directly, so that I can more easily re-listen to and review portions which may not have been so clear the first time around. The system also has an optional, inexpensive optical-character-recognition plugin which can convert handwritten notes into typed text, which can be very handy. At just about $200, the system has been one of the best investments I’ve made in the last decade.

If you haven’t come across it yet, I also highly recommend regularly reading the ProfHacker blog at the Chronicle of Higher Education, which often has useful tips and tools for academic research. They also do a very good job of covering some of the thought in the digital humanities, which you might find appealing.


Notes, Highlights, and Marginalia: From E-books to Online

Notes on an outlined workflow for sharing notes, highlights, and annotations from ebooks online.

For several years now, I’ve been meaning to do something more interesting with the notes, highlights, and marginalia from the various books I read. In particular, I’ve specifically been meaning to do it for the non-fiction I read for research, and even more so for e-books, which tend to have slightly more extractable notes given their electronic nature. This fits into the way in which I use this site as a commonplace book, as well as the IndieWeb philosophy of owning all of one’s own data.[1]

Over the past month or so, I’ve been experimenting with some fiction to see what works and what doesn’t in terms of a workflow for status updates around reading books, writing book reviews, and then extracting and depositing notes, highlights, and marginalia online. I’ve now got a relatively quick and painless workflow for exporting the book related data from my Amazon Kindle and importing it into the site with some modest markup and CSS for display. I’m sure the workflow will continue to evolve (and further automate) somewhat over the coming months, but I’m reasonably happy with where things stand.

The fact that the Amazon Kindle allows for relatively easy highlighting and annotation in e-books is excellent, but having the ability to sync to a laptop and do a one-click export of all of that data is incredibly helpful. Adding some simple CSS to the pre-formatted output gives me a reasonable base upon which to build for future writing/thinking about the material. In experimenting, I’m also coming to realize that simply owning the data isn’t enough; now I’m driven to help make that data more directly useful to me and potentially to others.
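As a purely illustrative sketch of what such an extraction step can look like (this is an assumption about one common export format, not necessarily the workflow described here): the Kindle device’s standard `My Clippings.txt` file separates entries with a line of ten equals signs, each entry being a title line, a metadata line, a blank line, and the highlighted text, which makes it straightforward to parse into structured notes:

```python
def parse_clippings(text):
    """Parse Kindle's My Clippings.txt format: each entry is a title
    line, a metadata line (highlight/note, page, location, date),
    a blank line, the highlighted text, then a separator line of
    ten '=' characters."""
    entries = []
    for block in text.split("=========="):
        lines = [line.strip() for line in block.strip().splitlines()]
        if len(lines) < 3:
            continue  # skip empty trailing blocks
        entries.append({
            "title": lines[0].lstrip("\ufeff"),  # a BOM sometimes precedes titles
            "meta": lines[1],
            "text": "\n".join(lines[2:]).strip(),
        })
    return entries


# A made-up sample entry in the standard format:
sample = (
    "Maps of Time (David Christian)\n"
    "- Your Highlight on page 12 | Added on Monday, May 2, 2011\n"
    "\n"
    "Big history surveys the past at all possible scales.\n"
    "==========\n"
)
notes = parse_clippings(sample)
```

From structured entries like these, generating the marked-up HTML for a post is a simple templating step.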

As part of my experimenting, I’ve just uploaded some notes, highlights, and annotations for David Christian’s excellent text Maps of Time: An Introduction to Big History[2] which I read back in 2011/12. While I’ve read several of the references which I marked up in that text, I’ll have to continue evolving a workflow for doing all the related follow up (and further thinking and writing) on the reading I’ve done in the past.

I’m still reminded of Rick Kurtzman’s sage advice to me when I was a young pisher at CAA in 1999: “If you read a script and don’t tell anyone about it, you shouldn’t have wasted the time having read it in the first place.” His point was that if you don’t try to pass along the knowledge you found by reading, you may as well not have read it at all. Even if the thing was terrible, at least say that as a minimum. In a digitally connected era, we no longer need to rely on nearly illegible scrawl in the margins to pollinate the world at a snail’s pace.[4] Take those notes, marginalia, highlights, and metadata and release them into the world. The fact that this dovetails perfectly with César Hidalgo’s thesis in Why Information Grows: The Evolution of Order, from Atoms to Economies[3] furthers my belief in having a better process for what I’m attempting here.

Hopefully in the coming months, I’ll be able to add similar data to several other books I’ve read and reviewed here on the site.

If anyone has any thoughts, tips, tricks for creating/automating this type of workflow/presentation, I’d love to hear them in the comments!

Footnotes

[1]
“Own your data,” IndieWeb. [Online]. Available: http://indieweb.org/own_your_data. [Accessed: 24-Oct-2016]
[2]
D. Christian and W. H. McNeill, Maps of Time: An Introduction to Big History, 2nd ed. University of California Press, 2011.
[3]
C. Hidalgo, Why Information Grows: The Evolution of Order, from Atoms to Economies, 1st ed. Basic Books, 2015.
[4]
O. Gingerich, The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus. Bloomsbury Publishing USA, 2004.

Transplantation of spinal cord–derived neural stem cells for ALS

Transplantation of spinal cord–derived neural stem cells for ALS (neurology.org)
Analysis of phase 1 and 2 trials testing the safety of spinal cord transplantation of human stem cells in patients with amyotrophic lateral sclerosis (ALS) with escalating doses and expansion of the trial to multiple clinical centers.

I built the microinjectors used in these experiments for injecting stem cells into the first human patients.

CNN also has a general interest article talking about some of the results.

Links to some earlier articles:

Transplantation of spinal cord–derived neural stem cells for ALS

Analysis of phase 1 and 2 trials

Authors: Jonathan D. Glass, MD; Vicki S. Hertzberg, PhD; Nicholas M. Boulis, MD; Jonathan Riley, MD; Thais Federici, PhD; Meraida Polak, RN; Jane Bordeau, RN; Christina Fournier, MD; Karl Johe, PhD; Tom Hazel, PhD; Merit Cudkowicz, MD; Nazem Atassi, MD; Lawrence F. Borges, MD; Seward B. Rutkove, MD; Jayna Duell, RN; Parag G. Patil, MD; Stephen A. Goutman, MD; Eva L. Feldman, MD, PhD

ABSTRACT

Objective: To test the safety of spinal cord transplantation of human stem cells in patients with amyotrophic lateral sclerosis (ALS) with escalating doses and expansion of the trial to multiple clinical centers.

Methods: This open-label trial included 15 participants at 3 academic centers divided into 5 treatment groups receiving increasing doses of stem cells by increasing numbers of cells/injection and increasing numbers of injections. All participants received bilateral injections into the cervical spinal cord (C3-C5). The final group received injections into both the lumbar (L2-L4) and cervical cord through 2 separate surgical procedures. Participants were assessed for adverse events and progression of disease, as measured by the ALS Functional Rating Scale–Revised, forced vital capacity, and quantitative measures of strength. Statistical analysis focused on the slopes of decline of these phase 2 trial participants alone or in combination with the phase 1 participants (previously reported), comparing these groups to 3 separate historical control groups.

Results: Adverse events were mostly related to transient pain associated with surgery and to side effects of immunosuppressant medications. There was one incident of acute postoperative deterioration in neurologic function and another incident of a central pain syndrome. We could not discern differences in surgical outcomes between surgeons. Comparisons of the slopes of decline with the 3 separate historical control groups showed no differences in mean rates of progression.

Conclusions: Intraspinal transplantation of human spinal cord–derived neural stem cells can be safely accomplished at high doses, including successive lumbar and cervical procedures. The procedure can be expanded safely to multiple surgical centers.

Classification of evidence: This study provides Class IV evidence that for patients with ALS, spinal cord transplantation of human stem cells can be safely accomplished and does not accelerate the progression of the disease. This study lacks the precision to exclude important benefit or safety issues.

Source: Transplantation of spinal cord–derived neural stem cells for ALS


Ten Simple Rules for Taking Advantage of Git and GitHub

Ten Simple Rules for Taking Advantage of Git and GitHub by Yasset Perez-Riverol, Laurent Gatto, Rui Wang, Timo Sachsenberg, Julian Uszkoreit, Felipe da Veiga Leprevost, Christian Fufezan, Tobias Ternent, Stephen J. Eglen, Daniel S. Katz, Tom J. Pollard, Alexander Konovalov, Robert M. Flight, Kai Blin, Juan Antonio Vizcaíno (journals.plos.org)
Bioinformatics is a broad discipline in which one common denominator is the need to produce and/or use software that can be applied to biological data in different contexts. To enable and ensure the replicability and traceability of scientific claims, it is essential that the scientific publication, the corresponding datasets, and the data analysis are made publicly available [1,2]. All software used for the analysis should be either carefully documented (e.g., for commercial software) or, better yet, openly shared and directly accessible to others [3,4]. The rise of openly available software and source code alongside concomitant collaborative development is facilitated by the existence of several code repository services such as SourceForge, Bitbucket, GitLab, and GitHub, among others. These resources are also essential for collaborative software projects because they enable the organization and sharing of programming tasks between different remote contributors. Here, we introduce the main features of GitHub, a popular web-based platform that offers a free and integrated environment for hosting the source code, documentation, and project-related web content for open-source projects. GitHub also offers paid plans for private repositories (see Box 1) for individuals and businesses as well as free plans including private repositories for research and educational use.

Hypothes.is and the IndieWeb

A new plugin helps to improve annotations on the internet

Last night I saw two great little articles about Hypothes.is, a web-based annotation engine, written by a proponent of the IndieWeb:

Hypothes.is as a public research notebook

Hypothes.is Aggregator ― a WordPress plugin

As a researcher, I fully appreciate the pro-commonplace book conceptualization of the first post, and the second takes things amazingly further with a plugin that allows one to easily display one’s hypothes.is annotations on one’s own WordPress-based site in a dead-simple fashion.

This functionality is a great first step, though honestly, in keeping with the IndieWeb principle of owning one’s own data, I think it would be easier/better if Hypothes.is both accepted and sent webmentions. This would potentially allow me to physically own the data on my own site while still participating in the larger annotation community, as well as give me notifications when someone either comments on or augments one of my annotations, or even annotates one of my own pages (bits of which I’ve written about before).
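For readers unfamiliar with the Webmention protocol wished for above: a sender discovers the target page’s advertised endpoint, then POSTs exactly two form-encoded parameters, `source` and `target`, to it. The sketch below is a minimal illustration under my own assumptions (the URLs and function names are hypothetical, discovery uses a naive regex on sample HTML, and a real client would also check HTTP `Link` headers and use a proper HTML parser).

```python
# Illustrative sketch of the two Webmention steps: endpoint discovery
# from a page's HTML, and building the form payload to POST to it.
import re

def discover_endpoint(html):
    # Look for <link ... rel="webmention" ... href="..."> in the page head.
    m = re.search(r'<link[^>]+rel="webmention"[^>]+href="([^"]+)"', html)
    return m.group(1) if m else None

def build_payload(source, target):
    # The protocol requires exactly these two form-encoded parameters.
    return {"source": source, "target": target}

html = '<link rel="webmention" href="https://example.com/webmention" />'
endpoint = discover_endpoint(html)
payload = build_payload("https://mysite.example/annotations/1",
                        "https://example.com/post")
print(endpoint, payload)
```

An actual notification would then be a single `POST` of that payload to the discovered endpoint, which is why webmention support would let annotations flow between an annotation service and one’s own site with so little machinery.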

Either way, kudos to Kris Shaffer for moving the ball forward!

Examples

My Hypothes.is Notebook

The plugin mentioned in the second article allows me to keep a running online “notebook” of all of my Hypothes.is annotations on my own site.

My IndieWeb annotations

I can also easily embed my recent annotations about the IndieWeb below:

[ hypothesis user = 'chrisaldrich' tags = 'indieweb']


Some Thoughts on Academic Publishing and “Who’s downloading pirated papers? Everyone” from Science | AAAS

Who's downloading pirated papers? Everyone by John Bohannon (Science | AAAS)
An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.

Sci-Hub has been in the news quite a bit over the past half a year, and the bookmarked article here gives some interesting statistics. I’ll preface some of the following editorial critique with the fact that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.

From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci-Hub. Neither did it link out to (or fully quote) Alicia Wise’s Twitter post(s), nor link to her rebuttal list of 20 ways to access their content freely or inexpensively. Of course both of these are editorial decisions, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.

Sadly, Elsevier’s list of 20 ways of free/inexpensive access doesn’t really provide any simple coverage for graduate students or researchers in poorer countries, who are the likeliest groups using Sci-Hub, unless they fraudulently claim to be part of a class they’re not; and is that morally any better than the original theft? The list is almost assuredly never used by patients, who seem to be covered under one of the options, because that option is painfully undiscoverable behind the typical $30/paper paywalls. Their patchwork hodgepodge of free access is not only difficult to discern, but one must keep in mind that this is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this throughout a year, much less a career).

Consider this experiment, which could be a good follow-up to the article: is it easier to find and download a paper by title/author/DOI via Sci-Hub (a minute) versus through any of the other publishers’ platforms with a university subscription (several minutes) or without a subscription (an hour or more, up to days)? Just consider the time it would take to dig up every one of the 30 references in an average journal article: maybe half an hour via Sci-Hub versus the days and/or weeks it would take to jump through the multiple hoops to first discover, read about, gain access to, and then download them from over 14 providers (and this presumes the others provide some type of “access” like Elsevier).

Those who lived through the Napster revolution in music will realize that the dead simplicity of that system is primarily what helped kill the music business, compared to the ecosystem that exists now with easy access through multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to get completely killed, it’s going to need to create the iTunes of academia. I suspect publishers have internal bean-counters watching the percentage of the total (now apparently 5%) and will probably only act once it passes a much larger threshold, though I imagine they’re really hoping the number stays stable, which would signal that they needn’t be concerned. They’re far more likely to continue to maintain their status-quo practices.

Some of this ease-of-access argument is truly borne out by the statistics on open-access papers downloaded via Sci-Hub: it’s simply easier to both find and download them that way compared to traditional methods; there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?

“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone

Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I feel isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci-Hub. God forbid some enterprising hacker were to create a LibX community version for Sci-Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX to make their content easy to access? If we consider the analogy of academic papers to the introduction of machine guns in World War I, why should modern researchers still be using single-load rifles against an enemy that has access to nuclear weaponry?

My last thought here comes on the heels of the two tweets from Alicia Wise mentioned, but not shown in the article:

She mentions that the New York Times charges more than Elsevier does for a full subscription. This is tremendously disingenuous, as Elsevier is but one of dozens of publishers to which one would have to subscribe to have access to the full panoply of material researchers are typically looking for. Further, neither Elsevier nor its competitors make their material as easy to find and access as the New York Times does. Nor do they discount access in an attempt to find the subscription price their users find financially acceptable. Case in point: while I often read the New York Times, I rarely go over their monthly limit of free articles, so I don’t need any type of paid subscription. Solely because they made me an interesting offer to subscribe for 8 weeks for 99 cents, I took them up on it and renewed that deal for another subsequent 8 weeks. Not finding it worth the full $35/month price point, I attempted to cancel. I had to cancel the subscription via phone, but why? The NYT customer rep made me no fewer than five different offers at ever-decreasing price points (including the 99 cents for 8 weeks I had been getting!) to try to keep my subscription. Neither Elsevier nor any of its competitors has ever tried (much less so hard) to earn my business. (I’ll further posit that it’s because it’s easier to fleece at the institutional level with bulk negotiation, a model not too dissimilar to the textbook business pressuring professors on textbook adoption rather than trying to sell directly to the end consumer, the student, which I’ve written about before.)

(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t go further than Bohannon’s other writings, like Who’s Afraid of Peer Review? Then there are the hordes of articles on poor research design, misuse of statistical analysis, and inability to repeat experiments. Not to give them any ideas, but lately it seems like Elsevier buying the Enquirer and charging $30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?

Interestingly, there’s a survey at the end of the article which indicates some additional sources of academic copyright infringement. I do have to wonder how the data for the survey will be used. There’s always the possibility that logged-in users indicating they’ve circumvented copyright are opening themselves up to litigation.

I also found the concept of using the massive data store as a means of applied corpus linguistics for science an entertaining proposition. This type of research could mean great things for science communication in general. I have heard of people attempting to do such meta-analysis to guide the purchase of potential intellectual property for patent trolling as well.

Finally, for those who haven’t done it (ever or recently), I’ll recommend that it’s certainly well worth their time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of their academic terms to train library users on research tools and methods. You’ll save yourself a huge amount of time.


How can we be sure old books were ever read? – University of Glasgow Library

This is a great little overview for people reading the books of others. There are also lots of great links to other resources.


Thoughts on “Some academics remain skeptical of Academia.edu” | University Affairs

This morning I ran across a tweet from colleague Andrew Eckford:

His response was probably innocuous enough, but I thought the article should be taken to task a bit more.

“35 million academics, independent scholars and graduate students as users, who collectively have uploaded some eight million texts”

35 million users is an okay number, but their engagement must be spectacularly bad if only 8 million texts are available. How many researchers do you know who’ve published only a quarter of an article anywhere, much less gotten tenure?

“the platform essentially bans access for academics who, for whatever reason, don’t have an Academia.edu account. It also shuts out non-academics.”

They must have changed this, as pretty much anyone with an email address (including non-academics) can create a free account and use the system. I’m fairly certain that the platform was always open to the public from the start, but the article doesn’t seem to question the statement at all. If we want to argue about shutting out non-academics or even academics in poorer countries, let’s instead take a look at “big publishing” and their $30+/paper paywalls and publishing models, shall we?

“I don’t trust academia.edu”

Given his following discussion, I can only imagine what he thinks of big publishers in academia and that debate.

“McGill’s Dr. Sterne calls it “the gamification of research,”

Most research is too expensive to really gamify in such a simple manner. Many researchers are publishing to either get or keep their jobs and don’t have much time, information, or knowledge to try to game their reach in these ways. If anything, the institutionalization of “publish or perish” has already accomplished far more “gamification”; Academia.edu is just helping to increase the reach of the publication. Given that research shows that most published research isn’t even read, much less cited, how bad can Academia.edu really be? [Cross reference: Reframing What Academic Freedom Means in the Digital Age]

If we look at Twitter and the blogging world as an analogy to Academia.edu and researchers: Twitter had a huge ramp-up starting in 2008 and helped bloggers obtain eyeballs/readers, but where is it now? Twitter, even with a reasonable business plan, is stagnant, with growing grumblings that it may be failing. I suspect that without significant changes, Academia.edu (which serves a much smaller niche audience than Twitter) will also eventually fall by the wayside.

The article rails against not knowing what the business model is or what’s happening with the data. I suspect that the platform itself doesn’t have a very solid business plan and they don’t know what to do with the data themselves except tout the numbers. I’d suspect they’re trying to build “critical mass” so that they can cash out by selling to one of the big publishers like Elsevier, who might actually be able to use such data. But this presupposes that they’re generating enough data; my guess is that they’re not. And on that subject, from a journalistic viewpoint, where’s the comparison to the rest of the competition including ResearchGate.net or Mendeley.com, which in fact was purchased by Elsevier? As it stands, this simply looks like a “hit piece” on Academia.edu, and sadly not a very well researched or reasoned one.

In sum, the article sounds to me like a bunch of Luddites running around yelling “fire,” particularly when I’d imagine that most of those referred to in the piece feed into the more corporate side of publishing in major journals rather than publishing on their own websites. I’d further suspect they’re probably not even practicing academic samizdat. It feels to me like the author and some of those quoted aren’t actively participating in the social media space enough to comment on it intelligently. If the paper wants to pick at the academy in this manner, why doesn’t it write an exposé on the fact that most academics still have websites that look like they’re from 1995 (if, in fact, they have anything beyond their university’s mandated business-card placeholder) when there is a wealth of free and simple tools they could use? Let’s at least build a cart before we start whipping the horse.

For academics who really want to spend some time and thought on a potential solution to all of this, I’ll suggest that they start out by owning their own domain and own their own data and work. The #IndieWeb movement certainly has an interesting philosophy that’s a great start in fixing the problem; it can be found at http://www.indiewebcamp.com.


Dr. Michael Miller Math Class Hints and Tips | UCLA Extension

An informal orientation for those taking math classes from Dr. Michael Miller through UCLA Extension.

Congratulations on your new math class, and welcome to the “family”!

Beginners Welcome!

Invariably the handful of new students every year eventually figures out the logistics of campus, but it’s easier and more fun to know some of the options available before you’re comfortably halfway through the class. To help get you over the initial hump, I’ll share a few of the common questions and tips to help get you oriented. Others are welcome to add comments and suggestions below. If you have any questions, feel free to ask anyone in the class; we’re all happy to help.

First things first, for those who’ve never visited UCLA before, here’s a map of campus to help you orient yourself. Using the Waze app on your smartphone can also be incredibly helpful in getting to campus more quickly through the tail end of rush hour traffic.

Whether you’re a professional mathematician, engineer, physicist, physician, or even a hobbyist interested in mathematics, you’ll be sure to get something interesting out of Dr. Miller’s math courses, not to mention the camaraderie of 20-30 other “regulars” with widely varying backgrounds (from actors to surgeons and evolutionary theorists to engineers) who’ve been taking almost everything Mike has offered over the years (and yes, he’s THAT good — we’re sure you’ll be addicted too). Whether you’ve been away from serious math for decades, use it every day, or have never gone past Calculus or Linear Algebra, this is bound to be the most entertaining thing you can do with your Tuesday nights in the autumn and winter. If you’re not sure what you’re getting into (or are scared a bit by the course description), I highly encourage you to come and join us for at least the first class before you pass up the opportunity. I’ll mention that the great majority of new students in Mike’s classes join the ever-growing group of regulars who subsequently take almost everything he teaches.

Don’t be intimidated if you feel like everyone in the class knows each other fairly well — most of us do. Dr. Miller and mathematics can be addictive so many of us have been taking classes from him for 5-20+ years, and over time we’ve come to know each other.

Tone of Class

If you’ve never been to one of Dr. Miller’s classes before, they’re fairly informal, and he’s very open to questions from those who don’t understand any of the concepts or can’t follow his reasoning. He’s a retired mathematician from RAND and a long-time math professor at UCLA. Students run the gamut from the very serious, who read multiple textbooks and do every homework problem, to hobbyists who enjoy listening to the lectures and don’t take the class for a grade of any sort (and nearly every stripe in between). He’ll often recommend a textbook that he intends to follow, but it’s never been a “requirement,” and more often than not, the bookstore doesn’t list or carry his textbook until the week before class. (Class insiders will usually find out about the book months before class and post it to the Google Group – see below.)

His class notes are more than sufficient for making it through the class and doing the assigned (optional) homework. He typically distributes homework as handouts, so the textbook is rarely, if ever, required to make it through the class. Many students will often be seen reading various other texts relating to the topic at hand as they desire. Usually he’ll spend 45-60 minutes at the opening of each class after the first going over homework problems or questions that anyone has.

For those taking the class for a grade or pass/fail, his usual policy is to assign a take home problem set around week 9 or 10 to be handed in at the penultimate class. [As a caveat, make sure you check his current policy on grading as things may change, but the preceding has been the usual policy for a decade or more.]

Parking Options

Lot 9 – Located at the northern terminus of Westwood Boulevard, one can purchase a parking pass for about $12 a day at the kiosk in the middle of the street just before Westwood Blvd. ends. The kiosk is also conveniently located right next to the parking structure. If there’s a basketball game or some other major event, Lot 8 is just across the street as well, though it’s a tad further away from the Math Sciences Building. Since most of the class uses this as their parking structure of choice, there is always a fairly large group walking back there after class for the more security-conscious.

Lot 2 – Located off of Hilgard Avenue, this is another common option for easy parking as well. While fairly close to class, not as many use it as it’s on the quieter/darker side of campus and can be a bit more of a security issue for the reticent.

Tip: For those opting for on-campus parking, one can usually purchase a quarter-long parking pass for a small discount at the beginning of the term.

Westwood Village and Neighborhood – For those looking for less expensive options, street parking is available in the surrounding community, but take care to check signs and parking meters, or you assuredly will get a ticket. Most meters in the surrounding neighborhoods end at either 6pm or 8pm, making parking virtually free (assuming you’re willing to circle the neighborhood to find one of the few open spots).

There are a huge variety of lots available in the Village for a range of prices, but the two most common, inexpensive, and closer options seem to be:

  • Broxton Avenue Public Parking at 1036 Broxton Avenue just across from the Fox Village and Bruin Theaters – $3 for entering after 6pm / $9 max for the day
  • Geffen Playhouse Parking at 10928 Le Conte Ave. between Broxton and Westwood – price varies based on the time of day and potential events (screenings/plays in Westwood Village) but is usually $5 in the afternoon and throughout the evening

Dining Options

More often than not, a group of between 4 and 15 students will get together every evening before class for a quick bite to eat and to catch up and chat. This has always been an informal group, and anyone from class is more than welcome to join. Typically we’ll all meet in the main dining hall of Ackerman Union (Terrace Foodcourt, Ackerman Level 1) between 6 and 6:30 (some with longer commutes will arrive as early as 3-4pm, but this can vary) and dine until about 6:55pm, at which time we walk over to class.

The food options on Ackerman Level 1 include Panda Express, Rubio’s Tacos, Sbarro, Wolfgang Puck, and Greenhouse along with some snack options including Wetzel’s Pretzels and a candy store. One level down on Ackerman A-level is a Taco Bell, Carl’s Jr., Jamba Juice, Kikka, Buzz, and Curbside, though one could get takeout and meet the rest of the “gang” upstairs.

There are also a number of other on-campus options as well though many are a reasonable hike from the class location. The second-closest to class is the Court of Sciences Student Center with a Subway, Yoshinoya, Bombshelter Bistro, and Fusion.

Naturally, for those walking up from Westwood Village, there are additional fast food options like In-N-Out, Chick-fil-A, Subway, and many others.

Killing Time

For those who’ve already eaten or aren’t hungry, you’ll often find one or more of us browsing the math and science sections of the campus bookstore on the ground level of Ackerman Union to kill time before class. Otherwise there are usually a handful of us who arrive a half an hour early and camp out in the classroom itself (though this can often be dauntingly quiet, as most use the chance to catch up on reading). If you arrive really early, there are a number of libraries and study places on campus. Boelter Hall has a nice math/science library on the 8th floor.

Mid-class Break Options

Usually about halfway through class we’ll take a 10-12 minute coffee break. For those with a caffeine habit or snacking urges, there are a few options:

Kerckhoff Hall Coffee Shop is just a building or two over and is open late as a snack stop and study location. They offer coffee and various beverages as well as snacks, bagels, pastries, and ice cream. Usually 5-10 people will wander over as a group to pick up something quick.

The Math Sciences Breezeway, just outside of class, has a variety of soda, coffee, and vending machines with a range of beverages and snacks. Just a short walk around the corner will reveal another bank of vending machines if your vice isn’t covered. The majority of class will congregate in the breezeway to chat informally during the break.

The Court of Sciences Student Center, a four-minute walk south, has the restaurant options noted above if you need something quick and more substantial, though few students use this option at the break.

Bathrooms – The closest bathrooms to class are typically on the 5th floor of the Math Sciences Building. The women’s room is just inside the breezeway doors and slightly to the left. The men’s rooms are a bit further: either upstairs on the 6th floor (above the women’s) or a hike down the hall to the left and into Boelter Hall. I’m sure the adventurous may find others, but take care not to get lost.

Informal Class Resources

Google Group

Over the years, as an informal resource, members of the class have created and joined a private Google Group (essentially an email list-serv) to share thoughts, ideas, events, and ask questions of each other. There are over 50 people in the group, most of whom are past Miller students, though there are a few other various mathematicians, physicists, engineers, and even professors. You can request to join the private group to see the resources available there. We only ask that you keep things professional and civil and remember that replying to all reaches a fairly large group of friends. Browsing through past messages will give you an idea of the types of posts you can expect. The interface allows you to set your receipt preferences to one email per message posted, daily digest, weekly digest, or no email (you’re responsible for checking the web yourself), so be sure you have the setting you require as some messages are more timely than others. There are usually only 1-2 posts per week, so don’t expect to be inundated.

Study Groups

Depending on students’ moods, time requirements, and interests, we’ve arranged informal study groups for class through the Google Group above. Additionally, since Dr. Miller only teaches during the Fall and Winter quarters, some of us also take the opportunity to set up informal courses during the Spring/Summer depending on interests. In the past, we’ve informally studied Lie Groups, Quantum Mechanics, Algebraic Geometry, and Category Theory in smaller groups on the side.

Dropbox

As a class resource, some of us share a document repository via Dropbox. If you’d like access, please make a post to the Google Group.

Class Notes

Many people within the class use Livescribe.com digital pens to capture not only the written notes but the audio discussion that occurred in class as well (the technology also links the two together to make it easier to jump around within a particular lecture). If it helps to have a copy of these notes, please let one of the users know you’d like them – we’re usually pretty happy to share. If you miss a class (sick, traveling, etc.) please let one of us know as the notes are so unique that it will be almost like you didn’t miss anything at all.

You can typically receive a link to a downloadable version of the notes in Livescribe's Pencast .pdf format. This is a special .pdf file that is somewhat larger than usual because it contains an embedded audio file, playable in Adobe Reader X (or above). (This means that to get the most out of the file, you need to download it and open it in Reader X to access the audio portion. Most PDF clients will display the written portion, but you'll miss out on the real fun and value of the full file.) With the notes open in Reader X, you can toggle the settings in the file to read and listen almost as if you were attending the class live.

Viewing and Playing a Pencast PDF

Pencast PDF is a format that combines notes and audio and can be played in Adobe Reader X or above.

You can open a Pencast PDF as you would any other PDF file in Adobe Reader X. The main difference is that a Pencast PDF can contain ink with associated audio, called "active ink". Click active ink to play its audio, just as you would play a Pencast from Livescribe Online or in Livescribe Desktop. When you first view a notebook page, active ink appears in green. When you click active ink, it turns gray and the audio starts playing. As playback continues, the gray ink turns green in synchronization with the audio. Non-active ink (ink without audio) is black and does not change appearance.

Audio Control Bar

Pencast PDFs have an audio control bar for playing, pausing, and stopping audio playback. The control bar also has jump controls, bookmarks (stars), and an audio timeline control.

Active Ink View Button

There is also an active ink view button. Click it to toggle the "unwritten" color of active ink between gray and invisible. In the default (gray) setting, the gray words turn green as the audio plays. In the invisible setting, green words appear to write themselves onto blank paper as the audio plays.

History

For those interested in past years’ topics, here’s the list I’ve been able to put together thus far:

Fall 2006: Complex Analysis
Winter 2007: Field Theory
Fall 2007: Algebraic Topology
Winter 2008: Integer Partitions
Fall 2008: Calculus on Manifolds
Winter 2009: Calculus on Manifolds: The Sequel
Fall 2009: Group Theory
Winter 2010: Galois Theory
Fall 2010: Differential Geometry
Winter 2011: Differential Geometry II
Fall 2011: p-Adic Analysis
Winter 2012: Group Representations
Fall 2012: Set Theory
Winter 2013: Functional Analysis
Fall 2013: Number Theory (Skipped)
Winter 2014: Measure Theory
Fall 2014: Introduction to Lie Groups and Lie Algebras Part I
Winter 2015: Introduction to Lie Groups and Lie Algebras Part II
Fall 2015: Algebraic Number Theory
Winter 2016: Algebraic Number Theory: The Sequel
Fall 2016: Introduction to Complex Analysis, Part I
Winter 2017: Introduction to Complex Analysis, Part II
