Reply to A (very) gentle comment on Algebraic Geometry for the faint-hearted | Ilyas Khan

Replied to A (very) gentle comment on Algebraic Geometry for the faint-hearted by Ilyas Khan (LinkedIn)
This short article is the result of various conversations over the course of the past year or so that arose on the back of two articles/blog pieces that I have previously written about Category Theory (here and here). One of my objectives with such articles, whether they be on aspects of quantum computing or about aspects of maths, is to try and de-mystify as much of the associated jargon as possible, and bring some of the stunning beauty and wonder of the subject to as wide an audience as possible. Whilst it is clearly not possible to become an expert overnight, and it is certainly not my objective to try and provide more than an introduction (hopefully stimulating further research and study), I remain convinced that with a little effort, non-specialists and even self-confessed math-phobes can grasp some of the core concepts. In the case of my articles on Category Theory, I felt that even if I could generate one small gasp of excited comprehension where there was previously only confusion, then the articles were worth writing.
I just finished a course on Algebraic Geometry through UCLA Extension, which was geared toward non-traditional math students and professionals, and I wish I had known about Smith’s textbook when I started. I did spend some time with Cox, Little, and O’Shea’s Ideals, Varieties, and Algorithms, which is a pretty good introduction to the area, though it’s written more with computer scientists and engineers in mind than the pure mathematician; that may actually recommend it to your audience here as well. It’s certainly more accessible to the faint-of-heart than Hartshorne.

I’ve enjoyed your prior articles on category theory, which have spurred me to delve deeper into the area. For others who are interested, I thought I’d also mention that the physicist and information theorist John Carlos Baez at UCR has recently started an online course in applied category theory, which I suspect is a bit more accessible than most of the graduate-level texts and courses currently available. For more details, I’d suggest starting here: https://johncarlosbaez.wordpress.com/2018/03/26/seven-sketches-in-compositionality/

🔖 [1803.05316] Seven Sketches in Compositionality: An Invitation to Applied Category Theory

Bookmarked Seven Sketches in Compositionality: An Invitation to Applied Category Theory by Brendan Fong, David I. Spivak (arxiv.org)
This book is an invitation to discover advanced topics in category theory through concrete, real-world examples. It aims to give a tour: a gentle, quick introduction to guide later exploration. The tour takes place over seven sketches, each pairing an evocative application, such as databases, electric circuits, or dynamical systems, with the exploration of a categorical structure, such as adjoint functors, enriched categories, or toposes. No prior knowledge of category theory is assumed. [.pdf]
This is the textbook that John Carlos Baez is going to use for his online course in Applied Category Theory.

👓 Applied Category Theory – Online Course | John Carlos Baez

Some awesome news just as I’ve wrapped up a class on Algebraic Geometry and am actively looking to delve into some category theory over the summer: John Carlos Baez announced that he’s going to offer an online course in applied category theory. He’s also already posted some videos and details!

Repost of John Carlos Baez’ Biology as Information Dynamics

Bookmarked Biology as Information Dynamics by John Carlos Baez (Google+)
I'm giving a talk at the Stanford Complexity Group this Thursday afternoon, April 20th. If you're around - like in Silicon Valley - please drop by! It will be in Clark S361 at 4 pm.

Here's the idea. Everyone likes to say that biology is all about information. There's something true about this - just think about DNA. But what does this insight actually do for us? To figure it out, we need to do some work.

Biology is also about things that can make copies of themselves. So it makes sense to figure out how information theory is connected to the 'replicator equation' — a simple model of population dynamics for self-replicating entities.

To see the connection, we need to use relative information: the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.

It turns out that free energy — energy in forms that can actually be used, not just waste heat — is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information. But there's a lot more to it than this!

Using relative information we can also see evolution as a learning process, fix the problems with Fisher's fundamental theorem of natural selection, and more. So this is what I'll talk about! You can see slides of an old version here: http://math.ucr.edu/home/baez/bio_asu/ but my Stanford talk will be videotaped and it'll eventually be here: https://www.youtube.com/user/StanfordComplexity You can already see lots of cool talks at this location!

#biology
Wondering if there’s a way I can manufacture a reason to head to Northern California this week…
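For anyone who, like me, wants a head start on the two pieces of math Baez names in his abstract, the standard textbook forms are easy to state. This is my own quick gloss, not anything taken from his slides.

Relative information (the Kullback–Leibler divergence) of a probability distribution $p$ from a distribution $q$:

$$D(p \,\|\, q) = \sum_i p_i \ln \frac{p_i}{q_i}$$

The replicator equation, where $x_i$ is the frequency of the $i$-th type of replicator and $f_i(x)$ is its fitness:

$$\dot{x}_i = x_i \Big( f_i(x) - \sum_j x_j f_j(x) \Big)$$

so a type grows exactly when its fitness exceeds the population's mean fitness. As I read the abstract, the punchline is that free energy is a special case of this relative information, so its decrease in chemical reactions and the dynamics of self-replicating populations are two faces of the same quantity.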