Hear The Epic of Gilgamesh Read in its Original Ancient Language, Akkadian

a tweet by Open Culture (Twitter)
Hear The Epic of Gilgamesh Read in its Original Ancient Language, Akkadian http://bit.ly/2dF0SFZ pic.twitter.com/sdgjcJX5ZK

Dave Harris is sure to appreciate this.

GitHub have published some guidance on persistence and archiving of repositories for academics #openscience

GitHub have published some guidance on persistence and archiving of repositories for academics #openscience by Arfon Smith (Twitter)
GitHub have published some guidance on persistence and archiving of repositories for academics https://help.github.com/articles/about-archiving-content-and-data-on-github/ #openscience

The crowd from Dodging the Memory Hole are sure to find this interesting!


The fun was out there | Johns Hopkins Magazine

The fun was out there by Matt Gross (Johns Hopkins Magazine, Summer 2016)
For the first couple of months of freshman year, I spent my evenings breaking into buildings on campus.

Having just passed our 20th college reunion, an old friend starts spilling the beans…

Apparently the statute of limitations on college shenanigans has run out and one of my best friends has written a nice little essay about some of “our” adventures. Fortunately he has kindly left out the names of his co-conspirators, so I’ll also remain silent about who was responsible for which particular crimes. Like him, I will leave the numerous other crimes he redacted unsung.


For the first couple of months of freshman year, I spent my evenings breaking into buildings on campus. This began, naturally, because a few of us who lived in and around the Vincent-Willard dorm had mail ordered lock-picking kits, and, well, we needed something to practice on besides our own dorm rooms.

So down into the midnight bowels of Krieger we crept, sneaking deep underground into disused classrooms, mute hallways, and one strange lab whose floor was tight-knit mesh wiring with a Silence of the Lambs–esque chamber below. We touched little, took nothing (except, once, a jar of desiccant—sorry!), and were never caught.

Such was the state of fun at Johns Hopkins in the fall of 1992, an era when the administration seemed to have adopted a policy of benign neglect toward the extracurricular happiness of its undergraduate body. We had Spring Fair and the occasional bus trip to New York for the day. What more could we want?

For many—really, most—of my cutthroat classmates, this was reason to grumble. Why, they moaned from the depths of D-level, couldn’t school be more exciting? A student union, they pleaded. A bar. A café. Anything to make campus life more bearable.

But for my friends and me, the school’s DGAF attitude meant freedom: We could do whatever we wanted, on campus or off. When lock-picking grew old (quickly, I’m pleased to say), we began to roam, wandering among the half-abandoned industrial sites that lined the unreconstructed harbor, or driving (when someone happened to have a car) under the interstates that cut through and around the city. We were set loose upon Baltimore, and all we ever wanted was to go and see what there was.

Here’s what we found: A large yellow smiley face painted on the end of an oil-storage tank. The 16mm film collection at the Pratt Library. A man who claimed to have been hanging out with Mama Cass Elliot of the Mamas & the Papas the night she lost her virginity. The Baltimore Streetcar Museum. How to clear the dance floor at Club Midnite by playing the 1978 song “Fish Heads” (eat them up, yum!). The big slice at Angelo’s and the $4.95 crabcake subs at Sip & Bite. Smart drugs, Neal Stephenson, and 2600 magazine at Atomic Books. The indie movie screenings at Skizz Cyzyk’s funeral home “mansion.”

None of these alone was world-changing (okay, except maybe “Fish Heads”). Put together, though, they amounted to a constant stream of stimulation, novelty, and excitement, the discoveries that make new adulthood feel fresh and occasionally profound.

All the while, I heard the no-fun grumbling from around campus and failed to understand it. We had freedom—what more could we need? The world was all around us, begging to be explored. We didn’t even have to leave campus: One spring, my girlfriend and I simply stepped off the sidewalk next to Mudd Hall into a little dell—and discovered a stand of wild scallions. We picked a ton, brought them home, and feasted on our foraged bounty. All we’d had to do was to leave the asphalt path—no red brick in those days—behind.

Matt Gross, Johns Hopkins A&S ’96, ’98 (MA), is a food and travel writer/editor who’s worked for everyone from The New York Times and Bon Appétit to The Guardian, The Village Voice, and Saveur. He lives in Brooklyn with his wife, Jean Liu, A&S ’96, and their two daughters.

Incidentally, he also had two other meaty pieces that came out yesterday.


Mathematics in Popular Science Books | The Economist

Big bang (The Economist)
Popular physics has enjoyed a new-found regard. Now comes a brave attempt to inject mathematics into an otherwise fashionable subject

This review of Brian Cox and Jeff Forshaw’s forthcoming book The Quantum Universe: Everything That Can Happen Does Happen sounds intriguing. I’m highly impressed that so much of the review focuses on the authors’ decision to include a more mathematical treatment of their subject in what is supposed to be a popular science book. I always wish books like these at least had the temerity to include much more of the mathematical underpinnings of their subjects; I’m glad that the popular press (or at least The Economist in this case) is willing to ask for the mathematics as well. Hopefully this will mark a broader trend in popular books on scientific topics!

Fundamental physics

Big bang

Popular physics has enjoyed a new-found regard. Now comes a brave attempt to inject mathematics into an otherwise fashionable subject

Nov 5th 2011 | from the print edition

The Quantum Universe: Everything That Can Happen Does Happen. By Brian Cox and Jeff Forshaw. Allen Lane; 255 pages; £20. To be published in America in January by Da Capo Press; $25.

PREVIOUSLY the preserve of dusty, tweed-jacketed academics, physics has enjoyed a surprising popular renaissance over the past few years. In America Michio Kaku, a string theorist, has penned several successful books and wowed television and radio audiences with his presentations on esoteric subjects such as the existence of wormholes and the possibility of alien life. In Britain Brian Cox, a former pop star whose music helped propel Tony Blair to power, has become the front man for physics, which recently regained its status as a popular subject in British classrooms, an effect many attribute to Mr Cox’s astonishing appeal.

Mr Cox, a particle physicist, is well-known as the presenter of two BBC television series that have attracted millions of viewers (a third series will be aired next year) and as a bestselling author and public speaker. His latest book, “The Quantum Universe”, which he co-wrote with Jeff Forshaw of the University of Manchester, breaks the rules of popular science-writing that were established over two decades ago by Stephen Hawking, who launched the modern genre with his famous book, “A Brief History of Time”.

Mr Hawking’s literary success was ascribed to his eschewing equations. One of his editors warned him that sales of the book would be halved by every equation he included; Mr Hawking inserted just one, E=mc², and, even then, the volume acquired a sorry reputation for being bought but not read. By contrast, Mr Cox, whose previous book with Mr Forshaw investigated “Why Does E=mc²?” (2009), has bravely sloshed a generous slug of mathematics throughout his texts.

The difficulties in explaining physics without using maths are longstanding. Einstein mused, “The eternal mystery of the world is its comprehensibility,” and “the fact that it is comprehensible is a miracle.” Yet the language in which the world is described is that of maths, a relatively sound grasp of which is needed to comprehend the difficulties that physicists are trying to resolve as well as the possible solutions. Mr Cox has secured a large fan base with his boyish good looks, his happy turns of phrase and his knack for presenting complex ideas using simple analogies. He also admirably shies away from dumbing down. “The Quantum Universe” is not a dry undergraduate text book, but nor is it a particularly easy read.

The subject matter is hard. Quantum mechanics, which describes in subatomic detail a shadowy world in which cats can be simultaneously alive and dead, is notoriously difficult to grasp. Its experiments yield bizarre results that can be explained only by embracing the maths that describe them, and its theories make outrageous predictions (such as the existence of antimatter) that have nevertheless later been verified. Messrs Cox and Forshaw say they have included the maths “mainly because it allows us to really explain why things are the way they are. Without it, we should have to resort to the physicist-guru mentality whereby we pluck profundities out of thin air, and neither author would be comfortable with guru status.”

That stance might comfort the authors, but to many readers they will nonetheless seem to pluck equations out of thin air. Yet their decision to include some of the hard stuff leaves open the possibility that some readers might actually engage in the slog that leads to higher pleasures. For non-sloggers alternative routes are offered: Messrs Cox and Forshaw use clockfaces to illustrate how particles interact with one another, a drawing of how guitar strings twang and a photograph of a vibrating drum. A diagram, rather than an equation, is used to explain one promising theory of how matter acquires mass, a question that experiments on the Large Hadron Collider at CERN, the European particle-physics laboratory near Geneva, will hopefully soon answer.

The authors have wisely chosen to leaven their tome with amusing tales of dysfunctional characters among the scholars who developed quantum mechanics in the 1920s and beyond, as well as with accounts of the philosophical struggles with which they grappled and the occasional earthy aside. Where the subject matter is a trifle dull, Messrs Cox and Forshaw acknowledge it: of Heinrich Kayser, who a century ago completed a six-volume reference book documenting the spectral lines generated by every known element, they observe, “He must have been great fun at dinner parties.” And they make some sweeping generalisations about their colleagues who pore over equations: “Physicists are very lazy, and they would not go to all this trouble unless it saved time in the long run.”

Whether or not readers of “The Quantum Universe” will follow all the maths, the authors’ love for their subject shines through the book. “There is no better demonstration of the power of the scientific method than quantum theory,” they write. That may be so, but physicists all over the world, Messrs Cox and Forshaw included, are longing for the next breakthrough that will supersede the claim. Hopes are pinned on experiments currently under way at CERN that may force physicists to rethink their understanding of the universe, and inspire Messrs Cox and Forshaw to write their next book—equations and all.

from the print edition | Books and arts

Entropy Is Universal Rule of Language | Wired Science

Entropy Is Universal Rule of Language by Lisa Grossman (Wired)
The amount of information carried in the arrangement of words is the same across all languages, even languages that aren't related to each other. This consistency could hint at a single common ancestral language, or universal features of how human brains process speech. "It doesn't matter what language or style you take," said systems biologist…

The research this article is based on is quite interesting for those doing language research.

The amount of information carried in the arrangement of words is the same across all languages, even languages that aren’t related to each other. This consistency could hint at a single common ancestral language, or universal features of how human brains process speech.

“It doesn’t matter what language or style you take,” said systems biologist Marcelo Montemurro of England’s University of Manchester, lead author of a study May 13 in PLoS ONE. “In languages as diverse as Chinese, English and Sumerian, a measure of the linguistic order, in the way words are arranged, is something that seems to be a universal of languages.”

Language carries meaning both in the words we choose and the order we put them in. Some languages, like Finnish, carry most of their meaning in tags on the words themselves, and are fairly free-form in how words are arranged. Others, like English, are more strict: “John loves Mary” means something different from “Mary loves John.”

Montemurro realized that he could quantify the amount of information encoded in word order by computing a text’s “entropy,” or a measure of how evenly distributed the words are. Drawing on methods from information theory, Montemurro and co-author Damián Zanette of the National Atomic Energy Commission in Argentina calculated the entropy of thousands of texts in eight different languages: English, French, German, Finnish, Tagalog, Sumerian, Old Egyptian and Chinese.

Then the researchers randomly rearranged all the words in the texts, which ranged from the complete works of Shakespeare to The Origin of Species to prayers written on Sumerian tablets.

“If we destroy the original text by scrambling all the words, we are preserving the vocabulary,” Montemurro said. “What we are destroying is the linguistic order, the patterns that we use to encode information.”

The researchers found that the original texts spanned a variety of entropy values in different languages, reflecting differences in grammar and structure.

But strangely, the difference in entropy between the original, ordered text and the randomly scrambled text was constant across languages. This difference is a way to measure the amount of information encoded in word order, Montemurro says. The amount of information lost when they scrambled the text was about 3.5 bits per word.
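
For readers who want to play with the idea, here is a toy version of the scramble-and-compare measurement in Python. Montemurro and Zanette used a careful block-entropy estimator; this sketch substitutes zlib’s compressed size as a crude entropy proxy, so it will not reproduce the 3.5-bits-per-word figure exactly, but it captures the shape of the procedure: measure the original, scramble, measure again, and take the difference.

```python
# Toy version of the word-order information measure described above.
# Compressed size stands in for a real entropy estimate (an admitted
# simplification of the paper's block-entropy method).
import random
import zlib

def bits_per_word(words):
    """Crude entropy proxy: compressed size of the text, in bits per word."""
    blob = " ".join(words).encode("utf-8")
    return 8 * len(zlib.compress(blob, 9)) / len(words)

def word_order_information(text, trials=10, seed=0):
    """Bits per word lost when word order is destroyed by shuffling."""
    words = text.split()
    rng = random.Random(seed)
    scrambled = 0.0
    for _ in range(trials):
        shuffled = words[:]      # same vocabulary, order destroyed
        rng.shuffle(shuffled)
        scrambled += bits_per_word(shuffled)
    return scrambled / trials - bits_per_word(words)

# Any long plain-text file will do, e.g.:
# with open("origin_of_species.txt") as f:
#     print(word_order_information(f.read()))
```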

“We found, very interestingly, that for all languages we got almost exactly the same value,” he said. “For some reason these languages evolved to be constrained in this framework, in these patterns of word ordering.”

This consistency could reflect some cognitive constraints that all human brains run up against, or give insight into the evolution of language, Montemurro suggests.

Cognitive scientists are still debating whether languages have universal features. Some pioneering linguists suggested that languages should evolve according to a limited set of rules, which would produce similar features of grammar and structure. But a study published last month that looked at the structure and syntax of thousands of languages found no such rules.

It may be that universal properties of language show up only at a higher level of organization, suggests linguist Kenny Smith of the University of Edinburgh.

“Maybe these broad-brushed features get down to what’s really essential” about language, he said. “Having words, and having rules for how the words are ordered, maybe those are the things that help you do the really basic functions of language. And the places where linguists traditionally look to see universals are not where the fundamentals of language are.”


Citation: “Universal Entropy of Word Ordering Across Linguistic Families,” Marcelo A. Montemurro and Damián H. Zanette. PLoS ONE, Vol. 6, Issue 5, May 13, 2011. DOI: 10.1371/journal.pone.0019875.

via Wired.com

IPTV primer: an overview of the fusion of TV and the Internet | Ars Technica

IPTV primer: an overview of the fusion of TV and the Internet by Iljitsch van Beijnum (Ars Technica)

This brief overview of IPTV is about as concise as they get. It’s recommended for entertainment executives who need to get caught up on the space, as well as for people who are contemplating “cutting the cable cord.” The area still has plenty of room for improvement…

Profound as it may be, the Internet revolution still pales in comparison to that earlier revolution that first brought screens into millions of homes: the TV revolution. Americans still spend more of their non-sleep, non-work time watching TV than on any other activity. And now the immovable object (the couch potato) and the irresistible force (the business-model-destroying Internet) are colliding.

For decades, the limitations of technology only allowed viewers to watch TV programs as they were broadcast. Although limiting, this way of watching TV has the benefit of simplicity: the viewer only has to turn on the set and select a channel. They then get to see what was deemed broadcast-worthy at that particular time. This is the exact opposite of the Web, where users type a search query or click a link and get their content whenever they want. Unsurprisingly, TV over the Internet, a combination that adds Web-like instant gratification to the TV experience, has seen an enormous growth in popularity since broadband became fast enough to deliver decent quality video. So is the Internet going to wreck TV, or is TV going to wreck the Internet? Arguments can certainly be made either way.

Distributing TV over a data network such as the Internet, a process often called IPTV, is a little more complex than just sending files back and forth. Unless, that is, a TV broadcast is recorded and turned into a file. The latter, file-based model is one that Apple has embraced with its iTunes Store, where shows are simply downloaded like any other file. This has the advantage that shows can be watched later, even when there is no longer a network connection available, but the download model doesn’t exactly lend itself to live broadcasts—or instant gratification, for that matter.

Streaming

Most of the new IPTV services, like Netflix and Hulu, and all types of live broadcasts use a streaming model. Here, the program is sent out in real time. The computer—or, usually by way of a set-top box, the TV—decodes the incoming stream of audio and video and then displays it pretty much immediately. This has the advantage that the video starts within seconds. However, it also means that the network must be fast enough to carry the audio/video at the bitrate it was encoded with. The bitrate can vary a lot depending on the type of program—talking heads compress a lot better than car crashes—but for standard definition (SD) video, think two megabits per second (Mbps).

To get a sense of just how significant this 2Mbps number is, it’s worth placing it in the context of the history of the Internet, as it has moved from transmitting text to images to audio and video. A page of text that takes a minute to read is a few kilobytes in size. Images are tens to a few hundred kilobytes. High-quality audio starts at about 128 kilobits per second (kbps), or about a megabyte per minute. SD TV can be shoehorned into some two megabits per second (Mbps), or about 15 megabytes per minute. HDTV starts around 5Mbps, or 40 megabytes per minute. So someone watching HDTV over the Internet uses about the same bandwidth as half a million early-1990s text-only Web surfers. Even today, watching video uses at least ten times as much bandwidth as non-video use of the network.
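
The arithmetic behind those figures is worth making explicit; a quick sketch of the conversion:

```python
# Back-of-the-envelope conversion used in the paragraph above:
# encoded bitrate (megabits per second) -> storage per minute (megabytes).
def megabytes_per_minute(mbps):
    return mbps * 60 / 8   # 60 seconds per minute, 8 bits per byte

for label, mbps in [("SD video", 2), ("HDTV", 5)]:
    print(f"{label} at {mbps} Mbps = {megabytes_per_minute(mbps):.1f} MB/min")
# SD video at 2 Mbps = 15.0 MB/min
# HDTV at 5 Mbps = 37.5 MB/min (the article rounds this up to 40)
```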

In addition to raw capacity, streaming video also places other demands on the network. Most applications communicate through TCP, a layer in the network stack that takes care of retransmitting lost data and delivering data to the receiving application in the right order. This is despite the fact that the IP packets that do TCP’s bidding may arrive out of order. And when the network gets congested, TCP’s congestion control algorithms slow down the transmission rate at the sender, so the network remains usable.

However, for real-time audio and video, TCP isn’t such a good match. If a fraction of a second of audio or part of a video frame gets lost, it’s much better to just skip over the lost data and continue with what follows, rather than wait for a retransmission to arrive. So streaming audio and video tended to run on top of UDP rather than TCP. UDP is the thinnest possible layer on top of IP and doesn’t care about lost packets and such. But UDP also means that TCP’s congestion control is out the door, so a video stream may continue at full speed even though the network is overloaded and many packets—also from other users—get lost. However, more advanced streaming solutions are able to switch to lower quality video when network conditions worsen. And Apple has developed a way to stream video using standard HTTP on top of TCP, by splitting the stream into small files that are downloaded individually. Should a file fail to download because of network problems, it can be skipped, continuing playback with the next file.
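
A minimal sketch of that last, segmented approach, with skip-on-failure behavior. The URL naming scheme and the decoder here are hypothetical stand-ins, not Apple’s actual protocol (which enumerates segments through a playlist file):

```python
# Fetch short media segments in order over plain HTTP/TCP; if one fails,
# skip it and keep going rather than stalling playback on a retransmit.
import urllib.error
import urllib.request

def fetch_segments(base_url, count, timeout=2.0):
    for i in range(count):
        url = f"{base_url}/segment{i:05d}.ts"   # hypothetical naming scheme
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                yield resp.read()
        except (urllib.error.URLError, OSError):
            continue   # lost segment: skip it, playback continues

# for chunk in fetch_segments("http://example.com/stream", 100):
#     decoder.feed(chunk)   # 'decoder' is a placeholder for a real media decoder
```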

Where are the servers? Follow the money

Like any Internet application, streaming of TV content can happen from across town or across the world. However, as the number of users increases, the costs of sending such large amounts of data over large distances become significant. For this reason, content delivery networks (CDNs), of which Akamai is probably the most well-known, try to place servers as close to the end-users as possible, either close to important interconnect locations where lots of Internet traffic comes together, or actually inside the networks of large ISPs.

Interestingly, it appears that CDNs are actually paying large ISPs for this privilege. This makes the IPTV business a lot like the cable TV business. On the Internet, the assumption is that both ends (the consumer and the provider of over-the-Internet services) pay their own ISPs for the traffic costs, and the ISPs just transport the bits and aren’t involved otherwise. In the cable TV world, this is very different. An ISP provides access to the entire Internet; a cable TV provider doesn’t provide access to all possible TV channels. Often, the cable companies pay for access to content.

A recent dispute between Level3 and Comcast can be interpreted as evidence of a power struggle between the CDNs and the ISPs in the IPTV arena.

Walled gardens


So far, we’ve only been looking at IPTV over the public Internet. However, many ISPs around the world already provide cable-like service on top of ADSL or Fiber-To-The-Home (FTTH). With such complete solutions, the ISPs can control the whole service, from streaming servers to the set-top box that decodes the IPTV data and delivers it to a TV. This “walled garden” type of IPTV typically provides a better and more TV-like experience—changing channels is faster, image quality is better, and the service is more reliable.

Such an IPTV Internet access service is a lot like what cable networks provide, but there is a crucial difference: with cable, the bandwidth of the analog cable signal is split into channels, which can be used for analog or digital TV broadcasts or for data. TV and data don’t get in each other’s way. With IPTV, on the other hand, TV and Internet data are communicating vessels: what is used by one is unavailable to the other. And to ensure a good experience, IPTV packets are given higher priority than other packets. When bandwidth is plentiful, this isn’t an issue, but when a network fills up to the point that Internet packets regularly have to take a backseat to IPTV packets, this could easily become a network neutrality headache.

Multicast to the rescue

Speaking of networks that fill up: for services like Netflix or Hulu, where everyone is watching their own movie or their own show, streaming makes a lot of sense. Not so much with live broadcasts. If 30 million people were to tune into Dancing with the Stars using streaming, 30 million copies of each IPTV packet would have to flow down the tubes. That’s not very efficient, especially given that routers and switches have the capability to take one packet and deliver a copy to anyone who’s interested. This ability to make multiple copies of a packet is called multicast, and it occupies territory between broadcasts, which go to everyone, and regular communications (called unicast), which go to only one recipient. Multicast packets are addressed to a special group address. Only systems listening for the right group address get a copy of the packet.
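
At the socket level, “listening for the right group address” looks roughly like this minimal Python receiver; the group and port are arbitrary example values from the administratively scoped multicast range, not a real service:

```python
# Join an IPv4 multicast group and receive whatever is sent to it.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004   # example values only

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group makes the kernel announce membership upstream (IGMP),
# after which routers forward packets addressed to GROUP our way.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, sender = sock.recvfrom(2048)   # hand the payload to the video decoder
```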

Multicast is already used in some private IPTV networks, but it has never gained traction on the public Internet. Partially, this is a chicken/egg situation, where there is no demand because there is no supply and vice versa. But multicast is also hard to make work as the network gets larger and the number of multicast groups increases. However, multicast is very well suited to broadcast type network infrastructures, such as cable networks and satellite transmission. Launching multiple satellites that just send thousands of copies of the same packets to thousands of individual users would be a waste of perfectly good rockets.

Peer-to-peer and downloading


Multicast works well for a relatively limited number of streams that are each watched by a reasonably sized group of people—but having very many multicast groups takes up too much memory in routers and switches. For less popular content, there’s another delivery method that requires few or no streaming servers: peer-to-peer streaming. This was the technology used by the Joost service in 2007 and 2008. With peer-to-peer streaming, all the systems interested in a given stream get blocks of audio/video data from upstream peers, and then send those on to downstream peers. This approach has two downsides: the bandwidth of the stream has to be limited to fit within the upload capacity of most peers, and changing channels is a very slow process because a whole new set of peers must be contacted.

For less time-critical content, downloading can work very well, especially in a form like podcasts, where an RSS feed allows a computer to download new episodes of shows without user intervention. It’s possible to imagine a system where regular network TV shows are made available for download one or two days before they air—but in encrypted form. Then, “airing” the show would just entail distributing the decryption keys to viewers. This could leverage unused network capacity at night. Downloads might also happen using IP packets with a lower priority, so they don’t get in the way of interactive network use.
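
A minimal sketch of that pull model, assuming a standard RSS 2.0 feed with enclosure tags; the feed URL is a placeholder:

```python
# Poll an RSS feed and collect enclosure URLs for unattended download.
import urllib.request
import xml.etree.ElementTree as ET

def enclosure_urls(feed_url):
    """Return the media enclosure URLs from an RSS 2.0 feed."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.parse(resp).getroot()   # the <rss> element
    return [enc.get("url")
            for enc in root.iterfind("./channel/item/enclosure")]

# Run periodically (e.g. from cron) and fetch anything new:
# for url in enclosure_urls("http://example.com/show.rss"):
#     urllib.request.urlretrieve(url, url.rsplit("/", 1)[-1])
```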

IP addresses and home networks

A possible issue with IPTV could be the extra IP addresses required. There are basically two approaches to handling this issue: one where the user is in full control, and one where an IPTV service provider (usually the ISP) has some control. In the former case, streaming and downloading happen through the user’s home network and no extra addresses are required. However, wireless home networks may not be able to provide bandwidth with enough consistency to make streaming work well, so pulling Ethernet cabling may be required.

When the IPTV provider supplies a set-top box, it’s often necessary to address packets toward that set-top box, so the box must be addressable in some way. This can eat up a lot of addresses, which is a problem in these IPv4-starved times. For really large ISPs, the private address ranges in IPv4 may not even be sufficient to provide a unique address to every customer. Issues in this area are why Comcast has been working on adopting IPv6 in the non-public part of its network for many years. When an IPTV provider supplies a home gateway, that gateway is often outfitted with special quality-of-service mechanisms that make (wireless) streaming work better than it does with run-of-the-mill home gateways that treat all packets the same.

Predicting the future

Converging to a single IP network that can carry the Web, other data services, telephony, and TV seems like a no-brainer. The phone companies have been working on this for years because that will allow them to buy cheap off-the-shelf routers and switches, rather than the specialty equipment they use now. So it seems highly likely that in the future, we’ll be watching our TV shows over the Internet—or at least over an IP network of some sort. The extra bandwidth required is going to be significant, but so far, the Internet has been able to meet all challenges thrown at it in this area. Looking at the technologies, it would make sense to combine nightly pushed downloads for popular non-live content, multicast for popular live content, and regular streaming or peer-to-peer streaming for back catalog shows and obscure live content.

However, the channel flipping model of TV consumption has proven to be quite popular over the past half century, and many consumers may want to stick with it—for at least part of their TV viewing time. If nothing else, this provides an easy way to discover new shows. The networks are also unlikely to move away from this model voluntarily, because there is no way they’ll be able to sell 16 minutes of commercials per hour using most of the other delivery methods. However, we may see some innovations. For instance, if you stumble upon a show in progress, wouldn’t it be nice to be able to go back to the beginning? In the end, TV isn’t going anywhere, and neither is the Internet, so they’ll have to find a way to live together.

Correction: The original article incorrectly stated that cable providers get paid by TV networks. For broadcast networks, cable operators are required by the law’s “must carry” provisions to carry all of the TV stations broadcast in a market. Ars regrets the error.

‘The Information’ by James Gleick – Book Review by Janet Maslin | New York Times

‘The Information’ by James Gleick - Review by Janet Maslin (nytimes.com)
“The Information,” by James Gleick, is to the nature, history and significance of data what the beach is to sand.

This book is assuredly going to have to skip up to the top of my current reading list.

“The Information” is so ambitious, illuminating and sexily theoretical that it will amount to aspirational reading for many of those who have the mettle to tackle it. Don’t make the mistake of reading it quickly. Imagine luxuriating on a Wi-Fi-equipped desert island with Mr. Gleick’s book, a search engine and no distractions. “The Information” is to the nature, history and significance of data what the beach is to sand.

In this relaxed setting, take the time to differentiate among the Brownian (motion), Bodleian (library) and Boolean (logic) while following Mr. Gleick’s version of what Einstein called “spukhafte Fernwirkung,” or “spooky action at a distance.” Einstein wasn’t precise about what this meant, and Mr. Gleick isn’t always precise either. His ambitions for this book are diffuse and far flung, to the point where providing a thumbnail description of “The Information” is impossible.

So this book’s prologue is its most slippery section. It does not exactly outline a unifying thesis. Instead it hints at the amalgam of logic, philosophy, linguistics, research, appraisal and anecdotal wisdom that will follow. If Mr. Gleick has one overriding goal it is to provide an animated history of scientific progress, specifically the progress of the technology that allows information to be recorded, transmitted and analyzed. This study’s range extends from communication by drumbeat to cognitive assault by e-mail.

As an illustration of Mr. Gleick’s versatility, consider what he has to say about the telegraph. He describes the mechanical key that made telegraphic transmission possible; the compression of language that this new medium encouraged; that it literally was a medium, a midway point between fully verbal messages and coded ones; the damaging effect its forced brevity had on civility; the confusion it created as to what a message actually was (could a mother send her son a dish of sauerkraut?) and the new conceptual thinking that it helped implement. The weather, which had been understood on a place-by-place basis, was suddenly much more than a collection of local events.

Beyond all this, Mr. Gleick’s telegraph chapter, titled “A Nervous System for the Earth,” finds time to consider the kind of binary code that began to make sense in the telegraph era. It examines the way letters came to be treated like numbers, and the way systems of ciphers emerged. It cites the various uses to which ciphers might be put by businessmen, governments or fiction writers (Lewis Carroll, Jules Verne and Edgar Allan Poe). Most of all it shows how this phase of communication anticipated the immense complexities of our own information age.

Although “The Information” unfolds in a roughly chronological way, Mr. Gleick is no slave to linearity. He freely embarks on colorful digressions. Some are included just for the sake of introducing the great eccentrics whose seemingly marginal inventions would prove to be prophetic. Like Richard Holmes’s “Age of Wonder,” this book invests scientists with big, eccentric personalities. Augusta Ada Lovelace, the daughter of Lord Byron, may have been spectacularly arrogant about what she called “my immense reasoning faculties,” claiming that her brain was “something more than merely mortal.” But her contribution to the writing of algorithms can, in the right geeky circles, be mentioned in the same breath as her father’s contribution to poetry.

The segments of “The Information” vary in levels of difficulty. Grappling with entropy, randomness and quantum teleportation is the price of enjoying Mr. Gleick’s simple, entertaining riffs on the Oxford English Dictionary’s methodology, which has yielded 30-odd spellings of “mackerel” and an enchantingly tongue-tied definition of “bada-bing,” and on the cyber-battles waged via Wikipedia. (As he notes, there are people who have bothered to fight over Wikipedia’s use of the word “cute” to accompany a picture of a young polar bear.) That Amazon boasts of being able to download a book called “Data Smog” in less than a minute does not escape his keen sense of the absurd.

As it traces our route to information overload, “The Information” pays tribute to the places that made it possible. He cites and honors the great cogitation hives of yore. In addition to the Institute for Advanced Study in Princeton, N.J., the Mount Rushmore of theoretical science, he acknowledges the achievements of corporate facilities like Bell Labs and I.B.M.’s Watson Research Center in the halcyon days when many innovations had not found practical applications and progress was its own reward.

“The Information” also lauds the heroics of mathematicians, physicists and computer pioneers like Claude Shannon, who is revered in the computer-science realm for his information theory but not yet treated as a subject for full-length, mainstream biography. Mr. Shannon’s interest in circuitry using “if … then” choices conducting arithmetic in a binary system had novelty when he began formulating his thoughts in 1937. “Here in a master’s thesis by a research assistant,” Mr. Gleick writes, “was the essence of the computer revolution yet to come.”

Among its many other virtues “The Information” has the rare capacity to work as a time machine. It goes back much further than Shannon’s breakthroughs. And with each step backward Mr. Gleick must erase what his readers already know. He casts new light on the verbal flourishes of the Greek poetry that preceded the written word: these turns of phrase could be as useful for their mnemonic power as for their art. He explains why the Greeks arranged things in terms of events, not categories; how one Babylonian text that ends with “this is the procedure” is essentially an algorithm; and why the telephone and the skyscraper go hand in hand. Once the telephone eliminated the need for hand-delivered messages, the sky was the limit.

In the opinion of “The Information” the world of information still has room for expansion. We may be drowning in spam, but the sky’s still the limit today.