Do you know the overlaps between SEO and accessibility? If you’re optimizing for search engines, you’re also affecting how people using assistive technologies experience your site. Let's examine the effects and best practices for keyword usage, text formatting, and links.
Back in December, I began using the functionality of sending notifications via Webmention. Within the IndieWeb, this is known as mentioning or, potentially, person-tagging someone (inline): by adding a link to a person’s website onto any mentions of their name in my posts, my website will automatically send them a notification that they were mentioned. They can then determine what they want to do or not do with that information.
While I want people whom I mention in some of my posts to be aware that they’ve been mentioned by me, I don’t necessarily need to add to the visual cruft and clutter of the pages by intentionally calling out that link with the traditional color change and underline that <a> links in HTML often have. After all, I’m linking to them to send a notification to them, not necessarily to highlight them to everyone else. In some sense, I’m doing this because I’ve never quite liked the way Twitter highlights @names within posts. All the additional cruft in Twitter, like the “@” and “#” prefixes, while adding useful functionality, has always dramatically decreased the readability and enjoyment of the interface for me. So why not just get rid of them?! I’m glad to have the power and ability to do so on my own website and hope others appreciate it.
In the past I’ve tried “blind notifying” (or bcc’ing via Webmention) people by adding invisible or hidden links in the page, but this has been confusing to some. This is why one of the general principles of the IndieWeb is to
Use & publish visible data for humans first, machines second.
Thus, I’ve added a tiny bit of CSS to those notification links so that they appear just like the rest of the text on the site. The notifications via Webmention will still work, and those who are mentioned will be able to see their names appear within the post.
For those interested, I’ve left in some hover UI so if you hover your mouse over these “hidden” links, they will still indicate there’s a link there and it will work as expected.
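For those curious about the implementation, the CSS involved really is tiny. A minimal sketch of what it might look like (the webmention-only class name is purely illustrative, not necessarily what my site uses):

```css
/* Make notification-only links look like the surrounding body text. */
a.webmention-only {
  color: inherit;        /* drop the traditional link color */
  text-decoration: none; /* drop the underline */
}

/* On hover, reveal that a link is present so it still works as expected. */
a.webmention-only:hover {
  text-decoration: underline;
  cursor: pointer;
}
```

Since the link remains an ordinary <a> element in the markup, Webmention senders and receivers parsing the page are unaffected; only the visual presentation changes.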
As an example of the functionality here within this particular post, I’ve hidden the link on the words “mentioning” and “person-tagging” in the first paragraph. Loqi, the IndieWeb chat bot, should pick up the mention of those wiki pages via WebSub and syndicate my post into the IndieWeb meta chat room, and those interested in the ideas can still hover over the word and click on it for more details. In practice, I’ll typically be doing this for less relevant links as well as for tagging other people solely to send them notifications.
I’m curious if there are any edge cases or ideas I’m missing in this sort of user interface? Sadly it won’t work in most feed readers, but perhaps there’s a standardizable way of indicating this? If you have ideas about improved presentation for this sort of functionality, I’d be thrilled to hear them in the comments below.
I’ve had an idea in my task list for a week or so now, and I just haven’t made the time to write about it, at least not as I originally intended when I read the post that inspired it. J…
Some ideas worth chewing on here. Paul almost uses the phrase “thought spaces” here and though he doesn’t, he’s certainly dancing around it.
(Joe’s full article is here.)
Yes, here we are again—I think what you’re saying is that even a single-line annotation of a link, even just a few words of human curation do wonders when you’re out discovering the world. (Perhaps even more than book recommendations—where we know that at leas...
Highlights, Quotes, Annotations, & Marginalia
it made me feel like we were trying to send some kind of concentrated transmission to the author—linking as a greeting, links as an invitation. ❧
December 19, 2018 at 04:14PM
I do find that Webmentions are really enhancing linking—by offering a type of bidirectional hyperlink. I think if they could see widespread use, we’d see a Renaissance of blogging on the Web. ❧
December 19, 2018 at 04:17PM
I’m really not sure if linking, in general, has changed over the years. I’ve been doing it the same since day one. But that’s just me. ❧
December 19, 2018 at 04:22PM
Don’t apologize for links. It’s the web and links are important. In fact I might think that you could have a few additional links here! I would have seen it anyway, but I was a tad sad not to have seen a link to that massive pullquote/photo you made at the top of the post which would have sent me a webmention to boot. (Of course WordPress doesn’t make it easy on this front either, so your best bet would have been an invisible <link> hidden in the text maybe?)
I’ve been in the habit of person-tagging people in posts to actively send them webmentions, but I also have worried about the extra “visual clutter” and cognitive load of the traditional presentation of links as mentioned by John. (If he wasn’t distracted by the visual underlines indicating links, he might have been as happy?) As a result, I’m now considering adding some CSS to my site so that some of these webmention links simply look like regular text. This way the notifications will be triggered, but without adding the seeming “cruft” visually or cognitively. Win-win? Thanks for the inspiration!
In your case here, you’ve kindly added enough context about what to expect about the included links that the reader can decide for themselves while still making your point. You should sleep easily on this point and continue linking to your heart’s content.
In some sense, I think that the more links the better. I suspect the broader thesis of Cesar Hidalgo’s book Why Information Grows: The Evolution of Order, from Atoms to Economies would give you some theoretical back up for the idea.
'These sort of blind peer-to-peer shares are really important in determining what news gets circulated and what just fades off the public radar'
Highlights, Quotes, Annotations, & Marginalia
Guide to highlight colors
Yellow–General highlights and highlights which don’t fit under another category below
Orange–Vocabulary word; interesting and/or rare word
Green–Reference to read
Red–Example to work through
…the high barriers to becoming a Christian had to be abolished. Circumcision and the strict food laws had to be relaxed.
make it easier to create links!
But when you add enough links such that each node has an average of one link, a miracle happens: A unique giant cluster emerges.
Random network theory tells us that as the average number of links per node increases beyond the critical one, the number of nodes left out of the giant cluster decreases exponentially.
If the network is large, despite the links’ completely random placement, almost all nodes will have approximately the same number of links.
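These last few claims are easy to check empirically. Here's a quick toy simulation of my own (not from the book), building an Erdős-Rényi-style random graph and measuring its largest cluster with a union-find structure:

```python
import random

def largest_component_fraction(n, avg_degree, seed=0):
    """Build an Erdos-Renyi-style random graph with the given average
    degree and return the fraction of nodes in its largest cluster."""
    rng = random.Random(seed)
    parent = list(range(n))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Place edges completely at random, as in the Erdos-Renyi model.
    # Each edge contributes 2 to the total degree, hence the division.
    for _ in range(int(n * avg_degree / 2)):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            parent[a] = b

    # Count the size of each cluster and report the largest.
    sizes = {}
    for v in range(n):
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

for k in (0.5, 1.0, 2.0):
    print(k, round(largest_component_fraction(20000, k), 3))
```

Below an average of one link per node the largest cluster is a vanishing fraction of the network; above it, the giant cluster swallows most of the nodes.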
seminal 1959 paper of Erdős and Rényi to bookmark
“On Random Graphs. I” (PDF). Publicationes Mathematicae. 6: 290–297.
In Így írtok ti, or This Is How You Write, Frigyes Karinthy
But there is one story, entitled “Láncszemek,” or “Chains,” that deserves our attention
Karinthy’s 1929 insight that people are linked by at most five links was the first published appearance of the concept we know today as “six degrees of separation.”
He [Stanley Milgram] did not seem to have been aware of the body of work on networks in graph theory and most likely had never heard of Erdős and Rényi. He is known to have been influenced by the work of Ithiel de Sola Pool of MIT and Manfred Kochen of IBM, who circulated manuscripts about the small world problem within a group of colleagues for decades without publishing them, because they felt they had never “broken the back of the problem.”
Think about the small world problem of published research.
We don’t have a social search engine so we may never know the real number with total certainty.
Facebook has since pinned this number down. As of 2016 it’s down to 3.57 degrees of separation.
Google the n-gram of this word to see its incidence over time. How frequent was it when this book was written? It was apparently a thing beginning in the mid-1960s.
Mark Newman, a physicist at the Santa Fe Institute… had already written several papers on small worlds that are now considered classics.
Therefore, Watts and Strogatz’s most important discovery is that clustering does not stop at the boundary of social networks.
To explain the ubiquity of clustering in most real networks, Watts and Strogatz offered an alternative to Erdős and Rényi’s random network model in their 1998 study published in Nature.
The most intriguing result of our Web-mapping project was the complete absence of democracy, fairness, and egalitarian values on the Web. We learned that the topology of the Web prevents us from seeing anything but a mere handful of the billion documents out there.
Do Facebook and Twitter subvert some of this effect? What types of possible solutions could this give to the IndieWeb for social networking models with healthier results?
On the Web, the measure of visibility is the number of links. The more incoming links pointing to your Webpage, the more visible it is. […] Therefore, the likelihood that a typical document links to your Webpage is close to zero.
The hubs are the strongest argument against the utopian vision of an egalitarian cyberspace. […] In a collective manner, we somehow create hubs, Websites to which everyone links. They are very easy to find, no matter where you are on the Web. Compared to these hubs, the rest of the Web is invisible.
Every four years the United States inaugurates a new social hub–the president.
But every time an 80/20 rule truly applies, you can bet that there is a power law behind it. […] Power laws rarely emerge in systems completely dominated by a roll of the dice. Physicists have learned that most often they signal a transition from disorder to order.
If the disorder to order is the case, then what is the order imposed by earthquakes which apparently work on a power law distribution?
Leo Kadanoff, a physicist at the University of Illinois at Urbana, had a sudden insight: In the vicinity of the critical point we need to stop viewing atoms separately. Rather, they should be considered communities that act in unison. Atoms must be replaced by boxes of atoms such that within each box all atoms behave as one.
Kenneth Wilson […] submitted simultaneously on June 2, 1971, and published in November of the same year by Physical Review B, turned statistical physics around. They proposed an elegant and all-encompassing theory of phase transitions. Wilson took the scaling ideas developed by Kadanoff and molded them into a powerful theory called renormalization. The starting point of his approach was scale invariance: He assumed that in the vicinity of the critical point the laws of physics applied in an identical manner at all scales, from single atoms to boxes containing millions of identical atoms acting in unison. By giving rigorous mathematical foundation to scale invariance, his theory spat out power laws each time he approached the critical point, the place where disorder makes room for order.
The random model of Erdős and Rényi rests on two simple and often disregarded assumptions. First, we start with an inventory of nodes. Having all the nodes available from the beginning, we assume that the number of nodes is fixed and remains unchanged throughout the network’s life. Second, all nodes are equivalent. Unable to distinguish between the nodes, we link them randomly to each other. These assumptions were unquestioned in over forty years of network research.
Both the Erdős-Rényi and Watts-Strogatz models assumed that we have a fixed number of nodes that are wired together in some clever way. The networks generated by these models are therefore static, meaning that the number of nodes remains unchanged during the network’s life. In contrast, our examples suggested that for real networks the static hypothesis is not appropriate. Instead, we should incorporate growth into our network models.
It demonstrated, however, that growth alone cannot explain the emergence of power laws.
They are hubs. The better known they are, the more links point to them. The more links they attract, the easier it is to find them on the Web and so the more familiar we are with them. […] The bottom line is that when deciding where to link on the Web, we follow preferential attachment: When choosing between two pages, one with twice as many links as the other, about twice as many people link to the more connected page. While our individual choices are highly unpredictable, as a group we follow strict patterns.
The model is very simple, as growth and preferential attachment lead to an algorithm defined by two straightforward rules:
A. Growth: For each given period of time we add a new node to the network. This step underscores the fact that networks are assembled one node at a time.
B. Preferential attachment: We assume that each new node connects to the existing nodes with two links. The probability that it will choose a given node is proportional to the number of links the chosen node has. That is, given the choice between two nodes, one with twice as many links as the other, it is twice as likely that the new node will connect to the more connected node.
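The two rules translate almost directly into code. A rough sketch of my own (the three-node seed network and the two links per new node are arbitrary choices, not from the book):

```python
import random

def barabasi_albert(n, seed=0):
    """Grow a network one node at a time (rule A); each new node attaches
    two links to existing nodes with probability proportional to their
    current number of links (rule B)."""
    rng = random.Random(seed)
    degree = {0: 2, 1: 2, 2: 2}  # seed network: a triangle of three nodes
    # Each node appears in this list once per link it has, so a uniform
    # draw from it is exactly degree-proportional (preferential) attachment.
    endpoints = [0, 1, 1, 2, 2, 0]
    for new in range(3, n):
        targets = set()
        while len(targets) < 2:  # two links per new node, no duplicates
            targets.add(rng.choice(endpoints))
        degree[new] = 2
        for t in targets:
            degree[t] += 1
            endpoints.extend((new, t))
    return degree

deg = barabasi_albert(5000)
print("max degree:", max(deg.values()))
print("average degree:", round(sum(deg.values()) / len(deg), 2))
```

Even in small runs a handful of early, lucky nodes accumulate far more links than the average (which hovers near 4, since each new node adds two links counted at both ends): the hubs.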
The how and why remain for each area of application, though.
In Hollywood, 94 percent of links are internal, formed when two established actors work together for the first time.
These shifts in thinking created a set of opposites: static versus growing, random versus scale-free, structure versus evolution.
[…] Does the presence of power laws imply that real networks are the result of a phase transition from disorder to order? The answer we’ve arrived at is simple: Networks are not en route from a random to an ordered state. Neither are they at the edge of randomness and chaos. Rather, the scale-free topology is evidence of organizing principles acting at each stage of the network formation process. There is little mystery here, since growth and preferential attachment can explain the basic features of the networks seen in nature. No matter how large and complex a network becomes, as long as preferential attachment and growth are present it will maintain its hub-dominated scale-free topology.
The introduction of fitness does not eliminate growth and preferential attachment, the two basic mechanisms governing network evolution. It changes, however, what is considered attractive in a competitive environment. In the scale-free model, we assumed that a node’s attractiveness was determined solely by its number of links. In a competitive environment, fitness also plays a role: Nodes with higher fitness are linked to more frequently. A simple way to incorporate fitness into the scale-free model is to assume that preferential attachment is driven by the product of the node’s fitness and the number of links it has. Each new node decides where to link by comparing the fitness connectivity product of all available nodes and linking with a higher probability to those that have a higher product and therefore are more attractive.
Bianconi’s calculations first confirmed our suspicion that in the presence of fitness the early bird is not necessarily the winner. Rather, fitness is in the driver’s seat, making or breaking the hubs.
But there was indeed a precise mathematical mapping between the fitness model and a Bose gas. According to this mapping, each node in the network corresponds to an energy level in the Bose gas.
…in some networks, the winner can take all. Just as in a Bose-Einstein condensate all particles crowd into the lowest energy level, leaving the rest of the energy levels unpopulated, in some networks the fittest node could theoretically grab all the links, leaving none for the rest of the nodes. The winner takes all.
But even though each system, from the Web to Hollywood, has a unique fitness distribution, Bianconi’s calculation indicated that in terms of topology all networks fall into one of only two possible categories. […] The first category includes all networks in which, despite the fierce competition for links, the scale-free topology survives. These networks display a fit-get-rich behavior, meaning that the fittest node will inevitably grow to become the biggest hub. The winner’s lead is never significant, however. The largest hub is closely followed by a smaller one, which acquires almost as many links as the fittest node. At any moment we have a hierarchy of nodes whose degree distribution follows a power law. In most complex networks, the power laws and the fight for links thus are not antagonistic but can coexist peacefully.
In […] the second category, the winner takes all, meaning that the fittest node grabs all the links, leaving very little for the rest of the nodes. Such networks develop a star topology. […] A winner-takes-all network is not scale-free.
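A sketch of how the fitness-connectivity product changes the growth rule (my own illustration, not the book's; drawing fitness uniformly at random and adding one link per new node are simplifying assumptions):

```python
import random

def fitness_network(n, seed=0):
    """Growth where a new node links to an existing node with probability
    proportional to (fitness x degree), as in the Bianconi-Barabasi model."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]  # assumed uniform for simplicity
    degree = [1, 1]  # two seed nodes joined by a single link
    for new in range(2, n):
        # Each existing node's attachment weight is its fitness-connectivity product.
        weights = [fitness[i] * degree[i] for i in range(new)]
        target = rng.choices(range(new), weights=weights)[0]
        degree.append(1)     # the newcomer starts with its one link
        degree[target] += 1  # ...which also raises the chosen node's degree
    return fitness, degree

fitness, degree = fitness_network(3000)
hub = max(range(len(degree)), key=degree.__getitem__)
print("biggest hub: node", hub, "fitness", round(fitness[hub], 2), "degree", degree[hub])
```

With plain preferential attachment the oldest nodes always win; here a late-arriving, high-fitness node can overtake them.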
…the western blackout highlighted an often ignored property of complex networks: vulnerability due to interconnectivity
Yet, if the number of removed nodes reaches a critical point, the system abruptly breaks into tiny unconnected islands.
Computer simulations we performed on networks generated by the scale-free model indicated that a significant fraction of nodes can be randomly removed from any scale-free network without its breaking apart.
…percolation theory, the field of physics that developed a set of tools that now are widely used in studies of random networks.
…they set out to calculate the fraction of nodes that must be removed from an arbitrarily chosen network, random or scale-free, to break it into pieces. On one hand, their calculation accounted for the well-known result that random networks fall apart after a critical number of nodes have been removed. On the other hand, they found that for scale-free networks the critical threshold disappears in cases where the degree exponent is smaller than or equal to three.
Disable a few of the hubs and a scale-free network will fall to pieces in no time.
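That asymmetry, robustness to random failure but fragility to targeted attack, is straightforward to reproduce in simulation (a rough sketch of my own; the network size and the 2 percent removal fraction are arbitrary choices):

```python
import random

def grow_scale_free(n, rng):
    """Preferential-attachment network (two links per new node) as an adjacency dict."""
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    endpoints = [0, 0, 1, 1, 2, 2]  # one entry per link end: degree-weighted sampling
    for new in range(3, n):
        targets = set()
        while len(targets) < 2:
            targets.add(rng.choice(endpoints))
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            endpoints.extend((new, t))
    return adj

def giant_fraction(adj, removed):
    """Fraction of surviving nodes that sit in the largest connected cluster."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:  # iterative depth-first search over surviving nodes
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w in alive and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / len(alive)

rng = random.Random(1)
adj = grow_scale_free(4000, rng)
k = len(adj) // 50  # knock out 2 percent of the nodes
hubs = set(sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k])
randoms = set(rng.sample(sorted(adj), k))
rand_frac = giant_fraction(adj, randoms)
hub_frac = giant_fraction(adj, hubs)
print("after random removal:", round(rand_frac, 3))
print("after hub removal   :", round(hub_frac, 3))
```

Removing the same number of nodes at random barely dents the giant cluster, while removing the best-connected nodes strips away disproportionately many links.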
If, however, a drug or an illness shuts down the genes encoding the most connected proteins, the cell will not survive.
Obviously, the likelihood that a local failure will handicap the whole system is much higher if we perturb the most-connected nodes. This was supported by the findings of Duncan Watts, from Columbia University, who investigated a model designed to capture the generic features of cascading failures, such as power outages, and the opposite phenomenon, the cascading popularity of books, movies, and albums, which can be described within the same framework.
If a new product passes the crucial test of the innovators, based on their recommendation, the early adopters will pick it up.
What, if any, role is played by the social network in the spread of a virus or an innovation?
In 1954, Elihu Katz, a researcher at the Bureau of Applied Social Research at Columbia University, circulated a proposal to study the effect of social ties on behavior.
When it came to the spread of tetracycline, the doctors named by three or more other doctors as friends were three times more likely to adopt the new drug than those who had not been named by anybody.
Hubs, often referred to in marketing as “opinion leaders,” “power users,” or “influencers,” are individuals who communicate with more people about a certain product than does the average person.
Aiming to explain the disappearance of some fads and viruses and the spread of others, social scientists and epidemiologists developed a very useful tool called the threshold model.
any relation to Granovetter?
…critical threshold, a quantity determined by the properties of the network in which the innovation spreads.
For decades, a simple but powerful paradigm dominated our treatment of diffusion problems. If we wanted to estimate the probability that an innovation would spread, we needed only to know its spreading rate and the critical threshold it faced. Nobody questioned this paradigm. Recently, however, we have learned that some viruses and innovations are oblivious to it.
On the Internet, computers are not connected to each other randomly.
In scale-free networks the epidemic threshold miraculously vanished!
Hubs are among the first infected thanks to their numerous sexual contacts. Once infected, they quickly infect hundreds of others. If our sex web formed a homogeneous, random network, AIDS might have died out long ago. The scale-free topology at AIDS’s disposal allowed the virus to spread and persist.
As we’ve established, hubs play a key role in these processes. Their unique role suggests a bold but cruel solution: As long as resources are finite we should treat only the hubs. That is, when a treatment exists but there is not enough money to offer it to everybody who needs it, we should primarily give it to the hubs. (Pastor-Satorras and Vespignani; and Zoltan Dezso)
Are we prepared to abandon the less connected patients for the benefit of the population at large?
They [Michalis Faloutsos, Petros Faloutsos, and Christos Faloutsos] found that the connectivity distribution of the Internet routers follows a power law. In their seminal paper “On Power-Law Relationships of the Internet Topology” they showed that the Internet […] is a scale-free network.
Routers offering more bandwidth likely have more links as well. […] This simple effect is a possible source of preferential attachment. We do not know for sure whether it is the only one, but preferential attachment is unquestionably present on the Internet.
After many discussions and tutorials on how computers communicate, a simple but controversial idea emerged: parasitic computing.
Starting from any page (on the Internet), we can reach only about 24 percent of all documents.
If you want to go from A to D, you can start from node A, then go to node B, which has a link to node C, which points to D. But you can’t make a round-trip.
Not necessarily the case with bidirectional webmentions.
[Cass] Sunstein fears that by limiting access to conflicting viewpoints, the emerging online universe encourages segregation and social fragmentation. Indeed, the mechanisms behind social and political isolation on the Web are self-reinforcing.
Looks like we’ve known this for a very long time! Sadly it’s coming to a head in the political space of 2016 onward.
Communities are essential components of human social history. Granovetter’s circles of friends, the elementary building blocks of communities, pointed to this fact. […]
early indications that Facebook could be a thing…
One reason is that there are no sharp boundaries between various communities. Indeed, the same Website can belong simultaneously to different groups. For example, a physicist’s Webpage might mix links to physics, music, and mountain climbing, combining professional interests with hobbies. In which community should we place such a page? The size of communities also varies a lot. For example, while the community interested in “cryptography” is small and relatively easy to locate, the one consisting of devotees of “English literature” is much harder to identify and fragmented into many subcommunities ranging from Shakespeare enthusiasts to Kurt Vonnegut fans.
Searching for this type of community is an NP-complete problem. This section may be of interest to Brad Enslen and Kicks Condor. Cross reference research suggested by Gary Flake, Steve Lawrence, and Lee Giles from NEC.
Such differences in the structure of competing communities have important consequences for their ability to market and organize themselves for a common cause.
He continues to talk about how the pro-life movement is better connected and therefore better equipped to fight against the pro-choice movement.
Code–or software–is the bricks and mortar of cyberspace. The architecture is what we build, using the code as building blocks. The great architects of human history, from Michelangelo to Frank Lloyd Wright, demonstrated that, whereas raw materials are limited, the architectural possibilities are not. Code can curtail behavior, and it does influence architecture. It does not uniquely determine it, however.
Added on November 3, 2018 at 5:26 PM
Yes, we do have free speech on the Web. Chances are, however, that our voices are too weak to be heard. Pages with only a few incoming links are impossible to find by casual browsing. Instead, over and over we are steered toward the hubs. It is tempting to believe that robots can avoid this popularity-driven trap.
Facebook and Twitter applications? Algorithms help to amplify “unheard” voices to some extent, but gamifying the reading can also get people to read more (crap) than they were reading before because it’s so easy.
Your ability to find my Webpage is determined by one factor only: its position on the Web.
Facebook takes advantage of this with their algorithm
Thus the Web’s large-scale topology–that is, its true architecture–enforces more severe limitations on our behavior and visibility on the Web than government or industry could ever achieve by tinkering with the code. Regulations come and go, but the topology and the fundamental natural laws governing it are time invariant. As long as we continue to delegate to the individual the choice of where to link, we will not be able to significantly alter the Web’s large-scale topology, and we will have to live with the consequences.
After selling Alexa to Amazon.com in 1999
Brewster Kahle’s Alexa Internet company is apparently the root of the Amazon Alexa?
To return to our car analogy, it is…
Where before? I don’t recall this at all. Did it get removed from the text?
ref somewhere about here… personalized medicine
After researching the available databases, we settled on a new one, run by the Argonne National Laboratory outside Chicago, nicknamed “What Is There?” which compiled the metabolic network of forty-three diverse organisms.
…for the vast majority of organisms the ten most-connected molecules are the same. Adenosine triphosphate (ATP) is almost always the biggest hub, followed closely by adenosine diphosphate (ADP) and water.
A key prediction of the scale-free model is that nodes with a large number of links are those that have been added early to the network. In terms of metabolism this would imply that the most connected molecules should be the oldest ones within the cell. […] Therefore, the first mover advantage seems to pervade the emergence of life as well.
Comparing the metabolic network of all forty-three organisms, we found that only 4 percent of the molecules appear in all of them.
Developed by Stanley Fields in 1989, the two-hybrid method offers a relatively rapid semiautomated technique for detecting protein-protein interactions.
They [the results of work by Oltvai, Jeong, Barabasi, Mason (2000)] demonstrated that the protein interaction network has a scale-free topology.
…the cell’s scale-free topology is a result of a common mistake cells make while reproducing.
In short, it is now clear that the number of genes is not proportional to our perceived complexity.
We have learned that a sparse network of a few powerful directors controls all major appointments in Fortune 1000 companies; […]
Regardless of industry and scope, the network behind all twentieth century corporations has the same structure: It is a tree, where the CEO occupies the root and the bifurcating branches represent the increasingly specialized and nonoverlapping tasks of lower-level managers and workers. Responsibility decays as you move down the branches, ending with the drone executors of orders conceived at the roots.
This only holds for completely top-down structures, though; what about bottom-up or middle-out?
We have gotten to the point that we can produce anything that we can dream of. The expensive question now is, what should that be?
It is a fundamental rethinking of how to respond to the new business environment in the postindustrial era, dubbed the information economy.
This is likely late, but certainly an early instance of “information economy” in popular literature.
Therefore, companies aiming to compete in a fast-moving marketplace are shifting from a static and optimized tree into a dynamic and evolving web, offering a more malleable, flexible command structure.
While 79 percent of directors serve on only one board, 14 percent serve on two, and about 7 percent serve on three or more.
Indeed, the number of companies that entered in partnership with exactly k other institutions, representing the number of links they have within the network, followed a power law, the signature of a scale-free topology.
Makes me wonder if the 2008 economic collapse could have been predicted by “weak” links?
As research, innovation, product development, and marketing become more and more specialized and divorced from each other, we are converging to a network economy in which strategic alliances and partnerships are the means for survival in all industries.
This is troubling in the current political climate where there is little if any trust or truth being spread around by the leader of the Republican party.
As Walter W. Powell writes in Neither Market nor Hierarchy: Network Forms of Organization, “in markets the standard strategy is to drive the hardest possible bargain on the immediate exchange. In networks, the preferred option is often creating indebtedness and reliance over the long haul.” Therefore, in a network economy, buyers and suppliers are not competitors but partners. The relationship between them is often very long lasting and stable.
Trump vs. Trump
The stability of these links allows companies to concentrate on their core business. If these partnerships break down, the effects can be severe. Most of the time failures handicap only the partners of the broken link. Occasionally, however, they send ripples through the whole economy. As we will see next, macroeconomic failures can throw entire nations into deep financial disarray, while failures in corporate partnerships can severely damage the jewels of the new economy.
In some sense this predicts the effects of the 2008 downturn.
early use of the word?
A me attitude, where the company’s immediate financial balance is the only factor, limits network thinking. Not understanding how the actions of one node affect other nodes easily cripples whole segments of the network.
Hierarchical thinking does not fit a network economy.
We must help eliminate the need and desire of the nodes to form links to terrorist organizations by offering them a chance to belong to more constructive and meaningful webs.
And for poverty and gangs as well as immigration.
Their work has a powerful philosophy: “revelation through concealment.” By hiding the details they allow us to focus entirely on the form. The wrapping sharpens our vision, making us more aware and observant, turning ordinary objects into monumental sculptures and architectural pieces.
not too dissimilar to the font I saw today for memory improvement
"She was Joan of Arc, Madame Curie, and Florence Nightingale--all wrapped up in one."
One long, hot afternoon on Capitol Hill, in the summer of 1991, the most powerful man in Congress took on the most powerful person in American science. Science won. What does it take to end a reign of terror? The science fraud panic of the 1990s, part two of two.
An astonishing percentage of what I do with my clients’ web copy involves eradicating the phrase “click here” from their links. For more information, click here.
You see it everywhere. Everyone’s doing it, so it must be a best practice, right?
Wrong. It’s the worst possible practice. You should never, ever use “click here” in a web link.
“Click here” requires context.
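For concreteness, the difference in markup looks something like this (the URL is hypothetical):

```html
<!-- Weak: the link text means nothing out of context -->
<p>For more information about Webmentions, <a href="https://example.com/webmention">click here</a>.</p>

<!-- Better: the link text itself describes the destination -->
<p>Read more about <a href="https://example.com/webmention">how Webmentions work</a>.</p>
```

Screen-reader users often tab through a page’s links out of context, so descriptive link text helps accessibility as much as it helps scannability.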
Some good solid advice here for creating links!
So wait. Where are you? I guess I’m caught in my filter bubble again. After #DeleteFacebook, maybe you went back to your bicycle and your Polaroid camera. But maybe you’re out there still...
I love some of the growing ideas about links, discovery, and serendipity that I’m seeing bubble up in my feed reader lately. I bookmarked this one quickly while skimming on my birthday and finally got to revisit it. Can’t wait to see where Kicks Condor takes the exercise.
Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see. Meanwhile, I’ll be linking, linking, linking…
Nearly every post I write has a link to an external post, either as a reference or as a recommendation. And every single time, I go through this struggle of deciding which word should carry the link. It was so naive of me to think Dave Winer wouldn’t have written about it. Of course, Dave had. He...
Within the social media space there’s a huge number of services that provide a variety of what I would call bookmark-type functionality of one sort or another. They go under a variety of monikers including bookmarks, likes, favorites, stars, reads, follows, claps, and surely many quirky others. Each platform has created its own semantics which don’t always overlap with the others.
Because I’m attempting to own all of my own data, I’ve roughly mapped many of these intents into my own website. But because I have the ultimate control over them, I get to form my own personal definitions. I also have a lot more control over them in addition to adding other metadata to each for better after-the-fact search and use within my personal online commonplace book. As such, I thought it might be useful to lay out some definitions (both for myself and others) for how I view these on my website.
At the most basic level, I look at most of these interactions simply as URL permalinks to interesting content and their aggregation as a “linkblog”, or a feed of interesting links I’ve come across. The specific names given to them indicate what exactly I think makes them interesting.
In addition to a bookmark-specific feed, which by itself could be considered a “traditional” linkblog, my site also has separate aggregated feeds for things I’ve liked, read, followed, and favorited. It’s the semantic reasons for saving or featuring these pieces of content which ultimately determine which names they bear. (For those interested in subscribing to one or more, or all of these, one can add
/feed/ to the ends of the specific types’ URLs, which I’ve linked, for an RSS feed. Thus, for example, http://boffosocko.com/type/link/feed/ will give you the RSS feed for the “Master” linkblog that includes all the bookmarks, likes, reads, follows, and favorites.)
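The feed URL pattern above can be sketched in a few lines; the base URL and the “link” master feed come from the text, but the slugs for the other kinds, and the helper itself, are illustrative assumptions:

```python
# Construct RSS feed URLs by appending /feed/ to each post-type URL,
# per the pattern described above. Only /type/link/feed/ is confirmed
# in the text; the other slugs are guesses for illustration.
BASE = "http://boffosocko.com/type"

def feed_url(kind: str) -> str:
    """Return the presumed RSS feed URL for a given post kind."""
    return f"{BASE}/{kind}/feed/"

# "link" is the master linkblog aggregating all of the other kinds.
kinds = ["link", "bookmark", "like", "read", "follow", "favorite"]
feeds = {kind: feed_url(kind) for kind in kinds}
```

Pointing a feed reader at any of these URLs (if they resolve) would subscribe you to just that slice of the linkblog.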
On my site, I try to provide a title for the content and some type of synopsis of what the content is about. These help to provide some context to others seeing them as well as a small reminder to me of what they were about. When appropriate/feasible, I’ll try to include an image for similar reasons. I’ll also often add a line of text or two as a commentary or supplement to my thoughts on the piece. Finally, I add an icon to help to quickly visually indicate which of the types of posts each is, so they can be more readily distinguished when seen in aggregate.
In roughly decreasing order of importance or value to me, I would order them as follows (with their attached meanings as I view them on my site):
- Favorite – This is often something which might easily have had designations of bookmark, like, and/or read, or even multiple of them at the same time. In any case they’re often things which I personally find important or valuable in the long term. There are far fewer of these than any of the other types of linkblog-like posts.
- Follow – Indicating that I’m now following a person, organization, or source of future content which I deem to provide enough consistent value to my life that I want to be able to see what that source is putting out on a regular basis. Most often these sources have RSS feeds which I consume in a feed reader, but frequently they’ll appear on other social silos which I will have ported into a feed reader as well. Of late I try to be much more selective in what I’m following and why. I also categorize sources based on topics of value to me. Follows often include sources which I have either previously often liked or bookmarked or suspect I would like or bookmark frequently in the future. For more details see: A Following Page (aka some significant updates to my Blogroll) and the actual Following page.
- Read – These are linkblog-like posts which I found interesting enough for one reason or another to have actually spent the time to read in their entirety. For things I wish to highlight or found most interesting, I’ll often add additional thought or commentary in conjunction with the post.
- Like – Depending on the content, these posts may not always have been read in their entirety, but I found them more interesting than the majority of content which I’ve come across. Most often these posts serve to show my appreciation for the original source of the related post as a means of saying “congratulations”, “kudos”, “good job”, or in cases of more personal level content “I appreciate this”, “you’re awesome”, or simply as the tag says “I liked this.”
- Bookmark – Content which I find interesting, but might not necessarily have the time to deal with at present. Often I’ll wish to circle back to the content at some future point and engage with it at a deeper level. Bookmarking it prevents me from losing track of it altogether. I may optionally add a note about how the content came to my attention to be able to better remember it at a future time. While there are often things here which others might have “liked” or “favorited” on other social silos, on my site these things have been found interesting enough to have been bookmarked, but I haven’t personally read into them enough yet to form any specific opinion about them beyond their general interest to me or potentially followers interested in various category tags I use. I feel like this is the lowest level of interaction, and one in which I see others often like, favorite, or even repost on other social networks without having actually read anything other than the headline, if they’ve even bothered to do that. In my case, however, I more often than not actually come back to the content while others on social media rarely, if ever, do.
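The rough hierarchy in the list above could be captured as simple data. This is only a sketch with hypothetical names, not an actual schema from the site, ordering the post kinds by the decreasing personal value described:

```python
# Linkblog post kinds in the rough order of decreasing personal value
# described above; the list and helper are illustrative only.
LINKBLOG_KINDS = ["favorite", "follow", "read", "like", "bookmark"]

def rank(kind: str) -> int:
    """Lower rank means higher personal value (0 is the most valued)."""
    return LINKBLOG_KINDS.index(kind)
```

Such an ordering could, for instance, be used to sort a mixed feed of these post types by their presumed importance.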
While occasionally some individual specimens of each might “outrank” others in the category above, this is roughly the order of how I perceive them. Within this hierarchy, I do have some reservations about including the “follow” category, which in some sense I feel stands apart from the continuum represented by the others. Still, it fits into the broader category of a thing with a URL, title, and high interest to me. Perhaps the difference is that it represents a store of future potentially useful information that hasn’t been created or consumed yet? An unseen anti-library of people instead of books, in some sense of the word.
I might also include the Reply post type toward the top of the list, but for some time I’ve been categorizing these as “statuses” or “note-like” content rather than as “links”. These obviously have a high priority if lumped in as I’ve not only read and appreciated the underlying content, but I’ve spent the time and thought to provide a reasoned reply, particularly in cases where the reply has taken some time to compose. I suppose I might more likely include these as linkblog content if I didn’t prefer readers to value them more highly than if they showed up in those feeds. In some sense, I value the replies closer on par to my longer articles for the value of not only my response, but for that of the original posts themselves.
In general, if I take the time to add additional commentary, notes, highlights, or other marginalia, then the content obviously resonated with me much more than those which stand as simple links with titles and descriptions.
Perhaps in the near future, I’ll write about how I view these types on individual social media platforms. Often I don’t post likes/favorites from social platforms to my site as they often have less meaning to me directly and likely even less meaning to my audiences here. I suppose I could aggregate them here on my site privately, but I have many similar questions and issues that Peter Molnar brings up in his article Content, Bloat, privacy, archives.
I’m curious to hear how others apply meaning to their linkblog type content especially since there’s such a broad range of meaning from so many social sites. Is there a better way to do it all? Is it subtly different on sites which don’t consider themselves (or act as) commonplace books?