In just a few minutes with a quick demo, I’ve been able to build a local website. This seems a bit easier than I had initially expected, but there’s still a way to go…
Within higher education, requests to build websites for individual faculty members sit at the absolute bottom of the work queue for most marketing/communications teams. If this type of product is offered at all, it typically uses a self-service model: the institution provides the platform while the faculty member provides the content. And while this is the most sustainable model for most small and mid-sized web teams, it tends to produce websites that are ineffective at communicating even simple messages. Worse, they tend to become poor reflections of the institution, with high rates of abandonment or misuse.
Let's fix that tendency together. With a careful examination of what really matters to faculty members who are looking to create and maintain their own websites, we can begin to build better sites. With better sites (and a little luck), you can start to derive value from the project at the bottom of your work pile.
Together we'll talk about:
- A simple analysis of the types of content that you'll typically find within a faculty website.
- A "wish list" for the types of content that you (as a marketer) would really like to see from these types of sites.
- A working example of a theme that delivers on these key concepts and adds some "quick wins" which make for a better experience.
- How to leverage the capabilities of WordPress multisite to produce more value from collections of these types of sites.
I totally want to start using something like this myself, not only to test it out, but to build in the proper microformats v2 markup so that it’s IndieWeb friendly. Perhaps a project for the planned IWC Pop-up theme-raising session?
ISBNdb gathers data from hundreds of libraries, publishers, merchants and other sources around the globe to compile a vast collection of unique book data searchable by ISBN, title, author, or publisher. Get a FREE 7 day trial and get access to the full database of 24 + million books and all data points including title, author, publisher, publish date, binding, pages, list price, and more.
Pingbacks are essentially dead; in my personal experience some of the few sites that still support them are in academia, but they’re relatively rare and have horrible UI in the best of times. Webmention is a much better evolutionary extension of the pingback idea and has been growing rapidly since before the spec was standardized by the W3C.
I’ve sketched out how individual academics could use their own websites to publish pre-prints and syndicate them to pre-print servers and even to their final publications, while still leveraging Webmentions to allow their journal articles, books, and other works to receive mentions from other web publications as well as from social media platforms that reference them.
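Mechanically, the Webmention half of this flow is quite small. Here's a sketch (all URLs hypothetical) of the notification Zeynep's site would send to the endpoint advertised by Tessie's site; per the W3C spec, it's just a form-encoded POST with `source` and `target` URLs.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_webmention(endpoint: str, source: str, target: str) -> Request:
    """Build the HTTP POST a citing page sends to the cited page's
    advertised Webmention endpoint: a form-encoded body containing
    the `source` (citing) and `target` (cited) URLs."""
    body = urlencode({"source": source, "target": target}).encode()
    return Request(endpoint, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

# Hypothetical URLs for illustration only:
req = build_webmention(
    "https://tessie.example/webmention",          # endpoint discovered via rel="webmention"
    "https://zeynep.example/2020/citing-tessie",  # Zeynep's citing post
    "https://tessie.example/papers/original",     # Tessie's cited paper
)
```

The receiving endpoint then fetches the `source` URL and verifies that it really links to `target` before displaying anything, which is what makes the whole scheme hard to spoof.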
I think the Microformats process is probably the best standardized way of doing this with classes and basic HTML, and there is a robust offering of parsers in a variety of programming languages to help get this going. To my mind the pre-existing h-cite is probably the best route to use, along with the well-distributed and oft-used <cite> tag, with authorship details easily fitting into the attendant h-card.
As an example, if Zeynep were to cite Tessie, then she could write up her citation in basic HTML with a few microformats and include a link to the original paper (with rel="canonical", or to copies on pre-print servers or other journal repositories with rel="alternate" markup). On publishing, a standard Webmention would be sent and verified, and Tessie could have the option of displaying the citation on her website in something like a "Citations" section. The Post Type Discovery algorithm is sophisticated enough that I think a "citation" like this could be included in the parsing, helping to automate how these are found and displayed while still providing some flexibility to both ends of the transaction.
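As a sketch of what Zeynep's markup could look like, here's a hypothetical helper that wraps a citation in the <cite> tag with h-cite classes. The class names (h-cite, u-url, p-name, p-author, h-card) come from the microformats vocabulary; the helper itself and the example data are my own invention.

```python
def render_h_cite(name: str, author: str, url: str, rel: str = "canonical") -> str:
    """Render a minimal microformats2 h-cite citation wrapped in the
    HTML <cite> element. Class names follow the h-cite vocabulary;
    authorship sits in a nested p-author h-card."""
    return (
        f'<cite class="h-cite">'
        f'<a class="u-url p-name" href="{url}" rel="{rel}">{name}</a> by '
        f'<span class="p-author h-card">{author}</span>'
        f'</cite>'
    )

# Zeynep citing Tessie's paper (hypothetical title, author, and URL):
html = render_h_cite(
    "On Networked Scholarship",
    "Tessie Example",
    "https://tessie.example/papers/original",
)
```

Any standard microformats2 parser on the receiving end would pick the citation details straight out of this markup.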
Ideally all participants would also support sending salmentions so that the online version of the “officially” published paper, say in Nature, that receives citations would forward any mentions back to the canonical version or the pre-print versions.
Since most of the basic citation data is semantically marked up, the receiver should be able to parse it and then render it in any of the thousands of journal citation formats they like, displaying whatever particular flavor suits the receiving website, which may be its own interesting sub-problem.
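As a toy illustration of that sub-problem, assume the parser has already reduced the markup to a handful of fields; the field names and both "styles" below are drastically simplified stand-ins for real citation formats.

```python
def format_citation(fields: dict, style: str = "apa") -> str:
    """Render parsed citation fields (as a receiver's microformats
    parser might produce them) in the receiver's preferred style.
    Both styles here are drastically simplified stand-ins."""
    author, year = fields["author"], fields["year"]
    title, journal = fields["title"], fields["journal"]
    if style == "apa":
        return f"{author} ({year}). {title}. {journal}."
    if style == "mla":
        return f'{author} "{title}." {journal}, {year}.'
    raise ValueError(f"unknown style: {style}")

# Hypothetical parsed citation data:
fields = {"author": "Example, Z.", "year": 2020,
          "title": "On Networked Scholarship", "journal": "Nature"}
print(format_citation(fields, "apa"))
# → Example, Z. (2020). On Networked Scholarship. Nature.
```

The point being that the sender only ever transmits structured data; the display format is entirely the receiver's choice.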
Of course, those wishing to use schema.org or JSON-LD could include additional markup (and parsing) for those as well, if they liked.
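For instance, here's a sketch of equivalent JSON-LD using schema.org's ScholarlyArticle type, which could be embedded in a `<script type="application/ld+json">` block alongside the microformats (title, author, and URL are hypothetical):

```python
import json

# Minimal schema.org JSON-LD for the same hypothetical citation data;
# this would sit alongside (not replace) the microformats markup.
article = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "On Networked Scholarship",
    "author": {"@type": "Person", "name": "Tessie Example"},
    "url": "https://tessie.example/papers/original",
}
json_ld = json.dumps(article, indent=2)
print(json_ld)
```

Search engines tend to prefer the JSON-LD form, so carrying both costs little and widens the audience for the same structured data.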
Perhaps I ought to write a longer journal article with a full outline and diagrams to formalize it and catch some of the potential edge cases.
I think some of the POSSE (Post on your Own Site, Syndicate Elsewhere) model may work to smooth some of this over. For example, I can write my response to everyone on my own WordPress site and fairly easily syndicate it to Twitter to have the best of both worlds.
If this weekend isn’t convenient, let’s host a pop-up session or mini-conference in a bit to discuss it and see what we can hack together.
Things can be worse for more independent or self-published works where the author doesn’t know how these identifiers work. These may often have no ISBN at all, regardless of the format.
The least “indie” thing one could do would be to use the Amazon Standard Identification Number (ASIN), a number assigned by Amazon. ASINs are easy to find on Goodreads solely because Goodreads is owned by Amazon. In many cases, there are far more editions listed on Goodreads than actually exist because of the lack of ISBNs and poor de-duplication of the editions they import from a variety of data sources, including Amazon itself.
To my knowledge, the only true way to find the “correct” ISBN is to copy it directly from the book/source itself.
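One small mitigation when copying by hand: ISBN-13 carries a built-in check digit (digits are weighted alternately by 1 and 3, and the weighted sum must be divisible by 10), so a quick validation catches most transcription errors. A minimal sketch:

```python
def valid_isbn13(isbn: str) -> bool:
    """Check an ISBN-13's check digit. Digits are weighted
    alternately 1, 3, 1, 3, ...; a correctly transcribed ISBN-13
    has a weighted sum divisible by 10. Hyphens/spaces are ignored."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits)) % 10 == 0

print(valid_isbn13("978-0-306-40615-7"))  # → True
print(valid_isbn13("978-0-306-40615-6"))  # → False (bad check digit)
```

It won't tell you whether the number belongs to the right edition, but it will tell you whether you copied it correctly.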
Institute ends negotiations for a new journals contract in the absence of a proposal aligning with the MIT Framework for Publisher Contracts.
#pcPopUp2020 is 6 hours old in the UK, but way older in other parts of the world, feel free to join in— PressED Conf - A tweeting WordPress conference (@pressedconf) May 27, 2020
Below are my initial thoughts and problems.
The /home/ page has a lot of errors and warnings. (Never a good sign.)
It took me a few minutes to figure out where the Wik-it! bookmarklet button was hiding. Ideally it would have been in the start card that described how the bookmarklet would work (in addition to its original spot).
The Wikity theme seems to have some issues when used with http vs. https.
- Fewer things seem to work out of the box with https
- The main card for entering “Name of Concept or Data” didn’t work at all under https. It only showed the title and wouldn’t save. Switching to http seemed to fix it and show the editor bar.
- Nothing seemed to work at all when I had my site as https. In fact, it redirected to a URL that seemed like it wanted to run update.php for some bizarre reason.
- On http I at least get a card saying that the process failed.
- Not sure what may be causing this.
- Doesn’t seem to matter how many cards it is.
- Perhaps it’s the fact that Aaron’s site is https? I notice that his checkbox export functionality duplicates his entire URL, including the https://, within the export box, which seems to automatically prepend http://
- Copying to my own wiki seems to vaguely work using http, but failed on https.
Multiple asterisks (*) in the markdown editor functionality within WordPress don’t seem to format the way I’d expect.
Sadly, the original Wikity.cc site is down, but the theme still includes a link to it front and center on my website.
The home screen’s quick new card has some wonky CSS that off-centers it.
Toggling full-screen editing mode on new cards from the home screen makes them too big and obscures the UI, making things unusable.
The primary multi-card home display doesn’t work well with markup the way the individual posts do.
The custom theme seems to be hiding some of the IndieWeb pieces. It may also be hampering the issuance of webmentions, as I tried sending one to myself and it only showed up as a pingback. It didn’t feel worth the effort to give the system a full IndieWeb test drive beyond this.
Setting this up as a theme that leverages posts seems like a very odd choice. From my reading, Mike Caulfield was relatively new to WordPress development when he made this. Even if he were an intermediate developer, he should be proud of his effort, including his attention to some minute bits of UI that others wouldn’t have considered. To make this a more ubiquitous solution, it might have been a better choice to create it as a plugin, use a custom post type for wiki cards, and create a separate section of the database for them instead of trying to leverage posts. That way it could have been installed on any pre-existing WordPress install, and users could choose their own favorite theme and still have a wiki built into it. In this incarnation it’s really only meant to be installed on a fresh stand-alone site.
I only used the Classic Editor and didn’t even open up the Gutenberg box of worms in any of my tests.
The Wikity theme hasn’t been maintained in four years and it looks like it’s going to take quite a bit of work (or a complete refactoring) to make it operate the way I’d want it to. Given the general conceptualization it may make much more sense to try to find a better maintained solution for a wiki.
The overarching idea of what he was trying to accomplish, particularly within the education space and the OER space, was awesome. I would love nothing more than to have wiki-like functionality built into my personal WordPress website, particularly if I could have different presentations for the two sides but still maintain public/private versions of pieces and still have site-wide tagging and search. Having the ability to port data from site to site is a particularly awesome idea.
Is anyone actively still using it? I’d love to hear others’ thoughts about problems/issues they’ve seen. Is it still working for you as expected? Is it worth upgrading the broken bits? Is it worth refactoring into a standalone plugin?
Joel walks us through his 20+ year strong personal website, and digs into his frustrations with past versions, and how he's building the latest edition to generate both a website and a book.
As a result of the development of the COVID-19 outbreak, ALT had to cancel the face to face OER20 Conference in London, 1-2 April 2020 (12 March announcement).
Online (United Kingdom)
April 1, 2020 at 01:30 AM – April 1, 2020 at 08:30 AM
Imagine webmentions being used for referencing journal articles, academic samizdat, or even OER? Suggestions and improvements could accumulate on the original content itself rather than being spread across dozens of social silos on the web.
An ebook published using TiddlyWiki