Cory Dransfeldt
Sharing links via RSS, sharing links via APIs
#development #ai #rss #eleventy #javascript
I subscribe to a whole bunch of blogs, and less and less high-volume news, via RSS. It's one of my absolute favorite mediums for keeping up with and reading content on the web. It's distributed, open and decentralized, and it remains one of those under-appreciated layers that stitches content across the web together.
For the past year and change I've been using Readwise's Reader as my RSS and read-it-later app of choice. It has a lightweight API that consists of, essentially, a `read` and a `write` endpoint. The `read` endpoint is paginated and rate-limited[^1] (they were even kind enough to add a `source_url` property when asked), so I cache the data I need from my saved articles in a B2 bucket.
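To sketch what working with that pagination looks like: Reader's `list` endpoint returns a `nextPageCursor` that you pass back as `pageCursor` until it comes back empty. Roughly something like the following, assuming a `READWISE_TOKEN` environment variable and leaving the write to B2 out of scope:

```javascript
// Fetch all saved documents from the Readwise Reader list endpoint,
// following pagination via the nextPageCursor each response returns.
const fetchReadwiseDocuments = async () => {
  const documents = [];
  let pageCursor = null;

  do {
    const url = new URL('https://readwise.io/api/v3/list/');
    if (pageCursor) url.searchParams.set('pageCursor', pageCursor);

    const response = await fetch(url, {
      headers: { Authorization: `Token ${process.env.READWISE_TOKEN}` },
    });
    const data = await response.json();

    documents.push(...data.results);
    pageCursor = data.nextPageCursor;
  } while (pageCursor);

  return documents;
};
```

The cached copy in B2 is what makes this workable: the rebuild only has to page through the API for documents it hasn't already stored.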
I fetch new links when my site rebuilds, adding them to my links page and my now page. Next, to actually share the links, I build an RSS feed sourced from the fetched Readwise data (the Readwise endpoint also provides link descriptions that you can see in the feed) and a JSON feed that's syndicated periodically to Mastodon.[^2]
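As a rough sketch of the JSON feed side, generating it might look something like this; the `links` shape, the site title and the URLs here are illustrative assumptions, not the actual schema:

```javascript
// Build a JSON Feed (https://jsonfeed.org) from the cached link data.
// The link object shape is an assumption: a title, the source_url the
// Reader API exposes and the description it provides.
const buildJsonFeed = (links) => ({
  version: 'https://jsonfeed.org/version/1.1',
  title: 'Shared links',
  home_page_url: 'https://example.com/links/',
  feed_url: 'https://example.com/feeds/links.json',
  items: links.map((link) => ({
    id: link.source_url,
    url: link.source_url,
    title: link.title,
    content_text: link.summary ?? '',
    date_published: link.saved_at,
  })),
});
```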
All of this is kicked off when I tag a link as `share` and archive it, which offers some reasonable assurance that I've read it. This is all a bit contrived, I suppose, and I had expected it to be more fragile than it's proven to be.
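The filter itself might look roughly like this; Reader documents expose a `location` field and a `tags` object, and treating `location === 'archive'` plus a `share` tag as "ready to publish" is the assumption here:

```javascript
// Keep only documents tagged `share` that have also been archived,
// since archiving is the signal that a link has actually been read.
const getSharedLinks = (documents) =>
  documents.filter(
    (doc) =>
      doc.location === 'archive' &&
      Object.keys(doc.tags ?? {}).includes('share')
  );
```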
Implementation aside, all of this goes to what makes the web so interesting and has for a long time. It's all links, and all links can be shared on an equal, open basis. It's up to us to share and surface what we like and to continue doing so. To a degree it's about curation, but it's really about highlighting things you've liked in the hopes that someone else will like them too.
It's encouraging to see more and more folks sharing links, boosting them and blogging them. It makes discovery easier when it's done cooperatively.
We're heading towards a web filled with more AI-generated and SEO-motivated sludge which, while heartbreaking, can be countered in some small degree by being discerning readers and continuing to share. We don't need AI mediating sharing or discovery and, frankly, products like Arc Search are insulting to both users and creators: they assume they know better than users by surfacing summaries and snippets of content, while providing no benefit to the owners of the sites they're scraping.
Jim Nielsen got this quite right in his post about subversive hyperlinks:
> The web has a superpower: permission-less link sharing.
>
> I send you a link and as long as you have an agent, i.e. a browser (or a mere HTTP client), you can access the content at that link.
>
> This ability to create and disseminate links is almost radical against the backdrop of today’s platforms.
We're all empowered to engage in this and platforms can't control it or take it away. They can, and annoyingly do, put scare screens in front of users who follow links out of their platforms, but they cannot change the fundamental behavior.
A lot has changed, and the commercialization of the web has yielded barriers and walled gardens, but `feed://` and `http(s)://` persist to enable unmediated discovery and distribution, and that's more important than ever.
[^1]: Which totally makes sense! Abuse is bad and performance is important.
[^2]: The technical details are written up in another post.