You may notice a comment submission form at the bottom of this page. As of this post, comments are live, using a prototype store-and-forward relay server. Right now, the only functionality is to accept posts and store them; the forwarding part hasn’t been written. Also, any comments I receive will have to be manually added by me (and moderated at the same time). Let me know if it’s working.
And you thought 2021 would be any better.... I woke this morning to the news that Ron Paul has been banned from Facebook. While I hate Facebook’s guts and everyone involved in creating that cesspool of conformity and control, and I think their having one fewer person captive in their walled garden is a good thing, there are some extremely disturbing trends coalescing. The ban was done on the grounds of “Violation of Facebook’s Terms of Service”, implying that Ron Paul did something wrong, which is a massive, steaming pile of bull.
I’ve been following this thread, which mostly boils down to this: if I have the SHA-256 hash of a file, can I use that to get the file from IPFS? The short answer is not really, because Merkle-DAG. The slightly longer answer: the Merkle-DAG is required to allow chunking files and verifying those chunks as they come in, and SHA-256 has no facility to combine the hashes of two components into a single hash for the combined block. As a result, you can’t look up a file on IPFS by the hash of the entire file in a way that lets you verify each block belongs to that hash without having to download the entire file.
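The mismatch is easy to demonstrate with plain `sha256sum`. This is a minimal sketch, not anything IPFS actually does: the file names and chunk contents are made up, and hashing the concatenated chunk digests is just a stand-in for how a Merkle tree derives a parent node from its children.

```shell
# Work in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp"

# A sample file split into two chunks, the way IPFS chunks large files.
printf 'chunk-one' > part1
printf 'chunk-two' > part2
cat part1 part2 > whole

# SHA-256 of the whole file.
whole_hash=$(sha256sum whole | awk '{print $1}')

# Hash each chunk, then hash the concatenated chunk digests --
# roughly how a Merkle tree derives a parent node from its children.
h1=$(sha256sum part1 | awk '{print $1}')
h2=$(sha256sum part2 | awk '{print $1}')
parent_hash=$(printf '%s%s' "$h1" "$h2" | sha256sum | awk '{print $1}')

# Plain SHA-256 defines no rule for deriving whole_hash from h1 and h2,
# so the Merkle parent never matches the whole-file digest.
echo "whole:  $whole_hash"
echo "parent: $parent_hash"
```

The two digests always differ, which is the whole problem: knowing the flat SHA-256 of a file tells you nothing about what the hashes of its chunks (or their Merkle parent) should be.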
I have been using the project described here for some time now to get Arch Linux package updates, even going so far as to join the pinning cluster, but I’m not sure how much longer that is going to last. As of five days ago, the cluster has stopped getting updates, and according to the status page, it is offline for the foreseeable future. The quick(ish) fix is to roll the IPFS software back from the development version of ipfs-0.
Another weekend, another wabt-hole. I got Crystal to compile to WebAssembly and then run in Brave, and it only took two days to get set up and have it print “Hello, World!” in the console. I tried to use WebAssembly a couple of years ago, but dropped it once I failed to get a simple hello-world program to function. This is the same reason I don’t have Haskell in my programming utility belt: no hello world.
While doing a web search for “steam ipfs”, I happened upon the website en.wikipedia-on-ipfs.org. I have known that there was a copy of Wikipedia made in 2017 for English, Kurdish, and Turkish, but only today did I find out that somebody has registered a domain name for it. It looks like a recent development.
Over the past couple of years, I’ve been thinking about things that could replace parts of the internet and web we currently use. There are projects like cjdns that are looking to replace the network routing layer of the internet with a system that does not require a centralized authority to issue IP addresses. Other parts of the web stack have would-be replacements as well (IPFS is one of them, aiming to replace HTTP(S)), but the one I will be looking at in this post is the Domain Name System (DNS).
Planned obsolescence is evil. It robs the greater part of the people of their things by consciously designing those things to break well before the end of their useful life. And it is robbery. Theft. It’s incredibly sad that people now expect their things to constantly break. The perfected form of planned obsolescence would be to have everything break as soon as it was taken out of the box, so that you were forced to turn right around and go buy another one, which itself would promptly break, and so on, until you ran completely out of both money and credit, reducing you to abject poverty while making the rich even richer.
I run an instance of Nextcloud for storing files, contacts, calendars, and similar things that most other people use Google or Microsoft for. I self-host the server, as should be done to truly own the data, but about a month ago, the official Nextcloud app on my phone stopped working for no discernible reason. I had finally gotten to the point where fixing that was the next task, so that’s what I tackled today.
I have made another change to the git repo handling code so that, when publishing, only the repos that have been updated are added to IPFS again. This way, large repos only slow down the publish process when they are updated, not every time any repository is updated. The new process is this:

1. The post-update hook adds its path to the spool directory.
2. The monitor process sees the update and starts a publish.
3. The existing repo tree is added to a temporary directory in the mutable file system (MFS).
4. For each updated repo:
   - The repo is added to IPFS without pinning.
   - The directory for that repo in MFS is removed and replaced with the new hash.
   - The old hash is unpinned and the new hash pinned.
5. The root hash of the new repo directory structure is published.
6. The temporary directory is removed from MFS.

Now, git push is almost exactly the same speed as a plain ssh remote (only an additional flag set), and the update is fast for small repositories, only slowing down when processing a large repo.
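The steps above can be sketched with the `ipfs` CLI. This is only an illustration, not the actual publish script: it assumes a running IPFS daemon, and the repo name (`myrepo`), on-disk path (`/srv/git`), and MFS layout (`/repos`) are all hypothetical.

```shell
REPO=myrepo          # hypothetical repo name
SRC=/srv/git/$REPO   # hypothetical on-disk path

# Stage the existing published tree in a temporary MFS directory.
ipfs files cp /repos /repos-work

# Add the updated repo without pinning; -Q prints only the root hash.
NEW_HASH=$(ipfs add -r -Q --pin=false "$SRC")

# Swap the repo's directory in MFS for the new hash.
OLD_HASH=$(ipfs files stat --hash "/repos-work/$REPO")
ipfs files rm -r "/repos-work/$REPO"
ipfs files cp "/ipfs/$NEW_HASH" "/repos-work/$REPO"

# Unpin the old hash and pin the new one.
ipfs pin rm "$OLD_HASH"
ipfs pin add "$NEW_HASH"

# Publish the root hash of the updated tree, then drop the work dir.
ROOT=$(ipfs files stat --hash /repos-work)
ipfs name publish "$ROOT"
ipfs files rm -r /repos-work
```

Because `ipfs add --pin=false` only writes blocks that aren’t already present, unchanged repos cost nothing, which is where the speedup for small updates comes from.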