In my time using IPFS, there were things that I think were very good. Even though the balance, in my opinion, is net negative, the good parts should be lifted out for use in follow-on systems (which is what I plan to do).
HTTP Gateway
Probably the best part of the system is that it has had an HTTP gateway built in from the beginning. This allows for piecemeal bootstrapping, since clients that do not run IPFS can still reach content over plain HTTP, which addresses one of the largest problems facing any new protocol or system.
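To make that concrete: any plain HTTP client can fetch IPFS content through a public gateway or through a local node's gateway, with no IPFS software on the client (the CID below is a placeholder):

    # Through a public gateway; the client only speaks HTTP.
    curl https://ipfs.io/ipfs/<CID>/index.html

    # Through a local node's gateway (default port 8080).
    curl http://127.0.0.1:8080/ipfs/<CID>/index.html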
I have begun the process of retiring my usage of IPFS in favor of a system I am prototyping. Until the new system is up and running, I will be moving all the projects I have been publishing to IPFS to the normal clearnet World Wide Web.
My reasons for retiring IPFS are many, but here are the main ones:
It is not private
While it claims a level of censorship resistance, as of December 2021 IPFS provides no way to serve data without your public IP address being globally enumerable, along with the list of content you are hosting.
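This is easy to demonstrate with the stock CLI: the DHT will tell anyone which peers are providing a given CID, and a peer ID resolves to its advertised addresses, public IPs included (the CID and peer ID below are placeholders):

    # Ask the DHT which peers provide a given CID; this returns peer IDs.
    ipfs dht findprovs <CID>

    # Resolve one of those peer IDs to its advertised multiaddrs.
    ipfs dht findpeer <PeerID>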
When I first came across WebAssembly, I was already aware of the IPFS project, and immediately thought that the two would work well together. Specifically, I thought that WebAssembly modules stored in IPFS could be loaded automatically by an edge web server to allow for dynamic content without a centralized server and without requiring IPFS on the device utilizing the service. The latter is important for supporting legacy devices and services that cannot be updated.
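A rough sketch of the idea, with a local gateway and the wasmtime runtime standing in (any gateway and WASI runtime would do; the CID is a placeholder):

    # The edge server pulls the module out of IPFS via its local gateway...
    curl -so handler.wasm http://127.0.0.1:8080/ipfs/<ModuleCID>

    # ...and executes it with a WASI runtime to produce the dynamic content.
    wasmtime handler.wasm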
I’ve been following this thread that mostly boils down to this: if I have the SHA-256 hash of a file, can I use that to get the file from IPFS? The short answer is “not really”, because Merkle-DAG.
The slightly longer answer is that the Merkle-DAG is required to allow chunking files and verifying those chunks as they come in, and SHA-256 has no facility for combining the hashes of two components into a single hash for the combined block. As a result, you can’t find files on IPFS by the hash of the entire file in a way where you can verify that each block belongs to that hash without having to download the entire file.
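You can see the mismatch directly; --only-hash computes the CID without actually adding anything to the node (the file name is a placeholder):

    # Plain SHA-256 of the whole file.
    sha256sum big-file.iso

    # The CID IPFS would assign: the hash of the DAG root, not the raw bytes.
    ipfs add --only-hash big-file.iso

For any file larger than the chunk size, the multihash inside the CID is the hash of the root DAG node that describes the chunks, so it will never match the plain SHA-256 of the file's contents.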
I have been using the project described here for some time now to get Archlinux package updates, even going so far as to join the pinning cluster, but I’m not sure how much longer that is going to last. As of five days ago, the cluster has stopped getting updates, and according to the status page, the cluster is offline for the foreseeable future. The quick(ish) fix is to roll the IPFS software back from the development version of ipfs-0.
While doing a web search for “steam ipfs”, I happened upon the website en.wikipedia-on-ipfs.org. I have known that there was a copy of Wikipedia made in 2017 for English, Kurdish, and Turkish, but only today did I find out that somebody has registered a domain name for it. It looks like a recent development.
Over the past couple of years, I’ve been thinking about things that could be used to replace parts of the internet and web we currently use. There are projects like cjdns that are looking to replace the network routing layer of the internet with a system that does not require a centralized authority to issue IP addresses.
Other parts of the web stack also have would-be replacements (IPFS is one of them, aiming to replace HTTP(S)), but the one I will be looking at in this post is the Domain Name System (DNS).
I have made another change to the git repo handling code so that when publishing, only the repos that have been updated are added to IPFS again. This way, large repos only slow down the publish process when they are updated and not every time any repository is updated. The new process is this:
1. The post-update hook adds its repo's path to the spool directory.
2. The monitor process sees the update and starts a publish.
3. The existing repo tree is added to a temporary directory in the mutable file system (MFS).
4. For each updated repo:
   a. The repo is added to IPFS without pinning.
   b. The directory for that repo in MFS is removed and replaced with the new hash.
   c. The old hash is unpinned and the new hash pinned.
5. The root hash of the new repo directory structure is published.
6. The temporary directory is removed from MFS.

Now git push is almost exactly the same speed as a plain ssh remote (only an additional flag set); updates are fast for small repositories and only slow down when processing a large repo.
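Roughly, those steps map onto the stock ipfs CLI like this (paths, variable names, and the spool layout are stand-ins for illustration, not my exact script):

    # Stage the currently published tree in MFS so untouched repos are reused.
    ipfs files cp "/ipfs/$CURRENT_ROOT" /pub-tmp

    for repo in $(cat "$SPOOL_DIR"/*); do
        name=$(basename "$repo")
        old_hash=$(ipfs files stat --hash "/pub-tmp/$name")

        # Add the updated repo to IPFS without pinning it yet.
        new_hash=$(ipfs add -Q -r --pin=false "$repo")

        # Swap the repo's directory in MFS for the new hash.
        ipfs files rm -r "/pub-tmp/$name"
        ipfs files cp "/ipfs/$new_hash" "/pub-tmp/$name"

        # Unpin the old version and pin the new one.
        ipfs pin rm "$old_hash" || true
        ipfs pin add "$new_hash"
    done

    # Publish the new root under the node's key, then drop the temp directory.
    ipfs name publish "/ipfs/$(ipfs files stat --hash /pub-tmp)"
    ipfs files rm -r /pub-tmp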
If you’ve been following my IPFS Scanner, you will have noticed some changes today. I’ve added tags. There are also a good number of sites listed now, with widely varying levels of stability and content. Don’t blame me if there isn’t good content there: go make a site and make sure it is published with your node’s primary key (that would be ipfs name publish /ipfs/QyourSiteHashGoesHere).
I’ve reworked the site generator to allow me to attach tags to every site in the index by /ipns/ key.
I have a set of git repos published to IPFS that I talked about before here. Since that post a month ago, the repos have grown in count and size to the point that it is no longer feasible to use the automatic publish as it was. I have made one change and found that further changes will be required for it to remain usable.
The change already made is to no longer run the publish from the post-update hook; instead, the hook creates a file in a spool directory, and a separate process monitors that directory and launches the publish script.
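A minimal sketch of that split, assuming inotify-tools provides the monitor and with the paths and script name as placeholders. The hook:

    #!/bin/sh
    # hooks/post-update: record which repo changed, then return immediately.
    echo "$PWD" > "/var/spool/repo-publish/$(basename "$PWD")"

And the monitor:

    #!/bin/sh
    # Watch the spool directory and launch the publish script on new entries.
    inotifywait -m -e close_write /var/spool/repo-publish |
    while read -r dir event file; do
        publish-repos.sh "/var/spool/repo-publish/$file"
    done

This keeps git push fast because the hook returns as soon as the spool file is written; all of the IPFS work happens out of band.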