delighted to hear! iroh-blobs is Rüdiger's love letter to BLAKE3, and hot dang has he taken this piece of machinery quite far. Much of this is covered in the post, but some highlights:
* fetch any sub-sequence of bytes, verified on send & receive (see the sketch at the end of this comment)
* fetch sub-sequences of bytes in collections (sets of blobs / directories)
* store on disk, inlining small blobs into the database for faster lookups
* fan in from disk & the network
* "multi-provider" fan in that can re-plan a fetch on the fly
* should land support for WASM compilation (browsers) soon! https://github.com/n0-computer/iroh-blobs/pull/187
We're hard at work on making the API more ergonomic, but as a foundational protocol it's truly impressive. Rudi has been working with the BLAKE3 authors on both perf testing & the hazmat API.
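If the "verified on send & receive" part sounds abstract, here's a minimal, illustrative sketch using the `blake3` crate: build a toy binary hash tree over fixed-size chunks, then verify one chunk against the root, so a client can check a sub-range without downloading the whole blob. To be clear, this is not the real bao/BLAKE3 outboard format iroh-blobs uses (that one works with chaining values and finalization flags, not plain rehashing), and the chunk size and power-of-two assumption are just for brevity.

```rust
// Toy illustration of verified sub-range fetching. NOT the real bao/BLAKE3
// format; it only shows the shape of the idea. Requires the `blake3` crate.

const CHUNK: usize = 1024;

/// Hash two child hashes into a parent node (toy combiner).
fn parent(left: &blake3::Hash, right: &blake3::Hash) -> blake3::Hash {
    let mut h = blake3::Hasher::new();
    h.update(left.as_bytes());
    h.update(right.as_bytes());
    h.finalize()
}

/// Build all tree levels, leaves first. Assumes a power-of-two chunk count.
fn build_tree(data: &[u8]) -> Vec<Vec<blake3::Hash>> {
    let leaves: Vec<_> = data.chunks(CHUNK).map(blake3::hash).collect();
    let mut levels = vec![leaves];
    while levels.last().unwrap().len() > 1 {
        let prev = levels.last().unwrap();
        let next: Vec<blake3::Hash> =
            prev.chunks(2).map(|p| parent(&p[0], &p[1])).collect();
        levels.push(next);
    }
    levels
}

/// Verify one chunk against the root using the sibling hashes along its path.
fn verify_chunk(
    chunk: &[u8],
    mut index: usize,
    siblings: &[blake3::Hash],
    root: &blake3::Hash,
) -> bool {
    let mut acc = blake3::hash(chunk);
    for sib in siblings {
        acc = if index % 2 == 0 { parent(&acc, sib) } else { parent(sib, &acc) };
        index /= 2;
    }
    acc == *root
}

fn main() {
    let data = vec![0xab_u8; CHUNK * 4]; // 4 chunks -> 2 levels above the leaves
    let levels = build_tree(&data);
    let root = levels.last().unwrap()[0];

    // "Fetch" chunk 2 plus the sibling hashes a provider would send alongside it.
    let idx = 2;
    let chunk = &data[idx * CHUNK..(idx + 1) * CHUNK];
    let siblings = vec![levels[0][3], levels[1][0]]; // sibling leaf, then sibling subtree
    assert!(verify_chunk(chunk, idx, &siblings, &root));
    println!("chunk {idx} verified against the root hash");
}
```

The point is that the provider only ships the requested chunks plus a logarithmic number of sibling hashes, and the receiver can reject bad data mid-stream instead of after downloading everything.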
It uses a third server to facilitate initial p2p connections, but I keep losing the connection to this server or failing to connect to it at all. I don't know if it's because of many restarts during development or something else.
Windows Defender nukes this from orbit, making it nearly impossible to ship to clients in a trustworthy fashion. But I guess any program which punches through the firewall is suspect.
Windows Defender is an interesting challenge. It would be interesting to know if signing the executable has a positive effect here. At $previouscompany we had a piece of software that looked very keylogger-like, and all our Windows Defender issues vanished once we started using EV code-signing certificates. They are not cheap ($300/year), but Defender seems to take the fact that the code is bound to a verified legal entity as a strong trust signal.
That's interesting, because the connection to the relay server is established using HTTP/1.1 over TLS, followed by a WebSocket upgrade. It should look like any other webserver connection on the internet. Could be worth investigating your network conditions and filing an issue for this.
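For reference, the upgrade exchange looks like ordinary web traffic on the wire. This is the generic RFC 6455 handshake (with the RFC's own sample key/accept values and a made-up path), not iroh's exact headers:

```http
GET /relay HTTP/1.1
Host: relay.example.com
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```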
[insert yet another comment about having short product introductions at the top of blog posts]
From their docs page:
> Iroh lets you establish direct peer-to-peer connections whenever possible, falling back to relay servers if necessary. This gives you fast, reliable connections that are authenticated and encrypted end-to-end using QUIC.
and for iroh-blobs:
> provides blob and blob sequence transfer support for iroh. It implements a simple request-response protocol based on BLAKE3 verified streaming
Tailscale is a system service / DevOps deploy-time architectural middleware tool for putting entire devices onto managed OS-level networks.
Iroh is a development-time library for building software that forms open decentralized application-specific networks.
The closer comparison for Iroh would be to something like libp2p. (Or maybe libzmq, given its toolkit-of-very-well-thought-out-primitives approach. I might describe Iroh as the decentralized complement to libzmq.)
I'm going to guess that the difference is that Tailscale lets your machines find each other within a managed flat virtual network, whereas Iroh lets your applications talk to each other without any regard to which machine anything is running on.
Not sure about the Tailscale coordination server, but once you establish a connection to a headscale server, the clients don't strictly need headscale after that (although it's recommended to keep it active). So maybe the only difference is that headscale only acts as a relay once.
Headscale is just an open source implementation of the Tailscale coordination server.
The coordination server just provides the IPs by which you use wireguard to connect. It can see that metadata (what machines are in a tailnet), but not anything else.
I’m also wondering if it’s possible to use MoQ from iroh, for streaming unidirectional broadcast data that don’t need historical buffers, mainly to freeload on Cloudflare’s free MoQ relays.
Also, how do the public relays provided by Iroh compare with Tailscale's public DERP servers, operationally speaking?
> One thing to keep in mind when using the connection pool: the connection pool needs the ability to track which connections are currently being used. To do this, the connection pool does not return Connection but ConnectionRef, a struct that derefs to Connection but contains some additional lifetime tracking.
> But Connection is Clone, so in principle there is nothing stopping you from cloning the wrapped connection and losing the lifetime tracking. Don't do this. If you work with connections from the pool, you should pass around either a ConnectionRef or a &Connection to make sure the underlying ConnectionRef stays alive.
Hmmm...
I'd like to see the inconvenient API. Or maybe there's a bit more work that could be done to make it convenient? Is there an insurmountable problem that prevents completely hiding the underlying Connection?
Is it just me, or are the safe and “unsafe” versions of using the connection pool identical? Seems like a typo, with a clone in the “correct” example that shouldn’t be there?
It's extremely subtle; it fooled me initially too. The `fn handle_connection` takes a different argument type, so Rust derefs the `ConnectionRef` into a `Connection` in the first example. A bit too subtle for my liking.
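A toy reduction of the pattern, with hypothetical stand-in types rather than the actual iroh ones, shows why the two call sites look identical:

```rust
// Toy reduction of the ConnectionRef/Connection pattern described above.
// The types are hypothetical stand-ins, not iroh's actual ones; they only
// show why deref coercion makes the "safe" and "unsafe" call sites look alike.

use std::ops::Deref;
use std::sync::Arc;

#[derive(Clone)]
struct Connection; // cheaply cloneable handle, like QUIC connections

struct ConnectionRef {
    conn: Connection,
    _tracker: Arc<()>, // stands in for the pool's "this connection is in use" tracking
}

impl Deref for ConnectionRef {
    type Target = Connection;
    fn deref(&self) -> &Connection {
        &self.conn
    }
}

fn handle_connection(_conn: &Connection) {
    // ... do protocol work ...
}

fn main() {
    let pool_entry = ConnectionRef { conn: Connection, _tracker: Arc::new(()) };

    // Deref coercion: &ConnectionRef -> &Connection, so the tracker stays
    // alive for the duration of the call.
    handle_connection(&pool_entry);

    // ConnectionRef isn't Clone, so .clone() resolves through Deref and clones
    // the inner Connection, silently detaching it from the pool's tracking.
    let untracked: Connection = pool_entry.clone();
    handle_connection(&untracked);
}
```

Both calls compile and look the same at the call site; only the second one escapes the pool's lifetime tracking, which is exactly the footgun the post warns about.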
Oh wow. Ok. Subtle and error prone. This screams for a more ergonomic API: not making Connection cloneable, using as_ref instead of Deref, or not decoupling the lifetime when you clone.
I’ve been intending to play with it more, it’s given me so many little project ideas that otherwise would be a pain