pornel's comments | Hacker News

W3C was way too optimistic about XML namespaces leading to creation of infinitely extensible vocabularies (XHTML2 was DOA, and even XHTML1 couldn't break past tagsoup-compatible minimum).

This was the alternative – simpler, focused, fully IE-compatible.

W3C tried the proper Semantic Web again with RDF, RDFa, and JSON-LD. HTML5 tried Microdata, a compromise between the extensibility of RDF and the simplicity of Microformats, but nothing really took off.

Eventually HTML5 gave up on it, and took the position that invisible metadata should be avoided. Page authors (outside the bunch who have Valid XHTML buttons on their pages) tend to implement and maintain only the minimum needed for human visitors, so on the Web invisible markup has a systemic disadvantage. It rarely exists at all, and when it does it can be invalid, out of date, or most often SEO spam.


Schema.org metadata (using microdata, RDFa or JSON-LD) is quite common actually, search engines rely on it for "rich" SERP features. With LLMs being able to sanity-check the metadata for basic consistency with the page contents, SEO spam will ultimately be at a disadvantage. It just becomes easier and cheaper to penalize/ignore spam while still rewarding sites that include accurate data.

The schema.org vocab is being actively maintained, the latest major version came out last March w/ the latest minor release in September.


The git protocol is more complex and harder to scale. It's especially wasteful if people are going to redownload all packages every time their amnesiac CI runs.

Single-file archives are much easier to distribute.

Digests and signatures have standard algorithms, not unique to git. Key/identity management is the hard part, but git doesn't solve it for you (if you don't confuse git with GitHub).


git bundles exist to solve the single-file caching and distribution problems


Going crazy: we could also adopt the container registry API for distributing gems, similar to how Helm charts are also distributed nowadays.


In the RPG analogy, why waste your skill points on maxing out "checking code for UB", when you can get +10 bonus on UB checks from having Rust, and put the skill points into other things.


Unless a lack of social media presence is taken as a signal that you have something to hide, you terrorist/bot.


Or if you find yourself targeted for non-media criteria, like being Hispanic and needing to buy something from Home Depot.


That's a rare exception, and just a design choice of this particular library function. It had to intentionally implement a workaround integrated with the async runtime to survive normal cancellation. (BTW, the anti-cancellation workaround isn't compatible with Rust's temporary references, which can be painfully restrictive. When people say Rust's async sucks, they often actually mean `tokio::spawn()` made their life miserable.)

Regular futures don't behave like this. They're passive, and can't force their owner to keep polling them, and can't prevent their owner from dropping them.

When a Future is dropped, it has only one chance to immediately do something before all of its memory is obliterated, and all of its inputs are invalidated. In practice, this requires immediately aborting all the work, as doing anything else would be either impossible (risking use-after-free bugs), or require special workarounds (e.g. io_uring can't work with the bare Future API, and requires an external drop-surviving buffer pool).


By that criterion there's no meaningful C++ compiler/spec.


How so? There are compiler-agnostic C++ specs, and compiler devs try to be compatible with them.

What the GP is suggesting is that the Rust compiler should be written first, and a spec codified after the fact (I guess just for fun?).


> compiler devs try to be compatible with it.

You have to squint fairly hard to get here for any of the major C++ compilers.

I guess someone like Sean Baxter would know the extent to which, in theory, you can discern the guts of C++ by reading the ISO document (or, more practically, the freely available PDF drafts; essentially nobody reads the actual document, not even Microsoft bothers to spend $$$ to buy an essentially identical PDF).

My guess would be that it's at least helpful, but nowhere close to enough.

And that's ignoring the fact that the popular implementations do not implement any particular ISO standard. In each case their target is just C++ in some more general sense: they might offer "version" switches, but they explicitly do not promise to implement the actual versions of the ISO C++ standard denoted by those versions.


"Cancel correctness" makes a lot of sense, because it puts the cancellation in some context.

I don't like the "cancel safety" term. Not only is it unrelated to Rust's concept of safety, it's also unnecessarily judgemental.

Safe/unsafe implies there's a better or worse behavior, but what is desirable for cancellation to do is highly context-dependent.

Futures awaiting spawned tasks are called "cancellation safe", because they won't stop the task when dropped. But that's not an inherently safe behavior – leaving tasks running after their spawner has been cancelled could be a bug: piling up work that won't be used, and even interfering with the rest of the program by keeping locks locked or ports used. OTOH a spawn handle that stops the task when dropped would be called "cancellation unsafe", despite being a very useful construct specifically for propagating cleanup to dependent tasks.


Another aspect is that the majority of projects that keep using C, do it specifically to maximize performance or low-level control (codecs, game engines, drivers, kernels, embedded).

For such projects, a GC runtime goes against the reason why they used C in the first place. Rust can replace C where Fil-C can't.

A technically memory-safe C with overhead is not that groundbreaking. It has already been possible with sandboxing and interpreters/VMs.

We've had the tradeoff between zero-overhead-but-unsafe and safer-but-slower languages forever, even before C existed. Moving C from one category to the other is a feat of engineering, but doesn't remove the trade-off. It's a better ASAN, but not miraculously fixing C.

Most projects that didn't mind having a GC and runtime overhead are already using Java, C#, Go, etc. Many languages are memory-safe while claiming to have almost C-like performance if used right.

The whole point of Rust is getting close to removing the fast-or-safe tradeoff, not merely moving to the other side of it.


There are so many programs written in C or C++, but not in any of the languages you cite, that run just fine in Fil-C.

The thing that is interesting about Fil-C isn’t that it’s a garbage collected language. It’s that it’s just C and C++ but with all of the safety of any other memory safe language so you can have a memory safe Linux userland.

Also, Fil-C isn’t anything like ASAN. ASAN isn’t memory safe. Fil-C is.


Many people know and like C. Many companies have access to plenty of C talent, but no Rust talent.

These are two great reasons to try Fil-C instead of Rust. It seems that many Rustaceans think in terms of their own small community and don’t really have a feel for how massive the C (or C++) universe is and how many different motivations exist for using the language.


I agree, but the converse is also true and is where the value of this DARPA grant lies:

There's a lot of legacy C code that people want to expand on today, but they can't because the existing program is in C and they don't want to write more potentially unsafe C code or add onto an already iffy system.

If we can rewrite in Rust, we can get safety, which is cool but not the main draw. The main draw, I think, is you now have a rust codebase, and you can continue on with that.


Many projects keep using C for cultural and human reasons as well: they could have long since moved to a memory-safe language, but that goes against their worldview, even when proven otherwise.


This has been recognised as the next important milestone for Rust, and there's work towards making it happen. The details are still uncertain, because move constructors are a big change for Rust (it has promised that the address of an owned object is meaningless, and that objects can simply be memcpy'd to a new address).


The Rust folks want a more usable Pin<> facility for reasons independent of C++ support (it matters for the async ecosystem) and Pin allows objects to keep their addresses once pinned.


Carbon devs explicitly say to use Rust if you can.

Non-trivial C++ programs depend on OOP design patterns, webs of shared-mutable objects, and other features which Rust doesn't want to support (Rust's safety also comes from not having certain features that defeat static analysis).

Rust really needs things written the Rust way. It's usually easier with C, because it won't have clever templates and deep inheritance hierarchies.

Even if your goal is to move to Rust, it may make sense to start with Carbon or Circle to refactor the code first to have more immutability and tree-like data flow.

