
> At what point in history have you owned a particular piece of hardware [...] and installed a major OEM operating system release a full 7 years after release without issue?

A few years ago, I installed Windows 10 on a cheap laptop from 2004—the laptop was running Windows XP, had 1GB of memory, a 32-bit-only processor, and a 150GB hard drive. The computer didn't support USB boot, but once I got the installer running, it never complained that the hardware was unsupported.

To be fair, the computer ran horrendously slowly, but nothing ever crashed on me, and I actually think that it ran a little faster with Windows 10 than with Windows XP. And I used it as my daily driver for about 4 months, so this wasn't just based on a brief impression.


> How do CVEs get issued? Where do I apply, who makes decisions

For most (but certainly not all) projects, you fill out a simple form [0]. I've done it before and it's fairly easy.

> and what software is covered by them?

All software is covered by someone, usually either the vendor themselves (if they're a CVE Numbering Authority) or MITRE as the fallback.

> Can a CVE be issued in retrospect?

Absolutely, but it's fairly uncommon.

[0]: https://cveform.mitre.org/


The first "sort" sorts the input lines lexicographically (which is required because "uniq" only collapses adjacent duplicate lines); the second "sort" sorts the output of "uniq" numerically (so that lines are ordered from most-frequent to least-frequent):

  $ echo c a b c | tr ' ' '\n'
  c
  a
  b
  c
  
  $ echo c a b c | tr ' ' '\n' | sort
  a
  b
  c
  c
  
  $ echo c a b c | tr ' ' '\n' | sort | uniq -c
        1 a
        1 b
        2 c
  
  $ echo c a b c | tr ' ' '\n' | sort | uniq -c | sort -rn
        2 c
        1 b
        1 a

> I can’t ever remember seeing a bug in either bash

Shellshock [0] is a rather famous example, but bugs like that are rare enough that they make the news when they're found.
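
For anyone curious, the classic one-liner test: Shellshock meant that bash would execute any commands trailing a function definition imported from the environment, so on a vulnerable bash this prints "vulnerable" before "test":

  $ env x='() { :;}; echo vulnerable' bash -c 'echo test'
  vulnerable
  test

On a patched bash, only "test" is printed.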

[0] https://en.wikipedia.org/wiki/Shellshock_%28software_bug%29


Wow, I'm not deaf, but almost everything you mentioned applies to me too. I've never met anyone else who has experienced this, yet every one of the following points describes me exactly:

> standard Canadian English is my native language

> Most native English speakers claim my speech is unmarked

> Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right?

> They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent.

At least 2 or 3 times a year, someone asks me if I'm British, but my parents and I were born in Canada, and I've never even been to England, so I'm not really sure why some people think that I have a British accent. Interestingly, the accent checker guesses that my accent is

  American English    89%
  Australian English  3%
  French              3%
which is pretty close to correct.


I was born in Brooklyn to Yiddish-speaking parents, and Yiddish was my first language. I now spend half my time in California and half in Israel. The accent checker said 80% American English, 16% Spanish, and 4% Brazilian Portuguese. In Israel they ask if I’m Russian when I speak Hebrew. In the US, people ask where I’m from all the time because my accent—and especially my grammar—is odd. The accent checker doesn’t look for grammatical oddities, but that’s where a lot of my “accent” comes from.


I'm a Maritimer, and I'm constantly getting asked if I'm from southern England, even by Brits themselves.

More bizarrely? Locals often assume I'm not from around here as well. I actually don't understand it.


> I'm a Maritimer, and I'm constantly getting asked if I'm from southern England, even by Brits themselves.

I'm assuming that you're from NS/NB? Because it would be pretty fair for someone to mix up a British and a Newfoundland accent. (I'm from Alberta)


About 30% of traffic to Cloudflare uses HTTP/3 [0], so it seems pretty popular already. For comparison, that's 3× as much traffic as HTTP/1.1.
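
If you want to check what a given site negotiates from your machine, curl can report the HTTP version it ended up using; this assumes a curl build compiled with HTTP/3 support, which not every distro package includes:

  $ curl --http3 -sI -o /dev/null -w '%{http_version}\n' https://cloudflare.com
  3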

[0]: https://radar.cloudflare.com/adoption-and-usage#http1x-vs-ht...


And then Cloudflare converts that to HTTP/2 or even HTTP/1.1 for the backend.


So? Those protocols work fine within the reliable, low-latency network of a datacenter.


I'd even go as far as claiming that on reliable wired connections (like between Cloudflare and your backend), HTTP/2 is superior to HTTP/3. Choosing HTTP/3 for that part of the journey would be a downgrade.


At the very least, the benefits of QUIC are very dubious for low-RTT connections like those inside a datacenter, especially when you're losing a bunch of hardware support and moving a fair bit of actual work to userspace, where threads need to be scheduled, etc. On the other hand, Cloudflare-to-backend is not necessarily low-RTT and likely has nonzero congestion.

With that said, I am 100% in agreement that the primary benefits of QUIC in most cases would be between client and CDN, whereas the costs are comparable at every hop.


Is CF typically serving from the edge, or from the point nearest to the server? I imagine it would be from the edge, so that it can CDN what it can. So most of the time it won't be a low-latency connection from CF to the backend, unless your backend is globally distributed too.


Also, within a single server, you should not use HTTP between your frontend nginx and your application server - use FastCGI or SCGI instead, as they preserve metadata (like client IP) much better. You can also use them over the network within a datacenter, in theory.
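
As a sketch of what that looks like on the nginx side (the config path and socket name here are made up, but `fastcgi_params` is the stock parameter file shipped with nginx):

  $ cat > /etc/nginx/conf.d/app.conf <<'EOF'
  server {
      listen 80;
      location / {
          # fastcgi_params maps $remote_addr to REMOTE_ADDR,
          # $request_uri to REQUEST_URI, and so on, so the app
          # server receives the client metadata as first-class
          # variables instead of parsing X-Forwarded-For headers.
          include       fastcgi_params;
          fastcgi_pass  unix:/run/app.sock;
      }
  }
  EOF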


Is the protocol inherently inferior in situations like that, or is it just that we've spent decades optimizing for TCP and building it into kernels and hardware? If we imagine a future where QUIC gets that kind of support, will it still be a downgrade?


There is no performance disadvantage at the speeds most implementations actually run at. With a good QUIC implementation and a good network stack, you can drive ~100 Gb/s per core on a regular processor from userspace with a 1500-byte MTU and no segmentation offload, if you use an unencrypted QUIC configuration. If you use encryption, then you will bottleneck on the encryption/decryption bandwidth of ~20-50 Gb/s, depending on your processor.

In the Linux kernel benchmarks [1], they average ~24 Gb/s for unencrypted TCP from kernel space with a 1500-byte MTU using segmentation offload. For encrypted transport, they average ~11 Gb/s. Even using a 9000-byte MTU for unencrypted TCP, they only average ~39 Gb/s. So there is no inherent disadvantage when considering implementations at this performance level.

And yes, that is a link to a Linux-kernel-QUIC vs Linux-kernel-TCP comparison. And yes, the Linux kernel QUIC implementation is only driving ~5 Gb/s, which is 20× slower than what I stated above is possible for a QUIC implementation. Every QUIC implementation in the wild is dreadfully slow compared to what you could actually achieve with a proper implementation.

Theoretically, there is a small fundamental advantage to TCP from not having multiple streams, which could give it maybe a ~2× performance advantage when comparing perfectly optimal implementations. But then you are comparing per-core control-plane throughputs at a 1500-byte MTU of, by my estimation, ~300 Gb/s for QUIC vs ~600 Gb/s for TCP, at which point both are probably bottlenecking on your per-core memory bandwidth anyway.

[1] https://lwn.net/ml/all/cover.1751743914.git.lucien.xin@gmail...


> Email deliverability is a frustration outside of the M365/Gmail ecosystems, but it’s not as bad as it’s sometimes made out to be […] I’m curious if they see increases/decreases in spam, missed messages, successful phishing attempts, etc.

It's probably not much of an issue in this specific case. If someone doesn't get your email, that's your (the sender's) problem; but if someone doesn't get the government's email, then that's their (the recipient's) problem.


To add to this, most emails are likely within the organization and/or between public institutions.

Email was (last time I checked) not an approved medium for delivering important documents, as it does not (by design) provide a mandatory receipt that the message was delivered. So citizens don't need to worry much about this for important documents/mail.

(Fax was so popular with public institutions in Germany because it satisfied this standard: it was usually the lowest-barrier option, and you could rely on it for all (un)important documents.)


> Could we make a whiteboard+marker that had more resistance? Like some hall effect or something. Sounds too complex relative to just using chalkboards.

I think that whiteboard vs chalkboard is just personal/cultural preference, and that the explanations in the article are just trying to justify it (which is totally fair IMHO). So I don't think that there's any need to "fix" that problem with whiteboards.


> If the US SEC were replaced by 50 per-state SECs, you'd probably see Alaska or Wyoming becoming the scam capitals because they lack the resources to properly regulate.

Sure, but BC has 10× the population of Wyoming, so that's not really the best comparison. Plus, Delaware is tiny, yet its business regulations are fairly strong.


> thankfully it's already available through Let's Encrypt, via the "shortlived" profile

Maybe if you're the developer of a major web server :), but the rest of us still have to wait for general availability [0] [1].
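
Once it does become generally available, requesting one should look something like this with certbot (assuming a certbot release new enough to support ACME profile selection; the flag name and the domain/webroot below are illustrative):

  $ certbot certonly --preferred-profile shortlived \
        --webroot -w /var/www/html -d example.com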

[0]: https://letsencrypt.org/docs/profiles/#shortlived

[1]: https://community.letsencrypt.org/t/shortlived-is-currently-...

