It's more common the more expensive the SFP host equipment is, yes. This "compatibility" stuff is generally a euphemism for "ridiculously primitive DRM": lots of higher-end network equipment checks the SFP Vendor ID and Serial Number and rejects the module if it doesn't match an allow-list of "qualified" hardware. Programmers like these let you clone the VID/Serial from a "qualified" SFP onto a random one.
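If you're curious what actually gets cloned: the identity fields live in the module's A0h EEPROM page, at offsets defined by SFF-8472. Here's a minimal sketch of decoding them from a 256-byte dump ("sfp_a0.bin" is a made-up filename; get the bytes however your programmer exposes them):

```python
# Decode SFF-8472 identity fields from a 256-byte dump of an SFP's
# A0h EEPROM page. Offsets are per the SFF-8472 spec; "sfp_a0.bin"
# is a hypothetical dump produced by whatever programmer you use.
with open("sfp_a0.bin", "rb") as f:
    eeprom = f.read(256)

vendor_name = eeprom[20:36].decode("ascii").rstrip()  # 16 bytes, space-padded
vendor_pn   = eeprom[40:56].decode("ascii").rstrip()
vendor_sn   = eeprom[68:84].decode("ascii").rstrip()  # the serial the switch checks

# Checksums any clone tool has to recompute after rewriting fields:
cc_base = sum(eeprom[0:63]) & 0xFF   # covers bytes 0-62, stored at byte 63
cc_ext  = sum(eeprom[64:95]) & 0xFF  # covers bytes 64-94, stored at byte 95

print(vendor_name, vendor_pn, vendor_sn)
print("CC_BASE ok:", cc_base == eeprom[63], "| CC_EXT ok:", cc_ext == eeprom[95])
```

The checksum recompute is the part that naive byte-editing of a dump gets wrong: patch the serial by hand without fixing CC_BASE/CC_EXT and plenty of hosts will still reject the module.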
Lithography, usually; it's still widely in use today. Drawings were made on, or transferred to, an etchable surface (initially limestone, later metal) using an etch-resistant substance. Then an etching agent (usually acid) was applied to the surface. Everything was washed off and voila: a plate with a positive image that could be inked and pressed just like type. By 1938, offset printing might have been employed, which is basically the same thing but with a rubber drum as an intermediary between the plate and the paper.
This exact thing is irrelevant to Asahi; the reason they don't support suspend-to-disk is that their drivers don't support full reconfiguration. That's a difficult task, as is "true suspend," because Macs have tons and tons of peripheral SoCs running firmware with their own SRAM, so resuming from suspend or hibernate creates a delta between the firmware state and the system state. (And, before the usual Apple trolls show up: this is true on recent x86 too, but on x86 the driver and platform interfaces are more standardized to support these kinds of state changes without as much OS involvement.)
Needing a way to securely verify the hibernate image is ALSO a problem, and one of the reasons Asahi haven't focused on suspend-to-disk, but it's not the first-order issue.
The article's deep dive into the math does it a disservice, IMO, by making this seem like an arcane and complex issue. It's an EC Cryptography 101-level mistake.
Reading the actual CIRCL source and README on GitHub (https://github.com/cloudflare/circl) makes me see it as just fundamentally unserious, though: there's a big "lol don't use this!" disclaimer, no discussion of the considerations applied to each implementation to avoid common pitfalls, no mention of first- or third-party audit reports, nor really anything else I'd expect to see from a cryptography library.
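For the concrete version of the 101-level fix: validate that a peer-supplied point actually satisfies the curve equation before you multiply it by your secret scalar. A minimal sketch for P-256 (domain parameters from FIPS 186-4; wire-format decoding and the point-at-infinity check are omitted — with cofactor 1, range plus on-curve is otherwise the whole story):

```python
# NIST P-256 domain parameters (FIPS 186-4); a = -3 mod p.
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def on_curve(x: int, y: int) -> bool:
    """Reject points that don't satisfy y^2 = x^3 + ax + b (mod p).

    Skipping this check lets a peer hand you a point on a different,
    weaker curve and recover your secret scalar piece by piece: the
    classic invalid-curve attack.
    """
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0

# The P-256 base point passes; a corrupted point must fail.
GX = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
GY = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
assert on_curve(GX, GY)
assert not on_curve(GX, GY + 1)
```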
It's more subtle than that, and not actually that simple (though the attack is). The "modern" curve constructions pioneered by Bernstein, who popularized both Montgomery and Edwards curves, are supposed to be misuse-resistant in this regard. His two major curve implementations are Curve25519 and Ed25519, which are different mathematical representations of the same underlying curve, and Curve25519 famously isn't vulnerable to this attack!
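Here's why the x-only ladder sidesteps the attack, as a pure-Python sketch of the RFC 7748 X25519 function (variable-time, illustration only, don't use for real traffic): every 32-byte input decodes to a point on Curve25519 or on its twist, and both groups were chosen to be secure, so there's no weaker curve to push you onto.

```python
# Pure-Python X25519 per RFC 7748 (variable-time; illustration only).
# The ladder uses only the x-coordinate, so any 32-byte input lands on
# Curve25519 or its twist; both are secure by construction, leaving no
# weak curve for an attacker to substitute.
P = 2**255 - 19
A24 = 121665

def x25519(k_bytes: bytes, u_bytes: bytes) -> bytes:
    k = bytearray(k_bytes)
    k[0] &= 248; k[31] &= 127; k[31] |= 64             # scalar "clamping"
    k = int.from_bytes(k, "little")
    x1 = int.from_bytes(u_bytes, "little") & (2**255 - 1)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):                      # Montgomery ladder
        kt = (k >> t) & 1
        if swap ^ kt:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        aa, bb = a * a % P, b * b % P
        e, da, cb = (aa - bb) % P, d * a % P, c * b % P
        x3, z3 = (da + cb) ** 2 % P, x1 * (da - cb) ** 2 % P
        x2, z2 = aa * bb % P, e * (aa + A24 * e) % P
    if swap:
        x2, z2 = x3, z3
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")

# The one sharp edge left: a small-order peer point collapses the
# shared secret to all zeros, which RFC 7748 says callers must reject.
assert x25519(bytes(range(32)), bytes(32)) == bytes(32)
```

That final assert is the residual misuse case: the spec's recommended mitigation is simply to abort the handshake when the output is all zeros.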
> We need Good Samaritan laws that legally protect and reward white hats.
What does this even mean? How is a government going to do a better job valuing and scoring exploits than the existing market?
I'm genuinely curious how you suggest we achieve
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there's too much low-value noise, so the market converges to mediocrity. I'm not sure how having a government run such a program (or set reward tiers, or whatever) would make this any different.
And the industry and governments have tried punitive regulation: "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, since it increases pay for in-house security and makes work for consulting firms. The notion might be worth expanding in some areas, but just like financial regulation it's a double-edged sword: it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."
For the protections part: it means creating a legal framework in which white hats can ethically test systems even when a company has no responsible disclosure program. The problem with responsible disclosure programs is that the companies with the worst security don't give a shit and won't have one. They may even threaten such Good Samaritans for reporting issues in good faith; there have been many such cases.
For the rewards part: again, the companies that don't give a shit won't incentivise white hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used for rewards.
This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.
Sure, I'm all for protections for white hats, although I don't think it's at all relevant here, and I don't see it as a particularly prominent practical problem these days.
> If a company has a security hole that leads to disclosure of sensitive information, it should be fined
What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry standard library, like Heartbleed, or the Log4J vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?
> such fines can be used for rewards.
So we're back to the reward allocation problem.
> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.
Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.
You have correctly identified that there is more complexity to this than is addressable in an HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?
It's pretty clear that whatever security 'strategy' we're using right now doesn't work. I'm subscribed to Troy Hunt's breach feed, and it's basically weekly now that another 10M or 100M records leak. It seems foolish to continue like this. If governments want to take threats seriously, a new strategy is needed, one that mobilises security experts and dishes out proper penalties.
> You have correctly identified that there is more complexity to this than is addressable in an HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?
My goal was to learn whether there was an insight beyond "we should take the thing that doesn't work and move it into the government where it can continue to not work," because I'd find that interesting.
You're (thankfully) never going to get a legal framework that allows "white hats" to test another person's computer without their permission.
There's a reason Good Samaritan laws are built around rendering aid to injured humans: there is no equivalent if you go down the street popping peoples' car hoods to refill their windshield wiper fluid.
ADS-B, as regulated, is a terrible solution for this stuff. EIRP requirements make it extremely impractical as a transmission solution for small devices; most ADS-B In equipment isn't designed to correctly alert for separation with non-fixed-wing devices; and (due in no small part to the very high EIRP) there are concerns about both air-time saturation and management-plane saturation (i.e., ADS-B In equipment also wasn't designed to track very many entities).
There's a strong air of grantware to it. The notion that it could be end-to-end auditable from the RTL up is interesting, though. And WireGuard performance will generally tank with a large routing table and small MTUs, like you might suffer on a VPN endpoint server, while this project seems to target line speed even in the absolute worst-case routing-times-packets scenario.
The project got a grant from NLnet. I think they do a great job; they've given grants to many nice projects (and also some projects that are going nowhere, but I guess that's all part of the game). NLnet really deserves praise for what they are doing! https://nlnet.nl/thema/NGI0CommonsFund.html
Academic projects that receive grant money to produce papers and slides. This can still advance the state of the art, to be clear, and I like the papers and slides coming out of this project. But I wouldn't hold my breath for a working solution anytime soon.
This is conceptually interesting but seems quite a ways from a real end-to-end implementation; there's a bit of a smell of academic grantware, though I hope it can reach completion.
Fully available source from the RTL up (although the license seems proprietary?) is very interesting from an audit standpoint, and 1G line-speed performance, while easily achieved by any recent desktop hardware, is quite respectable in worst-case scenarios (large routing table and small frames). The architecture makes sense: software-managed handshakes configure a hardware packet pipeline. WireGuard really lacks acceleration in most contexts (newer Intel QAT can supposedly accelerate ChaCha20, but figuring out how one might actually make that work is truly mind-bending), so it's a pretty interesting place to do a hardware implementation.
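On the worst case: WireGuard's cryptokey routing does a longest-prefix match against the allowed-IPs table on every single packet to pick the peer (and thus the key), so at small frame sizes the per-packet lookup and its cache misses dominate throughput. Here's a toy sketch of that lookup as a binary trie; the names and structure are illustrative, not WireGuard's actual allowedips code:

```python
import ipaddress

# Toy longest-prefix-match trie, illustrating WireGuard-style
# "allowed IPs -> peer" cryptokey routing. One address family only,
# for brevity; names are illustrative.
class AllowedIPs:
    def __init__(self):
        self.root = {}

    def insert(self, cidr: str, peer: str):
        net = ipaddress.ip_network(cidr)
        bits, node = int(net.network_address), self.root
        for i in range(net.prefixlen):          # walk one bit at a time
            bit = (bits >> (net.max_prefixlen - 1 - i)) & 1
            node = node.setdefault(bit, {})
        node["peer"] = peer

    def lookup(self, addr: str):
        ip = ipaddress.ip_address(addr)
        bits, node = int(ip), self.root
        best = node.get("peer")                 # track most specific match
        for i in range(ip.max_prefixlen):
            bit = (bits >> (ip.max_prefixlen - 1 - i)) & 1
            node = node.get(bit)
            if node is None:
                break
            best = node.get("peer", best)
        return best

table = AllowedIPs()
table.insert("10.0.0.0/8", "peer-a")
table.insert("10.1.2.0/24", "peer-b")
assert table.lookup("10.1.2.3") == "peer-b"     # most specific prefix wins
assert table.lookup("10.9.9.9") == "peer-a"
```

A hardware pipeline can do this match in bounded time per packet (TCAM-style or pipelined trie walks), which is presumably where the worst-case line-rate claim comes from.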
When met with a contradiction in licensing, the safe assumption is that the more restrictive license holds, no? Especially when the permissive license is a general repo-wide license and the restrictive one is specifically applied to certain files.
So for all intents and purposes, in my opinion, large parts of this WireGuard FPGA project are under this weird proprietary Chili Chips license. In fact, the license is so proprietary that the people who made the WireGuard FPGA repository and made it visible to the public are seemingly in violation of it.
It puts us in a weird spot as well: I'm now the "holder of" a file and am obligated to keep all information within it confidential and to protect the file from disclosure. So I guess I can't share a link to the repo, since that would violate my obligation to protect the files within it from disclosure.
I would link to the files in question, but, well, that wouldn't protect them from disclosure now, would it?