On a Pico? No - the PIOs replace other peripherals a µC might be able to use to achieve this sort of bitrate, so you'd not really have the tools you'd need to change GPIO pin states once every 3-4 CPU clock cycles.
In a sense the PIO is a bit 'cheaty' when claiming "bit-banging", because the PIO is the ultimate peripheral, programmable to be whatever you need. It's no mean feat to make the PIO do the sorts of things happening here, by any stretch, but "bit-banging" typically means using the CPU to work around the lack of a particular peripheral.
From that perspective, there's precious few µCs out there that could bit-bang 100MBit/s Ethernet - I'm no expert, but I _think_ that's a 125MHz IO clock, so if you want 4 CPU cycles per transition to load data and push it onto pins, you're looking for a 500MHz µC, and at those speeds you definitely have to worry about the bus characteristics, stalls, caching, and all those fun bits; it's not your old 8-bit CPU bit-banging a slow serial protocol over the parallel port any more.
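To put numbers on that back-of-the-envelope budget (my figures, so treat them as assumptions rather than datasheet facts):

    # 100BASE-TX uses 4B5B encoding, so 100 Mbit/s of data becomes 125 Mbaud on the wire.
    symbol_rate_hz = 125_000_000      # wire symbols per second
    cycles_per_symbol = 4             # rough budget to fetch data and drive the pins
    print(symbol_rate_hz * cycles_per_symbol / 1e6)   # 500.0 -> a ~500 MHz core, before stalls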
This is significant. It's using a hardware peripheral that is designed and intended for high frequency IO manipulation without CPU intervention. This isn't bit-banging, lest we start calling it "bit-banging" any time an FPGA or ASIC or even a microcontroller peripheral handles any kind of signalling.
Ehhhhh, the picture shows a very short cable. You can most certainly find micros that can run 100Mb/s communication interfaces, though sure, maybe not bitbanged. However, you really need a PHY and magnetics. MII is 25 MHz, which seems fine. GMII is 125 MHz SDR, which is something. Honestly, that would've been a cooler demo IMO than running 2 inches of cable.
Using The Approved Set™ from your browser or OS carries no privacy issues: it's just another little bit of data your machine pulls down from some mothership periodically, along with everyone else. There's nothing distinguishing you from anyone else there.
You may want to pull landmarks from CAs outside of The Approved Set™ for inclusion in what your machine trusts, and this means you'll need data from somewhere else periodically. All the usual privacy concerns over how you get what from where apply; if you're doing a web transaction, a third party may be able to see your DNS lookup, your connection to port 443, and the amount of traffic you exchange, but they shouldn't be able to see what you asked for or what you got. Your OS or browser can snitch on you as normal, though.
I don't personally see any new privacy threats, but I may not have considered all angles.
Different machines will need to have variations in when they grab updates to avoid thundering herd problems.
I could see the list of client-supplied available roots being added to client fingerprinting code for passive monitoring (e.g. JA4) if it’s in the client hello, or for the benefit of just the server if it’s encrypted in transit.
CQRS should really only guide you to designing separate query and command interfaces. If your processing is asynchronous then you have no choice but to have state about processing-in-flight, and your commands should return an acknowledgement of successful receipt of valid commands with a unique identifier for querying progress or results. If your processing is synchronous make your life easier by just returning the result. Purity of CQRS void-only commands is presentation fodder, not practicality.
(One might argue that all RPC is asynchronous; all such arguments eventually lead to message buses, at-least-once delivery, and the reply-queue pattern, but maybe that's also just presentation fodder.)
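For a concrete shape of the acknowledge-then-query pattern above, a minimal sketch (Python, in-memory, all names hypothetical):

    import uuid

    # In-memory stand-ins for the command side and the query side.
    _jobs = {}

    def submit_command(command: dict) -> dict:
        """Accept a valid command and acknowledge receipt with a tracking id."""
        if "action" not in command:
            raise ValueError("invalid command")        # reject before acknowledging
        command_id = str(uuid.uuid4())
        _jobs[command_id] = {"status": "accepted", "result": None}
        # ... hand off to the asynchronous processor here ...
        return {"command_id": command_id, "status": "accepted"}

    def query_status(command_id: str) -> dict:
        """Separate query interface: read progress or results by id."""
        return _jobs.get(command_id, {"status": "unknown"})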
There's no self-propagation happening, that's just the terrible article's breathless hyping of how devastating the attack is. It's plain old deliberately injected and launched malware. OpenVSX is a huge vector for malicious actors taking real Marketplace extensions, injecting a payload, and uploading them. The article lists exactly one affected Marketplace extension, but that extension does not exist.
> Has no one thought to review the AI slop before publishing?
If only Koi reviewed their AI slop before publishing :(
$6/month. It's $3 for the first month - or, on longer subscription cycles, the whole first billing period at half price; the discount only covers that first unit of the cycle.
At $6/month it's still pretty reasonable, IMO, and chucking less than $10 at it for three months probably gets you to the next pop-up token retailer offering introductory pricing, so long as the bubble doesn't burst before then.
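Rough arithmetic behind the "less than $10 for three months", assuming the half-price intro covers whichever billing period you pick:

    monthly = 6
    print(3 + monthly + monthly)   # 15 -> month-by-month, only the first month discounted
    print(3 * monthly / 2)         # 9.0 -> a quarterly cycle billed at half price up front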
I wound up subbing for the three months. My experience has been pretty positive, using Cline in VS Code and moving away from Qwen3 Coder on OpenRouter. Q3C did a good job overall, but I was using the free model (with $10 of credit sitting on OR to increase limits) and it's pretty painful to sit through repeated 429 errors. GLM-4.6 has been comparable, maybe a fraction worse, but without 429 errors it blazes through tasks.
That form of domain name is very common in DNS configuration. All it means is the name is complete already and should not have any local search domains appended. It's unusual to see it in URLs, but its presence should be harmless; that it's not harmless in Caddy is definitely an error - but I can't begin to understand why it would be seen as a particularly significant one.
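If you want to see the two forms behave identically at the resolver level, a quick check (Python, assumes network access):

    import socket

    # The trailing dot marks the name as fully qualified: no search domains get appended.
    # Both calls should hand back the same addresses from a well-behaved resolver.
    print(socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP)[0][4])
    print(socket.getaddrinfo("example.com.", 443, proto=socket.IPPROTO_TCP)[0][4])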
Specifications that are formally verified can definitely cover real-time guarantees, behaviour under error returns from operations like allocation, and similar things. Hardware failures can be accounted for in hardware verification, which is much like software verification: specification + hardware design = verified design; if the spec covers it, the verification guarantees it.
Considering software alone isn't useless, and neither is having the guarantee that "inc x = x - 1" will always go from an Int to an Int: even if it's not "fully right", at least trying to increment a string or a complex number will be rejected at compile time. Giving up on any improvements in the correctness of code because it doesn't get you all the way to 100% correct is, IMO, defeatist.
(Giving up on it because it has diminishing returns and isn't worth the effort is reasonable, of course!)
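A Python rendering of that snippet, with type hints standing in for the Haskell-style signature and a checker like mypy standing in for the compiler:

    def inc(x: int) -> int:
        return x - 1          # wrong behaviour, but the types still line up

    inc(41)                   # fine as far as the checker is concerned
    # inc("forty-one")        # a static checker rejects this line before it ever runs: str is not int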
Hardware verification doesn't prevent hardware failures. There is a reason RAM comes with ECC. It's not because RAM designers are too lazy to do formal verification. Even with ECC RAM, bit flips can still happen if multiple bits flip at the same time.
There are also things like CPUs taking the wrong branch that occasionally happen. You can't assume that the hardware will work perfectly in the real world and have to design for failure.
Designing around hardware failure in software seems cumbersome to insane. If the CPU can randomly execute arbitrary code because it jumps to wherever, no guarantees apply.
What you actually do here is consider the probability of a cosmic ray flip, and then accept a certain failure probability. For things like train signals, it's one failure in a billion hours.
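To make that budget concrete (illustrative numbers, not a real safety case):

    target_rate = 1e-9                 # tolerated dangerous failures per operating hour
    fleet_hours = 500 * 24 * 365       # say, 500 signalling units running all year
    print(target_rate * fleet_hours)   # ~0.0044 -> roughly one expected failure per ~230 fleet-years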
> Designing around hardware failure in software seems cumbersome to insane.
Yet for some reason you chose to post this comment over TCP/IP! And I'm guessing you loaded the browser you typed it in from an SSD that uses ECC. And probably earlier today you retrieved some data from GFS, for example by making a Google search. All three of those are instances of software designed around hardware failure.
If "a cosmic ray could mess with your program counter, so you must model your program as if every statement may be followed by a random GOTO" sounds like a realistic scenario software verification should address, you will never be able to verify anything ever.
I agree, you definitely won't be able to verify your software under that assumption; you need some hardware to handle it, such as watchdog timers (when just crashing and restarting is acceptable) and duplex processors like some Cortex-R chips. Or TMR.
An approach that has been taken for hardware in space is to have 3 identical systems running at the same time.
Execution continues while all systems are in agreement.
If a cosmic ray causes a bit-flip in one of the systems, the system not in agreement with the other two takes on the state of the other two and continues.
If there is no agreement between all 3 systems, or the execution ends up in an invalid state, all systems restart.
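A toy sketch of that 2-of-3 voting, just to show the shape of it (nothing space-grade here):

    def vote(a, b, c):
        """Return the majority state, or None if all three disagree."""
        if a == b or a == c:
            return a
        if b == c:
            return b
        return None                        # no agreement: restart everything

    states = [0b1011, 0b1011, 0b1111]      # one unit has taken a bit flip
    agreed = vote(*states)
    if agreed is None:
        print("restart all systems")
    else:
        states = [agreed] * 3              # the outlier adopts the agreed state and execution continues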
>Designing around hardware failure in software seems cumbersome to insane
I mean there are places to do it. For example ZFS and filesystem checksums. If you've ever been bit by a hard drive that says everything is fine but returns garbage you'll appreciate it.
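The basic move, roughly the shape of what ZFS does (sketch only, names made up): keep a checksum of each block somewhere other than the block itself, and verify it on every read.

    import hashlib

    def write_block(store, checksums, key, data: bytes):
        store[key] = data
        checksums[key] = hashlib.sha256(data).hexdigest()   # stored apart from the data itself

    def read_block(store, checksums, key) -> bytes:
        data = store[key]
        if hashlib.sha256(data).hexdigest() != checksums[key]:
            raise IOError("silent corruption: the drive said fine but returned garbage")
        return data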
Well of course hardware fails, and of course verification doesn't make things work perfectly. Verification says the given design meets the specification, assumptions and all. When the assumptions don't hold, the design shouldn't be expected to work correctly, either. When the assumptions do hold, formal verification says the design will work correctly (plus or minus errors in tools and materials).
We know dynamic RAM is susceptible to bit-flip errors. We can quantify the likelihood of it pretty well under various conditions. We can design a specification to detect and correct single bit errors. We can design hardware to meet that specification. We can formally verify it. That's how we get ECC RAM.
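For a feel of what that single-bit spec looks like in miniature, here's a toy Hamming(7,4) corrector - a much simpler cousin of the SECDED codes real ECC DIMMs use (sketch only; real ECC operates on whole 64-bit words):

    def encode(d):                        # d: four data bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]    # 7-bit codeword

    def decode(c):                        # c: seven bits, at most one of them flipped
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s3        # 1-based position of the flip, 0 if clean
        if pos:
            c[pos - 1] ^= 1               # correct the single-bit error
        return [c[2], c[4], c[5], c[6]]   # recovered data bits

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                          # simulate a bit flip in flight
    assert decode(word) == [1, 0, 1, 1]   # detected and corrected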
CPUs are almost never formally verified, at least not in full. Reliability engineering around systems too complex to verify, too expensive to engineer to never fail, or that might operate outside of the safe assumptions of their verified specifications, usually means something like redundancy and majority-rules designs. That doesn't mean verification plays no part. How do you know your majority-rules design works in the face of hardware errors? Specify it, verify it.
The Australian federal government goes through waves of "reducing the size of the public service" by firing and/or capping full-time hires, but the work's still there to be done so contractors get the gig.
This is the thing. It's much less expensive to have these sorts of knowledge employees on government staff who just do this sort of work all the time, but governments prefer to spend more (much, much more) on contractors. I suspect it's partly because they are always wanting to announce down-sizing initiatives to appease the right, but more cynically, I think it's because contractors will more reliably give them the 'right answer' than career civil servants, and there's also the potential for kickbacks. Some of those profits paid to contractor companies might find their way back into campaign contributions.
Here in Australia the (single party) government of the day was dismissed in 1975 after failing to secure a supply bill. The government was dismissed by the Governor General, the Crown's representative in Australia, and the event sparked a bit of a ruckus. Google: The Whitlam Dismissal.
There are plenty of instances of our government requesting dissolution of the Houses following failure to secure votes, but in most cases they're over things other than operating expense bills - failed votes taken as proxies indicating the government no longer has the confidence of the House to continue to act. Since failure to secure a bill is grounds for dissolving Parliament, it's not likely to be used for political grandstanding here.
Always good to come across fellow Australians in here!
I’d probably argue for an exception on that one, given the Whitlam government didn’t have a senate majority… but at the very least, I feel like a single case in the last 50 years is pretty supportive of my argument. The US government is on the verge of shutdown so often these days that I wonder how many people are desensitised to the situation!
How does failing to pass a budget affect debt repayments? Could they simply end up defaulting sometime in the future? That's not a great outlook for a "reserve currency".
I mean it happened earlier this year in Tasmania, and it was absolutely for grandstanding purposes, given they'd had an election less than a year before.