One of the key tenets of uv is that virtualenvs should be disposable. So, barring any bugs in uv, there should never be any debugging of environments. Worst case, just delete `.venv` and continue as normal.
I've been using pip-tools for the best part of a decade; uv isn't the first time we got lock files. The main difference with uv is how it abstracts away the virtualenv: you run everything through `uv run` instead, like cargo. But you can still activate the virtualenv if you want, at which point the only difference is that it's faster.
This dude did a complete walkthrough of setting up a Talos cluster on bare metal: https://datavirke.dk/posts/bare-metal-kubernetes-part-1-talo... It's a nice read. I've had my own Talos cluster running in my homelab for over a year now with similar stuff (but no Ceph).
Within a module, yes. But it's ok for one module to be different from another because, you know, they are different modules that do different things (the difficult part is deciding what level your modules are split at, of course).
You're describing the hedonic treadmill. But I think you are missing something.
The thing is it does feel good to fix things and upgrade. The treadmill just says your baseline reverts back to where it was. So yeah you're just as happy with the expensive TV as you were with the shitty one, but it did feel good to upgrade, if only for a little while.
So the key is to introduce tiny upgrades, often. If you blow your budget for the whole year on a TV, then you only get to be happier once. If you tinker and introduce tiny, sustainable upgrades, you can be happier every day.
The sustainable part is important. You can only afford something if you can buy it twice. Don't ever take out a loan to buy anything (apart from a house).
It's called dependency injection, and there's loads written about it. It's a really powerful technique that also enables dependency inversion, which is key for decoupling components. I really like how it enables simple tests without any mocking.
The book Architecture Patterns with Python by Percival and Gregory is one of the few books that talks about this stuff using Python. It's available online and has been posted on HN a few times before.
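To make the mock-free-testing point concrete, here's a minimal sketch of the idea; the names (`Mailer`, `Signup`, `FakeMailer`) are hypothetical, not from the book:

```python
from dataclasses import dataclass


class Mailer:
    """Real dependency: talks to the outside world."""

    def send(self, to: str, body: str) -> None:
        print(f"sending to {to}: {body}")


@dataclass
class Signup:
    # The dependency is injected, not constructed inside the class.
    mailer: Mailer

    def register(self, email: str) -> str:
        self.mailer.send(email, "welcome!")
        return email


# In tests, inject a trivial fake instead of patching/mocking:
class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))


fake = FakeMailer()
assert Signup(fake).register("a@b.c") == "a@b.c"
assert fake.sent == [("a@b.c", "welcome!")]
```

The test never touches `unittest.mock`; the fake is just another object satisfying the same informal interface.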
DI (generally) tends to point more towards constructing objects or systems. This would be a bit closer to a functional equivalent of the OO "template method" pattern: https://en.wikipedia.org/wiki/Template_method_pattern
You write a concrete set of steps, but delegate the execution of the steps to the caller by invoking the supplied functions at the desired time with the desired arguments.
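In Python that might look like this hypothetical sketch: a fixed sequence of steps whose individual behaviors are supplied by the caller as functions:

```python
# The "template": the order of steps is fixed here,
# but each step's behavior comes from the caller.
def process(items, transform, keep):
    return [transform(x) for x in items if keep(x)]


# The caller supplies the variable parts:
evens_doubled = process(range(6), transform=lambda x: x * 2, keep=lambda x: x % 2 == 0)
assert evens_doubled == [0, 4, 8]
```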
Agree with zdragnar; this is not traditional DI, which is generally focused on injecting objects.
The difference between the two is that when you inject an object, the receiving side must know a potentially large surface area of the object's behavior.
When injecting a function, now the receiving side only needs to know the inputs and outputs of the singular function.
This subtlety is what makes this approach more powerful.
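A hypothetical contrast (the `Repo` name and handlers are made up to illustrate the point):

```python
from typing import Callable


# Injecting an object: the receiver is coupled to whichever subset
# of Repo's surface area it happens to call.
class Repo:
    def __init__(self, data):
        self.data = data

    def get(self, key):
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value


def handler_with_object(repo: Repo, key):
    return repo.get(key)  # which of Repo's methods does this really need?


# Injecting a function: the contract shrinks to a single signature.
def handler_with_function(get: Callable[[str], int], key: str) -> int:
    return get(key)


# Any callable of the right shape works; no class required:
assert handler_with_function({"a": 1}.__getitem__, "a") == 1
assert handler_with_object(Repo({"a": 1}), "a") == 1
```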
I don't see the difference, but I do agree that DI is generally used to mean constructing systems. It's what you do in your main or "bootstrap" part of the program, and there are frameworks to do it for you. But really it's the same thing: you're composing functionality by passing objects (functions are objects) that satisfy an interface. It might be more acceptable to just call it dependency inversion.