I started using mise back when it was still called rtx. I was a little annoyed by asdf's quirks and having it replicate that behavior while being faster and less intrusive in my shell configuration was great.
Since then, mise has folded in two capabilities that I needed the most: Task Running and Env Vars.
Overall, it has been a fantastic experience for me. I love how the developer has spent a lot of time ensuring compatibility with existing tools while still building future capabilities.
One thing I knew I needed but couldn't find anywhere was added through the recent backends feature. I do a lot of Rust and R development, and there are dev tools that I need installed that I don't use as libraries, just the binaries. It was a problem making sure those dependencies were installed in a new environment. Now it's just so easy: I list them in my `mise.toml` file, and that ensures they are installed and installable.
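For illustration, the kind of `mise.toml` entries I mean look something like this (tool names and versions here are just examples, not my actual config):

```toml
# mise.toml
[tools]
rust = "1.80"                        # language runtime
"cargo:cargo-nextest" = "latest"     # binary-only dev tool via the cargo backend
"ubi:BurntSushi/ripgrep" = "latest"  # binary pulled straight from GitHub releases
```

Running `mise install` in the project directory then makes sure everything listed is present.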
The biggest visible boost has been in my shell startup times. Buying a computer after 5 years, with 4 times as many cores, and having it feel just as sluggish because nvm and pyenv are parsing the same set of bash files from disk was not pleasant. Mise actually made me feel I didn't just throw the money into a void.
I don't understand how people don't notice the massive tax they're paying by using nvm:
$ hyperfine "~/.nvm/nvm.sh" "mise env"
Benchmark 1: ~/.nvm/nvm.sh
  Time (mean ± σ):     1.722 s ±  0.032 s    [User: 0.064 s, System: 0.112 s]
  Range (min … max):   1.684 s …  1.805 s    10 runs

Benchmark 2: mise env
  Time (mean ± σ):     13.4 ms ±   5.7 ms    [User: 10.0 ms, System: 21.3 ms]
  Range (min … max):    9.4 ms …  42.2 ms    29 runs

Summary
  mise env ran
  128.14 ± 53.94 times faster than ~/.nvm/nvm.sh
100x is definitely something you'll notice
EDIT: for some reason in discord we're getting very conflicting results with this test. idk why, but maybe try this yourself and just see what happens.
I built my own version of nvm (called nsnvm, for "Nuño's stupid node version manager") to solve this. You can see it here: https://github.com/NunoSempere/nsnvm Absurdly fewer features (and it might break `npm install -g`), but very worth it for me for the reduced startup times.
Count of grey hairs on my head and face is only increasing so I'm gonna be that guy:
Nix/NixOS and Guix are two solid solutions to the problem, because they spin up completely independent, immutable environments. You don't need to mess around with shell hacks to swap out the correct `npm` or `ruby` binary based on a string in one of several dozen dotfiles.
More or less Python-style virtual envs on steroids, where it's not just Python stuff that's isolated but the entire setup. All your tools, all your config. Throw in `direnv` so you can make your editor and GUI tools aware of it.
The only initial headache is making sure the package is available to pull in: it's easy when the distro carries it, but when tools are published through NPM or RubyGems or Crates, or just on GitHub where you have to run `go install` to get them, then it's a bit of faff. But it's the same faff that distro maintainers have keeping, say, Debian's sources up to date.
I feel like nix has been thoroughly discussed in this post already, so you're not the only guy.
> You don't need to mess around with shell hacks
Shell integration is optional, you can use `mise en` just like `nix-shell` or `nix develop`. You could also just invoke commands/scripts through mise tasks or mise exec.
> based on a string in one of several dozen dotfiles
The "Idiomatic" files like .nvmrc, .python-version, etc are supported but most people just use a mise.toml, which (a bit like flake files) contains all the config for the environment.
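So, as a sketch, one `mise.toml` can stand in for a pile of per-tool version files (versions and the env var here are invented):

```toml
# mise.toml — replaces .nvmrc, .python-version, .ruby-version, ...
[tools]
node = "20"
python = "3.12"
ruby = "3.3"

[env]
DATABASE_URL = "postgres://localhost/dev"
```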
> but when tools are published through NPM or RubyGems or Crates or just on github that you have to run `go install` to get, then it's a bit of faff
And this is what mise excels at: `mise use npm:cowsay` or `mise use ubi:junegunn/fzf`
I think Nix/Guix are great, but also terrible. For me today, it's not worth the pain.
I recently switched to Mise for all of my JS, ruby, python, and java sdk management needs, and I’ve been delighted with it so far. Not having to install RVM, NVM, some toxic brew of python installers until I get a working python environment, and SDKMan has been such a breath of fresh air.
It does, but you need to run commands through uv to use it. I assume this means that if you run bare python commands through the task runner or whatever, mise will use the venv.
`layout python` means that the venv is managed by the layout script in direnv, but when sourcing manually I can create it with uv (or pyenv, because I sometimes need to pin the python version) and then just add the source line.
`layout python` is great when that just works. I have trouble juggling various Python version effectively with direnv's layout script (I _know_ I'm doing something wrong, but I can just set up a virtual env as a one time operation so...)
(I also like sourcing because I know _exactly_ what's happening, as I know more about the activation scripts than the layout script direnv provides. But that's just a personal thing)
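For concreteness, the manual-sourcing `.envrc` I'm describing is tiny (assuming the venv was created once by hand, e.g. with `uv venv .venv`; the path is arbitrary):

```shell
# .envrc — skip `layout python`; just activate the venv I created myself
source .venv/bin/activate
```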
I like direnv too. But if you're not planning to use uv, you might want to give pipenv a try, the officially recommended tool for the purpose. Pipenv has just one command to create a virtual environment and install packages. And while pipenv can handle a traditional requirements.txt file, its real strength is the Pipfile, a richer format with supporting lock files.
Pipenv doesn't automatically activate the venv on entry into a shell. But a shell plugin named pipenv-activate supports this. It does what you use direnv for in this case, without an envrc file in the source.
One major difference of pipenv from vanilla venv is that pipenv creates the venv in a common location outside the project (like poetry does). But this shouldn't be a big problem, since you wouldn't commit the venv into VCS anyway.
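For reference, a Pipfile is just TOML; a minimal sketch (package names are examples):

```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
requests = "*"

[dev-packages]
pytest = "*"

[requires]
python_version = "3.12"
```

`pipenv install` reads this and produces a Pipfile.lock alongside it.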
I'm loving how we're just 3 levels down into "python installers/package management" and it's already a heap of radioactive waste. Within 3 comments, 6 different python packaging and/or environment management tools were mentioned, and as a seasoned python user, I haven't even heard of direnv yet. Every week, a new pip/py/v/dir/fuck/shit/tit/arse -env tool emerges and adds to the pile of turd that is python packaging. It's truly getting to parody level of incompetence that we are displaying in the python community here.
You know: uv works, pipenv works (even in 2024), pyenv works, direnv (non-Python but fits my use cases), pipx works, pip-tools work, pip/python -m venv/virtualenv work, poetry works (in its own opinionated way), apt/dnf/apk, etc work
uv manages python binaries, user python tools, venvs, project/script dependencies, lock files. Other tools do less or different e.g., pipenv may use pyenv to install the desired python version.
I've used all of these tools successfully (at different times and/or for different use cases).
uv is the best attempt to circumvent "no size fits all" so far.
If you think a language that has just one tool is better, then it just means your use-cases are very narrow.
I don't know any other language that has such a variety of users and applications (besides C). There may be more popular languages, but nothing with such range.
I'm sorry but you are showcasing everything that's wrong with the python packaging /environment ecosystem. You list 10 tools which all work to varying degrees of overlap between each other. Which one should one use? And you don't even need to answer that because the answer changes every year. Last year it was poetry, this year it's uv. Next year it will be some other silly attempt.
This is a bad joke. It's a mature language where the answer to: "I want to manage my programming language version and installed libraries" is that you have to try a dozen different tools, each of which will cover some but not all of your requirements to do this one simple thing.
> I've used all of these tools successfully (at different times and/or for different use cases).
And you see nothing wrong with that? Pretty much every modern language has one way (or at most just a small handful of ways) to build a library and manage your dependencies/environment. The reason is that packaging and environment management is a side show. A necessary evil we have to do so that the actual crux of our work can be done. When I set out on a project, I don't say: "I want to spend a week figuring out which is the current environment management tool supported by the python mindshare". I don't want to deal with a dozen different ways of installing the dependencies of a package. This is insane. When I pick up a language, I want to know what is the way of managing dependencies and packaging up my stuff. And I don't want this to change on a yearly basis, doubly so for a mature language.
> If you think a language that has just one tool is better, then it just means your use-cases are very narrow.
Yes, I prefer that there's a choice of tools for say time series analysis or running a web service. Competition in those areas is good. That's how innovation is driven forward.
When it comes to package and environment management, I don't want innovation, I want stability. I want one agreed way to do it so that I don't need to fuck around with things which are completely orthogonal to the work I actually want to do and that will put bread on the table. I don't want to spend brain cycles keeping up to date with yet another hare brained way of defining what is a python package.
In my view, the reason why we are in this quagmire is because the roots of python are in a fairly simple scripting language, and the packaging has not well escaped these roots. You can have loose python files, you can have directories containing python files which are automagically modules, then we had to hack in a way to package collections of modules and manage their different versions. It's all hacks upon hacks upon hacks, and it has to be backwards compatible all the way to being able to run a loose python file.
I'm not saying this is easy to solve. It's the posterchild of the "Situation: there are 15 competing standards" XKCD. The time to solve it would've been 15 years ago while Guido was still the BDFL. There are too many stakeholders now to get any sort of consensus.
Well, that heavily depends on what you want to do. Python has a number of concerns when it comes to code and package management, some of which are not present in most other languages. Here's an incomplete list, off the top of my head:
1. installation and management of installed packages
2. management of Python versions present in an environment
3. management of virtual environments (Python version + packages installed and ready to use, in an isolated context)
4. building of distribution packages (nowadays pretty much only wheels, with or without native extensions, which depend on the target platform)
5. publishing of distribution packages (to PyPI or a compatible repo)
6. defining repeatable deployment environments (a subset of #1)
Most developers face some combination of above problems, and various tools offer solutions for certain combinations -- with a few covering all of them, to different levels of quality. It is crucial to understand your needs and select the tool(s) that offer the right solutions in the way that fits your usual workflow the best.
This article [0] is a good starting point to understanding the current Python packaging landscape, with a clear overview of which problem is covered by which tool.
And in this very thread, people seem to accept that this is fine. There's nothing wrong with the fact that there are 6 separate tools just for building a package, some support publishing, some don't, some also manage environments or python versions? Which of these are currently supported, which are deprecated, which are going to become deprecated in 2025?
But also there's a tool which only does publishing (twine)? The diagram is not even correct, because conda itself requires package building, except it's about building conda packages, which are cross-platform/language and separate from building a python package.
Why is there a separate set of tools for package management and package publishing?
The blog post is indeed helpful to allow someone new to python to at least see what options there are and roll the dice, but it will also raise extremely serious alarm bells that there's something fundamentally rotten at the core of the python ecosystem.
The fact that I need to read some unofficial post from 2023 to gain an overview of the python packaging and environment management ecosystem is itself completely nuts. And I can guarantee you that by now this blog post is getting outdated, because some mad genius is cooking the new best tool and ready to unleash it on the unsuspecting world.
The only case where this question is rhetorical is when you do not really use Python enough to require deciding which management tool to use. Which tells me everything I need to know about you.
Yes, you are right. I do not want to care about which management tool to use. Programming is a difficult enough discipline as it is; the tools should make it easier, not more complicated. Some people might do programming purely as an exercise in self-fulfilment and they don't mind. For me it's a tool, a means to achieve some end. If you think the python landscape and tools are in an optimal state, more power to you. Meanwhile, I have a dozen quants complaining to me that they are wasting time on irrelevant crap learning yet another set of packaging tools, and that the technologists need to figure out one consistent tool (or at least a stable set of tools) for managing packaging/environment concerns instead of inventing a new one each year.
But yes, let's go throwing around thinly veiled insults instead. This tells me everything I need to know about you :)
Yup - hard agree on all the Python parts, but I'll happily recommend direnv wherever possible. It doesn't do much, but it brings a ton of sanity and simplicity to shell env without including any installers - it's just an activator, and a very simple one, so it'll survive next year's python and npm and etc tool flavors with no issues.
pip solved a lot of baseline problems. It doesn't solve all of them. There were a bunch of failed attempts to be "better pip". They have not worked, really, but people stick around to some tools despite this.
I do think uv is different, on account of working and being very reactive to ecosystem pains.
(direnv is not Python-specific! It's just a tool to set env vars in a directory. But because Python venv's can work through setting two env vars....)
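The "two env vars" bit is easy to check for yourself: activating a venv essentially sets `VIRTUAL_ENV` and prepends the venv's `bin/` to `PATH`, which is exactly the kind of thing direnv can do. A quick sketch (using `--without-pip` just to keep it fast):

```shell
# create a throwaway venv and see what activation actually changes
python3 -m venv --without-pip /tmp/demo-venv
. /tmp/demo-venv/bin/activate
echo "$VIRTUAL_ENV"   # -> /tmp/demo-venv
command -v python     # -> /tmp/demo-venv/bin/python
```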
haha, thank you for summarising my thoughts on python package management... and very few people have mentioned poetry which is what we and most teams I know use.
I use poetry as well, but lately I've been looking at uv and had to actively stop myself because it'd be the 3rd tool I migrated to in the last 3 years.
Rust programmers seem to be lousy with ECS frameworks that never get used in any games but seem hell bent on proving that rust is the best language for game programming, and python programmers seem to break out in a case of "packaging tool building". I don't know what causes this. Perhaps some sort of pathological thinking that "I can do better"?
I really like python (and I like rust too), but if I were to take an honest look at the python packaging and environment ecosystem, I'd think that I'm being trolled. I lived through the age of setuptools.py, and while it was not good, at least there was only one approach really. Now we have a bazillion approaches that are all good, and zero consensus on what to use. Each individual tool is much better than what we had before, but the landscape has become so fractured that as a whole it's a complete shitshow.
I'm using it to unify my team's toolchain without resorting to nix or running everything in docker.
I still use docker to run services and I still like the idea of nix, but the DX of mise is too good. Tasks are really nice too, all my repo scripts now have completions.
Did you try https://devenv.sh/? It uses Nix under the hood but with an improved DX. I haven't used it myself since I find Nix good enough, but I am curious whether you would still choose mise over devenv.
We are starting to adopt devenv in our team. Overall it's really good: we have control over our toolchain, environment, and the processes that we start. There are some lingering papercuts though, like they haven't yet released a version where they allow specifying the Go toolchain version, and they seem to periodically re-download the Nix package archive. But I think they are improving fairly quickly.
Ultimately, we might still end up moving to straight Nix flakes, just not sure yet.
> they haven't yet released a version where they allow specifying the Go toolchain version
Devenv's maintainers are friendly and responsive when it comes to contributions from downstream, and like 90% of the devenv repo is plain ol' Nix code written in idioms that are common in the community.
I mention it because my team has hit some papercuts as well, but I've been really happy with how easy it's proven to address them. :)
I agree, the fix is in the main branch, they just haven't released it yet. It's just that the existing released versions just don't allow customizing the Go version because of some hardcoded assumptions in the Nix code. So I'll wait for the released version. I did say these are papercuts, not showstoppers ;-)
I briefly tried devenv and I find it much easier to use than raw nix, but I also had issues with my nix install on macos (using both the official and the DS installer). It worked well on my linux machine.
Today though mise has so many other great features, I would still choose it over devenv or devbox.
I like Devbox and I'm familiar with its features so I'll just mention the extras the mise has.
The ubi backend means I can use nearly any binary published on GitHub without needing to worry about a flake or nixpkgs. Just get the binary directly from the author. Same for many of the other backends https://mise.jdx.dev/dev-tools/backends/.
Tasks are very powerful, they can use dependencies, flags, args, completions, and watches. Also can be defined either as strings in the config or point to shell scripts or even python/node/whatever scripts.
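A sketch of what that looks like in `mise.toml` (task names and commands here are invented):

```toml
[tasks.lint]
run = "eslint src/"

[tasks.test]
depends = ["lint"]        # lint runs first
run = "jest"

[tasks.build]
run = "npm run build"
sources = ["src/**/*.ts"] # used to skip when up to date, and by watch mode
outputs = ["dist/**"]
```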
The fact that mise doesn't depend on nix is both a blessing and a curse. I have less tools available and I don't have the power of a package manager, but on the flip side I don't need to deal with the idiosyncrasies of nix.
Thank you, TIL about ubi! My brain is now compiling a list of places where this will replace either a shell script or flake for my projects :).
Tasks sounds similar process-compose which is bundled into Devbox. I'll have to read up more on tasks though to see if that's an accurate assessment.
Nix is definitely a double-edged sword... One thing I like about Devbox is that it keeps Nix mostly (!) out of sight, mostly unless I want a binary from a GitHub release :).
FYI, ubi is just one of the backends. Besides many language-specific backends (cargo, go, ...), it has three backends that support generic packages: asdf, vfox, and its own (core). Also, default backends are defined for many packages, so you can let mise choose for you.
My experience with such tools is that when you do everything, you don't do anything right.
The chance that it doesn't badly leak the underlying abstraction, and create trouble figuring things out when it invariably fails, is zero.
Because most people barely know in depth the packaging challenges for one ecosystem. In Python there are maybe a dozen in the world that have a good hang of __all__ of it.
And the devs of this tool would need to know so many.
Of course they don't, they wrap existing tools, which implies exactly what I said above.
I wonder if you misunderstood what mise is based on your mention of "packaging challenges". mise deals with language runtimes and dev tools—it doesn't manage dependencies or package anything.
I often hear suspicion about mise for this reason from people that haven't used it. I suppose it's not surprising. That said, I have spent over a decade in the developer productivity space as well as hundreds if not thousands of hours working on mise in the last 2 years—if there is someone that can build this I'm probably the right guy for the job.
Particularly with dev tools, it's long been the case that mise has solved this problem. Improvements are continuing to be made with things like improving supply chain security and ergonomics with python—though it's not like the python community itself has its DX figured out.
Of course I'm still fixing bugs pretty regularly and that probably won't ever change but there are hundreds of thousands of developers out there using mise (kind of a guess, but I'm pretty sure) and it's working great for them. It's in the top #100 tools in homebrew now: https://formulae.brew.sh/analytics/install-on-request/30d/
This definitely isn't some scrappy project—I've devoted much of my life to this problem and I think all evidence points to it being a resounding success.
I have to say, I've been reading your replies here (and your big reply in the just thread) and I'm super super impressed with your dedication to this project. I can tell just in how you write, the volume of responses in this thread, and your tone that this is a real passion project and you're deeply serious about this. I love seeing this. Thanks for your dedication!
In all scripting languages, packaging problems mostly stem from bootstrapping.
Nvm shims break, Python paths get confused, gems get installed on the wrong Ruby interpreter, etc.
Maybe you managed the impossible.
But in 20 years of python I've seen only one tool doing bootstrapping in the right direction, all the other ones have failed.
So I'm suspicious of something that does multiple languages.
In the case of mise, it delegates this complexity to the user. E.g., for Python, you have to know the config choices and choose the right backend, like asdf, pyenv, or indygreg.
Then you better understand the consequences of that choice.
Specifically for Python, right now, uv is showing the most promise for bootstrapping (and mise has a uv backend, btw).
They have carefully avoided 90% of the mistakes of all other tools, and I have a long list. They don't live in a bubble.
uv still has problems (like the indygreg builds not shipping headers) and it's still below v1, so I can't recommend it yet. But I've been testing it in different contexts for months now, and it's doing exceptionally well.
I usually take a year of testing before recommending a tool, because I need to see it in action in Windows shops, in Unix shops, with beginners, with non coders, with startup, in a corporate settings, with grey beards, etc. Python versatility means the user base is extremely diverse and you find it in the weirdest envs.
and it gave me a lot of confidence that he is actually not trying to do everything at once, but quite the opposite, nail to the death very specific problems.
I've tried everything in the Python world, with a good hundred of companies envs, and about a thousand people in trainings. Pyenv, poetry, pipenv, pdm, nix, pyflow, pdm, rye, you name it.
The number of ways they can fail is astonishing.
The uv team quickly identifies friction and fixes it at astonishing speed. They just announced they took ownership of the WHOLE python-build-standalone project, and they contribute its improvements upstream to CPython.
Their dedication to a good doc and great error messages is quite amazing as well.
I think mise gets a lot right. I use it for environment variables, Python virtualenv creation and activation, and task scripts.
I've been a software developer for over twenty years and am usually reluctant to use new tools. But mise has been a fantastic addition to my dev workflow.
I have two problems with Mise: There isn't a page with the most common commands I might want to run, and whenever I try it, some Python imports mysteriously fail. Then I disable it, and everything is fine again.
I might be motivated to persevere if I only had one of the above problems, but with both of them together, it's too much of a hassle.
I posted an issue about the documentation and I see that it was added, so thanks! If you want, I can post more issues of the type "as a new user of mise, I expect to be able to do _______ but I can't see how in the docs". I'll also post about my Python issue when I reproduce it next.
> Because most people barely know in depth the packaging challenges for one ecosystem.
I think you’re greatly overstating the problem, at least insofar as it relates to this tool.
For example, Python has its prefix (where packages are installed) baked into its installation. pip, uv, poetry — whatever — are going to install Python packages there.
This tool is unconcerned with package installation — it is only concerned with getting the interpreters installed and managing which one is on your $PATH.
There’s literally nothing to leak.
And regarding “wrapping existing tools” as proof of some shortcoming in mise (and/or similar) — if they reinvented the wheel, that's where things could leak. And separation of concerns is a good thing.
Bootstrapping Python incorrectly is the main source of packaging problems.
There is a lot to leak. For example, if you install a compiled extension that has no wheel, you'll need the headers, but some Python distributions don't provide them.
Then of course, on Windows, is your Python registered with the py launcher? How does it interact with an existing Anaconda installation? On Linux, with the existing system installation? Is the shim (or the PATH update, for mise) affecting /bin/env? How does that work with the .pyw association?
Then, what does it imply for venv creation and activation? For using -m? For .pth files? For user site-packages?
All those questions are linked to bootstrapping.
What happens then is that pip install fails or imports break, but the user has no idea it's related to a bad Python setup, because most people don't know how it works.
And now bootstrapping has broken packaging.
This is where most "python packaging sucks" complaints are born: from unknowingly botching the bootstrapping.
And the vast majority of tools to do it suck. E.g., shims are all kinds of broken (pyenv and rye come to mind).
To succeed, mise would have to know all that, pick the right tool, make a perfect abstraction, create fantastic error reporting, and test all those cases in CI on all platforms.
It's possible, but I know only one project that does this almost correctly. And even this one has a long way to go.
Saying "there is literally nothing to leak" actually makes my point perfectly: most people don't know the topic deeply enough to know what they're getting into.
Then of course there are all the modes of failure. This article has a good bit about that:
I'm trying to use rye on Windows, but it doesn't want to use the normal installed Python version, only the versions it downloads itself, and it cannot update them easily because it pins an old one to run itself.
Example: I tried to convince our deployment system to deploy patch releases to simplify our hotfix solution. The code was in Node and the deployment tool in Python. I had to thread the needle to come up with a semver pattern that was legal in both Python and NodeJS. Not impossible but annoying. (Then discovered our deployment tool wasn’t using the semver parser in Python and it still didn’t work. Goddamnit.)
Exactly. A task runner for Node.js is already complex enough. And it's not just a task runner itself, but rather an ecosystem of things working together. Now you tell me this can somehow handle Node.js, Python and others. I'll need to see how it actually works in the real world to believe the claim.
I'm not "a developer" so I never got the use case of tools like these. Instead I just use the stuff they mention (asdf, make).
I use Asdf to manage versions of all programs in a monorepo. Works great (well, actually asdf's UX is terrible, but it works reliably, and the plugin design is great).
For development, I don't ever load environment variables into my current shell session. I run a script or Makefile which loads any necessary variables, does a thing, and then exits. It would be a nightmare to have to constantly check if my current shell session had X variable in it.
I use Make for repeatable small commands that will vary per directory, or for simple parallelizing or ordered execution of commands. I have a big one that handles Helm installs, and a few more for Terraform, Packer, asdf, etc. I also use them for deployments in hierarchical environment directories, where environment variables are loaded from parent directories. I love that Make has all the features it has, because I always find myself eventually reaching for something you don't find in "just a task runner", and it makes my life easier.
I use shell scripts when I need to make a composeable tool that'll be slightly longer or more complicated than a Make target should be. I have saved so much time and effort writing these tools in shell rather than Python or something, where there is inevitably way more bugs and dependencies. The only time I have needed to use something more complex than shell is when I have a lot of APIs to deal with that also deal in JSON; if it's a lot of complexity it's better than curl/jq, but if it's only one small task, curl/jq is better.
The end result works great. The whole environment just needs asdf installed (from Homebrew, for example). With stock Make and the stock Bash v3, I can manage everything automatically, everything's version-pinned and automated, all variables get loaded at runtime as needed, and the whole thing can be grokked by just reading some simple Makefiles.
The only thing I want to fix now is to get rid of the superfluous Makefiles from directories (they're all symlinked back to one Makefile). It's a pain to re-symlink them all when I change directory structure. Probably should just write a script for it...
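Something like this throwaway script would do the re-symlinking (the directory layout is a made-up demo in a temp dir, so it's safe to run as-is; adapt the paths and the glob to the real tree):

```shell
#!/bin/sh
# relink.sh — point every project dir's Makefile at the single real one
root=$(mktemp -d)                                # stand-in for the repo root
printf 'all:\n\techo hi\n' > "$root/Makefile"    # the one real Makefile
mkdir -p "$root/svc/a" "$root/svc/b"

for d in "$root"/svc/*/; do
    ln -sf ../../Makefile "${d}Makefile"         # recreate (or fix) the symlink
done

ls -l "$root"/svc/*/Makefile
```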
I use Mise as a drop-in replacement for asdf. It's fully backwards compatible with .tool-versions and other config files, and unlike asdf it uses a PATH-based approach instead of shims.
I'm alright with the loading time for shims, because I find the constantly changing path variable a bit jarring. Not a deal breaker, I must admit. But when I try to use shims instead of the shell plugin, I lose the environment manager. I wonder if there is a way to activate just the environment manager.
All the features are opt-in. I started using mise because I wanted something like asdf only without the bad UX, and mise can use asdf plugins.
For env vars, you don't need to load them into your shell if you don't want to. When you run a task, mise will make sure the env vars in your config are set, so that's not something you need to worry about.
I still use shell scripts like you describe; mise just supercharges them a bit. When I need to make sure my teammates have the tools a script uses (like jq) installed, mise just ensures they are installed before running the command, as long as they're declared in the tools list.
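Concretely, the kind of `mise.toml` I mean (versions and the task are illustrative, not from a real repo):

```toml
[tools]
jq = "1.7"
shellcheck = "latest"

# teammates running `mise run report` get jq installed automatically first
[tasks.report]
run = "./scripts/report.sh"   # a script that uses jq internally
```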
if you use asdf you can drop mise right in and it'll work the same but faster and with better supply-chain security. people have been doing this for almost 2 years and mise fits that use-case perfectly.
You don't have to touch the env vars and tasks stuff.
I think you should give `mise` a chance. I believe it can help improve your workflow.
It's better at managing tools than `asdf`, very close to `direnv` and superior to `make` as a task runner (more verbose but much easier to understand). One of the advantages is that `mise` tasks can be standalone files (you can even write file tasks in python if you prefer, see https://mise.jdx.dev/tasks/file-tasks.html)
i have a similar tree-containing-symlinks-to-one-thing - and i do it by symlinking each x to ../x; only the root x (of any tree) is a real file (or missing, if it lives on some other device). Thus the structure is still tar/archivable.
Of course you can do things like these too:
$ MAKEFLAGS="-f /that/root/makefile" make
or (rude)
$ alias make="make -f /that/rooty/makefile"
but beware that adding another -f somemakefile will load both specified makefiles.
Apart from that, my biggest grievance with make is that it cannot handle spaces in names. By design.
I'd really like a better idea on Windows support for mise. Especially for development using WSL. It seems like it probably works there, but the docs are pretty slim.
> Note that Windows support is very minimal for now.
There are multiple tools in this space - even devbox has a direct competitor. With mise, I'm not expecting an isolated environment, but rather a meta package manager. I can install many tools directly into my user shell. If I wanted that with nix, I would be using a nix user install rather than devbox.
You just choose what you like most. And mise seems to have a large fanbase.
Mise just works and takes maybe 1-2 minutes to explain and have a tool running on a team member's machine, in my experience. I couldn't get into nix; it was too complicated for me and I've yet to find a resource that makes it click. If it ever does, I'd feel comfortable encouraging the rest of the team to use it, but since mise is so easy and even more popular, I will probably just stay with it. I will look into devbox though, thank you for the recommendation!
Lately I've been thinking about the best way to integrate a task runner like mise into a GitHub Actions workflow.
Looking at the workflow files in the mise repository, it seems like they gave up and just put in a few `run: mise` steps (having to rewrite tasks, being unable to use task dependencies, etc).
I think it would be better if you could generate the workflow files but I haven't found such a project yet.
there isn't a hook for executing shell source, but it would be possible to add I think. The current hooks already execute inside of a shell function so there would just need to be a way to declare that you want a hook to run inside the shell, maybe like this:
[hooks.enter]
shell = true
run = ". completions/mycli.sh"
You just mentioned the only thing that bugs me with mise: the frequent (sometimes 3x per day) releases.
I say this only because I’m one of the maintainers of the MacPorts port for mise, and while I’ve automated things, I have had more than one port update be outdated before it gets merged because of these releases.
I’ve automated the PR submission steps (not with GHA, but with a shell script I run on my Mac), but after discussion with the gentleman who usually merges those PRs, we decided that we'll probably do them every 2 or 3 days.
I've settled into a mostly daily cadence. If there's a day with more than one release, it's because there is a relatively serious bug that I don't want anyone to need to wait around for; those are the releases you should actually pay attention to so you're not missing something important. It's true I used to do more, but I've dialed this back in recent weeks.
That said, it's a selfish strategy that benefits me more than anyone. It ultimately means I don't spend as much time fixing bugs since resolutions go out quicker and users get to test them (often whether they want to or not) while the issue is fresh in my head and I can quickly make an adjustment if needed.
I know especially package maintainers such as yourself would prefer I have nightlies for this purpose and then less frequent releases but that's more work for me and means users generally will be testing changes with a bigger delay.
Users may also think they want this but I actually think it wouldn't serve their interests—it'd mean I spend less time actually improving mise and more time with logistics. I'm also terrible at release notes and commit messages and I'm not sure it's an area I want to improve in simply because that would come at the cost of doing other things. I also don't like doing that stuff and this is (ostensibly) a hobby after all.
That said, I'd really appreciate if you came by our discord and had a conversation about this with me. While those are my reasons for the way things are I'm also certainly not opposed to change. With homebrew I have a release hook to automate this process and perhaps we could do something similar for MacPorts. We could even automate every N releases or something if you think that would be better.
I’ve just started a new job (where I will be pitching mise to replace asdf and maybe a few other things because of your recent push on supply chain security), so it’ll be a bit of time before I can do this. I’ve thought about how to automate the process further and may do so in the holiday break week.
If I get as far as making a GitHub action for this, I will absolutely discuss with you because it would be very good to make this work as quickly as possible.
Direnv can do this, but currently only for bash shells and maybe only when using nix flakes. There is an open issue to get zsh working, but I just stick to bash TBH
I don't really understand why people use direnv, but you could easily replace it with a shell script that could load anything you want when entering a directory.
You basically just write a script that hooks into your shell (using your shell's existing hooks, like PROMPT_COMMAND for bash) and have it load a shell script in any directory you enter.
Obviously this is a security risk, as you could enter a directory controlled by a hacker. This is why (presumably) they only deal with exported variables, though even that's dangerous.
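A minimal sketch of that idea in bash (the `.dir_env` file name and the trust-marker convention are made up for illustration; direnv itself does much more, including unloading and allow-listing):

```shell
# Runs before every prompt via PROMPT_COMMAND; re-sources a
# per-directory env file whenever the working directory changes.
_LAST_ENV_DIR=""
_load_dir_env() {
  if [ "$PWD" != "$_LAST_ENV_DIR" ]; then
    _LAST_ENV_DIR="$PWD"
    # Only source files you have explicitly marked as trusted,
    # to avoid running code from a directory you don't control.
    if [ -f "$PWD/.dir_env" ] && [ -f "$PWD/.dir_env.trusted" ]; then
      . "$PWD/.dir_env"
    fi
  fi
}
PROMPT_COMMAND="_load_dir_env${PROMPT_COMMAND:+;$PROMPT_COMMAND}"
```

The trust-marker check is the crude stand-in for direnv's `direnv allow`, which is exactly the security concern raised above.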
direnv unloads the directory's environment as soon as I leave it, so I don't have to worry about the provenance of any one given shell instance. All I have to care about is its current working dir.
I’ve been using mise at work to manage go versions, it’s been good although a little confusing and undocumented at times. I’ve never used asdf or similar before so there’s probably just a bit of learning curve but this thread is encouraging me to try it more.
I've ejected most of my pain into a black hole, but as the person who wrote the Ansible scripts to bring up dev machines, I had nothing but loathing for nvm. It somehow always managed to find bold new ways to misbehave & crud things up. It was incredibly unpleasant to work with.
Someone else suggested we switch to asdf & what a rapid & happy migration that was. Good riddance nvm.
nvm is unbearably slow; this is really fast. 200 to 300ms of latency whenever you open a terminal is noticeable, and I was sometimes getting far more than that, up to 1 sec. It's really, really bad.
This allowed me to replace pyenv, nvm, and everything else with something really neat and simple. The only thing that caught me by surprise was `mise trust`, but the CLI helped me understand it. Thanks jdxcode!
I find the JetBrains integration spotty - I have to run `eval "$(mise activate)"` in every terminal session for it to pick up the env. I have vague recollections of also having issues with running executables from the IDE as it is missing the env details.
I believe, and I could be wrong about this, that JetBrains only picks up changes to your .shell_rc files on a restart.
What I learned instead was to stop using the built-in terminal in WebStorm. If WebStorm crashes you're fucked. Objectively, it never crashed that often, and does so even less recently, but not never.
WebStorm likes to pick up file system changes when you give it focus, so any manipulation you do in the built-in terminal doesn't necessarily get picked up right away.
If not make, then what do you use for a project that has Python for the backend and JavaScript for the frontend? Does everyone learn all the tools, or do you just provide a 'make lint' that works in all codebases?
It is a less heavyweight make. Similar syntax and behavior, no .PHONY, a couple of helper functions and behaviors. It is designed as a task runner rather than a build system.
Have enjoyed replacing makefiles with https://taskfile.dev/ which looks like it could be more powerful due to being able to detect changes etc. But glad Just has been good.
tl;dr: mise has more functionality like parallel tasks, watching for changes, and comprehensive argument parsing support (including custom autocomplete)
the biggest difference is the syntax. just is more concise and you really need to learn the syntax in order to use it. In mise things are more verbose but I feel easier to read for someone unfamiliar with it.
Ignoring the task runner stuff, Mise is great just as a "better asdf". It allows many more package sources (including pipx, go and cargo), and it puts the actual executables on PATH -- not shims like asdf, which has a tendency to break stuff.
mise is asdf written with security and performance in mind (I believe it started with performance, but has become much more obsessed, in a very good way, with security).
Well I'm tired of all this bullshit anyways. I don't want to manage multiple versions of the same language on my system in the first place, but I guess it's useful for those who need it.
I’m guessing you navigated to the https://mise.jdx.dev/environments.html page and saw the TOML syntax (which looks an awful lot like INI), and confused yourself.
Mise (like a lot of software) uses TOML as the format for its config files (as opposed to something like JSON). Mise reads that config, to automatically export environment variables on a per directory tree basis.
When the docs refer to environment variables, they very literally do mean environment variables. The values of which are taken from a format that resembles INI, as you have noticed.
Environment variables are key/value pairs that are passed from parent process to child process when the child is first created. A convention has also arisen where these can be put into environment files so the overarching system (for example, docker) can load them from a consistent place (without having to add them to your shell and risk leakage across projects) and then pass them down to child processes in typical environment variable fashion.
Also there are no sections like there are in ini files.
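The parent-to-child mechanics described above can be observed directly in a shell (the variable names are arbitrary):

```shell
# An exported variable is copied into each child process's
# environment; a plain shell variable is not.
export GREETING="hello"
CHILD_SEES=$(bash -c 'echo "$GREETING"')          # child inherits GREETING

LOCAL_ONLY="secret"                               # not exported
CHILD_MISSES=$(bash -c 'echo "${LOCAL_ONLY:-unset}"')
```

Here `CHILD_SEES` ends up as "hello" while `CHILD_MISSES` is "unset", which is the whole distinction between a shell variable and an environment variable.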
Unrelated, but triggered by the remark: I can’t wait to meet the person on HN who has the take of “if it’s not in COBOL, I won’t touch it”. I could suggest a lisp, but I think those folks are actually somewhat plentifully here?
Poor reading comprehension; I read "The following shows using mise to install different versions of node." as "The following shows using node to install different versions of mise."
This is already a solved problem with Nix shell (except the task runner part). I don't understand why there are any other alternatives still being developed. Nix supports more packages than any other solution, it's battle tested and cross platform (didn't try on Windows but on Mac OS it works fine). And it's more reproducible than any other package manager.
Nix UX really sucks I agree with that. But it has a very robust core model and is reproducible from the bottom up. Tools like asdf, renv etc. just provide you some binaries. If you need some system libraries installed they don't help with that for example.
Can one provide a reproducible dev environment that uses a tool that is not yet in the mise registry? Or does one need to wait for it to be added to the registry? Also, if I want to provide a python runtime that is compiled slightly differently, can I do that? Or does it have to be distributed as a precompiled binary?
> that uses a tool that is not yet in mise registry
Yes, you can directly get tools from npm/pypi/cargo/github-releases/asdf-plugins/vfox-plugins without anyone touching the mise registry. The registry is just a convenient index of short names e.g. "fzf@0.56.3" maps to ubi:junegunn/fzf@0.56.3 which will download the appropriate arch binary from the v0.56.3 junegunn/fzf GitHub release.
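As a sketch, backend-prefixed tools can be listed directly in a `mise.toml` (the tool choices and versions here are just examples):

```toml
# Tools resolved via explicit backends, no registry entry required.
[tools]
"ubi:junegunn/fzf" = "0.56.3"  # GitHub-release binary via ubi
"npm:prettier" = "latest"      # installed from npm
"cargo:ripgrep" = "14.1.0"     # installed via cargo
```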
> if I want to provide a python runtime that is compiled slightly differently
The default uses precompiled binaries, but with one setting it can use python-build/pyenv under the hood, then all the pyenv env vars can be used to configure the build process.
I think the entire concept of nix is a broken model not fixable by docs and better DX. Precompiled, portable binaries are the way to go which is what mise is built on. Trying to maintain this separate build chain where everything is compiled in a sandbox gives your system a "split-brain" problem where you have the "nix world" and the "macos (or whatever) world". Ultimately, this just causes problems. Of course I'm ignoring NixOS but that's a sledgehammer for this supposed "problem" nix is trying to solve in the first place.
mise is for the 90% of developers that just want things to be fast and work and don't care about the nuts and bolts.
Nothing in nix says you have to compile something from source. Just that the resulting artifact needs to be reproducible hermetically. You can download any random blob from the internet as a nix derivation, as long as you tell nix what the resulting hash should be after you download it. Sure, it might have unmet runtime dependencies, but thats orthogonal.
What you are really butting up against is that the nix store is a bit of a split-brained runtime environment. Its not easy to e.g. `gem install` to your system while running a nix-managed ruby. This has nothing to do with the binaries (well... sometimes it does because nix will patch paths to point to the readonly store, but again thats orthogonal).
Don't you kinda need to separate the worlds to avoid borking your system when you update? Eg macOS decides to ship some customizations in its curl,[1] and now you need your own curl because Apple's customization is breaking your project?
I've been using nixpkgs on macos (and without brew) for 3 years now and not sure what kind of split brain problem you're talking about?
I also have no idea about the problems with multiple anaconda and other python builds complain about.
> Precompiled, portable binaries are the way to go which is what mise is built on.
And where are those mystery meat binaries supposed to come from? What do you do if the provided binaries aren't enough? (Wrong version, wrong build flags, what you want isn't even packaged, don't support your platform, etc, etc, etc.)
Binary package managers have been tried over and over, and never work out well.
> gives your system a "split-brain" problem where you have the "nix world" and the "macos (or whatever) world".
Yeah no, that's inherent as soon as you bring in any kind of secondary package manager. Including pyenv or mise or whatever else.
The xz example does not support your case. Not only was every downstream build infected until it was discovered, it also needed a distro-specific modification (to openssh in Debian and Fedora, IIRC) to work at all.
If only they were actually portable. As it stands, mise is just another half-solution, and you can't solve the other half of the problem by ignoring it.
I really want nix to succeed, but it has terrible UX and documentation. It also doesn't help that the community is still fighting over whether flakes should be a thing.
Every time I’ve used a declarative system at work I either eventually become one of the experts and we all have lines outside our door of people who just don’t get it, or I replace it with something imperative so we can all get some fucking peace.
Ant was by far the most stressful. I had to cyberstalk James Duncan Davidson to understand what he was thinking. The mental model for the tool wasn’t in the docs. It was in forum posts spread across three+ different websites. And it was slightly insane. First writer wins broke everyone’s brains across three jobs before someone helped me kill it and replace it with something else.
It’s also a cornerstone of my thesis: never trust software someone says they wrote on an airplane. That’s not enough time to create a good solution, and any decision you make while experiencing altitude sickness is sketchy. (Prior to 2010, airline passengers were experiencing 8000 ft atmosphere on every flight. One of the selling points of the 787 was 5000 ft equivalent pressure)
Is it a frequent experience for you to have to disregard a piece of otherwise appealing software because the developer claims to have written it on a plane?
> This is already a solved problem with Nix shell (except the task runner part).
devenv, a Nix+direnv-based solution, has a pretty cool task runner thing, plus service management.
> I don't understand why there are any other alternatives still being developed.
I love Nix and I believe it's a great choice for many teams, for the same use cases as mise. Nix's paradigm is the future. But Nix's defects are also real and obvious enough if you use it for long. I can understand why someone might see value in trying a fresh new effort, or an approach that asks less commitment.
Reading nixpkgs is pretty amazing info, better than any docs. But I actually think the docs are pretty fantastic. The manual is solid, and the CLI help text is pretty excellent. I don't get it when people say the docs are bad.
Not OP but you don’t have to look far when it comes to Nix. Here’s a couple of the more annoying ones:
Example 1:
Updating dependencies that are outside of nixpkgs is not a one-command ordeal, especially if you're doing something like updating the commit SHA of a packaged release you're targeting. I think there's no reason they couldn't have some clean way of writing rules to automate updating non-nixpkgs packages (why do I have to do this dumb nix-prefetch-url thing myself to compute some hash?).
If I am selling nix to end users as a system package management solution, I'm comparing it to tools like brew. As long as you stay in nixpkgs it's fine - but almost everyone is going to have at least one package that is either not in nixpkgs or that they can't use the nixpkgs version of for some reason, and then maintenance is no longer as simple as brew update / nix flake update.
Example 2:
Nix just doesn’t have very good debugging tools. It reminds me a lot of terraform. Yes there is an REPL but the simple task of breakpointing and seeing the value of data structures is not really straightforward in nix. If you want to it language to be approachable you need to make it dead simple to immediately be able to spit out its state in a way that an engineer knows how to work with it. I would not say the process of doing so in Nix is dead simple
Use it as a signal to start an investigation into how many agree with that sentiment, try to measure the resulting adoption dropoff, and use those stats to push for a UX redesign.
This is a pretty ignorant stance. It implies that the issues would somehow be easy to address if they were just acknowledged. Are you aware of the efforts that have been put into this so far? Have you considered that achieving the vague improvements you are asking for, working on a groundbreaking technology while holding together a rapidly growing community, is just really hard?