Mise: Dev tools, env vars, task runner (github.com/jdx)
468 points by ksec 10 months ago | hide | past | favorite | 189 comments


I started using mise back when it was still called rtx. I was a little annoyed by asdf's quirks and having it replicate that behavior while being faster and less intrusive in my shell configuration was great.

Since then, mise has folded in two capabilities that I needed the most: Task Running and Env Vars.

Overall, it has been a fantastic experience for me. I love how the developer has spent a lot of time ensuring compatibility with existing tools while still building future capabilities.

I will add that one thing I knew I needed, but couldn't find anywhere, arrived with the recent backends feature. I do a lot of Rust and R development, and there are dev tools I need installed that I don't use as libraries, just the binaries. Making sure those dependencies were installed in a new environment used to be a problem. Now it's easy: I list them in my `mise.toml` file, and that ensures they get installed.
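As a sketch of what that can look like (the tool names and versions below are illustrative, not from this comment), a `mise.toml` can pin both runtimes and standalone binaries via backends:

```toml
# Hypothetical mise.toml: pins language runtimes and standalone CLI
# tools so a fresh checkout gets everything via `mise install`.
[tools]
rust = "1.82"                        # example versions
r = "4.4"
"cargo:cargo-nextest" = "latest"     # cargo backend: install a binary crate
"ubi:BurntSushi/ripgrep" = "latest"  # ubi backend: fetch a GitHub release binary
```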


The biggest visible boost has been in my shell startup times. Buying a new computer after 5 years, with 4 times as many cores, and having it feel just as sluggish because nvm and pyenv were parsing the same set of bash files from disk was not pleasant. Mise actually made me feel I didn't just throw the money into a void.


I don't understand how people don't notice the massive tax they're paying by using nvm:

    $ hyperfine "~/.nvm/nvm.sh" "mise env"

    Benchmark 1: ~/.nvm/nvm.sh
      Time (mean ± σ):      1.722 s ±  0.032 s    [User: 0.064 s, System: 0.112 s]
      Range (min … max):    1.684 s …  1.805 s    10 runs
     
    Benchmark 2: mise env
      Time (mean ± σ):      13.4 ms ±   5.7 ms    [User: 10.0 ms, System: 21.3 ms]
      Range (min … max):     9.4 ms …  42.2 ms    29 runs
      
    Summary
      mise env ran
      128.14 ± 53.94 times faster than ~/.nvm/nvm.sh
100x is definitely something you'll notice

EDIT: for some reason in discord we're getting very conflicting results with this test. idk why, but maybe try this yourself and just see what happens.


I switched to https://github.com/Schniz/fnm a while ago and it’s been fantastic. I also bake it into all of our Packer images at work.


I built my own version of nvm (called nsnvm, for "Nuño's stupid node version manager") to solve this. You can see it here: https://github.com/NunoSempere/nsnvm Absurdly fewer features (and it might break npm install -g), but very worth it for me for the reduced startup times.


I prefer GitHub.com/tj/n. It's really nice.


I notice. It's awful.


Count of grey hairs on my head and face is only increasing so I'm gonna be that guy:

Nix/NixOS and Guix are two solid solutions to the problem, because they spin up completely independent, immutable, environments. You don't need to mess around with shell hacks to swap out the correct `npm` or `ruby` binary based on a string in one of several dozen dotfiles.

They're more or less Python-style virtual envs on steroids, where it's not just Python stuff that's isolated, but the entire setup. All your tools, all your config. Throw in `direnv` so you can make your editor and GUI tools aware of it.
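For instance (assuming nix-direnv is set up for the flake variant), the `.envrc` that wires direnv into a Nix environment can be as small as:

```shell
# .envrc — direnv runs this on cd into the directory.
use flake   # load the dev shell from ./flake.nix (needs nix-direnv)
# or, for a classic shell.nix:
# use nix
```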

The only initial headache is making sure the package is available to pull in. It's easy when the tool is packaged by the distro, but when tools are published through NPM or RubyGems or Crates, or just on GitHub where you have to run `go install` to get them, it's a bit of faff. But it's the same faff that distro maintainers have keeping, say, Debian's sources up to date.


I feel like nix has been thoroughly discussed in this post already, so you're not the only guy.

> You don't need to mess around with shell hacks

Shell integration is optional, you can use `mise en` just like `nix-shell` or `nix develop`. You could also just invoke commands/scripts through mise tasks or mise exec.

> based on a string in one of several dozen dotfiles

The "Idiomatic" files like .nvmrc, .python-version, etc are supported but most people just use a mise.toml, which (a bit like flake files) contains all the config for the environment.

> but when tools are published through NPM or RubyGems or Crates or just on github that you have to run `go install` to get, then it's a bit of faff

And this is what mise excels at: `mise use npm:cowsay` or `mise use ubi:junegunn/fzf`

I think Nix/Guix are great, but also terrible. For me today, it's not worth the pain.


I recently switched to Mise for all of my JS, ruby, python, and java sdk management needs, and I’ve been delighted with it so far. Not having to install RVM, NVM, some toxic brew of python installers until I get a working python environment, and SDKMan has been such a breath of fresh air.


`brew install uv` and no more python troubles occur


`mise use uv`, there, I fixed it for you.

Mise actually has great integration with uv, like auto venv activation.


I think uv has auto venv activation and workspaces and stuff, now that uv is the official successor of Rye?


It does, but you need to run commands through uv to use it. I assume the mise integration means that if you run bare python commands in the task runner or wherever, mise will use the venv.


Ah, from the Astral folks. Right on, I'll check that out sometime.

Maybe on my work laptop, which resembles this xkcd classic: https://xkcd.com/1987/


uv is its own thing, but direnv + `source .venv/bin/activate` is straightforward nowadays.

Direnv has saved me so much pain nowadays


Why `source .venv/bin/activate` with direnv? I use `layout python` in my .envrc and direnv activates the venv on entry to the directory.


`layout python` means the venv is managed by the layout script in direnv, but when sourcing manually I can create it with uv (or pyenv, because I sometimes need to pin the Python version) and then just add the source line.
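A sketch of that workflow (the Python version is just an example):

```shell
# One-time setup, run by hand outside .envrc:
#   uv venv --python 3.12 .venv
#   (or: pyenv local 3.12 && python -m venv .venv)
#
# .envrc then only needs the activation line:
source .venv/bin/activate
```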

`layout python` is great when it just works. I have trouble juggling various Python versions effectively with direnv's layout script (I _know_ I'm doing something wrong, but I can just set up a virtual env as a one-time operation, so...)

(I also like sourcing because I know _exactly_ what's happening, as I know more about the activation scripts than the layout script direnv provides. But that's just a personal thing.)


I like direnv too. But if you're not planning to use uv, you might want to give pipenv a try - the officially recommended tool for the purpose. Pipenv has just one command to create a virtual environment and install packages. And while pipenv can handle a traditional requirements.txt file, its real strength is the Pipfile - a richer format with supporting lock files.
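The basic flow, for anyone who hasn't used it, is roughly:

```shell
pipenv install requests      # creates the venv if missing, records it in Pipfile
pipenv install --dev pytest  # dev-only dependency
pipenv shell                 # spawn a shell with the venv activated
pipenv run python app.py     # or run a single command inside it
```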

Pipenv doesn't automatically activate the venv on entry into a shell. But a shell plugin named pipenv-activate supports this. It does what you use direnv for in this case, without an envrc file in the source.

One major difference of pipenv from vanilla venv is that pipenv creates the venv in a common location outside the project (like poetry does). But this shouldn't be a big problem, since you wouldn't commit the venv into VCS anyway.


Pipenv is certainly not "officially recommended", and it should be avoided at all costs: https://chriswarrick.com/blog/2018/07/17/pipenv-promises-a-l...


I'm loving how we're just 3 levels down into "python installers/package management" and it's already a heap of radioactive waste. Within 3 comments, 6 different python packaging and/or environment management tools were mentioned, and as a seasoned python user, I haven't even heard of direnv yet. Every week, a new pip/py/v/dir/fuck/shit/tit/arse -env tool emerges and adds to the pile of turd that is python packaging. It's truly getting to parody level of incompetence that we are displaying in the python community here.


You know: uv works, pipenv works (even in 2024), pyenv works, direnv (non-Python but fits my use cases), pipx works, pip-tools work, pip/python -m venv/virtualenv work, poetry works (in its own opinionated way), apt/dnf/apk, etc work

uv manages python binaries, user python tools, venvs, project/script dependencies, lock files. Other tools do less or different e.g., pipenv may use pyenv to install the desired python version.
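For illustration, those responsibilities map onto uv subcommands something like this (versions are arbitrary):

```shell
uv python install 3.12   # manage Python binaries
uv tool install ruff     # user-level Python tools (pipx-style)
uv venv                  # create a venv for the project
uv add requests          # add a project dependency to pyproject.toml
uv lock                  # write/update the lock file
uv run script.py         # run with the project/script deps resolved
```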

I've used all of these tools successfully (at different times and/or for different use cases).

uv is the best attempt so far to get around the fact that no one size fits all.

If you think a language that has just one tool is better, then it just means your use-cases are very narrow.

I don't know any other language (besides C) that has such a variety of users and applications. There may be more popular languages, but nothing with such range.


> If you think a language that has just one tool is better, then it just means your use-cases are very narrow.

Counterpoint from “The Zen of Python”:

> There should be one-- and preferably only one --obvious way to do it.

https://peps.python.org/pep-0020/


I'm sorry, but you are showcasing everything that's wrong with the Python packaging/environment ecosystem. You list 10 tools, all of which work, with varying degrees of overlap between them. Which one should one use? And you don't even need to answer that, because the answer changes every year. Last year it was poetry, this year it's uv. Next year it will be some other silly attempt.

This is a bad joke. It's a mature language where the answer to: "I want to manage my programming language version and installed libraries" is that you have to try a dozen different tools, each of which will cover some but not all of your requirements to do this one simple thing.

> I've used all of these tools successfully (at different times and/or for different use cases).

And you see nothing wrong with that? Pretty much every modern language has one way (or at most just a small handful of ways) to build a library and manage your dependencies/environment. The reason is that packaging and environment management is a side show. A necessary evil we have to do so that the actual crux of our work can be done. When I set out on a project, I don't say: "I want to spend a week figuring out which is the current environment management tool supported by the python mindshare". I don't want to deal with a dozen different ways of installing the dependencies of a package. This is insane. When I pick up a language, I want to know what is the way of managing dependencies and packaging up my stuff. And I don't want this to change on a yearly basis, doubly so for a mature language.

> If you think a language that has just one tool is better, then it just means your use-cases are very narrow.

Yes, I prefer that there's a choice of tools for say time series analysis or running a web service. Competition in those areas is good. That's how innovation is driven forward.

When it comes to package and environment management, I don't want innovation, I want stability. I want one agreed way to do it so that I don't need to fuck around with things which are completely orthogonal to the work I actually want to do and that will put bread on the table. I don't want to spend brain cycles keeping up to date with yet another hare brained way of defining what is a python package.

In my view, the reason why we are in this quagmire is because the roots of python are in a fairly simple scripting language, and the packaging has not well escaped these roots. You can have loose python files, you can have directories containing python files which are automagically modules, then we had to hack in a way to package collections of modules and manage their different versions. It's all hacks upon hacks upon hacks, and it has to be backwards compatible all the way to being able to run a loose python file.

I'm not saying this is easy to solve. It's the posterchild of the "Situation: there are 15 competing standards" XKCD. The time to solve it would've been 15 years ago while Guido was still the BDFL. There are too many stakeholders now to get any sort of consensus.


> Which one should one use?

Well, that heavily depends on what you want to do. Python has a number of concerns when it comes to code and package management, some of which are not present in most other languages. Here's an incomplete list, off the top of my head:

1. installation and management of installed packages

2. management of Python versions present in an environment

3. management of virtual environments (Python version + packages installed and ready to use, in an isolated context)

4. building of distribution packages (nowadays pretty much only wheels, with or without native extensions, which depend on the target platform)

5. publishing of distribution packages (to PyPI or a compatible repo)

6. defining repeatable deployment environments (a subset of #1)

Most developers face some combination of above problems, and various tools offer solutions for certain combinations -- with a few covering all of them, to different levels of quality. It is crucial to understand your needs and select the tool(s) that offer the right solutions in the way that fits your usual workflow the best.

This article [0] is a good starting point to understanding the current Python packaging landscape, with a clear overview of which problem is covered by which tool.

[0] https://alpopkes.com/posts/python/packaging_tools/


Thanks, but it was a rhetorical question :).

My objections are to the fact that someone had to build this atrocity of a Venn diagram just to illustrate the python ecosystem: https://alpopkes.com/posts/python/figures/venn_diagram_updat...

And in this very thread, people seem to accept that this is fine. There's nothing wrong with the fact that there are 6 separate tools just for building a package, some support publishing, some don't, some also manage environments or python versions? Which of these are currently supported, which are deprecated, which are going to become deprecated in 2025?

But also there's a tool which only does publishing (twine)? The diagram is not even correct, because conda itself requires package building, except it's about building conda packages, which are cross-platform/language and separate from building a python package.

Why is there a separate set of tools for package management and package publishing?

The blog post is indeed helpful to allow someone new to python to at least see what options there are and roll the dice, but it will also raise extremely serious alarm bells that there's something fundamentally rotten at the core of the python ecosystem.

The fact that I need to read some unofficial post from 2023 to gain an overview of the python packaging and environment management ecosystem is itself completely nuts. And I can guarantee you that by now this blog post is getting outdated, because some mad genius is cooking the new best tool and ready to unleash it on the unsuspecting world.


> it was a rhetorical question

The only case where this question is rhetorical is when you do not really use Python enough to require deciding which management tool to use. Which tells me everything I need to know about you.


Yes, you are right. I do not want to care about which management tool to use. Programming is a difficult discipline as it is; the tools should make it easier, not more complicated. Some people might do programming purely as an exercise of self-fulfilment and they don't mind. For me it's a tool, a means to achieve some ends. If you think the Python landscape and tools are in an optimal state, more power to you. Meanwhile, I have a dozen quants complaining to me that they are wasting time on irrelevant crap learning yet another set of packaging tools, and that the technologists need to figure out one consistent tool (or at least a stable set of tools) for managing packaging/environment concerns instead of inventing a new one each year.

But yes, let's go throwing around thinly veiled insults instead. This tells me everything I need to know about you :)


Well, direnv isn't a python specific tool.


Yup - hard agree on all the Python parts, but I'll happily recommend direnv wherever possible. It doesn't do much, but it brings a ton of sanity and simplicity to shell env without including any installers - it's just an activator, and a very simple one, so it'll survive next year's python and npm and etc tool flavors with no issues.


I think there's a simpler explanation here.

pip solved a lot of baseline problems. It doesn't solve all of them. There were a bunch of failed attempts to be "better pip". They have not worked, really, but people stick around to some tools despite this.

I do think uv is different, on account of working and being very reactive to ecosystem pains.

(direnv is not Python-specific! It's just a tool to set env vars in a directory. But because Python venv's can work through setting two env vars....)


haha, thank you for summarising my thoughts on python package management... and very few people have mentioned poetry which is what we and most teams I know use.


I use poetry as well, but lately I've been looking at uv and had to actively stop myself because it'd be the 3rd tool I migrated to in the last 3 years.

Rust programmers seem to be lousy with ECS frameworks that never get used in any games but seem hell bent on proving that rust is the best language for game programming, and python programmers seem to break out in a case of "packaging tool building". I don't know what causes this. Perhaps some sort of pathological thinking that "I can do better"?

I really like python (and I like rust too), but if I were to take an honest look at the python packaging and environment ecosystem, I'd think that I'm being trolled. I lived through the age of setuptools.py, and while it was not good, at least there was only one approach really. Now we have a bazillion approaches that are all good, and zero consensus on what to use. Each individual tool is much better than what we had before, but the landscape has become so fractured that as a whole it's a complete shitshow.


Rust has no major GUI framework or game yet, and those are the last heavy bastions of C++.

Therefore, I think it's because so many Rust programmers were C++ programmers that they would like to move another major stronghold over.

Pure speculation though.


The first project I built to learn rust - we are talking 2018 - was an ECS. I was speaking of personal experience in that particular jibe :)

Happily, I haven't yet been afflicted by a strong urge to build python packaging tools and inshallah I will escape this dreadful fate.


;-)


I'm using it to unify my team's toolchain without resorting to nix or running everything in docker.

I still use docker to run services and I still like the idea of nix, but the DX of mise is too good. Tasks are really nice too, all my repo scripts now have completions.


Did you try https://devenv.sh/? It uses Nix under the hood but with an improved DX. I haven't used it myself personally since I find Nix good enough, but I am curious if you would still choose mise over devenv.


We are starting to adopt devenv in our team. Overall it's really good--we have control over our toolchain, environment, and the processes that we start. There are some lingering papercuts though, like they haven't yet released a version where they allow specifying the Go toolchain version, and they seem to periodically re-download the Nix package archive. But I think they are improving fairly quickly.

Ultimately, we might still end up moving to straight Nix flakes, just not sure yet.


> they haven't yet released a version where they allow specifying the Go toolchain version

Devenv's maintainers are friendly and responsive when it comes to contributions from downstream, and like 90% of the devenv repo is plain ol' Nix code written in idioms that are common in the community.

I mention it because my team has hit some papercuts as well, but I've been really happy with how easy it's proven to address them. :)


I agree, the fix is in the main branch, they just haven't released it yet. It's just that the existing released versions just don't allow customizing the Go version because of some hardcoded assumptions in the Nix code. So I'll wait for the released version. I did say these are papercuts, not showstoppers ;-)


I'd be interested in anybody who has tried https://devenv.sh/ and https://www.jetify.com/devbox and chosen one over the other. Tried devbox which has been good, but not devenv.



Thanks that is useful


I briefly tried devenv and I find it much easier to use than raw nix, but I also had issues with my nix install on macos (using both the official and the DS installer). It worked well on my linux machine.

Today though mise has so many other great features, I would still choose it over devenv or devbox.


I've used Devbox for two years for my projects and any work I've done for my clients.

I'm curious what some of the mise features are you like.


I like Devbox and I'm familiar with its features, so I'll just mention the extras that mise has.

The ubi backend means I can use nearly any binary published on GitHub without needing to worry about a flake or nixpkgs. Just get the binary directly from the author. Same for many of the other backends https://mise.jdx.dev/dev-tools/backends/.

Tasks are very powerful: they can use dependencies, flags, args, completions, and watches. They can also be defined either as strings in the config, or point to shell scripts, or even python/node/whatever scripts.
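A sketch of what that can look like in `mise.toml` (the task names and commands here are made up):

```toml
[tasks.build]
run = "npm run build"

[tasks.test]
depends = ["build"]   # build runs first
run = "npm test"

[tasks.deploy]
run = "./scripts/deploy.sh"   # or point at a script file instead of an inline string
```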

The shebang trick is very fun for writing portable shell scripts https://mise.jdx.dev/tips-and-tricks.html#shebang
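From memory of the docs (check the link above for the exact form), the trick looks something like this: the shebang asks mise to install and run the interpreter, so the script is portable to machines that only have mise.

```javascript
#!/usr/bin/env -S mise x node@lts -- node
// The rest of the file is plain JavaScript; mise installs node if needed
// before handing the script to it.
console.log("hello from a mise shebang script");
```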

The fact that mise doesn't depend on nix is both a blessing and a curse. I have fewer tools available and I don't have the power of a package manager, but on the flip side I don't need to deal with the idiosyncrasies of nix.


Thank you, TIL about ubi! My brain is now compiling a list of places where this will replace either a shell script or flake for my projects :).

Tasks sound similar to process-compose, which is bundled into Devbox. I'll have to read up more on tasks, though, to see if that's an accurate assessment.

Nix is definitely a double-edged sword... One thing I like about Devbox is that it keeps Nix mostly (!) out of sight, unless I want a binary from a GitHub release :).


FYI, ubi is just one of the backends. Besides the many language-specific backends (cargo, go, ...), it has three backends that support generic packages: asdf, vfox, and its own (core). Also, default backends are defined for many packages, so you can let mise choose one for you.


My experience with such tools is that when you do everything, you don't do anything right.

The chances that it doesn't badly leak the underlying abstraction, and create trouble figuring out what happened when it invariably fails, are zero.

Because most people barely know in depth the packaging challenges for one ecosystem. In Python there are maybe a dozen in the world that have a good hang of *all* of it.

And the devs of this tool would need to know so many.

Of course they don't, they wrap existing tools, which implies exactly what I said above.


I wonder if you misunderstood what mise is based on your mention of "packaging challenges". mise deals with language runtimes and dev tools—it doesn't manage dependencies or package anything.

I often hear suspicion about mise for this reason from people who haven't used it. I suppose it's not surprising. That said, I have spent over a decade in the developer productivity space as well as hundreds if not thousands of hours working on mise in the last 2 years—if there is someone who can build this, I'm probably the right guy for the job.

Particularly with dev tools, it's long been the case that mise has solved this problem. Improvements are continuing to be made with things like improving supply chain security and ergonomics with python—though it's not like the python community itself has its DX figured out.

Of course I'm still fixing bugs pretty regularly and that probably won't ever change but there are hundreds of thousands of developers out there using mise (kind of a guess, but I'm pretty sure) and it's working great for them. It's in the top #100 tools in homebrew now: https://formulae.brew.sh/analytics/install-on-request/30d/

This definitely isn't some scrappy project—I've devoted much of my life to this problem, and I think all evidence points to it being a resounding success.


I have to say, I've been reading your replies here (and your big reply in the just thread) and I'm super super impressed with your dedication to this project. I can tell just in how you write, the volume of responses in this thread, and your tone that this is a real passion project and you're deeply serious about this. I love seeing this. Thanks for your dedication!

(And now I'm off to go try mise....)


I really appreciate that. It's definitely something I get a ton of satisfaction about of building. Drop by our discord and let me know how it goes!


In all scripting languages, packaging problems mostly stem from bootstrapping.

Nvm shims break, Python paths get confused, gems get installed against the wrong Ruby interpreter, etc.

Maybe you managed the impossible.

But in 20 years of python I've seen only one tool doing bootstrapping in the right direction, all the other ones have failed.

So I'm suspicious of something that does multiple languages.

In the case of mise, it delegates this complexity to the user. E.g., for Python, you have to know the config choices and choose the right backend, like asdf, pyenv, or indygreg.

Then you better understand the consequences of that choice.

To me, that's already a leak of the abstraction.


> But in 20 years of python I've seen only one tool doing bootstrapping in the right direction, all the other ones have failed.

Which tool is that?


Specifically for Python, right now, uv is showing the most promise for bootstrapping (and mise has a uv backend, btw).

They have carefully avoided 90% of the mistakes of all other tools, and I have a long list. They don't live in a bubble.

uv still has problems (like the indygreg builds not shipping headers), and it's still below v1, so I can't recommend using it yet. But I've been testing it in different contexts for months now, and it's doing exceptionally well.

I usually take a year of testing before recommending a tool, because I need to see it in action in Windows shops, in Unix shops, with beginners, with non-coders, with startups, in corporate settings, with greybeards, etc. Python's versatility means the user base is extremely diverse and you find it in the weirdest envs.

I also interviewed Charlie Marsh:

https://www.bitecode.dev/p/charlie-marsh-on-astral-uv-and-th...

and it gave me a lot of confidence that he is actually not trying to do everything at once, but quite the opposite, nail to the death very specific problems.

I've tried everything in the Python world, with a good hundred of companies' envs and about a thousand people in trainings. Pyenv, poetry, pipenv, pdm, nix, pyflow, rye, you name it.

The number of ways they can fail is astonishing.

The uv team quickly identifies friction and fixes it at astonishing speed. They just announced they took ownership of the WHOLE python-build-standalone project, and they contribute its improvements upstream to CPython.

Their dedication to a good doc and great error messages is quite amazing as well.

I'm impressed.


I think mise gets a lot right. I use it for environment variables, Python virtualenv creation and activation, and task scripts.

I've been a software developer for over twenty years and am usually reluctant to use new tools. But mise has been a fantastic addition to my dev workflow.


I have two problems with Mise: There isn't a page with the most common commands I might want to run, and whenever I try it, some Python imports mysteriously fail. Then I disable it, and everything is fine again.

I might be motivated to persevere if I only had one of the above problems, but with both of them together, it's too much of a hassle.


there's a list with common commands you might want to run: https://mise.jdx.dev/walkthrough.html

if you post an issue/discussion about the python thing I'd love to investigate it a bit


Oh, is that the page that resulted from the issue I opened? It looks good, thanks!

I'll open an issue for the Python thing, I can reproduce it reliably.


it's very possible, I don't recall when/why I added it


If you want to post a discussion on the mise GH repo, I'd be happy to help. Feel free to @ me in that discussion so I don't miss it.


I posted an issue about the documentation and I see that it was added, so thanks! If you want, I can post more issues of the type "as a new user of mise, I expect to be able to do _______ but I can't see how in the docs". I'll also post about my Python issue when I reproduce it next.


> Because most people barely know in depth the packaging challenges for one ecosystem.

I think you’re greatly overstating the problem, at least insofar as it relates to this tool.

For example, Python has its prefix (where packages are installed) baked into its installation. pip, ux, poetry — whatever — are going to install python packages there.

This tool is unconcerned with package installation — it is only concerned with getting the interpreters installed and managing which one is on your $PATH.

There’s literally nothing to leak.

And regarding “wrapping existing tools” as proof of some shortcoming in mise (and/or similar) — if they reinvented the wheel, that’s where things could leak. And separation of concerns is a good thing.


Bootstrapping Python incorrectly is the main source of packaging problems.

There is a lot to leak. For example, if you install a compiled extension that ships no wheel, you'll need the headers, but some Python distributions don't provide them.

Then of course, on Windows, is your Python registered with the py launcher? How does it interact with an existing Anaconda installation? On Linux, with an existing system installation? Is the shim (or PATH update, for mise) affecting /bin/env? How does that work with the .pyw association?

Then, what does it imply for venv creation and activation? And for using -m? And .pth files? User site-packages?

All those questions are linked to bootstrapping.

What happens then is that pip install fails or imports break, but the user has no idea it's related to their broken Python setup, because most people don't know how it works.

And now bootstrapping has broken packaging.

This is where most "python packaging sucks" complaints are born: from unknowingly botching the bootstrapping.

And the vast majority of tools to do it suck. E.g., shims are all kinds of broken (pyenv and rye come to mind).

To succeed, mise would have to know all that, pick the right tool, make a perfect abstraction, create fantastic error reporting, and test all those cases in CI on all platforms.

It's possible, but I know only one project that does this almost correctly. And even this one has a long way to go.

Saying "there is literally nothing to leak" is actually perfectly making my point most people don't know the topic deeply enough to know what they get into.

Then of courses there are all the modes of failure. This article has a good bit about that:

https://www.bitecode.dev/p/why-not-tell-people-to-simply-use

It's cover more than mise's scope, but the idea is there.


I'm trying to use rye on Windows, but it doesn't want to use the normal installed Python version, only the versions it downloads itself, and it cannot update them easily because it pins an old one to run itself.

So far I've wasted more time than I saved.


It might be useful for you to try the tool before complaining about it

I’ve used mise for years. It works perfectly well. I use it for Go, Node, Deno, Java, Python, Ruby, and Rust.


Your experience with "such tools" covers experience with Mise, or are you just making assumptions?

My experience with Mise is that it's a great tool.


>My experience with such tools is that when you do everything, you don't do anything right.

Does it matter? Even dedicated tools don't do everything right.

As long as it does the things one wants to do well enough, and offers a cohesive interface to them...


Example: I tried to convince our deployment system to deploy patch releases to simplify our hotfix solution. The code was in Node and the deployment tool in Python. I had to thread the needle to come up with a semver pattern that was legal in both Python and NodeJS. Not impossible but annoying. (Then discovered our deployment tool wasn’t using the semver parser in Python and it still didn’t work. Goddamnit.)


> My experience with such tools is that when you do everything, you don't do anything right.

As a long time Emacs user, I agree :-)


Exactly. A task runner for Node.js is already complex enough. And it's not just a task runner itself, but rather an ecosystem of things working together. Now you tell me this can somehow handle Node.js, Python and others. I'll need to see how it actually works in the real world to believe the claim.


I'm not "a developer" so I never got the use case of tools like these. Instead I just use the stuff they mention (asdf, make).

I use Asdf to manage versions of all programs in a monorepo. Works great (well, actually asdf's UX is terrible, but it works reliably, and the plugin design is great).

For development, I don't ever load environment variables into my current shell session. I run a script or Makefile which loads any necessary variables, does a thing, and then exits. It would be a nightmare to have to constantly check if my current shell session had X variable in it.

I use Make for repeatable small commands that will vary per directory, or for simple parallelizing or ordered execution of commands. I have a big one that handles Helm installs, and a few more for Terraform, Packer, asdf, etc. I also use them for deployments in hierarchical environment directories, where environment variables are loaded from parent directories. I love that Make has all the features it has, because I always find myself eventually reaching for something you don't find in "just a task runner", and it makes my life easier.

I use shell scripts when I need to make a composeable tool that'll be slightly longer or more complicated than a Make target should be. I have saved so much time and effort writing these tools in shell rather than Python or something, where there is inevitably way more bugs and dependencies. The only time I have needed to use something more complex than shell is when I have a lot of APIs to deal with that also deal in JSON; if it's a lot of complexity it's better than curl/jq, but if it's only one small task, curl/jq is better.

The end result works great. The whole environment just needs asdf installed (from Homebrew, for example). With stock Make and the stock Bash v3, I can manage everything automatically, everything's version-pinned and automated, all variables get loaded at runtime as needed, and the whole thing can be grokked by just reading some simple Makefiles.

The only thing I want to fix now is to get rid of the superfluous Makefiles from directories (they're all symlinked back to one Makefile). It's a pain to re-symlink them all when I change directory structure. Probably should just write a script for it...
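A minimal sketch of such a re-symlinking script (assumptions: GNU realpath from coreutils is available, and the canonical Makefile sits at the tree root):

```shell
#!/usr/bin/env bash
# relink_makefiles: point every symlinked Makefile under $1 back at the
# canonical $1/Makefile, using relative links so the tree stays portable.
# The layout (root Makefile, symlinks below it) is a guess at this setup.
relink_makefiles() {
  local root="$1" link dir rel
  while IFS= read -r link; do
    dir="$(dirname "$link")"
    rel="$(realpath --relative-to="$dir" "$root/Makefile")"
    ln -sfn "$rel" "$link"   # -f: replace existing link, -n: don't deref
  done < <(find "$root" -mindepth 2 -name Makefile -type l)
}
```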


I use Mise as a drop-in replacement for asdf. It's fully backwards compatible with .tool-versions and other config files, and unlike asdf it uses a PATH-based approach instead of shims.


I'm alright with the loading time for shims, because I find the constantly changing path variable a bit jarring. Not a deal breaker, I must admit. But when I try to use shims instead of the shell plugin, I lose the environment manager. I wonder if there is a way to activate just the environment manager.


All the features are opt-in. I started using mise because I wanted something like asdf only without the bad UX, and mise can use asdf plugins.

For env vars, you don't need to load them into your shell if you don't want to. When you run a task, mise will make sure the env vars in your config are set, so that's not something you need to worry about.

I still use shell scripts like you describe; mise just supercharges them a bit. When I need to make sure my teammates have the tools a script uses (like jq) installed, mise ensures they are installed before running the command, as long as you declare them in your tools list.
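For example, a hypothetical `mise.toml` (tool version and task name made up):

```toml
[tools]
jq = "latest"

[tasks.report]
description = "Summarize test failures"
run = "jq '.failures' results.json"
```

Running `mise run report` would then install jq first if it's missing.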

If your setup works for you, that's great.


I've been using asdf for years. So far it works great, but sometimes the shims break, which kinda annoys me.

Was it worth the switch?


Definitely, the CLI is more intuitive, and using paths instead of shims makes it faster and more reliable.


if you use asdf you can drop mise right in and it'll work the same but faster and with better supply-chain security. people have been doing this for almost 2 years and mise fits that use-case perfectly.

You don't have to touch the env vars and tasks stuff.


I think you should give `mise` a chance. I believe it can help improve your workflow.

It's better at managing tools than `asdf`, very close to `direnv` and superior to `make` as a task runner (more verbose but much easier to understand). One of the advantages is that `mise` tasks can be standalone files (you can even write file tasks in python if you prefer, see https://mise.jdx.dev/tasks/file-tasks.html)
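For example, a hypothetical Python file task (mise reads the `#MISE` comment header for metadata; the file path and task body here are made up):

```python
#!/usr/bin/env python3
#MISE description="Clean build artifacts"
# Hypothetical mise file task (would live at e.g. mise-tasks/clean);
# the body is just plain Python, made executable.
import pathlib
import shutil


def clean(root: pathlib.Path) -> list[str]:
    """Remove common build-artifact dirs under root; return what was removed."""
    removed = []
    for name in ("build", "dist", "__pycache__"):
        path = root / name
        if path.is_dir():
            shutil.rmtree(path)
            removed.append(name)
    return removed


if __name__ == "__main__":
    for name in clean(pathlib.Path(".")):
        print(f"removed {name}/")
```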


i have a similar tree-containing-symlinks-to-one-thing, and i do it by symlinking each x to ../x; only the root x (of any tree) is a real file (or missing, if it lives on some other device). Thus the structure is still tar/archivable.

Of course you can do things like these too:

$ MAKEFLAGS="-f /that/root/makefile" make

or (rude)

$ alias make="make -f /that/rooty/makefile"

but beware that adding another -f somemakefile will load both specified makefiles.

Apart from that, my biggest grievance with make is that it cannot handle spaces in names. By design.


I'd really like a better idea on Windows support for mise. Especially for development using WSL. It seems like it probably works there, but the docs are pretty slim.

> Note that Windows support is very minimal for now.

https://mise.jdx.dev/installing-mise.html#windows


Just using mise as a drop-in asdf replacement has been delightful

Same functionality but much snappier with better ux


I read the project's Readme and all I'm left with is "why?".

I use Devbox[1] and get access to the entire Nix ecosystem, done.

[1] https://github.com/jetify-com/devbox


There are multiple tools in this space - even devbox has a direct competitor. With mise, I'm not expecting an isolated environment, but rather a meta package manager. I can install many tools directly into my user shell. If I wanted that with nix, I would be using a nix user install rather than devbox.

You just choose what you like most. And mise seems to have a large fanbase.


Mise just works and takes maybe 1-2 minutes to explain and have a tool running on a team members machine in my experience. I couldn't get into nix, too complicated for me and I've yet to find a resource that makes it click for me. After that I would feel comfortable encouraging the rest of the team to use it but as mise is so easy and even more popular, I will probably just stay using this. I will look into devbox though, thank you for the recommendation!


I hear you, Nix has been nothing but a waste of time for me every time I tried to do anything with it.

Devbox abstracts it and the only time I had to do a nix-specific thing was when I needed a flake to install a CLI from a GitHub release.


Lately I was thinking "what's the best way to integrate/use a task runner like mise in a github actions workflow".

Looking at the workflow files in the mise repository it seems like they gave up and just put in a few run: mise steps (having to rewrite / unable to use dependencies etc).

I think it would be better if you could generate the workflow files but I haven't found such a project yet.


there is a `mise generate github-action` command which uses https://github.com/jdx/mise-action

not sure I understand what you mean by "mise steps (having to rewrite / unable to use dependencies etc)."


It's a simple tool that makes my life easier for more than a year: thanks to the creator and contributors :)


The name is absolutely perfect.


I'm glad people doing frontend are working towards creating a whole toolchain that does everything, like biomejs.dev.


I use direnv but miss that it can't initialize bash completions as I enter a directory. Can mise do that?


there isn't a hook for executing shell source, but it would be possible to add I think. The current hooks already execute inside of a shell function so there would just need to be a way to declare that you want a hook to run inside the shell, maybe like this:

    [hooks.enter]
    shell = true
    run = ". completions/mycli.sh"
I made an issue if you'd like to track: https://github.com/jdx/mise/issues/3412


Very helpful! If this can be made to work I may switch from direnv


you should see this in tomorrow's release, or you can install with cargo from main branch now: https://github.com/jdx/mise/pull/3414


You just mentioned the only thing that bugs me with mise: the frequent (sometimes 3x per day) releases.

I say this only because I’m one of the maintainers of the MacPorts port for mise, and while I’ve automated things, I have had more than one port update be outdated before it gets merged because of these releases.

I’ve automated the PR submission steps (not with GHA, but with a shell script I run on my Mac), but after discussion with the gentleman who usually merges those PRs, we decided that we'll probably do them every 2 or 3 days.


I've settled into a mostly daily cadence. If there's a day with more than 1 it's because there is a relatively serious bug that I don't want anyone to need to wait around for—those are the releases you should actually pay attention to so you're not missing something important. It's true I used to do more but I've dialed this back in recent weeks.

That said, it's a selfish strategy that benefits me more than anyone. It ultimately means I don't spend as much time fixing bugs since resolutions go out quicker and users get to test them (often whether they want to or not) while the issue is fresh in my head and I can quickly make an adjustment if needed.

I know especially package maintainers such as yourself would prefer I have nightlies for this purpose and then less frequent releases but that's more work for me and means users generally will be testing changes with a bigger delay.

Users may also think they want this but I actually think it wouldn't serve their interests—it'd mean I spend less time actually improving mise and more time with logistics. I'm also terrible at release notes and commit messages and I'm not sure it's an area I want to improve in simply because that would come at the cost of doing other things. I also don't like doing that stuff and this is (ostensibly) a hobby after all.

That said, I'd really appreciate if you came by our discord and had a conversation about this with me. While those are my reasons for the way things are I'm also certainly not opposed to change. With homebrew I have a release hook to automate this process and perhaps we could do something similar for MacPorts. We could even automate every N releases or something if you think that would be better.


I’ve just started a new job (where I will be pitching mise to replace asdf and maybe a few other things because of your recent push on supply chain security), so it’ll be a bit of time before I can do this. I’ve thought about how to automate the process further and may do so in the holiday break week.

If I get as far as making a GitHub action for this, I will absolutely discuss with you because it would be very good to make this work as quickly as possible.


Direnv can do this, but currently only for bash shells and maybe only when using nix flakes. There is an open issue to get zsh working, but I just stick to bash TBH


I didn't know this. I know this ticket is open https://github.com/direnv/direnv/issues/443 but it is not clear to me if it is working for bash


I don't really understand why people use direnv, but you could easily replace it with a shell script that could load anything you want when entering a directory.

You basically just write a script that hooks into your shell (using your shell's existing hooks, like PROMPT_COMMAND for bash) and have it load a shell script in any directory you enter.

Obviously this is a security risk, as you could enter a directory controlled by a hacker. This is why (presumably) they only deal with exported variables, though even that's dangerous.
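A bare-bones sketch of that idea for bash (the per-directory file name `.env.sh` is a made-up convention; note there are no security checks here, which is exactly what direnv adds on top, along with unloading):

```shell
# Minimal "source a script when entering a directory" hook for bash.
# Hypothetical convention: each directory's file is named .env.sh.
_auto_env() {
  if [ "$PWD" != "${_AUTO_ENV_LAST_DIR:-}" ]; then
    _AUTO_ENV_LAST_DIR="$PWD"
    if [ -f "$PWD/.env.sh" ]; then
      # shellcheck disable=SC1091
      . "$PWD/.env.sh"
    fi
  fi
}
# Run the hook before every prompt, preserving any existing PROMPT_COMMAND.
PROMPT_COMMAND="_auto_env${PROMPT_COMMAND:+;$PROMPT_COMMAND}"
```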


direnv unloads the directory's environment as soon as I leave it, so I don't have to worry about the provenance of any one given shell instance. All I have to care about is its current working dir.


...or you could use a tool somebody already wrote, which even includes a security mechanism.


I’ve been using mise at work to manage go versions, it’s been good although a little confusing and undocumented at times. I’ve never used asdf or similar before so there’s probably just a bit of learning curve but this thread is encouraging me to try it more.


I think if you only use node, then just nvm is fine. This is more for multi language ppl?


I've ejected most of my pain into a black hole, but as the person who wrote the Ansible scripts to bring up dev machines, I had nothing but loathing for nvm. It somehow always managed to find bold new ways to misbehave & crud things up. It was incredibly unpleasant to work with.

Someone else suggested we switch to asdf & what a rapid & happy migration that was. Good riddance nvm.


nvm is unbearably slow; this is really fast. 200 to 300 ms of latency whenever you open a terminal is noticeable, and I was getting far more than that sometimes, up to 1 s. It's really, really bad.


This allowed me to switch from pyenv, node, and everything else to something really neat and simple. The only thing that caught me by surprise was `mise trust`, but then the CLI helped me understand it. Thanks jdxcode!


I try not to be annoying as best I can, but in the `mise trust` case I opted for tighter security even if it meant a bit of friction.


As feedback: having all the examples in that gif on the readme centered around node almost made me look away without a second thought.


This is nice! Jetbrains tools sometimes need a bit more configuration with it, otherwise it's very seamless.


I find the JetBrains integration spotty: I have to run eval "$(mise activate)" in every terminal session for it to pick up the env. I have vague recollections of also having issues running executables from the IDE, as it was missing the env details.

The SDK discovery works great though :D


I believe, and I could be wrong about this, that JetBrains only picks up changes to your .shell_rc files on a restart.

What I learned instead was to stop using the built in terminal in WebStorm. If WebStorm crashes you’re fucked. Objectively, it never did that a lot and does less so recently, but not never.

WebStorm likes to pick up file system changes when you give it focus, so manipulations you do in the builtin terminal don't necessarily trigger that refresh.


There is a plugin that works well to automatically configure some of the SDKs https://plugins.jetbrains.com/plugin/24904-mise


lmk if there might be a way I can help with that, I don't think anyone's posted an issue about jetbrains issues—not for a long time at least


Big fan of mise, moved over from asdf and never looked back! kudos jdx


Finally I can retire makefiles in my python projects


Just curious: what do you use make for in python projects?


Not the parent, but I use it for projects (including Python projects) to run tests, code generation etc. when I can’t use mise.


If not makefiles, then what do you use for a project that has Python for the backend and JavaScript for the frontend? Does everyone learn all the tools, or do you just provide a 'make lint' that works in all codebases?


Do what I can do in npm scripts


As a task runner. What do you use?


Just.

It is a less heavyweight make. Similar syntax and behavior, no .PHONY, a couple of helper functions and behaviors. It is designed as a task runner rather than a build system.
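A small example justfile (recipe names made up):

```
# justfile: recipes are tasks, not file targets, so no .PHONY dance
default: lint test

lint:
    ruff check .

test:
    pytest -q
```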


I’m not GP but I use poetry. Can’t even imagine working with Python without it.

https://python-poetry.org/docs/cli/


I can’t imagine going back to poetry after using uv.

uv is like 10x faster than poetry for installs and dependency resolution.


Poetry is a package manager though, there's Poe the Poet[0] for a reason. How do you use poetry to run custom tasks?

[0]: https://poethepoet.natn.io/poetry_plugin.html


You might enjoy https://poethepoet.natn.io/ it makes tasks easier, and you don't even need poetry to benefit from it (I use it with UV these days)


Justfile[0] was a more familiar makefiles replacement for me

[0]: https://github.com/casey/just


Have enjoyed replacing makefiles with https://taskfile.dev/ which looks like it could be more powerful due to being able to detect changes etc. But glad Just has been good.


What is the main difference with just?


I wrote a detailed comment yesterday about this: https://news.ycombinator.com/item?id=42353634

tl;dr: mise has more functionality like parallel tasks, watching for changes, and comprehensive argument parsing support (including custom autocomplete)

the biggest difference is the syntax. just is more concise and you really need to learn the syntax in order to use it. In mise things are more verbose but I feel easier to read for someone unfamiliar with it.
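For example, a task might look like this in mise.toml (names and commands are hypothetical):

```toml
[tasks.build]
description = "Compile the frontend"
run = "npm run build"
sources = ["src/**/*.ts"]   # only re-run when sources change
outputs = ["dist/**"]
depends = ["lint"]

[tasks.lint]
run = "eslint src/"
```

Verbose, but each field reads as plain English even if you've never seen mise before.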


Ignoring the task runner stuff, Mise is great just as a "better asdf". It allows many more package sources (including pipx, go and cargo), and it puts the actual executables on PATH -- not shims like asdf, which has a tendency to break stuff.


All that complexity, for what?


Having to manage Python / Ruby / Node etc. versions by hand is less complex?


For me, this competes with asdf.

Otherwise, it would compete with eg nvm, rvm.

I haven't managed versions "by hand" for over a decade.


mise is asdf written with security and performance in mind (I believe it started with performance, but has become much more obsessed, in a very good way, with security).


Well I'm tired of all this bullshit anyways. I don't want to manage multiple versions of the same language on my system in the first place, but I guess it's useful for those who need it.


Why do JavaScript programmers call ini files "environment variables" ?


They don’t.

I’m guessing you navigated to the https://mise.jdx.dev/environments.html page and saw the TOML syntax (which looks an awful lot like INI), and confused yourself.

Mise (like a lot of software) uses TOML as the format for its config files (as opposed to something like JSON). Mise reads that config, to automatically export environment variables on a per directory tree basis.

When the docs refer to environment variables, they very literally do mean environment variables. The values of which are taken from a format that resembles INI, as you have noticed.
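For example (hypothetical values):

```toml
# mise.toml -- TOML config; mise exports these as real environment
# variables whenever you cd into this directory tree
[env]
NODE_ENV = "development"
DATABASE_URL = "postgres://localhost/dev"
```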


Environment variables are key/value pairs that are passed from parent process to child process when the child is first created. A convention has also arisen where these can be put into environment files so the overarching system (for example, docker) can load them from a consistent place (without having to add them to your shell and risk leakage across projects) and then pass them down to child processes in typical environment variable fashion.

Also there are no sections like there are in ini files.
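The parent-to-child mechanics are easy to demonstrate (Python here, but any language works the same way):

```python
# Demonstrate that environment variables flow parent -> child at spawn time.
import os
import subprocess
import sys

env = os.environ.copy()
env["GREETING"] = "hello from the parent"

# The child sees GREETING only because we passed it in its environment.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['GREETING'])"],
    env=env,
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # -> hello from the parent
```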


Tbh just having to install node ensures I'll never use this. At least use something like python that's already on my system!


node is used in a lot of the examples (because it's popular) but mise is written in rust and certainly doesn't require node


Unrelated, but triggered by the remark: I can’t wait to meet the person on HN who has the take of “if it’s not in COBOL, I won’t touch it”. I could suggest a lisp, but I think those folks are actually somewhat plentifully here?


What gave you the idea that it needs Node?


Poor reading comprehension; I read "The following shows using mise to install different versions of node." as "The following shows using node to install different versions of mise."


What are you talking about? You don't need to install node for this. It's a self-contained rust binary.


You don't have to install node, or Python. mise is written in rust and distributed as a binary.


This is already a solved problem with Nix shell (except the task runner part). I don't understand why there are any other alternatives still being developed. Nix supports more packages than any other solution, it's battle-tested and cross-platform (I didn't try it on Windows, but on macOS it works fine). And it's more reproducible than any other package manager.


I started writing mise in a bout of frustration after trying to use nix


Nix UX really sucks I agree with that. But it has a very robust core model and is reproducible from the bottom up. Tools like asdf, renv etc. just provide you some binaries. If you need some system libraries installed they don't help with that for example.

Can one provide a reproducible dev environment that uses a tool that is not yet in the mise registry? Or does one need to wait for it to be added to the registry? Also, if I want to provide a Python runtime that is compiled slightly differently, can I do that? Or does it have to be distributed as a precompiled binary?


> that uses a tool that is not yet in mise registry

Yes, you can directly get tools from npm/pypi/cargo/github-releases/asdf-plugins/vfox-plugins without anyone touching the mise registry. The registry is just a convenient index of short names e.g. "fzf@0.56.3" maps to ubi:junegunn/fzf@0.56.3 which will download the appropriate arch binary from the v0.56.3 junegunn/fzf GitHub release.

> if I want to provide a python runtime that is compiled slightly differently

The default uses precompiled binaries, but with one setting it can use python-build/pyenv under the hood, then all the pyenv env vars can be used to configure the build process.
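Concretely, in mise.toml (versions here are just examples):

```toml
[tools]
# short name resolved via the registry
fzf = "0.56.3"
# explicit backends, no registry entry needed
"ubi:junegunn/fzf" = "0.56.3"
"cargo:ripgrep" = "14.1.0"
"npm:prettier" = "3.3.3"
```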


Have you tried devenv? What do you think?


I think the entire concept of nix is a broken model not fixable by docs and better DX. Precompiled, portable binaries are the way to go which is what mise is built on. Trying to maintain this separate build chain where everything is compiled in a sandbox gives your system a "split-brain" problem where you have the "nix world" and the "macos (or whatever) world". Ultimately, this just causes problems. Of course I'm ignoring NixOS but that's a sledgehammer for this supposed "problem" nix is trying to solve in the first place.

mise is for the 90% of developers that just want things to be fast and work and don't care about the nuts and bolts.


Nothing in nix says you have to compile something from source, just that the resulting artifact needs to be reproducible hermetically. You can download any random blob from the internet as a nix derivation, as long as you tell nix what the resulting hash should be after you download it. Sure, it might have unmet runtime dependencies, but that's orthogonal.

What you are really butting up against is that the nix store is a bit of a split-brained runtime environment. It's not easy to e.g. `gem install` to your system while running a nix-managed ruby. This has nothing to do with the binaries (well... sometimes it does, because nix will patch paths to point to the readonly store, but again that's orthogonal).


Don't you kinda need to separate the worlds to avoid borking your system when you update? Eg macOS decides to ship some customizations in its curl,[1] and now you need your own curl because Apple's customization is breaking your project?

[1] https://daniel.haxx.se/blog/2024/03/08/the-apple-curl-securi...


system updates causing breakages with mise tools would be news to me


I've been using nixpkgs on macOS (and without brew) for 3 years now and I'm not sure what kind of split-brain problem you're talking about. I also have no idea about the problems with multiple Anaconda and other Python builds that people complain about.


I don't get it. mise and nix binaries both go somewhere that gets added to PATH.


> Precompiled, portable binaries are the way to go which is what mise is built on.

And where are those mystery meat binaries supposed to come from? What do you do if the provided binaries aren't enough? (Wrong version, wrong build flags, what you want isn't even packaged, don't support your platform, etc, etc, etc.)

Binary package managers have been tried over and over, and never work out well.

> gives your system a "split-brain" problem where you have the "nix world" and the "macos (or whatever) world".

Yeah no, that's inherent as soon as you bring in any kind of secondary package manager. Including pyenv or mise or whatever else.


> And where are those mystery meat binaries supposed to come from?

the vendor


Right, because that never goes wrong.[0,1]

[0]: https://news.ycombinator.com/item?id=42351722

[1]: https://tukaani.org/xz-backdoor/


The xz example does not support your case. Not only was every downstream build infected until it was discovered, it also needed a distro-specific modification (to openssh in Debian and Fedora, IIRC) to work at all.


The xz backdoor relied on a discrepancy between the development repository and the released (source) artifact.

While skipping the released tarballs wouldn't have prevented the problem entirely, it would have made it much harder to hide.


If only they were actually portable. As it stands, mise is just another half-solution, and you can't solve the other half of the problem by ignoring it.


thank you!


I really want nix to succeed, but it has terrible UX and documentation. It also doesn't help that the community is still fighting over whether flakes should be a thing.


Every time I’ve used a declarative system at work I either eventually become one of the experts and we all have lines outside our door of people who just don’t get it, or I replace it with something imperative so we can all get some fucking peace.

Ant was by far the most stressful. I had to cyberstalk James Duncan Davidson to understand what he was thinking. The mental model for the tool wasn’t in the docs. It was in forum posts spread across three+ different websites. And it was slightly insane. First writer wins broke everyone’s brains across three jobs before someone helped me kill it and replace it with something else.

It’s also a cornerstone of my thesis: never trust software someone says they wrote on an airplane. That’s not enough time to create a good solution, and any decision you make while experiencing altitude sickness is sketchy. (Prior to 2010, airline passengers were experiencing 8000 ft atmosphere on every flight. One of the selling points of the 787 was 5000 ft equivalent pressure)


Is it a frequent experience for you to have to disregard a piece of otherwise appealing software because the developer claims to have written it on a plane?


It’s not that common for one to stick. But it is common for people to bitch about using them a lot more than average for tools.


That… was not the thesis I was expecting


> It also doesn't help that the community is still fighting over whether flakes should be a thing.

Could you elaborate on what the debate is? Haven't heard of this before!


I agree with that. But one could attempt to solve it by wrapping Nix with a better UX instead of building something from scratch.


> This is already a solved problem with Nix shell (except the task runner part).

devenv, a Nix+direnv-based solution, has a pretty cool task runner thing, plus service management.

> I don't understand why there are any other alternatives still being developed.

I love Nix and I believe it's a great choice for many teams, for the same use cases as mise. Nix's paradigm is the future. But Nix's defects are also real and obvious enough if you use it for long. I can understand why someone might see value in trying a fresh new effort, or an approach that asks less commitment.


Docs are terrible, writing new packages is difficult, etc. It may be the right technical direction but the execution is lackluster


Reading nixpkgs is a pretty amazing source of info, better than any docs. But I actually think the docs are pretty fantastic. The manual is solid, the CLI help text is pretty excellent. I don't get it when people say the docs are bad.


I’ve tried to use NixOS for my homelab and ran into undocumented errors.


Because some people may not want nix. Mise is, in my experience, much easier to get into and start using.


> This is already a solved problem with Nix shell (except the task runner part)

The task runner part is also solved in Nix. See

https://github.com/Platonic-Systems/process-compose-flake

and

https://github.com/juspay/services-flake


Because Nix has dogshit UX and the team seem to be completely oblivious to it regardless of how many times it's brought up.


    $ nix-shell -p nodejs_20
    [nix-shell:~]$ node --version
    v20.18.1
Oh, the UX horror!


> and the team seem to be completely oblivious to it regardless of how many times it's brought up.


Well, "dogshit UX" isn't exactly actionable, is it?


Oh it absolutely is!


Care to give some examples?


Not OP but you don’t have to look far when it comes to Nix. Here’s a couple of the more annoying ones:

Example 1:

Updating dependencies that are outside of nixpkgs is not a one-command ordeal, especially if you're doing something like updating the commit sha of a packaged release you're targeting. I think there's no reason they couldn't have some clean way of writing rules to automate updating non-nixpkgs packages (why do I have to do this dumb nix-prefetch-url thing myself to compute some hash?).

If I am selling nix to end users as a system package management solution, I'm comparing it to tools like brew. As long as you stay in nixpkgs it's fine, but as soon as you're out of that (and almost everyone is going to have at least one package that is either not in nixpkgs or can't use the nixpkgs one for some reason), maintenance is no longer as simple as brew update / nix flake update.

Example 2:

Nix just doesn’t have very good debugging tools. It reminds me a lot of terraform. Yes, there is a REPL, but the simple task of breakpointing and seeing the value of data structures is not really straightforward in Nix. If you want the language to be approachable, you need to make it dead simple to immediately spit out its state in a way that an engineer knows how to work with. I would not say the process of doing so in Nix is dead simple.


Use it as a signal to start an investigation into how many agree with that sentiment, try to measure the resulting adoption dropoff, and use those stats to push for a UX redesign.


This is a pretty ignorant stance. It implies that the issues would somehow be easy to address if they were just acknowledged. Are you aware of the efforts that have been put into this so far? Have you considered that achieving the vague improvements you are asking for, working on a groundbreaking technology while holding together a rapidly growing community, is just really hard?


Can you clarify how you arrived at the conclusion that I was implying fixing the UX is easy, or where you saw me asking for "vague improvements"?

While we're on the subject, I can assure you, Nix is far, far from groundbreaking tech.



