No specific reason. Mainly because he was one of the most productive and collaborative mathematicians of all time. We actually considered "Poisson" at some point but ended up going with Erdos.
Erdos is also widely considered the most prolific and productive mathematician of all time (in terms of publications and collaborations). Hopefully you can be as productive with Erdos :)
But productive with it in a different field from the person it's named after? That's weird. It seems disrespectful to him to name a product after him when its purpose is pretty much unrelated to his work.
When models edit the raw JSON behind a Jupyter notebook, they often mess up the cell structure by adding extra cells, misaligning code, or making bad edits. We fix this by giving the model the notebook in Jupytext format instead, which tends to make its edits cleaner and more accurate.
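For anyone curious what that looks like in practice, here's a rough sketch using the jupytext Python package (the file names are placeholders, and this isn't necessarily exactly how Erdos does it internally): the notebook is round-tripped through a plain-text cell format instead of having the model touch the raw JSON.

```python
import jupytext

# Load the notebook from its raw .ipynb (JSON) representation
nb = jupytext.read("analysis.ipynb")

# Serialize it as a plain-text script with "# %%" cell markers --
# this is the kind of representation a model can edit more reliably
text = jupytext.writes(nb, fmt="py:percent")

# ... the model edits `text` here ...

# Parse the edited text back into a notebook and write the .ipynb
edited_nb = jupytext.reads(text, fmt="py:percent")
jupytext.write(edited_nb, "analysis.ipynb")
```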
Yeah, we just added support for local models. As I mentioned in an earlier comment, if you have a local model with an OpenAI-compatible v1/chat/completions endpoint (most local models have this option), you can route Erdos to use it in the Erdos AI settings.
Yep — if you have a local model with an OpenAI-compatible v1/chat/completions endpoint (most local models have this option), you can route Erdos to use it in the Erdos AI settings.
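For reference, "OpenAI-compatible" just means the local server accepts the standard chat completions request shape, so something like the sketch below works against it (the localhost URL and model name are placeholders; servers like Ollama or llama.cpp expose a /v1 endpoint in this style). Erdos only needs the base URL in its settings; this is just to illustrate what the endpoint looks like.

```python
from openai import OpenAI

# Point the standard OpenAI client at a local server instead of api.openai.com.
# The URL and model name are placeholders for whatever your local server exposes.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Summarize this data frame for me."}],
)
print(response.choices[0].message.content)
```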
Thanks for the comment! As Will mentioned below, hopefully "Rao rules" will help with this. If not, we'll think about intuitive ways to allow the user to run individual segments of code before running/accepting all the changes.
The pricing covers ongoing access to the models. Users are not expected to have their own OpenAI/Anthropic API keys.
There isn't currently a free option that lets users bring their own models. If you're really interested in a feature like that, though, we'd be happy to chat. Feel free to reach out at jorgeguerra@lotas.ai