Cool idea! Though without looking closer, I can't tell whether "meme" refers to the technical or the colloquial meaning of meme.
Admittedly I don't know much about LLM optimization/configuration, so apologies if I'm asking dumb questions. Isn't needing to copy/paste that prompt in front of your queries a huge drag on net token efficiency? Wouldn't you need to do some hundreds or thousands of query translations just to break even? Maybe I don't understand what you've built.
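For concreteness, here's the back-of-envelope math I have in mind (all numbers invented): if the primer prompt costs P tokens once (say, via prompt caching) and each Memelang query saves S tokens over its SQL equivalent, you'd break even after roughly P / S queries.

```python
# Back-of-envelope break-even estimate for a pasted primer prompt.
# Every number here is a hypothetical placeholder, not a measurement.

PROMPT_TOKENS = 2000    # assumed one-time cost of the Memelang primer prompt
SQL_QUERY_TOKENS = 60   # assumed average tokens for an equivalent SQL query
MEME_QUERY_TOKENS = 40  # assumed average tokens for the Memelang query

savings_per_query = SQL_QUERY_TOKENS - MEME_QUERY_TOKENS
break_even_queries = PROMPT_TOKENS / savings_per_query

print(f"Break-even after ~{break_even_queries:.0f} queries")  # ~100 with these numbers
```

And if the prompt has to be resent with every query rather than cached, the overhead is paid per query, so you'd never break even unless the per-query savings exceed the prompt cost itself.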
Thank you. That script prompt is just for development and exploration. A production model would first need to be trained/fine-tuned on Memelang, which we're working on now. The math says we can deliver a model half the size of an equivalent model for SQL.