Published 2025-01-01
Thank you for trying out GLHF over the last few months! It's meant a lot to us to get so much support and outreach from everyone during the beta. We've been hard at work fixing bugs and improving reliability, and we're now launching billing. We're giving everyone who tried the site during the free beta $10 in credits, and alongside billing we've shipped a few new features and a new homepage!
We've ground down enough bugs and reliability issues, and Matt's bank account has finally run low enough, that we figure it's time to launch billing. We've got a brand new pricing page that explains how it all works, but TL;DR:
We use a prepaid credit system similar to the OpenAI API: you can either buy credits with one-time payments, or have your account balance auto-reload credits as you use them.
We've also upped everyone's API rate limits 4x!
As a slightly-belated 🎄 Shipmas gift, we're giving everyone who tried out the beta $10 in credits to keep using the site. You don't need to give us your credit card until the credits run out (although we're willing to take it if you want to). Happy holidays and happy new year!
After much demand, we finally shipped a Privacy Policy! We also have a new Terms of Service. TL;DR: for API users, we don't store any prompts or completions for longer than 14 days, so for privacy-centric use cases the API is a really good fit.
We also now support editing the system prompt in the UI! Head to the thread view and click the little gear icon in the top-right, and you'll be able to set custom system prompts per-conversation. System prompts are often much more powerful than user messages, and models can be steered more dramatically by using them. Also, some model finetunes expect specific system prompts and work much better if you use them. Check out the docs for your favorite model to read more!
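System prompts work the same way over the API: in an OpenAI-style chat request, the system prompt is just the first entry in the `messages` list. Here's a minimal sketch of building such a request body; the model name is a placeholder and nothing here is specific to our API beyond OpenAI-compatible conventions:

```python
import json

def build_chat_request(system_prompt: str, user_message: str, model: str) -> dict:
    """Build an OpenAI-style chat completion payload with a custom system prompt."""
    return {
        "model": model,
        "messages": [
            # The system message comes first and steers the whole conversation.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request(
    "You are a terse assistant. Answer in one sentence.",
    "What does a system prompt do?",
    model="example-model",  # placeholder, not a real model ID
)
print(json.dumps(payload, indent=2))
```

The same payload shape works whether you set the system prompt in the UI gear menu or send it yourself over the API.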
You've probably noticed the site becoming smoother and easier on the eyes over the last couple of months. That's no accident, and we'll keep working on it! We also launched support for two more models: QVQ, and Jamba 1.5 Large.
Now that we don't exclusively lose money, we've also shipped a better homepage. Check it out:
If you made it here, thank you so much for your support and being a part of our journey!
We're hard at work on more improvements, including:
- temperature, top_p, frequency_penalty, and presence_penalty are all parameters supported by the API, but not yet supported in the UI. We want to add support for them now that we've added system prompt support.

If you have any thoughts or feedback, please continue to reach out at [email protected]. We appreciate all the emails we've gotten so far!
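In the meantime, those sampling parameters can already be passed through the API in the usual OpenAI-style request body. A hedged sketch of merging them into a request (the model name is a placeholder, and only the four parameters named above are assumed):

```python
import json

# Parameters the API is described as supporting; anything else is rejected here.
SUPPORTED_SAMPLING = {"temperature", "top_p", "frequency_penalty", "presence_penalty"}

def add_sampling_params(payload: dict, **params) -> dict:
    """Return a copy of `payload` with recognized, non-None sampling params set."""
    out = dict(payload)
    for name, value in params.items():
        if name not in SUPPORTED_SAMPLING:
            raise ValueError(f"unknown sampling parameter: {name}")
        if value is not None:
            out[name] = value
    return out

body = add_sampling_params(
    {"model": "example-model", "messages": []},  # placeholder model name
    temperature=0.7,
    top_p=0.9,
    presence_penalty=None,  # unset parameters are simply omitted
)
print(json.dumps(body, indent=2))
```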
— Matt & Billy