MIT App Inventor

Updates to the Chatbot default model

Jun 23, 2025 | Jeff's Blog


Changes to the Chatbot Component

The area of Large Language Models (LLMs) is an evolving one, with new providers and models becoming available periodically and older ones being shut down.

The Chatbot component lets you select from several LLM providers and models. The available selection changes over time as we add support for talking to additional providers and models.

Chatbot uses a proxy service run by MIT that offers a consistent interface to MIT App Inventor apps and handles the nuances of talking to each of the various providers.
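
To illustrate the idea (this is a minimal sketch, not the actual proxy code), here is how a proxy can present one request shape to apps while hiding provider-specific details; all names in it are hypothetical.

```python
# Minimal sketch: one request shape from the app, provider-specific
# handling behind the proxy. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ChatRequest:
    provider: str   # e.g. "chatgpt"
    model: str      # e.g. "gpt-4o-mini", or blank for the default
    prompt: str

def handle_openai(req: ChatRequest) -> str:
    # Would call the OpenAI API here.
    return f"[openai/{req.model}] response to {req.prompt!r}"

def handle_bedrock(req: ChatRequest) -> str:
    # Would call Amazon Bedrock here.
    return f"[bedrock/{req.model}] response to {req.prompt!r}"

HANDLERS = {"chatgpt": handle_openai, "bedrock": handle_bedrock}

def proxy(req: ChatRequest) -> str:
    # The app's side never changes; the proxy picks the provider-specific path.
    return HANDLERS[req.provider](req)
```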

MIT offers a default free quota, currently 10,000 tokens per day.1 For some LLM providers we also offer a mechanism to supply your own API key, which disables the quota. In that case, you need to create an account with the provider and pay whatever fees they charge to obtain the API key.
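
As a rough illustration of how a rolling 24-hour quota like this can be enforced (see the footnote), here is a small Python sketch; the data structures and names are assumptions for illustration, not the actual proxy implementation.

```python
# Illustrative sketch of a rolling 24-hour token quota, matching the
# footnote's "10,000 tokens over the last 24-hour period".
# Not the actual proxy code; names and structures are assumptions.
import time
from collections import deque

DAILY_QUOTA = 10_000
WINDOW_SECONDS = 24 * 60 * 60

# Per-user history of (timestamp, tokens_used) entries.
usage: dict[str, deque[tuple[float, int]]] = {}

def charge(user: str, tokens: int, now: float | None = None) -> bool:
    """Record token usage; return False if the request would exceed the quota."""
    now = time.time() if now is None else now
    history = usage.setdefault(user, deque())
    # Drop entries older than 24 hours.
    while history and history[0][0] < now - WINDOW_SECONDS:
        history.popleft()
    used = sum(t for _, t in history)
    if used + tokens > DAILY_QUOTA:
        return False
    history.append((now, tokens))
    return True
```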

When we first released the Chatbot component and related proxy, we only had support for OpenAI and ChatGPT. We also mistakenly referred to the ChatGPT provider as "chatgpt" when we should have used "OpenAI."

That said, if you did not explicitly choose a provider and model, we would default to OpenAI and one of the ChatGPT models, most recently "gpt-4o-mini."

What We Are Changing

We have recently changed the default provider and model. If you use the defaults, requests will now go to the "meta.llama4-maverick-17b-instruct-v1:0" model via Amazon's "Bedrock" service. We made this change because we received a grant from Amazon for education and research. We believe this model is as good as, or perhaps better than, the previous default.

Some subtleties

If you use the default, the component sets the provider explicitly to "chatgpt" (an artifact of how we originally set up the Chatbot component: the default is always "chatgpt"). If you also do not select a model (leaving the model field blank or set to "default"), we will now use the llama4 model. But if you provide an explicit model, your request will continue to go to ChatGPT. Similarly, if you provide your own OpenAI API key, your requests will continue to be routed to ChatGPT.
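
To make this routing rule concrete, here is a hedged Python sketch of the decision described above; the function and backend names are hypothetical, and the real proxy may differ in detail.

```python
# Sketch of the routing rule described above; names are hypothetical.
def pick_backend(provider: str, model: str, user_api_key: str | None) -> tuple[str, str]:
    """Return (backend, model) for a Chatbot request."""
    if provider == "chatgpt":
        if user_api_key:
            # A user-supplied OpenAI key keeps the request on OpenAI.
            return ("openai", model)
        if model in ("", "default"):
            # Default provider with no explicit model and no key:
            # route to the new Bedrock-hosted Llama 4 model.
            return ("bedrock", "meta.llama4-maverick-17b-instruct-v1:0")
        # An explicitly chosen ChatGPT model still goes to OpenAI.
        return ("openai", model)
    # Other providers are passed through unchanged.
    return (provider, model)

# The cases discussed above:
assert pick_backend("chatgpt", "", None)[0] == "bedrock"
assert pick_backend("chatgpt", "gpt-4o-mini", None)[0] == "openai"
assert pick_backend("chatgpt", "", "my-openai-key")[0] == "openai"
```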

Existing Apps are Affected

Because this change is made in the proxy, existing applications, including those already packaged and running on devices, will see its effect.

Footnotes

1 Actually, 10,000 tokens over the most recent 24-hour period.