The short answer: because every prompt entered into Vyond has real costs associated with it.
To enable the rapid creation of scripts and rough-cut videos, Vyond is integrated with Microsoft Azure’s implementation of OpenAI’s large language model (LLM). In turn, OpenAI has its own usage limits and subscription rates for ChatGPT.
Training and running LLMs is a significant structural cost. A huge amount of computing power is required to run them because they perform billions of calculations every time they return a response to a prompt. These calculations rely on graphics processing units (GPUs), now the standard hardware for AI applications because they can perform so many calculations simultaneously.
In recognition of the greatly expanded value provided by Vyond Go, and the concrete costs associated with delivering that value, Vyond’s subscription plans will be adjusted after the “Get Up to Speed” introductory period ends.