Business · Ars Technica
GitHub will start charging Copilot users based on their actual AI usage
Compiled by KHAO Editorial, aggregated from 1 outlet.
GitHub has announced that it will be shifting to a usage-based billing model for its GitHub Copilot AI service starting on June 1.
Key facts
- Those API rates can vary greatly depending on the sophistication of the model being used; pricing for OpenAI’s high-end GPT models currently ranges from $4.50 per million output tokens (GPT-5.4 Mini) to $30 per million output tokens (GPT-5.5)
- Anthropic has been adjusting usage limits during the “peak hours” of 5 am to 11 am Pacific Time in an effort to limit costs and improve reliability for subscribers
- GitHub has announced that it will be shifting to a usage-based billing model for its GitHub Copilot AI service starting on June 1
- GitHub Copilot subscribers will still be able to use simple AI suggestions like code completion and Next Edit without consuming AI credits
Summary
GitHub Copilot subscribers currently receive an allocation of monthly “requests” and “premium requests,” which are spent whenever they ask Copilot for help from an AI model. “Today, a quick chat question and a multi-hour autonomous coding session can cost the user the same amount,” the Microsoft-owned company wrote in its announcement.

Under the new pricing system, GitHub Copilot subscribers will receive a monthly allotment of “AI Credits” that matches their monthly subscription payment, with credits drawn down at rates tied to the underlying AI models’ API pricing. Those API rates can vary greatly depending on the sophistication of the model being used; pricing for OpenAI’s high-end GPT models currently ranges from $4.50 per million output tokens (GPT-5.4 Mini) to $30 per million output tokens (GPT-5.5), for instance.
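The per-million-token arithmetic behind those rates is easy to sketch. The snippet below is an illustrative calculator only, using the two rates quoted in the article; the function name, the 50,000-token session size, and the rate constants are assumptions for the example, not anything GitHub has published about how credits map to tokens:

```python
def token_cost(output_tokens: int, rate_per_million: float) -> float:
    """Dollar cost of a number of output tokens at a per-million-token rate."""
    return output_tokens / 1_000_000 * rate_per_million

# Rates quoted in the article for OpenAI's GPT models ($ per million output tokens).
GPT_5_4_MINI_RATE = 4.50
GPT_5_5_RATE = 30.00

# A hypothetical coding session producing 50,000 output tokens costs roughly
# $0.23 on the cheaper model but about $1.50 on the high-end one,
# which is the cost spread a flat per-request charge hides.
cheap = token_cost(50_000, GPT_5_4_MINI_RATE)
premium = token_cost(50_000, GPT_5_5_RATE)
print(f"mini: ${cheap:.2f}, high-end: ${premium:.2f}")
```

The roughly 6–7x gap between the two rates is the point of the change: under flat per-request billing, both sessions would have drawn down the same number of “premium requests.”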