Trial and Error: My Experience with Gemini 3 Pro
High charges with Google AI Studio API, struggles with Copilot Preview, and changes in pricing models. Thinking about how to deal with pay-as-you-go models in the AI era.
Hello! I'm Pan-kun.
This time, I've summarized my experience using Gemini 3 Pro via Copilot in the Zed IDE.
How I Burned Through 5,000 Yen in 2 Days
A few days after Gemini 3 was released, benchmark results and posts like "I made this with this prompt" were flying everywhere.
I thought, "I have to try this," so without hesitation, I created a project on GCP via Google AI Studio, enabled the API, and registered the API key in Zed...
I used it relentlessly.
By packing a large amount of accurate information into my prompts, I was able to achieve my goals in seconds.
About two days later, when I thought I'd take a break, I checked GCP and saw a charge of just under 5,000 yen... I was so terrified that I deleted the API from GCP in seconds.
Thinking about it calmly, that's only natural for a pay-as-you-go API, but
my grasp of my own usage pace and the billing rates at the time was far too naive.
Monthly flat rates are the only way to go, after all.
I can't deal with this.
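In hindsight, the charge is easy to reproduce with back-of-the-envelope arithmetic: pay-as-you-go APIs bill per token, so large, information-dense prompts dominate the cost. The sketch below illustrates this; note that the per-token prices and the exchange rate are hypothetical placeholders I picked for illustration, not Gemini 3 Pro's actual pricing. Always check the provider's current price list.

```python
# Rough cost estimator for a pay-as-you-go LLM API.
# All three constants below are HYPOTHETICAL placeholders, not the
# real Gemini 3 Pro prices -- substitute the current published rates.

PRICE_PER_1M_INPUT_USD = 2.00    # assumed price per 1M input tokens
PRICE_PER_1M_OUTPUT_USD = 12.00  # assumed price per 1M output tokens
USD_TO_JPY = 150.0               # assumed exchange rate

def estimate_cost_jpy(input_tokens: int, output_tokens: int) -> float:
    """Estimate the charge in yen for a batch of requests."""
    usd = (input_tokens / 1_000_000 * PRICE_PER_1M_INPUT_USD
           + output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT_USD)
    return usd * USD_TO_JPY

# Example: 50 requests, each with an 80k-token prompt ("throwing a
# large amount of accurate information") and a 4k-token response.
cost = estimate_cost_jpy(input_tokens=50 * 80_000,
                         output_tokens=50 * 4_000)
print(f"about {cost:.0f} yen")  # input tokens dominate the bill
```

With these assumed numbers, two days of heavy use lands in the thousands of yen range, which is exactly the surprise I ran into.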
Gemini 3 Pro Preview Suddenly Appears in Copilot
A few days after giving up on using the API, something changed in the Copilot I usually use.
"Gemini 3 Pro Preview is available for Copilot Pro plan users and above."
Yes, naturally, I jumped on it in seconds...
Incidentally, since Zed can link directly to a GitHub account, there were no API issuance or registration steps,
and not having to acquire or manage a Google AI Studio API key made it even more convenient.
Mysterious Behavior: Works Sometimes, Sometimes Not
However, the joy was short-lived.
While working, Copilot frequently started throwing errors and stopped responding. It became impossible to work, so I investigated.
Solution
Scouring the internet, I found a large number of people suffering from the same phenomenon,
and among them, one person who was being thanked as if they were being worshipped.
To state the method simply:
At the very beginning of the chat, send a request whose prompt simply has the model read an arbitrary file.
That's it. It seriously worked.
The source is below; reading it alongside the official documentation made a lot of sense, though
the wording in the official documentation was a bit vague.
Even so, the person who analyzed this is too amazing...
Summary?
So, for now, I am able to use it heavily without any issues.
Eventually, I'd like to introduce an LLM on a home server and use a dedicated AI if possible.
I hate being told I can't use things at work due to security concerns, so it's important to increase my options in advance.
I can prepare now for a time, years from now, when we can no longer say "it's faster to build it yourself."
Even if a future where "engineers are not needed" never comes, a future of "we don't need ~ engineers" could well arise, so I want to avoid resting on my laurels and getting tripped up by AI.
Especially since there is a promising future ahead.