
GLM-5, the AI that programmed a Game Boy Advance emulator

IbizaPocholo

NeoGAFs Kent Brockman



GLM-5 excels at long-horizon engineering, maintaining consistency across 700+ tool calls and 800+ context handoffs over a 24+ hour run. This lets the AI function as a persistent process on complex projects, such as building a full Game Boy Advance emulator, rather than as a short conversational assistant.

Summary

  • E01 Research gained early access to GLM-5 to test its long-task capabilities, focusing on a challenge to build a Game Boy Advance (GBA) emulator from scratch in JavaScript, embedded in a 3D-rendered scene, using a single agent without parallelism.
  • The task simulates real engineering: research, architecture, implementation, testing, and documentation across multiple sessions, stretching beyond context limits via meta-rules and loops.
  • Challenge input: system prompt and hardware documentation; model must scope work, adjust strategies, switch roles (architect, engineer, designer), and hand off context accurately.
  • Two test versions: "Easy mode" with gbajs reference code available (GLM-5 reimplemented it independently, achieving a working core emulator, ROM loading, and a 3D scene; demo at https://e01.ai/gba); "Zero reference" mode (no reference code or web search, ran 24+ hours, completed the CPU instruction set, progress ongoing).
  • Prior models failed by looping, forgetting goals, or making erroneous tool calls.
  • Success mechanism: prompt as a meta-loop (work → test → log → advance), persisting state in files (/notes/progress.md, /notes/decisions.md, /notes/blockers.md) for context resets; a rough sketch of this protocol appears after this list.
  • Observation 1: GLM-5 showed no degradation, with consistent tool calls (700+), strict instruction adherence (800+ switches), and reliable context relay from the notes files.
  • Implications: Enables goal-driven agents (autonomous planning/execution), parallel multi-agents (one human supervising many), applications beyond code (e.g., AI for Science: experiment design, research); patterns like long-recurring (iterative workflows) and long-exploring (open-ended exploration).
  • Observation 2: Challenges include hidden multi-session loops (human-detected, e.g., brute-forcing 3D model), over-diligence without pause thresholds; needs explicit "stop and ask for help" instructions.
  • Future needs: Observability (visualization/monitoring), intervention (alerts, nudges), evaluation metrics (context relay, progression rate, decay), trust (incremental validation), cost/infra (budgeting, pause/resume), research (relay limits, self-evaluation).
  • Prompt design guide: Define goal + phases with "done" criteria (e.g., CPU core, memory, graphics); conventions (file structure, testing); notes protocol (session updates); testing gates (unit/integration via Node.js, not browser); loop breaking (retry logs, time limits); recovery (read notes/files first).
  • Mistakes to avoid: Vague notes, no loop-breaking/logs (resets across sessions), assuming memory persistence, over-specifying code, skipping tests (compounds errors).
  • Experiment run via OpenCode/Claude Code; tags include GLM5, AI, Long Task, Writing Prompts, Machine Learning.
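For the curious, here is a minimal sketch of what the "work → test → log → advance" loop and notes protocol summarized above could look like in practice. This is not E01 Research's actual harness: the /notes file paths and the Node.js testing gate come from the summary, while every function name, the test command, and the log format are illustrative assumptions.

```typescript
// Minimal sketch of the notes protocol: each session recovers state from the
// notes files, does a unit of work, runs the test gate, and logs the outcome
// before the context is handed off. Paths come from the article summary;
// all other names and commands are illustrative.
import { readFileSync, appendFileSync, existsSync } from "node:fs";
import { execSync } from "node:child_process";

const NOTES = ["/notes/progress.md", "/notes/decisions.md", "/notes/blockers.md"];

// Recovery step: a fresh session rebuilds its picture of the project from the
// notes files, never from assumed memory of earlier sessions.
function loadState(): string {
  return NOTES.filter((path) => existsSync(path))
    .map((path) => `## ${path}\n${readFileSync(path, "utf8")}`)
    .join("\n\n");
}

// Testing gate: run unit tests under Node.js (not in the browser) and refuse
// to advance to the next phase if they fail.
function testsPass(): boolean {
  try {
    execSync("node --test test/", { stdio: "inherit" });
    return true;
  } catch {
    return false;
  }
}

// Log step: append a dated entry so the next session (or a human supervisor)
// can see what actually happened; blockers go to their own file.
function logSession(summary: string, blocked: boolean): void {
  const entry = `\n- ${new Date().toISOString()}: ${summary}\n`;
  appendFileSync(blocked ? "/notes/blockers.md" : "/notes/progress.md", entry);
}

// Handoff: the recovered state string is what gets fed back into the model's
// prompt at the start of the next session.
const state = loadState();
console.log(`Recovered ${state.length} characters of persisted notes.`);
logSession(`CPU phase: gate ${testsPass() ? "passed" : "failed"}`, false);
```

The point of the pattern is that nothing is assumed to survive a context reset except what gets written to the notes files; the model re-reads them at the start of every session.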
 
  • Two test versions: "Easy mode" with gbajs reference code available (GLM-5 reimplemented it independently, achieving a working core emulator, ROM loading, and a 3D scene; demo at https://e01.ai/gba); "Zero reference" mode (no reference code or web search, ran 24+ hours, completed the CPU instruction set, progress ongoing).
So... it had access to the complete code of a working GBA emulator? Is "reimplemented" just a fancy way of saying "copied", lmao?

And the version of the model that had no existing GBA reference code ran for 24 hours straight and wasn't able to get it working. "Completed CPU instruction set" wtf does that even mean.

I want AGI as badly as the next guy, but I also equally want to avoid getting duped by people who are heavily incentivized to make us believe AGI is around the corner.
 
 
It's just regurgitating data from existing emulators, don't trust these ai bots
Only if said data was present in its training dataset, and if that dataset was built well enough not to lose any important content.

On a side note, GLM 5 is twice as large as GLM 4.7 (meaning 2x the VRAM or RAM+VRAM requirements), despite only being 10 to 15% better in benchmarks.

Even though the advances are there, they come at a ridiculous cost and there's no optimization (that doesn't rely on lobotomizing the model) in sight...
 
AI marketing in overdrive, it's attacking from everywhere. I know companies are pouring 100s of billions in it but it's getting tiresome.
Oh we are gonna get a million "this AI is actually sentient, oh wait no it isn't" marketing "scandals" from all these snake oil salesmen.

I love AI, use it daily for work and personal use. The salesmen are indeed super insufferable.
 
Keep being told how useless AI is then suddenly the craziest shit you ever saw is shown, there's a small pause, and you keep getting told AI is useless.
You consider simply rewriting other people's (humans') work/code to be "the craziest shit you ever saw"? AI usage can be very helpful, but this particular case shows that it's really useless most of the time.
 
Keep being told how useless AI is then suddenly the craziest shit you ever saw is shown, there's a small pause, and you keep getting told AI is useless.

Crazy shit would be if it could write a fully working Switch 2 or PS5 emulator from scratch. But it won't. It can't truly create something entirely new that wasn't, in some form, already reflected in its training data.
 
It's just regurgitating data from existing emulators, don't trust these ai bots
Don't look at AI where it currently stands, look at where it's heading.
AI works like a person; eventually it won't need people for any kind of help.
A person doesn't know how to draw a circle or what 1+1 is without another human's help.
AI basically functions the same way, only vastly superior as it learns.
 
AI news cycle be like:

>AI did some crazy shit
>"Wow, thats incredible! Take that AI deniers!"
>There's a very wonder-breaking catch
>"Nono, the future is what matters! It'll be great in the next 6 months i swear!"
>RAM prices increase by 14278%
 
On a more serious note, the conclusion I'm getting at is that these bots, at the end of the day, will work better as assistants for very specific things, mainly repetitive tasks and other mule jobs; it doesn't seem they'll ever be able to produce something (good) on their own.
 
Don't look at AI where it currently stands, look at where it's heading.
AI works like a person; eventually it won't need people for any kind of help.
A person doesn't know how to draw a circle or what 1+1 is without another human's help.
AI basically functions the same way, only vastly superior as it learns.

Doubt it! They must have been fed gigazillionbytes of data and guzzled up megawattons of electricity by now. Still waiting for AI to learn!
 
Don't look at AI where it currently stands, look at where it's heading.
circular-deals-AI-1090x1387.png

Where is it heading? Seems to be going around in circles as far as I can tell. Although that image is somewhat outdated, as investments in OpenAI seem to be drying up, and companies are moving away from Nvidia when it comes to inference. Which should make for a rather interesting stock market in the near future.
 
circular-deals-AI-1090x1387.png

Where is it heading? Seems to be going around in circles as far as I can tell. Although that image is somewhat outdated, as investments in OpenAI seem to be drying up, and companies are moving away from Nvidia when it comes to inference. Which should make for a rather interesting stock market in the near future.
Ask me again in a few years.
I remember hearing the same thing about cell phones and the internet.
 
Ask me again in a few years.
I remember hearing the same thing about cell phones and the internet.
Wasn't the .com bubble burst one of the most famous ones? AI seems to be heading down a similar path, where the tech hype doesn't match its actual uses (which we'll probably only truly understand a decade or so from now).
 
Wasn't the .com bubble burst one of the most famous ones? AI seems to be heading down a similar path, where the tech hype doesn't match its actual uses (which we'll probably only truly understand a decade or so from now).
But all those uses became a reality; people just jumped the gun and invested too much too fast. And that was mainly venture capital, which, once it was gone, was gone. With AI it's big companies with deep pockets who will likely ride the dips in the hope that they end up with an Amazon.com and not a boo.com.
 
Sure, it might be able to process this, but at what cost? The power/water/etc. costs will be huge. Also, they'll just make your company dependent on AI and then start charging you giant fees to use their AI, fees that might even be more expensive than some engineers.
 
It would be more impressive if they used AI to create a perfect Dreamcast emulator, or a Model 3/NAOMI arcade emulator. I know there are DC and Model 3 emulators out there, but they are not perfect. So, AI filling the gaps and improving them would be a cool concept.

Or, for the ultimate challenge, a PS4 emulator that can run all PS4 games without any graphical glitches.
 
Wasn't the .com bubble burst one of the most famous ones? AI seems to be heading down a similar path, where the tech hype doesn't match its actual uses (which we'll probably only truly understand a decade or so from now).
Yes, the bubble burst, but Internet-based communications and platforms transformed pretty much every company.

AI corpos' stock valuations may crash, but the tech is here to stay (unfortunately) and it will also transform most companies out there.

Edit: The work with the AI agent is freaking awesome. An agent running for 24 hours is crazy good considering where we were, say, 2 years ago.

And creating the CPU side of the emulation in 24 hours is impressive as hell. Just think what the equivalent would be if you had a couple of mid-tier devs working on the same thing.
 
circular-deals-AI-1090x1387.png

Where is it heading? Seems to be going around in circles as far as I can tell. Although that image is somewhat outdated, as investments in OpenAI seem to be drying up, and companies are moving away from Nvidia when it comes to inference. Which should make for a rather interesting stock market in the near future.
The custom hardware advances from Amazon, Google, MS, and startups like Cerebras, Groq (not Grok, kind of bought by Nvidia) and many more (including in China) are super interesting.

I can certainly see Nvidia's market share shrinking mid to long term. This is great for everyone else, though, since one of the biggest issues for OpenAI, Grok, Anthropic, etc. is inference costs.
 
But all those uses became a reality; people just jumped the gun and invested too much too fast. And that was mainly venture capital, which, once it was gone, was gone. With AI it's big companies with deep pockets who will likely ride the dips in the hope that they end up with an Amazon.com and not a boo.com.
Yes, the bubble burst, but Internet-based communications and platforms transformed pretty much every company.

AI corpos' stock valuations may crash, but the tech is here to stay (unfortunately) and it will also transform most companies out there.

Edit: The work with the AI agent is freaking awesome. An agent running for 24 hours is crazy good considering where we were, say, 2 years ago.

And creating the CPU side of the emulation in 24 hours is impressive as hell. Just think what the equivalent would be if you had a couple of mid-tier devs working on the same thing.
But the point is that everyone misunderstood the value of the tech at the time. During the 90s the belief was that the internet would work as some form of advertising, that everyone needed their own site for promotion and maybe for talking to customers. It wasn't until 15 years later, after smartphones became a thing, that everyone understood its true value was in social media and digital distribution.

I'm saying AI may be similar. I don't think we really understand where its actual value lies yet, and many of these companies are jumping the gun. It may not even be server-side AI that becomes the standard, but rather local AIs, once hardware that can run them becomes affordable.
 
Yeah, these AIs are no joke in terms of recreating technology-related stuff. If you already have a good fundamental understanding of hardware engineering & design, with the right prompts and follow-throughs you could have an AI develop an entire console design for you, including every aspect of how it works at the transistor level. Of course, you need to be knowledgeable enough to spot where mistakes are made and whether it corrects them.

I think designing hardware architecture is a type of art form in itself, but it's not instinctively "human" like drawing, painting, creative writing etc. Those are things AI will never be able to replicate anywhere near as well as actual people, because you need the warmth of a soul for them, and that's something AI will never have. Same with making music; when I hear a lot of these AI-made songs out there, it's so easy to tell they're AI because the song structure is repetitive and rigid, with predictable drops & hooks, predictable sound mastering and whatnot. It always tries to sound "perfect", which is actually the giveaway that it's 100% artificial.

Curious how GLM-5 can combat the Nintendo ninja lawyers tho; those DMCAs could make it self-delete 🤣

It would be more impressive if they used AI to create a perfect Dreamcast emulator, or a Model 3/NAOMI arcade emulator. I know there are DC and Model 3 emulators out there, but they are not perfect. So, AI filling the gaps and improving them would be a cool concept.

Or, for the ultimate challenge, a PS4 emulator that can run all PS4 games without any graphical glitches.

FWIW I've been using Google AI to help flesh out certain details of alternate timeline consoles, tho the initial ideas were those I came up with having researched other retro consoles and learning about their architectures. NGL, it has been extremely helpful, but I've also been making sure my prompts are very concise and clearly-stated, and I have had to correct it multiple times when it assumed one technical detail when I meant something completely different.

And that's a perfectly fine use case, I feel: as a supplement to help iron out technical details of engineering or programming problems, clarifications, etc. But that's because I understand the nature of AI is that it will ALWAYS seek to provide the "perfect" or most "correct" answer to any problem, which makes it a very poor fit for highly creative fields and spaces. Even in technical fields, like I was just explaining above, you'll probably WANT to have imperfections in the design to enforce limitations, considering the other products you're trying to fit its market existence around.

Plus, things like the various business decisions that would've steered market performance one way or another require an understanding of human psychology and the cultural elements that might influence the corporate environment, even relationship dynamics and such, which AI (by its nature) would not be able to understand or comprehend.

AI generates GBA emulator by stealing code from existing GBA emulators. Very cool to see

Yeah, that's a big part of it in this context, for sure. Extremely grey-area stuff, and it gives companies like Nintendo better legal legs to stand on against AI. I mean, what's the difference between an AI taking that GBA code from emulators and those emulators taking it from actual GBA hardware & software?
 
But the point is that everyone misunderstood the value of the tech at the time. During the 90s the belief was that the internet would work as some form of advertising, that everyone needed their own site for promotion and maybe for talking to customers. It wasn't until 15 years later, after smartphones became a thing, that everyone understood its true value was in social media and digital distribution.

I'm saying AI may be similar. I don't think we really understand where its actual value lies yet, and many of these companies are jumping the gun. It may not even be server-side AI that becomes the standard, but rather local AIs, once hardware that can run them becomes affordable.
Nah, I was already working at a couple of different startups back then, and ASPs were starting to get hot, which eventually became modern SaaS.

The frenzy was crazy, but remember: Google was already popular, webmail was everywhere, internet stores were getting established, API-based comms were starting to spread, etc.

AI adoption and capability are going up fast; it's just not fast enough to support the stupidly high amount of current investment, IMO. Plus all the financing shenanigans.
 
Hmm, I'm not sure about AI. I'm a hobby artist, but I fear that AI will replace me, lol.
It can do scary good 2D work, and all those videos showing old and new celebrities.
What do you guys think, will AI replace the 2D concept artist in the future?
 
Hmm, I'm not sure about AI. I'm a hobby artist, but I fear that AI will replace me, lol.
It can do scary good 2D work, and all those videos showing old and new celebrities.
What do you guys think, will AI replace the 2D concept artist in the future?
It won't replace the artist, but the artist has to learn how to use AI. AI is the new pen.
 
Nah, I was already working at a couple of different startups back then, and ASPs were starting to get hot, which eventually became modern SaaS.

The frenzy was crazy, but remember: Google was already popular, webmail was everywhere, internet stores were getting established, API-based comms were starting to spread, etc.

AI adoption and capability are going up fast; it's just not fast enough to support the stupidly high amount of current investment, IMO. Plus all the financing shenanigans.
But the ones investing big money back then were internet infrastructure companies; they were the ones expecting a return from the tech, not the ones developing web-based internet services. If what I said about local AIs becoming more of a thing in the future than server-based ones turns out to be true, or even something as simple as new hardware arriving that can handle AI much more efficiently, current big tech will be losing a lot on these investments.
 
But the ones investing big money back then were internet infrastructure companies; they were the ones expecting a return from the tech, not the ones developing web-based internet services. If what I said about local AIs becoming more of a thing in the future than server-based ones turns out to be true, or even something as simple as new hardware arriving that can handle AI much more efficiently, current big tech will be losing a lot on these investments.
The ones investing now are Amazon/AWS, Google, Nvidia, and Oracle, plus their Chinese equivalents, and Japan and the EU are involved as well.

These are all huge corpos.
 