Ok, I'll try to keep this concise, but it's worth posting rather than PM'ing, since when the next person hits this bug the fix will be available as a google result. I know random forum posts in google have helped me in the past. Also, this bug is exactly why I haven't looked at OpenGL in like 10 years: it's got so much cruft, even its deprecations are deprecated!
To reference my earlier long post (saves quoting the code again): I thought the stride length in the two glVertexAttribPointer calls was causing the memory access violation. It really is the first of those calls that causes the later glDrawArrays to crash the driver with a bad memory access, but it isn't the stride, it's the last parameter, the offset. When you're not using VBOs, the last parameter is a pointer to the vertex data in client memory; when you are using VBOs, it's a byte offset into the currently bound VBO. Even though a VBO is bound (VBO 7), the nvidia driver crashes because it treats that last parameter as a pointer and tries to load the vertex data from address 0x00000000 (NULL, nothing) rather than from the start of the bound VBO. The driver ignores the bound VBO because of a single call made about 100 calls earlier:
Code:
glEnableClientState(GL_VERTEX_ARRAY)
That call is super deprecated, but still mentioned in the OpenGL manpages, and it's superseded by glEnableVertexAttribArray() in modern GL, which your app calls anyhow. All it does is enable vertex array 0 on the pipeline, but as I read the spec (speaking as a non-GL user), enabling array 0 with that old-style call tells glDrawArrays that whatever glVertexAttribPointer set for index 0 is a literal pointer into client memory. The earlier glDrawArrays calls in your app don't error because their glVertexAttribPointer calls really do pass literal pointers to their vertex data (no VBO is bound). Your post-processing draw call is the only one that uses a bound VBO, so that's where it crashes. Disabling the legacy call allows the post-processing draw call to access the bound VBO correctly. The bug only happens on recent nvidia drivers because the spec doesn't explicitly say that binding a VBO disables the behaviour specified by the old-style call; the crash is just the result of "unspecified behaviour". Older nvidia drivers and AMD's make an assumption in your favour, if I had to guess.
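To make the state interaction concrete, here's a rough sketch of the failing sequence as I understand it. This is not Stencyl's actual code; the attribute index, sizes, and counts are illustrative, and only the buffer name 7 comes from the logs.

```c
/* Sketch of the failure mode -- not the app's actual code. */
glEnableClientState(GL_VERTEX_ARRAY);      /* legacy: array 0 reads from a client pointer */

/* ...about 100 calls later, the post-processing draw... */
glBindBuffer(GL_ARRAY_BUFFER, 7);          /* VBO 7 is bound */
glEnableVertexAttribArray(0);              /* modern replacement for the call above */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    /* with a VBO bound, the last arg is an offset into VBO 7; the
     * lingering legacy state makes recent nvidia drivers treat it
     * as a literal pointer instead, i.e. NULL */
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);     /* driver crashes reading address 0x0 here */

/* The fix: cancel the legacy state before any VBO-based draw. */
glDisableClientState(GL_VERTEX_ARRAY);
```

On drivers that tolerate it, the same sequence draws fine, which is why this only surfaced after a driver update.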
That's the google-worthy info out of the way.
The reason I found this out is that I keep a pristine Windows 7 install for testing purposes, and it has an older nvidia 331.40 driver. Your test apps worked with that driver, so I compared its GL API logs against the crashing run's, and they were identical, so I figured it really must be something funny in the memory access of that draw call that the new driver won't tolerate. While looking through the logs I noticed the earlier working calls used memory pointers instead of offsets, did a quick google, and came across this page -
https://code.launchpad.net/~3v1n0/unity/fix-nvidia-glDrawArrays-crash/+merge/117559 scroll to the green highlighted diff at the bottom and you'll see it's the same bug. It's a patch for a Linux desktop manager from 2012, but after a bit of checking around with the terminology it seemed the likely culprit in your app too.
So how to disable it in your app? Well, I looked for some kind of GL function injector DLL that could disable the legacy feature with its opposite call, glDisableClientState (something like the many FXAA injectors etc.), but I couldn't find anything for injecting specific GL commands. Instead I figured the quickest test was to change the function's argument to something invalid. I looked in that gDEBugger program to see roughly where in your app the legacy API call was made: it's from nme.ndll, not the actual executable. Then I loaded nme.ndll into IDA and located the function that calls glEnableClientState with an argument of GL_VERTEX_ARRAY. Uppercase means a constant, so I checked the headers and it translates to 0x8074 in hex, see -
https://www.khronos.org/registry/glsc/api/1.0/gl.h . That page is also useful for making sure you don't change the argument to another valid define, which could cause a different bug. With the function located by hex address in IDA, I could load nme.ndll into a hex editor (I used HxD since it's free) and edit the 0x8074 argument to something benign like 0x0074. I saved the DLL in HxD and your app worked fine, with the problematic call now issued as a fairly harmless glEnableClientState(unknown), since 0x0074 isn't a valid argument.
I didn't need to reproduce the whole IDA disassembly for all of the other test executables; I just searched each one in HxD for the hex value "74 80 00 00 FF 15" and changed the 74 80 to 74 00 (the 0x8074 appears byte-swapped because x86 is little-endian). You should be safe doing the same edit on any other versions of your game until the guys at Stencyl get around to addressing it. It depends on whether they're actually using the legacy style of OpenGL at any point: if they are, they need to add glDisableClientState(GL_VERTEX_ARRAY) guards around their modern VBO bindings (which might be time-consuming); otherwise they can probably just remove the glEnableClientState(GL_VERTEX_ARRAY) calls from their generated code for the win32 desktop target.
Did I mention, I don't like OpenGL!