OpenGL ES evolves, 1.1 specifications are out!!!

Panajev2001a

New features in OpenGL ES 1.1 provide enhanced functionality, improved image quality and optimizations to increase performance while reducing memory bandwidth usage to save power. These include:

* OpenGL 1.5 is the reference specification. OpenGL ES 1.0 was defined relative to OpenGL 1.3. OpenGL ES 1.1 is defined relative to the OpenGL 1.5 specification.

* Buffer objects provide a mechanism that clients can use to allocate, initialize and render from memory. Buffer objects can be used to store vertex array and element index data.
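As a rough sketch of how the buffer-object mechanism is used (this assumes a valid ES 1.1 context is already current; the triangle data is illustrative):

```c
#include <GLES/gl.h>

static const GLfloat verts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};

void draw_triangle_vbo(void)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);                     /* allocate a buffer name  */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);        /* make it current         */
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts),
                 verts, GL_STATIC_DRAW);       /* upload vertex data once */

    /* Render from the buffer: with a VBO bound, the pointer argument
       is an offset into the buffer, not a client-memory address. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```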

* Auto mipmap generation can offload the application from having to generate mip-levels. Hardware implementations can potentially accelerate auto mip-level generation, especially for video textures or when rendering to texture. A texture is considered incomplete in OpenGL ES if its set of mipmap arrays is not specified with the same type. The check for completeness is done when a given texture is used to render geometry.
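A minimal sketch of enabling auto mipmap generation in ES 1.1 (assumes a current context; `pixels`, `w` and `h` are placeholders supplied by the app):

```c
#include <GLES/gl.h>

/* Ask the GL to derive all mip levels from the base image.
   GL_GENERATE_MIPMAP must be set before the base level is specified. */
void upload_with_mipmaps(GLuint tex, const void *pixels, GLsizei w, GLsizei h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* mip chain auto-built */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);        /* trilinear sampling  */
}
```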

* Enhanced texture processing including a minimum of two multi-textures and texture combiner functionality for effects such as bump-mapping and per-pixel lighting. All OpenGL 1.5 texture environments except for the texture crossbar are supported.

* Vertex skinning functionality using the OES_matrix_palette extension allows smooth animation of complex figures and geometries. The extension allows OpenGL ES to support a palette of matrices. The matrix palette defines a set of matrices that can be used to transform a vertex. The matrix palette is not part of the modelview matrix stack.
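A hedged sketch of two-bone skinning with OES_matrix_palette (entry points as defined by the extension; the `indices` and `weights` arrays are illustrative per-vertex data):

```c
#include <GLES/gl.h>
#include <GLES/glext.h>

/* Two matrices blended per vertex: load palette entries, then point the
   GL at per-vertex matrix indices and blend weights. */
void setup_skinning(const GLubyte *indices, const GLfloat *weights)
{
    glEnable(GL_MATRIX_PALETTE_OES);
    glMatrixMode(GL_MATRIX_PALETTE_OES);

    glCurrentPaletteMatrixOES(0);            /* select palette slot 0   */
    glLoadPaletteFromModelViewMatrixOES();   /* copy current modelview  */
    /* ... load further bone matrices into slots 1..N-1 ... */

    glEnableClientState(GL_MATRIX_INDEX_ARRAY_OES);
    glMatrixIndexPointerOES(2, GL_UNSIGNED_BYTE, 0, indices);
    glEnableClientState(GL_WEIGHT_ARRAY_OES);
    glWeightPointerOES(2, GL_FLOAT, 0, weights);
}
```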

* User-defined clip planes permit efficient early culling of non-visible polygons, increasing performance and saving power.
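For illustration, defining one user clip plane in the ES 1.1 Common profile might look like this (the plane equation is an arbitrary example):

```c
#include <GLES/gl.h>

/* A point is kept when plane . (x,y,z,1) >= 0 in eye space, so the plane
   (0,0,-1,-5) keeps geometry with eye-space z <= -5, i.e. farther than
   5 units in front of the camera. */
void enable_near_cull(void)
{
    const GLfloat plane[4] = { 0.0f, 0.0f, -1.0f, -5.0f };
    glClipPlanef(GL_CLIP_PLANE0, plane);
    glEnable(GL_CLIP_PLANE0);
}
```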

* Enhanced point sprites and point sprite arrays provide a method for applications to draw particles using points instead of quads. This enables efficient and realistic particle effects. The point sprites extension also allows an app to specify texture coordinates that are interpolated across the point, instead of the single texture coordinate used by traditional GL points. The Point Size Array extension permits an array of point sizes instead of a fixed input point size, giving applications flexibility in implementing particle effects.
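A sketch of the two extensions used together (assumes a particle texture is already bound; `sizes` and `n` are supplied by the app):

```c
#include <GLES/gl.h>
#include <GLES/glext.h>

/* Draw n textured point sprites, each with its own size from the array. */
void draw_particles(const GLfloat *sizes, GLsizei n)
{
    glEnable(GL_POINT_SPRITE_OES);
    /* Replace the single point texcoord with coordinates interpolated
       across the point's footprint. */
    glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);

    glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
    glPointSizePointerOES(GL_FLOAT, 0, sizes);   /* per-point sizes */

    glDrawArrays(GL_POINTS, 0, n);
}
```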

* Static and dynamic state queries are supported for all static and dynamic state explicitly supported in the profile. The supported GL state queries can be categorized into simple queries, enumerated queries, texture queries, pointer and string queries, and buffer object queries. This enables OpenGL ES to be used in a sophisticated, layered software environment.

* Draw Texture defines a mechanism for writing pixel rectangles from one or more textures to a rectangular region of the screen. This capability is useful for fast rendering of background paintings, bitmapped font glyphs, and 2D framing elements in games.
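As a sketch, blitting a 32x32 glyph with the OES_draw_texture mechanism (the crop rectangle and sizes are example values):

```c
#include <GLES/gl.h>
#include <GLES/glext.h>

/* Copy a 32x32 region of the bound texture directly to window
   coordinates (x, y), bypassing the transform pipeline. */
void draw_glyph(GLuint tex, GLint x, GLint y)
{
    const GLint crop[4] = { 0, 0, 32, 32 };   /* u, v, w, h in texels   */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
    glDrawTexiOES(x, y, 0, 32, 32);           /* z = 0, 32x32 on screen */
}
```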

* New Core Additions and Profile Extensions for the Common and Common-Lite profiles add subsets of the ES-specific extensions OES_byte_coordinates, OES_fixed_point, OES_single_precision and OES_matrix_get as core additions; OES_read_format, OES_compressed_paletted_texture, OES_point_size_array and OES_point_sprite as required profile extensions; and OES_matrix_palette and OES_draw_texture as optional profile extensions.

http://www.khronos.org/opengles/whats_new_1_1.html
 
* OpenGL 1.5 is the reference specification. OpenGL ES 1.0 was defined relative to OpenGL 1.3. OpenGL ES 1.1 is defined relative to the OpenGL 1.5 specification.

Why the hell do they have to make it so complicated for normal people?
 
Good info pana, but doesn't OpenGL 2.0 wear the features crown? I also love OpenGL's flexibility over DirectX's static feature set.
 
God's Hand said:
DirectX > OGL. There's no way around it. Any smart programmer knows this.

Not if you want people on Linux to play your game. Anyways, when comparing DirectX and OGL, you need to compare D3D and OGL.
 
God's Hand said:
DirectX > OGL. There's no way around it. Any smart programmer knows this.

Please GH, what is it about the words "static feature set" that you do not understand? Yeah, I guess Carmack is a "foolish programmer." LOL.
 
Li Mu Bai said:
Please GH, what is it about the words "static feature set" that you do not understand? Yeah, I guess Carmack is a "foolish programmer." LOL.

Extensions and API "feature sets" will be less important when every card supports fully generalized programmable shaders, and both APIs support HLSL.
 
aaaaa0 said:
Extensions and API "feature sets" will be less important when every card supports fully generalized programmable shaders, and both APIs support HLSL.

True, but this has not yet come to pass aaaa0.
 
Li Mu Bai said:
True, but this has not yet come to pass aaaa0.

I agree.

However the current situation with OpenGL is not good either.

Carmack has so much industry pull that hardware vendors test with DOOM3, not the other way around. :)

Despite this he still had to write a half dozen different code paths and shader paths for DOOM3 because of all the different vendor extensions and ways of doing things in OpenGL. (Well, and to also work around problems in some hardware which shall remain nameless but ends in the letters F and X. :) )

He pulled it off because he's a really good developer.

But really, it sucks that the API isn't doing the job it's supposed to be doing, which is to abstract the hardware differences in an efficient manner, so that the developer doesn't have to worry about most of them.

In this respect, DX9 is actually doing a much better job right now.
 
aaaaa0 said:
Extensions and API "feature sets" will be less important when every card supports fully generalized programmable shaders, and both APIs support HLSL.

True, but in the end, what will DirectX offer over the cross-platform OpenGL?
 
tenchir said:
True, but in the end, what will DirectX offer over the cross-platform OpenGL?

You will use whichever API your target platform supports better.

On Windows, that will pretty much be DirectX.

DirectX will be more supported, more mature, more stable, and have better integration with development tools. It will have better drivers, more hardware support, and more testing done on it than OpenGL.

So I think there are lots of reasons other than "API features" to go with DirectX over OpenGL on Windows for now and the foreseeable future.
 
Extensions and API "feature sets" will be less important when every card supports fully generalized programmable shaders, and both APIs support HLSL.

Couldn't you have said almost the same in the early days?

Extensions and API "feature sets" will be less important when every card supports anisotropic filtering and texture compression

Shaders didn't even exist then. Who knows what's around the corner?
 
mrklaw said:
Couldn't you have said almost the same in the early days?

Extensions and API "feature sets" will be less important when every card supports anisotropic filtering and texture compression

Shaders didn't even exist then. Who knows what's around the corner?

There's a big difference between "texture compression", "anisotropic filtering", and "shaders".

Texture compression and anisotropic filtering are just features that let you do ONE thing.

Shaders let you do an infinite number of things.

Think of it this way:

To use a weak analogy, imagine if your CPU was structured in a way that a word processing program was hardcoded right into it, and the only thing your CPU could do was run that word processing program.

That's what GPUs looked like a few years ago. Each GPU basically had a hardwired 3D rendering program burned into it, that you couldn't change at all. They exposed various options and features that you could turn on, or turn off, or adjust a couple parameters. But you couldn't really alter much of the way the chip actually worked.

Back then the 3D APIs were all about talking to this hardwired implementation of a 3D pipeline.

Thus exposing "features" was very important.

Different GPU makers would hardwire different effects into their GPUs, and this meant MS would have to add new flags or APIs to D3D, or the GPU maker would create an OpenGL extension to expose some way for programs to control the new features.

Now, fast forward a few years, and GPUs have these things called shaders.

They're really misnamed, because they don't have to necessarily do "shading". What they are is a set of general purpose instructions that let developers come up with their own "features" by writing programs that run on the GPU.

To make a new "feature" you write a new shader, and tell the GPU to run it.

So the whole notion of "features" basically goes away.

You want the card to do MEGA_FILTERING_TECHNIQUE_THAT_I_INVENTED? Write a shader to sample the textures yourself.

You want the card to use MAGIC_TEXTURE_COMPRESSION_METHOD_THAT_I_INVENTED? Write a shader to decompress the textures yourself.

The point is, barring some sort of crazy radical change in the graphics pipeline (which would pretty much invalidate both OpenGL and DirectX APIs and force everyone to start from zero), most new 3D rendering "features" will be constructed using generalized shader instructions.

The limiting factor is no longer the "features" the API exposes, but how good the HLSL compiler is and how fast/capable the hardware is at running shader instructions.

Note that right now we're not quite in this utopian world of generalized shaders, but it's coming, and coming fast.
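To make the "features become shaders" point concrete, here is a hypothetical roll-your-own filter written as a GLSL fragment shader held in a C string. The uniform names and the 4-tap filter itself are invented for illustration; this is not any real extension or API feature:

```c
/* A "filtering feature" implemented by the app itself: sample the texture
   four times and average, instead of asking the API for a filtering flag. */
static const char *mega_filter_fs =
    "uniform sampler2D tex;\n"
    "uniform vec2 texel;   /* size of one texel in UV space */\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    vec4 c = texture2D(tex, uv + vec2( texel.x, 0.0))\n"
    "           + texture2D(tex, uv + vec2(-texel.x, 0.0))\n"
    "           + texture2D(tex, uv + vec2(0.0,  texel.y))\n"
    "           + texture2D(tex, uv + vec2(0.0, -texel.y));\n"
    "    gl_FragColor = 0.25 * c;\n"
    "}\n";
```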
 
God's Hand said:
DirectX > OGL. There's no way around it. Any smart programmer knows this.

Oh dear...

The smartest programmer is the one who uses whichever API best fits their style and the product's needs.
 
Ok great, mobile phones will now use OGL1.5 :p Maybe PSP too, the featureset is a close enough match of the specs now...

But I thought we wanted to talk about PS3 though...

aaaaa0 said:
But really, it sucks that the API isn't doing the job it's supposed to be doing, which is to abstract the hardware differences in an efficient manner, so that the developer doesn't have to worry about most of them.

That's a fine ideal - but fact is that no API comes even remotely close to doing that if you want to work across a range of hw as wide as D3 supports.
Even in DX, he would still have to write a dozen of paths to support all that needs to be supported, as well as make it run in any semblance of optimized fashion.

Anyway, as you said, as graphic subsystems get closer to general programming models, API job gets easier too (the hard work is shifted to vendor supplied compilers :P).

Back on topic though, frankly I am not convinced that any API targeted at static graphics hardware is a great match for PS3 - at least PS3 hw as I expect it to be.
Although something of OGL2.0 caliber should be good enough for early stuff I suppose.
 
Fafalada said:
Ok great, mobile phones will now use OGL1.5 :p Maybe PSP too, the featureset is a close enough match of the specs now...

But I thought we wanted to talk about PS3 though...


That's a fine ideal - but fact is that no API comes even remotely close to doing that if you want to work across a range of hw as wide as D3 supports.
Even in DX, he would still have to write a dozen of paths to support all that needs to be supported, as well as make it run in any semblance of optimized fashion.

Anyway, as you said, as graphic subsystems get closer to general programming models, API job gets easier too (the hard work is shifted to vendor supplied compilers :P).

Back on topic though, frankly I am not convinced that any API targeted at static graphics hardware is a great match for PS3 - at least PS3 hw as I expect it to be.
Although something of OGL2.0 caliber should be good enough for early stuff I suppose.

Well, you need something to drive those APUs and fill in the overlays ;).

I think we will have some extensions still and that lower access will eventually be granted to developers, but this would help get the interest of more developers to the console and would help early on to understand the basics of the console. Things like OpenMAX could be the common set of interfaces for all sorts of Middleware that Sony/SCE is looking for.

One of the companies pushing for OpenGL 2.0, Creative Labs, is also pushing things like the P20, and the P30 (if such a thing ever appears) will probably be closer to a parallel architecture like CELL than other GPUs are.

A lot of emphasis will go into the abstraction layer OpenGL ES will use and how it will match the PlayStation 3 hardware and the resources given by the CELL OS.

[quote]OpenGL ES also includes a specification of a common platform interface layer, called EGL. This layer is platform independent and may optionally be included as part of a vendor’s OpenGL ES distribution. The platform binding also has an associated conformance test. Alternatively, a vendor may choose to define their own platform-specific embedding layer.[/quote]

Another interesting feature of OpenGL ES is this:

Seamless transition from software to hardware rendering
Although the OpenGL ES specification defines a particular graphics processing pipeline, individual calls can be executed on dedicated hardware, run as software routines on the system CPU, or implemented as a combination of both dedicated hardware and software routines. This means that software developers can ship a conformant software 3D engine today, that lets applications and tools seamlessly transition over to using OpenGL ES hardware-acceleration in next generation devices.

It seems perfect for early PlayStation 3 SDKs (for those who get them with this kind of high-level support).
 
Fafalada: a simple question about PSP dev: can you code the PSP to the metal, or is Sony using a layer (custom API, ...) to hide low-level calls (as earlier announced)?
 
Society said:
Therein lies the prob. If I could play all the latest games on Linux, I would never use windows again.

And it's not the biggest number, but it also allows really easy ports to Mac. Although I agree the benefits really are no longer there. DirectX used to suck royally and it made a lot of sense to go with the superior set then, but now it needlessly complicates the process, with one card running DirectX better and another running OpenGL.
 
Quote:
Seamless transition from software to hardware rendering
Although the OpenGL ES specification defines a particular graphics processing pipeline, individual calls can be executed on dedicated hardware, run as software routines on the system CPU, or implemented as a combination of both dedicated hardware and software routines. This means that software developers can ship a conformant software 3D engine today, that lets applications and tools seamlessly transition over to using OpenGL ES hardware-acceleration in next generation devices.

I'm sorry, for clarification's sake, is DX (or related tools) capable of this?
 
Society said:
Therein lies the prob. If I could play all the latest games on Linux, I would never use windows again.

One SDK To rule them all, One SDK to find them, One SDK to bring them all and in their stupidity bind them (to Windows), in the land of Redmond where the monopolistic business practices lie.
 
Li Mu Bai said:
I'm sorry, for clarification's sake, is DX (or related tools) capable of this?

Yes, DX is capable of this. For example, Intel ships (crappy) hardware that emulates vertex shaders on the host CPU, but executes pixel shaders on the hardware itself.

One extreme example is the reference rasterizer, which is a DX implementation built entirely in software running on the CPU, designed for testing and development purposes. (It's too slow to be used for anything else.)
 
aaaaa0 said:
Yes, DX is capable of this. For example, Intel ships (crappy) hardware that emulates vertex shaders on the host CPU, but executes pixel shaders on the hardware itself.

One extreme example is the reference rasterizer, which is a DX implementation built entirely in software running on the CPU, designed for testing and development purposes. (It's too slow to be used for anything else.)

aaaa0, this is not the same thing. Anything can be performed on the CPU, pixel & vertex ops alike, given the central processor's strength & capability. This is not an applicable DX comparison to the aforementioned OpenGL ES capability.
 
Li Mu Bai said:
aaaa0, this is not the same thing. Anything can be performed on the CPU, pixel & vertex ops alike, given the central processor's strength & capability. This is not an applicable DX comparison to the aforementioned OpenGL ES capability.

Although the OpenGL ES specification defines a particular graphics processing pipeline, individual calls can be executed on dedicated hardware, run as software routines on the system CPU, or implemented as a combination of both dedicated hardware and software routines.

To expose a 3D rendering device to D3D, you implement a driver, according to DirectX's Device Driver Interface. When an app calls a DirectX API, DX translates that call into the appropriate Driver Interface call(s), and then invokes your driver to do the work the app wants done.

Whether you do this work in hardware or software directly in your driver, DirectX doesn't care. You can choose to implement your 3D pipeline in software (refrast), partially in hardware (Intel), totally in hardware, or something in between.

This means that software developers can ship a conformant software 3D engine today, that lets applications and tools seamlessly transition over to using OpenGL ES hardware-acceleration in next generation devices.

The users of your device via DirectX will not know or care if your implementation is in hardware, or software or something in-between (besides the obvious performance implications) if you do your job correctly.

I can take my app written for DX, run it against the reference rasterizer. Then I can take exactly the same app, and run it against a hardware implementation. Seamless transition from software rendering to hardware rendering.

There is nothing technically preventing someone from implementing a DX driver that can sometimes execute a rendering call in software, and sometimes execute that same rendering call in hardware, though I'm not sure if anyone actually does that right now or why they would want to do that.
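A minimal sketch of the "same app, software or hardware" point in D3D9, using the C COM bindings (error handling omitted; the `IDirect3D9` object and `hwnd` are assumed to already exist):

```c
#include <windows.h>
#include <d3d9.h>

/* The only difference between running on the reference rasterizer and on
   real hardware is the device type passed at creation time; all rendering
   calls made through the returned device are identical. */
IDirect3DDevice9 *create_device(IDirect3D9 *d3d, HWND hwnd, BOOL use_refrast)
{
    D3DPRESENT_PARAMETERS pp = {0};
    IDirect3DDevice9 *dev = NULL;

    pp.Windowed   = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

    IDirect3D9_CreateDevice(d3d, D3DADAPTER_DEFAULT,
        use_refrast ? D3DDEVTYPE_REF : D3DDEVTYPE_HAL,   /* sw vs. hw */
        hwnd, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &dev);

    return dev;
}
```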

So how exactly is this not the same thing?
 
I wouldn't bother, man. LMB seems to be just looking for more anti-MS stuff and/or pro-non-MS stuff. Not a big deal when you consider that the new consoles will be using highly customized forms of the APIs anyway. At least, that's my take.
 
MightyHedgehog said:
I wouldn't bother, man. LMB seems to be just looking for more anti-MS stuff and/or pro-non-MS stuff. Not a big deal when you consider that the new consoles will be using highly customized forms of the APIs anyway. At least, that's my take.

Wrong again MH. I respect aaaaa0's technical intellect, & am engaging in an honest dialogue with him. I am not anti-MS simply because I'm pro-Nintendo. My bias is not blinding as yours is, and this is a Sony-related topic, btw. I was referring specifically to this aspect of ES:

This means that software developers can ship a conformant software 3D engine today, that lets applications and tools seamlessly transition over to using OpenGL ES hardware-acceleration in next generation devices.

aaaaa0 is obviously pro-DX, while I am pro-OpenGL. That is not to say I'm not open to learning its intricacies from one who obviously has experience, as I only have a few friends who program, which keeps me somewhat in the loop, so to speak. I would appreciate you not presuming to know my intent unless I explicitly state it to you.
 
Disregarding MH, aaaa0, you'll be surprised at how beneficial a committee can be vs. a market-driven rush to supply a graphical API product to vendors. (The DX spec is yearly, although I know MS works very closely with vendors; in the end, however, it's developed by a sole company, & they set what they believe will be the functional capability fixed standard.) At the end of the day, developers are engineers who admire well-designed solutions, and a peer-reviewed (committee-based) specification with input from practically all major vendors and players will definitely be more powerful than something rushed to market in order to satisfy current quarterly financial requirements. Don't dismiss OpenGL that fast (when compared to Direct3D).

The 1.5 spec contains the high-level vertex and pixel shaders that will be included in GL2, but as ARB extensions - i.e. they are not part of the core spec yet. This will allow any implementation issues that arise to be fixed before nailing down the GL2 spec. Carmack believes that 2.0 should become the standard; I tend to agree, as it matures.
 