Every texel access involved floating point instructions copying 4
floats. Let `Image::texel() const` return a `FloatVector4 const&` to
prevent these copies.
This results in a ~7% FPS increase in GLQuake on my machine.
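
A minimal sketch of the change, using standard C++ stand-ins for the
actual `Image` internals (all names here are illustrative):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Stand-in for FloatVector4: four packed floats per texel.
struct FloatVector4 {
    std::array<float, 4> components {};
};

struct Image {
    std::vector<FloatVector4> m_texels;
    size_t m_width { 0 };

    // Before: returning by value copied all 4 floats on every access.
    //   FloatVector4 texel(size_t x, size_t y) const;

    // After: returning a const reference avoids the copy entirely.
    FloatVector4 const& texel(size_t x, size_t y) const
    {
        return m_texels[y * m_width + x];
    }
};
```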
Looking at how Khronos defines layers:
https://www.khronos.org/opengl/wiki/Array_Texture
We have both 3D textures and layers of 2D textures, and both can be
encoded in our existing `Typed3DBuffer` as depth. Since the GPU API
already supports depth, remove the layer concept everywhere.
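
To illustrate why no separate layer concept is needed, here is the
addressing in a hypothetical, simplified form: a layer of a 2D array
texture and a slice of a 3D texture index storage identically.

```cpp
#include <cstddef>

// Whether z is a 3D texture's depth slice or a 2D array texture's
// layer, the texel lives at the same linear offset in a
// Typed3DBuffer-like store.
size_t texel_index(size_t x, size_t y, size_t z, size_t width, size_t height)
{
    return (z * height + y) * width + x;
}
```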
Also pass in `Texture2D::LOG2_MAX_TEXTURE_SIZE` as the maximum number
of mipmap levels, so we do not allocate 999 levels on each `Image`
instantiation.
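
The bound itself is simple; a sketch with an illustrative constant
value (the actual value of `Texture2D::LOG2_MAX_TEXTURE_SIZE` is not
shown here):

```cpp
#include <cstddef>

// If the largest supported texture is 2^LOG2_MAX_TEXTURE_SIZE texels
// wide, that exponent also bounds the number of useful mipmap levels,
// so there is no reason to reserve hundreds of levels per image.
constexpr size_t LOG2_MAX_TEXTURE_SIZE = 12; // illustrative value

size_t max_mipmap_levels()
{
    return LOG2_MAX_TEXTURE_SIZE;
}
```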
In OpenGL, this is called the (base) internal format: the client's
expectation of the minimum texel storage format the GPU should support
for textures.
Since we store everything as RGBA in a `FloatVector4`, the only thing
we do in this patch is remember the expected internal format, and when
we write new texels, we fix the alpha channel to 1 for the two formats
that require it.
`PixelConverter` has learned how to transform pixels during transfer to
support this.
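
A sketch of what such a transfer-time transform might look like; the
function name is hypothetical and the actual `PixelConverter` hook may
differ:

```cpp
#include <array>

using FloatVector4 = std::array<float, 4>; // RGBA stand-in

// For internal formats without a usable alpha channel, force alpha to
// 1 whenever a texel is written, regardless of the incoming value.
FloatVector4 fix_alpha_to_one(FloatVector4 texel)
{
    texel[3] = 1.f;
    return texel;
}
```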
A GPU (driver) is now responsible for reading and writing pixels from
and to user data. The client (LibGL) is responsible for specifying how
that user data must be interpreted when read and laid out when written.
This allows us to centralize all pixel format conversion in one class,
`LibSoftGPU::PixelConverter`. For both the input and output image, it
takes a specification containing the image dimensions, the pixel type
and the selection (basically a clipping rect), and converts the pixels
from the input image to the output image.
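
The rough shape of such a specification and conversion call, with
illustrative names and a copy-only body standing in for the real format
conversion:

```cpp
#include <cstddef>

// Per-image description as sketched from the prose above; the actual
// LibSoftGPU struct layout may differ.
struct ImageSpecification {
    size_t width { 0 };
    size_t height { 0 };
    size_t bytes_per_pixel { 4 }; // assumes a simple packed pixel type
    size_t selection_x { 0 };     // clipping rect origin
    size_t selection_y { 0 };
    size_t selection_width { 0 };
    size_t selection_height { 0 };
};

class PixelConverter {
public:
    PixelConverter(ImageSpecification input, ImageSpecification output)
        : m_input(input)
        , m_output(output)
    {
    }

    // Walks the input selection and writes each pixel into the output
    // selection. The real converter translates pixel types here; this
    // sketch assumes both sides share the same 4-byte format.
    void convert(unsigned char const* input, unsigned char* output) const
    {
        for (size_t y = 0; y < m_input.selection_height; ++y) {
            for (size_t x = 0; x < m_input.selection_width; ++x) {
                auto const* src = input
                    + ((m_input.selection_y + y) * m_input.width
                        + m_input.selection_x + x) * m_input.bytes_per_pixel;
                auto* dst = output
                    + ((m_output.selection_y + y) * m_output.width
                        + m_output.selection_x + x) * m_output.bytes_per_pixel;
                for (size_t i = 0; i < m_input.bytes_per_pixel; ++i)
                    dst[i] = src[i];
            }
        }
    }

private:
    ImageSpecification m_input;
    ImageSpecification m_output;
};
```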
Effectively this means we now support almost all OpenGL 1.5 formats,
and all custom logic has disappeared from:
- `glDrawPixels`
- `glReadPixels`
- `glTexImage2D`
- `glTexSubImage2D`
The new logic is still unoptimized, but on my machine I experienced no
noticeable slowdown. :^)
This introduces a new device-independent base class for images in
LibGPU that also keeps track of the device that created it, in order
to prevent assigning images across devices.
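
A minimal sketch of the ownership check, assuming an opaque token
identifies the creating device (names are illustrative):

```cpp
// Device-independent image base class: remembers an opaque token for
// the device that created it, so cross-device assignment can be
// rejected.
class Image {
public:
    explicit Image(void const* ownership_token)
        : m_ownership_token(ownership_token)
    {
    }
    virtual ~Image() = default;

    bool has_same_ownership_token(Image const& other) const
    {
        return m_ownership_token == other.m_ownership_token;
    }

private:
    void const* m_ownership_token { nullptr };
};
```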
Between the OpenGL client and server, a lot of data type and color
conversion needs to happen. We are performing these conversions both in
`LibSoftGPU` and `LibGL`, which is not ideal. Additionally, concepts
like the color, depth and stencil buffers should share their logic,
but currently have separate implementations.
This is the first step towards generalizing our `LibSoftGPU` frame
buffer: a generalized `Typed3DBuffer` is introduced for arbitrary 3D
value storage and retrieval, and `Typed2DBuffer` wraps around it to
provide an easy-to-use 2D pixel buffer. The color, depth and stencil
buffers are replaced by `Typed2DBuffer` and are now managed by the new
`FrameBuffer` class.
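
A sketch of how these pieces might nest, using standard C++ containers
in place of the actual implementations:

```cpp
#include <cstddef>
#include <vector>

// Arbitrary 3D value storage and retrieval.
template<typename T>
class Typed3DBuffer {
public:
    Typed3DBuffer(size_t width, size_t height, size_t depth)
        : m_width(width)
        , m_height(height)
        , m_data(width * height * depth)
    {
    }

    T& at(size_t x, size_t y, size_t z)
    {
        return m_data[(z * m_height + y) * m_width + x];
    }

private:
    size_t m_width { 0 };
    size_t m_height { 0 };
    std::vector<T> m_data;
};

// An easy-to-use 2D pixel buffer: a 3D buffer with depth 1.
template<typename T>
class Typed2DBuffer {
public:
    Typed2DBuffer(size_t width, size_t height)
        : m_buffer(width, height, 1)
    {
    }

    T& pixel(size_t x, size_t y) { return m_buffer.at(x, y, 0); }

private:
    Typed3DBuffer<T> m_buffer;
};

// The frame buffer owns one typed 2D buffer per attachment.
struct FrameBuffer {
    Typed2DBuffer<unsigned> color; // BGRA8888 texels
    Typed2DBuffer<float> depth;
    Typed2DBuffer<unsigned char> stencil;
};
```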
The `Image` class now uses multiple `Typed3DBuffer`s for layers and
mipmap levels. Additionally, the textures are now always stored as
BGRA8888, only converting between formats when reading or writing
pixels.
Ideally this refactor should have no functional changes, but some
graphical glitches in Grim Fandango seem to be fixed and most OpenGL
ports get an FPS boost on my machine. :^)
This adds two methods, `write_texels` and `read_texels`, to the `Image`
class. Conversion between image formats happens automatically. The
layout of the client image data is passed in via the `ImageDataLayout`
struct.
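
Interface sketch only; field and parameter names are guesses based on
the description above, not the actual declarations:

```cpp
#include <cstddef>

// Describes how client image data is laid out in memory.
struct ImageDataLayout {
    size_t row_pitch { 0 };   // bytes between successive rows
    size_t depth_pitch { 0 }; // bytes between successive 2D slices
    int pixel_format { 0 };   // how a single client pixel is encoded
};

class Image {
public:
    // Copies client data into the image, converting from the client's
    // format to the internal storage format automatically.
    void write_texels(unsigned layer, unsigned level, void const* data,
        ImageDataLayout const& layout);

    // The inverse: converts internal texels back to the client's format.
    void read_texels(unsigned layer, unsigned level, void* data,
        ImageDataLayout const& layout) const;
};
```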
This serves as the storage for all image types: 1D, 2D, 3D, cube maps
and image arrays.
Upon construction, a full mipmap chain is generated, and the image's
layout is immutable afterwards.
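
A sketch of the construction-time mipmap chain, with plain vectors
standing in for the per-level buffers (all names illustrative, and
dimensions are assumed to be at least 1):

```cpp
#include <cstddef>
#include <vector>

class Image {
public:
    // Allocates every mipmap level down to 1x1x1 up front; the level
    // count and per-level dimensions never change afterwards.
    Image(size_t width, size_t height, size_t depth)
    {
        for (;;) {
            m_levels.emplace_back(width * height * depth); // one buffer per level
            if (width == 1 && height == 1 && depth == 1)
                break;
            width = width > 1 ? width / 2 : 1;
            height = height > 1 ? height / 2 : 1;
            depth = depth > 1 ? depth / 2 : 1;
        }
    }

    size_t level_count() const { return m_levels.size(); }

private:
    std::vector<std::vector<unsigned>> m_levels; // texel storage per level
};
```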