We don't need intrinsic scale factors for Gfx::Bitmap in Ladybird,
as everything flows through the CSS / device pixel ratio mechanism.
This patch also removes various unused functions instead of adapting
them to the change.
The color indexing transform shouldn't make single-channel images
larger (by needlessly writing a palette). If there are <= 16 colors
in the single channel, it should make the image smaller.
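As a sketch of the size trade-off, assuming the pixel bundling rules
from the WebP lossless spec (the helper name is illustrative, not the
WebPWriter code):

    // With <= 16 palette entries, several pixels get packed into one
    // green sample, so the palette pays for itself; with more entries
    // each pixel still costs one code and the palette only adds bytes.
    static int pixels_per_sample_for_palette_size(size_t palette_size)
    {
        if (palette_size <= 2)
            return 8;
        if (palette_size <= 4)
            return 4;
        if (palette_size <= 16)
            return 2;
        return 1; // no bundling; for a single-channel image, skip the transform
    }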
...and use a different color name until a (relatively harmless) bug in
writing fully-opaque frames to an animation that also has transparent
frames is fixed. (I've had a local fix for that for a while, but
I'm waiting for #24397 to land.)
To determine the palette of colors, we use the median cut algorithm.
While the implementation is correct, there is obviously room for
improvement in both the median cut algorithm and the encoding side.
This is useful for finding the best matching color palette for an
existing bitmap. It can be used in PixelPaint, but also in encoders of
old image formats that only support indexed colors, e.g. GIF.
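As context, a minimal median cut sketch (self-contained and std-based
for readability; not the LibGfx implementation): repeatedly split the
bucket whose widest channel has the largest range at its median, then
average each bucket into a palette entry.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Rgb { int r, g, b; };

    static std::vector<Rgb> median_cut(std::vector<Rgb> pixels, size_t palette_size)
    {
        std::vector<std::vector<Rgb>> buckets;
        buckets.push_back(std::move(pixels));

        while (buckets.size() < palette_size) {
            // Pick the (bucket, channel) pair with the largest value range.
            size_t widest_bucket = 0;
            int widest_channel = 0;
            int widest_range = 0;
            for (size_t i = 0; i < buckets.size(); ++i) {
                int lo[3] = { 255, 255, 255 };
                int hi[3] = { 0, 0, 0 };
                for (auto const& p : buckets[i]) {
                    int c[3] = { p.r, p.g, p.b };
                    for (int k = 0; k < 3; ++k) {
                        lo[k] = std::min(lo[k], c[k]);
                        hi[k] = std::max(hi[k], c[k]);
                    }
                }
                for (int k = 0; k < 3; ++k) {
                    if (hi[k] - lo[k] > widest_range) {
                        widest_range = hi[k] - lo[k];
                        widest_bucket = i;
                        widest_channel = k;
                    }
                }
            }
            if (widest_range == 0)
                break; // every bucket is a single color already

            // Sort the chosen bucket along that channel, split at the median.
            auto& bucket = buckets[widest_bucket];
            auto channel = [widest_channel](Rgb const& p) {
                return widest_channel == 0 ? p.r : widest_channel == 1 ? p.g : p.b;
            };
            std::sort(bucket.begin(), bucket.end(),
                [&](Rgb const& a, Rgb const& b) { return channel(a) < channel(b); });
            std::vector<Rgb> upper_half(bucket.begin() + bucket.size() / 2, bucket.end());
            bucket.resize(bucket.size() / 2);
            buckets.push_back(std::move(upper_half));
        }

        // The average color of each bucket becomes a palette entry.
        std::vector<Rgb> palette;
        for (auto const& bucket : buckets) {
            long r = 0, g = 0, b = 0;
            for (auto const& p : bucket) {
                r += p.r;
                g += p.g;
                b += p.b;
            }
            long n = bucket.empty() ? 1 : static_cast<long>(bucket.size());
            palette.push_back({ static_cast<int>(r / n), static_cast<int>(g / n),
                static_cast<int>(b / n) });
        }
        return palette;
    }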
For example, for 7z7c.gif, we now store one 500x500 frame and then
a 94x78 frame at (196, 208) and a 91x78 frame at (198, 208).
This reduces how much data we have to store.
We currently store all pixels in the rect with changed pixels.
We could in the future store pixels that are equal in that rect
as transparent pixels. When inputs are gif files, this would
guarantee that new frames have at most 256 distinct colors
(since GIFs require that), which would help a future color indexing
transform. For now, we don't do that though.
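As an illustration, a sketch (LibGfx-style types; not the actual
encoder code) of finding the smallest rect that contains every pixel
that changed between two same-sized frames:

    static Gfx::IntRect changed_rect(Gfx::Bitmap const& previous, Gfx::Bitmap const& current)
    {
        int min_x = current.width();
        int min_y = current.height();
        int max_x = -1;
        int max_y = -1;
        for (int y = 0; y < current.height(); ++y) {
            for (int x = 0; x < current.width(); ++x) {
                if (previous.get_pixel(x, y) == current.get_pixel(x, y))
                    continue;
                if (x < min_x)
                    min_x = x;
                if (y < min_y)
                    min_y = y;
                if (x > max_x)
                    max_x = x;
                if (y > max_y)
                    max_y = y;
            }
        }
        if (max_x < 0)
            return {}; // no pixel changed
        return { min_x, min_y, max_x - min_x + 1, max_y - min_y + 1 };
    }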
The API I'm adding here is a bit ugly:
* WebPs can only store x/y offsets that are a multiple of 2. This
currently leaks into the AnimationWriter base class.
(Since we potentially have to make a webp frame 1 pixel wider
and higher due to this, it's possible to have a frame that has
<= 256 colors in a gif input but > 256 colors in the webp,
if we do the technique above.)
* Every client writing animations has to have logic to track
previous frames, decide which of the two functions to call, etc.
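To make that second point concrete, a hypothetical client loop
(frames, duration_ms, and the two writer method names are illustrative,
not the actual AnimationWriter API), reusing the changed_rect() sketch
above:

    RefPtr<Gfx::Bitmap> last_frame;
    for (auto const& frame : frames) {
        if (!last_frame) {
            // First frame is always a full frame.
            TRY(writer->add_frame(*frame.bitmap, frame.duration_ms));
        } else {
            auto rect = changed_rect(*last_frame, *frame.bitmap);
            // WebP offsets must be even, so the rect may have to grow by
            // one pixel on the left/top to land on an even position.
            TRY(writer->add_frame_relative_to_last_frame(*frame.bitmap, rect, frame.duration_ms));
        }
        last_frame = frame.bitmap;
    }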
This also adds an opt-out flag to `animation`, because:
1. Some clients apparently assume the size of the last VP8L
chunk is the size of the image
(see https://github.com/discord/lilliput/issues/159).
2. Having incremental frames is good for filesize and for
playing the animation start-to-end, but it makes it hard
to extract arbitrary frames (have to extract all frames
from start to target frame) -- but this is meant to be a
delivery codec, not an editing codec. It's also more vulnerable to
corrupted bytes in the middle of the file -- but transport
protocols are good these days.
(It'd also be an idea to write a full frame every N frames.)
For https://giphy.com/gifs/XT9HMdwmpHqqOu1f1a (a 184K gif),
output webp size goes from 21M to 11M.
For 7z7c.gif (an 11K gif), output webp size goes from 2.1M to 775K.
(The webp image data still isn't compressed at all.)
Truncating the value is mathematically incorrect; this error made the
conversion to grayscale unstable. In other words, calling `to_grayscale`
on a gray value would return a different value. As an example,
`Color::from_string("#686868ff"sv).to_grayscale()` used to return
#676767ff.
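Sketch of the fix (the helper name is illustrative; the real change is
in Color): round to the nearest integer instead of truncating when
converting the computed luminosity back to a channel value.

    #include <cmath>
    #include <cstdint>

    // With truncation, a luminosity that lands just below 104.0 for
    // #686868 became 103 (0x67); with rounding, gray inputs map back to
    // themselves, so to_grayscale() is a fixed point on gray values.
    static uint8_t luminosity_to_channel(float luminosity)
    {
        return static_cast<uint8_t>(std::lround(luminosity)); // was: static_cast<uint8_t>(luminosity)
    }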
Two bugs:
1. Correctly set bits in VP8X header.
Turns out these were set in the wrong order.
2. Correctly set the `has_alpha` flag.
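For reference, the flag bits in the first byte of the VP8X chunk
payload, per the WebP container spec (the enum name is illustrative):

    // Most significant bit first: 2 reserved bits, ICC, Alpha, EXIF,
    // XMP, Animation, 1 reserved bit. Writing these in the wrong order
    // silently mislabels the file, which is what bug 1 was.
    enum VP8XFlags : uint8_t {
        HasICC = 0x20,
        HasAlpha = 0x10, // matches the 0x10 byte at offset 0x14 mentioned below
        HasEXIF = 0x08,
        HasXMP = 0x04,
        HasAnimation = 0x02,
    };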
Also add a test for writing webp files with ICC data. With the
additional checks in other commits in this PR, this test catches
the bug in WebPWriter.
Rearrange some existing functions to make it easier to write this test:
* Extract encode_bitmap() from get_roundtrip_bitmap().
encode_bitmap() allows passing extra_args that the test uses to pass
in ICC data.
* Extract expect_bitmaps_equal() from test_roundtrip()
If this turns out to be too strict in practice, we can replace it with
a `dbgln("VP8X and VP8L headers disagree about alpha; ignoring VP8X");`
instead.
Also update catdog-alert-13-alpha-used-false.webp to not trigger this.
I had manually changed the VP8L alpha flag at offset 0x2a in
da48238fbd to clear it, but I hadn't changed the VP8X flag.
This changes the byte at offset 0x14 from 0x10 (has_alpha) to 0x00
(no alpha) as well, to match.
Explicit template arguments must be wrapped in parens,
else they confuse the preprocessor.
Add the parens instead of avoiding the use of explicit template
arguments.
No behavior change.
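For illustration (CHECK and decode<> are made-up names, not the code
touched here): the preprocessor splits macro arguments on top-level
commas, so the comma inside the template argument list is seen as an
argument separator.

    #define CHECK(expr) do { if (!(expr)) return false; } while (0)

    CHECK(decode<u8, u16>(data));   // error: CHECK passed 2 arguments, but takes just 1
    CHECK((decode<u8, u16>(data))); // OK: the parens hide the comma from the preprocessor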
Bilevel images are not required to have a BitsPerSample or a
SamplesPerPixel tag; while this is unusual, such images are still valid.
The test case has been generated by first making a copy of
ccitt3_1d_fill.tiff and then using `tiffset` to remove both tags:

    tiffset -u 258 ccitt3_no_tags.tiff
    tiffset -u 277 ccitt3_no_tags.tiff
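For context, the TIFF 6.0 spec gives both tags a default value of 1,
which is exactly what a bilevel image needs. A sketch of the fallback
(the helper names are hypothetical, not the LibGfx TIFF decoder API):

    // Tag 258 = BitsPerSample, tag 277 = SamplesPerPixel; both default to 1.
    u16 bits_per_sample = ifd.has_tag(258) ? ifd.read_u16(258) : 1;
    u16 samples_per_pixel = ifd.has_tag(277) ? ifd.read_u16(277) : 1;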
...and add a test case that shows why it's incorrect.
If one dimension is 2^n + 1 and the other is just 1, then the topmost
node will have a 2^n x 1 child and a 1 x 1 child. The first child will
have n levels of children. The 1 x 1 child could end immediately, or it
could require that it also has n levels of (all 1 x 1) children. The
spec isn't clear on which of the two alternatives should happen. We
currently have n levels of 1 x 1 blocks.
This test case shows that a VERIFY we had was incorrect, so remove it.
The alternative implementation is to keep the VERIFY and to add

    if (x_count == 1 && y_count == 1)
        level = 0;

to the top of TagTreeNode::create(). Then we don't have multiple levels
of 1 x 1 nodes, and we need to read fewer bits.
The images in the spec suggest that all nodes should have the same
number of levels, so go with that interpretation for now. Once we can
actually decode images, we'll hopefully see which of the two
interpretations is correct.
(The removed VERIFY() is hit when decoding
Tests/LibGfx/test-inputs/jpeg2000/buggie-gray.jpf in a local branch that
has some image decoding implemented. That file contains a packet with
1x3 code-blocks, which hits this case.)
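For reference, a sketch of the level bookkeeping under the
interpretation we go with (not the actual TagTree code):

    // Number of levels below the root: how many halvings it takes to get
    // the larger grid dimension down to 1. A 1 x 1 tree has 0 levels.
    static u32 levels_for(u32 x_count, u32 y_count)
    {
        u32 extent = x_count > y_count ? x_count : y_count;
        u32 level = 0;
        while ((1u << level) < extent)
            ++level;
        return level; // == ceil(log2(max(x_count, y_count)))
    }

    // Interpretation used here: every child gets parent_level - 1, so the
    // 1 x 1 child of a node with n levels still carries n - 1 levels of
    // 1 x 1 descendants. The alternative clamps such nodes with
    //     if (x_count == 1 && y_count == 1)
    //         level = 0;
    // at the top of TagTreeNode::create(), reading fewer bits.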
This tests reading JPEG2000 codestreams that aren't embedded in
the ISOBMFF wrapper. It's also useful for debugging bitstream
internals, since the spec lists expected output for many internal
intermediate results.