We need to do this to stop the animation timer and delete the current
animation, otherwise the new image will be shown only for a moment
before the previous animation continues.
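A minimal sketch of the fix (widget and member names are hypothetical; the real code may differ):

    // Hypothetical shape of the fix: clear all animation state before
    // showing the new image, so a pending timer tick can't repaint over it.
    void ViewWidget::set_image(NonnullRefPtr<Gfx::Bitmap> bitmap)
    {
        m_timer->stop();     // stop the animation timer
        m_animation.clear(); // delete the current animation, if any
        m_bitmap = move(bitmap);
        update();
    }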
The numbers in the previous commit show that going from n = 2 to
n = 3 comes with a big runtime cost (3-4 times as long) for a
modest size win at best (0.5% to 2.5%). The jumps from n = 0
to n = 1 and from n = 1 to n = 2 look much more reasonable.
If image size is the main concern, WebP is a better option these days.
If PNG size is a big concern, recompressing with something like
zopflipng is still necessary anyway.
All in all, I think Default is the better default compression level now.
This effectively reverts #14738.
Affects PNGs written by all apps in the system (PixelPaint, Mandelbrot,
LibWeb's HTMLCanvasElement PNG serialization, LibWeb's screenshot
feature, `shot`, SpiceAgent, Magnify, `pdf` output, `image` without
--png-compression-level flag).
Using the same two benchmarks as in the previous commit:
1.
n | time               | size
--+--------------------+--------
0 |  56.5 ms ±  0.9 ms | 2.3M
1 |  88.2 ms ± 14.0 ms | 962K
2 | 214.8 ms ±  5.6 ms | 908K
3 | 670.8 ms ±  3.6 ms | 903K
Compared to the numbers in the previous commit:
n = 0: 17.3% faster, 23.3% smaller
n = 1: 12.9% faster, 12.5% smaller
n = 2: 24.9% faster, 9.2% smaller
n = 3: 49.6% faster, 9.6% smaller
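(These follow directly from the two tables, e.g. for n = 0:
(68.3 - 56.5) / 68.3 ≈ 17.3% faster, and 1 - 2.3M / 3.0M ≈ 23.3%
smaller.)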
For comparison,
`sips -s format png -o sunset_retro_sips.png sunset_retro.bmp` writes
a 1.1M file (i.e. it always writes RGBA, even when RGB would suffice),
and it needs 49.9 ms ± 3.0 ms for that (also using a .bmp input). So
our output file size is competitive! We have to get a bit faster though.
For another comparison, `image -o sunset_retro.webp sunset_retro.bmp`
writes a 730K file and needs 32.1 ms ± 0.7 ms for that.
2.
n | time           | size
--+----------------+------
0 | 11.334 total   | 390M
1 | 13.640 total   |  83M
2 | 15.642 total   |  73M
3 | 48.643 total   |  71M
Compared to the numbers in the previous commit:
n = 0: 15.8% faster, 25.0% smaller
n = 1: 15.5% faster, 7.7% smaller
n = 2: 24.0% faster, 5.2% smaller
n = 3: 29.2% faster, 5.3% smaller
So the speed win is relatively bigger at the higher levels, and
the size win is bigger at the lower levels.
Also, the size at n = 2 with this change is now lower than it
was at n = 3 previously.
No change to the default behavior.
This allows collecting some statistics.
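A sketch of the flag wiring with Core::ArgsParser (variable name, help
text, and the initial value are assumptions; the starting value just
has to mirror whatever the default already is):

    // Hypothetical wiring; the initial value preserves the existing default.
    int png_compression_level = 3; // illustrative starting value
    args_parser.add_option(png_compression_level,
        "PNG compression level (0-3)", "png-compression-level", 0, "level");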
1.
hyperfine --warmup 1 \
  "Build/lagom/bin/image -o sunset_retro.png sunset_retro.bmp \
   --png-compression-level $n"
n | time                    | size
--+-------------------------+--------
0 |  68.3 ms ±   3.8 ms     | 3.0M
1 | 101.3 ms ±   2.1 ms     | 1.1M
2 | 286.0 ms ±   2.5 ms     | 1.0M
3 | 1.331  s ± 0.005  s     | 999K
2.
Using the benchmarking script from #24819, just changed to write
.png files with different --png-compression-level values:
n | time           | size
--+----------------+------
0 |  13.467 total  | 520M
1 |  16.151 total  |  90M
2 |  20.592 total  |  77M
3 | 1:08.69 total  |  75M
(cherry picked from commit 75216182c9a04741b2f773eb2f26ceb7a96bfbba)
(cherry picked from commit 2a55ab13ef9c735a16674006a518c0e5acf7c88f)
Co-authored-by: Sam Atkins <atkinssj@serenityos.org>
`Module::functions` created clones of all of the functions in the
module. It provided a _slightly_ better API, but ended up costing around
40ms when instantiating spidermonkey.
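A sketch of the kind of change involved (exact signatures are
assumptions):

    // Before: every call materialized a fresh vector of cloned functions.
    Vector<Function> functions() const { return m_functions; }

    // After: callers borrow the module's own storage; no copies, at the
    // cost of a slightly less convenient API.
    Vector<Function> const& functions() const { return m_functions; }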
(cherry picked from commit dc52998341bb86ad8fb790fb72f943e43b16e8e5)
Unroll the first byte as a fast path, and remove a branch. This speeds
up the instantiation of spidermonkey by 10ms.
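A self-contained sketch of the technique (not the actual AK code): most
LEB128-encoded integers in real modules fit in one byte, so checking the
continuation bit once up front skips the loop and all of its shift
bookkeeping in the common case.

    #include <cstdint>

    // Bounds checking and overlong-input handling omitted for brevity.
    static uint32_t read_leb128_u32(uint8_t const*& ptr)
    {
        uint8_t byte = *ptr++;
        if ((byte & 0x80) == 0)
            return byte; // fast path: single-byte value, no loop

        uint32_t result = byte & 0x7f;
        unsigned shift = 7;
        do {
            byte = *ptr++;
            result |= static_cast<uint32_t>(byte & 0x7f) << shift;
            shift += 7;
        } while (byte & 0x80);
        return result;
    }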
(cherry picked from commit a6ebd100ecd9ed633e290153f61466362e63b73a)
Instead of multiple loops and multiple vectors, parse Wasm expressions
in a simple loop. This gets us from ~450ms to instantiate spidermonkey
to ~280ms.
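Roughly the shape of the new parser (helper names are hypothetical):

    // One pass, one output vector: append each instruction as it is
    // decoded, tracking block nesting inline instead of re-scanning.
    Vector<Instruction> instructions;
    size_t nesting = 1; // the function body is an implicit block
    while (nesting > 0) {
        auto opcode = read_opcode(stream); // hypothetical helper
        if (is_block_start(opcode))        // block / loop / if
            ++nesting;
        else if (opcode == Instructions::structured_end)
            --nesting;
        instructions.append(decode_instruction(stream, opcode));
    }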
(cherry picked from commit 2cfc1873c0436f598f897dd84172b753e2c2b03c)
`swizzle` had the wrong operands, and the vector masking boolean logic
was incorrect in the internal `shuffle_or_0` implementation. `shuffle`
was previously implemented as a dynamic swizzle, even though the spec
gives it an immediate operand for lane indices.
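For reference, the spec semantics the fix restores, as a self-contained
sketch (plain arrays instead of LibWasm's actual vector types):

    #include <array>
    #include <cstddef>
    #include <cstdint>

    using Lanes = std::array<uint8_t, 16>;

    // i8x16.swizzle: lane indices are a *runtime* operand (a second
    // vector); any out-of-range index selects 0.
    Lanes swizzle(Lanes const& src, Lanes const& indices)
    {
        Lanes result {};
        for (size_t i = 0; i < 16; ++i)
            result[i] = indices[i] < 16 ? src[indices[i]] : 0;
        return result;
    }

    // i8x16.shuffle: lane indices are an *immediate* baked into the
    // instruction, each in 0..31 (validated at parse time), selecting
    // from the 32 lanes of two source vectors.
    Lanes shuffle(Lanes const& a, Lanes const& b, Lanes const& imm)
    {
        Lanes result {};
        for (size_t i = 0; i < 16; ++i)
            result[i] = imm[i] < 16 ? a[imm[i]] : b[imm[i] - 16];
        return result;
    }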
(cherry picked from commit 9cc3e7d32d150dd30d683c1a8cf0bd59676f14ab)
Also make `store_to_memory` take a `MemoryArgument` so that we no longer
have to make "synthetic instructions" in some scenarios.
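A sketch of the signature change (the surrounding parameter list is an
assumption):

    // Before: callers had to wrap offset/alignment in a (possibly
    // synthetic) Instruction just to call the helper:
    //   void store_to_memory(Instruction const& instruction, Value value);
    // After: the MemoryArgument (offset + alignment) is passed directly.
    void store_to_memory(Instruction::MemoryArgument const& argument, Value value);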
(cherry picked from commit ea67bc989f58e27a28f473819e4265a0ad0af97f)
This makes exit() trap with a known error; an embedder (wasm.cpp) can
simply match this format and handle the exit request accordingly.
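A self-contained sketch of the embedder side (the exact reason-string
format and fallback policy are assumptions):

    #include <charconv>
    #include <string_view>

    // If the trap reason matches the known exit format, treat it as a
    // clean exit with the encoded status instead of reporting a crash.
    int interpret_trap(std::string_view reason)
    {
        constexpr std::string_view prefix = "exit:"; // hypothetical format
        if (reason.starts_with(prefix)) {
            int code = 0;
            std::from_chars(reason.data() + prefix.size(),
                            reason.data() + reason.size(), code);
            return code;
        }
        return 128; // a genuine trap; error out (policy is illustrative)
    }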
(cherry picked from commit 16dd8d4d3ba0017383df86739b1d1507593dd682)
If the current block has already been terminated, we should just skip
creating a per-iteration environment.
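The guard amounts to something like this in the bytecode generator
(surrounding context is assumed):

    // If the loop body ended in e.g. a `break`, `continue`, `return`, or
    // a `throw`, nothing past this point can execute, so there is no need
    // for a fresh per-iteration environment.
    if (generator.is_current_block_terminated())
        return;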
(cherry picked from commit 9a7e6158afedee8f169f10040a79db95a4e9aebc)
The default limit (at least on Linux) causes us to run out of file
descriptors at around 15 tabs. Increase this limit to 8k. This is a
rather arbitrary number, but matches the limit set by Chrome.
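A self-contained sketch using the POSIX API (the actual call site and
error handling are assumptions):

    #include <algorithm>
    #include <sys/resource.h>

    // Raise the soft RLIMIT_NOFILE limit to 8192, clamped to the hard
    // limit so setrlimit() can't fail for unprivileged processes.
    static void raise_open_file_limit()
    {
        rlimit limit {};
        if (getrlimit(RLIMIT_NOFILE, &limit) != 0)
            return;
        limit.rlim_cur = std::min<rlim_t>(8192, limit.rlim_max);
        setrlimit(RLIMIT_NOFILE, &limit);
    }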
(cherry picked from commit d58a8b514647a1137d76a1d601f0c325a51f29b3;
amended to also update Userland/Applications/Browser/main.cpp)
These have a few rules that we didn't follow in most cases:
- CSS-wide keywords (`inherit`, `initial`, etc.) are not allowed.
- `default` is not allowed.
- The above and any other disallowed identifiers must be tested
case-insensitively.
This introduces a `parse_custom_ident_value()` method, which takes a
list of disallowed identifier names, and handles the above rules.
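A sketch of the check at the core of it (the keyword list and the
signature are assumptions, modeled on the description above):

    // <custom-ident> validity: CSS-wide keywords and `default` are
    // rejected, as is anything in the property-specific disallow list;
    // all comparisons are ASCII case-insensitive.
    static bool is_valid_custom_ident(StringView ident, ReadonlySpan<StringView> disallowed)
    {
        for (auto keyword : { "default"sv, "inherit"sv, "initial"sv,
                              "revert"sv, "revert-layer"sv, "unset"sv }) {
            if (ident.equals_ignoring_ascii_case(keyword))
                return false;
        }
        for (auto name : disallowed) {
            if (ident.equals_ignoring_ascii_case(name))
                return false;
        }
        return true;
    }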
(cherry picked from commit 6ae2b8c3d901d8a7255046a4517fddd8b0fa84c4)
This stubs out enough to get https://athenacrisis.com/ far enough to
actually load :^)
(cherry picked from commit 52ccd69e49a26b5fd2747730e278503625eb71e7)