All global objects currently need to be event targets so that they
can have events dispatched to them. This allows us to remove
verify_cast for these global objects.
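A rough sketch of why this helps, with invented class names standing in
for the real LibWeb hierarchy: once every global object derives from a
common EventTarget, dispatch code can take the base class directly
instead of doing a checked downcast first.

```
#include <cstdio>

struct Event {
    char const* type;
};

// The common base: anything that can receive events.
struct EventTarget {
    virtual ~EventTarget() = default;
    void dispatch_event(Event const& event) { printf("dispatched '%s'\n", event.type); }
};

// Both kinds of global object inherit from EventTarget...
struct Window : EventTarget { };
struct WorkerGlobalScope : EventTarget { };

// ...so generic dispatch code needs no verify_cast to a concrete type.
void fire_event_at_global(EventTarget& global)
{
    global.dispatch_event(Event { "load" });
}

int main()
{
    Window window;
    WorkerGlobalScope scope;
    fire_event_at_global(window);
    fire_event_at_global(scope);
}
```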
The "copy" command is not in the Miscellaneous commands section. The
"defaultParagraphSeparator" command is, however. Let the accompanying
comment reflect that.
This makes these values the same as `start` and `end`. While this is not
entirely correct, it is better than centering, which is what we did
previously.
This fixes misaligned images on https://nos.nl
From this action's documentation, "0 means using default retention."
There's no reason for this to have a different retention duration than
the CI results themselves.
Without this, if two CI runs on Linux both fail and want to upload
screenshots, we get an error like this on the second:
> Error: Failed to CreateArtifact: Received non-retryable error: Failed
> request: (409) Conflict: an artifact with this name already exists on
> the workflow run
Using all the inputs as part of the name should make this kind of
conflict impossible.
This allows us to disable test output, which performs expensive assert
tracking. This was making our imported tests run significantly slower
than tests run via `WPT.sh`.
Formatting the output ourselves also allows us to remove unnecessary
information from the test output.
This commit also rebaselines all existing imported WPT tests to follow
the new format.
```
VERIFICATION FAILED: !_temporary_result.is_error()
```
is not really a helpful error message.
When we are including `AK/Format.h`, which is most of the time,
we can easily print a much more useful error message:
```
UNEXPECTED ERROR: Cannot allocate memory (errno=12)
```
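A self-contained sketch of the difference, with stand-ins for AK's real
types and macro (the statement-expression trick requires GCC or Clang;
the real implementation formats via `AK/Format.h`):

```
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Stand-in for an ErrorOr-style result; `error` is an errno code, 0 = success.
template<typename T>
struct ErrorOrSketch {
    T value {};
    int error { 0 };
    bool is_error() const { return error != 0; }
};

// On failure, print the error itself instead of just the failed expression.
#define MUST_SKETCH(expression)                                  \
    ({                                                           \
        auto _temporary_result = (expression);                   \
        if (_temporary_result.is_error()) {                      \
            fprintf(stderr, "UNEXPECTED ERROR: %s (errno=%d)\n", \
                strerror(_temporary_result.error),               \
                _temporary_result.error);                        \
            abort();                                             \
        }                                                        \
        _temporary_result.value;                                 \
    })

int main()
{
    printf("%d\n", MUST_SKETCH((ErrorOrSketch<int> { 42, 0 })));
    // Prints "UNEXPECTED ERROR: Cannot allocate memory (errno=12)" on Linux:
    MUST_SKETCH((ErrorOrSketch<int> { 0, ENOMEM }));
}
```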
The source code position cache was moved from a line-based approach
to a "chunk"-based approach to improve performance on large, minified
JavaScript files with few lines, but this has had an adverse effect
on _multi-line_ source files.
Reintroduce some of the old behaviour by caching lines again, with
some added sanity limits to avoid caching empty/overly small lines.
Source code positions in files with few lines will still be cached
less often, since minified JavaScript files can be assumed to be
unusually large, and since stack traces for minified JavaScript
are less useful as well.
On WPT tests with large JavaScript dependencies like
`css/css-masking/animations/clip-interpolation.html` this reduces the
amount of time spent in `SourceCode::range_from_offsets` by as much as
99.98%, for the small price of 80KB extra memory usage.
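Roughly, the caching scheme looks like this (a simplified sketch; the
threshold value is invented for illustration):

```
#include <algorithm>
#include <cstdio>
#include <string_view>
#include <vector>

struct SourcePositionCacheSketch {
    struct Anchor {
        size_t offset; // byte offset of the start of a cached line
        size_t line;   // its zero-based line number
    };
    std::vector<Anchor> anchors;
    std::string_view source;

    explicit SourcePositionCacheSketch(std::string_view source_)
        : source(source_)
    {
        // Sanity limit: don't cache empty/overly small lines.
        static constexpr size_t minimum_line_length = 16;
        size_t line = 0;
        size_t line_start = 0;
        for (size_t i = 0; i < source.size(); ++i) {
            if (source[i] != '\n')
                continue;
            if (i - line_start >= minimum_line_length)
                anchors.push_back({ line_start, line });
            ++line;
            line_start = i + 1;
        }
    }

    size_t line_for_offset(size_t offset) const
    {
        // Binary-search for the closest anchor at or before `offset`,
        // then scan forward from there instead of from the file start.
        Anchor start { 0, 0 };
        auto it = std::upper_bound(anchors.begin(), anchors.end(), offset,
            [](size_t value, Anchor const& anchor) { return value < anchor.offset; });
        if (it != anchors.begin())
            start = *(it - 1);
        size_t line = start.line;
        for (size_t i = start.offset; i < offset && i < source.size(); ++i) {
            if (source[i] == '\n')
                ++line;
        }
        return line;
    }
};

int main()
{
    SourcePositionCacheSketch cache("let x = 1;\nfunction some_longer_line() { return x; }\nx++;\n");
    printf("%zu\n", cache.line_for_offset(56)); // 2
}
```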
The rules for parsing integers don't specify an upper bound on the
value that can be returned, so the `parse_integer_digits` method can be
used to check whether the given arbitrarily-large StringView is valid
according to these rules. The `parse_integer` and
`parse_non_negative_integer` methods would fail for values larger than
2147483647 when they shouldn't have.
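A simplified sketch of the validation-only path (ignoring the spec's
leading-whitespace and trailing-character handling): since nothing is
ever converted to a machine integer, there is no bound to overflow.

```
#include <cstdio>
#include <string_view>

// Optional sign followed by one or more ASCII digits, per the HTML
// "rules for parsing integers"; the magnitude is never computed.
bool is_valid_integer_sketch(std::string_view view)
{
    if (!view.empty() && (view.front() == '+' || view.front() == '-'))
        view.remove_prefix(1);
    if (view.empty())
        return false;
    for (char ch : view) {
        if (ch < '0' || ch > '9')
            return false;
    }
    return true;
}

int main()
{
    printf("%d\n", is_valid_integer_sketch("2147483648"));            // 1, despite exceeding INT32_MAX
    printf("%d\n", is_valid_integer_sketch("-99999999999999999999")); // 1
    printf("%d\n", is_valid_integer_sketch("12px"));                  // 0
}
```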
restore() corresponding to ApplyFilters should be called after stacking
context content is painted, not before.
Fixes regression introduced in c94b4316e7
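A toy illustration of the required ordering (a mock canvas that just
prints its call sequence, not Skia):

```
#include <cstdio>

struct CanvasSketch {
    int depth = 0;
    void save_layer(char const* label) { printf("%*ssaveLayer(%s)\n", depth * 2, "", label); ++depth; }
    void restore() { --depth; printf("%*srestore()\n", depth * 2, ""); }
    void paint(char const* what) { printf("%*spaint(%s)\n", depth * 2, "", what); }
};

int main()
{
    CanvasSketch canvas;
    canvas.save_layer("filters");             // ApplyFilters opens a layer...
    canvas.paint("stacking context content"); // ...the content paints inside it...
    canvas.restore();                         // ...and only then is the layer popped
}
```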
Many dependencies aren't currently included in the devShell. As ladybird
is already packaged downstream, we can pull in those buildInputs along
with the extra dev dependencies already defined.
ApplyOpacity internally calls canvas.saveLayer(), which requires a
matching canvas.restore() to be called.
Fixes missing header on https://supabase.com/
Setting the `width` or `height` properties of `HTMLCanvasElement` to a
value greater than 2147483647 will now cause the property to be set to
its default value.
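A sketch of the resulting setter behavior (the constants come from the
HTML spec; the function name is invented):

```
#include <cstdint>
#include <cstdio>

static constexpr uint64_t max_dimension_value = 2147483647; // 2^31 - 1
static constexpr uint32_t default_canvas_width = 300;

// Values above the maximum fail the "rules for parsing non-negative
// integers" bound, so the property falls back to its default.
uint32_t coerce_canvas_width_sketch(uint64_t requested)
{
    if (requested > max_dimension_value)
        return default_canvas_width;
    return static_cast<uint32_t>(requested);
}

int main()
{
    printf("%u\n", coerce_canvas_width_sketch(800));        // 800
    printf("%u\n", coerce_canvas_width_sketch(4294967295)); // 300 (default)
}
```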
ApplyFilter internally calls canvas.saveLayer(), which requires a
matching canvas.restore() to be called.
Fixes painting on https://supabase.com/ regressed by
8562b0e33b
This is part of a normative change to the HTML space for WebAssembly JS
module integration and the source phase import proposal, see:
https://github.com/whatwg/html/commit/10ed38ee7
Further changes are required, but this is a start :^)
Previously, we leaked the `curl_slist`s on every request. This also
validates the pointer we get from `curl_slist_append` before setting the
option.
Also, use the `set_option` helper for CURLOPT_RESOLVE, as it will print
a message when setting the option fails.
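In plain libcurl terms, the lifetime now looks roughly like this (the
`set_option` helper is internal, so `curl_easy_setopt` stands in here):

```
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* easy = curl_easy_init();
    if (!easy)
        return 1;

    // curl_slist_append() returns nullptr on allocation failure, so
    // validate the pointer before handing the list to libcurl.
    curl_slist* headers = curl_slist_append(nullptr, "Accept: text/html");
    if (headers)
        curl_easy_setopt(easy, CURLOPT_HTTPHEADER, headers);

    // ... perform the request ...

    curl_easy_cleanup(easy);
    if (headers)
        curl_slist_free_all(headers); // previously this list was leaked
    curl_global_cleanup();
}
```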
Previously, we would stop the repeat timer even if we got a null result.
This caused the pending lookup to:
- Never resolve, and
- Never get purged after too many retries
I believe the underlying issue is something on the socket level, but we
should handle this case regardless.
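A sketch of the handling (all names invented; the real code lives in
the DNS resolver):

```
#include <cstdio>
#include <optional>

struct PendingLookupSketch {
    bool timer_running = true;
    int retries = 0;
    static constexpr int max_retries = 3;

    void on_reply(std::optional<int> result)
    {
        if (!result.has_value()) {
            // Previously the repeat timer was stopped here too, so the
            // lookup neither resolved nor hit the retry cap.
            if (++retries >= max_retries) {
                timer_running = false;
                printf("purged after %d retries\n", retries);
            }
            return;
        }
        timer_running = false;
        printf("resolved: %d\n", *result);
    }
};

int main()
{
    PendingLookupSketch lookup;
    lookup.on_reply(std::nullopt); // keeps retrying
    lookup.on_reply(42);           // resolves and stops the timer
}
```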
When the "Consume a component value from input, and do nothing."
step in `Parser::consume_the_remnants_of_a_bad_declaration` was
executed, it would allocate a `ComponentValue` that was then
immediately discarded.
Add explicit `{}_and_do_nothing` functions for this case that never
allocate a `ComponentValue` in the first place.
Also remove a `(Token)` cast that was unnecessarily copying a `Token`.
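Schematically, the two flavors differ like this (simplified stand-ins
for the real parser types):

```
#include <string>
#include <vector>

struct Token {
    std::string value;
};

// Stand-in for the real ComponentValue, which may own a whole block.
struct ComponentValueSketch {
    Token token;
};

struct ParserSketch {
    std::vector<Token> tokens;
    size_t index = 0;

    // Allocating flavor: builds a value the caller may keep.
    ComponentValueSketch consume_a_component_value()
    {
        return ComponentValueSketch { tokens[index++] };
    }

    // "_and_do_nothing" flavor: same stream effect, but never
    // materializes a ComponentValue just to discard it.
    void consume_a_component_value_and_do_nothing()
    {
        ++index; // the real parser would also skip over nested blocks here
    }
};
```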
Lazily coercing might have made sense in the past, but hashing and
comparing require the `PropertyKey` to be coerced, and 99% of the time
a `PropertyKey` is used to index into a hashmap, which hashes it and
uses it in comparisons. The extra complexity and branching produced by
lazily coercing has become more trouble than it is worth.
Remove the lazy coercions, which then also neatly allows us to
switch to a `Variant`-based implementation.
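The eager, `Variant`-shaped design looks roughly like this (a
simplified sketch with std::variant; the real implementation uses
AK::Variant, and the index check here ignores canonical-index
subtleties such as leading zeros):

```
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <variant>

// A key is either an array index or a string, decided once at
// construction, so hashing and comparing never branch on a pending
// lazy coercion.
using PropertyKeySketch = std::variant<uint32_t, std::string>;

PropertyKeySketch make_key(std::string name)
{
    uint32_t index = 0;
    bool is_index = !name.empty();
    for (char ch : name) {
        if (ch < '0' || ch > '9' || index > (UINT32_MAX - (ch - '0')) / 10) {
            is_index = false;
            break;
        }
        index = index * 10 + static_cast<uint32_t>(ch - '0');
    }
    if (is_index)
        return PropertyKeySketch { index };
    return PropertyKeySketch { std::move(name) };
}

int main()
{
    // std::hash<std::variant<...>> works out of the box here.
    std::unordered_map<PropertyKeySketch, int> map;
    map[make_key("7")] = 1;
    map[make_key("length")] = 2;
    printf("%zu\n", map.size()); // 2
}
```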