This file implements the POSIX APIs from <regex.h>, and is not suitable
for inclusion in a Lagom build. If we do include it, it will override
the host's regex functions and wreak havoc if it's resolved before the
host's implementation.
By passing AF_UNSPEC to getaddrinfo, we're telling the system's
implementation that we are OK with getting IPv4 addresses, IPv6
addresses, or both in our result. On my Ubuntu 22.04 system, the first
addrinfo
returned for "www.google.com" holds an IPv6 address, which when
interpreted as an IPv4 sockaddr_in gives an address of 0.0.0.0.
This fixes TestTLSHandshake in Lagom locally.
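A minimal sketch of the safer pattern, assuming the caller can only
handle IPv4 (the hostname and port here are just illustrative):

    #include <netdb.h>
    #include <netinet/in.h>

    struct addrinfo hints = {};
    hints.ai_family = AF_INET; // only request IPv4 if that's all we handle
    hints.ai_socktype = SOCK_STREAM;

    struct addrinfo* results = nullptr;
    if (getaddrinfo("www.google.com", "443", &hints, &results) != 0)
        return; // handle the error
    for (auto* ai = results; ai; ai = ai->ai_next) {
        if (ai->ai_family != AF_INET)
            continue; // never reinterpret an IPv6 result as sockaddr_in
        auto* sin = reinterpret_cast<struct sockaddr_in*>(ai->ai_addr);
        // ... use sin->sin_addr ...
    }
    freeaddrinfo(results);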
This test file had #ifdef macros at the top that caused none of the
content to be compiled unless a developer manually wanted to run the
specific benchmarks within. As such, it has become stale. Remove it for
now; if someone wants to restore it in an always-runnable state, we can
restore the specific tests it's trying to benchmark.
This filter mimics the functionality from Photoshop and other image
editors, allowing you to adjust the hue, saturation, and lightness of
an image. I always found this a very handy feature :^)
This fixes some off-by-one wrapping issues that became visible when
running on x86_64. The problem also existed on i686 but, by chance,
did not show up due to a ±0.000001 difference between the two.
For testing purposes, this allows opening any file by passing its name
as an argument.
Additionally, there is a --benchmark option that will just call decode
for 100 frames and then exit, printing the time spent in the decoder.
Previously, saved probability tables were being inserted, causing the
Vector to increase in size when it should stay fixed at a size of 4.
This changes the Vector to an Array<T, 4>, which will default-initialize
and allow assigning to any index without previously setting a size.
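A sketch of the difference (the element type and index names here are
assumptions):

    // Before: append() kept growing the Vector past its intended size.
    Vector<Probabilities> m_saved_probability_tables;
    m_saved_probability_tables.append(current_probabilities);

    // After: an Array default-initializes all 4 slots up front, and any
    // index can be assigned without having to set a size first.
    Array<Probabilities, 4> m_saved_probability_tables;
    m_saved_probability_tables[frame_context_index] = current_probabilities;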
Integer overflow could sometimes occur due to counts going above 255,
where the values should instead be clamped at their maximum to avoid
wrapping to 0.
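The fix amounts to a saturating increment; a minimal sketch (names
assumed):

    u8 count = counts[context];
    // count + 1 promotes to int, so the comparison happens before any
    // wrap-around can occur; the result then fits back into a u8.
    counts[context] = static_cast<u8>(min(count + 1, 255));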
This fixes an issue causing frame 3 of the test video to fail to parse
because a reference vector was incorrectly within the range for a high
precision delta vector read.
This allows the second shown frame of the VP9 test video to be decoded,
as the second chunk uses a superframe to encode a reference frame and
a second frame that inter-predicts between the keyframe and the
reference frame.
This enables the second frame of the test video to be decoded.
It appears that the test video uses a superframe (group of multiple
frames) for the first chunk of the file, but we haven't implemented
superframe parsing.
We also ignore the show_frame flag, so for now, this
means that the second frame read out is shown when it should not be. To
fix this, another error type needs to be implemented that is "thrown" to
the decoder's client so they know to send another sample buffer.
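For reference, the superframe index lives at the end of a chunk and is
found by inspecting the last byte; a sketch of locating it, following
the VP9 superframe syntax (variable names are mine):

    u8 last_byte = chunk[chunk_size - 1];
    if ((last_byte & 0b11100000) == 0b11000000) { // superframe marker
        u8 bytes_per_size = ((last_byte >> 3) & 0b11) + 1;
        u8 frame_count = (last_byte & 0b111) + 1;
        size_t index_size = 2 + bytes_per_size * frame_count;
        size_t index_start = chunk_size - index_size;
        // The marker byte is repeated at the start of the index; after
        // verifying it, read frame_count little-endian frame sizes of
        // bytes_per_size bytes each, starting at index_start + 1.
        if (chunk[index_start] == last_byte) {
            // ... split the chunk into frames using those sizes ...
        }
    }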
The interpolation filter mode of the above block was mistakenly being
taken from the block on the left instead, causing some parsing errors.
This also changes the magic number 3 to SWITCHABLE_FILTERS.
Unfortunately, the spec uses the magic number, so this value was taken
instead from the reference codec, libvpx.
This gets the decoder closer to fully parsing the second frame without
any errors. It will still be unable to output an inter-predicted frame.
The lack of output causes VideoPlayer to crash if it attempts to read
the buffers for frame 1, so it is still limited to the first frame.
This changes MotionVector by removing the cpp file and moving all
functions to the header, where they are now declared as constexpr
so that they can be compile-time evaluated in LookupTables.h.
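A sketch of the resulting header-only shape (member names assumed):

    class MotionVector {
    public:
        constexpr MotionVector() = default;
        constexpr MotionVector(i32 row, i32 column)
            : m_row(row), m_column(column) {}
        constexpr MotionVector operator+(MotionVector const& other) const
        {
            return { m_row + other.m_row, m_column + other.m_column };
        }
    private:
        i32 m_row { 0 };
        i32 m_column { 0 };
    };

    // ...which allows compile-time tables in LookupTables.h, e.g.:
    static constexpr MotionVector some_table[] = { { -1, 0 }, { 0, -1 } };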
For testing purposes, the output buffer is taken directly from the
decoder and displayed in an image widget.
The first keyframe can be displayed, but the second will not decode
so VideoPlayer will stop at frame 0 for now.
This implements a BT.709 YCbCr to RGB conversion in VideoPlayer, but
that should be moved to a library for handling color space conversion.
The first keyframe of the test video can be decoded with these changes.
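For reference, the conversion is roughly the following, assuming
full-range samples (studio-range input needs an extra scale and
offset):

    float y  = Y;
    float cb = Cb - 128.0f;
    float cr = Cr - 128.0f;
    float r = y + 1.5748f * cr;
    float g = y - 0.1873f * cb - 0.4681f * cr;
    float b = y + 1.8556f * cb;
    // clamp r/g/b to [0, 255] before packing into an RGB pixel

The constants fall out of the BT.709 luma coefficients (Kr = 0.2126,
Kb = 0.0722).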
Raw memory allocations in the Parser have been replaced with Vector or
Array to avoid memory leaks and out-of-bounds accesses.
This allows runtime strings, so we can format the errors to make them
more helpful. Errors in the VP9 decoder will now print out the
function, filename, and line number where a read or a bitstream
requirement has failed.
The DecoderErrorCategory enum will classify the errors so library users
can show general user-friendly error messages, while providing the
debug information separately.
Any non-DecoderErrorOr<> results can be wrapped by DECODER_TRY to
return from decoder functions. This will also add the extra information
mentioned above to the error message.
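A sketch of the macro's shape (the exact names and format string are
assumptions):

    #define DECODER_TRY(category, expression)                        \
        ({                                                           \
            auto&& _result = (expression);                           \
            if (_result.is_error()) [[unlikely]] {                   \
                return DecoderError::format((category),              \
                    "{} failed at {}:{}",                            \
                    #expression, __FILE__, __LINE__);                \
            }                                                        \
            _result.release_value();                                 \
        })

Like AK::TRY, this relies on GCC/Clang statement expressions.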
init_bool will now check whether there is enough data in the bitstream
for the range coding size to be fully read.
exit_bool will now read the entire padding element regardless of its
size, on which the spec places no limit.
Reads will now be done in larger chunks at a time.
The public read_byte() function was removed in favor of a private
fill_reservoir() function, which fills the 64-bit reservoir field; the
reservoir is then bit-shifted and masked as necessary for subsequent
arbitrary bit-sized reads.
read_f(n) was renamed to read_bits to be clearer about its use.
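A simplified sketch of the reservoir scheme (the real reader also has
to handle end-of-stream and error cases):

    u64 read_bits(size_t count)
    {
        VERIFY(count >= 1 && count <= 57);
        // The reservoir keeps its bits left-aligned at the MSB.
        if (m_bits_available < count)
            fill_reservoir();
        u64 result = m_reservoir >> (64 - count);
        m_reservoir <<= count;
        m_bits_available -= count;
        return result;
    }

    void fill_reservoir()
    {
        // Top up with whole bytes so later reads are shifts and masks.
        while (m_bits_available <= 56 && m_position < m_data.size()) {
            m_reservoir |= u64(m_data[m_position++]) << (56 - m_bits_available);
            m_bits_available += 8;
        }
    }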
The interpolation filter value is not set when reading an intra-only
frame, so printing this for the first keyframe of the file was printing
"220", which is invalid.
Instead of doing anything reasonable, Utf8CodePointIterator returned
invalid code points, for example U+123456. However, many callers of this
iterator assume that a code point is always at most 0x10FFFF.
In fact, this is one of two reasons for the following OSS Fuzz issue:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49184
This is probably a very old bug.
In the particular case of URLParser, AK::is_url_code_point got confused:
return /* ... */ || code_point >= 0xA0;
If code_point is a "code point" beyond 0x10FFFF, this violates the
condition given in the preceding comment, but satisfies the given
condition, which eventually causes URLParser to crash.
This commit fixes *only* the erroneous UTF-8 decoding, and does not
fully resolve OSS-Fuzz#49184.
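One reasonable shape for the decoder-side fix (a sketch; the helper
name is assumed, and the real check also has to reject surrogates and
overlong encodings):

    u32 code_point = decode_multi_byte_sequence(bytes);
    if (code_point > 0x10FFFF)
        return 0xFFFD; // U+FFFD REPLACEMENT CHARACTER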
In particular, StringView::contains(char) is often called with a u32
code point. When this is done, the compiler silently narrows the u32
to a char, allowing data corruption to occur without any diagnostic.
In fact, this is one of two reasons for the following OSS Fuzz issue:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49184
This is probably a very old bug.
In the particular case of URLParser, AK::is_url_code_point got confused:
return /* ... */ || "!$&'()*+,-./:;=?@_~"sv.contains(code_point);
If code_point is a large code point that happens to have the correct
lower bytes, AK::is_url_code_point is then convinced that the given
code point is okay, even if it is actually problematic.
This commit fixes *only* the silent data corruption due to the erroneous
conversion, and does not fully resolve OSS-Fuzz#49184.
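A standalone illustration of the truncation (the specific value is
made up):

    u32 code_point = 0x11002C; // invalid: beyond U+10FFFF
    // The u32 is implicitly narrowed to its low byte, 0x2C (','),
    // so this returns true even though the code point is bogus:
    bool ok = "!$&'()*+,-./:;=?@_~"sv.contains(code_point);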
For some odd reason, page_table_base() returned a PhysicalPtr, but the
corresponding setter accepted only a 32-bit value, so valid 64-bit
addresses were truncated down to 32 bits when stored.
With this commit applied, PageDirectories can now be located beyond
the 4 GiB barrier.
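The shape of the mismatch, simplified (not the exact kernel code):

    PhysicalPtr page_table_base() const; // 64-bit physical address out...
    void set_page_table_base(u32 value); // ...but only 32 bits in,
                                         // truncating addresses >= 4 GiB
    // The fix widens the setter:
    void set_page_table_base(PhysicalPtr value);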
This was found by sin-ack, so he is credited for the fix with a
Co-authored-by tag.
Co-authored-by: sin-ack <sin-ack@users.noreply.github.com>
Add ability to use values passed to grid-template-columns and
grid-template-rows for CSS Grid layout within a repeat() function.
E.g. grid-template-columns: repeat(2, 50px) means two columns of
50px width each.
There is no particular reason why this section should be marked as
`NOBITS` (as it might very well include initialized values), and it
resolves 90% of the mismatches between the input and output sections,
which LLD now warns about when linking.
Hopefully no one else will forget to call set_prototype with the cached
prototype they just retrieved from a realm and spend a long time
wondering why their object has no properties...
Get rid of the bespoke NavigatorObject class and use the modern IDL
strategies for creating platform objects to re-implement Navigator and
its associated mixin interfaces. While we're here, implement it in a
way that brings WorkerNavigator up to spec :^)
We can now properly add the prototypes and constructors to the global
object of the Worker's inner realm, so we don't need this window for
anything anymore.
This new code generator takes all the .idl files in LibWeb, looks for
each top level interface in there with an [Exposed=Foo] attribute, and
adds code to add the constructor and prototype for each of those exposed
interfaces to the realm of the relevant global object we're initializing.
It will soon replace WindowObjectHelper as the way that web interfaces
are added to the Window object, and will be used in the future for
creating proper WorkerGlobalScope objects for dedicated and shared
workers.
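Roughly the shape of the generated output (every name here is
hypothetical):

    void add_window_exposed_interfaces(JS::Realm& realm)
    {
        // One block is emitted per interface whose [Exposed=...] set
        // includes Window:
        auto& navigator_prototype = ensure_web_prototype<NavigatorPrototype>(realm, "Navigator");
        define_constructor(realm, "Navigator", navigator_prototype);
        // ... and so on for every other exposed interface ...
    }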
Instead, create a tree of Parsers all pointing to a top-level Parser.
All module imports and interfaces are stored at the top level, instead
of in a static map. This allows creating multiple IDL::Parsers in the
same process without them stepping on each other's toes.