The ISA-PC machine type provides no PCI bus support, no IOAPIC support,
and none of the other modern PC features of recent hardware generations.
This is mainly a good environment for testing abstractions in kernel
space, and can help with improving them for the sake of porting the
OS to other chipsets and CPU architectures.
We use the environment variable SERENITY_SOURCE_DIR to resolve and check
icon links. This is a bit inconvenient, as SERENITY_SOURCE_DIR needs to
be set correctly before invoking the markdown checker, but since we use
it through the check-markdown script anyway, I think it's not a problem.
Previously we added it only if spice was available, but it's possible to
build qemu with --disable-spice --enable-spice-protocol, which provides
qemu-vdagent but no spicevmc. In that case we still configured
qemu-vdagent to use the "vdagent" device, but never actually defined it,
so qemu-vdagent never worked.
This allows us to fuzz the generated Unicode and timezone database
helpers, and to fuzz things like LibJS using Fuzzilli, to get proper
coverage of our Unicode handling code.
Update the Azure CI to use the new two-stage build as well, and clean up
some unused CMake options there.
According to its manpage, genext2fs tries to create the file system with
as few inodes as possible. This causes SerenityOS to fail at boot time
when creating temporary files.
We now pass along the toolchain type to all subcommands. This ensures
that gdb will load the correct debug information for kernels compiled
with Clang, and the following warning won't appear with the GNU
toolchain:
> WARNING: unknown toolchain 'gdb'. Defaulting to GNU.
> Valid values are 'Clang', 'GNU' (default)
WebSockets have been moved from the HTML Standard into their own
specification, the WebSockets Standard (https://websockets.spec.whatwg.org).
Move the IDL file and implementation into a new WebSockets directory and
C++ namespace accordingly.
The single 4000-line WrapperGenerator.cpp file was proving to be a pain
to hack on, and was filled with spaghetti. Split it into a bunch of
files to lessen the impact of the spaghetti.
Also refactor the whole parser to use a class instead of a giant
function with a million lambdas.
I've attempted to handle the errors gracefully where it was clear and
simple how to do so, but a lot of this was just adding
`release_value_but_fixme_should_propagate_errors()` in places.
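
Concretely, the pattern looks something like this; `read_template` is a
made-up stand-in for any fallible, ErrorOr-returning call inside the
generator:

```cpp
#include <AK/Error.h>
#include <AK/String.h>
#include <AK/StringView.h>

// Hypothetical fallible helper, for illustration only.
static ErrorOr<String> read_template(StringView name)
{
    if (name.is_empty())
        return Error::from_string_literal("empty template name");
    return String::formatted("// template: {}", name);
}

static void emit_wrapper()
{
    // Where propagating wasn't simple, the error is asserted away for
    // now; the FIXME lives in the method name itself.
    auto source = read_template("Wrapper.cpp"sv)
                      .release_value_but_fixme_should_propagate_errors();
    (void)source; // ... generate code from `source` ...
}
```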
I can't imagine how this happened, but it seems we've managed to
conflate the "event listener" and "EventListener" concepts from the DOM
specification in some parts of the code.
We previously had two things:
- DOM::EventListener
- DOM::EventTarget::EventListenerRegistration
DOM::EventListener was roughly the "EventListener" IDL type,
and DOM::EventTarget::EventListenerRegistration was roughly the "event
listener" concept. However, they were used interchangeably (and
incorrectly!) in many places.
After this patch, we now have:
- DOM::IDLEventListener
- DOM::DOMEventListener
DOM::IDLEventListener is the "EventListener" IDL type,
and DOM::DOMEventListener is the "event listener" concept.
This patch also updates the addEventListener() and removeEventListener()
functions to follow the spec more closely, along with the "inner invoke"
function in our EventDispatcher.
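
To make the distinction concrete, here's a simplified sketch of the two
shapes; the names follow the spec concepts, but these are not the exact
LibWeb declarations:

```cpp
#include <AK/FlyString.h>
#include <AK/RefCounted.h>
#include <AK/RefPtr.h>

class Event; // stand-in for DOM::Event

// The "EventListener" IDL type: a wrapper around the script callback.
class IDLEventListener : public RefCounted<IDLEventListener> {
public:
    void handle_event(Event&) { /* invoke the underlying JS callback */ }
};

// The "event listener" concept: one registration on an EventTarget,
// together with its associated flags from the DOM specification.
struct DOMEventListener : public RefCounted<DOMEventListener> {
    FlyString type;                    // e.g. "click"
    RefPtr<IDLEventListener> callback; // the IDL type above
    bool capture { false };
    bool passive { false };
    bool once { false };
    bool removed { false };
};
```

The same IDLEventListener can appear in many DOMEventListeners (one per
addEventListener() registration), which is exactly the distinction that
was previously muddled.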
We no longer include all the things, so each generated IDL file only
depends on the things it actually needs now.
A possible downside is that all IDL files have to explicitly import
their dependencies.
Note that non-IDL dependencies still remain and are injected into all
generated files; this can be resolved later, if desired, by allowing IDL
files to import headers.
BCP 47 will be the single source of truth for known calendar and number
system keywords, and their aliases (e.g. "gregorian" is an alias for
"gregory"). Move the generation of available keywords to where we
parse the BCP 47 data, so that hard-coded aliases may be removed from
other generators.
We have a fair amount of hard-coded keywords / aliases that can now be
replaced with real data from BCP 47. As a result, this also changes the
awkward way we were previously generating keys. Before, we were more or
less generating keywords as a CSV list of keys, e.g. for the "nu" key,
we'd generate "latn,arab,grek" (ordered by locale preference). Then at
runtime, we'd split on the comma. We now just generate spans of keywords
directly.
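
Roughly, the before/after looks like this (the table and function names
here are illustrative, not the actual generated symbols):

```cpp
#include <AK/Span.h>
#include <AK/StringView.h>
#include <AK/Vector.h>

// Before: the generator emitted one CSV string per key, which every
// lookup had to split at runtime.
static constexpr auto s_numbers_csv = "latn,arab,grek"sv;

static Vector<StringView> number_systems_before()
{
    return s_numbers_csv.split_view(',');
}

// After: the generator emits the keywords as a real array (still ordered
// by locale preference), and lookups hand out a span over it with no
// splitting and no allocation.
static constexpr StringView s_numbers[] = { "latn"sv, "arab"sv, "grek"sv };

static Span<StringView const> number_systems_after()
{
    return s_numbers;
}
```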
This package was originally meant to be included in CLDR version 40, but
was missed in their release scripts. This has been resolved:
https://unicode-org.atlassian.net/browse/CLDR-15158
Unfortunately, the CLDR was re-released with the same version number. So
to bust the build's CLDR cache, change the "version" used to detect that
we need to redownload the CLDR.
This is no longer needed as BrowsingContextContainer::content_document()
now does the right thing, and HTMLIFrameElement.contentDocument is the
only user of this attribute. Let's not invent our own mechanisms for
things that are important to get right, like same origin comparisons.
The spec version of canonical_numeric_index_string is absurdly complex,
and ends up converting from a string to a number and then back again,
which is both slow and requires a few allocations and a string compare.
Instead, this patch moves away from using Values to represent a
canonical index. In most cases all we need to know is whether a
PropertyKey is an integer between 0 and 2^32 - 2, which we already
compute when we construct a PropertyKey, so the existing is_number()
check is sufficient.
The more expensive case is handling strings containing numbers that
don't roundtrip through string conversion. In most cases these turn
into regular string properties, but for TypedArray access these
property names are not treated as normal named properties.
TypedArrays treat these numeric properties as magic indexes that are
ignored on read and are not stored (but are evaluated) on assignment.
For that reason there's now a mode flag on canonical_numeric_index_string
so that only TypedArrays take the cost of the ToString round trip test.
In order to improve the performance of this path this patch includes
some early returns to avoid conversion in cases where we can quickly
know whether a property can round trip.
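
The overall shape, heavily simplified (stand-in types rather than the
actual LibJS declarations, with the round-trip check itself stubbed out):

```cpp
#include <AK/Optional.h>
#include <AK/String.h>
#include <AK/Types.h>

// Simplified stand-in for JS::PropertyKey, for illustration only.
struct PropertyKey {
    Optional<u32> index; // computed at construction for integers 0 .. 2^32-2
    String string;
    bool is_number() const { return index.has_value(); }
};

enum class CanonicalIndexMode {
    DetectNumericRoundtrip, // TypedArray access: pay for the round-trip test
    IgnoreNumericRoundtrip, // everything else: treat the key as a plain string
};

// Stub: the real check runs ToNumber then ToString on the property name
// and compares the result against the original string; elided here.
static Optional<double> roundtripping_number(String const&) { return {}; }

static Optional<double> canonical_numeric_index_string(PropertyKey const& key, CanonicalIndexMode mode)
{
    // Fast path: PropertyKey computed this when it was constructed, so
    // the existing is_number() check covers the common case for free.
    if (key.is_number())
        return *key.index;

    // Everyone but TypedArrays can treat such keys as regular string
    // properties, skipping the conversions entirely.
    if (mode == CanonicalIndexMode::IgnoreNumericRoundtrip)
        return {};

    // Only TypedArrays reach this point and pay for the string -> number
    // -> string round trip, catching names like "1e3" or "0.50" that must
    // act as magic indexes rather than named properties.
    return roundtripping_number(key.string);
}
```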
This adds a generator utility to read an entire file and parse it as a
JSON value. This is heavily used by the CLDR generators. The idea here
is to put the file reading details in the utility so that when we have a
good story for generically reading an entire stream in LibCore, we can
update the generators to use that by only touching this helper.
This also moves the open_file helper to the utility file. It's currently
a lambda redefined in each TZDB/Unicode generator. It used to display
the missing command line flag and other info local to each generator.
After switching to LibMain, it just returns a generic error message, and
is duplicated several times.
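
A minimal sketch of the JSON half of this, written against LibCore's
current stream API (the helper name read_json_file is illustrative):

```cpp
#include <AK/JsonValue.h>
#include <AK/StringView.h>
#include <AK/Try.h>
#include <LibCore/File.h>

// Roughly the shape of the shared utility: open the file, read it all,
// parse it as JSON, and propagate any failure to the caller.
static ErrorOr<JsonValue> read_json_file(StringView path)
{
    auto file = TRY(Core::File::open(path, Core::File::OpenMode::Read));
    auto contents = TRY(file->read_until_eof());
    return JsonValue::from_string(StringView { contents });
}
```

If the stream story changes, only this helper needs to be touched, which
is the point of centralizing it.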