Since MMIO is placed at fixed physical addresses and does not need to be
backed by real RAM pages, there's no need to use PhysicalPage
instances to track those pages.
This slightly reduces allocations, but more importantly it makes MMIO
addresses that end up above the normal RAM ranges work, as 64-bit PCI
BARs usually do.
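A minimal sketch of what this enables, assuming a map_typed_writable()-style
helper that creates the kernel mapping without allocating PhysicalPage
instances (the register layout and helper signature here are illustrative):

```cpp
// Hypothetical 64-bit PCI BAR that sits above the top of physical RAM.
struct ControllerRegisters {
    u32 capabilities;
    u32 version;
};

// Assumed helper: maps the physical range into kernel VM directly,
// with no PhysicalPage bookkeeping for the backing frames.
auto registers = TRY(Memory::map_typed_writable<ControllerRegisters>(
    PhysicalAddress { bar_base }));
dbgln("controller version: {:#x}", registers->version);
```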
* Matches how the loader is organized
* `compress_VP8L_image_data()` will grow longer when we add actual
compression
* Maybe someone wants to write a lossy compressor one day
No behavior change.
This code path now also compresses to memory once and then writes to
the output stream.
Since the animation writer has a SeekableStream, it could compress to
the stream directly and fix up offsets later. That's more complicated,
though, and keeping the animated and non-animated code paths similar
seems nice. The drawback is only temporarily higher memory use, and the
memory used is smaller than what the input bitmap needs.
Before, we compressed the image data to memory, made another copy in
memory, and then wrote to the output stream.
Now, we compress to memory once and then write to the output stream.
No behavior change.
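Both paths now share roughly this shape; a minimal sketch, assuming
AK::AllocatingMemoryStream and a write_chunk_header() helper that is
invented here for illustration:

```cpp
// Compress the image data into an in-memory stream first...
AllocatingMemoryStream vp8l_stream;
TRY(compress_VP8L_image_data(vp8l_stream, bitmap));

// ...then copy it out once, so the chunk size is known up front...
auto vp8l_data = TRY(ByteBuffer::create_uninitialized(vp8l_stream.used_buffer_size()));
TRY(vp8l_stream.read_until_filled(vp8l_data));

// ...and write it straight to the output stream, with no second copy.
TRY(write_chunk_header(stream, "VP8L"sv, vp8l_data.size())); // invented helper
TRY(stream.write_until_depleted(vp8l_data));
```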
It is now possible to pass an optional `ImageDataSettings` object to
the `CanvasImageData.createImageData()` and
`CanvasImageData.getImageData()` methods.
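On the C++ side this amounts to an optional trailing parameter; a rough
sketch with assumed LibWeb types (per the HTML spec, the
ImageDataSettings dictionary currently carries a colorSpace member):

```cpp
// Hypothetical mirror of the ImageDataSettings IDL dictionary.
struct ImageDataSettings {
    Optional<PredefinedColorSpace> color_space;
};

// Existing two- and four-argument callers keep working unchanged,
// since the new settings parameter is optional.
WebIDL::ExceptionOr<JS::NonnullGCPtr<ImageData>> create_image_data(
    int width, int height, Optional<ImageDataSettings> const& settings = {});
```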
We'll want to explicitly load fonts from FontFace and other Web APIs
in the future. A future refactor should also move this completely away
from StyleComputer and call it something like 'FontCache'.
We're now getting errors on CI due to gcc-13 being missing. We can
probably be smarter about what packages we install, depending on the
workflow being run. But let's first unblock CI.
The error we get is a bit strange and inconsistent: some CI runners seem
to already have gcc-13 installed, while others don't and can't find the
gcc-13 package without the Ubuntu toolchain test PPA.
With this, only `ContinuePendingUnwind` needs to dynamically check
whether a scheduled return needs to go through a `finally` block, making
the interpreter loop a bit nicer.
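A minimal sketch of the remaining dynamic check, with invented member
names standing in for the interpreter's actual bookkeeping:

```cpp
// ContinuePendingUnwind resumes a completion that was captured during
// unwinding (e.g. a `return` inside a `try`). If an enclosing `finally`
// still has to run, control re-enters it and the return stays pending.
void Interpreter::continue_pending_unwind(Label resume_target)
{
    if (m_scheduled_return.has_value() && current_unwind_context().finalizer.has_value()) {
        jump_to(*current_unwind_context().finalizer);
        return;
    }
    jump_to(resume_target); // nothing left to run; resume normally
}
```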
This actually allows us to re-introduce the ldd utility as a symlink to
our dynamic loader, so ldd now behaves exactly like on Linux - it will
load all dynamic dependencies of an ELF executable.
This has the advantage that running ldd on an ELF executable gives an
exact preview of the order in which the dynamic loader loads the
executable and its dependencies.
In preparation for introducing ldd as a symlink to /usr/lib/Loader.so,
we rename the ldd utility to elfdeps, as its sole purpose is to list
ELF object dependencies, not to show how the dynamic loader loads them.
This is useful for testing ELF binaries that expose different
functionality based on the argv[0] string (BuggieBox, for example, does
this to determine which utility to run).
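The multi-call pattern mentioned above looks roughly like this; the
applet names and *_main() entry points are placeholders, not BuggieBox's
actual table:

```cpp
#include <stdio.h>
#include <string.h>

int ls_main(int, char**);  // hypothetical applet entry points
int cat_main(int, char**);

int main(int argc, char** argv)
{
    // Dispatch on the basename of argv[0], so the same binary behaves
    // differently depending on the name it was invoked through.
    char const* name = strrchr(argv[0], '/');
    name = name ? name + 1 : argv[0];

    if (strcmp(name, "ls") == 0)
        return ls_main(argc, argv);
    if (strcmp(name, "cat") == 0)
        return cat_main(argc, argv);

    fprintf(stderr, "%s: unknown applet\n", name);
    return 1;
}
```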
This change essentially gives the DynamicLoader two roles. The first is
to be invoked by the kernel to dynamically link an ELF executable at
runtime.
The second is to allow running ELF executables explicitly from
userspace: the kernel still runs the DynamicLoader as the "intended"
program, but the DynamicLoader can now do its own command-line argument
parsing and run a specified binary, with future options being easy to
implement.
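A rough sketch of the two entry modes, with invented helper names (the
actual loader entry code differs):

```cpp
// Role 1: the kernel mapped us as the PT_INTERP of some executable;
// the program to link is already set up for us.
// Role 2: a user ran Loader.so (or the ldd symlink) directly, so we
// parse our own argv and load the requested binary ourselves.
if (invoked_as_program_interpreter(auxiliary_vector))
    return link_and_run_main_program();

StringView path_to_load = parse_command_line(argc, argv); // e.g. "Loader.so /bin/ls"
return load_and_run(path_to_load);
```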
This will be used in the DynamicLoader code, as it can't make syscalls
via LibCore.
Because we can't use most of the LibCore code there, we convert the
versioning code in Version.cpp to use the LibC uname() function.
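A minimal sketch of uname()-based version discovery; the exact fields
SerenityOS fills in and the formatting below are illustrative:

```cpp
#include <stdio.h>
#include <sys/utsname.h>

int main()
{
    // uname() is a plain LibC call, so it works even where LibCore
    // (and its syscall wrappers) can't be used.
    struct utsname info;
    if (uname(&info) < 0) {
        perror("uname");
        return 1;
    }
    printf("%s %s (%s)\n", info.sysname, info.release, info.machine);
    return 0;
}
```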
Prepare to remove the big lock on PCI::Access in a future commit, so we
can lock a spinlock on a specific PCI HostController when needed,
instead of locking the entire subsystem.
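The intended granularity looks roughly like this; the member and method
names are illustrative, not the actual Kernel/Bus/PCI interfaces:

```cpp
class HostController {
public:
    u32 read32_field(BusNumber bus, DeviceNumber device, FunctionNumber function, u32 field)
    {
        // Serialize config-space access per controller, rather than
        // taking one big lock over the whole PCI subsystem.
        SpinlockLocker locker(m_config_lock);
        return read32_field_locked(bus, device, function, field); // invented helper
    }

private:
    Spinlock m_config_lock {};
};
```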