VariadicFormatParams only stores pointers to its parameters, so passing
the temporary device.device_name() as a parameter leaves a dangling
pointer.
This fixes broken dmesgln_pci output on riscv64 GCC.
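For illustration, here is a minimal self-contained sketch of the failure mode,
using hypothetical stand-ins rather than the actual AK types: a struct that
stores only pointers to its parameters must not outlive a temporary argument.

```cpp
// Hypothetical stand-ins, not the actual AK/Kernel code.
#include <cstdio>
#include <string>

struct BorrowedParams {
    std::string const* name; // like VariadicFormatParams: only a pointer is stored
};

static BorrowedParams make_params(std::string const& name) { return { &name }; }

static std::string device_name() { return "pci"; } // returns by value, i.e. a temporary

int main()
{
    // BAD: the temporary returned by device_name() is destroyed at the end of
    // this statement, so `bad.name` dangles from then on.
    BorrowedParams bad = make_params(device_name());
    (void)bad;

    // OK: keep the value alive in a named local for as long as the
    // pointer-holding struct is in use.
    auto name = device_name();
    BorrowedParams good = make_params(name);
    std::printf("%s\n", good.name->c_str());
    return 0;
}
```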
I added this check in #18216 in commit 6d38824985.
Back then, I mentioned that `m_bit_codes` is only used for writing,
even though that change only added reading support.
`CanonicalCode::from_bytes()` sets up the tables for both reading and
writing, so the construction of the writing tables had to not crash.
The check in `CanonicalCode::write_symbol()` itself was dead code back
then, though.
Later, #24700 added support for writing WebP files, and it can create
canonical codes with more than 288 symbols. This works just fine, is
now under test, and this check in `write_symbol()` isn't needed
(and never was). So remove it again.
No behavior change.
(I saw this in the profiler once, so maybe a tiny speedup for
writing deflate-compressed data, but on the order of < 2%.)
Linux did the same thing 18 years ago and their reasons for the change
are similar to ours - https://github.com/torvalds/linux/commit/7d12e78
Most interrupt handlers (i.e. IRQ handlers) never used the register
state reference anywhere, so there's simply no need to pass it around.
I didn't measure the performance boost, but this change surely can't
make things worse anyway.
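As a hedged sketch of the kind of interface change this implies (the class and
method names are illustrative, not the exact kernel declarations):

```cpp
// Illustrative only; not the actual SerenityOS interfaces.
struct RegisterState; // saved CPU registers at interrupt time

// Before: every handler had to accept the register state ...
struct OldHandler {
    virtual ~OldHandler() = default;
    virtual bool handle_irq(RegisterState const&) = 0;
};

// ... after: handlers that never used it are simply called with no arguments.
struct NewHandler {
    virtual ~NewHandler() = default;
    virtual bool handle_irq() = 0;
};
```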
Instead, start by trying to read a buffer the size of Elf_Ehdr and
check it for a shebang. If it is indeed a shebang executable, read from
the file again, this time with a size of PAGE_SIZE, which should
suffice for finding the interpreter path.
However, if the executable is an ELF, we quickly validate it and then
pass the preliminary buffer to the find_elf_interpreter_for_executable
method.
That method calculates the last byte offset needed to read all of the
program headers, so we no longer just assume 4096 bytes is sufficient.
The same pattern is applied when loading the interpreter's ELF main
header and its program headers.
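Roughly, the described flow looks like this sketch (helper names and sizes are
illustrative, not the actual Kernel code):

```cpp
// Illustrative sketch of the described flow; names and helpers are hypothetical.
#include <cstddef>
#include <span>
#include <vector>

static constexpr size_t PAGE_SIZE = 4096;
static constexpr size_t ELF_HEADER_SIZE = 64; // sizeof(Elf64_Ehdr)

// Stand-in for "read the first `size` bytes of the executable".
static std::vector<char> read_prefix(std::span<char const> file, size_t size)
{
    size = size < file.size() ? size : file.size();
    return { file.begin(), file.begin() + size };
}

static std::vector<char> load_preliminary_buffer(std::span<char const> file)
{
    // 1. Start with a buffer the size of the ELF header.
    auto preliminary = read_prefix(file, ELF_HEADER_SIZE);

    // 2. If it starts with "#!", re-read PAGE_SIZE bytes, which should be
    //    enough to find the interpreter path on the shebang line.
    if (preliminary.size() >= 2 && preliminary[0] == '#' && preliminary[1] == '!')
        return read_prefix(file, PAGE_SIZE);

    // 3. Otherwise treat it as ELF: validate it and pass this small buffer on;
    //    the program headers are then read up to an offset computed from the
    //    header, instead of assuming 4096 bytes is always enough.
    return preliminary;
}
```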
In Ext2FSInode::compute_block_list_impl(), each call to
process_block_array() creates a new ByteBuffer, which leads to a
kmalloc() call. The ByteBuffer is then discarded when
process_block_array() exits, leading to a kfree() call.
This leads to repeated kmalloc() and kfree() calls as ByteBuffers are
created and destroyed each time process_block_array() is called.
This commit makes it so that only one ByteBuffer is created for each
level of indirect block (so at most 3 ByteBuffers are created).
These ByteBuffers are reused on each call to process_block_array().
This reduces the number of kmalloc() and kfree() calls during
compute_block_list_impl(), especially for larger files.
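The shape of the change, as an illustrative sketch with hypothetical names (not
the real Ext2FS code): the caller owns one scratch buffer per indirection level
and passes it in, instead of the helper allocating on every call.

```cpp
// Illustrative only; not the real Ext2FSInode code.
#include <cstddef>
#include <vector>

using Buffer = std::vector<unsigned char>;

// Before: a fresh buffer per call => one allocation + one free per block array.
static void process_block_array_old(size_t block_size)
{
    Buffer scratch(block_size); // kmalloc() ...
    // ... read the block array into `scratch` and walk it ...
}                               // ... kfree() on scope exit

// After: the caller owns one buffer per indirection level (at most 3:
// singly, doubly, triply indirect) and hands it in for reuse.
static void process_block_array_new(Buffer& scratch, size_t block_size)
{
    scratch.resize(block_size); // reuses the existing capacity on later calls
    // ... read the block array into `scratch` and walk it ...
}

static void compute_block_list(size_t block_size)
{
    Buffer level_buffers[3]; // one per indirection level
    for (int i = 0; i < 1000; ++i) {
        // Repeated calls no longer allocate and free each time.
        process_block_array_new(level_buffers[0], block_size);
    }
}
```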
For some reason, Clang wants AK to work with exceptions enabled so much
that it produces a very annoying warning complaining about the absence
of an unhandled_exception method (completely useless in our setup) in
the promise type of every declared coroutine. We work around this
during the build by suppressing -Wcoroutine-missing-unhandled-exception
in Meta/CMake/common_compile_options.cmake. However, the flag is only
added if the build is using Clang. If one builds coroutine code with
GCC but still uses clangd, clangd obviously doesn't pick up the warning
suppression and annoys the user with a squiggly yellow line under each
coroutine function declaration.
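For context, the warning fires because the coroutine's promise type has no
unhandled_exception() member. A minimal illustrative coroutine that satisfies
Clang (not AK's actual coroutine machinery) looks like this:

```cpp
// Illustrative minimal coroutine; not AK's actual Coroutine implementation.
#include <coroutine>

struct Task {
    struct promise_type {
        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        // This is the member the Clang warning is about. In a build without
        // exceptions it can never be reached, which is why suppressing the
        // warning (rather than adding dead code like this) is an option.
        void unhandled_exception() {}
    };
};

Task example_coroutine()
{
    co_return;
}
```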
This PR implements the standard behavior of displaying the mailbox name
and parenthesized unseen message count in bold when the unseen message
count is greater than zero.
- Use TRY_OR_FAIL() instead of MUST() in a few places (see the sketch below)
- Use for-each loop in two tests
- Use StringView literals instead of u8 arrays in a few places
No behavior change.
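Roughly the difference between the two, sketched with LibTest-style macros (the
test body here is made up): MUST() aborts the whole test binary when the
expression errors, while TRY_OR_FAIL() only fails the current test.

```cpp
// Illustrative; the actual call sites differ.
#include <AK/String.h>
#include <LibTest/TestCase.h>

TEST_CASE(example)
{
    // MUST() crashes the whole process if the expression returns an error ...
    auto a = MUST(String::from_utf8("hello"sv));

    // ... while TRY_OR_FAIL() just marks this one test as failed and returns.
    auto b = TRY_OR_FAIL(String::from_utf8("hello"sv));

    EXPECT_EQ(a, b);
}
```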
This fixes cases where fuzzy matching could give a non-exact match a
better score than a needle that perfectly matches the haystack.
As an example, when searching for "fire" in emojis, "Fire Engine"
scored 168, while "Fire" only got 160.
This patch makes the latter have the best possible score.
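A hedged sketch of the new expectation, assuming the shape of AK's
fuzzy_match() API (the header and result fields are assumptions):

```cpp
// Sketch only; assumes the shape of AK's fuzzy_match() API.
#include <AK/FuzzyMatch.h>
#include <LibTest/TestCase.h>

TEST_CASE(exact_match_scores_highest)
{
    auto exact = fuzzy_match("fire"sv, "Fire"sv);
    auto partial = fuzzy_match("fire"sv, "Fire Engine"sv);

    EXPECT(exact.matched);
    EXPECT(partial.matched);

    // A needle that matches the haystack exactly should score at least as
    // high as any non-exact match.
    EXPECT(exact.score >= partial.score);
}
```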
Some callers of this API needlessly provide a very large output buffer
even when the input is small. Since we throw away all
authentication-related information here anyway, let's make the buffer
(which gets authenticated!) as small as possible.
Using sysretq clobbers rcx and r11 as this instruction loads the rip and
rflags from those registers. This is fine for normal syscalls.
Signal dispatching works like this:
The kernel makes userspace jump to the signal trampoline when a signal
is dispatched. That trampoline then executes the sigreturn syscall after
calling the signal handler to continue executing the code before the
signal was dispatched.
Since e71c320154 the sigreturn syscall is done via the syscall
instruction (and int 0x82 support was removed in the next commit),
which causes the kernel to currently use sysretq to return to userspace.
But signals can happen at any time, not just during syscalls, so the
sigreturn syscall shouldn't clobber the contents of those registers when
returning to userspace.
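One way to avoid the clobber, sketched here purely as an illustration (this is
an assumption about the approach, not necessarily exactly what this change
does), is to pick the return-to-userspace path per request: sysretq for
ordinary syscalls, iretq when the full register state must survive, as for
sigreturn.

```cpp
// Illustration only; not the actual kernel code.
enum class ReturnPath {
    Sysretq, // fast path: loads rip from rcx and rflags from r11 (clobbers them)
    Iretq,   // slower path: does not consume rcx/r11, so they can be restored intact
};

// For a normal syscall the ABI allows rcx/r11 to be clobbered; when returning
// from sigreturn we are resuming arbitrary interrupted code, so every register
// must survive.
static ReturnPath choose_return_path(bool returning_from_sigreturn)
{
    return returning_from_sigreturn ? ReturnPath::Iretq : ReturnPath::Sysretq;
}
```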
This allows us to properly limit our block requests to the device's
capabilities, and choose more optimal block counts for I/O operations.
For now this is only theoretical, as QEMU only advertises a block limit
above our current internal block size limit of u16::max and does not
advertise any optimal transfer lengths.
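As an illustrative sketch of how such limits would typically be applied when
sizing a request (field and function names are made up):

```cpp
// Illustrative only; field and function names are made up.
#include <algorithm>
#include <cstdint>

struct DeviceLimits {
    uint32_t max_transfer_blocks;     // from the device's reported block limits
    uint32_t optimal_transfer_blocks; // 0 if the device does not advertise one
};

// Choose how many blocks to put in the next request.
static uint32_t next_transfer_length(uint64_t remaining_blocks, DeviceLimits const& limits)
{
    uint64_t count = remaining_blocks;
    if (limits.optimal_transfer_blocks != 0)
        count = std::min<uint64_t>(count, limits.optimal_transfer_blocks);
    return static_cast<uint32_t>(std::min<uint64_t>(count, limits.max_transfer_blocks));
}
```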
This is apparently what bootloaders do before using a USB storage
device, so we should likely do so as well, especially when no BIOS is
present, like on RISC-V.
Co-Authored-By: Sönke Holz <sholz8530@gmail.com>
Instead, calculate the load offset at runtime.
The mapping of the initial stack is now done explicitly. It was
previously included in the 2 MiB-aligned kernel range.
We need to use kernel8.img now, as we no longer hard-code the physical
addresses of all sections in the linker script. QEMU would otherwise
try to load us at KERNEL_MAPPING_BASE.
This is slightly better than assuming that cutting off the high 32 bits
results in the physical address. That worked so far because the kernel
(and stack) is mapped at KERNEL_MAPPING_BASE + PHYSICAL_LOAD_ADDR and
KERNEL_MAPPING_BASE & 0xffff'ffff == 0.
The next commit will move the kernel to KERNEL_MAPPING_BASE + 0, so we
need to get the physical address in a slightly less hacky way.
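The address arithmetic described here, as an illustrative sketch (the
constant's value and the names are examples, not the real ones):

```cpp
// Illustrative only; not the actual boot code.
#include <cstdint>

static constexpr uint64_t KERNEL_MAPPING_BASE = 0xffff'ffff'0000'0000; // example value

// Old approach: assume physical == low 32 bits of the virtual address.
// This only works if the kernel is mapped at KERNEL_MAPPING_BASE + physical
// load address and KERNEL_MAPPING_BASE & 0xffff'ffff == 0.
static uint64_t virt_to_phys_old(uint64_t vaddr)
{
    return vaddr & 0xffff'ffff;
}

// New approach: determine the physical load base at runtime and subtract the
// virtual base, so the kernel can be mapped at KERNEL_MAPPING_BASE + 0.
static uint64_t virt_to_phys_new(uint64_t vaddr, uint64_t physical_load_base)
{
    return vaddr - KERNEL_MAPPING_BASE + physical_load_base;
}
```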
59aaef08d9 LibMedia: Rename LibVideo to LibMedia
aa2f5df0a9 AK: Add a helper to detect which CPU features are supported
0bbf42bca7 LibJS: Introduce the CanonicalizeKeyedCollectionKey AO
e06d74c314 LibWeb: Give DOM Elements a CountersSet
07fe3e57c6 LibWeb: Implement CounterStyleValue
1bc896fa60 LibWeb: Implement counter-[increment,reset,set] properties
78a22f5098 LibWeb: Replace templated retarget function with a regul...
e4fa0e7f63 LibWeb: Implement fetch record from the fetch spec
a8c4f34bff LibWeb: Create separate DedicatedWorkerGlobalScope class
Also add Services to include_dirs for LibWeb. This is necessary
due to the <WebWorker/DedicatedWorkerHost.h> include added in #24870.
They do happen in practice.
We might have to do more work to perform actual color conversions with
these profiles, but for now, let's at least load them.
Fixes rendering of a few images in my thesis in LibPDF.
The images were created in OmniGraffle in 2008, then saved as
PDF, then converted to eps using LaTeX tooling.