Use a LocalSocket to represent the connection between two message ports.
The concept of the port message queue is still missing, however. When
that concept is implemented, the "steps" in step 7 of the message port
transfer steps will need to send the serialized data over the connected
socketpair and run in the event loop of the process that holds onto the
other side of the message port. Doing this should allow centralizing the
behavior of postMessage for Window, MessagePorts and Workers.
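As a rough sketch of the underlying mechanism, here is the plain POSIX primitive (not the Core::LocalSocket wrapper itself; the payload is an illustrative stand-in for real serialized message data):
```cpp
#include <cstdio>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    // A connected socketpair: one end per message port endpoint.
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
        return 1;

    // The transferring side writes the serialized message...
    char const serialized[] = "{\"example\":\"payload\"}";
    if (write(fds[0], serialized, sizeof(serialized)) < 0)
        return 1;

    // ...and the process holding the other side would read it from its
    // event loop and dispatch the corresponding MessageEvent.
    char buffer[64] = {};
    if (read(fds[1], buffer, sizeof(buffer)) < 0)
        return 1;
    std::printf("received: %s\n", buffer);

    close(fds[0]);
    close(fds[1]);
}
```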
When `running_execution_context` is called from other VM APIs while the
execution context stack is empty, the verification failure is reported
from inlined AK::Vector code, which obscures the actual problem. Add a
specific VERIFY to `running_execution_context` to help diagnose this
issue better.
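A sketch of the shape of the fix (member names are illustrative):
```cpp
// Verify the stack explicitly so a failure points at the real problem
// instead of tripping an inlined AK::Vector assertion.
ExecutionContext& VM::running_execution_context()
{
    VERIFY(!m_execution_context_stack.is_empty());
    return *m_execution_context_stack.last();
}
```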
This compression (tag Compression=2) is not very popular on its own,
but it serves as a base for implementing the CCITT3 2D and CCITT4
compressions.
As the format has no real benefits, it is quite hard to find an
application that will encode it for you, so I used the following
program, which calls `libtiff` directly:
```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <vector>
#include <tiffio.h>

// An array of 0s and 1s of length width * height, defined elsewhere.
extern std::vector<uint8_t> array;

int main()
{
    // From: https://stackoverflow.com/a/34257789
    TIFF* image = TIFFOpen("input.tif", "w");
    if (!image)
        return EXIT_FAILURE;

    int const width = 400;
    int const height = 300;
    TIFFSetField(image, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(image, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(image, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISWHITE);
    TIFFSetField(image, TIFFTAG_COMPRESSION, COMPRESSION_CCITTRLE);
    TIFFSetField(image, TIFFTAG_BITSPERSAMPLE, 1);
    TIFFSetField(image, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(image, TIFFTAG_ROWSPERSTRIP, 1);

    std::vector<uint8_t> scan_line(width / 8 + 8, 0);
    for (int i = 0; i < height; i++) {
        std::fill(scan_line.begin(), scan_line.end(), 0);
        for (int x = 0; x < width; ++x) {
            // Pack pixels MSB-first; invert, as PHOTOMETRIC_MINISWHITE
            // means a 0 bit is white.
            uint8_t eight_pixels = scan_line.at(x / 8);
            eight_pixels = eight_pixels << 1;
            eight_pixels |= !array.at(i * width + x);
            scan_line.at(x / 8) = eight_pixels;
        }
        // The last argument of TIFFWriteScanline is the sample index
        // (only meaningful for PlanarConfiguration=2), not a byte
        // count, so pass 0 here.
        if (TIFFWriteScanline(image, scan_line.data(), i, 0) != 1)
            std::cerr << "Something went wrong\n";
    }
    TIFFClose(image);
}
```
As pointed out by @nico, while doing a right-shift to downscale is fine,
a left-shift to upscale gives wrong results. As an example, imagine a
2-bit value containing 3 (both bits set): left-shifting it into 8 bits
gives 192, while the correct result is 255.
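A minimal sketch of a correct upscale (names are illustrative):
```cpp
#include <cassert>
#include <cstdint>

// Scale an n-bit sample to 8 bits by rescaling against the maximum
// representable value, so the maximum input maps to exactly 255.
static uint8_t scale_to_8_bits(uint8_t value, int bits_per_sample)
{
    int const max_value = (1 << bits_per_sample) - 1;
    return static_cast<uint8_t>(value * 255 / max_value);
}

int main()
{
    assert(scale_to_8_bits(0b11, 2) == 255); // a plain left shift gives 192
    assert(scale_to_8_bits(0b01, 2) == 85);  // evenly spread, not 64
}
```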
Fixes the following mistakes:
- The "scrolling box" for a document is not `scrollable_overflow_rect()`
  but the size of the viewport (the initial containing block, as the
  spec says).
- Comparing the edges of the "scrolling box" with the edges of the
  target element makes no sense, because the "scrolling box" edges are
  relative to the page while the result of `get_bounding_client_rect()`
  is relative to the viewport; the two must first be brought into the
  same coordinate space (see the sketch below).
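A minimal sketch of the coordinate-space point, with illustrative types
and values:
```cpp
#include <iostream>

struct Rect {
    double x, y, width, height;
};

int main()
{
    double const scroll_y = 500;                 // current vertical scroll offset
    Rect const viewport { 0, 0, 800, 600 };      // the document's "scrolling box"
    Rect const client_rect { 10, -50, 100, 20 }; // viewport-relative, as from
                                                 // get_bounding_client_rect()

    // Translate into page coordinates before comparing with anything
    // page-relative...
    Rect const page_rect { client_rect.x, client_rect.y + scroll_y,
                           client_rect.width, client_rect.height };

    // ...or keep everything viewport-relative and compare against the
    // viewport itself.
    bool const needs_scrolling = client_rect.y < 0
        || client_rect.y + client_rect.height > viewport.height;

    std::cout << "page-relative top: " << page_rect.y
              << ", needs scrolling: " << needs_scrolling << '\n';
}
```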
These tags are either part of the baseline specification or included by
default by GIMP when exporting TIFF files. Note that we don't add
support for them in the decoder yet. This commit only allows us to parse
the metadata and display it gracefully.
It is implemented identically to how it works in the CPU painter:
1. SampleUnderCorners command saves pixels within corners into a
texture.
2. BlitCornerClipping command uses the texture prepared earlier to
restore pixels within corners.
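A conceptual sketch of the pair of commands on a plain pixel buffer
(the real implementation samples into and blits from a GPU texture;
all names are illustrative):
```cpp
#include <cstdint>
#include <vector>

struct Rect {
    int x, y, width, height;
};

using Pixels = std::vector<uint32_t>;

// SampleUnderCorners: save the pixels covered by the corner rects.
Pixels sample_under_corners(Pixels const& framebuffer, int stride,
                            std::vector<Rect> const& corners)
{
    Pixels saved;
    for (auto const& r : corners)
        for (int y = r.y; y < r.y + r.height; ++y)
            for (int x = r.x; x < r.x + r.width; ++x)
                saved.push_back(framebuffer[y * stride + x]);
    return saved;
}

// BlitCornerClipping: restore the saved pixels after painting, which
// effectively clips the painted content to the rounded rectangle.
void blit_corner_clipping(Pixels& framebuffer, int stride,
                          std::vector<Rect> const& corners, Pixels const& saved)
{
    size_t i = 0;
    for (auto const& r : corners)
        for (int y = r.y; y < r.y + r.height; ++y)
            for (int x = r.x; x < r.x + r.width; ++x)
                framebuffer[y * stride + x] = saved[i++];
}
```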
The styling of elements using `use_pseudo_element()` was only applied
during layout. When an element's style was recomputed later, that
styling was not overruled by the pseudo-element selector styles.
This moves the styling override from `TreeBuilder.cpp` to
`StyleComputer.cpp`. Now the styles are always applied correctly.
I also removed the method `property_id_by_index()` because it was
no longer needed.
Also, some calls to `invalidate_layout()` in the Meter, Progress and
Select elements were no longer needed, because the style values are
updated when the style attribute changes.
This fixes issue #22278.
Previously, `replace` used `find_all` to find all of the positions to
replace. But `find_all` finds all the *overlapping* instances of the
needle, while `replace` assumed that the next position was always at
least `needle.length()` away from the last one. This led to crashes like
https://github.com/SerenityOS/jakt/issues/1159.
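A minimal sketch of the non-overlapping scan `replace` needs, in plain
std::string terms rather than the actual API:
```cpp
#include <iostream>
#include <string>

// Advance the search position past each replacement, instead of
// collecting every (possibly overlapping) occurrence up front.
static std::string replace_all(std::string haystack,
                               std::string const& needle,
                               std::string const& replacement)
{
    size_t position = 0;
    while ((position = haystack.find(needle, position)) != std::string::npos) {
        haystack.replace(position, needle.length(), replacement);
        position += replacement.length(); // skip past what we just wrote
    }
    return haystack;
}

int main()
{
    // "aa" occurs at offsets 0, 1 and 2 of "aaaa"; an overlapping
    // find_all would hand back positions that no longer exist after
    // the first replacement.
    std::cout << replace_all("aaaa", "aa", "b") << '\n'; // prints "bb"
}
```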
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
```sh
$ xs=$(ack -l '\bDeprecatedString\b|deprecated_string' AK Userland \
    Meta Ports Ladybird Tests Kernel)
$ perl -pi -e 's/\bDeprecatedString\b/ByteString/g;
    s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
    $(git diff --name-only | grep '\.cpp\|\.h')
$ gn format $(git ls-files '*.gn' '*.gni')
```
This makes using the line editor much nicer when multi-code-point
graphemes are present in the input (e.g. flag emoji, or some CJK
glyphs), and avoids messing up the buffer when deleting text or moving
the cursor around.
This commit fixes the odd state after pasting some text containing
multibyte code points.
This also increases the input buffer size, as reading 16 bytes at a time
felt slightly laggy when pasting a large number of emojis :^)
We were doing this synchronously, which was unsafe: it caused us to
re-enter the module map entry-setting code while iterating over the
map's entries.
The fix is simply to do what the spec says and queue up a task. This way
the processing gets deferred to a later time.
To avoid stepping into this problem again, I've also added a reentrancy
check in ModuleMap.
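A sketch of what such a check can look like, with illustrative names
and plain std types rather than ModuleMap's actual internals:
```cpp
#include <cassert>
#include <functional>
#include <map>

class ModuleMap {
public:
    void set(int key, int entry)
    {
        // The reentrancy check: mutating the map while its callbacks
        // are firing now fails loudly instead of corrupting iteration.
        assert(!m_firing_callbacks);
        m_entries[key] = entry;
    }

    void for_each_entry(std::function<void(int, int)> const& callback)
    {
        m_firing_callbacks = true;
        for (auto const& [key, entry] : m_entries)
            callback(key, entry);
        m_firing_callbacks = false;
    }

private:
    bool m_firing_callbacks { false };
    std::map<int, int> m_entries;
};
```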
This fixes a sporadic crash in HTML::ModuleMap::add() caught by ASAN.
In particular, this was happening regularly on https://shopify.com/
Let's not assume there is one global OpenGL context, because that might
change once we start creating more than one page inside a single
WebContent process, or contexts for WebGL.
The signal handling code (and possibly other code as well) expects this
struct to have an alignment of 16 bytes, as it pushes this struct on the
stack.
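A sketch of the general technique (the struct name and fields are
illustrative, not the kernel's actual definition):
```cpp
#include <cstdint>

// alignas guarantees the 16-byte alignment that stack-pushing code
// (such as the signal handling path) expects.
struct alignas(16) RegisterState {
    uint64_t gpr[16];
    uint64_t rip;
    uint64_t rflags;
};

static_assert(alignof(RegisterState) == 16);
static_assert(sizeof(RegisterState) % 16 == 0);
```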
This is similar to "unique" shapes, which were removed in commit
3d92c26445.
The key difference is that dictionary shapes don't have a serial number,
but instead have a "cacheable" flag.
Shapes become dictionaries after 64 transitions have occurred, at which
point no further transitions occur.
As long as properties are only added to a dictionary shape, it remains
cacheable: if we've cached the shape pointer in an IC somewhere, that
IC is still valid.
Deleting a property from a dictionary shape causes it to become an
uncacheable dictionary.
Note that deleting a property from a non-dictionary shape still performs
a delete transition.
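A conceptual sketch of these rules (illustrative names and structure,
not LibJS's actual Shape class):
```cpp
#include <cstddef>

struct Shape {
    static constexpr std::size_t max_transitions = 64;

    bool is_dictionary { false };
    bool is_cacheable { true };
    std::size_t transition_count { 0 };

    void add_property()
    {
        if (is_dictionary) {
            // Mutate in place: the Shape pointer stays the same, so
            // any IC that cached it remains valid.
            return;
        }
        if (++transition_count >= max_transitions)
            is_dictionary = true; // no further transitions from here on
    }

    void delete_property()
    {
        if (is_dictionary) {
            // A cached Shape pointer can no longer vouch for the
            // property layout, so stop being cacheable.
            is_cacheable = false;
        }
        // A non-dictionary shape would perform a delete transition here.
    }
};
```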
This fixes an issue on Discord where Object.freeze() would eventually
OOM us, since they add more than 16000 properties to a single object
before freezing it.
It also yields a 15% speedup on Octane/pdfjs.js :^)
MasterPTY::read called DoubleBuffer::read, which takes a mutex (and may
therefore block) while holding m_slave's spinlock. If it did block and
was later rescheduled on a different physical CPU, we would deadlock on
re-locking m_slave inside the unblock callback, since our recursive
spinlock implementation is processor-based, not process-based.
MasterPTY's double buffer unblock callback would take m_slave's
spinlock and then call evaluate_block_conditions(), which takes
BlockerSet's spinlock. On the other hand, BlockerSet's add_blocker
would take BlockerSet's spinlock and then call should_add_blocker,
which calls unblock_if_conditions_are_met, which calls should_unblock,
which finally calls MasterPTY::can_read(), which takes m_slave's
spinlock. The two paths thus take the same pair of spinlocks in
opposite orders.
Resolve this by moving the call to evaluate_block_conditions() out of
the scope of m_slave's spinlock; there's no need to hold the lock while
calling it anyway.
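A sketch of the shape of the fix, using standard types (the kernel uses
spinlocks, but the scoping pattern is the same):
```cpp
#include <condition_variable>
#include <mutex>

std::mutex slave_lock;
std::condition_variable block_conditions;
bool data_available = false;

void on_slave_write()
{
    {
        std::scoped_lock locker(slave_lock);
        data_available = true; // state that genuinely needs the lock
    } // slave_lock released here

    // The equivalent of evaluate_block_conditions(): safe to take
    // other locks now, since slave_lock is no longer held, so the
    // inverted lock order described above cannot occur.
    block_conditions.notify_all();
}
```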
With this change, Document now always has a Web::Page. This means we no
longer rely on the breakable link between Document and BrowsingContext
to find a relevant Web::Page.
Fixes #22290