Mark the entirety of a heap block's storage poisoned at construction.
Unpoison all of a Cell's memory before allocating it, and re-poison as
much as possible on deallocation. Unfortunately, the entirety of the
FreelistEntry must be kept unpoisoned in order for reallocation to work
correctly.
Decreasing the size of FreelistEntry or adding a larger redzone to Cells
would make the instrumentation even better.
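A minimal sketch of the idea, using the public ASAN interface from
<sanitizer/asan_interface.h>. The FreelistEntry layout and the helper
names here are assumptions, and real code would guard these calls
behind an "is ASAN enabled" check:

    #include <sanitizer/asan_interface.h>
    #include <cstddef>
    #include <cstdint>

    struct FreelistEntry {
        FreelistEntry* next { nullptr };
    };

    // Poison a block's entire cell storage when the block is constructed.
    static void poison_block_storage(void* storage, size_t size)
    {
        __asan_poison_memory_region(storage, size);
    }

    // Unpoison a cell's full footprint right before handing it out.
    static void unpoison_cell(void* cell, size_t cell_size)
    {
        __asan_unpoison_memory_region(cell, cell_size);
    }

    // On deallocation, re-poison as much as possible, but keep the
    // FreelistEntry at the start of the cell unpoisoned so the freelist
    // can still be walked when the cell is reallocated.
    static void poison_dead_cell(void* cell, size_t cell_size)
    {
        auto* bytes = static_cast<uint8_t*>(cell);
        __asan_poison_memory_region(bytes + sizeof(FreelistEntry),
            cell_size - sizeof(FreelistEntry));
    }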
Use this to avoid creating a 16-byte cell allocator on x86_64, where the
size of FreelistEntry is 24 bytes. Every JS::Cell must be at least the
size of the FreelistEntry or things start crashing, so the 16-byte
allocator was wasted on that platform.
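In other words, a cell allocator's cell size has to be able to hold a
FreelistEntry. A hypothetical compile-time guard for that invariant
(using the FreelistEntry from the sketch above):

    // Hypothetical guard; on x86_64 sizeof(FreelistEntry) is 24 bytes,
    // so a 16-byte cell allocator would trip this immediately.
    template<size_t cell_size>
    struct CellAllocatorFor {
        static_assert(cell_size >= sizeof(FreelistEntry),
            "Every JS::Cell must be at least the size of FreelistEntry");
    };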
This is the coarsest-grained ASAN instrumentation possible for the LibJS
heap. Future instrumentation could add redzones to heap block
allocations, and poison the entire heap block and only unpoison used
cells at the CellAllocator level.
The previous VERIFY() call checked that aligned_alloc() didn't return
MAP_FAILED. When out of memory, aligned_alloc() returns a null pointer,
so let's check for that instead.
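A sketch of the corrected check (the block size constant and VERIFY
usage are approximations of the surrounding LibJS code):

    // aligned_alloc() reports failure with nullptr, not MAP_FAILED.
    auto* storage = aligned_alloc(HeapBlock::block_size, HeapBlock::block_size);
    VERIFY(storage);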
This patch adds a BlockAllocator to the GC heap where we now cache up to
64 HeapBlock-sized mmaps that get recycled when allocating HeapBlocks.
This improves test-js runtime performance by ~35%, pretty cool! :^)
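A minimal sketch of the recycling idea, using std::vector and plain
mmap()/munmap() for illustration; the real BlockAllocator's members,
error handling, and block-alignment concerns are not shown:

    #include <sys/mman.h>
    #include <cstddef>
    #include <vector>

    class BlockAllocator {
    public:
        static constexpr size_t block_size = 16 * 1024; // assumed HeapBlock size
        static constexpr size_t max_cached_blocks = 64;

        void* allocate_block()
        {
            // Reuse a previously freed block if one is cached.
            if (!m_cache.empty()) {
                auto* block = m_cache.back();
                m_cache.pop_back();
                return block;
            }
            // Otherwise fall back to a fresh mmap (error handling omitted).
            return mmap(nullptr, block_size, PROT_READ | PROT_WRITE,
                MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        }

        void deallocate_block(void* block)
        {
            // Keep up to 64 blocks around instead of unmapping them.
            if (m_cache.size() < max_cached_blocks) {
                m_cache.push_back(block);
                return;
            }
            munmap(block, block_size);
        }

    private:
        std::vector<void*> m_cache;
    };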
Instead of being its own separate unrelated class.
This automatically makes typed array properties available to it,
as well as making it available to the runtime.
So far we only have two states: Live and Dead. In the future, we can
add additional states to support incremental sweeping and/or
multi-stage cell destruction.
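A sketch of what such a state enum on Cell might look like (the
underlying type is an assumption):

    // Only two states for now; more can be added later for incremental
    // sweeping and/or multi-stage cell destruction.
    enum class State : bool {
        Live,
        Dead,
    };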
We had two functions for doing mostly the same thing. Combine both
of them into String::find() and use that everywhere.
Also add some tests to cover basic behavior.
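A hedged usage sketch, assuming the unified String::find() returns an
Optional index (the exact overloads and return type are assumptions):

    String haystack = "well, hello friends!";
    auto index = haystack.find("friends");
    if (index.has_value())
        dbgln("found at offset {}", *index);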
By doing the offset calculation in {get,put}_by_index() we would
delegate these operations to Object for any index >= (array length -
byte offset). By doing the offset calculation in data() instead, we can
just use the unaltered property index for indexing the returned Span.
In other words: data()[0] now returns the same value as indexing the
TypedArray at index 0 in JS.
This also fixes a bug in the js REPL, which did not consider the byte
offset and subsequently accessed the underlying ArrayBuffer data with an
incorrect index.
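A sketch of the data() shape described above (member and type names are
approximations of the LibJS ones):

    // The byte offset is applied once, here, so callers can index the
    // returned Span with the unaltered property index.
    Span<UnderlyingBufferDataType> data() const
    {
        auto* base = m_viewed_array_buffer->buffer().data() + m_byte_offset;
        return { reinterpret_cast<UnderlyingBufferDataType*>(base), m_array_length };
    }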
Problem:
- `static` variables consume memory and are sometimes less
optimizable.
- `static const` variables can usually be `constexpr`.
- `static` function-local variables require an initialization check
every time the function is run.
Solution:
- If a global `static` variable is only used in a single function then
move it into the function and make it non-`static` and `constexpr`
(see the sketch after this list).
- Make all global `static` variables `constexpr` instead of `const`.
- Change function-local `static const[expr]` variables to be just
`constexpr`.
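A minimal before/after sketch of the pattern (the names are made up):

    #include <algorithm>

    // Before: a global static const that only one function uses.
    static const int s_max_widgets = 16;
    int clamp_widget_count_before(int count)
    {
        return std::min(count, s_max_widgets);
    }

    // After: a function-local constexpr; no static storage is needed, and
    // the value is guaranteed to be a compile-time constant.
    int clamp_widget_count_after(int count)
    {
        constexpr int max_widgets = 16;
        return std::min(count, max_widgets);
    }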
We already do this for the SimpleIndexedPropertyStorage, so for indexed
properties with GenericIndexedPropertyStorage this would previously
crash. Since overwriting the array-like size with a larger value won't
magically insert values at previously unset indices, we need to handle
such an out-of-bounds access gracefully and just return an empty value.
Fixes #7043.
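A hedged sketch of the graceful handling (member names are assumptions
about GenericIndexedPropertyStorage's internals):

    Optional<ValueAndAttributes> get(u32 index) const
    {
        // Indices beyond the array-like size were never written to, so
        // treat them as holes and return an empty value instead of
        // crashing.
        if (index >= m_array_size)
            return {};
        auto it = m_sparse_elements.find(index);
        if (it == m_sparse_elements.end())
            return {};
        return it->value;
    }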
This was a bit hard to find as a local variable; rename it to the
uppercase LENGTH_SETTER_GENERIC_STORAGE_THRESHOLD and move it to the top
(next to SPARSE_ARRAY_HOLE_THRESHOLD) for good visibility.
Before this patch, every shape would permanently remember every other
shape it had ever transitioned to. This could lead to pathological
accumulation of unused shape objects in some cases.
Fix this by using WeakPtr instead of a strongly visited Shape* in the
forward transition chain map. This means that we will now miss out
on some shape sharing opportunities, but since this is not required
for correctness, it doesn't matter.
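A sketch of the change (the key type, map shape, and lookup helper are
assumptions):

    // Before: HashMap<TransitionKey, Shape*> kept every transitioned-to
    // shape alive forever. After: weak pointers, so a collected shape
    // just means a missed sharing opportunity.
    HashMap<TransitionKey, WeakPtr<Shape>> m_forward_transitions;

    Shape* get_cached_forward_transition(TransitionKey const& key)
    {
        auto it = m_forward_transitions.find(key);
        if (it == m_forward_transitions.end())
            return nullptr;
        // May be null if the target shape has been garbage collected;
        // the caller then simply creates a fresh shape.
        return it->value.ptr();
    }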
Note that the backward transition chain is still strongly cached,
as it's necessary for the reification of property tables.
An interesting future optimization could be to allow property tables
to get garbage collected (by detaching them from the shape object)
and then reconstituted from the backward transition chain (if needed).
If we're able to allocate cells from a freelist, we should always
prefer that over the lazy freelist, since this may further defer
faulting in additional memory for the HeapBlock.
Thanks to @gunnarbeutner for pointing this out. :^)
HeapBlock now implements the same lazy freelist as LibC malloc() does,
where new blocks start out in a "bump allocator" mode that gets used
until we've bump-allocated all the way to the end of the block.
Then we fall back to the old freelist style as before.
This means we don't have to pre-initialize the freelist on HeapBlock
construction. This defers page faults and reduces memory usage for
blocks where not all cells get used. :^)
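A combined sketch of the allocation order from the two changes above,
assuming FreelistEntry is itself a Cell and that cell(index) returns a
pointer into the block's storage (names are approximate):

    Cell* allocate()
    {
        // Always prefer the freelist: that memory has been touched
        // before, so reusing it defers faulting in fresh pages.
        if (m_freelist) {
            auto* recycled = m_freelist;
            m_freelist = m_freelist->next;
            return recycled;
        }
        // Lazy freelist ("bump allocator") mode: hand out the next
        // never-used cell until we reach the end of the block.
        if (m_next_lazy_freelist_index < cell_count())
            return cell(m_next_lazy_freelist_index++);
        // The block is completely full.
        return nullptr;
    }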
This had very bad interactions with ccache, often leading to rebuilds
with 100% cache misses, etc. Ali says it wasn't that big of a speedup
in the end anyway, so let's not bother with it.
We can always bring it back in the future if it seems like a good idea.