This reverts commit d1d24eaef4.
I missed the fact that traverse_as_directory uses a temporary buffer,
meaning that entries created from its callback will point to freed
memory.
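A minimal sketch of the hazard, with a hypothetical callback shape (the
real signature may differ): the entry name is a view into a buffer that
only lives for the duration of the traversal, so it has to be
deep-copied before the callback returns.

```cpp
// Hypothetical sketch: entry.name is a StringView into a temporary
// buffer owned by traverse_as_directory().
Vector<String> names;
inode.traverse_as_directory([&](auto& entry) {
    // Wrong: storing the view itself leaves it dangling once the
    // temporary buffer is freed.
    // views.append(entry.name);

    // Right: make an owned copy while the buffer is still alive.
    names.append(String(entry.name));
    return true;
});
```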
This only moves the issue, as PerformanceEventBuffer::add_process can't
fail yet, but it will allow us to remove the non-failable
Custody::absolute_path API.
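A sketch of the failable counterpart this is heading towards (the
try_serialize_absolute_path name and the ErrorOr-style return are
assumptions, not confirmed by this commit):

```cpp
// Hypothetical sketch: a failable API lets the caller propagate
// allocation failure instead of asserting on it.
auto path_or_error = custody.try_serialize_absolute_path();
if (path_or_error.is_error())
    return path_or_error.error();
auto path = path_or_error.release_value();
```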
This function is an extended version of `chmod(2)` that lets one control
whether to dereference symlinks, and specify a file descriptor to a
directory that will be used as the base for relative paths.
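A userspace usage sketch (the directory, path, and mode below are
placeholders):

```cpp
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int main()
{
    int dirfd = open("/tmp", O_RDONLY | O_DIRECTORY);
    if (dirfd < 0) {
        perror("open");
        return 1;
    }
    // Change the mode of "link" itself, relative to dirfd, without
    // following it if it is a symbolic link.
    if (fchmodat(dirfd, "link", 0600, AT_SYMLINK_NOFOLLOW) < 0)
        perror("fchmodat");
    return 0;
}
```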
When deleting an entire AddressSpace, we don't need to do TLB flushes
at all (since the entire page directory is going away anyway).
We also don't need to deallocate VM ranges one by one, since the entire
VM range allocator will be deleted anyway.
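A minimal sketch of that teardown fast path (all names assumed, not the
actual implementation):

```cpp
// Hypothetical sketch: when the whole AddressSpace dies, per-page TLB
// flushes and per-range deallocation are pure overhead, since the page
// directory and the range allocator are both about to be destroyed.
void AddressSpace::remove_all_regions()
{
    for (auto& region : m_regions)
        region.unmap(ShouldFlushTLB::No); // the directory is going away
    m_regions.clear();
    // Deliberately no VM range deallocation: the allocator dies with us.
}
```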
We can use `StringView::for_each_split_view` here to avoid the potential
allocation of a `Vector<StringView>` that we would get from the normal
split view functions.
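A usage sketch, assuming an AK-style for_each_split_view() (exact
parameters may differ): each part is handed to the callback as a
StringView, so no intermediate Vector is ever allocated.

```cpp
StringView path = "/usr/local/bin"sv;
path.for_each_split_view('/', SplitBehavior::Nothing, [](StringView part) {
    dbgln("part: {}", part); // "usr", "local", "bin"
});
```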
This mechanism was unsafe to use in any multithreaded context, since
the hook function was invoked on a raw pointer *after* decrementing
the local ref count.
Since we don't use it for anything anymore, let's just get rid of it.
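A sketch of why the hook was racy (shape assumed):

```cpp
void unref() const
{
    auto new_count = --m_ref_count; // drop our reference first
    if (new_count == 1) {
        // Another thread may unref to 0 and delete the object right
        // here, so this call can run on a dangling `this`.
        one_ref_left();
    } else if (new_count == 0) {
        delete this;
    }
}
```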
Previously we were uncaching inodes from TmpFSInode::one_ref_left().
This was not safe, since one_ref_left() was effectively being called
on a raw pointer after decrementing the local ref count and observing
it become 1. There was a race here where someone else could trigger
the destructor by unreffing to 0 before one_ref_left() got called,
causing us to call one_ref_left() on a deleted inode.
We fix this by using the new remove_from_secondary_lists() mechanism
in ListedRefCounted and synchronizing all access to the TmpFS inode
map with the main Inode::all_instances() lock.
There's probably a nicer way to solve this.
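A sketch of the shape of the fix (names follow the commit, exact
signatures assumed): ListedRefCounted invokes the hook with
Inode::all_instances() already locked, so removal from the TmpFS inode
map can't race with a concurrent final unref.

```cpp
void TmpFSInode::remove_from_secondary_lists()
{
    // Called with the Inode::all_instances() lock held.
    fs().m_inodes.remove(index());
}
```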
When the ref count drops to zero, look for remove_from_secondary_lists()
and call it on the ref-counting target if present *while the lock is
held*.
This allows listed-ref-counted objects to be present in multiple lists
and still have synchronized removal on final unref.
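A minimal sketch of the unref path (shape assumed): the count drop, the
main-list removal, and the secondary-list removal all happen under the
same all_instances() lock, so no other thread can observe the object
between "count hit zero" and "unlinked from every list".

```cpp
bool unref() const
{
    auto new_ref_count = T::all_instances().with([&](auto& list) {
        auto count = deref_base(); // decrement, return the new count
        if (count == 0) {
            auto& self = const_cast<T&>(static_cast<T const&>(*this));
            // Let T unlink itself from any other lists it lives in,
            // while the lock is still held.
            if constexpr (requires { self.remove_from_secondary_lists(); })
                self.remove_from_secondary_lists();
            list.remove(self);
        }
        return count;
    });
    if (new_ref_count == 0)
        delete &static_cast<T const&>(*this);
    return new_ref_count == 0;
}
```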
For "destructive" disallowance of allocations throughout the system,
Thread gains a member that controls whether allocations are currently
allowed or not. kmalloc checks this member on both allocations and
deallocations (with the exception of early boot) and panics the kernel
if allocations are disabled. This lets critical sections that must not
allocate fail fast, making for easier debugging.
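A sketch of the check itself (the member and helper names here are
assumptions):

```cpp
void* kmalloc(size_t size)
{
    // Early boot has no current thread yet, so skip the check then.
    if (auto* thread = Thread::current(); thread && !thread->is_allocation_enabled())
        PANIC("Allocation while allocations are disabled");
    // ... proceed with the actual allocation ...
}
```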
PS: My first proper Kernel commit :^)
The purpose of the PageDirectory::m_page_tables map was really just
to act as ref-counting storage for PhysicalPage objects that were
being used for the directory's page tables.
However, this was basically redundant, since we can find the physical
address of each page table from the page directory, and we can find the
PhysicalPage object from MemoryManager::get_physical_page_entry().
So if we just manually ref() and unref() the pages when they go in and
out of the directory, we no longer need PageDirectory::m_page_tables!
Not only does this remove a bunch of kmalloc() traffic, it also solves
a race condition that would occur when lazily adding a new page table
to a directory:
Previously, MemoryManager::ensure_pte() would call HashMap::set()
to insert the new page table into m_page_tables. If the HashMap had to
grow its internal storage, it would call kmalloc(), and if that
kmalloc() needed to perform heap expansion, it would end up calling
ensure_pte() again, which would clobber the page directory mapping used
by the outer invocation of ensure_pte().
The net result of the above bug would be that any invocation of
MemoryManager::ensure_pte() could erroneously return a pointer into
a kernel page table instead of the correct one!
This whole problem goes away when we remove the HashMap, as ensure_pte()
no longer does anything that allocates from the heap.
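A sketch of the manual ref-counting that replaces the map (all names
assumed):

```cpp
// Install: the page directory entry itself now owns one ref on the
// page table's PhysicalPage.
auto page_table = MM.allocate_physical_page();
pde.set_page_table_base(page_table->paddr().get());
page_table->ref();

// Remove: recover the PhysicalPage from the physical address stored
// in the PDE, then drop that ref. No HashMap lookup, no kmalloc().
auto& physical_page = MM.get_physical_page_entry(PhysicalAddress(pde.page_table_base())).physical_page;
pde.clear();
physical_page.unref();
```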