This syscall only reads from the shared m_space field, but that field
is only overwritten by Process::attach_resources, before the process
is initialized (aka, before syscalls can happen), by Process::finalize,
which is only called after all the process' threads have exited (aka,
syscalls cannot happen anymore), and by Process::do_exec, which kills
all other syscall-capable threads before
doing so. Space's find_region_containing already holds its own lock,
and as such there's no need to hold the big lock.
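For illustration, a rough standalone sketch of the locking argument (toy
types and names, not the actual Space class): the lookup takes the address
space's own lock internally, so a caller that only performs this read does
not also need the big process lock.

    #include <cstddef>
    #include <cstdint>
    #include <mutex>
    #include <vector>

    struct Region {
        uintptr_t base { 0 };
        size_t size { 0 };
    };

    class Space {
    public:
        Region const* find_region_containing(uintptr_t address) const
        {
            std::lock_guard guard(m_lock); // the space guards its own region list
            for (auto const& region : m_regions) {
                if (address >= region.base && address < region.base + region.size)
                    return &region;
            }
            return nullptr;
        }

    private:
        mutable std::mutex m_lock;
        std::vector<Region> m_regions;
    };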
This syscall doesn't touch any intra-process shared resources and only
accesses the time via the atomic TimeManagement::now, so there's no need
to hold the big lock.
This syscall doesn't touch any intra-process shared resources and only
accesses the time via the atomic TimeManagement::current_time, so there's
no need to hold the big lock.
This syscall doesn't touch any intra-process shared resources and
reads the time via the atomic TimeManagement::current_time, so it
doesn't need to hold any lock.
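The underlying pattern, as a minimal standalone sketch (toy names, not the
actual TimeManagement API): a timestamp that is only ever published with
atomic stores can be read from syscall context without taking any lock.

    #include <atomic>
    #include <cstdint>

    struct ToyTimeManagement {
        std::atomic<uint64_t> m_ns_since_boot { 0 };

        // Called from the timer tick; publishes the new time atomically.
        void update(uint64_t ns) { m_ns_since_boot.store(ns, std::memory_order_relaxed); }

        // Safe to call concurrently from any syscall; no big lock required.
        uint64_t now_ns() const { return m_ns_since_boot.load(std::memory_order_relaxed); }
    };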
When booting APs, we identity map a region at 0x8000 while doing the
initial bringup sequence. This is the only thing in the kernel that
requires an identity mapping, yet we had a bunch of generic APIs and a
dedicated VirtualRangeAllocator in every PageDirectory for this purpose.
This patch simplifies the situation by moving the identity mapping logic
to the AP boot code and removing the generic APIs.
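As a toy sketch of the idea (illustrative types, not the actual
MemoryManager API): an identity mapping is just a page-table entry whose
virtual and physical addresses are equal, and it is only needed around AP
bringup.

    #include <cstdint>
    #include <map>

    using VirtAddr = uint64_t;
    using PhysAddr = uint64_t;

    struct ToyPageTable {
        std::map<VirtAddr, PhysAddr> entries;
        void map(VirtAddr v, PhysAddr p) { entries[v] = p; }
        void unmap(VirtAddr v) { entries.erase(v); }
    };

    void boot_aps(ToyPageTable& table)
    {
        constexpr VirtAddr ap_trampoline = 0x8000;
        table.map(ap_trampoline, ap_trampoline); // identity: virtual == physical
        // ... copy the AP startup stub to 0x8000 and send INIT/SIPI to each AP ...
        table.unmap(ap_trampoline); // nothing else in the kernel needs identity mappings
    }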
...and also RangeAllocator => VirtualRangeAllocator.
This clarifies that the ranges we're dealing with are *virtual* memory
ranges and not anything else.
Now that KResult and KResultOr are used consistently throughout the
kernel, it's no longer necessary to return negative error codes.
However, we were still doing that in some places, so let's fix all those
(bugs) by removing the minuses. :^)
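A toy illustration of the bug class (not the real KResult type): once the
wrapper carries a plain positive errno code, a leftover minus sign turns
EINVAL into a nonsense value.

    #include <cerrno>

    struct ToyKResult {
        int error { 0 }; // positive errno code; 0 means success
        bool is_error() const { return error != 0; }
    };

    ToyKResult frob_old() { return ToyKResult { -EINVAL }; } // leftover minus: bogus code
    ToyKResult frob_new() { return ToyKResult { EINVAL }; }  // fixed: plain errno code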
We were allocating thread FPU state separately in order to ensure a
16-byte alignment. There should be no need to do that.
This patch makes it a regular value member of Thread instead, dodging
one heap allocation during thread creation.
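A minimal sketch of the change, with illustrative names: an over-aligned
value member gives the same 16-byte guarantee that the separate heap
allocation was providing.

    #include <cstddef>

    struct FPUState {
        alignas(16) unsigned char buffer[512] {}; // fxsave area must be 16-byte aligned
    };

    struct Thread {
        // Before: FPUState* m_fpu_state { /* separate 16-byte aligned heap allocation */ };
        FPUState m_fpu_state {}; // After: plain value member; alignment comes from the type
    };

    static_assert(alignof(Thread) >= 16);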
This patch gets rid of the "valid" bit in PageDirectory since it was
only used to communicate an allocation failure during construction.
We now do all the work in the static factory functions instead of in the
constructor, which allows us to simply return nullptr instead of an
"invalid" PageDirectory.
When constructing an AnonymousVMObject with the AllocateNow allocation
strategy, we accidentally allocated the committed pages directly through
MemoryManager instead of taking them from our m_unused_physical_pages
CommittedPhysicalPageSet, which meant they were counted as allocated in
MemoryManager, but were still counted as unallocated in the PageSet,
which would then try to uncommit them on destruction, resulting in a
failed assertion.
To help prevent similar issues in the future, a Badge<T> was added to
MM::allocate_committed_user_physical_page to ensure committed pages can
only be allocated via a CommittedPhysicalPageSet.
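The Badge<T> pattern referred to here, as a toy standalone sketch
(illustrative types): only code that can construct a
Badge<CommittedPhysicalPageSet> -- that is, the page set itself -- is able
to call the committed-page allocator.

    template<typename T>
    class Badge {
        friend T; // only T can construct a Badge<T>
        Badge() = default;
    };

    class CommittedPhysicalPageSet;

    class MemoryManager {
    public:
        void* allocate_committed_user_physical_page(Badge<CommittedPhysicalPageSet>)
        {
            return nullptr; // hand out one of the previously committed pages here
        }
    };

    class CommittedPhysicalPageSet {
    public:
        void* take_one(MemoryManager& mm)
        {
            // Only this class can create the badge, so only it can reach the allocator.
            return mm.allocate_committed_user_physical_page({});
        }
    };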
When we hit a COW fault and discover that no other process is sharing
the physical page, we simply remap it r/w and save ourselves the
trouble. When this happens, we can also give back (uncommit) one of our
shared committed COW pages, since we won't be needing it.
We had this optimization before, but I mistakenly removed it in
50472fd69f since I had misunderstood
it to be the reason for a panic.
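A toy sketch of that shortcut (illustrative names, not the kernel's Region
code): when we are the only holder of the page, there is nothing to copy,
so the page is simply made writable again and one committed page goes back
to the shared pool.

    #include <cstddef>

    struct SharedCommittedCowPages {
        size_t committed { 0 };
        void uncommit_one()
        {
            if (committed > 0)
                --committed;
        }
    };

    struct CowPage {
        size_t ref_count { 1 };
        bool writable { false };
    };

    void handle_cow_fault(CowPage& page, SharedCommittedCowPages& pool)
    {
        if (page.ref_count == 1) {
            page.writable = true; // remap r/w in place; nothing to copy
            pool.uncommit_one();  // we will never need a private copy of this page
            return;
        }
        // ... otherwise take a committed page from the pool and copy into it ...
    }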
We currently overcommit for COW when forking a process and cloning its
memory regions. Both the parent and child process share a set of
committed COW pages.
If there's COW sharing across more than two processes within a lineage
(e.g. parent, child & grandchild), it's possible to exhaust these pages.
When the shared set is emptied, the next COW fault in each process must
detach from the shared set and fall back to on-demand allocation.
This patch makes sure that we detach from the shared set once we
discover it to be empty (during COW fault handling). This fixes an issue
where we'd try to allocate from an exhausted shared set while building
GNU binutils inside SerenityOS.
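A toy sketch of the fix (illustrative names): when the shared committed set
turns out to be empty during a COW fault, the region drops its reference to
it and falls back to on-demand allocation instead of trying to take a page
from an exhausted set.

    #include <cstddef>
    #include <memory>

    struct SharedCommittedCowPages {
        size_t remaining { 0 };
        bool is_empty() const { return remaining == 0; }
        void* take_one()
        {
            --remaining;
            return nullptr; // hand out a pre-committed physical page here
        }
    };

    struct Region {
        std::shared_ptr<SharedCommittedCowPages> shared_cow_pages;

        void* allocate_page_for_cow_fault()
        {
            if (shared_cow_pages && shared_cow_pages->is_empty())
                shared_cow_pages = nullptr;          // detach from the exhausted set
            if (shared_cow_pages)
                return shared_cow_pages->take_one(); // use a pre-committed page
            return allocate_on_demand();             // normal on-demand allocation
        }

        void* allocate_on_demand() { return nullptr; }
    };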
It was doing a bunch of things it didn't need to do. I think we had
misunderstood the base class as having copied m_lock in its copy
constructor, but it's actually default-initialized.
We had issues with committed physical pages getting miscounted in some
situations, and instead of figuring out what was going wrong and making
sure all the commits had matching uncommits, this patch makes the
problem go away by adding an RAII class to manage this instead. :^)
MemoryManager::commit_user_physical_pages() now returns an (optional)
CommittedPhysicalPageSet. You can then allocate pages from the page set
by calling take_one() on it. Any unallocated pages are uncommitted upon
destruction of the page set.
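As a rough standalone sketch of that RAII shape (toy types and names, not
the kernel's exact interface): pages are committed up front, handed out one
at a time via take_one(), and whatever was never taken is uncommitted
automatically when the set is destroyed.

    #include <cstddef>
    #include <optional>

    struct ToyMemoryManager {
        size_t committed_pages { 0 };
        bool commit(size_t count) { committed_pages += count; return true; }
        void uncommit(size_t count) { committed_pages -= count; }
        void* take_committed_page() { return nullptr; /* hand out a real page here */ }
    };

    class CommittedPhysicalPageSet {
    public:
        CommittedPhysicalPageSet(ToyMemoryManager& mm, size_t count)
            : m_mm(mm), m_count(count) {}
        CommittedPhysicalPageSet(CommittedPhysicalPageSet&& other)
            : m_mm(other.m_mm), m_count(other.m_count) { other.m_count = 0; }
        ~CommittedPhysicalPageSet()
        {
            if (m_count)
                m_mm.uncommit(m_count); // leftover pages are uncommitted automatically
        }

        void* take_one()
        {
            --m_count; // callers must not take more pages than were committed
            return m_mm.take_committed_page();
        }

    private:
        ToyMemoryManager& m_mm;
        size_t m_count { 0 };
    };

    std::optional<CommittedPhysicalPageSet> commit_user_physical_pages(ToyMemoryManager& mm, size_t count)
    {
        if (!mm.commit(count))
            return std::nullopt;
        return CommittedPhysicalPageSet { mm, count };
    }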
Since we only ever add 1 super physical region, there's no reason to
add the extra redirection and allocation that comes with a dynamically
sized Vector.