Instead of having two separate implementations of AK::RefCounted, one
for userspace and one for kernelspace, there are now two variants:
RefCounted and AtomicRefCounted.
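A minimal sketch of the split (hand-written here for illustration; the
real AK classes carry more machinery such as CRTP and destruction
handling): RefCounted keeps a plain counter, while AtomicRefCounted
uses atomic operations so it is safe to share across threads.

    #include <AK/Atomic.h>
    #include <AK/Types.h>

    // Sketch only, not the actual AK implementation.
    class RefCountedBase {
    public:
        void ref() const { ++m_ref_count; }
        // Returns true when the last reference is gone.
        bool deref_base() const { return --m_ref_count == 0; }

    private:
        mutable u32 m_ref_count { 1 };
    };

    class AtomicRefCountedBase {
    public:
        void ref() const { m_ref_count.fetch_add(1, AK::memory_order_relaxed); }
        bool deref_base() const
        {
            return m_ref_count.fetch_sub(1, AK::memory_order_acq_rel) == 1;
        }

    private:
        mutable Atomic<u32> m_ref_count { 1 };
    };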
All users that relied on the default constructor use a None lock rank
for now. This will make it easier, in the future, to remove LockRank
and to actually annotate the ranks, since the remaining sites can be
found by searching for None.
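For example (illustrative), a member that used to rely on the default
constructor now spells the placeholder rank out:

    // Before: Spinlock m_lock;
    // After: explicit placeholder rank; `grep -r 'LockRank::None'`
    // finds every site that still needs a real rank.
    Spinlock m_lock { LockRank::None };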
Unlike Clang, GCC does not support 8-byte atomics on i686 with the
-mno-80387 flag set, so until that is fixed, implement a minimal set of
atomics that are currently required.
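One way such a fallback can look (a hedged sketch, not the actual
patch): provide the __atomic_*_8 libcalls that GCC emits for 8-byte
atomics it cannot inline, serialized behind a simple lock.

    #include <stdint.h>

    // Illustrative only: a real kernel would use its own spinlock
    // primitive (or the cmpxchg8b instruction directly).
    static volatile bool s_lock;

    static void take_lock()
    {
        while (__atomic_test_and_set(&s_lock, __ATOMIC_ACQUIRE))
            ;
    }

    static void release_lock()
    {
        __atomic_clear(&s_lock, __ATOMIC_RELEASE);
    }

    extern "C" uint64_t __atomic_load_8(void volatile* ptr, int)
    {
        take_lock();
        uint64_t value = *static_cast<uint64_t volatile*>(ptr);
        release_lock();
        return value;
    }

    extern "C" void __atomic_store_8(void volatile* ptr, uint64_t value, int)
    {
        take_lock();
        *static_cast<uint64_t volatile*>(ptr) = value;
        release_lock();
    }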
Signal dispatch is already protected by the global scheduler lock, but
in some cases we also took Thread::m_lock for some reason. This led to
a number of different deadlocks that started showing up with 4+ CPUs
attached to the system.
As a first step towards solving this, simply don't take the thread lock
and let the scheduler lock cover it.
Eventually, we should work in the other direction and break the
scheduler lock into much finer-grained locks, but let's get out of the
deadlock swamp first.
This is not necessary, and is a leftover from before Thread started
using the ListedRefCounted pattern to be safely removed from lists on
the last call to unref().
As soon as we've saved CR2 (the faulting address), we can re-enable
interrupt processing. This should make the kernel more responsive under
heavy fault loads.
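The ordering matters because a nested fault would clobber CR2. Roughly
(function names here are assumptions, not the exact kernel code):

    void page_fault_handler(RegisterState& regs)
    {
        // CR2 holds the faulting address; grab it before anything else
        // can fault and overwrite it.
        FlatPtr fault_address = read_cr2();
        // From here on it's safe to service interrupts again.
        sti();
        handle_page_fault(regs, fault_address);
    }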
This fixes an issue where a sharing process would map the "lazy
committed page" early and then get stuck with that page even after
it had been replaced in the VMObject by a page fault.
Regressed in 27c1135d30, which made it
happen every time with the backing bitmaps used for WebContent.
Region::physical_page() now takes the VMObject lock while accessing the
physical pages array, and returns a RefPtr<PhysicalPage>. This ensures
that the array access is safe.
Region::physical_page_slot() now VERIFY()'s that the VMObject lock is
held by the caller. Since we're returning a reference to the physical
page slot in the VMObject's physical page array, this is the best we
can do here.
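In rough shape (member names and locking primitives here are
assumptions, not the exact code):

    RefPtr<PhysicalPage> Region::physical_page(size_t index) const
    {
        SpinlockLocker locker(vmobject().m_lock);
        return vmobject().physical_pages()[first_page_index() + index];
    }

    RefPtr<PhysicalPage>& Region::physical_page_slot(size_t index)
    {
        // We hand out a reference into the array itself, so the best we
        // can do is assert that the caller holds the lock.
        VERIFY(vmobject().m_lock.is_locked());
        return vmobject().physical_pages()[first_page_index() + index];
    }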
Note that SMP is still off by default, but this basically removes the
weird "SMP on but threads don't get scheduled" behavior we had by
default. If you pass "smp=on" to the kernel, you now get SMP. :^)
We really only need the VMObject lock when accessing the physical pages
array, so once we have a strong pointer to the physical page we want to
remap, we can give up the VMObject lock.
This fixes a deadlock I encountered while building DOOM on SMP.
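The shape of the fix, roughly (illustrative):

    RefPtr<PhysicalPage> page;
    {
        SpinlockLocker locker(vmobject().m_lock);
        page = vmobject().physical_pages()[page_index];
    }
    // The strong ref keeps the page alive, so the remap itself can
    // happen without the VMObject lock held.
    remap_vmobject_page(page_index, *page);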
When handling a page fault, we only need to remap the faulting region in
the current process. There's no need to traverse *all* regions that map
the same VMObject and remap them cross-process as well.
Those other regions will get remapped lazily by their own page fault
handlers eventually. Or maybe they won't and we avoided some work. :^)
- Instead of holding the VMObject lock across physical page allocation
and quick-map + copy, we now only hold it when updating the VMObject's
physical page slot, as sketched below.
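A hedged sketch of that narrowing (helper names are assumptions):

    // 1. Allocate the replacement page and copy into it without holding
    //    the VMObject lock; this is the slow part.
    auto new_page = MM.allocate_physical_page();
    copy_physical_page_via_quickmap(*current_page, *new_page); // hypothetical helper

    // 2. Take the lock only for the brief moment we publish the page.
    {
        SpinlockLocker locker(vmobject().m_lock);
        vmobject().physical_pages()[page_index] = move(new_page);
    }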
Make sure we reject the unveil attempt with EPERM if the veil was locked
by another thread while we were parsing arguments (and not holding the
veil state spinlock).
Thanks Brian for spotting this! :^)
Amendment to #14907.
To ensure that we stay on the same CPU that acquired the spinlock until
we're completely unlocked, we now leave the critical section *before*
re-enabling interrupts.
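In other words, the unlock path now looks something like this (sketch):

    void Spinlock::unlock(InterruptsState previous_interrupts_state)
    {
        VERIFY(is_locked());
        m_lock.store(0, AK::memory_order_release);
        // Leave the critical section while interrupts are still off, so
        // we cannot be preempted (and migrated to another CPU) before
        // the unlock is fully complete.
        Processor::leave_critical();
        Processor::restore_interrupts_state(previous_interrupts_state);
    }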
We want to grab g_scheduler_lock *before* Thread::m_block_lock.
This appears to have fixed a deadlock that I encountered while building
DOOM with make -j2.
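Concretely, the required nesting order (illustrative):

    SpinlockLocker scheduler_locker(g_scheduler_lock); // outer lock first
    SpinlockLocker block_locker(thread.m_block_lock);  // inner lock second
    // Taking them in the opposite order on another CPU is what produces
    // the ABBA deadlock.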
Path resolution may do blocking I/O so we must not do it while holding
a spinlock. There are tons of problems like this throughout the kernel
and we need to find and fix all of them.
This fixes an issue where we could get preempted after acquiring the
current Processor pointer, but before calling methods on it.
I strongly suspect this was the cause of "Processor::current() == this"
assertion failures.
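The bug class in miniature (do_something() stands in for any member
access; ScopedCritical is the kernel's preemption guard):

    // Racy: we can be preempted and rescheduled on another CPU between
    // taking the reference and using it.
    auto& processor = Processor::current();
    processor.do_something(); // may no longer be *our* processor

    // Fixed: stay in a critical section for the whole access.
    {
        ScopedCritical critical;
        Processor::current().do_something();
    }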
This matches our general macro use, and specifically other verification
macros like VERIFY(), VERIFY_NOT_REACHED(), VERIFY_INTERRUPTS_ENABLED(),
and VERIFY_INTERRUPTS_DISABLED().
Instead of locking it twice, we now frontload all the work that doesn't
touch the fd table, and then only lock it towards the end of the
syscall.
The benefit here is simplicity. The downside is that we do a bit of
unnecessary work in the EMFILE error case, but we don't need to optimize
that case anyway.
If the final copy_to_user() call fails when writing the file descriptors
to the output array, we have to make sure the file descriptors don't
remain in the process file descriptor table. Otherwise they are
basically leaked, as userspace is not aware of them.
This matches the behavior of our sys$socketpair() implementation.
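A hedged sketch of that error path (helper names are assumptions):

    if (copy_to_user(user_fds, fds, sizeof(fds)).is_error()) {
        // Userspace never learned about these fds, so take them back out
        // of the table instead of leaking them.
        for (auto fd : fds)
            (void)close_fd(fd); // hypothetical cleanup helper
        return EFAULT;
    }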
We don't need to explicitly check for EMFILE conditions before doing
anything in sys$pipe(). The fd allocation code will take care of it
for us anyway.
We now ensure that when calling SharedInodeVMObject::sync, we take the
inode lock before calling the Inode's virtual write_bytes method
directly, to avoid tripping the assertion on the unlocked inode lock
(a recent regression). This is not a complete fix: having to take the
lock on each path before calling write_bytes is error-prone and can
lead to hard-to-find bugs, so this commit only fixes the problem
temporarily.
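Each call site now does something like this (member and parameter names
are assumptions):

    // Temporary fix: hold the inode lock across the direct write_bytes()
    // call so its locking assertion is satisfied.
    MutexLocker locker(inode.m_inode_lock);
    TRY(inode.write_bytes(offset, length, buffer, nullptr));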
Previously, when starved for pages, *all* clean file-backed memory
would be released, which is quite excessive.
This patch instead releases just 1 page, since only 1 page is needed
to satisfy the request to `allocate_physical_page()`.
Previously, we could only release *all* clean pages.
This patch makes it possible to release a specific number of clean
pages. If the requested number exceeds the number of clean pages
available, all clean pages will be released.
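A sketch of the resulting behavior (signature and member names are
assumptions):

    // Releases up to `page_amount` clean pages, returning how many were
    // actually released.
    size_t InodeVMObject::try_release_clean_pages(size_t page_amount)
    {
        SpinlockLocker locker(m_lock);
        size_t released = 0;
        for (size_t i = 0; i < m_physical_pages.size(); ++i) {
            if (released == page_amount)
                break;
            // "Clean" meaning present and not written to since paging in.
            if (m_physical_pages[i] && !m_dirty_pages.get(i)) {
                m_physical_pages[i] = nullptr;
                ++released;
            }
        }
        return released;
    }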
At the point at which we try to map the Region, it has already been
added to the Process region tree, so we have to make sure to remove it
before freeing it in the mapping failure path; otherwise the tree will
contain a dangling pointer to the freed instance.
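Roughly (names are assumptions):

    if (auto result = region->map(page_directory()); result.is_error()) {
        // The region was already inserted into the tree above; take it
        // back out before it is destroyed.
        address_space().region_tree().remove(*region);
        return result.release_error();
    }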
This fixes an issue where, if the fork failed due to OOM or another
error, we'd end up destroying the Process too early. By the time we got
to WaitBlockerSet::finalize(), it was long gone.
Until now, our only backup plan when running out of physical pages
was to try and purge volatile memory. If that didn't work out, we just
hung userspace out to dry with an ENOMEM.
This patch improves the situation by also considering clean, file-backed
pages (that we could page back in from disk).
This could be better in many ways, but it already allows us to boot to
WindowServer with 256 MiB of RAM. :^)
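Putting the pieces together, the allocation slow path now looks
something like this (hedged sketch; helper names are assumptions):

    RefPtr<PhysicalPage> MemoryManager::allocate_physical_page()
    {
        if (auto page = try_allocate_from_free_list())
            return page;
        // Plan B: purge volatile memory and retry.
        if (purge_volatile_memory() > 0)
            return try_allocate_from_free_list();
        // Plan C (new): evict a clean, file-backed page; it can be paged
        // back in from disk if it's needed again.
        if (release_clean_inode_pages(1) > 0)
            return try_allocate_from_free_list();
        return nullptr; // genuinely out of memory
    }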
This enum was created to help distinguish between the command set and
the interface type, as ATAPI devices are simply ATA devices utilizing
the SCSI command set. Because we don't support ATAPI, this kind of
distinction is pointless, so let's remove it for now.