When a Thread is being unblocked and we need to re-lock the process
big_lock, and that re-locking blocks again, we may end up in
Thread::block again while still servicing the original lock's
Thread::block. So permit recursion, as long as it's only the big_lock
that we block on again.
Fixes #8822
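A minimal sketch of the shape of that check; the member and helper names here (m_in_block, block_on_lock) are assumptions, not the exact ones in the tree:

```cpp
// Hypothetical sketch: nested blocking is rejected unless the inner
// block targets the process big_lock.
void Thread::block_on_lock(Lock& lock)
{
    bool is_blocking_on_big_lock = &lock == &process().big_lock();
    // Previously this was an unconditional VERIFY(!m_in_block);
    VERIFY(!m_in_block || is_blocking_on_big_lock);
    TemporaryChange<bool> in_block_scope(m_in_block, true);
    // ... register a blocker for `lock` and yield ...
}
```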
We should never request the removal of a region that we don't currently
own; all callers currently assert this themselves at their call sites.
Instead, let's push the assert down into the RedBlackTree removal and
assume that we will always successfully remove the region.
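Roughly, the change pushes the check into the single removal path, something like this (class and member names approximated):

```cpp
// All callers used to assert ownership themselves; now the removal path
// asserts that the RedBlackTree removal actually succeeded.
void Space::remove_region(Region& region)
{
    bool did_remove = m_regions.remove(region.vaddr().get());
    VERIFY(did_remove); // we must own any region whose removal we request
}
```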
The compiler will use these to allocate objects that have alignment
requirements greater than that of our normal `operator new` (4/8 byte
aligned).
This means we can now use smart pointers for over-aligned types.
Fixes a FIXME.
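As a standalone illustration of what the compiler does with such a type (plain hosted C++ here, not the kernel's allocator):

```cpp
#include <cstdint>
#include <cstdio>
#include <memory>

// An over-aligned type: its alignment exceeds what the default operator new
// guarantees (typically 8 or 16 bytes).
struct alignas(64) OverAligned {
    unsigned char buffer[64];
};

int main()
{
    // Because alignof(OverAligned) exceeds the default new alignment, the
    // compiler routes this allocation through
    // operator new(size_t, std::align_val_t).
    auto object = std::make_unique<OverAligned>();
    bool aligned = reinterpret_cast<std::uintptr_t>(object.get()) % 64 == 0;
    std::printf("64-byte aligned: %s\n", aligned ? "yes" : "no");
    return 0;
}
```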
Thread::yield_and_release_relock_big_lock releases the big lock, yields
and then relocks the big lock.
Thread::yield_assuming_not_holding_big_lock yields assuming the big
lock is not being held.
When blocking on a Lock other than the big lock while holding the
big lock, we need to release the big lock first. This fixes some
deadlocks where a thread blocks while holding the big lock, preventing
other threads from getting the big lock in order to unblock the waiting
thread.
This enables the Lock class to block a thread even while the thread is
working on a BlockCondition. A thread can still only be either blocked
by a Lock or a BlockCondition.
This also establishes a linked list of threads that are blocked by a
Lock, so that unblocking unlocks and wakes those threads directly.
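A rough sketch of the acquisition path this describes; `own_lock()` and `block_current_thread_on()` are approximations of the real helpers, not the verbatim implementation:

```cpp
// Sketch only: release the big lock before blocking on another Lock,
// then re-acquire it once we've been unblocked.
void Lock::lock(Mode mode)
{
    auto& big_lock = Process::current()->big_lock();
    bool did_unlock_big_lock = false;
    if (this != &big_lock && big_lock.own_lock()) {
        big_lock.unlock(); // other threads can now take the big lock and unblock us
        did_unlock_big_lock = true;
    }

    // Append the current thread to this Lock's linked list of blocked
    // threads and yield; unlock() walks that list and wakes threads directly.
    block_current_thread_on(*this, mode);

    if (did_unlock_big_lock)
        big_lock.lock(Mode::Exclusive); // restore the big lock before returning
}
```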
This was an old SerenityOS-specific syscall for donating the remainder
of the calling thread's time-slice to another thread within the same
process.
Now that Threading::Lock uses a pthread_mutex_t internally, we no
longer need this syscall, which allows us to get rid of a surprising
amount of unnecessary scheduler logic. :^)
This provides more of the crucial information needed to do an addr2line
lookup on a backtrace captured with Thread::backtrace.
Also change the offset to hexadecimal, as this is what addr2line
requires.
We were building with the red zone enabled before, but were not accounting
for it on signal handler entry. This should fix that.
Also shorten the stack alignment calculations for this.
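A minimal, self-contained sketch of the stack computation this implies, assuming the System V x86_64 ABI's 128-byte red zone and 16-byte stack alignment (the function name is illustrative):

```cpp
#include <cstddef>
#include <cstdint>

using FlatPtr = uintptr_t;

// Skip the red zone below the interrupted stack pointer, reserve room for
// the signal frame, then re-align to 16 bytes as the ABI requires.
FlatPtr signal_frame_stack_pointer(FlatPtr interrupted_sp, size_t frame_size)
{
    constexpr size_t red_zone_size = 128;
    FlatPtr sp = interrupted_sp - red_zone_size; // don't clobber a leaf function's red zone
    sp -= frame_size;
    sp &= ~FlatPtr(0xf); // 16-byte stack alignment
    return sp;
}
```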
Right now we're using the FS segment for our per-CPU struct. On x86_64
there's an instruction to switch between a kernel and usermode GS
segment (swapgs) which we could use.
This patch doesn't update the rest of the code to use swapgs but it
prepares for that by using the GS segment instead of the FS segment.
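For illustration, a GS-relative load of a per-CPU field looks roughly like this (inline-asm sketch; the tree's exact accessor may differ):

```cpp
#include <cstdint>

// Read a pointer-sized value at a fixed offset from the GS base, i.e. a
// field of the per-CPU struct. On x86_64, swapgs can later switch between
// the kernel and user GS bases on kernel entry/exit.
inline uintptr_t read_gs_ptr(uintptr_t offset)
{
    uintptr_t value;
    asm volatile("mov %%gs:%a[off], %[val]"
                 : [val] "=r"(value)
                 : [off] "ir"(offset));
    return value;
}
```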
Now we use WeakPtrs to break the ref-counting cycle. Also, we call the
prepare_for_deletion method to ensure objects are properly torn down
before deletion. This is necessary to ensure we don't keep dead
processes around, which would become zombies.
In addition to that, add some debug prints to aid debugging in the future.
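A simplified sketch of the pattern; the class and member names here are illustrative rather than the exact ones in the tree:

```cpp
// Holding the Process weakly means this object no longer keeps a dead
// process alive (which would otherwise linger as a zombie).
class ProcFSProcessInformation : public RefCounted<ProcFSProcessInformation> {
public:
    RefPtr<Process> process() const { return m_process.strong_ref(); }

    // Called before destruction so the object drops anything it still
    // references and is safe to delete afterwards.
    void prepare_for_deletion() { m_process.clear(); }

private:
    WeakPtr<Process> m_process; // weak: breaks the ref-counting cycle
};
```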
The new ProcFS design consists of two main parts:
1. The representative ProcFS class, which is derived from the FS class.
The ProcFS and its inodes are much leaner - merely 3 classes to
represent the common types of inodes - regular files, symbolic links and
directories. They're backed by a ProcFSExposedComponent object, which
is responsible for the functional operation behind the scenes.
2. The backend of the ProcFS - the ProcFSComponentsRegistrar class
and all derived classes from the ProcFSExposedComponent class. These
together form the entire backend and handle all the functions you can
expect from the ProcFS.
The ProcFSExposedComponent derived classes split into 3 types according
to their lifetime in the kernel:
1. Persistent objects - this category includes all basic objects, like
the root folder, /proc/bus folder, main blob files in the root folders,
etc. These objects are persistent and can never die.
2. Semi-persistent objects - this category includes all PID folders,
and subdirectories to the PID folders. It also includes exposed objects
like the unveil JSON'ed blob. These objects are persistent as long as
the process they represent is still alive.
3. Dynamic objects - this category includes files in the subdirectories
of a PID folder, like /proc/PID/fd/* or /proc/PID/stacks/*. Essentially,
these objects are always created dynamically, and when they're no longer
needed after being used, they're deallocated.
Nevertheless, the newly allocated backend objects and inodes try to use
the same InodeIndex if possible - this might change only when a thread
dies and a new thread is born with a new thread stack, or when a file
descriptor is closed and a new one with the same file descriptor
number is opened. This is needed to actually be able to do something
useful with these objects.
The new design ensures that many ProcFS instances can be used at once,
with a single backend serving all instances.
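A structural sketch of how the two parts connect; signatures are simplified and approximate:

```cpp
// Backend: every exposed file, directory and link is a
// ProcFSExposedComponent, reachable through the ProcFSComponentsRegistrar.
class ProcFSExposedComponent : public RefCounted<ProcFSExposedComponent> {
public:
    virtual ~ProcFSExposedComponent() = default;
    InodeIndex component_index() const { return m_component_index; }
    virtual KResultOr<size_t> read_bytes(off_t, size_t, UserOrKernelBuffer&, FileDescription*) const = 0;

protected:
    InodeIndex m_component_index {};
};

// Frontend: the lean, VFS-facing filesystem. Its three inode classes
// (regular file, symbolic link, directory) forward their operations to the
// backing ProcFSExposedComponent.
class ProcFS final : public FS {
public:
    virtual NonnullRefPtr<Inode> root_inode() const override;
};
```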
This commit converts naked `new`s to `AK::try_make` and `AK::try_create`
wherever possible. If the called constructor is private, this cannot be
done, so we instead now use the standard-defined and compiler-agnostic
`new (nothrow)`.
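An illustrative before/after of the conversion; the allocation site shown is a stand-in, not a specific one from the change:

```cpp
// Before: a naked allocation that cannot report failure cleanly.
//     auto buffer = adopt_own(*new KBufferImpl(size));

// After, when the constructor is accessible to AK::try_make:
auto buffer = AK::try_make<KBufferImpl>(size); // null OwnPtr on allocation failure
if (!buffer)
    return ENOMEM;

// After, when the constructor is private and try_make cannot call it:
auto* impl = new (nothrow) KBufferImpl(size); // nullptr on allocation failure
if (!impl)
    return ENOMEM;
```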
This adds just enough stubs to make the kernel compile on x86_64. Obviously
it won't do anything useful - in fact it won't even attempt to boot because
Multiboot doesn't support ELF64 binaries - but it gets those compiler errors
out of the way so more progress can be made getting all the missing
functionality in place.
Previously, when e.g. the `SIGABRT` signal was sent to a process,
`Thread::dispatch_signal()` would invoke
`Process::terminate_due_to_signal()` which would then `::die()`. The
result `DispatchSignalResult::Terminate` is then returned to
`Thread::check_dispatch_pending_signal()` which proceeds to invoke
`Process::die()` a second time.
Change the behavior of `::check_dispatch_pending_signal()` to no longer
call `Process::die()` if it receives `::Terminate` as a signal handling
result, since that indicates that the process was already terminated.
This fixes #7289.
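Roughly, the adjusted handler looks like this (a sketch, not the verbatim function):

```cpp
void Thread::check_dispatch_pending_signal()
{
    auto result = DispatchSignalResult::Continue;
    {
        ScopedSpinLock scheduler_lock(g_scheduler_lock);
        if (pending_signals_for_state())
            result = dispatch_one_pending_signal();
    }

    switch (result) {
    case DispatchSignalResult::Yield:
        yield_assuming_not_holding_big_lock();
        break;
    default:
        // DispatchSignalResult::Terminate now lands here: the process
        // already died inside terminate_due_to_signal(), so we must not
        // invoke Process::die() a second time.
        break;
    }
}
```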
After marking a thread for death we might end up finalizing the thread
while it still has code to run, e.g. via:
Thread::block -> Thread::dispatch_one_pending_signal
-> Thread::dispatch_signal -> Process::terminate_due_to_signal
-> Process::die -> Process::kill_all_threads -> Thread::set_should_die
This marks the thread for death. It isn't destroyed at this point
though.
The scheduler then gets invoked via:
Thread::block -> Thread::relock_process
At that point we still have a registered blocker on the stack frame
which belongs to Thread::block. Thread::relock_process drops the
critical section which allows the scheduler to run.
When the thread is then scheduled out the scheduler sets the thread
state to Thread::Dying which allows the finalizer to destroy the Thread
object and its associated resources including the kernel stack.
This probably also affects objects other than blockers that rely
on their destructors being run; however, the problem was most noticeable
with blockers because they are allocated on the stack of the dying thread
and cause an access violation when another thread touches a blocker
that belonged to the now-dead thread.
Fixes#7823.
Replace the AK::String used for Region::m_name with a KString.
This seems beneficial across the board, but as a specific data point,
it reduces time spent in sys$set_mmap_name() by ~50% on test-js. :^)
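The shape of the change, approximately (members and accessors simplified):

```cpp
class Region {
public:
    StringView name() const { return m_name ? m_name->view() : StringView {}; }
    void set_name(OwnPtr<KString> name) { m_name = move(name); }

private:
    OwnPtr<KString> m_name; // was: String m_name (ref-counted, heavier to copy/assign)
};
```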
By constraining two implementations, the compiler will select the best
fitting one. All this requires is duplicating the implementation and
simplifying for the `void` case.
This constraining also informs both the caller and compiler by passing
the callback parameter types as part of the constraint
(e.g.: `IterationFunction<int>`).
Some `for_each` functions in LibELF only take functions which return
`void`. This is a minimal correctness check, as it removes one way for a
function to incompletely do something.
There also seems to be a possible idiom here: inside such a lambda, a
`return;` acts like a `continue;` in a for-loop.
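A sketch of such a constrained pair of overloads; the concept names follow AK's style but are approximations here, as is the element type:

```cpp
// Member functions of an ELF image-like class (context omitted).

// Chosen when the callback returns void: a bare `return;` in the lambda
// simply moves on to the next element, like `continue;` in a for-loop.
template<VoidFunction<Section> F>
void for_each_section(F func) const
{
    for (auto& section : sections())
        func(section);
}

// Chosen when the callback returns an IterationDecision, so the caller
// can break out of the iteration early.
template<IteratorFunction<Section> F>
void for_each_section(F func) const
{
    for (auto& section : sections()) {
        if (func(section) == IterationDecision::Break)
            break;
    }
}
```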
The previous `LOCKER(..)` instrumentation only covered some of the
cases where a lock is actually acquired. By utilizing the new
`AK::SourceLocation` functionality we can now reliably instrument
all calls to lock automatically.
Other changes:
- Tweak the message in `Thread::finalize()` which dumps leaked locks
so it's more readable and includes the function information that is
now available.
- Make the `LOCKER(..)` define a no-op; it will be cleaned up in a
follow-up change.
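A sketch of the defaulted-parameter approach (signature and stored state approximated):

```cpp
#include <AK/SourceLocation.h>

class Lock {
public:
    enum class Mode { Unlocked, Shared, Exclusive };

    // The defaulted SourceLocation captures the caller's file/line/function
    // automatically, so every call site is instrumented without a macro.
    void lock(Mode mode = Mode::Exclusive,
        const SourceLocation& location = SourceLocation::current());

private:
    SourceLocation m_last_acquired_at; // reported by Thread::finalize() for leaked locks
};
```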
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
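For example, a converted file header carries a single identifier line in place of the full license text (the license shown here is illustrative; each file keeps its own):

```cpp
/*
 * Copyright (c) 2021, the SerenityOS developers.
 *
 * SPDX-License-Identifier: BSD-2-Clause
 */
```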
This is also done by Linux (signal.c:936 in v5.11), at least.
It's a pretty handy notification that allows the parent process to skip
going through a `waitpid` and guesswork to figure out the current state
of a child process.
A lot of code is shared between i386/i686/x86 and x86_64,
and a lot of it will probably be used for compatibility modes.
So we start by moving the headers into one directory.
We will probably be able to move some cpp files as well.
We previously ignored these values in the return value of
load_elf_object, which caused us to not allocate a TLS region for
statically-linked programs.
This commit is very invasive, because Thread likes to take a pointer and write
to it. This means that translating between timespec/timeval/Time would have been
more difficult than just changing everything that hands a raw pointer to Thread,
in bulk.
Make more of the kernel compile in 64-bit mode, and make some things
pointer-size-agnostic (by using FlatPtr.)
There's a lot of work to do here before the kernel will even compile.
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
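To illustrate the distinction being drawn (Bytes and u8 are the usual AK types; the function itself is made up):

```cpp
#include <AK/Assertions.h>
#include <AK/Span.h>
#include <AK/Types.h>

void write_at(Bytes buffer, size_t offset, u8 value)
{
    // VERIFY() stays active in release builds, whereas a classic assert()
    // would be compiled out when NDEBUG is defined.
    VERIFY(offset < buffer.size());
    buffer[offset] = value;
}
```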