The current way we handle sync commands is very ugly and depends on a
lot of preconditions. Now that we have an end_io handler for a request,
we can use a WaitQueue to do sync commands more elegantly.
This does depend on the block layer sending one request at a time, but
this change is a step forward towards better IO handling.
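A minimal sketch of the idea, assuming a submit_sqe() variant that
accepts a completion callback (the names here are illustrative, not
the exact driver API):

    // Sketch: submit the command, then sleep on a WaitQueue until the
    // end_io handler, run from the completion path, wakes us back up.
    void NVMeQueue::submit_sync_sqe(NVMeSubmission& submission)
    {
        submit_sqe(submission, /* end_io: */ [this] {
            m_sync_wait_queue.wake_all();
        });
        // Block the calling thread until the completion side runs the
        // callback registered above.
        m_sync_wait_queue.wait_forever("NVMe sync command"sv);
    }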
There was a private variable named m_current_request which was used to
track a single request at a time. That single-request guarantee is
provided by the block layer, because we wait on each IO, but this
design will break down in the driver once the block layer removes that
constraint.
Redesign the IO handling in a completely asynchronous way by
maintaining requests up to the queue depth. An NVMeIO struct is
introduced to track a submitted IO, along with other information such
as whether the IO is still being processed and an endio callback which
will be called at the end of a request.
A private HashMap variable is created, keyed on the command id of a
request with an NVMeIO value. The endio handler comes in handy when we
are doing a sync request and want to wake up the wait queue at the end
of the request.
This change also simplifies the code by removing some special
conditions in the submit_sqe function, etc., that were marked as FIXME
for a long time.
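Roughly, the new per-IO bookkeeping looks like this (member and field
names are illustrative):

    // Sketch: state tracked for each submitted command.
    struct NVMeIO {
        RefPtr<AsyncBlockDeviceRequest> request;    // block layer request, if any
        bool used { false };                        // still in flight?
        Function<void(u16 status)> end_io_handler;  // invoked on completion
    };

    // Keyed by the command id (cid) placed in the submission queue
    // entry, so the completion handler can find the matching IO.
    // Access is guarded by a spinlock (not shown).
    HashMap<u16, NVMeIO> m_requests;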
Using sq_tail as the cid makes an inherent assumption that we send
only one IO at a time. Use an atomic variable instead for the command
id of a submission queue entry.
As sq_tail is no longer used as the cid, remove m_prev_sq_tail, which
used to hold the last used sq_tail value.
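For example, the cid can come from an atomic counter instead of the
queue tail (a sketch; names are illustrative):

    // Sketch: hand out command ids from an atomic counter instead of
    // reusing sq_tail.
    Atomic<u16> m_cid { 0 };

    u16 NVMeQueue::next_cid()
    {
        // Wrap within the queue depth. A real implementation must also
        // verify that the wrapped cid is not still in flight (e.g. by
        // consulting the cid -> NVMeIO map) before handing it out.
        return m_cid.fetch_add(1) % m_qdepth;
    }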
The SID was duplicated between the process credentials and the
protected data. To make matters worse, the credentials SID was not
updated in sys$setsid.
Fix this by removing the SID from protected data and updating the
credentials SID everywhere.
This closes two race windows:
- ProcessGroup removed itself from the "all process groups" list in its
destructor. It was possible to walk the list between the last unref()
and the destructor invocation, and grab a pointer to a ProcessGroup
that was about to get deleted.
- sys$setsid() could end up creating a process group that already
  existed, as there was a race window between checking whether the
  PGID was in use and actually creating a ProcessGroup with that PGID
  (see the sketch below).
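Both windows close once lookup and creation happen inside a single
critical section over the "all process groups" list, roughly like this
(a sketch, not the exact kernel code):

    // Sketch: find-or-create under one lock over the global list, so
    // no other thread can insert the same PGID in between, and an
    // entry is unlinked before its ProcessGroup is destroyed.
    ErrorOr<NonnullRefPtr<ProcessGroup>> ProcessGroup::find_or_create(ProcessGroupID pgid)
    {
        return all_instances().with([&](auto& list) -> ErrorOr<NonnullRefPtr<ProcessGroup>> {
            for (auto& group : list) {
                if (group.pgid() == pgid)
                    return group;
            }
            auto group = TRY(adopt_nonnull_ref_or_enomem(new (nothrow) ProcessGroup(pgid)));
            list.append(*group);
            return group;
        });
    }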
Now that it's no longer using LockRefPtr, we can actually move it into
protected data. (LockRefPtr couldn't be stored there because protected
data is immutable at times, and LockRefPtr uses some of its own bits
for locking.)
This syscall sends a signal to other threads or to the calling thread
itself. That mechanism is already guarded by its own locking and is
widely used within the kernel without help from the big lock.
...and also make the Process tick counters clock_t instead of u32.
It seems harmless to get interrupted in the middle of reading these
counters and end up reporting slightly fewer ticks in some category.
Expand the following types from 32-bit to 64-bit:
- blkcnt_t
- blksize_t
- dev_t
- nlink_t
- suseconds_t
- clock_t
This matches their size on other 64-bit systems.
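On the LibC side this boils down to widening the underlying typedefs,
along these lines (illustrative; the signedness follows what other
64-bit systems use):

    /* Sketch of the widened POSIX types. */
    #include <stdint.h>

    typedef int64_t  blkcnt_t;
    typedef int64_t  blksize_t;
    typedef uint64_t dev_t;
    typedef uint64_t nlink_t;
    typedef int64_t  suseconds_t;
    typedef int64_t  clock_t;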
This syscall is only concerned with the current thread (except in the
case of a pledge violation, when it will add some details about that
to the process coredump metadata; that stuff is already serialized).
This syscall operates on the file descriptor table, and on individual
open file descriptions. Both of those are already protected by scoped
locking mechanisms.
This syscall had a TOCTOU where it checked the peer's PPID before
locking the protected data (where the PPID is stored).
After closing the race window, we can mark the syscall as not needing
the big lock.
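The shape of the fix is roughly as follows (hypothetical helper and
field names; the point is that the PPID is read and acted upon under
the same lock that protects it):

    // Sketch: check the peer's PPID while holding its protected-data
    // lock, so the check and the subsequent use cannot be separated by
    // a concurrent update.
    bool ppid_matches = false;
    peer->with_mutable_protected_data([&](auto& protected_data) {
        if (protected_data.ppid != pid())
            return;
        // ... perform the work that depends on the PPID here, still
        // under the same lock ...
        ppid_matches = true;
    });
    if (!ppid_matches)
        return EPERM;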
There was a whole bunch of ref counting churn coming from Mutex, which
had a RefPtr<Thread> m_holder to (mostly) point at the thread holding
the mutex.
Since we never actually dereference the m_holder value, but only use
it for identity checks against thread pointers, we can store it as a
uintptr_t and skip the ref counting entirely.
Threads can't die while holding a mutex anyway, so there's no risk of
them going missing on us.
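In other words, something along these lines (a sketch; the actual
member and helper names may differ):

    // Sketch: record the holder only for identity checks; the value is
    // never dereferenced. 0 means "no holder".
    uintptr_t m_holder { 0 };

    void Mutex::set_holder(Thread const& thread)
    {
        m_holder = bit_cast<uintptr_t>(&thread);
    }

    bool Mutex::is_held_by(Thread const& thread) const
    {
        // Pure identity comparison against the thread's address.
        return m_holder == bit_cast<uintptr_t>(&thread);
    }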
These were stored in a bunch of places. The main one that's a bit iffy
is the Mutex::m_holder one, which I'm going to simplify in a subsequent
commit.
In Plan9FS and WorkQueue, we can't make the NNRPs const due to
initialization order problems. That's probably doable with further
cleanup, but left as an exercise for our future selves.
Before starting this, I expected the thread blockers to be a problem,
but as it turns out they were super straightforward (for once!): they
don't mutate the thread after initiating a block, so they can just use
simple const-ified NNRPs.
- Instead of taking the first new thread as an out-parameter, we now
  bundle the process and its first thread in a struct and use that
  as the return value (see the sketch after this list).
- Make all Process factory functions return ErrorOr. Use this to
  convert more call sites to TRY().
- Drop the "try_" prefix on Process factory functions.
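The resulting shape is roughly (names and arguments are approximate):

    // Sketch: the factory bundles both objects instead of using an
    // out-parameter for the first thread.
    struct ProcessAndFirstThread {
        NonnullRefPtr<Process> process;
        NonnullRefPtr<Thread> first_thread;
    };

    // Call sites can then propagate failure with TRY() and unpack the
    // result:
    auto [process, first_thread] = TRY(Process::create_kernel_process(/* ... */));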
The only persistent one of these was Thread::m_process, and that never
changes after initialization. Make it const to enforce this, and switch
everything over to RefPtr & NonnullRefPtr.
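Concretely, the member ends up looking something like this (only the
relevant part of Thread is sketched):

    // Sketch: a const NonnullRefPtr can only be set in the
    // constructor's initializer list, so a thread's process can never
    // be reseated later on.
    class Thread {
    public:
        explicit Thread(NonnullRefPtr<Process> process)
            : m_process(move(process))
        {
        }

    private:
        NonnullRefPtr<Process> const m_process;
    };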
- The host custody never changes after initialization, so there's no
need to protect it with a spinlock.
- To enforce the fact that some members don't change after
initialization, make them const.
There was only one permanent storage location for these: as a member
in the Mount class.
That member is never modified after Mount initialization, so we don't
need to worry about races there.