This is expensive because we have to page in the entire executable up front for
every process for this to work. That's because the page fault code isn't robust
enough to run while another process is active.
Note that we already had userspace symbols in *crash* stacks. This patch
adds them generally, so they show up in /proc, Process Manager, etc.
There's room for improvement here, but the debugging benefits way overshadow
the performance penalty right now. :^)
This makes assertion failures generate backtraces again. Sorry to everyone
who suffered from the lack of backtraces lately. :^)
We share code with the /proc/PID/stack implementation. You can now get the
current backtrace for a Thread via Thread::backtrace(), and all the traces
for a Process via Process::backtrace().
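As a rough sketch of how a consumer might use these (the String return type and
the StringBuilder plumbing are assumptions for illustration, not the actual
signatures):

// Hypothetical /proc-style consumer; return types are assumed.
String stack_for(Thread& thread) {
    return thread.backtrace();
}

String stacks_for(Process& process) {
    StringBuilder builder;
    for (auto& trace : process.backtrace())
        builder.append(trace);
    return builder.to_string();
}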
In the presence of signal handlers, it is possible for a thread to be blocked
multiple times. Picture, for instance, a signal handler calling read() or
wait() while the thread was already blocked elsewhere before the handler was
invoked.
To fix this, we turn m_blocker into a chain of blockers. Each block() call now
prepends to the list, and unblocking will only consider the most recent (first)
blocker in the chain.
Fixes #309
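In sketch form, the idea looks something like this (the container and member
names are illustrative, not the actual Thread fields):

class Thread {
    // ...
    Vector<Blocker*> m_blockers; // most recent blocker sits at the front

    void block(Blocker& blocker) {
        m_blockers.prepend(&blocker); // a signal handler blocking again just stacks a new entry
        set_state(Blocked);
        // ... yield until we're unblocked ...
    }

    void unblock() {
        // Only the most recent blocker is considered when waking up.
        if (!m_blockers.is_empty())
            m_blockers.take_first();
        // ... set_state(Runnable) once nothing is blocking us anymore ...
    }
};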
The only two places we set m_blocker now are Thread::set_state(), and
Thread::block(). set_state is mostly just an issue of clarity: we don't
want to end up with state() != Blocked with an m_blocker, because that's
weird. It's also possible: if we yield, someone else may set_state() us.
We also now call set_state() and set m_blocker under the lock in block(), rather
than unlocking first, which could let someone else mess with our internals
while we're in the process of trying to block.
This seems to fix sending STOP & CONT causing a panic.
My guess as to what was happening is this:
- thread A blocks in select(): Blocking & m_blocker != nullptr
- thread B sends SIGSTOP: Stopped & m_blocker != nullptr
- thread B sends SIGCONT: we continue execution. Runnable & m_blocker != nullptr
- thread A tries to block in select() again:
  * sets m_blocker
  * unlocks (in block_helper)
  * someone else tries to unblock us? maybe from the old m_blocker? unclear -- clears m_blocker
  * sets Blocked (while unlocked!)
So, thread A is left with state Blocked & m_blocker == nullptr, leading
to the scheduler assert (m_blocker != nullptr) failing.
Long story short, let's do all our data management with the lock _held_.
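Roughly, block() now looks like the sketch below; the locking primitive named
here is a stand-in, not necessarily what the kernel actually uses:

void Thread::block(Blocker& blocker) {
    InterruptDisabler disabler; // stand-in for whatever lock block() actually holds
    m_blocker = &blocker;
    set_state(Blocked);         // both updated together, with the lock still held
    Scheduler::yield();         // only now do we give up the CPU
    // ... once we're unblocked, clear m_blocker (still under the lock) ...
}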
And use this to return EINTR in various places, some of which we were not
handling properly before.
This might expose a few bugs in userspace, but should be more compatible
with other POSIX systems, and is certainly a little cleaner.
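For illustration, a blocking syscall can then do something like the following
(the result enum name is hypothetical):

int Process::sys$some_blocking_call() {
    auto result = current->block(/* ...some Blocker... */);
    if (result == Thread::BlockResult::InterruptedBySignal) // hypothetical name
        return -EINTR;
    // ... carry on now that the block completed normally ...
    return 0;
}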
Generate a special page containing the "return from signal" trampoline code
on startup and then route signalled threads to it. This avoids a page
allocation in every process that ever receives a signal.
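Conceptually the trampoline setup is something like the sketch below; every
identifier here is a placeholder for illustration, not the real kernel API:

extern "C" char asm_signal_trampoline[];     // hand-written "return from signal" stub
extern "C" char asm_signal_trampoline_end[];

static u32 g_signal_trampoline; // one shared page, same virtual address for everyone

void create_signal_trampoline_page() { // placeholder boot-time hook
    // Map a single page readable + executable from userspace and copy the stub into it.
    auto* page = map_shared_user_executable_page(); // placeholder allocator
    memcpy(page, asm_signal_trampoline, asm_signal_trampoline_end - asm_signal_trampoline);
    g_signal_trampoline = (u32)page;
}

// Signal dispatch then pushes g_signal_trampoline as the handler's return address,
// so no per-process trampoline page ever needs to be allocated.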
And use it in the scheduler.
IntrusiveList is similar to InlineLinkedList, except that rather than
making assertions about the type (and requiring inheritance), it
provides an IntrusiveListNode type that can be used to put an instance
into many different lists at once.
As a proof of concept, port the scheduler over to use it. The only
downside here is that the "list" global needs to know the position of
the IntrusiveListNode member, so we have to position things a little
awkwardly to make that happen. We also move the runnable lists to
Thread, to avoid having to publicize the node.
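Usage looks roughly like this (the template parameter shape and the member
names are my guess, not the exact API):

class Thread {
public:
    // ...
    IntrusiveListNode m_runnable_list_node; // lets a Thread live in a list without inheriting from anything
};

// The list type names the member it threads through, hence the layout awkwardness:
static IntrusiveList<Thread, &Thread::m_runnable_list_node> g_runnable_threads;

void make_runnable(Thread& thread) {
    g_runnable_threads.append(thread);
}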
Committing some things my hands did while browsing through this code.
- Mark all leaf classes "final".
- FileDescriptionBlocker now stores a NonnullRefPtr<FileDescription>.
- FileDescriptionBlocker::blocked_description() now returns a reference.
- ConditionBlocker takes a Function&&.
"Blocking" is not terribly informative, but now that everything is
ported over, we can force the blocker to provide us with a reason.
This does mean that to_string(State) needed to become a member, but
that's OK.
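In other words, something along these lines (sketch only; exact names may
differ):

class Thread::Blocker {
public:
    virtual const char* state_string() const = 0; // e.g. "Sleeping", "Selecting", "Waiting"
    // ...
};

const char* Thread::state_string() const {
    if (state() == Blocked)
        return m_blocker->state_string(); // the actual reason, instead of a generic "Blocking"
    // ... the other states still map to fixed strings ...
}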
And use dbgprintf() consistently for a few pieces of logging here.
This is useful when trying to track thread switching when you don't
really care about what it's switching _to_.
Replace the class-based snooze alarm mechanism with a per-thread callback.
This makes it easy to block the current thread on an arbitrary condition:
void SomeDevice::wait_for_irq() {
    m_interrupted = false;
    current->block_until([this] { return m_interrupted; });
}

void SomeDevice::handle_irq() {
    m_interrupted = true;
}
Use this in the SB16 driver, and in NetworkTask :^)
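Under the hood it can be as simple as the sketch below (the member and state
names are illustrative):

void Thread::block_until(Function<bool()>&& condition) {
    m_block_until_condition = move(condition); // illustrative member name
    set_state(BlockedCondition);               // illustrative state name
    Scheduler::yield();
}

// The scheduler then just re-runs the callback for condition-blocked threads:
void Thread::consider_unblock() {
    if (state() == BlockedCondition && m_block_until_condition())
        set_state(Runnable);
}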
After reading a bunch of POSIX specs, I've learned that a file descriptor
is the number that refers to a file description, not the description itself.
So this patch renames FileDescriptor to FileDescription, and Process now has
FileDescription* file_description(int fd).
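So a syscall handler reads roughly like this (sketch; the fstat call is just an
example, not the verified signature):

int Process::sys$fstat(int fd, stat* statbuf) {
    auto* description = file_description(fd); // fd is just a number; this is the object it refers to
    if (!description)
        return -EBADF;
    return description->fstat(*statbuf);
}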
There are now two thread lists, one for runnable threads and one for non-
runnable threads. Thread::set_state() is responsible for moving threads
between the lists.
Each thread also has a back-pointer to the list it's currently in.
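In sketch form (the helper names are illustrative):

void Thread::set_state(State new_state) {
    auto& old_list = thread_list_for_state(m_state);    // runnable vs. non-runnable
    auto& new_list = thread_list_for_state(new_state);
    if (&old_list != &new_list) {
        old_list.remove(*this);
        new_list.append(*this);
        m_thread_list = &new_list; // the back-pointer mentioned above
    }
    m_state = new_state;
}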
This means that kernel regions will eagerly get physical pages allocated.
It would be nice to zero-fill these on demand instead, but that would
require a bunch of MemoryManager changes.
This patch moves away from using kmalloc memory for thread kernel stacks.
This reduces pressure on kmalloc (16 KB per thread adds up fast) and
prevents kernel stack overflow from scribbling all over random unrelated
kernel memory.
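The shape of the change is roughly the following; the MemoryManager call and
the member names are assumptions for illustration:

// In Thread's constructor (sketch):
// Before: the stack came straight out of kmalloc:
//     m_kernel_stack = kmalloc(default_kernel_stack_size);
// After: each thread gets its own kernel region, so running off the end faults
// instead of silently trampling neighbouring kmalloc memory:
m_kernel_stack_region = MM.allocate_kernel_region(default_kernel_stack_size, "Kernel Stack"); // assumed API
m_kernel_stack_top = m_kernel_stack_region->base() + default_kernel_stack_size;               // assumed accessor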
Make the Socket functions take a FileDescriptor& rather than a socket role
throughout the code. Also change threads to block on a FileDescriptor,
rather than either an fd index or a Socket.
We were only destroying the main thread when a process died, leaving any
secondary threads around. They couldn't run, but because they were still
in the global thread list, strange things could happen since they had some
now-stale pointers to their old process.