If a non-fixed mmap with a non-zero addr hint fails to allocate a
region at that address, it should fall back to allocating at any
available virtual address.
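Roughly, the intended fallback looks like this (a minimal sketch; the
helper names and the MAP_FIXED stand-in are made up for illustration):

    #include <cstddef>

    // Hypothetical helpers standing in for the real region allocation paths.
    void* try_allocate_region_at(void* addr, size_t size);
    void* try_allocate_region_anywhere(size_t size);

    static constexpr int MAP_FIXED_FLAG = 0x10; // stand-in for MAP_FIXED

    void* allocate_mmap_region(void* addr, size_t size, int flags)
    {
        if (addr) {
            // Try to honor the caller's address hint first.
            if (void* region = try_allocate_region_at(addr, size))
                return region;
            // MAP_FIXED means the hint is mandatory, so fail here.
            if (flags & MAP_FIXED_FLAG)
                return nullptr;
        }
        // No hint, or a non-fixed hint that couldn't be satisfied:
        // allocate at any available virtual address.
        return try_allocate_region_anywhere(size);
    }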
Clean inode-backed memory is memory that's loaded from an inode (file)
but not modified in memory, so it's still identical to what's on disk.
This kind of memory can be freed and reloaded transparently from disk
if needed.
Dirty private memory is all memory in non-inode-backed mappings that's
process-private, meaning it's not shared with any other process.
This patch exposes that number via SystemMonitor, giving us an idea of
how much memory each process is responsible for all on its own.
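For illustration only, the two numbers could be computed per region
roughly like this (the struct and function names are made up, not the
actual kernel API):

    #include <cstddef>

    // Hypothetical per-region summary used only for this illustration.
    struct RegionInfo {
        bool backed_by_inode { false }; // file-backed (inode) mapping?
        bool shared { false };          // shared with another process?
        size_t size { 0 };              // total bytes in the mapping
        size_t dirty_bytes { 0 };       // bytes modified in memory
    };

    // Clean inode-backed: file-backed bytes still identical to what's
    // on disk, so they can be dropped and re-read from disk if needed.
    size_t amount_clean_inode(RegionInfo const& region)
    {
        return region.backed_by_inode ? region.size - region.dirty_bytes : 0;
    }

    // Dirty private: everything in non-inode-backed mappings that isn't
    // shared with any other process.
    size_t amount_dirty_private(RegionInfo const& region)
    {
        return (!region.backed_by_inode && !region.shared) ? region.size : 0;
    }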
PR #591 defines the rationale for kernel-level timers. They're most
immediately useful for TCP retransmission, but will most likely see use
in many other areas as well.
This patch introduces three separate thread queues, one for each thread
priority available to userspace (Low, Normal and High.)
Each queue operates in a round-robin fashion, but we now always prefer
to schedule the highest priority thread that currently wants to run.
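A minimal sketch of the pick-next logic (the names and containers here
are illustrative, not the actual Scheduler code):

    #include <array>
    #include <cstddef>
    #include <deque>

    struct Thread;
    bool wants_to_run(Thread&); // hypothetical: is this thread runnable?

    // One queue per userspace priority, highest first: High, Normal, Low.
    std::array<std::deque<Thread*>, 3> g_run_queues;

    Thread* pick_next_thread()
    {
        // Always prefer the highest priority queue with a runnable thread.
        for (auto& queue : g_run_queues) {
            for (size_t i = 0; i < queue.size(); ++i) {
                Thread* candidate = queue.front();
                // Round-robin within a queue: rotate candidates to the back.
                queue.pop_front();
                queue.push_back(candidate);
                if (wants_to_run(*candidate))
                    return candidate;
            }
        }
        return nullptr; // nothing wants to run right now
    }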
There are tons of tweaks and improvements that we can and should make
to this mechanism, but I think this is a step in the right direction.
This makes WindowServer significantly more responsive while one of its
clients is burning CPU. :^)
We don't care about dead processes that were once members of a specific
process group. Considering them was causing us to try to send SIGINT to
already-dead processes when pressing Ctrl+C in a terminal whose pgrp
they were once in.
Fixes #922.
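The delivery loop now simply skips dead processes; in made-up types
(not the real Process iteration helpers) it looks roughly like this:

    #include <vector>

    struct Process {
        int pgid { 0 };
        bool dead { false };
    };

    void send_signal(Process&, int signal); // hypothetical delivery helper

    void send_signal_to_pgrp(std::vector<Process*> const& processes, int pgid, int signal)
    {
        for (auto* process : processes) {
            // Skip processes that have already died; we only want to
            // signal live members of the pgrp.
            if (process->dead)
                continue;
            if (process->pgid == pgid)
                send_signal(*process, signal);
        }
    }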
Instead of panicking right away when we run out of physical pages,
we now try to find a PurgeableVMObject with some volatile pages in it.
If we find one, we purge that entire object and steal one of its pages.
This makes it possible for the kernel to keep going instead of dying.
Very cool. :^)
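The fallback path looks roughly like this (sketch only; the type and
helper names are made up):

    struct PhysicalPage;
    struct PurgeableVMObject {
        void purge(); // drops all volatile pages, returning them to the free list
    };

    // Hypothetical helpers standing in for the real MemoryManager paths.
    PhysicalPage* allocate_from_free_list();
    PurgeableVMObject* find_purgeable_vmobject_with_volatile_pages();

    PhysicalPage* allocate_physical_page()
    {
        if (auto* page = allocate_from_free_list())
            return page;
        // Out of physical pages: find a purgeable object with volatile
        // pages, purge the whole thing, and grab one of the freed pages.
        if (auto* vmobject = find_purgeable_vmobject_with_volatile_pages()) {
            vmobject->purge();
            return allocate_from_free_list();
        }
        // Still nothing; only now do we have to give up.
        return nullptr;
    }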
We were listing the total number of user/super pages as the number of
"available" pages in the system. This number was then misinterpreted by
the SystemMonitor program and displayed incorrectly in the GUI.
Previously we assumed all hosts would have support for IA32_EFER.NXE.
This is mostly true for newer hardware, but older hardware will crash
and burn if you try to use this feature.
Now we check for support via CPUID.80000001[20].
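One way to express that check, here using the compiler's cpuid helper
(the kernel's own probing code may differ):

    #include <cpuid.h>

    // NX support is reported in CPUID leaf 0x80000001, EDX bit 20.
    bool cpu_supports_nx()
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
            return false; // extended leaf not available on this CPU
        return (edx & (1u << 20)) != 0;
    }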
This patch implements a simple version of the futex (fast userspace
mutex) API in the kernel and uses it to make the pthread_cond_t APIs
block instead of busily calling sched_yield().
An arbitrary userspace address is passed to the kernel as a "token"
that identifies the futex, and you can then FUTEX_WAIT and FUTEX_WAKE
on that specific userspace address.
FUTEX_WAIT corresponds to pthread_cond_wait() and FUTEX_WAKE is used
for pthread_cond_signal() and pthread_cond_broadcast().
I'm pretty sure I'm missing something in this implementation, but it's
hopefully okay for a start. :^)
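For context, roughly how userspace can use this (a sketch with assumed
wrapper names, not the actual LibPthread code):

    #include <atomic>
    #include <climits>
    #include <pthread.h>

    // Hypothetical wrappers around the new futex syscall.
    void futex_wait(std::atomic<int>* addr, int expected_value);
    void futex_wake(std::atomic<int>* addr, int how_many);

    struct CondSketch {
        std::atomic<int> value { 0 }; // the futex "token" lives at this address
    };

    void cond_wait(CondSketch& cond, pthread_mutex_t& mutex)
    {
        int observed = cond.value.load();
        pthread_mutex_unlock(&mutex);
        // Block in the kernel instead of spinning on sched_yield().
        // If the value already changed, FUTEX_WAIT returns immediately.
        futex_wait(&cond.value, observed);
        pthread_mutex_lock(&mutex);
    }

    void cond_signal(CondSketch& cond)
    {
        cond.value.fetch_add(1);
        futex_wake(&cond.value, 1); // wake at most one waiter
    }

    void cond_broadcast(CondSketch& cond)
    {
        cond.value.fetch_add(1);
        futex_wake(&cond.value, INT_MAX); // wake all waiters
    }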
Introduce one more (CPU) indirection layer in the paging code: the page
directory pointer table (PDPT). Each PageDirectory now has 4 separate
PageDirectoryEntry arrays, governing 1 GB of VM each.
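Schematically (an illustrative layout, not the real PageDirectory
class):

    #include <cstdint>

    struct PageDirectoryEntry {
        uint64_t raw { 0 };
    };

    struct PageDirectorySketch {
        // The PDPT has 4 entries, each pointing at one page directory.
        uint64_t pdpt[4] { 0, 0, 0, 0 };
        // Each page directory has 512 8-byte entries and governs 1 GB of VM.
        PageDirectoryEntry directories[4][512];
    };

    // For a 32-bit virtual address under PAE:
    //   bits 31:30 pick one of the 4 page directories (1 GB each),
    //   bits 29:21 pick the PageDirectoryEntry (2 MB each) within it.
    constexpr unsigned pdpt_index(uint32_t vaddr) { return vaddr >> 30; }
    constexpr unsigned pde_index(uint32_t vaddr) { return (vaddr >> 21) & 0x1ff; }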
A really neat side-effect of this is that we can now share the physical
page containing the >=3GB kernel-only address space metadata between
all processes, instead of lazily cloning it on page faults.
This will give us access to the NX (No eXecute) bit, allowing us to
prevent execution of memory that's not supposed to be executed.
I'm not sure how I managed to misread the location of this bit twice.
But I did! Here is finally the correct value, according to Intel:
"Page Global Enable (bit 7 of CR4)"
Jeez! :^)
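For reference, setting the bit looks like this (a minimal sketch; ring 0
only, x86 inline assembly):

    #include <cstdint>

    void enable_page_global()
    {
        uintptr_t cr4;
        asm volatile("mov %%cr4, %0" : "=r"(cr4));
        cr4 |= 0x80; // CR4.PGE is bit 7, i.e. 1 << 7
        asm volatile("mov %0, %%cr4" : : "r"(cr4));
    }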
Cautiously use 5 as a limit for now so that we don't blow the stack.
This can be increased in the future if we become confident that it's
safe, or if the implementation is changed to not use recursion :^)
These scripts assume that they are called from within the Kernel/
directory. For convenience, set the current working directory in each
script to the path where the script is located.
This allows ring0 processes to use all the same fun memory protection
features as the rest of the system. Previously a ring0 process could
over- or underrun its stack and nobody cared, since kmalloc_eternal is
the wild west of memory.
If there are other threads in a process when exit()ing, we need to give
them a chance to unwind any kernel stacks. This means we have to unlock
the process lock before giving control to the scheduler.
Fixes #891 (together with all of the other "no more main thread" work.)
This is a little strange, but it's how I understand things should work.
The first thread in a new process now has TID == PID.
Additional threads subsequently spawned in that process all have unique
TIDs generated by the PID allocator. TIDs are now globally unique.
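Conceptually (an illustrative sketch, not the actual allocator code):

    #include <atomic>

    // One global allocator hands out both PIDs and TIDs, so every TID is
    // globally unique.
    static std::atomic<int> g_next_id { 1 };

    int allocate_id() { return g_next_id.fetch_add(1); }

    struct Ids {
        int pid;
        int tid;
    };

    // The first thread in a new process gets TID == PID.
    Ids make_first_thread()
    {
        int pid = allocate_id();
        return { pid, pid };
    }

    // Additional threads get their own ID from the same allocator.
    Ids make_additional_thread(int pid)
    {
        return { pid, allocate_id() };
    }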
The idea of all processes reliably having a main thread was nice in
some ways, but cumbersome in others. More importantly, it didn't match
up with POSIX thread semantics, so let's move away from it.
This patch gets rid of Process::main_thread(); now we just have a bunch
of Thread objects floating around each Process.
When the finalizer nukes the last Thread in a Process, it will also
tear down the Process.
There are a bunch more things to fix around this, but this is where we
get started :^)
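The teardown order, roughly (the names here are illustrative):

    #include <cstddef>

    struct Process;
    size_t live_thread_count(Process&); // hypothetical: threads still alive
    void finalize_process(Process&);    // hypothetical: tear down the process

    struct Thread {
        Process& process;
        explicit Thread(Process& p) : process(p) { }
    };

    void finalize_thread(Thread& thread)
    {
        // ...release this thread's kernel resources here...
        // If that was the last Thread in the Process, the finalizer also
        // tears down the Process itself.
        if (live_thread_count(thread.process) == 0)
            finalize_process(thread.process);
    }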