Commit graph

54 commits

Author SHA1 Message Date
Linus Groh
2b0c361d04 Everywhere: Fix a bunch of typos 2021-04-18 10:30:03 +02:00
Gunnar Beutner
c3ee70591a Kernel: Read the ELF header from the inode rather than the mapped pages
Reading from the mapping doesn't work when the text segment has a non-zero
offset because in that case the first mapped page doesn't contain the ELF
header.
2021-04-14 13:12:52 +02:00
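A minimal userspace sketch of the point above, assuming a placeholder path "some_binary": pread() at offset 0 always sees the ELF header, whereas a mapping whose text segment starts at a non-zero file offset does not begin with the header (the kernel reads from the inode for the same reason).

    // Hedged sketch: read the ELF header from the file itself rather than
    // from a mapping that may start at a non-zero offset. Userspace
    // stand-in for the kernel's inode read; "some_binary" is a placeholder.
    #include <elf.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        int fd = open("some_binary", O_RDONLY);
        if (fd < 0)
            return 1;
        Elf32_Ehdr header {};
        // Offset 0 is the header, regardless of where any segment is mapped.
        if (pread(fd, &header, sizeof(header), 0) != static_cast<ssize_t>(sizeof(header)))
            return 1;
        printf("entry point: 0x%x\n", header.e_entry);
        close(fd);
        return 0;
    }
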
Gunnar Beutner
2d91761cf6 Kernel: Make sure the offset stays the same when using mremap()
When using mmap() on a file with a non-zero offset, subsequent
calls to mremap() would incorrectly reset the offset to zero.
2021-04-14 13:12:52 +02:00
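The fix is easiest to see from userspace. A hedged, Linux-flavoured sketch (Serenity's sys$mremap is more limited; "data.bin" is a placeholder file at least three pages long): after growing a mapping created with a non-zero file offset, the contents must still correspond to that offset.

    // Hedged sketch (build with g++ on Linux; mremap is non-POSIX): the
    // grown mapping must keep reading from file offset `page`, not from 0.
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cassert>

    int main()
    {
        long page = sysconf(_SC_PAGESIZE);
        int fd = open("data.bin", O_RDONLY); // placeholder, >= 3 pages long
        if (fd < 0)
            return 1;
        // Map one page starting at a non-zero file offset.
        void* p = mmap(nullptr, page, PROT_READ, MAP_PRIVATE, fd, page);
        assert(p != MAP_FAILED);
        char before = static_cast<char*>(p)[0];
        // Grow the mapping; the file offset must stay the same.
        p = mremap(p, page, 2 * page, MREMAP_MAYMOVE);
        assert(p != MAP_FAILED);
        assert(static_cast<char*>(p)[0] == before);
        close(fd);
        return 0;
    }
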
Idan Horowitz
497c759ab7 Kernel: Remove old region from process' regions vector before splitting
This does not affect functionality right now, but it means that the
regions vector will now never have any overlapping regions, which will
allow the use of balanced binary search trees instead of a vector in the
future (since they require keys to be non-overlapping).
2021-04-12 18:03:44 +02:00
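The invariant this establishes maps directly onto an ordered container: with no overlaps, base addresses are unique, strictly ordered keys. A hypothetical sketch using std::map (a balanced search tree) in place of the vector:

    // Hedged sketch: non-overlapping regions keyed by base address.
    // Region is a stand-in type, not the kernel's.
    #include <cstddef>
    #include <cstdint>
    #include <map>

    struct Region {
        uintptr_t base { 0 };
        size_t size { 0 };
    };

    // With no overlapping regions, the region containing an address (if
    // any) is the last region whose base is <= that address.
    Region const* find_region(std::map<uintptr_t, Region> const& regions, uintptr_t address)
    {
        auto it = regions.upper_bound(address);
        if (it == regions.begin())
            return nullptr;
        auto const& region = (--it)->second;
        return address < region.base + region.size ? &region : nullptr;
    }
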
Jean-Baptiste Boric
6698fd84ff Kernel: Refactor storage stack with u64 as mmap offset 2021-03-19 09:15:19 +01:00
Hendiadyoin1
eba3fa5e72 Kernel: Make munmap more POSIX compliant
If someone tries to unmap a region that is not mapped (the fallback
case), we should not return an error but silently do nothing.
2021-03-13 10:00:46 +01:00
Hendiadyoin1
b7f1171a1c Kernel: munmap multiple regions at a time
This implements a fallback for munmap that unmaps multiple regions at a
time, splitting some of them when needed.

The way it is implemented is possibly not optimal, as it searches
without consulting the cache.
2021-03-13 10:00:46 +01:00
Andreas Kling
5e7abea31e Kernel+Profiler: Capture metadata about all profiled processes
The perfcore file format was previously limited to a single process
since the pid/executable/regions data was top-level in the JSON.

This patch moves the process-specific data into a top-level array
named "processes" and we now add entries for each process that has
been sampled during the profile run.

This makes it possible to see samples from multiple threads when
viewing a perfcore file with Profiler. This is extremely cool! :^)
2021-03-02 22:38:06 +01:00
Andreas Kling
272c2e6ec5 Kernel: Use Userspace<T> in sys${munmap,mprotect,madvise,msyscall}() 2021-03-01 15:53:33 +01:00
Andreas Kling
ac71775de5 Kernel: Make all syscall functions return KResultOr<T>
This makes it a lot easier to return errors since we no longer have to
worry about negating EFOO errors and can just return them flat.
2021-03-01 13:54:32 +01:00
Andreas Kling
8f70528f30 Kernel: Take some baby steps towards x86_64
Make more of the kernel compile in 64-bit mode, and make some things
pointer-size-agnostic (by using FlatPtr).

There's a lot of work to do here before the kernel will even compile.
2021-02-25 16:27:12 +01:00
Andreas Kling
53c6c29158 Kernel: Tighten some typing in Arch/i386/CPU.h
Use more appropriate types for some things.
2021-02-25 11:32:27 +01:00
Andreas Kling
5d180d1f99 Everywhere: Rename ASSERT => VERIFY
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)

Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.

We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
2021-02-23 20:56:54 +01:00
Andreas Kling
84b2d4c475 Kernel: Add "map_fixed" pledge promise
This is a new promise that guards access to mmap() with MAP_FIXED.

Fixed-address mappings are rarely used, but can be useful if you are
trying to groom the process address space for malicious purposes.

None of our programs need this at the moment, as the only user of
MAP_FIXED is DynamicLoader, but the fixed mappings are constructed
before the process has had a chance to pledge anything.
2021-02-21 01:08:48 +01:00
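For illustration, a hedged Serenity-style sketch of opting into the promise (the promise list and mapping address are hypothetical):

    // Hedged sketch: a pledged program that genuinely needs MAP_FIXED must
    // hold the "map_fixed" promise. Serenity-specific; won't build elsewhere.
    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        if (pledge("stdio map_fixed", nullptr) < 0)
            return 1;
        // Allowed under "map_fixed"; without the promise, the process dies.
        void* p = mmap(reinterpret_cast<void*>(0x20000000), 4096,
                       PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
        return p == MAP_FAILED ? 1 : 0;
    }
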
Andreas Kling
eb92ec3149 Kernel: Factor out mmap & friends range expansion to a helper function
sys$mmap() and related syscalls must pad to the nearest page boundary
below the base address *and* above the end address of the specified
range. Since we have to do this in many places, let's make a helper.
2021-02-18 18:04:58 +01:00
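A minimal sketch of such a helper, with hypothetical names and a page size assumed to be a power of two; it also folds in the wrap-around assertion from 09b1b09c19 below:

    // Hedged sketch: expand a range outward to page boundaries, rounding
    // the base down and the end up; assert if rounding up would wrap to 0.
    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    static constexpr uintptr_t page_size = 4096;

    uintptr_t page_round_down(uintptr_t address)
    {
        return address & ~(page_size - 1);
    }

    uintptr_t page_round_up(uintptr_t address)
    {
        assert(address <= UINTPTR_MAX - (page_size - 1)); // would wrap to 0
        return (address + page_size - 1) & ~(page_size - 1);
    }

    void expand_range_to_page_boundaries(uintptr_t& base, size_t& size)
    {
        uintptr_t end = page_round_up(base + size);
        base = page_round_down(base);
        size = end - base;
    }
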
Andreas Kling
575c7ed414 Kernel: Make sys$msyscall() EFAULT on non-user address
Fixes #5361.
2021-02-16 11:32:00 +01:00
Andreas Kling
6ee499aeb0 Kernel: Round old address/size in sys$mremap() to page size multiples
Found by fuzz-syscalls. :^)
2021-02-14 13:15:05 +01:00
Andreas Kling
09b1b09c19 Kernel: Assert if rounding-up-to-page-size would wrap around to 0
If we try to align a number above 0xfffff000 to the next multiple of
the page size (4 KiB), it would wrap around to 0. This is most likely
never what we want, so let's assert if that happens.
2021-02-14 10:01:50 +01:00
Andreas Kling
c877612211 Kernel: Round down base of partial ranges provided to munmap/mprotect
We were failing to round down the base of partial VM ranges. This led
to split regions being constructed that could have a non-page-aligned
base address. This would then trip assertions in the VM code.

Found by fuzz-syscalls. :^)
2021-02-13 01:49:44 +01:00
Andreas Kling
7551090056 Kernel: Round up ranges to page size multiples in munmap and mprotect
This prevents passing bad inputs to RangeAllocator, which would then assert.

Found by fuzz-syscalls. :^)
2021-02-13 01:18:03 +01:00
Andreas Kling
f1b5def8fd Kernel: Factor address space management out of the Process class
This patch adds Space, a class representing a process's address space.

- Each Process has a Space.
- The Space owns the PageDirectory and all Regions in the Process.

This allows us to reorganize sys$execve() so that it constructs and
populates a new Space fully before committing to it.

Previously, we would construct the new address space while still
running in the old one, and encountering an error meant we had to do
tedious and error-prone rollback.

Those problems are now gone, replaced by what's hopefully a set of much
smaller problems and missing cleanups. :^)
2021-02-08 18:27:28 +01:00
Andreas Kling
d4dd4a82bb Kernel: Don't allow sys$msyscall() on non-mmap regions 2021-02-02 20:16:13 +01:00
Andreas Kling
823186031d Kernel: Add a way to specify which memory regions can make syscalls
This patch adds sys$msyscall() which is loosely based on an OpenBSD
mechanism for preventing syscalls from non-blessed memory regions.

It works similarly to pledge and unveil: you can call it as many
times as you like, and when you're finished, you call it with a null
pointer and it will stop accepting new regions from then on.

If a syscall later happens and doesn't originate from one of the
previously blessed regions, the kernel will simply crash the process.
2021-02-02 20:13:44 +01:00
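A hypothetical usage sketch of the protocol described above (the syscall is Serenity-specific; the int msyscall(void*) signature and the region pointers are assumptions):

    // Hedged sketch: bless each region that legitimately issues syscalls,
    // then seal the list by passing a null pointer.
    extern "C" int msyscall(void* address); // assumed Serenity signature

    void bless_syscall_regions(void* libc_text, void* loader_text)
    {
        msyscall(libc_text);   // each call blesses one region...
        msyscall(loader_text);
        msyscall(nullptr);     // ...and null seals the list for good
    }
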
Andreas Kling
123c37e1c0 Kernel: Fix mix-up between MAP_STACK/MAP_ANONYMOUS in prot validation 2021-01-30 10:30:17 +01:00
Andreas Kling
e55ef70e5e Kernel: Remove "has made executable exception for dynamic loader" flag
As Idan pointed out, this flag is actually not needed, since we don't
allow transitioning from previously-executable to writable anyway.
2021-01-30 10:06:52 +01:00
Andreas Kling
d0c5979d96 Kernel: Add "prot_exec" pledge promise and require it for PROT_EXEC
This prevents sys$mmap() and sys$mprotect() from creating executable
memory mappings in pledged programs that don't have this promise.

Note that the dynamic loader runs before pledging happens, so it's
unaffected by this.
2021-01-29 18:56:34 +01:00
Andreas Kling
51df44534b Kernel: Disallow mapping anonymous memory as executable
This adds another layer of defense against introducing new code into a
running process. The only permitted way of doing so is by mmapping an
open file with PROT_READ | PROT_EXEC.

This does make any future JIT implementations slightly more complicated
but I think it's a worthwhile trade-off at this point. :^)
2021-01-29 14:52:34 +01:00
Andreas Kling
af3d3c5c4a Kernel: Enforce W^X more strictly (like PaX MPROTECT)
This patch adds enforcement of two new rules:

- Memory that was previously writable cannot become executable
- Memory that was previously executable cannot become writable

Unfortunately we have to make an exception for text relocations in the
dynamic loader. Since those necessitate writing into a private copy
of library code, we allow programs to transition from RW to RX under
very specific conditions. See the implementation of sys$mprotect()'s
should_make_executable_exception_for_dynamic_loader() for details.
2021-01-29 14:52:27 +01:00
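The effect is observable from userspace. A hedged POSIX sketch: on a kernel enforcing these rules the mprotect() call is expected to fail, while a permissive OS may allow it.

    // Hedged sketch: under W^X as described above, making a
    // previously-writable anonymous mapping executable is refused.
    #include <sys/mman.h>
    #include <cstdio>

    int main()
    {
        void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        // This region was writable, so W^X forbids making it executable.
        if (mprotect(p, 4096, PROT_READ | PROT_EXEC) < 0)
            perror("mprotect"); // expected failure under W^X
        return 0;
    }
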
Sahan Fernando
6876b9a514 Kernel: Prevent mmap-ing as both fixed and randomized 2021-01-29 07:45:00 +01:00
Jorropo
22b0ff05d4 Kernel: sys$mmap PAGE_ROUND_UP size before calling allocate_randomized (#5154)
`allocate_randomized` asserts an already-sanitized size, but `mmap` was
just forwarding whatever the process asked for, so it was possible to
trigger a kernel panic from an unprivileged process just by asking for
some randomly placed memory with a size not aligned to the page size.
This fixes the issue by rounding up to the next page size multiple
before calling `allocate_randomized`.

Fixes #5149
2021-01-28 22:36:20 +01:00
Andreas Kling
b6937e2560 Kernel+LibC: Add MAP_RANDOMIZED flag for sys$mmap()
This can be used to request random VM placement instead of the highly
predictable regular mmap(nullptr, ...) VM allocation strategy.

It will soon be used to implement ASLR in the dynamic loader. :^)
2021-01-28 16:23:38 +01:00
Andreas Kling
e67402c702 Kernel: Remove Range "valid" state and use Optional<Range> instead
It's easier to understand VM ranges if they are always valid. We can
simply use an empty Optional<Range> to encode absence when needed.
2021-01-27 21:14:42 +01:00
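The same pattern in standard C++, with std::optional standing in for AK::Optional and Range as a stand-in type:

    // Hedged sketch: encode "no range" as an empty optional rather than a
    // Range carrying a "valid" flag.
    #include <cstddef>
    #include <cstdint>
    #include <optional>

    struct Range {
        uintptr_t base { 0 };
        size_t size { 0 };
    };

    std::optional<Range> try_allocate_range(size_t size)
    {
        if (size == 0)
            return std::nullopt;           // absence, not an "invalid Range"
        return Range { 0x10000000, size }; // placeholder allocation
    }
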
Andreas Kling
5ab27e4bdc Kernel: sys$mmap() without MAP_FIXED should consider address a hint
If we can't use that specific address, it's still okay to put it
anywhere else in VM.
2021-01-27 21:14:42 +01:00
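A runnable POSIX sketch of hint semantics: without MAP_FIXED the requested address is only a suggestion, and callers must use the address mmap() returns.

    // Hedged sketch: the hint may be honored or not; either way is fine.
    #include <sys/mman.h>
    #include <cstdio>

    int main()
    {
        void* hint = reinterpret_cast<void*>(0x30000000); // placeholder hint
        void* p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        printf("asked for %p, got %p\n", hint, p); // may differ, by design
        return 0;
    }
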
Andreas Kling
1e25d2b734 Kernel: Remove allocate_region() functions that don't take a Range
Let's force callers to provide a VM range when allocating a region.
This makes ENOMEM error handling more visible and removes implicit
VM allocation which felt a bit magical.
2021-01-26 14:13:57 +01:00
Andreas Kling
ab14b0ac64 Kernel: Hoist VM range allocation up to sys$mmap() itself
Instead of letting each File subclass do range allocation in their
mmap() override, do it up front in sys$mmap().

This makes us honor alignment requests for file-backed memory mappings
and simplifies the code somewhat.
2021-01-25 18:57:06 +01:00
Andreas Kling
43109f9614 Kernel: Remove unused syscall sys$minherit()
This is no longer used. We can bring it back the day we need it.
2021-01-16 14:52:04 +01:00
Andreas Kling
64b0d89335 Kernel: Make Process::allocate_region*() return KResultOr<Region*>
This allows region allocation to return specific errors and we don't
have to assume every failure is an ENOMEM.
2021-01-15 19:10:30 +01:00
Tom
e3190bd144 Revert "Kernel: Allocate shared memory regions immediately"
This reverts commit fe6b3f99d1.
2021-01-02 20:56:35 +01:00
Andreas Kling
fe6b3f99d1 Kernel: Allocate shared memory regions immediately
Lazily committed shared memory was not working in situations where one
process would write to the memory and another would only read from it.

Since the reading process would never cause a write fault in the shared
region, we'd never notice that the writing process had added real
physical pages to the VMObject. This happened because the lazily
committed pages were marked "present" in the page table.

This patch solves the issue by always allocating shared memory up front
and not trying to be clever about it.
2021-01-02 16:57:31 +01:00
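The failure mode is easy to reproduce in miniature: one process writes through a MAP_SHARED mapping while the other only reads and therefore never takes a write fault. A runnable POSIX sketch:

    // Hedged sketch of the scenario above: the child writes to a shared
    // anonymous mapping, the parent only reads. If lazily committed shared
    // pages were wrongly marked present, the parent would read stale
    // zeroes; with up-front allocation both see the same physical page.
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        auto* shared = static_cast<int*>(p);
        if (fork() == 0) {
            *shared = 42; // writer: faults in a real physical page
            _exit(0);
        }
        wait(nullptr);
        printf("%d\n", *shared); // reader never write-faults, must see 42
        return 0;
    }
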
Andreas Kling
5dae85afe7 Kernel: Pass "shared" flag to Region constructor
Before this change, we would sometimes map a region into the address
space with !is_shared(), and then moments later call set_shared(true).

I found this very confusing while debugging, so this patch makes us pass
the initial shared flag to the Region constructor, ensuring that it's in
the correct state by the time we first map the region.
2021-01-02 16:57:31 +01:00
Tom
476f17b3f1 Kernel: Merge PurgeableVMObject into AnonymousVMObject
This implements memory commitments and lazy-allocation of committed
memory.
2021-01-01 23:43:44 +01:00
Tom
e21cc4cff6 Kernel: Remove MAP_PURGEABLE from mmap
This brings mmap more in line with other operating systems. Prior to
this, it was impossible to request memory that was definitely committed;
instead, MAP_PURGEABLE would provide a region that was not actually
purgeable, but also not fully committed, which meant that using such
memory could still cause crashes when the underlying pages could no
longer be allocated.

This fixes some random crashes in low-memory situations where
non-volatile memory is mapped (e.g. malloc, TLS, Gfx::Bitmap, etc.) but
there is insufficient physical memory available to commit a new page
when a page in these regions is first accessed.
2021-01-01 23:43:44 +01:00
Tom
c3451899bc Kernel: Add MAP_NORESERVE support to mmap
Rather than lazily committing regions by default, we now commit
the entire region unless MAP_NORESERVE is specified.

This solves random crashes in low-memory situations where e.g. the
malloc heap has allocated memory, but touching previously-unused pages
triggers a crash when no more physical memory is available.

Use this flag to create large regions without actually committing
the backing memory. madvise() can be used to commit arbitrary areas
of such regions after creating them.
2021-01-01 23:43:44 +01:00
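A hedged sketch of the intended use (Linux has a MAP_NORESERVE flag with broadly similar semantics, so this compiles there too): reserve a large range cheaply, then commit only what is touched.

    // Hedged sketch: address space is reserved, but backing memory is only
    // committed for the parts actually used (or madvise()d, per the commit).
    #include <sys/mman.h>
    #include <cstddef>

    int main()
    {
        size_t const size = 1ul << 30; // 1 GiB of address space, uncommitted
        void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        static_cast<char*>(p)[0] = 1; // commits just the first page
        return 0;
    }
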
Tom
bc5d6992a4 Kernel: Memory purging improvements
This adds the ability for a Region to define volatile/nonvolatile
areas within mapped memory using madvise(). Memory purging now takes
into account all views of the PurgeableVMObject and only purges memory
that is not needed by all of them.

When calling madvise() to change an area to nonvolatile memory, we
return whether memory from that area was purged. At that time we also
try to remap all memory that is requested to be nonvolatile, and if
insufficient pages are available, we notify the caller of that fact.
2021-01-01 23:43:44 +01:00
Andreas Kling
af28a8ad11 Kernel: Hold InodeVMObject reference while inspecting it in sys$mmap() 2020-12-29 15:43:35 +01:00
Andreas Kling
30dbe9c78a Kernel+LibC: Add a very limited sys$mremap() implementation
This syscall can currently only remap a shared file-backed mapping into
a private file-backed mapping.
2020-12-29 02:20:43 +01:00
Itamar
d64d0451e5 Kernel: Fix mmap with specific address for file backed mappings 2020-12-24 21:34:51 +01:00
Itamar
efe4da57df Loader: Stabilize loader & Use shared libraries everywhere :^)
The dynamic loader is now stable enough to be used everywhere in the
system - so this commit does just that.
No More .a Files, Long Live .so's!
2020-12-14 23:05:53 +01:00
Itamar
9ca1a0731f Kernel: Support TLS allocation from userspace
This adds an allocate_tls syscall through which a userspace process
can request the allocation of a TLS region with a given size.

This will be used by the dynamic loader to allocate TLS for the main
executable & its libraries.
2020-12-14 23:05:53 +01:00
Tom
c8d9f1b9c9 Kernel: Make copy_to/from_user safe and remove unnecessary checks
Since the CPU already does almost all necessary validation steps
for us, we don't really need to attempt to do this. Doing it
ourselves doesn't really work very reliably, because we'd have to
account for other processors modifying virtual memory, and we'd
have to account for e.g. pages not being able to be allocated
due to insufficient resources.

So change the copy_to/from_user (and associated helper functions)
to use the new safe_memcpy, which will return whether it succeeded
or not. The only manual validation step needed (which the CPU
can't perform for us) is making sure the pointers provided by user
mode aren't pointing to kernel mappings.

To make it easier to read/write from/to either kernel or user mode
data, add the UserOrKernelBuffer helper class, which will internally
either use copy_from/to_user or directly memcpy, or pass the data
through directly using a temporary buffer on the stack.

Last but not least, we need to keep syscall params trivial, as we
need to copy them from/to user mode using copy_from/to_user.
2020-09-13 21:19:15 +02:00
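A conceptual sketch of the UserOrKernelBuffer idea described above (names and behavior simplified; the copy helper here just memcpys, whereas the real kernel primitive validates and can fail):

    // Hedged conceptual sketch, not Serenity's implementation: one wrapper
    // that routes writes through the user-copy path for user pointers and
    // plain memcpy for kernel pointers.
    #include <cstddef>
    #include <cstring>

    // Stand-in for the kernel primitive; the real one can fail safely.
    static bool copy_to_user(void* dest, void const* src, size_t n)
    {
        memcpy(dest, src, n);
        return true;
    }

    class UserOrKernelBuffer {
    public:
        static UserOrKernelBuffer for_user_buffer(void* ptr) { return { ptr, true }; }
        static UserOrKernelBuffer for_kernel_buffer(void* ptr) { return { ptr, false }; }

        bool write(void const* src, size_t size)
        {
            if (m_is_user)
                return copy_to_user(m_ptr, src, size); // may fail; check result
            memcpy(m_ptr, src, size); // kernel pointer: direct copy
            return true;
        }

    private:
        UserOrKernelBuffer(void* ptr, bool is_user) : m_ptr(ptr), m_is_user(is_user) {}
        void* m_ptr;
        bool m_is_user;
    };
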