For the same reason we ignore interfaces without an IP address when
choosing where to send a route, we should also ignore interfaces without
IP addresses when updating the ARP table on incoming packets from
local addresses.
On an interface with a null address, the mask check would always
evaluate to zero, which caused the system to update the ARP table on
almost every incoming packet from any address (private or public).
This patch fixes this behavior by applying the check only to interfaces
with valid addresses, so the ARP table no longer gets constantly
hammered.
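
A minimal sketch of the guard, using illustrative type and helper names
rather than the exact kernel code:

    // Sketch only; names approximate the kernel's networking code.
    void on_packet_from_local_network(NetworkAdapter& adapter,
                                      IPv4Address const& source,
                                      MACAddress const& source_mac)
    {
        // An interface with a null address would make the masked
        // comparison below match almost everything, so skip it entirely.
        if (adapter.ipv4_address() == IPv4Address {})
            return;

        auto mask = adapter.ipv4_netmask().to_u32();
        bool source_is_local =
            (source.to_u32() & mask) == (adapter.ipv4_address().to_u32() & mask);
        if (source_is_local)
            update_arp_table(source, source_mac); // hypothetical helper
    }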
Closes #13713
Previously, the routing table did not store the route flags. This
adds basic support for them and exposes them in the /proc directory so
that a userspace caller can query the routing table and identify the
type of each route.
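
As a hedged example of what a userspace query could look like (the
exact /proc path and its column layout are assumptions for
illustration, not confirmed by this change):

    #include <stdio.h>

    int main()
    {
        // Hypothetical path; each line describes one route, and the
        // flags column identifies the route type.
        FILE* f = fopen("/proc/net/route", "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        char line[256];
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }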
It doesn't make sense after the introduction of the routing table,
which allows having multiple gateways per interface, and it isn't used
by any userspace programs now.
Previously, the system had no concept of assigning different routes for
different destination addresses, as the default gateway IP address was
assigned directly to a network adapter. This default gateway was
statically assigned, and any update would remove the previously
existing route.
This patch is a first step towards implementing #180. It implements
a simple global routing table that is referenced during the routing
process. With this implementation it is now possible for a user or
service (e.g. DHCP) to dynamically add routes to the table.
The routing table will select the most specific route when possible. It
will select any direct match between the destination and a routing
entry's address. If the destination address overlaps multiple entries,
the Kernel will use the longest prefix match, i.e. the greatest number
of matching leading bits between the destination address and the
routing entry's address. In the event that no entries are found for a
specific destination address, this implementation supports entries for
a default route to be set for any specified interface.
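
A compact sketch of the selection logic described above, with
simplified types (plain 32-bit addresses instead of the kernel's
IPv4Address; all names are illustrative):

    #include <cstdint>
    #include <optional>
    #include <vector>

    struct Route {
        uint32_t destination { 0 };
        uint32_t netmask { 0 };
        uint32_t gateway { 0 };
    };

    // Number of leading bits on which a and b agree.
    static int matching_prefix_length(uint32_t a, uint32_t b)
    {
        int length = 0;
        for (int bit = 31; bit >= 0 && !((a ^ b) & (1u << bit)); --bit)
            ++length;
        return length;
    }

    // Pick the most specific matching entry. An exact match yields the
    // longest possible prefix and wins outright; an all-zero entry acts
    // as the default route since its mask matches every destination.
    static std::optional<Route> select_route(std::vector<Route> const& table,
                                             uint32_t destination)
    {
        std::optional<Route> best;
        int best_length = -1;
        for (auto const& route : table) {
            if ((destination & route.netmask) != (route.destination & route.netmask))
                continue; // Entry does not cover this destination.
            int length = matching_prefix_length(destination, route.destination);
            if (length > best_length) {
                best_length = length;
                best = route;
            }
        }
        return best;
    }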
This is a small first step towards enhancing the system's routing
capabilities. Future enhancements could include referencing a
configuration file at boot to load pre-defined static routes.
Instead, hold the lock while we copy the contents to a stack-based
Vector, then iterate over it without any locking.
Because we rely on heap allocations, we need to propagate errors back
in case of an OOM condition. Therefore, both the PCI::enumerate API
function and PCI::Access::add_host_controller_and_enumerate_attached_devices
now use an ErrorOr<void> return value to propagate errors. An OOM error
can only occur when enumerating the m_device_identifiers vector under a
spinlock and trying to expand the temporary Vector which will later be
used locklessly to actually iterate over the PCI::DeviceIdentifier
objects.
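
A sketch of the pattern, written against AK-style types; the exact
member and lock names are illustrative:

    // Copy under the spinlock, then iterate locklessly. Expanding the
    // Vector is the only allocation, hence the only possible OOM, so it
    // is wrapped in TRY.
    ErrorOr<void> enumerate(Function<void(DeviceIdentifier const&)> callback)
    {
        Vector<DeviceIdentifier> copied_identifiers;
        {
            SpinlockLocker locker(m_access_lock);
            TRY(copied_identifiers.try_extend(m_device_identifiers));
        }
        for (auto const& identifier : copied_identifiers)
            callback(identifier);
        return {};
    }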
This prevents a kernel panic found in CI when m_receive_queue's size is
queried and found to be non-zero, then a different thread clears the
queue, and finally the first thread continues into the if block and
calls the queue's first() method, which then fails an assertion that
the queue's size is non-zero.
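
The shape of the fix, in sketch form (illustrative names): the
emptiness check and the dequeue happen under a single lock acquisition,
so no other thread can clear the queue in between.

    Optional<ReceivedPacket> take_next_packet()
    {
        MutexLocker locker(m_receive_queue_lock);
        if (m_receive_queue.is_empty())
            return {};
        return m_receive_queue.take_first();
    }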
We were frequently dropping packets when downloading large files.
Then we had to wait for TCP retransmission which slowed things down.
This patch dramatically improves E1000 throughput by increasing the
number of RX/TX buffers from 32/8 to 256/256.
The largest chunk of JavaScript from Discord now downloads in roughly
1 second instead of 7 seconds. :^)
Rename the bound socket accessor from socket() to bound_socket().
Also return RefPtr<LocalSocket> instead of a raw pointer, to make it
harder for callers to mess up.
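
In sketch form (signatures approximate):

    // Before: a raw pointer that callers could easily keep past the
    // socket's lifetime.
    LocalSocket* socket();

    // After: a clearer name and a strong reference.
    RefPtr<LocalSocket> bound_socket();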
1. When receiving FIN while in FinWait1, we now reply with ACK
in addition to the FinWait1->Closing transition.
2. When receiving FIN|ACK while in FinWait1, we now reply with
ACK and transition from FinWait1->TimeWait.
3. When receiving FIN while in FinWait2, we now reply with ACK.
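
A condensed sketch of these three transitions (enum and helper names
are illustrative, not the exact TCPSocket code):

    void handle_fin_during_close(TCPFlags flags)
    {
        switch (state()) {
        case State::FinWait1:
            if (has_flag(flags, TCPFlags::FIN | TCPFlags::ACK)) {
                send_ack();                 // case 2: FIN|ACK
                set_state(State::TimeWait);
            } else if (has_flag(flags, TCPFlags::FIN)) {
                send_ack();                 // case 1: FIN
                set_state(State::Closing);
            }
            break;
        case State::FinWait2:
            if (has_flag(flags, TCPFlags::FIN))
                send_ack();                 // case 3: FIN, just ACK
            break;
        default:
            break;
        }
    }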
This commit removes the usage of HashMap in Mutex, thereby making Mutex
allocation-free.
In order to achieve this, several simplifications were made to Mutex,
removing unused code paths and extra VERIFYs:
* We no longer support 'upgrading' a shared lock holder to an
  exclusive holder when it is the only shared holder and it did not
  unlock the lock before relocking it as exclusive. NOTE: Unlike the
  rest of these changes, this scenario is not VERIFY-able in an
  allocation-free way; as a result, the new LOCK_SHARED_UPGRADE_DEBUG
  debug flag was added. This flag lets Mutex allocate in order to
  detect such cases when debugging a deadlock.
* We no longer support checking if a Mutex is locked by the current
  thread when the Mutex was not locked exclusively; the shared version
  of this check was not used anywhere.
* We no longer support force unlocking/relocking a Mutex if the Mutex
  was not locked exclusively; the shared versions of these functions
  were not used anywhere.
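
For reference, the now-unsupported upgrade pattern looks roughly like
this (sketch; the Mode enum matches the kernel's Mutex, the scenario is
illustrative):

    Mutex mutex;
    mutex.lock(Mutex::Mode::Shared);    // sole shared holder
    mutex.lock(Mutex::Mode::Exclusive); // previously upgraded in place;
                                        // LOCK_SHARED_UPGRADE_DEBUG exists
                                        // to catch this case when debugging
                                        // a deadlock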
Apologies for the enormous commit, but I don't see a way to split this
up nicely. In the vast majority of cases it's a simple change. A few
extra places can use TRY instead of manual error checking though. :^)
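
The typical shape of the change, as a before/after sketch
(try_do_thing is a placeholder for any ErrorOr-returning call):

    // Manual error checking:
    auto result = try_do_thing();
    if (result.is_error())
        return result.release_error();
    auto value = result.release_value();

    // With TRY:
    auto value = TRY(try_do_thing());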
When doing the last unref() on a listed-ref-counted object, we keep
the list locked while mutating the ref count. The destructor itself
is invoked after unlocking the list.
This was racy with weakable classes, since their weak pointer factory
still pointed to the object after we'd decided to destroy it. That
opened a small time window where someone could try to strong-ref a weak
pointer to an object after it was removed from the list, but just before
the destructor got invoked.
This patch closes the race window by explicitly revoking all weak
pointers while the list is locked.
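
A sketch of the adjusted last-unref path (member and helper names are
illustrative; the real code lives in ListedRefCounted):

    void unref()
    {
        bool is_last_ref = false;
        all_instances().with([&](auto& list) {
            is_last_ref = deref_base(); // hypothetical: true when count hits 0
            if (!is_last_ref)
                return;
            list.remove(*this);
            revoke_weak_ptrs();         // weak pointers can no longer be
                                        // upgraded to strong references
        });
        if (is_last_ref)
            delete this;                // destructor runs after the list unlock
    }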
Use the same trick as SlavePTY and override unref() to provide safe
removal from the sockets_by_tuple table when destroying a TCPSocket.
This should fix the TCPSocket::from_tuple() flake seen on CI.
Calls to link_up() in the E1000 driver would read the link state
directly from the hardware on every call. This had a negative
performance impact in high-throughput situations, since link_up()
is called every time an IP packet's route is resolved.
This patch takes inspiration from the RTL8139 network adapter where
the link state is stored in a bool and only updated when the hardware
generates an interrupt related to link state change.
After this change I measured a ~9% increase in TCP Tx throughput using

    cat /dev/zero | nc <host_IP> <host_port>

from the Serenity VM to my host machine.
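
In sketch form (the ICR/LSC and STATUS/LU register bits come from the
Intel E1000 datasheet; the helper names are illustrative):

    bool link_up() { return m_link_up; }    // no MMIO read per call anymore

    void handle_irq()
    {
        u32 cause = read_register(REG_ICR); // interrupt cause
        if (cause & ICR_LSC)                // link status changed?
            m_link_up = (read_register(REG_STATUS) & STATUS_LU) != 0;
        // ... RX/TX interrupt handling ...
    }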
Previously we would crash the process immediately when a promise
violation was found during a syscall. This is error prone, as we
don't unwind the stack. This means that in certain cases we can
leak resources, like an OwnPtr / RefPtr tracked on the stack. Or
even leak a lock acquired in a SpinlockLocker.
To remedy this situation we move the promise violation handling to
the syscall handler, right before we return to user space. This
allows the code to follow the normal unwind path, and guarantees
there is no longer any cleanup that needs to occur.
The Process::require_promise() and Process::require_no_promises()
functions were modified to return ErrorOr<void>, so we can enforce
that errors are always propagated by the caller.
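
With the new signatures, a syscall propagates the violation like any
other error (sketch; sys$example and its body are hypothetical, while
the TRY(require_promise(...)) pattern is the one described above):

    ErrorOr<FlatPtr> Process::sys$example()
    {
        TRY(require_promise(Pledge::stdio));
        // On a violation, TRY returns the error here; the stack unwinds
        // normally and the process is terminated just before returning
        // to user space.
        return 0;
    }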