We never supported VGA framebuffers, and that folder was a big, misleading
part of the graphics subsystem.
We do support a bare-bones VGA text console (80x25), but only because we
can't be 100% sure we can always initialize a framebuffer, so in the worst
case we fall back to the plain old VGA text console and the user can still
use their machine.
Therefore, the only remaining VGA-related code is in the GraphicsManagement
code, to help drive the VGA text console if needed.
The old methods can already be considered deprecated, and now that we have
removed framebuffer devices entirely, we can safely remove these methods
too, which simplifies the GenericGraphicsAdapter class a lot.
Instead of letting the user determine whether framebuffer devices will be
created (which is pointless because they are gone by now), let's simplify
the flow by allowing the user to choose between full, limited or disabled
functionality. The determination happens only once: if the user decided to
disable graphics support, the initialize method exits immediately. If
limited functionality is chosen, a generic DisplayConnector is initialized
with the preset framebuffer resolution (if present), and then the
initialize method exits. By default, the code proceeds to initialize all
drivers as usual.
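A minimal sketch of that flow (the enum and helper names here are
illustrative, not necessarily the ones used in GraphicsManagement):

    // Illustrative sketch; the actual names in GraphicsManagement may differ.
    enum class GraphicsSubsystemMode {
        Enabled,  // full functionality: probe and initialize all drivers
        Limited,  // only a generic DisplayConnector on the boot framebuffer
        Disabled, // no graphics support at all
    };

    bool GraphicsManagement::initialize()
    {
        if (m_subsystem_mode == GraphicsSubsystemMode::Disabled)
            return true; // exit immediately, nothing to initialize
        if (m_subsystem_mode == GraphicsSubsystemMode::Limited) {
            // Use the preset (bootloader-provided) framebuffer resolution,
            // if there is one, then exit without probing native drivers.
            if (auto framebuffer = preset_boot_framebuffer(); framebuffer.has_value())
                initialize_preset_resolution_generic_display_connector(*framebuffer);
            return true;
        }
        // Default: proceed to initialize all drivers as usual.
        return initialize_all_adapters();
    }
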
The DisplayConnector class is meant to replace the FramebufferDevice
class. The advantages of this class over the FramebufferDevice class are:
1. It removes the mmap interface entirely. This interface is unsafe, as
multiple processes could try to use it, and when switching to and from
text console mode, there's no "good" way to revoke a memory mapping from
this interface, let alone when there are multiple processes that call
this interface. Therefore, in the DisplayConnector class there's no
implementation for this method at all.
2. The class uses a new real-world structure called ModeSetting (sketched
below, after this list), which takes into account the fact that real
hardware requires more than width, height and pitch settings to mode-set
the display resolution.
3. The class assumes all instances should supply some sort of EDID, so it
provides a mechanism to do so. Even if a given driver does not know the
actual EDID, it will ask for a default, generic EDID blob to be created.
4. This class shifts the responsibility of switching between console
mode and graphical mode from the GraphicsAdapter to the DisplayConnector
class, so when doing the switch, the GraphicsManagement code actually
asks each DisplayConnector object to do the switch and doesn't rely on
the GraphicsAdapter objects at all.
This helps solve an issue where we boot with a text mode screen, so the
kernel initializes an early text mode console, but even after disabling
it, that console can still access VGA ports. This wouldn't be a problem
for emulated hardware, but bare metal hardware might have a "conflict",
especially if the native driver explicitly requests to disable VGA
emulation.
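As for the ModeSetting structure mentioned in point 2, a real-world mode
setting carries full display timing information rather than just width,
height and pitch; a sketch with illustrative field names:

    // Illustrative sketch of a real-world mode setting; the actual
    // ModeSetting structure may name or group these fields differently.
    #include <cstddef>

    struct ModeSetting {
        size_t horizontal_stride { 0 };             // bytes per scanline (pitch)
        size_t pixel_clock_in_khz { 0 };            // dot clock the hardware must run at
        size_t horizontal_active { 0 };             // visible width in pixels
        size_t horizontal_front_porch_pixels { 0 };
        size_t horizontal_sync_time_pixels { 0 };
        size_t horizontal_blank_pixels { 0 };
        size_t vertical_active { 0 };               // visible height in lines
        size_t vertical_front_porch_lines { 0 };
        size_t vertical_sync_time_lines { 0 };
        size_t vertical_blank_lines { 0 };
    };
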
Instead, hold the lock while we copy the contents to a stack-based
Vector, then iterate over it without any locking.
Because we rely on heap allocations, we need to propagate errors back in
case of an OOM condition; therefore, both the PCI::enumerate API function
and PCI::Access::add_host_controller_and_enumerate_attached_devices now
use an ErrorOr<void> return value to propagate errors. An OOM error can
only occur while enumerating the m_device_identifiers vector under a
spinlock and trying to expand the temporary Vector which will then be
used locklessly to actually iterate over the PCI::DeviceIdentifier objects.
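The resulting pattern looks roughly like this (shown with standard C++
types for illustration; the kernel code uses its own Spinlock, Vector and
ErrorOr<void> types instead of exceptions):

    // Standard-C++ illustration of the pattern described above.
    #include <mutex>
    #include <system_error>
    #include <vector>

    struct DeviceIdentifier {
        // PCI address, vendor/device IDs, capabilities, and so on.
    };

    class Access {
    public:
        // Copy the identifiers while holding the lock, then run the
        // callback without it, so callbacks can allocate or block safely.
        std::error_code enumerate(auto&& callback)
        {
            std::vector<DeviceIdentifier> snapshot;
            {
                std::scoped_lock locker(m_lock);
                try {
                    snapshot = m_device_identifiers; // the only allocation that can fail
                } catch (std::bad_alloc const&) {
                    return std::make_error_code(std::errc::not_enough_memory);
                }
            }
            for (auto const& identifier : snapshot)
                callback(identifier);
            return {};
        }

    private:
        std::mutex m_lock;
        std::vector<DeviceIdentifier> m_device_identifiers;
    };
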
If there's no PCI bus, then it's safe to assume that the x86 machine we
run on supports VGA text mode console output with an ISA VGA adapter.
If this is the case, we just instantiate an ISAVGAAdapter object that
assumes this situation and allows us to boot into a VGA text mode console.
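Roughly like this (assuming a check along the lines of
PCI::Access::is_initialized() is available; the exact helper and member
names are illustrative):

    // Illustrative sketch; the exact way we detect "no PCI bus" may differ.
    if (!PCI::Access::is_initialized()) {
        // No PCI bus at all: assume an ISA VGA adapter is present and
        // boot into the 80x25 VGA text mode console.
        m_graphics_devices.append(ISAVGAAdapter::initialize());
        return true;
    }
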
If the bootloader that loaded us provides framebuffer details via the
Multiboot protocol, then we can instantiate a framebuffer console.
Otherwise, we should use a text mode console, assuming that the BIOS and
the bootloader didn't try to modeset the screen resolution, so what we
have is VGA 80x25 text mode being displayed on screen.
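A sketch of that early selection (the flag and field names come from the
Multiboot 1 header; the console helper names are illustrative):

    // Illustrative sketch of the early boot console choice.
    if ((multiboot_info_ptr->flags & MULTIBOOT_INFO_FRAMEBUFFER_INFO) != 0
        && multiboot_info_ptr->framebuffer_type == MULTIBOOT_FRAMEBUFFER_TYPE_RGB) {
        // The bootloader already mode-set a linear RGB framebuffer for us.
        g_boot_console = create_boot_framebuffer_console();
    } else {
        // Assume the BIOS/bootloader left the screen in 80x25 VGA text mode.
        g_boot_console = create_boot_vga_text_mode_console();
    }
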
Since "boot_framebuffer_console" is no longer a good representative as a
global variable name, it's changed to g_boot_console to match the fact
that it can be assigned with a text mode console and not framebuffer
console if needed.
When GraphicsManagement initializes the drivers, we can disable the
bootloader framebuffer console. Right now we don't fully destroy the
no-longer-needed console yet, as it may still be in use by another CPU.
We should only look at the framebuffer structure members if the
MULTIBOOT_INFO_FRAMEBUFFER_INFO bit is set in the flags field.
Also, add some logging when we ignore the fbdev command line argument,
either because the bootloader didn't provide a framebuffer or because we
don't support the framebuffer format.
This allows forcing the use of only the framebuffer set up by the
bootloader and skips instantiating devices for any other graphics
cards that may be present.
We never used that type method except during initialization in
GraphicsManagement, and there we only used it to query whether the device
is VGA compatible or not.
Bootmode used to control framebuffers, panic behavior, and SystemServer.
This patch factors framebuffer control into a separate flag.
Note that the combination 'bootmode=self-test fbdev=on' leads to
unexpected behavior, which can only be fixed in a later commit.
A few things were changed:
1. Semantic changes - PCI segments are now called PCI domains, to better
match what they really are. It's also the name that Linux gave them, and
it seems that Wikipedia uses this name as well.
We also remove PCI::ChangeableAddress, because it's no longer being used.
2. There are no WindowedMMIOAccess or MMIOAccess classes anymore, as
they added a bunch of unnecessary complexity. Windowed access is removed
entirely (it was tested, but never benchmarked), so we are left with IO
access and memory access options. The memory access option essentially
maps the PCI bus (from the chosen PCI domain) to virtual memory as-is.
This means that, unless needed, only one PCI bus is mapped at any given
time, and the mapping is changed if access to another PCI bus in the same
PCI domain is needed. For now, we don't support mapping different PCI
buses from different PCI domains at the same time, because it's still
basically a non-issue for most machines out there.
3. OOM safety is increased, especially when constructing the Access
object. We pre-allocate any needed resources, and we try to find PCI
domains (if requested to initialize memory access) after we attempt to
construct the Access object, so it's possible to fail "gracefully" at
this point.
4. All PCI API functions are now separated into a different header file,
which means only "clients" of the PCI subsystem API will need to include
that header file.
5. Functional changes - we now only allow enumerating the bus after a
hardware scan. This means that the old "enumerate_hardware" method is
removed, so when initializing an Access object, the initializing function
must call rescan on it to force it to find devices. This makes it
possible for a rescan to fail, and also to defer it until after
construction, which helps with both OOM safety and hotplug capabilities.
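The initialization flow now looks roughly like this (method names and the
ErrorOr/TRY-style error propagation are illustrative; the real entry
points belong to the kernel's PCI::Access class):

    // Illustrative sketch only; actual names and signatures may differ.
    ErrorOr<void> initialize_pci_access()
    {
        // Constructing the Access object pre-allocates what it needs, so it
        // can fail "gracefully" at this point (e.g. under OOM, or when no
        // usable PCI domain is found for memory access).
        TRY(Access::initialize_for_memory_access());

        // Devices are only discovered on an explicit rescan, which can fail
        // and can also be deferred (useful for OOM safety and hotplug).
        TRY(Access::the().rescan_hardware());
        return {};
    }
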
Our existing implementation did not check the element type of the other
pointer in the constructors and move assignment operators. This meant
that some operations that would require explicit casting on raw pointers
were done implicitly, such as:
- downcasting a base class to a derived class (e.g. `Kernel::Inode` =>
`Kernel::ProcFSDirectoryInode` in Kernel/ProcFS.cpp),
- casting to an unrelated type (e.g. `Promise<bool>` => `Promise<Empty>`
in LibIMAP/Client.cpp).
This, of course, allows gross violations of the type system, and makes
the need to type-check less obvious before downcasting. Luckily, while
adding the `static_ptr_cast`s, only two truly incorrect usages were
found; in the other instances, our casts just needed to be made
explicit.
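The fix amounts to constraining the converting operations the same way
raw pointers are constrained, roughly like this (a heavily simplified
stand-in for the AK smart pointers, with reference counting omitted):

    // Heavily simplified stand-in; reference counting and most of the real
    // API are omitted.
    #include <type_traits>
    #include <utility>

    template<typename T>
    class RefPtr {
    public:
        RefPtr() = default;
        explicit RefPtr(T* ptr)
            : m_ptr(ptr)
        {
        }

        // Only allow converting from RefPtr<U> when U* implicitly converts
        // to T* (i.e. upcasts), mirroring the raw pointer rules.
        template<typename U, typename = std::enable_if_t<std::is_convertible_v<U*, T*>>>
        RefPtr(RefPtr<U>&& other)
            : m_ptr(other.release())
        {
        }

        T* release() { return std::exchange(m_ptr, nullptr); }

    private:
        T* m_ptr { nullptr };
    };

    // Downcasts (and other conversions the constructor now rejects) must go
    // through an explicit helper, making the cast visible at the call site.
    template<typename T, typename U>
    RefPtr<T> static_ptr_cast(RefPtr<U>&& other)
    {
        return RefPtr<T>(static_cast<T*>(other.release()));
    }
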
This allows us to specify virtual addresses for things the kernel should
access via virtual addresses later on. By doing this we can make the
kernel independent of specific physical addresses.
We use a switch-case statement to try to find the most suitable driver
for a specific graphics card. In case we don't find one, we use the
default case to initialize the graphics card as a generic VGA adapter, if
the adapter is VGA compatible.
If we couldn't initialize a driver, we don't touch this adapter anymore.
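In sketch form (the vendor IDs and class names below are examples of what
the dispatch looks like, not a verbatim copy of the real code;
vendor_id and is_vga_compatible are assumed to come from the PCI device
identifier):

    // Illustrative sketch of the per-device dispatch in GraphicsManagement.
    RefPtr<GenericGraphicsAdapter> adapter;
    switch (vendor_id) {
    case 0x1234: // QEMU/Bochs display device
        adapter = BochsGraphicsAdapter::initialize(device_identifier);
        break;
    case 0x8086: // Intel integrated graphics
        adapter = IntelNativeGraphicsAdapter::initialize(device_identifier);
        break;
    default:
        // No native driver: fall back to a generic VGA adapter, but only
        // if the device is actually VGA compatible.
        if (is_vga_compatible)
            adapter = VGACompatibleAdapter::initialize(device_identifier);
        break;
    }
    if (!adapter)
        return; // couldn't initialize a driver, so don't touch this adapter anymore
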
Also, GraphicsDevice should not be tied to a PCI::Address member, as it
could theoretically be used with other buses (e.g. ISA cards).
Currently, Kernel::Graphics::FramebufferConsole is written assuming that
the underlying framebuffer memory is physically contiguous. There are a
bunch of framebuffer devices that would need to use the components of
FramebufferConsole (in particular, access to the kernel bitmap font
rendering logic). To reduce code duplication, the framebuffer console has
been split into two parts: the abstract GenericFramebufferConsole class,
which does the rendering, and the ContiguousFramebufferConsole class,
which contains all logic related to managing the underlying VM object.
Also, a new flush method has been added to the class, to support devices
that require an extra flush step to render.
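The resulting shape, sketched (method names are illustrative, the real
interfaces are richer, and the 8x16 font cell size is an assumption):

    // Illustrative sketch of the split described above.
    #include <cstddef>

    class GenericFramebufferConsole {
    public:
        virtual ~GenericFramebufferConsole() = default;

        // Shared bitmap-font rendering logic lives here and only needs a
        // pointer to the pixel data, however it is backed.
        void write(size_t x, size_t y, char ch)
        {
            // Render the glyph for 'ch' into framebuffer_data() at the
            // cell (x, y), then flush that cell.
            (void)ch;
            flush(x, y, 8, 16); // assuming an 8x16 bitmap font cell
        }

        // Devices that need an extra step to present pixels override this;
        // for plain memory-mapped framebuffers it is a no-op.
        virtual void flush(size_t x, size_t y, size_t width, size_t height) = 0;

    protected:
        virtual unsigned char* framebuffer_data() = 0;
    };

    class ContiguousFramebufferConsole final : public GenericFramebufferConsole {
    public:
        virtual void flush(size_t, size_t, size_t, size_t) override { }

    protected:
        // Manages the VM object mapping a physically contiguous framebuffer.
        virtual unsigned char* framebuffer_data() override { return m_framebuffer_data; }

    private:
        unsigned char* m_framebuffer_data { nullptr }; // stands in for the real mapping
    };
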
If we have a VGA-capable graphics adapter that we support, we should
prefer it over any legacy VGA adapter, because we wouldn't use it in
legacy VGA mode in that case.
This solves the problem where we would only use the legacy VGA card
when both a legacy VGA card and a VGA-mode-capable adapter are present.
This fixes a bug that was reported on the Discord server by
@ElectrodeYT - due to confusion about passing arguments in different
orders, we messed up and triggered a page fault because of faulty sizes.
As we removed support for the VBE modesetting that GRUB did early on
boot, we need to determine if we can modeset the resolution with our
drivers, and if not, we should enable text mode and ensure that
SystemServer knows about it too.
Also, SystemServer should first check if there's a framebuffer device
node, which is an indication that text mode was not enabled even if it
was requested. Then, if it doesn't find one, it should check which
boot_mode argument the user specified (in case it's self-test). This way,
if we try to use the bochs-display device (which is not VGA compatible)
and request text mode, the request will not be honored and we will
continue with graphical mode.
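In SystemServer, that check order can be sketched like this (the /dev/fb0
path and the helper name are assumptions used for illustration; the real
logic lives in SystemServer's startup code):

    // Userspace illustration of the check order described above.
    #include <unistd.h>

    static bool should_start_in_text_mode(bool boot_mode_wants_text_mode)
    {
        // If a framebuffer device node exists, the kernel did mode-set a
        // framebuffer, so a text mode request cannot be honored.
        if (access("/dev/fb0", F_OK) == 0)
            return false;
        // Only then consult the boot_mode argument (e.g. "self-test").
        return boot_mode_wants_text_mode;
    }
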
Also, try to print critical messages with the minimum possible memory
allocations.
In LibVT, we make the implementation flexible for kernel-specific
methods that are implemented in the ConsoleImpl class.