Everyone who connects to ProtocolServer now gets their own instance.
This means that different users can no longer talk to the same
ProtocolServer process, enhancing security and stability.
Instead of painting synchronously whenever a Paint request comes in,
the WebContent process now buffers pending paints, coalesces them, and
defers the actual paint using a zero-timer.
This significantly reduces flickering already, without doing any
double-buffering in the WebContentView widget.
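Roughly, the idea is to make the incoming Paint handler cheap and let
a deferred callback do the actual work once per event loop turn. A
minimal sketch in plain C++ (all names here are made up, not the real
WebContent classes):

```cpp
#include <functional>
#include <queue>
#include <vector>

// Hypothetical stand-ins for the real event loop and rect types.
struct Rect { int x {}, y {}, width {}, height {}; };

struct EventLoop {
    std::queue<std::function<void()>> deferred;
    // A "zero-timer" equivalent: run the callback on the next event loop turn.
    void post(std::function<void()> callback) { deferred.push(std::move(callback)); }
    void pump()
    {
        while (!deferred.empty()) {
            auto callback = std::move(deferred.front());
            deferred.pop();
            callback();
        }
    }
};

class PaintCoalescer {
public:
    explicit PaintCoalescer(EventLoop& loop) : m_loop(loop) {}

    // Called for every incoming Paint request; cheap, never paints directly.
    void request_paint(Rect dirty)
    {
        m_pending_dirty_rects.push_back(dirty);
        if (m_paint_scheduled)
            return; // already scheduled; this rect gets folded into that paint
        m_paint_scheduled = true;
        m_loop.post([this] { flush_pending_paints(); });
    }

private:
    void flush_pending_paints()
    {
        m_paint_scheduled = false;
        // One actual paint covering everything requested since the last flush.
        // (A fuller version would union the rects before painting.)
        m_pending_dirty_rects.clear();
    }

    EventLoop& m_loop;
    std::vector<Rect> m_pending_dirty_rects;
    bool m_paint_scheduled { false };
};
```

Since a burst of invalidations now produces a single paint instead of
many, most of the flicker from repeated partial paints goes away, even
without double-buffering on the widget side.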
The WebContentView widget now inherits from GUI::ScrollableWidget and
will pass its scroll offset over to the WebContent process as it's
changing. This is not super efficient and can get a bit laggy, but it
will do fine as an initial scrolling implementation.
Just like the single-process Web::PageView widget, WebContentView also
paints scrolled content by translating the Gfx::Painter.
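As a sketch of the flow (hypothetical names, not the actual
GUI::ScrollableWidget/WebContentView API): the widget tells the
WebContent process about every scroll offset change, and applies the
same offset as a painter translation when drawing.

```cpp
#include <cstdio>

struct Point { int x {}, y {}; };

// Stand-in for Gfx::Painter; only the translation part matters here.
struct Painter {
    Point translation;
    void translate(int dx, int dy) { translation.x += dx; translation.y += dy; }
};

class ScrollingViewSketch {
public:
    // Would be hooked up to the scrollbars' value-change callbacks.
    void did_scroll(Point new_offset)
    {
        m_scroll_offset = new_offset;
        // In the real widget this becomes an IPC message to the WebContent process.
        std::printf("tell WebContent: scroll offset is now (%d, %d)\n",
            m_scroll_offset.x, m_scroll_offset.y);
    }

    void paint(Painter& painter)
    {
        // Same trick as the single-process view: shift the painter so content
        // draws as if the viewport started at the current scroll offset.
        painter.translate(-m_scroll_offset.x, -m_scroll_offset.y);
        // ...paint the content bitmap here...
    }

private:
    Point m_scroll_offset;
};
```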
Fixes #2575
The extra ResizeEvent would be handled after on_rect_change, and would
reset the size of main_widget to what it was before the resize.
Bonus: Fewer unnecessary events.
The new ImageDecoder service (available for members of "image" via
/tmp/portal/image) allows you to decode images in a separate process.
This will allow programs to confidently load untrusted images, since
the bulk of the security concerns are sandboxed to a separate process.
The only API right now is a synchronous IPC DecodeImage() call that
takes a shbuf with encoded image data and returns a shared buffer and
metadata for the decoded image.
It also comes with a very simple library for interfacing with the
ImageDecoder service: LibImageDecoderClient. The name is a bit of a
mouthful but I guess we can rename it later if we think of something
nicer to call it.
There's obviously a bit of overhead to spawning a separate process
for every image decode, so this is mostly only appropriate for
untrusted images (e.g. stuff downloaded from the web) and not necessary
for trusted local images (e.g. stuff in /res).
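From the caller's side the shape is roughly this (a sketch only; these
are not the actual LibImageDecoderClient names):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical shape of a decode result: pixels plus the metadata needed to
// interpret them. In the real IPC, both directions use shared buffers.
struct DecodedImage {
    int width { 0 };
    int height { 0 };
    std::vector<uint32_t> pixels;
};

// Stand-in for the synchronous DecodeImage() call: encoded bytes in, decoded
// image out, or nothing if the (possibly hostile) data couldn't be decoded.
std::optional<DecodedImage> decode_image_out_of_process(std::vector<uint8_t> const& encoded)
{
    if (encoded.empty())
        return std::nullopt;
    // ...ship the encoded bytes to the ImageDecoder process and wait for the reply...
    return DecodedImage {};
}

int main()
{
    std::vector<uint8_t> untrusted_bytes; // e.g. something downloaded from the web
    if (auto image = decode_image_out_of_process(untrusted_bytes))
        return image->width; // decoding itself happened in a sandboxed process
    return 1;
}
```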
Port the WebContent service to the new MultiInstance mechanism that
Sergey added. This means that every new WebContentView gets its very
own segregated WebContent process.
"Paint" matches what we call this in the rest of the system. Let's not
confuse things by mixing paint/render/draw all the time. I'm guilty of
this in more places...
Also rename RenderingContext => PaintContext.
CSS defines a very specific paint order. This patch starts steering us
towards respecting that by introducing the PaintPhase enum with values:
- Background
- Border
- Foreground
- Overlay (internal overlays used by the inspector)
Basically, to get the right visual result, we have to render the page
multiple times, going one phase at a time.
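In code, the multi-pass idea looks roughly like this (only the
PaintPhase values come from the list above; the surrounding types are
illustrative):

```cpp
#include <vector>

// The phases from the list above, in the order they must be painted.
enum class PaintPhase {
    Background,
    Border,
    Foreground,
    Overlay,
};

struct PaintContext { /* would wrap a Gfx::Painter, scroll offset, etc. */ };

struct Box {
    virtual ~Box() = default;
    // Each box only draws the parts relevant to the current phase.
    virtual void paint(PaintContext&, PaintPhase) {}
};

// To get the right stacking, walk the same boxes once per phase.
void paint_all_phases(PaintContext& context, std::vector<Box*> const& boxes)
{
    for (auto phase : { PaintPhase::Background, PaintPhase::Border,
             PaintPhase::Foreground, PaintPhase::Overlay }) {
        for (auto* box : boxes)
            box->paint(context, phase);
    }
}
```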
After layout, we may want to repaint the page, so we now listen for the
PageClient::page_did_invalidate() notification and use it to drive a
client-side repaint.
Note that an invalidation request from LibWeb makes a full roundtrip
to the WebContent client and back since the client drives painting.
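A rough sketch of that hook (everything except the
page_did_invalidate() name is illustrative):

```cpp
#include <functional>

struct Rect { int x {}, y {}, width {}, height {}; };

// Hypothetical shape of the client hook: LibWeb calls this when something
// needs repainting after layout.
struct PageClient {
    virtual ~PageClient() = default;
    virtual void page_did_invalidate(Rect const& dirty) = 0;
};

// In the WebContent process, the hook just forwards the request to the
// client over IPC; nothing is painted until the client asks for a frame.
struct PageHostSketch : PageClient {
    std::function<void(Rect const&)> notify_client; // stands in for the IPC post

    void page_did_invalidate(Rect const& dirty) override
    {
        // Full roundtrip: WebContent says "content changed", the client
        // responds with a Paint request, and only then do we paint.
        if (notify_client)
            notify_client(dirty);
    }
};
```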
The "WebContent" service provides a very restricted instance of LibWeb
running as an unprivileged user account. This will be used to implement
process separation in Browser, among other things.
This first cut of the service only spawns a single WebContent process
when someone connects to /tmp/portal/webcontent. We will soon switch
this over to spawning a new process for each connection.
Since this feature is very immature, we'll be bringing it up inside of
Demos/WebView as a separate demo program. Eventually this will become
a reusable widget that anyone can embed and easily get out-of-process
web content in their GUI.
This is pretty, pretty cool! :^)
We were getting a little overly memey in some places, so let's scale
things back to business-casual.
Informal language is fine in comments, commits and debug logs,
but let's keep the runtime nice and presentable. :^)
Clients now receive HTTP status codes like 200, 404, etc.
Note that a 404 with content is still considered a "successful"
download from ProtocolServer's perspective. It's up to the client
to interpret the status code.
I'm not sure if this is the best API, but it'll work for now.
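On the client side, the split ends up looking something like this
(hypothetical types; the point is that "the transfer finished" and
"the server said yes" are separate questions):

```cpp
#include <cstdint>
#include <optional>

// A finished download as the client sees it: whether the transfer itself
// worked is separate from what the server actually answered.
struct FinishedDownload {
    bool transfer_succeeded { false };
    std::optional<uint32_t> http_status;
};

bool looks_like_a_usable_response(FinishedDownload const& download)
{
    if (!download.transfer_succeeded)
        return false; // network-level failure, no status code to speak of
    // A 404 with content still lands here as a "successful" download;
    // deciding what to do with it is the client's job.
    return download.http_status.has_value()
        && *download.http_status >= 200 && *download.http_status < 300;
}
```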
Instead of always running the responsiveness timer for IPC clients,
we now only start it after sending a message. This avoids waking up
otherwise idle clients to do ping/pong busywork.
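A minimal sketch of that change (illustrative, not the real IPC
classes): the deadline only exists between "we sent something" and
"the client answered".

```cpp
#include <chrono>
#include <optional>

class ResponsivenessSketch {
public:
    void did_send_message()
    {
        // Arm the deadline only when there's something to be responsive about.
        if (!m_deadline.has_value())
            m_deadline = std::chrono::steady_clock::now() + std::chrono::seconds(3); // placeholder timeout
    }

    void did_receive_message()
    {
        m_deadline.reset(); // nothing to watch until we send again
    }

    bool is_overdue() const
    {
        return m_deadline.has_value() && std::chrono::steady_clock::now() > *m_deadline;
    }

private:
    std::optional<std::chrono::steady_clock::time_point> m_deadline;
};
```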
- Parsing invalid JSON no longer asserts
Instead of asserting when coming across malformed JSON,
JsonParser::parse now returns an Optional<JsonValue> (see the sketch
after this list).
- Disallow trailing commas in JSON objects and arrays
- No longer parse 'undefined', as that is a purely JS thing
- No longer allow non-whitespace after anything consumed by the initial
parse() call. Examples of things that were valid and no longer are:
- undefineddfz
- {"foo": 1}abcd
- [1,2,3]4
- JsonObject.for_each_member now iterates in original insertion order
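From the caller's perspective, the new failure mode looks like this (a
sketch with stand-in types, mirroring the Optional<JsonValue> return
described above):

```cpp
#include <optional>
#include <string>

struct JsonValue {}; // stand-in for AK::JsonValue

// The caller-facing shape after this change: malformed input (or trailing
// junk like "[1,2,3]4") produces an empty Optional instead of an assert.
std::optional<JsonValue> parse_json(std::string const& input)
{
    // ...the real recursive-descent parse goes here, rejecting trailing
    // commas, 'undefined', and any non-whitespace after the value...
    if (input.empty())
        return std::nullopt;
    return JsonValue {};
}

void use_untrusted_json(std::string const& text)
{
    auto json = parse_json(text);
    if (!json.has_value())
        return; // handle the error; nothing asserted, nothing crashed
    // ...work with *json...
}
```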
IPC::ClientConnection now tracks the time since the last time we got
a message from the client and calls a virtual function on itself after
3 seconds: may_have_become_unresponsive().
Subclasses of ClientConnection can then react to this if they like.
We use this mechanism in WindowServer to send out a friendly Ping
message to the client. If it doesn't Pong within 1 second, we mark
the client as "unresponsive" and recompose all of its windows with
a darkened appearance and an amended title until it Pongs us.
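The two timeouts interact roughly like this (a self-contained sketch,
not the actual IPC::ClientConnection / WindowServer code):

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Sketch of the mechanism above: 3 seconds of silence triggers the virtual
// hook (WindowServer sends a Ping there), and a missing answer within
// 1 more second marks the client as unresponsive.
class ClientConnectionSketch {
public:
    virtual ~ClientConnectionSketch() = default;

    void did_receive_any_message()
    {
        m_last_message_time = Clock::now();
        m_waiting_for_pong = false;
        m_unresponsive = false; // a Pong (or any message) clears the state
    }

    void on_tick()
    {
        auto now = Clock::now();
        if (!m_waiting_for_pong && now - m_last_message_time > std::chrono::seconds(3)) {
            may_have_become_unresponsive(); // subclasses react here, e.g. send Ping
            m_waiting_for_pong = true;
            m_ping_time = now;
        } else if (m_waiting_for_pong && now - m_ping_time > std::chrono::seconds(1)) {
            m_unresponsive = true; // recompose windows darkened, amend titles
        }
    }

    virtual void may_have_become_unresponsive() {}
    bool is_unresponsive() const { return m_unresponsive; }

private:
    Clock::time_point m_last_message_time { Clock::now() };
    Clock::time_point m_ping_time {};
    bool m_waiting_for_pong { false };
    bool m_unresponsive { false };
};
```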
This is a little on the aggressive side and we should figure out a way
to wake up less often. Perhaps this could only be done to windows the
user is currently interacting with, for example.
Anyways, this is pretty cool! :^)
You can now ask SystemServer to not only listen for connections on the socket,
but to actually accept them, and to spawn an instance of the service for each
client connection. In this case, it's the accepted socket, not the
listening one, that the service processes will receive using socket
takeover.
This mode obviously requires the service to be a multi-instance service.
For this kind of service, there's no single PID of a running instance;
there may be multiple instances, or none, running at any time.
No keepalive functionality is available in this mode, since "alive" doesn't
make sense for multi-instance services.
At the moment, there's no way to actually create multiple instances of
a service; this is going to be added in the next commit.
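In spirit, the accept-and-spawn mode works like the POSIX sketch below
(not the actual SystemServer code; the environment variable name used
for socket takeover here is made up for illustration):

```cpp
#include <cstdlib>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical sketch of the "accept for the service" mode: the supervisor
// owns the listening socket, accepts each client itself, and spawns one
// service instance per connection, handing over the *accepted* socket.
void run_accepting_multi_instance_service(int listening_fd, char const* service_executable)
{
    for (;;) {
        int accepted_fd = accept(listening_fd, nullptr, nullptr);
        if (accepted_fd < 0)
            continue;

        if (fork() == 0) {
            // Child: pass the accepted fd to the service via socket takeover.
            // (The real mechanism is an environment variable consumed on the
            // service side; the variable name below is illustrative.)
            setenv("SOCKET_TAKEOVER", std::to_string(accepted_fd).c_str(), 1);
            execl(service_executable, service_executable, static_cast<char*>(nullptr));
            _exit(1); // exec failed
        }

        // Parent: the accepted socket now belongs to the child instance.
        close(accepted_fd);
    }
}
```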