This flag warns about classes that have `virtual` functions but do not
have a `virtual` destructor.
This patch adds both the flag and missing destructors. The access level
of the destructors was determined by two rules of thumb:
1. A destructor should have a similar or lower access level to that of a
constructor.
2. Having a `private` destructor implicitly deletes the default
constructor, which is probably undesirable for "interface" types
(classes with only virtual functions and no data).
In short, most of the added destructors are `protected`, unless the
compiler complained about access.
This required changing the load_sync API to take a LoadRequest instead
of just a URL. Since HTMLScriptElement was the only (non-test) user of
this API, it didn't seem useful to add an overload of load_sync
instead.
I hereby declare these to be full nouns that we don't split,
neither by space, nor by underscore:
- Breadcrumbbar
- Coolbar
- Menubar
- Progressbar
- Scrollbar
- Statusbar
- Taskbar
- Toolbar
This patch makes everything consistent by replacing every other variant
of these with the proper one. :^)
The previous handling of the name and message properties specifically
was breaking websites that created their own error types and relied on
the Error prototype working correctly - that is, not assuming a
JS::Error this object.
The way it works now, and is supposed to work, is:
- Error.prototype.name and Error.prototype.message just have initial
string values and are no longer getters/setters
- When constructing an error with a message, we create a regular
property on the newly created object, so a lookup of the message
property will either get it from the object directly or go through the
prototype chain
- Internal m_name/m_message properties are no longer needed and removed
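For illustration, a site-defined error type along these lines now
behaves as expected (CustomError is a made-up name; the comments show
what each lookup should produce):

```js
function CustomError(message) {
    this.message = message;
}
CustomError.prototype = Object.create(Error.prototype);
CustomError.prototype.name = "CustomError";

const e = new CustomError("oops");
e.name;     // "CustomError" - plain data property on CustomError.prototype
e.message;  // "oops" - own property of e, no JS::Error internals involved

const plain = new Error("oops");
plain.hasOwnProperty("message");  // true - created as a regular property
Error.prototype.message;          // "" - initial string value, not a getter
```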
This makes printing errors slightly more complicated, as we can no
longer rely on the (safe) internal properties, and cannot trust a
property lookup either - get_without_side_effects() is used to solve
this; it's not perfect, but it's something we can revisit later.
I did some refactoring along the way; there was some really old stuff
in there - accessing vm.call_frame().arguments[0] is not something we
(have to) do anymore :^)
Fixes #6245.
According to the Single UNIX Specification, Version 2, that's where
those macros should be defined. This fixes the libiconv port.
This also fixes some (but not all) build errors for the diffutils and nano ports.
We now leverage the VM's promise rejection tracker callbacks and print a
warning in either of these cases:
- A promise was rejected without any handlers
- A handler was added to an already rejected promise
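A minimal sketch of code that should trigger each warning (the
rejection values are arbitrary):

```js
// Rejected without any handlers.
Promise.reject("oops");

// Handler added to an already rejected promise (this also triggers the
// first warning at the time of rejection).
const p = Promise.reject("oops");
p.catch(() => {});
```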
Almost a year after first working on this, it's finally done: an
implementation of Promises for LibJS! :^)
The core functionality is working and closely following the spec [1].
I mostly took the pseudocode and transformed it into C++ - if you read
and understand it, you will know how the spec implements Promises; and
if you read the spec first, the code will look very familiar.
Implemented functions are:
- Promise() constructor
- Promise.prototype.then()
- Promise.prototype.catch()
- Promise.prototype.finally()
- Promise.resolve()
- Promise.reject()
For the tests I added a new function to test-js's global object,
runQueuedPromiseJobs(), which calls vm.run_queued_promise_jobs().
By design, queued jobs normally only run after the script has been
fully executed, making it impossible to test handlers in individual
test() calls by default [2].
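A sketch of what such a test might look like, assuming test-js's usual
test()/expect() helpers:

```js
test("then() handlers only run once queued promise jobs are processed", () => {
    let result = null;
    Promise.resolve(1).then(value => {
        result = value;
    });

    // The reaction job has only been queued so far.
    expect(result).toBe(null);

    // Drain the promise job queue via the new test-js global.
    runQueuedPromiseJobs();
    expect(result).toBe(1);
});
```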
Subsequent commits include integrations into LibWeb and js(1) -
pretty-printing, running queued promise jobs when necessary.
This has an unusual number of dbgln() statements, all hidden behind the
PROMISE_DEBUG flag - I'm leaving them in for now as they've been very
useful while debugging this; things can get quite complex with so many
asynchronously executed functions.
I've not extensively explored use of these APIs for promise-based
functionality in LibWeb (fetch(), Notification.requestPermission()
etc.), but we'll get there in due time.
[1]: https://tc39.es/ecma262/#sec-promise-objects
[2]: https://tc39.es/ecma262/#sec-jobs-and-job-queues
This utility traces the route packets take to a user specified host.
QEMU user networking (the type of networking we currently have set up)
does not support sending "real" raw IPv4 packets, and as such we can't
specify a custom TTL value. As a result, this utility will only work
on real hardware, on QEMU set up with TUN networking (which requires
root), and on other hypervisors that support bridged adapters
(VirtualBox/VMware).
Instead of crashing on a missing input file, looping forever on an
invalid gzip-compressed file, or crashing on permission issues with
the output file, handle all of these gracefully by logging an error
and returning.
and customizable indentation level
An example: cat /proc/net/adapters | jp
Another example: cat /proc/all | jp -i 2 (indentation is set to 2 spaces instead of the default 4)
This parser should be a little bit more modern and a little more
resilient to zip files from other operating systems. As a side
effect, we now also support extracting zip files that use DEFLATE
compression (using our own LibCompress).
If bar fails in 'foo(); bar();', we'd get bar's error and then foo's
return value - that's probably not something anyone expects.
Also make sure to return non-success so the process will exit with 1.
Very incompressible data could sometimes produce no back references,
which would result in no distance Huffman code being created (as it
was not needed), so only VERIFY that the code exists if it is actually
needed for writing the stream.
This completes our tar utility by implementing the -c option
for archive creation using TarOutputStream, optionally compressed
with GzipCompressor via the -z option.