I created this by typing "sup" into TextEdit.app on macOS 13.4,
hitting Cmd-P to bring up the print dialog, clicking the PDF button
at the bottom, changing Title and Author to "sup", clicking
"Security Options…", and checking "Require password to open document"
(with password "sup").
This file tests several things:
- It has a compressed stream as first object. This used to make the
linearization dict detection logic assert.
- It uses AES for encryption, using version 4 of the encryption
dict. This used to not be implemented.
PDF files can be linearized. In that case, they start with a
"linearization dict" that stores the key `/Linearized` and the value
`1`. To check if a file is linearized, we just read the first dict and
then check if it has that key.
If the first object of a PDF was a stream with a compression filter
and the input PDF was encrypted and not linearized, then trying to
decode the linearization dict could crash: the stream contents are
encrypted, decryption state isn't initialized yet, and we'd try to
decompress the stream data before decrypting it.
To prevent this, disable decompression when parsing the first object
to determine if it's a linearization dictionary.
(A linearization dict never stores string values, so decryption
not yet being initialized is not a problem. Integer values aren't
encrypted in encrypted PDF files.)
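A rough sketch of the idea, with hypothetical names (`parse_first_object`,
the `apply_stream_filters` flag, and the `PDFDict` stand-in are
illustrative, not the actual LibPDF API):

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// Stand-in for a parsed PDF dictionary; only integer values matter
// here, since a linearization dict doesn't contain strings or streams.
using PDFDict = std::unordered_map<std::string, int>;

// Hypothetical hook into the parser: reads the first object, and when
// 'apply_stream_filters' is false, leaves any stream data untouched so
// nothing tries to inflate still-encrypted bytes.
std::optional<PDFDict> parse_first_object(bool apply_stream_filters);

bool is_linearized()
{
    // Keep stream filters disabled: decryption state isn't initialized
    // yet, and the check below only needs an (unencrypted) integer.
    auto maybe_dict = parse_first_object(/* apply_stream_filters */ false);
    if (!maybe_dict.has_value())
        return false;
    auto it = maybe_dict->find("/Linearized");
    return it != maybe_dict->end() && it->second == 1;
}
```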
These three lines were added in commits 41d5531, 0f7a651 and 97aca8f all
the way back in June 2020, and went unnoticed until Lucas pointed this
out during my refactoring.
These too should be sorted alphabetically, as evidenced by the fact that
text/markdown was in there twice before this change. :^)
Also broke out tables of suffixes and basenames we consider plaintext,
and sorted those alphabetically as well.
It makes far more sense to sort by the standard mime type strings,
rather than the ad-hoc variable names associated with each enumeration.
This way, the sort looks nicer, and also matches the corresponding
description enumerations in the file utility.
This enum used to store very precise state about the decoding process;
let's simplify that by only including two steps: HeaderDecoder and
BitmapDecoded.
https://www.haiku-os.org/images/bg-page.png has an sRGB chunk with a
size of 0, for example.
Just ignoring the chunk, instead of assuming that the image is sRGB
and has Perceptual rendering intent, matches what libpng does
(cf. `png_handle_sRGB()`), so let's do that too.
(All other chunk handlers are still strict about size.)
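A small sketch of that behavior, using stand-in types rather than the
actual LibGfx decoder state:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative decoder context; the real PNG decoder state differs.
struct DecoderContext {
    bool has_srgb_chunk { false };
    uint8_t rendering_intent { 0 };
};

// An sRGB chunk is defined to be exactly one byte (the rendering
// intent). Like libpng's png_handle_sRGB(), silently skip a chunk with
// a bogus size instead of guessing sRGB/Perceptual or rejecting the
// whole image.
void handle_srgb_chunk(DecoderContext& context, uint8_t const* data, size_t size)
{
    if (size != 1)
        return; // e.g. the size-0 chunk in the image above
    context.has_srgb_chunk = true;
    context.rendering_intent = data[0];
}
```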
Max width shouldn't be tied to min width; commit d33b99d went too far
and made them the same when the table-root had a specified percentage
width.
Fixes #19940.
Previously, it was assumed that only one filtering option, such as
`-u` or `-p`, would be used at a time. With this PR, processes are now
shown if they match any of the specified filters.
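A sketch of the match-any logic with stand-in types (the real utility's
option handling and process fields differ):

```cpp
#include <functional>
#include <vector>

// Minimal stand-in for the fields a filter might look at.
struct Process {
    unsigned pid { 0 };
    unsigned uid { 0 };
};

using ProcessFilter = std::function<bool(Process const&)>;

// Each given flag (e.g. `-u`, `-p`) contributes one predicate; a
// process is shown if it matches any of them.
bool should_show(Process const& process, std::vector<ProcessFilter> const& filters)
{
    if (filters.empty())
        return true; // no filters: fall back to the default selection
    for (auto const& filter : filters) {
        if (filter(process))
            return true;
    }
    return false;
}
```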
Now that we use the HTML image loading algorithm from the spec, we can
implement `complete` correctly.
This (finally) fixes an issue where images were not loading on
https://twinings.co.uk/ :^)
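For reference, a rough transcription of the spec's conditions for the
`complete` getter, with simplified stand-in state (the real
implementation hangs off HTMLImageElement and its image requests):

```cpp
#include <optional>
#include <string>

enum class ImageRequestState {
    Unavailable,
    PartiallyAvailable,
    CompletelyAvailable,
    Broken,
};

// Simplified view of the element state the getter needs.
struct ImageElementState {
    std::optional<std::string> src;    // nullopt when the attribute is omitted
    std::optional<std::string> srcset; // nullopt when the attribute is omitted
    ImageRequestState current_request_state { ImageRequestState::Unavailable };
    bool has_pending_request { false };
};

// 'complete' is true if src/srcset are both omitted, if srcset is
// omitted and src is the empty string, or if the current request is
// completely available or broken while no pending request exists.
bool complete(ImageElementState const& element)
{
    if (!element.src.has_value() && !element.srcset.has_value())
        return true;
    if (!element.srcset.has_value() && element.src.has_value() && element.src->empty())
        return true;
    if (!element.has_pending_request
        && (element.current_request_state == ImageRequestState::CompletelyAvailable
            || element.current_request_state == ImageRequestState::Broken))
        return true;
    return false;
}
```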
When the intersection root is a Document, we use the viewport itself as
the root intersection rectangle. However, we should only use the size of
the viewport and strip away the current scroll offset.
This is important, as intersections are computed using viewport-relative
element rects, so we're already in a coordinate system where (0, 0) is
the top left of the scrolled viewport.
This fixes an issue where IntersectionObservers would fire at entirely
wrong scroll offsets. :^)
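A tiny sketch of the root-rectangle computation for a Document root,
with an illustrative Rect type rather than the LibWeb geometry classes:

```cpp
// Illustrative rect; x/y of the viewport rect carry the scroll offset.
struct Rect {
    float x { 0 };
    float y { 0 };
    float width { 0 };
    float height { 0 };
};

// Element rects used for intersection are already viewport-relative,
// so the root intersection rectangle keeps only the viewport's size
// and is anchored at (0, 0).
Rect root_intersection_rectangle_for_document(Rect const& viewport_rect)
{
    return { 0.0f, 0.0f, viewport_rect.width, viewport_rect.height };
}
```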
Cell::heap() and Cell::vm() needed to access member functions from
HeapBlock, and wanted to be inline, so they were moved to VM.h.
That approach will no longer work with VM.h not being included in every
file (starting from the next commit), so this commit fixes that circular
import issue by introducing secondary base classes to host the
references to Heap and VM, respectively.
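A sketch of the shape of that fix; the class and member names here are
placeholders, not necessarily the ones the commit introduces:

```cpp
// Forward declarations are enough for the base classes below, so this
// header doesn't need to pull in the full Heap/VM definitions (or
// VM.h at all).
class Heap;
class VM;

// Secondary base hosting the Heap reference.
class HeapBase {
public:
    Heap& heap() { return m_heap; }

protected:
    explicit HeapBase(Heap& heap)
        : m_heap(heap)
    {
    }

private:
    Heap& m_heap;
};

// Secondary base hosting the VM reference.
class VMBase {
public:
    VM& vm() { return m_vm; }

protected:
    explicit VMBase(VM& vm)
        : m_vm(vm)
    {
    }

private:
    VM& m_vm;
};
```

Classes that previously needed the full definitions can derive from
these instead, so the inline getters no longer force a circular include.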
This commit ports `libjodycode` to Serenity, which is a helper library
containing shared code for utilities written by Jody Bruchon. This
library was required for porting `jdupes`.
Previously, we started parsing the ELF file again in a completely
different place, and without the partial mapping that we do while
validating.
Instead of doing manual parsing in two places, just capture the
requested stack size right after we validated it.
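A sketch of capturing it in the same pass, assuming the requested stack
size comes from a PT_GNU_STACK program header (stand-in types, not the
actual LibELF/Kernel code):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Illustrative program header; the real code uses the ELF image
// wrappers and validation in LibELF.
struct ProgramHeader {
    uint32_t type { 0 };
    uint64_t memory_size { 0 };
};

constexpr uint32_t PT_GNU_STACK = 0x6474e551;

// While walking the program headers for validation, remember the
// requested stack size so the ELF never has to be re-parsed later.
std::optional<uint64_t> validate_and_capture_stack_size(std::vector<ProgramHeader> const& headers)
{
    std::optional<uint64_t> requested_stack_size;
    for (auto const& header : headers) {
        // ... the existing validation checks go here ...
        if (header.type == PT_GNU_STACK)
            requested_stack_size = header.memory_size;
    }
    return requested_stack_size;
}
```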
Previously, all elements of the vector were moved to the left once for
each element being removed. This patch makes the function move each
element exactly once.
On the same test case as the previous commit, it makes the function
disappear from the profile. These two commits combined reduce the
decompression time by 12%.
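The single-pass idea, sketched with std::vector rather than the actual
AK::Vector code:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Compact the surviving elements in one pass: every kept element is
// moved at most once, instead of shifting the whole tail once per
// removed element.
template<typename T, typename Predicate>
void remove_all_matching(std::vector<T>& values, Predicate should_remove)
{
    size_t write_index = 0;
    for (size_t read_index = 0; read_index < values.size(); ++read_index) {
        if (!should_remove(values[read_index])) {
            if (write_index != read_index)
                values[write_index] = std::move(values[read_index]);
            ++write_index;
        }
    }
    values.resize(write_index); // drop the now-unused tail slots
}
```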
As confusing as it may sound, reusing them is terrible performance-wise.
When profiling the PNG decoder, the result (which is dominated by the
Zlib decompression) shows that the `cleanup_unused_chunks()` function
represented 14.26% of the profile before this patch and only 7.7%
afterward.
On a 6.5 MB PNG image, it reduces the decompression time by more than
5%.
Since the underlying HTML::Window can change, caching property accesses
on WindowProxy is not as simple as remembering the shape. Let's disable
caching here for now. We can come back to it in the future when we have
no low-hanging fruit left. :^)
Fixes an assertion failure on https://twinings.co.uk/
Since we can't rely on shape identity (i.e. its pointer address) for
unique shapes, give them a serial number that increments whenever a
mutation occurs.
Inline caches can then compare this serial number against what they
have seen before.
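A minimal sketch of the serial-number check; class and member names are
illustrative, not the exact LibJS ones:

```cpp
#include <cstdint>

class Shape {
public:
    uint64_t serial_number() const { return m_serial_number; }

    // Called on every mutation of a unique shape (property added,
    // removed, or reconfigured), so stale caches can notice.
    void bump_serial_number() { ++m_serial_number; }

private:
    uint64_t m_serial_number { 0 };
};

struct InlineCacheEntry {
    Shape const* shape { nullptr };
    uint64_t shape_serial_number { 0 };

    // The cached lookup is only reusable if it's the same shape object
    // *and* that shape hasn't been mutated since we cached it.
    bool is_valid_for(Shape const& current_shape) const
    {
        return &current_shape == shape
            && current_shape.serial_number() == shape_serial_number;
    }
};
```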