A typical request to an image server calls for cropping and/or downscaling operations to be applied to a source image. Both will result in derivative images containing fewer pixels than the source image. A basic image reader, disregarding efficiency, will try to read the entire source image into memory before the cropping and scaling operations are carried out. This works fine for small images, because they can be read quickly and won't consume much memory. But as pixel count increases, read time and memory consumption also increase, an increasing burden is placed on the server, and requests take longer to fulfill.
It would be better if the image reader could read only the requested region, and even employ subsampling to read only the pixels within it that are needed to satisfy the requested scale factor. Whether this is possible depends on the source format, the decoder, and how the image data is arranged within the file.
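As a concrete sketch of region-limited, subsampled reading, Java's standard Image I/O API exposes both capabilities through ImageReadParam. This minimal, self-contained example uses an in-memory PNG; it is illustrative only and is not the application's actual reader. Note that how much decoding work the plugin really avoids depends on the format and reader implementation.

```java
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class RegionRead {
    public static void main(String[] args) throws Exception {
        // Build a 1000x800 source image in memory so the example is self-contained.
        BufferedImage source = new BufferedImage(1000, 800, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(source, "png", baos);

        ImageInputStream iis = ImageIO.createImageInputStream(
                new ByteArrayInputStream(baos.toByteArray()));
        ImageReader reader = ImageIO.getImageReaders(iis).next();
        reader.setInput(iis);

        ImageReadParam param = reader.getDefaultReadParam();
        // Read only a 400x400 region starting at (100, 100)...
        param.setSourceRegion(new Rectangle(100, 100, 400, 400));
        // ...and keep only every 2nd pixel in each dimension (subsampling),
        // which is enough to satisfy a 1/2-scale request.
        param.setSourceSubsampling(2, 2, 0, 0);

        BufferedImage result = reader.read(0, param);
        System.out.println(result.getWidth() + "x" + result.getHeight()); // 200x200
        reader.dispose();
    }
}
```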
The application tries to do the best it can with whatever source formats it is asked to serve, and from whatever data source. However, some image formats are inherently better suited for large sizes and some processor/format combinations will perform better than others.
Every source image served by the application is considered to have a unique identifier, which appears in endpoint URIs, and is used throughout the application to refer to the image.
An identifier may be the same as, or may map to, a filename, pathname, object key, or some other identifier string in the underlying storage. The Getting Started section describes a simple setup in which URI identifiers map to pathname fragments on a filesystem, but this can make for identifiers that are ugly, unstable, and/or insecure. See Sources for information on your source of choice and setting it up to meet a particular use case.
URI-illegal characters in identifiers must be encoded. For example, an image with an identifier of a6/b3/c4.jp2 would need to appear in a URI as a6%2Fb3%2Fc4.jp2. When the application is behind a reverse proxy that cannot pass through encoded slashes (%2F) without decoding them, the slash_substitute configuration key can be used to specify a different character or character sequence to treat as a slash.
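For illustration, a Java client could percent-encode the identifier from the example above like this. (This is a generic encoding sketch, not part of the application; note that java.net.URLEncoder is a form-encoder, hence the extra replacement to turn `+` back into `%20` for identifiers containing spaces.)

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class IdentifierEncoding {
    public static void main(String[] args) {
        String identifier = "a6/b3/c4.jp2";
        // URLEncoder encodes '/' as %2F, but encodes spaces as '+',
        // so convert those to the URI-style %20 afterwards.
        String encoded = URLEncoder.encode(identifier, StandardCharsets.UTF_8)
                .replace("+", "%20");
        System.out.println(encoded); // a6%2Fb3%2Fc4.jp2
    }
}
```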
Most processors can read images that have more than 8 bits per sample. However, as most web clients can't display more than 8 bits per sample, all output is limited to 8 bits.
When set to true, the processor.normalize configuration option normalizes the pixel values to utilize the full dynamic range of the output image. This is useful when working with 16-bit source images (for example) that do not use a full 16 bits of dynamic range and would appear overly dark when scaled down to 8 bits.
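The effect can be illustrated with a simple min-max stretch. This is a sketch of the general technique only, not the application's exact algorithm:

```java
public class Normalize {
    /**
     * Min-max stretch an array of 16-bit samples down to 8 bits, using the
     * samples' actual dynamic range rather than the full 0-65535 range.
     */
    static int[] normalizeTo8Bit(int[] samples16) {
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        for (int s : samples16) {
            if (s < min) min = s;
            if (s > max) max = s;
        }
        int range = Math.max(1, max - min);
        int[] out = new int[samples16.length];
        for (int i = 0; i < samples16.length; i++) {
            out[i] = (samples16[i] - min) * 255 / range;
        }
        return out;
    }

    public static void main(String[] args) {
        // A 16-bit image that only uses values 1000-5000 would map to
        // just 3-19 with a naive ">> 8" conversion, i.e. nearly black.
        int[] samples = {1000, 3000, 5000};
        int[] normalized = normalizeTo8Bit(samples);
        System.out.println(java.util.Arrays.toString(normalized)); // [0, 127, 255]
    }
}
```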
JPEG is not an ideal format for high-resolution image delivery because most readers have to read the entire image contents into memory before they can decode any part of it.
JPEG2000 uses advanced compression techniques to enable fast reduced-scale and region-of-interest decoding. With a performant decoder, it is well-suited for use with very large source images.
ImageMagickProcessor's JPEG2000 delegate, if installed, will also use the OpenJPEG library, but less efficiently as it will decompress the whole image, not just the region of interest, and won't be able to discard decomposition levels.
GraphicsMagickProcessor can read and write JPEG2000 using JasPer, if the necessary plugin is installed, with the same caveats as ImageMagickProcessor.
Most processors that support image sources support this format, some perhaps more efficiently than others, though this is untested.
In general, this is not an ideal format for high-resolution image delivery, for the same reason as JPEG.
TIFF is a common format that most processors can read. However, there are some criteria that source images must meet in order to be delivered efficiently.
The Adobe TIFF 6.0 specification permits arrangements of image data in either strips or tiles. Strips may consist of one or more whole rows of pixels, but tiles are typically square. By default, most TIFF encoders produce strip-based TIFFs, which are increasingly slow to read as their size increases. High-resolution TIFFs must be tile-based in order to permit efficient region extraction.
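As an illustration of the strip/tile distinction, the TIFF writer bundled with the JDK (Java 9+) can be asked to produce a tiled rather than stripped layout via the standard ImageWriteParam tiling controls. This is a sketch; production tiled/pyramidal TIFFs are usually created with dedicated imaging tools.

```java
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class TiledTiff {
    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(1024, 1024, BufferedImage.TYPE_INT_RGB);

        ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        // Request a tiled layout instead of the default strips.
        param.setTilingMode(ImageWriteParam.MODE_EXPLICIT);
        param.setTiling(128, 128, 0, 0); // 128x128 tiles, no grid offset

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(baos)) {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(img, null, null), param);
        }
        writer.dispose();

        // Verify: the reader should now see the image as tiled.
        ImageReader reader = ImageIO.getImageReadersByFormatName("TIFF").next();
        reader.setInput(ImageIO.createImageInputStream(
                new ByteArrayInputStream(baos.toByteArray())));
        boolean tiled = reader.isImageTiled(0);
        int tileWidth = reader.getTileWidth(0);
        System.out.println(tiled + " " + tileWidth);
        reader.dispose();
    }
}
```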
When using an Image I/O-based processor, information about TIFF source images is logged at debug level. These messages will tell you whether a TIFF is strip-based or tile-based. For example, a request for a tiled TIFF will generate a log message like:
DEBUG e.i.l.c.p.c.TIFFImageReader - Acquiring region 0,0/500x500 from 8848x6928 image (128x128 tile size)
Multi-resolution TIFF is a subtype of TIFF that can be read more efficiently at reduced scales. In addition to the main image, a multi-resolution TIFF file will contain a sequence of progressively half-sized sub-images: for example, a 10000×10000 pixel main image would include versions of 5000×5000 pixels, 2500×2500 pixels, 1250×1250 pixels, and so on, in the same file.
Each of the images in a multi-resolution TIFF file can be striped or tiled, just as in a mono-resolution file. (They can even be encoded in other, non-TIFF formats.) Tiled and pyramidal encodings are complementary: the former improves efficiency with reduced regions at large scales, and the latter improves efficiency at reduced scales. For efficient deep zooming, TIFF images need to be pyramidal, and each level of the pyramid must be tiled.
Most processors can "read the TIFF format," but not all can read it efficiently. Currently, Java2dProcessor and JaiProcessor both support multi-resolution TIFF, which is to say that they read the embedded sub-images and choose the smallest one that can fulfill the request. Additionally, both exploit tiled sub-images. JaiProcessor, however, is able to use the JAI processing pipeline to do this more efficiently, so it is currently the best-performing processor for suitably-encoded high-resolution TIFF images.
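The selection logic described here—reading the smallest sub-image that can still fulfill the request—can be sketched as follows. This is illustrative only, not the processors' actual code:

```java
public class PyramidLevel {
    /**
     * Choose the pyramid level whose image is the smallest one still large
     * enough for the requested scale. Level 0 is the full-size image and
     * each successive level is half the size of the previous one.
     */
    static int chooseLevel(double requestedScale, int numLevels) {
        int level = 0;
        double levelScale = 1.0;
        // Descend while the next half-sized level would still satisfy the request.
        while (level + 1 < numLevels && levelScale / 2 >= requestedScale) {
            levelScale /= 2;
            level++;
        }
        return level;
    }

    public static void main(String[] args) {
        // A 4-level pyramid: e.g. 10000, 5000, 2500, and 1250 pixels wide.
        System.out.println(chooseLevel(1.0, 4));  // 0 (full size)
        System.out.println(chooseLevel(0.5, 4));  // 1 (the 5000 px level is exactly enough)
        System.out.println(chooseLevel(0.2, 4));  // 2 (2500 px covers the 2000 px needed)
        System.out.println(chooseLevel(0.05, 4)); // 3 (smallest level available)
    }
}
```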
Image files may have embedded color profiles, which map the color space in which an image was produced to its internal color data. This enables viewers to reproduce image colors accurately, as they were seen by the producer. By embedding a profile, the producer need not know anything about the displays on which an image will be viewed, and need not destructively modify the color values within the image data itself.
Most processors support embedded color profiles and will either automatically copy them into derivative images, or automatically adjust the output pixels; see the table of processor-supported features.
There is typically no need to embed a profile into profile-less images, as viewers will tend to automatically assume that these map to an sRGB space, and apply the conversion themselves. There is therefore no provision for embedding profiles into profile-less images.
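The "adjust the output pixels" strategy amounts to converting the image into the sRGB space before encoding. In Java this kind of conversion can be sketched with the standard ColorConvertOp class (an illustrative example, not the application's implementation; here a linear-RGB space stands in for an arbitrary source profile):

```java
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;

public class ToSRGB {
    public static void main(String[] args) {
        BufferedImage source = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);

        // Interpret the pixels as linear RGB (standing in for "some other
        // color space") and convert them into sRGB for output.
        ColorSpace linear = ColorSpace.getInstance(ColorSpace.CS_LINEAR_RGB);
        ColorSpace srgb = ColorSpace.getInstance(ColorSpace.CS_sRGB);
        ColorConvertOp op = new ColorConvertOp(linear, srgb, null);
        BufferedImage converted = op.filter(source, null);

        System.out.println(converted.getWidth() + "x" + converted.getHeight());
    }
}
```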
Many image file formats are capable of storing supplementary technical and/or descriptive metadata alongside the actual image data. Formats may be able to store standard metadata formats like EXIF, IPTC IIM, and XMP, and they may also define their own metadata formats. More than one such format may be present even within the same file.
When an image request is received—unless it calls for the full-sized unmodified source image—the image server will have to dynamically create and return a derivative image. As this is a whole new image, distinct from the source image, populating it with metadata would require an additional step.
When processor.metadata.preserve is set to true in the configuration file, an effort will be made to copy metadata from source images into derivative images. This doesn't work with all processors; see the Supported Features table for a breakdown. It also does not generally work across formats.
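The "additional step" involved can be sketched with Image I/O: the source metadata has to be read separately and passed through to the writer alongside the derivative pixels. This is a generic illustration (using an in-memory PNG), not the application's implementation:

```java
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.ImageWriter;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.stream.ImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class CopyMetadata {
    public static void main(String[] args) throws Exception {
        // An in-memory PNG standing in for a real source image.
        BufferedImage src = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream srcBytes = new ByteArrayOutputStream();
        ImageIO.write(src, "png", srcBytes);

        // 1. Read both the pixels and the metadata from the source.
        ImageReader reader = ImageIO.getImageReadersByFormatName("png").next();
        reader.setInput(ImageIO.createImageInputStream(
                new ByteArrayInputStream(srcBytes.toByteArray())));
        BufferedImage pixels = reader.read(0);
        IIOMetadata metadata = reader.getImageMetadata(0);

        // 2. Derive a new image (here, trivially, the same pixels).
        BufferedImage derivative = pixels;

        // 3. Write the derivative, explicitly passing the source metadata through.
        ImageWriter writer = ImageIO.getImageWritersByFormatName("png").next();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(derivative, null, metadata), null);
        }
        writer.dispose();
        reader.dispose();
        System.out.println(out.size() > 0);
    }
}
```

Because the metadata object is format-specific, this pass-through only works when source and derivative share a format, which matches the cross-format caveat above.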