
Images

Size Considerations

Most requests to an image server will call for cropped tiles and/or downscaled derivatives of a source image. Both of these return less data than is contained in the entire image. A basic image reader, disregarding efficiency, will try to read the entire source image into memory before the cropping and scaling operations are carried out. This works fine for small images (up to a few thousand pixels square), because they can be read quickly and won't consume much memory. As image size increases, though, this approach scales poorly. By the time source images reach into the hundreds of megapixels, it places a huge burden on the server and delivers a slow, unsatisfying experience to clients.

It would be much better if the image reader could read only the requested region, and even employ subsampling to read only the pixels within that region that are needed to satisfy the requested scale factor. This strategy would require a reader that is written to support it, as well as an image format capable of facilitating it. Two such formats—JPEG2000 and multi-resolution tiled TIFF—are detailed later in this section.
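As a rough illustration of this strategy, the following sketch uses the standard Java ImageIO API (the same API used by some of the processors described below) to read only a cropped region of a source image, subsampled by a factor of 4. The file name, region, and subsampling factor are arbitrary examples, and a suitable ImageIO reader plugin for the source format must be available.

import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class RegionReadExample {
    public static void main(String[] args) throws Exception {
        File file = new File("source.tif"); // hypothetical large source image
        try (ImageInputStream iis = ImageIO.createImageInputStream(file)) {
            // Use the first registered reader that claims to understand the file.
            ImageReader reader = ImageIO.getImageReaders(iis).next();
            reader.setInput(iis);

            ImageReadParam param = reader.getDefaultReadParam();
            // Read only a 2048x2048 region with its origin at (4096, 4096)...
            param.setSourceRegion(new Rectangle(4096, 4096, 2048, 2048));
            // ...and only every 4th pixel in each dimension within it.
            param.setSourceSubsampling(4, 4, 0, 0);

            // Whether this actually avoids decoding the whole image depends on
            // the reader plugin and on the source format.
            BufferedImage region = reader.read(0, param);
            System.out.println(region.getWidth() + "x" + region.getHeight());
            reader.dispose();
        }
    }
}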

Cantaloupe tries to do the best it can with whatever source formats it is asked to serve. However, some processor/format combinations will perform better than others.

Source Formats

JPEG

Most processors that work with image source formats support JPEG, with (the author assumes) roughly similar performance.

Java2dProcessor and JaiProcessor use the default ImageIO JPEG reader to read JPEGs. The performance of this reader is known to vary greatly from JRE to JRE, with newer versions performing better.
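To check which JPEG ImageReader implementations a particular JRE provides, something like the following minimal sketch can be used (the class name is arbitrary):

import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;

public class ListJpegReaders {
    public static void main(String[] args) {
        // Lists every registered JPEG reader; ImageIO will normally use the
        // first one returned.
        Iterator<ImageReader> it = ImageIO.getImageReadersByFormatName("JPEG");
        while (it.hasNext()) {
            System.out.println(it.next().getClass().getName());
        }
    }
}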

JPEG2000

JPEG2000 uses advanced compression techniques to enable fast reduced-scale and region-of-interest decoding. With a performant decoder, it is well-suited for use with very large source images.

KakaduProcessor is the most efficient processor for this format, and it performs very well, even with huge images. Unfortunately, Kakadu is not free.

OpenJpegProcessor uses the OpenJPEG decoder, which is generally considered, as of this writing, to be the fastest open-source JPEG2000 decoder. (ImageMagickProcessor's JPEG2000 delegate, if installed, will also use the OpenJPEG library, but less efficiently as it won't use its region-extraction or level-reduction features.)
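For a sense of what region- and level-restricted decoding looks like, OpenJPEG's opj_decompress tool accepts a decode region and a reduce factor on the command line. The file names and coordinates below are hypothetical, this is not necessarily the exact invocation OpenJpegProcessor uses, and TIFF output assumes OpenJPEG was built with libtiff support:

$ opj_decompress -i source.jp2 -o region.tif -d 4096,4096,6144,6144 -r 2

Here -d limits decoding to the rectangle with corners at (4096,4096) and (6144,6144), and -r 2 discards the two highest resolution levels, producing output at roughly quarter scale.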

GraphicsMagickProcessor can read and write JPEG2000 using JasPer, if the necessary plugin is installed. This will probably not be fast enough to be usable for most purposes.

TIFF

TIFF is a common format that most processors can read. However, there are some criteria that source images must meet in order to be delivered with maximum efficiency.

Strip-Based vs. Tile-Based

The Adobe TIFF 6.0 specification permits arrangements of image data in either strips or tiles. Most TIFF encoders produce strip-based TIFFs unless told to do otherwise, and these become slower and slower to read as they grow larger. High-resolution TIFFs should be tile-based in order to permit efficient region extraction. An easy way to check is with the tiffdump utility:

$ tiffdump image.tif

For strip-based TIFFs, this will print out some information including StripOffsets, StripByteCounts, and so on. For tile-based TIFFs, it will print TileOffsets, TileByteCounts, and so on, instead.
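A strip-based TIFF can be rewritten as a tiled one with libtiff's tiffcp utility, among other tools (the file names and tile size here are only examples):

$ tiffcp -t -w 256 -l 256 striped.tif tiled.tif

-t requests tiled output; -w and -l set the tile width and length in pixels (both must be multiples of 16).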

Multi-Resolution (Pyramidal) TIFF

Multi-resolution TIFF is a special type of TIFF file that can be read more efficiently at reduced scales. In addition to the main image, a multi-resolution TIFF file will contain a sequence of progressively half-sized sub-images: for example, a 10000×10000 pixel main image would include derivatives of 5000×5000 pixels, 2500×2500 pixels, 1250×1250 pixels, and so on, all embedded in the same file.

Each of the levels in a multi-resolution TIFF file can be striped or tiled, just as in a mono-resolution file. Tiled and pyramidal encodings are complementary: the former improves efficiency with reduced regions at large scales, and the latter improves efficiency at reduced scales. For efficient deep zooming, TIFF images need to be pyramidal, and each level of the pyramid should be tiled.
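One way to produce such a file is with libvips' tiffsave command. The following is a sketch; the file names, tile size, and compression are examples, and a reasonably recent libvips is assumed:

$ vips tiffsave source.tif pyramid.tif --tile --tile-width 256 --tile-height 256 --pyramid --compression jpeg

ImageMagick can produce a similar result with its ptif: (pyramidal TIFF) output format, e.g. convert source.tif -define tiff:tile-geometry=256x256 -compress jpeg ptif:pyramid.tif.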

BigTIFF

Ordinary TIFF files are limited to 4 GB in size. BigTIFF uses a different data layout that enables files to grow far beyond this limit. All processors that understand TIFF also understand BigTIFF.
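Some encoders need to be told to write BigTIFF explicitly; with vips tiffsave, for example, this is (assuming a reasonably recent libvips) the --bigtiff flag:

$ vips tiffsave source.tif pyramid.tif --tile --pyramid --bigtiff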

Processor Considerations

To reiterate: most processors can "read the TIFF format," but not all can read it efficiently. Currently, Java2dProcessor and JaiProcessor both support multi-resolution TIFF, which is to say that they actually do read the embedded sub-images and choose the smallest one that can fulfill the request. Additionally, both exploit tiled sub-images. JaiProcessor, however, is able to use the JAI processing pipeline to do this more efficiently, so it is currently the best-performing processor for suitably-encoded high-resolution TIFF images.
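To illustrate what choosing the smallest sufficient sub-image means in ImageIO terms, here is a minimal sketch (not Cantaloupe's actual implementation) that picks a pyramid level for a requested scale factor. It assumes a TIFF-capable ImageReader is registered and that the sub-images are ordered from largest to smallest, as is typical of pyramidal TIFFs:

import java.io.File;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class PyramidLevelExample {
    public static void main(String[] args) throws Exception {
        double requestedScale = 0.2; // e.g. the client wants a 1/5-scale derivative

        File file = new File("pyramid.tif"); // hypothetical pyramidal TIFF
        try (ImageInputStream iis = ImageIO.createImageInputStream(file)) {
            ImageReader reader = ImageIO.getImageReaders(iis).next();
            reader.setInput(iis);

            int fullWidth = reader.getWidth(0);
            int numImages = reader.getNumImages(true);

            // Start with the full-resolution image, then walk down the pyramid
            // to the smallest sub-image that is still at least as large as the
            // requested scale requires.
            int bestIndex = 0;
            for (int i = 1; i < numImages; i++) {
                double levelScale = (double) reader.getWidth(i) / fullWidth;
                if (levelScale >= requestedScale) {
                    bestIndex = i;
                } else {
                    break;
                }
            }
            System.out.println("Reading sub-image " + bestIndex);
            reader.dispose();
        }
    }
}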