Perceptual Rasterization for
Head-mounted Display Image Synthesis

ACM Trans. Graph. (SIGGRAPH 2019)

Sebastian Friston     Tobias Ritschel     Anthony J. Steed

University College London

The processing pipeline.


We suggest a rasterization pipeline tailored towards the needs of HMDs, where latency and field-of-view requirements pose new challenges beyond those of traditional desktop displays. Instead of image warping for low latency, or using multiple passes for foveation, we show how both can be produced directly in a single perceptual rasterization pass. We do this with per-fragment ray-casting. This is enabled by derivations of tight space-time-fovea pixel bounds, introducing just enough flexibility for the requisite geometric tests, but retaining most of the simplicity and efficiency of the traditional rasterization pipeline. To produce foveated images, we rasterize to an image with spatially varying pixel density. To compensate for latency, we extend the image formation model to directly produce ``rolling'' images where the time at each pixel depends on its display location. Our approach overcomes limitations of warping with respect to disocclusions, object motion and view-dependent shading, as well as geometric aliasing artifacts in other foveated rendering techniques. A set of perceptual user studies demonstrates the efficacy of our approach.


Head-mounted displays have requirements beyond those of typical desktop display-based systems. HMDs must maintain low and predictable latency, yet cover a significant proportion of the user's field of view at as high a resolution as possible. Current graphics pipelines struggle to produce the images required: HMDs are already available that support 16 million pixels. This strains the raw bandwidth of even the most recent GPUs. A second problem is that the traditional graphics pipeline computes images at a single snapshot time. This ignores how the display is driven. Many displays support low persistence by scanning the image illumination. However, this means that some parts of the screen appear at lower latency than others.

We solve both these problems through perceptual rasterization. Perceptual rasterization is an extension to normal OpenGL-style rasterization that supports non-uniform pixel density. We support foveated rendering by rasterizing so that more samples fall in the area of the screen corresponding to the user's fovea. We support rolling displays by rendering each pixel for the time at which it will be displayed.


Current Head-mounted Displays (HMDs) need rendering different from that used for common desktop displays to achieve comfortable viewing.


Virtual Reality (VR) equipment should respond to a user's actions with low latency: performing a task under a delayed response is much harder, leads to fatigue and ultimately to simulator sickness.

Try yourself:

Drag the crosshair onto the target. With latency this is hard; click to toggle latency on or off. Without latency the task is much easier and less tiring. Wearing an HMD, this effect is sickening.

There are many sources of latency; we address a particularly hard one: display latency. An image on a display is shown for a certain period of time, e.g. 16 ms. By the end of this period, the image is outdated: reality has moved on, while the displayed image was frozen for 16 ms.

Try yourself:

Hover to play. The first row shows the rendered image, the second what is shown on the display, and the third the user's perception. The first column shows reality, the second a common display, the third a low-persistence display with a global shutter, and the fourth a rolling display.

A typical HMD display sends a short pulse of light (ca. 1 ms) for a certain display area at a certain time. This update ``rolls'' from left to right: the left is updated at the start and the right at the end of the frame interval. In photography, this is known as the rolling-shutter effect, seen when imaging a quickly moving object, such as a helicopter blade, with a CMOS chip. Such displays minimize blur, but do not by themselves reduce latency. Showing rolling images on a rolling display, however, reduces latency from the display period (16 ms) to roughly the rolling period (ca. 1 ms).

See yourself:

An example of an image taken with a rolling shutter.
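The timing model behind a rolling display can be stated in a few lines. The sketch below assumes an idealized linear left-to-right scan; the function name and parameters are illustrative, not taken from the paper's implementation:

```python
def pixel_display_time(x, width, frame_start, frame_period):
    """Time at which pixel column x lights up on an idealized rolling display.

    The left edge (x = 0) is illuminated at frame_start; the right edge
    is illuminated almost a full frame_period later. A rolling image
    shades each pixel for its own display time instead of one global
    snapshot time.
    """
    return frame_start + (x / width) * frame_period

# A column in the middle of a 1920-wide, 16 ms frame appears 8 ms in:
t_mid = pixel_display_time(960, 1920, 0.0, 0.016)  # 0.008 s
```

With a conventional (non-rolling) image, every column would be rendered for `frame_start`, so the rightmost columns are up to a full `frame_period` stale by the time they are shown.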

Our method allows modern graphics chips (GPUs) to synthesize exactly such rolling images. To this end, we have to extend the straight edges of non-rolling triangles to the curved edges of rolling triangles.

Try yourself:

Move the vertices of the yellow triangle at the start of the display interval, or the blue ones at the end of the interval. Our approach produces a rolling triangle that interpolates geometry, including color, over time.
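The idea of a rolling triangle can be sketched as a coverage test: before the usual edge tests, the triangle is advected to the time at which that pixel column is displayed. This is a minimal 2-D illustration assuming linearly moving vertices and the linear scan model above; it is not the paper's bounded per-fragment ray-casting scheme:

```python
def lerp(a, b, t):
    # Linearly interpolate two 2-D points.
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

def edge(a, b, p):
    # Standard signed edge function: positive if p is left of a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside_rolling_triangle(p, tri_start, tri_end, width):
    """Test whether pixel p is covered by a 'rolling' triangle.

    tri_start / tri_end hold the three vertex positions at the start
    and end of the display interval. Each pixel column x is shown at
    normalized time t = x / width, so the triangle is moved to that
    time first. Because t varies across the screen, the covered region
    is bounded by curves rather than straight edges.
    """
    t = p[0] / width
    v0, v1, v2 = (lerp(s, e, t) for s, e in zip(tri_start, tri_end))
    d0, d1, d2 = edge(v0, v1, p), edge(v1, v2, p), edge(v2, v0, p)
    return (d0 >= 0 and d1 >= 0 and d2 >= 0) or \
           (d0 <= 0 and d1 <= 0 and d2 <= 0)
```

Attribute interpolation (e.g. color) follows the same pattern: interpolate per-vertex attributes to time t before the usual barycentric interpolation.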


Different from a desktop, an HMD covers the entire visual field, requiring many pixels to be computed. With limited compute power, there might not be enough time to compute all pixels at the same quality, and visual fidelity is reduced to meet the compute budget. Instead, we focus computation on those pixels that are perceived in detail: those falling onto the fovea, an area of the human retina with a very high density of receptors. We take inspiration from the cortical magnification theory in neuroscience: areas perceived with high fidelity are simply represented larger in the cortex. The difficulty is that a common GPU assumes a uniform rectangular grid of pixels.

Try yourself:

Move the mouse over the image on the right to see the cortical representation.

We devised a method to synthesize images directly in cortical space: important areas are larger and cover more pixels, while less relevant areas are smaller, with fewer pixels.

Try yourself: Move the vertices of the triangle on the right to see the cortical image.
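A common way to model cortical magnification is a log-polar mapping around the gaze point: eccentricity is compressed logarithmically, so the fovea occupies proportionally more of the output image. This sketch assumes such a mapping; the parameter `e0` and the exact formula are illustrative, and the paper's actual cortical-space parameterization may differ:

```python
import math

def to_cortical(x, y, fovea, e0=1.0):
    """Map an image point to log-polar 'cortical' coordinates (u, theta).

    u grows logarithmically with distance from the fovea, so a fixed
    cortical area corresponds to a tiny retinal area near the fovea and
    a large one in the periphery. e0 is an assumed foveal radius.
    """
    dx, dy = x - fovea[0], y - fovea[1]
    r = math.hypot(dx, dy)
    return math.log(1.0 + r / e0), math.atan2(dy, dx)
```

Rasterizing into (u, theta) space and mapping back to the display therefore spends more samples near the gaze point, mirroring how the cortex allocates its area.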


Patent pending.





Sebastian Friston, Tobias Ritschel, Anthony Steed
Perceptual Rasterization for Head-mounted Display Image Synthesis
ACM Trans. Graph. (Proc. SIGGRAPH 2019)

@article{friston2019perceptual,
	author	= {Sebastian Friston and Tobias Ritschel and Anthony Steed},
	title	= {Perceptual Rasterization for Head-mounted Display Image Synthesis},
	journal	= {ACM Trans. Graph. (Proc. SIGGRAPH 2019)},
	year	= {2019},
	volume	= {38},
	number	= {4},
}

We thank Yuchen Zhang and Linas Beresna for inspiring thesis work around extensions of the topic; David Swapp for experiments and lab support; UCLB for outreach and patent support; Karol Myszkowski, Thomas Leimkühler and Rhaleb Zayer for discussions; Lucy Tallentire for an earlier voice-over; and the reviewers for insightful suggestions to improve the work.