Understanding touch responsiveness – Touchscreen technology series 2

This is the second article in our touchscreen technology series. In our previous touch article, we explained the components of a touchscreen system and how these parts work to translate a touch input to graphical user feedback. In this article, we’ll continue with the topic of touch responsiveness, and explain the input lag that is experienced when using touch. Read on for more details.

What is touch responsiveness?
Touch responsiveness is the time it takes for the user to get feedback from the device as a consequence of an input. In our particular case, this translates into touch events from the touch hardware and frame updates on the display. Concretely, touch responsiveness is the time from when the user presses the touch panel, through the application updating its UI, until the change is visible on the display.

Since many parts of the device contribute to touch responsiveness, a more fitting name for it is “system latency.” It is the latency of the whole system that the user experiences, not just the touch-related parts. Touch hardware is often blamed for system latency, but in reality it plays only a small part.

Why is touch responsiveness important?
Touch responsiveness matters to the end user – for example when gaming, swiping in lists or navigating the launcher application. Even if users don’t notice (or don’t look for) system latency, the user experience is better with less of it, which is why it’s so important to improve the responsiveness of devices. The less system latency there is, the snappier and faster the device will feel.

We think of three different latencies that are important to the touch experience – tap latency, initial move latency, and move latency. Tap latency is the time from a “touch up” or “touch down” event – when the user lifts (or presses) a finger on the touch panel – until something happens on the display as a consequence of the event. Initial move latency is the time from the first “move” event until something happens on the display as a consequence of it. Finally, move latency is the same as initial move latency, but measured later during a swiping gesture.

Touch ecosystem timing breakdown
All of the steps in the touch event handling take time, and all contribute to the overall system latency. Some parts take longer than others, and the latency in the touch ecosystem also varies with the active application and with timing. In the following sections, we’ll go through each part of the system that contributes to the latency, using a very simple use case. The ecosystem components have been described in our previous touch article.

A very simple use case – a running Android app
A simple use case is when we have an Android application running. Our example app responds to touch input and when a “touch down action” is detected, a white rectangle is drawn in a view in the application. A view in Android exposes an interface to access the graphical layout and drawing surface. The discussion that follows is based on this use case.

System latency overview

Touch ecosystem breakdown with timing details.


In the Android world, the illustration above represents a typical breakdown of the total system latency. The values in the illustration were measured on a Jelly Bean version of Android. How we measure the different parts will be explained in a coming article in the touch series.
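To get a feel for how these parts add up, here is a rough back-of-the-envelope sketch in Python. The per-stage ranges are approximations based on the measurements quoted throughout this article (Jelly Bean era, 60Hz pipeline), and the stage labels are our own – not an official breakdown.

```python
# Back-of-the-envelope system latency budget. Each entry is
# (stage, best-case ms, worst-case ms), using rough figures
# quoted elsewhere in this article.
STAGES = [
    ("touch panel scan + firmware", 0.0, 33.3),       # up to two 60Hz scans
    ("host touch driver", 2.0, 3.0),
    ("event management + Choreographer wait", 1.0, 16.7),
    ("application logic + drawing", 1.0, 16.7),       # assuming the app makes its frame
    ("graphics composition (SurfaceFlinger)", 10.0, 40.0),
    ("host display driver", 1.0, 3.0),
    ("display refresh", 8.0, 16.7),
]

def latency_budget(stages):
    """Sum the best- and worst-case contributions of each stage."""
    best = sum(lo for _, lo, _ in stages)
    worst = sum(hi for _, _, hi in stages)
    return best, worst

best, worst = latency_budget(STAGES)
print(f"best case ~{best:.0f}ms, worst case ~{worst:.0f}ms")
```

Even in the best case the total lands well above any single stage, which is why blaming the touch hardware alone misses the point.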

Touch Panel
To detect pointers (normally fingers), the touch integrated circuit (IC) scans the channels connected to the sensor. This allows the IC to generate events at about 60Hz to 120Hz, which is the typical report rate in mobile devices today.

Sometimes, when the signal is very noisy, the touch IC might need to rescan the panel if the position of the pointer could not be determined. This reduces the report rate, although it should never happen on a well-tuned system. This behavior is common to all capacitive touch panels in mobile devices.

A touch IC capable of 60Hz generates an event every 16.67ms (1/60s). If a press occurs just after the previous sensor scan, the delay can approach two entire scans, adding a maximum of about 34ms to the latency.

The maximum latency from the panel is exemplified by the following sequence of events:

  1. A scan sweeps from the top of the panel to the bottom.
  2. The user taps the top of the panel just after the scan has passed it. The touch IC needs to scan the rest of the panel, which introduces a latency between 0 and 16.67ms, including firmware processing. The pointer has not yet been detected.
  3. A new scan starts, and the pointer is detected during this scan. Including firmware processing, the latency will be 16.67ms.
  4. The scan completes – the firmware processes the data and sends it to the host system.

The time it takes for the firmware to scan depends on the number of channels on the touch circuit and on the number of active filter algorithms. The touch IC filters address problems such as noise, linearity and jitter, and are activated depending on the signal level that can be achieved from the hardware. Also, enabling multi-touch increases firmware processing time, due to the increased amount of data. If the touch IC CPU is not powerful enough, the report rate may decrease.

If we instead have a touch IC capable of running at 120Hz, the scan time is 8.33ms, meaning we can almost cut the touch panel latency in half. The worst case would be 16.67ms (2 × 8.33ms).
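The worst-case panel latency for a given report rate follows directly from the scan period – a sketch, assuming the two-scan worst case described above:

```python
def worst_case_scan_latency_ms(report_rate_hz):
    """Worst case: the finger lands just after a scan has passed it,
    so the press is not reported until the end of the *next* scan --
    close to two full scan periods."""
    scan_period_ms = 1000.0 / report_rate_hz
    return 2 * scan_period_ms

print(worst_case_scan_latency_ms(60))   # ~33.3ms
print(worst_case_scan_latency_ms(120))  # ~16.7ms
```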

Touch panel sleep mode
When the touch panel has not been used for some time, it usually drops into a sleep mode to save power. Sleep mode adds more time before the touch panel can send a response to the Android system. The extra time is caused by the touch IC reducing the scan frequency of the touch panel to about 5Hz to 20Hz, depending on what the manufacturer decided in the design phase. This adds an additional 50ms to 200ms to the tap latency and initial swipe latency. Sometimes people mistakenly blame this long wake-up latency on a bad touch solution, when it actually is a conscious decision to reduce current consumption.
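Under the same reasoning, the extra wake-up latency corresponds roughly to one sleep-scan period – a sketch, assuming the 5Hz to 20Hz sleep rates mentioned above:

```python
def sleep_mode_extra_latency_ms(sleep_scan_rate_hz):
    """In sleep mode the panel is scanned at a reduced rate; in the
    worst case the touch lands right after a sleep scan, so up to one
    full sleep-scan period passes before the panel can wake up."""
    return 1000.0 / sleep_scan_rate_hz

print(sleep_mode_extra_latency_ms(20))  # 50ms added
print(sleep_mode_extra_latency_ms(5))   # 200ms added
```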

Normal versus reduced scan mode
Normal and reduced scanning are two different modes that one could use when getting data from a touch panel. Normal mode scans the touch panel sensor at the speed that was decided in the design. So if a touch sensor engineer specified an optimal scan rate of 60Hz, the touch panel sensor scans at 60Hz all the time, as long as a finger or other conductive object is present. In contrast, reduced mode decreases the scan rate when a finger is present but not moving. When a user touch is detected, the report rate increases to 60Hz in this example, and stays there as long as the user moves the finger. When the finger stops, the report rate drops until it starts to move again.

While setting the touch IC to scan in reduced mode saves power, one drawback is longer delays if the user pauses with the finger on the touch panel and then starts generating events again, such as moving the finger.

Host Touch Driver latency
Host touch driver latency comes from acting on the interrupts generated by the bus, reading the data, assembling the data into operating system specific touch events and publishing the data on the operating system kernel queue. 50 bytes is a reasonable payload size to use when calculating latency: 50 bytes (400 bits) on a 400kHz I2C bus takes 1ms for the host touch driver to read (400 bits / 400,000 bits/s = 1ms). Then the data is assembled and processed. On top of this, operating system specific behavior such as context switching comes into play.

All in all, latency for one pointer in the host driver of about 2 to 3ms is reasonable. Having to handle multiple pointers (multi-touch) increases the latency.
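The bus-transfer part of this estimate can be sketched as a small calculation (ignoring I2C start/stop conditions and per-byte ACK bits, as the article's round figure also does):

```python
def i2c_transfer_ms(payload_bytes, bus_hz):
    """Approximate time to clock a payload over I2C at the given bus
    clock, counting only the data bits themselves."""
    bits = payload_bytes * 8
    return 1000.0 * bits / bus_hz

# One 50-byte touch report on a 400kHz bus:
print(i2c_transfer_ms(50, 400_000))  # 1.0ms
```

Doubling the payload for multi-touch doubles this transfer time, which matches the note above that multiple pointers increase driver latency.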

Event and window management
The event management of the high level operating system (or window framework) reads the touch events published by the host driver. In our case, this is done by Android. The event management in Android is not a big contributor to system latency, as it takes only a few milliseconds (see image further down) to process and transport the event to the correct target – which in the Android case would be an Activity, View and/or ViewGroup. Many parts of the event transport in Android are implemented using the Observer pattern, which means the observers are inactive until they are notified that an event has arrived. Another latency factor is that the event passes through several different threads to reach its final destination; each time the event is processed within a thread, that thread needs to be switched in by the operating system scheduler to execute.

Since Android JB (Jelly Bean), a mechanism called the Choreographer has been part of the event management. The Choreographer impacts the latency as described below.

The Choreographer and latency
When the Choreographer receives a vertical synchronization (VSYNC) signal notification, it invokes the event distribution to the event targets. This mechanism gives apps as much time as possible to process and draw their content before the next frame. However, it may introduce additional latency, since the touch event creation is not aligned with the VSYNC. Assuming a 60Hz frequency on the touch panel, this adds up to an additional frame, or 16.67ms of latency, if the timing is bad.

SLOP is a threshold for defining a movement as an actual movement. In practice, the finger needs to move a number of pixels before a “move” event is sent. This increases the perceived initial latency, since any object being dragged lags further behind the finger. The reason for the SLOP threshold is to avoid jitter from the input data. For example, if a move event was sent for a change of one pixel, we’d never be able to tap or press anything, since all touches would be considered moves. One pixel on a 1080p resolution, 5 inch display is very small – about 58µm.
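The 58µm figure can be verified with a quick pixel-pitch calculation (a sketch; `pixel_pitch_um` is our own helper name, not an Android API):

```python
import math

def pixel_pitch_um(width_px, height_px, diagonal_inches):
    """Physical size of one pixel, in micrometres, for a display of
    the given resolution and diagonal."""
    diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
    diagonal_mm = diagonal_inches * 25.4
    return 1000.0 * diagonal_mm / diagonal_px

print(pixel_pitch_um(1080, 1920, 5.0))  # ~58µm per pixel
```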

The time added by the application itself varies greatly, and is spent in two big steps:

  1. Executing some application logic in response to the received touch event.
  2. Drawing and updating the user interface.

For these steps, applications vary from using less than 1ms to, in really catastrophic cases, several hundreds of milliseconds. An important number not to exceed is 1/60s, that is 16.67ms, for two reasons:

  1. If an application spends more than 16.67ms per event on processing and drawing, it misses that frame/VSYNC and has to wait for the next VSYNC, adding another 16.67ms to the latency. The more frames missed, the longer the latency becomes.
  2. Breaking the 16.67ms boundary means that the application may not reach 60 frames per second. The resulting frame drops cause non-smooth graphics rendering, and a very bad user experience. The 60 frames per second requirement comes from the fact that virtually all smartphone displays today have an internal refresh rate of about 60Hz, so this is the maximum number of frames that can be shown to the user. In order not to drop frames, the application frame rate must be equal to or higher than this value.
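The frame-budget arithmetic above can be sketched as follows; `frames_needed` is a hypothetical helper that rounds the application's processing time up to whole VSYNC periods:

```python
import math

FRAME_MS = 1000.0 / 60  # one VSYNC period at 60Hz, ~16.67ms

def frames_needed(processing_ms, frame_ms=FRAME_MS):
    """Number of VSYNC periods consumed before the update can be shown.
    Exceeding the frame budget pushes the update to a later VSYNC."""
    return max(1, math.ceil(processing_ms / frame_ms))

print(frames_needed(10))  # 1 -> fits in one frame
print(frames_needed(20))  # 2 -> one missed VSYNC, +16.67ms latency
print(frames_needed(40))  # 3 -> two missed VSYNCs
```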

Graphics Composition
Graphics composition is the process that combines multiple content surfaces into a single frame. Graphics composition and managing frame buffers take a lot of time in Android, even in a simple use case. Many of the steps in the graphics composition phase are tied to VSYNC, which means that several steps in the flow risk spending time waiting. An example of this can be seen in the system trace below.

Steps within the graphics composition that are tied to VSYNC waits include:

  • Invalidating (telling the system that the frame needs to be updated)
  • Drawing (meaning actual drawing and swapping buffers)
  • Layer composition on MDP (Mobile Display Processor)
  • Kernel posting of frame to display

This means many VSYNC timings must pass, which greatly contributes to latency. We have measured latencies of more than 40ms for the handling in SurfaceFlinger alone. Learn more about how graphics composition works in Android.

Display resolution is another factor that contributes to latency. A lower resolution translates into less processing by the CPU and GPU, and less data passed around and copied through the system. Our measurements indicate at least a 20ms difference between 720p (720×1280 pixels) and 1080p (1080×1920 pixels) resolutions. The difference between qHD (540×960 pixels) and 1080p (1080×1920 pixels) is about 30ms.
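As a rough indication of why resolution matters, we can compare the raw pixel counts involved (a sketch; this ignores everything else in the pipeline, so it only explains part of the measured difference):

```python
# Common mobile resolutions mentioned in this article.
RESOLUTIONS = {
    "qHD":   (540, 960),
    "720p":  (720, 1280),
    "1080p": (1080, 1920),
}

def pixel_ratio(a, b):
    """How much more pixel data resolution `a` carries than `b`."""
    wa, ha = RESOLUTIONS[a]
    wb, hb = RESOLUTIONS[b]
    return (wa * ha) / (wb * hb)

print(pixel_ratio("1080p", "720p"))  # 2.25x the data to process and copy
print(pixel_ratio("1080p", "qHD"))   # 4.0x the data
```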

System tracing event to graphics management
To understand what happens in the system when a touch event is received, we need to do some tracing. Below is an illustration of a system trace of the very simple use case that we referred to earlier in this text.

Illustration of a system trace.


As you can see above, some of the relevant threads are shown from the system trace session. Below, the image is enlarged, with descriptions of what’s happening from the point where the Android framework reads the input event to where a frame is composed by the graphics framework and ready to be sent to the display driver. We have excluded the parts where the touch IC scans the sensor, as well as the final handling of the rendered frame by the display driver and the display itself.

A close-up on some of the relevant threads from our system trace session.


  1. The Android system reads the event and dispatches it to the event management framework. This takes 2 to 3ms. Next, the event is stalled in the Choreographer, and as shown in the illustration, it’s a long time until the next VSYNC comes. Note that the input event creation is not related to VSYNC in itself – it could come at any point during the previous VSYNC period, depending on the report rate. So the latency here can vary by up to a whole VSYNC period.
  2. When the Choreographer receives the VSYNC signal, it dispatches the input event to the destination target, in this case our test application.
  3. CPU0 is currently busy, so it takes a couple of milliseconds before the application starts processing the event. Note that this is actually a quad-core CPU system, but three of the CPUs are currently offline.

    Additional relevant threads from our system trace session.


  4. Now the test application does some basic drawing – a white rectangle on a black background, using the 2D API. When the drawing finishes, the surface is marked as dirty (changed), telling the graphics framework that it should be composed with any other changed surfaces in a coming display frame update.
  5. The graphics framework takes over and composes the changed surface with all other surfaces to complete the final frame. This does not start until a new VSYNC timing is detected, so the test application must be ready before this VSYNC. Otherwise, we get a frame drop.
  6. Next, VSYNC comes and the display driver can further process the finished frame.

Host display driver
The host display driver is the entity between the composed frame – located in the frame buffers and overlays provided by the Android framework – and the display itself. Its role is to provide the data so it can be transferred over the MIPI DSI bus to the display. Our latency measurements indicate that the time spent here is marginal compared to the rest of the system, normally a few milliseconds of processing.

Regarding latency, there are a couple of aspects we need to consider. The first one is the actual time it takes for the physical material to respond. There are several different display technologies, such as LCD with sub-categories like VA, TN and IPS, all based on liquid crystals. Another common display technology is OLED, which is based on a light-emitting compound.

LCD displays vary quite a lot in latency, ranging from 2ms to 100ms depending on the colours used and the LCD type. OLED displays, on the other hand, are very fast, with latency times in the microsecond range.

Another aspect is the internal refresh time of the display. Today, the displays in mobile devices have an internal refresh of about 60Hz, which translates to about 16.67ms to update the entire display.

A close up of a display, which shows the refresh that occurs when rendering a new frame.


The image above illustrates the refresh that happens on the display when rendering a new frame – a color gradient is inverted when the display is touched. In the illustration, the right side of the display is fully inverted but the left side is not, showing the ongoing refresh of the display, which typically takes 16.67ms to complete.

In a RAM-less display, there is no extra latency or wait state, since data must continuously be transferred from the device platform and the display updates directly from this stream. In a RAM-based display, the latest frame is stored in the internal memory of the display, so there is one more buffer to fill before the frame can be displayed. This adds one more frame update, 16.67ms, to the latency.
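A sketch of the display-side latency under these assumptions, where a RAM-based panel adds one internal buffer fill on top of the scan-out:

```python
REFRESH_MS = 1000.0 / 60  # ~16.67ms internal refresh at 60Hz

def display_latency_ms(ram_based, refresh_ms=REFRESH_MS):
    """One refresh period to scan the frame out; a RAM-based display
    buffers the frame internally first, adding one more period."""
    return refresh_ms * (2 if ram_based else 1)

print(display_latency_ms(ram_based=False))  # ~16.7ms
print(display_latency_ms(ram_based=True))   # ~33.3ms
```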


This ends our article about system latency and touch responsiveness. In the next article, we’ll continue with a related touch topic covering resampling.


Comments (11)


  1. By J L



    Regarding Normal versus reduced scan mode – Is it possible for an app to somehow stimulate the touch screen (without asking the user to do so) in order to maintain Normal Mode regardless of the user’s behavior?

  2. By Πολύφημος Σπαζοκλαμπάνιας



    i am a 32 year old physicist that is very frustrated with the current state of user interfaces

    let’s compare my xperia z ultra, what is/was considered sony’s flagship device, to a 20 year old calculator with a 4 MHz cpu and a display driver that eats up 10 % of the cpu time just to refresh the screen. an hp48gx.

    z ultra: there is no user feedback. you press something and you don’t know if it was registered as a press or if the software will do something about it. You have to wait and wonder if you should try again.

    hp48: immediately an hourglass announciator, outside of the main screen area, lights up to show that, yes, the keypress registered and yes, it is doing something.

    z ultra: there is no way to know if the device is busy with something and is not ready to accept input. If you press somewhere while it is busy (without you knowing if it is busy or not) then the press will register at some random time in the future on a UI element that you don’t know. It might register, it might not.

    hp48: when the device is busy, the ‘busy’ announciator is on. If you press a key while the device is busy, it will PREDICTABLY and CERTAINLY do what it should do. If the device is busy rendering a menu and during this you press ‘down’, when the menu has been rendered, the cursor will move one position down, without fail.

    z ultra: you can not press anything anywhere unless you are sure the device is idle. If you press something before the device is idle you can not know even if it is going to be a registered press, let alone know WHAT it is going to be registered.

    hp48: you can buffer keystrokes. without even looking at the screen and before even the first screen gets rendered, i can press [right shift], [MODES], [FLAG], [up], [up], [up], [CHK], [OK], [OK], in quick succession, this is what will happen:

    the calculator will switch to the settings screen (right shift MODES), start rendering it, see that there are buffered keystrokes (FLAG), stop rendering the settings screen, switch to the FLAGs screen, start rendering it, see that there are buffered keystrokes (up, up, up), stop rendering it, reset to the new position and start rendering it, see that there are buffered keystrokes (CHK), switch the selected flag, start rendering, see that there are remaining keystrokes (OK), stop rendering, return to the settings screen, start rendering it, etc… in the end, you changed to the settings screen, then the flags screen, scrolled up (wraps around to the end), switched a flag, accepted the new settings, exited back to the settings screen and then the ‘desktop’. Without ever waiting for ANYTHING to render completely

    notice how the problem of slow rendering is completely solved by clever programming?

    It is fast and dependable. It will ALWAYS go as you pressed, without even looking at anything.

    z ultra: laggy interface. You tap now and you MAYBE get a response, at SOME point in the near future, with VARYING delay. The response might even be what you INTENDED it to be. A slightly hasty swipe might be interpreted as a tap, but you won’t know it until it has started.

    hp48: laggy interface, but the INDICATION that the keystroke REGISTERED is ABSOLUTELY INSTANT. Not just “quick” or “very quick”. INSTANT. The response is CONSISTENT. The same operation takes always the same time. You know you did something wrong based on how long it takes for the device to complete, because you KNOW how long each type of operation takes. It ALWAYS takes the same amount of time to enter a menu.

    z ultra: “hidden meat” navigation. There is no clear, beforehand, indication what parts of the screen are interactive nor what they will do if pressed (or long pressed… or swiped up, or down, or left, or right). Inconsistency in operation. Sometimes the back arrow will return to the previous page in the program, sometimes it will “exit” the program.

    hp48: each softmenu button has a text label. You know what it will do based on the text. There is never any ambiguity on what a button will do. It says “MORE”. not an abstract graphic that has a place only in a logic puzzle. The ON key ALWAYS goes ONE step BACK from ANYWHERE you are to where you were BEFORE.

    the hp48’s interface and design is so fluid and powerful that no matter what you want to do, it’s equally easy. Add two numbers? input the numbers, press add. solve differential equation? enter the equation, press solve. Assemble a piece of source code? enter the code, press assemble. DISassemble a binary? press disassemble. Debug a piece of code? have the code on stack, press debug. Edit a BINARY piece of code? start editing… It will automatically disassemble the piece of code, open the editor and when you close the editor, it reassembles the edited code. Want to know what a function, that you didn’t write, does? put the cursor on it, press two keystrokes, and it automatically disassembles that piece of code and opens up a new instance of the editor. If it’s yours, and you edit it, upon closing this instance, it gets reassembled and it returns to the previous instance.

    in short, your work is terrible. ghastly. awful. animations, swipes, flashes and sounds to mask a badly designed and engineered interface.

    I had this device for 17 years and from the looks of it, i will have it for many years to come. It’s incomparably better designed.

    • By James Barker


      Exactly! I could not have stated it better, myself. Thank you Πολύφημος.

  3. By omar jawhar


    and obviously you have done a bad job with Xperia SP screen.

    • By Joe Padre


      Hi Omar,
      Thanks for the feedback on Xperia SP. If you haven’t already, please do provide feedback to your local Xperia care center regarding your issues with the Xperia SP screen.
      Best regards,
      Joe from Developer World

  4. By joseph carmine nero


    Please consider posting an in-depth analysis of the Xperia Z2’s screen. I would really love to get some cold facts instead of marketing

    • By Joe Padre


      Hi Joseph,
      Thanks for the suggestion. We will check to see if this topic can be covered as part of the Touchscreen series.
      Best regards,
      Joe from Developer World

      • By joseph carmine nero


        @Joe Padre
        Thank you. Will look forward to it

  5. By joseph carmine nero


    Good read
