As explained by Google software engineers Santiago Aboy Solanes and Vladimír Marko, reducing JIT compilation time allows optimizations to take effect more quickly at runtime, directly boosting overall device performance. Faster build times for both JIT and AOT compiled code reduce the workload on devices, leading to improved battery efficiency and thermal behavior, especially on lower-end hardware.
The engineers emphasized the importance of cutting down compilation time without sacrificing the performance of generated code or increasing its memory footprint. They noted that while faster compilers often come at the cost of other trade-offs, this effort was different:
The only resource we were willing to invest was our own development time—dedicating ourselves to deep investigation and research to uncover clever solutions that meet these strict criteria. Let’s take a closer look at how we identified areas for improvement and developed effective fixes.
The team focused on three core initiatives. First, they used tools such as pprof to measure compilation times precisely, establishing performance baselines before and after each change. They selected a representative set of first-party and third-party apps, along with the Android OS itself, to analyze typical workloads and prototype potential improvements.
With this carefully chosen set of APKs, we manually triggered local compilations, collected profiling data, and used pprof to visualize where time was being spent. [...] The pprof tool is extremely powerful, enabling us to slice, filter, and sort data—for instance, identifying which compiler phases or methods consumed the most time.
Leveraging these insights, ART engineers streamlined internal compiler stages by eliminating redundant iterations, applying heuristics, introducing additional caching to avoid expensive computations, deferring calculations to remove unnecessary loops, and simplifying abstractions.
Identifying which optimizations were worth pursuing required careful judgment to minimize wasted effort.
After detecting a high-time-consuming area and investing development effort into improving it, you may sometimes find no viable solution—either because there’s little room for improvement, implementation would take too long, it could regress other metrics, or it might add excessive complexity to the codebase.
To address this, the ART team estimated how much each metric could be improved with minimal effort. This process involved leveraging previously gathered data—or sometimes intuition—to build quick, rough prototypes before finalizing and implementing robust solutions.
Aboy Solanes and Marko also shared a list of implemented optimizations, such as reducing the lookup complexity of FindReferenceInfoOf from O(n) to O(1), passing data structures by reference to prevent unnecessary allocation and deallocation, caching computed values, and numerous other refinements too extensive to detail here.
Some of these performance gains were introduced in the June 2025 Android release, with the remaining optimizations rolled out in subsequent updates by year-end. Additionally, devices running Android 12 and later can receive these enhancements through Mainline module updates.