Research: Implementing the ART Just-In-Time (JIT) Compiler

Android 7.0 adds a just-in-time (JIT) compiler with code profiling to Android runtime (ART) that constantly improves the performance of Android apps as they run. The JIT compiler complements ART’s current ahead-of-time (AOT) compiler and improves runtime performance, saves storage space, and speeds app updates and system updates.

The JIT compiler also improves upon the AOT compiler by avoiding system slowdown during automatic application updates or recompilation of applications during OTAs. This feature should require minimal device integration on the part of manufacturers.

JIT and AOT use the same compiler with an almost identical set of optimizations. The generated code is not necessarily the same, though: JIT makes use of runtime type information and can therefore do better inlining, and with JIT we sometimes do OSR (on-stack replacement) compilation, which again generates slightly different code.
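As an illustration (my own example, not code from ART): if the runtime profile shows that a virtual call site only ever sees one receiver type, the JIT can devirtualize and inline the call, which an AOT compiler without profile data cannot safely do.

// Illustrative example (not ART code): why runtime type info helps the JIT.
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override public double area() { return Math.PI * r * r; }
}

class InliningExample {
    // If profiling shows every element is a Circle, the JIT can devirtualize
    // the s.area() call and inline the multiplication; an AOT compiler that
    // lacks profile data must keep the virtual dispatch.
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area();
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Circle(2.0) };
        System.out.println(totalArea(shapes)); // 5π ≈ 15.7
    }
}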

Architectural Overview

Figure 1. JIT architecture – how it works

Flow

JIT compilation works in this manner:

1. The user runs the app, which then triggers ART to load the .dex file.
2. If the .oat file (the AOT binary for the .dex file) is available, ART uses it directly. Note that .oat files are generated regularly; however, that does not imply they contain compiled code (an AOT binary).
3. If no .oat file is available, ART runs the .dex file through either the JIT or the interpreter. ART will always use .oat files when they are available.
4. Otherwise, ART uses the APK and extracts the .dex files in memory, incurring a large memory overhead (equal to the size of the dex files). JIT is enabled for any application that is not compiled with the "speed" compilation filter (which says: compile as much of the app as you can).
5. The JIT profile data is dumped to a file in a system directory that only the application has access to.
6. The AOT compilation (dex2oat) daemon parses that file to drive its compilation. The resulting order of preference is sketched below.
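To make that order of preference concrete, here is a minimal, self-contained sketch; all names are hypothetical and do not mirror the real ART sources.

// Hypothetical sketch of the .dex execution decision; not the real ART code.
final class ExecutionModeSketch {

    enum Mode { AOT, JIT, INTERPRETER }

    // An .oat file can exist without containing compiled code (see step 2).
    static final class OatFile {
        final boolean hasCompiledCode;
        OatFile(boolean hasCompiledCode) { this.hasCompiledCode = hasCompiledCode; }
    }

    static Mode chooseMode(OatFile oat, boolean speedCompiled) {
        if (oat != null && oat.hasCompiledCode) {
            return Mode.AOT;         // step 2: use the AOT binary directly
        }
        if (!speedCompiled) {
            return Mode.JIT;         // JIT is enabled unless the "speed" filter was used
        }
        return Mode.INTERPRETER;     // otherwise fall back to interpreting the .dex
    }

    public static void main(String[] args) {
        System.out.println(chooseMode(new OatFile(true), false));  // AOT
        System.out.println(chooseMode(new OatFile(false), false)); // JIT
        System.out.println(chooseMode(null, true));                // INTERPRETER
    }
}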

Profile-guided compilation

Figure 2. Profile-guided compilation

Note that there is no guarantee the snapshot taken while the application is in the background will contain the complete data (i.e., everything that was JITed); there is no attempt to record everything, as that would impact runtime performance.
Methods can be in three different states:

- interpreted (dex code)
- JIT compiled
- AOT compiled

If both JIT and AOT code exist (e.g., due to repeated deoptimizations), the JITed code is preferred.
The memory requirement to run JIT without impacting foreground app performance depends upon the app in question. Large apps will require more memory than small apps. In general, big apps stabilize around 4 MB.
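As a minimal sketch of that per-method preference (hypothetical names again, not the real ART runtime):

// Hypothetical sketch of per-method code-state resolution; illustrative only.
final class MethodStateSketch {

    enum MethodState { INTERPRETED, JIT_COMPILED, AOT_COMPILED }

    // JITed code wins over AOT code when both exist (e.g. after a deoptimization
    // sent a method back to the interpreter and the JIT recompiled it).
    static MethodState resolve(boolean hasJitCode, boolean hasAotCode) {
        if (hasJitCode) return MethodState.JIT_COMPILED;
        if (hasAotCode) return MethodState.AOT_COMPILED;
        return MethodState.INTERPRETED;
    }

    public static void main(String[] args) {
        System.out.println(resolve(true, true));   // JIT_COMPILED
        System.out.println(resolve(false, true));  // AOT_COMPILED
        System.out.println(resolve(false, false)); // INTERPRETED
    }
}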

My own experience

I've started creating my own platform for couriers, and it already has some functionality:

1) finds the closest underground stations
2) shows the distance between your destination and the nearest underground stations

How I did it: Code

// Parse the Yandex Geocoder response; a station's "pos" field holds "longitude latitude".
JSONObject jsonObject = new JSONObject(result.toString());
JSONArray jsonArray = jsonObject.getJSONObject("response")
        .getJSONObject("GeoObjectCollection")
        .getJSONArray("featureMember");

// The destination coordinates come from my geoCode helper in the same "lon lat" format.
String[] longitudeAndLatitudeOfDestination = geoCode.getLocation().split(" ", 2);

// Coordinates of the two nearest stations (the first two featureMember entries).
String[] longitudeAndLatitudeOfUndergroundStation1 = jsonArray.getJSONObject(0)
        .getJSONObject("GeoObject").getJSONObject("Point").getString("pos").split(" ", 2);
String[] longitudeAndLatitudeOfUndergroundStation2 = jsonArray.getJSONObject(1)
        .getJSONObject("GeoObject").getJSONObject("Point").getString("pos").split(" ", 2);

// Build "name(distance км)" strings for the two nearest stations.
nearestUndergroundStation = jsonArray.getJSONObject(0).getJSONObject("GeoObject").getString("name")
        + "(" + calculateDistanceFromTheUndergroundStation(
                Double.parseDouble(longitudeAndLatitudeOfDestination[0]),
                Double.parseDouble(longitudeAndLatitudeOfDestination[1]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation1[0]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation1[1])) + "км), ";
// Strip the Russian prefixes "станция " ("station") and "метро " ("metro") from the name.
nearestUndergroundStation = nearestUndergroundStation.replace("станция ", "").replace("метро ", "");

nearestUndergroundStation2 = jsonArray.getJSONObject(1).getJSONObject("GeoObject").getString("name")
        + "(" + calculateDistanceFromTheUndergroundStation(
                Double.parseDouble(longitudeAndLatitudeOfDestination[0]),
                Double.parseDouble(longitudeAndLatitudeOfDestination[1]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation2[0]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation2[1])) + "км)";
nearestUndergroundStation2 = nearestUndergroundStation2.replace("станция ", "").replace("метро ", "");

// The "description" field carries the metro line name for each station.
String nameOfLine = jsonArray.getJSONObject(0).getJSONObject("GeoObject").getString("description");
String nameOfLine2 = jsonArray.getJSONObject(1).getJSONObject("GeoObject").getString("description");
Log.d("KSI", nameOfLine);


I also used the spherical law of cosines to calculate the distance between two points on Earth from their latitudes and longitudes, and then converted the result to kilometres.
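With R ≈ 6371 km as the Earth's mean radius, φ for latitude, and λ for longitude, the formula is:

d = R · arccos( sin φ₁ · sin φ₂ + cos φ₁ · cos φ₂ · cos(λ₁ − λ₂) )

In code: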

// Spherical law of cosines: the central angle between two points on a sphere,
// scaled by the Earth's mean radius to get kilometres.
public String calculateDistanceFromTheUndergroundStation(double longitude1, double latitude1,
                                                         double longitude2, double latitude2) {
    // Math.acos returns the central angle in radians.
    double distance = Math.acos(Math.sin(Math.toRadians(latitude1)) * Math.sin(Math.toRadians(latitude2))
            + Math.cos(Math.toRadians(latitude1)) * Math.cos(Math.toRadians(latitude2))
            * Math.cos(Math.toRadians(longitude1) - Math.toRadians(longitude2)));
    distance = distance * 6371; // radians × Earth's radius (6371 km) = distance in km
    return String.format("%.1f", distance);
}
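As a quick sanity check, here is a hypothetical standalone demo; the class name and the coordinates are placeholders I made up, not values from the API:

// Hypothetical demo: assumes the method above is pasted into this same class.
public class DistanceDemo {
    public static void main(String[] args) {
        DistanceDemo demo = new DistanceDemo();
        // Yandex Geocoder's "pos" field is "longitude latitude", hence the (lon, lat) order.
        String km = demo.calculateDistanceFromTheUndergroundStation(
                37.6176, 55.7558,   // sample destination near the centre of Moscow
                37.6000, 55.7500);  // sample station coordinates
        System.out.println(km + "км"); // prints roughly "1.3км"
    }
    // calculateDistanceFromTheUndergroundStation(...) from above goes here.
}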


Outputs



Android science

To tackle the problem of appearance and behavior, two approaches are necessary: one from robotics and the other from cognitive science. The approach from robotics tries to build very humanlike robots based on knowledge from cognitive science. The approach from cognitive science uses the robot for verifying hypotheses about understanding humans. We call this cross-interdisciplinary framework android science.



Previous robotics research also used knowledge from cognitive science, while research in cognitive science utilized robots. However, the contribution from robotics to cognitive science was limited: robot-like robots were not sufficient as tools of cognitive science, because appearance and behavior cannot be handled separately. We expect this problem to be solved by using an android whose appearance is identical to a human's. Robotics research drawing hints from cognitive science has a similar problem, as it is difficult to recognize clearly whether the hints apply to robot behaviors in isolation from appearance or to robots that have both the appearance and the behavior.

In the framework of android science, androids enable us to directly exchange knowledge between the development of androids in engineering and the understanding of humans in cognitive science. This conceptual work discusses android science from the viewpoints of both robotics and cognitive science.

Very humanlike appearance

The main difference between robot-like robots and androids is appearance. The appearance of an android is realized by making a copy of an existing person.

The silicone skin is 5 mm thick in our trial manufacture. The mechanical parts, motors, and sensors are covered with polyurethane and the silicone skin. Figure 3 shows the silicone skin, the inner mechanisms, the head part, and the finished product of a child android, made by painting colors onto the silicone skin. As the figure shows, the details are recreated so well that they cannot be distinguished from photographs of the real child.



Mechanisms for humanlike movements and reactions

Very humanlike movement is another important factor in developing androids. To realize humanlike movement, we developed an adult android, because the child android is too small. Figure 4 shows this android. It has 42 air actuators for the upper torso, excluding the fingers. We determined the positions of the actuators by analyzing the movements of a real human with a precise 3D motion tracker. The actuators can represent the unconscious movements of the chest from breathing, in addition to conscious large movements of the head and arms. Furthermore, the android has a function for generating facial expressions, which are important for interactions with humans. Figure 5 shows several examples of facial expressions. For this purpose, the android uses 13 of the 42 actuators.
The air actuators have several merits. First, they are very silent, much like a human; DC servomotors, which require several reduction gears, make un-humanlike noise. Second, the android's reaction to an external force becomes very natural with the air damper, whereas DC servomotors with reduction gears would need sophisticated compliance control to achieve the same effect. This is also important for realizing safe interactions with the android.