
Ivan Makachka

Research: Implementing ART Just-In-Time (JIT) Compiler

Android 7.0 adds a just-in-time (JIT) compiler with code profiling to Android runtime (ART) that constantly improves the performance of Android apps as they run. The JIT compiler complements ART’s current ahead-of-time (AOT) compiler and improves runtime performance, saves storage space, and speeds app updates and system updates.

The JIT compiler also improves upon the AOT compiler by avoiding system slowdown during automatic application updates or recompilation of applications during OTAs. This feature should require minimal device integration on the part of manufacturers.

JIT and AOT use the same compiler with an almost identical set of optimizations. The generated code is not necessarily the same, however: the JIT makes use of runtime type information and can do better inlining, and it sometimes performs on-stack replacement (OSR) compilation, which again generates slightly different code.

Architectural Overview



Figure 1. JIT architecture – how it works

Flow

JIT compilation works in this manner:

1. The user runs the app, which triggers ART to load the .dex file.
2. If the .oat file (the AOT binary for the .dex file) is available, ART uses it directly. Note that .oat files are generated regularly; however, that does not imply they contain compiled (AOT) code.
3. If no .oat file is available, ART executes the .dex file through either the JIT or the interpreter. ART will always use the .oat files when available.
4. Otherwise, ART uses the APK and extracts it in memory to get to the .dex, incurring a large memory overhead (equal to the size of the dex files); see the sketch after this list. JIT is enabled for any application that is not compiled with the “speed” compilation filter (which says: compile as much of the app as you can).
5. The JIT profile data is dumped to a file in a system directory. Only the application has access to that directory.
6. The AOT compilation (dex2oat) daemon parses that file to drive its compilation.
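
A quick aside on step 4: "extracting the APK in memory" is essentially unzipping it. Below is a minimal, illustrative Java sketch of my own (plain java.util.zip, not ART internals) that pulls classes.dex out of an APK into a byte array, which also makes it obvious why the overhead is roughly the size of the dex files:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

class DexFromApk {
    // Reads classes.dex from an APK entirely into memory (illustration only).
    static byte[] readDex(String apkPath) throws IOException {
        try (ZipFile apk = new ZipFile(apkPath)) {
            ZipEntry entry = apk.getEntry("classes.dex");
            if (entry == null) throw new IOException("no classes.dex in " + apkPath);
            try (InputStream in = apk.getInputStream(entry);
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                }
                return out.toByteArray(); // the whole dex ends up resident in memory
            }
        }
    }
}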



Figure 2. Profile-guided compilation

Note that there is no guarantee the snapshot taken while the application is in the background will contain complete data (i.e. everything that was JITed); there is no attempt to record everything, as that would impact runtime performance.
Methods can be in three different states:
interpreted (dex code)
JIT compiled
AOT compiled
If both JIT and AOT code exist (e.g. due to repeated de-optimizations), the JITed code will be preferred (see the short sketch below).
The memory requirement to run JIT without impacting foreground app performance depends on the app in question. Large apps will require more memory than small apps; in general, big apps stabilize around 4 MB.
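
To make the per-method picture concrete, here is a tiny toy model of my own (not ART source) of the three states and the "JIT wins" preference:

enum MethodState { INTERPRETED, JIT_COMPILED, AOT_COMPILED }

// Toy selection rule: prefer JITed code, then AOT code, otherwise interpret the dex.
static MethodState pickCodeFor(boolean hasJitCode, boolean hasAotCode) {
    if (hasJitCode) return MethodState.JIT_COMPILED;
    if (hasAotCode) return MethodState.AOT_COMPILED;
    return MethodState.INTERPRETED;
}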

My own experience

I've started building my own platform for couriers, and it already has some functionality:

1) it finds the closest underground stations
2) it shows the distance between your destination and the nearest underground stations

How I Did It: Code

// Parse the Yandex geocoder response (JSONObject/JSONArray come from org.json, bundled with Android).
JSONObject jsonObject = new JSONObject(result.toString());
JSONArray jsonArray = jsonObject.getJSONObject("response")
        .getJSONObject("GeoObjectCollection")
        .getJSONArray("featureMember");

// Coordinates come as "longitude latitude" strings, so split(" ", 2) yields [lon, lat].
String[] longitudeAndLatitudeOfDestination = geoCode.getLocation().split(" ", 2);
String[] longitudeAndLatitudeOfUndergroundStation1 = jsonArray.getJSONObject(0).getJSONObject("GeoObject")
        .getJSONObject("Point").getString("pos").split(" ", 2);
String[] longitudeAndLatitudeOfUndergroundStation2 = jsonArray.getJSONObject(1).getJSONObject("GeoObject")
        .getJSONObject("Point").getString("pos").split(" ", 2);

// Nearest station, formatted as "<name>(<distance>км)"; the Russian prefixes
// "станция " (station) and "метро " (metro) are stripped from the name.
nearestUndergroundStation = jsonArray.getJSONObject(0).getJSONObject("GeoObject").getString("name") +
        "(" + calculateDistanceFromTheUndergroundStation(
                Double.parseDouble(longitudeAndLatitudeOfDestination[0]),
                Double.parseDouble(longitudeAndLatitudeOfDestination[1]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation1[0]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation1[1])) + "км), ";
nearestUndergroundStation = nearestUndergroundStation.replace("станция ", "");
nearestUndergroundStation = nearestUndergroundStation.replace("метро ", "");

// Second-nearest station, built the same way.
nearestUndergroundStation2 = jsonArray.getJSONObject(1).getJSONObject("GeoObject").getString("name") +
        "(" + calculateDistanceFromTheUndergroundStation(
                Double.parseDouble(longitudeAndLatitudeOfDestination[0]),
                Double.parseDouble(longitudeAndLatitudeOfDestination[1]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation2[0]),
                Double.parseDouble(longitudeAndLatitudeOfUndergroundStation2[1])) + "км)";
nearestUndergroundStation2 = nearestUndergroundStation2.replace("станция ", "");
nearestUndergroundStation2 = nearestUndergroundStation2.replace("метро ", "");

// The metro line name lives in the "description" field.
String nameOfLine = jsonArray.getJSONObject(0).getJSONObject("GeoObject").getString("description");
String nameOfLine2 = jsonArray.getJSONObject(1).getJSONObject("GeoObject").getString("description");
Log.d("KSI", nameOfLine);
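
One possible clean-up of the snippet above (my own sketch, not code from the app): since both stations are handled identically, the per-station work could live in a helper. It assumes the same org.json classes, the same Yandex geocoder JSON shape, and the calculateDistanceFromTheUndergroundStation() method shown below:

// Hypothetical helper, assuming the JSON layout used above.
private String describeStation(JSONArray featureMember, int index,
                               double destLongitude, double destLatitude) throws JSONException {
    JSONObject geoObject = featureMember.getJSONObject(index).getJSONObject("GeoObject");
    String[] lonLat = geoObject.getJSONObject("Point").getString("pos").split(" ", 2);
    String name = geoObject.getString("name")
            .replace("станция ", "")   // strip the Russian word for "station"
            .replace("метро ", "");    // strip the Russian word for "metro"
    return name + "(" + calculateDistanceFromTheUndergroundStation(
            destLongitude, destLatitude,
            Double.parseDouble(lonLat[0]), Double.parseDouble(lonLat[1])) + "км)";
}

With that, nearestUndergroundStation and nearestUndergroundStation2 each reduce to a single describeStation() call with index 0 and 1 respectively.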


I also used the spherical law of cosines to calculate the great-circle distance between two points on Earth from their latitude and longitude, and converted the result to kilometres:

// Great-circle distance via the spherical law of cosines; 6371 km is the mean Earth radius.
public String calculateDistanceFromTheUndergroundStation(double longitude1, double latitude1,
                                                         double longitude2, double latitude2) {
    double distance = Math.acos(Math.sin(Math.toRadians(latitude1)) * Math.sin(Math.toRadians(latitude2))
            + Math.cos(Math.toRadians(latitude1)) * Math.cos(Math.toRadians(latitude2))
            * Math.cos(Math.toRadians(longitude1) - Math.toRadians(longitude2)));
    distance = distance * 6371; // acos() gives the central angle in radians; multiply by the radius to get km
    return String.format("%.1f", distance); // one decimal place, in the device's default locale
}
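
A quick sanity check of the formula (the coordinates below are made up by me: two points in central Moscow a few hundred metres apart):

String km = calculateDistanceFromTheUndergroundStation(37.6208, 55.7539, 37.6156, 55.7573);
// km comes out around "0.5"; note that String.format uses the default locale,
// so on a Russian-locale device the decimal separator will be a comma ("0,5").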


Outputs



Science

To tackle the problem of appearance and behavior, two approaches are necessary: one from robotics and the other from cognitive science. The approach from robotics tries to build very humanlike robots based on knowledge from cognitive science. The approach from cognitive science uses the robot for verifying hypotheses for understanding humans. We call this cross-interdisciplinary framework android science.



Previous robotics research also used knowledge from cognitive science, while research in cognitive science utilized robots. However, the contribution from robotics to cognitive science was not sufficient, because robot-like robots were inadequate as tools of cognitive science: appearance and behavior cannot be handled separately. We expect this problem to be solved by using an android that has an appearance identical to a human's. Robotics research utilizing hints from cognitive science has a similar problem, as it is difficult to recognize clearly whether the hints apply to robot behaviors in isolation from their appearance or to robots that have both the appearance and the behavior.

In the framework of android science, androids enable us to directly exchange knowledge between the development of androids in engineering and the understanding of humans in cognitive science. This conceptual paper discusses android science from the viewpoints of both robotics and cognitive science.

Very humanlike appearance

The main difference between robot-like robots and androids is appearance. The appearance of an android is realized by making a copy of an existing person.

The thickness of the silicone skin is 5 mm in our trial manufacture. The mechanical parts, motors and sensors are covered with polyurethane and the silicone skin. Figure 3 shows the silicone skin, the inside mechanisms, the head part and the finished product of a child android made by painting colors on the silicone skin. As shown in the figure, the details are recreated so well that they cannot be distinguished from photographs of the real child.



Mechanisms for humanlike movements and reactions

Very humanlike movement is another important factor in developing androids. To realize humanlike movement, we developed an adult android, because the child android is too small. Figure 4 shows this android. It has 42 air actuators for the upper torso, excluding the fingers. We determined the positions of the actuators by analyzing the movements of a real human with a precise 3D motion tracker. The actuators can represent unconscious movements of the chest from breathing, in addition to conscious large movements of the head and arms. Furthermore, the android has a function for generating facial expressions, which is important for interactions with humans. Figure 5 shows several examples of facial expressions. For this purpose, the android uses 13 of the 42 actuators.

The air actuator has several merits. First, it is very silent, much like a human. DC servomotors, which require several reduction gears, make un-humanlike noise. Second, the reaction of the android to an external force becomes very natural with the air damper. If we used DC servomotors with reduction gears, they would need sophisticated compliance control. This is also important for realizing safe interactions with the android.

Engineering Android's Aboot

Android's boot process starts with the firmware, which loads from ROM. The exact details of the firmware boot vary between devices and specific architectures (e.g. Qualcomm vs. NVIDIA). Nonetheless, they can be generalized in the following figure:

Figure 1: The generalized Android Boot Process (from ACC, Chap. 3)


Thus, the SBL loads aboot, and inspects its header to find the «directions» for loading. You can obtain aboot from a factory image (by using imgtool to break apart the bootloader.img) or directly from the device's aboot partition (assuming a rooted device, of course). You can find the aboot partition number by consulting /proc/emmc (where available), or (in JB and later) /dev/block/platform/platformname/by-name/aboot. Using busybox cp or dd, copy the partition to a file (say, «aboot»). Picking up on the experiment in the book, we have:

Output 1: aboot from a factory image


You can toggle between the devices shown here, but note the format of aboot remains very much the same. The Magic (0x00000005) misleads the file(1) command into thinking this is a Hitachi SH COFF object. The SBL, however, is not fooled, and validates the signature (usually 256 bytes, i.e. 2048-bit RSA, located immediately after HeaderSize + CodeSize) against the certificate chain loaded from the offset (HeaderSize + ImgSize + CodeSize + SigSize). If the signature is valid, the SBL proceeds to load the code immediately following the header (CodeSize bytes) by mapping it at ImgBase.
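
If you want to poke at those header fields yourself, a small sketch of my own like the one below dumps the first few 32-bit little-endian words of the saved aboot file; matching each word to Magic, ImgBase, ImgSize, CodeSize or SigSize is left to you, and the choice of ten words is arbitrary:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

class AbootHeaderDump {
    public static void main(String[] args) throws IOException {
        byte[] image = Files.readAllBytes(Paths.get(args[0]));           // e.g. the «aboot» file copied earlier
        ByteBuffer words = ByteBuffer.wrap(image).order(ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i < 10 && words.remaining() >= 4; i++) {
            System.out.printf("word %2d @ 0x%02x: 0x%08x%n", i, i * 4, words.getInt());
        }
    }
}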

The first word of the aboot binary image is identifiable by its «eaXXXXXX» value. The book alludes to this being an ARM B(ranch) instruction, but stops short of explaining its significance, which is explained next.

ARM Processor Boot, in a Nutshell

ARM processors use an Exception Vector throughout their lifecycle. This vector is a set of seven addresses (eight slots, one of which is reserved), set by the operating system, that instruct the processor what to do when certain traps are encountered. These traps are architecture-defined, so the vector indices are always the same across all operating systems. This is shown in Table 1:
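
Since the word at offset 0 is the Reset entry, the «eaXXXXXX» word at the start of aboot is simply an unconditional branch, typically past the vector area into the real startup code. Below is a small decoder of my own for such a word, assuming the 32-bit ARM encoding:

class ArmBranchDecoder {
    // Returns the branch target if 'word' (fetched from offset 'pc' in the image)
    // encodes an unconditional ARM B instruction, or -1 if it does not.
    static long decodeB(long word, long pc) {
        if ((word & 0xFF000000L) != 0xEA000000L) {
            return -1;                      // not B with the AL (always) condition
        }
        long imm24 = word & 0x00FFFFFFL;    // 24-bit signed word offset
        long offset = (imm24 << 40) >> 38;  // sign-extend, then multiply by 4
        return pc + 8 + offset;             // the ARM PC reads two instructions ahead
    }
    // Example: decodeB(0xEA000006L, 0) == 0x20, i.e. a branch over a 32-byte vector area.
}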

Table 1: The ARM Exception Vector

Offset   Exception
0x00     Reset
0x04     Undefined Instruction
0x08     Supervisor Call (SWI/SVC)
0x0C     Prefetch Abort
0x10     Data Abort
0x14     (Reserved)
0x18     IRQ
0x1C     FIQ

Disassembling Aboot

As discussed in the book, the standard aboot is derived from the «LittleKernel» project, hosted by CodeAurora. This project (often referred to as «LK») is partially open source, in that some of its components can be found there (a simple way to get the sources is via git clone git://codeaurora.org/kernel/lk.git). You'll see something similar to this:



This should get you started on your own explorations of Android's Boot Loader. The platform support stuff isn't that interesting (USB being somewhat of an exception, due to its usage as a potential attack vector). Apps, in particular, are obviously important. Of special interest is Odin, to which I hope to devote another article at some point.

Full-circle viewing: 360-degree electronic holographic display

Physical Model of this display


Schematic of the design of 360-degree tabletop electronic holographic display, the design concept of which allows several persons to enjoy the hologram contents simultaneously.
Credit: Yongjun Lim, of the 5G Giga Communication Research Laboratory, Electronics and Telecommunications Research Institute, South Korea
Princess Leia, your Star Wars hologram moment may be redeemed.

Method of Engineering Solution



3D-Modelling



The inviting but grainy special-effects hologram from the original 'Star Wars' movie might soon become a true full-color, full-size holographic image, thanks to advances by a South Korean research team refining 3-D holographic displays.

The team described a novel tabletop display system that allows multiple viewers to simultaneously view a hologram showing a full 3-D image as they walk around the tabletop, giving complete 360-degree access. The paper was published this week in the journal Optics Express, from The Optical Society (OSA).

To be commercially feasible in a range of applications — from medicine to gaming to media — the hologram challenge is daunting. It involves scaling an electronic device to a size small enough to fit on a table top, while making it robust enough to render immense amounts of data needed to create a full-surround 3-D viewing experience from every angle — without the need for special glasses or other viewing aids.

«In the past, researchers interested in holographic display systems proposed or focused on methods for overcoming limitations in the combined spatial resolution and speed of commercially available, spatial light modulators. Representative techniques included space-division multiplexing (SDM), time-division multiplexing (TDM) and combination of those two techniques,» explained Yongjun Lim, of the 5G Giga Communication Research Laboratory, Electronics and Telecommunications Research Institute, South Korea. Lim and his team took a different approach. They devised and added a novel viewing window design.

To implement such a viewing window design, close attention had to be paid to the optical image system. «With a tabletop display, a viewing window can be created by using a magnified virtual hologram, but the plane of the image is tilted with respect to the rotational axis and is projected using two parabolic mirrors,» Lim explained. «But because the parabolic mirrors do not have an optically flat surface, visual distortion can result. We needed to solve the visual distortion by designing an aspheric lens.»

Lim further noted, «As a result, multiple viewers are able to observe 3.2-inch size holograms from any position around the table without visual distortion.»

Building on these advances, Lim's team hopes to implement a key design feature of strategically sizing the viewing window so it is closely related to the effective pixel size of the rotating image of the virtual hologram. Watching through this window, observers' eyes are positioned to accept the holographic image light field because the system tilts the virtual hologram plane relative to the rotational axis. To enhance the viewing experience the team hopes to design a system in which observers can see 3.2-inch holographic 3-D images floating on the surface of the parabolic mirror system at a rate of 20 frames per second.

Test results of the system using a 3-D model and computer-generated holograms were promising — though right now still in a monochrome green color. Next, the team wants to produce a full-color experience and resolve issues related to undesirable aberration and brightness mismatch among the four digital micromirror devices used in the display.

«We are developing another version of our system to solve those issues and expect to have the next model in the near future, including enhancement of the color expression,» said Lim. «Many people expect that high quality holograms will entertain them in the near future because visualizations are increasingly sophisticated and highly imaginative due to the use of computer-aided graphics and recently-developed digital devices that provide augmented or virtual reality.»

And the Princess Leia hologram? That old miniature was a motivating experience of their work, Lim explained.

Story Source:

Materials provided by The Optical Society.

Journal Reference:

Yongjun Lim, Keehoon Hong, Hwi Kim, Hyun-Eui Kim, Eun-Young Chang, Soohyun Lee, Taeone Kim, Jeho Nam, Hyon-Gon Choo, Jinwoong Kim, Joonku Hahn. 360-degree tabletop electronic holographic display. Optics Express, 2016; 24 (22): 24999 DOI: 10.1364/OE.24.024999